What are incentive design patterns?
An incentive design pattern is a configuration of attractors that directly or indirectly induces desired behaviors. Unlike attractor patterns, which attract human attention, incentive design patterns communicate signals that can motivate and coordinate the behavior of human agents as well as non-human agents in a multi-agent system. These incentive design patterns make stigmergic optimization possible in such multi-agent systems.
A quote from Wash and MacKie-Mason:
Humans are “smart components” in a system, but cannot be directly programmed to perform; rather, their autonomy must be respected as a design constraint and incentives provided to induce desired behavior. Sometimes these incentives are properly aligned, and the humans don’t represent a vulnerability. But often, a misalignment of incentives causes a weakness in the system that can be exploited by clever attackers. Incentive-centered design tools help us understand these problems, and provide design principles to alleviate them.
As an example, while the attractor token might be a cryptocurrency, the incentive design pattern affects autonomous agents and humans alike. Both can be incentivized by the configuration of incentives.
What is stigmergy?
Stigmergy is a process of coordination used by bees, ants, termites, and even human beings. Ants, for instance, use pheromones to lay a trace, a sort of breadcrumb trail that other ants can follow to reach food.
Humans can also utilize stigmergy in similar ways. Human beings can use virtual pheromones to lay a digital trace for the rest of the swarm. Just as with the ants, these virtual pheromones act as a breadcrumb trail. In this sense, the virtual pheromones act like attractors.
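The mechanics of a virtual pheromone trail can be sketched in a few lines of code. This is a minimal illustration, not an implementation from the text: the path names, the deposit amounts, and the evaporation rate are all hypothetical choices made for the example.

```python
import random

# A shared "environment" of virtual pheromones: each key is a candidate
# path (hypothetical names) and the value is its current trace strength.
pheromones = {"path_a": 0.0, "path_b": 0.0, "path_c": 0.0}

EVAPORATION = 0.1  # fraction of each trace that fades every step (assumed)

def deposit(path, amount=1.0):
    """An agent marks a path it found useful, strengthening the trace."""
    pheromones[path] += amount

def evaporate():
    """Old traces decay, so stale information gradually loses influence."""
    for path in pheromones:
        pheromones[path] *= (1 - EVAPORATION)

def choose_path():
    """Agents follow stronger traces with higher probability."""
    # The small constant keeps unexplored paths reachable (exploration).
    weights = [pheromones[p] + 0.1 for p in pheromones]
    return random.choices(list(pheromones), weights=weights)[0]
```

Agents never communicate directly here; they coordinate only through the shared `pheromones` environment, which is the defining feature of stigmergy.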
What is stigmergic optimization?
Stigmergic optimization is how ants find the best route to food, using pheromones to leave traces for their peers. At first the trace patterns appear random because the ants try many different routes to reach their goal. Optimization takes place as the most efficient path is found and reinforced, and the pheromone traces allow the ant swarm to learn.
In the context of a multi-agent system, agents focused on acquiring attractor tokens would not at first know the best path to take. All paths would be tried in the beginning as agents follow the trail of attractor tokens to the destination. Over time the agents would find the most efficient path to the destination, and an order would emerge as a result of stigmergic optimization, allowing the swarm to solve complex problems.
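The convergence described above can be simulated with a toy ant-colony-style loop. This is a sketch under assumed parameters: the two routes, their step costs, the colony size, and the evaporation rate are all illustrative, not taken from the text. Shorter routes are traversed faster, so they accumulate pheromone more quickly, and positive feedback concentrates the swarm on the efficient path.

```python
import random

random.seed(42)  # deterministic run for the example

# Two candidate routes to the goal (hypothetical costs in steps).
ROUTES = {"short": 1, "long": 3}
pheromone = {"short": 1.0, "long": 1.0}  # start with no preference

def run_colony(ants=50, rounds=30, evaporation=0.2):
    for _ in range(rounds):
        deposits = {r: 0.0 for r in ROUTES}
        for _ in range(ants):
            # Each agent picks a route with probability proportional
            # to the current pheromone strength.
            route = random.choices(
                list(ROUTES), weights=[pheromone[r] for r in ROUTES]
            )[0]
            # Cheaper routes yield more pheromone per unit of effort.
            deposits[route] += 1.0 / ROUTES[route]
        for r in ROUTES:
            pheromone[r] = (1 - evaporation) * pheromone[r] + deposits[r]

run_colony()
# After many rounds the short route dominates the pheromone field,
# even though no individual agent ever compared the two routes directly.
```

The order emerges from local deposits plus evaporation alone, which is the sense in which stigmergic optimization lets the swarm solve a problem no single agent solves on its own.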