All posts by darklight

Incentive design patterns and stigmergic optimization

What are incentive design patterns?

An incentive design pattern is a configuration of attractors which directly or indirectly induces desired behaviors. Unlike attractor patterns, which attract human attention, incentive design patterns communicate signals that can motivate and coordinate the behavior of human agents as well as non-human agents in a multi-agent system. These incentive design patterns are what make stigmergic optimization possible in such multi-agent systems.

A quote from Wash and MacKie-Mason:

Humans are “smart components” in a system, but cannot be directly programmed to perform; rather, their autonomy must be respected as a design constraint and incentives provided to induce desired behavior. Sometimes these incentives are properly aligned, and the humans don’t represent a vulnerability. But often, a misalignment of incentives causes a weakness in the system that can be exploited by clever attackers. Incentive-centered design tools help us understand these problems, and provide design principles to alleviate them.

As an example, while the attractor token might be a cryptocurrency, the incentive design pattern affects autonomous agents and humans alike. Both can be incentivized by the configuration of incentives.

What is stigmergy?

Stigmergy is a process of coordination used by bees, ants, termites, and even human beings. Ants, for instance, use pheromones to lay a trace, a sort of breadcrumb trail other ants can follow to reach food.

Humans can utilize stigmergy in similar ways. Human beings can use virtual pheromones to lay a digital trace for the rest of the swarm. These virtual pheromones, just like the ants’ pheromones, act as a breadcrumb trail. These virtual pheromones are the attractors.

What is stigmergic optimization?

Stigmergic optimization is how ants find the best route to food, using pheromones to leave traces for their peers. At first the trace patterns appear random because the ants try many different routes to reach their goal. Optimization takes place as the most efficient path is found, and the pheromone traces allow the ant swarm to learn.

In the context of a multi-agent system, agents focused on acquiring attractor tokens would at first not know the best path to take. All paths would be tried in the beginning as agents follow the trail of attractor tokens to the destination. Over time the agents would find the most efficient path to the destination, and an order would emerge as a result of stigmergic optimization, allowing the swarm to solve complex problems.
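The ant analogy above can be sketched as a toy simulation. Everything here is an illustrative assumption (two candidate paths, the deposit and evaporation rates): pheromone is deposited in inverse proportion to path length, agents choose paths in proportion to pheromone, and the swarm converges on the shorter route.

```python
import random

def run_swarm(path_lengths, ants=200, rounds=50, evaporation=0.1, seed=0):
    rng = random.Random(seed)
    pheromone = [1.0] * len(path_lengths)  # start with no information
    for _ in range(rounds):
        for _ in range(ants):
            # choose a path with probability proportional to its pheromone
            total = sum(pheromone)
            r = rng.uniform(0, total)
            cumulative, choice = 0.0, 0
            for i, p in enumerate(pheromone):
                cumulative += p
                if r <= cumulative:
                    choice = i
                    break
            # shorter paths earn a larger deposit (they are completed sooner)
            pheromone[choice] += 1.0 / path_lengths[choice]
        # evaporation lets the swarm forget obsolete trails
        pheromone = [p * (1 - evaporation) for p in pheromone]
    return pheromone

trails = run_swarm([2.0, 5.0])  # path 0 is the shorter route
assert trails[0] > trails[1]    # the swarm converges on the shorter path
```

The positive feedback loop (more pheromone attracts more agents, which deposit more pheromone) is the same mechanism the post describes for attractor tokens.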

References

Deterding, S., Sicart, M., Nacke, L., O’Hara, K., & Dixon, D. (2011, May). Gamification: using game-design elements in non-gaming contexts. In CHI’11 Extended Abstracts on Human Factors in Computing Systems (pp. 2425-2428). ACM.
Dipple, A. C. (2015). Collaboration in Web N. 0: Stigmergy and virtual pheromones.
Heylighen, F. (2015). Stigmergy as a universal coordination mechanism: components, varieties and applications. In Human Stigmergy: Theoretical Developments and New Applications. Springer. Retrieved from http://pespmc1.vub.ac.be/papers/stigmergy-varieties.pdf
Obreiter, P., & Nimis, J. (2005). A taxonomy of incentive patterns (pp. 89-100). Springer Berlin Heidelberg.
Wash, R., & MacKie-Mason, J. K. (2006, July). Incentive-Centered Design for Information Security. In HotSec.

Attractor patterns and attractor tokens

  • A data sequence is a pattern.
  • Patterns are everywhere.
  • Some patterns are more attractive than others.
  • Attractor patterns are attractors of human attention.

What makes a pattern aesthetically pleasing?

Aesthetically pleasing patterns evolve from a process of natural selection. The same process is at work in various forms of genetic algorithms, whether human-based genetic algorithms (HBGAs) or interactive genetic algorithms (IGAs): the aesthetic quality is determined by the selector, which in these examples must be human.

Measuring pattern attractiveness

An obvious way to measure the attractiveness of different candidate patterns is to use the process of selection. For example, in a market-based approach to selection the patterns could be product designs. How do we determine whether the product is a success or a failure? By the popularity of the product, and how often it is used.

  1. Example: If the product pattern is a game then you could track how often the game is played to determine how attractive the game is.
  2. Example: If the product pattern is a song then you could track how often the song is listened to in order to determine how attractive it is.
  3. Example: If the product pattern is a website then you can see how often the website is visited and for how long visitors stay in order to determine how attractive it is.
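The three examples above can be read as a single usage-based score. The sketch below is a hypothetical formula, not an established metric; the session counts, durations, and weighting are all assumed for illustration.

```python
# Hypothetical attractiveness score: usage frequency scaled by engagement time.
def attractiveness(sessions, avg_duration_minutes, weight_duration=0.5):
    """Score a pattern by how often and how long it is used."""
    return sessions * (1 + weight_duration * avg_duration_minutes)

# illustrative usage data for the three product patterns in the examples
candidates = {
    "game":    attractiveness(sessions=500,  avg_duration_minutes=30),
    "song":    attractiveness(sessions=2000, avg_duration_minutes=3),
    "website": attractiveness(sessions=1200, avg_duration_minutes=5),
}

# selection: the most attractive pattern wins in the market
winner = max(candidates, key=candidates.get)
```

Whatever the exact formula, the key point from the post holds: attractiveness is measured by observed selection, not declared in advance.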

Attractor patterns are sticky

For an attractor to be sticky, a person has to not want to stop paying attention to it, and not want to get rid of it, because it encourages psychological attachment to itself. The habits of checking email or Facebook are examples; both product patterns are sticky.

Measuring stickiness of a pattern

A pattern is sticky if people continue to pay attention to it as a habit. This could be because the pattern fulfils some psychological need, or because the pattern meets a critical utility.
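One concrete way to quantify this habitual attention is the common DAU/MAU ratio (daily active users over monthly active users). The user counts below are illustrative assumptions.

```python
def stickiness(daily_active_users, monthly_active_users):
    """Classic DAU/MAU ratio: closer to 1.0 means a daily habit."""
    return daily_active_users / monthly_active_users

# a habit-forming pattern (like email) vs. a novelty that is tried once
email_like  = stickiness(daily_active_users=900, monthly_active_users=1000)
novelty_app = stickiness(daily_active_users=50,  monthly_active_users=1000)

assert email_like > novelty_app  # habitual patterns score higher
```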

A token is only effective as an attractor if a lot of people want it. A lot of people will only want it if it’s exchangeable for something a lot of people want. If it’s exchangeable for something that a lot of people want a whole lot of, then it will be extremely effective as an attractor token, but it is still only an attractor token.

The purpose of attractor tokens is to stimulate stigmergy

The purpose of attractor tokens is to attract the swarm of attention. These attractor tokens and attractor patterns in general facilitate the process of stigmergy. Stigmergy is what actually coordinates and directs the swarm allowing for swarm intelligence to emerge.

References

Chang, J. F., & Shi, P. (2011). Using investment satisfaction capability index based particle swarm optimization to construct a stock portfolio. Information Sciences, 181(14), 2989-2999.

Miller, P. (2007). Swarm theory. National Geographic, 212(1), 1-17.

More on PPBNs now referred to as “personal preference swarms”

How should personal preference bot nets (PPBNs) be reframed?

It has been brought to my attention that the phrase “personal preference bot networks” (PPBNs) may be problematic because it evokes a bad frame in the minds of certain individuals. An alternative phrase which carries the same meaning while maintaining mass appeal would be “personal preference swarms” (PPSs).

You who relays the message may decide the best frames for your audience

So it is at the discretion of those who relay these concepts to choose between “personal preference bot network” and “personal preference swarm” depending on who their audience is. You are also free to “remix”, because “remix-ability is good”. If you can find a better way to express these concepts to your audience, then please do so in your own words, as long as you get them across accurately.

The message should be remixed and the most fit frames selected for each audience

The process of using a human-based genetic algorithm applies here. The data sequence (core concepts and algorithms) must remain consistent and unchanged. The innovators of new frames are whoever understands the core concepts and algorithms well enough to remix and repackage them without losing their meaning. The audience is the selector of effective frames, and depending on who that audience is, different frames and packages may appeal.

Know your audience

To know your audience requires a feedback loop. You test a frame; you let readers like or dislike any word or phrase in your article. If the data supports that the frame is catchy in a good way, then continue to promote the frame. Just like in evolution, the most fit frames emerge from selection, not from top-down design. A good way to generate quality frames might be a prediction market, plus a way to track the success or failure rate of certain frames in certain demographics.
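The feedback loop described above can be sketched as a simple selection experiment. The like-rates assigned to each frame are simulated assumptions standing in for real audience data.

```python
import random

def select_frame(frames, trials=1000, seed=1):
    """Show each frame to the audience and keep the one the data supports."""
    rng = random.Random(seed)
    scores = {name: 0 for name in frames}
    for name, like_rate in frames.items():
        for _ in range(trials):
            if rng.random() < like_rate:  # a reader liked the phrase
                scores[name] += 1
    # evolution by selection: the most fit frame survives
    return max(scores, key=scores.get)

# assumed audience response rates, for illustration only
frames = {
    "personal preference bot network": 0.30,
    "personal preference swarm": 0.60,
}
best = select_frame(frames)
```

In practice the rates would come from measured likes/dislikes per demographic, as the post suggests, rather than being known up front.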

I encourage all who understand the concepts in this blog to remix and share them to the best of their understanding, from their own perspective, for the audiences that follow them. May the best frames thrive.

Personal preference bot networks and Ethereum’s Provenance

What are Personal Preference Bot Networks?

PPBNs are personal software agent networks owned by individuals. PPBNs work by allowing the individual to delegate tasks to their personal swarm of bots, which can trade on their behalf. This can include shopping, for example, where Alice uses intention casting to set forth her bot(s) in accordance with her intention to find the best deals for acquiring a list of items.

These PPBNs in theory should be able to integrate and interact with DApps, DACs, DAOs, virtual states, or even traditional centralized entities.

What is Provenance?

Provenance solves a particular part of this problem by revealing exactly how products are made. If Provenance has an open API which allows for easy integration with bots, then bots could scour the Internet for products which meet the fitness criteria of the swarm. Those bots would then purchase the most fit products and avoid the least fit.
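Assuming such an open API existed and exposed each product's ingredients and attributes (the API shape, field names, and product data here are hypothetical), a bot's fitness filter might look like this:

```python
def meets_fitness(product, banned_ingredients, required_attributes):
    """Reject products with banned ingredients or missing required attributes."""
    if any(i in product["ingredients"] for i in banned_ingredients):
        return False
    return all(product.get(attr) for attr in required_attributes)

# hypothetical catalog returned by a transparency API
catalog = [
    {"name": "A", "ingredients": ["water", "sugar"],    "fair_trade": True},
    {"name": "B", "ingredients": ["water", "palm oil"], "fair_trade": True},
    {"name": "C", "ingredients": ["water"],             "fair_trade": False},
]

# the bot purchases the most fit and avoids the least fit
fit = [p["name"] for p in catalog
       if meets_fitness(p, banned_ingredients={"palm oil"},
                        required_attributes=["fair_trade"])]
```

The point is that the bot applies the check exhaustively to every product, which is exactly what attention-scarce human shoppers cannot do.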

What problem does this solve?

It solves the problems of attention scarcity and adverse selection by utilizing automated transparency. Human shoppers typically are not rational and do not pay attention to details. A human being, for example, may not have the attention or time to read every ingredient of every food product they buy to make sure it doesn’t contain anything unethical or harmful. As a result, many humans eat products containing substances they are unaware of.

Fitness criteria (swarm preferences) and swarm intelligence

When PPBNs (personal preference bot networks) converge, purchase patterns can favor certain “fitness criteria”, which we would call swarm preferences. So the personal preference bots allow for intention casting as well as an automated multi-agent system of supply and demand. Provenance allows for the necessary transparency so that the intelligent swarm can know whether or not what is being supplied meets the “fitness criteria”, a.k.a. swarm preferences.
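One simple way swarm preferences could emerge from individual fitness criteria is majority voting across agents. The criteria names and the majority threshold below are illustrative assumptions, not part of any specified protocol.

```python
from collections import Counter

def swarm_preferences(agent_criteria, threshold=0.5):
    """A swarm preference is any criterion shared by a majority of agents."""
    counts = Counter(c for criteria in agent_criteria for c in criteria)
    n = len(agent_criteria)
    return {c for c, k in counts.items() if k / n > threshold}

# each agent's individual fitness criteria (illustrative)
agents = [
    {"no palm oil", "fair trade"},
    {"no palm oil", "organic"},
    {"no palm oil", "fair trade", "local"},
]
prefs = swarm_preferences(agents)
```

Criteria held by only one agent remain personal preferences; the shared ones become the purchase pattern the supply side sees.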

References

Ethereum London Meetup: Provenance (YouTube)

Provenance | Discover the stories of great products and their makers. (Provenance)
https://www.provenance.org/

Swarm intelligence (Scholarpedia)
http://www.scholarpedia.org/article/Swarm_intelligence

Personal preference bot nets and the quantification of intention

Personal preference bot nets

“Personal preference agents” are software autonomous agents (most commonly known as bots) that act on behalf of the individual. So if you for example tell your “personal preference agents” your needs such as a shopping list, the “personal preference agents” would automatically pursue tasks using AI to fetch whatever is on the list.

In the shopping example this would mean you would not have to spend time shopping and you would not be susceptible to vicious subliminal ads. It would save scarce time and attention for the human being by delegating AI to useful tasks.
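A minimal sketch of such an agent, assuming offers have already been gathered from external networks (the offer data and vendor names are hypothetical):

```python
def fulfill(shopping_list, offers):
    """For each listed item, choose the cheapest matching offer."""
    plan = {}
    for item in shopping_list:
        matches = [o for o in offers if o["item"] == item]
        if matches:
            plan[item] = min(matches, key=lambda o: o["price"])
    return plan

# hypothetical offers the agent fetched on the owner's behalf
offers = [
    {"item": "coffee", "vendor": "X", "price": 9.0},
    {"item": "coffee", "vendor": "Y", "price": 7.5},
    {"item": "rice",   "vendor": "X", "price": 3.0},
]
plan = fulfill(["coffee", "rice", "tea"], offers)
# "tea" has no offer yet, so the agent keeps searching for it
```

A real personal preference agent would weigh much more than price, but the shape is the same: preferences in, a purchasing plan out, with no human attention spent.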

Intent casting 101

Traditional literature would call the “personal preference agents” a conditional preference network or in the generic sense a software agent network. Doc Searls calls it intent casting in his video on the subject. The main idea of Doc Searls’s intent casting is to create an intention economy. Personal preference bots would be a means of bringing the decentralized “intention economy” to virtual citizens.

The “personal preference agent” would be a particular kind of software agent which can accept preferences from the person and, using AI, seek to meet those preferences by connecting to various external networks, blockchains, the web, etc.

The idea is that each virtual citizen should have the capability to utilize personal preference bots / “personal preference agents”. These bots would then interact with multiple blockchains and multiple API or network interfaces so that, for example, the bot owned by Alice could contact the bot owned by Bob over any open protocol in automated fashion to relay an encrypted message, conduct a trade/transaction, or coordinate as a swarm (a sort of collective transaction). This would give each virtual citizen swarm intelligence capability and empower virtual citizens.

References

Searls, D. (2012). The customer as a God. The Wall Street Journal, 1-4.
Searls, D. (2013). Eof: Android for independence. Linux Journal, 2013(227), 9.

Evolutionary methods for problem solving and artificial development

One of the principles I follow for problem solving is that many of the best solutions can be found in nature. The basic axiom that all knowledge is self-knowledge applies to the study of computer science and artificial intelligence.

By studying nature we are studying ourselves and what we learn from nature can give us initial designs for DApps (decentralized applications).

The SAFE Network example

The SAFE Network, for example, follows these principles by utilizing biomimicry (the ant colony algorithm) for its initial design. If the SAFE Network is designed appropriately, then it will have an evolutionary method so that, over time, our participation with it can fine-tune it. There should be both a symbiosis between human and AI and a way to make sure changes are always made according to the preferences of mankind. In essence, the SAFE Network should be able to optimize its design going into the future to meet human-defined “fitness” criteria. How they will achieve this is unknown at this time, but my opinion is that it will require a democratization or collaborative filtering layer. A possible result of the SAFE Network’s evolutionary process could be a sort of artificial neural network.

The Wikipedia example

Wikipedia is an example of an evolving knowledge resource. It uses an evolutionary method (a human-based genetic algorithm) to curate, structure, and maintain human knowledge. Human beings act as both the innovators and the selectors in this process.

One of the main problems with Wikipedia is that it is centralized and does not generate any profits. This may be partially due to the ideal that knowledge should be free to access, which does not factor in that knowledge isn’t free to generate. It also doesn’t factor in that knowledge has to be stored somewhere, and that if Wikipedia is centralized then it can be taken down just as the Library of Alexandria once was. A decentralized Wikipedia could begin its life by mirroring Wikipedia and then use evolutionary methods to create a Wikipedia which does not carry the same risk profile or model.

Benefits of applying the evolutionary methods to Wikipedia style DApps

One of the benefits is that there could be many different DApps competing in a marketplace, so that successful design features create an incentive to continue to innovate. We can think of the market in this instance as the human-based genetic algorithm, where all DApps are candidate solutions to the problem of optimizing knowledge diffusion. The human beings would be the innovators, the selectors, and the initializers. The token system would represent the incentive layer, but also serve as signalling so that humans can give an information signal indicating their preferences to the market.

Wikipedia is not currently based on nature and does not evolve its design to adapt to its environment. Wikipedia “eats” when humans donate money to a centralized foundation which directs the development of Wikipedia. A decentralized evolutionary model would not have a centralized foundation; Wikipedia would instead adapt its survival strategy to its environment. This would mean a Wikipedia following the evolutionary model would seek to profit in competition with other Wikipedias until the best (most fit) adaptation to the environment has evolved. Users would be able to use micropayments to signal, through their participation and usage, which Wikipedia pages are preferred over others, and at the same time pseudo-anonymous academic experts with good reputations could rate the accuracy.

In order for the human-based genetic algorithm to work, and for the collaborative filtering to work, the participants should not know the scores of different pages in real time, because this could bias the results. Participants also do not need to know what different experts scored different pages, because personality cults could skew the results and influence the rating behavior of other experts. Finally, it would have to be global and decentralized so that experts cannot easily coordinate and conspire. These problems would not be easy to solve, but Wikipedia currently has similar problems in centralized form.
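The blind-scoring requirement above can be sketched as follows: ballots accumulate out of public view, and only the aggregate is revealed once a round closes, so no rater sees running scores or other raters' identities. Page names and scores are illustrative.

```python
import statistics

def close_round(hidden_ballots):
    """Reveal aggregate scores only once all ballots are in."""
    return {page: statistics.mean(scores)
            for page, scores in hidden_ballots.items()}

# during the round, ballots accumulate out of public view;
# raters are identified to the system but never to each other
hidden_ballots = {
    "page_a": [4, 5, 4],
    "page_b": [2, 3, 2],
}
results = close_round(hidden_ballots)
```

Keeping the ballots sealed until the round closes is what prevents the real-time score feedback and personality-cult effects the post warns about.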

Artificial development as a design process

Quote from artificial development:
Human designs are often limited by their ability to scale, and adapt to changing needs. Our rigid design processes often constrain the design to solving the immediate problem, with only limited scope for change. Organisms, on the other hand, appear to be able to maintain functionality through all stages of development, despite a vast change in the number of cells from the embryo to a mature individual. It would be advantageous to empower human designs with this on-line adaptability through scaling, whereby a system can change complexity depending on conditions.

The quote above summarizes one of the main differences between an evolutionary design model and a human design model. Human designs have limited adaptability to the environment, because human beings are not good at predicting and accounting for the possible disruptive environmental changes that can take place in the future. Businesses which take on these static, inflexible human designs are easily disrupted by technological changes, because human beings have great difficulty making a design which is “future proof”. It is my own conclusion that Wikipedia in its current design iteration suffers from this, even though it does have a limited evolutionary design. The limitation of Wikipedia is that the foundation is centralized and it’s built on top of a network which isn’t as resilient to political change as it could be. For the designs of DApps to be future proof, they have to utilize evolutionary design models. Additionally, it would be good if DApps were forced to compete against each other for fitness, so that the best evolutionary design models rise to the top of the heap.

References

Clune, J., Beckmann, B. E., Pennock, R. T., & Ofria, C. (2011). HybrID: A hybridization of indirect and direct encodings for evolutionary computation. In Advances in Artificial Life. Darwin Meets von Neumann (pp. 134-141). Springer Berlin Heidelberg.

Cussat-Blanc, S., Bredeche, N., Luga, H., Duthen, Y., & Schoenauer, M. Artificial Gene Regulatory Networks and Spatial Computation: A Case Study.

Doursat, R. (2008). Organically grown architectures: Creating decentralized, autonomous systems by embryomorphic engineering. In Organic computing (pp. 167-199). Springer Berlin Heidelberg.

Harding, S., & Banzhaf, W. (2008). Artificial development. In Oganic Computing (pp. 201-219). Springer Berlin Heidelberg.
Palla, R. S. An approach for self-creating software code in BIONETS with artificial embryogeny.
Ulieru, M., & Doursat, R. (2011). Emergent engineering: a radical paradigm shift. International Journal of Autonomous and Adaptive Communications Systems, 4(1), 39-60.

Decentralized reputation based reward networks and gift economics

What are decentralized reputation based reward networks?

If we look at the reputation system as a human-based genetic algorithm in a multi-agent system, then it becomes possible to use smart contracts to reward agents which meet a threshold score for certain reputation attributes. One such attribute could be how effective an individual agent is at altruism. The agents deemed most effective at altruism would receive the highest effectiveness score in the altruistic reputation based network, and they could then qualify for conditional discounts, conditional rebates, and conditional rewards from corporations and individuals within that reputation based network.
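A threshold rule like the one described might be sketched as plain Python rather than an actual smart contract; the attribute scale, agent names, and threshold are illustrative assumptions.

```python
def qualifying_agents(reputation, attribute, threshold):
    """Agents whose score for the given attribute meets the threshold
    qualify for conditional discounts, rebates, or rewards."""
    return {agent for agent, attrs in reputation.items()
            if attrs.get(attribute, 0) >= threshold}

# illustrative reputation scores tracked by the network
reputation = {
    "alice": {"altruism": 92},
    "bob":   {"altruism": 40},
    "carol": {"altruism": 75},
}
eligible = qualifying_agents(reputation, "altruism", threshold=70)
```

On-chain, the same condition would gate the reward payout inside the contract itself rather than in off-chain code.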

Is BasicIncome.co a reputation based reward network?

The Basic Income algorithm which allows for dividend pathways works in a similar fashion, where the givers within the personalized safety net become part of the overall Resilience social support network. The Resilience network accounts for and tracks the altruists who volunteer to pay the tax, and at the same time the individuals (individual agents) who pay into the network are given the incentive to shop at businesses which are part of the network. The Resilience network is a sort of reputation network where all who maintain a certain attribute, by giving to the community, get to remain a part of the community. Basicincome.co could be recognized as a reputation based reward network, but only in a very limited sense, because participants may or may not choose to see it that way. If participants choose to build a reputation system on top of Basicincome.co, then it can become an effective reputation based reward network.

Reputation based reward networks allow for gift economics

In a gift economy nothing is bought or sold. In a gift economy everything given or received is a gift, similar to how Christmas is a gift economy because everything given or received is gifted. A gift economy can take advantage of reputation so that those who give a lot to others earn a certain reputation, which can allow the givers in the network to obtain priority status for rewards. It is also possible to have smart contracts with conditions such that only those who have proven themselves through specified acts of kindness become eligible for the reward lottery. In essence, in order to enter the lottery you would have to earn lottery tickets, which can only be earned by giving donations to certain charities.

Reputation lotteries can leverage greed to encourage effective altruism

  1. In order to enter the lottery you must be able to prove you did the specified act of altruism. This can easily be shown by a blockchain transaction for example.
  2. For every altruistic interaction you shall receive points which can be traded in for lottery tickets.
  3. The more lottery tickets you have the greater your chance to win the rewards.
  4. All who enter the lottery are guaranteed to receive a permanent badge of honor for having participated whether they win or not. This would encourage the participants to continue playing into the future.
  5. Participants can be human or machine.
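The five rules above can be sketched as a toy lottery. The point totals and exchange rate are illustrative; in the described system, proof of altruism would come from blockchain transactions rather than a dictionary.

```python
import random

def run_lottery(points_per_participant, points_per_ticket=10, seed=2):
    rng = random.Random(seed)
    tickets = []
    for participant, points in points_per_participant.items():
        # rule 2: altruism points trade in for tickets;
        # rule 3: more tickets means a greater chance to win
        tickets.extend([participant] * (points // points_per_ticket))
    winner = rng.choice(tickets)
    # rule 4: every entrant keeps a permanent badge, win or lose
    badges = set(points_per_participant)
    return winner, badges

# rule 1 assumed already satisfied: these points were earned by
# proven acts of altruism (participants may be human or machine, rule 5)
winner, badges = run_lottery({"alice": 50, "bob": 10, "carol": 30})
```

The greed being leveraged is the chance of winning; the altruism is the only way to buy a chance.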

References

resilience.me (2015). ‘Basicincome.co – Incentive-Based Decentralized Safety Nets’. Web. 12 Mar. 2015.

YouTube (2015). ‘Identity and Reputation’. Web. 12 Mar. 2015.

Evolutionary Computation as a Form of Organization

The Free Knowledge Exchange (FKE) project introduces the concept of evolutionary knowledge management based on concepts of GA. It used a human-based genetic algorithm (HBGA) for the task of collaborative solving of problems expressed in natural language (Kosorukoff, 2000a). It was created in 1997 for a small organization with the goal of promoting success of each member through new forms of cooperation based on better knowledge management.

Human-based genetic algorithms pave the way for evolutionary self-organizing architectures. These architectures can be social, political, economic, or physical.

The idea is that user preferences are tracked in real time by the architecture itself. The architecture then uses this feedback to continuously evolve the organization.
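A minimal human-based genetic algorithm loop, consistent with the description above: the architecture performs selection using tracked preferences, while humans supply both evaluation and variation. The human interfaces are simulated here by simple stand-in functions; this is an illustrative sketch, not the FKE implementation.

```python
def hbga_step(population, human_select, human_recombine, keep=2):
    # selection: rank candidates by the (human-supplied) preference signal
    ranked = sorted(population, key=human_select, reverse=True)
    survivors = ranked[:keep]
    # innovation: humans, not code, perform crossover/mutation on survivors
    child = human_recombine(survivors[0], survivors[1])
    return survivors + [child]

# stand-ins for the human evaluation and innovation interfaces
human_select = len                            # pretend longer answers are preferred
human_recombine = lambda a, b: a + " / " + b  # pretend a human merged two answers

population = ["idea", "a much longer idea", "ok"]
population = hbga_step(population, human_select, human_recombine)
```

In a real HBGA both functions are interfaces to people, which is exactly what distinguishes it from an IGA, where only the evaluation is human.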

The idea of human interaction came from interactive genetic algorithms (IGA) that introduced human evaluation interfaces in evolutionary computation. Human-based genetic algorithm (HBGA) used in FKE is basically an IGA combined with human-based innovation interfaces (crossover and mutation).

The concept of Evolutionary Computation as a Form of Organization will be discussed in future postings within the context of how a distributed autonomous virtual state can utilize evolutionary computation to become a self optimizing system.

References

Coello, C. A. C. (2010). List of references on constraint-handling techniques used with evolutionary algorithms. Power, 80(10), 1286-1292.
Kosorukoff, A., & Goldberg, D. E. (2002, July). Evolutionary Computation As A Form Of Organization. In GECCO (Vol. 2002, pp. 965-972).