
Safe to Fail Probes

The freedom to fail

In video games, learning happens because players are not afraid to die. That freedom to fail encourages a great deal of experimentation, and over time the experimentation produces learning. Hackers tend to think in a similar fashion: the art of “fuzzing” attempts to make a system fail by pushing the boundaries of the application design. In essence both are “brute force” methods of solution discovery, applying trial and error in a systematic way. It may take many failures to reach a single success, which is why there has to be a freedom to fail.
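
To make the fuzzing analogy concrete, a fuzzer can be as simple as a loop that randomly mutates a known-good input and records which mutations make the target fail. The sketch below is a minimal, hypothetical Python example; parse_message stands in for whatever application code is being probed, and every recorded failure is a cheap piece of learning.

```python
import random

def parse_message(data: bytes) -> None:
    """Stand-in for the application code under test (hypothetical)."""
    if len(data) > 64:
        raise ValueError("message too long")
    data.decode("utf-8")  # raises on malformed byte sequences

def mutate(seed: bytes) -> bytes:
    """Flip, insert, or delete a few random bytes in a known-good input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 8)):
        op = random.choice(("flip", "insert", "delete"))
        pos = random.randrange(len(data) or 1)
        if op == "flip" and data:
            data[pos] ^= 1 << random.randrange(8)
        elif op == "insert":
            data.insert(pos, random.randrange(256))
        elif op == "delete" and data:
            del data[pos]
    return bytes(data)

seed = b"hello world"
failures = []
for _ in range(10_000):              # failing safely, ten thousand times
    candidate = mutate(seed)
    try:
        parse_message(candidate)
    except Exception as exc:
        failures.append((candidate, exc))

print(f"{len(failures)} failing inputs discovered")
```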

What are safe to fail probes?

The safe-to-fail probe is an experimental technique which can be applied in complex adaptive systems to discover new knowledge. Dave Snowden pioneered safe-to-fail probes, and one of his key points is that each experiment should be different: it is important not to have 20 out of 100 people all trying the same experiment.

Blockchain based distributed autonomous communities are not inherently chaotic

Despite how things may seem in the media, the blockchain based decentralized community is not inherently chaotic. It is a complex adaptive system in which order exists, but that order is too intricate to manage with a top down approach. It is a mistake to think of blockchain communities as corporations, because a corporation is typically a top down hierarchy with public leadership roles, fairly predictable in how it operates, and slow to change. Decentralized communities do not operate like corporations, but at the same time they are not “anarchy” in the sense of a lawless “wild west”. The law of the decentralized community is encoded into the software that each individual must run.

To think of each blockchain and/or DApp as an experiment is to allow for the idea that a blockchain can be a safe-to-fail probe. The only way to discover what is or isn’t possible in a completely new environment is to probe that environment through multiple simultaneous experiments. Just as distributed sousveillance can allow a blockchain to probe an environment and create collective intelligence, a similar approach can be used to decentralize knowledge generation through safe-to-fail probes.

Blockchains are distributed sensor networks

This will become clear as prediction markets using distributed oracles show that a consensus (Schelling point/focal point) on the truth can be identified. Prediction markets powered by zero knowledge proofs and oracles connected to a blockchain are quite powerful. Perspective is valuable for determining the truth of any event, and the blockchain can combine globally distributed perspectives on an event or situation in a quantifiable way. This also has implications for distributed autonomous security systems, which at this time have not been fully explored or exploited.
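
A minimal sketch of the underlying idea, assuming a SchellingCoin-style mechanism rather than any specific protocol: each oracle independently reports what it observed, the consensus is taken as the median report, and only reporters who land near that focal point are rewarded.

```python
from statistics import median

def schelling_consensus(reports, tolerance=0.01):
    """Treat the median report as the focal point and reward reporters near it.

    reports: mapping of oracle id -> reported value, e.g. a price
             or a 0/1 answer to "did event X happen?" (illustrative only)
    """
    focal = median(reports.values())
    rewarded = [
        oracle for oracle, value in reports.items()
        if abs(value - focal) <= tolerance * max(abs(focal), 1.0)
    ]
    return focal, rewarded

# Five independent perspectives on the same event; the lone dissenter is not rewarded.
reports = {"a": 1.0, "b": 1.0, "c": 1.0, "d": 0.0, "e": 1.0}
truth, honest = schelling_consensus(reports)
print(truth, honest)   # 1.0 ['a', 'b', 'c', 'e']
```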

A problem with blockchain maximalism (one blockchain to rule them all)

One of the major problems with blockchain maximalism is that while it does promote the network effect of the winning blockchain, it does not actually contribute to the process which led to the innovation of the blockchain itself. In an environment with a diversity of blockchains, where many attempts are being made to innovate or to solve previously unsolved problems, you have a way to experimentally probe what I’ll call the “search space” to find the optimal solution to a problem. It allows more developers (and more minds in general) to attack the same problem with economic incentives similar to those the early Bitcoin adopters had.

Viewed as a complex adaptive system, the evolutionary process is artificially slowed if we stop experimenting: it will take longer for us to learn the optimal algorithms, ways of doing things, and methods of self-management and self-regulation for this new space.

A problem with funding too many versions of the same experiment

One of the problems in the current blockchain tech community is that VCs and entrepreneurs tend to think in orthodoxies. A lot of money is flowing into duplications of the same experiments while some of the truly unique experiments receive no funding. As a result there are dozens of exchanges, wallets, and similar services which do not innovate much except in marketing, yet which are being flooded with cash. A likely result is that many of these duplicates will fail or be consolidated, and while this is good in some ways it is not good in all ways. Companies which bring truly innovative ideas to the space should be given a chance to monetize them and build out, but a lot of businesses are being built just to make money rather than to improve on how something is being done.

References

Snowden, D. (2010). Safe-fail probes. www.cognitive-edge.com/method.php

Snowden, D. J., & Boone, M. E. (2007). A leader’s framework for decision making. Harvard Business Review, 85(11), 68.

Evolutionary methods for problem solving and artificial development

One of the principles I follow for problem solving is that many of the best solutions can be found in nature. The basic axiom that all knowledge is self-knowledge applies to the study of computer science and artificial intelligence.

By studying nature we are studying ourselves and what we learn from nature can give us initial designs for DApps (decentralized applications).

The SAFE Network example

SAFE Network, for example, follows these principles by utilizing biomimicry (an ant colony algorithm) in its initial design. If SAFE Network is designed appropriately then it will have an evolutionary method, so that over time our participation with it can fine-tune it. There should be both a symbiosis between human and AI and a way to make sure changes are always made according to the preferences of mankind. In essence SAFE Network should be able to optimize its design going into the future to meet human defined “fitness” criteria. How they will go about achieving this is unknown at this time, but my opinion is that it will require a democratization or collaborative filtering layer. A possible result of SAFE Network’s evolutionary process could be a sort of artificial neural network.
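
To make the biomimicry reference concrete, the toy below shows the general shape of an ant colony algorithm: options that perform well accumulate “pheromone” and become more likely to be chosen again, while pheromone everywhere else evaporates. This is only an illustrative sketch of the algorithm family, not a description of how SAFE Network actually routes or stores data.

```python
import random

# Toy ant colony optimisation over a handful of candidate routes.
# Lower cost is better; good routes accumulate pheromone and get picked more often.
routes = {"A": 5.0, "B": 3.0, "C": 8.0}          # route name -> cost
pheromone = {name: 1.0 for name in routes}
EVAPORATION, DEPOSIT = 0.1, 1.0

for _ in range(200):                              # 200 simulated ants
    names = list(routes)
    choice = random.choices(names, weights=[pheromone[n] for n in names])[0]
    for name in pheromone:                        # pheromone evaporates everywhere
        pheromone[name] *= (1 - EVAPORATION)
    pheromone[choice] += DEPOSIT / routes[choice] # reinforce in proportion to quality

best = max(pheromone, key=pheromone.get)
print(best)   # converges on "B", the lowest-cost route, with high probability
```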

The Wikipedia example

Wikipedia is an example of an evolving knowledge resource. It uses an evolutionary method (a human based genetic algorithm) to curate, structure and maintain human knowledge. Human beings act as the innovators and selectors in this process.

One of the main problems with Wikipedia is that it is centralized and that it does not generate any profits. This may be partially due to the ideal that knowledge should be free to access, which does not factor in that knowledge isn’t free to generate. It also doesn’t factor in that knowledge has to be stored somewhere, and that if Wikipedia is centralized then it can be taken down just as the Library of Alexandria once was. A decentralized Wikipedia could begin its life by mirroring Wikipedia and then use evolutionary methods to create a Wikipedia which does not carry the same risk profile or model.

Benefits of applying evolutionary methods to Wikipedia-style DApps

One of the benefits is that there could be many different DApps competing in a marketplace, so that successful design features create an incentive to continue innovating. We can think of the market in this instance as the human based genetic algorithm, where all DApps are candidate solutions to the problem of optimizing knowledge diffusion. The human beings would be the innovators, the selectors, and the initializers. The token system would represent the incentive layer but also serve as signalling, so that humans can give an information signal which indicates their preferences to the market.
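
A highly simplified sketch of that loop, under the assumption that “fitness” is simply the token-weighted signal each candidate DApp receives from human participants; the selection and variation steps below are placeholders for human judgement and human innovation, not real genetic operators.

```python
import random

# Candidate knowledge DApps competing in the marketplace.
# Humans initialise them, signal preferences with tokens, and propose variants.
population = {"design_a": 0.0, "design_b": 0.0, "design_c": 0.0}

def human_token_signal(design):
    """Placeholder for tokens staked on a design by users this round (hypothetical)."""
    return random.uniform(0, 10)

for generation in range(5):
    # Fitness is accumulated human signal, not a machine-defined objective.
    for design in population:
        population[design] += human_token_signal(design)

    # Selection: the weakest design is dropped and replaced by a human-proposed
    # variant of the strongest one (the "mutation" is really human innovation).
    ranked = sorted(population, key=population.get, reverse=True)
    strongest, weakest = ranked[0], ranked[-1]
    del population[weakest]
    population[f"{strongest}_v{generation}"] = population[strongest] * 0.5

print(population)
```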

Wikipedia is not currently based on nature and does not evolve its design to adapt to its environment. Wikipedia “eats” when humans donate money to a centralized foundation which directs the development of Wikipedia. A decentralized evolutionary model would not have a centralized foundation; Wikipedia would instead adapt its survival strategy to its environment. A Wikipedia following the evolutionary model would seek to profit in competition with other Wikipedias until the best (most fit) adaptation to the environment is evolved. Users would be able to use micropayments to signal through their participation and usage which Wikipedia pages are preferred over others, and at the same time pseudo-anonymous academic experts with good reputations could rate the accuracy.

In order for the human based genetic algorithm and the collaborative filtering to work, the participants should not know the scores of different pages in real time, because this could bias the results. Participants also do not need to know how different experts scored different pages, because personality cults could skew the results and influence the rating behavior of other experts. Finally it would have to be global and decentralized so that experts cannot easily coordinate and conspire. These problems would not be easy to solve, but Wikipedia currently has similar problems in centralized form.
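
One plausible way to keep scores hidden until everyone has rated, so that experts cannot anchor on each other, is a commit-reveal scheme: each expert first publishes a hash of their score plus a secret salt, and only reveals the score once all commitments are in. The sketch below is an assumed, minimal version of such a scheme, not a description of any existing Wikipedia or DApp mechanism.

```python
import hashlib
import secrets

def commit(score, salt):
    """During the rating phase only this hash is published."""
    return hashlib.sha256(salt + str(score).encode()).hexdigest()

def verify(score, salt, commitment):
    """After the reveal phase anyone can check that a score matches its commitment."""
    return commit(score, salt) == commitment

# Commit phase: experts rate a page without seeing each other's scores.
experts = {}
for name, score in [("expert1", 8), ("expert2", 7), ("expert3", 3)]:
    salt = secrets.token_bytes(16)
    experts[name] = (score, salt, commit(score, salt))

# Reveal phase: scores are opened and verified only once all commitments exist.
revealed = {
    name: score
    for name, (score, salt, commitment) in experts.items()
    if verify(score, salt, commitment)
}
print(revealed)   # {'expert1': 8, 'expert2': 7, 'expert3': 3}
```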

Artificial development as a design process

Quote from artificial development:

“Human designs are often limited by their ability to scale, and adapt to changing needs. Our rigid design processes often constrain the design to solving the immediate problem, with only limited scope for change. Organisms, on the other hand, appear to be able to maintain functionality through all stages of development, despite a vast change in the number of cells from the embryo to a mature individual. It would be advantageous to empower human designs with this on-line adaptability through scaling, whereby a system can change complexity depending on conditions.”

The quote above summarizes one of the main differences between an evolutionary design model and a human design model. Human designs have limited adaptability to the environment because human beings are not good at predicting and accounting for the possibly disruptive environmental changes which can take place in the future. Businesses which take on these static, inflexible human designs are easily disrupted by technological change because human beings have great difficulty making a design which is “future proof”. It is my own conclusion that Wikipedia in its current design iteration suffers from this even though it does have a limited evolutionary design. The limitation of Wikipedia is that the foundation is centralized and it is built on top of a network which is not as resilient to political change as it could be. In order for the designs of DApps to be future proof they have to utilize evolutionary design models. Additionally it would be good if DApps were forced to compete against each other for fitness so that the best evolutionary design models rise to the top of the heap.

References

Clune, J., Beckmann, B. E., Pennock, R. T., & Ofria, C. (2011). HybrID: A hybridization of indirect and direct encodings for evolutionary computation. In Advances in Artificial Life. Darwin Meets von Neumann (pp. 134-141). Springer Berlin Heidelberg.

Cussat-Blanc, S., Bredeche, N., Luga, H., Duthen, Y., & Schoenauer, M. Artificial Gene Regulatory Networks and Spatial Computation: A Case Study.

Doursat, R. (2008). Organically grown architectures: Creating decentralized, autonomous systems by embryomorphic engineering. In Organic computing (pp. 167-199). Springer Berlin Heidelberg.

Harding, S., & Banzhaf, W. (2008). Artificial development. In Organic Computing (pp. 201-219). Springer Berlin Heidelberg.

Palla, R. S. An approach for self-creating software code in BIONETS with artificial embryogeny.

Ulieru, M., & Doursat, R. (2011). Emergent engineering: a radical paradigm shift. International Journal of Autonomous and Adaptive Communications Systems, 4(1), 39-60.