The Computational Universe – Stephen Wolfram suggests a different approach to science

In 2002 computer scientist, physicist, and businessman Stephen Wolfram published his book A New Kind of Science (NKS).

Rather than looking to physics as the way to unravel the mysteries of the world and the universe, Wolfram looked at how systems work: they follow certain algorithms, which create outcomes. There are plenty of examples in the world around us, especially in nature. We see some of the most beautiful patterns in our brains, in birds, butterflies, flowers, galaxies and so on. The same applies to art and music, but we also see them in the Golden Ratio, prime numbers, and the digits of pi. Wolfram thinks those patterns can be generated algorithmically; that he can therefore understand and explain the universe by unearthing those algorithms; and that even complex patterns can be shown to arise from extremely simple algorithms.

Castel del Monte, built according to the Golden Ratio – Andria, Puglia, Italy – 2015

Traditional physics does have its limitations, and there are many areas where it is not going to assist us that much, e.g. biology, social science and linguistics. We need a different set of scientific tools for those.

Wolfram worked on cellular automata. In a one-dimensional cellular automaton there are two possible states (labelled 0 and 1), and the rule that determines the state of a cell in the next generation depends only on the current state of the cell and its two immediate neighbours. He figured out how this could be applied to understanding the universe by using computer programming. He was indeed able to use a systems approach that delivered remarkable outcomes which could not be formulated as an equation.
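To make the mechanics concrete, here is a minimal Python sketch of such a one-dimensional, two-state automaton – my own illustration, not code from NKS. A rule is simply a number from 0 to 255 whose binary digits act as a lookup table, giving the next state for each of the eight possible (left, centre, right) neighbourhoods.

# My own illustrative sketch, not code from NKS: one generation of an
# elementary (one-dimensional, two-state) cellular automaton.
def step(cells, rule):
    """cells: list of 0/1 values; rule: integer 0-255 used as a lookup table."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left = cells[(i - 1) % n]          # wrap around at the edges
        centre = cells[i]
        right = cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right   # neighbourhood as 0..7
        nxt.append((rule >> pattern) & 1)  # bit 'pattern' of the rule number
    return nxt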

My friend Fred Kappetijn provided me with a sanity check and, as a consequence, I have added the following disclaimer to the article. I agree with Fred that we need to be a bit skeptical about some of the claims that Wolfram makes, and there is also a niggling worry that he – as a businessman – is trying to promote some of his own services. Nevertheless, I do find it refreshing that he, as a physicist, is coming up with a different view, and some of it most certainly does appeal to me.

Interestingly, this is what Dr David Bray added to the discussion:

To what degree can the phenomena that compose our physical reality be explained and predicted by *closed-form* mathematical equations?

vs.

To what degree are *closed-form* mathematical equations ineffective at explaining and predicting the phenomena of our physical reality – and *open-form* descriptions (such as agent-based modeling and other complex adaptive system approaches) more effective in comparison?

Which gets to what Einstein noted – that:

As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.

Mathematics is not a science, and at the moment it doesn’t look like we are getting much further with a purely mathematical approach. There is nothing wrong with trying out equation modelling on our actual universe. Mathematics is a symbolic logic, with internal consistencies, that we have found effective – especially for classical, non-quantum physics – in explaining and predicting the physical reality of our universe.

However, we are increasingly discovering instances where phenomena in our universe (especially at the quantum level) are difficult to describe with closed-form, deterministic mathematics. In some instances closed-form, indeterministic (probabilistic) descriptions work – yet even then there are signs that this approach is insufficient.

The Computational Universe.

Wolfram discovered that ‘Rule 30’ produces complex, seemingly random patterns from a simple, well-defined rule. He explains this in more detail in his book, but again it is something that I find very hard to understand.
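Still, the mechanics are easy to demonstrate. A self-contained sketch of my own (assuming the same 0/1 encoding as above): starting Rule 30 from a single black cell and printing each generation quickly produces its famously random-looking triangle.

# Rule 30: a very simple rule producing complex-looking output (my own sketch).
rule, width, generations = 30, 64, 24
cells = [0] * width
cells[width // 2] = 1                       # start from a single black cell
for _ in range(generations):
    print("".join("#" if c else "." for c in cells))
    cells = [(rule >> ((cells[(i - 1) % width] << 2)
                       | (cells[i] << 1)
                       | cells[(i + 1) % width])) & 1
             for i in range(width)]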

He indicated that this cellular automata approach could be applied across systems. This new way of scientific thinking could result in a ‘Computational Universe’. The universe is full of systems, so rather than trying to explain the whole universe, let us start by unravelling systems.

His model of the universe looks at evolution step by step according to “cellular automaton” rules that can produce complex, chaotic/non-linear results (as his zillions of simulations show), to which partial differential equations don’t apply without, at a minimum, some heroic mathematical effort.

Natural systems could be defined as computational. Understanding them involves the development of models and simulations. Wolfram is not the only one talking about a computational universe; Swedish-American physicist Max Erik Tegmark talks about ‘Our Mathematical Universe’. However, Tegmark hypothesises that all structures that exist mathematically also exist physically.

In contrast, Wolfram uses algorithms: finite sequences of well-defined, computer-implementable instructions, typically used to solve a class of specific problems or to perform a computation.

The ultimate challenge is to find out what the system behind the universe is – what the fundamental theory of physics is. He looks for discoveries such as why we have the physical laws that we do. He thinks he can show how the universe generated them by finding the basic algorithms that produce them.

The anthropic principle holds that there is a restrictive lower bound on how statistically probable our observations of the universe are, given that we could only exist in a universe capable of developing and sustaining sentient life. I support the weak anthropic principle; the strong version is not verifiable and could even include a creator. The weak form simply states that we can observe our universe. The question here is whether we and our universe are a totally unique entity. The anthropic principle could lead to questions such as: must there be a multiverse, of which ours is just the one that has the right formula for us to be here?

Natural systems often appear to us as extremely complex; however, nature seems to generate them rather easily. It looks like nature uses its computational power to do this. Wolfram argues that we can replicate this through what he calls the principle of computational equivalence.

Irreducibility – there is no end game.

Interestingly, if you apply this systems approach you end up with the phenomenon of irreducibility. What this means is that a complete account of the system is not possible because, at the next level, it always exhibits new properties beyond prediction and explanation in terms of the lower levels. In other words, there is always something beyond (I come back to this when discussing big data). At the same time, it is important to realise that systems are based on modelling, and the output totally depends on what the input is. However, if this is done scientifically, the input can be fine-tuned. Also, as the saying goes, all models are wrong but some are more useful than others.
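A toy way to see the computational side of this, again using my own elementary-automaton sketch from above: some rules are reducible – Rule 250, for instance, just grows a regular chequerboard triangle, so any generation can be written down directly with a one-line formula – while for Rule 30 no such shortcut is known, and the only way we know to obtain generation t is to compute all t generations.

# Reducible vs irreducible rules (my own illustration).
def run(rule, width, t):
    """Evolve an elementary automaton for t generations from one black cell."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(t):
        cells = [(rule >> ((cells[(i - 1) % width] << 2)
                           | (cells[i] << 1)
                           | cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return cells

def predict_rule250(width, t):
    """Closed-form shortcut for Rule 250: cell i is on at generation t
    iff it is within t of the centre and has the right parity."""
    c = width // 2
    return [1 if abs(i - c) <= t and (i - c + t) % 2 == 0 else 0
            for i in range(width)]

assert run(250, 81, 20) == predict_rule250(81, 20)   # the shortcut works
# For Rule 30 no predict_rule30() is known: run(30, 81, 20) - i.e. actually
# simulating all 20 generations - is the only way we know to get there.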

I do find that a fascinating approach, as this means that while we might be able to model systems, we can never control them. Can we link irreducibility to ‘Free Will’? Or at least, can we interpret this as something that could seem to look like ‘Free Will’? I know that I am now on very dangerous ground. Wolfram’s interpretation goes along these lines: human thought and behaviour are totally deterministic but unpredictable.

However, it was only in the last decade, when Wolfram reviewed his book, that he was able to develop this further with the assistance of artificial intelligence (AI) or, more precisely, machine learning (ML) and, even more importantly, neural nets. The latter are computing systems that are at least vaguely inspired by biological neural networks.

While on a micro level (in our world) there is chaos, on a macro level it looks like there are systems and patterns, and very often they are very beautiful and show harmony. Learning more about the underlying systems will provide us with better guidelines on how best to understand the chaotic environments that are the reality we must deal with. These ‘chaotic’ environments are systems that are there for us to understand as systems and describe in some symbolic way, so that we can use ML to work with them. According to Wolfram, it makes sense that what underlies our world and our universe could well be based on simple algorithms. Interestingly, Tegmark, with a different approach, comes to a similar conclusion.

Start with grassroots systems.

It reminds me of some of the smart city work I have been involved in, working with students in hackathons: collecting unstructured data from totally different systems and bringing it together and, amazingly, starting to see very interesting patterns/systems in how a city operates. The more data input we provide to the system, the more we learn about it; this process could be endless – a good example of irreducibility. So rather than trying to build a smart city, define what you want and then use the principles of the computational universe to let it grow/develop.

I am using this example, but colleagues working in other fields report similar outcomes from the big data work they are involved in across different sectors.

AI and Rembrandt van Rijn

Another interesting example was revealed a few weeks ago. Operation Night Watch (the project around Rembrandt’s painting) shows what a computer system combining imagination, resources, technical virtuosity, and mastery of a powerful technology can achieve. Painted in 1642, the Night Watch was cut down on all four sides in 1715 to fit in a new spot in Amsterdam’s Town Hall (now the Royal Palace). The cut-off pieces were lost. However, a copy of the full painting had been made by Gerrit Lundens shortly after Rembrandt finished it. While it was clearly not as good as the one from the master himself, that painting showed the missing bits. With the assistance of AI, these pieces have now been reconstructed.

The reconstruction was based on the systems approach. First, AI was used to teach the computer what Lundens’ style of painting was: his brush strokes, techniques, etc. Then the system was fed with information based on Rembrandt’s techniques, and his way of working was programmed into the computer as the desired outcome. The result is just stunning; Wolfram would be extremely proud of that.

Linking Wolfram’s ideas with Quantum Mechanics

In the end we are part of all the (beautiful) patterns that we see in the universe, so why would the sub-systems be so different? Could biological systems such as our body follow computational universe principles, with irreducible outcomes? Everyone is a system of their own – albeit integrated with each other and our environment – and this is what makes, for example, personalised medicine so important (another big data project).

In this context Adrian Bejan’s Constructal Law is also interesting, where he talks about “flow” designs in nature. Typically these are represented by trees, rivers and flowers, but a deeper look shows the law manifests across most biological systems, like the human body’s respiratory, circulatory and nervous systems. Basically: “constructal law is the statement that for a flow system to persist in time it must evolve in such a way that it provides easier access to its currents”.

Observing these beautiful patterns and ‘flow’ systems, I can agree with Wolfram that we might need a totally different approach to understand these systems and replicate them. In the end we might be able to put all the puzzle pieces together and get a better picture of the full underlying/overarching system. What we see as chaotic and unpredictable could well become clearer – albeit still less predictable – once we understand more and more of the underlying systems.

While we can see these systems develop, it is hard to understand what is happening and how. We cannot (yet) put it into mathematical constructs; we can only describe what happens in certain parts of such systems. A critical element here, of course, is to develop the right languages for data science; this would allow us to tell the computer what to do and what the goals are – as well as, of course, finding the right data to put into the models.

I am also fascinated by quantum theories, and the developments here are equally mindboggling. Other scientists are pursuing goals similar to Wolfram’s. My question is: could quantum mechanics play a role here, could it assist in explaining some of the elements in a system? Especially new developments such as Spin Networks and Loop Quantum Gravity, which are trying to obtain a mathematical description of quantum space (far too complex for my brain to understand or explain). One of the aims is to see if this can align Einstein’s General Relativity Theory with Quantum Mechanics.

Looking at it from the other side, could the systems approach clarify some of the quantum mechanics phenomena? Are there deeper relationships between the two theories? After 100 years of research, we still do not understand quantum mechanics (nor what the physics of gravity is, for that matter); perhaps Wolfram offers a simpler way forward? Intuitively, I would think this is worth investigating further.

Perhaps because I am more involved in social and philosophical systems, I think that we should give Wolfram’s approach a serious chance, to see if we can solve some of the ‘big’ problems from a different ‘systems’ angle. Could the systems approach provide us with a law of nature? Is such a system a universal computer? It is just fascinating to think about.

Back to the here and now

Now on to everyday reality – and things that really matter in the here and now.

So let us stick to the knowns. We unravel more and more problem areas to get a better understanding of the environment/world/universe we live in. We have already undertaken a lot of good work here. While still at a basic level, AI and ML, as well as the data sciences, are going through massive development, and we can use them not just for commercial, military, or political purposes but also to improve humanity, our environment, social structures and so on. AI should not be used on autopilot; it needs to be used based on decisions made by humans. However, so far we have largely failed to act upon what we are learning. We have been very selective in using the outcomes, mainly for commercial, political, and military purposes. It is about time that we start using them in applications for the good of all. Climate change is a classic example: ML has greatly assisted us in understanding and modelling (competing) systems. Now is the time to take urgent action based on the evidence we have gathered with the assistance of AI.

The interesting question is if and how this new systems approach from Wolfram can assist us further. The chaotic nature of these systems, their dependence on the input, and an output that will continue to change the more we put in, provide a lot of uncertainty, and the applications will therefore be limited, at least in the foreseeable future.

What we need here are – according to my colleague Dr David Weinberger – flexible, agile approaches that emphasise interoperability and enable world-wide communication among super-local observers, who can both feed data into the system and use conversational human intelligence to try to pick out the small but crucial signals from the flood of non-crucial ones.

Another worrying element here is the global polarisation, especially between China and the West. China is leading in AI and robotics, and the values of the Chinese Communist Party are rather different from the values we foster in democratic countries. While it is desperately needed, it is hard to see how we could build any global consensus on what is good for all.

We use ML, big data and – who knows – a systems-based approach to create a better world, but if we fail to implement the lessons we are learning from these tools, then what is the point? We do need to align our technological inventions and developments with human goals. I see this as the biggest issue in the development of our AI, ML, algorithms, systems, etc. Can we get agreement on what we want from this technology? We are going to have to define goals for what we want from these technologies; this will always be based on value choices that we humans have to make. AI will then figure out how best to achieve those goals. The worry is not that AI could take over humanity or make us redundant, but that we are making the wrong value choices and that the consequent AI outcomes will not be for the good of all.

This is something that greatly frustrates me. We seem to lack the political will to use the knowledge that we have gathered, with the assistance of technology, for the good of all. We can fine-tune these systems as we learn to understand them better. Technology will have to assist us here, as the complexity is simply too big for us to solve those problems without its assistance.

We also know that we are social beings, so we do depend on each other. There are systems that also underlie our communities – and again we know them reasonably well – but here too we fail to act upon them. It is not difficult to see communities that do live in harmony and thrive; that is a well-working system. What we have seen over the last 50 years or so is that these communities/societies have been disturbed, and that indicates that social systems are failing. We know what is needed to make them work, but again we fail to implement the solutions. Developments in smart city systems starting at the grassroots level are a good way to improve and strengthen our community systems. Based on strong systems at that level, we might ensure that we can hold on to our hard-fought democratic principles and institutions.

Paul Budde