You’ve probably heard of the butterfly effect, the idea that tiny changes (like a butterfly flapping its wings in Brazil) can have massive consequences (like triggering a tornado in Texas). More than just a poetic metaphor, the butterfly effect says that there are some things that even the most advanced science can never predict. Well, that list of things just got a lot shorter. Scientists from the University of Maryland have used machine learning to predict chaos.
Generally, when scientists want to make predictions about a chaotic system like the weather or the stock market, they measure as much about it as accurately as they can, build a computer model, then see what that model does next. But in a series of papers published at the turn of 2018 in Physical Review Letters and Chaos, chaos theorist Edward Ott and his colleagues took a different approach. They used a machine-learning technique called reservoir computing to repeatedly measure a chaotic system, predict what it would do next, test that prediction against what actually happened, and fine-tune the predictions until they were as accurate as possible.
The algorithm was tasked with predicting how a wall of flame would behave as it moved through a combustible medium like a sheet of paper. That kind of evolving flame front is described by the Kuramoto-Sivashinsky equation, which is also used to study things like plasma waves and air turbulence. The solution to the equation evolves just like a flame front would, and the scientists fed data from that known evolution into their algorithm: things like the height of the flames at a handful of points along the front at many different moments in time.
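For readers who want to see the math, the Kuramoto-Sivashinsky equation in its standard one-dimensional form is

$$ u_t + u\,u_x + u_{xx} + u_{xxxx} = 0, $$

where $u(x, t)$ plays the role of the flame-front height at position $x$ and time $t$. The second-derivative term pumps energy into the solution while the fourth-derivative term damps it, and the tug-of-war between the two is what makes the dynamics chaotic.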
With every bit of data fed in, artificial neurons in the machine-learning network fire signals. The scientists measured the signal strengths of several randomly chosen neurons in that network (the "reservoir" that gives the technique its name), then weighted and combined them in different ways to produce a set number of outputs. The algorithm compared these outputs (in this case, predicted flame heights) with the next inputs (the actual flame heights), then made tiny adjustments to those weights to improve its accuracy on the next measurement. Like a butcher learning to cut one-pound steaks by eye, the algorithm got a little closer to nailing it with every measurement and adjustment. Finally, the trained network was used to make a real prediction about how the system would behave.
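To make that loop a little more concrete, here is a minimal sketch of a reservoir computer (often called an echo state network) written in Python with NumPy. It is not the researchers' code, and every size, scaling, and name in it is an illustrative assumption, but it shows the structure described above: a fixed, randomly wired reservoir of artificial neurons, readouts built from weighted combinations of their signals, and output weights tuned so that each output matches the next measurement.

```python
# A minimal reservoir-computing (echo state network) sketch in NumPy.
# This is an illustration of the general technique, not the researchers' code;
# the sizes, scalings, and names below are arbitrary, illustrative choices.
import numpy as np

rng = np.random.default_rng(seed=0)

n_inputs = 64        # e.g., flame-front heights sampled at 64 points
n_reservoir = 1000   # randomly connected artificial neurons
leak = 0.5           # how strongly each new input nudges the reservoir state
ridge = 1e-6         # regularization for fitting the output weights

# Fixed random weights: input-to-reservoir and reservoir-to-reservoir.
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # keep the reservoir stable

def run_reservoir(inputs, r=None):
    """Drive the reservoir with a sequence of input vectors; return its states."""
    r = np.zeros(n_reservoir) if r is None else r
    states = []
    for u in inputs:
        r = (1 - leak) * r + leak * np.tanh(W @ r + W_in @ u)
        states.append(r.copy())
    return np.array(states)

def train(inputs):
    """Fit output weights so the reservoir state at each step predicts the NEXT input."""
    X = run_reservoir(inputs[:-1])   # reservoir states, one per time step
    Y = inputs[1:]                   # the measurements one step ahead
    W_out = Y.T @ X @ np.linalg.inv(X.T @ X + ridge * np.eye(n_reservoir))
    return W_out, X[-1]

def forecast(W_out, r, u, n_steps):
    """Close the loop: feed the network's own predictions back in as inputs."""
    predictions = []
    for _ in range(n_steps):
        r = (1 - leak) * r + leak * np.tanh(W @ r + W_in @ u)
        u = W_out @ r
        predictions.append(u)
    return np.array(predictions)

# Usage, assuming `data` holds the known evolution as an array of shape
# (n_time_steps, n_inputs):
#   W_out, last_state = train(data)
#   future = forecast(W_out, last_state, data[-1], n_steps=500)
```

The key design choice in this family of methods is that only the output weights are fitted; the randomly wired reservoir itself never changes, which turns training into a simple least-squares problem rather than a slow optimization of the whole network.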
The algorithm successfully predicted the future evolution of that flame wall roughly eight times further into the future than previous methods allowed. To do that with a conventional model, writes Natalie Wolchover in Quanta, “you’d have to measure a typical system’s initial conditions 100,000,000 times more accurately to predict its future evolution eight times further ahead.”
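A back-of-the-envelope way to see why the model-based route is so expensive: in a chaotic system, a small initial measurement error grows roughly exponentially,

$$ \epsilon(t) \approx \epsilon_0 \, e^{\lambda t}, $$

where $\lambda$ is the system's largest Lyapunov exponent. Keeping the error below the same threshold out to $8T$ instead of $T$ therefore means shrinking the initial error $\epsilon_0$ by a factor of about $e^{7\lambda T}$, an exponential price for a linear gain in forecast horizon.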
It’s Something Unpredictable, But in the End It’s Right
This is especially big because so many chaotic systems are so hard to model. For a whole lot of them there's no known equation describing their behavior, and building grand, complex models of them is enormously difficult. But if you can use machine learning to simply measure a system's behavior in chunks and fine-tune your predictions on the fly, that opens up a world of possibilities.
What possibilities? Think weather forecasts, tsunami predictions, earthquake warnings. You might be able to monitor heart rhythm for impending heart attacks and neuron firing patterns for impending seizures. You could even monitor the sun to get advance warning about devastating solar storms. We might be able to combine this new approach with existing modeling techniques to get even better predictions. “What we should do is use the good knowledge that we have where we have it,” Ott told Quanta, “and if we have ignorance we should use the machine learning to fill in the gaps where the ignorance resides.”