Whether we like it or not, we live in a closed world, subject to processes beyond our control, of which we come to know only a small part through experience. At first sight it seems difficult to obtain that experience in a mechanical (or procedural) way.
Something like that must have occurred to those who invented the first audio players, from gramophones to compact disc players. All of them had to deal with the problem of fitting millions of vibrations into a finite physical medium. If we look closely at the human inner ear, the perception of sound is produced by the vibration of tiny hair cells of various sizes housed in the organ of Corti, a structure that runs along the spiral of the cochlea. In fact, our brain does not react directly to the sum of vibrations that make up a sound, but to the intensity registered by each hair cell of the organ of Corti. Research aimed at reducing the size of sound recordings began in the 1960s.
Taking advantage of the way we detect sound, the idea was to identify the intensity patterns of all the frequencies that make up our auditory spectrum and store only the numerical value of each one. The theory was solid. Unfortunately, putting these fundamentals into practice required a real-time sound processor. These processors, called DSPs, would generate the waveform of each frequency and combine them to reconstruct the original sound.
In 1822 Joseph Fourier described a method to simplify continuous functions by changing their domain. The process is known today as the Fourier transform. Thanks to it, any waveform can be carried from the time domain to the frequency domain. Since the human ear is physically limited to a narrow range of frequencies, any sound, which is a sum of frequencies at different amplitudes, can be converted into a sum of sinusoidal terms that is very easy to store and process. A waveform is a rhythmic pattern: a repetitive process with a measurable cadence. If we obtain the value of the cadence and the amplitude, then in addition to a reasonable explanation of past events, we can infer future ones.
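To make the idea concrete, here is a minimal sketch using NumPy's FFT (the sample rate, tones and threshold are illustrative choices of mine, nothing prescribed above): a signal is carried to the frequency domain, everything but its dominant components is discarded, and the inverse transform rebuilds it, which is essentially the DSP's job.

```python
import numpy as np

# Minimal sketch: carry a signal to the frequency domain, keep only its
# dominant components, and rebuild it with the inverse transform.
# Sample rate, tones and threshold are illustrative choices.
fs = 1000                     # samples per second
t = np.arange(0, 1, 1 / fs)  # one second of "sound"

# Time domain: two tones, 50 Hz and 120 Hz, at different amplitudes.
signal = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Frequency domain: one complex number per frequency is all we store.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# Crude "compression": discard every component below 10% of the peak.
compressed = np.where(np.abs(spectrum) > 0.1 * np.abs(spectrum).max(), spectrum, 0)
print("kept frequencies (Hz):", freqs[np.abs(compressed) > 0])

# Back to the time domain: the DSP's job, done here by the inverse FFT.
reconstructed = np.fft.irfft(compressed)
print("max reconstruction error:", np.abs(signal - reconstructed).max())
```

Out of 501 frequency bins, only the two that actually carry the tones survive, and yet the reconstruction error stays near machine precision: that asymmetry is the whole promise of frequency-domain storage.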
When I was younger I liked to “predict” when the disco songs I listened to would change. It was very simple: practically all disco music follows a 4/4 rhythmic pattern, and every four bars there is a change of orchestration. We like to think that we have the gift of innovation, but in reality we are tied to a world of rhythmic patterns that condition our own lives and from which it is very difficult to escape.
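For the curious, the arithmetic behind my party trick, assuming a typical disco tempo of 120 beats per minute (an illustrative figure, not taken from any particular song):

```python
# The arithmetic behind the trick, assuming a typical disco tempo of
# 120 beats per minute (an illustrative figure, not from the text).
bpm = 120
seconds_per_beat = 60 / bpm               # 0.5 s per beat
seconds_per_bar = 4 * seconds_per_beat    # 4/4 time: 2 s per bar
change_period = 4 * seconds_per_bar       # orchestration changes every 4 bars

# Predicted timestamps of the first few orchestration changes.
for k in range(1, 5):
    print(f"change {k} expected at {k * change_period:.0f} s")
```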
The succession of day and night, summer and winter, lunar phases, gastric cycles, circadian rhythms, the rotation of work shifts… everything wraps us in a tangle of repetitive events, each with its own frequency, which with a little observation can provide very useful data.
Think about astronomers: what is an eclipse? It is a chance phenomenon that occurs when several celestial bodies coincide in a certain position, a position that can be accurately predicted with a little math. The main problem is recognizing all the patterns that affect us, because some are so subtle that they go unnoticed.
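As a toy illustration of how far “a little math” can go, the following sketch leans on the Saros cycle, a real astronomical period of roughly 6585.32 days after which eclipses of similar geometry recur. The reference eclipse and the bare date arithmetic are my own choices; real ephemerides are of course vastly more precise.

```python
from datetime import datetime, timedelta

# Toy predictor based on the Saros cycle: eclipses of similar geometry
# recur roughly every 6585.32 days (about 18 years and 11 days). The
# reference is the total solar eclipse of 21 August 2017; this crude
# arithmetic can be off by a day, unlike real ephemerides.
SAROS_DAYS = 6585.3211
reference = datetime(2017, 8, 21)

# Each step of one Saros lands near another eclipse of the same series.
for n in range(1, 4):
    prediction = reference + timedelta(days=n * SAROS_DAYS)
    print(f"eclipse of this series expected around {prediction:%Y-%m-%d}")
```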
Now I am going to talk about another cyclical phenomenon: the movement of subatomic particles within the atom. A movement so complex that it is almost impossible to define the mathematical function that describes each quantum orbital. At least that is what was thought in the days of the Manhattan Project. As human ingenuity would have it, a detour was found around the obstacle. Even assuming that we will never be able to crack the function, we can still predict its behavior using another great theoretical tool, one based precisely on the unpredictable: statistics. Using a very large sample of observational data, the scientists who built the atomic bomb were able to predict the collective behavior of particles whose individual trajectories they could not calculate. They called it the “Monte Carlo method” because it was like deducing the physical composition of a deck of cards from hundreds of poker hands.
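Here is a minimal sketch of that card analogy, with a deck composition of my own arbitrary choosing: the program never inspects the deck, it only tallies thousands of sampled hands, and the estimates converge on the hidden proportions.

```python
import random
from collections import Counter

# Toy version of the card analogy: deduce a deck's composition purely
# from observed hands, never inspecting the deck directly. The deck's
# makeup below is an arbitrary illustrative choice.
hidden_deck = ["hearts"] * 20 + ["spades"] * 12 + ["clubs"] * 8 + ["diamonds"] * 12

counts = Counter()
hands = 10_000
for _ in range(hands):
    counts.update(random.sample(hidden_deck, 5))  # observe one 5-card hand

# With enough hands, the observed frequencies converge on the hidden
# proportions: statistics standing in for knowledge we do not have.
total = 5 * hands
for suit, n in counts.most_common():
    print(f"{suit}: roughly {n / total:.1%} of the deck")
```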
Machine learning algorithms based on neural networks or genetic algorithms follow a process similar to the Monte Carlo method. With them, we officially renounced any attempt to understand which patterns decide our future.
The stroke of genius is to infer that, from a sample of training data about past events, an artificial intelligence system can anticipate future ones. Naturally there will be a lot of irrelevant data that contributes nothing to the investigation and can be discarded to simplify the system. It is a trial-and-error method that ends up establishing good heuristics.
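Here is a minimal sketch of that idea, under assumptions of my own (a noisy sine wave playing the role of “past events”, a 20-sample window, a plain least-squares predictor): the system learns the pattern by minimizing its error on history, then feeds its own predictions back to anticipate the future.

```python
import numpy as np

# Minimal sketch: learn from past samples of a periodic process and use
# the learned pattern to anticipate the next values. The signal, noise
# level and window size are illustrative choices.
rng = np.random.default_rng(0)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(t.size)

# Turn history into (past window -> next value) training pairs.
window = 20
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

# "Trial and error" reduced to minimizing prediction error on the past.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Anticipate the future by feeding predictions back into the window.
history = list(series[-window:])
for _ in range(50):
    history.append(float(np.dot(weights, history[-window:])))

print("next five predicted values:", np.round(history[window:window + 5], 3))
```

A real system would use a neural network and far more data, but the skeleton is the same: past observations in, a fitted pattern out, and predictions extrapolated from it.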
Despite the technical verbiage, predicting the future from the analysis of cyclical patterns is something we do all the time, sometimes without realizing it. Somehow we “know” when the traffic light that stops us every morning on the way to work is going to turn green; we know a downpour is coming as soon as we notice the first signs; we manage to wake up at the exact time without setting an alarm. It is not that we are fortune tellers. We don’t have paranormal powers. We have simply developed a certain sensitivity to some patterns that we manage to decipher, and we use them to anticipate what is going to happen.
And if we can anticipate events, any computer system capable of hosting a machine learning algorithm will be able to anticipate responses to future stimuli after a certain period of training. That is why I know for sure that I will end up with a system capable of turning on the lights in the living room just by thinking about it. To some it will look like telepathy. That is understandable. It is spectacular. And it is within my reach. I just wanted to share it with you.