‘Artificial intelligence may soon predict disasters and pandemics’
Predicting the timing and size of natural disasters is a fundamental goal for scientists. But because such events are statistically so rare, there is seldom enough historical data to forecast them reliably.
According to researchers from Brown University and the Massachusetts Institute of Technology, artificial intelligence now offers techniques to predict them.
In a recent study published in the journal Nature Computational Science, they sidestepped the need for enormous datasets by combining machine learning (an application of AI) with statistical algorithms that need less data to make accurate predictions.
“You have to realise that these are stochastic events,” said study author George Karniadakis, a professor of applied mathematics and engineering at Brown, in a university release.
“An outburst of a pandemic like COVID-19, environmental disaster in the Gulf of Mexico, an earthquake, huge wildfires in California, a 30-metre wave that capsizes a ship — these are rare events and because they are rare, we don’t have a lot of historical data.”
“We don’t have enough samples from the past to predict them further into the future. The question that we tackle in the paper is: What is the best possible data that we can use to minimize the number of data points we need?”
The team discovered that sequential sampling with active learning was the best method.
These algorithms can analyze incoming data and learn from it in order to identify new data points that are equally or even more informative. In other words, more can be accomplished with less data.
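Active learning of this kind can be sketched generically: fit cheap surrogate models to the samples gathered so far, then spend the next expensive evaluation at the point where the surrogates are most uncertain. The sketch below is an illustrative toy, not the authors' method; the simulator, the polynomial surrogates, and every parameter here are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expensive process: a sharply peaked response whose rare
# "event" region (near x = 0.7) we want to find with few evaluations.
def simulator(x):
    return np.exp(-80.0 * (x - 0.7) ** 2)

def fit_surrogates(x, y, degrees=(3, 5, 7)):
    """Fit several polynomial surrogates; their disagreement serves as
    a cheap stand-in for predictive uncertainty."""
    return [np.polyfit(x, y, d) for d in degrees]

def acquire(models, candidates):
    """Choose the candidate point where the surrogates disagree most."""
    preds = np.stack([np.polyval(m, candidates) for m in models])
    return candidates[int(np.argmax(preds.std(axis=0)))]

x_train = rng.uniform(0.0, 1.0, 10)      # small initial sample
y_train = simulator(x_train)
candidates = np.linspace(0.0, 1.0, 201)  # cheap-to-score query grid

for _ in range(8):                       # sequential sampling loop
    models = fit_surrogates(x_train, y_train)
    x_new = acquire(models, candidates)
    x_train = np.append(x_train, x_new)
    y_train = np.append(y_train, simulator(x_new))
```

In the study the surrogate being refined is the operator network itself rather than polynomials, but the loop's shape is the same: each new sample is placed where it is expected to be most informative, rather than drawn blindly.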
The machine learning model they employed is DeepONet, a type of artificial neural network that uses interconnected, stacked nodes to mimic the neuronal connections of the human brain.
This tool combines two neural networks into one, processing data through both. As a result, it can examine enormous amounts of data in a very short time while also generating large volumes of output in response.
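The two-network structure can be shown in a minimal forward pass. In DeepONet, a branch network encodes the input function sampled at fixed sensor points, a trunk network encodes the coordinate where the output is queried, and the two embeddings are combined by a dot product. The sketch below uses random, untrained weights, and its layer sizes and sensor grid are invented for illustration; the published model is trained and far larger:

```python
import numpy as np

rng = np.random.default_rng(1)

def init_mlp(sizes):
    """Random small-scale weights for a fully connected network."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(x, weights):
    """Forward pass with tanh hidden activations and a linear output."""
    h = x
    for W, b in weights[:-1]:
        h = np.tanh(h @ W + b)
    W, b = weights[-1]
    return h @ W + b

n_sensors, width, p = 32, 64, 16  # p = shared embedding dimension

# Branch net: encodes the input function u sampled at fixed sensor points.
branch = init_mlp([n_sensors, width, p])
# Trunk net: encodes the query coordinate y where the output is evaluated.
trunk = init_mlp([1, width, p])

def deeponet(u_sensors, y):
    """G(u)(y) ~ dot(branch(u), trunk(y)): the two sub-networks'
    outputs are merged over the shared p-dimensional embedding."""
    b = mlp_forward(u_sensors, branch)        # shape (p,)
    t = mlp_forward(np.atleast_1d(y), trunk)  # shape (p,)
    return float(b @ t)

# Example: evaluate the (untrained) operator on u(x) = sin(pi x) at y = 0.5.
xs = np.linspace(0.0, 1.0, n_sensors)
u = np.sin(np.pi * xs)
out = deeponet(u, 0.5)
```

The dot-product merge is what lets one pass handle both the function-valued input and the query location, which is why the architecture is described as two networks fused into one.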
By pairing DeepONet with active learning approaches, the researchers showed that they can reliably identify warning signs of a catastrophic event even without a large amount of data.
The goal is not to gather every piece of data and feed it into the system, but to actively search for the occurrences that signal the rare events, Karniadakis explained.
He added that although there may not be many examples of the actual event, its precursors might exist; mathematics can identify them, and together with records of real events they help train this data-hungry operator.
The group also found that their approach can outperform traditional models, and they suggest their framework could set a standard for more accurate forecasts of rare natural events.
They found, for example, that by examining likely conditions over time, they can predict when damaging waves more than twice the size of surrounding waves will form. The team's article also explains how scientists can design future experiments to keep costs down while forecasting even more precisely.