How Big Data Can Help Save the World

2023-04-04 | Editor: jiping
Category: News

Abstract

Our ability to collect data far outpaces our ability to fully utilize it, yet those data may hold the key to solving some of the biggest global challenges we face today. Take, for instance, the frequent outbreaks of waterborne illness that follow war or natural disasters. The most recent example is in Yemen, where roughly 10,000 new suspected cases of cholera are reported each week, and history is riddled with similar stories. What if we could better understand the environmental factors that contribute to the disease, predict which communities are at higher risk, and put protective measures in place to stem its spread? Answers to these questions and others like them could help us avert catastrophe.

Content

We already collect data related to virtually everything, from birth and death rates to crop yields and traffic flows. IBM estimates that each day, 2.5 quintillion bytes of data are generated. To put that in perspective: that’s the equivalent of all the data in the Library of Congress being produced more than 166,000 times per 24-hour period. Yet we don’t really harness the power of all this information. It’s time that changed—and thanks to recent advances in data analytics and computational services, we finally have the tools to do it.
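As a rough sanity check on that equivalence, here is a back-of-the-envelope sketch. The roughly 15-terabyte figure for the Library of Congress's digitized text is a commonly cited estimate, not a number from this article:

```python
# Back-of-the-envelope check of the Library of Congress comparison.
# Assumes the commonly cited ~15 TB estimate for the Library's digitized text (assumption).
daily_bytes = 2.5e18    # 2.5 quintillion bytes generated per day (IBM estimate)
loc_bytes = 15e12       # ~15 terabytes per Library of Congress (assumed)

print(daily_bytes / loc_bytes)  # ~166,667 Library-of-Congress equivalents per day
```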

For example, knowing mosquito incidence in communities would help us predict the risk of mosquito-transmitted diseases such as dengue, the leading cause of illness and death in the tropics. However, mosquito data at a global (or even national) scale are not available.

To address this gap, we're using other sources, such as satellite imagery, climate data and demographic information, to estimate dengue risk. Specifically, we have had success predicting the spread of dengue in Brazil at the regional, state and municipal levels using these data streams, as well as clinical surveillance data and Google search queries containing terms related to the disease. While our predictions aren't perfect, they show promise. Our goal is to combine information from each data stream to further refine our models and improve their predictive power.
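A minimal sketch of how such disparate streams might feed a single predictive model appears below. All feature names and data are hypothetical, and the generic regression stands in for, rather than reproduces, the actual Los Alamos models:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical weekly observations for a set of municipalities. Columns stand in
# for the streams named in the text: vegetation index (satellite), rainfall
# (climate), population density (demographics), last week's reported cases
# (clinical surveillance), and dengue-related search volume (Google queries).
rng = np.random.default_rng(0)
X = rng.random((500, 5))
# Hypothetical target: dengue cases reported the following week.
y = 20 * X[:, 3] + 10 * X[:, 4] + 5 * X[:, 1] + rng.normal(0, 1, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out weeks:", round(model.score(X_test, y_test), 2))
```

Combining the streams in one model lets each compensate for the others' blind spots: satellite and climate data cover areas with no clinics, while surveillance and search data react quickly to actual outbreaks.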

Similarly, to forecast the flu season, we have found that Wikipedia and Google searches can complement clinical data. Because internet searches for flu symptoms often rise just as people begin falling ill, search activity can flag a coming spike in cases while clinical data still lag.
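A minimal sketch of that nowcasting idea follows, assuming clinical reports arrive two weeks late while search volume is available immediately. All data here are synthetic, and the two-week lag is an assumption for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic flu season: a smooth wave of true weekly case counts.
rng = np.random.default_rng(1)
weeks = 100
true_cases = 100 + 50 * np.sin(np.linspace(0, 6 * np.pi, weeks))
search_volume = true_cases + rng.normal(0, 5, weeks)  # searches track cases in near real time
clinical = np.roll(true_cases, 2)                     # clinical reports arrive two weeks late

# Nowcast: estimate this week's cases from delayed clinical data plus current searches.
X = np.column_stack([clinical[2:], search_volume[2:]])
y = true_cases[2:]
model = LinearRegression().fit(X[:-10], y[:-10])      # hold out the most recent 10 weeks
print("Nowcast for the held-out weeks:", model.predict(X[-10:]).round(1))
```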

We’re using these same concepts to expand our research beyond disease prediction to better understand public sentiment. In partnership with the University of California, we’re conducting a three-year study using disparate data streams to understand whether opinions expressed on social media map to opinions expressed in surveys.

For example, in Colombia, we are conducting a study to see whether social media posts about the peace process between the government and FARC, the socialist guerrilla movement, can be ground-truthed with survey data. A University of California, Berkeley researcher is conducting on-the-ground surveys throughout Colombia, including in isolated rural areas, to poll citizens about the peace process. Meanwhile, at Los Alamos, we're analyzing social media data and news sources from the same areas to determine whether they align with the survey data.
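A minimal sketch of the comparison step, assuming posts have already been reduced to a per-region sentiment score; both data series below are synthetic, and in practice the sentiment scores would come from classifying the posts first:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
regions = 30
# Hypothetical per-region survey results: fraction supporting the peace process.
survey_support = rng.uniform(0.3, 0.9, regions)
# Hypothetical mean sentiment of social media posts from the same regions,
# generated here to loosely track the surveys plus noise.
social_sentiment = survey_support + rng.normal(0, 0.08, regions)

r, p = pearsonr(survey_support, social_sentiment)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")  # a high r suggests posts track survey opinion
```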

If we can demonstrate that social media accurately captures a population's sentiment, it could be a more affordable, accessible and timely alternative to what are otherwise expensive and logistically challenging surveys. In the case of disease forecasting, if social media posts do indeed serve as a predictive tool for outbreaks, those data could be used in educational campaigns to alert citizens to an elevated outbreak risk (due to vaccine exemptions, for example) and ultimately reduce that risk by promoting protective behaviors such as washing hands, wearing masks and remaining indoors.

All of this illustrates the potential for big data to solve big problems. Los Alamos and other national laboratories that are home to some of the world's largest supercomputers have the computational power, augmented by machine learning and data analytics, to take this information and shape it into a story that tells us not just about one state or nation, but about the world as a whole. The information is there; now it's time to use it.

Provided by the IKCEST Disaster Risk Reduction Knowledge Service System
