Economic Theory, Juan Pablo Rossi

Some Love for Models

By: Juan Pablo Rossi

In today's data-driven world, we take economic and scientific models for granted, without recognizing the hard work that goes into their development or questioning their assumptions and potential problems. The COVID-19 pandemic, however, gives us an opportunity to take a deep dive into the process of building a model.

Whenever people tune in to news about the COVID-19 pandemic, it is almost impossible for them to avoid projections about the consequences of the virus over the next couple of months. Some of the most popular predictions concern the different scenarios under which the outbreak could spread, indicating that voluntary social distancing and strict government control measures would help limit the number of new cases per day, an effect known as "flattening the curve." Despite the incredible surge in this phrase's popularity since mid-March, many people are unaware of what it really means or where it comes from. To understand it, we need to take a brief pause from all the chaos and think about the intricacies of models. Whether it is the number of students applying to NYU next year, the sales for a movie's opening weekend, or the number of days until your next Amazon package arrives, models of all sorts come into play across industries and walks of life.

Models are theoretical representations of processes that relate an outcome to its possible causes, exploring the quantitative and qualitative relationships between them. They are simplified frameworks of complex, real-world scenarios whose main purpose is to analyze causal effects and predict future events. They are constructed from data using statistical and mathematical procedures, guided by a theoretical expectation or hypothesis about the relationship between the outcome and its causes. All of them rely on sets of assumptions that simplify the inner mechanics of the model while still remaining grounded in reality.
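As a minimal illustration of "constructing a model from data," the sketch below fits a straight line by least squares to a handful of made-up data points, under the hypothesis that the outcome grows roughly linearly with its cause:

```python
# Minimal sketch of model construction: a least-squares line fit.
# The data points are invented purely for illustration.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # hypothesis: y grows linearly with x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope = covariance of (x, y) divided by variance of x
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
print(round(slope, 2), round(intercept, 2))  # → 1.99 0.09
```

The fitted slope (about 2) is the model's quantitative claim about how the outcome responds to its cause; everything else about the real process has been assumed away.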

For example, in the case of COVID-19, models relate the rate of infection to how long someone is infectious, the average number of people they come into contact with each day, the person's age, and so on. Models like these are built from data on previous patients and theoretical expectations about their social behavior, leading to conclusions about the most effective ways of dealing with the pandemic.
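A minimal sketch of such a model, assuming a standard SIR (Susceptible-Infectious-Recovered) structure with illustrative (not fitted) parameter values, shows how reducing the daily transmission rate "flattens" the infection curve:

```python
# Toy SIR epidemic model. All parameter values are illustrative
# assumptions, not estimates for COVID-19.

def simulate_sir(population, beta, gamma, days, initial_infected=1):
    """beta: avg. transmissions per infectious person per day;
       gamma: recovery rate (1 / infectious period in days)."""
    s, i, r = population - initial_infected, initial_infected, 0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append(i)  # track currently infectious people
    return history

# Halving beta (e.g. via social distancing) lowers the peak:
baseline = simulate_sir(1_000_000, beta=0.5, gamma=0.1, days=200)
distanced = simulate_sir(1_000_000, beta=0.25, gamma=0.1, days=200)
print(max(baseline) > max(distanced))  # → True
```

The "curve" being flattened is exactly `history` here: the number of simultaneously infectious people over time, whose peak determines the load on hospitals.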

Economists use models in every field to predict how different agents will behave in response to changes in the status quo of the economy. Important economic decisions that affect millions of people are often made after consulting many models of future outcomes and expectations. Every change the Fed makes to its interest rate policy is backed by models of the response to such measures, predicting consumer behavior regarding saving, investment, and consumption. However, even though economic models are common and used by many professionals, there are careful considerations everyone should take into account before accepting a model's results as the truth, since many models can be inaccurate, misleading, and even biased.

Since models are constructed primarily from data, the collection process should be subjected to the utmost scrutiny. Data can suffer from problems of scope, as it should usually represent the whole population rather than samples drawn from narrow categories. For example, current figures about the COVID-19 spread in the US have come under heavy criticism because a lack of testing has left many cases unreported: every patient with mild symptoms who was never tested represents an error in the data feeding these models. Another problem, surprisingly more common than it should be, concerns data integrity. Data should be recorded accurately, in a way that minimizes the chance of it losing consistency over time. One famous example of poor record-keeping that compromised data integrity was the infamous "Y2K" bug: years were stored as two characters instead of four, so the arrival of the new millennium made the stored values ambiguous. A further issue in data collection concerns privacy and anonymity, as many national research bureaus require that models be constructed without compromising anyone's personal information. As 2018's Cambridge Analytica case showed us, however, this remains a problem that should be corrected.
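The Y2K ambiguity can be demonstrated in a few lines: a two-digit year field simply cannot distinguish centuries, so parsers must guess (Python's `strptime`, for instance, follows the POSIX convention of mapping 00-68 to 2000-2068):

```python
from datetime import datetime

# A two-digit year field cannot distinguish 1900 from 2000: the root
# of the Y2K problem. Python's %y directive guesses a century for us,
# mapping 00-68 to 2000-2068 per the POSIX convention.
record = "01/01/00"  # was this 1900 or 2000? The data cannot say.
parsed = datetime.strptime(record, "%d/%m/%y")
print(parsed.year)  # → 2000
```

Any model fed such records silently inherits whatever century the parser guessed, which is exactly the kind of integrity failure the paragraph above describes.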

Furthermore, models are not only at risk of being compromised by the data they are generated with; they can also suffer from bias and inaccuracies during their construction. Researchers are keen to prove that their theories are correct and are therefore susceptible to confirmation bias, since variables can be added or removed at the researcher's discretion. Usually these mistakes happen without malicious intent, as researchers justify the inclusion or omission of certain variables as a way of improving the model and its outcomes.

To avoid many of these problems, social and natural scientists carry out randomized controlled trials (RCTs), which compare the outcomes of two randomized samples from the same or similar populations, where one sample receives a specific "treatment" and the other (the "control" sample) does not. The treatment should then be the only factor causing a significant difference in the observed outcomes, which makes RCTs very good at establishing causal relationships and building effective models. The problem is that RCTs are very expensive to run, so they are not as common as they should be.
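The logic of an RCT can be sketched with simulated data; the "+5" treatment effect and the noise level below are assumptions chosen purely for illustration:

```python
import random
import statistics

# Minimal RCT sketch: randomly split a sample, apply a hypothetical
# treatment effect to one group, and compare group means. The effect
# size (+5) and the noise (Gaussian, sd=15) are illustrative assumptions.
random.seed(42)

population = [random.gauss(100, 15) for _ in range(2000)]
random.shuffle(population)                    # randomization step
control = population[:1000]
treatment = [x + 5 for x in population[1000:]]  # assumed +5 effect

diff = statistics.mean(treatment) - statistics.mean(control)
print(round(diff, 1))  # close to the true effect of 5
```

Because assignment is random, the two groups are balanced on everything else, so the difference in means is an unbiased estimate of the treatment's causal effect, which is exactly what observational data struggles to deliver.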

Despite all these shortcomings, natural and social scientists manage to construct many accurate models under high-pressure scenarios, as is currently being done for the COVID-19 pandemic. These models must be constantly updated and reviewed to improve their outcomes, with hundreds of people working on their development. Models are certainly the easiest way of understanding and predicting the future, as long as their mechanisms and implications are made transparent. We need to recognize that not all models are accurate, and some may even deceive us into believing a situation is better or worse than it really is. As long as we think about them critically and do not take their results at face value, they remain a great tool for preparing for the future.

Works Cited


Stevens, Harry (2020, March). "Why outbreaks like coronavirus spread exponentially, and how to 'flatten the curve'." The Washington Post.

Google Trends search for the term "Flattening of the Curve."

Begley, Sharon (2020, March). "Coronavirus model shows individual hospitals what to expect in the coming weeks." STAT News.

Glanz, James, et al. (2020, March). "Coronavirus Could Overwhelm U.S. Without Urgent Action, Estimates Say." The New York Times.

Hsu, Jeremy (2020, February). "Here's How Computer Models Simulate the Future Spread of New Coronavirus." Scientific American.

Greenfield, Nell (2020, March). "How Computer Modeling Of COVID-19's Spread Could Help Fight The Virus." NPR.

Zenghelis, Dimitri (2014, May). "What do economic models tell us?" The London School of Economics and Political Science.

Ouliaris, Sam (2011, June). "What Are Economic Models?" The International Monetary Fund.
