Scientific modeling has been thrust back into the spotlight due to the COVID-19 pandemic, with some questioning the results and validity of such modeling. The construction of valid and accurate models applies broadly to all scientific and engineering fields. The article below discusses some of the processes that feed a disease model and the limitations of such models:
The value of a model depends on two basic factors: the quality of the data fed into it, and its range of validity. Poor quality data produces an inaccurate model, with large confidence intervals and high uncertainty. Reporting single values without these caveats paints an incomplete picture of the model's accuracy and leads to over- or under-confidence in the underlying science. This is where we find ourselves in the present environment: incomplete data sets are feeding models that are being continuously updated, while outsiders anchor on those singular reported values without understanding the assumptions that produced them. Poor data does not render a model useless - identifying trends can be more valuable to decision makers than extremely accurate predictions. We see this in applying preventative measures to "flatten the curve", where trend data is invaluable in the decisions to implement or rescind public quarantines.
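To make that concrete, here is a minimal sketch (illustrative only, not taken from the article) of how data quality drives the width of a confidence interval: the same exponential-growth model is fit to a clean and a noisy synthetic data set, and the uncertainty on the estimated growth rate is reported alongside the point value. All numbers and names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_growth_rate(t, cases):
    # Fit log(cases) = r*t + b; return the growth rate r and a 95% half-width on r.
    coeffs, cov = np.polyfit(t, np.log(cases), deg=1, cov=True)
    return coeffs[0], 1.96 * np.sqrt(cov[0, 0])

t = np.arange(30)                       # 30 days of observations
true_cases = 100.0 * np.exp(0.15 * t)   # "true" epidemic growing at r = 0.15 per day

for noise in (0.05, 0.50):              # clean reporting vs. noisy/incomplete reporting
    observed = true_cases * rng.lognormal(mean=0.0, sigma=noise, size=t.size)
    r, half_width = fit_growth_rate(t, observed)
    print(f"reporting noise {noise:.2f}: r = {r:.3f} +/- {half_width:.3f} per day")
```

The point estimate alone looks equally authoritative in both cases; only the interval reveals how different the data quality is.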
The range of validity is of equal importance when evaluating a model. Just as you wouldn't expect a racing yacht to win a NASCAR race, models can only be used within their intended environment. The more accurate and precise a model, the more limited its validity in other situations. Applying a model outside of its design environment - even applying a viral model from one country to another - can completely invalidate the results.
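One way to make the range of validity explicit, sketched below with assumed names and data, is to have a fitted model record the input range it was calibrated on and refuse to extrapolate silently beyond it.

```python
import numpy as np

class CalibratedModel:
    """A curve fit that remembers the input range it was calibrated on."""

    def __init__(self, t, y, deg=2):
        self.t_min, self.t_max = float(np.min(t)), float(np.max(t))
        self.coeffs = np.polyfit(t, y, deg)

    def predict(self, t_new):
        t_new = np.asarray(t_new, dtype=float)
        if np.any((t_new < self.t_min) | (t_new > self.t_max)):
            raise ValueError(
                f"prediction requested outside the calibrated range "
                f"[{self.t_min}, {self.t_max}]; results would be unvalidated"
            )
        return np.polyval(self.coeffs, t_new)

# Calibrated on days 0-20; asking about day 45 is outside the design environment.
t = np.arange(21)
y = 100 + 5 * t + np.random.default_rng(1).normal(0, 2, size=t.size)
model = CalibratedModel(t, y)
print(model.predict([5, 10, 20]))   # interpolation within the fitted range: fine
# model.predict(45)                 # raises ValueError: outside the range of validity
```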
This philosophy applies to all models of a scientific nature. It is impossible to experimentally determine the exact result in every possible scenario; scientific models fill that gap by offering "good enough" results. It is important to recognize that the quality of the inputs and the design environment are both key variables in determining a model's utility.
Keep this in mind when developing and applying models and analyses to your test programs: understanding a model's blind spots dictates where physical flight test is needed. Your test points should be at the limits of your modeling, so that you expand the envelope and gain confidence in the analytical predictions within it. Don't forget to continuously update those models as new data becomes available. Inconsistencies are usually a model limitation, not erroneous data!
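As a rough illustration of that continuous updating (a sketch under assumed values, not a prescribed method), a simple conjugate Bayesian update folds each new batch of test data into the prior analytical prediction, narrowing the uncertainty with every batch.

```python
import numpy as np

def update(prior_mean, prior_var, measurements, meas_var):
    # Conjugate normal-normal update: combine the prior with a batch of noisy measurements.
    n = len(measurements)
    post_var = 1.0 / (1.0 / prior_var + n / meas_var)
    post_mean = post_var * (prior_mean / prior_var + n * np.mean(measurements) / meas_var)
    return post_mean, post_var

# Start from a pre-test analytical prediction, then fold in successive flight-test batches.
mean, var = 2.0, 0.5**2                    # prior: predicted value 2.0 with 1-sigma of 0.5
rng = np.random.default_rng(2)
for batch in range(1, 4):
    data = rng.normal(2.3, 0.2, size=5)    # the "truth" differs slightly from the prediction
    mean, var = update(mean, var, data, meas_var=0.2**2)
    print(f"after batch {batch}: estimate = {mean:.3f} +/- {np.sqrt(var):.3f}")
```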