Press "Enter" to skip to content

AI: an applied mathematics perspective

In the last week of Fall 2023 classes, I attended a workshop-style conference titled “Mathematical Opportunities in Digital Twins,” abbreviated MATH-DT and co-organized by my PhD advisor, Professor Kathrin Smetana. The conference brought together people from many backgrounds in academia and industry to share perspectives and identify challenges in using the concept of a “digital twin” in scientific, engineering, medical, and civil applications.

What many of the speakers made clear in their talks was that the term “digital twin” still doesn’t have a generally agreed-upon definition, which is an initial challenge in a mathematical context: an ill-posed definition in math quickly leads to confusion and contradiction. Despite this, the consensus (at least at the conference) was that a digital twin is a computer model of a physical counterpart in the real world (a bridge constructed over a river, say, or a medical patient, or a set of cars in traffic on city roads) that allows for a continuous feedback loop between the model and the human user.

This is still a little vague; you might ask, “what do you mean by a continuous feedback loop?” The idea is that the human user inputs new data they receive from the physical system (stresses on parts of the bridge collected by sensors, a new checkup with the patient, or a car accident at a certain intersection), which the digital twin interprets, updating the model accordingly and suggesting to the user what to make of the new data. The suggestion could be: “this part of the bridge may be starting to wear down; do some construction on it,” or “this patient’s cancer is in remission; start to dial back the chemo treatment.”
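
To make that loop a bit more concrete, here is a minimal sketch in Python. Everything in it is hypothetical and for illustration only: the BridgeTwin class, its assimilate and suggest methods, and the wear threshold are names I made up, not part of any real digital-twin library.

```python
# A minimal, hypothetical sketch of the feedback loop described above.
# BridgeTwin, assimilate, suggest, and wear_threshold are illustrative names,
# not a real digital-twin API.

from dataclasses import dataclass, field
from statistics import mean


@dataclass
class BridgeTwin:
    """Toy digital twin of a bridge that tracks strain readings per sensor."""
    history: dict = field(default_factory=dict)  # sensor id -> list of readings
    wear_threshold: float = 1.2  # assumed ratio of recent-to-baseline strain

    def assimilate(self, sensor_id: str, reading: float) -> None:
        """Step 1: the user feeds in new data from the physical system."""
        self.history.setdefault(sensor_id, []).append(reading)

    def suggest(self, sensor_id: str) -> str:
        """Step 2: the twin interprets its updated state and advises the user."""
        readings = self.history.get(sensor_id, [])
        if len(readings) < 4:
            return f"{sensor_id}: not enough data yet; keep collecting."
        baseline = mean(readings[: len(readings) // 2])
        recent = mean(readings[len(readings) // 2 :])
        if recent > self.wear_threshold * baseline:
            return f"{sensor_id}: strain is trending up; inspect this part of the bridge."
        return f"{sensor_id}: readings look stable; no action needed."


# Continuous feedback loop: new sensor data in, suggestions out.
twin = BridgeTwin()
for strain in [1.0, 1.1, 1.0, 1.4, 1.5, 1.6]:
    twin.assimilate("girder-3", strain)
print(twin.suggest("girder-3"))
```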

Again, the consensus is that some AI or machine learning capability in the digital twin does the interpreting and suggesting. But of course, for these AI models to make accurate suggestions, they first need to be trained on large amounts of data. The challenge discussed at MATH-DT is that for many applications (the healthcare and traffic ones specifically) there still isn’t enough collected data to successfully train the AI, or, in the case of the bridge, more data collection is often cost-prohibitive (those sensors ain’t cheap!).

This struck me as a fascinating problem. After all, humans produced about 120 zettabytes of data (a 12 with 22 zeros after it, in bytes) in 2023, and we’re expected to produce 181 zettabytes (roughly a 50% increase) in 2024! However, much of this may not be applicable to the digital twin model at hand. Meanwhile, we have rapidly enhanced AI’s ability to mimic human-created data, an ability we often think of as silly at best and harmful at worst.

But in the digital twin context, such mimicking might be essential. If AIs can produce new data that mimics real-life bridges, medical images, or traffic accidents with an authenticity close to that of their real-life counterparts, we may still be able to build an effective digital twin for such applications.
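
As a toy illustration of that idea, here is roughly what augmenting scarce measurements with synthetic ones could look like. The numbers are made up, and the “generative model” is deliberately the simplest one possible (fitting a mean and spread to a handful of readings); real generative models for bridges, medical images, or traffic are far more sophisticated.

```python
# Toy sketch: when real measurements are scarce, fit a simple generative
# model to what we have and sample synthetic look-alikes to augment it.

import numpy as np

rng = np.random.default_rng(0)

# Pretend these are the only strain measurements we could afford to collect.
real_readings = np.array([1.02, 0.98, 1.05, 1.10, 0.95])

# "Train" a minimal generative model: estimate the mean and spread.
mu, sigma = real_readings.mean(), real_readings.std(ddof=1)

# Generate synthetic readings that mimic the real ones.
synthetic_readings = rng.normal(mu, sigma, size=100)

# Augment the scarce real data with the synthetic data for downstream training.
training_set = np.concatenate([real_readings, synthetic_readings])
print(f"real: {len(real_readings)}, synthetic: {len(synthetic_readings)}, "
      f"total for training: {len(training_set)}")
```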

Moreover, I think that in such settings we’d see practitioners care more about the negative effects of AI, like the ones we’ve seen recently with deepfakes. Engineering firms, hospitals (in the US, at least), and medical companies want to make money, but unlike AI companies right now, they are much more tightly bound by government regulations on public safety. As a result, a best-practices framework for AI in the digital twin context may lead to a more general consensus on how to effectively regulate AI, or at least to do a better job of flagging harmful or false content generated by it.

As with any powerful new technology, it’s crucial to strike a balance between working to advance it and working to reduce the harm it can cause to people and the Earth. AI is having a heyday on the advancing side, but digital twins may provide one route not only to better understand the math behind machine learning, but also to swing the technological pendulum back in a safer direction.