There is an explosion of data, but how can it lead to effective decision-making, especially for high-consequence decisions?
We can't simply gather more data: data cannot always be easily acquired, and more data is not always what is required.
Multiscale and multiphysics behaviour make modelling difficult, as does high dimensionality.
Both the data and the models are imperfect – we need to indicate the uncertainty of decisions based on such imperfect data.
The models are not just AI models; here they are physical models, drawing on computational science and engineering.
Decision models require predictive power, interpretability and the use of domain knowledge, which points to computational science.
Uncertainty quantification is now a big and important field – related to measurement, model and computational imperfections.
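As a minimal sketch of the measurement-imperfection side of UQ (my illustration, not from the talk): propagate sensor noise through a model by Monte Carlo sampling and report a mean and spread, so a decision can carry an uncertainty estimate. The model and noise level here are hypothetical.

```python
import random
import statistics

def model(x):
    # Hypothetical nonlinear physical model.
    return x ** 2 + 0.5 * x

def propagate(x_measured, noise_std, n_samples=10_000, seed=0):
    # Sample plausible true inputs around the noisy measurement,
    # push each through the model, and summarise the output spread.
    rng = random.Random(seed)
    outputs = [model(x_measured + rng.gauss(0.0, noise_std))
               for _ in range(n_samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)

mean, std = propagate(x_measured=2.0, noise_std=0.1)
```

The same pattern extends to model-form and computational error terms; Monte Carlo is just the simplest (and most expensive) propagation method.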
Can’t just apply machine learning naively.
Digital twins to drive decisions: a digital twin models a real vehicle over its lifetime, ingesting sensor data, etc. Vehicles become self-aware by interfacing with their digital twins.
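A toy sketch of the twin idea (hypothetical names and a crude filter stand-in, not the speaker's implementation): a twin holds an estimated state for one vehicle and blends in each new sensor reading, so decisions query the twin's up-to-date estimate rather than stale design-time assumptions.

```python
class DigitalTwin:
    """Tracks one vehicle's estimated condition over its lifetime."""

    def __init__(self, initial_state, gain=0.3):
        self.state = initial_state   # e.g. estimated structural health
        self.gain = gain             # weight given to each new reading

    def ingest(self, sensor_reading):
        # Blend prediction with observation (a stand-in for a real
        # data-assimilation filter, e.g. Kalman-style updates).
        self.state += self.gain * (sensor_reading - self.state)

    def predict(self):
        return self.state

twin = DigitalTwin(initial_state=1.0)
for reading in [0.95, 0.9, 0.88]:   # simulated degradation readings
    twin.ingest(reading)
```

In practice the state update would come from a physics-based model plus formal data assimilation, which is where the model-reduction work below matters.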
Projection-based model reduction. Need faster models to execute to allow digital twins, etc., to be more responsive. Needs to have enough of the behaviour:
- Solve PDEs for many scenarios, generating training data
- Use training data to identify structure, e.g. areas where the behaviour is linear, etc. [I will think more on this related to things I’ve done to see if there are other insights. Is this what some of the ML modules in materials science/chemistry applications are now using?]
- Project PDE model onto a low-dimensional subspace.
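The three steps above can be sketched numerically (an assumed POD/Galerkin workflow, not necessarily the speaker's exact method): collect simulation snapshots, extract a low-dimensional basis via the SVD, and project the full operator onto that basis.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) "Training data": 50 snapshots of a 100-dim state that actually
#    lives in a 3-dim subspace, plus small noise.
latent = rng.standard_normal((3, 50))
lift = rng.standard_normal((100, 3))
snapshots = lift @ latent + 1e-6 * rng.standard_normal((100, 50))

# 2) Identify structure: the POD basis is the leading left singular
#    vectors; the singular-value decay reveals the effective dimension.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :3]                      # reduced basis, 100 -> 3

# 3) Galerkin-project a full linear operator A onto the subspace.
A = rng.standard_normal((100, 100))
A_r = V.T @ A @ V                 # 3x3 reduced operator

# For states in the span of V, the reduced operator reproduces the
# full operator's action in reduced coordinates.
x = V @ rng.standard_normal(3)
```

The reduced model is cheap enough to run in the loop of a responsive digital twin; nonlinear PDEs need extra machinery (e.g. hyper-reduction), which this linear sketch omits.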
A complex non-linear system is more than the sum of its pieces: there is no linear superposition. Handling both local and global behaviour in this context is difficult.
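A one-line numeric check (my illustration) of why superposition fails: for a nonlinear map, the response to a combined input is not the sum of the individual responses.

```python
def f(x):
    # Simplest possible nonlinearity.
    return x ** 2

a, b = 1.0, 2.0
lhs = f(a + b)       # response to the combined input
rhs = f(a) + f(b)    # sum of the individual responses
```

This is why the linear-structure identification in the reduction step above only applies in regions where the behaviour is (locally) linear.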
Use of classification trees using simulation data.
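A minimal sketch of that idea (my illustration; a single-split decision stump stands in for a full classification tree): run cheap simulations, label each outcome, and fit a classifier so that new scenarios can be triaged without re-running the full model. The `simulate` function and labels are hypothetical.

```python
def simulate(load):
    # Hypothetical simulation: the structure fails above a critical load.
    return "unsafe" if load > 0.7 else "safe"

# Training data generated from simulations.
loads = [i / 20 for i in range(21)]
labels = [simulate(x) for x in loads]

def fit_stump(xs, ys):
    # Choose the split threshold that best separates the two labels
    # (a real tree would recurse on multiple features).
    best = (None, -1)
    for t in xs:
        correct = sum((x > t) == (y == "unsafe") for x, y in zip(xs, ys))
        if correct > best[1]:
            best = (t, correct)
    return best[0]

threshold = fit_stump(loads, labels)
classify = lambda load: "unsafe" if load > threshold else "safe"
```

The appeal for decision-making is interpretability: the learned thresholds can be read off and checked against domain knowledge.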
Questions were missed as the session over-ran.