Semantic Modelling: The Secret to Scalable, Modular Digital Twins

What is semantic modelling?

Semantic modelling is the process of gathering all the relevant data and modelling tools for a small-scale mathematical or ML/AI model in one place, then building that small-scale model so it can be used in conjunction with similar models to form a larger-scale model of a whole system.

This modular approach to modelling reduces development time, eases scaling, and keeps models isolatable. Smaller models are simpler and therefore easier to develop, and external processes which affect the modelled system can be treated as inputs, so a model can produce results with or without other models supplying those inputs. A return on investment is therefore realised as soon as the first model is developed and deployed. The ability to deploy one model, then develop another to work alongside it and deploy that at a later stage, is what makes semantic modelling scalable: hundreds or thousands of individual models can run simultaneously, interact with each other, and build a cohesive whole from individually coherent parts. Individual models can also be turned off if there is a problem with the model or its data collection, making the larger model very resilient, with no single points of failure.
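To make this concrete, here is a minimal sketch of the idea in Python. All class, signal and model names here are hypothetical illustrations, not taken from the whitepaper: each small model consumes named inputs and publishes named outputs, the larger system model is just these units wired together, and any signal a disabled (or not-yet-built) model would have produced can instead be supplied as an external input.

```python
from typing import Callable, Dict

class SemanticModel:
    """A small, self-contained model: named inputs in, named outputs out."""

    def __init__(self, name: str, inputs: list[str], outputs: list[str],
                 fn: Callable[[Dict[str, float]], Dict[str, float]]):
        self.name, self.inputs, self.outputs, self.fn = name, inputs, outputs, fn
        self.enabled = True  # a faulty model can be switched off

    def run(self, values: Dict[str, float]) -> Dict[str, float]:
        return self.fn({k: values[k] for k in self.inputs}) if self.enabled else {}

def run_system(models: list[SemanticModel], external: Dict[str, float]) -> Dict[str, float]:
    """Compose small models into a larger one. Signals no model provides
    (because it is disabled or not yet deployed) fall back to external inputs."""
    values = dict(external)
    for m in models:
        values.update(m.run(values))
    return values

# Two hypothetical models: one estimates motor temperature from load,
# the next predicts remaining bearing life from that temperature.
thermal = SemanticModel("thermal", ["load"], ["motor_temp"],
                        lambda v: {"motor_temp": 25 + 0.6 * v["load"]})
bearing = SemanticModel("bearing", ["motor_temp"], ["bearing_life_h"],
                        lambda v: {"bearing_life_h": 10_000 - 40 * v["motor_temp"]})

print(run_system([thermal, bearing], {"load": 80.0}))
# The bearing model still runs if "thermal" is disabled, provided an
# externally measured "motor_temp" is supplied in its place.
```

Note how the bearing model neither knows nor cares whether its input came from the thermal model or from a sensor reading: that indifference is what lets each model deliver value on its own.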

Why should my live digital twin be a semantic model?

Whilst not an essential condition for a live digital twin, the semantic modelling approach offers many benefits, as outlined above, which apply especially well to live digital twins. Given the large investment required in both physical infrastructure (sensors, etc.) and digital infrastructure for a live digital twin, showing rapid ROI makes the case for developing such a system even stronger. The scalable and compartmentalised approach to modelling is especially important in a live setting, where it is not possible to perform manual checks on data, and where problems in machines, models or data have an immediate impact.

Whilst a semantic model has these innate characteristics, a suitable infrastructure must be in place which allows models to be stopped and their outputs replaced without affecting other models. This kind of reactive, customisable and flexible architecture is exactly what an intelligent data pipeline offers: it can be configured so that data flows are redirected away from an affected model, or towards a newly deployed one, with a single click.
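As a minimal sketch of that redirection idea, consider a simple in-process router. Real intelligent data pipelines are typically built on message brokers and microservices rather than a single class, and every name below is an illustrative assumption:

```python
class DataPipeline:
    """Routes data from sources to destinations; routes can be swapped at runtime."""

    def __init__(self):
        self.routes: dict[str, list] = {}  # topic -> list of consumer callables

    def subscribe(self, topic: str, consumer) -> None:
        self.routes.setdefault(topic, []).append(consumer)

    def redirect(self, topic: str, old_consumer, new_consumer) -> None:
        """The 'single click': detach a faulty model, attach its replacement."""
        consumers = self.routes.get(topic, [])
        self.routes[topic] = [new_consumer if c is old_consumer else c for c in consumers]

    def publish(self, topic: str, value: float) -> None:
        for consumer in self.routes.get(topic, []):
            consumer(value)

pipeline = DataPipeline()
model_v1 = lambda v: print(f"model v1 received {v}")
model_v2 = lambda v: print(f"model v2 received {v}")

pipeline.subscribe("plant/line1/motor/temp", model_v1)
pipeline.publish("plant/line1/motor/temp", 71.3)   # flows to v1
pipeline.redirect("plant/line1/motor/temp", model_v1, model_v2)
pipeline.publish("plant/line1/motor/temp", 71.5)   # now flows to v2 only
```

Because producers and consumers only share topic names, neither side needs to change when a model is swapped out; the rerouting lives entirely in the pipeline layer.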

Want to learn more about semantic modelling for live digital twins? Check out the full whitepaper here.

Key Definitions:

  • Digital twin: A virtual representation of what a complex real-world system was doing.
  • Live digital twin: A virtual representation of what a complex real-world system is doing.
  • Intelligent data pipeline: A microservice-based system allowing for the transfer, processing and buffering of data from many different sources to many possible destinations.
  • Condition monitoring: The process of using data gathered from sensors on a machine to predict failures or schedule maintenance.
  • Unified namespace: A system for determining the conventions used to name data points across an enterprise in a logical, scalable and understandable manner (see the sketch after this list).
  • Semantic model: A modular approach to building a large system model out of stand-alone smaller models which interact with each other to model the whole system.
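As an illustration of the unified-namespace idea, one common convention (assumed here for the sake of example, not prescribed by the whitepaper) is a hierarchical path from the enterprise down to the individual data point; the "acme" enterprise name and the field names are hypothetical:

```python
# A hypothetical unified-namespace convention: enterprise/site/area/line/asset/signal.
def topic(site: str, area: str, line: str, asset: str, signal: str,
          enterprise: str = "acme") -> str:
    """Build a consistent, hierarchical name for a data point."""
    parts = [enterprise, site, area, line, asset, signal]
    return "/".join(p.strip().lower().replace(" ", "_") for p in parts)

print(topic("Leeds", "Packaging", "Line 2", "Motor 7", "Bearing Temp"))
# -> acme/leeds/packaging/line_2/motor_7/bearing_temp
```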