
The Information Digital Twin (IDT)

The role of the IDT is to learn the behavior and preferences of its user, enabling it to predict the user's surroundings and identify the best course of action for achieving the user's objectives.


Digital twins (DTs) hold enormous promise for automating and optimizing assets, processes, and complex operations. However, developing a DT remains a challenging and costly task in many cases. The core difficulty lies in defining and updating the model the DT uses to capture the relationships among the thousands of parameters required to represent a supply chain network, a production line, or a human patient.


Information Digital Twin (IDT) Architecture 

To model and automate agent-environment interactions, the Information Digital Twin relies on three major components. A representation component holds the model of the agent's interactions with its environment. A learning component learns and updates that model according to the agent's specific characteristics and behavior. A monitoring and control component evaluates the agent's behavior and interacts with it, either by sending recommendations or by taking over some of the actions.
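The sketch below illustrates one possible way to structure these three components in Python; the class and method names are illustrative assumptions, not the actual IDT implementation.

```python
# A minimal structural sketch of the three IDT components described above.
# All class and method names are illustrative assumptions, not an actual API.

class RepresentationComponent:
    """Holds the model of the agent-environment interactions (the IHM)."""
    def __init__(self):
        self.information_values = {}   # e.g. (parameter, objective) -> bits

    def update(self, new_values):
        self.information_values.update(new_values)


class LearningComponent:
    """Learns and updates the representation from the agent's observed behavior."""
    def __init__(self, representation):
        self.representation = representation

    def learn(self, observations):
        # Estimation logic is omitted in this sketch; it would derive updated
        # information values from the observations and push them to the model.
        self.representation.update(self._estimate(observations))

    def _estimate(self, observations):
        return {}


class MonitoringControlComponent:
    """Evaluates the agent's behavior and either recommends or takes actions."""
    def __init__(self, representation, autonomous=False):
        self.representation = representation
        self.autonomous = autonomous

    def step(self, observed_state):
        action = self._best_action(observed_state)
        return ("execute", action) if self.autonomous else ("recommend", action)

    def _best_action(self, observed_state):
        return None   # decision logic is covered in the RL section below
```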

The Information Digital Twin Representation Component: The Information Heat Map (IHM)

Figure: Reference events model

The Information Heat Map (IHM) is the representation component of the IDT, and it serves a single purpose: it captures the level of dependency, in bits, between the various scenario parameters and how each parameter impacts a specific scenario objective. That is, the IHM enables the IDT to quantify the degree of dependency between all scenario parameters and to calculate how changes in one or more parameters propagate across the entire scenario and affect the selected scenario objectives.
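As a rough illustration of what "dependency in bits" means, the sketch below estimates mutual information between discretized scenario parameters with plain NumPy; the parameter names and the synthetic data are assumptions made for the example, not output of the actual IHM.

```python
import numpy as np

def mutual_information_bits(x, y, bins=8):
    """Estimate the mutual information, in bits, between two parameter series
    by discretizing them into a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Synthetic scenario parameters (hypothetical), including one objective/risk.
rng = np.random.default_rng(0)
heart_rate = rng.normal(80, 10, 1000)
med_dose = rng.normal(5, 1, 1000)
risk_score = 0.7 * heart_rate + rng.normal(0, 5, 1000)
params = {"heart_rate": heart_rate, "med_dose": med_dose, "risk_score": risk_score}

# The "heat map": dependency, in bits, between every pair of parameters.
ihm = {(a, b): mutual_information_bits(params[a], params[b])
       for a in params for b in params if a != b}
print(ihm[("heart_rate", "risk_score")])  # bits of dependency toward the objective
```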

The IHM parametrizes a process or scenario. The parameters are classified as input, context, action, and output. In the example above, an IHM is defined to manage the risks associated with an ICU patient.
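For concreteness, a parameter classification for such an ICU scenario might look like the following; the parameter names and their grouping are illustrative assumptions, not a clinical specification.

```python
# Hypothetical parameter classification for an ICU-patient IHM.
icu_ihm_parameters = {
    "input":   ["admission_diagnosis", "lab_results", "medication_orders"],
    "context": ["age", "comorbidities", "time_since_admission"],
    "action":  ["dose_adjustment", "ventilator_setting", "check_frequency"],
    "output":  ["heart_rate", "blood_pressure", "oxygen_saturation", "sepsis_risk"],
}
```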


For an IHM representing a patient, the input, context, and action parameters provide information about the medications and the patient-specific conditions and history. In contrast, the output parameters provide information about the various vitals and conditions observed for the patient.
Using inference from historical data, the IHM then calculates how well each parameter predicts specific risks. Each patient is provided with their own Patient-Human Digital Twin, or Patient-HDT, which relies on the information values from the IHM to predict the onset of any of the risks within the IHM's scope.
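One simple way a Patient-HDT could combine these information values is to weight each observed parameter's deviation from its baseline by the bits it carries about a given risk; the numbers, parameter names, and scoring rule below are assumptions for illustration only.

```python
# Hypothetical IHM information values (bits): how well each observed parameter
# predicts the onset of a specific risk for this patient.
info_bits = {"heart_rate": 0.42, "blood_pressure": 0.31, "oxygen_saturation": 0.18}

def risk_alert(observations, baselines, info_bits, threshold=0.25):
    """Weight each parameter's relative deviation from baseline by its
    information value and flag the risk when the combined score is high."""
    total_bits = sum(info_bits.values())
    score = sum((info_bits[p] / total_bits)
                * abs(observations[p] - baselines[p]) / baselines[p]
                for p in info_bits)
    return score > threshold, score

alert, score = risk_alert(
    observations={"heart_rate": 118, "blood_pressure": 92, "oxygen_saturation": 90},
    baselines={"heart_rate": 80, "blood_pressure": 120, "oxygen_saturation": 97},
    info_bits=info_bits,
)
print(alert, round(score, 2))  # True 0.31
```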
In a broad sense, the IHM is an information model of a scenario that depicts, in bits, how much each parameter impacts the overall scenario objective, or the risk in the case of the Patient-HDT.


Providing Recommendations for Actions: The IDT Control (Decision) Component


The Information Digital Twin (IDT) decision (or recommendation) component uses Reinforcement Learning (RL) to obtain an information value for each possible user action and, consequently, to provide action recommendations. The RL algorithm relies on the IHM to calculate updated state, action, and reward values and to determine the optimal course of action toward the objective of a particular scenario.


Based on the Information Heat Map, which captures the user's interactions with their environment, the IDT "populates" the user's Markov Decision Process (MDP) with the necessary information values. The MDP values are updated dynamically as the user interacts with their environment. A set of RL agents, each with a specific objective, then simulates possible courses of action and provides recommendations to the user accordingly.
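The sketch below shows the shape of this step under stated assumptions: a tiny two-state MDP whose rewards stand in for IHM-derived information values, solved with value iteration as a simple stand-in for the RL agents' simulation. All states, actions, probabilities, and rewards are hypothetical.

```python
# Hypothetical two-state MDP for one objective. Transition probabilities P and
# IHM-derived rewards R (bits of progress toward the objective) are assumptions.
P = {
    "stable":  {"monitor":   {"stable": 0.90, "at_risk": 0.10},
                "intervene": {"stable": 0.95, "at_risk": 0.05}},
    "at_risk": {"monitor":   {"stable": 0.30, "at_risk": 0.70},
                "intervene": {"stable": 0.60, "at_risk": 0.40}},
}
R = {
    "stable":  {"monitor": 0.05, "intervene": 0.02},
    "at_risk": {"monitor": -0.10, "intervene": 0.30},
}

def value_iteration(P, R, gamma=0.9, iters=200):
    """Plan over the populated MDP; value iteration stands in here for the RL
    agents' simulation of possible courses of action."""
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items())
                    for a in P[s])
             for s in P}
    policy = {s: max(P[s], key=lambda a: R[s][a]
                     + gamma * sum(p * V[s2] for s2, p in P[s][a].items()))
              for s in P}
    return V, policy

V, recommended = value_iteration(P, R)
print(recommended)  # {'stable': 'monitor', 'at_risk': 'intervene'}
```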


The IDT's learning component performs the final step toward automating user decisions. It learns user-specific action preferences from the interaction patterns captured in the Information Heat Map (IHM) and from the frequency with which the user selects specific recommendations. When a user faces a familiar choice, the IDT can make that choice on their behalf based on the previously learned preferences.
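A minimal sketch of that last step, assuming preference learning can be reduced to tracking how often the user accepts each recommended action in a recognizable situation; the situation keys, acceptance threshold, and minimum-evidence rule are assumptions for illustration.

```python
from collections import defaultdict

class PreferenceLearner:
    """Tracks accepted recommendations per situation and takes over the choice
    once the user's preference is clear enough (illustrative sketch only)."""

    def __init__(self, auto_threshold=0.8, min_observations=5):
        self.accepted = defaultdict(lambda: defaultdict(int))  # situation -> action -> count
        self.offered = defaultdict(int)                        # situation -> recommendations shown
        self.auto_threshold = auto_threshold
        self.min_observations = min_observations

    def record(self, situation, recommended_action, accepted):
        self.offered[situation] += 1
        if accepted:
            self.accepted[situation][recommended_action] += 1

    def auto_action(self, situation):
        """Return an action the IDT may take on the user's behalf, or None."""
        if self.offered[situation] < self.min_observations:
            return None
        action, count = max(self.accepted[situation].items(),
                            key=lambda kv: kv[1], default=(None, 0))
        if action is not None and count / self.offered[situation] >= self.auto_threshold:
            return action
        return None
```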


The IDT-Social AI Platform Dependency


To improve the speed and effectiveness of its learning, we believe it is essential for the IDT to be part of a larger environment and to learn from other IDTs as well, which is achieved by joining a Social AI Platform.
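One hedged sketch of what such cross-IDT learning could look like: an IDT blends its own, possibly sparse, information values with values shared by similar IDTs on the platform. The blending rule and weights are assumptions, not the platform's actual mechanism.

```python
def blend_information_values(own, peers, weight_own=0.7):
    """Blend this IDT's information values (bits) with the average of values
    shared by peer IDTs; 'own' and each peer are dicts keyed by
    (parameter, objective) pairs. Illustrative assumption only."""
    keys = set(own) | {k for peer in peers for k in peer}
    blended = {}
    for key in keys:
        peer_vals = [peer[key] for peer in peers if key in peer]
        peer_avg = sum(peer_vals) / len(peer_vals) if peer_vals else None
        if key in own and peer_avg is not None:
            blended[key] = weight_own * own[key] + (1 - weight_own) * peer_avg
        else:
            blended[key] = own.get(key, peer_avg)
    return blended
```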


Furthermore, the IDT (or its human-specific variant, the HDT) can be part of a social scenario managed on the Social AI Platform, such as monitoring health risks or optimizing social resources. In this situation, the IDT's objectives are generated by the Social AI Platform. The IDT's task is to adjust and fine-tune the user's habits and activities to align them with the social objectives, while remaining grounded in the user's preferences and interests. Defining the algorithms needed to achieve this alignment is part of our ongoing research.
