Human Machine Teaming (HMT)

The human-machine teams involved in a task achieve their objectives by exchanging a series of specific actions and corresponding responses. Most of the time, these actions and responses follow distinct patterns that emerge from adherence to the task owner's policies and procedures for achieving the task objectives. The probabilities and correlations with which specific action-response pairs occur in a given context capture the human-machine teaming patterns, regardless of the meaning of those actions or responses.
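The idea of capturing action-response probabilities, independent of their meaning, can be sketched as follows. This is a minimal illustration, not a prescribed method; the log format, action names, and function name are illustrative assumptions.

```python
from collections import Counter, defaultdict

def teaming_profile(interactions):
    """Estimate P(response | action) from a log of (action, response) pairs.

    The resulting conditional probabilities characterize the teaming
    pattern without any reference to what the actions or responses mean.
    """
    counts = defaultdict(Counter)
    for action, response in interactions:
        counts[action][response] += 1
    profile = {}
    for action, responses in counts.items():
        total = sum(responses.values())
        profile[action] = {r: n / total for r, n in responses.items()}
    return profile

# Hypothetical interaction log for illustration only.
log = [
    ("request_access", "grant"),
    ("request_access", "grant"),
    ("request_access", "deny"),
    ("submit_report", "ack"),
]
profile = teaming_profile(log)
print(profile)
```

A shift in these probabilities over time (for example, "deny" suddenly becoming the dominant response to "request_access") would signal a change in the teaming pattern, even if the content of the messages were opaque.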

Trust in HMT

HMT is based on the exchange of messages—communication—among team members. Communication theory tells us that all communication occurs on three levels:

  1. The effectiveness level—how to assess the overall effectiveness of the interactions (e.g., trust).

  2. The semantic level—how to agree on, and represent, the concepts necessary to achieve the objectives.

  3. The technical level—how to exchange data effectively among the various parties.

The point here is that, ultimately, the first level rests on the other two: if the content of the exchanged messages is corrupted (a technical failure), we can no longer be certain of the meaning of the concepts we use, and thus cannot assess the effectiveness of the environment.

Source and Channel Coding

Communication theory established that the effective design of a communication system ultimately depends on understanding the statistical characteristics of the messages exchanged: how frequently the sender transmits them and how they correlate with one another. The use of a single dot in Morse code to represent the letter "E", the most common letter in the English language, is a well-known example of this insight. If we consider a human or an AI agent, we can assume that the frequency with which a message (for example, a request to interact with a mobile app or a system) is sent tells us something about the utility of that app in a given context. Capturing such message frequencies and correlations in a network of collaborating humans and machines should then provide insights into how to manage trust in the environment.
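The Morse-code insight above can be sketched with a small frequency analysis: common messages carry little information per occurrence (low surprisal, measured as -log2 of their empirical probability), while rare ones stand out and might warrant extra scrutiny in a trust model. This is a hedged illustration; the message names and the use of surprisal as a trust signal are assumptions, not a method stated in the text.

```python
import math
from collections import Counter

def message_surprisal(messages):
    """Empirical frequency and surprisal (-log2 p) for each message type.

    Like the single-dot "E" in Morse code, a frequent message is cheap
    and unsurprising; a rare message carries more information and may
    deserve closer attention.
    """
    counts = Counter(messages)
    total = len(messages)
    return {
        m: (n / total, -math.log2(n / total))
        for m, n in counts.items()
    }

# Hypothetical traffic log: 8 routine app opens, 7 logins, 1 rare export.
traffic = ["open_app"] * 8 + ["login"] * 7 + ["export_data"]
stats = message_surprisal(traffic)
print(stats)
```

Here the routine "open_app" message has surprisal 1 bit, while the lone "export_data" request has surprisal 4 bits; a trust-management layer could flag such statistically unusual messages for review.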