WP8: Cooperative and hybrid human-machine intelligence

Task Lead: UNITN, PI: Andrea Passerini, Co-PI: Bruno Lepri

The WP will develop the theoretical and methodological foundations for hybrid human-machine learning and decision making, and for collective intelligence in hybrid human-machine networks. It will devise AI models for explainable, interactive human-machine learning and decision making, and will develop socially inspired cooperative AI systems as well as systems combining human and AI agents.

Connected to TP "Learning and Reasoning from individual to communities to society" (TP3) and TP "Legal and Ethical Design of AI" (TP1).

Effort: 30 p/m + cascading funds.



Description of work  


The WP will focus on developing the theoretical and methodological foundations for cooperative and hybrid human-machine intelligence. This will be addressed from two complementary perspectives: hybrid human-machine learning and decision making on the one hand, and social and cooperative intelligence for groups of AI agents, or of AI and human agents, on the other.


Task 2.8.1 – Trustworthy human-machine decision making (Task Leader: Andrea Passerini)

The task will focus on defining the metrics, requirements, and methodologies for an effective human-machine decision making process, assuming an already trained machine learning system. It will then formalize a joint human-machine decision making process that can reliably optimize these metrics. This includes strategies for deciding when and how to incorporate machine outputs into the decision, the requirements the machine should satisfy to contribute positively to the process (e.g., calibration, robustness to certain types of adversarial attacks, interpretability), and the kind of interaction most appropriate to foster comprehension and trust.
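
To make the intended deferral strategies concrete, the following minimal Python sketch illustrates how a confidence-based deferral policy could trade machine coverage against team accuracy. The calibration assumption, the threshold tau, the assumed human accuracy, and the function simulate_hybrid_decisions are illustrative choices, not a specification of the method the task will develop.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_hybrid_decisions(n=1000, tau=0.8, human_accuracy=0.9):
    """Toy deferral policy: the machine decides a case when its (assumed
    calibrated) confidence exceeds tau, otherwise the case is routed to a
    human decision maker. All parameters are illustrative."""
    conf = rng.uniform(0.5, 1.0, size=n)               # machine confidence per case
    # For a calibrated machine, correctness probability equals its confidence.
    machine_correct = rng.uniform(size=n) < conf
    human_correct = rng.uniform(size=n) < human_accuracy

    defer = conf < tau                                  # low-confidence cases go to the human
    team_correct = np.where(defer, human_correct, machine_correct)

    return {
        "coverage": float(1 - defer.mean()),            # fraction decided by the machine
        "machine_acc_on_kept": float(machine_correct[~defer].mean()),
        "team_accuracy": float(team_correct.mean()),
    }

for tau in (0.6, 0.8, 0.95):
    print(f"tau={tau}: {simulate_hybrid_decisions(tau=tau)}")
```

Varying tau in the toy model shows the basic trade-off the task will formalize: a higher threshold shifts more cases to the human and changes both coverage and overall team accuracy.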


Task 2.8.2 – Social and cooperative AI systems (Task Leader: Bruno Lepri)

Social learning and learning of cause-and-effect relationships are key components of human intelligence. Here, our goal is to model and evaluate notions of social learning, social influence, and counterfactual and causal learning in order to improve the performance of a group of AI agents and their ability to produce explainable decisions in scenarios where they have to interact with humans. We plan to use concepts from coalitional game theory, social choice theory, and network science in environments where agents have to agree, in a fully decentralized manner, on some desirable outcome while having only partial knowledge of their respective preferences, as determined by the structure of the underlying network.
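
As a simplified illustration of decentralized agreement on a network, the sketch below implements DeGroot-style opinion averaging on a random sparse graph: each agent repeatedly averages its estimate with those of its neighbours using only local information. The network model, update rule, and parameters are illustrative assumptions rather than the mechanisms the task will actually adopt.

```python
import numpy as np

rng = np.random.default_rng(1)

def degroot_consensus(n=20, edge_prob=0.2, steps=50):
    """DeGroot-style social learning: each agent repeatedly averages its
    opinion with those of its network neighbours, so the group converges
    towards agreement under purely local (decentralized) updates."""
    # Random sparse undirected network; self-loops keep weight on the agent's own opinion.
    adj = rng.uniform(size=(n, n)) < edge_prob
    adj = np.triu(adj, 1)
    adj = adj | adj.T | np.eye(n, dtype=bool)

    # Row-stochastic influence matrix: equal trust in each neighbour.
    weights = adj / adj.sum(axis=1, keepdims=True)

    opinions = rng.uniform(size=n)          # initial private estimates
    history = [opinions.copy()]
    for _ in range(steps):
        opinions = weights @ opinions       # local averaging step
        history.append(opinions.copy())
    return np.array(history)

hist = degroot_consensus()
print("initial spread:", hist[0].max() - hist[0].min())
print("final spread:  ", hist[-1].max() - hist[-1].min())
```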


Task 2.8.3 – Explainable interactive human-machine learning (Task Leader: Andrea Passerini)

This task will develop explainable interactive human-machine learning strategies aimed at teaching both the human and the machine how best to exploit their respective strengths and overcome their respective weaknesses, so as to optimize the performance of the hybrid human-machine decision process formalized in Task 2.8.1. Explainability is the key ingredient fostering this joint learning mechanism: the WP will develop interactive explainability strategies that help the user and the system improve their understanding, both of each other and of the task to be addressed, as opposed to the post-hoc explainability that currently dominates XAI research.
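
A minimal sketch of such an interactive explanation loop, in the spirit of explanatory interactive learning, is given below: the model exposes its most influential features as an explanation, the user flags a spurious feature, and that feedback is incorporated as a penalty when retraining. The toy dataset, the penalty formulation, and all function names are illustrative assumptions, not the strategies the task will deliver.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: the label depends only on feature 0, but feature 2 is a spurious
# shortcut in the training set (a confound the user can spot in the explanation).
n, d = 200, 4
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)
X[:, 2] = y + 0.1 * rng.normal(size=n)

def train_logreg(X, y, irrelevant=(), lam=5.0, lr=0.1, epochs=500):
    """Logistic regression with an extra penalty on the weights of features
    the user has flagged as irrelevant (an illustrative 'right for the right
    reasons' style constraint)."""
    w = np.zeros(X.shape[1])
    mask = np.zeros(X.shape[1])
    mask[list(irrelevant)] = 1.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y) + lam * mask * w
        w -= lr * grad
    return w

def explain(w, k=2):
    """Explanation = the k features with the largest absolute weight."""
    return np.argsort(-np.abs(w))[:k].tolist()

# Round 1: the machine trains and shows its explanation; the user flags feature 2 as spurious.
w1 = train_logreg(X, y)
print("explanation before feedback:", explain(w1))

# Round 2: retrain with the user's feedback incorporated as a constraint.
w2 = train_logreg(X, y, irrelevant=[2])
print("explanation after feedback: ", explain(w2))
```

The point of the sketch is the interaction pattern rather than the particular model: explanations give the user a handle on what the machine has learned, and the user's corrections change what the machine learns next, which is the mutual-understanding loop the task aims to formalize.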


Task 2.8.4 – Cooperation frameworks of AI and human agents (Task Leader: Bruno Lepri)

The goal of this task is to exploit the properties of empirical social networks (for example, sparsity) to organize the topology of the communications, interactions, and cooperation that occur among AI agents and between AI agents and humans. The work will study the emergent phenomena that arise in human-AI networks as a consequence of the types of agents involved and of their communication strategies, and will characterize the mechanisms that can incentivize fairness and social well-being in these hybrid systems. As an outcome of this task, a set of algorithms and incentives for hybrid social AI systems will be developed and evaluated with both computer simulations and behavioral experiments.
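
The following simplified simulation sketches the kind of computational evaluation envisaged here: agents on a sparse random network play a repeated helping game, a reputation-based incentive is compared against random helping, and payoff inequality is summarized with a Gini coefficient as a rough fairness indicator. The game, the incentive rule, and the indicator are illustrative assumptions only, not the algorithms and incentives the task will deliver.

```python
import numpy as np

rng = np.random.default_rng(3)

def gini(x):
    """Gini coefficient of (non-negative) payoffs, used as a crude inequality indicator."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

def simulate(n=50, edge_prob=0.08, rounds=200, benefit=3.0, cost=1.0, reputation_based=True):
    """Repeated pairwise helping game on a sparse random network.
    With reputation_based=True, agents help neighbours whose reputation
    (past cooperativeness) is high enough; otherwise they help at random."""
    adj = rng.uniform(size=(n, n)) < edge_prob
    adj = np.triu(adj, 1)
    adj = adj | adj.T
    payoff = np.zeros(n)
    reputation = np.full(n, 0.5)

    for _ in range(rounds):
        for i in rng.permutation(n):
            neighbours = np.flatnonzero(adj[i])
            if len(neighbours) == 0:
                continue
            j = rng.choice(neighbours)
            cooperate = reputation[j] >= 0.5 if reputation_based else rng.uniform() < 0.5
            if cooperate:
                payoff[i] -= cost
                payoff[j] += benefit
            # Move the donor's reputation towards its observed behaviour.
            reputation[i] = 0.9 * reputation[i] + 0.1 * float(cooperate)

    # Shift payoffs to be non-negative before computing the inequality indicator.
    return payoff.mean(), gini(payoff - payoff.min() + 1e-9)

print("random helping (mean payoff, Gini):    ", simulate(reputation_based=False))
print("reputation-based (mean payoff, Gini):  ", simulate(reputation_based=True))
```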