Doctoral defence: Abdul-Rasheed Olatunji Ottun "Practical Trustworthy Artificial Intelligence with Human Oversight"

[Photo: a man standing in a corridor of the academic building. Author: Kadri-Ann Kivastik]

On November 7 at 12:15, Abdul-Rasheed Olatunji Ottun will defend his doctoral thesis "Practical Trustworthy Artificial Intelligence with Human Oversight".

Supervisor:
Assoc. Prof. Huber Raul Flores Macario, University of Tartu

Opponents:
Professor Christian Becker, Institute of Parallel and Distributed Systems, University of Stuttgart (Germany)
Alexandre Da Silva Veith, Software and Data Systems Research Lab, Nokia Bell Labs (Belgium)

Summary
Distributed systems form the backbone of modern digital applications, including AI-powered services. Today, machine and deep learning pipelines are increasingly integrated into these systems to enhance performance, perception, and user experience. However, their probabilistic and opaque nature often raises concerns about trust, safety, and accountability. In response, global regulatory and economic frameworks highlight the need for trustworthy AI, particularly with human oversight. While human involvement across the AI lifecycle is widely acknowledged, the practical implementation of human-in-the-loop mechanisms remains limited and underdeveloped. This thesis addresses the central question: How can AI-enabled applications be enhanced with human oversight to ensure trustworthy AI?

To explore this, we present three key contributions. First, we propose SPATIAL, a proof-of-concept system architecture that embeds trustworthiness metrics into AI applications and presents them via a user-facing dashboard. Empirical evaluations show that SPATIAL enables experts to monitor AI inference behavior, while also revealing the complexity of integrating trust metrics into real-world systems. Second, we introduce Socially Aware Federated Learning (SAFL), a distributed learning framework that incorporates social dynamics and task delegation to guide data selection and incentivize human input. User studies demonstrate SAFL's effectiveness in improving both model quality and training data quality. Third, we present AntiVenom, a lightweight, domain-agnostic anomaly detection technique for deployed autonomous AI systems. By analyzing device-level performance metrics, AntiVenom flags irregularities for human review; compared to traditional explainable AI tools, it offers faster, proactive monitoring. Together, these contributions show how human oversight can be practically embedded across the AI lifecycle (design, training, and deployment), advancing the development of more transparent, safe, and trustworthy AI systems.
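The summary describes AntiVenom's approach only at a high level, so the following minimal Python sketch illustrates the general pattern it names: watching device-level performance metrics and flagging irregular values for human review. Everything here (the MetricMonitor class, the rolling z-score rule, the window size, and the threshold) is an illustrative assumption, not the technique from the thesis.

from collections import deque
from statistics import mean, stdev

class MetricMonitor:
    """Flags metric samples that deviate sharply from a recent baseline."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.window = deque(maxlen=window)   # recent samples of one metric
        self.threshold = threshold           # z-score cutoff for escalation

    def observe(self, value: float) -> bool:
        """Return True if `value` should be escalated to a human reviewer."""
        flagged = False
        if len(self.window) >= 10:           # need a minimal baseline first
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                flagged = True
        self.window.append(value)
        return flagged

# Example: monitor inference latency (ms); a sudden spike gets flagged.
monitor = MetricMonitor()
for latency in [12.1, 11.8, 12.4] * 10 + [95.0]:
    if monitor.observe(latency):
        print(f"Flag for human review: latency={latency} ms")

Keeping only a rolling baseline of raw device metrics makes the check lightweight and independent of the model's domain, which matches the spirit of the description above, though the actual method in the thesis may differ substantially.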

The defence will also be held via Zoom (meeting ID: 929 3333 1346, passcode: ati).