Responsible AI: the praxis of AI and data protection management: negotiating innovation and FAT principles

Addis, Chiara (2021). Responsible AI: the praxis of AI and data protection management: negotiating innovation and FAT principles. PhD thesis, University of Salford.



The increasing deployment of Artificial Intelligence applications has sparked a debate on their possible uses and potential problems, and many questions about the protection of personal data have emerged. The General Data Protection Regulation (GDPR) imposed new requirements on organisations handling personal data, and the implications for organisations managing AI technologies are particularly significant. Whereas much research focuses on algorithmic biases and the development of AI, this research explores other important concerns arising from the uses of personal data during the introduction of AI, which impact on individuals and organisations. It investigates innovation in different organisational contexts and how people perceive, understand and apply AI, data protection and FAT principles (fairness, accountability and transparency). Drawing on responsible research and innovation (RRI) and Feenberg's critical theory of technology, the research investigates the praxis of AI and GDPR management within UK organisations, examining the interplay between AI, data protection and FAT principles. The methodology comprises a multi-method approach, employing a survey of experts and dual case studies of organisations implementing responsible AI projects. This research investigates organisational practices and people's agency, providing in-depth analysis of the values, power dynamics, experience, understanding, perceptions and difficulties of various stakeholders (leaders, senior managers, data protection and ML experts) in their specific contexts, all of which shape and construct this ambivalent technology. The research indicates that the GDPR is often misinterpreted, that understanding of AI and its specific risks is limited, and that perceptions of the relevance of FAT principles are diverse. Discussion of ethics usually focuses on data and on activities conducted prior to the implementation of new AI systems. Internal processes and personal data created by AI are generally overlooked in discourse on responsible innovation, while external partners raise particular concerns around compliance and unethical practices. This research critically reflects upon these flaws, identifies rarely discussed problems that obstruct responsible innovation, and defines areas for innovation. Explaining how roles, positionality and personal experiences can shape management decisions regarding AI implementation, the research proposes an approach to AI innovation studies that foregrounds the active role of people in shaping technology. These insights are systematised in a critical AI and data protection management model aimed at supporting organisations in understanding and addressing the specific challenges, risks and benefits of responsible management. The research thereby offers leaders and senior managers important instruments for increasing awareness and control while using AI to process personal data. By highlighting the multilevel and multidisciplinary aspects of AI management, unveiling the complexities around ML predictions and decision-making, and showing the innovative potential residing within the GDPR, it further contributes important insights to business and management studies and to interdisciplinary debates on AI, data protection and organisational ethics.

Item Type: Thesis (PhD)
Contributors: Kutar, MS (Supervisor)
Schools: Schools > Salford Business School
Funders: Salford Business School
Depositing User: Chiara Addis
Date Deposited: 12 Apr 2022 15:22
Last Modified: 12 Nov 2022 02:30
