How healthcare manufacturers can manage ethical AI

Nick Chozos, Human Factors Specialist at PDD, explores how healthcare manufacturers can better manage the ethical issues surrounding the use of AI

Nick Chozos is a Chartered Engineer (MIET) and holds a PhD in Human-Computer Interaction from the University of Glasgow. He has 15 years of consulting experience in safety, cybersecurity, human factors and dependability assurance in the medical device sector, as well as in civil nuclear, aviation, rail, finance and public housing.

“At PDD, I am involved in running usability evaluations for different types of medical devices and oversee a number of research-based innovation projects, helping medical device manufacturers drive innovation in product design as part of their overall business vision and strategy,” he explains. “My research interests encompass assurance cases and structured argumentation, systems theory, human factors in cybersecurity, and ethical issues in medical device development. I am also a member of the Specialist Interest Group (SIG) in ‘Human Factors and Artificial Intelligence in Healthcare’ of the Chartered Institute of Ergonomics and Human Factors (CIEHF) and have contributed to a white paper on this topic.” 

Here, Nick tells us more about ethical AI and its use in healthcare.

 

Hi Nick, what led you to this industry?

“When studying computing science in my early years at university, I took an interest in what lies beyond the technology of hardware and software – in the “bigger picture”, so to speak. I was curious about the people, procedures, organisations and how technology is used and developed, particularly in the context of safety-critical domains. I also became interested in how societal and organisational issues can lead to the failure of technology and wanted to explore ways to formally address those issues through design. My PhD focused on learning from accidents to improve diagnostic laboratory error detection by nurses in the UK NHS screening programme. That was my first step into the industry, and I have not looked back.”

 

What is PDD?

“PDD is a leading global design and innovation consultancy that creates products and experiences to enhance businesses and improve people’s lives. With a unique multi-disciplinary approach rooted in Human-Centred Design, we help customers in the healthcare and consumer industries achieve commercial and creative success. I’m proud of the impact our teams make on the products and services we use on a daily basis. Our clients include some of the world’s leading healthcare and consumer companies, including Becton Dickinson (BD), Novartis, Samsung and Nestlé, influential startups like Trojan Energy, and many more.”

 

How can AI revolutionise healthcare? 

“The Artificial Intelligence (AI) revolution in healthcare seems inevitable. On the one hand, AI promises to empower healthcare to detect diseases faster, accelerate drug discovery, provide personalised treatment solutions and improve patient outcomes overall. On the other hand, it aspires to enable all of us to meaningfully participate in, if not take control of, our own care in the near future. 

“There are, however, many challenges ahead. For AI to be embraced and, indeed, for it to be successful, it must enable all stakeholders, and most importantly us, the patients, to trust it. This means we should be able to comfortably believe and accept the output that AI provides, whilst remaining in control of what happens with our data. We also need to consider fairness and bias in AI models, the means for protecting privacy, and how AI might provide advice that is clear, explainable and respectful to all.

“These issues are clearly of an ethical nature, and for those demands to be met, medical device manufacturers need to introduce mechanisms within their development and assurance frameworks concerning patient data, AI models, and user interfaces. They also need to test, monitor and update the AI and Machine Learning (ML) components after the system is deployed.”
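By way of illustration, here is a minimal Python sketch of one post-deployment monitoring technique, the population stability index (PSI), which flags when the patient data a deployed model sees has drifted away from the data it was trained on. The data, names and thresholds are invented for illustration; they are not drawn from PDD’s framework or any specific regulation.

```python
import numpy as np

def population_stability_index(reference, live, bins=10):
    """Compare the live feature distribution against the training reference."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf    # catch out-of-range field values
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    ref_pct = np.clip(ref_pct, 1e-6, None)   # avoid log(0) on empty bins
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

rng = np.random.default_rng(0)
training_ages = rng.normal(55, 12, 5000)     # reference population at design time
field_ages = rng.normal(62, 12, 1000)        # noticeably older field population
psi = population_stability_index(training_ages, field_ages)
print(f"PSI = {psi:.2f}")                    # > 0.25 is a common 'investigate' trigger
```

A check like this does not judge the model’s outputs at all; it simply warns that the assumptions behind the original validation may no longer hold, which is the cue to re-test.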

 

How can healthcare manufacturers manage the ethical issues regarding the use of AI? 

Make a commitment to ethics as part of your organisation’s culture

“While issues may vary between devices, a commitment to ethical AI, and a recognition of responsibility and accountability, should be at the heart of what the organisation stands for.

“We can all appreciate the impact that an organisation’s culture can have on performance. Since the 1980s, regulators have been calling for a safety culture across safety-critical domains, which is now a regulatory requirement in nuclear, rail and aviation. Similarly, medical device manufacturers have a responsibility to ensure that commitment to the ethical deployment of AI and ML is clear and central to the beliefs, perceptions, and values that employees share as part of the organisation.”

 

Minimise bias in the training data and the algorithm

“One well-documented challenge in AI systems is bias (in terms of gender, biological sex, nationality or age), both in the AI algorithm and in the training data. It is practically impossible to remove bias completely, so manufacturers need to introduce design controls to address such risks, for example by diversifying the training data and by evaluating and monitoring the algorithm.

“For instance, gender bias has been reported in Computer-Aided Diagnosis (CAD) of chest X-rays, where accuracy in diagnosing women was found to be much lower than in diagnosing men. This is not only an issue of social justice, but also potentially one of safety, as patients who are under-represented in the data may be misdiagnosed or given the wrong treatment.”
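To make this concrete, here is a minimal Python sketch of the kind of subgroup check that can surface such a disparity. The data is synthetic and the setup invented purely for illustration; the point is that performance is reported per subgroup rather than as a single headline number.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
n = 2000
sex = rng.integers(0, 2, n)                  # 0 = male, 1 = female (illustrative)
X = rng.normal(size=(n, 5))
# Synthetic labels whose signal is weaker for one subgroup, mimicking
# under-representation in the training data.
signal = X[:, 0] * np.where(sex == 1, 0.4, 1.0)
y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
preds = model.predict(X)

for group, label in [(0, "male"), (1, "female")]:
    mask = sex == group
    sensitivity = recall_score(y[mask], preds[mask])   # per-subgroup sensitivity
    print(f"{label}: sensitivity = {sensitivity:.2f}")
# A large gap between subgroup sensitivities is the kind of disparity reported
# for chest X-ray CAD, and a trigger for rebalancing the training data.
```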

 

Monitor the system

“One of the most exciting and perhaps controversial aspects of AI is its ability to perform tasks autonomously. This is particularly true of ML, where the system can perform tasks without being explicitly programmed to do so, and can even adapt or change aspects of itself: for example, it may change parts of the algorithm in the light of new training data, e.g. for optimisation. Whilst this is very exciting and has clear benefits, it also comes with risks. After all, how much can we entrust our lives and well-being to an algorithm that changes itself?
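One way such risks are contained in practice is to gate any retrained model behind the same held-out validation evidence the current model was assured on. The Python sketch below is a hedged illustration of that champion/challenger pattern, with invented names and an arbitrary margin; it is not a description of any particular product.

```python
from sklearn.metrics import roc_auc_score

def approve_update(champion, challenger, X_val, y_val, margin=0.01):
    """Promote a retrained model only if held-out performance is preserved."""
    champ_auc = roc_auc_score(y_val, champion.predict_proba(X_val)[:, 1])
    chall_auc = roc_auc_score(y_val, challenger.predict_proba(X_val)[:, 1])
    # Reject any update that loses more than `margin` AUC on the locked
    # validation set, however well it fits the new training data. A human
    # sign-off would still follow an automated pass.
    return chall_auc >= champ_auc - margin
```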

“This is why we cannot completely remove humans from the process. There are means for controlling this autonomy which can be built into the code, but arguably, this is not enough. From the perspective of responsibility and accountability, allowing AI to act autonomously and make decisions about our well-being may in itself be unethical.
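One concrete bridge between code-level control and human oversight is confidence-based triage, where the system only acts on high-confidence outputs and routes everything else to a clinician. The sketch below is illustrative; in practice the threshold would be justified through risk analysis, not picked by hand.

```python
def triage(model, x, threshold=0.95):
    """Act autonomously only on high-confidence outputs; otherwise defer."""
    proba = model.predict_proba([x])[0]
    confidence = float(proba.max())
    if confidence >= threshold:
        return {"decision": int(proba.argmax()), "route": "automated"}
    # Low confidence: surface the case, and the model's uncertainty, to a clinician.
    return {"decision": None, "route": "human_review",
            "note": f"confidence {confidence:.2f} below threshold {threshold}"}
```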

“Crucially, we need to look beyond the design of the AI system itself. Users such as clinicians and healthcare practitioners will also have to adapt their skills and practices to steward the AI tools available and catch system errors before they cause harm.”

 

Make the AI system transparent and explainable

“Transparency in AI refers to its explainability: its capacity to help users understand how it arrives at its outputs. This is key to the future of AI. The system should explain its “inner workings”, its algorithmic models, how it has been using data, and how it has made its decisions, in a manner that is comprehensible to the user. This will become increasingly important as we start to interact with the AI component directly, e.g. through the medical device interface or through mobile phones via the cloud. As users, we will all need to be empowered to ask the questions we need to ask – and not just of the AI itself, but of the human beings that manage the system.”
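For a simple model class, explainability can be as direct as reporting each input’s contribution to the score. The Python sketch below illustrates the idea for a linear model, with feature names invented for illustration; real systems use richer techniques, but the principle of surfacing the ‘why’ alongside the ‘what’ is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["age", "blood_pressure", "cholesterol", "bmi"]   # invented names
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))
y = (X @ np.array([0.8, 0.5, 0.3, 0.1])
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
patient = X[0]

# For a linear model, each feature's contribution to the decision score is
# simply coefficient * value, so the 'why' can be shown next to the 'what'.
contributions = model.coef_[0] * patient
for name, value in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {value:+.2f}")
print(f"predicted risk: {model.predict_proba([patient])[0, 1]:.2f}")
```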

 

Make human-centred design a priority

“When using an AI system in healthcare, it is crucial to understand the needs of all stakeholders involved with the system, directly and indirectly, including developers, service engineers, clinicians and patients. The user interface(s) of the system should accommodate the needs of all stakeholders, considering their responsibilities around developing, maintaining, monitoring and interacting with the AI and associated medical devices.

“Stakeholders around the AI system will also need direct communication channels between them to resolve issues collectively and effectively when they emerge, ultimately improving the AI system and its performance.

“In the end, we need to strike a balance between AI autonomy and human supervision and control. The potential benefits of AI are unquestionable, but we need to remember that AI has no purpose and no moral principles of its own. The same algorithm and data may be used to produce a life-saving drug or a lethal poison. We must therefore maintain control.”

 

What do the next 12 months hold for you and the company?

“PDD is a leader in innovation, with extensive expertise in healthcare and medical device innovation and an approach that is rooted in Human-Centred Design. As AI systems continue to evolve and influence innovation across all industries, we’re developing strategies to help medical device manufacturers meet design principles for AI and build confidence in AI systems that are safe, trustworthy and respectful, and that have a positive impact on people’s lives.”
