Software as a Medical Device: Regulatory and Market Access Implications
35. Bainbridge L. Ironies of automation. Automatica. Vol. 19, No. 6, 1983.
36. Wikipedia. Level 2 (“hands off”): The automated
system takes full control of the vehicle: accelerating,
braking, and steering. The driver must monitor the
driving and be prepared to intervene immediately at any
time if the automated system fails to respond properly.
The shorthand “hands off” is not meant to be taken
literally – contact between hand and wheel is often
mandatory during Level 2 driving to confirm that the
driver is ready to intervene. Wikipedia website. https://
en.wikipedia.org/wiki/Self-driving_car. Accessed 2
February 2020.
37. Overtrust is a form of risk compensation (Wikipedia, accessed 3 March 2021). Goddard K, Roudsari A, Wyatt J. Automation bias: a systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association. 2012;19(1):121–127.
38. Jorritsma W, Cnossen F, Van Ooijen PMA. Improving the radiologist-CAD interaction: Designing for appropriate trust. Clinical Radiology. 2014;70. doi:10.1016/j.crad.2014.09.017.
39. The US FDA obliged the manufacturer to remove
the human from the loop. K. Cobbaert (personal
communication, 17 February 2020).
40. For example, EU MDR and EU IVDR Annex I GSPR
5: In eliminating or reducing risks related to use error,
the manufacturer shall:
• Reduce as far as possible the risks related to
the ergonomic features of the device and the
environment in which the device is intended to be
used (design for patient safety)
• Give consideration to the technical knowledge,
experience, education, training and use
environment, where applicable, and the medical
and physical conditions of intended users (design
for lay, professional, disabled or other users).
41. Klein, et al. “Ten challenges for making automation
a ‘team player’ in joint human-agent activity.” IEEE
Computer. Nov/Dec 2004.
42. Transparency is defined as open, comprehensive, accessible, clear, and understandable presentation of information (ISO 20294:2018, 3.3.11) or as openness about activities and decisions that affect stakeholders and willingness to communicate about these in an open, comprehensive, accessible, clear, and understandable manner (draft IEC 22989 Information Technology: Artificial Intelligence: Artificial Intelligence Concepts and Terminology. Published 31 October 2019).
43. This phenomenon is called emergence. Emergence occurs when an entity is observed to have properties its parts do not have on their own, similar to how the function of a biological organism emerges from the interaction of its cells: properties emerge in the organism that are not present at, and cannot be understood from, the cellular level. Wikipedia website. https://en.wikipedia.org/wiki/Emergence. Accessed 16 February 2021.
44. Kearns M and Roth A. The Ethical Algorithm:
The Science of Socially Aware Algorithm Design. 1
November 2019, Oxford University Press.
45. Other methods are emerging for programmatically
interpretable reinforcement learning, in which the black
box and other models become not the ultimate output
of the learning process, but an intermediate step along
the way. As an example, consider DeepMind’s OCT
AI. It uses optical coherence tomography (OCT) scans.
These 3D images provide a detailed map of the back of
the eye. DeepMind split the AI into two parts. The first AI identifies everything that is abnormal, such as bleeding in the retina, leakage of fluid, or waterlogging of the retina, and highlights those features. The second AI then categorizes them, for example, as diabetic eye disease, with a percentage representing the confidence. Even in cases where the AI appeared to be wrong, the scientists sometimes realized that the algorithm had noticed something the healthcare professionals had not spotted. Some of those cases were so ambiguous and challenging that the researchers realized their gold standard might have to be adapted. DeepMind
Podcast, Episode 5 “Out of the lab.” 27 August 2019.
https://deepmind.com/blog/article/podcast-episode-5-
out-of-the-lab. Accessed 16 February 2021.
46. London AJ. Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Hastings Cent Rep. 2019;49(1):15–21. doi:10.1002/hast.973.
47. Explicability: Property of an AI system that important
factors influencing the prediction decision can be
expressed in a way that humans would understand.
Modified from “explainability” as defined by draft IEC
22989 Information Technology: Artificial Intelligence:
Artificial Intelligence Concepts and Terminology.
Published 31 October 2019.
48. GDPR Art. 13(2)(f): controllers must, at the time when
the personal data are obtained, provide the data subjects
with further information necessary to ensure fair and
transparent processing about the existence of automated
decision-making and when such is the case, include
meaningful information about the logic involved, as
well as the significance and the envisaged consequences
of such processing for the data subject. Art. 22 The
data subject shall have the right not to be subject to
a decision based solely on automated processing […]
which […] significantly affects him or her. EUR-Lex
website. https://eur-lex.europa.eu/eli/reg/2016/679/oj.
Accessed 16 February 2021.
[…] data controller shall implement suitable measures
to safeguard the data subject’s rights, freedoms, and
legitimate interests, at least the right to obtain human
intervention on the part of the controller, to express