Software as a Medical Device: Regulatory and Market Access Implications
the technology and adequate evidence of safety and performance. For example, manufacturers demonstrated via randomized controlled trials that electroconvulsive therapy is highly effective for severe depression, even though the mechanism of action remains unknown. The same holds true for many drugs under the medicines regulations, such as Selective Serotonin Reuptake Inhibitors (SSRIs) or anesthetic agents.
If a technology is significantly more effective than traditional methods in its diagnostic or therapeutic capabilities but cannot be explained, it raises an ethical problem to hold the technology back simply because we cannot explain or understand it. Explicability is a means (to trust), not a goal. A blanket requirement that machine learning systems in medicine be explicable or interpretable is therefore unfounded and potentially harmful.46 Of course, making the model interpretable has the advantage of helping users gain confidence in the AI system faster, which in turn supports the company's commercial success.
Enclosed AI, i.e., AI with no actionable outcome, may not require transparency toward the healthcare provider or patient but requires sufficient explicability47 to the manufacturer or service engineer to allow verification, error detection, and troubleshooting. An example would be an AI that controls the cooling of a motor coil.
Manufacturers must also be transparent about the use of automated decision making. The rules in the EU General Data Protection Regulation (EU) 2016/679 (GDPR)48 imply that when it is not immediately obvious that the user is interacting with an automated decision-making process rather than a human (e.g., because there is no meaningful human involvement, such as automated processing to improve a sensor or optics within a device), a software device must inform users, in particular patients, of this, and include meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.
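To make this obligation concrete, the following is a minimal sketch, not taken from the regulation or from any specific device, of how a manufacturer might structure such a disclosure in software; all field names and example strings are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch: structuring GDPR-style transparency information
# for an automated decision-making feature in a software medical device.
# Field names and example content are illustrative assumptions only.
@dataclass
class AutomatedDecisionNotice:
    is_automated: bool   # disclose that the decision is made automatically
    logic_summary: str   # meaningful information about the logic involved
    significance: str    # what the output means for the user
    consequences: str    # envisaged consequences for the data subject

notice = AutomatedDecisionNotice(
    is_automated=True,
    logic_summary="A machine learning model estimates risk from imaging data.",
    significance="The result supports, but does not replace, clinical judgment.",
    consequences="A high-risk result may lead to additional diagnostic testing.",
)
```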
Ethics
AI ethics is used to mean respecting human values, including safety, transparency, accountability, and morality, but also confidentiality, data protection, and fairness. These aspects are not new; philosophers have been debating ethics for millennia. The science behind creating ethical algorithms, on the other hand, is relatively new. Computer scientists have a responsibility to think about the ethical aspects of the technologies they are involved in and to mitigate or resolve any issues. Of these ethical aspects, fairness is probably the most complicated, because fairness can mean different things in different contexts to different people.49
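As a brief, hypothetical illustration (not drawn from the source), the same set of predictions can be judged fair by one common group-fairness metric and unfair by another; the data and function names below are invented for the sketch.

```python
import numpy as np

# Hypothetical data: a binary protected attribute, true outcomes, and
# model predictions for eight individuals (illustrative values only).
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y     = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_hat = np.array([1, 1, 1, 0, 1, 0, 0, 0])

def demographic_parity_diff(y_hat, group):
    """Gap in positive-prediction rates between the two groups."""
    return abs(y_hat[group == 0].mean() - y_hat[group == 1].mean())

def equal_opportunity_diff(y, y_hat, group):
    """Gap in true-positive rates between the two groups."""
    tpr = lambda g: y_hat[(group == g) & (y == 1)].mean()
    return abs(tpr(0) - tpr(1))

print(demographic_parity_diff(y_hat, group))    # 0.5: unequal positive rates
print(equal_opportunity_diff(y, y_hat, group))  # 0.0: equal true-positive rates
```

By demographic parity these predictions look discriminatory, while by equal opportunity they look fair; which metric should govern is precisely the contextual question raised above.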
To consider the ethical aspects of software that changes itself through learning, the European Parliamentary Research Service50 frames such software as a real-world experiment. In doing so, it shifts the question of evaluating the moral acceptability of AI in general to the question of 'under what conditions is it acceptable to experiment with AI in society?' It uses the analogy of healthcare, where medical experimentation requires manufacturers to be explicit about the experimental nature of healthcare technologies by following the rigorous procedures of a clinical investigation, subject to ethics committee approval, patient consent, and careful monitoring to protect the subjects involved or impacted.
Many AI ethics frameworks have appeared in recent years.51 These differ based on the organization's goals and operating contexts. Such frameworks have limits, for example, because many AI systems involve a tradeoff between algorithmic fairness and accuracy.52,53 A fair algorithm should provide the same benefits for the group while protecting it from discrimination. An accurate algorithm, on the other hand, should make a prediction that is as precise as possible for a certain subgroup, e.g., according to age, gender, smoking history, or previous illnesses.
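To illustrate this tradeoff, the following sketch (a synthetic, hypothetical example, not from the source) compares a single decision threshold with group-specific thresholds chosen to equalize positive-prediction rates; in this toy setup, closing the parity gap costs accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic cohort: a latent risk factor whose distribution
# differs between two groups, so equalizing positive-prediction rates
# necessarily sacrifices some predictive accuracy.
n = 50_000
group = rng.integers(0, 2, n)
risk = rng.normal(0.0, 1.0, n) + 0.8 * group     # true latent risk
y = rng.random(n) < 1.0 / (1.0 + np.exp(-risk))  # outcome driven by risk
score = risk + rng.normal(0.0, 0.5, n)           # model's noisy risk score

def evaluate(thr_g0, thr_g1):
    """Accuracy and demographic-parity gap under group-specific thresholds."""
    y_hat = np.where(group == 0, score > thr_g0, score > thr_g1)
    accuracy = (y_hat == y).mean()
    parity_gap = abs(y_hat[group == 0].mean() - y_hat[group == 1].mean())
    return round(accuracy, 3), round(parity_gap, 3)

# Two points on the fairness-accuracy tradeoff curve:
print("single threshold:", evaluate(0.0, 0.0))  # higher accuracy, larger gap
print("equalized rates: ", evaluate(0.0, 0.8))  # smaller gap, lower accuracy
```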
Where an algorithm lies on the tradeoff curve between fairness and accuracy is often a matter of public policy, rather than