Chapter 8. Artificial Intelligence-based Software
130 Regulatory Affairs Professionals Society (RAPS)
of the limitations of the AI/ML technology’s
intended use and therefore misinterprets the
AI/ML output. It is recommended that the
radiologist understand the capabilities and
limitations of the system. In addition to the
operator manual, manufacturers should consider
providing a tool that allows the radiologist, for
example, to better understand how well the
AI/ML model might perform for the intended
use cases. In the Association for Computing
Machinery’s Proceedings of the Conference on
Fairness, Accountability and Transparency, pub-
lished in 2019,75 a ‘Model Card’ was introduced.
This captures information that could be useful to
radiologists and supplements the AI/ML device’s
cleared label.
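As a rough illustration of the concept, the kind of information a Model Card captures can be sketched as a small structured record. All field names, the device name, and the figures below are hypothetical, not a prescribed format:

```python
from dataclasses import dataclass, field

# A minimal sketch of the information a Model Card could capture for a
# radiology AI/ML device. Fields and values are illustrative only.
@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data: str
    evaluation_data: str
    performance: dict                       # metric name -> value on the evaluation set
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="ChestXR-Triage v1.2",       # hypothetical device
    intended_use="Prioritization of adult chest X-rays for radiologist review",
    training_data="120,000 studies from 4 academic centers",
    evaluation_data="10,000 held-out studies from 2 independent sites",
    performance={"sensitivity": 0.91, "specificity": 0.87},
    known_limitations=[
        "Not evaluated on pediatric patients",
        "Performance may degrade on portable (AP) acquisitions",
    ],
)

# A radiologist-facing tool could surface exactly this kind of summary.
for limitation in card.known_limitations:
    print("Limitation:", limitation)
```

A record like this, kept alongside the cleared labeling, gives the radiologist a quick view of where the model was evaluated and where it was not.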
Clinical Practice
Trustworthiness, transparency, and accountability
are some important ethical factors to consider in
clinical practice.
Once validated algorithms are on the com-
mercial market, they can be used on millions of
patients. Therefore, the degree to which the clin-
ical end user has insights into the requirements
and limitations of the device for specific diseases
or conditions is important to ensure that the algo-
rithms are transparent and trustworthy. Also, if
the algorithms are not transparent, users will
not be able to justify their actions. Likewise, if
physicians cannot trust the algorithms, they will
be more reluctant to use them in their practices,
resulting in diminished use in hospitals. In
addition, lack of
accountability raises concerns about the possible
safety consequences of using unverified or unvali-
dated algorithms in clinical settings.
Therefore, it is essential to have a framework
and process in place at hospitals to identify a
person responsible for the algorithm’s use.76 Another aspect
is monitoring the AI performance over time
or monitoring for variance amongst users. AI
systems can change in performance over time
due to data drift, such as changes in the image
acquisition device, disease prevalence, or virus
mutations. Developers and manufacturers should
consider tools for monitoring their AI/ML
system’s performance and for communicating
any degradation back to the manufacturer
for modification.
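One simple way such drift monitoring can be sketched is to compare the distribution of model output scores after deployment against the distribution seen at validation time. The Population Stability Index (PSI) used below is one common drift signal, and the ~0.2 alert threshold is an industry convention, not a regulatory requirement; the data here is simulated:

```python
import math
import random

def psi(baseline, recent, bins=10):
    """Population Stability Index between two score samples.

    0 means identical distributions; values above ~0.2 are commonly
    treated as a significant shift worth investigating.
    """
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            # clamp out-of-range scores into the first/last bin
            idx = min(int((x - lo) / width), bins - 1) if x >= lo else 0
            counts[idx] += 1
        # small floor avoids log(0) for empty bins
        return [max(c / len(sample), 1e-4) for c in counts]

    p = proportions(baseline)
    q = proportions(recent)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
baseline = [random.gauss(0.5, 0.1) for _ in range(5000)]  # validation-time scores
shifted  = [random.gauss(0.6, 0.1) for _ in range(5000)]  # post-deployment scores

print(f"PSI vs. itself:  {psi(baseline, baseline):.3f}")  # 0 by construction
print(f"PSI vs. shifted: {psi(baseline, shifted):.3f}")   # elevated, flags drift
```

In practice the same comparison could run on a schedule against each new batch of production scores, with alerts routed back to the manufacturer when the index crosses the chosen threshold.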
Data Privacy and Security
To achieve the full potential of AI/ML in
healthcare, the following are important factors to
consider:77
a) Informed consent to use data,
b) Safety and transparency,
c) Algorithm fairness and bias, and
d) Data privacy.
Patient data collection and sharing have consis-
tently raised concern from various groups about
maintaining an individual’s privacy and/or dignity
when sensitive health information is shared with
others. Likewise, patient data collection and use in
AI/ML algorithms raise concerns from regulators,
payors, healthcare providers and administrators,
and patients about potential data breaches.
There is an absolute need to protect
patient data from cyberattacks.78 Therefore,
it is important that the involved stakeholders
respect patients’ right to make their own
choices. This can be achieved through
an informed consent form. To ensure patient
privacy and confidentiality during data collection,
handling, storage and evaluation, the developers
must follow the applicable federal guidelines.
Increasingly, regulatory bodies are imple-
menting more stringent requirements for
cybersecurity. The Omnibus Act was passed into
law by Congress in December 2022.79 The Act
gives the FDA statutory authority over cybersecu-
rity in medical devices, rather than the implicit
authority traditionally exercised through
regulation of quality systems and the risk
management process. The Act also gives
the FDA direct oversight of cybersecurity of
medical devices including AI devices. In March
2023, the FDA issued an updated policy to ensure
manufacturers have processes in place to address
vulnerabilities, provide regular updates and
patches, ensure inclusion of coordinated vulner-
ability disclosure, and include a Software Bill of
Materials (SBOM) in their regulatory submissions.80
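To make the SBOM requirement concrete: CycloneDX is one widely used machine-readable SBOM format (SPDX is another), and a minimal fragment listing a single software dependency of a hypothetical AI device might look like the following. The component shown is illustrative only:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "numpy",
      "version": "1.24.2",
      "purl": "pkg:pypi/numpy@1.24.2"
    }
  ]
}
```

A complete SBOM would enumerate every third-party component in the device software, so that a newly disclosed vulnerability in any dependency can be traced to the affected devices.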