Chapter 11: Artificial Intelligence
17. Scherer MU. ‘Regulating Artificial Intelligence Systems:
Risks, Challenges, Competencies, and Strategies.’
Harvard Journal of Law and Technology (2016); and
Buiten MC. ‘Towards Intelligent Regulation of
Artificial Intelligence.’ European Journal of Risk
Regulation (2019).
18. One way of defining intelligence is provided by Steven
Pinker, who defines intelligence as the ability to deploy
novel means to attain a goal, where the goals are
extraneous to the intelligence itself. It also should
be noted that intelligence is a multi-dimensional
variable, i.e., one with many facets (e.g., visual, motor,
mathematical, linguistic); hence, just like humans, an AI
can be intelligent in one facet but not in another.
19. Genetic programming is a technique to create
algorithms that can program themselves by simulating
biological breeding and Darwinian evolution. Instead
of programming a model that can solve a particular
problem, genetic programming only provides a general
objective and lets the model figure out the details itself.
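The idea can be illustrated with a minimal sketch of genetic programming for symbolic regression (our own illustration, not drawn from any cited source; all names and parameters are hypothetical): the only specification supplied is a fitness objective, and a population of candidate expression trees is bred toward a program that meets it.

```python
import random

random.seed(0)  # seeded only so the sketch is reproducible

# Candidate "programs" are expression trees over {+, *, x, small constants}.
OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def random_tree(depth=3):
    # Grow a random expression tree; leaves are 'x' or an integer constant.
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.7 else random.randint(0, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    # Interpret an expression tree at a given input value.
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # The whole "specification": squared error against the target
    # behaviour x**2 + x. Lower is better; 0 is a perfect program.
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree, depth=2):
    # Randomly replace a subtree, simulating variation between generations.
    if random.random() < 0.3 or not isinstance(tree, tuple):
        return random_tree(depth)
    return (tree[0], mutate(tree[1]), mutate(tree[2]))

def evolve(generations=60, pop_size=80):
    # Selection of the fittest plus mutation, generation after generation.
    population = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[: pop_size // 4]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=fitness)

best = evolve()
```

Real genetic programming also recombines (crosses over) parents, which this sketch omits for brevity; with enough generations, such a loop often rediscovers an exact program such as x * (x + 1).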
20. While computer programs generally excel at repetitive
behavior, certain AI (e.g., probabilistic AI, soft
computing, and fuzzy logic), in order to cope with very
noisy input data, will employ an execution flow that is
highly non-deterministic but will nevertheless each time
reach a deterministic answer, within certain margins
of tolerance. For example, you can run the program a
hundred times and it will never run the same way twice,
but it will (or should) consistently produce the same result,
within the margins specified by the manufacturer. It has
inherent variability to tolerate imprecision, uncertainty,
and partial truth in real-world data, solving complex
problems while achieving tractability, robustness,
and low cost. AI may also use differential privacy, a
technique that carefully injects random noise to protect
individual privacy. The goal of such systems is to protect
privacy by making it impossible to reverse-engineer the
precise inputs used while still delivering an output that
is close enough to the accurate answer.
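A classic way of injecting such noise is the Laplace mechanism, sketched below as a simplified illustration (the function names and parameter values are ours, not from any cited source): noise scaled to sensitivity/epsilon is added to a query result, so no individual input can be recovered, yet the noisy answer stays close to the true one.

```python
import random

random.seed(42)  # seeded only so the sketch is reproducible

def laplace_noise(scale):
    # The difference of two independent exponential draws is
    # Laplace-distributed with mean 0 and the given scale.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(true_count, epsilon, sensitivity=1.0):
    # A counting query changes by at most 1 when one individual is
    # added or removed (sensitivity = 1); the required noise scale
    # for epsilon-differential privacy is sensitivity / epsilon.
    return true_count + laplace_noise(sensitivity / epsilon)

# Every run yields a different noisy answer, yet all cluster near the truth:
answers = [private_count(1000, epsilon=0.5) for _ in range(100)]
```

Smaller epsilon means stronger privacy but noisier answers, mirroring the tolerance-margin trade-off described above.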
21. At the time of writing, BSI/AAMI is investigating
whether a standard can be created to provide guidance
and possibly also requirements on algorithm change
protocols.
22. Note: It is important to calibrate the uncertainty. What
you do not want is an AI that gives the wrong answer and
is extremely confident in that wrong answer. See also:
The need for uncertainty quantification in machine-
assisted medical decision making. Nature Machine
Intelligence. 7 January 2019. Nature website. https://
www.nature.com/articles/s42256-018-0004-1. Accessed
16 February 2021.
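Calibration in this sense can be made concrete: among all predictions made with confidence p, roughly a fraction p should be correct. The following is a minimal sketch of the widely used expected calibration error (ECE) metric, on hypothetical data (our own illustration, not from the cited article):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    # Bucket predictions by stated confidence, then measure how far
    # each bucket's average confidence drifts from its actual accuracy.
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += len(bucket) / total * abs(avg_conf - accuracy)
    return ece

# An overconfident model: claims 0.99 but is right only half the time.
overconfident = expected_calibration_error(
    [0.99] * 10, [1, 0, 1, 0, 1, 0, 1, 0, 1, 0])

# A calibrated model: claims 0.5 and is indeed right half the time.
calibrated = expected_calibration_error(
    [0.5] * 10, [1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
```

The overconfident model scores a large ECE (around 0.49 here) while the calibrated one scores zero, capturing exactly the failure mode this note warns against.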
23. Privacy safeguards inhibit the storage of necessary data.
For example, AI systems that provide video or audio
recommendations are updated over time, changing
in response to the availability of content and user
reactions. The only way such systems could be precisely
replicable over time would be if every interaction of
every user was stored indefinitely, which would be
unacceptable from a privacy point of view and also
questionable from an environmental sustainability point
of view.
24. As defined, for example, in EU MDR Art. 2(40)
and Art. 52(1) and EU IVDR Art. 2(32) and Art. 48(1).
25. As defined, for example, in EU MDR Art. 10(9)
and EU IVDR Art. 10(8).
26. As defined, for example, in EU MDR and EU
IVDR Art. 7(d).
27. Annex VII Requirements to be met by Notified Bodies
Section 4.9, Annex IX Conformity Assessment based
on a quality management system and on assessment of
technical documentation, Chapter II Assessment of the
technical documentation Section 4.10, and Annex X
Conformity Assessment based on Type-Examination
Section 5 Changes to the type.
28. EU MDR Art. 27(3) and EU IVDR Art. 24(3).
29. EU MDR and EU IVDR Art. 5(5).
30. Draft ISO/IEC 22989 Information Technology: Artificial
Intelligence: Artificial Intelligence Concepts and
Terminology. Published 31 October 2019.
31. The European Commission’s High-Level Expert Group
on AI in its Assessment List for Trustworthy AI (17
July 2020) distinguishes human-in-the-loop (HITL,
i.e., direct control), human-on-the-loop (HOTL, i.e.,
supervisory control) and human-in-command (HIC).
Human-in-command refers to the capability to oversee
the overall activity of the AI system (including its
broader economic, societal, legal, and ethical impact)
and the ability to decide when and how to use the AI
system in any particular situation. The latter can include
the decision not to use an AI system in a particular
situation, to establish levels of human discretion
during the use of the system, or to ensure the ability to
override a decision made by an AI system. European
Commission website. https://futurium.ec.europa.
eu/en/european-ai-alliance/pages/altai-assessment-
list-trustworthy-artificial-intelligence. Accessed 16
February 2021.
32. Autonomy, artificial intelligence, and robotics: Technical
aspects of human control, ICRC, August 2019.
33. Urias MG, Patel N, He C, et al. Artificial intelligence,
robotics, and eye surgery: are we overfitted? Int J Retin
Vitr 5, 52 (2019).
34. High-throughput computing can handle situations
much faster than the human brain can pass a signal
from eye to brain to hands. Any device that ultimately
relies solely or primarily on human attention and
oversight cannot possibly keep up with the volume
and velocity of algorithmic decision-making; it will
necessarily be outmatched by the scale of the problem
and hence be insufficient.