and not placing the bar too low is key to a country's success. Keeping the legislative requirements high-level while experimenting with the endorsement of standards allows governments to establish their sweet spot relatively safely.
IEC, ISO, ITU, IEEE, CEN-CENELEC, BSI, and AAMI are examples of standardization bodies working in the AI space. Because AI cuts across all sectors, these organizations have feverishly tried to claim their territory, which sometimes results in overlapping and competing standards. A certain level of competition among standardization bodies is desirable from a societal perspective because it prevents blind spots from becoming entrenched in the system. The downside is that companies operating across countries that use competing standards face higher compliance costs; in other words, competing standards translate into a higher societal cost. Ideally, the organizations that draft these competing standards meet halfway and converge on a single standard, which requires standardization bodies to work together.
Standards are created and voted on democratically. Any expert can join a standardization body and propose an idea for an AI standard or participate in its drafting. People participating in AI standardization are not necessarily AI experts. Participants such as consultants and certification service providers may see standards as a vehicle to grow their businesses, including in the domain of AI certification. Such hidden agendas bring the risk of unnecessarily complex standards or standards that do not bring value to society.
As the world's regulatory powers, certification companies, and consultants each push for their home-grown AI standards to be adopted as the world standard, international standardization bodies appear to have become the new battlegrounds.
Conclusion
More important than a definition of AI are its characteristics and how these affect compliance with existing legislation. Manufacturers should pay attention to an AI system's controllability, whether it changes through learning while in clinical use, and whether it is explainable. However, there is no room for a blanket requirement for AI's controllability and explainability because of the trade-offs against safety and performance.
The European Commission is introducing initiatives to regulate the ethical aspects of AI. Ethics has many dimensions, several of which are already covered by medical device legislation, under which machine learning devices are heavily regulated.
Medical device legislation can benefit from guidance on how to apply it to machine learning devices. Legislators have started publishing such guidance, with China taking the lead. Legislators generally do not want to place the bar too low, which could undermine the trust needed for AI to be adopted and hamper the country's competitive position. Nor do they want to be too prescriptive, as overly strict requirements could kill innovation and harm society in the long run. Finding the sweet spot is essential.
As demonstrated in China, standards play a crucial role in supporting legislation. Standards are a relatively safe tool for experimenting with the right level of requirements. The push for AI standardization has generated feverish activity. Because of vested interests and the way standards can advance a country's competitive position, standardization bodies appear to have become a new battleground for AI.