Back in the 1940s, science fiction writer Isaac Asimov introduced the Three Laws of Robotics. The three laws served as a sort of framework for ethical robotic behavior, and as usual, Asimov was ahead of his time.

Today, as the use of artificial intelligence becomes more prevalent, we’re wrestling with a very similar issue. Essentially, the world needs a practical governance framework for AI. From my perspective as a technology leader, we need to begin developing the framework ASAP.

Why the rush? The answer is simple: AI development is moving forward much more rapidly than anyone had anticipated. AI has become a sore point between the U.S. and China, and I would argue strongly that AI governance sits at the heart of the trade dispute, since China is clearly poised to leap ahead of the U.S. in the race for AI dominance.

The European Commission has developed ethical guidelines for “trustworthy artificial intelligence,” and I urge you to take a look at them. In summary, the commission’s High-Level Expert Group on AI defines trustworthy AI as:

  • Lawful – respecting all applicable laws and regulations
  • Ethical – respecting ethical principles and values
  • Robust – both from a technical perspective and with regard to its social environment

Additionally, the group lists seven essential requirements for trustworthy AI:

1. Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches (a minimal sketch of a human-in-the-loop gate follows this list).

2. Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, with a fallback plan in case something goes wrong, as well as accurate, reliable, and reproducible. That is the only way to ensure that even unintentional harm can be minimized and prevented.

3. Privacy and data governance: Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be in place, taking into account the quality and integrity of the data and ensuring legitimate access to it.

4. Transparency: The data, system, and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.

5. Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination. To foster diversity, AI systems should be accessible to all, regardless of any disability, and should involve relevant stakeholders throughout their entire life cycle (a simple bias screen is sketched after this list).

6. Societal and environmental well-being: AI systems should benefit all human beings, including future generations. They must therefore be sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.

7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data, and design processes, plays a key role here, especially in critical applications. Moreover, adequate and accessible redress should be ensured (a bare-bones audit record is sketched after this list).
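
To make requirement 1 a bit more concrete, here is a minimal sketch of what a human-in-the-loop gate might look like in practice. The confidence threshold, function names, and loan example are my own illustration, not anything prescribed by the guidelines.

```python
# Illustrative only: a hypothetical human-in-the-loop gate. Predictions below
# a confidence threshold are routed to a human reviewer instead of being
# acted on automatically.

CONFIDENCE_THRESHOLD = 0.90  # assumed policy value, not taken from the guidelines

def decide(prediction: str, confidence: float, request_human_review) -> str:
    """Return the final decision, deferring to a person when the model is unsure."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction                    # automated path, still subject to oversight
    return request_human_review(prediction)  # human-in-the-loop: a person decides

# Example usage with a stand-in reviewer callback (hypothetical).
final = decide("approve_loan", 0.72, request_human_review=lambda p: "escalated_to_reviewer")
print(final)  # -> escalated_to_reviewer
```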
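
Requirement 5 can also be made tangible. The sketch below screens approval decisions for large gaps between groups, loosely following a "four-fifths"-style rule of thumb; the group labels and the 0.8 cutoff are assumptions for illustration only.

```python
# Illustrative only: a minimal bias screen comparing approval rates across groups.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
rates = approval_rates(decisions)
if min(rates.values()) / max(rates.values()) < 0.8:  # "four-fifths"-style screen
    print("Potential unfair bias, review the model:", rates)
```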
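
Finally, requirements 4 and 7 both lean on traceability and auditability. Here is a bare-bones audit record capturing the data, model version, outcome, and a plain-language explanation for each automated decision; the field names and example values are hypothetical.

```python
# Illustrative only: a minimal audit record for each automated decision, so that
# data, model version, and outcome can be traced and reviewed later.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, inputs: dict, output, explanation: str) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "explanation": explanation,  # plain-language reason for the person affected
    }
    return json.dumps(record)        # in practice, append to tamper-evident storage

print(audit_record("credit-model-1.3", {"income": 52000, "tenure_years": 4},
                   "declined", "debt-to-income ratio above policy limit"))
```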

We can certainly argue the fine points of the guidelines, but I applaud the group for recognizing the urgency of the challenge and taking the initial steps to craft a workable framework. It’s my hope that we will begin a similar process here in the U.S. 

As technology leaders, it’s our responsibility to participate in the process, and to offer our guidance and expertise whenever possible. This is an issue of critical importance, and we need to make our voices heard.