AI & Ethical Concerns: A Call to Action

José De Jesús
Published in LatinXinAI · 4 min read · Aug 9, 2021



From automation of business and production processes to self-driving cars, intelligent search, and more accurate diagnosis of diseases, AI is undoubtedly changing our landscape for the better. But the many benefits of AI are often eclipsed by growing concerns over trust. Machines taking over jobs, biased data leading to racially biased predictions, facial recognition systems incorrectly identifying criminals, and mass surveillance technology invading the population's privacy are some of the many examples that deepen that distrust and can hinder large-scale adoption of the technology. At a more extreme level, the term singularity (or technological singularity) denotes a future uncontrollable superintelligence that would far surpass human intelligence in all cognitive tasks, continuously recreate itself as an ever-smarter AI, and change civilization forever, possibly even causing human extinction. Some doomsayers will even argue that this path is already in motion and irreversible.

Whether these issues are real or far-fetched, one thing is clear: there is a general, and partly justified, distrust of AI. If AI systems are not designed and used responsibly, transparently, and with accountability, they can, at a minimum, amplify our negative tendencies as human beings: racism and unfairness can become more prevalent; fake news, including video broadcasts, can become indistinguishable from real news; fraud and corruption can run rampant and become nearly impossible to detect. So the matter is extremely important. Sooner rather than later, we need to align the goals of AI with those of humanity.

Ethical concerns with AI fall into two broad areas:

  • The moral behavior of human beings in designing and using AI systems
  • The behavior — and possibly liability — of autonomous or semi-autonomous systems, such as self-driving cars or advanced military weapons

At the root of this is the data we use to feed AI models. Two things need to be well known: data provenance, the origin of the data and how it was collected, and data lineage, how the data was modified (shaped or aggregated) after it was collected.
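As a minimal sketch of what tracking these two things might look like in practice, the hypothetical `DatasetRecord` below (the class name, fields, and example values are illustrative assumptions, not a reference to any real tool) captures provenance once at collection time and appends every post-collection transformation to a lineage log:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """Hypothetical record of a training set's provenance and lineage."""
    name: str
    # Provenance: where the data came from and how it was collected.
    source: str
    collected_on: date
    collection_method: str
    # Lineage: every transformation applied after collection, in order.
    lineage: list = field(default_factory=list)

    def log_transform(self, description: str) -> None:
        """Append one post-collection modification to the lineage log."""
        self.lineage.append(description)

# Illustrative usage with made-up values:
record = DatasetRecord(
    name="loan-applications-v2",
    source="public credit-bureau extract",
    collected_on=date(2021, 3, 1),
    collection_method="batch export, opt-in records only",
)
record.log_transform("dropped rows with missing income")
record.log_transform("aggregated ZIP codes into regions")
print(record.lineage)
```

A record like this makes it possible to answer, for any model, both where its training data originated and exactly how that data was reshaped before training, which is the information an auditor or certifier would need.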

While it may be impossible to enforce ethical standards that prioritize fairness and human well-being across the AI industry, establishing certifications for systems and data sets that comply with ethical standards is a necessary first step. Designers would have the right guidance, and consumers would be much more inclined to use trusted, certified AI systems. Major technology companies such as IBM, Google, Amazon, Facebook, and Microsoft, which own most of the world's data, have established partnerships and AI ethics advisory boards with the goal of making recommendations and sharing best practices around fairness, robustness, explainability, and accountability throughout the entire lifecycle of AI applications.

The IEEE Standards Association has established the Global Initiative on Ethics of Autonomous and Intelligent Systems, whose mission is to “ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.” The Global Initiative community created a document titled Ethically Aligned Design (editions 1 and 2), which states that the ethical design, development, and implementation of AI systems should be guided by five general principles:

  • Human Rights: Ensure they do not infringe on internationally recognized human rights.
  • Well-being: Prioritize metrics of well-being in their design and use.
  • Accountability: Ensure that their designers and operators are responsible and accountable.
  • Transparency: Ensure they operate in a transparent manner.
  • Awareness of misuse: Minimize the risks of their misuse.

The IEEE has also created the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), which creates specifications for certification and marking processes around “transparency, accountability, and reduction in algorithmic bias in Autonomous and Intelligent Systems (AIS).”

Having certification processes around ethics is essential because it would allow systems to be deemed “trusted” and “safe” by a globally recognized body of experts. But we also have a collective responsibility.

Society should demand that AI systems and the data used to train them are well documented and made transparent and traceable in order to mitigate bias or identify it early enough to prevent it. Similarly, rather than reject AI technologies, companies and their employees should embrace them and minimize the negative impacts of AI wherever possible. For example, as automation of repetitive, rules-based jobs increases, some jobs will inevitably go away, but, with the right guidance and support, businesses could retrain their people to do more interesting and fulfilling jobs.


Do you identify as Latinx and work in artificial intelligence, or know someone who is Latinx and working in artificial intelligence?




José is a Thought Leader Executive Architect with IBM and the CTO of Automation for IBM Expert Labs.