Ethical Emerging Technologists

Author: Pablo Ballarin Usieto, CISM, CISA, CRISC, CDPSE, CSX-F, CISSP, CEET, CEHv9, ISO 27001 LA, TOGAF, ISO 20000-1, ITIL v3
Date Published: 15 February 2022

Cybersecurity professionals aim to help organizations protect their information and IT infrastructures. Their main purpose is to avoid negative impacts, not only to their clients’ enterprises, but also to their clients’ and users’ lives. Over the last several years, those impacts have increased dramatically as digitalization and the use of emerging technologies such as the Internet of Things (IoT) and artificial intelligence (AI) have transformed the way services interact with (and change) lives. The work of cybersecurity professionals has evolved to address these new impacts and the related ethical concerns.

New Worries

In the mid-2000s, I carried out different types of projects for telecommunication operators. I worked in fraud prevention, detecting SIM card cloning attacks that took advantage of mobile switch vulnerabilities. I was also involved in protecting the data centers that provided emerging cloud services (mainly web and mail hosting). This work was needed to help organizations detect and control attacks that could cause harm to their clients, such as paying extra charges on their phone bills or not being able to access their web or email accounts. At the time, the impact of cyberattacks on users’ lives was still low.

Things have changed since then. Now most organizations provide services that are data-based and need to be secured, such as recommendation engines used in entertainment (e.g., Netflix) and banking or natural language processing (NLP) used in virtual assistants (e.g., Alexa). The impact of cyberattacks has increased as users’ dependency on emerging technologies has increased. This dependency has led to new risk scenarios and consequences such as:

  • Virtual assistants responding inappropriately to users because they have not been correctly programmed. For example, in 2016, Tay, the Twitter AI chatbot from Microsoft, exhibited this type of inappropriate biased behavior by making misogynistic and racist remarks.1
  • AI decision-making tools embedded with biases. Examples include COMPAS, the AI tool used in courtrooms to predict future crimes, which exhibited biases against African American defendants in the United States,2 and the Amazon machine learning (ML)-based hiring tool, which had an underlying model that favored male applicants over female applicants for technical positions.3

Emerging technologies open the door to additional risk scenarios related to not only bias and lack of human dignity, but also lack of transparency, loss of human autonomy, safety and, of course, cybersecurity and privacy. Organizations worldwide have raised these concerns and have launched more than 100 standards and frameworks seeking to ensure that human-centric values are included in the design of AI- and IoT-driven technologies. These standards include the Montreal Declaration for a Responsible AI,4 the Institute of Electrical and Electronics Engineers (IEEE) 7000 standards to address ethical concerns during system design,5 the EU Ethics Guidelines for Trustworthy AI,6 and the recent UNESCO Global Agreement on the Ethics of AI.7

At the same time, governments in countries such as Canada,8 China,9 Russia10 and the United States11 and in countries in the European Union,12 have created regulations in their AI national strategies seeking to protect their citizens against the potential misuse of AI-based services. Even though explicit regulation has not yet arrived, big tech enterprises such as IBM,13 Intel,14 Microsoft,15 Google16 and Facebook17 are also launching ethical standards for the development of their AI-based services.

To make these frameworks and coming regulations actionable and ensure that emerging technologies are trustworthy, a new category of professionals is needed: ethical emerging technologists. Cybersecurity and privacy risk are part of the problem, but practitioners now also need to guarantee human dignity, avoid bias, ensure transparency and ensure that decisions are not made autonomously without human supervision. Some universities are already including these topics in their educational syllabuses for related degrees, and there are already specific professional certifications.18

Managing these new risk scenarios can be seen as an extension of what is already being done in cybersecurity: detecting vulnerabilities and threats, applying protections, monitoring unexpected activity and addressing incidents. Tools can be used to address the ethical risk, such as AI Fairness 36019 from IBM for bias detection or resources for ML explainability such as LIME20 and SHAP.21 Also, more organizations are emerging to provide these types of services and products.
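To make the idea of bias detection concrete, the following is a minimal, self-contained sketch (not the AI Fairness 360 API) of two group-fairness metrics that toolkits of this kind report for a binary classifier: statistical parity difference and disparate impact. The sample decision lists are hypothetical.

```python
# Illustrative sketch of group-fairness metrics reported by bias-detection
# toolkits. Decisions are 0/1 lists (1 = favorable outcome, e.g. shortlisted).

def selection_rate(decisions):
    """Fraction of favorable (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def statistical_parity_difference(privileged, unprivileged):
    """Unprivileged rate minus privileged rate; 0 indicates parity."""
    return selection_rate(unprivileged) - selection_rate(privileged)

def disparate_impact(privileged, unprivileged):
    """Ratio of unprivileged to privileged favorable-outcome rates.
    Values below roughly 0.8 are often flagged as potential adverse
    impact (the "four-fifths rule")."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical hiring-tool decisions for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # privileged group: 6/8 favorable
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # unprivileged group: 3/8 favorable

print(statistical_parity_difference(group_a, group_b))  # -0.375
print(disparate_impact(group_a, group_b))               # 0.5
```

A disparate impact of 0.5, well below the 0.8 rule-of-thumb threshold, is the kind of signal that would prompt a closer review of the model and its training data.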

But things are not as straightforward as in cybersecurity. For example, cybersecurity practitioners can precisely identify whether an event is a vulnerability that can be exploited to disrupt a system. However, it is not always as easy to determine when a chatbot is behaving inappropriately (as it can be very subjective) or to avoid bias in an AI decision-making tool when all the designers come from the same population and share the same belief systems. To address issues such as these, ethical emerging technologists must work with multidisciplinary teams and be ready to create new concepts that identify the new risk areas.

Conclusion

Many hospitals have ethics committees that help clinicians deal with challenges raised during clinical practice. A similar approach can serve as an example of how ethical emerging technologists can address the ethical challenges and risk brought by the increased use of emerging technologies.

In November 2021, Frances Haugen, the woman who revealed how Facebook had put its profits before people, damaging the health and safety of users and threatening democracies, said coming regulations have the potential to set global standards for online platforms. If creation and enforcement of these regulations are not prioritized “we will lose this once-in-a-generation opportunity to align the future of technology and democracy.”22 Ethical emerging technologists will be key players in supporting these regulations.

Endnotes

1 Vincent, J.; “Twitter Taught Microsoft’s AI Chatbot to Be a Racist Asshole in Less Than a Day,” The Verge, 24 March 2016
2 Angwin, J.; J. Larson; S. Mattu; L. Kirchner; “Machine Bias,” ProPublica, 23 May 2016
3 Dastin, J.; “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, 10 October 2018
4 Université de Montréal, Canada, “Montreal Declaration for a Responsible AI”
5 Institute of Electrical and Electronics Engineers (IEEE), “IEEE Ethics in Action in Autonomous and Intelligent Systems”
6 European Commission, Ethics Guidelines for Trustworthy AI, Brussels, 2019
7 United Nations, “193 Countries Adopt First-Ever Global Agreement on the Ethics of Artificial Intelligence,” UN News, 25 November 2021
8 Government of Canada, “Responsible Use of Artificial Intelligence (AI),” 12 October 2021
9 OneTrust DataGuidance, “China: MOST Issues New Generation of AI Ethics Code,” 11 October 2021
10 AI Alliance Russia, “Artificial Intelligence Code of Ethics”
11 Lander, E.; A. Nelson; “Americans Need a Bill of Rights for an AI-Powered World,” Wired, 8 October 2021
12 European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, Brussels, 21 April 2021
13 IBM, “AI Ethics”
14 Intel, “AI for Good”
15 Microsoft, “Putting Principles Into Practice”
16 Google, “Artificial Intelligence at Google: Our Principles”
17 Pesenti, J.; “Facebook’s Five Pillars of Responsible AI,” Meta AI, 22 June 2021
18 CertNexus, “Certified Ethical Emerging Technologist”
19 IBM Research Trusted AI, “AI Fairness 360”
20 GitHub, “h2oai/mli-resources”
21 GitHub, “slundberg/shap”
22 European Parliament, “Frances Haugen to MEPs: EU Digital Rules Can Be a Game Changer for the World,” 8 November 2021

Pablo Ballarin Usieto, CISA, CISM, CDPSE, CEET, CEHv9, CISSP, CSX-F, TOGAF, ISO 27001:2013 LA, ISO 20000 LI

Is a cybersecurity professional and cofounder of Balusian S.L. He provides advisory and strategic services related to cybersecurity (chief information security officer [CISO]-as-a-service, virtual CISO), ethics and trustworthy AI. Recently, he has assisted many organizations in different domains including retail, banking, telecommunications, public administrations, media, and entertainment in Europe and South America. Usieto is also a professor of security, an event speaker, a member of the board for the ISACA® Valencia (Spain) Chapter and the coordinator for the Center for Industrial Cybersecurity (CCI) in Aragon, Spain. Usieto is interested in discovering how emerging technologies (e.g., brain-computer interfaces, IoT, AI, smart cities) impact users’ behaviors, lives and bodies in terms of privacy, ethical issues and existential risk. He shares this knowledge through research studies with manufacturers, consultants, education services and standard recommendations.