Understanding and Managing the Artificial Intelligence Threat

Artificial Intelligence
Author: Larry G. Wlosinski, CISA, CISM, CRISC, CDPSE, CISSP, CCSP, CAP, PMP, CBCP, CIPM, CDP, ITIL v3
Date Published: 1 January 2020

Artificial intelligence (AI) has moved to the forefront of software technology and is portrayed as both a positive and a negative influence on humanity. Many futuristic films feature robots with some form of AI, and in many of them, those robots share a common flaw: an inability to relate to humans or to distinguish good from evil. There are also products (e.g., Alexa, Siri, chatbots) and television commercials (e.g., Microsoft, IBM’s Watson) that use and promote AI, touting benefits such as increased productivity and the automation of tedious tasks at home and at work.

In addition to these benefits, AI has the potential to reduce the risk of malicious cyberevents, improve information security, provide stronger cyberdefense structures, uncover security and privacy blind spots during risk assessments, test software for vulnerabilities, aid digital forensics (e.g., system intrusions, data breach investigations), and speed notification of cyberattacks and potential system compromises.

AI can be a threat or a benefit to an organization. Any enterprise looking to use AI for security purposes must first understand the security and privacy concerns, threats, and risk factors associated with AI technology.

AI Concerns, Threats and Risk Factors

Despite the benefits AI brings, some security decisions must be made before implementing the technology. AI puts data at rest, in use and in motion at risk. Data can be archived, held in multiple repositories, collected repeatedly, correlated and combined (i.e., fused) into new records, stored across multiple clouds, extracted from the Internet into large files, and processed by big-data organizations. The overall result of this compiled information is that attackers can now choose targets that are lucrative and ripe for exploitation. Thanks to these advancements, attackers need comparatively little effort to obtain highly useful information in their quest for money or sellable data.

Personal privacy is an important concern because of the information available in social networks, medical records, right of ownership, information gathered by chatbots, and the many data-gathering devices developed in the growing computing and processing environment known as the Internet of Things (IoT). Together, this information can create an extensive profile of an individual’s history, personal and professional associations, health, habits, preferences, activities, etc.

Information security is also a concern because of the growth of malicious AI in the form of information extraction, vulnerability information gathering, device compromises, and command and control systems.  

Operating at speed and scale, AI can be used to execute data breaches, disclose or manipulate data, spread malware, and compromise mobile devices and big data repositories.

Associated with these threats is risk to both information security and personal privacy. Information security risk includes hackers using AI, more incidents resulting from AI exploitation, and faster exploitation of software vulnerabilities and weaknesses. Personal privacy is affected by data centralization and correlation and by the accumulation of potentially inaccurate, out-of-date data in the repositories of past and present employers. Risk to both security and privacy can come from cybercriminals using AI, from Internet service providers (ISPs) capturing data via Internet/web crawlers, and from organizations using data for more than they declare in their statements of intent and usage.

Protecting Data and Privacy From AI Threats

How can data and privacy be protected from the malicious use of AI? Recommendations for improving the security and privacy posture for an organization follow. The recommendations are broken down into preventive measures, protective controls, and detective practices and tools.

Preventive Measures
For purposes of this discussion, preventive measures are defined as policies, procedures, processes, techniques and actions taken to minimize or eliminate the possibility or occurrence of an event that could jeopardize the confidentiality, integrity and availability of the data.  

Organizations must establish privacy policies that:

  • Make it clear to users what data are permissible to collect and for what purpose the organization can use them
  • Define how the collection, use and disclosure of personal data are to be handled in a way that is consistent with the context in which the consumer provides the data
  • Allow access to personal data in a usable format and include the ability to correct errors (i.e., redress)
  • Provide reasonable limits on the personal data that the organization can collect and retain (one way to encode such limits in software is sketched after this list)
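
To make such a policy more than words on paper, its rules can also be encoded in the collection software itself. The following Python sketch illustrates one way to do that; the field names, purpose and 365-day retention limit are hypothetical, not prescriptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class CollectionPolicy:
    """A machine-readable slice of the privacy policy."""
    permitted_fields: frozenset   # data elements the policy allows
    purpose: str                  # declared use, disclosed to the user
    retention: timedelta          # reasonable limit on retention

POLICY = CollectionPolicy(
    permitted_fields=frozenset({"name", "email"}),  # hypothetical fields
    purpose="account administration",
    retention=timedelta(days=365),
)

def collect(record: dict, collected_at: datetime) -> dict:
    """Keep only permitted fields; refuse records past their retention limit."""
    if datetime.now() - collected_at > POLICY.retention:
        raise ValueError("record exceeds the declared retention period")
    return {k: v for k, v in record.items() if k in POLICY.permitted_fields}

print(collect({"name": "Ada", "email": "a@example.com", "ssn": "000-00-0000"},
              collected_at=datetime.now()))  # the SSN is silently dropped
```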

Compartmentalization is another key preventive measure. It involves separating business and personal applications (apps) using technological techniques such as containerization (segregating apps and their data). Compartmentalization can prevent AI programs from having access to everything, which helps limit the impact should attackers gain access to one application.

Any organization leveraging AI must have strong data management. For example, anonymization and data deletion are techniques that help minimize the attack surface of personal information. Anonymizing data for regulatory compliance (and improved data protection) helps protect personal identities in the event of attempts to exploit the data.
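
A minimal sketch of the anonymization idea follows, assuming a salted one-way hash is an acceptable pseudonymization technique for the data in question (strictly speaking, hashing pseudonymizes rather than fully anonymizes). The field names are hypothetical.

```python
import hashlib
import os

SALT = os.urandom(16)  # per-dataset salt, stored separately from the data

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()

def anonymize_record(record: dict) -> dict:
    """Hash direct identifiers; drop fields that are not needed at all."""
    direct_identifiers = {"name", "email"}   # hypothetical field names
    drop_entirely = {"ssn"}                  # deletion beats anonymization
    return {
        k: (pseudonymize(v) if k in direct_identifiers else v)
        for k, v in record.items()
        if k not in drop_entirely
    }

print(anonymize_record({"name": "Ada", "email": "ada@example.com",
                        "ssn": "000-00-0000", "zip": "20500"}))
```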


Everyone can play a role in reducing AI risk; it is not just an enterprise-level task. Individuals should share less information on social media and should not share information about others without permission. Machine learning (ML) is a source of risk because such software can easily correlate Twitter, Facebook and email data to build personal profiles that can be used for identity theft, home break-ins, surveillance and other malicious acts.

While the preventive measures discussed can help mitigate risk, organizations should also consider implementing risk mitigation services. Services such as identity verification, fraud detection and people-search (or lookup) services can help reduce AI-related risk by verifying the accuracy of data and providing an approved, acceptable information source.

A more involved solution is to structure the network so that sensitive and private data are not accessible via the Internet. These data could be housed in an internal subnet (i.e., one with no outside connection). Restricting access to the Internet also limits AI devices’ access to the network, helping to minimize the possibility of unauthorized access and to reduce the potential damage AI could cause if sensitive, confidential, classified and/or personal information became available to those with criminal intent (e.g., identity theft, blackmail, espionage).
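
As a defense-in-depth complement to network segmentation, the same rule can be enforced in application code. The sketch below, using Python’s standard ipaddress module, refuses requests that do not originate from the internal subnet; the address range is an assumption.

```python
import ipaddress

INTERNAL_SUBNET = ipaddress.ip_network("10.20.0.0/16")  # hypothetical range

def allow_connection(peer_addr: str) -> bool:
    """Permit access to the sensitive data store only from the internal subnet."""
    return ipaddress.ip_address(peer_addr) in INTERNAL_SUBNET

assert allow_connection("10.20.4.7")        # internal host: allowed
assert not allow_connection("203.0.113.9")  # Internet host: refused
```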

The use of data loss prevention (DLP) tools is another way of addressing the AI threat. DLP products provide algorithms and techniques for detecting the types of data being transmitted, analyzing cybertraffic behavior, applying multilayer encryption, protecting data storage and handling authentication. DLP can also be outsourced, delivered as a service or obtained from cybersupport vendors.1
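
As a simplified illustration of the “detect types of data transmitted” capability, the sketch below scans an outbound payload for patterns resembling US Social Security and payment card numbers. Real DLP products use far richer classifiers; the patterns here are deliberately crude.

```python
import re

# Crude pattern matchers for two common sensitive data types
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(payload: str) -> list:
    """Return the sensitive data types detected in an outbound payload."""
    return [name for name, pat in PATTERNS.items() if pat.search(payload)]

hits = scan_outbound("Invoice for 123-45-6789, card 4111 1111 1111 1111")
if hits:
    print("blocking transmission; detected:", hits)  # ['ssn', 'credit_card']
```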

Protective Controls
Protective controls are intended to shield data from unauthorized disclosure, modification, erasure, recovery and distribution. 

Tools such as data encryption and data masking are good practices for protecting data from AI-enabled searching, gathering, correlation and malicious use. The new normal may be to encrypt all data at rest and in transit, but this alone may give organizations a false sense of data security. Data in use, in print or on nonsecured devices can still be targets of attack and disclosure, and any device with a camera can be used to gather data and make it available to AI technology. For more complete protection, enterprises should implement non-cyber-related security and privacy controls as well.
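
A minimal sketch of encryption at rest combined with display masking follows, using the widely available Python cryptography package; the card number is invented and, in practice, keys belong in a managed key store.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # in practice, keep keys in a managed key store
fernet = Fernet(key)

# Encrypt at rest: only ciphertext is written to storage
ciphertext = fernet.encrypt(b"4111111111111111")

# Mask for display: even authorized screens see only the last four digits
def mask(card_number: str) -> str:
    return "*" * (len(card_number) - 4) + card_number[-4:]

plaintext = fernet.decrypt(ciphertext).decode()
print(mask(plaintext))  # ************1111
```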

Connecting personal computing devices to the organization’s network (i.e., the intranet) should be prohibited because those devices can impact the organization in many ways. Examples of mobile device malicious activity include informing competitors of marketing and sales plans, releasing pricing and purchasing information, identifying enterprises in the supply chain (which creates the potential for cyberexploitation), and exposing sensitive pending copyrights, strategic efforts and more. If a personal computing device contains AI, it could easily gather this information because of insufficient cybersafeguards.

Protective device software can also be a useful tool in reducing the risk posed by AI devices. Enterprises should deploy organization-owned and centrally managed devices that have antivirus and antimalware software installed, and employees should be encouraged to install those same software tools on their personal mobile devices.


Along those lines, privacy assistant apps for mobile devices allow users to create policies and, over time, make inferences that predict how and when the user would like data to be collected and used. A privacy app can implement baseline settings on the device and monitor new apps for threats and privacy weaknesses. ML can predict outcomes and potentially operate autonomously to block malware. ML involves creating models based on malware samples to determine whether activity on a computer is normal; dissected malware data are needed to build such a model.
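
A compact sketch of that modeling idea appears below, using scikit-learn’s IsolationForest as one possible way to flag abnormal activity. The per-process features and data are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest  # pip install scikit-learn

# Hypothetical per-process features: [CPU %, outbound connections, files touched]
normal_activity = np.random.RandomState(0).normal(loc=[20, 5, 30],
                                                  scale=[5, 2, 8],
                                                  size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)  # learn what "normal" activity looks like

suspicious = np.array([[95, 400, 2000]])  # malware-like burst of activity
print(model.predict(suspicious))          # -1 flags an anomaly, 1 is normal
```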

Privacy information can be a target for malicious AI usage. Protective measures include account authentication, access time-out controls, remote access detection, data encryption, control testing and data sanitization/destruction.2
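
Of these measures, the access time-out control is simple to illustrate. The sketch below expires an idle session after a fixed period; the 15-minute limit is an assumed policy value.

```python
import time

IDLE_TIMEOUT = 15 * 60  # seconds; hypothetical policy value

class Session:
    def __init__(self):
        self.last_activity = time.monotonic()

    def touch(self):
        """Record user activity, resetting the idle clock."""
        self.last_activity = time.monotonic()

    def is_expired(self) -> bool:
        """True once the session has been idle past the timeout."""
        return time.monotonic() - self.last_activity > IDLE_TIMEOUT

s = Session()
s.touch()
print(s.is_expired())  # False immediately after activity
```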

Detective Practices and Tools
The detection of cyberintrusions, data breaches and misuse has become more difficult due to the speed of the attacker (e.g., AI, botnet), but it is an essential ingredient in the fight to safeguard data. Detection takes a variety of forms, including data collection and file configuration best practices, specialized network tools (e.g., intrusion detection and prevention systems), data filtering and analysis software, and cyberforensic tools.

Best practices for data collection, which may help reduce the AI-related risk of cyberintrusion and data misuse, include tracking the data source, auditing data access and use, and continuous monitoring and control. Data collection practices could be governed through partnerships between government and industry and by clarifying legal requirements.
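
The auditing practice can be sketched as a small wrapper that records who accessed which data and when. The function and logger names below are placeholders.

```python
import functools
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def audited(func):
    """Record every access to a data-returning function: caller, target, time."""
    @functools.wraps(func)
    def wrapper(user, *args, **kwargs):
        audit_log.info("%s accessed %s at %s",
                       user, func.__name__,
                       datetime.now(timezone.utc).isoformat())
        return func(user, *args, **kwargs)
    return wrapper

@audited
def read_customer_record(user, customer_id):  # hypothetical data accessor
    return {"customer_id": customer_id}

read_customer_record("analyst7", customer_id=42)
```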

AI exists in mobile devices as well, and those devices analyze any data entered or collected (purposely or accidentally) with algorithms that can be used for productive or malicious purposes. It has been stated that:

Enterprise mobility management tools can be set up to look at only corporate apps, but still be able to whitelist and blacklist any apps to prevent malware. That way, IT doesn't invade users' privacy on personal apps.3

This is another example of a detection tool, but it may not be quick enough to address attacks executed with AI technology.

Using AI to Address Cyberattacks
Though the use of AI may introduce risk, it can also be used to predict attacks by combing through data and clustering it into meaningful patterns with unsupervised ML to detect suspicious activity. AI can present that activity to human analysts, who confirm which events are actual attacks, and the feedback is then incorporated into the models for the next set of data. The data needed for prediction are those accumulated by network sensors and audit logs.
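
A stripped-down sketch of that loop follows: cluster events with an unsupervised algorithm, surface the events that fit no cluster to an analyst, and feed the confirmed labels into the next model. DBSCAN is one choice among many, and the event features are invented.

```python
import numpy as np
from sklearn.cluster import DBSCAN  # pip install scikit-learn

# Hypothetical event features drawn from network sensors and audit logs,
# e.g., [requests/minute, bytes out, distinct ports contacted]
rng = np.random.RandomState(1)
events = np.vstack([rng.normal([50, 10, 3], 2, size=(200, 3)),  # routine traffic
                    [[400, 900, 60]]])                          # one odd burst

labels = DBSCAN(eps=5.0, min_samples=5).fit_predict(events)

# Events labeled -1 fit no cluster; queue them for analyst confirmation
for idx in np.where(labels == -1)[0]:
    print("analyst review needed for event", idx, events[idx])
# Confirmed attack/benign labels then become training data for the next model
```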

Conclusion

One of the reasons that AI was created was to process, correlate and produce usable/actionable information. AI, however, can be used for malicious purposes and is one of the fastest-growing cyberthreats (aside from bots and botnets) to information security and privacy. So how can security professionals respond to the latest cyberthreat AI poses?


In today’s world of increasing cyberthreats and data breaches, organizations are in a reactive mode, struggling to get ahead of the cyberthreat curve. Reducing security and privacy incident response times is critical to winning the battle against cyberattacks and giving organizations the means to combat AI cyberattacks.

Used with good intentions, AI can look for and address security and privacy program weaknesses. The growth and complexity of Internet bots and botnets show that hackers are sophisticated users who exploit technology for their own gain. No one will be surprised if AI becomes a tool for not only organized crime, but also foreign governments, organizations, and anyone with the know-how and ability to exploit it. But organizations can leverage the power of AI by using it not only for intrusion prevention and detection but also for forensics. Organizations should not wait to implement AI technologies because they can be strong tools in the fight against cybercrime.

Endnotes

1 Wlosinski, L.; “Data Loss Prevention—Next Steps,” ISACA Journal, vol. 1, 2018, http://wup.ozone-1.com/archives
2 Wlosinski, L.; “Key Ingredients to Information Privacy Planning,” ISACA Journal, vol. 4, 2017, http://wup.ozone-1.com/archives
3 Provazza, A.; “Artificial Intelligence Data Privacy Issues on the Rise,” TechTarget, 26 May 2017, http://searchmobilecomputing.techtarget.com/news/450419686/Artificial-intelligence-data-privacy-issues-on-the-rise

Larry G. Wlosinski, CISA, CRISC, CISM, CAP, CBCP, CCSP, CDP, CIPM, CISSP, ITIL V3, PMP
Is a senior consultant at Coalfire-Federal with more than 20 years of experience in IT security and privacy. Wlosinski has been a speaker on a variety of IT security and privacy topics at US government and professional conferences and meetings; he has written numerous articles for magazines and newspapers, reviewed various ISACA publications, and written questions for the Certified Information Security Manager (CISM) exam.