Can AI be Trusted?

Accountants may remember the phrase, “Figures do not lie, but liars figure.”  When questioning suspect data results, people have often simply been told, “This is what the computer results showed.”  For business people, there is a new “expert” on its way, and arguing with it may be useless.

In June 2022, Microsoft released the Microsoft Responsible AI Standard, v2.[1]  Its stated purpose is to “define product development requirements for responsible Ai.”  Perhaps surprisingly, the document contains only one mention of bias in artificial intelligence (Ai): algorithm developers need to be aware of the potential for users to over-rely on Ai outputs, known as ‘automation bias.’  Microsoft seems more concerned with bias from users aimed at its products than bias from within its products adversely affecting users.  This is good commercial responsibility but poor social responsibility: there are many examples of algorithmic bias having a negative effect on individuals or groups of individuals.

Bias is one of three primary concerns about artificial intelligence in business that have not yet been solved: 1.) hidden bias creating false results; 2.) the potential for misuse by users and abuse by attackers; and 3.) algorithms returning so many false positives that their use as part of automation is ineffective.[2]

When Ai was first introduced into cybersecurity products, it was described as a defensive silver bullet.  There is no doubt that it has some value, but there is a growing reaction against faulty algorithms, hidden bias, false positives, abuse of privacy, and the potential for abuse by criminals, law enforcement and intelligence agencies.  According to a professor of psychology and neural science at New York University, writing in Scientific American on 6 June 2022, the problem lies in the commercialization of a still-developing science: “The subplot here is that the biggest teams of researchers in Ai are no longer to be found in the academy, where peer review used to be coin of the realm, but in corporations.  And corporations, unlike universities, have no incentive to play fair.  Rather than submitting their splashy new papers to academic scrutiny, they have taken to publication by press release, seducing journalists and sidestepping the peer review process.  We know only what the companies want us to know.”  The result is that we hear about Ai positives, but not about Ai negatives.

The executive director at the Center on Privacy and Technology at Georgetown Law came to a similar conclusion.  On 8 March 2022, she wrote, “Starting today, the Privacy Center will stop using the terms 'artificial intelligence', 'Ai', and 'machine learning' in our work to expose and mitigate the harms of digital technologies in the lives of individuals and communities… One of the reasons that tech companies have been so successful in perverting the original imitation game [the Turing Test] as a strategy for the extraction of capital is that governments are eager for the pervasive surveillance powers that tech companies are making convenient, relatively cheap, and accessible through procurement processes that evade democratic policy making or oversight.”

The pursuit of profit is perverting the scientific development of artificial intelligence.  With such concerns, we need to ask ourselves if we can trust Ai in the products we use to deliver accurate, unbiased decisions without the potential for abuse (by ourselves, by our governments and by criminals).

Some recent AI failures:

Autonomous vehicle 1. A Tesla on autopilot drove directly toward a worker carrying a stop sign, and only slowed down when the human driver intervened. The Ai had been trained to recognize a human, and trained to recognize a stop sign, but had not been trained to recognize a human carrying a stop sign.

Autonomous vehicle 2. On 18 March 2018, an Uber autonomous vehicle drove into and killed a pedestrian pushing a bicycle.  According to NBC at the time, the Ai was unable to “classify an object as a pedestrian unless that object was near a crosswalk.”

Educational assessment.  During the Covid-19 lockdowns in 2020, students in the UK were awarded examination results assigned by an Ai algorithm.  Many (about 40%) were considerably lower than expected. The algorithm placed undue importance on the historical performance of different schools.  As a result, students from private schools and previously high-performing state schools had an unmerited advantage over students from other schools, who suffered accordingly.

Tay. Tay was an Ai chatbot launched on Twitter by Microsoft in 2016.  It lasted just 16 hours.  It was intended to be a slang-filled chatbot that learned by imitation, but it was shut down after it tweeted, “Hitler was correct to hate the Jews.”

Candidate selection. Amazon wished to automate its candidate selection for job vacancies – but the algorithm turned out to be sexist and racist, favoring white males.

Mistaken identity. During the Covid-19 lockdowns, a Scottish football team live-streamed a match using Ai-based ball-tracking for the camera.  But the system repeatedly mistook the linesman’s bald head for the ball and focused on him rather than the play.

Application rejection.  In 2016, a mother living in Connecticut requested permission for her son, who had just awakened from a six-month accident-induced coma, to move into her apartment.  The request was rapidly refused without explanation, and her son was sent to a rehabilitation center for more than a year while his mother challenged the decision.  The landlord did not know the reason; he was using an Ai screening system supplied by a third party.  Lawyers eventually found the cause: an earlier citation for shoplifting that had been withdrawn.  But the Ai system had simply refused the request.  A staff attorney for the State of Connecticut Fair Housing Center (CFHC), representing the mother, commented, “He was blacklisted from housing, despite the fact that he is so severely disabled now and is incapable of committing any crime.”

There are many, many more examples of Ai failures, but a quick glance at these highlights two major causes: a failure in design caused by unintended biases in the algorithm, and a failure in learning.  The autonomous vehicle examples were failures in learning.  They can be rectified over time by increasing the learning, but at what cost if the failure is only discovered when it happens?  And it must be asked whether it is even possible to learn every variable that exists now or might exist in the future.

The exam results and Amazon recruitment events were failures in design: the Ai included unintended biases that distorted the results.  The question here is whether it is even possible for developers to exclude biases they are not aware of holding.  Beyond these design and learning failures, there is also the potential for misuse and abuse.  Misuse entails using the Ai for purposes not originally intended by the developer.  Abuse involves actions such as poisoning the data used to teach the Ai.  Generally speaking, misuse is performed by the lawful owner of the product, while abuse involves actions by a third party, such as cybercriminals, to make the product return manipulated, incorrect decisions.
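
As a rough illustration of the abuse case, the sketch below uses entirely synthetic data and a deliberately trivial nearest-centroid “model” (no relation to any real product) to show how a handful of poisoned training labels can shift a decision boundary enough for a suspicious sample to slip past detection.

```python
# Minimal, hypothetical sketch of training-data poisoning: all data is synthetic
# and the "model" is a trivial nearest-centroid classifier, not any real product.
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: one feature ("anomaly score"); benign vs malicious samples
benign = rng.normal(loc=2.0, scale=0.5, size=200)
malicious = rng.normal(loc=6.0, scale=0.5, size=200)

def boundary(benign_scores, malicious_scores):
    """Nearest-centroid decision boundary: midpoint between the class means."""
    return (benign_scores.mean() + malicious_scores.mean()) / 2.0

clean_boundary = boundary(benign, malicious)

# Abuse: an attacker slips 50 malicious-looking samples into the *benign* class
poison = rng.normal(loc=6.0, scale=0.3, size=50)
poisoned_boundary = boundary(np.concatenate([benign, poison]), malicious)

print(f"clean boundary:    {clean_boundary:.2f}")    # roughly 4.0
print(f"poisoned boundary: {poisoned_boundary:.2f}") # pushed up toward roughly 4.4

# A suspicious sample scoring 4.2 is flagged by the clean model
# but slips past the poisoned one.
sample = 4.2
print("clean model flags it:   ", sample > clean_boundary)
print("poisoned model flags it:", sample > poisoned_boundary)
```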

Misuse of Ai:  A research lead at Vectra Ai said recently that the detection and response type of Ai used in many cybersecurity products is largely immune to the hidden bias that plagues other domains.  The inclusion of hidden bias is at its worst when a human-developed algorithm is passing judgment on other humans, and in that scenario the real question is one of ethics.  He points out that such applications are designed to automate processes that are currently performed manually, and that the manual process has always included bias.  “Credit applications, and rental applications… these areas have always had discriminatory practices.  The US has a long history of redlining and racist policies, and these existed long before Ai-based automation.”

His technical concern is that bias is harder to find and understand when buried deep in an Ai algorithm than when it is found in a human being.  “You might be able to see the matrix operations in a deep learning model,” he continued. “You might be able to see the calculations that go on and lead to the actual classification; but that won't necessarily explain why.  It will just explain the mechanism.  I think at a high level, one of the things that we're going to have to do as a society is to ask: is this something that we think it's appropriate for Ai to act on?”
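
The point about seeing the mechanism without seeing the “why” can be illustrated with a deliberately tiny, hypothetical model: every intermediate calculation below is fully inspectable, yet none of the numbers explains which human-meaningful property of the input drove the decision.

```python
# Minimal sketch (made-up weights, purely illustrative): every matrix operation in
# this tiny "deep" model is visible, yet nothing in the numbers explains *why* the
# classification comes out the way it does.
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical 2-layer network: 4 input features -> 3 hidden units -> 2 classes
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
W2, b2 = rng.normal(size=(2, 3)), rng.normal(size=2)

x = np.array([0.2, -1.3, 0.7, 0.0])    # one input sample

hidden = np.maximum(0.0, W1 @ x + b1)  # ReLU layer: fully inspectable
logits = W2 @ hidden + b2              # output layer: fully inspectable

print("hidden activations:", hidden)
print("logits:            ", logits)
print("predicted class:   ", int(np.argmax(logits)))
# Every number above is the complete "mechanism", but none of it says
# which human-meaningful property of x drove the decision.
```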

The inability to understand how deep learning reaches its conclusions was confirmed by an MIT/Harvard study published in The Lancet on 11 May 2022.  The study found that Ai could determine race from medical images such as X-rays and CT scans alone, but nobody understood how.  The possible effect is that medical Ai systems may be taking more into account than expected: race, ethnicity, sex, whether the patient is incarcerated, and more.

An associate professor of medicine at Harvard Medical School, and one of the authors, commented, “Just because you have representation of different groups in your algorithms, that doesn’t guarantee it won't perpetuate or magnify existing disparities and inequities.  Feeding the algorithms with more data with representation is not a panacea.  This paper should make us pause and truly reconsider whether we are ready to bring AI to the bedside.”

The problem also encroaches on the cybersecurity domain. On 22 April 2022, Microsoft added its Communications Compliance Leavers Classifier, part of the Purview suite of governance products, to its product roadmap.  The product reached Preview stage in June 2022 and is slated for General Availability in September 2022.  In Microsoft’s own words, “The Leavers classifier detects messages that explicitly express intent to leave the organization, which is an early signal that may put the organization at risk of malicious or inadvertent data exfiltration upon departure.”

In a separate document published on 19 April 2022, Microsoft noted, “Microsoft Purview brings together data governance from Microsoft Data and Ai, along with compliance and risk management from Microsoft Security.”  There is nothing that explicitly ties the use of Ai to the Leavers Classifier, but circumstantial evidence suggests it is used.  Microsoft was asked for an interview “to explore the future uses and possible abuses of AI.”  Microsoft’s reply was, “Microsoft has nothing to share on this at the moment, but we will keep you in the loop on upcoming news in this area.”  With no direct knowledge of exactly how the Leavers Classifier will work, what follows should not be taken as a critique or criticism of the Microsoft product, but a look at potential problems for any product that uses what amounts to a psycholinguistic Ai analysis of users’ communications.

So how do you assess the ethics, bias, false positives, and potential for misuse of such an algorithm?  On ethics, the question that must be asked is whether this is a right and proper use of technology. One researcher opined, “My intuition is that monitoring communications to determine whether someone is considering leaving, especially if the results could have negative outcomes, would not be considered by most people as an appropriate thing to do.”

False positives in Ai are generally caused by unintended bias.  This can be built into the algorithm by the developers, or ‘learned’ by the Ai through an incomplete or error-strewn training dataset. We can assume that the big tech companies have massive datasets on both people and communications, but unintended bias in the algorithm would be difficult to prevent and even harder to detect. “I think there will always be some degree of error in these systems,” said a researcher.  “Predicting whether someone is going to leave is not something that humans can do effectively.  It is difficult to see why a future Ai system won’t misprocess some of the communications, some of the personal motivations and so on, in the same way that humans do.”  He added, “I may have personal reasons for talking a certain way at work.  It may have nothing to do with my desire to stay or not.  I might have other motivations for staying or not that simply won't be reflected in the types of data that those systems have access to.  There's always going to be a degree of error.”
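
As a rough, entirely hypothetical illustration of how an incomplete training dataset produces group-specific false positives, the sketch below “trains” a trivial one-threshold classifier on synthetic data in which one group is barely represented; the groups, scores, and “leaver” framing are invented for illustration and imply nothing about any real product or dataset.

```python
# Hypothetical sketch: an incomplete training set (group B barely represented)
# yields a threshold that generates far more false positives for group B.
# All data is synthetic; the "classifier" is a single learned threshold.
import numpy as np

rng = np.random.default_rng(2)

def make_group(n, benign_loc):
    """Benign writing style differs by group; 'leaver' messages look alike."""
    benign = rng.normal(loc=benign_loc, scale=1.0, size=n)
    leaver = rng.normal(loc=5.0, scale=1.0, size=n // 4)
    x = np.concatenate([benign, leaver])
    y = np.concatenate([np.zeros(len(benign)), np.ones(len(leaver))])
    return x, y

# Training data: group A is heavily represented, group B barely at all
xa, ya = make_group(1000, benign_loc=0.0)
xb, yb = make_group(50, benign_loc=2.0)  # group B's normal style simply scores higher
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Training": choose the threshold that minimises error on the skewed training set
candidates = np.linspace(x_train.min(), x_train.max(), 500)
errors = [np.mean((x_train > t).astype(float) != y_train) for t in candidates]
threshold = candidates[int(np.argmin(errors))]

# Fresh, benign-only test data: compare false-positive rates per group
for name, loc in [("group A", 0.0), ("group B", 2.0)]:
    benign_test = rng.normal(loc=loc, scale=1.0, size=5000)
    fpr = np.mean(benign_test > threshold)
    print(f"{name}: false-positive rate = {fpr:.1%}")
```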

Misuse involves how companies will use the data provided by the system.  It is difficult to believe that ‘high-likelihood leavers’ will not be the first staff laid off in economic downturns, that the results won’t be used to help choose staff for promotion or demotion, or that they won’t figure in pay reviews.  And remember that all of this may be based on a false positive that we can neither predict nor understand.  There is a wider issue as well: if companies can obtain this technology, it is hard to believe that law enforcement and intelligence agencies will not do the same. The same mistakes can be made, but with more severe results, and the consequences will be far more extreme in some countries than in others.

The CEO and founder of Adversa.ai is more worried about the intentional abuse of Ai systems by manipulating the system’s learning process.  “Research studies conducted by scientists, and proved by our AI red team during real assessments of AI applications, prove that sometimes, in order to fool an AI-driven decision-making process, be it either computer vision or natural language processing or anything else, it's enough to modify a very small set of inputs,” he told media.  He points to the classic phrase ‘eats, shoots and leaves’, where just the inclusion or omission of punctuation changes the meaning between a terrorist and a vegan.  “The same works for Ai, but the number of examples is enormously bigger for each application and finding all of them is a big challenge,” he continued.  Adversa.ai has already twice demonstrated how easy it is to fool Ai-based facial recognition systems: first by showing how people can make the system believe they are Elon Musk, and second by showing how an apparently identical image can be interpreted as multiple different people.  This principle of manipulating the Ai learning process can be applied by cybercriminals to almost any cybersecurity Ai tool.
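
As a purely illustrative sketch of the “modify a very small set of inputs” idea, the toy example below uses a linear scorer with made-up weights (not any real detection product): nudging each feature by at most 0.2 is enough to flip a flagged sample into one that is missed.

```python
# Hypothetical adversarial-evasion sketch: a toy linear detector with made-up
# weights; a small, bounded nudge to each feature flips its decision.
import numpy as np

# Toy linear detector: score > 0 means "flag as malicious"
w = np.array([0.8, -0.5, 0.3, 0.9])
b = -0.2

x = np.array([0.3, 0.1, 0.4, 0.4])   # a sample the detector currently flags
score = float(w @ x + b)
print(f"original score:  {score:+.2f} -> {'flagged' if score > 0 else 'missed'}")

# Attacker nudges each feature by at most 0.2 in the direction that lowers the score
eps = 0.2
x_adv = x - eps * np.sign(w)
adv_score = float(w @ x_adv + b)
print(f"perturbed score: {adv_score:+.2f} -> {'flagged' if adv_score > 0 else 'missed'}")
print("largest change to any single feature:", eps)
```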

The bottom line is that artificial intelligence is currently more artificial than intelligent.  Many believe we are many years away from having computers with true artificial intelligence, if that is ever possible.  Today, Ai is best seen as a tool for automating existing human processes, on the understanding that it will achieve the same success and failure rates that already exist, but do so much faster and without the need for a costly team of analysts to achieve those successes and make those mistakes.  Microsoft’s warning about users’ automation bias, the over-reliance on Ai outputs, is something that every user of Ai systems should consider.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@wapacklabs.com.

Weekly Cyber Intelligence Briefings:

REDSHORTS - Weekly Cyber Intelligence Briefings

https://attendee.gotowebinar.com/register/5504229295967742989

 

[1] https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf

[2] https://www.securityweek.com/bias-artificial-intelligence-can-ai-be-trusted
