How to Mitigate the Risks of AI Black Box


Artificial Intelligence (AI) has been in the news a lot recently, and the phrase “AI Black Box” has been getting some serious media attention. Some commentators say it is part of the industry’s growing pains; others argue it is inhibiting the acceptance of AI. AI affects every one of us as it makes decisions on our behalf, often without our being aware of it.

Netflix and Amazon use AI to make product recommendations, and Facebook uses it to suggest friends. Google has embraced it to tailor advertisements on our search pages, and Microsoft uses it in Office to auto-correct documents.

So, why is the AI black box such a big issue?

AI black box

The issue with AI is that it can be used for automated decision making without humans being able to see or understand the underlying algorithms and data used to reach the decision. It is already being combined with Big Data to automate areas such as credit checking, health prognoses and other behavioural assessments.

People therefore instinctively distrust AI. Healthcare is seen as a major growth area for AI, but health professionals are finding AI processing difficult to accept. This reduces take-up and leads to continual questions over ethical behaviour, as well as legal actions seeking to limit AI decision making.

Any AI process that interacts at a human level needs to be trusted by people before they will fully accept AI replacing, rather than merely extending, the human experience. Some academics have expressed concern that at some point in the not-too-distant future AI intelligence will exceed that of humans.

In short, to raise trust levels, AI needs to be more open, to have “explainability”, which has grown into a research field in its own right.

Three problems that must be mitigated are:

  1. Opaque Processing


    As we stated above, one major problem that needs to be mitigated is the opacity of many AI decision-making systems. People can be denied access to the algorithms because they are “proprietary” to the organisation that developed them, or, in some cases, because the AI system itself has derived them from the data it receives.

    They cannot be openly vetted, or in the academic phrase “peer-reviewed”, to ensure that they conform to any applicable legislation. A second problem is that they are usually expressed in terms that people find difficult to understand. A true black box: we see what goes in and what comes out, but what happens in between is a mystery.

    In some jurisdictions, for example the EU, this can be a major issue. Under EU regulations (the GDPR), an individual has the right to receive “meaningful information” about the logic involved in automated decisions that have a “legal or similarly significant effect” on them. Rejection of a credit application is a good example.

    If that information cannot be, or is not, provided, the use of the AI application may be prohibited within the EU.

    That must be attended to in order to mitigate non-acceptance.
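    The contrast with a black box can be made concrete. The sketch below shows a deliberately transparent credit-scoring model: because each feature's contribution to the outcome is computed explicitly, the "meaningful information" an applicant is entitled to can simply be printed. The feature names, weights and threshold are illustrative assumptions, not drawn from any real scoring system.

    ```python
    # Minimal sketch of an "explainable" linear credit score, where every
    # feature's contribution to the decision can be disclosed to the applicant.
    # Weights, threshold and feature names are hypothetical.

    WEIGHTS = {"income": 0.4, "years_employed": 0.3, "missed_payments": -0.5}
    THRESHOLD = 1.0

    def score_with_explanation(applicant):
        """Return (approved, per-feature contributions) so the logic is inspectable."""
        contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
        total = sum(contributions.values())
        return total >= THRESHOLD, contributions

    approved, why = score_with_explanation(
        {"income": 3.0, "years_employed": 2.0, "missed_payments": 1.0}
    )
    print(approved)  # True: 1.2 + 0.6 - 0.5 = 1.3, which meets the 1.0 threshold
    for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
        print(f"{feature}: {contribution:+.2f}")
    ```

    A real system is rarely this simple, but the principle scales: any model whose per-feature contributions can be reported in this way can be vetted and challenged, whereas a black box cannot.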

  2. Apparent Bias


    Modern AI is built on machine learning. It uses training data culled from Big Data databases to build its models and, over time, improve their accuracy. However, this can be inherently unsafe, since the source databases themselves may contain prejudices and other biases that are carried into the training data.

    This issue is intimately linked to the quality of the data sources used as training data. As stated above, bias in the data can deliberately or inadvertently cause the AI system to make decisions that are correct according to the algorithm, but incorrect or unfair in the eyes of a human observer.

    There have already been several cases, including Amazon’s experimental recruiting tool, where it has been conclusively demonstrated that AI processes can discriminate, in that instance against women.

    Until these apparent biases are addressed and removed or explained, the level of trust in AI systems will remain low.
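    One way to surface apparent bias is to audit the training data itself before a model ever sees it. The sketch below applies the "four-fifths" disparate-impact check used in US employment-selection guidance: a group whose approval rate falls below 80% of the best-treated group's rate is flagged for review. The group labels and records are made-up illustrative data.

    ```python
    # Minimal sketch: auditing labelled training data for disparate impact.
    # A group is flagged when its approval rate is below 80% of the highest
    # group's rate (the "four-fifths" rule). Data below is illustrative.

    def approval_rates(records):
        """Approval rate per group from (group, approved) pairs."""
        totals, approved = {}, {}
        for group, ok in records:
            totals[group] = totals.get(group, 0) + 1
            approved[group] = approved.get(group, 0) + (1 if ok else 0)
        return {g: approved[g] / totals[g] for g in totals}

    def flag_disparate_impact(records, threshold=0.8):
        """True for each group whose rate falls below threshold * best rate."""
        rates = approval_rates(records)
        best = max(rates.values())
        return {g: rate / best < threshold for g, rate in rates.items()}

    # Group A: 8 of 10 approved; group B: 4 of 10 approved.
    data = [("A", True)] * 8 + [("A", False)] * 2 + \
           [("B", True)] * 4 + [("B", False)] * 6
    print(flag_disparate_impact(data))  # {'A': False, 'B': True}
    ```

    A check like this does not remove bias, but it makes the bias visible and explainable before the model is trained, which is precisely what trust in AI systems requires.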

  3. Electoral Processes


    AI is now increasingly used in the democratic process to analyse campaign programmes and voting results, identifying and targeting groups of voters and predicting their behaviour almost at an individual level.

    This poses a serious risk to electoral systems. If the voters’ register is manipulated by AI, the information presented to the electorate deliberately skewed, or the votes cast tampered with, then the entire electoral process is compromised.

    There is considerable suspicion, but little provable evidence, that AI has recently been used to subvert electoral processes, for example in the 2016 Brexit referendum in the UK and in the US Presidential Election that brought Donald Trump to power.

    Media speculation has done little to increase trust in AI in this regard.

AI has tremendous potential to ease our existence by removing drudgery from our lives, but it needs to become much more open and transparent for full acceptance.

