Is the AI in your Christmas present lying to you to make you feel good?

Natalie McEvoy

Algorithms are increasingly used to predict behaviour and create content, but can we trust AI, or the interests of those behind it?

More and more gifts around the tree contain forms of Artificial Intelligence: home speaker systems, drones, robots, electronic notepads, even dog bowls. Each contains programming that gives the item some degree of autonomy to make its own decisions.

For example, top-flight drones are programmed to avoid colliding with a tree or wall regardless of human direction, which, for the most part, works to your advantage as the purchaser.

How these AI systems are set up is often opaque. Built on a set of unchallenged human assumptions, AI outcomes are generally tuned to prioritise buyer engagement, growth and profit over user satisfaction and happiness. AI-enabled devices will take you first and foremost to results that keep you engaged, will set goals that your nearest statistical comparable seems capable of reaching, or will interpret data so as to fit you into the most appropriate sponsored result.
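To make that priority concrete, here is a deliberately simplified sketch in Python. The titles and scores are invented for illustration, and real recommender systems are far more elaborate, but the objective mirrors the one described above: the system ranks by predicted engagement, not by how satisfied the results leave you.

```python
# A hypothetical sketch of engagement-first ranking. Titles and scores
# are invented for illustration; real systems are far more complex.

items = [
    # (title, predicted_minutes_engaged, user_reported_satisfaction)
    ("Autoplay cliffhanger series", 95, 0.55),
    ("Documentary you searched for", 40, 0.90),
    ("Sponsored placement", 60, 0.40),
]

# The objective: rank by predicted engagement, not by satisfaction.
ranked = sorted(items, key=lambda item: item[1], reverse=True)

for title, minutes, satisfaction in ranked:
    print(f"{title}: ~{minutes} min engaged, satisfaction {satisfaction:.0%}")
```

The documentary you actually wanted comes last; the choice of objective, engagement over satisfaction, is the part that matters.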

For the most part, current AI is built by a narrow band of tech industry individuals who do not fully reflect the diversity of society. We know, for example, that facial recognition software lags behind in accurately identifying black faces because of the relative paucity of training data underpinning the product. AI, as futuristic as it sounds, is for now still a human product, with human bias.

Let’s relate this back to a work context. AI can’t in fact tell you who the best candidate for a job is. What it can do is take your records on candidates who have traditionally done well and apply them as a filter over potential candidates; the resulting “more of the same” bias can then be blamed on the algorithm. AI simply isn’t nuanced enough to consider who isn’t presently around the table but might do your business some good. Past-success bias is every bit as true of your home devices, as you’ll know from Netflix recommendations and smart shopping lists.
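For illustration, here is a hedged Python sketch of that “more of the same” filter. The attributes and names are hypothetical, but the logic, scoring candidates purely on their similarity to past hires, is the mechanism described above.

```python
# A hypothetical sketch of "more of the same" screening: candidates are
# scored purely on similarity to past successful hires, so anyone who
# differs from the historical profile is filtered out, however capable.

past_hires = [
    {"degree": "Oxbridge", "hobby": "rowing"},
    {"degree": "Oxbridge", "hobby": "debating"},
]

candidates = [
    {"name": "A", "degree": "Oxbridge", "hobby": "rowing"},
    {"name": "B", "degree": "Redbrick", "hobby": "robotics"},
]

def similarity(candidate, hire):
    # Count the attributes a candidate shares with one past hire.
    return sum(candidate.get(k) == v for k, v in hire.items())

def score(candidate):
    # A candidate's score is their best match against any past hire.
    return max(similarity(candidate, hire) for hire in past_hires)

# Candidate B may be exactly who the business needs, but scores zero.
for c in sorted(candidates, key=score, reverse=True):
    print(c["name"], score(c))
```

Candidate B never surfaces, not because they would do badly, but because no one like them appears in the historical data.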

As it develops, it is possible that we will see AI programmed so effectively that it begins to make its own decisions and recommendations. Where will that leave decisions about what feedback you get, or what you need to see or hear next? No one really knows where “deep learning” will take us, which is one of the reasons why automated cars are still only at the experimental stage.

Smart technology is already capable of manipulation: it is programmed to know how lighting can affect the mood of a room, and even to flag when one voice is dominating a conversation. Data harvested from a smart TV is likely collected to establish your viewing habits and to push recommendations that increase the likelihood of you spending before you need to.

How does this trouble us as privacy and defamation specialists? Smart technology has to be embraced with caution and limits, mindful of what data is being collected, how it is being used and how securely it is being held. Problems arise in particular when a device is capable of building too detailed a picture of our home security or personal movements.

As for defamation, we are starting to see the emergence of journalism by algorithm, which raises a real question as to how AI will reliably detect satire and avoid reporting it as fact. We stand on an interesting precipice.

Enjoy the technology, but be mindful that the AI is not looking out for you; it is trying to keep that conversation going. Will it lie to you in pursuit of that engagement priority? Yes, of course it will.

But as any parent whose child has asked Alexa if Santa is real will know, the lies of AI are sometimes welcome and justified.