If the AI Goliath is here to stay, how do we become David? Five things to arm yourself with.
Join me as I look inside the Devil's eye ...
Hello over-thinkers! I hope your week has not been too bad so far.
I’m not pretending I’ve lost my fear of AI. I’m not suggesting we’ll ever cut off its head like David did. But I’m better equipped to face the Devil now than I was a month ago. I’ve been busy doing a course on AI & Ethics. What’s the point if it’s such an unstoppable force? Well, I’ve always been a bit like Oedipus - in some ways. I’d rather know the truth, even if it hurts to gain insight in the process. The good news is I definitely haven’t accidentally married my mother.
My previous ‘knowledge’ about the risks posed by AI can be summarised in a few bullet points.
Potential job losses due to increased automation (‘These machines will steal our jobs…everybody is doomed!!’)
The threat to the creative industries due to generative AI (‘Some really terrible art and writing is flooding the internet and the genuine stuff is being stolen.’)
The government is using AI in surveillance (‘Big Brother is watching us’).
Cookies are highly suspicious (The name is definitely deceptive. They’re nowhere near as sweet as they sound).
DeepSeek is battling OpenAI like the Sith and the Jedi in a kind of techy trade war. Only they’re both Sith. (High-school children also think Chinese DeepSeek is way better than American OpenAI for plagiarising homework. The added bonus is that DeepSeek is free to use, and downloadable locally, so you don’t give up your data).
The first four points don’t bother Donald Trump at all, but the last one has really got under his skin. (Whoever said AI was all bad?)
Suffice it to say, I was no expert. But ‘better the Devil you know,’ as they say. Why not find out more about that thing which is going to kill us all off and steal our jobs? I also wanted to understand the hype. AI this. AI that. Could anyone even utter a sentence any more without those two pesky letters creeping into it?
AI fever
My husband, a programmer, believes the massive hype around AI masks a massive pile of something else. The level of excitement is too great. Dollar signs in raving eyeballs remind us both too much of Tulipmania.1 (Watch the movie ‘Tulip Fever’ if you haven’t yet2 - it was panned by the critics, but I loved it, particularly the scene where one of the characters blows his life savings on what looks like a scabby old onion. I also found this brilliant painting by Brueghel on the subject, which I thought you’d like. The humans have become monkeys, putting all their energy into an idea which gets peed all over. See the cute little tyke in the bottom right-hand corner).
Is AI just another Tulipmania? Will we still be talking about it in another ten years, or will we forget it like we all forgot NFTs (if we ever properly understood them in the first place) before we move on to the next fad? Is it just an expensive party balloon waiting to go pop?
Starry-eyed people are spraying money all over the place as we speak. Some of them will grow filthy rich; some may soon go bankrupt. Either way, this isn’t some Jack-in-the-Beanstalk miracle bag of money-making beans. I would personally quite like AI to be popped into a big spaceship and sent to Mars, along with Trump, Putin, Musk and the rest of the broligarchical BFFs. But the roots have taken hold too deeply. I don’t see this monster going anywhere anytime soon. So we need to know what we are dealing with. Here are five key takeaways as a little taster.
We are in YET another race
These races don’t always bode well. When I hear boasts about how big something is and how fast this or that will happen, it reminds me of former escapades, like the scramble for Africa. The nuclear race. The arms race. (My seven-year-old would want me to add the dinosaur bone wars here too3 because those brontosaurus-sized egos are highly reminiscent of the Tech Bro egos).
And the race isn’t just between CEOs in private tech companies. Governments globally are drunk on the hype and running in this mad race too, hallucinating that it will somehow magically solve all their money worries. The Global AI Index measures output and capacity. The US is number 1 on the list; the UK stands at number four.4 There’s more detail on this in the footnotes.5
People don’t generally make good decisions when they’re rushing them. They also make more mistakes. One of the participants on the course put his name into an AI system and it told him he was dead. Perhaps it would be better for everyone if we slowed down a little. The fact that it is possible to do a thing does not mean we should be doing it.
AI forces us to confront our values and the world we want to live in.
Those designing AI systems are being forced to confront their values as human beings, or at least they should be. Some see AI as a hammer. Neutral. Could be used to put up a picture. Could be used to kill someone. Others see it more like the nuclear bomb. Intrinsically ethically problematic. Either way, the system we create will reflect the values we put into it. If you put garbage in, you get garbage out. If you put racist data sets in, you’ll get algorithmic bias and racist AIs. Some have suggested inputting synthetic data so that AIs come up with solutions based on how we would like the world to be, rather than how it currently is. A nice idea, but hard to imagine ever working in practice.
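For the technically curious, here is a deliberately silly Python sketch of ‘garbage in, garbage out’. Everything in it is invented by me, not taken from the course: a pretend hiring model that simply learns the hire rate for each group from skewed historical decisions, then repeats the skew as if it were wisdom.

```python
# Toy illustration of 'garbage in, garbage out': a make-believe hiring
# model that learns each group's hire rate from biased past decisions,
# then faithfully reproduces the bias. All data is invented.

historical_decisions = [
    # (group, hired) -- deliberately skewed toy data
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", False), ("group_b", True),
]

def learned_hire_rate(group):
    outcomes = [hired for g, hired in historical_decisions if g == group]
    return sum(outcomes) / len(outcomes)

for group in ("group_a", "group_b"):
    print(group, f"predicted hire rate: {learned_hire_rate(group):.0%}")
# group_a predicted hire rate: 75%
# group_b predicted hire rate: 25%
# Nothing 'went wrong' -- the model did exactly what the data taught it.
```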
And how would we even decide on the moral values we want our AIs to live up to? Who should get to choose them? Governments or private companies? Or a combination? The speed of technology makes it difficult for governments to catch up. There might be some shared ethical principles, say for signatories to existing treaties like the European Convention on Human Rights. But the priorities of countries like the US and China differ from those of the Global South. Poorer countries legitimately want AIs to focus on combatting climate change or finding water sources. Developed countries are still focussing more on profit.
In practice we will end up with systems based on more than one set of values, but skewed towards those in the West. If values guide makers of AI systems at all. It often feels naive to think that enough people controlling our AI systems genuinely care about ethical principles. Given how much energy is consumed by powering these AIs, it feels like they care more about making a profit than they do about climate change.
A couple of examples from the course really captured my imagination. Both were in the employment context. Humanyze, a digital analytics company, uses digital badges to survey workplace interactions.6 Employees wear them so that their speech, movement, and activity can be tracked and, once AIs have analysed it all, used to drive performance. I know from speaking to people on the course that there are other companies selling this kind of product to be used within prisons, hospitals or care homes in some countries. It made for some lively discussions … I mentioned the words ‘consent’ and ‘capacity to consent’ several times.
The second example was of a woman who applied for a job at ‘Strategeion,’ a programming hub set up by US veterans which was initially aimed at employing other veterans. The company became so popular that its HR team couldn’t handle the volume of applications, so they set up an AI to filter CVs. A disabled woman from Athens, GA had her CV dismissed without even a first-round interview despite fitting all the criteria for the role. She had the balls to ask for feedback. It took them a while to figure out why the AI had decided to reject her.
It wasn’t her disability, they said, as lots of veterans were disabled too. They finally figured out she had been dismissed because she was not into sports. Whilst many veterans couldn’t play sport any more, they used to be able to. The AI had been sifting applicants according to CV vocabulary that suggested a ‘culture fit’, and most previous employees had mentioned a love of sport. Sport vocabulary was nowhere to be found on the woman’s CV. Lots of employees were horrified their CVs had been used to train the AI. The applicant was understandably livid.7
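If you’re wondering how a filter like that actually works, here’s a toy sketch in Python. The CVs, words and scoring rule are all made up by me, but the mechanism is the same shape: score applicants by how much their vocabulary overlaps with past hires, and watch an equally qualified candidate lose marks for never mentioning rugby.

```python
# Toy 'culture fit' CV filter: rank applicants by vocabulary overlap
# with past employees' CVs. All CVs here are invented.

past_employee_cvs = [
    "programming python veteran team captain rugby marathon",
    "software engineer veteran basketball coach agile",
]
# The 'culture fit' vocabulary is just every word past hires ever used.
culture_vocab = {word for cv in past_employee_cvs for word in cv.split()}

def fit_score(cv_text):
    words = set(cv_text.split())
    return len(words & culture_vocab) / len(words)

applicant_a = "software engineer python agile team rugby"
applicant_b = "software engineer python agile accessibility mentoring"

print(f"A: {fit_score(applicant_a):.2f}")  # 1.00 -- mentions rugby
print(f"B: {fit_score(applicant_b):.2f}")  # 0.67 -- equally qualified, no sport
# Nobody told the filter to screen out disabled applicants;
# it just learned that 'people like us' talk about sport.
```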
Few people understand AI, and even fewer have the moral compass to tackle it, but there are some tiny rays of hope
Despite the UK being high up on the Global AI Index, none of the big AI companies doing AI research are currently headquartered in the UK (DeepMind sits in London, but its parent company, Alphabet, is American). Nor do we have a government department solely focussed on digital affairs and AI. So I can’t help but wonder whether 1) we are fuelling another country’s economy with the money we are spending on it and 2) we don’t have enough people analysing or understanding the risks in government.
Many companies also don’t have a moral conscience. CEOs are paid to make sure profits go up, not to let ethical problems drag them down. The big ones do have ethics departments focusing on AI, partly to appease regulators and auditors. Handy to point to if there’s an audit. Not so handy if you have to put in place what they recommend. If you don’t like it, you could just get rid of them, especially if it’s an American company where firing at will is legit. Google has done it twice now, after all, with its ethics leads.8 How many CEOs are actually listening to the advice these ethics committees are giving them? And how many more will get the boot?
There are some outliers out there though. Our tutor kept emphasising the greater altruism of the American company Anthropic. It has even set up a council to assess AI’s global impact.9 They want to work collaboratively with universities and governments too, and are claiming to be solving the problem of the ‘black box.’10 For those who don’t know, LLMs (large language models) are so opaque that humans often don’t know how they have reached certain decisions. Transparency is a big buzzword in AI and Tech because a lack of transparency makes governments and political institutions question their legitimacy.
AI could undermine democracy
One reason people are exercised about transparency is that AI has the capacity to undermine democracy. There are already more autocracies than democracies in the world, so democracy is at risk globally and barely hanging on by a thread in the US. So why has the computer scientist and member of the UK government’s AI Council, Dame Wendy Hall, described AI as a ‘threat to democracy’?11
There is a huge power imbalance between ordinary people and Big Tech. Our information, which these companies are free to hoard, has turned them into mega-monopolies and their founders into billionaires. And we play into their hands, because we live so much of our lives online and sleepily hand over our data for free without thinking about what they’ll do with it next.
Think about the speed at which this is happening. We were told that every five minutes there are more than 20 million Google searches, over 23 million YouTube videos watched and nearly 100 million WhatsApp messages sent. But what is going on behind the scenes? Every time you send a WhatsApp message, we were warned, the information in those texts can end up feeding AIs. Look at the bottom right-hand corner of the screen, and you will see a tiny little blue circle. That means Meta’s AI is on.
Meta can use its AI to analyse your messages; Google can analyse your searches with its own AIs; and each can build up a really full picture of your age, gender, background, lifestyle, political views and spending habits. They can both aim plenty of targeted advertising at you too. Fancy doing a Zoom call tonight? I’d take AI off the settings if you do. I found out this week that the AI features can be switched on by default unless you actively disable them. Do you want them knowing all the information you discussed in the call? Do you even remember consenting to this in any kind of informed way?
We know that disinformation spreads fast too. Deepfakes have not just been used in propaganda, like that vomit-inducing sketch of Trump, Musk and all the fascist poster boys sipping cocktails by a Gazan riviera. They’ve been used in political campaigns as well. In 2023, a false AI-generated audio clip circulated ahead of the Slovakian election, making it sound as though one of the candidates was intending to pay for votes.12 AI is big business in election campaigns. Political parties in India spent an estimated $50 million on AI-generated content in the 2024 election.13
Targeted advertising on social media is also linked to foreign interference in election campaigns and the spread of fascist propaganda. Our human desire for novelty means we pay more attention to entertaining content, even if it is fake. AI generated videos on social media could also be used to both increase and decrease electoral participation. Some academics talk about eliminating targeted advertising altogether on social media. But proving the advertising is targeted is difficult. I’d prefer it if we just scrapped all advertising on social media, especially in the run up to election campaigns. But I’d be more likely to see a pig fly than see this ever come to pass.
AI enthusiasts have argued that AI could be used to fact-check or spot fake news, but it isn’t sophisticated enough to do so yet. Some governments are trying to combat the problem, but the wheels of justice move too slowly to compete with the speed of changing technology. France passed a ‘fake news’ law14 which enabled judges to order the immediate removal of ‘fake news’ during the three months before an election. But it wasn’t effective. It took too long for cases to get through the system.
Some AI research labs, like Google DeepMind, are working on invisible watermarks which could identify synthetic content15, but those marks can be removed, and there is no universal requirement to put them on material. Most of us wouldn’t even be able to see they were there anyway. It might keep ethics committees and auditors happy in the short term, but it doesn’t tackle the problem head on.
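To give a flavour of how a text watermark can be invisible to readers yet visible to a detector, here is a toy Python sketch of one published idea, the ‘green list’ scheme (Kirchenbauer et al., 2023). It is not DeepMind’s actual SynthID method, whose details differ, and the vocabulary is invented.

```python
# Toy 'green list' text watermark: the previous word deterministically
# marks half the vocabulary 'green'; a watermarking generator prefers
# green words, and a detector recomputes the lists and counts hits.
import hashlib

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast", "home"]

def green_list(prev_word):
    # Seed the split with a hash of the previous word so the detector
    # can recompute exactly the same 'green' set later.
    digest = hashlib.sha256(prev_word.encode()).digest()
    return {w for i, w in enumerate(VOCAB) if digest[i] % 2 == 0}

def green_fraction(text):
    words = text.split()
    hits = sum(w in green_list(prev) for prev, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

# Unwatermarked text lands near 50% green by chance; text from a
# generator biased towards green words scores well above that. But
# paraphrasing or editing erodes the signal, which is why these
# marks 'can be removed'.
print(f"{green_fraction('the cat sat on a mat'):.0%}")
```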
As I’ve mentioned, the lack of transparency of these AI models is another problem for democracy. IP laws protect AI models from disclosure, and the workings of LLMs are so opaque that it is hard to tell how the algorithms have generated certain results and whether these ‘black boxes’ are making unfair or discriminatory decisions. Some companies are posting ‘data sheets’ or ‘system cards’ trying to explain how the models work, or even making ‘XAI’ (explainable AI) tools. I did try to look at one of these, but they weren’t too explainable. It’s almost as if their AI policies were deliberately designed to be so unwieldy that they would confuse people.16
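One idea behind many XAI tools is simple enough to sketch: if you can’t see inside the model, poke its inputs and watch the output move. Here’s a toy Python version with an invented stand-in for the black box; real tools (LIME, SHAP and friends) are far more sophisticated, but the spirit is similar.

```python
# Toy 'explainability by probing': knock each input feature out and
# see how much the opaque model's score changes. The 'model' is an
# invented stand-in for a real black box.

def black_box_score(applicant):
    # Pretend we cannot read this function -- only call it.
    return 0.6 * applicant["mentions_sport"] + 0.4 * applicant["years_experience"] / 10

applicant = {"mentions_sport": 1, "years_experience": 5}
baseline = black_box_score(applicant)

for feature in applicant:
    probe = dict(applicant)
    probe[feature] = 0                    # remove the feature
    delta = baseline - black_box_score(probe)
    print(f"{feature}: score drops by {delta:.2f} without it")
# mentions_sport: score drops by 0.60 without it
# years_experience: score drops by 0.20 without it
# The probe exposes that sport talk dominates the decision --
# exactly what Strategeion's HR had to dig out by hand.
```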
I have a couple of good news stories to round this section off. First, Anthropic is trying to organise a ‘participatory AI’ model with rules built on input from the public. You can find out more here: Anthropic’s Collective Constitutional AI. It might be too good to be true, but I will be looking more into Anthropic in the weeks and months ahead. Because I have to believe some companies are more ethical.
Secondly, as far as governments go, Taiwan takes its digital responsibilities more seriously than most, and we should aspire to be more like it in the West. It has a designated ministry of digital affairs, and its first minister, Audrey Tang, has investigated how to actively use AI to fight disinformation.17 She understands the threats posed by it, and has spoken extensively on the subject of deepfakes in propaganda, but remains an optimist. She thinks ‘democracy is technology.’ She sees AI as a tool for getting to a more direct form of democracy one day, and supports using algorithms to create citizens’ assemblies and more widespread online voting on policy proposals (there’s a sketch of the assembly idea below). In Taiwan, the government has also figured out a way to send trusted information from a 111 number, which is apparently unforgeable. I wish I could be as optimistic as Audrey Tang, but I do at least have a feeling she is more of a Jedi than a Sith.
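The algorithmic piece of a citizens’ assembly is essentially sortition: draw members at random, but stratified so the sample mirrors the population. A toy Python sketch, with invented data and a single demographic attribute (real selection algorithms juggle many at once):

```python
# Toy sortition: pick a 10-person assembly at random from a pool of
# 100 citizens while keeping regions proportionally represented.
import random

random.seed(1)  # reproducible demo
pool = [{"id": i, "region": region}
        for i, region in enumerate(["north"] * 60 + ["south"] * 40)]

def stratified_assembly(pool, size):
    assembly = []
    for region in {p["region"] for p in pool}:
        group = [p for p in pool if p["region"] == region]
        seats = round(size * len(group) / len(pool))  # proportional seats
        assembly += random.sample(group, seats)       # random within group
    return assembly

members = stratified_assembly(pool, 10)
print(sorted(m["region"] for m in members))
# 6 'north' and 4 'south' members, mirroring the 60/40 population.
```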
AI could increase inequality
The gap between the richest and poorest in society has already grown much bigger. CEOs apparently now make 300 times what the average worker makes, and we were told their compensation rose by 997% between 1978 and 2014, almost double the growth of the stock market. AI could widen the gap further. Increased automation favours those who already have the capital to buy these machines. It exacerbates existing inequalities.
Musk keeps going on about ‘creative destruction’ as though this is part of a natural cycle which we always see at times of great technological transformation. The loom workers were ousted by the steam-powered machines. The advent of the printing press meant scribes went out of work.
But the speed of all this is on a different scale. Billionaires already hoard their money or hide it offshore. In 2012, it was estimated that at least $21 trillion was hidden by the world’s wealthiest people in tax havens.18 Governments are scared of taxing them as they need to be taxed, and these companies are only getting richer through greater investment in AI. Those running them are unelected by us, yet they wield so much power over us.
When we hand over all our data, data lakes start to form. The snowball effect means once the tech companies get to a certain point, they just keep getting bigger and bigger and can monopolise the market. It also leads to poorly paid ghost workers training AIs. Governments are growing more concerned about the power imbalance and they are trying to correct monopolies through the competition courts. But companies with enough money can just pay the large fines imposed. The fines will need to be bigger.
The inequality doesn’t just stem from these companies’ growing wealth and the risks being transferred to workers. There’s also deep algorithmic bias in these systems. The data being fed into these machines reflects the society we currently live in. Existing data sets are racist. We perpetuate racial bias when we feed this data into an AI. Surveillance systems can’t even detect a Black face as reliably as they can detect a white face, so police are more likely to make mistakes when it comes to arrests. In 2020, Robert Williams spent 30 hours in jail because a facial recognition system wrongly matched him to a criminal who had stolen watches from a department store.19 Think about that case when you look at the Met Police surveillance vans with cameras parked in Central London, casually surveying passers-by. I wonder how many others will have to suffer like Robert Williams did.
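The arithmetic of why an error-rate gap matters at scale is worth spelling out. The numbers below are invented for illustration, though audits such as NIST’s face recognition tests have found gaps of roughly this order for some systems:

```python
# Toy arithmetic: scan 100,000 faces against a watchlist with a
# false-match rate of 0.1% for one group and 1% for another.
# Rates are invented for illustration.
scans = 100_000
for group, false_match_rate in [("group_a", 0.001), ("group_b", 0.01)]:
    print(group, "people wrongly flagged:", int(scans * false_match_rate))
# group_a people wrongly flagged: 100
# group_b people wrongly flagged: 1000
# Same van, same street, ten times the wrongful stops.
```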
Probably the biggest problem with AI is how opaque it is. Some LLMs have become so sophisticated that people cannot understand how they have made decisions. Hence the term ‘black box’. This is at odds with the requirement for a democratic system to have transparent structures and processes. There could be all sorts of mistakes which we never even realise are being made because we can’t understand how the decisions are being reached. It is like Kafka’s ‘The Trial’ where the main character keeps being referred to another person when he asks why he has been arrested. Finally - spoiler here - he is sentenced to death, without ever knowing why.
Final Thoughts
It’s all very thorny, and this is exactly why we are seeing more guidance on AI and Ethics cropping up. There is much talk of having more ‘humans in the loop’, both at the design stage and the output-checking stage.
The OECD now has a catalogue of 738 tools for trustworthy AI. UNESCO has drafted a Recommendation on the Ethics of AI. The European Union proposed guidelines for trustworthy AI, including seven key requirements that AI should meet, and has also introduced an AI Act Explorer so you can investigate how the Act affects you. The Institute of Business Ethics has produced a guide on the ethical use of Big Data in the corporate environment. Cambridge Consultants, a private corporation, provided a report on AI in online content moderation.
There are people out there who are championing the need for AI to be ethical.20 There need to be more of them though. This will be even more important with the advent of something called ‘agentic AI.’ AIs which can carry out tasks for you. Not just writing an email, but sending it. Not just doing research on a restaurant, but booking a reservation. Not just exploring Dignitas, but booking yourself in.
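‘Agentic’ sounds grand, but the basic pattern is easy to sketch: the model’s output stops being mere text and gets parsed into an action that actually runs. Here’s a toy Python version, with an invented stand-in for the model and a pretend booking tool; no real LLM or API is involved.

```python
# Toy agent loop: a stand-in 'model' picks a tool, and the program
# actually executes it. Everything here is invented for illustration.

def fake_model(goal):
    # Pretend this is an LLM deciding what to do about the goal.
    if "restaurant" in goal:
        return {"tool": "book_table", "args": {"name": "Trattoria", "time": "19:00"}}
    return {"tool": "none", "args": {}}

def book_table(name, time):
    # A real agent would call a booking API here -- no takebacks.
    print(f"Booked a table at {name} for {time}")

TOOLS = {"book_table": book_table}

decision = fake_model("find a nice restaurant and book it")
if decision["tool"] in TOOLS:
    # This line is what makes it 'agentic' -- and the line a
    # human in the loop would want to approve before it runs.
    TOOLS[decision["tool"]](**decision["args"])
```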
Many of the writers’ copyright cases are likely to be test cases. I was pleased to see Thomson Reuters winning a fair use ruling recently.21 May there be many more cases like this one. Let the writers and publishers light the way…
———————
I have so much more I could say on this subject - this is only the tip of the iceberg. I’m afraid to say I left the course more terrified of AI’s capacity to do harm to us all than I was when I started it. But I know the Devil better than I did before. In reality, it has made me even more scared of humans. Because when I say ‘the Devil’ as if AI is evil and humans are angels, I’m ignoring the fact that it isn’t AI which is the problem. The problem is what human beings, greedy ones devoid of ethical principles and with no agenda beyond profit, could do with AI if left completely unchecked. Not everyone is like this. But some people are. So let’s find out as much as we can about this thing, and let’s not let them get away with it.
Thanks for reading, let me know if you found it interesting, and let me know your thoughts and any questions.
https://www.bbc.co.uk/news/business-51311368
https://www.theguardian.com/film/2017/sep/01/tulip-fever-review-alicia-vikander-christoph-waltz
https://www.bbc.com/future/article/20230119-the-dinosaur-feud-at-the-heart-of-palaeontology
https://www.tortoisemedia.com/data/global-ai#rankings
https://www.tortoisemedia.com/_app/immutable/assets/AI-Methodology-2409.BGTLUPC-.pdf
https://humanyze.com/research-measuring-interaction-and-productivity-with-sociometric-badges/
https://www.bbc.co.uk/news/technology-56135817
https://finance.yahoo.com/news/anthropic-establishes-council-assess-ai-122408142.html
https://www.wired.com/story/anthropic-black-box-ai-research-neurons-features/
https://news.sky.com/story/artificial-intelligence-a-threat-to-democracy-says-government-expert-ahead-of-2024-elections-12903066
https://edition.cnn.com/2024/02/01/politics/election-deepfake-threats-invs/index.html
https://restofworld.org/2024/india-elections-ai-content/
https://www.theguardian.com/world/2018/jun/07/france-macron-fake-news-law-criticised-parliament
https://www.nature.com/articles/d41586-024-03462-7
https://ai.meta.com/blog/responsible-ai-connect-2024/
https://www.thegreatsimplification.com/episode/169-audrey-tang
Murphy & Christensen, 2012
https://www.aclu.org/cases/williams-v-city-of-detroit-face-recognition-false-arrest
https://www.bostonreview.net/forum/ais-future-doesnt-have-to-be-dystopian/
https://www.reuters.com/legal/thomson-reuters-wins-ai-copyright-fair-use-ruling-against-one-time-competitor-2025-02-11/