I didn’t sleep last night at all. My Fitbit has me down at less than four hours of *actual* sleep, most of it after my beloved got up to start his working day. I’m not sure if it’s clever enough to catch me micro-napping. What I do know is I’ve been fretting about something - a lot - recently, and it’s about how we treat dissent. And how that leads into something important - the use of AI, why we have it wrong, and what critical mass we might be heading towards.
Why black-and-white thinking doesn’t work
I hate AI. I take that strong stance because a perfectly sensible tool that could have been ethically and cleverly built and leveraged has turned into a tech-bro, capitalist cash grab, and more and more educators on every platform are telling people ‘but I’m teaching my kids to use it ethically!’.
What I actually hate isn’t assistive tech, or everything labelled AI because it’s a buzzy word and people don’t like ‘algorithm’ as much. What I hate is *generative* AI.
Won’t you think of the… what’s that word again?
Critical thinking - it’s the first thing to go when you use AI. Your actual ethical stance is shaky too - and please don’t tell me that because your work wasn’t used to train the model, you’re ethically clean. You might not be the villain of the piece, but every time you use it, you are *feeding* it. And that’s your choice.
My choice is to be outspoken about my stolen content - my microcosm of the broad world’s problem - content taken to feed a machine that, when it makes a mistake, we say is ‘hallucinating’. You know, that lovely term that, applied to a human, means they’ve ingested something they shouldn’t, they’re in the throes of a mental health crisis, or they’re otherwise impaired by disease. We don’t hallucinate for good reasons. Yet that, of all terms, is what we call it when AI lies to our faces and pretends it’s citing sources.
In amongst all of this - navigating the fact that more and more people think it’s OK to use something stolen to make something they want respected and paid for, in attention, money or both - is this nagging feeling that if you speak up and speak out, you’ll be labelled a Luddite. That you ‘don’t understand the reality’. That ‘the horse has already bolted’.
AI is costing us dearly
Attention spans have been declining for a while. Politicians preach ethical behaviour and we’ve got whole laws on piracy, but right now? AI is a wild west of stolen content. It’s not fair use. It’s not permissioned on ChatGPT or Midjourney, and even Meta has stolen a whole library to train its AI. And that’s really what it is. Theft.
In any other environment, we’d be thinking about the implications of building something from the proceeds of a crime, but that’s part of the point. Whether it’s apathy or the inability to reason that far, people *do not* care. Art has not been democratised. Art is being taken from those who often don’t earn enough from it as it is, so tech companies can make a killing.
And then there’s thought.
Critical thinking is an issue for a lot of people - though probably not in the way people think I might mean. If you’ve been reading my work for a while, you’ll know about Typing A Blank. Y’know, the blog I started after everything went sideways for me in 2023 and I lost three years of memory.
Ever since then, I’ve been keeping a *very* close eye on research into two things: mental decline (Alzheimer’s, dementia and all the assorted issues) and the general ability to debate, hold discourse and reach conclusions. All things I lose the worse my migraines are. All things that scare me, because honestly, I like being able to play music, and do puzzles, and reason things through. I like understanding what I read. It *terrifies me* that I’ll lose that. And I can’t understand why people would willingly use a tool when preliminary studies suggest using it erodes exactly those skills. Or teach children to use it, knowing they’re potentially losing reasoning skills. More research is obviously needed, but… generative AI generates. What it generates may or may not be true, but because we’ve been taught to go to search engines and research, people trust anything they query in those ways. That is, IMO, dangerous. I would think we’d be telling people it’s a bit like Wikipedia: OK to start with, but please, don’t use it as a final product. Some educators are telling people that. One flat out admitted she’s not.
I opened with a very strong, black-and-white statement. It’s been explained and codified already, but honestly? Now that these models have started cannibalising their own output and feeding it back in, I feel like we’re on the verge of a war for objective fact. One that will *very much* come down to either how we stop AI or, as some people are suggesting, how we teach a very cautious, very adversarial approach - deeply distrustful of what it tells you, treated as a bare-bones place to *start* and nothing more.
This all came about because yesterday, I saw a handful of educators talking about using AI. (I have friends who use it. I respect their right to choose that; we just don’t talk about it any more. We both aired our views, and we both respect that. It’s a bit like religion and politics. Just, I’ll talk about AI more than either of them.)
A teacher I know uses three art programs for her ‘side hustle’, and any criticism of that is seen as removing her right to make money. It’s not - it’s simply about the fact that she openly admits she sits and designs things for a little while, then wants people to pay a premium. And honestly? It’s beginning to feel a bit like those old MLM things - people are selling to their communities, and have to sell *hard* because everywhere is so saturated. This side-hustle teacher friend confided, after asking me how to market, that she was deliberately going to leverage the goodwill of parents whose children she’s taught, and imply that her *purely prompt-driven* books are somehow better simply because she’s an educator. She sent me a copy of one of them, and she’s right, the artwork is cute, but she’d missed two mistakes. She was looking at the bigger picture, not the details. One was ‘What’s the rhyme for orange? Draw one.’ as a caption halfway down a page - and famously, nothing rhymes with orange. The other was ‘Underline the two B’s in Strawberry. There’s another letter that only appears twice. It’s the first letter of one of the colours Strawberries can be. What is it?’
(The intended answer isn’t S, even though it could be, because ‘Strawberries’. It’s R - except ‘strawberry’ has one B and *three* R’s, so the whole caption is broken. Counting the R’s in ‘strawberry’ is exactly the failure people explicitly called out last month as an issue with AI.) So, I told her, and told her how I felt about leveraging that trust, and asked if the school would be OK with it, and was accused of not wanting her to succeed. Because ‘people with computer skills think they rule the world, I think you’re upset that we don’t need you any more’.
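If you want the thirty-second sanity check that caption never got, here’s a quick sketch in Python - nothing specific to any AI tool, just plain letter counting:

```python
from collections import Counter

# Tally the letter frequencies in the words the puzzle was built around.
for word in ("strawberry", "strawberries"):
    counts = Counter(word.lower())
    print(word, dict(counts))

# strawberry   -> {'s': 1, 't': 1, 'r': 3, 'a': 1, 'w': 1, 'b': 1, 'e': 1, 'y': 1}
# strawberries -> {'s': 2, 't': 1, 'r': 3, 'a': 1, 'w': 1, 'b': 1, 'e': 2, 'i': 1}
```

One B and three R’s, whichever spelling you pick. Ten lines of code can do what the model confidently couldn’t.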
And that’s the thing. If people speak out against AI, others do one of two things - point to things they consider a ‘like for like’, or accuse those of us who work in tech of gatekeeping.
The ‘like for like’ crowd point to spellcheckers (an assistive tool - it existed before AI, and it’s built off a specific corpus, i.e. a dictionary; more importantly, anyone who writes has already encountered the fallibility of spellcheckers) and, oddly, calculators. And I say ‘oddly’ because a calculator *does not* change the laws of maths and update on the fly. It’s literally about painting people, at that point, as getting in the way of progress.
Gatekeeping is another odd one. Many of us who work in tech have to be even further on guard for AI creeping into our tools, taking over our spaces, taking our work. As a creative and a technical writer, I hate generative AI because it doesn’t respect *us*. And turning that back on people who are saying ‘well, no, wait’ by accusing them of gatekeeping hurts, a lot. Needless to say, I told my friend I was writing this article, that I’d identify her by her job but not her name, and that I’d otherwise not be engaging with our conversation until she apologised for attacking me. And yes, her book is up, with the corrections.
Progress at any cost versus progress for the good of as many as possible
I actually think there are uses for generative AI that could be harnessed if the models were trained *ethically*. One would be to assess and stay abreast of advancements. Law, medicine and tech are three areas that would be interesting. If built from recognised journals, with robust controls, AI could cut down on the time needed to research complex law cases, complex or rare medical conditions, and complex edge cases in tech. AI *could* be used to assist people who are trapped in various ways by disability.
I personally wanted to see AI help psychiatrists, psychologists and doctors stay on top of the information that landslides out about mental health. I’ve frequently said that though I deeply respect the teams looking after me, it feels like the UK is about ten years behind the US, and right now we’re stalled at telling everyone who isn’t textbook - whichever textbook the person diagnosing them happens to use - that they actually have EUPD. They’re probably finding a lot of people with features of EUPD. But they’re also misdiagnosing (or missing) people with atypical presentations, with complex issues, with diseases that look similar. Having access to *current* research without needing to compile it, cross-referenced on a closed system that can only be fed with journals, would be amazing. (Instead, I saw an announcement that the NHS is considering mapping the DNA of every child born in the next ten years to ‘predict’ disease. I wrote about that in Glass Block; that’s *so* not going to backfire.)

I don’t think, in other words, that the CONCEPT of AI is wrong. I just think some of the applications are on the wrong side of the artistic spectrum. I want a smart machine that does the housework, not one that writes my books. I want an AI that lowers my risk of Alzheimer’s by engaging with me on a level that challenges and stimulates, not the stupid videos of sharks and lions fighting that I scrolled past before quickly unfollowing the influencer in question.
I wish I had something pithy to end with. It’s said, though, that the victors of any war control the narrative and write history. Right now, we have an AI that can’t keep its story straight (or a fleet of them) taking over the internet, and people who can’t think through what they’re doing because they’ve learned that one or two sentences of prompt will skip the ‘thought’ part. In the hands of a certain type of person, that is not only dangerous but damaging. There is a perfect storm of ‘the wrong information, the wrong people, the wrong time’, and AI is feeding it, IMO. And that’s if there’s no longer-term loss of knowledge because AI polluted the record. Again, that’s in both Glass Block and something my Priestesses of Care talk about, so it’s always on my mind and has been since… well, Glass Block was 03.
Scary, huh?
Yours, pessimistically (and where’s my AI housecleaner?),
Kai