A bit of an update - let's talk about AI and LLMs
I'm tired of being lumped in with tech bros and it's time to say something
This post is entirely free. Gratis. And I’m making it because I’m angry. Like PEEVED. Full of a cold, shingles again, kitchen is leaking, but yeah, let’s stop and really talk about AI and large language models.
I’m pretty sure each and every one of you has encountered this. Fake stuff on social media used to be hard to spot, but it wasn’t the absolute GLUT we have right now.
The ones that always got me, because I’d cry, were the posts about actors I liked who had ‘passed away’. I rarely ever shared them unless I saw them confirmed, because the one time I did without waiting for confirmation, it was a hoax.
Now.
I look at the post, I assess the images attached (cause even the eclipse had fake images show up that looked *entirely* believable), look at the news itself, assess it. I’m pretty sure some people don’t do that, but there are more and more people who just share. Because it’s pretty. Because it’s cool. Because it’s tapping into a fandom. Because people know that the currency of the web is actually interaction. And that’s the problem, isn’t it?
And there’s a whole pile of rants too. Important rants. Informative rants. Problem, not solution, though. And that’s not knocking the rants themselves, it’s just the fact that all we talk about is the problem, because the solution is NOT for artists to avoid them. If it was that easy, they’d have deflated like a day-old soufflé by now. People interact because the web has so much of it.

In fact, one of my friends just took a week off because she’s an outspoken creative-AI destructionist. She wants them removed, and I can’t help but agree. But she shared something that looked *entirely* believable about the floods local to me. And it just so happened I knew that image wasn’t from here, because I know what that stretch of road looks like - there are ZERO palm trees on the road in question, or maybe one or two, in gardens. I told her. She got very upset, as she’d been ‘caught’, even though I said ‘look, if it weren’t for the fact I physically know the road in question - it’s a dual carriageway, and there’s a railway track, and some old camper vans down there, not a verdant oasis - I might have checked or shared. I just happen to know that stretch of the A40’. But she’s incredibly outspoken about it all. She doesn’t criticise the fact that I used to work in AI, but there’s always this feeling that she’s probably side-eyeing me, going ‘it’s your kind of brain that comes up with this stuff’. And I can’t say she’s exactly wrong. I’m ethical, though - but right now? I don’t think people care. And I understand that.
And you, my faithful readers, will know that I wanted to do a double PhD - one in AI and LLMs, and one in forensic linguistics.
Two out of four of my specialities are bad words online now.
I've subbed out one - I no longer work with AI, I work with ethically populated Expert Systems. LLMs though...
Do you know the *concept* of them has existed since the ’60s? You probably even played with it - Eliza. What was all fun and games when we were teens is now a terrible thing. And there’s a huge bit of me that agrees. The smaller, more petty bit tends to go ‘for fuck’s sake, fuck capitalism, stop breaking my toys’. And apologies for my language, but I really am over coming online and seeing two AI pretty images shared, sandwiching a rant.

I could do the full history - how we went from NLP (that’s natural language processing, not neurolinguistic programming) to LLMs, and why it should mean ‘less bias’ if it’s used equally - except… that’d mean having translations into one core language for *everything*, and I know that AIs don’t do that currently. You’ll find that it’s almost all English, and all net-accessible, for the LLMs that built the very first tools. And if you watch internet marketers like I do, you’ll know the huge boom in their stuff is now AI. Overgrown, buggy chatbots, mostly.
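To make the ’60s point concrete: Eliza wasn’t ‘intelligent’ at all, just pattern matching and canned reflections. Here’s a minimal sketch of the idea - the rules below are my own toy examples, not Weizenbaum’s actual script:

```python
import re

# Toy ELIZA-style rules: match a pattern, echo part of the input back.
# These are illustrative rules, not the original 1966 DOCTOR script.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
]

def respond(text: str) -> str:
    """Return a canned reflection for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Tell me more."  # fallback when nothing matches

print(respond("I need a break from AI rants"))
```

No training data, no learning - just string surgery. That it felt like conversation in the ’60s is the whole ‘fun and games’ bit.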
So. I know this is a rant. I don't have a solution yet, but I'm putting it out there - we can't vote with 'our money', because as my friend showed, she's very self-aware of what she shares, and she even got caught. We can't refuse to engage with it - there are accusations that CGI and other title sequences are AI.
Even if we make *conscious* choices, it's important to acknowledge that OUR conscious choices can be wiped out by people with more money, or less ethics, or even more desperation than us. I'm not ever going to judge a person that uses AI once or twice - and I even make a distinction between generative AI and supportive AI. (Spoiler: Word has a language model that it uses for text and grammar. You feed it enough 'correct' sentences, it looks for differences, and that's your grammar and spelling check, crudely.) I once said this to a friend and she said no, everything is encoded. When I asked her how, she said 'oh, the same way as we learn English', and I said 'so, correct sentences, diagrammed, and sorted?' *nods* 'How about whole books?' She said 'that's not how we learn proper English', and I said 'Yeah, no. See Spot Run is your first introduction to a human-encoded model to teach you grammar and basic construction.'
She went quiet for a bit and said 'I'll never look at those books the same again', and I said 'Why? It's not as if you're using ALL the variations to derivatively make money. Learning isn't actually the problem. Usage is.'
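Since ‘feed it enough correct sentences, it looks for differences’ is doing the work here, a crude sketch of that idea - the corpus is made up, and real checkers are vastly more sophisticated than this:

```python
from collections import Counter

# Toy 'supportive AI': count every word in a tiny corpus of 'correct'
# text, then flag anything the model has never seen before.
# This is the crude version of a spelling check - no grammar, no
# context, just "does this word appear in the correct examples?"
CORPUS = "see spot run . run spot run . spot can run fast ."
counts = Counter(CORPUS.split())

def flag_unknown(sentence: str) -> list[str]:
    """Return the words that never appeared in the 'correct' corpus."""
    return [w for w in sentence.lower().split() if w not in counts]

print(flag_unknown("see spot runn"))  # 'runn' is flagged; the rest pass
```

The point stands either way: the model only knows what its input taught it, which is exactly why the *input* - and whether it was permissioned - matters so much.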
USAGE IS.
It’s absolutely important to emphasise this. Mostly because if we focus on usage, and on making sure the input is ethical (i.e. it’s designed to be permissioned, and built on solid permissions), I think that things *might* maybe get better.
But the almighty currency of the world is currently not allowing that. Ignoring the millions people claim to be making, it’s the shares that are driving more and more people to them.
And I think that's the solution. Cut off the supply of licensed material to learn from unless it’s ethically permissioned, and consciously choose not to interact with it - within reason, cause I’m betting I’m even unwittingly interacting with AI news and articles now. I’m *pretty* sure the system I use for my mental health, VOS, is an AI/Expert system, and I can see the point of them. If we push, and reject, hopefully... maybe... they'll do a dot-com bubble. And it’ll only be the people that started it that’ll suffer - which, I’ll be honest, is also super naïve.
Because much like NFTs and other stuff, I have to say, I think, and know, that AI is unsustainable. But yelling about it, even if you have a massive following - it's screaming into the void, which is blinking back with all its plagiarised eyes. We'll all yell, occasionally, but it's difficult to remain positive and upbeat about the things I'm passionate about (my expert system, which I was hoping to build to help with mental health) and still look at biases.
I'm getting a great look at biases, just no way to point it out without sounding like I'm on the side of the unethical gits that did this. (Again, spoiler: if you yell about ALL AIs, all the time, you're missing the point. And hurting the people that don't have unlimited resources. That were doing it for good. And yes, that's naïve. There's absolutely nothing I can say to say 'nope, it's not'. It doesn't change that some people are ethical, aware, and working on some of the problems.)
And after the dust settles, I'll get to play with my expert systems again. Maybe. Right now they're radioactive, and the LLMs are the isotopes fuelling it.
Hah. I went to make a video to share of this, and there's an AI voiceover.
nooooope.