Saturday, March 16, 2024

Awkward Interregnum


Regular readers may have noticed a marked absence of anything to do with AI here at Inventory Full lately. It's not just that there haven't been any posts specifically about the technology or the cultural phenomenon or the underlying concepts; there also haven't been any AI-generated illustrations necessitating end credits listing prompts and models.

It's not a change of policy. Nor is it a reaction to the low-level pushback AI occasionally generates. It's a lot simpler than any of that. I just got bored with it.

Not with the potential. I still find that thrilling. It's more like what happened with VR. There, I was never remotely convinced or impressed by the claims being made, a position I think has been borne out by experience. With AI, I did think something was genuinely about to happen.

And it probably will, one day. The thing about AI as it stands now, though, is...

It's not AI.

I mean, we all know that, don't we? No-one really believes anything we're seeing or hearing about is intelligent, artificially or otherwise, surely? 

Calling any of it AI is confusing, misleading and increasingly annoying. It's gone from a promise to a buzz-word to a cliche in just a couple of years. 

Unfortunately, labels stick and AI is the label attached to the disparate collection of algorithms, apps and processes currently drenching the media and drowning all coherent thought, so I guess we're stuck with it too. Since we've used up the acronym on what's effectively nothing more paradigm-shifting than a bunch of productivity software, what we're going to call any genuinely intelligent artificial entity when or if it ever appears, I have no idea. 

Master, probably. Or God. 

That, however, is a problem for another day. More likely, another century. For now, we're all going to carry on using the thick-headed shorthand we've been handed so let's just try and make the best of it.

It's not very useful.

I mean, it isn't, is it? Let's be honest. Everyone in this part of the blogosphere who's experimented with AI has reported back with overwhelming evidence of inaccuracy. 

The AIs that are really Large Language Models have been designed and developed to build sentences and paragraphs by extrapolating the next most likely word in a sequence, based on the petabytes of data fed to them from trawls of the internet, authorized or otherwise. That allows for an initially astonishing impersonation of something a person might write but it has very little to do with fact and even less to do with truth.
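
(If "next most likely word" sounds abstract, here's a toy, purely illustrative sketch of the idea in Python. It's nothing like a real LLM, which uses an enormous neural network over tokens rather than a little word-count table, but the basic trick - look at what you've got and guess what usually comes next - is the same.)

from collections import defaultdict, Counter

# Toy bigram "model": count which word follows which in a tiny sample
# text, then always pick the most frequent continuation. Real LLMs are
# vastly more sophisticated, but the same core idea - predict the
# likeliest next word - drives both.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def continue_text(word, length=6):
    """Greedily extend a phrase by repeatedly appending the most likely next word."""
    output = [word]
    for _ in range(length):
        candidates = next_words.get(output[-1])
        if not candidates:
            break
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

# Fluent-looking but fact-free output, e.g. "the cat sat on the cat sat"
print(continue_text("the"))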

The tech giants behind all of this are doing their damnedest to force these systems to comply but in my own experience I'd have to say they aren't getting very far. The main reason I'm not using ChatGPT or Bard (Now Gemini, for reasons.) or whatever the other one is called is that it actually takes me longer to put a post together with any of them than it does to do it the old-fashioned way.

That's because I have to fact-check everything they tell me before I can risk publishing it, unless I'm just using the output for humor, in which case the less accurate it is, the better for me. Otherwise, I have to put everything I don't know to be true through Google Search, at which point I clearly could just have searched for the information that way to begin with and saved myself a step.

It's widely reported that Google Search has deteriorated of late but it still seems reasonably precise to me. I can always find what I want, even if what I want is frequently on Reddit. Moreover, the whole supposed benefit of using AI to search, namely that you can communicate with it in normal, conversational English, is something I've been doing with Google Search since the early 2000s.

I learned correct internet search practices way back in the 1990s but I haven't bothered to use what I learned since Google made it optional. Google Search easily parses full sentence queries and returns highly appropriate search results, so what value does AI add other than a fatuous "Thank you" at the end?

It's not funny any more.

I guess whether AI was ever funny is a matter of taste but it used to make me laugh out loud, sometimes uncontrollably. I found the nonsensical non-sequiturs all kinds of amusing - charming, whimsical, sweet - and the warped, weird images delightful. 

Two things have happened to blunt those positive impressions. The AIs have gotten much better at faking being human and the novelty has worn off. 

The second is probably the more damaging. Jokes aren't funny when you've heard them lots of times before. 

I follow Janelle Shane's excellent blog AI Weirdness, in which she tests to destruction the capacity of various AIs to follow simple instructions. When I first started reading it I had to be careful not to have a cup of coffee in hand when I opened a new post in case I laughed so much I spat it all over the keyboard. Now most posts barely raise a tired smile. Seen one mislabeled animal, you've seen them all.

Janelle still seems to get a laugh out of them but I fear she's having more trouble with the first problem. These days the AIs tend to give her something a lot closer to what she's asking for than they did a year or two ago. She has to push them harder to fail in a humorous way. That does suggest at least a move towards usefulness but it also diminishes what made the results so fascinating before - their inhuman alienness. 

The pictures all look the same.

Not literally. That would be interesting. No, what I mean is that as the AI Image Models become more and more sophisticated, the results seem to have acquired something of an AI imprimatur. You can look at an image and immediately sense it was created by AI. Just not, unfortunately, in what I used to think of as the good way.

There were two things I liked about making pictures using AI. Firstly, it meant I could imagine I could draw. Secondly, all the pictures looked bizarre.

I can't draw. Never have been able to. All my friends can draw, even the ones who really can't. Drawing is weird. If you believe you can do it, you can do it and other people believe you can, too. I never believed it so I can't draw and absolutely no-one ever thought I could.

If you look back at the earlier AI images on the blog, they are horrific. Warped, distorted, unnatural, freakish. That was what I liked about them. If I use NightCafe to make images now, they're almost proper pictures. Sometimes they're very good. Sometimes they're just a tiny bit off. Occasionally they're very poor but never in an interesting way.

Whatever they are, though, they're clearly not "my" pictures any more. Not in the way they were when the people in them had three arms. These all look like commercial art. I like commercial art well enough in its place but I don't aspire to using it here. It's too slick and corporate and functional for a funky, home-made blog.

It's nowhere near ready yet.

I've said this before but I'll reiterate: I want to be able to press a couple of buttons and have a complete blog post appear, indistinguishable from something I could write myself. Better still, I want to type in a short plot synopsis and have an original story as good as most of the proofs of novels I take home from work. (Seriously, it's not that high a bar...) 

In other fields, I'd like something that could generate animated cartoons or CGI movies as good as the ones I watch already, just from a few brief instructions in plain English. Most of all, I'd like a small device I could clip to Beryl's collar that would talk back to me, convincingly, in her voice while we're out walking.

None of this is currently available or even possible, although if you follow AI reporting in the media you absolutely could believe it was. It reminds me very strongly of the early days of VR, when everything seemed to suggest we'd all be running around inside the Star Trek Holodeck by Christmas.

About the only thing the media gets right about AI is that it's a highly disruptive technology. The problem is, until it gets a lot better, the kind of disruption it causes isn't going to be the supposed cultural reformatting that might or might not lead to a genuinely different, maybe even better, future; it's going to be the kind of disruption you'd get if you released a swarm of killer bees into a crowded shopping mall.

And that's why there's not much AI on this blog right now. I reserve the right to go back to covering the phenomenon should anything interesting develop - I am still keeping an eye on it and it is a fast-changing field, although most of it isn't changing anything like as fast as I'd like. 

Right now, though, it feels like there's nothing much to say about AI that hasn't been said too often already. When there's something to talk about, then we'll talk.

I will still use some AI images if I find it convenient or appropriate (Obviously I was always going to use some for this post.) but honestly I'm getting a lot more fun out of going old school and running screen grabs through an image processing app until they distort to the point of unrecognizability.

I may also do some more experiments with the LLMs just to keep an eye on progress there. If they ever start to return consistently accurate results I would be interested in using them as research assistants. If that happens, though, it'll probably be the last I ever write about them. At that point, they'll become about as interesting as spell checkers or email clients. I mean, I use a spell-checker on every post but I don't feel the need to tell anyone about it.

As for my dabbling with audio and video, unless and until there are some very major advances, I can't see that continuing. It takes ages and I get nothing interesting out of it. Right now, AI in those fields is probably at the stage of becoming a useful tool for professionals. The day when the ungifted amateur can produce satisfying, convincing results for almost no effort is far, far away.

As for all the other things we also call AI, like the kinds of procedural generation used in virtual worlds or the way mobs follow a path or fight in an MMORPG or even the app I use to remove parts of an image and refill it with something unnoticeable, well, no-one's really talking about any of that when they use the jargon these days, are they? I imagine that although PR people will try hard to convince us otherwise, all of that will carry on much as before without any of us needing to pay much attention.

I think that about covers it.

Oh, wait! I haven't even mentioned Artificial Insemination...

7 comments:

  1. AI isn't AI because it's not independent thought. Then again, you could make the argument that humans aren't capable of independent thought because we're the sum of our experiences --and chemicals that alter our brain processing-- and we simply don't understand all of the wiring that goes on in our brains. (Yet.) I'm going to have to start pulling out my old textbooks from my senior level philosophy class titled Practical Reasoning, which was neither practical nor full of clear reasoning, but I suspect I'd understand those old textbooks a bit better now with "AI" out there these days.

    1. If we get into the "what is independent thought" debate we'll be here until the heat death of the universe. That said, having observed Beryl closely for a couple of years, I'm pretty sure a dog has it. Recent research shows octopuses and crows very definitely do and there's increasing evidence in favor of a form of intelligence in trees and fungi. I generally take the view that most things humans perceive as certainties are more likely to be expressions of their sensory limitations so I'm inclined to allow for the possibility of varieties of autonomous intelligence that don't match our expectations. Machine intelligence, should it ever arrive, would be just one example.

      The current AIs most definitely are not it, though.

  2. I strongly suspect if and when machine intelligence does arrive, it will be in an unexpected setting and we won't really recognize it when it does. It also won't be all that badass; it will be more on the order of what a chicken can do in terms of independent thought. Maybe a cat or even a crow (to be really generous, crows are astoundingly smart), but nothing at all like AI in a Gibson novel.

    1. It hadn't occurred to me until I read your comment but I can't recall ever seeing any suggestion, fiction or non-fiction, that machine intelligence might begin at a lower equivalent level than baseline human. It always seems to be very similar to human, albeit sometimes naive or childlike, or else superior. If there are any stories or movies or shows where the first AIs develop at an animal level I'd love to hear about them.

    2. OK, well if I turn out to be right, you read it here first :-)

      The reasoning that led me there is a bit much for a comment, but it has to do with how neurons apparently work on a cellular level, and how higher levels of reasoning have seemingly evolved independently in several lineages, some very very distantly related.

  3. That site by Janelle Shane is a real gem, thanks for sharing it!
    I am very torn on AI right now - I love the new avenues of possibility and creation but the further along we go, it starts to get really complicated as the legal issues pile up and everyone seems to be out of their depth. It feels like we're all part of a big chaotic experiment which is pretty much echoed by Silicon Valley insiders who warned this would happen if the tech got unleashed too soon and without proper checks and guidance. Someone is already undoubtedly profiting from this massively and it's not us.
    I don't know if I second the sentiment that things are happening too slowly (like for VR gaming), to me it still seems bewildering and we're clearly not ready as a society, even if everyone is talking about the great new world of AI and we just released our first ChatGPT guidelines at work. I watched an AI generated movie short clip the other day and thought it was wild. The deep fakes are scary.

    I hear you on the polish; things getting ever smoother, better and thus more boring as they imitate real life to perfection. It's a bit like when console games started to look too photo realistic, somehow that lost the magic for me personally. I also came across this big Bloomberg article a week ago about how AI recreates all the bias and inequalities we have in the real world and reinforces them. The AI is only as good/bad as the data it's trained on and obviously that in itself is very limiting and potentially problematic. For reference: https://www.bloomberg.com/graphics/2023-generative-ai-bias/

    1. Chaos just about sums it up. Like you, I was originally excited by the possibilities but as with so many promises of this kind, the implementation doesn't quite match up to the imagination. Janelle Shane's posts used to have a whimsical, joyful tone but now I feel there's not just an ennui creeping in but a tone of disillusionment and concern. Her latest piece makes the well-rehearsed yet largely ignored point that the AIs are now feeding on themselves, making even those very valid concerns about real-world bias look old-fashioned. Basically, unrestricted access to LLMs has pumped an ocean of twaddle into the worldwide web and that's where the AIs are fishing. It's like no-one remembers Garbage In-Garbage Out.

      Too late to do anything about it now, though...

