
Wednesday, November 26, 2025

And We'd Have Gotten Away With It, Too, If It Hadn't Have Been For You Pesky Humans...

Before I changed PC, I had a whole bunch of bookmarks concerning various developments in AI that I was planning on stitching into some kind of post. Because I chose not to transfer anything at all across to the new machine (A decision that feels better every day. It's been like moving from a cluttered, disheveled old house to a clean and tidy new one.), all of those are lost.

Okay, no they're not. They're on the old computer, which still works perfectly well. I just can't be bothered to go into the other room where I've left it and switch it on. No-one wants to read a load of old stuff about AI, anyway, not least because anything anyone wrote about it more than a few days ago is almost certainly out of date by now. 

On the other hand, it's that or yet another post where I go into painful detail about what I'm doing in New World, so AI it is. 

It's not like I need all those bookmarked articles and reports and opinion pieces to write a post about it. Things are moving so fast, it's hard even to hold a consistent uninformed opinion, let alone one based on actual facts. 

A couple of months ago, I was wholly convinced by the bubble scenario. The entire AI edifice is completely unsustainable. Millions of people are making a fortune using the almost-free services but no-one providing them has any idea how to make any money from facilitating the much-heralded Webpocalypse. 

Any day now, the individuals and institutions funding the whole thing are going to pull the plug and that will be that. Then the economy will crash and no-one is going to be thinking about AI any more anyway because we'll all be too busy breaking up our furniture and burning it to keep warm, once all those data centers stop super-heating the atmosphere.

Or all these AI services everyone's using for free now will start charging, and then charging more, and at the consumer end AI will just be another service you can subscribe to if you can afford it. But how many of those are you paying for already, and do you really want AI more than you want TV or movies or games?

Meanwhile, all the businesses and government departments that bought into the AI hype will be discovering it doesn't do what they were promised it would. They won't have been able to let as many people go as they thought they would, or if they did, now they'll have to hire some or all of them back, because AI just isn't cutting it on its own.

Any of those. Most likely all of them at once. That seemed like how it would go.

There was plenty of support for all of those timelines. The pieces I had bookmarked were all talking about surveys and reports that said businesses that had tried AI were finding they weren't getting anything done any faster or cheaper and in fact maybe it was even a bit slower and more expensive, what with having to have everything the AIs did checked by a human and probably re-written by one, too. One opinion piece suggested the companies that would do best out of the whole thing would be those who didn't invest in AI at all but instead snapped up all the good people being let go by those who did, so that when the inevitable collapse came, they'd be in pole position to take advantage of it. 

As for the public, it felt like there was so much push-back.  Not just from all the people who, understandably, didn't want to lose their jobs but from the consumers and customers and clients who would supposedly be the ones paying for the products and services AI would make it possible to supply more cheaply and efficiently. If everyone's boycotting your stuff it hardly matters if you saved money getting it to market.

One of the articles made what I thought was a very telling point. Historically, gamers have been among the earliest adopters of new technology and also the most willing to spend quite significantly over and above the going rate to get their hands on whatever was being sold to them as the latest, bleeding-edge gadget or gimmick.

This time, though, gamers have to be about the most rabidly anti-AI consumers out there. The merest rumor that someone's spotted a single AI-generated image in a game has them out with their pitchforks and torches. If you can't convince gamers to accept AI, the argument goes, how do you expect to sell it to anyone else?

All of which must feel very comforting to anyone who never liked the thought of AI in the first place. Well, except for the crashing the economy and starting a global recession part, that is. When Mark Zuckerberg said "move fast and break things", I'm not quite sure that's what he meant. Then again...

Except, doesn't it seem to be taking a long time for this bubble to burst? And hasn't an awful lot of the damage been done already? Once the AI is embedded in the infrastructure, just how easy is it going to be to get it out? And how much would that cost? It's all very well to say we can't afford to keep it but now we have it, can we afford to let it go?

Sticking with games, because I really do not want to think about the AI-controlled robot war dogs that will be taking over the policing of our streets any time now (And they can climb stairs, you know. It'll be like when the Daleks learned how to levitate. Nowhere will be safe.), it's obvious all the really big producers want to go full AI. Some, like Square Enix and Ubisoft, are open about it, but you can bet every one of them has a plan in place already for when they can stop dealing with those pesky creatives and just talk to nice, obedient LLMs that have never been trained on the history of the Union movement.

I could link to any number of news items and articles where games executives are pictured drooling over the prospects but I'll limit it to just this one, which surprised me by its positive tone. It's from GamesIndustry, which I'm guessing is officially neutral on the general topic of AI (Not on the topic of General AI, though, another kettle of extinction for humanity altogether.) but which more often than not chooses to sound quite sniffy about the whole idea.

The tl;dr for that link is that Ubisoft (It had to be them, didn't it?) is in advanced testing for AI-controlled alternatives to players. Obviously not to all players. How would that make money? Just to the players other players would usually play with.

It shouldn't come as any kind of surprise. I mean, it's not like actual players haven't been asking for it for years, is it? How many blog posts have you read over the last decade and a half where someone was either ranting about how all anyone wants to do these days is play MMORPGs like they're playing a single-player game, or complaining that MMORPGs would be so much more enjoyable if it wasn't for all those stinky players?

I'll tell you how many. A shit-ton. As I've said a few times before, the first time I really noticed the strength of feeling on the subject was when Gordon from We Fly Spitfires posted about playing Guild Wars 2 for the first time, fighting the same mobs and doing the same quests but not having to speak to anyone, let alone actually form a group.

We Fly Spitfires is long gone so sadly I can't link directly to it but I referenced that post and talked about the general topic of playing solo with others back in 2018. I don't believe gamers in general have become markedly more social since then so it's hardly surprising if one of the best use-cases for AI is seen as getting rid of other players. It's what a lot of them have been wishing for for years and voting, as they say, with their feet, although with their backsides might be a better way of putting it, what with all the sitting down gamers tend to do.

Whether those same gamers want exactly what Ubisoft is trying to sell them is another question. That GI post describes how the various AI-NPCs have different personalities, varying from "stoic" to "bubbly" for the grunts who join you in the fights to "authentically annoying: self-satisfied and occasionally belittling" for the mission commander who tells you what to do and where to go.

I'd have thought one prime reason many players would like not to have to deal with other humans would be all that messy personality stuff. But then, in a commercial release, once the tech has been passed as ready to meet the public, I imagine you'll be able to pick the personalities of your team to suit your tastes. 

Based on how AIs work now, I imagine the default option will be a bunch of flirtatious sycophants who just can't get enough of your amazing insights and truly incredible ideas. No doubt there's someone out there looking for a sarcastic AI partner who treats them like dirt but I'm not convinced a video game based on a Tom Clancy novel is where Ubisoft is going to find them.

From a personal perspective, I feel like every week that passes leaves me less interested in and enthusiastic about AI than I was before. It was amusing a few years ago, amazing a few months ago, and now it's starting to bore me a little. It doesn't infuriate me - yet - but it's getting harder and harder to summon up the required goshes and wows for the endless, iterative, baby steps forward.

At the moment, I'm tending towards the opinion that AI in its current incarnation is going to end up being yet another of those not particularly interesting things we all have to use and pay for, whether we like it or not, like broadband service providers and local taxes. Try living on the grid in any town or city in the Western Hemisphere without either of those and see how far you get.

Equally, try excitedly and repeatedly telling your friends how wonderful they are. See? Now no-one wants to talk to you any more.

Azuriel wrote a great post bouncing off the last one I put up about AI, in which he suggested the natural conclusion of the course I appeared to be, if not advocating for, then at least traveling along willingly, would be a future where we gave up on "content" altogether in favor of stimulating our pleasure centers directly until we all died of dehydration and/or exhaustion. It's not that I don't find it an attractive picture but I suspect that, at least in the lifetimes of most people likely to have read this far, it'll turn out to be something a lot less dramatic than that.

In my own case, it feels as though the future is probably going to involve considerably more involuntary use of AI, as it embeds itself inexorably in every aspect of all our lives, while at the same time my voluntary, personal involvement with the technology will decline. I've stopped making AI music, for example. It feels like I've done that now and, while it was immensely enjoyable and satisfying while I was doing it, I'm not missing it at all now I've stopped. I'll probably do it again at some point but it isn't something that's going to occupy me for the rest of the time I have left.

Another indicator is what's happening with this year's Inventory Full Advent Calendar. I've shortlisted all the songs now and I didn't use AI at all. In fact, I skipped straight past the Gemini AI summary at the top of every Google search I did and went straight to the links. 

As for the pictures, there won't be any AI images this year. I've decided they're boring. We've all seen far too many and they all look the same. They're also both too good and not good enough at the same time, which is a terrible combination. 

I actually still like making them and looking at them for my own enjoyment but I don't feel it's likely that anyone else is going to share my pleasure, which I'm well aware comes mostly from having made something that looks like the picture in my head, not from any intrinsic qualities of the images themselves. They require a back story that would be inappropriate and counter-productive anywhere other than as illustrations to a narrative.

The main reason I won't be using any AI for the calendar this time, though, is that AI just isn't cool any more. Two or three years ago, even if some people really, really hated it and left comments saying they weren't going to come back if they were going to have to look at AI images, I felt like it was hip and clever to be making them and sharing them. Now, it feels like the default option. Everyone does it. I'd rather do something a little different, even if it is more work. 

And let's be honest, it is more work. A lot more. No wonder so many people do choose to use AI. It may not get you the right answer or draw you a great picture but it will give you an answer and a picture and it will do both in seconds. 

If you're putting a blog post together, quite often any answer or any picture will do. I mean, no-one really reads this stuff, anyway and even if they did, they'd most likely have the pictures switched off. It's not like I'm using AI to gather evidence for a court case or diagnose an illness, after all. Or even write a job description or an essay that someone's going to grade. No-one would be that lazy, surely. Or that gullible...

Look! All the way to the end and not one word about what I've been doing in New World! I'm really tempted now to give Gemini a precis of what I did in Aeternum yesterday and have it write a post for me, then put it up and see if anyone notices.

I never said I was going to stop using AI, did I? Or stop writing about it, either. I'm just not going to pretend I think it's cool any more. That'd be as bad as believing the best movie ever made was Fellowship of the Ring...

As for using it, I guess I'll stop when it's neither fun nor useful any more. We're not quite there yet but we could be soon. That's just the voluntary uses, though. I doubt any of us is going to be able to opt out entirely, ever again, short of going full Into The Wild.

 

Notes on AI used in this post:

Ironically, I didn't use any until I got to the final sentence, when I couldn't remember the name or author of the book I wanted to reference. I typed "what's the book where the guy walks into the Canadian wilderness and is never seen again" into Google Search and the AI Overview came back with "The book you are likely thinking of is Into the Wild by Jon Krakauer", which was indeed the right book.

The AI then went on to point out that it was the Alaskan wilderness, not the Canadian, that the person Krakauer wrote about, Christopher McCandless, walked into, but that there was another book, "Vanished Beyond the Map: The Mystery of Lost Explorer Hubert Darrell", published recently, which tells the story of an explorer who vanished in the Canadian wilds back in 1910. 

Gemini (For it was they, I assume.) then gave me precis of both, accurate in the case of Into The Wild at least, although I'll have to take the other on trust, along with links to Amazon for the Krakauer and the publisher's website for the other, so I could buy them both.

Given that Gemini wasn't just right but righter than I was, I suspect the days of being able to assume the AI summaries at the top of Google Search aren't to be trusted may be limited, although I do have to say that only a couple of days ago the same AI Overview told me about some album or other that sounded really interesting but turned out not to exist, at least not so far as an actual Google Search could tell me. So maybe don't stop checking their work quite yet.

Notes on Non-AI used in this post:

All the writing and all the pictures. The first and second images are photos of things made by Mrs Bhagpuss and given to me as presents. The third image is two things made by her (The felts.) and two made by me (The mirrors.), all hanging in our hallway, as does the final image, a framed print I won in a raffle back in the '90s.

An art raffle. 

It was the nineties.

Don't you miss the nineties? 

10 comments:

  1. Based on previous big bubbles, I would be very surprised to see the AI bubble implode before next year. I did a little digging, and found that the four most famous previous US bubbles all took 5 to 7 years to go off (railroads, radio, fab 50, dot com). Inferring a general pattern from only four data points is well beyond stupid of course.

    However, completely outside of past trends the big tech firms getting pumped up by insane investments are all describing strategies where they don't become profitable until 2029/ 2039. So I think the tech bros will probably keep this thing going at least until 2027, and in the short term no-one is going to panic unless there is some market shock external to AI.

    So two, admittedly quite weak, but independent lines of evidence lead me to believe that the AI bubble won't pop for at least the next six months. That said, the further out we get without something changing, the more nervous I will get about it. The numbers do not make sense at all right now.

  2. Edit: 2029/ 2030. Also, as near as I can tell major batshit crazy investment in AI started around 2022.

    Replies
    1. Heh! I read those dates and had a moment where I thought the whole bubble would never need to burst because if they're willing to wait another decade and a half before they start making money, the AIs probably will be running the entire economy anyway so why would they turn themselves off? I don't think they'll be ready for that in three or four years, though, so it probably is going to pop after all.

      Of course, as the dotcom bubble demonstrates, just because the financial bubble bursts doesn't mean the underlying idea has no longevity. Everything that bubble was predicated on did eventually happen. It just took a bit longer than the markets were willing to give it. I don't imagine this one will be much different. There may be a huge financial implosion but I'm pretty sure we're stuck with AI for the foreseeable future. We're all just going to have to get used to it, like it or not, like we've had to do with so many other changes and innovations we probably would rather have done without.

    2. That's a very good point. The other thing I found in my digging is that you can make a lot of money off of a bubble if the tech is truly transformative. You do it by pulling out of the market before it goes off, and then buying up whatever companies in that area are still standing after the crash for pennies on the dollar. At least some of those are going to be very good long-term investments.

      Figuring out when it will go off is the trick of course. If this keeps looking like a bubble, I intend to at least pull back on the stock market well before common wisdom says that's a good idea, since I am damn near positive I won't be able to time the implosion with anything that resembles precision.

  3. Being a curious person, I've been using AIs to answer questions, sometimes the kind of question Google used to be good at, sometimes questions too complex for a mere search. But I ask only trivial stuff, because my little experiments with serious stuff crashed against the wall that AIs are far more convincing than right about what they calculate.

    That, for stochastic speculation about which words go together. AI image generation never appealed to me as it's obviously too much work for too little result. It would be fine if they thought and could understand and make minor changes like "tilt his head a little less" but since they have no way to know what they're doing it's all guessing and trying again and again and again... Too much work.

    Replies
    1. I'm finding AI search considerably more reliable than it was even six months ago but it's still not good enough to trust without cross-referencing the results. Mostly I just skip past it because I know I'm going to have to go to the source anyway so why bother with the middleman?

  4. I will miss you as the only other pro-AI blogger I know.

    I am still fascinated by it and I don't think there is a bubble that is going to pop. The bubble part might be the crazy infrastructure build-outs to support it, since while companies are spending billions to build power plants and such, on the other end there are newer models that are much more modest in scope (and thus in resources consumed) that are just as accurate as the gigantic models.

    There are AI models that can run locally on your phone now, for instance.

    Also over here, it's an AI arms race between the US and China that China is arguably winning (maybe more on the robotics side) so our government will keep it propped up, I'd assume.

    I went to a fast food drive-through the other day and an AI Chatbot took my order. Though a human still handed me the bag and took my cash.

    What I think might "pop" is the recreational use of AI, like videos of bears being chased away by kittens and stuff. But functional, getting things done behind the scenes AI, I think it is here to stay and I do think it'll be really disruptive. If I were younger I'd be really worried about jobs.

    One thing we can agree on is the pace of change is so rapid that you can spend all your time trying to keep up and still not be able to. I had a bunch of YouTube channels I watched with the most recent AI news and they posted so frequently about new model versions that all I was doing was watching AI news videos. I finally had to let it go and now I just dip in once a week or so.

    Anyway if something truly cool bubbles up, I'll still let you know!

    Replies
    1. Oh, I will most definitely still be posting about AI and in a positive way if it's something I like or find useful. The capacity of the software for making music is breathtaking although I think we're already seeing the beginning of the end game there. I'm very glad I kept making songs until I ran out of material because I don't believe it will be possible to do that for much longer, not in the unregulated and extremely cheap way I did. I'd barely published this post yesterday before I saw the news about Suno signing a deal with Warner Bros. I've now read Suno's own statement on it and that's worth a post of its own that I may or may not get around to writing before something else happens to supersede it.

      My biggest objection these days is really to the hype rather than the software. A huge amount of what's being called "AI" could just as easily and more accurately have been labeled "Apps" or "Algorithms" or "Software" and would have been just a couple of years ago. Back in the 90s there was a widely-used expression - "Sexing it up" - for what the companies behind most of this are doing. I am getting quite ticked off with that.

      Other than the environmental issues, which are serious, about the only thing that worries me about AI as we have it now is how frequently it doesn't work. Baking it into the infrastructure of public services at this stage seems crazy. If we do get to the point where AGI is a real thing, then I guess we might have something to worry about but at the moment it all seems to be more of a conjuring trick.

  5. I don't "hate" AI, but I do think the current explosion of investment is ridiculous.

    I read somewhere that the expectation is that LLM AI data centres will be consuming as much electricity as the entire country of India within the next decade. And that OpenAI alone expects to 'invest' another $1.5 trillion in the next five years or so. All for a technology that seems to be generating no more than ten or fifteen billion a year in revenue... not profit, *revenue*. And look at the incredibly comical circular trading between Nvidia, OpenAI, Alphabet, and friends. Hundreds of billions of dollars going round and round and round.

    So yeah, LLM AI investment makes no sense and is a tulip-level bubble waiting to burst. Given how AI is propping up the stock market, such a burst could be very ugly. But it might just be more of a quiet 'bfffpt' over a long stretch instead of a kaboom: I suspect a kaboom, though.

    I am more interested in how other forms of AI, being used quietly and with little investment now, might revolutionize things like chemistry and medical research. You know, silly things like cures for types of cancer or ways of cleaning up microplastics.

    But those things don't make cute pictures or deceptive videos, so no one wants to spend trillions and burn the equivalent of an entire country's energy consumption on them.

    Replies
    1. At the moment it's the medical and scientific applications that worry me more. There doesn't seem to be a lot of confidence outside the silos that the results of those kinds of AI applications are sufficiently reliable to support the enthusiasm with which they're being adopted. I'm fairly sanguine that they will be, one day, but there seems to be a huge amount of wishful thinking and hand-waving going on to make it seem like the technology is a long way ahead of where it really is.

      As for the environmental impacts, one of the oddest things happening right now is the way the whole climate catastrophe scenario that was dominating government planning all around the world a couple of years ago seems to have dropped a long way down the urgency list of most countries. It's going to be very tough to get anyone in power to take action on any of the environmental impacts of AI so long as voters don't find it a sexy topic and tech companies have all the money they need to buy politicians.
