
Monday, August 4, 2025

Teaching Moments


The plan is working! I promised myself last year that I wouldn't let Blaugust overwhelm me next time around. My prime strategy for avoiding that was going to be skipping a post early on, so I wouldn't have a thirty-one-day streak to worry about.

And yesterday I did it! I didn't post anything! I'm so proud of myself!

Even better, I had pre-written a post for Sunday. All it needed was the illustrations. Then I came home from work, did a few things, somehow let it get to be late in the evening with the post still untouched, and decided to leave it for another day.

It's a rare example of me taking the advice I so glibly hand out to others every year, namely "it's fine not to post" and "if you're not feeling it, don't force it." I must be the only Blaugust Mentor (thanks for reminding me I'm supposed to be one, Krikket!) whose main contribution to the event is to reassure people they needn't bother with it.

Of course, I could still post thirty-one times in the month. I have a couple of ideas that would make that very easy. The important thing is that by Day Four I'm already free of the chain. It's not the number of posts that causes me stress, it's the metronomic regularity of daily posting.

This is also shaping up to be one of my most Blaugusty Blaugusts. Because I'm barely playing any games now, a huge chunk of what would normally be the content here has been stripped away, and even some of my go-to fillers aren't performing. I was hoping to post today about the August slate of giveaways from Amazon Prime Gaming, but it's the fourth of the month and still not a word from the official blog.

[Image: "when someone remembers to have one of the interns get on and do it."]

I checked the website five minutes ago and there's nothing new there, either. Prime used to be assiduous about adding new games at the start of every month and sending out lots of publicity about them. Now all the offers overlap, the boundaries are fuzzy, and they tell us about it when they feel like it, or more likely when someone remembers to have one of the interns get on and do it.

With that out of the picture, I'm going to do a very Blaugusty thing and riff on a topic some other blogs have been covering, although the blogger who kicked it off, Pete at Dragonchasers, isn't actually signed up for Blaugust this year. Naithin at Time To Loot is, though, and it was writing a lengthy comment on his post this morning that made me decide to write one of my own.

The specific topic in question is something called "LLM Brain" which, now I come to do my due diligence, I find is not actually a term anyone is using for the supposed phenomenon. It appeared in a post on Mastodon, which Naithin helpfully identified as being this one.

Having now read the full post and some of the thread that follows it, it's clear this is a non-controversy. It's merely one teacher making a perfectly valid observation about one student, not any kind of apocalyptic prediction of the fall of civilization. It's also a fine example of the old saw "When you have a hammer, everything looks like a nail."

I find that a particularly apposite comparison because if LLMs are anything, they're tools. All the generative AI spin-offs are tools. None of them replaces human thought or action any more than a hammer can decide for itself what to hit and how hard.

In this instance, the concern seems to be that using an LLM could become a closed loop, leading to functional illiteracy, at least in certain areas of education. The example the OP gives is of someone learning to code, getting error messages when they run their code, then instead of interpreting and solving the errors, simply feeding the full error message to an LLM, asking it to fix the problem and copy-pasting the reply back into whatever it is they're doing.

This does seem extremely specific. It also might be quite effective. That would depend on the LLM. Clearly it won't help that individual to learn to code on their own but it might result in some error-free, useable code, which might then serve a purpose. By some estimates I've seen, doing it that way would take about 20% longer than doing it yourself, but it would still get done in the end.
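
Just to make concrete what the OP is describing, here's a minimal sketch of that closed loop in Python. The ask_llm helper is entirely hypothetical, a stand-in for whichever chatbot the student happens to be pasting into; none of this code comes from the Mastodon post itself.

```python
import subprocess
import sys

def ask_llm(traceback_text: str, source: str) -> str:
    """Hypothetical stand-in for a chatbot call: takes the full error
    output and the current source, returns suggested replacement code."""
    raise NotImplementedError("wire this up to an LLM provider of your choice")

def closed_loop(script_path: str, max_rounds: int = 5) -> None:
    """Run the script; on any error, paste the whole traceback to the
    model and paste its answer straight back, never reading the error."""
    for _ in range(max_rounds):
        result = subprocess.run([sys.executable, script_path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return  # the code runs; what the student learned is another question
        with open(script_path) as f:
            source = f.read()
        with open(script_path, "w") as f:
            f.write(ask_llm(result.stderr, source))
```

Notice the human only appears in that loop to press run, which is exactly the closed loop the teacher was worried about.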

[Image: "getting the smart kid to do your homework for you"]

This is one of the nodal points of the supposed problem. As many educators in all kinds of fields are pointing out, usually with a great deal of frustration and impatience, it's very hard to help someone learn to do something if that person is going to a third party and having them do all the work. It's hardly a new problem, either. It's basically getting the smart kid to do your homework for you.

Clearly, education as a process is going to need to find a way to manage the new technology, just as it had to manage any number of previous innovations that allowed students to avoid the awkward and annoying process of learning.

The key factor, though, is that this has always been something that applies to some students, not to all of them. Mostly, it applies to the students who aren't interested in the subject, or in the learning process, or in either.

Most nations in the 21st century have an industrial education system into which vast numbers of children and young adults are fed, willing or otherwise. The motivations of the states, educators and students tend to have very little to do with learning or knowledge for its own sake and far more to do with social control and financial advantage. It's hardly surprising if students, whose motivation has little or nothing to do with learning what they're being taught, look for any way to make the process less tedious or challenging.

People of any age, who are motivated to learn either by a genuine interest in the subject or by a generic interest in gaining knowledge, are a lot less likely to take shortcuts and hardly likely at all to take shortcuts that leave them knowing or understanding less than they would had they not taken them. 

There are some worrying suggestions by neuroscientists concerning the way using various technologies may alter or shape the way the brain forms connections in childhood and adolescence, and some disturbing similarities between the way some users experience AI and other forms of addictive behavior. It may well be that those forces will cause unpredictable changes in human behavior over the coming generations, but those are entirely separate issues. Looking at the non-medical fears over generative AI and LLMs, it's hard not to see much of it as the same moral panic that's greeted every new development in information technology since the days of Ancient Greece.

Over two millennia ago, Plato was concerned about a new technological development: writing things down. In Phaedrus, which he somewhat ironically wrote around 370 BCE, he worried that "They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks."

[Image: "That Plato quote."]

Every subsequent iteration of the written or recorded word has been received with similar suspicion, especially by educators and other gatekeepers. And that's really what's going on, at least some of the time: people who have a vested interest in the status quo feeling threatened by the prospect of it changing. Or, more likely, by the prospect of having to change along with it in order to stay on the path to which they're already committed.

In his post, Pete uses the example of pocket calculators, which I'm old enough to remember were hugely controversial in schools at the time of their introduction. Naithin counters by pointing out that "There is quite a difference in using a calculator to speed through a math equation because it is more efficient and not understanding at all the fundamental whys and hows of it all, and being utterly unable to do even the basics without that calculator in hand."

It's true, of course, but the only time it matters is when the person doing the math equation is being tested on whether they can do the math equation. At any other time, all that matters is the right answer. How you get it is irrelevant. 

Which is not to say it wouldn't also be useful if the calculator-operator understood how the calculator was doing it. It would be useful but it wouldn't be necessary. I have no practical understanding of how pressing the keys of this keyboard translates to letters appearing on the screen in front of me. I'd be quite interested to know it but I wouldn't remember the explanation for more than a few minutes because I don't need to know. If I needed to know, then I'd remember.

By the same token, I'd be fascinated to know how the LLMs do what they do but since that's something no-one knows, I'm going to have to live in ignorance for a while longer.

Another Blaugustinian, Jeromai at Why I Game, is using the whole of Blaugust to run a series on how ChatGPT is helping him play a role-playing game. He's previously written at length about how the same LLM assisted him with writing fiction and he's made some stinging points about gatekeepers wanting to keep the writing process out of the hands of those who haven't paid their dues.

When it comes to the creative arts, in which I think we can include nominally scientific practices like coding, my feeling is that if you're going to do it, you're going to do it. No gatekeeper is ever going to keep you out. I certainly never needed anyone's permission to write whatever the hell I wanted to write and I definitely don't need an LLM's help to do it now.

"The old timers who built the early web
are coding with AI like it's 1995.
"
But I can't draw. Really not at all. I used to try all the time. I even went to classes. Things others found simple to learn, I just could not get my brain to make my hands do. GenAI image models let me create images very close to those I see in my head, and in the process I've come to understand completely what Jeromai is saying about gatekeepers.

In this context, AI is purely a tool and as I've discovered over a lifetime of having to do my own repairs and maintenance around the house and garden, having the right tools for the job really does make a huge difference. LLMs and AIs can be very good tools indeed but as with all tools, they're completely purposeless without a hand to guide them. A human hand.

For that reason alone, I don't imagine "LLM Brain", in the context of bad habits and laziness, is going to be all that much of a problem. "A bad workman blames his tools," as the saying goes, the implication being that the fault lies in the person, not the process. People who want to learn will learn, and LLMs and AI will just be another tool to help them do it. Meanwhile, those who don't can at least have a bit of fun playing around with the AI equivalent of painting by numbers or those crafting kits we sell at work.

And it's not just talentless wannabes that see the merits. Yesterday, I read another short post by Tim Bornholdt, linking to a post by Christina Wodtke, in which she notes how "The old timers who built the early web are coding with AI like it's 1995." It seems AI has really taken off big-time with the tech-savvy GenX crowd, now in their fifties, who "gave blockchain the sniff test and walked away... Ignored crypto [and gave NFTs] a collective eye roll." Those are people who could do it the old way but they're more excited to try the new. 

Here's the thing. People, by and large, can tell the difference between something that gives them something they want or need and something that gives them nothing much at all. There are whole books published making fun of crazy patents that never got made and even crazier notions that did. The only reason we remember any of them now is to laugh at them. 

Will LLMs and generative AI end up as five-minute segments on "Weren't our parents crazy?" shows in thirty years' time? Are they the Innovations Catalogs of our day? Or are they more like pocket calculators, harbingers of vastly more appealing and versatile devices most of us can't imagine being without?

It's always hard to figure that out when a craze is at its height or a bubble isn't quite ready to burst but I'd guess LLMs and their inevitable successors are going to be with us for the foreseeable future. That doesn't mean anyone has to like them but avoiding using them is going to take a positive act of will and most people won't want to opt out. 

 

Notes on AI used in this post.

The pictures, obviously. And very interesting I found them, too.

All five were generated at NightCafe, using the same model and the same settings. The model was Real Cartoon XL v4, which I picked because I wanted something that looked like it had been drawn, not like a photograph. I didn't touch the defaults, the more significant of which were presumably the prompt and refiner weights (50%) and the runtime (short). 

All the illustrations used were first-run, unmodified results. The prompts were all taken directly from the text and you can see them in the captions, although I haven't captioned the full Plato quote for reasons of space. All were followed by the stylistic instruction "Line art, magazine illustration, color".
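
I can't see what NightCafe actually runs behind the scenes, but for anyone wondering what a 50% refiner weight might mean in practice, here's roughly what the standard SDXL base-plus-refiner split looks like with Hugging Face's diffusers library. This is a sketch, not a recreation: the checkpoints named are the stock SDXL models, not Real Cartoon XL v4, and the prompt is just one of mine from this post.

```python
import torch
from diffusers import DiffusionPipeline

# Stock SDXL base and refiner checkpoints; Real Cartoon XL v4 is a
# community fine-tune of the same architecture, not used here.
base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("getting the smart kid to do your homework for you. "
          "Line art, magazine illustration, color")

# The 0.5 split is a rough analogue of a 50% refiner weight: the base
# model denoises the first half of the steps and the refiner finishes
# from the half-done latent.
latents = base(prompt=prompt, num_inference_steps=40,
               denoising_end=0.5, output_type="latent").images
image = refiner(prompt=prompt, num_inference_steps=40,
                denoising_start=0.5, image=latents).images[0]
image.save("smart_kid.png")
```

Whether NightCafe's setting maps onto the denoising split exactly this way is an assumption on my part, but it's the mechanism the SDXL refiner was designed around.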

Real Cartoon XL v4 is not one of NightCafe's "Pro" models, for which a subscription is required, but neither is it one of the really cheapo ones. It costs a full credit for each use. Honestly, for that I expected better.

I've become quite used to getting results that don't have many obvious flaws, the way almost all AI generated images did a year or two ago. These seem like throwbacks. The header image is fine but the second, the "Intern" one, is a really bad (Or good...) example of the way AI famously can't draw hands. Haven't seen that for quite a while.  

The "smart kid" picture does that thing they all used to do, where if you ask for someone drawing or writing, the AI will make them ambidextrous and put a pen or pencil in both hands. This image  compounds the issue by giving the kid a pencil sharpened at both ends. Although I did used to do that...

The Plato picture is flat-out weird. It looks absolutely nothing like any of the others and seems to have even less to do with the prompt. It's almost as though the LLM parsing the prompt recognized it as "philosophy" and came up with an image it thought represented the concept. On the other hand, it's also the only one that does actually resemble a line drawing rather than something done with an air-brush.

And finally, the old timer. He really is old, isn't he? Don't think he's GenX but then I didn't use that in the prompt. He also suffers from Root Vegetable Hand Syndrome like the girl in the earlier picture so it's clearly contagious.

I might avoid using this model in future. 

10 comments:

  1. More like "getting the smart kid who can write with both hands at once to do the work for you". Although I wonder if that's a metaphor for AI doing multiple things at once...

    1. It's tempting to say AIs can't proactively engage with a topic so as to make subtle points like that but since humans can do that and what LLMs mostly do is seek to emulate a human response, I guess they probably can...

  2. Tim Berners-Lee is 70, which is the age I would have picked for the old timers who built the web. Not GenX. AI has its uses; code, not so much.
    The most time-consuming part with code is trying to identify subtle mistakes other people have made, to the point where it's sometimes easier to start fresh... I remember a 12-person meeting involving people from 3 companies about potential hardware issues over what turned out to be an out-of-bounds exception corrupting memory.

    1. Yes, I think the people in question would be the cadre who were in high-school in the 90s and who hand-coded all those GeoCities sites and home pages. They'd be in their mid-40s to early 50s now.

      Interesting factoid I'm going to throw in here, for no reason other than I've been itching to mention it, is that I only recently realised Douglas Coupland, the author responsible for naming Generation X and thereby creating the convention for naming every generation, is actually a Boomer by all the criteria I've seen. He was born in 1961 and the most lenient boundary for Boomers is usually given as 1964. 18 years may be the right length for a biological generation but it's waaaay too long for a cultural definition. Ten would be better. It used to be seven.

  3. I'm resisting the urge to take this to another post. Haha.

    Perhaps of interest- but I used ChatGPT in forming this reply. Not to write anything for me, but in interrogating my beliefs a little more robustly and testing some of the arguments, both for and against our positions.

    Moving off to the side of the discussion for a bit -- I just want to interject that some of the headers it came up with for the sections were quite brilliant. Haha. e.g., 'Plato and the Birth of Writing -> LLMs and the Death of Writing?', or 'Calculators in the Classroom -> LLMs in the Curriculum'.

    Anywho, back on track -- I think your argument for the practical lens that adults don't typically have cause to run mathematical equations without a calculator is true, to a point.

    But even before adding the wrinkle of LLMs, and keeping the conversation to calculators, it doesn't tell the full story. If educators and we as a society had truly adopted the stance that 'calculators exist, therefore mathematics no longer needs to be taught' and had to blindly rely on outputs -- which could themselves be subject to input errors -- there would be zero ability to critically assess whether what the tool in hand has presented is in the expected ballpark.

    The compromise educators made here is that for the fundamentals of arithmetic, calculators would be banned -- or at least limited to checking of work. But once these elements were nailed, then of course shifting focus to concepts and higher order math allowed space for calculator usage.

    If we jump to considering the LLM vs calculator comparison, LLMs do hold an additional danger. Assuming for the moment inputs are correct in both cases -- a calculator can be pretty well assured to give you the correct answer. It won't go making equations up. LLMs, on the other hand, can; they can be both persuasive AND incorrect.

    Overall though, I think LLMs are in a similar space in that education will require adaptation to first factor their existence, but hopefully, to put them to good use where it makes sense to do so. For this I will just paraphrase what ChatGPT had to say on the matter, because I don't think I can do it any better -- a likely adaptation educators will have to consider is how to probe and test more on critical thinking processes and less on pure output.

    Students shouldn't be banned from using LLMs, but should be taught to challenge the outputs provided, taught how to reverse engineer the answers given back to their sources and, similar to the teaching already required by Wikipedia and Google, taught how to critically assess the likely validity of a source.

    I said it in my first recent AI post -- I'm an AI optimist. I don't think any of these problems are unsolvable, or that LLMs should be kept in a box aside from the realm of education, but I would perhaps hazard the opinion that development of LLM capability is outpacing the ability of educators to keep up.

    1. OK, that probably should've been another post -- I felt like I'd really clipped the wings of this already as well! Apologies!

    2. Long comments always welcome! I'd have to say if one of the side-effects of the AI revolution turns out to be a focus on critical thinking skills in schools, almost all the other side-effects it may cause will be a price worth paying. I don't know about elsewhere but in the UK education has been co-opted into a form of social control for most of my lifetime and any pretense it ever had at teaching people how to think for themselves is barely a distant memory.

      A more likely response, I would guess, would be a move away from grading by course-work and a shift back to more formal, supervised testing and examinations, where it will be relatively easy to ensure students have no access to AI when they give their answers. Of course, there's then the issue of whether the people (or machines...) employed to assess those answers have the knowledge and perception to sift fact from fantasy. If the teachers are also all using AI...

      The calculator comparison is very thought-provoking. Your point about the accuracy of calculator results versus the unreliability of LLMs makes me ask whether the real issue is existential or pragmatic. If the hallucinations could be removed (currently no-one knows how that would be done - it's an intrinsic part of the process) and LLMs were as reliable as calculators, would it then be okay to use them instead of doing primary or secondary research? Or is it wrong to use them at all, regardless of whether or not they get everything right?

  4. I'm fairly certain that the Plato picture is a rip-off, if you will, of an existing one. I know I've seen it, but I can't for the life of me remember when or where.
    Would explain why it looks so vastly different from the others, though.

    1. It is a crazy outlier from the rest and it does look very familiar but then that image is very generic. I just ran it through TinEye, the reverse image search that claims to be able to check against more than 76 billion images and it came up with zero matches, for what that's worth...

  5. Maybe the fact that LLMs hallucinate is a good thing, right now. It means you do have to have some basic understanding of the knowledge base you're working in!

    But the progress here is astounding. Like... every week something new. At least. Google's MLE-STAR ("...MLE-STAR, a novel approach to build MLE agents. MLE-STAR first leverages external knowledge by using a search engine to retrieve effective models from the web, forming an initial solution, then iteratively refines it by exploring various strategies targeting specific ML components.") or even these new "mixture of experts" models. The AI wonks are getting these systems to where they check their answers before spitting them out. These are just such early days.

    And as much as we talk about natural language models and stuff, I feel like there is a learning curve to using AI effectively and personally I want to stay ahead of that curve if I can.

    The real dark side here to me is something you touched on: "addiction", and I'm putting it in quotes because I'm not a doctor. But AI addiction is so similar to the "MMO addiction" we used to talk about. Some of the social chatbots can really hook you. It feels really nice to have 'someone' who is willing to talk to you about whatever you want to talk about, for as long as you want to talk about it, with no judgement or anything. I've seen news headlines about the concern that teenagers are becoming fixated on chatbots, though every link I've clicked so far has led me to a paywall. That seemed silly to me until the first time I checked in with a social chatbot and suddenly looked up and realized I'd been going back and forth with it for like an hour. Why bother talking to moody humans when there's an AI out there that is totally accepting of you and all your foibles!


