Claude Sonnet is a small-brained mechanical squirrel of <T>

This post is a follow-up to LLMs are mirrors of operator skill, in which I remarked the following:

I'd ask the candidate to explain the sounds of each one of the LLMs. What are the patterns and behaviors, and what are the things that you've noticed for each one of the different LLMs out there?

After publishing, I broke the cardinal rule of the internet - never read the comments - and, well, it's been on my mind that expanding on these points and explaining them in simple terms will, perhaps, help others start to see the beauty in AI.

let's go buy a car

Humour me, dear reader, for a moment and rewind to when you first purchased a car. I remember my first car, and I remember specifically knowing nothing about cars. I remember asking my father "what a good car is" and seeking his advice and recommendations.

Is that visual in your head? Good. Now fast-forward to the present, to the last time you purchased a car. What car was it? Why did you buy that car? What was different between your first car-buying experience and your most recent one? What factors did you consider in your latest purchase that you perhaps didn't even consider when purchasing your first car?

there are many cars, and each car has different sounds, properties and use cases

If you wanted to go off-road 4WD'ing, you wouldn't purchase a hatchback. No, you would likely pick up a Land Rover 40 Series.

Likewise, if you have (or are about to have) a large family, then upgrading from a two-door sports car to "something better and more suitable for the family" is the ultimate vehicle-upgrade trope in itself.

the minivan, once a staple choice of hippies and musicians, is now used for tourism

Now you might be wondering why I'm discussing cars (now), guitars (previously) and, later on the page, animals; well, it's because I'm talking about LLMs, but through analogies...


LLMs as guitars

Most people assume all LLMs are interchangeable, but that’s like saying all cars are the same. A 4x4, a hatchback, and a minivan serve different purposes.

there are many LLMs, and each LLM has different sounds, properties and use cases. most people think the LLMs are competing with each other; in part they are, but if you play around with them enough, you'll notice each provider has a particular niche and is fine-tuning towards that niche.

Currently, consumers of AI are picking and choosing their AI based on the number of people a car seats (context window size) and the total cost of the vehicle (price per mile, or per token), which is the wrong way to make a purchasing decision.

Instead of comparing context window sizes vs. cost per million tokens, one should look deeper into the latent patterns of each model and consider what their needs are.

For the last couple of months, I've been using different ways to describe the emergent behaviour of LLMs to various people, to refine what sticks and what does not. The first couple of attempts involved anthropomorphising the LLMs as animals.

Galaxy-brained, precision-based sloths (oracles) and small-brained, hyperactive, incremental squirrels (agents).

But I've come to realise that the latent patterns can be modelled as a four-way quadrant.

there are, at least, four quadrants of LLM behaviour