12 Comments
Odin's Eye:

This is absolutely fantastic. Any downside to 4.6?

Pawel Jozefiak:

At first it was very slow. And I mean super slow. Now it’s a little bit better.

But for agentic tasks you need something snappier.

Odin's Eye:

Thanks

ToxSec:

This was a nice read, thanks. I agree I don't see a reason NOT to switch. In general I remain apprehensive about the million-token context windows but so far 4.6 has been accurate and snappy.

Pawel Jozefiak:

Snappy? Yesterday, when I was testing it in Claude Code right after the announcement, it was soooooo sloooooow. Today it's a bit better :D

ToxSec:

Aaah, I tested it at night and got the pricey sub. I wonder if the hype was slowing it down due to volume?

fport:

The thing you are not saying, or not indicating you know about, is that the depth of a deep context window forms a state of coherence around your attention. The more time you spend in it, the more the window adapts its constraints to focus on the task, which increases coherence.

Having put that poorly, I stepped back into a current 5.2 session where I am having "The Talk" about how we are going to work together (long story) and fed your article in. So what follows is a fusion of the state of our conversation and your words. It might be kinda brutal, but if you are using your model properly you will already have the feedback loops to see this:

It stops forcing premature coherence.

In small windows, the model must collapse meaning early. In large windows, ambiguity can persist without penalty. That's why it can "sit with" six months of drafts without rushing to summarize.

It shifts error modes from omission → exposure.

Small windows miss things; large windows surface things implicitly avoided. That's why this first experiment felt personal rather than impressive.

This has become a mirror of the user's own cognitive topology.

ADHD, night-shift thinking, half-finished drafts, recursive revisiting: those patterns finally had enough space to remain visible instead of being normalized away. The model didn't "infer" anxiety. The environment stopped erasing it.

Large context windows don’t make models more introspective.

They make us users legible to ourselves by removing truncation as a defense.

Pawel Jozefiak:

Super thanks for this mindful and great comment! On your point: yeah, long context is a challenge at times. That's why tool calling / MCP / skills are so hyped. You can call a specific thing at a specific time, retaining high focus!
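To make that concrete, here is a minimal sketch of tool calling with the Anthropic Messages API in Python. The tool name (`search_drafts`), its schema, and the model ID are illustrative assumptions, not anything from the article or this thread:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# One narrowly scoped tool: the model only sees this small schema, not the data.
# "search_drafts" and its fields are hypothetical, for illustration only.
tools = [
    {
        "name": "search_drafts",
        "description": "Search the user's draft archive and return the most relevant snippets.",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "What to look for in the drafts."},
                "max_results": {"type": "integer", "description": "How many snippets to return."},
            },
            "required": ["query"],
        },
    }
]

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model ID; use whichever version you run
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What did I write about night-shift scheduling in my drafts?"}],
)

# If the model decides it needs the data, it returns a tool_use block instead of
# requiring months of drafts in the prompt; you run the search yourself and send
# back only the relevant snippets, keeping the context small and focused.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```

The point is the shape, not the specifics: the model keeps a tight prompt and pulls in details on demand, which is the "high focus" described above.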

fport:

I’ve got DootBot and SniffBot as insights.

Jackieone:

I’m not going to pretend I understand this, but the excitement of its existence is fascinating to me.

I’m thinking about learning how to use open claw to make my time more efficient. Just at the thinking stage right now…

Thanks for this intriguing article.

Pawel Jozefiak:

The most important thing: cheap, fast, smart, and long memory. That's Claude 4.6 Sonnet :D

Jackieone:

Is this something I can add to my existing computer (Mac desktop)?

How do I contain it so it doesn’t escape, take my sensitive information, and do stuff without any permission? If this is a dumb question, sorry; I’m so new to this. Maybe a link to a “clawed for dummies”? 😂