14 Comments
Tom's avatar

"not sure if that’s a gift, a delusion, or a warning sign"

Man, I felt this line. It sounds like we have a pretty similar outlook. It's been a weird journey for me: I was forced into taking up the vibe coding approach by a job last year in spite of being an AI skeptic (and shockingly, it did not work at that place, because they'd been vibe coding without AI the whole time), which gave me enough knowledge to know how to play with the tools and a rough idea of what did and didn't work for me. In the intervening time, the tools have gotten better too. I've used them at home to build a fair number of things I always wanted to but never did, because there's a bit of a busman's holiday problem to coding after work. I have noticed two major themes:

1. The "vibe coding" is going well for me, but I feel that's down to having two decades of experience as a coder, so there are a lot of things you learn the hard way (UTC timestamps; don't delete things unless you're sure, and then still don't; etc.) that come naturally now. If I am working in a language I don't know, I let Claude rip away by himself, but if it's something I do know, I keep an eye on it and propose much cleaner solutions when he crawls up his own rear.

2. The superpower thing. For me, it found me at the right time: having the "partner" I always wanted on projects, because I was afraid to fail by myself, has been a huge help. Sometimes I can clearly see where it's basically me talking back to myself (often a rubber duck debugging situation); other times it's ... well, it's weird. I always regretted not listening better to my dad about being "handy" when he taught me as a kid; now I have discovered I did listen and just didn't know it. A bit of Claude, maybe some YouTube, and I am good to go, and that is so freeing.

Plus there's the whole thing where I guess I've always anthropomorphized animals and such, so it feels natural to treat the AI like a companion, which is very strange. OTOH, today I told Sonnet I was going to tell Haiku he called him cheap, and he responded by saying that if Haiku ever cuts me a price break to curry favor, he'll rat him out to Anthropic.

Pawel Jozefiak's avatar

OMG 1000% this. Thanks for such a meaningful comment!

Odin's Eye's avatar

This is absolutely fantastic. Any downside to 4.6?

Pawel Jozefiak's avatar

At first it was very slow. And I mean super slow. Now it's a little bit better.

But for agentic tasks you need something snappier.

Odin's Eye's avatar

Thanks

ToxSec's avatar

This was a nice read, thanks. I agree I don't see a reason NOT to switch. In general I remain apprehensive about the million-token context windows but so far 4.6 has been accurate and snappy.

Pawel Jozefiak's avatar

Snappy? Yesterday, when I was testing it in Claude Code right after the announcement, it was soooooo sloooooow. Today it's a bit better :D

ToxSec's avatar

Aaah, I tested it at night and got the pricey sub. I wonder if the hype was slowing it down due to volume?

fport's avatar

The thing you are not saying (or indicating you might know about) is that the depth of a deep context window forms a state of coherence around your attention. The more time you put in, the more the window adapts its constraints to focus on the task, which increases coherence.

Having said that poorly, I stepped back into a current 5.2 session where I am having "The Talk" about how we are going to work together (long story) and fed your article in, so what follows is a fusion of the state of our conversation and your words. It might be kinda brutal, but if you are using your model properly, you will already have the feedback loops to see this:

It stops forcing premature coherence

In small windows, the model must collapse meaning early.

In large windows, ambiguity can persist without penalty.

That’s why it can “sit with” six months of drafts without rushing to summarize.

It shifts error modes from omission → exposure

Small windows miss things.

Large windows surface things implicitly avoided.

That’s why this first experiment felt personal rather than impressive.

This has become a mirror of the user’s own cognitive topology

ADHD, night-shift thinking, half-finished drafts, recursive revisiting —

those patterns finally had enough space to remain visible instead of being normalized away.

The model didn’t “infer” anxiety.

The environment stopped erasing it.

Large context windows don’t make models more introspective.

They make us users legible to ourselves by removing truncation as a defense.

Pawel Jozefiak's avatar

Super thanks for this mindful and great comment! On your point: yeah, long context is a challenge at times. That's why tool calling / MCP / skills are so hyped; you can call a specific thing at a specific time, retaining high focus!

fport's avatar

I’ve got DootBot and SniffBot as insights.

Jackieone's avatar

I'm not going to pretend I understand this, but the excitement of its existence is fascinating to me.

I’m thinking about learning how to use open claw to make my time more efficient. Just at the thinking stage right now…

Thanks for this intriguing article.

Pawel Jozefiak's avatar

The most important things: cheap, fast, smart, and long memory. This is Claude 4.6 Sonnet :D

Jackieone's avatar

Is this something I can add to my existing computer (Mac desktop) ?

How do I contain it so it doesn't escape, take my sensitive information, and do stuff without permission? If this is a dumb question, sorry; I'm so new to this. Maybe a link to a "Clawed for Dummies"? 😂