Human Thinking First, AI Second
Over the last few weeks, I learned a new word: nerf.
I had no idea what it meant, but apparently in the gaming world, it refers to developers quietly reducing the strength of weapons or characters for balance.
Then the AI crowd borrowed it. Now it means: the models get dumber.
Not joking. If you’re a Claude user, the once-great Opus 4.6 just… isn’t responding as sharply as it was a couple weeks ago. ChatGPT’s Codex? ‘Nerfed’ as well.
Frustrating? Maybe. But here’s what I’ve always done, with or without a nerfed model.
I do my hard thinking before I open the chat window. Rough structure. Key arguments. The nuanced complexity of the business case. The logic I want to test.
Then I bring AI in to challenge, refine, or reorganize — not to generate from scratch.
Honestly, this is probably how it should’ve been all along. We’re scientists. We solve problems that haven’t been solved before. That part can’t be outsourced.
AI is the automation and refinement tool. The thinking is still ours.
This is this week’s #BitesizeAI.