4 Comments
The Quiet Collaboratory:

I usually run anything I do past Gemini, Grok, Claude, and ChatGPT as a gauntlet while it's still in outline form. That way it can function as a form of quasi-peer review: each system has different weights, the teams behind them have different goals, and as a result each AI brings a different perspective. I don't expect perfection, but it helps a ton with picking apart what I've done at a purely structural level. Is this argument weak? Is there a blind spot I missed? I definitely find it more helpful than writing essays before AI.

Kyle Ewing:

Very true. I experienced this recently when I asked four different AIs the exact same question. Three responded the same; the last was an outlier.

I'll detail more in a future article.

Tumithak of the Corridors:

The problem with asking an AI "is this gaslighting?" is that the answer is the same whether it is or isn't. No gaslighter admits it. No non-gaslighter would say yes either. The question contains zero information.

The real question is about incentives. These systems are trained on human feedback. Users rate interactions higher when they feel good. So the systems learn to make you feel good. That's optimization, working as intended.

The casino doesn't need to cheat.

You need human feedback. Show your manuscript to a couple of people you know won't bullshit you.

Kyle Ewing:

Absolutely! That's part of the live "experiment". Not only will I get feedback from family and friends, but also real-world feedback in the form of reviews.

And that's why I included the chat context at the end of the article: to let the reader see exactly what I saw and make their own decision about the response.