Discussion about this post

The Quiet Collaboratory

I usually run anything I write past Gemini, Grok, Claude, and ChatGPT as a gauntlet while it's still in outline form. That way it can function as a kind of quasi peer review: each system has different weights, the teams behind them have different goals, and as a result each AI brings a different perspective. I don't expect perfection, but it helps a ton with picking apart what I've done at a purely structural level. Is this argument weak? Is there a blind spot I missed? I definitely find it more helpful than how I wrote essays before AI.

Tumithak of the Corridors

The problem with asking an AI "is this gaslighting?" is that the answer is the same whether it is or isn't. No gaslighter admits it. No non-gaslighter would say yes either. The question contains zero information.

The real question is about incentives. These systems are trained on human feedback. Users rate interactions higher when they feel good. So the systems learn to make you feel good. That's optimization, working as intended.

The casino doesn't need to cheat.

You need human feedback. Show your manuscript to a couple of people you know won't bullshit you.

