One flaw of the LLMs I've used: they will never give you harsh criticism. While it would be nice to think all my writing is just that good, I know there are no circumstances where you can ask for feedback and it will say "throw the whole thing out and start again."
@molly0xfff yeah, they also don't generally tell the user proactively "what you are actually trying to do is a bad idea/approach" unless what you're doing risks a safety rule. If you're just asking about, say, programming advice, it will quite happily help you shoot yourself in the foot unless you bother to ask "is what I'm asking here actually a good idea?" Humans are quite the opposite: they often want to tell you your approach to a problem is dumb before they've even finished hearing it!