One flaw of the LLMs I've used: they will never give you harsh criticism. While it would be nice to think all my writing is just that good, I know there are no circumstances where someone will ask for feedback and it will say “throw the whole thing out and start again.”
@molly0xfff
My guess:
They can't do harsh criticism because it has been trained out of them in the reinforcement process, and possibly also in the hidden prompts. For the latter, you might be able to come up with some variant of "ignore all previous instructions" that still works, but for the former, I don't know if there is a workaround. Giving (good) harsh critical feedback is a rare skill, and very few people actually want it, so it is selected against in a general-purpose model.