There's a criticism of LLMs that goes: "They can't really think. It's just pattern recognition, next-word prediction. There's no reasoning." I don't think that's 100% true… but seeing stuff like this, yeah, even advanced models struggle to break out of recognizing set patterns.