AI coding requires a new kind of literacy: the moment something goes wrong, kill the run immediately --- does the agent rely too heavily on its prompt? Especially when it previously handled a simple problem just fine and now suddenly goes in circles on it, you can't assume an AI's reasoning ability only ever improves; sometimes it regresses in place, and even Claude Code gets things wrong. People have complained that Anthropic changed CC's system prompt and threw it into chaos, and that the bug was never fixed, though this was never confirmed. On the other hand, the fact that the agent depends so heavily on its prompt suggests the underlying AI capability is still quite weak.