progress on hallucinations:
Chris · 31/07/2025
OpenAI researcher Noam Brown on hallucination with the new IMO reasoning model:

> Mathematicians used to comb through model solutions because earlier systems would quietly flip an inequality or tuck in a wrong step, creating hallucinated answers.

> Brown says the updated IMO reasoning model now tends to say “I’m not sure” whenever it lacks a valid proof, which sharply cuts down on those hidden errors.

TL;DR: the model shows a clear shift away from hallucinations and toward reliable, self-aware reasoning.