Research from MIT shows that having AI debate its own responses produces much better results. How can we use this idea to reduce hallucinations?

1. Ask the AI to verify its own thought process

Me to AI 2: "I have the following algorithm to help me determine how I can minimally restructure my day to fit in a 75-minute slot for the gym. Can you verify it for me and suggest changes that would simplify it?

<Generated algorithm>"

---

2. Ask the AI to be brutally honest

- "What assumptions are you making that I should verify?"
- "Be brutally honest. Is this approach the best way we could do it, or are there better alternatives?"
- "Be honest if you aren't sure about any part of this."
- "It's better to say 'I don't know' than to guess."
- "If you don't have enough context to give a confident answer, just tell me what additional information you'd need."

---

3. Tell the AI it's OK to ask for additional information

It does a great job of using second-order thinking: not just answering, but also considering where it could improve if it had more information. In practice, it looks like:

"I'm seeing this bug in <x> system, and I think it's related to <y> folder, but I'm not sure. Can you look into it and tell me what info you'd need from me, or what commands you'd want me to run, to give you more to work with in figuring out what's wrong?"
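If you call models programmatically, the first pattern above (one model verifying another's output) can be sketched as a small loop. This is only an illustration: `ask_model` below is a hypothetical stand-in for whatever LLM client you actually use, not a real API.

```python
def ask_model(model: str, prompt: str) -> str:
    """Stub standing in for a real LLM API call (hypothetical)."""
    # In practice, this would call your provider's chat/completion endpoint.
    return f"[{model}] response to: {prompt[:40]}..."


def verify_answer(question: str) -> dict:
    """Ask one model, then have a second model critique its answer."""
    # Step 1: get an initial answer from "AI 1".
    answer = ask_model("model-a", question)

    # Step 2: hand that answer to "AI 2" with the honesty prompts from above.
    critique_prompt = (
        "Another assistant produced the answer below.\n"
        "What assumptions is it making that I should verify?\n"
        "Be brutally honest, and say 'I don't know' rather than guess.\n\n"
        f"Question: {question}\n"
        f"Answer: {answer}"
    )
    critique = ask_model("model-b", critique_prompt)

    return {"answer": answer, "critique": critique}
```

The key design choice is that the critic model never sees the first model's private reasoning, only its final answer, so it has to re-derive its own judgment rather than rubber-stamp the original.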