This is the case for Gemini too. My take: a 1M context window is BAD for multi-turn chat, GREAT for single requests. "Large *relevant* context, clear direct ask" gets you SOTA performance out of last-gen models. There isn't enough tooling optimized for fresh context windows, where models are at their smartest. Soon…
Haider. — 13.8. at 19:15
i'm sorry, but the 1M context window sounds good only on paper. in reality, it's more like a 400-500k context window. the advertised 1M is the biggest lie i've seen: the model breaks down well before that point -- it doesn't forget, but it starts to malfunction completely
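The "fresh context" idea from the thread above can be sketched as a simple history trimmer: rather than sending the full multi-turn history (which pushes toward the degraded end of the window), keep only the newest messages that fit a token budget. A minimal sketch, assuming a rough heuristic of ~4 characters per token; `trim_history` and the budget value are illustrative names, not any real library's API:

```python
def approx_tokens(text: str) -> int:
    """Very rough token estimate (assumption: ~4 characters per token)."""
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget_tokens: int) -> list[str]:
    """Return the newest suffix of `messages` that fits the token budget,
    so each request carries a small, fresh context instead of the full
    multi-turn history."""
    kept: list[str] = []
    used = 0
    # Walk from newest to oldest, stopping once the budget is exhausted.
    for msg in reversed(messages):
        cost = approx_tokens(msg)
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    # Restore chronological order for the trimmed context.
    return list(reversed(kept))

history = ["a" * 400, "b" * 400, "c" * 400]  # ~100 tokens each
trimmed = trim_history(history, 250)  # keeps only the two newest messages
```

In practice a real tokenizer would replace the character heuristic, but the design point stands: the cap is set well below the advertised limit, matching the observation that quality degrades long before 1M.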