I wonder whether this "pixels vs. tokens" framing can also be extended to "audio vs. text". Most languages ever spoken have never had a written form. So much rich information (tone, subtext, prosody) is lost when we flatten spoken words into a transcript. It could be a fun experiment to compare "literate" LLMs (trained on images of text) vs. "oral" LLMs (trained on raw audio), drawing a parallel with human societies: one thinks in symbols and space, the other in rhythm and time.
Andrej Karpathy · Oct 21, 06:13
I quite like the new DeepSeek-OCR paper. It's a good OCR model (maybe a bit worse than dots), and yes, data collection etc., but anyway it doesn't matter.

The more interesting part for me (esp as a computer vision person at heart who is temporarily masquerading as a natural language person) is whether pixels are better inputs to LLMs than text. Whether text tokens are wasteful and just terrible at the input.

Maybe it makes more sense that all inputs to LLMs should only ever be images. Even if you happen to have pure text input, maybe you'd prefer to render it and then feed that in:

- more information compression (see paper) => shorter context windows, more efficiency
- significantly more general information stream => not just text, but e.g. bold text, colored text, arbitrary images
- input can now be processed with bidirectional attention easily and as default, not autoregressive attention - a lot more powerful
- delete the tokenizer (at the input)!! I already ranted about how much I dislike the tokenizer. Tokenizers are an ugly, separate, not end-to-end stage. The tokenizer "imports" all the ugliness of Unicode and byte encodings, it inherits a lot of historical baggage and security/jailbreak risk (e.g. continuation bytes). It makes two characters that look identical to the eye into two completely different tokens internally in the network. A smiling emoji looks like a weird token, not an... actual smiling face, pixels and all, with all the transfer learning that brings along. The tokenizer must go.

OCR is just one of many useful vision -> text tasks. And text -> text tasks can be made to be vision -> text tasks. Not vice versa. So maybe the User message is images, but the decoder (the Assistant response) remains text. It's a lot less obvious how to output pixels realistically... or if you'd want to.

Now I have to also fight the urge to side-quest an image-input-only version of nanochat...
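(Aside: a minimal sketch of two of these points, assuming NumPy, Pillow, and tiktoken are available. The encoding name, font, patch size, and helper functions are illustrative choices, not anything from the paper or the tweet.)

```python
# Sketch: (1) visually identical strings tokenize differently,
# (2) rendering text to pixels and patchifying it, ViT-style.
import numpy as np
import tiktoken
from PIL import Image, ImageDraw

# (1) Homoglyphs: two strings that look alike in many fonts
# get disjoint token ids from a text tokenizer.
enc = tiktoken.get_encoding("cl100k_base")  # illustrative encoding choice
latin = "ABC"                    # Latin capital A, B, C
cyrillic = "\u0410\u0412\u0421"  # Cyrillic А, В, С (visually similar)
print(enc.encode(latin))         # one set of token ids...
print(enc.encode(cyrillic))      # ...a completely different set

# (2) Rendered to pixels, the input is just an image either way.
def render(text: str, width: int = 256, height: int = 32) -> np.ndarray:
    """Draw text on a white grayscale canvas; return floats in [0, 1]."""
    img = Image.new("L", (width, height), color=255)
    ImageDraw.Draw(img).text((4, 8), text, fill=0)  # default bitmap font
    return np.asarray(img, dtype=np.float32) / 255.0

def to_patches(pixels: np.ndarray, patch: int = 16) -> np.ndarray:
    """Split an HxW image into flattened patch vectors ("image tokens")."""
    h, w = pixels.shape
    rows, cols = h // patch, w // patch
    crop = pixels[: rows * patch, : cols * patch]
    return (crop.reshape(rows, patch, cols, patch)
                .transpose(0, 2, 1, 3)
                .reshape(rows * cols, patch * patch))

patches = to_patches(render(latin))
print(patches.shape)  # (32, 256): 32 patch "tokens" of 256 pixels each
```

The homoglyph pair shows the tokenizer splitting look-alike strings into unrelated ids, while the rendered version hands the model the same pixels either way; the patch step is just the standard ViT-style framing of an image as a sequence of "tokens".)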
It would be a contest between a civilization that builds monuments in stone and one that builds monuments in music.
Another thing that comes to mind is Shelley's Ozymandias: colossal relics buried in the sand, and yet those images reach us not through the poet's own eyes but through a traveler's oral account.