Week of October 9, 2023

“Hallucinating” AIs Sound Creative, but Let’s Not Celebrate Being Wrong
The term “hallucination,” which has been widely adopted to describe large language models (LLMs) outputting false information, is misleading, and applying it to creativity risks compounding the problem. When people say GPT is hallucinating, they are referring to the model’s mangling of facts. But the idea of hallucination implies that at other times the facts have been accurately portrayed. Unfortunately, this promotes a misunderstanding of how LLMs work, and misunderstanding how a technology works can make the difference between its being safe and its being dangerous. It might be better to say that everything GPT does is a hallucination, since a state of non-hallucination, of checking the validity of something against some external perception, is absent from these models. There is no right or wrong answer in their world, no meaning relating to goals. That’s because LLMs are not models of brains but of language itself: its patterns, structures, and probabilities. At heart their job description is incredibly simple: given some text, they tell us what text comes next. It’s worth keeping front and center, however, that there is not always one right response. If I say “the tail that wags the …”, you might say the next word is “dog” with a high degree of certainty, but this is not the one right answer. In any such context, there is much freedom, and the “rightness” of any answer depends not only on the conceptual context but on what you’re trying to do — your goal. (The MIT Press Reader, Oliver Brown) / October 13
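The “job description” the article gives — given some text, predict what comes next — can be sketched with a toy bigram model. This is a hypothetical miniature for illustration only (real LLMs use neural networks over token sequences, not word counts), but it shows the key point: the model produces a probability distribution over continuations, not one “right” answer.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; any word list would do.
corpus = (
    "the tail that wags the dog . "
    "the tail that wags the conversation . "
    "the dog that wags the tail . "
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_distribution(word):
    """Estimate P(next word | current word) from the toy corpus."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# Several continuations get nonzero probability; the model ranks them,
# but nothing in it marks any single continuation as "correct".
```

Sampling from such a distribution, rather than always taking the top word, is exactly where the freedom the article describes enters: which continuation is “right” depends on the goal, not on the model.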

Text Embeddings Reveal (Almost) As Much As Text
How much private information do text embeddings reveal about the original text? We investigate the problem of embedding inversion, reconstructing the full text represented in dense text embeddings. We frame the problem as controlled generation: generating text that, when re-embedded, is close to a fixed point in latent space. We find that although a naïve model conditioned on the embedding performs poorly, a multi-step method that iteratively corrects and re-embeds text is able to recover 92% of 32-token text inputs exactly. We train our model to decode text embeddings from two state-of-the-art embedding models, and also show that our model can recover important personal information (full names) from a dataset of clinical notes. (arXiv, John X. Morris, Volodymyr Kuleshov, Vitaly Shmatikov, Alexander M. Rush) / October 10
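The iterative correct-and-re-embed loop in the abstract can be sketched with a toy stand-in. The authors train a model to propose corrections against real dense encoders; here, as an assumption-laden illustration, a random hill-climber proposes single-character edits and the “embedding” is just a character-count vector. The `secret`, `embed`, and `invert` names are hypothetical, not from the paper.

```python
import random
import string

ALPHABET = string.ascii_lowercase + " "

def embed(text: str) -> list[int]:
    """Toy embedding: per-character counts (order-invariant)."""
    return [text.count(ch) for ch in ALPHABET]

def distance(a: list[int], b: list[int]) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def invert(target_emb: list[int], length: int, steps: int = 50_000):
    """Hill-climb toward text whose re-embedding matches target_emb."""
    rng = random.Random(0)
    guess = [rng.choice(ALPHABET) for _ in range(length)]
    best = distance(embed("".join(guess)), target_emb)
    for _ in range(steps):
        i = rng.randrange(length)
        old, guess[i] = guess[i], rng.choice(ALPHABET)  # propose a correction
        d = distance(embed("".join(guess)), target_emb)
        if d < best:
            best = d        # the correction moved us closer: keep it
        else:
            guess[i] = old  # otherwise revert and try again
    return "".join(guess), best

secret = "patient name john doe"  # the "private" text the attacker never sees
recovered, err = invert(embed(secret), len(secret))
print(recovered, round(err, 3))
# This toy embedding ignores order, so only the multiset of characters is
# recovered, not their sequence -- but it shows the attack shape: the only
# signal needed is distance-to-target in embedding space.
```

The paper’s result is far stronger because a trained corrector exploits the rich structure of real neural embeddings, recovering 92% of 32-token inputs exactly; the loop structure, though, is the same.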
