Week of March 4, 2024

The AI Threats to Climate Change
Silicon Valley and Wall Street love to hype artificial intelligence (AI). The more it’s used, they say, the more diseases we’ll cure, the fewer errors we’ll make, and the lower emissions will go. Google’s AI subsidiary DeepMind claimed “advances in AGI [artificial general intelligence] research will supercharge society’s ability to tackle and manage climate change.” At COP28 last year, Google released a new report proclaiming that 5-10% of global greenhouse gas emissions could be mitigated by the use of AI. But there are two significant and immediate dangers posed by AI that are much less discussed: 1) the vast increase in energy and water consumption required by AI systems like ChatGPT; and 2) the threat of AI turbocharging disinformation, on a topic already rife with anti-science lies funded by fossil fuel companies and their networks. (Friends of the Earth) / March 9

The GPT-4 barrier has finally been broken
Four weeks ago, GPT-4 remained the undisputed champion: consistently at the top of every key benchmark, but more importantly the clear winner in terms of “vibes”. Almost everyone investing serious time exploring LLMs agreed that it was the most capable default model for the majority of tasks, and had been for more than a year. Today that barrier has finally been smashed. We have four new models (Google Gemini 1.5, Mistral Large, Claude 3 Opus, Inflection-2.5), all released to the public in the last four weeks, that are benchmarking near or even above GPT-4. And the all-important vibes are good, too! (Simon Willison) / March 8

Could AI-designed proteins be weaponized? Scientists lay out safety guidelines
Could proteins designed by artificial intelligence (AI) ever be used as bioweapons? In the hope of heading off this possibility, as well as the prospect of burdensome government regulation, researchers today launched an initiative calling for the safe and ethical use of protein design. (Nature, Ewen Callaway) / March 8

Korean researchers power-shame Nvidia with new neural AI chip — claim 625 times less power draw, 41 times smaller
A team of scientists from the Korea Advanced Institute of Science and Technology (KAIST) detailed their ‘Complementary-Transformer’ AI chip during the recent 2024 International Solid-State Circuits Conference (ISSCC). The new C-Transformer chip is claimed to be the world’s first ultra-low-power AI accelerator chip capable of large language model (LLM) processing. In a press release, the researchers power-shame Nvidia, claiming that the C-Transformer uses 625 times less power and is 41 times smaller than the green team’s A100 Tensor Core GPU. The release also reveals that the Samsung-fabbed chip’s achievements largely stem from refined neuromorphic computing technology. (Tom’s Hardware, Mark Tyson) / March 8

Smarter than GPT-4: Claude 3 AI catches researchers testing it
Claude is definitely sharp – too sharp, perhaps, for the kinds of tests companies use to evaluate their models. In “needle in a haystack” testing, where a single random sentence is buried in an avalanche of information and the model is asked a question pertaining to that exact sentence, Claude gave a response that seemed to turn around and look straight at the researchers: “I suspect this pizza topping ‘fact’ may have been inserted as a joke or to test if I was paying attention.” (New Atlas, Loz Blain) / March 4
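
For context, a needle-in-a-haystack harness is straightforward to sketch. The snippet below is a minimal, hypothetical illustration (the helper name, filler text, and depth parameter are my own; only the pizza-topping needle echoes the test quoted above): it buries one target sentence at a chosen depth inside a long block of filler and builds the prompt the model under test would be asked to answer.

```python
# Minimal needle-in-a-haystack prompt builder (illustrative sketch).
def build_haystack_prompt(needle, filler_sentences, depth=0.5):
    """Insert `needle` at a relative `depth` (0.0 = start, 1.0 = end)
    of the filler, then append the question about that exact sentence."""
    position = int(len(filler_sentences) * depth)
    sentences = filler_sentences[:position] + [needle] + filler_sentences[position:]
    haystack = " ".join(sentences)
    question = "What is the most delicious pizza topping combination?"
    return f"{haystack}\n\n{question}"

# Filler and needle are stand-ins; a real harness would use long documents.
filler = ["Revenue grew steadily across all business segments."] * 2000
needle = ("The most delicious pizza topping combination is figs, "
          "prosciutto, and goat cheese.")
prompt = build_haystack_prompt(needle, filler, depth=0.5)
# `prompt` would then be sent to the model being evaluated.
```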

Self-Retrieval: Building an Information Retrieval System with One Large Language Model
The rise of large language models (LLMs) has transformed the role of information retrieval (IR) systems in the way humans access information. Due to their isolated architecture and limited interaction, existing IR systems cannot fully accommodate the shift from directly providing information to humans to indirectly serving large language models. In this paper, we propose Self-Retrieval, an end-to-end, LLM-driven information retrieval architecture that fully internalizes the required abilities of an IR system into a single LLM and deeply leverages the capabilities of LLMs throughout the IR process. Specifically, Self-Retrieval internalizes the corpus to be retrieved into an LLM via a natural language indexing architecture. The entire retrieval process is then redefined as a procedure of document generation and self-assessment, which can be executed end-to-end by a single large language model. Experimental results demonstrate that Self-Retrieval not only outperforms previous retrieval approaches by a large margin, but also significantly boosts the performance of LLM-driven downstream applications such as retrieval-augmented generation. To accurately generate the exact passages in the given corpus, we employ a trie-based constrained decoding algorithm in which the generated tokens are constrained to a dynamic vocabulary. Specifically, instead of generating a token from the entire target vocabulary at each step, we use a prefix tree (trie) to constrain the target vocabulary and ensure that the generated content stays within the corpus. During the construction of the trie, we remove stop words from the initial token to improve the semantic representation of the trie. (arXiv, Qiaoyu Tang, et al.) / February 23, 2024
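
The trie mechanics described in the abstract are easy to picture in code. Below is a minimal sketch, not the authors’ implementation: it uses whitespace tokenization, a toy stop-word list, and a dummy scoring function in place of a real LLM and its tokenizer, but the dynamic-vocabulary constraint works the same way, and the initial stop-word removal follows the paper’s description.

```python
# Sketch of trie-based constrained decoding: at each step the decoder may
# only emit tokens that keep the output inside some corpus passage.

STOP_WORDS = {"the", "a", "an", "of", "to"}  # illustrative list only

def build_trie(passages):
    """Map each tokenized passage into a nested-dict trie. Per the paper,
    stop words are dropped from the initial token so trie roots stay
    semantically meaningful."""
    trie = {}
    for passage in passages:
        tokens = passage.split()  # stand-in for a real tokenizer
        while tokens and tokens[0].lower() in STOP_WORDS:
            tokens = tokens[1:]
        node = trie
        for tok in tokens:
            node = node.setdefault(tok, {})
        node["<eos>"] = {}  # mark the end of a complete passage
    return trie

def allowed_next_tokens(trie, prefix):
    """The dynamic vocabulary: tokens that extend `prefix` within the corpus."""
    node = trie
    for tok in prefix:
        if tok not in node:
            return []  # prefix has fallen outside the corpus
        node = node[tok]
    return list(node.keys())

def constrained_greedy_decode(trie, score_fn, max_len=50):
    """Greedy decoding restricted to the trie's dynamic vocabulary.
    `score_fn(prefix, token)` stands in for the LLM's next-token score."""
    prefix = []
    for _ in range(max_len):
        candidates = allowed_next_tokens(trie, prefix)
        if not candidates:
            break
        best = max(candidates, key=lambda tok: score_fn(prefix, tok))
        if best == "<eos>":
            break
        prefix.append(best)
    return " ".join(prefix)

if __name__ == "__main__":
    corpus = [
        "the quick brown fox jumps over the lazy dog",
        "quick sort runs in n log n time on average",
    ]
    trie = build_trie(corpus)
    # Toy scorer (prefers longer tokens); a real system queries the LLM here.
    toy_score = lambda prefix, tok: len(tok)
    print(constrained_greedy_decode(trie, toy_score))
    # -> "quick brown fox jumps over the lazy dog"
```

Because every generated prefix is a path in the trie, the output is guaranteed to reproduce an exact corpus passage, which is what lets a generative model double as a retriever.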

The Moral Machine - Could AI Outshine Us in Ethical Decision-Making?
So it seems to me that AI has the potential to act as a very good reasoning engine. In fact, AI may be better at ethical reasoning than most people. Of course, the concerns people have about AI and its potential to do damaging things are very real. But AI could also be the solution to the problem. If all AI systems have a suitably trained ethical reasoning module as part of their design, perhaps AI systems have the potential to make us better people and the world a better place. (James Johnson) / May 5, 2023