Week of May 8, 2023

The Alan Turing Institute has failed to develop modern AI in the UK
The UK’s flagship institute for artificial intelligence, the Alan Turing Institute, has been at best irrelevant to the development of modern AI in the UK. Along with the AI Council, which advises the government on AI, the Turing has been completely blindsided by recent breakthroughs in artificial intelligence based on large language models (LLMs). The institute’s annual reports for the last four years do not refer to LLMs at all. There is no record of its website or Director mentioning them until a few months ago. It’s as if the most important development in the history of AI has completely passed it by. What have they concentrated on instead? Their most popular blog post in 2022 was “Non-fungible tokens: can we predict the price they’ll sell for?”. Their top piece of content was “Data as an instrument of coloniality: A panel discussion on digital and data colonialism”. Do any AI specialists think this work is going to push the bleeding edge of AI research in the UK?
(RSS DS+AI Section Newsletter, Martin Goodson) / May 12

Chatbots Don’t Know What Stuff Isn’t
Today’s language models are more sophisticated than ever, but they still struggle with the concept of negation. That’s unlikely to change anytime soon.
(Quanta Magazine, Max G. Levy) / May 12

Is the ability to think scientifically the defining essence of intelligence?
It may yet be possible to train a sufficiently large neural network to mimic most of what the human brain can do. The recent success of neural networks in performing human-like tasks of image captioning and essay writing indicates that the brain’s processing is perhaps not as computationally difficult as once thought. This result may itself be a scientific breakthrough. Progress such as this, however, does not negate the fact that more work needs to be done to achieve AGI. Novel algorithmic approaches will be needed to transcend the boundaries of what is accessible to pure empirical reasoning to include abstract reasoning, hypothesis testing, and counterfactual logic necessary for scientific thinking. A scarcity mindset will also be required to achieve algorithmic efficiencies that enable sustainable levels of resource consumption for future AI systems.
(ACM Queue, Edlyn V. Levine) / May 11

Introducing 100K Context Windows
We’ve expanded Claude’s context window from 9K to 100K tokens, corresponding to around 75,000 words! This means businesses can now submit hundreds of pages of materials for Claude to digest and analyze, and conversations with Claude can go on for hours or even days. The average person can read 100,000 tokens of text in ~5+ hours, and then they might need substantially longer to digest, remember, and analyze that information. Claude can now do this in less than a minute. For example, we loaded the entire text of The Great Gatsby into Claude-Instant (72K tokens) and modified one line to say Mr. Carraway was “a software engineer that works on machine learning tooling at Anthropic.” When we asked the model to spot what was different, it responded with the correct answer in 22 seconds. Beyond just reading long texts, Claude can help retrieve information from the documents that help your business run. You can drop multiple documents or even a book into the prompt and then ask Claude questions that require synthesis of knowledge across many parts of the text. For complex questions, this is likely to work substantially better than vector search based approaches. Claude can follow your instructions and return what you’re looking for, as a human assistant would!
(Anthropic) / May 11
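A quick sanity check on the “~5+ hours” reading-time claim. The tokens-to-words conversion comes from the post itself (100K tokens ≈ 75,000 words); the reading speed is an assumed average:

```python
# Back-of-envelope check of the "~5+ hours" reading-time claim.
tokens = 100_000
words = tokens * 0.75           # post's own conversion: 100K tokens ~ 75,000 words
reading_speed_wpm = 225         # assumed average adult reading speed (words/minute)
hours = words / reading_speed_wpm / 60
print(f"{hours:.1f} hours")     # ~5.6 hours, consistent with "~5+ hours"
```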

What does a leaked Google memo reveal about the future of AI?
Training a large LLM takes months and costs tens of millions of dollars. This led to concerns that AI would be dominated by a few deep-pocketed firms. But that assumption is wrong, says the Google memo. It notes that researchers in the open-source community, using free, online resources, are now achieving results comparable to the biggest proprietary models. It turns out that LLMs can be “fine-tuned” using a technique called low-rank adaptation, or LoRA. This allows an existing LLM to be optimised for a particular task far more quickly and cheaply than training an LLM from scratch.
(The Economist) / May 11
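For readers unfamiliar with the technique, here is a minimal sketch of the idea behind LoRA: the pretrained weight matrix is frozen and only a small low-rank update is trained. This is an illustrative toy in PyTorch, not a reference implementation; the rank, scaling, and layer sizes are arbitrary choices:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Low-rank factors: effective weight is W + (alpha/rank) * B @ A.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Fine-tuning now touches only A and B: for a 4096x4096 layer at rank 8,
# that is ~65K trainable parameters instead of ~16.8M.
layer = LoRALinear(nn.Linear(4096, 4096))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 65536
```

This is why adaptation is so much cheaper than training from scratch: gradients and optimiser state are needed only for the small factors, not for the full weight matrices.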

How AI Knows Things No One Told It
No one yet knows how ChatGPT and its artificial intelligence cousins will transform the world, and one reason is that no one really knows what goes on inside them. Some of these systems’ abilities go far beyond what they were trained to do—and even their inventors are baffled as to why. A growing number of tests suggest these AI systems develop internal models of the real world, much as our own brain does, though the machines’ technique is different.
(Scientific American, George Musser) / May 11

Google Launches AI Supercomputer Powered by Nvidia H100 GPUs
Google says the new A3 supercomputers are “purpose-built to train and serve the most demanding AI models that power today’s generative AI and large language model innovation” while delivering 26 exaFlops of AI performance. Each A3 supercomputer is packed with 4th generation Intel Xeon Scalable processors backed by 2TB of DDR5-4800 memory. But the real “brains” of the operation come from the eight Nvidia H100 “Hopper” GPUs, which have access to 3.6 TBps of bisectional bandwidth by leveraging NVLink 4.0 and NVSwitch. According to Google, A3 represents the first production-level deployment of its GPU-to-GPU data interface, which allows for sharing data at 200 Gbps while bypassing the host CPU. This interface, which Google calls the Infrastructure Processing Unit (IPU), results in a 10x uplift in available network bandwidth for A3 virtual machines (VMs) compared to A2 VMs.
(Tom’s Hardware, Brandon Hill) / May 10
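The 3.6 TBps figure checks out with simple arithmetic, assuming each H100’s NVLink 4.0 interface delivers its rated 900 GB/s of total bandwidth (an assumption taken from Nvidia’s published H100 specifications, not from the article):

```python
# Back-of-envelope: bisection bandwidth of 8 NVLink/NVSwitch-connected H100s.
gpus = 8
nvlink_gb_per_s = 900                  # assumed: H100 NVLink 4.0 bandwidth per GPU
aggregate = gpus * nvlink_gb_per_s     # 7,200 GB/s across all GPUs
bisection = aggregate / 2              # traffic crossing a 4-GPU / 4-GPU split
print(f"{bisection / 1000:.1f} TB/s")  # 3.6 TB/s, matching Google's figure
```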

Air Force selects AI-enabled predictive maintenance program as system of record
The Department of the Air Force has designated the Rapid Sustainment Office’s Predictive Analytics and Decision Assistant (PANDA) — an integrated artificial intelligence and machine learning tool for predictive maintenance — as a system of record. As the system of record for what the Air Force calls “Condition Based Maintenance Plus,” PANDA integrates AI and ML across a variety of aircraft maintenance data “to increase the operational reliability of our weapons systems before we project them forward when those aircraft are used in their operations,” Lt. Col. Michael Lasher, an aircraft maintenance specialist in the service’s Rapid Sustainment Office, told DefenseScoop. In its simplest terms, CBM+ is all about using data analysis to improve availability and lifecycle cost through evidence.
(DefenseScoop, Billy Mitchell) / May 10

Meta open-sources multisensory AI model that combines six types of data
The new ImageBind model combines text, audio, visual, movement, thermal, and depth data. It’s only a research project but shows how future AI models could be able to generate multisensory content.
(The Verge, James Vincent) / May 9
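The unifying idea behind models like ImageBind is a single shared embedding space: each modality gets its own encoder, but all encoders map into the same vector space, so any two modalities can be compared directly. The sketch below illustrates the concept with stand-in random-projection encoders; it is not ImageBind’s actual architecture or API:

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 1024

# Stand-in encoders: the real model uses a trained network per modality
# (e.g. a vision transformer for images); random projections show the shape.
encoders = {
    modality: rng.standard_normal((raw_dim, EMBED_DIM))
    for modality, raw_dim in [("text", 512), ("audio", 768), ("depth", 256)]
}

def embed(modality: str, features: np.ndarray) -> np.ndarray:
    """Map modality-specific features into the shared space, L2-normalized."""
    v = features @ encoders[modality]
    return v / np.linalg.norm(v)

# Because every modality lands in one space, cross-modal similarity is just
# a dot product -- e.g. matching an audio clip against a text description.
audio_vec = embed("audio", rng.standard_normal(768))
text_vec = embed("text", rng.standard_normal(512))
print(float(audio_vec @ text_vec))  # cosine similarity in the shared space
```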

Language models can explain neurons in language models
Language models have become more capable and more broadly deployed, but our understanding of how they work internally is still very limited. For example, it might be difficult to detect from their outputs whether they use biased heuristics or engage in deception. Interpretability research aims to uncover additional information by looking inside the model. We use GPT-4 to automatically write explanations for the behavior of neurons in large language models and to score those explanations. We release a dataset of these (imperfect) explanations and scores for every neuron in GPT-2.
(OpenAI) / May 9
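The scoring step works by asking a model to simulate a neuron’s activations given only the written explanation, then measuring how well the simulation tracks the real activations. A minimal sketch of that comparison, with made-up activation data and plain correlation as the metric (OpenAI’s released pipeline has its own scoring details):

```python
import numpy as np

def explanation_score(real: np.ndarray, simulated: np.ndarray) -> float:
    """Score an explanation by how well simulated activations track real ones."""
    return float(np.corrcoef(real, simulated)[0, 1])

# Made-up data: real neuron activations over a token sequence, and the
# activations a simulator predicted from the natural-language explanation.
real_acts = np.array([0.0, 0.1, 3.2, 0.0, 2.9, 0.2, 3.5, 0.0])
simulated_acts = np.array([0.1, 0.0, 2.8, 0.3, 3.1, 0.0, 3.0, 0.1])

print(f"{explanation_score(real_acts, simulated_acts):.2f}")  # ~0.99, a close fit
```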

AI’s Ostensible Emergent Abilities Are a Mirage
What it means for the future is this: We don’t need to worry about accidentally stumbling onto artificial general intelligence (AGI). Yes, AGI may still have huge consequences for human society, Schaeffer says, “but if it emerges, we should be able to see it coming.”
(Stanford HAI, Katharine Miller) / May 8
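The “mirage” argument, summarized here from the underlying paper rather than from the excerpt above, is that apparent emergence is often an artifact of discontinuous metrics: per-token performance can improve smoothly with scale while an all-or-nothing metric like exact match makes the same improvement look like a sudden jump. A toy illustration, with the smooth improvement curve invented for the example:

```python
import numpy as np

# Toy illustration: per-token accuracy p improves smoothly with model scale,
# but exact-match over an L-token answer scores p**L -- so smooth progress
# can look like sudden "emergence" under the harsher metric.
scales = np.array([1, 2, 4, 8, 16, 32], dtype=float)  # hypothetical model sizes
p = 1 - 0.5 / np.sqrt(scales)                         # assumed smooth improvement
L = 10                                                # answer length in tokens
for s, pt in zip(scales, p):
    print(f"scale {s:>4.0f}x  per-token {pt:.3f}  exact-match {pt**L:.3f}")
# Per-token accuracy climbs gently; exact-match sits near zero, then "jumps".
```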

ChatGPT Is Powered by Human Contractors Getting Paid $15 Per Hour
ChatGPT, the wildly popular AI chatbot, is powered by machine learning systems, but those systems are guided by human workers, many of whom aren’t paid particularly well. A new report from NBC News shows that OpenAI, the startup behind ChatGPT, has been paying droves of U.S. contractors to assist it with the necessary task of data labelling—the process of training ChatGPT’s software to better respond to user requests. The compensation for this pivotal task? A scintillating $15 per hour.
(Gizmodo, Lucas Ropek) / May 8
