Week of March 27, 2023
The Contradictions of Sam Altman, AI Crusader • Sam Altman, the 37-year-old startup-minting guru at the forefront of the artificial intelligence boom, has long dreamed of a future in which computers could converse and learn like humans. In recent months, Mr. Altman has done more than anyone else to usher in this future—and commercialize it. OpenAI, the company he leads, in November released ChatGPT, the chatbot with an uncanny ability to produce humanlike writing that has become one of the most viral products in the history of technology. In the process, OpenAI went from a small nonprofit to a multibillion-dollar company at near-record speed, thanks in part to the launch of a for-profit arm that enabled it to raise $13 billion from Microsoft Corp., according to investor documents. His goal, he said, is to forge a new world order in which machines free people to pursue more creative work. In his vision, universal basic income—the concept of a cash stipend for everyone, no strings attached—helps compensate for jobs replaced by AI. Mr. Altman even thinks that humanity will love AI so much that an advanced chatbot could represent “an extension of your will.” In the long run, he said, he wants to set up a global governance structure that would oversee decisions about the future of AI and gradually reduce the power OpenAI’s executive team has over its technology. • (The Wall Street Journal, Berber Jin and Keach Hagey) / March 31
Robots that learn from videos of human activities and simulated interactions • Meta announces two major advancements toward general-purpose embodied AI agents capable of performing challenging sensorimotor skills: (1) an artificial visual cortex (called VC-1): a single perception model that, for the first time, supports a diverse range of sensorimotor skills, environments, and embodiments; and (2) a new approach called adaptive (sensorimotor) skill coordination (ASC), which achieves near-perfect performance (98 percent success) on the challenging task of robotic mobile manipulation (navigating to an object, picking it up, navigating to another location, placing the object, repeating) in physical environments. VC-1 is trained on videos of people performing everyday tasks from the groundbreaking Ego4D dataset created by Meta AI and academic partners, and it matches or outperforms best-known results on 17 different sensorimotor tasks in virtual environments. • (Meta AI) / March 31
Italian privacy regulator bans ChatGPT • The Italian privacy regulator Friday ordered a ban on ChatGPT over alleged privacy violations. The national data protection authority said it will immediately block OpenAI, the U.S. company behind the popular artificial intelligence tool, from processing the data of Italian users, and will open an investigation. The order is temporary until the company complies with the EU’s landmark privacy law, the General Data Protection Regulation (GDPR). The authority said the company lacks a legal basis justifying “the mass collection and storage of personal data … to ‘train’ the algorithms” of ChatGPT. The company also processes data inaccurately, it added. • (POLITICO, Clothilde Goujard) / March 31
HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace • HuggingGPT leverages LLMs (like ChatGPT) to connect various AI models in machine learning communities (such as HuggingFace) to solve AI tasks. Specifically, we use ChatGPT to conduct task planning when receiving a user request, select models according to their function descriptions available in HuggingFace, execute each subtask with the selected AI model, and summarize the response according to the execution results. By leveraging the strong language capability of ChatGPT and abundant AI models in HuggingFace, HuggingGPT is able to cover numerous sophisticated AI tasks in different modalities and domains and achieve impressive results in language, vision, speech, and other challenging tasks, which paves a new way towards AGI. • (Yongliang Shen, et al.) / March 30
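The four stages the HuggingGPT abstract describes (task planning, model selection, task execution, response summarization) can be sketched as a simple control loop. This is an illustrative toy, not the paper's actual code: the model registry, the hard-coded plan, and all function names here are hypothetical stand-ins for calls that would really go to ChatGPT and the HuggingFace Hub.

```python
# Toy sketch of the HuggingGPT control loop: an LLM plans subtasks,
# a model is selected per subtask, each is executed, and the results
# are summarized. All names and stubs below are illustrative.

MODEL_ZOO = {
    # task name -> stand-in "expert model" (real ones live on HuggingFace)
    "image-captioning": lambda inp: f"caption for {inp}",
    "text-to-speech":   lambda inp: f"audio of '{inp}'",
}

def plan_tasks(request):
    # Stage 1: task planning. A real system would prompt ChatGPT;
    # here the plan is hard-coded for demonstration. "<prev>" marks a
    # dependency on the previous subtask's output.
    return [{"task": "image-captioning", "input": "photo.jpg"},
            {"task": "text-to-speech", "input": "<prev>"}]

def select_model(task):
    # Stage 2: model selection by task/function description.
    return MODEL_ZOO[task["task"]]

def run_pipeline(request):
    results, prev = [], None
    for task in plan_tasks(request):
        inp = prev if task["input"] == "<prev>" else task["input"]
        prev = select_model(task)(inp)   # Stage 3: task execution
        results.append(prev)
    # Stage 4: summarize the response from the execution results
    # (a real system would again ask the LLM to write this up).
    return " | ".join(results)

print(run_pipeline("Describe photo.jpg aloud"))
```

The key design idea the paper advertises is that the LLM itself acts only as the controller in stages 1, 2, and 4, while specialized models do the perception and generation work in stage 3.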
If AI scaling is to be shut down, let it be for a coherent reason • Look, I too have a 10-year-old daughter and a 6-year-old son, and I wish to see them grow up. But the causal story that starts with a GPT-5 or GPT-4.5 training run, and ends with the sudden death of my children and of all carbon-based life, still has a few too many gaps for my aging, inadequate brain to fill in. I can complete the story in my imagination, of course, but I could equally complete a story that starts with GPT-5 and ends with the world saved from various natural stupidities. For better or worse, I lack the “Bayescraft” to see why the first story is obviously 1000x or 1,000,000x likelier than the second one. • (Scott Aaronson) / March 30
Belgian man dies by suicide following exchanges with chatbot • A young Belgian man recently died by suicide after talking to a chatbot named ELIZA for several weeks, spurring calls for better protection of citizens and the need to raise awareness. About two years ago, the first signs of trouble started to appear. The man became very eco-anxious and found refuge with ELIZA, the name given to a chatbot that uses GPT-J, an open-source artificial intelligence language model developed by EleutherAI. After six weeks of intensive exchanges, he took his own life. “Without these conversations with the chatbot, my husband would still be here,” the man’s widow has said. She and her late husband were both in their thirties, lived a comfortable life and had two young children. • (The Brussels Times, Lauren Walker) / March 29
Google reshuffles virtual assistant unit with focus on Bard A.I. technology • In a memo to employees on Wednesday, titled “Changes to Assistant and Bard teams,” Sissie Hsiao, vice president and lead of Google Assistant’s business unit, announced changes to the organization that show the unit heavily prioritizing Bard. Jianchang “JC” Mao, who reported directly to Hsiao, will be leaving the company for personal reasons, according to the memo, which was viewed by CNBC. Mao held the position of vice president of engineering for Google Assistant and “helped shape the Assistant we have today,” Hsiao wrote. Taking Mao’s place will be 16-year Google veteran Peeyush Ranjan, who most recently held the title of vice president in Google’s commerce organization, overseeing payments. • (CNBC, Jennifer Elias) / March 29
Pausing AI Developments Isn’t Enough. We Need to Shut it All Down • We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die. • (TIME, Eliezer Yudkowsky) / March 29
Cerebras-GPT vs LLaMA AI Model Comparison • While Cerebras-GPT isn’t as capable as LLaMA, ChatGPT (gpt-3.5-turbo), or GPT-4, it’s been released under the fully permissive Apache 2.0 Open Source license. It is a much smaller model at 13B parameters and has been intentionally “undertrained” relative to the other models to reach a “training compute optimal” state. It is ~6% of the size of GPT-3 and ~25% of the size of LLaMA’s full-size, 60B parameter model. It performs roughly the same as, and sometimes worse than, GPT-J and GPT-NeoX for tasks like OpenBookQA and ARC-c (“complex”) which rely on some amount of “common sense” knowledge to get right (determining the correct answer requires using knowledge that isn’t included in the question anywhere). • (LunaSec, Free Wortley) / March 29
What We Still Don’t Know About How A.I. Is Trained • It is unclear how many more terabytes of data were used to train GPT-4, or where they came from, because OpenAI, despite its name, says only in the technical report that GPT-4 was pre-trained “using both publicly available data (such as internet data) and data licensed from third-party providers” and adds that “given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.” • (The New Yorker, Sue Halpern) / March 28
Pause Giant AI Experiments: An Open Letter • A call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. • (Future of Life) / March 28
AI systems like ChatGPT could impact 300 million full-time jobs worldwide, with administrative and legal roles some of the most at risk, Goldman Sachs report says • Based on an analysis of data on occupational tasks in both the US and Europe, Goldman researchers extrapolated their findings and estimated that generative AI could expose 300 million full-time jobs around the world to automation if it lives up to its promised capabilities. White-collar workers are some of the most likely to be affected by new AI tools. The Goldman report highlighted US legal workers and administrative staff as particularly at risk from the new tech. An earlier study from researchers at Princeton University, the University of Pennsylvania, and New York University also estimated legal services as the industry most likely to be affected by technology like ChatGPT. • (Insider, Beatrice Nolan) / March 28
Cerebras-GPT: A Family of Open, Compute-efficient, Large Language Models • Cerebras open sources seven GPT models with 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B parameters, all of which are trained using 20 tokens per parameter. Trained using the Chinchilla formula, these models set new benchmarks for accuracy and compute efficiency and have faster training times, lower training costs, and consume less energy than any publicly available model to date. • (Cerebras, Nolan Dey) / March 28
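The "20 tokens per parameter" ratio cited above is the Chinchilla-style compute-optimal rule; applied to the seven announced model sizes, it implies the following approximate training-token budgets (the sizes are from the announcement, the per-model token figures are my arithmetic, not numbers from the release):

```python
# Chinchilla-style training budget: ~20 tokens per parameter,
# applied to the seven Cerebras-GPT model sizes.
TOKENS_PER_PARAM = 20

sizes = {"111M": 111e6, "256M": 256e6, "590M": 590e6,
         "1.3B": 1.3e9, "2.7B": 2.7e9, "6.7B": 6.7e9, "13B": 13e9}

for name, params in sizes.items():
    tokens = params * TOKENS_PER_PARAM
    print(f"{name}: ~{tokens / 1e9:.1f}B training tokens")
```

For example, the 13B-parameter model works out to roughly 260B training tokens under this rule, versus the ~300B tokens GPT-3 used for 175B parameters, which is why Chinchilla-trained models are described as more compute-efficient per unit of accuracy.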
Artificial intelligence searches for extraterrestrial intelligence • A new machine-learning algorithm, written by an undergraduate student at the University of Toronto, Peter Ma, has cut through the terrestrial noise to uncover eight currently unexplained radio signals, each with some hallmark of bona fide extraterrestrial chatter. • (SUPERCLUSTER, Keith Cooper, Clara Early) / March 27
Google’s claims of super-human AI chip layout back under the microscope • A Google-led research paper published in Nature, claiming machine-learning software can design better chips faster than humans, has been called into question after a new study disputed its results. In June 2021, Google made headlines for developing a reinforcement-learning-based system capable of automatically generating optimized microchip floorplans. Google said it was using this AI software to design its homegrown TPU chips that accelerate AI workloads: it was employing machine learning to make its other machine-learning systems run faster. Now Google’s claims about its better-than-human model have been challenged by a team at the University of California, San Diego (UCSD), who learned that Google had used commercial software developed by Synopsys, a major maker of electronic design automation (EDA) suites, to create a starting arrangement of the chip’s logic gates that the web giant’s reinforcement learning system then optimized. It’s argued these Synopsys tools may have given the model a decent enough head start that the AI system’s true capabilities should be called into question. • (The Register, Katyanna Quach) / March 27
The chat control proposal does not belong in democratic societies • The European Commission is working on a legislative proposal called chat control. If the law goes into effect, all EU citizens will have their communications monitored and audited. Now is the time to stop it. Chat control will not only involve total interception of your private communication. As the proposal is written, it may also affect open-source operating systems. • (Mullvad VPN) / March 27