Week of March 13, 2023

The genie escapes: Stanford copies the ChatGPT AI for less than $600
Stanford’s Alpaca AI performs similarly to the astonishing ChatGPT on many tasks – but it’s built on an open-source language model and cost less than US$600 to train up. It seems these godlike AIs are already frighteningly cheap and easy to replicate. (NEW ATLAS, Loz Blain) / March 19

This Affordable Device Will Let Anyone Connect Their Brain to a Computer
The PiEEG is a low-cost, high-precision, and easy-to-maintain device, built around a Raspberry Pi, that aims to let people control robots and computers with their minds. It was created by Ildar Rakhmatulin, a researcher at Imperial College London. (MOTHERBOARD / Tech by VICE, Hannah Docter-Loeb) / March 17

GPT-4: A Copilot for the Mind
On their own, large language models (LLMs) are, to a significant extent, Babel-like: their latent space can output every possible combination of words. They are capable of creating genius-level sentences, and also false gibberish. At this point in the technology’s lifecycle, the quality of the results is far higher when they’re grounded in a knowledge base the AI can reference when responding to your prompts. That’s why, if you’ve spent the hours to carefully curate your own personal library (whether that’s books, articles, videos, or movies), all of that time will significantly improve your copilot experience. Over the next year or two, I expect GPT-4 and its successors to become a copilot for the mind: a digital research assistant that will bring to bear the sum total of everything you’ve read, everything you’ve thought, and everything you’ve forgotten every time you touch a keyboard. (Every, Dan Shipper) / March 17
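
The grounding workflow Shipper describes (retrieve relevant notes from your own library, then hand them to the model alongside the question) can be sketched roughly as follows. This is a toy illustration, not his implementation: the keyword-overlap retriever stands in for a real embedding search, and the assembled prompt would be sent to whatever model API you use.

```python
# Toy sketch of grounding a prompt in a personal library:
# retrieve the most relevant notes, then prepend them to the question.

def score(query, note):
    """Count how many query words appear in a note (case-insensitive)."""
    return len(set(query.lower().split()) & set(note.lower().split()))

def retrieve(query, library, k=2):
    """Return the k notes most relevant to the query."""
    return sorted(library, key=lambda note: score(query, note), reverse=True)[:k]

def build_grounded_prompt(query, library):
    """Assemble a prompt that cites the user's own notes as context."""
    context = "\n".join(f"- {note}" for note in retrieve(query, library))
    return f"Context from my notes:\n{context}\n\nQuestion: {query}"

library = [
    "Borges imagined a library containing every possible book.",
    "LLM latent space can emit both genius sentences and false gibberish.",
    "Grocery list: eggs, milk, coffee.",
]
prompt = build_grounded_prompt("What did Borges imagine about the library?", library)
print(prompt)
```

A production version would swap `score` for vector similarity over embeddings, but the pipeline shape (retrieve, then prompt) is the same.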

ViperGPT: Visual Inference via Python Execution for Reasoning
ViperGPT uses a code-generation model (GPT-3 Codex) to compose vision-and-language models, generating Python code that is executed to compute the answer to any query about an image. (Columbia University, Sachit Menon, Dídac Surís, and Carl Vondrick) / March 17
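
The core idea can be illustrated with a toy example: the code model turns a visual query into a short Python program that composes vision "modules." The module names and the scene representation below are invented for illustration; the real system exposes an API over pretrained detection and VQA models operating on pixels.

```python
# Illustrative sketch of the ViperGPT idea: a visual query becomes a
# generated Python program that composes vision modules. The modules
# here are stubs over a hand-written scene description.

scene = [
    {"name": "mug", "color": "red", "x": 10},
    {"name": "mug", "color": "blue", "x": 40},
    {"name": "book", "color": "green", "x": 70},
]

def find(objects, name):
    """Stub object detector: return all objects of a given category."""
    return [o for o in objects if o["name"] == name]

def left_of(a, b):
    """Stub spatial relation: is object a to the left of object b?"""
    return a["x"] < b["x"]

# The kind of program a code model might generate for the query
# "Is the red mug to the left of the book?":
def query_program(objects):
    red_mug = [m for m in find(objects, "mug") if m["color"] == "red"][0]
    book = find(objects, "book")[0]
    return left_of(red_mug, book)

print(query_program(scene))  # True: red mug at x=10, book at x=70
```

Because the reasoning lives in ordinary executable code, each intermediate step can be inspected, which is the interpretability advantage the authors emphasize.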

Can GPT-4 Actually Write Code?
Given a description of an algorithm, or of a well-known problem with plenty of existing examples on the web, yes, GPT-4 can absolutely write code. However, it fumbles when trying to solve actual problems: the kind of novel, previously unsolved problems you may encounter while programming. Moreover, it loves to “guess,” and those guesses can waste a lot of time if they send you down the wrong path toward a solution. (Tyler’s Substack, Tyler Glaiel) / March 16

The Unpredictable Abilities Emerging From Large AI Models
Large language models like ChatGPT are now big enough that they’ve started to display startling, unpredictable behaviors. However, beware of the paradox tied directly to emergence: as models improve their performance when scaling up, they may also increase the likelihood of unpredictable phenomena, including those that could potentially lead to bias or harm. (Quanta Magazine, Stephen Ornes) / date

The stupidity of AI
The lesson of the current wave of “artificial” “intelligence” is that intelligence is a poor thing when it is imagined by corporations. If your view of the world is one in which profit maximisation is the king of virtues, and all things shall be held to the standard of shareholder value, then of course your artistic, imaginative, aesthetic and emotional expressions will be woefully impoverished. (The Guardian, James Bridle) / March 16

The LLM Problem
Are LLMs dangerous distractions, or are they a glowing harbinger of a bright future? Just now I wouldn’t bet my career on this stuff, nor would I ignore it. It’s really, really OK to say “I don’t know.” (Tim Bray) / March 14

How Siri, Alexa and Google Assistant Lost the A.I. Race
The virtual assistants had more than a decade to become indispensable. But they were hampered by clunky design and miscalculations, leaving room for chatbots to rise. (The New York Times, Brian X. Chen, Nico Grant, and Karen Weise) / March 14

GPT-4 launched
The latest milestone in OpenAI’s effort to scale up deep learning, GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. (OpenAI) / March 14

Google-backed Anthropic launches Claude, an AI chatbot that’s easier to talk to
It can provide summaries, answer questions, assist with writing, and generate code. You can also tweak the chatbot’s tone, personality, and behavior, which sounds a bit more comprehensive than the “creative, balanced, and precise” settings Bing’s chatbot offers. (The Verge, Emma Roth) / March 14

PaLM API & MakerSuite: an approachable way to start prototyping and building generative AI applications
A new developer offering for easily, safely, and quickly experimenting and prototyping with Google’s large language models. (Google, Scott Huffman and Josh Woodward) / March 14

The Role of AI in Accelerating Skill Development
In this post, I share my recent experience of interacting with ChatGPT while exploring the impact of permanently closing the United States stock exchanges. Although I have no formal economics background, I was able to work from this initial question to learning how to pose it in the context of macroeconomics. This included exploring a hypothetical world where the stock market was closed, uncovering the related assumptions and potential impacts, and generating the program code required to do some of the math. (Saul Costa) / March 13

Denied by AI: How Medicare Advantage plans use algorithms to cut off care for seniors in need
Health insurance companies have rejected medical claims for as long as they’ve been around. But a STAT investigation found artificial intelligence is now driving their denials to new heights in Medicare Advantage, the taxpayer-funded alternative to traditional Medicare that covers more than 31 million people. Behind the scenes, insurers are using unregulated predictive algorithms, under the guise of scientific rigor, to pinpoint the precise moment when they can plausibly cut off payment for an older patient’s treatment. (STAT News, Casey Ross and Bob Herman) / March 13

Alpaca: A Strong Open-Source Instruction-Following Model
Announcing Stanford Alpaca 7B, a large language model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. Alpaca behaves similarly to OpenAI’s text-davinci-003, while being surprisingly small, easy, and inexpensive to reproduce. (Stanford University, Rohan Taori, et al.) / March 13
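
The recipe hinges on turning each of the 52K {instruction, input, output} demonstrations into a single supervised fine-tuning string. The sketch below paraphrases the prompt template published in the Alpaca repository; consult the project page for the exact wording.

```python
# Sketch of rendering one instruction-following demonstration into a
# fine-tuning string, in the style of Alpaca's published template
# (wording approximated here).

def format_example(example):
    """Render an {instruction, input, output} dict as a training string."""
    if example.get("input"):
        prompt = (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            "### Response:\n"
        )
    else:
        prompt = (
            "Below is an instruction that describes a task. Write a "
            "response that appropriately completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            "### Response:\n"
        )
    return prompt + example["output"]

demo = {
    "instruction": "Classify the sentiment of the sentence.",
    "input": "I loved this movie.",
    "output": "Positive",
}
print(format_example(demo))
```

During fine-tuning, the loss is typically computed only on the response portion, so the model learns to complete the template rather than parrot the instruction.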

Discord Revises Its Privacy Policy After Backlash Over AI
Discord restored privacy promises in its published policy after being caught quietly removing them as it announced AI integrations. (Gizmodo, Thomas Germain) / March 13

ChatGPT could power voice assistants in General Motors vehicles
The automaker is reportedly using Microsoft’s Azure cloud service and OpenAI’s tech to develop a new virtual vehicle assistant providing drivers with information about their vehicle’s features, such as what action to take when a diagnostic light appears on the dashboard or how to change a flat tire. (The Verge, Jess Weatherbed) / March 13

The Depth of the AI Plagiarism Problem
AI significantly stretches the gap between detecting plagiarism and proving it, making it especially problematic in higher education. (AutomatED, Graham Clay) / March 13
