Week of March 20, 2023

ChatGPT started a new kind of AI race — and made text boxes cool again
Who would have thought that typing into a chat window, on your computer, would be 2023’s hottest innovation? The way we’re going, the future of technology is not whiz-bang interfaces or the metaverse. It’s “typing commands into a text box on your computer.” The command line is back — it’s just a whole lot smarter now.
(The Verge, David Pierce) / March 26

Microsoft reportedly orders AI chatbot rivals to stop using Bing’s search data
Microsoft doesn’t want its rivals to use Bing’s search index to power their AI chatbots, according to a report from Bloomberg. The company reportedly told two unnamed Bing-powered search engines that it will cut off their access to Microsoft’s search data altogether if they continue using it with their AI tools. Microsoft licenses Bing’s search data to several search engines, including DuckDuckGo, Yahoo, and the AI search engine You.com.
(The Verge, Emma Roth) / March 25

Hello Dolly: Democratizing the magic of ChatGPT with open models
Dolly is a cheap-to-build LLM that exhibits a surprising degree of the instruction-following capability seen in ChatGPT. Dolly works by taking an existing two-year-old, open-source 6B-parameter model from EleutherAI and modifying it slightly, using training data from Alpaca, to elicit instruction-following capabilities such as brainstorming and text generation not present in the original model.
(Databricks, Mike Conover, et al.) / March 24
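For a sense of what that recipe looks like in practice, here is a minimal sketch of instruction-tuning GPT-J 6B on Alpaca-format data with Hugging Face transformers. The prompt template and hyperparameters are illustrative assumptions, not Databricks’ exact setup:

```python
# Sketch of Dolly-style instruction tuning: fine-tune an existing open-source
# 6B model (EleutherAI's GPT-J) on Alpaca-format instruction data.
# Template and hyperparameters are illustrative, not Databricks' exact recipe.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Alpaca provides ~52k (instruction, input, output) records.
data = load_dataset("tatsu-lab/alpaca", split="train")

def to_example(rec):
    # Render each record as one instruction-following training example.
    # (The optional 'input' field is omitted here for brevity.)
    text = ("Below is an instruction that describes a task.\n\n"
            f"### Instruction:\n{rec['instruction']}\n\n"
            f"### Response:\n{rec['output']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

tokenized = data.map(to_example, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dolly-sketch", num_train_epochs=1,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, fp16=True),
    train_dataset=tokenized,
    # Plain causal-LM objective; the model shifts labels internally.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```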

Pairing With GPT-4
GPT-4 can be helpful for beginner and senior Ruby developers alike, but it does have limitations. It won’t write all of your software for you, but it will point you in a useful direction, especially if you prefer learning by doing. Let’s look at how well GPT-4 pairing works by picking an easy but less well-known project with some edge cases: downloading a RubyGem, parsing the docs via YARD, and dumping them into a SQLite database.
(Ruby Dispatch, Brad Gessler) / March 23

New Retrieval chain abstraction in LangChain
We are adjusting our abstractions to make it easy to use retrieval methods other than the LangChain VectorDB object in LangChain. The goals are (1) allowing retrievers constructed elsewhere to be used more easily in LangChain, and (2) encouraging more experimentation with alternative retrieval methods (like hybrid search). This is backwards compatible, so all existing chains should continue to work as before. However, we recommend updating from VectorDB chains to the new Retrieval chains as soon as possible, as those will be the ones most fully supported going forward.
(LangChain) / March 23
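To make the recommended migration concrete, here is a minimal before/after sketch based on the LangChain API at the time; the documents, embeddings, and LLM are placeholders:

```python
# Sketch of migrating from the VectorDB chain to the new Retrieval chain
# (LangChain circa March 2023). Documents and models are placeholders.
from langchain.chains import RetrievalQA, VectorDBQA
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(
    ["LangChain now ships a retriever abstraction."], OpenAIEmbeddings())

# Before: the chain is hard-wired to a VectorDB object.
old_chain = VectorDBQA.from_chain_type(llm=OpenAI(), vectorstore=vectorstore)

# After: the chain accepts any retriever, so alternative retrieval methods
# (e.g., hybrid search) can be swapped in behind the same interface.
new_chain = RetrievalQA.from_chain_type(llm=OpenAI(),
                                        retriever=vectorstore.as_retriever())
print(new_chain.run("What does LangChain ship?"))
```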

ChatGPT plugins
We’ve implemented initial support for plugins in ChatGPT. Plugins are tools designed specifically for language models with safety as a core principle, and help ChatGPT access up-to-date information, run computations, or use third-party services.
(OpenAI) / March 23
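Mechanically, a plugin is just a web service that exposes a manifest at a well-known URL pointing at an OpenAPI spec describing the endpoints the model may call. A minimal Flask sketch of that wiring, based on OpenAI’s published manifest format, with placeholder values throughout:

```python
# Minimal sketch of ChatGPT plugin wiring: serve a manifest at a well-known
# URL that points to an OpenAPI spec for the API the model may call.
# All field values here are illustrative placeholders.
from flask import Flask, jsonify

app = Flask(__name__)

MANIFEST = {
    "schema_version": "v1",
    "name_for_human": "Todo Sketch",
    "name_for_model": "todo_sketch",
    "description_for_human": "Toy todo list.",
    "description_for_model": "Plugin for managing a user's todo list.",
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "http://localhost:5002/openapi.yaml"},
    "logo_url": "http://localhost:5002/logo.png",
    "contact_email": "dev@example.com",
    "legal_info_url": "http://example.com/legal",
}

TODOS = ["ship the plugin"]

@app.get("/.well-known/ai-plugin.json")
def manifest():
    # ChatGPT fetches this to learn what the plugin does and where
    # its OpenAPI description lives.
    return jsonify(MANIFEST)

@app.get("/todos")
def todos():
    # An endpoint the model can call, as described in the OpenAPI spec.
    return jsonify(TODOS)

if __name__ == "__main__":
    app.run(port=5002)
```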

Rewind’s new feature brings ChatGPT to your personal information
It’s a chatbot with access to everything you’ve ever done on your computer. That’s neat. But the real trick is making that work without compromising your privacy.
(The Verge, David Pierce) / March 23

Google Bard Plagiarized Our Article, Then Apologized When Caught
Bard not only plagiarized information, but also gave an incomplete answer. If Bard had cited our Tom’s Hardware article as its source, the reader would have had the opportunity to go read all the test results and all the insights and make a more informed decision. By plagiarizing, the bot denies its users the opportunity to get the full story while also denying experienced writers and publishers the credit — and clicks — they deserve.
(Tom’s Hardware, Avram Piltch) / March 23

A quick and sobering guide to cloning yourself
The bad news, or at least some of it, is immediately obvious: you probably shouldn’t trust any video or audio recording ever again. There are some good use cases for this as well: realistic AI-run avatars could serve as customer support agents, personal tutors, and more. Hopefully the positive uses will outweigh the negative, but our world is changing rapidly, and the consequences are likely to be huge.
(One Useful Thing, Ethan Mollick) / Feb 10

Epic’s new motion-capture animation tech has to be seen to be believed
“MetaHuman Animator” goes from iPhone video to high-fidelity 3D movement in minutes and, unlike some other machine-learning models, it “doesn’t hallucinate any details.” It could be used even by small developers to create highly convincing 3D animation without the usual time and labor investment.
(Ars Technica, Kyle Orland) / March 23

OpenAI’s policies hinder reproducible research on language models
Reproducibility—the ability to independently verify research findings—is a cornerstone of research. On Monday, March 20, OpenAI announced that it would discontinue support for Codex by Thursday, March 23, giving just three days’ notice. As a result, hundreds of academic papers would no longer be reproducible: independent researchers would not be able to assess their validity and build on their results. Using open-source models, such as BLOOM, would circumvent these issues: researchers would have access to the model itself instead of relying on tech companies. Open-sourcing LLMs is a complex question, and there are many other factors to consider before deciding whether that’s the right step. But open-source LLMs could be a key step in ensuring reproducibility.
(AI Snake Oil, Sayash Kapoor, Arvind Narayanan) / March 23

What Will Transformers Transform?
When no person is in the loop to filter, tweak, or manage the flow of information, GPTs will be completely bad. That will be good for people who want to manipulate others without revealing that the vast amount of persuasive evidence they are seeing has all been made up by a GPT. It will be bad for the people being manipulated. And it will be bad if you try to connect a robot to a GPT. GPTs have no understanding of the words they use, no way to connect those words, those symbols, to the real world. A robot needs to be connected to the real world, and its commands need to be coherent with the real world. Classically this is known as the “symbol grounding problem”; GPT+robot is only ungrounded symbols. GPTs might be useful, and well enough boxed, when there is an active person in the loop, but dangerous when the person in the loop doesn’t know they are supposed to be in the loop. [This will be the case for all young children.] That person’s intelligence, applied with strong intellect, is a key component of making any GPT successful.
(Rodney Brooks) / March 23

Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions
We propose a method for editing NeRF scenes with text instructions. Given a NeRF of a scene and the collection of images used to reconstruct it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images while optimizing the underlying scene, resulting in an optimized 3D scene that respects the edit instruction. We demonstrate that our proposed method is able to edit large-scale, real-world scenes, and is able to accomplish more realistic, targeted edits than prior work.
(UC Berkeley, Ayaan Haque, et al.) / March 22
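A pseudocode-level sketch of the edit-while-training loop described above may help. The InstructPix2Pix call uses the diffusers pipeline; the `NeRFTrainer` interface is a hypothetical stand-in, and the paper’s actual conditioning on the original captures is more involved than shown here:

```python
# Sketch of the iterative loop the paper describes: periodically re-edit a
# training image with InstructPix2Pix and keep optimizing the NeRF on the
# gradually consistent edited dataset. `trainer` is a hypothetical NeRF
# trainer (holds poses, images, render/train_step), not the authors' code.
import random
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16).to("cuda")

def edit_scene(trainer, instruction, steps=30_000, edit_every=10):
    for step in range(steps):
        if step % edit_every == 0:
            # Replace one dataset image with an edited version of the
            # NeRF's current render from that viewpoint (a PIL image here).
            i = random.randrange(len(trainer.images))
            render = trainer.render(trainer.poses[i])
            edited = pipe(instruction, image=render,
                          image_guidance_scale=1.5).images[0]
            trainer.images[i] = edited
        trainer.train_step()  # usual NeRF photometric loss on current images
```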

It’s Game Over on Vocal Deepfakes
Now comes this: a Twitter thread from John Meyer, who trained a clone of Steve Jobs’s voice and then hooked it up to ChatGPT to generate the words. The clips he posted to Twitter are freakishly uncanny. It really sounds like Jobs. The only hitch is that it sounds like Jobs reading from a script, not speaking extemporaneously.
(Daring Fireball, John Gruber) / March 21

Was this written by a human or AI? ¯\_(ツ)_/¯
New research shows we can only accurately identify AI writers about 50% of the time. The real concern is that we can create AI “that comes across as more human than human, because we can optimize the AI’s language to take advantage of the kind of assumptions that humans have. That’s worrisome because it creates a risk that these machines can pose as more human than us,” with the potential to deceive.
(Stanford University Human-Centered AI, Prabha Kannan) / March 16

GPT-4 and professional benchmarks: the wrong answer to the wrong question
The manner in which language models solve problems is different from how people do it, so GPT-4’s performance on professional licensing exams and other standardized tests tells us very little about how a bot will do when confronted with the real-life problems that professionals face. On the other hand, there are many ways in which it can solve pain points for professionals: for example, by automating mundane and low-stakes yet laborious tasks. For now, it might be better to focus on achieving such benefits and on mitigating the many risks of language models.
(AI Snake Oil, Arvind Narayanan, Sayash Kapoor) / March 21

‘AI-powered’ is tech’s meaningless equivalent of ‘all natural’
Because AI is so poorly defined, it’s really easy to say your device or service has it and back that up with some plausible-sounding mumbo jumbo about feeding a neural network a ton of data on TV shows or water use patterns. The recent flowering of AI into a buzzword fit to be crammed onto every bulleted list of features has to do at least partly with the conflation of neural networks with artificial intelligence. Without getting too into the weeds, the two aren’t interchangeable, but marketers treat them as if they are.
(TechCrunch, Devin Coldewey) / January 10, 2017

NVIDIA Announces H100 NVL - Max Memory Server Card for Large Language Models
At the high end of the market, the company today is announcing a new H100 accelerator variant specifically aimed at large language model users. It consists of two H100 PCIe boards that come already bridged together, with a total of 188GB of HBM3 memory – 94GB per card – offering more memory per GPU than any other NVIDIA part to date, even within the H100 family.
(AnandTech, Ryan Smith) / March 21

The Age of AI has begun
This new technology can help people everywhere improve their lives. At the same time, the world needs to establish the rules of the road so that any downsides of artificial intelligence are far outweighed by its benefits, and so that everyone can enjoy those benefits no matter where they live or how much money they have. The Age of AI is filled with opportunities and responsibilities.
(GatesNotes, Bill Gates) / March 21

Surprise Computer Science Proof Stuns Mathematicians
For decades, mathematicians have been inching forward on a problem about which sets contain evenly spaced patterns of three numbers (three-term arithmetic progressions, such as 3, 5, 7). Last month, two computer scientists blew past all of those results.
(Quanta Magazine, Leila Sloman) / March 21

Google Bard enters open beta
Based on LaMDA, Google’s conversational AI model capable of fluid, multi-turn dialogue, Bard is an experimental, conversational AI chat service.
(Google) / March 21

Zero-1-to-3: Zero-shot One Image to 3D Object
A framework for changing the camera viewpoint of an object given just a single RGB image, using a conditional diffusion model. It allows control of the camera perspective in large-scale diffusion models, enabling zero-shot novel view synthesis and 3D reconstruction from a single image.
(Columbia University, Ruoshi Liu, et al.) / March 21

OpenAI CEO warns that GPT-4 could be misused for nefarious purposes
Altman warned that such models’ ability to automatically generate text, images, or code could be used to launch disinformation campaigns or cyber attacks, and that the technology could be abused by individuals, groups, or authoritarian governments.
(The Register, Katyanna Quach) / March 20

Acer (yes, the computer company) is building a fancy electric bike with built-in AI
This lightweight 35 lb. (16 kg) e-bike features a number of gadgets and gizmos we have yet to spot in the industry, such as built-in AI designed to predictively control the transmission and make use of collision detection sensors for a safer ride.
(electrek, Micah Toll) / March 20

Capabilities of GPT-4 on Medical Challenge Problems
GPT-4, without any specialized prompt crafting, exceeds the passing score on the USMLE by over 20 points and outperforms earlier general-purpose models (GPT-3.5) as well as models specifically fine-tuned on medical knowledge (Med-PaLM, a prompt-tuned version of Flan-PaLM 540B). The USMLE is a three-step examination program used to assess clinical competency and grant licensure in the United States. In addition, GPT-4 is significantly better calibrated than GPT-3.5, demonstrating a much-improved ability to predict the likelihood that its answers are correct.
(Harsha Nori, et al.) / March 20
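“Calibrated” here means the model’s stated confidence tracks its actual accuracy. One common way to quantify that is expected calibration error; a small illustrative sketch follows, with made-up sample data, and not necessarily the paper’s exact methodology:

```python
# Illustrative sketch of expected calibration error (ECE): bucket answers by
# the model's stated confidence and compare each bucket's average confidence
# to its actual accuracy. Sample data is made up.
def expected_calibration_error(confidences, correct, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        bins[min(int(conf * n_bins), n_bins - 1)].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        # Weight each bucket's |confidence - accuracy| gap by its size.
        ece += (len(bucket) / n) * abs(avg_conf - accuracy)
    return ece

# A well-calibrated model's 80%-confident answers are right ~80% of the time.
print(expected_calibration_error([0.9, 0.8, 0.6, 0.95],
                                 [True, True, False, True]))
```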
