Week of May 1, 2023

Scientists Use GPT AI to Passively Read People’s Thoughts in Breakthrough
Scientists have invented a language decoder that can translate a person’s thoughts into text using an artificial intelligence (AI) transformer similar to ChatGPT, reports a new study. The breakthrough marks the first time that continuous language has been non-invasively reconstructed from human brain activity, which is read through a functional magnetic resonance imaging (fMRI) machine. The decoder was able to interpret the gist of stories that human subjects watched or listened to—or even simply imagined—using fMRI brain patterns, an achievement that essentially allows it to read people’s minds with unprecedented efficacy. While this technology is still in its early stages, scientists hope it might one day help people with neurological conditions that affect speech to clearly communicate with the outside world. (VICE, Becky Ferreira) / May 1

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe are a key to their future. On Monday, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT. Dr. Hinton said he has quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so he can speak freely about the risks of A.I. A part of him, he said, now regrets his life’s work. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough. (The New York Times, Cade Metz) / May 1

ChatGPT: abstract logic and the doubling down bias
This is the third in a short series of posts about ChatGPT’s capabilities and flaws. First I noted that ChatGPT often makes things up because that’s what it was designed to do. Second, it lacks an “inner monologue” and meta-cognition. In this post, I’ll take a brief look at its reasoning capabilities, and the consequences of them. (Luke Plant) / May 1