Week of April 17, 2023

Finetuning Large Language Models
In the rapidly evolving field of artificial intelligence, utilizing large language models (LLMs) efficiently and effectively has become increasingly important. But we can use large language models in many different ways, which can be overwhelming if you are starting out. In essence, we can use pretrained large language models for new tasks in two main ways: in-context learning and finetuning. This article goes over what in-context learning means and the various ways we can finetune LLMs.
(AHEAD OF AI, Sebastian Raschka) / April 22
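The distinction the article draws can be illustrated with a minimal sketch: in-context learning leaves the model's weights untouched and instead supplies task examples in the prompt at inference time, whereas finetuning updates the weights on task data. The helper below is hypothetical and model-agnostic; it only shows how a few-shot prompt is assembled.

```python
def build_few_shot_prompt(examples, query):
    """In-context learning: no weight updates. Task demonstrations
    are placed directly in the prompt; the pretrained model infers
    the pattern at inference time. (Finetuning, by contrast, would
    run gradient updates on these same examples.)"""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Hypothetical sentiment-labeling task with two demonstrations.
prompt = build_few_shot_prompt(
    [("great movie!", "positive"), ("terrible plot", "negative")],
    "loved the soundtrack",
)
print(prompt)
```

The resulting string would be sent as-is to a pretrained LLM; no parameters change, which is what makes in-context learning cheap to try but dependent on prompt quality.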

There Is No A.I.
It’s easy to attribute intelligence to the new systems; they have a flexibility and unpredictability that we don’t usually associate with computer technology. But this flexibility arises from simple mathematics. If the new tech isn’t true artificial intelligence, then what is it? In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.
(The New Yorker, Jaron Lanier) / April 20

‘I’ve Never Hired A Writer Better Than ChatGPT’: How AI Is Upending The Freelance World
While some freelancers are losing their gigs to ChatGPT, clients are being spammed with AI-written content on freelancing platforms. The result: increasing mistrust between clients and freelancers, and mounting trouble for the platforms themselves. With freelancers panicking about losing their jobs and clients frustrated with AI-written work, ChatGPT has thrust the freelance world into disarray, and companies like Upwork and Fiverr stand to lose a lot.
(Forbes, Rashi Shrivastava) / April 20

Stability AI Launches the First of its StableLM Suite of Language Models
On April 19, Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3 billion and 7 billion parameters, with 15 billion to 65 billion parameter models to follow. Developers can freely inspect, use, and adapt the StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA-4.0 license. StableLM is trained on a new experimental dataset built on The Pile, but three times larger, with 1.5 trillion tokens of content.
(Stability AI) / April 19
