
There’s now an open source alternative to ChatGPT, but good luck running it

The first open-source equivalent of OpenAI’s ChatGPT has arrived, but good luck running it on your laptop — or at all.

This week, Philip Wang, the developer responsible for reverse-engineering closed-source AI systems including Meta’s Make-A-Video, released PaLM + RLHF, a text-generating model that behaves similarly to ChatGPT. The system combines PaLM, a large language model from Google, with a technique called Reinforcement Learning from Human Feedback — RLHF, for short — to create a system that can accomplish pretty much any task that ChatGPT can, including drafting emails and suggesting computer code.

But PaLM + RLHF isn’t pretrained. That is to say, the system hasn’t been trained on the example data from the web necessary for it to actually work. Downloading PaLM + RLHF won’t magically install a ChatGPT-like experience — that would require compiling gigabytes of text from which the model can learn and finding hardware beefy enough to handle the training workload.

Like ChatGPT, PaLM + RLHF is essentially a statistical tool to predict words. When fed an enormous number of examples from training data — e.g. posts from Reddit, news articles and ebooks — PaLM + RLHF learns how likely words are to occur based on patterns like the semantic context of surrounding text.
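The statistical principle is easier to see in miniature. The toy sketch below counts which words follow which in a tiny made-up corpus and turns the counts into next-word probabilities; real models like PaLM learn these patterns with transformer networks over billions of documents, but the core idea — predicting the likely next word from context — is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale training data described above.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each context word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(context_word):
    """Probability distribution over the next word, given one context word."""
    counts = following[context_word]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_word_probs("the"))  # "cat" is the most likely word after "the"
```

In this corpus, "cat" follows "the" half the time, so the model assigns it probability 0.5 — the same frequency-based reasoning, scaled up enormously, is what lets a large language model continue a sentence plausibly.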

ChatGPT and PaLM + RLHF share a special sauce in Reinforcement Learning from Human Feedback, a technique that aims to better align language models with what users wish them to accomplish. RLHF involves training a language model — in PaLM + RLHF’s case, PaLM — and fine-tuning it on a data set that includes prompts (e.g. “Explain machine learning to a six-year-old”) paired with what human volunteers expect the model to say (e.g. “Machine learning is a form of AI…”). The aforementioned prompts are then fed to the fine-tuned model, which generates several responses, and the volunteers rank all the responses from best to worst. Finally, the rankings are used to train a “reward model” that takes the original model’s responses and sorts them in order of preference, filtering for the top answers to a given prompt.
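The reward model step above is commonly trained with a pairwise ranking loss: for every pair of responses the volunteers ranked, the loss is small when the model scores the preferred response higher than the rejected one. This is a minimal sketch of that loss, not the actual PaLM + RLHF code, and the reward scores below are hypothetical.

```python
import math

def pairwise_ranking_loss(preferred_score, rejected_score):
    # -log(sigmoid(preferred - rejected)): near zero when the preferred
    # response scores much higher, large when the ordering is wrong.
    return -math.log(1.0 / (1.0 + math.exp(-(preferred_score - rejected_score))))

# Hypothetical reward-model scores for three responses to one prompt,
# listed in the volunteers' preference order (best to worst).
scores = [2.1, 0.4, -1.3]

# Every (better, worse) pair from the ranking contributes to the loss.
pairs = [(i, j) for i in range(len(scores)) for j in range(i + 1, len(scores))]
loss = sum(pairwise_ranking_loss(scores[i], scores[j]) for i, j in pairs) / len(pairs)
print(loss)
```

Minimizing this loss nudges the reward model toward the volunteers’ preferences; the reward model is then used to steer the language model itself via reinforcement learning.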

Collecting the training data is an expensive process. And training itself isn’t cheap. PaLM is 540 billion parameters in size, “parameters” referring to the parts of the language model learned from the training data. A 2020 study pegged the expenses for developing a text-generating model with only 1.5 billion parameters at as much as $1.6 million. And training the open source model Bloom, which has 176 billion parameters, took three months on 384 Nvidia A100 GPUs; a single A100 costs thousands of dollars.

Running a trained model of PaLM + RLHF’s size isn’t trivial, either. Bloom requires a dedicated PC with around eight A100 GPUs. Cloud alternatives are pricey, with back-of-the-envelope math finding the cost of running OpenAI’s text-generating GPT-3 — which has around 175 billion parameters — on a single Amazon Web Services instance to be around $87,000 per year.
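The back-of-the-envelope math looks something like this. The GPU count and per-GPU hourly rate below are illustrative assumptions chosen to land near the article’s figure, not AWS list prices.

```python
# Hypothetical inputs: eight GPUs (roughly what a Bloom-sized model
# needs to serve) at an assumed blended cloud rate per GPU-hour.
gpus_needed = 8
hourly_rate_per_gpu = 1.24  # USD, assumed for illustration
hours_per_year = 24 * 365

annual_cost = gpus_needed * hourly_rate_per_gpu * hours_per_year
print(f"${annual_cost:,.0f} per year")  # prints "$86,899 per year"
```

Swap in real on-demand or reserved pricing and the total moves accordingly, but the point stands: keeping a 100-billion-plus-parameter model running is a five-figure-per-year expense at minimum.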

Sebastian Raschka, an AI researcher, points out in a LinkedIn post about PaLM + RLHF that scaling up the necessary dev workflows could prove to be a challenge as well. “Even if someone provides you with 500 GPUs to train this model, you still need to have to deal with infrastructure and have a software framework that can handle that,” he said. “It’s obviously possible, but it’s a big effort at the moment (of course, we are developing frameworks to make that simpler, but it’s still not trivial, yet).”

That’s all to say that PaLM + RLHF isn’t going to replace ChatGPT today — unless a well-funded venture (or person) goes to the trouble of training and making it available publicly.

In better news, several other efforts to replicate ChatGPT are progressing at a fast clip, including one led by a research group called CarperAI. In partnership with the open AI research organization EleutherAI and startups Scale AI and Hugging Face, CarperAI plans to release the first ready-to-run, ChatGPT-like AI model trained with human feedback.

LAION, the nonprofit that supplied the initial data set used to train Stable Diffusion, is also spearheading a project to replicate ChatGPT using the newest machine learning techniques. Ambitiously, LAION aims to build an “assistant of the future” — one that not only writes emails and cover letters but “does meaningful work, uses APIs, dynamically researches information, and much more.” It’s in the early stages. But a GitHub page with resources for the project went live a few weeks ago.

There’s now an open source alternative to ChatGPT, but good luck running it by Kyle Wiggers originally published on TechCrunch



