Stability AI Launches Open-Source ChatGPT-Like Language Models StableLM

By Ibukun Ogundare · 3 min read
“A Stochastic Parrot, flat design, vector art” - Stable Diffusion XL. Photo: Stability AI

Stability AI notes that models like StableLM emphasize its commitment to transparent, accessible, and supportive AI technology.

Stability AI has unveiled StableLM, a suite of open-source large language models similar to ChatGPT. The suite of text-generating AI models is designed to contend with the likes of OpenAI’s GPT-4. The large-language-model sector has expanded rapidly, and Stability AI is the latest contributor to the growing space. The open-source generative AI company was behind the 2022 public launch of Stable Diffusion, a widely adopted image-generation model and open alternative to proprietary AI.

In its announcement of StableLM, Stability AI says the Alpha version of the suite is available on GitHub, with fuller technical details to follow. The startup also plans ongoing collaboration with developers and researchers as the open-source large language models roll out.

Stability AI Launches StableLM

Notably, Stability AI states that the models can generate both code and text, and that they demonstrate how small, efficiently trained models can deliver strong performance. The Alpha version of the suite comprises models with 3 billion and 7 billion parameters; larger models with 15 billion, 30 billion, and 65 billion parameters are “in progress,” and a 175-billion-parameter model is planned for future development. The base models are freely available to developers for research and commercial purposes under the terms of the license.

In its blog post about StableLM, Stability AI points to its earlier experience building open-source large language models.

“The release of StableLM builds on our experience in open-sourcing earlier language models with EleutherAI, a nonprofit research hub. These language models include GPT-J, GPT-NeoX, and the Pythia suite, which were trained on The Pile open-source dataset. Many recent open-source language models continue to build on these efforts, including Cerebras-GPT and Dolly-2.”

Stability AI explains that StableLM is trained on a new experimental dataset built on The Pile, one that is three times larger and contains 1.5 trillion tokens of content. Promising to share more information about the dataset in due course, the team says it gives StableLM high performance in conversational and coding tasks despite the models’ small size of 3 billion to 7 billion parameters. By comparison, GPT-3 consists of 175 billion parameters.

Stability AI’s Commitment to AI Technology

Furthermore, Stability AI notes that models like StableLM underscore its commitment to transparent, accessible, and supportive AI technology. The company explains that the models are open-sourced to promote transparency and build trust, and that users can efficiently run them on local devices. As the company puts it, “open, fine-grained access to our models allows the broad research and academic community to develop interoperability and safety techniques beyond what is possible with closed models.”

The number of open-source text-generating models continues to grow as companies eye the new but booming industry. While these models serve beneficial purposes, some researchers argue that they could also be put to harmful uses.

Ibukun Ogundare

Ibukun is a crypto/finance writer interested in conveying relevant information in plain language to reach all kinds of audiences. Apart from writing, she likes to watch movies, cook, and explore restaurants in the city of Lagos, where she resides.
