8 Trending and New Large Language Models to Keep an Eye On

ODSC - Open Data Science
Jul 3, 2023

We’re hearing a lot about large language models, or LLMs, in the news lately. If you’re not familiar, LLMs are a type of artificial intelligence trained on massive amounts of text data. This allows them to generate text that is often indistinguishable from human-written text; ChatGPT is the best-known example. Because of this, LLMs have a wide range of potential applications, including natural language processing, machine translation, and text generation.

With that said, here are some of the newer and trending LLMs that are worth keeping an eye on.

RWKV

RWKV aims to combine the modeling power of transformers with the inference efficiency of RNNs. The hope is that RWKV will achieve state-of-the-art performance at a lower computational cost, and if successful, this could lead to more efficient NLP models in the future.
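
To make the idea concrete, here is a toy, heavily simplified sketch of RWKV-style token mixing: an attention-like weighted average maintained as a running recurrence, so each new token costs a constant amount of work instead of rescanning the whole history. The function name and the single scalar decay are illustrative inventions; the real RWKV architecture uses learned, per-channel time decay and a more careful, numerically stable formulation.

```python
import numpy as np

def wkv_recurrence(keys, values, decay=0.9):
    """Toy RWKV-style mixing: keys/values are (seq_len, dim) arrays."""
    num = np.zeros(values.shape[1])  # running weighted sum of values
    den = np.zeros(keys.shape[1])    # running sum of weights
    outputs = []
    for k, v in zip(keys, values):
        w = np.exp(k)                # positive per-channel weight
        num = decay * num + w * v    # fold the new token into the state
        den = decay * den + w        # older tokens fade geometrically
        outputs.append(num / (den + 1e-8))
    return np.stack(outputs)

out = wkv_recurrence(np.random.randn(16, 8), np.random.randn(16, 8))
print(out.shape)  # (16, 8): one mixed output per token, in O(seq_len) time
```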

PaLM 2

PaLM 2 is a new language model that is more multilingual and more efficient than its predecessor, PaLM, with better reasoning capabilities. It is a Transformer-based model trained using a mixture of objectives similar to UL2. PaLM 2 has shown significantly improved quality on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference than PaLM. It also demonstrates robust reasoning capabilities and stable performance on a suite of responsible AI evaluations.
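
At the time of writing, PaLM 2 is reachable through Google’s PaLM API. Here is a hedged sketch of a text completion call; the package, model name, and availability are as documented in mid-2023 and may have changed since.

```python
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # assumes you have PaLM API access

completion = palm.generate_text(
    model="models/text-bison-001",  # a PaLM 2-based text model
    prompt="Summarize the difference between RNNs and transformers in two sentences.",
)
print(completion.result)
```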

Pythia

Pythia is a suite of 16 LLMs, all trained on the same public data, that can be used to study the development and evolution of LLMs. It has been used to study memorization, the effects of term frequency on few-shot performance, and the reduction of gender bias. The models range in size from 70M to 12B parameters. Pythia is publicly available and includes tools to download and reconstruct the training data loaders.
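
Because the Pythia checkpoints, including intermediate training steps, are published on the Hugging Face Hub, loading one takes only a few lines with transformers. A minimal sketch, assuming the EleutherAI/pythia-70m repository; the revision argument pins an intermediate training step, which is what makes the suite useful for studying how capabilities emerge during training.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m",
    revision="step3000",  # an intermediate checkpoint; omit for the final model
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0]))
```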

GPT-4

GPT-4 is a large-scale, multimodal model that can accept image and text inputs and produce text outputs. It exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam. It is a Transformer-based model that is pre-trained to predict the next token in a document, and its post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. GPT-4 is one of the better-known LLMs on this list and has already been shown to perform incredible feats in the hands of creative prompt engineers.
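
GPT-4 is accessed through the OpenAI API. A minimal sketch of a text-only chat call using the openai Python package as it looked in mid-2023 (image inputs were not yet generally available through the API); this assumes an API key with GPT-4 access.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumes an account with GPT-4 access

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain prompt engineering in one sentence."},
    ],
)
print(response.choices[0].message["content"])
```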

Kosmos

Kosmos-1 is a multimodal large language model that can perceive general modalities, learn in context, and follow instructions. It was trained on web-scale multimodal corpora, including text and images. Kosmos-1 achieves impressive performance on a wide range of tasks, including language understanding, generation, and perception-language tasks. It also benefits from cross-modal transfer, carrying knowledge from language to multimodal tasks and from multimodal tasks back to language.

LLaMA

Meta’s LLaMA (Large Language Model Meta AI) ranges in size from 7B to 65B parameters and was trained entirely on publicly available datasets. LLaMA shows that it is possible to train state-of-the-art language models using only public data: LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. Currently, the models have only been released to the research community on a case-by-case basis.
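
Once access is granted and the weights are converted to the Hugging Face format, LLaMA loads through the dedicated classes in transformers. A hedged sketch; the local path below is a placeholder for wherever your converted checkpoint lives.

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

checkpoint = "/path/to/llama-7b"  # placeholder: your converted weights
model = LlamaForCausalLM.from_pretrained(checkpoint)
tokenizer = LlamaTokenizer.from_pretrained(checkpoint)

inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```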

Vicuna

Vicuna-13B is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Inspired by Meta’s LLaMA and the Stanford Alpaca project, Vicuna-13B is backed by an enhanced dataset and an easy-to-use, scalable infrastructure. The goal of this LLM is to remove the barriers hindering research and open-source innovation in the field.
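
The LMSYS team publishes Vicuna weights on the Hugging Face Hub, so it can be run with plain transformers. A hedged sketch, assuming the lmsys/vicuna-13b-v1.3 checkpoint name; verify the current release and the upstream LLaMA license terms before using it.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-13b-v1.3"  # assumption: one published checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Vicuna is a chat model, so prompts follow its USER/ASSISTANT template.
prompt = "USER: What makes a good open-source chatbot? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```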

Dolly

Dolly 2.0 is a 12B-parameter language model that is open source and one of the few LLMs on this list licensed for commercial use. Dolly 2.0 was trained on a dataset of 15,000 human-generated instruction-following pairs. The dataset was created by Databricks employees and covers a variety of tasks, such as open Q&A, closed Q&A, extracting information from Wikipedia, summarizing information from Wikipedia, brainstorming, classification, and creative writing.
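
Dolly 2.0 ships on the Hugging Face Hub with a custom instruction-following pipeline. A minimal sketch following the pattern from the databricks/dolly-v2-12b model card; trust_remote_code is needed because the checkpoint bundles its own pipeline code.

```python
import torch
from transformers import pipeline

generate_text = pipeline(
    model="databricks/dolly-v2-12b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,   # loads the custom instruction pipeline
    device_map="auto",
)

res = generate_text("Explain what an instruction-following dataset is.")
print(res[0]["generated_text"])
```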

Conclusion

Pretty neat, right? Well, there is a ton more to learn about large language models, but you don’t have to worry about scraping the web for it. ODSC has you covered with the Generative AI Summit, a free virtual conference on July 20th that will bring together the leading minds pushing generative AI forward. Get your pass today, and see for yourself the latest in LLMs, generative AI, and their impact across industries.

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.
