
Stanford AI Lab Papers and Talks at ICLR 2022

The Stanford AI Lab Blog

We’re excited to share all the work from SAIL that’s being presented, and you’ll find links to papers, videos and blogs below. Feel free to reach out to the contact authors directly to learn more about the work that’s happening at Stanford!


Stanford AI Lab Papers and Talks at NeurIPS 2021

The Stanford AI Lab Blog

We’re excited to share all the work from SAIL that’s being presented at the main conference, at the Datasets and Benchmarks track, and at the various workshops, and you’ll find links to papers, videos and blogs below.


article thumbnail

Understanding BERT

Mlearning.ai

This blog post promises to be easier to understand than the underlying research paper, especially for readers not yet familiar with the field. In ablation studies, the authors demonstrate that this approach yields performance improvements over classical ones.
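As a companion to the post, here is a minimal sketch of BERT's masked-language-model behavior, using the Hugging Face transformers library and the bert-base-uncased checkpoint as illustrative assumptions; this is not code from the blog post or the paper itself.

```python
# Minimal sketch (not from the blog post): querying BERT's
# masked-language-model head via the Hugging Face transformers library.
from transformers import pipeline

# Load a fill-mask pipeline backed by the bert-base-uncased checkpoint;
# the pretrained weights are downloaded on first use.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts the token behind [MASK] from bidirectional context.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(f"{prediction['token_str']}: {prediction['score']:.3f}")
```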


ML and NLP Research Highlights of 2021

Sebastian Ruder

Consequently, 2021 saw much discussion of best practices and ways in which we can reliably evaluate such models going forward, which I cover in this blog post. An art scene emerged around the most recent generation of generative models (see this blog post for an overview).


Reducing the cost of LLMs with quantization and efficient fine-tuning: how can businesses benefit from Generative AI with limited hardware?

deepsense.ai

Apart from implementing the models, the authors of llama.cpp came up with their own quantization scheme, called k-quants; models quantized this way are often referred to as GGUF models, after the file format in which llama.cpp models are served. If you want to know more about the topic, feel free to check out our previous blog post.
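For readers who want a concrete starting point, below is a minimal sketch of loading a k-quant GGUF model through the llama-cpp-python bindings for llama.cpp; the model file path is a hypothetical placeholder, and this is not code from the deepsense.ai post.

```python
# Minimal sketch (model path is a hypothetical placeholder): running a
# 4-bit k-quant GGUF model via the llama-cpp-python bindings for llama.cpp.
from llama_cpp import Llama

# Load the quantized model; k-quant weights keep memory use low enough
# for commodity hardware. n_ctx sets the context window size.
llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)

# Run a single completion against the quantized model.
output = llm("Q: Why quantize an LLM? A:", max_tokens=64)
print(output["choices"][0]["text"])
```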