
Stanford AI Lab Papers and Talks at NeurIPS 2021

The Stanford AI Lab Blog

We’re excited to share all the work from SAIL that’s being presented at the main conference, in the Datasets and Benchmarks track, and at the various workshops; you’ll find links to papers, videos, and blogs below.


ML and NLP Research Highlights of 2021

Sebastian Ruder

Consequently, 2021 saw much discussion of best practices and ways in which we can reliably evaluate such models going forward, which I cover in this blog post. An art scene also emerged around the most recent generation of generative models (see this blog post for an overview).


The State of Transfer Learning in NLP

Sebastian Ruder

Introduction: For an overview of what transfer learning is, have a look at this blog post. Author-released checkpoint files generally contain all the weights of a pretrained model.
