
ML and NLP Research Highlights of 2021

Sebastian Ruder

The model is pre-trained on diverse multilingual speech data using a self-supervised wav2vec 2.0-style objective (a minimal sketch of this objective follows below). At the same time, we saw new unified pre-trained models for previously under-researched modality pairs, such as videos and language [9] as well as speech and language [10].
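To make the pre-training objective concrete, here is a minimal sketch of a wav2vec 2.0-style contrastive loss: for each masked time step, the model must identify the true quantized latent among distractors sampled from other time steps of the same utterance. The function name, tensor shapes, and hyperparameters (`num_negatives`, `temperature`) are illustrative assumptions, not the reference implementation; the actual wav2vec 2.0 objective also adds a codebook diversity loss, omitted here.

```python
import torch
import torch.nn.functional as F

def wav2vec2_style_loss(context, quantized, masked_idx,
                        num_negatives=10, temperature=0.1):
    """Contrastive loss over masked time steps (simplified sketch).

    context:    (T, D) contextualized outputs of the Transformer
    quantized:  (T, D) quantized latent targets from the feature encoder
    masked_idx: 1-D LongTensor of masked time-step indices
    """
    T, _ = quantized.shape
    losses = []
    for t in masked_idx.tolist():
        # Sample distractors uniformly from the other time steps.
        candidates = torch.tensor([i for i in range(T) if i != t])
        negatives = candidates[torch.randperm(len(candidates))[:num_negatives]]
        # Candidate set: the true quantized target first, then the distractors.
        targets = torch.cat([quantized[t].unsqueeze(0), quantized[negatives]], dim=0)
        # Cosine similarity between the context vector and each candidate.
        logits = F.cosine_similarity(context[t].unsqueeze(0), targets, dim=-1) / temperature
        # The true target sits at index 0, so the label is always 0.
        losses.append(F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long)))
    return torch.stack(losses).mean()

# Toy usage with random tensors standing in for real encoder outputs.
T, D = 50, 256
context, quantized = torch.randn(T, D), torch.randn(T, D)
masked = torch.randperm(T)[:8]  # mask a subset of time steps
loss = wav2vec2_style_loss(context, quantized, masked)
```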

8) ML for Science

[Figure: The architecture of AlphaFold 2.0.]