ML and NLP Research Highlights of 2021
Sebastian Ruder
JANUARY 24, 2022
The model is pre-trained on diverse multilingual speech data using a self-supervised wav2vec 2.0-style objective. At the same time, we saw new unified pre-trained models for previously under-researched modality pairs, such as videos and language [9] as well as speech and language [10].

8) ML for Science

[Figure: The architecture of AlphaFold 2.0.]