
Explosion in 2019: Our Year in Review

Explosion

In the interview, Matt and Ines talked about Prodigy, where training corpora come from and the challenges of annotating data for an NLP system – with some ideas about how to make it easier. Jan 28: Ines then joined the great lineup of Applied Machine Learning Days in Lausanne, Switzerland.


Vision Transformers (ViT) in Image Recognition – 2023 Guide

Viso.ai

The Vision Transformer (ViT) model architecture was introduced in a research paper published as a conference paper at ICLR 2021, titled “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale”. It was developed and published by Neil Houlsby, Alexey Dosovitskiy, and 10 more authors of the Google Research Brain Team.
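The “16x16 words” in the paper's title refers to splitting an image into non-overlapping 16x16 pixel patches, each flattened into a vector and treated as one token in the transformer's input sequence. A minimal NumPy sketch of that patch tokenization step (the function name and shapes here are illustrative, not from the paper's code):

```python
import numpy as np

def image_to_patches(image, patch_size=16):
    """Split an (H, W, C) image into flattened non-overlapping patches —
    the token sequence a Vision Transformer consumes."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    rows, cols = h // patch_size, w // patch_size
    # Carve the image into a grid of patches, then flatten each patch.
    patches = (
        image.reshape(rows, patch_size, cols, patch_size, c)
        .transpose(0, 2, 1, 3, 4)               # (rows, cols, ph, pw, c)
        .reshape(rows * cols, patch_size * patch_size * c)
    )
    return patches

# A 224x224 RGB image becomes 196 tokens, each of dimension 768 (16*16*3).
img = np.zeros((224, 224, 3))
print(image_to_patches(img).shape)  # (196, 768)
```

In the full model, each flattened patch is then linearly projected to the transformer's embedding dimension and combined with position embeddings before entering the encoder.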


Explosion in 2017: Our Year in Review

Explosion

Merged 3,238 commits from 129 authors. Published 13 pre-trained statistical models for tagging, parsing, and NER in 8 languages, and extended tokenization support to 26 languages in total. You can see the thought process behind Prodigy in three blog posts that we wrote along the way.
