
What Should Academic NLP Researchers Focus on?

I’ve seen all kinds of people comment recently on the value of academic research in NLP. The basic argument is that significant research on large language models such as ChatGPT, currently the hottest and trendiest area in NLP, happens in companies rather than universities because it requires enormous resources. So if all of the important work is happening at Google, OpenAI, Microsoft, etc., what is the “value added” of academic university research?

Of course, even if LLMs are built by large tech companies, there are still many important research questions that academics can look at, for example understanding how the models work and how they fail! But the larger point is that there is more to NLP than LLMs, and that companies by their nature focus on developing technologies which will be commercially profitable in the short or medium term. As a society, we also want people to investigate ideas which won’t pay off for a decade or more, ideas whose main benefit is scientific understanding rather than profit, and ideas whose main impact is social good rather than money. Society also wants impartial evaluations and assessments of technology. There are plenty of great opportunities for academic researchers in these areas!

In 2018 I wrote a blog called Academic Researchers Should be Scouts and Explorers, where I argued that academics should act as scouts and explorers who fan out and investigate unexplored scientific territory. This was in contrast to companies, which were “colonisers”, i.e. focused on developing and exploiting areas already known to be of high commercial value. I think this is even more true in 2023 than it was in 2018.

Let’s pursue new ideas!

There are plenty of exciting research questions which don’t obviously have large commercial profit potential in the next few years. I’ve alluded to many of these in previous blogs, including trying to define and understand when texts are inappropriate even if they are accurate (and more generally understanding pragmatic correctness), evaluating texts using error annotation, and understanding how to combine generated texts with information graphics. Of course there are loads more such questions. I’m personally excited about researching better ways to communicate health information to patients, including effectively communicating complex probabilities and probabilistic reasoning, making health chatbots sensitive to users’ emotional and stress states, and helping patients interact with clinical staff. It would be a real shame for the NLP community if such topics were not pursued by at least a few people.

Let’s do high-quality experiments!

Another role for academics is careful experimentation and evaluation. Companies have inherent conflict-of-interest issues when they evaluate technology. For example, we wouldn’t trust either OpenAI or Google to do an impartial, careful evaluation of ChatGPT, because commercial concerns would push OpenAI towards positive results and Google towards critical ones. I realise that there are many careful experimentalists in both companies! However, my experience is that when experimental results potentially have commercial implications, it is difficult for commercial researchers to publish results that go against their company’s commercial interests.

Academics, in contrast, can be more impartial; certainly in other scientific areas such as medicine, academic experiments are valued in part for this reason. However, impartial academic evaluations are only useful if they are high-quality and rigorous; BLEU scores and asking Turkers for Likert ratings are unlikely to be sufficient. We need evaluations which are closer to medical clinical trials, or at least to careful psychological experiments. Indeed, developing techniques for high-quality evaluation is itself a very important research area which I’d love to see more work on; this is an area where I’m seeing major contributions from both academic and commercial researchers.
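As a small illustration of what “more rigorous than comparing mean Likert scores” can look like, here is a minimal sketch of a paired bootstrap comparison of two systems rated by humans on the same items. This is my own sketch, not something from this post: the function name and the ratings data are hypothetical, and a real evaluation would also need careful experimental design (sample-size planning, annotator training, agreement checks, pre-registration).

```python
import random

def paired_bootstrap(ratings_a, ratings_b, n_resamples=10_000, seed=0):
    """Paired bootstrap: resample evaluation items with replacement and
    count how often system A's mean rating beats system B's.

    ratings_a and ratings_b are human ratings for the SAME items, so the
    comparison is paired rather than between independent samples.
    """
    assert len(ratings_a) == len(ratings_b), "need paired ratings"
    rng = random.Random(seed)
    n = len(ratings_a)
    wins = 0
    for _ in range(n_resamples):
        # Resample item indices with replacement (one bootstrap sample).
        idx = [rng.randrange(n) for _ in range(n)]
        mean_a = sum(ratings_a[i] for i in idx) / n
        mean_b = sum(ratings_b[i] for i in idx) / n
        if mean_a > mean_b:
            wins += 1
    return wins / n_resamples  # fraction of resamples where A beats B

# Hypothetical 1-5 Likert ratings of two systems on the same twelve items.
a = [4, 5, 3, 4, 4, 5, 3, 4, 4, 5, 2, 4]
b = [3, 5, 3, 4, 3, 4, 3, 4, 3, 4, 2, 4]
print(f"P(A > B under resampling) = {paired_bootstrap(a, b):.3f}")
```

If that fraction is close to 0.5, the observed difference between the systems could easily be noise; reporting this kind of uncertainty is a modest step towards the clinical-trial standard mentioned above.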

Let’s support socially useful applications!

The goal of a company is to make money; that’s not evil, it’s the way capitalist societies work. This of course means that companies focus their energies on technologies and use cases which can deliver substantial profits, not on technologies and use cases which are socially very useful but not very profitable.

For example, I have a new PhD student, Iniakpokeikiye Thompson, who is investigating driving feedback apps in Nigeria. The basic concept is to collect data from drivers and generate insights and suggestions which help them drive more safely. This is similar at a high level to Braun et al 2018, but focuses on Nigeria instead of European countries. Driving is much more dangerous in Nigeria (the per-capita fatality rate from road accidents is almost 10 times higher in Nigeria than in the UK); other countries which are poor (and thus have lower-quality infrastructure) and have a weak rule of law (so drivers may feel free to ignore the rules) have similar problems. So there is potential to do a lot of good, but making serious money from this kind of thing in Nigeria would not be easy. Similarly, there is a lot of potential in using NLP technology to help poor people in deprived areas (e.g., I would like to work on health chatbots for such people), but companies usually focus on richer consumers.

Note that working on such use cases is often harder than working on standard ones: getting data can be challenging (there are almost no existing data sets from Nigeria), you need to understand the context (what it’s like to be a poor person in a deprived area), experiments are harder (you probably can’t use crowdsourcing to recruit subjects), and so on.

Let’s investigate fundamental cognitive science!

Because companies exist to make money, they focus on creating products and services. But many scientific topics have no immediate relevance to products and services, yet are still essential in helping us understand ourselves and the world we live in. AI in particular has long-standing links with other “cognitive sciences” such as linguistics and psychology, and we can use NLP techniques to help us understand deep questions about how language and our minds work.

This is not an area that I personally have ever worked in! But I have a lot of respect for people who do this kind of research, and it would be great to see this community grow.

Final thoughts

The most important papers on large language models come from companies; this has been true for the past five years, and I suspect it will be true for the next five. Of course, LLMs are a large research space, so there are certainly opportunities for academics; indeed, academics can continue to churn out useless “leaderboard” papers showing that tweaking obsolete model A leads to slightly better performance on artificial task B.

But there is an alternative, which is to focus on important research questions which companies mostly ignore for commercial reasons. This isn’t an easy path, and it is probably not the optimal strategy for maximising the number of xACL papers on a CV, but I think it could be scientifically very productive.
