How Many Languages Does Wikimedia Search Support?

TL;DR: We “support” a lot of “languages”! “More than 50 language varieties” is a defensible position to take. “More than 40 languages” is 100% guaranteed. “Over 300 languages” is admittedly expansive spin, but not entirely untrue. Precise numbers present a philosophical conundrum.

Recently, someone asked the Wikimedia Search Platform Team how many languages we support.

This is a squishy question!

The definition of what qualifies as a language is very squishy. We can try to avoid some of the debate by outsourcing the decision to the language codes we use—different codes equal different languages—though it won’t save us.

Another squishy concept is what we mean by “support”, since the level of language-specific processing provided for each language varies wildly, and even what it means to be “language-specific” is open to interpretation. But before we careen irretrievably off into the land of the philosophy of language, let’s tackle the easier parts of the question.

Full Support

“Full” support for many languages means that we have a stemmer or tokenizer, a stop word list, and we do any necessary language-specific normalization. (See the Anatomy of Search series of blog posts, or the Bare-Bones Basics of Full-Text Search video for technical details on stemmers, tokenizers, stop words, normalization, and more.)
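
To make those pieces a little more concrete, here’s a toy sketch of an analysis chain in Python. It is purely illustrative: the tokenizer, stop word list, and suffix-stripping “stemmer” below are crude stand-ins for the much more sophisticated Elasticsearch/Lucene components we actually use.

```python
# Toy analysis chain illustrating the pieces named above: tokenizer, normalization,
# stop words, and stemmer. Real chains (Elasticsearch/Lucene) are far more thorough;
# the suffix-stripping "stemmer" below is a deliberately crude stand-in.
import re
import unicodedata

STOP_WORDS = {"the", "a", "an", "of", "and"}  # tiny illustrative stop word list

def tokenize(text: str) -> list[str]:
    """Split text into word tokens on anything that isn't a word character."""
    return [t for t in re.split(r"\W+", text) if t]

def normalize(token: str) -> str:
    """Lowercase and apply Unicode normalization (NFKC folds width/compatibility forms)."""
    return unicodedata.normalize("NFKC", token).lower()

def stem(token: str) -> str:
    """Crude English-ish stemming: strip a few common suffixes."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def analyze(text: str) -> list[str]:
    tokens = (normalize(t) for t in tokenize(text))
    return [stem(t) for t in tokens if t not in STOP_WORDS]

print(analyze("The searchers searched the archives"))
# ['searcher', 'search', 'archive']
```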

CirrusSearch/Elasticsearch/Lucene

The wiki-specific custom component of on-wiki search is called CirrusSearch, which is built on the Elasticsearch search engine, which in turn is built on the Apache Lucene search library.

Out of the box, Elasticsearch 7.10 supports these 33 languages, so CirrusSearch does, too.

  • Arabic, Armenian, Basque, Bengali, Bulgarian, Catalan, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, German, Greek, Hindi, Hungarian, Indonesian, Irish, Italian, Latvian, Lithuanian, Norwegian, Persian, Portuguese, Romanian, Russian, Sorani (Central Kurdish), Spanish, Swedish, Thai, and Turkish.

Notes:

  • Sorani has language code ckb, and it is often called Central Kurdish in English.
  • Persian and Thai do not have stemmers, but that seems to be because they don’t need them.

Elasticsearch 7.10 also has two other language analyzers:

  • The “Brazilian” analyzer is for Brazilian Portuguese, which is represented by a sub-language code (pt-br). However, the Brazilian analyzer’s components are all separate from the Portuguese analyzer’s, and we do use it for the brwikimedia wiki (“Wiki Movimento Brasil”).
  • The “CJK” (which stands for “Chinese, Japanese, and Korean”) analyzer only normalizes half-width and full-width characters to their standard forms (ｱ→ア and Ａ→A), breaks up CJK characters into overlapping bigrams (e.g., ウィキペディア is indexed as ウィ, ィキ, キペ, ペデ, ディ, and ィア), and applies some English stop words. That’s not really “full” support, so we won’t count it here. (We also don’t use it for Chinese or Korean.)
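
To illustrate the bigram step (and only that; this is not the actual analyzer code), here’s a quick Python sketch:

```python
# Overlapping character bigrams of the kind the CJK analyzer produces.
import unicodedata

def cjk_bigrams(text: str) -> list[str]:
    """Return overlapping character bigrams; a lone character passes through as-is."""
    if len(text) < 2:
        return [text] if text else []
    return [text[i:i + 2] for i in range(len(text) - 1)]

print(cjk_bigrams("ウィキペディア"))
# ['ウィ', 'ィキ', 'キペ', 'ペデ', 'ディ', 'ィア']

# The width normalization is similar to Unicode NFKC folding:
print(unicodedata.normalize("NFKC", "ｱＡ"))  # 'アA'
```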

We will count Brazilian Portuguese as a language that we support, but also keep a running side tally of “maybe only sort of distinct” language varieties.

We’ll come back to Chinese, Japanese, Korean, and the CJK analyzer a bit later.

Simplified language tree in Swedish, Public Domain, by Ingwik

Other Open Source Analysis

We have found some open-source software that does stemming or other processing for particular languages: some packaged as Elasticsearch plugins, some as stand-alone Java code, and some in other programming languages. We have used, wrapped, or ported them as needed to make the algorithms available for our wikis.

  • We have open-source Serbian, Esperanto, and Slovak stemmers that we ported to Elasticsearch plugins.
    • There are currently no stop word lists for these languages. However, for a typical significantly inflected alphabetic Indo-European language,[†] a decent stemmer is the biggest single improvement that can be added to an analysis chain for that language. Stop words are very useful, but general word statistics will discount them even without an explicit stop word list.
    • Having a stemmer (for a language that needs one) can count as the bare minimum for “full” support.

[†] English is weird in that it is not significantly inflected. Non-Indo-European languages can have very different inflection patterns (like Inuit—so much!—or Chinese—so little!), and non-alphabetic writing systems (like Arabic or Chinese) can have significantly different needs beyond stemming to count as “fully” supported.

  • For Chinese (Mandarin) we have something beyond the not-so-smart (but much better than nothing!) CJK analyzer provided by Elasticsearch/Lucene. Chinese doesn’t really need a stemmer, but it does need a good tokenizer to break up strings of text without spaces into words. That’s the most important component for Chinese, and we found an open-source plugin to do that. Our particular instantiation of Chinese comes with additional complexity because we allow both Traditional and Simplified characters, often in the same sentence. We have an additional open-source plugin to convert everything to Simplified characters internally. (There’s a small sketch of these two steps just after this list.)
  • For Hebrew we found an open-source Elasticsearch plugin that does stemming. It also handles the ambiguity caused by the lack of vowels in Hebrew (by sometimes generating more than one stem).
  • For Korean, we have another open-source plugin that is much better than the very basic processing provided by the CJK analyzer. It does tokenizing and part-of-speech tagging and filtering.
  • For Polish and Ukrainian, we found an open-source plugin for each that provides a stemmer and stop word list. They both needed some tweaking to handle odd cases, but overall both were successes.
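
As promised in the Chinese item above, here’s a heavily simplified Python sketch of those two steps: mapping Traditional characters to Simplified and then segmenting spaceless text against a dictionary. The character map and lexicon below are tiny hypothetical fragments; the real plugins rely on large dictionaries and statistical models.

```python
# Heavily simplified sketch of the two Chinese steps above: (1) map Traditional
# characters to Simplified, (2) segment spaceless text into words with a dictionary
# (greedy longest match). The character map and lexicon are tiny, hypothetical
# examples; the real plugins use large data files and statistical models.
T2S = {"維": "维", "歡": "欢"}                  # hypothetical fragment of a Traditional→Simplified map
LEXICON = {"维基百科", "维基", "百科", "欢迎"}   # hypothetical fragment of a word list
MAX_WORD_LEN = 4

def to_simplified(text: str) -> str:
    return "".join(T2S.get(ch, ch) for ch in text)

def segment(text: str) -> list[str]:
    """Greedy longest-match segmentation against the lexicon."""
    words, i = [], 0
    while i < len(text):
        for length in range(min(MAX_WORD_LEN, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in LEXICON:
                words.append(candidate)
                i += length
                break
    return words

print(segment(to_simplified("維基百科歡迎你")))
# ['维基百科', '欢迎', '你']
```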

Shared Configs

Some languages come in different varieties. As noted before, the distinction between “closely related languages” and “dialects” is partly historical, political, and cultural. Below are some named language varieties with distinct language codes that share language analysis configuration with another language. How you count these is a philosophical question, so we’ll incorporate them into our numerical range.

  • Egyptian Arabic and Moroccan Arabic use the same configuration as Standard Arabic. Originally they had some extra stop words, but it turned out to be better to use those stop words in Standard Arabic, too. Add two languages/language varieties.
  • Serbo-Croatian—also called Serbo-Croat, Serbo-Croat-Bosnian (SCB), Bosnian-Croatian-Serbian (BCS), and Bosnian-Croatian-Montenegrin-Serbian (BCMS)—is a pluricentric language with four mutually intelligible standard varieties, namely Serbian, Croatian, Bosnian, and Montenegrin. For various historical and cultural reasons, we have Serbian, Croatian, and Bosnian (but no Montenegrin) wikis, as well as Serbo-Croatian wikis. The Serbian and Serbo-Croatian Wikipedias support Latin and Cyrillic, while the Croatian and Bosnian Wikipedias are generally in Latin script. The Bosnian, Croatian, and Serbo-Croatian wikis use the same language analyzer as the Serbian wikis. Add three languages/language varieties.
  • Malay is very closely related to Indonesian—close enough that we can use the Elasticsearch Indonesian analyzer for Malay. (Indonesian is a standardized variety of Malay.) Add another language/language variety.

Moderate Language-Specific Processing

These languages have some significant language-specific(ish) processing that improves search, while still lacking some obvious component (like a stemmer or tokenizer).

  • For Japanese, we currently use the CJK analyzer (described above). This is the bare minimum of custom configuration that might be considered “moderate” support. It also stretches the definition of “language-specific”, since bigram tokenizing—which would be useful for many languages without spaces—isn’t really specific to any language, though the decision to apply it is language-specific.
    • There is a “full” support–level Japanese plugin (Kuromoji) that we tested years ago (and have configured in our code, even), but we decided not to use it because of some problems. We have a long-term plan to re-evaluate Kuromoji (and our ability to customize it for our use cases) and see if we could productively enable it for Japanese.
  • The Khmer writing system is very complex and—for Historical Technological Reasons™—there are lots of ways to write the same word that all look the same, but are underlyingly distinct sequences of characters. We developed a very complex system that normalizes most sequences to a canonical order. The ICU Tokenizer breaks up Khmer text (which doesn’t use spaces between words) into orthographic syllables, which are very often smaller than words. It’s somewhat similar to breaking up Chinese into individual characters—many larger “natural” units are lost, but all of their more easily detected sub-units are indexed for searching.
    • This is probably the maximum level of support that counts as “moderate”. It’s tempting to move it to “full” support, but true full support would require tokenizing the Khmer syllables into Khmer words, which requires a dictionary and more complex processing. On the other hand, our support for the wild variety of ways people can (and do!) write Khmer is one place where we currently outshine the big internet search engines.
  • For Mirandese, we were able to work with a community member to set up elision rules (for word-initial l’, d’, etc., as in some other Romance languages) and translate a Portuguese stop word list.

Romance languages diagram, CC BY-SA 4.0, by El Bux

Minimal Language-Specific Processing

Azerbaijani, Crimean Tatar, Gagauz, Kazakh, and Tatar have the smallest possible amount of language-specific processing. Like Turkish, they use the uppercase/lowercase pairs İ/i and I/ı, so they have the Turkish version of lowercasing configured.

However, Tatar is generally written in Cyrillic (at least on-wiki). Kazakh is also generally in Cyrillic on-wiki, and the switch to using İ/i and I/ı in the Kazakh Latin script was only made in 2021, so maybe we should count that as half?
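
To show why that one little configuration matters, here’s a quick Python illustration. Generic lowercasing merges the dotted and dotless i pairs, which can turn one word into a completely different one; the Turkish-style version keeps them apart. (This is a sketch of the idea, not our actual configuration.)

```python
# Sketch of Turkish-style lowercasing for languages with the İ/i and I/ı pairs.
def turkish_lower(text: str) -> str:
    # Handle the two special pairs before generic lowercasing.
    return text.replace("İ", "i").replace("I", "ı").lower()

print(turkish_lower("DIŞ"))  # 'dış' ("outside"; the dotless ı is preserved)
print("DIŞ".lower())         # 'diş' (generic lowercasing yields a different word, "tooth")
```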

(Un)Intentional Specific Generic Support

Well, there’s a noun phrase you don’t see every day—what does it even mean?

Sometimes a language-specific (or wiki community–specific) issue gets generalized to the point where there’s no trace of the motivating source. Conversely, a generic improvement can have an outsized impact on a specific language, wiki, or community.

For example, the Nias language uses lots of apostrophes, and some of the people in its Wikipedia community are apparently more comfortable composing articles in word processors, with the text then being copied to the Nias Wikipedia. Some word processors like to “smarten” quotes and apostrophes, automatically replacing them with the curly variants. This kind of variation makes searching hard. When I last looked (some time ago) it also resulted in Nias Wikipedia having article titles that only differ by apostrophe curliness—I assume people couldn’t find the one so they created the other. Once we got the Phab ticket, we added some Nias-specific apostrophe normalization that fixed a lot of their problems.

Does Nias-specific apostrophe normalization count as supporting Nias? It might arguably fall into the “minimal” category.

About a year later, we cautiously and deliberately tested similar apostrophe normalization for all wikis, and eventually added it as a default, which removed all Nias-specific config in our code.
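
The idea behind the normalization is simple enough to sketch in a few lines of Python. (Which characters the real normalization folds, and how, is a separate question; this is just the general shape of it.)

```python
# Rough sketch of apostrophe normalization: fold curly and other apostrophe-like
# characters to the plain ASCII apostrophe so that otherwise-identical words match.
# Exactly which characters get folded is beside the point; this is just the idea.
APOSTROPHE_LIKE = {"\u2019", "\u2018", "\u02BC", "\u0060", "\u00B4"}  # ’ ‘ ʼ ` ´

def normalize_apostrophes(text: str) -> str:
    return "".join("'" if ch in APOSTROPHE_LIKE else ch for ch in text)

print(normalize_apostrophes("O’Brien") == "O'Brien")  # True
```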

Does general normalization inspired by a strong need from the Nias Wiki community (but not really inherent to the Nias language) count as supporting Nias? I don’t even know.

Another time, I extended some general normalization upgrades that remove “non-native” diacritics to a bunch of languages. An unexpectedly large benefit showed up in Basque: Basque searchers often leave off the Spanish diacritics on Spanish words, while editors use the correct diacritics in articles, creating a mismatch.
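
The general mechanism looks roughly like the Python sketch below: decompose and drop combining marks. The actual rules are more selective than this, since they only target diacritics considered non-native for a given language.

```python
# Simplified sketch of diacritic folding: decompose to NFD and drop combining marks.
# The real rules are more selective, only targeting diacritics treated as non-native
# for the language in question.
import unicodedata

def strip_diacritics(text: str) -> str:
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_diacritics("canción"))  # 'cancion', so an accentless query still matches
```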

If I hadn’t bothered to do some analysis after going live, I wouldn’t have known about this specific noticeable improvement. On the other hand, if I’d known about the specific problem and there wasn’t a semi-generic solution, I would’ve wanted to implement something Basque-specific to solve it.

Does a general improvement that turns out to strongly benefit Basque count as supporting Basque? I don’t even know! (In practice, this is a slightly philosophical question, since Basque has a stemmer and stopword list, too, so it’s already otherwise on the “full support” list.)

I can’t think of any other language-specific cases that generalized so well—though Nias wasn’t the first or only case of apostrophe-like characters needing to be normalized.

Of course, general changes that were especially helpful to a particular language are easy to miss, if you don’t go looking for them. Even if you do, they can be subtle. The Basque case was much easier for me, personally, to notice, because I don’t speak Basque, but I know a little Spanish, so the Spanish words really stood out as such when looking at the data.

Vague Categorical Support

It’s easy enough to say that the CJK analyzer supports Japanese (where we are currently using it) and that it would be supporting Chinese and Korean if we were using it for those languages—in small part because it has limited scope, and in large part because the name makes it seem specific to Chinese, Japanese, and Korean.

But what about a configuration that is not super specific, but still applied to a subset of languages?

Back in the day, we identified that “spaceless languages” (those whose writing system doesn’t put spaces between words) could benefit from (or be harmed by) specific configurations.

We identified the following languages as “spaceless”. We initially passed on enabling an alternate ranking algorithm (BM25) for them (Phab T152092), but we did deploy the ICU tokenizer for them by default.

  • Tibetan, Dzongkha, Gan, Japanese, Khmer, Lao, Burmese, Thai, Wu, Chinese, Classical Chinese, Cantonese, Buginese, Min Dong, Cree, Hakka, Javanese, and Min Nan.

Fourteen of those are new to our running list (Chinese, Japanese, Khmer, and Thai already appeared above).

We eventually did enable BM25 for them, but this list has often gotten special consideration and testing to make sure we don’t unexpectedly do bad things to them when we make changes that seem fine for languages with clearer word boundaries (like Phab T266027).

And what about the case where the “category” we are trying to support is “more or less all of them”? Our recent efforts at cross-wiki “harmonization”—making all language processing that is not language-specific as close to the same as possible on all wikis (see Phab T219550)—were a rising language tide that lifted all/most/many language boats. (An easy-to-understand example is acronym processing, so that NASA and N.A.S.A. can match more easily. However, some languages—because of their writing systems—have few if any native acronyms. Foreign acronyms (like N.A.S.A.) still show up, though.)
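
For a flavor of the acronym example, here’s a rough Python sketch. It is illustrative only; the real handling has to worry about a lot more edge cases.

```python
# Rough sketch of acronym folding: collapse period-separated single letters into
# one token so that N.A.S.A. and NASA can match. Illustrative only; real handling
# needs to worry about more edge cases (such as abbreviations like "e.g." itself).
import re

ACRONYM = re.compile(r"\b(?:\w\.){2,}")  # two or more letter-plus-period pairs

def fold_acronyms(text: str) -> str:
    return ACRONYM.sub(lambda m: m.group(0).replace(".", ""), text)

print(fold_acronyms("N.A.S.A. launched a probe"))  # 'NASA launched a probe'
```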

Family tree of the Indo-European languages, CC BY-SA 4.0, by EnriBrahimaj

Beyond Language Analysis

So far we’ve focused on the most obviously languagey part of language support in Search, which is language analysis. However, there are other parts of our system that support particular wikis in a language-specific way.

Learning to Rank

Learning to Rank (LTR) is a plugin that uses machine learning—based on textual properties and user behavior data—to re-rank search results to move better results higher in the result list.

It makes use of many ranking signals, including making wiki-specific interpretations of textual properties—like word frequency stats, the number of words in a query or document, the distribution of matching terms, etc.
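
As a generic illustration of the re-ranking idea (and only that: the features, weights, and linear model below are made up, and are not the actual plugin’s feature set or model type), here’s a small Python sketch:

```python
# Generic sketch of learning-to-rank style re-ranking (not the actual plugin,
# features, or model type): each result gets a feature vector, a trained model
# scores it, and results are re-sorted by that score. Features and weights here
# are made up for illustration.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    features: dict[str, float]  # e.g., text-match scores, popularity signals

def score(result: Result, weights: dict[str, float]) -> float:
    """Linear model standing in for whatever the trained ranker actually learned."""
    return sum(weights.get(name, 0.0) * value for name, value in result.features.items())

weights = {"title_match": 2.0, "body_match": 1.0, "popularity": 0.5}  # hypothetical
results = [
    Result("A", {"title_match": 0.1, "body_match": 0.9, "popularity": 0.2}),
    Result("B", {"title_match": 0.8, "body_match": 0.4, "popularity": 0.1}),
]
reranked = sorted(results, key=lambda r: score(r, weights), reverse=True)
print([r.title for r in reranked])  # ['B', 'A']
```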

Arguably some of what the model learns is language-specific. Some is probably wiki-specific (say, because Wikipedia titles are organized differently than Wikisource titles), and some may be community-specific (say, searchers search differently on Wikipedia than they do on Wiktionary).

The results are as good as or better than our previously hand-tuned ranking, and the models are regularly retrained, allowing them to keep up with changes in the way searchers behave in those languages on those wikis.

Does that count as minimal language-specific support? Maybe? Probably?

We tested the LTR plugin on 18 wikis:

  • Arabic, Chinese, Dutch, Finnish, French, German, Hebrew, Indonesian, Italian, Japanese, Korean, Norwegian, Persian, Polish, Portuguese, Russian, Swedish, and Vietnamese.

One of those, Vietnamese, is new to the list.

Cross-Language Searching

Years ago, we worked on a project to do language detection on queries that got very few or no results, to see if we could provide results from another wiki. The process was complicated, so we only deployed it to nine of the largest (by search volume) Wikipedias:

  • Dutch, English, French, German, Italian, Japanese, Portuguese, Spanish, and Russian.

Those are all covered by language analyzers above. However, for each of those wikis, we limited the specific languages that could be identified by the language-ID tool (called TextCat), to maximize accuracy and relevance.
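
To give a flavor of what “limiting the languages” means in practice, here’s a toy Python sketch of language identification restricted to a per-wiki candidate list. TextCat itself compares ranked character n-gram profiles built from real data; the sample sentences and simple overlap score below are stand-ins.

```python
# Toy sketch of language identification restricted to a per-wiki candidate list.
# Real TextCat compares ranked character n-gram profiles; the tiny "training"
# sentences and simple overlap score here are illustrative stand-ins.
from collections import Counter

def char_ngrams(text: str, n: int = 3) -> Counter:
    padded = f" {text.lower()} "
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

# Hypothetical miniature profiles; real ones are built from much more data.
PROFILES = {
    "en": char_ngrams("the quick brown fox jumps over the lazy dog"),
    "fr": char_ngrams("le petit renard brun saute par dessus le chien paresseux"),
    "de": char_ngrams("der schnelle braune fuchs springt über den faulen hund"),
}

def identify(query: str, allowed: list[str]) -> str:
    """Pick the allowed language whose profile overlaps the query's n-grams most."""
    grams = char_ngrams(query)
    def overlap(lang: str) -> int:
        return sum(min(count, PROFILES[lang][g]) for g, count in grams.items())
    return max(allowed, key=overlap)

# A hypothetical wiki might only allow the languages most useful to its searchers:
print(identify("le renard brun", allowed=["en", "fr", "de"]))  # 'fr'
```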

The specific languages allowed to be identified per wiki are listed in a table in a write-up about the project.

The consolidated list of those languages is:

  • Afrikaans, Arabic, Armenian, Bengali, Breton, Burmese, Chinese, Croatian, Czech, Danish, Dutch, English, Finnish, French, Georgian, German, Greek, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Korean, Latin, Latvian, Norwegian, Persian, Polish, Portuguese, Russian, Spanish, Swedish, Tagalog, Telugu, Thai, Ukrainian, Urdu, and Vietnamese.

Nine of those are not covered by the language analyzers above, nor by the LTR plugin: Afrikaans, Breton, Burmese, Georgian, Icelandic, Latin, Tagalog, Telugu, and Urdu. (Vietnamese also has no language analyzer, but it is covered by both Learning to Rank and TextCat.)

Does sending queries from the largest wikis to other wikis count as some sort of minimal support? Maybe. Arguably. Perhaps.

Conclusions?

What, if any, specific conclusions can we draw? Let’s add up the running tally of languages and language varieties.

We have good to great support (“moderate” or “full”) for 44 inarguably distinct languages, though it’s very reasonable to claim 51 named language varieties.

We do try to make improvements that apply to all or almost all 300+ languages (like acronym handling) or that help all wikis (like very basic parsing for East Asian languages on any wiki), but that doesn’t quite feel like “language support”… more like “search support”.

My final conclusion: I don’t even know.

(This blog post is a snapshot from July 2024. If you are from the future, there may be updated details on MediaWiki.org.)