The AI tool quickly became good at Swedish

For the past two years, Joakim Nivre has been involved in developing language models based on Swedish texts. Photo: Daniel Olsson

Joakim Nivre, a Professor of Computational Linguistics, explains how language models like ChatGPT became so proficient in Swedish. The models are trained by predicting the next word in a text and receiving feedback on their accuracy. Over time, they learn the language and pick up knowledge about various topics; the more data they are trained on, the more knowledge they acquire.

Compared to the Swedish language models developed by researchers in Sweden, ChatGPT and its successor GPT-4 have demonstrated superior language capabilities, providing relevant answers to questions. Yet there is still a need for Swedish language models in specific domains, such as healthcare, where sensitive data cannot be shared with external services like ChatGPT. The challenge lies in developing language models that can run in closed systems while still requiring significant computing power. Nivre also suggests the possibility of a centralized IT infrastructure that would let municipalities and regions use language models and other AI technologies, such as interpreting medical images.
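The training objective described above, predicting the next word and checking the prediction against the actual text, can be illustrated with a deliberately tiny sketch. This is not how ChatGPT is actually built (real models use neural networks and enormous corpora); it is only a toy bigram counter, with an invented example corpus, that shows the predict-and-score idea:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count how often each word
# follows another in a tiny corpus, then predict the most frequent follower.
# (Real language models use neural networks trained on vastly more data;
# this only sketches the training objective described in the article.)

corpus = "the model reads the text and the model predicts the next word".split()

follower_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follower_counts[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# "Feedback on accuracy": compare predictions against the actual next words.
correct = sum(predict_next(cur) == nxt for cur, nxt in zip(corpus, corpus[1:]))
print(f"{correct}/{len(corpus) - 1} next words predicted correctly")
```

The same loop, scaled up from word counts to billions of neural-network parameters and from one sentence to large swathes of the internet, is what lets a model absorb both a language and knowledge about the world.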