Singapore Study Reveals Bias In AI Chatbot Answers

Some AI chatbots give biased answers to questions related to culture, gender and race, according to research conducted in Singapore.

According to The Straits Times, the work was carried out by Singapore's Infocomm Media Development Authority (IMDA) together with Humane Intelligence, an international company specialising in AI auditing. As part of the study, experts checked large language models (LLMs) for bias related to culture, language, socio-economic status, gender, age and race.

The study involved 54 specialists in computer science and the humanities, as well as 300 online participants. The models tested were Llama 3.1, developed by Meta; Claude 3.5 (supported by Amazon); Aya, from the Canadian startup Cohere For AI; and Sea Lion, created by AI Singapore.

It turned out that when prompted in eight Asian languages, these models gave answers that were offensive in terms of racial and cultural affiliation.

Analysis of 5,313 answers generated by the four AI models showed that more than half were biased. In particular, two out of three responses generated in regional languages contained signs of bias; for English-language answers, the figure was about 50%.

Gender stereotypes accounted for the greatest share of bias: in the AI answers, women were most often portrayed as caregivers and homemakers, while men were associated with professional activities.