A recent investigation has uncovered limitations in ChatGPT’s ability to provide localized information on environmental justice issues. Virginia Tech, a U.S. university, released a report outlining these biases, showing that ChatGPT’s responses vary across counties.
According to the study, ChatGPT struggles to provide location-specific information on environmental justice issues. The study also found that such information is more readily available to residents of larger, densely populated states.
“In states with substantial urban populations like Delaware or California, less than 1 percent of the population resides in counties lacking specific information,” the report noted. Meanwhile, regions with smaller populations face a significant information gap.
“For rural states such as Idaho and New Hampshire, over 90 percent of the population resides in counties without access to local-specific information,” the report highlighted.
Kim, a lecturer in Virginia Tech’s Department of Geography, emphasized the need for further research as these biases come to light. “While more investigation is warranted, our findings indicate the existence of geographic biases within the ChatGPT model,” Kim stated.
The research paper also featured a map illustrating the extent of the U.S. population without access to location-specific information on environmental justice issues.
This development follows recent reports of potential political biases exhibited by ChatGPT. On August 25, researchers from the United Kingdom and Brazil published a study indicating that large language models (LLMs) like ChatGPT can produce text containing errors and inaccuracies that may mislead readers and reinforce the political biases found in traditional media.