Poor facsimile: The problem with chatbot conversations with historical figures

WHAT HAPPENED: A recent article in the Washington Post featured a journalist "interviewing" an AI chatbot posing as Harriet Tubman, anthropomorphizing the bot and faulting it for not engaging with critical race theory (CRT). The piece drew backlash from the community, who were (rightfully) outraged by its poor taste, especially given the wealth of good literature available about this important historical figure.

FOGGY VISION: It is important to recognize that AI systems often provide a poor representation of a person's true identity. Like a blurry JPEG, the imitation lacks depth and accuracy. AI systems are also limited by the information that has been published and captured in their training datasets: their responses can only be as accurate as the data they have been trained on. Capturing the relevant tone and authentic views of the person being represented requires extensive, detailed data.

EROSION OF THINKING: While accuracy is an important measure, relying solely on Q&A with an AI chatbot version of an article can lead to a decline in critical reading skills. Instead of actively engaging with the text to develop their own understanding and place it within the context of other literature and references, individuals may simply rely on the chatbot for answers.

NOT HUMAN: Additionally, anthropomorphizing AI systems can exacerbate ethical issues. Referring to a bot as "she" or "her" creates a false sense of human-like interaction, blurring the line between technology and humanity and raising concerns about the appropriate and ethical use of AI.

It is crucial for media outlets and society as a whole to critically examine the ethics of AI and consider its limitations, potential impact on critical thinking, and the importance of preserving a clear distinction between human and artificial intelligence.


SHOULD WE DO IT?: First and foremost, conducting such interactions ethically requires a comprehensive approach: meticulously defining the inputs and carefully fine-tuning the AI model. Additionally, adopting a retrieval-augmented approach can help keep the conversations within historically accurate boundaries (see the sketch below). Combining these measures can promote responsible and accurate engagement with the past.
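To make the retrieval-augmented idea concrete, here is a minimal sketch of how such grounding might work. Everything in it is an assumption for illustration: the tiny corpus, the lexical-overlap retriever, and the `call_llm` stub stand in for a historian-curated archive of primary sources, an embedding-based retriever, and whichever model API is actually used.

```python
# Minimal sketch of retrieval-augmented grounding for a
# historical-figure chatbot. Illustrative only: a real system would
# use a vetted archive and an embedding-based retriever.
from collections import Counter

# Stand-in corpus of documented, citable passages (the facts below are
# well documented; the citations are placeholders).
CORPUS = [
    "Tubman escaped slavery in 1849 and returned repeatedly to guide "
    "others to freedom along the Underground Railroad. [Source A]",
    "During the Civil War, Tubman served the Union Army as a scout, "
    "spy, and nurse. [Source B]",
]

def overlap_score(query: str, passage: str) -> int:
    """Crude lexical-overlap score; embeddings would replace this."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most relevant to the question."""
    return sorted(CORPUS, key=lambda s: overlap_score(question, s),
                  reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Constrain the model to answer only from retrieved sources."""
    sources = "\n".join(f"- {s}" for s in retrieve(question))
    return (
        "Answer using ONLY the sources below, citing them. If they do "
        "not cover the question, say the historical record is unclear.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whichever model API is used."""
    raise NotImplementedError

if __name__ == "__main__":
    print(build_prompt("What did Tubman do during the Civil War?"))
```

The design point is the refusal path: when retrieval surfaces nothing relevant, the prompt directs the model to say the record is unclear rather than improvise, which is exactly the failure mode the Tubman interview exposed.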

GO BEYOND: Furthermore, it is crucial to collaborate with communities and stakeholders who have deep knowledge of the historical figure in question. These individuals can offer invaluable insights and help represent the perspectives of someone who is no longer alive. A complete representation of someone else's views may never be achievable, but working closely with knowledgeable individuals can help mitigate the kind of backlash that the article provoked.

SEEKING CONSENT: In addition, it is vital to seek consent from relatives and others who hold authority over the portrayal of the historical figure. Obtaining their permission and involving them in the interview process makes for a more respectful and considerate engagement with AI-generated conversations, acknowledging the significance of consent and striving for a more inclusive and accurate representation.

MAKE EDUCATIONAL METHODS BETTER: Ultimately, we must critically evaluate the purpose and value of interacting with a simulated version of a historical figure, and ask whether the benefits truly outweigh other ways of enhancing the accessibility of historical texts. For instance, investing in improved in-class instruction can make existing texts more accessible and foster deeper understanding among students. This consideration helps us assess the ethical implications and potential societal impact of using AI in historical conversations.

The increasing popularity of AI doesn't make it right or ethical. One concern I have is the potential erosion of deep, critical engagement with historical texts and literature as a means of building knowledge.


WHAT WILL WE DO?: As AI continues to automate various tasks, including knowledge generation and critical thinking, there is a risk of abandoning the last bastion of our distinctive advantage as humans: the ability to assimilate, synthesize, and create new ideas. If we rely entirely on AI to perform these cognitive processes, we risk relinquishing our pursuit of creating value for ourselves and society.

While AI-assisted approaches can undoubtedly be helpful tools and means to an end, it is crucial that they do not become the sole method of inquiry. In a classroom setting, where young minds are impressionable, we must be mindful of the capabilities and limits of AI tools; using them without an understanding of their boundaries and biases may undermine education and impede critical thinking.

Abhishek Gupta

Founder and Principal Researcher, Montreal AI Ethics Institute

Director, Responsible AI, Boston Consulting Group (BCG)

Fellow, Augmented Collective Intelligence, BCG Henderson Institute

Chair, Standards Working Group, Green Software Foundation

Author, AI Ethics Brief and State of AI Ethics Report

https://www.linkedin.com/in/abhishekguptamcgill/