How do we ensure responsible technology?
Professor Brit Ross Winthereik's field of research is the interplay between humans and digital technologies. Photo: Mikal Schlosser

People react strongly to ChatGPT for several reasons. One reason is that it interacts with users differently from traditional search engines, providing personalized responses that may be mistaken for reasoning. Another source of controversy is that ChatGPT utilizes data from the public domain, data that was never intended for corporate profit-making, raising concerns about the exploitation of publicly available information.
Furthermore, ChatGPT's collection of user data differs from that of traditional search engines because of the unique way it interacts with users, fostering a sense of attachment to the ideas or thoughts shared. There are also concerns about bias in the system: machine learning-based software tends to amplify whatever is prevalent in the data it is trained on, which can include racism and misogyny.
Professor Brit Ross Winthereik encourages approaching new technology with open and analytical minds to assess its responsible use. Responsible technology is defined by how it aligns with its intended goals, its actual effects in practice, and its adherence to societal values. Monitoring specific effects is crucial to identifying any inappropriate consequences and ensuring that technologies deliver on their promises.
Assessing a technology's responsibility involves contextual analysis: examining the broader infrastructures it depends on, its business models, and the cultural values it embeds. It requires analyzing the technology's effects, its limitations, and who it may exclude. Alignment with organizational, sectoral, or national values is essential, and on that basis decisions can be made to promote, restrict, or reject a technology.
Determining whether a technology is necessary involves considering its impact on human development and whether it aligns with desired goals. In education, for example, a responsible approach to chatbots may involve drawing boundaries and collecting experiences through dialogue with teachers and students. It is also essential to foster a basic understanding of technology, highlighting that it carries inherent preferences, values, standards, policies, and history.
Regulation is seen as necessary to ensure responsible technology use, particularly in areas like privacy protection. The regulation of Big Tech companies and their data practices is challenging due to their business models, but efforts such as the EU's General Data Protection Regulation (GDPR) aim to address these concerns. Further research is needed to understand the interaction between humans and automated systems, enabling informed decision-making at the societal level and safeguarding democratic institutions and trust in authorities.