In an article for the BBC, Chris Vallance writes about OpenAI, the artificial intelligence research firm co-founded by Elon Musk, which has unveiled a new chatbot called ChatGPT. Within just one week of its public release, the chatbot attracted over one million users.
But while the development of ChatGPT is a remarkable technological achievement, OpenAI is quick to acknowledge that the chatbot can produce problematic responses and exhibit biased behaviour. The company has stated that it is "eager to collect user feedback to aid our ongoing work to improve this system."
The latest in a series of AIs that OpenAI refers to as GPTs, or Generative Pre-trained Transformers, ChatGPT was developed through conversations with human trainers who fine-tuned an early version of the system.
Impressive Results, But Potential Risks
Despite its impressive initial results, there are still risks involved in using such technology. OpenAI's warning about problematic responses and biased behaviour is a clear indicator that chatbots like ChatGPT can make mistakes and even promote problematic viewpoints. The technology can also be misused, for example to do homework or write reports on a student's behalf.
If you are interested in analysing texts, go to GPT-Detective. If you are interested in sentence-level analysis of GPT use, go to Document-Detective.