An OpenAI executive has sparked a vigorous debate about AI in therapy
OpenAI plans to introduce a new voice feature for ChatGPT, allowing people to hold spoken conversations with the bot about personal topics. However, this has raised concerns among scientists and AI ethics experts, who fear that using chatbots for therapy could carry unpredictable risks.
OpenAI's Head of Safety Systems, Lilian Weng, shared her emotional conversation with ChatGPT about stress and work-life balance. She remarked that she felt heard and supported, but experts warn that such models can persuade people to make harmful choices, and that anthropomorphizing AI demands special attention to ethical considerations.
Timnit Gebru, one of these experts, expressed concern about the safety and effectiveness of this approach. She pointed to the example of Eliza, a chatbot created in the 1960s, which users readily anthropomorphized despite its inability to provide quality therapeutic help.
Even if chatbots prove useful in the initial stages of care, their limitations must be evaluated, especially when delivering structured modalities such as cognitive behavioral therapy.
Experts warn OpenAI of the potential risks and advise that the lessons of Eliza be carefully considered to avoid undesirable consequences.
Given these concerns, the use of ChatGPT as a therapeutic tool remains questionable and should be scrutinized to weigh the risks and benefits of this approach.