
    Big Tech’s Reckless Release of AI Companion Products Sacrifices Safety for Profit

    7 hours ago

According to Counterfeit Companionship, a new report from Public Citizen, Big Tech companies are pushing experimental, unsafe AI companions onto the American public, putting millions at risk of emotional and physical harm.

The AI companion and companion-like products investigated in the report are large language models (LLMs) that use generative AI to emulate close friendships, romantic relationships, and other social interactions. Millions of Americans report using these products, and over half of teens report using them regularly. While evidence of their effectiveness at alleviating loneliness has been mixed, tragic outcomes, including multiple deaths by suicide, have become increasingly common.

“Recklessly pushing unsafe, experimental AI products on vulnerable people is predatory corporate behavior at its worst,” said Rick Claypool, a Public Citizen research director and author of the report. “Many people struggle with feelings of loneliness and isolation, but the inherent limitations of AI products make them a poor and dangerous substitute for genuine human relationships. Big Tech’s failure to prioritize anything beyond big profits comes at the expense of us all. We need urgent action to get dangerous products off the market — particularly for children, who are especially vulnerable to emotional manipulation by AI products.”

Top findings from the report include:

- Eleven suicide deaths have been attributed to AI companion products so far, with victims ranging in age from 13 to 56. Numerous instances of delusions and other harmful behavior involving these products have also been widely reported.
- Some of the biggest Big Tech corporations, including Google, Meta, OpenAI, and xAI, have been involved in developing and deploying AI companion products and, too often, have prioritized short-term profits and competition for market share over user safety.
- Policy solutions to protect vulnerable users are gaining momentum. A Public Citizen model bill would protect children and adolescents from becoming emotionally attached to these products by prohibiting access for users under 18.
