• Azzu@lemmy.dbzer0.com · 6 points · 2 hours ago

    Of course the mental health team is bleeding talent. It probably consisted, at least initially, of people who actually care about mental health, and they gradually figured out that no matter what they do or try, the technology they work for can only ever be a net negative on mental health. I would also wash my hands of it as fast as possible and go back to actually contributing to positive mental health.

  • thejml@sh.itjust.works · 19 points · 16 hours ago

    I mean, we did train it with data from the internet and books and history and everything else we could throw at it… This is like Leeloo in The Fifth Element learning the entire language and discovering “War”. If it really were AGI, there’s no way it could be forced to consume all of that and come away “fine”.

  • frustrated_phagocytosis@fedia.io · 32 points · 19 hours ago

    This can only be improved by their upcoming introduction of ads. Imagine it not only giving advice on committing suicide, but also recommending sponsored guns, pills, or other tools on behalf of their advertisers!

  • LostWanderer@fedia.io · 38 up, 3 down · 19 hours ago

    Who would’ve thought?! They designed their artificially incompetent creations as complaisant bundles of algorithms built to maximize engagement from vulnerable users. “AI” validates anything it is told and doesn’t actually get users real human assistance when they have a mental health crisis. These tools can easily be prompted into divulging suicide methods, and they deliberately isolate vulnerable people in order to maintain engagement. Until we regulate the fuck out of companies like OpenAI and the research-and-development process behind “AI”, this is a problem that more and more people will experience.