• Bloefz@lemmy.world · 5 days ago

    I work with AI and use it personally, but I run local models on my own servers, which solves a lot of the privacy concerns. Inaccuracy is another problem, but not a big one for me: I know about it and simply fact-check. Besides, I don’t really use it for knowledge anyway, just to filter news to my interests, help with summaries and translations, etc.

    People use AI as some all-knowing oracle, but an LLM isn’t meant for that at all.

    • Infynis@midwest.social · 5 days ago

      There are definitely things AI is good for. Archival search is obviously the biggest, because that’s what we’ve been using it for, for decades. It can also be helpful for subterranean and medical imaging, and for art restoration. But the companies selling it want to sell a Magic 8 Ball with ads.

    • Clanket@lemmy.world · 5 days ago

      How do you know it’s doing any of this correctly, especially filtering and translations?

      • Bloefz@lemmy.world · 5 days ago

        I mainly use it for Spanish, which I have basic proficiency in; it just accompanies me on my learning journey. It may be wrong sometimes, but not often. Like the other reply said, LLMs are good at languages. That’s what they were originally designed for, until people found out they could do more (though not quite as well).

        As for filtering, I just use it as a news feed sanitizer with a whole bunch of rules. It will miss things sometimes, but my ruleset isn’t perfect either. I often come across the unfiltered sources anyway, and even if it misses something, it’s only news — nothing really important to me.
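        The commenter doesn’t share their actual setup, but a minimal sketch of such a rule-driven feed filter might look like this. The rules, the model name, and the local Ollama-style endpoint are all assumptions for illustration, not details from the thread:

```python
import json
import urllib.request

# Hypothetical filtering rules -- the commenter's real ruleset is not shown.
RULES = [
    "Skip celebrity gossip and sports results.",
    "Keep anything about self-hosting, privacy, or open-source software.",
]

def build_prompt(headline: str) -> str:
    """Combine the rules and one headline into a yes/no classification prompt."""
    rules = "\n".join(f"- {r}" for r in RULES)
    return (
        "You filter a news feed according to these rules:\n"
        f"{rules}\n"
        f"Headline: {headline}\n"
        "Answer KEEP or SKIP only."
    )

def ask_local_model(prompt: str,
                    url: str = "http://localhost:11434/api/generate") -> str:
    """Query a locally hosted model (an Ollama-style REST API is assumed here)."""
    payload = json.dumps(
        {"model": "llama3", "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def filter_feed(headlines, ask=ask_local_model):
    """Return only the headlines the model marks KEEP.

    The `ask` callable is injectable so the filter can be tested
    without a running model server.
    """
    return [h for h in headlines if "KEEP" in ask(build_prompt(h)).upper()]
```

        Because the model call is just an injectable function, the same filter works against any local endpoint, and a miss only costs you a news item — which matches the commenter’s point that the stakes here are low.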

        • porcoesphino@mander.xyz · 5 days ago

          It’s funny, I had half been avoiding it for languages. I had lots of foreign friends who often lived together in shared houses, and those houses would almost develop their own creole. They had come to learn English, but they were reinforcing each other’s mistakes: everything was mutually intelligible, so the errors got reinforced rather than caught. I suspect LLMs would be great at doing the same thing to people. Their main use case along these lines seems to be practicing at a slightly higher level than your own, so some of those errors would be hard to catch, and really easy to accept as correct instead of validating.

      • MagicShel@lemmy.zip · 5 days ago

        Not OP, but…

        It’s not always perfect, but it’s good for getting a tldr to see if maybe something is worth reading further. As for translations, it’s something AI is rather decent at. And if I go from understanding 0% to 95%, really only missing some cultural context about why a certain phrase might mean something different from face value, that’s a win.

        You can do a lot with AI where the cost of it not being exactly right is essentially zero. Plus, it’s not like humans have a great track record for accuracy, come to think of it. It comes down to being skeptical about it like you would any other source.

        • porcoesphino@mander.xyz · 5 days ago

          Strongly disagree with the TLDR thing.

          At least, the iPhone notification summaries were bad enough that I eventually turned them off (though I periodically check them). And while I was working at Google, you couldn’t really turn off the genAI summaries of internal things (which evangelists kept adding everywhere), and I rarely found them useful. Well, they’re useful if the conversation is really bland, but then the conversation should usually be in some thread elsewhere. If there was something important, I don’t think the genAI systems were very good at highlighting it.

          • ctrl_alt_esc@lemmy.ml · 5 days ago

            Completely agree, those summaries are incredibly bad. I was recently looking for some information in Gemini meeting notes and just couldn’t find it, even though I was sure it had been talked about. Then I read the transcript itself and realised that the artificial unintelligence had simply left out all of the most important bits.