• mcv@lemmy.zip · 7 points · 15 hours ago

    They’re systems trained to give plausible answers, not correct ones. Correct answers are usually plausible, of course, but so are many wrong ones, and on sufficiently complex topics you need real expertise to tell when they’re wrong.

    I’ve been programming a lot with AI lately, and I’d say the error rate for moderately complex code is about 50%. They’re great at simple boilerplate, configuration, and the stuff almost every project uses, but if you’re trying to do something actually new, they’re nearly useless. If you’re not careful, you can lose a lot of time going down a wrong path.

    Never ever trust them. Always verify.
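
    For what it’s worth, “always verify” for me mostly means pinning the AI’s output with a few cheap tests before trusting it. A minimal Python sketch of that habit (slugify here is a made-up example helper, not from any real project):

        import re
        import unittest


        def slugify(title: str) -> str:
            """AI-suggested helper: turn a title into a URL slug."""
            slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
            return slug.strip("-")


        class TestSlugify(unittest.TestCase):
            def test_basic(self):
                self.assertEqual(slugify("Hello, World!"), "hello-world")

            def test_edge_cases(self):
                # Edge cases are where plausible-but-wrong code usually breaks.
                self.assertEqual(slugify(""), "")
                self.assertEqual(slugify("---"), "")
                self.assertEqual(slugify("  Spaces  everywhere "), "spaces-everywhere")


        if __name__ == "__main__":
            unittest.main()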

    • BanMe@lemmy.world · 1 point · 2 hours ago

      I’m not one to stump for AI, but 2-3 years ago we would have said AI struggled to kick out a working PowerShell script, and now the error rate for complex scripts is maybe 5%. The tech sped up very fast, and these models are now getting runtime environments to test the code they write, plus memories and project libraries. The tech will continue to improve. In 2026 or 2028, will we still be saying AI can’t really handle coding or take people’s jobs? Quite a bit less. In 2030, less still.

      There is a point beyond which no further refinements can be made, but just looking backward a bit, I don’t think we’re there yet.

      Just in the past few months, I’d say Claude has gotten good enough to let us downsize our team from 3.5 to 2.5, but thankfully no one is interested in doing that.