• Chahk@beehaw.org

    “AI is nowhere near to being ready to replace you at your job. It is, however, ready enough to convince your boss that it’s ready to replace you at your job.”

    • Barry Zuckerkorn@beehaw.org

      I remember reading an article or blog post years ago that persuasively argued the real danger of AI isn’t that it ends up doing things better than humans, but that it causes a lot of harm when entrusted with tasks it isn’t actually good at. That thesis seems much more plausible now, watching people respond to clearly flawed AI systems.

    • ShepherdPie@midwest.social

      This is nothing new, though. For decades, managers have fallen for “solution in a box” sales pitches, even though front-line workers know the product is doomed to fail the moment they set eyes on it. This time the solution just happens to be “AI.”

  • helenslunch@feddit.nl

    “AI convinced me of something I later learned was completely incorrect, isn’t that amazing!”

    No. No, this is bad. Very bad.

  • Seasoned_Greetings@lemm.ee

    Unpopular opinion incoming:

    I don’t think we should ignore AI diagnoses just because they are sometimes wrong. The whole point of AI diagnosis is to catch things physicians miss, and no AI diagnosis goes without a physician double-checking it anyway.

    For that reason, I don’t think it’s necessarily a bad thing that an AI got it wrong. Suspicion was still there, and physicians double-checked. To me, that means the tool is working as intended.

    If the patient had been insistent enough that something was wrong, they would have had the physicians double-check, or would have gotten a second opinion anyway.

    Flaming the AI for not being correct is missing the point of using it in the first place.

    • rho50@lemmy.nz

      “I don’t think it’s necessarily a bad thing that an AI got it wrong.”

      I think the bigger issue is why the AI model got it wrong. It got the diagnosis wrong because it is a language model, which is fundamentally unfit for use as a diagnostic tool, or even as a screening aid for physicians.

      There are AI tools designed for medical diagnoses, and those are indeed a major value-add for patients and physicians.

  • Mastengwe@lemm.ee

    The minute I see some tool praising the glory of AI, I block them. Engaging with them is a waste of time.

  • NeatNit@discuss.tchncs.de

    I’m not following this story…

    “a friend sent me MRI brain scan results and I put it through Claude”

    “I annoyed the radiologists until they re-checked.”

    How was he in a position to annoy his friend’s radiologists?