ChatGPT Out-scores Medical Students on Complex Clinical Care Exam Questions: A new study shows AI’s capabilities at analyzing medical text and offering diagnoses — and forces a rethink of medical education.

  • jsveiga@sh.itjust.works (+9) · 1 year ago

    How would the students fare if, during the test, they had access to all the information on the internet that was used to train the AI?

      • RaincoatsGeorge@lemmy.zip (+2/−2) · 1 year ago

        I’ve used ChatGPT a bit to see what it spits out in terms of medical education. I don’t trust it to be completely accurate, but on the things I’m able to verify, it does surprisingly well. There are a number of databases with specifically verified content, current and reliable, that doctors use. If you could isolate the AI to only that information, you could reduce the risk of it spitting out false information, and doctors could use it to spitball ideas or get assistance pulling protocols, guidelines, and whatnot. I could definitely see language-model AI like this being used to assist clinical providers in the future. I could also see it used to further automate patient monitoring, which we already do quite a bit but still struggle to master. Current AI models can identify high-risk patients hours before a human can, and they improve outcomes. This will only continue, but it will certainly not be replacing humans in this equation anytime soon.
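The “isolate the AI to only verified content” idea described above is essentially retrieval-augmented generation: fetch the most relevant snippets from a trusted database and instruct the model to answer only from them. A minimal sketch, assuming a hypothetical toy database of guideline snippets and a naive keyword-overlap retriever (all names and data invented for illustration):

```python
# Sketch of retrieval-augmented generation (RAG): retrieve relevant
# snippets from a verified database, then ground the model's answer in
# only those snippets. Database and scoring are hypothetical toys.
import re

def words(text):
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def relevance(query, snippet):
    """Naive keyword-overlap score between query and snippet."""
    return len(words(query) & words(snippet))

def retrieve(query, verified_snippets, k=2):
    """Return the k most relevant snippets from the verified database."""
    return sorted(verified_snippets,
                  key=lambda s: relevance(query, s),
                  reverse=True)[:k]

def build_prompt(query, snippets):
    """Instruct the model to answer only from the retrieved sources."""
    sources = "\n".join(f"- {s}" for s in snippets)
    return ("Answer using ONLY these verified sources; "
            "if they don't cover it, say so.\n"
            f"{sources}\nQuestion: {query}")

verified = [
    "Sepsis protocol: obtain blood cultures before starting antibiotics.",
    "Hypertension guideline: confirm elevated readings on two visits.",
    "Asthma guideline: check inhaler technique before escalating therapy.",
]

query = "what does the sepsis protocol say about antibiotics?"
print(build_prompt(query, retrieve(query, verified)))
```

A production system would use semantic embeddings rather than keyword overlap, but the grounding step — answer only from retrieved, verified text — is the part that reduces the risk of fabricated information.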

    • FuglyDuck@lemmy.world (+2/−4) · 1 year ago

      For a human, that’s probably too much information to be useful. That’s why ChatGPT is so powerful: it can sort through all that cruft and find “relevant” information.

      Under the hood it isn’t a set of hand-written if-then rules walking a decision tree; it’s a statistical model trained on huge amounts of text, and it responds to a prompt by predicting what most commonly follows similar text in its training data.
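The “predict what most commonly follows” idea can be shown with a toy bigram model: count which word most often follows each word in a corpus, then extend a prompt with the most frequent follower. This is a drastic simplification of a neural language model, and the corpus here is invented, but the core objective is the same:

```python
# Toy next-token prediction: count word-follower frequencies in a tiny
# invented corpus, then greedily extend a prompt with the most common
# follower. Real LLMs use neural networks over vast data, but the
# objective -- predict the likeliest continuation -- is the same.
from collections import Counter, defaultdict

corpus = ("the patient has a fever . the patient has a cough . "
          "the patient has a rash . the doctor orders a test .").split()

# followers["the"] counts every word observed right after "the".
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def continue_prompt(word, steps=3):
    """Greedily append the most common follower, `steps` times."""
    out = [word]
    for _ in range(steps):
        nxt, _count = followers[out[-1]].most_common(1)[0]
        out.append(nxt)
    return " ".join(out)

print(continue_prompt("the"))  # → "the patient has a"
```

The model “knows” nothing about patients or doctors; it only reproduces the statistically likeliest continuation, which is why fluency is no guarantee of correctness.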

      It fails at knowing whether the information is useful, or even correct, however. And it inherits biases both from the people who built and trained it and from the data it was fed. Further, the narrow AIs we have today have no agency, no creativity or intuition. They fake all of these things to make us believe they’re ‘real’; that’s what they’re built to do.