• FierySpectre@lemmy.world

      Using AI for anomaly detection is nothing new, though. I haven’t read any article about this specific ‘discovery’, but this kind of work usually uses a completely different technique from the AI that comes to mind when people think of AI these days.

      • Johanno@feddit.org

        That’s why I hate the term AI. Say it’s a predictive LLM or a pattern recognition model.

        • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org

          Say it’s a predictive LLM

          According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says that it uses PyTorch’s implementation of ResNet18, a deep convolutional neural network that isn’t specifically designed to work on text. So this term would be inaccurate.
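
          For the curious, here’s a minimal sketch (mine, not the paper’s code) of what repurposing torchvision’s ResNet18 for mammograms could look like. The single-channel input and binary risk head are illustrative assumptions:

          ```python
          import torch
          import torch.nn as nn
          from torchvision.models import resnet18

          # Illustrative stand-in; the paper's actual weights/config are its own.
          model = resnet18(weights=None)

          # Mammograms are typically grayscale, so swap the 3-channel input conv...
          model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

          # ...and replace the 1000-class ImageNet head with a small risk head.
          model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. low risk / high risk

          x = torch.randn(1, 1, 224, 224)  # one fake single-channel image
          print(model(x).shape)            # torch.Size([1, 2])
          ```

          Note there’s nothing language-model-like anywhere in that pipeline: it’s convolutions over pixels.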

          or a pattern recognition model.

          Much better term IMO, especially since it uses a convolutional network. But since the article is from a news publication, not a serious academic paper, the author knows the term “AI” gets clicks and positive impressions (which is what their job actually is); otherwise we wouldn’t be here talking about it.

        • 0laura@lemmy.world

          It’s a good term; it refers to lots of things. There are many terms like that.

            • GetOffMyLan@programming.dev

              It’s literally the name of the field of study. Chances are this uses the same thing as LLMs: a neural network, which is one of the oldest AI techniques around.

              It refers to anything that simulates intelligence. They are using the correct word. People just misunderstand it.

            • 0laura@lemmy.world

              The word “program” refers to even more things, and no one says it’s a bad word.

  • wheeldawg@sh.itjust.works

    Yes, this is “how it was supposed to be used for”.

    The quality of sentence construction these days is in freefall.

    • supersquirrel@sopuli.xyz

      *shrugs* You know people have been confidently making these kinds of statements… since written language was invented? I bet the first person who developed written language did it to complain about how that generation of kids didn’t know how to write a proper sentence.

      What is in freefall is the economy for the middle and working classes, and the basic idea that artists and writers should be compensated, period. What has released us into freefall is that making art and crafting words are shit on by society as not being respectable jobs worth a living wage.

      There are a terrifying number of good writers out there, more than there have ever been, both in total number AND per capita.

      • wheeldawg@sh.itjust.works

        This isn’t a creative writing project. This isn’t an artist presenting their work. Where in the world did that tangent even come from?

        This is just plain speech, written objectively incorrectly.

        But go on, I’m sure next I’ll be accused of all the problems of the writing industry or something.

  • ALoafOfBread@lemmy.ml

    Now make mammograms not $500, without a 6-month waiting time, and available to women under 40. Then this’ll be a useful breakthrough.

      • ALoafOfBread@lemmy.ml

        Oh, for sure. I only meant in the US, where MIT is located. But it’s already a useful breakthrough for everyone in civilized countries.

        • Instigate@aussie.zone

          For reference here in Australia my wife has been asking to get mammograms for years now (in her 30s) and she keeps getting told she’s too young because she doesn’t have a familial history. That issue is a bit pervasive in countries other than the US.

    • Mouselemming@sh.itjust.works

      Better yet, give us something better to do about the cancer than slash, burn, poison. Something that’s less traumatic on the rest of the person, especially in light of the possibility of false positives.

  • superkret@feddit.org

    Why do I still have to work my boring job while AI gets to create art and look at boobs?

  • bluefishcanteen@sh.itjust.works

    This is a great use of tech. With that said, I find the lines between “AI” and machine learning are blurred.

    Real question: other than the specific tuning of the recognition model, how is this really different from something like Facebook automatically tagging images of you and your friends? Instead of saying “Here’s a picture of Billy (maybe)”, it’s saying, “Here’s a picture of some precancerous masses (maybe)”.

    That tech has been around for a while (at least 15 years). I remember Picasa doing something similar as a desktop program on Windows.

    • AdrianTheFrog@lemmy.world

      I’ve been looking at the paper; some things about it (rough sketch of the setup after the list):

      • the paper and article are from 2021
      • the model needs to be able to use optional data (age, family history, etc.) without being reliant on it
      • it needs to combine information from multiple views
      • it predicts risk for each year in the next 5 years
      • it has to produce consistent results with different sensors and diverse patients
      • it’s not the first model to do this, and it is more accurate than previous methods
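
      Not the authors’ code, but here’s a sketch of how those ingredients could fit together in PyTorch. The names (RiskModel, n_tabular) and shapes are hypothetical:

      ```python
      import torch
      import torch.nn as nn
      from torchvision.models import resnet18

      class RiskModel(nn.Module):
          """Hypothetical sketch: shared image encoder across views, optional
          tabular risk factors, one prediction per year over a 5-year horizon."""
          def __init__(self, n_tabular=8, horizon=5):
              super().__init__()
              encoder = resnet18(weights=None)
              encoder.fc = nn.Identity()          # keep the 512-d image features
              self.encoder = encoder
              self.tabular = nn.Linear(n_tabular, 64)
              self.head = nn.Linear(512 + 64, horizon)

          def forward(self, views, tabular=None):
              # views: list of (B, 3, H, W) tensors; average features across views
              feats = torch.stack([self.encoder(v) for v in views]).mean(dim=0)
              if tabular is None:                 # tolerate missing age/history data
                  tabular = torch.zeros(feats.size(0), self.tabular.in_features)
              fused = torch.cat([feats, torch.relu(self.tabular(tabular))], dim=1)
              return self.head(fused)             # one risk logit per future year

      model = RiskModel()
      views = [torch.randn(2, 3, 224, 224) for _ in range(4)]  # e.g. 4 standard views
      print(model(views).shape)  # torch.Size([2, 5])
      ```
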
    • pete_the_cat@lemmy.world

      It’s because AI is the new buzzword that has replaced “machine learning” and “large language models”; it sounds a lot sexier and more futuristic.

    • Lets_Eat_Grandma@lemm.ee

      Everything machine learning will be called “AI” from now until forever.

      It’s like how all RC helicopters and planes are now “drones”.

      People en masse just can’t handle the nuance of language. They need a dumb word for everything that is remotely similar.

      • Comment105@lemm.ee

        I don’t care about mean, but I would call it inaccurate. Billy is already cancerous; he’s mostly cancer. He’s a very dense, sour boy.

  • cecinestpasunbot@lemmy.ml

    Unfortunately, AI models like this one often never make it to the clinic. The model could be impressive enough to identify 100% of cases that will develop breast cancer, but if it has a false positive rate of, say, 5%, its use may actually create more harm than it prevents.

    • ColeSloth@discuss.tchncs.de

      Not at all, in this case.

      A false positive rate of even 50% can just mean telling the patient “you are at a higher risk of developing breast cancer and should get screened every 6 months instead of every year for the next 5 years”.

      Keep in mind that women have about a 12% chance of getting breast cancer at some point in their lives. During the highest-risk years it’s about a 2 percent chance per year, so a machine with a 50% false positive rate on a 5-year prediction would still only be telling something like 15% of women to be screened more often.
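
      Back-of-envelope with those numbers (my own arithmetic, and I’m reading “50% false positive” as half of all flags being wrong, i.e. a false discovery rate):

      ```python
      # Assumptions from the comment above, not from the paper.
      yearly_risk = 0.02                            # ~2% chance per year at peak
      five_year_risk = 1 - (1 - yearly_risk) ** 5   # ~9.6% over the 5-year window

      false_discovery_rate = 0.50   # half of all positive calls are wrong
      sensitivity = 1.0             # suppose every true case gets flagged

      true_flags = five_year_risk * sensitivity
      total_flags = true_flags / (1 - false_discovery_rate)
      print(f"women flagged for extra screening: {total_flags:.0%}")  # ~19%
      ```

      Same ballpark as the ~15% above: a fraction of women get extra screening, not everyone.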

    • Vigge93@lemmy.world

      That’s why these systems should never be used as the sole decision makers, but instead work as a tool to help the professionals make better decisions.

      Keep the human in the loop!

    • Maven (famous)@lemmy.zip

      Another big thing to note: we recently had a different but VERY similar headline about an AI that found typhoid early and was able to point it out more accurately than doctors could.

      But when they examined the AI to see what it was doing, it turned out that it was weighing the specs of the machine used to do the scan… An older machine meant the area was likely poorer and therefore more likely to have typhoid. The AI wasn’t pointing out whether someone had typhoid; it was just telling you whether they were in a rich area or not.
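
      A cheap sanity check for that kind of shortcut is to train a trivial model on the scan metadata alone and see how much of the headline accuracy it recovers. A toy sketch (synthetic data, hypothetical feature name):

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n = 1000
      machine_age_years = rng.integers(0, 25, size=n)  # scanner metadata only
      # Synthetic labels deliberately confounded with machine age:
      label = (machine_age_years + rng.normal(0, 5, size=n) > 12).astype(int)

      X = machine_age_years.reshape(-1, 1)
      score = cross_val_score(LogisticRegression(), X, label, cv=5).mean()
      print(f"accuracy from scanner metadata alone: {score:.0%}")  # suspiciously high
      ```

      If metadata alone scores close to the full model, the model probably learned the scanner, not the disease.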

  • gmtom@lemmy.world

    This is similar to what I did for my master’s, except it was lung cancer.

    Stuff like this is actually relatively easy to do, but the regulations you need to conform to and the testing you have to do first are extremely stringent. We had something that worked for like 95% of cases within a couple of months, but it wasn’t until almost 2 years later that we got to run the first actual trial.

  • Wilzax@lemmy.world

    If it has as low a false negative rate as human-read mammograms, I see no issue. Feed scans through the AI first, then have a human check only the positive results. That saves doctors’ time whenever a scan is so clean that even the AI doesn’t see anything fishy.

    Alternatively, if it has a lower false positive rate than humans, have doctors check only the negative results. If the AI sees something, then it’s DEFINITELY worth a biopsy; a human doctor double-checks the negative readings just to make sure nothing worth looking into goes unnoticed.

    Either way, as long as it isn’t worse than humans in both kinds of failures, it’s useful at saving medical resources.
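
    A toy sketch of that first policy (the function name and the 0.05 cutoff are made up):

    ```python
    def triage(scan_score: float, threshold: float = 0.05) -> str:
        """Hypothetical AI-first workflow: scans the model can confidently
        clear skip the radiologist; everything else goes to a human."""
        if scan_score < threshold:  # model sees nothing fishy
            return "no further review"
        return "flag for radiologist review"

    # The human only reads the scans the model couldn't clear.
    for score in (0.01, 0.04, 0.30, 0.85):
        print(score, "->", triage(score))
    ```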

    • Match!!@pawb.social

      An image recognition model like this is usually tuned specifically to have a very low false negative rate (well below human, often) in exchange for a high false positive rate (overly cautious about cancer)!
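
      That tuning is mostly a choice of decision threshold: on validation data, slide the cutoff down until the false negative rate is as low as you want and accept whatever false positive rate falls out. A sketch with made-up scores:

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      # Made-up validation scores: higher means the model leans "cancer".
      scores_pos = rng.beta(5, 2, size=200)    # scans that truly are cancer
      scores_neg = rng.beta(2, 5, size=2000)   # scans that truly are not

      target_fnr = 0.01  # miss at most ~1% of true cases
      threshold = np.quantile(scores_pos, target_fnr)

      fnr = (scores_pos < threshold).mean()
      fpr = (scores_neg >= threshold).mean()
      print(f"threshold={threshold:.2f}  FNR={fnr:.1%}  FPR={fpr:.1%}")
      ```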

    • Railing5132@lemmy.world

      This is exactly what is being done. My eldest child is in a Ph.D. program for human–robot interaction and medical intervention, and has worked on image analysis systems in this field. Their intended use is exactly that: a “first look” and a “second look”. A first look to help catch the small, easily overlooked pre-tumors and tentatively mark clear ones; a second look to be a safety net for tired, overworked, or outdated eyes.

  • JimVanDeventer@lemmy.world

    The AI genie is out of the bottle and — as much as we complain — it isn’t going away; we need thoughtful legislation. AI is going to take my job? Fine, I guess? That sounds good, really. Can I have a guaranteed income to live on, because I still need to live? Can we tax the rich?

  • Snapz@lemmy.world

    And if we weren’t a big, broken mess of late stage capitalist hellscape, you or someone you know could have actually benefited from this.

    • unconsciousvoidling@sh.itjust.works

      Yeah, none of us are going to see the benefits. I’m tired of seeing articles about scientific advances that I know will never trickle down to us peasants.

      • Telodzrum@lemmy.world

        Our clinics are already using AI to clean up MRI images for easier and higher-quality reads. We use AI on our cath lab table to provide a less noisy image at a much lower radiation dose.

  • elrik@lemmy.world

    Ductal carcinoma in situ (DCIS) is a type of preinvasive tumor that sometimes progresses to a highly deadly form of breast cancer. It accounts for about 25 percent of all breast cancer diagnoses.

    Because it is difficult for clinicians to determine the type and stage of DCIS, patients with DCIS are often overtreated. To address this, an interdisciplinary team of researchers from MIT and ETH Zurich developed an AI model that can identify the different stages of DCIS from a cheap and easy-to-obtain breast tissue image. Their model shows that both the state and arrangement of cells in a tissue sample are important for determining the stage of DCIS.

    https://news.mit.edu/2024/ai-model-identifies-certain-breast-tumor-stages-0722

  • yesman@lemmy.world

    The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how the AI works. In this way we may discover a new technique or procedure, or we might find out the AI’s methods are bullshit. Under no circumstance should we accept a “black box” explanation.

    • MystikIncarnate@lemmy.ca

      IMO, the “black box” thing is basically ML developers hand-waving and saying “it’s magic” because they know it would take way too long to explain all the underlying concepts needed to even start explaining how it works.

      I have a very crude understanding of the technology. I’m not a developer, I work in IT support. I have several friends that I’ve spoken to about it, some of whom have made fairly rudimentary machine learning algorithms and neural nets. They understand it, and they’ve explained a few of the concepts to me, and I’d be lying if I said that none of it went over my head. I’ve done programming and development, I’m senior in my role, and I have a lifetime of technology experience and education… And it goes over my head. What hope does anyone else have? If you’re not a developer or someone ML-focused, yeah, it’s basically magic.

      I won’t try to explain. I couldn’t possibly recall enough about what has been said to me, to correctly explain anything at this point.

    • CheesyFox@lemmy.sdf.org

      Good luck reverse-engineering millions if not billions of seemingly random floating point numbers. It’s like visualizing a graph in your mind by reading an array of numbers, except in this case the graph has as many dimensions as the neural network has inputs, which is the number of pixels in the input image.

      Under no circumstance should we accept a “black box” explanation.

      Go learn at least the basic principles of neural networks, because this sentence of yours alone makes me want to slap you.

      • petrol_sniff_king@lemmy.blahaj.zone

        Hey look, this took me like 5 minutes to find.

        Censius guide to AI interpretability tools

        Here’s a good thing to wonder: if you don’t know how your black box model works, how do you know it isn’t racist?

        Here’s what looks like a university paper on interpretability tools:

        As a practical example, new regulations by the European Union proposed that individuals affected by algorithmic decisions have a right to an explanation. To allow this, algorithmic decisions must be explainable, contestable, and modifiable in the case that they are incorrect.

        Oh yeah. I forgot about that. I hope your model is understandable enough that it doesn’t get you in trouble with the EU.

        Oh look, here you can actually see one particular interpretability tool being used to interpret one particular model. Funny that, people actually caring what their models are using to make decisions.
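
        And using one of these tools isn’t exotic. Here’s roughly what a gradient-based saliency check looks like with Captum, on a stock classifier (a sketch, nothing to do with the mammogram model):

        ```python
        import torch
        from torchvision.models import resnet18
        from captum.attr import Saliency  # pip install captum

        model = resnet18(weights=None).eval()
        image = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in input

        # Which pixels most sway the predicted class?
        target = model(image).argmax(dim=1).item()
        attribution = Saliency(model).attribute(image, target=target)
        print(attribution.shape)  # same shape as the input image
        ```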

        Look, maybe you were having a bad day, or maybe slapping people is literally your favorite thing to do, who am I to take away mankind’s finer pleasures, but this attitude of yours is profoundly stupid. It’s weak. You don’t want to know? It doesn’t make you curious? Why are you comfortable not knowing things? That’s not how science is propelled forward.

        • Tja@programming.dev

          “Enough” is doing a fucking ton of heavy lifting there. You cannot explain a terabyte of floating point numbers. Same way you cannot guarantee a specific doctor or MRI technician isn’t racist.

          • petrol_sniff_king@lemmy.blahaj.zone

            A single drop of water contains billions of molecules, and yet, we can explain a river. Maybe you should try applying yourself. The field of hydrology awaits you.

            • Tja@programming.dev

              No, we cannot explain a river, or the atmosphere. Hence weather forecasts are only good for a few days, and even after massive computer simulations, aircraft/cars/ships still need wind tunnel testing and real-life testing, because we can only approximate the real thing in our models.

              • petrol_sniff_king@lemmy.blahaj.zone

                You can’t explain a river? It goes downhill.

                I understand that complicated things frighten you, Tja, but I don’t understand what any of this has to do with being unsatisfied when an insurance company denies your claim and all they have to say is “the big robot said no… uh… leave now?”

    • CheeseNoodle@lemmy.world

      IIRC it recently turned out that the whole black box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.

      • Johanno@feddit.org

        Well, in theory you can explain how the model comes to its conclusion. However, I’d guess only 0.1% of the “AI engineers” are actually capable of that. And those people probably cost 100k per month.

      • Tryptaminev@lemm.ee

        It depends on the algorithms used. The lazy approach is to just throw neural networks at everything and waste immense computational resources. Of course you then get results that are difficult to interpret. There are much more efficient algorithms that work well for many problems and give you interpretable decisions.
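
        For example, a plain decision tree hands you its whole decision logic as explicit rules. A generic sklearn sketch (fittingly, on its classic breast cancer dataset; unrelated to the MIT model):

        ```python
        from sklearn.datasets import load_breast_cancer
        from sklearn.tree import DecisionTreeClassifier, export_text

        data = load_breast_cancer()
        tree = DecisionTreeClassifier(max_depth=3).fit(data.data, data.target)

        # Every prediction can be read off as explicit threshold rules:
        print(export_text(tree, feature_names=list(data.feature_names)))
        ```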

        • CheeseNoodle@lemmy.world

          This one’s from 2019: Link
          I was a bit off the mark: it’s not that the models they use aren’t black boxes, it’s that they could have made them interpretable from the beginning and chose not to, likely due to liability.

    • Emmie@lemm.ee

      Honestly, with all respect, that is a really shitty joke. It’s goddamn breast cancer, the opposite of hot.

      I usually just skip these mouldy jokes, but c’mon, that is beyond the scale of cringe.

      • PlantDadManGuy@lemmy.world

        Terrible things happen to people you love, you have two choices in this life. You can laugh about it or you can cry about it. You can do one and then the other if you choose. I prefer to laugh about most things and hope others will do the same. Cheers.

        • Emmie@lemm.ee

          I mean, do whatever you want, but it just comes off as repulsive, like a stain of shit on new shoes. This is a public space after all, not the boys’ locker room, so that might be embarrassing for you.

          And you know you can always count on me to point stuff out so you can avoid humiliation in the future.

          • PlantDadManGuy@lemmy.world

            Thanks for your excessively unnecessary put down. Don’t worry though. No matter how hard you try, you won’t be able to stop me from enjoying my life and bringing joy to others. Why are you obsessed with shit btw?