Summary

This research, conducted by Microsoft and OpenAI, focuses on how nation-state actors and cybercriminals are using large language models (LLMs) in their attacks.

Key findings:

  • Threat actors are exploring LLMs for various tasks: gathering intelligence, developing tools, creating phishing emails, evading detection, and social engineering.
  • No major attacks using LLMs have been observed, but early-stage attempts suggest potential future threats.
  • Several nation-state actors were identified using LLMs, including groups tied to Russia, North Korea, Iran, and China.
  • Microsoft and OpenAI are taking action by disabling accounts associated with malicious activity and improving LLM safeguards.

Specific examples:

  • Russia (Forest Blizzard): Used LLMs to research satellite and radar technologies, and for basic scripting tasks.
  • North Korea (Emerald Sleet): Used LLMs for research on experts and think tanks related to North Korea, phishing email content, and understanding vulnerabilities.
  • Iran (Crimson Sandstorm): Used LLMs for social engineering emails, code snippets, and evading detection techniques.
  • China (Charcoal Typhoon): Used LLMs for tool development, scripting, social engineering, and understanding cybersecurity tools.
  • China (Salmon Typhoon): Used LLMs for exploratory information gathering on various topics, including intelligence agencies, individuals, and cybersecurity matters.

Additional points:

  • The research identified eight LLM-themed TTPs (Tactics, Techniques, and Procedures) for the MITRE ATT&CK® framework to track malicious LLM use.
Comments:

  • Funderpants @lemmy.ca:

    I mean, yeah okay, but most of those use cases are exactly what everyone else is using them for so far.

  • Pantherina@feddit.de:

    And that's why you don't produce tools that are not needed and cause harm, MicroShit

    • FaceDeer@kbin.social:

      I am baffled that you appear to be attacking Microsoft over this. They’re doing research to counter bad actors here.

      • Pantherina@feddit.de:

        They are funding and forcefully pushing that tool to Windows. And now they want to “protect” against “threat actors”.

        Don't believe a word that comes out of Big Tech PR departments.

        • FaceDeer@kbin.social:

          You think Microsoft is the only organization capable of producing these tools? They weren’t even the first.

          • Pantherina@feddit.de:

            That is true. Still, huge big tech companies are the biggest threat actors.

      • demonsword@lemmy.world:

        “They’re doing research to counter bad actors here”

        “Bad actors” as defined by the US gov’t, of course. Home of the “brave” that bombs the shit out of everyone they dislike using unmanned drones, and currently supports an ongoing genocide in the Middle East. Literally the paradise of freedom and justice on Earth.