• sapient [they/them]@infosec.pub · 23 points · 1 year ago

    I hope not. Not a big fan of proprietary AI (local AI all the way, and I hope people leak all these models, both code and weights), but fuck copyright and fuck capitalism, which makes automation seem like a bad thing when it shouldn’t be ;p nya

    • wim@lemmy.sdf.org · 16 points · 1 year ago

      Yes, because AI and automation will definitely not be on the side of big capital, right? Right?

      Be real. The cost of building these systems means they’re always going to favour the wealthy. At best, right now we’re running public copies of older and smaller models. Local AI will always be running behind the state-of-the-art proprietary models, which will always be in the hands of the richest moguls and companies in the world.

      • sapient [they/them]@infosec.pub · 7 points · 1 year ago

        Be real. The cost of building these systems means they’re always going to favour the wealthy. At best, right now we’re running public copies of older and smaller models. Local AI will always be running behind the state-of-the-art proprietary models, which will always be in the hands of the richest moguls and companies in the world.

        Distribution of LoRA-style fine-tuning weights means that FOSS AI systems have a long-term advantage, because improvements compound.

        That is, high-quality data curated for smaller models, and very small fine-tuning weight sets, are cheap enough for open groups to produce, and modular enough in how they improve a given model, that the FOSS community can take even a single leak and run with it, competing effectively with proprietary groups.
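        The “very small” part is easy to see numerically. Here’s a minimal numpy sketch of the low-rank additive update LoRA is built on (the layer width and rank are illustrative, not taken from any particular model):

```python
import numpy as np

# Illustrative sizes: one 1024x1024 weight matrix from a base model,
# and a rank-8 LoRA update for it (real models use larger widths).
d, r = 1024, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # frozen base weight, shipped once
A = rng.standard_normal((r, d)) * 0.01   # LoRA "down" projection (trained)
B = np.zeros((d, r))                     # LoRA "up" projection (starts at zero)

# Applying a fine-tune is just a low-rank additive update:
W_tuned = W + B @ A

# The adapter is a tiny fraction of the base layer's parameters,
# which is why sharing fine-tunes separately from the base is cheap.
base_params = W.size                 # 1,048,576
adapter_params = A.size + B.size     # 16,384
print(adapter_params / base_params)  # 0.015625, i.e. ~1.6% of the layer
```

        A fine-tune distributed as (A, B) pairs is a tiny download compared to the base weights, which is what makes community sharing and stacking of tunes practical.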

        Furthermore, smaller and more efficient models that run on lower-end hardware also avoid the need to send potentially sensitive data off to AI companies, and enable the kind of FOSS compounding effect explained above.
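        A big part of the “lower-end hardware” point is quantization: storing weights as 8-bit integers plus a scale factor instead of 32-bit floats. A rough numpy sketch of the idea (symmetric per-tensor quantization; the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1_000_000).astype(np.float32)  # fp32 weights

# Symmetric 8-bit quantization: keep int8 codes plus one scale factor.
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize on the fly at inference time.
w_hat = q.astype(np.float32) * scale

print(w.nbytes // q.nbytes)            # 4: a 4x memory reduction
print(float(np.abs(w - w_hat).max()))  # worst-case rounding error, about scale/2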

        This doesn’t just matter to people who like privacy, but also to companies with data-privacy requirements. As long as the medium-sized models are “good enough” (and I think they are ;p), the compounding effects of LoRA tuning and the better data-privacy properties, plus developments that already exist in research papers (much lower-parameter models, and training mechanisms that get more zero-shot capability out of each weight), mean local AI can compete with the proprietary stuff. It’s still early days, but it’s absolutely doable even today on fairly low-end hardware, and it can only get better for the reasons above.

        Furthermore, “intellectual property” and copyright have an absolutely massive, and arguably even more powerful, set of industries behind them. Strengthening IP law against AI means AI will only be available to those who already control those IP resources, with their unending stranglehold on technology, communication, and people as a whole :/

        AI, I think, is also forcing more and more people to reevaluate society’s relationship with work and labour. Frankly, I think this is super important, because it opens up a greater chance of radical liberation from existing structures: not just capitalism and its hierarchies, but the near-mandatory nature of work as a whole (though there has already been some thinking along these lines around “bullshit jobs”).

        I think people should use this as an opportunity to unionise, and to push for cooperative and democratic control of orgs, and many other things that I CBA to list out ;3

    • hascat@programming.dev · 6 points · 1 year ago

      No leaks necessary; there are a number of open-source LLMs available:

      https://github.com/Hannibal046/Awesome-LLM#open-llm

      The key differentiator between these and proprietary offerings will always be the training data. Large amounts of high-quality data will be more difficult for an individual or a small team to source. If lawsuits like this one block ingestion of otherwise publicly-available data, we could have a future where copyright holders charge AI builders for access to their data. If that happens, “knowledge” could become exclusive to various AI platforms much the same way popular shows or movies are exclusive to streaming platforms.

      • pax@rblind.com · 1 point · 1 year ago

        The open-source models are so bad that they give you responses out of context; their responses are completely random.