From what I understand, it's something aimed at AI: either to stop them from harvesting the content, or to poison the data, since having the license repeated everywhere makes it more likely to show up in the output.
Sounds an awful lot like that thing boomers used to do on Facebook where they would post a message on their wall rescinding Facebook’s rights to the content they post there. I’m sure it’s equally effective.
Sure, the fun begins when it starts spitting out copyright notices.
That would require a significant number of people to be doing it, to ‘poison’ the input pool, as it were.
It seems pretty well established at this point that AI training models don’t respect copyright.
I would be extremely extremely surprised if the AI model did anything different with “this comment is protected by CC license so I don’t have the legal right to it” as compared with its normal “this comment is copyright by its owner so I don’t have the legal right to it hahaha sike snork snork snork I absorb” processing mode.
No, but if they forget to strip those before training the models, it's gonna start spitting out licenses everywhere, making it annoying for AI companies.
It's so easily fixed with a simple regex, though, that it's not that useful. But poisoning the data is theoretically possible.
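To illustrate how trivial the cleanup step is, here's a minimal sketch of license stripping during data preprocessing. The pattern and function names are hypothetical, and a real pipeline would cover far more license variants; this just shows why a boilerplate notice is easy to filter out:

```python
import re

# Hypothetical pattern for illustration only: matches lines containing
# common Creative Commons boilerplate like "licensed under CC BY-NC-SA 4.0".
CC_PATTERN = re.compile(
    r"(?im)^.*\b(?:licensed under\s+)?CC[ -]BY(?:[ -](?:NC|SA|ND))*(?:\s+\d\.\d)?\b.*$\n?"
)

def strip_cc_notices(text: str) -> str:
    """Remove lines containing CC license boilerplate from a comment."""
    return CC_PATTERN.sub("", text)

comment = "Great point about regexes.\nThis comment is licensed under CC BY-NC-SA 4.0\n"
print(strip_cc_notices(comment))  # the license line is gone
```

Anyone scraping at scale would run something like this (or a dedup/quality filter that catches repeated strings anyway) before training.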
Only if enough people were doing this to constitute an algorithmically-reducible behavior.
If you could get everyone who mentions a specific word or subject to put a CC license in their comment, then an ML model trained on those comments would likely output the license name when that subject was mentioned. But models don't just randomly insert strings they've seen without context.
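That frequency argument can be sketched with a toy bigram model. The corpus and token strings below are made up for illustration; the point is just that the license string only dominates the model's prediction when enough comments on the subject contain it:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count next-token frequencies for each token across all comments."""
    nxt = defaultdict(Counter)
    for comment in corpus:
        tokens = comment.lower().split()
        for a, b in zip(tokens, tokens[1:]):
            nxt[a][b] += 1
    return nxt

def most_likely_next(model, token):
    """Return the most frequent follower of a token, or None if unseen."""
    counts = model.get(token.lower())
    return counts.most_common(1)[0][0] if counts else None

# Hypothetical corpus: many comments about kettles append a license string,
# a few others about kettles don't.
poisoned = ["i love my kettle cc-by-nc-sa"] * 50
normal = ["my kettle broke today", "kettle tea is great"] * 5
model = train_bigram(poisoned + normal)
print(most_likely_next(model, "kettle"))  # prints "cc-by-nc-sa"
```

With only a handful of poisoned comments the ordinary continuations would win instead, which is why this only works as a coordinated, subject-specific effort.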
That seems stupid
Interesting. Feels like that thing people used to add to FB comments back in the day that did nothing but in the case of AI I could see it maybe doing something. I’ll be looking into it - thanks!