I would be extremely extremely surprised if the AI model did anything different with “this comment is protected by CC license so I don’t have the legal right to it” as compared with its normal “this comment is copyright by its owner so I don’t have the legal right to it hahaha sike snork snork snork I absorb” processing mode.
No but if they forget to strip those before training the models, it’s gonna start spitting out licenses everywhere, making it annoying for AI companies.
It’s so easily fixed with a simple regex, though, that it’s not that useful. But poisoning the data is theoretically possible.
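A naive filter along those lines might look like this. The pattern is purely an illustrative guess at typical notice wording; it’s not any company’s actual preprocessing pipeline:

```python
import re

# Hypothetical preprocessing step: strip boilerplate CC license notices
# from scraped comments before training. The pattern is a guess at what
# such notices tend to look like.
LICENSE_RE = re.compile(
    r"this comment is (?:protected by|licensed under) "
    r"(?:a |the )?CC[\w\- .]*license[^.]*\.?",
    re.IGNORECASE,
)

def strip_license_notices(comment: str) -> str:
    """Remove CC license boilerplate and collapse leftover whitespace."""
    cleaned = LICENSE_RE.sub("", comment)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(strip_license_notices(
    "Great point! This comment is licensed under a CC BY-SA 4.0 license."
))
# → Great point!
```

Of course, a regex like this only catches the exact phrasings someone thought to enumerate, which is why varied or context-dependent phrasing is harder to scrub.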
Only if enough people were doing this to constitute an algorithmically-reducible behavior.
If you could get everyone who mentions a specific word or subject to put a CC license in their comment, then an ML model trained on those comments would likely output the license name when that subject came up; but models don’t just randomly insert strings they’ve seen without context.