

Bias in training data is a known problem and is difficult to engineer out of a model. You also can't give the model access to other users' interactions for comparison and output moderation, since it could be persuaded to leak that context to a user.
Basically the models inherit the biases of the content they were trained on, because a completion is generated token by token according to which next token is most probable in that data.
Prompts like "My daughter wants to grow up to be" and "My son wants to grow up to be" will accordingly produce sexist completions, because the source data makes those the more probable continuations.
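You can see this directly by comparing next-token distributions for the two prompts. Here's a minimal sketch using GPT-2 via Hugging Face transformers; the model choice is arbitrary (it's just small and easy to run), and the exact probabilities will vary by model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 used purely as an example; any causal LM shows the same effect.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def top_next_tokens(prompt: str, k: int = 5):
    """Return the k most probable next tokens for a prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Distribution over the vocabulary at the final position.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode(i), p.item())
            for i, p in zip(top.indices, top.values)]

for prompt in ("My daughter wants to grow up to be",
               "My son wants to grow up to be"):
    print(prompt)
    for token, p in top_next_tokens(prompt):
        print(f"  {token!r}: {p:.3f}")
```

The point isn't the specific tokens GPT-2 happens to rank highest; it's that the rankings for the two prompts diverge purely because the training corpus made different continuations more probable for each.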
I am very into this if it can take a non-vector graphic as input and work from it. OpenAI's attempts at that have been pretty dismal.