AI tools are helpful and cool as long as you know their limitations. AI doesn't exist. There is no fidelity in AI. AI is built on biased data sets and will gi...
I agree with her points about the dangers of AI, but I also think she overlooks important applications of it.
Of course AI shouldn't be used for life-and-death decisions. If there's even a 0.1% risk that someone could be harmed by the AI, then it shouldn't be used (on its own).
But not all applications are life-threatening. For example, you can use computer vision to determine the quality of apples. If an apple is bad, it is immediately discarded. This reduces the number of bad apples shipped to the grocery store, which in turn reduces costs.
Is it terrible if the AI misses 20% of the bad apples? Not really. Those apples would’ve been shipped anyway without the AI.
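To put rough numbers on that, here's a minimal back-of-the-envelope sketch in Python. All the figures are hypothetical, assuming a classifier that catches 80% of bad apples:

```python
# Back-of-the-envelope sketch: an imperfect classifier still cuts the
# number of bad apples shipped. All numbers here are hypothetical.

total_apples = 100_000   # apples inspected per day (assumed)
bad_fraction = 0.05      # assume 5% of apples are bad
recall = 0.80            # the classifier catches 80% of the bad ones

bad_apples = total_apples * bad_fraction
shipped_without_ai = bad_apples               # everything bad ships
shipped_with_ai = bad_apples * (1 - recall)   # only the missed 20% ships

print(f"bad apples shipped without AI: {shipped_without_ai:.0f}")  # 5000
print(f"bad apples shipped with AI:    {shipped_with_ai:.0f}")     # 1000
```

Even at 80% recall, four out of five bad apples never leave the warehouse, which is the whole economic case.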
If you’re ok with some error, then you have a case for AI, and many industrial applications are like that.
How do you feel about the self-driving car use case? Say a self-driving car has a 0.5% risk of an accident, and thus human harm, over its usage lifetime, while a human driver has a 5% risk of an accident (I'm making the numbers up for the sake of argument, but say the self-driving car has a 0.1% chance of harm or greater, just much lower than a human's). Would you still be against the tech, even though disallowing it would statistically cause more harm?
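To make the comparison concrete, here's a quick sketch using the made-up rates above. The fleet size is also an invented parameter, purely for illustration:

```python
# Expected-harm comparison using the made-up rates from the question.
# The fleet size is a hypothetical parameter, purely for illustration.

fleet_size = 1_000_000    # hypothetical number of vehicles
p_human = 0.05            # 5% lifetime accident risk, human driver
p_self_driving = 0.005    # 0.5% lifetime accident risk, self-driving car

expected_human = fleet_size * p_human
expected_self_driving = fleet_size * p_self_driving

print(f"expected accidents, human drivers: {expected_human:,.0f}")        # 50,000
print(f"expected accidents, self-driving:  {expected_self_driving:,.0f}") # 5,000
```

Under these assumptions, banning the technology would statistically mean ten times as many accidents.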
If it can be proven that it causes fewer accidents, maybe.
My fear is that the accidents can be triggered systematically. For example, one particular curve the AI has trouble understanding, or a person standing in one particular corner causing it to completely misinterpret the scene, or one particular car color confusing it.
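To illustrate the worry with a rough sketch (all figures invented): a single aggregate error rate can look fine while one specific scenario fails almost every time.

```python
# Sketch of the worry: an aggregate error rate can hide a scenario in
# which the system fails consistently. All figures below are invented.

scenarios = {
    "straight road, daylight": (900_000, 0.0001),  # (miles driven, error rate)
    "gentle curves":           (90_000,  0.0005),
    "one particular hairpin":  (10_000,  0.2000),  # systematic failure mode
}

total_miles = sum(miles for miles, _ in scenarios.values())
total_errors = sum(miles * rate for miles, rate in scenarios.values())

print(f"aggregate error rate: {total_errors / total_miles:.4f}")  # 0.0021
for name, (miles, rate) in scenarios.items():
    print(f"  {name}: error rate {rate:.4f}")
# The aggregate hides the hairpin: anyone who drives that curve every
# day faces a 20% per-trip failure rate, so the risk is not random.
```

That's the difference from human error: human mistakes are roughly independent, while a model's blind spot hits the same people in the same place over and over.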