• 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • That’s how human intelligence works. We assign a value to the source of the information. The fact that the AIs seem to be trained without that explains why they “lie” so much. They simply reconstruct patterns without giving any particular pattern more weight based on where it came from.

    For example, take the claim “President Biden will launch a ground invasion of Russia.” If the New York Times, BBC, and CNN are all reporting it, we would give that information a higher likelihood of being true than if it only appeared on random blogs. However, if those random blogs belonged to reputable reporters or bloggers covering military and international affairs, we would still rate the information as more likely to be correct than if it came from Bob’s Bigfoot and Alien Sightings Index.

    Without the ability to check the accuracy of its source data, any generative AI can be corrupted. If you fed an art AI photos of the Statue of Liberty but kept labeling them as the Eiffel Tower, then when asked to draw the Eiffel Tower it would spit out the Statue of Liberty. Right now, without the ability to assess the accuracy of a response, the chat-based AIs are garbage for most of the use cases companies are deploying them in. A rough sketch of that kind of source weighting follows below.
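    To make the idea concrete, here is a minimal sketch of what source weighting could look like, assuming hand-assigned credibility scores per outlet. Every name and number below is hypothetical, and current chatbots are not trained this way; it only illustrates weighting a claim by who reports it.

    ```python
    # Hypothetical sketch: weight a claim by the credibility of the sources
    # reporting it. Source names and scores are invented for illustration.

    SOURCE_CREDIBILITY = {
        "New York Times": 0.9,
        "BBC": 0.9,
        "CNN": 0.85,
        "military affairs blog": 0.7,
        "Bob's Bigfoot and Alien Sightings Index": 0.05,
    }

    def claim_confidence(reporting_sources):
        """Combine per-source credibility into a rough confidence that a claim is true.

        Noisy-OR style combination: each independent, credible source lowers
        the chance that the claim is false.
        """
        p_false = 1.0
        for source in reporting_sources:
            credibility = SOURCE_CREDIBILITY.get(source, 0.3)  # default for unknown sources
            p_false *= 1.0 - credibility
        return 1.0 - p_false

    print(claim_confidence(["New York Times", "BBC", "CNN"]))             # ~0.9985
    print(claim_confidence(["Bob's Bigfoot and Alien Sightings Index"]))  # 0.05
    ```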


  • Nuclear is very expensive, which means a plant has to run for a long time to make up for the initial investment (a toy payback calculation is sketched below). There are not many sites that can supply enough cooling water for three to five decades and are not on a coastline. However, if you build on the coast, you have to design for 50 years of sea-level rise, tsunamis, and flooding. All of that adds to the already high costs.

    Cover everything with solar, build out onshore and offshore wind, improve existing hydroelectric, invest in geothermal, and expand the grid with more grid storage; if you still need more generation after that, then add nuclear.
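    As a rough illustration of why the payback horizon runs into decades, here is a toy calculation. Every figure is a made-up placeholder, not real project data; the point is only that a very large up-front cost divided by annual net revenue works out to a multi-decade run time.

    ```python
    # Back-of-the-envelope payback sketch for a nuclear plant.
    # All numbers below are hypothetical placeholders, not real project figures.

    overnight_cost = 8_000_000_000   # assumed construction cost, USD
    capacity_mw = 1_100              # assumed plant capacity, MW
    capacity_factor = 0.9            # fraction of the year spent generating
    price_per_mwh = 60               # assumed average electricity price, USD/MWh
    operating_cost_per_mwh = 30      # assumed fuel + operations cost, USD/MWh

    annual_mwh = capacity_mw * capacity_factor * 8_760  # 8,760 hours per year
    annual_net_revenue = annual_mwh * (price_per_mwh - operating_cost_per_mwh)

    payback_years = overnight_cost / annual_net_revenue
    print(f"Simple payback: ~{payback_years:.0f} years")  # roughly 31 years with these inputs
    ```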


  • AI only “knows” what it has been trained on. Since structural racism exists in the data we produce, it will be present in how AI operates. That does not mean we will get an AI Hitler trying to kill Jews, but it might mean things like an AI drawing program defaulting to a white woman when asked to draw a generic woman. It can also mean that bias that already exists gets amplified, for example an AI “pre-crime” program targeting Black neighborhoods as potential hotspots while ignoring similar White neighborhoods.
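    To show the mechanism with a toy example: the “model” below is nothing more than the label frequencies of an invented, skewed dataset, so its idea of a “generic woman” simply mirrors whatever the training data over-represents. The dataset, labels, and counts are all made up; a real image model is far more complex, but it still samples from the patterns its training data contains.

    ```python
    import random
    from collections import Counter

    # Invented, skewed "training set": the labels and counts are hypothetical.
    training_labels = ["white woman"] * 800 + ["Black woman"] * 120 + ["Asian woman"] * 80
    label_counts = Counter(training_labels)

    def generate_generic_woman():
        """Sample the way a pure pattern-matcher would: from the training distribution."""
        labels, counts = zip(*label_counts.items())
        return random.choices(labels, weights=counts, k=1)[0]

    samples = Counter(generate_generic_woman() for _ in range(1_000))
    print(samples)  # skews heavily toward "white woman" because the training data did
    ```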