It used to be that you would do a search on a relevant subject and get blog posts, forum posts, and maybe a couple of relevant companies offering the product or service. (And if you wanted more information on said company, you could give them a call and actually talk to a real person about said service.) You could even trust Amazon and Yelp reviews. Now searches have been completely taken over by Forbes top 10 lists, random affiliate-link click-through aggregators that copy and paste each other's work, review factories that will kill your competitors and boost your product's stars, etc… It seems like the internet has gotten soooo much harder to use, just because you have to wade through all the bullshit. It's no wonder people switch to reddit- and lemmy-style sites; in a way it mirrors a little of the kind of information you used to be able to garner from the internet in its early days. What do people do these days to find genuine information about products or services?
ChatGPT for general knowledge and programming questions. Mostly straight-to-the-point answers, without the 500-word drivel and 6 ad blocks on a single page for the 3-line answer you find on most blogs…
Literally the worst source for anything…
It has no understanding; it just craps out things that look right. Absolutely awful for code generation beyond boilerplate. (And I do pay for the better model.)
Eh, I've found it quite useful for giving me relatively well-known information. As for code, it's great at telling me what functions do without having to traverse a library's documentation, and at explaining stuff I'm confused about. It's faster and more convenient for a lot of things, as long as you double-check important info (but you have to do that anyway, never use a single source etc etc).
It's more like it's only as smart as the average person… which isn't that high of a bar, so yeah, for anything even mildly specific it's dogshit.
It isn’t “smart”, it’s a language model.
You ain’t smart >:(
But in an age where we're calling fridges smart, I think a language model at least contends.
I mean, it doesn't even produce compilable code half the time. Even if you give it feedback about the error it produced, it might not fix it after 3–4 corrections. I've ended up in loops where it cycles through incorrect suggestions, apparently forgetting that all of its previous answers were incorrect.
Hey, all I gotta say is: do you think 50% of people can properly code? Cause if not, it should be ass at it too.
But ask it for the general structure the code might take, and it does give you some boilerplate (which, again, might also be ass).
I wonder whether ChatGPT could evaluate trustworthiness on the fly. A lot of the complexity of modern search engines exists to prevent people from gaming the system. Maybe an AI heuristic would be less predictable/gameable.