  • AI will certainly be a challenge for web sites. I think we must accept that the end of the 2010s internet is upon us. Ad-supported web sites, which offer the user as the product, will start to slowly disappear. That is not really a bad thing, though. The drawback for us users is that services will start to cost money.

    I read a comment by Bill Gates recently where he suggested that most people will interact with AI through personal assistants. This actually feels like a good point from him. A PA that actually works will add a lot, to a lot of people. Which goes to your point about how normal users will stop interacting with the web through an unfiltered browser and start using AI to access it. Companies who sell actual goods and services online should be safe, as their income does not depend on advertising.

    I suspect normal users will end up paying for the search engines used by the AI, on some form of tiered approach, apart from a few oddball users like us here in the Fediverse, who will find other ways to make things work.


  • Lol, but I get that. A proper affordable heads-up display would add so much more value to my life. I ride motorcycles, and that is where it could be really useful. A Kickstarter tried it a while ago with a helmet, but that flopped badly. A pair of glasses that fits in my helmet, beaming useful info to my eyeballs, could be lifesaving.

    I had a long rural night ride a while back, and it was bloody tricky navigating with the mounted phone. Not out of choice, more of a needs-must scenario. The Gaiman map app was very useful in indicating the road ahead, but the split attention needed was insane.

    Another 90s geek chiming in.


  • It is an area that will require us to think carefully about the ethics of the situation. Humans create works for humans. Has this really changed, now that consumption happens through a machine learning interface? I agree with your reasoning, but there is an elephant in the room that this line of reasoning does not address.

    Things get very murky for me when we ask an AI system to generate content in someone else’s style, or when the AI distorts someone’s views in its responses. Can I get an AI to eventually write another book in Terry Pratchett’s style? Would his estate be entitled to some form of compensation? And that is an easier case than living authors or writers. We already see the way image-generating AI programs copy artists. Now we are getting the same for language and more.

    It will certainly be an interesting space to follow in the next few years as we develop new ethics around this.


  • Agreed on your point. We need a way to identify those links so that our browser or app can automatically open them through our own instance.

    I am thinking along the lines of a registered resource type, or maybe a redirect page hosted by each instance, that knows how to send you to your instance to view the post there. A rough sketch of the idea follows below.
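
    As a minimal sketch of the redirect idea (the /resolve endpoint, instance names, and URLs here are assumptions for illustration, not a real API), a browser extension or app could rewrite remote links to point at your home instance’s resolver:

        from urllib.parse import quote

        HOME_INSTANCE = "https://my.instance"  # hypothetical home instance

        def rewrite_to_home(remote_post_url: str) -> str:
            """Rewrite a remote post link so it opens via the home instance,
            which can then redirect to its locally federated copy."""
            # Assumed resolver endpoint; the real path would depend on the platform.
            return f"{HOME_INSTANCE}/resolve?url={quote(remote_post_url, safe='')}"

        print(rewrite_to_home("https://other.instance/post/12345"))
        # https://my.instance/resolve?url=https%3A%2F%2Fother.instance%2Fpost%2F12345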

    I am sure it is a problem that can be solved. I would, however, not be in favour of some kind of central identity management. It is too easy a choke point and would take autonomy away from the instances.


  • That should just work. You view the post on your own instance and reply there. That response trickles out to the other instances.

    It may take a while to propagate, though. The paradigm is close to that of the ancient NNTP newsgroups, where responses travel at the speed of the servers’ synchronisation. It may be tricky for rapid-fire conversation, but it works well for comments on articles.
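
    To make the propagation delay concrete, here is a toy store-and-forward model (the instance names and single sync pass are invented for illustration):

        # Toy model: a reply posted on one instance reaches the others only
        # at the next synchronisation pass, NNTP-style.
        instances = {"alpha": set(), "beta": set(), "gamma": set()}

        def post(instance: str, message: str) -> None:
            instances[instance].add(message)

        def sync_all() -> None:
            """One synchronisation pass: every instance merges every peer's messages."""
            merged = set().union(*instances.values())
            for name in instances:
                instances[name] = set(merged)

        post("alpha", "my reply")
        print(instances["gamma"])  # set() -- the reply has not propagated yet
        sync_all()                 # one sync interval later...
        print(instances["gamma"])  # {'my reply'} -- now visible everywhere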



  • Edit: Wrote this on mobile. The mobile UI is not always clear about the source magazine a post came from, so I missed the Linux in there. Things are not as dire on Linux as on Windows for AMD, so my assessment may be a bit pessimistic. With AMD’s focus on the data centre for machine learning, the Linux driver stack seems fairly well supported.

    I spent the last few days getting Stable Diffusion and PyTorch working on my Radeon 6800 XT in Windows. The machineml distribution of Stable Diffusion runs at about 1/4 of the speed of raw ROCm when I compare it to the SHARK tooling, which supports ROCm via Docker on Windows.

    Expect tooling to be clunky, and expect to compile a fair amount yourself on Linux. Prebuilt stuff will mostly be for Nvidia.
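
    That said, for the PyTorch piece specifically, prebuilt ROCm wheels are published. A minimal sanity check, assuming a working ROCm install (the wheel index version below is an assumption and changes over time):

        # Install (run in a shell; the rocm5.6 index is an example version):
        #   pip install torch --index-url https://download.pytorch.org/whl/rocm5.6
        import torch

        # On ROCm builds, torch.cuda is backed by HIP, so the usual CUDA
        # calls report the Radeon GPU.
        print(torch.cuda.is_available())   # True if the GPU is visible
        print(torch.version.hip)           # HIP version string; None on CUDA builds
        if torch.cuda.is_available():
            print(torch.cuda.get_device_name(0))  # e.g. the Radeon RX 6800 XT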

    AMD is pushing hard into the AI space, but aiming at data centre users. They are rumoured to be bringing ROCm to their Windows drivers, but when that will ship is anyone’s guess.

    So right now, if you need to hit the ground running for your academic work, I would recommend Nvidia, as much as it pains me as a long-time AMD user.