• 1 Post
  • 780 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • I love owning a home, but fuck, it’s expensive. Learning how to do stuff yourself is the best advice I can give you.

    I have saved so much money by being able to troubleshoot and repair simple things: HVAC, electrical, plumbing, woodworking, etc. YouTube is amazing for learning this stuff. A good example: I recently had to replace two HVAC condenser fans that would likely have cost me $1,000 apiece to have fixed. It’s bad enough the motors themselves were $300 a pop. Plumbing is easy if you have the right tools (PEX is awesome). Electrical can be pretty easy if you’re willing to learn (I was a computer engineer in college and a system architect by trade, so I get the electrical stuff). Learn how to patch holes in drywall; you’d be surprised how often you’ll be doing that. Learn how to replace a faucet. Learn how to replace the innards of a toilet.

    The great thing about a fixer upper is you can afford to make mistakes. Take your time, don’t rush it. Make little improvements all the time. It all adds up.





  • They are very impressive compared to where we were 20 years ago, hell, even 5 years ago. The first time I played with ChatGPT I was absolutely floored. But after playing with a lot of them, and even building a few RAG (Retrieval-Augmented Generation) systems, we aren’t really that close, and in my opinion this is not a useful path toward a true AGI. Don’t get me wrong, this tool is extremely useful, and to most people they’d likely pass a basic Turing Test. But LLMs are sophisticated pattern recognition systems trained on vast amounts of text data that predict the most likely next word or token in a sequence. That’s really all they do. They are really good at predicting the next word. While they demonstrate impressive language capabilities, they lack several fundamental components necessary for an AGI:

    - no true understanding
    - they can’t really engage with the real world
    - they have no real ability to learn in real time
    - they can’t really take in more than one type of information at a time
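    To make the “they just predict the next word” point concrete, here’s a minimal sketch of next-token prediction. This is an assumption-laden toy: a hand-counted bigram model stands in for a real LLM (which uses a neural network over enormous corpora), but the core loop is the same idea, pick the most likely continuation given what came before.

    ```python
    # Toy next-token predictor. NOT how a real LLM is built internally --
    # just an illustration of "predict the most likely next word."
    from collections import Counter, defaultdict

    def train_bigram(corpus: str):
        """Count which word follows which in the training text."""
        words = corpus.split()
        follows = defaultdict(Counter)
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
        return follows

    def predict_next(follows, word: str) -> str:
        """Greedily return the most frequent continuation seen in training."""
        if word not in follows:
            return "<unk>"  # never saw this word, no prediction to make
        return follows[word].most_common(1)[0][0]

    model = train_bigram("the cat sat on the mat and the cat slept")
    print(predict_next(model, "the"))  # -> cat ("cat" follows "the" most often)
    ```

    Note the model never volunteers anything: it only ever answers “what comes after X?” when asked, which is the same prompt-and-response shape the comment is describing.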

    The simplest way I can explain the difference is that you will never have an LLM just come up with something on its own. It’s always just a response to a prompt.