• 0 Posts
  • 11 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Yep, and for good reason, honestly. I work in CV, and while I don’t work on autonomous vehicles myself, many of the folks I know have worked at companies or research institutes on exactly these kinds of problems, and they all agree: in a scenario like this, you should treat the state of the vehicle as compromised and go into an error/shutdown mode.

    Nobody wants to give their vehicle an override that could endanger the people inside or around it, and practically speaking, there aren’t many options other than this that actually guarantee safety.
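
    To make that concrete, here’s a minimal, hypothetical sketch (Python, with made-up names like VehicleController and Mode; not any real AV stack) of what “treat the state as compromised” tends to look like:

    ```python
    from enum import Enum, auto

    class Mode(Enum):
        NORMAL = auto()
        SAFE_STOP = auto()  # controlled stop: decelerate, hazards on, alert an operator

    class VehicleController:
        """Toy fail-safe controller: once compromised, stay stopped."""

        def __init__(self) -> None:
            self.mode = Mode.NORMAL

        def on_sensor_report(self, plausible: bool) -> None:
            # Any implausible or contradictory reading marks the whole vehicle
            # state as compromised. Latch into SAFE_STOP rather than trying to
            # "recover" and keep driving on data we no longer trust.
            if not plausible:
                self.mode = Mode.SAFE_STOP

        def control_step(self) -> str:
            # Deliberately no code path back from SAFE_STOP to NORMAL: leaving
            # the error state takes human/service intervention, not an
            # in-vehicle override.
            if self.mode is Mode.SAFE_STOP:
                return "decelerate_and_pull_over"
            return "drive_normally"
    ```

    The one-way transition is the whole point: an override that can put you back into normal driving is an override that can be abused.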



  • I’m a researcher in ML, and that’s not the definition I’ve heard. The way I’ve usually seen AI defined is: any computational method with the ability to complete tasks that are thought to require intelligence.

    This definition admittedly sucks. It’s very vague, and it comes with the problem that the bar for “requiring intelligence” shifts every time the field solves something new (sometimes called the “AI effect”). We sort of go, “well, given that these relatively simple methods could solve it, I guess it couldn’t have really required intelligence.”

    The definition you listed is generally more in line with AGI, which is probably what most people think of when they hear the term AI.



  • I’m curious what field you’re in. I’m in computer vision and ML, and most conferences have clauses saying not to use ChatGPT or other LLM tools. That said, most of the folks I work with see no issue with using LLMs to help with sentence structure, wording, and so on, but they generally don’t approve of using LLMs to write accuracy-critical sections (such as the background or results) beyond rewording.

    I suspect part of the reason conferences are hesitant to allow LLM usage has to do with copyright, since that’s still something of a gray area in the US AFAIK.