I Googled it to see, because I thought maybe they were using event cameras, but no: they use 10-bit instead of the classic 8-bit, and they are not literally counting photons (which would not be useful). It's interesting that it improved the precision and recall of their "object detection model". I guess the image is of better quality, then.
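As a minimal sketch of why the extra bits matter (illustrative only, assuming a linear sensor response; none of this is from Tesla): 10-bit sampling keeps four distinct levels for every one level an 8-bit pipeline can represent, which mostly helps in the shadows where detections tend to fail.

    # Illustrative sketch: 10-bit vs 8-bit per-pixel quantization,
    # assuming a linear sensor. Four 10-bit levels collapse into one
    # 8-bit level when the two low-order bits are dropped.
    import numpy as np

    raw_10bit = np.array([0, 1, 3, 512, 1020, 1023])  # 10-bit range: 0..1023
    as_8bit = (raw_10bit >> 2).astype(np.uint8)        # 8-bit range: 0..255

    print(as_8bit)  # [  0   0   0 128 255 255]
    # Note the dark pixels 0, 1, 3 all become 0 in 8-bit: that shadow
    # detail is exactly what the extra two bits preserve.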
The link from 2 years ago is not particularly impressive: https://arxiv.org/abs/1406.2283 is an equivalent paper, I think, from 2014.
Not sure of the exact details; I heard they were sampling 10 bits per pixel, but a bunch of their release notes talked about photon count detection back when they switched to that system.
Given that the HW3 cameras started out being used just to generate RGB images, I suspect the current iteration works by pulling RAW-format frames and interpreting them as a photon-count grid, then detecting edges and geometry with the occupancy network.
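Here's a toy sketch of what I mean, assuming a linear sensor where the digital number is roughly gain times photons collected; the gain value and helper names are made up, and a simple gradient stands in for whatever features the occupancy network actually learns:

    # Toy sketch (my guess at the idea, not Tesla's code): treat a RAW
    # frame as an approximate photon-count grid and run a basic
    # gradient-based edge detector on it.
    import numpy as np

    def raw_to_photon_estimate(raw_frame: np.ndarray, gain: float = 0.25) -> np.ndarray:
        """Assume a linear sensor: digital number ~= gain * photons collected.
        The gain of 0.25 is an arbitrary placeholder."""
        return raw_frame.astype(np.float64) / gain

    def edge_magnitude(photons: np.ndarray) -> np.ndarray:
        """Finite-difference gradient magnitude, a crude stand-in for
        learned edge/geometry features."""
        gy, gx = np.gradient(photons)
        return np.hypot(gx, gy)

    # Fake 10-bit RAW frame: dark left half, bright right half.
    frame = np.zeros((8, 8), dtype=np.uint16)
    frame[:, 4:] = 900                  # values in 0..1023

    edges = edge_magnitude(raw_to_photon_estimate(frame))
    print(edges.max())                  # strongest response at the boundary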
I’ve not seen much of anything published by Tesla on the subject. I suspect they are keeping most of their research hush-hush to get a leg up on the competition. They share everything regarding EV tech because they want to push the industry in that direction, but I think they see FSD as their secret sauce: they might sell hardware kits, but they won’t let others too far under the hood.
I think you are absolutely correct about the interpretation of the photon count :)