• 6 Posts
  • 25 Comments
Joined 1 year ago
Cake day: June 14th, 2023







  • Heh, sorry if I sound so pedantic, but I thought that would be my final message. Let me try with this one.

    Yes, the steps are empty, expose, measure. I was just trying to explain the difference I’m finding in the electronic handling of these steps between MS and ES. In both cases though, the pixels experience the same three steps in the same sequence: empty, expose, measure.

    The “only” difference I’m finding is that in MS the “measuring” step effectively works differently than in ES: at each photosite, “measuring” just means sequentially reading out the amount collected, and that is limited by the readout speed.

    In ES, the “measuring” step is preceded every time by “emptying” and “exposing”, done sequentially and all controlled electronically.

    This makes me think that there is some freedom in choosing when to empty and expose pixels electronically, and that it isn’t really limited by any particular speed or sequence: in MS you don’t really care because the curtains do the job, while in ES you must control precisely when to empty a pixel and when to expose it.

    This would also agree with how I understand EFC-S works: the closing of the curtain is limited by the curtain speed, so you have to empty and expose each pixel at the right time so that the curtain ends the exposure having collected the specified amount of light. So if you’re shooting at 1/1000 and the curtain sweep takes 1/250, you follow the ES method and empty+expose those buckets a tiny bit before the curtain passes over them. And since after the curtain it’s darkness, just like in MS you can wait for the readout without worry.

    Therefore, my understanding is that, electronically speaking, emptying and exposing the pixels can happen at very high speed, independently of the measuring step: in MS, you empty and expose all of the pixels at once (from the sensor’s POV; of course it’s the curtain that does the exposure job), and then you measure them - the measuring is done once, “alone”; in ES you empty, expose, and measure each pixel individually - the measuring is coordinated with the emptying and exposing (which also agrees with your beautiful pseudo-code).

    EDIT: maybe it can be reduced to the emptying action alone. You can empty whenever you want, and since the pixel starts gathering light again as soon as you’re done emptying it, as long as you time the emptying correctly the exposure can happen however you prefer: by the curtains in MS, by emptying at the right time so that the pixel is correctly exposed when the readout reaches it in ES, and by emptying right before the curtain passes over it in EFC-S.

    To me, EFC-S takes the best of both worlds: no fear of distortion because the reading phase is done in darkness, and a higher theoretical shutter (i.e. exposure) speed because you can empty+expose each pixel electronically almost right as the curtain closes above it.
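
    Just to check that this picture holds together, here is a tiny toy timeline written in the same spirit as your pseudo-code. Everything in it (the numbers, the per-row timestamps, the function names) is just my own made-up illustration of the empty/expose/measure order, not how a real sensor is actually driven:

    ```python
    # Toy timeline of "empty, expose, measure" per row for MS, ES and EFC-S.
    # Every number and name here is invented, purely to compare when each row
    # is exposed versus when it is read; real sensor driving is more involved.

    ROWS = 4
    EXPOSURE = 1 / 1000        # chosen shutter speed, seconds
    READOUT_PER_ROW = 1 / 100  # deliberately slow readout, seconds per row
    CURTAIN_SWEEP = 1 / 250    # time for the closing curtain to cross the sensor

    def mechanical_shutter():
        """All rows are emptied and exposed (by the curtains) at ~the same
        moment; only the slow measuring happens later, in darkness."""
        timeline = []
        for row in range(ROWS):
            exposed = (0.0, EXPOSURE)                       # empty + expose together
            measured_at = EXPOSURE + row * READOUT_PER_ROW  # measure slowly, in the dark
            timeline.append((row, exposed, measured_at))
        return timeline

    def electronic_shutter():
        """Each row is emptied, exposed and measured in turn, so the exposure
        window itself drifts at the readout speed -> rolling shutter."""
        timeline = []
        for row in range(ROWS):
            start = row * READOUT_PER_ROW         # empty, one row at a time
            exposed = (start, start + EXPOSURE)   # expose
            measured_at = start + EXPOSURE        # measure right away
            timeline.append((row, exposed, measured_at))
        return timeline

    def efcs():
        """Each row is emptied just EXPOSURE before the curtain covers it, so the
        exposure window drifts only at the (fast) curtain speed; the measuring
        happens afterwards, in darkness."""
        timeline = []
        for row in range(ROWS):
            curtain_at = (row + 1) / ROWS * CURTAIN_SWEEP
            exposed = (curtain_at - EXPOSURE, curtain_at)   # empty right before the curtain
            measured_at = CURTAIN_SWEEP + row * READOUT_PER_ROW
            timeline.append((row, exposed, measured_at))
        return timeline

    for name, mode in (("MS", mechanical_shutter), ("ES", electronic_shutter), ("EFC-S", efcs)):
        for row, (t0, t1), t_read in mode():
            print(f"{name} row {row}: exposed {t0:.4f}-{t1:.4f} s, read at {t_read:.4f} s")
    ```

    If I print that out, in MS every row shares the same exposure window and only the reading time drifts (harmless, since it’s dark); in ES the exposure window itself drifts at the slow readout speed; in EFC-S it drifts only at the fast curtain speed and the reading still happens in the dark. That’s exactly the difference I was trying to put into words.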

    I hope you understand that I truly appreciate your help, and I’m sorry I keep hammering at this stuff. Thank you for all the patience you have with me.


  • First of all, your replies are worth waiting for :) It’s not the first time we’ve interacted; maybe you didn’t notice, but I sure did, and I’m glad.

    I should have specified: I understand that in MS rolling shutter is just hard to get, not impossible. After all, the curtains don’t sweep the sensor instantaneously. But thank you very much for the detailed demonstration!

    In general, I think I’ve figured it out now. The buckets are always open, but I had it backwards: we don’t have lids to close to limit the exposure; we straight up flip the buckets and empty them to control how much light they contain. (Btw, I love the pseudo-code you put there. My mind works a lot “by rules”, and that code example is spot on; it’s crystal clear to me. Thank you.)

    Let me try to explain so you can tell me if I understood or not:

    • In MS, the buckets are always open, in total darkness, until the shutter flies above them and lets a limited amount of light fall into them. The shutter is so fast that the image each bucket sees is chronologically very close to the others. Once the shutter has passed, the buckets are back in total darkness, so no other light is collected, and the readout can piece together the light collected by every bucket. It does it slowly, but since no light is coming in anymore this does not cause any “distortion”. Important point: the readout does not empty each bucket before reading it. It arrives at the next bucket and measures the content.

    • In ES, the buckets are always open, in full light; then the readout starts and, at each bucket, it empties it (it’s probably full, but since it’s going to be emptied nobody cares about or sees what was collected) and then flips it back up for the specified amount of time before immediately reading it, then it moves on to the next bucket. This way, the light read by the readout is always the correct amount, but since it’s collected at the very slow readout speed, the image that is pieced together will be chronologically skewed in the direction of the readout (the top of the image is “before” the bottom in a tangible way). Important point: the readout does empty each bucket before reading it. It arrives at the next bucket, empties it, and waits for the specified time interval before measuring the content.

    So I guess the last piece I need is: there is a “way” to choose whether or not to empty the charge accumulated by each pixel when reading it. In MS it’s chosen (at some mechanical level, I suppose) that the readout doesn’t empty the accumulated charge before measuring it at each pixel, while in ES it does empty it before measuring it. Not only that: in MS the readout happens immediately, while in ES it happens only after the exposure time has been waited out.
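
    If it helps, this is how I’d phrase my two bullet points as code (a completely made-up toy; the scene values and the 1/1000 s are just placeholders, I only want to express the “empty or don’t empty before measuring” difference):

    ```python
    # Toy sketch: the same sequential readout, with or without emptying first.

    scene = [0.2, 0.5, 0.9, 0.4]  # light hitting each bucket per unit time (made up)

    def readout_ms(charges):
        """MS: the curtains already did empty+expose and it's dark now,
        so the readout only measures what each bucket already holds."""
        return [c for c in charges]            # measure, no emptying

    def readout_es(exposure):
        """ES: at each bucket the readout empties it, lets it fill for
        `exposure`, then measures it, before moving to the next bucket."""
        image = []
        for light in scene:
            charge = 0.0                       # empty
            charge += light * exposure         # expose (light is still falling)
            image.append(charge)               # measure
        return image

    # MS: the buckets were all filled together by the curtains, e.g. for 1/1000 s
    print(readout_ms([light * (1 / 1000) for light in scene]))
    # ES: each bucket gets the same 1/1000 s, but is reached one bucket at a time
    print(readout_es(1 / 1000))
    ```

    Both end up with the same amounts in the final picture; the only thing that changes is whether the readout resets and waits before measuring, which is the “way to choose” I was trying to describe.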

    I’m way out of my element when it comes to microelectronics, circuitry, and engineering in this field (I don’t even know what it’s called), so probably even if I got a detailed answer to this I wouldn’t be able to fully comprehend it.

    If my intuition is correct, then I can say I finally have an answer to this (for me) burning question I’ve had for a while. I guess in the end there is indeed a fundamental difference in how the sensor works between MS and ES. I feel good knowing that at least my hunch was right; what I imagined was just so much more complicated that it felt like it couldn’t be correct. Your explanation is much more “acceptable” for my brain :)


  • Every time I feel like it makes perfect sense, I find something that shatters it all.

    Here’s what makes sense to me after your explanation: the buckets have the lid off all the time, so anytime light falls on them, it gets collected. This makes perfect sense for the mechanical shutter: the buckets sit there in complete darkness waiting for light, and when the curtains fly over them some light falls inside, and that light all fell at almost the same time. Basically they all saw the same thing. After that they stay open, but since it’s total darkness the amount of light stays the same, and the readout can calmly go to each bucket and measure what’s inside. This works perfectly. An assumption I make in this case is that the buckets don’t really have a lid at all: you don’t control when they close, you can only control how much light enters by means of the shutter. Is this assumption correct?

    Now, I need to apply the same logic to the electronic shutter, because otherwise I go crazy. With the electronic shutter you must have a way to control that lid, otherwise you can’t control the exposure. With the mechanical shutter the curtains did the job; with the electronic one, light is already falling inside those buckets all the time, so you must have a way to limit that, and it cannot be the readout alone, because then you would be limited to the readout speed as the shortest exposure time, which is not the case.

    If you indeed can limit the amount of light, then there are two separate mechanisms interacting with the buckets on the same sensor, and in principle this seems weird to me; I feel like this is not correct, but I might be mistaken.

    With a way to limit the amount of light, it would then make sense that, in ES, if I imagine the readout as the thing that adds pieces of the picture sequentially, each bucket waits with its lid closed, opens the lid for a tiny bit, is filled by the set amount of light, and is immediately measured by the readout, adding a piece to the picture. Since it takes a long time to measure this one bucket, the next one just stands waiting with its lid closed. For this reason the moment of light collection is limited by the readout speed, and it makes sense that by the time you’re done with all the buckets a lot of time has passed (because the readout is slow), so the rolling shutter effect happens; but they all had their lid lifted for the set amount of time, so the amount of light is correct.

    The two scenarios make sense to me only if there are two ways to expose the buckets. And I feel like this is just wrong and I’m missing a tiny piece of information that lets me imagine a single mechanism that can do both.



  • Thanks for getting into the details.

    I’m still not 100% there…

    I’m still not sure how the exposure is started: is it row by row, or is it the entire sensor at once, with just the readout being sequential? In the latter case it would make sense to me, because the pixels would gather 1/1000 s of light and then, even though the readout is slow, they are physically not receiving any more light because the shutter is closed and it’s dark, so the image is not affected. If the exposure starts sequentially as well, at the readout speed, I don’t understand how it could keep up with the curtains: the curtains would fly over the sensor while the pixels hadn’t yet had the chance to be “activated”, so only the first few pixels would see something and the rest would be exposed when the curtain has already closed.

    When using the electronic shutter, though, the pixels must not be activated all at once, because then the first row would get less light than the last rows, and it would also mean that the exposure time itself is the readout time, which is not the case. So they are activated sequentially, which is in contrast with how it works for the mechanical shutter. In this case, though, like in my post, how can the image “move” if the pixels are exposed for only 1/1000 s? Is it because the next pixel is exposed for 1/1000 s only after the first one is read, so there is a delay between one pixel being exposed and the next? Like pixel #2 not firing for 9/1000 s because it’s waiting for the first one to be read by the 1/10 s readout.

    What am I getting wrong here? I’m sure there is some misconception in my mind that is preventing me from seeing clearly what is going on, like everyone else does.





  • So, quick update. I’ve gotten a cheap “diffuser” to put in front of the built-in flash (this one), and I’ve gone outside after dark, with some garden lamps around, using my EF-S 24mm f/2.8 (so I can see how things look at this aperture).

    Well, first of all I’ve found out that, man, flash photography is not intuitive, at least for me. I still don’t understand how to expose the pictures, especially with ETTL which, to my understanding, handles exposure “automatically”, because all my pictures just look the same: dark background, subject well lit.

    I’ve read how, with this kind of photography, changing ISO, shutter speed, and aperture is supposed to change the exposure of the background, NOT the subject (which is supposedly handled by the camera+flash). Yet I must be doing something wrong, because the background comes out so dim every time.
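
    For what it’s worth, this is the mental model I’m working from. The numbers are invented, and the “flash output is picked to hit a target” part is only my assumption of what ETTL roughly does, so please correct me if the model itself is wrong:

    ```python
    # Toy model of why ISO/shutter/aperture should move the background while
    # the flash keeps the subject constant. All values are made up; "ETTL"
    # here just means "pick the flash energy that lands the subject on a target".

    TARGET = 1.0  # what the camera tries to make the subject

    def ambient(lum, iso, shutter_s, f_number):
        """Relative exposure from continuous (ambient) light."""
        return lum * iso * shutter_s / (f_number ** 2)

    def shot(bg_lum, subj_lum, iso, shutter_s, f_number):
        subj_ambient = ambient(subj_lum, iso, shutter_s, f_number)
        # pretend-ETTL: the flash adds whatever is missing to reach TARGET
        # (the flash burst is ~instantaneous, so shutter speed doesn't change it)
        flash = max(TARGET - subj_ambient, 0.0)
        subject = subj_ambient + flash
        background = ambient(bg_lum, iso, shutter_s, f_number)
        return subject, background

    # Same dim scene, only the shutter speed changes: the subject stays pinned
    # near 1.0 while the background brightness scales with the shutter speed.
    for t in (1 / 200, 1 / 60, 1 / 15):
        print(t, shot(bg_lum=0.02, subj_lum=0.02, iso=800, shutter_s=t, f_number=2.8))
    ```

    If that model is roughly right, my dark backgrounds would simply be ambient underexposure: the flash pins the subject, and at around 1/200 and f/2.8 the garden lamps barely register, so I’d need a slower shutter and/or higher ISO to drag the background up. Is that the right way to think about it?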

    If you have any tips that could help me out I would greatly appreciate it.



  • I’ll reply here also regarding the pictures. Thanks for sharing them! They look sick. If I understood correctly, this is more or less what a 24mm would produce for me, which I recognize as familiar, and I quite like it.

    I’m used to shooting either at 24mm (~39mm FF equivalent) or 50mm (~80mm); only recently have I been experimenting with 15-30mm (~24-48mm), and I’ve been loving it because I can capture so much of the scene at 15mm, while I can get a nice “flat” picture at 30mm (I’m further away and I can also capture the “around” of the subject, so it feels less like a fisheye. Hopefully you understand what I mean).

    I hadn’t realized I was starting to shoot at the “sweet spot” that is 50mm, which everyone seems to love. I understand why now!

    I’m more and more convinced by the RF 15-35mm f/2.8. I know I won’t have the same light as the f/1.4 or f/1.8 lenses I’ve looked at, but at this point I don’t think I can give up the flexibility of the zoom: I’d like to shoot at 15mm for the scenery, 24mm for the close ups and group photos, and 35mm for portraits and details, and I feel like this is the only lens which lets me do all of this, even though it will limit me by giving up some light. Am I being reasonable?

    Hopefully I’m not disappointing you and the other kind commenters who have advised me to pick a fast prime and bring a flash.

    I almost certainly won’t bring a flash with me because of the bulk, but I was thinking about diffusing the integrated flash I have on the R10 with one of those, albeit janky-looking, light diffusers that mount on the hotshoe and sit in front of the pop-up flash. This might help me out when f/2.8 is not enough to let me shoot at 1/100-1/125 (below this I get mixed results with IS, and unusable ones without it).

    While looking around I’ve also found the Sigma 18-35mm f/1.8 DC HSM ART, which I can rent as well. It doesn’t come with IS and I’ve read that the autofocus is not as reliable, and I’ll also lose some field of view at the wide end, but with that large an aperture I might gather enough light to compensate for the lack of IS. What do you think? How much difference is there between f/2.8 and f/1.8? Is it enough for me to use it without IS?



  • Thanks a lot for the help!

    I have experience with a 24mm prime and I was never truly satisfied with landscape/scenery pictures, because it was too narrow for them, nor with portrait pictures, because it was too wide. Of course, this is with it in my hands; there certainly are people out there who can use it better than I do.

    I’m using an RF 15-30 f/4.5-6.3 when it’s bright outside and I like it a lot; I like the flexibility it gives me. That is why I’m torn: I’m tempted by the sheer brightness of the primes I’ve found, but also by the flexibility those zooms give me.

    I don’t think I’ll be using an external flash because I fear my setup will become too cumbersome to carry around, and I prefer the feel I get from pictures when there is no flash involved :(



  • Thanks a lot for the input!

    Yes, I will be taking pictures of both the place and the people, especially of people being natural and not concerned about me, so I would avoid the flash for that reason too (also, the setup becomes too bulky with one).

    I also very much enjoy shooting at 24mm, but I’m not too comfortable with it in these situations; I’d like to be closer to the subjects but further from the scenery (if that even makes sense). But since I’m renting the lens, I can go for something expensive that does both!

    That 15-35 f/2.8, is it the RF one? It’s one of the lenses I’m considering. Do you have like a sample gallery I can take a look at?


  • Thanks for the reply!

    I thought of the monopod, but the issue is always that all I can bring with me without the camera becoming too much of a hassle to carry around is the body and a lens, so I’m afraid I won’t be able to use the monopod.

    I’m sure the 24mm can do a bit of both scenery and portraits; I’m used to that focal length because I’ve used it for quite a while, though I prefer shorter for scenery and longer for portraits.

    And you’re right, I need to keep in mind that I have to enjoy the party too, which is why I’m being so restrictive about the gear I’m taking with me, and also why I’m willing to take the most expensive lens there is for the job: renting it won’t cost too much anyway!


  • I’m finding just now some time to reply. First of all, thank you for your advice!

    I don’t think I can bring an external flash with me because I’m a guest too and I’ll be going around the place dancing and eating, and I fear it would become too bulky if I also have a big flash on the camera :( I won’t be the official photographer, rather the nerdy cousin with a camera, so nobody expects much from me; I just really care because I love taking photos. The field is quite large and I can move around freely, and in my mind I’d like to take pictures of the scene from relatively afar or with no specific subject, but also group pictures with two or three people and even portraits.

    I totally understand the DoF argument: at f/1.4 I might get a very bright picture of a sharp nose with blurry eyes! I’ll stop down a bit in those cases and hope for the best. Correct me if I’m wrong, but when taking “scenery” pictures with no close subject, won’t the DoF become larger (I mean, a larger range of things in focus)? So in those cases I could stay wide open.
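
    To convince myself of that, I plugged the standard thin-lens DoF approximation into a quick script (the circle-of-confusion value and the distances are just numbers I picked, so treat it as a rough sanity check rather than anything exact):

    ```python
    # Rough DoF check: does depth of field grow quickly with subject distance?
    # Uses the usual hyperfocal/near/far-limit approximations; c is a typical
    # APS-C circle of confusion, chosen by me.

    def hyperfocal_mm(f_mm, n, c_mm=0.019):
        return f_mm ** 2 / (n * c_mm) + f_mm

    def dof_mm(f_mm, n, s_mm, c_mm=0.019):
        """Total depth of field (mm) for a subject at distance s_mm."""
        h = hyperfocal_mm(f_mm, n, c_mm)
        if s_mm >= h:                    # past the hyperfocal distance,
            return float("inf")          # everything to infinity is acceptably sharp
        near = s_mm * (h - f_mm) / (h + s_mm - 2 * f_mm)
        far = s_mm * (h - f_mm) / (h - s_mm)
        return far - near

    for s_m in (1, 3, 10):
        print(f"24mm f/1.4, subject at {s_m} m: DoF ~ {dof_mm(24, 1.4, s_m * 1000) / 1000:.2f} m")
    ```

    With those numbers I get roughly 0.1 m of DoF at 1 m, almost 1 m at 3 m, and over 10 m at 10 m, with the hyperfocal distance around 20 m, so if that’s right I really can stay wide open for scenery with nothing close to the lens.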

    I have an EF-S 24mm f/2.8 which is light and bright, and which I could use for the scenery (even though I prefer a shorter focal length for this), but from my experience it’s not narrow enough when I want to take pictures of people looking at me; and I have an RF 50mm f/1.8 which is also light and very bright, and super sharp, but it’s kinda too narrow for portraits and definitely too narrow for scenery. I also have an RF 15-30 f/4.5-6.3 which I quite enjoy, but it’s unusable for me in low light…

    The reason I need a nice lens to rent is precisely that I am not comfortable in low light in general, except for close portraits. I know I’m asking for everything here while also putting heavy restrictions on what I can use, so maybe I’m just dreaming, there isn’t a setup that fits all my requirements, and I’ll have to compromise.




  • Yeah, you’re right, I messed up again. You use multiple wide-aperture shots to capture several shallow-DOF pictures, and then stack the in-focus planes of each picture into one, creating a deeper DOF. Of course, lol, sorry for the confusion.

    Thank you for the great info! I’m learning a lot!