SNR in Digital Cameras in 2020

There are a significant number of misconceptions about noise in digital cameras and how it depends on variables like sensor size or pixel size. In this short post I will try to explain in clear terms the relationship between Signal to Noise Ratio (SNR) and sensor size.

Signal (S) is the number of photons captured by the lens and arriving on the sensor; it is converted into an electrical signal by the sensor, digitised by an Analog to Digital Converter (ADC) and further processed by a Digital Signal Processor (DSP). The light-dependent signal is not affected by pixel size but by sensor size. There is plenty of reading on this subject and you can google it yourself using phrases like ‘does pixel size matter’. Look out for scientific evidence backed up by data and formulas, not YouTube videos.

S = P * e, where P is the photon arrival rate, which is directly proportional to the surface area of the sensor through the physical aperture of the lens and the solid angle of view, and e is the exposure time.

This equation also means that once we equalise the physical lens aperture there is no difference in performance between sensors. For example, two lenses with equivalent fields of view, 24mm on full frame and 12mm on MFT with its 2x crop, produce the same SNR when the physical aperture is equalised: a full frame lens at f/2.8 and an MFT lens at f/1.4 have the same entrance pupil diameter, since 24/2.8 = 12/1.4. This is called constrained depth of field, and as long as there is sufficient light it ensures the SNR is identical between formats.
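As a quick numerical check of that equivalence, here is a minimal sketch in Python using only the focal lengths and f-numbers from the example above (the function name is just for illustration):

```python
# Entrance pupil diameter = focal length / f-number.
# Equal pupil diameters over the same field of view mean the same
# total light collected, and therefore the same SNR.

def entrance_pupil_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

full_frame = entrance_pupil_mm(24, 2.8)  # 24mm f/2.8 on full frame
mft = entrance_pupil_mm(12, 1.4)         # 12mm f/1.4 on MFT (2x crop)

print(f"Full frame pupil: {full_frame:.2f} mm")  # ~8.57 mm
print(f"MFT pupil:        {mft:.2f} mm")         # ~8.57 mm
```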

Noise is made of three components:

  1. Photon Noise (PN) is the inherent noise in the light itself, which is made of particles even though it is approximated in optics with linear beams
  2. Read Noise (RN) is the combined read noise of the sensor and the downstream electronic noise
  3. Dark Current Noise (DN) is the thermal noise generated by long exposures heating up the sensor

I have discovered WordPress has no equation editor, so forgive me if the formulas appear rough.

Photon Noise is well modelled by a Poisson distribution and its average level can be approximated by SQRT(S).

The ‘apparent’ read noise is generally constant and does not depend on the signal intensity.

While dark current noise is fundamental to astrophotography, it can be neglected for the majority of photographic applications as long as the sensor does not heat up, so we will ignore it for this discussion.

If we write down the Noise equation we obtain the following:

Noise = SQRT(PN^2 + RN^2 + DN^2)

Ignoring DN in our application, we have two scenarios. The first is where the signal is strong enough that the Read Noise is considerably smaller than the Photon Noise; this is the typical scenario in standard working conditions of a camera. If PN >> RN the signal to noise ratio becomes:

SNR = S / PN = S / SQRT(S) = SQRT(S)

S is unrelated to pixel size but is affected by sensor size. If we take a full frame camera and one with a 2x crop factor at a high signal rate and identical f/number, the full frame camera collects four times the photons and therefore has double the SNR of the smaller 2x crop. Because the signal is high enough, this benefit is barely visible in normal conditions. If we operate at constrained depth of field, the larger sensor camera has no benefit over the smaller sensor.
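A minimal sketch of that photon-limited comparison, assuming the same f-number and exposure time so that the signal scales with sensor area:

```python
import math

crop_factor = 2.0               # MFT relative to full frame
area_ratio = crop_factor ** 2   # full frame collects 4x the photons

# Photon-limited regime: SNR = SQRT(S), so SNR scales with the
# square root of the collected signal.
snr_ratio = math.sqrt(area_ratio)          # = 2.0, i.e. double the SNR
snr_gain_db = 20 * math.log10(snr_ratio)   # ~6 dB

print(f"SNR ratio: {snr_ratio:.1f}x  ({snr_gain_db:.1f} dB)")
```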

When the number of photons collected drops, the Read Noise becomes more important than the Photon Noise. The trigger point changes with the size of the sensor: a smaller sensor becomes subject to Read Noise sooner than a larger one, but broadly the SNR benefit remains a factor of two. If we look at DxOMark measurements of the Panasonic S1 full frame vs the GH5 Micro Four Thirds, we see that the benefit is around 6 dB at the same ISO value, almost spot on with the theory.

Full Frame vs MFT SNR graph shows 2 stop benefit over 2x crop
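To illustrate where the read noise takes over, here is a small sketch of the full noise model from above; the read noise value is purely illustrative, not a measurement of either camera:

```python
import math

def snr_db(signal_e, read_noise_e):
    """SNR in dB with Poisson photon noise plus constant read noise
    (dark current ignored): Noise = SQRT(S + RN^2)."""
    noise = math.sqrt(signal_e + read_noise_e ** 2)
    return 20 * math.log10(signal_e / noise)

read_noise = 3.0  # electrons, illustrative only
for s_ff in (10000, 1000, 100, 20):
    s_crop = s_ff / 4  # 2x crop collects a quarter of the photons
    print(f"S(FF)={s_ff:6}e-  full frame {snr_db(s_ff, read_noise):5.1f} dB"
          f"  2x crop {snr_db(s_crop, read_noise):5.1f} dB")
```

At high signal the gap stays close to 6 dB; as the signal drops, the smaller sensor hits the read noise floor first and the gap widens.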

Due to the way the SNR curve drops, the larger sensor camera also has a benefit of two stops on ISO, and this is the reason why the DxOMark Sports score for the GH5 is 807 while the S1 has a Sports score of 3333, a difference of 2.046 stops. The values of 807 and 3333 are measured and correspond to ISO 1250 and 5000 on the actual GH5 and S1 cameras.
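The 2.046-stop figure is just the base-2 logarithm of the ratio of the two scores, a one-line check:

```python
import math

gh5_sports = 807   # DxOMark Sports (low-light ISO) score, GH5
s1_sports = 3333   # DxOMark Sports (low-light ISO) score, S1

print(f"{math.log2(s1_sports / gh5_sports):.3f} stops")  # ~2.046
```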

If we consider two Nikon cameras, the D850 full frame and the D7500 APS-C, we should find the difference to be one stop of ISO, with the SNR dropping at the same 3 dB per ISO increment.

The graphic from DxOMark confirms the theory.

Full Frame vs APSC SNR graph shows 1 stop benefit over 1.5x crop

If SNR does not depend on pixel size, why do professional video cameras, and some high-end SLRs, have lower pixel counts? This is due to a feature called dual native ISO. A sensor has only one intrinsic sensitivity and this cannot change, so what is happening? We have seen that when the signal drops, the SNR becomes dominated by the Read Noise of the sensor, so what manufacturers do is cap the full well capacity of the sensor, and therefore the maximum dynamic range, and apply a much stronger amplification through a low-signal amplifier stage. In order to have enough signal for this to be effective, the cameras have a large pixel pitch, so that the maximum signal per pixel is high enough that, even clipped, it benefits from the amplification. This has the effect of pushing the SNR up two stops on average. Graphs of the read noise of the GH5s and S1 show a similar pattern.

Panasonic Dual Gain Amplifier in MFT and Full Frame cameras shows knees in the read noise graphs

Some manufacturers like Sony appear to use dual gain systematically even with a smaller pixel pitch; in those cases the benefit is reduced from 2 stops to sometimes 1 or less. Look carefully at the read noise charts on sites like Photons to Photos to understand the kind of circuit in your camera and make the most of the SNR.
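A hedged sketch of how such a dual-gain readout behaves; all the numbers below are invented for illustration and are not taken from any Panasonic or Sony datasheet. The high-gain path lowers the effective read noise at low signal, at the cost of a capped full well and therefore a capped maximum dynamic range.

```python
import math

# Illustrative dual-gain model (invented numbers, not measured values).
FULL_WELL_LOW_GAIN = 40000    # electrons, full dynamic range path
FULL_WELL_HIGH_GAIN = 10000   # electrons, capped by the low-signal amplifier
READ_NOISE_LOW_GAIN = 3.0     # electrons
READ_NOISE_HIGH_GAIN = 1.5    # electrons, after the extra amplification

def snr_db(signal_e, read_noise_e):
    return 20 * math.log10(signal_e / math.sqrt(signal_e + read_noise_e ** 2))

for s in (50, 20, 10, 5):
    print(f"S={s:3}e-  low gain {snr_db(s, READ_NOISE_LOW_GAIN):4.1f} dB"
          f"  high gain {snr_db(s, READ_NOISE_HIGH_GAIN):4.1f} dB")

# The price: engineering DR (full well / read noise) is capped in high gain.
print(f"DR low gain:  {math.log2(FULL_WELL_LOW_GAIN / READ_NOISE_LOW_GAIN):.1f} stops")
print(f"DR high gain: {math.log2(FULL_WELL_HIGH_GAIN / READ_NOISE_HIGH_GAIN):.1f} stops")
```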

Because most low light situations have limited dynamic range, and the viewer is more sensitive to noise than to DR, once the noise rises above a certain floor the limitation in DR is seen as acceptable. The actual DR falls well below values that would be considered acceptable for photography, but with photos you can intervene on noise in post processing, not on DR, so the highest DR is always the priority there. This does not mean, however, that one should artificially inflate requirements by introducing incorrect concepts like Useable DR, especially when the dual gain circuit reduces the maximum DR. Many cameras from Sony, Panasonic and other manufacturers have a dual gain amplifier, sometimes advertised and other times not. An SNR of 1, or 0 dB, is the standard used to define useable signal, because you can still see an image when noise and signal are comparable.

It is important to understand that once depth of field is equalised all performance indicators flatten, and the benefit of one format over the other lies at the edges of the ISO range, at very low and very high ISO values; in both cases it is the ability of the sensor to collect more photons that makes the difference, net of other structural issues in the camera.

As the majority of users do not work at the boundaries of the ISO range or in low light, and the differences at the more usual values get equalised, we can understand why many users prefer smaller sensor formats, which make not just the camera bodies smaller, but also the lenses.

In conclusion, a larger sensor camera will always be superior to a smaller sensor camera, regardless of the additional improvements made by dual gain circuits. A full frame camera will be able to offer sustained dynamic range together with acceptable SNR up to higher ISO levels. Looking for example at the video-oriented Panasonic S1H, the trade-off point of ISO 4000 is sufficient on a full frame camera to cover most real-life situations, while the ISO 2500 of the GH5s leaves out a large chunk of night scenes where, in addition to good SNR, some dynamic range may still be required.

HDR or SDR with the Panasonic GH5

As you have read, I have been at the forefront of HDR use at home. I have a total of 5 devices with HDR certification, of which 2 support all standards all the way to Dolby Vision and 3 support at least HLG and HDR-10. My content consumption consists mostly of Netflix or Amazon originals and occasional BBC HLG broadcasts streamed concurrently with live programmes. So it is fair to say I have some practical experience on the subject, and two years ago I started writing about shooting HLG with the GH5. This was mostly limited by the lack of editing capabilities on the display side, but recently macOS 10.15.4 has brought HDR-10 support, which means you can see an HDR signal on a compatible HDMI or DisplayPort device. This is not HLG, but there are ways around it, as I wrote in a recent post. This post makes some considerations on the issues of shooting HDR and why, as of 2020, shooting SDR Rec709 with your Panasonic GH5 is still my preferred option for underwater video and beyond.

Real vs Theoretical Dynamic Range

You will recall the schematic of a digital camera from a previous post.

This was presented to discuss dual gain circuits, but if you ignore the two gain circuits it remains valid. In this post we will focus on the ADC, which stands for Analog to Digital Converter. Contemporary cameras have 12- and 14-bit ADCs; typically 14-bit ADCs are the prerogative of DSLRs or high-end cameras. Simplifying to the extreme, the signal arriving at the ADC is digitised on a 12- or 14-bit scale. In the case of the GH5 we have a 12-bit ADC; it is unclear if the GH5s has a 14-bit ADC despite producing 14-bit RAW, so for the purpose of this post I will ignore this possibility and focus on the 12-bit ADC.

12 bits means you have 4096 levels of signal for each RGB channel, which effectively means the dynamic range limit of the camera is 12 Ev, as this is defined as Log10(4096)/Log10(2) = 12. Stop, wait a minute, how is that possible? I have references saying the Panasonic GH5 dynamic range is 13 Ev; how did this become 12?
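The arithmetic behind that limit, as a quick sketch:

```python
import math

for adc_bits in (12, 14):
    levels = 2 ** adc_bits
    dr_ev = math.log10(levels) / math.log10(2)  # equivalent to log2(levels)
    print(f"{adc_bits}-bit ADC: {levels} levels -> {dr_ev:.0f} Ev maximum")
# 12-bit ADC: 4096 levels -> 12 Ev maximum
# 14-bit ADC: 16384 levels -> 14 Ev maximum
```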

Firstly, we need to ignore the effect of oversampling and focus on the 1:1 pixel ratio, and therefore look at the Screen measurement, which shows just a bit more than 12 Ev. We then have to look at how DxOMark measures dynamic range; this is explained here. In real life we will not be shooting a grey scale but a coloured scene, so unless you are taking pictures of the moon you will not get much more than 12 stops in any scenario, as the colours will eat into the data.

So far this concerned RAW sensor data; de-mosaicing and digital signal processing will further deteriorate the DR when the signal is converted down to 10 bits, even if a nonlinear gamma curve is put in place. We do not know the real useable DR of the GH5, but Panasonic's statement when V-Log was announced referenced 12 stops of dynamic range using a logarithmic curve, so we can safely conclude that the best case is 12 stops when a log curve is used and 10 for a gamma curve with a constant correction factor. Again, it is worth stressing that the 12 stops of DR is the absolute maximum, at the camera setting with 0 gain applied, aka base or native ISO, which for the GH5 is 200, corresponding to 400 in log modes.

Shooting HLG vs SDR

Shooting HLG with the GH5 or any other prosumer device is not easy.

The first key issue in shooting HLG is the lack of monitoring capabilities on the internal LCD and on external monitors. Let’s start with the internal monitor, which is not capable of displaying HLG signals and relies on two modes:

  • Mode 1: prioritises the highlights wherever they are
  • Mode 2: prioritises the subject, i.e. the centre of the frame

In essence you are not able to see what you get during the shot. Furthermore, when you set the zebra to 90% the camera will rarely reach this value. You need to rely on the waveform, which is not user friendly in an underwater scene, or on the exposure meter. If you have an external monitor you will find, if you look carefully at the spec, that the screens are Rec709, so they will not display the HLG gamma even though they will correctly record the colour gamut. https://www.atomos.com/ninjav : if you read under HDR monitoring gamma you see BT.2020, which is not HDR, it is SDR. So you encounter the same issues, albeit on a much brighter 1000-nit display than the LCD, and you need to either adapt to the different values of the waveform or trust the exposure meter and zebra, which as we have said are not very useful as it takes a lot to clip. On the other hand, if you shoot an SDR format the LCD and external monitor will show exactly what you are going to get, unless you shoot in V-Log; in that case the waveform and the zebra will need to be adjusted to consider that the V-Log absolute maximum is 80% and 90% white sits at 60%. Once you apply a monitor LUT, however, you will see exactly what you are going to get on the internal or external display.

Editing HLG vs SDR

In the editing phase you will be faced with similar challenges, although as we have seen there are workarounds to edit HLG if you wish. A practical consideration is around contrast ratio. Despite all the claims that SDR is just 6 stops, I have actually dug out the BT.709, BT.1886 and BT.2100 recommendations, and this is what I have found.

Standard    Contrast Ratio    Max Brightness (nits)    Min Brightness (nits)    Analog DR (stops)
BT.709      1,000:1           100                      0.1                      9.97
BT.1886     2,000:1           100                      0.05                     10.97
BT.2100     200,000:1         1,000                    0.005                    17.61

Specifications of ITU display standards

In essence, Rec709 specifies a contrast ratio of 1000:1, which means 9.97 stops of DR, and already allows for 8- and 10-bit colour. BT.1886 was issued to acknowledge that CRT screens no longer exist, and this takes the DR to 10.97 stops. BT.2100 has a contrast ratio of 200000:1, or 17.61 stops of DR.
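The Analog DR columns in these tables are simply the base-2 logarithm of the contrast ratio; here is the calculation for the ITU standards above:

```python
import math

standards = {
    "BT.709": (100, 0.1),      # max / min brightness in nits
    "BT.1886": (100, 0.05),
    "BT.2100": (1000, 0.005),
}

for name, (max_nits, min_nits) in standards.items():
    contrast = max_nits / min_nits
    print(f"{name}: {contrast:,.0f}:1 -> {math.log2(contrast):.2f} stops")
# BT.709: 1,000:1 -> 9.97 stops
# BT.1886: 2,000:1 -> 10.97 stops
# BT.2100: 200,000:1 -> 17.61 stops
```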

Standard             Contrast Ratio    Max Brightness (nits)    Min Brightness (nits)    Analog DR (stops)
HDR400               1,000:1           400                      0.4                      9.97
HDR500               5,000:1           500                      0.1                      12.29
HDR600               6,000:1           600                      0.1                      12.55
HDR1000              20,000:1          1,000                    0.05                     14.29
HDR1400              70,000:1          1,400                    0.02                     16.10
HDR400 True Black    800,000:1         400                      0.0005                   19.61
HDR500 True Black    1,000,000:1       500                      0.0005                   19.93

DisplayHDR Performance Standards

Looking at HDR monitors you see that, with the exception of OLED screens, no consumer device can meet the BT.2100 standard; so even if you have an HDR monitor, in most cases it falls short of the BT.2100 recommendation.

Our GH5 is capable of a maximum of 12 stops of DR in V-Log, and maybe a bit more in HLG, however those values are far below the BT.2100 recommendation and more in line with BT.1886. If we look at the DxOMark DR charts we see that at ISO 1600 nominal, which is in effect just above 800, the DR has fallen below 10 Ev. Consider that this is engineering DR: practically speaking you are getting your 12 stops only at ISO 200, and your real HDR range is limited to the 200-400 ISO range, which makes sense as those are the bright scenes. Consider also that log photo styles start at ISO 400, but this really translates to ISO 200 on this chart, as well as in exposure values. Unless you are shooting at low ISO you will get limited DR improvement. Underwater it is quite easy to be at ISO higher than 200, and even when you are at 200, unless you are shooting the surface, the scene has limited DR anyway. Generally, 10 stops are more than adequate, as this is what we get when we produce a JPEG from a RAW file.

Viewing HDR

I think the final nail in the coffin arrives when we look at where the content will be consumed.

Device        Contrast Ratio    Max Brightness (nits)    Min Brightness (nits)    Analog DR (stops)
IPS/Phones    1,000:1           350                      0.35                     9.97
LED TV        4,000:1           400                      0.1                      11.97
OLED          6,000,000:1       600                      0.0001                   22.52

Typical Devices Performance

Phones have IPS screens, with some exceptions, and contrast ratios below 1000:1, and so do computer screens. If you share on YouTube you will know that phones and computers constitute around 85% of playback devices. TVs are around 10%, and only a small part of those will be HDR. So, other than your own home, you will not find many HDR devices out there to do justice to your content.

10-bits vs 8 bits

It is best practice to shoot 10 bits, and both SDR and HDR support 10-bit colour depth. For compatibility purposes SDR is delivered with 8-bit colour and HDR with 10-bit colour.
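For reference, the difference in code values per channel between the two delivery depths (a simple count, nothing camera specific):

```python
for bits in (8, 10):
    print(f"{bits}-bit: {2 ** bits} code values per channel")
# 8-bit: 256 code values per channel
# 10-bit: 1024 code values per channel
```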

Looking at the tonal range for RAW files at 8 megapixels, we see that the camera has 24 bits of depth over RGB; this means 8 bits per channel and 9 bits of tonal range. Tonal range is grey levels, so in short the camera will not produce 10-bit colour but will have more than 8 bits of grey tones, which are helpful to counter banding but only at low ISO, so more useful for blue skies than for blue water. Considering that images for photo competitions are JPEGs, and that nobody has felt the need for something more, we can conclude that as long as we shoot at high bit rate in something as close to a raw format as possible, 8 bits for delivery are adequate.

Cases for HDR and Decision Tree

There are cases where shooting HLG can be meaningful; those include snorkelling at the surface on bright days. You will not be going to depth, so the footage will look good straight off the camera, and likewise for bright shots in the sun at the surface. But generally the benefit will drop when the scene has limited DR, or at higher ISO values where DR drops anyway.

What follows is my decision tree to choose between SDR and HDR and between 10-bit and 8-bit formats. I like my pictures and my videos to look better than life, and I think editing adds value to the imaging, although this is not an excuse for poor capture. There are circumstances where editing is less important, namely when the scene is amazing by itself and requires no extra help, or when I am looking at fast paced, documentary style scenes that do not benefit from editing. For the rest, my preference remains for editing friendly formats and high bit rate 10-bit all-intra codecs. Recently I have purchased the V-Log upgrade and I have not found it difficult to use or expose, so I have included it here as a possible option.

The future of HDR

Except in a cinema-like setting with dark surroundings and low ambient light, HDR mass consumption remains challenging. Yes, you can have high peak brightness, but not a high contrast ratio, and that much can already be obtained with SDR for most viewers. There is a lot of noise in the cinema community at present because the PQ curve is hard to manage and the work in post processing is multiplied; clearly PQ is not a way forward for broadcasting, and HLG will prevail thanks to the pioneering efforts of the BBC, but the lack of monitoring and editing devices means HLG is not going to fit cine-like scenarios and small productions. It could be a good fit for a zero-edit shooter, someone who likes to see the scene as it was.

Conclusion

When marketing myths and incorrect information are netted out, we realise that our prosumer devices are very far away from what would be required to shoot, edit and consume HDR. Like many other things in digital imaging, it is much more important to focus on shooting technique and how to make the most of what we have, instead of engaging in a quest for theoretical benefits that may not exist.