I have been spending some time looking at ProRes RAW over the last few weeks and have come to some conclusions that I wanted to share with you.
First of all, ProRes RAW, together with the unsupported CinemaDNG, is the only RAW video codec that is not camera-manufacturer specific. This is a benefit, of course, as it makes your workflow camera independent.
The Atomos implementation is based on transferring the raw signal over HDMI and capturing it to disk, so it is limited by HDMI bandwidth.
HDMI 2.0 imposes a limit of 12 bits of depth.
HDMI 2.1 will remove this limit and allow 14+ bits of depth; however, it is not yet mainstream.
So if you have a camera with an APS-C (Super35) or full frame sensor capable of 14-bit RAW, you will be limited by two things:
Sensor readout: in video mode most full frame cameras can't read out the full frame at video recording speed. Super35 sensors, being smaller, are better in that respect.
HDMI bandwidth: your camera will not have more than HDMI 2.0.
So, in short, the only camera on the market right now that can fully benefit from ProRes RAW is the ZCAM E2 (see link).
This camera needs an external monitor, and you need a Ninja V anyway to record ProRes RAW, but it does have a Nauticam housing.
So right now your only option to fully exploit ProRes RAW underwater is a ZCAM E2 with a Nauticam housing and an HDMI 2.0 connection.
Super35 cameras remain in the professional domain; the Varicam EVA1 has ProRes RAW support, still limited to 12 bits.
Nauticam makes a housing for it. It is of course expensive; this is really pro-grade gear.
Due to readout and scaling limitations there are no full frame cameras that can output 4K RAW. However, the S1H can offer a cropped Super35 (APS-C) output to ProRes RAW, and it also has a Nauticam housing supporting HDMI 2.0. I am not considering the Nikon Z series, a total fail, in this review.
Sharpness and lens correction
MFT lenses are auto-corrected; other formats aren't, and we don't know what ProRes RAW does about lens corrections.
Furthermore, we have no details of which demosaicing procedure is embedded in ProRes RAW. Cameras without an antialiasing filter appear to work badly, which suggests the algorithm favours speed over precision.
It is therefore possible that the image quality of ProRes RAW may be worse than log out of camera. This was discussed in a workshop on RAW, gamma and log that I attended this week.
At the prosumer level the only option currently for ProRes RAW is the ZCAM E2, as the Panasonic S1H firmware has been delayed.
It is likely that ProRes RAW, in its current state of readiness, produces an outcome that is equal to or less satisfying than standard video processing.
I will keep following the subject and keep you updated; for now, the advice is: don't rush it.
This post is NOT about underwater imaging. With the lockdown, most of us have started using our cameras in the garden to shoot bugs, birds, family members or abstracts.
On my Instagram, on the side, you can see some examples of what I have been up to.
Shooting underwater is typically done at small apertures because of underwater optics issues. It is rare to shoot wide angle wider than f/5.6 on an MFT body or f/11 on full frame.
On land everything changes, and you want as much light as possible coming into your camera to maximise dynamic range, bring out colours and minimise noise. Aperture controls not just how much light hits the sensor but also depth of field.
Depth of field, at an equal level of magnification (size of the subject relative to the frame), depends only on the aperture of the lens. It does not matter if the lens is short or long: once the subject fills your frame, it is the f-number that determines depth of field.
2.8/2/1.4 is the Magic Number
Typically, in full frame terms, f/2.8 was a good lens, and the reason is quite simple: if you shoot a classic 50mm lens from 1.5 metres away you will have about 15 cm, or half a foot, of depth of field. This is ideal to keep things in focus while still providing some background separation, as objects blur as they move away from the plane of focus. If you had a faster lens, more light would reach the sensor, but you would risk that nothing is in focus: nose and eyes sharp, for example, and ears not.
And this is why f/2.8 has been the magic number for full frame photography. If we move to an APS-C sensor this becomes f/2, and on MFT the magic number is f/1.4. So f/1.4 on a 25mm lens on MFT is equivalent to f/2.8 on a 50mm lens on full frame.
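These "magic numbers" all come from multiplying the f-number by the crop factor. A minimal sketch, assuming the usual crop factors of 2.0 for MFT and 1.5 for APS-C:

```python
def equivalent_f_number(f_number: float, crop_factor: float) -> float:
    """Full-frame equivalent f-number for depth of field at equal framing."""
    return f_number * crop_factor

# MFT f/1.4 behaves like f/2.8 on full frame for depth of field
mft = equivalent_f_number(1.4, 2.0)    # 2.8
# APS-C f/2 lands close to the same f/2.8 "magic number"
apsc = equivalent_f_number(2.0, 1.5)   # 3.0
```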
f/1.4 also gives plenty of light to your sensor, so when you want to do some street photography or filming on MFT you can keep your ISO very low.
Every scene has a level of illumination, given in lux, and your camera needs to be able to expose for it with the right focus, the required motion blur and the lowest noise.
The scene in the image above was shot at f/1.4, 1/60s, ISO 640; let's calculate the Ev, taking into account that the reference value is f/1, 1 second and ISO 100.
f/1.4 means 1 stop, 1/60 means 5.9 stops and ISO 640 means 2.67 stops. So in total we have 6.9 stops of light taken away by aperture and shutter, and 2.67 stops added back by ISO gain, for a total of 4.22 Ev. Using the formula Lux = 2.5 × 2^Ev we get 47 lux, which is the level of illumination of your living room in the evening with artificial lights.
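The calculation above can be sketched in a few lines; the 2.5 coefficient is the rule of thumb used in this post, and small rounding differences against the hand-computed 4.22 Ev are expected:

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float) -> float:
    """Ev relative to f/1, 1 second, ISO 100 (the reference used above)."""
    return math.log2(f_number**2) + math.log2(1 / shutter_s) - math.log2(iso / 100)

def scene_lux(ev: float) -> float:
    """Illuminance from Ev using the rule of thumb Lux = 2.5 * 2**Ev."""
    return 2.5 * 2**ev

ev = exposure_value(1.4, 1/60, 640)   # ~4.2 stops
lux = scene_lux(ev)                   # ~46 lux, a dimly lit living room
```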
If you had a slower lens, for example f/2.8, you would have needed ISO 2500 to cover the same scene; this would have increased the noise and reduced the dynamic range and the colours.
2.8 Zooms are for outdoor
There are a number of great lenses for MFT cameras that are midrange zoom and have outstanding optical quality:
The lenses above have a constant aperture and are weather sealed; they are ideal for outdoor use. However, they do not offer a shallow depth of field for subject isolation, as they really are f/5.6 in full frame equivalent terms, and they are also slow, meaning they will take you into the ISO 2500 zone if you try street photography or shooting movies in your living room.
If you want fast lenses on MFT you need prime lenses; this is due to the physical constraints of the format.
Here is my selection. I am not a fan of vintage or fully manual lenses; I like the best optical quality, and if I want to add a vintage feel I do it in post.
In more detail:
The Panasonic 12mm f/1.4 is an expensive lens that I use for astrophotography and gimbal work, plus low-light shots in narrow indoor spaces.
It is weather sealed, extremely sharp and fast to focus and works in full auto focus on a gimbal.
The Sigma 16mm f/1.4 must be the best-value prime on the market for MFT. I use it for street photos and for video. It is almost the equivalent of a 35mm full frame lens.
The Panasonic 25mm is a workhorse for small group portraits and an ideal lens for movie-style video.
The Panasonic 42.5 Nocticron is probably the best portrait lens on MFT and one of the best lenses overall.
Why not Olympus/Others?
Of course there are equivalent primes from other brands for all focal lengths except the 12mm. They will perform equally well, and as long as they can go to f/1.4 all is good. I use Panasonic bodies so I tend to have Panasonic lenses, and I have been buying Sigma for a long time, but this is personal. There are tons of reviews on which lenses to choose, but it is not my place to make such comparisons.
How about Video?
Fast primes are even more essential for video, as you are constrained in the shutter speed you can use.
Using an f/1.4 lens at 1/50 you can shoot several kinds of scenes at different ISO values:
Indoor low-lit areas
Full overcast, sunset/sunrise, very dark indoors
After twilight, dark
Aperture vs environment
For my purposes this is adequate; for reference, shooting underwater scenes at f/3.5 means I can cover 100 lux of ambient light in movie mode before turning on the lights.
If you find yourself with grainy images or videos, invest in fast lenses. A lens is the eye of your camera, and the sensor is the brain. Think about getting better lenses before investing in a new camera, and consider that if you need to go into lower light it is not always true that a bigger sensor will help, given the depth of field limitations, so you may want to think about lights instead.
We are finally there. Thanks to smaller companies keen to get a share of the market, we now have at least two cameras with an MFT sensor that are able to produce RAW video.
RAW Video and RED
It was RED that patented the original algorithm for compressing raw video data straight off the sensor, before the demosaicing process. Apple tried to circumvent the patent with ProRes RAW but lost the legal battle in court and now has to pay licence fees to RED. Coverage is here.
So RED is the only company with this technology; to avoid paying royalties, Blackmagic Design developed an algorithm for their BRAW that uses data taken from a step of the video pipeline after demosaicing.
I do not want to discuss whether BRAW is better than REDCODE or ProRes RAW; however, coming from a photography background, I only consider RAW what comes straight out of the sensor's Analog-to-Digital Converter, so for me RAW means REDCODE or ProRes RAW, not BRAW.
How big is RAW Video?
If you are a photographer, you know that a RAW image file is roughly the same size in megabytes as the megapixels of your camera.
How is that possible? I have a 20-megapixel camera and the RAW file is only a bit more than 20 megabytes? My Panasonic RW2 files are 24.2 MB without fail, out of 20.89 megapixels, so on average 9.26 bits per pixel. Why don't we have the full 12 bits per pixel, and therefore a 31 MB file? Well, camera sensors are made of a grid of monochromatic pixels: each pixel is either red, green or blue. In each 2×2 matrix there are 2 green pixels, 1 red and 1 blue. Through a series of steps, one of which is to decode this mosaic into an image (demosaicing), we rebuild an RGB image for display.
Each of our camera's pixels will not use the full 4096 possible tones; measurements from DxOMark suggest that the Sony IMX272AQK only resolves 24 bits of colour in total and 9 bits of grey tones. This is why a lossless RAW file is only 24.2 MB. It also means that an 8-megapixel (4K) RAW video frame would be 9.25 MB, and therefore a 24 fps RAW video stream would be 222 MB/s, or 1,776 Mb/s, if we had equivalent compression efficiency. After chroma subsampling to 4:2:2 this would become 1,184 Mb/s.
Cameras like the ZCAM E2 or the BMPCC4K, which can record ProRes 422 HQ, approach those bitrates and can be considered virtually lossless.
But now we have ProRes RAW, so what changes? The CEO of ZCAM has posted an example of a 50 fps ProRes RAW HQ file with a bitrate of 2,255 Mb/s; at 24 fps this would be 1,082 Mb/s, so we can see that my maths stacks up nicely.
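The arithmetic above can be sketched as follows, assuming the "8K" RAW frame means an 8-megapixel 4K frame at the same bits-per-pixel efficiency as the RW2 files, and that 4:2:2 subsampling keeps 2/3 of the data:

```python
# Bits per pixel implied by a lossless 24.2 MB RW2 file from a 20.89 MP sensor
raw_file_mb = 24.2
megapixels = 20.89
bits_per_pixel = raw_file_mb * 8 / megapixels        # ~9.27 bits

# An 8-megapixel (4K) RAW frame at that efficiency
frame_mb = 8e6 * bits_per_pixel / 8 / 1e6            # ~9.27 MB per frame

raw_stream_mbit = frame_mb * 8 * 24                  # ~1,780 Mb/s at 24 fps
subsampled_422 = raw_stream_mbit * 2 / 3             # ~1,186 Mb/s after 4:2:2

# Cross-check against the ZCAM sample: 2,255 Mb/s at 50 fps, scaled to 24 fps
zcam_24fps = 2255 * 24 / 50                          # ~1,082 Mb/s
```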
Those bitrates are out of reach of almost all memory cards, so SSD support is required, and this is where Atomos comes into the picture.
Atomos has decided to adopt ProRes RAW and currently offers support for selected Nikon, Panasonic and ZCAM models.
ProRes RAW workflow
So, with a ProRes RAW file at hand, I wanted to test the workflow in Final Cut Pro X. Being an Apple codec, everything works very well; however, we encounter a number of issues that photographers resolved a long time ago.
The first is that RAW has more dynamic range than your SDR delivery space. This also happens with photos; however, photo programs work in larger RGB spaces like ProPhoto RGB at 16 bits, and using tone mapping you can edit your images and then bring them back to an 8-bit JPEG that is not as good as the RAW file but is in most cases fine for everyone.
Video NLEs are not in the same league as photo RAW editors and usually deal with a signal that is already video, not raw data. So the moment you drop your ProRes RAW clip on an SDR timeline it clips, as you would expect. A lot of work is required to bring clips back into an SDR space, and this is not the purpose of this post.
To avoid big issues I decided to work on an HDR timeline in PQ, so that with a super wide gamut and gamma there were no clipping issues. The footage drops perfectly into the timeline without any work required, which is brilliant. So RAW for HDR is definitely the way forward.
ProRes RAW vs LOG
My camera does not have ProRes RAW, so I wanted to understand what is lost going through LOG compression. For cameras with analog gain on the sensor there is no concept of a fixed base ISO like on RED or ARRI cameras. Our little cameras have a programmable gain amplifier, and as gain goes up, DR drops. So the first bad news is that by using LOG you will lose DR compared to the RAW sensor data.
This graph shows that on the Panasonic GH5 there is a loss of 1 Ev from ISO 100 to 400, but we still have a minimum of 11.3 Ev to play with. I am not interested in the whole DR; I just want to confirm that for cameras with more DR than their ADC allows, you will have a loss with LOG, as LOG needs gain, and gain means clipping sooner.
What is very interesting is that, net of this, the ProRes RAW file allowed me to test how good LOG compression is. So in this clip I have:
RAW video unprocessed
RAW video processed using Panasonic LOG
RAW video processed using Canon LOG
RAW video processed using Sony LOG
In this example the ZCAM E2 has a maximum dynamic range of 11.9 Ev (log2(3895), from the Sony IMX299CJK datasheet). As the camera has less DR than the maximum limit of the ADC, there is likely to be no loss.
We can see that there are no visible differences between the various log processing options. This confirms that log footage is an effective way to compress dynamic range into a smaller bit-depth space (12→10 bits) for MFT sensors.
Final Cut Pro gives you the option to go directly to RAW or to go through LOG; the latter means all your LOG-based workflows and LUTs continue to work. I can confirm this approach is sound, as there is no deterioration that I can see.
Is ProRes RAW worth it?
Now that we know that log compression is effective, the question is: do I need it? And the answer is: it depends…
Going back to our ProRes RAW at 1,082 Mb/s: once 4:2:2 subsampling is applied this drops to 721 Mb/s, pretty much identical to the ProRes 422 HQ nominal bitrate of 707 Mb/s. So if you have a ZCAM and record ProRes RAW or ProRes 422 HQ you should not be able to see any difference. I can confirm that I have compressed such footage to ProRes 422 HQ and could not see any difference at all.
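As a quick check of that comparison (4:2:2 keeps two chroma samples out of four against four luma, i.e. 2/3 of the raw data; the 707 Mb/s figure is the nominal rate quoted above):

```python
prores_raw_24fps = 1082                 # Mb/s, estimated earlier from the ZCAM sample
after_422 = prores_raw_24fps * 2 / 3    # ~721 Mb/s once 4:2:2 subsampling is applied
prores_422_hq = 707                     # Mb/s, nominal ProRes 422 HQ rate

gap = abs(after_422 - prores_422_hq) / prores_422_hq   # ~2%, effectively identical
```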
However, with photos, a RAW file can typically withstand heavy modifications while a JPEG cannot. We are used to processing ProRes, and there is no doubt that ProRes 422 HQ can take a lot of beating. In my empirical tests Final Cut Pro X is very efficient at manipulating ProRes RAW files, but in terms of holding modifications I cannot see that this codec provides a benefit; this may be due to the limited capability of FCPX.
For reference, Panasonic AVC-Intra 422 is identical in quality to ProRes 422 HQ, though harder to process, and much harder to process than ProRes RAW.
If you already have a high-quality output from your camera, such as ProRes 422 HQ or Panasonic AVC-Intra 400 Mbps, with the tools at our disposal there is not a lot of difference, at least for an MFT sensor. This may have to do with the fact that the sensor's DR and colour depth are limited anyway, so log compression is effective to the point that ProRes RAW does not appear to make a difference. However, there is no doubt that a more capable camera produces more valuable data, and there ProRes RAW may well be worth it.
I am currently looking for Panasonic S1H ProRes RAW files. Atomos only supports 12 bits, so the DR of the camera will be capped, as RAW is linearly encoded. However, the SNR will be higher and the camera will have more tones and colours, resulting in superior overall image quality; some call this, incorrectly, usable DR, but it is just image quality. It will be interesting to see whether 10-bit AVC-Intra with log is more effective than 12-bit ProRes RAW.
In order to produce HDR clips you need HDR footage. This comes in two forms:
Cameras have been shooting HDR for years; the issue has been that no consumer operating system or display was capable of displaying it. The situation has changed, as Windows 10 and macOS now have HDR-10 support. This is limited: on macOS, for example, there is no browser support, but the TV app is supported, while on Windows you can watch HDR-10 videos on YouTube.
You need to keep your target format in mind, because Log and HLG are not actually interchangeable. HLG today really means TV sets and some smartphones; HDR-10 is instead growing in computer support and is more widely supported. Both are royalty free. This post is not about which standard is best; it is just about producing some HDR content.
The process is almost identical but there are some significant differences downstream.
Let me explain why. This graph, produced using the outstanding online application LutCalc, shows the output/input relationship of V-Log against a standard display gamma for Rec709.
V-LOG -> PQ
Looking at the stop diagram we can appreciate that the curves are not only different, but many values differ substantially, and this is why we need to use a LUT.
Once we apply a LUT, the relationship between V-Log and Rec709 is clearly not linear, and only a small part of the bits fits into the target space.
We can see that V-Log fills Rec709 with just a bit more than 60% IRE, so a lot of squeezing is needed to fit it back in. This is the reason why many people struggle with V-Log, and the reason why I do not use V-Log for SDR content.
However, the situation changes if we use V-Log for HDR, specifically PQ.
You can see that, net of an offset, the curves are almost identical in shape.
This is more apparent looking at the LUT in / out.
With the exception of the initial part, which for V-Log is linear while PQ is fully logarithmic, the curve is almost a straight line. As PQ is a larger space than V-Log can produce on a consumer camera, there is no issue squeezing bits in; PQ accommodates all the bits just fine.
Similarly to V-Log, HLG does not fit well into an SDR space.
The situation becomes apparent looking at the In/Out Lutted values.
We can see that, as HLG is also a log gamma with a different ramp-up, 100% is reached with even fewer bits than V-Log.
So, in pure mathematical terms, fitting log spaces into Rec709 is not a great idea and should be avoided. Note that, even with the arrival of RAW video, we still lack editors capable of working in a 16-bit depth space the way photo editors do, and currently all processes go through LOG because they need to fit into a 10/12-bit working space.
It is also a bad idea to use V-Log for HLG, due to the difference between the log curves.
And the graph demonstrates what I said at the beginning: you need to decide your output at the outset and stick to a compatible format.
Importing Footage in Final Cut Pro X 10.4.8
Once we have HLG or LOG footage, we need to import it into a Wide Gamut library; make sure you check this, because SDR is the default in FCPX.
HLG footage will not require any processing, but LUTs have to be applied to V-Log, as it differs from any Rec2100 target space.
The most convenient way is to go into the Organise workspace, select all clips, then press the i button and select General. Apply the Panasonic V-Log LUT to all clips.
Creating a Project
Once all files have been handled as required, we create our HDR-10 project, which in Final Cut means Rec2020 PQ.
The following screenshots demonstrate the effect of the LUT on footage on a PQ timeline.
With the LUT applied, the V-Log footage is expanded into the PQ space and the colours and tones come back.
We can see the brightness of the scene approaching 1000 nits; it looks exactly as we experienced it.
Once all edits are finished, as a last step we add HDR Tools to limit peak brightness to 1000 nits, which is a requirement of YouTube and most consumer displays. The scopes flex slightly with an automatic highlight roll-off.
Exporting the Project
I have been using Panasonic AVC-Intra 400 Mbps, so I will export a master file using ProRes 422 HQ; if you use a lower bitrate, ProRes 422 may be sufficient, but don't go lower, as it won't be HDR anymore.
YouTube and other devices use default settings for HDR-10 metadata, so do not fill in the mastering display or content information; it is not required, and you would not know how to fill it in correctly, with the exception of peak brightness.
Converting for YouTube
I use the free program HandBrake, following the YouTube upload guidelines, to produce a compatible file. It is ESSENTIAL to produce an mp4 file, otherwise your TV and YouTube may not be able to display HDR correctly; avoid any other format at all costs.
The finished product can be seen here.
SDR version from HDR master
There are residual issues with this process; one is the production of an SDR version. This currently works much better for HLG than for HDR-10, which is interesting, because HLG is unsupported on computers, so if you produce HDR in HLG you are effectively giving something decent to both audiences.
For HDR-10, YouTube applies its own one-size-fits-all LUT, and the results can be really bad. You may experience oversaturated colours in some cases, dark footage in others, and some clips may look totally fine.
At a professional level you would produce a separate SDR grade; however, it is possible to improve the quality of YouTube's conversion using specific techniques that I will cover in a separate post.
Grading in HDR is not widely supported; the only tools available are scopes and the tone mapping of your display. There is no concept of correct exposure for skin tones: in one scene they have a certain brightness, and in another this changes, because this is not a 0-100% relative scale but one that works with absolute values.
If you have invested in a set of cinema LUTs you will find that none of them work: they compress the signal to under 100 nits. So there is less headroom for looks. There are other things you can do to give a vintage look, like adding grain, but you need to be careful, as the incredible brightness of the footage and the detail of 10 bits mean that if you push it too hard it looks a mess. Currently I avoid adding film grain, and when I do add it I blend it at 10%-20%.
One interesting thing is that log footage in PQ has a nice feel to it despite the incredible contrast. After all, log is a way to emulate film, specifically Cineon, and this is true for almost all log formats. Then you would have the different characteristics of each film stock; this is now our camera sensor, and because most sensors are made by Sony or Canon, clips tend to look very similar to each other nowadays. So if you want something different you need to step into the world of RED or ARRI, but that is outside the scope of what I am writing here and of what you, my readers, are interested in.
I am keeping a playlist with all my HDR experiments here and will keep adding to it.
If you find this useful please donate using the button on the side and I will have a drink on you…Cheers!
There are a significant number of misconceptions about noise in digital cameras and how it depends on variables like sensor size or pixel size. In this short post I will try to explain in clear terms the relationship between Signal-to-Noise Ratio (SNR) and sensor size.
Signal (S) is the number of photons captured by the lens and arriving at the sensor; it is converted into an electrical signal by the sensor, digitised by an Analog-to-Digital Converter (ADC) and further processed by Digital Signal Processors (DSP). Signal, depending on light, is not affected by pixel size but by sensor size. There is plenty of reading on this subject; you can google it yourself with phrases like 'does pixel size matter'. Look for scientific evidence backed by data and formulas, not YouTube videos.
S = P * e, where P is the photon arrival rate, directly proportional to the surface area of the sensor through the physical aperture of the lens and the solid angle of view, and e is the exposure time.
This equation also means that once we equalise the physical lens aperture there is no difference in performance between sensors. Example: two lenses with equivalent fields of view, 24mm on full frame and 12mm on MFT (2x crop), produce the same SNR when the physical aperture is equalised. Full frame at f/2.8 and MFT at f/1.4 give the same result, as 24/2.8 = 12/1.4; this is called constrained depth of field, and as long as there is sufficient light it ensures the SNR is identical between formats.
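The 24/2.8 = 12/1.4 equality is just the entrance-pupil diameter (focal length divided by f-number); a minimal sketch:

```python
def entrance_pupil_mm(focal_length_mm: float, f_number: float) -> float:
    """Physical aperture (entrance pupil) diameter = focal length / f-number."""
    return focal_length_mm / f_number

full_frame = entrance_pupil_mm(24, 2.8)   # ~8.57 mm
mft = entrance_pupil_mm(12, 1.4)          # ~8.57 mm
# Same pupil diameter with the same field of view -> same photon count -> same SNR
```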
Noise is made of three components:
Photon noise (PN) is the noise inherent in light itself, which is made of particles even though optics approximates it with linear beams.
Read noise (RN) is the combined read noise of the sensor and the downstream electronics.
Dark current noise (DN) is the thermal noise generated by long exposures heating up the sensor.
I have discovered that WordPress has no equation editor, so forgive me if the formulas appear rough.
Photon noise is well modelled by a Poisson distribution, and its average level can be approximated as SQRT(S).
The ‘apparent’ read noise is generally constant and does not depend on the signal intensity.
While DN is fundamental to astrophotography, it can be neglected for the majority of photographic applications as long as the sensor does not heat up, so we will ignore it for this discussion.
If we write down the Noise equation we obtain the following:
Ignoring DN in our application, we have two scenarios. The first is where the signal is strong enough that the read noise is considerably smaller than the photon noise; this is the typical scenario in the standard working conditions of a camera. If PN >> RN, the signal-to-noise ratio becomes:
SNR = sqrt(S)
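In this shot-noise-limited regime the often-quoted 6 dB gap between full frame and a 2x crop follows directly: four times the sensor area means four times the photons, and sqrt(4) = 2x the SNR. A sketch, with an illustrative photon count:

```python
import math

def shot_noise_snr_db(signal_photons: float) -> float:
    """Shot-noise-limited SNR in dB: SNR = sqrt(S), expressed as 20*log10(SNR)."""
    return 20 * math.log10(math.sqrt(signal_photons))

s_crop2x = 10_000             # photons collected by a 2x-crop sensor (illustrative)
s_fullframe = 4 * s_crop2x    # 4x the area -> 4x the photons at the same f-number

gap_db = shot_noise_snr_db(s_fullframe) - shot_noise_snr_db(s_crop2x)  # ~6 dB
```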
S is unrelated to pixel size but is affected by sensor size. If we take a full frame camera and a 2x crop camera at a high signal rate and identical f-number, the full frame camera has double the SNR of the smaller 2x crop. Because the signal is high, this benefit is almost invisible in normal conditions. If we operate at constrained depth of field, the larger sensor has no benefit over the smaller one.
When the number of photons collected drops, the read noise becomes more important than the photon noise. The trigger point changes with the size of the sensor: a smaller sensor becomes subject to read noise sooner than a larger one, but broadly the SNR benefit remains a factor of two. If we look at DxOMark measurements of the full frame Panasonic S1 vs the Micro Four Thirds GH5, we see that the benefit is around 6 dB at the same ISO value, almost spot on with the theory.
Due to the way the SNR curve drops, the larger sensor camera also has a benefit of about two stops of ISO, and this is why the DxOMark Sports score of the GH5 is 807 while the S1's is 3333, a difference of 2.046 stops. The values of 807 and 3333 are measured and correspond to ISO 1250 and 5000 on the actual GH5 and S1 cameras.
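The 2.046-stop figure is simply the base-2 logarithm of the ratio of the two Sports scores:

```python
import math

gh5_sports, s1_sports = 807, 3333            # DxOMark low-light (Sports) scores
stops = math.log2(s1_sports / gh5_sports)    # ~2.05 stops between S1 and GH5
```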
If we consider two Nikon cameras, the full frame D850 and the APS-C D7500, we should find the difference to be one stop of ISO, with the SNR dropping at the same 3 dB per ISO increment.
The graphic from DxoMark confirms the theory.
If the SNR does not depend on pixel size, why do professional video cameras, and some high-end SLRs, have smaller pixel counts? This is due to a feature called dual native ISO. Obviously a sensor has only one sensitivity and this cannot change, so what is happening? We have seen that when the signal drops, the SNR becomes dominated by the read noise of the sensor. What manufacturers do is cap the full well capacity of the sensor, and therefore the maximum dynamic range, and apply much stronger amplification through a low-signal amplifier stage. For this to be effective, these cameras have a large pixel pitch, so that the maximum signal per pixel is high enough that, even clipped, it benefits from the amplification. This has the effect of pushing the SNR up by two stops on average. Graphs of the read noise of the GH5s and S1 show a similar pattern.
Some manufacturers, like Sony, appear to use dual gain systematically even with smaller pixel pitches; in those cases the benefit drops from 2 stops to 1 or less. Look carefully at the read noise charts on sites like PhotonsToPhotos to understand the kind of circuit in your camera and make the most of the SNR.
Because most low-light situations have limited dynamic range, and the viewer is more sensitive to noise than to DR, when the noise rises above a certain floor the DR limitation is seen as acceptable. The actual DR falls well below values that would be considered acceptable for photography, but with photos you can intervene on noise in post-processing, not on DR, so the highest DR is always the priority. This does not mean, however, that one should artificially inflate requirements by introducing incorrect concepts like 'usable DR', especially when the dual gain circuit reduces maximum DR. Many cameras from Sony, Panasonic and other manufacturers have a dual gain amplifier, sometimes advertised, other times not. An SNR of 1, i.e. 0 dB, is the standard for defining usable signal, because you can still see an image when noise and signal are comparable.
It is important to understand that once depth of field is equalised, all performance indicators flatten, and the benefit of one format over another lies at the edges of the ISO range, at very low and very high ISO values; in both cases it is the ability of the sensor to collect more photons that makes the difference, net of other structural issues in the camera.
As the majority of users do not work at the boundaries of the ISO range or in low light, and the differences at the more usual values get equalised, we can understand why many users prefer smaller sensor formats, which make not just the camera bodies smaller but also the lenses.
In conclusion, a larger sensor will always be superior to a smaller one, regardless of any additional improvement made by dual gain circuits. A full frame camera will be able to offer sustained dynamic range together with acceptable SNR values up to higher ISO levels. Looking for example at the video-oriented Panasonic S1H, the trade-off point of ISO 4000 is sufficient on a full frame camera to cover most real-life situations, while the ISO 2500 of the GH5s leaves out a large chunk of night scenes where, in addition to good SNR, some dynamic range may still be required.
As you may have read, I have been at the forefront of HDR use at home. I have a total of 5 devices with HDR certification, of which 2 support all standards up to Dolby Vision and 3 support at least HLG and HDR-10. My content consumption consists mostly of Netflix or Amazon originals and occasional BBC HLG broadcasts streamed concurrently with live programmes. So it is fair to say I have some practical experience on the subject, and two years ago I started writing about shooting HLG with the GH5. This was mostly limited by the lack of editing capabilities on the display side, but recently macOS 10.15.4 brought HDR-10 support, which means you can see an HDR signal on a compatible HDMI or DisplayPort device. This is not HLG, but there are ways around that, as I wrote in a recent post. This post makes some considerations on the issues of shooting HDR and explains why, as of 2020, shooting SDR Rec709 with your Panasonic GH5 is still my preferred option for underwater video and beyond.
Real vs Theoretical Dynamic Range
You will recall the schematic of a digital camera from a previous post.
This was presented to discuss dual gain circuits but if you ignore the two gain circuits it remains valid. In this post we will focus on the ADC which stands for Analog to Digital Converter. Contemporary cameras have 12- and 14-bits ADC, typically 14 bits ADC are a prerogative of DSLR cameras or high-end cameras. If we want to simplify to the extremes the signal arriving to the ADC will be digitalised on a 12- or 14-bits scale. In the case of the GH5 we have a 12-bits ADC, it is unclear if the GH5s has a 14-bits ADC despite producing 14-bits RAW, for the purpose of this post I will ignore this possibility and focus on 12-bits ADC.
12 bits means you have 4096 levels of signal for each RGB channel, which effectively means the dynamic range limit of the camera is 12 Ev, as this is defined as Log10(4096)/Log10(2)=12. Stop, wait a minute, how is that possible? I have references saying the Panasonic GH5 dynamic range is 13 Ev; how did this become 12?
Firstly, we need to ignore the effect of oversampling and focus on a 1:1 pixel ratio, and therefore look at the Screen diagram, which shows just a bit more than 12 Ev. We then have to look at how DxOMark measures dynamic range, which is explained here. In real life we will not be shooting a grey scale but a coloured scene, so unless you are taking pictures of the moon you will not get much more than 12 stops in any scenario, as the colours will eat the data.
This was for what concerns RAW sensor data before de-mosaicing and digital signal processing, which will further deteriorate DR when the signal is converted down to 10 bits, even if a nonlinear gamma curve is put in place. We do not know the real usable DR of the GH5, but Panasonic's statement when V-Log was announced referenced 12 stops of dynamic range using a logarithmic curve, so we can safely conclude that the best case is 12 stops when a log curve is used, and 10 for a gamma curve with a constant correction factor. Again, it is worth stressing that the 12 stops of DR are the absolute maximum, reached at the camera setting with 0 gain applied, aka base or native ISO, which for the GH5 is 200, corresponding to 400 in log modes.
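The bit depth arithmetic above can be sanity-checked in a couple of lines. This is a sketch of the formula only; it says nothing about real sensor noise or the colour effects just discussed:

```python
import math

def adc_stops(bits: int) -> float:
    """Upper bound on engineering dynamic range (Ev) from ADC bit depth."""
    levels = 2 ** bits        # 4096 levels for a 12-bit ADC
    return math.log2(levels)  # same as Log10(levels)/Log10(2)

print(adc_stops(12))  # 12.0 -> a 12-bit ADC caps DR at 12 Ev
print(adc_stops(14))  # 14.0 -> the 14-bit case
```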
Shooting HLG vs SDR
Shooting HLG with the GH5 or any other prosumer device is not easy.
The first key issue in shooting HLG is the lack of monitoring capabilities on the internal LCD and on external monitors. Let's start with the internal monitor, which cannot display HLG signals and relies on two modes:
Mode 1: prioritises the highlights wherever they are
Mode 2: prioritises the subject, i.e. the centre of the frame
In essence you are not able to see what you get during the shot. Furthermore, when you set zebra to 90% the camera will rarely reach this value. You need to rely on the waveform, which is not user friendly in an underwater scene, or on the exposure meter. If you have an external monitor and look carefully in the spec, you will find that the screens are Rec709, so they will not display the HLG gamma while they will correctly record the colour gamut. https://www.atomos.com/ninjav : if you read under HDR monitoring gamma you see BT.2020, which is a colour gamut, not an HDR gamma, so in effect you are still looking at SDR. So you encounter the same issues, albeit on a much brighter 1000-nit display than the LCD, and you need to either adapt to the different values of the waveform or trust the exposure meter and zebra, which as we have said are not very useful as it takes a lot to clip. On the other hand, if you shoot an SDR format the LCD and external monitor will show exactly what you are going to get, unless you shoot in V-Log; in that case the waveform and the zebra will need to be adjusted to consider that the V-Log absolute maximum is 80% and 90% white is 60%. Once you apply a monitor LUT, however, you will see exactly what you are going to get on the internal or external display.
Editing HLG vs SDR
In the editing phase you will be faced with similar challenges, although as we have seen there are workarounds to edit HLG if you wish to do so. A practical consideration is around contrast ratio. Despite all claims that SDR is just 6 stops, I have actually dug out the BT.709, BT.1886 and BT.2100 recommendations, and this is what I have found.
Specifications of ITU display standards
In essence Rec709 has a contrast ratio of 1000:1, which means 9.97 stops of DR, and already allows for 8- and 10-bit colour. BT.1886 was issued to take into account that CRT screens no longer exist, and this means the DR goes to 10.97 stops. BT.2100 has a contrast ratio of 200000:1, or 17.61 stops of DR.
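The stops figures follow from a base-2 logarithm of the contrast ratio. A small sketch reproducing the numbers above; note the 2000:1 value for BT.1886 is back-derived from the 10.97 stops quoted, not stated explicitly in the recommendation:

```python
import math

def contrast_to_stops(contrast_ratio: float) -> float:
    """A display contrast ratio maps to dynamic range as log2(ratio) stops."""
    return math.log2(contrast_ratio)

print(round(contrast_to_stops(1_000), 2))    # 9.97  -> Rec709
print(round(contrast_to_stops(2_000), 2))    # 10.97 -> BT.1886 (derived)
print(round(contrast_to_stops(200_000), 2))  # 17.61 -> BT.2100
```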
DisplayHDR Performance Standards
Looking at HDR monitors you see that, with the exception of OLED screens, no consumer devices can meet the BT.2100 standard; so even if you have an HDR monitor, in most cases it falls short of the BT.2100 recommendation.
Our GH5 is capable of a maximum of 12 stops of DR in V-Log, and maybe a bit more in HLG; however those values are far below the BT.2100 recommendation and more in line with BT.1886. If we look at the DxOMark DR charts we see that at a nominal ISO 1600, which is in effect just above 800, the DR has fallen below 10 Ev. Consider that this is engineering DR: practically speaking you are getting your 12 stops only at ISO 200, and your real HDR range is limited to the 200-400 ISO range. This makes sense, as those are the bright scenes. Consider also that log photo styles start at ISO 400, but this really translates to ISO 200 on this chart, as well as in exposure values. Unless you are shooting at low ISO you will get a limited DR improvement. Underwater it is quite easy to be at a higher ISO than 200, and even when you are at 200, unless you are shooting the surface, the scene has limited DR anyway. Generally, 10 stops are more than adequate, as this is what we get when we produce a JPEG from a RAW file.
I think the final nail in the coffin arrives when we look where the content will be consumed.
Typical Devices Performance
Phones have IPS screens, with some exceptions, and contrast ratios below 1000:1, and so do computer screens. If you share on YouTube you will know that phones and computers constitute around 85% of playback devices. TVs are around 10%, and only a small part of those will be HDR. So other than in your own home, you will not find many HDR devices out there to do justice to your content.
10-bits vs 8 bits
It is best practice to shoot 10 bits, and both SDR and HDR support 10-bit colour depth. For compatibility purposes SDR is delivered with 8-bit colour and HDR with 10-bit colour.
Looking at the tonal range for RAW files at 8 megapixels, we see that the camera has 24 bits of depth over RGB; this means 8 bits per channel and 9 bits of tonal range. Tonal range is grey levels, so in short, the camera will not produce 10-bit colour but will have more than 8 bits of grey tones, which are helpful to counter banding but only at low ISO, so more useful for blue skies than for blue water. Considering that images for photo competitions are JPEGs, and that nobody has felt the need for something more, we can conclude that as long as we shoot at a high bitrate in something as close as possible to a raw format, 8 bits for delivery are adequate.
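As a quick illustration of that arithmetic (the 24-bit and 9-bit figures are the DxOMark-style measurements quoted above):

```python
def per_channel_bits(colour_depth_bits: float) -> float:
    """Colour depth quoted over RGB is split across the three channels."""
    return colour_depth_bits / 3

def levels(bits: int) -> int:
    """Number of distinct levels a given bit depth can encode."""
    return 2 ** bits

print(per_channel_bits(24))  # 8.0 -> bits per RGB channel
print(levels(8))             # 256 colour levels per channel
print(levels(9))             # 512 grey levels of tonal range
```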
Cases for HDR and Decision Tree
There are cases where shooting HLG can be meaningful; those include snorkelling at the surface on bright days. You will not be going to depth, so the footage will look good straight off the camera; likewise for bright shots in the sun at the surface. But generally, the benefit will drop when the scene has limited DR, or at higher ISO values where DR drops anyway.
What follows is my decision tree to choose between SDR and HDR, and between 10-bit and 8-bit formats. I like my pictures and my videos to look better than life, and I think editing adds value to the imaging, although this is not an excuse for poor capture. There are circumstances where editing is less important, namely when the scene is amazing by itself and requires no extra help, or when I am looking at fast-paced, documentary-style scenes that do not benefit from editing. For the rest, my preference remains for editing-friendly formats and high-bitrate 10-bit all-intra codecs. Recently I purchased the V-Log upgrade and I have not found it difficult to use or expose, so I have included it here as a possible option.
The future of HDR
Except in a cinema-like setting with dark surroundings and low ambient light, HDR mass consumption remains challenging. Yes, you can have high peak brightness, but not a high contrast ratio, and that can already be obtained with SDR for most viewers. There is a lot of noise in the cinema community at present because the PQ curve is hard to manage and the work in post processing is multiplied; clearly PQ is not a way forward for broadcasting, and HLG will prevail thanks to the pioneering efforts of the BBC. But the lack of monitoring and editing devices means HLG is not going to fit cine-like scenarios and small productions. It could be a good fit for a zero-edit shooter, someone who likes to see the scene as it was.
When marketing myths and incorrect information are netted out, we realise that our prosumer devices are very far away from what would be required to shoot, edit and consume HDR. Like many other things in digital imaging, it is much more important to focus on shooting techniques and how to make the most of what we have, instead of engaging in a quest for theoretical benefits that may not exist.
It has been almost two years since my first posts on HLG capture with the GH5 https://interceptor121.com/2018/06/15/setting-up-your-gh5-for-hlg-hdr-capture/ and last week Apple released Catalina 10.15.4, which now supports HDR-10 with compatible devices. Apple, and computers in general, still do not support HLG, and it is unlikely this is ever going to happen, as the gaming industry is following the VESA DisplayHDR standard, which is aligned to HDR-10.
After some initial experiments with the GH5 and HLG HDR things have gone quiet, and this is for two reasons:
There are no affordable monitors that support HLG
There has been a lack of software support
While on the surface it looks like there is still no solution to those issues, in this post I will explain how to grade HLG footage in Final Cut Pro, should you wish to do so. The situation is not that different on Windows with DaVinci Resolve, which also only supports HDR-10 monitors, but I leave it to Resolve users to figure that out. This tutorial is about Final Cut Pro.
A word about V-Log
It is possible to use V-Log to create HDR content; however V-Log is recorded as Rec709 10 bits. The Panasonic LUT, and any other LUT, only maps the V-Log gamma curve to Rec709, so your luminance and colours will be off. It would be appropriate to have a V-Log to PQ LUT; however I am not aware that one exists. Surely Panasonic could create it, but the V-Log LUT that comes with the camera is only for processing in Rec709. So, for our purposes we will ignore V-Log for HDR until such time as we have a fully working LUT and clarity about the process.
Why it is a bad idea to grade directly in HLG
There is a belief that HLG is a delivery format and is not edit-ready. While that may be true, the primary issue with HLG is that no consumer screens support the BT.2020 colour space and the HLG gamma curve. Most displays are plain sRGB, and others support partially or fully DCI-P3, or the computer version Display P3. Although the white point is the same for all those colour spaces, there is a different definition of what red, green and blue are, and therefore, without taking this into account, if you change a hue the results will not be as expected. You may still white balance or match colours in HLG, but you should not attempt anything more.
What do you need for grading HDR?
In order to successfully and correctly grade HDR footage on your computer you need the following:
HDR HLG footage
Editing software compatible with HDR-10 (Final Cut or DaVinci)
An HDR-10 10 bits monitor
If you want to produce and edit HDR content you must have a compatible monitor; let's see how we identify one.
Finding an HDR-10 Monitor
HDR is highly unregulated when it comes to monitors; TVs have the Ultra HD Premium Alliance, and recently VESA has introduced the DisplayHDR standards https://displayhdr.org/ that are dedicated to display devices. So far, DisplayHDR certification has been a prerogative of gaming monitors, which have quick response times and high contrast but not necessarily high colour accuracy. We can use the certified list of monitors to find a consumer-grade device that may be fit for our purpose: https://displayhdr.org/certified-products/
A DisplayHDR 1000 certified monitor is equivalent to a PQ grading device, as it has a peak brightness of 1000 nits and a minimum of 0.005; this is ideally what you want, but you can get by with an HDR-400 certified display as long as it supports a wide colour gamut. In HDR terms wide gamut means covering the DCI-P3 colour space for at least 90%, so we can use the VESA list to find a monitor that is HDR-10 compatible and has decent colour accuracy. Even inside the HDR-400 category there are displays that are fit for purpose and reasonably priced. If you prefer a brand more orientated to professional design or imaging, look for the usual suspects Eizo, BenQ, and others, but here it will be harder to find HDR support, as those manufacturers usually focus on colour accuracy; you may find a display covering 95% DCI-P3 but not necessarily producing high brightness. As long as the device supports HDR-10 you are good to go.
I have a BenQ PD2720U that is HDR-10 certified, has a maximum brightness of 350 nits and a minimum of 0.35, and covers 100% sRGB and Rec709 and 95% DCI-P3, so it is adequate for the task. It is worth noting that a typical monitor with 350-400 nits brightness offers 10 stops of dynamic range.
In summary, if you do not have a professional grade monitor, any of these will work:
Search professional display specifications for HDR-10 compatibility and 10 bits wide gamut > 90% DCI-P3
Final Cut Pro Steps
The easy way to have HDR-ready content with the GH5 is to shoot with the HLG Photo Style. This produces clips that, when analysed, have the following characteristics with the AVCI codec.
Limited means that the clip is not using the full 10-bit range for brightness; you do not need to worry about that.
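For the curious, limited (video) range in 10 bits means brightness codes 64 to 940 out of 0 to 1023. A minimal sketch of that mapping, for illustration only:

```python
def to_limited_range(x: float, bits: int = 10) -> int:
    """Map a normalised [0, 1] brightness to 'limited' (video) range codes.

    Limited range in 10 bits uses codes 64-940 out of 0-1023; a clip
    flagged 'Limited' is simply encoded on this reduced scale.
    """
    black = 16 << (bits - 8)   # 64 in 10 bits
    white = 235 << (bits - 8)  # 940 in 10 bits
    return round(black + x * (white - black))

print(to_limited_range(0.0))  # 64  -> reference black
print(to_limited_range(1.0))  # 940 -> reference white
```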
With your material ready, create a new library in Final Cut Pro with a wide gamut and import your footage.
As we know, Apple does not support HLG, so when you look at the luma scope you will see a traditional Rec709 IRE diagram. In addition, the tone mapping functionality will not work, so you do not have a real idea of colour and brightness accuracy.
At this stage you have two options:
Proceed in HLG and avoid grading
Convert your material in PQ so that you can edit it
We will go with option 2, as we want to grade our footage.
Create a project with a PQ gamut and enter your display information in the project properties. In my case the display has a minimum brightness of 0.35 nits and a maximum of 350, and it has P3 primaries with a standard D65 white point. It is important to know those parameters to have a good editing experience, otherwise the colours will be off. If you do not know your display parameters, do some research. I have a BenQ monitor that comes with a calibration certificate: the information is right there. Apple screens are typically also P3 with a D65 white point, and you can find the maximum brightness in the specs, usually around 500 nits for Apple with a minimum of 0.5 nits. Do not enter Rec2020 in the monitor information unless your monitor has native primaries in that space (there are almost none). Apple's documentation tells you that if you do not know those values you can leave them blank, and Final Cut Pro will use the display information from ColorSync and try a best match, but this is far from ideal.
For the purpose of grading we will convert HLG to PQ using the HDR tools. The two variants of HDR have different ways of managing brightness, so a conversion is required; the colour information, however, is consistent between the two.
Please note that the maximum brightness value is typically 1000 nits; however there are not many displays out there that support this level of brightness. For the purpose of what we are going to do this is irrelevant, so DO NOT change this value. Activate tone mapping, accessible under the View pull-down in the playback window; this will adapt the footage to your display according to the parameters of the project, without capping the scopes in the project.
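Conceptually, an HLG-to-PQ conversion undoes the HLG curve, applies a display transform, and re-encodes the result as PQ. The sketch below is a simplified single-channel version using the published BT.2100 and ST 2084 constants; the 1000-nit peak and 1.2 system gamma are the nominal reference values, and Final Cut's HDR tools are of course Apple's own implementation, not this code:

```python
import math

# Constants from BT.2100 (HLG) and SMPTE ST 2084 (PQ)
A, B, C = 0.17883277, 0.28466892, 0.55991073
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def hlg_inverse_oetf(e: float) -> float:
    """HLG signal [0,1] -> scene-linear light [0,1]."""
    return e * e / 3 if e <= 0.5 else (math.exp((e - C) / A) + B) / 12

def pq_oetf(nits: float) -> float:
    """Absolute luminance in nits -> PQ signal [0,1]."""
    y = (nits / 10000) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

def hlg_to_pq(e: float, peak_nits: float = 1000, gamma: float = 1.2) -> float:
    """Single-channel sketch of an HLG -> PQ brightness conversion."""
    display_nits = peak_nits * hlg_inverse_oetf(e) ** gamma
    return pq_oetf(display_nits)

print(round(hlg_to_pq(1.0), 2))  # ~0.75: HLG peak lands near the 1000-nit PQ code
```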
Finalising your project
When you have finished with your editing you have two options:
Stay in PQ and produce an HDR-10 master
Delete all HDR tools HLG to PQ conversions and change back the project to HLG
If you produce an HDR-10 master you will need to edit twice for SDR: duplicate the project and apply the HDR tool from HLG to SDR, or another LUT of your choice.
If you stay in HLG you will produce a single file, but it is likely that HDR will only be displayed on a narrower range of devices due to the lack of HLG support in computers. The HLG clip will have correct grading, as the corrections performed while the project was in PQ with tone mapping will survive the editing, since HLG and PQ share the same colour mapping. The important thing is that you were able to see the effects of your grade.
In my case I have an HLG TV, so I produce only one file, as I can't be bothered doing the exercise twice.
The steps to produce your master file are identical to any other project; I recommend creating a ProRes 422 HQ master and deriving other formats from there using Handbrake. If you change your project back to HLG you will get a warning about the master display; you can ignore it.
Paolo Isgro lives in Belluno (Italy) in the Dolomites National Park, one of the most picturesque alpine locations in the world. Although he lives in the mountains and is fond of nature, Paolo has been limited by his altitude sickness, and so when he tried diving in 2002 he was immediately hooked.
Paolo is a scuba diver and has recently been certified in free diving; he tries to travel as much as possible and is keen to explore distant remote locations.
Paolo has recently participated in a number of underwater photography competitions; among his latest results:
Ocean Art 2019: 1st and 3rd in the Super Macro category
Underwater Photographer of the Year 2020: 2nd in the Behaviour category
Deep Visions 2019: 1st in the Cetacean category
Deep Visions 2019: 2nd in the Macro category
Deep Visions 2019: best snoot image
Questions and Answers
When did you start underwater photography and why?
I started in 2006 during my first trip to Indonesia. Photography has been the natural evolution of my love for the ocean. I wanted to extend the emotions of the dives through images to keep as memories for me and others to enjoy.
How much diving experience did you have when you started?
I had around 50 dives when I started. I have done another 900 dives since then, all with my camera.
Were you a land photographer before starting?
I did not have significant photography experience prior to diving. I like to take shots of the diving locations I visit; however, when I am at home I do not have sufficient time to dedicate to land photography.
Today my underwater photography is concentrated during my trips, although I keep studying and learning when I am home.
What was your first underwater camera and housing?
My first camera was a Nikon E4600 point and shoot with a Fantasea housing. A year later I replaced it with an Olympus with a strobe, and in 2009 I bought my first DSLR.
What is your current camera rig and why did you choose it?
For wide angle I use a Canon 80D with Canon 8-15mm FE or Tokina 10-17mm FE while for macro a Canon 7D with Canon 60mm lens. I also use Inon UCL-67 wet lenses and an inverted Canon 24mm Pancake for extreme super macro.
I use a Sea and Sea housing with a 45-degree viewfinder, and I have developed my own trim system with self-made floats on Stix arm segments. I have of course a macro port, a Zen minidome and a 170 dome with 20 and 30 extensions.
My strobes are Inon Z330, OneUW 160 and Inon Z240, the latter as a remote snoot rig using a Triggerfish. I have several snoots, including some self-made in fibre optic.
I started using Sea and Sea housings in 2009 when I bought the DSLR, and I have stayed with this brand ever since. Maybe there are better products now; however I have found Sea and Sea to be very sturdy and reliable, and I have invested in the ports and accessories, so now it is difficult to change.
As for the camera, right now I think Nikon is better than Canon; however I had already built my set of lenses, and I really like the reverse ring macro that Canon offers.
What is your favourite underwater photography discipline?
I started with macro and I have a lot of experience with it. I think macro is the easiest discipline in underwater photography you can start critter hunting with a dive guide and just keep shooting. When you have more experience, you start framing correctly and understanding the correct positioning as well as the lighting. Eventually you realise that the background can be at times more important than the subject and that it is not just about shooting but waiting for the subject to be ready for your shot, chasing the peak of the action.
I have also spent time developing special techniques with reverse rings, or with mixed lighting or slow shutter speeds. Sometimes I use vintage lenses to get a special bokeh at very wide apertures. I try to constantly move forward: some experiments are very successful, like super macro or slow shutter shots; others, like vintage lenses, are still to be improved. I constantly look at the work of other photographers to understand if there is a technique I am interested in trying. Another point in favour of macro is that most key locations are accessible at reasonable cost, so once on location I recommend hiring a private guide to support you taking the shots and maximise the opportunities.
Ajiex Dharma in Tulamben and Obet Curpuz in Anilao are the guides that have helped me the most during my trips.
Wide angle is the discipline that today I find most interesting, especially large animals and the possibility to dive in spectacular dive sites. I think I still need to develop my wide angle photography.
Wide angle is the most complex discipline in underwater photography, and I recommend trying it once you already have some experience. There are many challenges: firstly you need a location with the right mix of reefs and fish life, and those tend to be more difficult to dive, with currents, surge, or sometimes deep dives. Balancing ambient and strobe light is complex and requires more powerful strobes to cover fisheye lenses. I find it particularly challenging to develop a wide angle vision, framing the shots in such a way that they have depth and energy.
Selection of shots
Accelerated panning with snoot:
Supermacro with reverse ring:
Macro with vintage lenses:
Ambient light wide angle:
Wide angle:
What has been to date your best trip from a photography viewpoint?
Triton Bay (Indonesia) has an incredible variety of subjects: five different pygmy seahorses (satomi, pontoi, severnsi, denise, bargibanti), whale sharks, and the reef fish and invertebrates of West Papua. The reefs offer incredible scenes in shallow water thanks to ambient light, and the beaches are wonderful.
How many trips have you done in the last 3 years and where?
Lately I have been lucky enough to make up to 3 trips per year. In the last 3 years I have been to Fiji/Tonga, La Paz, Socorro, Anilao, Tulamben, Gorontalo, Triton Bay, Raja Ampat and Weda Halmahera. I prefer staying in a resort for two reasons: I can repeat the same dive site over and over, and I can stay for a longer period of time. Clearly some locations are only accessible by boat, but if there is a choice I will always stay in a resort; typically I look for small locations with a limited capacity, specialised in underwater photography.
Has there been a defining moment where you think your photography improved significantly?
No. I am self-taught so I have had to study hard. I like to research the theory before trying and as I am far from the sea my progression has been steady and continuous.
It is really important to understand your own limitations and mistakes; this is a key point. Even if you get many likes on Facebook or win a competition, you don't understand from there how to move forward, and you get stuck in a loop. Having some friends who are expert and open minded and can give you some feedback is extremely useful.
What is your personal favourite shot among all you have taken?
I think my shot with strobe and accelerated panning of this seahorse really gives the idea of a horse galloping in the wind!
Do you want to be featured? The next article could be about you
I hope you are staying safe during the COVID-19 pandemic; in case of doubt please err on the safe side and check any advice that sounds 'original'.
In order to keep morale up I have decided to start a series of 121 Q&As with up-and-coming underwater photographers who have either won some competitions or created some emotionally engaging images in the last few years and, MORE IMPORTANTLY, are happy to share their work and ideas.
The first release will be on Saturday 28 March 2020 and will feature Paolo Isgro.
I believe Paolo has produced some really exciting macro images in recent years, but I see the greatest potential in his wide angle work, where he is producing more exciting images with each trip.
If you are or know a photographer that wants to share his work please let me know and I will send out the questionnaire.
Dual native ISO is one of the most confusing topics in modern videography. Almost every professional camera, Alexa and Varicam included, has dual native ISO. So what is it, and does it matter to underwater shooters?
Sensitivity and ISO
Most of the confusion stems from the fact that film no longer exists. When you had film you could choose different ASA ratings, or film sensitivities, and once a roll was loaded in the camera you were stuck with it until it was finished.
With digital cameras having memory storage you can change the ISO flexibly, but there is some confusion about ISO and sensitivity, so let's have a look at some details.
In the schematic above the film represents the sensor. As with film, the sensor has a fixed level of sensitivity that does not change.
The two triangles are gain circuits; those will amplify the signal coming from the sensor, which is still analog and has yet to be converted into a digital signal. A camera typically has a single gain circuit, but some cameras have two; in this case we have a dual gain circuit, as in the Panasonic GH5s or the Blackmagic Pocket Cinema Camera 4K/6K.
It follows that, with the amplifier in pass-through, the camera can only have a single native ISO. So the whole definition of dual native ISO is incorrect, and such a camera should really be called a dual gain camera, as the sensor has only one ISO. The ISO formula defines speed in lux*sec with the formula below; this gives the native ISO of the sensor, and the gain levels on the amplifier are then mapped to an Ev or stops scale.
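That mapping can be sketched as follows. The 6.02 dB per stop figure is just 20*log10(2), and the gain values in the example are illustrative, not Panasonic's published numbers:

```python
import math

DB_PER_STOP = 20 * math.log10(2)  # about 6.02 dB of voltage gain per Ev

def iso_for_gain(native_iso: float, gain_db: float) -> float:
    """Map analog amplifier gain to the ISO value shown on the dial.

    Each ~6.02 dB of gain doubles the signal, i.e. one stop, so each
    stop of gain doubles the effective ISO above the native value.
    """
    return native_iso * 2 ** (gain_db / DB_PER_STOP)

print(round(iso_for_gain(200, 0)))      # 200 -> no gain: the native ISO
print(round(iso_for_gain(200, 6.02)))   # 400 -> one stop of analog gain
print(round(iso_for_gain(200, 12.04)))  # 800 -> two stops
```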
It is worth noting that ISO values as seen on a camera are typically off, and real values are lower; this is because manufacturers tend to leave headroom before clipping.
And how do I find out the native ISO of my camera? This is typically not clearly documented, but generally it is the lowest ISO you can set on the camera outside the extended range, where extended really means an additional digitally enhanced range.
For simplicity this is a snapshot of the GH5 manual where you can see that the native ISO is 200. The extended gain is below 200 and above 25600.
A method to check what gain circuitry is installed in a camera is to look at the read noise graphs on Photons to Photos.
When we look at a camera with a dual gain circuit the same graph has a different shape.
In the case of the Panasonic GH5s the sensor has a native ISO of 160; this is the value without any gain applied. You can also see that at ISO 800, when the high gain amplifier is active, the read noise is as low as at ISO 320. This is why there is a common misconception that the GH5s native ISO is 800, but as we have seen it is not.
The GH5s manual mentions a dual native ISO setting; as we have seen this is actually an incorrect definition, as the sensor has only one native ISO, and that is 160.
The first, low gain analog amplifier works from ISO 160 to 800, and the high gain amplifier works from 800 to 51200; values outside this range are only digital manipulation.
Gain and Dynamic Range
In order to understand dynamic range, defined as the Ev difference between the darkest and brightest parts of the image, we can look at a DR chart.
This chart looks at photographic dynamic range (usable range), so it is much lower than the advertised 12 or 13 Ev from Panasonic, but it nevertheless shows that dynamic range is always higher at the lowest ISO. This may or may not be the native ISO; in the GH5s case it is actually ISO 80, in the extended range. First of all, it is not possible to increase dynamic range by virtue of amplification, so it is not true that the camera DR will be higher at, say, ISO 800. So why do you find plenty of internet posts and videos saying that the GH5s native ISO is 800? It is because of confusion between photo styles, gain and gamma curves.
When the gamma curve is logarithmic the camera will no longer reach saturation at the native ISO of 160 but will require an additional stop of light. This is explained in the manual, where we can see that the values 160 and 800 have shifted to 320 and 1600.
We can also see that at variable frame rates the camera needs additional gain to record V-Log, so the ranges are 320-2500 and 2500-25600. Values above 25600 are not implemented for V-Log because the underlying gain is already at the 51200 level.
So what has changed in the situation above are the base ISOs of the low and high gain settings, depending on the gamma curve.
The compression of the gamma curve allows further dynamic range to be recorded, despite the higher noise due to the higher gain applied.
In terms of Ev or stops, HLG has more dynamic range than V-Log; however it is not grading-ready and is really more an alternative to Like709. In this evaluation the knee function was not activated, so the real gap between HLG and Like709 is less than 4.3 Ev.
When it comes to V-Log vs CineLike D, we can see that V-Log has a higher maximum exposure than CineLike D; however, by virtue of the additional gain applied, it also has a higher minimum exposure, resulting in 0.4 Ev less dynamic range. What really matters, though, is the maximum brightness, as displays are typically not true black and a lot of the lower darks are simply clipped.
Due to the difference in gamma curves, their impact on ISO, and the invariance of the native ISO, it is totally pointless to compare a linear style like CineLike D with a log one (V-Log) at the same ISO setting. The comparison has to be done with V-Log set one stop higher in ISO.
So most of the videos you see on YouTube comparing the two settings at the same exposure settings are flawed, and no conclusions should be drawn from them.
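A minimal sketch of why the dial ISOs must be offset; the base ISO values are the GH5 figures mentioned earlier in the post (200 for linear styles, 400 for log):

```python
import math

def gain_stops(dial_iso: float, base_iso: float) -> float:
    """Stops of analog gain applied relative to a photo style's base ISO."""
    return math.log2(dial_iso / base_iso)

print(gain_stops(400, base_iso=200))  # 1.0 -> one stop of gain in a linear style
print(gain_stops(400, base_iso=400))  # 0.0 -> no gain at all in V-Log
# A fair comparison matches the applied gain, not the dial number:
print(gain_stops(200, 200) == gain_stops(400, 400))  # True
```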
Because V-Log needs higher gain, and higher gain means higher noise, log footage in dark conditions may well appear grainier than linear photo styles. As V-Log really excels in the highlights, you need to evaluate case by case whether it is worth using for your project. In particular, when the high gain amplifier is engaged it may make more sense to use CineLike D, so that the gamma is not compressed and there are no additional artefacts due to the decompression of the dark tones.
Underwater Video Implications
When filming underwater we are not in situations of extreme brightness, except in specific cases, and this is the reason why log profiles are generally not useful. However a dual gain camera can be useful depending on the lens and port used.
In a macro situation we generally control the light, and therefore dual gain cameras do not offer an advantage.
For wide angle supported by artificial lights the case is marginally better and strongly depends on the optics used. If appropriate wet optics are used and aperture f-numbers are reasonably low the case for low gain cameras is not very high.
For ambient light wide angle on larger sensor cameras with dome ports dual gain cameras are mandatory to improve SNR and footage quality. This is even more true if colour correction filters are used and this is the reason a Varicam or Alexa with dual gain are a great option. However considering depth of field equivalence you need to assess case by case your situation. If you shoot systematically higher than ISO 800-1250 than a camera with dual gain is an absolute must even in MFT format.