After a few months of using the GH6, it is time to answer the question pretty much every GH5 user is asking.
The answer as always is … it depends. I hope this article will help you clarify your thinking.
I have done a number of tests on all the GH5 and GH6 series cameras including the original GH5, the GH5S, the GH5M2 and the GH6.
While many people talk about dynamic range, most only care about noise, and in particular whether it will show in your footage or not.
Unfortunately, accurate read noise calculations are only possible for RAW image files, not video. Video also has an additional issue: temporal noise.
As noise is random by nature, each frame has its own noise, and the frame-to-frame changes generate that flickering effect that everybody hates. This is called temporal noise and, to an extent, every camera has it.
Obviously less noise means less flickering, but all cameras will have some.
The other claim that has been going around forever is that large pixels are better for low noise. This is also not true, as more pixels can be added and the noise averaged out. So the only things that matter are sensor size, sensor construction and the sensor coating.
The original GH5 did not have a great coating, so when the GH5M2 was released sharing the same coating as the G9, most people said it would not matter much; in fact it does.
The benefit that the AR coating brings to the GH5M2 compared to the original GH5 is around 2/3 of a stop, which is not negligible.
The other difference among the various GH cameras is how VLOG is implemented.
In the GH5/GH5M2, V-Log is simply a curve and brings no major benefit over other photo styles, but it helps you avoid clipping highlights at the expense of additional noise. This noise is managed by overexposing 1 stop.
In the GH5S/GH6, V-Log applies an underexposure behind the scenes of 1 and 1 1/3 stops respectively, so dynamic range is maximized. Both cameras have a strategy to deal with the resulting noise: the GH5S applies noise filtering, the GH6 scaling. The net result is that V-Log on those cameras is better than shooting anything else.
Using a mix of read noise measurements on RAW files and calculations of how noise is managed, I have created the following chart, which shows how noise behaves at bit level as ISO goes up.
Here you can see all the cameras; I think this graphic explains pretty much what happens at high ISO. At low ISO you also need to take shot noise into account, which my analysis cannot evaluate, but this makes only a small difference to the conclusions.
So let's go into the specifics.
I am a hybrid user and want the best of both worlds: which camera is better for me?
The GH5M2 is currently the best camera in this category: it offers the best still-image performance, it has IBIS, and its video is very good and can be improved with an external recorder if you wish. It also records 8-bit, which is fine for those who do not want 10-bit at all costs, and uses SD cards. The dynamic range of its still images is the best of all the GH series cameras, as seen on Photons to Photos. Remember that RAW files are not denoised or scaled like video.
I am a GH5 video user: should I buy a GH6?
Assuming you shoot V-Log (if you don't, any camera works just fine), the answer is yes: unless you are always at ISO 400 with your GH5 and do not want to buy more ND filters, the GH6 is a significant step forward.
You do need to evaluate, however, whether you need all the GH6 offers.
I am a GH5S video user: is the GH6 for me?
While the GH6 performs better than the GH5S in the high-ISO zone, at low ISO it is worse. The GH6 has IBIS and all the features the GH5S has; however, it is limited to ISO 12800. The GH6 also produces 25-megapixel photos, but as a GH5S user that was probably never important to you.
So the answer is yes if you don’t need really high ISO (>12800).
I am a GH5M2 video user: is the GH6 for me?
If you don't mind ND filters, use the camera in both daylight and low light, and need any of the new features such as 4K 120fps, then the answer is yes.
The GH5 has been a very competitive camera and the GH5M2 further improved on it. The GH5S has its own niche, and all of them are strong propositions. When looking at the GH6, the key criterion is whether you are focused on video and need all the codecs and features the camera has.
I have had the GH6 for a bit more than one month now, and it is time to draw conclusions about image quality in both photo and video.
To do that I have run the GH6 side by side with the GH5M2, so far in my opinion the best hybrid Micro Four Thirds camera.
There have been a number of reviews online with regards to the GH6 video mode and for me two have stood out.
The first is the review from CineD and the second is from CVP.
You do need to take a combination of factors into account when you look at video, because the functionality and the camera's image pipeline are what make the video.
In general terms, when it comes to functionality and the codecs offered, the GH6 is simply incredible. I have taken the opportunity to start a bird video project, and that would not have been possible with the GH5M2 or any previous GH series camera.
You can follow my work as it develops here
For the first time I am shooting an entire project in V-Log, and this is due to the implementation in the GH6.
In the GH6 the implementation of VLOG is similar to what is done in the GH5s and the S series. So when you shoot in VLOG the camera is applying a negative adjustment of 1 1/3 stops behind the scenes.
This means when you are shooting at the 250 base ISO the camera is actually internally working at ISO 100.
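As a quick sanity check, the bookkeeping above is simple arithmetic; here is a minimal sketch (the 1 1/3 stop shift is the behind-the-scenes adjustment just described):

```python
# Displayed V-Log base ISO vs the internal value after the hidden
# negative adjustment: each stop of shift halves the effective ISO.
def internal_iso(displayed_iso, shift_stops):
    return displayed_iso * 2 ** -shift_stops

print(round(internal_iso(250, 4 / 3)))  # 99, i.e. roughly the ISO 100 step
```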
In addition, the GH6 no longer underexposes middle grey behind the scenes: it is spot on, both on the grey card and in the linear RAW data.
In addition we now have a Dynamic Range Boost functionality that blends two frames one at High Gain and one at Low Gain to give you additional headroom in the highlights.
The result is that in V-Log you get increased performance with Dynamic Range Boost on from ISO 2000, and very strong dynamic range up to ISO 6400.
I have run some read noise tests using my astrophotography software and, after applying the exposure shifts, arrived at the above result. Take into account that the GH5M2 clips highlights 1 stop earlier, so although the graph seems to indicate that the GH5M2 at ISO 400 has more dynamic range, this is not actually the case. What is true is that up to ISO 1600, with Dynamic Range Boost OFF on the GH6, the GH5M2 outperforms the GH6 in video.
I have shot side-by-side video and will post it on my YouTube channel soon.
However the first conclusion is:
If you do not need 4K 120fps or 5.7K and don't exceed ISO 1600, the GH5M2 is the better choice.
What does this actually mean, and how low-light can you go? In practical terms, f/2, 1/60 and ISO 1600 corresponds to about 17 lux on middle grey, typical of floodlit building exteriors: not that dark, but not that bright either. An indoor lounge with decent lights will have this level of illumination. Of course you can put a strong ND filter on the GH6 and enjoy more dynamic range, but this has a number of other side effects.
The second conclusion is:
Using Dynamic Range Boost gives you a 1 1/3 stop improvement over the GH5M2 from ISO 2000 and more highlight headroom, but worse noise performance at low ISO.
So what is the use case that will definitely favour the GH6? Typically, the need for high-quality high-frame-rate formats plus decent low-light performance. The camera does pretty well up to ISO 6400 in V-Log.
If you don't use V-Log there is noise reduction in camera, so although the footage looks clean the detail is no longer there. So personally I would use V-Log whenever possible with the GH6.
When it comes to photos, the design choices of the GH6 backfire. The camera has incredibly high levels of read noise, as this graphic shows.
In addition, read noise is higher at low ISO, before the camera reaches ISO 800, where the dual gain output kicks in.
This has of course a direct impact on the theoretical maximum dynamic range.
Here you can see that up to ISO 640 the GH5M2 really has an edge; the improvement of the GH6 is limited to the region between ISO 800 and 3200, and even there the benefit is modest, at best 0.5 stops.
So the third conclusion is:
If you are interested in the best photographic dynamic range in micro four thirds the GH5M2 (and the G9) are better choices
As an example, these two images shot outdoors show that in effect our eyes do not really see read noise in a bright scene, and once scaled the two cameras cannot be told apart. However, if you shoot a long exposure at low ISO you will see grain on the GH6 below ISO 800.
However hard I tried, I could not tell the difference between the two images above once processed and scaled.
Final disclosure: all my figures look at pixel-level noise and dynamic range. Scaling to a common size, as in the image above, benefits the GH6 more as it has higher resolution, but the difference is not so large as to invalidate the data, so in general everything I said above holds.
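The scaling effect can be quantified with a standard approximation (my addition, not a measurement): averaging pixels when downscaling reduces noise, and the dynamic-range gain in stops is about half the log2 of the pixel-count ratio. A sketch using the nominal 25 MP (GH6) and 20 MP (GH5M2) counts:

```python
import math

# Approximate DR gain from normalizing two sensors to a common output size:
# downscaling averages noise, giving 0.5 * log2(pixel-count ratio) stops.
def scaling_gain_stops(mp_high, mp_low):
    return 0.5 * math.log2(mp_high / mp_low)

print(round(scaling_gain_stops(25, 20), 2))  # 0.16 stops: real, but small
```

At roughly 0.16 stops, the normalization benefit does not change the pixel-level conclusions.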
I have provided test files to Bill Claff of PhotonstoPhotos and he will publish more reliable and scientific results in due course.
We are both puzzled by the GH6 design and are waiting for another RAW converter to add support so we can reconfirm; however, the triangulation of my data with other sources holds, so I am quite confident in what I wrote here.
Today I went out with both the GH5M2 and GH6 to shoot some footage for my new project.
I have tested the GH6 in my light box, and surprisingly CineD2 has changed: it is now correct but also more saturated. So to avoid issues I shot both cameras in V-Log. Most readers know I am not a fan of log footage; however, today the conditions were pretty decent, so I was at base ISO and did not mind closing the aperture, as I was shooting birds and landscape.
Lumix GH5M2 set up
I shot the GH5M2 in default settings in VLOG without manipulating exposure. The camera has a tendency to overexpose and I let it do it.
I set the colour temperature to daylight to avoid differences and shot All-Intra 30fps at 400 Mbps on a tripod. It was windy at times.
I shot the sunrise, then the pond that is the target of my long-lens work.
Lumix GH6 set up
I used the same settings as the GH5M2 but shot at 60 and 120 fps using the new codecs of the GH6. I did not use Dynamic Range Boost.
As I used a very long lens I had set up a plate for the tripod, but I still got occasional shake as I was fiddling with the remote shutter.
As the camera has a lag before it starts recording, I ended up shooting many blanks. I realised the lens is far too long for birds in flight, but good for detail shots.
Putting it together
I combined footage in Final Cut Pro and used the standard VLOG to V709 LUT. I then added vibrancy and sharpness.
Each scene was corrected for exposure individually; no pre-cooked LUTs were used.
This is the resulting video
I used slow motion from the GH6 at 50% and 25% speed; this is really a great feature for wildlife. The only thing missing is a pre-roll function.
All in all, when the GH6 has Dynamic Range Boost off the two cameras look very similar, and this is because the GH6 levels are clipped.
In 2018 I wrote the original article as I had just acquired the GH5 and was faced with a ton of nonsense about which format to use when shooting video. With the S series software stack Panasonic has made some changes to the options available, so I thought it was about time to refresh the original article. As before, I will focus my analysis on 4K video and ignore other formats. This time I will be looking at the NTSC standards of 29.97 and 59.94 frames per second. This is simply because today the majority of content produced by Panasonic consumer digital cameras is consumed online, and computer screens work at a 60 Hz refresh rate, so shooting anything other than 30 or 60 fps will result in choppy video. This presents some challenges if you are in a PAL zone shooting under artificial lights, but for the purpose of this article I will ignore that issue; you could of course shoot 24 fps and hope for a 24-to-30 conversion, which is sketchy. For simplicity I will refer to 30 and 60 fps rather than the exact values.
Today we have 5 settings for UHD
200 Mbps 420 10 Bits Long GOP 60 fps
150 Mbps 420 8 Bits Long GOP 60 fps
100 Mbps 420 8 Bits Long GOP 30 fps
150 Mbps 422 10 Bits Long GOP 30 fps
400 Mbps 422 10 Bits All-Intra 30 fps.
The last option is only available on the GH5 series and on the S1H. The first option is only available on the S series and the GH5M2.
Long GOP vs All Intra
The difference between Long GOP and All-Intra is that in Long GOP what is encoded is a group of pictures (GOP), not separate individual pictures.
Within a group of pictures there are different types of frames:
I (Intra coded) frames containing a full picture
P (Predictive coded) frames containing motion interpolated picture based on a prediction from previous frames
B (bi-predictive coded) frames containing a prediction from previous or future frames
It is important to note that frames are not stored sequentially in a GOP; the GOP therefore needs to be decoded and the frames reordered for playback, which requires processing power.
The reason why H264/HEVC is so efficient is that within a group of pictures there is only one full frame and the rest are predictions. Clearly, if the prediction algorithm is accurate, the perceived quality of Long GOP is very high and similar to All-Intra clips.
This is why comparing All-Intra and Long GOP using static scenes, or scenes with repetitive movement that the codec can predict very accurately, is a fundamental error.
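A toy sketch of this point (a deliberately naive stand-in for a real codec, assuming P frames store only the pixels that changed): a static scene compresses almost for free, while unpredictable content forces nearly every pixel to be stored again, which is exactly why static-scene tests flatter Long GOP.

```python
import random

# Toy "codec": one full I frame, then P frames storing only changed pixels.
def encode(frames):
    stored = len(frames[0])  # I frame: every pixel
    for prev, cur in zip(frames, frames[1:]):
        stored += sum(a != b for a, b in zip(prev, cur))  # P frame: diffs only
    return stored

static = [[128] * 1000 for _ in range(30)]  # a perfectly static scene
random.seed(0)
noisy = [[random.randrange(256) for _ in range(1000)] for _ in range(30)]

print(encode(static))  # 1000: the 29 P frames cost nothing
print(encode(noisy))   # ~30x more: prediction fails, so at a fixed bitrate quality drops
```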
So which format should you choose?
In order to understand the workings we need to dig deeper into the structure of the GOP but before doing so let’s evaluate the All-Intra codec.
AVC All-Intra explanation
This codec records at 400 Mbps, so at 30 fps this means circa 13.4 Mbits per frame, or 1.67 MB per frame, and there is no motion interpolation, so each frame is independent of the others. The GH5 implementation of All-Intra does not use CABAC entropy coding, only CAVLC; this makes the resulting files easier to read and to edit. The idea of All-Intra is that you don't require powerful hardware to edit without conversion to an intermediate codec. However, based on my experience, this is not entirely true, and you need a decent GPU to play it back and edit in real time without issues.
If you consider a JPEG of a 3840×2160 frame from the GH5, you see that it stores around 4.8 MB per image because there is no chroma sub-sampling; to get exactly the same result you would need ProRes 4444 for comparable quality (and this is not even taking into account that JPEGs are 8-bit images).
Video uses chroma sub-sampling, so only part of the colour information is stored for each frame. Apple in their ProRes white paper declare that both ProRes 422 and 422 HQ are adequate for 10-bit colour depth and 4:2:2 sub-sampling, although they show some quality differences and different headroom for editing. If you count 50% for 4:2:0 sub-sampling and 67% for 4:2:2, you get around 2.34 MB and 3.5 MB frame sizes, which correspond to the individual frame sizes of ProRes 422 and ProRes 422 HQ.
It would appear that All-Intra 400 Mbps falls short of Apple's recommended bit-rate for 4:2:2 10-bit colour; however, practical tests show that AVC All-Intra at 400 Mbps is perceptually identical to ProRes 422 HQ and uses much less space. Some time ago my friend Paal Rasmussen and I also did some SNR measurements, and we did not find significant improvements shooting ProRes 422 HQ versus All-I on card.
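The per-frame arithmetic used in this section can be sketched in a few lines; the 4.8 MB JPEG size is the measured figure quoted above, and the 50%/67% sub-sampling shares are rough fractions, so the results are approximate and round slightly differently from the numbers in the text:

```python
# Back-of-the-envelope frame sizes for the codecs discussed above.
def mb_per_frame(mbps, fps):
    """Megabytes stored per frame at a given bitrate and frame rate."""
    return mbps / 8 / fps

jpeg_444_mb = 4.8  # measured 4:4:4 JPEG of a UHD frame (no chroma sub-sampling)

print(round(mb_per_frame(400, 30), 2))  # 1.67 MB per All-Intra frame
print(round(jpeg_444_mb * 0.50, 2))     # 2.4 MB: 4:2:0 share of the full frame
print(round(jpeg_444_mb * 0.67, 2))     # 3.22 MB: 4:2:2 share of the full frame
```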
Long GOP Codecs
Coming back to the other recording quality option we still need to evaluate how the various long GOP codecs compare relative to each other.
To fully understand a codec we need to decompose the GOP into its individual frames and evaluate the information recorded. Wikipedia will tell you that P frames are approximately half the size of an I frame and B frames 25%. I have analysed Panasonic GH5M2 clips using ffprobe, a component of ffmpeg that tells you exactly what is in each frame, to see whether this explains some people's claims that there is no difference between the settings.
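To illustrate the method, here is a minimal sketch of deriving the GOP parameters N and M from an ffprobe frame-type listing. The frame sequences below are hypothetical samples standing in for real output of ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv clip.mp4:

```python
# Derive (N, M) from a sequence of frame types as reported by ffprobe.
# N = GOP length (I-frame spacing); M = spacing of anchor (I or P) frames.
def gop_structure(pict_types):
    i_pos = [k for k, t in enumerate(pict_types) if t == "I"]
    anchor_pos = [k for k, t in enumerate(pict_types) if t in ("I", "P")]
    return i_pos[1] - i_pos[0], anchor_pos[1] - anchor_pos[0]

# Hypothetical sequences matching the structures analysed below:
print(gop_structure("IBBPBBPBBPBBPBB" * 2))  # (15, 3): I/B/P GOP
print(gop_structure("IPPPPPPPPPPPPPP" * 2))  # (15, 1): no B frames
```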
Link to Panasonic on the H264 implementation is here: documentation
There is unfortunately no documentation of the HEVC implementation that I have found to date.
200 Mbps 420 10 Bits Long GOP 60 fps Analysis
An analysis with ffprobe shows a GOP structure with N=30 and M=1 where N is the length in frames of the group of pictures and M is the distance between I or P frames.
This codec does not have B frames but only P frames.
Analysing a set of I frames of a fixed subject at 60 fps resulted in a frame size of 1.16 MB for the I frames. This value is quite low; however, we need to remember that HEVC is much more efficient than H264.
I shot this test video some time ago comparing this codec's recording with a Ninja V in ProRes 422 HQ. As you can see there are no major differences; however, I have not pushed the grading in the clip.
The speed ramps in this video use this codec
150 Mbps 420 8 Bits Long GOP 60p Analysis
An analysis with ffprobe shows a GOP structure with N=30 and M=3 where N is the length in frames of the group of pictures and M is the distance between I or P frames.
So each Group of Pictures is made like this
IBBPBBPBBPBBPBBPBBPBBPBBPBBPBB before it repeats again.
Analysing a set of I frames of a fixed subject at 30 fps resulted in a frame size of 1.26MB for the I frames.
One very important aspect of the 150 Mbps codec is that, as its GOP is double the length of the single-frame-rate 100 Mbps codec's, there are the same number of key frames per second; it is therefore NOT true that this codec is better at predicting motion. However, the additional frames give better slow-motion performance than software interpolation does in the majority of cases.
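The key-frame arithmetic behind this claim is trivial to check: both codecs deliver the same number of I frames per second.

```python
# I frames per second = frame rate / GOP length.
def iframes_per_second(fps, gop_len):
    return fps / gop_len

print(iframes_per_second(60, 30))  # 2.0 for the 150 Mbps 60 fps codec (N=30)
print(iframes_per_second(30, 15))  # 2.0 for the 100 Mbps 30 fps codec (N=15)
```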
100 Mbps 420 8 Bits Long Gop 30 fps Analysis
An analysis with ffprobe shows a GOP structure with N=15 and M=3 where N is the length in frames of the group of pictures and M is the distance between I or P frames.
So each Group of Pictures is made like this
IBBPBBPBBPBBPBB before it repeats again.
Analysing a set of I frames of a fixed subject at 30 fps resulted in a frame size of 1.49MB for the I frames which is the highest if we exclude All I.
150Mbps 422 10 Bits Long Gop 30 fps
An analysis with ffprobe shows a GOP structure with N=15 and M=1 which means this codec does not use B frames but just I and P frames so the GOP structure is as follows:
IPPPPPPPPPPPPPP before it repeats again.
Analysing a set of I frames of a fixed subject at 30 fps resulted in a frame size of 1.25MB for the I frames.
H264 Codec Ranking for Static Image Quality UHD
So in terms of absolute image quality and not taking into account other factors the Panasonic GH5M2 and S series Movie recording settings ranked by codec quality are as follows:
400 Mbps 422 10 Bit All intra 30 fps (1.67 MB per frame)
100 Mbps 420 8 Bit Long Gop 30 fps (1.49 MB per frame)
150 Mbps 420 8 Bit Long Gop 60 fps (1.26 MB per frame)
150 Mbps 422 10 Bit Long Gop 30 fps (1.25 MB per frame)
The 100 Mbps and 400 Mbps codecs are only marginally different, with the 150 Mbps Long GOP options really far behind.
Note that, as the technology is different, I cannot directly compare the new 200 Mbps codec; however, based on visual impression and the ability to grade, I would recommend it over the 150 Mbps 4:2:0 8-bit.
If you have a camera that has the 400 Mbps All Intra this remains the best format to use. V90 cards have dropped in price and are now available up to 256 GB. Unfortunately this option is only available on the GH5 series and on the S1H.
If you have a camera that does not have All-Intra you can of course purchase an external recorder, which in some cases will allow you to shoot RAW; however, this is not necessarily going to give better image quality and will definitely extend your processing time.
My revised advice, if your camera does not have All-Intra and you don't have an external recorder, is as follows:
Use the 100 Mbps Long GOP codec: it is very efficient in its compression and the perceived quality is very good. You need to get exposure and white balance right in camera, as the clips may not withstand extensive correction. With footage containing a lot of motion there is a risk of motion-interpolation errors generating artefacts, but in my experience this risk is not very high.
Use the new 200 Mbps HEVC for double frame rate: it is not hard to process, as 10-bit HEVC has hardware acceleration on all platforms.
Generally there appears to be no benefit in using the internal 4:2:2 10-bit codec or the 4:2:0 8-bit double frame rate, due to the limitations of their GOP structures; in addition, the lack of hardware acceleration for 10-bit H264 means you will need to convert the files for editing, and they do not open in standard programs or load on phones or tablets. The same is true for All-Intra, but at least you can edit it without problems.
To conclude this is a summary table with all key information
A certain number of GH5 users have upgraded to the S5. I was one of them, until I sold the camera after one month of use and after buying a Ninja V. If you are a Panasonic S1/S5 user you have to contend not only with recording time limits but also with the lack of in-camera codecs needed to use the camera's full potential. You need to add an external recorder to really see the benefits, because in real-life situations you are not shooting a step chart: dynamic range is destroyed by compression quality and errors, and SNR drops. It would be interesting to test how the GH5M2's 400 Mbps codec compares with one of the S cameras using the 150 Mbps 10-bit codec, but this is not something I did. I would only warn everyone going down that path that you may get less than you think, and you may require additional hardware to get there. Take also into account that the S series only shoots 50/60 fps in APS-C/Super35 mode, and that in full-frame mode there is a substantial amount of rolling shutter that makes pans and tilts practically impossible.
If you have a GH5 (Mark I, II or S) you are probably not just doing underwater video but also taking scenes on land. In fact perhaps you are reading this blog and you do not even use your camera underwater.
Either way, you know by now that shooting video handheld with just your camera is not a terribly good idea, and the tools required are different from photography, where once you have a tripod, a remote shutter and a bunch of filters you are practically set.
Video requires quite a bit of hardware. I have gone through this process and even tried to adapt some of my underwater hardware, but it was really bulky, and in the end you are better off with proper gear, so I use Smallrig.
Smallrig is a Chinese company that makes tons of various bits for your camera for a variety of situations. The first item you need is a cage and they have just released a new updated version for the GH5 cameras.
Clicking the image above will take you to Amazon UK where you can check details of the item.
The cage is the starting point and you can attach your camera strap to it. Its 190 grams do not add much weight to the camera, and in any case if you have a GH5 you are used to carrying a bit more. The cage lives permanently on my GH5 unless I put the camera in the underwater housing.
I am now going to go through a series of set-ups that I use with some other components as well, but first some words of warning.
Smallrig suggests using a NATO handle on the left-hand side; however, this blocks the side HDMI port from opening. In addition, the ARRI locating pins are too high, so if you use a side handle in ARRI format it will hang and not sit level with the cage.
I wrote to Smallrig to tell them and they accepted the feedback. The reality is that they do not test designs with a camera: they run 3D simulations and do not go and open all the ports, plugs, etc. So the only handles that actually work with this cage without impeding any functionality are the ones in this article.
Monopod/Tripod Rig for Wildlife shooting
The basic rig for monopod and handheld work (this means you hold the camera steady and you do not move) has a top handle and a side handle plus a small microphone and if required an LCD shade.
I have gone through a series of handles for this cage and I have settled for the following items:
We are looking at £180 for all this hardware, so it is not cheap; however, I cannot overemphasize how much more ergonomic and effective this is compared to holding the camera directly.
I have used this rig for my deer film project.
There are situations where setting up a gimbal is too much and you want to take simple footage handheld just using the camera IBIS. This works very well with a standard zoom lens like the Panasonic DG Leica 12-60mm or the Lumix 12-60mm.
In those situations it is better to have two handles, and the set-up needs to be as light as possible.
You will notice that these handles are smaller and lighter. If you walk with your camera you are typically using the LCD, and with two handles you can be much steadier than if you were just gripping the camera.
You can also use the EVF instead of the LCD if you prefer to have three points of contact with the camera.
In situations where you are in a studio or another controlled environment you can use manual focus for pulling, and usually you also have a monitor for precise exposure. I tend to use this on a fluid video head and a sturdy tripod for indoor shots.
Here I am shooting the mighty PanaLeica 10-25mm and I have my trusted Swit CM-55c field monitor.
I have an Atomos Ninja; however, since the GH5M2 records 10-bit 60 fps in camera, I only use it for bright outdoor scenes, and the Swit is just better in terms of tools, as well as lighter.
With the new linear focus option for manual focus you can have a complete 360-degree turn of the focus ring, or shorter throws if preferred. I use 300 degrees.
It can be a bit daunting to search for the right items for your cage; I hope this article helps you make the right choices and saves you time, without the trial-error-and-return process I have gone through in the last year!
There is no doubt that LOG formats in digital cameras have a halo of mystery around them, mostly due to the lack of technical documentation on how they really work. In this short article I will explain how Panasonic V-Log actually works on different cameras. Some of what you read may surprise you, so I have provided the testing methods and the evidence so you can decide whether LOG is worth considering for you or not. I will aim to make this write-up self-contained, so you have all the information you need here without having to search elsewhere, although it is not entirely possible to create a layman's version of what is, after all, a technical subject.
A logarithmic operator is a non-linear function that processes the input signal and maps it to a different output value according to a formula. This is well documented in Panasonic V-Log/V-Gamut technical specifications. If you consider the input reflection (in) you can see how the output is related to the input using two formulas:
out = 5.6*in + 0.125 (in < cut1)
out = c*log10(in+b) + d (in >= cut1)
Where cut1 = 0.01, b=0.00873, c=0.241514, d=0.598206
There are a few implications of this formula that are important:
0% input reflectance is mapped to a code value of 0.125, which is 7.3% IRE once placed in the legal 10-bit video range (64–940)
Dark values are not compressed until the output reaches 18% (the linear portion ends at 1% reflectance)
Middle grey (18% reflectance) is still 42% IRE, as in standard Rec709
White (90% reflectance) is 61% IRE, much lower than Rec709
Filling the whole code range requires an input reflectance of 4609%, which is 5.5 stops of headroom for overexposure
So what we have here is a shift of the black level from 0% to 7.3% and a compression of all tones above 18%. This gives V-LOG its washed-out look, which is mistakenly interpreted as flat, but it is not flat at all: what is shifted is the master pedestal, as the black level is known in video. Another consequence of this formula is that below 18% V-LOG works exactly like standard gamma-corrected Rec709, so it should have exactly the same performance in the darks, with a range between 7.3% and 18% instead of 0–18%.
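The curve and the figures above can be verified directly from the published constants; this sketch also includes the conversion from full-range code value to IRE over the legal 10-bit range (64–940), which is how the listed percentages are obtained:

```python
import math

# Panasonic V-Log curve (output is a normalized full-range code value).
CUT1, B, C, D = 0.01, 0.00873, 0.241514, 0.598206

def vlog(reflectance):
    if reflectance < CUT1:
        return 5.6 * reflectance + 0.125
    return C * math.log10(reflectance + B) + D

def code_to_ire(code):
    """Map a full-range code value to IRE over the legal 10-bit range."""
    return (code * 1023 - 64) / (940 - 64) * 100

for x in (0.0, 0.18, 0.90):
    print(x, round(code_to_ire(vlog(x)), 1))  # ~7.3, ~42.1 and ~61.4 IRE

# Reflectance that fills the whole code range: ~46x, i.e. ~5.5 stops over 100%.
x_max = 10 ** ((1.0 - D) / C) - B
print(round(x_max, 1), round(math.log2(x_max), 1))  # 46.1 5.5
```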
In terms of ISO measured at 18% reflectance, V-LOG should have an ISO value identical to any other photo style in your camera; this means that at a given aperture and exposure time the ISO in a standard mode must match V-LOG.
When we look at the reality of V-LOG, we can see that Panasonic sets the meter's zero point at a value of 50% IRE, so generally 2/3 to 1 full stop overexposed; this becomes obvious when you look at the waveform. As a result, blacks actually sit at 10% IRE and whites at 80% once a conversion LUT is applied.
Challenges of Log implementation
LOG conversion is an excellent method to compress a high dynamic range into a smaller bit depth format. The claim is that you can pack the full sensor dynamic range into 10 bits video. Panasonic made this claim for the GH5s and for the S1H, S5.
There is however a fundamental issue. In a consumer digital camera the sensor is already equipped with an on-board analog-to-digital converter, and this operates in a linear, non-log mode. This means the sensor dynamic range is limited to the bit depth of the analog-to-digital converter, and in most cases sensors do not even saturate the on-board ADC. It is true that an ADC can also resolve fractions of a bit, but this does not substantially change the picture.
If we look at the sensor used in the S1H and S5, this is based on a Sony IMX410 that has a saturation value of 15105 (in 14-bit units), or 13.88 stops of dynamic range. The sensor of the GH5s, which is a variant of the Sony IMX299, has a saturation of 3895 (at 12 bits), or 11.93 stops.
None of the S1H, S5 or GH5s actually reaches the nominal dynamic range that the ADC can provide at sensor level. The sensor used by the GH5 has more than 12 stops of dynamic range and achieves 12.3 EV of engineering DR; as the camera has a 12-bit ADC, it will resolve a smaller number of tones.
So the starting point is 12 or 14 stops of data to be digitally, not analogically, compressed into 10-bit coding. Rec709 has a contrast ratio requirement of 1000:1, which is less than 10 stops of dynamic range. This should not be confused with bit depth: with 8-bit depth you can manage 10 stops using gamma compression. If you finish your work in Rec709 the dynamic range will never exceed log2(1000) = 9.97 stops. So when you read that Rec709 only has 6.5 stops of DR, or similar claims, they are flawed, as gamma compression squeezes the dynamic range into a smaller bit depth.
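All the stop figures in this passage come from one relation, dynamic range in stops = log2 of the saturation value or contrast ratio; a quick check:

```python
import math

# Dynamic range in stops from a saturation value or a contrast ratio.
def stops(ratio):
    return math.log2(ratio)

print(round(stops(15105), 2))  # 13.88: IMX410 (S1H/S5) saturation
print(round(stops(3895), 2))   # 11.93: GH5s sensor saturation at 12 bits
print(round(stops(1000), 2))   # 9.97: Rec709 1000:1 contrast requirement
```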
When we look at a sensor with almost 14 stops of dynamic range, the standard Rec709 gamma compression is insufficient to preserve the full dynamic range, as it is by default limited to 10 stops. It follows that LOG is better suited to larger sensors, and this is where it is widely used by all cinema camera manufacturers.
In practical terms, the actual photographic dynamic range (defined as the dynamic range you would see on a print 10″ on the long side viewed at arm's length), the one you can see with your eyes in an image, is less than the engineering value. The Panasonic S5 in recent tests showed around 11.5 stops, while the GH5S is around 10 and the GH5 around 9.5 stops. Clearly a step-chart tool will show more than this value, but in practice you will not see more DR in real terms.
This means that a standard gamma-encoded 10-bit video may be adequate in most situations and nothing more is required. There is also a further issue with the noise that log compression and decompression produce. As with any conversion that is not lossless, the amount of noise increases, and this is especially apparent in the shadows. In a recent test performed with an S5 in low light and measured using Neat Video's assessment, V-Log was one of the worst performers in terms of SNR. The test involved shooting a colour checker at 67 lux of ambient illumination and reading the noise level on the four shadow and dark chips. Although this test was carried out at default settings, it has to be noted that even increasing noise reduction in V-LOG does not eliminate the noise in the shadows, as this depends on how V-LOG is implemented.
The actual V-Log implementation
How does V-LOG really work? From my analysis I have found that V-Log is not implemented equally across cameras; this certainly depends on the sensor's performance and construction. I do not know how a Varicam camera is built, but in order to perform V-Log as described in the document you need a log converter before the signal is converted to digital. In a digital camera the sensor already has an on-board ADC (analog to digital converter) and therefore the output is always linear on a bit scale of 12 or 14 bits. This is a fundamental difference and means that the math as illustrated by Panasonic in the V-LOG/V-Gamut documentation cannot actually be implemented in a consumer digital camera that does not have a separate analog log compressor.
I have taken a test shot in V-LOG as well as other standard Photo Styles with my Lumix S5; these are the RAW previews. V-LOG is exactly 2 2/3 stops underexposed on a linear scale; all other parameters are identical.
What is happening here? As we have seen, ISO values have to be the same between photo styles and refer to 18% middle grey; however, if you apply a log conversion to a digital signal the result is a very bright image. I do some wide-field astrophotography and use a tool called Siril to extract information from very dark images; this helps visualise the effect of a log compression.
The first screenshot is the RAW file as recorded: a very dark black-and-white image, as those tools process the RGB channels separately.
The second image shows the same RAW image with a logarithmic operator applied; this gives a very bright image.
Now, if you have to keep the same middle grey value, exposure has to match that linear image, so what Panasonic does is change the mapping of ISO to gain. Gain is the amplification on the sensor chip and typically has values up to 24-30 dB. While in a linear image the ISO would be defined as 100 at zero gain (I am simplifying here, as even at 100 there will be some gain), in a log image zero gain corresponds to a different ISO value; the mapping of ISO to gain is changed. When you read that the native ISO is 100 in normal mode and 640 in V-LOG, this means that for the same gain of 0 dB a standard image looks like ISO 100 and a V-LOG image looks like ISO 640, because V-LOG needs less gain to achieve the same exposure as the log operator brightens the image. In practical terms the raw linear data of V-LOG at 640 is identical to an image taken at 100.
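To make the remapping concrete, here is a small sketch (my own, using the figures quoted above) showing that the two "native" ISO labels at the same 0 dB gain differ by roughly the 2 2/3 stops of underexposure seen in the RAW previews:

```python
import math

iso_standard = 100  # displayed native ISO at 0 dB gain, standard photo styles
iso_vlog = 640      # displayed native ISO at the same 0 dB gain in V-LOG

# How many stops brighter the log operator makes the image at identical gain
offset = math.log2(iso_vlog / iso_standard)
print(round(offset, 2))  # 2.68, i.e. roughly 2 2/3 stops
```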
This is the reason why, when a videographer takes occasional raw photos and leaves the camera in V-LOG, the images come out underexposed.
The benefit of the LOG implementation is that, thanks to log data compression, you can store the complete sensor information in a lower bit depth; in our case this means going from 14 to 10 bits.
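A toy example (my own, using an arbitrary log curve rather than Panasonic's actual one) shows why a log transfer lets 10 bits hold shadow detail that a plain linear requantisation from 14 bits would throw away:

```python
import math

BITS_IN, BITS_OUT = 14, 10
MAX_IN, MAX_OUT = 2**BITS_IN - 1, 2**BITS_OUT - 1

def linear_10bit(x):
    """Plain linear requantisation from 14 to 10 bits."""
    return round(x / MAX_IN * MAX_OUT)

def log_10bit(x):
    """Toy log encode, NOT Panasonic's actual V-Log math."""
    return round(math.log2(1 + x) / math.log2(1 + MAX_IN) * MAX_OUT)

# Output codes devoted to the darkest stop of the 14-bit signal (values 0..63)
shadow = range(64)
print(len({linear_10bit(x) for x in shadow}))  # 5 codes survive
print(len({log_10bit(x) for x in shadow}))     # 64 codes survive
```

Linear requantisation collapses the darkest stop to a handful of codes, while the log curve keeps every input level distinct, which is the whole point of the exercise.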
There are however some drawbacks due to the fact that at the linear level the image was ‘underexposed‘. I put the term in italics as exposure only depends on time and the aperture of the lens; what we have is in effect a lack of gain, for which there is no term.
The first issue is noise in the shadows: on a linear scale the shadows are compacted and, as the image is underexposed, a higher amount of noise is present, which is then amplified by the LOG conversion. It is not that LOG lacks noise reduction; rather, standard noise reduction expects a gamma-corrected linear signal and therefore cannot work properly (try setting a high value in V-LOG on an S camera to see the results). The issue is the underexposure (lack of gain) of the linear signal.
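To illustrate the mechanism, here is a small sketch (my own, with an arbitrary toy curve, not Panasonic's) showing that the local slope of a log encode is far steeper in the shadows, so the same read noise gets amplified much more there than at middle grey:

```python
import math

def toy_log(x):
    """Toy log encode of a 0..1 linear signal (not actual V-Log)."""
    return math.log2(1 + 1023 * x) / 10

def noise_gain(x, eps=1e-6):
    """How much the curve amplifies a tiny perturbation (noise) at level x."""
    return (toy_log(x + eps) - toy_log(x)) / eps

deep_shadow = noise_gain(0.001)   # very large amplification
middle_grey = noise_gain(0.18)    # amplification below 1x
print(deep_shadow > 50 * middle_grey)  # True
```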
There are also additional side effects due to what is called black level range; I recommend reading photonstophotos, a great website maintained by Bill Claff. When you look at black levels you see that cameras do not really have pure black but a range. This range results in errors at the lower end of the exposure scale; the visible effect is colour bleeding (typically blue) in the shadows when there is underexposure. As V-LOG is underexposed in linear terms, you will have issues of colour bleeding in the shadows; these have been experienced by several users so far with no explanation.
The other side effect is that the LUT to decompress V-LOG remains in a 10-bit colour space, which was insufficient to store the complete dynamic range data, and this does not change. So the LUT does not fully reverse the log compression; in Panasonic's case it maps into the V709 CineLike gamma, which is a Rec709 gamma. As the full signal is not decompressed, there are likely errors of hue accuracy, so V-LOG does not have a better ability to reproduce accurate colours and luminance; this is the reason why, even after a LUT is applied, the footage needs to be graded. If you instead decompress V-LOG into a log space like Rec2020 HDR, you will see that it does not look washed out at all and the colours are much more vibrant, as the receiving space has in excess of 20 stops.
Some users overexpose their footage saying they are doing ETTR. Due to the way log is implemented, this means the clipping point is reached sooner and the dynamic range is no longer preserved. It is a possible remedy to reduce the amount of noise in low light; however, the log compression is then not fully reversed by the LUT, which expects a middle grey exposure, so colour and luminance accuracy errors are guaranteed. If you find yourself regularly overexposing V-LOG you should consider not using it at all.
Shadow Improvement and input referred noise
The Lumix cameras with a dual gain sensor behave differently from those without. This is visible in the following two graphs, again from Bill Claff's excellent website.
The first is the shadow improvement by ISO. Here you can see that while the GH5/G9 stay flat and are essentially ISO invariant, the GH5S and S5, which have a dual gain circuit, show an improvement step when they go from low to high gain. This is due to the way the sensors of the GH5S and S5 are constructed: the back illumination means that when the high gain circuit is active there is a material improvement in the shadows, and the camera may even have a lower read noise at this ISO (gain) point than it had before.
Another benefit of the dual gain implementation is easier to understand when you look at input-referred noise graphs. You can see that as the sensor enters the dual gain zone the input-referred noise drops. Input-referred noise is the noise you would need to feed as input to the circuit to produce the same noise at the output; once that step is passed the image will look less noisy. Again, while the GH5 stays relatively flat, the GH5S and S5 show a step improvement. It is not totally clear what happens in the intermediate zone for the GH5S; possibly intermediate digital gain or more noise reduction is applied.
The combination of a certain type of sensor construction and dual conversion gain can be quite useful to improve shadows performance.
Do not confuse the dual gain benefit with DR preservation: while dual gain reduces read noise, it does not change the fact that the highlights will clip as gain is raised. So the effective PDR reduces in any case and is not preserved. The engineering DR is preserved, but that is only useful to a machine, not to our eyes.
Now we are going to look at the specific implementations of V-LOG in various camera models.
Front Illuminated 12 bits Sensors
These are traditional digital photo cameras and include, for example, the GH5 and G9. On these cameras you will see that the V-Log exposure shows an ISO value 1 stop higher compared to other photo styles at identical aperture and shutter speed settings, but the actual result is the same in a raw file: your RAW at 400 in V-LOG is the same as another photo style at 200. This directly contradicts Panasonic's own V-Log model, as the meter should read the same in all photo styles, so something is going on here. As there is no underexposure, it follows that there is no real log compression either. These cameras are designed in a traditional way: low ISO (gain) is good, high ISO (gain) is not. This is visible in the previous graphs.
These screenshots show how the raw data of an image taken at ISO 250 in standard mode is identical to the V-LOG image, and therefore that there is no LOG compression at all in the GH5. V-LogL on the GH5 is therefore just a look and does not bring any increase in dynamic range compared to other photo styles.
Is this version of V-LogL more effective than other photo styles with a compressed gamma like CineLikeD? According to Panasonic's data CineLikeD has 450% headroom, so it is already capable of storing the whole dynamic range the GH5 can produce (450% corresponds to about 12.13 stops vs a theoretical maximum of 12.3).
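One plausible way to arrive at the 12.13 figure (my own back-calculation, an assumption rather than Panasonic's published math) is to add the headroom, expressed in stops, to the ~10 stops of the Rec709 container:

```python
import math

headroom = 4.5                   # CineLikeD 450% headroom per Panasonic
rec709_stops = math.log2(1000)   # ~9.97 stops for a 1000:1 contrast ratio
total = rec709_stops + math.log2(headroom)
print(round(total, 2))           # 12.14, close to the 12.13 quoted above
```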
In addition, the noise performance of V-Log is worse, because all it is doing is acting on shadows and highlights without any real log conversion. The business case for acquiring a V-Log key on these cameras is limited if the objective is to preserve dynamic range: the camera already has this ability with the included photo styles, and this V-LOG does not actually perform any LOG compression, otherwise the image would have needed less gain and would have come out underexposed. The fact that the camera shoots at a nominal ISO 400 most likely means that some form of noise reduction is active to counter the shadow noise that V-Log itself introduces. So on this type of camera V-LOG is only a look and does not accomplish any dynamic range compression.
Back Illuminated 12 bits readout sensors
The cameras with this technology are the GH5S and the BGH1; back illumination gives the sensor a better ability to convert light into signal when illumination levels are low. These cameras actually have a sensor with a 14-bit ADC, but this is not used for video.
In order to decompose the procedure I asked a friend to provide some RAW and JPEG images in V-Log and normal mode. You can see that in the GH5S there is 1 stop of underexposure and therefore a light form of log compression.
In the GH5S implementation the camera meters zero at the same aperture, shutter and ISO in LOG and other photo styles, and zero is 50% IRE, so it is actually 1 stop overexposed.
The procedure for V-Log on this camera is as follows:
Meter the scene on middle grey + 1 stop (50%)
Reduce gain of the image 1 stop behind the scenes (so your 800 is 400 and 5000 is 2500)
Digital log compression and manipulation
As the underexposure is mild, the log compression is also mild: it only has to recover 1 stop. Since the two effects cancel out, this is actually a balanced setting.
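The hidden gain reduction in the steps above can be sketched as follows (my reconstruction of the observed behaviour, not Panasonic's actual firmware logic):

```python
def gh5s_vlogl_gain(displayed_iso: float) -> float:
    """Displayed V-LogL ISO -> ISO-equivalent gain actually applied.

    Sketch of the 1-stop hidden gain reduction described above;
    a reconstruction, not Panasonic's firmware.
    """
    return displayed_iso / 2  # 1 stop = a factor of 2 in gain

print(gh5s_vlogl_gain(800))   # 400.0
print(gh5s_vlogl_gain(5000))  # 2500.0
```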
The IMX299 dual gain implementation was a bit messed up in the GH5S but has been corrected in the BGH1 with values of 160 and 800. It is unclear what is happening on the GH5S and why Panasonic declared 400 and 2500 as the dual gain values, as these do not correspond to the sensor's behaviour; perhaps additional on-sensor noise reduction only starts at those values, or it was simply a marketing statement.
Back Illuminated 14bits Sensors
Here we have the S1H and S5 that have identical sensors and dual gain structure.
The metering behaviour on the S series is the same as the GH5s so all photo styles result in identical metering. The examples were at the beginning of this post so I am not going to repeat them here.
Now the gain reduction is 2 2/3 stops, which is significant. After this is applied, a strong log compression is performed. This means that when you see ISO 640 on the screen the camera is actually at a gain equivalent to ISO 100, and when you see 5000 it is at 640, resulting in very dark linear images. In the case of the S5/S1H, V-Log does offer additional dynamic range not achievable with other photo styles.
Interestingly, V-Log on the S series does achieve decent low-light SNR despite the strong negative gain bias. Here we can see that the log implementation can be effective; however, other photo styles that do not reduce gain may be a better choice in low light, as gain lifts the signal and improves SNR. It is also important to note that the additional DR of V-Log compared to other photo styles is in the highlights, so it only shows on scenes with bright areas together with deep darks; this was noted on dpreview and other websites.
Should you use V-LOG?
It looks like Panasonic is tweaking the procedure for each sensor, or even each camera, as they go along. The behind-the-scenes gain reduction is really surprising, but it is logical considering the effect of a log compression.
Now we can also see why Panasonic calls the GH5S implementation V-LogL: the level of log compression is small, only 1 stop, as opposed to V-Log in the S series where the compression is 2 ⅔ stops. We have also seen that V-LOG, at least in a digital consumer camera with a sensor with integrated ADC, has potentially several drawbacks, and these are due to the way a camera functions.
Looking at benefits in terms of dynamic range preservation:
GH5/G9 and front illuminated sensor: None
GH5s/BGH1 back illuminated MFT: 1 stop
S5/S1H full frame: 2 ⅔ stops
What we need to consider is that changing the gamma curve can also store additional dynamic range in a standard video container. Dpreview is the only website that compared the various modes when they reviewed the Panasonic S1H.
A particularly interesting comparison is with the CineLikeD photo style, which according to Panasonic can store a higher dynamic range and is also not affected by V-LOG's issues in the shadows or by colour accuracy problems due to log compression. Dpreview's measurements show that:
On the GH5s V-LOG has 0.3 stops benefits over CineLikeD
On the S1H V-LOG has a benefit of 0.7 stops over CineLikeD2
Considering the potential issues of noise and colour bleeding in the shadows, together with hue accuracy errors due to the approximation of the V-LOG implementation, I have personally decided not to use V-LOG at all for standard dynamic range, but to use it for HDR footage only, as the decompression of V-LOG there seems to have limited to no side effects. In normal, non-HDR situations I have shot several clips in V-LOG, but I never felt I could not manage the scene with other photo styles, and the extra effort for a maximum benefit of 0.7 Ev is not worth my time nor the investment in noise reduction software or the extra grading effort required. As HDR is not very popular, I have recently stopped using V-LOG altogether due to the lack of HDR support in browsers for online viewing.
Obviously this is a personal consideration and not a recommendation; however, I hope this post helps you make the right choices depending on what you shoot.
This write-up is based on my analysis of Panasonic V-LOG and does not necessarily mean the implementations of other camera manufacturers are identical; however, the challenges in a digital camera are similar and I expect the solutions to be similar too.
In order to produce HDR clips you need HDR footage. This comes in two forms: HLG and PQ (HDR-10).
Cameras have been shooting HDR for years; the issue has been that no consumer operating system or display was capable of displaying it. The situation has changed, as Windows 10 and macOS now have HDR-10 support. This is limited: for example, on macOS there is no browser support but the TV app is supported, while on Windows you can watch HDR-10 videos on YouTube.
You need to keep in mind your target format, because Log and HLG are not actually interchangeable. HLG today is really only TV sets and some smartphones; HDR-10 instead is growing in computer support and is more widely supported. Both are royalty free. This post is not about which is the best standard; it is just about producing some HDR content.
The process is almost identical but there are some significant differences downstream.
Let me explain. This graph, produced using the outstanding online application LutCalc, shows the output/input relationship of V-LOG against a standard display gamma for Rec709.
V-LOG -> PQ
Looking at the stop diagram we can appreciate that the curves are not only different, but many values differ substantially; this is why we need to use a LUT.
Once we apply a LUT, the relationship between V-LOG and Rec709 is clearly not linear and only a small part of the bits fits into the target space.
We can see that V-Log fills Rec709 with just a bit more than 60% IRE, so a lot of squeezing is needed to fit it back in. This is why many people struggle with V-Log, and the reason why I do not use V-Log for SDR content.
However the situation changes if we use V-Log for HDR specifically PQ.
You can see that net of an offset the curves are almost identical in shape.
This is more apparent looking at the LUT in / out.
With the exception of the initial part, which for V-Log is linear while PQ is fully logarithmic, the curve is almost a straight line. As PQ is a larger space than what V-Log can produce on a consumer camera, we do not have issues squeezing bits in: PQ accommodates all the bits just fine.
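The two curves can also be compared numerically. The sketch below uses the published Panasonic V-Log encode and the SMPTE ST 2084 (PQ) inverse EOTF; note the two are not strictly comparable (V-Log is scene-referred, PQ display-referred), so this is only an illustration of their shapes:

```python
import math

def vlog(x):
    """Panasonic V-Log encode: scene reflectance -> code value (0..1)."""
    cut, b, c, d = 0.01, 0.00873, 0.241514, 0.598206
    return 5.6 * x + 0.125 if x < cut else c * math.log10(x + b) + d

def pq(y):
    """SMPTE ST 2084 inverse EOTF; y is luminance normalised to 10000 nits."""
    m1, m2 = 0.1593017578125, 78.84375
    c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
    yp = y ** m1
    return ((c1 + c2 * yp) / (1 + c3 * yp)) ** m2

print(round(vlog(0.18), 2))  # middle grey sits at ~0.42 in V-Log
print(round(pq(0.01), 2))    # 100 nits sits at ~0.51 in PQ
```

Evaluating both functions over a range of inputs reproduces the offset-but-similar shapes visible in the LutCalc plots.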
Similar to V-LOG, HLG does not fit well into an SDR space.
The situation becomes apparent looking at the In/Out Lutted values.
We can see that, as HLG is also a log gamma with a different ramp-up, 100% is reached with even fewer bits than V-Log.
So in pure mathematical terms the fit of log spaces into Rec709 is not a great idea and should be avoided. Note that even with the arrival of RAW video we still lack editors capable of working in a 16-bit depth space like photo editors do, and currently all processes go through LOG because they need to fit into a 10/12-bit working space.
It is also a bad idea to use V-Log for HLG due to the difference of the log curves.
And the graph demonstrates what I said at the beginning: you need to decide your output at the outset and stick to a compatible format.
Importing Footage in Final Cut Pro X 10.4.8
Once we have HLG or LOG footage we need to import it into a Wide Gamut library; make sure you check this, because SDR is the default in FCPX.
HLG footage will not require any processing, but LUTs have to be applied to V-LOG, as it differs from any Rec2100 target space.
The most convenient way is to go into the Organise workspace, select all clips, then press the i button and select General. Apply the Panasonic V-Log LUT to all clips.
Creating a Project
Once all files have been handled as required, we create our HDR-10 project, which in Final Cut means Rec2020 PQ.
The following screenshots demonstrate the effect of the LUT on footage on a PQ timeline.
With the LUT applied the V-LOG is expanded in the PQ space and the colours and tones come back.
We can see the brightness of the scene is approaching 1000 nits and looks exactly as we experienced it.
Once all edits are finished, as a last step we add the HDR Tools effect to limit peak brightness to 1000 nits, which is a requirement of YouTube and most consumer displays. The scopes flex slightly with an automatic highlight roll-off.
Exporting the Project
I have been using Panasonic AVCI 400 Mbps, so I will export a master file using ProRes 422 HQ; if you use a lower bitrate, ProRes 422 may be sufficient, but don't go lower or it won't be HDR anymore.
YouTube and other devices use default settings for HDR-10 metadata, so do not fill in the mastering display or content information; it is not required, and with the exception of peak brightness you would not know how to fill it in correctly anyway.
Converting for YouTube
I use the free program HandBrake and YouTube's upload guidelines to produce a compatible file. It is ESSENTIAL to produce an mp4 file, otherwise your TV and YouTube may not be able to display HDR correctly; avoid any other format at all costs.
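As an illustration only, a HandBrakeCLI invocation along these lines should produce a 10-bit HEVC mp4 with HDR-10 signalling; the exact option values are an assumption on my part and should be checked against your HandBrake and x265 versions:

```shell
# Hypothetical sketch - verify flags against your HandBrake/x265 release
HandBrakeCLI -i master_prores.mov -o upload.mp4 \
  --format av_mp4 \
  --encoder x265_10bit --quality 18 \
  --encopts "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:hdr-opt=1" \
  --aencoder av_aac
```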
The finished product can be seen here
SDR version from HDR master
There are residual issues with this process; one is the production of an SDR version. This currently works much better for HLG than for HDR-10, which is interesting because HLG is unsupported on computers, so if you produce HDR in HLG you are effectively giving something decent to both audiences.
For HDR-10, YouTube applies its own one-size-fits-all LUT and the results can be really bad. You may experience oversaturated colours in some cases, dark footage in others, and some clips may look totally fine.
At a professional level you would produce a separate SDR grade; however, it is possible to improve the quality of YouTube's conversion using specific techniques I will cover in a separate post.
Grading in HDR is not widely supported; the only tools available are scopes and the tone mapping of your display. There is no concept of correct exposure for skin tones: in one scene they have a certain brightness and in another this changes again, because this is not a 0-100% relative scale but one with absolute values.
If you have invested in a set of cinema LUTs you will find none of them work, as they compress the signal to under 100 nits. So there is less headroom for looks. There are other things you can do to give a vintage look, like adding grain, but you need to be careful: the incredible brightness of the footage and the detail of 10 bits mean that if you push it too far it looks a mess. Currently I avoid adding film grain, and if I do add it I blend it at 10%-20%.
One interesting thing is that log footage in PQ has a nice feel to it despite the incredible contrast. After all, log is a way to emulate film, specifically Cineon; this is true for almost all log formats. Then you would have the different characteristics of each film stock; this is now our camera sensor, and because most sensors are made by Sony or Canon, clips tend to look very similar to each other nowadays. So if you want something different you need to step into the world of RED or ARRI, but that is outside the scope of what I am writing here and what you, my readers, are interested in.
I am keeping a playlist with all my HDR experiments here and will keep adding to it.
If you find this useful please donate using the button on the side and I will have a drink on you…Cheers!
As you have read, I have been at the forefront of HDR use at home. I have a total of 5 devices with HDR certification, of which 2 support all standards all the way to Dolby Vision and 3 support at least HLG and HDR-10. The content consumed is mostly Netflix or Amazon originals and occasional BBC HLG broadcasts that are streamed concurrently with live programmes. So it is fair to say I have some practical experience on the subject, and two years ago I started writing about shooting HLG with the GH5. This was mostly limited by the lack of editing capabilities on the display side, but recently macOS 10.15.4 has brought HDR-10 support, which means you can see an HDR signal on a compatible HDMI or DisplayPort device. This is not HLG, but there are ways around it, as I wrote in a recent post. This post makes some considerations on the issues of shooting HDR and why, as of 2020, shooting SDR Rec709 with your Panasonic GH5 is still my preferred option for underwater video and beyond.
Real vs Theoretical Dynamic Range
You will recall the schematic of a digital camera from a previous post.
This was presented to discuss dual gain circuits, but if you ignore the two gain circuits it remains valid. In this post we will focus on the ADC, which stands for Analog to Digital Converter. Contemporary cameras have 12- and 14-bit ADCs; typically 14-bit ADCs are a prerogative of DSLR or high-end cameras. Simplifying to the extreme, the signal arriving at the ADC is digitised on a 12- or 14-bit scale. In the case of the GH5 we have a 12-bit ADC; it is unclear if the GH5S has a 14-bit ADC despite producing 14-bit RAW, and for the purpose of this post I will ignore this possibility and focus on 12-bit ADCs.
12 bits means you have 4096 levels of signal for each RGB channel; this effectively means the dynamic range limit of the camera is 12 Ev, as this is defined as log2(4096) = 12. Stop, wait a minute: how is that possible? I have references that the Panasonic GH5 dynamic range is 13 Ev; how did this become 12?
Firstly, we need to ignore the effect of oversampling and focus on a 1:1 pixel ratio, and therefore look at the Screen diagram, which shows just a bit more than 12 Ev. We then have to look at how DxOMark measures dynamic range, which is explained here. In real life we will not be shooting a grey scale but a coloured scene, so unless you are taking pictures of the moon you will not get much more than 12 stops in any scenario, as the colours will eat the data.
This is for what concerns RAW sensor data before de-mosaicing and digital signal processing, which will further deteriorate DR when the signal is converted down to 10 bits, even if a nonlinear gamma curve is put in place. We do not know the real usable DR of the GH5, but Panasonic's statement when V-LOG was announced referenced 12 stops of dynamic range using a logarithmic curve, so we can safely conclude that the best case is 12 stops when a log curve is used and 10 for a gamma curve with a constant correction factor. Again, it is worth stressing that 12 stops of DR is the absolute maximum at the camera setting with 0 gain applied, aka base or native ISO, which for the GH5 is 200, corresponding to 400 in log modes.
Shooting HLG vs SDR
Shooting HLG with the GH5 or any other prosumer device is not easy.
The first key issue in shooting HLG is the lack of monitoring capabilities on the internal LCD and on external monitors. Let's start with the internal monitor, which is not capable of displaying HLG signals and relies on two modes:
Mode 1: prioritises the highlights wherever they are
Mode 2: prioritises the subject, i.e. the centre of the frame
In essence you are not able to see what you get during the shot. Furthermore, when you set zebras to 90% the camera will rarely reach this value. You need to rely on the waveform, which is not user friendly in an underwater scene, or on the exposure meter. If you have an external monitor, and you look carefully at the spec, you will find that the screens are Rec709, so they will not display the HLG gamma while they will correctly record the colour gamut. https://www.atomos.com/ninjav : if you read under HDR monitoring gamma you see BT.2020, which is not HDR, it is SDR. So you encounter the same issues you have on the LCD, albeit on a much brighter 1000-nit display, and you need to either adapt to the different values of the waveform or trust the exposure meter and zebras, which as we have said are not very useful, as it takes a lot to clip. On the other hand, if you shoot an SDR format, the LCD and external monitor will show exactly what you are going to get, except when you shoot in V-LOG; in that case the waveform and the zebras need to be adjusted to consider that V-LOG's absolute maximum is 80% and 90% white sits at 60%. Once you apply a monitor LUT, however, you will see exactly what you are going to get on the internal or external display.
Editing HLG vs SDR
In the editing phase you will face similar challenges, although as we have seen there are workarounds to edit HLG if you wish. A practical consideration is around contrast ratio. Despite all the claims that SDR is just 6 stops, I have actually dug out the BT.709, BT.1886 and BT.2100 recommendations, and this is what I have found.
Specifications of ITU display standards
In essence Rec709 has a contrast ratio of 1000:1, which means 9.97 stops of DR, and already allows for 8- and 10-bit colour. BT.1886 was issued to reflect that CRT screens no longer exist, and with it the DR goes to 10.97 stops. BT.2100 has a contrast ratio of 200000:1, or 17.61 stops of DR.
DisplayHDR Performance Standards
Looking at HDR monitors you see that, with the exception of OLED screens, no consumer device can meet the BT.2100 standard; so even if you have an HDR monitor, in most cases it falls short of the BT.2100 recommendation.
Our GH5 is capable of a maximum of 12 stops of DR in V-Log, and maybe a bit more in HLG; however, those values are far below the BT.2100 recommendation and more in line with BT.1886. If we look at DxOMark's DR charts we see that at a nominal ISO 1600, which is in effect just above 800, the DR has fallen below 10 Ev. Consider that this is engineering DR: practically speaking you get your 12 stops only at ISO 200, and your real HDR range is limited to the 200-400 ISO range. This makes sense, as those are the bright scenes. Consider also that log photo styles start at ISO 400, but this really translates to ISO 200 on this chart, as well as in exposure values. Unless you are shooting at low ISO you will get limited DR improvement. Underwater it is quite easy to be at a higher ISO than 200, and even when you are at 200, unless you are shooting the surface, the scene has limited DR anyway. Generally, 10 stops are more than adequate, as this is what we get when we produce a JPEG from a RAW file.
I think the final nail in the coffin arrives when we look at where the content will be consumed.
Typical Devices Performance
Phones have IPS screens, with some exceptions, and contrast ratios below 1000:1, and so do computer screens. If you share on YouTube you will know that phones and computers constitute around 85% of playback devices. TVs are around 10%, and only a small part of those will be HDR. So other than in your own home you will not find many HDR devices out there to do justice to your content.
10-bits vs 8 bits
It is best practice to shoot 10 bits, and both SDR and HDR support 10-bit colour depth. For compatibility purposes SDR is delivered with 8-bit colour and HDR with 10-bit colour.
Looking at the tonal range for RAW files at 8 megapixels, we see that the camera has 24 bits of depth over RGB; this means 8 bits per channel and 9 bits of tonal range. Tonal range is grey levels, so in short the camera will not produce 10-bit colour but will have more than 8 bits of grey tones, which helps counter banding but only at low ISO, so it is more useful for blue skies than for blue water. Considering that images for photo competitions are JPEGs and that nobody has felt the need for something more, we can conclude that, as long as we shoot at a high bitrate in something as close as possible to a raw format, 8 bits for delivery are adequate.
Cases for HDR and Decision Tree
There are cases where shooting HLG can be meaningful; those include snorkelling at the surface on bright days. You will not be going to depth, so the footage will look good straight off the camera; likewise for bright shots in the sun at the surface. But generally the benefit will drop when the scene has limited DR, or at higher ISO values where DR drops anyway.
What follows is my decision tree for choosing between SDR and HDR and between 10-bit and 8-bit formats. I like my pictures and my videos to look better than life, and I think editing adds value to the image, although this is not an excuse for poor capture. There are circumstances where editing is less important, namely when the scene is amazing by itself and requires no extra help, or when I am looking at fast-paced, documentary-style scenes that do not benefit from editing. For the rest, my preference remains for editing-friendly formats and high-bitrate 10-bit all-intra codecs. Recently I purchased the V-Log upgrade and have not found it difficult to use or expose, so I have included it here as a possible option.
The future of HDR
Except in a cinema-like setting with dark surroundings and low ambient light, HDR mass consumption remains challenging. Yes, you can have high peak brightness, but not a high contrast ratio, and the former can already be obtained with SDR for most viewers. There is a lot of noise in the cinema community at present because the PQ curve is hard to manage and the work in post-processing is multiplied; clearly PQ is not a way forward for broadcasting, and HLG will prevail thanks to the pioneering efforts of the BBC. But the lack of monitoring and editing devices means HLG is not going to fit cine-like scenarios and small productions. It could be a good fit for a zero-edit shooter, someone who likes to see the scene as it was.
When marketing myths and incorrect information are netted out, we realise that our prosumer devices are very far away from what would be required to shoot, edit and consume HDR. As with many other things in digital imaging, it is much more important to focus on shooting techniques and how to make the most of what we have, instead of engaging in a quest for theoretical benefits that may not exist.
It has been almost two years since my first post on HLG capture with the GH5 https://interceptor121.com/2018/06/15/setting-up-your-gh5-for-hlg-hdr-capture/ and last week Apple released Catalina 10.15.4, which now supports HDR-10 with compatible devices. Apple, and computers in general, still do not support HLG, and it is unlikely this is ever going to happen, as the gaming industry is following the VESA DisplayHDR standard, which is aligned to HDR-10.
After some initial experiments with GH5 and HLG HDR things have gone quiet and this is for two reasons:
There are no affordable monitors that support HLG
There has been lack of software support
While on the surface it looks like there is still no solution to those issues, in this post I will explain how to grade HLG footage in Final Cut Pro, should you wish to do so. The situation is not that different on Windows with DaVinci Resolve, which also only supports HDR-10 monitors, but I will leave it to Resolve users to figure out; this tutorial is about Final Cut Pro.
A word about Vlog
It is possible to use V-Log to create HDR content; however, V-Log is recorded as Rec709 10 bits. The Panasonic LUT, and any other LUT, only maps the V-Log gamma curve to Rec709, so your luminance and colours will be off. What would be needed is a V-Log to PQ LUT, but I am not aware that one exists. Surely Panasonic could create it, but the V-Log LUT that comes with the camera is only for processing in Rec709. So, for our purposes, we will ignore V-Log for HDR until such time as we have a fully working LUT and clarity about the process.
Why it is a bad idea to grade directly in HLG
There is a belief that HLG is a delivery format and is not edit ready. While that may be true, the primary issue with HLG is that no consumer screen supports the BT.2020 colour space and the HLG gamma curve. Most displays are plain sRGB, and others support DCI-P3, or the computer version Display P3, partially or fully. Although the white point is the same across all those colour spaces, each has a different definition of red, green and blue, so without taking this into account, if you change a hue the results will not be as expected. You may still white balance or match colours in HLG, but you should not attempt anything more.
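To see why the same RGB triplet means different colours in different spaces, here is a minimal sketch converting linear BT.2020 RGB to Display P3 through the common XYZ connection space. The matrices are the commonly published 4-decimal D65 values, assumed here for illustration rather than taken from a calibrated pipeline:

```python
import numpy as np

# RGB -> XYZ matrices (D65 white point) for BT.2020 and Display P3 primaries.
BT2020_TO_XYZ = np.array([
    [0.6370, 0.1446, 0.1689],
    [0.2627, 0.6780, 0.0593],
    [0.0000, 0.0281, 1.0610],
])
P3_TO_XYZ = np.array([
    [0.4866, 0.2657, 0.1982],
    [0.2290, 0.6917, 0.0793],
    [0.0000, 0.0451, 1.0439],
])

def bt2020_to_p3(rgb):
    """Convert linear BT.2020 RGB to linear Display P3 RGB via XYZ."""
    return np.linalg.inv(P3_TO_XYZ) @ (BT2020_TO_XYZ @ np.asarray(rgb))

# Pure BT.2020 red falls outside the P3 gamut (a negative green component),
# which is why editing hues without a gamut-aware conversion misbehaves;
# white, sharing the D65 white point, maps to white in both spaces.
print(bt2020_to_p3([1.0, 0.0, 0.0]))
print(bt2020_to_p3([1.0, 1.0, 1.0]))
```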
What do you need for grading HDR?
In order to successfully and correctly grade HDR footage on your computer you need the following:
HDR HLG footage
Editing software compatible with HDR-10 (Final Cut or DaVinci)
An HDR-10 10 bits monitor
If you want to produce and edit HDR content you must have a compatible monitor; let's see how to identify one.
Finding an HDR-10 Monitor
HDR is highly unregulated when it comes to monitors. TVs have the Ultra HD Premium Alliance, and recently VESA has introduced the DisplayHDR standards https://displayhdr.org/ that are dedicated to display devices. So far, DisplayHDR certification has been the prerogative of gaming monitors, which have quick response times and high contrast but not necessarily high colour accuracy. We can use the certified list of monitors to find a consumer-grade device that may be fit for our purpose: https://displayhdr.org/certified-products/
A DisplayHDR 1000 certified monitor is equivalent to a PQ grading device, as it has a peak brightness of 1000 nits and a minimum of 0.005; this is ideally what you want, but you can get by with an HDR-400 certified display as long as it supports a wide colour gamut. In HDR terms, wide gamut means covering at least 90% of the DCI-P3 colour space, so we can use the VESA list to find a monitor that is HDR-10 compatible and has decent colour accuracy. Even inside the HDR-400 category there are displays that are fit for purpose and reasonably priced. If you prefer a brand more orientated to professional design or imaging, look for the usual suspects: Eizo, BenQ and others. Here, though, it will be harder to find HDR support, as those manufacturers usually focus on colour accuracy, so you may find a display covering 95% DCI-P3 but not necessarily producing high brightness. As long as the device supports HDR-10 you are good to go.
I have a BenQ PD2720U that is HDR-10 certified, has a maximum brightness of 350 nits and a minimum of 0.35, and covers 100% sRGB and Rec709 and 95% DCI-P3, so it is adequate for the task. It is worth noting that a typical monitor with 350-400 nits brightness offers around 10 stops of dynamic range.
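The 10-stop figure follows directly from the contrast ratio; a quick sketch:

```python
import math

def stops_of_dynamic_range(peak_nits, black_nits):
    """Dynamic range in stops is log2 of the display contrast ratio."""
    return math.log2(peak_nits / black_nits)

# The BenQ PD2720U figures quoted above: 350 nits peak, 0.35 nits black,
# a 1000:1 contrast ratio, i.e. just under 10 stops.
print(round(stops_of_dynamic_range(350, 0.35), 1))
```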
In summary, if you do not have a professional-grade monitor, search display specifications for HDR-10 compatibility and a 10-bit wide gamut covering more than 90% of DCI-P3.
Final Cut Pro Steps
The easy way to have HDR-ready content with the GH5 is to shoot with the HLG Photo Style. This produces clips that, when analysed, have the following characteristics with the AVCI codec.
'Limited' means the clip is not using the full 10-bit range for brightness; you do not need to worry about that.
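For reference, limited (video legal) range in 10 bits places black at code 64 and white at code 940 rather than using the full 0-1023 span; a small sketch of the mapping, assuming the standard offsets:

```python
def legal_range_code(value, bit_depth=10):
    """Map a normalised 0-1 brightness value to 'limited' (video legal)
    range code values: in 10 bits, luma occupies codes 64-940."""
    black = 16 << (bit_depth - 8)   # 64 in 10-bit
    white = 235 << (bit_depth - 8)  # 940 in 10-bit
    return round(black + value * (white - black))

print(legal_range_code(0.0))  # 64  -> black
print(legal_range_code(1.0))  # 940 -> white
```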
With your material ready create a new library in Final Cut Pro that has a Wide Gamut and import your footage.
As we know, Apple does not support HLG, so when you look at the Luma scope you will see a traditional Rec709 IRE diagram. In addition, the tone mapping functionality will not work, so you do not have a real idea of colour and brightness accuracy.
At this stage you have two options:
Proceed in HLG and avoid grading
Convert your material to PQ so that you can grade it
We will go with option 2, as we want to grade our footage.
Create a project with the PQ gamut and enter your display information in the project properties. In my case the display has a minimum brightness of 0.35 nits and a maximum of 350, and it has P3 primaries with a standard D65 white point. It is important to know those parameters to have a good editing experience, otherwise the colours will be off. If you do not know your display parameters, do some research. My BenQ monitor comes with a calibration certificate; the information is right there. Apple screens are typically also P3 with a D65 white point, and you can find the maximum brightness in the specs: usually around 500 nits for Apple, with a minimum of 0.5 nits. Do not enter Rec2020 in the monitor information unless your monitor has native primaries in that space (almost none do). Apple's documentation says that if you do not know those values you can leave them blank, and Final Cut Pro will use the display information from ColorSync and try a best match, but this is far from ideal.
For the purpose of grading we will convert HLG to PQ using the HDR tools. The two variants of HDR have a different way of managing brightness, so a conversion is required; however, the colour information is consistent between the two.
Please note that the maximum brightness value is typically 1000 nits; however, not many displays out there support that level of brightness. For the purpose of what we are going to do this is irrelevant, so DO NOT change this value. Activate tone mapping, accessible under the View pull-down in the playback window; this will adapt the footage to your display according to the parameters of the project, without capping the scopes in the project.
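For the curious, the HLG-to-PQ conversion performed by the HDR tools can be approximated with the BT.2100 transfer functions: decode the HLG signal to scene light, apply the 1000-nit HLG OOTF, then encode with the PQ curve. The sketch below handles only achromatic (grey) samples; a real converter applies the OOTF through the luminance of colour pixels, so treat this as an illustration of the maths, not Final Cut Pro's actual implementation:

```python
import math

# BT.2100 constants (textbook values).
A, B, C = 0.17883277, 0.28466892, 0.55991073          # HLG curve
M1, M2 = 2610 / 16384, 2523 / 4096 * 128              # PQ exponents
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def hlg_to_scene(e):
    """Inverse HLG OETF: signal 0-1 -> relative scene light 0-1."""
    return e * e / 3.0 if e <= 0.5 else (math.exp((e - C) / A) + B) / 12.0

def pq_encode(nits):
    """PQ inverse EOTF: display light in nits -> signal 0-1."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

def hlg_signal_to_pq(e, peak_nits=1000.0, gamma=1.2):
    """Greyscale HLG signal -> PQ signal for a 1000-nit display."""
    scene = hlg_to_scene(e)
    display = peak_nits * scene ** gamma  # HLG OOTF for achromatic samples
    return pq_encode(display)
```

As a sanity check, HLG signal 0.75 (HLG reference white, about 203 nits on a 1000-nit display) comes out near PQ code 0.58, which matches the published figures.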
Finalising your project
When you have finished with your editing you have two options:
Stay in PQ and produce an HDR-10 master
Delete all HDR tools HLG to PQ conversions and change back the project to HLG
If you produce an HDR-10 master you will need to edit twice to get an SDR version: duplicate the project and apply the HDR tool from HLG to SDR, or another LUT of your choice.
If you stay in HLG you will produce a single file, but it is likely that HDR will only be displayed on a narrower range of devices due to the lack of HLG support on computers. The HLG clip will have correct grading, as the corrections performed while the project was in PQ with tone mapping will survive the switch back to HLG, since HLG and PQ share the same colour mapping. The important thing is that you were able to see the effects of your grade.
In my case I have an HLG TV, so I produce only one file, as I can't be bothered doing the exercise twice.
The steps to produce your master file are identical to any other project; I recommend creating a ProRes 422 HQ master and deriving other formats from it using Handbrake. If you change your project back to HLG you will get a warning about the master display; you can ignore it.
I am not getting into ambient light filters, but there are articles on that too.
Now I want to discuss editing, as I see many posts online that are plain incorrect. As is true for photos, you don't edit just by looking at a histogram. The histogram is a representation of the average of the image, and that is not the right approach to create strong images or videos.
You need to know how the tools work in order to do the appropriate exposure corrections and colour corrections but it is down to you to decide the look you want to achieve.
I like my imaging, video or still, to be strong, with deep blues and generally dark; that is the way I go about it and it is my look. However, the tools can be used to achieve whatever look you prefer for your material.
In this YouTube tutorial I explain how to edit and grade footage produced by the camera and turn it into something I enjoy watching time and time again.
I called this clip Underwater Video Colour Correction Made Easy, as it is not difficult to obtain pleasing colours if you follow all the steps.
A few notes just to anticipate possible questions
1. Why are you not looking to have the Luma or the RGB parades at 50% of the scale?
50% of the IRE scale is where 18% neutral grey is often placed; I do not want my footage to look washed out, which is what happens if you aim at 50%.
2. Is it important to execute the steps in sequence?
Yes. Camera LUTs should be applied before grading, as they normalise the gamma curve. In terms of correction steps, setting the correct white balance influences the RGB curves and therefore needs to be done before further grading is carried out.
3. Why don’t you correct the overall saturation?
Most of the highlights and shadows are in the light grey or dark grey areas. Saturating those can lead to clipping or noise.
4. Is there a difference between using corrections like Vibrancy instead of just saturation?
Yes. Saturation shifts all colours equally towards higher intensity; vibrancy tends to stretch the colours in both directions.
5. Can you avoid an effect LUT and just get the look you want with other tools?
Yes this is entirely down to personal preference.
6. My footage straight from camera does not look like yours and I want it to look good straight away.
That is again down to personal preference; however, if you crush the blacks, clip the highlights, or introduce a hue shift by clipping one of the RGB channels, this can no longer be remedied.
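On question 1 above, it is worth working out where middle grey actually lands. Using the textbook Rec709 OETF (a sketch of the standard curve, which a given camera or scope may not follow exactly), 18% grey encodes well below 50% of the signal range:

```python
def rec709_oetf(light):
    """Rec. 709 OETF: linear scene light (0-1) -> encoded signal (0-1)."""
    return 4.5 * light if light < 0.018 else 1.099 * light ** 0.45 - 0.099

# 18% neutral grey encodes to roughly 41% of the signal range, not 50%,
# so forcing the mid-tones up to 50 IRE is what washes the image out.
print(round(rec709_oetf(0.18) * 100))  # 41
```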
I hope you find this useful wishing all my followers a Merry Xmas and Happy 2020.