
Panasonic GH5 Cage Rigs

If you have a GH5 (Mark I, II or S) you are probably not just shooting underwater video but also scenes on land. In fact, perhaps you are reading this blog and do not even use your camera underwater.

Either way, you know by now that shooting video with just a handheld camera is not a terribly good idea, and the tools required are different from photography, where once you have a tripod, a remote shutter and a bunch of filters you are practically set.

Video requires quite a bit of hardware. I have gone through this process and even tried to adapt some of my underwater hardware, but it was really bulky, and in the end you are better off with proper gear, so I use Smallrig.

Smallrig is a Chinese company that makes tons of various bits for your camera for a variety of situations. The first item you need is a cage and they have just released a new updated version for the GH5 cameras.

Clicking the image above will take you to Amazon UK where you can check details of the item.

The cage is the starting point and you can still attach your camera strap. Its weight of 190 grams does not add terribly to the camera, and in any case if you have a GH5 you are used to carrying some extra weight. The cage lives permanently on my GH5 unless I put the camera in the underwater housing.

I am now going to go through a series of set-ups that I use with some other components as well, but first some words of warning.

Smallrig suggests using a NATO handle on the left-hand side; however, this blocks the HDMI side door from opening. In addition, the ARRI locating pins are too high, so if you use a side handle in ARRI format it will hang and not sit level with the cage.

I wrote to Smallrig to tell them and they accepted the feedback. The reality is that they do not test designs with a camera: they run 3D simulations and do not go and open all the ports, plugs and doors. So the only handles that actually work with this cage without impeding any functionality are the ones in this article.

Monopod/Tripod Rig for Wildlife shooting

The basic rig for monopod and handheld work (meaning you hold the camera steady and do not move) has a top handle and a side handle, plus a small microphone and, if required, an LCD shade.

Cage plus top and side handle microphone and LCD shader
Front view of the Handheld set up

I have gone through a series of handles for this cage and I have settled on the following items:

We are looking at £180 for all this hardware. This stuff is not cheap; however, I cannot emphasise enough how much more ergonomic and effective this is compared to holding the camera directly.

I have used this rig for my deer film project.

The Deer Watcher was shot with this rig on a monopod

Handheld Rig

There are situations where setting up a gimbal is too much and you want to take simple footage handheld just using the camera IBIS. This works very well with a standard zoom lens like the Panasonic DG Leica 12-60mm or the Lumix 12-60mm.

In those situations it is better to have two handles, and the set-up needs to be as light as possible.

Handheld Rig for walking shots

You will notice that these handles are smaller and lighter. If you walk with your camera you are typically using the LCD, and with two handles you are much steadier than if you were just gripping the camera.

Handheld Rig rear view

You can also use the EVF instead of the LCD if you prefer to have three points of contact with the camera.

You will need two Mini Side Handles 2916.

Follow Focus Video Rig

In those situations where you are in a studio or controlled environment you can use manual focus for pulling, and usually you also have a monitor for precise exposure. I tend to use this on a fluid video head and sturdy tripod for indoor shots.

Focus Pulling Rig

Here I am shooting the mighty PanaLeica 10-25mm and I have my trusted Swit CM-55c field monitor.

Focus pulling rear view

I have an Atomos Ninja; however, since the GH5M2 records 60 fps at 10 bits in camera, I only use it in bright outdoor scenes, and the Swit is just better in terms of tools and lighter.

Side view

With the new linear focus option for manual focus you can have a complete 360-degree turn of the wheel if you so wish, or shorter throws if preferred. I use 300 degrees.

You will need the Mini Follow Focus 3010, and as monitor support you can go for a Universal Magic Arm with Small Ballhead – 2157 if your monitor is light. If you have a recorder you will need a Swivel and Tilt Monitor Adjustable Mount with Cold Shoe Mount – 2905, or a similar item with a NATO or screw mount if you prefer. I find the cold shoe more versatile.

Wrap Up

It can be a bit daunting to search for the right items that do the job when you want to use your cage. I hope this article helps you make the right choices and saves you the trial, error and return process I have gone through in the last year!

The truth about v-log

There is no doubt that LOG formats in digital cameras have a halo of mystery around them, mostly due to the lack of technical documentation on how they really work. In this short article I will explain how Panasonic V-Log actually works on different cameras. Some of what you will read may be a surprise to you, so I have provided the testing methods and the evidence so you can decide whether LOG is worth considering for you or not. I will aim to make this write-up self-contained, so you have all the information you need here without having to search elsewhere, although it is not entirely possible to create a layman version of what is, after all, a technical subject.

Panasonic V-LOG/V-Gamut

A logarithmic operator is a non-linear function that processes the input signal and maps it to a different output value according to a formula. This is well documented in the Panasonic V-Log/V-Gamut technical specifications. If you consider the input reflectance (in), you can see how the output is related to the input using two formulas:

  1. IRE = 5.6 * in + 0.125          (in < cut1)
  2. IRE = c * log10(in + b) + d     (in >= cut1)

Where cut1 = 0.01, b = 0.00873, c = 0.241514, d = 0.598206

There are a few implications of this formula that are important (verified numerically in the sketch after this list):

  • 0 input reflectance is mapped to 7.3% IRE
  • Dark values are not compressed until 18% IRE
  • Middle grey (18% reflectance) is still 42% IRE, as in standard Rec709
  • White (90% reflectance) is 61% IRE, much lower than in Rec709
  • 100% IRE needs an input reflectance of 4609%, which is about 5.5 stops of headroom for overexposure
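
For the technically minded, here is a minimal Python sketch (my own, not Panasonic code) that implements the two formulas and reproduces the numbers above; the conversion from code values to IRE on the 10-bit legal range (black at code 64, white at 940) is my assumption.

```python
import math

# Panasonic V-Log OETF using the published constants quoted above
CUT1, B, C, D = 0.01, 0.00873, 0.241514, 0.598206

def vlog_oetf(reflectance):
    """Map linear scene reflectance (0.18 = middle grey) to a code value in 0..1."""
    if reflectance < CUT1:
        return 5.6 * reflectance + 0.125
    return C * math.log10(reflectance + B) + D

def code_to_ire(code):
    """Assumed 10-bit legal-range mapping: code 64 -> 0 IRE, code 940 -> 100 IRE."""
    return (code * 1023 - 64) / (940 - 64) * 100

for label, refl in [("black", 0.0), ("middle grey", 0.18), ("90% white", 0.90)]:
    print(f"{label}: {code_to_ire(vlog_oetf(refl)):.1f} IRE")
# black: 7.3 IRE, middle grey: 42.1 IRE, 90% white: 61.4 IRE

# Code value 1.0 is only reached at ~4609% reflectance, about 5.5 stops over 100%
clip = 10 ** ((1.0 - D) / C) - B
print(f"clip at ~{clip * 100:.0f}% reflectance, {math.log2(clip):.1f} stops over 100%")
```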

So what we have here is a shift of the black level from 0% to 7.3% and a compression of all tones over 18%. This gives V-LOG its washed-out look, which is mistakenly interpreted as flat, but it is not flat at all; in fact the master pedestal, as the black level is known in video, is shifted. Another consequence of this formula is that V-LOG under 18% IRE works exactly like standard gamma-corrected Rec709, so it should have exactly the same performance in the darks, with a range between 7.3% and 18% instead of 0-18%.

In terms of ISO measured at 18% reflectance, V-LOG should have an identical ISO value to any other photo style in your camera; this means that at a given aperture and exposure time the ISO in a standard mode must match V-LOG.

When we look at the reality of V-LOG we can see that Panasonic sets its meter zero at a value of 50% IRE, so generally ⅔ to 1 full stop overexposed; this becomes obvious when you look at the waveform. As a result blacks are actually at 10% IRE and whites at 80% once a conversion LUT is applied.

Challenges of Log implementation

LOG conversion is an excellent method to compress a high dynamic range into a smaller bit-depth format. The claim is that you can pack the full sensor dynamic range into 10-bit video. Panasonic made this claim for the GH5s and for the S1H and S5.

There is however a fundamental issue. In a consumer digital camera the sensor is already equipped with an analog to digital converter on board, and this operates in a linear, non-log mode. This means the sensor dynamic range is limited to the bit depth of the analog to digital converter, and in most cases sensors do not even saturate the on-board ADC. It is true that an ADC can also resolve fractions of a bit; however, this does not largely change the picture.

If we look at the sensor used in the S1H and S5, this is based on a Sony IMX410 that has a saturation value of 15105 (on a 14-bit scale), or 13.88 stops of dynamic range. The sensor of the GH5s, which is a variant of the Sony IMX299, has a saturation of 3895 (at 12 bits), or 11.93 stops.

None of the S1H, S5 or GH5s actually reaches the nominal dynamic range that the ADC can provide at sensor level. The sensor used by the GH5 has more than 12 stops of dynamic range and achieves 12.3 EV of engineering DR, but as the camera has a 12-bit ADC it will resolve a smaller number of tones.

So the starting point is 12 or 14 stops of data to be compressed digitally, not analogically, into 10-bit coding. Rec709 has a contrast ratio requirement of 1000:1, which is less than 10 stops of dynamic range. This is not to be confused with bit depth: with 8-bit depth you can manage 10 stops using gamma compression. If you finish your work in Rec709 the dynamic range will never exceed log2(1000) = 9.97 stops. So when you read that Rec709 only has 6.5 stops of DR or similar, that is flawed, as gamma compression squeezes the dynamic range into a smaller bit depth.
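
The arithmetic in this paragraph is easy to check. A minimal sketch, assuming the saturation values quoted above:

```python
import math

# Engineering dynamic range in stops is log2 of the ratio between the largest
# and smallest distinguishable signal, so log2 of the saturation count.
print(math.log2(15105))  # Sony IMX410 (S1H/S5): ~13.88 stops
print(math.log2(3895))   # GH5s sensor at 12 bits: ~11.93 stops

# A contrast-ratio requirement converts to stops the same way,
# independently of the bit depth used to code the signal.
print(math.log2(1000))   # Rec709 1000:1 contrast -> ~9.97 stops
```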

When we look at a sensor with almost 14 stops of dynamic range, the standard Rec709 gamma compression is insufficient to preserve the full dynamic range, as it is by default limited to 10 stops. It follows that LOG is logically better suited to larger sensors, and this is where it is widely used by all cinema camera manufacturers.

In practical terms the actual photographic dynamic range (defined as the dynamic range you would see on a print of 10″ on the long side at arm's length), the one you can see with your eyes in an image, is less than the engineering value. The Panasonic S5 in recent tests showed around 11.5 stops, while the GH5S is around 10 and the GH5 around 9.5 stops of dynamic range. Clearly when you look at a step chart the tool will show more than this value, but in practice you will not see more DR in real terms.

This means that a standard gamma-encoded video in 10 bits can be adequate in most situations and nothing more is required. There is also a further issue with the noise that the log compression and decompression produce. As with any conversion that is not lossless, the amount of noise increases: this is especially apparent in the shadows. In a recent test performed with an S5 in low light and measured using neat-video assessment, V-Log was one of the worst performers in terms of SNR. The test involved shooting a colour checker at 67 lux of ambient illumination and reading the noise level on the four shadow and dark chips. Though this test was carried out at default settings, it has to be noted that even increasing the noise reduction in V-LOG does not eliminate the noise in the shadows, as this depends on how V-LOG is implemented.

V-LOG Noisy Shadows

The actual V-Log implementation

How does V-LOG really work? From my analysis I have found that V-Log is not implemented equally across cameras; this surely depends on the sensor performance and construction. I do not know how a Varicam camera is built, but in order to perform the V-Log conversion as described in the document you need a log converter before the signal is converted to digital. In a consumer digital camera the sensor already has an on-board ADC (analog to digital converter) and therefore the output is always linear, on a bit scale of 12 or 14 bits. This is a fundamental difference and means that the math as illustrated by Panasonic in the V-LOG/V-Gamut documentation cannot actually be implemented in a consumer digital camera that does not have a separate analog log compressor.

I have taken a test shot in V-LOG as well as in other standard Photo Styles with my Lumix S5; these are the RAW previews. V-LOG is exactly 2⅔ stops underexposed on a linear scale; all other parameters are identical.

Image on a standard photo mode looks correctly exposed
RAW image shot in V-LOG shows 2 2/3 underexposure

What is happening here? As we have seen, ISO values have to be the same between photo styles and refer to 18% middle grey; however, if you apply a log conversion to a digital signal, the result is a very bright image. I do some wide-field astrophotography and use a tool called Siril to extract information from very dark images; this helps visualise the effect of a log compression.

The first screenshot is the RAW file as recorded: a very dark black-and-white image, as those tools process the RGB channels separately.

Original image in linear representation

The second image shows the same RAW image with a logarithmic operator applied; this gives a very bright image.

Same image in logarithmic scale

Now, if you have to keep the same middle grey value, exposure has to match that linear image, so what Panasonic does is change the mapping of ISO to gain. Gain is the amplification on the sensor chip and typically has values up to 24-30 dB, or 8 to 10 stops. While in a linear image the ISO would be defined as 100 at zero gain (I am simplifying here, as actually even at 100 there will be some gain), in a log image zero gain corresponds to a different ISO value. So the mapping of ISO to gain is changed. When you read that the native ISO is 100 in normal mode and 640 in V-LOG, this means that for the same gain of 0 dB a standard image looks like ISO 100 and a V-LOG image looks like ISO 640; this is because V-LOG needs less gain to achieve the same exposure, as the log operator brightens the image. In practical terms the raw linear data of V-LOG at 640 is identical to an image taken at 100.
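
To make the remapping concrete, here is a toy sketch (my own illustration, with hypothetical helper names; only the 100/640 pairing comes from the text above):

```python
import math

def gain_stops(iso_label, native_iso_label):
    """Gain in stops above the photo style's native ISO (0 stops at native)."""
    return math.log2(iso_label / native_iso_label)

print(gain_stops(100, 100))   # standard photo style at ISO 100 -> 0 stops of gain
print(gain_stops(640, 640))   # V-Log at ISO 640 -> the very same 0 stops of gain
print(math.log2(640 / 100))   # label offset between the two: ~2.68, i.e. 2 2/3 stops
```

The same analogue gain is simply labelled with a different ISO number in V-Log, which is why the raw data matches.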

This is the reason why when a videographer takes occasional raw photos and leaves the camera in V-LOG the images are underexposed.

The benefit of the LOG implementation is that, thanks to log data compression, you can store the complete sensor information in a lower bit depth; in our case this means going from 14 to 10 bits.

There are however some drawbacks due to the fact that at the linear level the image was 'underexposed'. I put the term in quotes as exposure only depends on time and the aperture of the lens, so in effect this is a lack of gain, for which there is no established term.

The first issue is noise in the shadows: on a linear scale those are compacted and, as the image is underexposed, a higher amount of noise is present, which is then amplified by the LOG conversion. It is not that LOG lacks noise reduction; rather, standard noise reduction expects a gamma-corrected linear signal and therefore cannot work properly (try setting a high value in V-LOG on an S camera to see the results). The issue is the underexposure (lack of gain) of the linear signal.

There are also additional side effects due to what is called the black level range; I recommend reading about it on photonstophotos, a great website maintained by Bill Claff. When you look at black levels you see that cameras do not really have a pure black but a range. This range results in errors at the lower end of the exposure scale; the visible effect is colour bleeding (typically blue) in the shadows when there is underexposure. As V-LOG underexposes in linear terms, you will have issues of colour bleeding in the shadows; these have been experienced by several users so far with no explanation.

The other side effect is that the LUT to decompress V-LOG remains in a 10-bit colour space, which was insufficient to store the complete dynamic range data, and this does not change. So the LUT does not fully reverse the log compression; in Panasonic's case it goes into the V709 CineLike gamma, which is a Rec709 gamma. As the full signal is not decompressed, there are likely errors of hue accuracy, so V-LOG does not have a better ability to reproduce accurate colours and luminance, and this is the reason why, even after a LUT is applied, it needs to be graded. If you instead decompress V-LOG into an HDR space like Rec2020, you will see that it does not look washed out at all and colours are much more vibrant, as the receiving space has in excess of 20 stops.

Some users overexpose their footage saying they are doing ETTR. Due to the way log is implemented this means the clipping point is reached sooner, and therefore the dynamic range is no longer preserved. This is a possible remedy to reduce the amount of noise in low light; however, the log compression is not fully reversed by the LUT, which expects middle grey exposure, and therefore colour and luminance accuracy errors are guaranteed. If you find yourself regularly overexposing V-LOG you should consider not using it at all.

Shadow Improvement and input referred noise

The Lumix cameras with dual gain sensors behave differently from those without. This is visible in the following two graphs, again from Bill Claff's excellent website.

The first is the shadow improvement by ISO. Here you can see that while the GH5/G9 stay flat and are essentially ISO invariant, the GH5S and S5, which have a dual gain circuit, show an improvement step when they go from low to high gain. This is down to the way the sensors of the GH5s and S5 are constructed: the back illumination means that when the high gain circuit is active there is a material improvement in the shadows, and the camera may even have a lower read noise at this ISO (gain) point than it had before.

Another benefit of the dual gain implementation is easier to understand when you look at input referred noise graphs. You can see that as the sensor enters the dual gain zone the input referred noise drops. Input referred noise is the noise you would need to feed as an input to your circuit to produce the same noise at the output, so once that step is passed the image will look less noisy. Again you can see that while the GH5 stays relatively flat, the GH5s and S5 have a step improvement. It is not totally clear what happens in the intermediate zone for the GH5s; possibly intermediate digital gain or more noise reduction is applied.

The combination of a certain type of sensor construction and dual conversion gain can be quite useful to improve shadows performance.

Do not confuse the dual gain benefit with DR preservation: while dual gain reduces read noise, it does not change the fact that the highlights will clip as gain is raised. So the effective PDR reduces in any case and is not preserved. The engineering DR is preserved, but that is only useful to a machine and not to our eyes.

Now we are going to look at the specific implementation of V-LOG in various camera models.

Front Illuminated 12 bits Sensors

These are traditional digital photo cameras and include, for example, the GH5 and G9. On these cameras you will see that the V-Log exposure shows an ISO value 1 stop higher than other photo styles at identical aperture and shutter speed settings, but the actual result is the same in a raw file, so your RAW at 400 in V-LOG is the same as another photo style at 200. This is a direct contradiction of Panasonic's own V-Log model, as the meter should read the same in all photo styles, so something is going on here. As there is no underexposure, it follows that there is no real log compression either. These cameras are designed in a traditional way: low ISO (gain) is good, high ISO (gain) is not. This is visible in the previous graphs.

These screenshots show how the raw data of an image taken at ISO 250 in standard mode is identical to the V-LOG image, and therefore that there is no LOG compression at all in the GH5. V-LOGL on the GH5 is therefore just a look and does not provide any increase in dynamic range compared to other photo styles.

Image in standard photo style at ISO 250
Identical image at ISO 500 showing that there is no compression at all
VLOG L look of the same raw data

Is this version of V-LOGL more effective than other photo styles with a compressed gamma like CineLikeD? According to Panasonic data, CineLikeD has 450% headroom, so it is already capable of storing the whole dynamic range that the GH5 can produce (450% means 12.13 stops vs the 12.3 theoretical maximum).

In addition, the noise performance of V-Log is worse, because all it is doing is acting on shadows and highlights, not really doing any log conversion. The business case for acquiring a V-Log key on these cameras is limited if the objective is to preserve dynamic range, as the camera already has this ability with the photo styles included, and moreover this V-LOG is not actually doing any LOG compression, otherwise the image would have needed less gain and would have shown up underexposed. The fact that the camera is shooting at nominal ISO 400 most likely means that some form of noise reduction is active to counter the noise in the shadows that V-Log itself introduces. So on this type of camera V-LOG is only a look and does not accomplish any dynamic range compression.

Back Illuminated 12 bits readout sensors

The cameras that have this technology are the GH5s and the BGH1; the back illumination gives the sensor a better ability to convert light into signal when illumination levels are low. These cameras actually have a sensor with a 14-bit ADC, but this is not used for video.

In order to decompose the procedure I asked a friend to provide some RAW and JPEG images in V-Log and normal mode. You can see that in the GH5s there is 1 stop of underexposure and therefore a light form of log compression.

Standard Photo Style GH5s
V-LOG: -1 stop from standard at identical settings due to gain reduction
VLOGL in the GH5s as presented by the camera

In the GH5s implementation the camera meters zero at the same aperture, shutter and ISO in LOG and other photo styles, and zero is 50% IRE, so it is actually 1 stop overexposed.

The procedure for V-Log in these cameras is as follows:

  1. Meter the scene on middle grey + 1 stop (50%)
  2. Reduce gain of the image 1 stop behind the scenes (so your 800 is 400 and 5000 is 2500)
  3. Digital log compression and manipulation

As the underexposure is mild, the log compression is also mild, as it is only recovering 1 stop; since the two effects cancel out, this is actually a balanced setting.

The IMX299 dual gain implementation was a bit messed up in the GH5s but has been corrected in the BGH1, with values of 160 and 800. It is unclear what is happening with the GH5s and why Panasonic declared 400 and 2500 as the dual gain values, as those do not correspond to the sensor behaviour; perhaps additional on-sensor noise reduction only starts at those values, or perhaps it was just a marketing statement.

Back Illuminated 14 bits Sensors

Here we have the S1H and S5 that have identical sensors and dual gain structure. 

The metering behaviour on the S series is the same as the GH5s so all photo styles result in identical metering. The examples were at the beginning of this post so I am not going to repeat them here.

Now the gain reduction is 2⅔ stops, which is significant. After this, a strong log compression is applied. This means that when you see ISO 640 on the screen the camera is actually at a gain equivalent to ISO 100, and when you see 5000 it is at 640, resulting in very dark images. In the case of the S5/S1H, V-LOG does offer additional dynamic range not achievable with other photo styles.

Interestingly, V-Log on the S series does achieve decent low light SNR despite the strong negative gain bias. Here we can see that the log implementation can be effective; however, other photo styles that do not reduce gain may be a better choice in low light, as gain lifts the signal and improves SNR. It is also important to note that the additional DR of V-LOG compared to other photo styles is in the highlights, so it only shows on scenes with bright areas together with deep darks; this was noted on dpreview and other websites.

Should you use V-LOG?

It looks like Panasonic is tweaking the procedure for each sensor, or even each camera, as they go along. The behind-the-scenes gain reduction is really surprising; however, it is logical considering the effect of a log compression.

Now we can also see why Panasonic calls the GH5s implementation V-LOGL: the level of log compression is small, only 1 stop, as opposed to V-LOG in the S series where the compression is 2⅔ stops. We have also seen that V-LOG, at least in a digital consumer camera with a sensor with integrated ADC, has potentially several drawbacks, and those are due to the way a camera functions.

Looking at benefits in terms of dynamic range preservation:

  1. GH5/G9 and front illuminated sensor: None
  2. GH5s/BGH1 back illuminated MFT: 1 stop
  3. S5/S1H full frame: 2 ⅔ stops

What we need to consider is that changing the gamma curve can also store additional dynamic range in a standard video container. Dpreview is the only website that compared the various modes when they reviewed the Panasonic S1H.

A particularly interesting comparison is with the CineLikeD photo style, which according to Panasonic can store a higher dynamic range and is also not affected by the issues of V-LOG in the shadows or by the colour accuracy problems due to log compression. The dpreview measurements show that:

  1. On the GH5s V-LOG has 0.3 stops benefits over CineLikeD
  2. On the S1H V-LOG has a benefit of 0.7 stops over CineLikeD2

Considering the potential issues of noise and colour bleeding in the shadows, together with the hue accuracy errors due to the approximations of the V-LOG implementation, I personally have decided not to use V-LOG at all for standard dynamic range, but to use it for HDR footage only, as the decompression of V-LOG there seems to have limited to no side effects. In normal, non-HDR situations I have shot several clips with V-LOG, but I never felt I could not manage the scene with other photo styles, and the extra effort for a maximum benefit of 0.7 Ev is not worth my time, nor the investment in noise reduction software or the extra grading effort required. As HDR is not very popular, I have recently stopped using V-LOG altogether due to the lack of support for HDR in browsers for online viewing.

Obviously this is a personal consideration and not a recommendation; however, I hope this post helps you make the right choices depending on what you shoot.

This write-up is based on my analysis of Panasonic V-LOG and does not necessarily mean the implementation of other camera manufacturers is identical; however, the challenges in a digital camera are similar and I expect the solutions to be similar too.

The definitive guide to hdr with the panasonic gh5/s1 in final cut pro x

First of all the requirements for HDR at home are:

  1. Log or HLG footage
  2. Final Cut Pro X 10.4.8
  3. Mac OS Catalina 10.15.4
  4. HDR-10 Monitor with 10 bit gamut

It is possible to work with a non-HDR-10 monitor using scopes, but it is not ideal and only acceptable for HLG; in any case 10 bits is a must.

Recommended reading: https://images.apple.com/final-cut-pro/docs/Working_with_Wide_Color_Gamut_and_High_Dynamic_Range_in_Final_Cut_Pro_X.pdf

HDR Footage

In order to produce HDR clips you need HDR footage. This comes in two forms:

  1. Log footage
  2. HLG

Cameras have been shooting HDR for years; the issue has been that no consumer operating system or display was capable of displaying it. The situation has changed, as Windows 10 and Mac OS now have HDR-10 support. This is limited: for example, on Mac OS there is no browser support but the TV app is supported, while on Windows you can watch HDR-10 videos on YouTube.

You need to have your target format in mind because Log and HLG are not actually interchangeable. HLG today is really only TV sets and some smartphones; HDR-10 instead is growing in computer support and is more widely supported. Both are royalty free. This post is not about which standard is best; it is just about producing some HDR content.

The process is almost identical but there are some significant differences downstream.

Let me explain why. This graph, produced using the outstanding online application LutCalc, shows the output/input relationship of V-LOG against a standard display gamma for Rec709.

V-LOG -> PQ

Stop diagram V-LOG vs Rec709

Looking at the stop diagram we can appreciate that the curves are not only different in shape but that a lot of values differ substantially, and this is why we need to use a LUT.

Once we apply a LUT, the relationship between V-LOG and Rec709 is clearly not linear and only a small part of the bits fits into the target space.

Output vs Input diagram for V-LOG and Rec709

We can see that V-Log fills Rec709 with just a bit more than 60% IRE, so a lot of squeezing is needed to fit it back in; this is the reason why many people struggle with V-Log and the reason why I do not use V-Log for SDR content.

However the situation changes if we use V-Log for HDR specifically PQ.

Stop Table V-Log to PQ

You can see that, net of an offset, the curves are almost identical in shape.

This is more apparent looking at the LUT in / out.

LUT in/Out V-Log to Rec2100 PQ

With the exception of the initial part, which for V-Log is linear while PQ is fully logarithmic, the curve is almost a straight line. As PQ is a larger space than V-Log can produce on a consumer camera, we do not have issues squeezing bits in: PQ accommodates all the bits just fine.
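
You can verify this shape similarity numerically. The sketch below (my own; the mapping of scene reflectance to display nits, with 90% white at 100 nits, is an assumption made purely for the comparison) encodes the same scene values with the V-Log formula quoted earlier and with the SMPTE ST 2084 (PQ) curve:

```python
import math

# SMPTE ST 2084 (PQ) inverse EOTF constants from the published standard
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    y = (nits / 10000) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

def vlog_encode(refl, cut1=0.01, b=0.00873, c=0.241514, d=0.598206):
    return 5.6 * refl + 0.125 if refl < cut1 else c * math.log10(refl + b) + d

for stops in range(-6, 6):
    refl = 0.18 * 2 ** stops      # scene reflectance in stops around middle grey
    nits = refl / 0.9 * 100       # assumed display mapping: 90% white -> 100 nits
    print(f"{stops:+d} stops: V-Log {vlog_encode(refl):.3f}  PQ {pq_encode(nits):.3f}")
# Over most of the range the two columns differ by a roughly constant offset,
# which is why V-Log sits comfortably inside a PQ timeline.
```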

HLG

Similar to V-LOG, HLG does not fit well into an SDR space.

Stop Table HLG to Rec709

The situation becomes apparent looking at the In/Out Lutted values.

HLG to Rec709

We can see that, as HLG is also a log gamma with a different ramp-up, 100% is reached with even fewer bits than V-Log.

So really, in pure mathematical terms, fitting log spaces into Rec709 is not a great idea and should be avoided. Note that even with the arrival of RAW video we still lack editors capable of working in a 16-bit depth space like photo editors do, and currently all processes go through LOG because they need to fit into a 10/12-bit working space.

It is also a bad idea to use V-Log for HLG due to the difference of the log curves.

V-Log vs HLG

And the graph demonstrates what I said at the beginning: you need to decide your output at the outset and stick to a compatible format.

Importing Footage in Final Cut Pro X 10.4.8

Once we have HLG or LOG footage we need to import it into a Wide Gamut library; make sure you check this, because SDR is the default in FCPX.

Library Settings

HLG footage will not require any processing, but LUTs have to be applied to V-LOG as this is different from any Rec2100 target spaces.

The most convenient way is to go into the Organise workspace, select all clips, then press the i button and select General. Apply the Panasonic V-Log LUT to all clips.

Organise view: the LUT option is not available in the Basic view, so make sure you select General

Creating a Project

Once all files have been handled as required, we create our HDR-10 project, which in Final Cut means Rec2020 PQ.

For HLG project change colour space to HLG

The following screenshots demonstrate the effect of the LUT on footage on a PQ timeline.

LUT not applied: footage looks dim as values are limited to 80%

With the LUT applied the V-LOG is expanded in the PQ space and the colours and tones come back.

LUTed clip on PQ timeline

We can see the brightness of the scene is approaching 1000 nits and looks exactly as we experienced it.

Once all edits are finished, as a last step we add the HDR Tools effect to limit peak brightness to 1000 nits, which is a requirement of YouTube and most consumer displays. The scopes flex slightly with an automatic highlight roll-off.

Exporting the Project

I have been shooting Panasonic AVCI at 400 Mbps, so I will export a master file using ProRes 422 HQ; if you use a lower bitrate, ProRes 422 may be sufficient, but don't go lower than that or it won't really be HDR anymore.

Export in ProRes 422 HQ

YouTube and other devices use default settings for HDR-10 metadata, so do not fill in the mastering display or content information: it is not required, and you would not know how to fill it in correctly anyway, with the exception of peak brightness.

Converting for YouTube

I use the free program Handbrake and the YouTube upload guidelines to produce a compatible file. It is ESSENTIAL to produce an mp4 file, otherwise your TV and YouTube may not be able to display HDR correctly; avoid any other format at all costs.
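
For reference, an equivalent command-line transcode would look something like the sketch below. This is only an illustration, not the exact Handbrake settings: the bitrates and filenames are hypothetical, while the x265 colour flags come from the ffmpeg/x265 documentation.

```python
import subprocess

# Transcode the ProRes master to a 10-bit HEVC mp4 tagged as Rec2020 PQ (HDR-10),
# which is the combination YouTube detects as HDR.
subprocess.run([
    "ffmpeg", "-i", "master_prores.mov",
    "-c:v", "libx265", "-pix_fmt", "yuv420p10le", "-b:v", "60M",
    "-x265-params", "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc",
    "-c:a", "aac", "-b:a", "256k",
    "hdr10_for_youtube.mp4",
], check=True)
```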

The finished product can be seen here

Home HDR Video HDR-10
HLG Documentary style footage

SDR version from HDR master

There are residual issues with this process; one is the production of an SDR version. This currently works much better for HLG than for HDR-10, which is interesting, because HLG is unsupported on computers, so if you produce HDR in HLG you are effectively giving something decent to both audiences.

For HDR-10, YouTube applies its own one-size-fits-all LUT and the results can be really bad. You may experience oversaturated colours in some cases, dark footage in others, and some clips may look totally fine.

At a professional level you would produce a separate SDR grade; however, it is possible to improve the quality of the YouTube conversion using specific techniques I will cover in a separate post.

Final Remarks

Grading in HDR is not widely supported; the only tools available are the scopes and the tone mapping of your display. There is no concept of correct exposure for skin tones: in one scene they have a certain brightness and in another this changes again, because this is not a 0-100% relative scale but works in absolute values.

If you invested in a series of cinema LUTs you will find none of them work: they compress the signal to under 100 nits. So there is less headroom for looks. There are other things you can do to give a vintage look, like adding grain, but you need to be careful: the incredible brightness of the footage and the detail of 10 bits mean that if you push it too far it looks a mess. Currently I avoid adding film grain, and if I add it I blend it at 10%-20%.

One thing that is interesting is that log footage in PQ does have a nice feel to it despite the incredible contrast. After all, log is a way to emulate film, specifically Cineon; this is true for almost all log formats. Then you would have the different characteristics of each film stock; this is now our camera sensor, and because most of them are made by Sony or Canon, clips tend to look very similar to each other nowadays. So if you want something different you need to step into the world of RED or ARRI, but that is not in the scope of what I am writing here and what you, my readers, are interested in.

I am keeping a playlist with all my HDR experiments here and I will keep adding to it.

YouTube HDR Playlist

If you find this useful please donate using the button on the side and I will have a drink on you…Cheers!

HDR or SDR with the Panasonic GH5

As you have read, I have been at the forefront of HDR use at home. I have a total of 5 devices with HDR certification, of which 2 support all standards all the way to Dolby Vision and 3 support at least HLG and HDR-10. My content consumption consists mostly of Netflix or Amazon originals and occasional BBC HLG broadcasts that are streamed concurrently with live programmes. So it is fair to say I have some practical experience on the subject, and two years ago I started writing about shooting HLG with the GH5. This was mostly limited by the lack of editing capabilities on the display side, but recently Mac OS 10.15.4 has brought HDR-10 support, which means you can see an HDR signal on a compatible HDMI or DisplayPort device. This is not HLG, but there are ways around that, as I wrote in a recent post. This post makes some considerations on the issues of shooting HDR and why, as of 2020, shooting SDR Rec709 with your Panasonic GH5 is still my preferred option for underwater video and beyond.

Real vs Theoretical Dynamic Range

You will recall the schematic of a digital camera from a previous post.

This was presented to discuss dual gain circuits, but if you ignore the two gain circuits it remains valid. In this post we will focus on the ADC, which stands for Analog to Digital Converter. Contemporary cameras have 12- and 14-bit ADCs; typically 14-bit ADCs are the prerogative of DSLRs or high-end cameras. If we simplify to the extreme, the signal arriving at the ADC will be digitised on a 12- or 14-bit scale. In the case of the GH5 we have a 12-bit ADC; it is unclear whether the GH5s has a 14-bit ADC despite producing 14-bit RAW, and for the purpose of this post I will ignore this possibility and focus on the 12-bit ADC.

12 bits means you have 4096 levels of signal for each RGB channel, which effectively means the dynamic range limit of the camera is 12 Ev, as this is defined as log10(4096)/log10(2) = 12. Stop, wait a minute, how is that possible? I have references that the Panasonic GH5 dynamic range is 13 Ev; how did this become 12?

Firstly, we need to ignore the effect of oversampling and focus on a 1:1 pixel ratio, and therefore look at the Screen diagram, which shows just a bit more than 12 Ev. We then have to look at how DxOMark measures dynamic range; this is explained here. In real life we will not be shooting a grey scale but a coloured scene, so unless you are taking pictures of the moon you will not get much more than 12 stops in any scenario, as the colours will eat the data.

This is as far as RAW sensor data goes, before de-mosaicing and the digital signal processing that will further deteriorate DR when the signal is converted down to 10 bits, even if a non-linear gamma curve is put in place. We do not know what the usable DR of the GH5 really is, but Panasonic's statement when V-LOG was announced referenced 12 stops of dynamic range using a logarithmic curve, so we can safely conclude that the best case is 12 stops when a log curve is used and 10 for a gamma curve with a constant correction factor. Again, it is worth stressing that the 12 stops of DR is the absolute maximum, at the camera setting with 0 gain applied, aka base or native ISO, which for the GH5 is 200, corresponding to 400 in log modes.

Shooting HLG vs SDR

Shooting HLG with the GH5 or any other prosumer device is not easy.

The first key issue in shooting HLG is the lack of monitoring capabilities on the internal LCD and on external monitors. Let's start with the internal monitor, which is not capable of displaying HLG signals and relies on two modes:

  • Mode 1: prioritises the highlights wherever they are
  • Mode 2: prioritises the subject, i.e. the centre of the frame

In essence you are not able to see what you get during the shot. Furthermore, when you set zebras to 90% the camera will rarely reach this value. You need to rely on the waveform, which is not user friendly in an underwater scene, or on the exposure meter.

If you have an external monitor, you will find, if you look carefully at the spec, that the screens are Rec709, so they will not display the HLG gamma while they will correctly record the colour gamut. See https://www.atomos.com/ninjav : if you read under HDR monitoring gamma you see BT.2020, which is a colour gamut, not an HDR gamma. So you encounter the same issues, albeit on a much brighter 1000-nit display than the LCD, and you need to either adapt to the different values on the waveform or trust the exposure meter and the zebras, which as we have said are not very useful, as it takes a lot to clip.

On the other hand, if you shoot an SDR format, the LCD and external monitor will show exactly what you are going to get, unless you shoot in V-LOG; in this case the waveform and the zebras will need to be adjusted to account for the fact that the V-LOG absolute maximum is 80% and 90% white is 60%. Once you apply a monitor LUT, however, you will see exactly what you are going to get on the internal or external display.

Editing HLG vs SDR

In the editing phase you will be faced with similar challenges, although as we have seen there are workarounds to edit HLG if you wish. A practical consideration is around contrast ratio. Despite all the claims that SDR is just 6 stops, I have actually dug out the BT.709, BT.1886 and BT.2100 recommendations, and this is what I have found.

Standard   Contrast Ratio   Max Brightness   Min Brightness   Analog DR
BT.709     1,000:1          100 nits         0.1 nits         9.97 stops
BT.1886    2,000:1          100 nits         0.05 nits        10.97 stops
BT.2100    200,000:1        1,000 nits       0.005 nits       17.61 stops
Specifications of ITU display standards

In essence Rec709 has a contrast ratio of 1000:1, which means 9.97 stops of DR, and already allows for 8- and 10-bit colour. BT.1886 was issued to acknowledge that CRT screens no longer exist, and this takes the DR to 10.97 stops. BT.2100 has a contrast ratio of 200,000:1, or 17.61 stops of DR.

Standard   Contrast Ratio   Max Brightness   Min Brightness   Analog DR
HDR400     1,000:1          400 nits         0.4 nits         9.97 stops
HDR500     5,000:1          500 nits         0.1 nits         12.29 stops
HDR600     6,000:1          600 nits         0.1 nits         12.55 stops
HDR1000    20,000:1         1,000 nits       0.05 nits        14.29 stops
HDR1400    70,000:1         1,400 nits       0.02 nits        16.10 stops
400 TB     800,000:1        400 nits         0.0005 nits      19.61 stops
500 TB     1,000,000:1      500 nits         0.0005 nits      19.93 stops
DisplayHDR Performance Standards

Looking at HDR monitors you see that, with the exception of OLED screens, no consumer device meets the BT.2100 standard; so even if you have an HDR monitor, in most cases it falls short of the BT.2100 recommendation.

Our GH5 is capable of a maximum of 12 stops of DR in V-Log, and maybe a bit more in HLG; however, those values are far below the BT.2100 recommendation and more in line with BT.1886. If we look at the DxOMark DR charts we see that at a nominal ISO 1600, which is in effect just above 800, the DR has fallen below 10 Ev. Consider that this is engineering DR: practically speaking, you are getting your 12 stops only at ISO 200, and your real HDR range is limited to the 200-400 ISO range. This makes sense, as those are the bright scenes. Consider also that log photo styles start at ISO 400, but this really translates to ISO 200 on this chart, as well as in exposure values.

Unless you are shooting at low ISO you will get limited DR improvement. Underwater it is quite easy to be at higher ISO than 200, and even when you are at 200, unless you are shooting the surface, the scene has limited DR anyway. Generally, 10 stops are more than adequate, as this is what we get when we produce a JPEG from a RAW file.

Viewing HDR

I think the final nail in the coffin arrives when we look where the content will be consumed.

Standard     Contrast Ratio   Max Brightness   Min Brightness   Analog DR
IPS/Phones   1,000:1          350 nits         0.35 nits        9.97 stops
LED TV       4,000:1          400 nits         0.1 nits         11.97 stops
OLED         6,000,000:1      600 nits         0.0001 nits      22.52 stops
Typical Devices Performance
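
The Analog DR column in these tables is not magic: it is simply the base-2 logarithm of the contrast ratio (maximum brightness divided by minimum brightness). A quick sketch to reproduce it:

```python
import math

# (max nits, min nits) pairs from the tables above
devices = {"IPS/Phones": (350, 0.35), "LED TV": (400, 0.1), "OLED": (600, 0.0001)}

for name, (max_nits, min_nits) in devices.items():
    stops = math.log2(max_nits / min_nits)   # analog dynamic range in stops
    print(f"{name}: {stops:.2f} stops")
# IPS/Phones: 9.97, LED TV: 11.97, OLED: 22.52
```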

Phones have IPS screens, with some exceptions, and contrast ratios below 1000:1, and so do computer screens. If you share on YouTube you will know that phones and computers constitute around 85% of playback devices. TVs are around 10%, and only a small part of those will be HDR. So, other than in your own home, you will not find many HDR devices out there to do justice to your content.

10-bits vs 8 bits

It is best practice to shoot 10 bits, and both SDR and HDR support 10-bit colour depth. For compatibility purposes SDR is delivered with 8-bit colour and HDR with 10-bit colour.

Looking at the tonal range of RAW files at 8 megapixels, we see that the camera has 24 bits of depth over RGB; this means 8 bits per channel and 9 bits of tonal range. Tonal range is grey levels, so in short the camera will not produce 10-bit colour but will have more than 8 bits of grey tones, which helps counter banding, but only at low ISO, so it is more useful for blue skies than for blue water. Considering that images for photo competitions are JPEGs and that nobody has felt the need for something more, we can conclude that, as long as we shoot at a high bitrate something close to a raw format, 8 bits for delivery is adequate.

Cases for HDR and Decision Tree

There are cases where shooting HLG can be meaningful; those include snorkelling at the surface on bright days. You will not be going to depth, so the footage will look good straight off the camera; likewise for bright shots in the sun at the surface. But generally the benefit will drop when the scene has limited DR, or at higher ISO values where DR drops anyway.

What follows is my decision tree for choosing between SDR and HDR and between 10-bit and 8-bit formats. I like my pictures and my videos to look better than life, and I think editing adds value to the imaging, although this is not an excuse for poor capture. There are circumstances where editing is less important, namely when the scene is amazing by itself and requires no extra help, or when I am looking at fast-paced, documentary-style scenes that do not benefit from editing. For the rest, my preference remains for editing-friendly formats and high-bitrate, 10-bit, all-intra codecs. Recently I have purchased the V-Log upgrade and I have not found it difficult to use or expose, so I have included it here as a possible option.

The future of HDR

Except in a cinema-like setting with dark surroundings and low ambient light, HDR mass consumption remains challenging. Yes, you can have high peak brightness, but not a high contrast ratio, and that can already be obtained with SDR for most viewers. There is a lot of noise in the cinema community at present because the PQ curve is hard to manage and multiplies the work in post-processing; clearly PQ is not a way forward for broadcasting, and HLG will prevail thanks to the pioneering efforts of the BBC, but the lack of monitoring and editing devices means HLG is not going to fit cine-like scenarios and small productions. It could be a good fit for a zero-edit shooter, someone who likes to see the scene as it was.

Conclusion

When marketing myths and incorrect information are netted out, we realise that our prosumer devices are very far away from what would be required to shoot, edit and consume HDR. As with many other things in digital imaging, it is much more important to focus on shooting techniques and how to make the most of what we have, instead of engaging in a quest for theoretical benefits that may not exist.

Producing and grading HDR content with the Panasonic GH5 in Final Cut Pro X

It has been almost two years since my first posts on HLG capture with the GH5 https://interceptor121.com/2018/06/15/setting-up-your-gh5-for-hlg-hdr-capture/ and last week Apple released Catalina 10.15.4, which now supports HDR-10 with compatible devices. Apple, and computers in general, still do not support HLG, and it is unlikely this is ever going to happen, as the gaming industry is following the VESA DisplayHDR standard, which is aligned to HDR-10.

After some initial experiments with GH5 and HLG HDR things have gone quiet and this is for two reasons:

  1. There are no affordable monitors that support HLG
  2. There has been lack of software support

While on the surface it looks like there is still no solution to those issues, in this post I will explain how to grade HLG footage in Final Cut Pro, should you wish to do so. The situation is not that different on Windows with DaVinci Resolve, which also only supports HDR-10 monitors, but I leave it to Resolve users to figure that out. This tutorial is about Final Cut Pro.

A word about Vlog

It is possible to use V-Log to create HDR content; however, V-LOG is recorded as Rec709 10 bits. The Panasonic LUT, and any other LUT, only maps the V-LOG gamma curve to Rec709, so your luminance and colours will be off. It would be appropriate to have a V-LOG to PQ LUT; however, I am not aware that one exists. Surely Panasonic could create it, but the V-LOG LUT that comes with the camera is only for processing in Rec709. So, from our perspective, we will ignore V-LOG for HDR until such time as we have a fully working LUT and clarity about the process.

Why is a bad idea to grade directly in HLG

There is a belief that HLG is a delivery format and is not edit ready. While that may be true, the primary issue with HLG is that no consumer screens support the BT.2020 colour space and the HLG gamma curve. Most displays are plain sRGB, and others support partially or fully DCI-P3 or its computer version, Display P3. Although the white point is the same for all those colour spaces, there is a different definition of what red, green and blue are, and therefore, without taking this into account, if you change a hue the results will not be as expected. You may still white balance or match colours in HLG, but you should not attempt anything more.

What do you need for grading HDR?

In order to successfully and correctly grade HDR footage on your computer you need the following:

  • HDR HLG footage
  • Editing software compatible with HDR-10 (Final Cut or DaVinci)
  • An HDR-10 10 bits monitor

If you want to produce and edit HDR content you must have a compatible monitor; let's see how to identify one.

Finding an HDR-10 Monitor

HDR is highly unregulated when it comes to monitors. TVs have the Ultra HD Premium Alliance, and recently VESA has introduced the DisplayHDR standards https://displayhdr.org/ that are dedicated to display devices. So far the DisplayHDR certification has been the prerogative of gaming monitors, which have quick response times and high contrast but not necessarily high colour accuracy. We can use the certified list of monitors to find a consumer-grade device that may be fit for our purpose: https://displayhdr.org/certified-products/

A DisplayHDR 1000 certified monitor is equivalent to a PQ grading device, as it has a peak brightness of 1000 nits and a minimum of 0.005; this is ideally what you want, but you can get by with an HDR-400 certified display as long as it supports a wide colour gamut. In HDR terms, wide gamut means covering at least 90% of the DCI-P3 colour space, so we can use the VESA list to find a monitor that is HDR-10 compatible and has decent colour accuracy. Even inside the HDR-400 category there are displays that are fit for purpose and reasonably priced. If you prefer a brand more orientated to professional design or imaging, look for the usual suspects: Eizo, Benq, and others. Here it will be harder to find HDR support, as those manufacturers usually focus on colour accuracy, so you may find a display covering 95% DCI-P3 but not necessarily producing high brightness. As long as the device supports HDR-10 you are good to go.

I have a Benq PD2720U that is HDR-10 certified, has a maximum brightness of 350 nits and a minimum of 0.35, and covers 100% of sRGB and Rec709 and 95% of DCI-P3, so it is adequate for the task. It is worth noting that a typical monitor with 350-400 nits of brightness offers 10 stops of dynamic range.

In summary, either of these approaches will work if you do not have a professional-grade monitor:

  • Look into Vesa list https://displayhdr.org/certified-products/ and identify a device that supports at least 90% DCI-P3, ideally HDR-1000 but less is ok too
  • Search professional display specifications for HDR-10 compatibility and 10 bits wide gamut > 90% DCI-P3

 

Final Cut Pro Steps

The easy way to have HDR-ready content with the GH5 is to shoot with the HLG Photo Style. This produces clips that, when analysed, have the following characteristics with the AVCI codec.

MediaInfo Details HLG 400 Mbps clip

'Limited' means that it is not using the full 10-bit range for brightness; you do not need to worry about that.

With your material ready create a new library in Final Cut Pro that has a Wide Gamut and import your footage.

As we know, Apple does not support HLG, so when you look at the Luma scope you will see a traditional Rec709 IRE diagram. In addition, the Tone Mapping functionality will not work, so you do not have a real idea of colour and brightness accuracy.

At this stage you have two options:

  1. Proceed in HLG and avoid grading
  2. Convert your material in PQ so that you can edit it

We will go with option 2, as we want to grade our footage.

Create a project with the PQ gamut and enter your display information in the project properties. In my case the display has a minimum brightness of 0.35 nits and a maximum of 350, and it has P3 primaries with a standard D65 white point. It is important to know those parameters to have a good editing experience, otherwise the colours will be off.

If you do not know your display parameters, do some research. I have a Benq monitor that comes with a calibration certificate: the information is right there. Apple screens are typically also P3 with a D65 white point, and you can find the maximum brightness in the specs, usually around 500 nits for Apple with a minimum of 0.5 nits. Do not enter Rec2020 in the monitor information unless your monitor has native primaries in that space (there are almost none). Apple's documentation tells you that if you do not know those values you can leave them blank; Final Cut Pro will use the display information from ColorSync and try a best match, but this is far from ideal.

Monitor Metadata in the Project Properties

For the purpose of grading we will convert HLG to PQ using the HDR Tools effect. The two variants of HDR have a different way of managing brightness, so a conversion is required; the colour information, however, is consistent between the two.

Please note that the maximum brightness value is typically 1000 nits; however, there are not many displays out there that support this level of brightness. For the purpose of what we are going to do this is irrelevant, so DO NOT change this value. Activate tone mapping, accessible under the View pull-down in the playback window; this will adapt the footage to your display according to the parameters of the project without capping the scopes.

Use HDR Tools to convert HLG to PQ

Finalising your project

When you have finished with your editing you have two options:

  • Stay in PQ and produce an HDR-10 master
  • Delete all HDR tools HLG to PQ conversions and change back the project to HLG

If you produce an HDR-10 master you will need to edit twice for SDR: duplicate the project and apply the HDR Tools HLG to SDR conversion, or another LUT of your choice.

If you stay in HLG you will produce a single file, but it is likely that HDR will only be displayed on a narrower range of devices due to the lack of support for HLG on computers. The HLG clip will have correct grading, as the corrections performed while the project was in PQ with tone mapping will survive the change back to HLG, since HLG and PQ share the same colour mapping. The important thing is that you were able to see the effects of your grade.

Project back in HLG: you can see how the RGB parade and the scope are back to IRE, but everything is exactly the same as with PQ

In my case I have an HLG TV, so I produce only one file, as I can't be bothered doing the exercise twice.

The steps to produce your master file are identical to any other project; I recommend creating a ProRes 422 HQ master and deriving other formats from it using Handbrake. If you change your project back to HLG you will get a warning about the master display; you can ignore it.

Colour Correction in underwater video

This is my last instalment of the getting the right colour series.

The first read is the explanation of recording settings

https://interceptor121.com/2018/08/13/panasonic-gh5-demystifying-movie-recording-settings/

This post has been quite popular as it applies generally to the GH5 not just for underwater work.

The second article is about getting the best colours

https://interceptor121.com/2019/08/03/getting-the-best-colors-in-your-underwater-video-with-the-panasonic-gh5/

And then of course the issue of white balance

https://interceptor121.com/2019/09/24/the-importance-of-underwater-white-balance-with-the-panasonic-gh5/

I am not getting into ambient light filters, but there are articles on that too.

Now I want to discuss editing, as I see many posts online that are plainly incorrect. As is true for photos, you don't edit just by looking at a histogram. The histogram is a representation of the average of the image, and this is not the right approach to creating strong images or videos.

You need to know how the tools work in order to do the appropriate exposure and colour corrections, but it is down to you to decide the look you want to achieve.

I like my imaging, video or still, to be strong, with deep blues and generally dark; that is the way I go about it and it is my look. However, the tools can be used to achieve whatever look you prefer for your material.

In this YouTube tutorial I explain how to edit and grade footage produced by the camera and turn it into something I enjoy watching time and time again.

I called this clip Underwater Video Colour Correction Made Easy, as it is not difficult to obtain pleasing colours if you have followed all the steps.

A few notes just to anticipate possible questions

  1. Why are you not looking to have the Luma or the RGB parades at 50% of the scale?

50% of the IRE scale is for 18% neutral grey. I do not want my footage to look washed out, which is what happens if you aim for 50%.

2. Is it important to execute the steps in sequence?

Yes. Camera LUTs should be applied before grading, as they normalise the gamma curve. In terms of correction steps, setting the correct white balance influences the RGB curves and therefore needs to be done before further grading is carried out.

3. Why don’t you correct the overall saturation?

Most of the highlights and shadows are in the light grey or dark grey areas. Saturating those can lead to clipping or noise.

4. Is there a difference between using corrections like Vibrancy instead of just saturation?

Yes. Saturation shifts all colours equally towards higher intensity, while vibrancy stretches the colours adaptively, acting mostly on those that are not already saturated.
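To make the difference concrete, here is a minimal sketch of the two behaviours on RGB values in the 0-1 range; this is my own toy model, not how Final Cut Pro implements its controls.

```python
import numpy as np

def saturation(rgb, gain):
    """Plain saturation: every pixel is pushed away from its grey
    value by the same factor, however saturated it already is."""
    grey = rgb.mean(axis=-1, keepdims=True)
    return np.clip(grey + (rgb - grey) * gain, 0.0, 1.0)

def vibrance(rgb, gain):
    """Vibrance: the push is weighted by how muted the pixel is,
    so already-saturated colours are left mostly untouched."""
    grey = rgb.mean(axis=-1, keepdims=True)
    sat = rgb.max(axis=-1, keepdims=True) - rgb.min(axis=-1, keepdims=True)
    boost = 1.0 + (gain - 1.0) * (1.0 - sat)
    return np.clip(grey + (rgb - grey) * boost, 0.0, 1.0)

pixels = np.array([[0.55, 0.45, 0.50],    # muted colour: vibrance acts on it
                   [0.90, 0.10, 0.20]])   # saturated colour: barely moves
print(saturation(pixels, 1.5))
print(vibrance(pixels, 1.5))
```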

5. Can you avoid an effect LUT and just get the look you want with other tools?

Yes this is entirely down to personal preference.

6. My footage straight from camera does not look like yours and I want it to look good straight away.

That is again down to personal preference; however, if you crush the blacks, clip the highlights, or introduce a hue shift by clipping one of the RGB channels, this can no longer be remedied.

I hope you find this useful. Wishing all my followers a Merry Xmas and a Happy 2020.

Canon 8 – 15 mm Fisheye on the Panasonic GH5 Pool Tests

It was time to get wet and test the Canon 8 – 15 mm fisheye on the GH5 in the pool so I made my way to Luton Aspire with the help of Rec2Tec Bletchley.

I had the chance to try a few things, first of all to understand the strobe coverage of the fisheye frame; this is something I had not tested before, but I had built a little model.

In purple the ideal rectangle built with the maximum width and height of the fisheye frame

This model ignores the corners: the red circle marks the 90-degree light beams and the amber one the 120-degree angle. A strobe does not have a sharp fall-off when you use diffusers, so this model assumes your strobe stays within 1 Ev of loss out to around 90 degrees and then drops to -4 Ev at 120 degrees. I do not want to dig too deep into this topic; anyway, this is what I expected, and this is the frame.

Shot at 1.5 meters from pool wall

You can see a tiny reflection of the strobes together with a mask falling on the left hand side… In order to test my theory I ran this through false colour on my field monitor; at first glance it looks well lit, and this is the false colour output.

False colour diagram of previous shot

As you can see the strobes drop below 50% at the green colour band, and therefore the nominal width of those strobes is probably 100 degrees. In the deep corners you see the drop to 20%, 10% and then 0%.
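For reference, this is the little falloff model in code form; the linear Ev interpolation between the 90- and 120-degree beam angles is my own simplification, not a measured strobe curve.

```python
def strobe_loss_ev(angle_deg):
    """Toy beam model: within the 90-degree beam the strobe stays
    within -1 Ev of the centre; from 90 to 120 degrees the output
    drops linearly down to -4 Ev (assumed, not measured)."""
    if angle_deg <= 90:
        return -(angle_deg / 90.0)
    return -1.0 - 3.0 * min(angle_deg - 90.0, 30.0) / 30.0

for a in (45, 90, 105, 120):
    print(f"{a:3d} deg -> {strobe_loss_ev(a):.1f} Ev")
```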

Time to take some shots

Divers hovering @ 8 mm

The lens is absolutely pin sharp across the frame; I was shooting at f/5.6 in the 140 mm glass dome.

Happy divers @ 9 mm
BCD removal @ 10 mm
Gliding @ 11 mm
Open Water class @ 12mm
Divers couple @ 13 mm
Hover @ 15 mm

Performance remains stunning across the zoom range. I also tried a few shots at f/4.

9 mm f/4

There is no reef background, but it looks pretty good to me.

The pool gives a strong blue cast so the shots are white balanced.

Details of the rig and lens mount are in a previous post:

https://interceptor121.com/2019/11/02/fisheye-zoom-for-micro-four-thirds/

Panasonic GH5 zoom fisheye rig

Matching Filters Techniques

The issue is that ambient light filters are designed for a certain depth and water conditions and do not work well outside that range. While the idea of white balancing the scene and getting colour to penetrate deep into the frame is great, the implementation is hard.

Looking at Keldan, there is a 6-meter version and a 12-meter version, as listed on their website. The 6-meter version works well between 4 and 12 meters, the other between 10 and 18 meters. At the same time the Spectrum filter for the lens works down to 15 meters at most and really performs better shallower than 12 meters.

With that in mind, it follows that if you plan to use the Spectrum filter -2 you are probably pairing it with the 6-meter ambient light filters. So what happens if you go deeper than 12 meters? The light filter is no longer aligned to the ambient light in the water and the lights start to look warm; this is not such a bad thing, but it can get bad at times.

You can of course white balance the frame with the lights on, but this becomes somewhat inconvenient, so I wanted to come up with a different technique. In a previous post I described how to match a lens filter to a light/strobe filter. Instead of matching the light filter to the ambient light, I match the two filters against each other on land, in daylight, to obtain a combination that is as neutral as possible. I have done this for the URPRO, Magic Filter and Keldan Spectrum filters and worked out which light filter, combined with each, gives a neutral tone.

Magic filter combined with 2 stops cyan filter giving almost no cast

This tone tends to emulate the depth where the filter has its best colour rendition: in the case of Keldan this is around 4 meters, the same for the Magic, with the URPRO going deeper, around 6-9 meters.

The idea is that you can use the filter without lights for landscape shots, and when you bring the lights into the mix you can almost shoot in auto white balance, or set the white balance to the depth where the two were matched. I wanted to try this theory in real life, so I spent three different days of diving testing the combinations I had identified; the results are in this video.

The theory of matching filters worked, and the filters all performed more or less as expected. I did have some additional challenges that I had not foreseen.

Filter Performance

The specific performance of a filter is dependent on the camera's colour science. I have had great results with the URPRO combined with Sony cameras, but with Panasonic I always had an orange cast in the clips.

Even this time the same issue was confirmed, with the URPRO producing this annoying cast that is hard, if not impossible, to remove even in post.

The Magic filter and the Spectrum filter performed very closely, with the Magic giving a more saturated, baked-in image and the Keldan maintaining higher tone accuracy. This is a result of the design of the filters: the Magic filter has been designed to take outstanding, better-than-life pictures, while the Spectrum filter has been designed with measurement tools to give accurate colour rendition. What this means is that Magic images look good even on the LCD, while Keldan images are a bit dim but can be helped in post.

Looking at the clip, in the first three and a half minutes you can't tell the Magic and the Spectrum apart down to 9 meters, while the URPRO gives a consistent orange cast.

Going a bit deeper, I realised you also need to handle the scenario where you swim close to a reef and want to bring some lights into the frame because you are outside the best working range of the filter. In order to avoid an excessive gap when approaching the reef, I had stored white balance readings at 6, 9, 12 and 15 meters. When I had a scene with mixed light, instead of balancing for, say, 15 meters and then having an issue with the lights, I used the 9-meter setting: the image is dim when you are far away and gets colourful as you approach, which is somewhat expected in underwater video.

The sections at 15 meters are particularly interesting.

You can see that the URPRO gets better with depth, but also that at 5:46 you see a fairly dim reef, at 5:52 I switch on the lights, and the difference is apparent.

At 6:20 the approach with the Keldan was directly with the lights on; the footage still gives an idea of depth, yet the colours are there and the background water looks really blue, as I had the white balance set for 9 meters.

Key Takeaways

All filters produced acceptable results; however, I would not recommend the URPRO for the Panasonic GH5 and would settle for the Magic filter or the Spectrum filter. Today the Spectrum is the only wet filter for the Nauticam WWL-1, but I am waiting for some prototypes from Peter Rowlands for the Magic. I would recommend both the Magic and the Spectrum, and the choice really depends on preference. If you want a ready look with the least retouching, the Magic filter is definitely the way to go, as it produces excellent ready-to-use clips that look good straight away on the LCD.

The Keldan Spectrum filter has a more desaturated look and requires more work in post but has the benefit of a more accurate image.

I think this experiment has proved the method works, and I will use it again in the future. The same approach is potentially available with the Keldan or other ambient light filters, using a tone that closely matches the lens filter.

 


Choosing the Appropriate Frame Rate for Your Underwater Video Project

I think the subject of frame rates for underwater video is filled with a level of nonsense second to none. Part of this is GoPro-generated: the GoPro, being an action cam, started proposing higher frame rates as standard, and this triggered a chain reaction where every camera manufacturer in the video space has added double frame rate options to the in-camera codec.

This post, which no doubt will be controversial, will try to demystify the settings and eliminate some fundamental misconceptions that seem to populate underwater videography.

The history of frame rates

The most common frame rates used today include:

  • 24p – used in the film industry
  • 25p – used in the PAL broadcasting system countries
  • 30p – used in the NTSC broadcasting system countries

PAL (Phase Alternating Line) and NTSC (National Television System Committee) are broadcasting colour systems.

NTSC covers the US, part of South America and a number of Asian countries, while PAL covers pretty much the rest of the world. This post does not want to go into the details of which system is better, as those systems are a legacy of interlaced television and cathode ray tubes and are, for most of us, something we have to put up with.

Today most of the video produced is consumed online, and therefore broadcasting standards are only important if you produce something that will go on TV, or if your footage includes artificial lighting connected to the power grid – LED lights do not matter here.

So if movies are shot in 24p, and this is not changing any time soon, why do those systems exist? Clearly if 24p were not adequate it would have been changed a long time ago; except for some experiments like 'The Hobbit', 24p is totally fine for today's use even if it is a legacy of the past.

The human eye has a reaction time of around 25 ms and is therefore not actually able to track an object moving in the frame at rates higher than about 40 frames per second; it will, however, notice when the whole scene moves around you, as in a shoot-out video game. Our brain does a brilliant job of filling in what is missing and can't really tell any difference between 24/25/30p in normal circumstances. So why do those frame rates exist?

The issue has to do with the frequency of the power grid and the first TVs, which were based on cathode ray tubes. As the US grid runs on alternating current at 60 Hz, a movie shot at 24p judders when watched on such a TV. The reason is that the system works at 60 cycles per second, and to fit 24 frames per second into it a technique called telecine is used. In short, artificial fields are added to each group of four frames so that the total comes to 60 fields per second; however, this looks poor and creates judder.

In the PAL system the grid runs at 50 Hz, so 24p movies are simply accelerated to 25p; this is the reason running times are shorter. The increased pitch in the audio is not noticeable.
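To make the two mechanisms concrete, here is a small sketch; the 2:3 field cadence is the standard telecine pattern, and the two-hour runtime is just an illustrative number.

```python
# 2:3 pulldown: each group of 4 film frames becomes 10 interlaced
# fields (2+3+2+3), turning 24 frames/s into 60 fields/s.
frames, cadence = ["A", "B", "C", "D"], [2, 3, 2, 3]
fields = [f for frame, n in zip(frames, cadence) for f in [frame] * n]
print(fields)                  # ['A','A','B','B','B','C','C','D','D','D']
print(len(fields) * 24 / 4)    # 60.0 fields per second

# PAL instead just speeds the film up from 24 to 25 fps:
runtime = 120                  # a hypothetical two-hour movie
print(runtime * 24 / 25)       # 115.2 minutes on PAL
```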

Clearly, when you shoot in a television studio with a lot of grid-powered lights you need to make sure you don't get any flicker, and this is the reason for the existence of the 25p and 30p video frame rates. Your brain can't tell the difference between 24p/25p/30p, but it can very easily notice judder, and this has to be avoided at all costs.

When using a computer display or a modern LCD or LED TV you can display any frame rate you want without issues; therefore, unless you are shooting under grid-powered artificial lights, you do not have to stick to any broadcasting system.

180-Degree Shutter Angle Rule

The name also comes from a legacy, but the rule establishes that once you have set the frame rate your shutter speed has to be double that value. As there is no 1/48 shutter, 24/25p is shot at 1/50s and 30p at 1/60s; this also keeps everything consistent with the possible flicker of grid-powered lights.

The 180-degree angle rule gives each frame an amount of motion blur similar to that experienced by our eyes.

It is well explained on the RED website. If you shoot slower than the rule suggests the frames look blurry; if you choose a faster shutter speed you eliminate motion blur. In general everybody follows this rule and it works perfectly fine.
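The rule reduces to a one-line formula; the sketch below also shows the faster 90- and 45-degree angles that come up later when discussing slow motion.

```python
def shutter_seconds(fps, shutter_angle=180):
    """Shutter speed for a given frame rate and shutter angle:
    180 degrees exposes each frame for half the frame interval."""
    return (shutter_angle / 360.0) / fps

for fps in (24, 25, 30, 50, 60):
    print(f"{fps}p at 180 deg -> 1/{1 / shutter_seconds(fps):.0f}s")
# 24p -> 1/48s (1/50s in practice), 25p -> 1/50s, 30p -> 1/60s ...

print(f"25p at 90 deg -> 1/{1 / shutter_seconds(25, 90):.0f}s")   # 1/100s
```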

Double Frame Rates

50p for PAL and 60p for NTSC are double frame rates that are not part of any commercial broadcasting and today are only supported officially for online content.

As discussed previously, our eyes cannot detect more than about 40 frames per second anyway, so why bother shooting 50 or 60?

There is a common misconception that if you have a lot of action in the frame you should increase the frame rate. But then why, when you are watching a movie, even Iron Man or some sci-fi feature, do you not feel there is any issue?

That is because those features are shot well, with a lot of equipment that makes the footage rock steady; the professionals follow all the rules, and the result looks great.

So the key reason to use 50p or 60p has to do with not following those rules, with shooting things in a somewhat unconventional manner.

For example, you hold the camera while you are moving (a dashboard cam, say), or you hold it while running. In these cases the amount of change in the frame is substantial because you are moving, not because things around you are moving. From a fixed point it would not feel like there is a lot of movement, but once you start driving your car around there is a lot of movement in the frame.

This brings us to the second issue with frame rates, which is panning; again I will refer to RED for the panning speed explanation.

So if you increase the frame rate from 30 to 60 fps you can double your panning speed without feeling sick.

Underwater Video Considerations

Now that we have covered all basics we need to take into account the reality of underwater videography. Our key facts are:

  • No panning. With few exceptions the operator is moving with the aid of fins. Panning would require you to stay in a fixed spot, something you can only do, for example, on a shark dive in the Bahamas
  • No grid-powered lights – at least for underwater scenes. So unless you include shots with mains-powered lights you do not have to stick to a set frame rate
  • Lack of light and colour – you need all the available light you can get
  • Natural stabilisation – as you are in a water medium, a rig of reasonable size floats in the fluid and is more stable

The last variable is the amount of action in the scene and the need, if any, for slow motion. The majority of underwater scenes are pretty smooth; only in some cases, sardine runs or sea lions in a bait ball, is there really a lot of motion, and even then you can usually increase the shutter speed without doubling the frame rate.

Video shot at 50/60p and played back at half speed for the entire clip is really terrible: you lose the feeling of being in the water, so this is something to be avoided at all costs; it looks plain ugly.

Furthermore, you are effectively halving the bit rate per frame of your video; on top of that, the higher frame rate mode of your camera is usually no better than the normal one, and you can interpolate extra frames in post if you want a more fluid look or a slow motion.

I have a Panasonic GH5 and have the luxury of normal frame rates, double frame rates and even a VFR option specifically for slow motions.

I analysed the clips produced by the camera using ffprobe to see how the frames are structured and how big they are (a minimal sketch of the ffprobe check follows the list), and discovered a few things:

  1. The 50/60p recording options at 150 Mbps have a very long GOP: essentially a full frame is recorded every 24 frames, while the 100 Mbps 25/30p options record a full frame every 12 frames. So the double frame rate has more frames but is NOT better at managing fast-moving scenes and changes in the frame.
  2. The VFR option allows you to set a higher capture frame rate and then conforms the recording to the playback frame rate of your choice. For some reason the 24p format has more options than all the others, and 25p does not even have a 50% option. As the footage is recorded at 100 Mbps, VFR footage at half speed conformed to 30p is higher quality than 60p slowed down to 30p (100 Mbps vs 150/2 = 75 Mbps); in terms of key frames and ability to predict motion it is also better, as it has double the number of key frames per second. See this explanation with details of each frame and look for the I-frames.
  3. The AVCI all-intra option has only I-frames, 24/25/30 of them per second, and is therefore the best option for capturing fast movement and changes in the frame. If you slow it down to half speed you still have 12 key frames per second, so the remaining frames can easily be interpolated.
  4. Slow motion: as each image stays on screen for longer when slowed down, you need to increase the shutter speed or it will look blurry. So if you intend to take a slow-mo you need to make that decision at the time of the shot and go for a 90- or 45-degree shutter angle. This remains true whether you use VFR or slow down AVCI clips in post.
  5. If you decide AVCI is not for you, the ProRes choice is pretty much identical, and again you do not need to shoot 50/60p except in specific situations. In general AVCI is equal to or better than ProRes, so the whole point of getting a recorder is highly questionable, but that is another story.
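This is roughly how I inspect the GOP structure: a minimal sketch assuming ffprobe is installed and on the PATH, with hypothetical clip names.

```python
import subprocess

def gop_structure(path):
    """List frame types with ffprobe and report how often a full
    (I) frame occurs, i.e. the GOP length of the recording."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pict_type", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    i_frames = out.count("I") or 1
    print(f"{path}: {len(out)} frames, {out.count('I')} I-frames, "
          f"one full frame every {len(out) // i_frames} frames")

gop_structure("GH5_60p_150Mbps.MOV")   # hypothetical file names
gop_structure("GH5_25p_AVCI.MOV")
```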

For academic purposes I have compared the three different ways Final Cut Pro X slows footage down. To my surprise the best method is 'Normal Quality', which also makes sense, as there are many full frames.

Now it is interesting to look at my slow motion: it is not ideal, as I did not increase the shutter speed, but because the quality of AVCI is high the footage looks totally fine slowed down.

Various slow motion techniques in FCPX with 1/50s shutter

Looking at other people's examples you get exactly the wrong impression: they take a shot without increasing the shutter speed and then slow it down. The reason 60p looks better there is the shutter speed, not the image quality itself; it is also completely unnecessary to slow down a whale shark as it glides through the water.

The kind of guidance you get

So taking this kind of guidance blindly is not a good idea.

Key Takeaways

  • Unless you shoot under mains-powered lights you can choose any frame rate you want among 24/25/30 fps
  • Shutter speed is important because it controls motion blur, or freezes motion in the case of a slow motion clip
  • You need to choose which scenes are suitable for slow motion at the time of capture
  • Systematically slowing down your footage is unnatural and looks fake
  • Formats like AVCI or ProRes give you better options for slowing down than the 50/60 fps implementations with their very long GOP
  • VFR options can be very useful for creative purposes although they have limitations (fixed focus)

How do I shoot?

I live in a PAL country; however, I always find limitations with the 25 fps options in camera, the GH5 VFR example being only one of them. All my clips are shot at 24 fps 1/50s. I do not use slow motion much, and if I did I would probably keep using AVCI and increase the shutter speed depending on the effect I want to give the scene; this is also the most natural and easiest way to shoot underwater, as you do not have to continuously change format. Having all intra frames gives me all the creativity I need, also for speed ramps, which are much more exciting than plain slow motion; see this example.

Fisheye Zoom for Micro Four Thirds

Looking at the Nauticam port chart, the only option for a fisheye zoom is to combine the Panasonic PZ 14-42 with a fisheye add-on lens, a solution that is not very popular due to its low optical quality.

So micro four thirds users have been left with a prime fisheye lens from Panasonic or Olympus…until now!

Looking at the Nauticam port chart we can see there is an option to use the Metabones Speed Booster adapter: this converts your MFT camera to a 1.42x crop, allowing you to use Canon EF mount lenses designed for cropped sensors, including the Tokina 10-17mm fisheye. This can be combined with a Kenko 1.4x teleconverter, giving you a fisheye zoom range of 14.2 to 33.8 mm in full frame equivalent, or 7.1 to 16.9 mm in MFT terms, of which the usable range is 8-16.9 mm after removing vignetting. The arithmetic is sketched below.
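As a quick check of those numbers, assuming the usual 0.71x Speed Booster factor, the 1.4x teleconverter and the 2x MFT crop:

```python
SPEED_BOOSTER = 0.71   # Metabones Speed Booster magnification
TELECONVERTER = 1.4    # Kenko 1.4x
MFT_TO_FF = 2.0        # Micro Four Thirds crop factor

# Tokina 10-17mm: wide end without the TC, long end with it.
wide = 10 * SPEED_BOOSTER                  # 7.1 mm
long = 17 * SPEED_BOOSTER * TELECONVERTER  # 16.9 mm
print(f"MFT terms: {wide:.1f}-{long:.1f} mm")
print(f"Full frame eq.: {wide * MFT_TO_FF:.1f}-{long * MFT_TO_FF:.1f} mm")
print(f"Effective crop: {MFT_TO_FF * SPEED_BOOSTER:.2f}x")   # 1.42x
```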

A further issue is that the Speed Booster gives you another stop of light, limiting the effective aperture range to f/16. While this is generally a bonus for land shooting in low light, underwater we want to use all apertures, all the way to f/22 for sunbursts, even if this means diffraction problems.

Wolfgang Shreibmayer started a trend some time ago on Wetpixel https://wetpixel.com/forums/index.php?/topic/61629-canon-ef-lenses-on-mft-cameras/ of using full frame lenses, and in this post I want to do a deep dive on what is for me the most interesting option: the Canon 8-15mm fisheye.

On full frame this lens can be used as a circular or diagonal fisheye, and Wolfgang has devised a method to use it as an 8-15mm fisheye zoom on MFT.

Parts list – missing the zoom gear

What you need are the following:

  • Canon EF 8-15mm f/4L fisheye USM
  • Metabones Smart Adapter MB_EF_m43_BT2 or Viltrox EF-M1 Adapter
  • A 3D printed gear extension ring
  • Nauticam C-815Z zoom gear
  • Nauticam 36064 N85 to N120 34.7mm port adapter with knob
  • Nauticam 21135 35mm extension ring with lock
  • Nauticam 18810 N120 140mm optical glass fisheye port

The assembly is quite complicated, as the lens won't fit through the N85 port opening. It starts with inserting the camera, with no lens, into the housing.

GH5 body only assembly
Camera in housing without port

The next step is to fit the port adapter

Attach N85 N120 Metabones adapter

Then we need to prepare the lens with the smart adapter, once the tripod mount has been removed.

Canon 8-15 on Metabones Smart Adapter IV

As the port is designed for the Speed Booster, the lens sits a few mm off and the gear will not grip. Wolfgang has devised a simple adapter to make it work.

gear extension ring
Zoom gear on lens

This shifts the gear backwards, allowing it to engage the knob.

3D design is here

Lens inserted on housing

Looking at the Nauticam port chart, a 30mm extension ring is recommended for the Speed Booster set-up, and as we now have an extra 5mm of length Wolfgang uses a 35mm extension. However, looking at the lens entrance pupil I have concluded that 30mm would actually be better positioned. Nauticam have confirmed there won't be performance differences. You need to secure the ring on the dome before final assembly.

Fisheye dome and extension
Full assembly top view
Side front view

The rig looks bigger than the 4.33" dome but is quite proportionate to the size of the GH5 housing. It will look bigger on a traditional small, non-clamshell style housing.

Disassembly is again done in three steps.

Disassembly

I am not particularly interested in the 1.4x teleconverter version: consider that once zoomed in to 15mm the lens is horizontally narrower than a 12mm native lens, so there is no real requirement for the teleconverter.

This table gives you an idea of the working range compared to a rectilinear lens along the horizontal axis, as the diagonal is not a fair comparison. The lens is very effective at 8-10mm, where any rectilinear lens would do badly, and then overlaps with an 8-18mm lens. The choice between them is dictated by whether you need straight lines. The range from 13mm is particularly useful for sharks and fish that do not come that close.

Focal length (mm)   Horizontal (°)   Vertical (°)   Diagonal (°)   Horizontal linear eq. (mm)
8                   130.9            95.9           170.2          –
9                   114.9            84.7           147.8          –
10                  102.5            75.9           131.0          6.9
11                  92.6             68.7           117.8          8.3
12                  84.5             62.9           107.2          9.5
13                  77.7             57.9           98.4           10.8
14                  72.0             53.7           90.9           11.9
15                  67.0             50.1           84.6           13.0

Angles computed for the 17.3 × 13 mm Micro Four Thirds sensor (21.64 mm diagonal).
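The table can be reproduced with a little geometry. The sketch below assumes the equisolid-angle fisheye projection r = 2f·sin(θ/2), which matches the figures above on the 17.3 × 13 mm MFT sensor.

```python
import math

W, H = 17.3, 13.0       # MFT sensor size in mm
D = math.hypot(W, H)    # ~21.64 mm diagonal

def fisheye_fov(f, size):
    """Full field of view across a sensor dimension for an
    equisolid fisheye: r = 2 * f * sin(theta / 2)."""
    return math.degrees(4 * math.asin(size / (4 * f)))

def rectilinear_equivalent(f):
    """Rectilinear focal length with the same horizontal FOV."""
    half = math.radians(fisheye_fov(f, W) / 2)
    return (W / 2) / math.tan(half)

for f in range(8, 16):
    print(f"{f} mm: H {fisheye_fov(f, W):5.1f}  V {fisheye_fov(f, H):5.1f}  "
          f"D {fisheye_fov(f, D):5.1f}  rect. eq. {rectilinear_equivalent(f):4.1f} mm")
```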

Wolfgang has provided me with some shots that illustrate how versatile this set-up is.

8mm end surface shot
Caves 8mm
15mm end close up
Dolphins at 15mm
Diver close up at 8mm
Snell windows 8mm
Robust ghost pipefish @15mm

As you can see you can even shoot a robust ghost pipefish!

The contrast of the glass dome is great and the optical quality is excellent. On my GH5 body there is some uncorrected chromatic aberration that you can remove in one click. Furthermore, lens profiles are available to de-fish images and make them rectilinear should you want to do so.

I would like to thank Wolfgang for being available for questions and for providing the 3D print and the images featured in this post.

If you can’t print 3D and need an adapter ring I can sell you one for £7 plus shipping contact me for arrangements.

Amazon links UK

Canon EF 8-15 mm f/4 fisheye USM lens

Viltrox EF-M1 Mount Adapter

Note: it is possible to use a Metabones Speed Booster Ultra in combination with a Tokina 10-17mm zoom fisheye and a smaller 4.33″ acrylic dome.

UK cost of the Canon option: £3,076

UK cost of the Tokina option: £2,111

However, if you add the glass dome back:

UK cost of the Tokina option with glass dome: £2,615

The gap is £461, and if you go for a Viltrox adapter (which I would not recommend for the Speed Booster) the difference on a comparable basis is £176, which for me does not make sense as the Canon optics are far superior.

So I would say either the Tokina in acrylic for the cost-conscious, or the Canon in glass for those looking for the ultimate optical quality.