Category Archives: UNDERWATER VIDEO

Producing and grading HDR content with the Panasonic GH5 in Final Cut Pro X

It has been almost two years since my first posts on HLG capture with the GH5 https://interceptor121.com/2018/06/15/setting-up-your-gh5-for-hlg-hdr-capture/ and last week Apple released Catalina 10.15.4, which now supports HDR-10 on compatible devices. Apple, and computers in general, still do not support HLG, and it is unlikely this will ever happen, as the gaming industry is following the VESA DisplayHDR standard, which is aligned to HDR-10.

After some initial experiments with the GH5 and HLG HDR, things went quiet, for two reasons:

  1. There are no affordable monitors that support HLG
  2. There has been a lack of software support

While on the surface it looks like there is still no solution to those issues, in this post I will explain how to grade HLG footage in Final Cut Pro should you wish to do so. The situation is not that different on Windows with DaVinci Resolve, which also only supports HDR-10 monitors, but I leave it to Resolve users to figure that out. This tutorial is about Final Cut Pro.

A word about Vlog

It is possible to use V-Log to create HDR content; however, V-Log is recorded as Rec709 10-bit. The Panasonic LUT, and any other LUT I am aware of, only maps the V-Log gamma curve to Rec709, so your luminance and colours will be off. What we would need is a V-Log to PQ LUT, but I am not aware that one exists. Surely Panasonic could create it, but the LUT that comes with the camera is only for processing in Rec709. So, for our purposes, we will ignore V-Log for HDR until such time as we have a fully working LUT and clarity about the process.

Why it is a bad idea to grade directly in HLG

There is a belief that HLG is a delivery format and is not edit-ready. Whether or not that is true, the primary issue with HLG is that no consumer screens support the BT.2020 colour space and the HLG gamma curve. Most displays are plain sRGB; others support DCI-P3, or its computer variant Display P3, partially or fully. Although the white point is the same for all those colour spaces, each has a different definition of what red, green and blue are, so if you change a hue without taking this into account the result will not be as expected. You may still white balance or match colours in HLG, but you should not attempt anything more.
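To see how different the primaries really are, here is a minimal sketch using the colour-science Python package (an assumption on my side, nothing to do with Final Cut Pro): the same code value maps to visibly different output once you move between colour spaces.

```python
import colour  # the colour-science package, assumed installed

rgb = [0.2, 0.5, 0.8]  # an arbitrary code value
src = colour.RGB_COLOURSPACES["ITU-R BT.2020"]
dst = colour.RGB_COLOURSPACES["sRGB"]

# The values an sRGB screen would need in order to show the colour
# that BT.2020 encodes with the triplet above.
print(colour.RGB_to_RGB(rgb, src, dst))
```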

What do you need for grading HDR?

In order to successfully and correctly grade HDR footage on your computer you need the following:

  • HDR HLG footage
  • Editing software compatible with HDR-10 (Final Cut or DaVinci)
  • An HDR-10 compatible 10-bit monitor

If you want to produce and edit HDR content you must have a compatible monitor, so let's see how to identify one.

Finding an HDR-10 Monitor

HDR is highly unregulated when it comes to monitors. TVs have the Ultra HD Premium Alliance, and recently VESA has introduced the DisplayHDR standards https://displayhdr.org/ that are dedicated to display devices. So far, DisplayHDR certification has been the prerogative of gaming monitors, which have quick response times and high contrast but not necessarily high colour accuracy. We can use the certified list of monitors to find a consumer-grade device that may be fit for our purpose: https://displayhdr.org/certified-products/

A DisplayHDR 1000 certified monitor is equivalent to a PQ grading device, as it has a peak brightness of 1000 nits and a black level of 0.005 nits. This is ideally what you want, but you can get by with an HDR-400 certified display as long as it supports wide colour gamut. In HDR terms, wide gamut means covering at least 90% of the DCI-P3 colour space, so we can use the VESA list to find a monitor that is HDR-10 compatible and has decent colour accuracy. Even inside the HDR-400 category there are displays that are fit for purpose and reasonably priced. If you prefer a brand more orientated to professional design or imaging, look for the usual suspects (Eizo, BenQ and others), but there it will be harder to find HDR support, as those manufacturers are focussed on colour accuracy: you may find a display covering 95% of DCI-P3 but not necessarily producing high brightness. As long as the device supports HDR-10 you are good to go.

I have a BenQ PD2720U that is HDR-10 certified: it has a maximum brightness of 350 nits and a minimum of 0.35, and it covers 100% of sRGB and Rec709 and 95% of DCI-P3, so it is adequate for the task. It is worth noting that a typical monitor with 350-400 nits brightness offers about 10 stops of dynamic range.
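Where does the 10 stops figure come from? Dynamic range in stops is simply the base-2 logarithm of the contrast ratio, as this quick check shows:

```python
import math

# Dynamic range in stops = log2(peak brightness / black level)
print(math.log2(350 / 0.35))    # ≈ 10 stops for my PD2720U
print(math.log2(1000 / 0.005))  # ≈ 17.6 stops for a DisplayHDR 1000 panel
```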

In summary, if you do not have a professional-grade monitor, either of these approaches will work:

  • Look at the VESA list https://displayhdr.org/certified-products/ and identify a device that supports at least 90% DCI-P3, ideally HDR-1000, but less is OK too
  • Search professional display specifications for HDR-10 compatibility and a 10-bit wide gamut covering more than 90% DCI-P3

 

Final Cut Pro Steps

The easy way to have HDR-ready content with the GH5 is to shoot with the HLG Photo Style. With the AVCI codec this produces clips that, when analysed, have the following characteristics.

MediaInfo Details HLG 400 Mbps clip

'Limited' means the clip is not using the full 10-bit range for brightness; you do not need to worry about that.

With your material ready, create a new library in Final Cut Pro set to Wide Gamut and import your footage.

As we know, Apple does not support HLG, so when you look at the Luma scope you will see a traditional Rec709 IRE diagram. In addition, the tone mapping functionality will not work, so you have no real idea of colour and brightness accuracy.

At this stage you have two options:

  1. Proceed in HLG and avoid grading
  2. Convert your material to PQ so that you can grade it

We will go with option 2, as we want to grade our footage.

Create a project with PQ gamut and enter your display information in the project properties. In my case the display has a minimum brightness of 0.35 nits and a maximum of 350, with P3 primaries and a standard D65 white point. It is important to know those parameters to have a good editing experience, otherwise the colours will be off. If you do not know your display parameters, do some research. My BenQ monitor comes with a calibration certificate, so the information is right there. Apple screens are typically also P3 with a D65 white point, and you can find the maximum brightness in the specs: usually around 500 nits for Apple, with a minimum of 0.5 nits. Do not enter Rec2020 in the monitor information unless your monitor has native primaries in that space (there are almost none). Apple's documentation tells you that if you do not know those values you can leave them blank, and Final Cut Pro will use the display information from ColorSync to try a best match, but this is far from ideal.

Monitor Metadata in the Project Properties

For the purpose of grading we will convert HLG to PQ using the HDR Tools effect. The two variants of HDR manage brightness differently, so a conversion is required; the colour information, however, is consistent between the two.

Please note that the maximum brightness value is typically 1000 nits. Not many displays out there support that level of brightness, but for the purpose of what we are going to do this is irrelevant, so DO NOT change this value. Activate tone mapping, accessible under the View pull-down in the playback window: this adapts the footage to your display according to the parameters of the project without capping the scopes.

Use HDR Tools to convert HLG to PQ
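For the curious, this is roughly the arithmetic such a conversion performs. The sketch below follows the BT.2100 formulas on a single channel (the real conversion works on RGB and applies the system gamma to luminance, so treat this as an illustration, not what HDR Tools literally runs):

```python
import math

def hlg_to_scene_linear(e):
    """Inverse HLG OETF: HLG signal [0,1] -> relative scene light [0,1] (BT.2100)."""
    a, b, c = 0.17883277, 0.28466892, 0.55991073
    return e * e / 3.0 if e <= 0.5 else (math.exp((e - c) / a) + b) / 12.0

def pq_encode(nits):
    """Inverse PQ EOTF: display light in nits -> PQ signal [0,1] (BT.2100)."""
    m1, m2 = 0.1593017578125, 78.84375
    c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

hlg_signal = 0.75                          # an example HLG code value
scene = hlg_to_scene_linear(hlg_signal)
display_nits = 1000.0 * scene ** 1.2       # HLG OOTF for a 1000-nit display
print(round(pq_encode(display_nits), 3))   # the equivalent PQ code value
```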

Finalising your project

When you have finished with your editing you have two options:

  • Stay in PQ and produce an HDR-10 master
  • Delete all the HDR Tools HLG to PQ conversions and change the project back to HLG

If you produce an HDR-10 master you will need to edit twice to get an SDR version: duplicate the project and apply the HDR Tools HLG to SDR conversion, or another LUT of your choice.

If you stay in HLG you will produce a single file, but HDR will likely be displayed on a narrower range of devices due to the lack of HLG support on computers. The HLG clip will still have the correct grade: the corrections performed while the project was in PQ with tone mapping survive the switch back to HLG, as HLG and PQ share the same colour mapping. The important thing is that you were able to see the effect of your grade.

With the project back in HLG you can see how the RGB parade and the scopes return to IRE, but everything is exactly the same as with PQ

In my case I have an HLG TV, so I produce only one file, as I can't be bothered doing the exercise twice.

The steps to produce your master file are identical to any other project. I recommend creating a ProRes 422 HQ master and deriving the other formats from it using HandBrake. If you change your project back to HLG you will get a warning about the master display; you can ignore it.

Focussing Techniques for Video – Part II Auto Focus Settings

If you have some experience with video on land you will know that many professional videographers do not use autofocus but rely on follow focus devices. Basically, those are accessories that control the focus ring of the camera and avoid the shake you would create by turning the focus ring with your hand.

The bad news is that there are no devices to perform follow focus underwater, and if you use a focus knob you will indeed create camera shake. This is the primary reason why I do not use focus knobs on any of my lenses, with the exception of the Olympus 60mm macro, and on those rare occasions when I use it, it is not actually to obtain focus but to ensure I am at the closest working distance.

So how do you achieve good focus if you can’t use a focus ring and continuous autofocus cannot be trusted? There are essentially three methods that I will discuss here and provide some examples:

  1. Set and forget
  2. Set and adjust
  3. Optimised Continuous Autofocus

You will have noticed that there is still an option for continuous autofocus in the list. Before we drill down into the methods I want to give some background on autofocus technology.

If after reading this post you are still confused, I recommend you get some tuition, either by joining my Red Sea trip or 1-to-1 (offered in the Milton Keynes area in the UK).

https://interceptor121.com/2019/07/28/calling-out-to-all-image-makers-1st-interceptor121-liveaboard-red-sea-2020/

Contrast Detect vs Phase Detect and Hybrid Autofocus

The internet is full of autofocus videos showing how well or badly certain cameras perform and how one system is superior to another. The reality is that professional cameramen use follow focus in the majority of cases, because the camera does not know who the subject is.

Though it is true that one focus system may perform better than another, consider that RED cameras use contrast-detection autofocus, the same as your cheap compact camera, so clearly autofocus must not be that important.

The second fact is that any camera focus system needs contrast, including phase detect. Due to the scattering of blue light in water, there are many situations where the contrast in the scene is low, resulting in the camera's autofocus hunting.

So my first recommendation is to ignore the whole discussion about which focus system is superior, because the reality is that there will be situations where focus is difficult to achieve and the technology will not come to help. You need to devise strategies to make things work, and that is what this post is about.

Let's now go through the techniques.

Method 1: Set and Forget

As the name implies, with this method we set focus at the beginning of the shot and never change it again. This means disabling the camera's continuous autofocus in video mode, which is essential for this technique to work.

This works in three situations:

  1. Using a lens at the hyperfocal distance behind a flat port
  2. Using wet wide angle lenses
  3. Using fisheye lenses

Method 1.a Hyperfocal Distance Method

I am not going to write a dissertation on this; there is good content on Wikipedia worth a read: https://en.wikipedia.org/wiki/Hyperfocal_distance

The key concept is that when you focus at the hyperfocal distance for a given lens and aperture, depth of field extends to infinity. The wider the lens, the closer this distance. For example, for a 14mm lens on a Micro Four Thirds body at f/5.6 it is 1.65 meters, so if you focus on an object at this distance anything between about 0.8 meters and infinity will be in focus. As you close the aperture the hyperfocal distance diminishes. This technique is good for medium or reefscape shots where you want the whole frame sharp in focus. It is not suitable for macro or close shots, as the aperture required would be too small and diffraction would kick in.
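If you want to work out the numbers for your own lens, the standard formula is H = f²/(N·c) + f, with c the circle of confusion. A small sketch (note the CoC value is my assumption: around 0.021 mm reproduces the 1.65 m figure above, while the more common Micro Four Thirds value of 0.015 mm gives about 2.3 m):

```python
def hyperfocal_m(focal_mm, f_number, coc_mm=0.021):
    """Hyperfocal distance in meters: H = f^2 / (N * c) + f."""
    h_mm = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    return h_mm / 1000.0

H = hyperfocal_m(14, 5.6)
# Focused at H, everything from H/2 to infinity is acceptably sharp.
print(round(H, 2), "m; sharp from", round(H / 2, 2), "m to infinity")
```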

Looking at the past CWK clips, if continuous autofocus had been disabled and he had focussed just at the start of the scene at 1.85 meters, no refocus would have been required until the manta was at 0.9 meters. Note that distances have to be adjusted to account for the magnification effect of water.

Once you have your lens and aperture setting you can quickly work out some distances in your scene and fine tune your expertise.

Obviously, shooting those shots with a flat port is not exactly the most common method; however, understanding this technique is paramount to the other two.

Methods 1.b and 1.c: Wet Lenses and Fisheyes

Fisheye lenses tend to have an incredible amount of depth of field even wide open, so set and forget applies in full here without even bothering about the hyperfocal distance. Usually, focussing on your feet is all that is required.

The real revelation for this technique are afocal wet lenses. Afocal means that the focal length of the wet lens is infinite: the light coming through neither diverges nor converges. Together with the magnification factor, typically 0.3-0.4x, this means you get a fisheye-like situation without the same amount of distortion.
This is the primary reason to buy a lens like the Nauticam WWL-1, or an Inon wet lens with an afocal design.

My Tiger and Hammerhead videos are shot with the camera locked in manual focus after focussing on my feet.

Even when the shark hits the camera the image is in focus

I do not have technical information on the newer Nauticam WACP-1 or WACP-2, so I am not in a position to confirm whether those lenses are afocal, and therefore I cannot help you there. I would expect the depth of field considerations still to apply. If Nauticam, a shop, or a user lends me a set-up for pool testing, I can provide optimised settings for the WACP.

Set and forget is the number one method for wide angle and reefscapes underwater and it is easy.

Method 2: Set and Adjust

As the name implies, this method sets the focus at the beginning of the shot and then adjusts it when required; this is necessary especially in macro situations.

The set and adjust method varies depending on how the camera manages push-to-focus. If the camera can refocus with a half-press of the shutter, no settings are required other than disabling continuous autofocus.

For cameras that do not have a half-shutter refocus setting, you need to operate in manual focus and set a custom button to perform a single autofocus.

In both cases you need peaking to be active during the shot.

Procedure:

  1. Set the focus as required using the half shutter or the AF-On button
  2. Observe the peaking to ensure the subject is in focus, moving the camera if required
  3. In case of loss of focus, refocus using the shutter or the AF-On button

This method works well with macro, where typically you set focus and then move the camera back and forth to keep it; in those cases where you want to switch focus to another part of the frame, you refocus. This would have helped Brian in the two-crab situation.

As a refocus brings a moment of blur into the clip, you need to ensure that when you trigger it the camera will succeed; this is best achieved using a single focus area.

Method 3: Optimised Continuous Autofocus

Although autofocus has some risks, there are situations where it is required. Those include:

  • Shooting at apertures that do not give sufficient depth of field to warrant set and forget
  • Using dome ports and rectilinear lenses: in my experience those lenses do not work well with hyperfocal distances due to the physics of dome ports

Obviously the best option remains using a wet lens and set and forget; however, there are instances where we absolutely want straight lines, for example when shooting divers or models. In those cases we will use a dome port, and as we can't use a focus gear (the camera would shake) we need autofocus.

Focus Area Settings

Cameras have a selection of modes to set the area that will be used by autofocus:

  1. Face / Animal recognition -> locks on recognised shapes
  2. Multi area -> selects the highest-contrast areas among a grid of smaller areas in the frame; cameras have up to 225 or more areas and you can customise the shape of the grid
  3. Single area -> an area of selectable size and position in the frame
  4. Tracking -> tracks the contour of an object in the frame

Face recognition and animal recognition are not useful in our case.

Tracking requires the object to keep its shape within the frame. This is useful for nudibranchs, for example, or anything else that does not change shape; a fish turning will be lost by this method, so it is seldom used. To be honest, this fails most of the time on land too.

So we really are left with multi area and single area.

My advice is to avoid multi area, because particles in the water, for example, can generate sufficient contrast to fool the camera and make it lock onto them.

So the best option is to use single area. I typically set this to a size smaller than the central third of a nine-block grid. With this configuration it is also possible to focus on an off-centre subject by moving the area within the frame. This setting works well when the subject is tracked by our own movement and stays in the centre, which is the majority of situations.

This video is shot on a 12-60 mid range zoom using single area AF for all scenes including macro.

The single most significant risk with single area is that if the centre of the frame drifts into blue water the camera will go hunting. So if you are shooting in caves or on a wall, make sure the AF area is on one side of the frame, or occasionally lock focus, to prevent the camera from seeking focus that won't be found.

Conclusion

Achieving focus in underwater video requires different techniques from those used on land, and a good understanding of ports and optics.

If you think you are not skilled enough and need help from autofocus, my advice is to get an afocal wet wide-angle lens. This will transform your shooting experience and guarantee that all your wide-angle shots are in focus. If you work in macro situations, you need to master the single-area AF setting of your camera and make sure you are super stable.

The most difficult scenario is using dome ports, and this is one of the reasons I do not recommend them for video. If you are adamant about rectilinear lenses, then apply the specific settings described above.

Donations are appreciated; use the PayPal button on the left.

Focussing Techniques for Video – Part I Problem Diagnostic

Thanks to Brian Lim and WK's Gone Diving for providing some examples.

When I started thinking about writing this post I thought of presenting a whole piece on the theory of focus and how a camera achieves it; I later decided it made more sense to start from examples and then drill down into the theory based on specific cases.

So we will look at three common issues, understand why they happened and then discuss possible mitigations.

Issue 1: Wide angle Manta Focus Hunt

This clip was provided by WK's and was taken during a trip to Socorro.

The water is quite dark and murky and there is a substantial amount of suspended particles (otherwise we would not have mantas). The water is also fairly milky, so the image lacks contrast, which is not ideal for focusing: all cameras, including those working on phase-detection AF, need contrast.

WK's had a flat port and was shooting a fairly narrow aperture of f/7.1, which should ensure plenty of depth of field on his 14mm lens.

In this clip you can literally see the autofocus pulsating, trying to find focus; the hunting carries on until the manta is very close, at around 15 seconds into the clip. At that point the clip is stable, but the overall approach has been ruined.

Diagnostics

The key observations are that the subject was not in focus at the very beginning of the shot, and that you can distinctly see some fairly bright particles come into the scene (at 0:04 for example) and disturb the focusing process: they create a strong contrast against the black manta, the camera can't decide which is the subject, and it starts hunting. When the manta is close and well defined in the frame, the camera knows she is the subject and the focus issues stop. The white particles, while the manta is far away, are large and bright enough to be picked up by the matrix points of the camera's AF; this is true regardless of the manta being in the frame, and the same would have happened if another fish had photobombed the scene.

Solution

The problem in this clip is not new to video shooters: something similar happens when the bride is walking to the altar and someone, the priest or the groom, steps into the frame far from her. On land you would keep control using manual focus, or, if you were really daring, you would use tracking. In our case WK's does not have a focus gear, so it is not possible for him to change focus manually.

WK's could have used tracking, if available on the camera. With tracking you need to ensure that the camera can lock onto the manta, and then, if it does, that the manta does not turn or change shape and nothing bigger comes in front. Under those conditions everything would work. This is a high-risk technique, only worth trying in clear water with no particles, so in this scenario it is not advised.

The last option, and the solution to this issue, was for WK's to switch to manual focus and engage peaking: use a single AF-On to focus on his feet or an intermediate target, then check the manta was in focus. If focus was lost, WK's could have triggered AF again, at least controlling how many times the camera refocussed.

Issue 2: Macro Subject Switching

This other clip has been provided by Brian Lim and it is a macro situation.

We can see that there are particles flying in the water and some other small critters at close range. The main subjects are the large crab and the two small crabs in the foreground.

Brian is not happy about the focus on this shot as not everything is sharp.

Diagnostics

Despite the murky water, Brian has correctly locked focus on the crabs in the foreground, but due to the high level of magnification the camera does not have sufficient depth of field to render both the small and the large crab crisp in the frame. It is possible that Brian could not detect on his screen that the crab behind was not sharp, which peaking would have avoided. In any case, it is likely there was no possibility of having this shot sharp end to end. Brian is super stable in the shot, so he was set up to make it work.

Solution

Brian does not have a focus gear on this camera; one would have been required to pull focus within the same shot from the small crabs to the larger one.

However, even in this situation, in manual focus Brian could have shot two clips focussing on the two different focal planes and then managed this in post. It is critical to be able to review focus on screen when we shoot, or right after, before we leave the scene.

Issue 3: Too many fish and too much water

The last clip is mine and was taken during a recent trip to Sataya reef.

I have deliberately left this clip uncut because it lets you see that you can use autofocus in water behind a dome port, and for the most part it works; but there are some pitfalls, so the most photogenic dolphins, at 00:50, are initially blurred.

Diagnostics

I was not expecting the sheer number of dolphins on the day, and certainly not this close, so I had a standard zoom lens at 24mm full-frame equivalent behind a dome port. In most cases I managed to keep some fish in the AF area of the camera, but at 00:45 and 00:58 the camera has nothing in the middle of the frame and goes on a hunt.

Solution

Working with a dome port and a lens of that nature does not guarantee enough depth of field to leave the camera locked, even at f/8, so some refocussing activity was indeed required. In this case I was using a single AF area in the centre, and in those moments the camera sees just blue, has nothing to focus on, and goes on a hunt; as soon as the subject is back in the AF area the camera locks on again. Note that the AF speed is not fast enough to follow dolphins that come too close, so the only real solution there was a wider lens. However, I could have avoided the hunt if I had set the camera to AF lock and intercepted the moments the AF area was empty, preventing the camera from re-engaging.

Summary

In all the examples in this post the issues were generated by a lack of intervention. All the situations I have analysed could, for the most part, have been dealt with at the time of the shot and did not require extra gear. I believe that when we are in the water there is already a lot to think about, and therefore we make mistakes, or fail to apply the decisive corrective action that would have saved the shot.

In the next post I will drill down into focus settings and how they can help your underwater shots, and also discuss how those apply to macro, wide and mid shots. I am also happy to look at specific examples or issues, so please get in touch. Specific coaching or troubleshooting is provided in exchange for a drink or two.

Donations are appreciated; use the PayPal button on the left.

Announcing New 2020 Offering

Dear readers, in 2020 I will be adding some services to the blog to reflect some requirements that have been developing over the last few years.

It happens at times that people get in touch, either through comments or directly by email, to ask about their current challenges, so I thought: why not address this with a bespoke service? Here are my current ideas:

  • Equipment selection – this is generally to do with ports, lenses, strobes, lights and accessories more than with the camera and housing
  • Photo editing clinic – people seem to struggle with the editing of their images. While some are definitely skilled, the majority aren't, and editing an image is almost as important as shooting a good one
  • Video editing clinic – as above but for video, which is sometimes even more complex

Those will be offered at the symbolic price of a few beers at UK prices: a £10 donation using the link on the left-hand side.

Other topics that are also becoming interesting are discussions around issues like focus, framing and lens quality. For those I welcome input material by email at interceptor121@aol.com: send me your images or videos with problems and I will use them to build an article for your benefit and others'.

I am currently working on a feature on focus in video, so I am looking for your blurred videos (sorry): as I don't have many myself, I need some help from you guys.

Thank you for reading this short post!

Export Workflows for underwater (and not) video

This post is going to focus on exporting our videos for consumption on a web platform like YouTube or Vimeo.

This is a typical workflow for video production

For this post we want to focus on the export-to-publish steps, as things are not as straightforward as they may seem.

In general, each platform has specific requirements for uploads and predefined encoding settings to create their version of the upload; this means it is advisable to feed those platforms files that match their expectations.

The easiest way to do this is to separate the production of the master from the encodes that are needed for the various platforms.

For example, in Final Cut this means exporting a master file in ProRes 422 HQ, in my case with GH5 10-bit material. Each camera differs, and if your source material is higher or lower quality you need to adjust; in any case the master will be a significantly large file with mild compression, based on an intermediate codec.

So how do we produce the various encodes?

Some programs like Final Cut Pro have specific add-ons, in this case Compressor, to tune the export; however, I have had such poor experience with Compressor and underwater video that I do not use it and do not recommend it. Furthermore, we can separate the task of encoding from production if we insert platform-independent software into the workflow.

Today encoding happens primarily in the H264 and H265 formats through a number of encoders, the most popular being x264 and x265, which are free. There are commercial rights issues with using HEVC (x265 output) for streaming, so a platform like YouTube uses the free VP9 codec while Vimeo uses HEVC. This does not matter to us.

So, to upload to YouTube for example, we have several options:

  1. Upload the ProRes file
  2. Upload a compressed file that we optimised based on our requirements
  3. Upload a compressed file optimised for YouTube requirements

While option 1 is technically possible, we are talking about 200+ GB per hour, which means endless upload times.

Option 2 may lead to unexpected results, as you are not sure how YouTube's output quality will match your file, so my recommendation is to follow option 3 and give the platform what it wants.

YouTube Recommended Settings are on this link

YouTube recommends the following H264 settings for SDR (standard dynamic range) uploads:

  • Progressive scan (no interlacing)
  • High Profile
  • 2 consecutive B frames
  • Closed GOP. GOP of half the frame rate.
  • CABAC
  • Variable bitrate. No bitrate limit required, although we offer recommended bitrates below for reference
  • Chroma subsampling: 4:2:0

There is no upper bitrate limit, so of course you can make significantly large files; however, with H264 there is a point beyond which you can't see any visible difference.
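As an illustration, this is how those recommendations could map onto ffmpeg/x264 flags for a 25p UHD master (a sketch only: I actually use HandBrake presets for this, and the file names and frame rate here are assumptions):

```python
import subprocess

# Two-pass VBR encode following YouTube's SDR recommendations (25 fps UHD assumed).
common = [
    "ffmpeg", "-y", "-i", "master.mov",     # placeholder input name
    "-c:v", "libx264",
    "-profile:v", "high",                   # High Profile (CABAC is its default)
    "-bf", "2",                             # 2 consecutive B-frames
    "-g", "12", "-flags", "+cgop",          # closed GOP of half the frame rate
    "-b:v", "45M",                          # top of the recommended 4K bitrate range
    "-pix_fmt", "yuv420p",                  # 4:2:0 chroma subsampling
]
# Pass 1 writes only the stats file; /dev/null is Unix (use NUL on Windows).
subprocess.run(common + ["-pass", "1", "-an", "-f", "mp4", "/dev/null"], check=True)
subprocess.run(common + ["-pass", "2", "-c:a", "aac", "-b:a", "384k", "upload.mp4"], check=True)
```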

Recommended video bitrates for SDR uploads

To view new 4K uploads in 4K, use a browser or device that supports VP9.

Type         Video Bitrate, Standard Frame Rate (24, 25, 30)   Video Bitrate, High Frame Rate (48, 50, 60)
2160p (4K)   35–45 Mbps                                        53–68 Mbps
1440p (2K)   16 Mbps                                           24 Mbps
1080p        8 Mbps                                            12 Mbps
720p         5 Mbps                                            7.5 Mbps
480p         2.5 Mbps                                          4 Mbps
360p         1 Mbps                                            1.5 Mbps

YouTube Bitrate table

YouTube's recommended settings are actually quite generous, and with a high-quality encode we could easily create a smaller file; however, we do not know what logic YouTube applies to its compression if we deviate, so to be safe we will follow the recommendations.

It is very important to understand that bitrate controls the compression together with other factors; to get a good file we also need good logic in the analysis of the file itself, as this greatly influences the quality of the compression process.

There is a whole book on x264 settings if you fancy a read here.

For my purposes I use HandBrake, and to make YouTube happy I use variable bit rate with two-pass encoding and a target bitrate of 45 Mbps. Together with that, I have a preset that takes into account what YouTube does not like and then does a pretty solid analysis of motion, as H264 is motion-interpolated. This is required to avoid artefacts.

Note the long string of x264 coding commands

I have tested this extensively against the built-in Final Cut Pro X YouTube export.

Starting from the timeline and exporting directly to YouTube resulted in an 88 MB file from a comparable 7.06 GB ProRes 422 HQ master for the project. Following the guidelines and the HandBrake process I ended up with 110.1 MB, which is about a 25% increase.

I have also exported to H264 in FCPX; this gave me a 45.8 Mbps file, but when I checked the file YouTube created from it, it was still 12% smaller than the one YouTube created from my manually generated file. I used 4K Video Downloader to retrieve the file sizes.

Same source file, different encodes, different results on YouTube

For HDR files there are higher allowed bitrates and considerations around colour space and colour depth, but it is essentially the same story, and I have developed HandBrake presets for that too.

When I produce an export for my own use I choose H265, usually at a 16 Mbps bitrate, which is where Netflix maxes out. Using constant quality at RF=22 produces files of around 20 Mbps, which is amazing considering the starting point of 400 Mbps for GH5 AVCI files. To give you an idea, YouTube's own files range between 10 and 20 Mbps once compressed in VP9. I cannot see any difference between my 16 Mbps and 20 Mbps files, so I have decided to stay with the same settings as Netflix: if it works for them, it will work for me.

There is also a YouTube video to explain in detail what I just said and some comparative videos here

For all my YouTube and blog subscribers (you need to be both): please fill in the form and I will send you my 3 HandBrake presets.

Edit following some Facebook discussions: the claim is that if you want to upload HD you get better results by making the file 4K. According to my tests this is not true. Using x264 and uploading an HD file produces the same or better results than the HD clip YouTube created from a 4K upload of the same source. I would be wary about what you read on the internet unless you know exactly how the clips were produced: 90% of the issue is poor-quality encoding before the file even gets to YouTube!

Colour Correction in underwater video

This is my last instalment of the getting the right colour series.

The first read is the explanation of recording settings

https://interceptor121.com/2018/08/13/panasonic-gh5-demystifying-movie-recording-settings/

This post has been quite popular as it applies generally to the GH5 not just for underwater work.

The second article is about getting the best colours

https://interceptor121.com/2019/08/03/getting-the-best-colors-in-your-underwater-video-with-the-panasonic-gh5/

And then of course the issue of white balance

https://interceptor121.com/2019/09/24/the-importance-of-underwater-white-balance-with-the-panasonic-gh5/

I am not getting into ambient light filters, but there are articles on those too.

Now I want to discuss editing, as I see many posts online that are plainly incorrect. As is true for photos, you don't edit just by looking at a histogram. The histogram is a representation of the average of the image, and that is not the right approach to creating strong images or videos.

You need to know how the tools work in order to do the appropriate exposure corrections and colour corrections but it is down to you to decide the look you want to achieve.

I like my imagery, video or still, to be strong, with deep blues and generally dark; that is the way I go about it and it is my look. The tools, however, can be used to achieve whatever look you prefer for your material.

In this YouTube tutorial I explain how to edit and grade footage produced by the camera and turn it into something I enjoy watching time and time again.

I called this clip Underwater Video Colour Correction Made Easy, as it is not difficult to obtain pleasing colours if you have followed all the steps.

A few notes just to anticipate possible questions

  1. Why are you not looking to have the Luma or the RGB parades at 50% of the scale?

50% of the IRE scale corresponds to 18% neutral grey. I do not want my footage to look washed out, which is what happens if you aim at 50%.

2. Is it important to execute the steps in sequence?

Yes. Camera LUTs should be applied before grading, as they normalise the gamma curve. Among the correction steps, setting the correct white balance has an influence on the RGB curves and therefore needs to be done before further grading is carried out.

3. Why don’t you correct the overall saturation?

Most of the highlights and shadows are in the light grey or dark grey areas. Saturating those can lead to clipping or noise.

4. Is there a difference between using corrections like Vibrancy instead of just saturation?

Yes: saturation shifts all colours equally towards higher intensity, while vibrancy tends to stretch the colours in both directions.
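A toy sketch of the difference (my own illustrative math, not the actual implementation of any editor): saturation applies the same factor to every pixel, while a vibrance-style control leaves the extremes mostly alone.

```python
import colorsys

def adjust(pixel, saturation=1.0, vibrance=0.0):
    """Illustrative only: uniform saturation vs selective vibrance on one RGB pixel."""
    r, g, b = pixel
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    s *= saturation                  # saturation: same multiplier for every pixel
    s += vibrance * (1.0 - s) * s    # vibrance: boost peaks mid-range, vanishes at the extremes
    return colorsys.hsv_to_rgb(h, min(s, 1.0), v)

print(adjust((0.2, 0.4, 0.8), saturation=1.2))  # uniform boost
print(adjust((0.2, 0.4, 0.8), vibrance=0.5))    # selective boost
```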

5. Can you avoid an effect LUT and just get the look you want with other tools?

Yes this is entirely down to personal preference.

6. My footage straight from camera does not look like yours and I want it to look good straight away.

That is again down to personal preference; however, if you crush the blacks, clip the highlights, or introduce a colour cast by clipping one of the RGB channels, this can no longer be remedied.

I hope you find this useful wishing all my followers a Merry Xmas and Happy 2020.

Matching Filters Techniques

The issue is that ambient light filters are set for a certain depth and water conditions and do not work well outside that range. While the idea of white balancing the scene and getting colour to penetrate deep into the frame is great, the implementation is hard.

Taking Keldan as an example, there is a 6-meter version and a 12-meter version, as listed on their website. The 6-meter version works well between 4 and 12 meters, the other between 10 and 18. At the same time, the Spectrum filter for the lens works down to 15 meters at most, and really performs better shallower than 12 meters.

With that in mind, it follows that if you plan to use the Spectrum filter -2 you are probably getting the 6-meter ambient filters. So what happens if you go deeper than 12 meters? The ambient light filter is no longer aligned to the ambient light in the water and the lights start to look warm; this is not such a bad thing, but it can get bad at times.

You can of course white balance the frame with the lights on, but this becomes somewhat inconvenient, so I wanted to come up with a different technique. In a previous post I described how to match a lens filter to a light/strobe filter. Instead of matching the light filter to the ambient light, I match the two filters against each other on land, in daylight conditions, to obtain a combination that is as neutral as possible. I have done this for the URPRO, Magic Filter and Keldan Spectrum filter, and worked out the light filter that, when combined with each, gives a neutral tone.

Magic filter combined with 2 stops cyan filter giving almost no cast

This tone tends to emulate the depth where the filter has its best colour rendition: in the case of the Keldan this is around 4 meters, likewise for the Magic, with the URPRO going deeper, around 6-9 meters.

The idea is that you can use the lens filter without lights for landscape shots, and when you bring the lights into the mix you can almost shoot in auto white balance, or set the white balance to the depth where the two were matched. I wanted to try this theory in real life, so I spent three different days of diving testing the combinations I had identified; the results are in this video.

The theory of matching filters worked, and the filters more or less all performed as expected. I did have some additional challenges that I had not foreseen.

Filter Performance

The specific performance of a filter is dependent on the camera's colour science. I have had great results with the URPRO combined with Sony cameras, but with Panasonic I have always had an orange cast in the clips.

Even this time the same issue was confirmed, with the URPRO producing this annoying cast that is hard, if not impossible, to remove even in post.

The Magic filter and the Spectrum filter performed very closely, with the Magic giving a more saturated, baked-in image and the Keldan maintaining higher tonal accuracy. This is the result of the design of the filters: the Magic filter has been designed to take outstanding pictures, better than life, while the Spectrum filter has been designed with tools to give accurate colour rendition. What this means is that the Magic images look good even on the LCD, while the Keldan's are a bit dim but can be helped in post.

Looking at the clip, in the first three and a half minutes you can't tell the Magic and the Spectrum apart down to 9 meters, with the URPRO giving a consistent orange cast.

Going a bit deeper, I realised you also need to handle the scenario where you are swimming close to a reef and want to bring some lights into the frame because you are outside the best working range of the filter. To avoid an excessive gap when approaching the reef, I had stored white balance readings at 6, 9, 12 and 15 meters; so in a scene with mixed light, instead of balancing for, say, 15 meters and then having an issue with the lights, I used the 9-meter setting. The image is dim when you are far away and gets colourful as you approach, which is somewhat expected in underwater video.

The sections at 15 meters are particularly interesting.

You can see that the URPRO gets better with depth, but also how at 5:46 you see a fairly dim reef; at 5:52 I switch on the lights and the difference is apparent.

At 6:20 the approach with the Keldan was directly with the lights on: the footage still gives an idea of depth, but the colours are there and the background water looks really blue, as I had white balance set for 9 meters.

Key Takeaways

All filters produced acceptable results; however, I would not recommend the URPRO for the Panasonic GH5, and would settle for the Magic Filter or the Spectrum filter. Today the Spectrum is the only wet filter for the Nauticam WWL-1, but I am waiting for some prototypes from Peter Rowlands for the Magic. I would recommend both the Magic and the Spectrum, and the choice really depends on preference. If you want a ready look with the least retouching, the Magic filter is definitely the way to go, as it produces excellent ready-to-use clips that look good immediately on the LCD.

The Keldan Spectrum filter has a more desaturated look and requires more work in post but has the benefit of a more accurate image.

I think this experiment has proved the method works and I will use it again in the future. It is also potentially applicable to the Keldan or other ambient light filters, choosing a tone that closely matches the lens filter.

 

Filter Poll

Choosing the Appropriate Frame Rate for Your Underwater Video Project

I think the subject of frame rates for underwater video is filled with a level of nonsense second to none. Part of this is GoPro-generated: the GoPro, being an action cam, started proposing higher frame rates as standard, and this triggered a chain reaction whereby every camera manufacturer in the video space has added double frame rate options to their in-camera codecs.

This post, which will no doubt be controversial, will try to demystify the settings and eliminate some fundamental misconceptions that seem to populate underwater videography.

The history of frame rates

The most common frame rates used today include:

  • 24p – used in the film industry
  • 25p – used in the PAL broadcasting system countries
  • 30p – used in the NTSC broadcasting system countries

PAL (Phase Alternating Line) and NTSC (National Television System Committee) are broadcast colour systems.

NTSC covers the US, part of South America and a number of Asian countries, while PAL covers pretty much the rest of the world. This post will not go into the details of which system is better: those systems are a legacy of interlaced television and cathode ray tubes, and for most of us are something we have to put up with.

Today most video is consumed online, so broadcasting standards only matter if you produce something that will go on TV, or if your footage includes artificial lighting connected to the power grid (LED lights do not matter here).

So if movies are shot in 24p and this is not changing any time soon, why do those other frame rates exist? Clearly, if 24p were not adequate it would have been changed long ago; save for some experiments like 'The Hobbit', 24p is totally fine for today's use, even if it is a legacy of the past.

The human eye has a reaction time of around 25 ms and is therefore not actually able to track a moving object in the frame at rates higher than about 40 frames per second; it will, however, notice if the whole scene moves around you, as in a shoot-'em-up video game. Our brain does a brilliant job of making up what is missing and can't really tell any difference between 24/25/30p in normal circumstances. So why do those exist?

The issue has to do with the frequency of the power grid and the first TVs based on cathode ray tubes. As the US grid runs on alternating current at 60 Hz, when you try to watch a movie shot at 24p on such a TV you get judder. The reason is that the system works at 60 fields per second, and to fit 24 frames into it a technique called telecine (3:2 pulldown) is used: frames are alternately held for three fields and two fields, so that 24 frames come up to 60 fields per second; however, this looks poor and creates judder.
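A tiny sketch of the 3:2 pulldown cadence, to make the arithmetic concrete:

```python
# 3:2 pulldown: 24 film frames/s -> 60 interlaced fields/s.
# Frames alternately span 3 fields and 2 fields, so every 4 frames become 10 fields.
def pulldown(frames):
    fields = []
    for i, frame in enumerate(frames):
        fields += [frame] * (3 if i % 2 == 0 else 2)
    return fields

fields = pulldown(list(range(24)))
print(len(fields))  # 60: some frames are on screen longer than others, hence the judder
```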

In the PAL system the grid runs at 50 Hz, so 24p movies are simply accelerated to 25p; this is the reason film durations are shorter on PAL TV. The slightly increased pitch in the audio is not noticeable.

Clearly, when you shoot in a television studio with a lot of grid-powered lights, you need to make sure you don't have any flicker, and this is the reason for the existence of the 25p and 30p video frame rates. Your brain can't tell the difference between 24p/25p/30p, but it very easily notices judder, and this has to be avoided at all costs.

When using a computer display or a modern LCD or LED TV you can display any frame rate you want without issues; therefore, unless you are shooting under grid-powered artificial lights, you do not have to stick to any broadcasting system.

180 Degrees Angle Rule

The name is also a legacy, of the rotary shutters in film cameras; the rule establishes that once you have set the frame rate, your shutter speed has to be double it. As there is no 1/48 shutter, 24/25p is shot at 1/50s and 30p at 1/60s; this also keeps everything consistent with the possible flicker of grid-powered lights.

The 180 degrees angle rule gives each frame an amount of motion blur that is similar to that experienced by our eyes.

It is well explained on the RED website here. If you shoot slower than this rule the frames look blurry; if you choose a faster shutter speed you eliminate motion blur. In general everybody follows this and it works perfectly fine.
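For reference, the general form is: exposure time = (shutter angle / 360) / frame rate. A quick sketch:

```python
def shutter_time(fps, angle=180.0):
    """Exposure time per frame for a given shutter angle: t = (angle / 360) / fps."""
    return (angle / 360.0) / fps

for fps in (24, 25, 30, 60):
    print(f"{fps}p -> 1/{round(1 / shutter_time(fps))}s")
# 24p -> 1/48s (in practice 1/50s), 25p -> 1/50s, 30p -> 1/60s, 60p -> 1/120s
```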

Double Frame Rates

50p for PAL and 60p for NTSC are double frame rates that are not part of any commercial broadcasting and today are only supported officially for online content.

As discussed previously, our eyes cannot detect more than about 40 frames per second anyway, so why bother shooting 50 or 60?

There is a common misconception that if you have a lot of action in the frame you should increase the frame rate. But then why, when you are watching a movie, even Iron Man or some sci-fi feature, do you never feel there is an issue?

That is because those features are shot well, with a lot of equipment that makes the footage rock steady; the professionals follow all the rules and the result looks great.

So the key reason to use 50p or 60p has to do with not following those rules: shooting things in a somewhat unconventional manner.

For example, you hold the camera while you are moving (a dashboard cam, say), or you hold the camera while running. In those cases the amount of change in the frame is substantial because you are moving, not because things around you are moving. Standing at a fixed point it will not feel like there is a lot of movement, but if you start driving your car around there is a lot of movement in the frame.

This brings up the second issue with frame rates, which is panning; again I will refer to RED for the panning speed explanation.

So if you increase the frame rate from 30 to 60 fps you can double your panning speed without feeling sick.

Underwater Video Considerations

Now that we have covered all basics we need to take into account the reality of underwater videography. Our key facts are:

  • No panning. With few exceptions, the operator is moving with the aid of fins. Panning would require you to be at a fixed point, something you can only do, for example, on a shark dive in the Bahamas
  • No grid-powered lights, at least for underwater scenes. So unless you include shots with mains-powered lights, you do not have to stick to a set frame rate
  • Lack of light and colour: you need all the available light you can use
  • Natural stabilisation: as you are in a water medium, a rig of reasonable size floats in the fluid and is more stable

The last variable is the amount of action in the scene and the need, if any, for slow motion. The majority of underwater scenes are pretty smooth; only in some cases (sardine runs, sea lions in a bait ball) is there really a lot of motion, and in most of those you can increase the shutter speed without the need to double the frame rate.

Video shot at 50/60p and played back at half speed for the entire clip is really terrible: you lose the feeling of being in the water, so this is something to be avoided at all costs; it looks plain ugly.

Furthermore, you are effectively halving the per-frame bit rate of your video; on top of that, the higher frame rate mode of your camera is usually not better than its normal frame rate, and you can add more frames in post if you want a more fluid look or a slow motion.

I have a Panasonic GH5 and have the luxury of normal frame rates, double frame rates and even a VFR option specifically for slow motions.

I analysed the clips produced by the camera using ffprobe to see how the frames are structured and how big they are, and discovered a few things:

  1. The 50/60p recording options at 150 Mbps have a very long GOP: essentially a full frame is recorded every 24 frames, while the 100 Mbps 25/30p options record a full frame every 12 frames. So the double frame rate has more frames but is NOT better at managing fast-moving scenes and changes in the frame (see the ffprobe sketch after this list).
  2. The VFR option allows you to set a higher capture frame rate that is then conformed to the chosen playback frame rate. For some reason the 24p format has more options than all the others, and 25p does not even have a 50% option. As the footage is recorded at 100 Mbps, VFR footage at half speed conformed to 30p is higher quality than 60p slowed down to 30p (100 Mbps vs 150/2 = 75 Mbps); in terms of key frames and ability to predict motion it is also better, as it has double the number of key frames per second. See this explanation with details of each frame: look for the I-frames.
  3. The AVCI All-Intra option contains only I-frames, and has 24/25/30 of them per second; it is therefore the best option for capturing fast movement and changes in the frame. If you need to slow it down, half speed still leaves 12 key frames per second, so the other frames can easily be interpolated.
  4. Slow motion: as each image stays on the screen longer when slowed down, you need to increase the shutter speed or it will look blurry. So if you intend to take a slow-mo you need to make that decision at the time of the shot and go for a 90 or 45 degree shutter angle. This remains true whether you use VFR or slow down AVCI clips in post.
  5. If you decide AVCI is not for you, the ProRes choice is pretty much identical, and again you do not need to shoot 50/60p except in specific situations. In general AVCI is equal to or better than ProRes, so the whole point of getting a recorder is highly questionable, but that is another story.
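If you want to check your own camera's GOP structure, ffprobe can list the frame types directly; a small sketch (assuming ffprobe is installed, with "clip.mov" as a placeholder name):

```python
import collections
import subprocess

# Count I/P/B frames in the first 5 seconds of a clip to reveal the GOP structure.
out = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-read_intervals", "%+5",                      # only inspect the first 5 seconds
     "-show_entries", "frame=pict_type", "-of", "csv=p=0",
     "clip.mov"],
    capture_output=True, text=True, check=True)
print(collections.Counter(out.stdout.split()))      # e.g. Counter({'B': ..., 'P': ..., 'I': ...})
```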

For academic purposes I have compared the three different ways Final Cut Pro X can slow footage down. To my surprise the best method is 'Normal Quality', which also makes sense, as there are many full frames.

It is also interesting to look at my own slow motion, which is not ideal as I did not increase the shutter speed; because the quality of AVCI is high, the footage still looks totally fine slowed down.

Various slow motion techniques in FCPX with 1/50s shutter

Looking at other people's examples you can get exactly the wrong impression: they take a shot without increasing the shutter speed and then slow it down. The reason the 60p version looks better is the shutter speed, not the image quality itself; it is also completely unnecessary to slow down a whale shark as it glides through the water.

The kind of guidance you get

So taking this kind of guidance blindly is not a good idea.

Key Takeaways

  • Unless you shoot using grid-powered lights, you can choose any frame rate you want: 24/25/30 fps
  • Shutter speed is important because it controls motion blur, or freezes motion for a slow motion clip
  • You need to choose which scenes are suitable for slow motion at the time of capture
  • Systematically slowing down your footage is unnatural and looks fake
  • Formats like AVCI or ProRes give you better options for slowing down than the 50/60 fps implementations with their very long GOP
  • VFR options can be very useful for creative purposes, although they have limitations (fixed focus)

How do I shoot?

I live in a PAL-system country, yet I always find limitations with the 25 fps options in camera; the GH5 VFR example is not the only one. All my clips are shot at 24 fps, 1/50s. I do not use slow motion enough, and if I did I would probably keep using AVCI and increase the shutter speed depending on the effect I want to give the scene; this is also the most natural and easiest way to shoot underwater, as you do not have to continuously change format. Having all intra frames gives me all the creativity I need, including for speed ramps, which are much more exciting than plain slow motion; see this example.

interceptor121’s cut – Nauticam n85 Panasonic Olympus and BMPCC port chart

I thought of adding a little sticky post about what I use with my Panasonic GH5 in terms of lenses and ports, so I made some edits to the official port chart v7.19; please find the Google Drive link here.

There is an addition that I will cover in future posts, which relates to using the Canon 8-15mm fisheye zoom lens on the GH5 body with a Metabones Smart Adapter or Viltrox EF-M1.

I have already written about the choice of macro lenses, fisheyes and wet lenses for wide-angle and macro video.

My latest post is on rectilinear wide-angle lenses, a tricky subject for most.

The importance of underwater white balance with the Panasonic GH5

One of the key steps in order to get the best underwater colours in your video is to perform a custom white balance.

This is true on land and in water, because auto white balance only works in a specified range of colour temperatures.

Panasonic GH5 advanced user manual

For our GH5 the range where auto white balance works is approximately 3200-7500K. When the camera is working outside this range you get a colour cast. Let's look at some examples:

Grey card Auto White Balance 8mm
Grey card Custom White Balance 8mm

In the example above I am taking a picture of a white balance reference card under warm lights that have a colour temperature of 2700K.

As you can see, auto white balance fails, resulting in a yellowish tinge, while the shot taken after a custom white balance is accurate.

In terms of white balance card I use the Whibal G7 Studio 3.5″x6″ (8.9×15.2 cm). I found this card to work well underwater and I use it with a lanyard attached to a clip that I hook on my BCD D rings.

More info on the whibal here

It is possible to buy a larger card, such as the Reference at 7.5″x10″, but this is cumbersome, and I found the Studio version works well with the Panasonic GH5 as the camera only uses the central part of the frame for white balance.

Custom white balance with the 8mm fisheye

Going back to our GH5 instruction manual you can also see that the camera white balance is limited to 10,000K which is the colour of blue sky.

Underwater, due to the absorption of light at longer wavelengths, red and orange disappear with depth, and blue tends to scatter off suspended particles. So the colour temperature of water tends to be higher than 10,000K, and the blue is also somewhat washed out by scattering.

This is the reason filters are essential: they reduce the amount of blue, or better said cyan, and bring the camera back into a range where custom white balance works again.

I have already posted a whole range of observations on filters in a previous post, so I am not repeating them here.

With the right filter for the water colour I dive in, and with an appropriate white balance card, you can get some pretty decent results with custom white balance.

To help colour accuracy I have experimented with the Leeming LUTs, and I want to thank Paul Leeming for answering my obscure questions. Obviously you do not have to use the LUTs, and you could design your own; however, I found that with the Cinelike D LUT I have a very good starting point for colour correction.

The starting point is a Cinelike D profile with saturation, noise reduction and sharpness set to -5 and all other settings at default, as suggested by Paul; there is no need to lower the contrast, as Cinelike D is already a flat curve.

*Noise reduction and sharpness actually have nothing to do with grading, but are set to -5 because the GH5 applies sharpening and noise reduction even at that setting. Sharpening generally has a negative effect all around, while noise reduction, if required, is better performed in the editor.

Looking at Imaging Resource's tests of the GH5 we can appreciate that the camera's colours are oversaturated by default.

The GH5's colours are oversaturated, at around 113%

The GH5 tends to push deep colour and wash out cyan and yellow. This becomes apparent when we look at a white balanced clip uncorrected.

White balanced clip in Final Cut Pro: you can see how the water column is washed out whilst red and other dark colours are accurate

The Leeming LUT helps rebalance the camera's distorted colours, and when you apply it, provided you have followed the exposure instructions and applied the profile as described, the improvement is immediate.

The previous clip now with the CineLike D Leeming LUT applied

From here onwards it is possible to perform a better grading and work to improve the footage further.

For the whole read please look at Leeming Lut website

One other thing I believe is interesting: while for ambient-light or balanced-light shots I do not actually trust the camera exposure and generally go -1/3 to -2/3 EV, for close-up shots exposing to the right greatly helps highlight recovery.

In the two frames you can see the difference the LUT makes, restoring the correct balance to the head of the turtle.

Turtle detail: the highlights appear blown out
Turtle detail with the Leeming LUT applied

To be clear, the turtle detail was white balanced in the water on the WhiBal card while using a Keldan Spectrum filter -2; then automatic balancing was applied in FCPX. The LUT brings out a better dynamic range from the same frames.

Obviously you are free to avoid lens filters and LUTs, and to some extent it is possible to get similar results; however, the quality I obtain using this almost automatic workflow is, I believe, quite impressive.

I found myself mostly correcting my own wrong exposures, or wanting to increase contrast in scenes where I had little; however, this only happens in severe circumstances where white balance and filters are at their limits.

Conclusion

There are many paths to getting the right colours in your GH5 underwater videos. In my opinion there are four essential ingredients that make your life easier and give your footage a jump start:

  • Take a custom white balance using a professional grade white balance card
  • Set the right picture profile and exposure when shooting
  • (Recommended) Use appropriate filters for the water conditions
  • Apply the appropriate LUT to eliminate the errors in the GH5 colour rendering in post processing

With the above settings, producing a video like this is very simple, and all your effort goes into the actual cutting of the clip.

Short clip that applies this blog tips

Please note some of the scenes that look off are shot beyond the working conditions of filters and white balance at around 25 meters…