Focussing Techniques for Video – Part II Auto Focus Settings

If you have some experience with video on land you will know that many professional videographers do not use autofocus but rely on follow focus devices. These are accessories that drive the focus ring of the lens, avoiding the shake you would create by turning the ring with your hand.

The bad news is that there are no devices to perform follow focus underwater, and if you use a focus knob you will indeed create camera shake. This is the primary reason why I do not use focus knobs on any of my lenses, with the exception of the Olympus 60mm macro, and on the rare occasions I use it I do not actually use it to obtain focus but to ensure I am at the closest working distance.

So how do you achieve good focus if you can’t use a focus ring and continuous autofocus cannot be trusted? There are essentially three methods that I will discuss here and provide some examples:

  1. Set and forget
  2. Set and adjust
  3. Optimised Continuous Autofocus

You will have noticed that continuous autofocus is still an option in the list. Before we drill down into the methods I want to give some background on autofocus technology.

If after reading this post you are still confused I recommend you get some tuition, either by joining my Red Sea trip or one to one (offered in the Milton Keynes area in the UK).

https://interceptor121.com/2019/07/28/calling-out-to-all-image-makers-1st-interceptor121-liveaboard-red-sea-2020/

Contrast Detect vs Phase Detect and Hybrid Autofocus

The internet is full of autofocus videos showing how well or badly certain cameras perform and how one system is superior to another. The reality is that professional cameramen use follow focus in the majority of cases, and this is because the camera does not know what the subject is.

Though it is true that one focus system may perform better than another, consider that RED cameras use contrast-detect autofocus, the same as your cheap compact camera, so clearly autofocus cannot be that important.

The second fact is that any camera focus system needs contrast, including phase detect. Due to the scattering of blue light in water there are many situations where the contrast in the scene is low, resulting in focus hunting.

So my first recommendation is to ignore the whole discussion about which focus system is superior, because the reality is that there will be situations where focus is difficult to achieve and the technology will not come to help. You need to devise strategies to make things work, and that is what this post is about.

Let’s now go through the techniques.

Method 1: Set and Forget

As the name implies, with this method we set focus at the beginning of the shot and never change it again. This means disabling the camera's continuous autofocus in video mode, which is essential for this technique to work.

This works in three situations:

  1. Using a lens at the hyperfocal distance behind a flat port
  2. Using wet wide angle lenses
  3. Using fisheye lenses

Method 1.a Hyperfocal Distance Method

I am not going to write a dissertation on this; there is good content on Wikipedia worth a read: https://en.wikipedia.org/wiki/Hyperfocal_distance

The key concept is that, at a given aperture, there is a subject distance at which depth of field reaches infinity. The wider the lens, the closer this distance. For example, for a 14mm lens on a Micro Four Thirds body at f/5.6 it is about 1.65 meters, so if you focus on an object at that distance everything from roughly 0.8 meters to infinity will be in focus. As you close the aperture the hyperfocal distance gets shorter. This technique is good for medium or reefscape shots where you don't mind the whole frame being sharp. It is not suitable for macro or close shots, as the aperture required would be too small and diffraction would kick in.
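For reference, the numbers above can be reproduced with the standard hyperfocal formula; this is a minimal sketch in Python, assuming a circle of confusion of about 0.02 mm for Micro Four Thirds (the exact value you pick shifts the result slightly):

```python
# Minimal sketch of the hyperfocal arithmetic (assumed circle of confusion).
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.02):
    """Hyperfocal distance in mm: H = f^2 / (N * c) + f."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

H = hyperfocal_mm(14, 5.6)   # ~1.76 m with c = 0.02 mm, close to the 1.65 m quoted above
print(f"hyperfocal ~ {H / 1000:.2f} m, near limit ~ {H / 2000:.2f} m")

# Behind a flat port remember the ~1.33x apparent magnification of water
# when judging distances by eye.
```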

Looking at the past CWK clips, if continuous autofocus had been disabled and he had focussed at the start of the scene at 1.85 meters, no refocus would have been required until the manta was at 0.9 meters. Note that distances have to be adjusted to account for the magnification effect of water.

Once you have your lens and aperture settings you can quickly work out the key distances in your scene and fine-tune your technique.

Obviously shooting those shots with a flat port is not exactly the most common method; however, understanding this technique is the foundation for the other two.

Methods 1.b and 1.c Wet Lenses and Fisheyes

Fisheye lenses tend to have an incredible amount of depth of field even wide open, so set and forget applies in full here without even bothering about hyperfocal distance. Usually focussing on your feet is all that is required.

The real revelation for this technique are afocal wet lenses. Afocal means that the focal length of the wet lens is infinite, so the light coming through it neither diverges nor converges. Together with the magnification factor, typically 0.3-0.4x, this means you get to a fisheye-like situation without the same amount of distortion.
This is the primary reason to buy a lens like the Nauticam WWL-1 or even an Inon wet lens with afocal design.

My Tiger and Hammerhead videos are shot with the camera locked in manual focus after focussing on my feet.

Even when the shark hits the camera the image is in focus

I do not have technical information on the newer Nauticam WACP-1 or WACP-2, so I am not in a position to confirm whether those lenses are afocal and therefore cannot help you there. I would think the depth of field considerations still apply. If Nauticam, a shop or a user lends me a set-up for pool testing I can provide optimised settings for the WACP.

Set and forget is the number one method for wide angle and reefscapes underwater and it is easy.

Method 2: Set and Adjust

As the name implies, this method sets focus at the beginning of the shot and then adjusts it when required; this is necessary especially in macro situations.

The set and adjust method varies depending on how the camera manages push-to-focus. If the camera can refocus on a half-press of the shutter, no settings are required other than disabling continuous autofocus.

For cameras that do not refocus on a half-press of the shutter, you need to operate in manual focus and set a custom button to perform a single autofocus.

In both cases you need peaking to be active during the shot.

Procedure:

  1. Set the focus as required using the half-press shutter or AF-On button
  2. Observe the peaking to ensure the subject is in focus, moving the camera if required.
  3. In case of loss of focus, refocus using the shutter or the AF-On button

This method works well with macro, where typically you set focus and then move the camera back and forth to keep it; in those cases where you want to switch focus to another part of the frame, you refocus. This would have helped Brian in the two-crab situation.

As the refocus brings a moment of blur into the clip, you need to ensure that when you trigger it the camera will succeed; this is best achieved using a single focus area.

Method 3: Optimised Continuous Autofocus

Although autofocus has some risks, there are situations where it is required. Those include:

  • Shooting at apertures that do not give sufficient depth of field to warrant set and forget
  • Using dome ports and rectilinear lenses; in my experience those lenses do not work well with hyperfocal distances due to the optics of dome ports

Obviously the best option remains using a wet lens and set and forget; however, there are instances where we absolutely want straight lines, for example when shooting divers or models. In those cases we will use a dome port, and as we can't use a focus gear because the camera would shake, we need autofocus.

Focus Area Settings

Cameras have a selection of modes to set the area that will be used by autofocus:

  1. Face / Animal recognition -> locks on recognised shapes
  2. Multi area -> selects the highest contrast area among a number of smaller areas of the frame; cameras have up to 225 or more areas and you can customise the shape of the grid
  3. Single area -> an area of selectable size and position in the frame
  4. Tracking -> tracks the contour of an object in the frame

Face recognition and animal recognition are not useful in our case.

Tracking requires the object to keep its shape within the frame. This is useful for nudibranchs, for example, or anything else that does not change shape; a fish turning, for example, will be lost by this method, so it is seldom used. To be honest this fails on land most of the time too.

So we really are left with multi area and single area.

My advice is to avoid multi area, because particles in the water, for example, can generate sufficient contrast to fool the camera into locking onto them.

So the best option is to use single area. I typically set this to a size smaller than the central third of a nine-block grid. With this configuration it is also possible to focus on a subject off centre by moving the area within the frame. This setting works well when the subject is followed by our own movement and kept in the centre, which is the majority of situations.

This video is shot on a 12-60 mid range zoom using single area AF for all scenes including macro.

The single most significant risk with single area is that if the centre of the frame drifts into blue water the camera will go hunting. So if you are shooting in caves or on a wall, make sure the AF area is on one side of the frame, or occasionally lock focus to prevent the camera from seeking focus that won't be found.

Conclusion

Achieving focus in underwater video requires different techniques from land use and a good understanding of ports and optics.

If you think you are not skilled enough and need help from autofocus, my advice is to get an afocal wet wide angle lens. This will transform your shooting experience and guarantee that all your wide angle footage is in focus. If you work in macro situations you need to master the single AF setting of your camera and make sure you are super stable.

The most difficult scenario is using dome ports, and this is one of the reasons I do not recommend them for video. If you are adamant about rectilinear lenses, then apply the specific settings described in Method 3.

Donations are appreciated; use the PayPal button on the left.

Focussing Techniques for Video – Part I Problem Diagnostic

Thanks to Brian Lim and WK’S gone diving for providing some examples.

When I started thinking about writing this post I thought of presenting a whole piece on the theory of focus and how a camera achieves it; however, I later decided it made more sense to start from examples and then drill down into the theory based on specific cases.

So we will look at three common issues, understand why they happened and then discuss possible mitigations.

Issue 1: Wide angle Manta Focus Hunt

This clip was provided by WK's and was taken during a trip to Socorro.

The water is quite dark and murky and there is a substantial amount of suspended particles in the water; otherwise we would not have mantas. The water is also fairly milky, so the image lacks contrast, which is not ideal for focussing, as all cameras, including those using phase detection AF, need contrast.

WK's had a flat port and was shooting at a fairly narrow aperture of f/7.1, which should ensure plenty of depth of field on his 14mm lens.

In this clip you can literally see the autofocus pulsating, trying to find focus; the hunting carries on until the manta is very close, at around 15 seconds into the clip. At that point the clip is stable, but the overall approach has been ruined.

Diagnostics

The key observations are that the subject was not in focus at the very beginning of the shot, and that you can distinctly see how some fairly bright particles come into the scene (at 0:04, for example) and disturb the camera: they create a strong contrast against the black manta, the camera can't decide what the subject is, and it starts hunting. When the manta is close and well defined in the frame the camera knows she is the subject and the focus issues stop. When the manta is far away, the white particles in the water are large and bright enough to be picked up by the matrix points of the camera AF; this is true regardless of the manta being in the frame, and the same would have applied if another fish had photobombed the shot.

Solution

The problem in this clip is not new to video shooters: similar things happen when you have the bride walking towards the altar and someone, the priest or the husband, steps into the frame while they are far apart. On land you would keep control using manual focus or, if you were really daring, you would use tracking. In our case WK's does not have a focus gear and it is not possible for him to change focus manually.

WK's could have used tracking, if available on the camera. With tracking you need to ensure that the camera can lock onto the manta, and then that the manta does not turn or change shape and nothing bigger comes in front; at that point everything would work. This is a high-risk technique only worth trying in clear water with no particles, so in this scenario it is not advised.

The last option, and the solution to this issue, was for WK's to switch to manual focus and engage peaking: use a single AF-On press to focus on his feet or an intermediate target, then check the manta was in focus. If focus was lost WK's could have triggered AF again, at least being able to control how many times the camera refocussed.

Issue 2: Macro Subject Switching

This other clip has been provided by Brian Lim and it is a macro situation.

We can see that there are particles flying in the water and some other small critters at close range. The main subjects are the large crab and the two small crabs in the foreground.

Brian is not happy about the focus on this shot as not everything is sharp.

Diagnostics

Despite the murky water Brian has correctly locked focus on the crabs in the foreground, but due to the high level of magnification the camera does not have sufficient depth of field to render both the small and the large crab crisp in the frame. It is possible that Brian could not tell on his screen that the crab behind was not sharp, which could have been avoided with peaking. In any case it is likely that there was no way to have this shot sharp end to end. Brian is super stable in the shot, so he was set up to make it work.

Solution

Brian does not have a focus gear on this camera; one would have been required to pull focus within the same shot from the small crabs to the larger crab.

However, even in this situation, in manual focus Brian could have shot two clips focussing on the two different focal planes and then managed this in post. It is critical to be able to review focus on screen as we shoot, or to review right afterwards before we leave the scene.

Issue 3: Too many fish and too much water

The last clip is mine and is taken during a recent trip to Sataya reef.

I have deliberately left this clip uncut because it lets you see that you can use autofocus in water behind a dome port and for the most part it works, but there are some pitfalls: the most photogenic dolphins, at 00:50, are initially blurred.

Diagnostics

I was not expecting the sheer number of dolphins on the day, and I certainly was not expecting them this close, so I had a standard zoom lens at 24mm full frame equivalent behind a dome port. In most cases I managed to keep some fish in the AF area of the camera, but at 00:45 and 00:58 the camera has nothing in the middle of the frame and goes on a hunt.

Solution

Working with a dome port and a lens of that nature does not guarantee you will have enough depth of field to leave the camera locked, even at f/8, so some refocussing activity was indeed required. In this case I was using a single AF area in the centre; in those moments the camera has just the blue, nothing to focus on, and goes on a hunt, and as soon as the subject is back in the AF area the camera locks back on. Note that the AF speed is not fast enough to follow when the dolphins come too close, so here the only real solution was a wider lens; however, I could have avoided the hunting if I had set the camera to AF lock the moment the AF area emptied, preventing the camera from re-engaging.

Summary

In all the examples in this post the issues have been generated by a lack of intervention. All the situations I have analysed could, for the most part, have been dealt with at the time of the shot and did not require extra gear. I believe that when we are in the water there is already a lot to think about, and therefore we make mistakes or fail to apply the decisive corrective action that would have saved the shot.

In the next post I will drill down into focus settings and how they can help your underwater shots, and also discuss how they apply to macro, wide and mid shots. I am also happy to look at specific examples or issues; please get in touch. Specific coaching or troubleshooting is provided in exchange for a drink or two.

Donations are appreciated; use the PayPal button on the left.

Announcing New 2020 Offering

Dear readers, in 2020 I will be adding some services to the blog to reflect requirements that have been developing over the last few years.

It happens at times that people get in touch, either through comments or directly by email, to ask about their current challenges, so I thought: why not address this with a bespoke service? Here are my current ideas:

  • Equipment selection – this is generally to do with ports, lenses, strobes, lights and accessories more than with the camera and housing
  • Photo editing clinic – people seem to struggle with editing their images. While some are definitely skilled, the majority aren't, and editing an image is almost as important as shooting a good one
  • Video editing clinic – as above but for video, which is sometimes even more complex

Those will be offered at the symbolic price of a few beers at UK prices: a £10 donation using the link on the left-hand side.

Other topics that are becoming interesting are discussions around issues like focus, framing and lens quality. For those I welcome input material by email (interceptor121@aol.com): send me your images or videos with problems and I will use them to build an article for your benefit and that of others.

I am currently working on a feature on focus in video, so I am looking for your blurred videos (sorry); as I don't have many myself I need some help from you guys.

Thank you for reading this short post!

Export Workflows for underwater (and not) video

This post is going to focus on exporting our videos for consumption on a web platform like YouTube or Vimeo.

This is a typical workflow for video production

For this post we want to focus on the export-to-publish steps, as things are not as straightforward as they may seem.

In general each platform has specific requirements for uploads and predefined encoding settings to create its own version of the upload; this means it is advisable to feed those platforms files that match their expectations.

The easiest way to do this is to separate the production of the master from the encodes that are needed for the various platforms.

For example, in Final Cut this means exporting a master file in ProRes 422 HQ, in my case from GH5 10-bit material. Each camera differs, and if your source material is of higher or lower quality you need to adjust accordingly; in any case the master will be a significantly large file with mild compression, based on an intermediate codec.

So how do we produce the various encodes?

Some programs like Final Cut Pro have specific add-ons, in this case Compressor, to tune the export; however, I have had such poor experience with Compressor and underwater video that I do not use it and do not recommend it. Furthermore, we can separate the task of encoding from production if we insert a platform-independent piece of software into the workflow.

Today encoding happens primarily in the H.264 and H.265 formats through a number of encoders, the most popular being x264 and x265, which are free. There are commercial rights issues around using HEVC (x265 output) for streaming, so a platform like YouTube uses the free VP9 codec while Vimeo uses HEVC. This does not matter to us.

So to upload to YouTube, for example, we have several options:

  1. Upload the ProRes file
  2. Upload a compressed file that we optimised based on our requirements
  3. Upload a compressed file optimised for YouTube requirements

While option 1 is technically possible, we are talking about 200+ GB per hour, which means endless upload times.
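As a quick sanity check on those numbers, this is the arithmetic that converts an average bitrate into storage per hour (the ProRes figure is an assumed ballpark; check Apple's data rate tables for your exact resolution and frame rate):

```python
# Rough file-size arithmetic: average bitrate (Mbit/s) -> gigabytes per hour.
def gb_per_hour(bitrate_mbps):
    return bitrate_mbps * 3600 / 8 / 1000

print(gb_per_hour(500))   # ~225 GB/h, ballpark for ProRes 422 HQ UHD material
print(gb_per_hour(45))    # ~20 GB/h for a 45 Mbps H.264 upload
```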

Option 2 may lead to unexpected results, as you are not sure how YouTube's own encode will handle your file, so my recommendation is to follow option 3 and give the platform what it wants.

YouTube Recommended Settings are on this link

YouTube recommends the following H.264 settings for SDR (standard dynamic range) uploads:

  • Progressive scan (no interlacing)
  • High Profile
  • 2 consecutive B frames
  • Closed GOP. GOP of half the frame rate.
  • CABAC
  • Variable bitrate. No bitrate limit required, although we offer recommended bitrates below for reference
  • Chroma subsampling: 4:2:0

There is no upper bitrate limit, so of course you can make very large files; however, with H.264 there is a point beyond which you cannot see any visible difference in quality.

Recommended video bitrates for SDR uploads

To view new 4K uploads in 4K, use a browser or device that supports VP9.

Type | Video Bitrate, Standard Frame Rate (24, 25, 30) | Video Bitrate, High Frame Rate (48, 50, 60)
2160p (4k) | 35–45 Mbps | 53–68 Mbps
1440p (2k) | 16 Mbps | 24 Mbps
1080p | 8 Mbps | 12 Mbps
720p | 5 Mbps | 7.5 Mbps
480p | 2.5 Mbps | 4 Mbps
360p | 1 Mbps | 1.5 Mbps
YouTube Bitrate table

YouTube's recommended settings are actually quite generous, and with a high quality encode we could easily create a smaller file; however, we do not know what logic YouTube applies to its compression if we deviate, so to be safe we will follow the recommendations.

It is very important to understand that bitrate controls the compression together with other factors; however, to get a good file we also need to put some good logic into the analysis of the file itself, as this greatly influences the quality of the compression process.

There is a whole book on x264 settings if you fancy a read here.

For my purposes I use HandBrake, and to make YouTube happy I use variable bitrate with two passes and a target bitrate of 45 Mbps. Together with that I have a preset that takes into account what YouTube does not like and then performs a pretty solid motion analysis, as H.264 relies on motion prediction between frames. This is required to avoid artefacts.

Note the long string of x264 coding commands
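For illustration, a command along these lines reproduces that kind of preset from the command line; this is a sketch, not my exact preset, and the encoder options shown are assumptions to be checked against the HandBrake and x264 documentation (flag names can vary between HandBrake versions):

```python
# Sketch: drive HandBrakeCLI to produce a two-pass ~45 Mbps x264 file for a
# 25p clip, following the YouTube guidance above (2 B-frames, GOP of half the
# frame rate, CABAC). File names are placeholders.
import subprocess

cmd = [
    "HandBrakeCLI",
    "-i", "master_prores.mov",
    "-o", "youtube_upload.mp4",
    "--encoder", "x264",
    "--vb", "45000",              # average bitrate in kbit/s (~45 Mbps)
    "--two-pass",                 # two-pass variable bitrate
    "--encopts", "bframes=2:keyint=12:cabac=1",   # for 25p, GOP = half the frame rate
]
subprocess.run(cmd, check=True)
```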

I have tested this extensively against the built in Final Cut Pro X YouTube Export.

Starting from the timeline and sharing directly to YouTube resulted in an 88 MB file from a 7.06 GB ProRes 422 HQ master for the same project. Following the guidelines and the HandBrake process I ended up with 110.1 MB, which is roughly a 25% increase.

I have also exported to H.264 from FCPX; this gave me a 45.8 Mbps file, however when I checked the file YouTube created from it, it was still 12% smaller than the one YouTube created from my manually generated upload. I used 4K Video Downloader to retrieve the file sizes.

Same source file different encodes different results in YouTube

For HDR files there are higher allowed bitrates and additional considerations on colour space and colour depth, but it is essentially the same story, and I have developed HandBrake presets for that too.

When I have to produce an export for my own use I choose H.265, usually at a 16 Mbps bitrate, which is where Netflix maxes out. Using quality-based encoding at RF=22 produces files of around 20 Mbps, which is amazing considering the starting point of 400 Mbps for GH5 AVCI files. To give you an idea, YouTube's own files range between 10 and 20 Mbps once compressed in VP9. I cannot see any difference between my 16 Mbps and 20 Mbps files, so I have decided to stay with the same setting as Netflix: if it works for them it will work for me.

There is also a YouTube video explaining in detail what I just said, and some comparative videos here.

For all my YouTube and blog subscribers (you need to be both): please fill in the form and I will send you my three HandBrake presets.

Edit following some Facebook discussions: it is claimed that if you want to upload HD you get better results by making the file 4K. According to my tests this is not true. Using x264 and uploading an HD file produces the same or better results than the HD clip YouTube creates from a 4K upload of the same source. I would be wary of what you read on the internet unless you know exactly how the clips were produced; 90% of the issue is poor quality encoding before the file even gets to YouTube!

Colour Correction in underwater video

This is the last instalment of my getting-the-right-colour series.

The first read is the explanation of recording settings

https://interceptor121.com/2018/08/13/panasonic-gh5-demystifying-movie-recording-settings/

This post has been quite popular as it applies generally to the GH5 not just for underwater work.

The second article is about getting the best colours

https://interceptor121.com/2019/08/03/getting-the-best-colors-in-your-underwater-video-with-the-panasonic-gh5/

And then of course the issue of white balance

https://interceptor121.com/2019/09/24/the-importance-of-underwater-white-balance-with-the-panasonic-gh5/

I am not getting into ambient light filters here, but there are articles on that too.

Now I want to discuss editing, as I see many posts online that are plainly incorrect. As is true for photos, you don't edit just by looking at a histogram. The histogram is an aggregate representation of the image, and that is not the right basis for creating strong images or videos.

You need to know how the tools work in order to do the appropriate exposure corrections and colour corrections but it is down to you to decide the look you want to achieve.

I like my imagery, video or still, to be strong, with deep blues and generally dark; that is the way I go about it and it is my look. However, the tools can be used to achieve whatever look you prefer for your material.

In this YouTube tutorial I explain how to edit and grade footage produced by the camera and turn it into something I enjoy watching time and time again.

I called this clip Underwater Video Colour Correction Made Easy, as it is not difficult to obtain pleasing colours if you follow all the steps.

A few notes just to anticipate possible questions

  1. Why are you not looking to have the Luma or the RGB parades at 50% of the scale?

50% of the IRE scale corresponds to 18% neutral grey. I do not want my footage to look washed out, which is what happens if you aim for 50%.

2. Is it important to execute the steps in sequence?

Yes. Camera LUTs should be applied before grading as they normalise the gamma curve. Among the correction steps, setting the correct white balance influences the RGB curves and therefore needs to be done before any further grading is carried out.

3. Why don’t you correct the overall saturation?

Most of the highlights and shadows are in the light grey or dark grey areas. Saturating those can lead to clipping or noise.

4. Is there a difference between using corrections like Vibrancy instead of just saturation?

Yes: saturation shifts all colours equally towards higher intensity, while vibrancy tends to stretch the colours in both directions.

5. Can you avoid an effect LUT and just get the look you want with other tools?

Yes this is entirely down to personal preference.

6. My footage straight from camera does not look like yours and I want it to look good straight away.

That is again down to personal preference; however, if you crush the blacks, clip the highlights or introduce a hue shift by clipping one of the RGB channels, this can no longer be remedied.

I hope you find this useful wishing all my followers a Merry Xmas and Happy 2020.

Canon 8 – 15 mm Fisheye on the Panasonic GH5 Pool Tests

It was time to get wet and test the Canon 8 – 15 mm fisheye on the GH5 in the pool so I made my way to Luton Aspire with the help of Rec2Tec Bletchley.

I had the chance to try a few things, first of all to understand the strobe coverage of the fisheye frame; this is something I had not tested before, but I had built a little model.

In purple the ideal rectangle built with the maximum width and height of the fisheye frame

This model ignores the corners; the red circle represents 90-degree light beams and the amber one the 120-degree angle. A strobe does not have a sharp fall-off when you use diffusers, so this model assumes your strobe can stay within 1 Ev of loss out to around 90 degrees and then drop to –4 Ev at 120 degrees. I do not want to dig too deep into this topic; anyway, this is what I expected, and this is the frame.

Shot at 1.5 meters from pool wall

You can see a tiny reflection of the strobes, together with a mask falling into the left-hand side of the frame… In order to test my theory I ran this through false colour on my field monitor; at first glance it looks well lit, and this is the false colour output.

False colour diagram of previous shot

As you can see the strobes drop below 50% at the green colour band, and therefore the nominal width of those strobes is probably 100 degrees. In the deep corners you see the drop to 20%, 10% and then 0%.

Time to take some shots

Divers hovering @ 8 mm

The lens is absolutely pin sharp across the frame; I was shooting at f/5.6 in the 140mm glass dome.

Happy divers @ 9 mm
BCD removal @ 10 mm
Gliding @ 11 mm
Open Water class @ 12mm
Divers couple @ 13 mm
Hover @ 15 mm

Performance remains stunning across the zoom range. I also tried a few shots at f/4.

9 mm f/4

There is no reef background but looks pretty good to me.

The pool gives a strong blue cast so the shots are white balanced.

Details of the rig and lens mount are in a previous post:

https://interceptor121.com/2019/11/02/fisheye-zoom-for-micro-four-thirds/

Panasonic GH5 zoom fisheye rig

Matching Filters Techniques

The issue is that ambient light filters are designed for a certain depth and water conditions and do not work well outside that range. While the idea of white balancing the scene and getting colour to penetrate deep into the frame is great, the implementation is hard.

Taking Keldan as an example, there is a 6 meter version and a 12 meter version, as listed on their website. The 6 meter version works well between 4 and 12 meters and the other between 10 and 18. At the same time, the Spectrum filter for the lens works down to 15 meters at most and really performs better shallower than 12 meters.

With that in mind, it follows that if you plan to use the Spectrum filter -2 you are probably getting the 6 meter ambient light filters. So what happens if you go deeper than 12 meters? The ambient light filter is no longer aligned to the ambient light in the water and the lights start to look warm; this is not such a bad thing, but it can get bad at times.

You can of course white balance the frame with the lights on; however, this becomes somewhat inconvenient, so I wanted to come up with a different technique. In a previous post I described how to match a lens filter to a light/strobe filter. Instead of matching the light filter to the ambient light, I match the filters against each other on land, in daylight, to obtain a combination that is as neutral as possible. I have done this for the URPRO, Magic Filter and Keldan Spectrum filter and worked out the light filter that, when combined with each, gives a neutral tone.

Magic filter combined with 2 stops cyan filter giving almost no cast

This tone tends to emulate the depth where the filter has its best colour rendition. In the case of Keldan this is around 4 meters, and so it is for the Magic, with the URPRO going deeper, around 6-9 meters.

The idea is that you can use the filter without lights for landscape shots, and when you bring the lights into the mix you can almost shoot in auto white balance, or set the white balance to the depth at which the two were matched. I wanted to try this theory in real life, so I did three different days of diving testing the combinations I had identified; the results are in this video.

The theory of matching filters worked, and the filters more or less all performed as expected. I did have some additional challenges that I had not foreseen.

Filter Performance

The specific performance of a filter depends on the camera's colour science. I have had great results with the URPRO combined with Sony cameras, but with Panasonic I have always had an orange cast in the clips.

Even this time the same issue was confirmed, with the URPRO producing this annoying cast that is hard, if not impossible, to remove even in post.

The Magic filter and the Spectrum filter performed very closely, with the Magic giving a more saturated, baked-in image and the Keldan maintaining higher tone accuracy. This is the result of the design of the filters: the Magic filter has been designed to take outstanding, better-than-life pictures, while the Spectrum filter has been designed with tools to give accurate colour rendition. What this means is that the Magic images look good even on the LCD, while the Keldan ones are a bit dim but can be helped in post.

Looking at the first three and a half minutes of the clip you can't tell the Magic and Spectrum apart down to 9 meters, with the URPRO giving a consistent orange cast.

Going a bit deeper, I realised you also need to cater for the scenario where you are swimming close to a reef and want to bring some lights into the frame because you are outside the best working range of the filter. To avoid an excessive jump when approaching the reef I had stored white balance readings at 6, 9, 12 and 15 meters, so when I had a scene with mixed light, instead of balancing for, say, 15 meters and then having an issue with the lights, I used the 9 meter setting; the image is dim when you are far away and gets colourful as you approach, which is somehow expected in underwater video.

The sections at 15 meters are particularly interesting.

You can see that the URPRO gets better with depth, but also that at 5:46 you see a fairly dim reef, at 5:52 I switch on the lights, and the difference is apparent.

At 6:20 the approach with the Keldan was directly with the lights on; the footage still gives an idea of depth, but the colours are there and the background water looks really blue, as I had white balance set for 9 meters.

Key Takeaways

All filters produced acceptable results; however, I would not recommend the URPRO for the Panasonic GH5 and would settle for the Magic Filter or the Spectrum filter. Today the Spectrum is the only wet filter for the Nauticam WWL-1, but I am waiting for some prototypes from Peter Rowlands for the Magic. I would recommend both the Magic and the Spectrum, and the choice really depends on preference. If you want a ready look with the least retouching, the Magic filter is definitely the way to go, as it produces excellent ready-to-use clips that look good immediately on the LCD.

The Keldan Spectrum filter has a more desaturated look and requires more work in post but has the benefit of a more accurate image.

I think this experiment has proved the method works and I will use it again in the future. The approach is also potentially available using the Keldan or other ambient light filters, choosing a tone that closely matches the lens filter.

 

Filter Poll

Choosing the Appropriate Frame Rate for Your Underwater Video Project

I think the subject of frame rates for underwater video is filled with a level of nonsense second to none. Part of this is GoPro-generated: the GoPro, being an action cam, started offering higher frame rates as standard, and this triggered a chain reaction where every camera manufacturer also in the video space has added double frame rate options to the in-camera codec.

This post, which will no doubt be controversial, will try to demystify the settings and eliminate some fundamental misconceptions that seem to populate underwater videography.

The history of frame rates

The most common frame rates used today include:

  • 24p – used in the film industry
  • 25p – used in the PAL broadcasting system countries
  • 30p – used in the NTSC broadcasting system countries

PAL (Phase Alternating Line) and NTSC (National Television System Committee) are broadcast colour systems.

NTSC covers the US, parts of South America and a number of Asian countries, while PAL covers pretty much the rest of the world. This post will not go into the details of which system is better, as those systems are a legacy of interlaced television and cathode ray tubes and are, for most of us, simply something we have to put up with.

Today most video produced is consumed online, and therefore broadcasting standards are only important if you produce something that will go on TV, or if your footage includes artificial lighting connected to the power grid – battery-powered LED lights do not matter here.

So if movies are shot in 24p, and this is not changing any time soon, why do those systems exist? Clearly, if 24p were not adequate this would have changed a long time ago; aside from some experiments like 'The Hobbit', 24p is totally fine for today's use, even if it is a legacy of the past.

The human eye has a reaction time of around 25 ms and is therefore not actually able to track an object moving within the frame at rates higher than about 40 frames per second; it will, however, notice if the whole scene moves around you, as in a shoot-out video game. Our brain does a brilliant job of filling in what is missing and can't really tell any difference between 24/25/30p in normal circumstances. So why do those frame rates exist?

The issue has to do with the frequency of the power grid and the first TVs based on cathode ray tubes. As the grid supplies alternating current at 60 Hz in the US, when you try to watch a movie shot at 24p on such a TV it judders. The reason is that the system works at 60 fields per second, and to fit 24 frames per second into it a technique called telecine is used: in short, fields are repeated in a regular pattern so that the count comes up to 60 per second, but this looks poor and creates judder.

In the PAL system the grid runs at 50 Hz, so 24p movies are accelerated to 25p; this is the reason their durations are shorter. The increased pitch of the audio is not noticeable.

Clearly, when you shoot in a television studio with a lot of grid-powered lights you need to make sure you don't have any flicker, and this is the reason for the existence of the 25p and 30p video frame rates. Your brain can't tell the difference between 24p/25p/30p but can very easily notice judder, and this has to be avoided at all costs.

When using a computer display or a modern LCD or LED TV you can display any frame rate you want without issues; therefore, unless you are shooting under grid-powered artificial lights, you do not have to stick to any broadcasting system.

180 Degrees Angle Rule

The name also comes from a legacy, but the rule establishes that once you have set the frame rate your shutter speed should be double it. As there is no 1/48s shutter, 24/25p is shot at 1/50s and 30p at 1/60s; this also keeps everything consistent with the possible flicker of grid-powered lights.

The 180 degrees angle rule gives each frame an amount of motion blur similar to that experienced by our eyes.

It is well explained on the Red website here. If you shoot slower than this rule suggests the frames look blurry; if you choose a faster shutter speed you eliminate motion blur. In general everybody follows this rule and it works perfectly fine.
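As a quick illustration of the arithmetic behind the rule (just the maths, not a camera setting): shutter angle = 360 × frame rate × exposure time, so a 180 degree angle means an exposure of half the frame interval.

```python
# 180-degree rule: exposure time = angle / (360 * frame rate).
def shutter_speed(fps, angle_deg=180):
    return angle_deg / (360.0 * fps)

for fps in (24, 25, 30, 50, 60):
    t = shutter_speed(fps)
    print(f"{fps}p -> 1/{round(1 / t)} s")   # 24p -> 1/48 s (use 1/50 s), 30p -> 1/60 s
```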

Double Frame Rates

50p for PAL and 60p for NTSC are double frame rates that are not part of any commercial broadcasting and today are only supported officially for online content.

As discussed previously, our eyes cannot really detect more than around 40 frames per second anyway, so why bother shooting 50 or 60 frames per second?

There is a common misconception that if you have a lot of action in the frame then you should increase the frame rate; but then why, when you are watching a movie, even Iron Man or some sci-fi feature, do you not feel there is any issue?

That is because those features are shot well, with a lot of equipment that makes the footage rock steady; the professionals who shoot them follow all the rules, and it looks great.

So the key reason to use 50p or 60p has to do with not following those rules, and with shooting things in a somewhat unconventional manner.

For example, you hold the camera while you are moving, as with a dashboard cam, or you hold the camera while running. In these cases the amount of change in the frame is substantial because you are moving, not because things around you are moving. If you were still at a fixed point it would not feel like there is a lot of movement, but if you start driving your car around there is a lot of movement in the frame.

This brings us to the second issue with frame rates, which is panning; again, I will refer to Red for the explanation of panning speed.

So if you increase the frame rate from 30 to 60 fps you can double your panning speed without feeling sick.

Underwater Video Considerations

Now that we have covered all basics we need to take into account the reality of underwater videography. Our key facts are:

  • No panning. With some exceptions, the operator is moving with the aid of fins. Panning would require you to be at a fixed point, something you can only do, for example, on a shark dive in the Bahamas
  • No grid powered lights – at least for underwater scenes. So unless you include shots with mains powered lights you do not have to stick to a set frame rate
  • Lack of light and colour – you need all available light you can use
  • Natural stabilisation – as you are in water, a rig of reasonable size floats in the fluid and is more stable

The last variable is the amount of action in the scene and the need for slow motion, if required. The majority of underwater scenes are pretty smooth; only in some cases – sardine runs, sea lions in a bait ball – is there really a lot of motion, and in most of those you can increase the shutter speed without needing to double the frame rate.

When I see video shot at 50/60p and played back at half speed for the entire clip it really looks terrible: you lose the feeling of being in the water, so this is something to be avoided at all costs.

Furthermore, you are effectively halving the bitrate of your video; in addition, the higher frame rate mode of your camera is usually not better than its normal frame rate, and you can add more frames in post if you want a more fluid look or a slow motion.

I have a Panasonic GH5 and have the luxury of normal frame rates, double frame rates and even a VFR option specifically for slow motions.

I analysed the clips produced by the camera using ffprobe to see how the frames are structured and how big they are (see the sketch after this list), and discovered a few things:

  1. The 50/60p recording options at 150 Mbps have a very long GOP: essentially a full frame is recorded every 24 frames, while the 100 Mbps 25/30p options record a full frame every 12 frames. So the double frame rate has more frames but is NOT better at managing fast-moving scenes and changes in the frame.
  2. The VFR option allows you to set a higher capture frame rate and then conforms the recording to the playback frame rate of choice. For some reason the 24p format has more options than all the others, and 25p does not even have a 50% option. As the footage is recorded at 100 Mbps, VFR footage at half speed conformed to 30p is higher quality than 60p slowed down to 30p (100 Mbps vs 150/2 = 75 Mbps); in terms of key frames and ability to predict motion it is also better, as it has double the number of key frames per second – see this explanation with details of each frame and look for the I-frames.
  3. The AVCI all-intra option has only I-frames, 24/25/30 of them per second, and is therefore the best option for capturing fast movement and changes in the frame. If you need to slow it down it still has 12 full frames per second, so the other frames can easily be interpolated.
  4. Slow motion – as the image will be on screen for longer when it is slowed down, you need to increase the shutter speed or it will look blurry. So if you intend to take a slow-mo you need to make that decision at the time of the shot and go for a 90 or 45 degree shutter angle. This remains true whether you use VFR or slow down AVCI clips in post.
  5. If you decide AVCI is not for you, the ProRes choice is pretty much identical, and again you do not need to shoot 50/60p except in specific situations. In general AVCI is equal to or better than ProRes, so the whole point of getting an external recorder is highly questionable, but that is another story.
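For anyone who wants to repeat this kind of analysis, here is a minimal sketch of how the frame types can be counted with ffprobe; the file name is just an example and the output options may need adjusting for your ffprobe version:

```python
# Count I/P/B frames in a clip with ffprobe to see how long the GOP really is.
import subprocess
from collections import Counter

def frame_types(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pict_type", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line.strip().strip(",") for line in out.splitlines() if line.strip())

print(frame_types("clip.mov"))   # e.g. Counter({'P': ..., 'B': ..., 'I': ...})
```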

For academic purposes I have compared the three different ways Final Cut Pro X slows footage down. To my surprise the best method is 'Normal Quality', which also makes sense as there are many full frames.

It is also interesting to look at my own slow motion, which is not ideal as I did not increase the shutter speed; because the quality of AVCI is high, the footage still looks fine slowed down.

Various slow motion technique in FCPX with 1/50s shutter

Looking at other people's examples you get exactly the wrong impression: they take a shot without increasing the shutter speed and then slow it down. The reason why the 60p version looks better is the shutter speed, not the image quality itself; it is also completely unnecessary to slow down a whale shark as it glides through the water.

The kind of guidance you get

So taking this kind of guidance blindly is not a good idea.

Key Takeaways

  • Unless you shoot under mains-powered lights you can choose any frame rate you want: 24/25/30 fps.
  • Shutter speed is important because it determines motion blur, or freezes motion in the case of a slow motion clip
  • You need to choose which scenes are suitable for slow motion at the time of capture
  • Systematically slowing down your footage is unnatural and looks fake
  • Formats like AVCI or ProRes give you better options for slowing down than 50/60 fps implementations with a very long GOP
  • VFR options can be very useful for creative purposes although they have limitations (fixed focus)

How do I shoot?

I live in a PAL country; however, I always find limitations with the 25 fps options in camera, and the GH5 VFR example is not the only one. All my clips are shot at 24 fps, 1/50s. I do not use slow motion much, and if I did I would probably keep using AVCI and increase the shutter speed depending on the effect I want to give the scene; this is also the most natural and easiest way to shoot underwater, as you do not have to continuously change format. Having all intra frames gives me all the creativity I need, including for speed ramps, which are much more exciting than plain slow motion – see this example.

Fisheye Zoom for Micro Four Thirds

Looking at the Nauticam port chart, the only option for a fisheye zoom is to combine the Panasonic PZ 14-42 with an add-on fisheye wet lens. This solution is not very popular due to its low optical quality.

So micro four thirds users have been left with a prime fisheye lens from Panasonic or Olympus…until now!

Looking at the Nauticam port chart we can see that there is an option to use the Metabones Speed Booster adapter; with this you convert your MFT camera to a 1.42x crop, allowing you to use Canon EF-mount lenses designed for cropped sensors, including the Tokina 10-17mm fisheye. This is certainly an option and can be combined with a Kenko 1.4x teleconverter, giving you a fisheye zoom range of 14.2 to 33.8 mm in full frame equivalent terms, or 7.1 to 16.9 mm in MFT terms, of which the usable range is 8-16.9 mm after removing vignetting.
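The focal length arithmetic behind those numbers is simple; here is a small sketch, assuming the 0.71x Speed Booster factor, the 2x MFT crop and an optional 1.4x teleconverter:

```python
# Effective focal range: lens focal length * focal reducer * teleconverter,
# then * 2 for the full frame equivalent on Micro Four Thirds.
def effective_range(short_mm, long_mm, reducer=0.71, tc=1.0, crop=2.0):
    native = (short_mm * reducer * tc, long_mm * reducer * tc)
    return tuple(round(f, 1) for f in native), tuple(round(f * crop, 1) for f in native)

print(effective_range(10, 17))          # ((7.1, 12.1), (14.2, 24.1)) without the TC
print(effective_range(10, 17, tc=1.4))  # ((9.9, 16.9), (19.9, 33.8)) with the 1.4x TC
```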

A further issue is that the Speed Booster gives you another stop of light and limits the minimum aperture to f/16; while this is generally a bonus for land shooting in low light, underwater we want to use apertures all the way to f/22 for sunbursts, even if this means diffraction problems.

Wolfgang Shreibmayer started a trend some time ago on Wetpixel (https://wetpixel.com/forums/index.php?/topic/61629-canon-ef-lenses-on-mft-cameras/) of using full frame lenses, and in this post I want to do a deep dive on what for me is the most interesting lens option: the Canon 8-15mm fisheye.

On full frame this lens can be used as a circular or a diagonal fisheye, but Wolfgang has devised a method to use it as an 8-15mm fisheye zoom on MFT.

Part list – missing the zoom gear

What you need are the following:

  • Canon EF 8-15mm f/4L fisheye USM
  • Metabones Smart Adapter MB_EF_m43_BT2 or Viltrox EF-M1 Adapter
  • A 3D printed gear extension ring
  • Nauticam C-815Z zoom gear
  • Nauticam 36064 N85 to N120 34.7mm port adapter with knob
  • Nauticam 21135 35mm extension ring with lock
  • Nauticam 18810 N120 140mm optical glass fisheye port

The assembly is quite involved, as the lens won't fit through the N85 port opening. It starts with inserting the camera, with no lens attached, into the housing.

GH5 body only assembly
Camera in housing without port

The next step is to fit the port adapter

Attach N85 N120 Metabones adapter

Then we need to prepare the lens on the smart adapter, once the tripod mount has been removed.

Canon 8-15 on Metabones Smart Adapter IV

As the port is designed for the Speed Booster, the lens sits a few millimetres off and the gear will not grip. Wolfgang has devised a simple adapter to make it work.

gear extension ring
Zoom gear on lens

This shifts the gear backwards, allowing it to engage with the knob.

3D design is here

Lens inserted on housing

Looking at the Nauticam port chart, a 30mm extension ring is recommended for the Speed Booster, and now we have an extra 5mm in length; Wolfgang uses a 35mm extension. However, looking at the lens entrance pupil I have concluded that 30mm would actually position it better, and Nauticam have confirmed there won't be performance differences. You need to secure the ring on the dome before final assembly.

Fisheye dome and extension
Full assembly top view
Side front view

The rig looks bigger than with the 4.33" dome, but it is quite proportionate to the size of the GH5 housing. It will look bigger on a traditional small, non-clamshell housing.

Disassembly is again done in three steps.

Disassembly

I am not particularly interested in the 1.4x teleconverter version: consider that once zoomed in to 15mm the lens is already horizontally narrower than a 12mm native lens, so there is no requirement for the teleconverter at all.

This table gives you an idea of the working range compared to a rectilinear lens along the horizontal axis, as the diagonal is not a fair comparison. The lens is very effective at 8-10mm, where any rectilinear lens would do badly, and then overlaps with an 8-18mm lens. The choice of lens is dictated by whether or not you need straight lines. The range from 13mm is particularly useful for sharks and fish that do not come that close.

Focal length (mm) | Horizontal (°) | Vertical (°) | Diagonal (°) | Horizontal linear equivalent (mm)
8  | 130.9 | 95.9 | 170.2 | –
9  | 114.9 | 84.7 | 147.8 | –
10 | 102.5 | 75.9 | 131.0 | 6.9
11 | 92.6  | 68.7 | 117.8 | 8.3
12 | 84.5  | 62.9 | 107.2 | 9.5
13 | 77.7  | 57.9 | 98.4  | 10.8
14 | 72.0  | 53.7 | 90.9  | 11.9
15 | 67.0  | 50.1 | 84.6  | 13.0
(Angles are on the MFT frame of 17.3 x 13 mm, 21.64 mm diagonal.)
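The "horizontal linear equivalent" column can be derived from the horizontal angle of view; here is a sketch, assuming a Micro Four Thirds sensor width of 17.3 mm (the fisheye angles themselves come from the lens data, not from this formula):

```python
# Rectilinear focal length giving the same horizontal field of view.
from math import tan, radians

def rectilinear_focal_for_hfov(hfov_deg, sensor_width_mm=17.3):
    return (sensor_width_mm / 2) / tan(radians(hfov_deg) / 2)

print(round(rectilinear_focal_for_hfov(102.5), 1))  # ~6.9 mm, the 10 mm row
print(round(rectilinear_focal_for_hfov(92.6), 1))   # ~8.3 mm, the 11 mm row
```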

Wolfgang has provided me with some shots that illustrate how versatile this set-up is.

8mm end surface shot
Caves 8mm
15mm end close up
Dolphins at 15mm
Diver close up at 8mm
Snell windows 8mm
Robust ghost pipefish @15mm

As you can see you can even shoot a robust ghost pipefish!

The contrast of the glass dome is great and the optical quality is excellent. On my GH5 body there is uncorrected chromatic aberration that you can remove in one click. Furthermore lens profiles are available to de-fish images and make them rectilinear should you want to do so.

I would like to thank Wolfgang for being available for questions and for providing the 3D print and the images featured in this post.

If you can't 3D print and need an adapter ring I can sell you one for £7 plus shipping; contact me for arrangements.

Amazon links UK

Canon EF 8-15 mm f/4 fisheye USM lens

Viltrox EF-M1 Mount Adapter

Note: it is possible to use a Metabones Speed Booster Ultra in combination with a Tokina 10-17mm zoom fisheye and a smaller 4.33″ acrylic dome.

UK Cost of the Canon option: £3,076

UK Cost of the Tokina option: £2,111

However, if you add the glass dome back:

UK Cost of the Tokina option with glass dome: £2,615

The gap is £461, and if you go for a Viltrox adapter (which I would not recommend for the Speed Booster) the difference on a comparable basis is £176, which for me does not make sense given that the Canon optics are far superior.

So I would say either Tokina in acrylic for the cost conscious or Canon in glass for those looking for the ultimate optical quality.

Using Rectilinear Wide Lenses Underwater

I was checking the technical details of Alex Mustard's Underwater Photography Masterclass, and the majority of the wide angle pictures are taken with a fisheye lens. In the section about shooting sharks Alex says that he prefers to shoot sharks with a fisheye, otherwise they look 'skinny'.

If you look at underwater video forums online you frequently see comments about problems with wide angle shots connected to the use of a rectilinear wide angle lens in a dome.

The two most common complaints are soft corners and distortion.

Soft corners are due to a combination of lens optical issues and dome port optics. In short, every lens has some field curvature, so if you shoot a flat surface the image may be sharp in the centre and softer towards the corners. Field curvature issues are corrected by stopping down the lens, and they happen everywhere, not just underwater.

Right now there are four wide angle lenses that can be housed for a micro four thirds camera:

Olympus 9-18mm

This lens has a nice working range that captures 100 degrees diagonally at the widest setting and still gives a 35mm equivalent field of view at the tele end. This pretty little lens, at $699, is the most affordable option that can be put in a housing. You will need a wide angle port and the zoom gear; the whole combination for your Nauticam housing comes to $1,399. This lens can also be combined with a glass dome, but that makes the whole combination much more expensive and you may want to think about getting a better lens instead.

Olympus 7-14mm

This is an outstanding lens, especially on land, due to the fast f/2.8 aperture. It is expensive at $1,299.99 and very heavy and bulky. The lens does not fit through the N85 port opening and requires a port adapter; this gives the extra benefit of a focus knob, but with such a wide lens it is not really useful due to the high depth of field. You will need a 180mm glass dome and the zoom gear to complete the set-up, ending at a whopping $3,159.99.

Panasonic 7-14mm

I have owned this lens and I have to say that at $799 it is the right compromise between wide field of view and price. Furthermore, once you get the zoom gear you have the option of a cost-effective acrylic dome that will give you a very wide set-up for $1,589.99. There are reports of poor performance with this lens, and it is true that it is not as sharp in the corners, but the results are perfectly acceptable if you stop down to f/8 for close shots.

Steering Wheel Truck
Panasonic 7-14mm with acrylic dome 9mm f/8
Exploring the Chrisoula
Panasonic 7-14mm with acrylic dome 7mm f/5

This lens is prone to reflections and flare; however, once you add the N120 port adapter and the 180mm glass dome this will get you to $2,819, at which point you may want to consider the Olympus combination instead.

Panasonic Leica 8-18mm

This is my favourite lens: it is sharp, does not suffer from field curvature issues and has a very useful zoom range of 16-35mm in 35mm equivalent terms. The zoom gear and the 7" acrylic dome will take you to $1,889.99, which is an excellent price point. The lens is not prone to reflections or flare, and as the 7" dome has the same curvature radius as the 180mm dome it will produce very similar results.

Encircled
Panasonic 8-18mm in 7 acrylic dome f/8
Sunset Neat
Panasonic 8-18mm at 8mm f/10

The significant size of the acrylic port and the fact it floats make it ideal for split shots and this is the lens that gives me the best results.

This lens can also take the port adapter that allows you to use the 180mm glass dome. This adds up to $2,919.99; if you experience bad reflections and shoot frequently into the sun it may be worth it, but I have not had any issue so far with this lens, probably because of its nano coating.

I have found the 7mm focal length too problematic for dome ports, and the amount of perspective distortion excessive; generally it is preferable to shoot at 9mm or narrower, although this may be insufficient for wreck interiors if you want a rectilinear look.

Perspective Distortion

One of the regular complaints of video shooters, especially in wrecks or caves, is that the edges look horrible and distorted and that there is an issue with the corners pulling. This is in fact not a lens or port fault but a problem of perspective when you shoot very wide angles. The following test shots illustrate that the issue happens on land and has nothing to do with dome ports.

Shot at f/2.8 with Panasonic 8-18mm at 8mm shows sharp corners
Image with objects in edges at 8mm

As we can see, the football looks like an oval and the chair is pulled. This is due to perspective and is not a lens problem. When you shoot underwater video, objects at the edges of the frame change shape, creating the pulling effect that most people dislike.

Same scene at 9mm

At 9mm the amount of perspective distortion is reduced, and this is the reason why 18mm in 35mm equivalent terms is one of the favourite focal lengths for rectilinear video and the widest that should be used in small spaces to avoid pulling edges.

One of the reasons why a lens like the Nauticam WWL-1 is preferred for video is that the corners look sharp, but is that really true?

Not really. Let's apply some barrel distortion to the image that looked badly distorted, to simulate the WWL-1.

Barrel distortion applied -60 8mm

Now the football looks circular, as we have applied -60 barrel distortion; obviously the rest of the image is now bent, but this seems not to be a concern for most people!

Barrel distortion -30 9mm

Much less correction is needed to bring the 9mm shot into shape, and between 8mm and 9mm it is certainly the 9mm that produces the more acceptable result.

It has to be said that in video, with its 16:9 aspect ratio, most of the issue will be cropped away at the edges, but the distortion in the middle of the frame will remain. For the same reason the 9mm image will appear practically rectilinear, with no issues.

16:9 crop still showing the edge ‘pulling’ at 8mm

16:9 crop looks straight at 9mm

I hope this post was useful. There are four options for micro four thirds shooters who want to use rectilinear lenses; I have settled on the Panasonic 8-18mm, as in most cases it is still possible to control the perspective issue, whereas I found this impossible at 7mm.

Bike on Hold 2
Bike in hold 2 on SS Thistlegorm Panasonic 8-18 at 8mm
Bubbling Bike
Shot at 7mm showing the front tyre pulling outside the frame

Obviously if you shoot into the blue this problem will not be visible; however, rectilinear lenses are popular with wreck shooters, and I think this post gives an idea of the challenges at play.

Finally, I would discourage the use of the 7-8mm focal length range for video for those who want a rectilinear look.

From this post onwards I have started supporting Bluewater Photo in the US for my links, because it still provides multi-brand choice and because I learnt a lot from Scott Gietler's Underwater Photography Guide back in the days when there were hardly any internet resources to learn from.