Choosing a Camera Format for Macro Underwater Photography

Following on from my previous post, I wanted to further investigate the implications of sensor formats and megapixels for macro underwater photography.

I also want to stress that my posts are not guides on which camera to choose. For macro, for example, some people rely on autofocus, so there is no point talking about sensors if your camera cannot focus on the shot!

Macro underwater photography, and fish portraits in general, is easier than wide angle because the scene is managed entirely with artificial illumination, although some real masterpieces also take advantage of ambient light.

There are a number of misconceptions here too, but, in contrast to wide angle, the prevailing school of thought is that smaller cameras are better for macro. Is that really the case?

Myth 1: Wide angle lens -> More Depth of field than Macro

Depth of field depends on a number of factors; you can find a full description on sites like Cambridge in Colour, and a good read is here.

A common misconception, before even getting to sensor size, is that depth of field is driven by focal length, and that a long macro lens therefore has less depth of field than a wide angle lens.

If we look at the DOF formula below, we can see that the effects of focal length and subject distance cancel each other out.

DOF ≈ 2 · N · c · u² / f²  (depth of field approximation, valid when the subject distance u is large compared to the focal length f)

A long lens has a smaller field of view than a wide lens, so to frame the subject the same way the distance u must increase, cancelling the effect of the focal length f.

The other variables in this formula are the circle of confusion c and the f-number N. As we are looking at the same sensor, the value of c is invariant, and therefore at equal magnification the depth of field depends only on the f-number.

Example: we have a 60mm macro lens and a 12mm wide angle lens, with a subject at 1 meter from the 60mm lens. To have the same subject size (magnification) we need to shoot at 20cm with the 12mm lens; at that point the depth of field will be the same at the same f-number.

So a wide angle lens does not give more depth of field, it just gets you closer to the subject. At some point this gets too close, and that is why macro lenses have long focal lengths: you get good magnification with a decent working distance.
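
To make the arithmetic concrete, here is a minimal Python sketch of the approximation above; the 0.015 mm circle of confusion is just an illustrative value for a given sensor:

```python
def dof_mm(f_mm, n, u_mm, c_mm=0.015):
    """Approximate depth of field: DOF ~ 2*N*c*u^2 / f^2 (same sensor, moderate distances)."""
    return 2 * n * c_mm * u_mm ** 2 / f_mm ** 2

# Same magnification: 60mm lens at 1m vs 12mm lens at 0.2m (5x shorter focal, 5x closer)
print(dof_mm(60, 8, 1000))  # ~66.7 mm
print(dof_mm(12, 8, 200))   # ~66.7 mm -> same DOF at the same f-number
```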

Myth 2: Smaller Sensor has more depth of field

We have already seen that sensor size does not appear in the depth of field formula, so sensor size is clearly not what drives depth of field. Why, then, is there such a misconception?

Primarily because people do not understand depth of field equivalence and they compare the same f-number on two different formats.

Due to the crop factor, f/8 on a 2x crop sensor is equivalent to f/16 on full frame; therefore, as long as the larger sensor camera can still stop down to the equivalent aperture, there is no depth of field benefit to a smaller sensor for macro.

So typically the smaller sensor is an advantage only at f/22 on a 2x MFT body or f/32 on an APSC body, where a full frame DSLR has run out of equivalent apertures. At such small apertures diffraction becomes significant, so in real life even these extreme cases bring no benefit.
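
Depth of field equivalence is just a multiplication by the crop factor; a quick sketch:

```python
def equivalent_aperture(f_number, crop_factor):
    """Full frame equivalent f-number for the same depth of field and field of view."""
    return f_number * crop_factor

print(equivalent_aperture(8, 2.0))  # f/8 on MFT (2x)    -> f/16 full frame equivalent
print(equivalent_aperture(8, 1.5))  # f/8 on APSC (1.5x) -> f/12 full frame equivalent
```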

Myth 3: Larger Sensor Means I can crop more

The high level of magnification in macro photography creates a strain on resolution due to the effects of diffraction, and this has a real impact on the final image.

There are two cases to examine: the first is cameras with the same megapixel count but different pixel sizes.

In our example we can compare a 20.3 megapixel MFT 2x crop camera with a 20.8 megapixel APSC 1.5x crop camera and the 20.8 megapixel full frame Nikon D5.

Those cameras have different diffraction limits: with pixels of 3.33, 4.2 and 6.4 microns respectively, they reach diffraction at f/6.3, f/7.1 and f/11 respectively. In practical terms the smaller formats have no benefit over the larger sensor: even though there is more depth of field at the same f-number, depth of field equivalence and diffraction soon destroy the resolution, cancelling the apparent benefit and confirming that sensor size does not matter.
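
A rough way to reproduce those diffraction limits is to ask at which f-number the Airy disk grows to about two and a half pixels. This is a rule-of-thumb sketch, not an exact optical model; the 550 nm wavelength and the 2.5 pixel criterion are assumptions:

```python
AIRY_K = 2.44         # Airy disk diameter = 2.44 * wavelength * N
WAVELENGTH_UM = 0.55  # green light, in microns

def diffraction_limit_f_number(pixel_pitch_um, pixels_across_airy=2.5):
    """f-number at which the Airy disk spans the given number of pixels."""
    return pixels_across_airy * pixel_pitch_um / (AIRY_K * WAVELENGTH_UM)

for fmt, pitch in [("MFT 2x", 3.33), ("APSC 1.5x", 4.2), ("Full frame", 6.4)]:
    print(f"{fmt}: diffraction limited around f/{diffraction_limit_f_number(pitch):.1f}")
# ~f/6.2, f/7.8 and f/11.9: the same ballpark as the f/6.3, f/7.1 and f/11 quoted above
```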

Finally we examine the case of same pixel size and different sensor size.

This is the case, for example, of the Nikon D500 vs the D850: the two cameras have the same pixel size and therefore a similar circle of confusion. This means they will be diffraction limited at the same f-number despite the difference in sensor size. So the 45.7 megapixels of the D850 will not look any different from the 20.8 megapixels of the D500, and neither will actually resolve 20.8 megapixels.

So what is the actual real resolution we can resolve?

Using this calculator you can enter the parameters and obtain the resolved megapixels for the various sensor sizes.

In macro photography depth of field is essential, otherwise the shot is not in focus. For this exercise I have assumed equivalent apertures and calculated the number of megapixels that can be resolved before diffraction destroys resolution.

Format     | f-Number | MP
MFT 2x     | f/11     | 7.1*
APSC 1.5x  | f/14     | 5.6
Full Frame | f/22     | 6.3
Resolution in Megapixels at constrained DOF

Note that the apparent benefit of MFT (*) does not actually exist: its aspect ratio is 4:3, so once the frame is normalised to 3:2 we are back to the same 6.3 megapixels of full frame. APSC, which has a strong reputation for macro, comes last in this comparison.

So although more megapixels let you crop more, the resolution you can actually achieve drops because of diffraction, and your cropped macro image will look worse even on screen, now that most screens are 4K, i.e. around 8 megapixels.

Other Considerations

For a macro image, depth of field is of course essential to a sharp shot; however, we have seen that sensor size is not actually a consideration here, so the playing field is level.

Color depth is important in portrait work, and provided we have the correct illumination, full frame cameras are able to resolve more colours. We are unlikely to see them anyway if we are diffraction limited, but for mid size portraits there will be a difference between full frame and any cropped format. In this graph you can see that there is nothing between APSC and MFT, but full frame has a benefit of 2.5 Ev, and this will show.

The D850 has a clear benefit in color resolution compared to top range APSC and MFT

Conclusion

Surprisingly for most, the format that has an edge for macro is actually full frame, because it can resolve more colours. The common belief that smaller formats are better is not actually true; however, some of those rigs will definitely be more portable and able to access awkward, narrow spaces, and to what extent this is an advantage remains to be seen. It may be worth noting that macro competitions are typically dominated by APSC shooters, whose crop factor actually comes out worst in the diffraction figures.

Choosing a Camera Format for Underwater Photography

The objective of this post is not to determine the best camera for underwater photography, as that is simply the best camera with the best housing and the best strobes and lenses. Everything needs to be seen as a system in order to take stunning images.

The purpose of this article is to provide some clarity and eliminate common misconceptions that seem to hinder the decision making of anyone wanting to take underwater photos. Camera manufacturers have a vested interest in driving sales, and underwater photography equipment shops in pushing users to upgrade their gear as frequently as possible, as that generates value for them. It will not necessarily generate value for you, the consumer, the only person injecting cash into this network.

I recently posted a discussion on WetPixel to generate a debate about the gap between APSC and MFT cameras. This in turn made me do some more research on camera sensors, and I found some information that is very insightful and confirms suspicions I have had since I attended a workshop in the Red Sea with Alex Mustard years ago. On that occasion I was the only user on the boat with a compact camera, but I managed to pull off some decent shots, and this made me realise that there are circumstances that equalise your equipment and shrink the gap in image quality, to the point that a compact camera picture in some cases looks similar to that of a much larger sensor camera. Although I shoot micro four thirds underwater, I have owned and shot DSLRs, full frame and cropped, film and digital, and I have also had an array of compact cameras, so what you are going to read is not focused on one format being better than another.

Let’s discuss some of those misconceptions in more detail.

For those who do not understand the optics of dome ports underwater: the reason you need to stop down the aperture is NOT that you are looking for depth of field; on land you would shoot a wide angle lens wide open and it would have plenty of depth of field. The reason to stop down is the field curvature of the dome, which makes the off-centre areas and the edges soft, and this can only be fixed by stopping down the lens. So before you think "I can shoot at f/4 on APSC, so what?", consider that your pictures would be mostly blurry at the sides; besides, each format has fast lenses, so this is not a main consideration for what you are going to read.

Myth 1: Larger Sensor -> Better SNR

Signal to noise ratio is an important factor in electronics, as it allows information to be distinguished from noise. Contrary to what most people think, SNR is not related to sensor size.

There is an in depth demonstration on this website https://clarkvision.com/articles/does.pixel.size.matter/#etendue

Some of the concepts may be hard going for many, so I will attempt a simplification. What Roger Clark says is that you need to balance the amount of light hitting the sensor before drawing conclusions on SNR. For example, take a camera with a 16mm lens on a full frame sensor and compare it with a camera with an 8mm lens on micro four thirds; I am using MFT because its crop factor of two makes the examples easier.

An exposure of f/8 on a 16mm lens on a full frame camera is equivalent to f/4 on an 8mm lens on MFT: both send the same total amount of light to the sensor at equivalent exposure. However, on the smaller sensor that light is concentrated on a surface 1/4 the size of the larger one, so the gain or ISO value required is 1/4 of that of the larger sensor. Once everything is equalised this way, the exposure values are balanced and the SNR is pretty much identical. This SNR 18% graph on DxOMark gives an idea; I have chosen 3 cameras with the same megapixel count to remove megapixels from the discussion.
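
A back-of-the-envelope check of that equivalence, assuming the same scene, field of view and shutter speed:

```python
def entrance_pupil_mm(focal_mm, f_number):
    """Entrance pupil diameter; same diameter + same field of view -> same total light."""
    return focal_mm / f_number

print(entrance_pupil_mm(16, 8))  # 2.0 mm on full frame at f/8
print(entrance_pupil_mm(8, 4))   # 2.0 mm on MFT at f/4: equivalent total light,
                                 # concentrated on 1/4 of the area, so 1/4 of the ISO is needed
```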

The dotted line highlights that once ISO values are equalised sensor size has no impact on SNR

Once exposure is equalised, the larger sensor no longer has a benefit; this is because the components of noise, shot noise and read noise, do not depend on sensor size.

However, an important consideration is that ISO 100 does not actually mean the same gain in all systems; a larger camera will collect more photons than a smaller one at the same ISO level. This means that at so-called base ISO the larger sensor camera has an advantage, as the smaller sensor cannot decrease the ISO any further and will need to close the aperture. It also means that ISO 100 does not mean the same SNR across formats: when we compare two shots at the same ISO, the larger sensor will have more signal than the smaller one. This is why you sometimes hear things like "why is my shot on my compact camera so noisy at ISO 400 when the full frame looks so clean at ISO 400?" Those ISOs are not actually the same thing, and the smaller sensor collects far fewer photons at the identical ISO number.

Another consequence is that, as the cameras in question have the same megapixel count, larger pixels per se do not yield better SNR.

However, with larger pixels holding more signal, it is possible to extend the amplifier range to higher values of gain; a larger pixel camera (fewer megapixels on the same sensor size) will therefore be able to work at higher ISO levels. This is the reason MFT cameras have a lower maximum ISO than full frame at the same megapixel count.

Underwater we use strobes to counter colour absorption and never reach those high ISO levels. Shooting at night on land without a flash, you might easily reach values like ISO 25600 or 64000; with strobes, however, we rarely exceed even ISO 1600.

Myth 2: Larger Sensor -> Better Dynamic Range

The characteristic that drives dynamic range is not actually sensor size but pixel size; however, at some point DR no longer grows with very large pixels.

This graph shows that the Panasonic GH5 has a respectable DR at low ISO; however, it drops faster than that of the D500 and 1DX MkII. Surprisingly for some, the D500 has more DR than the larger-pixel 1DX MkII.

Dotted line for DOF equalisation purposes

If we look at the maximum possible DR, and at the ISO at which we would still have 7 bits of colour and at least 10 stops of DR, we get the following values:

Camera         | Max DR  | Highest Usable ISO
Canon 1DX MkII | 13.5 Ev | 3207
Nikon D500     | 14 Ev   | 1324
Panasonic GH5  | 13 Ev   | 807
The larger pixel size makes usable DR extend to higher ISO

Although the larger pixel camera does not hold the highest DR, it is able to shoot at higher ISO while still keeping a decent DR and colour tone.

If we calculate the Ev difference between those ISO values, we see that the MFT sensor is 2 Ev away from full frame and the APSC is 1.3 Ev away; this is pretty much in line with the crop factors, so once we equalise depth of field there is no benefit between the various formats at the same megapixel count, although the Nikon D500 has the highest DR in absolute terms. So with an extremely high amount of light the D500 would be able to produce a higher DR image. Underwater, however, this is rarely the case, so the conclusion is that if you are after a 20 megapixel camera there is no material difference among the formats in practical underwater use.
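
The Ev gaps quoted above are simple base-2 logarithms of the ISO ratios in the table; a sketch of the calculation:

```python
from math import log2

highest_usable_iso = {"Canon 1DX MkII": 3207, "Nikon D500": 1324, "Panasonic GH5": 807}
full_frame_iso = highest_usable_iso["Canon 1DX MkII"]

for camera, iso in highest_usable_iso.items():
    print(f"{camera}: {log2(full_frame_iso / iso):.1f} Ev behind full frame")
# 0.0, ~1.3 and ~2.0 Ev, roughly in line with the 1.5x and 2x crop factors
```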

Myth 3: Larger Pixels are Better at equal sensor size

Although larger pixels are better at sustaining dynamic range, for example in low light, evidence shows that as long as the camera is not limited by diffraction, more megapixels are better.

I am comparing here 3 Nikon full frame cameras that have respectively 24, 36 and 45.7 megapixels.

SNR is not impacted by pixel size

SNR is not impacted by sensor resolution, mostly because, at similar sensor size, downsampling equalises the smaller pixels.

Dynamic range is also unaffected with more megapixels having better results

Looking at dynamic range the situation is the same; in fact the camera with more megapixels has an edge until ISO values become very high.

Color Sensitivity appears to benefit from Pixel Count

Finally, the graph for color sensitivity, an important metric for shots with strobes and portrait work, confirms that more megapixels also bring better results.

Please note that this data is limited to sensor analysis and does not take into account the effect of very small pixels on diffraction and sharpness, which is a topic of its own.

Choosing a Camera for Social Media

Today the majority of people do not print their images but post them on social media or websites. Those platforms typically use low resolutions, frequently less than 4 megapixels. Screens usually have low dynamic range, and JPEG images are generally limited to 12 Ev of dynamic range; this is within reach of any camera today, starting from 1″ compacts, but beyond the capability of the majority of computer screens or phones.

My suggestion for users who post on social media is to find the best camera that fits their budget and ergonomics and worry less about sensors; invest instead in optics (wet lenses, or lenses and ports) and strobes, as those will yield a higher return.

Today most cameras have a port system anyway, so an advanced compact such as the Sony RX100 series or a small form factor Micro Four Thirds camera (Panasonic GX9 for example) is more than enough.

Choosing a Camera for Medium Size Print

I print my images typically on 16″x12″ or 18″x12″ paper or canvas.

Generally I want around 300 dpi, which means I need a 20 megapixel camera as a minimum. This rules out a large part of the smaller MFT cameras and also the compacts, because their real life resolution is far from the declared pixel count.
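
The arithmetic behind the 20 megapixel minimum, as a quick sketch:

```python
def megapixels_for_print(width_in, height_in, dpi=300):
    """Pixels needed to print at the given size and resolution, in megapixels."""
    return (width_in * dpi) * (height_in * dpi) / 1e6

print(megapixels_for_print(16, 12))  # ~17.3 MP
print(megapixels_for_print(18, 12))  # ~19.4 MP -> hence ~20 MP as a minimum
```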

In my opinion, if you are a user who prints medium formats, a pro grade MFT or an APSC camera is all you need; besides, the latest winner of UPY shoots APSC with a Tokina lens, and plenty of winners don't use full frame.

For those who just want the Best

The best image quality today is produced by high megapixel full frame cameras; there is no doubt about it. Full frame cameras, however, are subject to depth of field constraints, and as we have seen, once you shoot at equal depth of field the benefit is mostly eroded.

To get the best out of a high megapixel full frame camera you need to be able to shoot at the lowest possible ISO. This is almost impossible if you are shooting a fisheye lens behind a dome: at f/11 very little light hits the sensor, so your ISO will frequently hit 400, and at that point the benefit of full frame is gone.

I have looked at all the technical details of Alex Mustard's images in his book: nearly all shots taken with a full frame camera are at ISO 400 or higher, with very few exceptions at 200 or lower.

So how do you manage to shoot at the lowest possible ISO on full frame? You need to be able to shoot at wider apertures, and today this means optics like the Nauticam WACP, which gives a two stop benefit over a wide angle lens and three over a rectilinear lens behind a dome on full frame.

WACP retails at $4,500 plus sales tax

The WACP, however, has a field of view of 130 degrees; it is therefore not as wide as a fisheye and is unsuitable for close focus wide angle. Nauticam has recently released the WACP-2, which retails at $7,460 and covers 140 degrees.

My consideration is that, if you are not prepared to spend money for a WACP like solution, then there is no point investing in a full frame system as the benefit goes away once you equalise depth of field.

The Nikon D850, once DOF is equalised, performs worse than the old D7200 APSC

Conclusion

Underwater photography is an expensive hobby, and every time I am on a boat and see how much money goes into equipment to produce average photos, it saddens me. While improving technique is only a matter of practice and learning, making the right choice is something we can all do once we have the correct data and information at hand.

I hope this post is useful and helps your decision making going forward.

Focussing Techniques for Video – Part II Auto Focus Settings

If you have some experience with video on land you will know that many professional videographers do not use autofocus but rely on follow focus devices. Basically those are accessories that control the focus ring of the camera and avoid the shake that you would create if you were turning the focus ring with your hand.

The bad news is that there are no devices to perform follow focus underwater, and if you use a focus knob you will indeed create camera shake. This is the primary reason I do not use focus knobs on any of my lenses, with the exception of the Olympus 60mm macro; and on those rare occasions when I use it, it is not actually to obtain focus but to ensure I am at the closest working distance.

So how do you achieve good focus if you can't use a focus ring and continuous autofocus cannot be trusted? There are essentially three methods, which I will discuss here with some examples:

  1. Set and forget
  2. Set and adjust
  3. Optimised Continuous Autofocus

You will have noticed that there is still an option for continuous autofocus in the list. Before we drill down into the methods I want to give some background on autofocus technology.

If after reading this post you are still confused, I recommend you get some tuition, either by joining my Red Sea trip or 1 to 1 (offered in the Milton Keynes area in the UK).

https://interceptor121.com/2019/07/28/calling-out-to-all-image-makers-1st-interceptor121-liveaboard-red-sea-2020/

Contrast Detect vs Phase Detect and Hybrid Autofocus

The internet is full of autofocus videos showing how well or badly certain cameras perform and how one system is superior to another. The reality is that professional cameramen use follow focus in the majority of cases, and this is because the camera does not know who the subject is.

Though it is true that one focus system may perform better than another, consider that RED cameras use contrast detection autofocus, the same as a cheap compact camera, so clearly autofocus must not be that important to professionals.

The second fact is that any camera focus system needs contrast, including phase detect. Due to the scattering of blue light in water, there are many situations where scene contrast is low, resulting in the autofocus system hunting.

So my first recommendation is to ignore the whole discussion about which focus system is superior, because the reality is that there will be situations where focus is difficult to achieve and the technology will not come to the rescue. You need to devise strategies to make things work, and that is what this post is about.

Let's now go through the techniques.

Method 1: Set and Forget

As the name implies, with this method we set focus at the beginning of the shot and never change it again. This means disabling the camera's continuous focus in video mode; this is essential for the technique to work.

This works in three situations:

  1. Using a lens at the hyperfocal distance behind a flat port
  2. Using wet wide angle lenses
  3. Using fisheye lenses

Method 1.a Hyperfocal Distance Method

I am not going to write a dissertation on this; there is good content on Wikipedia worth a read: https://en.wikipedia.org/wiki/Hyperfocal_distance

The key concept is that at a given aperture there is a subject distance, the hyperfocal distance, at which depth of field reaches infinity. The wider the lens, the closer this distance. For example, for a 14mm lens on a micro four thirds body at f/5.6 it is 1.65 meters, so if you focus on an object at this distance, everything between about 0.8 meters and infinity will be in focus. As you close the aperture, the hyperfocal distance gets shorter. This technique is good for medium or reefscape shots where you are happy for the whole frame to be sharp. It is not suitable for macro or close shots, as the aperture required would be too small and diffraction would kick in.
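
For those who want to work out their own distances, here is a minimal hyperfocal sketch; the circle of confusion value is an assumption chosen to reproduce the figure quoted above, and calculators differ on this constant:

```python
def hyperfocal_m(focal_mm, f_number, coc_mm=0.021):
    """Hyperfocal distance H = f^2/(N*c) + f; focus at H and H/2..infinity is sharp."""
    h_mm = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    return h_mm / 1000

h = hyperfocal_m(14, 5.6)
print(f"H = {h:.2f} m, in focus from {h / 2:.2f} m to infinity")  # ~1.7 m and ~0.8 m
```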

Looking back at CWK's clips, if continuous autofocus had been disabled and he had focussed at the start of the scene at 1.85 meters, no refocus would have been required until the manta was at 0.9 meters. Note that distances have to be adjusted to account for the magnification effect of water.

Once you have your lens and aperture settings you can quickly work out some distances in your scene and fine tune your technique.

Obviously, shooting these scenes with a flat port is not exactly the most common method; however, understanding this technique is paramount to the other two.

Method 1.b and 1.c Wet Lenses and Fisheyes

Fisheye lenses tend to have an incredible amount of depth of field even wide open, so set and forget applies in full here without even bothering about hyperfocal distance. Usually focussing on your feet is all that is required.

The real revelation for this technique are afocal wet lenses. Afocal means that the focal length of the wet lens is infinite and the light passing through neither diverges nor converges. Together with the magnification factor, typically 0.3-0.4x, this means you get a fisheye-like situation without the same amount of distortion.
This is the primary reason to buy a lens like the Nauticam WWL-1 or an Inon wet lens with an afocal design.

My Tiger and Hammerhead videos are shot with the camera locked in manual focus after focussing on my feet.

Even when the shark hits the camera the image is in focus

I do not have technical information on the newer Nauticam WACP-1 or WACP-2, so I am not in a position to confirm whether those lenses are afocal, and therefore I cannot help you there. I would think the considerations on depth of field still apply. If Nauticam or a shop or user lends me a setup for pool testing, I can provide optimised settings for the WACP.

Set and forget is the number one method for wide angle and reefscapes underwater and it is easy.

Method 2: Set and Adjust

As the name implies, this method sets the focus at the beginning of the shot and then adjusts it when required; this is necessary especially in macro situations.

The set and adjust method varies depending on how the camera manages push-to-focus. If the camera can refocus on a half press of the shutter, no settings are required other than disabling continuous autofocus.

For cameras that do not have a half-shutter refocus setting, you need to operate in manual focus and then set a custom button to perform a single autofocus.

In both cases you need peaking to be active during the shot.

Procedure:

  1. Set the focus as required using half shutter or AF On button
  2. Observe the peaking to ensure the subject is in focus, moving the camera if required.
  3. In case of loss of focus refocus using the shutter or the AF On button

This method works well with macro, where typically you set focus and then move the camera back and forth to keep it; in those cases where you want to switch focus to another part of the frame, you refocus. This would have helped Brian in the two-crab situation.

As the refocus brings a moment of blur into the clip, you need to ensure that when you trigger it the camera will succeed; this is best achieved using a single focus area.

Method 3: Optimised Continuous Autofocus

Although autofocus has some risks, there are situations where it is required; those include:

  • Shooting at apertures that do not give sufficient depth of field to warrant set and forget
  • Using dome ports and rectilinear lenses; in my experience those do not work well with hyperfocal distances due to the physics of dome ports

Obviously the best option remains using a wet lens and set and forget; however, there are instances where we absolutely want straight lines, for example when shooting divers or models. In those cases we will use a dome port, and as we can't use a focus gear, because the camera would shake, we need autofocus.

Focus Area Settings

Cameras have a selection of modes to set the area that will be used by autofocus:

  1. Face / Animal recognition -> locks on recognised shapes
  2. Multi area -> selects the highest contrast areas among a number of smaller areas of the frame; cameras have up to 225 or more areas and you can customise the shape of the grid
  3. Single area -> an area of selectable size and position in the frame
  4. Tracking -> tracks the contour of an object in the frame

Face recognition and animal recognition are not useful in our case.

Tracking requires the object to keep its shape within the frame. This is useful for nudibranchs, for example, or anything else that does not change shape; a fish turning will be lost by this method, so it is seldom used. To be honest, it fails most of the time on land too.

So we really are left with multi area and single area.

My advice is to avoid multi area, because particles in the water, for example, can generate sufficient contrast to fool the camera and make it lock onto them.

So the best option is single area. I typically set this to a size smaller than the central third of a nine-block grid. With this configuration it is also possible to focus on an off-centre subject by moving the area within the frame. This setting works well when the subject is followed by our own movement and stays in the centre, which covers the majority of situations.

This video is shot on a 12-60 mid range zoom using single area AF for all scenes including macro.

The single most significant risk with single area is that if the centre of the frame moves to blue water the camera will go hunting. So if you are shooting in caves or on a wall, make sure the AF area is on one side of the frame, or occasionally lock focus to prevent the camera from seeking focus it won't find.

Conclusion

Achieving focus in underwater video requires different techniques from land use and a good understanding of ports and optics.

If you think you are not skilled enough and need help from autofocus, my advice is to get an afocal wet wide angle lens. This will transform your shooting experience and guarantee that all your wide angle is in focus. For macro situations you need to master the single AF area setting of your camera and make sure you are super stable.

The most difficult scenario is using dome ports, and this is one of the reasons I do not recommend them for video. If you are adamant about rectilinear lenses, then apply the specific settings described above.

Donations are appreciated; use the PayPal button on the left.

Focussing Techniques for Video – Part I Problem Diagnostic

Thanks to Brian Lim and WK's Gone Diving for providing some examples.

When I started thinking about writing this post, I thought of presenting a whole piece on the theory of focus and how a camera achieves it; however, I later decided it made more sense to start from examples and then drill down into the theory based on specific cases.

So we will look at three common issues, understand why they happened and then discuss possible mitigations.

Issue 1: Wide angle Manta Focus Hunt

This clip was provided by WK's and was taken during a trip to Socorro.

The water is quite dark and murky, and there is a substantial amount of suspended particles in the water; otherwise we would not have mantas. The water is also fairly milky, so the image lacks contrast, which is not ideal for focusing, as all cameras, including those using phase detection AF, need contrast.

WK’s had a flat port and was shooting quite narrow aperture at f/7.1 which should ensure plenty depth of field on his 14mm lens.

In this clip you can literally see the autofocus pulsating, trying to find focus; the hunting carries on until the manta is very close, at around 15 seconds into the clip. At that point the clip is stable, but the overall approach has been ruined.

Diagnostics

The key observations are that the subject was not in focus at the very beginning of the shot, and that you can distinctly see some fairly bright particles come into the scene (at 0:04 for example) and disturb the camera: they create strong contrast against the black manta, and the camera cannot decide which is the subject, so it starts hunting. When the manta is close and well defined in the frame, the camera knows she is the subject, and the focus issues stop. The white particles in the water, while the manta is far away, are large and bright enough to be picked up by the matrix points of the camera AF; this is true regardless of the manta being in the frame, and the same would have happened if another fish had photobombed the scene.

Solution

The problem in this clip is not new to video shooters: similar things happen when the bride is walking to the altar and someone, the priest or the groom, steps into the frame while they are far apart. On land you would keep control using manual focus, or, if you were really daring, tracking. In our case WK's does not have a focus gear, and it is not possible for him to change focus manually.

WK's could have used tracking, if available on the camera. With tracking you need to ensure the camera can lock onto the manta, and then, if it does, that the manta does not turn or change shape and that nothing bigger comes in front; at that point everything would work. This is a high risk technique, only worth trying in clear water with no particles, so in this scenario it is not advised.

The last option, and the solution to this issue, was for WK's to switch to manual focus and engage peaking: use a single AF On to focus on his feet or an intermediate target, then check that the manta was in focus. If focus was lost, WK's could have triggered AF again, at least controlling how many times the camera refocussed.

Issue 2: Macro Subject Switching

This other clip has been provided by Brian Lim and it is a macro situation.

We can see that there are particles flying in the water and some other small critters at close range. The main subjects are the large crab and the two small crabs in the foreground.

Brian is not happy about the focus on this shot as not everything is sharp.

Diagnostics

Despite the murky water, Brian has correctly locked focus on the crabs in the foreground, but due to the high magnification the camera does not have sufficient depth of field to render both the small and the large crab crisp in the frame. It is possible that Brian could not see on his screen that the crab behind was not sharp, which could have been avoided with peaking. In any case it is likely that this shot could not be made sharp end to end. Brian is super stable in the shot, so he was set up to make it work.

Solution

Brian does not have a focus gear on this camera; one would have been required to pull focus within the same shot from the small crabs to the larger crab.

However, even in this situation, in manual focus Brian could have shot two clips focussing on the two different focal planes and then managed this in post. It is critical to be able to review focus on screen as we shoot, or to review right afterwards before we leave the scene.

Issue 3: Too many fish and too much water

The last clip is mine and is taken during a recent trip to Sataya reef.

I have deliberately left this clip uncut because it lets you see that you can use autofocus in water behind a dome port, and for the most part it works; but there are pitfalls, and the most photogenic dolphins, at 00:50, are initially blurred.

Diagnostics

I was not expecting the sheer number of dolphins on the day, and certainly not this close, so I had a standard zoom lens at 24mm full frame equivalent behind a dome port. In most cases I managed to keep some fish in the AF area of the camera, but at 00:45 and 00:58 the camera has nothing in the middle of the frame and goes on a hunt.

Solution

Working with a dome port and a lens of that nature does not guarantee enough depth of field to leave the camera locked, even at f/8, so some refocussing was indeed required. I was using a single AF area in the centre, and in those moments the camera sees just blue, has nothing to focus on and goes on a hunt; as soon as the subject is back in the AF area, the camera locks back in. Note that the AF is not fast enough to follow when the dolphins come too close, so the only real solution here was a wider lens; however, I could have avoided the hunt by engaging AF lock the moment the AF area was empty, preventing the camera from re-engaging.

Summary

In all the examples in this post, the issues were generated by a lack of intervention. All the situations I have analysed could for the most part have been dealt with at the time of the shot and did not require extra gear. I believe that when we are in the water there is already a lot to think about, and therefore we make mistakes or fail to apply the decisive corrective action that would have saved the shot.

In the next post I will drill down into focus settings and how they can help your underwater shots, and also discuss how they apply to macro, wide and mid shots. I am also happy to look at specific examples or issues, so please get in touch. Specific coaching or troubleshooting is provided in exchange for a drink or two.

Donations are appreciated; use the PayPal button on the left.

Announcing New 2020 Offering

Dear readers, in 2020 I will be adding some services to the blog, reflecting requirements that have been developing over the last few years.

It happens at times that people get in touch, either through comments or directly by email, to ask about their current challenges, so I thought: why not address this with a bespoke service? Here are my current ideas:

  • Equipment selection – this is generally to do with ports, lenses, strobes, lights and accessories more than with camera and housing
  • Photo editing clinic – people seem to struggle with editing their images. While some are definitely skilled, the majority aren't, and editing an image is almost as important as shooting it well
  • Video editing clinic – as above but for video, which is sometimes even more complex

Those will be offered at the symbolic price of a few beers at UK prices: a £10 donation using the link on the left hand side.

Other topics that are also becoming interesting are discussions around issues like focus, framing and lens quality. For those I welcome input material by email at interceptor121@aol.com: send me your images or videos with problems and I will use them to build an article for your benefit and that of others.

I am currently working on a feature on focus in video, so I am looking for your blurred videos (sorry); as I don't have many myself, I need some help from you guys.

Thank you for reading this short post!

Export Workflows for underwater (and not) video

This post is going to focus on exporting our videos for consumption on a web platform like YouTube or Vimeo.

This is a typical workflow for video production

For this post we want to focus on the export-to-publish steps, as things are not as straightforward as they may seem.

In general, each platform has specific requirements for uploads and predefined encoding settings to create its own version of your upload; this means it is advisable to feed those platforms files that match their expectations.

The easiest way to do this is to separate the production of the master from the encodes that are needed for the various platforms.

For example, in Final Cut this means exporting a master file in ProRes 422 HQ, in my case from GH5 10 bit material. Each camera differs, and if your source material is of higher or lower quality you need to adjust; in any case the master will be a significantly large file with mild compression, based on an intermediate codec.

So how do we produce the various encodes?

Some programs like Final Cut Pro have specific add-ons, in this case Compressor, to tune the export; however, I have had such poor experience with Compressor and underwater video that I do not use it and do not recommend it. Furthermore, we can separate the task of encoding from production by inserting platform independent software into the workflow.

Today encoding happens primarily in the H264 and H265 formats, through a number of encoders, the most popular being x264 and x265, which are free. There are commercial rights issues in using HEVC (x265 output) for streaming, so a platform like YouTube uses the free VP9 codec while Vimeo uses HEVC. This does not matter to us.

So to upload to YouTube, for example, we have several options:

  1. Upload the ProRes file
  2. Upload a compressed file that we optimised based on our requirements
  3. Upload a compressed file optimised for YouTube requirements

While option 1 is technically possible, we are talking about 200+ GB per hour, which means endless upload times.

Option 2 may lead to unexpected results, as you cannot be sure how YouTube's output quality will match your file, so my recommendation is to follow option 3 and give the platform what it wants.

YouTube Recommended Settings are on this link

YouTube recommends the following H264 settings for SDR (standard dynamic range) uploads:

  • Progressive scan (no interlacing)
  • High Profile
  • 2 consecutive B frames
  • Closed GOP. GOP of half the frame rate.
  • CABAC
  • Variable bitrate. No bitrate limit required, although we offer recommended bitrates below for reference
  • Chroma subsampling: 4:2:0

There is no upper bitrate limit, so of course you can make significantly large files; however, with H264 there is a point beyond which you cannot see any visible difference.

Recommended video bitrates for SDR uploads

To view new 4K uploads in 4K, use a browser or device that supports VP9.

Type       | Video Bitrate, Standard Frame Rate (24, 25, 30) | Video Bitrate, High Frame Rate (48, 50, 60)
2160p (4K) | 35–45 Mbps | 53–68 Mbps
1440p (2K) | 16 Mbps | 24 Mbps
1080p | 8 Mbps | 12 Mbps
720p | 5 Mbps | 7.5 Mbps
480p | 2.5 Mbps | 4 Mbps
360p | 1 Mbps | 1.5 Mbps
YouTube bitrate table

YouTube's recommended settings are actually quite generous, and with a high quality encode we could easily create smaller files; however, we do not know what logic YouTube applies to its compression if we deviate, so to be safe we will follow the recommendations.

It is very important to understand that bitrate controls the compression together with other factors; to get a good file we also need to put good logic into the analysis of the footage itself, as this greatly influences the quality of the compression process.

There is a whole book on x264 settings if you fancy a read here.

For my purposes I use HandBrake, and to make YouTube happy I use variable bitrate with two passes and a target bitrate of 45 Mbps. On top of that I have a preset that takes into account what YouTube does not like and performs a solid motion analysis, as H264 is motion compensated. This is required to avoid artefacts.
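
As a sketch of what such an export step can look like when scripted, here is a minimal Python wrapper around HandBrakeCLI for a 4K 25p master; the file names are placeholders and the x264 option string is illustrative (my actual presets differ):

```python
import subprocess

# Hypothetical YouTube encode roughly following the guidelines above: high profile,
# 2 B-frames, closed half-second GOP (keyint = fps/2), CABAC, 2-pass VBR at 45 Mbps.
cmd = [
    "HandBrakeCLI",
    "-i", "master_prores422hq.mov",
    "-o", "youtube_2160p25.mp4",
    "--encoder", "x264",
    "--encoder-profile", "high",
    "--vb", "45000",          # target average bitrate in kbps
    "--two-pass", "--turbo",  # two-pass VBR with a fast first pass
    "--cfr",                  # constant frame rate
    "--encopts", "bframes=2:keyint=12:min-keyint=12:open-gop=0:cabac=1",
]
subprocess.run(cmd, check=True)
```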

Note the long string of x264 coding commands

I have tested this extensively against the built in Final Cut Pro X YouTube Export.

Exporting from the timeline directly to YouTube resulted in a file of 88 MB, starting from a comparable 7.06 GB ProRes 422 HQ master for the project. Following the guidelines and the HandBrake process, I ended up with 110.1 MB, which is a 24% increase.

I have also exported to H264 in FCPX; this gave me a 45.8 Mbps file, but when I checked the resulting file on YouTube it was still 12% smaller than my manually generated one. I used 4K Video Downloader to retrieve file sizes.

Same source file different encodes different results in YouTube

For HDR files there are higher allowed bitrates and considerations on colour space and colour depth, but it is essentially the same story, and I have developed HandBrake presets for that too.

When I produce an export for my own use I choose H265, usually at a 16 Mbps bitrate, which is what Netflix maxes out at. Using quality mode at RF=22 produces files of around 20 Mbps, which is amazing considering the 400 Mbps starting point of GH5 AVCI files. To give you an idea, YouTube's own files range between 10 and 20 Mbps once compressed in VP9. I cannot see any difference between my 16 Mbps and 20 Mbps files, so I have decided to stay with the same settings as Netflix: if it works for them, it will work for me.

There is also a YouTube video to explain in detail what I just said and some comparative videos here

For all my YouTube and blog subscribers (you need to be both): please fill in the form and I will send you my 3 HandBrake presets.

Edit following some Facebook discussions: the claim is that if you want to upload HD you get better results by making the file 4K. According to my tests this is not true. Using x264 and uploading an HD file produces the same or better results than the HD clip YouTube creates from the same source via a 4K upload. I would be wary of what you read on the internet unless you know exactly how the clips were produced; 90% of the issue is poor quality encoding before the file even gets to YouTube!

Colour Correction in underwater video

This is my last instalment of the getting the right colour series.

The first read is the explanation of recording settings

https://interceptor121.com/2018/08/13/panasonic-gh5-demystifying-movie-recording-settings/

This post has been quite popular, as it applies to the GH5 in general, not just for underwater work.

The second article is about getting the best colours

https://interceptor121.com/2019/08/03/getting-the-best-colors-in-your-underwater-video-with-the-panasonic-gh5/

And then of course the issue of white balance

https://interceptor121.com/2019/09/24/the-importance-of-underwater-white-balance-with-the-panasonic-gh5/

I am not getting into ambient light filters, but there are articles on that too.

Now I want to discuss editing, as I see many posts online that are plainly incorrect. As is true for photos, you don't edit just by looking at a histogram. The histogram is a representation of the average of the image, and that is not the right basis for creating strong images or videos.

You need to know how the tools work in order to do the appropriate exposure corrections and colour corrections but it is down to you to decide the look you want to achieve.

I like my images, video or still, to be strong, with deep blues and a generally dark look; that is the way I go about it and it is my look. The same tools, however, can be used to achieve whatever look you prefer for your material.

In this YouTube tutorial I explain how to edit and grade footage produced by the camera and turn it into something I enjoy watching time and time again.

I called this clip Underwater Video Colour Correction Made Easy, as it is not difficult to obtain pleasing colours if you have followed all the steps.

A few notes just to anticipate possible questions

1. Why are you not looking to have the Luma or the RGB parades at 50% of the scale?

50% of the IRE scale corresponds to 18% neutral grey. I do not want my footage to look washed out, which is what happens if you aim for 50%.

2. Is it important to execute the steps in sequence?

Yes. Camera LUTs should be applied before grading, as they normalise the gamma curve. Among the correction steps, setting the correct white balance influences the RGB curves and therefore needs to be done before further grading is carried out.

3. Why don’t you correct the overall saturation?

Most of the highlights and shadows are in the light grey or dark grey areas. Saturating those can lead to clipping or noise.

4. Is there a difference between using corrections like Vibrancy instead of just saturation?

Yes: saturation shifts all colours equally towards higher intensity, while vibrancy tends to stretch the colours in both directions.

5. Can you avoid an effect LUT and just get the look you want with other tools?

Yes this is entirely down to personal preference.

6. My footage straight from camera does not look like yours and I want it to look good straight away.

That is again down to personal preference; however, if you crush the blacks, clip the highlights, or introduce a hue cast by clipping one of the RGB channels, this can no longer be remedied.

I hope you find this useful, and I wish all my followers a Merry Xmas and a Happy 2020.

Canon 8 – 15 mm Fisheye on the Panasonic GH5 Pool Tests

It was time to get wet and test the Canon 8-15mm fisheye on the GH5 in the pool, so I made my way to Luton Aspire with the help of Rec2Tec Bletchley.

I had the chance to try a few things, first of all to understand the strobe coverage of the fisheye frame; this is something I had not tested before, but I had built a little model.

In purple the ideal rectangle built with the maximum width and height of the fisheye frame

This model ignores the corners: the red circle represents 90 degree light beams and the amber one the 120 degree angle. A strobe does not have a sharp fall-off when you use diffusers, so this model assumes your strobe can stay within 1 Ev of loss out to around 90 degrees and then drop to -4 Ev at 120 degrees. I do not want to dig too deep into this topic; anyway, this is what I expected, and this is the frame.

Shot at 1.5 meters from pool wall

You can see a tiny reflection of the strobes together with a mask falling on the left hand side… In order to test my theory I ran this through false colour on my field monitor; at first glance it looks well lit, and this is the false colour output.

False colour diagram of previous shot

As you can see, the strobes drop below 50% at the green colour band, so the nominal width of those strobes is probably 100 degrees. In the deep corners you see the drop to 20%, 10% and then 0%.

Time to take some shots

Divers hovering @ 8 mm

The lens is absolutely pin sharp across the frame; I was shooting at f/5.6 in the 140 mm glass dome.

Happy divers @ 9 mm
BCD removal @ 10 mm
Gliding @ 11 mm
Open Water class @ 12mm
Divers couple @ 13 mm
Hover @ 15 mm

Performance remains stunning across the zoom range. I also tried a few shots at f/4.

9 mm f/4

There is no reef background, but it looks pretty good to me.

The pool gives a strong blue cast so the shots are white balanced.

Details of the rig and lens mount are in a previous post:

https://interceptor121.com/2019/11/02/fisheye-zoom-for-micro-four-thirds/

Panasonic GH5 zoom fisheye rig

Matching Filters Techniques

The issue is that ambient light filters are set for certain depths and water conditions and do not work well outside that range. While the idea of white balancing the scene and getting colour to penetrate deep into the frame is great, the implementation is hard.

Thinking about Keldan, they list a 6 meter version and a 12 meter version on their website. The 6 meter version works well between 4 and 12 meters, the other between 10 and 18. At the same time, the Spectrum filter for the lens works down to 15 meters at most and really performs best shallower than 12 meters.

With that in mind, it follows that if you plan to use the Spectrum filter -2 you are probably getting the 6 meter ambient light filters. So what happens if you go deeper than 12 meters? The ambient light filter is no longer aligned to the water's ambient light and the lights start to look warm; this is not such a bad thing, but it can get bad at times.

You can of course white balance the frame with the lights on; however, this becomes somewhat inconvenient, so I wanted to come up with a different technique. In a previous post I described how to match a lens filter to a light/strobe filter. Instead of matching the light filter to the ambient light, I match the filters to each other on land, in daylight conditions, to obtain a combination that is as neutral as possible. I have done this for the URPRO, Magic and Keldan Spectrum filters, working out for each the light filter that, when combined, gives a neutral tone.

Magic filter combined with 2 stops cyan filter giving almost no cast

This tone tends to emulate the depth where the filter has its best colour rendition. In the case of Keldan this is around 4 meters, as it is for Magic, with URPRO going deeper, around 6-9 meters.

The idea is that you can use the filter without lights for landscape shots, and when you put the lights into the mix you can almost shoot in auto white balance, or set the white balance to the depth where the two were matched. I wanted to try this theory in real life, so I did 3 different days of diving testing the combinations I had identified; the results are in this video.

The theory of matching filters worked, and the filters more or less all performed as expected. I did have some additional challenges that I had not foreseen.

Filter Performance

The specific performance of a filter is dependent on the camera's colour science. I have had great results with URPRO combined with Sony cameras, but with Panasonic I have always had an orange cast in the clips.

Even this time the same issue was confirmed: the URPRO produces this annoying cast, which is hard if not impossible to remove, even in post.

The Magic filter and the Spectrum filter performed very closely, with Magic giving a more saturated, baked-in image and Keldan maintaining higher tone accuracy. This is the result of the design of the filters: the Magic filter has been designed to take outstanding pictures, better than life, while the Spectrum filter has been designed using tools to give accurate colour rendition. What it means is that the Magic images look good even on the LCD, while the Keldan ones are a bit dim but can be helped in post.

Looking at the first three and a half minutes of the clip, you can't tell Magic and Spectrum apart down to 9 meters, while the URPRO gives a consistent orange cast.

Going a bit deeper, I realised you also need to handle the scenario where you are swimming close to a reef and want to bring some lights into the frame because you are outside the best working range of the filter. To avoid an excessive gap when approaching the reef, I had stored white balance readings at 6, 9, 12 and 15 meters. When I had a scene with mixed light, instead of balancing for, say, 15 meters and then having an issue with the lights, I used the 9 meter setting, so the image is dim when you are far away and gets colourful as you approach, which is somewhat expected in underwater video.

The sections at 15 meters are particularly interesting.

You can see that URPRO gets better with depth, but also how at 5:46 there is a fairly dim reef and at 5:52 I switch on the lights, and the difference is apparent.

At 6:20 the approach with Keldan was directly with the lights on; the footage still gives an idea of depth, but the colours are there and the background water looks really blue, as I had white balance set for 9 meters.

Key Takeaways

All the filters produced acceptable results; however, I would not recommend URPRO for the Panasonic GH5, and I would settle for the Magic filter or the Spectrum filter. Today the Spectrum is the only wet filter for the Nauticam WWL-1, but I am waiting for some prototypes from Peter Rowlands for the Magic. I would recommend both the Magic and the Spectrum, and the choice really depends on preference. If you want a ready look with the least retouching, the Magic filter is definitely the way to go, as it produces excellent ready-to-use clips that look good immediately on the LCD.

The Keldan Spectrum filter has a more desaturated look and requires more work in post but has the benefit of a more accurate image.

I think this experiment proved the method works, and I will use it again in the future. It is also potentially applicable with Keldan or other ambient lights, using a tone that closely matches the lens filter.

 


Choosing the Appropriate Frame Rate for Your Underwater Video Project

I think the subject of frame rates for underwater video is filled with a level of nonsense second to none. Part of this is GoPro generated: the GoPro, being an action cam, started proposing higher frame rates as standard, and this triggered a chain reaction where every camera manufacturer in the video space has added double frame rate options to the in-camera codec.

This post, which will no doubt be controversial, will try to demystify the settings and eliminate some fundamental misconceptions that seem to populate underwater videography.

The history of frame rates

The most common frame rates used today include:

  • 24p – used in the film industry
  • 25p – used in the PAL broadcasting system countries
  • 30p – used in the NTSC broadcasting system countries

PAL (Phase Alternating Line) and NTSC (National Television System Committee) are broadcast colour systems.

NTSC covers the US, parts of South America and a number of Asian countries, while PAL covers pretty much the rest of the world. This post will not go into the details of which system is better, as those systems are a legacy of interlaced television and cathode ray tubes and are, for most of us, something we have to put up with.

Today most of the video produced is consumed online, so broadcasting standards are only important if you produce something that will go on TV, or if your footage includes artificial lighting connected to the power grid – LED lights do not matter here.

So if movies are shot in 24p, and this is not changing any time soon, why do those systems exist? Clearly, if 24p were not adequate it would have changed long ago; except for some experiments like 'The Hobbit', 24p is totally fine for today's use, even if it is a legacy of the past.

The human eye has a reaction time of around 25 ms and is therefore not actually able to track a moving object in the frame at rates higher than about 40 frames per second; it will, however, notice if the whole scene moves around you, as in a first person video game. Our brain does a brilliant job of filling in what is missing and can't really tell the difference between 24/25/30p in normal circumstances. So why do those rates exist?

The issue has to do with the frequency of the power grid and the first TVs based on cathode ray tubes. As the US grid runs on alternating current at 60 Hz, when you watch a movie shot at 24p on TV it judders. The reason is that the system works at 60 fields per second, and to fit 24 frames per second into it a technique called telecine is used: frames are alternately held for two and three fields (2:3 pulldown) so that the total comes to 60 fields per second. However, this looks poor and creates judder.
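
A toy sketch of the 2:3 pulldown pattern behind telecine, showing how 24 frames become 60 fields:

```python
# 2:3 pulldown: alternate film frames are held for 2 and 3 interlaced fields.
fields = []
for frame in range(24):                  # one second of 24p film
    repeat = 2 if frame % 2 == 0 else 3  # 2 fields, then 3 fields, alternating
    fields.extend([frame] * repeat)

print(len(fields))  # 60 fields per second
# Frames held for 3 fields stay on screen 1.5x longer than their neighbours: judder.
```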

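To make the cadence concrete, here is a minimal sketch of the 2:3 pulldown pattern in Python; the frame names and the helper function are purely illustrative, not taken from any real telecine tool.

```python
# Minimal sketch of 2:3 pulldown: 4 film frames become 10 interlaced fields,
# so 24 frames/s -> 60 fields/s. Frame names A-D are illustrative only.
def pulldown_2_3(frames):
    """Map each film frame to 2 or 3 fields, alternating (the 2:3 cadence)."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields

film = ["A", "B", "C", "D"]    # 4 consecutive frames of a 24p movie
print(pulldown_2_3(film))      # ['A','A','B','B','B','C','C','D','D','D']
# 10 fields per 4 frames: 24 * 10/4 = 60 fields per second.
# B and D persist for 3 fields while A and C last only 2, and this
# uneven cadence is what the eye perceives as judder.
```
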
In the PAL system the grid runs at 50 Hz, so 24p movies are simply accelerated to 25p; this is the reason film durations are shorter on PAL TV – a 100-minute movie runs in 96 minutes – and the resulting increase in audio pitch of roughly 4% is barely noticeable.

Clearly, when you shoot in a television studio with a lot of grid-powered lights you need to make sure you don't have any flicker, and this is the reason 25p and 30p video frame rates exist. Your brain can't tell the difference between 24p/25p/30p, but it can very easily notice judder, and this has to be avoided at all costs.

A computer display or a modern LCD or LED TV can display any frame rate without issues; therefore, unless you are shooting under grid-powered artificial lights, you do not have to stick to any broadcasting system.
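
To make the flicker constraint concrete, here is a tiny sketch (my own illustration, not a broadcast standard) of shutter speeds that span whole multiples of the light's flicker period, which is twice the mains frequency:

```python
# Minimal sketch: flicker-safe shutter speeds under mains-powered lighting.
# Lights flicker at twice the mains frequency (100 Hz on a 50 Hz grid), so
# exposures that span a whole number of flicker periods stay evenly lit.
def safe_shutters(mains_hz, count=4):
    flicker_period = 1 / (2 * mains_hz)   # e.g. 1/100 s on a 50 Hz grid
    return [f"1/{round(1 / (flicker_period * n))}" for n in range(1, count + 1)]

print(safe_shutters(50))   # ['1/100', '1/50', '1/33', '1/25']
print(safe_shutters(60))   # ['1/120', '1/60', '1/40', '1/30']
```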

The 180-Degree Shutter Rule

The name is also a legacy (from the rotary shutters of film cameras), but the rule establishes that once you have set the frame rate, your shutter speed has to be double that rate. As there is no 1/48s shutter, 24/25p is shot at 1/50s and 30p at 1/60s; this also keeps everything consistent with the possible flicker of grid-powered lights.

The 180-degree rule gives each frame an amount of motion blur similar to what our eyes experience.

It is well explained on the RED website here. If you shoot with a slower shutter than the rule the frames look blurry; if you choose a faster shutter speed you eliminate motion blur. In general everybody follows this rule and it works perfectly fine.
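
As a quick illustration, the rule boils down to exposure = (angle / 360) / frame rate. This little sketch is my own, not from any camera manual; it also prints the 90- and 45-degree values used for slow motion later in this post.

```python
# Minimal sketch of the shutter-angle rule: exposure = (angle / 360) / fps.
from fractions import Fraction

def exposure(fps, angle=180):
    """Exposure time in seconds for a given frame rate and shutter angle."""
    return Fraction(angle, 360) / fps

for fps in (24, 25, 30):
    print(f"{fps}p at 180 degrees -> {exposure(fps)} s")        # 1/48, 1/50, 1/60 s
for angle in (90, 45):
    print(f"24p at {angle} degrees -> {exposure(24, angle)} s") # 1/96, 1/192 s
```

In practice you round to the nearest available shutter speed, which is why 24p is shot at 1/50s rather than the theoretical 1/48s.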

Double Frame Rates

50p (PAL) and 60p (NTSC) are double frame rates that are not part of any commercial broadcast standard and today are only officially supported for online content.

As discussed previously, our eyes cannot detect more than around 40 frames per second anyway, so why bother shooting 50 or 60 frames per second?

There is a common misconception that if you have a lot of action in the frame you should increase the frame rate. But then why, when you are watching a movie, do you never feel there is an issue, even if it is Iron Man or some other action-packed sci-fi title?

That is because those features are shot well, with a lot of equipment that makes the footage rock steady; the professionals who shoot them follow all the rules, and the result looks great.

So the key reason to use 50p or 60p has to do with not following those rules – with shooting things in a somewhat unconventional manner.

For example, you hold the camera while you are moving – a dashboard cam, say, or holding the camera while running. In these cases the amount of change in the frame is substantial because you are moving, not because things around you are moving. If you stood still at a fixed point it would not feel like there is a lot of movement, but once you start driving your car around there is a lot of movement in the frame.

This brings us to the second issue with frame rates, which is panning; again I will refer to RED for the panning speed explanation.

If you increase the frame rate from 30 to 60 fps you can double your panning speed without making the viewer feel sick.

Underwater Video Considerations

Now that we have covered the basics, we need to take into account the reality of underwater videography. The key facts are:

  • No panning – with few exceptions, the operator is moving with the aid of fins. Panning would require you to stay at a fixed point, something you can only do in situations like a shark dive in the Bahamas
  • No grid-powered lights – at least for underwater scenes. Unless you include shots lit by mains-powered lights, you do not have to stick to a set frame rate
  • Lack of light and colour – you need all the available light you can get
  • Natural stabilisation – as you are in water, a rig of reasonable size floats in the fluid and is naturally more stable

The last variable is the amount of action in the scene and the need for slow motion, if required. The majority of underwater scenes are pretty smooth; only in some cases – sardine runs, sea lions in a bait ball – is there really a lot of motion, and even then you can usually increase the shutter speed without needing to double the frame rate.

Video shot at 50/60p and played back at half speed for the entire clip looks really terrible: you lose the feeling of being in the water. This is something to be avoided at all costs.

Furthermore, you are effectively halving the bit rate of your video; on top of that, the higher frame rate modes of your camera are usually no better than its normal frame rates, and you can always interpolate extra frames in post if you want a more fluid look or a slow motion.
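
As an example of adding frames in post, FFmpeg offers a motion-interpolation filter; this is a hedged sketch (file names are placeholders, and you should check that your FFmpeg build includes the minterpolate filter):

```python
# Minimal sketch: synthesise a 60p version of a 24p clip in post using
# FFmpeg's motion-compensated interpolation. File names are placeholders.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "clip24p.mov",
     "-vf", "minterpolate=fps=60:mi_mode=mci",  # mci = motion-compensated
     "smooth60p.mov"],
    check=True,
)
```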

I have a Panasonic GH5 and enjoy the luxury of normal frame rates, double frame rates, and even a VFR (variable frame rate) option specifically for slow motion.

I analysed the clips produced by the camera with ffprobe to see how the frames are structured and how big they are (a sketch of the method follows this list), and discovered a few things:

  1. The 50/60p recording options at 150 Mbps have a very long GOP: essentially a full (key) frame is recorded every 24 frames, while the 100 Mbps 25/30p options record one every 12 frames. So the double frame rate has more frames but is NOT better at managing fast-moving scenes and changes in the frame.
  2. The VFR option lets you set a higher capture frame rate that is then conformed to the playback frame rate of your choice. For some reason the 24p format has more options than all the others, and 25p does not even have a 50% option. As the footage is recorded at 100 Mbps, VFR footage at half speed conformed to 30p is higher quality than 60p slowed down to 30p (100 Mbps vs 150/2 = 75 Mbps); it is also better in terms of key frames and ability to predict motion, as it has double the number of key frames per second – see this explanation with the details of each frame, and look for the I-frames.
  3. The AVCI (All-Intra) option records only I-frames – 24/25/30 of them per second – and is therefore the best option for capturing fast movement and changes in the frame. If you slow it down to half speed you still have 12 key frames per second, so the remaining frames can easily be interpolated.
  4. Slow motion – as each image stays on screen for longer once slowed down, you need to increase the shutter speed or it will look blurry. If you intend to take a slow-mo you need to make that decision at the time of the shot and go for a 90- or 45-degree shutter angle. This remains true whether you use VFR or slow down AVCI clips in post.
  5. If you decide AVCI is not for you, the ProRes option is pretty much identical, and again you do not need to shoot 50/60p except in specific situations. In general AVCI is equal to or better than ProRes, so the whole point of getting an external recorder is highly questionable – but that is another story.

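For anyone who wants to repeat the analysis, here is a minimal sketch of the kind of ffprobe inspection I ran; it assumes only that ffprobe is installed and on your PATH, and clip.mov is a placeholder file name.

```python
# Minimal sketch: list frame types with ffprobe and measure the GOP length,
# i.e. the spacing between I (key) frames. "clip.mov" is a placeholder.
import subprocess
from collections import Counter

def frame_types(path):
    """Return the sequence of frame types (I/P/B) for the first video stream."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pict_type", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

types = frame_types("clip.mov")
print(Counter(types))   # e.g. mostly P/B frames with sparse I-frames on long GOP

# GOP length = number of frames from one I-frame to the next
i_frames = [i for i, t in enumerate(types) if t == "I"]
print("GOP lengths:", sorted({b - a for a, b in zip(i_frames, i_frames[1:])}))
```

On an all-intra (AVCI) clip every frame comes back as an I-frame, while the long-GOP 50/60p files show key frames spaced far apart, which is exactly the difference described above.
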
For academic purposes I have compared the three different methods Final Cut Pro X offers for slowing footage down. To my surprise the best method is ‘Normal Quality’, which on reflection also makes sense, as there are many full frames to work with.

Now it is interesting to look at my own slow motion, which is not ideal because I did not increase the shutter speed; as the quality of AVCI is high, the footage still looks totally fine slowed down.

Various slow motion techniques in FCPX with a 1/50s shutter

Looking at other people's examples you can get exactly the wrong impression: they take a shot without increasing the shutter speed and then slow it down. The reason 60p looks better there is the shutter speed, not the image quality itself; it is also completely unnecessary to slow down a whale shark as it glides through the water.

The kind of guidance you get

So taking this kind of guidance blindly is not a good idea.

Key Takeaways

  • Unless you shoot using grid-powered lights you can choose any frame rate you want among 24/25/30 fps
  • Shutter speed is important because it controls motion blur, and it can freeze motion for a slow-motion clip
  • You need to decide which scenes are suitable for slow motion at the time of capture
  • Systematically slowing down your footage is unnatural and looks fake
  • Formats like AVCI or ProRes give you better options for slowing down than 50/60 fps implementations with a very long GOP
  • VFR options can be very useful for creative purposes, although they have limitations (fixed focus)

How do I shoot?

I live in a PAL country; however, I always run into limitations with the 25 fps options in camera – the GH5 VFR example is not the only one. All my clips are shot at 24 fps with a 1/50s shutter. I do not use slow motion enough, and if I did I would probably keep using AVCI and increase the shutter speed depending on the effect I want to give the scene; this is also the most natural and easiest way to shoot underwater, as you do not have to continuously change format. Having all intra frames gives me all the creativity I need, including for speed ramps, which are much more exciting than plain slow motion – see this example.

Tips to make the most of underwater time