Category Archives: UNDERWATER VIDEO

Trip Baia di Napoli and Sorrento Peninsula

There is no doubt that until a Covid-19 vaccine is widespread our travel plans will have to adjust to the new conditions. As of today, 2 August 2020, most of our favourite destinations are still on the no-go list and are not covered by travel insurance.

The latest list of countries and territories published by the British FCO does not include Egypt, Indonesia or the Philippines, nor any country in South America, although it does include many Caribbean destinations.

With the situation evolving fast and the imminent prospect of tighter lockdowns as we head towards winter, many people would not travel long haul anyway, to avoid the risk of quarantine or possible issues returning to their home country. So for now, many of us will travel more locally. We have seen lots of new underwater photographs taken in British waters, but there is no doubt this is not out of choice, and most people would rather be elsewhere.

After the postponement of my Red Sea liveaboard to 2021, I was invited to the Italian Nauticam days in the stunning setting of Napoli, Sorrento and the surrounding coast. I am from the same region, yet all my diving training has been abroad, so I am guilty of not having tried the local diving until now. If you don't want to read the whole article, the summary is that the diving is great and, combined with the natural beauty of the area, the warmth of the locals and the food and drink, there is probably no better alternative for diving safely in Covid-19 times in Europe right now. I am sure there are equally stunning places in Liguria and some locations in Sicily or Tuscany; however, the Penisola Sorrentina is very hard to beat when you consider the other elements. Please get in touch if you want to dive the area, as I am planning a trip in mid-September 2020.

The Diving Centre and Location

I used the Punta Subaia and Punta Campanella diving centres, two long-standing operations on the coast. The first is located in Bacoli, north of Naples, and the second is in Massa Lubrense, just past Sorrento. Bacoli is Naples' local beach area, so it gets more local traffic, while the other location is more touristic in nature with a good mix of foreigners: during my stay there were English, German, French, Swiss and Dutch divers on the dives.

I used a 5mm wetsuit with a 3mm hooded vest and a thermal top underneath and was fine. Locals dive with a 7/5mm semi-dry suit.

Diving is done from 7.5-metre RIBs that can take up to 8 divers on a double tank dive or 12 on a single tank dive. Covid-19 procedures are in place; face masks are not mandatory outdoors in Italy, but spacing on the RIB is challenging, so there are checks and declarations to fill in. Some people wear face masks on the boat too; it is entirely up to you.

A 1-metre distance on the boat is possible

Journey time to the dive sites is 5 minutes in Baia, while from Punta Campanella it can be up to half an hour; the scenery is amazing, as Capri sits just off the coast and the landscape is just breathtaking.

Beneath those cracks there are frequently underwater caves at shallow depth

If there is one thing I did not like, it is that there was not a systematic double tank excursion in the morning, so sometimes the day would finish at 6 pm with only 3 dives done. The crew are very helpful and 15-litre tanks are included at no extra cost, so in all cases I came up because I had reached the 1-hour limit, still with plenty of air.

Divers getting ready to enter the water on a coastal dive

I booked a double room with single occupancy at €80 per night B&B, a 2-minute walk from the dive centre. Food and drink with wine run at €50 or less per day and are glorious!

Spaghetti with clams will cost you €13

Underwater Photography

If you want to get an idea of the critters in the area, I would recommend the book Into the Mirror by Mimmo Roscigno (ISBN 9788890966804). It is only in Italian, but it is a typical coffee-table book and the images are simply amazing.

For wide angle, a good sample is on the Punta Campanella Dive Center website; also look up photographer Marco Gargiulo, who is local to the area. Other photographers, such as Franco Banfi, have also been here for workshops. So the area has had some fame, but mostly limited to Italian-speaking photographers; this is a shame, as the staff speak English and this is a photo-friendly operation.

Subaia

I went on this trip with a selection of wide-angle lenses. I had been told by Pietro Cremone about the underwater archaeology park, so I packed a rectilinear wide-angle lens to avoid distortion.

Dives in Subaia are typically 1 hour long, the maximum allowed by law, at a depth of 5 metres.

Dive site maps are placed underwater; however, you need to dive with an authorised guide

The dives have to be done with an expert guide, as the mosaics are normally kept covered to protect them from the elements and the water.

Edoardo Ruspantini clears the debris to show the underlying mosaic
The Dolphin Mosaic (Delfino)

There are also replica statues that make good subjects; the originals are in the Naples Museum.

Goddess of Men
Where is my hand

There are many villas and it is impossible to cover the grounds in two dives; however, I had planned to move to the second location, so that evening I drove two hours to Massa Lubrense.

Punta Campanella

Here the diving is about fish and caves. You have a combination of close-up subjects and wide angle. I took my zoom fisheye with me, so I focussed on wide angle. Sea life includes plenty of anthias and damselfish, snappers, large groupers, eagle rays, breams and bass; there is a lot of fish, as the area has been a protected marine park for more than 20 years now. I was not expecting this abundance. There is also a resident shoal of 1,000+ barracuda in shallow water at one of the sites. Due to limited processing power I have not yet created a 4K video, but I took plenty of shots. The whole album is on Flickr; some key shots are included below.

Medusa
Diving Penisola Sorrentina 2020
Red Gorgonia
Ambush photo
Grouper
Behind the Mask
The Mask
Barracudas
Diver going through Scoglio a Penna
Caves
Eagle Ray

Wrap Up

I was frankly surprised by the sheer abundance of photo opportunities, and I will always take my equipment whenever I go back to Italy in the summer. There are so many positives to the location:

  • Great photo opportunities
  • Well-organised, English-speaking, photo-friendly dive operation
  • Stunning location, also for non-divers
  • Amazing food
  • Fantastic people
  • Easy to reach from the UK and other EU countries
  • Covid-19 procedures in place, a safe location with a first-rate health system

I am so impressed by the location that I will be back, and in fact I am planning a photo trip for the week of 14 or 21 September, with the following itinerary:

  • Sunday: arrival, dinner with local photographers to get a taste of the area
  • Monday to Friday: double tank morning dive; optional 3rd dive or sightseeing in the afternoon
  • Photos-of-the-day debrief after dinner – optional
  • Saturday: no-dive day, optional local trips or independent travel
  • Sunday: free morning, transfer to airport and return

Diving costs €400 for 5×2 tank dives, to be booked in advance through me. For those dives we will have exclusive use of the boat; optional afternoon dives (non-exclusive) will be €35 per dive. Accommodation will typically be less than €600 for the week in single occupancy, and flights in the region of £100-150 depending on extras. I can help with accommodation, travel and transfers. You can also rent a car for as little as £15 per day; this is especially good value if you plan to come with a partner or family.

Please fill in the contact form if interested; spaces will be limited to a maximum of 8 for the trip. I think it will be a long time before anyone is back in tropical waters with the Covid-19 situation; this is an opportunity not to be missed while the water stays warm, to enjoy one of the world's very best destinations.

RED SEA 2021 UNDERWATER IMAGE MAKERS LIVEABOARD

Due to Covid-19 I have decided to postpone the boat to 31 July 2021. I have also had some cancellations for the same reason, so I currently have 7 spaces. Prices remain unchanged. What follows is content from the original post.

__________________________

Diving for images or video can be frustrating at times. I find this less so for macro and super macro, where you are resort-based and can hire a guide with super-sharp eyes who will help you find the right subjects. For wide angle it is a totally different story. Being land-based may preclude the best access to certain destinations, whilst if you are on a liveaboard with non-photographer divers there is a conflict of interest. The boat will typically run a fixed-itinerary cruise, and the result is that you visit many sites, some more memorable than others, and typically just once each. The single dive you do may not be at the right time of the day, and the ambient light may not be the best for what you are trying to do.

I am self-taught and I like to read books and experiment myself; however, some years ago I was invited by Nauticam to a Red Sea workshop with Alex Mustard.

I wrote some articles at the time; you can find them all if you click this link: https://interceptor121.com/?s=workshop

What I really liked about that workshop was the ability to steer the boat to the right sites, to dive at the right time of the day, and to repeat dives on the best sites while omitting the areas that were not promising. For me this had great value on its own.

Of course Dr Alex Mustard's tuition was also superb; however, I have now done this workshop 3 times and that element has become less interesting. I also happened to work in Sharm El Sheikh as a resident instructor at the Marriott Hotel, so all the dive sites were already known to me, at least as a diver.

On those workshops I found it very useful to see the work of others and learn from the group. I also liked the fact that there was no competition, so everybody was encouraged to share.

Needless to say, after years of diving the same sites I still find the northern wrecks and reefs of the Red Sea one of the best imaging destinations in the world. So I thought: how do I get the same experience without the workshop part and the related high costs? It costs almost double a standard diving trip to book Alex's workshops, and they are fully booked almost immediately.

A further issue that has developed over time is that there are no direct flights to Sharm El Sheikh from the UK, and the majority of boats now leave from Hurghada. This seriously limits the workshop format, as you spend a lot more time in navigation.

So my ideal requirements for such a trip would be:

  1. Boat to leave from Sharm El Sheikh, not Hurghada. I would rather take indirect flights and burn land time than consume cruise time in transfers
  2. Need to be able to have full control of the itinerary
  3. Dive as a photographer with a loose buddy concept
  4. Have a good boat and logistics
  5. Have a small number of people in the water – I think 20 is too many, so I have set my target at 8 minimum, 12 maximum

I reconnected with my old network and, after looking around, I found a boat and a company that can help with this.

King Snefro is the only liveaboard fleet currently departing from Sharm El Sheikh, and the boat of choice is the Snefro Pearl.

Cruise Dates: 31 July – 7 August 2021

Price: €1,250 per pax in a twin cabin, including:

  • 32% Nitrox
  • Airport transfers
  • 12-litre tanks
  • 3 meals, snacks and soft drinks, tea and coffee
  • Special imaging orientated dive briefing to make the most of the sites
  • Group image debrief – optional participation
  • Arrival on Saturday 31st July – check in commences at 1800
  • Check out Saturday 7th August – 1200 latest
  • For those whose flight leaves much later, the possibility of a stopover at a beach resort before final departure

You need to be a PADI Advanced Open Water Diver or equivalent and 30 logged dives are required for this safari. All dives, especially some more demanding wreck dives, are subject to diver’s qualification and experience. 

EAN or other Nitrox certification is required; if you do not have it, training will be provided on the boat at a charge.

Extra hotel arrangements if you are coming the day before or leaving the day after (rates per night per person):

Club El Faraana Reef – www.faraanareef.com

Half board in single room = €50
Soft all-inclusive in single room = €60

Half board in double room = €35
Soft all-inclusive in double room = €45

Half board in triple room = €30
Soft all-inclusive in triple room = €40

Service charge and taxes included; airport-hotel transfers in both directions included.
(Check-in starts from 14:00, check-out by 12:00; in combination with a safari booking, early check-in or late check-out will be arranged free of charge.)

On to the dive sites:

Wrecks of Abu Nuhas

Giannis D

Giannis D classic shot

Carnatic

Encircled
Silversides and diver in the Carnatic

Chrisoula K

Bow of the Chrisoula K

The Tugboat

Stay Away from my Eggs
Tiger cardinal fish with eggs

The Thistlegorm

Motorbike in Hold 2

Ras Za’tar (Optional site for sunbursts)

Sunburst on Ras Za'tar

Jackfish Alley – Optional site for caves

1st cave at Jackfish Alley
Cave 2 at Jackfish Alley

Ras Mohammed, where at that time of year you can find various shoals of fish

Bohar Snappers

Snapper Sunburst

Barracudas

Arrows

Batfish

Schooling Batfish on Reef

Surgeonfish

Toilet Flush

Instead of night dives we will do snorkelling sessions for split shots, or sunset dives

Sunset on Ras Katy

I will be glad to help with ideas for the sites or the shots to take; however, this is not for beginners, so if you don't even know how your camera works, maybe it is not for you. The trip is open to photographers and videographers; I will shoot both and will provide assistance as required. Below is a little sample of the video opportunities at Shark Reef.

Please use the form to book a space. I will operate strictly on a first come, first served basis; at the time of writing there are five spaces left, so hurry up. In case of cancellation I will also run a waiting list. Please enquire for any other details as well.

Do you need RAW video?

We are finally there. Thanks to smaller companies keen to get a share of the market, we now have at least two cameras with an MFT sensor that are able to produce RAW video.

RAW Video and RED

It was RED that patented the original algorithm to compress raw video data straight off the sensor, before the demosaicing process. Apple tried to circumvent the patent with their ProRes RAW but lost the legal battle in court and now has to pay licences to RED. Coverage is here.

So RED is the only company that has this science; to avoid paying royalties, Blackmagic Design developed an algorithm for their BRAW that uses data taken from a step of the video pipeline after demosaicing.

I do not want to discuss whether BRAW is better than REDCODE or ProRes RAW; however, with a background in photography, I only consider RAW what comes straight out of the sensor's Analog to Digital Converter, so for me RAW means REDCODE or ProRes RAW, not BRAW.

How big is RAW Video

If you are a photographer, you know that a RAW image data file is roughly the same size in megabytes as the megapixel count of your camera.

How is that possible? I have a 20-megapixel camera and the RAW file is only a bit more than 20 megabytes. My Panasonic RW2 files are 24.2 MB without fail out of 20.89 megapixels, so on average 9.26 bits per pixel. Why don't we have the full 12 bits per pixel, and therefore a 31 MB file? Well, cameras are made of a grid of pixels that are monochromatic, so each pixel is either red, green or blue. In each 2×2 matrix there are 2 green pixels, 1 red and 1 blue. Through a series of steps, one of which is to decode this mosaic into an image (demosaicing), we rebuild an RGB image for display.

Each one of our camera pixels will not resolve the full 4,096 possible tones; measurements from DxOMark suggest that the Sony IMX272AQK only resolves 24 bits of colour in total and 9 bits of grey tones. This is why a lossless RAW file is only 24.2 MB. It also means that an 8-megapixel (4K UHD) video frame in RAW would be 9.25 MB, and therefore a 24 fps RAW video stream would be 222 MB/s, or 1,776 Mb/s, if we had equivalent compression efficiency. After chroma subsampling to 4:2:2 this would become 1,184 Mb/s.

Cameras like the Z CAM E2 or the BMPCC4K that can record ProRes 422 HQ approach those bitrates and can be considered virtually lossless.

But now we have ProRes RAW, so what changes? The CEO of Z CAM has posted an example of a 50 fps ProRes RAW HQ file with a bitrate of 2,255 Mb/s; at 24 fps this would be 1,082 Mb/s, so we can see how my maths is actually stacking up nicely.
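If you want to sanity-check this arithmetic yourself, here is a minimal Python sketch of the calculations above; the ~9.26 effective bits per pixel from my RW2 files and plain linear scaling with pixel count and frame rate are the working assumptions:

```python
# Rough sanity check of the RAW bitrate maths above (1 MB = 10^6 bytes).
BITS_PER_MB = 8_000_000

bits_per_pixel = 24.2 * BITS_PER_MB / 20_890_000      # 24.2 MB file / 20.89 MP
frame_mb = 8_000_000 * bits_per_pixel / BITS_PER_MB   # 8-megapixel (4K UHD) frame
stream_mbps = frame_mb * 24 * 8                       # Mb/s at 24 fps

print(f"{bits_per_pixel:.2f} bits per pixel")                     # ~9.27
print(f"{frame_mb:.2f} MB per frame")                             # ~9.27 MB
print(f"{stream_mbps:.0f} Mb/s at 24 fps")                        # ~1780 Mb/s
print(f"{stream_mbps * 2 / 3:.0f} Mb/s after 4:2:2 subsampling")  # ~1186 Mb/s

# Z CAM sample: 2255 Mb/s at 50 fps scaled down to 24 fps
print(f"{2255 * 24 / 50:.0f} Mb/s")                               # 1082 Mb/s
```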

Those bitrates are out of reach of almost all memory cards, so SSD support is required, and this is where Atomos comes into the picture.

Atomos have decided to adopt ProRes RAW and currently offer support for selected Nikon, Panasonic and Z CAM models.

ProRes RAW workflow

So, with a ProRes RAW file at hand, I wanted to test the workflow in Final Cut Pro X. Being an Apple codec, everything works very well; however, we encounter a number of issues that photographers resolved a long time ago.

The first one is that RAW has more dynamic range than your SDR delivery space. This also happens with photos, but photo programs work in larger RGB spaces like ProPhoto RGB at 16 bits; using tone mapping you can edit your images and then bring them back to an 8-bit JPEG that is not as good as the RAW file but is in most cases fine for everyone.

Video NLEs are not in the same league as photo RAW editors and usually deal with a signal that is already video, not raw data. So the moment you drop your ProRes RAW clip on an SDR timeline it clips, as you would expect. A lot of work is required to bring clips back into an SDR space, and this is not the purpose of this post.

To avoid big issues I decided to work on an HDR timeline in PQ, so that with a super-wide gamut and gamma there were no clipping issues. The footage drops perfectly into the timeline without any work required, which is brilliant. So RAW for HDR is definitely the way forward.

ProRes RAW vs LOG

My camera does not have ProRes RAW, so I wanted to understand what is lost going through LOG compression. For cameras that have analog gain on the sensor there is no concept of a fixed base ISO as there is on RED or ARRI cameras. Our little cameras have a programmable gain amplifier, and as gain goes up, DR drops. So the first bad news is that by using LOG you will lose DR compared to the RAW sensor output.

This graph shows that on the Panasonic GH5 there is a loss of 1 Ev from ISO 100 to 400, but we still have a minimum of 11.3 Ev to play with. I am not interested in the whole DR; I just want to confirm that for cameras that have more DR than their ADC allows, you will see a loss with LOG, as LOG needs gain and gain means clipping sooner.

Panasonic GH5 full resolution 20.9 MPixels DR

What is very interesting is that, net of this, the ProRes RAW file allowed me to test how good LOG compression is. In this clip I have:

  1. RAW video unprocessed
  2. RAW video processed using Panasonic LOG
  3. RAW video processed using Canon LOG
  4. RAW video processed using Sony LOG

In this example the Z CAM E2 has a maximum dynamic range of 11.9 Ev (log2(3895)) from the Sony IMX299CJK datasheet. As the camera has less DR than the maximum limit of the ADC, there is likely to be no loss.

We can see that there are no visible differences between the various log processing options. This confirms that log footage is an effective way to compress dynamic range into a smaller bit-depth space (12→10 bits) for MFT sensors.

The same ProRes RAW file processed using log from Panasonic, Canon and Sony shows no visual difference

Final Cut Pro gives you the option to go directly from RAW or to go through LOG; the latter means all your log-based workflows and LUTs continue to work. I can confirm this approach is sound, as there is no deterioration that I can see.

Is ProRes RAW worth it?

Now that we know that log compression is effective, the question is: do I need RAW? And the answer is, it depends…

Going back to our 1,082 Mb/s ProRes RAW: once 4:2:2 subsampling is applied this drops to 721 Mb/s, pretty much identical to the ProRes 422 HQ nominal bitrate of 707 Mb/s. So if you have a Z CAM and record ProRes RAW or ProRes 422 HQ, you should not be able to see any difference. I can confirm that I have compressed such footage to ProRes 422 HQ and could not see any difference at all.

However, with photos a RAW file can typically hold heavy modifications while a JPEG cannot. We are used to processing ProRes, and there is no doubt that ProRes 422 HQ can take a lot of beating. In my empirical tests Final Cut Pro X is very efficient at manipulating ProRes RAW files, but in terms of holding modifications I cannot see that this codec provides a benefit; this may be due to the limitations of FCPX.

For reference, Panasonic AVC Intra 422 is identical in quality to ProRes 422 HQ, though harder to process, and much harder to process than ProRes RAW.

Conclusion

If you already have a high-quality output from your camera, such as ProRes 422 HQ or Panasonic AVCI 400 Mbps, with the tools at our disposal there is not a lot of difference, at least for an MFT sensor. This may be because the sensor's DR and colour depth are limited anyway, so log compression is effective to the point that ProRes RAW does not appear to make a difference. However, there is no doubt that if you have a more capable camera there is more valuable data, and RAW may well be worth it.

I am currently looking for Panasonic S1H ProRes RAW files. Atomos only supports 12 bits, so the DR of the camera will be capped, as RAW is linearly encoded. However, SNR will be higher and the camera will have more tones and colours, resulting in superior overall image quality; some people incorrectly call this usable DR, but it is just image quality. It will be interesting to see whether 10-bit AVCI with log is more effective than 12-bit ProRes RAW.

The definitive guide to HDR with the Panasonic GH5/S1 in Final Cut Pro X

First of all the requirements for HDR at home are:

  1. Log or HLG footage
  2. Final Cut Pro X 10.4.8
  3. Mac OS Catalina 10.15.4
  4. HDR-10 monitor with a 10-bit panel and wide gamut

It is possible to work with a non-HDR-10 monitor using the scopes, but it is not ideal and only acceptable for HLG; in any case, 10 bits is a must.

Recommended reading: https://images.apple.com/final-cut-pro/docs/Working_with_Wide_Color_Gamut_and_High_Dynamic_Range_in_Final_Cut_Pro_X.pdf

HDR Footage

In order to produce HDR clips you need HDR footage. This comes in two forms:

  1. Log footage
  2. HLG

Cameras have been shooting HDR for years; the issue has been that no consumer operating system or display was capable of displaying it. The situation has changed, as Windows 10 and macOS now have HDR-10 support. This is limited: for example, on macOS there is no browser support but the TV app is supported, while on Windows you can watch HDR-10 videos on YouTube.

You need to have your target format in mind, because Log and HLG are not actually interchangeable. HLG today really means TV sets and some smartphones; HDR-10 instead is growing in computer support and is more widely supported. Both are royalty-free. This post is not about which standard is best; it is just about producing some HDR content.

The process is almost identical but there are some significant differences downstream.

Let me explain why. This graph, produced using the outstanding online application LutCalc, shows the output/input relationship of V-Log against a standard display gamma for Rec709.

V-LOG -> PQ

Stop diagram V-LOG vs Rec709

Looking at the stop diagram, we can appreciate that the curves are not only different, but many values differ substantially; this is why we need to use a LUT.

Once we apply a LUT, it is clear the relationship between V-Log and Rec709 is not linear, and only a small part of the bits fits into the target space.

Output vs Input diagram for V-LOG and Rec709

We can see that V-Log fills Rec709 at just a bit more than 60% IRE, so a lot of squeezing is needed to fit it back in; this is why many people struggle with V-Log, and the reason why I do not use V-Log for SDR content.
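Panasonic has published the V-Log curve, so you can reproduce these numbers yourself. A minimal Python sketch of the encoding (OETF) side, with constants taken from the public V-Log/V-Gamut reference document:

```python
import math

def vlog_oetf(x: float) -> float:
    """Panasonic V-Log OETF: scene reflectance -> encoded level (0..1)."""
    cut1, b, c, d = 0.01, 0.00873, 0.241514, 0.598206
    if x < cut1:
        return 5.6 * x + 0.125      # linear toe below the cut point
    return c * math.log10(x + b) + d  # logarithmic section

for reflectance in (0.0, 0.18, 0.90):
    print(f"{reflectance:4.0%} reflectance -> {vlog_oetf(reflectance) * 100:.1f}% IRE")
# 0%  -> 12.5% IRE (V-Log black pedestal)
# 18% -> 42.3% IRE (middle grey)
# 90% -> 58.8% IRE (white), hence "a bit more than 60%" of the Rec709 range
```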

However the situation changes if we use V-Log for HDR specifically PQ.

Stop Table V-Log to PQ

You can see that net of an offset the curves are almost identical in shape.

This is more apparent looking at the LUT in / out.

LUT in/Out V-Log to Rec2100 PQ

With the exception of the initial part, which for V-Log is linear while PQ is fully logarithmic, the curve is almost a straight line. As PQ is a larger space than V-Log can produce on a consumer camera, we have no problem squeezing bits in: PQ accommodates all the bits just fine.

HLG

Similarly to V-Log, HLG does not fit well into an SDR space.

Stop Table HLG to Rec709

The situation becomes apparent looking at the In/Out Lutted values.

HLG to Rec709

We can see that, as HLG is also a log gamma with a different ramp-up, 100% is reached with even fewer bits than V-Log.

So, in pure mathematical terms, fitting log spaces into Rec709 is not a great idea and should be avoided. Note that, even with the arrival of RAW video, we still lack editors capable of working in a 16-bit depth space like photo editors do, and currently all processes go through LOG because they need to fit into a 10/12-bit working space.

It is also a bad idea to use V-Log for HLG, due to the differences between the log curves.

V-Log vs HLG

The graph demonstrates what I said at the beginning: you need to decide your output at the outset and stick to a compatible format.

Importing Footage in Final Cut Pro X 10.4.8

Once we have HLG or LOG footage, we need to import it into a Wide Gamut library; make sure you check this, because SDR is the default in FCPX.

Library Settings

HLG footage will not require any processing, but LUTs have to be applied to V-Log clips, as V-Log is different from any Rec2100 target space.

The most convenient way is to go into the Organise workspace, select all clips, then press the i button and select General. Apply the Panasonic V-Log LUT to all clips.

Organise view: the LUT option is not available in the Basic view, so make sure you select General

Creating a Project

Once all files have been handled as required, we create our HDR-10 project, which in Final Cut means Rec2020 PQ.

For an HLG project, change the colour space to HLG

The following screenshots demonstrate the effect of the LUT on footage on a PQ timeline.

LUT not applied: the footage looks dim as values are limited to 80%

With the LUT applied the V-LOG is expanded in the PQ space and the colours and tones come back.

LUTed clip on PQ timeline

We can see the brightness of the scene approaching 1,000 nits; it looks exactly as we experienced it.

Once all edits are finished, as a last step we add the HDR Tools effect to limit peak brightness to 1,000 nits, which is a requirement of YouTube and most consumer displays. The scopes flex slightly with an automatic highlight roll-off.

Exporting the Project

I have been shooting Panasonic AVCI 400 Mbps, so I will export a master file using ProRes 422 HQ. If you use a lower bitrate, ProRes 422 may be sufficient, but don't go lower, as it won't really be HDR anymore.

Export in ProRes 422 HQ

YouTube and other devices use default settings for HDR-10 metadata, so do not fill in the mastering display or content information; it is not required and, with the exception of peak brightness, you would not know how to fill it in correctly.

Converting for YouTube

I use the free program HandBrake and the YouTube upload guidelines to produce a compatible file. It is ESSENTIAL to produce an MP4 file, otherwise your TV and YouTube may not be able to display HDR correctly; avoid any other format at all costs.
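As an illustration, a HandBrake command line along these lines produces a 10-bit HEVC MP4. Treat it as a sketch: exact flags can vary between HandBrake versions, the file names are placeholders, and you should cross-check against the current YouTube HDR upload guidelines.

```
HandBrakeCLI -i hdr_master_prores.mov -o hdr10_for_youtube.mp4 \
  --format av_mp4 --encoder x265_10bit \
  --quality 18 --encoder-preset slow
```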

The finished product can be seen here

Home HDR Video HDR-10
HLG Documentary style footage

SDR version from HDR master

There are residual issues with this process; one is the production of an SDR version. This currently works much better for HLG than for HDR-10, which is interesting: HLG is unsupported on computers, so if you produce HDR in HLG you are effectively giving something decent to both audiences.

For HDR-10, YouTube applies its own one-size-fits-all LUT and the results can be really bad. You may experience oversaturated colours in some cases, dark footage in others, and some clips may look totally fine.

At a professional level you would produce a separate SDR grade; however, it is possible to improve the quality of the YouTube conversion using specific techniques that I will cover in a separate post.

Final Remarks

Grading in HDR is not widely supported; the only tools available are the scopes and the tone mapping of your display. There is no concept of correct exposure for skin tones: in one scene they have a certain brightness and in another this changes again, because this is not a 0-100% relative scale but one of absolute values.

If you have invested in a series of cinema LUTs, you will find that none of them work; they compress the signal to under 100 nits, so there is less headroom for looks. There are other things you can do to give a vintage look, like adding grain, but you need to be careful: with the incredible brightness of the footage and the detail of 10 bits, if you push it too much it looks a mess. Currently I avoid adding film grain, and if I do add it I blend it at 10%-20%.

One interesting thing is that log footage in PQ has a nice feel to it despite the incredible contrast. After all, log is a way to emulate film, specifically Cineon; this is true for almost all log formats. Then you would have the different characteristics of each film stock; this role is now played by our camera sensor, and because most sensors are made by Sony or Canon, clips tend to look very similar to each other nowadays. So if you want something different you need to step into the world of RED or ARRI, but that is not in the scope of what I am writing here, nor what you, my readers, are interested in.

I am keeping a playlist with all my HDR experiments here and will keep adding to it.

YouTube HDR Playlist

If you find this useful please donate using the button on the side and I will have a drink on you…Cheers!

SNR in Digital Cameras in 2020

There are a significant number of misconceptions about noise in digital cameras and how it depends on variables like sensor size or pixel size. In this short post I will try to explain in clear terms the relationship between signal-to-noise ratio (SNR) and sensor size.

Signal (S) is the number of photons captured by the lens and arriving at the sensor; this is converted into an electrical signal by the sensor, digitised by an Analog to Digital Converter (ADC) and further processed by a Digital Signal Processor (DSP). The signal, depending on light, is not affected by pixel size but by sensor size. There are many readings on this subject and you can google it yourself using phrases like 'does pixel size matter'; look for scientific evidence backed by data and formulas, not YouTube videos.

S = P × e, where P is the photon arrival rate, directly proportional to the surface area of the sensor through the physical aperture of the lens and the solid angle of view, and e is the exposure time.

This equation also means that once we equalise the physical lens aperture there is no difference in performance between sensors. Example: two lenses with equivalent fields of view, 24mm on full frame and 12mm on MFT with its 2x crop, produce the same SNR when the physical aperture is equalised. A full frame lens at f/2.8 and an MFT lens at f/1.4 give the same result, as 24/2.8 = 12/1.4; this is called constrained depth of field, and as long as there is sufficient light it ensures SNR is identical between formats.
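A quick way to see the equivalence is to compare the physical aperture (entrance pupil) diameters; a trivial check in Python:

```python
# Same entrance pupil diameter => same total light gathered => same SNR
# and same depth of field, despite the different sensor formats.
ff_focal, ff_fnum = 24.0, 2.8     # full frame
mft_focal, mft_fnum = 12.0, 1.4   # MFT, 2x crop

print(f"full frame: {ff_focal / ff_fnum:.2f} mm")    # 8.57 mm
print(f"MFT:        {mft_focal / mft_fnum:.2f} mm")  # 8.57 mm
```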

Noise is made of three components:

  1. Photon Noise (PN) is the noise inherent in light itself, which is made of particles even though optics approximates it with linear rays
  2. Read Noise (RN) is the combined read noise of the sensor and the downstream electronic noise
  3. Dark Current Noise (DN) is the thermal noise generated by long exposures heating up the sensor

I have discovered WordPress has no equation editor, so forgive me if the formulas appear rough.

Photon noise is well modelled by a Poisson distribution, and its average level can be approximated by √S.

The ‘apparent’ read noise is generally constant and does not depend on the signal intensity.

While dark current noise is fundamental in astrophotography, it can be neglected for the majority of photographic applications as long as the sensor does not heat up, so we will ignore it in this discussion.

If we write down the Noise equation we obtain the following:

Noise = √(PN² + RN² + DN²)

Ignoring DN in our application, we have two scenarios. The first is where the signal is strong enough that the read noise is considerably smaller than the photon noise; this is the typical scenario in the standard working conditions of a camera. If PN >> RN, the signal-to-noise ratio becomes:

SNR = S / PN = S / √S = √S

S is unrelated to pixel size but is affected by sensor size. If we take a full frame camera and one with a 2x crop factor, at a high signal rate and identical f-number the full frame camera has double the SNR of the smaller 2x crop: four times the area means four times the signal, and √4 = 2. Because the signal is high, this benefit is almost invisible in normal conditions. If we operate at constrained depth of field, the larger sensor camera has no benefit over the smaller sensor.

When the number of photons collected drops, the read noise becomes more important than the photon noise. The trigger point changes with the size of the sensor, and smaller sensors become subject to read noise sooner than larger ones, but broadly the SNR benefit remains a factor of two. If we look at DxOMark measurements of the Panasonic S1 full frame vs the GH5 Micro Four Thirds, we see that the benefit is around 6 dB at the same ISO value, almost spot on with the theory.

Full Frame vs MFT SNR graph shows 2 stop benefit over 2x crop

Due to the way the SNR curve drops, the larger sensor camera also has a benefit of around two stops on ISO, and this is why the DxOMark Sports score for the GH5 is 807 while the S1 has a score of 3333, a difference of 2.046 stops. The values of 807 and 3333 are measured and correspond to ISO 1250 and 5000 on the actual GH5 and S1 cameras.
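Both numbers are easy to verify from the theory above; a small Python check, using the sensor area ratio and the published DxOMark Sports scores as inputs:

```python
import math

# Full frame vs 2x crop: 4x the area -> 4x the photons -> sqrt(4) = 2x the SNR
snr_ratio = math.sqrt(2 ** 2)
print(f"{20 * math.log10(snr_ratio):.1f} dB SNR advantage")  # 6.0 dB

# DxOMark Sports (low-light ISO) scores: GH5 = 807, S1 = 3333
print(f"{math.log2(3333 / 807):.3f} stops ISO advantage")    # 2.046 stops
```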

If we consider two Nikon cameras, the full frame D850 and the APS-C D7500, we should find the difference to be one stop of ISO, with the SNR dropping at the same 3 dB per ISO increment.

The graphic from DxoMark confirms the theory.

Full Frame vs APSC SNR graph shows 1 stop benefit over 1.5x crop

If the SNR does not depend on pixel size, why do professional video cameras, and some high-end SLRs, have smaller pixel counts? This is due to a feature called dual native ISO. Obviously a sensor has only one sensitivity and this cannot change, so what is happening? We have seen that when the signal drops, the SNR becomes dominated by the read noise of the sensor. What manufacturers do is cap the full-well capacity of the sensor, and therefore the maximum dynamic range, and apply a much stronger amplification through a low-signal amplifier stage. For the amplification to be effective, these cameras have a large pixel pitch, so that the maximum signal per pixel is high enough that, even clipped, it still benefits from the amplification. This has the effect of pushing the SNR up by two stops on average. Graphs of the read noise of the GH5s and S1 show a similar pattern.

Panasonic Dual Gain Amplifier in MFT and Full Frame cameras shows knees in the read noise graphs

Some manufacturers, like Sony, appear to use dual gain systematically even with smaller pixel pitches; in those cases the benefit is reduced from 2 stops to sometimes 1 or less. Look carefully at the read noise charts on sites like Photons to Photos to understand the kind of circuit in your camera and make the most of the SNR.

Because most low-light situations have limited dynamic range, and the viewer is more sensitive to noise than to DR, when the noise rises above a certain floor the limitation of the DR is seen as acceptable. The actual DR falls well below values that would be considered acceptable for photography, but with photos you can intervene on noise in post-processing and not on DR, so the highest DR is always the priority. This does not mean, however, that one should artificially inflate requirements by introducing incorrect concepts like usable DR, especially when the dual gain circuit reduces maximum DR. Many cameras from Sony, Panasonic and other manufacturers have a dual gain amplifier, sometimes advertised, other times not. An SNR of 1 (0 dB) is the standard for defining a usable signal, because you can still see an image when noise and signal are comparable.

It is important to understand that once depth of field is equalised, all performance indicators flatten, and the benefit of one format over the other sits at the edges of the ISO range, at very low and very high ISO values; in both cases it is the ability of the sensor to collect more photons that makes the difference, net of other structural issues in the camera.

As the majority of users do not work at the boundaries of the ISO range or in low light, and the differences at the more usual values get equalised, we can understand why many users prefer smaller sensor formats, which make not just the camera bodies smaller, but also the lenses.

In conclusion, a larger sensor will always be superior to a smaller sensor camera, regardless of any additional improvements made by dual gain circuits. A full frame camera will be able to offer sustained dynamic range together with acceptable SNR values up to higher ISO levels. Looking for example at the video-orientated Panasonic S1H, the trade-off point of ISO 4000 is enough on a full frame camera to cover most real-life situations, while the ISO 2500 of the GH5s leaves out a large chunk of night scenes where, in addition to good SNR, some dynamic range may still be required.

HDR or SDR with the Panasonic GH5

As you have read, I have been at the forefront of HDR use at home. I have a total of 5 devices with HDR certification, of which 2 support all standards all the way to Dolby Vision and 3 support at least HLG and HDR-10. The content consumed is mostly Netflix or Amazon originals and occasional BBC HLG broadcasts that are streamed alongside live programmes. So it is fair to say I have some practical experience on the subject, and two years ago I started writing about shooting HLG with the GH5. This was mostly limited by the lack of editing capability on the display side, but recently macOS 10.15.4 brought HDR-10 support, which means you can see an HDR signal on a compatible HDMI or DisplayPort device. This is not HLG, but there are ways around it, as I wrote in a recent post. This post makes some considerations on the issues of shooting HDR, and why, as of 2020, shooting SDR Rec709 with your Panasonic GH5 is still my preferred option for underwater video and beyond.

Real vs Theoretical Dynamic Range

You will recall the schematic of a digital camera from a previous post.

This was presented to discuss dual gain circuits, but if you ignore the two gain circuits it remains valid. In this post we will focus on the ADC, which stands for Analog to Digital Converter. Contemporary cameras have 12- and 14-bit ADCs; typically 14-bit ADCs are the prerogative of DSLRs or high-end cameras. Simplifying to the extreme, the signal arriving at the ADC is digitised on a 12- or 14-bit scale. In the case of the GH5 we have a 12-bit ADC; it is unclear whether the GH5s has a 14-bit ADC despite producing 14-bit RAW, and for the purpose of this post I will ignore this possibility and focus on the 12-bit ADC.

12 bits means you have 4,096 levels of signal for each RGB channel; this effectively means the dynamic range limit of the camera is 12 Ev, as DR is defined as log10(4096)/log10(2) = log2(4096) = 12. Stop, wait a minute, how is that possible? I have references saying the Panasonic GH5 dynamic range is 13 Ev; how did this become 12?

Firstly, we need to ignore the effect of oversampling and focus on a 1:1 pixel ratio, and therefore look at the Screen diagram, which shows just a bit more than 12 Ev. We then have to look at how DxOMark measures dynamic range; this is explained here. In real life we will not be shooting a grey scale but a coloured scene, so unless you are taking pictures of the moon you will not get much more than 12 stops in any scenario, as the colours will eat into the data.

This concerns RAW sensor data before demosaicing and digital signal processing, which further deteriorate DR when the signal is converted down to 10 bits, even if a nonlinear gamma curve is put in place. We do not know the real usable DR of the GH5, but Panasonic's statement when V-Log was announced referenced 12 stops of dynamic range using a logarithmic curve, so we can safely conclude that the best case is 12 stops when a log curve is used, and 10 for a gamma curve with a constant correction factor. Again, it is worth stressing that the 12 stops of DR are the absolute maximum, at the camera setting with 0 gain applied, aka base or native ISO, which for the GH5 is 200, corresponding to 400 in log modes.
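For the avoidance of doubt, the ADC ceiling is just the bit depth expressed in stops; a one-liner check:

```python
import math
# An N-bit ADC encodes at most 2**N levels, i.e. log2(2**N) = N stops of DR.
for bits in (10, 12, 14):
    print(f"{bits}-bit ADC: {2 ** bits} levels -> {math.log2(2 ** bits):.0f} Ev max")
# A quoted 13 Ev therefore cannot come from a 12-bit readout at 1:1 pixel level;
# it needs the oversampling/print normalisation discussed above.
```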

Shooting HLG vs SDR

Shooting HLG with the GH5 or any other prosumer device is not easy.

The first key issue in shooting HLG is the lack of monitoring capability on the internal LCD and on external monitors. Let's start with the internal monitor, which is not capable of displaying HLG signals and relies on two modes:

  • Mode 1: prioritises the highlights wherever they are
  • Mode 2: prioritises the subject, i.e. the centre of the frame

In essence you cannot see what you will get during the shot. Furthermore, when you set zebras to 90% the camera will rarely reach this value, so you need to rely on the waveform, which is not user-friendly in an underwater scene, or on the exposure meter. If you have an external monitor and read the spec carefully, you will find that the screens are Rec709, so they will not display the HLG gamma, although they will correctly display the colour gamut. See https://www.atomos.com/ninjav: under HDR monitoring it lists the BT.2020 gamut, but the display gamma is still SDR. So you encounter the same issues, albeit on a much brighter 1,000-nit display than the camera LCD, and you need to either adapt to the different values on the waveform or trust the exposure meter and zebras, which, as we have said, are not very useful as it takes a lot to clip. On the other hand, if you shoot an SDR format, the LCD and the external monitor show exactly what you are going to get, except when shooting V-Log; in that case the waveform and zebras need to be adjusted to account for V-Log's absolute maximum of 80% and 90% white sitting at 60%. Once you apply a monitor LUT, however, you will see exactly what you are going to get on the internal or external display.

Editing HLG vs SDR

In the editing phase you will be faced with similar challenges, although, as we have seen, there are workarounds to edit HLG if you wish. A practical consideration concerns contrast ratio. Despite all the claims that SDR is just 6 stops, I have dug out the BT.709, BT.1886 and BT.2100 recommendations, and this is what I found.

Standard   Contrast Ratio   Max Brightness   Min Brightness   Analog DR
BT.709     1,000:1          100 nits         0.1 nits         9.97 stops
BT.1886    2,000:1          100 nits         0.05 nits        10.97 stops
BT.2100    200,000:1        1,000 nits       0.005 nits       17.61 stops

Specifications of ITU display standards

In essence, Rec709 has a contrast ratio of 1,000:1, which means 9.97 stops of DR, and already allows for 8- and 10-bit colour. BT.1886 was issued to acknowledge that CRT screens no longer exist, and this takes the DR to 10.97 stops. BT.2100 has a contrast ratio of 200,000:1, or 17.61 stops of DR.
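The analog DR column in these tables is simply log2 of the contrast ratio; a quick Python check of the figures (a typical 350-nit monitor with a 0.35-nit black is included for comparison):

```python
import math

def stops(max_nits: float, min_nits: float) -> float:
    # Analog dynamic range in stops = log2 of the contrast ratio
    return math.log2(max_nits / min_nits)

print(f"BT.709 : {stops(100, 0.1):.2f}")     # 9.97
print(f"BT.1886: {stops(100, 0.05):.2f}")    # 10.97
print(f"BT.2100: {stops(1000, 0.005):.2f}")  # 17.61
print(f"typical 350-nit monitor: {stops(350, 0.35):.2f}")  # ~10 stops
```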

Standard            Contrast Ratio   Max Brightness   Min Brightness   Analog DR
HDR400              1,000:1          400 nits         0.4 nits         9.97 stops
HDR500              5,000:1          500 nits         0.1 nits         12.29 stops
HDR600              6,000:1          600 nits         0.1 nits         12.55 stops
HDR1000             20,000:1         1,000 nits       0.05 nits        14.29 stops
HDR1400             70,000:1         1,400 nits       0.02 nits        16.10 stops
HDR400 True Black   800,000:1        400 nits         0.0005 nits      19.61 stops
HDR500 True Black   1,000,000:1      500 nits         0.0005 nits      19.93 stops

DisplayHDR performance standards

Looking at HDR monitors, you see that, with the exception of OLED screens, no consumer device can meet the BT.2100 standard; so even if you have an HDR monitor, in most cases it falls short of the BT.2100 recommendation.

Our GH5 is capable of a maximum 12 stops of DR in V-Log, and maybe a bit more in HLG; however, those values are far below the BT.2100 recommendation and more in line with BT.1886. If we look at the DxOMark DR charts, we see that at a nominal ISO 1600, which is in effect just above 800, the DR has fallen below 10 Ev. Consider that this is engineering DR: practically speaking, you get your 12 stops only at ISO 200, and your real HDR range is limited to the 200-400 ISO range; this makes sense, as those are the bright scenes. Log photo styles start at ISO 400, but this really translates to ISO 200 on this chart, as well as in exposure values. Unless you are shooting at low ISO you will get limited DR improvement. Underwater it is quite easy to be above ISO 200, and even when you are at 200, unless you are shooting towards the surface the scene has limited DR anyway. Generally, 10 stops are more than adequate, as this is what we get when we produce a JPEG from a RAW file.

Viewing HDR

I think the final nail in the coffin arrives when we look where the content will be consumed.

Device       Contrast Ratio   Max Brightness   Min Brightness   Analog DR
IPS/Phones   1,000:1          350 nits         0.35 nits        9.97 stops
LED TV       4,000:1          400 nits         0.1 nits         11.97 stops
OLED         6,000,000:1      600 nits         0.0001 nits      22.52 stops

Typical device performance

Phones, with some exceptions, have IPS screens with contrast ratios below 1,000:1, and so do computer screens. If you share on YouTube, you will know that phones and computers constitute around 85% of playback devices. TVs are around 10%, and only a small part of those will be HDR. So, other than your own home, you will not find many HDR devices out there to do justice to your content.

10-bits vs 8 bits

It is best practice to shoot 10-bit, and both SDR and HDR support 10-bit colour depth. For compatibility purposes, SDR is delivered with 8-bit colour and HDR with 10-bit colour.

Looking at the tonal range of RAW files at 8 megapixels, we see that the camera has 24 bits of depth over RGB; this means 8 bits per channel and 9 bits of tonal range. Tonal range means grey levels, so in short the camera will not produce 10-bit colour, but it will have more than 8 bits of grey tones, which are helpful to counter banding, but only at low ISO, so more useful for blue skies than for blue water. Considering that images for photo competitions are JPEGs, and that nobody has felt the need for anything more, we can conclude that, as long as we shoot at a high bitrate in something as close as possible to a raw format, 8 bits for delivery are adequate.

Cases for HDR and Decision Tree

There are cases where shooting HLG can be meaningful; those include snorkelling at the surface on bright days. You will not be going to depth, so the footage will look good straight off the camera; likewise for bright shots in the sun at the surface. But generally the benefit drops when the scene has limited DR, or at higher ISO values where DR drops anyway.

What follows is my decision tree for choosing between SDR and HDR and between 10-bit and 8-bit formats. I like my pictures and my videos to look better than life, and I think editing adds value to the imaging, although this is not an excuse for poor capture. There are circumstances where editing is less important, namely when the scene is amazing by itself and requires no extra help, or when I am looking at fast-paced, documentary-style scenes that do not benefit from editing. For the rest, my preference remains editing-friendly formats and high-bitrate, 10-bit, all-intra codecs. Recently I purchased the V-Log upgrade and have not found it difficult to use or expose, so I have included it here as a possible option.

The future of HDR

Except in a cinema-like setting with dark surroundings and low ambient light, HDR mass consumption remains challenging. Yes, you can have high peak brightness, but not a high contrast ratio, and that can already be obtained with SDR for most viewers. There is a lot of noise in the cinema community at present because the PQ curve is hard to manage and the work in post-processing is multiplied; clearly PQ is not the way forward for broadcasting, and HLG will prevail thanks to the pioneering efforts of the BBC. But the lack of monitoring and editing devices means HLG is not going to fit cine-like scenarios and small productions. It could be a good fit for a zero-edit shooter, someone who likes to see the scene as it was.

Conclusion

When marketing myths and incorrect information are netted out, we realise that our prosumer devices are very far away from what would be required to shoot, edit and consume HDR. As with many other things in digital imaging, it is much more important to focus on shooting technique and on making the most of what we have, instead of engaging in a quest for theoretical benefits that may not exist.

Producing and grading HDR content with the Panasonic GH5 in Final Cut Pro X

It has been almost two years since my first posts on HLG capture with the GH5 https://interceptor121.com/2018/06/15/setting-up-your-gh5-for-hlg-hdr-capture/ and last week Apple released Catalina 10.15.4, which now supports HDR-10 with compatible devices. Apple, and computers in general, still do not support HLG, and it is unlikely this will ever happen, as the gaming industry follows the VESA DisplayHDR standard, which is aligned to HDR-10.

After some initial experiments with the GH5 and HLG HDR, things went quiet, for two reasons:

  1. There are no affordable monitors that support HLG
  2. There has been a lack of software support

While on the surface it looks like there is still no solution to those issues, in this post I will explain how to grade HLG footage in Final Cut Pro, should you wish to do so. The situation is not that different on Windows with DaVinci Resolve, which also only supports HDR-10 monitors, but I leave it to Resolve users to figure that out. This tutorial is about Final Cut Pro.

A word about V-Log

It is possible to use V-Log to create HDR content; however, V-Log is recorded as Rec709 10-bit. The Panasonic LUT, and any other LUT, only maps the V-Log gamma curve to Rec709, so your luminance and colours will be off. It would be appropriate to have a V-Log to PQ LUT; however, I am not aware that one exists. Surely Panasonic could create it, but the V-Log LUT that comes with the camera is only for processing in Rec709. So, from our perspective, we will ignore V-Log for HDR until such time as we have a fully working LUT and clarity about the process.

Why it is a bad idea to grade directly in HLG

There is a belief that HLG is a delivery format and is not edit-ready. While that may be true, the primary issue with HLG is that no consumer screen supports the BT.2020 colour space and the HLG gamma curve. Most displays are plain sRGB; others support DCI-P3, or the computer version Display P3, partially or fully. Although the white point is the same for all those colour spaces, they define red, green and blue differently, and without taking this into account, if you change a hue the results will not be as expected. You may still white balance or match colours in HLG, but you should not attempt anything more.

What do you need for grading HDR?

In order to successfully and correctly grade HDR footage on your computer you need the following:

  • HDR HLG footage
  • Editing software compatible with HDR-10 (Final Cut or DaVinci)
  • An HDR-10 10 bits monitor

If you want to produce and edit HDR content you must have a compatible monitor; let's see how to identify one.

Finding an HDR-10 Monitor

HDR is highly unregulated when it comes to monitors: TVs have the Ultra HD Premium Alliance, and recently VESA introduced the DisplayHDR standards https://displayhdr.org/ dedicated to display devices. So far, DisplayHDR certification has been the prerogative of gaming monitors, which have quick response times and high contrast but not necessarily high colour accuracy. We can use the certified product list to find a consumer-grade device that may be fit for our purpose: https://displayhdr.org/certified-products/

A DisplayHDR 1000 certified monitor is equivalent to a PQ grading device, as it has a peak brightness of 1,000 nits and a minimum of 0.005; this is ideally what you want, but you can get by with an HDR-400 certified display as long as it supports wide colour gamut. In HDR terms, wide gamut means covering at least 90% of the DCI-P3 colour space, so we can use the VESA list to find a monitor that is HDR-10 compatible and has decent colour accuracy. Even inside the HDR-400 category there are displays that are fit for purpose and reasonably priced. If you prefer a brand more orientated to professional design or imaging, look for the usual suspects, Eizo, Benq and others, but there it will be harder to find HDR support, as those manufacturers usually focus on colour accuracy; you may find a display covering 95% DCI-P3 but not necessarily producing high brightness. As long as the device supports HDR-10, you are good to go.

I have a Benq PD2720U that is HDR-10 certified, has a maximum brightness of 350 nits and a minimum of 0.35, and covers 100% sRGB and Rec709 and 95% DCI-P3, so it is adequate for the task. It is worth noting that a typical monitor with 350-400 nits of brightness offers around 10 stops of dynamic range.

In summary, any of these will work if you do not have a professional-grade monitor:

  • Look into the VESA list https://displayhdr.org/certified-products/ and identify a device that supports at least 90% DCI-P3; ideally HDR-1000, but less is OK too
  • Search professional display specifications for HDR-10 compatibility and 10 bits wide gamut > 90% DCI-P3

 

Final Cut Pro Steps

The easy way to have HDR-ready content with the GH5 is to shoot with the HLG Photo Style. This produces clips that, when analysed, show the following characteristics with the AVCI codec.

MediaInfo Details HLG 400 Mbps clip

Limited means the clip is not using the full 10-bit range for brightness; you do not need to worry about that.

With your material ready, create a new library in Final Cut Pro set to Wide Gamut and import your footage.

As we know, Apple does not support HLG, so when you look at the luma scope you will see a traditional Rec709 IRE diagram. In addition, the Tone Mapping functionality will not work, so you do not have a real idea of colour and brightness accuracy.

At this stage you have two options:

  1. Proceed in HLG and avoid grading
  2. Convert your material to PQ so that you can edit it

We will go with option 2, as we want to grade our footage.

Create a project with PQ gamut and enter your display information in the project properties. In my case the display has a minimum brightness of 0.35 nits, a maximum of 350, and P3 primaries with a standard D65 white point. It is important to know these parameters to have a good editing experience, otherwise the colours will be off. If you do not know your display parameters, do some research; I have a Benq monitor that comes with a calibration certificate, and the information is right there. Apple screens are typically also P3 with a D65 white point, and you can find the maximum brightness in the specs, usually around 500 nits with a minimum of 0.5 nits. Do not enter Rec2020 in the monitor information unless your monitor has native primaries in that space (almost none do). Apple documentation tells you that if you do not know those values you can leave them blank, and Final Cut Pro will use the display information from ColorSync to try a best match, but this is far from ideal.

Monitor Metadata in the Project Properties

For the purpose of grading we will convert HLG to PQ using the HDR Tools effect. The two variants of HDR manage brightness differently, so a conversion is required; the colour information, however, is consistent between the two.

Please note that the maximum brightness value is typically 1,000 nits; however, not many displays out there support this level of brightness. For the purpose of what we are going to do this is irrelevant, so DO NOT change this value. Activate tone mapping, accessible under the View pull-down in the playback window; this will adapt the footage to your display according to the parameters of the project, without capping the scopes.

Use HDR Tools to convert HLG to PQ
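If you are curious about why a conversion is needed at all, the two transfer functions are easy to compare in code. Below is a minimal, illustrative sketch of the BT.2100 HLG OETF and the SMPTE ST 2084 (PQ) inverse EOTF; the HDR Tools effect performs a conversion of this nature internally, but the exact implementation is Apple's, so treat this purely as background:

```python
import math

# BT.2100 HLG OETF: relative scene-linear light E in [0, 1] -> non-linear signal E'
A = 0.17883277
B = 1 - 4 * A                   # 0.28466892
C = 0.5 - A * math.log(4 * A)   # 0.55991073

def hlg_oetf(e: float) -> float:
    """HLG is scene-referred and relative: 1.0 means nominal peak, not a nit value."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C

# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits -> non-linear signal
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_inverse_eotf(nits: float) -> float:
    """PQ is display-referred and absolute, defined over 0 to 10000 nits."""
    y = nits / 10000.0
    y_m1 = y ** M1
    return ((C1 + C2 * y_m1) / (1 + C3 * y_m1)) ** M2

print(round(hlg_oetf(1.0), 3))          # 1.0: HLG signal peaks at 1 whatever the display
print(round(pq_inverse_eotf(1000), 3))  # ~0.752: 1000 nits sits at about 75% PQ signal
```

The key takeaway is the one above: HLG encodes relative brightness while PQ encodes absolute nits, which is exactly why a brightness conversion is required while the colour primaries stay the same.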

Finalising your project

When you have finished with your editing you have two options:

  • Stay in PQ and produce an HDR-10 master
  • Delete all the HDR Tools HLG to PQ conversions and change the project back to HLG

If you produce an HDR-10 master you will need to edit twice to get an SDR version: duplicate the project and apply the HDR tool from HLG to SDR, or another LUT of your choice.

If you stay in HLG you will produce a single file, but HDR will likely be displayed on a narrower range of devices due to the lack of HLG support on computers. The HLG clip will still have the correct grading: the corrections performed while the project was in PQ with tone mapping survive the switch back to HLG, as HLG and PQ share the same colour mapping. The important thing is that you were able to see the effects of your grade.

With the project back in HLG you can see how the RGB parade and the scope are back to IRE, but everything is exactly the same as with PQ

In my case I have an HLG TV, so I produce only one file as I can’t be bothered doing the exercise twice.

The steps to produce your master file are identical to any other project: I recommend creating a ProRes 422 HQ master and deriving other formats from it using Handbrake. If you change your project back to HLG you will get a warning about the master display; you can ignore it.

Focussing Techniques for Video – Part II Auto Focus Settings

If you have some experience with video on land you will know that many professional videographers do not use autofocus but rely on follow focus devices. Basically, those are accessories that control the focus ring of the camera and avoid the shake you would create if you turned the focus ring with your hand.

The bad news is that there are no devices to perform follow focus underwater, and if you use a focus knob you will indeed create camera shake. This is the primary reason why I do not use focus knobs on any of my lenses, with the exception of the Olympus 60mm macro, and on those rare occasions when I use it I do not actually use it to obtain focus but to ensure I am at the closest working distance.

So how do you achieve good focus if you can’t use a focus ring and continuous autofocus cannot be trusted? There are essentially three methods that I will discuss here and provide some examples:

  1. Set and forget
  2. Set and adjust
  3. Optimised Continuous Autofocus

You will have noticed that there is still an option for continuous autofocus in the list. Before we drill down into the methods I want to give some background on autofocus technology.

If after reading this post you are still confused, I recommend you get some tuition, either by joining my Red Sea trip or 1 to 1 (offered in the Milton Keynes area in the UK).

https://interceptor121.com/2019/07/28/calling-out-to-all-image-makers-1st-interceptor121-liveaboard-red-sea-2020/

Contrast Detect vs Phase Detect and Hybrid Autofocus

The internet is full of autofocus videos showing how well or badly certain cameras perform and how one system is superior to another. The reality is that professional cameramen use follow focus in the majority of cases, and this is because the camera does not know who the subject is.

Though it is true that one focus system may perform better than another, consider that Red cameras use contrast detection autofocus, the same as your cheap compact camera, so clearly autofocus must not be that important.

The second fact is that any camera focus system needs contrast, including phase detect. Due to the scattering of blue light in water there are many situations where the contrast in the scene is low, resulting in the camera autofocus system hunting.

So my first recommendation is to ignore the whole discussion about which focus system is superior, because the reality is that there will be situations where focus will be difficult to achieve and the technology will not come to help. You need to devise strategies to make things work, and that is what this post is about.

Let’s now go through the techniques.

Method 1: Set and Forget

As the name implies, with this method we set focus at the beginning of the shot and never change it again. This means disabling the camera’s continuous focus in video mode, which is essential for this technique to work.

This works in three situations:

  1. Using a lens at the hyperfocal distance behind a flat port
  2. Using wet wide angle lenses
  3. Using fisheye lenses

Method 1.a Hyperfocal Distance Method

I am not going to write a dissertation on this; there is good content on Wikipedia worth a read: https://en.wikipedia.org/wiki/Hyperfocal_distance

The key concept is that, for a given aperture, there is a focus distance beyond which depth of field reaches infinity. The wider the lens, the closer this distance. For example, the hyperfocal distance of a 14mm lens on a Micro Four Thirds body at f/5.6 is around 1.65 meters, so if you focus on an object at this distance anything between 0.8 meters and infinity will be in focus. As you close the aperture the hyperfocal distance diminishes. This technique is good for medium or reefscape shots where you want the whole frame sharp and in focus. It is not suitable for macro or close shots, as the aperture required would be too small and diffraction would kick in.
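If you want to work out the numbers for your own lens, the standard formula is H = f²/(N·c) + f, where f is the focal length, N the aperture and c the circle of confusion. A minimal sketch follows; note that the circle of confusion here is my assumption, chosen to roughly reproduce the 1.65 m figure above (the common textbook value for Micro Four Thirds is 0.015 mm, which gives a somewhat longer distance):

```python
def hyperfocal_mm(focal_mm: float, aperture: float, coc_mm: float) -> float:
    """Hyperfocal distance H = f^2 / (N * c) + f, everything in millimetres."""
    return focal_mm ** 2 / (aperture * coc_mm) + focal_mm

f, n, c = 14.0, 5.6, 0.021          # 14mm at f/5.6; c = 0.021mm is an assumption
h = hyperfocal_mm(f, n, c)
print(round(h / 1000, 2))           # ~1.68 m hyperfocal distance
print(round(h / 2000, 2))           # ~0.84 m near limit: H/2 to infinity is sharp

# Behind a flat port, refraction makes objects appear closer by a factor of ~1.33,
# so as a rule of thumb divide the in-air distance to estimate the apparent one.
print(round(h / 1000 / 1.33, 2))    # ~1.26 m apparent distance
```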

Looking back at WK’s clips: if continuous autofocus had been disabled and he had focussed just at the start of the scene at 1.85 meters, no refocus would have been required until the manta was at 0.9 meters. Note that distances have to be adjusted to account for the magnification effect of water.

Once you have your lens and aperture setting you can quickly work out some distances in your scene and fine-tune your technique.

Obviously shooting those shots with a flat port is not exactly the most common method; however, understanding this technique is paramount to the other two.

Methods 1.b and 1.c: Wet Lenses and Fisheyes

Fisheye lenses tend to have an incredible amount of depth of field even wide open, and therefore set and forget applies in full here, without even bothering about hyperfocal distance. Usually focussing on your feet is all that is required.

The real revelation with this technique are afocal wet lenses. Afocal means that the focal length of the wet lens is infinite, so the light coming through does not diverge or converge. Together with the magnification factor, typically 0.3-0.4x, this means you get to a fisheye situation without the same amount of distortion. This is the primary reason to buy a lens like the Nauticam WWL-1 or even an Inon wet lens with an afocal design.
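As a rough check of why an afocal converter gets you into fisheye territory, multiply the focal length of the main lens by the converter magnification and recompute the angle of view. A sketch assuming a 0.36x magnification, a value within the typical 0.3-0.4x range quoted above, on a 28mm full-frame-equivalent lens:

```python
import math

FULL_FRAME_DIAGONAL_MM = 43.3  # diagonal of a 36x24mm sensor

def diagonal_aov_deg(focal_mm: float) -> float:
    """Diagonal angle of view of a rectilinear lens in full-frame terms."""
    return 2 * math.degrees(math.atan(FULL_FRAME_DIAGONAL_MM / (2 * focal_mm)))

base = 28.0                  # 28mm full-frame equivalent (e.g. 14mm on MFT)
magnification = 0.36         # assumed afocal wet lens magnification
effective = base * magnification

print(round(diagonal_aov_deg(base)))       # ~75 degrees without the wet lens
print(round(diagonal_aov_deg(effective)))  # ~130 degrees with the wet lens
```

130 degrees is fisheye-class coverage, obtained with far less distortion than a true fisheye.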

My Tiger and Hammerhead videos are shot with the camera locked in manual focus after focussing on my feet.

Even when the shark hits the camera the image is in focus

I do not have technical information on the newer Nauticam WACP-1 or WACP-2, so I am not in a position to confirm whether those lenses are afocal and therefore I cannot help you there. I would think the considerations on depth of field still apply. If Nauticam, a shop or a user lends me a set-up for pool testing, I can provide optimised settings for the WACP.

Set and forget is the number one method for wide angle and reefscapes underwater and it is easy.

Method 2: Set and Adjust

As the name implies, this method sets the focus at the beginning of the shot and then adjusts it when required; this is necessary especially in macro situations.

The set and adjust method varies depending on how the camera manages push-on focus. If the camera refocuses on a half-press of the shutter, no settings are required other than disabling continuous autofocus.

For cameras that do not have a half-shutter refocus setting, you need to operate in manual focus and then set a custom button to perform a single autofocus.

In both cases you need peaking to be active during the shot.

Procedure:

  1. Set the focus as required using the half shutter or the AF-On button
  2. Observe the peaking to ensure the subject is in focus, moving the camera if required
  3. In case of loss of focus, refocus using the shutter or the AF-On button

This method works well with macro, where typically you set focus and then move the camera back and forth to keep it; in those cases where you want to switch focus to another part of the frame, you refocus. This would have helped Brian in the two-crab situation.

As the refocus brings a moment of blur into the clip, you need to ensure that when you trigger it the camera will succeed; this is best achieved using a single focus area.

Method 3: Optimised Continuous Autofocus

Although autofocus has some risks, there are situations where it is required. Those include:

  • Shooting at apertures that do not give sufficient depth of field to warrant set and forget
  • Using dome ports and rectilinear lenses; from what I have experienced those lenses do not work well with hyperfocal distances due to the physics of dome ports

Obviously the best option remains using a wet lens with set and forget; however, there are instances where we absolutely want straight lines, for example when shooting divers or models. In those cases we will use a dome port, and as we can’t use a focus gear because the camera would shake, we need autofocus.

Focus Area Settings

Cameras have a selection of modes to set the area that will be used by autofocus:

  1. Face / Animal recognition -> locks on recognised shapes
  2. Multi area -> selects the highest contrast areas among a number of smaller zones of the frame; cameras have up to 225 or more zones and you can customise the shape of the grid
  3. Single area -> an area of selectable size and position in the frame
  4. Tracking -> tracks the contour of an object in the frame

Face recognition and animal recognition are not useful in our case.

Tracking requires the object to keep its shape within the frame. This is useful for nudibranchs, for example, or anything else that does not change shape; a fish turning will be lost by this method, so it is seldom used. To be honest, it fails most of the time on land too.

So we really are left with multi area and single area.

My advice is to avoid multi area, because particles in the water, for example, can generate sufficient contrast to fool the camera and make it lock onto them.

So the best option is to use single area. I typically set this to a size smaller than the central third of a nine-block grid. With this configuration it is also possible to focus on a subject off centre by moving the area within the frame. This setting works well when the subject is tracked by our movement and stays in the centre, which is the majority of situations.

This video was shot on a 12-60 mid-range zoom using single area AF for all scenes, including macro.

The single most significant risk with single area is that if the centre of the frame drifts into blue water the camera will go hunting, so if you are shooting in caves or on a wall make sure the AF area is on one side of the frame, or occasionally lock focus to prevent the camera from seeking focus that won’t be found.

Conclusion

Achieving focus in underwater video requires different techniques from those used on land and a good understanding of ports and optics.

If you think you are not skilled enough and need help from autofocus, my advice is to get an afocal wet wide-angle lens. This will transform your shooting experience and guarantee all your wide angle is in focus. If you work in macro situations you need to master the single AF setting of your camera and make sure you are super stable.

The most difficult scenario is using dome ports, and this is one of the reasons I do not recommend them for video. If you are adamant on rectilinear lenses, then apply the specific settings described above.

Donations are appreciated; use the PayPal button on the left.

Focussing Techniques for Video – Part I Problem Diagnostic

Thanks to Brian Lim and WK’s gone diving for providing some examples.

When I started thinking about writing this post I thought of presenting a whole piece on the theory of focus and how a camera achieves it; however, I later decided it made more sense to start from examples and then drill down into the theory based on specific cases.

So we will look at three common issues, understand why they happened and then discuss possible mitigations.

Issue 1: Wide angle Manta Focus Hunt

This clip has been provided by WK’s and was taken during a trip to Socorro

The water is quite dark and murky and there is a substantial amount of suspended particles in the water, otherwise we would not have mantas. The water is also fairly milky, and therefore the image lacks contrast, which is not ideal for focusing: all cameras, including those working on phase detection AF, need contrast.

WK’s had a flat port and was shooting at a fairly narrow aperture of f/7.1, which should ensure plenty of depth of field on his 14mm lens.

In this clip you can literally see the autofocus pulsating trying to find focus; the hunting carries on until the manta is very close, at around 15 seconds into the clip. At that point the clip is stable, but the overall approach has been ruined.

Diagnostics

The key observations are that the subject was not in focus at the very beginning of the shot, and that you can distinctly see some fairly bright particles come into the scene (at 0:04 for example) and disturb the focusing process: they create strong contrast against the black manta, and the camera can’t decide which is the subject, so it starts hunting. When the manta is close and well defined in the frame the camera knows she is the subject, and the focus issues stop. While the manta is far away, the white particles in the water are large and bright enough to be picked up by the matrix points of the camera AF; this is true regardless of the manta being in the frame, and the same would have happened if another fish had photobombed the shot.

Solution

The problem in this clip is not new to video shooters: similar things happen when you have the bride walking to the altar and someone, the priest or the groom, steps into the frame while they are far apart. On land you would keep control using manual focus, or if you were really daring you would use tracking. In our case WK’s does not have a focus gear, so it is not possible for him to change focus manually.

WK’s could have used tracking, if available on the camera. With tracking you need to ensure that the camera can lock onto the manta, and then that the manta does not turn or change shape and that nothing bigger comes in front; at that point everything would work. This is a high-risk technique only worth trying in clear water when there are no particles, so in this scenario it is not advised.

The last option, and the solution to this issue, was for WK’s to switch to manual focus and engage peaking: use a single AF-On to focus on his feet or an intermediate target, then check the manta was in focus. If focus was lost, WK’s could have triggered AF again, at least being able to control how many times the camera refocussed.

Issue 2: Macro Subject Switching

This other clip has been provided by Brian Lim and it is a macro situation.

We can see that there are particles flying in the water and some other small critters at close range. The main subjects are the large crab and the two small crabs in the foreground.

Brian is not happy with the focus in this shot, as not everything is sharp.

Diagnostics

Despite the murky water, Brian has correctly locked focus on the crabs in the foreground, but due to the high level of magnification the camera does not have sufficient depth of field to render both the small and the large crabs crisp in the frame. It is possible that Brian could not detect on his screen that the crab behind was not sharp, which could have been avoided with peaking. In any case it is likely that there was no way to have this shot sharp end to end. Brian is super stable in the shot, so he was set up to make it work.
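A quick estimate shows just how little depth of field is available here. The standard close-up approximation is DOF ≈ 2·N·c·(m+1)/m², where N is the aperture, c the circle of confusion and m the magnification. The values below are my assumptions for illustration only, as the actual settings of the shot are not known:

```python
def macro_dof_mm(aperture: float, coc_mm: float, magnification: float) -> float:
    """Total depth of field (mm) using the close-up approximation
    DOF ~= 2 * N * c * (m + 1) / m^2."""
    return 2 * aperture * coc_mm * (magnification + 1) / magnification ** 2

# Assumed values: f/8, 0.015mm circle of confusion (MFT), 1:1 magnification
print(round(macro_dof_mm(8, 0.015, 1.0), 2))  # ~0.48 mm of total depth of field
```

At anywhere near 1:1 magnification the sharp zone is a fraction of a millimetre, so two crabs on different focal planes simply cannot both be crisp.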

Solution

Brian does not have a focus gear on this camera; that would have been required to pull focus within the same shot from the small crabs to the larger crab.

However, even in this situation, in manual focus Brian could have shot two clips focussing on the two different focal planes and then managed this in post. It is critical to be able to review focus on screen when we shoot, or to review right after, before we leave the scene.

Issue 3: Too many fish and too much water

The last clip is mine and was taken during a recent trip to Sataya reef.

I have deliberately left this clip uncut because it lets you see that you can use autofocus in water behind a dome port, and for the most part it works, but there are some pitfalls: the most photogenic dolphins, at 00:50, are initially blurred.

Diagnostics

I was not expecting the sheer number of dolphins on the day, and certainly I was not expecting them this close, so I had a standard zoom lens at 24mm full-frame equivalent behind a dome port. In most cases I managed to have some fish in the AF area of the camera, but at 00:45 and 00:58 the camera has nothing in the middle of the frame and goes on a hunt.

Solution

Working with a dome port and a lens of that nature does not guarantee enough depth of field to leave the camera locked, even at f/8, so some refocussing activity was indeed required. In this case I was using a single AF area in the centre, and in those moments the camera had just the blue and nothing to focus on, so it went hunting; as soon as the subject was back in the AF area the camera locked back on. Note that the AF speed is not fast enough to follow the dolphins when they come too close, so here the only real solution was a wider lens. However, I could have avoided the hunt if I had set the camera to AF lock and intercepted the moment the AF area was empty, preventing the camera from re-engaging.

Summary

In all the examples in this post the issues were generated by a lack of intervention. All the situations I have analysed could, for the most part, have been dealt with at the time of the shot and did not require extra gear. I believe that when we are in the water there is already a lot to think about, and therefore we make mistakes or do not apply the decisive corrective action that would have saved the shot.

In the next post I will drill down into focus settings and how they can help your underwater shots, and also discuss how those apply to macro, wide and mid shots. I am also happy to look at specific examples or issues, so please get in touch. Specific coaching or troubleshooting is provided in exchange for a drink or two.

Donations are appreciated; use the PayPal button on the left.

Announcing New 2020 Offering

Dear readers, in 2020 I will be adding some services to the blog to reflect some requirements that have been developing in the last few years.

It happens at times that people get in touch, either through comments or directly by email, to ask about their current challenges, so I thought: why not address this with a bespoke service? Here are my current ideas:

  • Equipment selection – this is generally to do with ports, lenses, strobes, lights and accessories more than with cameras and housings
  • Photo editing clinic – people seem to struggle with editing their images. While some are definitely skilled, the majority aren’t, and editing an image is almost as important as shooting a good one
  • Video editing clinic – as above but for video, which is sometimes even more complex

Those will be offered at the symbolic price of a few beers at UK prices: a £10 donation using the link on the left-hand side.

Other topics that are also becoming interesting are discussions around issues like focus, framing and lens quality. For those I welcome input material by email at interceptor121@aol.com: send me your images or videos with problems and I will use them to build an article for your benefit and that of others.

I am currently working on a feature on focus in video, so I am looking for your blurred videos (sorry); as I don’t have many myself, I need some help from you guys.

Thank you for reading this short post!