
Announcing 121 with Nick More

Nick More will be answering my Q&A in a post soon to be published on my blog.

Nick is the British Underwater Photographer of the Year 2020 and has made motion blur his trademark.

Nick has recently authored a chapter in the latest edition of Martin Edge's The Underwater Photographer and has provided me with some of his most exciting images, including some that he considers special.

Check out his images at http://instagram.com/nicholasmoreuw and stay tuned: the post will go up this Saturday, June 13 2020.

Do you need RAW video?

We are finally there. Thanks to smaller companies that are keen to get a share of the market, we now have at least two cameras with MFT sensors that are able to produce RAW video.

RAW Video and RED

It was RED that patented the original algorithm to compress raw video data straight out of the sensor, before the demosaicing process. Apple tried to circumvent the patent with their ProRes RAW but lost the legal battle in court and now has to pay licenses to RED. Coverage is here.

So RED is the only company that owns this science. To avoid paying royalties, Blackmagic Design developed an algorithm for their BRAW that uses data taken from a step of the video pipeline after demosaicing.

I do not want to discuss whether BRAW is better than REDCODE or ProRes RAW. However, with a background in photography, I only consider RAW what comes straight out of the sensor's analog-to-digital converter, so for me RAW is REDCODE or ProRes RAW, not BRAW.

How big is RAW video?

If you are a photographer you know that a RAW image file is roughly the same size in megabytes as your camera's megapixel count.

How is that possible? I have a 20 Megapixel camera and the RAW file is only a bit more than 20 megabytes. My Panasonic RW2 files are 24.2 MB without fail out of 20.89 Megapixels, so on average 9.26 bits per pixel. Why don't we have the full 12 bits per pixel and therefore a 31 MB file? Well, camera sensors are made of a grid of pixels that are monochromatic: each pixel is either red, green or blue. In each 2×2 matrix there are 2 green pixels, 1 red and 1 blue pixel. Through a series of steps, one of which is to decode this mosaic into an image (demosaicing), we rebuild an RGB image for display.
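The bits-per-pixel arithmetic above is easy to verify (a sketch using only the file size and pixel count quoted above):

```python
# Check the bits-per-pixel figure quoted for the Panasonic RW2 files.
file_size_mb = 24.2   # lossless RAW file size in MB
megapixels = 20.89    # sensor resolution in megapixels

bits_per_pixel = file_size_mb * 8 / megapixels
full_12bit_mb = megapixels * 12 / 8   # size if every pixel used all 12 bits

print(f"{bits_per_pixel:.2f} bits/pixel")          # ≈ 9.27
print(f"{full_12bit_mb:.1f} MB at 12 bits/pixel")  # ≈ 31.3
```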

Each one of our camera pixels will not have the full 4096 possible tones: measurements from DxOMark suggest that the Sony IMX272AQK only resolves 24 bits of colour in total and 9 bits of grey tones. This is why a lossless RAW file is only 24.2 MB. It also means that an 8 Megapixel (4K) frame in RAW would be about 9.25 MB, so a 24 fps RAW video stream would be 222 MB/s, or 1,776 Mb/s, if we had equivalent compression efficiency. After chroma subsampling to 4:2:2 this would become 1,184 Mb/s.
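The bitrate chain can be reproduced with the same figures (a sketch; the per-frame size and frame rate are the values quoted in the text):

```python
# Reproduce the RAW video bitrate chain quoted above.
frame_mb = 9.25   # MB per RAW frame (~8 MP at ~9.26 bits/pixel, as above)
fps = 24

mb_per_s = frame_mb * fps        # megabytes per second
mbit_per_s = mb_per_s * 8        # megabits per second
after_422 = mbit_per_s * 2 / 3   # 4:2:2 keeps 2/3 of the samples of 4:4:4

print(mb_per_s, mbit_per_s, after_422)   # 222.0 1776.0 1184.0
```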

Cameras like the ZCam E2 or the BMPCC4K, which can record ProRes 422 HQ, approach those bitrates and can be considered virtually lossless.

But now we have ProRes RAW, so what changes? The CEO of ZCAM has posted an example of a 50 fps ProRes RAW HQ file with a bitrate of 2,255 Mb/s; at 24 fps this would be 1,082 Mb/s, so my maths are actually stacking up nicely.
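The frame-rate scaling is a simple proportion (figures as quoted above):

```python
bitrate_50fps = 2255   # Mb/s for the 50 fps ProRes RAW HQ sample file
bitrate_24fps = bitrate_50fps * 24 / 50
print(round(bitrate_24fps))   # 1082
```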

Those bitrates are out of reach of almost all memory cards, so SSD support is required, and this is where Atomos comes into the picture.

Atomos have decided to adopt ProRes RAW and currently offer support for selected Nikon, Panasonic and ZCam models.

ProRes RAW workflow

So with a ProRes RAW file at hand I wanted to test the workflow in Final Cut Pro X. ProRes RAW being an Apple codec, everything works very well; however, we encounter a number of issues that photographers resolved a long time ago.

The first one is that RAW has more dynamic range than your SDR delivery space. This also happens with photos, but photo programs work in larger RGB spaces like ProPhoto RGB at 16 bits; using tone mapping you can edit your images and then bring them down to an 8 bit JPEG that is not as good as the RAW file but is in most cases fine for everyone.

Video NLEs are not in the same league as photo RAW editors and usually deal with a signal that is already video, not raw data. So the moment you drop your ProRes RAW clip on an SDR timeline it clips, as you would expect. A lot of work is required to bring clips back into an SDR space, and this is not the purpose of this post.

To avoid big issues I decided to work on an HDR timeline in PQ, so that with a super wide gamut and gamma there were no clipping issues. The footage drops perfectly into the timeline without any conform work required, which is brilliant. So RAW for HDR is definitely the way forward.

ProRes RAW vs LOG

My camera does not have ProRes RAW, so I wanted to understand what is lost going through LOG compression. For cameras that have analog gain on the sensor there is no concept of a fixed base ISO like on RED or ARRI cameras. Our little cameras have a programmable gain amplifier, and as gain goes up, DR drops. So the first bad news is that by using LOG you will lose DR compared to RAW.

This graph shows that on the Panasonic GH5 there is a loss of 1 Ev from ISO 100 to 400, but we still have a minimum of 11.3 Ev to play with. I am not interested in the whole DR; I just want to confirm that for those cameras that have more DR than their ADC allows, you will have a loss with LOG, as this needs gain, and gain means clipping sooner.

Panasonic GH5 full resolution 20.9 MPixels DR

What is very interesting is that, net of this, the ProRes RAW file allowed me to test how good LOG compression is. So in this clip I have:

  1. RAW video unprocessed
  2. RAW video processed using Panasonic LOG
  3. RAW video processed using Canon LOG
  4. RAW video processed using Sony LOG

In this example the ZCAM E2 has a maximum dynamic range of 11.9 Ev (log2(3895)) from the Sony IMX299CJK datasheet. As the camera has less DR than the maximum limit of the ADC, there is likely to be no loss.
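The 11.9 Ev figure comes straight from the signal ratio in the datasheet:

```python
import math

# Dynamic range in stops is log2 of the ratio between the largest and
# smallest distinguishable signal (3895 from the Sony IMX299CJK datasheet).
snr_ratio = 3895
dr_stops = math.log2(snr_ratio)
print(round(dr_stops, 1))   # 11.9
```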

We can see that there are no visible differences between the various log processing options. This confirms that log footage is an effective way to compress dynamic range into a smaller bit depth space (12→10 bits) for MFT sensors.
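To see why a log curve can squeeze 12 bits of linear data into 10 bits with no visible loss, here is a toy sketch (a generic log curve of my own, not Panasonic's, Canon's or Sony's actual formula):

```python
import math

def log_encode(linear, in_bits=12, out_bits=10):
    """Map a linear sensor value to a log-spaced code word (toy curve)."""
    max_in, max_out = 2 ** in_bits - 1, 2 ** out_bits - 1
    return round(max_out * math.log2(1 + linear) / math.log2(1 + max_in))

def log_decode(code, in_bits=12, out_bits=10):
    """Invert the toy log curve back to a linear value."""
    max_in, max_out = 2 ** in_bits - 1, 2 ** out_bits - 1
    return 2 ** (code / max_out * math.log2(1 + max_in)) - 1

# Round-tripping through 10 bits keeps the relative error below ~0.5%
# across the whole 12-bit range, so the dynamic range survives.
for value in (16, 256, 1024, 4095):
    restored = log_decode(log_encode(value))
    print(value, round(restored), f"{abs(restored - value) / value:.2%}")
```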

The same ProRes RAW files processed using log from Panasonic, Canon and Sony shows no visual difference

Final Cut Pro gives you the option to go directly to RAW or to go through LOG; the latter means all your log based workflows and LUTs continue to work. I can confirm this approach is sound, as there is no deterioration that I can see.

Is ProRes RAW worth it?

Now that we know that log compression is effective, the question is: do I need it? And the answer is, it depends…

Going back to our ProRes RAW at 1,082 Mb/s: once 4:2:2 subsampling is applied this drops to 721 Mb/s, which is pretty much identical to the ProRes 422 HQ nominal bitrate of 707 Mb/s. So if you have a ZCam and record ProRes RAW or ProRes 422 HQ you should not be able to see any difference. I can confirm that I have compressed such footage to ProRes 422 HQ and could not see any difference at all.
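The subsampling arithmetic behind this comparison (a sketch with the figures quoted above):

```python
raw_24fps = 1082   # Mb/s, ProRes RAW at 24 fps as estimated earlier

# 4:2:2 keeps 2 chroma samples per 4 luma samples, i.e. 2/3 of the 4:4:4 data.
after_422 = raw_24fps * 2 / 3
prores_422_hq = 707   # Mb/s, nominal ProRes 422 HQ bitrate for this format

print(round(after_422), prores_422_hq)   # 721 707
```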

However, with photos a RAW file can typically hold heavy modifications while a JPEG cannot. We are used to processing ProRes, and there is no doubt that ProRes 422 HQ can take a lot of beating. In my empirical tests I can see that Final Cut Pro X is very efficient at manipulating ProRes RAW files, but in terms of holding modifications I cannot see that this codec provides a benefit, though this may be due to the limited capability of FCPX.

For reference, Panasonic AVC-Intra 422 is identical in quality to ProRes 422 HQ, though harder to process, and much harder to process than ProRes RAW.

Conclusion

If you already have a high quality output from your camera, such as ProRes 422 HQ or Panasonic AVC-Intra 400 Mbps, with the tools at our disposal there is not a lot of difference, at least for an MFT sensor. This may have to do with the fact that the sensor DR and colour depth are limited anyway, so log compression is effective to the point that ProRes RAW does not appear to make a difference. However, there is no doubt that if you have a more capable camera there is more valuable data there, and it may well be worth it.

I am currently looking for Panasonic S1H ProRes RAW files. Atomos only supports 12 bits, so the DR of the camera will be capped, as RAW is linearly encoded. However, SNR will be higher and the camera will have more tones and colours, resulting in superior overall image quality; some call this, incorrectly, usable DR, but it is just image quality. It will be interesting to see if AVC-Intra 10 bits plus log is more effective than ProRes RAW 12 bits.

What is Dual Native ISO and does it matter to Underwater Video?

Dual native ISO is one of the most confusing topics in modern videography. Almost every professional camera (Alexa, Varicam) has dual native ISO. So what is it, and does it matter to underwater shooters?

Sensitivity and ISO

Most of the confusion stems from the fact that film no longer exists. When you had film you could choose different ASA ratings, or film sensitivities, and once loaded in the camera you were stuck with it until the roll was finished.

With digital cameras having memory storage you can change the ISO flexibly, but there is some confusion about ISO and sensitivity, so let's have a look at some details.

Simplified digital camera schematic

In the schematic above the film is replaced by the sensor. As with film, the sensor has a fixed level of sensitivity that does not change.

The two triangles are gain circuits: they amplify the signal coming from the sensor while it is still analog, before it is converted into a digital signal. A camera typically has a single gain circuit, but some cameras have two; in that case we have a dual gain circuit, as in the Panasonic GH5s or the Blackmagic Pocket Cinema Camera 4K/6K.

Base or Native ISO?

When the gain circuit is set to 1 (passthrough), an ISO measurement is taken according to ISO 12232: https://www.iso.org/standard/73758.html

It follows that, as the amplifier is in passthrough, the camera can only have a single native ISO. So the whole definition of dual native ISO is incorrect; such cameras should really be called dual gain cameras, as the sensor has only one ISO. The ISO formula defines speed in lux·seconds; this gives the native ISO of the sensor, and then gain levels on the amplifier are mapped to an Ev (stops) scale.
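The formula referenced above appears as an image in the original post; as far as I can tell it is the saturation-based speed from ISO 12232, where H_sat is the exposure in lux·seconds that saturates the sensor:

```latex
S_{sat} = \frac{78}{H_{sat}}
```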

It is worth noting that ISO values as seen on a camera are typically off, and real values are lower; this is because manufacturers tend to leave headroom before clipping.

And how do I find out the native ISO of my camera? This is typically not clearly documented, but generally it is the lowest ISO you can set on the camera outside the extended range, where extended really means additional digital enhancement.

For simplicity, here is a snapshot of the GH5 manual where you can see that the native ISO is 200. The extended range is below 200 and above 25600.

Panasonic GH5 manual

A method to check what gain circuitry is installed in a camera is to look at the read noise graphs on Photons to Photos.

The white dots map the extended ISO settings

When we look at a camera with a dual gain circuit the same graph has a different shape.

GH5s read noise shows the dual gain circuit, white dots are extended ISO

In the case of the Panasonic GH5s the sensor has a native ISO of 160; this is the value without any gain applied. You can also see that at ISO 800, when the high gain amplifier is active, the read noise is as low as at ISO 320. This is why there is a common misconception that the GH5s native ISO is 800, but as we have seen it is not.

GH5s Manual

The GH5s manual mentions a dual native ISO setting; as we have seen, this is actually an incorrect definition, as the sensor has only one native ISO, and it is 160.

The low gain analog amplifier works from ISO 160 to 800 and the high gain amplifier works from 800 to 51200; values outside this range are only digital manipulation.
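The ISO-to-amplifier mapping described above can be sketched as a small lookup (ranges as quoted from the GH5s manual; the function name and the choice of assigning ISO 800 to the high gain stage are mine):

```python
def gh5s_gain_stage(iso):
    """Return which GH5s circuit handles a given ISO setting (standard styles)."""
    if iso < 160 or iso > 51200:
        return "extended: digital manipulation only"
    if iso < 800:
        return "low gain analog amplifier (160-800)"
    return "high gain analog amplifier (800-51200)"

print(gh5s_gain_stage(320))    # low gain analog amplifier (160-800)
print(gh5s_gain_stage(1600))   # high gain analog amplifier (800-51200)
print(gh5s_gain_stage(80))     # extended: digital manipulation only
```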

Gain and Dynamic Range

In order to understand dynamic range, defined as the Ev difference between the darkest and brightest parts of the image, we can look at a DR chart.

Dynamic Range Plot for GH5s

This chart looks at photographic dynamic range (usable range), so it is much lower than the 12 or 13 Ev advertised by Panasonic, but it nevertheless shows that dynamic range is always highest at the lowest ISO. This may or may not be the native ISO; in the GH5s case it is actually ISO 80, in the extended range. It is not possible to increase dynamic range by virtue of amplification, so it is not true that the camera's DR will be higher at, say, ISO 800. So why do you find plenty of internet posts and videos saying that the GH5s native ISO is 800? It is because of confusion between photo styles, gain and gamma curves.

Dual Native ISO VLOG Confusion

VLOG is a logarithmic gamma curve. https://en.wikipedia.org/wiki/Gamma_correction

When the gamma curve is logarithmic the camera will no longer reach saturation at the native ISO of 160 but will require an additional stop of light. This is explained in the manual, where we can see that the values 160 and 800 have shifted to 320 and 1600.

A: standard Rec709 photo styles; E: VLOG photo style

We can also see that in variable frame rate mode the camera needs additional gain to record VLOG, so the ranges become 320-2500 and 2500-25600. Values above 25600 are not implemented for VLOG because the camera is already at its limit of 51200.

So what has changed in the situation above are the base ISOs of the low and high gain settings, depending on the gamma curve.

The compression of the gamma curve allows further dynamic range to be recorded despite higher noise due to a higher gain applied.

Comparison of Standard Styles and VLOG

From what we have seen before, VLOG has higher dynamic range than a standard photo style due to gamma curve compression; this has been measured by the EBU. Full report here: https://tech.ebu.ch/docs/tech/tech3335_s29.pdf

EBU DR Table

In terms of Ev, or stops, HLG has more dynamic range than VLOG; however, it is not grading ready and really is more an alternative to Like709. In this evaluation the knee function was not activated, so the real gap between HLG and Like709 is less than 4.3 Ev.

When it comes to VLOG vs CineLike D, we can see that VLOG has a higher maximum exposure than CineLike D but, by virtue of the additional gain applied, also a higher minimum exposure, resulting in 0.4 Ev less dynamic range. However, what really matters is the maximum brightness, as displays typically cannot show true black and a lot of the lower darks are simply clipped.

Due to the gamma curve's impact on base ISO, and the invariance of the native ISO, it is totally pointless to compare a linear style like CineLike with a log one (VLOG) at the same ISO setting. The comparison has to be done with VLOG set one stop higher in ISO.

So most of the YouTube videos comparing the two settings at the same exposure settings are flawed, and no conclusion should be drawn from them.

Because VLOG needs higher gain, and higher gain means higher noise, log footage in dark conditions may well appear grainier than linear photo styles. As VLOG really excels in the highlights, you need to evaluate case by case whether it is worth using for your project. In particular, when the high gain amplifier is engaged it may make more sense to use CineLike D, so that the gamma is not compressed and there are no additional artefacts from the decompression of the dark tones.

Underwater Video Implications

When filming underwater we are not, except in specific cases, in situations of extreme brightness, and this is the reason log profiles are generally not useful. However, a dual gain camera can be useful depending on the lens and port used.

In a macro situation we generally control the light, and therefore dual gain cameras do not offer an advantage.

For wide angle supported by artificial lights the case is marginally better and strongly depends on the optics used. If appropriate wet optics are used and aperture f-numbers are reasonably low, the case for dual gain cameras is not very strong.

For ambient light wide angle on larger sensor cameras with dome ports, dual gain cameras are mandatory to improve SNR and footage quality. This is even more true if colour correction filters are used, and it is the reason a Varicam or an Alexa with dual gain is a great option. However, considering depth of field equivalence, you need to assess your situation case by case. If you systematically shoot higher than ISO 800-1250, then a camera with dual gain is an absolute must, even in MFT format.

Dark non-tropical environments like kelp forests or the Mediterranean are the best fit for dual gain cameras

Panasonic GH5: my underwater still rigs

The Panasonic GH5 is well known to be a great camera for video, and I can confirm that; see my latest videos.

Clearly the camera is fantastic, and with the right set up, which I will cover in future posts, it takes amazing video.

I wanted to start however from photography as the GH5 also takes great still images.

First and foremost, macro. In general terms the lens choices for other Micro Four Thirds cameras also apply to the GH5, so my favourite lens is the Olympus 60mm. Alternatively, if you don't have that lens and you don't have extremely small subjects, you can get good results with a zoom lens; I use the Panasonic 14-42mm Mk II but others work well too.

In terms of arms and strobes nothing changes, so my current rig is based on a Nauticam NA-GH5 housing and two Inon Z240 strobes. I have each arm set with one 8″ and one 5″ segment. You will notice that the longer segment is closer to the camera; this is because in macro you will usually shoot above the sand, so this makes it easier.

For wide angle the situation changes slightly, as I use a 12″ arm segment, as in the rig below.

GH5 with WWL-1 wet lens and 8+12 arm segments

For wide angle the arms return to a standard configuration, with the shorter segment close to the housing. The same configuration applies if you shoot the 8mm fisheye.

In terms of flotation the GH5 housing is heavy: with the 35 macro port it is 720 grams and with the fisheye 620 grams. For macro I like my set up to be negative, so fewer floats. For wide angle, in case you take slow shutter speed shots, I also use the tripod kit for the NA-GH5; this is not as good as a complete tripod but works well in wrecks.

In the next posts I will talk about video, as there is a question about diopters and which ones I use. Lights are also a topic of contention, and I will discuss a few options there too.

As always, please ask questions if you wish.

Panasonic GH5 settings for underwater video

In the previous post I described the HDR settings, especially relevant if you have an external recorder. However, there is quite a lot of discussion about whether it is worth shooting HDR underwater video with the Panasonic GH5 at all. This follows the discussions about using VLOG L underwater versus studio production: many people that start using VLOG L revert to a more normal setting, sometimes using standard profiles and not even Cine profiles, because the workflow is just too much work.

In general there are 3 characteristics that are important to underwater footage, and more generally to any footage: colour, contrast and noise. This is the reason why when you look at DxOMark you have measures of those 3 characteristics.

GH5 DxOMark scores

What DxOMark is telling us is that looking at a RAW image produced from the GH5 the colour depth is at best 23.9 bits, the dynamic range is at best 13 Evs and the Low-light ISO that still gives some decent colour depth and dynamic range is 807 ISO.

Let's interpret those measures. A colour depth of 23.9 bits means 15.6 million colours, which is actually less than the true colour of an sRGB display. Considering the three RGB channels, 23.9 bits per pixel really means 8 bits per colour. OK, so why does the camera have a 10 bit colour option at all (equivalent to 30 bits per pixel, which no camera reaches, even full frame)? We will talk about it in a minute…

Dynamic range for a RAW image is 13 Ev; however, Panasonic says VLOG L offers 12 stops, compared to the 10 stops of professional SDR footage. Now, 12 stops require a display with a contrast ratio of about 4000:1, which is beyond all commercial computer monitors and in the range of HDR devices. The new VESA DisplayHDR standard HDR600 is the minimum requirement to display this level of contrast ratio.
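The colour-depth and contrast figures in the last two paragraphs are easy to check (a sketch using the DxOMark numbers quoted above):

```python
colour_depth_bits = 23.9   # DxOMark best-case colour depth for the GH5

colours = 2 ** colour_depth_bits       # total distinguishable colours
per_channel = colour_depth_bits / 3    # roughly 8 bits per RGB channel
contrast_for_12_stops = 2 ** 12        # contrast ratio needed for 12 stops

print(f"{colours / 1e6:.2f} million colours")   # 15.65 million colours
print(round(per_channel, 1))                    # 8.0
print(contrast_for_12_stops)                    # 4096
```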

Finally, the Low-light ISO of 807 (corresponding to 1600 on your GH5, as ISO values are always incorrect and geared towards higher values for marketing reasons) means that unless you are at the surface, pretty soon there won't be any colour or dynamic range to show (Low-light ISO requires 18 bit colour depth, 9 Ev of dynamic range and 30 dB SNR).

WHAT ABOUT THE GH5S?

The GH5s will give you 1.5 stops more low-light performance, so your footage will look good until ISO 2400, or ISO 4800 in camera settings, which is quite a bump.

OK, now coming to the main point of the post: having seen those limitations, why would I bother shooting in VLOG or HLG?

First consideration: Noise

As we have seen, both dynamic range and colour depth drop considerably as ISO goes up. In short, unless you have an abundance of natural light, or you are shooting macro with a lot of artificial light, it is unlikely you will see any benefit shooting VLOG or HLG.

Considering that compression brings additional noise, we see why shooting with an external recorder at a higher bitrate really helps fight noise, even if you don't shoot log, because you reduce the compression artifacts. If you don't have a recorder, consider setting a quite low max ISO limit, around 1600 on your GH5, or you will see a lot of grain.

Second consideration: colour depth

If the camera cannot even resolve 10 bits per RGB channel, why would you shoot 10 bits? When you shoot VLOG or HLG you are not operating in the REC709 colour space, which is limited to 8 bits, so it is possible that the colours the sensor is capturing are not all within the 16.7 million of the RGB palette: some of them are outside, in the DCI-P3 or even REC.2020 colour space. Clearly, if you do not have a 10 bit screen (and almost all computer screens are 8 bits, or 8 bits with FRC to simulate 10 bits), this is a total waste of time: you won't see those colours, and nobody on a computer working in sRGB will see them either. So unless you have a proper screen to watch your clips, there is no point working in 10 bits. When it comes to grading, again, if you can't display those colours it won't be possible to do your work properly, so don't waste your time and shoot in 8 bits.

You now understand why you can't see any difference in all those YouTube comparisons, which, by the way, have been encoded in 8 bits!

A lot of people record in VLOG 10 bit and then produce in REC709, which has 8 bit colour; the reason is that they have proper grading monitors to see what they are doing.

Just to give you an example: laptops, with the exception of some recent MacBook Pros and a few others like the Dell XPS, can't display 10 bit colour. An iMac displays 10 bit colour, and some screens that support DCI-P3 are also capable; any other RGB screen won't work.

Conclusion: don't waste your time with 10 bit if you don't have a decent screen and if you only produce for YouTube.

Third consideration: Dynamic range

VLOG and HLG start at base ISO 400 (which really is 200) and this is where you have your 12 stops. Once you get to ISO 1600 (nominal 3200 on your GH5) you still have 9.5 stops, but the colours are gone. Generally it does make sense to shoot LOG; however, the issue may well be that your editing display is not HDR600 and therefore you can't really see accurately what you are shooting. Having a screen that can correctly display HDR is even harder than finding one that can display 10 bit colours. What you need to consider, though, is that unless you are capturing a sunburst, a backlit scene, or the surface, you will not have more than 10 stops in your scene anyway.

Conclusion

The settings you can shoot really depend on your editing and display devices.

If you have a laptop or just an 8 bit computer screen and no external recorder, you can shoot at 100 Mbps, 8 bit colour, with the picture profile of your choice: standard, natural, Cine-like, whatever you like, as you won't be able to tell the difference at any point in the process from any other format, 10 bit log included.

If you have a DCI-P3 display or better for editing, shoot 10 bit colour. Examples are the iMac and MacBook Pro, or some Philips or Acer screens on the market.

If you have an HDR display for editing and an HDR Tv set shoot HLG.

If you have an external recorder, shoot in ProRes HQ (as the GH5 does not support camera RAW). Some of those recorders, like the Atomos Shogun Inferno, support HDR and can also be used for editing with some adapters, so shoot in HLG to get the best results.

Generally VLOG L requires a lot of work and is best suited to studio production, so if you don't have a good grading set up, don't waste your time with it.

If you are one of those shooters that, after a lot of trial and error, ended up shooting 8 bit colour because you don't have a recorder, or shooting natural or Cine-like because you don't have a proper HDR grading monitor, now you know why you are doing what you are doing…

Setting up your GH5 for HLG HDR capture

We got our GH5 ready for HDR capture in the previous post, so how do we make the most of it?

If you have an external recorder or monitor that supports HDR it is easy! Then again, if you do, you probably have a fair bit of money and you are not reading this blog…

Currently all Atomos recorders that can be housed support HDR, including HLG.

Nauticam Atomos Flame

The Nauticam Atomos Flame, available at a list price of $3,650, will house the Shogun Inferno, Shogun Flame, Ninja Inferno and Ninja Flame.

On the Atomos website you can see that for the GH5 the recommended products are the Ninja Inferno and the Shogun Inferno; the first is priced at $995 and the second at $1,295.

There is a difference of $300 between the Shogun and the Ninja; however, the Shogun provides an SDI video port that may turn out quite useful in the grading phase. So if you got to the point of spending $3,650 for the housing, I would definitely invest the extra $300 for the Shogun Inferno.

Once you get a recorder you can set up the GH5 to output 4Kp50/60 at 10 bit and be happy. The HDR screen of the Atomos device will provide the real time monitoring you need to expose footage properly in HLG. It is not my intention to start a debate about log vs HLG; there is plenty of material out there.

A very good video is here

If you don't have a recorder you are left with the GH5 screen, which does not support HDR, so how are you going to expose correctly? You have a couple of tools available.

The first one is Zebra Patterns that can be accessed in the Monitor subsection of the menu.

There is a great tutorial on YouTube

Now, if you are working in HLG you will notice that the maximum value that can be set is 95%; this is because luminance in HLG is limited to the 64-940 code range.

If you look on the ITU website you can see that white ranges between 69% and 87% in HLG, so using Zebra we can still attempt to expose properly without an HDR monitor.

If you have a reference white balance card you should set the Zebra to 75%, as this is the reference for white; if you are in the field without a reference, your value should be set to a maximum of 90% to ensure you don't blow the highlights. You will find some websites that tell you 95% is fine too, but you do want to leave a bit of headroom. If you want, you can set Zebra 1 to 75% and Zebra 2 to 95% so you cover all eventualities.
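To see what those zebra percentages correspond to in signal terms, here is a sketch assuming the standard 10-bit narrow video range of 64-940 mentioned above (the function is mine, not a GH5 feature):

```python
def percent_to_code(percent, black=64, white=940):
    """Map a zebra/IRE percentage to a 10-bit narrow-range code value."""
    return round(black + percent / 100 * (white - black))

print(percent_to_code(75))   # 721 -> reference white card
print(percent_to_code(90))   # 852 -> conservative highlight limit
print(percent_to_code(95))   # 896 -> maximum zebra setting on the GH5
```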

So once you have set the Zebra the next step is to decide if you want to use HLG View Assist or not. Here you have three options:

  1. Off
  2. Mode 1
  3. Mode 2

Off leaves the display in REC709

Mode 1 gives priority to background areas for example the sky

Mode 2 gives priority to the main subject

The 3 modes are really a progression of brightness. When Off, the image looks completely desaturated and log-like. In Mode 1 the image appears to favour the shadows; in Mode 2 the image looks the brightest and the most punchy, making it easy to work on the foreground, but it crushes the blacks and shadows quite a bit.

No matter what you select, the Zebra values remain unchanged.

The final setting that can be useful is the Waveform monitor, which is accessible in the Creative Video menu. Like the Zebra, this gives you a real time reading of the image, within a diagram that on the horizontal axis represents the image left to right and on the vertical axis shows the signal level. It is practically a spatial representation of your image and uses the same intensity scale as the Zebra, from 0 to 100. So anything too dark at the bottom won't be visible, and things above the maximum will be clipped.
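Conceptually the waveform monitor just plots every pixel's brightness against its horizontal position; a toy version (assuming an 8-bit greyscale frame as nested lists, names are mine):

```python
def waveform(image):
    """Collect, for each column, the set of brightness levels (0-100 scale)
    found in that column - the scatter a waveform monitor displays."""
    columns = {}
    for row in image:
        for x, luma in enumerate(row):
            columns.setdefault(x, set()).add(round(luma / 255 * 100))
    return columns

frame = [[0, 128, 255],
         [0, 140, 250]]
wf = waveform(frame)
print(wf)   # column 0 is crushed at 0, column 2 sits at or near 100 (clipped)
```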

There are several tutorials available on YouTube

So in essence you could try to expose correctly using the Zebra and waveform monitor on the GH5 LCD display, but let's face it: the screen is tiny and underwater you won't really be able to use it effectively. If you have an external monitor or recorder this becomes more useful and something to genuinely try.

If you are using the camera meter to expose, remember that the GH5, like most cameras, has only three metering modes: multi area, centre weighted and spot. These influence how the camera calculates the average exposure; this is true also in manual mode, where the reading on the meter will change depending on the metering mode. However, for what we have said here, if your objective is simply not to clip the highlights, you have a long way to go before reaching 90% IRE with HLG.

In short you have three options to set exposure on your GH5:

1. Super lazy option: trust your camera meter as if this were a still image. Most likely you will be exposing to the right, and without further checks there is a chance of dark areas or clipped highlights.

2. Use Zebra and manual exposure in combination with the camera meter to ensure you stay within safe limits.

3. Use the waveform monitor and completely ignore the other parameters, as this gives you full control of what you are shooting and removes any dependency on having an HDR monitor or not.

As a final note, it is important to remember that performing a white balance adjustment is essential in order to expose correctly; it is not just about getting the colour right, as the IRE values of what is white actually change, and the camera makes assumptions on what is white to calculate the rest. This is especially true for environments with difficult light conditions.

Getting familiar with waveform monitoring is essential for editing, as the majority of people will not have the possibility to grade on an HDR screen. In the next post I will explain how to get the lowest possible cost HDR screen that supports HLG.

 

4K with the Sony RX100 in Egypt

 

 

It was time to go for a second trip with the RX100 Mark IV

I decided at the last minute to use the UWL-H100 LD; however, I managed to forget the M67-LD converter, so I ended up taking footage holding the wet lens with my left hand.

This created some flare issues in some scenes; anyway, judge for yourself.

I used Picture Profile PP6, modified with some small changes to the colour matrix (I used the Pro setting) and some increased saturation.

As always, the RX100 cannot white balance underwater, so I used a filter (deeproof); this gives a magenta tinge and sometimes the water looked a bit purple.
I have two versions of this clip: the first uncorrected, and the second where I tried to remove the purple water. Judge for yourself which one is best.

First version with minimal to no editing is here

 

The second version has some colour correction, mostly to remove the cast, but I have also done some minimal correction in some scenes shot at the surface without the filter.

The other settings were: shutter speed fixed at 1/50, Auto ISO with max ISO set to 800, and auto white balance.

Generally I am very happy with the RX100; however, the snorkelling footage was affected by one episode of fogging of the glass port. This happened during a dolphin trip, so it was very disappointing. The camera got extremely hot, and I think holding my hand close to the port to support the wet lens created the problem, as this had never occurred before.

Upon reflection I think I will go back to the UWL-100 M67 type two, as the colours I get with the Magic filter were, in my opinion, superior and more natural.

For those wondering about the dugong (Dugong dugon): it was dark, and I was free-diving to 12 metres with the camera and the hand-held wet lens, so not the easiest job.

Let me know which version of the video you prefer!

What does the UHD Premium specification mean for 4K?

The UHD Alliance is a working group that includes a number of well-known brands.

The board includes directors from the following major players:

  • Fox
  • Sony
  • Netflix
  • Panasonic
  • Dolby
  • Technicolor
  • Samsung
  • LG
  • Universal
  • Warner Bros
  • Walt Disney
  • DirecTV

The members include companies such as Sky, Amazon, Intel, THX, DTS and others.

Its key purpose is to produce specifications, mostly for high-end use, and the key pillars are:

  • High dynamic range video (SMPTE ST2084 EOTF)
  • Wide colour gamut (BT.2020)
  • 4K resolution
  • 10 bit colour depth

This is obviously a large improvement compared to the current specification of HD Video:

  • BT.709 colour
  • 1920×1080 Resolution
  • 8 bit colour

Probably the most interesting feature is high dynamic range video, as the human eye is more sensitive to contrast than to colour and resolution, although the 10 bit colour depth will surely make a difference too.

Currently all professional recorders that handle 4K use 10 bit colour, but none uses the BT.2020 colour gamut, and dynamic range is left to sensor quality with no minimum specification.

So what will UHD premium mean to us? Well currently not much!

The key point is that the UHD Alliance has also steered clear of the major issue for distribution: the video codecs.

Currently HEVC, or H.265, faces royalty challenges, but it is the most efficient codec on the market and the most widespread in terms of hardware support.

To give an idea, two minutes of 100 Mbps H.264 become 76.5 Mbps once you push H.264 to its limit, but the corresponding H.265 encode is only 13.6 Mbps, just 18% of the size.

Google does not support HEVC and is distributing 4K using VP9 and H.264. From my tests VP9 is not as efficient as HEVC: the same file came out at 17 Mbps. The key issue with VP9 is playback, which does not even work on a powerful home computer, although some new Android TVs have accelerated VP9, and so does the new Nvidia box.
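As a sanity check, the ratios quoted above can be verified with a couple of lines of Python (the bitrates are the ones measured in my tests):

```python
# Quick arithmetic on the codec comparison above (bitrates in Mb/s).
h264_max = 76.5  # the clip re-encoded with H.264 pushed to its limit
hevc = 13.6      # the same clip in H.265/HEVC
vp9 = 17.0       # the same clip in VP9

hevc_ratio = hevc / h264_max  # ~0.18, i.e. ~18% of the H.264 size
vp9_ratio = vp9 / h264_max    # ~0.22

print(f"HEVC is {hevc_ratio:.0%} of the H.264 size")
print(f"VP9 is {vp9_ratio:.0%} of the H.264 size")
```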

Whilst this gets worked out, it is likely that cameras will continue to record in H.264, and the key here is a higher bitrate, as H.264 is clearly inefficient with 4K.

If you are in the 4K space and you want to produce semi-pro or pro footage, you need an external recorder working in ProRes HQ, or your device needs to be able to record at higher than 100 Mbps.

Sony has just introduced XQD memory cards that write at 800 Mbps:

http://pro.sony.com/bbsc/ssr/mkt-recmedia/mkt-recmediaxqd/product-QDM128/

This is potentially a way forward for higher-bitrate recording, as UHS 3 is limited to 240 Mbps and would only work with compressed footage.

Another thing to consider is that you need a pretty big TV to notice UHD at normal viewing distances.

http://s3.carltonbale.com/resolution_chart.html

Carlton Bale did this analysis a few years ago when HD came about, and the conclusion was that you need 55″ or more at 8 feet to ‘see’ HD, as your eyes can’t resolve more.

For UHD the required size grows to 120″ at 8 feet, which is essentially the size of a projector screen.
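This chart logic can be reproduced with a short sketch, assuming the common rule of thumb that the eye resolves about one arcminute of detail; `min_diagonal_inches` is my own helper name, and the numbers land in the same ballpark as the chart (roughly 60″ for HD and 120″ for UHD at 8 feet):

```python
from math import radians, sqrt, tan

def min_diagonal_inches(distance_ft, vertical_pixels, acuity_arcmin=1.0):
    """Smallest 16:9 diagonal at which one pixel subtends the eye's
    resolving angle (~1 arcminute) at the given viewing distance."""
    distance_in = distance_ft * 12
    pixel_pitch = distance_in * tan(radians(acuity_arcmin / 60.0))
    screen_height = vertical_pixels * pixel_pitch
    return screen_height * sqrt(16**2 + 9**2) / 9  # height -> 16:9 diagonal

print(round(min_diagonal_inches(8, 1080)))  # HD at 8 feet: ~62 inches
print(round(min_diagonal_inches(8, 2160)))  # UHD at 8 feet: ~123 inches
```

On smaller screens, or further away, the extra pixels are simply below what the eye can distinguish.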

Essentially UHD seems to be more for computer enthusiasts watching clips very close to the screen than for the average user.

I did several tests on my TV with clips I had produced in 4K and downscaled to HD, and at my normal viewing distance I could not see any difference whatsoever!

Essentially I have determined that 50 Mbps XAVC S from the RX100 Mark IV actually looks better than 4K on my TV.

I guess we will have to wait for HDR to see some real benefits; meanwhile, 4K clips from YouTube look better simply because they carry more information: there is a factor of 6x for UHD compared to HD, and this shows as a higher-quality clip.

I don’t see a large future for UHD in TV broadcast; it could die just as 3D did.

 

The painful quest of 4K Video

2015 was probably the first year in which consumer devices made the journey to 4K, as even iPhones can now record at Ultra High Definition.

However, there is still a very long way to go to reach the level of standardisation of HD video, and the codec war has yet to determine a winner.

As of January 2016, if we consider only digital cameras, just three manufacturers produce 4K-capable devices that can be housed for underwater use: Canon, Sony and Panasonic.

Specifically we have two compact cameras with fixed lenses, the Sony RX100 Mark IV and the Panasonic LX100; two Micro Four Thirds bodies, the Panasonic GH4 and GX8; and three full-frame bodies, the Sony A7R II and A7S II and the Canon EOS-1D C, which was in fact the first camera to record 4K video, back in 2013.

From a consumer point of view, we are interested in a 4K device that can operate with wet lenses across the focal range and that comes in under the $5,000 mark including the housing. I will therefore focus on the Micro Four Thirds and fixed-lens compacts and immediately exclude the Panasonic LX100, which requires a port system to operate. We are now left with three devices that today are the real options for 4K underwater video.

4K Digital Cameras for Underwater Use

  1. Panasonic GH4 with Panasonic 14-42mm II Mega OIS
  2. Panasonic GX8 with Panasonic 12-32mm Mega OIS
  3. Sony RX100 Mark IV

I have added the lenses of choice of each camera for convenience.

In 35mm terms the focal lengths offered by the 3 devices are:

Panasonic GH4 with 14-42mm : 35-105mm

Panasonic GX8 with 12-32mm: 31.2-83.2mm

Sony RX100 IV: 28-80mm

Wet lenses

Both Panasonic cameras revert to a traditional 35mm camera equivalent range when the 4K crop is applied. The wet lens of choice is therefore the old Inon UWL-100 with M67 thread. This lens has a magnification of 0.57077 and, with the 14-42mm II Mega OIS and Macro Port 35 or the 12-32mm and Macro Port 29, performs very well without vignetting and offers zoom-through across the whole focal range. The same lens appears to work fine with the Sony RX100 Mark IV as well, but it is almost borderline in terms of vignetting and I will need to conduct further experiments; for now we will refer to the Inon UWL-H100.

Focal range with Inon UWL-100 / UWL-H100* (Sony)

Panasonic GH4+14-42 : 20-60mm

Panasonic GX8+12-32 : 18-48mm

Sony RX100 Mark IV : 17-48mm
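These in-water figures are just the 35mm-equivalent ranges multiplied by the wet lens magnification; a quick sketch for the two Panasonic bodies (the Sony pairs with the different UWL-H100, so I leave it out), with results landing within a millimetre of the table above:

```python
# In-water equivalent focal lengths: 35mm-equivalent range x wet lens
# magnification (Inon UWL-100, magnification 0.57077).
MAG = 0.57077

cameras = {
    "GH4 + 14-42mm": (35.0, 105.0),
    "GX8 + 12-32mm": (31.2, 83.2),
}

for name, (wide, tele) in cameras.items():
    print(f"{name}: {wide * MAG:.0f}-{tele * MAG:.0f}mm")
```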

You can see that the GX8 and the RX100 are virtually equivalent, and the same holds true for macro, with the GX8 and the RX100 offering the same working distance and magnification. The GH4 is superior in this area due to the longer focal length after crop of 105mm. For me the most versatile wet lens for macro remains the Inon UCL-165, despite the various Nauticam and SubSee options, because it covers all working distances from 16cm to 8cm, which is the sweet spot for macro work.

Unfortunately the level of magnification obtainable with the GX8 and RX100 is not great, and really small subjects will still look tiny in the frame. Obviously the use of the 14-42mm lens on the GX8 resolves all problems except the field of view: with the wet lens, the wide end is now 21mm, which is anyway not a huge issue.

I am still waiting for a proper review of the GX8, but in terms of 4K resolution I have been impressed with the Sony RX100 Mark IV, which appears to be sharper than the GH4 and even the A7R II.

4K formats

In terms of 4K recording, all devices on the market use some form of 100 Mb/s H264 codec: Sony uses what they call XAVC S, while Panasonic uses a standard Mp4-compatible wrapper. Sony's H264 implementation does not use B-frames, but this does not seem to affect quality that much.

So now that you have your 4K footage what do you do with it?

The first consideration is that all these cameras record internally at 8 bit with 4:2:0 subsampling. This means that colour is only recorded at quarter resolution, one chroma sample per 2×2 block of pixels, and then interpolated. The implication is that colour grading opportunities are limited, and heavy manipulation should be avoided to prevent undesired effects such as banding.

This means a custom white balance, ideally with a filter, is still very much needed for 4K.
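To see where the 4:2:0 saving comes from, here is the raw-sample arithmetic for a UHD frame (a sketch of the sampling scheme, not any camera's actual pipeline):

```python
# Raw samples per UHD frame: 4:4:4 vs 4:2:0 (8 bits per sample).
w, h = 3840, 2160

samples_444 = w * h * 3                        # Y + Cb + Cr for every pixel
samples_420 = w * h + 2 * (w // 2) * (h // 2)  # full-res luma, quarter-res chroma

print(samples_420 / samples_444)  # 0.5 -> 4:2:0 carries half the raw data
```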

In the Mac camp there are many consumer options for 4K editing, including iMovie, Final Cut Pro X and Adobe Premiere Pro; on Windows you can add Sony Vegas, Avid Media Composer and many more.

S-Log or V-Log are also not meaningful at 8 bits without external recording capabilities, as grading will ruin the footage.

Workflow for Consumer Use on Mac

For the average home user on a Mac, iMovie now offers decent functionality and imports and edits in their native format all the clips produced by our selected cameras. iMovie also exports to ProRes 422, which is ideal for storing your master copy after editing.

Unless you edit on a laptop or a machine with poor hardware, there is no need to convert the footage to intermediate formats, as most GPUs have H264 acceleration, so your 4K workflow will look like this:

  1. Import into your 4K editor
  2. Cut and edit the sequence
  3. Perform minimal corrections to exposure and colour
  4. Add some transitions
  5. Add music or voice-over
  6. Export to ProRes 422, or 422 HQ if available
  7. Compress with third-party software or a plug-in

Compression Headaches

Step 6 is particularly important: none of the above-mentioned editors has good native compressed-export capabilities, so you want to do the final compression with another program. If we take Apple's word for it, ProRes HQ is very rarely fooled by 4K footage at 737 Mb/s for 25p. Considering that our footage was 4:2:0 to start with, we need only 75% of that bandwidth, or 553 Mb/s. ProRes 422 records at 492 Mb/s, which is only 11% less than the required bandwidth, so iMovie with the ProRes export option is pretty good.
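The bandwidth reasoning above works out as follows (rates in Mb/s, taken from Apple's published ProRes targets for UHD 25p):

```python
# ProRes bandwidth arithmetic (rates in Mb/s, UHD 25p targets).
prores_hq = 737            # ProRes 422 HQ
needed = prores_hq * 0.75  # a 4:2:0 source needs only 75% of 4:2:2 bandwidth
prores_422 = 492           # ProRes 422

shortfall = 1 - prores_422 / needed
print(f"needed: {needed:.0f} Mb/s, ProRes 422 shortfall: {shortfall:.0%}")
```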

We now have our 492 Mb/s video, most likely with AAC audio; what are we going to do with it?

This is where it gets really painful. If you have a 4K TV you definitely want to watch your footage on it. Today UHD TVs support the HEVC codec and, more recently, also the VP9 codec that Google uses on YouTube; however, both codecs have limited encoding options and no hardware-acceleration support in the computer that will compress the footage.

To make matters worse, if you then share your footage online on YouTube it will be heavily re-compressed. I have analysed some clips I watch and found that 4K bandwidth is between 17 and 20 Mb/s in H264, and the files are not even encoded with CABAC, to ensure they can be played on devices with limited hardware capabilities. Many web browsers now support VP9, but hardware acceleration is lacking, so it is likely that you will be watching H264 4K footage at 18 Mb/s when you connect to YouTube on your computer.

The mp4 files that you can produce with HandBrake or other tools are easily encoded at 60-70 Mb/s, so YouTube, as it does with HD footage, will introduce significant quality loss into your 4K videos.

Interestingly, the 4K bandwidth is higher in terms of bits/(pixel×frame):

  • 0.090 for 4K
  • 0.076 for 2K
  • 0.055 for HD

This would suggest that 4K videos are less compressed, but on the other hand the compression is less efficient. 2K appears an interesting compromise that still uses CABAC and 3 reference frames, but it is really a computer-only option.
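For reference, bits/(pixel×frame) is simply the bitrate divided by the pixels delivered per second. A sketch assuming 25 fps streams; the bitrates here (18.7 and 2.85 Mb/s) are my own back-calculated illustrations that reproduce the 4K and HD figures above, not measured values:

```python
# Bits/(pixel*frame) = bitrate / (width * height * frames per second).
def bits_per_pixel_frame(bitrate_mbps, width, height, fps=25):
    return bitrate_mbps * 1e6 / (width * height * fps)

print(f"4K: {bits_per_pixel_frame(18.7, 3840, 2160):.3f}")  # ~0.090
print(f"HD: {bits_per_pixel_frame(2.85, 1920, 1080):.3f}")  # ~0.055
```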

For those who have access to an x264 encoder, here is a suggestion for 4K encoding that does not kill your computer.

Preset slower – modified

cabac=1 / ref=5 / deblock=1:0:0 / analyse=0x3:0x113 / me=umh / subme=9 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=2 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=3 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=3 / b_pyramid=2 / b_adapt=2 / b_bias=0 / direct=3 / weightb=1 / open_gop=0 / weightp=2 / keyint=250 / keyint_min=25 / scenecut=40 / intra_refresh=0 / rc_lookahead=60 / rc=crf / mbtree=1 / crf=18.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00

The options that differ from the preset are ref=5, because otherwise we break the limits of level 5.1 and the decoder may have issues, and crf=18 instead of the default 23 to increase quality.

This H264 encoding can easily produce files of around 1.4 GB for just 3 minutes and will require playback from the device itself, an attached USB disk, a good Cat 5 Ethernet network, or solid wireless at an effective speed of at least 100 Mb/s.
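That file size translates to an average bitrate like this:

```python
# Average bitrate of a 1.4 GB, 3-minute clip.
size_gb = 1.4
duration_s = 3 * 60

bitrate_mbps = size_gb * 8000 / duration_s  # GB -> megabits, then per second
print(f"{bitrate_mbps:.0f} Mb/s")  # ~62 Mb/s average
```

At roughly 62 Mb/s average, with peaks well above that, a 100 Mb/s effective link does not leave much headroom.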

It follows that H264 really is not the way forward. With HEVC at 32 Mb/s two-pass, or crf=23 in single pass, you get files that are 20% or less of the size and that work well if you have a 4K HEVC-accelerated player like I do. At this bitrate it is also very easy to stream over your wireless LAN, even one of mediocre quality. Unfortunately YouTube will reject your HEVC files and require H264 or VP9.

Google's plot for 4K world domination

Google did not want to incur more royalties, so they pushed HEVC aside in favour of the open-source VP9, as they did years ago with VP8.

VP9 is, at least in the Mac version, very slow and seems fairly amateurish. Google has been successful with Android TVs, which have YouTube as a prime source of 4K content, because the YouTube app does not work in 4K on TV sets unless they can decode VP9. This is clearly a commercial play: all TVs can play H264, and YouTube wants to reach as many people as possible with 4K, keeping bandwidth below 20 Mb/s and accessible to the higher end of DSL connections, not just fibre, so that they can push their ads to the masses. However, it also means that your video will look pretty pathetic on YouTube unless you use a VP9-capable browser, TV set or Android box to watch it.

At the time of writing, the only Android box that can decode VP9 is the Nvidia Shield TV, so if you want to watch YouTube 4K videos at 18 Mb/s VP9 there is at least one choice.

http://shield.nvidia.com/android-tv

The Roku and the Amazon Fire TV also support YouTube 4K but don't have Kodi, so I would not consider them.

Blog Posts Coming Soon

These days I don’t have much time to write; nevertheless, there are some exciting articles coming.

Nauticam is sending me a few items for testing that include the Panasonic LX100 housing and the new Nauticam wet lens.

The Nauticam wet lens has been in the works for a very long time and is going to be released at the end of September; I will compare its performance with the Inon lenses and report back my findings.

I am also going to review the new Leak Sentinel v4 that has a number of promising updates.

But the first post will be about lenses for Micro Four Thirds cameras. The system is very flexible, and so is the Nauticam port system; many lenses are supported, and getting the right one is a bit of a headache for newcomers to the ILC space. I will try to make it simple and suggest a way forward, so stay tuned.