
The Definitive Guide to HDR with the Panasonic GH5/S1 in Final Cut Pro X

First of all, the requirements for HDR at home are:

  1. Log or HLG footage
  2. Final Cut Pro X 10.4.8
  3. Mac OS Catalina 10.15.4
  4. HDR-10 monitor with a 10-bit panel

It is possible to work with a non-HDR-10 monitor using scopes, but it is not ideal and only acceptable for HLG; in any case, 10 bits is a must.

Recommended reading: https://images.apple.com/final-cut-pro/docs/Working_with_Wide_Color_Gamut_and_High_Dynamic_Range_in_Final_Cut_Pro_X.pdf

HDR Footage

In order to produce HDR clips you need HDR footage. This comes in two forms:

  1. Log footage
  2. HLG

Cameras have been able to shoot HDR for years; the issue has been that no consumer operating system or display was capable of displaying it. The situation has changed now that Windows 10 and macOS have HDR-10 support. Support is still limited: on macOS there is no browser support, but the TV app is supported, while on Windows you can watch HDR-10 videos on YouTube.

You need to keep your target format in mind, because Log and HLG are not actually interchangeable. HLG today is really limited to TV sets and some smartphones, while HDR-10 is growing in computer support and is more widely supported. Both are royalty free. This post is not about which standard is best; it is just about producing some HDR content.

The process is almost identical but there are some significant differences downstream.

Let me explain why. This graph, produced using the outstanding online application LUTCalc, shows the output/input relationship of V-Log against a standard display gamma for Rec709.

V-LOG -> PQ

Stop diagram V-LOG vs Rec709

Looking at the stop diagram we can appreciate that the curves are not only different, but many values differ substantially, and this is why we need to use a LUT.

Once we apply a LUT, the relationship between V-Log and Rec709 is clearly not linear, and only a small part of the bits fits into the target space.

Output vs Input diagram for V-LOG and Rec709

We can see that V-Log fills Rec709 at just over 60% IRE, so a lot of squeezing is needed to fit it back in. This is why many people struggle with V-Log, and the reason I do not use V-Log for SDR content.
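
As an illustration of what a LUT does mechanically, here is a minimal 1D LUT applied by linear interpolation. The table is invented purely to mimic the shape in the diagram (output saturating at roughly 60% input); it is not the actual Panasonic LUT:

```python
# Illustrative 1D LUT application: remap a log-encoded value into a
# display space by linear interpolation between LUT sample points.

def apply_lut_1d(value, in_points, out_points):
    """Linearly interpolate `value` through a 1D LUT."""
    if value <= in_points[0]:
        return out_points[0]
    if value >= in_points[-1]:
        return out_points[-1]
    for i in range(1, len(in_points)):
        if value <= in_points[i]:
            t = (value - in_points[i - 1]) / (in_points[i] - in_points[i - 1])
            return out_points[i - 1] + t * (out_points[i] - out_points[i - 1])

# Hypothetical curve: output hits 1.0 at ~60% input, as in the V-Log
# to Rec709 diagram -- everything above is clipped.
log_in   = [0.0, 0.2, 0.4, 0.6, 1.0]
disp_out = [0.0, 0.35, 0.75, 1.0, 1.0]

print(apply_lut_1d(0.3, log_in, disp_out))  # mid-tones stretched
print(apply_lut_1d(0.8, log_in, disp_out))  # highlights clipped at 1.0
```

Everything the camera recorded above the 60% knee maps to the same output value, which is the "squeezing" described above.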

However, the situation changes if we use V-Log for HDR, specifically PQ.

Stop Table V-Log to PQ

You can see that net of an offset the curves are almost identical in shape.

This is more apparent looking at the LUT in / out.

LUT in/Out V-Log to Rec2100 PQ

With the exception of the initial part, which for V-Log is linear while PQ is fully logarithmic, the curve is almost a straight line. As PQ is a larger space than V-Log can produce on a consumer camera, we have no issues squeezing bits in: PQ accommodates all the bits just fine.
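
For context, PQ is defined by SMPTE ST 2084 as an absolute curve covering 0 to 10,000 nits. A quick sketch of the transfer function, with the constants from the standard:

```python
# SMPTE ST 2084 (PQ) transfer function: maps absolute luminance
# (0..10,000 nits) to a 0..1 signal value and back.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Luminance in nits -> PQ signal (0..1)."""
    y = (nits / 10000) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

def pq_decode(signal):
    """PQ signal (0..1) -> luminance in nits."""
    e = signal ** (1 / m2)
    y = max(e - c1, 0) / (c2 - c3 * e)
    return 10000 * (y ** (1 / m1))

print(pq_encode(10000))                       # full scale -> 1.0
print(round(pq_decode(pq_encode(100)), 3))    # round-trips to 100 nits
```

The curve spends most of its code values on the low end, much like a camera log curve, which is why V-Log maps into it so comfortably.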

HLG

Similar to V-Log, HLG does not fit well into an SDR space.

Stop Table HLG to Rec709

The situation becomes apparent looking at the In/Out Lutted values.

HLG to Rec709

We can see that, as HLG is also a log gamma with a different ramp-up, 100% is reached with even fewer bits than V-Log.

So in pure mathematical terms the fit of log spaces into Rec709 is not a great idea and should be avoided. Note that even with the arrival of RAW video we still lack editors capable of working in a 16-bit depth space as photo editors do; currently all processes go through log because they need to fit into a 10/12-bit working space.

It is also a bad idea to use V-Log for HLG, due to the difference between the log curves.

V-Log vs HLG

And the graph demonstrates what I said at the beginning: you need to decide your output at the outset and stick to a compatible format.

Importing Footage in Final Cut Pro X 10.4.8

Once we have HLG or log footage, we need to import it into a Wide Gamut library. Make sure you check this, because SDR is the default in FCPX.

Library Settings

HLG footage will not require any processing, but LUTs have to be applied to V-Log, as it differs from any Rec2100 target space.

The most convenient way is to go into the Organise workspace, select all clips, then press the i button and select General. Apply the Panasonic V-Log LUT to all clips.

Organise view: the LUT option is not available in the Basic view, so make sure you select General

Creating a Project

Once all files have been handled as required, we create our HDR-10 project, which in Final Cut means Rec2020 PQ.

For an HLG project, change the colour space to HLG.

The following screenshots demonstrate the effect of the LUT on footage on a PQ timeline.

LUT not applied: footage looks dim as values are limited to 80%

With the LUT applied the V-LOG is expanded in the PQ space and the colours and tones come back.

LUTed clip on PQ timeline

We can see the brightness of the scene approaching 1000 nits; it looks exactly as we experienced it.

Once all edits are finished, as a last step we add the HDR Tools effect to limit peak brightness to 1000 nits, which is a requirement of YouTube and most consumer displays. The scopes flex slightly with an automatic highlight roll-off.

Exporting the Project

I have been using Panasonic AVCI 400 Mbps, so I will export a master file using ProRes 422 HQ. If you use a lower bitrate, ProRes 422 may be sufficient, but don't go lower, as it won't be HDR anymore.

Export in ProRes 422 HQ

YouTube and other devices use default settings for HDR-10 metadata, so do not fill in the mastering display or content information: it is not required, and you would not know how to fill it in correctly anyway, with the exception of peak brightness.

Converting for YouTube

I use the free program HandBrake and the YouTube upload guidelines to produce a compatible file. It is ESSENTIAL to produce an mp4 file, otherwise your TV and YouTube may not be able to display HDR correctly; avoid any other format at all costs.

The finished product can be seen here

Home HDR Video HDR-10
HLG Documentary style footage

SDR version from HDR master

There are residual issues with this process; one is the production of an SDR version. This currently works much better for HLG than for HDR-10, which is interesting because HLG is unsupported on computers, so if you produce HDR HLG you are effectively giving something decent to both audiences.

For HDR-10, YouTube applies its own one-size-fits-all LUT and the results can be really bad. You may experience oversaturated colours in some cases, dark footage in others, and some clips may look totally fine.

At a professional level you would produce a separate SDR grade; however, it is possible to improve the quality of the YouTube conversion using specific techniques I will cover in a separate post.

Final Remarks

Grading in HDR is not widely supported; the only tools available are scopes and the tone mapping of your display. There is no concept of correct exposure for skin tones: in one scene they have a certain brightness, and in another this changes again, because this is not a 0-100% relative scale but one of absolute values.

If you have invested in a series of cinema LUTs, you will find none of them work: they compress the signal to under 100 nits, so there is less headroom for looks. There are other things you can do to give a vintage look, like adding grain, but you need to be careful, as the incredible brightness of the footage and the detail of 10 bits mean that if you push it too far it looks a mess. Currently I avoid adding film grain, and if I do add it I blend it at 10%-20%.

One interesting thing is that log footage in PQ does have a nice feel to it despite the incredible contrast. After all, log is a way to emulate film, specifically Cineon; this is true for almost all log formats. Then you would have the different characteristics of each film stock; this is now our camera sensor, and because most sensors are made by Sony or Canon, clips tend to look very similar to each other nowadays. So if you want something different you need to step into the world of RED or ARRI, but that is outside the scope of what I am writing here and what you, my readers, are interested in.

I am keeping a playlist with all my HDR experiments here and will keep adding to it.

YouTube HDR Playlist

If you find this useful please donate using the button on the side and I will have a drink on you…Cheers!

Colour Correction in Underwater Video

This is the last instalment of my getting-the-right-colour series.

The first read is the explanation of recording settings

https://interceptor121.com/2018/08/13/panasonic-gh5-demystifying-movie-recording-settings/

This post has been quite popular, as it applies to the GH5 generally, not just underwater work.

The second article is about getting the best colours

https://interceptor121.com/2019/08/03/getting-the-best-colors-in-your-underwater-video-with-the-panasonic-gh5/

And then of course the issue of white balance

https://interceptor121.com/2019/09/24/the-importance-of-underwater-white-balance-with-the-panasonic-gh5/

I am not getting into ambient light filters, but there are articles on that too.

Now I want to discuss editing, as I see many posts online that are plainly incorrect. As is true for photos, you don't edit just by looking at a histogram. The histogram is a representation of the average of the image, and this is not the right approach to creating strong images or videos.

You need to know how the tools work in order to do the appropriate exposure corrections and colour corrections but it is down to you to decide the look you want to achieve.

I like my imagery, video or still, to be strong, with deep blues and generally dark; that is my look and the way I go about it. However, the tools can be used to achieve whatever look you prefer for your material.

In this YouTube tutorial I explain how to edit and grade footage produced by the camera and turn it into something I enjoy watching time and time again.

I called this clip Underwater Video Colour Correction Made Easy as it is not difficult to obtain pleasing colours if you followed all the steps.

A few notes just to anticipate possible questions

  1. Why are you not looking to have the Luma or the RGB parades at 50% of the scale?

50% of the IRE scale corresponds to 18% neutral grey. I do not want my footage to look washed out, which is what happens if you aim for 50%.

  2. Is it important to execute the steps in sequence?

Yes. Camera LUTs should be applied before grading, as they normalise the gamma curve. Among the correction steps, setting the correct white balance influences the RGB curves and therefore needs to be done before further grading is carried out.

  3. Why don’t you correct the overall saturation?

Most of the highlights and shadows are in the light grey or dark grey areas. Saturating those can lead to clipping or noise.

  4. Is there a difference between using corrections like Vibrancy instead of just saturation?

Yes: saturation shifts all colours equally towards higher intensity, while vibrancy tends to stretch the colours in both directions.

  5. Can you avoid an effect LUT and just get the look you want with other tools?

Yes this is entirely down to personal preference.

  6. My footage straight from camera does not look like yours and I want it to look good straight away.

That is again down to personal preference; however, if you crush the blacks, clip the highlights, or introduce a hue shift by clipping one of the RGB channels, this can no longer be remediated.

I hope you find this useful. Wishing all my followers a merry Xmas and a happy 2020.

Choosing the Appropriate Frame Rate for Your Underwater Video Project

I think the subject of frame rates for underwater video is filled with a level of nonsense second to none. Part of this is GoPro-generated: the GoPro, being an action cam, started proposing higher frame rates as standard, and this triggered a chain reaction where every camera manufacturer also in the video space has added double frame rate options to the in-camera codec.

This post, which will no doubt be controversial, will try to demystify the settings and eliminate some fundamental misconceptions that seem to populate underwater videography.

The history of frame rates

The most common frame rates used today include:

  • 24p – used in the film industry
  • 25p – used in the PAL broadcasting system countries
  • 30p – used in the NTSC broadcasting system countries

PAL (Phase Alternating Line) and NTSC (National Television System Committee) are broadcast colour systems.

NTSC covers the US, South America and a number of Asian countries, while PAL covers pretty much the rest of the world. This post will not go into the details of which system is better: those systems are a legacy of interlaced television and cathode ray tubes, and for most of us something we have to put up with.

Today most video is consumed online, so broadcasting standards only matter if you produce something that will go on TV, or if your footage includes artificial lighting connected to the power grid – LED lights do not matter here.

So if movies are shot in 24p and this is not changing any time soon, why do those systems exist? Clearly, if 24p were not adequate it would have changed long ago; except for some experiments like ‘The Hobbit’, 24p is totally fine for today’s use, even if it is a legacy of the past.

The human eye has a reaction time of around 25 ms and is therefore not actually able to track a moving object within the frame at rates higher than 40 frames per second; it will, however, notice if the whole room moves around you, as in a shoot-’em-up video game. Our brain does a brilliant job of making up what is missing and can’t really tell any difference between 24/25/30p in normal circumstances. So why do those rates exist?

The issue has to do with the frequency of the power grid and the first TVs based on cathode ray tubes. As the US power grid runs on alternating current at 60 Hz, a movie shot at 24p judders when watched on TV. The system works at 60 fields per second, and to fit 24 frames per second into it a technique called telecine is used: in short, an artificial field is added every four fields so that the total comes up to 60 per second, but this looks poor and creates judder.

In the PAL system the grid runs at 50 Hz, so 24p movies are accelerated to 25p; this is the reason running times are shorter. The increased pitch in the audio is not noticeable.
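
The two legacy film-to-TV conversions described above can be sketched in a few lines:

```python
# NTSC: 3:2 pulldown -- each 24p frame is alternately held for 2 or 3
# interlaced fields, turning 24 frames into 60 fields per second.
def pulldown_fields(frames):
    return sum(2 if i % 2 == 0 else 3 for i in range(frames))

print(pulldown_fields(24))  # 60 fields -> fits the 60 Hz system

# PAL: the film is simply sped up from 24 to 25 fps.
speedup = 25 / 24
print(f"{(speedup - 1) * 100:.1f}% faster")  # ~4.2% shorter runtime
```

The uneven 2-3-2-3 cadence of pulldown is exactly where NTSC judder comes from; the PAL speedup avoids it at the cost of a slightly higher audio pitch.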

Clearly, when you shoot in a television studio with a lot of grid-powered lights, you need to make sure you don’t have any flicker, and this is the reason for the existence of the 25p and 30p video frame rates. Your brain can’t tell the difference between 24p/25p/30p, but it can very easily notice judder, and this has to be avoided at all costs.

When using a computer display or a modern LCD or LED TV, you can display any frame rate without issues; therefore, unless you are shooting under grid-powered artificial lights, you do not have to stick to any broadcasting system.

The 180-Degree Shutter Angle Rule

The name also comes from a legacy, but the rule establishes that once you have set the frame rate, your shutter speed has to be double it (the shutter is open for half the frame interval). As there is no 1/48 shutter, 24/25p is shot at 1/50s and 30p at 1/60s; this also keeps everything consistent with the possible flicker of grid-powered lights.

The 180 degrees angle rule gives each frame an amount of motion blur that is similar to those experienced by our eyes.

It is well explained on the Red website here. If you shoot slower than this rule the frames look blurry; if you choose a faster shutter speed you eliminate motion blur. In general everybody follows this and it works perfectly fine.
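
The rule above reduces to a one-liner:

```python
# 180-degree shutter rule: shutter time = half the frame interval.
from fractions import Fraction

def shutter_180(fps):
    """Ideal shutter time (in seconds) for a given frame rate."""
    return Fraction(1, 2 * fps)

for fps in (24, 25, 30):
    print(fps, shutter_180(fps))  # 1/48, 1/50, 1/60
```

In practice 24p uses the nearest available speed, 1/50s, since cameras offer no 1/48 setting.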

Double Frame Rates

50p for PAL and 60p for NTSC are double frame rates that are not part of any commercial broadcast and today are only officially supported for online content.

As discussed previously, our eyes cannot detect more than 40 frames per second anyway, so why bother shooting 50 or 60 frames per second?

There is a common misconception that if you have a lot of action in the frame you should increase the frame rate. But then why, when watching any movie, even Iron Man or some sci-fi film, don’t you feel there is any issue?

That is because those features are shot well, with a lot of equipment that makes the footage rock steady; the professionals who make them follow all the rules, and it looks great.

So the key reason to use 50p or 60p has to do with not following those rules and shooting things in a somewhat unconventional manner.

For example, you hold the camera while you are moving – a dashboard cam, say, or holding the camera while running. In this case the amount of change in the frame is substantial because you are moving, not because things around you are moving. Standing at a fixed point, it would not feel like there is a lot of movement; but start driving your car around and there is a lot of movement in the frame.

This brings up the second issue with frame rates, which is panning; again I will refer to Red for the panning speed explanation.

So if you increase the frame rate from 30 to 60 fps, you can double your panning speed without making the viewer feel sick.

Underwater Video Considerations

Now that we have covered all basics we need to take into account the reality of underwater videography. Our key facts are:

  • No panning. With a few exceptions, the operator is moving with the aid of fins. Panning requires you to be at a fixed point, something you can only do, for example, on a shark dive in the Bahamas
  • No grid powered lights – at least for underwater scenes. So unless you include shots with mains powered lights you do not have to stick to a set frame rate
  • Lack of light and colour – you need all available light you can use
  • Natural stabilisation – as you are in a water medium, a rig of reasonable size floats in the fluid and is more stable

The last variable is the amount of action in the scene and the need, if any, for slow motion. The majority of underwater scenes are pretty smooth; only in some cases – sardine runs, sea lions in a bait ball – is there really a lot of motion, and in most cases you can increase the shutter speed without doubling the frame rate.

When I see video shot at 50/60p and played back at half speed for the entire clip, it looks really terrible: you lose the feeling of being in the water, so this is something to be avoided at all costs.

Furthermore, you are effectively halving the bit rate of your video; in addition, the higher frame rate of your camera is usually no better than its normal frame rate, and you can add frames in post if you want a more fluid look or a slow motion.

I have a Panasonic GH5 and have the luxury of normal frame rates, double frame rates and even a VFR option specifically for slow motions.

I analysed the clips produced by the camera using ffprobe to see how the frames are structured and how big they are, and discovered a few things:

  1. The 50/60p recording options at 150 Mbps have a very long GOP: essentially a full frame is recorded every 24 frames, while the 100 Mbps 25/30p options record a full frame every 12 frames. So the double frame rate has more frames but is NOT better at managing fast-moving scenes and changes in the frame.
  2. The VFR option allows you to set a higher frame rate and then slows recording down to the frame rate of choice. For some reason the 24p format has more options than all the others, and 25p does not even have a 50% option. As the footage is recorded at 100 Mbps, VFR footage at half speed conformed to 30p is higher quality than 60p slowed down to 30p (100 Mbps vs 150/2 = 75 Mbps). In terms of key frames and the ability to predict motion it is also better, as it has double the number of key frames per second; see this explanation with details of each frame and look for the I frames.
  3. The AVCI All-Intra option contains only I frames – 24/25/30 of them per second – and is therefore the best option for capturing fast movement and changes in the frame. If you need to slow it down, it still has 12 key frames per second, so the other frames can easily be interpolated.
  4. Slow motion – as each image stays on screen for longer when slowed down, you need to increase the shutter speed or it will look blurry. So if you intend to take a slow-mo you need to make that decision at the time of the shot and go for a 90 or 45 degree shutter angle. This remains true whether you use VFR or slow down AVCI clips in post
  5. If you decide AVCI is not for you, the ProRes choice is pretty much identical, and again you do not need to shoot 50/60p unless you have specific situations. In general AVCI is equal to or better than ProRes, so the whole point of getting a recorder is highly questionable, but that is another story.
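
The GOP inspection above can be reproduced at home. A sketch of the counting step, fed by the standard ffprobe frame listing (`clip.mp4` is a placeholder for your file):

```python
# Count I/P/B frames in a clip to inspect its GOP structure.
# In practice you would feed this the output of:
#   ffprobe -v quiet -select_streams v:0 -show_entries frame=pict_type \
#           -of csv clip.mp4
from collections import Counter

def count_frame_types(ffprobe_csv):
    """Tally pict_type values from ffprobe CSV lines like 'frame,I'."""
    counts = Counter()
    for line in ffprobe_csv.strip().splitlines():
        parts = line.split(",")
        if len(parts) == 2 and parts[0] == "frame":
            counts[parts[1]] += 1
    return counts

# Simulated output: one I frame every 12 frames, as in the 100 Mbps mode.
sample = "\n".join("frame," + ("I" if i % 12 == 0 else "P")
                   for i in range(24))
print(count_frame_types(sample))  # Counter({'P': 22, 'I': 2})
```

An All-Intra clip would show only I frames; a 150 Mbps 50/60p clip would show a single I frame per 24-frame GOP.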

For academic purposes I have compared the 3 different ways Final Cut Pro X slows footage down. To my surprise, the best method is ‘Normal Quality’, which also makes sense, as there are many full frames.

Now it is interesting to look at my own slow motion. It is not ideal, as I did not increase the shutter speed, but because the quality of AVCI is high the footage looks totally fine slowed down.

Various slow motion technique in FCPX with 1/50s shutter

Looking at other people’s examples you get exactly the wrong impression: a shot taken without increasing the shutter speed and then slowed down. The reason the 60p looks better is the shutter speed, not the image quality itself; it is also completely unnecessary to slow down a whale shark as it glides through the water.

The kind of guidance you get

So taking this kind of guidance blindly is not a good idea.

Key Takeaways

  • Unless you shoot using grid-powered lights, you can choose any frame rate you want: 24/25/30 fps.
  • Shutter speed is important because it can add motion blur, or freeze motion for a slow-motion clip
  • You need to choose what scenes are suitable for slow motion at time of capture
  • Slowing down systematically your footage is unnatural and looks fake
  • Formats like AVCI or ProRes give you better options for slowing down than 50/60 fps implementations with a very long GOP
  • VFR options can be very useful for creative purposes, although they have limitations (fixed focus)

How do I shoot?

I live in a PAL country, yet I always find limitations with the 25 fps options in camera; the GH5 VFR example is not the only one. All my clips are shot at 24 fps, 1/50s. I do not use slow motion much, and if I did I would probably keep using AVCI and increase the shutter speed depending on the effect I want to give the scene; this is also the most natural and easiest way to shoot underwater, as you do not have to continuously change format. Having all intra frames gives me all the creativity I need, including speed ramps, which are much more exciting than plain slow motion – see this example.

The Importance of Underwater White Balance with the Panasonic GH5

One of the key steps in order to get the best underwater colours in your video is to perform a custom white balance.

This is true on land and in water, because auto white balance only works within a specified range of colour temperatures.

Panasonic GH5 advanced user manual

For our GH5, the range where auto white balance works is approximately 3200-7500K. When the camera operates outside this range you get a colour cast. Let’s see some examples:

Grey card Auto White Balance 8mm
Grey card Custom White Balance 8mm

In the example above I am taking a picture of a white balance reference card under warm lights that have a colour temperature of 2700K.

As you can see, the auto white balance fails, resulting in a yellowish tinge, while the shot taken after the custom white balance is accurate.

In terms of white balance cards, I use the WhiBal G7 Studio 3.5″x6″ (8.9×15.2 cm). I found this card to work well underwater, and I use it with a lanyard attached to a clip that I hook on my BCD D-rings.

More info on the whibal here

It is possible to buy a larger card, such as the Reference which is 7.5″x10″, but this is cumbersome, and I found the Studio version works well with the Panasonic GH5, as the camera only uses the central part of the frame for white balance.

Custom white balance with the 8mm fisheye

Going back to the GH5 instruction manual, you can also see that the camera white balance is limited to 10,000K, which is the colour of a blue sky.

Underwater, due to absorption of the longer wavelengths, red and orange disappear at depth, and blue tends to scatter off suspended particles. So the colour temperature of water tends to be higher than 10,000K, and the blue is somewhat washed out by scattering.

This is the reason filters are essential: they reduce the amount of blue – or better, cyan – and bring the camera back into a range where custom white balance works again.
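
The constraint described above fits in a tiny sketch. The numeric ranges are the GH5 figures quoted earlier; the scene temperatures in the examples are rough illustrative guesses, not measured values:

```python
# Can the camera white balance a scene, per the ranges discussed above?
AUTO_WB_RANGE   = (3200, 7500)   # GH5 auto WB range, approx.
CUSTOM_WB_LIMIT = 10_000         # GH5 custom WB ceiling

def wb_capability(scene_kelvin):
    if AUTO_WB_RANGE[0] <= scene_kelvin <= AUTO_WB_RANGE[1]:
        return "auto WB ok"
    if scene_kelvin <= CUSTOM_WB_LIMIT:
        return "custom WB needed"
    return "out of range: use a filter to lower the colour temperature"

print(wb_capability(5600))    # daylight
print(wb_capability(9000))    # shade / very shallow water
print(wb_capability(14000))   # deeper blue water (illustrative value)
```

A filter effectively shifts the scene back below the 10,000K ceiling, which is why custom white balance starts working again.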

I have already posted a whole range of observations on filters in a previous post, so I am not repeating them here.

With the right filter for the water colour I dive in, and with an appropriate white balance card, you can get some pretty decent results with custom white balance.

To help colour accuracy I have experimented with the Leeming LUTs, and I want to thank Paul Leeming for answering my obscure questions. Obviously you do not have to use the LUTs and you can design your own; however, I found that the Cinelike D LUT gives me a very good starting point for colour correction.

The starting point is a CineLike D profile with saturation, noise reduction and sharpness set to -5 and all other settings at default, as suggested by Paul; there is no need to lower the contrast, as CineLike D is already a flat curve.

*Noise reduction and sharpness actually have nothing to do with grading, but they are set to -5 because the GH5 applies sharpening and noise reduction even at the -5 setting. Sharpening generally has a negative effect all around, while noise reduction, if required, is better performed in the editor.

Looking at Imaging Resource’s tests of the GH5, we can see that the camera’s colours are oversaturated by default.

The GH5’s colours are oversaturated, at around 113%

The GH5 tends to push deep colours and wash out cyan and yellow. This becomes apparent when we look at an uncorrected white balanced clip.

White balanced clip in Final Cut Pro: you can see how the water column is washed out whilst red and other dark colours are accurate

The Leeming Lut helps rebalancing the camera distorted colours and when you apply the camera LUT, provided you have followed the exposure instructions and applied the profile as described, the improvement is immediate.

The previous clip now with the CineLike D Leeming LUT applied

From here onwards it is possible to perform a better grading and work to improve the footage further.

For the whole read please look at Leeming Lut website

One other thing I believe is interesting: while for ambient light or balanced light shots I do not actually trust the camera exposure and go -1/3 to -2/3, for close-up shots exposing to the right greatly helps highlight recovery.

In the two frames you can see the difference the LUT brings restoring the correct balance to the head of the turtle.

Turtle detail: the highlights appear blown out
Turtle detail with Leeming Lut applied

To be clear, the turtle detail was white balanced in water on the WhiBal card while using a Keldan Spectrum -2 filter; then automatic balancing was applied in FCPX. The LUT brings out a better dynamic range from the same frames.

Obviously you are free to avoid lens filters and LUTs, and to some extent it is possible to get similar results; however, I believe the quality I obtain using automatic settings is quite impressive.

I find myself mostly correcting my own wrong exposures, or wanting to increase contrast in scenes that had little; however, this only happens in severe circumstances where white balance and filters are at their limits.

Conclusion

There are many paths to getting the right colours in your GH5 underwater videos; in my opinion there are four essential ingredients that make your life easier and give your footage a jump start:

  • Take a custom white balance using a professional grade white balance card
  • Set the right picture profile and exposure when shooting
  • (Recommended) Use appropriate filters for the water conditions
  • Apply the appropriate LUT to eliminate the errors in the GH5 colour rendering in post processing

With the settings above, producing a video like this is very simple, and all your effort goes into the actual cutting of the clip.

Short clip that applies this blog tips

Please note that some of the scenes that look off were shot beyond the working conditions of filters and white balance, at around 25 metres…

Panasonic GH5: Demystifying Movie Recording Settings

 

There are a lot of videos on YouTube that suggest that there is not much difference among the various recording settings of the GH5 for UHD.

To recap, we have 4 settings for UHD (I will refer to the PAL system because it is easier, but everything applies equally to 24p; the 30p/60p formats behave the same, with worse results):

  1. 100 Mbps 420 8 Bits Long GOP 25p
  2. 150 Mbps 420 8 Bits Long GOP 50p
  3. 150 Mbps 422 10 Bits Long GOP 25p
  4. 400 Mbps 422 10 Bits All-Intra 25p

The difference between Long GOP and All-Intra is that in Long GOP what is encoded is a group of pictures (GOP), not separate individual pictures. In this article I will use ProRes as a proxy for AVC-Intra as, in the GH5 implementation, they have very similar logic and performance; you can find posts on the internet of people trying to tell the two apart, but there really is no difference, as essentially this is just image compression.

Within a Group of Pictures there are different type of frames:

  • I (Intra coded) frames containing a full picture
  • P (Predictive coded) frames containing a motion-interpolated picture based on a prediction from previous frames
  • B (bi-predictive coded) frames containing a prediction from previous or future frames

It is important to note that frames are not stored sequentially in a GOP; the GOP needs to be decoded and the frames reordered to be played, which requires processing power.

The reason H264 is very efficient is that within a group of pictures there is only one full frame and the rest are predictions. Clearly, if the prediction algorithm is accurate, the perceived quality of Long GOP is very high and similar to All-Intra clips.

This is why comparing All-Intra and Long GOP using static scenes, or scenes with repetitive movement that the codec can predict very accurately, is a fundamental error.

Incorrect example here:

The scene is composed of static, predictable objects with no motion, and after YouTube compression the (wrong) conclusion is that there is absolutely no difference between the codecs. What it actually shows is the effectiveness of Long GOP when the prediction is accurate – which is exactly the point of the codec – plus the fact that YouTube flattens differences through heavy compression and its own use of Long GOP.

Another example is a bit better, as it uses a fountain, which is a good representation of unpredictable motion.

In the 300% crop you can see how All-Intra performs better than Long GOP in terms of prediction despite the YouTube compression; but generally these tests are unreliable – in the last section of the video, where there is a semi-static scene, you cannot really tell the three examples apart.

So why is that, and is there any point in selecting different settings on your Panasonic GH5?

In order to understand the workings we need to dig deeper into the structure of the GOP but before doing so let’s evaluate the All-Intra codec.

AVC All-Intra explanation

This codec records at 400 Mbps, so at 25 fps this means circa 16 Mbits, or 1.9 MB, per frame, and there is no motion interpolation so each frame is independent of the others. The GH5 implementation of All-Intra does not use CABAC entropy encoding, as Panasonic does not believe it is beneficial at higher bit-rates; this makes the AVC-Intra implementation very close to ProRes, as both are based on the Discrete Cosine Transform.
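The per-frame arithmetic is easy to verify; a quick back-of-the-envelope check of the figures above:

```python
# Sanity check of the All-Intra per-frame figures quoted above.
bitrate_bps = 400_000_000   # 400 Mbps recording bit-rate
fps = 25

bits_per_frame = bitrate_bps / fps                # 16 Mbit per frame
mib_per_frame = bits_per_frame / 8 / 1024 / 1024  # bytes -> MiB

print(bits_per_frame / 1e6)        # 16.0 (Mbit)
print(round(mib_per_frame, 2))     # 1.91 (~1.9 MB per frame)
```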

If you consider a JPEG of a 3840×2160 frame from the GH5, you see that it stores around 4.8 MB per image because there is no chroma sub-sampling; to get exactly the same result in video you would need ProRes 4444 (and this is not even taking into account that JPEGs are 8-bit images).

Video uses chroma sub-sampling, so colour is stored at a lower resolution than luminance. Apple, in their ProRes white paper, declare that both ProRes 422 and 422 HQ are adequate for 10-bit colour depth and 4:2:2 sub-sampling, although they show some quality differences and different headroom for editing. If you count 50% of the full-colour frame for 4:2:0 sub-sampling and 67% for 4:2:2, you get around 2.34 MB and 3.5 MB, which correspond to the individual frame sizes of ProRes 422 and ProRes 422 HQ.

In simple terms, All-Intra at 400 Mbps falls short of Apple's recommended bit-rate for 4:2:2 10-bit colour by circa 92 Mbps: it is like saying you are missing 0.44 MB from every ProRes 422 frame and 1.6 MB from every ProRes 422 HQ frame, while having 0.3 MB more than ProRes LT. However, I do not have the full technical details of ProRes to evaluate this directly.
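The sub-sampling fractions come straight from the sampling notation; a rough sketch of the arithmetic (it rounds slightly differently from the figures quoted above, which come from the actual ProRes target rates):

```python
# Fraction of samples kept relative to 4:4:4: luma is always full
# resolution, chroma is halved per the J:a:b notation.
full_frame_mb = 4.8                      # the ~4.8 MB JPEG used as a 4:4:4 proxy

ratio_420 = (4 + 1 + 1) / (4 + 4 + 4)    # 4:2:0 -> 0.50
ratio_422 = (4 + 2 + 2) / (4 + 4 + 4)    # 4:2:2 -> ~0.67

print(round(full_frame_mb * ratio_420, 2))  # 2.4 -> near a ProRes 422 frame
print(round(full_frame_mb * ratio_422, 2))  # 3.2 -> in the ProRes 422 HQ ballpark
```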

The real benefit of this codec is that it can be processed on modest hardware without conversion: AVC-Intra is edit-ready, each frame is captured individually without motion artefacts, and the computer does not have to do a great deal of work to decode and render the clips.

In order to record All-Intra to your memory card you need a V60 or higher-spec card, which in terms of $ per GB costs more than an SSD drive; on the other hand, you no longer need an external recorder.

Coming back to the other recording quality options, we still need to evaluate how the various Long GOP codecs compare to each other.

In order to fully understand a codec we need to decompose the GOP into individual frames and evaluate the information recorded. Wikipedia will tell you that P frames are approximately half the size of an I frame and B frames 25%. I have analysed Panasonic GH5 clips using ffprobe, a component of ffmpeg that tells you exactly what is in each frame, to see if this explains some people's claims that there is no difference between the settings.
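The analysis can be reproduced with a short script; this is a sketch of the approach, not my exact command line ("clip.mp4" is a placeholder for your own GH5 file and ffprobe must be on your PATH):

```python
# List every video frame's picture type (I/P/B) and packet size with
# ffprobe, then average the sizes per type.
import subprocess
from collections import defaultdict

FFPROBE_CMD = [
    "ffprobe", "-v", "error", "-select_streams", "v:0",
    "-show_entries", "frame=pict_type,pkt_size",
    "-of", "default=noprint_wrappers=1",
]

def parse_frames(text):
    """Group frame sizes (bytes) by picture type from key=value output."""
    sizes, cur = defaultdict(list), {}
    for line in text.splitlines():
        key, _, val = line.partition("=")
        cur[key] = val
        if "pict_type" in cur and "pkt_size" in cur:
            sizes[cur.pop("pict_type")].append(int(cur.pop("pkt_size")))
    return {t: sum(v) / len(v) for t, v in sizes.items()}

def frame_stats(path):
    out = subprocess.run(FFPROBE_CMD + [path], capture_output=True, text=True)
    return parse_frames(out.stdout)

# frame_stats("clip.mp4") -> e.g. {'I': ..., 'P': ..., 'B': ...} average bytes
```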

Link to Panasonic documentation


100 Mbps 420 8 Bits Long Gop 25p Deep Dive

An analysis with ffprobe shows a GOP structure with N=12 and M=3 where N is the length in frames of the group of pictures and M is the distance between I or P frames.

So each Group of Picture is made like this

IBBPBBPBBPBB before it repeats again.

A size analysis shows that B frames are on average 14% of the I frame and P frames around 44% of the I frame.

| Frame | I | B | B | P | B | B | P | B | B | P | B | B |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Size (bytes) | 1648326 | 247334 | 237891 | 728777 | 231947 | 228048 | 721242 | 228347 | 227544 | 713771 | 236866 | 232148 |
| Ratio to I frame | 100% | 15.01% | 14.43% | 44.21% | 14.07% | 13.84% | 43.76% | 13.85% | 13.80% | 43.30% | 14.37% | 14.08% |

With an average video bit-rate of 94 Mbps each GOP contains 45.1 Mbits, which means an I frame has around 13.1 Mbits, or 1.57 MB, per frame, for an equivalent All-Intra bit-rate of approximately 328 Mbps. Moreover this codec uses CABAC entropy encoding, which Panasonic states is 20-30% more efficient than the CAVLC used in All-Intra, so net of motion artefacts this codec is pretty strong.
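The derivation can be checked numerically from the measured ratios (B ≈ 14% and P ≈ 44% of an I frame):

```python
# Reproducing the 100 Mbps Long GOP arithmetic from the measured ratios.
avg_bitrate = 94e6        # measured average video bit-rate, bits/s
fps, gop_len = 25, 12     # N = 12 at 25p -> one GOP spans 0.48 s
p_frames, b_frames = 3, 8 # IBBPBBPBBPBB = 1 I + 3 P + 8 B

gop_bits = avg_bitrate * gop_len / fps   # ~45.1 Mbit per GOP
# gop_bits = I * (1 + 3 * 0.44 + 8 * 0.14), so:
i_frame_bits = gop_bits / (1 + p_frames * 0.44 + b_frames * 0.14)
equivalent_all_intra = i_frame_bits * fps  # rate if every frame were an I frame

print(round(i_frame_bits / 1e6, 1))        # 13.1 (Mbit per I frame)
print(round(equivalent_all_intra / 1e6))   # 328 (Mbps equivalent All-Intra)
```

The same two-line calculation with N=24 at 50p, or with the I/P-only structure of the 4:2:2 codec, yields the 566 Mbps and 262 Mbps figures discussed below.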

150 Mbps 420 8 Bits Long GOP 50p Deep Dive

An analysis with ffprobe shows a GOP structure with N=24 and M=3 where N is the length in frames of the group of pictures and M is the distance between I or P frames.

So each Group of Pictures is made like this

IBBPBBPBBPBBPBBPBBPBBPBB before it repeats again.

A size analysis shows that B frames are on average 13.4% of the I frame and P frames around 41% of the I frame. With an average bit-rate of 142.7 Mbps each GOP contains 68.5 Mbits, which means an I frame has around 11.3 Mbits, or 1.35 MB, per frame, for an equivalent All-Intra bit-rate of approximately 566 Mbps. Again this uses CABAC entropy encoding, so the equivalent All-Intra rate is higher.

One very important aspect of the 150 Mbps codec is that, as the GOP is double the length of the single-frame-rate 100 Mbps codec, there is the same number of key frames per second; it is therefore NOT true that this codec is better at predicting motion. In fact it is exactly the same, so if you had acquired the 100 Mbps codec at 25 fps and then slowed the footage down to half speed, asking your editor to interpolate the intermediate frames, you would get essentially the same result, albeit with some more processing required.

150Mbps 422 10 Bits Long Gop 25 fps

An analysis with ffprobe shows a GOP structure with N=12 and M=1 which means this codec does not use B frames but just I and P frames so the GOP structure is as follows:

IPPPPPPPPPPP before it repeats again.

A size analysis shows that P frames are on average 53% of I frames, so this codec is in fact less compressed; however, this has some consequences.

With an average bit-rate of 150 Mbps each GOP contains 72 Mbits, which means an I frame has around 10.5 Mbits, or 1.25 MB, per frame, for an equivalent All-Intra bit-rate of approximately 262 Mbps. In terms of compression efficiency this codec is actually the worst, due to the lack of B frames.

One can only assume that the Panasonic GH5 processor is not powerful enough to capture 10-bit 4:2:2 and still write a Long GOP with the full IPB structure.

Codec Ranking for Static Image Quality UHD

So in terms of absolute image quality and not taking into account other factors the Panasonic GH5 Movie recording settings ranked by codec quality are as follows:

  1. 400 Mbps 422 10 Bit All intra 25 fps (1.9 MB per frame)
  2. 100 Mbps 420 8 Bit Long Gop 25 fps (1.57 MB per frame)
  3. 150 Mbps 420 8 Bit Long Gop 50 fps (1.35 MB per frame)
  4. 150 Mbps 422 10 Bit Long Gop 25 fps (1.25 MB per frame)

The 100 Mbps and 400 Mbps codecs are marginally different (a 21% larger frame size for the latter), with the 4:2:2 10-bit Long GOP really far away.

Conclusion

If you want to record your footage to the internal memory card you are really left with two choices:

  1. Use the 100 Mbps Long GOP codec: it is very efficient in compression and the perceived quality is very good. It does however require conversion to ProRes or similar during editing if you don't want to overload your computer, as the codec makes heavy use of H264 features. You need to get exposure and white balance right in camera, as the clips may not withstand extensive corrections, and with footage containing a lot of motion there is a risk of motion-interpolation errors that can generate artefacts.
  2. Buy a V60 or V90 memory card and use 400 Mbps All-Intra at single frame rate. This will give you edit-ready footage of higher quality without motion artefacts. You still need to get exposure and white balance right in camera, as the headroom is not large enough to allow extensive corrections. The bit-rate and frame size are not sufficient to give you all the benefits of 4:2:2 sampling and 10-bit colour, but it is a good stepping stone to produce good quality Rec709 4:2:0 8-bit footage.

Generally there appears to be no benefit in using the internal 4:2:2 10-bit codec, nor the 4:2:0 8-bit double-frame-rate one, due to the limitations of their GOP structures; here Panasonic has created a few options that, to be honest, appear more a marketing effort than anything else.

There may be some use for the 150 Mbps double frame rate if you intend to slow down the footage after conversion to ProRes or similar, but the extremely long GOP does not make this codec particularly robust in scenes with a lot of motion, and in any case no more robust than the 100 Mbps codec.

A final thought if you are interested in 10-bit colour: the FHD All-Intra 200 Mbps codec has enough quality and headroom to allow manipulation. It is in fact the only codec with a bit-rate higher than ProRes HQ, at least at 24 and 25 fps, so if you want to check the real range of colours and dynamic range the camera is capable of, you should try it.

Note: I have removed some comments on ProRes and external recorders as there are plenty of people that believe that the intra codec does better than ProRes HQ on the Atomos

100,000 visits – In Depth into Sharing Videos on the Internet

Two years and a few months later I am pleased my blog has hit 100,000 visits. Considering that there is no sponsorship and this is pretty much all content produced in my free time, I am well pleased.

So as a commemoration topic I want to share a few considerations that spin off from a post in the editing and sharing section of Wetpixel.

Many people spend a lot of money on underwater video rigs and use sharing websites such as YouTube and Vimeo to host or promote their content. The reason is clear: those sites have a huge audience, and if you have original content you can get a bit of advertising revenue as well, which is never a bad thing.

However most of us have noticed that once you upload a file to those websites it looks worse than the original, sometimes much worse. Why does that happen?

The answer lies in two words: video compression.

Video compression is a technical subject, and my previous post tried to share some of my findings on why one camera produces better video than another even when the second produces better still images. It is all in the compression effectiveness, and the same issue applies when we share our videos online.

Unfortunately many people do not really know much about this subject and assume that the video editing program they purchased has all the answers and everything is optimised. Well, that is not the case. Video produced off the shelf by such programs with default settings may be watchable, but it is not great and is usually worse than the source clip by a good deal.

Another common misconception is that you need to convert a file produced by your device to another format before you can edit it.

Finally, many people convert files several times and wonder why the result is far off the original clips, not realising that video compression is lossy, so each time you manipulate a clip you are making things worse.

Obviously I am talking consumer and prosumer here, not RAW video recording at stellar bit-rates.

So what is the best way to produce an underwater clip that looks good without spending too much time on it, and that still looks decent when uploaded to the web?

To give an idea why a clip like this one shot with a compact camera

Does not look too far off this other clip shot with a semi-pro camcorder, the Sony AX100

or a Panasonic GH4

Watch all 3 clips at 1080p on YouTube and honestly evaluate whether the price difference is justified: you will probably think not, and perhaps even think the second clip was shot by a pro.

So why is that?

50% of the problem comes from the editing. I don't have the details of how the other two clips were made, but I know my clip was edited with iMovie, surely not the most advanced tool on the market you would think.

However there are a few tricks of the trade that I will explain to you one at a time:

1. Never let your editor convert the files at the import.

Unless your workstation can't physically process them, leave the clips as is. Even think about getting a better computer in the long run if you can't process files natively.

Many editors convert files at import into intermediate formats like ProRes or Avid DNxHD that have no temporal compression. Those files, unlike the originals, store each frame as a complete image so that editing is easier. If your editor allows it, use the original file without any conversion. You can do this in Final Cut using proxies, and cheat in iMovie by manually creating event folders and copying compliant mov or mp4 files into them.

2. Once you finish your editing use the highest quality option available for export.

This is sometimes a tricky issue, as the default options of those programs sometimes offer just a quality slider from low to best. Many programs though, like Final Cut, offer other options and modules for advanced compression.

If you have spent money on the editor spend the extra funds on the advanced codecs as they are worth every penny.

Once you have the advanced codecs (x264 is the one I use, available as a free plugin for iMovie), use constant quality with a factor of 18 and the slowest preset your workstation can bear.

x264 presets go from ultrafast to placebo; my workstation can tolerate veryslow for 1080p, which applies all the most advanced compression settings. This, together with quality at 18, gives me an output very similar to the input but much more efficient, with a smaller file.
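The same constant-quality export can be sketched with the free ffmpeg command line (this mirrors the settings above but is not the iMovie plugin itself; the file names are placeholders):

```python
# Build an ffmpeg/libx264 command equivalent to "constant quality 18,
# slowest preset your workstation can bear".
import subprocess

def export_for_upload(src, dst, crf=18, preset="veryslow"):
    cmd = [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-crf", str(crf),     # constant-quality factor: lower = better and bigger
        "-preset", preset,    # ultrafast ... veryslow, placebo
        "-c:a", "aac", "-b:a", "192k",
        dst,
    ]
    return cmd  # pass to subprocess.run(cmd, check=True) to actually encode

# export_for_upload("edit_master.mov", "for_upload.mp4")
```

CRF 18 is generally considered visually near-lossless for most material, which is why the output looks so close to the input.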

At this point you are nearly there and ready to upload on vimeo and youtube.

Between the two services which one has the best quality?

Vimeo, plain and simple: the same file will look better than on YouTube, with fewer artefacts at the same resolution. However Vimeo requires a Plus account to upload and share in 1080p, whilst YouTube is free.

So this is the reason why your files do not look as good as the clips you shot with the camera when you share them.

Now onto the second part: why do clips produced with very expensive equipment look worse than those from someone with a much cheaper, inferior set-up?

This second problem has to do with the way videos are shot.

Many people look on the internet for guidance on how to produce a decent-looking video clip and are tempted by esoteric terms such as flat profiles, colour grading, gamma curves, etc.

They then go into the water with their camera set up as they read on the internet, and then spend a long time editing their clips; after all that effort the resulting image is a bit soft and the colours are washed out. This seems to be quite a common issue, especially with pros.

http://www.peterwalker.com/komodo.html

Note that the two videos above are probably two of my favourites of the last few years. However, compare the close-up shots with lights, or the land shots, against the wide angle with natural light: very different.

This instead is an example of someone who knows how to work with the limitations of the set-up:

Flat profiles and colour grading may work very well when the environment is controlled, in a studio situation or where there is plenty of light, but in water this is seldom the case. So the best help is to get it right first time and, if needed, use a filter for your ambient light shots.

Many people, including me, used to be white balance evangelists, but I have to say that over the years I have lost interest and I think it is greatly overrated.

This video from ikelite is my absolute favourite

The best part is at 0:45, comparing filter with auto white balance against filter with manual white balance. The clip says to look at the purple that comes with the manual white balance, but actually that is a horrible hue there!

I spent the entire 2012-2014 trips trying to perform custom white balance with various cameras, with varying degrees of success. When I was in Raja Ampat I once left the camera in auto and realised the colours were the best I had ever got. I thought this was a mistake, but after a few months, when I reviewed the clips and how they were taken, I realised the truth; ever since, I have never hit the custom white balance button once on my RX100 and I am preparing to do exactly the same on the GX7.

So my five cents on video editing and producing something decent for sharing on the internet are based on the following key principles:

  1. Get the clip right in camera. Use the settings that make the clip look great at the outset, and experiment until you are happy with the results. Forget about theory: focus on what you like.
  2. Don't let your editor alter the clips at all: use no or minimal grading, or even try to do no correction at all, including contrast and exposure; any time the editor touches the clip, something is damaged.
  3. Export with advanced settings, using all the CPU power you have at hand, to produce a high quality but as small as possible file.

Good luck for your next trip, I am very much looking forward to mine!


Underwater Video Tips: Working with AVCHD 2.0 and 1080p60 or 1080p50 files in iMovie

As hardware becomes more and more powerful, video formats evolve to allow higher quality capture.

AVCHD is a format that still relied on interlaced video and the classic 24p until version 2.0, where the higher frame rates 1080p50 and 1080p60 became standard with a maximum bit-rate of 28 Mbps.

To date many non-linear editing programs are not capable of processing such files; in fact most of the low-cost programs are not even able to import them at all, which is quite frustrating after spending a good amount of money on a camera.

I use iMovie for all my edits: after testing programs like Adobe Premiere I did not really find they added enough benefits to justify the price, and I also found them quite slow and counter-intuitive. So when I got my Sony RX100 I had the issue of processing AVCHD 2.0 1080p50 files.

An AVCHD container is made of streams that have a video track and an audio track, plus another track of text. The video is encoded in H.264, as in other formats like mp4, and the audio is AC3, usually two channels. Video editors usually like files with an H.264 video track and a stereo audio track in AAC or MP3.

So if you re-wrap the information in an mp4 or mov container, there is a good chance that a program like iMovie or Final Cut will digest it.
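For the curious, the re-wrap can also be done directly with ffmpeg; this is a sketch of the idea behind the Automator workflow described below (file names are placeholders), not the workflow itself:

```python
# Copy the H.264 video stream untouched (no re-encode, no quality loss)
# and transcode only the AC3 audio to AAC so editors accept the file.
import subprocess

def rewrap_mts(src, dst):
    cmd = [
        "ffmpeg", "-i", src,
        "-c:v", "copy",   # keep the H.264 video bits exactly as recorded
        "-c:a", "aac",    # AC3 -> AAC, the audio codec iMovie expects
        dst,
    ]
    return cmd

# subprocess.run(rewrap_mts("00001.MTS", "00001.mp4"), check=True)
```

Because the video stream is only re-wrapped, this is fast and lossless for the picture; only the small audio track is re-encoded.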

After various attempts I managed to find the tools I needed on the internet; I will list them here:

  1. LAME for Mp3 encoding (mandatory)
  2. FAAC for AAC encoding (optional but I have it in my build)
  3. FFMPEG
  4. Growl
  5. Clearpipe automator Action
  6. Automator FFmpeg action
  7. MTS2MP4 automator agent

For instruction on how to build your own ffmpeg (as the static builds did not work for me) look here:

http://sesam.hu/2012/09/05/installing-ffmpeg-on-os-x-mountain-lion/

Then install growl version 1.2.2 http://growl.googlecode.com/files/Growl-1.2.2.dmg

Get clearpipe, automator ffmpeg action and the mts2mp4 finder service here http://blog.laaz.org/apps/automator/ and install in sequence.

This creates the option to right-click on an MTS file and re-wrap it into an mp4; note that there are also commercial programs that do this, like ClipWrap and iVI, however our Finder service is free and quick…

I have created this little video to show how it works in practice; as you can see it swallows entire folders, which is great. Here I create an output folder in the iMovie events folder so that iMovie can edit the 1080p50 file later, skipping the import; this means no time is wasted, and after generating thumbnails you are ready to edit your original high-frame-rate video, a feature 'officially' not supported… this is how I edit my video natively in iMovie. If you have a GoPro that saves 1080p50 or 1080p60 mp4 files, you can start from the manual creation of an event folder.

From there onwards you can import your double-frame-rate video into iMovie projects, which will anyway be 24, 25 or 30 frames per second by default, but can also be exported in 50/60p using the x264 encoder that you can find here http://www003.upp.so-net.ne.jp/mycometg3/

This means that you can process 50/60p projects with iMovie, and also Final Cut Pro, with no problems!

Update: for those struggling, this is the link where all the files including the ffmpeg build are: https://www.dropbox.com/sh/6m4527odhpw3hcc/nHODxg3_DL I have modified the ffmpeg Automator action as I was getting a problem with Growl.