Tag Archives: final cut pro

Export Workflows for underwater (and not) video

This post is going to focus on exporting our videos for consumption on a web platform like YouTube or Vimeo.

This is a typical workflow for video production

In this post we want to focus on the export-to-publish steps, as things are not as straightforward as they may seem.

In general each platform has specific requirements for uploads and predefined encoding settings to create its own version of your file. This means it is advisable to feed those platforms files that match their expectations.

The easiest way to do this is to separate the production of the master from the encodes that are needed for the various platforms.

For example, in Final Cut this means exporting a master file in ProRes 422 HQ, in my case from GH5 10-bit material. Each camera differs, and if your source material is of higher or lower quality you need to adjust; in any case the master will be a significantly large file with mild compression, based on an intermediate codec.

So how do we produce the various encodes?

Some programs like Final Cut Pro have specific add-ons, in this case Compressor, to tune the export; however I have had such poor experience with Compressor and underwater video that I do not use it and do not recommend it. Furthermore, we can separate the task of encoding from production if we insert a platform-independent piece of software into the workflow.

Today encoding happens primarily in the H.264 and H.265 formats through a number of encoders, the most popular being x264 and x265, which are free. There are commercial rights issues around using HEVC (x265 output) for streaming, so a platform like YouTube uses the royalty-free VP9 codec while Vimeo uses HEVC. This does not matter to us.

So to upload to YouTube, for example, we have several options:

  1. Upload the ProRes file
  2. Upload a compressed file that we optimised based on our requirements
  3. Upload a compressed file optimised for YouTube requirements

While Option 1 is technically possible, we are talking about 200+ GB per hour, which means endless upload times.
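To put Option 1 into numbers, converting a bitrate into a file size is simple arithmetic. A minimal sketch; the 470 Mbps figure is an assumption for a UHD ProRes 422 HQ master, as actual rates vary with resolution and frame rate:

```python
def gb_per_hour(bitrate_mbps: float) -> float:
    """Convert a video bitrate in Mbit/s to gigabytes per hour."""
    return bitrate_mbps * 3600 / 8 / 1000

# Assuming a nominal ~470 Mbps for UHD ProRes 422 HQ (illustrative only):
print(round(gb_per_hour(470)))  # roughly 212 GB for one hour of footage
```

At that rate, even a short project's master lands in the hundreds of gigabytes, which is why uploading the master directly is impractical.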

Option 2 may lead to unexpected results, as you cannot be sure how YouTube's output quality matches your file, so my recommendation is to follow Option 3 and give the platform what it wants.

YouTube Recommended Settings are on this link

YouTube recommends the following H.264 settings for SDR (standard dynamic range) uploads:

  • Progressive scan (no interlacing)
  • High Profile
  • 2 consecutive B frames
  • Closed GOP. GOP of half the frame rate.
  • CABAC
  • Variable bitrate. No bitrate limit required, although we offer recommended bitrates below for reference
  • Chroma subsampling: 4:2:0

There is no upper bitrate limit, so of course you can make significantly large files; however, for H.264 there is a point beyond which you cannot see any visible difference.
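The "closed GOP of half the frame rate" item above translates directly into a keyframe interval for the encoder. A tiny sketch of that mapping:

```python
# "Closed GOP. GOP of half the frame rate" as a keyframe interval.
def gop_length(frame_rate: float) -> int:
    """Keyframe interval in frames = half the frame rate."""
    return round(frame_rate / 2)

for fps in (24, 25, 30, 50, 60):
    print(fps, "fps ->", gop_length(fps), "frame GOP")
```

So a 25p upload wants a keyframe roughly every 12 frames, and a 60p upload every 30.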

Recommended video bitrates for SDR uploads

To view new 4K uploads in 4K, use a browser or device that supports VP9.

Type          Video Bitrate,              Video Bitrate,
              Standard Frame Rate         High Frame Rate
              (24, 25, 30)                (48, 50, 60)
2160p (4K)    35–45 Mbps                  53–68 Mbps
1440p (2K)    16 Mbps                     24 Mbps
1080p         8 Mbps                      12 Mbps
720p          5 Mbps                      7.5 Mbps
480p          2.5 Mbps                    4 Mbps
360p          1 Mbps                      1.5 Mbps
YouTube Bitrate table

YouTube's recommended settings are actually quite generous, and with a high quality encode we could easily create smaller files; however, we do not know what logic YouTube applies to its compression if we deviate, so to be safe we will follow the recommendations.

It is very important to understand that bitrate controls the compression together with other factors; however, to get a good file we need to put some good logic into the analysis of the file itself, as this greatly influences the quality of the compression process.

There is a whole book on x264 settings if you fancy a read here.

For my purposes I use HandBrake, and to make YouTube happy I use variable bitrate with two-pass encoding and a target bitrate of 45 Mbps. Together with that, I have a preset that takes into account what YouTube does not like and then performs a pretty solid motion analysis, as H.264 is motion compensated. This is required to avoid artefacts.
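For readers who prefer the command line, the same recipe can be expressed as a HandBrakeCLI invocation. This is a sketch, not my exact preset: the flag names are standard HandBrakeCLI options, and the keyint value assumes the "GOP of half the frame rate" guideline above:

```python
# Assemble a hypothetical HandBrakeCLI call matching the settings described.
def youtube_handbrake_cmd(src: str, dst: str, fps: int, kbps: int = 45000):
    return [
        "HandBrakeCLI",
        "-i", src, "-o", dst,
        "--encoder", "x264",
        "--vb", str(kbps),      # average bitrate in kbit/s (45 Mbps here)
        "--two-pass",           # two-pass variable bitrate
        "--encopts", f"keyint={fps // 2}:min-keyint={fps // 2}",
    ]

cmd = youtube_handbrake_cmd("master.mov", "youtube.mp4", fps=25)
print(" ".join(cmd))
```

A real preset would add more x264 tuning (motion estimation depth, B-frame settings and so on), but this captures the bitrate and GOP choices that matter most to YouTube.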

Note the long string of x264 coding commands

I have tested this extensively against the built in Final Cut Pro X YouTube Export.

Starting from the timeline and going directly to YouTube resulted in an 88 MB file from the project's comparable 7.06 GB ProRes 422 HQ master. Following the guidelines and the HandBrake process I ended up with 110.1 MB, which is a 24% increase.

I have also exported to H.264 from FCPX; this gave me a 45.8 Mbps file, yet when I checked YouTube's version of that upload it was still 12% smaller than the one produced from my manual encode. I used 4K Video Downloader to retrieve the file sizes.

Same source file different encodes different results in YouTube

For HDR files there are higher allowed bitrates and additional considerations on colour space and colour depth, but it is essentially the same story, and I have developed HandBrake presets for that too.

When I produce an export for my own use I choose H.265, usually at a 16 Mbps bitrate, which is where Netflix maxes out. Using quality at RF=22 produces files around 20 Mbps, which is amazing considering the 400 Mbps starting point of GH5 AVCI files. To give you an idea, YouTube's own files range between 10 and 20 Mbps once compressed in VP9. I cannot see any difference between my 16 Mbps and 20 Mbps files, so I have decided to stay with the same settings as Netflix: if they work for them, they will work for me.

There is also a YouTube video explaining in detail what I just said, plus some comparative videos, here.

For all my YouTube and blog subscribers (you need to be both), please fill in the form and I will send you my 3 HandBrake presets.

Edit following some Facebook discussions: it is claimed that if you want to upload HD you get better results by making the file 4K. According to my tests this is not true. Using x264 and uploading an HD file produces the same or better results than the HD clip YouTube created from a 4K upload of the same source. I would be wary about what you read on the internet unless you know exactly how the clips were produced: 90% of the issue is poor quality encoding before the file even gets to YouTube!

Choosing the Appropriate Frame Rate for Your Underwater Video Project

I think the subject of frame rates for underwater video is filled with a level of nonsense second to none. Part of this is GoPro generated: the GoPro, being an action cam, started proposing higher frame rates as standard, and this triggered a chain reaction where every camera manufacturer also in the video space has added double frame rate options to their in-camera codecs.

This post, which will no doubt be controversial, will try to demystify the settings and eliminate some fundamental misconceptions that seem to populate underwater videography.

The history of frame rates

The most common frame rates used today include:

  • 24p – used in the film industry
  • 25p – used in the PAL broadcasting system countries
  • 30p – used in the NTSC broadcasting system countries

PAL (Phase Alternating Line) and NTSC (National Television System Committee) are broadcast colour systems.

NTSC covers the US, South America and a number of Asian countries, while PAL covers pretty much the rest of the world. This post does not want to go into the details of which system is better, as those systems are a legacy of interlaced television and cathode ray tubes, and therefore, for most of us, something we have to put up with.

Today most video produced is consumed online, and therefore broadcasting standards only matter if you produce something that will go on TV, or if your footage includes artificial lighting connected to the power grid (LED lights do not matter here).

So if movies are shot in 24p and this is not changing any time soon, why do those systems exist? Clearly if 24p were not adequate this would have changed long ago; except for some experiments like 'The Hobbit', 24p is totally fine for today's use even if it is a legacy of the past.

The human eye has a reaction time of around 25 ms and therefore cannot actually track a moving object within the frame at rates higher than 40 frames per second; it will, however, detect the whole scene moving around you, as in a shoot-out video game. Our brain does a brilliant job of making up what is missing and can't really tell any difference between 24/25/30p in normal circumstances. So why do those rates exist?

The issue has to do with the frequency of the power grid and the first TVs based on cathode ray tubes. As the US grid runs on alternating current at a frequency of 60 Hz, when you try to watch a movie shot at 24p on TV you get judder. The reason is that the system works at 60 cycles per second, and to fit 24 frames per second into it a technique called telecine is used. In short, extra fields are inserted every four fields so that the total comes up to 60 per second; however this looks poor and creates judder.
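The 24-to-60 arithmetic is easier to see in code. This sketch shows the classic 3:2 pulldown pattern: every four film frames are spread over ten interlaced fields, so 24 frames become 60 fields per second:

```python
# 3:2 pulldown ("telecine"): alternate 2 fields and 3 fields per frame.
def pulldown_32(frames):
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields

print(pulldown_32(list("ABCD")))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
print(len(pulldown_32(range(24))))  # 60 fields from 24 frames
```

The judder comes from the fact that frames are shown for unequal durations (two fields, then three).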

In the PAL system the grid runs at 50 Hz, and therefore 24p movies are sped up to 25p; this is the reason their durations are shorter. The increased pitch in the audio is not noticeable.

Clearly, when you shoot in a television studio with a lot of grid-powered lights you need to make sure you don't have any flicker, and this is the reason for the existence of the 25p and 30p video frame rates. Your brain can't tell the difference between 24p/25p/30p, but it can very easily notice judder, and this has to be avoided at all costs.

When using a computer display or a modern LCD or LED TV you can display any frame rate you want without issues; therefore, unless you are shooting under grid-powered artificial lights, you do not have to stick to any broadcasting system.

180 Degrees Angle Rule

The name is also a legacy; the rule establishes that once you have set the frame rate, your shutter speed has to be double that. As there is no 1/48 shutter, 24/25p is shot at 1/50s and 30p at 1/60s; this also keeps everything consistent with the possible flicker of grid-powered lights.

The 180 degree shutter angle rule gives each frame an amount of motion blur similar to that experienced by our eyes.

It is well explained on the Red website here. If you shoot slower than this rule suggests the frames look blurry; if you choose a faster shutter speed you eliminate motion blur. In general everybody follows this rule and it works perfectly fine.
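The rule is a one-line formula: shutter duration = 1 / (frame rate × 360 / angle). A quick sketch:

```python
# 180-degree shutter rule: shutter duration from frame rate and angle.
def shutter_speed(fps: float, angle: float = 180.0) -> float:
    """Return the shutter duration in seconds for a given shutter angle."""
    return 1.0 / (fps * 360.0 / angle)

print(shutter_speed(24))       # 1/48 s, in practice the 1/50 s setting
print(shutter_speed(30))       # 1/60 s
print(shutter_speed(30, 90))   # 1/120 s, a 90-degree angle for slow motion
```

The 90 and 45 degree angles mentioned later for slow motion simply plug a smaller angle into the same formula, giving a faster shutter and less motion blur per frame.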

Double Frame Rates

50p for PAL and 60p for NTSC are double frame rates that are not part of any commercial broadcast standard and today are only officially supported for online content.

As discussed previously, our eyes cannot detect more than 40 frames per second anyway, so why bother shooting 50 or 60 frames per second?

There is a common misconception that if you have a lot of action in the frame you should increase the frame rate. But then why, when you watch a movie, even Iron Man or some sci-fi feature, do you not feel there is any issue?

That is because those features are shot well, with a lot of equipment that makes the footage rock steady; the professionals who shoot them follow all the rules, and it looks great.

So the key reason to use 50p or 60p has to do with not following those rules, or with shooting things in a somewhat unconventional manner.

For example, you hold the camera while you are moving: a dashboard cam, or holding the camera while running. In this case the amount of change in the frame is substantial because you are moving, not because things around you are moving. Standing still at a fixed point it would not feel like there is a lot of movement, but once you start driving your car around there is a lot of movement in the frame.

This brings up the second issue with frame rates, which is panning; again I will refer to Red for the panning speed explanation.

So if you increase the frame rate from 30 to 60 fps you can double your panning speed without making the viewer feel sick.

Underwater Video Considerations

Now that we have covered all basics we need to take into account the reality of underwater videography. Our key facts are:

  • No panning. With few exceptions the operator is moving with the aid of fins. Panning would require you to stay at a fixed point, something you can only do, for example, on a shark dive in the Bahamas.
  • No grid-powered lights, at least for underwater scenes. So unless you include shots with mains-powered lights you do not have to stick to a set frame rate.
  • Lack of light and colour – you need all the available light you can get.
  • Natural stabilisation – as you are in a water medium, your rig, if of reasonable size, floats in a fluid and is more stable.

The last variable is the amount of action in the scene and the need for slow motion, if required. The majority of underwater scenes are pretty smooth; only in some cases, sardine runs or sea lions in a bait ball, is there really a lot of motion, and in most cases you can increase the shutter speed without needing to double the frame rate.

Video shot at 50/60p and played back at half speed for the entire clip is really terrible: you lose the feeling of being in the water, so this is something to be avoided at all costs; it looks plain ugly.

Furthermore, you are effectively halving the bitrate of your video. To add to that, the higher frame rate of your camera is usually not better than its normal frame rate, and you can add more frames in post if you want a more fluid look or a slow motion.

I have a Panasonic GH5 and have the luxury of normal frame rates, double frame rates and even a VFR option specifically for slow motions.

I analysed the clips produced by the camera using ffprobe to see how the frames are structured and how big they are, and discovered a few things:

  1. The 50/60p recording options at 150 Mbps have a very long GOP: essentially a full frame is recorded every 24 frames, while the 100 Mbps 25/30p options record a full frame every 12 frames. So the double frame rate has more frames but is NOT better at managing fast-moving scenes and changes in the frame.
  2. The VFR option lets you set a higher capture frame rate and then conforms the recording to the playback frame rate of choice. For some reason the 24p format has more options than all the others, and 25p does not even have a 50% option. As the footage is recorded at 100 Mbps, VFR footage at half speed conformed to 30p is higher quality than 60p slowed down to 30p (100 Mbps vs 150/2 = 75 Mbps); in terms of key frames and ability to predict motion it is also better, as it has double the number of key frames per second. See this explanation with details of each frame; look for the I frames.
  3. The AVCI all-intra option has only I frames, 24/25/30 of them per second, and is therefore the best option for capturing fast movement and changes in the frame. If you need to slow it down, it still gives 12 key frames per second, so the other frames can easily be interpolated.
  4. Slow motion: as each image stays on screen for longer when slowed down, you need to increase the shutter speed or it will look blurry. So if you intend to take a slow-mo you need to make that decision at the time of the shot and go for a 90 or 45 degree shutter angle. This remains true whether you use VFR or slow down AVCI clips in post.
  5. If you decide AVCI is not for you, the ProRes choice is pretty much identical, and again you do not need to shoot 50/60p unless you have specific situations. In general AVCI is equal to or better than ProRes, so the whole point of getting an external recorder is highly questionable, but that is another story.
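For anyone who wants to repeat the ffprobe analysis, this sketch builds the command that lists each frame's picture type (I/P/B) and then counts the I frames; the ffprobe flags are standard, while the CSV parsing is a simplifying assumption about its default output format:

```python
# Build an ffprobe command that prints the pict_type of every video frame.
def ffprobe_pict_types_cmd(path: str):
    return [
        "ffprobe", "-v", "quiet",
        "-select_streams", "v:0",
        "-show_entries", "frame=pict_type",
        "-of", "csv",
        path,
    ]

def count_iframes(csv_lines):
    # each output line looks like "frame,I" or "frame,P" or "frame,B"
    return sum(1 for line in csv_lines if line.strip().endswith(",I"))

print(count_iframes(["frame,I", "frame,P", "frame,B", "frame,I"]))  # 2
```

Dividing the I-frame count by the clip duration gives the key frames per second figure used in the comparison above.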

For academic purposes I have compared the 3 different ways Final Cut Pro X performs slow-downs. To my surprise the best method is 'Normal Quality', which also makes sense as there are many full frames.

Now it is interesting to look at my own slow motion, which is not ideal as I did not increase the shutter speed; because the quality of AVCI is high, the footage still looks totally fine slowed down.

Various slow motion technique in FCPX with 1/50s shutter

Looking at other people's examples you can get exactly the wrong impression: they take a shot without increasing the shutter speed and then slow it down. The reason 60p looks better there is the shutter speed, not the image quality itself; it is also completely unnecessary to slow down a whale shark as it glides through the water.

The kind of guidance you get

So taking this kind of guidance blindly is not a good idea.

Key Takeaways

  • Unless you shoot using mains-grid-powered lights you can choose any frame rate you want: 24/25/30 fps.
  • Shutter speed is important because it controls motion blur, or freezes motion in the case of a slow motion clip.
  • You need to choose which scenes are suitable for slow motion at the time of capture.
  • Systematically slowing down your footage is unnatural and looks fake.
  • Formats like AVCI or ProRes give you better options for slow-downs than 50/60 fps implementations with very long GOPs.
  • VFR options can be very useful for creative purposes, although they have limitations (fixed focus).

How do I shoot?

I live in a PAL country, yet I always find limitations with the 25 fps options in camera; the GH5 VFR example is not the only one. All my clips are shot at 24 fps, 1/50s. I do not use slow motion enough, and if I did I would probably keep using AVCI and increase the shutter speed depending on the effect I want to give the scene. This is also the most natural and easiest way to shoot underwater, as you do not have to continuously change format. Having all-intra frames gives me all the creativity I need, also for speed ramps, which are much more exciting than plain slow motion; see this example.

The importance of underwater white balance with the Panasonic GH5

One of the key steps in order to get the best underwater colours in your video is to perform a custom white balance.

This is true on land and in water, because auto white balance only works within a specified range of colour temperatures.

Panasonic GH5 advanced user manual

For our GH5 the range where auto white balance works is approximately 3200–7500K. When the camera is working outside this range you get a colour cast. Let's look at some examples:

Grey card Auto White Balance 8mm
Grey card Custom White Balance 8mm

In the example above I am taking a picture of a white balance reference card under warm lights that have a colour temperature of 2700K.

As you can see, the auto white balance fails, resulting in a yellowish tinge, while the shot taken after a custom white balance is accurate.

As a white balance card I use the WhiBal G7 Studio 3.5″x6″ (8.9×15.2 cm). I found this card to work well underwater, and I use it with a lanyard attached to a clip that I hook onto my BCD D-rings.

More info on the WhiBal here

It is possible to buy a larger card, such as the Reference at 7.5″x10″; however this is cumbersome, and I found the Studio version to work well with the Panasonic GH5, as the camera only uses the central part of the frame for white balance.

Custom white balance with the 8mm fisheye

Going back to the GH5 instruction manual, you can also see that the camera white balance is limited to 10,000K, which is the colour of a blue sky.

Underwater, due to the absorption of longer wavelengths, red and orange disappear with depth, and blue tends to scatter off suspended particles. So the colour temperature of water tends to be higher than 10,000K, and the blue is somewhat washed out by scattering.

This is the reason filters are essential: they reduce the amount of blue, or better said cyan, and bring the camera back into a range where custom white balance works.

I have already posted a whole range of observations on filters in a previous post, so I am not repeating them here.

With the right filter for the water colour I dive in, and with an appropriate white balance card, you can get some pretty decent results with custom white balance.

To help colour accuracy I have experimented with the Leeming LUTs, and I want to thank Paul Leeming for answering my obscure questions. Obviously you do not have to use the LUTs, and you could design your own; however, I found that the CineLike D LUT gives me a very good starting point for colour correction.

The starting point is a CineLike D profile with saturation, noise reduction and sharpness set to -5 and all other settings at default, as suggested by Paul; there is no need to lower the contrast, as CineLike D is already a flat curve.

*Noise reduction and sharpness actually have nothing to do with grading, but they are set to -5 because the GH5 applies sharpening and noise reduction even at that setting. Sharpening generally has a negative effect all round, while noise reduction, if required, is better performed in the editor.

Looking at the Imaging Resource tests of the GH5, we can see that the camera colours are oversaturated by default.

The GH5's colours are oversaturated at around 113%

The GH5 tends to push deep colours and wash out cyan and yellow. This becomes apparent when we look at an uncorrected white balanced clip.

White balanced clip in Final Cut Pro: you can see how the water column is washed out whilst red and other dark colours are accurate

The Leeming LUT helps rebalance the camera's distorted colours, and when you apply the camera LUT, provided you have followed the exposure instructions and applied the profile as described, the improvement is immediate.

The previous clip now with the CineLike D Leeming LUT applied

From here onwards it is possible to perform a better grading and work to improve the footage further.

For the whole read please look at Leeming Lut website

One other thing that I believe is interesting: while for ambient light or balanced light shots I do not actually trust the camera exposure and go -1/3 to -2/3 EV, for close-up shots exposing to the right greatly helps highlight recovery.

In the two frames you can see the difference the LUT brings restoring the correct balance to the head of the turtle.

Turtle detail: the highlights appear blown out
Turtle detail with Leeming Lut applied

To be clear, the turtle detail was white balanced in the water on the WhiBal card while using a Keldan Spectrum -2 filter; then automatic balancing was applied in FCPX. The LUT brings out a better dynamic range from the same frames.

Obviously you are free to avoid lens filters and LUTs, and to some extent it is possible to get similar results; however, the quality I obtain using automatic settings is, I believe, quite impressive.

I find myself most of the time correcting my own wrong exposures, or wanting to increase contrast in scenes that had little; however, this only happens in severe circumstances where white balance and filters are at their limits.

Conclusion

There are many paths to getting the right colours in your GH5 underwater videos; in my opinion there are four essential ingredients that will make your life easier and give your footage a jump start:

  • Take a custom white balance using a professional grade white balance card
  • Set the right picture profile and exposure when shooting
  • (Recommended) Use appropriate filters for the water conditions
  • Apply the appropriate LUT to eliminate the errors in the GH5 colour rendering in post processing

With these settings, producing a video like this is very simple, and all your effort goes into the actual cutting of the clips.

Short clip that applies this blog tips

Please note that some of the scenes that look off were shot beyond the working conditions of filters and white balance, at around 25 meters…

100,000 visits – In Depth into Sharing Videos on the Internet

Two years and a few months later, I am pleased my blog has hit 100,000 visits. Considering that there is no sponsorship and this is pretty much content produced in my free time, I am well pleased.

So, as a commemoration topic, I want to offer a few considerations spinning off from a post in the editing and sharing section of Wetpixel.

Many people spend a lot of money on underwater video rigs and use sharing websites such as YouTube and Vimeo to host or promote their content. The reason is clear: those sites have a huge audience, and if you have original content you can also get a bit of advertising revenue, which is never a bad thing.

However, most of us have noticed that once you upload a file to those websites it looks worse than the original, sometimes much worse. Why does that happen?

The answer lies in two words: video compression.

Video compression is a technical subject, and my previous post tried to share some of my findings on why one camera produces better video than another even when the second produces better still images. It is all in the compression effectiveness, and the same issue applies when we share our videos online.

Unfortunately many people do not really know much about this subject and assume that the video editing program they purchased has all the answers and everything is optimised. Well, that is not the case. Videos produced off the shelf by such programs with default settings may be watchable, but they are not great, and usually worse than the source clip by a good deal.

Another common misconception is that you need to convert a file produced by your device to another format before you can edit it.

Finally, many people convert files multiple times and wonder why the result is far off the original clips, not realising that video compression is lossy, so each time you manipulate a clip you are making things worse.

Obviously I am talking consumer and prosumer here, not RAW video recording at stellar bitrates.

So what is the best way to produce an underwater clip that looks good without spending too much time on it, and that still looks decent when uploaded to the web?

To give you an idea, consider why a clip like this one, shot with a compact camera,

does not look too far off this other clip, shot with a semi-pro Sony AX100 camcorder,

or a Panasonic GH4

Watch all 3 clips at 1080p on YouTube and honestly evaluate whether the price difference is justified; you will probably think not, and may even think the second clip is actually from a pro.

So why is that?

50% of the problem comes from the editing. I don't know the details of how the other two clips were made, but I know my clip was edited with iMovie, surely not the most advanced tool on the market, you would think.

However, there are a few tricks of the trade that I will explain to you one at a time:

1. Never let your editor convert the files at the import.

Unless your workstation can't physically process them, leave the clips as is. In the long run, even think about getting a better computer if you can't process the files as is.

Many editors convert files at import into intermediate formats like ProRes or Avid's, which have no temporal compression. Those files, unlike the originals, store each frame as a complete image so that it is easier to edit. If your editor allows it, use the original file without any conversion. You can do this in Final Cut using proxies, and, cheating a little, also in iMovie by manually creating event folders and copying compliant MOV or MP4 files into them.

2. Once you finish your editing use the highest quality option available for export.

This is sometimes tricky, as the default options of those programs often offer just a quality slider from low to best. Many programs though, like Final Cut, offer other options and modules for advanced compression.

If you have spent money on the editor, spend the extra funds on the advanced codecs; they are worth every penny.

Once you have the advanced codecs (x264 is the one I use, and it is a free plug-in for iMovie), use constant quality with a factor of 18 and the slowest preset your workstation can bear.

x264 presets go from ultrafast to placebo; my workstation can tolerate veryslow for 1080p, which applies all the most advanced compression settings. This, together with quality at 18, gives me an output very similar to the input but much more efficient, with a smaller file.
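The same constant-quality recipe can be written as a command line. This sketch uses ffmpeg's libx264 as a stand-in for the iMovie x264 plugin mentioned above; the file names are placeholders:

```python
# Constant-quality x264 export: CRF 18, slowest bearable preset.
def export_cmd(src: str, dst: str, crf: int = 18, preset: str = "veryslow"):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-crf", str(crf),     # constant quality factor; 18 = near transparent
        "-preset", preset,    # slower preset = more efficient compression
        "-c:a", "aac",
        dst,
    ]

print(" ".join(export_cmd("edit.mov", "share.mp4")))
```

Unlike a fixed bitrate, constant quality spends bits where the image needs them, which is why the output tracks the input so closely.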

At this point you are nearly there and ready to upload on vimeo and youtube.

Between the two services which one has the best quality?

Vimeo, plain and simple: the same file will look better than on YouTube, with fewer artefacts at the same resolution. However, Vimeo requires a Plus account to upload and share in 1080p, whilst YouTube is free.

So this is the reason your files do not look as good as the clips shot with the camera when you share them.

Now onto the second part: why do clips produced with my very expensive equipment look worse than those of someone with a much cheaper and inferior setup?

This second problem has to do with the way videos are shot.

Many people look on the internet for guidance on how to produce a decent looking video clip and are tempted by esoteric terms such as flat profiles, colour grading, gamma curves, etc.

They then go into the water with the camera set up as they read on the internet, and then spend a long time editing their clips; after all that effort the resulting image is a bit soft and the colours are washed out. This seems to be quite a common issue, especially with pros.

http://www.peterwalker.com/komodo.html

Note that the two videos above are probably two of my favourites of the last few years. However, compare the close-up shots with lights, or the land shots, against the wide-angle shots in natural light: very different.

This instead is an example of someone who knows how to work with the limitation of the set up:

Flat profiles and colour grading may work very well when the environment is controlled, in a studio situation or where there is plenty of light, but in water this is seldom the case. So the best help is to get it right the first time and, if needed, use a filter for your ambient light shots.

Many people, including me, used to be white balance evangelists, but I have to say that over the years I have lost interest, and I think it is greatly overrated.

This video from ikelite is my absolute favourite

The best part is at 0:45, comparing filter with auto white balance against filter with manual white balance. The clip says to look at the purple that comes with the manual white balance, but actually that is a horrible hue!

I spent my entire 2012-2014 trips trying to perform custom white balance with various cameras, with varying degrees of success. When I was in Raja Ampat I once left the camera in auto and realised the colours were the best I had ever got. I thought this was a mistake, but a few months later, when I reviewed the clips and how they were taken, I realised the truth; ever since, I have not hit the custom white balance button once on my RX100, and I am preparing to do exactly the same on the GX7.

So my five cents on video editing and producing something decent for sharing on the internet comes down to the following key principles:

  1. Get the clip right in camera. Use the settings that make the clip look great at the outset, and experiment until you are happy with the results. Forget about theory; focus on what you like.
  2. Don't let your editor alter the clips at all: use no or minimal grading, or even try to do no correction at all, including contrast and exposure. Any time the editor touches the clip, something is damaged.
  3. Export with advanced settings, using all the CPU power you have at hand, to produce a high quality but as small as possible file.

Good luck for your next trip, I am very much looking forward to mine!

 

Underwater Video Tips: Working with AVCHD 2.0 and 1080p60 or 1080p50 files in iMovie

As hardware becomes more and more powerful, video formats evolve to allow higher quality capture.

AVCHD is a format that relied on interlaced video and the classic 24p until version 2.0, where the higher frame rates 1080p50 and 1080p60 became standard with a maximum bitrate of 28 Mbps.

To date many non-linear editing programs are not capable of processing such files; actually, most of the low cost programs are not even able to import them at all, which is quite frustrating after spending a good amount of money on a camera.

I use iMovie for all my edits: after testing programs like Adobe Premiere I did not really find they added enough benefits to justify the price, and I also find them quite slow and counter-intuitive. So when I got my Sony RX100 I had the issue of processing AVCHD 2.0 1080p50 files.

An AVCHD container is made of streams that have a video track and an audio track, plus another track of text. The video is encoded in H.264, as in other formats like MP4, and the audio is AC3, usually two channels. Video editors usually like files with an H.264 video track and a stereo audio track in AAC or MP3.

So if you re-wrap the information into an MP4 or MOV container, there is a good chance that a program like iMovie or Final Cut will digest it.
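The re-wrap itself can be done with a single ffmpeg call that copies the H.264 video stream untouched and converts only the audio. A sketch of what the automator workflow below does under the hood; the file names are placeholders:

```python
# Re-wrap an AVCHD .MTS stream into .mp4 without re-encoding the video.
def rewrap_cmd(mts_path: str, mp4_path: str):
    return [
        "ffmpeg", "-i", mts_path,
        "-c:v", "copy",   # no video re-encode: lossless and fast
        "-c:a", "aac",    # AC3 audio -> AAC, which editors prefer
        mp4_path,
    ]

print(" ".join(rewrap_cmd("00001.MTS", "00001.mp4")))
```

Because the video stream is copied bit for bit, the operation is both quick and lossless, which is the whole point compared with a full re-encode.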

After various attempts I managed to find the tools I needed on the internet; I will list them here:

  1. LAME for Mp3 encoding (mandatory)
  2. FAAC for AAC encoding (optional but I have it in my build)
  3. FFMPEG
  4. Growl
  5. Clearpipe automator Action
  6. Automator FFmpeg action
  7. MTS2MP4 automator agent

For instruction on how to build your own ffmpeg (as the static builds did not work for me) look here:

http://sesam.hu/2012/09/05/installing-ffmpeg-on-os-x-mountain-lion/

Then install growl version 1.2.2 http://growl.googlecode.com/files/Growl-1.2.2.dmg

Get clearpipe, automator ffmpeg action and the mts2mp4 finder service here http://blog.laaz.org/apps/automator/ and install in sequence.

This creates the option to right-click on an MTS file and re-wrap it into an MP4. Note that there are also commercial programs that do this, like ClipWrap and iVI; however, our Finder service is free and quick…

I have created this little video to show how it works in practice; as you can see, it swallows entire folders, which is great. Here I create an output folder inside the iMovie events folder so that iMovie can later edit the 1080p50 file, skipping the import. This means no time is wasted, and after generating thumbnails you are ready to edit your original video at high frame rate, a feature 'officially' not supported… this is how I edit my video natively in iMovie. If you have a GoPro that saves 1080p50 or 1080p60 MP4 files, you can start from the manual creation of an event folder.

From there onwards you can import your double frame rate video into iMovie projects, which will in any case be 24/25/30 frames per second by default, but can also be exported in 50/60p using the x264 encoder that you can find here http://www003.upp.so-net.ne.jp/mycometg3/

This means that you can process 50/60p projects with iMovie, and also with Final Cut Pro, with no problems!

Update, for those struggling: this is the link where all the files, including the ffmpeg build, are: https://www.dropbox.com/sh/6m4527odhpw3hcc/nHODxg3_DL I have modified the ffmpeg automator action, as I was getting a problem with Growl.