Panasonic Lumix G X Vario 14-42mm with Fisheye Converter DMW-GFC1

The Panasonic 8mm fisheye lens for Micro Four Thirds is a clear winner for close-focus wide angle; however, the lack of zoom and the very wide 180º coverage mean that many subjects will look tiny in the frame.

The next option in terms of width is the Panasonic 7-14mm wide-angle lens; however, this requires a large dome for optimal performance, making the set-up expensive.

Is there anything else left if you don’t want to buy a wet lens and you already have the Panasonic Lumix G X Vario PZ 14-42mm?

Panasonic produces an add-on converter, the DMW-GFC1, which is claimed to provide a 10.5mm equivalent focal length and to reduce the minimum focusing distance to 16 cm; all the specs can be found here.

This add-on lens can be used with the 4.33″ dome port designed for the 8mm fisheye, together with the 30mm extension.

I took a few test shots and the results are pretty good.

This first shot is at f/5 and is very sharp in the centre.

Fisheye Converter f/5

Getting a bit closer and stopping down to f/8, the results are pretty good for an adapter that costs less than £100 on Amazon.

Fisheye Converter f/8

Barrel distortion is well contained, so this combination may be good for wrecks, where the fisheye effect can be a bit disturbing.

If you have the Lumix G X Vario PZ 14-42mm, you may want to invest in this little accessory before getting the much more expensive 8mm fisheye, even if the Nauticam 30mm extension is required. Later on, the extension can be used with the flat port 35 and the Olympus 60mm for super macro, and with the 4.33″ dome, of course, for the 8mm.

I think it is amazing how much can be obtained from this lens, considering wet diopters, wet wide-angle lenses and this adapter, before you need to get a second lens.

This lens could also work for 4K video with the Panasonic GH4; however, zooming is not recommended with the converter attached.

100,000 visits – In Depth into Sharing Videos on the Internet

Two years and a few months in, I am pleased that my blog has hit 100,000 visits. Considering that there is no sponsorship and that this is pretty much all content produced in my free time, I am well pleased.

So, as a commemorative topic, I want to set down a few considerations that spin off from a post in the editing and sharing section of Wetpixel.

Many people spend a lot of money on underwater video rigs and use sharing websites such as YouTube and Vimeo to host or promote their content. The reason is clear: those sites have a huge audience, and if you have original content you can get a bit of advertising revenue as well, which is never a bad thing.

However, most of us have noticed that once you upload a file to those websites it looks worse than the original, sometimes much worse. Why does that happen?

The answer lies in two words: video compression.

Video compression is a technical subject, and my previous post tried to share some of my findings on why one camera produces better video than another even if the second produces better still images. It all comes down to compression effectiveness, and the same issue applies when we share our videos online.

Unfortunately, many people do not really know much about this subject and assume that the video editing program they purchased has all the answers and that everything is optimised. Well, that is not the case. Videos produced by such programs with default settings may be watchable, but they are not great and are usually a good deal worse than the source clips.

Another common misconception is that you need to convert a file produced by your device to another format before you can edit it.

Finally, many people convert files several times and wonder why the result is far off the original clips, not realising that video compression is lossy, so each time you re-encode a clip you are making things worse.

Obviously I am talking about consumer and prosumer gear here, not RAW video recording at stellar bitrates.

So what is the best way to produce an underwater clip that looks good, without spending too much time on it, and that still looks decent when uploaded to the web?

To give you an idea, here is a clip shot with a compact camera:

It does not look too far off this other clip, shot with a semi-pro camcorder, the Sony AX100:

or this one, shot with a Panasonic GH4:

Watch all three clips at 1080p on YouTube and honestly evaluate whether the price difference is justified. You will probably think it is not, and you may even take the second clip for the work of a pro.

So why is that?

Half of the problem comes from the editing. I don’t have the details of how the other two clips were produced, but I know my clip was edited with iMovie, surely not the most advanced tool on the market, you would think.

However, there are a few tricks of the trade that I will explain one at a time:

1. Never let your editor convert the files at import.

Unless your workstation physically can’t process them, leave the clips as they are. In the long run, even think about getting a better computer if you can’t process the files as is.

Many editors convert the files at import into intermediate formats, like Apple ProRes or Avid DNxHD, that have no temporal compression. Unlike the originals, those files store each frame as a complete image so that it is easier to edit. If your editor allows it, use the original files without any conversion. You can do this in Final Cut Pro by using proxies, and you can also cheat in iMovie by manually creating event folders and copying MOV or MP4 compliant files into them.
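If you are unsure what your editor would actually have to cope with, a minimal sketch like this (my own illustration, assuming ffprobe from the ffmpeg suite is installed; the file name is just a placeholder) shows the codec, profile and frame rate the camera recorded:

```python
import json
import subprocess

# Inspect the video stream of a source clip with ffprobe, so you can judge
# whether your editor can work with the native file instead of transcoding
# it on import.
def probe_clip(path):
    cmd = [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=codec_name,profile,bit_rate,avg_frame_rate",
        "-of", "json",
        path,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return json.loads(out)["streams"][0]

print(probe_clip("00001.MTS"))  # e.g. an AVCHD clip straight off the card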

2. Once you finish your editing, use the highest quality export option available.

This is sometimes tricky, as the default options of those programs sometimes offer just a quality slider from low to best. Many programs, though, like Final Cut Pro, offer other options and modules for advanced compression.

If you have spent money on the editor, spend the extra funds on the advanced codecs, as they are worth every penny.

Once you have the advanced codecs (x264 is the one I use; it is available as a free plug-in for iMovie), use constant quality with a factor of 18 and the slowest preset your workstation can bear.

x264 presets go from ultrafast all the way to placebo; my workstation can tolerate veryslow for 1080p, which applies the most advanced compression settings. This, together with quality at 18, gives me an output very similar to the input but much more efficient, with a smaller file.
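If you prefer the command line, the same idea can be reproduced with ffmpeg’s libx264 encoder rather than the iMovie plug-in. This is only a sketch of the equivalent settings, constant quality 18 plus the slowest preset you can bear, with placeholder file names:

```python
import subprocess

# Re-encode a finished export with libx264 at constant quality (CRF 18) and a
# slow preset. Slower presets spend more CPU time to get a smaller file at the
# same visual quality, which is exactly the trade-off described above.
def export_for_upload(src, dst, preset="veryslow", crf=18):
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-preset", preset,        # ultrafast ... veryslow, placebo
        "-crf", str(crf),         # constant quality; 18 is visually near-transparent
        "-c:a", "aac", "-b:a", "192k",  # re-encode audio to something web friendly
        dst,
    ], check=True)

export_for_upload("timeline_export.mov", "upload_ready.mp4")
```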

At this point you are nearly there and ready to upload to Vimeo or YouTube.

Between the two services, which one has the better quality?

Vimeo, plain and simple: the same file will look better than on YouTube, with fewer artefacts at the same resolution. However, Vimeo requires a Plus account to upload and share in 1080p, whilst YouTube is free.

So this is the reason why your files, when you share them, do not look as good as the clips you shot with the camera.

Now on to the second part: why do clips produced with very expensive equipment sometimes look worse than those from someone with a much cheaper, inferior set-up?

This second problem has to do with the way videos are shot.

Many people look on the internet for guidance on how to produce a video clip that looks decent and are tempted by esoteric terms such as flat profiles, colour grading, gamma curves, and so on.

They then go into the water with their camera set up as they have read on the internet and spend a long time editing their clips; after all that effort, the resulting image is a bit soft and the colours are washed out. This seems to be quite a common issue, especially with pros.

http://www.peterwalker.com/komodo.html

Note that the two videos above are probably two of my favourites of the last few years. However, compare the close-up shots with lights, or the land shots, with the wide-angle shots in natural light: very different.

This, instead, is an example of someone who knows how to work with the limitations of the set-up:

Flat profiles and colour grading may work very well in a controlled studio environment, or where there is plenty of light, but in the water this is seldom the case. So the best approach is to get it right first time and, if needed, use a filter for your ambient-light shots.

Many people, including me, used to be white balance evangelists, but I have to say that over the years I have lost interest, and I think it is greatly overrated.

This video from Ikelite is my absolute favourite:

The best part is at 0:45, comparing a filter with auto white balance against a filter with manual white balance. The clip says to look at the purple that comes with the manual white balance, but actually that is a horrible hue!

I spent my entire 2012-2014 trips trying to perform custom white balance with various cameras, with varying degrees of success. When I was in Raja Ampat I once left the camera in auto and realised the colours were the best I had ever got. At the time I thought this was a mistake, but a few months later, when I reviewed the clips and how they were taken, I realised the truth. Ever since, I have not hit the custom white balance button once on my RX100, and I am preparing to do exactly the same on the GX7.

So my five cents on video editing, and on producing something decent for sharing on the internet, comes down to the following key principles:

  1. Get the clip right in camera. Use the settings that make the clip look great at the outset and experiment until you are happy with the results. Forget about theory; focus on what you like.
  2. Don’t let your editor alter the clips at all. Use minimal or no grading, and even try to avoid corrections altogether, including contrast and exposure: any time the editor touches the clip, something is degraded.
  3. Export with advanced settings, using all the CPU power you have at hand, to produce a high-quality but as small as possible file.

Good luck on your next trip; I am very much looking forward to mine!

Demystifying video formats

YouTube now supports double-frame-rate video, 50p and 60p. So what?

That is actually a legitimate question. Look at this example here, a short clip from a trip to Barbados in 2013, originally shot on a Sony RX100 Mark II in AVCHD Progressive at 1080/50p and 28 Mbps.

If you don’t see the 50p option, it is because your browser or operating system does not support it. You need the latest version of your browser and operating system, a machine fast enough, plus enough bandwidth. For Mac this means OS X Yosemite and Safari; for Windows you need 8.1 and IE9.

I hope you enjoyed the clip. Now check this other one, which is instead shot at 25p with the same camera, at 24 Mbps.

I think you can see for yourself which one looks better: it is the 25p clip, despite an overall lower bitrate.

There are a number of reasons:

  1. Underwater clips do not have as much action as you may think, so the extra frames largely go to waste.
  2. The encoding, which is how the clip is first recorded by the camera, is not really that different.
  3. The human eye does a great job of interpolating missing frames anyway.
  4. There is not really much more data in the 50p file compared to its 25p rendition.
  5. The image quality, if you look at a still frame, is better in the 25p clip.

There are, of course, benefits to shooting at double frame rate if you want to slow the footage down to 50% speed, but as far as normal shooting is concerned, for that clip you could not tell the difference.

Let’s think about it in simple terms: if you have a clip shot at 25p at 24 Mbps, you would expect not quite double, but quite a bit more, for 50p; instead you only get 28 Mbps. To be more precise, the video payload is 22 Mbps vs 26 Mbps, which in Sony’s case is only 18% more. So that is not really much more information.
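A quick back-of-the-envelope calculation with those figures makes the point: the 50p stream has to spread only slightly more data over twice as many frames, so each frame gets a much smaller bit budget.

```python
# Average bit budget per frame, using the video payload figures quoted above.
for label, mbps, fps in [("25p", 22, 25), ("50p", 26, 50)]:
    print(f"{label}: {mbps / fps:.2f} Mbit per frame on average")
# 25p: 0.88 Mbit per frame on average
# 50p: 0.52 Mbit per frame on average
```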

What is more interesting is the structure of the data. What follows is a bit technical, but bear with me.

GOP structure, rows 1 and 3: Sony AVCHD 25p and AVCHD Progressive 50p

The first and third rows are representations of Sony 25p and 50p clips. The green bars are I-frames, which you can think of as JPEG images; the red bars are P-frames, or predicted frames, which only contain a delta from the previous frame, not a full image.

You can see that in the first row there are 12 red P bars between each green I bar. This means that the GOP, or group of pictures, is composed of the sequence IPPPPPPPPPPPP, which repeats indefinitely.

The third row is a representation of a Sony 50p clip; you can see that now there are 23 P-frames between two I-frames.
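Before comparing frame sizes: you do not need a bitstream analyser to check this on your own clips. Here is a hedged sketch using ffprobe (assuming ffmpeg is installed, with a placeholder file name) that prints the picture type of each frame so you can count the GOP length yourself:

```python
import subprocess

# Print the picture type (I, P or B) of the first frames of a clip, so you can
# count how many predicted frames sit between two I-frames, i.e. the GOP length.
def frame_types(path, limit=60):
    cmd = [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "frame=pict_type",
        "-of", "default=noprint_wrappers=1:nokey=1",
        path,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return "".join(out.split()[:limit])

print(frame_types("00001.MTS"))  # e.g. IPPPPPPPPPPPPIPPPPPPPPPPPP... for a 25p clip
```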

So the increase in full frames is limited; however, if we look at the sizes, we see that the I-frames in the 25p clip are 12% bigger, and the P-frames of the 50p clip are also smaller.

So, in short, if you look at the image quality, the 25p clip has more information in its full frames as well as in its predicted frames, whilst the 50p clip has more frames but overall less quality per frame.

This means that unless you are shooting something really action-packed, or you want to do slow motion, there is no actual benefit, but instead a deterioration, when you shoot AVCHD Progressive 50p underwater.

Note: if we were instead shooting 50p at a much higher bitrate, the story would be different, but at a similar bitrate it goes as above.

You will also have noticed streams 2 and 4 in the image above; I repeat the image here:

GOP structure, rows 2 and 4: Panasonic AVCHD 25p and AVCHD Progressive 50p

The second and fourth streams are generated by a Panasonic camera, and they look different. You will now notice the existence of frames tagged B, and also that some of the P-frames have a green slice.

This means that Panasonic’s AVCHD implementation has two features that Sony’s does not:

1. It has B-frames, which are predicted not only from past frames but can also reference future frames (it sounds crazy, but it works: frames are buffered in memory and stored out of display order).

2. It has slices within frames, so in a single frame one part can be predicted from a previous frame while another part is coded completely from scratch, for example where prediction breaks down in a part of the picture with a lot of movement.

H.264 encoding has motion compensation, so things that do not change are simply referenced, while new parts are predicted or, in this case, partially created from scratch.
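To get a feel for what B-frames buy you, here is a hedged sketch of a simple experiment (my own illustration, not a recreation of either camera’s real-time encoder, and with a placeholder file name): encode the same source twice at the same constant quality, with B-frames off and on, and compare the file sizes.

```python
import os
import subprocess

# Encode the same clip twice at the same constant quality: once with B-frames
# disabled (an I/P-only stream, roughly like the Sony rows above) and once with
# B-frames allowed (closer to the Panasonic rows), then compare file sizes.
def encode(src, dst, bframes):
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-crf", "18", "-preset", "slow",
        "-bf", str(bframes),  # 0 = no B-frames, 3 = up to three consecutive B-frames
        "-an",                # drop audio; we only care about the video stream
        dst,
    ], check=True)

encode("reef_pan.mov", "ip_only.mp4", 0)
encode("reef_pan.mov", "with_bframes.mp4", 3)
for f in ("ip_only.mp4", "with_bframes.mp4"):
    print(f, round(os.path.getsize(f) / 1e6, 1), "MB")
```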

So the Panasonic encoding algorithm for AVCHD is much superior to the Sony one; this explains why a small camera like the Panasonic LX7 could produce video that competes with the larger-sensor RX100, which has almost double the number of megapixels.

What makes me laugh is when photography magazines jump to conclusions on the video quality of a camera by shooting a static frame!

Of course, if there is no movement, the camera with the best still-image quality will prevail; however, when you record motion, all of that becomes somewhat less relevant, as compression impacts the quality.

So Panasonic’s more effective compression algorithm beats Sony’s, to the point that even a larger sensor seems not to matter.

This explains why, when you shoot a real-life clip, Panasonic cameras perform better in video despite worse still-image quality.

The difference between the 28 Mbps and 24 Mbps Panasonic modes follows pretty much the same trend as the Sony clips: there is not enough extra bitrate to justify the double frame rate unless there is a lot of action in your clip.

So, to conclude: if you are shooting AVCHD, the normal 24/25p mode will have better image quality, will be more suitable for scenes with a lot of dynamic range, and will give more colour and contrast. If there is really a lot of action, or you want to slow down the clip, shoot in 50p, bearing in mind that image quality will actually drop if you look at a still frame in isolation.

Underwater, contrary to what you may expect, things do not actually move that fast, and most of the movement is confined to a limited part of the frame, so AVCHD 24/25p gives better results.

Finally, when evaluating a camera for video, look for real clips; do not rely on resolution charts designed for still images, as they give very little indication of the quality of your videos. Also, if there are any tests, make sure they are on JPEG images, which share a similar processing engine, not on RAW files, as you are not shooting RAW video. And finally, consider that at a similar bitrate some manufacturers have a clear edge over others when it comes to real-time compression; in our example, Panasonic produces similar quality to a Sony camera that has an overall better sensor but poorer compression.