When you buy a new camera some time after the initial release date, you are lucky enough to find all sorts of videos from other geeks like you who have been testing it.
I was referred to this video, which shows quite a few quirks of the Sony A1. Gerald has since confirmed the A1 video is binned, not line-skipped.
You can't always rely on third parties, though, so here are a number of my own geek tests.
The first question was: do you need to shoot APSC or full frame? APSC is scaled while full frame is binned, which means the first may look better than the second but will have more noise.
As usual there is no official documentation of the camera's inner workings, but this diagram should help explain a few things.
On top is the potential flow for a classic Bayer-filter camera to accomplish binning (it is a guess). On the bottom is what a mobile phone may be doing.
In the first case, binning results in a reduction of resolution and potential artefacts. In the second, only a reduction of resolution.
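To make the difference concrete, here is a toy numpy sketch of the two binning paths. This is my own illustration of the guessed pipeline above, not Sony documentation: on a quad-Bayer sensor every pixel in a 2×2 cell shares one colour filter, so averaging a cell only costs resolution, while on a classic Bayer array same-colour pixels sit two photosites apart, so the binned samples land on a shifted grid, which is where artefacts can creep in.

```python
import numpy as np

def bin_quad_bayer(plane):
    # Quad-Bayer: all four pixels of each 2x2 cell share the same colour
    # filter, so averaging a cell loses resolution but stays artefact-free.
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def bin_classic_bayer_green(mosaic):
    # Classic Bayer: same-colour samples are 2 photosites apart, so binning
    # must combine pixels from neighbouring cells; the averaged samples sit
    # on a shifted grid, a potential source of artefacts.
    green = mosaic[0::2, 0::2]  # one colour plane of the mosaic
    h, w = green.shape
    return green[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

sensor = np.arange(64.0).reshape(8, 8)        # stand-in for raw sensor values
print(bin_quad_bayer(sensor).shape)           # (4, 4): half resolution per axis
print(bin_classic_bayer_green(sensor).shape)  # (2, 2) for that colour plane
```

In both cases the output has a quarter of the pixels, but only the classic-Bayer path mixes samples whose centres do not coincide with the output grid.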
Now one of the questions is: what if we just crop the sensor to APSC and then scale down, will it look better?
I have done exactly this test and the answer seems to be no.
In theory the binning should look much worse, but what I have seen is that moiré kicks in for each mode at around 2× the focal length of the other. So if APSC gives moiré at 50mm, full frame will give it at 100mm. The full-frame moiré is more severe when it occurs, but in most cases you cannot tell the APSC and binned UHD apart in good light.
Another side-by-side of a DPReview sample shot, cropped: again there is moiré on the full frame in their scene, but in effect the image quality is identical.
So personally I am not going to bother with APSC unless I need more magnification and I am not in low light. For everything else I will use full frame binned.
The A1 also provides its pixel-binning mode in ProRes RAW, and it is identical to the internal recording when the lens corrections are off. ProRes RAW does not have a concept of lens profiles, so you get all the lovely defects of your lens. To my horror, E-mount lenses have many defects: all are distorted and have significant amounts of CA.
In short, unless you have an adapted DSLR lens with zero defects, those aberrations are troublesome, so with native glass I am skipping ProRes RAW altogether.
4K with External Recorders
If you have a Ninja, consistent with what Gerald Undone says, you can get a scaled-down version of 8K by setting your HDMI output to Auto or 2160. Auto generates 4K60, while 2160 gives you the same frame rate as your 8K recording.
Interestingly, if you do not record to card, the HDMI output goes back to what you get in internal recording. So in short you need to record 8K to card, which eventually means overheating. It is unclear what subsampling is being output, however the image does look a bit cleaner.
8K vs 4K
There is no doubt that the 8K mode, although only available up to 30 fps, is superior; however, editing the 580 Mbps HEVC files is not that easy.
I personally shoot in 4K, so I am set on full-frame binned 4K, but if you have the hardware to process it and the screen to watch it, 8K is the way forward. Gerald Undone's trick of taking the HDMI 4K while shooting 8K also works, but be careful with overheating.
The next article will break down the codecs available on the A1.
I have been looking for a camera that would be a significant upgrade from my GH5M2 for some time, and I narrowed my options to two choices: the Sony A1 and the Canon R5. As the A1 underwater port system can use most of my glass, I recently acquired the A1.
Did I get the upgrade I was looking for? For photos I would say the answer has been an immediate yes, due to this camera's amazing autofocus, EVF and burst rate. For me 15 fps is enough, but the fact that the A1 can trigger a flash with the electronic shutter at 1/200 of a second is amazing. And I still like having a mechanical shutter (unlike the Z9), which syncs to 1/400, also a first.
Let’s have a look at the A1 and where it stands at the end of 2022.
The Sony A1 was announced in January 2021 and was at the time the fastest high-resolution (>42 MP) full-frame camera on the market.
Capable of 20 fps with continuous autofocus at a staggering 50 MP, and 30 fps in JPEG, it is still the full-frame camera producing the highest resolution at the highest frame rate, as the Nikon Z9 matches the frame rate but has a lower sensor resolution.
As new cameras have come along we have seen some development, especially on the video front: while the Sony A1 can produce ProRes RAW, this is at half resolution (4230×2430), whereas the Canon R5 is able to output 8192×4320.
Users have been curious as to why the Canon R5 can do that while the A1 can't, and also why the Z9 produces ProRes RAW internally at half resolution, similar to the A1, but NRAW (which is likely not RAW) at full resolution.
Various tests on video show that both the Z9 and A1 outperform the Canon R5 in video on all formats.
Interestingly, SNR improves by 0.8 stops moving from 8K to 4K full frame, which would not be possible if the camera were skipping pixels.
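As a back-of-envelope check (my own arithmetic, not a measurement): in the shot-noise-limited regime, averaging N independent pixels improves SNR by √N, so 2×2 binning buys up to one stop, while line skipping discards light and buys nothing. A measured gain of 0.8 stops sits right where binning, minus real-world losses, would put it.

```python
import math

def binning_snr_gain_stops(pixels_combined):
    # Shot-noise model: averaging N independent pixels improves SNR by
    # sqrt(N); expressed in stops that is log2(sqrt(N)).
    return math.log2(math.sqrt(pixels_combined))

print(binning_snr_gain_stops(4))  # 1.0 stop: theoretical ceiling for 2x2 binning
print(binning_snr_gain_stops(1))  # 0.0 stops: line skipping combines nothing
```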
But of course neither the Z9 nor the A1 can produce external 8K ProRes RAW, and users have been screaming at Sony.
I was quite suspicious of the fact that the Z9 can only record RAW internally, and I have noted that Nikon pushed back on RED's lawsuit over RAW recording; I therefore believe NRAW is actually demosaiced.
I looked at the DPReview studio scene and compared those cameras, and in addition added the Panasonic S1R to check image quality.
You can see how all cameras are affected by false colour artefacts, the A1 and Z9 much more than the others.
Moire is not an issue
The other suspicious fact is that the A1 and Z9 produce ProRes RAW at half resolution. How could the camera produce RAW at half resolution with no false colour artefacts if each 2×2 cell were made of different colours?
If you follow mobile phone technology, you are familiar with the super-high-resolution claims of certain phones; this article on the Sony Semiconductor web page provides an insight.
The actual pixels are arranged in cells of 4 of the same colour, and to produce the high-resolution image the pixels are re-mosaiced, which in turn can produce artefacts. This technology has been mainstream for at least 4 years.
If you look at this video you can see that the ProRes RAW video no longer produces false colour artefacts but is prone to moiré, as the camera does not have a low-pass filter.
The DPReview studio scene provides some additional insight when looking at video grabs.
No false colour in 4K video
Moiré in 4K video due to the lower resolution generating aliasing
My conclusion is that the A1, as well as the Z9, is cheating. Unlike the Canon R5, they are based on a quad-Bayer sensor cell and therefore will not offer the same colour resolution at 1:1 pixel as the Canon R5.
It has already been proven that the A7S3 has a quad-Bayer cell.
Measures like DxOMark colour depth do not look at colour errors, so this will not be spotted, but I believe the remosaic of pixels of the same colour is the issue showing in the DPReview studio scene.
There has been additional debate on why the A1 defaults to APSC mode when producing 4K video. This is counterintuitive; however, the APSC image has neither moiré nor false colour.
If we carry on with the assumption that what I have written here is correct, we can have a look at the bandwidth required to read the sensor and produce video output at various resolutions and frame rates.
Bandwidth in Gbps for various video resolutions
Considering a readout at 12 bits, we can see that the highest bandwidth is needed for the 8K and 4K APSC modes, as the other modes have fewer pixels. Even the 120 fps mode does not reach that bandwidth; however, it requires a faster sensor scan and is cropped.
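The bandwidth figures can be reproduced with simple arithmetic. The sketch below uses my own illustrative readout regions (assumed for the sake of the example, not official Sony figures) and ignores blanking overhead, so treat the numbers as lower bounds.

```python
def readout_gbps(width, height, fps, bits=12):
    # Raw readout bandwidth in Gbps: pixels per frame x frames per second
    # x bits per pixel (12-bit raw assumed, no blanking overhead).
    return width * height * fps * bits / 1e9

# Illustrative readout regions (my assumptions, for the arithmetic only):
modes = {
    "8K30 full frame (oversampled)": readout_gbps(8640, 4860, 30),
    "4K60 APSC (oversampled)":       readout_gbps(5760, 3240, 60),
    "4K60 full frame (binned)":      readout_gbps(4320, 2430, 60),
}
for name, gbps in modes.items():
    print(f"{name}: {gbps:.1f} Gbps")
```

With these assumed regions the oversampled 8K and APSC modes come out well above the binned full-frame mode, consistent with the ranking described above.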
When the raw data goes into the image pipeline it is converted into an RGB signal, and here we can see that after subsampling the APSC format has the highest data volume due to its 4:2:2 subsampling.
This in turn produces the fewest artefacts; in fact it is quite resistant to moiré, as anti-aliasing can be performed in camera using different techniques. So this is why APSC footage from the A1 is smoother, but not necessarily sharper; in fact the opposite.
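The subsampling arithmetic is easy to verify: per pixel, 4:4:4 carries 3 samples, 4:2:2 carries 2 and 4:2:0 carries 1.5, so at the same 4K frame size the 4:2:2 pipeline moves a third more data than 4:2:0 (a sketch of the arithmetic only; the mode-to-subsampling mapping is as described above).

```python
def frame_msamples(width, height, subsampling):
    # Samples per pixel after chroma subsampling:
    # 4:4:4 -> 3 (Y + full Cb + full Cr), 4:2:2 -> 2, 4:2:0 -> 1.5
    per_pixel = {"444": 3.0, "422": 2.0, "420": 1.5}[subsampling]
    return width * height * per_pixel / 1e6

uhd_422 = frame_msamples(3840, 2160, "422")  # 4:2:2 pipeline (APSC mode)
uhd_420 = frame_msamples(3840, 2160, "420")  # 4:2:0 pipeline
print(uhd_422, uhd_420)  # 4:2:2 carries a third more samples per frame
```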
A different current of thought may say no, it is a full-resolution classic Bayer filter array which is then binned for 4K video. However:
Such technology does not exist: it is not advertised and there are no patents for it.
The remosaic of quad-Bayer sensors has been mainstream in mobile phones for years now and is done on chip.
So my take is that Sony is just leveraging mobile phone technology for the IMX610 in the A1, but I am open to challenge.
For clarity, as some readers seem not to understand: I believe the camera has a total of 50 megapixels arranged in quad-Bayer cells, and goes to 12.5 MP in 4K full-frame video by combining pixels in 2×2 cells. There are phones on the market with 108 megapixels, so this is nothing new.
The A1 is produced on the Exmor RS line, which was developed for mobile technology, so it is no surprise the same investment is leveraged for cameras.
Many commercially available phones already implement the same features as the A1; see for example the specs of the Xiaomi 12.
Video Format Choice
The other question is then: what to shoot, now that we know, or think we know, the inner workings of the camera?
8K suffers from false colour artefacts similar to those in still images, 8K displays are rare, and it is only available in 4:2:0 subsampling due to bandwidth issues.
APSC is cropped; while the image has no defects, this mode offers no benefit over other cameras like the Panasonic GH6, and it does not support 120 fps. Many other cameras offer cropped APSC 4K footage: you do not need an A1 if you want APSC video.
UHD has moiré in certain situations, due to the lower resolution being out-resolved by the lenses used and the lack of an anti-aliasing filter; however, it offers the highest dynamic range and no false colour artefacts.
My approach is to use UHD and if I have moire, use APSC. Moire is visible in the EVF so you can then mitigate it by switching to APSC only when required.
I have done a full analysis of the codecs and frames which I will post in a later article.
The other consideration is that I did not get a full-frame camera to shoot it in APSC; in fact the A1 in APSC looks much the same as my Panasonic GH5M2 and offers minimal DR and SNR benefit due to the smaller size of the cropped area.
Now that I know (or think I know!) what may be behind the A1 limitations, am I disappointed? Actually I am not. I did not buy this camera for 8K; I have no ability to edit or display 8K, but I wanted an upgrade to 4K, and I can say the A1 holds video footage at ISO 12800 in S-Log3. I have yet to see any moiré, and ProRes RAW 4K60 is amazing quality; there is certainly distortion, chromatic aberration and vignetting, but underwater or topside with a long lens this is not an issue. I am a bit disappointed by the codecs on card, especially as HEVC does not have a 30 fps mode, but overall the camera delivers extremely pleasing 4K image quality with outstanding clean colours using S-Log3/S-Gamut3.Cine. If there is one weak point, it is the IBIS.
If you are a purist and want the best image quality from the full sensor, should you look at the Canon R5? This is where it gets interesting. I believe the R5 has a cleaner image, but Canon is behind in terms of sensor technology, so in the end, when you look at real-life images in terms of IQ and SNR, I do not see the Canon taking the edge. What the Canon is better at is ergonomics and menu systems, but not ultimately image quality, despite all the things discussed here.
Despite all the cheating, the A1 remains an amazing camera: it is small, it has many lens options and the best underwater port options, and I do not regret my choice; in fact I look forward to using it underwater. And finally, all of this just made me reflect on what a great camera the Panasonic GH5M2 is, and I will keep it for some time until I am happy with all use cases.
It was Christmas 2018 and my wife Helen handed me an envelope with an unexpected gift: two hours of tuition with Alex Mustard (on land). Alex was travelling and I was busy with work, so I only managed to arrange the session in spring 2019. At that time I had just returned from a hammerhead expedition in the Bahamas.
Prior to boarding the boat I did two days of diving at Blue Heron Bridge. I must admit shooting macro is not my favourite discipline, and the shots were very disappointing: they all looked like flat, fish-ID-style images of various critters on the seabed.
I showed the images to Alex who gave me a session on inward lighting for macro and we took several shots of coffee mugs or other widgets on his kitchen table. I wish I had had that session before going on that trip, but things never quite work as you would think.
Since then, I have done mostly wide angle with the occasional macro or fish portrait, and I have not really had the chance to give this technique a proper go. Inward lighting for macro requires you to position your strobes behind the subject or in line with it, and this is sometimes not exactly practical. The same technique, initially introduced by Martin Edge, can be applied, with some changes, to close-focus wide-angle images.
Inward Lighting Diving in Italy
During this summer I had the opportunity to visit the Sorrento Peninsula again and dive with the friends at Punta Campanella Diving Centre. On day one of diving, the plan was to visit the dive site called Banco di Santa Croce, a group of offshore pinnacles ranging from 12 to 50 meters depth where there is an abundance of groupers, rockfish, anthias, occasional eagle rays and plenty of gorgonians. Usually the visibility is terrible in the first few meters and then clears up after the thermocline around 15 meters; however, on the day the visibility was pretty bad until around 30 meters. After a set of pretty deep gorgonian shots, and seeing that the groupers were very uncooperative, I went above the thermocline on the shallower pinnacle to see what I could shoot. Visibility was pretty bad, with murky green water and a high number of suspended particles, and I had an 8-15mm zoom fisheye on my Panasonic GH5M2 (similar to a Tokina lens on APSC). All of a sudden, I see a large "scorfano" (rockfish, a variety of scorpionfish) swimming over the reef to change its resting location.
I take the first shot trying to minimise backscatter.
While the backscatter control worked reasonably well, I was faced with another fish-ID-style portrait with an ugly background: an overall anonymous shot, at least for my tastes. At this point I thought of giving inward lighting another go, even though I had a zoom fisheye lens and was attempting some kind of fish portrait, with the ambient light around the fish looking quite ugly.
I moved the strobes in line with the focal plane of the camera pointing right at the handles and started with the strobes wide, getting closer to the housing until I got the level of light I wanted.
First attempt was quite dark but the image started to look more interesting.
Eventually I got the light I wanted on the fish. Now it was time to make the image more interesting, trying to get some attitude out of this cooperative rockfish.
After a few attempts I managed to get the shot I wanted before the fish decided to swim away in a position no longer suitable for the composition I wanted.
As you can see, I kept quite a low f-number: I wanted to make sure only the fish's head was sharp and to limit the depth of field through the frame. This is a fisheye zoom at 15mm on micro four thirds, so f/5.6 still has some depth of field, but not too much. In my opinion the inward strobe position works particularly well with subjects that have depth and are not flat on the focal plane of the camera, like in this example. The lighting creates very strong shadows and texture that give the fish attitude.
A few days later I am on another dive site shooting wide angle again and I notice a large hermit crab on the seafloor. I try a crossed-strobe shot and to my horror I notice many large particles backscattering over the black background.
I change the strobe position for inward lighting wide angle and place myself so I would get some blue water in the background that would reduce the contrast of the particles.
The first repositioning works well: I get strong shadows and light more from one side as I wanted.
Then the hermit decides to go for a wander: first it repositions itself so that I get a more frontal shot.
Then it literally legs it, so I get a shot that for me is quite funny; as you can see, a group of breams is swimming in the other direction against the blue water.
This technique has resulted in a few shots that are above average, certainly not outstanding but decisively different, and that bring out the character of both critters in my opinion.
I want to try and provide some details and technical explanation of what I think is happening with the strobe positioning and the subject.
This is a standard position for close up frontal shots.
From the diagram you can see that the area where the lens and strobe beams overlap can generate backscatter. As the strobes are aligned with the lens, the phenomenon can be really strong, as demonstrated in the first hermit crab shot.
There are two issues with this positioning: first, if the subject is sitting on the seabed and you cannot get water behind it, you will see the background no matter how fast the shutter speed goes. Moreover, if you try to close the aperture you will need to increase strobe power, which will result in more backscatter.
This is a position that I use for inward lighting when I use a wide-angle lens.
You can notice a few things. The first is that the subject is only hit by the edges of the beam, and only from one side of each strobe, so the intensity of the light is greatly reduced. This can be a challenge if you have a true fisheye, as you will need really strong strobes once you place them further away from you to cover a wider area.
The second is that the light beams are pointed at each other, which in turn means strong shadows and a lot of texture on your subject.
Thirdly, any suspended particles will reflect light away from the incident angle of the lens, resulting in attenuation, but of course not elimination, of backscatter effects.
Finally, the area behind the subject is not covered by the strobes at all and lends itself to either dark background or ambient light as in my two examples in this article.
Here are some additional tips on strobe settings. I personally use diffusers in this set-up; otherwise the strobes need to move forward, which can create backscatter at the edges, or you could even see the strobes in the frame. Second, you need several attempts to work out the distance vs power vs aperture equation. If you are interested in a dark background, increase the shutter speed as far as you can, but on the other hand control the aperture so you get the visual effect you want; in my case open, so that the background is not sharply in focus. If you want blue water in the background, then you need to reduce the shutter speed and increase the f-number so you get plenty of depth of field to show as much of the environment as possible; this may result in your strobes working at full power just to paint your subject enough to stand out. It takes a while to work out how to proceed, and it is better to decide at the outset how you want to compose the shot, so that you do not spend too much time on trial and error; your subject may decide to leave the scene and interrupt the cooperation.
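The distance vs power vs aperture equation mentioned above can be sketched as a rule of thumb. This is my own simplification of the inverse-square law, not a calibrated model: the strobe power required scales with the square of the strobe-to-subject distance and the square of the f-number, and inversely with ISO.

```python
def relative_strobe_power(distance_m, f_number, iso, ref=(1.0, 8.0, 100)):
    # Power needed relative to a reference exposure (reference distance in
    # metres, reference f-number, reference ISO): inverse-square law for
    # distance, square law for f-number, linear in ISO.
    d0, f0, iso0 = ref
    return (distance_m / d0) ** 2 * (f_number / f0) ** 2 * (iso0 / iso)

# Closing down one stop (f/8 -> f/11) roughly doubles the power needed:
print(relative_strobe_power(1.0, 11.0, 100))
# Doubling the strobe-to-subject distance quadruples it:
print(relative_strobe_power(2.0, 8.0, 100))
```

This is why moving the strobes back, or stopping down for depth of field, so quickly pushes you towards full power.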
I believe image editing is almost as important as making an image, so I have included some post-processing tips, trying not to get too technical. To simplify, I will only say that the camera captures a lot more information than your image preview or your raw converter shows when you import your images. Some of the inward lighting shots may look really dark initially, especially those at fast shutter speeds. Do not despair: if your camera is good at preserving colours in the shadows, you may still have an outstanding image sitting there, so unless you got the focus wrong, do not immediately delete images that appear underexposed.
The second suggestion is to avoid pressing the Auto button on your photo editing program because that will balance the exposure across the entire scene and take away any character from your image.
Generally, inward-lit images like the ones I have shot look fairly dark straight out of camera, and you do not want to compensate the overall exposure. My recommendation is to use a mask on the subject and adjust exposure very slightly, and only if you got it very wrong. Instead, pull up the whites and the highlights to make your subject stand out. I avoid any change of clarity, sharpening etc.: the images have minimal, but selective, processing.
Another crucial consideration: because you are using only the very edge of your strobe beams, the colour rendering index and warmth of your strobes may end up far from normal conditions, and using the white balance picker may produce strange effects, as the lighting is not even across the frame. I recommend you increase the colour temperature and tint until you get something you are happy with, instead of going for recipes.
At the very end, see if you want to clone out debris or some residual backscatter. This technique requires you to get very close to the subject, and due to the strobe position backscatter at the focus point is minimised; however, it can still happen at the sides of the frame.
As for cropping to a specific aspect ratio, there is no hard and fast rule: lately I tend to shoot these close-ups at 1:1 if the subject is somewhat rounded, but I can go 16:9 if it is a fish sitting sideways on the seabed. Generally I decide on the crop very early in the process, but the good thing is that, as you will only make minor adjustments with masks on the subject, cropping will not change anything.
I have used a Panasonic GH5M2 with a Canon 8-15mm and a Metabones smart adapter; my rig set-up is described at this link. An APSC camera with a Tokina 10-17mm, or a camera with a wet optic like the WWL-1 or even a WACP, is adequate for shots like those described in this article. A full fisheye will have a much wider field of view, and your subject may look very small, or you may not be able to illuminate it correctly; a WAM (wide-angle macro) solution may be better, but that is an entirely different technique. I use a set of Sea & Sea YS-D2 strobes; despite their reputation for low reliability, they have worked fine for everything I have done until now. I am also convinced that shots like those described in this article can be taken with any camera type, as long as you know how to and have adequate lenses and field craft. So, if you have read up to now, I recommend you give it a go and try to apply my suggestions, adjusting them to your taste.
The Covid-19 pandemic has had a severe impact on the travel industry and consequently on scuba diving, underwater photography and video.
I had to cancel my plans for the second part of 2020 and also for 2021, as test requirements and scarcity of flights made many destinations very difficult to reach.
We are now in 2022 and the pandemic seems to have slowed down. In the UK it is estimated that 96% of adults have antibodies. Travel has started again, but there have been difficulties as airlines and hospitality struggle to hire and retain staff. A few European flights I have taken for business trips were all severely delayed. Prices have gone up and the frequency of connections has dropped. It will take a while until we return to pre-pandemic levels, and perhaps we will never get back to 2019 and earlier.
During the long period without travel I found myself with my camera and lenses and unable to use most of my underwater housing and gear so I decided to expand my photography and videography interests.
What can I do with my equipment?
An underwater photographer/videographer will normally have an arsenal of fisheye or super wide lenses, macro lenses and in some cases standard zoom lenses.
You realise quite quickly that it is not exactly easy to put your fisheye lenses to use for land photography.
Most wildlife shooting on land is carried out with long telephoto lenses. Wide rectilinear lenses are used for landscape, but frankly, extra-wide lenses are less used than others.
Shooting people involves focal lengths that are normally used underwater for macro.
Macro photography is perhaps where there are more similarities between land and underwater photography. Long lenses are used in both cases and flash photography is also rather common on land for certain subjects (mostly plants and animals that move very little).
Another significant difference is driven by depth of field. High-quality lenses for land use are generally f/2.8 and faster, with wide lenses in most cases f/4. Many times, especially when shooting portraits, depth of field is limited, and users try to get the best performance out of lenses, which is generally in the f/5.6-f/8 interval.
The closest case to underwater photography is the sunny 16 rule (f/16 with a shutter speed of around 1/ISO in bright sun), which gives plenty of depth of field for your shots. For macro you also need sufficient depth of field.
Generally venturing into other photography styles will mean investing in different lenses that fit the objectives.
When it comes to lighting, underwater strobes are not fit for purpose on land. A decent photographer knows that a basic on-camera flash will only give you flat lighting and potentially red-eye, so the majority of land portrait or studio photography uses off-camera lighting with a variety of modifiers, including umbrellas, soft boxes, continuous light and mains-powered flashes. Again, if you decide to go in that direction you will need to buy equipment; the good news is that it is really inexpensive, as volumes are high, so you can get a flash, triggers etc. for a few hundred pounds.
So, in conclusion, without any investment it is relatively difficult to do anything with your underwater gear. As an example, here are some garden macro shots without flash.
Using flash is easier on things that do not move at all or move really slowly.
I have made several attempts at shooting bees with flash, and while the colours are great, the mortality rate of your shots is extremely high.
You can obviously try some abstracts with flowers or go into flower photography, but you need to be mindful that in bright light subject isolation and background rendering may be a challenge.
I am not a macro guy myself, nor a fish portrait person, so for me garden macro was not particularly exciting as a discipline. In addition, as the bugs do not really let you get close, you get better results with a long telephoto lens and a teleconverter or extension tubes. This again means investing in new kit.
I have to admit that before Covid-19 I would sometimes take landscape shots, but just snapshots really. Having more time on my hands and not being able to travel, I joined a local camera club and also some local Instagram and Facebook groups for inspiration.
At one point I did a whole study of sunset phases, golden hour and twilight in the same spot.
I guess the job of a landscape photographer is one of chasing light, not just being on location, and I learned how frustrating it can be to wait for the perfect conditions. As an illiterate land photographer I thought good weather was always good, as there is plenty of light, only to find out that too much light and a clear sky do not make good images.
I also found that it is more interesting to have a person in a shot instead of just the landscape.
Wildlife Photography and Videography
The move from underwater to land wildlife is not a simple one. As I mentioned, this is mostly a long-lens job. A further complication is that, depending on where you are, there may not be many subjects available. Due to the destruction of habitats, in most places local wildlife means predominantly birds.
Personally I do not prefer birds to other wildlife: I find it difficult to compose shots due to the speed at which they move and the related difficulty of taking shots. I did however develop a soft spot for Red Kites.
This culminated with a visit to Gigrin farm in April 2022 when normal operations had resumed.
Due to the vicinity of Woburn Deer Park I found a real passion for shooting deer, especially red deer. I did an entire video project on this during 2021 and this is the result (shorter versions with selected scenes are available).
I have also taken some of my best images ever on the grounds of Woburn some of which I have sold on canvas 30×20.
I like deer as they are very attractive and they lend themselves to a variety of photos and videos. I got pretty good results, in fact very good results, and I now run some workshops during the red deer mating season.
Nightscapes & the Milky Way
Clear skies are not good for daylight landscapes but are essential for shooting stars. During the period of travel restrictions I started venturing out locally for spots to shoot the Milky Way at night.
I had the best results in Italy near my home town.
This culminated in a trip to Tenerife which led to some of my best shots to date.
This shot has done very well on Facebook, with something like 3.5k likes in specialised groups.
In terms of skills there is absolutely nothing in common with underwater imaging. Here you need fast lenses, a tripod and specialised devices like a star tracker. In addition there is a lot of standing around, so warm clothes and even a dew-prevention device for your lens are in order.
It is generally inexpensive to get into this kind of photography; however, due to light pollution you may not have any real chance where you live.
This is the genre I have explored the least. It requires an additional lighting set-up, which I now have, but especially interesting subjects to shoot, which in general terms means models. Most models are for hire, so this adds extra cost to your hobby. There are several other opportunities, like re-enactors, cosplay shows and others, but I have not really explored those. This is an area under development.
Intentional Camera Movement
This includes blurred shots with pans as well as other techniques like zooming. If you have read about Nick More here and elsewhere, you know that these techniques can be used successfully underwater. Personally, although I like the technique for certain uses, it is definitely not my favourite and remains an area of future, not current, focus.
One lesson that comes from trying different types of photography is that you do not know your camera as well as you think. During the periods of shooting on land I have probably learned more about the mechanics of a camera than I ever did when I was focussed on underwater.
The second lesson is about editing: there are some real Photoshop wizards out there, and many lessons and tools can be transported back to underwater imaging. I was not a Photoshop user before the pandemic; now I have the whole subscription set.
Underwater imaging is an expensive hobby, and what I have learned is that if you only do 3-4 trips a year your camera is really under-leveraged. There are 365 days in a year and all of them are good for taking some photos. Many items, especially lenses, can be bought on deals or second hand, and there are many other ways to enjoy yourself.
I guess as of today I would class myself as predominantly an outdoor photographer; indoor shots remain the minority, and studio is not really something I do. However, I think the pandemic was a great boost to my imaging in general, and I hope this article stimulates you to try new things where you are, as well as when you travel.
A recent discussion on wetpixel with regards to mirrorless cameras vs DSLR seemed to highlight that electronic viewfinders are a major limitation vs optical viewfinders in high dynamic range scenes.
In reality an optical viewfinder does not have a dynamic range: it is just the projection of what the camera lens is seeing through a mirror, while an electronic viewfinder is a small screen limited to the roughly 10 stops of dynamic range of the camera's JPEG engine.
So there is no doubt that in certain cases the eye and the brain do a better job than a screen at managing certain scenes; however, to say that this is a limitation that cannot be overcome is a real stretch, especially now that most images are taken with mirrorless cameras and have high dynamic range.
During my last Red Sea adventure I spent almost an entire dive shooting sunbursts. Sunbursts can be tricky; this is an excerpt from Alex Mustard's Underwater Photography Masterclass:
“At depth the overexposure at the edge of the sun ball is only in the blue channel, which creates an ugly cyan halo around the sun.”
The other ideal condition for sunbursts is calm water, which I did not really have during my trip.
So I possibly had the worst and most challenging conditions for my camera. As you know, I shoot a Panasonic GH5M2, and the Micro Four Thirds format is frequently labelled as having very low dynamic range.
During lockdown I practiced a lot of landscape and night photography, including many sunrises and sunsets, and I have learnt that my camera actually has a lot to offer if I do not fully trust the exposure tools.
The camera lies to you
The image displayed in camera and in the EVF is an output of the camera's JPEG photo settings and shows what the manufacturer believes is an optimal image. RAW converters do exactly the same thing and apply corrections to the raw data to show what they think looks good as a starting point for editing.
This image, taken at around 18 meters so fairly deep, shows a moderately clipped sunball under what was not a calm surface, and a fairly dark foreground despite the strobe fired on the coral.
This image is instead the camera RAW data, very close to linear and without corrections. Note the cyan sunball and the very dark foreground.
This does look really dark indeed, and to be frank the camera sees a lot in the dark.
The important part though is that this image is not clipped in the highlights and the darks are not crushed either.
The image can therefore be rescued to produce a decent result.
Is this image as pretty as one where the sun rays do a perfect star in calm shallow water? No.
Does it have an ugly sunball of death? No.
Is it noisy, grainy, or lacking sharpness? Definitely not.
Would you have taken this image if you believed the camera exposure settings? Probably not.
How to take good high dynamic range images underwater with a mirrorless camera
There are a number of challenges to be overcome:
On some cameras the EVF may get so dark that you can’t see any of the foreground
The camera metering system reflects a JPEG, not a RAW, image file
The image review afterwards may also be incorrect
You do not know how to edit such an image to find out whether it was clipped
Let’s take those challenges one by one.
Normally with a mirrorless camera, if you try to expose so that the sunball is not clipped, the display gets so dark that you can't see the foreground unless you have a light.
Some cameras like mine have a metering mode called highlight-weighted, where the camera calculates exposure not on the middle grey but on the highlights. This in turn allows us to calculate how much headroom is built into the camera's exposure tools; I have calculated that for mine it is 1 full stop. So the first step is to set your camera to the fastest shutter speed your flash can sync to (in my case 1/400), the lowest ISO (in my case 200), and the smallest aperture that does not go into diffraction (in my case f/10), then see if you can match that 1 stop of overexposure and work from there with your aperture. Set the strobes to match your aperture: in my case I set them to f/16 to start and then move to f/22 if needed when I am reasonably close; otherwise I may go all the way to full power.
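To make the headroom idea concrete, here is a small sketch that expresses settings as exposure values and measures the gap in stops between a highlight-weighted reading and the base sunburst settings. The 1/200 metered shutter speed is a hypothetical reading I made up for illustration; only the base settings come from my camera.

```python
from math import log2

def exposure_value(aperture: float, shutter: float, iso: int) -> float:
    """ISO-100-referenced exposure value: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return log2(aperture ** 2 / shutter) - log2(iso / 100)

# Base sunburst settings from the text: f/10, 1/400s, ISO 200.
base = exposure_value(10, 1 / 400, 200)

# Hypothetical highlight-weighted reading: same aperture and ISO,
# but the meter asks for a one-stop-slower shutter of 1/200s.
metered = exposure_value(10, 1 / 200, 200)

# Positive result: the base settings are this many stops darker
# than the meter wants, i.e. the built-in highlight headroom.
headroom = base - metered
print(f"Headroom: {headroom:.1f} stops")  # → Headroom: 1.0 stops
```

The same subtraction works for any pair of settings, which makes it easy to check how much headroom your own camera's highlight-weighted mode leaves.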
The second step is to switch the camera back to a normal multi-metering mode and ignore entirely any warning of clipped highlights, so you can compose the scene and shoot.
The JPEG review will show blinking highlights in abundance, but you know that is not actually true.
Later in Lightroom we apply the settings to remove the program bias and see if the scene was clipped.
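Any converter's histogram still reflects its own rendering, so the most direct check is to look at the raw sensor values themselves. A minimal sketch of the idea using numpy on a synthetic Bayer mosaic; the array and white level stand in for the data a raw decoder such as rawpy would hand you from a real file.

```python
import numpy as np

def clipped_fraction(mosaic: np.ndarray, white_level: int) -> float:
    """Fraction of photosites at or above the sensor's white level.

    A value is truly clipped only when it saturates; a bright but
    sub-saturation sunball can still be pulled back in editing.
    """
    return float(np.mean(mosaic >= white_level))

# Synthetic 12-bit mosaic: mid-tones plus a small saturated "sunball".
rng = np.random.default_rng(0)
mosaic = rng.integers(200, 3000, size=(100, 100))
mosaic[:5, :5] = 4095  # 25 saturated photosites out of 10,000

print(clipped_fraction(mosaic, white_level=4095))  # → 0.0025
```

If the fraction is essentially zero, as with my sunburst frames, the highlight warnings were about the JPEG rendering, not the sensor data.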
As you can see the scene is correctly exposed!
After some editing we are at what I consider a decent result
Here is another example later in the day at another dive site, an even more challenging backlit situation.
There is no doubt that the eye and the brain have an easier job in those challenging conditions when framing the shot; however, any camera, including a DSLR, will lie to you when it comes to the image review, so ultimately you can push your equipment much further than you think if you know how.
Just back from a fantastic week. I cannot write a trip report on something I arranged myself, but I am confident those will come from the participants.
The actual schedule ended up like this:
Ras Umm Sid
I would have preferred a more aggressive approach to some sites; however, I ultimately decided to settle on something that was challenging diving-wise but not extreme.
I used my Panasonic GH5M2 with the Canon EF 8-15mm and the Panasonic 8-18mm. Surprisingly, I found I had more keepers with the 8-18mm; this is due to the dolphin dive, for which I took the risk of using the rectilinear lens and continuous autofocus, which worked well.
The trip had a slow start at Temple followed by Ras Mohammed and some technical training on light at Beacon Rock.
After a dive at Dunraven and a better one at Small Crack, where I took video, we moved to Abu Nuhas, where I decided to skip the last dive and go for a snorkelling trip hoping to find dolphins.
The dolphins came to play: we had one hour with them swimming at speed around us and getting really close.
It was the day of the wrecks, including the Thistlegorm. In order to support the group I was at the back, which did not help visibility. We were mostly on our own though.
Two additional dives on the Thistlegorm and we were back to Ras Mohammed; after the adrenalin, an easy dive at the lighthouse followed by a sunset trip to a sandbar. This time I tried to get some better shots of the Thistlegorm exterior, while inside I would say there were generally fewer fish to make the shots interesting.
Two dives at Shark Reef, where the current was pumping. We missed the snappers on dive one, but they were there in full force with the batfish on dive two. The group however ran out of air very fast trying to get the shots. The last dive was at Ras Ghoslani to have a break, followed finally by a session of split shots that was not very successful due to waves; I did, however, produce a decent one with quite some fish.
Usually the last day is more restful; however, we had three dives and one sunset split session, so actually a full day. Dive one was focussed on sunbursts, which ended up being the theme of dive two as well.
It was a great trip, although I am not sure I took my best shots in all cases. The Thistlegorm was under par, while the dives at Ras Mohammed and the other sites, Shark Reef aside, were better than expected.
One thing that proved to be absolutely right was that the ability to influence the boat schedule and itinerary is essential. We were always first in the water; Egyptian boats have a tendency to get in the water very late for dive one, which means most of the following dives happen with the sun really high and not always in the best conditions.
A few weeks ago I went diving in Swanage with BSOUP, the British Society of Underwater Photographers, which I have recently joined.
I was looking forward to some local diving so when I found out that they were organising a trip I managed to get on.
I drove there the night before and I was number two on the pier the next day.
It was a deceptively clear morning with perfect conditions on land.
I had two cameras, one in the housing and one for land use, so I took a few snaps.
Once parked on the pier I was informed by two friends who dive locally all the time that it was better to wait until the water level was a bit higher.
At that point it did look like a great day; however, there was a bit of wind.
I had my GH5M2 with the Panasonic 45mm macro, which I acquired last year and which has become my favourite macro lens.
I jumped in the water among the first, only to find the visibility was, well, maybe 1 meter? I could not see the LCD screen of the camera due to the suspended particles and had to use the viewfinder.
One of the first things I saw was this corkwing wrasse with a massive parasite near its eye.
Unfortunately I did not have a snoot or strobes suited to the challenge, so I spent the first dive training myself to get the least amount of backscatter. Mind you, when there are particles you will have backscatter no matter what you do.
Static subjects are ideal for testing, so I had a go at some really simple stuff.
And again some anemones; the objective was to get the cleanest possible shot.
When I was reasonably happy I moved to some more interesting subjects. I gave up on blennies, as I knew everyone would have shot some and besides my strobes were not the best for the situation, and I found a cooperating cuttlefish.
I can tell you that it took me quite a while to get this clean shot. On reflection, despite being very low I could not even see a hint of the surface, so bad were the conditions, so I decided to get really close.
I wanted to emulate the profile of a person, or perhaps an elephant, I am not sure, but I took a number of shots waiting for the tentacles to be in the right position, and this is my best shot of the day.
I would say it is quite creepy, but after all I had something decent, and when I presented the shot at the club review at the sailing club it got some good feedback.
Now with that in mind, let's have a look at some shots taken in clearer water; this one is from the Sorrento Peninsula.
You can see that clearer water improves contrast and sharpness, as you would expect; however, as the UK shot was taken very close, the gap is not as big.
And this is a shot from the last time I was in the Red Sea.
This is super macro so again suspended particles are not as important.
However, if we look at a mid-range shot, similar to the whole cuttlefish, the situation is very different.
Here we are in Italy.
And finally, here in the Red Sea.
For as much as we may love our local dive site, there is a degree of adaptation but also a restriction on the variety of shots we can take.
When I was working as a resident dive instructor, I remember the guidelines we were given; one was really funny and said:
“if the visibility is crap you don’t say that to the guests what you say is today we are going to focus on macro” then you make sure you choose a site where there is some.
If, like me, you have been trying to make the most of your local dive site, you deserve to get yourself in clear water where you can actually see further than your arm. Of course we do have some good days in England, sometimes 5 or even 8 meters, but I would take Egypt and its 25+ meters any day of the week!
A closing thought on conditions and land photography: even though visibility is rarely an issue on land, fog, overcast days and excessively clear days do not make great land pictures either, so we can say we are always on a quest chasing light and conditions.
I believe we have finally got to the point where users are moving from DSLRs to mirrorless cameras en masse. The release of the recent Nikon Z9 and Canon R5/R3 has definitely shifted land photographers to mirrorless.
Underwater photographers have been lagging, mostly because of optics compatibility, more specifically the lack of compatible fisheye options for mirrorless. Some classic lenses like the Tokina 10-17mm do not work properly when used through an adapter, and releasing fisheye lenses has not been a priority for Canon or Nikon. The good news is that first-party full frame lenses like the 8-15mm fisheye do work through an adapter, and in general all first-party DSLR optics can be adapted to a mirrorless camera of the same brand.
I sold my last DSLR in 2016 and have generally never looked back. I believe this can be a harder move for a bird shooter or a sports photographer, but the latest flagship cameras have performance for everyone.
EVF vs OVF
In terms of image quality there are no significant differences between a mirrorless camera and a DSLR camera. Improvements in image quality are mostly related to sensor improvements regardless of the system that runs that specific sensor. There are however some significant differences between an optical viewfinder and an electronic one.
An optical viewfinder literally means looking through the lens with your eyes; the primary benefit of an optical viewfinder is the lack of lag. Some people say that optical viewfinders have higher dynamic range, but that is not actually correct: an optical device does not really have a dynamic range limitation, and neither is it true that the human eye has 30 stops of dynamic range, or any of those fantasies.
The key problem of an optical viewfinder is that when it is dark you cannot see things until your eye adapts, and this happens slowly, so most DSLR users switch to live view, which essentially means using your DSLR as a mirrorless camera and watching a video stream on your LCD.
The other ergonomic difference is that you don't know how your shot turned out until you review it after you shoot, as the OVF, being a purely optical device, can't play back images.
An electronic viewfinder instead is nothing other than a micro LCD or OLED screen showing you a video of what is going on; it is also able to play back the images.
This has the great benefit of not needing to take your eye off the viewfinder as the image is played back as soon as you shoot. The price to pay is a small lag between reality and what you see on your EVF.
While an OVF is real time, an EVF has a lag that depends on how fast the sensor is read. This can mean a delay of more than 30 ms on very cheap cameras with just an LCD, down to 5 ms for the fastest-reading cameras like the Nikon Z9. In general, below 20 ms is good enough for underwater use, but for fast-moving subjects like birds in flight less than 10 ms is better.
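As a rough illustration of how readout speed sets a floor on lag: one full sensor readout has to complete before the EVF can show the frame. The processing overhead term below is my own simplification for illustration, not a published spec.

```python
def evf_lag_ms(readout_fps: float, processing_ms: float = 0.0) -> float:
    """Approximate display lag: one full sensor readout at the given
    frame rate, plus any processing overhead (an assumed figure)."""
    return 1000.0 / readout_fps + processing_ms

# A sensor read at 30 fps cannot be displayed sooner than ~33 ms later,
# while a 240 fps readout brings the floor down to roughly 4 ms.
print(evf_lag_ms(30))   # ~33.3 ms
print(evf_lag_ms(240))  # ~4.2 ms
```

This is why refresh-rate and readout-speed specs, not the viewfinder panel itself, are what to compare when lag matters.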
The other benefit of an EVF is that in dark scenes it can boost the display so you can see better than your eyes in the dark.
Electronic Viewfinder Myths
One of the biggest myths about EVFs is that they give you a what-you-see-is-what-you-get view of the image before you take it.
This is unfortunately untrue and it is important to understand why.
In a photograph we have two exposure settings: the aperture and the exposure time. ISO maps to the amplification of the system and is not an exposure setting; however, it can be useful to brighten an image that is too dark by amplifying the electric signal after light has been converted into current by the sensor.
Normally a camera operates with the lens wide open and with a fixed exposure time determined by the sensor readout frame rate.
Imagine that your camera has an f/2.8 lens and the sensor is reading at 60 frames per second. You have set your underwater shot to f/11 at 1/250. However, your camera will not close the aperture to f/11 until you press the shutter, and it is actually operating at a 1/60 exposure time.
In order to simulate the image, the camera will adjust the brightness of the EVF to make it lighter or darker so that you can see properly what you need to shoot. This actually has nothing to do with the shot that will come out.
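The size of that gap is easy to quantify. Taking the example above, a sketch of the difference in stops between what the EVF is actually seeing and what the shot will record:

```python
from math import log2

def ev(aperture: float, shutter: float) -> float:
    """Exposure value with ISO held constant: EV = log2(N^2 / t)."""
    return log2(aperture ** 2 / shutter)

# Viewing exposure: lens wide open at f/2.8, sensor read at 1/60s.
viewing = ev(2.8, 1 / 60)

# Shooting exposure from the example: f/11 at 1/250.
shooting = ev(11, 1 / 250)

# The EVF must darken its feed by roughly this many stops
# to "simulate" the final shot.
gap_stops = shooting - viewing
print(f"{gap_stops:.1f} stops")  # → 6.0 stops
```

A six-stop brightness adjustment is a lot to fake convincingly on a small screen, which is why the simulated preview can only ever approximate the final image.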
Some cameras from Sony, Canon, Nikon and Panasonic have a preview or exposure simulation setting that will close the aperture to the value you chose and simulate the chosen shutter speed in the video displayed on screen; if you operate in full manual, the display will actually change brightness as you change your exposure settings. However, this does not show an image exactly identical to the one you will shoot, because of the limitation on the exposure time: it will show something close to that image, and only if you select the option to simulate the exposure. Some cameras are actually unable to perform a full simulation; the brightness of the EVF will not be adjusted and may give the impression the image is very bright when it is not.
If you shoot with flash, of course, all of this goes out of the window, as the camera assumes the flash will sort things out: the display won't be affected unless you force it to, and even then it won't be anywhere near the image you will take. In essence you need to wait until after you have taken the image to see it, very much like on a DSLR.
Are mirrorless better for the underwater photographer?
Despite the beliefs of hard-core DSLR fans, mirrorless cameras are a better option for the underwater photographer for a number of reasons:
The EVF lag is no longer the issue it used to be on old compact cameras, and the refresh is faster than your eye and brain can react to
You can see the image preview without having to take the eye off the viewfinder
If you need to shoot in ambient light you have exposure aids that will make sure your image is correctly exposed without trial and error
Is there a disbenefit to the EVF? The EVF is a small screen and needs power to run; this means that, given the same battery capacity, a mirrorless camera will have less autonomy. However, almost all decent cameras are rated for over 300 shots and can easily get to 500+, so there is really no reason to hold on to a DSLR.
In 2022 it is definitely time to move on.
If you are a DSLR shooter and see other disbenefits in a mirrorless camera, leave a comment; I want to hear from you.