DJI Mavic, Air and Mini Drones

Love the Hasselblad Camera

Computational photography is definitely where the next leap in quality is going to come from.
In that respect smartphone makers are on the cutting edge, while conventional DSLR/mirrorless makers are literally years behind, and the gap is growing. There's a huge resistance to change.

It depends on what aspect you're referring to. Computational photography will have little to no bearing on, say, journalistic, sports or perhaps wildlife photography, but massive implications for fashion, landscape and product photography, to name a few. Remember, just because you *can* do something doesn't mean you should. That's why professional cameras have not changed much: they just work as needed, often under demanding conditions that most people can't conceive of (try using a phone at -20 with gloves on and being able to work for hours).

Phones are limited by the size of their lenses and sensors and *have* to apply AI/computational photography to compete; it's simple physics. But they're nowhere near the quality of, say, a 14- or 16-bit sensor. I've worked with a lot of RAW images, and phones have come a long way, but they still fall short when it comes to resolving detail and dynamic range without resorting to tricks. If you're resorting to tricks, you're potentially handing your creative vision over to a computer.

The other issue is that the CPU in a camera just doesn't yet have the power to do this work while preserving battery life, and if you want to offload it to a remote server you're opening the camera to security risks and the high power draw of WiFi (which, to be frank, is awful for throughput on pretty much every camera that has it). This is why a lot of pro cameras have 1 Gb Ethernet ports, mitigating both the security and power-draw issues.
 
It can apply to editorial and so on. Not the full "AI" stuff altering the image, but various tricks to reduce things like sensor noise will find an application. It's no different from what camera internals (even DSLRs) do now, just significantly more advanced.
I agree to an extent about the CPUs, but the natural progression would be to start upgrading conventional camera CPUs with more capable ones, given the cost is minimal now thanks to the expanding mobile market.

Yes, there's no substitute for a large sensor, good glass and so on, and these techniques aren't a substitute; but they can improve on what you'll get with whatever hardware is available.

I do agree about general use. My main cameras have touchscreens but I never use them. As you said, gloves, rainfall, moisture and anything else stops them working. You can't beat real buttons. The WiFi on them is so bad and unreliable I can't remember the last time I turned it on; it's likely measured in years.

But a "proper" camera with better computational hardware and processing is going to happen. My guess is Sony will head this way first, Nikon a few years behind, and Canon will talk about doing it some time in the year 2070, then just release an app that can print "hello world" on your image at most.
 
While I agree an ND is the last thing you want for still images (I have no idea why people keep thinking they need one; you're better off going with a faster shutter).

Some people are using them to get slower shutter speeds in brighter light, in order to get the soft-water effect without getting into the sharpness-robbing apertures of f/5.6 and smaller.

I have a set and I have yet to employ them but might at some point.
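For anyone working out the exposure side of this, each stop of ND doubles the shutter time needed for the same exposure. A minimal sketch of that arithmetic (function names are my own, for illustration):

```python
import math

def nd_stops(nd_factor: int) -> float:
    """Stops of light an NDx filter removes (ND8 -> 3 stops, ND64 -> 6)."""
    return math.log2(nd_factor)

def shutter_with_nd(base_shutter_s: float, nd_factor: int) -> float:
    """Shutter time needed to keep the same exposure after adding the filter."""
    return base_shutter_s * nd_factor

# Example: a scene metered at 1/500 s; an ND64 stretches that to ~1/8 s,
# enough to start smoothing fast-moving water.
print(nd_stops(64))                # 6.0
print(shutter_with_nd(1/500, 64))  # 0.128
```

So a scene that meters at 1/500 s bare needs roughly 1/8 s behind an ND64, which is where the soft-water look starts to appear.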
 
But a "proper" camera with better computational hardware and processing is going to happen. My guess is Sony will head this way first, Nikon a few years behind, and Canon will talk about doing it some time in the year 2070, then just release an app that can print "hello world" on your image at most.

The problem with Nikon is that they're dependent on whatever sensor Sony decides to dish out, usually a generation or two behind Sony's own. Canon is the only one with the fab capacity to make their own sensors, and the only company apart from Fuji that's actually producing solid lenses for their mirrorless line-up as well as sensors. Sony is slowly filling the gaps, but they're reliant on third parties to do that. Canon is being aggressive with their R series and experimenting with UI/UX changes, though it remains to be seen what they're doing with sensor tech; by the rumours they have a new sensor to be released next year. Nikon has to step up their game, more so than Canon, because of that dependency. Sony has to step up their game with QC and with UX/UI (a perennial complaint); they're pushing the envelope because they have to, as it's one of the few profitable parts of the company.

Sadly the industry is now being spoon-fed by Sony, and that's a sorry state to be in. It's not likely to change, as fabs can cost billions to build, let alone run. Going by sales it's coming down to a two-horse race.
 
It’s a glorified high-end phone camera; the colour science implementation seems to be, as you have said, the best feature. In recent years Hasselblad has done little more than rebadge Sony cameras with stupid pricing for the name association. I don’t think they ever made their own sensors; they used Sony ones when they did, like they are now with the M2P.

Making your own sensor is not really needed anymore. All my Nikon cameras have Sony sensors, and yet they still work and act like the Nikons I started with in this business in 1986. My 50MP Hasselblad digital back for my V system has a Sony sensor, but the color range and processing is uniquely Hasselblad and pretty outstanding.

Hasselblad is in better shape than they have been in at least 10 years. They learned from the "Lunar" fiasco, now have DJI for capital, and are moving along with really great products like the new CFV50c-II and X1D-II. My Hasselblad V system is a primary one for me, as it allows me to seamlessly integrate digital imagery with my need to shoot black and white film for fine art prints. These truly are the "Good Ol' Days", in my opinion.

I had no interest in drones before they put together the M2P, but it is barely passable as a single image, so as I cited above, I get around that at times by either uprezzing a cropped version of the single shot in DxO PhotoLab before souping it in ACR, or doing a 4- or 6-panel stitch with consideration given to getting rid of the crap corners.

It's getting there, though, and that is why I finally jumped on: I'm tired of hiring drone pilots who have no sense of composition unless I totally hold their hand.
 
The ND filter did nothing to enhance these images; the fact that it is a combined ND/polariser almost certainly did.

I can't see any reason to expect that you might need to modify your land based photo post processing workflow for your drone shots.
Ok, good to know! Can’t wait to get my M2Pro (Tomorrow) and get some practice in.

I think aligning the polariser will take some practice.
 
all my Nikon cameras have Sony sensors and yet they still work and act like the Nikons I started with in this business in 1986.

That's not actually the case, as they said a while ago: they design their own sensors but use Sony to produce them, as they lack a fab.
Canon are probably 4-5 years behind Nikon sensor-tech-wise. The R series is just a rebadged, ancient design and very disappointing.
 
Ok, good to know! Can’t wait to get my M2Pro (Tomorrow) and get some practice in.

Wear polarised sunglasses. It's not 100% perfect, but it does help you visualise the effect; just don't use a filter and the glasses together! Alternatively, remember that a polariser works best at certain angles to the sun (I can't remember the exact numbers off the top of my head) and does interesting things to glass, like showing stress points, particularly on cars. It's also great for reducing glare off glass and water.
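For what it's worth, on the angles: sky polarisation is strongest roughly 90° from the sun, while glare off water or glass is most completely polarised near Brewster's angle, arctan(n) for a surface of refractive index n. A quick sketch to put numbers on that (illustrative only):

```python
import math

def brewster_angle_deg(n: float) -> float:
    """Angle of incidence (from vertical) at which reflected light off a
    dielectric surface of refractive index n is fully polarised."""
    return math.degrees(math.atan(n))

print(round(brewster_angle_deg(1.33), 1))  # water: ~53.1 degrees
print(round(brewster_angle_deg(1.52), 1))  # typical glass: ~56.7 degrees
```

Which is why a polariser kills reflections off water and windows best when you're shooting at a fairly oblique angle, not straight on.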
 
It's also great for reducing glare off glass and water.

Right. I use ND and polarizing filters on my Sony a7ii camera.
 
Question: With the M2P camera, with the right ND filter, could you capture an image like this?
View attachment 79907

No.

That's a longer exposure than the drone is going to be able to do stably (I'd guess it's 10+ seconds). In addition, there are foreground objects very close to the lens, so the movement of the drone would be very apparent on them.

The only way I can see is with compositing: a fast shutter for the rocks, a longer one for the water, and blending in post-processing. Not fast or easy.

I guess it's not impossible with the drone, but you'd have to be extraordinarily lucky: a combination of no movement at all (hundreds of shots, maybe) and various other things.
So not impossible, just extremely unlikely and reliant on luck.
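The compositing route mentioned above is, at its core, a per-pixel masked blend of an aligned fast-shutter frame and a long exposure. A minimal sketch with NumPy, using toy arrays in place of real aligned images (all names are illustrative; alignment and mask painting happen elsewhere):

```python
import numpy as np

def composite(fast: np.ndarray, long_exp: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Per-pixel blend: mask=1.0 takes the long exposure (water),
    mask=0.0 keeps the sharp fast-shutter frame (rocks)."""
    if fast.ndim == 3 and mask.ndim == 2:
        mask = mask[..., None]  # broadcast a 2D mask over RGB channels
    return fast * (1.0 - mask) + long_exp * mask

fast = np.zeros((4, 4))                     # stand-in for the sharp frame
long_exp = np.ones((4, 4))                  # stand-in for the silky-water frame
mask = np.zeros((4, 4)); mask[2:, :] = 1.0  # bottom half = water
out = composite(fast, long_exp, mask)
print(out[0, 0], out[3, 3])                 # 0.0 (from fast), 1.0 (from long exposure)
```

In practice the mask edge would be feathered, and this is exactly the "ton of post-processing work" being described: the blend itself is trivial, the alignment and masking are not.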
 
Question: With the M2P camera, with the right ND filter, could you capture an image like this?
View attachment 79907
If not better.

Simply align, stack and blend multiple frames to arrive at an exposure that will prove indistinguishable from a single-frame long exposure in these circumstances.

You could also get a better composition without getting wet.
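The align-stack-blend approach amounts to mean-averaging a burst of short, aligned exposures: the average of N frames approximates one exposure N times longer for moving subjects (and also cuts noise). A minimal sketch, with the alignment step (e.g. via OpenCV feature matching or ECC) assumed already done:

```python
import numpy as np

def simulate_long_exposure(frames: list) -> np.ndarray:
    """Mean-stack pre-aligned frames (float arrays of identical shape)."""
    return np.mean(np.stack(frames, axis=0), axis=0)

# Toy demo: the 'moving water' pixel varies frame to frame and smooths out,
# while the 'static rock' pixel is identical in every frame and stays sharp.
frames = [np.array([[0.2, v]]) for v in (0.0, 0.5, 1.0)]
print(simulate_long_exposure(frames))  # [[0.2 0.5]]
```

This is why the result looks like a genuine long exposure for water and clouds: any pixel that changes between frames converges to its time-average, while static detail is preserved.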
 
That's assuming they'll align well (unlikely, given how close the foreground is) with the bouncing, and ignoring the artefacts you're going to get creating a pseudo long exposure.

In either case it's compositing and a ton of post-processing work, if it works at all, vs a simple in-camera image on a proper setup.

You can make up for a lot of in-camera shortfalls with a lot of time and effort in post, if required.
 
Making your own sensor is not really needed anymore. All my Nikon cameras have Sony sensors, and yet they still work and act like the Nikons I started with in this business in 1986. My 50MP Hasselblad digital back for my V system has a Sony sensor, but the color range and processing is uniquely Hasselblad and pretty outstanding.

Hasselblad is in better shape than they have been in at least 10 years. They learned from the "Lunar" fiasco, now have DJI for capital, and are moving along with really great products like the new CFV50c-II and X1D-II. My Hasselblad V system is a primary one for me, as it allows me to seamlessly integrate digital imagery with my need to shoot black and white film for fine art prints. These truly are the "Good Ol' Days", in my opinion.

I had no interest in drones before they put together the M2P, but it is barely passable as a single image, so as I cited above, I get around that at times by either uprezzing a cropped version of the single shot in DxO PhotoLab before souping it in ACR, or doing a 4- or 6-panel stitch with consideration given to getting rid of the crap corners.

It's getting there, though, and that is why I finally jumped on: I'm tired of hiring drone pilots who have no sense of composition unless I totally hold their hand.
Was making their own sensors ever needed? I certainly didn't suggest that. The point I was making is that it doesn't matter.

I fear you have placed too much importance on the Hasselblad name in this instance- the Phantom 4 A/P/PV2 would have served your purposes just as well prior to the M2P being released.
 
That's assuming they'll align well (unlikely, given how close the foreground is) with the bouncing, and ignoring the artefacts you're going to get creating a pseudo long exposure.

In either case it's compositing and a ton of post-processing work, if it works at all, vs a simple in-camera image on a proper setup.

You can make up for a lot of in-camera shortfalls with a lot of time and effort in post, if required.
Have you tried it? You wouldn't be left assuming, then. It is a simple automated process, and in any case it provides the only available means of obtaining significantly long exposures with these drones. Your concerns about alignment issues and artefacts are unfounded. Give it a go; you will be surprised.
 
Question: With the M2P camera, with the right ND filter, could you capture an image like this?
View attachment 79907
Filter or not... if I need a really long exposure from my M2P, I just shoot a brief hyperlapse at an exposure of several seconds, which hyperlapse allows, then capture a frame of the video.
In normal mode the maximum on the camera is 1/30th.
 
Filter or not... if I need a really long exposure from my M2P, I just shoot a brief hyperlapse at an exposure of several seconds, which hyperlapse allows, then capture a frame of the video.
In normal mode the maximum on the camera is 1/30th.
I should try that. Do you end up with a .dng or a JPG? I haven't played with hyperlapses at all, but I do hope to have a fiddle soon.
 
Question: With the M2P camera, with the right ND filter, could you capture an image like this?
View attachment 79907

Yes.

But:
- it has to be a no wind day.
- the amount of flow will also dictate how much of an effect you will get.

I have successfully done exposures up to 4 seconds. I know one member who has done 7 seconds, but that's probably starting to push it.

Obviously, the shorter the exposure, the better your chances. I successfully took a shot of a waterfall in Rochester, NY with only a 1-second exposure.
 