For a non-pro photo enthusiast, the Zoom offers great photos plus the zoom function during the day, and 4x zoom video in 1080p - this is huge! But in failing light, at dusk and at night, the M2 Pro's camera shines!
Swapping the lens only takes 3-4 minutes. The drone recognizes the lens and all functions become available in the firmware/software.
The question with the swap is how much abuse the ribbon cable connector can handle.
I'm not that surprised. From what I've read around here and from my experience in general photography, the problem with the Pro 2 is that although it has a better sensor, you won't see the full benefits unless you post-process the raw files. The Zoom, on the other hand, has a very obvious benefit that anyone can see straight out of the box. I see the same with DSLRs and other large-sensor devices, particularly now that smartphones produce such great shots straight away with no processing, while larger-sensor devices are usually pretty flat out of camera and need processing to extract the immense image quality they are capable of.
I was a big fan of the concept of the Karma and planned to buy one over the Mavic because I liked the modular design. I wanted a gimbal and a new camera for the bike, which the Karma would give me, and I also liked the idea that I could upgrade the camera on its own or, if I crashed the drone, easily replace one part of it. Then they were falling out of the sky, and the re-released ones weren't great, so I ended up with a Mavic and a separate gimbal and camera for the bike. Even if the Karma had worked well, it was far bigger than the Mavic, partly to accommodate that modularity, and the gimbal design was poor for mounting to anything else. The gimbal I bought was much better for connecting to a chest strap for use on a bike and also had replaceable batteries, which the Karma grip lacked.
In terms of what I'd pay for a swappable gimbal on the Mavic: I think it's a good idea, both for flexibility and for repairs, since the gimbal seems the most likely part to take damage in a crash. But it's not something I'd pay more for, and it wouldn't attract me to a drone on its own.
I agree. Had the Mavic not arrived, everyone would be flying Karmas and thinking they had an extremely portable drone. Enter the Mavic, and the Karma looked like a pregnant orca.
GoPro was such a pioneer in action cameras; too bad they couldn't settle their differences with DJI. DJI still doesn't have a waterproof consumer-grade drone, but other companies do, such as SwellPro - an inferior drone, but meant to serve a purpose.
I wouldn't buy it. The difference between the two is just too small. I'm happy with the Zoom, and I'm not allowed to fly in the dark in my country anyway. With super resolution I get high-resolution pictures (I stitch the RAWs in Photoshop), and there are only a few photos I've taken that couldn't become super-resolution shots (because of motion). I'd prefer that DJI try to combine the features of the Pro and the Zoom into one camera.
But to be honest, the image quality of the current Mavics is really superb. If you want to step up, you have to pay for an Inspire, and that's okay. Maybe I'll think about that when I'm done with the M2Z (in a few years), or maybe I'll be happy with the then-current Mavic-like drones.
You're always going to make a compromise one way or another. If DJI could offer a sensor larger than 1in (or larger than 1/2.3in) with a zoom, then they could also offer an even larger sensor without a zoom. So with the Mavic 2, DJI could offer either a zoom with a small sensor or no zoom and a larger sensor, and they made the decision to offer separate cameras. You see the same in compact cameras: those using a 1/2.3in sensor can offer incredible zoom range (125x for the Nikon P1000), but if you go up to a 1in sensor it drops to just 25x in the same body size.
One of the main misconceptions about the Mavic 2 Pro is that its sensor is only an advantage at night (I'm assuming that's what you mean by flying at night, sorry if not), but for me the main advantage of the 1in sensor is the much wider dynamic range. Many drone shots have a bright sky and a dark ground, which means you lose detail in both areas, but with the Mavic 2 Pro it's possible to extract a lot more detail than from a 1/2.3in sensor. Although low light is an advantage of a larger sensor, I don't find it's such a big advantage on the Mavic, because the super-stable gimbal means you can get away with much lower shutter speeds than when shooting handheld.
I'm not saying everyone should buy the Mavic 2 Pro, just explaining its advantages, which are not as obvious as the zoom; certainly those who don't work with raw files aren't going to get most of the benefit of the Mavic 2 Pro.
Perhaps it should be a separate poll, but the feeling I get from reading this topic is that people have switched from the Pro camera to the Zoom because they weren't happy with the Pro, rather than swapping back and forth.
A few days ago there was a discussion about the effects of different sensor formats in another thread, and I promised to write more about it. Since it's pretty off-topic in the other thread, I'm starting a new thread here. The purpose of this thread is to provide a basis for a technical discussion and to tell interested people something about the technology behind digital cameras. I'm happy to try to answer your questions and suggestions or discuss the topic. However, I ask you to stay objective, especially when advantages or disadvantages are mentioned, and not to let it degenerate into a sensor-format or manufacturer war.
First of all: I do not want to say that a certain sensor format is better or worse for the photographer, because both large and small sensors have advantages and disadvantages, and everyone has to weigh up for themselves what suits them better. I try to write this post as fact-oriented as possible; if it sounds different at any point, please point it out. I myself currently use both a full-frame sensor and a 2/3" sensor for my pictures - so one quite large and one rather small sensor. On top of that, this view is very theoretical, and big theoretical differences can be relativized in practice.

For the sake of simplicity, I compare a full-frame and an MFT sensor, as these differ by a factor of 2 in the diagonal (crop factor 2) and by a factor of 4 in area, and are therefore easier to calculate with. Focal lengths always refer to the full-frame sensor, as is usual (35mm/"KB" equivalent). The fact that the two have slightly different aspect ratios (3:2 and 4:3) is also dropped for the sake of simplicity.
Terms used by me:
MFT or µFT: Sensor format (Micro) Four Thirds (17.3 x 13mm), Crop 2
APS-C: Sensor format (22.2 x 14.8mm), Crop 1.6
FF: Sensor format full frame/35mm ("Kleinbild") (36 x 24mm), Crop 1
MF: Sensor format medium format (48 x 36mm), Crop ≈ 0.72 (by diagonal)
Crop/Crop factor: The ratio of the full-frame diagonal to the diagonal of the sensor in question. It also describes the apparent focal-length extension.
(Light) Transmission: The degree to which an optical element lets light pass. Window glass, for example, usually has a transmission of 0.85: 85% of the light passes through and 15% is reflected or absorbed. Solar glass (glass for photovoltaics or solar thermal collectors) usually has a transmission of 0.95. Unfortunately, I don't know the average transmission of the glass used in lenses, but I suspect it is even higher, about 0.95-0.98.
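The crop factors in the list above can be reproduced directly from the sensor dimensions. Here is a minimal Python sketch (names and structure are my own, not from any camera library); note that by the diagonal definition the 48 x 36mm medium format comes out at about 0.72 rather than 0.75:

```python
import math

# sensor name -> (width_mm, height_mm), dimensions as listed above
SENSORS = {
    "MFT":   (17.3, 13.0),
    "APS-C": (22.2, 14.8),
    "FF":    (36.0, 24.0),
    "MF":    (48.0, 36.0),
}

def diagonal(w, h):
    """Diagonal of a w x h sensor in mm."""
    return math.hypot(w, h)

FF_DIAG = diagonal(*SENSORS["FF"])  # ~43.3 mm

def crop_factor(name):
    """Crop factor = full-frame diagonal / sensor diagonal."""
    return FF_DIAG / diagonal(*SENSORS[name])

for name in SENSORS:
    print(f"{name:5s} crop = {crop_factor(name):.2f}")
```

This prints crop factors of about 2.00, 1.62, 1.00 and 0.72, matching the list within rounding.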
Apparent focal length extension
A sensor that is smaller than full frame does not use the entire image circle of a lens. If you take a picture from the same distance with the same focal length on FF and on MFT, the MFT sensor only captures a section of the FF image, so it looks like a crop. The image section of a 100mm lens on MFT corresponds to that of a 200mm lens on FF, and therefore one speaks of a focal-length extension. However, this expression is only partially correct, as not all effects of a longer focal length appear. This effect is especially useful with telephoto lenses/long focal lengths; on the other hand, it requires very short real focal lengths for (ultra) wide-angle shots.
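The "same image section" claim can be checked via the diagonal angle of view. A small sketch under the simplification above (rectilinear lens, diagonals rounded; the function name is mine):

```python
import math

def angle_of_view_deg(focal_mm, sensor_diag_mm):
    """Diagonal angle of view of a rectilinear lens, in degrees."""
    return math.degrees(2 * math.atan(sensor_diag_mm / (2 * focal_mm)))

FF_DIAG  = 43.27   # 36 x 24 mm
MFT_DIAG = 21.64   # 17.3 x 13 mm (crop ~2)

aov_ff  = angle_of_view_deg(200, FF_DIAG)   # 200 mm on full frame
aov_mft = angle_of_view_deg(100, MFT_DIAG)  # 100 mm on MFT

print(f"200 mm on FF:  {aov_ff:.2f} deg")
print(f"100 mm on MFT: {aov_mft:.2f} deg")
```

Both come out at about 12.35 degrees, which is exactly the "100mm looks like 200mm" effect described above.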
Depth of field
The larger the sensor, the shallower the depth of field at the same aperture and with the same image section. This is because the absolute depth of field increases with subject distance: at f/2.8 I have much more depth of field at 100m from the subject than at 1m. If the sensor gets smaller, I have to stand further from the subject at the same focal length to keep the same framing, and therefore the depth of field increases. This can be an advantage in both directions: if I want more depth of field, I have to stop down less with a smaller sensor than with a larger one; if I want shallower depth of field, larger sensors have the advantage. The lens sets the limits: how far can I open the aperture, and at which aperture does diffraction blur begin?
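As a rough illustration, assuming the common near-field approximation DoF = 2 N c s^2 / f^2 (valid well below the hyperfocal distance) and a circle of confusion that scales with the crop factor, the depth of field at the same framing and f-number scales with the crop factor. The numbers here are illustrative, not measured:

```python
def dof_mm(f_number, coc_mm, focal_mm, dist_mm):
    """Approximate depth of field in mm (valid well below the hyperfocal distance)."""
    return 2 * f_number * coc_mm * dist_mm**2 / focal_mm**2

# Same framing at 2 m and f/2.8: FF with a 50 mm lens vs MFT with 25 mm.
# The circle of confusion also scales with the crop (0.030 vs 0.015 mm).
dof_ff  = dof_mm(2.8, 0.030, 50, 2000)
dof_mft = dof_mm(2.8, 0.015, 25, 2000)

print(f"FF : {dof_ff:.0f} mm")   # ~269 mm
print(f"MFT: {dof_mft:.0f} mm")  # twice as deep, i.e. the crop factor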
With the same number of MPs and the same sensor technology on an MFT and FF sensor, a single pixel on the MFT sensor is only a quarter the size of a pixel on an FF sensor. The smaller the photocell, the less light or photons it captures with otherwise unchanged parameters. In order to achieve the same apparent brightness, the result must be amplified by a factor of four. However, the inaccuracies and false information are also amplified and this effect is perceived as image noise. Thus the gain at ISO800 on MFT corresponds to the gain at ISO3200 on FF.
To illustrate this, a very simplified example: my light source delivers 1000 photons per pixel on the FF sensor, and these 1000 photons are enough, say, for that pixel to read 50% grey. On the MFT sensor there would be 250 photons, since the photocell is only a quarter the size. For it to reach 50% grey as well, the signal has to be amplified four times as much. Unfortunately, photocells are not perfect; they deliver, for example, a readout error corresponding to the brightness of up to 25 photons, and this readout error is amplified too. With the FF sensor I have an error of up to 2.5%; with the MFT sensor, up to 10%.
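The worked example above is just a ratio, so it is easy to sanity-check in code (same illustrative numbers as in the text, nothing measured):

```python
# Same scene, same sensor tech: the FF pixel collects 4x the photons
# of the quarter-size MFT pixel; the readout error is the same for both.
PHOTONS_FF  = 1000
PHOTONS_MFT = PHOTONS_FF // 4      # quarter-size photocell -> 250 photons
READ_ERROR  = 25                   # readout error, in "photon equivalents"

def relative_error(signal_photons, read_error):
    """Readout error relative to the signal, in percent."""
    return 100 * read_error / signal_photons

print(f"FF : {relative_error(PHOTONS_FF, READ_ERROR):.1f}% error")   # 2.5%
print(f"MFT: {relative_error(PHOTONS_MFT, READ_ERROR):.1f}% error")  # 10.0%
```

The fourfold amplification turns the same absolute readout error into four times the relative error, which is exactly what is perceived as more noise.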
In practice, however, two sensors are never so identical that one can assume exactly a factor of four. The photocells do not make up 100% of the sensor area; there are very narrow gaps between the cells. The effect of these gaps is significantly reduced by so-called microlenses, which collect the light over a larger surface and concentrate it onto the photocell like a magnifying glass.

On most of today's sensors, a single photocell perceives only one color (red, green or blue). Strictly speaking, the photocell only perceives brightness, and a color filter in front of it lets only one of the three primary colors pass. This is the Bayer matrix of a Bayer sensor: of four pixels, one perceives red, one blue, and two green. Green was doubled because the human eye is most sensitive to green tones. That gives us two layers in front of the sensor: microlenses and color filters. Most sensors have two more: an IR-blocking filter that blocks infrared light, and a low-pass or anti-aliasing filter that prevents the moiré effect caused by digital undersampling.

So there are already two to four layers between the last element of the lens and the photocell. These can have slightly different transmission values, and it can therefore easily happen that, for example, with an FF sensor only 90% of the light arrives at the photocell while with the MFT sensor it is 95%. Then the gain factor is no longer 4, but only about 3.8.
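With the example transmission values given above (90% for the FF stack, 95% for the MFT stack; both are illustrative numbers, not data for real sensors), the effective gain factor follows directly:

```python
# Effective gain factor when the two filter stacks transmit differently.
AREA_FACTOR = 4.0          # FF pixel area / MFT pixel area, same MP count
T_FF, T_MFT = 0.90, 0.95   # example stack transmissions from the text

effective_gain = AREA_FACTOR * T_FF / T_MFT
print(f"effective gain factor = {effective_gain:.2f}")  # ~3.79 instead of 4
```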
Although the relevant ISO standard (ISO 12232 for digital cameras) specifies which image brightness should be achieved under which lighting conditions, there can be differences between individual camera models, completely independent of the sensor used. For example, what manufacturer A calls ISO 100 might correspond to what manufacturer B would need ISO 110 to reach - but both label it ISO 100, even though manufacturer B amplifies the signal, and thus the readout error, by 10% more. Lenses also have different transmission values: a lens with many elements has a lower overall transmission than a lens with fewer elements of the same glass quality.
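The "more elements, less transmission" point is multiplicative: the total transmission is roughly the product of the per-element transmissions. A tiny sketch with an assumed (not real) per-element value of 0.99:

```python
# Total lens transmission as the product of per-element transmissions.
T_ELEMENT = 0.99  # assumed transmission per element, for illustration only

def total_transmission(n_elements, t=T_ELEMENT):
    """Overall transmission of a lens with n identical elements."""
    return t ** n_elements

print(f" 7 elements: {total_transmission(7):.3f}")   # ~0.932
print(f"16 elements: {total_transmission(16):.3f}")  # ~0.851
```

So even with very good glass, a complex zoom design gives away noticeably more light than a simple prime.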
Another physical effect is heat. The warmer a photocell, the higher the background noise, and the larger a sensor, the more heat it generates. So if the housings of an MFT camera and an FF camera can dissipate the same thermal energy, and the rest of the hardware generates the same amount of heat, it will be warmer inside the FF camera. This matters especially for long exposures, live view and video recording, where the sensor is active for longer and thus generates more heat.
The physical noise is reduced by the camera manufacturers in the image processor. The better the algorithms, the more image noise can be removed before you lose sharpness or detail. Even RAW files have at least gone through a demosaicing process, and even within the same tool (e.g. Adobe Camera Raw) this differs slightly depending on the manufacturer/camera model in order to get the best result.
And all this only applies as long as the sensors have the same number of MP. As a rule, larger sensors also have more MP, and the factors shrink accordingly. If an MFT sensor has 20MP and an FF sensor 40MP, we no longer have the theoretical factor of 4, but only a factor of 2 at pixel level.
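That pixel-level factor of 2 follows from the per-pixel areas, which you can verify with the sensor dimensions given earlier (the 20MP/40MP counts are the example from the text):

```python
# Per-pixel area ratio when the MP counts differ: 40 MP FF vs 20 MP MFT.
AREA_FF_MM2  = 36.0 * 24.0   # 864 mm^2
AREA_MFT_MM2 = 17.3 * 13.0   # ~225 mm^2

def pixel_area_um2(sensor_area_mm2, megapixels):
    """Average area of one photocell in square micrometres."""
    return sensor_area_mm2 * 1e6 / (megapixels * 1e6)

ratio = pixel_area_um2(AREA_FF_MM2, 40) / pixel_area_um2(AREA_MFT_MM2, 20)
print(f"FF pixel / MFT pixel area = {ratio:.2f}")  # ~1.92, i.e. roughly 2
```

The sensor area differs by a factor of ~3.8, the pixel count by a factor of 2, so per pixel only a factor of ~1.9 remains.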
I will write something on this point in the next few days; I have to do some research myself. But as a starting point: the dynamic range of a photocell depends strongly on the background noise and on the signal amplification. The lower the background noise, the higher the possible dynamic range. A higher dynamic range, however, apparently needs a certain amount of signal amplification (here I still have to research more). That's why some cameras offer less dynamic range at their lowest extended ISO levels (e.g. ISO 50 on Canon FFs) than at slightly higher ISO values.
Since in the last thread the question arose what I mean by "light circle", here is a short explanation - the usual technical term is "image circle". A lens creates an illuminated, roughly circular area on the image plane (where the sensor sits). The lens doesn't care how big the sensor behind it is. A lens for full frame has to illuminate the full-frame sensor completely; a lens for a smaller sensor only has to illuminate that sensor, which simplifies the lens construction. Here is a picture that shows what I mean; I have drawn in the approximate sensor sizes MFT, APS-C and FF.