I understand, but I'm not sure you do. "Geometric super-resolution imaging" uses a specialized reconstruction algorithm - and Photoshop's auto-align isn't it.
Furthermore, the technique assumes that the image being enhanced was sampled at a lower resolution but still contains high-frequency information that can be reconstructed. Every paper I have read on it also states that it is not suitable for images with motion artifacts or misalignments.
That's not to say the procedure you put forth has no merit, just that it isn't quite up to the $10 fancy name you applied to it. The primary purpose of image stacking is noise reduction - not enhancing micro-detail.
You may as well get one of the numerous detail enhancer plugins and have at it.
My original quibble is with the assertion that more megapixels = a good thing. Did you bother to read the link I provided to the calculator, and its explanation of why more pixels aren't necessary? One photographer shot a Billboard magazine cover with his friggin iPhone.
A photographer used an iPhone 7 Plus to take this stunning 'Billboard' magazine cover
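The arithmetic behind that isn't mysterious. Here's a rough sketch (my own numbers, not the linked calculator's - the 300 PPI and 10 PPI figures are just common rules of thumb for close viewing versus billboard distances):

```python
def megapixels_needed(width_in: float, height_in: float, ppi: float) -> float:
    """Megapixels required to print at a given size and output PPI."""
    return (width_in * ppi) * (height_in * ppi) / 1e6

# Magazine cover viewed at arm's length (~300 PPI rule of thumb)
print(megapixels_needed(8.5, 11, 300))          # ~8.4 MP - a 12 MP iPhone covers it

# 48 x 14 ft billboard viewed from the road (~10 PPI is plenty at that distance)
print(megapixels_needed(48 * 12, 14 * 12, 10))  # ~9.7 MP
```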
We won't even go into the fact that most images are never printed large - instead they're shown on forums such as this one and are lucky to have any side longer than 1200 pixels. Most folks who get wrapped up in pixel counts do so for bragging rights, not practicality. But I don't care per se - I was merely trying to help, since some folks suffer under the misconception that more pixels somehow make an image better.
PS: To jog your memory, here's an excerpt from the original post:
You are still dissembling and wriggling around trying to avoid the point of the original discussion.
Yes - advanced super-resolution methods use specialized reconstruction algorithms, but those are not essential for combining spatial information from multiple, shifted images.
Yes - the method does not work well for images with movement - no one said that it did, and that very issue was mentioned earlier.
Your "$10 fancy name" comment is asinine since the name is correct.
Stacking, depending on how it is used, can both reduce noise and enhance detail. I'll assume (dangerous, I know) that you understand how the noise-reduction method works. The enhanced detail comes, in engineered systems, from inter-image sub-pixel shifts that are applied systematically. In this approach the sensor shift arises instead from the stochastic variation in handheld (or UAV-held) sensor position, which is simply a poor man's equivalent. You do not have the privilege of defining the primary purpose of anyone else's use of stacking.
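On the noise-reduction half, since you raised it: averaging N frames knocks uncorrelated noise down by roughly sqrt(N). A minimal toy demonstration (synthetic flat frame and noise level of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 128.0)                           # flat grey "scene"
stack = [scene + rng.normal(0, 10, scene.shape) for _ in range(16)]

mean_frame = np.mean(stack, axis=0)
print(np.std(stack[0] - scene))    # per-frame noise: ~10
print(np.std(mean_frame - scene))  # stacked noise: ~10 / sqrt(16) = ~2.5
```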
No - this is not at all equivalent to detail-enhancer plugins, and that suggestion clearly demonstrates that you have still missed the entire point of the method, which is that multiple images contain more information than a single image. Detail enhancers take the data available in one image and apply various global and local adjustments, mostly to sharpen edges. They are making assumptions about detail. Sub-pixel image localization, as its name implies, takes advantage of the increased quantity of data in a set of non-identical images. What don't you understand about that? Are you disputing that there really are more data (it's trivially obvious that there are), disputing that this method can extract some of those data (e.g. because PS is not a "professional tool"), or simply not thinking it through at all? I don't see any other explanations.
If all the images in a stack were identical (no object motion, no sensor shift, no random sensor noise) then this would not work - the stack average would identically equal each image. In practice the random noise is not zero, so such a stack could at least reduce that. But if the sensor is moved slightly between images, different spatial data are captured in each image, and multiple shifted images contain more spatial data than any one image of the set. That permits both averaging out of systematic noise and localization of point (or small) light sources to better than the original pixel resolution, but it requires upsampling the images first so there is somewhere to store the sub-pixel localization. Upsampling, aligning and averaging is one way to access the increased data, and the more images that are available, the higher the theoretical resolution achievable.
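If you want to see the principle without arguing about Photoshop, here is a toy numpy/scipy simulation under my own assumptions: synthetic point sources, shifts that are known exactly (in the handheld workflow, auto-align has to estimate them), and block-averaging as a crude stand-in for the sensor:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Synthetic high-res "scene": sparse bright points, slightly blurred
scene = np.zeros((256, 256))
scene[rng.integers(16, 240, 40), rng.integers(16, 240, 40)] = 255.0
scene = ndimage.gaussian_filter(scene, 1.0)

def capture(hi, dy, dx, factor=4):
    """Shift the scene by up to one sensor pixel, then block-average
    down by `factor` - a stand-in for one handheld exposure."""
    shifted = ndimage.shift(hi, (dy, dx), order=1)
    h, w = shifted.shape
    return shifted.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Random sub-pixel shifts play the role of handheld camera wobble
shifts = [(rng.uniform(0, 4), rng.uniform(0, 4)) for _ in range(16)]
frames = [capture(scene, dy, dx) for dy, dx in shifts]

# Upsample each frame, undo its shift (known here; estimated by
# auto-align in the Photoshop workflow), then average the stack
upsampled = [ndimage.zoom(f, 4, order=3) for f in frames]
aligned = [ndimage.shift(u, (-dy, -dx), order=1)
           for u, (dy, dx) in zip(upsampled, shifts)]
result = np.mean(aligned, axis=0)

# `result` localizes the point sources better than any single upsampled
# frame does - and the more frames, the better the localization
```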
And while you have consistently added the comment that no one needs higher-resolution images, that was not your original assertion (post #22), which was that this method cannot increase resolution without image degradation - which it most certainly can.