Assuming your math is correct,
maximum photo/video resolution is not always equal to the sensor's photosite count.
The sensor may have more photosites (not pixels) than the final maximum resolution in a couple of different ways:
1) The full number of photosites on the sensor may not be captured: there is some designed-in slop at the edges that gets cropped out. This is typically slight.
2) There may be many more photosites on the sensor than in the final image, which are processed to a lower resolution before recording.
This could happen in a couple ways:
Pixel binning, in which data from groups of adjacent photosites in the active imaging area of the sensor is combined into a single output pixel;
or resolution reduction during image processing, in which, for example, a 6K image is scaled down to 4K by the camera's image processing pipeline before recording.
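To make the binning idea concrete, here is a minimal sketch of 2x2 binning by averaging. The sensor dimensions and the use of simple averaging are assumptions for illustration, not a description of the Air 2S's actual pipeline (real cameras often bin in the analog domain or per color channel on the Bayer mosaic):

```python
import numpy as np

# Hypothetical raw sensor readout: 3648 x 5472 photosites with 12-bit values.
# These numbers are illustrative, not the Air 2S's real photosite count.
photosites = np.random.randint(0, 4096, size=(3648, 5472), dtype=np.uint16)

def bin_2x2(raw):
    """Combine each 2x2 block of photosites into one output pixel by
    averaging -- one simple form of pixel binning. Odd edge rows/columns
    are dropped, analogous to the edge crop described above."""
    h, w = raw.shape
    trimmed = raw[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

binned = bin_2x2(photosites)
print(photosites.shape, "->", binned.shape)  # (3648, 5472) -> (1824, 2736)
```

The key point is just the bookkeeping: four photosites go in, one pixel comes out, so the recorded resolution is a quarter of the photosite count even though every photosite contributed light to the image.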
I don’t know specifically what’s going on in the sensor and image processing of an Air2S. The engineering is interesting. In my opinion, if it looks good, it is good.
EDIT
My reply crossed with @anotherlab above. Re-reading the earlier thread they linked, this popped out as a major third factor:
Notation for sensor size is a whole big can of worms. Basically, the published numbers are usually just a marketing gimmick based on decades-old notation called 'optical format', dating back to when camera sensors were based on vacuum tubes…