In reply to Dan Arkle:
> ..., using 50mp and down sampling to 12 will increase image quality. Even if a phone lens struggles to resolve 50mp
Downsampling 50->12 will look better than the 50Mpx original, but not as good as if it had been a 12Mpx sensor in the first place. Unless the sensor implements analogue 'binning' - which I've never seen in a CMOS sensor - each of the 50Mpx photosites is read out separately, so each readout contributes its own read noise to the final image.
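To put rough numbers on that, here's a minimal Python sketch - made-up electron counts, and shot noise deliberately left out to isolate the read-noise effect - comparing four summed readouts against one readout of a native pixel covering the same area:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative numbers only, not from any real sensor:
signal_small = 100.0   # electrons collected by one small (50Mpx-class) photosite
read_noise = 2.0       # electrons RMS added by each readout
trials = 100_000

# Digital 4:1 downsample: four small pixels, four readouts, summed in software.
# Independent read noise adds in quadrature, so the sum carries sqrt(4) = 2x
# the per-readout noise.
small = signal_small + rng.normal(0.0, read_noise, (trials, 4))
downsampled = small.sum(axis=1)

# Native 12Mpx pixel: same total light (4x the area), but a single readout.
native = 4 * signal_small + rng.normal(0.0, read_noise, trials)

print(f"downsampled read noise:  {downsampled.std():.2f} e-")  # ~4 e-
print(f"native pixel read noise: {native.std():.2f} e-")       # ~2 e-
```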
> For example, one of the pixels might be random noise, and in Pixel cameras, the algorithm should then be able to eliminate it and use data from the other 3 pixels.
Not really. By definition, you can't tell which pixel's deviation is random noise - pixel to pixel, it's indistinguishable from real signal. You could try this with fixed-pattern noise, but that tends not to be a problem on modern sensors.
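A quick Monte Carlo makes the point (hypothetical pixel values, Gaussian noise): trying to 'eliminate' the most deviant of four pixels and average the rest actually does worse than a plain average, because with genuinely random noise the apparent outlier is as informative as any other sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four pixels see the same true value plus independent random noise.
true_value = 100.0
sigma = 5.0
samples = true_value + rng.normal(0.0, sigma, (100_000, 4))

# Strategy A: plain average of all four pixels.
mean4 = samples.mean(axis=1)

# Strategy B: discard the pixel furthest from the group mean, then average
# the remaining three - i.e. try to "eliminate the random one".
dev = np.abs(samples - samples.mean(axis=1, keepdims=True))
worst = dev.argmax(axis=1)
keep = np.ones_like(samples, dtype=bool)
keep[np.arange(len(samples)), worst] = False
mean3 = samples[keep].reshape(-1, 3).mean(axis=1)

rms = lambda est: np.sqrt(((est - true_value) ** 2).mean())
print(f"RMS error, average of 4:        {rms(mean4):.3f}")  # ~2.5
print(f"RMS error, reject-then-average: {rms(mean3):.3f}")  # slightly worse
```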
> Computational photography like this is the future. More data is good, and the extra data will also help with digital zoom.
In imaging, the data is the photons. You can't make more photons - more data - computationally; you can only collect more, with a bigger lens or (pedantically) a more efficient sensor that converts more of the photons it receives into electrons.
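The arithmetic behind that, as a toy shot-noise-limited model (made-up flux and QE figures): SNR goes as the square root of the electrons you detect, so more light or better quantum efficiency is the only way up.

```python
import math

def shot_noise_snr(photons_per_um2, pixel_area_um2, qe):
    """For shot-noise-limited imaging, N detected electrons give SNR = sqrt(N).
    Toy model only: ignores read noise, dark current, etc."""
    electrons = photons_per_um2 * pixel_area_um2 * qe
    return math.sqrt(electrons)

flux = 500.0  # hypothetical photons/um^2 reaching the sensor during the exposure
print(f"0.7um-pitch pixel, QE 80%:  SNR = {shot_noise_snr(flux, 0.7**2, 0.80):.1f}")
print(f"same pixel, perfect QE:     SNR = {shot_noise_snr(flux, 0.7**2, 1.00):.1f}")
print(f"4x the light (bigger lens): SNR = {shot_noise_snr(4 * flux, 0.7**2, 0.80):.1f}")
```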
Computational imaging is mostly like 'fake news' - you're just massaging the raw data you have into something that the consumer will 'like' more.
> Pixel cameras also take multiple shots, with different exposures each capture, and uses these to ensure shadow and highlight detail is retained (like we used to have to do manually).
Yes, you can do that, but as with the 50->12 downsampling, its main use is to compensate for the poor dynamic range of the tiny photosites in a 50Mpx sensor.
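For what exposure bracketing actually buys you, here's a toy merge in Python - not any phone's actual pipeline, just the basic idea of rescaling linear frames to a common exposure and ignoring clipped pixels:

```python
import numpy as np

def merge_exposures(frames, exposure_times, full_well=1.0):
    """Toy HDR merge: average linear frames scaled to a common exposure,
    skipping saturated pixels. A sketch of the idea only."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight = np.zeros_like(acc)
    for frame, t in zip(frames, exposure_times):
        valid = frame < 0.95 * full_well         # drop clipped highlights
        acc += np.where(valid, frame / t, 0.0)   # rescale to unit exposure
        weight += valid
    return acc / np.maximum(weight, 1)

# Hypothetical two-shot bracket: the long exposure clips the highlights,
# the short one keeps them but is noisier in the shadows.
scene = np.array([0.02, 0.2, 2.0])           # true linear radiance
long_exp = np.clip(scene * 1.0, 0.0, 1.0)    # t = 1.0: bright pixel clips
short_exp = np.clip(scene * 0.25, 0.0, 1.0)  # t = 0.25: highlights survive
print(merge_exposures([long_exp, short_exp], [1.0, 0.25]))
# -> ~[0.02, 0.2, 2.0]: detail recovered at both ends
```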
Indeed.