I got my iPad 3, woo-hoo.
A cool little change is that you can use the iPad while it's being synced. Very good.
Here's an interesting thing: I can read fine text better on my iPad 2 if I swap my regular glasses for my reading glasses. So you would think that with my regular glasses, the higher resolution of the iPad 3 wouldn't look any different, no?
But this is not so: even without the reading glasses, I can easily see that the screen is better. (How much it benefits me, I won't speculate on yet.)
This fits with a datum I got from photography expert Ctein (pronounced "K-tein"): unless a lens is hopelessly out-matched, you will get higher resolution (more detail) if you get more megapixels in a camera, even if the lens has less resolution than the sensor.
Or in other words, you would think that either the lens or the sensor has the lowest resolution, and only improving that one would improve the results. But that is not so: improving either one will improve detail, unless the gap between them is really huge.
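A common rule-of-thumb model (my own illustration, not something Ctein is quoted on here) makes this concrete: treat lens and sensor as Gaussian blurs. Blur widths add in quadrature under convolution, so resolutions combine as 1/R² + 1/R², and improving either component always raises the total. The numbers below are made up.

```python
# Sketch of the quadrature rule: 1/R_sys^2 = 1/R_lens^2 + 1/R_sensor^2.
# All figures (in line pairs per mm) are invented for illustration.
def system_resolution(r_lens, r_sensor):
    return (r_lens ** -2 + r_sensor ** -2) ** -0.5

base          = system_resolution(50.0, 40.0)  # sensor is the weaker link
better_sensor = system_resolution(50.0, 60.0)  # upgrade the sensor...
better_lens   = system_resolution(60.0, 40.0)  # ...or upgrade the lens

# Either upgrade improves the system, even the one to the stronger component.
print(base, better_sensor, better_lens)
```

Note that the system resolution always ends up below the weaker component, which is why there is no hard cutoff where extra megapixels stop helping.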
I find this very interesting.
3 comments:
A higher-resolution sensor samples more points; if the lens employed has a large point-spread function, it is still possible to recover more information by deconvolution when there is more sampling.
Using a deliberately structured point-spread function by the use of a phase or aperture mask results in imagery which in raw form is more blurred but can be completely deconvolved to obtain improved depth of field.
This is a specific manifestation of the more general field of computational photography.
In the absence of explicit computation one assumes that the visual cortex performs the function. That or lensing from the Reality Distortion Field.
This is increasingly important as imaging chip resolution is exceeding lens resolution in many cases.
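The commenter's sampling point can be shown with a toy 1-D example (all numbers below are my own made-up assumptions, not from the post): the same lens blur, read out by a coarse and a fine "sensor". The finer sensor separates two point sources that the coarse one merges into one blob.

```python
import numpy as np

# A 1-D "scene" with two point sources, blurred by a Gaussian lens PSF.
N = 1200
scene = np.zeros(N)
scene[580] = scene[620] = 1.0            # two point sources, 40 units apart

x = np.arange(N) - N // 2
sigma = 15.0                             # lens blur width (same lens both times)
psf = np.exp(-x**2 / (2 * sigma**2))
psf /= psf.sum()
blurred = np.convolve(scene, psf, mode="same")

def sample(signal, pitch):
    """Crude sensor model: average the signal over pixel-sized bins."""
    n = len(signal) // pitch * pitch
    return signal[:n].reshape(-1, pitch).mean(axis=1)

coarse = sample(blurred, 60)             # low-megapixel sensor
fine = sample(blurred, 15)               # high-megapixel sensor

def count_peaks(pixels):
    """Count strict local maxima in the sampled image."""
    return sum(1 for i in range(1, len(pixels) - 1)
               if pixels[i] > pixels[i - 1] and pixels[i] > pixels[i + 1])

# The fine sensor sees two separate peaks; the coarse one sees only one.
print(count_peaks(coarse), count_peaks(fine))
```

With deconvolution (a known PSF inverted in the frequency domain) the gap widens further, but even plain denser sampling preserves detail the coarse grid throws away.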
Grats, Eo! Look how everyone else chose to get theirs!! NUTS!
Looking forward to hearing more about your findings with it! :-D