

Image Analysis, Aided by Artists

Hank Hogan

Artists don’t see objects and scenes the way a camera lens does, and understanding how the best artists work could be useful in computerized image analysis, according to preliminary research by Charles M. Falco, a professor of optical sciences at the University of Arizona in Tucson.

From 1903 to 1905, Claude Monet completed a series of paintings, each approximately 82 × 91 cm, of the Houses of Parliament in London. Here, the skyline in each work of art is compared with the actual skyline as recorded by a camera from the same vantage point. For the most part, the painted skyline follows the camera closely except for the main tower, which Monet painted both higher and narrower. Mimicking this effect may help transform images from a camera into more easily understood information. Courtesy of Charles M. Falco, University of Arizona.

When drawing or painting, for example, artists can capture the essence of an object with a few brush or pen strokes — a very low-resolution rendering that is nonetheless recognizable to observers. The best artists can also automatically account for how the viewer's eye and brain work, thereby producing pictures that are better representations of a scene than photographs are.

For Falco, research into how painters work grew out of collaboration with the artist David Hockney, who had made a painting of the view from his studio door. At the scene, the eye seems to quickly and automatically rove across a wide area, and the brain assembles this accumulated information into a mental picture. Hockney captured elements gained through this process and put them down on a canvas. Falco realized that Hockney’s painting captured the setting better than a photograph could — even one acquired using a semi-wide-angle lens.

Falco began looking at how artists with the highest visual skills — such as the impressionist Claude Monet — represented scenes. For example, he compared a series of paintings completed by Monet of the Houses of Parliament in London with actual photographs taken from the same location (see figure).

Falco found that Monet had done a very good job of capturing the outline of the buildings, except for the tallest tower. In the nine paintings, the artist made the tower on average 15.5 percent too tall and 18.1 percent too narrow. Given the accuracy of the representation of the other buildings, Falco concluded that Monet deliberately exaggerated the tower’s dimensions, thereby representing — in the artist’s perception — the original scene better than a straight rendering would have.

Examining paintings of buildings by other artists, Falco found similar distortions, with the most central features exaggerated. Paintings of Big Ben, for instance, might show it significantly taller than it would appear in a photograph.

As a result of continuing analysis and research, it might be possible to develop distortion algorithms based on artists' works. A computer could then run these on images captured by a camera, transforming them into something more easily grasped by onlookers. "It is certainly not easy, but I have good reason to believe it to be doable," Falco said.
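To make the idea concrete, one very simple form such a distortion algorithm might take is to resample a designated salient region of an image so that it appears taller and narrower, mirroring the average exaggeration Falco measured in Monet's towers (about 15.5 percent taller, 18.1 percent too narrow). The sketch below is a hypothetical illustration, not Falco's method; the function name, the nearest-neighbor resampling, and the base-anchored placement are all assumptions.

```python
import numpy as np

def exaggerate_region(image, box, scale_h=1.155, scale_w=0.819):
    """Resample a rectangular region of a grayscale image so it appears
    taller and narrower, loosely mimicking the average distortion Falco
    measured in Monet's paintings of the tower (~15.5% taller, ~18.1%
    narrower).  Hypothetical illustration only, not Falco's algorithm.

    box is (top, left, height, width); the region stays anchored at its
    base and horizontally centered, using nearest-neighbor resampling.
    """
    top, left, h, w = box
    region = image[top:top + h, left:left + w]

    # Target size after exaggeration.
    new_h = int(round(h * scale_h))
    new_w = int(round(w * scale_w))

    # Nearest-neighbor index maps from target pixels back to source pixels.
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    resized = region[rows[:, None], cols]

    out = image.copy()
    base = top + h                       # keep the feature's base fixed
    new_top = max(base - new_h, 0)       # grow upward, clip at image top
    new_left = left + (w - new_w) // 2   # keep it horizontally centered
    out[new_top:base, new_left:new_left + new_w] = resized[-(base - new_top):]
    return out
```

A real system would of course need to detect which features are salient before exaggerating them, which is the hard part Falco alludes to; this sketch only covers the geometric distortion step.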

One benefit could be better training. Rather than giving someone a high-resolution image with instructions on what to look for, it might be possible to create a low-resolution image that has been processed to exaggerate certain elements that would increase recognition and decrease training time.

Another use might be in vehicular head-up displays, where including an appropriately distorted representation of a scene may enable the operator to make a quick identification of important features. “This second example would be a significant step along a path toward automatic recognition of significant features in scenes without the need for human input,” Falco said.

International Conference on Information Sciences, Signal Processing and Its Applications, Feb. 12-15, 2007.
