Special Issue Article
Novel Approach for Image Decomposition from Multiple Views
P. Dineshkumar1, R. Kamalakannan2, V. Rajakumareswaran3
Intrinsic image decomposition aims to separate an image into its reflectance and illumination components in order to facilitate further analysis or manipulation. This paper presents a system that estimates the shading and reflectance intrinsic images from a single real image, given the direction of the dominant illumination in the scene. Although some properties of real-world scenes, such as occlusion edges, are not modeled directly, the system produces satisfactory image decompositions. The basic strategy of our system is to gather local evidence from color and intensity patterns in the image and then propagate this evidence to other areas of the image. The most computationally intensive steps in recovering the shading and reflectance images are computing the local evidence and running the Generalized Belief Propagation algorithm. One of the primary limitations of this work was the use of synthetic training data, which limited both the performance of the system and the range of algorithms, such as the pseudo-inverse reconstruction process, available for designing the classifiers.

We then introduce an optimization method to estimate sun visibility over the point cloud. This algorithm compensates for the lack of accurate geometry and allows the extraction of precise shadows in the final image. Finally, we propagate the information computed over the sparse point cloud to every pixel in the photograph using image-guided propagation. Our propagation not only separates reflectance from illumination, but also decomposes the illumination into sun, sky, and indirect layers. We expect that performance would be improved by training on a set of intrinsic images gathered from real data.
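To illustrate the pseudo-inverse reconstruction step, the sketch below recovers a log-shading image from image derivatives that have been labeled as shading or reflectance, by solving the resulting finite-difference system in the least-squares (pseudo-inverse) sense. This is a minimal toy version: the masks standing in for the derivative classifier, the `decompose` function name, and the dense solver are all assumptions for illustration; the actual system classifies derivatives from local color and intensity evidence refined by Generalized Belief Propagation, and would use a sparse solver.

```python
import numpy as np

def decompose(log_img, reflectance_mask_x, reflectance_mask_y):
    """Split a log image into log-shading and log-reflectance.

    Derivatives flagged in the masks are attributed to reflectance
    changes; the rest are treated as shading. The shading image is
    reconstructed from its labeled derivatives via least squares
    (a pseudo-inverse reconstruction).
    """
    h, w = log_img.shape
    dx = np.diff(log_img, axis=1)          # horizontal derivatives, (h, w-1)
    dy = np.diff(log_img, axis=0)          # vertical derivatives,   (h-1, w)
    # Zero out derivatives attributed to reflectance edges.
    sx = np.where(reflectance_mask_x, 0.0, dx)
    sy = np.where(reflectance_mask_y, 0.0, dy)

    n = h * w
    idx = lambda y, x: y * w + x           # flatten pixel coordinates
    rows, b = [], []
    # One equation per derivative: s[y,x+1] - s[y,x] = sx[y,x], etc.
    for y in range(h):
        for x in range(w - 1):
            r = np.zeros(n); r[idx(y, x + 1)] = 1.0; r[idx(y, x)] = -1.0
            rows.append(r); b.append(sx[y, x])
    for y in range(h - 1):
        for x in range(w):
            r = np.zeros(n); r[idx(y + 1, x)] = 1.0; r[idx(y, x)] = -1.0
            rows.append(r); b.append(sy[y, x])
    # Anchor the mean to zero; shading is only recoverable up to a constant.
    r = np.full(n, 1.0 / n)
    rows.append(r); b.append(0.0)

    A = np.vstack(rows)
    s, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    log_shading = s.reshape(h, w)
    log_reflectance = log_img - log_shading
    return log_shading, log_reflectance
```

On a toy image whose reflectance is a step edge and whose shading is a smooth ramp, the recovered layers match the true ones up to the unavoidable additive constant. In practice the dense least-squares system above would be replaced by a sparse formulation.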