Editor’s note: Precision and accuracy are actively discussed topics in business applications of image analysis, since in some cases the cost of error is simply too high. Read on to learn about recent advances in the field, and reach out to us to develop accurate custom image-processing software for your purposes.
Edgar Lobaton, Assistant Professor of Electrical and Computer Engineering at North Carolina State University, and his Ph.D. student Qian Ge have developed a novel segmentation technique that improves the accuracy of image analysis algorithms used in medicine and manufacturing. By making these algorithms more reliable, the method has vast potential for the computer vision industry.
Segmentation, and why it matters
How do we pick out relevant objects from the background of an image? Our brain seems to separate everything we see into meaningful regions instantaneously, and we hardly notice how the process unfolds. However, if we want a digital ‘brain’ to perform this task as well as people do, we have to examine the process carefully and find the best possible strategy.
In computer vision, the process of breaking an image into segments is called image segmentation, and it is one of the major challenges computer scientists must overcome to design effective image analysis algorithms.
Image segmentation makes it possible to mark important objects or regions for further analysis. Be it a tumor mass in an X-ray image, a tooth root canal, or a component of a printed circuit board, the image analysis algorithm must find the object’s borders precisely and separate the regions without adding or discarding any information. Incorrect segmentation in these cases may result in inadequate treatment or in mislabeling of vital components. That’s why segmentation accuracy must be impeccable.
But several challenges emerge here. First, the difference between regions can be determined by different factors, such as color, gray level, or texture. Second, overlapping objects can be difficult to separate. Third, shadows can create additional ‘borders’ or wash out existing ones. Illumination in general plays a major role, changing the appearance of the analyzed objects.
Different segmentation algorithms perform better in different situations. One reason is that they use different parameters to define the threshold separating one region from another: for example, a gray-level shift of a particular magnitude, or a certain change in the proportion of the RGB channels. By varying these parameters, computer scientists obtain segmentation algorithms with different specificity and sensitivity, each suited to different circumstances.
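To see how a single parameter changes an algorithm’s sensitivity, consider the simplest case: gray-level thresholding. This is a minimal illustrative sketch (the function name and toy image are our own, not from the paper), showing how two different thresholds yield two different segmentations of the same image:

```python
import numpy as np

def threshold_segment(image, threshold):
    """Label each pixel as object (1) or background (0) by gray level."""
    return (image >= threshold).astype(np.uint8)

# A toy 4x4 grayscale image: a bright "object" on a dark background.
img = np.array([
    [10,  12,  11, 13],
    [12, 200, 210, 11],
    [11, 205, 198, 12],
    [13,  12,  11, 10],
], dtype=np.uint8)

# Varying the threshold changes the algorithm's sensitivity:
loose  = threshold_segment(img, 50)   # captures the whole bright region
strict = threshold_segment(img, 202)  # keeps only the brightest pixels
```

With the loose threshold all four bright pixels are marked as object; with the strict one, only the two brightest survive. Neither choice is “correct” in general, which is exactly why combining multiple parameterizations is attractive.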
The approach proposed by the NCSU researchers combines the outputs of a range of segmentation algorithms run with different parameters. After a set of iterations with various algorithms and parameter settings, the new algorithm marks the segments that persist most consistently across the results. The technique has shown strong results (a covering score of 0.92), proving much more accurate than any of the constituent techniques alone.
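The core idea of keeping what persists across many runs can be illustrated with a deliberately simplified majority-vote sketch. To be clear, this is a stand-in of our own, not the authors’ topological-persistence machinery: each threshold produces one candidate segmentation, and only pixels that survive in enough of them are kept as consensus.

```python
import numpy as np

def consensus_mask(image, thresholds, min_votes):
    """Keep pixels that persist as 'object' across many threshold choices.

    Simplified majority-vote illustration of consensus segmentation:
    each threshold yields one candidate binary segmentation, and a pixel
    enters the consensus only if at least `min_votes` of them agree.
    """
    votes = np.zeros(image.shape, dtype=int)
    for t in thresholds:
        votes += (image >= t).astype(int)
    return votes >= min_votes

img = np.array([
    [10,  12,  11, 13],
    [12, 200, 210, 11],
    [11, 205, 198, 12],
    [13,  12,  11, 10],
], dtype=np.uint8)

# Pixels kept by most of the candidate segmentations form the consensus.
persistent = consensus_mask(img, thresholds=[50, 120, 199, 202], min_votes=3)
```

Unstable pixels that appear in only a few parameterizations (here, the pixel with value 198) drop out, while pixels that persist across most thresholds remain. The actual method measures this persistence topologically over segments rather than voting per pixel.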
Lobaton says the new segmentation method can process up to 30 frames per second, partly because most of its steps benefit from parallel computing. Combined with its high segmentation accuracy, this makes the proposed technique a viable real-world solution.
Qian Ge, Edgar J. Lobaton. Consensus-Based Image Segmentation via Topological Persistence. CVPR Workshops 2016: 1050-1057.