Image segmentation is a very important step in the computerized analysis of digital images. The maxflow-mincut approach has been successfully used to obtain minimum-energy segmentations of images in many fields. Classical algorithms for maxflow in networks do not directly lend themselves to efficient parallel implementations on contemporary parallel processors. We present the results of an implementation of the Goldberg-Tarjan preflow-push algorithm on the Cray XMT-2 massively multithreaded supercomputer. This machine has hardware support for 128 threads in each physical processor, a uniformly accessible shared memory of up to 4 TB and hardware synchronization for each 64-bit word. It is thus well suited to the parallelization of graph-theoretic algorithms such as preflow-push. We describe the implementation of the preflow-push code on the XMT-2 and present the results of timing experiments on a series of synthetically generated as well as real images. Our results indicate very good performance on large images and pave the way for practical applications of this machine architecture for image analysis in a production setting. The largest images we have run are 32000 × 32000 pixels in size, well beyond the largest previously reported in the literature.

The automated analysis of digital images is an important problem in many fields such as image understanding and computer vision. In the biomedical field, for example, developments in imaging technologies have led to a flood of data in the form of digitized pathology slides, X-rays, CAT scans, MRI scans, etc. The analysis of these digitized images by human experts is time consuming, expensive and, more importantly, prone to intra- and inter-reader variability. In recent years there has been great success in the development of computer-assisted image analysis systems for both radiological and pathological images. These systems learn from human experts, the current knowledge base and large databases. They are not intended to replace human readers but to provide feedback to the readers to minimize errors in both the detection and diagnosis stages. Automated analysis, however, requires very heavy computation. At the same time, the increasing resolution of microscopic image scanners, satellite/aerial sensors and gigapixel cameras is yielding very large images that are increasingly difficult to analyze on conventional uniprocessor machines. There has been some success in partitioning images over the processors of a distributed-memory system and carrying out operations on each image tile with intervening communication steps.
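The parallel XMT-2 implementation described above is not reproduced here, but the underlying Goldberg-Tarjan method can be illustrated with a minimal sequential sketch. The following is a FIFO push-relabel (preflow-push) max-flow routine on a capacity dictionary; the function name, graph representation, and example graph are illustrative choices, not taken from the paper.

```python
from collections import deque

def preflow_push_maxflow(capacity, source, sink):
    """FIFO push-relabel max-flow.

    capacity: dict of dicts, capacity[u][v] = capacity of edge u -> v.
    Returns the value of a maximum source -> sink flow.
    """
    nodes = set(capacity)
    for u in capacity:
        nodes |= set(capacity[u])
    n = len(nodes)

    # Residual capacities, with zero-capacity reverse edges added.
    cap = {u: {} for u in nodes}
    for u in capacity:
        for v, c in capacity[u].items():
            cap[u][v] = cap[u].get(v, 0) + c
            cap[v].setdefault(u, 0)

    height = {u: 0 for u in nodes}
    excess = {u: 0 for u in nodes}
    height[source] = n  # source starts at height n

    # Saturate every edge leaving the source to create the initial preflow.
    queue = deque()
    for v, c in list(cap[source].items()):
        if c > 0:
            cap[source][v] -= c
            cap[v][source] += c
            excess[v] += c
            excess[source] -= c
            if v != sink:
                queue.append(v)

    # Discharge active nodes in FIFO order.
    while queue:
        u = queue[0]
        # Push along admissible edges (residual capacity, height drop of 1).
        for v in cap[u]:
            if excess[u] == 0:
                break
            if cap[u][v] > 0 and height[u] == height[v] + 1:
                delta = min(excess[u], cap[u][v])
                cap[u][v] -= delta
                cap[v][u] += delta
                excess[u] -= delta
                excess[v] += delta
                if v not in (source, sink) and excess[v] == delta:
                    queue.append(v)  # v just became active
        if excess[u] > 0:
            # Relabel: lift u just above its lowest residual neighbor.
            height[u] = 1 + min(height[v] for v in cap[u] if cap[u][v] > 0)
        else:
            queue.popleft()

    return excess[sink]

# Small illustrative network (max flow = 5).
flow = preflow_push_maxflow(
    {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}}, 's', 't')
```

The local, label-driven push and relabel operations are what make the algorithm attractive on a massively multithreaded machine: many nodes can be discharged concurrently provided the per-edge updates are synchronized, which is the role played by the XMT-2's per-word hardware synchronization.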