Joint learning and weighting of visual vocabulary for bag-of-feature based tissue classification

Pattern Recognition, Volume 46, Issue 12, December 2013, Pages 3249-3255.
Jim Jing-Yan Wang, Halima Bensmail, Xin Gao.

 

Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia

Qatar Computing Research Institute, Doha 5825, Qatar

Computational Bioscience Research Center, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia

 

 

Abstract

 

Automated classification of tissue types in Regions of Interest (ROIs) in medical images has been an important application in Computer-Aided Diagnosis (CAD). Recently, bag-of-feature methods, which treat each ROI as a set of local features, have shown their power in this field. This paper investigates two important issues of the bag-of-feature strategy for tissue classification: visual vocabulary learning and weighting, which are usually treated independently in traditional methods, neglecting the inner relationship between the visual words and their weights. To overcome this problem, we develop a novel algorithm, Joint-ViVo, which learns the vocabulary and visual word weights jointly. A unified objective function based on the large margin principle is defined for learning both the visual vocabulary and the visual word weights, and is optimized alternately in an iterative algorithm. We test our algorithm on three tissue classification tasks: classifying breast tissue density in mammograms, classifying lung tissue in High-Resolution Computed Tomography (HRCT) images, and identifying brain tissue type in Magnetic Resonance Imaging (MRI). The results show that Joint-ViVo outperforms state-of-the-art methods on tissue classification problems.
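
For readers unfamiliar with the bag-of-feature representation, the short Python sketch below illustrates the encoding step the abstract refers to: each local descriptor extracted from an ROI is assigned to its nearest visual word, and the resulting word histogram is scaled by per-word weights. The function name encode_roi and the hard nearest-word assignment are illustrative assumptions, not details taken from the paper.

import numpy as np

def encode_roi(descriptors, vocabulary, word_weights):
    """Quantize each local descriptor of one ROI to its nearest visual word
    and return a weighted, normalized word histogram (hypothetical helper)."""
    # descriptors:  (n, d) array of local features from one ROI
    # vocabulary:   (K, d) array of visual word centers
    # word_weights: (K,) array of per-word weights
    dists = np.linalg.norm(
        descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)                 # hard assignment to words
    hist = np.bincount(nearest, minlength=len(vocabulary)).astype(float)
    hist /= max(hist.sum(), 1.0)                   # normalize to frequencies
    return word_weights * hist                     # apply learned word weights

The resulting weighted histogram is the fixed-length vector that a classifier (for example, a large-margin classifier) receives for each ROI.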

 


 

Additional information:

 

Automated classification of tissue types in Regions of Interest (ROIs) in medical images has been an important application in Computer-Aided Diagnosis (CAD). Recently, bag-of-feature methods, which treat each ROI as a set of local features, have become among the state-of-the-art solutions to this problem. Two core components of the bag-of-feature strategy are visual vocabulary learning and weighting, which are usually treated independently in traditional methods, neglecting the inner relationship between them. Is there any relationship between the visual words and their weights? Researchers from King Abdullah University of Science and Technology (KAUST) tried to answer this question by developing a novel algorithm, Joint-ViVo, which learns the vocabulary and visual word weights jointly. A unified objective function based on the large margin principle is defined for learning both the visual vocabulary and the visual word weights, and is optimized alternately in an iterative algorithm. Joint-ViVo was tested on three tissue classification tasks: classifying breast tissue density in mammograms, classifying lung tissue in High-Resolution Computed Tomography (HRCT) images, and identifying brain tissue type in Magnetic Resonance Imaging (MRI). Joint-ViVo showed significantly improved performance over state-of-the-art methods on all three tasks.
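
To make the joint learning idea concrete, the following Python schematic (using scikit-learn) alternates the two steps described above: with the vocabulary fixed, word weights are derived from a large-margin (linear SVM) classifier trained on the ROI histograms; with the weights fixed, the visual words are refined. The specific update rules shown here (absolute SVM coefficients as word weights, plain k-means refinement, and the joint_learn helper) are simplified stand-ins for illustration and do not reproduce Joint-ViVo's actual optimization; encode_roi is the hypothetical helper sketched earlier.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def joint_learn(rois, y, n_words=100, n_iters=5):
    # rois: list of (n_i, d) descriptor arrays, one per ROI; y: ROI labels
    all_desc = np.vstack(rois)
    vocab = KMeans(n_clusters=n_words, n_init=10).fit(all_desc).cluster_centers_
    weights = np.ones(n_words)
    for _ in range(n_iters):
        # Step 1: fix the vocabulary, learn word weights with a large-margin
        # classifier; here the mean absolute coefficients serve as weights.
        X = np.array([encode_roi(d, vocab, np.ones(n_words)) for d in rois])
        svm = LinearSVC(C=1.0).fit(X, y)
        weights = np.abs(svm.coef_).mean(axis=0)
        # Step 2: fix the weights, refine the visual words (plain k-means here;
        # Joint-ViVo instead updates the words under the same margin objective).
        vocab = KMeans(n_clusters=n_words, init=vocab, n_init=1).fit(
            all_desc).cluster_centers_
    return vocab, weights

At test time, each new ROI would be encoded with encode_roi using the learned vocabulary and weights, then passed to the trained classifier.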

 
