IEEE 2017-2018 Project Titles on MATLAB – Image Processing

Abstract:

Extracting urban features from very high resolution remote sensing images is a complex and difficult task. Advances in geospatial technologies have brought forward many solutions that can improve the process of urban feature extraction. One of these solutions is to collect data using light detection and ranging (LiDAR) while concurrently capturing very high resolution optical images. This research shows that fusing a high-resolution optical image with LiDAR data can improve image processing results; it increases the success rate of urban feature extraction by reducing oversegmentation. The fusion process relies first on wavelet transform techniques, which are run several times with different parameters (rules). Then, an innovative technique is implemented to improve the fusion process. The two techniques are compared, and both reduce fragmented segments and create homogeneous urban features. However, the image fused with the innovative technique improves the accuracy of the segmentation results. The average accuracy for building detection is 96% (maximum 100%, minimum 92%) using the innovative technique, compared with 21% for no fusion and 51% for the wavelet-fusion-based technique. Furthermore, an index is used to measure the quality of the building details detected after applying the innovative fusion technique; the resulting quality index is greater than or equal to 86%.
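
A minimal MATLAB sketch of the wavelet-fusion stage described above, assuming the Wavelet and Image Processing Toolboxes; the file names, wavelet choice, and fusion rules are placeholders, and the paper's innovative fusion technique is not reproduced here:

    % Fuse one optical band with a rasterized LiDAR layer via wfusimg.
    opt = im2double(imread('optical_band.tif'));  % VHR optical band (placeholder)
    lid = im2double(imread('lidar_layer.tif'));   % LiDAR-derived raster, same size

    % 2-level 'db2' decomposition; 'mean' averages approximation coefficients,
    % 'max' keeps the strongest detail coefficients. The paper runs several
    % such rule combinations before applying its innovative technique.
    fused = wfusimg(opt, lid, 'db2', 2, 'mean', 'max');

    imshowpair(opt, fused, 'montage');  % visual check of the fusion result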

Abstract:

Markov Random Fields (MRFs) are a popular tool in many computer vision problems and faithfully model a broad range of local dependencies. However, rooted in the Hammersley-Clifford theorem, they face serious difficulties in enforcing the global coherence of the solutions without using cliques of excessively high order, which reduce the computational effectiveness of the inference phase. With this problem in mind, we describe a multi-layered (hierarchical) architecture for MRFs that is based exclusively on pairwise connections and typically produces globally coherent solutions, with 1) one layer working at the local (pixel) level, modeling the interactions between adjacent image patches; and 2) a complementary layer working at the object (hypothesis) level, pushing toward globally consistent solutions. During optimization, both layers interact until reaching an equilibrium state that not only segments the data, but also classifies it. The proposed MRF architecture is particularly suitable for problems that deal with biological data (e.g., biometrics), where the reasonability of the solutions can be objectively measured. As a test case, we considered the problem of hair/facial hair segmentation and labeling, which are soft biometric labels useful for human recognition in the wild. We observed performance levels close to the state of the art at a much lower computational cost, in both the segmentation and classification (labeling) tasks.

Abstract:

Automatic ship detection from optical satellite imagery is a challenging task due to cluttered scenes and variability in ship sizes. This letter proposes a detection algorithm based on saliency segmentation and the local binary pattern (LBP) descriptor combined with ship structure. First, we present a novel saliency segmentation framework with flexible integration of multiple visual cues to extract candidate regions from different sea surfaces. Then, simple shape analysis is adopted to eliminate obviously false targets. Finally, a structure-LBP feature that characterizes the inherent topology structure of ships is applied to discriminate true ship targets. Experimental results on numerous panchromatic satellite images validate that our proposed scheme outperforms other state-of-the-art methods in terms of both detection time and detection accuracy.
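
A minimal MATLAB sketch of the LBP stage, assuming the Computer Vision Toolbox; the chip file name and the trained classifier svmModel are placeholders, and the paper's structure-LBP variant and saliency segmentation are not reproduced:

    % Rotation-invariant uniform LBP histogram for one candidate region.
    chip = im2double(imread('candidate_chip.png'));   % grayscale candidate chip

    % 'Upright', false gives a rotation-invariant encoding, useful since
    % ships appear at arbitrary headings; one cell spans the whole chip.
    feat = extractLBPFeatures(chip, 'Upright', false, 'CellSize', size(chip));

    isShip = predict(svmModel, feat);   % svmModel: previously trained classifier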

Abstract:

The Markov random field (MRF) model has attracted great attention in the field of image segmentation. However, most MRF-based methods fail to resolve segmentation misclassification problems for high spatial resolution remote sensing images because they make insufficient use of hierarchical semantic information. To address this problem, this paper proposes an object-based MRF model with auxiliary label fields that captures more macro-level and detailed information, and applies it to the semantic segmentation of high spatial resolution remote sensing images. Specifically, apart from the label field, two auxiliary label fields are first introduced into the proposed model to interpret remote sensing images from different perspectives, implemented by setting different numbers of auxiliary classes. Then, the multilevel logistic model is used to describe the interactions within each label field, and a conditional probability distribution is developed to model the interactions between label fields. A net context structure is established among them to model the interactions of classes within and between label fields. A principled probabilistic inference is suggested to solve the proposed model by iteratively renewing the label field and the auxiliary label fields, so that different information from the auxiliary label fields can be integrated into the label field during the iterations. Experiments on different remote sensing images demonstrate that our model produces more accurate segmentation than state-of-the-art MRF-based methods. If some prior information is added, the proposed model can produce accurate results even in complex areas.

Abstract:

In this paper, a segmentation-based approach to fine registration of multispectral and multitemporal very high resolution (VHR) images is proposed. The proposed approach aims at estimating and correcting the residual local misalignment [also referred to as registration noise (RN)] that often affects multitemporal VHR images even after standard registration. The method automatically extracts a set of object-representative points associated with regions with homogeneous spectral properties (i.e., objects in the scene). These points turn out to be distributed over the entire scene and account for the high spatial correlation of pixels in VHR images. The method then estimates the amount and direction of residual local misalignment for each object-representative point by exploiting residual local misalignment properties in a multiple-displacement-analysis framework. To this end, a multiscale differential analysis of the multispectral difference image is employed to model the statistical distribution of pixels affected by residual misalignment (i.e., RN pixels) and to detect them. The RN is used to perform a segmentation-based fine registration based on both temporal and spatial correlation. Accordingly, the method is particularly suitable for images with a large number of border regions, such as VHR images of urban scenes. Experimental results obtained on both simulated and real multitemporal VHR images confirm the effectiveness of the proposed method.

Abstract:

Roads, as important artificial objects, are the main body of the modern traffic system, providing many conveniences for human civilization. With the development of Intelligent Transportation Systems (ITS), road structures change frequently. Road recognition aims to identify the road type from remote sensing imagery, and road types depend largely on the characteristics of the roads. Thus, how to extract road features and make road classification efficient has become a popular and challenging research topic. In this paper, we propose a road recognition method for remote sensing imagery using incremental learning. In principle, our method includes the following steps: 1) non-road remote sensing imagery is first filtered out using a support vector machine (see the sketch below); 2) the road network is obtained from the road remote sensing imagery by computing multiple saliency features; 3) road features are extracted from the road network and the background environment; and 4) the roads are classified into three road types according to the results of an incremental learning algorithm. Experimental results show that our method achieves a higher road recognition rate and a shorter recognition time than other popular algorithms.
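
A minimal MATLAB sketch of step 1 only, assuming the Statistics and Machine Learning Toolbox; the feature matrix X, labels y, and test data are placeholders for the paper's actual image features:

    % Filter non-road imagery with an SVM (step 1 of the pipeline).
    % X: N-by-d per-image feature vectors; y: +1 (road) / -1 (non-road).
    svmMdl = fitcsvm(X, y, 'KernelFunction', 'rbf', 'Standardize', true);

    pred    = predict(svmMdl, Xtest);   % Xtest: features of unseen images
    roadIdx = find(pred == 1);          % keep only images classified as road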

Abstract:

Active learning (AL) and semisupervised learning (SSL) are both promising solutions to hyperspectral image classification. Given a few initial labeled samples, this work combines AL and SSL in a novel manner, aiming to obtain more manually labeled and pseudolabeled samples and use them together with the initial labeled samples to improve the classification performance. First, based on a comparison of the segmentation and spectral-spatial classification results obtained by the random walker (RW) and extended RW (ERW) algorithms, the unlabeled samples are separated into two sets, i.e., low- and high-confidence unlabeled data sets. For the high-confidence unlabeled data, pseudolabeling is performed, which ensures the correctness and informativeness of the pseudolabeled samples. For the low-confidence unlabeled data, AL is used to select samples. In this way, the samples that are most effective for improving the classification performance can be labeled in only a few iterations. Finally, with the learned training set and the original hyperspectral image as inputs, the ERW classifier is used to obtain the final classification result. Experiments performed on three real hyperspectral data sets show that the proposed method achieves competitive classification accuracy even with a very limited number of manually labeled samples.

Abstract:

Human action segmentation is important for human action analysis, which is a highly active research area. Most segmentation methods are based on clustering or numerical descriptors, which are related only to the data and do not consider the relationship between the data and the physical characteristics of human actions. Physical characteristics of human motion are those that can be directly perceived by human beings, such as speed, acceleration, and continuity, which are quite helpful in detecting segment points of human motion. In this paper, we propose a new physics-based descriptor of human action using a curvature sequence warp space alignment (CSWSA) approach for sequence segmentation. Furthermore, a time-series warp-metric curvature segmentation method is constructed from the proposed descriptor and CSWSA. In our segmentation method, the descriptor expresses the changes in human actions, and CSWSA serves as an auxiliary method that suggests segmentation points. Experimental results show that our segmentation method is effective on both the CMU human motion data set and video-based data sets.

Abstract:

Digital reconstruction, or tracing, of 3-D neuron structure from microscopy images is a critical step toward reverse engineering the wiring and anatomy of a brain. Despite a number of prior attempts, this task remains very challenging, especially when images are contaminated by noise or contain discontinuous segments of neurite patterns. One approach to addressing such problems is to identify the locations of neuronal voxels using image segmentation methods prior to applying tracing or reconstruction techniques. This preprocessing step is expected to remove noise from the data, thereby leading to improved reconstruction results. In this paper, we propose to use 3-D convolutional neural networks (CNNs) for segmenting neuronal microscopy images. Specifically, we design a novel CNN architecture that takes volumetric images as inputs and produces their voxel-wise segmentation maps as outputs. The developed architecture allows us to train and predict on large microscopy images in an end-to-end manner. We evaluated the performance of our model on a variety of challenging 3-D microscopy images from different organisms. The results show that the proposed method significantly improves tracing performance when combined with different reconstruction algorithms.

Abstract:

This paper presents a cosegmentation-based method for building change detection from multitemporal high-resolution (HR) remotely sensed images, providing a new solution to object-based change detection (OBCD). First, the magnitude of a difference image is calculated to represent the change feature. Next, cosegmentation is performed via graph-based energy minimization by combining the change feature with the image features at each phase, directly yielding the foreground as multitemporal changed objects and the background as the unchanged area. Finally, the spatial correspondence between changed objects is established through overlay analysis. Cosegmentation provides a separate-and-associated, rather than separate-and-independent, multitemporal image segmentation method for OBCD, which has two advantages: 1) both the image and change features are used to produce foreground segments as changed objects, which takes full advantage of multitemporal information and produces two spatially corresponding change detection maps through the association of the change feature, revealing the thematic, geometric, and numeric changes of objects; and 2) the background in the cosegmentation result represents the unchanged area, which naturally avoids the problem of matching inconsistent unchanged objects caused by the separate-and-independent multitemporal segmentation strategy. Experimental results on five HR data sets verify the effectiveness of the proposed method, and comparisons with state-of-the-art OBCD methods further show its superiority.
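
A minimal MATLAB sketch of the first step, the change-feature magnitude; file names are placeholders, and the graph-based cosegmentation itself is not reproduced:

    % Per-pixel magnitude of the multispectral difference image.
    t1 = im2double(imread('date1.tif'));   % H-by-W-by-B image, date 1
    t2 = im2double(imread('date2.tif'));   % H-by-W-by-B image, date 2, co-registered

    diffMag = sqrt(sum((t2 - t1).^2, 3));  % Euclidean norm across bands

    % A simple Otsu threshold gives a rough changed/unchanged split; the
    % paper instead feeds diffMag into graph-based energy minimization.
    changed = imbinarize(mat2gray(diffMag));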

Abstract:

Accurate road detection and centerline extraction from very high resolution (VHR) remote sensing imagery are of central importance in a wide range of applications. Due to complex backgrounds and occlusions by trees and cars, most road detection methods produce heterogeneous segments; moreover, for the centerline extraction task, most current approaches fail to extract a centerline network that is smooth, complete, and single-pixel wide. To address these complex issues, we propose a novel deep model, a cascaded end-to-end convolutional neural network (CasNet), to cope with the road detection and centerline extraction tasks simultaneously. Specifically, CasNet consists of two networks. One is aimed at the road detection task; its strong representation ability allows it to handle the complex backgrounds and the occlusions by trees and cars. The other is cascaded to the former and makes full use of the feature maps the former produces to obtain good centerline extraction. Finally, a thinning algorithm is proposed to obtain a smooth, complete, and single-pixel-wide road centerline network. Extensive experiments demonstrate that CasNet greatly outperforms state-of-the-art methods in both learning quality and learning speed: it exceeds the compared methods by a large margin in quantitative performance and is nearly 25 times faster than them. Moreover, as another contribution, a large and challenging road centerline data set for VHR remote sensing images will be made publicly available for further studies.
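
A minimal MATLAB sketch of a thinning step in the same spirit as the one described (not CasNet itself), assuming the Image Processing Toolbox; the mask file name and cleanup parameters are placeholders:

    % Reduce a binary road-detection mask to a single-pixel-wide centerline.
    roadMask   = imread('road_mask.png') > 0;      % binary road detection result
    roadMask   = bwareaopen(roadMask, 50);         % drop tiny spurious blobs
    centerline = bwmorph(roadMask, 'thin', Inf);   % thin until 1 px wide
    centerline = bwmorph(centerline, 'spur', 5);   % prune short spurs

    imshowpair(roadMask, centerline, 'montage');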

Abstract:

Sidescan sonar image segmentation is a very important issue in underwater object detection and recognition. In this paper, a robust and fast method for sidescan sonar image segmentation is proposed that deals with both the speckle noise and the intensity inhomogeneity that can cause considerable difficulties in image segmentation. The proposed method integrates nonlocal means-based speckle filtering (NLMSF), coarse segmentation using k-means clustering, and fine segmentation using an improved region-scalable fitting (RSF) model. The NLMSF is applied before segmentation to effectively remove speckle noise while preserving meaningful details such as edges and fine features, which makes the segmentation easier and more accurate. After despeckling, a coarse segmentation is obtained using k-means clustering, which reduces the number of iterations. In the fine segmentation, to better deal with possible intensity inhomogeneity, an edge-driven constraint is combined with the RSF model, which not only accelerates convergence but also avoids becoming trapped in local minima. The proposed method has been successfully applied to both noisy and inhomogeneous sonar images. Experimental and comparative results on real and synthetic sonar images demonstrate that the proposed method is robust against noise and intensity inhomogeneity, and is also fast and accurate.
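
A minimal MATLAB sketch of the despeckle-then-coarse-segment stages, assuming the Image Processing Toolbox (imnlmfilt, R2018b or later) and the Statistics and Machine Learning Toolbox; the improved RSF fine segmentation is not reproduced:

    sonar = im2double(imread('sidescan.png'));   % placeholder sonar image

    den = imnlmfilt(sonar);                      % non-local means despeckling

    % Coarse 3-class split (shadow / background / highlight) on intensity;
    % the coarse labels then initialize the RSF contour.
    labels = kmeans(den(:), 3, 'Replicates', 3);
    coarse = reshape(labels, size(den));

    imshow(label2rgb(coarse));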

Abstract:

Nuclear segmentation in digital microscopic tissue images enables the extraction of high-quality features for nuclear morphometrics and other analyses in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images in which a vast number of nuclei have been annotated. Publicly accessible annotated data sets, along with widely agreed-upon metrics for comparing techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21,000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is drawn from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work out-of-the-box on other H&E-stained images. We also propose a new metric for evaluating nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. Finally, we propose a deep learning-based segmentation technique that lays special emphasis on identifying nuclear boundaries, including those between touching or overlapping nuclei, and works well on a diverse set of test images.
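
A minimal MATLAB sketch of the conventional baseline the abstract contrasts with (Otsu thresholding plus marker-controlled watershed), not the proposed deep model; assumes the Image Processing Toolbox and a placeholder H&E tile:

    I  = im2double(rgb2gray(imread('he_tile.png')));  % nuclei darker than stroma
    bw = ~imbinarize(I, graythresh(I));               % Otsu threshold, inverted
    bw = imfill(bw, 'holes');

    D = -bwdist(~bw);        % distance transform inside nuclei
    D = imhmin(D, 0.5);      % suppress shallow minima to limit over-splitting
    L = watershed(D);
    bw(L == 0) = 0;          % watershed ridge lines split touching nuclei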

Abstract:

In this paper, a new breast cancer detection method that combines thermography and high-frequency imaging techniques is presented. The proposed method uses the distribution and variation of temperature on the breast surface to estimate the location and size of malignant breast tissue (a cancerous tumor). First, the breast tissue is excited with a printed dipole antenna array, after which an electromagnetic analysis is conducted. Next, the heat equation is used to estimate the surface temperature distribution. Simulation results show that both the temperature and the specific absorption rate (SAR) increase as the tumor gets bigger or closer to the surface. Finally, thermal responses (such as the temperature distribution on the skin and its change over time) and electromagnetic responses (such as transmission and reflection S-parameters) are employed to estimate the size and location of the tumor.

Abstract:

Most abdominal CT images contain Gaussian noise, and CT scans appear blurry because of the internal fat tissue in the abdomen. These two handicaps (noise and fat tissue) impede accurate abdominal organ and tumour segmentation. Segmentation techniques also tend to fail on regions with similar grayscale values. Therefore, denoising and enhancement steps are crucial for better segmentation results on CT images. In this paper, we build a tool comprising three efficient algorithms for image enhancement prior to abdominal organ and tumour segmentation. First, denoising is performed with the Block Matching and 3D Filtering (BM3D) algorithm to eliminate the Gaussian noise present in arterial-phase CT images. Second, the Fast Linking Spiking Cortical Model (FL-SCM) is used to remove the internal fat tissue. Finally, the Otsu algorithm is applied to remove the redundant parts of the image. In the experiments, the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) index are used to evaluate the performance of the proposed method, and a visual comparison is presented. The results show that the proposed tool obtains the best PSNR and SSIM values in comparison with two partial pipelines (FL-SCM alone and BM3D & FL-SCM). Consequently, BM3D & FL-SCM & Otsu (BFO) yields a clean abdominal image, particularly for segmentation of the liver, spleen, pancreas, adrenal tumours, aorta, ribs, spinal cord, and kidneys.
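
A minimal MATLAB sketch of the evaluation step, assuming the Image Processing Toolbox; the reference and enhanced slices are placeholders:

    % PSNR and SSIM of an enhanced CT slice against a reference slice.
    ref      = im2double(imread('ct_reference.png'));
    enhanced = im2double(imread('ct_enhanced.png'));

    fprintf('PSNR = %.2f dB\n', psnr(enhanced, ref));
    fprintf('SSIM = %.4f\n',    ssim(enhanced, ref));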

Abstract:

Accurate segmentation of cervical cells in Pap smear images is an important step in automatic pre-cancer identification in the uterine cervix. One of the major segmentation challenges is the overlapping of cytoplasm, which has not been well addressed in previous studies. To tackle the overlapping issue, this paper proposes a learning-based method with robust shape priors that segments individual cells in Pap smear images, supporting automatic monitoring of changes in cells, a vital prerequisite for early detection of cervical cancer. We define this splitting problem as a discrete labeling task for multiple cells with a suitable cost function. The labeling results are then fed into our dynamic multi-template deformation model for further boundary refinement. Multi-scale deep convolutional networks are adopted to learn the diverse cell appearance features. We also incorporate high-level shape information to guide segmentation where the cell boundary may be weak or lost due to cell overlapping. An evaluation carried out on two different datasets demonstrates the superiority of our proposed method over state-of-the-art methods in terms of segmentation accuracy.

Abstract:

Brain tumors are among the most life-threatening diseases, and their manual detection is a highly challenging task for radiologists due to variations in tumor size, shape, location, and type. Detection therefore needs to be fast and precise, which can be achieved by automated segmentation methods on MR images. In this paper, neutrosophic-set-based segmentation is performed to detect the tumor. MRI is a more powerful tool than CT for analyzing the internal structures of the body and the tumor. The tumor is detected, its truth, falsity, and indeterminacy values are determined by this technique, and the proposed method produces satisfactory results.
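
A minimal MATLAB sketch of one common neutrosophic image transform (truth, indeterminacy, and falsity memberships per pixel); the paper's exact formulation is not given in the abstract, so this follows a widely used variant and all parameters are assumptions:

    img = im2double(imread('brain_mri.png'));                 % placeholder MR slice
    m   = imfilter(img, fspecial('average', 5), 'replicate'); % local mean

    T = mat2gray(m);              % truth: normalized local mean
    I = mat2gray(abs(img - m));   % indeterminacy: deviation from local mean
    F = 1 - T;                    % falsity: complement of truth

    % Thresholding T while masking high-indeterminacy pixels gives a rough
    % tumor candidate map (the 0.25 cutoff is an assumption).
    cand = (T > graythresh(T)) & (I < 0.25);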

Abstract:

Segmentation of the developing cortical plate from MRI data of the post-mortem fetal brain is highly challenging due to partial volume effects, low contrast, and heterogeneous maturation caused by ongoing myelination processes. We present a new atlas-free method that segments the inner and outer boundaries of the cortical plate in fetal brains by exploiting diffusion-weighted imaging cues and using a cortical thickness constraint. The accuracy of the segmentation algorithm is demonstrated by application to fetal sheep brain MRI data, and is shown to produce results comparable to manual segmentation and more accurate than semi-automatic segmentation.

Abstract:

Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between the lesion and the surrounding skin, irregular and fuzzy lesion borders, the presence of various artifacts, and varied image acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation that leverages a 19-layer deep convolutional neural network trained end-to-end, without relying on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on the Jaccard distance to eliminate the need for sample re-weighting, a typical procedure when cross-entropy is used as the loss function for image segmentation, owing to the strong imbalance between the numbers of foreground and background pixels. We evaluated the effectiveness, efficiency, and generalization capability of the proposed framework on two publicly available databases: one from the ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other the PH2 database. Experimental results show that the proposed method outperforms other state-of-the-art algorithms on these two databases. Our method is general and requires only minimal pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.
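
A minimal MATLAB sketch of a soft Jaccard (IoU) loss of the kind the paper's loss function builds on; the exact smoothing and weighting used in the paper may differ:

    function L = jaccardLoss(P, G)
    % P: predicted foreground probability map; G: binary ground-truth mask.
        inter = sum(P(:) .* G(:));
        union = sum(P(:)) + sum(G(:)) - inter;
        L     = 1 - inter / (union + eps);   % Jaccard distance in [0,1]
    end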

Abstract:

Most color images obtained from the real world, such as natural scene images, remote sensing images, and medical images, contain complex areas. All these types of images are difficult to segment accurately and automatically because of the complex colors and structures they contain. In this paper, we focus on detecting hybrid cues in color images to segment complex scenes in a bottom-up framework. The main idea of the proposed segmentation method is a two-step procedure: 1) a reasonable superpixel computation method is applied and 2) a Mumford-Shah (M-S) optimal merging model is proposed for the presegmented superpixels. First, a set of seed pixels is positioned at the minima of a texture energy map computed from structure tensor diffusion features. Next, we implement a growing procedure to extract superpixels from the selected seed pixels using color and texture cues. After that, a color-texture histogram feature is defined to measure the similarity between regions, and an M-S optimal merging process is executed by comparing the similarity of adjacent regions under standard-deviation constraints to obtain the final segmentation. Extensive experiments are conducted on the Berkeley segmentation database, several remote sensing images, and medical images. The experimental results verify the effectiveness of the proposed method in segmenting complex scenes and indicate that it is more robust and accurate than conventional methods.
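
A minimal MATLAB sketch of the two-step outline using stock tools (SLIC superpixels and a mean-color descriptor), assuming the Image Processing Toolbox; the paper's texture-seeded growing and M-S optimal merging model are not reproduced:

    rgb      = imread('scene.png');      % placeholder color image
    [Lsp, N] = superpixels(rgb, 400);    % oversegment into ~400 regions

    % Mean Lab color per superpixel, a stand-in for the paper's
    % color-texture histogram feature.
    lab   = rgb2lab(rgb);
    feats = zeros(N, 3);
    for c = 1:3
        ch = lab(:,:,c);
        feats(:,c) = accumarray(Lsp(:), ch(:), [N 1], @mean);
    end
    % Adjacent regions whose descriptors are close would then be merged;
    % the M-S model decides those merges optimally in the paper.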

Abstract:

Detection and segmentation of small renal masses (SRMs) in renal CT images are important pre-processing steps for computer-aided diagnosis of renal cancer. However, the task is known to be challenging due to the variety of mass sizes, shapes, and locations. In this paper, we propose an automated method for detecting and segmenting SRMs in contrast-enhanced CT images using texture and context feature classification. First, kidney ROIs are determined by intensity and location thresholding. Second, mass candidates are extracted by intensity and location thresholding. Third, false positives are reduced by patch-based texture and context feature classification. Finally, mass segmentation is performed, using the detection results as seeds, with region growing, active contours, and outlier removal based on size and shape criteria. In experiments, our method detected SRMs with a specificity and PPV of 99.63% and 64.2%, respectively, and segmented them with a sensitivity, specificity, and DSC of 89.91%, 98.96%, and 88.94%, respectively.
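
A minimal MATLAB sketch of the final segmentation stage using the stock Chan-Vese active contour, assuming the Image Processing Toolbox; the seed location, iteration count, and file name are placeholders, and the paper's region growing and outlier removal are not reproduced:

    ct   = im2double(imread('kidney_ct_roi.png'));
    seed = false(size(ct));
    seed(120, 95) = true;                      % placeholder detection seed
    seed = imdilate(seed, strel('disk', 3));   % small initial region

    mask = activecontour(ct, seed, 200, 'Chan-Vese');
    mask = bwareafilt(mask, 1);                % keep the largest component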

Abstract:

Tracking heart rate during fitness activities using wrist-type wearables is challenging because of the significant noise caused by intensive wrist movements. In this paper, we present FitBeat, a lightweight system that enables accurate heart rate tracking on wrist-type wearables during intensive exercise. Unlike existing approaches that rely on computation-intensive signal processing, FitBeat integrates and augments standard filtering and spectral analysis tools, achieving comparable accuracy while significantly reducing computational overhead. FitBeat combines contact sensing, motion sensing, and simple spectral analysis algorithms to suppress various error sources. We implement FitBeat on a COTS smartwatch and evaluate its performance for typical workouts of different intensities, including walking, running, and riding. Experimental results involving 10 subjects show that the average error of FitBeat is around 4 beats per minute, improving the heart rate accuracy of the Moto 360's default heart rate tracker by 10x.
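
A minimal MATLAB sketch of the standard filter-plus-spectral-analysis core that FitBeat augments, assuming the Signal Processing Toolbox; the sampling rate, band edges, and ppgWindow signal are placeholders:

    fs  = 25;                     % PPG sampling rate (Hz), assumed
    ppg = detrend(ppgWindow);     % ppgWindow: short window from the wrist sensor

    [b, a] = butter(3, [0.7 3.5] / (fs/2), 'bandpass');  % 42-210 bpm band
    x      = filtfilt(b, a, ppg);

    n    = 4096;                  % zero-pad the FFT for finer resolution
    X    = abs(fft(x, n));
    f    = (0:n-1) * fs / n;
    band = (f >= 0.7) & (f <= 3.5);
    [~, i] = max(X(:).' .* band); % dominant in-band spectral peak
    bpm    = f(i) * 60;           % peak frequency -> beats per minute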

Abstract:

Accurate reconstruction of anatomical connections between neurons in the brain using electron microscopy (EM) images is considered the gold standard for circuit mapping. A key step in obtaining the reconstruction is the ability to automatically segment neurons with a precision close to human-level performance. Despite recent technical advances in EM image segmentation, most methods rely to some extent on hand-crafted features that are specific to the data, limiting their ability to generalize. Here, we propose a simple yet powerful technique for EM image segmentation that is trained end-to-end and does not rely on prior knowledge of the data. Our proposed residual deconvolutional network consists of two information pathways that capture full-resolution features and contextual information, respectively. We show that the proposed model is very effective at achieving the conflicting goals of dense output prediction, namely, preserving full-resolution predictions while incorporating sufficient contextual information. We applied our method to the ongoing open challenge of 3D neurite segmentation in EM images, where it achieved one of the top results. We demonstrated the generality of our technique by evaluating it on the 2D neurite segmentation challenge dataset, where consistently high performance was obtained. We thus expect our method to generalize well to other dense output prediction problems.

Abstract:

Multi-source image acquisition is attracting increasing interest in many fields, such as multi-modal medical image segmentation. Such acquisition aims to exploit complementary information for image segmentation, since the same scene is observed by various types of images. However, strong dependence often exists between multi-source images, and this dependence should be taken into account when extracting joint information to make a precise decision. To statistically model the dependence between multiple sources, we propose a novel multi-source fusion method based on the Gaussian copula. The proposed fusion model is integrated into a statistical framework with hidden Markov field inference in order to delineate a target volume from multi-source images. Estimation of the model parameters and segmentation of the images are performed jointly by an iterative algorithm based on Gibbs sampling. Experiments are performed on multi-sequence MRI to segment tumors. The results show that the proposed method based on the Gaussian copula is effective in accomplishing multi-source image segmentation.

Abstract:

Diffusion-weighted magnetic resonance imaging (DWI) is a key non-invasive imaging technique for cancer diagnosis and tumor treatment assessment, reflecting the Brownian motion of water molecules in tissues. Since densely packed cells restrict molecular mobility, tumor tissues usually produce a higher signal (i.e., a less attenuated signal) on isotropic maps than normal tissues. However, no general quantitative relation between DWI data and cell density has been established. In order to link low-resolution clinical cross-sectional data with high-resolution histological information, we developed an image processing and analysis chain, which was used to study the correlation between the diffusion coefficient (D value) estimated from DWI and tumor cellularity from serial histological slides of a resected non-small cell lung cancer (NSCLC) tumor. Color deconvolution followed by cell nucleus segmentation was performed on digitized histological images to determine local and cell-type-specific 2D (two-dimensional) densities. From these, the 3D (three-dimensional) cell density was inferred by a model-based sampling technique, which is necessary for calculating local and global 3D tumor cell counts. Next, the DWI sequence information was overlaid with high-resolution CT data and the resected histology, using prominent anatomical landmarks to co-register the histology tissue blocks and the data from the non-invasive imaging modalities. Integrating the cell count information with DWI data from different tumor areas revealed a clear negative correlation between cell density and D value. Importantly, the spatial tumor cell density can be calculated from DWI data. In summary, our results demonstrate that tumor cell count and heterogeneity can be predicted from DWI data, which may open new opportunities for personalized diagnosis and therapy optimization.

Abstract:

Recent advances in quantitative ultrasound (QUS) methods have provided a promising framework for non-invasively and inexpensively monitoring or predicting the effectiveness of therapeutic cancer responses. One of the earliest steps in applying QUS methods is contouring a region of interest (ROI) inside the tumour in ultrasound B-mode images. While manual segmentation is a very time-consuming and tedious task for human experts, auto-contouring is also extremely difficult for computers due to the poor quality of ultrasound B-mode images. However, for the purpose of cancer response prediction, only a rough boundary of the tumour as an ROI is needed. In this research, a semi-automated tumour localization approach is proposed for ROI estimation in ultrasound B-mode images acquired from patients with locally advanced breast cancer (LABC). The proposed approach comprises several modules, including 1) feature extraction using keypoint descriptors; 2) augmenting the feature descriptors with the distance of each keypoint to the user-input pixel marking the centre of the tumour; 3) supervised learning using a support vector machine (SVM) to classify keypoints as "tumour" or "non-tumour"; and 4) computation of an ellipse as an outline of the ROI representing the tumour. Experiments with 33 B-mode images from 10 LABC patients yielded promising results, with an accuracy of 76.7% based on the Dice coefficient performance measure. The results demonstrate that the proposed method can potentially be used as the first stage in a computer-assisted cancer response prediction system for semi-automated contouring of breast tumours.