This paper presents an algorithm that classifies pixels in uterine cervix images into two classes, normal and abnormal tissue, while simultaneously selecting relevant features via group sparsity. Because of large variations in image appearance caused by changes in illumination, specular reflections, and other visual noise, the two classes overlap strongly in feature space, whether the features are derived from color or from texture. Using more features makes the classes more separable and improves segmentation quality, but also increases computational complexity; moreover, the properties of these features have not been well investigated. In most existing approaches, a group of features is selected before segmentation, so features that contribute little to the result are retained and add to the computational cost. We propose feature selection as a remedy: it provides a robust trade-off between segmentation quality and computational complexity. We formulate cervigram segmentation as a feature-selection-based classification problem and introduce a regularization-based feature-selection algorithm that exploits both the sparsity and the clustering structure of the features. We apply our method to automatically segment the AcetoWhite (AW) biomarker regions in a dataset of 200 uterine cervix images for which manual segmentations are available, and we compare the performance of several regularization-based feature-selection methods. The experimental results demonstrate that, on this dataset, the proposed group-sparsity-based method gives overall better results in terms of sensitivity, specificity, and sparsity.
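To make the idea of group-sparse feature selection concrete, the sketch below is a minimal illustration (an assumption, not the paper's actual algorithm or data): proximal gradient descent on a least-squares classifier with a group-lasso penalty, min_w 0.5*||Xw - y||^2 + lam * sum_g ||w_g||_2, applied to synthetic "pixel features" arranged in groups (e.g. a color group and two texture groups). The block soft-thresholding step drives entire uninformative groups to zero, which is what allows whole feature families to be discarded.

```python
# Hypothetical sketch of group-sparsity-based feature selection
# (synthetic data; not the paper's method or dataset).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pixel features: 3 groups of 4 features each;
# only group 0 carries signal about the binary label.
groups = [range(0, 4), range(4, 8), range(8, 12)]
n = 200
X = rng.standard_normal((n, 12))
w_true = np.zeros(12)
w_true[:4] = [1.5, -2.0, 1.0, 0.5]
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(n))  # labels in {-1, +1}

def group_lasso_prox(w, thresh):
    """Block soft-thresholding: shrink each group's norm toward zero;
    a group whose norm falls below the threshold is zeroed entirely."""
    out = w.copy()
    for g in groups:
        idx = list(g)
        norm = np.linalg.norm(w[idx])
        out[idx] = 0.0 if norm <= thresh else w[idx] * (1 - thresh / norm)
    return out

lam = 30.0                                  # group-lasso strength (assumed)
step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant of the gradient
w = np.zeros(12)
for _ in range(500):
    grad = X.T @ (X @ w - y)                # least-squares gradient
    w = group_lasso_prox(w - step * grad, step * lam)

selected = [i for i, g in enumerate(groups) if np.linalg.norm(w[list(g)]) > 0]
print("selected feature groups:", selected)
```

In this toy setting the penalty suppresses the two noise groups while keeping the informative one, mimicking the trade-off described above: fewer active feature groups means lower computational cost at classification time, at little loss in separability.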