Today's artificial neural networks use computational models and algorithms inspired by our understanding of the brain from the 1990s. Powerful as they are, artificial networks remain limited by their domain specificity and their reliance on vast numbers of labeled examples. About a decade ago, spiking neural networks (SNNs) emerged as a new formalism that takes advantage of spike timing and is particularly well suited to representing spatio-temporal patterns. The challenge now is to design learning rules for SNNs that allow them to interact with their environment as humans do. Specifically, for visual classification tasks, we need a set of simple features that can describe any input, seen or unseen, by adapting to the environment. Herein, we propose an adaptive mechanism for deriving feature detectors from input data. Our proposed method adapts online to new instances of existing categories drawn from the MNIST database of handwritten digits. For certain classes of inputs, the extracted features are comparable to those found in biological neural networks. We anticipate that our proposed model will be embedded in our ongoing effort to design an SNN for image classification.
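As a minimal sketch of the general idea (not the paper's actual algorithm, whose details are not given here), the following illustrates online adaptation of a small bank of feature detectors using winner-take-all competitive learning. The hyperparameters (`n_features`, `input_dim`, `lr`) and the synthetic binary patches standing in for MNIST digits are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative sketch: online winner-take-all competitive learning, one
# simple way to adapt feature detectors to a stream of inputs sample by
# sample. All names and hyperparameters below are illustrative assumptions.
rng = np.random.default_rng(0)

n_features, input_dim, lr = 8, 64, 0.1           # assumed hyperparameters
detectors = rng.random((n_features, input_dim))  # random initial detectors
detectors /= np.linalg.norm(detectors, axis=1, keepdims=True)

def adapt(x):
    """Move the best-matching detector toward the input x (online update)."""
    x = x / (np.linalg.norm(x) + 1e-12)
    winner = np.argmax(detectors @ x)            # best-matching detector
    detectors[winner] += lr * (x - detectors[winner])
    detectors[winner] /= np.linalg.norm(detectors[winner])
    return winner

# A stream of synthetic 8x8 binary patches stands in for MNIST digits.
for _ in range(500):
    adapt((rng.random(input_dim) > 0.7).astype(float))
```

Because each update moves only the winning detector toward the current input, the bank specializes incrementally without revisiting past samples, which is the essence of the online adaptation the abstract describes.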