Deep learning models have achieved notable success in recent years, and word representation learning for question classification has been studied intensively. However, because questions are typically short texts, existing techniques often fail to extract discriminative question representations from such a limited number of words. This motivates us to exploit additional information beyond words in order to improve the representation learning of questions. On one hand, topic modeling captures meaningful semantic structure from the question corpus; such global topical information should be helpful for question representations. On the other hand, entities extracted from the questions themselves provide auxiliary information for short texts from a local viewpoint. Together with words, topics and entities can substantially improve question representations. In this paper, we propose a unified neural network framework that integrates Topic modeling, Word embedding and Entity embedding (TWEE) for question representation learning. Concretely, we introduce a novel topic sparse autoencoder to incorporate discriminative topics into the representation learning of questions. In addition, both word- and entity-related information are embedded into the network to help learn a more comprehensive question representation. Empirical experiments show that the proposed TWEE framework outperforms state-of-the-art methods on different datasets.
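To make the idea of a sparse autoencoder concrete, the following is a minimal, illustrative sketch of its training objective: a reconstruction error plus a KL-divergence sparsity penalty that pushes hidden activations toward a small target level. All names and hyperparameter values here are assumptions for illustration, not TWEE's actual architecture, which additionally incorporates topical structure.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def kl_sparsity(rho, rho_hat):
    """KL divergence between target sparsity rho and an observed activation rho_hat."""
    eps = 1e-8  # guard against log(0)
    rho_hat = min(max(rho_hat, eps), 1.0 - eps)
    return (rho * math.log(rho / rho_hat)
            + (1.0 - rho) * math.log((1.0 - rho) / (1.0 - rho_hat)))

def sparse_autoencoder_loss(x, W_enc, W_dec, rho=0.05, beta=3.0):
    """Reconstruction loss plus sparsity penalty for a single input vector x.

    rho  -- target activation level for each hidden unit (assumed value)
    beta -- weight of the sparsity penalty (assumed value)
    """
    # Encode: hidden activations h = sigmoid(W_enc @ x)
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_enc]
    # Decode: linear reconstruction x_hat = W_dec @ h
    x_hat = [sum(w * hj for w, hj in zip(row, h)) for row in W_dec]
    # Squared reconstruction error
    recon = sum((xi - xhi) ** 2 for xi, xhi in zip(x, x_hat))
    # Sparsity penalty (in practice averaged over a mini-batch per hidden unit)
    penalty = sum(kl_sparsity(rho, hj) for hj in h)
    return recon + beta * penalty
```

In a full model the penalty would be computed on mean activations over a batch and the weights learned by gradient descent; the sketch only shows how the sparsity term enters the objective.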