TY - GEN
T1 - Identifying Mild Traumatic Brain Injury via Vision Transformer and Bag of Visual Features
AU - Koochaki, Fatemeh
AU - Najafizadeh, Laleh
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Due to the lack of established criteria and reliable biomarkers, timely diagnosis of mild traumatic brain injury (mTBI) has remained a challenging problem. Widefield optical imaging of cortical activity in animals provides a unique opportunity to study injury-induced alterations of brain function. Motivated by the results of medical-imaging studies that employ patch-level approaches, this paper proposes two patch-based deep learning techniques for classifying brain images of mTBI and healthy Thy1-GCaMP6s transgenic mice. The first approach uses a Bag of Visual Words (BoVW) technique to represent each image as a histogram of local features derived from patches across all training data. The local features are extracted using an unsupervised convolutional autoencoder (CAE). The second approach employs a pre-trained vision transformer (ViT) model. The average accuracies for classifying mTBI and healthy brains with the CAE-BoVW and ViT approaches are 96.8% and 97.78%, respectively, outperforming a convolutional neural network (CNN) model. This work suggests that attention-based models can be utilized for classifying mTBI and healthy brain images.
AB - Due to the lack of established criteria and reliable biomarkers, timely diagnosis of mild traumatic brain injury (mTBI) has remained a challenging problem. Widefield optical imaging of cortical activity in animals provides a unique opportunity to study injury-induced alterations of brain function. Motivated by the results of medical-imaging studies that employ patch-level approaches, this paper proposes two patch-based deep learning techniques for classifying brain images of mTBI and healthy Thy1-GCaMP6s transgenic mice. The first approach uses a Bag of Visual Words (BoVW) technique to represent each image as a histogram of local features derived from patches across all training data. The local features are extracted using an unsupervised convolutional autoencoder (CAE). The second approach employs a pre-trained vision transformer (ViT) model. The average accuracies for classifying mTBI and healthy brains with the CAE-BoVW and ViT approaches are 96.8% and 97.78%, respectively, outperforming a convolutional neural network (CNN) model. This work suggests that attention-based models can be utilized for classifying mTBI and healthy brain images.
UR - http://www.scopus.com/inward/record.url?scp=85160644991&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85160644991&partnerID=8YFLogxK
U2 - 10.1109/NER52421.2023.10123771
DO - 10.1109/NER52421.2023.10123771
M3 - Conference contribution
AN - SCOPUS:85160644991
T3 - International IEEE/EMBS Conference on Neural Engineering, NER
BT - 11th International IEEE/EMBS Conference on Neural Engineering, NER 2023 - Proceedings
PB - IEEE Computer Society
T2 - 11th International IEEE/EMBS Conference on Neural Engineering, NER 2023
Y2 - 25 April 2023 through 27 April 2023
ER -