We present a method for the automatic analysis of doctor-patient communication and report findings from a post hoc study of conversations between oncologists and their cancer patients (N=122). For each participant in a conversation, we analyzed several features, including the number of words spoken, the average positive/negative sentiment expressed, the number of questions asked, and word diversity (unique word count). We found that the number of words spoken by the doctor correlates with the highest patient ratings of the doctor's communication ability. We additionally found that unsupervised clustering of conversation features into 'styles' revealed that certain styles are associated with higher communication ratings. Two well-defined styles emerged when clustering on doctor word diversity and doctor sentiment: a high-word-diversity, neutral-sentiment style associated with higher ratings, and a low-word-diversity, positive-sentiment style with lower average ratings. Finally, machine learning models were trained to automatically predict whether a doctor-patient interaction will be rated highly, with a best test-set accuracy of 71%.
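As a minimal sketch of the per-speaker features named above (word count, word diversity, and question count; sentiment is omitted here because it requires an external sentiment model), one might compute them from a turn-annotated transcript as follows. This is an illustrative assumption about the feature definitions, not the authors' actual implementation:

```python
# Illustrative per-speaker conversation features (hypothetical helper,
# not the authors' code): word count, unique-word count ("word diversity"),
# and number of questions asked, computed from (speaker, utterance) turns.
from collections import defaultdict

def conversation_features(turns):
    """turns: list of (speaker, utterance) pairs."""
    stats = defaultdict(lambda: {"words": 0, "vocab": set(), "questions": 0})
    for speaker, text in turns:
        tokens = text.lower().split()  # naive whitespace tokenization
        s = stats[speaker]
        s["words"] += len(tokens)
        s["vocab"].update(tokens)
        s["questions"] += text.count("?")  # crude proxy for questions asked
    return {spk: {"words": s["words"],
                  "word_diversity": len(s["vocab"]),
                  "questions": s["questions"]}
            for spk, s in stats.items()}

# Toy example transcript
turns = [
    ("doctor", "How are you feeling today?"),
    ("patient", "A little tired, but okay."),
    ("doctor", "Any pain? Any trouble sleeping?"),
]
features = conversation_features(turns)
```

Feature vectors of this kind, one per speaker per conversation, are the natural input both to the unsupervised clustering into styles and to the supervised rating classifier described in the abstract.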