A smart stethoscope for automated diagnosis
Andrew McDonald, Ed Kay, Anurag Agarwal
Valvular Heart Disease
1.5 million people in the UK suffer from significant heart valve disease.
More than half of these cases (800,000) are undiagnosed [1]. Early detection of the disease is essential, but clinicians can struggle to hear the signs of disease with a traditional stethoscope. Unnecessary referrals put an immense strain on hospital cardiology departments, while late diagnoses substantially increase the risk of death.
We are designing a smart stethoscope for automated screening of asymptomatic valve disease. It will reduce unnecessary referrals and make the stethoscope useful to those with limited medical experience.

[1] J. d’Arcy et al. (2016). Large-scale community echocardiographic screening reveals a major burden of undiagnosed valvular heart disease in older people: the OxVALVE Population Cohort Study. European Heart Journal, 37(47): 3515–3522.
Heart Sounds
The heart produces characteristic sounds that can be heard at the chest with a stethoscope. A normal heart produces two sounds, S1 and S2, which give the familiar ‘lub-dub’ rhythm.
Heart valve disease can produce murmurs: abnormal high-frequency sounds. One example is the murmur of severe aortic stenosis, which occurs between S1 and S2.
Machine Learning
We are applying state-of-the-art machine learning models to heart sound recordings to identify pathological murmurs and produce an instant diagnosis of valvular heart disease.
Heart sounds are recorded at several positions on the chest using an electronic stethoscope.

Frequency transforms applied to the recording separate S1 and S2 from the higher-frequency murmur.
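As an illustration of this step, the sketch below computes a short-time Fourier transform with SciPy and splits the result into a low-frequency band (dominated by S1 and S2) and a higher-frequency band where murmur energy appears. The file name, window length and frequency cut-offs are assumptions for the example, not the parameters used in our algorithm.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

# Load a heart sound recording (hypothetical file name).
fs, sound = wavfile.read("heart_sound.wav")
sound = sound.astype(np.float32)
sound /= np.max(np.abs(sound)) + 1e-12  # normalise amplitude

# Short-time Fourier transform: 25 ms windows with 50% overlap
# (illustrative settings only).
nperseg = int(0.025 * fs)
freqs, times, Z = stft(sound, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
spectrogram = np.abs(Z)

# Low-frequency energy (below ~150 Hz) is dominated by S1 and S2;
# a higher band (~150-600 Hz) captures most murmur energy.
low_band = spectrogram[freqs < 150].sum(axis=0)
murmur_band = spectrogram[(freqs >= 150) & (freqs < 600)].sum(axis=0)
```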

Artificial neural networks are trained on these frequency transforms to identify the location of each heart sound, including any abnormal murmur.
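A minimal sketch of such a classifier is given below, assuming per-frame time-frequency feature vectors and four illustrative states (S1, systole, S2, diastole). It uses a small fully connected network in PyTorch; the layer sizes and feature count are arbitrary, and standard dropout stands in for the DropConnect regularisation described in [2].

```python
import torch
import torch.nn as nn

N_FEATURES = 64   # assumed number of frequency-band features per frame
N_STATES = 4      # illustrative states: S1, systole, S2, diastole

# Small fully connected per-frame classifier (sizes are assumptions).
model = nn.Sequential(
    nn.Linear(N_FEATURES, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # stand-in for the DropConnect used in [2]
    nn.Linear(128, N_STATES),
)

# One training step on a batch of labelled frames (random data for illustration).
features = torch.randn(32, N_FEATURES)        # 32 frames of features
labels = torch.randint(0, N_STATES, (32,))    # frame-level annotations
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

optimizer.zero_grad()
loss = criterion(model(features), labels)
loss.backward()
optimizer.step()

# At test time, a softmax over the outputs gives per-frame state probabilities.
probs = torch.softmax(model(features), dim=1)
```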

Any abnormal murmurs are reported back to the clinician for evaluation.
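For illustration, a simple decision rule could flag a recording when the murmur state is predicted in a large enough fraction of frames. The sketch below uses hypothetical thresholds and is not a clinically validated rule.

```python
import torch

def murmur_report(frame_probs: torch.Tensor,
                  murmur_state: int = 1,
                  prob_threshold: float = 0.5,
                  min_fraction: float = 0.2) -> str:
    """Flag a recording for clinician review if the murmur state is
    predicted in a sufficiently large fraction of frames.
    Thresholds here are illustrative assumptions."""
    murmur_frames = frame_probs[:, murmur_state] > prob_threshold
    fraction = murmur_frames.float().mean().item()
    if fraction >= min_fraction:
        return f"Possible murmur in {fraction:.0%} of frames: refer for evaluation."
    return "No murmur detected."

# Example with random per-frame probabilities (stand-in for model output).
dummy_probs = torch.softmax(torch.randn(100, 4), dim=1)
print(murmur_report(dummy_probs))
```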

An early version of our algorithm [2] received an award at the Computing in Cardiology 2016 Conference in Vancouver.
The project is now being commercialised through our spin-out company, BioPhonics, in association with Cambridge Enterprise.
[2] E. Kay and A. Agarwal (2017). DropConnected neural networks trained on time-frequency and inter-beat features for classifying heart sounds. Physiological Measurement, 38(8): 1645–1657.