

A Contextual Framework for Health Monitoring Applications
Personal Author:
Mengistu, Yehenew, author.
ISBN:
9780438100886
Physical Description:
1 electronic resource (121 pages)
General Note:
Source: Masters Abstracts International, Volume: 57-06M(E).
Advisor: Weihua Sheng. Committee members: Qi Cheng; Martin Hagan.
Abstract:
The demand for healthcare services has been growing in recent years and is expected to at least double by 2050, mainly due to the increase in the senior population worldwide. As a result, efforts are being made to develop distributed healthcare systems in which patients can receive basic treatment in the comfort of their homes. The use of wearable health monitoring devices is accepted as a promising approach in this respect. However, many challenges still hinder the wide adoption of these devices for health monitoring applications, and the lack of contextual information is one of them. In this research, a contextual framework is proposed that overcomes most drawbacks of existing systems and provides contextual information to remote caregivers for health monitoring applications. The system is composed of a wearable monitoring device, a home service robot, and cloud servers. Contextual information is collected and processed using the wearable device and the home service robot, and continuous status updates are provided to the cloud servers as a service on which an application can be built for monitoring patient health conditions.
Acoustic events and emotional status are the two types of contextual information considered in this research. A wearable throat microphone is used for sound event recognition, and the microphone on the robot is used for recognizing emotional status from speech. A Hidden Markov Model (HMM)-based acoustic event recognition algorithm is proposed, which includes preprocessing, sound event detection, voiced/unvoiced recognition, and sound event recognition. For emotion recognition, a two-level speech emotion recognition algorithm is proposed, which uses a Deep Neural Network (DNN) for segment-level classification and an HMM for utterance-level emotion recognition.
Experimental results show that the sound event recognition algorithm has an accuracy of 93.10% for classifying six events. The speech emotion recognition algorithm has an accuracy of 85.49%, outperforming the other techniques evaluated. Two sample applications, Dietary Intake Monitoring and Depression Disorder Monitoring, are also demonstrated; they use contextual information from drinking and eating recognition and from speech emotion recognition. Experimental results and case studies show that the framework can effectively empower health monitoring applications.
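The voiced/unvoiced recognition step mentioned in the abstract is commonly done with a short-time energy and zero-crossing-rate (ZCR) decision. The sketch below illustrates that classic technique only; the thresholds and synthetic signals are illustrative assumptions, not the thesis's actual features or parameters.

```python
import numpy as np

def frame_features(frame):
    """Short-time energy and zero-crossing rate of one analysis frame."""
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return energy, zcr

def voiced_unvoiced(frame, energy_thresh=1e-3, zcr_thresh=0.25):
    """Classic energy/ZCR decision: voiced speech is high-energy and
    low-ZCR, unvoiced (fricative- or noise-like) sound is high-ZCR,
    and anything below the energy floor is treated as silence."""
    energy, zcr = frame_features(frame)
    if energy < energy_thresh:
        return "silence"
    return "unvoiced" if zcr > zcr_thresh else "voiced"

# Synthetic check: a 200 Hz tone at 8 kHz sampling behaves like voiced
# speech (low ZCR), while white noise behaves like an unvoiced sound.
fs = 8000
t = np.arange(0, 0.03, 1 / fs)                # one 30 ms frame
tone = 0.5 * np.sin(2 * np.pi * 200 * t)
noise = 0.5 * np.random.default_rng(0).standard_normal(t.size)

print(voiced_unvoiced(tone))    # voiced
print(voiced_unvoiced(noise))   # unvoiced
```

The two thresholds would normally be tuned on labeled frames; the defaults here are only plausible starting points for this synthetic example.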
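HMM-based event recognition of the kind described above typically trains one HMM per event class and labels a new observation sequence with the class whose model gives it the highest likelihood, scored by the forward algorithm. The minimal sketch below shows that general pattern on discrete toy symbols; the two-state models, parameters, and the "drinking"/"eating" labels are hypothetical stand-ins, not the models from the thesis, which operates on real acoustic features.

```python
import numpy as np

def log_forward(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under one HMM
    (pi: initial probs, A: transitions, B: emissions), computed with the
    forward algorithm in log space for numerical stability."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

def classify(obs, models):
    """Maximum-likelihood classification: score the sequence under each
    per-class HMM and return the best-scoring class label."""
    scores = {name: log_forward(obs, *params) for name, params in models.items()}
    return max(scores, key=scores.get)

def hmm(pi, A, B):
    """Pack HMM parameters as log-probability arrays."""
    return tuple(np.log(np.asarray(m)) for m in (pi, A, B))

# Two toy event models over a 2-symbol alphabet: the "drinking" model
# prefers emitting symbol 0, the "eating" model prefers symbol 1.
models = {
    "drinking": hmm([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.9, 0.1], [0.6, 0.4]]),
    "eating":   hmm([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.1, 0.9], [0.4, 0.6]]),
}

print(classify([0, 0, 0, 0], models))   # drinking
print(classify([1, 1, 1, 1], models))   # eating
```

In practice the per-class parameters would be estimated from labeled training sequences (e.g. via Baum-Welch), and the observations would be quantized or continuous acoustic features rather than hand-picked symbols.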
Local Note:
School code: 0664
Available:
| Shelf Number | Item Barcode | Shelf Location | Status |
|---|---|---|---|
| XX(687732.1) | 687732-1001 | Proquest E-Thesis Collection | Searching... |


