Projects:PainAssessment
Introduction
Pain assessment in patients who are unable to verbally communicate with the medical staff is a challenging problem in critical care. This problem is most prominently encountered in sedated patients in the intensive care unit (ICU) recovering from trauma and major surgery, as well as in infant patients and patients with brain injuries. Current practice in the ICU requires the nursing staff to assess the pain and agitation experienced by the patient and to take appropriate action to ameliorate the patient’s anxiety and discomfort.
The fundamental limitations in sedation and pain assessment in the ICU stem from subjective assessment criteria, rather than quantifiable, measurable data for ICU sedation. This often results in poor-quality and inconsistent treatment of patient agitation from nurse to nurse. Recent advances in computer vision techniques can assist the medical staff in assessing sedation and pain by constantly monitoring the patient and providing the clinician with quantifiable data for ICU sedation. An automatic pain assessment system can be used within a decision support framework that can also provide automated sedation and analgesia in the ICU. Achieving closed-loop sedation control in the ICU requires a quantifiable feedback signal that reflects some measure of the patient’s agitation. A non-subjective agitation assessment algorithm can thus be a key component in developing closed-loop sedation control algorithms for ICU sedation.
Individuals in pain manifest their condition through "pain behavior", which includes facial expressions. Clinicians regard the patient’s facial expression as a valid indicator of pain and pain intensity. Hence, correct interpretation of the patient’s facial expressions and their correlation with pain is a fundamental step in designing an automated pain assessment system. Of course, other pain behaviors, including head movement and the movement of other body parts, along with physiological indicators of pain, such as heart rate, blood pressure, and respiratory rate responses, should also be included in such a system.
The current clinical standard in the ICU for assessing the level of sedation is an ordinal scoring system, such as the Motor Activity Assessment Scale (MAAS) or the Richmond Agitation-Sedation Scale (RASS), which includes an assessment of the patient’s level of agitation as well as level of consciousness. Assessment of the level of sedation of a patient is, therefore, subjective and limited in accuracy and resolution, and hence prone to error, which in turn may lead to oversedation. In particular, oversedation increases risk to the patient since liberation from mechanical ventilation, one of the most common lifesaving procedures performed in the ICU, may not be possible due to a diminished level of consciousness and respiratory depression from sedative drugs, resulting in a prolonged length of stay in the ICU. Alternatively, undersedation leads to agitation and can result in dangerous situations for both the patient and the intensivist. Specifically, agitated patients can do physical harm to themselves by dislodging their endotracheal tube, which can potentially endanger their life. In addition, an intensivist who must restrain a dangerously agitated patient has less time for providing care to other patients, making their work more difficult.
Computer vision techniques can be used to quantify agitation in sedated ICU patients. In particular, such techniques can be used to develop objective agitation measurements from patient motion. In the case of paraplegic patients, whole-body movement is not available, and hence monitoring whole-body motion is not a viable solution. In this case, measuring head motion and facial grimacing for quantifying patient agitation in critical care can be a useful alternative.
Pain Recognition using Sparse Kernel Machines
Support Vector Machines (SVM) and Relevance Vector Machines (RVM) were used to identify the facial expressions corresponding to pain. A total of 21 subjects from the infant COPE database were selected such that, for each subject, at least one photograph corresponded to pain and one to non-pain. The number of photographs available per subject ranged from 5 to 12, with a total of 181 photographs considered. We applied the leave-one-out method for validation.
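The leave-one-out protocol above can be sketched as follows. This is a minimal illustration using scikit-learn's `LeaveOneOut` splitter and a linear-kernel SVM on synthetic feature vectors standing in for the COPE photographs; the data shapes and random features are assumptions for demonstration only, not the study's data.

```python
# Hedged sketch of leave-one-out validation with a linear-kernel SVM.
# The feature vectors here are synthetic placeholders, not COPE images.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 50))      # 12 hypothetical photographs, 50 features each
y = np.array([0, 1] * 6)           # 0 = non-pain, 1 = pain

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    # Train on all samples except one, test on the held-out sample
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(X[train_idx], y[train_idx])
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

accuracy = correct / len(X)
print(f"leave-one-out accuracy: {accuracy:.2f}")
```

Each sample is held out exactly once, so the reported accuracy is an average over as many train/test splits as there are photographs.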
In the preprocessing stage, the faces were standardized for eye position using a similarity transformation. Then, a 70 × 93 window was used to crop the facial region of the image, and only the grayscale values were used. For each image, a 6510-dimensional feature vector was formed by column-stacking the matrix of intensity values. We used the OSU SVM MATLAB Toolbox to run the SVM classification algorithm. The classification accuracy for the SVM algorithm with a linear kernel was 90%, where we chose the complexity parameter C = 1; the number of support vectors averaged 5. Applying the RVM algorithm with a linear kernel to the same data set resulted in an almost identical classification accuracy, namely 91%, while the number of relevance vectors was reduced to 2. However, in 5 of the 21 subjects considered, the algorithm did not converge. This is because, in contrast to the SVM algorithm, the RVM algorithm involves a non-convex optimization problem.
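The feature-extraction step above (crop a 70 × 93 grayscale window, then column-stack it into a 6510-dimensional vector) can be sketched as follows. The image, its size, and the crop offsets are hypothetical placeholders; eye alignment via the similarity transformation is assumed to have already been applied.

```python
# Sketch of the cropping and column-stacking step.
# The 120 x 100 "face" and the crop offsets are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
face = rng.integers(0, 256, size=(120, 100))  # hypothetical aligned grayscale image

# Crop a 93-row x 70-column facial window (70 x 93 = 6510 pixels)
crop = face[10:103, 15:85]

# Column-stack the intensity matrix into a 6510-dimensional feature vector,
# matching MATLAB's column-major (:) convention
feature = crop.flatten(order="F")
print(feature.shape)  # (6510,)
```

The `order="F"` flag reproduces MATLAB-style column stacking, which is what the 6510-dimensional vectors fed to the SVM and RVM classifiers correspond to.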