Identifying User Static Expression Posture in an Office Environment Using Depth Camera (Kinect)
Literature Review
Nowadays the number of hours an individual spends in front of a computer screen is increasing rapidly: according to recent research, the average office worker spends almost 1,700 hours a year in front of a computer screen, sitting at a computer or laptop for an average of six and a half hours a day (Bailey, 2018). Prolonged working postures at computers in office environments carry ergonomic risk factors and computer-related injuries such as musculoskeletal problems of the neck, shoulders and back, eyestrain, poor body posture and carpal tunnel syndrome. These risk factors are not limited to shoulder, neck and back pain; they also affect overall health and cause long-term issues when not addressed. The resulting injuries to the body fall into four classifications: prolonged posture, awkward postures, repetitive activity and contact stress (Expert, 2017). Sitting can cause physical fatigue because parts of the body must be held steady for long periods of time, which reduces blood circulation to the bones, muscles, ligaments and tendons, leading to stiffness and pain (Department of Health & Human Services, 2015).
In this research we focus mostly on upper-body injuries and postures in the office environment, studying the occurrence of, and evaluating the risk factors for, neck, shoulder, hand and arm musculoskeletal symptoms and disorders of computer users. The most common neck and shoulder musculoskeletal disorder is somatic pain syndrome, and the most common hand and arm disorder is De Quervain's tendonitis (Gerr et al., 2002).
This study focuses on De Quervain's tendonitis and on implementing electronic devices to detect body expressions while working on computers in office environments for long hours.
The wrist and thumb are used in many actions during the course of a day; gripping, holding, lifting, turning handles, driving a car and many other daily maneuvers require the thumb and the wrist (Walker, 2017). Tendons attach muscles to bones and pull on a bone or joint when the muscle contracts. De Quervain's tenosynovitis is an overuse syndrome resulting in wrist pain. It most commonly presents in middle-aged women, classically in the postpartum period (Raducha, 2018).
Overuse injury, repetitive tasks that involve overexertion of the thumb, radial and ulnar deviation of the wrist, arthritis and pregnancy are the risk factors of De Quervain's tendinosis (Duha, 2017). One of its most common causes is prolonged use of a computer mouse.
[Figure: De Quervain's Tendinosis (Duha, 2017)]
[Figure: images from www.dhgate.com and http://faall.ir]
By defining and detecting De Quervain's tendinosis in the workspace user, we intend to measure the angle of the wrist in order to decrease the likelihood of an awkward wrist position on the mouse.
That said, we need to review a number of articles that have studied different sensors, 2D and 3D cameras, and the strategies taken before, so that we can then focus on areas that have not been studied sufficiently. Below we review a summary of each article.
Nonverbal communication, which includes communication through body postures, gestures, facial expressions and eye movements, makes up about two-thirds of all communication among humans (Hogan & Stubbs, 2003). Real-time human posture recognition and human activity detection remain tremendous challenges for computer vision systems (Zhuang et al., 2012). In this research we intend to identify user posture in an office environment using different sensors. The goal is to implement algorithms that can recognize the posture of the user while they are working at a desk.
The first step is to identify the types of bodily expressions, the obstacles and the existing computations, to study the state-of-the-art methods that have been employed in this area, and to implement new approaches that circumvent the existing obstacles.
Bodily expressions can be static or dynamic. Static expressions (postures/poses) are those in which the position does not change during the expression period. In dynamic expressions (gestures), the position is temporal and changes continuously with respect to time (Pisharady & Saerbeck, 2013).
Several approaches and methods have been used during the last couple of years, and the tedious computations of those approaches may make the algorithms unsuitable for real-time applications. These approaches also face difficulties such as the effect of varying illumination, the appearance of clothing and cluttered backgrounds (Zhuang et al., 2012). Moreover, the conventional 2D imaging scheme is sensitive to varying viewpoints. This problem motivates researchers to investigate 3D image analysis for posture recognition (Gu et al., 2010). For the 3D imaging sensor, stereovision and depth measurement are two typical techniques. The crucial shortcoming of the stereovision technique is that its accuracy decreases drastically with increasing distance from the camera head to the object. In contrast, a depth camera can provide significantly improved measurement accuracy within its working range (Ye & Bruch, 2010). The depth camera can calculate a depth map of the scene using the combination of an RGB and an IR camera (Pisharady & Saerbeck, 2013).
The motion-sensing device Kinect by Microsoft is an example of such a depth camera; it can solve many issues associated with 2D image data of the scene by providing features such as the coordinates of a skeletal model of the human body. 2D images suffer from performance issues, especially with non-frontal perspectives and with segmenting the image into background and foreground to identify the presence of a person in a given scene (Zhuang et al., 2012). An analysis of the samples misclassified by the recognition algorithm shows that errors occur when the Kinect sensor fails to track the skeleton properly; for example, tracking of the hands sometimes fails in the 'crossed hands' posture (Pisharady & Saerbeck, 2013).
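To make use of the skeletal coordinates reliably, frames in which the sensor loses track of the relevant joints should be discarded before any posture analysis. The following is a minimal sketch of such a filter in Python; the frame layout (a dictionary of joint names mapped to coordinates and a tracking state) is an assumption for illustration and does not reproduce any particular Kinect SDK structure.

```python
# Minimal sketch: discard skeleton frames whose upper-body joints are not
# tracked well enough before posture classification. The frame layout is
# hypothetical; real Kinect SDKs expose joints through their own structures.

REQUIRED_JOINTS = ("shoulder_left", "shoulder_right", "elbow_right",
                   "wrist_right", "hand_right")

def frame_is_reliable(frame, min_confidence=1):
    """frame: dict mapping joint name -> (x, y, z, tracking_state),
    where tracking_state is 0 = not tracked, 1 = inferred, 2 = tracked."""
    for joint in REQUIRED_JOINTS:
        if joint not in frame:
            return False
        _, _, _, state = frame[joint]
        if state < min_confidence:
            return False
    return True

# Usage: keep only frames whose required joints are fully tracked.
# reliable = [f for f in skeleton_frames if frame_is_reliable(f, min_confidence=2)]
```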
Based on depth images, the foreground human figure is easier to segment in 3D space than in a 2D projection (Zhuang et al., 2012). One approach employed to detect the foreground human figure uses a self-organizing pulse-coupled neural network (PCNN) for segmentation and the temporal difference between sequential depth images for moving-human detection (Zhuang et al., 2012). This PCNN is a hardware-oriented variant of the original PCNN; the neural circuit array can run in parallel on an FPGA[1] chip to perform the segmentation in real time, making it viable for analyzing cases with varying backgrounds (Zhuang et al., 2012).
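The temporal-difference step of this approach can be illustrated with a short sketch: motion is flagged wherever the depth value changes noticeably between consecutive frames. This is only an illustration of the general idea (the PCNN segmentation stage is not reproduced), and the threshold values are assumptions.

```python
import numpy as np

def motion_mask(prev_depth, curr_depth, threshold_mm=30):
    """Boolean mask of pixels whose depth changed by more than threshold_mm
    between two consecutive depth frames (uint16 arrays in millimetres)."""
    diff = np.abs(curr_depth.astype(np.int32) - prev_depth.astype(np.int32))
    return diff > threshold_mm

def contains_motion(prev_depth, curr_depth, min_changed_pixels=500):
    """Crude moving-person check: enough changed pixels implies motion."""
    return int(motion_mask(prev_depth, curr_depth).sum()) >= min_changed_pixels
```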
According to other studies, another way of evaluating and monitoring body posture, specifically sitting posture, is to design and implement an eCushion. The eCushion is an eTextile device made of fiber-based yarn coated with a piezoelectric polymer (Xu et al., 2011). There are some challenges in designing and implementing eTextile sensors, such as scaling, offset, crosstalk and rotation effects, and the goal of the study was to compensate the signal for accurate analysis. The eCushion achieved a high sitting posture recognition rate: 79% with general training and 92% with self-training. Based on the experiment, the general-training recognition method is more fair and objective (Xu et al., 2011), and more research on the general case is needed to achieve the intended goal in the future.
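As an illustration of the general idea behind pressure-map posture matching (not the eCushion pipeline itself), a new pressure reading can be normalized and assigned to the nearest posture template; the normalization and distance measure below are assumptions made for the sketch.

```python
import numpy as np

def normalize(pressure_map):
    """Scale a 2-D pressure map to zero mean and unit norm to reduce
    offset and scaling differences between sessions."""
    p = pressure_map.astype(np.float64).ravel()
    p -= p.mean()
    norm = np.linalg.norm(p)
    return p / norm if norm > 0 else p

def classify_posture(pressure_map, templates):
    """templates: dict posture_name -> template pressure map (same shape).
    Returns the posture whose template is closest in Euclidean distance."""
    sample = normalize(pressure_map)
    distances = {name: np.linalg.norm(sample - normalize(t))
                 for name, t in templates.items()}
    return min(distances, key=distances.get)
```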
All of the previous studies focused on 3D human postures. For a better understanding of the research, another study is mentioned in this literature review that focuses on 2D human postures. There are two ways of detecting human postures in 2D single images: in one, the locations of the joints (elbows, knees, etc.) are manually marked and labeled; the other is automatic detection of body postures based on Artificial Neural Networks (ANNs) (Souto & Musse, 2011). Four phases are identified in this study: 1) computing projection curves, 2) classification using ANNs, 3) identification of body parts and 4) detection of human posture. The goal is to generate a skeleton automatically, i.e. to estimate the positions of the joints in the human body. Experimental results showed that this technique works well on frontal-view images and for the upper part of the body. The method can present classification errors, with a maximum error rate of 16% on a training database; however, on a public database (Yao et al., 2007) the achieved error rate was 25%, and the wrong detections happened mainly when the ANN[2] misclassified the HPL[3] (Souto & Musse, 2011).
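The first phase, computing projection curves, can be sketched briefly: given a binary silhouette, summing along the rows and columns yields two one-dimensional curves that can be fed to a classifier such as an ANN. The sketch below illustrates the general technique only and is not the exact pipeline of Souto & Musse (2011).

```python
import numpy as np

def projection_curves(silhouette):
    """Return (vertical, horizontal) projection curves of a binary 2-D array
    where 1 marks the person and 0 the background."""
    silhouette = np.asarray(silhouette, dtype=np.float64)
    vertical = silhouette.sum(axis=0)    # one value per image column
    horizontal = silhouette.sum(axis=1)  # one value per image row
    return vertical, horizontal
```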
According to Mutlu et al. (2007), the research is based on pressure sensors installed on a chair, evaluating the performance of a physical deployment of the new technology using data from 19 sensors and cross-validation using data from 31 sensors. The classification accuracy of the near real-time 19-sensor technology is about 78%, and the classification accuracy of the 31-sensor technology is about 87% for ten postures. This information about seated postures could be used to help avoid the adverse effects of sitting for long periods of time or to predict seated activities for a human-computer interface. The benefit of the new 19-sensor technology is that it uses about 1% of the sensors deployed in previous work, thereby drastically reducing hardware and computational cost (Mutlu et al., 2007).
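The kind of comparison reported above can be sketched as cross-validated accuracy estimates computed once with all sensors and once with a reduced subset. The classifier choice, data shapes and sensor indices below are placeholders, not the setup of Mutlu et al. (2007).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def subset_accuracy(X, y, sensor_indices, folds=5):
    """Estimate posture classification accuracy by k-fold cross-validation
    using only the pressure sensors listed in sensor_indices.
    X: samples x sensors readings, y: posture labels."""
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X[:, sensor_indices], y, cv=folds)
    return scores.mean()

# Usage (shapes and indices are illustrative):
# acc_all = subset_accuracy(X, y, np.arange(X.shape[1]))  # full sensor set
# acc_sub = subset_accuracy(X, y, chosen_subset_indices)  # reduced layout
```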
Methodology for Data Analysis:
Based on the previous studies and our research and findings, we focus on De Quervain's tendonitis by using the Microsoft Kinect to indicate the correct angle of the wrist during prolonged hours of working with a computer mouse in an office environment.
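As a first sketch of this idea (with assumed joint names and an assumed deviation threshold, since the final values are yet to be determined), the wrist angle can be computed from the 3D positions of the elbow, wrist and hand reported by the Kinect skeleton, and flagged when it deviates too far from a neutral line:

```python
import numpy as np

def wrist_angle_degrees(elbow, wrist, hand):
    """Angle (degrees) at the wrist between the forearm and hand vectors,
    computed from 3-D joint positions."""
    elbow, wrist, hand = (np.asarray(p, dtype=np.float64) for p in (elbow, wrist, hand))
    forearm = elbow - wrist
    hand_vec = hand - wrist
    cos_angle = np.dot(forearm, hand_vec) / (np.linalg.norm(forearm) * np.linalg.norm(hand_vec))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

def wrist_is_awkward(elbow, wrist, hand, neutral=180.0, tolerance=20.0):
    """Flag the posture when the wrist bends more than `tolerance` degrees
    away from a straight (neutral) forearm-to-hand line; both values are
    illustrative assumptions."""
    return abs(wrist_angle_degrees(elbow, wrist, hand) - neutral) > tolerance
```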
References:
- Bailey, G. (2018). Office workers spend 1,700 hours a year in front of a computer screen. Retrieved from https://www.independent.co.uk/news/uk/home-news/office-workers-screen-headaches-a8459896.html
- Expert, E. (2017). 4 Ergonomic Risk Factors You Should Address Immediately. Retrieved from https://healthworksergo.com/4-ergonomic-risk-factors-address-immediately/
- Department of Health & Human Services. (2015). Computer-related injuries. Retrieved from https://www.betterhealth.vic.gov.au/health/healthyliving/computer-related-injuries
- Gerr, F., Marcus, M., Ensor, C. and Kleinbaum, D., (2002), “A prospective study of computer users: I. Study design and incidence of musculoskeletal symptoms and disorders”, American Journal of Industrial Medicine
- Hogan, K., Stubbs, R., (2003) “Can’t Get Through: 8 Barriers to Communication”, Pelican Publishing Company, Gretna, LA
- Pisharady, P., Saerbeck, M., (2013) “Kinect Based Body Posture Detection and Recognition System”, IHPC, Singapore.
- Zhuang, H., Zhao, B., Ahmad, Z., Chen, S., and Low, C., (2012) “3D Depth Camera Based Human Posture Detection and Recognition Using PCNN Circuits and Learning-Based Hierarchical Classifier”, Singapore.
- J. Gu, X. Ding, S. Wang and Y. Wu, (2010) “Action and Gait Recognition From Recovered 3-D Human Joints,” IEEE Trans. Systems, Man, and Cybernetics, Part B: Cybernetics, vol.40, pp. 1021-1033.
- C. Ye and M. Bruch, (2010) “A visual odometry method based on the SwissRanger SR4000”, in Proc. of SPIE, vol. 7692, pp. 76921I(1-9).
- H. L. Zhuang, K. S. Low, W. Y. Yau, (2012), “Multi-channel pulse coupled neural network based color image segmentation for object detection,” IEEE Trans. Industrial Electronics, in press.
- Xu, W., Li, Z., Huang, M., Amini, N., and Sarrafzadeh, M., (2011) “eCushion: An eTextile Device for Sitting Posture Monitoring”, International Conference on Body Sensor Networks.
- B. Yao, X. Yang, and S. C. Zhu, (2007), “Introduction to a large-scale general purpose ground truth database: Methodology, annotation tool and benchmarks,” in Energy Minimization Methods in Computer Vision and Pattern Recognition, pp. 169–183.
- Souto, H., Musse, S., (2011), “Automatic Detection of 2D Human Postures Based on Single Images”, Pontificia Universidade Catolica Do Rio Grande Do Sul, Virtual Human Laboratory, Brazil.
- Mutlu, B., Krause, A., Forlizzi, J., Guestrin, C., and Hodgins, J., (2007), “Robust, Low-cost, Non-intrusive Sensing and Recognition of Seated Postures”, Carnegie Mellon University, PA.
- Walker, B., (2017), “What is DeQuervain’s Syndrome and what can you do to find relief?”, Retrieved from http://stretchcoach.com/articles/de-quervains-syndrome