ISSN (Online): 2319-8753 | ISSN (Print): 2347-6710

International Journal of Innovative Research in Science, Engineering and Technology
Open Access


Special Issue Article

Discern of Gestures and Tracking of Human Using Kalman Filter

S. Kanagamalliga1, Dr. S. Vasuki2, R. Sundaramoorthy3, J. Allen Deva Priyam4, M. Karthick5
  1. Assistant Professor, Department of Electronics and Communication Engineering, Velammal College of Engineering and Technology, Madurai, India
  2. Professor & Head, Department of Electronics and Communication Engineering, Velammal College of Engineering and Technology, Madurai, India
  3. U.G. Student, Department of Electronics and Communication Engineering, Velammal College of Engineering and Technology, Madurai, India
  4. U.G. Student, Department of Electronics and Communication Engineering, Velammal College of Engineering and Technology, Madurai, India
  5. U.G. Student, Department of Electronics and Communication Engineering, Velammal College of Engineering and Technology, Madurai, India
 

Abstract

This paper addresses the interrelated tasks of human detection and action recognition in video. Human actions are segmented from the cluttered foreground using the statistical Adaptive Background Mixture Model. A descriptor is computed by drawing a bounding box around each human in the video, and a count of the detected actions is displayed. Feature values are calculated from this descriptor, which allows the underlying dynamics of two space-time video segments to be compared irrespective of spatial appearance, such as differences induced by clothing, and with robustness to clutter. The calculated feature values are then used to extract human actions. An associated similarity measure is introduced that admits efficient exhaustive search for an action template, derived from a single exemplar video, across candidate video sequences; under occlusive conditions, the human is tracked using the adaptive (Kalman) filter.
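To make the pipeline described in the abstract concrete, the following Python/OpenCV sketch shows one plausible realisation of its three stages: adaptive background mixture model (Gaussian mixture) foreground segmentation, bounding-box extraction with a detection count, and a constant-velocity Kalman filter that keeps predicting the target position when occlusion suppresses the measurement. This is an illustrative assumption, not the authors' implementation; the input file name, blob-area threshold, and noise covariances are placeholder values.

```python
# Hedged sketch: background-subtraction-based human detection with a constant-velocity
# Kalman filter that predicts the target position through occlusions.
# "input.avi" and all tuning constants are illustrative assumptions, not paper values.
import cv2
import numpy as np

# Adaptive background mixture model (per-pixel Gaussian mixture).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

# Constant-velocity Kalman filter: state = [x, y, vx, vy], measurement = [x, y].
kalman = cv2.KalmanFilter(4, 2)
kalman.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
kalman.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
kalman.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kalman.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

cap = cv2.VideoCapture("input.avi")   # assumed input video
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Foreground segmentation; drop shadow pixels (value 127) and small noise blobs.
    mask = subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # OpenCV 4.x return signature assumed for findContours.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    humans = [c for c in contours if cv2.contourArea(c) > 1500]  # area gate (assumed)

    prediction = kalman.predict()                 # predicted (x, y) of the target
    if humans:
        # Measurement available: use the largest blob's bounding-box centre.
        x, y, w, h = cv2.boundingRect(max(humans, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        kalman.correct(np.array([[x + w / 2.0], [y + h / 2.0]], dtype=np.float32))
    else:
        # Occlusion: no measurement, so rely on the Kalman prediction alone.
        px, py = int(prediction[0, 0]), int(prediction[1, 0])
        cv2.circle(frame, (px, py), 4, (0, 0, 255), -1)

    cv2.putText(frame, f"detections: {len(humans)}", (10, 25),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:              # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The constant-velocity model is a common default for pedestrian tracking; when a detection is missing, the predict step alone carries the track forward, which is the role the abstract assigns to the adaptive filter under occlusion.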

Keywords
