
ISSN: 2375-4427

Journal of Communication Disorders, Deaf Studies & Hearing Aids

The Combined Effect of Captioning and Sign Language in Understanding Television Content in Deaf

1Department of ENT, Maharishi Markandeshwar Medical College and Hospital Solan, Himachal Pradesh, India
2S.R. Chandrasekhar Institute of Speech and Hearing, Bangalore, India
*Corresponding Author:

Received Date: Nov 08, 2017 / Accepted Date: Jan 16, 2018 / Published Date: Mar 10, 2018

Abstract

Objectives: The aim of the present study was to evaluate the benefits of combined use of captions and sign language interpretation in understanding television content.
Methods: Thirty prelingually deaf adults (25-45 y) participated in the study. The stimulus consisted of three video clips taken from the television (TV) program “Shaktimaan”, which already had an incorporated sign language video. Captions were added to two of the video clips. The experiment was conducted in three phases. In the pre-experimental phase, participants answered informational questions. In the experimental phase, participants were shown three video clips in three different viewing conditions, i.e., with sign language only (WSL), with captions only (WC), and with both sign language and captions combined (WSL:WC), and answered 10 comprehension questions after each video. In the post-experimental phase, they answered feedback questions based on the videos shown.
Results: The results showed a significant improvement in scores in the condition with the combined presence of captions and sign language (WSL:WC) compared with the other two conditions.
Conclusion: It was concluded that providing captions and sign language together would greatly help in making television accessible to a larger number of deaf adults than providing either alone. It was further concluded that the combined use of these aids may also improve comprehension of televised content depending on the individual’s proficiency with sign language or captions, though this needs further research.

Keywords: Captioning; Sign language interpretation; Television; Deaf; Hard of hearing

Introduction

In modern communication technology, broadcasting is an extremely effective means through which millions of people are unified as common recipients of a particular message [1]. In deaf people, the perceptual impairment permanently limits access to the audio component of televised speech, thus depriving them of information, education, entertainment and the pleasure of watching television. They are unable to capture the same quantity of information from their environment as people without hearing loss. As reported by the World Health Organization (WHO), there are about 250-300 million deaf people in the world; two-thirds of them live in underdeveloped nations, and of these, India has the largest share. As per the National Sample Survey Organisation (NSSO) 2001 survey, 291 persons per 100,000 population suffer from moderate to profound hearing loss, of whom 205 per 100,000 suffer from severe to profound hearing loss. Captioning the audio content has long been used to help this population access television media and has also made television one of the most effective educational media. Captioning technology has grown tremendously over the years. The most recent developments include Communication Access Real-Time Translation (CART) stenography, speech recognition technology, speech recognition software with dialogue revoicing, web captioning, etc. Presently, a number of organizations offer online captioned video, making it available to the growing number of Internet-savvy deaf and hard-of-hearing people worldwide, and online (Internet) video streaming services offer captioning on web videos. In the United Kingdom, approximately 50% of all live captioning was produced through speech recognition as of 2005; no comparable data are available for Indian television.

Several studies have evaluated different parameters of captioning, for example the rate of caption delivery [2], edited vs. near-verbatim captioning [3], and the effect of caption rate and text reduction [4], but very few studies have evaluated the benefit of captioning in understanding television content in the deaf population. Studies show that captions enhance deaf children’s ability to perceive the emotional complexity of presented information, i.e., the perception of characters' emotions and personality traits and predictions of how characters would behave in new but similar situations [5]. A few studies [6-8] have shown that captions are necessary only for certain types of television content, whereas for other types the image alone is sufficient; for example, captions contribute notably to the comprehension of characters’ intentionality and give greater coherence to the story, but do not contribute to the representation of characters’ mental states, feelings, thoughts and emotions. Some recent studies [9,10] have also focused on learning with real-time captions in educational settings and have found that captioning improves working memory performance relative to no captions for both hearing and deaf students.

In spite of these benefits, captions have certain limitations that restrict their use and effectiveness. The use of captions requires basic reading skills. In the deaf population, the utilization of captioning draws on an applicable knowledge base, memory processes, linguistic adequacy and the vocabulary of the language in which captioning is done. Studies show that captions are of limited benefit for the prelingually deaf population because of the need for appropriate reading skills and because of processing difficulties with the captioned language's syntax and vocabulary, accessing phonological representations, making inferences, understanding figurative language, and using short-term memory efficiently [11]. Children who are deaf lag behind hearing children in reading achievement as measured by grade level, and the lag broadens with age [12]. These constraints prevent deaf people from fully enjoying television, depriving them of information, education, and entertainment. Additionally, literacy among the deaf is extremely low: in India only 2% receive any education and even fewer succeed in reading, though men have a higher literacy rate than women [13].

Sign language presentation of the audio information is another alternative for this population. In this approach, a video of a sign language interpreter is embedded in the video stream of the television program. Since sign language is not usually a signed version of the printed language, different sign languages may be used, such as ASL (American Sign Language), BSL (British Sign Language) or ISL (Indian Sign Language). According to Lane, Hoffmeister and Bahan (1996), ASL is the 6th most used language in the US, with 500,000 to 2 million signers. Although the precise number of sign language users in India is difficult to determine, it is estimated that ISL is used by over 1,000,000 deaf adults and 500,000 deaf children, with roughly 2,680,000 sign language users in India overall. Sign language interpreting has emerged as an essential support service for many deaf students, and some recent studies have examined the use of video sign language interpreting as a learning tool in educational settings [14]. A few studies have directly evaluated the comprehension of television content with sign language in the deaf population and show that, even with interpretation, deaf viewers do not benefit equally from news clips; sign language fluency, training and the interpreter's experience affect the quality of interpretation [15]. Current studies are also focusing on the efficacy of different modes of sign language interpreting services. Marschark examined deaf students’ learning via direct or live sign language [14,16]. Fajardo examined the efficacy of video-based sign language in web navigation [17] and also reviewed the implications of current sign language generation technologies for two tasks, i.e., information search and learning; that work showed that, although information content can be portrayed in sign language by means of videos of human signers, the issue of how captions and sign language together affect comprehension of the content remains unresolved and largely unexplored. Few studies have evaluated the combined effect of captions and sign language in the deaf. Stenson examined the utility of print relative to sign language interpreting in the classroom [18] and found that deaf students recalled more information when material was presented in print rather than interpreted. Marschark concluded that neither sign language interpreting nor real-time text has any inherent generalized advantage over the other in supporting deaf students in secondary or post-secondary settings [16]. D. Matjaz showed that the presence of captions positively affects the rate of comprehension among hard-of-hearing viewers; the most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the topics of hiking and culture, where comprehension was higher when captions were used. The efficacy of video-based sign language navigation in improving web search for deaf signers was tested by Fajardo; the findings showed that sign language videos added to text hyperlinks improve web search efficiency for the deaf. Very few studies in the Indian literature have evaluated the benefit of sign language interpreting, or of the combined use of captions and sign language interpreting, for understanding television content.

In India at present, most deaf people (approximately 60-78%) do not have access to computers, laptops, tablets, etc., and there are very few deaf channels, deaf programs and deaf-oriented entertainment options [19]. Deaf people experience a lack of access to technology because of their hearing impairment. Providing these technological innovations and solutions to the deaf can immensely benefit, empower and enhance the societal inclusion and participation of this population by providing access to knowledge and information. The Internet offers an opportunity for inclusiveness: to view the global community of its users as one while recognizing its rich diversity. Internet technologies have the potential to give persons with disabilities the means to live on a more equitable basis within the global community in a manner that was previously not possible. In India, however, barriers to accessibility converge with inaccessible and unaffordable technologies. Providing captions and sign language interpreting of televised content for the deaf supports the cause of the Persons with Disabilities (PWD) Act, enacted in India in 1995, which recognized the right of disabled persons to full participation in society and to equality by:

1) Providing equal opportunities through accessibility of information regarding education, employment and development,

2) Protecting their right by fair, equal and non-discriminatory access and

3) Providing opportunities for full participation in society.

This is also provided for in the UN Convention (2006) under Article 9 (accessibility of information, communication and other services, including electronic services and emergency services) and Article 21 (access to information, encouraging the mass media, including providers of information through the Internet, to make their services accessible to persons with disabilities). India is also a signatory to the UNCRPD, working towards an inclusive, barrier-free and rights-based society for persons with disabilities in Asia and the Pacific [16]. The National Telecom Policy 2011 recognizes telecom and broadband connectivity as a basic necessity, like education and health, and works towards a Right to Broadband Act.

This study was conducted with the aim of evaluating the benefits of the combined use of captions and sign language in the deaf population. Additionally, the frequency distributions (percentages of participants) for the questionnaire on TV viewing and captioning habits were analyzed.

Material and Methods

Participants

A total of 30 prelingually deaf adults (20-45 y; mean age 35.5 y) participated in the study. Six subjects were recruited from the deaf club of AYJNIHH and 24 from the ISHARA Foundation for the Deaf. Participants were included only if they had hearing thresholds greater than 90 dB in the better ear, had been using sign language for more than 2 years for most of their communication, had adequate English reading skills, and had vision corrected to normal (20/20) by contact lenses or glasses. Even though participants shared these common characteristics, they represented a heterogeneous group: 11 used a hearing aid and could understand speech presented at higher intensities together with speech reading; 7 had started intervention before the age of 5 years, while 19 had received late intervention, did not use hearing aids, and were totally dependent on sign language and gestures; 12 had been using sign language since childhood, while the rest had started using sign language only 5 to 6 years earlier; and 2 had deaf parents, but only one had used sign language with the parents. Detailed audiological reports could not be retrieved for all subjects. To ensure adequate vision, reading skills and basic conceptual knowledge, participants were required to view and read four practice sentences on screen, presented as questions requiring one-word answers (e.g., “What is your name?”); they had to write down the answers with 90% accuracy to be included in the study. All subjects could understand and write simple and compound sentences.

Stimulus and scoring

The stimulus consisted of three short video clips of approximately 4 min each, taken from the television program ‘Shaktimaan’. This program was chosen for two reasons:

1) The program offers sign language already incorporated for viewers who are deaf;

2) The program is in Hindi and does not have syntactically or semantically complex language.

The source video consisted of 7 short story clips, each with its own theme: 4 clips of approximately 2 min each and 3 clips of approximately 1-1.15 min. The whole video was divided into three parts. The first and second parts each consisted of 2 story clips (2 min each); the third part had 3 story clips (1 min each). This was done to make the three parts equivalent in duration, content and length. In one part, captions were incorporated and the sign language display was covered with a black overlay, so that only the captions were displayed. In the second part, captions were incorporated along with the sign language display. Captions were added with Studio (version 9) video editing software. Taking into account the literature on the technological features [2,3] affecting caption comprehension in the deaf, the following parameters were used: the captioning speed was kept at 90 words/min; the number of lines never exceeded 2; edited captioning was used so that speed and vocabulary could be manipulated as required; captions appeared 1 s before and remained 1 s after the spoken sentence; and the on-screen time of a caption was never less than 4-5 s, depending on the length of the sentence (an illustrative sketch of how these timing constraints combine is given at the end of this subsection). Equivalency in complexity and reading level was obtained by adhering to two main steps [20,21], i.e., rating of content and scene-by-scene comparison. Ten normal-hearing educated subjects rated each video's content on a three-point difficulty scale, where 0 denoted very easy, 1 average and 2 very difficult; sentences rated 0 or 2 were discarded. Secondly, a scene-by-scene comparison was done to ensure that the three videos included equivalent types of scenes, i.e., an equal number of clips of each of the following three types:

1) Clear head shots with audio and lip reading cues

2) With audio and visual cues and

3) Only audio without any other cue.

The sign language display was on the right side of the screen. The stimulus was further verified by three teachers of the deaf: one working in the ISL cell of AYJNIHH and two working at the ISHARA Foundation. The necessary changes in vocabulary and captioning parameters were made as required. Ten comprehension questions were formulated for each video clip, giving 30 questions in total (10 × 3). Most questions were inferential, requiring understanding of the concept of the content. Since during testing it was found that most subjects had difficulty writing grammatically correct sentences, the questions were changed from open set to closed set with 4 options, thus reducing the demand for sentence formulation. Each question carried 2 points, so the maximum score was 60 (20 per video). Additionally, two questionnaires were formulated: the first to acquire information regarding television habits, caption use and sign language use, and the second to obtain feedback regarding the video clips shown during the experiment.
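The caption timing described above can be summarized as a small set of constraints: a presentation rate of at most 90 words/min, a 1 s lead-in and 1 s lag around the spoken sentence, and a minimum on-screen time of about 4 s. The Python sketch below is purely illustrative of how these constraints combine for a single sentence; the function name and example timings are hypothetical, and the captions in this study were actually edited and timed manually in the Studio 9 software.

```python
# Illustrative sketch only (not the authors' tooling): compute the display window
# for one caption under the parameters reported above. All names are hypothetical.

def caption_window(speech_start_s: float, speech_end_s: float, n_words: int,
                   max_rate_wpm: float = 90.0, lead_s: float = 1.0,
                   lag_s: float = 1.0, min_on_screen_s: float = 4.0):
    """Return (caption_start, caption_end) in seconds for one captioned sentence."""
    # Caption appears 1 s before the utterance starts and stays 1 s after it ends.
    start = speech_start_s - lead_s
    end = speech_end_s + lag_s

    # Minimum duration needed so the presentation rate does not exceed 90 words/min.
    rate_limited_s = n_words / (max_rate_wpm / 60.0)

    # Enforce both the rate ceiling and the minimum on-screen time.
    duration = max(end - start, rate_limited_s, min_on_screen_s)
    return start, start + duration


# Example: an 8-word sentence spoken from 12.0 s to 14.5 s.
print(caption_window(12.0, 14.5, 8))
# -> (11.0, ~16.33): the window is stretched so the effective rate stays <= 90 wpm.
```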

Procedure

The whole testing procedure was divided into three phases: pre-experimental, experimental and post-experimental. In the pre-experimental phase, informed consent to participate in the study was obtained from all subjects. A brief case history of each participant was taken from the teachers, and previous audiological reports were retrieved where possible. Two teachers of the deaf participated in the process. In this phase the participants were required to read the informational questionnaire; the questions were also presented in sign language by the teacher, and answers were either written or signed to the teachers and later recorded. In the experimental phase, participants were seated in a comfortable chair in front of a 14-inch flat screen, with an adequate level of lighting, in a quiet room. Testing conditions were kept constant and testing was done in groups of 3 to 5 subjects. Participants were allowed to adjust their seating position to obtain the most comfortable viewing distance. This was followed by a visual acuity check in which four practice sentences were presented and participants were asked to repeat and answer the captioned sentences; they were required to repeat three of the four sentences correctly to ensure that the captions could be seen and read. The video clips were presented in three conditions, in the following sequence:

1) With sign language only (WSL)

2) With Captions only (WC) and

3) With both sign language and captions (WSL:WC).

Following the presentation of each video clip, participants answered the comprehension questions by ticking the correct option. In addition, each question was signed by the teacher, and subjects were allowed to ask if they had difficulty understanding a question. A video clip was presented twice if requested. During the post-experimental phase, participants were given open-ended feedback questions regarding the subjective benefit of captioning and sign language, which were also signed if needed; subjects wrote down or signed their answers. The whole procedure was completed in about 1.5 hours (Tables 1 and 2).

Pre-experimental phase: Completion of the informational questionnaire (5 questions in total).
Three information questions:
1) First language
2) Duration of sign language use
3) Literacy level
Two MCQs rated on a 4-point scale (“0” Never, “1” Sometimes, “2” Usually, “3” Always):
4) TSTV (Time spent in television viewing)
5) UC (Use of captions)

Experimental phase: The video stimuli were presented on a computer screen and participants recorded their responses on a record sheet.
Stimulus   Condition   No. of questions   Max score
Video-1    WSL         10                 20
Video-2    WC          10                 20
Video-3    WSL:WC      10                 20

Post-experimental phase: Completion of the feedback questionnaire.
Three MCQs rated on a 4-point scale (“0” Very difficult, “1” Difficult, “2” Adequate, “3” Easy) and three MCQs rated on a 4-point scale (“0” Very fast, “1” Fast, “2” Adequate, “3” Slow):
1) VDL (Video difficulty level)
2) CVDL (Caption vocabulary difficulty level)
3) SLDL (Sign language difficulty level)
4) CSP (Speed of captions)
5) SLSP (Speed of sign language)
6) SBSLC (Subjective benefit of sign language and captioning)
Two other questions:
7) Preferred mode: sign language or captions
8) Describe any difficulty encountered when both are presented.

Table 1: Summary of the procedure

Condition   N    Mean score   SD
WSL         30   12.70        3.120
WC          30   12.57        3.401
WC:WSL      30   16.37        1.829

Table 2: N, mean and SD of the scores obtained in the three viewing conditions.

Statistical analysis

Mean comprehension scores in the three viewing conditions (WSL, WC and WC:WSL) were compared using a repeated measures one-way analysis of variance (ANOVA), followed by post hoc Tukey HSD tests for pairwise comparisons between conditions, with significance set at p<0.05. Responses to the pre- and post-experimental questionnaires were summarized as the percentage of participants selecting each rating category.

Results

Figure 1 shows the mean scores obtained in the three conditions, i.e., with captions (WC), with sign language (WSL), and with sign language and captions combined (WC:WSL). The maximum scores were obtained in the combined condition (WC:WSL); scores were lower in the other two conditions. It was hypothesized that sign language users would score better in the combined presence of sign language and captions (WC:WSL) than with either sign language (WSL) or captions (WC) presented individually. A repeated measures one-way analysis of variance (ANOVA) was used to determine whether there was a significant difference across the three conditions (WSL, WC, WC:WSL). The results revealed a significant difference across the three conditions [F(2,58)=23.73; p<0.0001]. A post hoc Tukey HSD test was used to determine differences between the means of individual conditions (HSD[0.05]=1.51; HSD[0.01]=1.9). There was a significant difference between the means of WSL [M=12.7] and WC:WSL [M=16.66], and between WC [M=12.56] and WC:WSL [M=16.66]. No significant difference was seen between the means of WSL [M=12.7] and WC [M=12.56]. This indicates that comprehension scores were significantly higher in the combined presence of sign language and captions for sign language users, whereas there was no difference between comprehension scores when sign language or captions were presented individually. Thus, the combined use of sign language and captioning improved understanding of televised content in sign language users.
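For readers who wish to reproduce this type of analysis, the following minimal Python sketch uses pandas and statsmodels with synthetic scores drawn to roughly match the means and standard deviations in Table 2 (the individual participant scores are not published). It is not the software actually used in the study, and the Tukey HSD helper shown treats the three conditions as independent groups, simply mirroring how the post hoc test is reported above.

```python
# Reanalysis sketch with synthetic data; assumes numpy, pandas and statsmodels.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
n = 30  # participants

# Synthetic comprehension scores per condition (max possible score per video = 20),
# drawn to approximate the descriptive statistics reported in Table 2.
scores = {
    "WSL":    np.clip(rng.normal(12.7, 3.1, n), 0, 20),
    "WC":     np.clip(rng.normal(12.6, 3.4, n), 0, 20),
    "WC:WSL": np.clip(rng.normal(16.4, 1.8, n), 0, 20),
}

long = pd.DataFrame(
    [(subj, cond, score)
     for cond, vals in scores.items()
     for subj, score in enumerate(vals)],
    columns=["subject", "condition", "score"],
)

# Repeated measures one-way ANOVA with condition as the within-subject factor
# (three conditions, 30 subjects -> df = 2, 58 as reported in the Results).
res = AnovaRM(long, depvar="score", subject="subject", within=["condition"]).fit()
print(res.anova_table)

# Post hoc Tukey HSD pairwise comparisons across the three viewing conditions.
print(pairwise_tukeyhsd(long["score"], long["condition"], alpha=0.05))
```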


Figure 1: Mean scores for the three viewing conditions, i.e., with captions (WC), with sign language (WSL), and with sign language and captions (WC:WSL).

Figure 2 shows graphs depicting the percentage of participants choosing each point of the 4-point scale (“Never”, “Rarely”, “Frequently”, “Always”) for the two multiple-choice questions (MCQs) of the pre-experimental questionnaire. None of the participants reported that they never watch television, in spite of their inability to hear. For UC (use of captions), most participants reported that, in spite of their hearing loss, they only “rarely” use captions.


Figure 2: Bar graphs depicting the percentage of participants who rated time spent watching television (Graph A) and use of captions (Graph B) on a 4-point scale in the pre-experimental questionnaire.

Figure 3 shows bar graphs depicting the percentage of participants who answered “Very difficult”, “Difficult”, “Adequate” or “Easy” on the three MCQs of the post-experimental questionnaire. For VDL (video difficulty level), 40% of participants rated the videos “Difficult”, 40% “Adequate” and 6.7% “Very difficult”. For CVDL (caption vocabulary difficulty level) a similar pattern was seen, with 40% rating it “Easy” and 6.7% “Very difficult”; for SLDL (sign language difficulty level), 46% rated it “Adequate” while only 3.3% rated it “Very difficult”.


Figure 3: Bar graphs depicting the percentage of participants who rated video difficulty level (Graph C), caption vocabulary difficulty level (Graph D) and sign language difficulty level (Graph E) on a 4-point scale in the post-experimental questionnaire.

Figure 4 shows bar graphs depicting the percentage of participants rating speed on a 4-point scale in the post-experimental questionnaire. Graph G shows that CSP (caption speed) was rated “Adequate” by 80% of the participants, while Graph F shows that SLSP (sign language speed) was rated “Fast” or “Very fast” by 33.3% and 36.7% of the participants, respectively.


Figure 4: Bar graphs depicting the percentage of participants who rated the speed of captions (Graph F) and the speed of sign language (Graph G) on a 4-point scale in the post-experimental questionnaire.

Discussion

The findings of the study showed a significant improvement in understanding televised content in deaf individuals when captions and sign language were presented together. In contrast, most previous findings have shown the opposite, stating that visual presentation of two sources of information at the same time creates a significant impediment to information integration, and suggesting that deaf students are no more likely to be visual learners than hearing students and that their visuospatial skills may be related more to their hearing than to sign language. Mayer demonstrated similar results: when hearing students were required to split their visual attention between presented text and supporting visual material, the visual material overpowered the text, resulting in reduced utilization of both sources of input [22]. The contradictory findings in the current study can be explained by various factors. The primary factor could be the heterogeneity of the participants, which may have confounded the results: 11 subjects used a hearing aid and could understand speech when presented loudly and with speech reading; 7 subjects had started intervention before the age of 5 years; 19 subjects did not use hearing aids and were totally dependent on sign language and gestures; and only 40% of the participants had been using sign language since childhood, while the remaining 60% had started using sign language only 5-6 years earlier, indicating different levels of proficiency in sign language. A second factor could be the type of stimulus used in the study. Most of the participants rated the sign language as “fast” or “very fast”, indicating that most would have found the sign language difficult to interpret; this might explain the lower scores in the WSL condition. Most subjects reported that they use captions for learning English, and 46% reported that they only “rarely” use captions for television viewing, attributing this to the inability to attend to the visuals of the video at the same time. They reported that they preferred captions for certain programs only, such as news and discussions. For programs like dramas, stories, movies and serials, subjects reported that they prefer deriving information from the visual picture cues and only use captions for literal words such as the name of a person or object. They reported using visual cues even when they do not understand the whole content, and said that reading was taxing and reduced the enjoyment of watching television. This might be the reason for the low scores in the captions-only (WC) condition. In the third condition (WC:WSL), participants had the option of using either the captions or the sign language as convenient; each participant could use their preferred mode, resulting in better mean scores in this condition. An alternative explanation could be visual compensation in deaf adults, which would have helped them take advantage of both information sources simultaneously. Deaf students have greater experience in receiving information in a variety of formats. Various studies suggest that deaf individuals have enhanced visual attention relative to their hearing peers because of their reliance on the visual modality, and thus greater peripheral visual acuity. Bavelier compared normally hearing individuals and congenitally deaf individuals as they monitored moving stimuli either in the periphery or in the center of the visual field [23].
When participants monitored the peripheral visual field, greater recruitment of the motion-selective area MT/MST was observed in deaf than in hearing individuals, whereas the two groups were comparable when attending to the central visual field. This finding indicates an enhancement of visual attention to peripheral visual space in deaf individuals. It is possible that, when sign language and captions were presented simultaneously, deaf viewers were able to attend to the visuals (sign language or picture) and the captions together, improving their ability to understand televised content. It should be noted that, even though the study shows an improvement in objective scores in the presence of combined sources of information, in the subjective feedback questions most of the subjects reported that they either used only captions or only sign language and never used both together. Twenty-four subjects reported that they prefer captions only in certain types of programs, such as news, while preferring sign language in movies, game shows, serials, etc. Some subjects reported that giving too much information through the visual modality would result in distraction, but studies on multimedia learning suggest that, in spite of many sources of input through the visual modality, subjects are likely to end up focusing on only one source [24]. Therefore, providing two inputs together gives subjects the opportunity to focus on their preferred mode. It is concluded that deaf adults, as a group, benefit more if captioning and sign language are presented together, possibly because they can choose the mode that best suits their proficiency. Therefore, for television and other mass media, both options could be made available to make content more accessible to persons with hearing impairment or difficulty hearing. Captions are known to improve and enhance the reading skills of the deaf. Some studies suggest a reciprocal relationship between reading ability and television comprehension that requires language skills, regardless of the modality of communication, for developing a lexical understanding of word-based language and for acquiring background knowledge [25,26]. Especially in a country like India, where there is limited awareness of assistive listening devices for television viewing and most people cannot afford high-technology hearing aids, captioning can prove to be the most beneficial and cost-effective assistive device for the hearing-impaired population in meeting their daily communication needs and interests. Sign language interpreting should also be incorporated along with captions on the television screen to make all programs accessible to the deaf. All public mass media should incorporate captioning and a sign language interpreter screen. Other media such as videos, DVDs and Internet content should also offer options for closed captions and sign language. Just as closed captions can be activated when required, similar technology should be developed for a sign language interpreter screen that can be activated when required. Though this task requires skilled professionals and is time consuming, it will be an effective step in the effort to mainstream the deaf and hard-of-hearing population.

The study was conducted with the utmost care, but it has certain limitations. The sample size was small and may not be representative of the whole deaf population.

Future research is warranted to investigate the same questions across a wider age range. Researchers should also investigate the visual attention abilities of deaf individuals and the visual compensation that takes place because of the deprivation of auditory stimulation in the initial years of life, so that the provision of captioning and sign language can be made more effective and beneficial for this population.

References

Citation: Sharma D, Rao RR (2018) The Combined Effect of Captioning and Sign Language in Understanding Television Content in Deaf. Commun Disord Deaf Stud Hearing Aids 6: 182. DOI: 10.4172/2375-4427.1000182

Copyright: © 2018 Sharma D, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
