ISSN: 2469-9837
International Journal of School and Cognitive Psychology

Towards Understanding Human-media Interaction: The Effect of Computer's- vs. Teacher's Presence and Voice on Young Users' Behavioural Interaction Development through a Digital-Playground

Agina AM*, Kommers PA and Heylen Z

Department of Communication Studies, University of Twente, The Netherlands

*Corresponding Author:
Adel M. Agina
Professor, Department of Communication Studies, University of Twente
P. O. Box 217, 7500 AE
Enschede, The Netherlands
Tel:
+218924887110
E-mail: [email protected]

Received date: September 26, 2015 Accepted date: October 21, 2015 Published date: October 29, 2015

Citation: Agina AM, Kommers PA, Heylen Z (2015) Towards Understanding Human-media Interaction: The Effect of Computer's- vs. Teacher's Presence and Voice on Young Users’ Behavioural Interaction Development through a Digital-Playground®. Int J Sch Cog Psychol S2:011. doi:10.4172/2469-9837.S2-011

Copyright: © 2015 Agina AM, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


Abstract

Problem: Despite the massive body of research on users' interaction, the literature still lacks an investigation of the effect of the computer's- vs. teacher's presence and voice on young users' interaction behavioral development during progression. Purpose: To explore the effect of the computer's- vs. teacher's presence and voice on young users' interaction behavioral development during progression. Methods: Two types of interactional units (nonverbal vs. verbal encouragement cues) were applied and investigated with forty preschool young users. The participants were divided equivalently between the two conditions by their teachers. It was hypothesized that young users who acted alone with the computer would interact more with the environment than those who acted with their teacher. Findings: First, the hypothesis was confirmed, with no significant differential effect of gender on task performance. Second, the injudicious use of encouragement cues hindered the participants' interaction behavioral development. Third, the elicitation of compulsory-interaction, undesirable-interaction, inner-interaction, and spontaneous-interaction followed fully different mechanisms. Conclusions: The computer's presence and voice promote more interaction from young users than the teacher's presence and voice, and each single type of interaction has a different mechanism and evaluation.

Keywords

Compulsory-interaction; Undesirable-interaction; Inner-interaction; Spontaneous-interaction; Private speech; Social speech; Self-regulation; Thinking aloud; Digital-playground

Introduction

In a conversation, in terms of Human-Media Interaction (HMI), researchers believe that participants, while listening, display various behaviors in response to the contributions of the speaker [1]. This display may take the form of nonverbal behaviors such as head nods and shakes and various kinds of facial expressions, or of vocalizations such as uh-huh, hmm, etcetera. These so-called listener responses, as HMI researchers call them [2,3], including what are commonly known as back-channels [4], are intimately connected with the contributions of the speaker (i.e., the external regulator). They signal that the contribution is being attended to, understood, and agreed upon, or convey some other attitudinal or affective reaction to it [2,5,6]. This dependence of the occurrence of a listener response on the contribution of the speaker has prompted many studies in HMI on the characteristics of the speaker's contribution that might act as cues or triggers for the responses, both from a linguistic perspective [3] and from a computational perspective. However, listener responses are optional. Even though listener responses are important for the successful completion of an interaction, this does not mean that when a listener does not provide a response at a certain moment the conversation will immediately break down. It is known that individuals differ in their choice of timing and type of listener responses. However, researchers in HMI have no real understanding yet of the causes of these differences [1]. Therefore, the assumption behind these studies is that listener responses do not occur randomly, or at the listeners' whims, but that, instead, there is some kind of dependence on the speaker's contribution. As reported in the previous work [7], the hope is to find algorithms that can produce appropriate responses in spoken dialogue systems or embodied conversational agents based on features derived from the speaker's contribution.

Thus, the present study was mainly conducted to explore the effect of two different speakers (computer's- vs. teacher's presence and voice) on young users' interaction behavioral development. To our knowledge so far, this subject has never been investigated in the literature. The present study, however, relies completely on the studies originally introduced by Agina et al. [8-10]. Researchers in education-related areas have gradually adopted the concept of self-regulation from psychology and adapted it to student learning and educational practice, which leads to the current concept of self-regulated learning [11]. Thus, the two terms Self-Regulation (SR) and Self-Regulated Learning (SRL) are interchangeable and have the same meaning in educational contexts. SRL, by nature, has been defined in several different directions that vary from one study to another and from one area of knowledge to another. Each definition was based on the researchers' background and perspective, and SRL has emerged not only as a multidisciplinary but also as an interdisciplinary field of research [12]. In philosophy the definition was based on self-control [13], in psychology on self-management [14] and self-efficacy [15], in cognitive research on self-generation [16], in motivational learning on self-motivation [17], and, most recently, a new definition of SRL in terms of computer gaming has been proposed as the ''learners' ability to direct their verbalization process and, simultaneously, monitoring their learning process's goals'' [12]. Remarkably, all those definitions lead to measuring young users' interaction behavioral development during progression. Interaction, per se, in traditional classroom learning focuses on the dialogues between instructors and students.

Previous research [18,19] described interaction as a component of the educational process in which a transformation of inert knowledge or information occurs, in terms of the transactional view where human factors and the environment are both taken into consideration. Learner-instructor (also: student-teacher or adult-young user) interaction refers to two-way communication between the instructor of the course and the learners [20]. This type of interaction is regarded as valuable by students and by many instructors. Learner-instructor interaction can take many forms. Some of them are indirect, such as instructors designing a course to stimulate students' interest in course content or to increase their motivation to learn. Evaluation is conducted by instructors to make sure learners are on track, and assistance such as guidance, support, and encouragement is available from instructors when necessary. Instructors are especially valuable when students are at the point of knowledge application [21]. This means that the instructor's feedback is important in learner-instructor interaction. With feedback from students, instructors ensure student comprehension of the course materials and receive information on their own performance in delivering course content.

Feedback from instructors is vital to students' achievement in their courses [22,23]. Students favor timely feedback from instructors. In contrast, a lack of immediate feedback brings about feelings of isolation and dissatisfaction [24,25]. Students who can easily communicate with their instructors are more satisfied with their learning than those who have difficulties interacting with their instructors. Accordingly, the question is whether different instructors (i.e., computer's- vs. teacher's presence and voice) with different interactional encouragement cues (i.e., nonverbal or enacted vs. verbal feedback) have an effect on young users' interaction behavioral development. This is what the present study is all about.

Analytical critiques on the previous work

Since the publication of the English translations of both Mind in Society [26] and Thought and Language [27], there has been a rediscovery of Vygotsky's ideas in the last two decades, not only in developmental psychology but also in various other disciplines, namely sociology, anthropology, work organization, and education [28]. In Human-Computer Interaction (HCI), Vygotsky's theories have been used as a basis for a number of studies, especially those related to education and learning, computer-supported collaborative work (CSCW), and, more recently, tangible and embodied interaction. As reported in the study [29], some of Vygotsky's more prominent ideas that have been applied to interfaces, computers, and systems include, for example, private and social speech and the zone of proximal development [8,9], spontaneous and scientific concepts [30], the socialization thesis [31], mediated activity [32], the use of tools and artifacts, and concept formation [33]. The present study, however, is an attempt to enable Vygotsky to speak about young users' interaction behavioral development in terms of HMI, based on the original research by Agina et al. [8-10]. On the one hand, researchers in HMI [1-3] use the term Listener Responses (LR) to describe the participant's reaction during progression and call that reaction the Participant's Display (PD). This PD, as HMI researchers believe, may take the form of nonverbal behaviors such as head nods and shakes, various kinds of facial expressions, or many vocalizations such as uh-huh, hmm, etc. [2,3]. The computational modeling of listener behaviors can thus only partly rely on cues from the speaker's contribution and needs to take into account also models of emotion or personality and many other factors. On the other hand, Vygotsky [27] originally introduced the term inner speech, and Piaget [13] used egocentric speech, to refer to the concept of Private Speech (PS).

Subsequent research, to date, has been fully guided either by Vygotsky [34-38] or by Piaget [39,40], with only one major difference (if it can be seen as a difference): the use of other alternatives to describe the concept of PS, such as self-verbalization [41], self-directed speech [42], and, most recently, task-related speech and self-talk [43]. Ironically, all those alternatives were introduced in the literature without explaining why, or what the difference is. Technically, all those alternatives simply refer to young users' overt speech to themselves during progression, exactly as Vygotsky and Piaget already defined it. Remarkably, this includes the participant's murmuring, such as ''offfff'', ''aha'', ''wow'', 'Omm' and so on, whispers, and inaudible lip movements, which researchers in HMI call vocalizations. In this context, it is highly expected that new terms will keep appearing in the literature to describe young users' reactions during progression, but without any valuable or major changes that may lead, or at least inspire, researchers to seriously think about a revolution in studying young users' behavioral development, including interaction itself. In more specific language, what value is added to the literature by new terms for the same phenomenon, beyond confusing readers and researchers alike, especially when searching the literature?

Methodological critiques on the previous work

Media can take many forms, such as computers, video games, TV, and other entertainment. Since the 1970s, computer and video games have entered children's world, where visual and auditory coding is necessary for successful games [44]. The games' main subject is always violence through adventure [45,46], through which children simultaneously satisfy their emotional needs. This is true because they do not only feel, but also 'taste', the arousal when playing against the rules [12]. On the one hand, some researchers [46-48] found that children are interested in challenging play and prefer computer and video games to TV, while their parents hold negative attitudes toward these games out of concern for their children's later behavior, academic achievement, and skills. On the other hand, other researchers [49] have shown that playing video games can be problematic for some youth, who achieved lower grades over the course, showed more aggressive impulses and heightened violence, and were more likely to infer hostile intent when none existed. This was related to heightened levels of aggressive behavior and a variety of psycho-social and health problems [50].

Accordingly, it is not surprising that teachers and parents are most concerned with violent games interfering with schoolwork, social skills, and exercise, and many parents feel that computer games activate negative emotions such as aggression, loss of environmental attention, and social withdrawal (Kutner, Olson, Warner and Hertzog). Many studies [11,51,52] have clarified the main complexities of young users' behavioral development in school contexts; for instance, the effects that had to be determined to know how self-regulation occurs, which is the key factor of one's self-interaction. Schools with children are therefore complex places and much different from controlled laboratory settings with adults. A clear example of this complexity is seen in research on help seeking, which is an important behavioral development strategy, as all users require assistance at times to understand material or when they are confused about what to do [53]. Seeking help from others (e.g., teachers, peers, and parents) seems like a natural response; yet wide individual differences occur in students' frequency, amount, and type of help seeking. These differences suggest a complex interplay between social and motivational factors. Seeking help is ultimately a key factor that may positively or negatively affect interaction.

Importantly, in both educational and controlled laboratory settings, researchers [54-58] still continue, to date, to support their participants with explicit instructions before/during/after learning tasks to regulate themselves and to prompt them to talk or act when they are silent for long periods. This external intervention is typically in the form of prior training on how to use the material, encouragement through the external regulators to keep talking during the performance, or a questionnaire after the session. These practices are not recommended, as they place artificial constraints on the situation, change the cognitive processes and task activities required, and distort the natural spontaneous emergence of self-regulatory behavior [34]. In other words, the participants are forced to interact. To be sure that the subjects actually report their mental states without distorting them [59], it is important that the subjects do not feel that they are taking part in a social interaction, which is in fact Human-Human Interaction (HHI). This external intervention (i.e., HHI) may cause children to divide their cognitive capacity between the present task and understanding the external instructions, thereby forcing their cognitive process to work in different directions (i.e., towards a task-focus process vs. an external-focus process), which is the so-called extraneous cognitive load of learners that should be minimized during the learning process [60]. Specifically, when external regulators such as teachers in the classroom, on the one hand, interfere insufficiently to guide the participants, the participants' verbal/nonverbal cues, especially during the performance, might result in an inappropriate level of verbalization in which their verbalization is mostly feedback to the environment rather than to those instructions. On the other hand, when the external regulators interfere sufficiently, the participants who were asked to describe their interaction loudly through thinking aloud (i.e., spontaneous-interaction), as part of a research method, will not talk to themselves spontaneously but, instead, because they have been instructed and forced to interact with the environment (compulsory-interaction). This leads to the conclusion that interaction, by nature, is diversified and variable in terms of how it occurs.

Practical critiques on the previous work: interaction, by nature, is diversified and variable

Recent research clarified that interaction, by nature, is diversified and variable [8]. The diversity of interaction behavioral development ranges over inner-interaction, compulsory-interaction, undesirable-interaction, and spontaneous-interaction, each of which has a different mechanism (i.e., how it occurs). Accordingly, they defined the inner-interaction as "the participants' nonverbal thinking about the current task when they act alone and without HHI either before, during, or after the progression" and the spontaneous-interaction as "the participants' spontaneous verbal thinking about the current task when they act alone and without HHI either before, during, or after the progression". Based on their analytical critiques, they found that the compulsory-interaction corresponds to task-related utterances and the undesirable-interaction to task-unrelated utterances. Therefore, their study relied entirely on the computer to control the progression, with no sign of using HHI before, during, or after progression, which was basically developed through Self-Regulated Learning [8]. Thus, the present study relies completely on those types of interaction [10] to explore the different effects of the computer's- versus teacher's presence and voice on young users' interaction behavioral development.

The compulsory-interaction and undesirable-interaction vs. inner-interaction

In terms of both HMI and human nature, the participant's inner speech (i.e., inner-interaction) clearly contrasts with the compulsory- and undesirable-interaction. This fact helps explain why participants usually produce the compulsory- and undesirable-interaction: both collide with their free will. Consequentially, the present study defines the inner-interaction as "the participants' nonverbal thinking about the current task when they act alone and without HHI either before, during, or after the progression" [10]. This definition means that the inner-interaction differs completely from one task to another based, essentially, on the task complexity (simple vs. complex level) and task precision (correct vs. incorrect answer). The present study analyses how the young users' inner-interaction with the Digital-Playground occurs, how it can be evaluated at each task, and how it can be distinguished and differentiated from the other types of interaction.

The compulsory-interaction vs. spontaneous-interaction

Remarkably, many studies in the literature have examined the phenomenon of interaction behavioral development in terms of participants' speech, especially private vs. social speech during progression. Most of those studies focused their analysis on the participants' verbalization in terms of thinking aloud, which is usually the behavior under study [8,9,34-40,58,61-63]. Remarkably enough, thinking aloud has always been considered the highest level of the participants' interaction, given that the participants should be spontaneously talking to themselves during progression and loudly thinking about their exact thoughts and feelings regarding the given task. In terms of human nature, however, this ''thinking aloud'', as a method of eliciting data, is not the same as natural ''thinking aloud'' in the everyday sense, which entails something other than sitting people down next to a tape recorder and asking (i.e., forcing) them to talk and/or think loudly [64]. Stated differently, pure thinking aloud, by nature, should occur spontaneously and without any HHI.

In terms of HMI, thinking aloud, based on its natural mechanism, can be considered the spontaneous-interaction, which differs completely from the compulsory-interaction and inner-interaction. Accordingly, the present study defines the spontaneous-interaction as "the participants' spontaneous verbal thinking about the current task when they act alone and without HHI either before, during, or after the progression" [17]. More specifically, the participants who were asked to talk or think aloud, as part of the research method, will not talk to themselves or think aloud spontaneously. Instead, they talk to themselves and think aloud because they have been previously instructed, actually forced, to do so. Practically, they have been compelled/forced/obligated to interact with the stimulus material, which can clearly be considered a compulsory-interaction. Therefore, the presence of another person as an external regulator, such as the teacher, experimenter, or parents, creates the problem of separating the compulsory-interaction from the spontaneous-interaction, given the fact that both of them have the same mechanism of occurrence (i.e., both are verbal and task-related speech). Consequentially, it is a question whether young users, especially at an early age, are able to assimilate the meaning of the commonly recommended reminder "keep talking" or the common question "What are you thinking?" during progression so as to correctly verbalize their thoughts and feelings or show their actual level of interaction. The present study takes this issue into account by exploring how the spontaneous-interaction occurs and how it can be counted, distinguished, and differentiated from the compulsory-interaction and undesirable-interaction.

Analytical critiques on young users' interaction behavioral development

In terms of designing successful computer-based edutainment, young users have to be engaged as design partners from scratch to design, generate, and evaluate the stimulus material before using that product in the actual experiments. Otherwise, motivation will always be needed during progression to keep children going, which is an undesirable factor in terms of distorting the young users' spontaneous interaction. Specifically, motivating young users during progression will force them to interact compulsorily (i.e., compulsory-interaction). Remarkably, most, if not all, of the ready-made edutainment available in the market nowadays was designed by adults. Thus, the results in the literature regarding young users' behavioral development, so far, still repeat themselves with no major change in terms of a revolution in studying young users' interaction behavioral development [65]. In terms of educational psychology, young users should not get any training on how to use the stimulus material before running the actual experiment, to avoid cognitive overload (i.e., the natural process of the young users' spontaneous-interaction would otherwise be negatively affected, either by the edutainment's fashion and/or style, or socially by the teacher/experimenter who trained the young users). Therefore, the stimulus material, per se, has to be smart enough to run such a friendly-chat questionnaire whenever needed, but without overloading the young user's current cognitive or thinking processes.

Psychologically, a "fashionable and stylish" interface does not mean the product will definitely be accepted by young users, especially when gender, just for instance, is treated as an independent key. Many experiments have failed because of adult-based design, as many others have failed because of the difference between the gender of the game's hero and that of the young user [12]. Cognitively, the stimulus material has to consider the negative effect of Children's Split Attention (CSA) as well as Children's Cognitive Overload (CCO) during progression. Otherwise, there will be no chance to separate the verbalization of the young users' private speech from social speech and thinking aloud, which are the keys that describe and clarify the content of the children's feelings and thoughts. For instance, the background of the stimulus material during progression may completely distort the young users' thinking process, so that it becomes cognitively overloaded, and the verbalization quality becomes mixed between private speech, social speech, and thinking aloud. In a case like this, it is impossible to separate and classify the verbalization. Meta-cognitively, thinking aloud (i.e., spontaneous-interaction) is the more accurate means that young users spontaneously follow to describe their feelings and thoughts, whereas feelings and thoughts control the young users' interaction behavior development. Most importantly, young users' feelings and thoughts can only be obtained and measured through their speech and, more accurately, through thinking-aloud verbalization that should occur spontaneously and without instructing them to do so, which is the problem that still remains among researchers so far.

Why Should the Present Study Be Raised?

To date, previous research still relies on a human (i.e., teacher, instructor, experimenter, etc.) as an external guide/regulator, not only to control the experiment but also to control the delivery of the interactional voices of encouragement cues, especially during progression. Therefore, it relied on HHI to offer a training session on how to use the stimulus material before the actual experiment starts [10]. The current research in the literature concerning young users' interaction behavioral development can be divided into two main branches. The first branch of studies followed the Vygotskian view that self-regulation is behavioral, appears after and as a result of regulation by others in a specific task, and is promoted by external regulators. This is also applied in the studies of HMI [54-58]. The second branch of research on children's development followed the Piagetian view that self-regulation is psychological and promoted by giving children extensive opportunities to make choices and decisions. This is also applied in the studies of HHI [40,66]. However, both branches still rely on offering children external intervention in the form of instructions and guidance before the progression (as training on how to use the stimulus material), during the progression (as help whenever the young users need it), and after the progression (as a questionnaire), despite the fact that Piaget [13] argued that regulation by others hinders children's development. Accordingly, the present study was mainly conducted to explore the effect of the computer's- vs. teacher's presence and voice on young users' interaction behavioral development (i.e., HMI vs. HHI).

The research expectations and main questions

The present study assumed five different expectations. Each expectation is associated with a research question as follows:

Expectation (1): Computer’s presence and voice have a different evocative effect compared with the effect of the teacher’s presence and voice on young users’ interaction behavioral development during progression.

Question (1): What is the influence of the computer’s- vs. teacher’s presence and voice on young users’ interaction behavioral development?

Expectation (2): Computer’s presence and voice in contrast to teacher’s presence and voice elicits more compulsory-interaction (task- related utterances) than undesirable-interaction (task-unrelated utterances) from young users.

Question (2): How does the computer’s voice affect the ratio between young users’ compulsory-interaction (task-related utterances) and undesirable-interaction (task-unrelated utterances) during learning tasks?

Expectation (3): There is a significant difference between the effects of the computer’s- vs. teacher’s encouragement cues on young users’ manifested interaction.

Question (3): What is the influence of the computer’s- vs. teacher’s encouragement cues on young users’ manifested interaction?

Expectation (4): There is a significant difference between the effects of the computer’s- vs. teacher’s voice on young users’ task performance.

Question (4): What is the influence of the computer’s- vs. teacher’s voice on young users’ task performance?

Expectation (5): There is a significant difference between the effects of the computer’s- vs. teacher’s presence and voice on young users’ satisfaction during progression.

Question (5): To what extent does the computer, as an instrument, increase young users' satisfaction to act alone without their teacher during progression?

Methodology

The present study was an attempt to understand young users' interaction behavioral development by exploring the effect of the computer's- vs. teacher's presence and voice on young users during progression. The effect was explored through a special computer-based methodology that uses a special Digital-Playground®. The Digital-Playground® used a special model, which was developed through a number of pilot studies prior to this research. To our knowledge so far, this kind of methodology has never been used before for studying young users' interaction behavioral development, as it has never been used to analyze interaction in terms of HMI. It is very important to mention that the present study used the same experimental design, material, participants, tasks, experimental conditions, procedure, and results introduced by Agina et al. [9]. This is mainly to analyze the young users' interaction behavior development in two different directions. The first direction is to analyze the young users' interaction behavioral development (how does the interaction occur?). The second is to clarify the mechanism of the interaction (how does the interaction work?). The two directions were analyzed through two different conditions, in which each condition was controlled by a different external regulator/speaker. The first condition was regulated by the computer and the second condition was controlled by the teacher. In other words, the effect of HMI vs. HHI on young users' interaction behavioral development was the main research key in the present study.

Participants

The participants were 40 students (Mage=5.6 years) from Al-Mustakbel preschool, which is one of the public preschools in the center of Tripoli. The teachers, in co-operation with the experimenter, distributed the students into two equivalent groups in terms of age and gender. Each group involved 20 students (10 boys and 10 girls). All students spoke Libyan as their native language, which is a hybrid of Arabic and Italian and was also the language used by the stimulus material. The school medical records were reviewed for all the participants, mainly to ensure that there was no sign of attention deficit hyperactivity disorder (ADHD) or similar challenges such as autism spectrum disorders (ASD), or problems with hearing or vision such as color blindness. All the participants were familiar with the use of the computer in the classroom, and most of them at home as well.

Materials

The present study uses the Digital-Playground® version 1.1® [9], which involves two different modes of interactional voice: nonverbal vs. verbal encouragement cues. The two modes were applied by two different external regulators (computer's - vs. teacher's presence and voice).

The silent vs. spoken encouragement cues

The present study used two different instructional units of encouragement cues (nonverbal vs. verbal) during the allotted time of each task, which was 60 seconds per task. The teachers, in co-operation with the experimenter, basically developed the utterances of the encouragement cues based on their experience in the classroom, as shown in Table 1.

The application of the nonverbal vs. verbal encouragement cues during progression

The first 30 seconds of the allowed reaction time were intentionally left without verbal encouragement cues (i.e., the nonverbal unit), as the young users usually and naturally need them (i.e., they need time to regulate themselves, get ready, and respond, which is a natural setting). Offering encouragement instantly may therefore cognitively distort the students' ability to actually show their exact interaction behavioral development. This was an effort to follow exactly what the teachers normally do in the classroom. As our colleague teachers believe, encouragement cues should be applied cautiously and judiciously during learning tasks, as 'silence' is the natural setting for learners during learning progression; otherwise, the interaction behavioral development will most probably, if not definitely, be distorted, if not entirely destroyed. Thus, the first 30 seconds of each task represent the nonverbal unit (i.e., no encouragement cues were given), where the external regulator (i.e., the teacher or computer) had only to direct the child's attention to the computer screen, without any verbal encouragement cues, in case the young user asked for extra regulation/help.

Specifically, the teacher or computer, as the external regulator, may or may not 'silently' interfere, without verbal instructions, to direct the young user's attention to the computer screen in case the young user tried to communicate (e.g., seeking extra regulation/help, which is a very natural setting for learners during progression). Technically, the teacher interfered by using her finger to point to the computer screen whenever the young user tried to communicate or did not answer the task within the first twenty seconds of the nonverbal unit. The computer interfered only once, after 20 seconds of the nonverbal unit (i.e., 10 seconds before the verbal unit begins), by flashing the border of the task only once. Specifically, the Digital-Playground® expected that the young user would need at least one nonverbal encouragement in case the young user did not answer the task within the first 20 seconds of the nonverbal unit. The focus was on the first 20 seconds based on the teachers' experience in the classrooms. During the next 30 seconds of the allotted time (i.e., during the verbal unit), the external regulator (the teacher or computer) systematically interfered by verbalizing a specific encouragement cue every 10 seconds in order to motivate the young users to interact during progression. However, as an effort to offer the freedom that the young users need to select what they want with full free will, as they already experienced in their classroom, no verbal encouragements were offered during the task-level selection (Figure 1).


Figure 1: The young users select the complexity level with no encouragement cues.
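As a reading aid, the following is a minimal sketch of this schedule (not the authors' implementation), assuming one 60-second task window with a single silent border flash at second 20 and one spoken cue at seconds 30, 40, and 50; the callbacks has_answered, flash_border, and speak are hypothetical placeholders for the touch screen and audio output.

```python
import time

# English translations of the three verbal cues (Table 1); the originals were
# verbalized in the Libyan Arabic-Italian hybrid.
VERBAL_CUES = [
    "If you do not understand the question, touch the Princess and she repeats it.",  # at 30 s
    "We know that this is not an easy question, but you are smart enough to answer it.",  # at 40 s
    "Still, we are waiting for your answer.",  # at 50 s
]

def run_task(has_answered, flash_border, speak):
    """Drive one 60-second task window.
    has_answered(): polls the touch screen for an answer.
    flash_border(): performs the single silent (nonverbal) cue.
    speak(text):    plays one verbal encouragement cue."""
    start = time.time()
    flashed = False
    spoken = 0
    while time.time() - start < 60:
        if has_answered():
            return True                          # answered within the allotted time
        elapsed = time.time() - start
        if elapsed >= 20 and not flashed:        # nonverbal unit: one border flash at 20 s
            flash_border()
            flashed = True
        if elapsed >= 30 + 10 * spoken and spoken < len(VERBAL_CUES):
            speak(VERBAL_CUES[spoken])           # verbal unit: one cue every 10 s
            spoken += 1
        time.sleep(0.1)
    return False                                  # no answer: treated as incorrect
```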

The experimental conditions

To study the influence of the computer's- versus teacher's presence and voice, the two instructional units of the program (nonverbal vs. verbal) were offered to the participants in two different orders (i.e., the two different conditions of the experiment). In the first order, the young users worked with their teacher, and in the second order the young users worked alone with the computer. In both cases, the two instructional units (nonverbal vs. verbal) were applied in each task. For the sake of simplicity and clarity, the young users who received the Teacher's Encouragement Cues are referred to as the 'TEC-Condition'. In contrast, the young users who received the Computer's Encouragement Cues are referred to as the 'CEC-Condition'. It is also important to mention that the teacher was not present with the young users during the CEC-Condition (i.e., in the CEC-Condition the young users worked alone with the computer, without HHI before, during, or after progression).

Measuring the production of the young users’ compulsory- vs. undesirable-interaction

Following the previous research [10], the young users' compulsory-interaction was captured as any utterance about the task, any explanation or comment about the answer or question, or about the ongoing process (this is the same as what is already called task-related speech in developmental research concerning young humans).

Moreover, short utterances (i.e., murmuring such as ''offfff'', ''aha'', ''wow'', 'Omm', and so on, whispers, and inaudible lip movements) were also counted as compulsory-interaction. Any other utterances were considered undesirable-interaction (this is the same as what is already called task-unrelated speech in developmental research concerning young users). Table 2 shows a number of actual examples of the compulsory- and undesirable-interaction utterances that the young users produced during the progression.

The encouragement cues were verbalized by the computer/teacher in the hybrid of Libyan Arabic and Italian (written in Arabic letters); they are given here in English translation based on the exact meaning rather than a word-for-word rendering. The teacher/computer verbalized one of the three encouragement cues every 10 seconds:

If you do not understand the question, touch the Princess and she repeats the question once again. (verbalized at second 30)
We know that this is not an easy question, but we also know you are smart enough to answer it. (verbalized at second 40)
Still, we are waiting for your answer. (verbalized at second 50)

Table 1: The set of the encouragement cues used by the computer and teacher during the verbal unit.

Task-related speech, CEC-Condition (English translations):
The question is clear.
Even if I am not smart, the answer is easy.
Wow… wow-wow… (this utterance was verbalized as a song)

Task-related speech, TEC-Condition:
I do not understand the question and, without touching the Princess, I know the answer.
The difficult questions begin.

Task-unrelated speech, CEC-Condition:
The Princess' voice is sweet.
Can you fly, Superman?
Very sweet game.

Task-unrelated speech, TEC-Condition (children mostly directed their speech to the teacher, mostly as questions):
Why does the teacher not respond to me?
Teacher: is this the same homework task?
If you do not want to respond to me, then leave me alone.
Teacher: is my answer correct or incorrect?
This is not the same game my dad brought to me.
Teacher: is this the answer or that one?

Table 2: Examples of the students’ utterances.
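The coding rule described above can be summarized in a small sketch (an illustration of the rule, not the authors' coding tool); the task_related flag stands for the human coder's judgment from the video review, and the murmur list is an assumed example only.

```python
# Murmurs explicitly mentioned in the text as counting toward compulsory-interaction;
# the exact set used by the coders is an assumption here.
MURMURS = {"offff", "aha", "wow", "omm"}

def classify(utterance: str, task_related: bool) -> str:
    """Return the interaction category for one coded utterance."""
    if task_related or utterance.strip().lower() in MURMURS:
        return "compulsory-interaction"    # task-related speech
    return "undesirable-interaction"       # task-unrelated speech

# Example (from Table 2): classify("The question is clear", task_related=True)
# -> "compulsory-interaction"
```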

Counting the young users' inner-interaction

The present study used the same scoring system originally developed by the study [8] and improved by the study [9]. Specifically, after each task during the progression, the young users had to decide whether they wanted to proceed with a simpler task, by touching the letter pronounced "SEAN" on the green board, or with a more complex task, by touching the letter pronounced "SAD'" on the yellow board, as shown in Figure 1. Those decisions were considered and counted as "the correct interactional decisions of the young user's manifested inner-interaction" based on four developed principles.

Table 3 illustrates the four principles of the manifested inner-interaction and the rationale for using each principle.

Principle | The Principle Context | The Rationale of the Proposed Principle
1 | A user chooses a simple task after he could not complete the previous task because of time. | Because the user realizes that the time does not work in his favor and wants to take another correct try with the next task at a simple level.
2 | A user chooses a complex task after he completed the previous task correctly, whatever the level of the previous task was. | Because the user realizes that he can challenge another task, especially if his answer was correct AND the task level was complex.
3 | A user decides to continue with the complex task after he completed the previous task correctly. | Because the user realizes that he can challenge any coming task, whatever the next level is (simple or complex).
4 | A user decides to continue with the simple tasks after he completed the previous task incorrectly. | Because the user realizes that he should not go further with more complex tasks UNLESS he can answer the simple task(s) first.

Table 3: The four main principles of evaluating the young user's manifested inner-interaction.
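The following is a minimal sketch (an assumption about how the four principles of Table 3 could be operationalized, not the authors' scoring code) of when a level choice counts as a correct interactional decision of manifested inner-interaction.

```python
def is_correct_interactional_decision(prev_completed: bool,
                                      prev_correct: bool,
                                      chosen_level: str) -> bool:
    """
    prev_completed : False if the previous task ran out of time (60 s elapsed).
    prev_correct   : whether the previous answer was correct (only meaningful if completed).
    chosen_level   : "simple" or "complex" level chosen for the next task.
    """
    if not prev_completed:
        # Principle 1: after running out of time, choosing a simple task is correct.
        return chosen_level == "simple"
    if prev_correct:
        # Principles 2 and 3: after a correct answer (whatever the previous level),
        # choosing, or continuing with, a complex task is correct.
        return chosen_level == "complex"
    # Principle 4: after an incorrect answer, continuing with simple tasks is correct.
    return chosen_level == "simple"
```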

Measuring the task performance as a function of the young user's inner-interaction

The present study used the scoring system version 1.0, which was developed by the study [9] to cover all the possibilities; the Digital-Playground® scored a possibility as zero whenever it was inapplicable. Specifically, the computer, as the instrument controlling the entire progression through the proposed learning environment, was able to automatically score the task performance as correct/incorrect depending on the level of the participant's interaction for each task. More specifically, the system related the final judgment of the task precision (correct/incorrect) to the choice of task complexity level (simple/complex) that the participant made before the actual task was presented. Simply put, the score was related to the degree of the participant's manifested inner-interaction for the task itself and not only to the task precision. However, if the participant did not answer within the task's allotted time (60 seconds), the system considered that an incorrect answer (exactly as the teachers do in the classroom). Table 4 shows the list of scores and why the system used each score, as originally developed and improved by the studies [8,9].

Score | Score Context | Why this score?

0 | For the correct answer at the simple level and the incorrect answer at the complex level, IF AND ONLY IF the task-level choice was a simple level, regardless of the previous task precision. | Because the simple task can be easily answered even with a low degree of self-regulation, just as it is a natural response to answer the complex task incorrectly even with a high degree of self-regulation. Thus, the game scored zero points.

1 | For the mid-level IF AND ONLY IF the child answers the current task correctly. (Reminder: the mid-level means that the participant did not make a choice about the task level, more simple/difficult.) | Because of the probability that the participant may intentionally deselect the task level to examine what the system will present if he does not make a choice, which is a degree of self-regulation that is hard to know during the performance (i.e., it is impossible to know whether the child really followed that behavior or not). Thus, the system scored one point if the child's answer is correct, regardless of the task's actual level (simple/complex). Otherwise, the game scored zero points.

2 | For the correct answer at the complex level and the incorrect answer at the simple level, IF AND ONLY IF the task-level choice was a complex level and the previous answer was correct. | Because the child already regulated himself to face a complex task based on the correct answer to the previous task, which naturally requires a high degree of self-regulation, the incorrect answer to the simple task is ineffective on the child's manifested self-regulation. Thus, the game scored two points even if the current task is simple and the child's answer is incorrect. Otherwise, the game scored zero points.

3 | For the correct answer to the given task [simple/complex] IF AND ONLY IF the level choice of all the previous tasks was simple AND the child responded WITH receiving encouragement cue(s). | Because the child regulated himself to always produce the correct answer through selecting the simple level intentionally AND, simultaneously, the child did not accept the challenge to face any complex task without encouragement during learning tasks, which is naturally a high degree of SRL. Thus, the system scores three points. Otherwise, the game scored zero points.

4 | For the correct answer to the given task [simple/complex] IF AND ONLY IF the level choice of all the previous tasks was simple AND the child responded WITHOUT receiving any encouragement cue. | Because the child regulated himself to always produce the correct answer through selecting the simple level intentionally AND, simultaneously, the child did not accept the challenge to face any complex task but responded without receiving any encouragement cue(s) during learning tasks, which is naturally a high degree of SRL. Thus, the system scores four points. Otherwise, the game scored zero points.

5 | For the correct answer to the given task [simple/complex] IF AND ONLY IF the level choice of all the previous tasks was complex AND the child responded WITH receiving encouragement cue(s). | Because the child regulated himself to always produce the correct answer through selecting the complex levels AND simultaneously accepted the challenge to always face the complex tasks, but the child received encouragement cue(s) during learning tasks, which is naturally a degree of SRL. Thus, the system scores five points. Otherwise, the game scored zero points.

6 | For the correct answer to the given task [simple/complex] IF AND ONLY IF the level choice of all the previous tasks was complex AND the child responded WITHOUT receiving any encouragement cue. | Because the child regulated himself to always produce the correct answer through selecting the complex levels AND simultaneously accepted the challenge to always face the complex tasks and, therefore, did so without receiving any encouragement during learning tasks, which is naturally a high degree of SRL. Thus, the system scores six points. Otherwise, the game scored zero points.

Table 4: The scoring system for the task performance as a function of the young user's inner-interaction.
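The following is a minimal sketch (an assumption about how the rules in Table 4 could be encoded, not the authors' code; in particular, the precedence between overlapping rules is assumed) of the per-task score as a function of the child's manifested inner-interaction.

```python
def task_score(answer_correct: bool,
               task_level: str,             # "simple", "complex", or "mid" (no choice made)
               chosen_level: str,           # level chosen before this task: "simple"/"complex"/"mid"
               prev_answer_correct: bool,   # precision of the previous task
               previous_choices: list,      # history of chosen levels so far
               received_encouragement: bool) -> int:
    # Scores 3-6: correct answer with a fully consistent history of level choices.
    # (A first task with no history cannot earn these scores: an assumption.)
    if answer_correct and previous_choices:
        if all(c == "simple" for c in previous_choices):
            return 3 if received_encouragement else 4
        if all(c == "complex" for c in previous_choices):
            return 5 if received_encouragement else 6
    # Score 2: complex choice after a correct previous answer
    # (correct at the complex level, or incorrect at the simple level).
    if chosen_level == "complex" and prev_answer_correct:
        if (answer_correct and task_level == "complex") or \
           (not answer_correct and task_level == "simple"):
            return 2
    # Score 1: mid-level (no choice made) and a correct answer.
    if chosen_level == "mid" and answer_correct:
        return 1
    # Score 0: everything else, including no answer within the 60-second window.
    return 0
```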

Evaluating the young users' satisfaction during learning tasks

After completing the 21 tasks, the young users under both conditions were given the opportunity to describe their satisfaction with the Digital-Playground® and the experimental settings (they were fully free to react or to refuse). This was done through a friendly-chat questionnaire with the Princess and Superman that involved eight simple questions. The current study uses the friendly-chat questionnaire version 1.0® [8]. Specifically, Superman started the questionnaire by informing the young user that he (Superman) and the Princess would like to chat with him (the young user) about the game because he (the young user) had shown a high level of intelligence and could help improve the game (this was said regardless of the young user's actual achievement and served as motivation for the young users to react, exactly as the teachers do in the classroom). First, Superman asked the young user whether he would like to chat with them (the Princess and Superman) by touching the correct (agree) or incorrect (disagree) sign in the middle of the screen (Figure 3).


Figure 2: The principles of counting the young user's inner-interaction (the correct interactional decisions).


Figure 3: The friendly-chat questionnaire.

If the young user agreed, the Princess first told the young user that whenever he did not understand a point, he (the young user) should touch her or Superman once again to repeat the explanation. For the next question, Superman asked the young user to touch the correct sign once again within two minutes, which was the allotted time for each question. If the young user agreed to answer by touching the correct sign, Superman asked a series of questions. When the young user either declined to chat or finished the questionnaire, the Princess moved the game to the reward session, which was the last session. Each young user was rewarded with a piece of chocolate (Kit-Kat/Mars), which were the favorites among the participants as their teachers reported. The Princess and Superman thanked the participant, informed him that he had done a very nice job with high performance, and told him that when the room light came on, he would find the chosen chocolate (Figure 4) with the teacher in the meeting room.


Figure 4: The reward and last session.

Data gathering

The system gathered data on factors such as the exact time the young user started the game, the chosen task level, the actual task level, the response time in milliseconds, the task score, the task precision, and the degree of the manifested inner-interaction score for each young user.

Video data were gathered and reviewed to classify and tabulate the utterances.

The friendly-chat questionnaire answers were reviewed for each young user to verify the young users' responses.
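For illustration, the per-task record implied by the description above could be structured as follows; this is a hypothetical sketch, not the authors' schema, and the field names are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TaskRecord:
    user_id: int
    session_start: datetime           # exact time the young user started the game
    chosen_level: str                 # level the child chose: "simple", "complex", or "mid"
    actual_level: str                 # level of the task actually presented
    response_time_ms: int             # response time in milliseconds
    answer_correct: bool              # task precision (correct/incorrect)
    task_score: int                   # 0-6 score from the Table 4 scoring system
    inner_interaction_correct: bool   # whether the level choice was a correct interactional decision
```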

Procedure

The school has a special experimental room ready for research with young users and their teachers. This room was located in a quiet corner and contained a child-sized chair, an external 17-inch touch screen (to avoid any possible coordination problems for the young users) connected to a laptop computer, and two hidden portable video cameras. The first camera captured the entire environment and the second offered a clear view of the task on the screen together with the young user's face. An extra small microphone was connected to the second camera for audio recording. The young users were kept unaware of the cameras and the microphone to avoid a split-attention problem that could lead to undesirable cognitive processes. Each young user attended a five-minute welcome session in the preschool's meeting room but did not receive training on how to use the system. The young users were told that the game required a smart player to complete the tasks and that they should follow the instructions given by the computer. They were also told that neither their teacher nor the experimenter would tell them the answers, even if the teacher was present. All sessions were held in the morning at 9:30 AM to avoid differences due to fatigue. The actual experiment ran with two young users of each group per day (first two young users from the CEC-Condition and then two young users from the TEC-Condition), and the entire experiment required ten days to complete.

Results

The initial research goal was to analyze the effect of the computer's- versus teacher's presence and voice on young users' interaction behavioral development in two different directions. The first was to analyze the young users' developmental interaction behavior (how does the interaction occur?). The second was to clarify the mechanism of the interaction (how does the interaction work?). The present study also analyzed the task performance as a function of the young users' inner-interaction and the degree to which the young users interacted with the Digital-Playground (i.e., satisfaction). This was done by finding the differential effect between the two types of instructional conditions of encouragement cues (nonverbal vs. verbal) in a laboratory condition (school classroom) through an isolated computer-based environment (Digital-Playground®).

The influence of the computer's- vs. teacher's presence and voice on young users' interaction behavioral development (the 1st research question)

The first research question addressed the influence of the computer's- vs. teacher's presence and voice on young users' interaction behavioral development during progression. Table 5 shows that the most significant difference between the two conditions was the high production of compulsory-interaction (i.e., task-related utterances). The CEC-Condition produced 73% compulsory-interaction and 27% undesirable-interaction, whereas the TEC-Condition produced 44% and 56%, respectively.

Young Users' Interaction Production | CEC-Condition (n=20) | TEC-Condition (n=20) | Total
Compulsory-Interaction | 116 (73%) | 76 (44%) | 192 (58%)
Undesirable-Interaction | 43 (27%) | 97 (56%) | 140 (42%)
Total | 159 | 173 | 332

Table 5: The influence of the computer’s- vs. teacher’s presence and voice on young users’ interaction production during progression, by group.

A chi-square test of independence was performed to examine the relation between the two conditions (computer vs. teacher) and the type of interaction produced (compulsory vs. undesirable). The result showed that the relation between these variables was significant, χ2 (df=1, N=332)=27.445, p<0.001; i.e., there is sufficient evidence that the two groups differ significantly in terms of interaction behavioral development because of the two different actors (computer vs. teacher). This result confirms the first hypothesis that the computer's presence and voice have a different evocative effect compared with the teacher's presence and voice on the young users' talking during the performance of learning tasks.
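For reference, the reported statistic can be reproduced from the four cell counts in Table 5; the small check below is an illustration, not part of the original analysis, and uses scipy, whose chi2_contingency applies Yates' continuity correction by default for 2x2 tables, which matches the reported value.

```python
from scipy.stats import chi2_contingency

table5 = [[116, 76],   # compulsory-interaction: CEC, TEC
          [43, 97]]    # undesirable-interaction: CEC, TEC

chi2, p, dof, expected = chi2_contingency(table5)
print(f"chi2({dof}, N={sum(map(sum, table5))}) = {chi2:.3f}, p = {p:.2g}")
# prints approximately: chi2(1, N=332) = 27.445, p = 1.6e-07 (i.e., p < 0.001)
```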

The influence of the computer's- vs. teacher's presence and voice on young users’ compulsory-interaction vs. undesirable-interaction during each encouragement mode (the 2nd research question)

The second research question addressed the way the computer’s presence and voice affect the production ratio of interaction between the young users' compulsory-interaction (i.e., task-related utterances) and undesirable-interaction (task-unrelated utterances) during each encouragement mode (nonverbal vs. verbal). Table 6 shows the spread of interaction productivity at each mode/unit.

Young Users' Interaction Productivity | CEC-Condition: Nonverbal unit | CEC-Condition: Verbal unit | CEC total per two units | TEC-Condition: Nonverbal unit | TEC-Condition: Verbal unit | TEC total per two units
Compulsory-Interaction (task-related) | 37 (32%) | 79 (68%) | 116 | 19 (25%) | 57 (75%) | 76
Undesirable-Interaction (task-unrelated) | 16 (37%) | 27 (63%) | 43 | 29 (30%) | 68 (70%) | 97
Total | 53 (32%) | 106 (67%) | 159 | 48 (28%) | 125 (72%) | 173

Table 6: The influence of each encouragement mode (nonverbal vs. verbal) on young users’ compulsory-interaction vs. undesirable-interaction, by group.

During the verbal unit

In the verbal unit, the young users in the CEC-Condition produced more compulsory-interaction (i.e., task-related: 79 utterances) than undesirable-interaction (i.e., task-unrelated: 27 utterances). In contrast, the young users in the TEC-Condition produced more undesirable-interaction (i.e., task-unrelated: 68 utterances) than compulsory-interaction (i.e., task-related: 57 utterances). The result of the chi-square test showed that the relation between these variables was significant, χ2 (df=1, N=231)=18.648, p<0.001; i.e., the computer's presence and voice show a significant increase in the young users' compulsory-interaction during the verbal unit. This result confirms the second hypothesis that the computer's presence and voice, in contrast to the teacher's presence and voice, elicit more compulsory-interaction (i.e., task-related utterances) than undesirable-interaction (i.e., task-unrelated utterances) from young users.

During the nonverbal unit

In the nonverbal unit, the young users in the CEC-Condition produced more compulsory-interaction (i.e., task-related: 37 utterances) than undesirable-interaction (i.e., task-unrelated: 16 utterances). In the TEC-Condition, by contrast, undesirable-interaction (29 utterances) exceeded compulsory-interaction (19 utterances). The chi-square test showed that the relation between these variables was significant, χ2 (df=1, N=101)=8.133, p=0.004; that is, the computer's presence and voice was associated with a significant increase in the young users' compulsory-interaction relative to undesirable-interaction during the nonverbal unit as well. This result confirms the second hypothesis that the computer's presence and voice, in contrast to the teacher's presence and voice, elicits more compulsory-interaction (i.e., task-related utterances) than undesirable-interaction (i.e., task-unrelated utterances) from young users.

The influence of the computer's- vs. teacher's encouragement cues (nonverbal vs. verbal) on young users' manifested inner-interaction (the 3rd research question)

The third research question addressed the influence of the computer's- vs. teacher's encouragement cues (nonverbal vs. verbal) on young users' inner-interaction during progression. As shown in Table 7, the most notable difference between the two conditions was that the CEC-Condition manifested a higher proportion of inner-interaction during the nonverbal unit (215 correct decisions; 69%).

The Young Users' Manifested Inner-Interaction | CEC: Nonverbal unit | CEC: Verbal unit | TEC: Nonverbal unit | TEC: Verbal unit
Total per unit | 215 (69%) | 98 (31%) | 145 (53%) | 130 (47%)
Total per condition | CEC-Condition: 313 | TEC-Condition: 275

Table 7: The effect of the computer’s- vs. teacher’s encouragement cues on the young users’ inner-interaction, by group.

The chi-square test showed that the relation between these variables was not significant, χ2 (df=1, N=588)=2.816, p=0.093; that is, the computer's- vs. teacher's encouragement cues (nonverbal vs. verbal) had no significant effect on the young users' manifested inner-interaction during progression. This result does not confirm the third hypothesis that there is a significant difference between the effects of the computer's- vs. teacher's encouragement cues on young users' manifested inner-interaction. The correlation between the young users' interaction production and their manifested inner-interaction was r=0.2 across both groups, r=0.5 in the CEC-Condition, and r=-0.1 (ns) in the TEC-Condition.
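The paper reports only the resulting coefficients; the following is a minimal sketch of how such a correlation can be computed, assuming hypothetical per-participant counts (the values below are illustrative, not the study's data).

# A minimal sketch with hypothetical per-participant counts (illustrative only;
# the study's raw data are not reported here): Pearson correlation between each
# young user's interaction production and manifested inner-interaction.
from scipy.stats import pearsonr

interaction_production = [9, 4, 12, 7, 6, 10, 3, 8]       # utterances per participant (hypothetical)
inner_interaction = [14, 11, 18, 12, 13, 16, 9, 15]        # correct decisions per participant (hypothetical)

r, p = pearsonr(interaction_production, inner_interaction)
print(f"r = {r:.2f} (p = {p:.3f})")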

During the verbal unit

As shown in Table 7, the young users in both conditions manifested less inner-interaction during the verbal unit. The young users in the CEC-Condition manifested 98 correct decisions (31%), whereas the young users in the TEC-Condition manifested 130 correct decisions (47%).

During the nonverbal unit

As shown in Table 7, the young users in both conditions manifested a higher ratio of inner-interaction during the nonverbal unit. The young users in the CEC-Condition manifested 215 correct decisions (69%), whereas the young users in the TEC-Condition manifested 145 correct decisions (53%).

The influence of the computer's- vs. teacher's presence and voice on young users' task performance (the 4th research question)

The fourth research question addressed the influence of the computer's- vs. teacher's presence and voice on young users' task performance. As shown in Table 8, the young users in both conditions gained more correct answers during the verbal unit (i.e., when the encouragement cues were verbal) than during the nonverbal unit. The young users in the CEC-Condition outperformed the young users in the TEC-Condition (241 and 175 correct answers, respectively).

The chi-square test showed that the relation between these variables was not significant, χ2 (df=1, N=416)=1.582, p=0.208; that is, the computer's- vs. teacher's presence and voice had no significant effect on task performance. This does not confirm the fourth hypothesis that there is a significant difference between the effects of the computer's- vs. teacher's presence and voice on the young users' task performance during progression.

Number of correct answers during: | CEC-Condition (n=20) | TEC-Condition (n=20)
Nonverbal unit | 87 (36%) | 52 (30%)
Verbal unit | 154 (64%) | 123 (70%)
Total | 241 | 175

Table 8: The influence of the computer’s- vs. teacher’s encouragement on the young users' task performance, by group.

The correlation between the young users' interaction production and task performance was r=0.07 across both groups, r=0.02 in the CEC-Condition, and r=-0.01 (ns) in the TEC-Condition.

The effect of the computer's- vs. teacher's presence and voice on young users' satisfaction (the 5th research question)

The fifth research question addressed the influence of the computer's- vs. teacher's presence and voice on the young users' satisfaction during progression. The young users in the CEC-Condition showed a higher level of satisfaction than the young users in the TEC-Condition, which confirmed the fifth hypothesis, as shown in Table 9.

The friendly chat questionnaire | CEC-Condition (n=20) | TEC-Condition (n=20)
(1) The game is easy to use. | 100% | 70%
(2) It is easy to select the task level. | 100% | 85%
(3) All tasks are difficult. | 5% | 25%
(4) The task time is enough. | 85% | 55%
(5) You will play this game once again. | 90% | 95%
(6) You will recommend this game. | 100% | 80%
(7) You want the teacher [teacher's name] to be with you to finish the tasks. | 10% | 35%

Table 9: The young users’ answers on satisfaction questions after completing the tasks.

While the young users in the CEC-Condition did not express any complaints about the game's usability (100% found it easy to use), the young users in the TEC-Condition were not fully satisfied (only 70% reported no difficulty in using the Digital-Playground®). The largest difference between the two conditions concerned the young users' judgment of task difficulty: only 5% of the young users in the CEC-Condition judged the tasks difficult, against 25% of the young users in the TEC-Condition. The second notable difference is that only 10% of the young users in the CEC-Condition wanted their teacher to be present with them, whereas 35% of the young users in the TEC-Condition wanted that. Statistically, a one-way ANOVA based on these proportions shows a significant difference between the two conditions (p=0.037), which confirms the fifth hypothesis that there is a significant difference between the effect of the computer's- vs. teacher's presence and voice on the young users' satisfaction during progression.
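The exact ANOVA procedure is not reported (e.g., whether negatively worded items were reverse-scored, or whether the test ran on item-level proportions or participant-level responses), so the following is only a minimal sketch of one plausible reading based on the Table 9 proportions; it is not guaranteed to reproduce the reported p=0.037.

# A minimal, hypothetical sketch of a one-way ANOVA over the per-item
# satisfaction proportions from Table 9 (one value per questionnaire item).
# The original procedure is not fully specified, so this is illustrative only.
from scipy.stats import f_oneway

cec = [1.00, 1.00, 0.05, 0.85, 0.90, 1.00, 1.00, 0.10]  # CEC-Condition, items 1-8
tec = [0.70, 0.85, 0.25, 0.55, 0.95, 0.80, 0.95, 0.35]  # TEC-Condition, items 1-8

f_stat, p_value = f_oneway(cec, tec)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")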

Discussion and Conclusion

The results of the present study provide evidence that the computer's presence and voice promotes more young users' interaction than the teacher's presence and voice. This effect can be understood quite easily, as the young users produced more undesirable-interaction when they interacted (socially) with their teacher (i.e., HHI). The results also show that the young users were more active in gaining inner-interaction (i.e., more correct decisions/self-regulation) and more correct task answers when they engaged and interacted with the computer rather than with the teacher as an external regulator/speaker. While this is a logical and natural result, because the young users already expect such feedback from their teacher during learning tasks, it is surprising that previous work still relates the young users' speech productivity (i.e., compulsory-interaction) and manifested self-regulation (i.e., inner-interaction) to task success and failure. The present study confirms that the context of the external regulation (computer's- vs. teacher's presence and voice) and the content of the encouragement cues (nonverbal vs. verbal) play a critical role in the young users' interaction production and satisfaction during progression. This conclusion is consistent with the original study [9] and simultaneously inconsistent with the previous work.

Specifically, if young users' private speech (i.e., compulsory-interaction) production relates to task success and failure, as the previous Vygotskian work concluded in claiming that private speech increases linearly with task success, then how can we interpret the young users in the TEC-Condition outperforming in overall interaction productivity (173 utterances) despite gaining fewer successful tasks (175 correct answers)? This result does not confirm Vygotsky's view that private speech (i.e., compulsory-interaction) increases linearly with task success. Nor does it confirm Piaget's view that external regulation hinders the development of self-regulation (i.e., inner-interaction), given that the computer's intervention during progression is itself an actual external regulation. This leads us to argue that the different effects of the teacher, as a human, and of the computer, as a nonhuman instrument, should be taken fully into account in future research concerning young users' or children's interaction behavioral development.

Another sensitive implication emerged while reviewing the video recordings of all young users in both conditions: some of them started talking before receiving any verbal cues. This is actual, pure thinking aloud (i.e., actual spontaneous-interaction), given that thinking aloud occurs spontaneously and without any previous guidance to the participants to do so. While this is also a limitation of the present study, because such utterances were not precisely counted, it motivates reinvestigating the use of thinking-aloud protocols with young users under the computer's supervision. Accordingly, the present study confirms the conclusion of the original research [9] that thinking aloud (i.e., spontaneous-interaction) should occur spontaneously, given that some young users verbalized their thoughts and expressed their feelings during the nonverbal unit. Remarkably, the most obvious obstacle in applying the verbal encouragement cues to the young users in the TEC-Condition was that, when the teacher verbalized one of the encouragement cues, most of the young users reacted to the teacher instantly, mostly in the form of a question. Our investigation shows that most of those questions were actually requests for extra help before answering the presented task, during which the young users did not pay attention to the task allotment time (i.e., they did not consider the allotted time while talking to their teacher). This obstacle may be the main reason why the young users in the TEC-Condition attained the lowest number of correct decisions (i.e., inner-interaction/self-regulation), fewer correct task answers, and less satisfaction during progression. This conclusion is fully consistent with the previous work (e.g., Frauenglass and Diaz, Lee, Müller, Zelazo, Hood, Leone, Rohrer, Wozniak). It also shows the side effect of HHI during progression, in which the participants' cognitive processes are overloaded and distorted by undesirable-interaction, also called social speech or task-unrelated speech.

From a practical point of view, the present study answers various questions about the context and setting of inner-interaction/self-regulation reported by previous research [11,51,52]. First, the question "how does inner-interaction/self-regulation occur during progression?" and, second, "how can inner-interaction/self-regulation be scored?" now have, to a great extent, clear answers, a result not previously seen in terms of HMI. Specifically, the principles used for measuring inner-interaction/self-regulation in the present study show clearly how inner-interaction occurs and how it can be scored. Third, the complexity of young users' help seeking [53], which previous work considers an important element of the inner-interaction/self-regulation strategy since all students require assistance at times, to understand material or when confused about what to do, also receives a clear answer. This is evident when comparing the young users in both conditions in terms of seeking help from the external regulators (computer's- vs. teacher's presence and voice). Specifically, the young users in the TEC-Condition mostly reacted to the teacher's encouragement cues in the form of questions, which constitutes actual help seeking during progression.

Analyzing the young users' interaction behavioral development through Vygotsky’s view

According to Vygotsky's view, language occurs in three stages: social speech, egocentric speech, and inner speech. Social speech is simply speech for the purpose of communicating (undesirable-interaction). Egocentric speech is more intellectual, and children use it by speaking out loud to themselves (spontaneous-interaction). Inner speech is used by children to think in their heads about the problem or the current task, instead of verbalizing their thoughts, in order to decide what to do next (inner-interaction). Practically, the present study shows that the computer's voice acts as a bridge between the social speech (i.e., undesirable-interaction) and the private speech (compulsory-interaction). As the computer's voice is less expressive and detached from a personal relationship, it may work as an "inner voice" that motivates the young users to be more productive concerning inner-interaction. Given that the tasks of the stimulus material (Digital-Playground®) were implemented based on the Vygotskian Zone of Proximal Development (ZPD) and ordered according to the teachers' judgment as motivating or unmotivating tasks [8,9], our investigation shows that embedding the teachers' judgment with the ZPD helps us to separate and distinguish the young users' spontaneous-interaction (i.e., verbalization during the nonverbal unit/thinking aloud) from their compulsory-interaction (i.e., speech during the verbal unit/private speech) and their undesirable-interaction (i.e., task-unrelated speech/social speech), even though they share the same surface mechanism of occurrence. This implication has not been reported before in the literature.
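As an illustration only, the following is a minimal, hypothetical sketch of how the four interaction categories distinguished above could be operationalized from per-event annotations; the Event fields and classify function are assumptions for illustration, not the study's coding software.

# A minimal, hypothetical sketch (not the study's coding software) of how the
# four interaction categories could be derived from per-event annotations: the
# unit in which an event occurred, whether it was an utterance or a silent
# correct decision, and whether an utterance was task-related.
from dataclasses import dataclass

@dataclass
class Event:
    unit: str            # "nonverbal" or "verbal" encouragement unit
    is_utterance: bool   # True for speech, False for a silent correct decision
    task_related: bool   # only meaningful when is_utterance is True

def classify(event: Event) -> str:
    if not event.is_utterance:
        return "inner-interaction"          # correct decision without verbalization
    if not event.task_related:
        return "undesirable-interaction"    # task-unrelated (social) speech
    if event.unit == "nonverbal":
        return "spontaneous-interaction"    # thinking aloud without any verbal cue
    return "compulsory-interaction"         # task-related speech during the verbal unit

print(classify(Event(unit="nonverbal", is_utterance=True, task_related=True)))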

Recommendations

Future research should take into account that young students are able to monitor, control, and increase their behavioral regulation during learning tasks even without human external intervention. Implicitly, the present study recommends the use of learning environments that act as standalone learning systems in studying students' development.

Based on 'the when' and 'the how', there are two types of external intervention the researcher should take into consideration: judicious versus injudicious interventions. Judicious intervention increases the participants' interaction behavioral development, whereas injudicious intervention hinders it.

The present study invites researchers to think seriously about new methodological aspects and measurements that may lead to new ways of investigating young users' interaction behavioral development and, therefore, to avoid introducing new terms for the same phenomenon (e.g., should we introduce the term spontaneous-talk or self-speak?).

The present study recommends investigating the effect of different types of task feedback on young users' interaction production during learning tasks with the computer, given that the young users provided evidence that they can think and talk while acting alone, without the need for their teacher's intervention, and given that offering the encouragement cues through their teacher forces them to spend extra time, as they mostly react to their teacher rather than interacting with the present task. This subject is the focus of our next study.

References

  1. Cole M (2004) Preface: Reading Vygotsky, In: Rieber R, Robinson K, Bruner JS (eds.) The Essential Vygotsky, Kluwer Academic: New York, NY.
  2. Bandura A (1977) Social Learning Theory. Prentice Hall.
  3. Dewey J (1916) Democracy and education. New York, NY: Macmillan.
  4. Winsler A, Manfra L, Diaz RM (2007) "Should I let them talk?": Private speech and task performance among preschool children with and without behavior problems. Early Childhood Research Quarterly 22: 215-231.
  5. Agina AM, Kommers PA, Heylen D (2015) Towards Understanding Human-Media Interaction: The Effect of Human's Absence vs. Computer's Voice On Detecting Young Users' Behavioural Interaction Development Through a Digital-Playground®. Danish Science Journal 2: 51-69.
  6. Clark BD, Nelson BC, D'Angelo CA, Slack K, Martinez-Garza M (2010) Disciplinary Integration of Digital Games for Science Learning SURGE: Integrating
  7. Greenfield PM (1994) Action video games and informal education: effects on strategies for dividing visual attention. Journal of Applied Developmental Psychology 15: 105-123.
  8. Self-Regulation Learning. IADIS Multi Conference on Computer Science and Information Systems (MCCSIS 2008) Proceedings. Amsterdam, the Netherlands.
  9. Agina A, Kommers P, Steehouder M (2011a) The Effect of the External Regulator's Absence on Children's Speech Use, Manifested Self-regulation, and Task Performance during Learning Tasks. Computers in Human Behavior 27: 1118-1128.
  10. Agina AM, Kommers PA, Steehouder F (2012) The effect of nonhuman’s external regulation on children’s responses to detect children with developmental problems (DP) associated with the natural development of self-regulation during learning tasks. Computers in Human Behavior 28: 527-539.
  11. Schraw G (1994) The effect of metacognitive knowledge on local and global monitoring. Contemporary Educational Psychology 19: 143-154.
  12. Agina AM, Kommers PAM (2008) The Positive Effect Of Playing Violent Games On Children’s.
  13. Newman RS, Schwager MT (1992) Student perceptions and academic help seeking. In: Schunk DH, Meece JL (eds.) Student perceptions in the classroom. Hillsdale, NJ: Lawrence Erlbaum Associates Inc 123-146.
  14. Pintrich PR, Roeser R, De Groot E (1994) Classroom and individual differences in early adolescents' motivation and self-regulated learning. Journal of Early Adolescence 14: 139-161.
  15. Anderson CA, Gentile DA, Buckley KA (2007) Violent video game effects on children and adolescents: Theory, research, and public policy. New York, NY: Oxford University Press.
  16. Tang CM, Bartsch K, Nunez N (2007) Young children’s reports of when learning occurred. Journal of Experimental Child Psychology 97: 149-164.
  17. Belanger F, Jordan DH (2000) Evaluation and implementation of distance learning: Technologies, tools, and techniques. Hershey, PA: Idea Group.
  18. DeVries R, Zan B (1992) Social processes in development: A constructivist view of Piaget, Vygotsky, and education. Paper presented at the annual meeting of the Jean Piaget Society, Montreal, Quebec Canada.
  19. Deniz CB (2004) Early childhood teachers’ beliefs about, and self-reported practices toward, children’s private speech. Dissertation Abstracts International Section A: Humanities and Social Sciences 64(9-A).
  20. McIsaac MS, Blocher JM, Mahes V, Vrasidas C (1999) Student and teacher perceptions of interaction in online computer-mediated communication. Educational Media International 36: 121-131.
  21. Kamii C, DeVries R (1980) Group games in early education: Implications of Piaget's theory. Washington, DC: National Association for the Education of Young Children.
  22. Allwood J, Nivre J, Ahlsén E (1992) On the semantics and pragmatics of linguistic feedback. J Semant 9: 1-26.
  23. Bavelas JB, Coates L, Johnson T (2000) Listeners as co-narrators. J Pers Soc Psychol 79: 941-952.
  24. Jääskeläinen R (1999) Tapping the process: An explorative study of the cognitive and affective factors involved in translating. Joensuu: University of Joensuu.
  25. Yee SLCY (2011) A Review of Vygotsky: Understanding Thought and Language and Mind in Society 7-12.
  26. Tang CM, Bartsch K, Nunez N (2007) Young children’s reports of when learning occurred. Journal of Experimental Child Psychology 97: 149-164.
  27. Tuomi I (1998) Vygotsky in a TeamRoom: An Exploratory Study on Collective Concept Formation in Electronic Environments. in 31st Hawaii International Conference on System Sciences.
  28. Vygotsky’s Spontaneous and Instructed Concepts in a Digital Game? in ICLS: Learning in the Disciplines. Chicago IL: ACM.
  29. Winsler A, Abar B, Feder MA, Schunn CD, Rubio RA (2007) Private speech and executive functioning among high-functioning children with autistic spectrum disorders. Journal of Autism and Developmental Disorders 37: 1617-1635.
  30. Boekaerts M, Corno L (2005) Self-regulation in the classroom: A perspective on assessment and intervention. Applied Psychology: An International Review 54: 199-231.
  31. Friedman T (1995) Making sense of software: Computer games and interactive textuality. In: Jones SG (ed.) Cybersociety: Computer-mediated communication and community. Thousand Oaks, CA: Sage.
  32. Pintrich PR, De Groot EV (1990) Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology 82: 33-40.
  33. Tuomi I (1998) Vygotsky in a TeamRoom: An Exploratory Study on Collective Concept Formation in Electronic Environments. in 31st Hawaii International Conference on System Sciences.
  34. Clark HH (1996) Using language. Cambridge University Press, Cambridge.
  35. de Kok I, Heylen D (2012) Analyzing nonverbal listener responses using parallel recordings of multiple listeners. Open access, SpringerLink.
  36. Fernyhough C, Fradley E (2005) Private speech on an executive task: Relations with task difficulty and task performance. Cognitive Development 20: 103-120.
  37. Rizzo A, Simon P, Save L, Sujan M (2005) Designing complex sociotechnical systems: A heuristic schema based on Cultural-Historical Psychology. in EACE: Annual conference on European association of cognitive ergonomics. Athens, Greece: ACM.
  38. Schunk DH (2005) Self-regulated learning: The educational legacy of Paul R. Pintrich. Educational Psychologist 40: 85-94.
  39. Bernardini S (1999) Using think-aloud protocols to investigate the translation process: Methodological aspects. Bologna: University of Bologna.
  40. Daugherty M, White C, Manning B (1994) Relationships among private speech and creativity measurements of young children. Gifted Child Quarterly 38: 21-26.
  41. Dewey J (1938) Experience and education. New York, NY: Collier Macmillan.
  42. Vygotsky LS (1978) Mind in Society. In: Cole M (ed.) Cambridge, MA: Harvard University Press.
  43. Winsler A, Fernyhough C, McClaren EM, Way E (2005) Private speech coding manual. Unpublished manuscript. George Mason University, Fairfax, VA, USA.
  44. Muraven M (2010) Building self-control strength: Practicing self-control leads to improved self-control performance. Journal of Experimental Social Psychology 46: 465-468.
  45. Duncan RM, Cheyne JA (2002) Private speech in young adults: task difficulty, self-regulation, and psychological predication. Cognitive Development 16: 889-906.
  46. Girbau D (2002) A sequential analysis of private and social speech in children's dyadic communication. The Spanish Journal of Psychology 5: 110-118.
  47. Gentile DA (2009) Pathological video game use among youth 8 to 18: A national study. Psychol Sci 20: 594-602.
  48. Schunk DH (1986) Vicarious influences on self-efficacy for cognitive skill learning. Journal of Social and Clinical Psychology 4: 316-327.
  49. Anderson T (2003) Modes of interaction in distance education: Recent developments and research questions. In: Moore MG, Anderson WG (eds.) Handbook of distance education. Mahwah, NJ: Erlbaum 129-144.
  50. Ericsson KA, Simon HA (1993) Protocol analysis: Verbal reports as data (2nd edn). Cambridge, MA: MIT Press.
  51. Real MR (1996) Exploring media culture: A guide. Thousand Oaks, CA: Sage.
  52. Piaget J (1932/1965) The moral judgement of the child. London: Free Press.
  53. Moore MG, Kearsley G (1996) Distance education: A systems view. New York, NY: Wadsworth.
  54. Dittmann AT, Llewellyn LG (1968) Relationship between vocalizations and head nods as listener responses. J Pers Soc Psychol 9: 79-84.
  55. Garzotto F (2007) Was Vygotsky Right? Evaluating Learning Effects of Social Interaction in Children Internet Games. In: INTERACT. Rio de Janeiro, Brazil: LNCS.
  56. Moore MG (1989) Three types of interactions. The American Journal of Distance Education 3: 1-6.
  57. Stright AD, Neitzel C, Sears KG, Hoke-Sinex L (2001) Instruction begins in the home: Relations between parental instruction and children's self-regulation in the classroom. Journal of Educational Psychology 93: 456-466.
  58. Vygotsky LS (1986) Thought and Language. Kozulin A (ed.) Cambridge, MA: MIT Press.
  59. Berk LE, Winsler A (1995) Scaffolding children's learning: Vygotsky and early childhood education. Washington, DC: National Association for the Education of Young Children.
  60. Sneed C, Runco MA (1996) The beliefs adults and children hold about television and video games; Journal of Psychology 126: 273-284.
  61. Agina AM, Kommers PA, Steehouder F (2011b) The effect of nonhuman's versus human's external regulation on children's speech use, manifested self-regulation, and satisfaction during learning tasks. Computers in Human Behavior 27: 1129-1142.
  62. Agina AM, Kommers PA, Steehouder F (2011c) The effect of the nonhuman external regulator's answer-until-correct (AUC) versus knowledge-of-result (KR) task feedback on children's behavioral regulation during learning tasks. Computers in Human Behavior 27: 1710-1723.
  63. Agina AM, Kommers PA, Steehouder F (2011d) The effect of nonhuman's external regulation on detecting the natural development process of young children's self-regulation during learning tasks. Computers in Human Behavior 27: 1724-1739.
  64. Gunter B (2002) The effects of video games on children: the myth unmasked, CA: Sage.
  65. Agina AM (2014) An Analytical Reflection towards Understanding the Effect of Children's Behavioral Regulation (CBR) on Children's Behavioral Nutrition (CBN) Through Computer-based Edutainment Environments. Journal of Child and Adolescent Behavior 2: 1000166.
  66. Heylen D, Bevacqua E, Pelachaud C, Poggi I, Gratch J, et al. (2011) Generating listening behavior. In: Cowie R, Pelachaud C, Petta P (eds.) Emotion-oriented systems. The humaine handbook. Springer, London 321-347.