Gesture and ASL Acquisition
Sarah Taub, Linguistics
Dennis Galvan, Psychology
Pilar Piñar, Foreign Languages,
Cultures, and Literatures
Susan Mather, Linguistics
Why this study?
• Taub, Piñar, and Galvan (2004) compared
narratives in English, Spanish, and ASL.
• They analyzed information expressed through
spatial mapping in
– ASL verbs, classifiers, role shift
– gesture accompanying the spoken languages
• For the spoken languages, a significant amount of
spatial information was expressed through co-speech gesture.
Why this study? (continued)
• Nearly all hearing subjects used co-speech gesture
to express conceptual information.
• Some subjects made bigger, clearer gestures than
others:
– Clearly established significant spatial locations
– Used their whole bodies to show character actions
– Changed their handshapes to show characters’ shapes
• OUR QUESTION: Do the co-speech gestures of
hearing non-signers predict their ability to learn
sign language?
Overview of Presentation
• Previous research on
– ASL L2 acquisition
– Spatial mapping in gesture and ASL
• Hypotheses
• Methods
• Coding
• Preliminary results and next steps
ASL L2 Acquisition research
1. Locker McKee & McKee (1992): Students and
teachers identified difficult aspects of ASL
grammar:
1. Adapting to the visual modality
2. Dexterity of production
3. Spatial indexing, classifiers
2. Wilcox & Wilcox (1991) identified difficult
aspects of ASL for learners:
1. Modality (creates production and perception
difficulties)
2. Non-manual grammar features
3. Morphological inflection and classifiers
ASL L2 acquisition research
(continued)
3. McIntire & Snitzer Reilly (1988): examined
whether communicative facial expression in
hearing learners transferred into ASL facial
grammar.
• Preexisting facial expression transfers into
acquisition of ASL facial grammar, but only
after a reanalysis stage.
Our study as a next step
• We focus on the potential transfer of spatially
mapped elements from gesture to ASL
– First person “blends”
– Third person “blends”
– Establishment of spatial locations
• We also investigate mental imagery skills
– Potential effect on spatial mapping in gesture
– Potential effect on spatial mapping in ASL
“Blends” in Gesture and ASL
• The communicator creates a “blend” of an
imagined space and the space surrounding
him/her. (Fauconnier & Turner 1996,
Liddell 2000)
First and Third Person
Perspective
• Tannen 1986
• First person discourse or direct quotation
– The cat said, “I want to catch the bird.”
– High involvement style
• Third person discourse or indirect quotation
– The cat said that he wanted to catch the bird.
– Low involvement style
• Mather & Thibeault 2000
• How does this apply to ASL and co-speech
gestures?
First Person Blends – ASL
[Figure: imagined space; blend: signer as cat]
First Person Blends - Gesture
[Figure: imagined space; blend: gesturer as cat]
Third Person Blends – ASL
[Figure: imagined space; blend: signer's hands as cat and wall]
Third Person Blends - Gesture
[Figure: imagined space; blend: gesturer's hands as cat and wall]
Mental imagery skills: effect on
gesture and ASL?
• Mental imagery skills are known to be stronger in
native signers than in non-signers (Emmorey,
Kosslyn, & Bellugi 1993).
• Mental imagery skills include:
– Mental rotation
– Image generation
• Could mental imagery skills form the foundation
for “blends” in gesture and ASL?
Mental Rotation Test
• Subjects judge whether a target shape matches a second
shape that is either identical to it or its mirror image.
• The second shape is rotated by up to 180 degrees.
• Native signers answer more quickly than non-signers
(Emmorey, Kosslyn, & Bellugi 1993).
Image Generation Test
• Subjects memorize a block-letter image and generate it on
demand.
• Accurate image generation leads to an accurate judgment of
whether the “X” lies on the letter shape.
• Native signers form images faster than non-signers
(Emmorey, Kosslyn, & Bellugi 1993).
Main Hypothesis
1. The quality of co-speech gesture in non-signers
might predict successful acquisition of spatial
mapping in ASL:
1. Use of role shift in gesture → efficient learning of
first person in ASL.
2. Use of classifier-like handshapes in gesture →
efficient learning of third person in ASL.
3. Ability to set up the space in gesture → efficient use of
spatial locations in ASL.
Other Possible Hypotheses
2. Mental rotation and image generation
scores might predict spatial mapping in
gesture
3. Mental rotation and image generation
scores might predict acquisition of spatial
mapping in ASL
Methods
• This is a longitudinal study of second-language
learners.
• Subjects:
– 35 native speakers of English about to start learning
ASL. (Subjects were recruited at Gallaudet and CCBC.)
– Control group of native speakers of English about to
start learning Spanish (not yet analyzed) – data
collected in partnership with Karen Emmorey of SDSU.
Procedure
• Experimental group:
– Collect gesture data in fall, before/at the beginning of
ASL classes (35 subjects)
– Collect ASL data in spring, after approximately 8
months of ASL classes (19 subjects)
• Control group:
– Collect gesture data in fall
– Collect Spanish data in spring
• Note: approximately half the subjects did not
return for the second data collection
Procedure (continued)
• Before learning ASL: subjects were filmed retelling 7 cartoon
stories and 10 cartoon clips in English to a partner.
• After 8 months of taking ASL: subjects were filmed retelling the
same clips in ASL.
• Both times: Language background questionnaire.
• Both times: Mental rotation and image generation tests.
• ASL grades (collected in the spring, after one semester of ASL).
Coding
• The research team devised a coding sheet to
account for:
– Use of first person in co-speech gesture and
ASL
– Use of third person in co-speech gesture and
ASL
– Establishment of locations in co-speech gesture
and ASL
First Person Blends
• Does use of 1st person blends in gesture
predict use of 1st person blends in ASL?
• Measures:
– eye gaze matches entity eye gaze
– facial expression shows character’s emotion
– body part movements show character’s
performance
Third Person Blends
• Does use of 3rd person blends in gesture
predict use of 3rd person blends in ASL?
• Measures:
– handshape and orientation are plausible to
represent entity’s shape (in gesture)
– handshape/entity match is correct for ASL
– separate handshapes are used for different
entities
Establishment of Locations
• Does establishment of locations in gesture
predict establishment of locations in ASL?
• Measures:
– establishment of location
– consistency of location
– relative placement of 2 locations
– size of signing space
Coding (continued)
• Coders filled out their coding sheets separately.
• Coding ratings were then compared and discussed
until complete consensus was reached for each
subject.
• The coders watched and coded the retelling of two
cartoon clips per subject.
– SWING SMASH
– SWING WAGGLE
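As a rough illustration of how such a coding sheet could be represented for analysis, here is a minimal sketch in Python. Every field name is hypothetical, inferred from the measures listed on the preceding slides; the team's actual coding sheet may differ.

```python
# Minimal sketch of a per-subject, per-clip coding record; every field
# name is hypothetical, inferred from the measures on the slides above.
from dataclasses import dataclass

@dataclass
class CodingRecord:
    subject_id: str
    clip: str            # "SWING SMASH" or "SWING WAGGLE"
    modality: str        # "gesture" (fall) or "ASL" (spring)
    # First person blend measures (1 = present, 0 = absent)
    eye_gaze_matches: int = 0
    face_shows_emotion: int = 0
    body_shows_action: int = 0
    # Third person blend measures
    handshape_plausible: int = 0
    handshape_correct_asl: int = 0
    separate_handshapes: int = 0
    # Location measures
    location_established: int = 0
    location_consistent: int = 0
    relative_placement: int = 0
    signing_space_size: int = 0

    def first_person_score(self) -> int:
        """Composite first person blend score for this clip."""
        return (self.eye_gaze_matches + self.face_shows_emotion
                + self.body_shows_action)
```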
Overview of results
• Mental imagery tasks:
– No correlation with gesture or ASL spatial mapping
– Differences between those who returned for the second
data collection and those who did not
– Differences among different majors
• Gesture hypotheses
– First person blends: partly supported
– Third person blends: not supported
– Establishment of location: partly supported
– Large differences between clips
Results for Image Generation and Mental Rotation Tasks

                                     Fall (n=35)   Spring (n=19)
Mental Rotation (mean # correct)        22.0           31.2
Image Generation (mean # correct)       22.1           27.7
Fall Mental Rotation and Image Generation tasks:
comparing those who returned with those who did not.
Those who returned scored higher in the fall, but the
differences are not significant.
Task               Sessions   n    Mean    SD
Mental Rotation    1st only   16   19.4    9.9
Mental Rotation    both       19   24.2   12.4
Image Generation   1st only   16   18.2   12.6
Image Generation   both       19   25.4   12.7
18 of 19 people improved on the second Mental
Rotation task (p < .001; a significant difference).
[Histogram: count of subjects by Mental Rotation improvement (mr_improve)]
11 of 19 improved on the second Image Generation
task (p = .109; not a significant difference).
[Histogram: count of subjects by Image Generation improvement (ig_improve)]
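The slides report improvement counts and p-values but do not name the statistical test. As a sketch, a paired t-test and an exact sign test are two plausible ways to check fall-to-spring improvement; the scores below are placeholders, not the study's data.

```python
# Sketch: did the 19 returning subjects improve from fall to spring?
# The deck does not say which test produced its p-values; a paired
# t-test and an exact sign test are two plausible checks.
from scipy import stats

# Placeholder scores for illustration only (not the study's data).
fall   = [20, 18, 25, 22, 19, 24, 21, 23, 20, 26,
          17, 22, 25, 19, 21, 24, 18, 23, 22]
spring = [31, 29, 33, 30, 28, 34, 30, 32, 29, 35,
          26, 31, 33, 28, 30, 33, 27, 32, 31]

# Paired t-test: the same subjects measured twice.
t, p = stats.ttest_rel(spring, fall)
print(f"paired t = {t:.2f}, p = {p:.4f}")

# Exact sign test: probability that this many of 19 subjects improve
# if improvement and decline were equally likely.
improved = sum(s > f for s, f in zip(spring, fall))
sign_p = stats.binomtest(improved, n=19, p=0.5).pvalue
print(f"{improved} of 19 improved, sign-test p = {sign_p:.5f}")
```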
Number of items correct by educational program
for the Mental Rotation task:
p = .591 (not a significant difference)
[Bar chart: mean Mental Rotation score by program: Interp, Psych, Aud, UG, Other]
Number of items correct by educational program
for the Image Generation task:
p = .021 (a significant difference)
[Bar chart: mean Image Generation score by program: Interp, Psych, Aud, UG, Other]
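The deck does not name the test behind the by-program comparisons; one plausible analysis is a one-way ANOVA across programs, sketched below with placeholder group scores.

```python
# Sketch: comparing Image Generation scores across educational programs.
# The deck reports p = .021 but not the test used; a one-way ANOVA is
# one plausible choice. All group scores are placeholders.
from scipy import stats

interp = [30, 32, 28, 31, 29]   # illustrative values only
psych  = [22, 20, 24, 21]
aud    = [25, 23, 26]
ug     = [18, 21, 19, 20]
other  = [23, 22, 24]

f, p = stats.f_oneway(interp, psych, aud, ug, other)
print(f"F = {f:.2f}, p = {p:.3f}")
```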
Hypothesis 1: First Person Blends
in gesture will be correlated with
First Person Blends in ASL.
• True for the Swing-Waggle clip: r = .622* (p = .004)
• Not true for the Swing-Smash clip: r = .342
Hypothesis 2: Third Person
Blends in gesture will be
correlated with Third Person
Blends in ASL.
• Not true for the Swing-Waggle clip: r = -.066
• Not true for the Swing-Smash clip: r = -.014
Hypothesis 3: Establishment of
locations in gesture will be
correlated with establishment of
locations in ASL.
• True for the Swing-Waggle clip: r = .511* (p = .036)
• Not true for the Swing-Smash clip: r = .007
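The r values reported above suggest product-moment correlations between per-subject gesture and ASL scores. A minimal sketch of such a computation follows; the composite scores are placeholders, since the deck publishes no per-subject data.

```python
# Sketch: correlating each subject's fall gesture score with the same
# subject's spring ASL score for one measure and one clip. The scores
# below are placeholders, not the study's data.
from scipy import stats

gesture_scores = [2, 3, 1, 3, 2, 0, 3, 2, 1, 2,
                  3, 1, 2, 3, 0, 2, 1, 3, 2]
asl_scores     = [3, 3, 1, 2, 2, 1, 3, 2, 0, 2,
                  3, 1, 2, 3, 1, 2, 1, 3, 2]

r, p = stats.pearsonr(gesture_scores, asl_scores)
print(f"r = {r:.3f}, p = {p:.3f}")
```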
Summary and New Questions
• Scores on mental imagery tasks do not correlate
with spatial mapping in gesture or in ASL
• High scores on mental imagery tasks correlate
with persistence in study
– Are high scorers more likely to stay in ASL programs?
– Does study of ASL improve mental rotation and image
generation scores?
– Do people with high image generation skills self-select
into interpretation training programs?
Summary and New Questions (continued)
• Partial support for correlation of gesture with ASL
aptitude:
– Establishment of locations, for Swing Waggle clip
– First person blends, for Swing Waggle clip
– No support for Third person blends, either clip
• Why were the two clips different?
– Perhaps they tested very different skills
– Need to analyze additional clips
Next Steps
• Add additional subjects (19 subjects tested in
partnership with Karen Emmorey of SDSU)
– Will correlations still hold? Will trends become
significant?
• Code additional clips
– What patterns will appear, and why?
• Analyze control group of Spanish learners
– Does gesture correlate with learning any language, not
just a signed language?
Thanks to
• Gallaudet University Priority Research
Grant for three years of support
• Research assistants
– Marisa Bennett, Jessica Bentley, Brett Best,
Carla Colley, Angela Crawford, Will Garrow,
Shannon Grady, Randall Hogue, Christy Lively,
Kristina Parmenter, Rachel Rosenstock, David
Warn
REFERENCES
Emmorey, K., S. Kosslyn, & U. Bellugi. 1993. Visual imagery and visual-spatial
language: Enhanced imagery abilities in deaf and hearing signers. Cognition, 46,
139-181.
Fauconnier, G. and M. Turner. 1996. Blending as a central process of grammar. In
A. Goldberg (ed.) Conceptual Structure, Discourse, and Language, pp.113-130.
Stanford, CA: CSLI.
Liddell, S. 2000. Blended spaces and deixis in sign language discourse. In D.
McNeill (ed.) Language and Gesture. Cambridge: Cambridge University Press.
Locker McKee, R. & D. McKee. 1992. What’s so hard about learning ASL?:
students’ & teachers’ perceptions. Sign Language Studies, 75: 129-158.
McIntire, M. L. & J. Snitzer Reilly. 1988. Nonmanual behaviors in L1 & L2
learners of American Sign Language. Sign Language Studies, 61: 351-375.
Mather, S. & A. Thibeault. 2000. Creating an involvement-focused style in book
reading with deaf and hard-of-hearing students: The visual way. In C.
Chamberlain, J. Morford, & R. Mayberry (eds.) Language Acquisition by Eye.
Hillsdale, NJ: Lawrence Erlbaum Associates.
Taub, S., P. Piñar & D. Galvan. 2004. The encoding of spatial information in
speech/gesture and sign language. Paper presented to the 8th international
conference on Theoretical Issues in Sign Language Research, Barcelona, Spain.
Tannen, D. 1986. That’s Not What I Meant! New York: Morrow.
Wilcox, S. & P. Wilcox. 1991. Learning to see: ASL as a second language. Center
for Applied Linguistics. ERIC Clearinghouse on Languages and Linguistics.
Englewood Cliffs, N.J.: Prentice Hall.