Zed Sevcikova Sehyr, Ph.D.

What factors predict skilled reading comprehension for deaf readers?

7/1/2021

Hearing children learn to read by mapping the sounds of their spoken language onto written letters. These sound-to-letter connections become crucial for word recognition and for subsequent reading acquisition and literacy, and phonological development is strongly related to reading development. But deaf readers have reduced or no access to the spoken language they are learning to read. Nevertheless, many deaf readers go on to become skilled readers. So the question naturally arises: what is the nature of their reading processes? Is phonology even necessary for deaf readers to develop skilled reading? Evidence for whether deaf readers rely on phonology during reading has been inconclusive: skilled adult deaf readers have been reported to use phonological coding during short-term memory recall tasks [1], but not in word recognition [2], so phonology may come into play depending on task demands. Additionally, most deaf signers in the US use ASL as their primary language of communication, and they also use (written or spoken) English. To what extent do their ASL skills predict reading comprehension?

We evaluated the contributions of lexical quality (LQ) variables (Study 1) and ASL variables (Study 2) to reading comprehension in deaf adult signers, matched for reading ability with hearing non-signers. The Lexical Quality Hypothesis proposes that the quality of phonological, orthographic, and semantic representations impacts reading comprehension above and beyond other variables known to influence comprehension, such as non-verbal reasoning, age, or education. The LQ variables were orthographic (spelling), phonological, and semantic (vocabulary) knowledge, assessed using standardized tests.

In Study 1, we recruited 98 hearing and 97 deaf adults who completed a battery of assessments, including a standardized test of reading comprehension (the PIAT-R reading comprehension subtest). Using hierarchical regression, which allows us to factor out variables one step at a time, we found that for hearing readers, phonology was the strongest predictor of reading comprehension. In contrast, for deaf readers, semantics and orthography, not phonology, predicted reading comprehension. We replicated this result using a different test of reading comprehension (the Woodcock-Johnson IV Passage Comprehension subtest), suggesting that the absence of a phonological effect is not specific to the PIAT. We conclude that strong orthographic and semantic representations, rather than precise phonological representations, predict reading skill in deaf adults.
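To make the hierarchical approach concrete, here is a minimal sketch in Python (simulated data and made-up column names, not our actual analysis script): control variables enter the model in Step 1, the LQ predictors are added in Step 2, and the change in R² indexes their unique contribution.

```python
# Minimal sketch of a two-step hierarchical regression (hypothetical columns, simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "nonverbal_iq": rng.normal(100, 15, n),
    "education":    rng.integers(12, 20, n),
    "spelling":     rng.normal(0, 1, n),   # orthographic knowledge
    "phonology":    rng.normal(0, 1, n),
    "vocabulary":   rng.normal(0, 1, n),   # semantic knowledge
})
df["reading_comp"] = (0.3 * df["vocabulary"] + 0.3 * df["spelling"]
                      + 0.01 * df["nonverbal_iq"] + rng.normal(0, 1, n))

# Step 1: control variables only.
X1 = sm.add_constant(df[["nonverbal_iq", "education"]])
step1 = sm.OLS(df["reading_comp"], X1).fit()

# Step 2: add the lexical quality predictors.
X2 = sm.add_constant(df[["nonverbal_iq", "education",
                         "spelling", "phonology", "vocabulary"]])
step2 = sm.OLS(df["reading_comp"], X2).fit()

# The R-squared change estimates the unique variance explained by lexical quality.
print(f"Delta R^2 = {step2.rsquared - step1.rsquared:.3f}")
print(step2.params.round(3))  # which LQ variable carries the weight?
```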

In Study 2, we recruited 89 deaf ASL signers, who completed tests of ASL skill, including ASL comprehension, ASL sentence reproduction, and ASL fingerspelling repetition. We asked to what extent ASL skills predict reading comprehension above and beyond other variables. We found that fingerspelling was the only ASL variable that explained significant variance in reading comprehension scores. The findings corroborate the idea that ASL fingerspelling and English orthography mutually facilitate reading proficiency in deaf readers [3].

Watch a presentation about this project now in English (subtitles) and ASL


An interactive visual database for American Sign Language reveals how signs are organized in the mind

4/6/2021

Original article published in The Conversation on April 6, 2021 8.27am EDT
“Desire” and “still” don’t rhyme in English, but they do rhyme in American Sign Language. Just as poets can evoke emotions and meaning by choosing words that echo one another in English, actress and Tony nominee Lauren Ridloff chooses signs that visually echo one another in her ASL adaptation of Anne Michaels’ poem “Not.”

For spoken languages, there are many resources that contain information about how often words are used, which words rhyme and other information not found in a dictionary. But until recently, there was no such thing for sign languages.
Our team of deaf and hearing scientists worked with a group of software engineers to create the ASL-LEX database that anyone can use for free. We cataloged information on nearly 3,000 signs and built a visual, searchable and interactive database that allows scientists and linguists to work with ASL in entirely new ways.
Mental maps of language

To communicate in any language, people must search their mental lexicon – the words or signs that they use and recognize – to perceive or produce the right vocabulary item. How quickly and efficiently they do this depends on how their lexicon is organized in their mind. The database our team built is meant to represent a mental lexicon and allows us to examine how signs are organized in the human mind.

For example, if you looked up “tease” in the database, you would learn that this sign is used quite frequently in ASL. A person trying to sign “tease” might think of it more quickly than a rare sign like “linguistics.” ASL-LEX also shows that “tease” is visually similar to – and, in a visual way, rhymes with – other signs, like “ruin.” These related signs might also come to mind while a person thinks of “tease.” Researchers believe this process of calling up similar words or signs helps people speak or sign faster.
Our goals were to first catalog the information that people might use to organize their mental lexicons and then to illustrate that information visually using a network map.
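To give a rough sense of how such a network map is put together, here is a toy sketch (a few hand-picked signs and an invented similarity link, not the actual ASL-LEX data or code): signs become nodes, and an edge connects two signs that look alike.

```python
# Sketch: a tiny sign-similarity network (hypothetical edges, not ASL-LEX data).
import networkx as nx

G = nx.Graph()
# Nodes carry a subjective frequency attribute (1-7 scale, illustrative values only).
G.add_node("TEASE", frequency=5.5)
G.add_node("RUIN", frequency=3.0)
G.add_node("LINGUISTICS", frequency=1.8)

# Edges link signs that share most phonological features ("visual rhymes").
G.add_edge("TEASE", "RUIN")

# A sign's neighborhood is simply its set of connected neighbors.
for sign in G.nodes:
    print(sign, "neighbors:", list(G.neighbors(sign)))
```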



Some signs in American Sign Language look like what they mean, while others resemble their meanings less closely. But how do we objectively measure the extent to which a sign resembles its meaning?

5/9/2019


The perceived mapping between form and meaning in ASL depends on the person's linguistic knowledge and task

Iconicity is defined as the resemblance between a form and a given meaning, while transparency is defined as the ability to infer a given meaning based on the form.
In this study, we examined how knowledge of American Sign Language (ASL) influences the perceived iconicity of 991 ASL signs. We looked at the relationship between iconicity (form-meaning resemblance), transparency (correctly guessed signs), 'perceived transparency' (transparency ratings of the guesses), and how diverse the meanings of the participants' guesses were (e.g., did they all guess the same meaning?). We conducted two experiments. In the first experiment, we asked deaf ASL signers and hearing non-signers to rate how 'iconic' they thought each sign was in relation to its meaning. The hearing non-signers were told the meaning of the signs, so that both groups would know what each sign meant. These iconicity judgements, or ratings, give us a measure of how readily a person perceives the connection between a sign's form and its meaning (think of onomatopoeia in English: 'bang', 'slurp', 'chirp', 'miaow').

In the second experiment, we selected a smaller subset of 430 ASL signs and asked another group of hearing non-signers to guess the meaning of the signs. They were then asked to rate how obvious (transparent) their guesses would be to other people. This tells us how well a person who doesn't know any ASL can guess the meaning of the signs purely from what the signs look like and, if they did not guess the correct meaning, what the nature of their guesses was. For example, for a sign-naive perceiver, the ASL sign COOKIE may conjure up images of a "spider" moving around on a surface, while others might say "open (a jar)" because of the twisting movement of the wrist. In the first instance, the person focused on the shape of the hand (a clawed handshape) contacting the palm, and in the second instance, the person focused on the twisting hand movement. What determines how people extract the meaning of gestures? And for those who are actually trying to learn ASL signs (like me!!), how does the ability to extract meanings, which may or may not be relevant, impact their learning process - does it help or hinder sign retention?

Our study demonstrated that linguistic knowledge mediates perceived iconicity differently from experience with gesture. 
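For readers curious how these measures relate to one another, here is a toy sketch (invented guesses and rating values, not the study's data): transparency is scored as the proportion of correct guesses per sign and then correlated with the mean iconicity ratings from Experiment 1.

```python
# Sketch: scoring transparency and relating it to iconicity ratings
# (toy data for three signs; not the study's actual responses or code).
from scipy.stats import spearmanr

# Non-signers' guesses for each sign (hypothetical).
guesses = {
    "COOKIE": ["spider", "open a jar", "cookie", "cookie"],
    "BOOK":   ["book", "book", "open", "book"],
    "MOTHER": ["chin", "five", "talk", "hat"],
}
correct_meaning = {"COOKIE": "cookie", "BOOK": "book", "MOTHER": "mother"}

# Transparency = proportion of guesses matching the intended meaning.
transparency = {s: sum(g == correct_meaning[s] for g in gs) / len(gs)
                for s, gs in guesses.items()}

# Mean iconicity ratings from Experiment 1 (illustrative values, 1-7 scale).
iconicity = {"COOKIE": 3.2, "BOOK": 6.1, "MOTHER": 1.9}

signs = list(guesses)
rho, p = spearmanr([transparency[s] for s in signs],
                   [iconicity[s] for s in signs])
print(transparency, f"rho = {rho:.2f}")
```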
[Image: the ASL sign COOKIE]
Sevcikova Sehyr, Z., and Emmorey, K. (in press). ​The perceived mapping between form and meaning in American Sign Language depends on linguistic knowledge and task: Evidence from iconicity and transparency judgments. Language & Cognition.

What is the nature of orthographic representations in deaf individuals?

6/12/2018

By studying the electrophysiology of printed letter and fingerspelling font recognition using event-related potentials (ERP/EEG), we gain a better understanding of the nature of orthographic processing in deaf individuals, which has important implications for improving literacy for deaf readers.

Letter recognition plays an important role in reading and follows different phases of processing, from early visual feature processing through to the processing of abstract letter representations. In masked priming ERP studies, related letter pairs have been shown to elicit less positive-going waveforms than unrelated pairs, and amplitude was modulated as a function of case consistency between prime and target (starting at ~120 ms) or as a function of abstract letter identity (220-300 ms). Deaf ASL signers can also represent orthography indirectly using fingerspelling.

In a single-letter, supraliminal (unmasked) paradigm, Experiment 1 examined letter-to-letter ERP priming effects in deaf signers versus hearing non-signers. Experiment 2 explored the differences and interactivity between letter and fingerspelling processing in deaf signers by comparing priming effects between letters and fingerspelling fonts. ERPs were recorded over 29 scalp sites while participants performed a probe detection task. Targets were presented centrally for 200 ms, immediately preceded by a 100 ms prime.

Experiment 1 revealed almost identical letter-to-letter priming effects in the two groups, replicating previous findings observed for hearing non-signers by Petit et al. (2006). Small differences between deaf and hearing participants in the scalp distribution of the priming effects suggested a possible influence of deafness and/or signed language. Experiment 2 revealed that fingerspelling fonts primed English letters, but English letters did not prime fingerspelling. This pattern is consistent with previous research indicating that deaf ASL signers recode fingerspelled words into English in short-term memory, whereas printed words are not recoded as fingerspelling (Sevcikova Sehyr, Petrich, & Emmorey, 2016), and it might have important implications for skilled reading in the deaf population.
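As a simplified illustration of how a priming effect can be quantified from epoched EEG, here is a sketch with synthetic single-channel data (the actual study used full multi-channel recordings and standard ERP software): the effect is the mean amplitude difference between unrelated and related trials in a chosen time window.

```python
# Sketch: quantifying an ERP priming effect as the mean amplitude difference
# between unrelated and related prime-target trials in a chosen time window.
# (Synthetic single-channel data; illustrative only.)
import numpy as np

rng = np.random.default_rng(1)
sfreq = 250                                      # samples per second
times = np.arange(-0.1, 0.6, 1 / sfreq)          # epochs from -100 ms to 600 ms
n_trials = 80

def simulate(amplitude):
    """Epochs with a positivity around 250 ms plus noise (illustrative only)."""
    component = amplitude * np.exp(-((times - 0.25) ** 2) / (2 * 0.03 ** 2))
    return component + rng.normal(0, 2, (n_trials, times.size))

related = simulate(amplitude=2.0)    # related pairs: less positive-going
unrelated = simulate(amplitude=4.0)

# Mean amplitude in the 220-300 ms window (abstract letter identity effects).
win = (times >= 0.22) & (times <= 0.30)
priming_effect = unrelated[:, win].mean() - related[:, win].mean()
print(f"Priming effect (unrelated - related): {priming_effect:.2f} microvolts")
```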

*This project is generously supported by NIH and Dr. Emmorey's LLCN lab. An earlier version of this project was presented at the Society for the Neurobiology of Language (SNL) 10th Annual Meeting in Quebec, August 16-18, 2018.
Zed Sehyr is supported by the SNL 2018 Travel Award www.neurolang.org/2018/2018-award-winners/

Sehyr, Sevcikova Z., Renna, J., Osmond, S., Midgley, K. J., Holcomb, P. J., & Emmorey, K. (2018). Priming effects between fingerspelled fonts and printed letters. Poster presented at the Society for the Neurobiology of Language (SNL) 10th Annual Meeting, Quebec, August 16-18, 2018.

Comparing Semantic Fluency in American Sign Language and English

5/4/2018

Verbal fluency is the ability to produce as many words as possible that are either semantically related (e.g., animals, vegetables) or phonologically related (e.g., words that begin with the letter 'L') within one minute. This ability varies based on the person's language proficiency, age of language exposure, and whether the person uses one or more languages.

Verbal fluency performance has been widely used as a measure of language proficiency, word retrieval and lexical organization, memory organization, or executive functioning in children and adults in clinical and research settings.

The ability to generate words on the fly has been examined for many spoken languages but it has not been extensively studied in deaf or hearing sign language users with different language backgrounds. How does this ability compare between signed and spoken language? How does the age of exposure to ASL mediate the speed, accuracy, or pattern of sign retrieval?

In one experiment, we examined semantic fluency performance in deaf signers who generated signs in ASL (their dominant language) and compared them with hearing, sign-naive, monolingual English speakers who produced English words for each of the following categories: 'animals', 'fruits', 'vegetables' and 'clothing'. We found that language modality, whether spoken or signed, did not influence performance: both signers and speakers retrieved comparable numbers of items, and, interestingly, about 30% of the deaf signers' ASL responses were fingerspelled.
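To show how such responses can be scored, here is a toy sketch (an invented category list and responses, not our actual scoring protocol): unique valid items are counted within the category, and the proportion of fingerspelled responses is tracked separately.

```python
# Sketch: scoring one participant's semantic fluency responses for the
# 'animals' category, counting unique valid items and the share of
# fingerspelled responses (toy data and a toy category list).
valid_animals = {"DOG", "CAT", "HORSE", "COW", "ELEPHANT", "GIRAFFE"}

# Each response is (gloss, was_fingerspelled); repetitions are not scored twice.
responses = [("DOG", False), ("CAT", False), ("CAT", False),
             ("GIRAFFE", True), ("ELEPHANT", False)]

seen = set()
score = 0
fingerspelled = 0
for gloss, fs in responses:
    if gloss in valid_animals and gloss not in seen:
        seen.add(gloss)
        score += 1
        if fs:
            fingerspelled += 1

print(f"fluency score = {score}, fingerspelled = {fingerspelled / score:.0%} of scored items")
```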

In another experiment, we were interested to see whether hearing speakers who are proficient in both ASL and spoken English would generate similar numbers of items in both languages. One group of ASL-English bilinguals had deaf parents and used both languages from birth. Another group of ASL-English bilinguals acquired ASL later, in adulthood (they were ASL-English interpreters or teachers of the deaf). Semantic fluency scores were higher in English (the dominant language) than in ASL (the non-dominant language), regardless of age of ASL acquisition. Again, fingerspelling was relatively common in both groups of signers.

To summarize, the modality of the dominant language (spoken or signed) does not affect semantic fluency scores in deaf or hearing adults if fingerspelled forms are considered acceptable responses, and language dominance, rather than age of acquisition, affects ASL semantic fluency performance in hearing ASL–English bilinguals.

This study showed, first, that verbal fluency tests are generalizable to signed languages: semantic fluency is sensitive to language dominance and can be used to measure lexical retrieval in both signed and spoken language modalities. Second, it emphasizes the need to consider fingerspelling when assessing semantic fluency in ASL, given the relatively high occurrence of fingerspelling in ASL responses. This is crucial for clinicians and researchers who rely on these measures as an index of language proficiency, lexical access, or executive functioning.

Zed Sevcikova Sehyr, Marcel R Giezen, Karen Emmorey (2018). Comparing Semantic Fluency in American Sign Language and English, The Journal of Deaf Studies and Deaf Education, eny013, https://doi.org/10.1093/deafed/eny013



Referring strategies in American Sign Language and English (with co-speech gesture): The role of modality in referring to non-nameable objects

4/28/2018

American Sign Language (ASL) and English differ in linguistic resources available to express visual–spatial information. In a referential communication task, we examined the effect of language modality on the creation and mutual acceptance of reference to non-nameable figures. In both languages, description times reduced over iterations and references to the figures’ geometric properties (“shape-based reference”) declined over time in favor of expressions describing the figures’ resemblance to nameable objects (“analogy-based reference”). ASL signers maintained a preference for shape-based reference until the final (sixth) round, while English speakers transitioned toward analogy-based reference by Round 3. Analogy-based references were more time efficient (associated with shorter round description times). Round completion times were longer for ASL than for English, possibly due to gaze demands of the task and/or to more shape-based descriptions. Signers’ referring expressions remained unaffected by figure complexity while speakers preferred analogy-based expressions for complex figures and shape-based expressions for simple figures. Like speech, co-speech gestures decreased over iterations. Gestures primarily accompanied shape-based references, but listeners rarely looked at these gestures, suggesting that they were recruited to aid the speaker rather than the addressee. Overall, different linguistic resources (classifier constructions vs. geometric vocabulary) imposed distinct demands on referring strategies in ASL and English.

Sehyr, Z., Nicodemus, B., Petrich, J., & Emmorey, K. (2018). Referring strategies in American Sign Language and English (with co-speech gesture): The role of modality in referring to non-nameable objects. Applied Psycholinguistics, 1-27. doi:10.1017/S0142716418000061


Lateralization of the N170 for word and face processing in deaf signers

11/6/2015

[Image: poster presented at CNS, New York, April 1-4, 2016]

We investigated whether hemispheric organization of word and face recognition is uniquely shaped by sign language experience.

For typically developing hearing individuals, learning to read leads to LH-lateralization for words and may trigger subsequent RH-lateralization for faces. Hemispheric specialization for faces in RH may be contingent on prior lateralization for words in LH (Dundas, Plaut & Behrmann, 2014).

Deaf native users of American Sign Language (ASL) have distinct developmental experiences with both words and faces (e.g., the face conveys linguistic information).

What is the relationship between word and face processing for deaf native users of American Sign Language? How do distinct developmental experiences of deaf signers and hearing non-signers affect hemispheric organization for word and face processing? 

In this preliminary report, we present data from 19 hearing non-signers and 23 deaf ASL signers who made same-different judgments on pairs of words or faces (192 trials each), where the first stimulus was presented centrally and the second was presented to either the left hemisphere (LH) or the right hemisphere (RH). EEG was recorded to the centrally presented words/faces and referenced to the average of all electrode sites.

In addition, we measured the accuracy of discrimination between the central and lateralized words/faces. Based on previous research with hearing non-signers, we expected to observe an RH advantage for faces presented in the left visual field and, conversely, an LH advantage for words presented in the right visual field.
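One common way to quantify such hemispheric asymmetries, whether in accuracy or in N170 amplitude, is a laterality index. Here is a minimal sketch with made-up amplitude values (not our recorded data):

```python
# Sketch: a simple laterality index from hypothetical N170 mean amplitudes
# (or accuracy scores) measured over left- vs right-hemisphere sites.
# LI > 0 indicates a left-hemisphere bias; LI < 0 a right-hemisphere bias.
def laterality_index(left, right):
    return (abs(left) - abs(right)) / (abs(left) + abs(right))

# Illustrative N170 amplitudes (microvolts) for one group and condition.
n170_words = {"LH": -4.5, "RH": -2.0}
n170_faces = {"LH": -3.8, "RH": -4.1}

print("words LI:", round(laterality_index(n170_words["LH"], n170_words["RH"]), 2))
print("faces LI:", round(laterality_index(n170_faces["LH"], n170_faces["RH"]), 2))
```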

Preliminary ERP results: 
Deaf signers and hearing non-signers showed a similar laterality pattern for N170 to words (left-lateralized) and to faces (bilateral). However, the scalp distributions for the laterality effects differed between the groups and might reflect unique organization of visual pathways in the occipito-temporal cortex for deaf signers.
  • Face processing: Both deaf and hearing participants showed a bilateral N170 to faces. Deaf signers showed a slightly right-lateralized response to faces at temporal sites, whereas behaviorally it was the hearing non-signers who showed a small RH advantage. The ERP results suggest a possibly more distributed circuit for face perception.
  • Word processing: Both groups showed a left-lateralized N170 response to words, but the asymmetry was somewhat larger for hearing non-signers. At temporal sites, deaf signers exhibited a more bilateral N170 response, while hearing non-signers exhibited a strong, left-lateralized N170 response. This result might reflect phonological-orthographic integration in hearing, but not deaf, individuals.

Discrimination accuracy - behavioral results:
  • Both groups showed higher accuracy for words than for faces (F(2, 82) = 29.3)
  • An LH bias was found for words but no RH bias for face processing; only the hearing group approached significance

Stay tuned for more news and final results!


Fingerspelled and Printed Words Are Recoded into a Speech-based Code in Short-term Memory

4/24/2014

We conducted three immediate serial recall experiments that manipulated type of stimulus presentation (printed or fingerspelled words) and word similarity (speech-based or manual). Matched deaf American Sign Language signers and hearing non-signers participated (mean reading age = 14–15 years). Speech-based similarity effects were found for both stimulus types indicating that deaf signers recoded both printed and fingerspelled words into a speech-based phonological code. A manual similarity effect was not observed for printed words indicating that print was not recoded into fingerspelling (FS). A manual similarity effect was observed for fingerspelled words when similarity was based on joint angles rather than on handshape compactness. However, a follow-up experiment suggested that the manual similarity effect was due to perceptual confusion at encoding. Overall, these findings suggest that FS is strongly linked to English phonology for deaf adult signers who are relatively skilled readers. This link between fingerspelled words and English phonology allows for the use of a more efficient speech-based code for retaining fingerspelled words in short-term memory and may strengthen the representation of English vocabulary.
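For readers unfamiliar with how similarity effects are quantified in serial recall, here is a toy sketch (invented lists and responses, not the experiment's materials): phonologically similar lists typically produce more order errors, so strict serial-position scoring yields lower scores than for dissimilar lists.

```python
# Sketch: strict serial-position scoring for immediate serial recall,
# comparing a speech-based (rhyming) similar list with a dissimilar list
# (toy lists and responses; not the experiment's materials).
def serial_score(presented, recalled):
    """Count items recalled in their exact presented position."""
    return sum(p == r for p, r in zip(presented, recalled))

similar_list    = ["MAD", "MAP", "MAN", "CAP", "CAT"]    # high speech-based overlap
dissimilar_list = ["PEN", "SKY", "FROG", "DESK", "WOLF"]

recall_similar    = ["MAD", "MAN", "MAP", "CAP", "CAT"]   # order errors typical of similar lists
recall_dissimilar = ["PEN", "SKY", "FROG", "DESK", "WOLF"]

effect = serial_score(dissimilar_list, recall_dissimilar) - serial_score(similar_list, recall_similar)
print(f"similarity effect (dissimilar - similar correct positions): {effect}")
```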

Open access:
academic.oup.com/jdsde/article/22/1/72/2333964

Sevcikova, Z. & Emmorey, K. (2014) Short-term Memory for Fingerspelling and Print. Paper presented at Center for Research in Language, University of California San Diego. 29 April 2014
http://crl.ucsd.edu/talks/pasttalks.php


ASL-LEX: A lexical database for American Sign Language 

4/24/2014

Lexical and phonological properties of words, such as frequency and neighborhood density, affect many aspects of language processing. For many spoken languages, there are large databases that can be used to obtain this information, but for American Sign Language (and many other sign languages), no large corpora or normative datasets are currently available. Here we report the development of ASL-LEX, a soon-to-be publicly available resource for sign language researchers that contains lexical and phonological information for nearly 1,000 ASL signs. We collected subjective frequency ratings for each sign from 25-31 Deaf signers (native and early-exposed) using a 7-point scale (1 = very infrequent). We also collected iconicity ratings for each sign from 21-37 hearing non-signers using a 7-point scale (1 = not at all iconic). In addition, each entry has been coded for phonological features based on a modified version of the Prosodic Model (Brentari, 1998), from which neighborhood densities were calculated. The signs have also been coded for grammatical class and initialization, and the database contains time codes for the sign onset and offset within each video clip. ASL-LEX will soon be accessible online, and users will be able to search the database contents and access the sign videos using pre-defined search criteria.

Our analysis of the subjective frequency data from ASL-LEX reveals a strong correlation between the frequency ratings from native signers and early signers (exposed to ASL before age seven) (rs =.94, p<.001), replicating Mayberry et al. (2014). Thus, subjective frequency ratings are relatively stable across Deaf people who are proficient signers. Subjective frequency ratings for ASL signs (raw scores) were moderately correlated with the word frequencies of their English translations from SUBTLEX (rs =.58, p<.001). We observed a small, but significant correlation between frequency and iconicity (rs =–.17, p<.001), indicating a weak tendency for frequent signs to be rated as less iconic. This pattern is the opposite of that observed by Vinson et al. (2008) for British Sign Language, although they also observed a very weak correlation between subjective frequency and iconicity. Frequency ratings in ASL-LEX were normally distributed, while iconicity ratings were skewed toward the lower end of the scale (median iconicity rating = 2.71). Neighborhood density (ND) correlated weakly with frequency and with iconicity (frequency: rs =.11, p<.001; iconicity: rs =.12, p<.001). For this analysis, ND was defined as the number of sign neighbors sharing three out of four coded features (parallel to the way neighbors have been defined in the spoken language literature, i.e., words that share all but one phoneme). The weak correlation between ND and frequency parallels results from spoken languages (Frauenfelder et al., 1993). Overall, the results indicate weak relationships between iconicity, frequency, and neighborhood density for ASL. Although ASL-LEX is not a substitute for an ASL corpus, this database will be a valuable resource for designing tightly controlled experimental studies, as well as developing assessment and resource materials for education.
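To make the neighborhood density definition concrete, here is a toy sketch (invented feature codes, not the actual ASL-LEX coding scheme): a neighbor is any sign that shares at least three of the four coded features.

```python
# Sketch: neighborhood density as defined above, i.e. the number of signs
# sharing at least three of four coded phonological features
# (toy feature codes, not the actual ASL-LEX coding).
from itertools import combinations

# Each sign is coded for four features (illustrative labels only).
signs = {
    "TEASE":       ("X", "neutral", "brush", "index"),
    "RUIN":        ("X", "neutral", "brush", "index"),
    "LINGUISTICS": ("L", "neutral", "circle", "index-thumb"),
}

def shared_features(a, b):
    return sum(x == y for x, y in zip(a, b))

density = {s: 0 for s in signs}
for (s1, f1), (s2, f2) in combinations(signs.items(), 2):
    if shared_features(f1, f2) >= 3:
        density[s1] += 1
        density[s2] += 1

print(density)  # e.g. {'TEASE': 1, 'RUIN': 1, 'LINGUISTICS': 0}
```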

Sehyr Sevcikova, Z., Caselli, N., Cohen-Goldberg, A., & Emmorey, K. ASL-LEX: A lexical database of American Sign Language. Poster presented at the 12th Conference on Theoretical Issues in Sign Language Research (TISLR12), Melbourne Convention Centre, Australia, January 4-7, 2016. Awarded Best Early Award Prize.

Paper to appear:
Caselli, N., Sehyr Sevcikova, Z., Cohen-Goldberg, A., & Emmorey, K. (2016) ASL-LEX: A lexical database of American Sign Language. Behavior Research Methods.

Please visit our project website:
http://asl-lex.org/

And keep an eye out on the project news here:
https://slhs.sdsu.edu/llcn/asl-lex/

