Testing ESL Sociopragmatics

Development and Validation of a Web-based Test Battery

by Carsten Roever (Author), Catriona Fraser (Author), Catherine Elder (Author)
©2014 · Monographs · 182 Pages
Series: Language Testing and Evaluation, Volume 35

Summary

The testing of second language pragmatics has grown as a research area but still suffers from a tension between construct coverage and practicality. In this book, the authors describe the development and validation of a web-based test of second language pragmatics for learners of English. The test has a sociopragmatic orientation and aims for broad coverage of the construct by assessing learners’ metapragmatic judgments as well as their ability to co-construct discourse. To ensure practicality, the test is delivered online and scored partly automatically and partly by human raters. The authors used the argument-based approach to validation, which showed that the test can support low-stakes decisions about learners’ knowledge of English sociopragmatics.
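
The validation evidence reported later in the book includes section and item reliabilities (e.g. Tables 6, 15 and 29, which report Cronbach’s α). As a purely illustrative aside, the sketch below shows how Cronbach’s α can be computed for a matrix of item scores; the function name and the sample data are ours and do not come from the test battery described in this book.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of test takers' item-score rows.

    item_scores: one row per test taker, each row a list of scores on
    the same k items.
    """
    k = len(item_scores[0])   # number of items
    n = len(item_scores)      # number of test takers

    def var(xs):
        # Sample variance (denominator n - 1).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in item_scores]) for i in range(k)]
    total_var = var([sum(row) for row in item_scores])
    # alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical example: 4 test takers answering 3 dichotomously scored items.
scores = [
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
]
print(round(cronbach_alpha(scores), 2))  # -> 0.6
```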

Table Of Contents

  • Cover
  • Title
  • Copyright
  • About the author
  • About the book
  • Acknowledgments
  • This eBook can be cited
  • Contents
  • Figures
  • Tables
  • 1 Introduction
  • 2 Pragmatics: The lay of the land
  • 2.1 The content of pragmatics: Speech acts et al.
  • 2.2 Context in pragmatics
  • 2.3 What language users know: Sociopragmatic and pragmalinguistic knowledge
  • 2.4 Applied pragmatics: Cross-cultural and interlanguage pragmatics
  • 3 Interlanguage pragmatics and pragmatic development
  • 3.1 Research designs and instruments
  • 3.1.1 Receptive research instruments
  • 3.1.2 Productive research instruments
  • 3.2 Developmental trajectories
  • 3.2.1 Speech acts
  • 3.2.2 Implicature, routine formulae and indexicals
  • 3.2.3 Extended discourse
  • 3.3 Individual differences and development of L2 pragmatics
  • 4 Testing second language pragmatics
  • 4.1 The ancestors: Early functional orientation
  • 4.2 The first generation: Speech acts
  • 4.3 The second generation: Broadening the construct
  • 4.4 Third generation: Interaction
  • 4.5 Testing for research purposes
  • 4.6 Issues in testing L2 pragmatics
  • 4.6.1 Inferences, practicality, context
  • 4.6.2 The native speaker standard, or: benchmarking in the age of lingua franca
  • 5 Validity and Validation
  • 5.1 The argument-based approach to validity
  • 5.2 Validity and validation in tests of L2 pragmatics
  • 5.3 A validity argument for a test of ESL sociopragmatic knowledge
  • 5.3.1 From target domain to observation: Domain Description
  • 5.3.2 From observation to observed score: Evaluation (Scoring)
  • 5.3.3 From observed score to universe score: Generalization
  • 5.3.4 From universe score to construct: Explanation
  • 5.3.5 From construct to target score: Extrapolation
  • 5.3.6 From target score to test use: Utilization
  • 6 This study
  • 7 Methodology
  • 7.1 Instrument types
  • 7.2 Online development
  • 7.3 Pre-pilot testing
  • 7.4 Pilot test
  • 7.4.1 Instruments
  • 7.4.2 Participants
  • 7.4.3 Test administration
  • 7.4.4 Scoring
  • 7.4.5 Data analysis
  • 7.4.6 Pilot results
  • 7.5 Revising the test
  • 7.6 Final test
  • 7.6.1 Items
  • 7.6.2 Interlanguage pragmatics test
  • 7.6.3 Test administration
  • 7.6.4 Participants
  • 7.6.5 Scoring
  • 7.6.6 Data analysis
  • 8 Results
  • 8.1 Overview
  • 8.2 Validity argument
  • 8.3 Domain Description
  • 8.4 Evaluation
  • 8.5 Generalization
  • 8.6 Explanation
  • 8.6.1 Group comparisons
  • 8.6.1.1 Native speakers and non-native speakers
  • 8.6.1.2 Effect of proficiency and exposure
  • 8.6.2 Test-internal analyses
  • 8.6.2.1 Section correlations
  • 8.6.2.2 Factor analysis
  • 8.6.3 Criterion measures
  • 8.6.4 Qualitative validation: Dialog Choice
  • 8.7 Extrapolation
  • 8.8 Utilization
  • 9 Discussion
  • 9.1 The validity argument
  • 9.1.1 Utilization
  • 9.1.2 Extrapolation
  • 9.1.3 Explanation
  • 9.1.4 Generalization
  • 9.1.5 Evaluation
  • 9.1.6 Domain Description
  • 9.1.7 Overall evaluation of the test
  • 9.2 The validity argument: structure or straitjacket?
  • 9.3 Proficiency, sociopragmatics and pragmalinguistics
  • 10 Conclusion and outlook
  • 11 References
  • 12 Appendices
  • 12.1 Appendix 1: Scoring Guide for the pilot test
  • 12.2 Appendix 2: Scoring guide for the main test

Figures

Figure 1: Metapragmatic judgment item

Figure 2: Routines item

Figure 3: Implicature item

Figure 4: DCT item

Figure 5: Sample role play instructions

Figure 6: Validity argument following Chapelle (2008)

Figure 7: Appropriateness item, correct version

Figure 8: Continuation confirmation

Figure 9: Appropriateness Judgment task

Figure 10: Extended DCT

Figure 11: Dialog Choice task

Figure 12: Appropriateness Choice task

Figure 13: Speech styles task

Figure 14: Scoring system

Figure 15: Appropriateness Judgment item

Figure 16: Extended DCT item

Figure 17: Dialog Choice item

Figure 18: Appropriateness Choice and Correction item

Figure 19: Implicature item

Figure 20: Routines item

Figure 21: Speech act item

Figure 22: Boxplot of total score distributions by NS status

Figure 23: Scree plot for ESL sample

Figure 24: Sociopragmatic-pragmalinguistic continuum

Tables

Table 1: L1 distribution of pilot participants

Table 2: Section scores

Table 3: Test characteristics by group

Table 4: Mean scores by exposure

Table 5: Mean scores by proficiency

Table 6: Reliability

Table 7: Section correlations

Table 8: Context distribution for final Appropriateness Judgment tasks

Table 9: Appropriateness judgment task NS score distribution (N=50)

Table 10: Context distribution for final Extended DCT tasks

Table 11: Context distribution for final Dialog Choice tasks

Table 12: Context distribution for final Appropriateness Choice and Correction tasks

Table 13: Main test population

Table 14: Main test population L1 distribution

Table 15: Item reliability

Table 16: Item reliability without under-performing items

Table 17: Mean section percentage scores

Table 18: Answer times per item

Table 19: ILP test means, standard deviation and reliability (ESL group only)

Table 20: Validity argument

Table 21: Item characteristics of the Appropriateness Judgment tasks

Table 22: Testlet statistics for each of the Extended DCTs

Table 23: FACETS statistics for the Extended DCT items

Table 24: Rater characteristics for the Extended DCT items

Table 25: Item characteristics of the Appropriateness Choice items

Table 26: Mean scores on the Appropriateness Corrections

Table 27: FACETS analysis of the Appropriateness Corrections

Table 28: Rater characteristics

Table 29: Cronbach’s α reliability coefficients

Table 30: Section and total scores for native and non-native speakers

Table 31: Cohen’s d effect sizes of the difference between group scores

Table 32: Total score by proficiency level

Table 33: Partial η² values for proficiency effects on section scores for the ESL sample

Table 34: Partial η² values for proficiency effects on section scores for the total NNS sample

Table 35: Mean scores by residency

Table 36: Partial η² values, significance levels and group differences for the section scores

Details

Pages: 182
Publication Year: 2014
ISBN (PDF): 9783653045987
ISBN (MOBI): 9783653982510
ISBN (ePUB): 9783653982527
ISBN (Hardcover): 9783631653791
DOI: 10.3726/978-3-653-04598-7
Language: English
Publication date: 2014 (August)
Keywords: Second language, Sociopragmatics, Interlanguage, Metapragmatics
Published: Frankfurt am Main, Berlin, Bern, Bruxelles, New York, Oxford, Wien, 2014. 182 pp., 46 tables, 24 graphs

Biographical notes

Carsten Roever (Author), Catriona Fraser (Author), Catherine Elder (Author)

Carsten Roever is Senior Lecturer in Applied Linguistics at the University of Melbourne. Catriona Fraser holds a PhD in Applied Linguistics from the University of Melbourne. Catherine Elder is the former director of the Language Testing Research Centre at the University of Melbourne.
