Cornell University

Rehabilitation Research and Training Center on Disability Demographics and Statistics

Removing Barriers to Survey Participation for Persons with Disabilities

 

Susan Mitchell

Anne Ciemnecki

Karen CyBulski

Jason Markesich

 

Mathematica Policy Research, Inc.

 

January, 2006

 

For further information about this paper contact:

Jason Markesich

Mathematica Policy Research, Inc.

600 Maryland Ave., S.W., Suite 550

Washington, DC  20024-2512

 

tel 609-275-2207

email  jmarkesich@mathematica-mpr.com

 

This paper is being distributed by the Rehabilitation Research and Training Center on Disability Demographics and Statistics at Cornell University.

 

The Center is funded by the U.S. Department of Education, National Institute on Disability and Rehabilitation Research, through a grant to Cornell University (No. H113B031111).  The contents of this paper do not necessarily represent the policy of the Department of Education, and you should not assume endorsement by the Federal Government (Edgar, 75.620 (b)).

 

The Co-Principal Investigators are:

Susanne M. Bruyère—Director, Employment and Disability Institute, ILR School, Extension Division, Cornell University

Richard V. Burkhauser—Sarah Gibson Blanding Professor, Department of Policy Analysis and Management, College of Human Ecology, Cornell University

Andrew J. Houtenville—Senior Research Associate, Employment and Disability Institute, ILR School, Extension Division, Cornell University

David C. Stapleton—Director, Cornell University Institute for Policy Research, Washington, DC.


CONTENTS

A. Introduction

B. Surveys Evaluated

1. CMS’s Evaluation of Section 1115 Medicaid Reform Demonstrations (SSI/Medicaid surveys)

2. SSA’s National Survey of SSI Children and Families (NSCF)

3. SSA’s National Beneficiary Survey (NBS)

C. Challenges in Interviewing Persons with Disabilities

D. Modifications to Instrumentation

1. Minimizing High-Frequency Sounds

2. Interviewer Checkpoints

3. Structured Probes

4. Follow-Up Items for Nonresponse

5. Measurement of Disability

6. Self-Reports of Disabling Conditions

7. Adjusting for Differences in Living Situations

8. Identifying Other Differences

E. Modifications to Procedures

1. Interviewer Training

2. Contact Procedures

3. Establishing Legitimacy

4. Proxy Respondents

5. Assisted Interviews

6. Using Incentives

7. Production Standards

F. Interviewing Modes

1. CATI

2. CATI/CAPI

3. TTY, Telecommunications Relay Service, and Instant Messaging

4. Web

G. Conclusions and Recommendations

References

Appendix: Cognitive Test for Identifying Need for Proxy Respondent in the National Beneficiary Survey


A.    Introduction

The National Institute on Disability and Rehabilitation Research (NIDRR) has recently defined a new paradigm of disability (NIDRR 2000).  Under the new paradigm, disability is a “deficit in the person-community relationships that should be addressed by social interactions.”  The new definition represents a shift from the old paradigm, which presented disability as a “deficit in an individual that prevents the individual from performing certain functions or activities.”  The goal of the new paradigm is to facilitate the full participation of people with disabilities in society.  The shift implies that federally funded surveys need new approaches both to measuring disability and to making surveys accessible to people with disabilities.

Accompanying the new paradigm of disability is a trend toward self-directed care and self-determination.  Self-directed care allows individuals to exercise more control over their health care decisions (CMS 2003).  In 2001, President Bush announced the New Freedom Initiative, which is designed to ensure that all Americans have the opportunity to learn and develop skills, engage in productive work, make choices about their daily lives, and participate fully in community life.  The shift toward self-directed care and full participation by individuals with a disability further emphasizes the importance of including persons with disabilities in surveys that inform not only federal programs and policies but also the services received by persons with disabilities.  

While the survey research literature focuses on using surveys to identify persons with disabilities and representing them statistically in population estimates (National Council on Disability 2004; National Council on Disability 1998; LaPlante and Carlson 1996; Kraus et al. 1996; Conwal Incorporated 1993; Thompson-Hoffman and Storck 1991; Zola 1990; Mathematica Policy Research 1984), the literature on practical recommendations for surveying people with disabilities is sparse.  The available literature tends to explore issues surrounding the use of assistive technology devices (most notably TTY) and/or their potential limitations for telephone interviewing (Carlson et al. 2001; Barnett and Franks 1999; Wilson et al. 1998; Russell et al. 1997; Olsen et al. 1999; Kirchner 1998) as well as the use and selection of proxies in surveys to represent persons with disabilities (Kirchner 1998; Todorov and Kirchner 2000; Parsons et al. 2000; Black 2004).  In response, the Interagency Committee on Disability Research convened a meeting in April 2004 to begin establishing best practices for surveying people with disabilities.  The conference brought together participants from within the federal government and research community who share an interest in improving the representativeness of people with disabilities in survey estimates. Mathematica Policy Research (MPR) made two presentations at that conference: “Simple Survey Modifications Increase Accessibility” and “Multiple Survey Response Modes Increase Accessibility.”  This paper synthesizes the major points of each presentation and draws on our broader survey experience to formulate a set of practical recommendations for conducting surveys with people with disabilities.  

Persons with disabilities are naturally included in general population surveys through random selection.  This paper, however, focuses on list-frame surveys in which all sample members are presumed to have a disability.  The comparatively low incidence of people with disabilities in general population surveys may make some methods described here impractical or cost-prohibitive; however, other methods will no doubt be practical enough to warrant adoption in all surveys.  We view this paper as the beginning of a set of best practices for removing barriers to survey participation for persons with disabilities.

       

B.    Surveys Evaluated

MPR has gained experience in conducting surveys of people with disabilities through contracts sponsored by the Centers for Medicare & Medicaid Services (CMS) and the Social Security Administration (SSA).  The sample members in these surveys were not only people with disabilities but also people with low incomes.  Compared with the general population, low-income populations tend to be more mobile and to have lower levels of education and literacy—characteristics that added to the challenges of these data collection efforts.  Below we describe the three surveys that form the basis of our findings and recommendations.

1.         CMS’s Evaluation of Section 1115 Medicaid Reform Demonstrations (SSI/Medicaid surveys)

During the 1990s, states used Section 1115 demonstration waivers to modify their Medicaid programs to provide services through managed care rather than through traditional fee-for-service arrangements.  As part of this CMS evaluation, MPR conducted computer-assisted telephone interview (CATI) surveys to assess how recipients of Supplemental Security Income (SSI) were faring in Medicaid managed care.  The survey sample included people with physical and sensory disabilities, mental illness, and mental retardation.  The surveys—conducted in Kentucky, New York, and Tennessee—addressed access to and satisfaction with care, utilization of medical services, insurance coverage, experience in the demonstration program, unmet needs and delays receiving care, health status, attitudes toward health care and health care risks, use of preventive services, and family demographics.  MPR conducted more than 4,600 interviews of people with disabilities between September 1998 and February 2000.  Had MPR conducted the surveys in-person instead of by telephone, the cost would have been about four to eight times as much—a cost that may well have been prohibitive. 

2.         SSA’s National Survey of SSI Children and Families (NSCF)

The NSCF collected data on children with disabilities and their families who received or applied for SSI (n = 12,000).  Sponsored by the Office of Research, Evaluation, and Statistics of the Social Security Administration, the survey was designed to profile the current cross-section of SSI child recipients and to evaluate the effects of the Welfare Reform Act on SSI children.  Administered in 2001–2002, the NSCF was a mixed-mode survey with CATI as the primary mode of data collection and computer-assisted personal interviewing (CAPI) as the follow-up mode.  For children under age 18, the respondent was the parent or guardian; for young adults ages 18 to 24, the respondent was the sample member him- or herself (whenever possible).  To ensure the participation of the young adults (about 900 cases, all of whom were presumed to have a disability), we developed specialized interviewer training that sensitized interviewers to the challenges that people with disabilities face in survey participation.

3.         SSA’s National Beneficiary Survey (NBS)

The Social Security Administration’s Office of Disability and Income Security contracted with MPR to conduct an evaluation of the Ticket to Work (TTW) program, which was designed to increase access to employment for Social Security disability beneficiaries.  The program provides eligible SSI and Social Security Disability Insurance (SSDI) beneficiaries with a ticket that can be used to obtain vocational or employment services from a network of providers.  As part of the evaluation, MPR conducted the 2004 National Beneficiary Survey with a sample of about 10,000 disability beneficiaries drawn from SSA’s administrative records.  The questionnaire collected data on health and disability status, experiences with the TTW program, employment and use of services, income and other assistance, and sociodemographic characteristics.  Similar to the NSCF, the NBS was a dual-mode survey, with CATI as the primary mode and CAPI as the follow-up mode.  CAPI interviews were conducted primarily with those who (1) could not be located for a CATI interview, (2) refused a CATI interview, or (3) could not respond to a telephone interview for reasons of their disability.  Only one percent of those interviewed in person required that mode because a disability precluded a telephone interview.  Interviewing staff were trained in the use of TTY, relay services, and instant messaging as means of overcoming speech and hearing difficulties.  In addition, if needed, they obtained the services of sign language interpreters in the field and made a range of other accommodations to maximize survey participation.

Organization of Paper.  We begin with a description of the challenges associated with interviewing people with disabilities and then describe modifications that can be made to instruments and data collection procedures to remove barriers and maximize self-response.  Next, we discuss the strengths and limitations of different interviewing modes and conclude with a summary of our experiences interviewing people with disabilities.  Throughout the paper, we cite examples from the SSI/Medicaid surveys, the NSCF, and the NBS.

C.    Challenges in Interviewing Persons with Disabilities

Interviewing the disability population presents several challenges.  To be successful, the interviewer has to overcome communication, stamina, and cognitive challenges.  At the same time, it may be necessary to use instruments from surveys not designed for people with disabilities to benchmark comparisons.  The surveys described here were among the first to be conducted with large samples of people with disabilities; therefore, we took care to accommodate their needs and minimize proxy response.  Our goals were to (1) give respondents with disabilities the opportunity to speak for themselves regarding issues that affect their health and well-being, and (2) provide our clients with a cost-effective way to collect data on people with disabilities.

Survey researchers collect data from persons with disabilities all the time.  Usually, however, researchers are unaware that a particular respondent has a disabling condition—especially if the interview is conducted by telephone.  The dilemma faced in designing the three surveys described here was that every survey sample member had a disability.  Moreover, the survey populations represented a wide range of disabilities with varying degrees of severity; in addition, some sample members had several disabling conditions.  Accordingly, the surveys could not be designed to overcome all the possible challenges.  Instead, the instrumentation procedures attempted to address three broad categories of common challenges:  (1) communication, (2) stamina, and (3) cognitive barriers.  Communication challenges include both hearing and speech impairments.  The term “stamina challenges” refers to both physical and mental fatigue.  Cognitive challenges include, but are not limited to, emotional disturbance, difficulty processing questions and responses, lack of complete or specific knowledge, and confusion about the purpose of the interview.

D.   Modifications to Instrumentation

To varying degrees, the questionnaires featured four techniques designed to overcome the three categories of challenges: minimization of high-frequency sounds, interviewer checkpoints, structured probes that allowed questions to be rephrased in a standard manner, and follow-up items for nonresponse.  In addition, the surveys worded questions simply, clearly, and briefly as well as in an unbiased manner so that respondents could readily understand key terms and concepts.  Given the intent of the questions, response categories were appropriate, mutually exclusive, and reasonably exhaustive.  Clear, concise instructions and probes accompanied questions so that interviewers knew exactly what was expected of them.

1.         Minimizing High-Frequency Sounds

In English, consonants are more important than vowels for identifying words, and loss of hearing for high-frequency consonants (s, z, t, f, and g) is common.  Replacing high-frequency sounds with low-frequency sounds makes questions easier to hear.  Following is an example of a question with several high-frequency sounds:

How satisfied are you with the overall quality of care you receive as a member of NAME OF MANAGED CARE PLAN?  Are you very satisfied, somewhat satisfied, neither satisfied nor dissatisfied, somewhat dissatisfied, or very dissatisfied?

 

Compare this with a version that uses low-frequency sounds:

How would you rate the overall quality of the medical care you get as a member of NAME OF MANAGED CARE PLAN?  Is it excellent, very good, good, fair, or poor?

 

2.         Interviewer Checkpoints

Checkpoints allow interviewers to assess whether the respondent needs encouragement or is becoming too fatigued to continue the interview.  The SSI/Medicaid surveys varied in the length of the instruments.  The SSI/Medicaid survey in Tennessee took, on average, 44 minutes to administer; the SSI/Medicaid surveys in Kentucky and New York took 22 minutes.  Pretest interviews revealed that some respondents grew fatigued, especially during the longer interview.  Pretesting also revealed that it was useful to provide respondents with positive feedback about completing the survey task.  Comments such as “Your answers are very helpful to this study” seemed to allay fears and put respondents at ease.  Other reassuring comments used by pretest interviewers included “There are no right or wrong answers to these questions” and “Take your time.” Based on the pretest observations, the SSI/Medicaid interviews included three checkpoints to ensure that interviewers stopped and assessed the respondent’s ability to continue.  The checkpoints also provided interviewers with prompts to provide encouragement when necessary.  Interviewers were required to record their actions at each checkpoint:

INTERVIEWER CHECKPOINT:  DOES THE RESPONDENT SEEM FATIGUED, CONFUSED, OR NEED ENCOURAGEMENT?

 

FATIGUE PROBE

(1)     Are you feeling tired, or can we continue?

(2)     Would you like to take a break?  I can hold on....

(3)     Would you like to continue the interview another time?

 

ENCOURAGEMENT PROBE

(1)     Your answers are very helpful to this study.

(2)     There are no right or wrong answers to these questions.

 

INTERVIEWER ACTION

Not fatigued, no encouragement provided.................... 00

Fatigued and wants to be called back........................... 01

Fatigued but can continue............................................. 02

Gave encouragement................................................... 03

 

About three-quarters of the respondents were able to complete the 22-minute interview without special intervention from the interviewer (see Table 1).  About half the respondents completed the 44-minute interview without needing special intervention.  These differences persisted across all disabling conditions.

Not one respondent needed a break during the 22-minute interview.  Eleven percent of the respondents needed a break during the longer interview (see Table 2).  Of these respondents, more than half needed a second break because their stamina was too low or their attention span too short to complete the interview in one session.  All respondents who needed a break completed the interview on a subsequent call.  While the percentage needing a break did not vary much by disabling condition, respondents with physical or sensory disabilities were most likely to need more than one break.  Such respondents tended to tire quickly or experience difficulty in using the telephone for prolonged periods.  Respondents with severe and persistent mental illness were most able to continue with no further intervention after a break.

Interviewers reported that the checkpoints caused them to reduce the pace of the interview while reminding them that they were speaking with respondents who were likely to experience difficulties.  Interviewers reported that the encouragement was especially helpful for respondents who did not know the answer to a question or series of questions.  For these respondents, interviewers discovered that the following sentence helped respondents relax and continue:  “I know these questions are hard to answer, and you are doing your best.”  The interviewers strongly recommended the use of encouragement in future surveys of people with disabilities.
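As a sketch, the checkpoint above might be recorded in a CATI system along the following lines.  This Python fragment is purely illustrative and is not MPR’s actual software; only the probe texts and the action codes 00–03 are taken from the display above, and the function and variable names are invented.

```python
# Illustrative CATI checkpoint handler (hypothetical; not MPR's software).
# Probe texts and action codes are taken from the checkpoint display above.
FATIGUE_PROBES = [
    "Are you feeling tired, or can we continue?",
    "Would you like to take a break?  I can hold on....",
    "Would you like to continue the interview another time?",
]
ENCOURAGEMENT_PROBES = [
    "Your answers are very helpful to this study.",
    "There are no right or wrong answers to these questions.",
]

def record_checkpoint(seems_fatigued: bool, wants_callback: bool,
                      needs_encouragement: bool) -> str:
    """Return the two-digit action code the interviewer records."""
    if seems_fatigued:
        # Fatigued and wants to be called back (01) vs. can continue (02).
        return "01" if wants_callback else "02"
    if needs_encouragement:
        return "03"  # gave encouragement
    return "00"      # not fatigued, no encouragement provided
```

Recording an explicit code at every checkpoint is what made the tabulations in Tables 1 and 2 possible.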

The NSCF and NBS interviews were even longer, 70 and 50 minutes, respectively. The interviewers perceived that respondents were motivated to complete the entire interview, although respondents frequently needed several sessions for completion.  The fact that SSA, the agency from which the respondents received monthly benefits, sponsored the surveys acted as an unintended incentive to continue.  Interviewers reported that many respondents were concerned that failing to complete the interview would adversely affect their benefit status. These concerns persisted despite our assurances that the information provided by respondents was confidential and would not be used in benefit determinations, and that their participation was entirely voluntary.


TABLE 1

INTERVIEWER ACTIONS AT CHECKPOINTS IN THE SSI/MEDICAID SURVEYS

                                             Physical/   Mental    Schizo-
                                     Total   Sensory     Illness   phrenia   MR/DDa   Unknown

Total Respondents (n = )
  22-minute interview                2,852   795         842       321       510      384
  44-minute interview                916     282         288       NA        235      111

Percent Who Needed:
  No intervention
    22 minutes                       73%     70%         69%       91%       72%      72%
    44 minutes                       51%     50%         54%       NA        49%      52%
  Some intervention
    22 minutes                       28%     30%         31%       9%        27%      27%
    44 minutes                       49%     50%         46%       NA        51%      48%
      Encouragement only
        22 minutes                   27%     29%         31%       9%        27%      27%
        44 minutes                   38%     38%         34%       NA        42%      35%
      Callbacks
        22 minutes                   0%      0%          0%        0%        0%       0%
        44 minutes                   11%     12%         11%       NA        9%       13%
  No intervention but showed fatigue
    22 minutes                       1%      1%          0%        0%        0%       0%
    44 minutes                       0%      0%          1%        NA        0%       0%

a Mental retardation/developmental disability.


TABLE 2

OUTCOMES FOR SAMPLE MEMBERS WHO NEEDED CALLBACKS
IN THE 44‑MINUTE SSI/MEDICAID INTERVIEW

                                               Physical/   Mental
                                       Total   Sensory     Illness   MR/DDa   Unknown

Respondents Who Needed Callbacks (n = )  100     34          32        20       14
  As a percent of all respondents        11%

Percent who needed a second callback     56%     62%         44%       55%      71%
Percent who continued with no
  further intervention                   23%     12%         34%       25%      21%
Percent who continued with only
  encouragement                          21%     26%         22%       20%      7%

a Mental retardation/developmental disability.
 

3.         Structured Probes

The SSI/Medicaid surveys used a series of structured probes so that the interview for respondents with disabilities would be as comparable as possible to the interview administered to all types of Medicaid recipients.  Most questions succeeded for most of the respondents with disabilities.  Nevertheless, some respondents needed to have concepts defined.  To the extent possible, the surveys preserved the wording of the original question and supplemented it with standardized definitions and probes.  To ensure that interviewers used the probes consistently, the probes appeared on the CATI screens in the order in which the interviewers were to use them.  If none of the probes helped the respondent, interviewers were allowed to rephrase the question in a way they thought the respondent would understand.  In the following examples, the new structured probes appear under the label PROBE:

 

 

·      EXAMPLE 1

For how many of the last 12 months—that is, since MONTH AND YEAR 12 MONTHS AGO—have you been enrolled in NAME OF BEHAVIORAL HEALTH PLAN?

 

PROBE:  For which months have you been enrolled?

PROBE:  For how long have you been enrolled?  Have you been enrolled in NAME OF BEHAVIORAL HEALTH PLAN all of that time?

·      EXAMPLE 2

During the last 12 months, was there any time when you took less than the recommended dosage or took a prescription medication less frequently than recommended, so the medication would last longer?

 

PROBE:  Stretched the medication so it would last longer?

·      EXAMPLES 3–5

Because of a physical, mental, or emotional problem, do you currently get help or supervision from another person . . .

. . . managing money?

PROBE:  Counting money, getting the correct change, or paying bills?

. . . using the telephone?

PROBE:  Answering the phone or dialing numbers?

. . . doing heavy housework?

PROBE:  Like scrubbing floors or washing windows?

 

·      EXAMPLE 6:

How long ago did you have your blood pressure taken by a doctor or other health professional?

PROBE:  When was the last time you had your blood pressure taken?

PROBE:  The doctor or nurse puts a cuff around your arm, pumps it up, and listens with a stethoscope?


 

Interviewers reported that they always read the main question first but often read the probe without waiting for the respondent to answer the main question.  Interviewers also reported that the structured probes were usually sufficient and that they rephrased questions infrequently. 

4.         Follow-Up Items for Nonresponse

While item nonresponse was low (less than 3 percent) in all the surveys, some respondents, especially those who lacked complete and specific knowledge, had difficulty providing precise responses to questions about income, length of time spent waiting at the doctor’s office, or out-of-pocket medical expenses.  To minimize item nonresponse, the survey instruments added follow-up questions for continuous variables.  If a respondent could not provide an exact amount, the interviewer followed the “don’t know” response with a probe to provide a categorical response.  (For example, if a respondent did not know the exact amount she received in Food Stamps last month, we asked if the amount was $500 or more, or less than $500.)  The upper and lower bounds of each category were based on the ranges used by the analysts.  On the SSI/Medicaid surveys, the follow-up questions resolved about three-quarters of item nonresponse.
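The follow-up logic described above can be sketched as follows.  This is a hypothetical Python illustration, not MPR’s actual CATI code; the $500 bound mirrors the Food Stamp example in the text, and the function and category names are invented.

```python
# Hypothetical sketch of a categorical follow-up for item nonresponse.
# A "don't know" on the exact amount triggers a bounded categorical probe
# whose cut point ($500 here) matches the ranges used by the analysts.
DONT_KNOW = None

def resolve_amount(exact_response, follow_up_response=None, bound=500):
    """Return an exact value, a categorical range, or a missing flag."""
    if exact_response is not DONT_KNOW:
        return ("exact", exact_response)
    if follow_up_response == "$500 or more":
        return ("range", (bound, None))   # open-ended upper category
    if follow_up_response == "less than $500":
        return ("range", (0, bound))
    return ("missing", None)              # follow-up also unresolved
```

On the SSI/Medicaid surveys, a fallback of this kind resolved about three-quarters of item nonresponse.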

5.         Measurement of Disability

An important consideration in designing survey instruments for people with disabilities is how to measure disability—either to screen for the presence of a disabling condition or to classify individuals by disability type and severity.  No consensus exists on how to measure disability, but the major national surveys (for example, the National Health Interview Survey and the Survey of Income and Program Participation) typically use a similar set of questions.  These questions ask about the presence of specific conditions, activity and functional limitations, medications, health care services, assistive devices, and special education.  On the NSCF and NBS, we asked questions addressing these topics, supplemented with questions specific to work disability.  Our goal was to allow researchers the flexibility to define and measure disability according to their own conceptual framework.

Disability is no longer viewed as an “all or nothing” phenomenon but rather as an interaction between the person, his or her health condition, and the environment (Wunderlich et al. 2002).  Current data collection efforts do not, for the most part, measure the environment and its impact on the person’s participation in society.  The International Classification of Functioning, Disability and Health (ICF) provides a framework for developing questionnaire items that capture environmental factors affecting disablement.  These factors include products and technology; support and relationships; attitudes; and services, systems, and policies.  Researchers are currently developing survey questions that map conceptually to the ICF (Schneider 2001), although this work is still incomplete. 

When designing surveys for people with disabilities, it is important to establish a conceptual framework for measuring disability, to review the current literature, and to enlist the help of technical experts in devising valid and reliable measures.  Once measures are devised, extensive cognitive testing can aid in the identification of comprehension and perceptual differences about question meaning. 

6.         Self-Reports of Disabling Conditions

Another important consideration when designing surveys for people with disabilities is that not all sample members report a disabling condition, despite administrative records or caregiver reports indicating otherwise.  Respondents may recover from a disabling condition between sample selection and the interview; they may perceive a social stigma in reporting a disability; accommodations and medical treatment may render a health condition non-disabling; or they may not perceive themselves as disabled.  As mentioned above, standard measures of disability rely on self-reports of specific health conditions or functional limitations.  If respondents do not acknowledge a health condition or limitation, then it is awkward to continue with questions that assume the presence of a potentially disabling condition—to do so can increase the risk of a break-off or refusal.  

To illustrate the potential magnitude of this issue, 23 percent of the young adult NSCF respondents, all of whom were either current or former SSI recipients, reported no disabling conditions or limitations.  In one case, a current SSI beneficiary with schizophrenia (according to SSA administrative records) reported that he did not have a disability because medication kept his condition under control. 

In the NSCF and NBS surveys, we included an automated instrument check that detected whether a disabling condition had been self-reported.  In the absence of a self-report, the instrument routed respondents past questions that asked about the duration and severity of their condition, the impact of their condition on their ability to work or attend school, and their unmet needs for assistive devices and technologies.  When designing surveys for people with disabilities, we have found it important to review all survey items for relevance to populations with and without a disability.  If needed, alternative question wording or special instrument routing should be created for each group.
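A routing check of this kind can be sketched as follows.  This Python fragment is a simplified illustration; the section names are invented and are not those of the actual NSCF/NBS instruments.

```python
# Illustrative instrument routing check (hypothetical section names).
# When no disabling condition has been self-reported, the instrument
# skips the duration/severity, impact, and unmet-needs questions.
def next_section(self_reported_conditions: list) -> str:
    """Return the next instrument section based on self-reports."""
    if self_reported_conditions:
        return "CONDITION_DURATION_AND_SEVERITY"
    # No self-reported condition: route past the condition-dependent items.
    return "EMPLOYMENT_AND_SERVICES"
```

Automating the skip in the instrument, rather than leaving it to interviewer judgment, keeps the routing consistent across the roughly one-quarter of respondents who report no condition.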

7.         Adjusting for Differences in Living Situations

In comparison with general population surveys, surveys of people with disabilities are likely to encounter a wide variety of nontraditional living situations.  In higher proportions than in the general population, people with disabilities may be living in group homes, assisted-living centers, nursing homes, or Medicaid institutions, or be homeless or incarcerated.  While researchers sometimes declare individuals living in nontraditional and low-incidence settings (nursing homes, assisted-living facilities, jails, or prisons) to be “institutionalized” and therefore ineligible for a survey, such a practice is likely to introduce coverage bias in surveys of people with disabilities.  In the NSCF, for example, 5 percent of the young adults were incarcerated, 2 percent were living in Medicaid institutions, and 2 percent resided in group homes or assisted-living facilities.  Excluding these individuals from the survey would likely have biased estimates for people with severe disabilities and for those with behavioral or emotional problems.  Therefore, the NSCF developed data collection procedures to ensure full representation of the broadest spectrum of people with disabilities.

For the NSCF survey instrument, we considered the wording of individual items and created alternatives if needed to accommodate differences in living situations. Significant dividing factors were whether respondents were living assisted or unassisted, alone or with other family members.  For example, the instrument did not query residents in assisted-living arrangements or group quarters about household income but instead asked about personal income.  Similarly, the instrument did not ask persons in assisted living about the cost of the disability-related services they received; such costs are typically included in the facility cost.  Respondents living alone were not asked if other family members provided home health care or had ever changed work hours to care for them.

Alternative living arrangements also necessitate plans for conducting interviews in institutionalized settings.   During the NSCF, we sought interviews with respondents in assisted living, group homes, and Medicaid institutions, accepting a proxy respondent if needed.  We also interviewed incarcerated sample members by using proxy respondents.  The proxy respondents were usually family members who answered an abbreviated set of questions about the young adult’s health conditions, education and job training, SSI receipt, and reasons for incarceration.  While alternatives to proxy interviews include in-person or telephone interviews with incarcerated individuals themselves, we decided against such an approach because of the complexities and expense involved in arranging the interviews.  For the same reasons, we did not attempt to interview homeless persons.

8.         Identifying Other Differences

The diversity of the population with disabilities in terms of living situations is compounded by differences in cognitive and communication abilities.  The screener sections of both the NBS and NSCF instruments established routing through the instrument based on respondent characteristics (living situation, age, self-reports of disabling conditions) and the need for proxy respondents (based on observed cognitive and communication abilities and gatekeeper reports).  The screeners were particularly complex; they required over 12 weeks to develop and test and necessitated computerized administration.  Our experience demonstrates the importance of planning for the diversity inherent in the population of people with disabilities and of allowing sufficient time and resources to design and test survey instruments.
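The screener logic described above can be illustrated with a short sketch.  This is a hypothetical simplification: the section names, living-situation codes, and routing rules are our illustrative assumptions, not the actual NBS/NSCF computer-assisted interviewing specifications.

```python
# Hypothetical sketch of screener-driven routing; names and rules are illustrative.
from dataclasses import dataclass

@dataclass
class Screener:
    living_situation: str   # e.g., "household", "assisted_living", "group_home"
    needs_proxy: bool       # from observed abilities or gatekeeper reports

def route(s: Screener) -> list[str]:
    """Return the instrument sections a case should receive."""
    sections = ["health", "mental_health", "daily_activities", "employment"]
    if s.living_situation in ("assisted_living", "group_home", "medicaid_institution"):
        # Facility residents are asked about personal, not household, income.
        sections.append("personal_income")
    else:
        sections += ["household_income", "family_caregiving"]
    if s.needs_proxy:
        # Proxies skip items they cannot answer reliably and report their relationship.
        sections = ["proxy_relationship"] + [x for x in sections if x != "mental_health"]
    return sections
```

Even this toy version shows why such screeners required computerized administration: the routing depends on several interacting respondent characteristics.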

E.    Modifications to Procedures

In addition to modifying and enhancing instrumentation for the surveys, we modified data collection procedures to ensure high-quality data.  Special interviewer training, adjusted interviewer productivity standards, advance contact and notifications, incentives, and use of proxy respondents and assisted interviews can contribute to respondent and interviewer comfort.

1.         Interviewer Training

All three surveys were staffed with experienced interviewers, although none had prior training in working with people with disabilities.  The survey training programs focused on the background and purpose of the study, a question-by-question review of the instrument, contact protocols, refusal avoidance, and practice interviews.  In addition, trainers addressed the issues that interviewers were likely to face.  Training began with sensitivity exercises designed to demonstrate that interviewers should be kind and show unconditional positive regard for respondents regardless of their limitations.  Trainers stressed that the greatest barriers faced by people with disabilities, and therefore the most difficult to overcome, are others' negative attitudes toward and erroneous images of them.  Interviewers were trained to use positive rather than patronizing language and were encouraged to focus on the individual first and the disability last.

Trainers presented three general issues that persons with disabilities face in completing telephone interviews and provided guidance for overcoming each one.  Communication challenges were divided into hearing and speech impairments.  To overcome hearing impairments, interviewers learned (1) to use a normal tone of voice and not restrict conversations to single-syllable words, (2) to use controls on headsets to amplify outgoing sounds, and (3) to use a text telephone (TTY) or relay operator, if necessary.  To overcome speech impairments, interviewers learned (1) to use controls on the headsets to amplify incoming sounds, (2) not to be afraid to ask the respondent to repeat what he or she said, (3) to be patient because speech patterns become easier to discern after a few minutes, (4) to repeat aloud what they heard and determine the need for clarification, and (5) not to pretend to understand something they did not understand but instead to go back and build from the point at which they did understand.  Trainers demonstrated that people with speech impairments might be unable to monitor their tone of voice.  For example, a person with cerebral palsy may seem angry when she or he is not, and a person who slurs words may seem drunk when he or she is not.  Trainers therefore encouraged interviewers not to make assumptions about people based on their tone of voice, and the requirement that interviewers monitor an interview with a respondent who had a severe speech impairment reinforced that lesson.

To overcome stamina challenges, interviewers were trained to be aware of behaviors that might suggest that a respondent was too fatigued to continue.  For instance, agitation and distraction can signal that a respondent is ready for a break.  Trainers encouraged interviewers to ask if the respondent needed to schedule another time to continue and then set appointments for times when the respondent was most alert.

To overcome cognitive challenges, the training focused on nonbiased, nondirective probing methods (silence, repeating the question, repeating the response categories, asking for more information, stressing generality, stressing subjectivity, and zeroing in) and using active listening skills and patience.  The trainers (1) showed interviewers how to keep the respondent free of distractions, (2) instructed interviewers to say the respondent’s name often, and (3) cautioned against an exaggerated inflection or tone of voice (such exaggerations call attention to the interviewer and can be distracting and confusing). 

Our experience points to another recommended practice: involving people with disabilities in the training itself, particularly if in-person interviews are planned.  People with disabilities know best the challenges they face in survey participation, and their presentations can help put interviewers at ease.  For example, it can be useful for a person who uses a wheelchair to discuss proper positioning so that he or she can see the interviewer posing the questions; for a person who is blind to discuss how or whether the interviewer should provide assistance in moving around the house; or for a person who is deaf to explain the basics of American Sign Language.  Other courtesies and procedures should be explained during training:  how to approach service animals, the purpose of various assistive devices, and how to react if a medical emergency arises.  In repeated surveys, interviewers from the last round can be asked to describe their experiences and reassure new interviewers that, in practice, the interviewing is not as difficult as the training might suggest.

Finally, special circumstances may arise, albeit rarely, for which in-person interviewers must be prepared.  Circumstances include drug and alcohol abuse, suspected child abuse or abuse by a caregiver, talk about suicide, or lack of food or utilities.  Survey organizations are cognizant of the legal requirements for reporting abuse and will carefully weigh when and if to seek outside assistance for other circumstances.  Procedures should be made clear during interviewer training.

2.         Contact Procedures

In an effort to increase respondent trust and rapport, all the surveys sent an advance letter to sample members before the start of data collection.  The advance letters explained the purpose of the survey, offered assurances of confidentiality, described the voluntary nature of participation, and included a toll-free number for respondents to call with questions or to complete the interview at their convenience.  Advance letters should make an explicit offer of assistance to persons who may be unable to respond by telephone because of their disability and should mention other interviewing modes, if available.  Respondents should be encouraged to call a toll-free number or email a help desk to make special arrangements to be interviewed, if needed.  In addition, if space permits, advance letters should be printed in a large font for persons with low vision.  (A Spanish translation can be printed on the back.)

Sending a letter in advance of the first contact also helps establish the legitimacy of the survey, an important objective if respondents are program participants or benefit recipients.  Like the general public, beneficiaries are increasingly apprehensive about "scams" and the theft of personal information.  For this reason, the advance letter should be signed by an agency official and include a telephone number or Web site where more information about the survey can be obtained.  Along with the advance letter, it is useful to send a fact sheet about the survey that answers questions such as the following:  Why did you choose me?  Am I required to participate?  How will the information be used?  Will my answers be confidential?  How can I contact you with further questions?  The advance letter and fact sheet should communicate all survey facts with clarity and openness, use simple language, and, if needed, repeat information for emphasis.

3.         Establishing Legitimacy

In addition to establishing legitimacy with the survey population through advance contact, we have found it important to establish legitimacy within the sponsoring agency itself.  After receiving the advance letter or an interviewer contact, some disability beneficiaries call their local field office or caseworker with questions about the survey or its legitimacy.  If the field offices are uninformed, they will usually advise the beneficiary not to participate and, on suspicion of fraud, may even launch an investigation of the survey organization.  For this reason, we recommend developing an information brochure about the survey and then enlisting the agency's help in distributing it across the organization, usually by posting it on an agency intranet.  Most important, the brochure should provide the name of an individual within the agency who can address questions and concerns.  In surveys with long field periods (greater than six months), it is sometimes necessary to redistribute the survey information in the likely event that it was discarded.  In addition to an intranet posting, it is useful to have survey information posted on the agency's external Web site for use by agency personnel and respondents alike.

4.         Proxy Respondents

A universal design element of the surveys was the use of proxy respondents for sample members unable to respond for themselves.  Proxy respondents completed the survey on behalf of respondents with communication impairments, those with severe physical disabilities that precluded participation (in any mode), and those with mental impairments that might have compromised data quality.  The use of proxy respondents for collecting data from people with disabilities is debated.  One view holds that a proxy respondent cannot adequately describe and understand the day-to-day living of people with disabilities and is therefore a poor substitute for self-response.  Another view is that a proxy respondent, while possibly biased, is preferable to no respondent at all.  To maximize the representation of those unable to respond for themselves, our surveys allowed proxy respondents.  At the same time, we emphasized self-response as the preferred method and took steps to overcome barriers to self-response. One argument for proxy respondents is that analysts can decide to include or exclude proxy cases from their analyses depending on the response differences they observe and the intended data uses.

The decision to use proxy respondents should be made in the planning stages so that individual questions can be designed and evaluated for proxy use.  Some questions will be unsuitable for proxy respondents; for example, questions about the respondent’s mental health, opinions, or attitudes.  Proxy respondents need to be identified at the start of the interview and follow a distinct route through the survey instrument that directs them to questions they can answer reliably—thus adding to the complexity of the instrumentation.  It is also useful to ask the relationship of the proxy respondent to the sample member (family member, personal care attendant, nursing home staff, and so forth), the reason why a proxy respondent was needed (communication, cognitive, or physical barrier), and the degree to which the proxy respondent is involved in the sample member’s life (helps make decisions, makes decisions, is not involved in decision making).   This information can be analyzed to understand more fully and predict proxy usage in future surveys and to exclude from analysis proxy respondents deemed unreliable.  
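The proxy metadata recommended above can be captured in a simple data structure so that analysts can later include or exclude proxy cases.  This is a minimal sketch; the field values and the example exclusion rule are illustrative assumptions, not the surveys' actual coding scheme.

```python
# Minimal sketch of proxy metadata; categories and the rule are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProxyInfo:
    relationship: str   # "family_member", "personal_care_attendant", "nursing_home_staff"
    reason: str         # "communication", "cognitive", "physical"
    involvement: str    # "makes_decisions", "helps_make_decisions", "not_involved"

def keep_for_analysis(proxy: Optional[ProxyInfo]) -> bool:
    """Example rule: keep self-responses and proxies involved in the member's life."""
    if proxy is None:   # self-response
        return True
    return proxy.involvement != "not_involved"
```

Recording these three fields at interview time is what makes such analysis-stage decisions possible at all.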

The decision as to when to use a proxy can be handled in several ways.  In the SSI/Medicaid and NSCF surveys, the decision was that of the interviewers, who evaluated the quality of responses and had the authority to seek a proxy early in the interview.  More often, the need for a proxy was identified by gatekeepers or caregivers, who volunteered that the sample member was unable to respond to the survey because of a disability.  Another approach is to incorporate a short, formal assessment of mental capacity into the screening portion of the instrument.  Tests such as the Mini-Mental State Examination (MMSE), the Telephone Interview for Cognitive Status (TICS), or the Short Portable Mental Status Questionnaire (SPMSQ) ask a series of questions to determine whether the respondent has the mental capacity to understand questions and respond appropriately.  A distinct disadvantage of these tests is that respondents may find them inappropriate or insulting and refuse to continue with the interview.  For example, asking respondents to name the day of the week or the president of the United States may come across as condescending or ridiculous, especially at the beginning of the interview before any rapport has been established, thus setting a bad tone for the entire interview.  In addition, the tests may embody errors of inclusion and exclusion that are not always well understood or documented.  Further, the tests are often designed for other purposes, such as intake evaluation at medical facilities, and do not readily lend themselves to a research interview.

In the NBS, we used an innovative "mini-cognitive test" designed expressly for the survey to identify when proxy respondents were needed.[1]  We wanted to provide interviewers with a tool for evaluating when to seek a proxy rather than leave the decision to their discretion or to gatekeeper advice.  The test combined the ability to understand the survey topics with elements of informed consent.  Specifically, we asked three questions at the start of the interview.  First, we gave a general description of the survey topics to be covered (your health, daily activities, and any jobs you might have) and asked the respondent to state the topics in his or her own words.  Second, we described the voluntary nature of the survey and asked respondents to state, in their own words, what that description meant to them.  Third, we described the confidential nature of the respondents' answers and asked them to state what that description meant.  To avoid alienating respondents, we said that the reason for asking them to restate the descriptions was "just to be sure my explanations are clear."  In that way, respondents appeared to be aiding the interviewer by providing feedback on the explanations rather than undergoing a test of their own cognitive abilities.

If respondents were unable to restate accurately any description after two attempts, we asked if someone else could answer questions on their behalf.  About 65 percent of those who failed the test then volunteered a proxy.  (The Appendix includes the complete script of the mini-cognitive test.)
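The test's control flow (three comprehension items, two attempts each, failure triggering the search for a proxy) can be sketched as follows.  The function and topic names are our assumptions; the actual scripted wording appears in the Appendix.

```python
# Illustrative sketch of the NBS mini-cognitive test flow; names are assumptions.
def mini_cognitive_test(restated_ok) -> bool:
    """`restated_ok(topic)` stands in for the interviewer judging whether the
    respondent adequately restated a description; returns True if acceptable."""
    for topic in ("survey_topics", "voluntary_participation", "confidentiality"):
        if not any(restated_ok(topic) for _attempt in range(2)):
            return False   # failed twice: ask whether someone else can answer
    return True            # passed all three: proceed with self-response
```

Note that passing doubles as evidence of informed consent, since the three items restate exactly what consent requires the respondent to understand.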

Preliminary results suggest that the mini-cognitive test worked well in actual survey administration.  At the end of the test, we were reasonably assured that we not only had identified a competent respondent but also that we had obtained informed consent.  Less than 1 percent of the respondents broke off the interview during the mini-cognitive test because they objected to the questions.  About 4 percent failed the cognitive test, requiring a search for proxy respondents.  Further research is needed, however, to assess whether the test was biased in terms of either excluding persons mentally competent to answer the questions but unable to communicate that competence effectively or including persons who were not competent.

On all the surveys, the rate of self-response was high.  On the NBS and NSCF, the overall rates of self-response were 84 and 89 percent, respectively.  On the SSI/Medicaid surveys, which did not use in-person interviewing, the average self-response rate was 83 percent.   

5.         Assisted Interviews

Assisted interviews are another means of encouraging self-response without reliance on a proxy.  An assisted interview differs from a proxy interview in that, for the most part, sample members respond for themselves but in the presence of another person from whom they can occasionally seek help in interpreting or answering a question.  In both the NSCF and NBS, we allowed assisted interviews in order to minimize item nonresponse, improve the accuracy of responses, and overcome less limiting conditions (such as difficulty hearing) and language barriers.  In the NSCF, about 6 percent of the respondents with a physical disability needed an assistant to help them through the interview, compared with about 7 percent of those with a mental disability.  The overall rate of assisted interviews (7 percent) was lower than the rate of proxy interviews (11 percent).  As reported by the interviewers, the most common reasons for needing an assisted interview were "did not know answers to the questions" (78 percent), "hearing problem" (17 percent), and "poor memory or confusion" (17 percent).

6.         Using Incentives

Survey response rates are declining, making incentives an important countermeasure.  For federally sponsored surveys, incentive use and amounts are subject to approval by the Office of Management and Budget, which has recently shown greater acceptance of incentives when sufficient justification is provided.  To encourage response, both the NBS and NSCF offered a $10 incentive (post-paid by check) that was justified by the length of the interview and the surveys' longitudinal design.  In surveys of disability beneficiaries, however, incentives can carry a risk because beneficiaries are required to report amounts over $10 as income.  A larger incentive may result in a benefit reduction for the month in which it is reported.  For this reason, we have limited incentives to $10 for surveys of disability beneficiaries.  Alternatives to cash incentives, such as telephone cards or small gifts, are worth considering.

7.         Production Standards

Conducting telephone interviews with people with disabilities presents distinct challenges, even for experienced, well-trained interviewers.  Interviews take longer because questions need to be repeated, and several interview sessions may be required.  The data collection efforts reported here made extra efforts to support the interviewers and reduce stress and burnout.  To that end, the surveys adjusted the usual performance measures (such as hours per completed interview).  Supervisors reminded interviewers that break-offs were acceptable and even desirable if respondents were fatigued.  Supervisors and colleagues provided support during and after interviews and at regular debriefing sessions.  Some interviewers may experience compassion fatigue, the burnout felt by people who are exposed to the difficult circumstances of others.  Interviewers feeling compassion fatigue can benefit from dividing their time between a difficult study and one that does not create burnout or stress; compassion fatigue training and workshops can also be helpful.

F.    Interviewing Modes

Data collection modes have different strengths and weaknesses that arise, in part, from whether the questionnaire is self-administered or interviewer-administered.  In general, interviewer-administered surveys, in which an interviewer reads questions aloud, reduce the cognitive burden on respondents and eliminate the requirement that respondents be able to read—factors that are important to the extent that cognitive, fatigue, and literacy issues are encountered.  Interviewer administration may also decrease item nonresponse and improve the quality of open-ended answers.  Below, we discuss our experiences in using interviewer-administered surveys, both CATI and CAPI, to collect data from people with disabilities. 

1.         CATI

When MPR conducted the first SSI/Medicaid surveys in 1998, the feasibility of using telephone interviewing exclusively to collect data from people with disabilities was untested; most health surveys of people with disabilities were conducted in person.  It was unclear whether telephone interviewing could accommodate the range of disability types, overcome cognitive challenges, and achieve high response rates.  The SSI/Medicaid surveys, however, demonstrated that reasonably high response rates could be achieved by telephone and that data quality could be maintained through modifications to procedures.  Table 3 shows response rates, cooperation rates, and self-response rates by state as well as by type of disability as defined in the SSI files.[2]

The response rates averaged 66 percent, ranging from 58 percent in New York City to 74 percent in Kentucky.  The cooperation rates averaged 95 percent, ranging from 94 percent for respondents with mental illness to 97 percent for respondents with mental retardation.  The most notable source of nonresponse was the inability to locate sample members by telephone: those with physical or sensory disabilities were the most likely to be located (73 percent), while only 67 percent of those with mental illness or mental retardation could be located.  The sample frames were state Medicaid administrative records; although data quality varied from state to state, the files for the most part either lacked addresses or telephone numbers or contained inaccurate contact information.
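As a rough check on how these figures relate, note that if nonresponse comes almost entirely from locating failures and refusals (a simplifying assumption on our part, not the surveys' documented methodology), the overall response rate is approximately the locating rate times the cooperation rate among those located:

```python
# Back-of-the-envelope composition of the SSI/Medicaid rates reported above.
# Assumes (our simplification) nonresponse = locating failures + refusals only.
locate_rate = 0.70        # roughly 67-73 percent of sample members were located
cooperation_rate = 0.95   # average cooperation among those located
response_rate = locate_rate * cooperation_rate   # ~0.665, near the observed 66 percent
```

The arithmetic makes plain why locating, not refusal, dominated nonresponse in these surveys.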

Data quality can be measured in terms of high response rates to both the survey (unit response) and specific survey questions (item response) and in terms of the reliability and validity of the data.  Measures of data quality showed that the reliability and validity of the data collected solely by telephone appeared to be high and that item nonresponse rates were equivalent to rates observed in general population surveys (see Ciemnecki and CyBulski 2004).

 


TABLE 3

SSI/MEDICAID RESPONSE RATES, COOPERATION RATES, AND SELF-RESPONSE RATES, BY STATE AND TYPE OF DISABILITY

                                 Response     Cooperation    Self-Response
                                   Rate           Rate            Rate

Tennessee                          67.3           96.5            87.0
   Physical/sensory                70.2           95.3            93.4
   Mental illness                  67.5           96.2            90.4
   MR/DDa                          62.0           97.8            81.4
   Unknown                         70.3           96.9            80.7

Kentucky                           74.3           97.4            83.9
   Physical/sensory                84.4           98.5            92.7
   Mental illness                  71.9           97.3            85.8
   MR/DD                           72.2           96.5            75.8
   Unknown                         71.5           97.5            74.6

New York City                      57.5           92.8            78.8
   Physical/sensory                60.9           91.8            86.9
   Mental illness                  53.3           87.4            85.1
   MR/DD                           55.9           97.3            69.0
   Unknown                         62.1           94.9            75.4

Westchester County, NY             59.0           92.5            80.7
   Physical/sensory                61.4           93.1            86.9
   Mental illness                  56.0           90.8            87.9
   MR/DD                           64.9           96.3            56.1
   Unknown                         56.4           91.5            76.7

Total                              65.7           95.3            83.0
   Physical/sensory                69.6           95.1            90.4
   Mental illness                  64.5           94.3            87.0
   MR/DD                           64.0           97.0            72.7
   Unknown                         65.3           95.4            77.1


2.         CATI/CAPI

While it is possible to collect data from people with disabilities by relying exclusively on telephone interviewing, we have found that the use of several survey modes is particularly effective in both removing barriers to survey participation and achieving high response.  In the four to six years since the SSI/Medicaid surveys were conducted, response rates to telephone surveys have trended downward.  With the proliferation of call-screening devices, it is easier for respondents to evade interviewers' contact attempts.  The "do not call" registry, which respondents often mistakenly believe applies to research surveys, has empowered the general population to say "no" to surveys.  The growing incidence of "cell phone only" households, particularly among urban youth, means that a significant portion of the population may be systematically excluded from telephone surveys.  Finally, privacy concerns loom large as the population, not just people with disabilities, becomes increasingly reluctant to provide personal information to strangers.  All these factors have contributed to the decline in telephone survey response rates.  A CATI/CAPI dual-mode design can overcome some of the limitations of a CATI-only design; this approach is used in part by the Census Bureau's American Community Survey for respondents who do not mail back the paper instrument.

CAPI offers a number of advantages over CATI for improving access to surveys among people with disabilities.  In-person administration (1) facilitates interviewing of persons with hearing and speech limitations who are unable to participate by telephone, (2) permits in-person assistance to persons with learning disabilities and cognitive challenges,  (3) informs the decision of when a proxy respondent may be needed, and (4) improves the locating rate through in-field searching. The disadvantage is cost:  a CAPI interview can cost four to eight times as much as a CATI interview because of interviewer travel expenses.  In addition, a CATI/CAPI model has the potential to introduce mode effects stemming from the differences between remote and in-person survey administration.  Mode differences may particularly affect sensitive questions, such as those asking about income and assets, drug and alcohol use, and health status.

The NSCF and NBS were both dual-mode surveys, with CATI used as the primary mode and CAPI used as the secondary mode (to follow up CATI nonrespondents).  Table 4 shows the proportion of the sample that responded to each mode, along with other survey outcomes.  The largest proportion of respondents to each survey, 54 percent in the NSCF and 48 percent in the NBS, responded by CATI when the interviewer initiated the call (CATI call-out interviews).  A smaller proportion responded by CAPI, 12 percent in the NSCF and 13 percent in the NBS.  Thus, a sizable group of respondents could be interviewed by CATI, reducing reliance on more expensive CAPI.  While not large, the proportion responding by CAPI helped improve the response rate over what might have been achieved through CATI alone.

 

TABLE 4

NSCF AND NBS SURVEY OUTCOMES

                                         NSCF                   NBS
                                  Completed Interviews   Completed Interviews
                                      (n = 9,242)            (n = 7,957)

CATI (phone out)                          57%                    50%
CATI (phone in)                            7                     12
CAPI                                      13                     14
Total Response Rate (unweighted)          77                     77

Other outcomes
   Refusal rate                            7                      9
   Not interviewed                        16                     14
 

Four other survey outcomes are worth noting.  First, 7 percent of the NSCF sample and 12 percent of the NBS sample responded by CATI call-in; that is, they called the help desk in response to contact materials or messages left by interviewers.  CATI call-ins offer the advantage of reduced interviewer labor hours associated with making unproductive calls; the offer of an incentive helps promote call-in interviews.  Second, the refusal rates were roughly equivalent in the surveys, 7 percent in the NSCF and 9 percent in the NBS.  Initial refusal rates were about double these percentages, but a comprehensive refusal conversion program, including sending CAPI interviewers to telephone refusal households, helped contain the rates.  Third, as observed in the SSI/Medicaid surveys, the largest source of nonresponse was failure to locate.  About 13 percent of the sample in the NSCF and 11 percent in the NBS could not be located.  Locating problems stemmed from poor-quality information available in SSI administrative records and gatekeepers’ refusal to provide information about the respondent’s whereabouts.  Finally, barriers to response caused by insurmountable disabilities accounted for only about 1 to 2 percent of nonresponse in the samples overall.

3.         TTY, Telecommunications Relay Service, and Instant Messaging

Using a text telephone (TTY) is a feasible though cumbersome means of collecting data from persons with hearing impairments.  The device is referred to by several names (TTY, TDD, TT); the culturally preferred term is TTY, which covers both older-model text typewriters and newer TDD (telecommunications device for the deaf) models.  TTY calls take longer than voice calls, so most hearing-impaired persons prefer to keep their calls short.  To speed up a TTY conversation, it is common to use abbreviations such as those used in email messages: "ga" (go ahead), "nu" (number), "oic" (oh, I see), and "sk" (stop keying, designating the end of the call).  Punctuation is typically eliminated, and some words, such as articles and linking verbs, are also omitted.  The following is an example of a TTY exchange that an interviewer might encounter:

Respondent:          I have to go now cuz husband home call bck tmw ok q ga to sk

Translation:            I have to go now because my husband just came home. Can you call me back tomorrow?   Go ahead if you have more to say but I’m finished. 

Interviewer:           ok will callback tmrw sk

Translation:            Okay, I will call you back tomorrow.  I have nothing more to say.

The specialized style of TTY communication can be incompatible with interviewing requirements that call for questions to be administered exactly as written and in a uniform way.  The time needed to type questions and list all response categories can be frustrating and fatiguing for both interviewer and respondent.  To minimize typing time and maintain uniformity, we used TTY emulation software on a PC and maintained an electronic version of the instrument in TTY format (eliminating capitalization, some punctuation, and programming instructions while preserving question wording).  To ask a question, we "cut" the question text from the electronic file and "pasted" it into the TTY text box.  Working side by side, a second interviewer entered the respondent's answers into the computerized survey instrument on a second PC and, based on skip patterns, instructed the first interviewer which question to cut and paste next.  As we gained experience, one interviewer was able to perform both functions.  Nonetheless, the average NBS TTY interview took three hours and often required several sessions; the shortest was 1.5 hours in one session, and the longest was 5 hours over three sessions.
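Converting instrument text into TTY style, as described above, can be sketched with a short routine.  The exact formatting rules used in the NBS are not documented here, so the specific transformations below are illustrative assumptions.

```python
# Rough sketch of preparing question text for a TTY session: lowercase the text,
# drop punctuation, and append the "ga" turn marker. Rules are assumptions.
import re

def to_tty(question: str) -> str:
    text = question.lower()
    text = text.replace("?", " q")            # TTY convention: "q" marks a question
    text = re.sub(r"[^\w\s]", "", text)       # strip remaining punctuation
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    return text + " ga"                       # "ga" = go ahead (your turn)

print(to_tty("In the past month, have you worked for pay?"))
# prints "in the past month have you worked for pay q ga"
```

Pre-converting the whole instrument this way is what allowed questions to be pasted verbatim rather than retyped, preserving standardized wording.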

The primary disadvantage of using TTY is the long delay between the time questions are transmitted and the time answers are received, even with typing minimized.   Many TTY machines have only a small screen, which the respondent must scroll through to read a long question.  Other respondents preferred to print the questions on paper tape, a process that took even longer.

It is important to be aware that hearing-impaired people who were prelingually deafened may use American Sign Language (ASL) as their first language.  They may not be comfortable using TTY because they can express themselves more clearly in ASL.  Hearing-impaired persons who speak English as a second language may also be more comfortable with ASL.  In these cases, it is useful to send an ASL translator to the home along with a field interviewer.

For all the surveys, a small group of interviewers was trained to administer TTY interviews; they typically completed only one per six-hour shift.  TTY respondents had to be informed in advance of how long the process might take, and most then set an appointment for a later date.  To save time, the TTY conversation can be saved to disk and printed out for later entry into the survey instrument.  Because saving the conversation to disk is akin to recording a hearing person's conversation, it is good practice to obtain the respondent's consent.

Another option when interviewing hearing-impaired persons is to use a relay operator (RO).  The Americans with Disabilities Act provides that each telephone common carrier providing voice transmission services must also provide Telecommunications Relay Service (TRS).  TRS connects people who use TTYs to people who use standard voice telephones.  The relay operator bridges the gap between the two communications devices by typing what the voice user says and voicing what the text telephone user types.  The communication takes place over the Web (for text) and the telephone (for voice).  When using an RO, survey organizations can set certain preferences:  RO gender, verbatim relay of all calls, no relay of background noise, and a foreign-language-speaking RO.  We found that respondents who lost hearing after they had learned to speak preferred TRS because they could voice their responses to the RO rather than type.

According to Michael Kaika, director of media relations of Gallaudet University, more hearing-impaired persons now use instant messaging rather than TTY and TRS.  Especially popular among young people, instant messaging has the advantage of eliminating the relay operator and special equipment—anyone with a PC can use instant messaging.  In the NBS, we conducted interviews with instant messaging by cutting and pasting question text and response lists into the text box.  This method saved the effort and time of typing out each question.  Interviews conducted by instant messaging were generally about one-third shorter than those using TTY.

To illustrate the frequency with which alternative means were used to conduct interviews, we identified 61 NBS respondents who needed to be interviewed by TTY, TRS, or instant messaging.  (Another five cases called our help desk via an RO and completed the interview on the spot.)  Of these, 31 cases eventually completed the interview: 12 by TTY, 6 by TRS, 4 by instant messaging, and 9 using some combination of the three methods.  The remainder either did not answer the TTY call or refused, usually citing the length of the interview.
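The breakdown above can be summarized as a simple completion-rate calculation; this is a minimal sketch using only the figures reported in the text (the five help-desk completions are excluded, as in the text).

```python
# Completion outcomes for the 61 NBS cases that required TTY, TRS,
# or instant messaging (figures reported in the text).
completed = {"TTY": 12, "TRS": 6, "instant messaging": 4, "combination": 9}

total_completed = sum(completed.values())
attempted = 61
completion_rate = total_completed / attempted

print(total_completed)            # 31
print(round(completion_rate, 3))  # 0.508
```

Roughly half of the cases requiring an alternative mode were completed, which is consistent with the nonresponse reasons (no answer to the TTY call, or refusal over interview length) noted above.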

Finally, more hearing-impaired persons are opting for wireless messaging pager systems that allow the user to send and receive email, TTY messages, faxes, and text-to-speech and speech-to-text messages, as well as to send a text message to any one-way alphanumeric pager (WyndTell and Sidekick are common services and devices).  In addition, more cellular telephones are now compatible with TTY and hearing aids; as these telephones become less expensive, their use will likely increase.  The challenge facing survey organizations is to keep current with the technologies and employ knowledgeable staff who can use the technologies effectively. 

4.         Web

While MPR has not yet used a Web survey to collect data from people with disabilities, we believe that it offers significant potential if used in combination with other modes.  With or without the benefit of assistive technology (hardware and software designed to facilitate the use of computers by people with disabilities), persons with disabilities increasingly use the Web for shopping, communicating, and distance learning (Disability Rights Commission 2004).  A Web version of a survey instrument can be used in conjunction with TTY, TRS, and instant messaging as a means of overcoming hearing and speech impairments.  Respondents with different impairments may prefer Web administration because they can complete the interview privately at their convenience.  Depending on the number of interviews completed by this mode, Web surveys can be less expensive than either CATI or CAPI interviews because they eliminate interviewer labor hours for data collection.

A major challenge associated with designing a Web survey for people with disabilities is to ensure accessibility to the broadest spectrum of the population.  Section 508 of the Rehabilitation Act, as amended by the Workforce Investment Act of 1998, requires federal agencies using or maintaining electronic technology to ensure that persons with disabilities have access to information comparable to the access enjoyed by members of the public without disabilities—unless it is an undue burden to do so.  A major goal of Section 508 is that information technology—such as software, computers, and Web pages—be compatible with assistive technology.  In some cases, the standards require the information technology to be readily usable without the need for assistive devices.  While federal agencies are still developing a common understanding of Section 508 requirements, it is reasonable to assume that a Web survey in a federally sponsored study will need to be fully Section 508–compliant in the near future.

Designers of Web surveys will need to obtain a better understanding of both the Section 508 requirements and the accessibility needs of people with disabilities as related to technology development.  People with severe visual impairment, for example, will need text instead of images for translation into audible or legible words by screen reading devices; persons with low vision may need large-font text and effective color contrast; people with dyslexia or cognitive impairments will benefit from the use of simple language and the clear and logical layout of a Web page; and people with physical impairments may need to navigate with a keyboard rather than with a mouse. 

It will be critical to involve people with disabilities in the design and testing of Web surveys to ensure practical accessibility and usability.  Involving people with disabilities in such tasks will likely improve usability for all.  Many of the characteristics of Web surveys that impede people with disabilities will also make the surveys confusing to users in general.

G.   Conclusions and Recommendations

It is in our best interest as survey researchers to make sure that surveys are accessible to a broad spectrum of respondents at the lowest cost possible.  Furthermore, it is our responsibility to collect the highest-quality data possible.  Most of the time, high-quality data can be collected directly from the people who participate in the programs we study.  We have demonstrated that, through careful instrument design and survey procedures, it is possible to conduct surveys with people with disabilities.  Shorter interviews impose less respondent burden than longer ones and can be conducted with fewer break-offs and less need for encouragement from the interviewer.  However, interviews lasting more than an hour are possible.

In reviewing the implementation and results of our surveys, we feel that their success lies in the modifications to instrumentation and data collection procedures we undertook to remove barriers to participation.   When designing the survey instruments, we followed three guiding principles: use simple language, keep questions brief, and keep recall periods short.  Further, we sought to minimize the use of high-frequency sounds that are difficult to hear.  We encouraged interviewers to slow the pace of the interview by building in checkpoints to test for respondent fatigue and to provide necessary encouragement.  We aided the respondent by rewording questions (using structured probes), if needed, for improved comprehension and encouraged breaks and multiple interview sessions for respondents who needed them. 

A literature review underscores that disability is a complex phenomenon and that its measurement rests on varying and evolving conceptual frameworks, depending on the intended uses of the data.  We observed that self-reports of disabling conditions could differ markedly from administrative records.  The population with disabilities is highly diverse—in disability type and severity; age, education, and employment; living situation; and environmental supports.  Taken together, these characteristics add to the complexity and challenges associated with instrument design.  We found that careful pretesting and expert review were necessary to explicate and capture the diversity of the population with disabilities.

In all the surveys, we used specialized data collection procedures to remove barriers to participation.  These procedures included reliance on proxy respondents and assisted interviews, the use of incentives, adjusting production standards, and establishing legitimacy through dissemination of survey information.  Interviewer training focused not only on the actual questions in the questionnaire but also on strategies for sensitizing interviewers to the needs of people with disabilities in survey settings and in daily life.  

Our experience suggests that it is feasible to interview large samples of people with disabilities by telephone.   Further, we have evidence that the telephone interview mode provided information that appeared to be consistent with that collected by in-person interviews. Telephone interviewing alone, however, resulted in lower response rates compared with a CATI/CAPI model.  While we recognize that not all people with disabilities can complete a telephone interview, we also recognize that in-person interviewing can be four to eight times more expensive than telephone interviewing. Therefore, we recommend a mixed-mode approach that attempts telephone interviews first and uses in-person interviewing for people with disabilities who do not or cannot respond by telephone.  Web surveys have significant potential as an alternative mode at a lower cost; however, issues of literacy and computer access must be evaluated for the given sample.

The major challenges in collecting information on people with disabilities are the same as in general population surveys.  In particular, poor contact information in Medicaid and SSI administrative records was largely responsible for limiting the overall response rates.  Other challenges that are typical of general population surveys, not just those of people with disabilities, include overcoming concerns about privacy and confidentiality, overcoming call-screening devices, and gaining cooperation through incentives and motivational appeals.  People with disabilities themselves do not appear to pose an extraordinary challenge to research surveys.  Once valid contact information (telephone numbers and addresses) is located, most people with disabilities agree to be interviewed.  In combination with effective locating strategies and several interview modes, modifications to instruments and procedures can ensure both high response rates and high-quality data.


References

American Association for Public Opinion Research.  “Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys.”  2004.

Barnett, S., and P. Franks.  “Telephone Ownership and Deaf People:  Implications for Telephone Surveys.”  American Journal of Public Health 89(11), 1999, pp. 1754-1756.

Black, Ken.  “Measurement Issues.”  Presented at the First Workshop for Improving Disability Statistics and Measurement.  Bangkok, May 24–28, 2004.  [http://www.unescap.org/stat/meet/widsm1/widsm1_ken3.asp].

Carlson, Dawn, Nathaniel Ehrlich, Betty Jo Berland, and Nell Bailey.  “Assistive Technology Survey Results:  Continued Benefits and Needs Reported by Americans with Disabilities.”  September 27, 2001. [http://www.ncddr.org/du/researchexchange/v07n01/atpaper]. 

Centers for Medicare & Medicaid Services, Division of Disabled and Elderly Health Programs.  “Home and Community-Based Services:  From Institutional Care to Self-Directed Supports and Services.”  May 2003.

Ciemnecki, Anne B., and Karen A. CyBulski.  “Removing the Barriers:  Modifying Telephone Survey Methodology to Increase Self-Response among People with Disabilities.”  Final report.   Princeton, NJ:  Mathematica Policy Research, Inc., August 2004.

Conwal, Incorporated.  Disability Statistics.  Brief, XIV, Number 8.  Washington, DC:  National Institute on Disability and Rehabilitation Research, 1993.    [http://codi.buffalo.edu/graph_based/.demographics/.disstats].

Crowley, Jeffrey S., and Risa Elias.  “Medicaid’s Role for People with Disabilities.”  The Kaiser Commission on Medicaid and the Uninsured, August 2003. 

Hill, Steven, and Judith Wooldridge.  “SSI Enrollees in TennCare: Room for Improvement.”    Princeton, NJ:  Mathematica Policy Research, Inc., March 2000.

Kirchner, C. “Improving Research by Assuring Access.”  Footnotes 26(7), 7, 1998.

Kraus, Lewis E., Susan Stoddard, and David Gilmartin.  “Chartbook on Disability in the United States, 1996.”  Washington, DC:  U.S. Department of Education, National Institute on Disability and Rehabilitation Research, 1996.  [http://www.disabilitydata.com/disabilitydata/chartbook.choices.html].

LaPlante, M., and D. Carlson.  Disability in the United States:  Prevalence and Causes, 1992.  Disability Statistics Report (7).  Washington, DC:  U.S. Department of Education, National Institute on Disability and Rehabilitation Research, 1996.

Long, Sharon K., Teresa A. Coughlin, and Stephanie J. Kendall. “Access to Care among Disabled Adults on Medicaid.”  Health Care Financing Review, 2002.

Mathematica Policy Research, Inc.  Digest of Data on Persons with Disabilities.  Washington, DC:  author, June 1984.

National Council on Disability.  Improving Federal Disability Data.  Washington, DC:  author, January 9, 2004.

National Council on Disability.  “Reorienting Disability Research.”  Washington, DC:  author, April 1, 1998.  [http://www.ncd.gov/newsroom/publications/publications.html].

Olsen, L. et al.  The National Immunization Survey:  Development of Strategies to Include Deaf Respondents in an RDD Telephone Survey.  Presented at the American Public Health Association Conference, Chicago, 1999.

Parsons, Jennifer A., Sara Baum, and Timothy P. Johnson.  “Inclusion of Disabled Populations in Social Surveys:  Review and Recommendations.”  Chicago: Survey Research Laboratory, University of Illinois at Chicago, December 2000.

Russell, N., G.E. Hendershot, F. LeClere, J. Howie, and M. Adler.  “Trends and Differential Use of Assistive Technology Devices:  United States, 1994.”  Advance Data from Vital and Health Statistics, No. 292.  Hyattsville, MD:  National Center for Health Statistics, 1997.

Schneider, M.   “Participation and Environment in the ICF and Measurement of Disability.”  Paper presented at the United Nations International Seminar on the Measurement of Disability.  United Nations, New York, NY, 2001. 

Disability Rights Commission.  The Web:  Access and Inclusion for Disabled People.  London:  Disability Rights Commission, 2004.

Thompson-Hoffman, S., I.F. Storck, eds.  Disability in the United States:  A Portrait from National Data.  New York:  Springer Publishing Company, 1991.

Todorov, A., and C. Kirchner.  “Bias in Proxies’ Reports of Disability:  Data from the National Health Interview Survey on Disability.”  American Journal of Public Health 90, 2000, pp. 1248-1253.

U.S. Department of Education, Office of Special Education and Rehabilitative Services, National Institute on Disability and Rehabilitation Research. Long-Range Plan 1999-2003.  Washington, DC:  U.S. Government Printing Office, 2000.

Wilson, Barbara Foley, Senda Benaissa, Karen Whitaker, Paul Beatty, and Gerry Hendershot.  “Improving the Feasibility of Including Deaf Respondents in Telephone Surveys.”  Paper presented at 53rd Annual Conference of the American Association of Public Opinion Research, 1998.

Wunderlich, Gooloo S., ed.  The Dynamics of Disability: Measuring and Monitoring Disability for Social Security Programs. Washington, DC: National Academy Press, 2002.

Zola, I.K., ed.  Disability Studies Quarterly, 10(3).  (Issue dedicated to disability demographics), Summer 1990.

                                                                


APPENDIX

 

COGNITIVE TEST FOR IDENTIFYING NEED FOR PROXY RESPONDENT IN THE NATIONAL BENEFICIARY SURVEY

 

1.     Next I will explain some facts about the survey.  After I explain, I will ask you three questions so I can be sure my explanations were clear.

 

Here’s the first explanation.  The survey asks about (YOUR/NAME’S) health, daily activities, and any jobs you might have.  Please tell me in your own words what the survey is about.

 

INTERVIEWER:  IF NAME/PROXY SAYS “DON’T KNOW” RECORD AS “LISTS NONE”

 

                        LISTS NONE................................................. 00

                        LISTS 1 TOPIC.............................................. 01

                        LISTS 2 TOPICS........................................... 02  (go to 2)

                        LISTS 3 TOPICS........................................... 03  (go to 2)

                        REFUSED...................................................... r

 

1a. Let’s try that again.  The survey asks about (YOUR/NAME’S) health, daily activities, and any jobs (YOU/NAME) might have.  Please tell me in your own words what the survey is about.

 

INTERVIEWER:  IF NAME/PROXY SAYS “DON’T KNOW” RECORD AS “LISTS NONE”

 

                        LISTS NONE................................................. 00  (go to 4)

                        LISTS 1 TOPIC.............................................. 01  (go to 4)

                        LISTS 2 TOPICS........................................... 02  (go to 2)

                        LISTS 3 TOPICS........................................... 03  (go to 2)

                        REFUSED...................................................... r

 

2.   Here is the next explanation.  Taking part in the survey is completely voluntary.  Completely voluntary means you can choose whether or not to take part.  If you decide to take part, you can refuse to answer any questions you do not like and you can stop the interview at any time you choose.  Whether you choose to take part or not, (YOUR/NAME’S) disability benefits will not be affected in any way.

 

       When I say your taking part is completely voluntary, what does that mean to you?

 

INTERVIEWER:  IF NAME/PROXY SAYS “It is voluntary,” PROBE:  What does that mean?

 

EXAMPLES OF ACCURATE ANSWERS ARE:  I can decide to take part or not to take part.  I can refuse to take part if I want.   I don’t have to do this.  I can do this if I want.  No one will take away my benefits if I refuse, etc.

 

INTERVIEWER:  IF NAME/PROXY SAYS “DON’T KNOW” RECORD AS “INACCURATE ANSWER”

 

                ACCURATE ANSWER......................................... 01        (go to 3)

                INACCURATE ANSWER..................................... 02

                REFUSED............................................................... r

 

 

2a.   INTERVIEWER:  YOU ARE ASKING THIS QUESTION FOR THE SECOND AND LAST TIME

 

Let’s try that again.  Taking part in the survey is completely voluntary.  Completely voluntary means you can choose whether or not to take part.  If you decide to take part, you can refuse to answer any questions you do not like and you can stop the interview at any time you choose.  Whether you choose to take part or not, (YOUR/NAME’S) disability benefits will not be affected in any way.

 

       When I say your taking part is completely voluntary, what does that mean to you?

 

INTERVIEWER:  IF NAME/PROXY SAYS, “It is voluntary,” PROBE:  What does that mean?

 

EXAMPLES OF ACCURATE ANSWERS ARE:  I can decide to take part or not to take part.  I can refuse to take part if I want.   I don’t have to do this.  I can do this if I want.  No one will take away my benefits if I refuse, etc.

 

INTERVIEWER:  IF NAME/PROXY SAYS, “DON’T KNOW” RECORD AS “INACCURATE ANSWER”

 

                        ACCURATE ANSWER................................. 01

                        INACCURATE ANSWER............................. 02  (go to 4)

                        REFUSED...................................................... r

 

 

3.   Here’s the last explanation.  All your answers will be kept confidential and used only for the research purposes of this study.  When I say your answers will be kept confidential, what does that mean to you?

 

                        ACCURATE ANSWER................................. 01  (begin interview)

                        INACCURATE ANSWER............................. 02

                        REFUSED...................................................... r

 

3a.   INTERVIEWER:  YOU ARE ASKING THIS QUESTION FOR THE SECOND AND LAST TIME.

 

Let’s try that again.  All your answers will be kept confidential and used only for the research purposes of the study.  When I say your answers will be kept confidential, what does that mean to you?

 

INTERVIEWER:  IF NAME OR PROXY SAYS “It is confidential,” PROBE:  What does that mean?

 

EXAMPLES OF ACCURATE ANSWERS ARE:  My answers will be secret.  Only researchers will see what I said.  What I say will be (kept) private.  It will be used for research, etc.

 

INTERVIEWER:  IF NAME/PROXY SAYS “DON’T KNOW” RECORD AS “INACCURATE ANSWER”

 

ACCURATE ANSWER............................................. 01  (begin interview)

INACCURATE ANSWER......................................... 02  (go to 4)

REFUSED.................................................................. r

 

 

4.   Thank you.  Our study rules say that we need to find someone else who can help answer the survey questions.  Is there someone there who could answer questions about (YOUR/NAME’S) health, daily activities, and any jobs (YOU/NAME) might have?

 

PROBE:  This might be someone who lives with (YOU/NAME), a friend, or someone like a social worker or caseworker.


Cornell University

ILR School

 

For more information about the Rehabilitation

Research and Training Center on Disability

Demographics and Statistics contact:

Andrew J. Houtenville

Employment and Disability Institute

Cornell University

303 ILR Extension Building

Ithaca, New York 14853-3901

Tel 607.255.5702

Fax 607.255.2763

TTY 607.255.2891

Email ajh29@cornell.edu

Web www.edi.cornell.edu

 

 

 

 



[1]Westat designed the test as part of the design of the Ticket to Work evaluation; MPR modified it after pretesting.

[2]MPR adheres to AAPOR’s standard definitions of outcome rates for surveys (AAPOR 2004).  Response rates are defined as completed interviews divided by the number of interviews completed plus the number of noninterviews plus all cases of unknown eligibility.  The cooperation rate is the number of completed interviews divided by all eligible cases ever contacted.  The self-response rate is the number of sample members who responded for themselves divided by the number of completed interviews.
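The three rate definitions in this footnote can be expressed directly as ratios.  The sketch below uses hypothetical case counts (all numbers are illustrative, not from the surveys described in this paper) to show how each rate is computed from final case dispositions.

```python
# Hypothetical final dispositions for illustration only (not survey data).
completes = 800            # completed interviews
noninterviews = 150        # noninterviews among known-eligible cases
unknown_eligibility = 50   # cases whose eligibility was never determined
eligible_contacted = 900   # all eligible cases ever contacted
self_responses = 600       # sample members who responded for themselves

# Response rate: completes over completes + noninterviews + unknown eligibility.
response_rate = completes / (completes + noninterviews + unknown_eligibility)

# Cooperation rate: completes over all eligible cases ever contacted.
cooperation_rate = completes / eligible_contacted

# Self-response rate: self-respondents over completed interviews.
self_response_rate = self_responses / completes

print(round(response_rate, 3))      # 0.8
print(round(cooperation_rate, 3))   # 0.889
print(round(self_response_rate, 3)) # 0.75
```

Note that the cooperation rate always meets or exceeds the response rate, since its denominator excludes cases of unknown eligibility and cases never contacted.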