Measurement Tool Survey
to Improve Alignment of COPAR Tool

Final Report

Overview and Summary
(Adapted and Excerpted for WWW)



Developed for:

Colorado Department of Institutions
Division for Developmental Disabilities
3824 W. Princeton Circle
Denver, CO 80236
(303) 762-4578

by:
Allen, Shea & Associates
1780 Third Street
Napa, CA 94559
(707) 258-1326

in cooperation with
Claudia Forrest
and
Nicholas DeCilla

Originally Published in June, 1992
Reissued and adapted for Reinventing Quality in December, 1993




Acknowledgments

This project depended on sharing -- and on busy people taking the time to answer questions. We sought measurement tools addressing quality-of-life values, as well as views on whether the use of such tools makes good sense. This report, the first of three volumes, reviews purposes and methods, and summarizes what was found -- not only in terms of tools and the value areas they address, but also concerns and questions about whether (and, if so, how) the measurement of such values might be pursued. Volume II is a compilation of forms, by Tool Number, giving additional facts about each tool and reviewing it in terms of criteria for assessing its potential usefulness. Volume III is a collection of the tools themselves, organized by Tool Number. Related literature, when available, is enclosed with each tool.

We are grateful for the willingness of Key Informants and others (e.g., directors of agencies, researchers) to share tools and to spend time talking with us about issues, concerns, problems, and, in some cases, their own experiences in constructing and using such tools. Several individuals with whom we spoke expressed special interest in the survey, asking how they might keep informed about what was found. Thanks to Brian Lensink, the Director of the Division for Developmental Disabilities in Colorado, and his staff, we were able to tell everyone that the Division would be willing to share what was learned with others, though it might have to charge for copying documents, shipping, and handling.

We thank, in a very special way, Judy Ruth, our principal contact within the Division for Developmental Disabilities. She has been great to work with, prompt in responses to questions, thorough and careful in critiquing our plans, and just a delightful person with whom to work.

Bill Allen
John Shea




Contents

Acknowledgments

I. INTRODUCTION
A. Purposes
B. How This Report is Organized
C. Technical Notes

II. METHODS
A. Surveys
B. Literature Search
C. Application of Criteria to Tools Located

III. RESULTS
A. Measurement Tools
B. Measurement Concerns and Limitations

1. What Should We Measure?
2. How Should We Measure (and Report) Things?
3. Why Should We Measure Things?
4. Any Special Problems Measuring Different Values?
5. Other Issues, Limitations, Opportunities

IV. SOME FINAL THOUGHTS

A. Some Generalizations About Tools Reviewed
B. Considerations in Developing Quality-of-life Measures

Application of All Criteria to Measurement Tools






I. Introduction

A. Purposes

In March of 1992, we submitted a bid in response to an RFP (Request for Proposal) to locate and obtain certain measurement tools that purport to tap some progressive values embodied in the Colorado Division for Developmental Disabilities' new (1990) mission statement -- such things as friendship, self-esteem, satisfaction, and other aspects of service and life quality. Besides obtaining such tools, the RFP asked the contractor to discuss measurement concerns with Key Informants identified by the Division, and to report back on what was said. Finally, based on those reports and conversations, the Division asked the contractor to ascertain how the various measurement tools stacked up against certain criteria, which would be used to judge whether further exploration of a tool would be worthwhile, leading to its possible use in the COPAR, an instrument employed in an ongoing longitudinal survey of people in Colorado with developmental disabilities.

B. How This Report is Organized

The six value areas, for the purposes of the various surveys, may be summarized as follows:

  • Friendship and belonging: Establishing and maintaining relationships with other persons and a sense of belonging to their community.
  • Self-esteem: Experiencing personal security and self-respect.
  • Competencies and talents: Developing and exercising skills related to leisure or recreation activities, personal ambitions, or hobbies.
  • Decision-making: Having choices, making increasingly responsible choices and exerting greater control over one's own life circumstances.
  • Community inclusion: Having and exercising opportunities to participate in activities which are common for most persons and in the locations which represent everyday community life.
  • Satisfaction: Experiencing satisfaction with services and life conditions, on the part of the person receiving services and of his or her family.

In this document, we provide an overview of the project's purposes, the approach taken to gather information, and what was learned.

    C. Technical Notes

    Unless quotation marks are evident, we have paraphrased what we understood to be the views of interviewees. In general, we have refrained from offering conclusions that one or more project team members drew in the course of conducting the study, since we were not asked for our own opinions.

    We did not review several tools forwarded to us because they were completely off-target (e.g., a behavior deficit assessment) or redundant (e.g., a tool adapted from, and essentially identical to, another tool previously received).

    II. Methods

    A. Surveys

    We developed letters of introduction, interview and questionnaire schedules, and database forms for recording information ('applying the criteria'). The questions addressed to Key Informants were as follows:

    1. Do you have any concerns or arguments (pro or con) regarding the possibility of accurately measuring any of the concepts (or values) expressed above?
    2. Do you have any concerns or arguments (pro or con) regarding the proper interpretation of measurements of any of the concepts?
    3. Do you have any concerns or arguments (pro or con) regarding the use to which measures of these concepts might be put?

    Mail questionnaire surveys were sent to three other groups: State MR/DD Directors; Colorado agencies providing advocacy or other services to persons with developmental disabilities; and State ARC Directors. In obtaining tools, and useful information about each (for applying the criteria), we emphasized a 'snowball' technique, asking Key Informants (and all others we surveyed) for additional leads to creators and users of instruments, and to literature in which results were presented.

    B. Literature Search

    We used a computerized literature search to identify tools that might have been overlooked by the more targeted approaches described above. The computer-based literature databases we searched included REHABDATA, ERIC, and PsychALERT. Once reviews were received, we requested copies of relevant abstracts and of tools when they were referenced.

    C. Application of Criteria to Tools Located

    We were then responsible for reviewing the various tools and instruments, and for recording what we could learn about each with respect to the criteria to be applied. These criteria were, for the most part, laid out in the Colorado Request for Proposal.


    III. Results

    A. Measurement Tools

    Overall, we reviewed and obtained comments on some 72 measurement tools, as shown in the summary below.

    Summary of Tools Reviewed for Project

    Total tools reviewed for project: 72

    Tools reviewed, by value area:
      Self-esteem               18
      Competencies and talents  28
      Friendships               39
      Decision-making           42
      Community inclusion       53
      Satisfaction              53

    (Because a tool may address more than one value area, the counts by area sum to more than 72.)

    It was not always possible to determine the source of tools and other material brought to our attention. Of those who responded to the State survey, about half sent tools, and about half said they had no tools to share. As far as we can tell, about 10 measurement tools were sent to us by representatives of Colorado agencies. Key Informants sent in a handful of tools, and leads from a wide variety of sources, including the literature search, were the source of approximately 30 tools. While we did pick up a few additional tools through the literature search, many resources had already been provided or explored. Several tools were noted but not collected because they were referenced in foreign journals and could not be obtained within the three-month time frame of the project.

    B. Measurement Concerns and Limitations

    A member of the project team went to extraordinary lengths to interview Key Informants and was successful in reaching about three in five of those on the list provided by the Division. Most respondents expressed their concerns in a rather free-flowing exploration of their opinions on the matter. Three or four responded in writing. Key Informants expressed a variety of views about what to measure, how to measure it, and whether to focus on the individual or on percentages of individuals, and they raised the question of why subjective, quality-of-life values should be measured at all.

    1. What Should We Measure?

    We suspect that the very revolution in thinking about lives and services that is reflected in the new values in the Colorado Mission Statement is at the root of much of the expressed concern. The likes, dislikes, hopes, dreams, and preferences of individuals are being given more respect these days by philosophers, advocates, and many progressive service providers. Sizable numbers of persons interviewed talked about person-centered planning and support as elements of a paradigm shift away from a medical/developmental/readiness model.

    a. Inner experience or other aspects of a person's life?

  • Hopes we don't make the mistake of thinking we are measuring the nature of what a person is experiencing inside themselves.
  • It's a shift from quantifying outcomes with percentage values (. . . 50% of the time) to valuing outcomes that are not always quantifiable.
  • Developmentalists focus on task-analyzed progression, which may not matter.

    b. People's lives or programs, services, support?

    One informant had a lot to say about what should be measured, stressing the importance -- in his view -- of focusing on that which the service system may be able to influence: that is, services and opportunities. At the same time, judging by others' remarks, there is sentiment to look at both the lives of individuals and services, which would seem to make sense if the purpose of evaluation is to modify the latter to improve the former.

  • How can we measure the things that the service system does have control of, the things that increase or decrease people's opportunities?
  • Look at the performance of the system as a whole. We've got plenty of work to do to see how services and policies affect people's lives. Evaluation as a snapshot of behavior of the system as a whole and the regional parts of it.
  • More puzzled and less worried about using the tool to look at the system and not as a clinical evaluation of the person.
  • COPAR may permit the creation of the notion of a path model that things will happen to increase the possibilities for people.
  • May be worth trying to look at a conceptual model, time series measurement, on things which set limits on progress toward values.
  • This is a crude approximation of the performance of a region or system, rather than people's clinical status.
  • Measure at two levels: person & program.
  • More sense as far as organizing services to ask if the service agencies' missions are consistent with the mission of the Department.
  • Concerning consumer progress toward values, combination of expectations for individual change and ecological change.
  • Focus on consumer in evaluation is risky - must evaluate the whole setting.
  • How is the person's life better? Look at basic concepts? Where are you living? In a house or a home? Basically, use O'Brien's 5 accomplishments.

    2. How Should We Measure (and Report) Things?

    The dilemma many Key Informants see is that, to be feasible in time and energy, one probably wants a few 'key questions or observations,' but, to be meaningful to people with very different personalities, likes, and dislikes, questions and observations should be individualized. Several Informants just don't see how subjective values can be measured (and reported) adequately. There is sentiment for measuring the same things that are measured in surveys of the general population, and one person suggests a focus on families. No one is saying that the values in the Colorado Mission Statement are unimportant; it's just that some are skeptical, and others see the need for a lot of preparatory work if measurement is to make sense and be useful.

  • Best way to check this out is to ask: Can I come up with something which tells us how people are doing in these areas?
  • Safeguard: If I tried this on my friends and people with disabilities, does it tell me anything that's interesting? Does it seem intrusive?
  • As much as possible look at tools which measure the same values we look at in people without disabilities.
  • Would encourage the surveying of families.
  • Quantitative measurements of belonging, choice, inclusion, etc., are somewhat arbitrary. Attempts to 'boil down' to a number or phrase can be misleading.
  • 'Systematizing' a QA tool on measures of Quality of Life which are personally relevant is extremely difficult unless there is a great deal of flexibility in the tool and process.
  • I like anecdotal approach and personal futures planning.
  • The spirit of the value can be perverted in trying to measure it. In The Little Prince, that which is essential is invisible to the eye.
  • If you do things individually, tools don't fit.
  • Basically, we are trying to emphasize in the overall process things that are very significant certainly given the discrimination experienced by people with disabilities. I just don't see how we can measure these values.
  • Desire to over-quantify. If you try to turn values into numbers, trivializes human functions and interactions.
  • Raw averages don't tell much.
  • From a mission statement orientation, it is hard to give hard number scores, but you can get opinions from evaluation teams about how they're doing.

    3. Why Should We Measure Things?

    The COPAR is an evaluation instrument, ostensibly intended to tell agencies, public officials, and others 'how well they are doing.' Since the Colorado Mission Statement was revised in 1990, and relatively few items in the COPAR track (or speak to) the new values, it makes sense to see whether the COPAR should be realigned with the Mission Statement.

    One conception of evaluation in human services is that information is often wanted for one of four reasons: (1) curiosity; (2) monitoring; (3) fine tuning; or (4) choosing one program/service design over another. Edwards and Newman recognize that the difference between 'fine tuning' and 'choosing one program or service design over another' is often a matter of degree. One person's 'fine tuning' may be a 'major change in a program or service design' to someone else.

    In any event, the expressed reason for the present study is monitoring -- to see how we are doing, and this implies looking at the relationship between programs and services, on the one hand, and the quality of people's lives, on the other.

    Key Informants were more or less sanguine about attempting to measure the new values, depending on purpose(s) or how the information would be used. Listed below are comments, organized in accordance with the Edwards-Newman perspective on getting information for decision-making. But, first, general remarks:

    a. So, what's the purpose?

  • Main interest is in what they (Colorado) do with the information from the measurement of the values.
  • We need to help Colorado think through "What does this mean and what are you going to do about it."
  • Not clear who uses COPAR and for what.
  • No need to ask questions when the answer is obvious. For example, in decision-making, if the person lives in an institutional setting there is no point in asking the question.
  • Not a bad idea to recognize the limitations of what would be reported on a routinely administered instrument.
  • I have no problem with the Division doing evaluations. The problem is people need to know the purpose of the evaluation and how it is going to help people. How is it tied to training and technical assistance?

    b. Curiosity is okay. Can we fulfill it, and use the information to advantage?

  • Someone who is 'scored' as having not met expectations for any of the values risks being seen as having a 'deficit' in need of remediation, rather than the exploration and discovery needed first to better understand how, what, why . . . .
  • I have learned that information needs to be used as an indicator that there is something more to learn.
  • Less worried, if it's one more thing that shapes conversations people have about what it [the value, such as friendship] is.
  • Need to pay attention, for example, to the people who have lots of friends and see what they are doing and what that's about.
  • Can't get the general taste of things from tools.

    c. Monitoring --

  • Often attempts at value-based, people-centered planning are completed, only to have the funding system require the 'percentage' approach to programs. These assessment processes are approaches to life!
  • Agencies see evaluation as punitive rather than as a way to grow and develop. They have to be open to ignorance and fallibility.
  • If purpose is to pass/fail agencies, then basis is compliance, not value-based mission. Floor on acceptability/ceiling on desirability.

    d. Fine tuning OR choosing one program/service design over another --

  • Question is how to make things happen from the information.
  • On use [or application], we need to get smarter rather than do more evaluations. Should get at the big questions first and then, based on observations, go at standards if there is a problem.
  • Colorado talks like they want things to happen.
  • Division thinks they can mandate values. Low probability of this happening. There is a lack of real values training with direct-line staff. Even if managers understand, which many of them don't.
  • Generally, measures cannot be interpreted in the usual way. To use these assessments and processes, the paradigm shift must at least have begun, if not be complete. An individual's 'plan' falls out nicely!
  • Suggest advice to programs, not pass/fail.
  • Judgment should be as much on what agency does to facilitate change toward values.

    4. Any Special Problems Measuring Different Values?

    a. Friendship

  • I have a concern about objectifying and quantifying friendship. Would you measure eye gaze? The essence of friendship is non-quantifiable and spontaneous.
  • Values are important but are going to pose serious measurement challenges. Wouldn't know how to scale my own friendship relationships.
  • I do not believe issues around relationships can be measured. You cannot quantify 'friend.'
  • I think issues regarding friendship and belonging, etc., are not easily measured. They are relational issues. Checklists of things to think about or questions to ask might be helpful, but to quantify or rank them bothers me a lot.

    b. Self-esteem

  • Look at scales used with general population. It's as variable as the day you ask.
  • How I feel about myself depends on when you ask me.
  • Most self-esteem scales are fairly useless.

    c. Competencies and talents

  • My concern is that competency will end up being productivity. Too broad a term; need to break it down.

    d. Decision-making

  • How could you make sense of evaluation results? Is it better to make more decisions? How would you compare complexity and number?
  • Saying someone exercises 'decision-making' among choices which are poor ones (segregated facilities, isolated situations, etc.) does not represent quality decision making.

    e. Community inclusion

  • My concern is that tools end up measuring time and activities.
  • Variation over time, and I'm not sure how I'd scale it.
  • You can have a one-person group home. Lots of people are in the community, but they are not part of the community.

    f. Satisfaction

  • Look at C. Feinstein's Pennhurst Study and 1990 Study by DD Councils, also by Feinstein. Ask simply: "What do you want? need? like? and dislike?"

    5. Other Issues, Limitations, Opportunities

    a. Importance of understanding and training

    Several Key Informants stressed the importance of having full, sophisticated understanding of the values, and the importance of extensive training:

  • Need to help people with disabilities to be frank and trust that we will listen and respond.
  • Need significant orientation to values. All levels need training.
  • Recommend a few years of values training prior to measuring and interpreting values. People are often sent out in a group to a carnival of thousands and it is interpreted as integration.
  • Once they get an assessment, need to train the hell out of people.
  • On measuring values, seems to me people have a superficial understanding of their meanings. They learned about normalization on Tuesday, and now they think they have the values. There is a major discrepancy between what people say they believe and how people live. They are not bad people, they just can't figure out why people who live in group homes don't meet their neighbors.
  • Understanding the values requires sophisticated thinking.

    b. The issue of intrusion

    Presumably, intrusiveness is a bad thing in itself, especially when it can be avoided; a related issue is what will be seen or heard, and what it means. Here are some thoughts on the matter:

  • Evaluation can be very intrusive in people's homes and lives. Have to be very sensitive. Establishing trust is critical.
  • Look for measures which are minimally intrusive -- like MAPS (McGill Action Planning System). However, you can see people make significant changes indicated in an evaluation tool, because of people's fairly superficial understanding of the values.
  • Some of these can be personally intrusive.

    c. Intentional or unintentional effects

  • I worry people will write goals to build 3 friends by May 1st.
  • If told that horizontal, reciprocal relationships are going to be measured, then horizontal, reciprocal relationships will be created.
  • Main worry is that, if we are not careful, people will think they are looking at the person instead of a group of people.
  • Good services get beat up because they write goals differently.
  • Measuring values fragments the picture. You get stereotypic responses. You'll look like what someone perceives you to be, or like they expect you to be. Biggest problem is we don't let people be who they are.

    d. What if the person is profoundly impaired or cannot communicate well?

  • I applaud Colorado's effort. Some stuff is hard to evaluate. How do you measure satisfaction for people who don't communicate? The real struggle is to create an instrument to work for a wide range of people.
  • Ask the question: Will people who are profoundly retarded be able to participate?
  • Goals which emphasize personal change may exclude people with profound handicaps.

    e. Reliability, bias

    We indicated in our proposal that longitudinal research designs help solve some problems (e.g., memory lapses), but introduce others. One is 'measurement noise' attributable to differences among surveyors, or to how a surveyor behaves at two different points in time -- to say nothing of how the person interviewed or observed feels at a particular moment, and how fleeting those feelings may be. This suggests that the softer the measure (that is, the more subjective or open to interpretation), the more important it is to approach people in a particular way, or (perhaps) to probe, if one wants a sophisticated understanding of what has been said or seen.

    Key Informants expressed these opinions or concerns:

  • Most tools are unreliable and waste time.
  • Depends on who's helping people fill out the evaluation.
  • In Colorado, case managers are not independent. They are owned by the system, and this puts them in a conflict-of-interest position in evaluating people.
  • Concern over the potential bias of the rater who may have an interest in the person's welfare.

    IV. Some Final Thoughts


    A. Some Generalizations About Tools Reviewed

    After reviewing the tools obtained for this project through the State of Colorado, national, Key Informant, and literature surveys, several generalizations seem warranted:

  • Most tools have been developed as a method of evaluating services or reviewing compliance with quality assurance standards.
  • Most tools are concerned with lifestyle, that is, observable life patterns (what people do, where they do it, and when), environments (what places look like), and observable behavioral characteristics (do people have an opportunity for choice, how do they spend their free time).
