Developed for:
Colorado Department of Institutions
Division for Developmental Disabilities
3824 W. Princeton Circle
Denver, CO 80236
(303) 762-4578
by:
Allen, Shea & Associates
1780 Third Street
Napa, CA 94559
(707) 258-1326
in cooperation with
Claudia Forrest
and
Nicholas DeCilla
Originally Published in June, 1992
Reissued and adapted for Reinventing Quality in December, 1993

The project depended on sharing -- on busy people taking the time to answer questions. We sought measurement tools addressing quality-of-life values, along with views as to whether the use of such tools made good sense. This report, the first of three volumes, reviews purposes and methods, and summarizes what was found -- not only in terms of tools and the value areas they address, but also concerns and questions about whether (and, if so, how) the measurement of such values might be pursued. Volume II is a compilation of forms, organized by Tool Number, giving additional facts about each tool and reviewing it in terms of criteria for assessing its potential usefulness. Volume III is a collection of the tools themselves, organized by Tool Number. Related literature, when available, is enclosed with each tool.
We are grateful for the willingness of Key Informants and others (e.g., directors of agencies, researchers) to share tools and to spend time talking with us about issues, concerns, problems, and, in some cases, their own experiences in constructing and using such tools. Several individuals with whom we spoke expressed special interest in the survey by asking how they might keep informed about what was found. Thanks to Brian Lensink, the Director of the Division for Developmental Disabilities in Colorado, and his staff, we were able to tell everyone that the Division would be willing to share what was learned with others, but might have to charge for copying documents, shipping and handling.
We thank, in a very special way, Judy Ruth, our principal contact within the Division for Developmental Disabilities. She has been great to work with, prompt in responses to questions, thorough and careful in critiquing our plans, and just a delightful person with whom to work.
Bill Allen
John Shea

Acknowledgments
I. INTRODUCTION
A. Purposes
B. How This Report is Organized
C. Technical Notes
II. METHODS
A. Surveys
B. Literature Search
C. Application of Criteria to Tools Located
III. RESULTS
A. Measurement Tools
B. Measurement Concerns and Limitations
IV. SOME FINAL THOUGHTS
A. Some Generalizations About Tools Reviewed
B. Considerations in Developing Quality-of-life Measures
Application of All Criteria to Measurement Tools

I. INTRODUCTION
A. Purposes
In March of 1992, we submitted a bid in response to a Request for Proposals (RFP) to locate and obtain measurement tools that purport to tap the progressive values embodied in the Colorado Division for Developmental Disabilities' new (1990) mission statement -- such things as friendship, self-esteem, satisfaction, and other aspects of service and life quality. Besides obtaining such tools, the RFP asked the contractor to discuss measurement concerns with Key Informants identified by the Division, and to report back on what was said. Finally, based on reports and conversations, the Division asked the contractor to assess how the various measurement tools stacked up against certain criteria, to be used in judging whether further exploration of a tool would be worthwhile -- leading, possibly, to its use in the COPAR, an instrument employed in an on-going longitudinal survey of people in Colorado with developmental disabilities.
B. How This Report is Organized
In this document, we provide an overview of the purposes of the project, the approach taken to gather information, and what was learned.
The six value areas, for the purposes of the various surveys, may be summarized as follows: friendship; self-esteem; competencies and talents; decision-making; community inclusion; and satisfaction.
C. Technical Notes
Unless quotation marks are evident, we have paraphrased what we understood to be the views of interviewees. In general, we have refrained from reporting conclusions that one or more project team members drew in the course of conducting the study, since we were not asked for our own opinions.
We did not review several tools forwarded to us that were completely off-target (e.g., behavior deficit assessments) or redundant (e.g., a tool adapted from, and essentially identical to, another tool previously received).
II. METHODS
A. Surveys
We developed letters of introduction, interview and questionnaire schedules, and database forms for recording information ('applying the criteria'). The questions addressed to Key Informants were as follows:
Mail questionnaire surveys were sent to three other groups: State MR/DD Directors; Colorado agencies providing advocacy or other services to persons with developmental disabilities; and State ARC Directors. In obtaining 'tools' and useful information about each (the criteria), we emphasized a 'snowball' technique, asking informants (and all others we surveyed) for additional leads to creators and users of instruments, and to literature in which results were presented.
B. Literature Search
We used a computerized literature search to identify tools that may have been overlooked by the more targeted approach. Our search of computer-based literature databases included REHABDATA, ERIC and PsychALERT. Once reviews were received, we requested copies of relevant abstracts and tools when referenced.
C. Application of Criteria to Tools Located
We were then responsible for reviewing the various tools and instruments, and for putting down on paper what we could learn about each concerning the criteria to be applied to tools. These criteria were, for the most part, laid out in the Colorado Request for Proposals.
III. RESULTS
A. Measurement Tools
Overall, we reviewed and got comments on some 72 measurement tools, as shown in the list below.
It was not always possible to determine the source of tools and other material brought to our attention. Of those who responded to the State survey, about half sent tools, and about half said they had no tools to share. As far as we can tell, about 10 measurement tools were sent to us by representatives of Colorado agencies. Key Informants sent in a handful of tools, and leads from a wide variety of sources, including the literature search, accounted for approximately 30 tools. While we did pick up a few additional tools through the literature search, many resources had already been provided or explored. Several tools were noted but not collected because they were referenced in foreign journals and could not be obtained within the three-month time frame of the project.
B. Measurement Concerns and Limitations
A member of the project team went to extraordinary lengths to interview Key Informants, and was successful in reaching about three in five of those on the list provided by the Division. Most respondents expressed their concerns in a rather free-flowing exploration of their opinions on the matter; three or four responded in writing. Key Informants expressed a variety of views about what to measure, how to measure it, and whether to focus on the individual or on percentages of individuals, and raised the question of why subjective, quality-of-life values should be measured at all.
1. What Should We Measure?
We suspect that the very revolution in thinking about lives and services that is reflected in the new values in the Colorado Mission Statement is at the root of much of the expressed concern. The likes, dislikes, hopes, dreams, and preferences of individuals are being given more respect these days by philosophers, advocates, and many progressive service providers. Sizable numbers of the persons interviewed talked about person-centered planning and support as elements of a paradigm shift away from a medical/developmental/readiness model.
a. Inner experience or other aspects of a person's life?
b. People's lives or programs, services, support?
One informant had a lot to say about what should be measured, stressing the importance -- in his view -- of focusing on that which the service system may be able to influence: that is, services and opportunities. At the same time, judged by others' remarks, there is sentiment to look at both the lives of individuals and services, which would seem to make sense if the purpose of evaluation is to modify the latter to improve the former.
2. How Should We Measure (and Report) Things?
The dilemma many Key Informants see is that, to be feasible in time and energy, one probably wants a few 'key questions or observations,' but, to be meaningful to people with very different personalities, likes, and dislikes, questions and observations should be individualized. Several Informants just don't see how subjective values can be measured (and reported) adequately. There is sentiment for measuring the same things that are measured in surveys of the general population, and one person suggests a focus on families. No one is saying that the values in the Colorado Mission Statement are unimportant; it's just that some are skeptical and others see the need for a lot of preparatory work if measurement is to make sense and be useful.
3. Why Should We Measure Things?
The COPAR is an evaluation instrument, intended ostensibly to tell agencies, public officials and others 'how well they are doing.' Since the Colorado Mission Statement was revised in 1990, and relatively few items in the COPAR track (or speak to) the new values, it makes sense to see whether the COPAR should be realigned with the Mission Statement.
One conception of evaluation in human services is that information is often wanted for one of four reasons: (1) curiosity; (2) monitoring; (3) fine tuning; or (4) choosing one program/service design over another. Edwards and Newman recognize that the difference between 'fine tuning' and 'choosing one program or service design over another' is often a matter of degree. One person's 'fine tuning' may be a 'major change in a program or service design' to someone else.
In any event, the expressed reason for the present study is monitoring -- to see how we are doing -- and this implies looking at the relationship between programs and services, on the one hand, and the quality of people's lives, on the other.
Key Informants were more or less sanguine about attempting to measure the new values, depending on purpose(s) or how the information would be used. Listed below are comments, organized in accordance with the Edwards-Newman perspective on getting information for decision-making. But, first, general remarks:
a. So, what's the purpose?
b. Curiosity is okay. Can we fulfill it, and use the information to advantage?
c. Monitoring --
d. Fine tuning OR choosing one program/service design over another --
4. Any Special Problems Measuring Different Values?
a. Friendship
b. Self-esteem
c. Competencies and talents
d. Decision-making
e. Community inclusion
f. Satisfaction
5. Other Issues, Limitations, Opportunities
a. Importance of understanding and training
Several Key Informants stressed the importance of having full, sophisticated understanding of the values, and the importance of extensive training:
b. The issue of intrusion
Presumably, intrusiveness is a bad thing in itself, especially if it can be avoided, and the related issue is what will be seen or heard, and what it means. Here are some thoughts on the matter:
c. Intentional or unintentional effects
d. What if the person is profoundly impaired or cannot communicate well?
e. Reliability, bias
We indicated in our proposal that longitudinal research designs help solve some problems (e.g., memory lapses), but imply others. One is 'measurement noise' attributable to differences in surveyors, or to how a surveyor behaves at two different points in time -- to say nothing of how the person interviewed or observed feels at a particular time, and how fleeting those feelings may be. This suggests that the softer the measure (that is, the more subjective or open to interpretation), the more important it is to approach people in a particular way, or (perhaps) to probe if one wants a sophisticated understanding of what has been said or seen.
Key Informants expressed these opinions or concerns:
IV. SOME FINAL THOUGHTS
A. Some Generalizations About Tools Reviewed
After reviewing the tools obtained for this project through the State of Colorado, national, Key Informant, and literature surveys, several generalizations seem warranted: