Taking Inventory: Symptom Research and Psychometrics
Network - Summer 2008
By Sandi Stromberg
Have you ever wondered who develops standardized tests, like the Scholastic Aptitude Test (SAT) for getting into college or the Graduate Record Exam (GRE) for graduate school?
How do they know what questions measure knowledge or intelligence?
And how can they calculate what constitutes a passing score that says a person has the ability to follow a course of study?
Those who practice this specialized field are psychometricians, and their science is psychometrics, the study of the design and analysis of tests and questionnaires. Besides standardized tests, psychometricians also play a prominent role in constructing patient-reported assessment tools for cancer-related clinical trials, such as measures of symptom burden.
Tito Mendoza, Ph.D., assistant professor in M. D. Anderson’s Department of Symptom Research, is one such psychometrician. For the last 12 years, he and Charles Cleeland, Ph.D., chair of the department, have worked with researchers and health care professionals across the institution to help design and assess the reliability and validity of tools that measure the side effects of cancer and its treatments.
FDA demanding more rigor
Active in pain research for many years and instrumental in developing the Brief Pain Inventory (BPI) now used in most clinical trials, Cleeland knew the importance of measuring and attending to patients’ symptom distress long before it became a concern for federal agencies.
“Now, the U.S. Food and Drug Administration is asking for more rigor in the assessment of symptoms and other patient-reported outcomes,” he says. “It wants more systematic and validated measures of symptoms that are both relevant and intelligible to patients.”
Thanks to Cleeland and his colleagues, M. D. Anderson is a leader in this field through the BPI, mentioned above, as well as the Brief Fatigue Inventory and the M. D. Anderson Symptom Inventory, known as the MDASI. The latter is a brief measure of the severity of 13 cancer-related symptoms and their impact on six daily functions, regardless of disease site.
Based on what they learned developing the MDASI, they have been working with physicians and nurses across the institution to produce a subsequent series of site-specific inventories. Mendoza plays an active role from the beginning of the process.
The process of taking inventory
“Before a statistical analysis plan can be written and data collected, I need to know what the researcher wants to show — what the primary question is,” Mendoza says. “Then, I help figure out what information needs to be collected, how many patients to recruit and how many time points to include.”
Once he calculates how many patients need to be enrolled to find the answers, Mendoza designs how they will test the reliability and validity of the data they collect. Then, he steps back while the clinicians carry out their studies.
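The enrollment calculation Mendoza describes can be sketched with a standard two-group power calculation. This is a minimal illustration of the general technique, not his actual method; the effect size, significance level and power below are assumptions chosen for the example:

```python
import math

def n_per_group(effect_size, z_alpha=1.96, z_beta=0.84):
    """Patients needed per group, normal approximation for a
    two-sample comparison: alpha = 0.05 (two-sided), power = 0.80."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Detecting a medium standardized difference (d = 0.5) in symptom
# severity between two patient groups:
print(n_per_group(0.5))  # 63 patients per group
```

A smaller expected effect drives the required enrollment up quickly, which is why the primary question must be fixed before recruitment targets are set.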
To develop the specific content for each of the symptom inventories, specialists in each disease site hold multidisciplinary focus groups, including patients and family members, to collect data on symptoms and functionality issues.
Ensuring reliability and validity
Once he has the data, the first thing Mendoza tests is reliability. One way to do this is to measure across two time points, a method called test-retest reliability. The caveat in choosing this method, however, is that the interval between the first and second time points must be short enough that no intervening factors can change the patient’s condition.
“This means that if you ask patients about a symptom at one point, they give you similar answers at the second point, if nothing in the patient’s condition changes,” Mendoza says. “For example, if you take your temperature with a thermometer, unless you develop a fever, you should get a similar result a few days later. If you don’t, your thermometer isn’t reliable. A variation, or noise, is coming from somewhere else. It’s like trying to hear a conversation above some ambient noise that we need to get rid of to understand the conversation.”
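The thermometer analogy translates directly into a correlation check between the two administrations. A minimal sketch, with hypothetical 0-10 symptom-severity ratings invented for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ratings from the same five patients at two time
# points a few days apart, with no change in their condition.
time1 = [2, 5, 7, 3, 8]
time2 = [3, 5, 6, 3, 8]
print(round(pearson_r(time1, time2), 2))  # 0.97
```

A correlation near 1.0 means the instrument, like a good thermometer, gives the same reading when nothing has changed; a low value signals noise coming from somewhere other than the patient's condition.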
Of reliability and validity, Mendoza says they use more tests of validity because it’s more important. “You can have a reliable tool, but if it isn’t relevant or valid, it’s not very helpful.”
In a recent study to analyze data collected by David I. Rosenthal, M.D., and his colleagues for the MDASI-HN (head and neck; described on page 4 of this issue), Mendoza chose three validity measures: construct validity, known-group validity and concurrent validity.
Construct validity: This method helps identify underlying factors, called latent constructs, that cannot be measured directly but can be observed through indicators. For example, a family’s socioeconomic status (SES) cannot be measured directly, but variables such as the parents’ occupations, education levels and incomes can be, and these serve as indirect indicators of SES.
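The SES example can be simulated to show why indicators reveal a latent construct. In this sketch, a hidden value drives two invented indicator scores; the names, scales and noise levels are all assumptions for illustration:

```python
import math
import random

random.seed(0)

def pearson_r(x, y):
    """Pearson correlation between paired values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A hidden "SES" value drives two observable indicators (hypothetical
# income and education scores), each with independent noise.
ses = [random.gauss(0, 1) for _ in range(1000)]
income = [s + random.gauss(0, 0.5) for s in ses]
education = [s + random.gauss(0, 0.5) for s in ses]

# The indicators correlate strongly with each other because they share
# the latent construct, even though neither measures SES directly.
print(pearson_r(income, education) > 0.7)
```

In practice, factor analysis of such inter-item correlations is what lets psychometricians confirm that an inventory's items hang together around the constructs they are meant to capture.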
Known-group validity: Using the Eastern Cooperative Oncology Group (ECOG) performance status as the grouping variable, Mendoza wanted to determine whether patients with poor ECOG performance status also reported severe symptoms on the patient self-assessment symptom inventory (MDASI), and whether patients with good ECOG performance status reported fewer and less severe symptoms.
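The known-group logic reduces to comparing group averages in the expected direction. A minimal sketch with invented MDASI severity scores (the data and the ECOG grouping cutoff are assumptions for illustration):

```python
def mean(xs):
    """Arithmetic mean of a list of scores."""
    return sum(xs) / len(xs)

# Hypothetical mean MDASI severity scores (0-10) grouped by ECOG
# performance status: 0-1 treated as good, 2+ as poor.
good_ecog = [1.5, 2.0, 2.5, 1.0, 3.0]
poor_ecog = [5.5, 6.0, 7.5, 6.5, 8.0]

# Known-group validity expects the poorer-status group to report
# more severe symptoms on the inventory.
print(mean(good_ecog) < mean(poor_ecog))  # True supports validity
```

A formal analysis would add a significance test of the group difference, but the core claim is just this directional pattern: groups already known to differ should differ on the new tool as well.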
Concurrent validity: This measures how well a self-report tool correlates with a well-established measure of overall health, in this case the SF-12v2 from the RAND Corporation. If the two tools’ results overlap, that provides further evidence of the self-assessment tool’s validity.
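Concurrent validity, too, comes down to a correlation, here between the new inventory and the established instrument. In this sketch the scores are invented for illustration; note the expected correlation is negative, because higher symptom severity should go with lower overall-health scores:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for six patients: MDASI symptom severity
# (0-10, higher = worse) and SF-12v2 overall health
# (0-100, higher = better).
mdasi = [2, 4, 5, 7, 8, 9]
sf12 = [85, 70, 65, 50, 45, 35]

# A strong negative correlation means the two tools overlap:
# patients the established measure rates as less healthy also
# report more severe symptoms on the new inventory.
print(pearson_r(mdasi, sf12) < -0.8)  # True supports validity
```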
A growing series of inventories
To date, Cleeland, Mendoza and other members of their team have produced symptom inventories that include MDASI-BT (brain tumor), MDASI-Thy (thyroid), MDASI-Lung and MDASI-HF (heart failure).
In turn, these collaborations are allowing health care professionals in each area to collect solid, scientific evidence that can be used ultimately to design interventions to relieve the symptom burden caused by cancer and its treatments.