
Science is the discipline researchers use to answer questions and test hypotheses. The results of a scientific investigation may apply only to a specific subset or may generalize more broadly. If, for example, a small sample yields a particular result, that in no way means the same result will hold in a larger sample. A result carries greater weight when a large sample produces similar findings. Because it is so difficult to gather large samples when the subject of study is a rare disease like Multiple System Atrophy, findings are difficult to confirm.

Scientists can work around this problem by using alternative resources as subjects for testing. One alternative might be mouse models; another might be cells cultured in petri dishes with a specific growth medium. Yet another might be computer modeling. Even these substitutes, however, do not always yield useful results. Scientists are then forced to ask different questions or pursue a different hypothesis.

An issue currently being discussed in an MSA support group is a good example of the vagaries of science. It involves physical testing and a recent scientific “claim.” In brief: a patient apparently diagnosed with MSA was asked to report to her doctor for a skin biopsy. She sent an email to the group questioning the procedure.

A recently published article abstract described the results of an MSA clinical trial examining sudomotor nerve density (total length of nerve per volume of glandular tissue) at three different body sites. Nerve density was consistently lower in the test patients (those with MSA) than in the controls. The authors concluded, “Our data support the hypothesis that postganglionic impairment occurs in MSA …”

This author wrote to the patient and explained that the testing her doctor was performing may be based upon the results of this study. The study reports conclusions drawn from observations of human tissue. The patient’s doctor took her biopsies from similar body sites, raising the possibility that her test would yield similar results. But the doctor labeled her results “normal,” which seems counter to the results of the study above. So is there anything the patient can take away from this?

From a science perspective, there isn’t much. The sample comprised 29 MSA patients whose results differed significantly from the control group in each of the three areas (P value of .0001). That one patient did not have similar results is not unusual. Why? The sample size is small. The patient may be misdiagnosed, and these are the correct results. The accuracy of the measurement tools may be off. The patient may have the correct diagnosis but be earlier in the disease process than the 29 subjects. It is also possible her doctor followed a different hypothesis or asked a different question, and “normal” has a different meaning to him.
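The point that a group-level difference can coexist with one individually “normal” result can be sketched with a toy calculation. The numbers below are entirely invented for illustration; they are not data from the study, and the units are arbitrary.

```python
# Hypothetical illustration (all values invented, NOT from the cited study):
# a group difference in nerve density can be real even though one
# individual patient's value falls inside the "normal" (control) range.
import statistics

# Invented sudomotor nerve-density values, arbitrary units
controls = [10.2, 9.8, 11.1, 10.5, 9.9, 10.8, 10.4, 9.6]
msa = [7.1, 6.8, 8.0, 7.5, 10.3, 6.9, 7.7, 7.2]  # one patient at 10.3

mean_control = statistics.mean(controls)
mean_msa = statistics.mean(msa)
print(f"control mean: {mean_control:.2f}")
print(f"MSA mean:     {mean_msa:.2f}")

# The group means differ clearly ...
assert mean_msa < mean_control

# ... yet the MSA patient measured at 10.3 sits inside the control
# range, so a single "normal" biopsy does not by itself contradict
# the group-level finding.
lo, hi = min(controls), max(controls)
print("patient 10.3 within control range:", lo <= 10.3 <= hi)
```

This is only a sketch of the statistical idea, not a reconstruction of the study’s analysis: group comparisons describe samples, and any one patient can fall anywhere in, or outside, either group’s range.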

Science is not clean. It is cleaner when the questions are focused, the technology matches the tests being done, the observations are real-world, the sample size is large enough, and the tests can be replicated. The researcher has control over most of these criteria, but not over the sample size. Nature decides the sample size when the subject is a rare disease. Until patients can be brought together in large enough groups, or the proportion of MSA patients in the overall population increases – God forbid – ambiguity will continue. That is not what patients and caregivers want to hear.

Author: Larry Kellerman, PhD; Caregiver Representative, The MSA Coalition Board of Directors