To the editor:

I read with interest the recent publication of Kantarjian and colleagues1 regarding decitabine schedules in higher-risk myelodysplastic syndrome (MDS) and chronic myelomonocytic leukemia (CMML).

The authors conclude that the 5-day intravenous decitabine schedule produced a higher response rate than the other tested schedules. This study is important because there is considerable uncertainty about the optimal dosing, and the suggestion that one schedule is superior to another may have implications for the use and reimbursement of the drug. However, the probability that the reported superiority of the 5-day intravenous schedule over the others is merely a chance finding is considerable. First, the Bayesian randomization method used in this trial assigned each patient to a treatment arm according to the estimated probability that the complete remission (CR) rate of that schedule was superior to those of the other 2 schedules; this adaptive assignment began after 15 patients had been allocated to each arm. The numbers of patients achieving CR, according to the article, are shown in Table 1 (lines 1 and 2). Line 3 assumes that the final response rate (39%) in the superior group also applied to its first 15 patients. Differences of this magnitude, however, could easily occur by chance alone, and it is difficult to see why more patients should have been randomized to schedule 1 at that point.

Table 1

Decitabine dosage and response rates in patient subgroups

                                            5-day IV dose   5-day SC dose   10-day IV dose
No. of patients                                   64              14              17
No. of patients in CR (%)                       25 (39)         3 (21)          4 (24)
(Estimated) CR patients for the first 15 patients  (6)

IV indicates intravenous; SC, subcutaneous.
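To make the chance argument concrete, the following Monte Carlo sketch (Python, standard library only) estimates the posterior probability that one arm's CR rate exceeds another's, assuming uniform Beta(1, 1) priors and the estimated early counts from Table 1 (6/15 CR on the 5-day intravenous arm vs 3/15 and 4/15 on the other arms; the 6/15 figure is itself an estimate, as noted above):

```python
import random

def prob_a_beats_b(succ_a, n_a, succ_b, n_b, draws=200_000, seed=1):
    """Monte Carlo estimate of P(CR rate of arm A > CR rate of arm B),
    with uniform Beta(1, 1) priors on both rates."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        p_a = rng.betavariate(1 + succ_a, 1 + n_a - succ_a)
        p_b = rng.betavariate(1 + succ_b, 1 + n_b - succ_b)
        if p_a > p_b:
            wins += 1
    return wins / draws

# Estimated early counts from Table 1:
# 6/15 CR (5-day IV) vs 3/15 (5-day SC) and 4/15 (10-day IV).
print(prob_a_beats_b(6, 15, 3, 15))
print(prob_a_beats_b(6, 15, 4, 15))
```

Under these assumptions the posterior probability that the 5-day intravenous arm is superior to each competitor lands well short of certainty, which is the sense in which the early imbalance could be a chance finding.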

There is, however, a second reason to be cautious in accepting the reported findings. The probability that a study finding is correct is not only a function of the P value and the power of the study; it also depends heavily on the a priori probability that the question under investigation is sensible,2 for example, the a priori probability that the investigators had a good idea in testing different decitabine dosages. If the idea under investigation (eg, that the intravenous schedule is superior to the subcutaneous schedule) has a 10% chance of being correct, and the study yields P = .05 at a power of 80%, then the probability that this "statistically significant result" is a false positive is 36%.2 There is no mathematical or statistical approach to measuring a priori probabilities. It is beyond any doubt that Kantarjian and coworkers are experts in the field of MDS; indeed, I would not hesitate to refer to them with any question regarding any aspect of this disease. Still, as long as we are only making intelligent guesses about the exact mechanism of action of decitabine, the a priori probability that 3 different dosing schedules with the same cumulative dose differ significantly in efficacy is at least debatable, even after contemplating the results of this study.
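The 36% figure can be checked with a few lines of arithmetic; this short sketch (Python, standard library only) applies the relationship between prior probability, significance level, and power described by Sterne and Davey Smith:

```python
def false_positive_prob(prior, alpha=0.05, power=0.80):
    """Probability that a result significant at `alpha` is a false
    positive, given the prior probability the hypothesis is true."""
    true_positives = power * prior         # real effects that are detected
    false_positives = alpha * (1 - prior)  # null effects that cross alpha
    return false_positives / (false_positives + true_positives)

# A 10% prior, alpha = .05, and 80% power give the 36% quoted above.
print(round(false_positive_prob(0.10), 2))  # 0.36
```

The point of the exercise is that even a conventionally "significant" result remains more than one-in-three likely to be wrong when the hypothesis was improbable to begin with.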

Therefore, the data presented are not sufficient to allow final conclusions on the optimal dosage of decitabine in MDS. In the final paragraph, the authors correctly state that additional studies are needed to compare the subcutaneous dosing schedule with the intravenous one. This will be a difficult task, however, requiring large patient numbers and a very carefully conceived study design.

Conflict-of-interest disclosure: The author declares no competing financial interests.

Correspondence: Aristoteles A. N. Giagounidis, St Johannes Hospital, An der Abtei 7–11, Duisburg, Germany, 47166; e-mail: gia@krebs-duisburg.de.

1. Kantarjian H, Oki Y, Garcia-Manero G, et al. Results of a randomized study of 3 schedules of low-dose decitabine in higher-risk myelodysplastic syndrome and chronic myelomonocytic leukemia. Blood. 2007;109:52-57.
2. Sterne JA, Davey Smith G. Sifting the evidence: what's wrong with significance tests? BMJ. 2001;322:226-231.