Publication bias is the preferential publication of research with positive results, and is a threat to the validity of medical literature. Preliminary evidence suggests that research in blood and marrow transplantation (BMT) lacks publication bias. We evaluated publication bias at an international conference, the 2006 Center for International Blood and Marrow Transplant Research (CIBMTR)/American Society for Blood and Marrow Transplantation (ASBMT) “tandem” meeting. All abstracts were categorized by type of research, funding status, number of centers, sample size, and direction of the results. Publication status was then determined for the abstracts by searching PubMed. Of 501 abstracts, 217 (43%) were later published as complete manuscripts. Abstracts with positive results were more likely to be published than those with negative or unstated results (P = .001). Furthermore, positive studies were published in journals with a mean impact factor of 6.92, whereas journals in which negative/unstated studies were published had an impact factor of only 4.30 (P = .02). We conclude that publication bias exists in the BMT literature. Full publication of research, regardless of direction of results, should be encouraged and the BMT community should be aware of the existence of publication bias.

Only a small portion of all research eventually becomes part of the published medical literature. Publication bias occurs when a particular characteristic of research renders it more or less likely to be published.1  For example, if research with “positive” findings is more likely to be published than research with “negative” findings, a bias toward a positive finding may ensue. This is of particular concern in areas of medicine that rely on data from multiple studies, each with small sample sizes.2  For example, if a new drug is studied in several small trials that demonstrate a mixture of positive and negative results, but only the studies showing benefit are published, the scientific community might falsely conclude that the medication is useful.

A variety of factors are associated with publication bias.3  Authors or funders may be more motivated to pursue the publication of a positive result than a negative one. In addition, journal editors might be more interested in publishing positive findings. It is also possible that authors might anticipate a low likelihood of success at submitting a negative study to a journal, and therefore choose not to proceed with manuscript submission. Another possibility is that if authors perceive a low likelihood of acceptance of negative findings in a journal with a high impact factor, they may choose to submit to a journal with a lower impact factor. Therefore, negative papers are not only less likely to be published, but when they are published, they may be in a less widely read or influential journal.

Publication bias has been examined in many areas of medicine, but has not been comprehensively studied in blood and marrow transplantation (BMT).4-6  BMT is a complex medical procedure performed largely in academically oriented medical centers, and in such an environment publication bias might be expected to be less common than in other fields.

In a pilot study, we reviewed abstracts from the Canadian Blood and Marrow Transplant Group (CBMTG) meeting. A total of 141 abstracts were reviewed and categorized based on study type, funding source, numbers of centers involved, and the study results (“positive” or “negative,” using the authors' definition). We found that 37.7% of positive abstracts were published (20 of 53), compared with 26.1% of negative abstracts (23 of 88) (P = .35).7  Therefore, we did not find that publication bias was present, because positive abstracts were not more likely to be published than negative abstracts. However, that pilot study was based on a small number of abstracts submitted from a single country, so it had limited statistical power and generalizability. We therefore chose to further investigate publication bias in BMT in a much larger, international setting using similar methodology.

The primary objectives of the present study were to measure the overall publication rate of abstracts and to establish whether studies with positive results are more likely to be published than those with negative results. We also examined a previously unexplored form of reporting bias regarding the studies that are eventually published: whether positive studies appear in journals with higher impact factors.

We reviewed all abstracts from the 2006 American Society for Blood and Marrow Transplantation (ASBMT)/Center for International Blood and Marrow Transplant Research (CIBMTR) “tandem” meeting. We chose the 2006 meeting to ensure adequate time for publication, because previous observations, including our own pilot study, indicated that most abstracts are fully published within 5 years of presentation.4,7  Publication status was determined as of September 1, 2010. Abstracts from the tandem meeting are peer reviewed and published as a supplement to the journal Biology of Blood and Marrow Transplantation.8  Research presented at the tandem meeting encompasses a variety of fields within BMT, including clinical, basic science, pharmacy, and nursing research.

Using our previously published methodology,7  each abstract was reviewed by 2 separate authors. Content experts were chosen to review each category of abstracts; for example, 2 nurses reviewed nursing abstracts, whereas clinical research abstracts were reviewed by 2 authors with expertise in the subject matter.

Abstracts were categorized by type of research: clinical retrospective, clinical prospective, basic science, translational, case report, review/meta-analysis, and other. If an abstract was categorized as a clinical study, the number of subjects enrolled in the study was recorded. Abstracts were also categorized by the number of centers involved (single center or multicenter), funding status (industry funded, non-industry-funded, or not funded), and the direction of results (positive, negative, or not stated). If the abstract listed a hypothesis, the abstract was categorized as positive if the results validated the hypothesis. If no hypothesis was stated, then more subjective criteria were used. For example, if a new diagnostic test was found to be useful or a new therapy was found to be beneficial, then the abstract was categorized as positive. If the study was descriptive in nature with no hypothesis or result, then the abstract was categorized as “not stated.” If there was disagreement between the 2 content experts, this disagreement was documented and the abstract was reviewed by a third reviewer to achieve consensus.
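To make the categorization scheme concrete, the sketch below shows one way the per-abstract data described above could be recorded; the field names, labels, and example values are hypothetical and are not taken from the study's actual data-collection form.

```python
# Hypothetical record structure for the abstract categories described above;
# the study used its own data-collection form, so this is only an illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AbstractRecord:
    abstract_id: str            # placeholder identifier
    study_type: str             # e.g. "clinical retrospective", "basic science", "other"
    centers: str                # "single center", "multicenter", or "not stated"
    funding: str                # "industry", "non-industry", "not funded", or "not stated"
    result_direction: str       # "positive", "negative", or "not stated"
    sample_size: Optional[int] = None  # recorded for clinical studies only
    needs_third_review: bool = False   # flagged when the 2 content experts disagree

# Example entry (values are invented for illustration).
example = AbstractRecord(
    abstract_id="2006-0001",
    study_type="clinical prospective",
    centers="multicenter",
    funding="not stated",
    result_direction="positive",
    sample_size=120,
)
```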

Publication status of each abstract was determined separately by a single author. The National Library of Medicine's PubMed database was searched using the names of the first and last authors of each abstract. In our pilot study, searching additional databases such as EMBASE or CINAHL yielded no additional matches because all published studies were indexed in PubMed, and the first or last author of the abstract was universally found within the author list of the final publication.7  We therefore did not consider searching by additional authors to be necessary, and publication status was determined by searching for first and last authors only; the first or last author of the abstract needed only to be listed in any position on the final paper for it to be detected by this search strategy. Potential matches were reviewed in detail to determine whether they represented the same work presented in the abstract. We also recorded the most recently published impact factor of the journal for each publication.9 
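As an illustration of how such a search could be scripted, the sketch below queries PubMed through the NCBI E-utilities using Biopython's Entrez module. This is an assumption for illustration only (the study's searches were not necessarily automated), and the author names, e-mail address, and date window are placeholders.

```python
# Illustrative, not the study's actual workflow: search PubMed for papers
# listing an abstract's first or last author, using Biopython's Entrez
# wrapper around the NCBI E-utilities.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI asks for a contact address

def find_candidate_publications(first_author, last_author,
                                year_from=2006, year_to=2010):
    """Return PubMed IDs of records naming either author in any position.

    Matches are only candidates; each must still be reviewed manually to
    confirm it reports the same work as the meeting abstract.
    """
    term = (
        f'("{first_author}"[Author] OR "{last_author}"[Author]) '
        f'AND ("{year_from}"[PDAT] : "{year_to}"[PDAT])'
    )
    handle = Entrez.esearch(db="pubmed", term=term, retmax=200)
    record = Entrez.read(handle)
    handle.close()
    return record["IdList"]

# Placeholder author names for illustration.
pmids = find_candidate_publications("Smith JA", "Jones RB")
print(len(pmids), "candidate records to review manually")
```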

Because abstract categorization could be liable to misclassification, we stipulated that 2 reviewers independently categorize each abstract. We then calculated Cohen kappa values for each of the abstract categories as a measure of inter-rater agreement, to ensure that our categorization of abstracts was reliable. Using guidelines published by Landis and Koch, a kappa statistic of 0-0.20 indicates “slight” agreement, 0.21-0.40 “fair” agreement, 0.41-0.60 “moderate” agreement, 0.61-0.80 “substantial” agreement, and 0.81-1.00 “almost perfect” agreement.10  The publication rate was compared across abstract categories using χ2 tests. Finally, the mean impact factor of journals publishing abstracts categorized as positive was compared with that of journals publishing abstracts categorized as negative or not stated using the Student t test. All statistical analyses were conducted using SPSS Version 18 software for Mac OS X. This project involved analysis of publicly available data and therefore did not require research ethics board approval.
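For readers who wish to reproduce this style of analysis outside SPSS, the sketch below shows roughly equivalent open-source calls (scipy and scikit-learn; an assumption, since the study itself used SPSS): a Cohen kappa for inter-rater agreement, a χ2 test on a publication-by-results contingency table built from the counts in Table 1, and a two-sample t test for impact factors. The reviewer labels and impact-factor values are placeholders.

```python
# Rough open-source equivalents of the SPSS procedures described above
# (illustrative only; counts come from Table 1, other values are placeholders).
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind
from sklearn.metrics import cohen_kappa_score

# Inter-rater agreement on direction of results (labels are invented).
rater1 = ["positive", "negative", "not stated", "positive", "positive"]
rater2 = ["positive", "not stated", "not stated", "positive", "negative"]
kappa = cohen_kappa_score(rater1, rater2)

# Publication rate by direction of results: rows are positive, negative,
# not stated; columns are published vs not published (counts from Table 1).
counts = np.array([
    [164, 327 - 164],
    [11,  33 - 11],
    [42,  140 - 42],
])
chi2, p_chi2, dof, expected = chi2_contingency(counts)

# Mean impact factor of journals publishing positive vs negative/not-stated
# studies, compared with a Student t test (impact factors are placeholders).
if_positive = [9.9, 6.5, 3.0, 10.6, 4.5]
if_other = [4.7, 2.2, 5.9, 3.5]
t_stat, p_t = ttest_ind(if_positive, if_other)

print(f"kappa={kappa:.2f}  chi2 p={p_chi2:.4g}  t-test p={p_t:.3f}")
```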

A total of 501 abstracts were presented at the 2006 tandem meeting. The categorization of abstracts is illustrated in Table 1. The majority of abstracts (381 of 501, 76.0%) were from single institutions. Abstracts were primarily clinical in nature, with 169 (169 of 501, 33.7%) categorized as clinical retrospective and 124 (124 of 501, 25.0%) as clinical prospective studies. Eighty-four studies were categorized as basic science studies (84 of 501, 16.8%). There were smaller numbers of case reports (21 of 501, 4.2%) and translational studies (40 of 501, 8.0%). Abstracts that were primarily descriptive in nature were categorized as “other” (62 of 501, 12.3%). Abstract results were categorized as positive in 65.3% (327 of 501) and negative in 6.6% (33 of 501), and 27.9% (140 of 501) failed to demonstrate a clear direction of result. The kappa statistic for inter-rater reliability was 0.457 for number of centers, 0.657 for study type, and 0.384 for direction of study results. Most abstracts did not include data on funding status (471 of 501, 94.4%), so this information was not included in our analysis.

Table 1
Rate of publication and abstract category for abstracts presented at the 2006 CIBMTR/ASBMT tandem meeting

Category                                 Abstracts, n (%)    Publications, n (%)    P*
Total                                    501                 217 (43.3)
Number of centers
    Single center                        381 (76.0)          153 (40.2)             <.001
    Multicenter                          101 (20.2)          58 (57.4)
    Not stated                           19 (3.8)            6 (31.6)
Study type
    Clinical†                            294 (58.7)          141 (48.0)             <.001
    Basic science‡                       124 (24.8)          70 (56.5)
    Descriptive/case report              83 (16.6)           6 (7.2)
Presentation format
    Oral                                 55 (11.0)           36 (65.4)              .001
    Poster                               446 (89.0)          181 (40.1)
Sample size (clinical studies only)
    >40                                  141 (28.1)          73 (51.8)              .240
    <40                                  145 (28.9)          65 (44.8)
Study results
    Positive                             327 (65.3)          164 (50.2)             .001
    Negative                             33 (6.6)            11 (33.3)
    Not stated                           140 (27.9)          42 (30.0)
*Difference in publication rate between abstract categories using χ2 testing.
†Includes clinical trials and observational studies.
‡Includes translational science.

Of the 501 abstracts, 217 were eventually published, for an overall publication rate of 43%. The median time from presentation at the meeting to publication was 19 months. Multicenter studies were more likely to be published than single-center studies (57.4% vs 40.2%, P < .001). Clinical studies (141 of 294, 48.0%) and basic science research (70 of 124, 56.5%) were more likely to be published than descriptive studies or case reports (6 of 83, 7.2%; P < .001). Positive studies were more likely to be published (164 of 327, 50.2%) than negative studies (11 of 33, 33.3%) or studies without a clear direction of result (42 of 140, 30.0%; P < .001).

When this analysis was restricted to clinical studies, these findings persisted: 49.8% of clinical studies categorized as positive were published (100 of 201), compared with 24% of negative clinical studies (6 of 25; P = .047). In addition, there was no statistically significant difference in publication rate between prospective and retrospective clinical studies: 45.6% of retrospective clinical studies were published (77 of 169), compared with 50.4% of prospective clinical studies (62 of 123).

Abstracts presented as oral presentations were more likely to be published (36 of 55, 65.4%) than abstracts presented as posters (181 of 446, 40.1%). The median number of subjects enrolled in clinical studies was 40 (range, 1-6547). Clinical abstracts with more than 40 subjects were no more likely to be published than abstracts with fewer than 40 subjects: 44.8% of smaller studies (N < 40) were published (65 of 145), compared with 51.8% of larger studies (N > 40; 73 of 141; P = .240). Therefore, sample size was not significantly associated with publication.

The mean impact factor of journals publishing positive studies was 6.92 compared with 4.30 for journals publishing negative abstracts or abstracts without a clear result (P = .02). The majority of the abstracts published were found in 1 of 3 journals: Biology of Blood and Marrow Transplantation (19.8% of published abstracts), Blood (17.5%), and Bone Marrow Transplantation (15.2%). The remaining 47.5% of published abstracts were found in 51 different journals, with no single journal accounting for more than 3% of published abstracts.

In this review of abstracts presented at the 2006 ASBMT/CIBMTR tandem meeting, abstracts with positive results were more likely to be published than abstracts with negative results, a finding consistent with publication bias. Moreover, abstracts with positive results were more likely to be published in journals with a higher impact factor, suggesting that these studies are more likely to be read by clinicians and researchers. Clinical studies and basic science studies were more likely to be published than other forms of research, as were multicenter studies. The publication rate of 43% is similar to that in previous studies,4  but higher than the rate noted in our pilot study. Thus, most research presented at major scientific meetings does not go on to full publication in a peer-reviewed format, and those who attend such meetings should exercise caution when drawing conclusions from abstracts until the research appears in a formal, peer-reviewed publication.

A significant limitation of our study was the subjective nature of some abstract categories. In particular, because abstracts rarely stated clearly defined hypotheses, we often had to rely on more subjective criteria to determine whether a study was positive or negative. Inter-rater reliability would be classified as fair for the direction of results, moderate for the number of centers participating in the research, and substantial for study type.10  This limits the strength of the conclusions we can draw from our results. We strongly suspect that inter-rater reliability would have been higher had clearly defined hypotheses, objectives, and results been required in the formatting of the original abstracts. We also noted that information on the funding source of the research was generally lacking from the abstracts. Because a substantial proportion of abstracts will never be published in manuscript form, the abstract may serve as the only long-term accessible record of the research. Therefore, structured abstracts should be encouraged, including information on the source of funding and other potential conflicts of interest; this is becoming standard at larger meetings.11 

There may be multiple reasons why studies initially presented in abstract form are not subsequently published.12  If a study's results are negative, the authors' enthusiasm for proceeding to publication might be lower. In addition, journal editors might be less likely to publish negative results; editors may perceive a lack of interest among their readership or be concerned that negative results will attract fewer citations and thus lower the journal's impact factor. It is also possible that some research presented at major scientific meetings is intended only to generate hypotheses and discussion, with no intent of ever publishing the results in a peer-reviewed format. We also could not exclude funding status as a potential cause of publication bias, because few abstracts included details on funding. Finally, a study's results might be negative because of poor study design, low patient accrual, or insufficient sample size; however, we did not find a higher rate of publication among clinical studies with larger sample sizes. In the current study, we were unable to determine the reasons why abstracts did not proceed to publication, and we recommend that future studies examine specific reasons for nonpublication.

The results of the present study contradict the results of our earlier pilot study, which did not find an increased publication rate in positive studies.7  However, our initial study had a much smaller sample size and may have been underpowered to detect significant differences in publication rates. The overall publication rate in the current study is much higher than that in the previous one (43% vs 30%). This discrepancy in publication rate may reflect the difference in the nature of research presented at each meeting (international and national meetings, respectively).

Registration is now a mandatory requirement for controlled clinical trials.13  This strategy is expected to reduce the nonpublication of clinical trials with negative results. A similar process has recently been proposed for systematic reviews.14  However, no such requirement exists for other types of research, such as observational studies. This is a particular issue for specialty areas such as BMT, in which clinical practice may be heavily dependent on retrospective reviews and small trials conducted outside of the realm of controlled clinical trials.15 

In conclusion, we found that publication bias was present in the field of BMT, because positive studies were more likely to be published than negative studies. We also found a previously unrecognized additional source of publication bias, wherein positive studies were more likely to be published in journals with a higher impact factor. Readers should be aware of these observations when using the medical literature. The majority of abstracts presented at major scientific meetings will not go on to full publication in peer-reviewed journals, so users of the medical literature should exercise caution in drawing significant conclusions from abstracts alone. We recommend that full publication of negative results be actively encouraged by study sponsors, scientific organizations, and journals. In addition, because the abstract may be the only permanent record of research, we recommend the use of structured abstracts, including details about funding status.

The publication costs of this article were defrayed in part by page charge payment. Therefore, and solely to indicate this fact, this article is hereby marked “advertisement” in accordance with 18 USC section 1734.

The authors thank K. Morcombe, K. Ramesar, and W. Watral for their assistance in reviewing abstracts.

Contribution: K.P. designed and performed the research, collected and analyzed the data, performed the statistical analysis, and wrote the manuscript; M. S. designed and performed the research, reviewed the manuscript, and was the overall project supervisor; and C.R., G.D.E.C., R.K., T.R., D.S., J.M., and D.W. performed the research and reviewed the manuscript.

Conflict-of-interest disclosure: The authors declare no competing financial interests.

Correspondence: Dr Kristjan Paulson, Leukemia/BMT Fellow, University of Manitoba, ON2050, Cancer Care Manitoba, R3E OV9, Winnipeg, MB; e-mail: umpaul02@cc.umanitoba.ca.

1. Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA. 1990;263(10):1385-1389.
2. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2(8):e124.
3. De Bellefeuille C, Morrison CA, Tannock IF. The fate of abstracts submitted to a cancer meeting: factors which influence presentation and subsequent publication. Ann Oncol. 1992;3(3):187-191.
4. Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database Syst Rev. 2007;(2):MR000005.
5. Ramsey S, Scoggins J. Commentary: practicing on the tip of an information iceberg? Evidence of underpublication of registered clinical trials in oncology. Oncologist. 2008;13(9):925-929.
6. von Elm E, Costanza MC, Walder B, Tramèr MR. More insight into the fate of biomedical meeting abstracts: a systematic review. BMC Med Res Methodol. 2003;3:12.
7. Saeed M, Paulson K, Lambert P, Szwajcer D, Seftel M. Publication bias in blood and marrow transplantation. Biol Blood Marrow Transplant. 2011;17(6):930-934.
8. Krongold R. Proceedings of the 2006 ASBMT/CIBMTR annual meeting; February 16-20, 2006; Honolulu, Hawaii. Philadelphia, PA: Elsevier; 2006:1-181.
9. Thomson Reuters. Journal Citation Reports: ISI Web of Knowledge. Accessed January 25, 2011.
10. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159.
11. American Society of Hematology. Information for Abstract Authors. Accessed October 9, 2011.
12. Weber EJ, Callaham ML, Wears RL, Barton C, Young G. Unpublished research from a medical specialty meeting. JAMA. 1998;280(3):257-259.
13. DeAngelis CD, Drazen JM, Frizelle FA, et al. Is this clinical trial fully registered? JAMA. 2005;293(23):2927-2929.
14. The PLoS Medicine Editors. Best practice in systematic reviews: the importance of protocols and registration. PLoS Med. 2011;8(2):e1001009.
15. Pasquini MC, Wang Z, Horowitz MM, Gale RP. 2010 report from the Center for International Blood and Marrow Transplant Research (CIBMTR): current uses and outcomes of hematopoietic cell transplants for blood and bone marrow disorders. Clin Transpl. 2010:87-105.