Abstract 4618

Aim

The Adolescent Pediatric Pain Tool (APPT) captures the patient's self-report of pain and has been studied and validated as a pain assessment tool in pediatric patients as young as 8 years of age. The tool consists of three components: a pictorial body outline (pain location), a Word Graphic Rating Scale (WGRS; pain intensity), and a qualitative descriptive word list (pain quality and pain pattern). Although straightforward for the patient to complete, interpretation of the APPT by the medical professional is subject to individual reader bias. This bias is particularly important when the APPT is used as a research tool in studies of sickle cell disease (SCD) pain, where data must be transcribed into a form suitable for analysis across patients. The purpose of this study was to determine the accuracy of APPT pain analysis and to ensure meaningful data collection in a research setting.

Methods

APPTs submitted by 102 adolescent and young adult patients with SCD (mean age 14.2 years, range 8 to 27; 48 female, 54 male; 75 SS, 24 SC, 3 S-beta thalassemia) were independently analyzed and entered into Excel spreadsheets by two trained raters. For the analysis, the anterior and posterior body outlines were divided into a total of 43 body ‘segments’. Each mark drawn on the body outline was considered a painful ‘site’. Each rater recorded: 1) the number of painful sites; 2) the body segments involved in each painful site; 3) a numeric estimate (ranging from 0.0 to 10.0) of the WGRS; and 4) the word descriptors selected by the patient. The % disagreement between raters was calculated for each of the 3 APPT components: location, intensity, and quality.
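For illustration only, the sketch below shows one way the between-rater % disagreement described above could be tallied in Python; the field names, data layout, and agreement rules (e.g., treating WGRS estimates within 0.2 of each other as agreement) are assumptions made for this example, not the study's actual procedure.

    # Minimal sketch of a between-rater disagreement tally (illustrative only).
    N_SEGMENTS = 43          # anterior + posterior body segments combined
    WGRS_TOLERANCE = 0.2     # assumed threshold for calling two intensity estimates concordant

    def location_disagreement_pct(segments_rater1, segments_rater2):
        # Percent of the 43 segments on which the raters disagree about
        # whether the patient marked pain there.
        mismatched = set(segments_rater1) ^ set(segments_rater2)
        return 100.0 * len(mismatched) / N_SEGMENTS

    def intensity_agrees(wgrs1, wgrs2, tolerance=WGRS_TOLERANCE):
        # True if the two numeric WGRS estimates fall within the tolerance.
        return abs(wgrs1 - wgrs2) <= tolerance

    def quality_disagrees(words_rater1, words_rater2):
        # True if the raters transcribed different sets of word descriptors.
        return set(words_rater1) != set(words_rater2)

    # Example: one patient's APPT as entered independently by two raters.
    r1 = {"segments": {3, 4, 17}, "wgrs": 6.4, "words": {"aching", "throbbing"}}
    r2 = {"segments": {3, 4},     "wgrs": 6.5, "words": {"aching", "throbbing"}}
    print(location_disagreement_pct(r1["segments"], r2["segments"]))  # ~2.3 (1 of 43 segments)
    print(intensity_agrees(r1["wgrs"], r2["wgrs"]))                   # True
    print(quality_disagrees(r1["words"], r2["words"]))                # False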

Results

Of the 102 APPTs submitted, 96 were found by the raters to be sufficiently complete for analysis. Pain location: across the 43 potentially involved body segments, the two raters disagreed in only 1.01% of instances. These disagreements appeared to arise primarily from ambiguity in interpreting the patient's drawing (e.g., a circle not closed, or a line circling an area twice). Pain intensity: for 83% of patients, the two raters' estimates of pain intensity differed by 0.2 or less. Intensity errors arose primarily from irregularities in the patient's marking of the pain scale (62.5% of patients did not follow instructions in marking the scale, i.e., they made a circle or an “X” instead of a vertical line), as well as from differences in the raters' interpretation of the mark (15.6% disagreement between raters on whether or not the scale was marked per instructions). Pain quality and pattern: disagreement between raters in word selections occurred in 16.7% of APPTs. Of these, the two raters differed by 1 word, 2 words, or 3 words in 53.8%, 23.1%, and 23.1% of instances, respectively. Quality discrepancies frequently arose from ambiguities in the patient's recording of words (such as incompletely circling a word, or placing a mark next to the word instead of circling it) and from the raters' interpretation of that recording (such as which markings to consider purposeful selection of a word), but may also have arisen from data transcription errors.

Conclusion

In adapting the APPT for use as a research tool, multiple sources of error need to be addressed and minimized. These include ambiguities introduced by the patient (which may be reduced by careful instruction or practiced use of the tool by the patient), the algorithms used by raters to interpret results (which may require detailed instructions for dealing with ambiguous results), and data transcription errors (which may be identified and corrected by double data entry).
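As an illustration of the double data entry check mentioned above, the sketch below compares two independently entered transcriptions exported as CSV files and flags any mismatched cells for review against the original APPT form; the file names and column layout are hypothetical, not those used in the study.

    import csv

    def load_entries(path):
        # Read one rater's transcription, keyed by a hypothetical patient_id column.
        with open(path, newline="") as f:
            return {row["patient_id"]: row for row in csv.DictReader(f)}

    def flag_discrepancies(path1, path2):
        # Print every field that differs between the two independent entries.
        entries1, entries2 = load_entries(path1), load_entries(path2)
        for pid in sorted(set(entries1) & set(entries2)):
            for field, value1 in entries1[pid].items():
                value2 = entries2[pid].get(field)
                if value1 != value2:
                    print(f"patient {pid}, field '{field}': {value1!r} vs {value2!r}")

    flag_discrepancies("rater1_entries.csv", "rater2_entries.csv")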

Disclosures:

No relevant conflicts of interest to declare.

Author notes

* Asterisk with author names denotes non-ASH members.
