The accuracy of using open-ended questions in structured conversations with children
Systematic review
When there is a suspicion of abuse, neglect or psychosocial problems in children, it is often necessary to interview children. But how to assess the credibility (truthfulness) of children’s statements is a difficult question. We aimed to assess the accuracy of using open-ended questions versus other types of questions in structured conversations with children.
Key message
Methods
We conducted a systematic review that compared the accuracy (truthfulness) of children’s statements elicited by open-ended questions versus more closed-ended types of questions.
Results
We included seven field studies. They were performed in England, Israel, the USA, and Sweden and published between 1999 and 2009. The studies included 239 children aged 3-16 years. All studies were based on investigative interviews of children who were suspected victims of sexual abuse.
We grouped the seven studies into three types according to the methods used to judge whether the children’s statements were truthful or not: 1) CBCA (criteria-based content analysis) score, 2) contradictions, 3) confirmed allegations and confessions. The results showed that using open-ended questions elicited more accurate (truthful) information:
- All four studies that used CBCA score as their proxy for the truth found that open-ended questions retrieved more truthful descriptions than other types of questions (in one study, this held only for older children).
- The one study that used children’s self-contradictions as the proxy for the truth found that invitational (open-ended) questions retrieved more truthful descriptions than more focused questions.
- One of the two studies that used confirmed cases and perpetrator confessions as the proxy for the truth found that open-ended questions retrieved more accurate information than directive, option-posing or suggestive questions. The other study did not find this difference.
Findings from the seven included studies suggest that open-ended questioning yields more credible information than focused questioning. However, more research is needed to draw firm conclusions.
Summary
Background
Preschool and school employees have extensive contact with children over long periods of time. This group of professionals therefore plays a crucial role in recognizing and responding to signs indicative of abuse, neglect and psychosocial problems in children, thereby ensuring children receive the support they need at an early stage. Addressing such concerns can be challenging and often requires eliciting narrative accounts from children through questioning. However, truthful answers are not guaranteed: the framing of questions can affect children’s memory and the risk of false disclosures. While many daycares and schools have written routines for handling suspicions of abuse and neglect, first-line child service providers express a need for training in how to assess signs and how to talk with children about difficult issues.
Standardized conversation guides can support preschool employees, school employees and similar groups of professionals in confirming or disconfirming whether there is cause for concern. Various guidelines, reviews and “best-practice” documents address how to recognize and respond to abuse and neglect in children and youth. They all encourage concerned adults to explore their worries with children and youth by using open-ended questions. Thus, open-ended questions in structured conversations with children appear to be considered best practice, but it is unclear whether open-ended questions elicit more truthful disclosure or recall of events compared to more closed questions. We aimed to examine the extent to which the recommendation of open-ended questions in structured conversations with children is substantiated by research.
Objective
Our review question was: what is the accuracy of open-ended prompts, compared to more closed questions, in structured conversations between children and professionals responsible for children, aimed at uncovering abuse, neglect or psychosocial problems?
Method
We conducted a systematic review that compared the accuracy (truthfulness) of children’s statements elicited by open-ended questions versus more closed questions. Our methods were based on the Cochrane Handbook for Systematic Reviews of Interventions, and because our review question related to accuracy, we also used the Cochrane Handbook for Diagnostic Test Accuracy Reviews. A protocol, which the project team and the commissioner discussed and agreed on, was prepared and published prior to undertaking the review.
We searched for and included studies according to the following inclusion criteria:
Population: First-line child service providers, including employees at daycares, primary- and secondary schools, and other professionals who have daily contact with and responsibility for children. Studies aimed at assessing the accuracy of conversation methods for police or child welfare services were also eligible.
Index test: Open-ended prompts or questions.
Comparison: Interview or conversation protocols or guides with fewer or no open-ended questions.
Reference: Methods used to ascertain the truth or methods thought to be a proxy for the truth, e.g. investigations, convictions, confessions or number of self-contradictions.
Outcome: Accuracy of children’s recall regarding an incident/exposure/event/situation/state of being (e.g. depressed). Accuracy was interpreted as the chance of receiving either a true positive response (the child truthfully discloses a real event) or a true negative response (the child truthfully discloses that an event did not take place).
Study design: Systematic reviews, validation studies.
Studies were ineligible if they did not include a reference standard or if children were interviewed about staged events.
An information specialist developed and conducted systematic searches for literature in twelve electronic literature databases. We also searched Google Scholar and the reference lists of relevant publications, and contacted experts in the field. Two review authors independently performed an eligibility assessment of all titles and abstracts, and subsequently the relevant full texts, from the systematic searches. One researcher assessed the risk of bias and extracted data from the included studies, and another researcher checked the information for accuracy and completeness. For our risk of bias assessment, we used an adapted version of the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. Due to great variability in setting, study design and reporting of outcomes, it was not possible to conduct meta-analyses. Therefore, we described the results of each included study narratively. The eligible studies did not report data in a way that allowed for calculations of sensitivity and specificity, and we therefore decided not to assess the certainty of evidence.
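For context, the sensitivity and specificity that could not be calculated are the standard diagnostic accuracy measures, stated here in terms of the outcome definition used in this review (this is the conventional formulation, not one taken from the included studies): a true positive (TP) is a child truthfully disclosing a real event, and a true negative (TN) is a child truthfully denying an event that did not occur.

```latex
% Sensitivity: proportion of real events that children truthfully disclose
\mathrm{Sensitivity} = \frac{TP}{TP + FN}

% Specificity: proportion of non-events that children truthfully deny
\mathrm{Specificity} = \frac{TN}{TN + FP}
```

Computing these would require the studies to report, per question type, both the number of true and false disclosures (TP, FP) and the number of true and false denials (TN, FN), which the included field studies did not provide.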
Results
The literature searches identified 19,621 unique records, of which we assessed 362 full-text publications. We included seven field studies. The studies were performed in England, Israel, the USA, and Sweden and published between 1999 and 2009. They included a total of 239 children aged 3-16 years (study means 6.5-11.8 years), and all were based on criminal investigative interviews of children following allegations of child sexual abuse (one study concerned obscene phone calls).
All in all, we assessed the seven included studies as having low risk of systematic error. However, one study was prone to risk of bias associated with participant selection, and for three studies there was some concern about the reference standard. With respect to applicability (the extent to which the reported results are generalizable to the main aim of the review), there are concerns about the selection of participants and the setting of the interviews, because all studies concerned forensic interviewing in alleged sexual abuse cases.
The seven included studies used various sources of information to validate (establish accuracy of) the children’s accounts: medical evidence, suspect confessions, witness statements, recantations, polygraph examinations, physical evidence, and statement analysis (criteria-based content analysis, CBCA, scores). We grouped the studies into three types according to the methods used to judge whether the children’s statements were truthful or not: 1) CBCA score, 2) contradictions, 3) confirmed allegations and confessions. Overall, the results showed that open-ended probes appeared to be more likely to elicit accurate (truthful) responses from the children:
- All four studies that used CBCA score as their proxy for the truth found that open-ended questions retrieved more truthful descriptions than other types of questions (in one study, only among the oldest children).
- The one study that used children’s self-contradictions as the proxy for the truth found that invitational (open-ended) questions retrieved more truthful descriptions than more focused questions.
- One of the two studies that used confirmed cases and perpetrator confessions as the proxy for the truth found that open-ended questions retrieved more accurate information than directive, option-posing or suggestive questions. The other study found no significant relationship between type of questions and accuracy.
Conclusion
How to assess the credibility of children’s statements is a difficult question, and examining the accuracy of statements obtained in field studies of interviews with children is nearly impossible. Yet, we identified seven field studies which all assessed the veracity of the information obtained using independent indices of truthfulness, specifically statement analysis (CBCA), medical and physical evidence, suspect confessions, witness statements, recantations and polygraph examinations. Overall, the results of these studies support the usefulness of open-ended questions for eliciting potentially truthful (forensic) information. In contrast, closed questions, option-posing questions, and suggestive questions elicited more false information. Thus, the long-standing proposition to use open-ended questions in structured conversations with children is to a degree substantiated by this body of research.
However, whether the results of these studies are generalizable to conversations between a child and a first-line child service provider (such as a teacher), about neglect or psychosocial problems, taking place in a familiar environment, is plausible but uncertain. There is a gap in evidence on the accuracy of open-ended questions in structured conversations between first-line child professionals and children.
Given that open-ended questioning strategies seem to yield more credible information than focused questioning, there is some support for recommending open-ended questions.