Double Screening in Systematic Reviews

As anyone who has worked on a systematic review (SR) knows, screening references for the study selection stage of the SR process can be quite time-consuming and labor-intensive. Ideally, the screening should be done by two people working independently, so it is a lot of work – times two! It’s not surprising, therefore, that many researchers wonder:

  • whether they can get away with single screening
  • whether part, or all, of the screening stage can be automated

Single Screening vs. Double Screening

An August 2020 paper by Mahtani et al. explores the latest evidence on this topic (see some examples listed below) and summarizes the guidance from leading evidence synthesis organizations/producers – the Cochrane Collaboration, the Joanna Briggs Institute, the Campbell Collaboration, and the Institute of Medicine (US) Committee on Standards for Systematic Reviews of Comparative Effectiveness Research – all of which recommend, in their handbooks and documentation, that at least two people working independently be involved in the screening process.

Mahtani KR, Heneghan C, Aronson J. Single screening or double screening for study selection in systematic reviews? BMJ Evid Based Med. 2020 Aug;25(4):149-150. doi: 10.1136/bmjebm-2019-111269. Epub 2019 Nov 13. PMID: 31722997.

Waffenschmidt S, Knelangen M, Sieben W, Bühn S, Pieper D. Single screening versus conventional double screening for study selection in systematic reviews: a methodological systematic review. BMC Med Res Methodol. 2019 Jun 28;19(1):132. doi: 10.1186/s12874-019-0782-0. PMID: 31253092; PMCID: PMC6599339.

Edwards P, Clarke M, DiGuiseppi C, Pratap S, Roberts I, Wentz R. Identification of randomized controlled trials in systematic reviews: accuracy and reliability of screening records. Stat Med. 2002 Jun 15;21(11):1635-40. doi: 10.1002/sim.1190. PMID: 12111924. 

Conventional vs. Automated or Semi-Automated Screening

Quite a bit of research is currently being done on automating steps of the systematic review process, particularly on using AI/machine learning or text mining/natural language processing to replace the second reviewer (i.e., semi-automated screening) and/or to reduce the number of records that need to be screened. Several software tools already offer relevance prediction/screening prioritization capabilities (for example, Abstrackr, DistillerSR/DistillerAI, EPPI-Reviewer, and RobotAnalyst), but their performance is still largely under evaluation.
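To make the idea concrete, here is a minimal sketch (in Python, using scikit-learn) of the kind of relevance prediction these tools perform: a classifier is trained on the records the reviewers have already screened and then ranks the remaining records so the most likely includes surface first. This illustrates the general technique only – it is not the actual algorithm of any particular tool, and the example records and labels are made up.

  # Minimal sketch of relevance prediction for screening prioritization.
  # Illustrative only; records and labels below are invented.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression

  # Records the reviewers have already screened (1 = include, 0 = exclude)
  screened_texts = [
      "randomized controlled trial of drug X for hypertension",
      "case report: rare dermatologic reaction to drug Y",
      "RCT comparing drug X with placebo in adults",
      "narrative review of hypertension management",
  ]
  screened_labels = [1, 0, 1, 0]

  # Records still waiting to be screened
  unscreened_texts = [
      "double-blind randomized trial of drug X in elderly patients",
      "editorial: the future of hypertension care",
  ]

  vectorizer = TfidfVectorizer()
  X_train = vectorizer.fit_transform(screened_texts)
  model = LogisticRegression().fit(X_train, screened_labels)

  # Rank unscreened records so the likeliest includes are screened first
  scores = model.predict_proba(vectorizer.transform(unscreened_texts))[:, 1]
  for score, text in sorted(zip(scores, unscreened_texts), reverse=True):
      print(f"{score:.2f}  {text}")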

As the technology improves, it is likely that leaders in the evidence synthesis field will one day accept the use of automated screening tools for study selection in systematic reviews, but we are not there yet. Progress in this area is already being made, however, as demonstrated by the creation and efforts of the International Collaboration for the Automation of Systematic Reviews (ICASR):

Beller E, Clark J, Tsafnat G, Adams C, Diehl H, Lund H, Ouzzani M, Thayer K, Thomas J, Turner T, Xia J, Robinson K, Glasziou P; founding members of the ICASR group. Making progress with the automation of systematic reviews: principles of the International Collaboration for the Automation of Systematic Reviews (ICASR). Syst Rev. 2018 May 19;7(1):77. doi: 10.1186/s13643-018-0740-7. PMID: 29778096; PMCID: PMC5960503.

O’Connor AM, Glasziou P, Taylor M, Thomas J, Spijker R, Wolfe MS. A focus on cross-purpose tools, automated recognition of study design in multiple disciplines, and evaluation of automation tools: a summary of significant discussions at the fourth meeting of the International Collaboration for Automation of Systematic Reviews (ICASR). Syst Rev. 2020 May 4;9(1):100. doi: 10.1186/s13643-020-01351-4. PMID: 32366302; PMCID: PMC7199360.

Be sure to check out the MSK Library’s Systematic Review Service LibGuide or Ask Us for more information if you are thinking about embarking on a systematic review project.

Covidence: Better SR Data Quality & Integrity

The Covidence systematic review (SR) data management software is essentially a research electronic data capture tool, similar to REDCap. In an SR, however, the “study population” consists not of patients but rather of literature database search results (i.e., references), while the “survey” administered to each “study subject” consists of the inclusion and exclusion criteria.

Unlike in a typical clinical study, a unique feature of the systematic review study design is that all the information is (ideally) captured in duplicate, by two human screeners/reviewers working independently of each other. In other words, the same “survey” is administered twice to the same “study subject,” and the two data captures are then compared to identify any disagreements.

This is where REDCap differs in its functionality from Covidence. Covidence not only documents the decisions of the two reviewers but also compares them, and then automatically separates out any conflicts that need to be resolved – providing built-in quality control.

In fact, Covidence requires that reviewers address all screening discrepancies before allowing them to move on to the next stage of the review. In the full-text review stage, where explanations for exclusions must be provided, even if both reviewers vote to exclude an item, Covidence will flag any discrepancy in their exclusion reasons and force the team to resolve the conflict before being allowed to proceed.

Data integrity features are also prominent in Covidence. For example, reviewers have the ability to reverse a decision (i.e., to make changes to collected data); however, if the second reviewer has already voted on that item, both reviewers will have to re-screen the record from the beginning so that both reviewers’ judgements are re-captured (i.e., this undoes all of the votes associated with the reference from that stage).
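As a rough illustration, here is a toy Python sketch of the dual-screening bookkeeping described above – the conflict detection, the exclusion-reason check, and the vote reset. This is not Covidence’s actual implementation or API; the function and field names are invented.

  def screening_status(vote_a, vote_b, reason_a=None, reason_b=None):
      """Return the record's status given two reviewers' votes.

      vote_a / vote_b: "include", "exclude", or None (not yet voted).
      reason_a / reason_b: exclusion reasons (full-text stage only).
      """
      if vote_a is None or vote_b is None:
          return "awaiting second vote"
      if vote_a != vote_b:
          return "conflict"    # decisions disagree
      if vote_a == "exclude" and reason_a != reason_b:
          return "conflict"    # same decision, different exclusion reasons
      return "consensus"

  def reverse_decision(record):
      # Reversing a decision after both reviewers have voted undoes all
      # the votes for that stage: both must re-screen from the beginning.
      record["vote_a"] = record["vote_b"] = None
      record["reason_a"] = record["reason_b"] = None

  print(screening_status("include", "exclude"))              # conflict
  print(screening_status("exclude", "exclude",
                         "wrong population", "wrong drug"))  # conflict
  print(screening_status("include", "include"))              # consensus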

Also, to minimize the introduction of bias into the review process, the individual decisions made by the two reviewers are hidden from the team, so that if a conflict has to be resolved by a third party, that person will not be influenced by knowing who made which decision (they might, for example, unconsciously side with the more senior reviewer). While a specific batch of records cannot be assigned to a particular reviewer, a particular task in the review process can be assigned to a specific team member (for example, resolving conflicts may be set to be handled solely by the project PI).

Another feature of Covidence that leads to better data is its quality assessment and data extraction process. If two reviewers are assessing each study for bias, a comparison of assessments and a consensus of judgements will be needed to complete this stage. Data extraction completed by two reviewers independently is likewise followed by a consensus step. If the consensus step is skipped, the data will appear blank in the export, as only the “consensus judgements of data extraction” can be exported to Excel. In other words, if the data is not first “cleaned” by the team, they will literally not be able to get it out of Covidence.

Although Covidence does not include any data visualization or statistical analysis functionality, it does allow you to export the data as a spreadsheet: “The goal of this format is to facilitate import of data extracted in Covidence into statistical programs such as Stata, R, or SPSS.”
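The following hypothetical Python sketch ties the two points above together: only consensus judgements are written to the export (so a skipped consensus step surfaces as blank cells), and the resulting spreadsheet can then be read into a statistical environment. Pandas stands in here for the Stata/R/SPSS import mentioned in the quote; the field and column names are invented, not Covidence’s real export format.

  import csv

  records = [
      {"study": "Smith 2019",
       "consensus": {"sample_size": "120", "outcome": "systolic BP"}},
      {"study": "Jones 2021", "consensus": None},  # consensus step skipped
  ]

  with open("extraction_export.csv", "w", newline="") as f:
      writer = csv.DictWriter(f, fieldnames=["study", "sample_size", "outcome"])
      writer.writeheader()
      for rec in records:
          row = {"study": rec["study"]}
          row.update(rec["consensus"] or {})  # blanks where no consensus
          writer.writerow(row)

  # Read the export back in for analysis (blank cells become NaN)
  import pandas as pd
  print(pd.read_csv("extraction_export.csv"))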

To learn more about Covidence, register for an upcoming workshop or Ask Us.

Author Names: Manuscript Submission & PubMed Indexing

Anyone who has ever tackled the task of compiling a comprehensive CV – say, of a researcher with a common last name who has published over multiple decades while working at several different institutions – knows that this is no easy task. Below are some reasons for the complexity, and some solutions for making the process more accurate and less daunting for everyone.

How did we get here?

Everyone involved in the research publication process has contributed at some time and in some way to this problem:

  • Authors who haven’t been consistent about the formatting of their name (or of their institutional affiliation) when submitting their manuscripts for publication, or who have not provided their ORCID iD when prompted by the manuscript submission system;
  • Publishers who don’t provide database producers with full author name information (for example, supplying only initials for authors’ first and middle names), only ask for one co-author’s ORCID iD, or do not require an ORCID iD at all;
  • Database producers, like NLM (producer of PubMed), who may not have been consistent over time in how they add author information to their database records (more on that below).

And as with any structured database, information retrieval is only ever as good as the quality and extent of the information contained in the database. In the case of PubMed, the quality control at NLM has always been top notch, but the extent of indexing of certain fields (like the Author Name field) has varied over time as its cataloging policies have evolved.

For example:

  • From PubMed’s early years through 1983, all author names on a citation were included;
  • From 1984 through 1995, only the first 10 author names were included (with “et al” added when there were more);
  • From 1996 through 1999, that limit was raised to 25 author names;
  • From 2000 onward, all author names have once again been included;
  • Beginning with articles published in 2002, authors’ full first names (when supplied by the publisher) have also been included in the record and made searchable.

The take-home message from these cataloging details is that searching in PubMed will need to be adjusted accordingly, depending on the publication dates of the author citations that need to be identified. Furthermore, authors should realize that they are very much in control of what information ends up in the PubMed record, since it all starts with the information that they themselves provide at the point of manuscript submission to a journal publisher.
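For instance, a search on the author tag [au] matches the indexed last-name-plus-initials form for all publication years, while the full author name tag [fau] only retrieves records where a full name was indexed (roughly, articles published from 2002 forward). Here is a small Python sketch comparing the two via NCBI’s E-utilities API; the author name used is arbitrary.

  # Compare an initials-based author search with a full-name search in
  # PubMed using NCBI's E-utilities esearch endpoint.
  import requests

  ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

  for term in ("smith j[au]", "smith jane[fau]"):
      resp = requests.get(
          ESEARCH,
          params={"db": "pubmed", "term": term, "retmode": "json", "retmax": 0},
      )
      count = resp.json()["esearchresult"]["count"]
      print(f"{term!r}: {count} citations")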

In fact, a new tool called AuthorArranger has recently been developed by cancer researchers at the National Cancer Institute that can help authors provide more complete and accurate information to publishers at the time of manuscript submission. “AuthorArranger was created by Mitchell Machiela and Geoffrey Tobias in collaboration with the NCI Center for Biomedical Informatics and Information Technology (CBIIT). Support for AuthorArranger comes from the 2018 DCEG Informatic Tool Challenge.”

From their website:

“AuthorArranger is a free web tool designed to help authors of research manuscripts automatically generate correctly formatted title pages for manuscript journal submission in a fraction of the time it takes to create the pages manually. Whether your manuscript has 20 authors or 200, AuthorArranger can save you time and resources by helping you conquer journal title pages in seconds.

Simply upload a spreadsheet containing author details ordered by author contribution, or download AuthorArranger’s easy-to-follow spreadsheet template and populate it with author and affiliation details. Either way, once your author information is uploaded AuthorArranger will allow you to make format choices based on the submission rules of the journal. When finished, you get a downloadable and formatted document that has all your authors and affiliations arranged for journal submission.”

The AuthorArranger tool was featured in a recent Cell Press “CrossTalk” blog post.

For help with Author Name searching, manuscript submission, or training on Updating Scientific CVs – just Ask Us!