Covidence: Better SR Data Quality & Integrity

The Covidence systematic review (SR) data management software is essentially a research electronic data capture tool, similar to REDCap. In an SR, however, the “study population” consists not of patients but of literature database search results (i.e., references), while the “survey” administered to each “study subject” consists of the inclusion and exclusion criteria.

Unlike in a typical clinical study, a unique feature of the systematic review design is that all of the information is (ideally) captured in duplicate, by two human screeners/reviewers working independently of each other. In other words, the same “survey” is administered twice to the same “study subject,” and the two sets of responses are then compared to identify any disagreements.

This is where Covidence’s functionality goes beyond REDCap’s. Covidence not only documents the decisions of the two reviewers but also compares them and automatically separates out any conflicts that need to be resolved, providing built-in quality control.

In fact, Covidence requires that reviewers address all screening discrepancies before they can move on to the next stage of the review. In the full-text review stage, where an explanation must be given for each exclusion, Covidence will flag a discrepancy even when both reviewers vote to exclude an item but give different reasons, and will require the team to resolve the conflict before proceeding.

Data integrity features are also prominent in Covidence. For example, reviewers can reverse a decision (i.e., make changes to collected data); however, if the second reviewer has already voted on that item, both reviewers must re-screen the record from the beginning so that both judgements are re-captured (i.e., reversing a decision undoes all of the votes associated with that reference in that stage).

In addition, to minimize the introduction of bias into the review process, the individual decisions made by the two reviewers are blinded to the team. If a conflict has to be resolved by a third party, that person will not be influenced by knowing who made which decision (for example, by unconsciously siding with the more senior reviewer). While a specific batch of records cannot be assigned to or linked to a particular reviewer, a particular task in the review process can be assigned to a specific team member (for example, conflict resolution may be handled solely by the project PI).

Another feature of Covidence that leads to better data is its quality assessment and data extraction process. If two reviewers are assessing each study for bias, a comparison of the assessments and a consensus on the judgements are needed to complete this stage. Data extraction completed independently by two reviewers is likewise followed by a consensus step. If the consensus step is skipped, the data will appear blank in the export, as only the consensus data extraction judgements can be exported to Excel. In other words, if the data is not first “cleaned” by the team, they simply cannot get it out of Covidence.

Although Covidence does not include any data visualization or statistical analysis functionality, it does allow you to export the data as a spreadsheet. “The goal of this format is to facilitate import of data extracted in Covidence into statistical programs such as Stata, R, or SPSS.”
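As a minimal sketch of that workflow (the file name and column handling below are hypothetical and will depend on your review’s extraction template; Python is used here purely for illustration), an exported Covidence spreadsheet could be loaded for further analysis like this:

```python
# Minimal sketch: load a Covidence data extraction export for analysis.
# The file name and columns are hypothetical; they will depend on the
# extraction template used in your review.
import pandas as pd

# Read the consensus data extraction export downloaded from Covidence
studies = pd.read_csv("covidence_export.csv")

# Confirm what was exported before handing the data to a statistical package
print(studies.shape)             # included studies x extracted fields
print(studies.columns.tolist())  # field names from the extraction template
print(studies.head())            # preview the first few included studies
```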

To learn more about Covidence, register for an upcoming workshop or Ask Us.

Author Names: Manuscript Submission & PubMed Indexing

Anyone who has ever compiled a comprehensive CV (say, for a researcher with a common last name who has published over multiple decades at several different institutions) knows that this is no easy task. Below are some reasons for the complexity, and some solutions for making the process more accurate and less daunting for everyone.

How did we get here?

Everyone involved in the research publication process has contributed at some time and in some way to this problem:

  • Authors who haven’t been consistent about the formatting of their name (or of their institutional affiliation) when submitting manuscripts for publication, or who have not provided their ORCID iD when prompted by the manuscript submission system;
  • Publishers who don’t provide database producers with full author name information (for example, supplying only the authors’ first and middle initials rather than full names), who ask for only one co-author’s ORCID iD, or who do not require an ORCID iD at all;
  • Database producers, like NLM’s PubMed, who may not have been consistent over time in how they add author information to their database records (more on that below).

And as with any structured database, information retrieval is only ever as good as the quality and extent of the information contained in the database. In the case of PubMed, the quality control at NLM has always been top notch, but the extent of indexing of certain fields (like the Author Name field) has varied over time as their cataloging policies have evolved.

For example:

  • For citations of articles published before 2002, PubMed records generally include only an author’s last name and initials.
  • Beginning with articles published in 2002, full author names are included whenever the publisher supplies them, and they can be searched using the [Full Author Name] field tag ([fau]).

The take-home message from these cataloging details is that searching in PubMed needs to be adjusted accordingly, depending on the publication dates of the citations to be identified. Furthermore, authors themselves should realize that they are very much in control of what information ends up in the PubMed record, since it all starts with the information that they provide at the point of manuscript submission to a journal publisher.

In fact, cancer researchers at the National Cancer Institute (NCI) have recently developed a tool called AuthorArranger that can help authors provide more complete and accurate information to publishers at the time of manuscript submission. “AuthorArranger was created by Mitchell Machiela and Geoffrey Tobias in collaboration with the NCI Center for Biomedical Informatics and Information Technology (CBIIT). Support for AuthorArranger comes from the 2018 DCEG Informatics Tool Challenge.”

From their website:

“AuthorArranger is a free web tool designed to help authors of research manuscripts automatically generate correctly formatted title pages for manuscript journal submission in a fraction of the time it takes to create the pages manually. Whether your manuscript has 20 authors or 200, AuthorArranger can save you time and resources by helping you conquer journal title pages in seconds.

Simply upload a spreadsheet containing author details ordered by author contribution, or download AuthorArranger’s easy-to-follow spreadsheet template and populate it with author and affiliation details. Either way, once your author information is uploaded AuthorArranger will allow you to make format choices based on the submission rules of the journal. When finished, you get a downloadable and formatted document that has all your authors and affiliations arranged for journal submission.”
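As a purely hypothetical sketch of what such an upload might contain (AuthorArranger provides its own downloadable template, whose actual headers should be used for a real submission), an author-details spreadsheet could be assembled like this:

```python
# Hypothetical sketch of an author-details spreadsheet for upload.
# Column names and values are illustrative only; use the headers from
# AuthorArranger's own template for a real submission.
import pandas as pd

# Rows are ordered by author contribution (first author first)
authors = pd.DataFrame([
    {"First Name": "Maria", "Last Name": "Garcia",
     "Affiliation": "Memorial Sloan Kettering Cancer Center",
     "ORCID iD": "0000-0000-0000-0000"},
    {"First Name": "John", "Last Name": "Smith",
     "Affiliation": "National Cancer Institute",
     "ORCID iD": "0000-0000-0000-0001"},
])

# Save as a spreadsheet that can then be uploaded
authors.to_csv("author_details.csv", index=False)
```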

The AuthorArranger tool was featured in a recent Cell Press “CrossTalk” blog post.

For help with Author Name searching, manuscript submission, or training on Updating Scientific CVs – just Ask Us!

 

2020 Journal Citation Report Released

The new 2020 Journal Citation Report (JCR) was released on June 29 by Clarivate Analytics, providing the 2019 Impact Factor for journals. The Impact Factor is the number of citations received in a given year by a journal’s content published in the previous two years, divided by the number of citable items the journal published in those two years.
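For illustration, using hypothetical counts (1,200 citations in 2019 to content a journal published in 2017 and 2018, and 150 citable items published in those two years), the 2019 Impact Factor would be:

\[
\mathrm{IF}_{2019} = \frac{\text{citations in 2019 to items published in 2017 and 2018}}{\text{citable items published in 2017 and 2018}} = \frac{1200}{150} = 8.0
\]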

The JCR also ranks journals by subject, enabling us to view journal impact within a specific subject category, such as Oncology. Shown here are the top 10 journals in Oncology out of the 244 listed in this category, along with the number of MSK-affiliated publications for each journal in 2019.

  1. CA: A Cancer Journal for Clinicians (2)
  2. Nature Reviews Clinical Oncology (6)
  3. Nature Reviews Cancer (5)
  4. Lancet Oncology (25)
  5. Journal of Clinical Oncology (60)
  6. Cancer Discovery (20)
  7. Cancer Cell (18)
  8. JAMA Oncology (34)
  9. Annals of Oncology (38)
  10. Molecular Cancer (1)

Contact us to find out more about the JCR.