Coming Soon: EndNote 20’s New Interface

Clarivate Analytics released the latest version of the EndNote citation management software in Fall 2020. The DigITs Technology Division plans to update MSK accounts to EndNote 20 in the early months of 2021. Please keep an eye on the MSK Library’s homepage for an upcoming notification announcing the expected date of the scheduled MSK EndNote 20 update.

Here’s “What’s new in EndNote 20,” according to the vendor (see the 2:01-minute video):

    • New modern interface design
    • Duplicate detection enhancements
    • Improved PDF reading experience
    • Time-saving workflow improvements

If you were a heavy user of the extensive toolbars of buttons and icons in previous EndNote versions, you may miss them in this more minimalist, pared-down interface, which was intentionally designed to be sleeker and more “modern”. Beyond aesthetics, however, the latest version has not changed much in terms of functionality. As the vendor’s comparison table between versions demonstrates, no functionality has actually been taken away.

New features in terms of functionality

Particularly for those who use EndNote to manage citations for systematic review projects, the enhanced duplicate detection will be a welcome addition, with DOIs and PMCIDs now available as optional comparison fields. For those who need to work across multiple libraries simultaneously, the ability to have more than one library open in the same window in EndNote 20 will make switching back and forth between libraries easier. There is also more flexibility in how PDFs stored within EndNote can be viewed and handled.
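
EndNote’s matching logic is proprietary, so purely as an illustration, here is a hypothetical Python sketch of what identifier-based duplicate detection looks like conceptually: references that share a normalized DOI or PMCID are flagged as candidate duplicates. All function and field names below are ours, not EndNote’s.

```python
# Conceptual sketch of identifier-based duplicate detection
# (hypothetical illustration, not EndNote's actual algorithm).

def normalize(value):
    """Strip and lowercase an identifier so formatting differences don't hide matches."""
    return value.strip().lower() if value else None

def find_duplicates(references, fields=("doi", "pmcid")):
    """Return pairs of reference indices that share a value in any comparison field."""
    seen = {}          # (field, normalized value) -> index of first reference seen
    duplicates = []
    for idx, ref in enumerate(references):
        for field in fields:
            value = normalize(ref.get(field))
            if value is None:
                continue
            key = (field, value)
            if key in seen:
                duplicates.append((seen[key], idx))
            else:
                seen[key] = idx
    return duplicates

refs = [
    {"title": "Study A", "doi": "10.1001/jama.2020.17529"},
    {"title": "Study A (epub)", "doi": "10.1001/JAMA.2020.17529 "},  # same DOI, different case
    {"title": "Study B", "pmcid": "PMC1234567"},
]
print(find_duplicates(refs))  # [(0, 1)]
```

Matching on a stable identifier like a DOI avoids the false negatives that title- or author-based comparison can produce when records come from different databases.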

Another notable change in EndNote 20 is that all of the 7,000+ bibliographic output styles available for EndNote now come pre-loaded in the desktop version, minimizing the chance that authors will be unable to find a needed output style and have to download it from the vendor’s website. Additional tweaks to the number caps for various functions have also been made in the latest updates of EndNote 20 and EndNote Online (for desktop users). All of these details can be found in the latest version comparison charts: HTML version and PDF version.

If you have any questions or concerns about the upcoming EndNote 20 update, please feel free to Ask Us at the MSK Library!

medRxiv – Preprint Server for Health Sciences

First launched only about a year and a half ago, in June 2019, the preprint server medRxiv has seen a sharp uptick in the number of manuscripts posted since February 2020, largely due to COVID-19 related submissions. medRxiv is not a journal or journal publisher – rather, it is a preprint server, an outlet for “the distribution of preprints that are complete but unpublished manuscripts describing health research”.

The pandemic has certainly put preprints on the fast track to acceptance by the clinical research community, despite the fact that they are “preliminary reports of work that have not been certified by peer review”. As part of its COVID-19 response, the National Library of Medicine took the lead among database service providers and made the significant decision to index COVID-19 related preprints from specific preprint servers in PubMed, thereby further enhancing their discoverability (and their potential to be cited in new research).

That said – the debate about the benefits/challenges of preprints in the health sciences is still ongoing. However, it is becoming clear that preprints can no longer be ignored.

For a nice overview of where things stand with medRxiv (and other preprint servers) and where they may be headed, be sure to check out the November 10, 2020 issue of JAMA, which includes these noteworthy papers:

Krumholz HM, Bloom T, Sever R, Rawlinson C, Inglis JR, Ross JS. Submissions and Downloads of Preprints in the First Year of medRxiv. JAMA. 2020;324(18):1903–1905. doi:10.1001/jama.2020.17529 

Flanagin A, Fontanarosa PB, Bauchner H. Preprints Involving Medical Research—Do the Benefits Outweigh the Challenges? JAMA. 2020;324(18):1840–1843. doi:10.1001/jama.2020.20674

Malički M, Jerončić A, ter Riet G, et al. Preprint Servers’ Policies, Submission Requirements, and Transparency in Reporting and Research Integrity Recommendations. JAMA. 2020;324(18):1901–1903. doi:10.1001/jama.2020.17195 

For more information on preprints, be sure to Ask Us at the MSK Library!

Covidence Extraction 2.0 Offers More Flexibility

In the last few months, the Covidence systematic review software (Veritas Health Innovation, Melbourne, Australia) has updated its data extraction module, offering users additional flexibility across many of its features in Extraction 2.0. At least for the time being, users can still revert from the new 2.0 default back to Extraction 1.0 in the Settings if they so choose.

What’s involved in Systematic Review data extraction, in general?

Before discussing the Covidence Extraction 2.0 changes, it’s worth reviewing some best-practice standards for data extraction. These standards are mainly intended to provide quality control by creating an environment where errors of omission and errors of inaccuracy can be minimized. From a PCORI presentation:

Relevant IOM Standards for Data Extraction
Standard 3.5: Manage data collection

  • At a minimum, use two or more researchers, working independently, to extract quantitative and other critical data from each study
  • For other types of data, one person could extract data, while a second person independently checks for accuracy and completeness
  • Establish a fair process for resolving discrepancies (do not give final decision making to the senior reviewer)
  • Link publications from the same study to avoid including data more than once
  • Use standard data extraction forms developed for the specific review
  • Pilot-test the data extraction forms and process
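
The dual-extraction standard above (two researchers extracting independently, with discrepancies resolved by a fair consensus process) can be illustrated with a small Python sketch. This is purely our own illustration of the quality-control idea, not Covidence’s implementation; the field names are hypothetical.

```python
# Hypothetical sketch of the dual-extraction quality check: two reviewers
# extract the same fields independently, and any disagreement is flagged
# for consensus resolution rather than decided unilaterally.

def flag_discrepancies(extraction_a, extraction_b):
    """Return the fields where two independent extractions disagree."""
    fields = set(extraction_a) | set(extraction_b)
    return {
        field: (extraction_a.get(field), extraction_b.get(field))
        for field in fields
        if extraction_a.get(field) != extraction_b.get(field)
    }

reviewer_1 = {"sample_size": 120, "mean_age": 54.2, "dropouts": 8}
reviewer_2 = {"sample_size": 120, "mean_age": 45.2, "dropouts": 8}

print(flag_discrepancies(reviewer_1, reviewer_2))
# {'mean_age': (54.2, 45.2)} -- resolved by discussion, not by seniority
```

The point of the comparison step is exactly this: a transposed digit or missed value from one extractor surfaces as a flagged field instead of silently entering the analysis.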

Added flexibility and customization in Extraction 2.0 

Covidence Extraction 1.0 was best suited for systematic reviews in which quantitative data would be extracted. As such, the number of extractors was locked to two reviewers and could not be changed (unlike the Title/Abstract Screening and Full Text Screening stages, which could be set to just one reviewer). Extraction 2.0 makes it possible to proceed with just one person carrying out the extraction. If two people are set to extract, consensus checking is still required; the comparison step, however, can be carried out at the individual study level as soon as both reviewers are done.

Another noteworthy difference in Extraction 2.0 is that users can now begin designing the extraction form as soon as the Covidence review is created, even before any references have been added to the project. In other words, the extraction template can be set up separately from the studies, which benefits the process of pilot-testing the form. Also, whereas the addition of quantitative intervention outcome data was previously configured automatically into a table format and strictly followed a PICO framework, Extraction 2.0 recognizes that systematic reviews can vary and is less table-intensive and more customizable in terms of the broader range of information that can be captured.

There are a couple of areas worth mentioning that have been intentionally left inflexible (most likely in the interest of better data quality and integrity). There is currently no way to restrict who on the team can be an extractor or do consensus – it is done by whoever gets there first. And once data extraction has begun in Extraction 2.0, assigned roles cannot be reassigned (so, essentially, only two people from the team can be involved in the data extraction stage). Also, once data extraction has begun, publications from the same study can be merged; however, this step cannot be undone.

Clearly, the Covidence Extraction 2.0 updates are still very much in line with the Standards for Data Extraction set out by the IOM (listed above). To learn more about the systematic review process and how to use Covidence, be sure to check out the MSK Library’s upcoming workshop schedule or Ask Us!