What are Search Hedges?

A “search hedge” is a library term that describes one of two things:

  • Filter: A built-in feature of some literature databases that guides the user through a process to locate articles on a specific type of question.
  • Hedge: A published (and sometimes validated) comprehensive search strategy on a specific concept or topic, built for a particular database, which can be added to a search, customized, or used as a complete ready-made search.

Search Filters

Search filters are built into a database and assist the user in locating articles for a specific type of question: the user simply fills out a form or selects a pre-set filter.

PubMed Clinical Queries

PubMed Clinical Queries is a built-in search hedge in PubMed that assists users in locating clinical studies to answer clinical questions. Users simply enter their search terms and answer a few questions (such as whether the question relates to diagnosis, therapy, prognosis, or etiology), and PubMed then applies a pre-set hedge designed specifically for the answers provided.

The search hedge itself is “hidden” from the search strategy the user sees; however, the complete hedge can be viewed in the Clinical Queries filter details. One example appears below.
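
For a sense of what one of these hidden hedges looks like, here is the Therapy (Broad) filter as published in PubMed’s Clinical Queries filter documentation (quoted at the time of writing and subject to change, so treat it as illustrative rather than authoritative):

  ((clinical[Title/Abstract] AND trial[Title/Abstract]) OR
   clinical trials as topic[MeSH Terms] OR
   clinical trial[Publication Type] OR random*[Title/Abstract] OR
   random allocation[MeSH Terms] OR therapeutic use[MeSH Subheading])

Pasting this into the PubMed search box and ANDing it with your topic terms reproduces the broad Therapy behavior of Clinical Queries.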

Embase Search Tools

When using Embase (on the Elsevier platform) there are several built-in search tools available to help quickly filter results when searching on specific topics, including: PICO, PV Wizard, and Medical Device.

PICO

The PICO search tool supports evidence-based practice by providing separate fields for each component of the PICO framework (Population, Intervention, Comparison, Outcome), along with study design, so users can quickly identify clinical studies that answer the clinical question being addressed.

PV Wizard

The PV Wizard (PV stands for pharmacovigilance) helps the user locate articles on drug monitoring and adverse events using specific drug names (including trade, generic, and alternate names), with options to limit to adverse reactions, drug interactions, and drug combinations, as well as special situations (pregnancy, breastfeeding, pediatric, geriatric, organ failure, etc.).

Medical Device

The Medical Device search allows users to quickly locate clinical and pre-clinical studies on medical devices, including manufacturer information and adverse events. This search hedge was developed and validated by industry representatives to ensure that it aligns with best practices for medical device monitoring.

Embase Quick Limits

Embase has several “quick limits,” which are essentially search filters that can be applied with a single click. These limits can be added either by checking a box or by adding a specific field code to your search strategy (see the example after this list).

  • Humans: Either the Humans quick limit button or the field code [humans]/lim
  • Animals: Either the Animal quick limit button or the field code [animals]/lim
  • EBM: Quick limit button for Cochrane Reviews, Controlled Trials, and RCTs
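
As a minimal sketch (the topic search here is hypothetical), a search limited to human studies could be written directly in the Embase search box using the field code instead of the button:

  'breast cancer'/exp AND [humans]/lim

The /exp suffix explodes the Emtree term to include its narrower terms, and [humans]/lim applies the same restriction as the Humans quick limit.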

Search Hedges

Search hedges are comprehensive search strategies, devised by librarians or other information professionals, for a specific database and a specific topic or concept. These hedges are published and available for anyone to use, and many are also validated. While hedges can be used on their own as ready-made searches, they are most often combined with the user’s own search strategy to limit or narrow the results.

Most search hedges are designed around things like specific populations, study types, diseases or conditions, or outcome measures. Typically, a search hedge can simply be copied and pasted into the database and then combined with an existing search using the Boolean operator AND, as in the sketch below.
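
For illustration only (a real published hedge is usually far longer and more carefully tested), a short RCT-style hedge ANDed to a hypothetical topic search in PubMed might look like:

  (melanoma[MeSH Terms] OR melanoma[tiab])
  AND (randomized controlled trial[pt] OR randomized[tiab] OR placebo[tiab])

The first parenthetical is the user’s own topic search; the second stands in for the pasted hedge.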

Embase Study Type Hedges

The Embase Study Type Hedges are standardized search strategies for common and frequently used concepts that can be used along with a search query. These hedges can be copied and pasted into the Embase search box and then combined with an already-designed search using the Boolean operator AND.

These hedges are meant to focus results on specific types of clinical or experimental studies, and each hedge comes in two versions: one tuned for sensitivity (comprehensive results) and one for specificity (focused results). The sketch below shows the typical copy-paste-and-combine workflow.
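
In practice this uses Embase’s numbered search history; the hedge line here is a shortened, hypothetical stand-in for a real Study Type Hedge:

  #1  'diabetes mellitus'/exp
  #2  'randomized controlled trial'/exp OR random*:ab,ti OR placebo:ab,ti
  #3  #1 AND #2

Here #1 is the user’s topic search, #2 is the pasted hedge, and #3 combines them with AND.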

There are several types of hedges available on Embase.com.

  • General Study Types: Therapy, Diagnosis, Prognosis, Etiology, Economics, etc.
  • Hedges by Topic: Diabetes, Real-World Data, Cost Effectiveness, DEI
  • Animal Breed Hedges: Species-specific hedges for most animals used in research and agriculture

Locating Search Hedges

Since search hedges are published search strategies on specific topics, they can be found in a variety of places online. Some, such as the Study Type Hedges in Embase and the study type filters in PubMed, are available from the database itself. Others can be found on library websites, in the published literature, and in dedicated search hedge repositories (though these may not be validated).

It’s common to find search hedges in systematic review resources, since systematic reviews require comprehensive search strategies, and search hedges can provide just that.

A validated search hedge is the “gold standard”: it has been independently tested and verified. If a validated hedge is available for your topic, that is your best option.

Suggested Reading

What is the difference between a filter and a hedge? J Eur Assoc Health Info Libr [Internet]. 2016 Apr 1 [cited 2025 May 6];12(1). Available from: https://ojs.eahil.eu/JEAHIL/article/view/95

MSK Chatbots Can’t Perform a Literature Search

MSK now offers employees access to Open WebUI, a portal providing several chatbots for workplace use. But if you think these tools can be used for searching the literature, think again.

What is Open WebUI and How Do I Access It?

The portal is described as “a proprietary, user-friendly, and PHI-secure portal where staff can access a wide array of popular large language models (LLMs) as well as tools for experienced developers behind the MSK firewall.”

To access:

  1. Log on to the VPN or be onsite
  2. Visit https://chat.aicopilot.aws.mskcc.org/
  3. Select “Continue with MSK PingID” if prompted
  4. You’ll then get the message “Account Activation Pending” followed by “Contact Admin for WebUI Access.”

No further action is needed, and contacting the admin isn’t necessary. You will not receive a confirmation once your account has been activated, but once it has been, visiting the URL while onsite or on the VPN will take you to the tools.

Open WebUI includes the following chatbots:

  • Amazon Nova Pro: A reasoning model for general analysis and summarization. Knowledge cutoff date: unknown.
  • Claude Sonnet 3.5: A general-use model by Anthropic, effective with code generation. Knowledge cutoff date: April 15, 2024.
  • Claude Sonnet 3.7: An improved version of Sonnet 3.5 that also targets code generation as a differentiator. Knowledge cutoff date: October 2024.
  • Claude Sonnet 4: High intelligence and balanced performance; good for complex coding/debugging, detailed explanations, and documentation review. Detailed prompts recommended. Knowledge cutoff date: January 2024.
  • DeepSeek R1: A reasoning model for logical inference, math problem-solving, code generation, or text-based clinical reasoning. Cannot process images. Knowledge cutoff date: October 2023.
  • OpenAI o1: A reasoning model that thinks before it answers, making it suitable for deep analysis, task breakdown, or image-based clinical analysis. Knowledge cutoff date: October 2023.
  • OpenAI GPT-4o: A general-purpose model that balances quality, speed, and cost-effectiveness. Knowledge cutoff date: October 2023.

You can toggle between tools on the top left of the page and click the “set as default” option under a tool name after you’ve selected it.

Why Can’t I Use These Tools to Perform a Literature Search?

When you ask Amazon Nova Pro to perform a literature search, it appears to do so:

[Screenshot: Amazon Nova Pro appearing to summarize the literature in response to a prompt.]

However, a follow-up question reveals that all is not as it seems, and that any citations provided are likely not real:

[Screenshot: Amazon Nova Pro, asked whether it searched databases to produce its answer, stating that it did not.]

Other tools are clearer about their limitations from the start:

[Screenshot: OpenAI o1 stating that it does not have database access and giving advice on how to search.]
[Screenshot: Claude Sonnet 3.7 stating that it does not have database access and recommending speaking to a librarian.]

What Should I Do Instead?

There are AI tools that specialize in searching the literature, but even these are typically limited to open-access texts. Use these tools cautiously, perhaps in the brainstorming and planning stages of a project.

As an alternative, we welcome you to contact us to request a literature search.

Want to learn more about the use of AI for literature searching? Sign up for our next class on August 19 from 12 to 1 pm.

Retractions, AI, and the Risks of Biomedical Misinformation

Retractions are a serious threat to biomedical research

In the high-stakes world of biomedical research, where published findings can shape clinical practice, policy decisions, and even drug approvals, the presence of retracted literature is not just an academic problem; it’s a public health concern. When flawed, fabricated, or irreproducible studies are left unchecked in the scientific ecosystem, they continue to misinform downstream research, meta-analyses, clinical guidelines, and ultimately, patient care.

Retractions aren’t rare, either. According to Retraction Watch, retractions have been steadily rising over the last decade. Still, many retracted studies continue to circulate in the literature without any obvious indication that they’ve been pulled. 

AI-powered biomedical searching and retractions

There are dozens, maybe even hundreds, of AI tools that promise to revolutionize biomedical literature searching. These tools claim to make life easier for clinicians and researchers by surfacing “the best” evidence quickly.

Unfortunately, these AI tools likely struggle with reliably flagging retracted articles. None of these tools appear to cross-reference the Retraction Watch Database, even though it’s one of the most comprehensive and up-to-date sources of retraction data.

The result? Users could end up citing, summarizing, or even basing treatment decisions on debunked science, with the AI tools they trusted helping them do it.

Putting three AI search assistant tools to the test

To assess whether current AI-powered tools can reliably detect and communicate retracted biomedical research, we ran a small but telling test using a recently retracted article:

Wu, S. Y., Sharma, S., Wu, K., Tyagi, A., Zhao, D., Deshpande, R. P., & Watabe, K. (2021). Tamoxifen suppresses brain metastasis of estrogen receptor-deficient breast cancer by skewing microglia polarization and enhancing their immune functions. Breast Cancer Research, 23, 1-16.

This article was retracted on May 12, 2025.

We located this article through the Retraction Watch Database, a critical resource for identifying retracted papers. We then tested how three popular AI tools responded when we searched for it: 1) SciSpace, 2) Consensus, and 3) Elicit.

Baseline: Publisher and PubMed got it right

The article is clearly marked as retracted on both the publisher’s website (BMC, part of Springer Nature) and in PubMed. On BMC’s site, the article is branded with a bold red banner indicating that it has been retracted, and it links directly to the retraction notice.

In PubMed, the article’s retraction status is clearly labeled: a large red “Retracted Article” warning appears at the top of the article record.

With Third Iron’s LibKey Nomad browser extension installed, the retraction warning also appeared directly in the search results list, providing an extra layer of protection.

These platforms demonstrate that it is possible to handle retractions clearly and transparently. But what happens when you try to search with an AI-powered tool?

1) SciSpace: No retraction flag, no awareness

SciSpace has gained traction for its AI-enabled “Papers” database and its Chat AI for article summarization. We searched for the retracted article using the Papers function. The article was retrieved with no indication that it had been retracted.

The PDF version offered by SciSpace appeared to be the original, unretracted version of the paper — there was no watermark or retraction notice. This likely occurred because SciSpace stored an earlier version of the file and does not dynamically update with retraction metadata or new PDFs.

When we asked the SciSpace Chat if the article had been retracted, the reply was: “Sorry, this is not discussed in the paper.” In other words, the AI agent only read the text of the article and had no external awareness of its retraction status.

SciSpace also failed to locate or return the associated Retraction Note (PMID: 40355962), which was published in the same journal.

2) Consensus: Accurate link, but no warning

Consensus is designed to help users quickly identify answers to scientific questions by ranking statements from published articles.

The article was returned in a basic search, and no indication of its retracted status was provided. The PDF link routed to the publisher’s version, which is good practice. Since BMC properly flags retractions, users landing on that page would see the retraction banner and be able to access the Retraction Note. While Consensus did not flag the article as retracted in its own search interface or metadata, it did link out to a source that did.

3) Elicit: Somewhat better!

Elicit offers two formats for reviewing articles: a plain-text view and a PDF view. When we searched for the retracted article via Elicit’s “Find Papers” tool, the results were mixed.

The article summary did not indicate that the paper had been retracted. However, the plain-text view contained the word “RETRACTED” throughout the body text, and Elicit linked to a newer version of the article PDF with the retraction stamp clearly watermarked across every page.

Lessons learned: We need accountability and standards

Users skimming article summaries, relying on search results, or using data extraction tables generated by AI tools might still miss the retraction unless they click deeper into the article itself. This is especially concerning in evidence synthesis workflows, where tools like Elicit auto-populate summary tables with study characteristics and conclusions—often without indicating the article has been retracted.

If AI is going to play a meaningful role in evidence retrieval and synthesis, it needs to be held to a higher standard. At a minimum, AI tools used in biomedical contexts must:

  • Flag retracted articles clearly and automatically
  • Cross-reference multiple retraction sources, including Retraction Watch
  • Date-stamp and cite their information sources transparently
  • Allow users to report errors or omissions easily

Until the current AI tools ecosystem improves, here are some tips to protect yourself and your team:

  • Always cross-check critical articles in the Retraction Watch Database or in PubMed (see the sketch after this list)
  • Use reference managers (like Zotero or EndNote) that integrate with PubMed and allow for manual annotations of retracted status
  • Avoid relying solely on AI summaries or ranking algorithms, especially for high-stakes research
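
As a quick, hypothetical example of the PubMed cross-check, you can AND a topic search (or a set of PMIDs) with the Retracted Publication publication type to surface anything that has been pulled:

  (tamoxifen AND brain metastasis) AND retracted publication[pt]

An empty result set means none of the retrieved records are flagged as retracted in PubMed, though this only catches retractions PubMed knows about, which is why the Retraction Watch Database is also worth checking.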

It’s also worth noting that all the tools we tested (SciSpace, Consensus, and Elicit) are paid products and are effectively marketed as intelligent research assistants. Yet their inconsistent handling of retracted literature highlights the need for human-level vetting and cross-referencing.

In practice, this can take significantly more time than a traditional search if you’re trying to be thorough. Instead of accelerating research, these tools often introduce a false sense of efficiency, making it easy to miss red flags that would be obvious in a well-curated, librarian-led search process. The MSK Library team can help you navigate retraction risks, validate sources, and choose the right tools for your research. Connect with us today.