This was the second session in NCP3's ongoing training series on using AI-powered search engines for advocacy and research. The session was led by Richard "Red" Lawhern, PhD (Founder, Emeritus) with significant contributions from guest speaker Debbie Cunningham of San Diego, who teaches AI literacy classes and provided hands-on demonstrations of voice-based AI interaction.
The session covered practical use of Perplexity.ai for pain advocacy research, limitations of AI search tools, strategies for overcoming built-in biases in large language models, and emerging AI tools for content creation and distribution.
Red conducted a live demonstration of Perplexity.ai using real-time questions relevant to NCP3's advocacy work. The primary example focused on the co-prescribing of benzodiazepines and opioids, prompted by participant David G., whose own doctor had refused to co-prescribe due to fear of losing their license.
Perplexity returned a response that aligned with the dominant guideline-driven narrative, stating that co-prescription is "generally discouraged due to significant safety concerns." Red pointed out that while the guidelines do discourage concurrent use, actual clinical experience contradicts this position. He referenced his published work with Dr. Stephen Nadeau demonstrating that the benefits of co-prescribing benzodiazepines and opioids far exceed the risks in properly managed patients.
When Red asked the follow-up question, "What sources suggest that the above curated answer may represent oversimplified thinking or simple misinformation?" Perplexity responded that no sources contradicted its answer. Red stated this was "an outright lie" and demonstrated that this is a key limitation of AI search tools: they tend to suppress minority opinions even when those opinions are backed by stronger evidence.
Red then showed that by rephrasing the follow-up question, the search engine could be "teased" into gradually acknowledging contradictory evidence, including studies showing that long-term benzodiazepine patients maintain stable doses for up to 40 years without escalation, misuse, or dependence, with compliance rates exceeding 70%.
Debbie Cunningham raised a critical insight that became the central theme of the session: AI has no inherent bias, but it equates quantity with truth. Large language models identify repeating patterns across their training data and treat the most frequently repeated information as the most reliable. This means that when the dominant literature on a subject is itself biased (as is the case with opioid prescribing guidelines shaped by the CDC, PROP, and Andrew Kolodny), AI tools will confidently present that biased information as authoritative.
Debbie recommended that NCP3 leverage this same dynamic to its advantage: by generating more published content, videos, infographics, and audio based on NCP3's vetted research, the organization can increase the volume of accurate information online, which will in turn shift what AI tools surface in response to pain-related queries.
Debbie demonstrated her preferred method of interacting with AI: voice input through ChatGPT's conversation mode. She explained that speaking to the AI rather than typing provides more context and "ammunition" for the model to work with, which improves the quality of results. She also noted that voice interaction is more accessible for people who have difficulty typing, including chronic pain patients.
Debbie shared a practical tip: ending voice prompts with "Does that make sense?" causes the AI to summarize back what it understood, allowing the user to confirm or correct before it generates a full response.
Debbie described her workflow of using multiple AI tools to cross-validate results: she generates initial content or research using ChatGPT's voice mode, then copies the output into Claude, Grok, and Gemini to compare responses and identify discrepancies. She also recommended adding "with attribution" to any AI prompt, which she stated reduces hallucination by 60-80% by forcing the model to cite its sources.
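The cross-validation step in this workflow can be sketched in a few lines of Python. Everything here is an illustrative assumption, not any tool's actual API: the `responses` dict stands in for answers already collected by hand from each service, and the word-overlap (Jaccard) similarity is just one crude way to surface pairs of models that disagree enough to warrant a closer read.

```python
# Sketch: flag disagreements among answers gathered from several AI tools.
# The response strings are placeholders; in practice each would be the
# answer copied out of the corresponding service, as Debbie describes.

def word_overlap(a: str, b: str) -> float:
    """Crude similarity: shared words / union of words (Jaccard index)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def flag_discrepancies(responses: dict[str, str],
                       threshold: float = 0.5) -> list[tuple[str, str, float]]:
    """Return model pairs whose answers overlap less than `threshold`."""
    names = sorted(responses)
    flagged = []
    for i, m1 in enumerate(names):
        for m2 in names[i + 1:]:
            score = word_overlap(responses[m1], responses[m2])
            if score < threshold:
                flagged.append((m1, m2, round(score, 2)))
    return flagged

# Placeholder answers to the same prompt from three tools:
responses = {
    "ChatGPT": "Co-prescribing is generally discouraged due to safety concerns.",
    "Claude": "Co-prescribing is generally discouraged due to safety concerns.",
    "Grok": "Long-term data show stable dosing in properly managed patients.",
}

for m1, m2, score in flag_discrepancies(responses):
    print(f"Low agreement between {m1} and {m2}: {score}")
```

With the placeholder answers above, the two near-identical responses pass silently while each pairing with the divergent third answer is flagged for human review — which is the point of the workflow: discrepancies mark exactly the claims worth checking against vetted sources.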
Debbie's recommendation for a single paid AI tool: Claude, which she described as "a cross between ChatGPT and Perplexity" and a "phenomenal bang for the buck" at $20/month.
Red referenced a pharmacist colleague named Norm who publishes "You Are Within the Norms," a blog that uses an AI-based tool to convert full articles with references into audio dialogues between two people in minutes. Red emphasized this as a powerful tool for making NCP3's research accessible in audio and podcast formats.
Debbie introduced NotebookLM as a free tool that can take research documents and transform them into infographics, audio podcasts (including argumentative dialogue), and slide decks. She noted she had experimented with it using NCP3-adjacent content and found the presentation quality impressive, though the content quality depended on feeding it the right source material.
Red previewed key elements of his upcoming SYNC 2026 presentation, "An Indictment of US Healthcare Policy on Treatment of Pain and Addiction," scheduled for March 16, 2026 in Rosslyn, Virginia. He referenced six key published studies that collectively demonstrate the real incidence of addiction or overdose caused by clinical opioid treatment is approximately one patient per thousand treated.
Key studies referenced include a Cochrane review, the largest study of post-operative prescribing to opioid-naive patients (37 million patients), the largest study of commercially insured patients, the Jalal study published in Science (2018) on trends in opioid-related overdose, and the Oliva et al. study from the VA examining risk of overdose, suicide attempt, and completed suicide in chronic pain patients (one million patients).
Red stated that these studies were known to the DEA as early as 2019 and were ignored, making the DEA "an unindicted co-conspirator with CDC in the deaths of thousands of patients." The CDC itself presented the Jalal study to doctors at a controlled-substances license renewal conference. Red asserted that the evidence of fraud and complicity in negligent homicide is strong enough that it is "only a matter of time before senior officials in these agencies are investigated, indicted, tried, convicted, and imprisoned."
Participant David G. raised the issue of the CDC being "co-opted in a somewhat conspiratorial way" by groups like PROP and Andrew Kolodny. Red confirmed this is a central theme of his upcoming presentation and stated that six key published studies, relying on the CDC's own data, prove "beyond any contradiction" that the CDC guideline writers committed "the largest and most deadly healthcare fraud in the history of the United States." He stated that the Agency for Healthcare Research and Quality (AHRQ) wrote most of the supporting analysis behind the CDC opioid guidelines and deliberately ignored any evidence that contradicted their political agenda.