AI bridges the gap

Generative models boost patient understanding of urologic cancer research, according to a new study.


Generative artificial intelligence could soon become a powerful ally in making cancer research accessible to patients and caregivers, according to new research.

Findings from the BRIDGE-AI randomised controlled trial, which evaluated whether a generative AI framework can transform dense urologic oncology abstracts into lay abstracts and summaries (LASs) that are easier to read without sacrificing scientific integrity, have been published in JCO Clinical Cancer Informatics.

Researchers tested 40 abstracts covering prostate, bladder, kidney and testis cancers drawn from leading journals. Each was processed through a freely available AI tool, producing multiple LAS versions designed to meet international plain-language standards at the reading level of a 12-year-old.

Two independent reviewers then rated the accuracy, completeness, and clarity of the AI-generated texts, while 277 patients and caregivers participated in a pilot trial comparing their comprehension and perception of LASs versus original abstracts.

The results were striking, the researchers reported. LASs generated in under 10 seconds consistently scored high on readability metrics, with a mean Flesch Reading Ease Score nearly three times that of original abstracts (68.9 vs 25.3; P < .001).
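For context on those numbers, the Flesch Reading Ease Score is derived from average sentence length and average syllables per word, with higher scores indicating easier text: scores in the 60s correspond roughly to plain English, while scores in the 20s indicate very difficult, academic prose. The short Python sketch below illustrates the standard formula; its syllable counter is a crude vowel-group heuristic and the sample sentences are invented for illustration, not taken from the study.

import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count contiguous vowel groups; real readability tools
    # use dictionary-based syllable counts, so treat this as an approximation.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

# Illustrative comparison (invented sentences, not the study's abstracts):
dense = "Neoadjuvant chemotherapy demonstrated statistically significant oncological efficacy."
plain = "Giving chemotherapy before surgery helped patients live longer."
print(round(flesch_reading_ease(dense), 1), round(flesch_reading_ease(plain), 1))

On this scale, the reported jump from a mean of 25.3 for the original abstracts to 68.9 for the LASs represents a shift from graduate-level prose to plain English.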

Accuracy and clarity remained high across sections, though the methods section lagged slightly, with 85% accuracy compared to 100% in the originals. AI “hallucinations” were rare, occurring in just 1% of sections, the researchers wrote.

“This study demonstrates that GAI-generated LASs can rapidly and consistently summarise complex urologic cancer research in an approachable and structured template with high readability, completeness, and clarity,” they wrote.

“Patients and caregivers also found LASs more comprehensible and better perceived than the OAs [original abstracts]. Although the accuracy was strong in most LAS sections, it was found to be somewhat lower in the methods section, owed in part to the presence of occasional AI hallucinations. Thus humans must remain in the loop before distribution to ensure correctness.”

Patients and caregivers exposed to LASs demonstrated markedly better comprehension across all abstract sections, with adjusted odds ratios between 4.0 and 4.9 (P < .001).

They also reported more favourable perceptions, rating LASs as easier to understand, clearer and more useful, with higher willingness to share them. Importantly, trustworthiness ratings did not differ significantly between LASs and original abstracts, suggesting AI did not erode confidence in the information.

The implications for clinical practice and research dissemination are considerable. As misinformation proliferates online, peer-reviewed science that patients can actually understand becomes critical to informed decision-making and trust in medical care.

AI-generated LASs, when appropriately supervised, could offer a scalable, low-cost way to meet that need, ensuring that the rapid pace of oncology research translates more effectively into patient-centred knowledge, the authors said.

They noted that their study had some limitations, including its narrow focus on urologic cancers, as well as the lack of standardised tools for evaluating the quality of GAI output.

“More research is needed to develop standardised scoring systems for reproducibility and cross-study comparisons,” they wrote.

“Additionally, this study does not assess the ability of urologists to draft their own LASs or edit GAI-generated LASs. An RCT comparing provider-written LASs with and without AI assistance is currently ongoing (OSF 39), and the results of the pilot RCT reported here will be used to inform a larger RCT as part of the BRIDGE-AI initiative.”

In conclusion, the researchers said GAI-generated LASs showed significantly better readability than the OAs while maintaining high quality – but there was a caveat.

“The LASs were found to have better comprehension and perception by patients and caregivers compared with OAs,” they wrote.

“Although this pipeline automatically generates content that is more accessible, human supervision remains essential to ensure accurate, complete and clear representations of the original research.”

JCO Clinical Cancer Informatics, September 2025
