Professor Victoria Tzortziou Brown, President of the Royal College of GPs (RCGP) (Credit: Grainge Photography)
Physician groups have raised patient safety concerns about an artificial intelligence tool that writes hospital discharge letters, which is planned to be rolled out nationwide through the Federated Data Platform (FDP).
The technology, developed at Chelsea and Westminster NHS Trust, uses a large language model to extract key details from medical records, such as diagnoses and test results, to help draft the documents needed to discharge people from hospital.
The AI-Assisted Discharge Summary tool has been in use at the trust since August 2025.
However, Professor Victoria Tzortziou Brown, president of the Royal College of GPs (RCGP), told Digital Health News that she is concerned “about the potential risks to patient safety associated with current AI-assisted tools.”
“We would like to see clear evidence from an independent and impartial clinical safety assessment of any AI-assisted discharge summary tool, demonstrating that it is at least as safe and effective as current discharge summary processes before it is more widely implemented.
“This evaluation should include real-world testing in various clinical settings, including high-pressure environments, and full publication of the findings,” Brown said.
RCGP is a member of the FDP Check and Challenge Group, which had a demonstration of the discharge summary product at its September 2025 meeting.
At the meeting, members questioned whether the tool had been classified as a medical device by the Medicines and Healthcare products Regulatory Agency (MHRA) and raised the risk of loss of clinically relevant information that could affect ongoing care and safety.
One member also highlighted that using patient data to validate and revalidate a tool in development does not clearly fall within the scope of direct care.
The group warned that the tool needed “greater clinical depth and regulatory assurance”, and called for “visibility of further evidence of safety, clinical appropriateness, and formal regulation before further implementation,” according to meeting minutes.
In response, the FDP team confirmed that discussions were ongoing with the MHRA regarding medical device classification and that the tool was designed to support clinical judgment, with multiple validation steps built in to mitigate the risks of information loss.
A spokesman for the British Medical Association told Digital Health News: “Before we allow new technology like this to take a central role in managing patient data, we must first understand its impact on patients, physicians and the system as a whole.
“In our meetings with the Check and Challenge Group, we were assured that human approval will be required as a final step, but without fully exploring the potential risks, we remain concerned about the speed at which this new technology is being implemented.”
Anna Steere, Director of Understanding Data, told Digital Health News that with proper safeguards in place the discharge tool could be “a pragmatic step forward in a system that we already know needs significant improvements.”
She added that “if doctors rely too much on AI and stop checking results carefully, errors could happen”, but said it was about setting clear rules for the use of AI across the health service.
Digital Health News has contacted Chelsea and Westminster NHS Trust and NHS England for comment.
