Summary
Incident reports of medication errors are a valuable learning resource for improving patient safety. However, key information is often contained in unstructured free text, which prevents automated analysis and limits the usefulness of these data. Natural language processing can be used to structure this free text automatically and to retrieve relevant past incidents and learning materials, but this requires a large, fully annotated and validated set of incident reports. This study in Nature used a set of 58,658 machine-annotated incident reports of medication errors to test a natural language processing model. The authors provide access to the validation datasets and to the machine annotator for labelling future incident reports of medication errors.
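The core idea of turning free-text incident reports into structured labels can be illustrated with a deliberately minimal rule-based sketch. Note this is only an illustration of the task, not the study's method: the paper's annotator is a trained NLP model, and the drug list and error categories below are hypothetical examples.

```python
import re

# Hypothetical vocabularies -- a trained model would learn these
# from the annotated corpus rather than from hand-written lists.
DRUGS = {"insulin", "warfarin", "morphine"}
ERROR_TYPES = {
    "wrong dose": re.compile(r"\b(overdose|wrong dose|double dose)\b"),
    "omission": re.compile(r"\b(missed|omitted|not given)\b"),
}

def annotate(report: str) -> dict:
    """Return structured labels for one free-text incident report."""
    text = report.lower()
    drugs = sorted(d for d in DRUGS if d in text)
    errors = sorted(k for k, rx in ERROR_TYPES.items() if rx.search(text))
    return {"drugs": drugs, "error_types": errors}

example = "Patient was given a double dose of warfarin in the evening."
print(annotate(example))
# → {'drugs': ['warfarin'], 'error_types': ['wrong dose']}
```

Once reports carry structured labels like these, they can be filtered, counted, and matched against past incidents automatically, which is what the free-text form prevents.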