AI-assisted surgical training
When prospective surgeons train for keyhole surgery, there has so far been hardly any way to assess their learning progress objectively. Researchers at the NCT/UCC Dresden, the EKFZ for Digital Health and CeTI are therefore developing an AI-assisted system that makes training progress measurable and provides tailor-made suggestions for improvement.
Keyhole surgery is particularly demanding. Surgeons operate with long instruments that are inserted into the patient's body through small incisions, and they orient themselves by a screen displaying two-dimensional camera images from inside the body.
Prospective surgeons can use special simulators to train key skills and manoeuvres for minimally invasive surgery. Until now, experienced surgeons have had to stand next to the trainees and evaluate their performance. This ties up capacity and makes the assessment of learning progress rather subjective.
In the future, computers could use artificial intelligence to make learning progress objectively measurable. For this purpose, a research project recorded training data from around 50 medical students and trainee surgeons in practical courses on minimally invasive surgery - at the beginning of a course, at various points during the course, and at its end. The participants performed predefined exercises covering basic surgical skills in a special training box, such as cutting, suturing and knotting, or the two-handed use of instruments.
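As an illustration of how such a recording might be organised, the following sketch defines a minimal data record for one exercise attempt; all class and field names are hypothetical and not taken from the project.

```python
from dataclasses import dataclass
from enum import Enum

class Exercise(Enum):
    CUTTING = "cutting"
    SUTURING = "suturing"
    KNOTTING = "knotting"
    BIMANUAL = "two_handed_instrument_use"

@dataclass
class TrainingAttempt:
    """One recorded exercise attempt in the training box (illustrative only)."""
    participant_id: str          # anonymised trainee identifier
    exercise: Exercise           # which basic skill was practised
    course_stage: str            # "start", "mid" or "end" of the course
    video_path: str              # recorded camera footage from inside the box
    instrument_track_path: str   # recorded optical tracking data
```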
An optical tracking system used the camera images from inside the box to show in real time where the surgical instruments - for example scissors, graspers and needle holders - were located.
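The article does not describe how the optical tracking works internally. As a rough illustration only, the sketch below locates a colour marker on an instrument tip in a single camera frame using OpenCV; the marker colour range and the function name are assumptions.

```python
import cv2
import numpy as np

# Hypothetical HSV colour range for a marker attached to the instrument tip;
# the project's actual tracking method is not specified in the article.
MARKER_LOW = np.array([40, 80, 80])
MARKER_HIGH = np.array([80, 255, 255])

def locate_instrument_tip(frame_bgr: np.ndarray) -> tuple[int, int] | None:
    """Return the (x, y) pixel position of the instrument marker, if visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, MARKER_LOW, MARKER_HIGH)
    moments = cv2.moments(mask)
    if moments["m00"] == 0:          # marker not visible in this frame
        return None
    return (int(moments["m10"] / moments["m00"]),
            int(moments["m01"] / moments["m00"]))
```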
At least two experts then independently evaluated each recorded training video against defined criteria, e.g. whether movements were executed without tremor and in a targeted manner, or how carefully and precisely the simulated tissue was handled. These evaluations were fed into the artificial neural network as numerical training inputs. In addition, 18 experienced surgeons performed the same basic exercises, and an expert reference model was built for the computer on this basis.
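To illustrate how two independent expert ratings could become a numerical training signal, here is a minimal sketch that averages the ratings per criterion; the criteria names and the 1-5 scale are assumptions based loosely on the examples in the text.

```python
from statistics import mean

# Hypothetical rating criteria on a 1-5 scale.
CRITERIA = ["tremor_free", "targeted_movement", "tissue_handling"]

def combine_expert_ratings(rating_a: dict[str, int],
                           rating_b: dict[str, int]) -> list[float]:
    """Average two independent expert ratings into one numerical label vector."""
    return [mean([rating_a[c], rating_b[c]]) for c in CRITERIA]

# Example: two experts rate the same training video.
label = combine_expert_ratings(
    {"tremor_free": 4, "targeted_movement": 3, "tissue_handling": 4},
    {"tremor_free": 5, "targeted_movement": 3, "tissue_handling": 3},
)
# label == [4.5, 3.0, 3.5]: the supervision signal fed to the network
```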
Each example and assessment helps the neural network to adjust the weighted connections between its internal mathematical operations more accurately, so that they better capture the different requirements. Similar to a human, the network learns from examples. The goal is for the computer eventually to make a meaningful assessment on its own, based on the image information generated during a training sequence.
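A minimal sketch of such a learning step, written in PyTorch: the network's weighted connections are nudged so that its predicted scores move closer to the experts' ratings. The feature size, network layout and score dimensions are illustrative assumptions, not the project's actual model.

```python
import torch
import torch.nn as nn

# Illustrative regression model: maps a feature vector extracted from a
# training sequence (e.g. instrument motion statistics) to rating scores.
model = nn.Sequential(
    nn.Linear(64, 32),   # 64 hypothetical motion features per sequence
    nn.ReLU(),
    nn.Linear(32, 3),    # 3 predicted rating criteria
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def training_step(features: torch.Tensor, expert_scores: torch.Tensor) -> float:
    """One gradient step: move predictions closer to the experts' ratings."""
    optimizer.zero_grad()
    predicted = model(features)
    loss = loss_fn(predicted, expert_scores)
    loss.backward()      # compute how each weight influenced the error
    optimizer.step()     # adjust the weighted connections accordingly
    return loss.item()
```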
In this way, trainee surgeons could in the future receive objective feedback from the artificial intelligence while training on the simulator, telling them in which areas they have already improved and which skills need particular work. For learning purposes, the computer could also display on the screen what a given hand movement looks like when an expert performs it.
In addition to visual information, haptic feedback should also support training. Researchers at CeTI have therefore developed laparoscopic forceps equipped with a vibration function. As soon as the instrument moves out of the endoscope's camera image, and thus out of the optimal surgical area, a neural network detects this and sends a signal to the forceps, which then begin to vibrate. This gives the surgeon immediate feedback that he or she needs to correct the position of the instrument.
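The feedback loop described above could look roughly like the following sketch; the `camera`, `detector` and `forceps` interfaces are hypothetical placeholders for the endoscope stream, the neural network's visibility check and the vibration command.

```python
def haptic_feedback_loop(camera, detector, forceps) -> None:
    """Vibrate the forceps whenever the instrument leaves the camera image."""
    for frame in camera.stream():                    # endoscope video frames
        if detector.instrument_in_view(frame):       # neural network check
            forceps.stop_vibration()
        else:
            forceps.vibrate()                        # prompt a position correction
```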