Surgical navigation for cancer
Tumor operations on the rectum are performed along a millimeter-thin layer that borders important nerves. If these nerves are damaged, the result can be incontinence and sexual dysfunction. In the Dresden "CoBot" project, researchers are developing a computer-based assistance system that uses artificial intelligence to significantly reduce the risk of such complications in the future.
Around 60,000 people in Germany are diagnosed with colorectal cancer every year. About one third of them have a tumor in the rectum. For the majority of patients, surgery is the most important treatment option. Because of the narrow confines of the lesser pelvis and the close proximity of risk structures, this operation is particularly demanding. A few millimeters decide the success of the operation: if surgeons cut too close to the tumor, they may not remove it completely. If they cut too far away, surrounding nerves can be damaged. As a result, around 50 percent of patients suffer from bladder or fecal incontinence and around 30 percent from erectile dysfunction or other sexual problems such as insensitivity during intercourse.
Situational assistance for robotic surgery
Nowadays, rectal surgery is usually possible without a large abdominal incision. In addition to conventional laparoscopic procedures, robotic surgical systems such as the "DaVinci" can be used. The device relieves the surgeon of directly holding and moving the instruments and translates larger hand movements, which the surgeon executes via two joystick-like handles, into tiny, tremor-free instrument movements. The basic principle can be illustrated with the sketch below, which combines a simple low-pass filter (to suppress tremor) with a fixed scaling factor (to shrink the motion); the 5:1 scale and the filter constant are illustrative assumptions, not parameters of the real DaVinci system.
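```python
# Minimal sketch of motion scaling and tremor filtering, two ideas behind
# robotic surgical systems. The 5:1 scale and the filter constant are
# illustrative assumptions, not parameters of the real DaVinci system.
import numpy as np

MOTION_SCALE = 0.2   # 5:1 scaling: 10 mm of hand travel -> 2 mm at the tip
ALPHA = 0.1          # low-pass filter constant; smaller = stronger smoothing

def filter_and_scale(hand_positions):
    """Smooth a stream of 3-D hand positions and scale the motion down."""
    smoothed = hand_positions[0]
    tip = np.zeros(3)                     # instrument tip starts at origin
    trajectory = [tip.copy()]
    for pos in hand_positions[1:]:
        previous = smoothed
        # exponential moving average suppresses high-frequency tremor
        smoothed = ALPHA * pos + (1 - ALPHA) * previous
        # only the scaled-down displacement is passed to the instrument
        tip += MOTION_SCALE * (smoothed - previous)
        trajectory.append(tip.copy())
    return np.array(trajectory)

# Simulated hand motion: a slow 10 cm sweep with about 1 mm of tremor on top
t = np.linspace(0, 1, 500)
hand = np.stack([100 * t, np.zeros_like(t), np.zeros_like(t)], axis=1)
hand += np.random.normal(scale=1.0, size=hand.shape)
tip_path = filter_and_scale(hand)
print(f"hand travel: {hand[-1, 0]:.1f} mm, tip travel: {tip_path[-1, 0]:.1f} mm")
```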
Up to now, the image and sensor data available during an operation have not been analyzed in a way that could offer the surgeon content-related assistance. Yet this would be particularly useful for robotic operations, because surgeons there work even more by eye than with open surgical techniques: they depend entirely on visual information, since these systems provide no tactile feedback.
Researchers at the NCT/UCC and the EKFZ for Digital Health are therefore developing a computer-based assistance system for robot-assisted rectal surgery. In the future, it should support surgeons during this difficult work and help ensure that the quality of the procedure depends less on the experience of the individual surgeon and improves across the board.
Bowel mobility complicates navigation
A particular difficulty for computer-based assistance is that the bowel is a soft tube that moves continuously and is mobilized during surgery. Unlike in orthopedics, neurosurgery or otolaryngology, computed tomography or magnetic resonance images obtained before surgery therefore do not provide a suitable basis for reliable navigation. Instead, the system must recognize anatomical structures in the video images of the laparoscope during the operation and display them in real time.
In practice, the surgeon will continue to see the camera images from the patient's abdomen on the monitor during the operation, as usual. When needed, the system will overlay additional information on the laparoscope's camera images, such as the location of important nerves or the optimal incision line. It is particularly important that the right information is available at the right time. The surgeons make all decisions themselves at all times; the system only supports them, much like a navigation system in a car. The overlay principle can be sketched as a simple per-frame loop: read a camera image, let a segmentation model mark the relevant structures, and blend the result semi-transparently into the live picture. The model and video source in the sketch below are placeholders, not the actual CoBot components.
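```python
# Sketch of the overlay principle: read laparoscope frames, let a (here
# hypothetical) segmentation model mark risk structures, and blend the
# result into the live image. "segment_structures" stands in for the
# project's actual model, which is not public.
import cv2
import numpy as np

def segment_structures(frame):
    """Placeholder: return a color mask of nerves / incision line."""
    return np.zeros_like(frame)          # real system: neural-network output

cap = cv2.VideoCapture("laparoscope_feed.mp4")   # assumed video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = segment_structures(frame)
    # semi-transparent overlay: the surgeon still sees the anatomy itself
    display = cv2.addWeighted(frame, 1.0, mask, 0.5, 0)
    cv2.imshow("assistance view", display)
    if cv2.waitKey(1) == 27:             # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```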
Neural network analyzes surgical images
To develop the system, the researchers use an artificial neural network, a method from the field of artificial intelligence that mimics the human ability to learn from examples. In a neural network, numerous mathematical functions are interconnected, much like neurons in the human brain. Incoming information - for example, the color values of image pixels - is processed and analyzed step by step within the network. These functions are organized in layers, each of which characterizes different features of an image. If a neural network is to recognize bicycles in an image, for example, the task of the first layer might be to identify lines. Only if the result calculated by one of its functions exceeds a certain threshold is an image pixel interpreted as part of a line. In this case, the information is passed on to the next layer for further processing: the artificial neuron "fires". The task of the second layer could then be to assemble lines into shapes and to recognize spokes, for example.
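To make the threshold idea concrete, here is a toy example of a single artificial neuron in Python; the pixel values, weights and threshold are invented for illustration.

```python
# Toy illustration of the "firing" described above: a single artificial
# neuron computes a weighted sum of its inputs and passes a signal on only
# if the result exceeds a threshold. Weights and threshold here are made up.
import numpy as np

def neuron(inputs, weights, threshold):
    activation = np.dot(inputs, weights)
    return activation if activation > threshold else 0.0   # fire or stay silent

# Three pixel intensities as input; this neuron "looks for" a bright
# line, so it weights the middle pixel most strongly.
pixels = np.array([0.1, 0.9, 0.2])
weights = np.array([0.2, 1.0, 0.2])
print(neuron(pixels, weights, threshold=0.5))   # 0.96 -> the neuron fires
```

In a real network, thousands of such neurons are stacked in layers, and smooth activation functions usually replace the hard threshold so that the network can be trained efficiently.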
Nowadays, scientists can often build on existing neural networks, which they then adapt to their particular problem. The Dresden researchers work, for example, with Detectron, an object detection and image segmentation framework developed by Facebook. Large sets of sample data are needed to train such a system. In practice, images are presented to the system for analysis and the output is compared with correct results prepared in advance - for example, the outlines of bicycles recognized by the computer in an image are compared with the bicycles marked by humans in the same image. The error is determined in each case, and the connections within the neural network are adjusted so that the results become increasingly accurate. Through this training, the neural network should learn to analyze new, unknown images.
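This compare-measure-adjust loop can be sketched in a few lines. The example below uses a tiny stand-in network on dummy data rather than the CoBot model, which is not publicly available; it only demonstrates the training principle just described.

```python
# Generic sketch of the training principle: predictions on annotated data
# are compared with the human labels, the error is measured, and the
# network's connections (weights) are adjusted. Not the CoBot code.
import torch
import torch.nn as nn

model = nn.Sequential(                 # tiny stand-in for a segmentation net
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),                  # e.g. "nerve" vs. "background"
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dummy data standing in for image features and expert annotations
features = torch.randn(100, 64)
labels = torch.randint(0, 2, (100,))

for epoch in range(10):
    predictions = model(features)             # analyze the "images"
    loss = loss_fn(predictions, labels)       # compare with the annotations
    optimizer.zero_grad()
    loss.backward()                           # determine each weight's share of the error
    optimizer.step()                          # adjust the connections
    print(f"epoch {epoch}: error {loss.item():.3f}")
```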
Annotated training data are rare
For simple objects that can be marked by laypersons, such as bicycles or various surgical instruments, there are now large freely accessible data collections. The situation is quite different for image data that requires medical and surgical expertise to annotate. That is why computer scientists, surgeons and engineers at the NCT/UCC, the University Hospital Carl Gustav Carus Dresden and the Faculty of Electrical and Computer Engineering at TU Dresden are working closely together in the "CoBot" project funded by the EKFZ for Digital Health.
Since 2017, recordings of DaVinci operations have been systematically archived at Dresden University Hospital. Since the end of 2019, surgeons and medical students have annotated around 25,000 individual images from 40 rectal surgeries. On each image, they used a touch screen and a special pen to trace important structures such as the optimal incision line and nerves to be spared. By training with this and other information, the computer learns to recognize the different phases of surgery and to display relevant information during operations that can last up to eight hours. The large amount of training data from different patients is also necessary so that the system can recognize relevant structures in new patients, whose anatomy in the abdominal cavity never looks exactly the same. In 2022, the system will be tested in real operations as part of a study.
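How such hand-drawn annotations typically become training targets can be sketched as follows: the traced outline is stored as a polygon and rasterized into a pixel mask that the network learns to reproduce. The coordinates and image size below are invented for illustration and do not reflect the project's actual annotation pipeline.

```python
# Sketch of turning a traced outline into a training mask. The polygon
# coordinates and the frame size are hypothetical examples.
import numpy as np
import cv2

IMAGE_SIZE = (1080, 1920)              # height, width of a video frame (assumed)

def polygon_to_mask(polygon, shape):
    """Rasterize one annotated structure (e.g. a nerve) into a binary mask."""
    mask = np.zeros(shape, dtype=np.uint8)
    points = np.array(polygon, dtype=np.int32)
    cv2.fillPoly(mask, [points], 1)
    return mask

# A few points of a traced nerve outline (hypothetical coordinates)
nerve_outline = [(400, 300), (900, 320), (950, 600), (420, 580)]
mask = polygon_to_mask(nerve_outline, IMAGE_SIZE)
print(f"annotated pixels: {mask.sum()} of {mask.size}")
```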
Further information: https://digitalhealth.tu-dresden.de/research/innovation-projects/cobot/