The models, created by researchers at MIT and Boston Children’s Hospital, could provide a more intuitive way for surgeons to assess and prepare for the anatomical idiosyncrasies of individual patients.
“Our collaborators are convinced that this will make a difference,” says Polina Golland, a professor of electrical engineering and computer science at MIT, who led the project. “The phrase I heard is that ‘surgeons see with their hands,’ that the perception is in the touch.”
This autumn, seven cardiac surgeons at Boston Children’s Hospital will participate in a study intended to evaluate the models’ usefulness.
Danielle Pace, an MIT graduate student in electrical engineering and computer science, spearheaded the development of the software that analyzes the MRI scans. Mehdi Moghari, a physicist at Boston Children’s Hospital, developed new procedures that increase the precision of MRI scans tenfold, and Andrew Powell, a cardiologist at the hospital, leads the project’s clinical work.
MRI data consist of a series of cross-sections of a three-dimensional object. Like a black-and-white photograph, each cross section has regions of dark and light, and the boundaries between those regions may indicate the edges of anatomical structures. Then again, they may not.
Determining the boundaries between distinct objects in an image is one of the central problems in computer vision, known as “image segmentation.” But general-purpose image-segmentation algorithms aren’t reliable enough to produce the very precise models that surgical planning requires.
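To see why general-purpose approaches fall short, consider the simplest possible segmenter: label every pixel brighter than a fixed cutoff as tissue. The sketch below is purely illustrative (the function name, cutoff, and toy data are invented for this example, not taken from the researchers' software); faint structures near the threshold are easily missed or merged, which is exactly the unreliability described above.

```python
import numpy as np

def naive_segment(cross_section, threshold):
    """Naive general-purpose segmentation: mark every pixel brighter
    than a fixed threshold as tissue. Noise and faint boundaries make
    this far too crude for surgical planning."""
    return cross_section > threshold

# A 3x3 toy "cross-section" with a faint structure near the cutoff.
scan = np.array([[0.10, 0.20, 0.10],
                 [0.20, 0.55, 0.60],
                 [0.10, 0.60, 0.70]])
mask = naive_segment(scan, 0.5)
print(mask)
```

Shifting the threshold even slightly would change which pixels count as tissue, illustrating how sensitive such methods are to ambiguous boundaries.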
Typically, the way to make an image-segmentation algorithm more precise is to augment it with a generic model of the object to be segmented. Human hearts, for instance, have chambers and blood vessels that are usually in roughly the same places relative to each other. That anatomical consistency could give a segmentation algorithm a way to weed out improbable conclusions about object boundaries.
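One common way to encode that anatomical consistency is an atlas: a probability map giving, for each pixel, the prior likelihood that it belongs to the structure of interest. The sketch below shows the general idea only; the weighting scheme, function names, and toy arrays are assumptions for illustration, not the project's actual method.

```python
import numpy as np

def segment_with_prior(image, atlas_prob, threshold=0.5):
    """Combine image evidence with a generic anatomical prior.

    image:      2D array of intensities normalized to [0, 1]
    atlas_prob: 2D array of prior probabilities that each pixel is tissue
    Pixels are kept only where both the image and the prior agree,
    weeding out improbable boundary conclusions."""
    posterior = image * atlas_prob
    return posterior > threshold

# Toy example: a bright blob in a region the prior considers plausible.
image = np.zeros((8, 8))
image[2:6, 2:6] = 0.9          # bright region in the scan
atlas = np.zeros((8, 8))
atlas[1:7, 1:7] = 0.8          # generic model expects tissue here
mask = segment_with_prior(image, atlas)
```

A bright artifact outside the atlas region would be suppressed (its prior is zero), which is precisely the double-edged behavior the next paragraph describes: the prior also suppresses genuine anatomy that a generic model does not expect.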
The problem with that approach is that many of the cardiac patients at Boston Children’s Hospital require surgery precisely because the anatomy of their hearts is irregular. Inferences from a generic model could obscure the very features that matter most to the surgeon.
In the past, researchers have produced printable models of the heart by manually indicating boundaries in MRI scans. But with the 200 or so cross-sections in one of Moghari’s high-precision scans, that process can take eight to ten hours.
Pace and Golland’s solution was to ask a human expert to identify boundaries in a few of the cross-sections and allow algorithms to take over from there. Their strongest results came when they asked the expert to segment only a small patch — one-ninth of the total area — of each cross-section.
In that case, segmenting just 14 patches and letting the algorithm infer the rest yielded 90 percent agreement with expert segmentation of the entire collection of 200 cross-sections. Human segmentation of just three patches yielded 80 percent agreement.
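The article does not say which agreement measure was used; one standard choice for comparing two segmentations is the Dice overlap coefficient, sketched below with invented toy masks.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|).
    Ranges from 0 (no overlap) to 1 (identical masks)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Toy comparison: algorithmic mask misses one row of the expert's mask.
expert = np.zeros((10, 10), dtype=bool)
expert[2:8, 2:8] = True        # 36 pixels marked by the expert
algo = np.zeros((10, 10), dtype=bool)
algo[3:8, 2:8] = True          # 30 pixels, all inside the expert mask
score = dice(expert, algo)
print(score)
```

In practice such a score would be computed over all 200 cross-sections, not a single toy pair.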
Together, human segmentation of sample patches and the algorithmic generation of a digital, 3D heart model takes about an hour. The 3D printing process takes a couple of hours more.
Currently, the algorithm examines patches of unsegmented cross-sections and looks for similar features in the nearest segmented cross-sections. But Golland believes that its performance might be improved if it also examined patches that ran obliquely across several cross-sections. This and other variations on the algorithm are the subject of ongoing research.
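A patch-matching step of the kind described above can be sketched as follows. This is a simplified stand-in, not the researchers' code: the patch size, search window, and sum-of-squared-differences criterion are all assumptions made for illustration.

```python
import numpy as np

def propagate_slice(target, source, source_labels, patch=4, search=1):
    """Label an unsegmented cross-section (target) by matching each of
    its patches against nearby patches in the nearest segmented
    cross-section (source), then copying the best match's labels."""
    h, w = target.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            tp = target[i:i + patch, j:j + patch]
            best, best_lab = np.inf, None
            # Search small offsets for the most similar patch (lowest
            # sum of squared intensity differences).
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    si, sj = i + di, j + dj
                    if si < 0 or sj < 0 or si + patch > h or sj + patch > w:
                        continue
                    sp = source[si:si + patch, sj:sj + patch]
                    ssd = ((tp - sp) ** 2).sum()
                    if ssd < best:
                        best = ssd
                        best_lab = source_labels[si:si + patch, sj:sj + patch]
            out[i:i + patch, j:j + patch] = best_lab
    return out

# Toy check: if the target equals the segmented slice, labels carry over.
rng = np.random.default_rng(0)
source = rng.random((8, 8))
source_labels = source > 0.5
labels = propagate_slice(source.copy(), source, source_labels)
```

The oblique-patch variant Golland describes would extend the search across several cross-sections at once rather than matching within a single neighboring slice.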