In the absence of computer assistance, orthopaedic surgeons frequently rely on challenging mental interpretations of fluoroscopy for intraoperative guidance. Existing computer-assisted navigation systems forgo this mental process and obtain accurate information about visually obstructed objects through the use of 3D imaging and additional intraoperative sensing hardware. This information is attained at the expense of increased invasiveness to patients and surgical workflows. Patients are exposed to large amounts of ionizing radiation during 3D imaging and undergo additional, and larger, incisions in order to accommodate navigational hardware. Non-standard equipment must be present in the operating room, and time-consuming data collections must be conducted intraoperatively. Using periacetabular osteotomy (PAO) as the motivating clinical application, we introduce methods for computer-assisted fluoroscopic navigation of orthopaedic surgery that remain minimally invasive to both patients and surgical workflows.
Partial computed tomography (CT) of the pelvis is obtained preoperatively, and surface models of the entire pelvis are reconstructed using a combination of thin plate splines and a statistical model of pelvis anatomy. Intraoperative navigation is implemented through a 2D/3D registration pipeline between 2D fluoroscopy and the 3D patient models. This pipeline recovers relative motion of the fluoroscopic imager using patient anatomy as a fiducial, without any introduction of external objects. PAO bone fragment poses are computed with respect to an anatomical coordinate frame and are used to intraoperatively assess acetabular coverage of the femoral head. Convolutional neural networks perform semantic segmentation and detect anatomical landmarks in fluoroscopy, allowing for automation of the registration pipeline. Real-time tracking of PAO fragments is enabled through the intraoperative injection of BBs into the pelvis; fragment poses are automatically estimated from a single view in less than one second. A combination of simulated and cadaveric surgeries was used to design and evaluate the proposed methods.
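The core idea of estimating a rigid fragment pose from tracked fiducials can be illustrated with a minimal sketch. The abstract's actual method registers 2D fluoroscopic projections to 3D models; the simplified example below instead assumes matched 3D point sets (e.g., BB positions in the preoperative CT frame and in a reconstructed intraoperative frame, with known correspondences) and recovers the rigid transform between them via the Kabsch/Procrustes algorithm. All names here are illustrative, not from the described system.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t (Kabsch).

    src, dst: (N, 3) arrays of corresponding 3D points, N >= 3 non-collinear.
    """
    src_c = src.mean(axis=0)          # centroids of each point set
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if det would be -1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical usage: recover a known rotation/translation of four "BBs".
rng = np.random.default_rng(0)
bbs_ct = rng.normal(size=(4, 3))      # BB positions in the CT frame
theta = np.pi / 6                      # 30-degree rotation about z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
bbs_intraop = bbs_ct @ R_true.T + t_true

R_est, t_est = rigid_register(bbs_ct, bbs_intraop)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```

In the single-view setting described above, correspondences come from 2D BB detections and the pose is found by minimizing reprojection error through the imaging geometry, which is what makes the sub-second, single-image estimates possible; the closed-form point-to-point solution here is only the conceptual core.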
Robert Grupp is a postdoctoral fellow at LCSR, primarily working with Mehran Armand in the Biomechanical and Image-Guided Surgical Systems Lab. He recently completed his PhD in the Department of Computer Science at Johns Hopkins University, advised by Russell Taylor. His current research focuses on medical image registration and aims to enable computer-assisted navigation during minimally invasive orthopaedic surgery. Some of this work has been highlighted as a feature article in the February 2020 issue of IEEE Transactions on Biomedical Engineering. Prior to starting his PhD studies, Robert worked on various Synthetic Aperture Radar exploitation algorithms as part of the Automatic Target Recognition group at Northrop Grumman Electronic Systems. He received a BS in Computer Science and Mathematics from the University of Maryland, College Park.