PhD defense: Abdolrahim KADKHODAMOHAMMADI
Team: AVR
Title: 3D Detection and Pose Estimation of Medical Staff in Operating Rooms using RGB-D Images
Abstract: In this thesis, we address the problems of person detection and pose estimation in Operating Rooms (ORs), which are key ingredients needed to develop many applications in these environments, such as surgical activity recognition, surgical skill analysis and radiation safety monitoring. Because of the OR's strict sterilization requirements and the need to avoid disrupting the surgical workflow, cameras are currently one of the least intrusive sensing options that can be conveniently installed in the room. Even though recent vision-based human detection and pose estimation methods have achieved fairly promising results on standard computer vision datasets, we show that they do not necessarily generalize well to challenging OR environments. The main challenges are the presence of many visually similar surfaces, loose and textureless clinical clothes, clutter, occlusions and the crowded nature of the environment. To address these challenges, we propose to use a set of compact RGB-D cameras installed on the ceiling of the OR. Such cameras capture the environment with two inherently different sensors and therefore provide complementary information about the surfaces present in the scene, namely their visual appearance and their distances to the camera.
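As a brief illustration of how the two modalities complement each other, the sketch below lifts a pixel detected in the color image to a 3D point in the camera frame using the registered depth value and the camera intrinsics (standard pinhole back-projection; the function name and calibration values are hypothetical, not taken from the thesis).

```python
import numpy as np

def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Lift pixel (u, v) with metric depth to 3D camera coordinates
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Hypothetical example: a body-part detection at pixel (320, 240),
# observed 2.5 m below a ceiling-mounted RGB-D camera.
point_3d = backproject(320, 240, 2.5, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(point_3d)  # 3D position of the detected part in the camera frame
```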
In this dissertation, we propose novel approaches that take into account depth, multi-view and temporal information to perform human detection and pose estimation. Firstly, we introduce an energy optimization approach to consistently track body poses over entire RGB-D sequences. Secondly, we present a novel approach to estimate the body poses directly in 3D by relying on both color and depth images. The approach also uses a new RGB-D body part detector. Finally, we present a multi-view approach for 3D human pose estimation, which relies on depth data to reliably incorporate information across all views. We also present a method to automatically model a priori information about the OR environment for obtaining a more robust human detection model. To evaluate our approaches, we generate several single- and multi-view datasets in operating rooms. We demonstrate very promising results on these datasets and show that our approaches outperform state-of-the-art methods on data acquired during real surgeries.
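To give a concrete, simplified picture of why depth makes multi-view reasoning natural, the sketch below expresses per-view 3D joint estimates (obtained by depth back-projection) in a common world frame using known extrinsics and merges them by confidence-weighted averaging. This is only an illustrative fusion scheme under assumed calibration data, not the formulation actually used in the thesis.

```python
import numpy as np

def fuse_multiview_joint(estimates):
    """Merge one joint's 3D estimates from several calibrated RGB-D views.

    estimates: list of (p_cam, R, t, w) where p_cam is the 3D point in the
    camera frame, (R, t) maps camera to world coordinates, and w is a
    detection confidence in [0, 1].
    """
    points, weights = [], []
    for p_cam, R, t, w in estimates:
        points.append(R @ p_cam + t)   # express the estimate in the world frame
        weights.append(w)
    points, weights = np.array(points), np.array(weights)
    # Confidence-weighted average; heavily occluded views contribute less.
    return (weights[:, None] * points).sum(axis=0) / weights.sum()

# Hypothetical example with two ceiling cameras observing the same wrist.
I, zero = np.eye(3), np.zeros(3)
fused = fuse_multiview_joint([
    (np.array([0.1, 0.2, 2.4]), I, zero, 0.9),
    (np.array([1.3, 0.2, 2.2]), I, np.array([-1.2, 0.0, 0.2]), 0.6),
])
print(fused)  # fused 3D wrist position in the world frame
```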
The defense will be held in English on Thursday, Dec 1st, at 13:30 in the Hirsch amphitheater at IRCAD.