🎉Result Release!🎉 Segmentation Registration
In the 🤓STS 2025 Challenge, the evaluation criteria focus strictly on segmentation and registration accuracy. Specifically, Task 1 (Teeth and Pulp Root Canal Segmentation) inference results are evaluated using the Dice Similarity Coefficient (DSC), Normalized Surface Dice (NSD), Intersection over Union (IoU), and Identification Accuracy (IA). DSC and IoU measure the region error, while NSD assesses the boundary error. The IA metric evaluates the object-level localization (detection) performance for teeth. For Task 2 (Crown and Root Registration), performance is evaluated using Mean Translation Error (MTE) and Mean Rotation Error (MRE) to assess the alignment accuracy between the IOS and CBCT scans in millimeters and degrees, respectively. You can find the details in the Python code we released for computing the performance metrics.
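For intuition, here is a minimal NumPy sketch of how DSC, IoU, MTE, and MRE are commonly computed. It is illustrative only and is not the released evaluation code: the function names are hypothetical, the MRE convention shown (geodesic angle of the relative rotation) is an assumption, and NSD is omitted because it additionally requires a surface-distance computation. Please consult the official code for the exact definitions.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray):
    """Region-overlap metrics for one binary mask pair (illustrative only;
    the official definitions are in the released evaluation code)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)
    iou = inter / (union + 1e-8)
    return dsc, iou

def registration_errors(R_pred, t_pred, R_gt, t_gt):
    """MTE (mm) as the Euclidean distance between translation vectors, and
    MRE (degrees) as the geodesic angle of the relative rotation -- one
    common convention, assumed here; check the released code for the exact
    one used in the challenge."""
    mte = float(np.linalg.norm(t_pred - t_gt))
    R_rel = R_pred.T @ R_gt                     # relative rotation
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    mre = float(np.degrees(np.arccos(cos_theta)))
    return mte, mre
```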
Special Explanation: IA is defined as #{D ∩ G} / #{D ∪ G}, where G is the set of all teeth in the ground truth and D is the set of predicted teeth. #{D ∩ G} is the size of the intersection of D and G, i.e., the number of tooth instances correctly detected and labeled by the algorithm, and #{D ∪ G} is the size of the union of the prediction and the ground truth. The localization criterion is the Mask IoU score. In addition, we implement a greedy strategy to match reference and predicted objects: only objects with a consistent predicted category and a Mask IoU greater than 0.5 increment the #{D ∩ G} count by one.
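A minimal sketch of this IA computation is shown below. The per-label dictionary interface is a hypothetical simplification (one mask per tooth label, e.g. per FDI number), not the official data format; the released evaluation code is authoritative.

```python
import numpy as np

def identification_accuracy(pred_masks, gt_masks, iou_thr=0.5):
    """Sketch of IA = #{D ∩ G} / #{D ∪ G}.

    pred_masks / gt_masks are assumed to map a tooth label to a binary
    mask. A prediction counts toward #{D ∩ G} only when the ground-truth
    instance with the SAME label overlaps it with Mask IoU > iou_thr.
    """
    matched = 0
    for label, p in pred_masks.items():
        g = gt_masks.get(label)
        if g is None:
            continue  # no ground-truth tooth with this label
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        if union > 0 and inter / union > iou_thr:
            matched += 1  # category consistent and Mask IoU > 0.5
    total = len(set(pred_masks) | set(gt_masks))  # #{D ∪ G}
    return matched / total if total else 0.0
```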
🔔 We have now updated and released the evaluation code. If you encounter a bug or have questions about the evaluation metrics, please check our GitHub: https://github.com/ricoleehduu/STSR-Challenge/tree/main/STSR-2025/evaluation.
All metrics will be used to compute the ranking. The ranking scheme includes the following steps: