A2: Bone fitting
We want to visualize, compare, and analyze 3D scans of human bones taken with both the iPhone XR's TrueDepth camera and the Structure Sensor. Identifying human bones is important work for anthropologists. Currently, measurements of human bones are taken by hand, which is time-consuming, and the error rates depend heavily on the specialist's experience level. 3D photos and scans provide a new way to measure bones. We want to explore a tool that takes measurements of human bones from the 3D photos produced by the iPhone XR's TrueDepth camera and compare the results with those obtained from the Structure Sensor. Read more details about the process in this report.
Figure: 3D scans of a human bone using the Structure Sensor (top) and the iPhone XR's TrueDepth camera (bottom)
Part 1: Visualize and Analyze 3D Scans of Bones
- Get the Python code for the automatic bone-measurement tool from this GitHub repo. The recommended IDE for running this code is PyCharm; install the dependencies listed in requirements.txt. The repo already includes the 3D scans of bones taken with the Structure Sensor, saved as .obj files in the data directory. Examine the 3D scans. Four types of bones are scanned in the dataset: (1) Femur, (2) Humerus, (3) Radius, and (4) Tibia. (A short Python sketch for loading and inspecting a scan is given after this list.)
- Get the 3D scans of bones taken with the iPhone from this Dropbox link, in the "bones obj files" directory. Add the iPhone scans for the same four types of bones to the data directory of the Python code downloaded in the previous step.
- The Dropbox link also contains the necessary documentation about the bone measurement tool and the datasets. Read the documents carefully. There are also some 3D scans of an iPhone box, taken with both the iPhone XR's TrueDepth camera and the Structure Sensor, to give you an idea of what scans of a common object look like. Read more about this scan in the report.
- Use the tool (Python code) to run measurements on the 3D scans to find landmarks and calculate errors. To run measurements on 3D models, run test_scan.py in the "web" directory. This file takes either a single 3D model or multiple 3D models as input and writes the measurements to a *.csv file. Set the global variables in lines 25 to 31, such as the bone type, the input file name, and whether the model was taken with the iPhone or the Structure Sensor, then run the tool. For example, to run the measurement on femur_4.obj taken with the Structure Sensor, set
- bone_type = Bone.Type.FEMUR
- index_default = 4
- structure_sensor = True
- To calculate errors, take the output numbers and compare them to the ground truth. This can be done by running "results_anlysis.py" in the "web/core_alg/utilities" directory. If the input is multiple models, it will also output a *.csv file that stores the mean values, absolute errors, and standard deviations with respect to the ground truth. If the input is a single model, the error should be calculated manually (a minimal sketch of that calculation is also given after this list).
- Hypothesize why the iPhone errors are worse. Submit a PDF file with your results and analysis.
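To examine a scan outside the provided tool, the following is a minimal sketch of how one of the .obj files could be loaded and inspected in Python. It assumes the open3d package (not part of the tool's requirements.txt), and the file path is only an example.

```python
# Minimal sketch for loading and inspecting one of the .obj bone scans.
# Assumes open3d is installed (pip install open3d); the path is an example only.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("data/femur_4.obj")
mesh.compute_vertex_normals()  # needed for shaded rendering

print("vertices:", len(mesh.vertices), "triangles:", len(mesh.triangles))
print("bounding box:", mesh.get_axis_aligned_bounding_box())

o3d.visualization.draw_geometries([mesh])  # opens an interactive viewer
```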
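For the single-model case, the manual error calculation is simply the comparison of each measured value against its ground-truth value; across several models you would also report the mean and standard deviation, which is what results_anlysis.py produces for multiple inputs. A minimal sketch with hypothetical numbers (the real values come from the tool's *.csv output and the ground-truth measurements):

```python
# Minimal sketch of the manual error calculation, using hypothetical values (mm).
import numpy as np

measured = np.array([452.3, 448.9, 455.1])  # hypothetical femur-length results from three scans
ground_truth = 450.0                        # hypothetical ground-truth length

abs_error = np.abs(measured - ground_truth)
print(f"mean measurement:    {measured.mean():.2f} mm")
print(f"mean absolute error: {abs_error.mean():.2f} mm")
print(f"standard deviation:  {measured.std(ddof=1):.2f} mm")
```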
What to turn in:
A single PDF file with figures generated by running the tool, together with a description and a discussion of your hypothesis for why the iPhone errors are worse.
Submission deadline: Jan 19, 2021 at 11:59 pm
Part 2: Use ICP to Align Individual 3D scans
- For this part, you will need to learn to use the iterative closest point (ICP) algorithm to align individual 3D scans and merge them into a single scan. ICP can be applied to 3D models using existing tools such as MeshLab, CloudCompare, or MATLAB.
- Load and visualize the 3D models using MeshLab, CloudCompare, or MATLAB.
- Use ICP in one of these tools to align the individual 3D scans in the "picture" directories of the bone datasets and merge them into a single scan. Some tutorial links for using ICP to align individual 3D scans are given below; a scripted alternative in Python is sketched after this list.
- ICP using MeshLab
- ICP using MeshLab (video)
- ICP using CloudCompare
- ICP using CloudCompare (video)
- ICP using MATLAB
- ICP using MATLAB (video)
- CloudCompare can be used for comparing and visualizing two 3D meshes.
- Now, compute the error on the landmarks found from the ICP-merged scan.
- Determine whether the error from the merged scan is better or worse than from the original scans. Discuss your findings in a PDF report.
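If you prefer to script the alignment rather than use one of the GUI tools above, the following is a minimal sketch using Open3D's ICP registration. This is not part of the assignment's required tooling; the file names, sampling density, correspondence threshold, and the assumption that the scans are already roughly aligned are all placeholders to adapt to the actual data.

```python
# Minimal ICP sketch with Open3D, as a scripted alternative to MeshLab /
# CloudCompare / MATLAB. File names and parameters are assumptions.
import copy
import numpy as np
import open3d as o3d

# Load two individual scans (hypothetical file names) and sample point clouds
# from the meshes so ICP can operate on points.
source_mesh = o3d.io.read_triangle_mesh("picture/scan_01.obj")
target_mesh = o3d.io.read_triangle_mesh("picture/scan_02.obj")
source = source_mesh.sample_points_uniformly(number_of_points=50000)
target = target_mesh.sample_points_uniformly(number_of_points=50000)

threshold = 5.0        # max correspondence distance in scan units (assumed)
init = np.identity(4)  # assumes the scans are already roughly pre-aligned

result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result)  # reports fitness and inlier RMSE

# Apply the estimated transform to the source mesh and merge the two scans.
aligned = copy.deepcopy(source_mesh).transform(result.transformation)
merged = aligned + target_mesh
o3d.io.write_triangle_mesh("merged.obj", merged)
```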
Submission deadline: Jan 29, 2021 at 11:59 pm
What to turn in:
A single zip file containing a PDF report and code/files (if any) with clear instructions to run.
Part 3: Improve the Final Error
- In this final part, you will need to improve the final error in detecting landmarks in any way, either using a tool or by writing code. For example (a sketch of the reconstruction/smoothing route is given after this list):
- Fitting a reconstruction to the point cloud to lower noise (ball pivoting, Poisson, or another method).
- Running a denoising or smoothing algorithm.
- Merging with ICP and checking the error after merging, versus using a single scan.
- Using your iPhone to take 3D pictures of a paper towel roll (an approximate bone), and seeing whether you can improve the scan data to have less noise, or less smoothing around the bottom of the bone shape.
- You will need to explain your method in a submitted PDF report, providing visuals and screenshots to explain what you did and why you think it improves the result.
- You will need to produce error numbers versus ground truth using the Python code (unless you are taking new pictures, in which case this does not apply). We will compare who in the class can obtain the lowest error (assuming any of us can lower the error).
- You can work on either the iPhone data (starts with higher error) or the Structure Sensor data (starts with lower error).
- You can work on either pictures or scans.
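As an illustration of the reconstruction/smoothing route, the following is a minimal Open3D sketch that applies Taubin smoothing and, alternatively, rebuilds the surface with Poisson reconstruction. It is only one possible approach; the input path and all parameters (iteration count, sampling density, Poisson depth) are assumptions, and you would re-run the measurement tool on the output to check whether the landmark error actually improves.

```python
# Minimal sketch of one possible denoising route: Taubin smoothing of a scan,
# plus Poisson surface reconstruction from a resampled point cloud.
# The input path and all parameters are assumptions.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("data/femur_4_iphone.obj")  # hypothetical path
mesh.compute_vertex_normals()

# Option 1: smooth the existing mesh. Taubin smoothing preserves overall shape
# better than plain Laplacian smoothing.
smoothed = mesh.filter_smooth_taubin(number_of_iterations=20)
smoothed.compute_vertex_normals()

# Option 2: resample the surface and rebuild it with Poisson reconstruction,
# which tends to average out high-frequency scanner noise.
pcd = mesh.sample_points_poisson_disk(number_of_points=30000)
pcd.estimate_normals()
recon, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

o3d.io.write_triangle_mesh("femur_4_smoothed.obj", smoothed)
o3d.io.write_triangle_mesh("femur_4_poisson.obj", recon)
```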
Submission deadline: Feb 05, 2021 at 11:59 pm
What to turn in:
A single zip file containing a PDF report and code/files (if any) with clear instructions to run.