Automatic Head Model including 1010-System / Electrode Placement
-
@lucky_lin "Other tissues" is basically fat. I don't think this is an issue.
Note that the head is not perfectly aligned with the axes, i.e., your XY plane may not be perpendicular to the "up" direction. You can also change the slice by using an interactive slice instead of an axis-aligned one.
Finally: yes, it is normal that real subjects/people have asymmetrical features. If you are interested in this research question, you could test the impact of individual tissues (conductivity) by
- assigning the same tissue property to all tissues (except for Air?). Is this nearly/more symmetrical?
- assigning the same tissue property to all tissues, except for one tissue (and Air). Candidates are the thin layers around the brain, e.g. Dura, CSF, Brain grey matter, Skull cortical/cancellous, [Galea, Muscle, ...].
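The homogenization experiment above can be sketched as a small helper that builds the conductivity assignments. The tissue names and conductivity values below are illustrative placeholders, not the actual Sim4Life material database:

```python
# Sketch of the homogenization test above. Tissue names and
# conductivities (S/m) are illustrative placeholders only.
baseline = {
    "Dura": 0.16,
    "CSF": 1.65,
    "Brain grey matter": 0.28,
    "Skull cortical": 0.008,
    "Air": 0.0,
}

def homogenize(conductivities, sigma=0.3, keep=("Air",)):
    """Assign the same conductivity to all tissues except those in `keep`."""
    return {t: (s if t in keep else sigma) for t, s in conductivities.items()}

# Case 1: everything uniform (except Air)
uniform = homogenize(baseline)

# Case 2: uniform except one candidate layer, e.g. CSF
uniform_but_csf = homogenize(baseline, keep=("Air", "CSF"))
```

Running both cases and comparing the resulting fields would show how much each individual tissue layer contributes to the asymmetry.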
-
@bryn I tried to set both the rotation and translation of T1W to 0 and then generate the reference point, but it reported an error:
Modeler : [Error] Exception during import: Expecting 'Version 1.0' on first line
Modeler : [Error] operation unsuccessful.
-
The T1w image is placed in world (scanner) coordinates. This is useful, e.g. to align different acquisitions (e.g. T1, T2 with different resolutions or field of view).
You could remove the rotation and translation, e.g., by setting an identity transform. However, in my experience, it is often useful to preserve the position in world coordinates.
img = XCoreModeling.Import("some_t1w_mri.nii.gz")
img.Transform = XCoreModeling.Transform()
-
I don't understand what you are doing. The error looks like Sim4Life cannot parse the .pts file produced by the landmark predictor.
- did you edit the .pts file manually?
- where/how did you set the rotation/translation to "0"?
- did you try to set the transform [to zero] (before running the prediction) as suggested in my last post?
-
I can reproduce your issue. If I set the transform to "0", i.e. Identity, the predictor fails. The head40 segmentation is also less accurate! We need to investigate.
A workaround would be:
- load image
- predict landmarks & segmentation
- compute inverse image transform
- apply this inverse to landmarks/segmentation/surfaces/etc
# assumes verts and labelfield are already predicted (without setting the transform to "0")
inv_tx = img.Transform.Inverse()
# transform segmentation
labelfield.ApplyTransform(inv_tx)
# transform landmarks
for v in verts:
    v.ApplyTransform(inv_tx)
-
The issue is that our neural network was trained with the data in RAS orientation (with some deviation, ±15 degrees, and flipping along all axes). If you manually edit the transform, you break the assumptions used to pre-orient the data into RAS ordering.
Since RAS is a widely used convention in neuroscience, and medical images are always acquired with a direction (rotation) matrix and an offset (translation), I think it is best not to modify the transform.
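To illustrate what "RAS orientation" means in terms of the image affine, here is a minimal helper that derives approximate anatomical axis codes from the direction matrix, similar in spirit to nibabel's aff2axcodes (this is a simplified sketch, not the Sim4Life or nibabel implementation):

```python
import numpy as np

def axis_codes(affine):
    """Approximate anatomical axis codes ('R/A/S' vs 'L/P/I') from a
    4x4 voxel-to-world affine. Simplified: assumes the rotation part
    is close to a signed permutation (no strongly oblique angles)."""
    R = np.asarray(affine)[:3, :3]
    pos, neg = "RAS", "LPI"
    codes = []
    for col in R.T:  # one column per voxel axis
        world_axis = int(np.argmax(np.abs(col)))
        codes.append(pos[world_axis] if col[world_axis] > 0 else neg[world_axis])
    return tuple(codes)

# An identity affine is already RAS-oriented:
codes = axis_codes(np.eye(4))  # -> ('R', 'A', 'S')
```

An image whose first voxel axis runs right-to-left would instead report 'L' for that axis, which is exactly the kind of information the direction matrix carries and a manual "zeroed" transform destroys.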
For instance, if you try to assign DTI-based conductivity maps, you will need to rotate the grid AND the tensors accordingly. It can be done, but it will be more effort...
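To see why the tensors themselves need rotating (not just the grid): a symmetric tensor D transforms under a rotation R as R D Rᵀ. A minimal numpy sketch, purely illustrative and not the Sim4Life DTI pipeline:

```python
import numpy as np

def rotate_tensor(D, R):
    """Rotate a symmetric 3x3 tensor: D' = R @ D @ R.T."""
    return R @ D @ R.T

# Anisotropic tensor with its principal axis along x
D = np.diag([1.0, 0.2, 0.2])

# 90-degree rotation about z: maps the x-axis onto the y-axis
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])

D_rot = rotate_tensor(D, R)
# Eigenvalues are preserved; the principal axis now points along y.
```

Resampling the tensor field onto a rotated grid without applying this conjugation would leave every tensor's principal direction pointing the wrong way.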
If this is to investigate whether the fields are (nearly) symmetric, I suggest you
- find an approximate symmetry plane (with respect to the brain, skull, ...)
- align the plane of a slice viewer perpendicular to the symmetry plane
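One simple way to get such an approximate symmetry plane is from paired left/right landmarks: the plane passes through the landmark midpoints, with its normal along the mean left-to-right direction. A numpy sketch (the landmark pairs are hypothetical inputs, e.g. exported from the predicted landmarks):

```python
import numpy as np

def symmetry_plane(left_pts, right_pts):
    """Estimate an approximate mid-sagittal plane from paired
    left/right landmarks. Returns (point_on_plane, unit_normal)."""
    L = np.asarray(left_pts, dtype=float)
    Rp = np.asarray(right_pts, dtype=float)
    midpoints = 0.5 * (L + Rp)
    normal = np.mean(Rp - L, axis=0)  # mean left-to-right direction
    normal /= np.linalg.norm(normal)
    return midpoints.mean(axis=0), normal

# Example: perfectly symmetric landmark pairs about the x = 0 plane
left = [[-30.0, 10.0, 0.0], [-25.0, -5.0, 15.0]]
right = [[30.0, 10.0, 0.0], [25.0, -5.0, 15.0]]
point, n = symmetry_plane(left, right)
# point lies on x = 0, and n points along +x
```

The returned point and normal can then be used to orient a slice viewer perpendicular to the plane, as suggested above.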
-
The default is 40 tissues. To be explicit you can specify this via
import ImageML
labelfield = ImageML.HeadModelGeneration([img], output_spacing=0.6, add_dura=False, version=ImageML.eHeadModel.head40)
For 30 (or 16) tissues you would specify the version head30 (or head16):
import ImageML
labelfield = ImageML.HeadModelGeneration([img], output_spacing=0.6, add_dura=False, version=ImageML.eHeadModel.head30)
But please note: the versions are an evolution. The head16 segmentation is not simply the same segmentation with fewer tissues. It is also less accurate, as it was the first version we published (and it was trained on less training data).
-
@bryn Thank you very much for your response! I have a question: what is the difference between constructing a head model using T1-weighted (T1W) and T2-weighted (T2W) images versus using only T1W images? And why can only 16 tissue types be segmented when both T1W and T2W images are used?
-
@lucky_lin In the first version of our head segmentation (head16), we trained with a smaller dataset for which both T1w and T2w images were available. We trained two networks: one with just T1w as input, and one with T1w + T2w as input.
In our later work we extended the training data, but we only have T1w images. Therefore, head30 and head40 only need a T1w image.