8. Segmentation: Subcortical Brain Structures and FIRST (Struc E2)

00:00
Welcome to this video on subcortical brain segmentation. This is a video about using the tool FIRST within FSL. My name is Mark Jenkinson, and I'll be talking to you about how to use this tool to segment, or delineate, specific structures within the deep gray matter of the brain. The structures that we can segment are illustrated here, and they are quite specific: we have 15 structures, that is, seven which are separated into left and right
00:32
halves, plus the brain stem. These are the only ones this particular tool is able to do, because it uses training data to work out how to segment these regions. In the same way that you would train a person to find these areas of the brain and segment them, the algorithm also needs to be trained to understand where they are, what they look like, and how to find them. For that we have training data supplied to us by the Center for Morphometric Analysis
01:03
in Boston, comprising 336 complete datasets which have been very carefully manually annotated. They were annotated by people in that lab who spend a long time learning how to do it, and who then have to pass a consistency test against previous labelers before they can start labeling new, unseen brains, so we know this is very high-quality data. The labeling has been done on T1-weighted images only, and there is a very wide range in the demographics:
01:33
it includes both adults and children, and it includes a range of healthy individuals as well as very common pathologies, including schizophrenia, Alzheimer's disease, ADHD, and other conditions present in the population. That is deliberate, so that we have a tool which is very general: it applies to a broad range of images that you might want to segment, because it has seen examples like them.
02:03
Like most training-based algorithms, it can't really cope with something which is very different from the training data it has seen. We've made the training set deliberately broad so that it can cope with a wide range of ages and diseases, but if you're interested in segmenting an individual who sits outside of this demographic and is noticeably different from it (say, very young children, or a particular pathology which is very pronounced or very
02:35
different from what the training set contains), the chances are that FIRST will not work well. You can still try it and see how it does, but you are certainly going outside the realms of what it has been designed to do. What FIRST does, both when building the model and when you give it data to work with, is relate the model it has learned to the images you provide. It does that through registration, because we work in a standard space.
03:07
That's how we combine information from many different individuals, so there is a registration process to standard space. The first step is a fairly standard one: an affine registration to the 1 mm standard-space template, simply using the FLIRT registration tool. The second step is not a nonlinear process, as you might expect, but another affine registration, refined to concentrate just on the subcortical features.
03:37
This one is done so that it's better at aligning the subcortical structures we're interested in, and it may be worse at aligning the edge of the brain and other parts of the cortex. We do not do a nonlinear registration because one of the things we're interested in is the average shape of these structures and how they vary across the population, so we must make sure we preserve the shape of these structures in order to measure that variation. If we did nonlinear registration we would remove that variation; it would be encoded in
04:10
the warp fields, but this is a more direct way of looking into the images and seeing what that variation in shape looks like. So we preserve it within the images by sticking to a linear, or affine, registration process, even though it's a two-stage one, as you can see here. Fundamentally, what this tool is trying to do is look at individual anatomical structures within the brain, and it models the boundary of each one using a 3D mesh, which is
04:41
basically just a set of points on the surface of that structure. Here's a 2D example: you can see, down at the bottom, points which are connected and lie on the boundary. In 3D it's a triangular mesh, which I'm sure you've seen before. The idea is that the mesh represents where the edge of the structure is, and therefore everything inside it represents the structure itself. Both when learning the model and when fitting that model to
05:13
the data, that is, the images you provide, FIRST iteratively refines the locations of these vertices, or points, so that they sit on the boundary, balancing what are good shapes against what fits well with the data. There are many things it needs to learn about the characteristics of these subcortical shapes. The main one is the average shape; that's a crucial thing we need to learn. But we also need to learn the likely variations
05:44
around that average shape which characterize the population. We know that certain changes in a structure are likely to be seen for biological reasons, and others are never likely to be seen biologically but might occur anyway due to artifacts or noise in the image. So this is useful for separating biologically plausible changes from ones which might just be driven by some form of artifact or noise within
06:15
the image. We have both of these things, and we represent them with a formula. You don't need to concentrate on this at all; it might be helpful for some of you, but if equations aren't your thing, that's totally fine. You can still use the tool without knowing anything about this except the basic concept: there's a mean component, represented by the mu vector, and then we represent the likely variations in terms of modes, or equivalently, eigenvectors,
06:46
similar to what you might see if you're familiar with PCA. There are also singular values, which tell us how much of each mode we typically see in the population: is it a small amount of that change or a large amount? Then there are individual shape parameters, the b values, which tell us how much of each mode we see for a particular subject. The b values are how we adjust the amount of each mode to best fit each individual subject, whereas the modes themselves
07:17
characterize the whole population. So it's the b values that we adjust in order to characterize each individual we are trying to segment, whereas we learn the mean, the modes, and the singular values from the whole population, that is, the whole training set. In addition to knowing about the shape, we also need to know about the intensity, because it's the link between shape and intensity which allows us to take an image, which has lots of intensity information in it, and figure out where that shape should be.
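The shape model just described (a mean shape plus weighted modes of variation) can be sketched as a PCA-style decomposition in NumPy. This is a toy illustration with made-up numbers and dimensions, not FIRST's actual model or training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set": 20 subjects, each a shape of 5 vertices in 3D,
# flattened to a 15-element coordinate vector (made-up numbers).
shapes = rng.normal(size=(20, 15))

mu = shapes.mean(axis=0)          # the mean shape (the mu vector)
U, s, Vt = np.linalg.svd(shapes - mu, full_matrices=False)
modes = Vt                        # each row is one mode of variation (eigenvector)
sing_vals = s                     # how much of each mode the population shows

# Fitting an individual amounts to choosing its b values: the amount of
# each mode added to the mean to reconstruct that subject's shape.
b = np.zeros(len(sing_vals))
b[0] = 1.5                        # e.g. 1.5 "units" of the first mode
reconstructed = mu + modes.T @ b
```

The modes and singular values are learned once from the whole training set, while b is adjusted per subject during fitting, exactly as described in the lecture.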
07:49
Effectively, that means learning what changes in intensity we expect to see at the boundary of the structure. We do that by taking our training data, figuring out where that boundary is, and then sampling the intensity along little lines which are normal to the surface. Here you can see an illustration: there's a yellow boundary, and the red lines represent the surface normals. In practice we have many more than this, but this is just an illustration. For each of those normals,
08:20
we sample at different points along it and record what the intensity looks like, which is illustrated by the little black curve there. We record that for this individual, and then for all the individuals, at all the different points along the boundary. At each point along the boundary we then build up a model of how the intensity changes at that anatomical location across the whole population, and that allows us to learn the average change in intensity that we see
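That normal-direction sampling can be sketched like this: a toy 3D image that is dark on one side of a boundary and bright on the other, and a function that reads intensities at evenly spaced offsets along a vertex's surface normal. The function name, nearest-neighbour lookup, and all numbers are invented for illustration (FIRST's actual sampling is more careful):

```python
import numpy as np

def sample_profile(image, point, normal, offsets):
    """Sample intensities at point + t * normal for each offset t,
    using nearest-neighbour lookup (a simplification, for illustration)."""
    samples = []
    for t in offsets:
        p = np.round(point + t * np.asarray(normal)).astype(int)
        samples.append(image[tuple(p)])
    return np.array(samples)

# Tiny synthetic 3D "image": dark on one side of x = 5, bright on the other.
image = np.zeros((10, 10, 10))
image[5:, :, :] = 100.0

point = np.array([4.2, 5.0, 5.0])   # a vertex sitting near the boundary
normal = np.array([1.0, 0.0, 0.0])  # outward surface normal
offsets = np.arange(-2, 3)          # sample from -2 to +2 voxels

profile = sample_profile(image, point, normal, offsets)  # dark inside, bright outside
```

Collecting one such profile per vertex per training subject, and pooling them per vertex, gives the per-boundary-point intensity model the lecture describes.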
08:51
but also the typical variations: the plausible biological changes in intensity represented by our training set. As a technical aside, intensities in MRI are not quantitative, which means there is an arbitrary scaling factor in every MRI image. To compensate for that, we need to rescale the intensities so that they're all in a common range. We do that by using the intensity within the middle of the structure as a reference,
09:24
or, if the structure happens to be small, or one which might atrophy a lot, such as the hippocampus, we'll use a nearby structure such as the thalamus. I mention that both because it can be useful to know, and more specifically because you may see it as an option in the tool: you can choose which structure to use as a reference, and for some structures the default is a thalamus reference. When you see a thalamus reference, it doesn't mean the tool is segmenting the thalamus; it is just using the thalamus
09:55
as a way of normalizing the intensity values. So if you see that when you're running the tool, now you understand why it's there. As I said before, what we actually represent is the boundary of these shapes, so we get points on the surface of each shape. If you want a labeled voxel image, that is, you want to know for each voxel whether it is part of a given shape or not, then we have to go through an extra step, called boundary correction. It's called boundary correction because, if you look at the boundary of a shape, which
10:27
is represented by the red contour down at the bottom there, you can see that the boundary passes through the middle of many voxels. Some voxels are clearly inside the shape; they're easy, and they're the interior voxels labeled here. There are also exterior voxels, shown in white, which are outside the shape. But there is a whole set of voxels that the boundary passes through. Now, we could work out the proportion of each such voxel that lies inside and outside the boundary, but our boundaries are not so accurate that
10:58
they can be precisely located within a voxel; that would be sub-voxel accuracy, which we really can't achieve. So what we do instead is go through each boundary voxel and determine, from its intensity, whether it is more like the immediately interior voxels or more like the exterior voxels, and we do a voxel-wise labeling of each of them. It's not a partial volume labeling like we had in FAST; in this case it's a hard classification: each voxel is either
11:30
inside the structure, part of it, or it is not. We do that because these subcortical structures are anatomically tricky: they are collections of axons and cell bodies in different proportions than you would see, say, at the cortical surface. Because of that, different points along these structures are different mixtures of what we would consider gray matter, which is predominantly cell bodies, and white matter, which is predominantly axons, and the intensity fluctuates.
12:02
So it's not straightforward to define what a partial volume would be, and when we do the boundary correction you get hard labels as a result. There's also an option to look at the labels before boundary correction has been done, where you can see which voxels were on the boundary and had to be relabeled, and which were interior; here, for example, the blue ones are separate from the orange interior voxels. That's one of the outputs which
12:34
you also have access to. You can have a look, and if you're unhappy with how the boundary correction was done, there are options you can change to make it work differently and optimize it for your particular scans, and that can be useful sometimes. Getting a labeled image can be very useful for making masks of the different structures; we might want a mask of the hippocampus, say, in which case we want to do the boundary correction and get that image out.
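The hard classification at the boundary can be sketched as follows: for each voxel the boundary passes through, compare its intensity with the mean of the neighboring interior and exterior voxels, and assign it to whichever is closer. This is a simplified stand-in for FIRST's actual rule, and the intensity values are invented:

```python
import numpy as np

def classify_boundary_voxel(intensity, interior_vals, exterior_vals):
    """Hard classification: is this boundary voxel's intensity closer to
    the mean of the adjacent interior voxels or the adjacent exterior ones?"""
    d_in = abs(intensity - np.mean(interior_vals))
    d_out = abs(intensity - np.mean(exterior_vals))
    return "interior" if d_in <= d_out else "exterior"

# Hypothetical intensities: subcortical gray around 70, surrounding tissue around 110.
interior = [68, 72, 70]
exterior = [108, 112, 110]

label_a = classify_boundary_voxel(85, interior, exterior)   # nearer 70: interior
label_b = classify_boundary_voxel(100, interior, exterior)  # nearer 110: exterior
```

Note the output is a hard label per voxel, not a partial volume estimate, matching the behaviour described above.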
13:06
That's one of the things we can do with FIRST. Another is to look at how the shape and size of a structure change, either between different groups (you might have a control group and a patient group, as we're showing here) or over time: there might be some kind of training, some plasticity you're interested in, some sort of intervention, maybe diet; it could be all sorts of things, and you're interested in how the subcortical structures change as
13:36
a consequence of whatever that is. We can do that very directly with FIRST, because there's an option called vertex analysis. Vertex analysis is all about looking at how these structures change by using the boundary points, that is, the vertices which sit on the surface of these structures. We're going to go vertex by vertex and look at how they change. That's similar to a voxel-wise analysis, but this is more directly related to our boundary
14:05
surfaces and our native representation. Here I'll illustrate how this works. We've got two controls and two patients; obviously, in a real study you will use more than two (never use two in your studies!), but here we're just doing this for illustrative purposes. What I'm going to do is bring them all into a common space, and now you can actually see how they differ. It wasn't easy to see that before: if I look here, it's pretty difficult to see which parts of those structures
14:37
are differing, but once I put them into a common space we can easily see where that is. What the algorithm is going to do is consider each vertex in turn. We keep a correspondence of the vertices, so we effectively have a number attached to each vertex on the surface, which should always sit at roughly the same anatomical location. Using that, we look at how the vertices are arranged with respect to the average surface. We calculate the average surface
15:08
across all of the groups, and then we measure the displacements with respect to that surface along the normal direction: how far away is each vertex from the average surface, as a signed distance, so vertices which are inside are negative and ones which are outside are positive. That allows us to see what's going on, and we can do a statistical analysis to see whether we have a predominance of one sign or the other.
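The per-vertex signed distances can be sketched like this: project each subject's vertex displacement from the average surface onto the outward unit normal at that vertex, giving a negative value inside the average surface and a positive value outside. The data here are invented toy coordinates, purely for illustration:

```python
import numpy as np

def signed_distances(subject_verts, mean_verts, mean_normals):
    """For each corresponding vertex, the component of the displacement
    from the average surface along the (unit) outward normal:
    negative = inside the average surface, positive = outside."""
    disp = subject_verts - mean_verts
    return np.einsum("ij,ij->i", disp, mean_normals)  # row-wise dot products

# Two toy vertices on an average surface, with outward unit normals.
mean_verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mean_normals = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])

# A subject whose first vertex sits 0.5 inside the surface
# and whose second sits 0.3 outside along its normal.
subject = np.array([[0.0, 0.0, -0.5], [1.3, 0.0, 0.0]])

d = signed_distances(subject, mean_verts, mean_normals)  # one value per vertex
```

The vertex correspondence across subjects is what makes "vertex 17 in subject A" comparable to "vertex 17 in subject B", and these signed values are what go forward into the statistics.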
15:39
We might be doing a two-group test, or, as I say, we might be correlating with different things, like the amount of training or some kind of exercise intervention; there are all sorts of things we might be interested in looking at. At that point we're just putting it into standard statistical processing. We've got these signed measures, and we actually put them back into a volume at that point, because many of our programs are optimized to do statistics on volumes, in particular a program called randomise, which
16:11
provides a way of doing the statistics using a GLM, or general linear model. If you're not familiar with these, you can find the terms explained in other videos, but it's the same concept we use everywhere in neuroimaging for doing our statistics: we use the GLM, we set up a design, which could be looking at the difference between two groups or correlating with some other factor, and then we see if there is anything we can detect in the placement of these vertices which tells us about changes in the
16:42
shape or size of these subcortical structures. Another thing to keep in mind is that we can do this analysis either in the common space, that is, MNI standard space, or in native space. We normally recommend doing it in MNI space, because that normalizes for brain size: it's an affine registration to MNI space, so we still preserve all of the shape information, but it at least takes into account the size of the head, which
17:13
is a common confound that we need to deal with whenever we're doing volumetric or geometric analyses of brains, because things just scale with the overall size of the head, and that's a confound we're not interested in. We normally want to get rid of it, and we can do so by doing our analysis in MNI space; alternatively, you can take a measure of head size and put it directly into your statistics as a confound, or you could do both.
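Putting head size into the statistics as a confound just means adding an extra column to the GLM design matrix. Here is a minimal least-squares sketch with invented data (in practice randomise handles the model fitting and the permutation-based inference):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12

group = np.repeat([0.0, 1.0], n // 2)          # two-group regressor (0 = control, 1 = patient)
head_size = rng.normal(1500.0, 100.0, size=n)  # confound, e.g. an intracranial volume proxy
# Simulated measure: a true group effect of 2.0 plus a head-size effect plus noise.
y = 2.0 * group + 0.01 * head_size + rng.normal(0.0, 0.1, size=n)

# Design matrix: intercept, group effect, head-size confound.
X = np.column_stack([np.ones(n), group, head_size])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

group_effect = beta[1]   # group difference, adjusted for head size
```

Because head size is in the design, the group coefficient estimates the group difference with the head-size scaling regressed out, which is the adjustment the lecture recommends.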
17:43
But whatever you do, you should take head size into account as a confounding factor in some way. In terms of running FIRST in practice within FSL: as I said, we need T1-weighted images, and because our training data was all T1-weighted, this program can only work with T1-weighted images; that's the only kind of image it has built a model to understand. There's also a
18:17
model which is provided with FSL, so that is, in a sense, behind the scenes; you've already got it. When you run FIRST, it's a command-line program with a very simple call: you run it with your T1-weighted image as input, and it will do the registration steps, then fit the meshes (the vertices on the surface), and then do the boundary correction. You will see outputs associated with each of these steps, and that's why we're explaining the steps to you: so you understand what the outputs are.
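The simple call mentioned here is FSL's `run_first_all` wrapper. As a sketch, it can be assembled and launched from Python like this; the filenames are placeholders, the call assumes FSL is installed and on your PATH, and you should check the usage message of your own installation for the full set of options:

```python
import subprocess

# Minimal sketch of invoking FIRST via FSL's run_first_all wrapper.
# "t1.nii.gz" and "subject1_first" are placeholder names.
cmd = [
    "run_first_all",        # FSL's wrapper script for FIRST
    "-i", "t1.nii.gz",      # input T1-weighted image
    "-o", "subject1_first", # output basename
]

# subprocess.run(cmd, check=True)  # uncomment to actually run it (needs FSL)
print(" ".join(cmd))
```

Running it produces the intermediate registration, mesh, and boundary-corrected outputs that the lecture says you should inspect.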
18:48
You can look at the outputs, know what to expect to see, and then identify whether there are any problems and where the fixes need to be. As I said, no tool is 100% accurate, so you will encounter a problem at some point. It won't happen very often, but you always need to be alert to it, and looking at these output stages is really important for knowing how things are working, and particularly for identifying where problems might occur and where
19:18
failures might have happened. So it's important to understand those steps and to look at those outputs, particularly when you're not happy with the final output for any reason. If what you want is a labeled image, with the different subcortical structures labeled in a voxel-wise fashion, then that's all you need to do. If you want to do other analyses, for example analyzing the volume, you can calculate the volume as a single number for a structure and then look at how that changes by plugging it into any
19:48
statistical package, like SPSS. We can extract the volume using a tool called first_utils, and first_utils is also what is used to create the vertex analysis, that is, for looking specifically at changes in shape and size; another tool, randomise, is then needed to do the statistics associated with that. These processes are all explained in more detail in the practicals associated with the FSL course.
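Conceptually, extracting a structure's volume from the boundary-corrected labeled image is just counting the voxels carrying that label and multiplying by the volume of one voxel. This generic NumPy sketch shows the idea (it is not first_utils itself, and the label value 17 and voxel sizes here are invented for the toy example):

```python
import numpy as np

def structure_volume(label_img, label, voxdims_mm):
    """Volume of one labeled structure, in mm^3: the number of voxels
    carrying that label times the volume of a single voxel."""
    n_voxels = int(np.sum(label_img == label))
    return n_voxels * float(np.prod(voxdims_mm))

# Toy labeled image: an arbitrary label value 17 fills a 4x4x4 block.
img = np.zeros((10, 10, 10), dtype=int)
img[2:6, 2:6, 2:6] = 17

vol = structure_volume(img, 17, voxdims_mm=(1.0, 1.0, 1.0))  # 64 voxels, 1 mm^3 each
```

The single number per subject produced this way is what you would then carry into an ordinary statistical package, as described above.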
20:22
So, in summary, FIRST is a tool for segmenting specific deep gray structures; you saw the list of structures at the start. We have a very broad training set with very general demographics, which will hopefully match whatever you are looking at: whatever you are looking at won't typically span that whole range, but as long as it sits within that set, so that FIRST has seen examples of images like the kind you're working with, then that's all fine. It only works with
20:54
T1-weighted images, because that's what it has been trained on. It models the average shape and intensity, and also the plausible variations of these that it has seen in the training data, so that it can separate a biologically plausible change from something which might be driven by noise or artifacts. It represents the boundary of each shape as a set of points, or vertices, and if you want a voxel-labeled image, it goes through a separate boundary correction step; again, there are options available
21:25
for adjusting that boundary correction step. You can also look at changes in shape and size of these structures by performing vertex analysis, which is one of the really useful tools for looking at these kinds of changes. There are other ways we can look at changes in structures, particularly in the amount of gray matter, and we'll see those in other tools, such as voxel-based morphometry, or VBM, which we present in other videos.
