MRG: Plot NIRS data between source and detector #7620

Merged: 7 commits into mne-tools:master on Apr 23, 2020

Conversation

@rob-luke (Member) commented Apr 17, 2020

Reference issue

Discussed in #7057 and mne-tools/mne-nirs#59

What does this implement/fix?

Adds an option to plot a line between source-detector pairs in plot_alignment.

Additional details

This PR originally contained code to plot data over the head, but that code has been moved to MNE-NIRS.

@rob-luke (Member Author)

I don't know why this is throwing an error. It runs on my computer.

@rob-luke (Member Author) commented Apr 17, 2020

Here is an example of what the code currently looks like on my computer (it's a different sensor montage, meant to cover the inferior parietal lobe and superior temporal gyrus, and the data/colours are currently just randn). I am happy with the overall look; now I need to make the code nice.

[image]

@agramfort (Member)

@GuillaumeFavelier can you review this one? Thanks.

@rob-luke (Member Author)

Thanks. Any review is appreciated. It's early days, but any push in the right direction helps.

@agramfort (Member)

agramfort commented Apr 17, 2020 via email

@rob-luke (Member Author)

Great. I will look into _Renderer then.

@rob-luke (Member Author) commented Apr 17, 2020

OK, using _Renderer it now looks like this (again a different dataset)...

[image]

which I am OK with.

@codecov (bot) commented Apr 17, 2020

Codecov Report

Merging #7620 into master will increase coverage by 0.03%.
The diff coverage is 93.93%.

@@            Coverage Diff             @@
##           master    #7620      +/-   ##
==========================================
+ Coverage   90.08%   90.11%   +0.03%     
==========================================
  Files         454      455       +1     
  Lines       83551    83902     +351     
  Branches    13228    13274      +46     
==========================================
+ Hits        75266    75611     +345     
- Misses       5420     5422       +2     
- Partials     2865     2869       +4     

@larsoner (Member)

Nice!

Might be worth setting sphinx_gallery_thumbnail_number = 16 in the example so that the brain is the thumbnail?
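
A minimal sketch of where that setting would live, assuming the example is a standard sphinx-gallery script; 16 is the figure index suggested above:

    # In the example script, this special sphinx-gallery comment selects which
    # generated figure is used as the gallery thumbnail:
    # sphinx_gallery_thumbnail_number = 16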

@rob-luke (Member Author)

People do love a brain picture :)

@GuillaumeFavelier (Contributor)

Very cool plot and usage of _Renderer here ^^

@larsoner (Member)

I had another API thought. I'm not sure we even need a separate function. For MEG we have meg=('sensors', 'helmet'). We should probably just have (by default) fnirs=('sensors', 'pairs'), where 'pairs' means "draw a line between them". Then we don't even need a new keyword argument if we can live with users not getting to choose the colormap.

@larsoner (Member)

(I mean this should all be done in plot_alignment)
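
A hedged sketch of the proposed call, assuming raw_intensity is an fNIRS Raw object (e.g. loaded with mne.io.read_raw_nirx) and the fsaverage template subject; 'sensors' is the spelling proposed here, while the example merged later in this thread spells it ['channels', 'pairs']:

    import mne

    # raw_intensity is an assumed fNIRS Raw object; 'pairs' asks plot_alignment
    # to draw a tube between each source-detector pair, in addition to showing
    # the optodes themselves.
    fig = mne.viz.plot_alignment(raw_intensity.info,
                                 subject='fsaverage', trans='fsaverage',
                                 surfaces=['brain'],
                                 fnirs=('sensors', 'pairs'))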

mne/viz/nirs.py Outdated
If True, plot the digitization points; 'fiducials' to plot fiducial
points only.
ecog : bool
If True (default), show ECoG sensors.
Review comment (Member):

I would get rid of all of these other channel type options and only show NIRS channels. If people want NIRS + other channels they should just use plot_alignment and not see the source-detector pairs.

Review comment (Member):

Ignore this if you agree with my API proposal below about just using plot_alignment directly

mne/viz/nirs.py Outdated
- Transform matrix (keywords ``trans``, ``meg`` and ``mri_fiducials``),
- BEM surfaces (keywords ``bem`` and ``surfaces``),
- sphere conductor model (keywords ``bem`` and ``surfaces``) and
- source space (keywords ``surfaces`` and ``src``).
Review comment (Member):

I would get rid of these notes and just tell people to

For more information see :func:`mne.viz.plot_alignment`.

mne/viz/nirs.py Outdated
# Otherwise use blue to red and ensure zero sits at white
vmin = -1. * np.max(np.abs(data))
vmax = np.max(np.abs(data))
cmap = 'bwr'
Review comment (Member):

The MNE standard diverging/bidirectional colormap, where zero is meaningful, is RdBu_r.
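
A minimal sketch of the suggested change, with placeholder data standing in for the per-channel values used in the PR:

    import numpy as np

    data = np.random.randn(20)       # placeholder per-channel values
    vmax = np.max(np.abs(data))      # symmetric limits so zero sits at white
    vmin = -vmax
    cmap = 'RdBu_r'                  # MNE's standard diverging colormap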

@rob-luke (Member Author)

Thanks for the feedback.

I mean this should all be done in plot_alignment

How does one set the values/colours of sensors in plot_alignment? In this PR I'm passing in a data array; I can't see an equivalent in plot_alignment.

I wanted the ability to set the line colour to represent the output of an analysis, either HRF or peak values etc.

if we can live with users not getting to choose the colormap.

I'm a bit of a visualisation nerd and would prefer being able to set colormaps. If we move to plot_alignment, can I add a keyword?

@larsoner (Member)

How does one set the values/colours of sensors in plot_alignment? In this PR I'm passing in a data array; I can't see an equivalent in plot_alignment. ... I wanted the ability to set the line colour to represent the output of an analysis, either HRF or peak values etc.

You can't currently. Indeed if you want to be able to "plot values" or something in 3D then a separate function would be better.

To me there are two separate use cases:

  1. Showing the sensor arrangement relative to the head (i.e., geometry / configuration)
  2. Showing some data values (i.e., some activation)

To me the sensor pairings with tubes between them fall at least partly under (1), and thus would be well suited to being included in plot_alignment, at least to start.

Showing data doesn't really fit in the scope of plot_alignment, so a separate function would be better. Currently for (2) you have topo, topomap, and plot_evoked_field (though it's not used that much). It sounds like you're proposing a third function that is akin to plot_alignment in that it plots a brain/head/whatever plus sensor locations, but the sensor colors reflect some data values. It sounds like the first image in this ECoG example (though it's constructed by hacking the 3D transforms):

https://mne.tools/dev/auto_examples/visualization/plot_3d_to_2d.html#project-3d-electrodes-to-a-2d-snapshot

I could imagine the utility of making this sort of brain + electrode visualization work natively, rather than via some projection scheme. Clearly there is/was already a use case and need for it in ECoG, and now you're proposing something that seems quite similar in principle.

So my vote would be to first do this pairing scheme in plot_alignment. Then you don't need a colormap, since it's not showing data, right?

Then once we have that, we could build a generic sensor+data+brain plotting function. I don't think it matters much whether it's fNIRS or EEG/ECoG/MEG data (except that MEG data must have the coordinate frame changed). WDYT?

@rob-luke (Member Author)

Sounds like a plan. My need is for use case 2, but let's knock over 1 and 2 in the same go.

So I will expand plot_alignment to allow for fnirs=('sensors', 'pairs'), which will show the location of source-detector pairs but not overlay any data. All pairs will have the same color.

I will create a function (generalise the one already in this PR) for generic sensor+data+brain plotting.

@rob-luke (Member Author)

OK, plot_alignment with pairs seems to be working now and can be seen in the CircleCI build docs, and it looks as expected (see below). Now I will move on to the generic data display function (use case 2).

mne/viz/nirs.py Outdated
from . import plot_alignment


def plot_nirs_source_detector(data, info=None, radius=0.001,
Review comment (Member):

Would you agree to kill this function and just add NIRS support to plot_alignment?

@rob-luke (Member Author), Apr 20, 2020:

Definitely agree to killing this function.

But I would like to add a function to overlay channel-level analysis results on a brain.

Would you prefer I convert this PR to be only for the extension to plot_alignment and move discussion of a data overlay to another issue/PR? Or move to MNE-NIRS?

@agramfort (Member)

agramfort commented Apr 20, 2020 via email

@rob-luke (Member Author) commented Apr 20, 2020

Sorry, I should be clearer with my messages.

In the NIRS literature a channel is the path between a light source and a detector; these have been marked as lines in the figures above. For each channel we analyse the measurement and extract a metric (peak value, HRF analysis, etc.) and usually call this "channel-level analysis". I guess it's equivalent to scalp-level analysis in EEG.

Plotting the output of the analysis for a source-detector pair is traditionally done by drawing a line between the source and detector and setting the line colour according to the extracted metric (an example from another software package is shown here). This is what I would like to create, and that's what plot_nirs_source_detector does.
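
A hedged sketch of the kind of call being described; only the leading arguments of plot_nirs_source_detector (data, info, radius) appear in this PR, the function was later moved to MNE-NIRS, and raw_intensity and channel_metric are assumed placeholders:

    import numpy as np

    # One analysis value (e.g. peak HRF amplitude) per source-detector channel;
    # raw_intensity is an assumed fNIRS Raw object holding the optode locations.
    channel_metric = np.random.randn(len(raw_intensity.ch_names))

    # Each source-detector line is drawn as a tube coloured by its metric.
    fig = plot_nirs_source_detector(channel_metric, raw_intensity.info,
                                    radius=0.001)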

@agramfort (Member)

agramfort commented Apr 20, 2020 via email

@rob-luke (Member Author)

so the color of the tube or its size would match a scalar quantity?

Various tools do it differently. My preferred approach is that colour represents a scalar quantity, size remains constant, and dashed or solid lines represent a Boolean (usually whether a significant response was observed).

I think it will be difficult to make different line types with _Renderer, so maybe tube size could represent significance: a small tube for no significant response, a large tube for a significant response.

One of the reasons I like the approach of showing the source-detector pairs as a line/tube is that you can see how far apart the optodes are. Optode separation distance is proportional to how deep in the brain the photons will reach, and determines what brain regions the pair is sensitive to (larger separation = deeper). So you gain additional insight beyond what's available in the topomap.

Also happy to move this to MNE-NIRS if that's more appropriate, or to discuss alternative visualisation approaches. It's always good to hear another opinion from someone outside the NIRS field.
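
A small sketch of the encoding discussed above, using placeholder values; none of these names come from the PR:

    import numpy as np

    metric = np.random.randn(10)                   # per-channel scalar metric
    significant = np.abs(metric) > 1.5             # placeholder significance mask
    # Colour would encode the scalar metric, while tube radius encodes the
    # Boolean: a large tube for a significant response, a small one otherwise.
    radius = np.where(significant, 0.002, 0.0005)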

@agramfort (Member)

agramfort commented Apr 20, 2020 via email

@rob-luke (Member Author)

The code is passing on PyVista but failing on Mayavi (CI failure). Could this be a bug?

If I change this line to the version below the tests pass, but it's uglier.

def tube(self, origin, destination, radius=0.001, color='white',
         scalars=None, vmin=None, vmax=None, colormap='RdBu',
         normalized_colormap=False, reverse_lut=False):
    color = _check_color(color)
    origin = np.atleast_2d(origin)
    destination = np.atleast_2d(destination)
    if scalars is None:
        # Draw each tube with its own plot3d call (one per source-detector pair)
        for idx in range(origin.shape[0]):
            surface = self.mlab.plot3d([origin[idx, 0], destination[idx, 0]],
                                       [origin[idx, 1], destination[idx, 1]],
                                       [origin[idx, 2], destination[idx, 2]],
                                       tube_radius=radius,
                                       color=color,
                                       figure=self.fig)

@rob-luke (Member Author)

The other code has been moved to mne-tools/mne-nirs#70 as suggested by @agramfort.

@agramfort (Member)

agramfort commented Apr 21, 2020 via email

@rob-luke (Member Author)

I can't tell if this single CI failure due to a segmentation fault is related to this PR. I recall seeing similar faults on other commits, but it does occur near the test I altered.

@rob-luke changed the title from "WIP: Plot NIRS data between source and detector" to "MRG: Plot NIRS data between source and detector" on Apr 21, 2020
@rob-luke (Member Author)

@larsoner @agramfort @GuillaumeFavelier could you please review? I have changed the status to MRG as I am happy with the code and scope of work now.

@GuillaumeFavelier (Contributor) left a comment:

I can’t tell if this single CI failure due to segmentation fault is related to this PR

I saw Travis failing on master for this exact same test. I don't think it's related to this PR.

LGTM @rob-luke

@larsoner (Member) left a comment:

Just a few minor comments.

Is it intentional that some tubes do not go between sensors? It looks like there are a few (e.g., in the lower left).

@rob-luke (Member Author)

Is it intentional that some tubes do not go between sensors?

Yes, these are the short channels. I have added an additional line to the tutorial to mention what they are.

fig = mne.viz.plot_alignment(raw_intensity.info, show_axes=True,
                             subject='fsaverage',
                             trans='fsaverage', surfaces=['brain'],
                             fnirs=['channels', 'pairs'],
Review comment (Member):

You changed the default True to mean ('pairs',) but then use ['channels', 'pairs'] in the example. Might ('channels', 'pairs') actually be the best default for plot_alignment? Or do you indeed want to do something non-default in this example? Sorry for the back and forth on this one...

Also would this look better with surfaces=['brain', 'head']? Might look cooler?

@rob-luke (Member Author):

I left both channels and pairs in here because they are described above, and I see this tutorial as a tool to show people the NIRS processing options, so they can then choose the option they prefer. I think pairs is still the best default. Also, I prefer the brain alone, without the head, here.

@larsoner merged commit 1d44c29 into mne-tools:master on Apr 23, 2020
@larsoner (Member)

Thanks @rob-luke!
