Time-Lapse Microscopy
from IPython.display import HTML
HTML("""
<div style="display: flex; justify-content: center; padding: 10px;">
<iframe width="560" height="315" src="https://www.youtube.com/embed/Qa-wrIdMYH0?si=KDzApOEt2e4ROu-l" title="YouTube video player" frameborder="0" allow="accelerometer; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>
""")
Overview
This workflow demonstrates how to display time-lapse microscopy data in neuroscience. Each frame in the image-stack dataset corresponds to a successive time sample, typically capturing a dynamic process in living cells.
For example, a dynamic process of interest could be neural action potentials, and the data might come from a miniature microscope (see image in this notebook’s header) that captures the change in fluorescence of special proteins caused by electrochemical fluctuations indicative of neuronal activity. These video-like datasets often contain many more frames along the ‘Time’ dimension than pixels along the height or width of each frame.
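To make that layout concrete, here is a small, purely hypothetical sketch (not the real dataset used below) of how such an image stack can be represented as a three-dimensional array with a long ‘frame’ dimension:
import numpy as np
import xarray as xr

# Toy image stack for illustration only: many more frames than pixels per side
n_frames, height, width = 1000, 64, 64
stack = xr.DataArray(
    np.random.randint(0, 255, size=(n_frames, height, width), dtype=np.uint8),
    dims=('frame', 'height', 'width'),
    name='toy_stack',
)
print(stack.sizes)  # frame: 1000, height: 64, width: 64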
App Versions
We will build three different visualization approaches to cater to different use cases:
1. Basic Viewer: A one-line application using hvPlot, a high-level package that wraps HoloViews. This version is ideal for quick inspections and preliminary analyses of image stacks.
2. Intermediate Viewer with Side Views: Uses HoloViews for additional interactive elements, scalebars, and linked side views, aiding better navigation of the image stack and identification of regions of interest.
3. Advanced Viewer with Annotations and Linked Timeseries: Building from the intermediate HoloViews viewer, this version adds annotation capabilities using HoloNote, allowing for interactive spatial annotations with linked timeseries.
These applications are designed to handle large datasets efficiently, leveraging tools like Xarray, Dask, and Zarr for scalable data management.
Prerequisites
| Topic | Type | Notes |
| --- | --- | --- |
|  | Prerequisite | Essential introduction to working with |
Imports and Configuration
from pathlib import Path
import numpy as np
import pandas as pd
import xarray as xr
import holoviews as hv
from holoviews.operation.datashader import rasterize
import hvplot.xarray # noqa
import panel as pn
import fsspec
pn.extension('tabulator')
hv.extension('bokeh')
Loading and Inspecting the Data
We’ll be working with a sample dataset of time-lapse microscopy images. The dataset is stored in Zarr format, which is optimized for chunked, compressed, and scalable storage.
DATA_URL = 'https://datasets.holoviz.org/miniscope/v1/real_miniscope_uint8.zarr/'
DATA_DIR = Path('./data')
DATA_FILENAME = Path(DATA_URL).name
DATA_PATH = DATA_DIR / DATA_FILENAME
print(f'Local Data Path: {DATA_PATH}')
Local Data Path: data/real_miniscope_uint8.zarr
Let’s download the dataset (if it hasn’t been downloaded already) so we have a local copy and avoid network delays. However, this workflow should also work if the dataset stays remote (thanks to Xarray, Zarr, Dask, and other scalability-providing tools), such as when it’s too large to reasonably download in its entirety.
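For instance, here is a minimal sketch of lazily inspecting the remote store, reusing the `DATA_URL` defined above and assuming the `varr_ref` variable we access later in this notebook; only metadata, plus any chunks we explicitly load, travel over the network:
# Lazily open the remote Zarr store; chunks={} adopts the on-disk chunking
ds_lazy = xr.open_dataset(fsspec.get_mapper(DATA_URL), engine='zarr', chunks={})
print(ds_lazy['varr_ref'].sizes)                        # dimension sizes, read from metadata only
first_frame = ds_lazy['varr_ref'].isel(frame=0).load()  # fetches only the chunk(s) covering frame 0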
Note
If you are viewing this notebook as a result of using the `anaconda-project run` command, the data has already been ingested, as configured in the associated yaml file. Running the following cell should find that data and skip any further download.
Warning
If the data was not previously ingested with `anaconda-project`, the following cell will download ~300 MB the first time it is run.
DATA_DIR.mkdir(parents=True, exist_ok=True)
# Download the data if it doesn't exist
if not DATA_PATH.exists():
print(f'Downloading data to: {DATA_PATH}')
ds_remote = xr.open_dataset(
fsspec.get_mapper(DATA_URL), engine='zarr', chunks={}
)
ds_remote.to_zarr(str(DATA_PATH)) # Save locally
print(f'Dataset downloaded to: {DATA_PATH}')
else:
print(f'Data exists at: {DATA_PATH}')
Data exists at: data/real_miniscope_uint8.zarr
Now, let’s load the dataset using `xarray`, specifying chunks for efficient data handling with Dask.
# Open the dataset from the local copy
ds = xr.open_dataset(
DATA_PATH,
engine='zarr',
chunks={'frame': 400, 'height': -1, 'width': -1} # Chunk by frames
)
# Access the variable 'varr_ref' which contains the image data
da = ds['varr_ref']
da
<xarray.DataArray 'varr_ref' (frame: 2000, height: 480, width: 752)> Size: 722MB
dask.array<open_dataset-varr_ref, shape=(2000, 480, 752), dtype=uint8, chunksize=(400, 480, 752), chunktype=numpy.ndarray>
Coordinates:
  * frame    (frame)  int64 16kB 0 1 2 3 4 5 6 ... 1994 1995 1996 1997 1998 1999
  * height   (height) int64 4kB  0 1 2 3 4 5 6 7 ... 473 474 475 476 477 478 479
  * width    (width)  int64 6kB  0 1 2 3 4 5 6 7 ... 745 746 747 748 749 750 751
The dataset `da` is a 3D array with dimensions `(frame, height, width)`. Each frame corresponds to a time point in the image stack.
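Since the array is Dask-backed, aggregations stay lazy until computed. As an optional sanity check (a quick sketch, not part of the three app versions below), we could collapse the spatial dimensions into a mean-fluorescence timeseries:
# Mean fluorescence per frame; .compute() triggers the chunked Dask computation
mean_per_frame = da.mean(['height', 'width']).compute()
mean_per_frame.hvplot.line(x='frame', title='Mean Fluorescence per Frame')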
App V1: Basic Viewer with hvPlot
Our first application is a simple viewer using hvPlot, which allows for quick visualization of the image stack with minimal code.
hvplot_app = da.hvplot.image(
groupby="frame",
title='hvPlot App',
cmap='viridis',
clim=(0, 20),
data_aspect=1,
widget_location='bottom',
)
# hvplot_app
Here’s a static snapshot of what the previous cell produces in a live notebook - the quick hvPlot app. 👉
To facilitate widget-interactivity on static websites, we can embed the data right in the HTML output by using `dynamic=False`. But we’ll only do this on a subset of the frames to avoid overloading every visitor to the site:
da_subset = da.isel(frame=slice(20, 40))
da_subset.hvplot.image(
dynamic=False, # Embeds all frames in webpage. Only do this with a few frames.
groupby="frame",
title='Use my widget on a static website!',
cmap='viridis',
clim=(0, 20),
data_aspect=1,
widget_location='bottom',
)
As you can see, this creates an interactive image viewer where you can navigate through frames using a slider widget. Not much more needs to be said about that; it’s simple and effective in a pinch!
To enrich and extend this simple app, we can do things like add a maximum-projection image, which shows the maximum fluorescence per pixel over time and helps us visually locate potential neurons in two dimensions.
max_proj = da.max('frame').compute().astype(np.float32)
img_max_proj = max_proj.hvplot.image(
title='Max Over Time',
cmap="magma",
clim=(0,20),
data_aspect=1,
)
img_max_proj
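If desired, the frame-by-frame viewer and the max-projection image can also be laid out together. This is just one possible arrangement (a sketch using Panel, which is already imported above), not one of the numbered app versions:
# Place the hvPlot frame viewer next to the max-projection image
side_by_side = pn.Row(hvplot_app, img_max_proj)
# side_by_side  # uncomment to display in a live notebook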