This notebook shows how to use 3DeeCellTracker to track cells in single mode.
The basic procedure:
Please run the following code cells according to the instructions.
%load_ext autoreload
%autoreload 2
import os
import warnings
warnings.filterwarnings('ignore')
from IPython.core.display import display, HTML
from matplotlib.animation import FuncAnimation, ArtistAnimation
from CellTracker.tracker import Tracker
display(HTML("<style>.container { width:95% !important; }</style>"))
%matplotlib inline
Using TensorFlow backend.
Image parameters
Segmentation parameters
Tracking parameters
Paths
Multiple folders were automatically created to store the data, models, and results
tracker = Tracker(
volume_num=50, siz_xyz=(512, 1024, 21), z_xy_ratio=9.2, z_scaling=10,
noise_level=20, min_size=100, beta_tk=300, lambda_tk=0.1, maxiter_tk=20,
folder_path=os.path.abspath("./worm1"), image_name="aligned_t%03i_z%03i.tif",
unet_model_file="unet3_pretrained.h5", ffn_model_file="ffn_pretrained.h5")
Following folders were made under: /home/wen/PycharmProjects/3DeeCellTracker
worm1/data
worm1/auto_vol1
worm1/manual_vol1
worm1/track_information
worm1/models
worm1/unet_cache
worm1/track_results_SingleMode
worm1/anim
worm1/models/unet_weights
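The `z_xy_ratio` and `z_scaling` values passed to `Tracker` above describe the anisotropy of the raw stack and how finely it is resampled along z. A quick sanity check of the resulting voxel geometry (a sketch based on the parameter values above; the actual resampling is internal to 3DeeCellTracker):

```python
# Sketch: check how close the interpolated voxels are to isotropic.
# z_xy_ratio is the z-step / xy-pixel-size ratio; z_scaling is the number of
# interpolated sub-layers per original z-slice (values from the Tracker() call).
z_xy_ratio = 9.2
z_scaling = 10

# After interpolation, each sub-layer spans z_xy_ratio / z_scaling xy-pixels in z.
effective_ratio = z_xy_ratio / z_scaling
print(f"voxel z/xy aspect after interpolation: {effective_ratio:.2f}")  # ~0.92, near-isotropic
```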
Prepare images
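The raw images must follow the `image_name` pattern passed to `Tracker` ("aligned_t%03i_z%03i.tif": one 2D TIFF per z-layer per volume). A minimal sketch of the file names this pattern produces; whether numbering starts at 0 or 1 is an assumption here, so adjust to match your own data:

```python
# Sketch: enumerate the file names implied by the image_name pattern
# from the Tracker() call above, for the first volume (t=1).
image_name = "aligned_t%03i_z%03i.tif"
n_layers = 21  # z size from siz_xyz=(512, 1024, 21)

# Assumes 1-based t and z indices; verify against your actual files.
expected = [image_name % (1, z) for z in range(1, n_layers + 1)]
print(expected[0], "...", expected[-1])
# aligned_t001_z001.tif ... aligned_t001_z021.tif
```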
Modify the segmentation parameters (optional)
tracker.set_segmentation(noise_level=20, min_size=100)
Segmentation parameters were not modified
Segment cells at volume 1
tracker.load_unet()
tracker.segment_vol1()
(TensorFlow deprecation warnings omitted)
Loaded the 3D U-Net model
Load images with shape: (512, 1024, 21)
Segmented volume 1 and saved it
Draw the results of segmentation (Max projection)
anim_seg = tracker.draw_segresult(percentile_high=99.8)
Segmentation results (max projection):
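`draw_segresult` displays a maximum-intensity projection with the contrast clipped at a high percentile (`percentile_high=99.8`). The underlying idea can be sketched with NumPy; this is a generic illustration on random data, not 3DeeCellTracker's internal code:

```python
import numpy as np

# Generic max-projection with percentile-based contrast clipping.
rng = np.random.default_rng(0)
volume = rng.integers(0, 4096, size=(21, 512, 1024))  # (z, y, x) image stack

mip = volume.max(axis=0)             # maximum-intensity projection along z
vmax = np.percentile(mip, 99.8)      # display ceiling at the 99.8th percentile
mip_display = np.clip(mip, 0, vmax)  # brightest outliers no longer dominate contrast

print(mip.shape)  # (512, 1024)
```

Lowering `percentile_high` brightens dim cells at the cost of saturating the brightest ones.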
Show segmentation in each layer
HTML(anim_seg)
Manual correction
Move the corrected files into the manual_vol1 folder
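The manually corrected segmentation goes into the `worm1/manual_vol1` folder created earlier, so `load_manual_seg()` can find it. A hedged sketch of the copy step; the source file name below is a hypothetical example from your annotation tool:

```python
import os
import shutil

# Sketch: copy manually corrected segmentation files into the folder the
# tracker reads. The destination comes from the folder list printed after
# Tracker(); "vol1_corrected.tif" is a hypothetical source file name.
dst_dir = os.path.join("worm1", "manual_vol1")
os.makedirs(dst_dir, exist_ok=True)

for fname in ["vol1_corrected.tif"]:
    if os.path.exists(fname):      # copy only files that actually exist
        shutil.copy(fname, dst_dir)
```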
Load the manually corrected segmentation
tracker.load_manual_seg()
Loaded manual_segment at vol 1
Re-train the U-Net using the manual segmentation (optional)
tracker.retrain_unet()
Images were normalized
Images were divided
Data for training 3D U-Net were prepared
(TensorFlow deprecation warnings omitted)
144/144 [==============================] - 8s 58ms/step
val_loss before retraining: 0.019848952897720866
Epoch 1/1
60/60 [==============================] - 70s 1s/step - loss: 0.0206 - val_loss: 0.0318
Epoch 1/1
60/60 [==============================] - 65s 1s/step - loss: 0.0156 - val_loss: 0.0232
Epoch 1/1
60/60 [==============================] - 66s 1s/step - loss: 0.0165 - val_loss: 0.1331
Epoch 1/1
60/60 [==============================] - 67s 1s/step - loss: 0.0174 - val_loss: 0.0255
Epoch 1/1
60/60 [==============================] - 67s 1s/step - loss: 0.0152 - val_loss: 0.0246
Epoch 1/1
60/60 [==============================] - 67s 1s/step - loss: 0.0152 - val_loss: 0.0208
Epoch 1/1
60/60 [==============================] - 67s 1s/step - loss: 0.0132 - val_loss: 0.0255
Epoch 1/1
60/60 [==============================] - 67s 1s/step - loss: 0.0139 - val_loss: 0.0184
val_loss updated from 0.019848952897720866 to [0.018430436952459987]
Epoch 1/1
60/60 [==============================] - 68s 1s/step - loss: 0.0136 - val_loss: 0.0242
Epoch 1/1
60/60 [==============================] - 68s 1s/step - loss: 0.0147 - val_loss: 0.0204
tracker.select_unet_weights(step=0)
tracker.set_segmentation(del_cache=True)
tracker.segment_vol1()
anim_seg = tracker.draw_segresult(percentile_high=99.8)
Segmentation parameters were not modified
All files under /unet folder were deleted
Load images with shape: (512, 1024, 21)
Segmented volume 1 and saved it
Segmentation results (max projection):
Interpolate cells to obtain more accurate and smoother cell boundaries
tracker.interpolate_seg()
tracker.draw_manual_seg1()
Interpolating... cell: 164
Initiate variables required for tracking
tracker.cal_subregions()
tracker.load_ffn()
tracker.initiate_tracking()
Calculating subregions... cell: 164
Loaded the FFN model
Initiated coordinates for tracking (from vol 1)
Modify the tracking parameters if the test result is not satisfactory (optional)
tracker.set_tracking(beta_tk=300, lambda_tk=0.1, maxiter_tk=20)
Tracking parameters were not modified
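In PR-GLS-style non-rigid registration, a parameter like `beta_tk` typically sets the width of a Gaussian kernel over cell positions (a larger value enforces a smoother, more rigid deformation field), while `lambda_tk` weighs the regularization term. This reading of the parameters is an assumption based on the method, not the package's documented API; a minimal sketch of such a kernel:

```python
import numpy as np

# Sketch of the Gaussian kernel commonly used in PR-GLS / CPD-style
# registration. How 3DeeCellTracker applies beta_tk internally, and whether
# it includes the factor of 2 below, are assumptions.
def gaussian_kernel(points, beta):
    # G[i, j] = exp(-||x_i - x_j||^2 / (2 * beta^2))
    diff = points[:, None, :] - points[None, :, :]
    return np.exp(-np.sum(diff**2, axis=2) / (2 * beta**2))

pts = np.array([[0.0, 0.0, 0.0], [3.0, 4.0, 0.0]])  # two cell centroids
G = gaussian_kernel(pts, beta=300.0)
print(G[0, 1])  # close to 1: with a large beta, nearby cells deform together
```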
Test the matching between volume 1 and a target volume, and show the FFN + PR-GLS process as an animation (5 iterations)
anim_tracking, results = tracker.match(target_volume=50)
HTML(anim_tracking)
Matching between vol 1 and vol 50 was computed
Show the accurate correction after the FFN + PR-GLS transformation
tracker.draw_correction(*results[2:])
Show the superimposed cells + labels before/after tracking
tracker.draw_overlapping(*results[:3])
Track cells and show the process
%matplotlib notebook
fig, ax = tracker.subplots_tracking()
tracker.track(fig, ax, from_volume=2)
Show the process as an animation (for diagnosis)
%matplotlib inline
track_anim = tracker.replay_track_animation()
HTML(track_anim)