Commit
Adds styling, base content, new template (#121)
Co-authored-by: barnesjoseph <[email protected]>
jakevdp and barnesjoseph authored Dec 3, 2024
1 parent 776df65 commit a5b6247
Showing 54 changed files with 1,009 additions and 69 deletions.
2 changes: 1 addition & 1 deletion .readthedocs.yaml
@@ -6,7 +6,7 @@ build:
     python: "3.12"
 
 sphinx:
-  configuration: docs/conf.py
+  configuration: docs/source/conf.py
   fail_on_warning: true
 
 python:
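For reference, the `sphinx` stanza of `.readthedocs.yaml` after this change reads as follows (reassembled from the hunk above; the surrounding file contents are not shown in this diff):

```yaml
sphinx:
  configuration: docs/source/conf.py
  fail_on_warning: true
```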
51 changes: 0 additions & 51 deletions docs/index.md

This file was deleted.

1 change: 1 addition & 0 deletions docs/requirements.txt
@@ -1,5 +1,6 @@
 # Sphinx-related requirements.
 sphinx
+sphinx-book-theme>=1.0.1
 myst-nb
 myst-parser[linkify]
 sphinx-book-theme
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
@@ -693,7 +693,7 @@
 "In this section we will implement the [UNETR](https://arxiv.org/abs/2103.10504) model from scratch using Flax NNX. The reference PyTorch implementation of this model can be found on the [MONAI Library GitHub repository](https://github.com/Project-MONAI/MONAI/blob/dev/monai/networks/nets/unetr.py).\n",
 "\n",
 "The UNETR model utilizes a transformer as the encoder to learn sequence representations of the input and to capture the global multi-scale information, while also following the “U-shaped” network design like [UNet](https://arxiv.org/abs/1505.04597) model:\n",
-"![image.png](./_static/unetr_architecture.png)\n",
+"![image.png](./_static/images/unetr_architecture.png)\n",
 "\n",
 "The UNETR architecture on the image above is processing 3D inputs, but it can be easily adapted to 2D input.\n",
 "\n",
@@ -367,7 +367,7 @@ for img, mask in zip(images[:3], masks[:3]):
 In this section we will implement the [UNETR](https://arxiv.org/abs/2103.10504) model from scratch using Flax NNX. The reference PyTorch implementation of this model can be found on the [MONAI Library GitHub repository](https://github.com/Project-MONAI/MONAI/blob/dev/monai/networks/nets/unetr.py).
 
 The UNETR model utilizes a transformer as the encoder to learn sequence representations of the input and to capture the global multi-scale information, while also following the “U-shaped” network design like [UNet](https://arxiv.org/abs/1505.04597) model:
-![image.png](./_static/unetr_architecture.png)
+![image.png](./_static/images/unetr_architecture.png)
 
 The UNETR architecture on the image above is processing 3D inputs, but it can be easily adapted to 2D input.
 
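The transformer-encoder idea mentioned in the hunk above — learning sequence representations over image patches — can be sketched framework-agnostically. This is a minimal single-head self-attention forward pass in plain NumPy; it is illustrative only, not the Flax NNX or MONAI implementation the notebook refers to, and all shapes and names here are made up:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention over a token sequence.

    x: (tokens, dim) sequence of patch embeddings.
    w_q, w_k, w_v: (dim, dim) projection matrices.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(x.shape[-1])          # (tokens, tokens) similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # (tokens, dim) contextualized

rng = np.random.default_rng(0)
tokens, dim = 16, 8                                  # e.g. 16 image patches
x = rng.normal(size=(tokens, dim))
w = [rng.normal(size=(dim, dim)) for _ in range(3)]
out = self_attention(x, *w)
print(out.shape)  # one contextualized vector per patch
```

Each output row mixes information from every patch, which is how the encoder captures the global multi-scale context the text describes.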
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
@@ -123,7 +123,7 @@
 "source": [
 "After running all above and launching `tensorboard --logdir runs/test` from the same folder, you should see the following in the supplied URL:\n",
 "\n",
-"![image.png](./_static/training_data_example.png)"
+"![image.png](./_static/images/training_data_example.png)"
 ]
 },
 {
@@ -225,7 +225,7 @@
 "source": [
 "We've now created the basic model - the above cell will render an interactive view of the model. Which, when fully expanded, should look something like this:\n",
 "\n",
-"![image.png](./_static/nnx_display_example.png)"
+"![image.png](./_static/images/nnx_display_example.png)"
 ]
 },
 {
@@ -328,7 +328,7 @@
 "\n",
 "The output there should look something like the following:\n",
 "\n",
-"![image.png](./_static/loss_acc_example.png)"
+"![image.png](./_static/images/loss_acc_example.png)"
 ]
 },
 {
@@ -339,11 +339,11 @@
 "\n",
 "At step 1, we see poor accuracy, as you would expect\n",
 "\n",
-"![image.png](./_static/testsheet_start_example.png)\n",
+"![image.png](./_static/images/testsheet_start_example.png)\n",
 "\n",
 "By 500, the model is essentially done, but we see the bottom row `7` get lost and recovered at higher epochs as we go far into an overfitting regime. This kind of stored data can be very useful when the training routines become automated and a human is potentially only looking when something has gone wrong.\n",
 "\n",
-"![image.png](./_static/testsheets_500_3000.png)"
+"![image.png](./_static/images/testsheets_500_3000.png)"
 ]
 },
 {
@@ -427,7 +427,7 @@
 "source": [
 "The above cell output will give you an interactive plot that looks like this image below, where here we've 'clicked' in the bottom plot for entry `7` and hover over the corresponding value in the top plot.\n",
 "\n",
-"![image.png](./_static/model_display_example.png)"
+"![image.png](./_static/images/model_display_example.png)"
 ]
 },
 {
@@ -83,7 +83,7 @@ with test_summary_writer.as_default():
 
 After running all above and launching `tensorboard --logdir runs/test` from the same folder, you should see the following in the supplied URL:
 
-![image.png](./_static/training_data_example.png)
+![image.png](./_static/images/training_data_example.png)
 
 ```{code-cell} ipython3
 :id: 6jrYisoPh6TL
@@ -131,7 +131,7 @@ nnx.display(model) # Interactive display if penzai is installed.
 
 We've now created the basic model - the above cell will render an interactive view of the model. Which, when fully expanded, should look something like this:
 
-![image.png](./_static/nnx_display_example.png)
+![image.png](./_static/images/nnx_display_example.png)
 
 +++
 
@@ -211,19 +211,19 @@ During the training has run, and after, the added `Loss` and `Accuracy` scalars
 
 The output there should look something like the following:
 
-![image.png](./_static/loss_acc_example.png)
+![image.png](./_static/images/loss_acc_example.png)
 
 +++
 
 Since we've stored the example test sheet every 500 epochs, it's easy to go back and step through the progress. With each training step using all of the training data the steps and epochs are essentially the same here.
 
 At step 1, we see poor accuracy, as you would expect
 
-![image.png](./_static/testsheet_start_example.png)
+![image.png](./_static/images/testsheet_start_example.png)
 
 By 500, the model is essentially done, but we see the bottom row `7` get lost and recovered at higher epochs as we go far into an overfitting regime. This kind of stored data can be very useful when the training routines become automated and a human is potentially only looking when something has gone wrong.
 
-![image.png](./_static/testsheets_500_3000.png)
+![image.png](./_static/images/testsheets_500_3000.png)
 
 +++
 
@@ -235,7 +235,7 @@ nnx.display(model(images_test[:35])), nnx.display(model(images_test[:35]).argmax
 
 The above cell output will give you an interactive plot that looks like this image below, where here we've 'clicked' in the bottom plot for entry `7` and hover over the corresponding value in the top plot.
 
-![image.png](./_static/model_display_example.png)
+![image.png](./_static/images/model_display_example.png)
 
 +++
 
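The `argmax` pattern visible in the last hunk header — turning per-class model outputs into predicted digit labels — can be sketched in plain NumPy. The score values below are invented for illustration; in the notebook they would come from something like `model(images_test[:35])`:

```python
import numpy as np

# Hypothetical per-class scores for 3 test images over 10 digit classes.
logits = np.array([
    [0.1, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 3.0, 0.0, 0.0],   # highest score: class 7
    [2.5, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],   # highest score: class 0
    [0.0, 0.0, 0.0, 0.0, 1.9, 0.0, 0.0, 0.0, 0.0, 0.2],   # highest score: class 4
])
predictions = logits.argmax(axis=1)   # index of the highest score per row
print(predictions)  # [7 0 4]
```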
