Commit 7cae15f: Update README.md
dblasko committed Nov 13, 2023 (1 parent: 55f43c6)
Showing 2 changed files with 77 additions and 12 deletions.

README.md (64 additions, 8 deletions):
Deep-learning-based low-light image enhancer specialized on restoring dark images from events.
- [Pre-training](#pre-training)
- [Fine-tuning](#fine-tuning)
- [Running the model for inference](#running-the-model-for-inference)
- [Inference on a single image](#inference-on-a-single-image)
- [Inference on a directory of images](#inference-on-a-directory-of-images)
- [Generating the datasets](#generating-the-datasets)
- [Pre-training dataset](#pre-training-dataset)
- [Fine-tuning dataset](#fine-tuning-dataset)
Then, you can run the training script while pointing to your configuration file.

## Running the model for inference
To run model inference, put the model weights in the `model/weights` folder. For example, weights of the pretrained and fine-tuned MIRNet model are available in the [releases](https://github.com/dblasko/low-light-event-img-enhancer/releases).

### Inference on a single image
To enhance a low-light image with a model, run the inference script as follows:
```bash
$ python inference/enhance_image.py -i <path_to_input_image>
[-o <path_to_output_folder> -m <path_to_model>]
# or
$ python inference/enhance_image.py --input_image_path <path_to_input_image>
[--output_folder_path <path_to_output_folder> --model_path <path_to_model>]
```
* If the output folder is not specified, the enhanced image is written to the directory the script is run from.
* If the model path is not specified, the script falls back to the default model defined in its `MODEL_PATH` constant, which can be updated as needed.

A typical use-case looks like this:
```bash
$ python inference/enhance_image.py -i data/test/low/0001.png \
    -o inference/results -m model/weights/pretrained_mirnet.pt
```
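
The same enhancement can also be invoked from Python. Below is a minimal sketch based on the `run_inference` signature in `inference/enhance_image.py`, assuming the repository root is on the Python path:
```python
import torch

from inference.enhance_image import run_inference

# Pick whichever accelerator is available; fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

run_inference(
    "data/test/low/0001.png",  # input image
    "inference/results",       # output folder
    device,
    model_path="model/weights/pretrained_mirnet.pt",
)
```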

### Inference on a directory of images
To gauge how a model performs on a directory of images (*typically, the test or validation subset of a dataset*), use the `inference/visualize_model_predictions.py` script. It generates a grid of the original images (dark and ground-truth) and the model's enhanced versions for visual evaluation, and also computes the PSNR and Charbonnier loss on those pictures.
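
For reference, both reported metrics are standard and can be sketched as follows (a minimal sketch: the `eps` value is an assumption, and the project's exact implementation may differ):
```python
import torch

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    # Peak signal-to-noise ratio in dB, for images scaled to [0, max_val].
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val**2 / mse)

def charbonnier(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    # Charbonnier loss: a smooth, outlier-robust variant of the L1 loss.
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps**2))
```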

To use the script, change the values of the three constants defined at the top of the file if needed (*`IMG_SIZE`, `NUM_FEATURES`, `MODEL_PATH` - the default values should be appropriate for most cases*), and run the script as:
```bash
$ python inference/visualize_model_predictions.py <path_to_image_folder>
```
The image folder should contain two subfolders, `imgs` and `targets`, like any split of the generated datasets. The script generates a grid of the original images and their enhanced versions, saves it as a PNG file in `inference/results`, and prints the PSNR and Charbonnier loss on the dataset.

**For example, to visualize model performance on the test subset of the LoL dataset, proceed as follows:**
```bash
$ python inference/visualize_model_predictions.py data/pretraining/test
-> mps device detected.
100%|████████████████████████████████████████████████████████████████| 1/1 [00:05<00:00, 5.31s/it]
***Performance on the dataset:***
Test loss: 0.10067714005708694 - Test PSNR: 19.464811325073242

```
Image produced by the script:

<img src='https://camo.githubusercontent.com/89d418300cdffb1eb5cd7db764683a2cde2b71ed26732ea36beb62ef5f1192f5/68747470733a2f2f6c68332e676f6f676c6575736572636f6e74656e742e636f6d2f64726976652d7669657765722f414b3761506143377658494e2d586636543650734a41533243376c76517668534632496e3675567a546c53594b7850505f364543516c5a304d536e7a616b6e495a31515652424f6a306f4b39334a5561516869514a5a30586a59597153326c4859513d7331363030' width='500'>


## Generating the datasets
To run model training or inference on the datasets used for the experiments in this project, the datasets have to be generated in the expected format, or prepared datasets need to be downloaded and placed in the `data` folder.

### Pre-training dataset
The pre-training dataset is the [LoL dataset](https://daooshee.github.io/BMVC2018website/). To download it and convert it to the expected format, run the dedicated script with `python dataset_generation/pretraining_generation.py`. The script downloads the dataset from HuggingFace Datasets and generates the pre-training dataset in the `data/pretraining` folder (*created if it does not exist*), with the dark images in the `imgs` subfolder and the ground-truth pictures in `targets`.

To use a different dataset for pre-training, simply change the HuggingFace dataset reference in the following line of `dataset_generation/pretraining_generation.py`:
```python
dataset = load_dataset("geekyrakshit/LoL-Dataset") # can be changed to e.g. "huggan/night2day"
```
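
For reference, the generation step boils down to something like the following (a minimal sketch: the split handling is simplified, and the `image`/`target` column names are assumptions, so inspect `dataset.features` for the actual schema of the dataset you load):
```python
from pathlib import Path
from datasets import load_dataset

dataset = load_dataset("geekyrakshit/LoL-Dataset", split="train")

out = Path("data/pretraining/train")
(out / "imgs").mkdir(parents=True, exist_ok=True)
(out / "targets").mkdir(parents=True, exist_ok=True)

for i, sample in enumerate(dataset):
    # Dark input image and well-lit ground truth, saved under matching names.
    sample["image"].save(out / "imgs" / f"{i:04d}.png")
    sample["target"].save(out / "targets" / f"{i:04d}.png")
```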

### Fine-tuning dataset

**If you would like to directly use the processed fine-tuning dataset**, you can download it from the [corresponding release](https://github.com/dblasko/low-light-event-img-enhancer/releases/tag/fine-tuning-dataset), extract the archive and place the three `train`, `val`, and `test` folders it contains in the `data/finetuning` folder (*to be created if it does not exist*).

Otherwise, **to generate a fine-tuning dataset in the correct format**, use the `dataset_generation/finetuning_generation.py` script, either to re-generate the fine-tuning dataset from the original images or to create one from any photographs of any size you would like.
To do so:
* Place your well-lit original images in a `data/finetuning/original_images` folder. Note that the images do not need to be of the same size or orientation, as the generation script takes care of unifying them.
* Then, you can run the generation script with `python dataset_generation/finetuning_generation.py`, which will create the `data/finetuning/[train&val&test]` folders with the ground-truth images in a `targets` subfolder and the corresponding low-light images in an `inputs` subfolder. The images are split into the train, validation, and test sets with an 85/10/5 ratio.

The low-light images are generated by randomly reducing the original images' brightness by 80 to 90%, and then adding random Gaussian noise and color-shift noise to better emulate how a photographer's camera would capture the picture in low-light conditions. For both the ground-truth and darkened images, the image's smallest dimension is then resized to 400 pixels while preserving the aspect ratio, and a center-crop of 704x400 or 400x704 (depending on the image orientation) is applied to unify the aspect ratios (*this yields roughly a 16:9 aspect ratio, a standard output of most cameras*).
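
In code, that degradation pipeline could look roughly as follows (a minimal sketch assuming RGB PIL images and torchvision; the noise magnitudes are illustrative assumptions, not the script's exact values):
```python
import random

import numpy as np
import torchvision.transforms.functional as TF
from PIL import Image

def make_low_light_pair(img: Image.Image) -> tuple[Image.Image, Image.Image]:
    # Darken: keep only 10-20% of the original brightness (an 80-90% reduction).
    arr = np.asarray(img).astype(np.float32) / 255.0
    dark = arr * random.uniform(0.1, 0.2)
    # Sensor-like Gaussian noise, plus a small per-channel color shift.
    dark += np.random.normal(0.0, 0.02, dark.shape)
    dark += np.random.normal(0.0, 0.01, (1, 1, 3))
    dark_img = Image.fromarray((np.clip(dark, 0.0, 1.0) * 255).astype(np.uint8))

    def unify(im: Image.Image) -> Image.Image:
        # Smallest side to 400 px, then a 704x400 or 400x704 center-crop.
        im = TF.resize(im, 400)
        w, h = im.size
        return TF.center_crop(im, [704, 400] if h > w else [400, 704])

    return unify(img), unify(dark_img)
```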

## Running tests
The unit and integration tests are located in the `tests` folder, and they can be run with the `pytest` command from the root directory of the project. Currently, they test the different components of the model and the full model itself, the loss function and optimizer, the data pipeline (dataset, data loader, etc.), as well as the training and testing/validation procedures.


All tests are also run on every commit on the `main` branch through GitHub Actions, alongside linting, and their status can be observed [here](https://github.com/dblasko/low-light-event-img-enhancer/actions).

To add further tests, simply add a new file in the `tests` folder, and name it `test_*.py` where `*` describes what you want to test. Then, add your tests in a class named `Test*`.
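
For instance, a new test file could look like the following (a hypothetical, self-contained example that only illustrates the naming convention and does not exercise the project's modules):
```python
# tests/test_example.py
import torch

class TestExample:
    def test_normalization_preserves_shape(self):
        img = torch.rand(3, 400, 704)  # dummy image tensor (C, H, W)
        normalized = (img - img.mean()) / (img.std() + 1e-8)
        assert normalized.shape == img.shape
```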

# Usage of the web-application based on the model
*Coming soon.*

## Running the inference endpoint
*Coming soon.*

## Running the web application
*Coming soon.*
inference/enhance_image.py (13 additions, 4 deletions):

```python
"""
Run this script to run model inference on a specified image and write the enhanced image to an output folder.
Usage: python inference/enhance_image.py -i <path_to_input_image> [-o <path_to_output_folder> -m <path_to_model>]
or python inference/enhance_image.py --input_image_path <path_to_input_image> [--output_folder_path <path_to_output_folder> --model_path <path_to_model>]
If the output folder is not specified, the enhanced image is written to the directory the script is run from.
If the model path is not specified, the default model defined in MODEL_PATH is used.
"""

IMG_SIZE = 400
# ...

def run_inference(input_image_path, output_folder_path, device, model_path=MODEL_PATH):
    ...

# ...
parser.add_argument(
    "--output_folder_path",
    "-o",
    help="Path to the output folder to save the enhanced image to. Defaults to the directory the script is run from.",
    default=".",
)
parser.add_argument(
    "--model_path",
    "-m",
    help="Path to the model weights to use. Defaults to the MODEL_PATH constant specified in the script.",
    default=MODEL_PATH,
)
args = parser.parse_args()

device = (
    ...  # cuda / mps / cpu selection, unchanged by this commit
)
print(f"-> {device.type} device detected.")

run_inference(
    args.input_image_path, args.output_folder_path, device, args.model_path
)

```