Hello, when I build a line map on a dataset with over 1000 images, the following error appears:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 938.00 MiB (GPU 0; 5.77 GiB total capacity; 1.07 GiB already allocated; 879.31 MiB free; 3.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
So, I would like to ask whether it is possible to change certain parameters to accommodate GPUs with small memory?
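For reference, the `max_split_size_mb` hint in the error message is a standard PyTorch allocator option, independent of this project. A minimal sketch of setting it, where the value 128 MiB is an arbitrary example:

```python
# Minimal sketch (assumption: this runs before the first CUDA allocation).
# 128 MiB is an arbitrary example value; this only mitigates fragmentation
# and cannot fix a true capacity shortfall.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # the allocator reads the variable lazily, on first CUDA use
```

Equivalently, the variable can be set in the shell before launching the script, e.g. `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python runners/....py`.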
In the implementation, only one image is inferred at a time (parallelization is disabled by default for feature extraction and matching), so unfortunately there is nothing that can be done to reduce the memory requirements without degrading performance if a single image already exceeds the memory limit of your GPU.
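To illustrate the trade-off described above (this is not part of the project's API): with per-image inference, the only generic levers on a small GPU are releasing cached memory between images and downscaling the input, the latter at a cost in accuracy. A minimal sketch, where `extract_features` and `max_side` are hypothetical stand-ins:

```python
# Sketch only: `extract_features` and `max_side` are hypothetical,
# not part of the actual codebase.
import torch
import torch.nn.functional as F

def infer_sequentially(images, extract_features, max_side=1024):
    results = []
    for img in images:  # one image at a time, mirroring the default behavior
        # Downscaling reduces peak memory but degrades feature/line quality.
        scale = max_side / max(img.shape[-2:])
        if scale < 1.0:
            img = F.interpolate(img[None], scale_factor=scale,
                                mode="bilinear", align_corners=False)[0]
        with torch.no_grad():  # no autograd buffers during inference
            results.append(extract_features(img.cuda()))
        torch.cuda.empty_cache()  # return cached blocks between images
    return results
```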