womp, womp
Quote:
RuntimeError: CUDA out of memory. Tried to allocate 1.50 GiB (GPU 0; 7.79 GiB total capacity; 5.62 GiB already allocated; 678.25 MiB free; 5.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Setting everything up was pretty easy for me, since it's all Python and git repos and I work with Python and git repos daily. Unfortunately, it looks like my low-tier GPU might not have enough memory. I'll try fiddling with options to make it happy...
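One knob the error message itself points at is `max_split_size_mb`, which is set through the `PYTORCH_CUDA_ALLOC_CONF` environment variable. A minimal sketch of trying that (the value `128` here is just a guess to experiment with, not a recommendation):

```python
import os

# Must be set before PyTorch initializes CUDA, i.e. before the first
# CUDA allocation -- safest is to set it before importing torch at all.
# max_split_size_mb caps the size of cached blocks the allocator will
# split, which can reduce fragmentation when reserved >> allocated.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # import and run the model after the variable is set
```

The same thing can be done from the shell (`PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python train.py`); if fragmentation isn't actually the problem, the usual fallback is reducing batch size or model precision.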