RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacty of 1.83 GiB of which 26.44 MiB is free. Including non-PyTorch memory, this process has 1.81 GiB memory in use. Of the allocated memory 771.77 MiB is allocated by PyTorch, and 68.23 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
All results are saved in results/cropped_faces_0.5
I'm getting this error when using CodeFormer for image restoration. nvidia-smi output is as follows:

Wed Oct 11 13:48:09 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.199.02    Driver Version: 470.199.02    CUDA Version: 11.4   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce …    Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   50C    P8    N/A /  N/A |      0MiB /  1878MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Additional details: the command I ran is python inference_codeformer.py -w 0.5 --has_aligned --input_path /home/root/CodeFormer/inpots/cropped_faces, and the input folder contains only a single photo.
The error message you’re encountering, “CUDA out of memory,” means that your GPU does not have enough free memory left to satisfy the allocation the script requested (64.00 MiB in this case). Here’s a breakdown of the error and some steps you can take to address it:
GPU Capacity: GPU 0 has only 1.83 GiB of memory in total, and just 26.44 MiB was free when the script tried to allocate a further 64.00 MiB.
Memory Usage: The process using the GPU already has 1.81 GiB of memory in use, which includes both PyTorch and non-PyTorch memory (such as the CUDA context itself).
PyTorch Allocation: Of the memory in use, 771.77 MiB is allocated by PyTorch tensors and another 68.23 MiB is reserved by PyTorch but unallocated. A large reserved-but-unallocated figure is a sign of fragmentation, which is why the message suggests setting max_split_size_mb.
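For reference, the 771.77 MiB and 68.23 MiB figures correspond to what PyTorch exposes through torch.cuda.memory_allocated() and the difference between torch.cuda.memory_reserved() and memory_allocated(). A small sketch that prints the same breakdown (it must run inside the Python process doing the inference; outside that process the counters read zero):

import torch

def report_torch_gpu_memory(device=0):
    # Memory currently held by live tensors ("allocated by PyTorch" in the error message)
    allocated_mib = torch.cuda.memory_allocated(device) / 1024**2
    # Memory reserved by the caching allocator; anything above 'allocated'
    # is the "reserved by PyTorch but unallocated" figure
    reserved_mib = torch.cuda.memory_reserved(device) / 1024**2
    print(f"allocated by PyTorch:     {allocated_mib:8.2f} MiB")
    print(f"reserved but unallocated: {reserved_mib - allocated_mib:8.2f} MiB")

if torch.cuda.is_available():
    report_torch_gpu_memory(0)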
Here are a few things you can try:

1. Free PyTorch's cached blocks before (or between) inference runs with torch.cuda.empty_cache(). This only returns cached, unused memory to the driver; it cannot shrink memory held by live tensors.
2. Inspect how the allocator is using memory with torch.cuda.memory_summary(), which prints a per-device table of allocated and reserved memory.
3. Follow the hint at the end of the error message and set max_split_size_mb through the PYTORCH_CUDA_ALLOC_CONF environment variable to reduce fragmentation (see the sketch below).
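A minimal sketch of those steps follows. The 64 MB split size is only an example value, not a recommendation, and the environment variable is read when CUDA is first initialised, so it has to be set before any torch.cuda call:

import os

# Must be set before the first CUDA allocation; 64 MB is just an example split size.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"

import torch

if torch.cuda.is_available():
    torch.cuda.empty_cache()                    # release cached blocks back to the driver
    print(torch.cuda.memory_summary(device=0))  # detailed allocator report

Equivalently, you can export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:64 in the shell before launching inference_codeformer.py, which avoids editing the script at all.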
Given your specific command:
python inference_codeformer.py -w 0.5 --has_aligned --input_path /home/root/CodeFormer/inpots/cropped_faces
-w 0.5 is CodeFormer's fidelity weight, which trades restored quality against fidelity to the input face; it has essentially no effect on memory use, so changing it is unlikely to resolve the OOM.
--has_aligned tells the script that the inputs are already cropped and aligned faces, so it skips face detection and works on the crops directly, which is already the lightest code path.
Keep nvidia-smi running in a second terminal (for example nvidia-smi -l 1) while the script executes. The snapshot you posted was taken with no process running, and it shows the card has only 1878 MiB in total, so loading the CodeFormer model plus even a single face can exhaust it, which matches the error you saw. A quick way to check free memory from Python is sketched below.
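If you prefer to check from Python rather than a second terminal, torch.cuda.mem_get_info reports the same free/total numbers the driver gives to nvidia-smi. A rough pre-flight check might look like this (the 1024 MiB threshold is an arbitrary example, not a measured CodeFormer requirement):

import torch

if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info(0)
    free_mib, total_mib = free_bytes / 1024**2, total_bytes / 1024**2
    print(f"GPU 0: {free_mib:.0f} MiB free of {total_mib:.0f} MiB total")
    if free_mib < 1024:  # example threshold only
        print("Warning: very little free GPU memory; inference may fail with CUDA OOM")
else:
    print("No CUDA device visible to PyTorch")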
By applying these steps, you should be able to mitigate the “CUDA out of memory” error and allow your script to run successfully on your GPU. Adjust parameters and settings based on your specific use case and available resources.