No support for RTX 5090 in tyDiffusion 1.26
#1
The root cause is that sm_120 is not supported by the PyTorch build bundled with tyDiffusion (via ComfyUI?).

Nvidia 50 Series (Blackwell) support thread: How to get ComfyUI running on your new 50 series GPU. · comfyanonymous/ComfyUI · Discussion #6643 · GitHub


From the running log:
Code:
C:\ProgramData\tyFlow\tyDiffusion\Tools\python\lib\site-packages\torch\cuda\__init__.py:215: UserWarning:
NVIDIA GeForce RTX 5090 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
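For context, a minimal sketch of the check the warning reflects: the wheel is compiled for a fixed list of compute capabilities, and sm_120 simply isn't in it. The arch list below is copied from the warning itself; `is_supported` is a hypothetical helper, not PyTorch's actual code.

```python
# Hypothetical sketch of the compatibility check behind the warning above.
# SUPPORTED_ARCHS is copied verbatim from the cu121 wheel's warning message.
SUPPORTED_ARCHS = {"sm_50", "sm_60", "sm_61", "sm_70",
                   "sm_75", "sm_80", "sm_86", "sm_90"}

def is_supported(major: int, minor: int) -> bool:
    """Return True if a GPU's compute capability is in the wheel's arch list."""
    return f"sm_{major}{minor}" in SUPPORTED_ARCHS

print(is_supported(12, 0))  # RTX 5090 (Blackwell) reports sm_120 -> False
print(is_supported(8, 6))   # Ampere sm_86 -> True
```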

Is there anything I can do to get it working with tyDiffusion 1.26, or do we just have to wait until the updated PyTorch propagates through the toolchain (PyTorch -> ComfyUI -> tyDiffusion 1.26)?
#2
You can try manually updating the PyTorch installation that comes with tyDiffusion...

Currently it's very difficult to procure 5090s where I live, so I have no ability to test this issue.
#3
I'll see what I can do on my side.
I think sm_120 is a 50XX-series issue, not one specific to the 5090.
#4
I'm not that confident when it comes to Python, so I just changed the setup script that is unpacked when tyDiffusion is started (without clicking the One Click Install).


C:\ProgramData\tyFlow\tyDiffusion\_bin\setup_ComfyUI.bat

From:

Code:
"%PYTHON%" -m pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu121

To:
Code:
"%PYTHON%" -m pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128

With that, it installed the nightly build instead, and I could generate a picture of that fluffy red cat.
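Before relaunching tyDiffusion, the swap can be double-checked with the bundled interpreter. A guarded sketch (it assumes nothing beyond a working Python, and only prints GPU details if torch and CUDA are actually present):

```python
# Guarded sanity check: report whether the installed torch wheel sees the GPU.
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("torch is not installed in this environment")
else:
    import torch
    # Nightly builds report versions like 2.8.0.devYYYYMMDD+cu128
    print("torch", torch.__version__)
    if torch.cuda.is_available():
        # An RTX 5090 should report compute capability (12, 0), i.e. sm_120
        print("capability:", torch.cuda.get_device_capability(0))
```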

I can see this in the command-line window running in the background:

Code:
Total VRAM 32607 MB, total RAM 65140 MB
pytorch version: 2.8.0.dev20250404+cu128
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 5090 : cudaMallocAsync
VAE dtype: torch.bfloat16
2025-04-05 16:24:31.435058 [comfyui_controlnet_aux] | INFO -> Using ckpts path: C:\ProgramData\tyFlow\tyDiffusion\Models\ControlNets\preprocessors
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider']
2025-04-05 16:24:31.596587 DWPose: Onnxruntime with acceleration providers detected
Note: NumExpr detected 32 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
NumExpr defaulting to 16 threads.
2025-04-05 16:24:31.872715 FizzleDorf Custom Nodes: Loaded
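On the NumExpr note in the log: the 16-thread cap is just a safe default, not an error. It can be lifted by setting NUMEXPR_MAX_THREADS before ComfyUI starts; a sketch (32 matches the core count the log reports, and the variable must be set before numexpr is first imported):

```python
# Sketch: raise NumExpr's thread cap before anything imports numexpr.
# "32" matches the core count reported in the log above; adjust to your CPU.
import os
os.environ["NUMEXPR_MAX_THREADS"] = "32"
```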