Hi! Can you share the command you used for this task? Did you see two workers in the profiler results? Can you share some screenshots? This may be the PyTorch Profiler with TensorBoard issue mentioned here.
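One quick way to answer the "did you see two workers" question is to count the trace files the profiler wrote. This is a hypothetical helper (the log directory path is yours to fill in); it only assumes the default `<worker>.<timestamp>.pt.trace.json` naming that `torch.profiler.tensorboard_trace_handler` uses:

```python
import glob
import os


def list_worker_traces(logdir: str) -> list[str]:
    """Return the profiler trace files found under logdir.

    torch-tb-profiler expects one <worker>.<timestamp>.pt.trace.json file
    per worker in the same log directory; with 2 GPUs (2 ranks) there
    should be two of them, otherwise the Distributed view has nothing
    to correlate.
    """
    pattern = os.path.join(logdir, "**", "*.pt.trace.json")
    return sorted(glob.glob(pattern, recursive=True))
```

For example, `list_worker_traces("path/to/your/tensorboard/logdir")` should return two paths for a 2-GPU run.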
System Info
torch==2.1.2
torch-tb-profiler==0.4.3
cuda==11.8
Information
🐛 Describe the bug
Hi,
Today I tested the 'Multiple GPUs one node' part with two RTX 4090s on one node and set the `--use_profiler` parameter, but there is no 'Distributed' view in TensorBoard. I cannot find the reason. Help please, thanks!
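For reference, a minimal single-process sketch of how `torch.profiler` traces are exported for TensorBoard (the log directory and worker name here are illustrative, not taken from the original report). The key detail for multi-GPU runs is that every rank must write into the same directory with a distinct `worker_name`:

```python
import glob
import os
import tempfile

import torch
from torch.profiler import ProfilerActivity, profile, tensorboard_trace_handler

# Illustrative log directory; in a distributed run every rank should write
# into the SAME directory with a distinct worker_name so torch-tb-profiler
# can correlate the traces across ranks.
logdir = tempfile.mkdtemp()

with profile(
    activities=[ProfilerActivity.CPU],  # add ProfilerActivity.CUDA on GPU
    on_trace_ready=tensorboard_trace_handler(logdir, worker_name="rank0"),
):
    torch.matmul(torch.randn(64, 64), torch.randn(64, 64))

# One <worker_name>.<timestamp>.pt.trace.json file per worker should now
# exist under logdir; TensorBoard's profiler plugin reads these.
trace_files = glob.glob(os.path.join(logdir, "*.pt.trace.json"))
print(f"wrote {len(trace_files)} trace file(s)")
```

Note that the Distributed view only appears when the traces contain collective-communication (e.g. NCCL) activity from multiple workers, so a single-process trace like this one will not show it.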
Error logs
No error, but the 'Distributed' view is missing in TensorBoard.
Expected behavior
Show the 'Distributed' view from torch.profiler traces in TensorBoard.