Training DiT fails with ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local

Running DiT with

torchrun --nnodes=1 --nproc_per_node=8 train.py --model DiT-XL/2 --data-path /home/pansiyuan/jupyter/qianyu/data

produces the following error.

1 Full error message

(Screenshot of the full error output.)

2 Key part of the error


ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 0 (pid: 83746) of binary: /opt/conda/bin/python
Traceback (most recent call last):

torch.distributed.elastic.multiprocessing.errors.ChildFailedError

Solution:

With multi-GPU training, the real error from the failing worker is not shown, so run on a single GPU to surface it:

torchrun --nnodes=1 --nproc_per_node=1 train.py --model DiT-XL/2 --data-path /home/pansiyuan/jupyter/qianyu/data

With a single GPU, the error becomes clear: the dataset cannot be found.


The file is not found, so args.data_path is probably wrong.


Note that DiT's path argument is defined in train.py as --data-path, with a hyphen rather than an underscore.


The command must therefore use the hyphenated flag, and the data path should point to the train subfolder:

torchrun --nnodes=1 --nproc_per_node=8 train.py --model DiT-XL/2 --data-path /home/pansiyuan/jupyter/qianyu/data/train


Inside train.py, argparse converts the hyphen, so the value is read back as args.data_path.
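As a minimal illustration of that hyphen-to-underscore conversion (a standalone sketch, not code from the DiT repo; the flag name matches train.py, the rest is assumed):

import argparse

# argparse stores a flag defined with hyphens under an underscored attribute name.
parser = argparse.ArgumentParser()
parser.add_argument("--data-path", type=str, required=True)   # spelled with a hyphen on the CLI
args = parser.parse_args(["--data-path", "/home/pansiyuan/jupyter/qianyu/data/train"])

print(args.data_path)   # read back with an underscore: args.data_path

So the command line keeps the hyphen (--data-path), while the code keeps the underscore (args.data_path); mixing them up gives an "unrecognized arguments" or attribute error.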

After fixing the path, run again; if the job is still killed with exitcode -9 (SIGKILL, typically the out-of-memory killer), reduce the batch size and training proceeds normally.


Note that if you use 7 GPUs with a global batch size of 256, training fails again, because 256 is not divisible by 7.
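The reason is that DiT splits the global batch evenly across ranks, so the global batch size has to be divisible by the number of GPUs. A rough sketch of that constraint (plain Python; the variable names are assumptions, not the repo's exact code):

# Sketch of the divisibility constraint: the global batch is split evenly across GPUs.
global_batch_size = 256

for world_size in (8, 7):
    if global_batch_size % world_size == 0:
        print(f"{world_size} GPUs -> per-GPU batch {global_batch_size // world_size}")
    else:
        # The 7-GPU case from this post: 256 % 7 != 0, so the script aborts.
        print(f"{world_size} GPUs -> {global_batch_size} is not divisible, training asserts and exits")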

