Posted on 2025-10-20 00:39:34
Could someone help me out? I'm on a 7900 XTX, and the newest wheel available to me is torch-2.9.0a0+rocm7.0.0rc20250908-cp313-cp313-win_amd64.
Running `patch -u .venv/lib/site-packages/ultralytics/utils/ops.py patches/increase_mms_time_limit.patch` fails with an error.
The sanity checks (lada-cli --list-devices, lada-cli --list-codecs, etc.) all pass, but actually using it fails with:
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "D:\Lada\lada\.venv\Scripts\lada-cli.exe\__main__.py", line 6, in <module>
    sys.exit(main())
    ~~~~^^
  File "D:\Lada\lada\lada\cli\main.py", line 151, in main
    mosaic_detection_model, mosaic_restoration_model, preferred_pad_mode = load_models(
                                                                           ~~~~~~~~~~~^
        args.device, args.mosaic_restoration_model, args.mosaic_restoration_model_path, args.mosaic_restoration_config_path,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        args.mosaic_detection_model_path
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "D:\Lada\lada\lada\lib\frame_restorer.py", line 26, in load_models
    from lada.basicvsrpp.inference import load_model, get_default_gan_inference_config
  File "D:\Lada\lada\lada\basicvsrpp\inference.py", line 9, in <module>
    from mmengine.runner import load_checkpoint
  File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\runner\__init__.py", line 2, in <module>
    from ._flexible_runner import FlexibleRunner
  File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\runner\_flexible_runner.py", line 14, in <module>
    from mmengine._strategy import BaseStrategy
  File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\_strategy\__init__.py", line 3, in <module>
    from mmengine.utils.dl_utils import TORCH_VERSION
  File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\utils\dl_utils\__init__.py", line 8, in <module>
    from .time_counter import TimeCounter
  File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\utils\dl_utils\time_counter.py", line 8, in <module>
    from mmengine.dist.utils import master_only
  File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\dist\__init__.py", line 2, in <module>
    from .dist import (all_gather_object, all_reduce, all_gather, all_reduce_dict,
                       ...<2 lines>...
                       collect_results_cpu, collect_results_gpu, all_reduce_params)
  File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\dist\dist.py", line 26, in <module>
    def _get_reduce_op(name: str) -> torch_dist.ReduceOp:
                                     ^^^^^^^^^^^^^^^^^^^
AttributeError: module 'torch.distributed' has no attribute 'ReduceOp'
I compared the gfx120 wheel from 9.09 (the one the tutorial uses) against the gfx110 wheel from 9.08 (the one I'm using), and their torch.distributed code is nearly identical. So the problem is probably something else on my end.
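One thing that may be worth checking (a hypothetical diagnostic, not something from the tutorial): in PyTorch, `torch.distributed.ReduceOp` is only defined when the build was compiled with distributed support, so if this Windows ROCm wheel was built without it, the module still imports but lacks `ReduceOp`, which would produce exactly this AttributeError even though the source files look identical. Running this inside the same .venv would tell:

```python
# Diagnostic sketch: check whether this torch build has distributed support.
# torch.distributed always exists as a module, but ReduceOp (and most other
# names) are only defined when torch.distributed.is_available() is True,
# i.e. when the wheel was compiled with USE_DISTRIBUTED enabled.
import torch
import torch.distributed as dist

print("torch version:       ", torch.__version__)
print("distributed available:", dist.is_available())
print("has ReduceOp:         ", hasattr(dist, "ReduceOp"))
```

If `is_available()` prints False here, the error comes from the wheel itself rather than from anything in the local setup.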
Hoping someone can shed some light on this.