Last edited by asas99 on 2025-10-22 17:37
This post builds on @fish313331's tutorial "Running Lada with native AMD GPU acceleration under Win11" (https://javpcn.com/forum.php?mod=viewthread&tid=1324) and @gbywreyuerh@out's notes, and covers installing and running lada on a Win10 / 7900 XTX platform.
@gbywreyuerh@out's tutorial already spells out the detailed steps, but lada has since been removed from GitHub, and the install commands there target AMD 9000-series cards. This post supplements them for current 7000-series GPUs.
Steps 1, 2 and 3 are unchanged. Make sure the required programs are added to the PATH environment variable, otherwise they will not run. Replace the repository https://github.com/ladaapp/lada with https://codeberg.org/ladaapp/lada.
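If you are unsure whether PATH is set up correctly, a quick stdlib check can list what is still missing. This is a minimal sketch; the exact tool list is my assumption, so adjust it to whatever steps 1-3 of the original tutorial actually install:

```python
import shutil

def missing_tools(tools):
    """Return the tools from `tools` that cannot be found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# Assumed tool list; check the original guide for what steps 1-3 install.
print(missing_tools(["python", "git", "ffmpeg"]))
```

An empty list means everything resolves from the command line; any name printed here needs its install directory added to PATH first.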
Step 4: 7000-series cards should use the following command:
pip install https://rocm.nightlies.amd.com/v2/gfx110X-dgpu/torch-2.9.0a0%2Brocm7.0.0rc20250908-cp313-cp313-win_amd64.whl https://rocm.nightlies.amd.com/v2/gfx110X-dgpu/torchaudio-2.8.0a0%2Brocm7.0.0rc20250908-cp313-cp313-win_amd64.whl https://rocm.nightlies.amd.com/v2/gfx110X-dgpu/torchvision-0.24.0a0%2Brocm7.0.0rc20250908-cp313-cp313-win_amd64.whl --extra-index-url https://rocm.nightlies.amd.com/v2/gfx110X-dgpu/

Step 5: place the model files into the model_weights folder under the lada directory. Model files download: https://pan.quark.cn/s/6db69da7b91b

After finishing all the tutorial steps, everything up to the testing stage runs without errors. At runtime, however, the following error may appear:

Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "D:\Lada\lada\.venv\Scripts\lada-cli.exe\__main__.py", line 6, in <module>
sys.exit(main())
~~~~^^
File "D:\Lada\lada\lada\cli\main.py", line 151, in main
mosaic_detection_model, mosaic_restoration_model, preferred_pad_mode = load_models(
~~~~~~~~~~~^
args.device, args.mosaic_restoration_model, args.mosaic_restoration_model_path, args.mosaic_restoration_config_path,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
args.mosaic_detection_model_path
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "D:\Lada\lada\lada\lib\frame_restorer.py", line 26, in load_models
from lada.basicvsrpp.inference import load_model, get_default_gan_inference_config
File "D:\Lada\lada\lada\basicvsrpp\inference.py", line 9, in <module>
from mmengine.runner import load_checkpoint
File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\runner\__init__.py", line 2, in <module>
from ._flexible_runner import FlexibleRunner
File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\runner\_flexible_runner.py", line 14, in <module>
from mmengine._strategy import BaseStrategy
File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\_strategy\__init__.py", line 3, in <module>
from mmengine.utils.dl_utils import TORCH_VERSION
File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\utils\dl_utils\__init__.py", line 8, in <module>
from .time_counter import TimeCounter
File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\utils\dl_utils\time_counter.py", line 8, in <module>
from mmengine.dist.utils import master_only
File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\dist\__init__.py", line 2, in <module>
from .dist import (all_gather_object, all_reduce, all_gather, all_reduce_dict,
...<2 lines>...
collect_results_cpu, collect_results_gpu, all_reduce_params)
File "D:\Lada\lada\.venv\Lib\site-packages\mmengine\dist\dist.py", line 26, in <module>
def _get_reduce_op(name: str) -> torch_dist.ReduceOp:
^^^^^^^^^^^^^^^^^^^
AttributeError: module 'torch.distributed' has no attribute 'ReduceOp'

To fix this, add the following at the top of .venv\Lib\site-packages\mmengine\model\wrappers\__init__.py:
import torch, types
if not hasattr(torch, "distributed"):
    torch.distributed = types.SimpleNamespace()
if not hasattr(torch.distributed, "fsdp"):
    torch.distributed.fsdp = types.SimpleNamespace()
    torch.distributed.fsdp.fully_sharded_data_parallel = types.SimpleNamespace()
Then change this block further down:
if digit_version(TORCH_VERSION) >= digit_version('2.0.0'):
    from .fully_sharded_distributed import \
        MMFullyShardedDataParallel  # noqa:F401
    __all__.append('MMFullyShardedDataParallel')
to:

if digit_version(TORCH_VERSION) >= digit_version('2.0.0'):
    try:
        from .fully_sharded_distributed import MMFullyShardedDataParallel  # noqa:F401
    except Exception as e:
        import warnings
        warnings.warn(f"FSDP disabled: {e}")
        MMFullyShardedDataParallel = None
    __all__.append('MMFullyShardedDataParallel')
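The shape of that patch is a standard optional-import guard: attempt the import, and on failure warn and fall back to None instead of letting the whole package fail at import time. A standalone sketch of the same pattern (the module name here is a deliberately nonexistent stand-in, not anything mmengine actually imports):

```python
import warnings

try:
    # Stand-in for the optional FSDP import; this module does not exist.
    from nonexistent_fsdp_backend import FSDPWrapper
except Exception as e:
    warnings.warn(f"FSDP disabled: {e}")
    FSDPWrapper = None  # callers must handle the missing feature

print(FSDPWrapper)  # None when the optional dependency is unavailable
```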
The other file to patch is .venv\Lib\site-packages\mmengine\dist\dist.py.
Before the line

def _get_reduce_op(name: str) -> torch_dist.ReduceOp:

add:
if not hasattr(torch.distributed, "ReduceOp"):
    class DummyReduceOp:
        SUM = None
        MEAN = None
    torch.distributed.ReduceOp = DummyReduceOp
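This dummy works because the failing line is a function signature: the annotation torch_dist.ReduceOp is evaluated when the def statement runs, so the attribute only has to exist; no reduction is ever performed on a single GPU. A self-contained sketch of the same stubbing idea, using a SimpleNamespace in place of the crippled torch.distributed:

```python
import types

dist = types.SimpleNamespace()  # stand-in for a torch.distributed lacking ReduceOp

if not hasattr(dist, "ReduceOp"):
    class DummyReduceOp:
        # Only the attribute names need to exist; no reduction ever runs.
        SUM = None
        MEAN = None
    dist.ReduceOp = DummyReduceOp

print(dist.ReduceOp.SUM)  # None
```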
With these changes, lada runs normally on Win10 with a 7900 XTX.
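One last detail about the step 4 wheels: they are cp313 builds, so they only install under CPython 3.13. If pip rejects them with a "not a supported wheel on this platform" error, compare the wheel's tag with your interpreter; a hypothetical helper for illustration:

```python
import sys

def wheel_python_tag(wheel_name):
    """Extract the CPython tag (third-from-last dash-separated field) from a wheel filename."""
    return wheel_name.split("-")[-3]

wheel = "torch-2.9.0a0+rocm7.0.0rc20250908-cp313-cp313-win_amd64.whl"
print(wheel_python_tag(wheel))                                # cp313
print(f"cp{sys.version_info.major}{sys.version_info.minor}")  # tag of the running interpreter
```

If the two printed tags differ, install Python 3.13 (and recreate the .venv) before rerunning the pip command.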