
Conversation


@destinysky commented Sep 12, 2025


### What this PR does / why we need it?

This PR introduces a new model loader called Netloader, which leverages high-bandwidth P2P direct transfer between NPU cards to load model weights.
Netloader is implemented as a plugin through the `register_model_loader` function newly added in vLLM 0.10. It speeds up weight loading by sending weights from an already loaded model (the server) to the empty model of a newly started instance (the client).
The server runs concurrently with normal inference tasks, using sub-threads and vLLM's `stateless_init_torch_distributed_process_group`.
The client initiates a transfer request after verifying that its model and partitioning method match the server's, then uses HCCL send/recv to load the weights in the order in which they are stored in the model.
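For orientation, here is a minimal sketch of how a custom loader plugin is wired in through that interface; this is not the Netloader implementation itself, and the exact import paths and base-class methods are assumptions based on the vLLM 0.10 plugin API:

```python
from vllm.config import ModelConfig
from vllm.model_executor.model_loader import register_model_loader
from vllm.model_executor.model_loader.base_loader import BaseModelLoader


@register_model_loader("netloader")
class NetLoaderSketch(BaseModelLoader):
    """Toy loader selected via --load-format=netloader (illustrative only)."""

    def download_model(self, model_config: ModelConfig) -> None:
        # Nothing to fetch from disk or a hub: weights arrive over the network
        # from an already running peer instance.
        pass

    def load_weights(self, model, model_config: ModelConfig) -> None:
        # The real loader verifies model/partitioning metadata against the
        # server, then receives each parameter via HCCL send/recv in storage order.
        pass
```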

Application Scenarios:
1. **Significantly Reduces Inference Instance Startup Time.** By reusing the weights of already loaded instances and performing high-speed transfers directly between computing cards, this method reduces model loading latency compared to traditional remote/local pull methods.
2. **Reduces Network and Storage Pressure.** Avoids the need to repeatedly download weight files from remote repositories, reducing the impact on centralized storage and network traffic, thereby enhancing overall system stability and service quality.
3. **Improves Resource Utilization and Reduces Costs.** Accelerating the loading process reduces reliance on redundant computing pools, allowing computing resources to be elastically scaled and reclaimed as needed.
4. **Enhances Business Continuity and High Availability.** In fault recovery scenarios, new instances can quickly take over existing services, avoiding prolonged business interruptions and improving the system's high availability and user experience.


### Does this PR introduce _any_ user-facing change?

Netloader is activated through the existing `--load-format=netloader` and `--model-loader-extra-config` options. The `--model-loader-extra-config` value must be a JSON string (as it is today), and the supported fields are as follows:

Field Name: SOURCE
Type: List
Description: Specifies weight data sources. Each item in the list has two fields, device_id and sources, giving the rank ID and the available sources (IP:port). For example:
{"SOURCE":[{"device_id":0, "sources": ["10.170.22.152:19374"]},{"device_id":1, "sources": ["10.170.22.152:11228"]}]}
If this field is empty, Netloader falls back to the default model loader.
The SOURCE provided here has second priority (a SOURCE given via CONFIG_FILE takes precedence).
Allowed values: A list where each item is a map with device_id (integer) and sources (list of strings "IP:port")

Field Name: MODEL
Type: String
Description: The model name, used to verify that the source corresponds to the same model.
Allowed values: If not provided, defaults to the --model argument in the startup command

Field Name: LISTEN_PORT
Type: Integer
Description: Base port on which the server listens; each rank actually listens on LISTEN_PORT + RANK (for example, LISTEN_PORT=20000 with two ranks uses ports 20000 and 20001).
Allowed values: An integer in 1024-65535. If not given, a random valid port is chosen; if a value outside this range is provided, that instance does not enable the server.

Field Name: INT8_CACHE
Type: String
Description: For quantized models, specifies behavior when encountering int8 parameters.
Allowed values: One of ['hbm', 'dram', 'no'].
- hbm means saving a copy of the original int8 parameters into high-bandwidth memory (HBM) (not recommended for large models due to large HBM usage).
- dram means saving into DRAM.
- no means no special handling (this may lead to differences in weights and unpredictable behavior). Default is no.

Field Name: INT8_CACHE_NAME
Type: List
Description: Names of the parameters to which INT8_CACHE applies.
Allowed values: Defaults to None; None means all parameters (no filtering).

Field Name: OUTPUT_PREFIX
Type: String
Description: When acting as a server, the prefix of the file names into which each rank's listening IP and port are written.
Allowed values: If specified, each rank writes its listening address and port to a text file named {OUTPUT_PREFIX}{RANK}.txt whose contents are IP:Port.

Field Name: CONFIG_FILE
Type: String
Description: Path to a JSON file that specifies the above parameters.
Allowed values: If provided, the SOURCE specified inside this file has first priority (overrides SOURCE from other configuration).
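
Putting these fields together, a complete `--model-loader-extra-config` value might look like the following; the IP addresses reuse the example above, while the model name, port, and output prefix are placeholders rather than values from this PR:

```json
{
  "SOURCE": [
    {"device_id": 0, "sources": ["10.170.22.152:19374"]},
    {"device_id": 1, "sources": ["10.170.22.152:11228"]}
  ],
  "MODEL": "my-served-model",
  "LISTEN_PORT": 20000,
  "INT8_CACHE": "no",
  "OUTPUT_PREFIX": "/tmp/netloader_rank_"
}
```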

### How was this patch tested?

The test needs at least two NPUs.

One NPU works as the server:
```bash
VLLM_SLEEP_WHEN_IDLE=1 vllm serve (model file location) \
  --tensor-parallel-size 1 \
  --served-model-name (model name) \
  --enforce-eager --port=8080 --load-format=netloader
```

One NPU works as the client:
```bash
export NETLOADER_CONFIG='{"SOURCE":[{"device_id":0, "sources": ["(server IP in the log):(server Port in the log)"]}]}'
VLLM_SLEEP_WHEN_IDLE=1 ASCEND_RT_VISIBLE_DEVICES=2,3 vllm serve (same as the server) \
  --tensor-parallel-size 1 \
  --served-model-name (same as the server) \
  --enforce-eager --port=8081 --load-format=netloader \
  --model-loader-extra-config="${NETLOADER_CONFIG}"
```

Afterwards, you can check that the server and client produce identical outputs for the same prompt when the temperature is set to 0.
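
For example, a quick check against both instances' OpenAI-compatible endpoints (the prompt is arbitrary and the model name is a placeholder; the ports match the commands above):

```bash
for PORT in 8080 8081; do
  curl -s "http://localhost:${PORT}/v1/completions" \
    -H "Content-Type: application/json" \
    -d '{"model": "(model name)", "prompt": "Hello, Netloader!", "max_tokens": 64, "temperature": 0}'
done
```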


- vLLM version: main
- vLLM main: vllm-project/vllm@d6249d0

Signed-off-by: destinysky <kangrui10@126.com>

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by other future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces an impressive new feature, the Netloader, for accelerating model weight loading via P2P transfers between NPU cards. The implementation is comprehensive, including the core client-server logic, integration as a vLLM model loader plugin, and a solid suite of unit and end-to-end tests. My review focuses on enhancing the robustness and resource management of the new networking code. I've identified a few critical and high-severity issues concerning potential resource leaks from un-destroyed process groups, an unhandled exception that could crash the server's handler thread, and the use of unreliable __del__ methods for socket cleanup. Addressing these points will significantly improve the stability and reliability of the new loader.
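
To illustrate the last point in general terms (this is not code from the PR): relying on `__del__` to release sockets is fragile because finalizers may run late, in an arbitrary order, or not at all, so an explicit `close()` driven by a context manager or `try/finally` is the safer pattern:

```python
import socket


class WeightChannel:
    """Hypothetical socket wrapper with deterministic cleanup (illustrative only)."""

    def __init__(self, host: str, port: int):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((host, port))
        self.sock.listen()

    def close(self) -> None:
        # Explicit, deterministic teardown instead of relying on __del__.
        self.sock.close()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()


# Usage: the socket is closed even if serving raises.
# with WeightChannel("0.0.0.0", 19374) as channel:
#     ...
```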


codecov bot commented Sep 12, 2025

Codecov Report

❌ Patch coverage is 80.43478% with 162 lines in your changes missing coverage. Please review.
✅ Project coverage is 75.17%. Comparing base (1bbb20e) to head (19d5865).
⚠️ Report is 41 commits behind head on main.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| vllm_ascend/model_loader/netloader/netloader.py | 58.27% | 63 Missing ⚠️ |
| ...nd/model_loader/netloader/executor/elastic_load.py | 16.12% | 52 Missing ⚠️ |
| ...cend/model_loader/netloader/interaction/elastic.py | 82.14% | 30 Missing ⚠️ |
| tests/ut/model_loader/netloader/test_netloader.py | 94.94% | 5 Missing ⚠️ |
| ...t/model_loader/netloader/test_netloader_elastic.py | 98.05% | 4 Missing ⚠️ |
| vllm_ascend/__init__.py | 33.33% | 2 Missing ⚠️ |
| vllm_ascend/model_loader/netloader/utils.py | 90.90% | 2 Missing ⚠️ |
| ...s/ut/model_loader/netloader/test_netloader_load.py | 98.43% | 1 Missing ⚠️ |
| .../ut/model_loader/netloader/test_netloader_utils.py | 96.15% | 1 Missing ⚠️ |
| vllm_ascend/model_loader/netloader/__init__.py | 50.00% | 1 Missing ⚠️ |

... and 1 more
Additional details and impacted files
```
@@            Coverage Diff             @@
##             main    #2888      +/-   ##
==========================================
+ Coverage   74.76%   75.17%   +0.40%     
==========================================
  Files         150      164      +14     
  Lines       20891    22134    +1243     
==========================================
+ Hits        15620    16639    +1019     
- Misses       5271     5495     +224     
```

| Flag | Coverage Δ |
|---|---|
| unittests | 75.17% <80.43%> (+0.40%) ⬆️ |

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

wangxiyuan previously approved these changes Sep 13, 2025

@wangxiyuan dismissed their stale review September 13, 2025 15:59 ("wrong action")

@wangxiyuan
Collaborator

Thanks for the contribution. I'll take a closer look.


```python
class ElasticClient:

    def __init__(self, sources: list, device_id: int, model_path: str, tp: int,
```
Collaborator

@momo609 Sep 14, 2025

Please add some notes for the arguments.

Author

@destinysky Sep 15, 2025

Thanks for your comment!
I have added more notes.

```python
    port = int(port)
except Exception as e:
    logger.error(f"IP format error: {source}, detail: {e}")
    continue
```
Collaborator

Why continue? And which element in the sources do we truly need?

Author

  • Why continue?

To prevent a single point of failure from blocking weight loading, `self.sources` stores a list of candidate source IP:port pairs. We iterate through this list in order, and if we encounter a connection failure or a mismatch in the reported information, we skip (continue) to the next possible weight source.

  • And which element in the sources do we truly need?

Any element in the list is a potential source; we try them in order until an available one is found.
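
A minimal sketch of this fallback pattern (illustrative only, not the PR's actual code; `try_connect` is a simplified stand-in for the client's connection and metadata checks):

```python
import logging
import socket
from typing import Optional, Tuple

logger = logging.getLogger(__name__)


def try_connect(ip: str, port: int, timeout: float = 1.0) -> bool:
    """Cheap reachability probe; the real client also verifies model metadata."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False


def pick_weight_source(sources: list) -> Optional[Tuple[str, int]]:
    """Return the first usable (ip, port), skipping malformed or unreachable entries."""
    for source in sources:
        try:
            ip, port_str = source.rsplit(":", 1)
            port = int(port_str)
        except Exception as e:
            logger.error(f"IP format error: {source}, detail: {e}")
            continue  # malformed entry: move on to the next candidate
        if try_connect(ip, port):
            return ip, port
    return None  # no usable source: the loader falls back to its default path
```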
