[Misc] Add a model loader that utilizes HCCL for weight loading #2888
Conversation
### What this PR does / why we need it?

This PR introduces a new model loader called Netloader, which leverages high-bandwidth P2P direct transfer between NPU cards to load weights. Netloader is implemented as a plugin through the 'register_model_loader' function newly added in vLLM 0.10. It loads weights by sending them from a pre-loaded model (the server) to the empty model of a newly started instance (the client). The server runs concurrently with normal inference tasks via sub-threads and vLLM's 'stateless_init_torch_distributed_process_group'. The client initiates a transfer request after verifying that its model and partitioning method match the server's, then uses HCCL collective communication (send/recv) to load the weights in the order they are stored in the model.

Application Scenarios:

1. Significantly reduces inference instance startup time. By reusing the weights of already loaded instances and transferring them at high speed directly between computing cards, this method reduces model loading latency compared to traditional remote/local pull methods.
2. Reduces network and storage pressure. It avoids repeatedly downloading weight files from remote repositories, reducing the load on centralized storage and the network and thereby improving overall system stability and service quality.
3. Improves resource utilization and reduces costs. Accelerating the loading process reduces reliance on redundant computing pools, allowing computing resources to be elastically scaled and reclaimed as needed.
4. Enhances business continuity and high availability. In fault-recovery scenarios, new instances can quickly take over existing services, avoiding prolonged business interruptions and improving the system's availability and user experience.
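To make the transfer flow described above concrete, here is a minimal conceptual sketch. It is not the PR's code: the function names are hypothetical, and it assumes a torch.distributed point-to-point group between the server rank and the client rank has already been created (the PR builds such a group with vLLM's stateless_init_torch_distributed_process_group on the HCCL backend).

```python
# Conceptual sketch only: per-parameter weight transfer in model storage order.
import torch
import torch.distributed as dist


def serve_weights(model: torch.nn.Module, group, client_rank: int) -> None:
    # Server side: send every parameter in the order it is registered in the model.
    for _name, param in model.named_parameters():
        dist.send(param.data, dst=client_rank, group=group)


def receive_weights(model: torch.nn.Module, group, server_rank: int) -> None:
    # Client side: receive each tensor into the still-empty model, in the same order.
    for _name, param in model.named_parameters():
        dist.recv(param.data, src=server_rank, group=group)
```

Because both sides walk named_parameters() in the same order, no per-tensor naming protocol is needed; the metadata check (same model, same partitioning) is what guarantees the orders match.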
### Does this PR introduce _any_ user-facing change?

Netloader is activated through the existing --load-format=netloader and --model-loader-extra-config options. The model-loader-extra-config must be passed as a JSON string (as it is today); the supported fields are as follows:

Field Name: SOURCE
Type: List
Description: Specifies the weight data sources. Each item in the list has two fields, device_id and sources, indicating the rank ID and the available sources (IP:port). For example: {"SOURCE":[{"device_id":0, "sources": ["10.170.22.152:19374"]},{"device_id":1, "sources": ["10.170.22.152:11228"]}]}. If this field is empty, Netloader falls back to the default model loader. A SOURCE provided here has second priority.
Allowed values: A list where each item is a map with device_id (integer) and sources (list of "IP:port" strings).

Field Name: MODEL
Type: String
Description: The model name, used to verify that the source serves the same model.
Allowed values: If not provided, defaults to the --model argument in the startup command.

Field Name: LISTEN_PORT
Type: Integer
Description: Port on which the server will listen.
Allowed values: A base port number; the actual port used is LISTEN_PORT + RANK. If not given, a random valid port is chosen. Valid range: 1024-65535. If a value outside this range is provided, that instance does not enable the server.

Field Name: INT8_CACHE
Type: String
Description: For quantized models, specifies the behavior when encountering int8 parameters.
Allowed values: One of ['hbm', 'dram', 'no'].
- hbm: save a copy of the original int8 parameters into high-bandwidth memory (HBM); not recommended for large models because of the HBM usage.
- dram: save the copy into DRAM.
- no: no special handling (this may lead to differences in weights and unpredictable behavior). Default is no.

Field Name: INT8_CACHE_NAME
Type: List
Description: Names of the parameters filtered by INT8_CACHE.
Allowed values: Default is None; if None, all parameters are covered (no filtering).

Field Name: OUTPUT_PREFIX
Type: String
Description: As a server, the prefix of the file names into which each rank's listening IP and port are written.
Allowed values: If specified, each rank writes its listening address and port into a text file named {OUTPUT_PREFIX}{RANK}.txt, with contents IP:Port.

Field Name: CONFIG_FILE
Type: String
Description: Specifies the above parameters via a JSON file.
Allowed values: If provided, the SOURCE specified inside this file has first priority (it overrides SOURCE from other configuration).

### How was this patch tested?

The test needs at least two NPUs.

One NPU works as the server:

VLLM_SLEEP_WHEN_IDLE=1 vllm serve (model file location) --tensor-parallel-size 1 --served-model-name (model name) --enforce-eager --port=8080 --load-format=netloader

One NPU works as the client:

export NETLOADER_CONFIG='{"SOURCE":[{"device_id":0, "sources": ["(server IP in the log):(server Port in the log)"]}]}'
VLLM_SLEEP_WHEN_IDLE=1 ASCEND_RT_VISIBLE_DEVICES=2,3 vllm serve (same as the server) --tensor-parallel-size 1 --served-model-name (same as the server) --enforce-eager --port=8081 --load-format=netloader --model-loader-extra-config="${NETLOADER_CONFIG}"

Afterwards, you can check whether the outputs for the same sentence are consistent when the temperature is set to 0.

- vLLM version: main
- vLLM main: vllm-project/vllm@d6249d0

Signed-off-by: destinysky <kangrui10@126.com>
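For reference, the fields documented under "Does this PR introduce any user-facing change?" above could be assembled into a single extra-config string as sketched below; every value is a placeholder, not a recommendation.

```python
# Hypothetical example only: building the --model-loader-extra-config JSON string
# from the documented fields. IPs, ports, model name and paths are placeholders.
import json

extra_config = {
    "SOURCE": [  # second priority; a SOURCE inside CONFIG_FILE would override it
        {"device_id": 0, "sources": ["10.170.22.152:19374"]},
        {"device_id": 1, "sources": ["10.170.22.152:11228"]},
    ],
    "MODEL": "my-served-model",          # defaults to the --model argument if omitted
    "LISTEN_PORT": 19000,                # rank R listens on 19000 + R
    "INT8_CACHE": "dram",                # one of 'hbm', 'dram', 'no'
    "OUTPUT_PREFIX": "/tmp/netloader_",  # each rank writes /tmp/netloader_{RANK}.txt
}

# Pass the printed string to: vllm serve ... --model-loader-extra-config '<string>'
print(json.dumps(extra_config))
```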
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request introduces an impressive new feature, the Netloader, for accelerating model weight loading via P2P transfers between NPU cards. The implementation is comprehensive, including the core client-server logic, integration as a vLLM model loader plugin, and a solid suite of unit and end-to-end tests. My review focuses on enhancing the robustness and resource management of the new networking code. I've identified a few critical and high-severity issues concerning potential resource leaks from un-destroyed process groups, an unhandled exception that could crash the server's handler thread, and the use of unreliable __del__ methods for socket cleanup. Addressing these points will significantly improve the stability and reliability of the new loader.
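One way to act on these points, sketched with assumed names rather than the PR's actual classes: run the transfer under try/finally so the socket and the process group are released deterministically, instead of relying on __del__.

```python
# Illustrative sketch of the review suggestion, not code from this PR.
import logging
import socket
from typing import Callable, Optional

import torch.distributed as dist

logger = logging.getLogger(__name__)


def handle_transfer(conn: socket.socket, group: Optional[object],
                    do_transfer: Callable[[socket.socket, object], None]) -> None:
    try:
        do_transfer(conn, group)
    except Exception:
        # Log and swallow so one bad client cannot crash the server's handler thread.
        logger.exception("Netloader weight transfer failed")
    finally:
        # Deterministic cleanup instead of relying on __del__.
        conn.close()
        if group is not None:
            dist.destroy_process_group(group)
```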
Codecov Report

Additional details and impacted files:

@@ Coverage Diff @@
## main #2888 +/- ##
==========================================
+ Coverage 74.76% 75.17% +0.40%
==========================================
Files 150 164 +14
Lines 20891 22134 +1243
==========================================
+ Hits 15620 16639 +1019
- Misses 5271 5495 +224
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Thanks for the contribution. I'll take a closer look.
class ElasticClient:

    def __init__(self, sources: list, device_id: int, model_path: str, tp: int,
Please add some notes for the arguments.
Thanks for your comment!
I have added more notes.
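For illustration, the kind of notes being asked for might look like the sketch below; the descriptions are taken from this PR's description, and the real __init__ takes more parameters than the truncated snippet above shows.

```python
class ElasticClient:
    """Client side of Netloader: pulls weights from an already-loaded server instance."""

    def __init__(self, sources: list, device_id: int, model_path: str, tp: int):
        # sources:    candidate weight sources as "IP:port" strings, tried in order
        # device_id:  rank ID that this client loads weights for
        # model_path: model path/name, checked against the server's model
        # tp:         tensor-parallel size; the partitioning must match the server's
        self.sources = sources
        self.device_id = device_id
        self.model_path = model_path
        self.tp = tp
```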
        port = int(port)
    except Exception as e:
        logger.error(f"IP format error: {source}, detail: {e}")
        continue
Why continue? And which element in the sources do we truly need?
- Why continue?
To prevent a single point of failure from making weight loading impossible, 'self.sources' stores a list of candidate source IP:port pairs. We iterate through this list in order, and if we encounter a connection failure or a metadata mismatch, we skip (continue) to the next possible weight source.
- And which element in the sources do we truly need?
Any element in the list is a potential source; we try them in order until we find an available one.
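A small sketch of that fallback behaviour (the helper name and parsing details are assumptions, not the PR's actual code):

```python
# Try the candidate sources in order; skip malformed or unusable entries.
import logging
from typing import List, Optional, Tuple

logger = logging.getLogger(__name__)


def pick_first_usable_source(sources: List[str]) -> Optional[Tuple[str, int]]:
    for source in sources:
        try:
            ip, port_str = source.rsplit(":", 1)
            port = int(port_str)
        except Exception as e:
            logger.error(f"IP format error: {source}, detail: {e}")
            continue  # skip this candidate and try the next one
        return ip, port  # first well-formed source wins
    return None  # nothing usable; caller can fall back to the default loader
```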