
[Bug] The DeepSeek-R1 w4a8 model has numerical overflow issues with high frequency #8493

@chenxijun1029

Description


Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

After deploying the DeepSeek-R1-W4AFP8 model, the server occasionally shuts down with an overflow error:

[2025-07-24 16:23:41] DetokenizerManager hit an exception: Traceback (most recent call last):
  File "/sgl-workspace/sglang/python/sglang/srt/managers/detokenizer_manager.py", line 275, in run_detokenizer_process
    manager.event_loop()
  File "/sgl-workspace/sglang/python/sglang/srt/managers/detokenizer_manager.py", line 110, in event_loop
    output = self._request_dispatcher(recv_obj)
  File "/sgl-workspace/sglang/python/sglang/utils.py", line 471, in __call__
    return fn(obj)
  File "/sgl-workspace/sglang/python/sglang/srt/managers/detokenizer_manager.py", line 174, in handle_batch_token_id_out
    read_texts = self.tokenizer.batch_decode(
  File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 3801, in batch_decode
    return [
  File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 3802, in <listcomp>
    self.decode(
  File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 3841, in decode
    return self._decode(
  File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_fast.py", line 682, in _decode
    text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
OverflowError: out of range integral type conversion attempted
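
My understanding of the crash chain (an assumption, not confirmed from the SGLang code): NaN values in the logits poison the sampling distribution, so the sampled token id can be garbage and fall outside the range the tokenizer's decoder accepts. A minimal numpy sketch of how a single NaN propagates through a softmax (values are illustrative):

```python
import numpy as np

def softmax(logits):
    # Standard numerically-stable softmax; a NaN anywhere
    # poisons the entire output row.
    shifted = logits - np.max(logits)
    exp = np.exp(shifted)
    return exp / exp.sum()

logits = np.array([1.0, 2.0, np.nan, 0.5])
probs = softmax(logits)
# Every probability is NaN, so any token id sampled from this
# distribution is undefined and may lie outside the vocab range.
print(np.isnan(probs).all())  # True
```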

When launching the server with the --enable-nan-detection flag, the crash can be avoided, though this may hurt model performance:

[2025-07-24 07:22:20 TP5] Detected errors during sampling! NaN in the logits.
[2025-07-24 07:22:20 TP6] Detected errors during sampling! NaN in the logits.
[2025-07-24 07:22:20 TP7] Detected errors during sampling! NaN in the logits.
[2025-07-24 07:22:20 TP4] Detected errors during sampling! NaN in the logits.
[2025-07-24 07:22:20 TP3] Detected errors during sampling! NaN in the logits.
[2025-07-24 07:22:20 TP0] Detected errors during sampling! NaN in the logits.
[2025-07-24 07:22:20 TP2] Detected errors during sampling! NaN in the logits.
[2025-07-24 07:22:20 TP1] Detected errors during sampling! NaN in the logits.
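
I assume the flag works by checking the logits before sampling and falling back to something safe instead of crashing; a minimal sketch of such a guard (my guess at the mechanism, not the actual SGLang implementation):

```python
import numpy as np

def guard_logits(logits):
    # If the distribution is NaN-poisoned, fall back to all-zero
    # logits (a uniform distribution after softmax) so sampling
    # stays in range instead of crashing downstream.
    if np.isnan(logits).any():
        return np.zeros_like(logits)
    return logits

safe = guard_logits(np.array([1.0, np.nan, 0.5]))
print(safe)  # [0. 0. 0.]
```

A fallback like this keeps the server alive but discards the model's actual prediction for that step, which would explain the quality hit mentioned above.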

I have also found that the unit test pytest test_cutlass_w4a8_moe.py::test_cutlass_w4a8_moe occasionally fails with a NaN error:

AssertionError: Tensor-likes are not close!
Mismatch elements: 14336 / 14336 (100.0%)
Greatest absolute difference: nan at index (0, 0) (up to 0.1 allowed)
Greatest relative difference: nan at index (0, 0) (up to 0.01 allowed)
test_cutlass_w4a8_moe.py:158: AssertionError

Reproduction

I launched the server exactly as described in PR https://github.com/sgl-project/sglang/pull/7762 :
Launch command:

SGL_ENABLE_JIT_DEEPGEMM=1 python3 -m sglang.launch_server --model-path ${MODEL_PATH} --context-length 8192 --tp 8 --trust-remote-code --host 0.0.0.0 --port 8000 --mem-fraction-static 0.8 --enable-ep-moe --cuda-graph-max-bs 256 --cuda-graph-bs 1 2 4 8 16 32 64 128 256 --max-running-requests 256 --disable-radix-cache

The model: DeepSeek-R1-W4AFP8

Before the server could launch successfully, I had to fix a small bug, which may have been introduced while refactoring the quant method. Here is my bugfix:

diff --git a/python/sglang/srt/layers/quantization/w4afp8.py b/python/sglang/srt/layers/quantization/w4afp8.py
index 1c9dc5d..8766e44 100644
--- a/python/sglang/srt/layers/quantization/w4afp8.py
+++ b/python/sglang/srt/layers/quantization/w4afp8.py
@@ -98,7 +98,7 @@ class W4AFp8Config(QuantizationConfig):
         return []


-class W4AFp8MoEMethod(FusedMoEMethodBase):
+class W4AFp8MoEMethod():

     def __init__(self, quant_config: W4AFp8Config):
         self.quant_config = quant_config
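
My guess at why the original code failed to launch (an assumption; I have not traced the refactor): if FusedMoEMethodBase declares abstract methods that W4AFp8MoEMethod no longer implements after the refactoring, instantiating the subclass raises a TypeError, and dropping the base class sidesteps that check. A generic sketch of the mechanism (class and method names are illustrative, not the real SGLang ones):

```python
from abc import ABC, abstractmethod

class MoEMethodBase(ABC):  # stand-in for FusedMoEMethodBase
    @abstractmethod
    def apply(self, layer, x):
        ...

class BrokenMethod(MoEMethodBase):
    # Hypothetical subclass that lost its apply() override in a refactor.
    def __init__(self, quant_config):
        self.quant_config = quant_config

try:
    BrokenMethod(quant_config=None)
    failed = False
except TypeError:
    # "Can't instantiate abstract class BrokenMethod ..."
    failed = True
print(failed)  # True
```

If that is the real cause, implementing the missing abstract method(s) would be a cleaner fix than removing the inheritance.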

Environment

Python: 3.10.12 (main, May 27 2025, 17:12:29) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H20
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.6, V12.6.68
CUDA Driver Version: 550.54.15
PyTorch: 2.7.1+cu126
sglang: 0.4.9
sgl_kernel: 0.2.5
flashinfer_python: 0.2.7.post1
triton: 3.3.1
transformers: 4.53.3
torchao: 0.9.0+cu126
numpy: 2.2.6
aiohttp: 3.12.13
fastapi: 0.115.14
hf_transfer: 0.1.9
huggingface_hub: 0.33.2
interegular: 0.3.3
modelscope: 1.27.1
orjson: 3.10.18
outlines: 0.1.11
packaging: 25.0
psutil: 7.0.0
pydantic: 2.11.7
python-multipart: 0.0.20
pyzmq: 27.0.0
uvicorn: 0.35.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.19
openai: 1.93.0
tiktoken: 0.9.0
anthropic: 0.57.1
litellm: 1.74.0
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 NIC9 NIC10 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 SYS SYS SYS SYS SYS SYS PIX SYS SYS SYS SYS 0-95,192-287 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 SYS SYS SYS SYS SYS SYS SYS PIX PHB SYS SYS 0-95,192-287 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 SYS SYS SYS SYS SYS SYS SYS PHB PIX SYS SYS 0-95,192-287 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX SYS 0-95,192-287 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX 96-191,288-383 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS SYS SYS PIX PIX SYS SYS SYS SYS SYS 96-191,288-383 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 PIX PIX PHB PHB SYS SYS SYS SYS SYS SYS SYS 96-191,288-383 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X PHB PHB PIX PIX SYS SYS SYS SYS SYS SYS SYS 96-191,288-383 1 N/A
NIC0 SYS SYS SYS SYS SYS SYS PIX PHB X PIX PHB PHB SYS SYS SYS SYS SYS SYS SYS
NIC1 SYS SYS SYS SYS SYS SYS PIX PHB PIX X PHB PHB SYS SYS SYS SYS SYS SYS SYS
NIC2 SYS SYS SYS SYS SYS SYS PHB PIX PHB PHB X PIX SYS SYS SYS SYS SYS SYS SYS
NIC3 SYS SYS SYS SYS SYS SYS PHB PIX PHB PHB PIX X SYS SYS SYS SYS SYS SYS SYS
NIC4 SYS SYS SYS SYS SYS PIX SYS SYS SYS SYS SYS SYS X PIX SYS SYS SYS SYS SYS
NIC5 SYS SYS SYS SYS SYS PIX SYS SYS SYS SYS SYS SYS PIX X SYS SYS SYS SYS SYS
NIC6 PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS X SYS SYS SYS SYS
NIC7 SYS PIX PHB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS X PHB SYS SYS
NIC8 SYS PHB PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PHB X SYS SYS
NIC9 SYS SYS SYS PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS X SYS
NIC10 SYS SYS SYS SYS PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_10
NIC1: mlx5_11
NIC2: mlx5_12
NIC3: mlx5_13
NIC4: mlx5_14
NIC5: mlx5_15
NIC6: mlx5_bond_0
NIC7: mlx5_bond_1
NIC8: mlx5_bond_2
NIC9: mlx5_bond_3
NIC10: mlx5_bond_4

ulimit soft: 1048576

Labels

bug (Something isn't working), quant (LLM Quantization)
