20-series GPUs do not support the sinks parameter; attempting to access it directly raises an error. Could you fix this? #24062

@wang824892540

Description

self.has_sink = extra_impl_args.get("sinks") is not None
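That line appears to come from vLLM's attention layer: the gpt-oss model supplies a sinks tensor through extra_impl_args, and those args are splatted into the constructor of whichever attention backend gets selected. Any backend whose __init__ does not declare that keyword fails at construction time, which is exactly what the traceback below shows. A minimal sketch of the mechanism (the class here is a hypothetical stand-in for XFormersImpl, not vLLM code):

```python
# Hypothetical stand-in for a backend without sink support, such as
# XFormersImpl: the model-specific extra args (here, "sinks") are splatted
# into the backend's __init__, and a backend that never declared that
# keyword raises the same TypeError seen in the log.
class BackendWithoutSinks:
    def __init__(self, num_heads: int, head_size: int, scale: float):
        self.num_heads = num_heads
        self.head_size = head_size
        self.scale = scale

extra_impl_args = {"sinks": [0.0] * 64}  # placeholder for per-head sink logits

try:
    BackendWithoutSinks(64, 64, 0.125, **extra_impl_args)
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'sinks'
```

The full startup log follows: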

PS C:\Users\y> docker run --runtime nvidia --gpus all -v C:/Users/y/model/.cache/huggingface:/root/.cache/huggingface -p 8000:8000 --ipc=host vllm/vllm-openai:latest --model unsloth/gpt-oss-20b-unsloth-bnb-4bit
INFO 09-01 19:03:21 [__init__.py:241] Automatically detected platform cuda.
(APIServer pid=1) INFO 09-01 19:03:23 [api_server.py:1805] vLLM API server version 0.10.1.1
(APIServer pid=1) INFO 09-01 19:03:23 [utils.py:326] non-default args: {'model': 'unsloth/gpt-oss-20b-unsloth-bnb-4bit'}
(APIServer pid=1) INFO 09-01 19:03:33 [__init__.py:711] Resolved architecture: GptOssForCausalLM
(APIServer pid=1) INFO 09-01 19:03:33 [__init__.py:1750] Using max model len 131072
(APIServer pid=1) WARNING 09-01 19:03:34 [__init__.py:1171] bitsandbytes quantization is not fully optimized yet. The speed can be slower than non-quantized models.
(APIServer pid=1) WARNING 09-01 19:03:34 [arg_utils.py:1770] Compute Capability < 8.0 is not supported by the V1 Engine. Falling back to V0.
(APIServer pid=1) WARNING 09-01 19:03:34 [arg_utils.py:1555] The model has a long context length (131072). This may cause OOM during the initial memory profiling phase, or result in low performance due to small KV cache size. Consider setting --max-model-len to a smaller value.
(APIServer pid=1) INFO 09-01 19:03:38 [config.py:273] Overriding max cuda graph capture size to 1024 for performance.
(APIServer pid=1) INFO 09-01 19:03:38 [api_server.py:295] Started engine process with PID 60
INFO 09-01 19:03:43 [__init__.py:241] Automatically detected platform cuda.
INFO 09-01 19:03:45 [llm_engine.py:222] Initializing a V0 LLM engine (v0.10.1.1) with config: model='unsloth/gpt-oss-20b-unsloth-bnb-4bit', speculative_config=None, tokenizer='unsloth/gpt-oss-20b-unsloth-bnb-4bit', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=131072, download_dir=None, load_format=bitsandbytes, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=bitsandbytes, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend='GptOss'), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=unsloth/gpt-oss-20b-unsloth-bnb-4bit, enable_prefix_caching=None, chunked_prefill_enabled=False, use_async_output_proc=True, pooler_config=None, compilation_config={"level":0,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":null,"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":0,"use_cudagraph":true,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"pass_config":{"enable_fusion":false,"enable_noop":false},"max_capture_size":256,"local_cache_dir":null}, use_cached_outputs=True,
WARNING 09-01 19:03:48 [interface.py:389] Using 'pin_memory=False' as WSL is detected. This may slow down the performance.
INFO 09-01 19:03:48 [cuda.py:384] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
INFO 09-01 19:03:48 [cuda.py:433] Using XFormers backend.
INFO 09-01 19:03:49 [parallel_state.py:1134] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
INFO 09-01 19:03:49 [model_runner.py:1080] Starting to load model unsloth/gpt-oss-20b-unsloth-bnb-4bit...
INFO 09-01 19:03:50 [cuda.py:384] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
INFO 09-01 19:03:50 [cuda.py:433] Using XFormers backend.
ERROR 09-01 19:03:50 [engine.py:467] XFormersImpl.__init__() got an unexpected keyword argument 'sinks'
ERROR 09-01 19:03:50 [engine.py:467] Traceback (most recent call last):
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 455, in run_mp_engine
ERROR 09-01 19:03:50 [engine.py:467] engine = MQLLMEngine.from_vllm_config(
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/utils/__init__.py", line 1557, in inner
ERROR 09-01 19:03:50 [engine.py:467] return fn(*args, **kwargs)
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 144, in from_vllm_config
ERROR 09-01 19:03:50 [engine.py:467] return cls(
ERROR 09-01 19:03:50 [engine.py:467] ^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 88, in __init__
ERROR 09-01 19:03:50 [engine.py:467] self.engine = LLMEngine(*args, **kwargs)
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 257, in __init__
ERROR 09-01 19:03:50 [engine.py:467] self.model_executor = executor_class(vllm_config=vllm_config)
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 54, in __init__
ERROR 09-01 19:03:50 [engine.py:467] self._init_executor()
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/uniproc_executor.py", line 49, in _init_executor
ERROR 09-01 19:03:50 [engine.py:467] self.collective_rpc("load_model")
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/uniproc_executor.py", line 58, in collective_rpc
ERROR 09-01 19:03:50 [engine.py:467] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/utils/__init__.py", line 3007, in run_method
ERROR 09-01 19:03:50 [engine.py:467] return func(*args, **kwargs)
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^^^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 211, in load_model
ERROR 09-01 19:03:50 [engine.py:467] self.model_runner.load_model()
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1083, in load_model
ERROR 09-01 19:03:50 [engine.py:467] self.model = get_model(vllm_config=self.vllm_config)
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/__init__.py", line 118, in get_model
ERROR 09-01 19:03:50 [engine.py:467] return loader.load_model(vllm_config=vllm_config,
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/base_loader.py", line 44, in load_model
ERROR 09-01 19:03:50 [engine.py:467] model = initialize_model(vllm_config=vllm_config,
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/utils.py", line 63, in initialize_model
ERROR 09-01 19:03:50 [engine.py:467] return model_class(vllm_config=vllm_config, prefix=prefix)
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gpt_oss.py", line 239, in __init__
ERROR 09-01 19:03:50 [engine.py:467] self.model = GptOssModel(
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/compilation/decorators.py", line 183, in __init__
ERROR 09-01 19:03:50 [engine.py:467] old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gpt_oss.py", line 212, in __init__
ERROR 09-01 19:03:50 [engine.py:467] TransformerBlock(
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gpt_oss.py", line 181, in __init__
ERROR 09-01 19:03:50 [engine.py:467] self.attn = OAIAttention(config, prefix=f"{prefix}.attn")
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gpt_oss.py", line 106, in __init__
ERROR 09-01 19:03:50 [engine.py:467] self.attn = Attention(
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] File "/usr/local/lib/python3.12/dist-packages/vllm/attention/layer.py", line 175, in __init__
ERROR 09-01 19:03:50 [engine.py:467] self.impl = impl_cls(num_heads, head_size, scale, num_kv_heads,
ERROR 09-01 19:03:50 [engine.py:467] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-01 19:03:50 [engine.py:467] TypeError: XFormersImpl.__init__() got an unexpected keyword argument 'sinks'
Process SpawnProcess-1:
Traceback (most recent call last):
File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 469, in run_mp_engine
raise e from None
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 455, in run_mp_engine
engine = MQLLMEngine.from_vllm_config(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/utils/init.py", line 1557, in inner
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 144, in from_vllm_config
return cls(
^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 88, in init
self.engine = LLMEngine(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 257, in init
self.model_executor = executor_class(vllm_config=vllm_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 54, in init
self._init_executor()
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/uniproc_executor.py", line 49, in _init_executor
self.collective_rpc("load_model")
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/uniproc_executor.py", line 58, in collective_rpc
answer = run_method(self.driver_worker, method, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/utils/init.py", line 3007, in run_method
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 211, in load_model
self.model_runner.load_model()
File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1083, in load_model
self.model = get_model(vllm_config=self.vllm_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/init.py", line 118, in get_model
return loader.load_model(vllm_config=vllm_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/base_loader.py", line 44, in load_model
model = initialize_model(vllm_config=vllm_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/utils.py", line 63, in initialize_model
return model_class(vllm_config=vllm_config, prefix=prefix)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gpt_oss.py", line 239, in init
self.model = GptOssModel(
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/compilation/decorators.py", line 183, in init
old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gpt_oss.py", line 212, in init
TransformerBlock(
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gpt_oss.py", line 181, in init
self.attn = OAIAttention(config, prefix=f"{prefix}.attn")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gpt_oss.py", line 106, in init
self.attn = Attention(
^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/attention/layer.py", line 175, in init
self.impl = impl_cls(num_heads, head_size, scale, num_kv_heads,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: XFormersImpl.init() got an unexpected keyword argument 'sinks'
[rank0]:[W901 19:03:50.682329363 ProcessGroupNCCL.cpp:1479] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
(APIServer pid=1) Traceback (most recent call last):
(APIServer pid=1) File "<frozen runpy>", line 198, in _run_module_as_main
(APIServer pid=1) File "<frozen runpy>", line 88, in _run_code
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1920, in <module>
(APIServer pid=1) uvloop.run(run_server(args))
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
(APIServer pid=1) return __asyncio.run(
(APIServer pid=1) ^^^^^^^^^^^^^^
(APIServer pid=1) File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=1) return runner.run(main)
(APIServer pid=1) ^^^^^^^^^^^^^^^^
(APIServer pid=1) File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=1) return self._loop.run_until_complete(task)
(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
(APIServer pid=1) return await main
(APIServer pid=1) ^^^^^^^^^^
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1850, in run_server
(APIServer pid=1) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1870, in run_server_worker
(APIServer pid=1) async with build_async_engine_client(
(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1) return await anext(self.gen)
(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 178, in build_async_engine_client
(APIServer pid=1) async with build_async_engine_client_from_engine_args(
(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1) return await anext(self.gen)
(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 318, in build_async_engine_client_from_engine_args
(APIServer pid=1) raise RuntimeError(
(APIServer pid=1) RuntimeError: Engine process failed to start. See stack trace for the root cause.
PS C:\Users\y>
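Reading the log, the failure chain on Turing is: compute capability < 8.0 forces a fallback from the V1 engine to V0 (arg_utils.py:1770); FlashAttention-2 cannot be used on Volta and Turing GPUs (cuda.py:384), so vLLM selects the XFormers backend instead (cuda.py:433); gpt-oss then passes its sinks argument into that backend's constructor, and XFormersImpl.__init__() does not accept it, so engine startup aborts.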

Your 2080 Max-Q is part of the Turing (20-series) generation, so it is affected by the same limitation: the sinks parameter is unsupported by the attention backends available on that hardware, and loading gpt-oss fails with the error above.
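Until an attention backend with sink support is usable on Turing, the friendliest fix on vLLM's side would arguably be to fail early with an actionable message instead of the raw TypeError. Below is a rough sketch of such a guard; the helper name and the idea of calling it from the attention layer are assumptions for illustration, not vLLM's actual code:

```python
import inspect

def reject_unsupported_impl_args(impl_cls, extra_impl_args):
    """Hypothetical guard (not vLLM's actual code): raise a readable error
    when the selected attention backend's __init__ does not accept an extra
    argument the model requires, instead of an opaque TypeError."""
    params = inspect.signature(impl_cls.__init__).parameters
    # A backend whose __init__ takes **kwargs implicitly accepts everything.
    takes_var_kw = any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values())
    unsupported = [] if takes_var_kw else [
        name for name in extra_impl_args if name not in params]
    if unsupported:
        raise NotImplementedError(
            f"{impl_cls.__name__} does not support {unsupported}; "
            "this model needs an attention backend with sink support.")
```

Called just before the impl_cls(...) construction, this would turn the Turing failure into a one-line message naming the backend and the missing feature. Note that silently dropping the argument would not be a safe alternative: gpt-oss's sink logits take part in the attention softmax normalization, so ignoring them would change model outputs rather than merely degrade performance.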
