Conversation


@zzhx1 zzhx1 commented Sep 10, 2025

Delete redundant code from linear.py

Does this PR introduce any user-facing change?

How was this patch tested?

Signed-off-by: zzhx1 <zzh_201018@outlook.com>

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Fill in the PR description when writing the commit message to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors linear.py to remove redundant code by moving logic from AscendMergedColumnParallelLinear to its parent AscendColumnParallelLinear, and removing the AscendLinearBase class. While the refactoring simplifies the code, it introduces a couple of critical bugs related to handling the output_sizes parameter, which can lead to TypeError exceptions during initialization. My review provides specific fixes for these issues.

Comment on lines +90 to +91
assert all(output_size % self.tp_size == 0
           for output_size in output_sizes)

critical

This assertion will raise a TypeError if output_sizes is None, because all() cannot iterate over None. Since output_sizes is an optional argument in AscendColumnParallelLinear.__init__ with a default value of None, this will crash whenever the class is initialized without this argument.

A similar issue exists in the if hasattr(self, "output_sizes") block at line 97, which is not part of the diff but is affected by this change. The list comprehension at line 100 will also fail if self.output_sizes is None.

Please add a check for output_sizes being non-None before attempting to iterate over it.

Suggested change
-    assert all(output_size % self.tp_size == 0
-               for output_size in output_sizes)
+    if output_sizes:
+        assert all(output_size % self.tp_size == 0
+                   for output_size in output_sizes)
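To illustrate the failure mode, here is a minimal standalone sketch of the guarded assertion. The helper name `check_divisibility` is hypothetical; `output_sizes` and `tp_size` mirror the attributes in the snippet under review:

```python
def check_divisibility(output_sizes, tp_size):
    # Guard mirrors the suggested change: skip the assertion when
    # output_sizes is None (the default in __init__) or empty.
    # Without the guard, iterating over None raises
    # "TypeError: 'NoneType' object is not iterable".
    if output_sizes:
        assert all(output_size % tp_size == 0
                   for output_size in output_sizes)


check_divisibility([128, 256], tp_size=8)  # divisible: assertion passes
check_divisibility(None, tp_size=8)        # no-op instead of TypeError
```

The same guard would also need to cover the list comprehension mentioned at line 100, which iterates over self.output_sizes under the same assumption.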


This pull request has conflicts, please resolve those before we can evaluate the pull request.
