Top9 #4760
Conversation
Signed-off-by: Che Ruan <cr623@ic.ac.uk>
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request introduces a mix_placement feature for Mixture-of-Experts models, primarily targeting DeepseekV2/V3 on Ascend hardware. The changes involve modifications to configuration, expert selection logic, and weight loading patches. While the overall direction seems correct for enabling this new functionality, I've identified a critical bug in the expert group calculation within moe_mlp.py that will lead to incorrect behavior. I've also pointed out a high-severity maintainability issue related to a hardcoded magic number in the expert selection logic. Please address these points to ensure correctness and code quality.
```python
group_diff = torch.diff(group_list)
new_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)
```
There is a bug in the calculation of new_group. When group_list is a cumulative sum of token counts, this logic incorrectly calculates the token count for the first group. For example, if group_list is [10, 25, 30], the token counts per group are [10, 15, 5]. The current code produces [15, 15, 5], using the second group's count for the first group. Additionally, this will raise an IndexError if group_list has fewer than two elements.
The correct approach is to combine the first element of group_list with the differences of the rest of the list.
Suggested change:
```python
group_diff = torch.diff(group_list)
new_group = torch.cat([group_list[:1], group_diff])
```
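For illustration, a minimal sketch of how the buggy and corrected versions differ on the example from the comment above (the tensor values are hypothetical, not taken from the PR):

```python
import torch

# Cumulative token counts per expert group, as in the example above.
group_list = torch.tensor([10, 25, 30])

# Buggy version: repeats the second group's count for the first group.
group_diff = torch.diff(group_list)
buggy = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)
print(buggy)  # tensor([15, 15,  5])

# Suggested fix: the first cumulative value is already the first group's count.
fixed = torch.cat([group_list[:1], torch.diff(group_list)])
print(fixed)  # tensor([10, 15,  5])
```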
```python
                                       device=topk_ids.device)
pad_shared_expert_weights = torch.full((topk_weights.shape[0], 1),
                                       0.4,
```
The value 0.4 is hardcoded as the weight for the padded shared expert. This is a magic number which harms readability and maintainability. It should be defined as a named constant with a descriptive name at the top of the file (e.g., SHARED_EXPERT_DEFAULT_WEIGHT), or passed as a parameter if it's meant to be configurable.
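As a sketch of the suggested refactor (the constant name, helper function, and parameters are assumptions for illustration, not the actual module code):

```python
import torch

# Assumed module-level constant replacing the hardcoded 0.4; the name
# SHARED_EXPERT_DEFAULT_WEIGHT is illustrative, not taken from the PR.
SHARED_EXPERT_DEFAULT_WEIGHT = 0.4


def pad_shared_expert_weights(topk_weights: torch.Tensor,
                              topk_ids: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: append a shared-expert column to the routing
    weights, using the named constant instead of a magic number."""
    pad_weights = torch.full((topk_weights.shape[0], 1),
                             SHARED_EXPERT_DEFAULT_WEIGHT,
                             dtype=topk_weights.dtype,
                             device=topk_ids.device)
    return torch.cat([topk_weights, pad_weights], dim=-1)
```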
What this PR does / why we need it?
Does this PR introduce any user-facing change?
How was this patch tested?