
Conversation

@kiscad (Contributor) commented Dec 6, 2025

What this PR does / why we need it?

  • Extend the dispatch_ffn_combine host and kernel implementations to support the decoding phase, including updated tiling, epilogue, and quantized MoE routing handling.
  • Introduce and refine HCCL shared‑memory utilities (HcclShmem) for cross‑rank synchronization and safe shared‑memory access in multi‑rank decode.
  • Wire the new dispatch_ffn_combine decode path into the Ascend runtime (forward context, fused MoE communication, W8A8 quantization, MTP proposer, and model_runner_v1).
  • Fix rank initialization and shared‑memory size/offset handling in the FFN dispatch headers to avoid crashes and incorrect memory access during decoding (see the sketch after this list).
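The offset arithmetic behind the last bullet typically looks like the sketch below. This is an illustrative sketch only, not code from this PR: the struct name `ShmemLayout` and its members are hypothetical, and the real `HcclShmem` helpers may partition the segment differently.

```cpp
#include <cstddef>

// Hypothetical layout: each rank owns one sync flag and one data slot inside
// a single shared-memory segment. Flags are packed first, then data slots.
struct ShmemLayout {
    std::size_t num_ranks;   // world size of the communication group
    std::size_t flag_bytes;  // bytes reserved per rank for a sync flag
    std::size_t slot_bytes;  // bytes reserved per rank for dispatch data

    // Total segment size to allocate/map on every rank.
    std::size_t total_bytes() const {
        return num_ranks * (flag_bytes + slot_bytes);
    }

    // Byte offset of `rank`'s sync flag.
    std::size_t flag_offset(std::size_t rank) const {
        return rank * flag_bytes;
    }

    // Byte offset of `rank`'s data slot. Passing in the rank established at
    // initialization (not a stale or default value) and sizing the segment
    // with total_bytes() is what avoids the crash / out-of-bounds class of
    // bugs described in the bullet above.
    std::size_t slot_offset(std::size_t rank) const {
        return num_ranks * flag_bytes + rank * slot_bytes;
    }
};
```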

Does this PR introduce any user-facing change?

How was this patch tested?

github-actions bot commented Dec 6, 2025

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description, so reviewers and future developers can understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

gemini-code-assist bot left a comment

Code Review

This pull request refactors the fused MoE implementation, primarily to decouple operations and dynamically calculate maxOutputSize. While this is a good direction, the changes introduce several critical issues. There's a C++ syntax error that will prevent compilation, a mismatch between a Python operator call and its C++ definition, and the removal of safety checks that could lead to correctness issues. I've detailed these in the review comments.
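For reference, "dynamically calculating maxOutputSize" in a MoE dispatch/combine step usually means deriving a worst-case buffer bound from the token count, the routing top-k, and the world size instead of using a fixed constant. The sketch below only illustrates that idea; the function name and exact formula are assumptions, not this PR's implementation.

```cpp
#include <cstdint>

// Hypothetical worst-case bound on the number of token rows one rank may
// receive after dispatch: each of its tokens is routed to at most top_k
// experts, and in the worst case every rank's routed tokens land here.
inline std::int64_t MaxOutputSize(std::int64_t tokens_per_rank,
                                  std::int64_t top_k,
                                  std::int64_t world_size) {
    return tokens_per_rank * top_k * world_size;
}
```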

@kiscad changed the title from "Dec fusedmoe" to "Adapt dispatch_ffn_combine for decoding" on Dec 6, 2025
Signed-off-by: mojave2 <chenchen145@huawei.com>