erictang000 added a commit that referenced this pull request on Oct 17, 2025:

This reverts commit 8389f22.
erictang000 added a commit that referenced this pull request on Oct 17, 2025:

Reverts #481 due to dependency issues with megatron.
SumanthRH pushed a commit that referenced this pull request on Oct 24, 2025:

… to 0.11.0 + pin minimum uv version for extra-build-dependencies (#528)

## Separates vllm + megatron deps

After #481, there were some megatron flashinfer issues with `--extra vllm`. This PR separates the version of vllm that megatron relies on from the general vllm version, allowing us to bump vllm to 0.11.0 for the rest of the training stack.

## Update flash-attn installation

Updates the flash-attn installation to use the `extra-build-dependencies` feature from uv, which requires uv >= 0.8.10. This feature allows the following, removing the need to juggle markers + extras to specify a URL source for each set of extras:

```toml
[tool.uv.extra-build-dependencies]
flash-attn = [{ requirement = "torch", match-runtime = true }]

[tool.uv.extra-build-variables]
flash-attn = { FLASH_ATTENTION_SKIP_CUDA_BUILD = "TRUE" }

[project.optional-dependencies]
vllm = [
    "vllm==0.11.0",
    "flash-attn==2.8.3",
    ...
]
mcore = [
    "flash-attn==2.7.4.post1",
    ...
]
```
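For context, here is a minimal sketch of how these pieces could fit together in `pyproject.toml`. The `required-version` and `conflicts` settings are real uv options, but the exact values, and the older vllm pin under `mcore`, are illustrative assumptions rather than the PR's actual contents:

```toml
[tool.uv]
# Refuse to run under uv releases that predate extra-build-dependencies.
required-version = ">=0.8.10"
# Declare the extras mutually exclusive so uv can resolve two
# different vllm pins within one project.
conflicts = [
    [
        { extra = "vllm" },
        { extra = "mcore" },
    ],
]

[project.optional-dependencies]
vllm = [
    "vllm==0.11.0",
]
mcore = [
    "vllm==0.10.1",  # hypothetical older pin kept for megatron compatibility
]
```

With a layout like this, `uv sync --extra vllm` and `uv sync --extra mcore` resolve against independent vllm pins, so bumping the general training stack to 0.11.0 leaves the megatron path untouched.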
li-boxuan pushed a commit to li-boxuan/SkyRL that referenced this pull request on Nov 23, 2025:

This required a flashinfer bump, which we now install from PyPI directly, but they now provide prebuilt JIT compilation caches! So startup is still fast.
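As a rough illustration of the PyPI-direct install, the dependency entry might look like the following; `flashinfer-python` is the name flashinfer publishes under on PyPI, but its placement and the unpinned version here are assumptions, not values from the commit:

```toml
[project.optional-dependencies]
vllm = [
    # Assumed entry: pull flashinfer straight from PyPI. Recent releases
    # ship prebuilt JIT compilation caches, so skipping a custom wheel
    # index no longer costs extra startup time.
    "flashinfer-python",
]
```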
li-boxuan pushed a commit to li-boxuan/SkyRL that referenced this pull request on Nov 23, 2025:

Reverts NovaSky-AI#481 due to dependency issues with megatron.
li-boxuan pushed a commit to li-boxuan/SkyRL that referenced this pull request on Nov 23, 2025:
… to 0.11.0 + pin minimum uv version for extra-build-dependencies (NovaSky-AI#528)
dzorlu pushed a commit to fleet-ai/SkyRL that referenced this pull request on Feb 4, 2026:

This required a flashinfer bump, which we now install from PyPI directly, but they now provide prebuilt JIT compilation caches! So startup is still fast.
dzorlu pushed a commit to fleet-ai/SkyRL that referenced this pull request on Feb 4, 2026:

Reverts NovaSky-AI#481 due to dependency issues with megatron.
dzorlu pushed a commit to fleet-ai/SkyRL that referenced this pull request on Feb 4, 2026:
… to 0.11.0 + pin minimum uv version for extra-build-dependencies (NovaSky-AI#528)