Feat/nemo rl rlix f5 f6 #2

Open

TianyeGGBond wants to merge 3 commits into rlops:main from
Conversation
Add rlix_hooks.py: RLixHooksProtocol (a typing_extensions Protocol) plus a NoOpRLixHooks default for standalone mode. The seam file keeps NeMo RL free of direct rlix package imports.

Modify async_grpo_train:
- rlix_hooks parameter, injected by NemoRLRLixHooks from the pipeline actor
- DO_TIME_SHARING flag derived from the RLIX_CONTROL_PLANE env var
- before_training(step): blocks on the scheduler GPU grant before lp_inference
- after_training(step): notifies the scheduler of the release; replaces refit in RLix mode (weight sync and version update are done atomically in _expand_workers, F6)
- on_trajectory_collector_created: registers the collector handle so _expand_workers can call set_weight_version before activating DP-rank routing
- Initial refit and prepare_for_generation are skipped when DO_TIME_SHARING=True

TODO placeholders in the after_training branch:
- F4: policy.build_cpu_bucket_cache(step)
- F11: policy.offload_training_gpu() + policy.destroy_nccl_groups()

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
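The seam file described in the commit message might look roughly like the sketch below. The hook names come from this PR; the exact signatures, docstrings, and the `runtime_checkable` decorator are assumptions, not the actual rlix_hooks.py contents.

```python
# Hypothetical sketch of the rlix_hooks.py seam; method signatures are assumptions.
from typing_extensions import Protocol, runtime_checkable


@runtime_checkable
class RLixHooksProtocol(Protocol):
    """Seam between NeMo RL and RLix; the real implementation is injected at runtime."""

    def before_training(self, step: int) -> None:
        """F5: block until the scheduler grants training GPUs."""
        ...

    def after_training(self, step: int) -> None:
        """F5: notify the scheduler that training GPUs can be released."""
        ...

    def on_trajectory_collector_created(self, collector: object) -> None:
        """F6: register the collector handle for later set_weight_version calls."""
        ...


class NoOpRLixHooks:
    """Default for standalone mode: every hook is a no-op."""

    def before_training(self, step: int) -> None:
        pass

    def after_training(self, step: int) -> None:
        pass

    def on_trajectory_collector_created(self, collector: object) -> None:
        pass
```

Keeping the Protocol in a standalone file means NeMo RL only ever type-checks against the seam, while the concrete NemoRLRLixHooks lives in the rlix repo.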
feat(rlix): wire F5/F6 scheduler hooks and vLLM weight update receiver
Summary
rlix_hooks.py (new): defines RLixHooksProtocol + NoOpRLixHooks as the seam between NeMo RL and RLix. NeMo RL never imports from the rlix package directly; the real implementation is injected at runtime by NemoRLRLixHooks (rlix repo).

grpo.py: wires the F5/F6 hooks into async_grpo_train via an optional rlix_hooks parameter. Adds a DO_TIME_SHARING flag (controlled by RLIX_CONTROL_PLANE=rlix) to skip the standalone refit/prepare paths that conflict with scheduler-driven sleep/wake.

vllm_backend.py: adds the RLix weight update receiver methods to VllmInternalWorkerExtension: setup_collective_group, update_parameter_in_bucket, broadcast_parameter, destroy_collective_group, finalize_weight_update, verify_model.

vllm_generation.py: adds get_model_update_receiver (exposes the worker surface for selective sync) and finalize_weight_update (dispatches post-load hooks to the selected DP ranks after bucket sync).

vllm_worker.py / vllm_worker_async.py: add an rlix_model_update_rpc dispatcher that forwards RLix weight-update method calls to the vLLM internal workers via collective_rpc.

How it fits together
async_grpo_train
hooks.before_training(step) ← F5: blocks on scheduler GPU grant
policy.train()
hooks.after_training(step) ← F5: releases actor_train GPUs
└─ scheduler triggers resize_infer(add=overlap_ranks)
└─ _expand_workers (rlix repo)
├─ wake_up_partial
├─ NemoRLModelUpdateService.sync_selected_workers
│ └─ setup_collective_group → broadcast_parameter → finalize_weight_update
├─ set_weight_version (collector)
└─ activate_dp_ranks (routing on)
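The control flow in the diagram can be sketched as a minimal training loop. Everything here is illustrative: train_loop, policy.train_step, policy.refit, and prepare_for_generation are stand-ins for the real async_grpo_train internals, which this PR does not show; only the hook call sites and the DO_TIME_SHARING gating match the description above.

```python
# Hedged sketch of the F5 hook placement; the policy surface is an assumption.
import os

# RLix mode is active only when the control plane env var is set to "rlix".
DO_TIME_SHARING = os.environ.get("RLIX_CONTROL_PLANE") == "rlix"


def train_loop(policy, hooks, num_steps):
    if not DO_TIME_SHARING:
        # Standalone path only: skipped under RLix, where the scheduler
        # drives sleep/wake instead.
        policy.prepare_for_generation()

    for step in range(num_steps):
        hooks.before_training(step)  # F5: blocks on the scheduler GPU grant
        policy.train_step(step)
        hooks.after_training(step)   # F5: releases actor_train GPUs; in RLix mode
                                     # this replaces refit, since weight sync and
                                     # version update happen in _expand_workers (F6)
        if not DO_TIME_SHARING:
            policy.refit()           # standalone refit path, unchanged
```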
Standalone mode (RLIX_CONTROL_PLANE unset): NoOpRLixHooks is used, all hook calls are no-ops, and the refit/prepare paths are unchanged.

Pending (follow-up features)
TODO F4: policy.build_cpu_bucket_cache(step) before after_training
TODO F11: policy.offload_training_gpu() + destroy_nccl_groups() before after_training

Test plan
RLIX_CONTROL_PLANE unset → NoOpRLixHooks, refit path unchanged
DO_TIME_SHARING=True: initial prepare_for_generation / refit skipped; before_training / after_training called each step
rlix_model_update_rpc dispatches correctly to the sync/async worker variants
finalize_weight_update on VllmGeneration dispatches only to the requested DP ranks
setup_collective_group early-returns True for ranks not in comm_plan
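The selective-dispatch property in the test plan (finalize only the requested DP ranks) can be illustrated with a toy version. This standalone finalize_weight_update and the FakeWorker class are assumptions for illustration, not the VllmGeneration implementation or its worker API.

```python
# Toy model of dispatching a post-load hook to a subset of DP ranks.
def finalize_weight_update(workers, dp_ranks):
    """Call the finalize hook only on the selected DP ranks; others stay untouched."""
    results = {}
    for rank in dp_ranks:
        results[rank] = workers[rank].finalize_weight_update()
    return results


class FakeWorker:
    """Minimal stand-in for a vLLM internal worker, for demonstration only."""

    def __init__(self):
        self.finalized = False

    def finalize_weight_update(self):
        self.finalized = True
        return True
```

A test along these lines would build a few fake workers, dispatch to a subset of ranks, and assert that only those ranks saw the call.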