Checkpointing randomly fails #12203

Open
aflah02 opened this issue Feb 15, 2025 · 0 comments
Labels
bug Something isn't working

Comments

aflah02 commented Feb 15, 2025

Hi,
I'm pretraining an LLM and have noticed that, deep into training, the checkpointing process sometimes crashes and kills the run.

The error appears to come from async checkpointing:

i.pretrain/0 [default0]:[NeMo W 2025-02-15 08:28:04 nemo_logging:405] Some async checkpoint saves might be not finalized properly.
i.pretrain/0 [default0]:[rank0]: Traceback (most recent call last):
i.pretrain/0 [default0]:[rank0]:   File "<frozen runpy>", line 198, in _run_module_as_main
i.pretrain/0 [default0]:[rank0]:   File "<frozen runpy>", line 88, in _run_code
i.pretrain/0 [default0]:[rank0]:   File "/opt/NeMo-Run/src/nemo_run/core/runners/fdl_runner.py", line 66, in <module>
i.pretrain/0 [default0]:[rank0]:     fdl_runner_app()
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/typer/main.py", line 340, in __call__
i.pretrain/0 [default0]:[rank0]:     raise e
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/typer/main.py", line 323, in __call__
i.pretrain/0 [default0]:[rank0]:     return get_command(self)(*args, **kwargs)
i.pretrain/0 [default0]:[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/click/core.py", line 1161, in __call__
i.pretrain/0 [default0]:[rank0]:     return self.main(*args, **kwargs)
i.pretrain/0 [default0]:[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/typer/core.py", line 680, in main
i.pretrain/0 [default0]:[rank0]:     return _main(
i.pretrain/0 [default0]:[rank0]:            ^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/typer/core.py", line 198, in _main
i.pretrain/0 [default0]:[rank0]:     rv = self.invoke(ctx)
i.pretrain/0 [default0]:[rank0]:          ^^^^^^^^^^^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/click/core.py", line 1443, in invoke
i.pretrain/0 [default0]:[rank0]:     return ctx.invoke(self.callback, **ctx.params)
i.pretrain/0 [default0]:[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/click/core.py", line 788, in invoke
i.pretrain/0 [default0]:[rank0]:     return __callback(*args, **kwargs)
i.pretrain/0 [default0]:[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/typer/main.py", line 698, in wrapper
i.pretrain/0 [default0]:[rank0]:     return callback(**use_params)
i.pretrain/0 [default0]:[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/opt/NeMo-Run/src/nemo_run/core/runners/fdl_runner.py", line 62, in fdl_direct_run
i.pretrain/0 [default0]:[rank0]:     fdl_fn()
i.pretrain/0 [default0]:[rank0]:   File "/opt/NeMo/nemo/collections/llm/api.py", line 150, in pretrain
i.pretrain/0 [default0]:[rank0]:     return train(
i.pretrain/0 [default0]:[rank0]:            ^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/opt/NeMo/nemo/collections/llm/api.py", line 107, in train
i.pretrain/0 [default0]:[rank0]:     trainer.fit(model, data)
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/trainer/trainer.py", line 538, in fit
i.pretrain/0 [default0]:[rank0]:     call._call_and_handle_interrupt(
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/trainer/call.py", line 46, in _call_and_handle_interrupt
i.pretrain/0 [default0]:[rank0]:     return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
i.pretrain/0 [default0]:[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/strategies/launchers/subprocess_script.py", line 105, in launch
i.pretrain/0 [default0]:[rank0]:     return function(*args, **kwargs)
i.pretrain/0 [default0]:[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/trainer/trainer.py", line 574, in _fit_impl
i.pretrain/0 [default0]:[rank0]:     self._run(model, ckpt_path=ckpt_path)
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/trainer/trainer.py", line 981, in _run
i.pretrain/0 [default0]:[rank0]:     results = self._run_stage()
i.pretrain/0 [default0]:[rank0]:               ^^^^^^^^^^^^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/trainer/trainer.py", line 1025, in _run_stage
i.pretrain/0 [default0]:[rank0]:     self.fit_loop.run()
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/loops/fit_loop.py", line 205, in run
i.pretrain/0 [default0]:[rank0]:     self.advance()
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/loops/fit_loop.py", line 363, in advance
i.pretrain/0 [default0]:[rank0]:     self.epoch_loop.run(self._data_fetcher)
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/loops/training_epoch_loop.py", line 141, in run
i.pretrain/0 [default0]:[rank0]:     self.on_advance_end(data_fetcher)
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/loops/training_epoch_loop.py", line 295, in on_advance_end
i.pretrain/0 [default0]:[rank0]:     self.val_loop.run()
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/loops/utilities.py", line 178, in _decorator
i.pretrain/0 [default0]:[rank0]:     return loop_run(self, *args, **kwargs)
i.pretrain/0 [default0]:[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/loops/evaluation_loop.py", line 142, in run
i.pretrain/0 [default0]:[rank0]:     return self.on_run_end()
i.pretrain/0 [default0]:[rank0]:            ^^^^^^^^^^^^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/loops/evaluation_loop.py", line 268, in on_run_end
i.pretrain/0 [default0]:[rank0]:     self._on_evaluation_end()
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/loops/evaluation_loop.py", line 313, in _on_evaluation_end
i.pretrain/0 [default0]:[rank0]:     call._call_callback_hooks(trainer, hook_name, *args, **kwargs)
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/trainer/call.py", line 218, in _call_callback_hooks
i.pretrain/0 [default0]:[rank0]:     fn(trainer, trainer.lightning_module, *args, **kwargs)
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 335, in on_validation_end
i.pretrain/0 [default0]:[rank0]:     self._save_last_checkpoint(trainer, monitor_candidates)
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 696, in _save_last_checkpoint
i.pretrain/0 [default0]:[rank0]:     self._save_checkpoint(trainer, filepath)
i.pretrain/0 [default0]:[rank0]:   File "/opt/NeMo/nemo/lightning/pytorch/callbacks/model_checkpoint.py", line 628, in _save_checkpoint
i.pretrain/0 [default0]:[rank0]:     trainer.save_checkpoint(filepath, save_weights_only, storage_options=storage_options)
i.pretrain/0 [default0]:[rank0]:   File "/usr/local/lib/python3.12/dist-packages/lightning/pytorch/trainer/trainer.py", line 1365, in save_checkpoint
i.pretrain/0 [default0]:[rank0]:     self.strategy.save_checkpoint(checkpoint, filepath, storage_options=storage_options)
i.pretrain/0 [default0]:[rank0]:   File "/opt/NeMo/nemo/lightning/pytorch/strategies/megatron_strategy.py", line 762, in save_checkpoint
i.pretrain/0 [default0]:[rank0]:     self.checkpoint_io.save_checkpoint(checkpoint, filepath, storage_options=storage_options)
i.pretrain/0 [default0]:[rank0]:   File "/opt/NeMo/nemo/utils/callbacks/dist_ckpt_io.py", line 130, in save_checkpoint
i.pretrain/0 [default0]:[rank0]:     async_request = self.checkpoint_io.save_checkpoint(checkpoint, path, storage_options)
i.pretrain/0 [default0]:[rank0]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/opt/NeMo/nemo/lightning/io/pl.py", line 200, in save_checkpoint
i.pretrain/0 [default0]:[rank0]:     return dist_checkpointing.save(
i.pretrain/0 [default0]:[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^
i.pretrain/0 [default0]:[rank0]:   File "/opt/megatron-lm/megatron/core/dist_checkpointing/serialization.py", line 354, in save
i.pretrain/0 [default0]:[rank0]:     raise CheckpointingException(
i.pretrain/0 [default0]:[rank0]: megatron.core.dist_checkpointing.core.CheckpointingException: Checkpoint destination directory (Checkpoints/llama32_1b_dclm-SL-2048-PGBS-16-GAS-4-NGPU-8-NNODES-1-TW-2025-02-14-16-34-40/llama32_1b_dclm-SL-2048-PGBS-16-GAS-4-NGPU-8-NNODES-1-TW/checkpoints/model_name=0--val_loss=4.18-step=2299-consumed_samples=1177600.0-last/weights) is not empty
i.pretrain/0 W0215 08:28:15.329000 485 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 557 closing signal SIGTERM
i.pretrain/0 W0215 08:28:15.335000 485 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 558 closing signal SIGTERM
i.pretrain/0 W0215 08:28:15.340000 485 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 559 closing signal SIGTERM
i.pretrain/0 W0215 08:28:15.345000 485 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 560 closing signal SIGTERM
i.pretrain/0 W0215 08:28:15.350000 485 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 561 closing signal SIGTERM
i.pretrain/0 W0215 08:28:15.356000 485 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 562 closing signal SIGTERM
i.pretrain/0 W0215 08:28:15.366000 485 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 563 closing signal SIGTERM
i.pretrain/0 W0215 08:28:45.370000 485 torch/distributed/elastic/multiprocessing/api.py:916] Unable to shutdown process 557 via 15, forcefully exiting via 9
i.pretrain/0 W0215 08:28:49.411000 485 torch/distributed/elastic/multiprocessing/api.py:916] Unable to shutdown process 558 via 15, forcefully exiting via 9
i.pretrain/0 W0215 08:28:52.973000 485 torch/distributed/elastic/multiprocessing/api.py:916] Unable to shutdown process 559 via 15, forcefully exiting via 9
i.pretrain/0 W0215 08:28:57.039000 485 torch/distributed/elastic/multiprocessing/api.py:916] Unable to shutdown process 560 via 15, forcefully exiting via 9
i.pretrain/0 W0215 08:29:00.442000 485 torch/distributed/elastic/multiprocessing/api.py:916] Unable to shutdown process 561 via 15, forcefully exiting via 9
i.pretrain/0 W0215 08:29:04.253000 485 torch/distributed/elastic/multiprocessing/api.py:916] Unable to shutdown process 562 via 15, forcefully exiting via 9
i.pretrain/0 W0215 08:29:07.786000 485 torch/distributed/elastic/multiprocessing/api.py:916] Unable to shutdown process 563 via 15, forcefully exiting via 9
i.pretrain/0 E0215 08:29:10.949000 485 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 556) of binary: /usr/bin/python
i.pretrain/0 I0215 08:29:11.027000 485 torch/distributed/elastic/multiprocessing/errors/__init__.py:368] ('local_rank %s FAILED with no error file. Decorate your entrypoint fn with @record for traceback info. See: https://pytorch.org/docs/stable/elastic/errors.html', 0)
i.pretrain/0 Traceback (most recent call last):
i.pretrain/0   File "/usr/local/bin/torchrun", line 33, in <module>
i.pretrain/0     sys.exit(load_entry_point('torch==2.6.0a0+ecf3bae40a.nv25.1', 'console_scripts', 'torchrun')())
i.pretrain/0              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
i.pretrain/0   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
i.pretrain/0     return f(*args, **kwargs)
i.pretrain/0            ^^^^^^^^^^^^^^^^^^
i.pretrain/0   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/run.py", line 918, in main
i.pretrain/0     run(args)
i.pretrain/0   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/run.py", line 909, in run
i.pretrain/0     elastic_launch(
i.pretrain/0   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/launcher/api.py", line 138, in __call__
i.pretrain/0     return launch_agent(self._config, self._entrypoint, list(args))
i.pretrain/0            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
i.pretrain/0   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
i.pretrain/0     raise ChildFailedError(
i.pretrain/0 torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
i.pretrain/0 ============================================================
i.pretrain/0 nemo_run.core.runners.fdl_runner FAILED
i.pretrain/0 ------------------------------------------------------------
i.pretrain/0 Failures:
i.pretrain/0   <NO_OTHER_FAILURES>
i.pretrain/0 ------------------------------------------------------------
i.pretrain/0 Root Cause (first observed failure):
i.pretrain/0 [0]:
i.pretrain/0   time      : 2025-02-15_08:28:15
i.pretrain/0   host      : localhost
i.pretrain/0   rank      : 0 (local_rank: 0)
i.pretrain/0   exitcode  : 1 (pid: 556)
i.pretrain/0   error_file: <N/A>
i.pretrain/0   traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
i.pretrain/0 ============================================================
[08:29:11] INFO     Job nemo.collections.llm.api.pretrain-pmslfcf3g6wgg finished: FAILED    launcher.py:162

# The experiment was run with the following tasks: ['nemo.collections.llm.api.pretrain']
# You can inspect and reconstruct this experiment at a later point in time using:
experiment = run.Experiment.from_id("nemo.collections.llm.api.pretrain_1739550881")
experiment.status() # Gets the overall status
experiment.logs("nemo.collections.llm.api.pretrain") # Gets the log for the provided task
experiment.cancel("nemo.collections.llm.api.pretrain") # Cancels the provided task

# You can inspect this experiment at a later point in time using the CLI as well:
nemo experiment status nemo.collections.llm.api.pretrain_1739550881
nemo experiment logs nemo.collections.llm.api.pretrain_1739550881 0
nemo experiment cancel nemo.collections.llm.api.pretrain_1739550881 0

Here's my code - https://gist.github.com/aflah02/edf6c71fb24edbbb82794317d8ef624c

This is very flaky and seems to happen at random during training. Any thoughts on how to fix this?
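In case it helps with triage: the failing save is the `-last` checkpoint, and the `weights` directory it complains about already contains files, presumably left behind by an earlier async save that was never finalized (which would also explain the "Some async checkpoint saves might be not finalized properly" warning just before the crash). As a stopgap I've been clearing the stale directory by hand before resuming. Here's a minimal sketch of that cleanup (plain stdlib, not NeMo code; the helper name and the assumption that an unfinished `-last` save is safe to delete are mine):

```python
import shutil
from pathlib import Path


def clear_stale_last_checkpoint(checkpoints_dir: str) -> None:
    """Remove leftover 'weights' dirs under '*-last' checkpoints.

    Assumption: a non-empty 'weights' directory inside a '-last' checkpoint
    that the run crashed on is a partial async save and is safe to delete.
    Only run this while no job is writing to the directory.
    """
    for last_dir in Path(checkpoints_dir).glob("*-last"):
        weights_dir = last_dir / "weights"
        if weights_dir.is_dir() and any(weights_dir.iterdir()):
            print(f"Removing partially written checkpoint: {weights_dir}")
            shutil.rmtree(weights_dir)


if __name__ == "__main__":
    # Checkpoints directory taken from the traceback above; adjust for your run.
    clear_stale_last_checkpoint(
        "Checkpoints/llama32_1b_dclm-SL-2048-PGBS-16-GAS-4-NGPU-8-NNODES-1-TW-2025-02-14-16-34-40/"
        "llama32_1b_dclm-SL-2048-PGBS-16-GAS-4-NGPU-8-NNODES-1-TW/checkpoints"
    )
```

If there is a supported way to make the async finalization robust here (or a recommended setting, e.g. disabling async saves in the strategy/checkpoint config), I'd much prefer that over deleting directories manually.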

aflah02 added the bug label on Feb 15, 2025