Rescalability via IBM dataset layers #1372
base: main
Conversation
""" | ||
|
||
|
||
def _shard_partition(itemlist: List[Any], rank: int, worldsize: int) -> List[Any]: |
Are tail elements just truncated?
No, this code will distribute extra elements as evenly as possible, even if it's not perfect. Technically nothing breaks if you load into a worldsize that doesn't divide logical_shards evenly; you just end up with some shards progressing faster than others (since some devices now have an extra logical shard).
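For illustration, even distribution of the remainder typically looks like this (a sketch, not necessarily the PR's exact code):

from typing import Any, List

def shard_partition_sketch(itemlist: List[Any], rank: int, worldsize: int) -> List[Any]:
    # Hypothetical sketch: split len(itemlist) items across worldsize ranks,
    # giving the first (len % worldsize) ranks one extra item each, so no
    # tail elements are dropped.
    n = len(itemlist)
    base, extra = divmod(n, worldsize)
    start = rank * base + min(rank, extra)
    end = start + base + (1 if rank < extra else 0)
    return itemlist[start:end]

E.g., 10 items over 3 ranks split as 4/3/3, so nothing is truncated; ranks with an extra item just progress slightly faster.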
# Setup / loading flags
self.is_setup = False

def setup(self):
This can be mapped pretty easily to BaseNode.reset()
Yeah I thought so too!
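A rough sketch of the mapping, assuming a BaseNode-style reset(initial_state) entry point (names here are illustrative, not the PR's code):

# Hypothetical adapter: reset() plays the role of setup(), building
# rank/worker-dependent state once and optionally restoring a checkpoint.
class RescalableNode:
    def __init__(self, dataset):
        self.dataset = dataset

    def reset(self, initial_state=None):
        if not self.dataset.is_setup:
            self.dataset.setup()
        if initial_state is not None:
            self.dataset.load_state_dict(initial_state)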
[setattr(self, flag, state_dict[self.statename(flag)]) for flag in self.state_params]

class _WrapperDataset(_StatefulDataset):
thinking out loud: could we do this with mixins instead of extending the type hierarchy?
Actually, what's the benefit of having two subclasses?
The base reader is a _StatefulDataset but not a _WrapperDataset, so the distinction is meaningful, but yeah the only reason it's not mixins is because of my lack of familiarity with building mixins!
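For reference, a mixin version might separate stateful behavior from wrapping behavior while preserving that distinction (illustrative names, not the PR's code):

# Hypothetical mixin sketch: StatefulMixin carries state_dict handling,
# WrapperMixin carries child delegation; concrete classes compose both.
class StatefulMixin:
    state_params: list = []

    def state_dict(self):
        return {flag: getattr(self, flag) for flag in self.state_params}

    def load_state_dict(self, state_dict):
        for flag in self.state_params:
            setattr(self, flag, state_dict[flag])

class WrapperMixin:
    def __init__(self, dataset, **kwargs):
        super().__init__(**kwargs)
        self.dataset = dataset

class BaseReader(StatefulMixin):
    # Stateful but wraps nothing, mirroring the base reader today.
    state_params = ["current_position"]

    def __init__(self):
        self.current_position = 0

class ShardWrapper(WrapperMixin, StatefulMixin):
    # Stateful and wrapping: composition replaces the subclass chain.
    state_params = ["current_reader"]

    def __init__(self, dataset):
        super().__init__(dataset=dataset)
        self.current_reader = 0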
while True:
    ind = self.current_reader
    # Read doc
    out = next(data[ind])
How is StopIteration handled?
It's not; in this framework we just assume each iterator loops forever. Converting to a next()-based framework would make this pretty easy to handle, though.
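For illustration, the looping assumption and a next()-based alternative might look like this (a sketch, not the PR's code):

# Assumed sketch: each logical shard iterator restarts at epoch
# boundaries, so next() never raises StopIteration.
def loop_forever(build_iter):
    while True:
        yield from build_iter()

# A next()-based framework could instead catch exhaustion explicitly
# (assumes build_iter() yields at least one item):
def next_or_restart(it, build_iter):
    try:
        return next(it), it
    except StopIteration:
        it = iter(build_iter())
        return next(it), it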
# Convert to tensor form
out = {}
for k, v in state_dict.items():
    v = torch.tensor(v)
    if len(v.shape) == 0:
        k = k + ".scalar"
        v = v.unsqueeze(0)
    out[k] = v
Is this done to satisfy DCP requirements?
Yes, DTensors have to have at least one dimension.
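Presumably the load path inverts this tagging; a minimal sketch based on the snippet above (not the PR's exact code):

def from_tensor_form(state_dict):
    # Hypothetical inverse of the snippet above: undo torch.tensor()
    # and the ".scalar" key tag so values round-trip through DCP.
    out = {}
    for k, v in state_dict.items():
        if k.endswith(".scalar"):
            k = k[: -len(".scalar")]
            out[k] = v.squeeze(0).item()  # back to a python scalar
        else:
            out[k] = v.tolist()           # back to a (nested) list
    return out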
#### ------------------------- CHECKPOINT FUNCTIONS ------------------------- ####

def __pop_dstate(state, device_mesh, placements):
We should create standard utilities to get these in torchdata #1337
self.current_reader = (self.current_reader + 1) % self.n_logicals
yield out

def state_dict(self):
{
    "my_children": [c.state_dict() for c in self.children],
    "scalar_state": self.scalar,  # e.g. "my_string"
    "my_reshardable_state": torch.tensor([1, 2, 3, 4, 5]),  # 1-D tensor
}

Question: what happens if the above state_dict gets passed to DCP?
Answer: it will fail, because torch.tensor gets called on everything?
Andrew to follow up with @pradeepfn on this.
Implements rescaling of checkpoints to different world sizes and numbers of workers. The user specifies the number of data partitions in advance; checkpoints can then be saved and loaded with a different total worker count (which must divide the partition count evenly), and stateful guarantees are maintained: data seen before the checkpoint is not revisited until the next epoch.
Based on the datasets in the corresponding IBM torchtitan PR, but uses StatefulDataLoader and DCP to manage checkpointing from the master process. Sampling and Dummy datasets are included for demo purposes. It may be possible to merge the IBM datasets into the existing node structure. A concrete illustration of the divisibility constraint follows below.
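As a sketch with hypothetical numbers (not taken from the PR's demo):

# The number of logical data partitions is fixed up front; any total
# worker count used for saving or loading must divide it evenly.
n_logical_shards = 96           # chosen in advance, constant across runs

save_workers = 8 * 4            # e.g. 8 ranks x 4 dataloader workers = 32
load_workers = 16 * 3           # e.g. 16 ranks x 3 dataloader workers = 48

assert n_logical_shards % save_workers == 0   # 96 / 32 = 3 shards per worker
assert n_logical_shards % load_workers == 0   # 96 / 48 = 2 shards per worker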
Changes
torchdata/stateful_dataloader/ibm_rescalable.py
examples/ibm_rescaling/rescaling_demo.py