make quantize_.set_inductor_config None by default for future deprecation

Summary:

We want to migrate this to individual workflows, see #1715 for migration plan.

This PR is step 1 where we enable distinguishing whether the user
specified this argument or not.  After this PR, we can control the
behavior per-workflow, such as setting this functionality to False for
future training workflows.
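The distinguishing trick in this step is the usual `Optional[bool]` sentinel. A minimal, self-contained sketch (a generic illustration only, not the exact torchao code — the actual change is in the diff below):

```python
import warnings
from typing import Optional

def quantize_(model, set_inductor_config: Optional[bool] = None):
    # None means "the caller did not pass the argument", so each workflow
    # can later choose its own default (e.g. False for training workflows).
    if set_inductor_config is not None:
        warnings.warn(
            "`set_inductor_config` will be removed in a future release; "
            "see https://github.com/pytorch/ao/issues/1715"
        )
        explicitly_set = True
    else:
        explicitly_set = False
        set_inductor_config = True  # preserve the old default for now
    return set_inductor_config, explicitly_set

assert quantize_("m") == (True, False)                             # unspecified
assert quantize_("m", set_inductor_config=False) == (False, True)  # explicit opt-out
```

Once the argument is `Optional`, per-workflow defaults become a purely local decision instead of a global one.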

Test Plan: CI

Reviewers:

Subscribers:

Tasks:

Tags:
vkuzo committed Feb 14, 2025
1 parent 12e830b commit e84f67a
Showing 2 changed files with 13 additions and 3 deletions.
3 changes: 3 additions & 0 deletions torchao/quantization/README.md
@@ -386,6 +386,9 @@ The benchmarks below were run on a single NVIDIA-A6000 GPU.
You can try out these APIs with the `quantize_` API as above, alongside the constructor `codebook_weight_only`; an example can be found in `torchao/_models/llama/generate.py`.

### Automatic Inductor Configuration

:warning: <em>This functionality is being migrated from the top level `quantize_` API to individual workflows, see https://github.com/pytorch/ao/issues/1715 for more details.</em>

The `quantize_` and `autoquant` APIs now automatically use our recommended inductor configuration settings. You can replicate these settings in your own experiments by calling `torchao.quantization.utils.recommended_inductor_config_setter`. To disable them instead, pass the keyword argument `set_inductor_config=False` to `quantize_` or `autoquant`. You can also overwrite individual settings after they are assigned, as long as you do so before passing any inputs to the compiled model. This means previous flows that manually set a variety of inductor configurations are now outdated, though continuing to set those same configurations manually is unlikely to cause any issues.
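A rough sketch of the two opt-out options described above, using stand-in objects rather than the real `torch._inductor` / `torchao` APIs (the flag shown is illustrative; the real recommended set is whatever `recommended_inductor_config_setter` applies):

```python
class FakeInductorConfig:
    """Stand-in for torch._inductor.config (illustrative flag only)."""
    coordinate_descent_tuning = False

config = FakeInductorConfig()

def recommended_inductor_config_setter():
    # Stand-in for torchao.quantization.utils.recommended_inductor_config_setter
    config.coordinate_descent_tuning = True

def quantize_(model, set_inductor_config=True):
    # Simplified: the real quantize_ also applies the quantization config itself.
    if set_inductor_config:
        recommended_inductor_config_setter()
    return model

# Option 1: opt out of the recommended settings entirely.
quantize_("model", set_inductor_config=False)
assert config.coordinate_descent_tuning is False

# Option 2: accept the recommended settings, then overwrite individual ones.
# Any overwrite must happen before the first inputs hit the compiled model.
quantize_("model")
config.coordinate_descent_tuning = False
```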

## (To be moved to prototype) A16W4 WeightOnly Quantization with GPTQ
13 changes: 10 additions & 3 deletions torchao/quantization/quant_api.py
@@ -488,7 +488,7 @@ def quantize_(
model: torch.nn.Module,
config: Union[AOBaseConfig, Callable[[torch.nn.Module], torch.nn.Module]],
filter_fn: Optional[Callable[[torch.nn.Module, str], bool]] = None,
set_inductor_config: bool = True,
set_inductor_config: Optional[bool] = None,
device: Optional[torch.types.Device] = None,
):
"""Convert the weight of linear modules in the model with `config`; the model is modified in place
@@ -498,7 +498,7 @@ def quantize_(
config (Union[AOBaseConfig, Callable[[torch.nn.Module], torch.nn.Module]]): either (1) a workflow configuration object or (2) a function that applies tensor subclass conversion to the weight of a module and returns the module (e.g. converts the weight tensor of linear to affine quantized tensor). Note: (2) will be deleted in a future release.
filter_fn (Optional[Callable[[torch.nn.Module, str], bool]]): function that takes a nn.Module instance and fully qualified name of the module, returns True if we want to run `config` on
the weight of the module
set_inductor_config (bool, optional): Whether to automatically use recommended inductor config settings (defaults to True)
set_inductor_config (bool, optional): Whether to automatically use recommended inductor config settings (defaults to None, which is currently treated as True)
device (device, optional): Device to move module to before applying `filter_fn`. This can be set to `"cuda"` to speed up quantization. The final model will be on the specified `device`.
Defaults to None (do not change device).
@@ -522,7 +522,14 @@ def quantize_(
quantize_(m, int4_weight_only(group_size=32))
"""
if set_inductor_config:
if set_inductor_config is not None:
warnings.warn(
"""The `set_inductor_config` argument to `quantize_` will be removed in a future release. This functionality is being migrated to individual workflows. Please see https://github.com/pytorch/ao/issues/1715 for more details."""
)
# for now, default to True to not change existing behavior when the
# argument is not specified
set_inductor_config = True
if set_inductor_config is True:
torchao.quantization.utils.recommended_inductor_config_setter()

if isinstance(config, AOBaseConfig):
