capsa_torch.vote
- class capsa_torch.vote.EagerWrapper
- __init__(n_voters=4, alpha=1, use_bias=True, finetune=False, weight_noise=0.2, param_filter=None, limited_memory_training=False, classification_dimensions=None, opinion_basis='perturb', *, verbose=0)
Initialize a Vote Wrapper with configs.
- Parameters:
  - n_voters (int) – More voters give a more diverse set of opinions and better-quality uncertainty, but also cost more memory and compute time. (default: 4)
  - alpha (int) – Approximate multiple votes with a shared internal representation. A smaller alpha (e.g., alpha=1) means more sharing between voters, giving faster runtime and a lower memory requirement. (default: 1)
  - use_bias (bool) – Create separate bias weights for each voter. (default: True)
  - finetune (bool) – Freeze the existing module weights and train only the added parameters. (default: False)
  - weight_noise (float) – How much noise to use when initializing new weights. We suggest experimenting with values in the range (0.0, 0.4]. (default: 0.2)
  - param_filter (str | Callable[[str, Tensor], bool] | None) – Either a string containing a regex pattern of parameter names to match, or a callable that accepts a parameter name (str) and value (Tensor) and returns True if the parameter should be wrapped, False otherwise. (default: None)
  - limited_memory_training (bool) – Limit each call to a single voter when training=True. This can reduce memory during training because the batch size no longer needs to be a multiple of n_voters. (default: False)
  - classification_dimensions (tuple[int, …] | int | None) – The output tensor dimensions that are classification logits. (default: None)
  - opinion_basis (str) – The method used to generate diverse opinions for the voters: either "perturb" or "expand". (default: 'perturb')
  - verbose (int) – Verbosity level for wrapping; 0 <= verbose <= 2. (default: 0)
Note: verbose is a keyword-only argument.
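To make the two param_filter forms concrete, here is a minimal sketch in plain Python with the Tensor argument stubbed out. The parameter names (fc.weight, fc.bias, bn.weight) are made up for illustration, and the use of re.search is an assumption about how a pattern string would be matched; consult the capsa_torch documentation for the exact matching semantics.

```python
import re

# String form: a regex pattern matched against parameter names.
pattern = r"\.weight$"

# Callable form: receives the parameter name and its value, and returns
# True if the parameter should be wrapped.
def wrap_weights_only(name, value):
    return name.endswith(".weight")

names = ["fc.weight", "fc.bias", "bn.weight"]
regex_matches = [n for n in names if re.search(pattern, n)]
callable_matches = [n for n in names if wrap_weights_only(n, None)]
```

Both forms select the same parameters here; the callable form is useful when the decision depends on the value (e.g., only wrapping tensors above a certain size).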
- __call__(module_or_module_class)
Applies the wrapper to either an instantiated torch.nn.Module or a class that subclasses torch.nn.Module, creating a new wrapped implementation.
- Parameters:
  - module_or_module_class (TypeVar(T, bound=torch.nn.Module | type[torch.nn.Module])) – The module to wrap
- Return type: TypeVar(T, bound=torch.nn.Module | type[torch.nn.Module])
- Returns: The wrapped module, with weights shared with module
Example Usage

Wrapping a Module

from capsa_torch.sample import Wrapper  # or capsa_torch.sculpt, capsa_torch.vote

wrapper = Wrapper(n_samples=3, verbose=1)      # Initialize a wrapper object with your config options
wrapped_module = wrapper(module)               # Wrap your module
y = wrapped_module(x)                          # Use the wrapped module as usual
y, risk = wrapped_module(x, return_risk=True)  # Use the wrapped module to obtain risk values

Decorator approach

from capsa_torch.sample import Wrapper  # or capsa_torch.sculpt, capsa_torch.vote

@Wrapper(n_samples=3, verbose=1)  # Initialize a wrapper object with your config options
class MyModule(torch.nn.Module):  # Note: MyModule must subclass torch.nn.Module
    def __init__(self, ...):
        ...
    def forward(self, ...):
        ...

wrapped_module = MyModule(...)  # Call MyModule's __init__ as usual to create a wrapped module
y = wrapped_module(x)                          # Use the wrapped module as usual
y, risk = wrapped_module(x, return_risk=True)  # Use the wrapped module to obtain risk values
- class capsa_torch.vote.Wrapper
- __init__(n_voters=4, alpha=1, use_bias=True, finetune=False, weight_noise=0.2, param_filter=None, limited_memory_training=False, classification_dimensions=None, opinion_basis='perturb', *, verbose=0, torch_compile=False, symbolic_trace=True)
Initialize a Vote Wrapper with configs.
- Parameters:
  - n_voters (int) – More voters give a more diverse set of opinions and better-quality uncertainty, but also cost more memory and compute time. (default: 4)
  - alpha (int) – Approximate multiple votes with a shared internal representation. A smaller alpha (e.g., alpha=1) means more sharing between voters, giving faster runtime and a lower memory requirement. (default: 1)
  - use_bias (bool) – Create separate bias weights for each voter. (default: True)
  - finetune (bool) – Freeze the existing module weights and train only the added parameters. (default: False)
  - weight_noise (float) – How much noise to use when initializing new weights. We suggest experimenting with values in the range (0.0, 0.4]. (default: 0.2)
  - param_filter (str | Callable[[str, Tensor], bool] | None) – Either a string containing a regex pattern of parameter names to match, or a callable that accepts a parameter name (str) and value (Tensor) and returns True if the parameter should be wrapped, False otherwise. (default: None)
  - limited_memory_training (bool) – Limit each call to a single voter when training=True. This can reduce memory during training because the batch size no longer needs to be a multiple of n_voters. (default: False)
  - classification_dimensions (tuple[int, …] | int | None) – The output tensor dimensions that are classification logits. (default: None)
  - opinion_basis (str) – The method used to generate diverse opinions for the voters: either "perturb" or "expand". (default: 'perturb')
  - verbose (int) – Verbosity level for wrapping; 0 <= verbose <= 2. (default: 0)
  - torch_compile (bool) – Apply torch's inductor to compile the wrapped model. This should improve model performance, at the cost of initial overhead. (default: False)
  - symbolic_trace (bool) – Attempt to use symbolic shapes when tracing the module's graph. Turning this off may help if the module is failing to wrap; however, the resulting graph is more likely to use fixed input dimensions and trigger rewraps when fed different input shapes. (default: True)
Note: verbose, torch_compile, and symbolic_trace are keyword-only arguments.
- __call__(module_or_module_class)
Applies the wrapper to either an instantiated torch.nn.Module or a class that subclasses torch.nn.Module, creating a new wrapped implementation.
- Parameters:
  - module_or_module_class (TypeVar(T, bound=torch.nn.Module | type[torch.nn.Module])) – The module to wrap
- Return type: TypeVar(T, bound=torch.nn.Module | type[torch.nn.Module])
- Returns: The wrapped module, with weights shared with module
Example Usage

Wrapping a Module

from capsa_torch.sample import Wrapper  # or capsa_torch.sculpt, capsa_torch.vote

wrapper = Wrapper(n_samples=3, verbose=1)      # Initialize a wrapper object with your config options
wrapped_module = wrapper(module)               # Wrap your module
y = wrapped_module(x)                          # Use the wrapped module as usual
y, risk = wrapped_module(x, return_risk=True)  # Use the wrapped module to obtain risk values

Decorator approach

from capsa_torch.sample import Wrapper  # or capsa_torch.sculpt, capsa_torch.vote

@Wrapper(n_samples=3, verbose=1)  # Initialize a wrapper object with your config options
class MyModule(torch.nn.Module):  # Note: MyModule must subclass torch.nn.Module
    def __init__(self, ...):
        ...
    def forward(self, ...):
        ...

wrapped_module = MyModule(...)  # Call MyModule's __init__ as usual to create a wrapped module
y = wrapped_module(x)                          # Use the wrapped module as usual
y, risk = wrapped_module(x, return_risk=True)  # Use the wrapped module to obtain risk values