capsa_torch.vote

class capsa_torch.vote.Wrapper
__init__(n_voters=4, alpha=1, use_bias=True, independent=True, finetune=False, weight_noise=0.1, param_filter=None, *, torch_compile=False, verbose=0, symbolic_trace=True)

Initialize a Vote Wrapper with configs.

Parameters:
  • n_voters (int) – More voters give a more diverse set of opinions and a higher-quality uncertainty estimate, but also cost more memory and compute time. (default: 4)

  • alpha (int) – Approximate multiple votes with a shared internal representation. A smaller alpha (e.g., alpha=1) means more sharing between voters, giving faster runtime and a lower memory requirement. (default: 1)

  • use_bias (bool) – Create separate bias weights for each voter. (default: True)

  • independent (bool) – Controls whether each voter is trained independently. Training voters in a dependent fashion is faster and less constrained, but potentially less accurate. (default: True)

  • finetune (bool) – Freeze the existing module weights and only train the added parameters. (default: False)

  • weight_noise (float) – How much noise to use when initializing new weights. Suggested range [0., 0.3] (default: 0.1)

  • param_filter (str | Callable[[str, Tensor], bool] | None) – Either a string representing a regex pattern of parameters to match or a Callable that accepts a parameter name (str) and value (Tensor) as input and returns True if the parameter should be wrapped, False otherwise. (default: None)

  • torch_compile (bool) – Apply torch's TorchInductor backend to compile the wrapped model. This should improve model performance, at the cost of initial compilation overhead. (default: False)

  • verbose (int) – Set the verbosity level for wrapping (0 <= verbose <= 2). (default: 0)

  • symbolic_trace (bool) – Attempt to use symbolic shapes when tracing the module’s graph. Turning this off may help if the module is failing to wrap, however the resulting graph is more likely to use fixed input dimensions and trigger rewraps when fed different input shapes. (default: True)

Note

verbose and symbolic_trace are keyword-only arguments.
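The two accepted forms of param_filter can be sketched in plain Python. The helper below is illustrative only: it mimics the documented (name, value) -> bool contract using plain floats in place of real Tensors, and it assumes the regex is matched from the start of the parameter name (whether the library anchors the pattern this way is an assumption, not confirmed by this page).

```python
import re

# Stand-in (name, value) pairs, as a module's named parameters might appear.
# capsa_torch would pass torch.Tensor values; plain floats stand in here.
params = [
    ("encoder.weight", 0.1),
    ("encoder.bias", 0.2),
    ("head.weight", 0.3),
]

def select_params(params, param_filter):
    """Apply a param_filter as documented: either a regex pattern string,
    or a callable (name, value) -> bool deciding which parameters to wrap."""
    if isinstance(param_filter, str):
        pattern = re.compile(param_filter)
        return [name for name, value in params if pattern.match(name)]
    return [name for name, value in params if param_filter(name, value)]

# Regex form: wrap only the encoder's parameters.
print(select_params(params, r"encoder\."))   # ['encoder.weight', 'encoder.bias']

# Callable form: wrap only weight matrices, skipping biases.
print(select_params(params, lambda name, value: name.endswith("weight")))
```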

__call__(module_or_module_class)

Applies the wrapper to either an instantiated torch.nn.Module or a class that subclasses torch.nn.Module, creating a new wrapped implementation.

Parameters:

module_or_module_class (TypeVar(T, Module, type[torch.nn.Module])) – The Module to wrap

Return type:

TypeVar(T, Module, type[torch.nn.Module])

Returns:

The wrapped module, with weights shared with the original module

Example Usage

Wrapping a Module
from capsa_torch.sample import Wrapper # or capsa_torch.sculpt, capsa_torch.vote
wrapper = Wrapper(n_samples=3, verbose=1) # Initialize a wrapper object with your config options (for vote, use n_voters instead of n_samples)

wrapped_module = wrapper(module) # wrap your module

y = wrapped_module(x) # Use the wrapped module as usual
y, risk = wrapped_module(x, return_risk=True) # Use the wrapped module to obtain risk values
Decorator approach
from capsa_torch.sample import Wrapper # or capsa_torch.sculpt, capsa_torch.vote

@Wrapper(n_samples=3, verbose=1) # Initialize a wrapper object with your config options
class MyModule(torch.nn.Module): # Note: MyModule must subclass torch.nn.Module
    def __init__(self, ...):
        ...

    def forward(self, ...):
        ...

wrapped_module = MyModule(...) # Call MyModule's __init__ fn as usual to create a wrapped module

y = wrapped_module(x) # Use the wrapped module as usual
y, risk = wrapped_module(x, return_risk=True) # Use the wrapped module to obtain risk values
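The decorator form works because Wrapper's __call__ accepts a class as well as an instance and returns a new wrapped class, so calling MyModule(...) afterwards constructs an already-wrapped module. The pure-Python sketch below illustrates only this decorator mechanic; ToyWrapper and its attribute names are hypothetical and not capsa_torch internals.

```python
class ToyWrapper:
    """Illustration of the class-decorator mechanic only - not capsa_torch."""

    def __init__(self, **config):
        self.config = config

    def __call__(self, module_class):
        # When applied to a class, return a subclass whose instances
        # are "wrapped" from the moment they are constructed.
        config = self.config

        class Wrapped(module_class):
            wrapped = True
            wrap_config = config

        Wrapped.__name__ = module_class.__name__
        return Wrapped


@ToyWrapper(n_voters=3)
class MyModule:  # a real module would subclass torch.nn.Module
    def __init__(self, scale):
        self.scale = scale

    def forward(self, x):
        return self.scale * x


m = MyModule(2)  # __init__ runs as usual, but m is a wrapped instance
print(m.wrapped, m.wrap_config, m.forward(5))  # True {'n_voters': 3} 10
```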