# Sample

## Background

Sampling-based wrappers estimate model (i.e., epistemic) uncertainty from the disagreement between multiple runs of the model on the same input. Intuitively, if a model is shown an input five times and gives five very different answers, that is a strong indicator that the model is uncertain and should not be trusted.
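To make the intuition concrete, here is a minimal, library-free sketch (the numbers are invented for illustration) using sample variance as a simple disagreement measure:

```python
from statistics import pvariance

# Five hypothetical sampled outputs for the same input:
consistent = [0.80, 0.79, 0.81, 0.80, 0.80]  # the model agrees with itself
scattered = [0.10, 0.95, 0.40, 0.70, 0.25]   # the model contradicts itself

# Variance across samples is one simple disagreement measure:
# low variance suggests confidence, high variance suggests uncertainty.
print(pvariance(consistent), pvariance(scattered))
```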

Traditionally, neural networks are deterministic – running a deterministic neural network on the same input yields the same output no matter how many times it is called – which makes such models incompatible with sampling-based uncertainty estimation. This wrapper automatically converts any model into a stochastic form that supports sampling and uncertainty estimation. Stochastic noise can be inserted into a neural network in a variety of ways: into the weights or the activations, as trainable or fixed noise, or even applied to the input itself.
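As a rough, framework-free sketch of one such scheme – Bernoulli noise applied to activations, in the spirit of dropout – where `stochastic_layer` and its arguments are illustrative, not part of the capsa API:

```python
import random

def stochastic_layer(activations, p=0.1, rng=None):
    # Zero each activation independently with probability p, so that
    # repeated calls on the same input produce different outputs.
    rng = rng or random.Random()
    return [0.0 if rng.random() < p else a for a in activations]

acts = [1.0, 2.0, 3.0, 4.0]
# Three stochastic forward passes over the same activations:
samples = [stochastic_layer(acts, p=0.5, rng=random.Random(seed)) for seed in range(3)]
```

Running the stochastic form repeatedly yields the sample set whose spread is used as the uncertainty measure.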

## Usage

### Wrapping your model with `capsa_torch.sample`

```python
from torch import nn
from capsa_torch import sample
# Define your model
model = nn.Sequential(...)
# Specify which distribution to use when modifying the model
dist = sample.Bernoulli(0.1)
# Build a wrapper with this distribution and wrap!
wrapper = sample.Wrapper(n_samples=5, distribution=dist)
wrapped_model = wrapper(model)
# or in one line
wrapped_model = sample.Wrapper(n_samples=5, distribution=dist)(model)
```

### Calling your wrapped model

```python
# By default, your wrapped model returns a prediction
prediction = wrapped_model(input_batch)
# By passing `return_risk=True` you also get an uncertainty estimate
prediction, uncertainty = wrapped_model(input_batch, return_risk=True)
```
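A typical downstream use of the returned uncertainty is to gate which predictions you act on. A hedged sketch, where `filter_trusted` and the threshold are illustrative application code, not part of capsa:

```python
def filter_trusted(predictions, uncertainties, threshold=0.05):
    # Keep only predictions whose estimated uncertainty is at or below
    # an application-specific threshold.
    return [p for p, u in zip(predictions, uncertainties) if u <= threshold]

filter_trusted([0.9, 0.4, 0.7], [0.01, 0.30, 0.02])  # -> [0.9, 0.7]
```

The threshold is a design choice: tightening it trades coverage (fewer predictions acted on) for reliability.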


### Wrapping your model with `capsa_tf.sample`

```python
# Import the module
from capsa_tf import sample
# Define the distribution and wrapper arguments
dist = sample.Bernoulli(p=0.05)
wrapper = sample.Wrapper(n_samples=5, distribution=dist)
# Wrap the model's call function
model.call_default = wrapper(model.call_default)
```

### Calling your wrapped model

```python
# By default, your wrapped model returns a prediction
prediction = model(input_batch)
# By passing `return_risk=True` you also get an uncertainty estimate
prediction, uncertainty = model(input_batch, return_risk=True)
```
