Refitting PyMC3 models with ArviZ (and xarray)

ArviZ is backend agnostic and therefore does not sample directly. In order to take advantage of algorithms that require refitting models several times, ArviZ uses SamplingWrappers to convert the API of the sampling backend to a common set of functions. Hence, functions like Leave Future Out Cross Validation can be used in ArviZ independently of the sampling backend used.
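In practice, a SamplingWrapper subclass teaches ArviZ how to subset the data, refit the model, and convert the refit to InferenceData. A rough sketch of the interface we will implement below (the method names are those az.SamplingWrapper expects; the bodies are placeholders):

import arviz as az

class MyBackendWrapper(az.SamplingWrapper):
    def sel_observations(self, idx):
        # split the data into observations kept for refitting and excluded ones
        ...

    def sample(self, modified_observed_data):
        # refit the model on the kept observations
        ...

    def get_inference_data(self, fitted_model):
        # convert the refit to InferenceData
        ...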

Below is an example of SamplingWrapper usage for PyMC3.

Before starting, it is important to note that PyMC3 cannot change the shape of the input data within the same compiled model. Thus, each refit requires recompiling the model.

import arviz as az
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import xarray as xr

For the example, we will use a linear regression model.

np.random.seed(26)

xdata = np.linspace(0, 50, 100)
b0, b1, sigma = -2, 1, 3
ydata = np.random.normal(loc=b1 * xdata + b0, scale=sigma)
plt.plot(xdata, ydata);
[Figure: plot of the simulated noisy linear data, ydata against xdata]

Now we will write the PyMC3 model, keeping in mind the following two points:

  1. Data must be modifiable (both x and y).

  2. The model must be recompiled in order to be refitted with the modified data. We therefore have to create a function that recompiles the model each time it’s called. Luckily for us, compilation in PyMC3 is generally quite fast.

def compile_linreg_model(xdata, ydata):
    with pm.Model() as model:
        x = pm.Data("x", xdata)
        b0 = pm.Normal("b0", 0, 10)
        b1 = pm.Normal("b1", 0, 10)
        sigma_e = pm.HalfNormal("sigma_e", 10)

        y = pm.Normal("y", b0 + b1 * x, sigma_e, observed=ydata)
    return model

sample_kwargs = {"draws": 500, "tune": 500, "chains": 4}
with compile_linreg_model(xdata, ydata) as linreg_model:
    trace = pm.sample(**sample_kwargs)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma_e, b1, b0]
100.00% [4000/4000 00:02<00:00 Sampling 4 chains, 0 divergences]
Sampling 4 chains for 500 tune and 500 draw iterations (2_000 + 2_000 draws total) took 3 seconds.
The acceptance probability does not match the target. It is 0.917825834816141, but should be close to 0.8. Try to increase the number of tuning steps.
The acceptance probability does not match the target. It is 0.8850799498280131, but should be close to 0.8. Try to increase the number of tuning steps.
The acceptance probability does not match the target. It is 0.8818306765045102, but should be close to 0.8. Try to increase the number of tuning steps.

We have defined a dictionary sample_kwargs that will be passed to the SamplingWrapper in order to make sure that all refits use the same sampler parameters.

We follow the same pattern with az.from_pymc3.

Note, however, that coords are not set. This is done to prevent errors due to coordinate and value shapes becoming incompatible during refits. Otherwise we’d have to handle subsetting of the coordinate values, even though the refits are never used outside refitting functions such as reloo().
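To illustrate the kind of incompatibility being avoided, here is a hypothetical example, not part of the workflow: after excluding one observation a refit has 99 values, which cannot be combined with 100 coordinate values.

# Hypothetical: 99 data points cannot share a dimension with 100 coordinate values
xr.DataArray(np.zeros(99), dims=["time"], coords={"time": np.arange(100)})
# ValueError: conflicting sizes for dimension 'time'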

We also exclude the model because the model, like the trace, is different for every refit. This may seem counterintuitive or even plain wrong, but we have to remember that the pm.Model object contains information like the observed data.

dims = {"y": ["time"], "x": ["time"]}
idata_kwargs = {
    "dims": dims,
    "log_likelihood": False,
}
idata = az.from_pymc3(trace, model=linreg_model, **idata_kwargs)
idata
arviz.InferenceData
    • posterior
      <xarray.Dataset>
      Dimensions:  (chain: 4, draw: 500)
      Coordinates:
        * chain    (chain) int64 0 1 2 3
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
      Data variables:
          b0       (chain, draw) float64 -2.545 -2.677 -2.088 ... -3.023 -2.545 -2.89
          b1       (chain, draw) float64 1.016 1.02 0.9981 1.022 ... 1.034 1.023 1.012
          sigma_e  (chain, draw) float64 2.772 2.993 2.867 2.833 ... 2.8 2.838 3.039
      Attributes:
          created_at:                 2020-10-06T00:56:59.863225
          arviz_version:              0.10.0
          inference_library:          pymc3
          inference_library_version:  3.9.3
          sampling_time:              3.375001907348633
          tuning_steps:               500

    • sample_stats
      <xarray.Dataset>
      Dimensions:             (chain: 4, draw: 500)
      Coordinates:
        * chain               (chain) int64 0 1 2 3
        * draw                (draw) int64 0 1 2 3 4 5 6 ... 494 495 496 497 498 499
      Data variables:
          step_size_bar       (chain, draw) float64 0.4844 0.4844 ... 0.5218 0.5218
          step_size           (chain, draw) float64 0.6579 0.6579 ... 0.5572 0.5572
          energy_error        (chain, draw) float64 0.1089 -0.01555 ... -0.1793 1.522
          process_time_diff   (chain, draw) float64 0.0005479 0.001 ... 0.001683
          max_energy_error    (chain, draw) float64 0.1089 -0.1235 ... -0.1793 2.335
          perf_counter_start  (chain, draw) float64 3.84e+04 3.84e+04 ... 3.84e+04
          depth               (chain, draw) int64 2 3 4 4 4 4 3 3 ... 3 3 3 2 3 3 3 3
          diverging           (chain, draw) bool False False False ... False False
          perf_counter_diff   (chain, draw) float64 0.0005474 0.0009994 ... 0.001682
          energy              (chain, draw) float64 256.5 256.4 256.8 ... 257.8 260.0
          mean_tree_accept    (chain, draw) float64 0.935 1.0 0.9774 ... 0.9993 0.4627
          lp                  (chain, draw) float64 -256.3 -256.3 ... -255.9 -258.6
          tree_size           (chain, draw) float64 3.0 7.0 15.0 11.0 ... 7.0 7.0 7.0
      Attributes:
          created_at:                 2020-10-06T00:56:59.868643
          arviz_version:              0.10.0
          inference_library:          pymc3
          inference_library_version:  3.9.3
          sampling_time:              3.375001907348633
          tuning_steps:               500

    • observed_data
      <xarray.Dataset>
      Dimensions:  (time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          y        (time) float64 -1.412 -7.319 1.151 1.502 ... 48.49 48.52 46.03
      Attributes:
          created_at:                 2020-10-06T00:56:59.872278
          arviz_version:              0.10.0
          inference_library:          pymc3
          inference_library_version:  3.9.3

    • constant_data
      <xarray.Dataset>
      Dimensions:  (time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          x        (time) float64 0.0 0.5051 1.01 1.515 ... 48.48 48.99 49.49 50.0
      Attributes:
          created_at:                 2020-10-06T00:56:59.872872
          arviz_version:              0.10.0
          inference_library:          pymc3
          inference_library_version:  3.9.3

The log_likelihood group is now missing because we set log_likelihood=False in idata_kwargs. We do this to ease the job of the sampling wrapper: instead of going out of our way to get PyMC3 to calculate the pointwise log likelihood values, both for every refit and for the excluded observations at every refit, we will compromise and manually write a function to calculate the pointwise log likelihood.

Even though it is not ideal to lose part of PyMC3’s out-of-the-box capabilities, this should generally not be a problem. In fact, other PPLs such as Stan always require the pointwise log likelihood values to be written manually (either within the Stan code or in Python). Moreover, computing the pointwise log likelihood in Python using xarray will be computationally more efficient than the automatic extraction from PyMC3.

It could even be written to be compatible with Dask (see the sketch after the xr.apply_ufunc call below). Thus it would work even in cases where the large number of observations makes it impossible to store the pointwise log likelihood values (with shape n_samples * n_observations) in memory.

def calculate_log_lik(x, y, b0, b1, sigma_e):
    # Pointwise log likelihood of the linear regression model;
    # broadcasts over any compatible combination of input shapes.
    mu = b0 + b1 * x
    return stats.norm(mu, sigma_e).logpdf(y)

This function should work for any shape of the input arrays as long as their shapes are compatible and can broadcast. There is no need to loop over each draw in order to calculate the pointwise log likelihood using scalars.
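For example, here is a quick check with plain NumPy arrays; the trailing axis added to the posterior draws is exactly the kind of manual bookkeeping we’d rather avoid:

# (4, 500, 1) posterior draws broadcast against (100,) observations,
# giving a (4, 500, 100) array of pointwise log likelihood values
b0_np = idata.posterior["b0"].values[:, :, None]
b1_np = idata.posterior["b1"].values[:, :, None]
sigma_np = idata.posterior["sigma_e"].values[:, :, None]
calculate_log_lik(xdata, ydata, b0_np, b1_np, sigma_np).shape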

Therefore, we can use xr.apply_ufunc to handle the broadcasting and preserve the dimension names:

log_lik = xr.apply_ufunc(
    calculate_log_lik,
    idata.constant_data["x"],
    idata.observed_data["y"],
    idata.posterior["b0"],
    idata.posterior["b1"],
    idata.posterior["sigma_e"],
)
idata.add_groups(log_likelihood=log_lik)

The first argument is the function, followed by as many positional arguments as the function needs, five in our case. As this case involves few dimensions and few combinations of them, we do not need to pass any extra kwargs to xr.apply_ufunc.

Note that we pass the arguments to calculate_log_lik as xarray.DataArrays. Behind the scenes, xr.apply_ufunc aligns and broadcasts the dimensions of all the DataArrays involved and then passes NumPy arrays to calculate_log_lik. Everything works automagically.
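This is also where the Dask compatibility mentioned earlier comes in: chunk the posterior and let xr.apply_ufunc parallelize over the chunks. A minimal sketch, assuming dask is installed (the chunk size here is arbitrary):

# Sketch: compute the pointwise log likelihood lazily, chunk by chunk
posterior = idata.posterior.chunk({"draw": 100})
log_lik_lazy = xr.apply_ufunc(
    calculate_log_lik,
    idata.constant_data["x"],
    idata.observed_data["y"],
    posterior["b0"],
    posterior["b1"],
    posterior["sigma_e"],
    dask="parallelized",
    output_dtypes=[float],
)
# slicing log_lik_lazy before calling .compute() keeps memory bounded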

Now let’s see what happens if we were to pass the arrays directly to calculate_log_lik instead:

calculate_log_lik(
    idata.constant_data["x"].values,
    idata.observed_data["y"].values,
    idata.posterior["b0"].values,
    idata.posterior["b1"].values,
    idata.posterior["sigma_e"].values
)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-10-fc2d553bde92> in <module>
----> 1 calculate_log_lik(
      2     idata.constant_data["x"].values,
      3     idata.observed_data["y"].values,
      4     idata.posterior["b0"].values,
      5     idata.posterior["b1"].values,

<ipython-input-8-e6777d985e1f> in calculate_log_lik(x, y, b0, b1, sigma_e)
      1 def calculate_log_lik(x, y, b0, b1, sigma_e):
----> 2     mu = b0 + b1 * x
      3     return stats.norm(mu, sigma_e).logpdf(y)

ValueError: operands could not be broadcast together with shapes (4,500) (100,) 

If you are still curious about the magic of xarray and xr.apply_ufunc, you can also try modifying the dims used to generate the InferenceData a couple of cells above:

dims = {"y": ["time"], "x": ["time"]}

What happens to the result if you use a different name for the dimension of x?
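The effect can also be previewed without regenerating the InferenceData, by renaming the dimension on the fly (the name obs_dim is just an illustration):

# With its own dimension name, x no longer aligns with y: apply_ufunc
# broadcasts obs_dim and time against each other and the result gains
# an extra dimension of length 100
x_renamed = idata.constant_data["x"].rename({"time": "obs_dim"})
log_lik_wrong = xr.apply_ufunc(
    calculate_log_lik,
    x_renamed,
    idata.observed_data["y"],
    idata.posterior["b0"],
    idata.posterior["b1"],
    idata.posterior["sigma_e"],
)
print(log_lik_wrong.dims)  # both obs_dim and time appear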

idata
arviz.InferenceData
    • posterior
      <xarray.Dataset>
      Dimensions:  (chain: 4, draw: 500)
      Coordinates:
        * chain    (chain) int64 0 1 2 3
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
      Data variables:
          b0       (chain, draw) float64 -2.545 -2.677 -2.088 ... -3.023 -2.545 -2.89
          b1       (chain, draw) float64 1.016 1.02 0.9981 1.022 ... 1.034 1.023 1.012
          sigma_e  (chain, draw) float64 2.772 2.993 2.867 2.833 ... 2.8 2.838 3.039
      Attributes:
          created_at:                 2020-10-06T00:56:59.863225
          arviz_version:              0.10.0
          inference_library:          pymc3
          inference_library_version:  3.9.3
          sampling_time:              3.375001907348633
          tuning_steps:               500

    • sample_stats
      <xarray.Dataset>
      Dimensions:             (chain: 4, draw: 500)
      Coordinates:
        * chain               (chain) int64 0 1 2 3
        * draw                (draw) int64 0 1 2 3 4 5 6 ... 494 495 496 497 498 499
      Data variables:
          step_size_bar       (chain, draw) float64 0.4844 0.4844 ... 0.5218 0.5218
          step_size           (chain, draw) float64 0.6579 0.6579 ... 0.5572 0.5572
          energy_error        (chain, draw) float64 0.1089 -0.01555 ... -0.1793 1.522
          process_time_diff   (chain, draw) float64 0.0005479 0.001 ... 0.001683
          max_energy_error    (chain, draw) float64 0.1089 -0.1235 ... -0.1793 2.335
          perf_counter_start  (chain, draw) float64 3.84e+04 3.84e+04 ... 3.84e+04
          depth               (chain, draw) int64 2 3 4 4 4 4 3 3 ... 3 3 3 2 3 3 3 3
          diverging           (chain, draw) bool False False False ... False False
          perf_counter_diff   (chain, draw) float64 0.0005474 0.0009994 ... 0.001682
          energy              (chain, draw) float64 256.5 256.4 256.8 ... 257.8 260.0
          mean_tree_accept    (chain, draw) float64 0.935 1.0 0.9774 ... 0.9993 0.4627
          lp                  (chain, draw) float64 -256.3 -256.3 ... -255.9 -258.6
          tree_size           (chain, draw) float64 3.0 7.0 15.0 11.0 ... 7.0 7.0 7.0
      Attributes:
          created_at:                 2020-10-06T00:56:59.868643
          arviz_version:              0.10.0
          inference_library:          pymc3
          inference_library_version:  3.9.3
          sampling_time:              3.375001907348633
          tuning_steps:               500

    • observed_data
      <xarray.Dataset>
      Dimensions:  (time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          y        (time) float64 -1.412 -7.319 1.151 1.502 ... 48.49 48.52 46.03
      Attributes:
          created_at:                 2020-10-06T00:56:59.872278
          arviz_version:              0.10.0
          inference_library:          pymc3
          inference_library_version:  3.9.3

    • constant_data
      <xarray.Dataset>
      Dimensions:  (time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          x        (time) float64 0.0 0.5051 1.01 1.515 ... 48.48 48.99 49.49 50.0
      Attributes:
          created_at:                 2020-10-06T00:56:59.872872
          arviz_version:              0.10.0
          inference_library:          pymc3
          inference_library_version:  3.9.3

    • log_likelihood
      <xarray.Dataset>
      Dimensions:  (chain: 4, draw: 500, time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
        * chain    (chain) int64 0 1 2 3
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
      Data variables:
          x        (time, chain, draw) float64 -2.022 -2.105 -2.0 ... -2.368 -2.183

We will create a subclass of az.SamplingWrapper.

class PyMC3LinRegWrapper(az.SamplingWrapper):
    def sample(self, modified_observed_data):
        # Recompile the model with the subsetted data and refit it
        with self.model(*modified_observed_data):
            idata = pm.sample(
                **self.sample_kwargs,
                return_inferencedata=True,
                idata_kwargs=self.idata_kwargs,
            )
        return idata

    def get_inference_data(self, idata):
        # sample() already returns InferenceData, so there is nothing to convert
        return idata

    def sel_observations(self, idx):
        # Split the stored data into kept (data__i) and excluded (data_ex) subsets
        xdata = self.idata_orig.constant_data["x"]
        ydata = self.idata_orig.observed_data["y"]
        mask = np.isin(np.arange(len(xdata)), idx)
        data__i = [ary[~mask] for ary in (xdata, ydata)]
        data_ex = [ary[mask] for ary in (xdata, ydata)]
        return data__i, data_ex

loo_orig = az.loo(idata, pointwise=True)
loo_orig
Computed from 2000 by 100 log-likelihood matrix

         Estimate       SE
elpd_loo  -250.78     7.13
p_loo        2.96        -
------

Pareto k diagnostic values:
                         Count   Pct.
(-Inf, 0.5]   (good)      100  100.0%
 (0.5, 0.7]   (ok)          0    0.0%
   (0.7, 1]   (bad)         0    0.0%
   (1, Inf)   (very bad)    0    0.0%

In this case, the Leave-One-Out Cross Validation (LOO-CV) approximation using Pareto Smoothed Importance Sampling (PSIS) works for all observations, so we will modify loo_orig in order to make az.reloo believe that PSIS failed for some observations. This will also serve as a validation of our wrapper, as the PSIS LOO-CV already returned the correct value.

loo_orig.pareto_k[[13, 42, 56, 73]] = np.array([0.8, 1.2, 2.6, 0.9])

We initialize our sampling wrapper. Let’s stop and analyze each of the arguments.

We’d generally use model to pass a model object of some kind that is already compiled and can be re-executed. However, as we saw before, we need to recompile the model for every refit, so we pass the model-generating function instead. Close enough.

We then use the log_lik_fun and posterior_vars arguments to tell the wrapper how to call xr.apply_ufunc. log_lik_fun is the function to be called, and it is called with the following positional arguments:

log_lik_fun(*data_ex, *[idata__i.posterior[var_name] for var_name in posterior_vars])

where data_ex is the second element returned by sel_observations, and idata__i is the InferenceData object returned by get_inference_data, which contains the fit on the subsetted data. We have generated data_ex as a list of DataArrays so it plays nicely with this call signature.

We use idata_orig as a starting point, and mostly as a source of observed and constant data, which is then subsetted in sel_observations.

Finally, sample_kwargs and idata_kwargs are used to make sure all refits and corresponding InferenceData are generated with the same properties.

pymc3_wrapper = PyMC3LinRegWrapper(
    model=compile_linreg_model, 
    log_lik_fun=calculate_log_lik, 
    posterior_vars=("b0", "b1", "sigma_e"),
    idata_orig=idata,
    sample_kwargs=sample_kwargs, 
    idata_kwargs=idata_kwargs,
)

Finally, we can use this wrapper to call az.reloo and compare the results with the PSIS LOO-CV ones.

loo_relooed = az.reloo(pymc3_wrapper, loo_orig=loo_orig)
/home/oriol/miniconda3/envs/arviz/lib/python3.8/site-packages/arviz/stats/stats_refitting.py:99: UserWarning: reloo is an experimental and untested feature
  warnings.warn("reloo is an experimental and untested feature", UserWarning)
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 13
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma_e, b1, b0]
100.00% [4000/4000 00:01<00:00 Sampling 4 chains, 0 divergences]
Sampling 4 chains for 500 tune and 500 draw iterations (2_000 + 2_000 draws total) took 2 seconds.
The acceptance probability does not match the target. It is 0.9084390959319811, but should be close to 0.8. Try to increase the number of tuning steps.
The acceptance probability does not match the target. It is 0.8833232031335186, but should be close to 0.8. Try to increase the number of tuning steps.
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 42
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma_e, b1, b0]
100.00% [4000/4000 00:01<00:00 Sampling 4 chains, 0 divergences]
Sampling 4 chains for 500 tune and 500 draw iterations (2_000 + 2_000 draws total) took 2 seconds.
The acceptance probability does not match the target. It is 0.8788024509211416, but should be close to 0.8. Try to increase the number of tuning steps.
The acceptance probability does not match the target. It is 0.900598064671754, but should be close to 0.8. Try to increase the number of tuning steps.
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 56
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma_e, b1, b0]
100.00% [4000/4000 00:01<00:00 Sampling 4 chains, 0 divergences]
Sampling 4 chains for 500 tune and 500 draw iterations (2_000 + 2_000 draws total) took 2 seconds.
The acceptance probability does not match the target. It is 0.8949149672236311, but should be close to 0.8. Try to increase the number of tuning steps.
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 73
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma_e, b1, b0]
100.00% [4000/4000 00:02<00:00 Sampling 4 chains, 0 divergences]
Sampling 4 chains for 500 tune and 500 draw iterations (2_000 + 2_000 draws total) took 2 seconds.
The acceptance probability does not match the target. It is 0.8797995668769552, but should be close to 0.8. Try to increase the number of tuning steps.
The acceptance probability does not match the target. It is 0.882380854441132, but should be close to 0.8. Try to increase the number of tuning steps.
The acceptance probability does not match the target. It is 0.8936869082173754, but should be close to 0.8. Try to increase the number of tuning steps.

loo_relooed
Computed from 2000 by 100 log-likelihood matrix

         Estimate       SE
elpd_loo  -250.77     7.13
p_loo        2.95        -
------

Pareto k diagnostic values:
                         Count   Pct.
(-Inf, 0.5]   (good)      100  100.0%
 (0.5, 0.7]   (ok)          0    0.0%
   (0.7, 1]   (bad)         0    0.0%
   (1, Inf)   (very bad)    0    0.0%

loo_orig
Computed from 2000 by 100 log-likelihood matrix

         Estimate       SE
elpd_loo  -250.78     7.13
p_loo        2.96        -
------

Pareto k diagnostic values:
                         Count   Pct.
(-Inf, 0.5]   (good)       96   96.0%
 (0.5, 0.7]   (ok)          0    0.0%
   (0.7, 1]   (bad)         2    2.0%
   (1, Inf)   (very bad)    2    2.0%
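As expected, loo_relooed and loo_orig report essentially the same elpd_loo estimate (-250.77 versus -250.78): the exact refits agree with the PSIS approximation for the four observations we artificially flagged, validating our wrapper.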