Refitting PyStan (2.x) models with ArviZ (and xarray)

ArviZ is backend agnostic and therefore does not sample directly. In order to take advantage of algorithms that require refitting models several times, ArviZ uses SamplingWrappers to convert the API of the sampling backend to a common set of functions. Hence, functions like Leave Future Out Cross Validation can be used in ArviZ independently of the sampling backend.

Below is an example of SamplingWrapper usage for PyStan (2.x).

import arviz as az
import pystan
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import xarray as xr

For the example, we will use a linear regression model.

np.random.seed(26)

xdata = np.linspace(0, 50, 100)
b0, b1, sigma = -2, 1, 3
ydata = np.random.normal(loc=b1 * xdata + b0, scale=sigma)
plt.plot(xdata, ydata)
(Figure: the simulated data, ydata plotted against xdata.)

Now we will write the Stan code, making sure the array sizes are passed as data so that the same model can be refit on subsets of the observations.

refit_lr_code = """
data {
  // Define data for fitting
  int<lower=0> N;
  vector[N] x;
  vector[N] y;
}

parameters {
  real b0;
  real b1;
  real<lower=0> sigma_e;
}

model {
  b0 ~ normal(0, 10);
  b1 ~ normal(0, 10);
  sigma_e ~ normal(0, 10);
  for (i in 1:N) {
    y[i] ~ normal(b0 + b1 * x[i], sigma_e);  // use only data for fitting
  }
  
}

generated quantities {
    vector[N] y_hat;
    
    for (i in 1:N) {
        // pointwise log likelihood will be calculated outside Stan, 
        // posterior predictive however will be generated here, there are 
        // no restrictions on adding more generated quantities
        y_hat[i] = normal_rng(b0 + b1 * x[i], sigma_e);
    }
}
"""
sm = pystan.StanModel(model_code=refit_lr_code)
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_2cdc9d1f1db425bb7186f919c45c9b36 NOW.
data_dict = {
    "N": len(ydata),
    "y": ydata,
    "x": xdata,
}
sample_kwargs = {"iter": 1000, "chains": 4}
fit = sm.sampling(data=data_dict, **sample_kwargs)

We have defined a dictionary sample_kwargs that will be passed to the SamplingWrapper in order to make sure that all refits use the same sampler parameters. We follow the same pattern with from_pystan().

dims = {"y": ["time"], "x": ["time"], "y_hat": ["time"]}
idata_kwargs = {
    "posterior_predictive": ["y_hat"],
    "observed_data": "y",
    "constant_data": "x",
    "dims": dims,
}
idata = az.from_pystan(posterior=fit, **idata_kwargs)

We are now missing the log_likelihood group because we have not used the log_likelihood argument in idata_kwargs. We are doing this to ease the job of the sampling wrapper. Instead of going out of our way to get Stan to calculate the pointwise log likelihood values for each refit and for the excluded observation at every refit, we will compromise and manually write a function to calculate the pointwise log likelihood.

Even though it is not ideal to lose some of the out-of-the-box capabilities of the PyStan-ArviZ integration, this should generally not be a problem: we are only moving the pointwise log likelihood calculation from the Stan code to the Python code. In both cases, we have to write the function that calculates the pointwise log likelihood ourselves.

Moreover, the Python computation could even be written to be compatible with Dask (see the sketch after the apply_ufunc call below). It would then work even in cases where the large number of observations makes it impossible to store the pointwise log likelihood values (with shape n_samples * n_observations) in memory.

def calculate_log_lik(x, y, b0, b1, sigma_e):
    mu = b0 + b1 * x
    return stats.norm(mu, sigma_e).logpdf(y)

This function should work for any shape of the input arrays as long as their shapes are compatible and can broadcast. There is no need to loop over each draw in order to calculate the pointwise log likelihood using scalars.
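For instance (a quick check, not part of the original workflow), we can evaluate it for a single draw by passing scalar parameter values, here the true values used to simulate the data:

# Scalar parameters broadcast against the (100,) data arrays,
# returning one pointwise log likelihood value per observation.
single_draw_log_lik = calculate_log_lik(xdata, ydata, b0, b1, sigma)
single_draw_log_lik.shape  # (100,)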

Therefore, we can use xr.apply_ufunc to handle the broadcasting and preserve the dimension names:

log_lik = xr.apply_ufunc(
    calculate_log_lik,
    idata.constant_data["x"],
    idata.observed_data["y"],
    idata.posterior["b0"],
    idata.posterior["b1"],
    idata.posterior["sigma_e"],
)
idata.add_groups(log_likelihood=log_lik)

The first argument is the function, followed by as many positional arguments as the function needs, 5 in our case. As this case does not involve many different dimensions or combinations thereof, we do not need to pass any extra kwargs to xarray.apply_ufunc.
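For reference, here is the Dask-compatible sketch promised above, one case where extra kwargs are needed (hypothetical chunk size; assumes dask is installed, and is not necessary for a dataset of this size):

log_lik_lazy = xr.apply_ufunc(
    calculate_log_lik,
    idata.constant_data["x"],
    idata.observed_data["y"],
    # Chunking the posterior makes apply_ufunc return a lazy, dask-backed
    # result, so the full (chain, draw, time) array is never held in memory.
    idata.posterior["b0"].chunk({"draw": 100}),
    idata.posterior["b1"].chunk({"draw": 100}),
    idata.posterior["sigma_e"].chunk({"draw": 100}),
    dask="parallelized",
    output_dtypes=[float],
)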

We pass the arguments to calculate_log_lik as xarray.DataArrays. What is happening behind the scenes is that apply_ufunc() aligns and broadcasts the dimensions of all the DataArrays involved and then passes NumPy arrays to calculate_log_lik. Everything works automagically.

Now let’s see what happens if we were to pass the arrays directly to calculate_log_lik instead:

calculate_log_lik(
    idata.constant_data["x"].values,
    idata.observed_data["y"].values,
    idata.posterior["b0"].values,
    idata.posterior["b1"].values,
    idata.posterior["sigma_e"].values
)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-10-fc2d553bde92> in <module>
----> 1 calculate_log_lik(
      2     idata.constant_data["x"].values,
      3     idata.observed_data["y"].values,
      4     idata.posterior["b0"].values,
      5     idata.posterior["b1"].values,

<ipython-input-8-e6777d985e1f> in calculate_log_lik(x, y, b0, b1, sigma_e)
      1 def calculate_log_lik(x, y, b0, b1, sigma_e):
----> 2     mu = b0 + b1 * x
      3     return stats.norm(mu, sigma_e).logpdf(y)

ValueError: operands could not be broadcast together with shapes (4,500) (100,) 
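For comparison, a minimal sketch of the manual NumPy fix (not needed once we use apply_ufunc): adding trailing axes of length one so the (chain, draw) posterior arrays broadcast against the (time,) data arrays, at the cost of losing the dimension names:

log_lik_np = calculate_log_lik(
    idata.constant_data["x"].values,
    idata.observed_data["y"].values,
    # A trailing axis of length 1 lets (chain, draw) broadcast against (time,).
    idata.posterior["b0"].values[:, :, None],
    idata.posterior["b1"].values[:, :, None],
    idata.posterior["sigma_e"].values[:, :, None],
)
log_lik_np.shape  # (4, 500, 100), i.e. (chain, draw, time)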

If you are still curious about the magic of xarray and apply_ufunc, you can also try to modify the dims used to generate the InferenceData a couple of cells before:

dims = {"y": ["time"], "x": ["time"]}

What happens to the result if you use a different name for the dimension of x?
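A sketch of that experiment: rather than regenerating the InferenceData, we can simulate the change by renaming the dimension of x (obs_x is a hypothetical name). Because xarray broadcasts by dimension name, x and y are no longer aligned and the result gains both dimensions, one value per (x, y) pair instead of one per observation:

x_renamed = idata.constant_data["x"].rename({"time": "obs_x"})
log_lik_outer = xr.apply_ufunc(
    calculate_log_lik,
    x_renamed,
    idata.observed_data["y"],
    idata.posterior["b0"],
    idata.posterior["b1"],
    idata.posterior["sigma_e"],
)
# The result now has both an obs_x and a time dimension of length 100,
# making it 100x larger than intended.
log_lik_outer.sizes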

idata
arviz.InferenceData
    • <xarray.Dataset>
      Dimensions:  (chain: 4, draw: 500)
      Coordinates:
        * chain    (chain) int64 0 1 2 3
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
      Data variables:
          b0       (chain, draw) float64 -1.71 -3.234 -2.254 ... -1.516 -3.213 -2.435
          b1       (chain, draw) float64 1.007 1.036 1.025 1.028 ... 0.99 1.046 1.025
          sigma_e  (chain, draw) float64 3.135 3.333 2.613 2.644 ... 2.705 3.116 3.664
      Attributes:
          created_at:                 2020-10-06T01:27:54.846477
          arviz_version:              0.10.0
          inference_library:          pystan
          inference_library_version:  2.19.1.1
          args:                       [{'random_seed': '345470392', 'chain_id': 0, ...
          inits:                      [[1.7126106914526407, -0.030748403931205592, ...
          step_size:                  [0.376451, 0.417677, 0.382013, 0.282232]
          metric:                     ['diag_e', 'diag_e', 'diag_e', 'diag_e']
          inv_metric:                 [[0.360859, 0.000475137, 0.00443736], [0.3487...
          adaptation_info:            ['# Adaptation terminated\n# Step size = 0.37...
          stan_code:                  \ndata {\n  // Define data for fitting\n  int...

    • <xarray.Dataset>
      Dimensions:  (chain: 4, draw: 500, time: 100)
      Coordinates:
        * chain    (chain) int64 0 1 2 3
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          y_hat    (chain, draw, time) float64 -6.143 3.58 -2.542 ... 50.53 52.38
      Attributes:
          created_at:                 2020-10-06T01:27:54.856579
          arviz_version:              0.10.0
          inference_library:          pystan
          inference_library_version:  2.19.1.1

    • <xarray.Dataset>
      Dimensions:      (chain: 4, draw: 500)
      Coordinates:
        * chain        (chain) int64 0 1 2 3
        * draw         (draw) int64 0 1 2 3 4 5 6 7 ... 493 494 495 496 497 498 499
      Data variables:
          accept_stat  (chain, draw) float64 0.9181 0.9607 0.9892 ... 0.9131 0.8482
          stepsize     (chain, draw) float64 0.3765 0.3765 0.3765 ... 0.2822 0.2822
          treedepth    (chain, draw) int64 3 3 4 3 2 1 3 4 4 3 ... 1 3 2 3 3 4 4 1 4 4
          n_leapfrog   (chain, draw) int64 7 11 15 15 3 3 7 15 ... 11 7 15 15 3 15 15
          diverging    (chain, draw) bool False False False ... False False False
          energy       (chain, draw) float64 159.8 160.4 160.1 ... 156.7 157.6 162.1
          lp           (chain, draw) float64 -156.2 -157.9 -156.6 ... -156.5 -159.7
      Attributes:
          created_at:                 2020-10-06T01:27:54.851944
          arviz_version:              0.10.0
          inference_library:          pystan
          inference_library_version:  2.19.1.1
          args:                       [{'random_seed': '345470392', 'chain_id': 0, ...
          inits:                      [[1.7126106914526407, -0.030748403931205592, ...
          step_size:                  [0.376451, 0.417677, 0.382013, 0.282232]
          metric:                     ['diag_e', 'diag_e', 'diag_e', 'diag_e']
          inv_metric:                 [[0.360859, 0.000475137, 0.00443736], [0.3487...
          adaptation_info:            ['# Adaptation terminated\n# Step size = 0.37...
          stan_code:                  \ndata {\n  // Define data for fitting\n  int...

    • <xarray.Dataset>
      Dimensions:  (time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          y        (time) float64 -1.412 -7.319 1.151 1.502 ... 48.49 48.52 46.03
      Attributes:
          created_at:                 2020-10-06T01:27:54.841473
          arviz_version:              0.10.0
          inference_library:          pystan
          inference_library_version:  2.19.1.1

    • <xarray.Dataset>
      Dimensions:  (time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          x        (time) float64 0.0 0.5051 1.01 1.515 ... 48.48 48.99 49.49 50.0
      Attributes:
          created_at:                 2020-10-06T01:27:54.842743
          arviz_version:              0.10.0
          inference_library:          pystan
          inference_library_version:  2.19.1.1

    • <xarray.Dataset>
      Dimensions:  (chain: 4, draw: 500, time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
        * chain    (chain) int64 0 1 2 3
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
      Data variables:
          x        (time, chain, draw) float64 -2.066 -2.272 -1.931 ... -2.53 -2.504

We will create a subclass of SamplingWrapper. Instead of having to implement all the functions required by reloo(), we only have to implement sel_observations() (sample() and get_inference_data() are cloned from PyStan2SamplingWrapper, since we use apply_ufunc instead of assuming the log likelihood is calculated within Stan).

Let’s check the two outputs of sel_observations:

  1. data__i is a dictionary because it is an argument of sample, which will pass it as is to model.sampling.

  2. data_ex is a list because it is an argument to log_likelihood__i, which will pass it as *data_ex to apply_ufunc.

More on data_ex and apply_ufunc integration is given below.

class LinearRegressionWrapper(az.SamplingWrapper):
    def sel_observations(self, idx):
        xdata = self.idata_orig.constant_data["x"]
        ydata = self.idata_orig.observed_data["y"]
        mask = np.isin(np.arange(len(xdata)), idx)
        data__i = {"x": xdata[~mask], "y": ydata[~mask], "N": len(ydata[~mask])}
        data_ex = [ary[mask] for ary in (xdata, ydata)]
        return data__i, data_ex
    
    def sample(self, modified_observed_data):
        # Cloned from PyStan2SamplingWrapper.
        fit = self.model.sampling(data=modified_observed_data, **self.sample_kwargs)
        return fit

    def get_inference_data(self, fit):
        # Cloned from PyStan2SamplingWrapper.
        idata = az.from_pystan(posterior=fit, **self.idata_kwargs)
        return idata
loo_orig = az.loo(idata, pointwise=True)
loo_orig
Computed from 2000 by 100 log-likelihood matrix

         Estimate       SE
elpd_loo  -250.79     7.12
p_loo        2.95        -
------

Pareto k diagnostic values:
                         Count   Pct.
(-Inf, 0.5]   (good)      100  100.0%
 (0.5, 0.7]   (ok)          0    0.0%
   (0.7, 1]   (bad)         0    0.0%
   (1, Inf)   (very bad)    0    0.0%

In this case, the Leave-One-Out Cross Validation (LOO-CV) approximation using Pareto Smoothed Importance Sampling (PSIS) works for all observations, so we will modify loo_orig in order to make reloo() believe that PSIS failed for some of them. This will also serve as a validation of our wrapper, because the PSIS LOO-CV already returned the correct value.

loo_orig.pareto_k[[13, 42, 56, 73]] = np.array([0.8, 1.2, 2.6, 0.9])

We initialize our sampling wrapper. Let’s stop and analyze each of the arguments.

We use the log_lik_fun and posterior_vars arguments to tell the wrapper how to call apply_ufunc(). log_lik_fun is the function that will be called with the following positional arguments:

log_lik_fun(*data_ex, *[idata__i.posterior[var_name] for var_name in posterior_vars])

where data_ex is the second element returned by sel_observations and idata__i is the InferenceData object resulting from get_inference_data, which contains the fit on the subsetted data. We have generated data_ex as a list of DataArrays so that it plays nicely with this call signature.

We use idata_orig as a starting point, and mostly as a source of observed and constant data which is then subsetted in sel_observations.

Finally, sample_kwargs and idata_kwargs are used to make sure all refits and corresponding InferenceData are generated with the same properties.

pystan_wrapper = LinearRegressionWrapper(
    sm, 
    log_lik_fun=calculate_log_lik, 
    posterior_vars=("b0", "b1", "sigma_e"),
    idata_orig=idata, 
    sample_kwargs=sample_kwargs, 
    idata_kwargs=idata_kwargs
)
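As a quick sanity check (not part of the original workflow), we can call sel_observations directly and verify its two outputs: excluding observation 13 should leave 99 observations for refitting and 1 excluded observation.

data__13, data_ex_13 = pystan_wrapper.sel_observations([13])
# N is 99 for the refit; each excluded array holds a single observation.
data__13["N"], [ary.shape for ary in data_ex_13]  # (99, [(1,), (1,)])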

We can now use this wrapper to call az.reloo() and compare the results with the PSIS LOO-CV ones.

loo_relooed = az.reloo(pystan_wrapper, loo_orig=loo_orig)
/home/oriol/miniconda3/envs/arviz/lib/python3.8/site-packages/arviz/stats/stats_refitting.py:99: UserWarning: reloo is an experimental and untested feature
  warnings.warn("reloo is an experimental and untested feature", UserWarning)
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 13
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 42
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 56
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 73
loo_relooed
Computed from 2000 by 100 log-likelihood matrix

         Estimate       SE
elpd_loo  -250.79     7.12
p_loo        2.95        -
------

Pareto k diagnostic values:
                         Count   Pct.
(-Inf, 0.5]   (good)      100  100.0%
 (0.5, 0.7]   (ok)          0    0.0%
   (0.7, 1]   (bad)         0    0.0%
   (1, Inf)   (very bad)    0    0.0%
loo_orig
Computed from 2000 by 100 log-likelihood matrix

         Estimate       SE
elpd_loo  -250.79     7.12
p_loo        2.95        -
------

Pareto k diagnostic values:
                         Count   Pct.
(-Inf, 0.5]   (good)       96   96.0%
 (0.5, 0.7]   (ok)          0    0.0%
   (0.7, 1]   (bad)         2    2.0%
   (1, Inf)   (very bad)    2    2.0%
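As a final check, we can inspect the Pareto k values directly (using the pareto_k attribute access shown earlier): reloo replaces the values for the observations it refits exactly, so loo_relooed has all 100 observations in the good range, while loo_orig keeps the artificially inflated values we assigned.

# loo_orig still holds the four values we set by hand; in loo_relooed those
# observations were refitted exactly and their pareto_k values reset.
loo_orig.pareto_k[[13, 42, 56, 73]], loo_relooed.pareto_k[[13, 42, 56, 73]]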