Refitting NumPyro models with ArviZ (and xarray)#

ArviZ is backend agnostic and therefore does not sample directly. In order to take advantage of algorithms that require refitting models several times, ArviZ uses SamplingWrappers to convert the API of the sampling backend to a common set of functions. Hence, algorithms like leave-future-out cross-validation can be used in ArviZ independently of the sampling backend used.

Below is an example of SamplingWrapper usage for NumPyro.
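In outline, a wrapper subclass only needs to supply a handful of backend-specific methods; the skeleton below is a sketch of the pattern (the method names are the ones az.reloo() relies on, implemented in full later in this notebook):

import arviz as az

# Skeleton only; the real NumPyro implementation comes later in this notebook.
class MyBackendWrapper(az.SamplingWrapper):
    def sel_observations(self, idx):
        ...  # split the data into a refit subset and the excluded observations

    def sample(self, modified_observed_data):
        ...  # run the backend's sampler on the subsetted data

    def get_inference_data(self, fit):
        ...  # convert the backend's fit object into an InferenceData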

import arviz as az
import numpyro
import numpyro.distributions as dist
import jax.random as random
from numpyro.infer import MCMC, NUTS
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import xarray as xr
numpyro.set_host_device_count(4)

For this example, we will use a linear regression model.

np.random.seed(26)

xdata = np.linspace(0, 50, 100)
b0, b1, sigma = -2, 1, 3
ydata = np.random.normal(loc=b1 * xdata + b0, scale=sigma)
plt.plot(xdata, ydata)
[Figure: line plot of ydata versus xdata showing a noisy increasing linear trend]

Now we will write the NumPyro model:

def model(N, x, y=None):
    b0 = numpyro.sample("b0", dist.Normal(0, 10))
    b1 = numpyro.sample("b1", dist.Normal(0, 10))
    sigma_e = numpyro.sample("sigma_e", dist.HalfNormal(10))
    numpyro.sample("y", dist.Normal(b0 + b1 * x, sigma_e), obs=y)
data_dict = {
    "N": len(ydata),
    "y": ydata,
    "x": xdata,
}
kernel = NUTS(model)
sample_kwargs = dict(
    sampler=kernel, 
    num_warmup=1000, 
    num_samples=1000, 
    num_chains=4, 
    chain_method="parallel"
)
mcmc = MCMC(**sample_kwargs)
mcmc.run(random.PRNGKey(0), **data_dict)

We have defined a dictionary sample_kwargs that will be passed to the SamplingWrapper in order to make sure that all refits use the same sampler parameters. We follow the same pattern with the keyword arguments to az.from_numpyro, stored in idata_kwargs.

dims = {"y": ["time"], "x": ["time"]}
idata_kwargs = {
    "dims": dims,
    "constant_data": {"x": xdata}
}
idata = az.from_numpyro(mcmc, **idata_kwargs)
del idata.log_likelihood
idata
arviz.InferenceData
    • <xarray.Dataset>
      Dimensions:  (chain: 4, draw: 1000)
      Coordinates:
        * chain    (chain) int64 0 1 2 3
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 992 993 994 995 996 997 998 999
      Data variables:
          b0       (chain, draw) float32 -3.0963688 -3.1254756 ... -2.5883367
          b1       (chain, draw) float32 1.0462681 1.0379426 ... 1.038727 1.0135907
          sigma_e  (chain, draw) float32 3.047911 2.6600552 ... 3.0927758 3.2862334
      Attributes:
          created_at:                 2020-10-06T03:44:50.997985
          arviz_version:              0.10.0
          inference_library:          numpyro
          inference_library_version:  0.4.0

    • <xarray.Dataset>
      Dimensions:    (chain: 4, draw: 1000)
      Coordinates:
        * chain      (chain) int64 0 1 2 3
        * draw       (draw) int64 0 1 2 3 4 5 6 7 ... 992 993 994 995 996 997 998 999
      Data variables:
          diverging  (chain, draw) bool False False False False ... False False False
      Attributes:
          created_at:                 2020-10-06T03:44:50.999466
          arviz_version:              0.10.0
          inference_library:          numpyro
          inference_library_version:  0.4.0

    • <xarray.Dataset>
      Dimensions:  (time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          y        (time) float64 -1.412 -7.319 1.151 1.502 ... 48.49 48.52 46.03
      Attributes:
          created_at:                 2020-10-06T03:44:51.079386
          arviz_version:              0.10.0
          inference_library:          numpyro
          inference_library_version:  0.4.0

    • <xarray.Dataset>
      Dimensions:  (time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          x        (time) float64 0.0 0.5051 1.01 1.515 ... 48.48 48.99 49.49 50.0
      Attributes:
          created_at:                 2020-10-06T03:44:51.079921
          arviz_version:              0.10.0
          inference_library:          numpyro
          inference_library_version:  0.4.0

We are now missing the log_likelihood group because we deleted it right after creating the InferenceData. We are doing this to ease the job of the sampling wrapper. Instead of going out of our way to get NumPyro to calculate the pointwise log likelihood values for each refit and for the excluded observation at every refit, we will compromise and manually write a function to calculate the pointwise log likelihood.

Even though it is not ideal to lose part of the straight out of the box capabilities of the NumPyro-ArviZ integration, this should generally not be a problem. We are basically moving the pointwise log likelihood calculation from NumPyro to Python; in both cases we have to write the function that calculates the pointwise log likelihood ourselves.
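For reference, NumPyro itself can compute the pointwise log likelihood from posterior samples via numpyro.infer.log_likelihood; a minimal sketch of the alternative we are opting out of, reusing the model, mcmc and data_dict objects defined above:

from numpyro.infer import log_likelihood

# Pointwise log likelihood of the observed site "y" for every posterior draw;
# the result here has shape (4000, 100): flattened draws by observations.
log_lik_np = log_likelihood(model, mcmc.get_samples(), **data_dict)["y"]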

Moreover, the Python computation could even be written to be compatible with Dask (see the sketch after the apply_ufunc call below). It would then work even in cases where the number of observations is so large that storing the pointwise log likelihood values (with shape n_samples * n_observations) in memory is not possible.

def calculate_log_lik(x, y, b0, b1, sigma_e):
    mu = b0 + b1 * x
    return stats.norm(mu, sigma_e).logpdf(y)

This function works for input arrays of any shape, as long as the shapes are broadcast-compatible; there is no need to loop over each draw in order to calculate the pointwise log likelihood on scalars.

Therefore, we can use xr.apply_ufunc to handle the broadcasting and preserve the dimension names:

log_lik = xr.apply_ufunc(
    calculate_log_lik,
    idata.constant_data["x"],
    idata.observed_data["y"],
    idata.posterior["b0"],
    idata.posterior["b1"],
    idata.posterior["sigma_e"],
)
idata.add_groups(log_likelihood=log_lik)

The first argument is the function, followed by as many positional arguments as the function needs, 5 in our case. As this example does not involve many different dimensions nor combinations thereof, we do not need to pass any extra kwargs to xr.apply_ufunc.

Here we pass the arguments to calculate_log_lik as xr.DataArrays. Behind the scenes, xr.apply_ufunc broadcasts and aligns the dimensions of all the DataArrays involved, and only afterwards passes plain NumPy arrays to calculate_log_lik. Everything works automagically.
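As anticipated above, the same computation can be dispatched through Dask with a couple of extra keywords. A minimal sketch, assuming dask is installed; the chunk size of 500 draws is an arbitrary choice:

# Chunk the posterior so its variables become lazy, dask-backed arrays.
posterior_lazy = idata.posterior.chunk({"draw": 500})

log_lik_lazy = xr.apply_ufunc(
    calculate_log_lik,
    idata.constant_data["x"],
    idata.observed_data["y"],
    posterior_lazy["b0"],
    posterior_lazy["b1"],
    posterior_lazy["sigma_e"],
    dask="parallelized",    # apply calculate_log_lik chunk by chunk
    output_dtypes=[float],
)
# Nothing has been evaluated yet; log_lik_lazy.compute() triggers the work.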

Now let’s see what happens if we were to pass the arrays directly to calculate_log_lik instead:

calculate_log_lik(
    idata.constant_data["x"].values,
    idata.observed_data["y"].values,
    idata.posterior["b0"].values,
    idata.posterior["b1"].values,
    idata.posterior["sigma_e"].values
)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-10-fc2d553bde92> in <module>
----> 1 calculate_log_lik(
      2     idata.constant_data["x"].values,
      3     idata.observed_data["y"].values,
      4     idata.posterior["b0"].values,
      5     idata.posterior["b1"].values,

<ipython-input-8-e6777d985e1f> in calculate_log_lik(x, y, b0, b1, sigma_e)
      1 def calculate_log_lik(x, y, b0, b1, sigma_e):
----> 2     mu = b0 + b1 * x
      3     return stats.norm(mu, sigma_e).logpdf(y)

ValueError: operands could not be broadcast together with shapes (4,1000) (100,) 
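The posterior draws have shape (4, 1000) while the data arrays have shape (100,), so NumPy cannot align them on its own. As an illustration, one could patch this by hand by appending an axis to the posterior arrays, at the price of losing the dimension names apply_ufunc preserves:

# Manual alternative to apply_ufunc: append an axis so the (4, 1000) draws
# broadcast against the (100,) observations.
manual_log_lik = calculate_log_lik(
    idata.constant_data["x"].values,            # (100,)
    idata.observed_data["y"].values,            # (100,)
    idata.posterior["b0"].values[:, :, None],   # (4, 1000, 1)
    idata.posterior["b1"].values[:, :, None],
    idata.posterior["sigma_e"].values[:, :, None],
)
manual_log_lik.shape  # (4, 1000, 100), a bare array with no dimension names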

If you are still curious about the magic of xarray and apply_ufunc, you can also try modifying the dims used to generate the InferenceData a couple of cells above:

dims = {"y": ["time"], "x": ["time"]}

What happens to the result if you use a different name for the dimension of x?
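As a hint, here is a hypothetical version of that experiment, renaming the dimension after the fact instead of regenerating the InferenceData:

# Give x a dimension name that no longer matches y's "time" dimension.
x_renamed = idata.constant_data["x"].rename({"time": "x_dim"})

log_lik_outer = xr.apply_ufunc(
    calculate_log_lik,
    x_renamed,
    idata.observed_data["y"],
    idata.posterior["b0"],
    idata.posterior["b1"],
    idata.posterior["sigma_e"],
)
# "x_dim" and "time" are treated as unrelated dimensions and broadcast against
# each other: every y is evaluated against every x's mean, so the result
# carries both dimensions instead of a single shared "time".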

idata
arviz.InferenceData
    • <xarray.Dataset>
      Dimensions:  (chain: 4, draw: 1000)
      Coordinates:
        * chain    (chain) int64 0 1 2 3
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 992 993 994 995 996 997 998 999
      Data variables:
          b0       (chain, draw) float32 -3.0963688 -3.1254756 ... -2.5883367
          b1       (chain, draw) float32 1.0462681 1.0379426 ... 1.038727 1.0135907
          sigma_e  (chain, draw) float32 3.047911 2.6600552 ... 3.0927758 3.2862334
      Attributes:
          created_at:                 2020-10-06T03:44:50.997985
          arviz_version:              0.10.0
          inference_library:          numpyro
          inference_library_version:  0.4.0

    • <xarray.Dataset>
      Dimensions:    (chain: 4, draw: 1000)
      Coordinates:
        * chain      (chain) int64 0 1 2 3
        * draw       (draw) int64 0 1 2 3 4 5 6 7 ... 992 993 994 995 996 997 998 999
      Data variables:
          diverging  (chain, draw) bool False False False False ... False False False
      Attributes:
          created_at:                 2020-10-06T03:44:50.999466
          arviz_version:              0.10.0
          inference_library:          numpyro
          inference_library_version:  0.4.0

    • <xarray.Dataset>
      Dimensions:  (time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          y        (time) float64 -1.412 -7.319 1.151 1.502 ... 48.49 48.52 46.03
      Attributes:
          created_at:                 2020-10-06T03:44:51.079386
          arviz_version:              0.10.0
          inference_library:          numpyro
          inference_library_version:  0.4.0

    • <xarray.Dataset>
      Dimensions:  (time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          x        (time) float64 0.0 0.5051 1.01 1.515 ... 48.48 48.99 49.49 50.0
      Attributes:
          created_at:                 2020-10-06T03:44:51.079921
          arviz_version:              0.10.0
          inference_library:          numpyro
          inference_library_version:  0.4.0

    • <xarray.Dataset>
      Dimensions:  (chain: 4, draw: 1000, time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
        * chain    (chain) int64 0 1 2 3
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 992 993 994 995 996 997 998 999
      Data variables:
          x        (time, chain, draw) float64 -2.186 -2.105 -2.077 ... -2.646 -2.305

We will create a subclass of az.SamplingWrapper. Therefore, instead of having to implement all the functions required by reloo(), we only have to implement sel_observations() plus the backend-specific sample() and get_inference_data(); the log likelihood computation itself is inherited from SamplingWrapper, which uses apply_ufunc instead of assuming the log likelihood is calculated within the model code (as the PyStan wrapper does).

Let’s check the 2 outputs of sel_observations.

  1. data__i is a dictionary because it is an argument of sample, which will pass it as is to mcmc.run.

  2. data_ex is a list because it is an argument to log_likelihood__i which will pass it as *data_ex to apply_ufunc.

More on data_ex and apply_ufunc integration is given below.

class NumPyroSamplingWrapper(az.SamplingWrapper):
    def __init__(self, model, **kwargs):        
        self.rng_key = kwargs.pop("rng_key", random.PRNGKey(0))
        
        super(NumPyroSamplingWrapper, self).__init__(model, **kwargs)
    
    def sample(self, modified_observed_data):
        # Advance the wrapper's RNG state so every refit uses fresh randomness.
        self.rng_key, subkey = random.split(self.rng_key)
        mcmc = MCMC(**self.sample_kwargs)
        mcmc.run(subkey, **modified_observed_data)
        return mcmc

    def get_inference_data(self, fit):
        # Adapted from PyStanSamplingWrapper; convert the refitted MCMC object.
        idata = az.from_numpyro(fit, **self.idata_kwargs)
        return idata
    
class LinRegWrapper(NumPyroSamplingWrapper):
    def sel_observations(self, idx):
        xdata = self.idata_orig.constant_data["x"]
        ydata = self.idata_orig.observed_data["y"]
        mask = np.isin(np.arange(len(xdata)), idx)
        # data__i is passed to numpyro to sample on it -> dict of numpy array
        # data_ex is passed to apply_ufunc -> list of DataArray
        data__i = {"x": xdata[~mask].values, "y": ydata[~mask].values, "N": len(ydata[~mask])}
        data_ex = [xdata[mask], ydata[mask]]
        return data__i, data_ex
loo_orig = az.loo(idata, pointwise=True)
loo_orig
Computed from 4000 by 100 log-likelihood matrix

         Estimate       SE
elpd_loo  -250.92     7.20
p_loo        3.11        -
------

Pareto k diagnostic values:
                         Count   Pct.
(-Inf, 0.5]   (good)      100  100.0%
 (0.5, 0.7]   (ok)          0    0.0%
   (0.7, 1]   (bad)         0    0.0%
   (1, Inf)   (very bad)    0    0.0%

In this case, the Leave-One-Out Cross Validation (LOO-CV) approximation using Pareto Smoothed Importance Sampling (PSIS) works for all observations, so we will modify loo_orig in order to make reloo() believe that PSIS failed for some observations. This will also serve as validation of our wrapper, since PSIS LOO-CV already returned the correct value.

loo_orig.pareto_k[[13, 42, 56, 73]] = np.array([0.8, 1.2, 2.6, 0.9])

We initialize our sampling wrapper. Let’s stop and analyze each of the arguments.

We use the log_lik_fun and posterior_vars arguments to tell the wrapper how to call apply_ufunc(). log_lik_fun is the function to be called, and it is called with the following positional arguments:

log_lik_fun(*data_ex, *[idata__i.posterior[var_name] for var_name in posterior_vars])

where data_ex is the second element returned by sel_observations and idata__i is the InferenceData object resulting from get_inference_data, which contains the fit on the subsetted data. We have generated data_ex as a list of DataArrays so it plays nicely with this call signature.
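To make the call signature concrete, here it is written out by hand, using the original full-data fit as a stand-in for idata__i (illustration only; reloo() does this internally with the subsetted data):

# data_ex and posterior_vars exactly as the wrapper will receive them, with
# the full data standing in for the excluded-observation subsets.
data_ex_demo = [idata.constant_data["x"], idata.observed_data["y"]]
posterior_vars = ("b0", "b1", "sigma_e")

log_lik_demo = xr.apply_ufunc(
    calculate_log_lik,
    *data_ex_demo,
    *[idata.posterior[var_name] for var_name in posterior_vars],
)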

We use idata_orig as a starting point, and mostly as a source of observed and constant data which is then subsetted in sel_observations.

Finally, sample_kwargs and idata_kwargs are used to make sure all refits and corresponding InferenceData are generated with the same properties.

numpyro_wrapper = LinRegWrapper(
    mcmc, 
    rng_key=random.PRNGKey(7),
    log_lik_fun=calculate_log_lik, 
    posterior_vars=("b0", "b1", "sigma_e"),
    idata_orig=idata, 
    sample_kwargs=sample_kwargs, 
    idata_kwargs=idata_kwargs
)
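As an optional sanity check (not part of the original workflow), we can peek at what sel_observations hands to a refit:

# Excluding observation 13: data__i keeps the other 99 points for the refit,
# data_ex holds the excluded point as DataArrays ready for apply_ufunc.
data__i, data_ex = numpyro_wrapper.sel_observations([13])
print(data__i["N"])       # 99 observations remain
print(data_ex[0].values)  # the excluded x value (~6.57 on the 0-50 grid)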

Finally, we can use this wrapper to call reloo() and compare its results with the original PSIS LOO-CV ones.

loo_relooed = az.reloo(numpyro_wrapper, loo_orig=loo_orig)
/home/oriol/miniconda3/envs/arviz/lib/python3.8/site-packages/arviz/stats/stats_refitting.py:99: UserWarning: reloo is an experimental and untested feature
  warnings.warn("reloo is an experimental and untested feature", UserWarning)
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 13
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 42
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 56
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 73
loo_relooed
Computed from 4000 by 100 log-likelihood matrix

         Estimate       SE
elpd_loo  -250.89     7.20
p_loo        3.08        -
------

Pareto k diagnostic values:
                         Count   Pct.
(-Inf, 0.5]   (good)      100  100.0%
 (0.5, 0.7]   (ok)          0    0.0%
   (0.7, 1]   (bad)         0    0.0%
   (1, Inf)   (very bad)    0    0.0%
loo_orig
Computed from 4000 by 100 log-likelihood matrix

         Estimate       SE
elpd_loo  -250.92     7.20
p_loo        3.11        -
------

Pareto k diagnostic values:
                         Count   Pct.
(-Inf, 0.5]   (good)       96   96.0%
 (0.5, 0.7]   (ok)          0    0.0%
   (0.7, 1]   (bad)         2    2.0%
   (1, Inf)   (very bad)    2    2.0%