Refitting NumPyro models with ArviZ

ArviZ is backend agnostic and therefore does not sample directly. In order to take advantage of algorithms that require refitting models several times, ArviZ uses SamplingWrapper to convert the API of the sampling backend to a common set of functions. Hence, functions like Leave Future Out Cross Validation can be used in ArviZ independently of the sampling backend used.
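
The sketch below is a schematic overview (not tied to any backend) of the methods a SamplingWrapper subclass provides so that refitting functions such as az.reloo can drive it; the NumPyro wrapper developed later in this page fills them in for a concrete model.

import arviz as az

# Schematic wrapper interface used by refitting functions such as az.reloo;
# the method names match the NumPyro example developed below.
class MyBackendWrapper(az.SamplingWrapper):
    def sel_observations(self, idx):
        # Split the stored data into "all observations except idx" (used to
        # refit) and "only idx" (used to evaluate the log likelihood)
        ...

    def sample(self, modified_observed_data):
        # Refit the model on the reduced dataset and return the fit object
        ...

    def get_inference_data(self, fit):
        # Convert the backend-specific fit object to InferenceData
        ...

    def log_likelihood__i(self, excluded_obs, idata__i):
        # Pointwise log likelihood of the excluded observations under the
        # refitted posterior
        ...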

Below is a complete example of SamplingWrapper usage for NumPyro.

import arviz as az
import numpyro
import numpyro.distributions as dist
import jax.random as random
from numpyro.infer import MCMC, NUTS
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import xarray as xr
numpyro.set_host_device_count(4)

For the example, we will use a linear regression model.

np.random.seed(26)

xdata = np.linspace(0, 50, 100)
b0, b1, sigma = -2, 1, 3
ydata = np.random.normal(loc=b1 * xdata + b0, scale=sigma)
plt.plot(xdata, ydata)
Figure: line plot of the simulated ydata against xdata.

Now we will write the NumPyro code:

def model(N, x, y=None):
    b0 = numpyro.sample("b0", dist.Normal(0, 10))
    b1 = numpyro.sample("b1", dist.Normal(0, 10))
    sigma_e = numpyro.sample("sigma_e", dist.HalfNormal(10))
    numpyro.sample("y", dist.Normal(b0 + b1 * x, sigma_e), obs=y)
data_dict = {
    "N": len(ydata),
    "y": ydata,
    "x": xdata,
}
kernel = NUTS(model)
sample_kwargs = dict(
    sampler=kernel, 
    num_warmup=1000, 
    num_samples=1000, 
    num_chains=4, 
    chain_method="parallel"
)
mcmc = MCMC(**sample_kwargs)
mcmc.run(random.PRNGKey(0), **data_dict)

We have defined a dictionary sample_kwargs that will be passed to the SamplingWrapper in order to make sure that all refits use the same sampler parameters. We follow the same pattern with the arguments to az.from_numpyro, which we store in idata_kwargs.

dims = {"y": ["time"], "x": ["time"]}
idata_kwargs = {
    "dims": dims,
    "constant_data": {"x": xdata}
}
idata = az.from_numpyro(mcmc, **idata_kwargs)
idata
arviz.InferenceData
    • posterior: <xarray.Dataset>
      Dimensions:  (chain: 4, draw: 1000)
      Coordinates:
        * chain    (chain) int64 0 1 2 3
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 992 993 994 995 996 997 998 999
      Data variables:
          b0       (chain, draw) float32 -3.0963688 -3.1254756 ... -2.5883367
          b1       (chain, draw) float32 1.0462681 1.0379426 ... 1.038727 1.0135907
          sigma_e  (chain, draw) float32 3.047911 2.6600552 ... 3.0927758 3.2862334
      Attributes:
          created_at:                 2020-10-06T03:36:51.467097
          arviz_version:              0.10.0
          inference_library:          numpyro
          inference_library_version:  0.4.0

    • log_likelihood: <xarray.Dataset>
      Dimensions:  (chain: 4, draw: 1000, time: 100)
      Coordinates:
        * chain    (chain) int64 0 1 2 3
        * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 992 993 994 995 996 997 998 999
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          y        (chain, draw, time) float32 -2.1860917 -3.248132 ... -2.305284
      Attributes:
          created_at:                 2020-10-06T03:36:51.544419
          arviz_version:              0.10.0
          inference_library:          numpyro
          inference_library_version:  0.4.0

    • sample_stats: <xarray.Dataset>
      Dimensions:    (chain: 4, draw: 1000)
      Coordinates:
        * chain      (chain) int64 0 1 2 3
        * draw       (draw) int64 0 1 2 3 4 5 6 7 ... 992 993 994 995 996 997 998 999
      Data variables:
          diverging  (chain, draw) bool False False False False ... False False False
      Attributes:
          created_at:                 2020-10-06T03:36:51.468495
          arviz_version:              0.10.0
          inference_library:          numpyro
          inference_library_version:  0.4.0

    • observed_data: <xarray.Dataset>
      Dimensions:  (time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          y        (time) float64 -1.412 -7.319 1.151 1.502 ... 48.49 48.52 46.03
      Attributes:
          created_at:                 2020-10-06T03:36:51.545286
          arviz_version:              0.10.0
          inference_library:          numpyro
          inference_library_version:  0.4.0

    • constant_data: <xarray.Dataset>
      Dimensions:  (time: 100)
      Coordinates:
        * time     (time) int64 0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
      Data variables:
          x        (time) float64 0.0 0.5051 1.01 1.515 ... 48.48 48.99 49.49 50.0
      Attributes:
          created_at:                 2020-10-06T03:36:51.545865
          arviz_version:              0.10.0
          inference_library:          numpyro
          inference_library_version:  0.4.0

We will create a subclass of az.SamplingWrapper. Instead of having to implement every function required by reloo() from scratch, the generic NumPyroSamplingWrapper below implements the NumPyro-specific pieces: sample() refits the model, get_inference_data() converts the refit to InferenceData, and log_likelihood__i() uses numpyro.infer.log_likelihood() to compute the pointwise log likelihood of the excluded observations. The model-specific LinRegWrapper then only has to implement sel_observations().

Let's check the 2 outputs of sel_observations.

  1. data__i is a dictionary because it is an argument of sample(), which passes it as keyword arguments to mcmc.run() and hence to our model.

  2. data_ex is also a dictionary because it is an argument of log_likelihood__i(), which passes it as keyword arguments to numpyro.infer.log_likelihood().

class NumPyroSamplingWrapper(az.SamplingWrapper):
    def __init__(self, model, **kwargs):
        self.model_fun = model.sampler.model
        # Pop rng_key so it is not forwarded to az.SamplingWrapper
        self.rng_key = kwargs.pop("rng_key", random.PRNGKey(0))

        super().__init__(model, **kwargs)
        
    def log_likelihood__i(self, excluded_obs, idata__i):
        # Stack chains and draws into a single sample dimension, the format
        # expected by numpyro.infer.log_likelihood
        samples = {
            key: values.values.reshape((-1, *values.values.shape[2:]))
            for key, values
            in idata__i.posterior.items()
        }
        log_likelihood_dict = numpyro.infer.log_likelihood(
            self.model_fun, samples, **excluded_obs
        )
        if len(log_likelihood_dict) > 1:
            raise ValueError("multiple likelihoods found")
        data = {}
        nchains = idata__i.posterior.sizes["chain"]
        ndraws = idata__i.posterior.sizes["draw"]
        for obs_name, log_like in log_likelihood_dict.items():
            # Restore the (chain, draw, *obs_shape) shape expected by ArviZ
            shape = (nchains, ndraws) + log_like.shape[1:]
            data[obs_name] = np.reshape(log_like.copy(), shape)
        return az.dict_to_dataset(data)[obs_name]
    
    def sample(self, modified_observed_data):
        # Split the key so each refit uses a different PRNG stream
        self.rng_key, subkey = random.split(self.rng_key)
        mcmc = MCMC(**self.sample_kwargs)
        mcmc.run(subkey, **modified_observed_data)
        return mcmc

    def get_inference_data(self, fit):
        # Convert the refitted MCMC object (not the original global mcmc)
        idata = az.from_numpyro(fit, **self.idata_kwargs)
        return idata
    
class LinRegWrapper(NumPyroSamplingWrapper):
    def sel_observations(self, idx):
        xdata = self.idata_orig.constant_data["x"].values
        ydata = self.idata_orig.observed_data["y"].values
        mask = np.isin(np.arange(len(xdata)), idx)
        # data__i: all observations except idx, used to refit the model
        data__i = {"x": xdata[~mask], "y": ydata[~mask], "N": len(ydata[~mask])}
        # data_ex: only the excluded observations, used in log_likelihood__i
        data_ex = {"x": xdata[mask], "y": ydata[mask], "N": len(ydata[mask])}
        return data__i, data_ex

loo_orig = az.loo(idata, pointwise=True)
loo_orig
Computed from 4000 by 100 log-likelihood matrix

         Estimate       SE
elpd_loo  -250.92     7.20
p_loo        3.11        -
------

Pareto k diagnostic values:
                         Count   Pct.
(-Inf, 0.5]   (good)      100  100.0%
 (0.5, 0.7]   (ok)          0    0.0%
   (0.7, 1]   (bad)         0    0.0%
   (1, Inf)   (very bad)    0    0.0%

In this case, the Leave-One-Out Cross Validation (LOO-CV) approximation using Pareto Smoothed Importance Sampling (PSIS) works for all observations, so we will modify loo_orig in order to make az.reloo believe that PSIS failed for some observations. This will also serve as a validation of our wrapper, because the PSIS LOO-CV result above is already correct and the refits should reproduce it.

loo_orig.pareto_k[[13, 42, 56, 73]] = np.array([0.8, 1.2, 2.6, 0.9])

We initialize our sampling wrapper. Let’s stop and analyze each of the arguments.

  • We use idata_orig as a starting point, and mostly as a source of observed and constant data which is then subsetted in sel_observations.

  • We also use model (here the fitted mcmc object) to get automatic log likelihood computation, and we have the option to set the rng_key. The key is split before every refit, so even though each refit already sees different data, it also uses a different PRNG stream.

  • Finally, sample_kwargs and idata_kwargs are used to make sure all refits and corresponding InferenceData are generated with the same properties.

numpyro_wrapper = LinRegWrapper(
    mcmc, 
    rng_key=random.PRNGKey(5),
    idata_orig=idata, 
    sample_kwargs=sample_kwargs, 
    idata_kwargs=idata_kwargs
)
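
Before calling az.reloo we can optionally exercise the wrapper by hand for a single left-out observation. This sanity check is not part of the original workflow; it simply chains the methods defined above and helps catch shape or argument mismatches early (observation index 13 is an arbitrary choice).

# Optional sanity check: one manual refit through the wrapper methods that
# az.reloo will call for every flagged observation.
data__i, data_ex = numpyro_wrapper.sel_observations([13])  # drop observation 13
fit = numpyro_wrapper.sample(data__i)                      # refit on the remaining 99 points
idata__i = numpyro_wrapper.get_inference_data(fit)
log_lik__i = numpyro_wrapper.log_likelihood__i(data_ex, idata__i)
log_lik__i.shape  # expected: (4, 1000, 1) -> (chain, draw, excluded observation)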

Finally, we can use this wrapper to call az.reloo and compare the results with the original PSIS LOO-CV results.

loo_relooed = az.reloo(numpyro_wrapper, loo_orig=loo_orig)
/home/oriol/miniconda3/envs/arviz/lib/python3.8/site-packages/arviz/stats/stats_refitting.py:99: UserWarning: reloo is an experimental and untested feature
  warnings.warn("reloo is an experimental and untested feature", UserWarning)
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 13
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 42
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 56
arviz.stats.stats_refitting - INFO - Refitting model excluding observation 73
loo_relooed
Computed from 4000 by 100 log-likelihood matrix

         Estimate       SE
elpd_loo  -250.89     7.20
p_loo        3.08        -
------

Pareto k diagnostic values:
                         Count   Pct.
(-Inf, 0.5]   (good)      100  100.0%
 (0.5, 0.7]   (ok)          0    0.0%
   (0.7, 1]   (bad)         0    0.0%
   (1, Inf)   (very bad)    0    0.0%
loo_orig
Computed from 4000 by 100 log-likelihood matrix

         Estimate       SE
elpd_loo  -250.92     7.20
p_loo        3.11        -
------

Pareto k diagnostic values:
                         Count   Pct.
(-Inf, 0.5]   (good)       96   96.0%
 (0.5, 0.7]   (ok)          0    0.0%
   (0.7, 1]   (bad)         2    2.0%
   (1, Inf)   (very bad)    2    2.0%
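
As an optional visual check (not part of the original notebook), az.plot_khat can display the pointwise Pareto k values; after az.reloo the refitted observations are no longer flagged, in agreement with the summary tables above.

# Optional: compare the Pareto k diagnostics of the hand-modified loo_orig
# with the result returned by az.reloo.
fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
az.plot_khat(loo_orig, ax=axes[0])
axes[0].set_title("loo_orig (k values modified by hand)")
az.plot_khat(loo_relooed, ax=axes[1])
axes[1].set_title("after az.reloo")
plt.show()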