MuyGPS

class MuyGPyS.gp.muygps.MuyGPS(kernel, noise=<MuyGPyS.gp.noise.homoscedastic.HomoscedasticNoise object>, scale=<MuyGPyS.gp.hyperparameter.scale.FixedScale object>, _backend_mean_fn=<function _muygps_posterior_mean>, _backend_var_fn=<function _muygps_diagonal_variance>, _backend_fast_mean_fn=<function _muygps_fast_posterior_mean>, _backend_fast_precompute_fn=<function _muygps_fast_posterior_mean_precompute>)[source]

Local Kriging Gaussian Process.

Performs approximate GP inference by locally approximating an observation’s response using its nearest neighbors. Implements the MuyGPs algorithm as articulated in [muyskens2021muygps].

Kernels accept different hyperparameter dictionaries specifying hyperparameter settings. Keys can include val and bounds. bounds must be either a len == 2 iterable container whose elements are scalars in increasing order, or the string "fixed". If bounds == "fixed" (the default behavior), the hyperparameter value will remain fixed during optimization. val must be either a scalar (within the range of the upper and lower bounds, if given) or one of the strings "sample" or "log_sample", which will randomly sample a value within the range given by the bounds.
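
For instance, using the Parameter objects shown in the constructor example below, these settings map onto positional arguments roughly as follows (a sketch; the positional signature of Parameter is assumed from that example):

>>> fixed_param = Parameter(0.2)                      # no bounds, so the value stays fixed
>>> bounded_param = Parameter(0.38, (0.1, 2.5))       # optimized within (0.1, 2.5)
>>> sampled_param = Parameter("sample", (0.1, 2.5))   # initial value drawn from the bounds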

In addition to individual kernel hyperparameters, each MuyGPS object also possesses a noise model, possibly with parameters, and a vector of \(\sigma^2\) scale parameters associated with the posterior variance of each dimension of the response.

Example

>>> from MuyGPyS.gp import MuyGPS
>>> muygps = MuyGPS(
...    kernel=Matern(
...        nu=Parameter(0.38, (0.1, 2.5)),
...        deformation=Isotropy(
...            metric=F2,
...            length_scale=Parameter(0.2),
...        ),
...    ),
...    noise=HomoscedasticNoise(1e-5),
...    scale=AnalyticScale(),
... )

MuyGPyS depends upon linear operations on specially-constructed tensors in order to efficiently estimate GP realizations. One can use MuyGPyS.gp.tensors.pairwise_tensor() to construct pairwise difference tensors and MuyGPyS.gp.tensors.crosswise_tensor() to produce crosswise difference tensors (see their documentation for details), which MuyGPS can then use to construct kernel tensors and cross-covariance matrices, respectively.
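
A rough sketch of that construction, assuming test_features and train_features are feature matrices and batch_indices and batch_nn_indices are integer arrays produced by a nearest neighbors lookup; the argument order of these functions is an assumption and should be checked against their documentation:

>>> from MuyGPyS.gp.tensors import crosswise_tensor, pairwise_tensor
>>> crosswise_diffs = crosswise_tensor(
...     test_features, train_features, batch_indices, batch_nn_indices
... )
>>> pairwise_diffs = pairwise_tensor(train_features, batch_nn_indices)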

We can easily realize kernel tensors using a MuyGPS object’s kernel functor once we have computed a pairwise_diffs tensor and a crosswise_diffs matrix.

Example

>>> K = muygps.kernel(pairwise_diffs)
>>> Kcross = muygps.kernel(crosswise_diffs)

Parameters:
  • kernel (KernelFn) – The kernel to be used.

  • noise (NoiseFn) – A noise model.

  • scale (ScaleFn) – A variance scale parameter.

  • response_count – The number of response dimensions.

fast_coefficients(K, train_nn_targets_fast)[source]

Produces coefficient matrix for the fast posterior mean given in Equation (8) of [dunton2022fast].

To form each row of this matrix, we compute

\[\mathbf{C}_{N^*}(i, :) = (K_{\hat{\theta}} (X_{N^*}, X_{N^*}) + \varepsilon)^{-1} Y(X_{N^*}).\]

Here \(X_{N^*}\) is the union of the nearest neighbor of the ith test point and the nn_count - 1 nearest neighbors of this nearest neighbor, \(K_{\hat{\theta}}\) is the trained kernel functor specified by self.kernel, \(\varepsilon\) is a diagonal noise matrix whose diagonal elements are informed by the self.noise hyperparameter, and \(Y(X_{N^*})\) is the (nn_count,) vector of responses corresponding to the training features indexed by \(N^*\).

Parameters:
  • K (ndarray) – A tensor of shape (batch_count, nn_count, nn_count) containing the (nn_count, nn_count)-shaped kernel matrices corresponding to each of the batch elements.

  • train_nn_targets_fast (ndarray) – A tensor of shape (train_count, nn_count, response_count) whose last dimension lists the vector-valued responses for the nearest neighbors of each training element.

Return type:

ndarray

Returns:

A matrix of shape (train_count, nn_count) whose rows are the precomputed coefficients for fast posterior mean inference.
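
A brief usage sketch, assuming the kernel tensor K and a train_nn_targets_fast tensor have already been assembled as described above:

>>> coeffs = muygps.fast_coefficients(K, train_nn_targets_fast)
>>> # coeffs stores one row of precomputed coefficients per training point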

fast_posterior_mean(Kcross, coeffs_tensor)[source]

Performs fast posterior mean inference using provided cross-covariance and precomputed coefficient matrix.

Assumes that cross-covariance matrix Kcross is already computed and given as an argument.

Returns the predicted response in the form of a posterior mean for each element of the batch of observations, as computed in Equation (9) of [dunton2022fast]. For each test point \(\mathbf{z}\), we compute

\[\widehat{Y} (\mathbf{z} \mid X) = K_\theta (\mathbf{z}, X_{N^*}) \mathbf{C}_{N^*}.\]

Here \(X_{N^*}\) is the union of the nearest neighbor of the queried test point \(\mathbf{z}\) and the nearest neighbors of that training point, \(K_\theta\) is the kernel functor specified by self.kernel, and \(\mathbf{C}_{N^*}\) is the matrix of precomputed coefficients given in Equation (8) of [dunton2022fast].

Parameters:
  • Kcross (ndarray) – A matrix of shape (batch_count, nn_count) whose rows consist of the (1, nn_count)-shaped cross-covariance vectors corresponding to each of the batch elements and their nearest neighbors.

  • coeffs_tensor (ndarray) – A tensor of shape (batch_count, nn_count, response_count) whose rows are the precomputed coefficients returned by fast_coefficients().

Return type:

ndarray

Returns:

A matrix of shape (batch_count, response_count) whose rows are the predicted response for each of the given indices.
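
A minimal sketch of the fast prediction step, reusing coefficients produced by fast_coefficients(); the Kcross argument is assumed to hold the test-to-neighbor cross-covariances described above:

>>> coeffs = muygps.fast_coefficients(K, train_nn_targets_fast)
>>> predictions = muygps.fast_posterior_mean(Kcross, coeffs)
>>> # predictions has shape (batch_count, response_count)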

fixed()[source]

Checks whether all kernel and model parameters are fixed.

This is a convenience utility to determine whether optimization is required.

Return type:

bool

Returns:

Returns True if all parameters are fixed, and False otherwise.
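
For example, one can use this check to skip hyperparameter optimization entirely (a sketch; the optimization call itself is omitted):

>>> if not muygps.fixed():
...     # at least one hyperparameter is free; run optimization here
...     pass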

get_opt_mean_fn()[source]

Return a posterior mean function for use in optimization.

Assumes that optimization parameter literals will be passed as keyword arguments.

Return type:

Callable

Returns:

A function implementing the posterior mean, where noise is either fixed or takes updating values during optimization. The function expects keyword arguments corresponding to current hyperparameter values for unfixed parameters.

get_opt_params()[source]

Return lists of unfixed hyperparameter names, values, and bounds.

Return type:

Tuple[List[str], ndarray, ndarray]

Returns:

names:

A list of unfixed hyperparameter names.

params:

A list of unfixed hyperparameter values.

bounds:

A list of unfixed hyperparameter bound tuples.
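
A short sketch inspecting the free parameters of the example model defined above, where only the Matern nu parameter was declared with bounds:

>>> names, params, bounds = muygps.get_opt_params()
>>> # for the example model above, names should contain only "nu", with the
>>> # corresponding entry of params near 0.38 and bounds of (0.1, 2.5)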

get_opt_var_fn()[source]

Return a posterior variance function for use in optimization.

Assumes that optimization parameter literals will be passed as keyword arguments.

Return type:

Callable

Returns:

A function implementing posterior variance, where noise is either fixed or takes updating values during optimization. The function expects keyword arguments corresponding to current hyperparameter values for unfixed parameters.

posterior_mean(K, Kcross, batch_nn_targets)[source]

Returns the posterior mean from the provided covariance, cross-covariance, and target tensors.

Computes parallelized local solves of systems of linear equations using the last two dimensions of K along with Kcross and batch_nn_targets to predict responses in terms of the posterior mean. Assumes that kernel tensor K and cross-covariance matrix Kcross are already computed and given as arguments.

Returns the predicted response in the form of a posterior mean for each element of the batch of observations, as computed in Equation (3.4) of [muyskens2021muygps]. For each batch element \(\mathbf{x}_i\), we compute

\[\widehat{Y}_{NN} (\mathbf{x}_i \mid X_{N_i}) = K_\theta (\mathbf{x}_i, X_{N_i}) (K_\theta (X_{N_i}, X_{N_i}) + \varepsilon)^{-1} Y(X_{N_i}).\]

Here \(X_{N_i}\) is the set of nearest neighbors of \(\mathbf{x}_i\) in the training data, \(K_\theta\) is the kernel functor specified by self.kernel, \(\varepsilon\) is a diagonal noise matrix whose diagonal elements are informed by the self.noise hyperparameter, and \(Y(X_{N_i})\) is the (nn_count, response_count) matrix of responses of the nearest neighbors given by the final two dimensions of the batch_nn_targets argument.

Parameters:
  • K (ndarray) – A tensor of shape (batch_count, nn_count, nn_count) containing the (nn_count, nn_count)-shaped kernel matrices corresponding to each of the batch elements.

  • Kcross (ndarray) – A matrix of shape (batch_count, nn_count) whose rows consist of the (1, nn_count)-shaped cross-covariance vectors corresponding to each of the batch elements and their nearest neighbors.

  • batch_nn_targets (ndarray) – A tensor of shape (batch_count, nn_count, response_count) whose last dimension lists the vector-valued responses for the nearest neighbors of each batch element.

Return type:

ndarray

Returns:

A matrix of shape (batch_count, response_count) whose rows are the predicted response for each of the given indices.
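
Continuing the kernel example above, a minimal sketch of batched mean prediction, assuming batch_nn_targets has been assembled with shape (batch_count, nn_count, response_count):

>>> K = muygps.kernel(pairwise_diffs)
>>> Kcross = muygps.kernel(crosswise_diffs)
>>> means = muygps.posterior_mean(K, Kcross, batch_nn_targets)
>>> # means has shape (batch_count, response_count)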

posterior_variance(K, Kcross)[source]

Returns the posterior variance from the provided covariance and cross-covariance tensors.

Return the local posterior variances of each prediction, corresponding to the diagonal elements of a covariance matrix. For each batch element \(\mathbf{x}_i\), we compute

\[Var(\widehat{Y}_{NN} (\mathbf{x}_i \mid X_{N_i})) = K_\theta (\mathbf{x}_i, \mathbf{x}_i) - K_\theta (\mathbf{x}_i, X_{N_i}) (K_\theta (X_{N_i}, X_{N_i}) + \varepsilon)^{-1} K_\theta (X_{N_i}, \mathbf{x}_i).\]

Parameters:
  • K (ndarray) – A tensor of shape (batch_count, nn_count, nn_count) containing the (nn_count, nn_count)-shaped kernel matrices corresponding to each of the batch elements.

  • Kcross (ndarray) – A matrix of shape (batch_count, nn_count) whose rows consist of the (1, nn_count)-shaped cross-covariance vectors corresponding to each of the batch elements and their nearest neighbors.

Return type:

ndarray

Returns:

A matrix of shape (batch_count, response_count) consisting of the diagonal elements of the posterior variance for each batch element.
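
A corresponding sketch for the variance, reusing the K and Kcross tensors from the mean example above:

>>> variances = muygps.posterior_variance(K, Kcross)
>>> # variances has shape (batch_count, response_count), one diagonal
>>> # posterior variance per batch element and response dimension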