MuyGPS
- class MuyGPyS.gp.muygps.MuyGPS(kernel, eps=<MuyGPyS.gp.noise.homoscedastic.HomoscedasticNoise object>, response_count=1)[source]
Local Kriging Gaussian Process.
Performs approximate GP inference by locally approximating an observation’s response using its nearest neighbors. Implements the MuyGPs algorithm as articulated in [muyskens2021muygps].
Kernels accept different hyperparameter dictionaries specifying hyperparameter settings. Keys can include val and bounds. bounds must be either a len == 2 iterable container whose elements are scalars in increasing order, or the string "fixed". If bounds == "fixed" (the default behavior), the hyperparameter value will remain fixed during optimization. val must be either a scalar (within the range of the upper and lower bounds, if given) or one of the strings "sample" or "log_sample", which will randomly sample a value within the range given by the bounds.
In addition to individual kernel hyperparameters, each MuyGPS object also possesses a homoscedastic \(\varepsilon\) noise parameter and a vector of \(\sigma^2\) scale parameters associated with the posterior variance of each dimension of the response.
\(\sigma^2\) is the only parameter assumed to be a training target by default, and is treated differently from all other hyperparameters. All other training targets must be manually specified in k_kwargs.
Example
>>> from MuyGPyS.gp import MuyGPS
>>> k_kwargs = {
...     "kern": "rbf",
...     "metric": "F2",
...     "eps": {"val": 1e-5},
...     "nu": {"val": 0.38, "bounds": (0.1, 2.5)},
...     "length_scale": {"val": 7.2},
... }
>>> muygps = MuyGPS(**k_kwargs)
MuyGPyS depends upon linear operations on specially-constructed tensors in order to efficiently estimate GP realizations. One can use (see their documentation for details)
MuyGPyS.gp.tensors.pairwise_tensor()
to construct pairwise difference tensors and
MuyGPyS.gp.tensors.crosswise_tensor()
to produce crosswise difference tensors that MuyGPS can then use to construct kernel tensors and cross-covariance matrices, respectively. We can easily realize kernel tensors using a MuyGPS object's kernel functor once we have computed a pairwise_diffs tensor and a crosswise_diffs matrix.
Example
>>> K = muygps.kernel(pairwise_diffs)
>>> Kcross = muygps.kernel(crosswise_diffs)
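For intuition about the shapes these difference tensors take, here is a minimal NumPy sketch using hypothetical toy data. This is not the MuyGPyS.gp.tensors API (whose signatures may differ); it only illustrates the indexing that produces crosswise and pairwise difference tensors:

```python
import numpy as np

# Hypothetical toy data: 4 training points, 2 test points, 3 features.
rng = np.random.default_rng(0)
train_features = rng.standard_normal((4, 3))
test_features = rng.standard_normal((2, 3))
# Hypothetical nearest-neighbor indices (nn_count = 3) for each test point.
nn_indices = np.array([[0, 1, 2], [1, 2, 3]])

# Crosswise differences: each test point minus each of its neighbors.
# Shape (test_count, nn_count, feature_count).
crosswise_diffs = test_features[:, None, :] - train_features[nn_indices]

# Pairwise differences among each test point's neighbors.
# Shape (test_count, nn_count, nn_count, feature_count).
neighbors = train_features[nn_indices]
pairwise_diffs = neighbors[:, :, None, :] - neighbors[:, None, :, :]
```

Applying a kernel function along the feature axis of these tensors yields the (test_count, nn_count, nn_count) kernel tensor and (test_count, nn_count) cross-covariance matrix used below.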
- Parameters:
kernel – The kernel functor to be used by the model.
eps – The noise model; defaults to a HomoscedasticNoise object.
response_count – The number of response dimensions; defaults to 1.
- apply_new_noise(new_noise)[source]
Updates the homo/heteroscedastic noise parameter(s) of a MuyGPs model. To be used when the MuyGPs model has been trained and needs to be used for prediction, or if multiple batches are needed during training of a heteroscedastic model.
- Parameters:
new_noise (Union[HeteroscedasticNoise, HomoscedasticNoise, NullNoise]) – If homoscedastic, a float used to update the nugget parameter. If heteroscedastic, a matrix of shape (test_count, nn_count) containing the measurement noise corresponding to the nearest neighbors of each test point.
- Returns:
A MuyGPs model with updated noise parameter(s).
- fast_coefficients(K, train_nn_targets_fast)[source]
Produces coefficient matrix for the fast posterior mean given in Equation (8) of [dunton2022fast].
To form each row of this matrix, we compute
\[\mathbf{C}_{N^*}(i, :) = (K_{\hat{\theta}} (X_{N^*}, X_{N^*}) + \varepsilon I_k)^{-1} Y(X_{N^*}).\]
Here \(X_{N^*}\) is the union of the nearest neighbor of the ith test point and the nn_count - 1 nearest neighbors of this nearest neighbor, \(K_{\hat{\theta}}\) is the trained kernel functor specified by self.kernel, \(\varepsilon I_k\) is a diagonal noise matrix whose diagonal is the value of the self.eps hyperparameter, and \(Y(X_{N^*})\) is the (nn_count,) vector of responses corresponding to the training features indexed by \(N^*\).
- Parameters:
K (ndarray) – The full pairwise kernel tensor of shape (train_count, nn_count, nn_count).
train_nn_targets_fast (ndarray) – The nearest neighbor responses of each training point, of shape (train_count, nn_count, response_count).
- Return type:
ndarray
- Returns:
A matrix of shape (train_count, nn_count) whose rows are the precomputed coefficients for fast posterior mean inference.
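This precomputation amounts to one small linear solve per training point. The following hedged NumPy sketch (synthetic inputs; illustrative only, not the library implementation) shows the batched solve that Equation (8) describes:

```python
import numpy as np

# Synthetic stand-ins: 5 training points, nn_count = 4 neighbors each.
rng = np.random.default_rng(1)
train_count, nn_count = 5, 4
A = rng.standard_normal((train_count, nn_count, nn_count))
K = A @ A.transpose(0, 2, 1)  # symmetric PSD kernel block per training point
Y = rng.standard_normal((train_count, nn_count))  # neighbor responses
eps = 1e-5  # homoscedastic noise nugget

# C[i, :] = (K[i] + eps * I)^{-1} Y[i]; np.linalg.solve batches over axis 0.
coeffs = np.linalg.solve(K + eps * np.eye(nn_count), Y[..., None])[..., 0]
```

Each row of coeffs can then be reused for every prediction whose nearest neighbor is the corresponding training point, which is what makes the posterior mean "fast."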
- fast_posterior_mean(Kcross, coeffs_tensor)[source]
Performs fast posterior mean inference using provided cross-covariance and precomputed coefficient matrix.
Assumes that cross-covariance matrix Kcross is already computed and given as an argument.
Returns the predicted response in the form of a posterior mean for each element of the batch of observations, as computed in Equation (9) of [dunton2022fast]. For each test point \(\mathbf{z}\), we compute
\[\widehat{Y} (\mathbf{z} \mid X) = K_\theta (\mathbf{z}, X_{N^*}) \mathbf{C}_{N^*}.\]Here \(X_{N^*}\) is the union of the nearest neighbor of the queried test point \(\mathbf{z}\) and the nearest neighbors of that training point, \(K_\theta\) is the kernel functor specified by self.kernel, and \(\mathbf{C}_{N^*}\) is the matrix of precomputed coefficients given in Equation (8) of [dunton2022fast].
- Parameters:
Kcross (ndarray) – A matrix of shape (batch_count, nn_count) containing the 1 x nn_count -shaped cross-covariance vector corresponding to each of the batch elements.
coeffs_tensor (ndarray) – A tensor of shape (batch_count, nn_count, response_count) whose rows are given by precomputed coefficients.
- Return type:
ndarray
- Returns:
A matrix of shape (batch_count, response_count) whose rows are the predicted response for each of the given indices.
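With the coefficients precomputed, Equation (9) reduces to a batched contraction of each cross-covariance row against its coefficient rows. A minimal NumPy sketch (synthetic inputs; illustrative only):

```python
import numpy as np

# Synthetic stand-ins: 6 batch elements, nn_count = 4, 2 response dims.
rng = np.random.default_rng(2)
batch_count, nn_count, response_count = 6, 4, 2
Kcross = rng.standard_normal((batch_count, nn_count))
coeffs = rng.standard_normal((batch_count, nn_count, response_count))

# Y_hat[i, :] = Kcross[i, :] . coeffs[i, :, :] for each batch element i.
predictions = np.einsum("bn,bnr->br", Kcross, coeffs)
```

No linear solve occurs at prediction time; the cost per test point is a single length-nn_count dot product per response dimension.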
- fixed()[source]
Checks whether all kernel and model parameters are fixed.
This is a convenience utility to determine whether optimization is required.
- Return type:
bool
- Returns:
Returns True if all parameters are fixed, and False otherwise.
- get_opt_mean_fn()[source]
Return a posterior mean function for use in optimization.
This function is designed for use with
MuyGPyS.optimize.chassis.optimize_from_tensors()
and assumes that eps will either be passed via a keyword argument or not at all.
- Return type:
Callable
- Returns:
A function implementing the posterior mean, where eps is either fixed or takes updating values during optimization. The function expects keyword arguments corresponding to current hyperparameter values for unfixed parameters.
- get_opt_params()[source]
Return lists of unfixed hyperparameter names, values, and bounds.
- Return type:
Tuple[List[str], ndarray, ndarray]
- Returns:
- names:
A list of unfixed hyperparameter names.
- params:
A list of unfixed hyperparameter values.
- bounds:
A list of unfixed hyperparameter bound tuples.
- get_opt_var_fn()[source]
Return a posterior variance function for use in optimization.
This function is designed for use with
MuyGPyS.optimize.chassis.optimize_from_tensors()
and assumes that eps will either be passed via a keyword argument or not at all.
- Return type:
Callable
- Returns:
A function implementing posterior variance, where eps is either fixed or takes updating values during optimization. The function expects keyword arguments corresponding to current hyperparameter values for unfixed parameters.
- posterior_mean(K, Kcross, batch_nn_targets)[source]
Returns the posterior mean from the provided covariance, cross-covariance, and target tensors.
Computes parallelized local solves of systems of linear equations using the last two dimensions of K along with Kcross and batch_nn_targets to predict responses in terms of the posterior mean. Assumes that kernel tensor K and cross-covariance matrix Kcross are already computed and given as arguments.
Returns the predicted response in the form of a posterior mean for each element of the batch of observations, as computed in Equation (3.4) of [muyskens2021muygps]. For each batch element \(\mathbf{x}_i\), we compute
\[\widehat{Y}_{NN} (\mathbf{x}_i \mid X_{N_i}) = K_\theta (\mathbf{x}_i, X_{N_i}) (K_\theta (X_{N_i}, X_{N_i}) + \varepsilon I_k)^{-1} Y(X_{N_i}).\]
Here \(X_{N_i}\) is the set of nearest neighbors of \(\mathbf{x}_i\) in the training data, \(K_\theta\) is the kernel functor specified by self.kernel, \(\varepsilon I_k\) is a diagonal homoscedastic noise matrix whose diagonal is the value of the self.eps hyperparameter, and \(Y(X_{N_i})\) is the (nn_count, response_count) matrix of responses of the nearest neighbors given by the last two dimensions of the batch_nn_targets argument.
- Parameters:
K (ndarray) – A tensor of shape (batch_count, nn_count, nn_count) containing the (nn_count, nn_count)-shaped kernel matrices corresponding to each of the batch elements.
Kcross (ndarray) – A matrix of shape (batch_count, nn_count) containing the 1 x nn_count -shaped cross-covariance vector corresponding to each of the batch elements.
batch_nn_targets (ndarray) – A tensor of shape (batch_count, nn_count, response_count) whose last dimension lists the vector-valued responses for the nearest neighbors of each batch element.
- Return type:
ndarray
- Returns:
A matrix of shape (batch_count, response_count) whose rows are the predicted response for each of the given indices.
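Equation (3.4) can be sketched in NumPy as a batched local solve followed by a contraction (synthetic inputs; illustrative only, not the library implementation):

```python
import numpy as np

# Synthetic stand-ins: 6 batch elements, nn_count = 4, 2 response dims.
rng = np.random.default_rng(3)
batch_count, nn_count, response_count = 6, 4, 2
A = rng.standard_normal((batch_count, nn_count, nn_count))
K = A @ A.transpose(0, 2, 1)  # symmetric PSD kernel block per batch element
Kcross = rng.standard_normal((batch_count, nn_count))
batch_nn_targets = rng.standard_normal((batch_count, nn_count, response_count))
eps = 1e-5

# Solve (K[i] + eps * I) w = Y(X_{N_i}) for each batch element, then
# contract each cross-covariance row against the resulting weights.
weights = np.linalg.solve(K + eps * np.eye(nn_count), batch_nn_targets)
means = np.einsum("bn,bnr->br", Kcross, weights)
```

Each batch element requires only an nn_count x nn_count solve, which is what makes the local kriging approximation tractable at scale.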
- posterior_variance(K, Kcross)[source]
Returns the posterior variance from the provided covariance and cross-covariance tensors.
Return the local posterior variances of each prediction, corresponding to the diagonal elements of a covariance matrix. For each batch element \(\mathbf{x}_i\), we compute
\[Var(\widehat{Y}_{NN} (\mathbf{x}_i \mid X_{N_i})) = K_\theta (\mathbf{x}_i, \mathbf{x}_i) - K_\theta (\mathbf{x}_i, X_{N_i}) (K_\theta (X_{N_i}, X_{N_i}) + \varepsilon I_k)^{-1} K_\theta (X_{N_i}, \mathbf{x}_i).\]
- Parameters:
K (ndarray) – A tensor of shape (batch_count, nn_count, nn_count) containing the (nn_count, nn_count)-shaped kernel matrices corresponding to each of the batch elements.
Kcross (ndarray) – A matrix of shape (batch_count, nn_count) containing the 1 x nn_count -shaped cross-covariance vector corresponding to each of the batch elements.
- Return type:
ndarray
- Returns:
A matrix of shape (batch_count, response_count) consisting of the diagonal elements of the posterior variance for each batch element.
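A minimal NumPy sketch of this variance formula, assuming a normalized kernel with \(K_\theta(\mathbf{x}_i, \mathbf{x}_i) = 1\) and a single response dimension (synthetic inputs; illustrative only, not the library implementation):

```python
import numpy as np

# Synthetic stand-ins: 6 batch elements, nn_count = 4.
rng = np.random.default_rng(4)
batch_count, nn_count = 6, 4
A = rng.standard_normal((batch_count, nn_count, nn_count))
K = A @ A.transpose(0, 2, 1) + nn_count * np.eye(nn_count)  # well-conditioned SPD
Kcross = 0.1 * rng.standard_normal((batch_count, nn_count))
eps = 1e-5

# Var[i] = 1 - Kcross[i] (K[i] + eps * I)^{-1} Kcross[i]^T per batch element,
# using the assumed unit prior variance K_theta(x_i, x_i) = 1.
weights = np.linalg.solve(K + eps * np.eye(nn_count), Kcross[..., None])[..., 0]
variances = 1.0 - np.einsum("bn,bn->b", Kcross, weights)
```

Because the subtracted quadratic form is strictly positive for nonzero Kcross, each local variance is strictly less than the assumed prior variance of 1.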