loss
Loss Function Handling
MuyGPyS includes predefined loss functions and convenience functions for indicating them to optimization.
- MuyGPyS.optimize.loss.cross_entropy_fn(predictions, targets)[source]
Cross entropy function.
Computes the cross entropy loss of the predicted versus known response. Transforms predictions to be row-stochastic, and ensures that targets contains no negative elements.
- Parameters:
predictions (ndarray) – The predicted response of shape (batch_count, response_count).
targets (ndarray) – The expected response of shape (batch_count, response_count).
- Return type:
float
- Returns:
The cross-entropy loss of the prediction.
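The described behavior can be sketched in NumPy. The row normalization, target clipping, and epsilon floor below are assumptions inferred from the prose, not MuyGPyS's actual implementation:

```python
import numpy as np

def cross_entropy_sketch(predictions, targets, eps=1e-6):
    # Clamp away zeros and negatives, then normalize each row to sum
    # to one (row-stochastic), as the docstring describes.
    probs = np.clip(predictions, eps, None)
    probs = probs / probs.sum(axis=1, keepdims=True)
    # Ensure targets contain no negative elements.
    targs = np.clip(targets, 0.0, None)
    # Cross entropy between the target and predicted distributions.
    return float(-np.sum(targs * np.log(probs)))
```

With a uniform prediction over two classes and a one-hot target, this yields log(2), the familiar binary cross-entropy value.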
- MuyGPyS.optimize.loss.get_loss_func(loss_method)[source]
Select a loss function based upon a string key.
Currently supports strings "log" or "cross-entropy" for MuyGPyS.optimize.objective.cross_entropy_fn() and "mse" for MuyGPyS.optimize.objective.mse_fn().
- Parameters:
loss_method (str) – The string key indicating the loss function to select.
- Return type:
Callable
- Returns:
The loss function Callable.
- Raises:
NotImplementedError – Unrecognized strings will result in an error.
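The dispatch can be sketched as a string-keyed lookup. `get_loss_func_sketch` and the simplified stand-in losses below are hypothetical, illustrating only the key mapping and the NotImplementedError behavior, not the library's internals:

```python
import numpy as np

def mse_fn(predictions, targets):
    # Simplified stand-in: mean squared error over all entries.
    return float(np.mean((predictions - targets) ** 2))

def cross_entropy_fn(predictions, targets):
    # Simplified stand-in: cross entropy of row-normalized predictions.
    probs = predictions / predictions.sum(axis=1, keepdims=True)
    return float(-np.sum(targets * np.log(probs)))

def get_loss_func_sketch(loss_method):
    # Map string keys to loss callables; unrecognized keys raise
    # NotImplementedError, matching the documented behavior.
    lookup = {
        "log": cross_entropy_fn,
        "cross-entropy": cross_entropy_fn,
        "mse": mse_fn,
    }
    if loss_method not in lookup:
        raise NotImplementedError(f"loss method {loss_method} is not supported")
    return lookup[loss_method]
```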
- MuyGPyS.optimize.loss.lool_fn(predictions, targets, variances, sigma_sq)[source]
Leave-one-out likelihood function.
Computes leave-one-out likelihood (LOOL) loss of the predicted versus known response. Treats multivariate outputs as interchangeable in terms of loss penalty. The function computes
\[l(f(x), y \mid \sigma) = \sum_{i=1}^b \sum_{j=1}^s \frac{(f(x_i)_j - y_{ij})^2}{\sigma_j} + \log \sigma_j\]
- Parameters:
predictions (ndarray) – The predicted response of shape (batch_count, response_count).
targets (ndarray) – The expected response of shape (batch_count, response_count).
variances (ndarray) – The unscaled variance of the predicted responses of shape (batch_count, response_count).
sigma_sq (ndarray) – The sigma_sq variance scaling parameter of shape (response_count,).
- Return type:
float
- Returns:
The LOOL loss of the prediction.
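A sketch of the LOOL computation, assuming sigma_sq rescales the unscaled variances by broadcasting across the response dimension. This mirrors the formula above, not the library's exact code:

```python
import numpy as np

def lool_fn_sketch(predictions, targets, variances, sigma_sq):
    # Scale per-prediction variances by per-response sigma_sq;
    # broadcasts (batch_count, response_count) * (response_count,).
    scaled = variances * sigma_sq
    residuals = predictions - targets
    # Variance-weighted squared error plus a log-variance penalty,
    # summed over both batch elements and response dimensions.
    return float(np.sum(residuals ** 2 / scaled + np.log(scaled)))
```

Dropping the sigma_sq scaling (i.e., passing `scaled = variances`) recovers the unscaled variant computed by lool_fn_unscaled.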
- MuyGPyS.optimize.loss.lool_fn_unscaled(predictions, targets, variances)[source]
Leave-one-out likelihood function.
Computes leave-one-out likelihood (LOOL) loss of the predicted versus known response. Treats multivariate outputs as interchangeable in terms of loss penalty. Unlike lool_fn, does not require sigma_sq as an argument. The function computes
\[l(f(x), y \mid \sigma) = \sum_{i=1}^b \frac{(f(x_i) - y_i)^2}{\sigma_i} + \log \sigma_i\]
- Parameters:
predictions (ndarray) – The predicted response of shape (batch_count, response_count).
targets (ndarray) – The expected response of shape (batch_count, response_count).
variances (ndarray) – The unscaled variance of the predicted responses of shape (batch_count, response_count).
- Return type:
float
- Returns:
The LOOL loss of the prediction.
- MuyGPyS.optimize.loss.looph_fn(predictions, targets, variances, sigma_sq, boundary_scale=1.5)[source]
Variance-regularized pseudo-Huber loss function.
Computes a smooth approximation to the Huber loss function, similar to pseudo_huber_fn(), with the addition of both a variance scaling and an additive logarithmic variance regularization term to avoid exploding the variance. The function computes
\[l(f(x), y \mid \delta) = \sum_{i=1}^b \delta^2 \left ( \sqrt{ 1 + \left ( \frac{y_i - f(x_i)}{\sigma_i \delta} \right )^2 } - 1 \right ) + \log \sigma_i\]
- Parameters:
predictions (ndarray) – The predicted response of shape (batch_count, response_count).
targets (ndarray) – The expected response of shape (batch_count, response_count).
variances (ndarray) – The unscaled variance of the predicted responses of shape (batch_count, response_count).
sigma_sq (ndarray) – The sigma_sq variance scaling parameter of shape (response_count,).
boundary_scale (float) – The boundary value for the residual beyond which the loss becomes approximately linear. Useful values depend on the scale of the response.
- Return type:
float
- Returns:
The sum of pseudo-Huber losses of the predictions.
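A sketch of the formula above. Whether \(\sigma_i\) denotes the scaled variance or its square root is not pinned down by the equation, so this sketch assumes the scaled variance; it illustrates the structure of the loss, not the library's exact code:

```python
import numpy as np

def looph_fn_sketch(predictions, targets, variances, sigma_sq,
                    boundary_scale=1.5):
    d = boundary_scale
    # Assumption: sigma is the sigma_sq-scaled variance.
    sigma = variances * sigma_sq
    residuals = targets - predictions
    # Pseudo-Huber core on variance-scaled residuals...
    huber = d ** 2 * (np.sqrt(1.0 + (residuals / (sigma * d)) ** 2) - 1.0)
    # ...plus the logarithmic regularizer that keeps sigma from exploding.
    return float(np.sum(huber + np.log(sigma)))
```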
- MuyGPyS.optimize.loss.mse_fn(predictions, targets)[source]
Mean squared error function.
Computes mean squared error loss of the predicted versus known response. Treats multivariate outputs as interchangeable in terms of loss penalty. The function computes
\[l(f(x), y) = \frac{1}{b} \sum_{i=1}^b (f(x_i) - y_i)^2\]
- Parameters:
predictions (ndarray) – The predicted response of shape (batch_count, response_count).
targets (ndarray) – The expected response of shape (batch_count, response_count).
- Return type:
float
- Returns:
The mse loss of the prediction.
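The formula can be sketched directly. Note that this sketch averages over both batch elements and response dimensions (treating responses interchangeably, per the prose), which may differ from the library's exact normalization:

```python
import numpy as np

def mse_fn_sketch(predictions, targets):
    # Mean squared residual; np.mean averages over both batch elements
    # and response dimensions.
    return float(np.mean((predictions - targets) ** 2))
```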
- MuyGPyS.optimize.loss.pseudo_huber_fn(predictions, targets, boundary_scale=1.5)[source]
Pseudo-Huber loss function.
Computes a smooth approximation to the Huber loss function, which balances sensitive squared-error loss for relatively small errors and robust-to-outliers absolute loss for larger errors, so that the loss is not overly sensitive to outliers. Uses the form from [wikipedia](https://en.wikipedia.org/wiki/Huber_loss#Pseudo-Huber_loss_function). The function computes
\[l(f(x), y \mid \delta) = \delta^2 \sum_{i=1}^b \left ( \sqrt{ 1 + \left ( \frac{y_i - f(x_i)}{\delta} \right )^2 } - 1 \right )\]
- Parameters:
predictions (ndarray) – The predicted response of shape (batch_count, response_count).
targets (ndarray) – The expected response of shape (batch_count, response_count).
boundary_scale (float) – The boundary value for the residual beyond which the loss becomes approximately linear. Useful values depend on the scale of the response.
- Return type:
float
- Returns:
The sum of pseudo-Huber losses of the predictions.
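A direct sketch of the pseudo-Huber formula above, as an illustration rather than the library's implementation. For residuals much smaller than the boundary scale the loss is approximately quadratic (\(r^2/2\)); for much larger residuals it grows approximately linearly (\(\delta |r|\)):

```python
import numpy as np

def pseudo_huber_sketch(predictions, targets, boundary_scale=1.5):
    d = boundary_scale
    residuals = targets - predictions
    # Quadratic near zero, approximately linear beyond the boundary scale.
    return float(d ** 2 * np.sum(np.sqrt(1.0 + (residuals / d) ** 2) - 1.0))
```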