Logarithmic Score

scoringrules.logs_beta

logs_beta(
    observation: ArrayLike,
    a: ArrayLike,
    b: ArrayLike,
    /,
    lower: ArrayLike = 0.0,
    upper: ArrayLike = 1.0,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the beta distribution.

This score is equivalent to the negative log likelihood of the beta distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
a ArrayLike

First shape parameter of the forecast beta distribution.

required
b ArrayLike

Second shape parameter of the forecast beta distribution.

required
lower ArrayLike

Lower bound of the forecast beta distribution.

0.0
upper ArrayLike

Upper bound of the forecast beta distribution.

1.0
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score ArrayLike

The LS between Beta(a, b) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_beta(0.3, 0.7, 1.1)
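
As a quick check of the negative log likelihood equivalence, the score for the default support [0, 1] can be compared against an external beta log density. The use of scipy.stats below is purely illustrative and not part of scoringrules.

import scoringrules as sr
from scipy import stats

# For the default bounds lower=0.0, upper=1.0, the score should match the
# negative log density of a Beta(a, b) distribution.
obs, a, b = 0.3, 0.7, 1.1
ls = sr.logs_beta(obs, a, b)
nll = -stats.beta(a, b).logpdf(obs)
# ls and nll are expected to agree up to floating-point error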

scoringrules.logs_binomial

logs_binomial(
    observation: ArrayLike,
    n: ArrayLike,
    prob: ArrayLike,
    /,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the binomial distribution.

This score is equivalent to the negative log likelihood of the binomial distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
n ArrayLike

Size parameter of the forecast binomial distribution as an integer or array of integers.

required
prob ArrayLike

Probability parameter of the forecast binomial distribution as a float or array of floats.

required
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score ArrayLike

The LS between Binomial(n, prob) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_binomial(4, 10, 0.5)

scoringrules.logs_ensemble

logs_ensemble(
    observations: ArrayLike,
    forecasts: Array,
    /,
    axis: int = -1,
    *,
    bw: ArrayLike = None,
    backend: Backend = None,
) -> Array

Estimate the Logarithmic score for a finite ensemble via kernel density estimation.

Gaussian kernel density estimation is used to convert the finite ensemble to a mixture of normal distributions, with one component centred at each ensemble member and scale equal to the bandwidth parameter 'bw'.

The log score for the ensemble forecast is then the log score for the mixture of normal distributions.

Parameters:

Name Type Description Default
observations ArrayLike

The observed values.

required
forecasts Array

The predicted forecast ensemble, where the ensemble dimension is by default represented by the last axis.

required
axis int

The axis corresponding to the ensemble. Default is the last axis.

-1
bw ArrayLike

The bandwidth parameter for each forecast ensemble. If not given, estimated using Silverman's rule of thumb.

None
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score Array

The LS between the forecast ensemble and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_ensemble(obs, pred)
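
Given the kernel density construction described above, an ensemble score computed with an explicit bandwidth should coincide with the log score of the corresponding normal mixture. The sketch below illustrates this; the synthetic obs/pred arrays and the assumption that logs_mixnorm uses equal weights when w is omitted are mine, not taken from the documentation.

import numpy as np
import scoringrules as sr

rng = np.random.default_rng(42)
obs = rng.normal(size=3)
pred = rng.normal(size=(3, 20))  # ensemble members along the last axis

bw = 0.5
ls_ens = sr.logs_ensemble(obs, pred, bw=bw)
# Equivalent mixture: one normal component per ensemble member, scale bw
ls_mix = sr.logs_mixnorm(obs, pred, np.full_like(pred, bw))
# ls_ens and ls_mix are expected to agree up to floating-point error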

scoringrules.logs_exponential

logs_exponential(
    observation: ArrayLike,
    rate: ArrayLike,
    /,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the exponential distribution.

This score is equivalent to the negative log likelihood of the exponential distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
rate ArrayLike

Rate parameter of the forecast exponential distribution.

required
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score ArrayLike

The LS between Exp(rate) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_exponential(0.8, 3.0)

scoringrules.logs_exponential2

logs_exponential2(
    observation: ArrayLike,
    /,
    location: ArrayLike = 0.0,
    scale: ArrayLike = 1.0,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the exponential distribution with location and scale parameters.

This score is equivalent to the negative log likelihood of the exponential distribution.
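
The parameterisation is not spelled out above; assuming the usual location-scale form of the exponential distribution, the forecast density is

\( f(y) = \frac{1}{\text{scale}} \exp\!\left( -\frac{y - \text{location}}{\text{scale}} \right) \) for \( y \ge \text{location} \),

and the score is its negative logarithm.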

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
location ArrayLike

Location parameter of the forecast exponential distribution.

0.0
scale ArrayLike

Scale parameter of the forecast exponential distribution.

1.0
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score ArrayLike

The LS between obs and Exp2(location, scale).

Examples:

>>> import scoringrules as sr
>>> sr.logs_exponential2(0.2, 0.0, 1.0)

scoringrules.logs_2pexponential

logs_2pexponential(
    observation: ArrayLike,
    scale1: ArrayLike,
    scale2: ArrayLike,
    location: ArrayLike,
    /,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the two-piece exponential distribution.

This score is equivalent to the negative log likelihood of the two-piece exponential distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
scale1 ArrayLike

First scale parameter of the forecast two-piece exponential distribution.

required
scale2 ArrayLike

Second scale parameter of the forecast two-piece exponential distribution.

required
location ArrayLike

Location parameter of the forecast two-piece exponential distribution.

required
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score ArrayLike

The LS between 2pExp(scale1, scale2, location) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_2pexponential(0.8, 3.0, 1.4, 0.0)

scoringrules.logs_gamma

logs_gamma(
    observation: ArrayLike,
    shape: ArrayLike,
    /,
    rate: ArrayLike | None = None,
    *,
    scale: ArrayLike | None = None,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the gamma distribution.

This score is equivalent to the negative log likelihood of the gamma distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
shape ArrayLike

Shape parameter of the forecast gamma distribution.

required
rate ArrayLike | None

Rate parameter of the forecast gamma distribution.

None
scale ArrayLike | None

Scale parameter of the forecast gamma distribution, where scale = 1 / rate.

None

Returns:

Name Type Description
score ArrayLike

The LS between obs and Gamma(shape, rate).

Examples:

>>> import scoringrules as sr
>>> sr.logs_gamma(0.2, 1.1, 0.1)

Raises:

Type Description
ValueError

If both rate and scale are provided, or if neither is provided.
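
A brief sketch of the two parameterisations (the numbers are arbitrary): since scale = 1 / rate, both calls should give the same score, and supplying both at once should raise the ValueError documented above.

import scoringrules as sr

obs, shape, rate = 0.2, 1.1, 0.1
via_rate = sr.logs_gamma(obs, shape, rate)
via_scale = sr.logs_gamma(obs, shape, scale=1.0 / rate)
# via_rate and via_scale are expected to agree

# sr.logs_gamma(obs, shape, rate, scale=1.0 / rate)  # would raise ValueError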

scoringrules.logs_gev

logs_gev(
    observation: ArrayLike,
    shape: ArrayLike,
    /,
    location: ArrayLike = 0.0,
    scale: ArrayLike = 1.0,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the generalised extreme value (GEV) distribution.

This score is equivalent to the negative log likelihood of the GEV distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
shape ArrayLike

Shape parameter of the forecast GEV distribution.

required
location ArrayLike

Location parameter of the forecast GEV distribution.

0.0
scale ArrayLike

Scale parameter of the forecast GEV distribution.

1.0
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score ArrayLike

The LS between obs and GEV(shape, location, scale).

Examples:

>>> import scoringrules as sr
>>> sr.logs_gev(0.3, 0.1)

scoringrules.logs_gpd

logs_gpd(
    observation: ArrayLike,
    shape: ArrayLike,
    /,
    location: ArrayLike = 0.0,
    scale: ArrayLike = 1.0,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the generalised Pareto distribution (GPD).

This score is equivalent to the negative log likelihood of the GPD.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
shape ArrayLike

Shape parameter of the forecast GPD distribution.

required
location ArrayLike

Location parameter of the forecast GPD distribution.

0.0
scale ArrayLike

Scale parameter of the forecast GPD distribution.

1.0
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score ArrayLike

The LS between obs and GPD(shape, location, scale).

Examples:

>>> import scoringrules as sr
>>> sr.logs_gpd(0.3, 0.9)

scoringrules.logs_hypergeometric

logs_hypergeometric(
    observation: ArrayLike,
    m: ArrayLike,
    n: ArrayLike,
    k: ArrayLike,
    /,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the hypergeometric distribution.

This score is equivalent to the negative log likelihood of the hypergeometric distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
m ArrayLike

Number of success states in the population.

required
n ArrayLike

Number of failure states in the population.

required
k ArrayLike

Number of draws, without replacement. Must be in 0, 1, ..., m + n.

required
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score ArrayLike

The LS between obs and Hypergeometric(m, n, k).

Examples:

>>> import scoringrules as sr
>>> sr.logs_hypergeometric(5, 7, 13, 12)
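
For orientation, the (m, n, k) convention above can be mapped to scipy.stats.hypergeom, whose parameters are the population size M = m + n, the number of success states n = m, and the number of draws N = k. scipy is used only for illustration here.

import scoringrules as sr
from scipy import stats

obs, m, n, k = 5, 7, 13, 12
ls = sr.logs_hypergeometric(obs, m, n, k)
nll = -stats.hypergeom(M=m + n, n=m, N=k).logpmf(obs)
# ls and nll are expected to agree up to floating-point error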

scoringrules.logs_laplace

logs_laplace(
    observation: ArrayLike,
    location: ArrayLike = 0.0,
    scale: ArrayLike = 1.0,
    /,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the Laplace distribution.

This score is equivalent to the negative log likelihood of the Laplace distribution.

Parameters:

Name Type Description Default
observation ArrayLike

Observed values.

required
location ArrayLike

Location parameter of the forecast Laplace distribution.

0.0
scale ArrayLike

Scale parameter of the forecast Laplace distribution.

1.0
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score ArrayLike

The LS between obs and Laplace(location, scale).

scoringrules.logs_loglaplace

logs_loglaplace(
    observation: ArrayLike,
    locationlog: ArrayLike,
    scalelog: ArrayLike,
    *,
    backend: Backend = None
) -> ArrayLike

Compute the logarithmic score (LS) for the log-Laplace distribution.

This score is equivalent to the negative log likelihood of the log-Laplace distribution.

Parameters:

Name Type Description Default
observation ArrayLike

Observed values.

required
locationlog ArrayLike

Location parameter of the forecast log-Laplace distribution.

required
scalelog ArrayLike

Scale parameter of the forecast log-Laplace distribution.

required

Returns:

Name Type Description
score ArrayLike

The LS between obs and Loglaplace(locationlog, scalelog).

Examples:

>>> import scoringrules as sr
>>> sr.logs_loglaplace(3.0, 0.1, 0.9)

scoringrules.logs_logistic

logs_logistic(
    observation: ArrayLike,
    mu: ArrayLike,
    sigma: ArrayLike,
    /,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the logistic distribution.

This score is equivalent to the negative log likelihood of the logistic distribution.

Parameters:

Name Type Description Default
observation ArrayLike

Observed values.

required
mu ArrayLike

Location parameter of the forecast logistic distribution.

required
sigma ArrayLike

Scale parameter of the forecast logistic distribution.

required

Returns:

Name Type Description
score ArrayLike

The LS for the Logistic(mu, sigma) forecasts given the observations.

Examples:

>>> import scoringrules as sr
>>> sr.logs_logistic(0.0, 0.4, 0.1)

scoringrules.logs_loglogistic

logs_loglogistic(
    observation: ArrayLike,
    mulog: ArrayLike,
    sigmalog: ArrayLike,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the log-logistic distribution.

This score is equivalent to the negative log likelihood of the log-logistic distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
mulog ArrayLike

Location parameter of the log-logistic distribution.

required
sigmalog ArrayLike

Scale parameter of the log-logistic distribution.

required
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score ArrayLike

The LS between obs and Loglogis(mulog, sigmalog).

Examples:

>>> import scoringrules as sr
>>> sr.logs_loglogistic(3.0, 0.1, 0.9)

scoringrules.logs_lognormal

logs_lognormal(
    observation: ArrayLike,
    mulog: ArrayLike,
    sigmalog: ArrayLike,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the log-normal distribution.

This score is equivalent to the negative log likelihood of the log-normal distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
mulog ArrayLike

Mean of the underlying normal distribution.

required
sigmalog ArrayLike

Standard deviation of the underlying normal distribution.

required

Returns:

Name Type Description
score ArrayLike

The LS between Lognormal(mulog, sigmalog) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_lognormal(0.0, 0.4, 0.1)
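
Since mulog and sigmalog describe the underlying normal distribution of log(Y), the score can be cross-checked against an external log-normal density. The scipy parameterisation (s = sigmalog, scale = exp(mulog)) and the observation value below are used only for illustration.

import numpy as np
import scoringrules as sr
from scipy import stats

obs, mulog, sigmalog = 0.5, 0.0, 0.4
ls = sr.logs_lognormal(obs, mulog, sigmalog)
nll = -stats.lognorm(s=sigmalog, scale=np.exp(mulog)).logpdf(obs)
# ls and nll are expected to agree up to floating-point error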

scoringrules.logs_mixnorm

logs_mixnorm(
    observation: ArrayLike,
    m: ArrayLike,
    s: ArrayLike,
    /,
    w: ArrayLike = None,
    axis: ArrayLike = -1,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score for a mixture of normal distributions.

This score is equivalent to the negative log likelihood of the normal mixture distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
m ArrayLike

Means of the component normal distributions.

required
s ArrayLike

Standard deviations of the component normal distributions.

required
w ArrayLike

Non-negative weights assigned to each component.

None
axis ArrayLike

The axis corresponding to the mixture components. Default is the last axis.

-1
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score ArrayLike

The LS between MixNormal(m, s) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_mixnorm(0.0, [0.1, -0.3, 1.0], [0.4, 2.1, 0.7], [0.1, 0.2, 0.7])
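
The default for w is not described beyond being optional; a plausible reading is that omitting w corresponds to equal component weights, which the sketch below would confirm (this is an assumption, not documented behaviour).

import scoringrules as sr

m = [0.1, -0.3, 1.0]
s = [0.4, 2.1, 0.7]
default_w = sr.logs_mixnorm(0.0, m, s)
equal_w = sr.logs_mixnorm(0.0, m, s, [1 / 3, 1 / 3, 1 / 3])
# default_w and equal_w should agree if the equal-weight assumption holds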

scoringrules.logs_negbinom

logs_negbinom(
    observation: ArrayLike,
    n: ArrayLike,
    /,
    prob: ArrayLike | None = None,
    *,
    mu: ArrayLike | None = None,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the negative binomial distribution.

This score is equivalent to the negative log likelihood of the negative binomial distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
n ArrayLike

Size parameter of the forecast negative binomial distribution.

required
prob ArrayLike | None

Probability parameter of the forecast negative binomial distribution.

None
mu ArrayLike | None

Mean of the forecast negative binomial distribution.

None

Returns:

Name Type Description
score ArrayLike

The LS between NegBinomial(n, prob) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_negbinom(2, 5, 0.5)

Raises:

Type Description
ValueError

If both prob and mu are provided, or if neither is provided.
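
A sketch of the two parameterisations, assuming the usual size/probability convention in which the mean is mu = n * (1 - prob) / prob (this relationship is an assumption, not stated above):

import scoringrules as sr

obs, n, prob = 2, 5, 0.5
via_prob = sr.logs_negbinom(obs, n, prob)
via_mu = sr.logs_negbinom(obs, n, mu=n * (1 - prob) / prob)
# via_prob and via_mu should agree if the mean relationship above holds
# Supplying both prob and mu (or neither) raises the documented ValueError.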

scoringrules.logs_normal

logs_normal(
    observation: ArrayLike,
    mu: ArrayLike,
    sigma: ArrayLike,
    /,
    *,
    negative: bool = True,
    backend: Backend = None,
) -> Array

Compute the logarithmic score (LS) for the normal distribution.

This score is equivalent to the negative log likelihood of the normal distribution when negative = True.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
mu ArrayLike

Mean of the forecast normal distribution.

required
sigma ArrayLike

Standard deviation of the forecast normal distribution.

required
negative bool

Whether to return the negative of the log likelihood.

True
backend Backend

The backend used for computations.

None

Returns:

Name Type Description
score Array

The LS between Normal(mu, sigma) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_normal(0.0, 0.4, 0.1)
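
The score has a simple closed form that can be checked directly; the sign-flip behaviour of negative=False shown in the final comment is inferred from the docstring rather than stated explicitly.

import math
import scoringrules as sr

obs, mu, sigma = 0.0, 0.4, 0.1
# Closed-form negative log density of Normal(mu, sigma)
nll = 0.5 * math.log(2 * math.pi * sigma**2) + (obs - mu) ** 2 / (2 * sigma**2)
ls = sr.logs_normal(obs, mu, sigma)
# ls and nll are expected to agree up to floating-point error
# sr.logs_normal(obs, mu, sigma, negative=False) is expected to equal -nll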

scoringrules.logs_2pnormal

logs_2pnormal(
    observation: ArrayLike,
    scale1: ArrayLike,
    scale2: ArrayLike,
    location: ArrayLike,
    /,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the two-piece normal distribution.

This score is equivalent to the negative log likelihood of the two-piece normal distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
scale1 ArrayLike

Scale parameter of the lower half of the forecast two-piece normal distribution.

required
scale2 ArrayLike

Scale parameter of the upper half of the forecast two-piece normal distribution.

required
location ArrayLike

Location parameter of the forecast two-piece normal distribution.

required
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score ArrayLike

The LS between 2pNormal(scale1, scale2, location) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_2pnormal(0.0, 0.4, 2.0, 0.1)

scoringrules.logs_poisson

logs_poisson(
    observation: ArrayLike,
    mean: ArrayLike,
    /,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the Poisson distribution.

This score is equivalent to the negative log likelihood of the Poisson distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
mean ArrayLike

Mean parameter of the forecast Poisson distribution.

required
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score ArrayLike

The LS between Pois(mean) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_poisson(1, 2)

scoringrules.logs_t

logs_t(
    observation: ArrayLike,
    df: ArrayLike,
    /,
    location: ArrayLike = 0.0,
    scale: ArrayLike = 1.0,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the Student's t distribution.

This score is equivalent to the negative log likelihood of the t distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
df ArrayLike

Degrees of freedom parameter of the forecast t distribution.

required
location ArrayLike

Location parameter of the forecast t distribution.

0.0
scale ArrayLike

Scale parameter of the forecast t distribution.

1.0

Returns:

Name Type Description
score ArrayLike

The LS between t(df, location, scale) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_t(0.0, 0.1, 0.4, 0.1)

scoringrules.logs_tlogistic

logs_tlogistic(
    observation: ArrayLike,
    location: ArrayLike,
    scale: ArrayLike,
    /,
    lower: ArrayLike = float("-inf"),
    upper: ArrayLike = float("inf"),
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the truncated logistic distribution.

This score is equivalent to the negative log likelihood of the truncated logistic distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
location ArrayLike

Location parameter of the forecast distribution.

required
scale ArrayLike

Scale parameter of the forecast distribution.

required
lower ArrayLike

Lower boundary of the truncated forecast distribution.

float('-inf')
upper ArrayLike

Upper boundary of the truncated forecast distribution.

float('inf')

Returns:

Name Type Description
score ArrayLike

The LS between tLogistic(location, scale, lower, upper) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_tlogistic(0.0, 0.1, 0.4, -1.0, 1.0)

scoringrules.logs_tnormal

logs_tnormal(
    observation: ArrayLike,
    location: ArrayLike,
    scale: ArrayLike,
    /,
    lower: ArrayLike = float("-inf"),
    upper: ArrayLike = float("inf"),
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the truncated normal distribution.

This score is equivalent to the negative log likelihood of the truncated normal distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
location ArrayLike

Location parameter of the forecast distribution.

required
scale ArrayLike

Scale parameter of the forecast distribution.

required
lower ArrayLike

Lower boundary of the truncated forecast distribution.

float('-inf')
upper ArrayLike

Upper boundary of the truncated forecast distribution.

float('inf')

Returns:

Name Type Description
score ArrayLike

The LS between tNormal(location, scale, lower, upper) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_tnormal(0.0, 0.1, 0.4, -1.0, 1.0)
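
For orientation, the truncated normal score can be compared against scipy.stats.truncnorm, which takes the truncation bounds in standardised units. scipy is used only for illustration here.

import scoringrules as sr
from scipy import stats

obs, loc, scale, lower, upper = 0.0, 0.1, 0.4, -1.0, 1.0
a, b = (lower - loc) / scale, (upper - loc) / scale  # standardised bounds
ls = sr.logs_tnormal(obs, loc, scale, lower, upper)
nll = -stats.truncnorm(a, b, loc=loc, scale=scale).logpdf(obs)
# ls and nll are expected to agree up to floating-point error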

scoringrules.logs_tt

logs_tt(
    observation: ArrayLike,
    df: ArrayLike,
    /,
    location: ArrayLike = 0.0,
    scale: ArrayLike = 1.0,
    lower: ArrayLike = float("-inf"),
    upper: ArrayLike = float("inf"),
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the truncated Student's t distribution.

This score is equivalent to the negative log likelihood of the truncated t distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
df ArrayLike

Degrees of freedom parameter of the forecast distribution.

required
location ArrayLike

Location parameter of the forecast distribution.

0.0
scale ArrayLike

Scale parameter of the forecast distribution.

1.0
lower ArrayLike

Lower boundary of the truncated forecast distribution.

float('-inf')
upper ArrayLike

Upper boundary of the truncated forecast distribution.

float('inf')

Returns:

Name Type Description
score ArrayLike

The LS between tt(df, location, scale, lower, upper) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_tt(0.0, 2.0, 0.1, 0.4, -1.0, 1.0)

scoringrules.logs_uniform

logs_uniform(
    observation: ArrayLike,
    min: ArrayLike,
    max: ArrayLike,
    /,
    *,
    backend: Backend = None,
) -> ArrayLike

Compute the logarithmic score (LS) for the uniform distribution.

This score is equivalent to the negative log likelihood of the uniform distribution.

Parameters:

Name Type Description Default
observation ArrayLike

The observed values.

required
min ArrayLike

Lower bound of the forecast uniform distribution.

required
max ArrayLike

Upper bound of the forecast uniform distribution.

required

Returns:

Name Type Description
score ArrayLike

The LS between U(min, max) and obs.

Examples:

>>> import scoringrules as sr
>>> sr.logs_uniform(0.4, 0.0, 1.0)

Conditional and Censored Likelihood Score

scoringrules.clogs_ensemble

clogs_ensemble(
    observations: ArrayLike,
    forecasts: Array,
    /,
    a: ArrayLike = float("-inf"),
    b: ArrayLike = float("inf"),
    axis: int = -1,
    *,
    bw: ArrayLike = None,
    cens: bool = True,
    backend: Backend = None,
) -> Array

Estimate the conditional and censored likelihood score for an ensemble forecast.

The conditional and censored likelihood scores were introduced by Diks et al. (2011).

The weight function is an indicator function of the form \(w(z) = 1\{a < z < b\}\).

The ensemble forecast is converted to a mixture of normal distributions using Gaussian kernel density estimation. The score is then calculated for this smoothed distribution.
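
Written out, and up to the smaller-is-better sign convention used throughout this page (this restatement of Diks et al. (2011) is for orientation only; the implementation's exact conventions are not reproduced here), the two variants are

\( \mathrm{CoLS}(F, y) = -\, w(y) \log \dfrac{f(y)}{\int w(z) f(z) \, dz} \)

\( \mathrm{CeLS}(F, y) = -\left[ w(y) \log f(y) + \big(1 - w(y)\big) \log \left( 1 - \int w(z) f(z) \, dz \right) \right] \)

where \(f\) is the density of the kernel density estimate and \(w\) is the indicator weight function above.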

Parameters:

Name Type Description Default
observations ArrayLike

The observed values.

required
forecasts Array

The predicted forecast ensemble, where the ensemble dimension is by default represented by the last axis.

required
a ArrayLike

The lower bound in the weight function.

float('-inf')
b ArrayLike

The upper bound in the weight function.

float('inf')
axis int

The axis corresponding to the ensemble. Default is the last axis.

-1
bw ArrayLike

The bandwidth parameter for each forecast ensemble. If not given, estimated using Silverman's rule of thumb.

None
cens Boolean

Boolean specifying whether to return the conditional ('cens = False') or the censored likelihood score ('cens = True').

True
backend Backend

The name of the backend used for computations. Defaults to 'numba' if available, else 'numpy'.

None

Returns:

Name Type Description
score Array

The CoLS or CeLS between the forecast ensemble and obs for the chosen weight parameters.

Examples:

>>> import scoringrules as sr
>>> sr.clogs_ensemble(obs, pred, -1.0, 1.0)