bayesml.bernoulli package#

_images/bernoulli_example.png

Module contents#

The Bernoulli distribution with the beta prior distribution.

The stochastic data generative model is as follows:

  • \(x \in \{ 0, 1\}\): a data point

  • \(\theta \in [0, 1]\): a parameter

\[p(x | \theta) = \mathrm{Bern}(x|\theta) = \theta^x (1-\theta)^{1-x}.\]
\[\begin{split}\mathbb{E}[x | \theta] &= \theta, \\ \mathbb{V}[x | \theta] &= \theta (1 - \theta).\end{split}\]
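As a quick sanity check, the pmf and moments above can be evaluated directly in plain Python (a minimal sketch, independent of the package itself):

```python
# Bernoulli pmf p(x | theta) = theta^x * (1 - theta)^(1 - x), for x in {0, 1}
def bern_pmf(x, theta):
    return theta**x * (1 - theta)**(1 - x)

theta = 0.3
mean = theta                  # E[x | theta]
var = theta * (1 - theta)     # V[x | theta]
print(bern_pmf(1, theta), bern_pmf(0, theta))  # 0.3 0.7
```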

The prior distribution is as follows:

  • \(\alpha_0 \in \mathbb{R}_{>0}\): a hyperparameter

  • \(\beta_0 \in \mathbb{R}_{>0}\): a hyperparameter

  • \(B(\cdot,\cdot): \mathbb{R}_{>0} \times \mathbb{R}_{>0} \to \mathbb{R}_{>0}\): the Beta function

\[p(\theta) = \mathrm{Beta}(\theta|\alpha_0,\beta_0) = \frac{1}{B(\alpha_0, \beta_0)} \theta^{\alpha_0 - 1} (1-\theta)^{\beta_0 - 1}.\]
\[\begin{split}\mathbb{E}[\theta] &= \frac{\alpha_0}{\alpha_0 + \beta_0}, \\ \mathbb{V}[\theta] &= \frac{\alpha_0 \beta_0}{(\alpha_0 + \beta_0)^2 (\alpha_0 + \beta_0 + 1)}.\end{split}\]
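For the defaults \(\alpha_0 = \beta_0 = 0.5\) used throughout this package, the prior moments above work out as follows (a hand-check in plain Python):

```python
# Prior mean and variance of a Beta(alpha_0, beta_0) distribution.
a0, b0 = 0.5, 0.5  # the package defaults
prior_mean = a0 / (a0 + b0)
prior_var = a0 * b0 / ((a0 + b0)**2 * (a0 + b0 + 1))
print(prior_mean, prior_var)  # 0.5 0.125
```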

The posterior distribution is as follows:

  • \(x^n = (x_1, x_2, \dots , x_n) \in \{ 0, 1\}^n\): given data

  • \(\alpha_n \in \mathbb{R}_{>0}\): a hyperparameter

  • \(\beta_n \in \mathbb{R}_{>0}\): a hyperparameter

\[p(\theta | x^n) = \mathrm{Beta}(\theta|\alpha_n,\beta_n) = \frac{1}{B(\alpha_n, \beta_n)} \theta^{\alpha_n - 1} (1-\theta)^{\beta_n - 1},\]
\[\begin{split}\mathbb{E}[\theta | x^n] &= \frac{\alpha_n}{\alpha_n + \beta_n}, \\ \mathbb{V}[\theta | x^n] &= \frac{\alpha_n \beta_n}{(\alpha_n + \beta_n)^2 (\alpha_n + \beta_n + 1)},\end{split}\]

where the updating rule of the hyperparameters is

\[\begin{split}\alpha_n = \alpha_0 + \sum_{i=1}^n I \{ x_i = 1 \},\\ \beta_n = \beta_0 + \sum_{i=1}^n I \{ x_i = 0 \}.\end{split}\]
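The updating rule reduces to counting the ones and zeros in the data. A minimal sketch of this update (the helper name is illustrative, not part of the package API):

```python
# Conjugate update: alpha_n = alpha_0 + #{x_i = 1}, beta_n = beta_0 + #{x_i = 0}
def update_hyperparams(a0, b0, xs):
    n_ones = sum(1 for x in xs if x == 1)
    return a0 + n_ones, b0 + (len(xs) - n_ones)

an, bn = update_hyperparams(0.5, 0.5, [1, 0, 1, 1, 0])
print(an, bn)  # 3.5 2.5
```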

The predictive distribution is as follows:

  • \(x_{n+1} \in \{ 0, 1\}\): a new data point

  • \(\alpha_\mathrm{p} \in \mathbb{R}_{>0}\): a parameter

  • \(\beta_\mathrm{p} \in \mathbb{R}_{>0}\): a parameter

  • \(\theta_\mathrm{p} \in [0,1]\): a parameter

\[p(x_{n+1} | x^n) = \mathrm{Bern}(x_{n+1}|\theta_\mathrm{p}) =\theta_\mathrm{p}^{x_{n+1}}(1-\theta_\mathrm{p})^{1-x_{n+1}},\]
\[\begin{split}\mathbb{E}[x_{n+1} | x^n] &= \theta_\mathrm{p}, \\ \mathbb{V}[x_{n+1} | x^n] &= \theta_\mathrm{p} (1 - \theta_\mathrm{p}),\end{split}\]

where the parameters are obtained from the hyperparameters of the posterior distribution as follows.

\[\theta_\mathrm{p} = \frac{\alpha_n}{\alpha_n + \beta_n}.\]
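In other words, the predictive probability of observing a one is simply the posterior mean. For example (an illustrative helper, not the package API):

```python
# theta_p = alpha_n / (alpha_n + beta_n): the posterior mean doubles as
# the predictive probability P(x_{n+1} = 1 | x^n).
def pred_theta(an, bn):
    return an / (an + bn)

# After observing three ones and two zeros under the default 0.5/0.5 prior:
print(pred_theta(3.5, 2.5))  # ~0.583
```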
class bayesml.bernoulli.GenModel(theta=0.5, h_alpha=0.5, h_beta=0.5, seed=None)#

Bases: Generative

The stochastic data generative model and the prior distribution.

Parameters:
theta : float, optional

a real number in \([0, 1]\), by default 0.5.

h_alpha : float, optional

a positive real number, by default 0.5.

h_beta : float, optional

a positive real number, by default 0.5.

seed : {None, int}, optional

A seed to initialize numpy.random.default_rng(), by default None.

Methods

gen_params()

Generate the parameter from the prior distribution.

gen_sample(sample_size)

Generate a sample from the stochastic data generative model.

get_constants()

Get constants of GenModel.

get_h_params()

Get the hyperparameters of the prior distribution.

get_params()

Get the parameter of the stochastic data generative model.

load_h_params(filename)

Load the hyperparameters to h_params.

load_params(filename)

Load the parameters saved by save_params.

save_h_params(filename)

Save the hyperparameters using python pickle module.

save_params(filename)

Save the parameters using python pickle module.

save_sample(filename, sample_size)

Save the generated sample as NumPy .npz format.

set_h_params([h_alpha, h_beta])

Set the hyperparameters of the prior distribution.

set_params([theta])

Set the parameter of the stochastic data generative model.

visualize_model([sample_size, sample_num])

Visualize the stochastic data generative model and generated samples.

get_constants()#

Get constants of GenModel.

This model does not have any constants. Therefore, this function returns an empty dict {}.

Returns:
constants : an empty dict
set_h_params(h_alpha=None, h_beta=None)#

Set the hyperparameters of the prior distribution.

Parameters:
h_alpha : float, optional

a positive real number, by default None.

h_beta : float, optional

a positive real number, by default None.

get_h_params()#

Get the hyperparameters of the prior distribution.

Returns:
h_params : dict of {str: float}
  • "h_alpha" : The value of self.h_alpha

  • "h_beta" : The value of self.h_beta

gen_params()#

Generate the parameter from the prior distribution.

The generated value is set to self.theta.

set_params(theta=None)#

Set the parameter of the stochastic data generative model.

Parameters:
theta : float, optional

a real number \(\theta \in [0, 1]\), by default None.

get_params()#

Get the parameter of the stochastic data generative model.

Returns:
params : dict of {str: float}
  • "theta" : The value of self.theta.

gen_sample(sample_size)#

Generate a sample from the stochastic data generative model.

Parameters:
sample_size : int

A positive integer.

Returns:
x : numpy.ndarray

A one-dimensional array whose size is sample_size and whose elements are 0 or 1.

save_sample(filename, sample_size)#

Save the generated sample as NumPy .npz format.

It is saved as a NpzFile with keyword: “x”.

Parameters:
filename : str

The filename to which the sample is saved. .npz will be appended if it isn’t there.

sample_size : int

A positive integer.

visualize_model(sample_size=20, sample_num=5)#

Visualize the stochastic data generative model and generated samples.

Parameters:
sample_size : int, optional

A positive integer, by default 20.

sample_num : int, optional

A positive integer, by default 5.

Examples

>>> from bayesml import bernoulli
>>> model = bernoulli.GenModel()
>>> model.visualize_model()
theta:0.5
x0:[1 1 0 0 0 1 0 1 0 0 0 1 0 1 0 1 0 1 0 0]
x1:[1 1 0 0 0 0 0 1 1 0 0 0 1 0 1 0 0 0 0 0]
x2:[0 1 0 1 0 0 1 0 0 0 1 0 1 1 1 0 1 0 1 1]
x3:[0 0 0 1 1 0 1 0 1 0 0 0 1 0 1 0 1 0 1 1]
x4:[1 0 1 1 1 1 0 1 0 0 1 1 0 0 0 0 0 0 1 1]
_images/bernoulli_example.png
class bayesml.bernoulli.LearnModel(h0_alpha=0.5, h0_beta=0.5)#

Bases: Posterior, PredictiveMixin

The posterior distribution and the predictive distribution.

Parameters:
h0_alpha : float, optional

a positive real number, by default 0.5.

h0_beta : float, optional

a positive real number, by default 0.5.

Attributes:
hn_alpha : float

a positive real number

hn_beta : float

a positive real number

p_theta : float

a real number \(\theta_\mathrm{p} \in [0, 1]\)

Methods

calc_log_marginal_likelihood()

Calculate log marginal likelihood

calc_pred_dist()

Calculate the parameters of the predictive distribution.

estimate_interval([credibility])

Credible interval of the parameter.

estimate_params([loss, dict_out])

Estimate the parameter of the stochastic data generative model under the given criterion.

get_constants()

Get constants of LearnModel.

get_h0_params()

Get the initial values of the hyperparameters of the posterior distribution.

get_hn_params()

Get the hyperparameters of the posterior distribution.

get_p_params()

Get the parameters of the predictive distribution.

load_h0_params(filename)

Load the hyperparameters to h0_params.

load_hn_params(filename)

Load the hyperparameters to hn_params.

make_prediction([loss])

Predict a new data point under the given criterion.

overwrite_h0_params()

Overwrite the initial values of the hyperparameters of the posterior distribution by the learned values.

pred_and_update(x[, loss])

Predict a new data point and update the posterior sequentially.

reset_hn_params()

Reset the hyperparameters of the posterior distribution to their initial values.

save_h0_params(filename)

Save the hyperparameters using python pickle module.

save_hn_params(filename)

Save the hyperparameters using python pickle module.

set_h0_params([h0_alpha, h0_beta])

Set the initial values of the hyperparameters of the posterior distribution.

set_hn_params([hn_alpha, hn_beta])

Set the updated values of the hyperparameters of the posterior distribution.

update_posterior(x)

Update the hyperparameters of the posterior distribution using training data.

visualize_posterior()

Visualize the posterior distribution for the parameter.

get_constants()#

Get constants of LearnModel.

This model does not have any constants. Therefore, this function returns an empty dict {}.

Returns:
constants : an empty dict
set_h0_params(h0_alpha=None, h0_beta=None)#

Set the initial values of the hyperparameters of the posterior distribution.

Parameters:
h0_alpha : float, optional

a positive real number, by default None.

h0_beta : float, optional

a positive real number, by default None.

get_h0_params()#

Get the initial values of the hyperparameters of the posterior distribution.

Returns:
h0_params : dict of {str: float}
  • "h0_alpha" : The value of self.h0_alpha

  • "h0_beta" : The value of self.h0_beta

set_hn_params(hn_alpha=None, hn_beta=None)#

Set the updated values of the hyperparameters of the posterior distribution.

Parameters:
hn_alpha : float, optional

a positive real number, by default None.

hn_beta : float, optional

a positive real number, by default None.

get_hn_params()#

Get the hyperparameters of the posterior distribution.

Returns:
hn_params : dict of {str: float}
  • "hn_alpha" : The value of self.hn_alpha

  • "hn_beta" : The value of self.hn_beta

update_posterior(x)#

Update the hyperparameters of the posterior distribution using training data.

Parameters:
x : numpy.ndarray

All the elements must be 0 or 1.

estimate_params(loss='squared', dict_out=False)#

Estimate the parameter of the stochastic data generative model under the given criterion.

Parameters:
loss : str, optional

Loss function underlying the Bayes risk function, by default “squared”. This function supports “squared”, “0-1”, “abs”, and “KL”.

dict_out : bool, optional

If True, the output will be a dict, by default False.

Returns:
estimator : {float, None, rv_frozen} or dict of {str: float or None}

The estimated values under the given loss function. If the estimator does not exist, None will be returned. If the loss function is “KL”, the posterior distribution itself will be returned as an rv_frozen object of scipy.stats.
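For a Beta posterior, these estimators have closed forms: the posterior mean minimizes squared-error loss and the posterior mode minimizes 0-1 loss (the median, optimal under absolute-error loss, has no simple closed form and is omitted here). A sketch of those formulas, not the package implementation:

```python
# Closed-form Bayes estimators for a Beta(hn_alpha, hn_beta) posterior.
def beta_posterior_mean(a, b):
    # Optimal under squared-error loss
    return a / (a + b)

def beta_posterior_mode(a, b):
    # Optimal under 0-1 loss; requires a > 1 and b > 1 for an interior mode
    return (a - 1) / (a + b - 2)

print(beta_posterior_mean(3.5, 2.5))  # ~0.583
print(beta_posterior_mode(3.5, 2.5))  # 0.625
```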

estimate_interval(credibility=0.95)#

Credible interval of the parameter.

Parameters:
credibility : float, optional

The posterior probability that the interval contains the parameter, by default 0.95.

Returns:
lower, upper : float

The lower and the upper bounds of the interval.

visualize_posterior()#

Visualize the posterior distribution for the parameter.

Examples

>>> from bayesml import bernoulli
>>> gen_model = bernoulli.GenModel()
>>> x = gen_model.gen_sample(20)
>>> print(x)
[0 1 1 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 1 0]
>>> learn_model = bernoulli.LearnModel()
>>> learn_model.update_posterior(x)
>>> learn_model.visualize_posterior()
_images/bernoulli_posterior.png
get_p_params()#

Get the parameters of the predictive distribution.

Returns:
p_params : dict of {str: float}
  • "p_theta" : The value of self.p_theta

calc_pred_dist()#

Calculate the parameters of the predictive distribution.

make_prediction(loss='squared')#

Predict a new data point under the given criterion.

Parameters:
loss : str, optional

Loss function underlying the Bayes risk function, by default “squared”. This function supports “squared”, “0-1”, “abs”, and “KL”.

Returns:
Predicted_value : {int, numpy.ndarray}

The predicted value under the given loss function. If the loss function is “KL”, the predictive distribution itself will be returned as numpy.ndarray.

pred_and_update(x, loss='squared')#

Predict a new data point and update the posterior sequentially.

Parameters:
x : int

It must be 0 or 1.

loss : str, optional

Loss function underlying the Bayes risk function, by default “squared”. This function supports “squared”, “0-1”, “abs”, and “KL”.

Returns:
Predicted_value : {int, numpy.ndarray}

The predicted value under the given loss function. If the loss function is “KL”, the predictive distribution itself will be returned as numpy.ndarray.
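Under squared-error loss, sequential predict-and-update amounts to predicting the current posterior mean and then incrementing the matching count. A plain-Python sketch of that loop (not the package implementation):

```python
# Sequential predict-then-update for the Bernoulli-Beta model.
def sequential_predictions(xs, a=0.5, b=0.5):
    preds = []
    for x in xs:
        preds.append(a / (a + b))  # predictive mean before observing x
        if x == 1:
            a += 1
        else:
            b += 1
    return preds

print(sequential_predictions([1, 1, 0]))  # [0.5, 0.75, 0.833...]
```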

calc_log_marginal_likelihood()#

Calculate log marginal likelihood

Returns:
log_marginal_likelihood : float

The log marginal likelihood.
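For this conjugate pair the log marginal likelihood has the closed form \(\log p(x^n) = \log B(\alpha_n, \beta_n) - \log B(\alpha_0, \beta_0)\), which can be evaluated stably with the log-gamma function. A sketch, not the package implementation:

```python
import math

def log_beta_fn(a, b):
    # log B(a, b) computed via log-gamma for numerical stability
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_marginal_likelihood(a0, b0, xs):
    n_ones = sum(xs)
    an, bn = a0 + n_ones, b0 + len(xs) - n_ones
    return log_beta_fn(an, bn) - log_beta_fn(a0, b0)

# p(x^3) for x^3 = (1, 0, 1) under the default 0.5/0.5 prior equals 1/16,
# as can be checked by multiplying the sequential predictive probabilities.
print(log_marginal_likelihood(0.5, 0.5, [1, 0, 1]))  # ~ log(1/16) = -2.7726
```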