User API

Documentation for ForneyLab.jl's user API.

If you want to know how to extend ForneyLab.jl (e.g. by registering new update rules), see the Developer API.

Model specification

ForneyLab.@RV (Macro)

@RV provides a convenient way to add Variables and FactorNodes to the graph.

Examples:

# Automatically create new Variable x, try to assign x.id = :x if this id is available
@RV x ~ Gaussian(constant(0.0), constant(1.0))

# Explicitly specify the id of the Variable
@RV [id=:my_y] y ~ Gaussian(constant(0.0), constant(1.0))

# Automatically assign z.id = :z if this id is not yet taken
@RV z = x + y

# Manual assignment
@RV [id=:my_u] u = x + y

# Just create a variable
@RV x
@RV [id=:my_x] x
source
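
The statements above assume an active FactorGraph. A minimal end-to-end sketch (the prior parameters and variable names are illustrative, not prescriptive):

using ForneyLab

g = FactorGraph()  # the current graph that @RV adds Variables and FactorNodes to

@RV x ~ Gaussian(constant(0.0), constant(1.0))  # latent variable with a Gaussian prior
@RV y ~ Gaussian(x, constant(0.1))              # observation model
placeholder(y, :y, datatype=Float64)            # y is bound to data at execution time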

Factor nodes

ForneyLab.Addition (Type)

Description:

An addition constraint factor node.

f(out,in1,in2) = δ(in1 + in2 - out)

Interfaces:

1. out
2. in1
3. in2

Construction:

Addition(out, in1, in2, id=:some_id)
source
Base.:- (Method)
-(in1::Variable, in2::Variable)

A subtraction constraint based on the addition factor node.

source
ForneyLab.Bernoulli (Type)

Description:

Bernoulli factor node

out ∈ {0, 1}
p ∈ [0, 1]

f(out, p) = Ber(out|p) = p^out (1 - p)^{1 - out}

Interfaces:

1. out
2. p

Construction:

Bernoulli(id=:some_id)
source
ForneyLab.Beta (Type)

Description:

Beta factor node

Real scalars
a > 0
b > 0

f(out, a, b) = Beta(out|a, b) = Γ(a + b)/(Γ(a) Γ(b)) out^{a - 1} (1 - out)^{b - 1}

Interfaces:

1. out
2. a
3. b

Construction:

Beta(id=:some_id)
source
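
As an illustration, a Beta prior is typically paired with a Bernoulli likelihood; the following sketch (with hypothetical parameter values) combines the two nodes through @RV:

@RV θ ~ Beta(constant(1.0), constant(1.0))  # uniform prior on the success probability
@RV y ~ Bernoulli(θ)                        # single Bernoulli observation
placeholder(y, :y, datatype=Float64)        # observed outcome (0.0 or 1.0)
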
ForneyLab.Categorical (Type)

Description:

Categorical factor node

The categorical node defines a probability distribution over the
standard (one-hot) basis vectors of dimension d

out ∈ {0, 1}^d where Σ_k out_k = 1
p ∈ [0, 1]^d, where Σ_k p_k = 1

f(out, p) = Cat(out | p)
          = Π_i p_i^{out_i}

Interfaces:

1. out
2. p

Construction:

Categorical(id=:some_id)
source
ForneyLab.ChanceConstraint (Type)

Description:

Chance constraint on the marginal q of the connected variable x, requiring
1 - ∫_G q(x) dx ⩽ ϵ, where G indicates a region. In other words,
the probability mass of q is not allowed to overflow the region G
by more than ϵ. Implementation according to (van de Laar et al.,
"Chance-Constrained Active Inference", MIT Neural Computation, 2021).

Interfaces:

1. out

Construction:

ChanceConstraint(out; G=(min,max), epsilon=epsilon, id=:my_node)
source
ForneyLab.Clamp (Type)

Description:

A factor that clamps a variable to a constant value.

f(out) = δ(out - value)

Interfaces:

1. out

Construction:

Clamp(out, value, id=:some_id)
Clamp(value, id=:some_id)
source
ForneyLab.constant (Method)

constant creates a Variable which is linked to a new Clamp, and returns this variable.

y = constant(3.0, id=:y)
source
ForneyLab.placeholder (Method)

placeholder(...) creates a Clamp node and registers this node as a data placeholder with the current graph.

# Link variable y to buffer with id :y,
# indicate that Clamp will hold Float64 values.
placeholder(y, :y, datatype=Float64)

# Link variable y to index 3 of buffer with id :y.
# Specify the data type by passing a default value for the Clamp.
placeholder(y, :y, index=3, default=0.0)

# Indicate that the Clamp will hold an array of size `dims`,
# with Float64 elements.
placeholder(X, :X, datatype=Float64, dims=(3,2))
source
ForneyLab.@composite (Macro)

The @composite macro allows for defining custom (composite) nodes. Composite nodes allow for the implementation of custom update rules that may be computationally more efficient or convenient. A composite node can be defined with or without an internal model. For detailed usage instructions we refer to the composite_nodes demo.

source
ForneyLab.Contingency (Type)

Description:

Contingency factor node

The contingency distribution is a multivariate generalization of
the categorical distribution. As a bivariate distribution, the
contingency distribution defines the joint probability
over two unit vectors. The parameter p encodes a contingency matrix
that specifies the probability of co-occurrence.

out1 ∈ {0, 1}^d1 where Σ_j out1_j = 1
out2 ∈ {0, 1}^d2 where Σ_k out2_k = 1
p ∈ [0, 1]^{d1 × d2}, where Σ_jk p_jk = 1

f(out1, out2, p) = Con(out1, out2 | p)
                 = Π_jk p_jk^{out1_j * out2_k}

A Contingency distribution over more than two variables requires
higher-order tensors as parameters; these are not implemented in ForneyLab.

Interfaces:

1. out1
2. out2
3. p

Construction:

Contingency(id=:some_id)
source
ForneyLab.Delta (Type)

Description:

Delta node modeling a custom deterministic relation. Updates for
the Delta node are computed through the unscented transform (by default), 
importance sampling, conjugate approximation, or local linear approximation.

For more details see "On Approximate Nonlinear Gaussian Message Passing on
Factor Graphs", Petersen et al. 2018.

f(out, in1) = δ(out - g(in1))

Interfaces:

1. out
2. in1

Construction:

Delta{T}(out, in1; g=g, id=:my_node)
Delta{T}(out, in1; g=g, g_inv=g_inv, id=:my_node)
Delta{T}(out, in1, in2, ...; g=g, id=:my_node)
Delta{T}(out, in1, in2, ...; g=g, g_inv=(g_inv_in1, g_inv_in2, ...), id=:my_node)
Delta{T}(out, in1, in2, ...; g=g, g_inv=(g_inv_in1, nothing, ...), id=:my_node)

where T encodes the approximation method: Unscented, Sampling, or Extended.
source
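
A hedged usage sketch that follows the construction signature above; the nonlinearity f and the surrounding model are illustrative:

f(x) = x^2  # custom deterministic relation

@RV x ~ Gaussian(constant(0.0), constant(1.0))
@RV y ~ Delta{Unscented}(x; g=f)  # y = f(x), approximated through the unscented transform
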
ForneyLab.Dirichlet (Type)

Description:

Dirichlet factor node

Multivariate:
f(out, a) = Dir(out|a)
          = Γ(Σ_i a_i)/(Π_i Γ(a_i)) Π_i out_i^{a_i}
where 'a' is a vector with every a_i > 0

Matrix variate:
f(out, a) = Π_k Dir(out|a_*k)
where 'a' represents a left-stochastic matrix with every a_jk > 0

Interfaces:

1. out
2. a

Construction:

Dirichlet(id=:some_id)
source
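
As an illustration, the Dirichlet node commonly serves as a prior for a Categorical variable; the parameter values below are hypothetical:

@RV p ~ Dirichlet(constant([1.0, 1.0, 1.0]))  # symmetric prior over 3 class probabilities
@RV c ~ Categorical(p)
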
ForneyLab.DotProduct (Type)

Description:

out = in1'*in2

in1: d-dimensional vector
in2: d-dimensional vector
out: scalar

       in2
       |
  in1  V   out
----->[⋅]----->

f(out, in1, in2) =  δ(out - in1'*in2)

Interfaces:

1. out
2. in1
3. in2

Construction:

DotProduct(out, in1, in2, id=:my_node)
source
ForneyLab.Equality (Type)

Description:

An equality constraint factor node

f([1],[2],[3]) = δ([1] - [2]) δ([1] - [3])

Interfaces:

1, 2, 3

Construction:

Equality(id=:some_id)

The interfaces of an Equality node have to be connected manually.
source
ForneyLab.Exponential (Type)

Description:

Maps a location to a scale parameter by exponentiation

f(out,in1) = δ(out - exp(in1))

Interfaces:

1. out
2. in1

Construction:

Exponential(out, in1, id=:some_id)
source
ForneyLab.Gamma (Type)

Description:

A gamma node with shape-rate parameterization:

f(out,a,b) = Gam(out|a,b) = 1/Γ(a) b^a out^{a - 1} exp(-b out)

Interfaces:

1. out
2. a (shape)
3. b (rate)

Construction:

Gamma(out, a, b, id=:some_id)
source
ForneyLab.Gaussian (Type)

Description:

A Gaussian with moments, precision or canonical parameterization:

f(out, m, v) = 𝒩(out | m, v)
f(out, m, w) = 𝒩(out | m, w^{-1})
f(out, xi, w) = 𝒩(out | w^{-1}*xi, w^{-1})

Interfaces:

1. out
2. m, xi
3. v, w

Construction:

Gaussian(out, m, v, id=:some_id)
Gaussian{Moments}(out, m, v, id=:some_id)
Gaussian{Precision}(out, m, w, id=:some_id)
Gaussian{Canonical}(out, xi, w, id=:some_id)
source
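
The three parameterizations can be used interchangeably with @RV; the sketch below (with illustrative values) specifies the same standard normal prior in each form, so only one of these lines would appear in an actual model:

@RV x ~ Gaussian{Moments}(constant(0.0), constant(1.0))    # mean m and variance v
@RV x ~ Gaussian{Precision}(constant(0.0), constant(1.0))  # mean m and precision w
@RV x ~ Gaussian{Canonical}(constant(0.0), constant(1.0))  # weighted mean xi and precision w
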
ForneyLab.GaussianMixture (Type)

Description:

A Gaussian mixture with mean-precision parameterization:

f(out, z, m1, w1, m2, w2, ...) = 𝒩(out|m1, w1)^z_1 * 𝒩(out|m2, w2)^z_2 * ...

Interfaces:

1. out
2. z (switch)
3. m1 (mean)
4. w1 (precision)
5. m2 (mean)
6. w2 (precision)
...

Construction:

GaussianMixture(out, z, m1, w1, m2, w2, ..., id=:some_id)
source
ForneyLab.LogNormal (Type)

Description:

A log-normal node with location-scale parameterization:

f(out,m,s) = logN(out|m, s) = 1/out (2π s)^{-1/2} exp(-1/(2s) (log(out) - m)^2)

Interfaces:

1. out
2. m (location)
3. s (squared scale)

Construction:

LogNormal(out, m, s, id=:some_id)
source
ForneyLab.Logit (Type)

Description:

Logit mapping between a real variable in1 ∈ R and a binary variable out ∈ {0, 1}.

f(out, in1, xi) =  Ber(out | σ(in1))
                >= exp(in1*out) σ(xi) exp[-(in1 + xi)/2 - λ(xi)(in1^2 - xi^2)], where
           σ(x) =  1/(1 + exp(-x))
           λ(x) =  (σ(x) - 1/2)/(2*x)

Interfaces:

1. out (binary)
2. in1 (real)
3. xi (auxiliary variable)

Construction:

Logit(out, in1, xi)
source
ForneyLab.MomentConstraint (Type)

Description:

Constrains the marginal of the connected variable to the
expectation ∫ q(x) g(x) dx = G. The parameter η in the node function
is adapted such that the marginal respects the above constraint.
Implementation according to (van de Laar et al., "Chance-Constrained
Active Inference", MIT Neural Computation, 2021).

f(out) = exp(η g(out))

Interfaces:

1. out

Construction:

MomentConstraint(out; g=g, G=G, id=:my_node)
source
ForneyLab.Multiplication (Type)

Description:

For continuous random variables, the multiplication node acts
as a (matrix) multiplication constraint, with node function

f(out, in1, a) = δ(out - a*in1)

Interfaces:

1. out
2. in1
3. a

Construction:

Multiplication(out, in1, a, id=:some_id)
source
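
A hedged sketch of a scaled random variable; it assumes that the * operator is overloaded for Variables analogously to the + and - operators documented above, and the gain value is illustrative:

@RV x ~ Gaussian(constant(0.0), constant(1.0))
@RV y = constant(2.0) * x  # assumed to insert a Multiplication node with a = 2.0
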
ForneyLab.PointMassConstraint (Type)

Description:

Constrains the marginal of the connected variable to a point-mass.
Implementation according to (Senoz et al., "Variational Message Passing
and Local Constraint Manipulation in Factor Graphs", Entropy, 2021).

Interfaces:

1. out

Construction:

PointMassConstraint(out; id=:my_node)
source
ForneyLab.Poisson (Type)

Description:

Poisson factor node

Real scalars
l > 0 (rate)

f(out, l) = Poisson(out|l) = 1/(out!) l^out exp(-l)

Interfaces:

1. out
2. l

Construction:

Poisson(id=:some_id)
source
ForneyLab.Probit (Type)

Description:

Links a continuous, real-valued variable in1 ∈ R to a binary (Boolean) variable out ∈ {0, 1} through a probit link function.

f(out, in1) = Ber(out | Φ(in1))

Interfaces:

1. out (binary)
2. in1 (real)

Construction:

Probit(out, in1, id=:some_id)
source
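
An illustrative binary-observation sketch (the prior parameters are hypothetical):

@RV x ~ Gaussian(constant(0.0), constant(1.0))  # continuous latent score
@RV y ~ Probit(x)                               # binary outcome through the probit link
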
ForneyLab.Softmax (Type)

Description:

Softmax mapping between a real variable in1 ∈ R^d and discrete variable out ∈ {0, 1}^d.

f(out, in1, xi, a) = Cat(out | p), with p_j = exp(in1_j)/Σ_k exp(in1_k),
where the log-normalizer log(Σ_k exp(in1_k)) is upper-bounded according to (Bouchard, 2007).

Interfaces:

1. out (discrete)
2. in1 (real)
3. xi  (auxiliary variable)
4. a   (auxiliary variable)

Construction:

Softmax(out, in1, xi, a)
source
ForneyLab.Transition (Type)

Description:

The transition node models a transition between discrete
random variables, with node function

f(out, in1, a) = Cat(out | a*in1)

where a is a left-stochastic matrix (columns sum to one).

Interfaces:

1. out
2. in1
3. a

Construction:

Transition(out, in1, a, id=:some_id)
source
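
An illustrative single time-slice of a discrete-state model; the prior and the transition matrix values are hypothetical:

A = [0.9 0.1; 0.1 0.9]  # left-stochastic transition matrix

@RV z_prev ~ Categorical(constant([0.5, 0.5]))
@RV z ~ Transition(z_prev, constant(A))
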
ForneyLab.TransitionMixture (Type)

Description:

A mixture of discrete transitions:

f(out, in1, z, A1, A2, ...) = Cat(out|A1*in1)^z_1 * Cat(out|A2*in1)^z_2 * ...

Interfaces:

1. out
2. in1
3. z (switch)
4. A1 (transition matrix)
5. A2 (transition matrix)
...

Construction:

TransitionMixture(out, in1, z, A1, A2, ..., id=:some_id)
source
ForneyLab.Wishart (Type)

Description:

A Wishart node:

f(out,v,nu) = W(out|v, nu) = B(v, nu) |out|^{(nu - D - 1)/2} exp(-1/2 tr(v^{-1} out))

Interfaces:

1. out
2. v (scale matrix)
3. nu (degrees of freedom)

Construction:

Wishart(out, v, nu, id=:some_id)
source
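
As an illustration, the Wishart node often serves as a prior for the precision matrix of a multivariate Gaussian; the parameter values below are hypothetical:

@RV W ~ Wishart(constant([1.0 0.0; 0.0 1.0]), constant(2.0))  # scale matrix and degrees of freedom
@RV x ~ Gaussian{Precision}(constant(zeros(2)), W)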

Scheduling

ForneyLab.PosteriorFactorization (Type)

Initialize an empty PosteriorFactorization for sequential construction

source

Initialize a PosteriorFactorization consisting of a single PosteriorFactor for the entire graph

source

Construct a PosteriorFactorization consisting of one PosteriorFactor for each argument

source
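
A hedged usage sketch of the per-argument form, assuming previously defined random variables m and w (the variables and ids are illustrative):

q = PosteriorFactorization(m, w, ids=[:M, :W])  # one PosteriorFactor for m and one for w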

Algorithm assembly

Algorithm code generation

Algorithm execution

ForneyLab.PointMass (Type)

PointMass is an abstract type used to describe point mass distributions. It never occurs in a FactorGraph, but it is used as a probability distribution type.

source

Helper

ForneyLab.@symmetrical (Macro)
@symmetrical `function_definition`

Duplicate a method definition with the order of the first two arguments swapped. This macro is used to duplicate methods that are symmetrical in their first two input arguments, but require explicit definitions for the different argument orders. Example:

@symmetrical function prod!(x, y, z)
    ...
end
source
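
A hypothetical illustration (combine is not part of ForneyLab); the macro generates a second method with the first two arguments swapped:

@symmetrical function combine(x::Int, y::Float64, z)
    return z * (x + y)
end

# The definition above also generates combine(y::Float64, x::Int, z) with the same body.
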
ForneyLab.isRoundedPosDef (Method)

Checks if the input matrix is positive definite. We also perform rounding in order to prevent the floating point precision problems that isposdef() suffers from.

source
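
A minimal sketch of the idea (not ForneyLab's exact implementation; the helper name and the number of digits are assumptions):

using LinearAlgebra

function roundedPosDefCheck(A::AbstractMatrix; digits=12)
    # Round away floating point noise in the entries before the positive-definiteness test.
    return isposdef(Hermitian(round.(A, digits=digits)))
end
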
ForneyLab.trigammaInverse (Method)

Solve trigamma(y) = x for y.

Uses Newton's method on the convex function 1/trigamma(y). Iterations converge monotonically. Based on the trigammaInverse implementation in the R package "limma" by Gordon Smyth: https://github.com/Bioconductor-mirror/limma/blob/master/R/fitFDist.R

source
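
A minimal sketch of the Newton iteration described above, assuming SpecialFunctions.jl for trigamma and polygamma (this mirrors the limma approach and is not ForneyLab's exact code):

using SpecialFunctions: trigamma, polygamma

function trigammaInverseSketch(x::Float64)
    y = 0.5 + 1.0/x  # starting value
    for _ in 1:50
        tri = trigamma(y)
        dif = tri*(1.0 - tri/x)/polygamma(2, y)  # Newton step on 1/trigamma(y) = 1/x
        y += dif
        abs(dif) < 1e-8*y && break               # stop once the update is negligible
    end
    return y
end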