User guide

This user guide outlines the usage of ForneyLab for solving inference problems. The main content is divided into three parts:

1. Specifying a model
2. Generating an algorithm
3. Executing an algorithm

For installation instructions see the Getting started page. To import ForneyLab into the active Julia session run

using ForneyLab

Specifying a model

Probabilistic models incorporate the element of randomness to describe an event or phenomenon by using random variables and probability theory. A probabilistic model can be represented diagrammatically by using probabilistic graphical models (PGMs). A factor graph is a type of PGM that is well suited to cast inference tasks in terms of graphical manipulations.

Creating a new factor graph

In ForneyLab, factor graphs are represented by the FactorGraph type (struct). To instantiate a new (empty) factor graph, we use the constructor without arguments

g = FactorGraph()

ForneyLab tracks the active factor graph. Operations on the graph, such as adding variables or nodes, only affect the active graph. The call to FactorGraph() creates a new instance of a factor graph and registers it as the active graph.

To get the active graph run

fg = currentGraph()

To set the active graph run

setCurrentGraph(g)
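
For instance, here is a minimal sketch of working with two graphs in the same session (the names g1 and g2 are just for illustration):

g1 = FactorGraph()    # g1 becomes the active graph
g2 = FactorGraph()    # now g2 is the active graph
setCurrentGraph(g1)   # make g1 the active graph again
currentGraph() === g1 # true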

Adding random variables (edges)

Random variables are represented as edges on Forney-style factor graphs. You can add a random variable to the active graph by instantiating a Variable. The constructor accepts an id of type Symbol. For example,

x = Variable(id=:x)

associates the variable x to an edge of the active graph.

Alternatively, the @RV macro achieves the same with a more compact notation. The following line has the same result as the previous one.

@RV x

By default, the @RV macro uses the Julia-variable's name to create an id for the Variable object (:x in this example). However, if this id was already assigned, then ForneyLab creates a default id of the form :variable_n, where n is a number that increments. In case you want to provide a custom id, the @RV macro accepts an optional keyword argument between square brackets. For example,

@RV [id=:my_id] x

adds the variable x to the active graph and assigns the :my_id symbol to its id field. Later we will see that this is useful once we start visualizing the factor graph.

Adding factor nodes

Factor nodes are used to define a relationship between random variables. A factor node defines a probability distribution over selected random variables. See Factor nodes for a complete list of the available factor nodes in ForneyLab.

We assign a probability distribution to a random variable using the ~ operator together with the @RV macro. For example, to create a Gaussian random variable y whose mean and variance are controlled by the random variables m and v respectively, we define

@RV m
@RV v
@RV y ~ GaussianMeanVariance(m, v)

Visualizing a factor graph

Factor graphs can be visualized using the draw function, which takes a FactorGraph as argument. Let's visualize the factor graph that we defined in the previous section.

ForneyLab.draw(g)
[Factor graph: a Gaussian node (gaussianmeanvariance_1) connected to the edges m, v and y.]

Clamping

Suppose we know that the variance of the random variable y is fixed to a certain value. ForneyLab provides a Clamp node to impose such constraints. Clamp nodes can be implicitly defined by using literals, as in the following example

g = FactorGraph() # create a new factor graph
@RV m
@RV y ~ GaussianMeanVariance(m, 1.0)
ForneyLab.draw(g)
[Factor graph: a Gaussian node connected to the edges m and y, with an implicit clamp node (clamp_1) on the v edge.]

Here, the literal 1.0 creates a clamp node implicitly. Clamp nodes are visualized with a gray background.

Alternatively, if you want to assign a custom id to a Clamp factor node, then you have to instantiate it explicitly using its constructor, i.e.

g = FactorGraph() # create a new factor graph
@RV m
@RV v ~ Clamp(1.0)
@RV y ~ GaussianMeanVariance(m, v)
ForneyLab.draw(g)
[Factor graph: a Gaussian node connected to the edges m and y, with an explicitly constructed clamp node (clamp_1) on the v edge.]

Placeholders

Placeholders are Clamp factors that act as entry points for data. They associate a given random variable with a buffer through which data is fed at a later point. This buffer has an id, a dimensionality and a data type. Placeholders are created with the placeholder function. Suppose that we want to feed an array of one-dimensional floating-point data to the y random variable of the previous model. We would then need to define y as a placeholder as follows.

g = FactorGraph() # create a new factor graph
@RV m
@RV v ~ Clamp(1.0)
@RV y ~ GaussianMeanVariance(m, v)
placeholder(y, :y)
ForneyLab.draw(g)
[Factor graph: a Gaussian node with the edge m, a clamp node on the v edge, and a placeholder node (placeholder_y) attached to the y edge.]

Placeholders default to one-dimensional floating-point data. In case we want to override this with, for example, 3-dimensional integer data, we would need to specify the dims and datatype parameters of the placeholder function as follows

placeholder(y, :y, dims=(3,), datatype=Int)

In the previous example, we first created the random variable y and then marked it as a placeholder. There is, however, a shorthand version to perform these two steps in one. The syntax consists of calling a placeholder method that takes an id Symbol as argument and returns the new random variable. Here is an example:

x = placeholder(:x)

where x is now a random variable linked to a placeholder with id :x.

In section Executing an algorithm we will see how the data is fed to the placeholders.

Overloaded operators

ForneyLab supports the use of the +, - and * operators between random variables that have certain types of probability distributions. This is known as operator overloading. These operators are represented as deterministic factor nodes in ForneyLab. As an example, the sum of two Gaussian random variables can be defined as follows

g = FactorGraph() # create a new factor graph
@RV x ~ GaussianMeanVariance(0.0, 1.0)
@RV y ~ GaussianMeanVariance(2.0, 3.0)
@RV z = x + y
placeholder(z, :z)
ForneyLab.draw(g)
[Factor graph: two Gaussian nodes (for x and y) with clamped means and variances, an addition node combining x and y into z, and a placeholder node attached to the z edge.]

Online vs. offline learning

Online learning refers to a procedure where observations are processed as soon as they become available. In the context of factor graphs this means that observations need to be fed to the placeholders and processed one at a time. In a Bayesian setting, this reduces to the application of Bayes' rule in a recursive fashion, i.e. the posterior distribution for a given random variable becomes the prior for the next processing step. Since we are feeding one observation at each time step, the factor graph will have one placeholder for every observed variable. All of the factor graphs that we have seen so far were specified to process data in this fashion.

Let's take a look at an example in order to contrast it with its offline counterpart. In this simple example, the mean x of a Gaussian distributed random variable y is modelled by another Gaussian distribution with hyperparameters m and v. The variance of y is assumed to be known.

g = FactorGraph() # create a new factor graph
m = placeholder(:m)
v = placeholder(:v)
@RV x ~ GaussianMeanVariance(m, v)
@RV y ~ GaussianMeanVariance(x, 1.0)
placeholder(y, :y)
ForneyLab.draw(g)
[Factor graph: placeholder nodes for m and v feeding a Gaussian node for x, a second Gaussian node with a clamped variance for y, and a placeholder node attached to the y edge.]

As we have seen in previous examples, there is one placeholder linked to the observed variable y that accepts one observation at a time. Perhaps what is less obvious is that the hyperparameters m and v are also defined as placeholders. The reason for this is that we will use them to input our current (prior) belief about x for every observation that is processed. In section Executing an algorithm we will elaborate more on this.

Offline learning, on the other hand, involves feeding and processing a batch of N observations (typically all available observations) in a single step. This translates into a factor graph that has one placeholder linked to a random variable for each sample in the batch. We can specify this type of model using a for loop, as in the following example.

g = FactorGraph()   # create a new factor graph
N = 3               # number of observations
y = Vector{Variable}(undef, N)
@RV x ~ GaussianMeanVariance(0.0, 1.0)
for i = 1:N
    @RV y[i] ~ GaussianMeanVariance(x, 1.0)
    placeholder(y[i], :y, index=i)
end
ForneyLab.draw(g)
[Factor graph: a Gaussian prior on x connected through equality nodes to three Gaussian likelihood nodes, each with a clamped variance and a placeholder node (placeholder_y_1, placeholder_y_2, placeholder_y_3) attached to its output edge.]

The important thing to note here is that we need an array of N observed random variables, each of which is linked to a dedicated index of the placeholder's buffer. This buffer can be thought of as an array of N Clamp factor nodes. We achieve this link by means of the index parameter of the placeholder function.
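
As a preview of what this buffer will expect, the data dictionary will later contain one entry per index (the observation values below are just assumptions):

data = Dict(:y => [4.8, 5.1, 5.3]) # y[1], y[2] and y[3] read from indices 1, 2 and 3 of the :y buffer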

In section Executing an algorithm we will see examples of how the data is fed to the placeholders in each of these two scenarios.

Generating an algorithm

ForneyLab supports code generation for four different types of message-passing algorithms:

Belief propagation (sum-product)
Variational message passing (VMP)
Expectation maximization (EM)
Expectation propagation (EP)

Whereas belief propagation computes exact inference for the random variables of interest, the variational message passing (VMP), expectation maximization (EM) and expectation propagation (EP) algorithms are approximation methods that can be applied to a larger range of models.

Belief propagation

The way to instruct ForneyLab to generate a belief propagation algorithm (also known as a sum-product algorithm) is by using the messagePassingAlgorithm function. This function takes as argument(s) the random variable(s) for which we want to infer the posterior distribution. As an example, consider the following hierarchical model in which the mean of a Gaussian distribution is itself modelled by a Gaussian distribution whose mean is, in turn, modelled by a third Gaussian distribution.

g = FactorGraph() # create a new factor graph
@RV m2 ~ GaussianMeanVariance(0.0, 1.0)
@RV m1 ~ GaussianMeanVariance(m2, 1.0)
@RV y ~ GaussianMeanVariance(m1, 1.0)
placeholder(y, :y)
ForneyLab.draw(g)
[Factor graph: a chain of three Gaussian nodes for m2, m1 and y, with clamp nodes on the remaining mean and variance edges and a placeholder node attached to the y edge.]

If we were only interested in inferring the posterior distribution of m1 then we would run

algorithm = messagePassingAlgorithm(m1)
algorithm_code = algorithmSourceCode(algorithm)

On the other hand, if we were interested in the posterior distributions of both m1 and m2 we would then need to pass them as elements of an array, i.e.

algorithm = messagePassingAlgorithm([m1, m2])
algorithm_code = algorithmSourceCode(algorithm)

Note that the message-passing algorithm returned by the algorithmSourceCode function is a String that contains the definition of a Julia function. In order to access this function, we need to parse the code and evaluate it in the current scope. This can be done as follows

algorithm_expr = Meta.parse(algorithm_code)
:(function step!(data::Dict, marginals::Dict=Dict(), messages::Vector{Message}=Array{Message}(undef, 4))
      #= none:3 =#
      messages[1] = ruleSPGaussianMeanVarianceOutNPP(nothing, Message(Univariate, PointMass, m=0.0), Message(Univariate, PointMass, m=1.0))
      #= none:4 =#
      messages[2] = ruleSPGaussianMeanVarianceOutNGP(nothing, messages[1], Message(Univariate, PointMass, m=1.0))
      #= none:5 =#
      messages[3] = ruleSPGaussianMeanVarianceMPNP(Message(Univariate, PointMass, m=data[:y]), nothing, Message(Univariate, PointMass, m=1.0))
      #= none:6 =#
      messages[4] = ruleSPGaussianMeanVarianceMGNP(messages[3], nothing, Message(Univariate, PointMass, m=1.0))
      #= none:8 =#
      marginals[:m1] = (messages[2]).dist * (messages[3]).dist
      #= none:9 =#
      marginals[:m2] = (messages[1]).dist * (messages[4]).dist
      #= none:11 =#
      return marginals
  end)
eval(algorithm_expr)

At this point a new function named step! becomes available in the current scope. This function contains a message-passing algorithm that infers both m1 and m2 given one or more y observations. In the section Executing an algorithm we will see how this function is used.
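
As a quick preview (the value 0.5 is just an assumed data point), a single observation could be processed as follows:

data = Dict(:y => 0.5)
marginals = step!(data) # a Dict containing the inferred marginals of :m1 and :m2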

Variational message passing

Variational message passing (VMP) algorithms are generated in much the same way as the belief propagation algorithm we saw in the previous section. There is a major difference though: for VMP algorithm generation we need to define the factorization properties of our approximate distribution. A common approach is to assume that all random variables of the model factorize with respect to each other. This is known as the mean field assumption. In ForneyLab, such factorization properties are specified using the PosteriorFactorization composite type. Let's take a look at a simple example to see how it is used. In this model we want to learn the mean m and precision w of a Gaussian distribution, where the former is modelled with a Gaussian distribution and the latter with a Gamma distribution.

g = FactorGraph() # create a new factor graph
@RV m ~ GaussianMeanVariance(0, 10)
@RV w ~ Gamma(0.1, 0.1)
@RV y ~ GaussianMeanPrecision(m, w)
placeholder(y, :y)
draw(g)
[Factor graph: a Gaussian node for m, a Gamma node for w, a Gaussian mean-precision node for y, clamp nodes on the remaining parameter edges, and a placeholder node attached to the y edge.]

The constructor of the PosteriorFactorization composite type takes the random variables of interest as arguments, together with an ids keyword argument consisting of an array of symbols used to identify each of these random variables. Here is an example of how to use this constructor for the previous model, where we want to infer m and w.

q = PosteriorFactorization(m, w, ids=[:M, :W])

Here, the PosteriorFactorization constructor specifies how the approximate posterior factorizes. We can view this posterior factorization as dividing the factor graph into several subgraphs, each corresponding to a separate factor in the posterior factorization. Minimization of the free energy is performed by iterating over each subgraph in order to update the posterior marginal corresponding to the current factor, which depends on messages coming from the other subgraphs. This iteration is repeated until either the free energy converges to a certain value or the posterior marginals of each factor stop changing. We can use the ids passed to the PosteriorFactorization constructor to visualize the corresponding subgraphs, as shown below.

ForneyLab.draw(q.posterior_factors[:M])
[Subgraph for posterior factor :M, containing the Gaussian mean-variance node for m and its connection to the mean-precision node for y.]
ForneyLab.draw(q.posterior_factors[:W])
[Subgraph for posterior factor :W, containing the Gamma node for w and its connection to the mean-precision node for y.]

Generating the VMP algorithm then follows a procedure similar to that of the belief propagation algorithm. In the VMP case, however, the resulting algorithm will consist of multiple step functions, one for each posterior factor, that need to be executed iteratively until convergence.

# Construct and compile a variational message passing algorithm
algo = messagePassingAlgorithm()
algo_code = algorithmSourceCode(algo)
eval(Meta.parse(algo_code))
Meta.parse(algo_code) = quote
    #= none:3 =#
    function stepM!(data::Dict, marginals::Dict=Dict(), messages::Vector{Message}=Array{Message}(undef, 2))
        #= none:5 =#
        messages[1] = ruleVBGaussianMeanVarianceOut(nothing, ProbabilityDistribution(Univariate, PointMass, m=0), ProbabilityDistribution(Univariate, PointMass, m=10))
        #= none:6 =#
        messages[2] = ruleVBGaussianMeanPrecisionM(ProbabilityDistribution(Univariate, PointMass, m=data[:y]), nothing, marginals[:w])
        #= none:8 =#
        marginals[:m] = (messages[1]).dist * (messages[2]).dist
        #= none:10 =#
        return marginals
    end
    #= none:14 =#
    function stepW!(data::Dict, marginals::Dict=Dict(), messages::Vector{Message}=Array{Message}(undef, 2))
        #= none:16 =#
        messages[1] = ruleVBGammaOut(nothing, ProbabilityDistribution(Univariate, PointMass, m=0.1), ProbabilityDistribution(Univariate, PointMass, m=0.1))
        #= none:17 =#
        messages[2] = ruleVBGaussianMeanPrecisionW(ProbabilityDistribution(Univariate, PointMass, m=data[:y]), marginals[:m], nothing)
        #= none:19 =#
        marginals[:w] = (messages[1]).dist * (messages[2]).dist
        #= none:21 =#
        return marginals
    end
end
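
A minimal sketch of how these step functions might be executed iteratively is shown below; the observed value, the number of iterations and the vague initial marginal for w (using ForneyLab's vague function) are assumptions.

data = Dict(:y => 4.2)               # an assumed observation
marginals = Dict(:w => vague(Gamma)) # broad initial belief for w, read by stepM!
for i in 1:10                        # iterate the factors until (approximate) convergence
    stepM!(data, marginals)
    stepW!(data, marginals)
end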

Computing free energy

VMP inference boils down to finding the member of a family of tractable probability distributions that is closest in KL divergence to an intractable posterior distribution. This is achieved by minimizing a quantity known as the free energy. ForneyLab can optionally generate code for evaluating the free energy, which is particularly useful for testing convergence of the iterative VMP procedure.

algo = messagePassingAlgorithm(free_energy=true)
algo_code = algorithmSourceCode(algo, free_energy=true)
eval(Meta.parse(algo_code))
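
As a minimal sketch (assuming the generated function is named freeEnergy, and using assumed data and initialization), the free energy can then be tracked across iterations:

data = Dict(:y => 4.2)
marginals = Dict(:w => vague(Gamma))
F = Float64[] # free energy value per iteration
for i in 1:10
    stepM!(data, marginals)
    stepW!(data, marginals)
    push!(F, freeEnergy(data, marginals)) # should (roughly) decrease as VMP converges
end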

Expectation maximization

The expectation step of the expectation maximization (EM) algorithm can be executed by sum-product message passing, while the maximization step can be performed analytically or numerically.

g = FactorGraph()
v = placeholder(:v) # parameter of interest
@RV m ~ GaussianMeanVariance(0.0, 1.0)
@RV y ~ GaussianMeanVariance(m, v)
placeholder(y, :y)

Generating a sum-product algorithm towards a clamped variable signals to ForneyLab that the user wishes to optimize this parameter. In addition to returning a step!() function, ForneyLab will then provide mockup code for an optimize!() function as well.

algo = messagePassingAlgorithm(v)
# You have created an algorithm that requires updates for (a) clamped parameter(s).
# This algorithm requires the definition of a custom `optimize!` function that updates the parameter value(s)
# by altering the `data` dictionary in-place. The custom `optimize!` function may be based on the mockup below:

# function optimize!(data::Dict, marginals::Dict=Dict(), messages::Vector{Message}=init())
#   ...
#   return data
# end

This optimize!() function needs to be defined by the user and may optionally depend on external tools. Note that the optimize!() and step!() functions have the same calling signature. However, where a step!() function accepts the data and updates the messages and marginals, the optimize!() function operates the other way around. Iterating these functions offers a general mechanism for implementing EM-type algorithms, as sketched below. See the demos for a detailed example.
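
A minimal sketch of such an EM-type iteration is shown below; the observed value, the initial guess for v and the number of iterations are assumptions, and optimize!() stands for a user-defined update that only needs data and marginals.

data = Dict(:y => 2.5, :v => 1.0) # observation and an assumed initial value for v
marginals = Dict()
for i in 1:20
    step!(data, marginals)     # expectation: infer marginals given the current value of v
    optimize!(data, marginals) # maximization: user-defined in-place update of data[:v]
end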

Executing an algorithm

In section Specifying a model we introduced two main ways to learn from data, namely in an online and in an offline setting. We saw that the structure of the factor graph is different in each of these settings. In this section we will demonstrate how to feed data to an algorithm in both an online and an offline setting. We will use the same examples from section Online vs. offline learning.

Online learning

For convenience, let's reproduce the model specification for the problem of estimating the mean x of a Gaussian distributed random variable y, where x is modelled using another Gaussian distribution with hyperparameters m and v. Let's also generate a belief propagation algorithm for this inference problem, as we have seen before.

g = FactorGraph() # create a new factor graph

m = placeholder(:m)
v = placeholder(:v)
@RV x ~ GaussianMeanVariance(m, v)
@RV y ~ GaussianMeanVariance(x, 1.0)
placeholder(y, :y)

algo = messagePassingAlgorithm(x)
eval(Meta.parse(algorithmSourceCode(algo))) # generate, parse and evaluate the algorithm

In order to execute this algorithm we first have to specify a prior for x. This is done by choosing some initial values for the hyperparameters m and v. In each processing step, the algorithm expects an observation and the current belief about x, i.e. the prior. We pass this information as elements of a data dictionary where the keys are the ids of their corresponding placeholders. The algorithm performs inference and returns the results inside a different dictionary (which we call marginals in the following script). In the next iteration, we repeat this process by feeding the algorithm with the next observation in the sequence and the posterior distribution of x that we obtained in the previous processing step. In other words, the current posterior becomes the prior for the next processing step. Let's illustrate this using an example where we will first generate a synthetic dataset by sampling observations from a Gaussian distribution that has a mean of 5.

using Plots, LaTeXStrings; theme(:default);
plot(fillalpha=0.3, leg=false, xlabel=L"x", ylabel=L"p(x|D)", yticks=nothing)

N = 50                      # number of samples
dataset = randn(N) .+ 5     # sample N observations from a Gaussian with m=5 and v=1

normal(ฮผ, ฯƒยฒ) = x -> (1/(sqrt(2ฯ€*ฯƒยฒ))) * exp(-(x - ฮผ)^2 / (2*ฯƒยฒ)) # to plot results

m_prior = 0.0   # initialize hyperparameter m
v_prior = 10    # initialize hyperparameter v

marginals = Dict()  # this is where the algorithm stores the results

anim = @animate for i in 1:N
    data = Dict(:y => dataset[i],
                :m => m_prior,
                :v => v_prior)

    plot(-10:0.01:10, normal(m_prior, v_prior), fill=true)

    step!(data, marginals) # feed in prior and data points 1 at a time

    global m_prior = mean(marginals[:x]) # today's posterior is tomorrow's prior
    global v_prior = var(marginals[:x])  # today's posterior is tomorrow's prior
end
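
The animation object can then be rendered to a file, for instance with the gif function from Plots (the filename and frame rate below are just examples):

gif(anim, "online_learning.gif", fps=15)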

[Animation: the posterior p(x|D) concentrating around the true mean of 5 as observations are processed one at a time.]

As we process more samples, our belief about the possible values of x becomes more confident.

Offline learning

Executing an algorithm in an offline fashion is much simpler than in the online case. Let's reproduce the model specification of the previous example in an offline setting (also shown in Online vs. offline learning).

g = FactorGraph()   # create a new factor graph
N = 30              # number of observations
y = Vector{Variable}(undef, N)
@RV x ~ GaussianMeanVariance(0.0, 1.0)
for i = 1:N
    @RV y[i] ~ GaussianMeanVariance(x, 1.0)
    placeholder(y[i], :y, index=i)
end

algo = messagePassingAlgorithm(x)
eval(Meta.parse(algorithmSourceCode(algo))) # generate, parse and evaluate the algorithm

Since we have a placeholder linked to each observation in the sequence, we can process the complete dataset in one step. To do so, we first need to create a dictionary having the complete dataset array as its single element. We then need to pass this dictionary to the step! function which, in contrast with the online counterpart, we only need to call once.

data = Dict(:y => dataset)
marginals = step!(data) # Run the algorithm
plot(-10:0.01:10, normal(mean(marginals[:x]), var(marginals[:x])), fillalpha=0.3, fillrange = 0)
Note

ForneyLab is aimed at processing time-series data; batch processing does not perform well with large datasets at the moment. We are working on this issue.