# Make predictions with the Predictor¶

The objective is to predict one or more values for a given model represented by a graph. That graph could be a graph with the probability distributions of the contained variables and model parameters given by known priors. It could also be a graph obtained by training a model and extracting the resulting posterior graph.

The `Predictor` is the objective class in Halerium for computing predictions.

To obtain predictions, a `Predictor` instance is created with a graph and, if available, prediction input data. That instance can then be queried by calling it with graph elements as arguments. Predictions for these elements are then returned.

## Imports¶

We first import the required packages, classes, functions, etc.

```
[1]:
```

```
# for handling data:
import numpy as np
# for building graphs:
from halerium.core import Graph, Variable, show
# for predicting:
from halerium import Predictor
```

## The graph and input data¶

Let us define a simple graph.

```
[2]:
```

```
graph = Graph("graph")
with graph:
    x = Variable("x", mean=0, variance=1)
    y = Variable("y", mean=x + 1, variance=1)
show(graph)
```

Let us specify some input data for the prediction. Here, we specify values for `graph.x` in a dictionary:

```
[3]:
```

```
prediction_input_data = {graph.x: np.array([0, 1, 2, 3])}
```

## The predictor¶

Now we create a predictor, providing it with the graph and the input data.

```
[4]:
```

```
predictor = Predictor(graph=graph, data=prediction_input_data)
```

Note that the policy for objectives is to provide as much information as possible when creating the objective instance. Calling the instance then only requires the graph elements one wishes to obtain information for. For the predictor, the required information is the graph and the input data. Optional arguments are discussed below.

We can now call the predictor to obtain predictions for the other variable in the graph, i.e. `graph.y`.

```
[5]:
```

```
predictor(graph.y)
```

```
[5]:
```

```
array([1., 2., 3., 4.])
```

The argument of the predictor call, i.e. `graph.y` in this example, is the target(s) of the prediction. The predictor can also be called with nested structures, e.g. to compute joint predictions for several variables.

```
[6]:
```

```
prediction = predictor({'x': graph.x, 'y': graph.y})
display(prediction)
y_prediction = prediction['y']
print('prediction for y:', y_prediction)
```

```
{'x': array([0., 1., 2., 3.]), 'y': array([1., 2., 3., 4.])}
```

```
prediction for y: [1. 2. 3. 4.]
```

If no target for the prediction is specified, a dictionary with all graph elements is returned.

```
[7]:
```

```
predictor()
```

```
[7]:
```

```
{'graph': None,
'graph/x': array([0., 1., 2., 3.]),
'graph/y': array([1., 2., 3., 4.])}
```

Since meaningful predictions can only be computed for (static) variables in the graph, the prediction for any other graph element is `None`.

## Options¶

### Choosing a method¶

The method for computing predictions can be specified using the `method` argument.

The following creates a predictor using the MAP (maximum a posteriori) method.

```
[8]:
```

```
map_predictor = Predictor(graph=graph,
                          data=prediction_input_data,
                          method="MAP")
```

Additional arguments to the underlying model implementing the method can be specified in `model_args`. Arguments to the solver employed by the model can be specified using the `solver_args` argument. See the documentation for the available methods and the model and solver arguments they accept.

### Choosing a measure¶

By default, the predictor returns mean values as predictions. Other statistical measures can be computed as well by creating a predictor with those measures specified. A simple way to specify commonly used measures is to pass 'mean', 'standard_deviation', and/or 'variance' in the `measure` argument. For example:

```
[9]:
```

```
msv_predictor = Predictor(graph=graph,
                          data=prediction_input_data,
                          measure={'mean': 'mean', 'std ': 'standard_deviation', 'var ': 'variance'})
```

Now the predictor returns mean, standard deviation, and variance.

```
[10]:
```

```
msv_predictor(graph.y)
```

```
[10]:
```

```
{'mean': array([1., 2., 3., 4.]),
'std ': array([0.94819425, 1.13848672, 0.85760815, 0.92906702]),
'var ': array([0.89907234, 1.29615202, 0.73549175, 0.86316552])}
```

A measure can also be specified by providing a function for it. For example:

```
[11]:
```

```
msv_predictor = Predictor(graph=graph,
                          data=prediction_input_data,
                          measure={'mean': np.mean, 'std ': np.std, 'var ': np.var})
```

```
[12]:
```

```
msv_predictor(graph.y)
```

```
[12]:
```

```
{'mean': array([1., 2., 3., 4.]),
'std ': array([0.77332388, 0.95210459, 0.92833568, 1.02772596]),
'var ': array([0.59802982, 0.90650316, 0.86180713, 1.05622066])}
```

One can also define a custom measure. It needs to take a numpy array as its first argument and the axis along which the examples are ordered as a keyword argument, just like, e.g., `np.mean` or `np.var`. Let us define a “mean plus standard deviation” function:

```
[13]:
```

```
def mean_plus_std(x, axis=0):
    return np.mean(x, axis=axis) + np.std(x, axis=axis)
```
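To see what such a measure function receives and returns, we can try it on a small hand-made array. This is a plain-numpy sketch, independent of Halerium: the array stands in for a stack of examples ordered along axis 0, with one prediction value per column.

```python
import numpy as np

def mean_plus_std(x, axis=0):
    # combine mean and standard deviation along the example axis
    return np.mean(x, axis=axis) + np.std(x, axis=axis)

# 3 examples of a 4-element prediction, stacked along axis 0
samples = np.array([[0., 1., 2., 3.],
                    [2., 3., 4., 5.],
                    [1., 2., 3., 4.]])

result = mean_plus_std(samples, axis=0)
print(result)
# each column has mean [1, 2, 3, 4] and standard deviation sqrt(2/3),
# so the result is approximately [1.816, 2.816, 3.816, 4.816]
```

The function reduces the example axis, so a stack of 3 examples of shape (4,) yields a single prediction of shape (4,).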

```
[14]:
```

```
mps_predictor = Predictor(graph=graph,
                          data=prediction_input_data,
                          measure={'mean': np.mean, 'std ': np.std, 'm+s ': mean_plus_std})
```

```
[15]:
```

```
mps_predictor(graph.y)
```

```
[15]:
```

```
{'mean': array([1.00003754, 1.99996995, 3.00004099, 3.99997172]),
'std ': array([0.88783754, 0.96053551, 0.93987009, 1.10669458]),
'm+s ': array([1.88787508, 2.96050546, 3.93991108, 5.10666629])}
```

### Accuracy¶

Besides choosing the method and its model and solver parameters, the accuracy of the predictions can be influenced by changing the number of examples used to estimate means, variances, etc.

A small number of examples makes predictions faster, but potentially less accurate. A MAP predictor does not require more than one example.

```
[16]:
```

```
fast_predictor = Predictor(graph=graph,
                           data=prediction_input_data,
                           method='MAP',
                           measure={'mean': 'mean', 'std ': 'standard_deviation', 'var ': 'variance'},
                           n_samples=1)
```

It can, however, only estimate mean values, not variances or standard deviations.

```
[17]:
```

```
fast_predictor(graph.y)
```

```
[17]:
```

```
{'mean': array([1., 2., 3., 4.]),
'std ': array([0., 0., 0., 0.]),
'var ': array([0., 0., 0., 0.])}
```

A MAP-Fisher predictor with a few examples may be slower, but it provides a rough estimate of the spread of the predictions.

```
[18]:
```

```
medium_predictor = Predictor(graph=graph,
                             data=prediction_input_data,
                             method='MAPFisher',
                             measure={'mean': 'mean', 'std ': 'standard_deviation', 'var ': 'variance'},
                             n_samples=10)
```

```
[19]:
```

```
medium_predictor(graph.y)
```

```
[19]:
```

```
{'mean': array([1.00000522, 2.00005058, 3.00002869, 3.99995647]),
'std ': array([0.69023616, 1.02688213, 0.86882866, 0.94203897]),
'var ': array([0.47642596, 1.05448691, 0.75486324, 0.88743742])}
```

By choosing many examples, the predictor may compute more accurate predictions at the expense of speed.

```
[20]:
```

```
slow_predictor = Predictor(graph=graph,
                           data=prediction_input_data,
                           method='MAPFisher',
                           measure={'mean': 'mean', 'std ': 'standard_deviation', 'var ': 'variance'},
                           n_samples=10000)
```

```
[21]:
```

```
slow_predictor(graph.y)
```

```
[21]:
```

```
{'mean': array([1., 2., 3., 4.]),
'std ': array([1.00649971, 1.01384043, 0.99660784, 0.99779369]),
'var ': array([1.01304166, 1.02787241, 0.99322718, 0.99559225])}
```
