Working with Programs#

In Brush, a Program is an executable data structure. You can think of it as a model, or a function mapping feature inputs to data labels. We call them programs because that is what they are: executable data structures, and that is what they are called in the genetic algorithm literature, to distinguish them from optimized bit strings.

The Brush Program class operates similarly to a sklearn estimator: it has fit and predict methods that are called during training and inference, respectively.

Types of Programs#

There are four fundamental “types” of Brush programs:

  • Regressors: map inputs to a continuous endpoint

  • Binary Classifiers: map inputs to a binary endpoint, as well as a continuous value in \([0, 1]\)

  • Multi-class Classifiers: map inputs to a category

    • Under development

  • Representors: map inputs to a lower-dimensional space.

    • Under development

Representation#

Internally, programs are represented as syntax trees. We use the tree.hh tree class, which gives the trees an STL-like interface.
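To build intuition for this representation, here is a minimal, hypothetical sketch in pure Python (not Brush's actual C++ classes): a program such as Add(2.0*x1, Sin(x0)) stored as a tree of nodes and evaluated recursively for one sample.

```python
import math

class Node:
    """A single tree node: an operator, a feature reference, or a constant."""
    def __init__(self, op, children=(), value=None):
        self.op = op                    # operator name, "feature", or "const"
        self.children = list(children)  # child subtrees (operator arguments)
        self.value = value              # feature name or constant, if any

def evaluate(node, row):
    """Recursively evaluate the tree for one sample (a dict of features)."""
    if node.op == "const":
        return node.value
    if node.op == "feature":
        return row[node.value]
    args = [evaluate(c, row) for c in node.children]
    if node.op == "Add":
        return args[0] + args[1]
    if node.op == "Mul":
        return args[0] * args[1]
    if node.op == "Sin":
        return math.sin(args[0])
    raise ValueError(f"unknown op {node.op}")

# The program Add(Mul(2.0, x1), Sin(x0)) as a syntax tree:
tree = Node("Add", [
    Node("Mul", [Node("const", value=2.0), Node("feature", value="x1")]),
    Node("Sin", [Node("feature", value="x0")]),
])
print(evaluate(tree, {"x0": 0.0, "x1": 3.0}))  # 6.0
```

An STL-like tree container such as tree.hh stores the same structure flat, with iterators for pre- and post-order traversal, which is what makes fast evaluation and subtree manipulation convenient in C++.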

Generation#

We generate random programs using Sean Luke’s PTC2 algorithm.
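The core idea of PTC2 is to grow a random tree toward a target size by keeping a queue of unfilled argument slots, then closing any remaining slots with terminals. The following is a simplified, illustrative sketch of that idea, not Brush's implementation:

```python
import random

FUNCTIONS = {"Add": 2, "Mul": 2, "Sin": 1}   # operator name -> arity
TERMINALS = ["x0", "x1", "1.0"]

def ptc2(target_size, rng=random):
    """Grow a random expression tree with roughly target_size nodes."""
    root = [None]              # a slot is a one-element list, filled in place
    open_slots = [root]
    size = 0
    # Expand random open slots with functions while there is room left.
    while open_slots and size + len(open_slots) < target_size:
        slot = open_slots.pop(rng.randrange(len(open_slots)))
        name = rng.choice(list(FUNCTIONS))
        children = [[None] for _ in range(FUNCTIONS[name])]
        slot[0] = (name, children)
        open_slots.extend(children)
        size += 1
    # Close every remaining slot with a terminal.
    for slot in open_slots:
        slot[0] = (rng.choice(TERMINALS), [])
        size += 1
    return root[0], size

random.seed(0)
tree, n = ptc2(8)
print(n)  # final node count: at least 8, overshooting by at most arity - 1
```

Because each expansion adds at most (arity − 1) new open slots, the final tree lands on or just past the requested size, which is what makes PTC2 useful for sampling programs with a controlled size distribution.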

Evaluation#

TODO

Visualizing Programs#

Programs in Brush are symbolic tree structures, and can be viewed in a few ways:

  1. As a string using get_model()

  2. As a string-like tree using get_model("tree")

  3. As a graph using graphviz and get_model("dot").

Let’s look at a regression example.

import pandas as pd
from pybrush import BrushRegressor

# load data
df = pd.read_csv('../examples/datasets/d_enc.csv')
X = df.drop(columns='label')
y = df['label']
# import and make a regressor
est = BrushRegressor(
    functions=['SplitBest','Add','Mul','Sin','Cos','Exp','Logabs'],
    verbosity=1 # set verbosity==1 to see a progress bar
)

# use like you would a sklearn regressor
est.fit(X,y)
y_pred = est.predict(X)
print('score:', est.score(X,y))
Completed 100% [====================]
score: 0.8972961690538603

You can see the fitness of the final individual by accessing the fitness attribute. Each fitness value corresponds to the objective at the same index defined earlier for the BrushRegressor class. By default, it will try to minimize "error" and "size".

print(est.best_estimator_.fitness)
print(est.objectives)
Fitness(9.282899 19.000000 )
['error', 'size']

A fitness in Brush is actually more than a tuple: it is a class with all of the Boolean comparison operators overloaded, for ease of use when prototyping with Brush.

It also infers the weight of each objective to automatically handle minimization and maximization objectives.

To see the weights, you can try:

est.best_estimator_.fitness.weights
[-1.0, -1.0]
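The weights make objective direction uniform: with a weight of -1.0, a minimization objective becomes maximization of the weighted value, so "greater weighted values" always means "better". A small illustrative sketch with plain Python lists (not the pybrush classes themselves):

```python
# Values and weights as reported above for the best individual.
values  = [9.28, 19.0]     # error and size, both minimized
weights = [-1.0, -1.0]

# Weighted values: larger (less negative) is always better.
wvalues = [v * w for v, w in zip(values, weights)]
print(wvalues)  # [-9.28, -19.0]

# A model with smaller error and size gets larger weighted values,
# so an ordinary lexicographic comparison favors it.
better = [v * w for v, w in zip([5.0, 10.0], weights)]
print(better > wvalues)  # True
```

This is the same sign convention used by wvalues in the serialized fitness shown below.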

Serialization#

Brush lets you serialize the entire individual, or just the program or fitness it wraps. It uses JSON to serialize the objects, implemented through each object’s get- and set-state methods:

estimator_dict = est.best_estimator_.__getstate__()

for k, v in estimator_dict.items():
    print(k, v)
fitness {'complexity': 304, 'crowding_dist': 3.4028234663852886e+38, 'dcounter': 0, 'depth': 3, 'dominated': [0, 2, 29, 62, 80, 127, 146], 'loss': 9.282898902893066, 'loss_v': 9.282898902893066, 'rank': 1, 'size': 19, 'values': [9.282898902893066, 19.0], 'weights': [-1.0, -1.0], 'wvalues': [-9.282898902893066, -19.0]}
id 1910
objectives ['error', 'size']
parent_id [1858]
program {'Tree': [{'W': 0.75, 'arg_types': ['ArrayF', 'ArrayF'], 'center_op': True, 'feature': 'x0', 'fixed': False, 'is_weighted': False, 'name': 'SplitBest', 'node_type': 'SplitBest', 'prob_change': 1.0, 'ret_type': 'ArrayF', 'sig_dual_hash': 9996486434638833164, 'sig_hash': 10001460114883919497}, {'W': 0.8050000071525574, 'arg_types': ['ArrayF', 'ArrayF'], 'center_op': True, 'feature': 'x0', 'fixed': False, 'is_weighted': False, 'name': 'SplitBest', 'node_type': 'SplitBest', 'prob_change': 1.0, 'ret_type': 'ArrayF', 'sig_dual_hash': 9996486434638833164, 'sig_hash': 10001460114883919497}, {'W': 30.494491577148438, 'arg_types': [], 'center_op': True, 'feature': 'MeanLabel', 'fixed': False, 'is_weighted': True, 'name': 'MeanLabel', 'node_type': 'MeanLabel', 'prob_change': 1.0, 'ret_type': 'ArrayF', 'sig_dual_hash': 509529941281334733, 'sig_hash': 17717457037689164349}, {'W': 49.47871017456055, 'arg_types': [], 'center_op': True, 'feature': 'x0', 'fixed': False, 'is_weighted': True, 'name': 'Terminal', 'node_type': 'Terminal', 'prob_change': 1.0, 'ret_type': 'ArrayF', 'sig_dual_hash': 509529941281334733, 'sig_hash': 17717457037689164349}, {'W': 1.0, 'arg_types': ['ArrayF', 'ArrayF'], 'center_op': True, 'feature': '', 'fixed': False, 'is_weighted': False, 'name': 'Add', 'node_type': 'Add', 'prob_change': 1.0, 'ret_type': 'ArrayF', 'sig_dual_hash': 9996486434638833164, 'sig_hash': 10001460114883919497}, {'W': 0.018234524875879288, 'arg_types': [], 'center_op': True, 'feature': 'x1', 'fixed': False, 'is_weighted': True, 'name': 'Terminal', 'node_type': 'Terminal', 'prob_change': 1.0, 'ret_type': 'ArrayF', 'sig_dual_hash': 509529941281334733, 'sig_hash': 17717457037689164349}, {'W': 10.46687126159668, 'arg_types': [], 'center_op': True, 'feature': 'x6', 'fixed': False, 'is_weighted': True, 'name': 'Terminal', 'node_type': 'Terminal', 'prob_change': 1.0, 'ret_type': 'ArrayF', 'sig_dual_hash': 509529941281334733, 'sig_hash': 17717457037689164349}], 'is_fitted_': True}

With serialization, you can use pickle to save and load just programs or even the entire individual.

import pickle
import os, tempfile

individual_file = os.path.join(tempfile.mkdtemp(), 'individual.json')
with open(individual_file, "wb") as f:
    pickle.dump(est.best_estimator_, f)

program_file = os.path.join(tempfile.mkdtemp(), 'program.json')
with open(program_file, "wb") as f:
    pickle.dump(est.best_estimator_.program, f)

Then we can load it later with:

with open(individual_file, "rb") as f:
    loaded_estimator = pickle.load(f)
    print(loaded_estimator.get_model())
If(x0>0.75,If(x0>0.81,30.49*MeanLabel,49.48*x0),Add(0.02*x1,10.47*x6))

String#

Now that we have trained a model, est.best_estimator_ contains our symbolic model. We can view it as a string:

print(est.best_estimator_.get_model())
If(x0>0.75,If(x0>0.81,30.49*MeanLabel,49.48*x0),Add(0.02*x1,10.47*x6))

Quick Little Tree#

Or, we can view it as a compact tree:

print(est.best_estimator_.get_model("tree"))
SplitBest
|-SplitBest
  |-30.49*MeanLabel
  |-49.48*x0
|-Add
|  |-0.02*x1
|  |-10.47*x6

GraphViz#

If we are feeling fancy 🎩, we can also view it as a graph in dot format. Let’s import graphviz and make a nicer plot.

import graphviz

model = est.best_estimator_.get_model("dot")
graphviz.Source(model)
[rendered graph of the model]

The model variable is now a little program in the dot language, which we can inspect directly.

print(model)
digraph G {
"7f370003ebc0" [label="x0>0.75?"];
"7f370003ebc0" -> "7f37000b5410" [headlabel="",taillabel="Y"];
"7f370003ebc0" -> "7f370003f120" [headlabel="",taillabel="N"];
"7f37000b5410" [label="x0>0.81?"];
"7f37000b5410" -> "7f370003ef80" [headlabel="",taillabel="Y"];
"7f37000b5410" -> "x0" [headlabel="49.48",taillabel="N"];
"7f370003ef80" [label="30.49*MeanLabel"];
"x0" [label="x0"];
"7f370003f120" [label="Add"];
"7f370003f120" -> "x1" [label="0.02"];
"7f370003f120" -> "x6" [label="10.47"];
"x1" [label="x1"];
"x6" [label="x6"];
}

Tweaking Graphs#

The dot manual has lots of options for tweaking graphs. You can apply them by manually editing model, but Brush also provides a function, get_dot_model(), through which you can pass additional arguments to dot.

For example, let’s view the graph from Left-to-Right:

model = est.best_estimator_.get_dot_model("rankdir=LR;")
graphviz.Source(model)
[rendered graph of the model, laid out left-to-right]