Saving and loading populations#
Another feature Brush implements is the ability to save and load entire populations. We use JSON notation to store the population in a human-readable file. Likewise, we can feed an estimator a previously saved population file to serve as the starting point for the evolution.
In this notebook, we will walk through how to use the save_population and load_population parameters.
We start by getting a sample dataset and splitting it into X and y:
import pandas as pd
from pybrush import BrushRegressor
# load data
df = pd.read_csv('../examples/datasets/d_enc.csv')
X = df.drop(columns='label')
y = df['label']
To save the population after finishing the evolution, you need to set the save_population parameter to a non-empty string. The final population is then stored in that specific file.
In this example, we create a temporary file.
import pickle
import os, tempfile
pop_file = os.path.join(tempfile.mkdtemp(), 'population.json')
# set verbosity=2 to see the full report
est = BrushRegressor(
    functions=['SplitBest','Add','Mul','Sin','Cos','Exp','Logabs'],
    max_gens=10,
    objectives=["scorer", "complexity"],
    scorer='mse',
    save_population=pop_file,
    use_arch=True, # Only the pareto front of last gen will be stored in archive
    verbosity=2
)
est.fit(X,y)
y_pred = est.predict(X)
print('score:', est.score(X,y))
Generation 1/10 [////// ]
Best model on Val:If(x0>0.75,0.25*Add(31.49*Sin(-164.64*x0),0.20*x1),-142.85*Cos(0.84*Exp(0.95*x4)))
Train Loss (Med): 10.44742 (74.37033)
Val Loss (Med): 10.44742 (74.37033)
Median Size (Max): 7 (123)
Median complexity (Max): 360 (2008928968)
Time (s): -0.22113
Generation 2/10 [/////////// ]
Best model on Val:46.86*Cos(-5.73*Logabs(-3.88*Exp(Sin(-16.12*Sin(If(x0>0.75,0.94*x2,1.00*x3))))))
Train Loss (Med): 10.37396 (38.19257)
Val Loss (Med): 10.37396 (38.19257)
Median Size (Max): 6 (123)
Median complexity (Max): 120 (1792519880)
Time (s): 1.33424
Generation 3/10 [//////////////// ]
Best model on Val:46.89*Cos(-5.73*Logabs(-3.88*Exp(Sin(-16.12*Sin(If(x0>0.75,0.94*x2,1.00*x3))))))
Train Loss (Med): 10.37383 (19.28615)
Val Loss (Med): 10.37383 (19.28615)
Median Size (Max): 6 (85)
Median complexity (Max): 360 (1792519880)
Time (s): 1.99511
Generation 4/10 [///////////////////// ]
Best model on Val:41.88*Cos(-5.94*Logabs(-3.68*Exp(Sin(-16.36*Sin(If(x0>0.75,0.94*x2,1.00*x3))))))
Train Loss (Med): 10.37381 (18.86849)
Val Loss (Med): 10.37381 (18.86849)
Median Size (Max): 7 (82)
Median complexity (Max): 72 (1792519880)
Time (s): 2.64494
Generation 5/10 [////////////////////////// ]
Best model on Val:If(x0>0.75,0.25*Add(31.49*Sin(-164.64*x0),0.20*x1),-16.97*Cos(5.02*x0))
Train Loss (Med): 9.98502 (17.90351)
Val Loss (Med): 9.98502 (17.90351)
Median Size (Max): 7 (82)
Median complexity (Max): 278 (1792519880)
Time (s): 3.27326
Generation 6/10 [/////////////////////////////// ]
Best model on Val:If(x0>0.75,0.25*Add(31.49*Sin(-164.64*x0),0.20*x1),-16.97*Cos(5.02*x0))
Train Loss (Med): 9.98502 (22.98119)
Val Loss (Med): 9.98502 (22.98119)
Median Size (Max): 7 (82)
Median complexity (Max): 120 (1792519880)
Time (s): 3.74381
Generation 7/10 [//////////////////////////////////// ]
Best model on Val:If(x0>0.75,0.25*Add(31.47*Sin(-164.64*x0),0.20*x1),-16.97*Cos(5.02*x0))
Train Loss (Med): 9.98502 (17.94969)
Val Loss (Med): 9.98502 (17.94969)
Median Size (Max): 7 (82)
Median complexity (Max): 120 (105480)
Time (s): 4.23275
Generation 8/10 [///////////////////////////////////////// ]
Best model on Val:If(x0>0.75,0.25*Add(31.47*Sin(-164.64*x0),0.20*x1),-16.97*Cos(5.02*x0))
Train Loss (Med): 9.98502 (17.94969)
Val Loss (Med): 9.98502 (17.94969)
Median Size (Max): 8 (66)
Median complexity (Max): 169 (105480)
Time (s): 4.65756
Generation 9/10 [////////////////////////////////////////////// ]
Best model on Val:If(x0>0.75,0.25*Add(31.47*Sin(-164.64*x0),0.20*x1),-16.97*Cos(5.02*x0))
Train Loss (Med): 9.98502 (17.94969)
Val Loss (Med): 9.98502 (17.94969)
Median Size (Max): 8 (66)
Median complexity (Max): 169 (105480)
Time (s): 5.21162
Generation 10/10 [//////////////////////////////////////////////////]
Best model on Val:If(x0>0.75,0.25*Add(31.47*Sin(-164.64*x0),0.20*x1),-16.97*Cos(5.02*x0))
Train Loss (Med): 9.98502 (14.64283)
Val Loss (Med): 9.98502 (14.64283)
Median Size (Max): 12 (66)
Median complexity (Max): 648 (105480)
Time (s): 5.65408
Saved population to file /tmp/tmpxpisicq_/population.json
score: 0.8895281125331685
Loading a previous population is done by providing load_population with a string value corresponding to a JSON file generated by Brush. In our case, we will use the same file from the previous code block.
After loading the population, we run the evolution for 10 more generations, and we can see that the first generation starts from the previous population. This means that the population was successfully saved and loaded.
est = BrushRegressor(
    functions=['SplitBest','Add','Mul','Sin','Cos','Exp','Logabs'],
    load_population=pop_file,
    max_gens=10,
    verbosity=1
)
est.fit(X,y)
y_pred = est.predict(X)
print('score:', est.score(X,y))
Loaded population from /tmp/tmpxpisicq_/population.json of size = 200
Completed 100% [====================]
saving final population as archive...
score: 0.9298016661101332
There is a convenient way of accessing individuals in the population: just use the index of the individual in the est.population_ list.
# the population contains all individuals, unlike the archive
print("population size:", len(est.population_))
print("archive size :", len(est.archive_))
est.population_[0]
population size: 100
archive size : 13
{'fitness': {'complexity': 2,
'crowding_dist': 0.0,
'dcounter': 0,
'depth': 1,
'dominated': [75, 76],
'linear_complexity': 2,
'loss': 90.3851318359375,
'loss_v': 90.3851318359375,
'rank': 1,
'size': 1,
'values': [90.3851318359375, 1.0],
'weights': [-1.0, -1.0],
'wvalues': [-90.3851318359375, -1.0]},
'id': 0,
'is_fitted_': False,
'objectives': ['mse', 'size'],
'parent_id': [],
'program': {'Tree': [{'W': 24.585369110107422,
'arg_types': [],
'center_op': True,
'feature': 'Constant',
'fixed': False,
'is_weighted': True,
'name': 'Constant',
'node_type': 'Constant',
'prob_change': 0.6167147755622864,
'ret_type': 'ArrayF',
'sig_dual_hash': 509529941281334733,
'sig_hash': 17717457037689164349}],
'is_fitted_': True},
'variation': 'born'}
You can convert the JSON representation back into a fully functional individual by wrapping it in the individual class. It is important that the type of individual (i.e., classification or regression) is the same.
Unlike the archive (which is sorted by complexity), the individuals in the population have no specific order: individual 5 may or may not be more complex than individual 10, for example. You can impose an order yourself, as sketched after the next cell.
from pybrush import individual
loaded_from_population = individual.RegressorIndividual.from_json(est.population_[2])
print(loaded_from_population.get_model("tree"))
24.59
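A minimal sketch of such sorting, using the complexity value from the fitness dict shown earlier (the population itself carries no ordering guarantees):

# sort the serialized individuals by the complexity stored in their fitness
by_complexity = sorted(est.population_, key=lambda ind: ind['fitness']['complexity'])
print([ind['fitness']['complexity'] for ind in by_complexity[:5]])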
Saving just the archive#
In case you want to use an expression other than the final best_estimator_, Brush provides the archive option.
The archive is just the Pareto front of the population. You can use predict_archive (and predict_proba_archive if using a BrushClassifier) to call the prediction methods for the entire archive, instead of only the selected best individual; a short sketch follows the next cell.
But first, you need to enable this option with use_arch=True. When set to False, the archive will store the entire final population.
est = BrushRegressor(
    functions=['SplitBest','Add','Mul','Sin','Cos','Exp','Logabs'],
    load_population=pop_file,
    use_arch=True,
    max_gens=10,
    verbosity=1
)
est.fit(X,y)
# accessing first expression from the archive. It is serialized as a dict
print(est.archive_[0]['fitness'])
Loaded population from /tmp/tmpxpisicq_/population.json of size = 200
Completed 100% [====================]
{'complexity': 486080840, 'crowding_dist': 0.0, 'dcounter': 0, 'depth': 7, 'dominated': [], 'linear_complexity': 105, 'loss': 3.7732295989990234, 'loss_v': 3.7732295989990234, 'rank': 1, 'size': 31, 'values': [3.7732295989990234, 31.0], 'weights': [-1.0, -1.0], 'wvalues': [-3.7732295989990234, -31.0]}
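As a quick sketch of the archive prediction interface described above, predict_archive returns one entry per archive member, each holding the individual's id and its predictions (it is also demonstrated with a pickled classifier later in this notebook):

# each entry is a dict with the individual's id and its predictions
archive_preds = est.predict_archive(X)
print(archive_preds[0]['id'], archive_preds[0]['y_pred'][:5])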
You can open the serialized file and change individuals' programs manually.
This also allows us to keep checkpoints of the execution.
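As an illustration of what such a manual edit looks like, here is a minimal sketch that edits the serialized dict of one individual in memory and rebuilds it with from_json; editing the JSON file on disk works on the same structure (the new weight value below is arbitrary):

import copy

# take a serialized individual, overwrite the weight of its root node,
# and rebuild a functional individual from the edited dict
edited_dict = copy.deepcopy(est.population_[2])
edited_dict['program']['Tree'][0]['W'] = 1.0  # arbitrary new weight

edited = individual.RegressorIndividual.from_json(edited_dict)
print(edited.get_model("tree"))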
Using population files with classification#
To give another example, we do a two-step fit in the cells below.
First, we run the evolution and save the population to a file; then, we load it and keep evolving the individuals.
What is different, though, is the scorer being optimized: the first run uses the log loss, and the second run uses average_precision_score.
from pybrush import BrushClassifier
# load data
df = pd.read_csv('../examples/datasets/d_analcatdata_aids.csv')
X = df.drop(columns='target')
y = df['target']
pop_file = os.path.join(tempfile.mkdtemp(), 'population.json')
est = BrushClassifier(
    functions=['SplitBest','Add','Mul','Sin','Cos','Exp','Logabs'],
    max_gens=10,
    objectives=["scorer", "complexity"],
    scorer="log",
    save_population=pop_file,
    pop_size=200,
    verbosity=2
)
est.fit(X,y)
print("Best model:", est.best_estimator_.get_model())
print('score:', est.score(X,y))
Generation 1/10 [////// ]
Best model on Val:Logistic(Sum(-0.63,If(AIDS>15890.50,10.04,If(Total>1572255.50,-0.62,0.72))))
Train Loss (Med): 0.50912 (0.69315)
Val Loss (Med): 0.50912 (0.69315)
Median Size (Max): 11 (91)
Median complexity (Max): 992 (2029842336)
Time (s): 0.46829
Generation 2/10 [/////////// ]
Best model on Val:Logistic(Sum(-0.63,If(AIDS>15890.50,10.04,If(Total>1572255.50,-0.62,0.72))))
Train Loss (Med): 0.50912 (0.69294)
Val Loss (Med): 0.50912 (0.69294)
Median Size (Max): 12 (87)
Median complexity (Max): 128 (2029842336)
Time (s): 0.65486
Generation 3/10 [//////////////// ]
Best model on Val:Logistic(Sum(3.80,If(AIDS>15890.50,1.00,-4.54*Exp(0.44*Cos(If(Total>1572255.50,-7.64,3.24*Mul(1.00,3.24*Cos(Sum(-1.47,0.13*AIDS)))))))))
Train Loss (Med): 0.44384 (0.64347)
Val Loss (Med): 0.44384 (0.64347)
Median Size (Max): 15 (87)
Median complexity (Max): 128 (2029842336)
Time (s): 0.96243
Generation 4/10 [///////////////////// ]
Best model on Val:Logistic(Sum(1.39,If(AIDS>15890.50,1.00,-2.70*Cos(If(Total>1572255.50,12.88,If(AIDS>123.00,-10.69,0.05*Logabs(-9.26*AIDS)))))))
Train Loss (Med): 0.41213 (0.57942)
Val Loss (Med): 0.41213 (0.57942)
Median Size (Max): 28 (87)
Median complexity (Max): 31904 (2029842336)
Time (s): 1.24020
Generation 5/10 [////////////////////////// ]
Best model on Val:Logistic(Sum(2.52,If(AIDS>15890.50,1.00,-3.80*Cos(Sum(0.47,If(Total>1572255.50,12.20,If(AIDS>123.00,1.00,-0.03*Logabs(443.94*AIDS))))))))
Train Loss (Med): 0.40162 (0.54763)
Val Loss (Med): 0.40162 (0.54763)
Median Size (Max): 30 (87)
Median complexity (Max): 31904 (1908211616)
Time (s): 1.58313
Generation 6/10 [/////////////////////////////// ]
Best model on Val:Logistic(Sum(4.04,If(AIDS>15890.50,1.00,-5.14*Cos(If(Total>1572255.50,0.00,If(AIDS>123.00,1.56*Mul(1.00,If(AIDS>1653.50,0.00,If(AIDS>199.00,10.84,0.95))),0.06))))))
Train Loss (Med): 0.36771 (0.50492)
Val Loss (Med): 0.36771 (0.50492)
Median Size (Max): 31 (79)
Median complexity (Max): 11360 (1908211616)
Time (s): 2.02055
Generation 7/10 [//////////////////////////////////// ]
Best model on Val:Logistic(Sum(4.04,If(AIDS>15890.50,1.00,-5.14*Cos(If(Total>1572255.50,0.00,If(AIDS>123.00,1.56*Mul(1.00,If(AIDS>1653.50,0.00,If(AIDS>199.00,10.84,0.95))),0.06))))))
Train Loss (Med): 0.36771 (0.48550)
Val Loss (Med): 0.36771 (0.48550)
Median Size (Max): 31 (74)
Median complexity (Max): 32288 (1908211616)
Time (s): 2.47527
Generation 8/10 [///////////////////////////////////////// ]
Best model on Val:Logistic(Sum(1.00,If(AIDS>15890.50,1.00,-2.00*Exp(-4.27*Cos(If(Total>1572255.50,-17.71*Logabs(51.81*AIDS),If(AIDS>123.00,1.00,-5.47*Cos(Sum(1.37,0.18*AIDS)))))))))
Train Loss (Med): 0.33648 (0.46860)
Val Loss (Med): 0.33648 (0.46860)
Median Size (Max): 32 (54)
Median complexity (Max): 56480 (1908211616)
Time (s): 2.98516
Generation 9/10 [////////////////////////////////////////////// ]
Best model on Val:Logistic(Sum(1.00,If(AIDS>15890.50,1.00,-2.00*Exp(-4.27*Cos(If(Total>1572255.50,-17.71*Logabs(51.81*AIDS),If(AIDS>123.00,1.00,-5.47*Cos(Sum(1.37,0.18*AIDS)))))))))
Train Loss (Med): 0.33648 (0.46043)
Val Loss (Med): 0.33648 (0.46043)
Median Size (Max): 31 (54)
Median complexity (Max): 56480 (1908211616)
Time (s): 3.47623
Generation 10/10 [//////////////////////////////////////////////////]
Best model on Val:Logistic(Sum(6.84,If(AIDS>15890.50,1.00,-7.70*Exp(-0.71*Cos(If(Total>1572255.50,4.64,If(AIDS>123.00,1.00,5.42*Cos(Sum(-4.96,0.20*AIDS)))))))))
Train Loss (Med): 0.33520 (0.45006)
Val Loss (Med): 0.33520 (0.45006)
Median Size (Max): 31 (66)
Median complexity (Max): 32672 (1625096096)
Time (s): 3.93048
saving final population as archive...
Saved population to file /tmp/tmpi5q6ccgl/population.json
Best model: Logistic(Sum(6.84,If(AIDS>15890.50,1.00,-7.70*Exp(-0.71*Cos(If(Total>1572255.50,4.64,If(AIDS>123.00,1.00,5.42*Cos(Sum(-4.96,0.20*AIDS)))))))))
score: 0.86
from sklearn.metrics import accuracy_score
accuracy_score(y, est.predict(X))
0.86
est = BrushClassifier(
    functions=['SplitBest','Add','Mul','Sin','Cos','Exp','Logabs'],
    load_population=pop_file,
    objectives=["scorer", "complexity"],
    scorer="average_precision_score",
    max_gens=10,
    validation_size=0.0,
    pop_size=200, # make sure this is the same as loaded pop
    use_arch=True,
    verbosity=1
)
est.fit(X,y)
print("Best model:", est.best_estimator_.get_model())
print('score:', est.score(X,y))
Loaded population from /tmp/tmpi5q6ccgl/population.json of size = 400
Completed 100% [====================]
Best model: Logistic(Sum(1.44,If(AIDS>15890.50,1.00,-5.62*Exp(-5.09*Cos(If(Total>1572255.50,-13.53*Logabs(AIDS),If(AIDS>123.00,1.00,5.16*Cos(Sum(-1.47,0.13*AIDS)))))))))
score: 0.9
We can inspect the fitness object and see that the scorer now matches the average precision score metric:
# Fitness is (scorer, complexity)
print(est.best_estimator_.fitness)
Fitness(0.888867 80.000000 )
from sklearn.metrics import average_precision_score
# takes y_true as the first argument and the predicted scores (probabilities) as the second
average_precision_score(y, est.predict_proba(X)[:, 1]) #, average='weighted')
0.9224849564228874
Serialization with pickle#
You can save the entire model (best individual, parameters, and archive) with pickle.
At the current stage, Brush does not serialize the search space or dataset references, only the information necessary to load a previously trained model and make predictions with it.
est
BrushClassifier(algorithm='nsga2', bandit='dynamic_thompson', batch_size=1.0, constants_simplification=True, cx_prob=0.14285714285714285, functions=['SplitBest', 'Add', 'Mul', 'Sin', 'Cos', 'Exp', 'Logabs'], inexact_simplification=True, initialization='uniform', load_population='/tmp/tmpi5q6ccgl/population.json', logfile='', max_depth=10, max_gens=10, max_size=100, m... 'point': 0.16666666666666666, 'subtree': 0.16666666666666666, 'toggle_weight_off': 0.16666666666666666, 'toggle_weight_on': 0.16666666666666666}, n_jobs=1, num_islands=5, objectives=['scorer', 'complexity'], pop_size=200, random_state=None, save_population='', scorer='average_precision_score', sel='lexicase', shuffle_split=False, surv='nsga2', use_arch=True, val_from_arch=True, ...)
import pickle
est_file = os.path.join(tempfile.mkdtemp(), 'est.pkl')
with open(est_file, 'wb') as f:
    pickle.dump(est, f)

with open(est_file, 'rb') as f:
    loaded_est = pickle.load(f)
print(est.predict(X))
print(loaded_est.predict(X))
[ True True True True True True True True True True True True
True True True True True True True True True True True True
True False False False False True False False True False True False
True False True False False False False False False False False False
False False]
[ True True True True True True True True True True True True
True True True True True True True True True True True True
True False False False False True False False True False True False
True False True False False False False False False False False False
False False]
print(est.predict_archive(X)[0])
print(loaded_est.predict_archive(X)[0])
{'id': 494, 'y_pred': array([False, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, False, False,
False, False, False, True, True, True, False, True, False,
True, False, True, False, False, False, False, False, False,
False, False, False, False, False])}
{'id': 494, 'y_pred': array([False, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, False, False,
False, False, False, True, True, True, False, True, False,
True, False, True, False, False, False, False, False, False,
False, False, False, False, False])}
Stop/resume the fitting of an estimator#
In the code below, we mimic how PyTorch models are trained: the training can be stopped at any time and resumed later.
The idea is to demonstrate how to use population files to store checkpoints and continue from the last saved checkpoint.
def train(est, X, y):
    checkpoint = os.path.join(tempfile.mkdtemp(), 'brush_pop_checkpoint.json')

    step = 5
    max_gens = est.max_gens

    est.max_gens = step
    est.save_population = checkpoint
    est.load_population = ""

    # You can set validation_size to a value greater than zero
    # and shuffle_split to True to get random batches of data
    est.shuffle_split = True
    est.validation_size = 0.2

    for g in range(max_gens // step):
        print(f"Progress {g + 1}/{max_gens // step}")

        est.fit(X, y)  # Notice that this will reset the MAB every time!

        # Enable loading the checkpoint after the first run
        est.load_population = checkpoint

        print("Best model:", est.best_estimator_.get_model())
        print('score :', est.score(X, y))

    # Restore the initial state
    est.max_gens = max_gens
est = BrushClassifier(
    objectives=["scorer", "linear_complexity"],
    scorer="balanced_accuracy",
    max_gens=50,
    validation_size=0.2,
    pop_size=100,
    max_depth=20,
    max_size=50,
    verbosity=1
)
train(est, X, y)
Progress 1/10
Completed 100% [====================]
saving final population as archive...
Saved population to file /tmp/tmpjgcfekf5/brush_pop_checkpoint.json
Best model: Logistic(Sum(0.00,1.00*Prod(1.00*Max(AIDS,1.00*Pow(1.00,1.00),1.00),1.00*Tan(1.00*Median(1.00*Prod(1.00*Abs(AIDS),1.00,1.00,AIDS),1.00)))))
score : 0.52
Progress 2/10
Loaded population from /tmp/tmpjgcfekf5/brush_pop_checkpoint.json of size = 200
Completed 100% [====================]
saving final population as archive...
Saved population to file /tmp/tmpjgcfekf5/brush_pop_checkpoint.json
Best model: Logistic(Sum(-20.23,AIDS))
score : 0.56
Progress 3/10
Loaded population from /tmp/tmpjgcfekf5/brush_pop_checkpoint.json of size = 200
Completed 100% [====================]
saving final population as archive...
Saved population to file /tmp/tmpjgcfekf5/brush_pop_checkpoint.json
Best model: Logistic(Sum(-215.72,AIDS))
score : 0.62
Progress 4/10
Loaded population from /tmp/tmpjgcfekf5/brush_pop_checkpoint.json of size = 200
Completed 100% [====================]
saving final population as archive...
Saved population to file /tmp/tmpjgcfekf5/brush_pop_checkpoint.json
Best model: Logistic(Sum(-0.32,0.00*AIDS))
score : 0.68
Progress 5/10
Loaded population from /tmp/tmpjgcfekf5/brush_pop_checkpoint.json of size = 200
Completed 100% [====================]
saving final population as archive...
Saved population to file /tmp/tmpjgcfekf5/brush_pop_checkpoint.json
Best model: Logistic(Sum(-0.60,0.00*AIDS))
score : 0.68
Progress 6/10
Loaded population from /tmp/tmpjgcfekf5/brush_pop_checkpoint.json of size = 200
Completed 100% [====================]
saving final population as archive...
Saved population to file /tmp/tmpjgcfekf5/brush_pop_checkpoint.json
Best model: Logistic(Sum(-125.60,0.04*AIDS))
score : 0.6
Progress 7/10
Loaded population from /tmp/tmpjgcfekf5/brush_pop_checkpoint.json of size = 200
Completed 100% [====================]
saving final population as archive...
Saved population to file /tmp/tmpjgcfekf5/brush_pop_checkpoint.json
Best model: Logistic(Sum(-125.60,AIDS))
score : 0.64
Progress 8/10
Loaded population from /tmp/tmpjgcfekf5/brush_pop_checkpoint.json of size = 200
Completed 100% [====================]
saving final population as archive...
Saved population to file /tmp/tmpjgcfekf5/brush_pop_checkpoint.json
Best model: Logistic(Sum(-0.14,If(AIDS>16496.00,1.00,0.00)))
score : 0.66
Progress 9/10
Loaded population from /tmp/tmpjgcfekf5/brush_pop_checkpoint.json of size = 200
Completed 100% [====================]
saving final population as archive...
Saved population to file /tmp/tmpjgcfekf5/brush_pop_checkpoint.json
Best model: Logistic(Sum(-0.52,If(AIDS>14174.00,1.00,If(Total>2412823.50,0.00,If(AIDS>123.00,1.00,0.00)))))
score : 0.78
Progress 10/10
Loaded population from /tmp/tmpjgcfekf5/brush_pop_checkpoint.json of size = 200
Completed 100% [====================]
saving final population as archive...
Saved population to file /tmp/tmpjgcfekf5/brush_pop_checkpoint.json
Best model: Logistic(Sum(-0.17,If(AIDS>15890.50,1.00,0.00)))
score : 0.68
By default, sklearn estimators will reset when fit is called a second time. To continue from the last fit, you can call partial_fit and Brush will resume the training.
If you want, you can change parameters of the est object before calling partial_fit to update the execution settings, as sketched below.
It is important that the data has the same features (same names and dtypes) as the data used in the previous fit/partial_fit.
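For instance (an illustrative tweak, not executed in this notebook; the train function above updates attributes the same way):

# any constructor parameter can be updated in place before resuming
est.max_gens = 10  # the next partial_fit call would then run 10 more generations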
print(est.best_estimator_.get_model())
print(est.best_estimator_.fitness)
est.partial_fit(X, y)
print(est.best_estimator_.get_model())
print(est.best_estimator_.fitness)
Logistic(Sum(-0.17,If(AIDS>15890.50,1.00,0.00)))
Fitness(0.800000 25.000000 )
Loaded population from /tmp/tmpjgcfekf5/brush_pop_checkpoint.json of size = 200
Completed 100% [====================]
saving final population as archive...
Saved population to file /tmp/tmpjgcfekf5/brush_pop_checkpoint.json
Logistic(Sum(-0.71,If(AIDS>15890.50,1.00,If(Total>1586725.00,0.00,0.00*AIDS))))
Fitness(0.800000 44.000000 )
The partial_fit method also allows you to fix an initial portion of the tree before doing the new fit.
You can also choose to leave leaves out of this locking mechanism; this way, the terminals close to the root are unlocked and can change.
If you set a big depth and also force leaves to be locked, there may be some (smaller) programs in the population that will not change at all during the run.
print(est.best_estimator_.get_model())
print(est.best_estimator_.fitness)
est.partial_fit(X, y, lock_nodes_depth=2, skip_leaves=True)
print(est.best_estimator_.get_model())
print(est.best_estimator_.fitness)
Logistic(Sum(-0.71,If(AIDS>15890.50,1.00,If(Total>1586725.00,0.00,0.00*AIDS))))
Fitness(0.800000 44.000000 )
Loaded population from /tmp/tmpjgcfekf5/brush_pop_checkpoint.json of size = 200
Completed 100% [====================]
saving final population as archive...
Saved population to file /tmp/tmpjgcfekf5/brush_pop_checkpoint.json
Logistic(Sum(-0.48,If(AIDS>15890.50,1.00,If(Total>1572255.50,0.00,If(AIDS>123.00,1.00,If(AIDS>51.50,0.00,If(AIDS>20.00,1.00,0.00)))))))
Fitness(0.900000 69.000000 )