Introduction

Fomo is designed to be very general. It works with any ML model that has a scikit-learn interface (i.e., fit() and predict() methods) and accepts sample weights as part of its loss function. Specifically, the fit() method should take an optional argument, sample_weight, that assigns a weight to each observation in X, y. That covers nearly all estimators in scikit-learn, including linear models (linear and logistic regression, lasso), SVMs, neural nets, decision trees, and ensemble methods like random forests and gradient boosting, as well as scikit-learn-compatible estimators such as XGBoost.
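
For instance, scikit-learn's LogisticRegression exposes exactly this interface. The snippet below is standard scikit-learn usage, not Fomo-specific code; it simply shows the kind of estimator that qualifies:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0., 1.], [1., 0.], [1., 1.], [0., 0.]])
y = np.array([1, 0, 1, 0])

# Per-sample weights: any estimator whose fit() accepts these qualifies.
weights = np.array([1.0, 0.5, 2.0, 1.0])

est = LogisticRegression()
est.fit(X, y, sample_weight=weights)
print(est.predict(X))
```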

In addition, Fomo works with many different metrics of fairness and accuracy. It currently supports the following (a short illustration of subgroup demographic parity appears after the list):

  • Subgroup Fairness (False Positive, False Negative, and Demographic Parity)

  • Differential Fairness (Demographic Parity and Calibration)

  • Multicalibration

  • Proportional Multicalibration
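
As a rough illustration of the first item, subgroup demographic parity measures how far each group's positive prediction rate deviates from the overall rate. This sketch is for intuition only; Fomo's own implementations live in its metrics module and may differ in detail:

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Largest deviation of a group's positive rate from the overall rate."""
    overall = np.mean(y_pred)
    gaps = [abs(np.mean(y_pred[groups == g]) - overall)
            for g in np.unique(groups)]
    return max(gaps)

y_pred = np.array([1, 0, 1, 1, 0, 0])
groups = np.array(["a", "a", "a", "b", "b", "b"])
print(demographic_parity_gap(y_pred, groups))  # ~0.167
```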

Users can also supply any callable they would like to optimize, as long as it matches the call signature of these built-in functions, and can combine any number of performance and fairness metrics to best suit the task they are studying.
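
A custom metric might look like the sketch below. The (y_true, y_pred, groups) signature here is an assumption; the exact call signature expected by Fomo should be taken from its built-in metrics:

```python
import numpy as np

# Assumed signature (y_true, y_pred, groups); match it to the
# built-in metrics before use.
def max_subgroup_fnr(y_true, y_pred, groups):
    """Worst-case false negative rate across protected groups."""
    fnrs = []
    for g in np.unique(groups):
        pos = (groups == g) & (y_true == 1)   # positives in group g
        if pos.any():
            fnrs.append(np.mean(y_pred[pos] == 0))
    return max(fnrs) if fnrs else 0.0
```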

Finally, Fomo works with many of the multi-objective optimization methods available in pymoo, including NSGA-II, NSGA-III, MOEA/D, and others.
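
For example, a pymoo algorithm can be constructed directly and handed to the estimator. Constructing NSGA2 below is standard pymoo usage; the commented-out wiring is an assumption and should be checked against Fomo's estimator documentation:

```python
from pymoo.algorithms.moo.nsga2 import NSGA2

# Any pymoo multi-objective algorithm can drive the search.
algorithm = NSGA2(pop_size=50)

# Hypothetical wiring; the parameter name is assumed, not confirmed:
# est = FomoClassifier(algorithm=algorithm)
```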