This paper explores fairness problems in machine learning and focuses on the “p% rule”
as a measure of fairness. While the “p% rule” provides a clear measure of fairness, it has
limitations and should be used alongside other metrics. The author proposes a measure of
decision boundary fairness that addresses both disparate treatment and disparate impact and
presents two complementary formulations: (1) Maximizing Accuracy within the Constraints
of Fairness; (2) Maximizing Fairness within the Constraints of Accuracy. The models are
evaluated on both synthetic and real-world datasets. The experimental results show progress
toward a fairer and more accurate model, but also highlight the limitations and trade-offs of the proposed
approaches. The pursuit of fairness in machine learning requires ongoing research and collaboration, with a focus on mitigating disparate treatment, improving the computational efficiency of the proposed mechanisms, and extending them toward individual fairness.