Sumon Biswas
Fairness
FairSense: Long-Term Fairness Analysis of ML-Enabled Systems
We propose a simulation-based framework called FairSense to detect and analyze long-term unfairness in ML-enabled systems.
Yining She, Sumon Biswas, Christian Kästner, Eunsuk Kang
Cite · Preprint
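To give a feel for the analysis, here is a minimal sketch of simulation-based long-term fairness analysis in a hypothetical lending scenario; the population dynamics, policy, and parameters are illustrative assumptions, not the FairSense tooling. An ML decision policy interacts with a population over many steps, and we trace how a group-fairness metric drifts as decisions feed back into the population state.

```python
# Hedged sketch: long-term fairness via simulation (hypothetical lending setup).
import numpy as np

rng = np.random.default_rng(0)
n_per_group = 1000
scores = {0: rng.normal(55, 10, n_per_group),    # group 0 starts disadvantaged
          1: rng.normal(65, 10, n_per_group)}
THRESHOLD, GAIN, LOSS = 60.0, 2.0, 4.0           # decision policy and feedback dynamics

def step(scores):
    """One simulation step: approve applicants, observe repayment, update scores."""
    selection_rate = {}
    for g, s in scores.items():
        approved = s >= THRESHOLD
        repaid = rng.random(len(s)) < np.clip(s / 100, 0, 1)  # higher score, more likely to repay
        s[approved & repaid] += GAIN                          # successful loans raise scores
        s[approved & ~repaid] -= LOSS                         # defaults lower them
        selection_rate[g] = approved.mean()
    return selection_rate

gap_history = []
for t in range(50):
    rate = step(scores)
    gap_history.append(rate[1] - rate[0])        # selection-rate gap between groups
print(f"initial gap: {gap_history[0]:+.3f}, gap after 50 steps: {gap_history[-1]:+.3f}")
```

Tracing the gap over many steps is what distinguishes long-term analysis from a one-shot fairness measurement: a policy that looks fair at deployment time can amplify or shrink the gap through feedback.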
Fairify: Fairness Verification of Neural Networks
We proposed Fairify, an approach that makes individual fairness verification tractable for developers. The key idea is that many neurons in the neural network always remain inactive when only a smaller part of the input domain is considered. Fairify therefore leverages white-box access to the models in production and applies pruning based on formal analysis.
Sumon Biswas, Hridesh Rajan
Cite · Code · DOI · PDF
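The pruning idea can be illustrated with interval bound propagation: when verification is restricted to a small box of the input domain, many ReLU neurons can be proven always inactive there and dropped before the network is handed to a formal verifier. The sketch below is not the Fairify implementation; the network weights and input partition are hypothetical placeholders.

```python
# Hedged sketch: find ReLU neurons provably inactive on a restricted input box.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def inactive_relu_neurons(layers, lo, hi):
    """Return, per layer, indices of ReLU neurons that are provably inactive on the box."""
    inactive = []
    for W, b in layers:
        lo, hi = interval_affine(lo, hi, W, b)
        inactive.append(np.where(hi <= 0)[0])          # upper bound <= 0 => always outputs 0
        lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # apply ReLU to the bounds
    return inactive

# Tiny illustrative network and one small sub-box of the input domain (hypothetical values).
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), rng.standard_normal(8)),
          (rng.standard_normal((8, 8)), rng.standard_normal(8))]
lo, hi = np.zeros(4), np.full(4, 0.25)
print(inactive_relu_neurons(layers, lo, hi))
```

Pruning the neurons reported here yields a smaller network that behaves identically on that sub-box, which is what makes the subsequent verification query cheaper.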
Towards Understanding Fairness and its Composition in Ensemble Machine Learning
We comprehensively study popular real-world ensemble methods: bagging, boosting, stacking, and voting. We developed a benchmark of 168 ensemble models collected from Kaggle on four popular fairness datasets, and we used existing fairness metrics to understand how fairness composes in ensembles. Our results show that ensembles can be designed to be fairer without using mitigation techniques. We also identify the interplay between fairness composition and data characteristics to guide fair ensemble design.
Usman Gohar, Sumon Biswas, Hridesh Rajan
Cite · Code · DOI · PDF
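In the spirit of studying how fairness composes, the following sketch compares a group-fairness metric for individual base learners against the voting ensemble that combines them. The synthetic data, estimators, and metric choice are illustrative assumptions, not the paper's benchmark.

```python
# Hedged sketch: fairness of base learners vs. a voting ensemble on toy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

def statistical_parity_difference(y_pred, group):
    """P(yhat = 1 | unprivileged) - P(yhat = 1 | privileged)."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)                      # protected attribute (0 = unprivileged)
X = np.c_[rng.normal(size=(n, 3)), group]          # features include the attribute
y = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

base = [("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(max_depth=5)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0))]
ensemble = VotingClassifier(estimators=base, voting="hard").fit(X, y)

for name, clf in base:
    spd = statistical_parity_difference(clf.fit(X, y).predict(X), group)
    print(f"{name:2s}: SPD = {spd:+.3f}")
print(f"voting ensemble: SPD = "
      f"{statistical_parity_difference(ensemble.predict(X), group):+.3f}")
```

Comparing the per-learner metrics with the ensemble's metric is one concrete way to ask whether the combination step amplifies or dampens the bias of its components.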
Causal Fairness in Machine Learning Pipeline
We used causal reasoning to measure the fairness of individual components in a machine learning pipeline and to remove unfair components from the pipeline.
Fair Preprocessing: Towards Understanding Compositional Fairness of Data Transformers in Machine Learning Pipeline
We introduced causal methods of fairness to reason about the fairness impact of data preprocessing stages in the ML pipeline. We leveraged existing metrics to define fairness measures for these stages, and then conducted a detailed fairness evaluation of the preprocessing stages in 37 pipelines collected from three different sources.
Sumon Biswas, Hridesh Rajan
Cite · Code · DOI · PDF · arXiv
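One simple way to see what "fairness impact of a preprocessing stage" means in practice is to intervene on the pipeline by toggling a single stage and comparing a group-fairness metric downstream. The sketch below is not the paper's causal analysis; the data, the chosen stage, and the metric are hypothetical placeholders.

```python
# Hedged sketch: probe the fairness impact of one preprocessing stage by toggling it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

def statistical_parity_difference(y_pred, group):
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

rng = np.random.default_rng(2)
n = 3000
group = rng.integers(0, 2, n)
X = np.c_[rng.exponential(size=(n, 2)) * (1 + group[:, None]), group]
y = (X[:, 0] - X[:, 1] + 0.5 * group > 1).astype(int)

def spd_with_stage(stage):
    """Fit the same pipeline with the given preprocessing stage (identity = no stage)."""
    pipe = Pipeline([("prep", stage), ("clf", LogisticRegression(max_iter=1000))])
    return statistical_parity_difference(pipe.fit(X, y).predict(X), group)

baseline = spd_with_stage(FunctionTransformer())   # pipeline without the stage
with_scaler = spd_with_stage(StandardScaler())     # pipeline with the stage
print(f"SPD without scaler: {baseline:+.3f}")
print(f"SPD with scaler:    {with_scaler:+.3f}")
print(f"estimated fairness impact of the stage: {with_scaler - baseline:+.3f}")
```

The difference between the two runs attributes a fairness effect to that one stage while everything else in the pipeline is held fixed.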
Our Research Identifies Unfairness at the Component Level of AI-Based Software
We proposed causal reasoning in the machine learning pipeline to measure the fairness of data preprocessing stages.
May 2, 2021
3 min read
Research
Do the Machine Learning Models on a Crowd Sourced Platform Exhibit Bias? An Empirical Study on Model Fairness
We focus on the empirical evaluation of fairness and mitigation in real-world machine learning models. We created a benchmark of 40 top-rated models from Kaggle used for 5 different tasks, and evaluated their fairness using a comprehensive set of fairness metrics. We then applied 7 mitigation techniques to these models and analyzed the fairness, the mitigation results, and the impact on performance.
Sumon Biswas, Hridesh Rajan
Cite · Code · DOI · PDF · arXiv
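The kind of evaluation described above can be sketched end to end with a small, self-contained example: compute a few group-fairness metrics for a trained model, apply a simple pre-processing mitigation (reweighing in the style of Kamiran and Calders), and compare the metrics before and after. The data, model, and metric selection here are illustrative assumptions, not the paper's benchmark or tooling.

```python
# Hedged sketch: fairness metrics before/after a reweighing-style mitigation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fairness_metrics(y_true, y_pred, group):
    unpriv, priv = group == 0, group == 1
    spd = y_pred[unpriv].mean() - y_pred[priv].mean()          # statistical parity difference
    di = y_pred[unpriv].mean() / max(y_pred[priv].mean(), 1e-9)  # disparate impact
    def tpr(mask):                                             # true positive rate per group
        return y_pred[mask & (y_true == 1)].mean()
    eod = tpr(unpriv) - tpr(priv)                              # equal opportunity difference
    return {"SPD": round(spd, 3), "DI": round(di, 3), "EOD": round(eod, 3)}

rng = np.random.default_rng(3)
n = 4000
group = rng.integers(0, 2, n)
X = np.c_[rng.normal(size=(n, 3)), group]
y = (X[:, 0] + 0.7 * group + rng.normal(scale=0.6, size=n) > 0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("before mitigation:", fairness_metrics(y, clf.predict(X), group))

# Reweighing: weight each (group, label) cell so the protected attribute and the
# label look statistically independent in the training data.
w = np.ones(n)
for g in (0, 1):
    for lbl in (0, 1):
        cell = (group == g) & (y == lbl)
        if cell.any():
            w[cell] = ((group == g).mean() * (y == lbl).mean()) / cell.mean()

clf_rw = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
print("after mitigation: ", fairness_metrics(y, clf_rw.predict(X), group))
```

Running both evaluations on the same data makes the trade-off explicit: the mitigation should move the fairness metrics toward parity, and any accuracy change can be read off the same predictions.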
Fairness Engineering in ML Models
We have studied the software engineering concerns of fairness in real-world machine learning models.