This repository has been archived by the owner on Nov 6, 2023. It is now read-only.

Equal odds postprocessing workflow and blog #360

Merged

Conversation

ZanMervic (Contributor):

Fifth of fairness workflows and blogs.

thumbImage = "/blog_img/2023/2023-08-30-fairness-equal-odds-postprocessing.png"
frontPageImage = "/blog_img/2023/2023-08-30-fairness-equal-odds-postprocessing.png"
blog = ["fairness", "equal odds postprocessing"]
shortExcerpt = "In this blog, we delve into the Equal Odds Postprocessing widget, a tool designed to enhance fairness in machine learning models. We break down how the algorithm works by modifying predictions to meet Equalized Odds criteria. Using a real-world example with the German credit dataset, we demonstrate its efficacy in improving fairness metrics while marginally affecting accuracy."
Member:

short < long

weight = 1004
+++

Another way to mitigate bias besides the ones we have shown before is to use a postprocessing algorithm on the model's predictions. This workflow illustrates using the Equal Odds widget as a post-processor for the Logistic Regression model. To use the post-processor, we need to connect any model to the Equalized Odds Postprocessing widget along with any needed pre-processors. Doing so ensures our model's predictions get post-processed before we evaluate them. We then connect the Equalized Odds Postprocessing widget to the Test and Score widget and visualize the results using a Mosaic Display.
Member:

"besides the ones we have shown before" - workflows should work by themselves, do not reference others

Member:

For me, everything after "Doing so" is fairly obvious, non-specific to this use-case, and should thus be removed.


### Equal Odds Postprocessing

The [Equal Odds Postprocessing](https://arxiv.org/abs/1610.02413) widget is a post-processing type of fairness mitigation algorithm designed to ensure fairness in supervised learning. It modifies the predictions of any given classifier to meet certain fairness criteria, specifically focusing on "Equalized Odds" or more relaxed criteria like Equal Opportunity. Because it is a post-processing algorithm, it does not require any support from the model (like reweighing, which requires the model to support weights) or changes to the model (like adversarial debiasing, which requires the model to be a neural network). This makes it a versatile algorithm that can be used with any model.
Member:

designed to ensure fairness in supervised learning -> "for supervised learning" (the reader knows from the rest of the sentence that this is about fairness).

any support from the model - strange and unclear phrasing, I'd do this:
"it does not require any support from the model (like reweighing, which requires the model to support weights) or changes to the model (like adversarial debiasing, which requires the model to be a neural network). This makes it a versatile algorithm that can be used with any model." -> "it works with general supervised models."



The widget Equalized Odds Postprocessing represents a novelty in the Orange environment, as until now, there has been no widget with similar functionality. Its operation is similar to other Orange widgets representing models, except it also expects one of the Orange models as an input. It operates by first learning from the predictions of the model's training data. Then, it applies the Equalized Odds Postprocessing algorithm from the [AIF360](https://aif360.res.ibm.com/) library to the predictions the selected model makes on the test data. This ensures that the adjusted predictions conform to the fairness definition of equalized odds.
Member:

represents a novelty - too ornamental, also, if something is novel, then "as until now, there has been no widget with similar functionality" is a tautology and thus needless.

"Its operation is similar to other Orange widgets representing models, except it also expects one of the Orange models as an input." -> "It needs a model as an input." (also, does it need a learner or model?)

"It operates by first learning from the predictions of the model's training data." - I have no idea what you wanted to say with this

"This ensures that the adjusted predictions conform to the fairness definition of equalized odds." -> This is already said with the algorithm name, why repeat it here?



In more detail, the Equalized Odds Postprocessing algorithm works by solving a linear program with some constraints and the following objective function:
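For readers following along, here is a rough sketch of the formulation from the linked Hardt et al. paper (the notation below is assumed for illustration, not taken from this post): the derived predictor $\tilde{Y}$ minimizes expected loss subject to equal true-positive and false-positive rates across the protected groups:

```latex
\min_{\tilde{Y}} \; \mathbb{E}\!\left[\ell(\tilde{Y}, Y)\right]
\quad \text{subject to} \quad
\Pr\!\left[\tilde{Y}=1 \mid Y=y,\, A=0\right] = \Pr\!\left[\tilde{Y}=1 \mid Y=y,\, A=1\right],
\quad y \in \{0, 1\}
```

Here $A$ denotes the protected attribute and $\ell$ the chosen loss; the constraints for $y=1$ and $y=0$ equalize true-positive and false-positive rates, respectively.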
Member:

"In more detail" -> You lose no detail by removing it


The algorithm then finds the optimal solution to the linear program, which results in a set of probabilities with which to flip the model's predictions to equalize the odds of being correctly or incorrectly classified for both privileged and unprivileged groups:

- `sp2p`: Probability of flipping a label from positive to negative for the privileged group.
Member:

Probability of flipping a label is repeating. Reformulate to avoid such repetitions.

- `on2p`: Probability of flipping a label from negative to positive for the unprivileged group.
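
Once these probabilities are known, the flip step itself is simple. The following is a minimal, hypothetical sketch (the function name, group labels, and `rates` mapping are illustrative, not the AIF360 API):

```python
import random


def flip_predictions(preds, groups, rates, seed=0):
    """Randomly flip 0/1 predictions with per-(group, label) probabilities.

    rates maps (group, predicted_label) -> flip probability, mirroring the
    quantities above: e.g. ("priv", 1) plays the role of sp2p and
    ("unpriv", 0) the role of on2p.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    flipped = []
    for pred, group in zip(preds, groups):
        if rng.random() < rates.get((group, pred), 0.0):
            flipped.append(1 - pred)  # flip the predicted label
        else:
            flipped.append(pred)      # keep it unchanged
    return flipped
```

With a rate of 1.0 every matching prediction is flipped, and with 0.0 (or a missing entry) none are; intermediate rates flip a corresponding fraction of predictions in expectation.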


## Orange use case
Member:

I did not read further, but try to rewrite. :)

@ZanMervic force-pushed the fairness-equal-odds-postprocessing branch from 95eab38 to 9c41b8b on September 14, 2023 08:44
@markotoplak merged commit da985f3 into biolab:master on Sep 19, 2023
1 of 4 checks passed