Equal odds postprocessing workflow and blog #360
Conversation
thumbImage = "/blog_img/2023/2023-08-30-fairness-equal-odds-postprocessing.png"
frontPageImage = "/blog_img/2023/2023-08-30-fairness-equal-odds-postprocessing.png"
blog = ["fairness", "equal odds postprocessing"]
shortExcerpt = "In this blog, we delve into the Equal Odds Postprocessing widget, a tool designed to enhance fairness in machine learning models. We break down how the algorithm works by modifying predictions to meet Equalized Odds criteria. Using a real-world example with the German credit dataset, we demonstrate its efficacy in improving fairness metrics while marginally affecting accuracy."
short < long
weight = 1004
+++
Another way to mitigate bias besides the ones we have shown before is to use a postprocessing algorithm on the model's predictions. This workflow illustrates using the Equal Odds widget as a post-processor for the Logistic Regression model. To use the post-processor, we need to connect any model to the Equalized Odds Postprocessing widget along with any needed pre-processors. Doing so ensures our model's predictions get post-processed before we evaluate them. We then connect the Equalized Odds Postprocessing widget to the Test and Score widget and visualize the results using a Mosaic Display.
"besides the ones we have shown before" - workflows should work by themselves, do not reference others
For me, everything after "Doing so" is fairly obvious, non-specific to this use-case, and should thus be removed.
### Equal Odds Postprocessing:
The [Equal Odds Postprocessing](https://arxiv.org/abs/1610.02413) widget is a post-processing type of fairness mitigation algorithm designed to ensure fairness in supervised learning. It modifies the predictions of any given classifier to meet certain fairness criteria, specifically focusing on "Equalized Odds" or more relaxed criteria like Equal Opportunity. Because it is a post-processing algorithm, it does not require any support from the model (like reweighing, which requires the model to support weights) or changes to the model (like adversarial debiasing, which requires the model to be a neural network). This makes it a versatile algorithm that can be used with any model.
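For reference, the Equalized Odds criterion named in this paragraph has a standard formal statement (following Hardt et al. 2016, the paper linked above; this formula is not part of the quoted post). A binary predictor $\hat{Y}$ satisfies equalized odds with respect to a protected attribute $A$ and true outcome $Y$ if:

$$
P(\hat{Y} = 1 \mid A = 0, Y = y) = P(\hat{Y} = 1 \mid A = 1, Y = y), \quad y \in \{0, 1\}.
$$

Equal Opportunity is the relaxation that requires equality only for the favorable outcome $y = 1$, i.e. equal true positive rates across groups.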
designed to ensure fairness in supervised learning -> "for supervised learning" (the reader knows from the rest of the sentence that this is about fairness).
any support from the model - strange and unclear phrasing, I'd do this:
"it does not require any support from the model (like reweighing, which requires the model to support weights) or changes to the model (like adversarial debiasing, which requires the model to be a neural network). This makes it a versatile algorithm that can be used with any model." -> "it works with general supervised models."
The widget Equalized Odds Postprocessing represents a novelty in the Orange environment, as until now, there has been no widget with similar functionality. Its operation is similar to other Orange widgets representing models, except it also expects one of the Orange models as an input. It operates by first learning from the predictions of the model's training data. Then, it applies the Equalized Odds Postprocessing algorithm from the [AIF360](https://aif360.res.ibm.com/) library to the predictions the selected model makes on the test data. This ensures that the adjusted predictions conform to the fairness definition of equalized odds.
represents a novelty - too ornamental, also, if something is novel, then "as until now, there has been no widget with similar functionality" is a tautology and thus needless.
"Its operation is similar to other Orange widgets representing models, except it also expects one of the Orange models as an input." -> "It needs a model as an input." (also, does it need a learner or model?)
"It operates by first learning from the predictions of the model's training data." - I have no idea what you wanted to say with this
"This ensures that the adjusted predictions conform to the fairness definition of equalized odds." -> This is already said with the algorithm name, why repeat it here?
In more detail, the Equalized Odds Postprocessing algorithm works by solving a linear program with some constraints and the following objective function:
"In more detail" -> You lose no detail by removing it
The algorithm then finds the optimal solution to the linear program, which results in a set of probabilities with which to flip the model's predictions to equalize the odds of being correctly or incorrectly classified for both privileged and unprivileged groups:
- `sp2p`: Probability of flipping a label from positive to negative for the privileged group.
Probability of flipping a label is repeating. Reformulate to avoid such repetitions.
- `on2p`: Probability of flipping a label from negative to positive for the unprivileged group.
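Once the linear program has produced these flip probabilities, applying them to a set of predictions is mechanical. A minimal pure-Python sketch of that final step (the function name and the `p_flip` mapping are illustrative, not AIF360's internal API):

```python
import random

def flip_predictions(preds, groups, p_flip, seed=0):
    """Flip binary predictions with group- and label-conditional probabilities.

    p_flip maps (group, predicted_label) -> probability of flipping that
    prediction; pairs not listed are never flipped.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    flipped = []
    for y_hat, group in zip(preds, groups):
        if rng.random() < p_flip.get((group, y_hat), 0.0):
            y_hat = 1 - y_hat  # flip the binary label
        flipped.append(y_hat)
    return flipped

# With flip probability 1.0 the behaviour is deterministic: every positive
# prediction in the privileged group is flipped to negative.
print(flip_predictions([1, 1, 0], ["priv", "priv", "unpriv"], {("priv", 1): 1.0}))
# → [0, 0, 0]
```

An empty `p_flip` leaves predictions unchanged, which corresponds to a model that already satisfies the equalized odds constraints.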
## Orange use case
I did not read further, but try to rewrite. :)
Compare 95eab38 to 9c41b8b
Fifth of fairness workflows and blogs.