Summary Explainable AI
Pages: 11
Uploaded on: 22-11-2022
Written in: 2022/2023

This document contains notes and summaries covering the content of the course Human-Centered Machine Learning in the Artificial Intelligence Master at Utrecht University. It covers the following topics:
• intro to XAI
• interpretable models
• model-agnostic interpretability methods
• neural network interpretability

Course notes on Human-Centered Machine Learning - Fairness

— Lecture 5: FairML intro —

Dual use
• Refers to the fact that AI-powered solutions can have both beneficial and
harmful consequences
• Types of harms:
⁃ Allocative harms: “when a system withholds an opportunity or a
resource from certain groups”
⁃ Immediate, easier to measure
⁃ Examples: hiring processes, visa applications
⁃ Representational harms: “when systems reinforce the subordination of
some groups along the lines of identity - race, class, gender, etc.”
⁃ Long term, more difficult to measure
⁃ Examples: Google Translate (nurse/doctor), CEO image search

Terminology
• “Bias” in the FairAI field is used differently from the bias term in a
linear model, such as linear regression
• Fair machine learning is just getting started, so there is no single definition of
“bias” or “fairness”
• Research articles often don’t define what they mean by these terms
• Different studies have different conceptualizations of bias

Data
• As usual: garbage in -> garbage out = bias in -> bias out
• Biased data:
⁃ Start from the world as it should and could be
⁃ Retrospective injustice introduces societal bias, giving the world as it
actually is
⁃ Non-representative sampling and measurement errors then introduce
statistical bias
⁃ The result is a representation of the world according to the data
• If we had a perfect representation of the world, we would only need to address
the statistical bias problem; but there are no real-world datasets free of
societal biases
• Statistical bias:
⁃ Because of non-representative sampling (e.g., commonly used image
datasets are often US/European centered)
⁃ Because of measurement errors
⁃ There’s often a disconnect between the target variable and the overall
goal
⁃ Example: being re-arrested vs. re-offending vs. risk to society
⁃ Example: repayment of loans vs. better lending policies
⁃ Often different stakeholders have different overarching goals
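The effect of non-representative sampling on even a simple statistic can be shown with a toy simulation. The group names, base rates, and sampling fractions below are made up purely for illustration:

```python
import random

random.seed(0)

# Toy "world": group A is 70% of the population with a 0.6 positive rate,
# group B is 30% of the population with a 0.3 positive rate.
population = (
    [("A", 1 if random.random() < 0.6 else 0) for _ in range(7000)]
    + [("B", 1 if random.random() < 0.3 else 0) for _ in range(3000)]
)

def positive_rate(rows):
    return sum(label for _, label in rows) / len(rows)

# Representative sample: drawn uniformly from the whole population.
representative = random.sample(population, 1000)

# Non-representative sample: group B rows are kept only ~10% of the time.
biased = [row for row in population if row[0] == "A" or random.random() < 0.1]

print(f"population rate:     {positive_rate(population):.2f}")
print(f"representative rate: {positive_rate(representative):.2f}")
print(f"biased sample rate:  {positive_rate(biased):.2f}")  # drifts toward group A's rate
```

The biased sample overestimates the population rate because it is dominated by group A, which is the same failure mode as the US/European-centered image datasets mentioned above.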

Datasheets for Datasets (Gebru et al., 2021)
• Motivation: e.g., for what purpose was the dataset created?
• Composition: e.g., does the dataset contain data that might be considered
sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual
orientations, religious beliefs, etc.)?
• Collection process: e.g., what mechanisms or procedures were used to collect
the data (e.g., hardware apparatus or sensor, manual human curation,
software program, software API)?
• Uses: e.g., are there tasks for which the dataset should not be used?
• Distribution: e.g., how will the dataset be distributed (e.g., tarball on website,
API, GitHub)?
• Maintenance: will the dataset be updated (e.g., to correct labeling errors, add
new instances, delete instances)?
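These six sections can be mirrored directly in code, e.g. as a lightweight record shipped alongside a dataset release. The class and the example answers below are illustrative sketches, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Datasheet:
    """Sections follow Datasheets for Datasets (Gebru et al., 2021)."""
    motivation: str          # for what purpose was the dataset created?
    composition: str         # does it contain sensitive data?
    collection_process: str  # how was the data collected?
    uses: str                # tasks the dataset should (not) be used for
    distribution: str        # how will it be distributed?
    maintenance: str         # will it be updated?

# Hypothetical datasheet for a made-up loan dataset.
sheet = Datasheet(
    motivation="Benchmark for research on loan-repayment prediction.",
    composition="Contains age and postcode; no explicit race or religion fields.",
    collection_process="Manual curation of anonymised records from one lender.",
    uses="Not intended for making individual credit decisions.",
    distribution="Tarball on the project website.",
    maintenance="Labeling errors corrected in yearly releases.",
)
print(sheet.uses)
```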

Fairness in development of ML models
• Sample size: performance tends to be lower for minority groups; this can
happen even when the data is fully representative of the world
• ML models can amplify biases in the data
⁃ Example from Zhao et al. (2017): 33% of the cooking images in the
training data have a man in the agent role, but at test time the model
fills the agent role with a man for only 16% of the images
• Features:
⁃ Instances are represented by features
⁃ Which features are informative for a prediction may differ between
different groups
⁃ A particular feature set may lead to high accuracy for the majority
group, but not for a minority group
⁃ The quality of the features may differ between different groups
⁃ What about including sensitive attributes (e.g., gender, race) as
features? Would it:
⁃ Improve overall accuracy but lower accuracy for specific groups?
⁃ Improve overall accuracy for all groups?
⁃ And what if we need such information to evaluate the fairness of systems?
• Evaluation:
⁃ In ML, the evaluation often makes strong assumptions
⁃ E.g., that outcomes are not affected by decisions about others
⁃ Example: denying someone’s loan can impact the ability of a
family member to repay their loan
⁃ We don’t look at the type and distribution of errors
⁃ Decisions are evaluated simultaneously
⁃ Feedback loops & long-term effects are ignored
• Model cards for model reporting (Mitchell et al. 2019)
⁃ Aim: transparent model reporting, such as:
⁃ Model details (e.g., version, type, license, features)
⁃ Intended use (e.g., primary intended uses and users, out-of-
scope use cases)
⁃ Training data
⁃ Evaluation data
⁃ Ethical considerations
