Fighting Automated Oppression: 2024 in Review

EFF has been sounding the alarm on algorithmic decision making (ADM) technologies for years. ADMs use data and predefined rules or models to make or support decisions, often with minimal human involvement. In 2024, the issue was more active than ever, with landlords, employers, regulators, and police adopting new tools that have the potential to affect both personal freedom and access to necessities like medicine and housing.

This year, we wrote detailed reports and comments to US and international governments explaining that ADM poses a high risk of harming human rights, especially with regard to fairness and due process. Machine learning algorithms that enable ADM in complex contexts attempt to reproduce the patterns they discern in an existing dataset. If you train such a system on a biased dataset, such as records of whom the police have arrested or who historically gets approved for health coverage, then you are creating a technology to automate systemic, historical injustice. And because these technologies don't (and typically can't) explain their reasoning, challenging their outputs is very difficult.

If you train it on a biased dataset, you are creating a technology to automate systemic, historical injustice.
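To make that mechanism concrete, here is a minimal sketch, not drawn from EFF's reports, using entirely synthetic data and hypothetical "group" and "qualified" features: a model trained on biased historical approval decisions learns to reproduce that bias against equally qualified people.

```python
# Minimal sketch with synthetic data: a model trained on biased historical
# decisions reproduces the bias. Feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B
qualified = rng.random(n) < 0.5     # true qualification, equal rate in both groups

# Historical approvals depended on qualification, but group B was approved
# at half the rate of group A regardless of merit.
approve_prob = np.where(qualified, 0.9, 0.1) * np.where(group == 1, 0.5, 1.0)
historical_approval = rng.random(n) < approve_prob

# Train an ADM-style model on the biased historical labels.
X = np.column_stack([group, qualified.astype(int)])
model = LogisticRegression().fit(X, historical_approval)

# The model rates equally qualified applicants from group B lower,
# because that is the pattern in the data it was given.
for g, name in ((0, "group A"), (1, "group B")):
    p = model.predict_proba(np.array([[g, 1]]))[0, 1]
    print(f"{name}, qualified applicant: learned approval probability = {p:.2f}")
```

In this toy setup the model has no notion of "injustice"; it simply fits the historical pattern, which is exactly why deploying it would automate the original discrimination at scale.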

It’s important to note that decision makers tend to defer to ADMs or use them as cover to justify their own biases. And even though they are implemented to change how decisions are made by government officials, the adoption of an ADM is often considered a mere ‘procurement’ decision like buying a new printer, without the ki

[…]