Algorithms Can Increase Poverty

Michele Gilman, a professor of law at the University of Baltimore School of Law, has written a report for poverty lawyers explaining the consequences of data-centric technologies in America. The report covers seven areas where algorithms can cause discrimination and harm people, especially low-income American families who are not protected by law. The following is a short summary of the problems presented; the full report from Data & Society can be read here.

The collection of personal data and the profiles made of people are currently being used to target low-income people with marketing schemes that can lure them into unfathomable debt, trick them into for-profit educational scams, or expose them to identity theft. The data is collected from sources such as court records, which is a problem particularly for low-income families since they are, according to Gilman, more likely to resolve their family disputes in court, and because court records are increasingly available online. Personal information such as health data or criminal records can thus be used by insurance companies and landlords to discriminate against potential customers and tenants, creating a vicious circle for low-income persons that makes it hard for them to change their financial situation.

“The dangers for low-income people arise because digital profiles often operate as gatekeepers to affordable credit, jobs, employment, education, insurance, health care, and other life necessities.”

Data is also being used to create decision-making algorithms, which cause problems when programmers misinterpret the law and design systems that improperly deny health care to, for example, pregnant, sick, or disabled people. “There are thousands of horror stories of disabled and needy people being denied desperately needed state support due to an algorithmic eligibility determination.”

Algorithms are also being used in the child welfare system to detect child abuse and neglect, and in the school system to predict potentially harmful behaviour of students. But if the algorithms are biased, they might fail to predict child abuse, or they might incorrectly flag risks where there are none, leading to unjust decisions and amplifying existing problems. It is not only students who experience increased surveillance: workers are exposed to timekeeping software that automatically deducts breaks on the assumption that workers take the full break, even when working conditions make this impossible. Low-wage workers are also routinely asked to take personality and drug tests when applying for a job and throughout their employment, and there are few legal protections against student and worker surveillance in America.

Gilman offers four pieces of advice for poverty lawyers to protect low-income families in America from being discriminated against and harmed by data-centric technologies.

First, the interests of low-income families must be protected when politicians decide whether to adopt automated decision-making systems, to guarantee accurate and ethical use of these systems. Second, it is important for legal services and their clients to share information about their experiences and daily lives with policymakers, to ensure that emerging laws protect not only the interests of industry and the elite but those of members of all layers of society. Third, it is essential for legal services lawyers to get additional education in understanding algorithmic systems so that they can challenge them in court. And finally, it is important for poverty lawyers to collaborate with pro bono lawyers in the fight for access to justice, since the problems and tactics presented by Gilman are complex and not easily handled or fixed.

Get the report from Data & Society here

Signe Agerskov is a member of the European Group on Blockchain Ethics (EGBE) and is researching blockchain ethics at the European Blockchain Center