Research. How do you judge the intent of a piece of software? Accountability tools for regulators have not been developed as fast as new tools for algorithmic decision-making, and transparency alone will not solve this. This research from Princeton University presents a new technological toolkit to verify that automated decisions comply with key standards of legal fairness.
Many decisions historically made by humans are made by computers today. Algorithms count votes, target citizens or neighborhoods for police scrutiny, select taxpayers for audit, approve loans and credit, and grant or deny visas. The accountability mechanisms that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decision-makers and often fail when applied to computers instead: for example, how do you judge the intent of a piece of software? Additional approaches are needed to make automated decision systems, with their potentially incorrect, unjustified, or unfair results, accountable and governable.
The technological tools introduced here apply widely. They can be used in designing decision-making processes in both the private and public sectors, and they can be tailored to verify different characteristics as desired by decision-makers, regulators, or the public. By forcing more careful consideration of the effects of decision rules, they also prompt policy discussions and closer examination of legal standards.
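The section does not spell out the toolkit's mechanics, but one common building block for this kind of verification is a cryptographic commitment: the decision-maker fixes its rule in advance, without disclosing it, and an oversight body can later confirm that a contested decision came from that same, unmodified rule. The sketch below is a minimal illustration of that idea under this assumption; it is not the Princeton toolkit itself, and the function names, toy audit rule, and parameters are hypothetical.

```python
import hashlib
import hmac
import os

# Hypothetical sketch: a decision-maker publishes a commitment to its
# decision rule before using it, so a regulator can later check that a
# contested decision was produced by that same, unmodified rule.

def commit(policy_source: str, nonce: bytes) -> str:
    """Digest published ahead of time; binds the decision-maker to the rule."""
    return hashlib.sha256(nonce + policy_source.encode("utf-8")).hexdigest()

def verify(policy_source: str, nonce: bytes, published_digest: str) -> bool:
    """Regulator recomputes the digest from the revealed rule and nonce."""
    return hmac.compare_digest(commit(policy_source, nonce), published_digest)

# Toy audit-selection rule (illustrative only, not from the research).
policy = "flag_for_audit = deductions > 0.6 * income"
nonce = os.urandom(32)                # keeps the rule secret until revealed

digest = commit(policy, nonce)        # published before any decisions are made
assert verify(policy, nonce, digest)  # later verification by an oversight body
```

A commitment alone only proves the rule was not swapped after the fact; verifying richer properties, such as fairness of outcomes, requires additional machinery layered on top of this kind of primitive.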