
How the machine ‘thinks’: Understanding opacity in machine learning algorithms

Research. In her new article “How the machine ‘thinks’: Understanding opacity in machine learning algorithms” (January 2016), Jenna Burrell of the UC Berkeley School of Information discusses methods for investigating opacity in algorithms. Once a technical, opaque word belonging to the sphere of computer scientists and programmers, “algorithm” has today become a common buzzword in business discourse, so much so that discussions about “big data” in an informed business community will always include a reference to the “Algorithmic Economy”: a new business venture based on finding patterns in data, creating profiles, predicting and responding to data, and making meaning out of data by transforming it into value.

Alongside this transformation of business discourse, we see increasing ethical concern among the most alert policymakers, organisations and academics, specifically connected with the opacity of proprietary algorithmic systems (voiced most vividly in law professor Frank Pasquale’s 2015 book “The Black Box Society: The Secret Algorithms That Control Money and Information”).

The opacity of algorithms is one of this new era’s greatest dilemmas: how can we assess the ethics of a service, a product or an Internet of Things device if we do not know how it is designed to act on its data? How do we ensure ethical standards for how data is classified, analysed and acted on?

In the article, published in January 2016 in Big Data & Society, Burrell examines how to investigate opacity in algorithms, drawing on examples from actual coding practice and from education. She distinguishes three forms of opacity: opacity as intentional corporate or state secrecy, opacity as technical illiteracy, and opacity that arises from the characteristics of machine learning algorithms themselves and the scale at which they are applied. She points to the complexity involved in investigating the opacity of machine learning algorithms and in assessing their impact.
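That third form is the hardest to remedy, and Burrell grounds it in actual code. In that spirit, the following minimal Python sketch (not taken from the article, and assuming the scikit-learn library) trains a toy spam classifier. Every line of the code is readable, yet what the model “learns” is a numeric weight per token rather than a rule a human could follow, which is exactly the opacity at issue.

# Illustrative sketch, not from Burrell's article: even with full access to
# this code, the trained model resists human interpretation, because its
# "reasoning" lives in numeric weights over token features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy spam/ham corpus; a real filter would train on millions of messages.
texts = ["win cash now", "meeting at noon today",
         "claim your free prize", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = legitimate

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)  # sparse matrix of TF-IDF token weights

model = LinearSVC()
model.fit(X, labels)

# The classifier is fully inspectable, but what it has "learned" is one
# weight per token: a pattern of numbers, not a human-readable rule.
for token, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{token:10} {weight:+.3f}")

Scaled up to millions of messages and hundreds of thousands of features, inspecting such weights tells a reviewer very little, which is why, as the quotation below makes clear, Burrell argues that auditing code alone cannot dissolve this form of opacity.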

The investigative approach needed is manifold, she concludes:

“Ultimately partnerships between legal scholars, social scientists, domain experts, along with computer scientists may chip away at these challenging questions of fairness in classification in light of the barrier of opacity. Additionally, user populations and the general public can give voice to exclusions and forms of experienced discrimination (algorithmic or otherwise) that the ‘domain experts’ may lack insight into. Alleviating problems of black boxed classification will not be accomplished by a single tool or process, but some combination of regulations or audits (of the code itself and, more importantly, of the algorithms functioning), the use of alternatives that are more transparent (i.e. open source), education of the general public as well as the sensitization of those bestowed with the power to write such consequential code. The particular combination of approaches will depend upon what a given application space requires.” (“How the machine ‘thinks’: Understanding opacity in machine learning algorithms”, Burrell, 2016, Big Data & Society, SAGE, p. 10)
