For AI, Bias May Be a Huge Problem

By Tanuja Thombre - Last Updated on April 22, 2018

A group of scientists from the Czech Republic and Germany recently carried out research to ascertain the effect human cognitive bias has on the interpretation of machine learning rules and their output.

Machines themselves don’t really have a bias. AI doesn’t ‘want’ something to be true or false for reasons that can’t be explained by logic. Human bias, however, does enter machine learning at every stage, from the creation of an algorithm to the interpretation of its output.

According to a member of the UK parliamentary committee on AI, artificial intelligence brings opportunity so long as it is treated ethically. The committee spent time drafting recommendations on how the UK should develop its technology industry. Its report, published on Monday, says that the UK is in a strong position to be a world leader in the development of artificial intelligence, and that AI could provide a huge boost to the UK economy. The committee also concluded that ethics must be taken into consideration in any such development.

Speaking about the report, the Bishop of Oxford, Dr. Stephen Croft, told Premier, “AI has lots of good possibilities…but it also carries some significant dangers if it’s developed uncritically.” He pointed to potential problems, such as the use of data without people being aware of it, and biased decision-making, such as computers making decisions in human resources and on job applications without the criteria being fully understood.

He added there are “quite serious implications in warfare and AI-used weaponry. All these things are real and present dangers now; they’re not future impossibilities and it’s absolutely vital that there’s an informed public debate about how we inform and develop AI.”

The solution the researchers proposed for de-biasing each cognitive bias was quite simple. According to the researchers, many of the problems can be addressed simply by changing the way data is represented. For example, changing the output of algorithms to use natural frequencies rather than ratios could considerably reduce the potential for misreading certain results.
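As an illustration of that representation change, the sketch below converts a ratio into a natural-frequency phrasing (“7 out of 100 cases” rather than 0.07). This is a minimal example of the general idea, not code from the study; the helper name `as_natural_frequency` is hypothetical.

```python
# Minimal sketch (not from the paper): re-expressing a rule's confidence
# as a natural frequency instead of a bare ratio, the kind of
# representation change the researchers suggest can reduce misreading.
from fractions import Fraction

def as_natural_frequency(ratio: float, max_denominator: int = 100) -> str:
    """Convert a ratio like 0.07 into a 'k out of n' phrasing."""
    frac = Fraction(ratio).limit_denominator(max_denominator)
    return f"{frac.numerator} out of {frac.denominator}"

print(as_natural_frequency(0.07))  # "7 out of 100"
print(as_natural_frequency(0.25))  # "1 out of 4"
```

The same number is shown, but as a count over a concrete reference class, which research on risk communication has found people generally misread less often than abstract ratios.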

According to the team, “To our knowledge, cognitive biases have not yet been discussed in relation to interpretability of machine learning results. We thus initiated this review of research published in cognitive science with the intent to give a psychological basis to changes in inductive rule learning algorithms, and the way their results are communicated. Our review identified twenty cognitive biases, heuristics and effects that can give rise to systematic errors when inductively learned rules are interpreted.”

Tanuja Thombre | A Soft Skills and Behavior Trainer by passion and profession, with 8 years of experience in the mortgage banking sector. Currently I am working as a Training Consultant, catering to training needs across various industries. This also allows me to interact with, train, and learn about various aspects of human behavior. I hold certifications from institutes such as Dale Carnegie and Steven Covey. I have a natural instinct for writing; every once in a while, a blog or a short article, and in the future I plan to author a book. When it comes to writing, I believe there is seldom anything as appealing as simplicity.