COLLEGE STATION, Texas (KBTX) – Researchers at Texas A&M are working to eliminate bias in artificial intelligence and machine learning.
Earlier this year, researchers there created Code^Shift Lab to study this bias and ways to prevent it. They say the bias originates in the data that machine-learning algorithms use to learn.
“They learn just as children learn by repeatedly observing existing occurrences,” Texas A&M Health Communication Associate Professor and Code^Shift Lab Co-Director Lu Tang said. “Just like children can learn bias from their surroundings or their parents, A.I. can learn bias from the training data set.”
Tang says training data doesn’t come from a spreadsheet or a computer, but rather from human beings and the decisions they make. The human bias in collecting that data is unintentionally impressed upon the machines learning from those data sets.
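As a toy illustration of this point (not the lab's actual code, and with entirely made-up data), a "model" that simply learns the historical approval rate for each group will reproduce whatever skew exists in that history:

```python
# Hypothetical example: a model trained on biased historical decisions
# turns the skew in that data into its decision rule.
from collections import defaultdict

# Made-up historical loan decisions: (group, approved)
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    # Learned rule: approve a group if its historical approval rate exceeds 50%
    return {g: a / t > 0.5 for g, (a, t) in counts.items()}

model = train(history)
print(model)  # {'A': True, 'B': False} -- the historical skew becomes the rule
```

No one wrote "treat group B differently" into this code; the disparity comes entirely from the data the model was trained on, which is the mechanism Tang describes.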
Eliminating this bias in machines is increasingly important as A.I. influences larger parts of our lives, Tang says. Automated technologies now play large roles in evaluating credit card and mortgage applications, prescribing medications, and exposing people to certain kinds of information.
“If we do not pay attention to the potential biases in A.I., then this kind of bias is going to be even more shielded from the public understanding,” Tang said. “Then we’ll just assume everything is fair when it is not. If that algorithm is biased, then there are certain groups of people that are going to have a much harder time getting those things.”
Tang says Code^Shift Lab will conduct research to look closer at these biases and teach classes and workshops to identify and combat them.
Copyright 2021 KBTX. All rights reserved.