LTC Eugene Chu
Artificial intelligence requires initial program development by humans. If the humans who build the program have biases, those biases may be reflected in the coding and algorithms of the AI.
SPC Kevin Ford
>1 y
That can happen, but it is more likely a factor of what data is fed into it, particularly with deep learning and unsupervised learning.

There isn't as much "programming" as you may think. For example, when creating a TensorFlow model, the layers are pretty generic and are not really programmed per se. Where conscious bias can come in is in how the training data is classified, which data is selected to train the model with, or what initial weights are assigned to different data.
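To make that concrete, here is a minimal sketch (my own illustration, with made-up layer sizes, not any specific production model). Note how generic the layers are; the human choices live in what ends up in the training data, not in the layer code.

```python
import tensorflow as tf

# The layers are generic building blocks; nothing domain-specific is
# "programmed" into them. What the model learns comes almost entirely
# from the training data it is shown.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# The human choices enter here: which examples are selected and how
# they are labeled (x_train and y_train are placeholders), not above.
# model.fit(x_train, y_train, epochs=10)
```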

But that wasn't really my point. My point was that the data that exists in the world as things are is "biased", because the current state of things has built-in disparities between different groups. All things being equal, ML algorithms will pick up on those disparities unless the model creator consciously decides to alter the data in some way, which itself has the opportunity to introduce bias.
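A small synthetic demonstration of that point (entirely invented data, just to show the mechanism): if historical outcomes in the data differ by group, a plain off-the-shelf model will pick up on group membership even though nobody "programmed" anything about either group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "world": group 1 historically received positive outcomes
# 70% of the time, group 0 only 40%, with identical qualifications.
group = rng.integers(0, 2, n)
qualified = rng.random(n)
outcome = (rng.random(n) < np.where(group == 1, 0.7, 0.4)).astype(int)

X = np.column_stack([qualified, group])
model = LogisticRegression().fit(X, outcome)

# The model learns that group membership predicts the outcome; the
# coefficient on the group column dwarfs the one on qualification.
print(model.coef_)
```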
LTC Eugene Chu
>1 y
SPC Kevin Ford - Thanks for clarifying. I was unable to read it earlier on my work computer due to the firewall.
PO1 William "Chip" Nagel
SPC Kevin Ford - Interesting.
SSG Robert Webster
I do not agree with your conclusion. If it were true, we would not have inter-racial or inter-ethnic mixed families/marriages outside of war and conquest. We would also not have the voluntary spread of religious beliefs. To put it another way, why do we have religious strictures on diet and familial relationships?
And in reading the linked article explaining the reasoning behind the supposed withdrawal, the bottom line is that the technology has been and remains ripe for abuse and has been abused, not that it leads to ML/AI supporting systemic racism.
SPC Kevin Ford
>1 y
SSG Robert Webster Quite frankly, because social and learning systems are not absolute and are extremely complex. Just because such a model is likely to get a certain result does not mean it will always get the result you expect. This includes ML models that are much too complicated to understand at a detailed level. The same is true of people. It also gets into all sorts of questions of nature vs. nurture.

But pointing to outliers in a system does not disprove an inherent bias in the system itself; it never has. There is a reason that, in the grand scheme of things, social change tends to be slow, with lots of periods of pushback. The status quo has substantial momentum. It's not like the Loving couple won their case and the next day a white person was just as likely to marry a black person. Indeed, it has become more common only slowly, very slowly. The momentum of proximity and learned culture is slow to change.

The same phenomenon can be observed with ML. Models can be continually updated and retrained as new data comes in, but the pace of change slows over time as the momentum of prior learning adds weight and limits how much change is possible.
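A toy numeric illustration of that momentum (mine, not from any article), using a running average as a stand-in for a model's learned estimate:

```python
# Suppose 10,000 prior examples averaged 0.0 and the world then shifts
# so every new example is 1.0. The running mean updates exactly, yet it
# moves slowly because the prior data dominates the denominator.
estimate, count = 0.0, 10_000   # prior learning already baked in
for step in range(1, 1_001):
    count += 1
    estimate += (1.0 - estimate) / count  # fold in one new example
    if step in (1, 10, 100, 1_000):
        print(f"after {step} new examples: {estimate:.4f}")
```

After 1,000 new examples the estimate has only reached about 0.09, even though every new data point says 1.0.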

As far as the article, the data that exists in the wild is part of the reason those technologies are abused, even unknowingly. If you want to know more there are some interesting books on the ethical implications of AI and ML models.

Edit to add: Just as an FYI, the example I gave was not completely theoretical.
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
SSG Robert Webster
>1 y
SPC Kevin Ford - I already have some interesting books on the ethical implications of AI and ML models. I also have an interesting collection of articles and opinion pieces on the subject, including articles by Jerry Pournelle. I would have to thumb through the writings of Penn Jillette to see if he wrote any commentary on the subject, but it would not surprise me.

As for your example, I know that it is not completely theoretical, since it was one of many reasons listed in the article that you referenced originally, but it is only a single factor; there are many others. And the issue that they are describing as 'biased', well, the other name for it is pattern recognition. Sad to say, a man's resume that matched a woman's resume would be discarded as well; that this is not recognized as bias is a telling factor. And in that regard, a woman's resume that matched a top-10 man's resume would be picked as well. And when you look at this area, resume writing and resume-writing advice, women are in the top tier, along with being the majority in the field of HR; at least that is the way that I (and probably many others) perceive it. So what does this tell me? Either my observations are absurd, or their statements as to the proof of bias by a programmatic algorithm are absurd; and that on its face is absurd.

One of the major issues is that entities such as FB and Google continue to use such tools to the detriment of all.

I have been at this for a long time and actually used such tools between about 1986 and 1996. Symantec had what I found to be a very useful tool in this field called Q&A 3.0 and its Intelligent Assistant tool. Look it up; you may find that the strictures written about even then are still applicable today. From about 1989, I used it as a tool to help me select people to attend military schools and to assist me in advising the brigade S-3 and CSM on what individual training was needed to keep a steady supply of individuals prepared to meet the school entry criteria/prerequisites.

And when you get right down to it, the way today's tools are judged for bias is by trying to apply affirmative action to the equation, which causes them to fail those expectations.

In other words, are the AI/ML systems being asked the right or correct questions, and are they being fed unbiased data? Strip any information from the database that could carry bias, such as name, sex, race, and age, then ask the appropriate question: do you get the same results? And I can tell you from first-hand experience that a single missed checkbox will knock anyone out, male or female, young or old, of any racial/ethnic background. And that single checkbox is not, or should not be, predicated on your age, sex, gender, race, or ethnicity unless it has a direct impact on what is being looked for. Bottom line: if they are looking for a programmer and that box is not checked, none of the rest of it matters; and that is not bias as they are trying to paint it.
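For what it is worth, here is roughly what that stripping looks like in practice (a sketch with made-up column names, not any particular system):

```python
import pandas as pd

# Hypothetical applicant table; every column name here is invented.
applicants = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "sex": ["F", "M"],
    "race": ["X", "Y"],
    "age": [29, 41],
    "programmer_box_checked": [1, 0],
    "years_experience": [5, 7],
})

# Strip the fields that could carry bias before anything sees them,
# then ask the same question of what is left.
blind = applicants.drop(columns=["name", "sex", "race", "age"])

# A hard requirement works like the checkbox described above: miss it
# and nothing else matters, regardless of who you are.
shortlist = blind[blind["programmer_box_checked"] == 1]
print(shortlist)
```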
SPC Kevin Ford
>1 y
SSG Robert Webster - It's interesting that you are now making pretty much the same arguments I did. For example, when you ask, "are they being fed unbiased data? Strip any information from the database that could carry bias, such as name, sex, race, and age, then ask the appropriate question: do you get the same results?", this is what I referred to here: "the only way to stop it is to starve the algorithm of the data that would lead to the bias being formed."

My point was based on the fact that these models learn much the same way people do. The data as it exists in the world is not "unbiased"; that is to say, it reflects the current state of our society. Unlike an ML model, we cannot feed ourselves information stripped of "...name, sex, race, and age...". So my point isn't that ML models will always lead to biased results, but that given unfiltered information, those are the results that will occur, because the data itself reflects a "biased" distribution. More directly: people get the unfiltered version of the data "in the wild", so the systematic results at the human level shouldn't be surprising; we can't choose not to see a person's sex, race, etc.
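Extending the earlier synthetic sketch (again, invented data purely to show the mechanism): train the same model twice, once on the unfiltered data and once with the group column stripped. Only the first can key on group directly; people always get the unfiltered version.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Synthetic disparity: outcomes historically favor group 1
# independent of qualification.
group = rng.integers(0, 2, n)
qualified = rng.random(n)
outcome = (rng.random(n) < 0.3 * qualified
           + np.where(group == 1, 0.5, 0.2)).astype(int)

# "Unfiltered" sees group; "filtered" has it stripped out.
unfiltered = LogisticRegression().fit(
    np.column_stack([qualified, group]), outcome)
filtered = LogisticRegression().fit(qualified.reshape(-1, 1), outcome)

print("with group column:    ", unfiltered.coef_)
print("group column stripped:", filtered.coef_)
```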
SSG Robert Webster
>1 y
SPC Kevin Ford - Only at the interview level, not necessarily the paper drill level unless the information is there.