Thursday 4 April 2019

Brainwashing and Social Control 2: Prejudice is an Inevitable Function of Wisdom Which Even Artificial Intelligence Demonstrates


Last week we examined why young people are dumb, concluding that it was simply a lack of experience, which meant young people had to make a greater effort to understand what older, wiser heads would grasp quickly.

This week we will look at the converse problem, that experience leads to prejudice in the process of discrimination.

What is Discrimination?

Discrimination is simply the process of making a choice. We all discriminate many times each day: which socks to wear, what to have for breakfast. In these and in many more situations we make our choice.

What is Prejudice?

Prejudice is the set of factors, based on personal experience and observation of others, which leads an individual to conclude quickly that one option or another is preferable.

If I ate Shreddies for breakfast yesterday then I have the experience of having eaten them, and depending on that experience I may decide to repeat it today or I may choose to have a different breakfast.

My prejudice might be that I enjoyed the experience of eating Shreddies and am open to that choice in the future, or it may be that I did not enjoy the experience and it is not one I will repeat.

What I cannot do is act without prejudice, because there is simply no way to unlearn the lessons of my previous experience.

So prejudice is simply a function of experience. Indeed, learning from previous experience is extolled as a virtue in most cases: "Fool me once, shame on you; fool me twice, shame on me."

It seems that even artificial intelligence can grasp this basic concept.

Twitter AI Prejudice

Twitter's account banning, which the company says was a bug, was "unfairly" filtering 600,000 accounts, including some members of Congress, in search auto-complete and results. CEO Jack Dorsey confirmed the figure during his opening statement to the House Energy and Commerce Committee in September last year.

AI, of course, does not have any concept of "fair" or "unfair"; it is simply programmed to achieve a specified goal.

Dorsey explained that the shadow banning occurred due to algorithms that take into account how the people following those filtered accounts behave on the platform.

Ultimately, Twitter determined that wasn't a fair way to assess accounts, and changed course. "We'll always improve our technology and algorithms to drive healthier usage, and measure the impartiality of outcomes," he said.

AIs develop Prejudice without Human Interaction

Further research by computer science and psychology experts from Cardiff University and MIT has shown that groups of autonomous machines can demonstrate prejudice simply by identifying, copying and learning this behaviour from one another.

Whilst you are told that prejudice is a human-specific phenomenon, requiring human cognition to form an opinion of, or to stereotype, a certain person or group, some types of computer algorithms have already exhibited prejudice, such as racism and sexism, learned from public records and other data generated by humans. This new work goes further, demonstrating AI evolving prejudicial groups on its own.

The findings, which have been published in the journal Scientific Reports, are based on computer simulations of how similarly prejudiced individuals, or virtual agents, can form a group and interact with each other.

In a game of give and take, each AI decides whether to donate to somebody inside its own group or in a different group, based on the individual's reputation as well as its own donating strategy, which includes its level of prejudice towards outsiders.

As the game unfolds and a supercomputer racks up thousands of simulations, each AI begins to learn new strategies by copying others, either within its own group or across the entire population.

Co-author of the study Professor Roger Whitaker, from Cardiff University's Crime and Security Research Institute and the School of Computer Science and Informatics, said: "By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it."

The findings involve individuals updating their prejudice levels by preferentially copying those that gain a higher short-term payoff, meaning that these decisions do not necessarily require advanced cognitive abilities.
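
For the technically curious, here is a minimal sketch of that donation game, written from the description above rather than from the paper's published model. Everything in it, the parameter names, the group sizes, the reputation rule, the mutation rate, is an illustrative assumption of mine, not the study's actual code.

import random

# Illustrative parameters, not taken from the paper.
NUM_GROUPS = 4          # distinct sub-populations
AGENTS_PER_GROUP = 25
ROUNDS = 2000
BENEFIT = 2.0           # gain to the recipient of a donation
COST = 1.0              # cost to the donor

class Agent:
    def __init__(self, group):
        self.group = group
        self.prejudice = random.random()  # chance of refusing an outsider
        self.reputation = 0.5             # running share of donations made
        self.payoff = 0.0

agents = [Agent(g) for g in range(NUM_GROUPS) for _ in range(AGENTS_PER_GROUP)]

for _ in range(ROUNDS):
    for a in agents:
        a.payoff = 0.0  # agents compare short-term payoffs only

    # Give and take: every agent gets one chance to donate to a random partner.
    for donor in agents:
        recipient = random.choice(agents)
        if recipient is donor:
            continue
        if recipient.group == donor.group:
            donates = True  # in-group members are always helped in this sketch
        else:
            # Out-group donation depends on the donor's prejudice level
            # and, loosely, on the recipient's reputation.
            donates = (random.random() > donor.prejudice
                       and recipient.reputation > 0.25)
        if donates:
            donor.payoff -= COST
            recipient.payoff += BENEFIT
        donor.reputation = 0.95 * donor.reputation + 0.05 * (1.0 if donates else 0.0)

    # Social learning: copy the prejudice level of a randomly observed agent
    # if it did better this round; no advanced cognition required.
    for learner in agents:
        model = random.choice(agents)
        if model.payoff > learner.payoff:
            learner.prejudice = model.prejudice

    # A little mutation keeps new strategies appearing in the population.
    for a in agents:
        if random.random() < 0.01:
            a.prejudice = min(1.0, max(0.0, a.prejudice + random.uniform(-0.1, 0.1)))

print("mean prejudice after %d rounds: %.2f"
      % (ROUNDS, sum(a.prejudice for a in agents) / len(agents)))

Run it a few times and the population's mean prejudice drifts according to which strategies happen to pay off, which is, at toy scale, the kind of evolution the researchers measured across thousands of simulations.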

Science loves to point out the obvious

So in this simulation we have a large number of individual AIs, all acting in their own best interest (exactly like human society), and they learn not only from their own personal experience but also from observing the behaviour, and the results of the decisions, of the other AIs.

The problem for the AIs, exactly as it is for human society, is that too many act with regard only to the short-term consequences of a decision, one can only presume because the AIs, like many human beings, are not capable of accurately forecasting the long-term consequences of their behaviour.

Ominously

A further interesting finding from the study was that under particular conditions, which include more distinct sub-populations being present within a population, it was more difficult for prejudice to take hold.

"With a greater number of sub-populations, alliances of non-prejudicial groups can cooperate without being exploited. This also diminishes their status as a minority, reducing the susceptibility to prejudice taking hold. However, this also requires circumstances where agents have a higher disposition towards interacting outside of their group," Professor Whitaker concluded.
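
In the sketch earlier in this post, that condition corresponds to splitting the same population into more, smaller groups, for example:

NUM_GROUPS = 10         # more distinct sub-populations
AGENTS_PER_GROUP = 10   # total population still 100

Whether the toy model then shows prejudice failing to take hold will also depend on how often agents interact outside their own group, which is exactly the extra condition Professor Whitaker describes.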
