Artificial intelligence has greatly accelerated technology. Alongside the improvements, however, the ethical issues surrounding artificial intelligence keep growing. Robots, automated cars, home assistants, and other emerging technologies are taking over work once done by humans. So what are the ethical issues of artificial intelligence that have brought such a big change to society? Here are seven ethical issues of artificial intelligence that the community is facing…
- Elimination of employment
- Distribution of wealth created by machines
- Artificial idiocy — what if it makes a mistake?
- Mimicking human interaction
- AI unfairness
- Singularity and keeping control over AIs
- What if AI turns against humans?
1. Elimination of employment:
One of the primary ethical issues of artificial intelligence is the elimination of jobs. What will happen when we integrate AI into society? What about people who work hourly jobs and are paid by the hour? According to one widely cited report, by 2030 as many as 800 million people could lose their jobs to artificial intelligence. People are being replaced by machines that are trained to make the right decisions.
In restaurants, robots are already replacing waiters. Self-driving trucks alone could put millions of drivers out of work; what happens once they are fully integrated into our cities? On the other hand, self-driving trucks could reduce the number of accidents on the road, which is a major benefit of deploying artificial intelligence. Some argue that humans will still be needed alongside self-driving trucks, if only as a trust factor, and that AI frees people up to create better jobs.
2. Distribution of wealth created by machines:
Our economic system is based on compensation: wealth is distributed to workers through hourly wages. Companies distribute that wealth among their workers and reinvest profits back into production. Training workers and growing the business produces more economic growth and more profit.
But what happens if we introduce AI into such workplaces? Many workers will lose their jobs, replaced by robots that require no hourly wage. Revenue will then be distributed among far fewer people. Stakeholders and CEOs who invest heavily in artificial intelligence will likely become richer, widening the wealth gap across the economy.
3. Artificial idiocy — what if it makes a mistake?
A major artificial intelligence problem is artificial idiocy. Intelligence comes from training, whether for a machine or a human, and training takes time before it produces the desired results. AIs are not safe from mistakes. They are typically trained with deep learning techniques and fed large amounts of data; with too little data, or with common programming errors, they become far more prone to mistakes.
In 2016, Microsoft released a chatbot named Tay that learned behavior from people on Twitter and trained itself accordingly. Within a day it had learned to produce racist and abusive speech, so Microsoft immediately shut it down, since letting it keep learning from users' behavior could damage the company's reputation. The training phase can never cover every possible example: inputs that would never deceive a human can trick a machine, and researchers have fooled image classifiers with carefully crafted dot patterns.
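The idea that a model only knows what its training data covers can be illustrated with a toy sketch. The points, labels, and distances below are made up for illustration; real deep-learning models are far more complex, but they fail in the same spirit when an input drifts away from the examples they were trained on.

```python
# Toy 1-nearest-neighbour "classifier" trained on only two examples.
# All data here is hypothetical, purely for illustration.
training = {(0.0, 0.0): "cat", (1.0, 1.0): "dog"}

def classify(point):
    # Pick the training point with the smallest squared distance
    # and return its label.
    nearest = min(
        training,
        key=lambda p: (p[0] - point[0]) ** 2 + (p[1] - point[1]) ** 2,
    )
    return training[nearest]

print(classify((0.1, 0.1)))  # close to the "cat" example -> cat
print(classify((0.6, 0.6)))  # a modest shift flips the answer -> dog
```

A small nudge to the input flips the prediction, because the model has no real understanding, only proximity to the handful of examples it was trained on. Adversarial attacks on real networks exploit the same gap on a much larger scale.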
4. Mimicking human interaction:
Artificial intelligence bots are getting better and better at imitating human conversation. In 2014, a bot named "Eugene Goostman" passed a Turing test. In this test, human raters exchange text messages in a chat and then guess whether they were talking to a human or a machine. Eugene Goostman convinced about a third of the judges that it was human, enough to clear the test's threshold.
Should robots be granted citizenship? The robot "Sophia" is the best-known example of a machine that mimics human interaction, and it was granted citizenship in Saudi Arabia. Blurring the line between humans and machines in this way could prove harmful.
5. AI unfairness:
AI is now making rapid progress in computer vision and voice recognition, and it is faster and more efficient than humans at many tasks, but these systems are still vulnerable to bias. Alphabet (Google's parent company) is one of the pioneers in artificial intelligence; Google Photos and Google Image Search show how well its systems identify objects, people, and scenes. Yet facial-recognition algorithms built by companies such as IBM and Microsoft have shown clear biases: they were far more likely to misidentify the gender of people with darker skin, while lighter-skinned men were detected easily.
Similarly, the scrapping of an AI hiring and recruitment tool (Amazon's widely reported case) is another artificial intelligence problem showing that AI can be unfair in judging humans. The algorithm preferred male candidates over female ones, because the roughly ten years of hiring data it was trained on consisted mostly of men.
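How skewed historical data produces a biased model can be shown with a tiny sketch. The numbers below are invented for illustration; they are not from any real hiring dataset, and a real recruiting model would learn the bias indirectly through proxy features rather than an explicit gender column.

```python
# Hypothetical historical records: (gender, hired) pairs in which
# most past hires happen to be men. All counts are made up.
history = (
    [("male", True)] * 80 + [("male", False)] * 20
    + [("female", True)] * 5 + [("female", False)] * 15
)

def hire_rate(records, gender):
    """Fraction of candidates of a given gender who were hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

# A naive model that simply reproduces historical hire rates will
# "learn" that men are better candidates, encoding the bias that
# was already present in the data.
print(hire_rate(history, "male"))    # 0.8
print(hire_rate(history, "female"))  # 0.25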
6. Singularity and keeping control over AIs:
Will AIs surpass human beings? Human authority rests on our skill and intelligence, but a time may come when humans are controlled by AI. Some believe this would mark the end of the human era and could happen as early as the 2030s. The point at which technology becomes more intelligent than human beings and surpasses them is known as the technological singularity, and some fear it could ultimately lead to humanity's downfall.
Thanks to advances in chip design, big data, and processing power, artificial intelligence has grown so ubiquitous that we rarely notice it. Ultra-smart AIs could take our place on the evolutionary ladder and dominate us the way we now dominate apes. If an AI is sufficiently smart, it might understand its own controls better than its creators do.
7. What if AI turns against humans?
A "natural probable consequence" harm occurs when an artificial intelligence performs the wrong action while carrying out its task. In one often-cited case at a Japanese motorcycle company, an industrial robot killed an employee after identifying him as a threat to its mission. Autonomous weapons are AI systems programmed to destroy. Building fully autonomous arms would allow AI systems to cause fatalities on their own, so humans must retain some control over these artificially intelligent systems to prevent them from taking the wrong actions.
Fully autonomous systems lack human judgment. They cannot differentiate a civilian from a combatant, and they lack the core principles of the laws of war. That is why they should not be allowed to make lethal decisions entirely on their own.