Ethics is one of the biggest challenges facing AI

Ethics is one of the biggest challenges facing artificial intelligence (image from Baidu VR)

According to the BBC, artificial intelligence (AI) is now nearly ubiquitous, and it has seeped into most aspects of our lives: what we choose to watch, which flights we book, what we buy online, whether a job application succeeds, whether we receive a bank loan, even how cancer is treated. All of these things can now be determined automatically by sophisticated software systems. Given the striking progress AI has made over the past few years, it can make our lives better in many ways.

Over the past two years, the rise of AI has been unstoppable. Enormous sums have been invested in AI startups, and many established technology companies, including giants such as Amazon, Microsoft, and Facebook, have opened new research labs. It is no exaggeration to say that software now means AI. Some predict that AI is about to bring dramatic change, with an impact that may even exceed that of the Internet.

AI has proven its worth in many practical tasks, from labeling images to diagnosing diseases.

We asked a number of technical experts what kind of impact this fast-changing world, full of brilliant machines, will have on humans. Notably, almost everyone's answer centered on ethics. For Peter Norvig, Google's research director and a machine learning pioneer, data-driven AI technology has recently achieved many successes; the key question is how to ensure that these new systems improve society as a whole, not just those who control them. Norvig said: "AI has proven its value in many practical tasks, from labeling images and understanding language to helping diagnose diseases. The challenge now is to ensure that everyone can benefit from this technology."

The biggest problem is that the complexity of the software often makes it almost impossible to explain precisely why an AI system made a particular decision. Today's AI is built primarily on a successful technique called machine learning, but you cannot lift the lid and inspect its inner workings. For this reason, we can only choose to trust it. That is where the challenge arises: we need to find new ways of monitoring and auditing in many areas, especially those in which AI plays an important role.

For Jonathan Zittrain, a professor of Internet law at Harvard Law School, one of the dangers is that increasingly sophisticated computer systems may escape the oversight they need. He said: "With the help of technology, our systems have become more and more complicated. I am very worried about a reduction in human autonomy. If we set up a system and then forget about it, its self-evolution may bring consequences we will regret. At present there is no clear place for ethical considerations."

AI will allow robots to carry out more complex tasks; in Japan, shopping-assistant robots already serve customers.

This worries other technical experts as well. Missy Cummings, director of the Humans and Autonomy Laboratory at Duke University, asked: "How can we prove that these systems are safe?" Cummings was one of the first female fighter pilots in the US Navy and is now a drone expert.

AI does need supervision, but it is not yet clear how to supervise it. Cummings said: "At present we have no universally accepted methods, nor industry standards for testing these systems. Broad supervision of these technologies is very difficult to implement." In a rapidly changing world, regulators often find themselves helpless. In many key areas, such as the criminal justice system and medicine, companies are already exploring the use of AI to make parole decisions or diagnose diseases. But by handing decisions over to machines, we may lose control. Who can guarantee that the machine will make the right decision in every case?

According to Danah Boyd, principal researcher at Microsoft Research, many serious questions about values are being written into these AI systems. Who will ultimately be responsible? Boyd said: "Regulators, civil society, and social theorists are increasingly eager to see these technologies remain fair and ethical, but these concepts are vague."

One area full of ethical issues is the workplace. AI will help robots do more complex work, leading to more human workers being displaced. China's Foxconn, for example, plans to replace 60,000 workers with robots, and Ford's factory in Cologne, Germany, has invested in robots that work alongside human workers.

In many factories, human workers already work side by side with robots. Some believe this may have a significant impact on workers' mental health.

More importantly, if growing automation has a major impact on employment, it will also harm people's mental health. Ezekiel Emanuel, a bioethicist and former medical advisor to President Obama, said: "If you think about the things that make people's lives meaningful, you will find three: meaningful relationships, strong interests, and meaningful work. Meaningful work is an important factor in defining a person's life. In some regions, losing a job when a factory closes can increase the risk of suicide, drug abuse, and depression."

As a result, we may see a growing need for ethics. According to Kate Darling, a specialist in law and ethics at MIT: "Companies are following market incentives. That is not a bad thing, but we cannot rely on ethics alone to keep it in check. It helps to have regulation in place; we have seen it in the areas of privacy and new technologies. We need to work out how to deal with AI as well."

Darling pointed out that many big companies, such as Google, have established ethics committees to monitor the development and deployment of AI. Some believe this mechanism should be widely adopted. Darling said: "We don't want to stifle innovation, but at a certain point we may want to create some structure."

Details such as who sits on Google's ethics committee and what it actually does remain unknown. But in September 2016, Facebook, Google, and Amazon formed a joint organization with the goal of finding solutions to the security and privacy threats posed by AI. OpenAI is a similar organization, aiming to develop and promote open-source AI that benefits everyone. Google's Norvig said: "It is very important that machine learning technology is researched openly and spread through open publications and open-source code, so that we can all share the rewards."

If we can establish industry standards and ethical norms and fully understand the risks of AI, then it will be important to build a regulatory mechanism centered on ethicists, technical experts, and business leaders. That is the best way to use AI for the benefit of mankind. Strand said: "Our job is to reduce people's fear of robots taking over the world, as in science fiction films, and to focus instead on how technology can be used to help humans think and make decisions, rather than replace them entirely."
