Responding to artificial intelligence challenges with human intelligence

Many bold ideas have been proposed about the innovative power of artificial intelligence. But we also need to heed some serious warnings. Last month, Kate Crawford, principal researcher at Microsoft Research, warned that ever-more-powerful artificial intelligence, if abused by authoritarian governments, could lead to a "fascist dream."

Speaking at the SXSW technology conference, Crawford said: "Just as we see a step function increase in the pace of artificial intelligence, something else is happening: the rise of extreme nationalism, right-wing authoritarianism, and fascism."

She said that artificial intelligence could be used to build huge registries of data, target specific population groups, enable abusive predictive policing, and manipulate political beliefs.

Crawford is not alone in worrying that powerful new technologies may be misused, sometimes in unexpected ways. Mark Walport, the UK's chief scientific adviser, warned that deploying artificial intelligence in areas such as medicine and law, which involve delicate human judgment, could produce devastating results and erode public trust in the technology.

Although artificial intelligence has the potential to enhance human judgment, it can also introduce harmful biases and create a false sense of objectivity. In an article in Wired magazine, Walport wrote: "Machine learning may internalize all the implicit biases contained within sentencing or medical histories, and externalize them through its algorithms."

As always, identifying dangers is far easier than mitigating them. Unscrupulous regimes will never comply with regulations that limit the use of artificial intelligence. Yet even in functioning, law-based democracies, framing an appropriate response is tricky. Maximizing the positive contributions that artificial intelligence can make while minimizing its harmful consequences will be one of the most difficult public policy challenges of our time.

For a start, artificial intelligence technology is difficult to understand, and its uses are often opaque. It is also becoming increasingly difficult to find independent experts who have not been snapped up by the industry or who have no other conflicts of interest.

Driven by something akin to a commercial arms race in the field, the big technology companies have been competing for many of the best academic experts in artificial intelligence. As a result, much of the leading research now takes place in the private sector rather than the public sector.

To their credit, some leading technology companies recognize the need for transparency, albeit somewhat belatedly. There has also been a flurry of initiatives encouraging more policy research and open debate on artificial intelligence.

Elon Musk, founder of Tesla Motors, helped create OpenAI, a non-profit research organization dedicated to developing artificial intelligence in a safe manner.

Amazon, Facebook, Google DeepMind, IBM, Microsoft and Apple have also formed the Partnership on AI to foster more open discussion about the practical applications of the technology.

Mustafa Suleyman, co-founder of Google DeepMind and co-chair of the Partnership on AI, says artificial intelligence can play a transformative role in meeting some of the biggest challenges of our time. But he believes the technology is advancing faster than our collective ability to understand and control these systems. Leading artificial intelligence companies must therefore take a far more innovative and proactive role in holding themselves accountable. To that end, the London-based company is experimenting with verifiable data audits and will soon announce the composition of an ethics board to review all of the company's activities.

But Suleyman argues that our societies must also devise better frameworks to steer these technologies toward the collective interest. Speaking on the Financial Times Tech Tonic podcast, he said: "We have to be able to control these systems so that they do what we want when we want, rather than acting on their own."

Some observers say the best way to achieve this is to adapt our legal systems to ensure that artificial intelligence systems are "explainable" to the public. That sounds simple in principle, but it may prove fiendishly complex in practice.

Mireille Hildebrandt, professor of law and technology at the Free University of Brussels, says one of the dangers of artificial intelligence is that we become overly reliant on "mindless minds" that we do not fully understand. She argues that the purpose and effects of these algorithms must be testable and contestable in court. "If you can't meaningfully explain your system's decisions, then you can't make them," she said.

We are going to need a lot more human intelligence to meet the challenges of artificial intelligence.
