Artificial intelligence is too dangerous: beware of erotic robots...
How dangerous is Westworld-level artificial intelligence? The foreign media outlet Futurism writes that the TV series explores philosophical questions about the ethics and consciousness of artificial intelligence. Although the bloody scenes of the show are unlikely to occur in reality, we had better treat our AI robots well.

"These violent delights have violent ends." With a little help from Shakespeare and Michael Crichton, HBO's "Westworld" exposes some of the hidden dangers of creating advanced artificial intelligence. In "Westworld," AI "hosts" that look indistinguishable from people inhabit a park styled after the Old West of the United States. Visitors pay large sums of money to the park for a vintage Western adventure, during which they can fight the AI at will, and can even rape or kill them. The visitors' guns can hurt the hosts, but the hosts cannot hurt the visitors. Every time a robot "dies," its body is reclaimed, its memory is erased, and it is sent back into the park.

The AI safety problem in Westworld

The series was inspired by Crichton's earlier film of the same name, which made us wonder how far we can control our most advanced scientific creations. Unlike the original movie, however, the robots in the TV series are not cast as villains; they are portrayed as sympathetic, even human, characters. Not surprisingly, the park's safety problems quickly surface. The park is overseen by one old man, who can update the robots freely without anyone else's safety review. The robots appear to retain traces of past abuse. One character mentions that only a single line of code stops the robots from hurting humans. So far the show has touched on only part of the AI-safety worry: a "malicious robot" that uses advanced AI to injure humans deliberately, a small bug in the software with deadly effects, and code-level protections for humans that prove inadequate.
However, many of the safety and ethical issues the show raises hinge on whether the robots are conscious. In fact, the series is quite serious about exploring some very hard questions: What is consciousness? Can humans create something conscious? If so, can we control it? And do we even want to know the answers?

To think these questions through, the author interviewed Mark Riedl, an AI researcher at the Georgia Institute of Technology, and David Chalmers, a philosopher at New York University. Riedl works on researching and building creative AI, and Chalmers is known for his formulation of the "hard problem of consciousness."

Can AI feel pain?

Asked to what extent robots could be programmed to experience pain, Riedl said, "First of all, I oppose violence against humans, animals, humanoid robots, or AI." He then explained that for humans and animals, pain is a warning signal "to avoid particular stimuli." For robots, however, "the closest analogy may be the experience of a robot trained with reinforcement learning. Such robots learn by trial and error." An AI of this kind receives positive or negative feedback after performing an action, and then adjusts its future behavior accordingly. Riedl said the negative feedback is more "like losing points in a computer game" than like feeling pain.

"Robots and AI can be programmed to express 'pain' the way humans do," Riedl said. "But that pain would be an illusion. And there is a reason to create this illusion: it lets the robot communicate its internal state to humans in a way that is easy to understand and empathize with." Riedl is not worried that AI will feel pain. He also said that if a robot's memory is completely erased every night, it will be as if nothing ever happened. He did, however, point out a potential safety issue: for reinforcement learning to work properly, the AI needs to take actions that optimize for positive feedback.
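The trial-and-error loop Riedl describes can be sketched in a few lines of code. This is a minimal illustration, not any real robotics system: the agent, the two actions, and the reward values are all hypothetical, chosen only to show how negative feedback ("losing points") shifts future behavior away from an action.

```python
import random

random.seed(0)  # make the illustrative run repeatable

class ReinforcementAgent:
    """Toy trial-and-error learner: tracks an estimated value per action."""

    def __init__(self, actions, learning_rate=0.1):
        self.values = {a: 0.0 for a in actions}
        self.learning_rate = learning_rate

    def choose(self, explore=0.1):
        # Mostly pick the best-valued action; occasionally explore at random.
        if random.random() < explore:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def feedback(self, action, reward):
        # Nudge the action's estimated value toward the observed reward.
        self.values[action] += self.learning_rate * (reward - self.values[action])

# Hypothetical environment: being friendly earns points, shoving loses them.
agent = ReinforcementAgent(["greet", "shove"])
for _ in range(200):
    action = agent.choose()
    reward = 1.0 if action == "greet" else -1.0
    agent.feedback(action, reward)

# After training, the agent prefers the action with positive feedback.
print({a: round(v, 2) for a, v in agent.values.items()})
```

Here the "pain" of a shove is nothing but a lower number in a table, which is Riedl's point: the robot adjusts its behavior, but there is no experience behind the adjustment.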
If the robot's memory is not completely erased, though, and it starts to remember the bad experiences that have happened to it, it may try to avoid the actions, or the people, that trigger negative feedback. Riedl said, "In theory, these robots could learn to plan ahead to reduce the chance of negative feedback in the most cost-effective way. If robots understand not only that their actions bring positive or negative feedback, but why, this could also mean acting preemptively to stop humans from harming them." He noted, however, that for the foreseeable future robots will not be capable enough for this to be a concern. And if robots ever do become that powerful, anything that can give them negative feedback could become a threat to humans.

Can AI become conscious?

Chalmers takes a slightly different view. "The way I see it, there is no real doubt that these beings are conscious... they appear to have a very rich emotional life... that is reflected in their ability to feel pain and to think about their situation... They don't just show conditioned, reflexive behavior; they reflect on their circumstances, they reason." "Obviously, they are conscious," he added.

Chalmers said that rather than trying to define what the robots have, it is better to think about what they lack; above all, he pointed out, they lack free will and memory. But many of us are unconsciously trapped in habits we cannot escape, and countless people suffer from severe memory problems, yet nobody thinks that makes it acceptable to rape or kill them. "If abusing the AI in this show is supposed to be permissible, is it because of something they lack, or for some other reason?" Chalmers asked. In Chalmers's view, the specific scenario depicted in "Westworld" may not be realistic, because he does not believe the "bicameral mind" theory the show invokes would produce consciousness, even in robots.
"By contrast, it would be much easier to program the robots to monitor their own thoughts directly." But that still carries risk. "If you are dealing with robots this complex and this intelligent, do you really think they will be easy to control?" Chalmers asked. Either way, mistreating robots may endanger human safety. If we build unconscious robots, we run the risk that they will learn the wrong lessons; if we build conscious robots unintentionally (or intentionally, as in "Westworld"), we run the risk that they will rebel against humans because they have been abused and suppressed.

In the second episode of "Westworld," a host receptionist is asked whether she is a "real person." She replies, "If you can't tell, does it matter?" That sounds about right.