Undeterred, CCP To Ignore Risks Of AI Weapons, Experts Say
Authored by Petr Svab via The Epoch Times (emphasis ours)

Cutting-edge weapons powered by artificial intelligence are emerging as a global security hazard, especially in the hands of the Chinese Communist Party (CCP), according to several experts.

Industrial robots at a booth the day before the 2015 China International Industry Fair at the National Exhibition and Convention Center in Shanghai on Nov. 2, 2015. (Getty Images)

Eager to militarily surpass the United States, the CCP is unlikely to heed safeguards around lethal AI technologies, which are increasingly dangerous in their own right, the experts argued. The nature of the technology is prone to feeding some of the worst tendencies of the regime, and of the human psyche in general, they warned.

“The implications are quite dramatic. And they may be the equal of the nuclear revolution,” said Bradley Thayer, a senior fellow at the Center for Security Policy, an expert on strategic assessment of China, and a contributor to The Epoch Times.

Killer Robots

The development of AI-powered autonomous weapons is, unfortunately, rapidly progressing, according to Alexander De Ridder, an AI developer and co-founder of Ink, an AI marketing firm.

“They’re becoming quickly more efficient and quickly more effective,” he told The Epoch Times, adding that “they’re not at the point where they can replace humans.”

Autonomous drones, tanks, ships, and submarines have become a reality, along with more exotic modalities such as quadruped robot dogs, already armed with machine guns in China.

Even AI-powered humanoid robots, the stuff of sci-fi horrors, are in production. Granted, they’re still rather clumsy in the real world, but they won’t be for long, De Ridder suggested.

“The capabilities for such robots are quickly advancing,” he said.
Once they reach marketable usefulness and reliability, China is likely to turn its manufacturing might to their mass production, according to De Ridder.

“The market will be flooded with humanoid robots, and then it’s up to the programming how they’re used.”

That would mean military use, too. “It’s kind of inevitable,” he said.

Such AI-powered machines are very effective at processing images to discern objects, detecting a human with their optical sensors, for example, explained James Qiu, an AI expert, founder of the GIT Research Institute, and former CTO at FileMaker. That makes AI robots very good at targeting.

“It’s a very effective killing machine,” he said.

AI Generals

On a broader level, multiple nations are working on AI capable of informing and coordinating battlefield decisions: an electronic general, according to Jason Ma, an AI expert and data research lead at a multinational Fortune 500 company. He didn’t want the company’s name mentioned, to avoid any impression that he was speaking on its behalf.

The People’s Liberation Army (PLA), the CCP’s military, recently conducted battle exercises in which an AI was placed directly in command. The U.S. military also has projects in this area, Ma noted. “It’s a very active research and development topic.”

The need is obvious, he explained. Battlefield decisions are informed by a staggering amount of data, from historical context and past intelligence to near-real-time satellite data, all the way to millisecond-by-millisecond input from every camera, microphone, and other sensor on the battlefield. It’s “very hard” for humans to process such disparate and voluminous data streams, he said.

“The more complex the warfare, the more important part it becomes how can you quickly integrate, summarize all this information to make the right decision, within seconds, or within even sub-second,” he said.
A Shield AI V-BAT, a vertical take-off and landing (VTOL), artificial-intelligence-piloted unmanned aircraft system (UAS), on the opening day of the Farnborough International Airshow 2024, southwest of London, on July 22, 2024. (Justin Tallis/AFP via Getty Images)

Destabilization

AI weapons are already redefining warfare, the experts agreed. But the consequences are much broader. The technology is making the world increasingly volatile, Thayer said.

On the most rudimentary level, AI-powered weapon targeting will likely make it much easier to shoot down intercontinental ballistic missiles, detect and destroy submarines, and shoot down long-range bombers. That could neutralize the U.S. nuclear triad, allowing adversaries to “escalate beyond the nuclear level” with impunity, he suggested.

“AI would affect each of those components, which we developed and understood during the Cold War as being absolutely essential for a stable nuclear deterrent relationship,” he said.

“During the Cold War, there was a broad understanding that conventional war between nuclear powers wasn’t feasible. … AI is undermining that, because it introduces the possibility of conventional conflict between two nuclear states.”

If people continue developing AI-powered weapon systems without restrictions, the volatility will only worsen, he predicted.

“AI is greatly affecting the battlefield, but it’s not yet determinative,” he said.

If AI capabilities reach “the effect of nuclear war without using nuclear weapons,” that would set the world on a powder keg, he said.

“If that’s possible, and it’s quite likely that it is possible, then that’s an extremely dangerous situation and incredibly destabilizing situation because it compels somebody who’s on the receiving end of an attack to go first, not to endure the attack, but to aggress.”

In warfare lexicon, the concept is called “damage limitation,” he said.
“You don’t want the guy to go first, because you’re going to get badly hurt. So you go first. And that’s going to be enormously destabilizing in international politics.”

The concern is not just about killer robots or drones but also about various unconventional AI weapons. An AI, for example, could be developed to find vulnerabilities in critical infrastructure, such as the electric grid or water supply systems.

Controlling the proliferation of such technologies appears particularly daunting. AI is just a piece of software. Even the largest models fit on a regular hard drive and can run on a small server farm. Simple but increasingly lethal AI weapons, such as killer drones, can be shipped in parts without raising alarm.

“Both vertical and horizontal proliferation incentives are enormous, and it’s easily done,” Thayer said.

De Ridder pointed out that the Chinese state wants to be seen as responsible on the world stage. But that hasn’t stopped the CCP from supplying weapons, or aiding the weapon programs, of other regimes and groups that aren’t so reputationally constrained, other experts have noted.

It wouldn’t be a surprise if the CCP were to supply autonomous weapons to terrorist groups that would then tie up the U.S. military in endless asymmetrical conflicts. The CCP could even keep its distance and merely supply the parts, letting proxies assemble the drones, much as Chinese suppliers provide fentanyl precursors to Mexican cartels and let them manufacture, ship, and sell the drugs.

The CCP, for example, has a long history of aiding Iranian weapon programs. Iran, in turn, supplies weapons to a panoply of terrorist groups in the region.

“There would be little disincentive for Iran to do this,” Thayer said.

An Iranian military truck carries an Arash drone during a military parade as part of a ceremony marking the country’s annual army day in Tehran on April 17, 2024.
(Atta Kenare/AFP via Getty Images)

Human in the Loop

It’s generally accepted, at least in the United States and among its allies, that the most crucial safeguard against AI weapons wreaking unforeseen havoc is keeping a human in control of important decisions, particularly the use of deadly force.

“Under no circumstances should any machines autonomously, independently be allowed to take a human life, ever,” De Ridder said.

The principle is commonly summarized in the phrase “human in the loop.”

“A human has a conscience and needs to wake up in the morning with remorse and the consequences of what they’ve done, so that they can learn from it and not repeat atrocities,” De Ridder said.

Some of the experts pointed out, however, that the principle is already being eroded by the nature of combat as transformed by AI capabilities.

In the Ukraine war, for example, the Ukrainian military had to equip its drones with some measure of autonomy to guide themselves to their targets, because their communication with human operators was being jammed by the Russian military.

Such drones run only simpler AI, Ma said, given the limited power of the drone’s onboard computer. But that may soon change, as both AI models and computers are getting faster and more efficient.

Apple is already working on an AI that could run on a phone, Ma said. “It’s highly likely it will be in the future put into a small chip.”

Moreover, in a major conflict where hundreds or perhaps thousands of drones are deployed at once, they can share computational power to perform much more complex autonomous tasks.

“It’s all possible,” he said. “It’s gotten to the point where it’s not science fiction; it’s just [a matter of] if there is a group of people who want to devote the time to work on that. It’s tangible technology.”

Removing human control out of necessity isn’t a new concept, according to James Fanell, a former naval intelligence officer and an expert on China.
He gave the example of the Aegis Combat System, deployed on U.S. guided-missile cruisers and destroyers, which automatically detects and tracks aerial targets and launches missiles to shoot them down. Normally, a human operator controls the missile launches, but there’s also a way to switch it to automatic mode, such as when there are too many targets for the human operator to track. The system then identifies and destroys targets on its own, Fanell said.

In mass drone warfare, where an AI coordinates thousands of drones in a systematic attack, the side that gives its AI autonomy to shoot will gain a major speed advantage over the side where humans must approve each shot.

“On the individual shooting level, people have to give up control because they can’t really make all the decisions so quickly,” Ma said.

De Ridder pointed out that a drone shooting another drone on its own would be morally acceptable. But that could unleash a lot of autonomous shooting on a battlefield where there may be humans too, opening the door to untold collateral casualties.

No Rules

Whatever AI safeguards may be practicable, the CCP is unlikely to abide by them anyway, most of the experts agreed.

“I don’t really see there will be any boundaries for China to be cautious about,” Ma said. “Whatever is possible, they will do it.”

“The idea that China would constrain themselves in the use of it, I don’t see that,” Fanell said. “They’re going to try to take advantage of it and be able to exploit it faster than we can.”

A UAV is shown during the military parade in Tiananmen Square, Beijing, China, on Oct. 1, 2019. (Andrea Verdelli/Getty Images)

The human-in-the-loop principle could simply be reinterpreted to apply to “a bigger, whole battle level” rather than “the individual shooting level,” Ma said.

But once one accepts that AI can start shooting on its own in some circumstances, the principle of human control becomes malleable, Fanell said.
“If you’re willing to accept that in a tactical sense, who’s to say you won’t take it all the way up to the highest level of warfare?” he said. “It’s the natural evolution of a technology like this, and I’m not sure what we can do to stop it. It’s not like you’re going to have a code of ethics that says in warfare [let’s abide by] the Marquess of Queensberry Rules of boxing. It’s not going to happen.”

Even if humans are kept in control of macro decisions, such as whether to launch a particular mission, AI can easily dominate the decision-making process, multiple experts agreed. The danger wouldn’t be a poorly performing AI, but rather one that works so well that it instills trust in the human operators.

De Ridder was skeptical of prognostications about a superintelligent AI that vastly exceeds humans. He acknowledged, though, that AI obviously exceeds humans in some regards, particularly speed. It can crunch mountains of data and spit out a conclusion almost immediately.

It’s virtually impossible to figure out how exactly an AI comes to its conclusions, according to Ma and Qiu. De Ridder said that he and others are working on ways to restrict AI to a human-like workflow, so that the individual steps of its reasoning are more transparent.

But given the incredible amount of data involved, it would be impossible for the AI to explain how each piece of information factored into its reasoning without overwhelming the operator, Ma acknowledged.

“If the human operator clearly knows this is a decision [produced] after the AI processed terabytes of data, he won’t really have the courage to overrule that in most cases. So I guess yes, it will be formality,” he said. “Human in the loop is a comfortable kind of phrase, but in reality, humans will give up control quickly.”

Public Pressure

Even if humans are kept in the loop only nominally, it’s still important, De Ridder said.
“As long as we keep humans in the loop, we can keep humans accountable,” he said.

Indeed, all the experts agreed that public pressure is likely to constrain AI weapon development and use, at least in the United States.

Ma gave the example of Google terminating a defense contract over the objections of its staff. He couldn’t envision an analogous situation in China, though.

Qiu agreed. “Anything inside China is a resource the CCP can leverage,” he said. “You cannot say, ‘Oh, this is a private company.’ There is no private company per se [in China].”

Even the CCP cannot disregard public sentiment altogether, De Ridder said. “The government can only survive if the population wants to collaborate.”

But there’s no indication that the Chinese populace sees AI military use as an urgent concern. On the contrary, companies and universities in China appear eager to pick up military contracts, Ma said.

De Ridder called for “an international regulatory framework that can be enforced.” It’s not clear how such regulations could be enforced against China, which has a long history of refusing any limits on its military development. The United States has long tried in vain to bring China to the table on nuclear disarmament. Recently, China refused a U.S. request to guarantee that it wouldn’t use AI for nuclear strike decisions.

If the United States regulates its own AI development, it could create a strategic vulnerability, multiple experts suggested. “Those regulations will be very well studied by the CCP and used as an attack tool,” Qiu said.

Even if some kind of agreement is reached, the CCP has a poor track record of keeping promises, according to Thayer. “Any agreement is a pie crust made to be broken.”

Solutions

De Ridder said he hopes that nations might settle for using AI in less destructive ways.
“There’s a lot of ways that you can use AI to achieve your objectives that does not involve sending a swarm of killer drones to each other,” he said. “When push comes to shove, nobody wants these conflicts to happen.”

The other experts believed, however, that the CCP wouldn’t mind starting such a conflict, as long as it saw a clear path to victory.

“The Chinese are not going to be constrained by our ruleset,” Fanell said. “They’re going to do whatever it takes to win.”

Reliance on the whispers of an AI military adviser, one that instills confidence by processing mountains of data and producing convincing battle plans, could be particularly dangerous, as it may create a vision of victory where there previously wasn’t one, according to Thayer.

“You can see how that might be very attractive to a decision maker, especially one that is hyper-aggressive, as is the CCP,” Thayer said. “It may make aggression more likely.”

“There’s only one way to stop it, which is to be able to defeat it,” Fanell said.

An AI chip of Tongfu Microelectronics is displayed during the World Semiconductor Congress in Nanjing, in China’s eastern Jiangsu Province, on July 19, 2023. (STR/AFP via Getty Images)

Chuck de Caro, a former consultant for the Pentagon’s Office of Net Assessment, recently called for the United States to develop electromagnetic weapons that could disable computer chips. It may even be possible to develop energy weapons that could disable a particular kind of chip, he wrote in a Blaze op-ed. “Obviously, without functioning chips, AI doesn’t work.”

Another option might be to develop an AI superweapon that could serve as a deterrent.

“Is there an AI Manhattan Project that the U.S. is doing that can create the effect that Nagasaki and Hiroshima would have on the PRC and the Chinese Communist Party, which is to bring them to the realization that, ‘Okay, maybe we don’t want to go there.
This is mutually assured destruction?’ I don’t know. But that’s what I would be [doing],” Fanell said.

That could leave the world in a Cold War-like standoff: hardly an ideal state, but one likely seen as preferable to ceding military advantage to the CCP.

“Every country knows it’s dangerous, but nobody can stop because they are afraid they will be left behind,” Ma said.

De Ridder said it might take a profound shock to halt the AI arms race. “We might need like a world war, with immense human tragedy, to ban the use of autonomous AI killing machines,” he said.

Tyler Durden Tue, 08/06/2024 - 20:05
