China’s AI robot army: The dawn of autonomous warfare and global power shift

A silent revolution is unfolding on the global stage, not in the form of tanks or missiles, but as autonomous machines and artificial intelligence (AI) systems reshaping the very concept of warfare. At the forefront of this transformation stands China, orchestrating what may well be the most ambitious and far-reaching militarization of AI in modern history.

With an estimated $15 billion invested in military AI development in a single year and reports of up to one million kamikaze drones in production, China is not merely experimenting with next-gen warfare—it is operationalizing it. The implications are as vast as they are urgent. We are witnessing the birth of a new kind of military might—one that operates at machine speed, beyond the limitations of human reflex and judgment.

This article explores the architecture of China’s AI-driven military ecosystem, analyzes its strategic impact, and raises critical questions about the future of war, ethics, and international security in an increasingly autonomous world.

The Strategic Shift: From Human Command to Algorithmic Execution

China’s embrace of AI warfare is not a collection of isolated programs—it is a comprehensive, multi-domain strategy. The country is no longer content with conventional dominance. Its military doctrine now revolves around “intelligentized warfare,” a concept in which AI systems don’t just support operations—they plan, execute, and adapt missions with minimal or no human intervention.

This represents a significant departure from traditional command structures. Human decision-making, once the core of military engagement, is gradually becoming supplementary. Algorithms now analyze battlefield data, deploy forces, and even engage targets independently.

The Speed Problem

While democratic nations debate the ethics and accountability of AI warfare, China pushes forward unencumbered. Its military AI systems are designed to operate faster than any human decision-making process, creating a tactical advantage that is difficult—if not impossible—to counter through conventional strategies.

Swarming the Skies: China’s Dominance in Aerial AI Systems

China’s most publicized breakthroughs come in the form of drone swarms—networks of AI-enabled flying machines that act collectively to complete missions. These swarms are not science fiction; they are operational.

Loitering Munitions and Kamikaze Drones

China has shifted its focus toward low-cost, high-volume aerial platforms like the CH-91 and Wing Loong series. These drones can linger over a target zone for extended periods before autonomously engaging, making them ideal for dynamic battlefields.

Perhaps most alarming is the reported production of one million kamikaze drones, small and expendable but deadly in numbers. Deployed en masse, they can overwhelm air defenses and saturate enemy positions, not through individual superiority but sheer scale—a disturbing throwback to attrition warfare, now powered by AI.

Drone Swarm Coordination

The real danger lies in networked intelligence. China’s drone swarms can:

  • Communicate in real time
  • Coordinate attacks
  • Adapt to defenses mid-mission
  • Continue operations even when individual units are destroyed

Such capabilities suggest that these drones are no longer mere tools—they are collaborative actors in a decentralized battle system.
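The resilience described above is, at bottom, a distributed-systems property: no single unit is essential because tasking is simply recomputed over whichever drones are still flying. Below is a minimal, purely illustrative Python sketch of that idea; the Drone class, target names, and greedy allocation are invented for this example and do not describe any real system.

```python
# Minimal sketch of decentralized task reallocation in a drone swarm.
# Purely illustrative; real swarm coordination protocols are far more
# complex and not publicly documented.

class Drone:
    def __init__(self, drone_id):
        self.drone_id = drone_id
        self.alive = True
        self.target = None

def assign_targets(drones, targets):
    """Each surviving drone claims the next unclaimed target."""
    unclaimed = list(targets)
    for drone in drones:
        if not drone.alive:
            drone.target = None
            continue
        drone.target = unclaimed.pop(0) if unclaimed else None

swarm = [Drone(i) for i in range(5)]
targets = ["bridge", "radar site", "supply depot"]

assign_targets(swarm, targets)      # initial tasking
swarm[0].alive = False              # one drone is shot down
assign_targets(swarm, targets)      # survivors simply re-plan
for d in swarm:
    print(d.drone_id, d.alive, d.target)
```

The point of the sketch is only that the mission degrades gracefully: losing a unit triggers a re-plan over the survivors rather than a failure of the whole swarm.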

Beneath the Waves: AI in Maritime Dominance

China’s ambitions extend below the surface. Autonomous Underwater Vehicles (AUVs) like the HSU-001 have been deployed to:

  • Conduct deep-sea reconnaissance
  • Lay surveillance networks
  • Track submarines over extended durations

These AUVs can operate for weeks without resurfacing, potentially turning the South China Sea into a digital minefield of autonomous sensors and offensive platforms. Their ability to “lock down” strategic maritime chokepoints gives China a silent but powerful hold over regional waters.

On the Ground: Humanoid Robots and Urban Combat Readiness

While most nations are just dabbling in humanoid robotics, China is already integrating legged robots into its military planning. The Unitree G1 is a prime example—a humanoid capable of traversing uneven terrain, carrying 20 kg payloads, and potentially executing urban operations.

Though marketed for civilian use, these robots are being adapted for tactical applications, including:

  • Surveillance
  • Supply transport in difficult terrain
  • Weaponized support roles

Their modular design makes them ideal for hybrid deployment, blurring the line between logistics and combat roles.

The Fangwang-1: A Turning Point in Autonomous Warfare

In November, at the Zhuhai Air Show, China unveiled something previously reserved for military science fiction: the Fangwang-1 integrated combat system. It was not a demo of individual technologies—it was a live orchestration of autonomous warfare.

Autonomous Battle Group in Action

This system:

  • Included aerial drones, ground robots, and electronic warfare systems
  • Operated without ongoing human control
  • Identified threats, deployed countermeasures, repositioned units, and launched strikes—all autonomously

Even more shocking was its resilience. When engineers simulated enemy jamming by cutting off communications, the system:

  • Reorganized internally
  • Established alternate communication pathways
  • Continued its mission seamlessly

This adaptive autonomy is a strategic leap—akin to giving machines not just orders, but intent.

The Brain Behind the Machine: AI Software as the Real Weapon

China’s real edge may not lie in the hardware but in its military neural networks—AI systems capable of:

  • Learning from every simulated engagement
  • Generating new strategies on the fly
  • Predicting enemy movement using pattern recognition
  • Coordinating multiple domains (air, land, sea, cyber) simultaneously

Reportedly, Chinese military AIs battle each other thousands of times a day in simulation environments, continuously refining tactics that human planners may never even conceive.

This allows Chinese systems to operate with tactical foresight, rather than simply reacting to threats. In combat, this could lead to:

  • Faster response times
  • Predictive preemptive strikes
  • Near-instantaneous redeployment of assets
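The training regime described above resembles self-play reinforcement learning, the approach popularized by game-playing systems such as AlphaZero: a policy improves by repeatedly competing against copies of itself. The following is a toy sketch of that idea only; the scoring function and "tactic" parameters are invented stand-ins for a real combat simulator.

```python
import random

# Toy self-play loop: a candidate "tactic" (a list of weights) challenges
# the current best in a simulated engagement; the winner is kept.
# The engagement model is invented and stands in for a real simulator.
random.seed(0)

def engagement_score(policy):
    """Higher is better; the noise models battlefield uncertainty."""
    return sum(w * random.random() for w in policy)

def mutate(policy, scale=0.05):
    return [w + random.uniform(-scale, scale) for w in policy]

best = [0.5, 0.5, 0.5]              # initial tactic parameters
for battle in range(10_000):        # "thousands of simulated battles"
    challenger = mutate(best)
    if engagement_score(challenger) > engagement_score(best):
        best = challenger           # keep whichever tactic won
print("evolved tactic parameters:", [round(w, 2) for w in best])
```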

Accountability in the Age of Autonomy

With all these advancements, one fundamental question arises: Who is responsible when AI makes the decision to kill?

International humanitarian law is built on accountability, but in a world where machines decide targeting priorities, that clarity is vanishing. When a drone swarm misidentifies civilians as combatants, where does the blame lie?

  • With the software developer?
  • With the commander who deployed the system?
  • Or with the algorithm that “learned” an error?

The diffusion of responsibility is perhaps the gravest legal and ethical challenge in AI warfare.

The Escalation Dilemma: War at Machine Speed

Autonomous weapons engage at algorithmic speeds, often too fast for human oversight. Imagine two opposing AI systems misinterpreting one another’s moves—a false positive or accidental incursion—escalating into full-blown conflict before any human can intervene.

In such scenarios, war becomes less a matter of intent and more a consequence of competing machine logic. This transforms:

  • Border skirmishes into automated retaliation spirals
  • Cyber intrusions into kinetic responses
  • Diplomatic missteps into military engagements
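That feedback loop can be made concrete with a toy model: two automated response policies, each answering the other's last action with a slightly stronger one. All numbers below are invented; the sketch only illustrates how quickly such a loop runs away when no human pause is built in.

```python
# Toy escalation spiral between two automated response policies.
# The 20% increment and the thresholds are invented for illustration only.

def automated_response(perceived_threat):
    """Doctrine encoded in software: always respond with 20% more force."""
    return perceived_threat * 1.2

level_a, level_b = 1.0, 0.0          # side A commits a minor incursion
exchanges = 0
while max(level_a, level_b) < 100:   # 100 = arbitrary "open conflict" line
    level_b = automated_response(level_a)   # B's system reacts to A
    level_a = automated_response(level_b)   # A's system reacts to B
    exchanges += 1

print(f"crossed the open-conflict threshold after {exchanges} "
      f"machine-speed exchange cycles")      # roughly a dozen cycles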

Global Implications and the Need for AI Warfare Governance

China’s operational deployment of AI military systems means the rest of the world must now rethink defense, diplomacy, and deterrence. The old frameworks—treaties, arms control agreements, and doctrines—were designed for human conflict. They are outdated in the face of:

  • Swarm intelligence
  • Humanoid combatants
  • Machine-generated strategies
  • Adaptive autonomous systems

Despite calls from the international community, no binding global treaty exists to regulate autonomous weapons. China’s rapid pace underscores the urgency of developing norms, verification systems, and failsafe protocols before AI-driven conflicts erupt uncontrollably.

Conclusion: Are We Ready for the New Battlefield?

The militarization of AI is no longer a theoretical debate—it is a lived reality. China’s development and deployment of autonomous combat systems mark a strategic inflection point in global security. From the skies to the seas to urban battlegrounds, autonomous platforms are not only supporting warfare—they are beginning to replace humans in executing it.

Whether this leads to fewer casualties through precision and automation, or more devastating wars due to speed and scale, will depend on how humanity chooses to govern the machines we’ve created.

The age of algorithmic war has begun. The question is—can human wisdom keep pace?

Six reasons to ban lethal autonomous weapon systems (LAWS)

Lethal Autonomous Weapon Systems (LAWS), often referred to as “killer robots,” are a new class of weapons that utilize sensors and algorithms to autonomously identify, engage, and neutralize targets without direct human intervention. While fully autonomous weapons have not yet been deployed, existing technologies like missile defense systems already demonstrate autonomous target identification and engagement. With rapid advancements in artificial intelligence and robotics, concerns are mounting about the potential development and deployment of LAWS against human targets in the near future.

The international community, including numerous nations, the United Nations (UN), the International Committee of the Red Cross (ICRC), and non-governmental organizations, is calling for regulation or an outright ban on LAWS. This growing movement is fueled by ethical, moral, legal, accountability, and security concerns. Over 70 countries, 3,000 experts in robotics and artificial intelligence (including prominent figures like Stephen Hawking and Elon Musk), and numerous companies, religious leaders, and Nobel Peace Laureates have voiced their support for a ban on killer robots. China, a permanent member of the UN Security Council, has called for a legally binding ban within the Convention on Certain Conventional Weapons (CCW).

Why Ban LAWS? The Risks and Concerns

1. Unpredictability & Unreliability:

Lethal Autonomous Weapon Systems (LAWS), despite their sophisticated algorithms, are not foolproof. These systems can make errors in judgment, target identification, or engagement, leading to unintended harm to civilians and non-combatants. The use of machine learning in LAWS introduces an element of unpredictability as these systems learn and adapt, potentially resulting in unintended consequences. Integrating ethical standards and international humanitarian law into LAWS algorithms remains a complex challenge, raising concerns about their adherence to legal and ethical principles during deployment. For instance, a LAWS system deployed in a conflict zone might misidentify a civilian vehicle as a military target due to an algorithmic error or faulty sensor data, resulting in the unnecessary loss of life.

2. Arms Race and Proliferation:

The development of LAWS could trigger a global arms race as nations compete to acquire and deploy these weapons. This could lead to increased military spending, heightened tensions, and a greater risk of conflict. The relative affordability and ease of replication of LAWS technology raise concerns about proliferation, with the potential for non-state actors, including terrorist groups, to acquire and use these weapons. The rapid decision-making capabilities of LAWS, especially when interacting with other autonomous systems, could lead to unintended escalation of conflicts, potentially spiraling out of control. If multiple nations deploy LAWS in a conflict zone, the autonomous interactions between these systems could quickly escalate a minor skirmish into a full-scale war, with devastating consequences.

3. Humanity in Conflict: Ethical Concerns:

Machines lack the capacity for compassion, empathy, and moral reasoning that are essential for making life-or-death decisions in armed conflict. Replacing human judgment with algorithmic decision-making in LAWS raises profound ethical concerns about the devaluation of human life. Delegating the decision to kill to machines could lead to a loss of human agency and accountability in warfare. A LAWS system might make a decision to eliminate a target based on purely tactical considerations, disregarding the potential for collateral damage or the broader ethical implications of the action.

4. Responsibility and Accountability:

Determining responsibility for unlawful acts committed by LAWS is a significant challenge. Is the manufacturer, the programmer, the military commander, or the machine itself accountable for the actions of a LAWS? Existing legal frameworks may not adequately address the unique challenges posed by LAWS, and establishing clear legal guidelines for their development, deployment, and use is essential. The lack of clear accountability could undermine deterrence and punishment mechanisms for violations of international humanitarian law committed by LAWS. In the event of a LAWS system causing civilian casualties, determining who is legally responsible and ensuring appropriate consequences could be extremely difficult, potentially leading to a lack of justice for the victims.

5. Psychological Impact and Dehumanization of Warfare:

The removal of human soldiers from direct combat through the use of LAWS creates an emotional distance that can desensitize individuals and societies to the consequences of war. This could lead to a greater willingness to engage in conflicts, as the human cost becomes less tangible. Soldiers who oversee or operate LAWS may experience moral injury as they grapple with the consequences of decisions made by machines, potentially leading to psychological distress and trauma. Additionally, relying on machines to make lethal decisions can dehumanize the enemy, reducing them to mere targets and further eroding the ethical boundaries of warfare.

6. Socioeconomic Consequences and the Threat to Peace:

The proliferation of LAWS could disrupt the existing balance of power among nations, as countries with advanced technological capabilities gain a significant military advantage. The development and deployment of LAWS could divert resources away from essential social programs and economic development, exacerbating global inequalities and potentially contributing to instability. Furthermore, the increased reliance on autonomous systems in military operations could raise the risk of accidental war due to technical malfunctions, misinterpretations of data, or cyberattacks.

Conclusion

The potential deployment of Lethal Autonomous Weapon Systems (LAWS) raises serious concerns about their impact on international security, humanitarian law, and the ethical conduct of warfare. The unpredictability, potential for arms races, ethical dilemmas, and challenges in accountability all contribute to the growing calls for a ban on these weapons.

While technological advancements offer the potential for positive applications in various fields, the use of autonomous systems in warfare demands careful consideration and robust international regulations to ensure that human control and ethical principles remain at the forefront of military decision-making.

Emerging technologies and the future of military robotics

Military robotics, a field with roots stretching back over a century, has evolved exponentially in recent decades. From early guided munitions to contemporary unmanned vehicles, the landscape of military technology continues to be shaped by innovation and strategic necessity. In this exploration, we delve into the convergence of traditional military robotics with emerging technologies, examining their impact on warfare, ethical considerations, and potential future trajectories.

Evolution of Military Robotics

Historically, military robotics has encompassed various forms, including mines, torpedoes, and early guided munitions. The past eighty years have witnessed significant advancements, such as the development of unmanned air, ground, underwater, and surface vehicles. These innovations have been driven by the dual objectives of standoff, keeping humans out of harm's way, and precision, ensuring reliable and accurate operations.

Technological Foundations

The foundation of modern military robotics rests upon key technological pillars:

Electronic Miniaturization:

The relentless march of electronic miniaturization has catalyzed a paradigm shift in the design and functionality of military robotics. Over the past decades, advancements in semiconductor technology, integrated circuitry, and microelectromechanical systems (MEMS) have enabled the miniaturization of electronic components to an unprecedented degree. This miniaturization revolution has endowed robots with enhanced sensing capabilities, ranging from high-resolution imaging and laser-based rangefinding to sophisticated radar and lidar systems. By shrinking the footprint of sensors and processing units, electronic miniaturization has facilitated the integration of complex functionalities into compact and agile robotic platforms, enabling them to navigate dynamic environments, perceive subtle cues, and execute mission-critical tasks with unparalleled precision and efficiency.

Telecommunications:

Advanced telecommunications technologies have heralded a new era of connectivity and collaboration in military robotics. From long-range satellite communications to short-range wireless protocols, the spectrum of telecommunications capabilities available to unmanned systems has expanded exponentially, enabling seamless data exchange and real-time command and control across distributed robotic networks. High-bandwidth communication channels facilitate the transmission of sensor data, telemetry, and situational awareness feeds, empowering human operators to remotely monitor, supervise, and intervene in robotic operations from remote command centers or deployed platforms. Moreover, advances in secure communication protocols and encryption algorithms ensure the confidentiality, integrity, and availability of data transmitted between robotic assets, mitigating the risk of interception, tampering, or exploitation by adversaries.

Global Positioning:

The ubiquity of Global Positioning System (GPS) technology has emerged as a linchpin of modern military robotics, providing robots with precise localization, navigation, and timing capabilities across diverse operational theaters. By leveraging a constellation of orbiting satellites, GPS enables unmanned systems to determine their exact geographical coordinates, synchronize their internal clocks, and navigate predetermined routes with pinpoint accuracy. This spatial-temporal awareness is instrumental in facilitating mission planning, route optimization, and target acquisition, enhancing robotic platforms’ autonomy, reliability, and effectiveness in dynamic and contested environments. Furthermore, integrating GPS with inertial navigation systems (INS) and terrain mapping algorithms enables robots to maintain situational awareness in GPS-denied or degraded environments, ensuring operational resilience and mission continuity in adverse conditions.
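One common way to combine an inertial navigation system with intermittent GPS fixes is a complementary filter: dead-reckon with the INS and nudge the estimate toward GPS whenever a fix arrives. The one-dimensional sketch below is purely illustrative; the weights and measurements are invented, and fielded systems typically use Kalman filtering.

```python
# 1-D complementary filter: INS dead reckoning corrected by sparse GPS fixes.
# Weights and measurements are invented; real systems use Kalman filters.

def fuse(ins_estimate, gps_fix, gps_weight=0.3):
    if gps_fix is None:                 # GPS jammed or denied: trust INS alone
        return ins_estimate
    return (1 - gps_weight) * ins_estimate + gps_weight * gps_fix

position = 0.0
velocity = 1.0                          # metres per second reported by the IMU
dt = 1.0
gps_fixes = [None, 1.1, None, 3.2, None, 4.9]   # sparse, slightly noisy fixes

for fix in gps_fixes:
    position += velocity * dt           # INS propagation step
    position = fuse(position, fix)      # correction step when GPS is available
    print(round(position, 2))
```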

Emerging Technologies

As the horizon of technological innovation expands, a constellation of emerging technologies converges to redefine the capabilities and possibilities of military robotics. From the frontier of Artificial Intelligence (AI) to the vanguard of Quantum Technology, these disruptive forces promise to propel unmanned systems into a new era of autonomy, adaptability, and strategic advantage. Against the backdrop of geopolitical competition and rapid technological change, integrating these emerging technologies into military robotics heralds a paradigm shift in warfare, offering unprecedented opportunities and profound challenges for policymakers, strategists, and ethicists alike.

Artificial Intelligence (AI):

At the forefront of the technological revolution stands Artificial Intelligence (AI), a domain characterized by algorithms, neural networks, and machine learning models that seek to mimic human cognitive functions. In military robotics, AI promises to augment unmanned systems with enhanced perception, decision-making, and adaptability, enabling them to autonomously navigate complex environments and execute mission-critical tasks with precision and efficiency. Real-world applications of AI in military robotics range from autonomous drones capable of conducting reconnaissance missions to unmanned ground vehicles equipped with intelligent navigation systems that can traverse rugged terrain and evade obstacles autonomously. Moreover, AI-driven predictive analytics and pattern recognition algorithms empower robotic platforms with predictive maintenance capabilities, enabling proactive maintenance interventions and maximizing operational readiness in dynamic and unpredictable operational environments.

Probabilistic Robotics:

Probabilistic robotics techniques represent a paradigm shift in how unmanned systems perceive and interact with their environments. These techniques mitigate uncertainties inherent in the Sense-Model-Plan-Act (SMPA) cycle by integrating imperfect sensor data into models and plans. Simultaneous Localization and Mapping (SLAM) has revolutionized robotic navigation and mapping. SLAM enables robots to autonomously perceive and map their surroundings in real-time, even in dynamic and uncertain environments. This means enhanced situational awareness for military applications, enabling unmanned systems to navigate complex terrains and execute missions with unprecedented precision and efficiency.
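Underneath SLAM and the rest of the probabilistic-robotics toolkit sits Bayesian filtering: maintain a probability distribution over the robot's state, sharpen it with noisy measurements, and blur it with noisy motion. The following is the standard textbook 1-D discrete Bayes filter, shown as an illustration of the principle rather than any fielded system.

```python
# Discrete 1-D Bayes filter: localize a robot in a 5-cell corridor where
# some cells contain a door (1) and others a wall (0). Textbook example.

world = [1, 0, 0, 1, 0]
belief = [0.2] * len(world)          # start fully uncertain

def sense(belief, measurement, p_hit=0.8, p_miss=0.2):
    """Weight each cell by how well it explains the noisy measurement."""
    weighted = [b * (p_hit if world[i] == measurement else p_miss)
                for i, b in enumerate(belief)]
    total = sum(weighted)
    return [w / total for w in weighted]

def move(belief, p_exact=0.9):
    """Shift belief one cell to the right (cyclically), with motion noise."""
    n = len(belief)
    return [p_exact * belief[(i - 1) % n] + (1 - p_exact) * belief[i]
            for i in range(n)]

belief = sense(belief, 1)            # robot's sensor reports a door
belief = move(belief)                # robot drives forward one cell
belief = sense(belief, 0)            # sensor now reports a wall
print([round(b, 3) for b in belief]) # belief concentrates on consistent cells
```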

Networking:

Integrating military robots into networked systems has ushered in a new era of collaborative warfare, enabled by the ubiquitous connectivity of the Internet. Robots can seamlessly share sensor data, modeling information, and planning strategies through networked communication, facilitating coordinated action among multiple autonomous agents. Decentralized SMPA capabilities allow for distributed decision-making across the battlefield, enhancing adaptability and resilience in the face of dynamic threats. In practical terms, unmanned systems can autonomously coordinate their actions, respond to changing mission objectives, and adapt to unforeseen contingencies in real-time without human intervention.

Parallel Processing:

Parallel processing architectures represent the computational backbone of modern military robotics, enabling unmanned systems to perform complex tasks with unprecedented speed and efficiency. From multicore CPUs to specialized graphics processors, parallel processing accelerates computation-intensive tasks such as modeling, planning, and sensory processing. This enhanced computational power translates into faster decision-making and improved performance of military robots in dynamic and unpredictable environments. For example, unmanned aerial vehicles (UAVs) with parallel processing capabilities can rapidly analyze vast amounts of sensor data, identify potential threats, and execute evasive maneuvers with split-second precision, enhancing their survivability and mission effectiveness on the battlefield.
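The pattern described here, many compute-heavy sensor tasks spread across cores, is typically expressed as a worker pool. A minimal sketch follows; the analyze_frame function is an invented stand-in for a real perception workload.

```python
from concurrent.futures import ProcessPoolExecutor

# Worker-pool sketch: process many sensor frames in parallel across CPU
# cores. analyze_frame is an invented stand-in for a heavy perception step.

def analyze_frame(frame_id):
    score = sum(i * i for i in range(50_000)) % 97   # fake compute-heavy work
    return frame_id, score

if __name__ == "__main__":
    frames = range(32)
    with ProcessPoolExecutor() as pool:              # one worker per core
        results = list(pool.map(analyze_frame, frames))
    print(results[:4])
```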

Quantum Technology:

As the frontier of quantum computing and quantum sensing unfolds, military robotics stand poised to benefit from the transformative potential of quantum technology. Quantum computing, with its promise of exponentially faster processing speeds and enhanced cryptographic capabilities, is key to unlocking new frontiers of data analytics, optimization, and decision support in unmanned systems. In quantum sensing, quantum gravimetry and quantum magnetometry advancements offer unprecedented navigation, mapping, and situational awareness capabilities in GPS-denied or GPS-degraded environments. Real-world examples of quantum-enhanced military robotics include autonomous drones equipped with quantum sensors capable of detecting subtle gravitational anomalies indicative of underground structures or concealed threats, thereby enhancing unmanned platforms’ operational effectiveness and survivability in contested environments.

Biotechnology:

The intersection of biotechnology and military robotics presents a landscape of possibilities and ethical dilemmas as advancements in gene editing, synthetic biology, and bio-inspired design reshape the boundaries of human-machine interaction. From bioengineered materials that mimic the resilience and adaptability of natural organisms to neuroprosthetic devices that augment human cognition and control, biotechnology promises to enhance robotic systems’ capabilities and resilience in diverse operational contexts. Real-world applications of biotechnology in military robotics include the development of biomimetic drones inspired by the flight mechanics of birds and insects, as well as the integration of biohybrid actuators and sensors derived from biological tissues into robotic platforms, enabling them to exhibit lifelike behaviors and adaptability in response to changing environmental conditions.

Directed Energy (DE) Weapons:

Beyond computation and biology, Directed Energy (DE) weapons represent a disruptive paradigm shift in military technology, offering a spectrum of defense, deterrence, and force projection capabilities. DE weapons, which utilize concentrated electromagnetic energy to incapacitate, damage, or destroy enemy targets, present novel opportunities for enhancing the lethality and effectiveness of unmanned systems in diverse operational environments. From high-powered microwave weapons capable of disabling electronics and communications systems to laser-based systems for precision targeting and counter-unmanned aircraft missions, DE weapons offer a versatile and scalable means of engaging threats with speed-of-light accuracy and minimal collateral damage. Moreover, the low cost per shot and nearly limitless magazines of DE weapons enable unmanned systems to defend against missile salvos, swarm attacks, and asymmetric threats with unprecedented efficiency and effectiveness, enhancing robotic platforms’ survivability and mission success in contested and degraded environments.

Lethal Autonomous Weapon Systems (LAWS):

As the frontier of military robotics expands, the development and deployment of Lethal Autonomous Weapon Systems (LAWS) raise profound ethical, legal, and strategic questions about the nature of warfare and the role of human agency in autonomous systems. LAWS, defined as weapon systems capable of independently identifying and engaging targets without human intervention, represent a paradigm shift in the conduct of armed conflict, offering opportunities for enhanced lethality and efficiency and risks of unintended consequences and ethical dilemmas. Real-world applications of LAWS include autonomous drones equipped with onboard target recognition algorithms and decision-making capabilities, as well as unmanned ground vehicles capable of engaging enemy forces based on pre-defined rules of engagement. However, concerns about accountability, proportionality, and the risk of unintended harm have led to calls for preemptive bans on LAWS by a coalition of countries and non-governmental organizations, highlighting the need for international norms, regulations, and ethical guidelines to govern the development and use of autonomous weapons in warfare.

Hypersonic Weapons:

In strategic warfare, the development and proliferation of hypersonic weapons capable of traveling at speeds exceeding Mach 5 pose novel challenges and opportunities for military robotics. Hypersonic weapons offer enhanced maneuverability and unpredictability, unlike traditional ballistic missiles, making them difficult to track and intercept by existing missile defense systems. Real-world examples of hypersonic weapons include hypersonic glide vehicles launched from rockets and hypersonic cruise missiles powered by high-speed engines throughout their flight. While some analysts argue that hypersonic weapons could enhance strategic stability by deterring adversaries and improving the effectiveness of precision strike capabilities, others warn of the risks of miscalculation, unintended escalation, and strategic instability resulting from hypersonic weapons’ compressed timelines and unpredictable flight paths. Moreover, concerns about the affordability, technological feasibility, and utility of hypersonic missile defense systems raise questions about the strategic implications of hypersonic warfare and the need for comprehensive arms control measures to mitigate the risks of proliferation and destabilization.

Conclusion

The depth and breadth of emerging technologies in military robotics—from Artificial Intelligence and Quantum Technology to Biotechnology, Directed Energy, Lethal Autonomous Weapon Systems, and Hypersonic Weapons—present a complex tapestry of opportunities and challenges for policymakers, strategists, and technologists alike. As humanity navigates the complexities of this technological frontier, it is imperative to approach the integration and deployment of emerging technologies in military robotics with foresight, prudence, and a steadfast commitment to upholding the principles of ethical conduct, international law, and human dignity in the pursuit of security, stability, and peace in the 21st century.

Potential risks of AI in military applications

Artificial Intelligence (AI) plays an increasing role in planning and supporting military operations. As a key tool in intelligence and analysis of the enemy’s intelligence, AI can cause a dramatic evolution, perhaps even a transformation, in the character of war.

AI applications, which are frequently referred to as a tool for jobs that are “dull, dirty, and dangerous,” provide a way to avoid endangering human lives or assigning humans to tasks that do not require human creativity. AI systems can also lower logistics and sensing costs while improving communication and transparency in complex systems.

National intelligence, surveillance, and reconnaissance capabilities have benefited significantly from AI-enabled systems and platforms. AI’s ability to assist in capturing, processing, storing, and analyzing visual and digital data has increased the quantity, quality, and accuracy of data available to decision-makers. They can use this data for everything from equipment maintenance optimization to minimizing civilian harm.

Potential Risks of Military AI

Artificial intelligence (AI) has the potential to change warfare in both positive and negative ways. It’s easy to think of AI technologies as primarily facilitating offensive operations, but they’ll also be useful for defensive operations. Because AI is a general-purpose technology, how it alters the offense-defense balance in various areas will vary depending on the specific application of AI and may evolve.

The following are some general characteristics of AI and associated risks, but keep in mind that these are only possibilities. Technology does not determine fate, and states can choose whether or not to use AI technology. The choices states make will determine how these risks manifest. A concerted effort to avoid these dangers may be successful.

1. Accident Risk

In theory, automation has the potential to improve warfare precision and command and control over military forces, reducing civilian casualties and the risk of unintended escalation. Commercial airline autopilots have improved safety, and self-driving cars will follow suit in time. However, the difficulty in developing safe and reliable self-driving cars in all weather and driving conditions highlights AI’s current limitations. Driving or commercial flying is far less complex and adversarial than war.

Another issue militaries face is a lack of data on the battlefield environment. Waymo has driven over 10 million miles on public roads to develop self-driving cars that withstand various driving conditions, and it simulates a further 10 million miles of driving in software every day. Waymo can now test its cars in a variety of environments. The issue for militaries is that they have very little ground-truth data about wartime conditions with which to evaluate their systems. Militaries can test their AI systems in field exercises or digital simulation environments, but they will not be able to verify actual performance in combat until wartime. Fortunately, wars are a rare occurrence. However, this creates a problem when it comes to testing autonomous systems. In peacetime, militaries can try to replicate real operational conditions as closely as possible, but they will never be able to fully recreate the chaos and violence of war. Humans are adaptable and expected to innovate in times of war based on their prior training.

On the other hand, machine intelligence is not as adaptable as human intelligence. There’s a chance that military AI systems will perform well in training but fail in combat because the environment or operational context is different, even if only slightly. Failures could result in accidents or render military systems ineffective.
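In machine-learning terms this is distribution shift: a system tuned on peacetime or simulated data degrades when wartime inputs stop resembling the training set. The toy numerical illustration below makes the point; the threshold "classifier" and all distributions are invented.

```python
import random

# Toy distribution shift: a threshold detector tuned on "training" data
# loses accuracy when the clutter distribution changes at test time.
# All distributions and numbers are invented for illustration.
random.seed(1)

def make_samples(threat_mean, clutter_mean, n=2000):
    samples = []
    for _ in range(n):
        is_threat = random.random() < 0.5
        mean = threat_mean if is_threat else clutter_mean
        samples.append((random.gauss(mean, 1.0), is_threat))
    return samples

def accuracy(threshold, samples):
    return sum((reading > threshold) == is_threat
               for reading, is_threat in samples) / len(samples)

training = make_samples(threat_mean=5.0, clutter_mean=2.0)   # peacetime-like
wartime = make_samples(threat_mean=5.0, clutter_mean=4.0)    # heavier clutter

threshold = 3.5                       # tuned against the training data
print("training accuracy:", round(accuracy(threshold, training), 3))
print("wartime accuracy: ", round(accuracy(threshold, wartime), 3))
```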

Accidents involving military systems could be disastrous. They have the potential to kill civilians or escalate a conflict unintentionally. Even if humans regained control, an incident in which enemy troops are killed could heighten tensions and inflame public opinion, making it difficult for national leaders to back down from a crisis. Accidents, as well as vulnerabilities to hacking, could jeopardize crisis stability and complicate international escalation management.

2. Autonomy and Predelegated Authority

Even if AI systems perform flawlessly, nations may face difficulties predicting what actions they might want to take in a crisis. Humans delegate authority for certain actions to machines when deploying autonomous systems. The problem is that leaders might take a different approach in a real crisis. During the Cuban Missile Crisis, US leaders decided that if the Soviet Union shot down a US reconnaissance plane over Cuba, they would attack. They changed their minds after the plane was shot down. Projection bias is a cognitive flaw in which people fail to accurately predict their preferences in future situations. The danger is that autonomous systems will perform as programmed but not how human leaders want, potentially escalating crises or conflicts.

3. Prediction and Overtrust in Automation

Keeping humans informed and limiting AI systems to only giving advice is not a cure-all for these dangers. Automation bias is the tendency for humans to place too much faith in machines. In 2003, humans were involved in two fratricide incidents involving the highly automated US Patriot air and missile defense system, but they could not prevent the deaths. Even after being told the robot was broken, participants in one famous psychological experiment followed a robot the wrong way through a smoke-filled building simulating a fire emergency.

Even before a war begins, putting too much faith in machines could lead to accidents and miscalculations. The Soviet Union conducted Operation RYaN in the 1980s to warn of a surprise nuclear attack by the United States. The intelligence program monitored data on a variety of potential attack indicators, including the amount of blood in blood banks, the location of nuclear weapons and key decision-makers, and the activities of national leaders. This could be stabilizing if AI systems could provide accurate early warning of a surprise attack. Knowing that a surprise attack would be impossible, nations would be less likely to attempt one. However, prediction algorithms are only as good as the data used to train them. There simply isn’t enough data to determine what is indicative of an attack for rare events like a surprise attack. Data that is incorrect will result in incorrect analysis. However, the black-box nature of AI, with its internal reasoning hidden from human users, can obscure these issues. Human users may not be able to see that the algorithm’s analysis has gone wrong if there isn’t enough transparency to understand how it works.
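The underlying problem is the base rate of the event being watched for. Bayes' rule shows that even an accurate detector of a very rare event produces mostly false alarms; the numbers below are invented purely to illustrate the arithmetic.

```python
# Base-rate illustration with invented numbers: even an accurate warning
# system is almost always wrong when the event it watches for is very rare.

p_attack = 1e-6                        # assumed prior: attack underway
p_alarm_given_attack = 0.99            # assumed detector sensitivity
p_alarm_given_no_attack = 0.01         # assumed false-positive rate

p_alarm = (p_alarm_given_attack * p_attack
           + p_alarm_given_no_attack * (1 - p_attack))

# Bayes' rule: probability the attack is real, given that the alarm fired.
p_attack_given_alarm = p_alarm_given_attack * p_attack / p_alarm
print(f"P(attack | alarm) = {p_attack_given_alarm:.6f}")   # roughly 1 in 10,000
```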

4. Nuclear Stability Risks

All of these dangers are particularly serious in the case of nuclear weapons, where mishaps, delegated authority, or overconfidence in automation can have disastrous consequences. For example, false alarms in nuclear early warning systems could result in disaster. There have been numerous nuclear false alarms and safety lapses with nuclear weapons throughout the Cold War. A Soviet early warning satellite system called Oko falsely detected a launch of five US intercontinental ballistic missiles against the Soviet Union in one notable incident in 1983. The satellites detected sunlight reflected off cloud tops, but the automated system signaled “missile launch” to human operators. According to Soviet Lieutenant Colonel Stanislav Petrov, the system was malfunctioning, but the complexity and opacity of AI systems could lead human operators to overtrust them in future false alarms. Other aspects of nuclear operations that use AI or automation could also be dangerous. Accidents involving nuclear-armed unmanned aircraft (drones) could result in states losing control of their nuclear payloads or inadvertently signaling escalation to an adversary.

Large and micro defense drones for military operations

Drones, or unmanned aerial vehicles (UAV), have reshaped modern warfare by allowing militaries to engage enemies precisely and gather intelligence without putting their soldiers’ lives in danger.

The US Department of Defense (DoD) has been the primary buyer of multimillion-dollar drones. Continued defense spending in the United States was expected to account for roughly 40% of the entire drone market by 2020.

Drone spending in the United States is increasing with the migration of artificial intelligence (AI) and robotics from Silicon Valley to the defense industry. AI will add new functionality to legacy platforms over the next decade, allowing humans to use robotic systems to improve troop safety and decision-making. Future DoD spending will eventually fund fleets of self-driving vehicles across all military branches.

Military aircraft will become increasingly autonomous in the future, from surveillance to weapon delivery. Despite the ongoing winddown of the MQ-9 Reaper and the retirement of the MQ-1 Predator fleets, US defense spending on drone procurement and R&D is likely to increase over the next decade.

Large drones for military operations

Despite rising defense budgets, drone manufacturers in the United States face stiff competition in the “large drone” market from companies in China and Israel. The United States is not the only country spending more on defense. According to the Stockholm International Peace Research Institute (SIPRI), military spending is increasing worldwide.

Although the United States has the largest fleet of high-altitude, long-endurance (HALE) and medium-altitude, long-endurance (MALE) drones, China and Israel have increased their production and export of defense drones for foreign and domestic militaries. The Missile Technology Control Regime (MTCR), established in 1987 to limit missile technology proliferation, has been interpreted as limiting the export of MALE and HALE US drones.

In July 2020, the US government signed legislation allowing the sale of armed US drones with a top speed of less than 800 kilometers per hour to foreign governments that had previously been barred from purchasing them under the MTCR. As a result, exports of MALE and HALE US defense drones are expected to increase in the near future.

MALE drones, such as the MQ-9 Reaper, currently operate in undefended airspace and at altitudes where they can be seen and shot down; however, future war zones’ airspace may not allow loitering MQ-9s. The United States Air Force announced in 2020 that the MQ-9 would be phased out of service in favor of a lower-cost, reusable, and expendable replacement that can be lost in combat without exposing top-secret engineering.

Although the US wants to avoid incidents like the 2011 Iranian capture and eventual reverse engineering of the classified Lockheed Martin RQ-170 Sentinel, the US Air Force is likely to keep using stealth intelligence, surveillance, and reconnaissance (ISR) drones like the RQ-170 and Northrop Grumman’s RQ-180 for at least the next five years, while also accelerating the retirement of aging unclassified aircraft.

In aerial combat with autonomous drones, human-piloted advanced fighter aircraft like the F-22 and F-35 are likely to reign supreme for the next decade. However, military R&D investments in new drones suggest that in the long run, when advanced autonomous combat offerings develop the situational awareness and processing capacity required to consistently outperform humans, air superiority may shift to autonomous drones.

Microdrones for military operations

While large HALE and MALE drones will continue to dominate defense drone spending, defense forces will soon add more compact rotary drones to their arsenals. On the battlefield, small drones are nothing new. Since the mid-2000s and early-2010s, the US has used AeroVironment’s fixed-wing RQ-11 Raven and RQ-20 Puma drones at battalion level for midrange reconnaissance. The US Army awarded AeroVironment a $76 million Lethal Miniature Aerial Missile Systems (LMAMS) procurement contract in May 2020 for its Switchblade drone, a back-packable loitering munition drone for targets beyond visual line of sight (BVLOS).

Rotary drones, on the other hand, have been conspicuously absent from military toolkits due to their slow speeds, short battery life, and fragility compared to fixed-wing counterparts. The integration of powerful onboard sensors and improvements in rotary drone performance have opened up tactical use cases for military adoption. The Swiss Armed Forces chose Parrot, Europe’s leading drone company, to supply rotary drones for its Mini UAV program in February 2020. The Defense Innovation Unit (DIU) of the US Department of Defense approved five US-made multi-rotor drones in August 2020.

In the military, small drones are used as situational awareness tools that can be deployed quickly. They can be used to get a bird’s eye view of the battlefield or navigate a building autonomously to clear rooms before troops arrive. Modern military ground forces will likely acquire at least one rotary drone per platoon, resulting in more than 19,000 units acquired by the US Army and US Marine Corps (USMC) to perform short-range, quick-look reconnaissance missions.

Defending airports from drones: Counter-drone technologies

Airports differ in size, capacity, air traffic, and proximity to populated areas. But some hazards associated with drones or unmanned aerial vehicles (UAVs) are common to all airports.

Drone incidents at critical infrastructures like airport facilities are rapidly increasing in frequency, complexity, and severity as drones become larger, more powerful, and cheaper, posing a wide range of practical, legal, and policy challenges in the airport environment.

Therefore, it is critical to deploy countermeasures that protect airports from aerial attacks and from unwanted drone activities, such as spying on and tracking points of interest or conducting unauthorized mapping and surveillance, through effective vulnerability assessment, risk management, and resilience actions.

In the previous post, we read about four types of sensor technologies used by airport authorities to identify unauthorized drone presence near the airports. We have also seen the pros and cons of these commercially available C-UAS detection technologies. Today, we will discuss five counter-drone techniques used by airports as mitigation countermeasures to stop drones from invading airport premises.

1. Electronic interdiction or signal jamming

Electronic interdiction or signal jamming is the intentional use of RF transmission to block signals and disrupt communications between the ground control station (GCS) operator and the flying UAV. Depending on the drone’s design, this results in one of the following reactions (a simplified sketch of such link-loss failsafe logic follows the list):

  • the drone makes a controlled landing in its current position
  • the drone returns to a user-set home location
  • the drone falls uncontrolled to the ground
  • the drone flies off in a random unchecked direction.
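Which of these reactions occurs is usually decided by failsafe logic programmed into the drone's flight controller. The sketch below is a simplified, generic illustration; the states, grace period, and behavior names are invented and do not describe any particular manufacturer's firmware.

```python
from enum import Enum, auto

# Simplified, generic sketch of link-loss failsafe logic in a flight
# controller. States, grace period, and behaviors are invented examples.

class FailsafeMode(Enum):
    RETURN_TO_HOME = auto()
    LAND_IN_PLACE = auto()
    NONE = auto()                      # no failsafe configured

def on_link_loss(mode, home_known, seconds_without_c2, grace_period=3):
    if seconds_without_c2 < grace_period:
        return "hold position and attempt to re-acquire the C2 link"
    if mode is FailsafeMode.RETURN_TO_HOME and home_known:
        return "navigate autonomously back to the recorded home point"
    if mode is FailsafeMode.LAND_IN_PLACE:
        return "make a controlled landing at the current position"
    return "no failsafe: drift away or fall uncontrolled"

print(on_link_loss(FailsafeMode.RETURN_TO_HOME, home_known=True,
                   seconds_without_c2=10))
print(on_link_loss(FailsafeMode.NONE, home_known=False,
                   seconds_without_c2=10))
```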

a. RF Jamming

A static, mobile, or handheld radio-frequency jammer transmits a large amount of RF energy towards the drone, masking the controller signal and preventing the operator from maneuvering the drone.

Pros

  • Use RF transmission to block signals and disrupt C2 between the GCS operator and UAV
  • Medium range up to a few kilometers, depending on emitting power.
  • Static, mobile, or handheld device
  • Programmable based on RF sensor scanning
  • Disrupts radio-frequency (RF) communication links and can include Wi-Fi links
  • Use of directional jamming to minimize interference

Cons

  • RF interference in crowded RF areas. May also jam and interrupt other communication signals.
  • Cannot affect autonomously driven drones (those without an active RF link)
  • Illegal in many countries
  • May cause uncontrolled UAV flights and crashes
  • Needs special licensing for approved use, based on electromagnetic compatibility regulations
  • A jammer’s ability relies on the strength of its radio transmitter.

b. GPS Jamming

GPS jamming is useful when UAVs rely on GPS for navigation. Mitigating a satellite-navigated drone is a much larger challenge than jamming an RF-controlled drone. To effectively jam a satellite navigation signal, a stronger signal is sent to the drone, overriding the genuine GPS signal.

Pros

  • Replaces GPS communication, increases the difficulty of controlling the drone.
  • Medium to short-range, depending on the satellite constellation
  • Disrupts Global Positioning Satellite communication link
  • Prevents return-to-home functionality

Cons

  • Cannot work if UAVs disable GPS or use encrypted GPS (military mission)
  • Dangerous when used near airports, because airplanes also use satellite navigation
  • Illegal procedure in many countries. Needs special licensing for approved use
  • May cause uncontrolled UAV flights and crashes

c. Protocol Manipulation

Protocol manipulation means taking control of a UAS by impersonating its remote control. In this method, signal instructions are emitted to confuse the UAS so that the manipulated signal is perceived as legitimate.

Pros

  • Replaces the communication link and takes control of drone operation
  • Can employ algorithms enhanced with artificial intelligence
  • Can drive a malicious UAV to a designated area
  • Low-cost technique, depending on the attacker’s skill

Cons

  • Illegal for civilian use; violates computer fraud and abuse laws
  • Not always successful, particularly when the C2 link is encrypted
  • Technically complicated to execute
  • Cannot affect autonomously driven UAVs that do not use GPS

2. Kinetic Interdiction

There are many types of kinetic options proposed by researchers and industry today.

a. Physical – Net Capturing and Birds of Prey

Net capturing is an attempt to physically capture a drone. A reinforced and hardened UAV carrying capture nets flies toward the intruding drone to seize the targeted UAS and bring it back. Such systems work at relatively short distances and are effective when the rogue drone flies at low speed or does not maneuver.

Birds of prey are trained birds equipped with protective gear, used to attack and grab a UAS entering a restricted area. However, birds themselves are restricted around airports and pose hazards due to possible conflicts with arriving or departing aircraft.

Pros

  • Active and aggressive countermeasures
  • Net capturing: reinforced and hardened UAVs physically capture a drone
  • Birds of prey are used to attack and grab UAS
  • Captures and drives UAVs in a specific area

Cons

  • May cause collateral fatalities to other aircraft. Not appropriate for airports
  • Net capturing efficiency depends on UAVs’ flight behavior, reaction time, etc.
  • Birds also pose hazards when flying around airports.
  • Depends on speed or maneuvering capabilities of rogue UAVs

b. Electronic – Microwave or Laser Guns

High-power microwave (HPM) or laser guns use high-power electromagnetic pulse or laser to target and shoot down UAVs. HPM or high-energy lasers destroy electronic circuits and other vital segments of the drone’s airframe, causing UAVs to crash.

Pros

  • Aggressive and long-range countermeasures
  • Destroys electronic systems of UAVs
  • Disables drone flight

Cons

  • Can have adverse effects on other passing aircraft with fatal consequences
  • May cause uncontrolled UAV flights and crashes
  • Illegal in civil aviation contexts. Violates aviation security laws

In many countries, counter-drone mitigation systems are not allowed in civilian environments and may only be used in police and military operations. Most governments have yet to establish comprehensive C-UAS-specific policies for protecting aviation assets. Airspace regulators continue to develop regulations for integrating UAVs into commercial and civilian airspace.

Top 10 military robots and unmanned ground vehicles in the world

Many military organizations and governments in the world are creating advanced military robots and unmanned ground vehicles at an increasing rate to solve a variety of complex problems on the battlefield.

Usually employed within integrated systems that include video screens, sensors, grippers, and cameras, these military robots come in different shapes and sizes, according to their purposes in modern warfare.

They are either autonomous or remote-controlled and can engage enemy personnel, demine territory, and perform other important tasks.

Major advantages of military robots

  • Replace soldiers in dangerous missions, reducing casualties.
  • Easily replaceable, unlike a human life.
  • Produce more accurate and precise results.
  • Make faster decisions than humans.
  • Flexible and able to perform multiple roles.
  • Unaffected by anger, revenge, hunger, fear, fatigue, or stress.
  • Comparatively cheaper than hiring human labor.
  • Less noisy and can reportedly improve work speed by up to 50 percent.
  • Can withstand damage from bombs and other weaponry.
  • Come in a variety of sizes, depending on their purpose.
  • A handful of personnel can supervise a whole squad of robots.

Today, we will show you the top 10 best military robots and unmanned ground vehicles in the world.

Harris T7

The Harris T7 Explosive Ordnance Disposal (EOD) robot provides best-in-class mobility, manipulation, and intuitive control, delivering uncompromised performance for critical missions. This multi-mission robotic system, with the strength and dexterity to tackle almost any challenge, is compatible with an array of attachments such as standard-issue sensors and disruptors, supporting a wide range of commercial and military missions, including explosive ordnance disposal (EOD), hazardous materials (HAZMAT) cleanup, special weapons and tactics (SWAT) operations, and intelligence, surveillance, and reconnaissance (ISR).

MUTT

The Multi-Utility Tactical Transport (MUTT) is an unmanned ground vehicle that comes in two versions – wheeled and tracked. Announced by the US Marine Corps in July 2016, MUTT accompanies troops, making travel easier by reducing the amount of equipment they carry while crossing difficult terrain on foot. In addition, thanks to an integrated weapon station featuring a minigun, MUTT can provide fire support for its unit. MUTT has a payload of 273 kilograms on land; over water, it can transport up to 136 kilograms of cargo.

Rambow

Rambow is a versatile 3.5-ton unmanned vehicle with a payload capacity of one ton. It can be driven by remote control, teleoperated with obstacle avoidance, or operated fully autonomously. It can be adapted for different purposes by attaching various systems, including a remote-controlled weapon station that fires 12.7-millimeter rounds. Its six independently suspended wheels each have an in-hub electric motor, allowing high mobility over rough terrain. The vehicle can be battery-powered, giving it a range of 50 kilometers, or powered by a diesel generator, providing more than 160 kilometers of range.

HDT Hunter WOLF

The Hunter Wheeled Offload Logistics Follower (WOLF) is an unmanned ground vehicle developed by HDT Global to meet the logistics requirements of the US Army. Extensively evaluated by the US Army, Marine Corps, and Special Operations Command, the WOLF can carry a maximum payload of up to 450 kilograms. It can be fitted with a remotely controlled weapon station mounting an M240B 7.62-millimeter machine gun, an M134 minigun, or an M2 .50-caliber machine gun.

Milrem THeMIS

THeMIS (Tracked Hybrid Modular Infantry System) is a multi-mission unmanned ground vehicle developed by Estonia-based security and defense company Milrem Robotics with support from the Estonian Defense Ministry. The UGV is designed to carry out a wide range of military missions in areas that are either dangerous or difficult to reach. The vehicle can be configured for various roles, including reconnaissance, observation, communications relay, target acquisition, logistics support, rescue, firefighting, and medical evacuation.

Rheinmetall Mission Master

The Rheinmetall Mission Master is a multi-mission unmanned ground vehicle (UGV) that can perform a range of mission profiles, from force protection to surveillance, using a variety of modular payloads, and can operate in hazardous or hard-to-reach zones. The Mission Master gives mounted and dismounted forces added safety and security, increasing their operational effectiveness and keeping them out of harm’s way. The vehicle is adaptable and highly modular, featuring a platform that allows operators to install different payloads to accomplish all kinds of missions.

Phantom

Phantom is a remotely operated mini tactical unmanned ground vehicle developed by Ukraine’s state-owned enterprise SpetsTechnoExport. This UGV can perform a variety of tasks and missions in complex urban environments, including combat, reconnaissance, ammunition transportation, and evacuation of wounded soldiers from the battlefield. Based on a six-by-six all-wheel-drive chassis, the Phantom can carry two injured soldiers or payloads weighing up to 350 kilograms.

Black Knight

The Black Knight is a prototype unmanned ground combat vehicle (UGCV) designed by BAE Systems. It is an early prototype that demonstrates advanced robotic technologies. Tested by the US Army, the Black Knight is suited to missions that are too risky for a manned vehicle, including forward scouting, intelligence gathering, and investigating hazardous areas. It is armed with a 25-millimeter cannon and a coaxial 7.62-millimeter machine gun derived from the Bradley.

VIHR

VIHR (Vikhr) is a reconnaissance-strike ground robotic complex, also known under the name Udar, designed to strengthen units, reduce human losses, and protect facilities. Its weaponry can engage ground and air targets while the vehicle is on the move. Firepower is provided by the ABM BSM-30 remote-controlled weapon station, which carries an optical-electronic suite for detecting ground and air targets, a 30-millimeter 2A72 automatic cannon, a 7.62-millimeter PKTM machine gun, and Kornet-EM anti-tank guided missiles.

Uran-9

Russian military equipment manufacturer JSC 766 UPTK unveiled the Uran-9 multipurpose unmanned ground combat vehicle at the 2016 international military-technical forum. The vehicle is designed to provide remote reconnaissance and fire support for a variety of tasks performed by counter-terrorism, reconnaissance, and military units in urban environments. Equipped with a variety of sensors and weapons, the robot improves the fighting effectiveness of infantry squads while offering maximum personnel protection. It can detect and track enemy targets at a distance of up to 6 kilometers during the day and up to 3 kilometers at night.

The post Top 10 military robots and unmanned ground vehicles in the world appeared first on RoboticsBiz.

Why do people hate facial recognition technology?
Facial recognition is changing everything – the way we live and interact with our society. Like other technologies, it is widely used today in surveillance systems, mainly to track and identify criminals and fugitives in the fight against crimes such as human trafficking and kidnapping. In business and finance, the technology is becoming a popular choice in payments to maximize security and minimize fraud.

In transportation, the technology has been deployed in train stations and airports to save travelers time when checking in, help them pay for their fares, and identify unlicensed drivers and jaywalkers. In the medical field, it is used for patient identification, monitoring, sentiment analysis, and diagnosing genetic disorders, while in education, facial recognition helps improve campus security, combat school bullying, and track attendance.

Though facial recognition benefits our society in many ways, controversy and concerns are rising, and there are several reasons why people are unhappy about the use of the technology. They include the potential risks related to privacy, security, accuracy, and bias.

Privacy

In February 2019, security experts identified SenseNets, a facial recognition and security software company in Shenzhen, as the source of a severe data leak from an unprotected database containing more than 2.5 million records of citizens’ personal information. In August 2019, the personal information of more than 1 million people, including biometric data such as fingerprints and facial recognition data, was found on a publicly accessible database used by the UK Metropolitan Police, defense contractors, and banks alike. Such data breaches can put victims at a considerable disadvantage, particularly because biometric information is almost permanent, making the effects of a leak severe and long-lasting.

Security

Though typically treated as a means of secure identification, facial recognition is not considered sufficiently safe. Research shows that GAN-generated deepfake videos are challenging for facial recognition systems, and as face-swapping technology develops further, the challenge will only grow. In other research, ArcFace, one of the best publicly available face ID systems, was attacked by adding printed paper stickers to a hat, and the model became confused.
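To see why small perturbations can defeat a face ID system, it helps to recall how verification typically works: the model embeds each face as a vector and accepts a match when the similarity between embeddings exceeds a threshold. The sketch below is a minimal, hypothetical illustration of that decision rule in Python; the random "embeddings" and the 0.5 threshold are assumptions for illustration, not details of ArcFace or any specific product.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_emb: np.ndarray, enrolled_emb: np.ndarray,
           threshold: float = 0.5) -> bool:
    """Accept the probe as the enrolled identity if similarity exceeds the threshold.

    An adversarial patch (e.g., a printed sticker on a hat) aims to shift the
    probe embedding just enough to push the similarity across this threshold,
    causing a false accept or a false reject.
    """
    return cosine_similarity(probe_emb, enrolled_emb) >= threshold

# Toy example with random vectors standing in for real face embeddings.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=512)
probe = enrolled + rng.normal(scale=0.1, size=512)   # same person, small variation
impostor = rng.normal(size=512)                      # different person

print(verify(probe, enrolled))     # expected: True
print(verify(impostor, enrolled))  # expected: False
```

The threshold trades false accepts against false rejects, which is exactly the margin an adversarial sticker or a convincing deepfake tries to exploit.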

Accuracy

In real-world scenarios, facial recognition systems aren’t always reliable. A report shows that the UK’s South Wales Police facial recognition system misidentified thousands of people during trials, producing 2,297 false positives out of a total of 2,470 matches – an error rate of around 92 percent. Critics worry that such poor performance could lead to wrongful arrests as well as a drag on police work. Another evaluation, by Essex University, found that the Metropolitan Police’s facial recognition technology made only eight correct identifications out of 42 matches, an error rate of 81 percent, and that such deployment would likely be found “illegal” if challenged in court.
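For clarity, the error rates quoted above are simply the share of flagged matches that turned out to be wrong. A quick check of the arithmetic with the reported counts (the South Wales figure computes to roughly 93 percent, close to the widely cited "around 92 percent"):

```python
def false_match_rate(false_positives: int, total_matches: int) -> float:
    """Share of system-generated matches that were wrong."""
    return false_positives / total_matches

# South Wales Police trials: 2,297 false positives out of 2,470 matches.
print(f"South Wales: {false_match_rate(2297, 2470):.0%}")   # ~93%

# Metropolitan Police evaluation: 8 correct out of 42 matches.
print(f"Met Police:  {false_match_rate(42 - 8, 42):.0%}")   # ~81%
```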

Bias

In the “Gender Shades” project, conducted by the MIT Media Lab and Microsoft Research, facial analysis algorithms from IBM, Microsoft, and Megvii were evaluated, and the results show that darker-skinned females are the group most vulnerable to gender misclassification, with error rates up to 34.4 percentage points higher than those for lighter-skinned males. In an American Civil Liberties Union (ACLU) report, Amazon’s “Rekognition” facial recognition tool was tested by comparing photos of 535 members of the US Congress against a face database of 25,000 arrest photographs. The test produced 28 false matches, 39 percent of which were people of color, even though people of color make up only about 20 percent of Congress.
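The bias concern in the ACLU test is about disproportion: comparing each group’s share of the false matches to its share of the population being searched. A minimal sketch of that comparison, using only the figures reported above (the 20 percent share is taken from the text and treated as approximate):

```python
def disparity_ratio(share_of_errors: float, share_of_population: float) -> float:
    """How over-represented a group is among errors relative to its population share."""
    return share_of_errors / share_of_population

# ACLU test of Amazon Rekognition on members of Congress:
# 39% of the 28 false matches were people of color,
# who make up only ~20% of Congress.
print(f"{disparity_ratio(0.39, 0.20):.2f}")  # ~1.95: nearly twice the expected share
```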

In the United States, San Francisco legislators have unanimously voted to ban the use of facial recognition technology across local agencies, including transportation and law enforcement authorities. They argue that the ban will protect residents against possible inaccuracy and bias and preserve their privacy and freedom. A few months later, Somerville and Oakland passed their own bans on the use of facial recognition. In March 2019, senators introduced a bipartisan bill, the Commercial Facial Recognition Privacy Act, to provide legislative oversight of commercial applications of facial recognition. The bill would prohibit commercial users from collecting and re-sharing facial data to identify or track consumers without their consent.

Meanwhile, a poll conducted by GlobalData shows that the use of the technology by police and other law enforcement is proving divisive: 53% of respondents said ‘no’ to police use of facial recognition, indicating they were not happy with its use by law enforcement, while 47% said they were pleased with its use by such organizations.

“The response comes as the EU is considering a ban on the use of facial recognition until the technology reaches a greater stage of maturity. A draft white paper, which was first published by the news website EURACTIV in January, showed that the European Commission was considering a temporary ban,” technology editor Lucy Ingham said.

“It proposed that ‘use of facial recognition technology in public spaces by private or public actors would be prohibited for 3–5 years during which a sound methodology could be identified and developed for assessing the impacts and possible risk management measures.’”

“While this may seem extreme, particularly given that police forces around Europe are already using facial recognition, there is a case for the technology not yet being mature enough for regular use. For example, an independent report on the facial recognition technology used by the Metropolitan Police to identify potential suspects found that it was inaccurate in 81% of cases. However, the Met claimed that the error rate was only 1 in 1000.”

Since then, the Met Police has announced that it will now use the technology as part of routine operations, a move that Silkie Carlo, director of Big Brother Watch, branded “an enormous expansion of the surveillance state and a severe threat to civil liberties in the UK.” Police forces, however, maintain that the technology prevents crime and does not breach privacy.

“There are also issues with identifying people of color, with tests by the US government finding that even the most accurate facial recognition technologies misidentify black people at a rate at least five times higher than for white people,” Lucy said.

The post Why do people hate facial recognition technology? appeared first on RoboticsBiz.

10 greatest innovations in 2019 – Autonomous, wearable, robotics, and drones
The 100 greatest innovations of 2019, shortlisted by Popular Science, one of America’s oldest and most trusted magazines, includes 10 cool products that RoboticsBiz has a particular interest in. Why? All ten innovations belong to categories that we cover on our website – wearables, AI, robotics, drones, and defense. So, we decided to present these products to our readers.

Popular Science’s “Best of What’s New” 2019 Awards represent the most significant innovations and advancements in 10 categories: Aerospace, Auto, Engineering, Entertainment, Gadgets, Health, Home, Personal Care, Recreation and Security. The awards celebrate the items that make our lives simpler, smarter, and better. So, let’s get started!

1. HeartGuide

HeartGuide smartwatch to track blood pressure

HeartGuide by Omron is the first FDA-cleared smartwatch that can track blood pressure and heart data at any time. This clinically accurate wearable device features an inflatable strap that functions just like a blood pressure cuff. It registers a reading in 30 seconds and saves the last 100 readings so users can see trends and share them with their doctors. It can also monitor sleep patterns, track fitness, set goals, and monitor daily physical activity.

2. Oculus Quest

Oculus Quest – a wire-free virtual reality headset

Oculus Quest is a wire-free virtual reality headset created by Oculus VR, a division of Facebook. Released in May 2019, the device features two six-degrees-of-freedom (6DOF) controllers and runs on a Qualcomm Snapdragon 835 system-on-chip. This standalone headset with full positional tracking requires no phone, PC, or game console and delivers remarkably immersive VR. There is no better mobile VR experience on the market right now than the Oculus Quest.

3. Black Hornet Personal Reconnaissance System

Black Hornet – a palm-size drone

The Black Hornet Personal Reconnaissance System by FLIR Systems is a palm-size drone that can fit in a soldier’s hand. It weighs just over an ounce and can stream hi-def video and photos despite its diminutive dimensions. It can fly without a GPS signal, making it adaptable to environments like bunkers and caves. It can fly in 15-knot winds, remain airborne for 25 minutes, and venture as far as 1.5 miles on a charge. The Black Hornet is easy to fly using a tablet and a pistol-grip-style controller. The drone comes in two versions: one for daytime use and another equipped with a thermal camera for low-light conditions.

4. Light Marine Air Defense Integrated System (LMADIS)

LMADIS – a drone-downing jamming system

The Light Marine Air Defense Integrated System, fielded by the US Marine Corps, is a counter-drone weapon system that uses radar, gyro-stabilized cameras, radio-detection sensors, and electronic jamming equipment to track and attack targets autonomously. This maneuverable anti-drone system attaches to MRZR all-terrain vehicles and can scan the skies for enemy aircraft. Once it locates a threat, it uses radio-frequency jamming to bring the drone down.

5. Integrated Visual Augmentation System

Integrated Visual Augmentation System – an AR for the battlefield

The Integrated Visual Augmentation System, developed by US Army Futures Command and Microsoft, is an augmented-reality (AR) system for the battlefield. Based on the HoloLens 2 AR headset, the device is engineered specifically for the US Army to provide thermal vision, digital overlays highlighting people and objects, mission navigation with waypoints, weapons targeting, and more. The encrypted information appears within the soldier’s field of view, providing greater situational awareness and reducing the likelihood of civilian casualties.

6. Almond

Almond – an open-source virtual assistant

Almond is an open-source virtual assistant created by Stanford University. Unlike Alexa, Cortana, Google, and Siri, Almond comes with limitless customization options and has better privacy features. You can use Almond in a browser, on a Google phone, or as a command-line application. The assistant can integrate with Nest, GNOME, Gmail, Twitter, Slack, and many more services.

7. Fenix 6x Pro Solar Edition

Fenix 6x Pro Solar Edition smartwatch

Fenix 6X Pro Solar Edition by Garmin is a long-lasting adventure watch with a transparent solar face. This smartwatch has an enormous 450 mAh battery that lasts three weeks. Its transparent solar panel under the Gorilla Glass lens can harvest enough sunlight to give three extra days of power, providing more time to check a compass reading during a hike, track a swim or run, or receive notifications from your phone. Even if you turn off the watch, that photovoltaic cell will generate a charge.

8. APT 70 cargo drone

APT 70 cargo drone – an autonomous quadcopter

APT 70 cargo drone by Bell is an autonomous quadcopter with a cargo capacity of up to 70 pounds. It stands 6 feet tall and 9 feet wide, weighing 300 pounds. This cargo drone, designed to transport large goods like industrial components or medical supplies, can cruise at a speed of 75 mph and cover 35 miles with a fully charged battery. This four-motor, vertical-lift electric UAV is one of the largest commercial cargo drone projects to reach the skies.

9. BLUE

BLUE – a low-cost, high-performance robotic arm

BLUE is a low-cost, high-performance robotic arm built by researchers at the University of California, Berkeley. It was designed to use recent advances in artificial intelligence (AI) and deep reinforcement learning to master intricate human tasks, while remaining affordable and safe enough that every AI researcher – and eventually every home – could use it.

10. Project SVAN

Project SVAN – the world’s first autonomous ferry

Project SVAN is the world’s first autonomous ferry. Created by Kongsberg, the ferry carried 80 passengers, with no crew, between two islands in a Finnish archipelago in December 2018. The captain sat 31 miles away, on call in case of an emergency. The SVAN uses lasers, radar, and computer vision to navigate, and the system can be retrofitted onto any ship. It could help prevent the 75 to 95 percent of marine accidents that result from human-operator error.

The post 10 greatest innovations in 2019 – Autonomous, wearable, robotics, and drones appeared first on RoboticsBiz.

Three ways eye-tracking helps law enforcement in preventing crime
The feats of technology never cease to amaze us. From the invention of the telephone in the 1800s to today’s artificially intelligent voice assistants that help individuals with their daily tasks, technology is making our lives easier in all spheres. Eye-tracking technology is one such thrilling field that is proving useful and convenient across many domains.

Eye-tracking technology refers to the process of tracing the movement of a person’s eyes as they look at a specific object or a collection of objects, known as the ‘point of gaze.’ A head-mounted wearable device was initially used to measure gaze points; however, a more sophisticated wearable set of spectacles has since been introduced that achieves the same purpose just as easily.

Eye-tracking devices, such as the wearable Tobii Pro Glasses, work by observing the behavior of the eyes as they move around and recording patterns such as pupil dilation, loss of focus, and eye fixations. The two key measurements are fixations – pauses on ‘points of interest’ – and saccades, the rapid movements of the eyes between points of fixation.
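A common way to separate fixations from saccades in raw gaze data is a velocity-threshold (I-VT) classifier: samples whose velocity stays below a threshold are grouped into fixations, while faster samples are treated as saccades. The sketch below is a simplified, hypothetical Python implementation operating on on-screen coordinates; the 100-pixels-per-second threshold, the minimum duration, and the data format are assumptions for illustration, not a specification of any vendor’s software.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Fixation:
    start_s: float   # start time of the fixation (seconds)
    end_s: float     # end time of the fixation (seconds)
    x: float         # mean horizontal gaze position (pixels)
    y: float         # mean vertical gaze position (pixels)

def _flush(group, fixations, min_duration_s):
    """Turn a run of slow samples into a Fixation if it lasted long enough."""
    duration = group[-1][0] - group[0][0]
    if duration >= min_duration_s:
        xs = [s[1] for s in group]
        ys = [s[2] for s in group]
        fixations.append(Fixation(group[0][0], group[-1][0],
                                  sum(xs) / len(xs), sum(ys) / len(ys)))

def classify_fixations(samples: List[Tuple[float, float, float]],
                       velocity_threshold: float = 100.0,
                       min_duration_s: float = 0.06) -> List[Fixation]:
    """Group gaze samples (t, x, y) into fixations using a velocity threshold.

    Consecutive samples moving slower than `velocity_threshold` (pixels/second)
    are merged into one fixation; anything faster is treated as a saccade.
    """
    fixations, current = [], []
    for prev, curr in zip(samples, samples[1:]):
        (t0, x0, y0), (t1, x1, y1) = prev, curr
        dt = t1 - t0
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt if dt > 0 else float("inf")
        if velocity < velocity_threshold:
            if not current:
                current.append(prev)
            current.append(curr)
        elif current:
            _flush(current, fixations, min_duration_s)
            current = []
    if current:
        _flush(current, fixations, min_duration_s)
    return fixations

# Toy data: two stable gaze clusters separated by one fast saccade.
samples = [(0.00, 100.0, 100.0), (0.01, 100.3, 100.1), (0.02, 100.1, 99.9),
           (0.03, 400.0, 400.0), (0.04, 400.2, 400.1), (0.05, 399.9, 400.0)]
print(classify_fixations(samples, min_duration_s=0.01))  # two fixations
```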

This technology is used to a great extent in market-trend research. However, one field where eye-tracking has been applied with unquestionable success is crime scene investigation and prevention. In recent times, several departments within law enforcement have started to rely on the accuracy of eye-tracking technology to investigate criminal activities and apprehend guilty parties based on their eye behavior.

Crime Scene Investigation

During investigations, eye-tracking methods help investigators detect deceit; measuring eye-gaze movements can be used to analyze an individual’s behavior. Crime scene analysts are at the heart of criminal investigations: the decisions they make and the forensic evidence they find and log are crucial to bringing any investigation to its correct resolution. Any misstep or failure to recognize items of significance can limit the progress of the investigation.

Therefore, it is critical for these analysts to properly recognize and evaluate evidence for the benefit of the case. This is why crime scene analysts often wear eye-tracking glasses while observing crime scenes, as the recordings help pinpoint areas of interest throughout the scene, such as a blood-spatter pattern or signs of a struggle inside a bedroom. This also enables them to include or rule out any item that may or may not be relevant. Eye-tracking also helps reconstruct scenes exactly as they were, since it does not depend on memory or individual reports, thereby limiting room for human error.

Suspect Interrogation

Interrogating suspects is tricky when the person being questioned happens to be a professional or practiced liar. However, where words paint a seemingly honest picture, facial movements, micro-expressions, and the movement of one’s eyes often give them away. These are not always easy to assess or measure, and one of the essential tools used by police detectives during the interrogation of suspects is observation.

Where polygraph (lie-detection) tests fail, eye-tracking technology can help. While both tools are considered indispensable during investigations, eye-tracking often delivers more precise results. By tracking eye movements, gaze, fixations, and areas of interest, a suspect’s responses can be gauged and categorized as lies or truth. How long the gaze lingers on an image or object the suspect does or does not recognize reveals a great deal about the facts individuals hide from, or share with, the authorities questioning them.

During interrogation, suspects are shown documents, pictures, and objects, some of which are relevant to the crime while others are crime-irrelevant (often known as “foil” objects). While the suspect examines these objects, their eye fixations are measured. When the suspect’s eyes move from unfamiliar objects to familiar ones, significantly fewer fixations are produced, reflecting the suspect’s recollection of items they recognize.
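In practice, this kind of comparison boils down to counting fixations that land on the crime-relevant item versus the foil items. The sketch below is a simplified, hypothetical illustration in Python; the area-of-interest rectangles, labels, and toy data are invented for the example and are not taken from any real interrogation protocol.

```python
from typing import Dict, List, Tuple

# Areas of interest (AOIs) on the stimulus display: label -> (x_min, y_min, x_max, y_max)
AOIS: Dict[str, Tuple[float, float, float, float]] = {
    "crime_relevant_item": (0, 0, 200, 200),
    "foil_item_1": (300, 0, 500, 200),
    "foil_item_2": (600, 0, 800, 200),
}

def count_fixations_per_aoi(fixations: List[Tuple[float, float]]) -> Dict[str, int]:
    """Count how many fixation centers fall inside each AOI rectangle."""
    counts = {label: 0 for label in AOIS}
    for x, y in fixations:
        for label, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[label] += 1
                break
    return counts

# Toy data: fixation centers (x, y) recorded while the suspect views the display.
fixations = [(50, 90), (120, 150), (60, 60), (350, 100), (700, 120)]
print(count_fixations_per_aoi(fixations))
# A markedly different fixation count on the crime-relevant item compared with
# the foils is one cue analysts might examine; it is suggestive, not conclusive.
```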

Eyewitness Testimonies

Eyewitness accounts are attestations provided by individuals about an event they have witnessed; these could range from recounting a robbery they saw to providing alibis for people under suspicion by a court of law. Eyewitness testimonies are generally considered unreliable, as they are often subject to individual memory biases, the influence of stress or anxiety, or simply incorrect recollection of events.

This is why eye-tracking is considered a useful tool for avoiding faults in recall when interacting with witnesses to crimes or criminals. This is done by measuring visual attention, which is then distinguished according to the task being performed, such as identifying suspects in a lineup or looking at pictures and objects.

Eye-tracking is especially useful during sequential suspect lineups, where it provides real-time information about the witness’s decision process, since gaze is continuously monitored throughout. The better the individual recognizes a suspect, the easier it becomes to identify them based on the pattern of fixations and areas of interest. These lineups generally include actual suspects as well as foil (crime-irrelevant) individuals, making it easier, with the help of eye-tracking technology, for witnesses to concentrate only on those they recognize from the crime scene.

Using advanced technology such as eye-tracking devices during criminal investigations makes it significantly easier and more time-efficient for investigators to conduct investigations and apprehend the guilty party. Since eye-tracking does not depend on fallible human memory and reduces the possibility of human error, it increases the prospect of quicker case resolution. Eye-tracking analysis can also lead to discoveries that might have been missed during the initial sweep of a crime scene.

The post Three ways eye-tracking helps law enforcement in preventing crime appeared first on RoboticsBiz.
