
AI-Powered Firearms: The Alarming Development of Autonomous Weapons


An engineer known online as STS 3D recently built a voice-controlled robotic rifle powered by ChatGPT, and it has drawn widespread attention and sparked intense debate. The device responds to spoken commands routed through ChatGPT, aiming and firing the rifle at targets with remarkable speed. Videos of the system have gone viral on social media, raising concerns that the dystopian future depicted in the Terminator films could become reality. Upon learning of the project, OpenAI swiftly cut off the developer's access to its services.

The demonstration video circulating widely on social media shows that STS 3D's rifle rig has a cheerful voice and an impressive ability to interpret commands. In the video, the inventor stands next to a washing-machine-sized contraption connected to the rifle and casually tells the system, "ChatGPT, we are under attack from the front left and front right; respond immediately." The rifle springs into action without hesitation, swiveling toward the nearby wall and unleashing what appear to be blanks in rapid succession in both directions.

It is not yet clear exactly how STS 3D integrated OpenAI's technology into the project. What is certain is that OpenAI's Realtime API does let users build "multimodal dialogue experiences with highly expressive voice models," which means it can readily give a deadly weapon system a cheerful voice and the ability to interpret spoken commands.

Last August, the US Department of Defense tested "Bullfrog," an autonomous AI gun system developed by defense contractor Allen Control Systems (ACS). The system consists of a 7.62mm M240 machine gun mounted on a custom rotating turret and equipped with electro-optical sensors, proprietary AI, and computer vision software. It is designed to fire small-arms rounds at drone targets with far greater accuracy than an ordinary US soldier can achieve with standard weapons such as the M4 rifle or the next-generation XM7.

Under current US policy on lethal autonomous weapons, Bullfrog is designed to help humans "understand the situation" and avoid any "unauthorized firing." In other words, the gun can aim at and continuously track a target, but it will not fire unless a human operator gives the command. ACS claims, however, that if the US military ever requires it, the system could also operate fully autonomously, freeing soldiers to focus on more important tasks. Brice Cooper, ACS's Chief Strategy Officer and former head of the US Special Operations Command's counter-drone program, said in an interview, "Our system is fully capable of autonomy and is simply waiting for the government to clarify its needs. Traditional systems are nowhere near that level."

Now, it seems, even hobbyists are building such systems. STS 3D's invention soon caught OpenAI's attention, and the company responded by saying it had promptly banned the engineer for violating its usage policies. "We proactively identified this violation of our policies and notified the developer to cease the activity before receiving any inquiries," an OpenAI spokesperson said, adding that "OpenAI's usage policies prohibit the use of our services to develop or operate weapons, or to automate certain systems that can affect personal safety."

It is worth noting that last September OpenAI quietly removed language from its usage terms prohibiting use of the service for "activities with a high risk of physical harm," including "weapons development" and "military and warfare." Even so, the company's revised policy still forbids anyone from using its services to "harm yourself or others," including by developing or "using weapons."

As an individual with no apparent ties to the military or defense contractors, STS 3D has clearly hit a snag with this latest creation. The US Department of Defense, meanwhile, is pressing ahead in the field of autonomous weapons. Bullfrog, the autonomous AI gun system ACS is testing for the Pentagon, is reportedly precise enough to achieve a per-kill cost comparable to laser and microwave systems, without their demands for frequent maintenance and logistical support. Steve Simoni, ACS co-founder and CEO and a former naval nuclear engineer, said, "Tracking and shooting is just one part of the overall strike system. Eventually the system will support more types of firearms, reach longer ranges, and engage drones with different acceleration profiles, all of which can be achieved simply by updating the AI model."

The US Army has also recently tested several other gun-based counter-drone solutions, including a Ghost Robotics robot dog that carries an AI-driven AR/M4 rifle turret on its back. Beyond individual AI-driven weapon systems, the wholesale integration of AI into the military appears imminent: last month, OpenAI announced a partnership with defense technology company Anduril to help transform its combat capabilities.

STS 3D's invention, which marries AI technology to a lethal weapon with apparent ease, has sparked intense debate on social media. What makes it so striking may be the sudden realization that even consumer-grade AI can readily be turned to violent ends. Are we racing toward a new era in which lethal weapons can think for themselves?

"There are at least three movies explaining why this should not be done," one Reddit user joked in a comment on STS 3D's video. "Skynet version 0.0.420.69 is here!" wrote another, referring to the malevolent neural network of the Terminator series.

Many commenters were vehemently opposed to this kind of application: "For years I've been marveling at how revolutionary and beautiful LLMs are, opening up infinite possibilities… if they are ever fully wired into weapons, humanity is doomed." "In the wrong hands, this could be disastrous." "This must not be allowed; there have to be built-in safeguards to prevent it." "Don't give guns to machines that are learning to understand natural human language."

"Is the experimenter trying to prove that any capable terrorist can put certain weapons under the control of AI? And will AI operate these weapons without checking the validity of threat statements?" another user questioned.

Other users were less alarmed: "The ability to connect a rifle mounted on a platform (or on a UGV) to a remote operator with a keyboard, or even an Xbox controller, has existed for at least 20 years. At this point it is hardly surprising that an AI can execute similar commands, or even autonomously 'shoot any moving object.'"

Such views are not rare; others added, "This could have been done at least ten years ago. I remember a video from about a decade ago in which a guy built a turret with an air gun; every time he got shot in the game, the turret would fire back at him and actively track him. The only difference here is the voice control, and we have been able to do that for at least 10 years too. Using GPT for the responses is the only novelty."

Overall, STS 3D's intelligent rifle paints a worrying picture of the future for the public: AI weapon systems used to kill targets without human intervention, even if that was never STS 3D's intention.

