The subject of artificial intelligence has occupied science, industry and the military for years. But only in recent months has it become clear how far the development has progressed. With ChatGPT, the Silicon Valley company OpenAI has shown the whole world how powerful and easy to use so-called Large Language Models (LLMs) are.
Such models are therefore also key to effectively using and controlling AI systems within complex structures, for example in the military.
The controversial software company Palantir has now presented its latest system, the Artificial Intelligence Platform (AIP for short), and demonstrated in a video how AI could be used to revolutionize warfare.
What is the goal?
This is not about automated target detection by individual weapon systems, but about turning AI into an efficient communication and command center.
A demo scenario is meant to show what benefits this entails and that it could probably be implemented within a short time in highly developed armies and security forces.
How does it work?
In the video, Palantir describes a use case in a war scenario. Notably, the example chosen is a (commando) soldier who is responsible for monitoring troop activities in Eastern Europe.
The user interface is presented as the soldier in the field or the commander in the command post would likely see and use it. What follows then looks like something out of a computer game.
At the beginning, the system automatically sends a warning: the AI has detected unusual movements on satellite images 30 kilometers from friendly troops. The fictional operator types:
“Show me more details.” The satellite image with the suspected enemy units appears, along with a tactical map showing the position of friendly and enemy units.
In the chat window, the soldier then asks what options there are for obtaining up-to-date, high-resolution images of the area in question; the AI system suggests a Reaper drone flyover and satellite imagery.
“Order the drone to take a video of this place,” is the next command. The AI forwards the order to the appropriate departments, a moment later the responsible officer gives his approval, and then the video material appears: it is, in fact, a T-80 main battle tank.
But the AI’s role isn’t limited to simply querying information from a wide variety of troops and systems – it can also take on tactical planning. Instead of discussing the best course of attack with his superior, the soldier types in another command: “Create three options for action to attack the enemy vehicle.”
The answer is a table of three attack options: by an F-16 fighter jet, by a HIMARS rocket artillery system, or by a small unit of soldiers nearby. Each option lists the likely duration of the attack scenario down to the minute.
The distance to the target, armament, personnel requirements and the status of the respective team are also listed, and small icons on the map indicate the position of the units.
What data does the AI have access to?
To bring these capabilities together, the AI has access to hundreds, probably thousands, of different systems and datasets: within fractions of a second it can query which units are currently nearby, which of them have suitable armaments and which have enough ammunition.
Palantir does not show exactly what data is used for the assessment in this example. It stresses, however, that the military can specifically decide which datasets – both public and classified – the system can access and which it cannot.
However, it is conceivable that as much data as possible would be used for an optimal assessment, up to and including the personnel files of individual soldiers, in which past injuries or the results of the most recent health screening are stored.
The final decision is not made by the AI in Palantir’s video. Instead, the soldier has the AI forward the table to his commanding officer, who decides to send in the infantry unit.
The AI’s support doesn’t end there, though: it is also used for concrete planning. The commander now has the AI analyze the terrain for the maneuverability of a platoon-strength wheeled armored vehicle unit and then queries the system for the optimal route to the target.
The relevant enemy communication nodes are also identified by the AI in the video, and the appropriate units are immediately put on standby to disrupt them. The final deployment order is likewise typed into the chat window; the system ensures that all relevant posts are informed and provided with the necessary data.
How realistic is that?
Of course, this is all fictitious – advertising for Palantir’s new product AIP. The company plans to make its system available to the first selected customers in the coming weeks. Whether all of this would actually work in this form is not stated, nor whether the US military, other armed forces or security companies would actually use the software. It is almost certain, however, that the US military will at least test this solution soon – if it is not already doing so.
There are even indications that the scenario shown is essentially quite realistic and could be implemented in a relatively short time. This is mainly due to the capabilities of large language models such as the one behind ChatGPT. Only their ability to understand commands in natural language and perform complex tasks makes such smooth interaction possible in highly complex systems such as the military.
Only they allow the numerous AI systems of a high-tech army such as the US military – many of which already operate autonomously or semi-autonomously – to interlock so effortlessly, and allow such systems to be adapted quickly and flexibly to different requirements.
In Palantir’s example, AI can be extremely effective and still leave the important decisions to humans at crucial points. Palantir also repeatedly emphasizes in its video that various transparency mechanisms are intended to ensure that it can always be traced which of the different AI systems was responsible for which decision and which sources were used for it. The system, it says, is “ethical” and “responsible”.
It remains to be seen whether this is the case. What is clear, however, is that AI can offer a huge advantage on the battlefield: anyone who can make faster and more informed decisions has a clear edge. Armies that have to do without AI support in the future could be at a decisive disadvantage.
In Palantir’s demo video, people still make the essential decisions at crucial points. Technically, the AI system could have made them on its own – the necessary data was available.
For safety and ethical reasons, this would hardly be permitted today. Whether that will always be the case is an open question. If the speed of decision-making can determine the outcome of battles, and with it human lives, the question is how long the human factor will remain in the chain of command.
Whether Large Language Models make not only faster but also better decisions is anything but clear today – after all, it is part of the operating principle of this form of artificial intelligence that it invents things that are merely statistically plausible.
It seems certain that wars of the future will not be fought without such AI support. Many crucial questions, however, remain far from settled today.
Sources
- palantir.com: AIP for Defense
- Own research
(t-online/dsc)
Source: Watson

I’m Ella Sammie, an author specializing in the technology sector. I have been writing for 24 Instant News since 2020, and am passionate about staying up to date with the latest developments in this ever-changing industry.