Artificial intelligence is among the many hot technologies that promise to change the face of warfare for years to come. Articles abound describing its capabilities and warning those who lag behind in the AI race. The Department of Defense duly created the Joint Artificial Intelligence Center in hopes of winning the AI battle. There are visions of AI enabling autonomous systems to carry out missions, achieving sensor fusion, automating tasks, and making better, faster decisions than humans. AI is improving rapidly, and someday these goals may be achieved. In the meantime, though, AI's impact will be felt in the more mundane, monotonous tasks our military performs in uncontested environments.
Artificial intelligence is a rapidly evolving capability. Extensive research from academia and industry is producing shorter training times and increasingly better results. AI is effective at certain tasks, such as image recognition, recommendation systems, and language translation, and many systems designed for these tasks are in use today with very good results. In other areas, though, artificial intelligence falls far short of human-level performance. These areas include handling scenarios the AI has never seen before; understanding the context of text (recognizing sarcasm, for example) and objects; and multitasking (i.e., solving problems of more than one type). Most AI systems today are trained to perform a single task, and only under very specific circumstances. Unlike humans, they do not adapt well to new environments and new tasks.
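To make the single-task point concrete, here is a minimal sketch, assuming PyTorch and torchvision are installed, of running a general-purpose pretrained image classifier on one picture. The image path is a placeholder; the model can only choose among the fixed set of categories it was trained on and can do nothing else.

```python
# Minimal sketch: a pretrained, single-task image classifier.
# "scene.jpg" is a placeholder path for illustration.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()  # inference only; this model does nothing but classify

preprocess = weights.transforms()  # the exact preprocessing used during training
image = preprocess(Image.open("scene.jpg")).unsqueeze(0)

with torch.no_grad():
    probs = model(image).softmax(dim=1)

top_prob, top_class = probs.topk(1)
print(weights.meta["categories"][top_class.item()], float(top_prob))
```

Hand this model a task even slightly outside its training, and its confident-looking probabilities mean very little, which is exactly the brittleness described above.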
AI models are improving daily and have proven their worth in many applications. The efficiency of these systems can make them very useful for tasks such as identifying a T-90 main battle tank in a satellite image, identifying high-value targets in a crowd using facial recognition, translating text for open-source intelligence, and generating text for use in information operations. The application areas where AI has been most successful are those with large amounts of labeled data, such as ImageNet, Google Translate, and text generation. AI is also very capable in areas such as recommendation systems, anomaly detection, prediction systems, and games. An artificial intelligence system in these areas could help the military detect fraud in contract services, predict when weapons systems will fail due to maintenance problems, or develop winning strategies in conflict simulations. All of these applications and more can be force multipliers in day-to-day operations and in the next conflict.
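As a hedged illustration of the anomaly-detection use case mentioned above, the sketch below flags unusual contract payments with an isolation forest. The column names and synthetic data are invented for illustration; a real fraud-detection pipeline would need vetted features and investigated labels.

```python
# Sketch: flagging anomalous contract payments with an isolation forest.
# All data here is synthetic; column names are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
payments = pd.DataFrame({
    "amount_usd": rng.lognormal(mean=9, sigma=0.5, size=1000),
    "days_to_invoice": rng.integers(5, 60, size=1000),
    "vendor_id": rng.integers(1, 50, size=1000),
})
# Inject a few implausible records to stand in for fraudulent entries.
payments.loc[:4, "amount_usd"] = payments["amount_usd"].max() * 20

model = IsolationForest(contamination=0.01, random_state=0)
payments["anomaly"] = model.fit_predict(payments)  # -1 marks suspected outliers
print(payments[payments["anomaly"] == -1].head())
```

The point is not the specific algorithm but the pattern: routine, high-volume, uncontested data is where this kind of system earns its keep.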
As the military looks to incorporate AI’s successes in these tasks into its own systems, some challenges must be acknowledged. The first is that developers need access to data. Many AI systems are trained on data that has been labeled by some expert system (e.g., labeling scenes that include an anti-aircraft battery), usually a human. Large datasets are often labeled by companies using manual methods. Getting this data and sharing it is challenging, especially for an organization that prefers to classify data and restrict access to it. An example military dataset might be one of images produced by thermal-imaging systems, labeled by experts to describe which weapon systems, if any, appear in each image. Unless that data is shared with the engineers who preprocess it and the developers who build the models, an effective AI cannot be trained on it. AI systems are also prone to becoming very large (and therefore slow), making them susceptible to problems of scale. For example, training a system to recognize images of every possible weapon system in existence would involve thousands of categories. Such a system would require an enormous amount of computing power and a great deal of dedicated time on those resources. And because we are training a statistical model, perfect accuracy would in principle require an unlimited supply of such images, which we cannot provide. Also, while training these AI systems, we often try to force them to follow “human” rules, such as the rules of grammar. People, however, routinely ignore those rules, which makes developing successful AI systems for tasks like sentiment analysis and speech recognition a challenge. Finally, AI systems can work well in uncontested, controlled domains, but research shows that under adversarial conditions they can be easily tricked into making errors. Many DoD AI applications will operate in contested spaces, such as the cyber domain, so we must be wary of their outputs.
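To make the data-access point concrete, the sketch below shows one common way expert-labeled imagery is packaged for model developers: a CSV manifest pairing image files with labels, wrapped in a dataset class. The file names, directory layout, and label scheme are hypothetical.

```python
# Sketch: wrapping an expert-labeled image set (e.g., thermal imagery) for training.
# "labels.csv" is a hypothetical manifest whose rows are "filename,label".
import csv
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset

class LabeledThermalImages(Dataset):
    def __init__(self, root, manifest="labels.csv", transform=None):
        self.root = Path(root)
        self.transform = transform
        with open(self.root / manifest) as f:
            self.rows = list(csv.DictReader(f))  # expects columns: filename, label
        self.classes = sorted({r["label"] for r in self.rows})
        self.class_to_idx = {c: i for i, c in enumerate(self.classes)}

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        row = self.rows[idx]
        image = Image.open(self.root / row["filename"]).convert("RGB")
        if self.transform:
            image = self.transform(image)
        return image, self.class_to_idx[row["label"]]
```

Every step here presumes that someone has already been allowed to see, label, and share the raw imagery, which is precisely the institutional hurdle described above.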
Even setting aside an enemy’s efforts to defeat the AI systems we might use, these seemingly superhuman models have limitations. An AI’s ability to process images is not very robust when it is given images that differ from its training set: images taken in poor lighting, at an oblique angle, or with the subject partially obscured. Unless those kinds of images were in the training set, the model may struggle (or fail) to identify the content accurately. Chatbots that could assist our information-operations missions are limited to producing a few hundred words at a time and therefore cannot fully replace a human who can write pages at a sitting. Forecasting systems, such as IBM’s Watson weather forecaster, struggle with problems of scale and input availability because of the complexity of the systems they are trying to model. Research may solve some of these problems, but few will be solved as quickly as predicted or desired.
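One partial mitigation, offered here only as a sketch, is to augment the training set so the model at least sees simulated lighting changes, off-axis viewpoints, and partial occlusion. The specific transform choices below are illustrative assumptions, not a validated recipe.

```python
# Sketch: augmentations that push poor lighting, odd angles, and partial
# occlusion into the training distribution.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.ColorJitter(brightness=0.6, contrast=0.6),  # lighting variation
    transforms.RandomRotation(degrees=30),                  # off-axis viewpoints
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),    # scale/position changes
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.3),                         # crude partial occlusion
])
# This could be passed as the `transform` argument of the dataset sketched above.
```

Augmentation widens the training distribution but does not give the model understanding; images far enough outside that distribution will still fool it.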
Another basic weakness of AI systems is their inability to multitask. A human is able to identify an enemy vehicle, decide on a weapon system to employ against it, predict its path, and then engage the target. This relatively simple sequence of tasks is currently impossible for a single AI system to perform. At best, a combination of AIs could be assembled in which each task is assigned to a separate model. That kind of solution, even if feasible, would carry enormous costs in sensing and computing power, not to mention the cost of training and testing the system. Many AI systems cannot even transfer their learning within the same domain. For example, a system trained to identify a T-90 tank will most likely be unable to identify a Chinese Type 99 tank, despite the fact that both are tanks and both tasks involve image recognition. Many researchers are working to enable systems to transfer their learning, but such systems are years away from production.
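What passes for "transfer" today is mostly manual fine-tuning rather than a system adapting on its own: a developer freezes a pretrained backbone and retrains a new classification head on labeled examples of the new vehicle type. A minimal sketch, assuming labeled data for the new classes already exists:

```python
# Sketch: manual transfer learning -- freeze a pretrained backbone and
# retrain only the final layer for a new set of vehicle classes.
import torch.nn as nn
from torchvision import models

num_new_classes = 3  # illustrative, e.g., {T-90, Type 99, background}

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False          # keep the learned visual features

model.fc = nn.Linear(model.fc.in_features, num_new_classes)  # new trainable head
# Only model.fc.parameters() would be handed to the optimizer during fine-tuning;
# the model still needs labeled examples of the new classes before it can use them.
```

In other words, a human engineer, new labeled data, and a new training run stand in for the adaptability the paragraph above says these systems lack.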
AI systems are also very poor at understanding their incoming data and the context within it. Recognition systems do not understand what an image is; they only learn the textures and gradients of its pixels. Given a scene with similar gradients, an AI can easily misidentify parts of the picture. This lack of understanding can lead to misclassifications that a person would never make, such as identifying a boat on a lake as a BMP.
This leads to another weakness of these systems – the inability to explain how they made their decisions. Most of what happens in an AI system is a black box, and there is very little a human can do to understand how the system makes its decisions. This is a critical issue for high-risk systems, such as those that make engagement decisions or whose output may be used in critical decision-making processes. The ability to audit a system and find out why it went wrong is important from a legal and moral point of view. Additionally, questions about how we assess responsibility in cases where AI is involved are open research concerns. In the news recently, there have been many examples of AI systems making poor decisions based on hidden biases in areas such as loan approval and parole decisions. Unfortunately, work on explainable AI is many years away from bearing fruit.
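Explainability research does offer partial tools. One simple family, sketched below under the assumption of a differentiable image classifier like the ones above, is gradient saliency: measuring how sensitive the predicted score is to each input pixel. It highlights influential pixels, but it is a long way from the human-readable justification an audit would need.

```python
# Sketch of gradient saliency: which input pixels most influence the predicted
# class? `model` and `image` are assumed to be a classifier and a preprocessed
# (1, 3, H, W) tensor like those in the earlier sketches.
import torch

def saliency_map(model, image):
    model.eval()
    image = image.clone().requires_grad_(True)    # track gradients w.r.t. pixels
    logits = model(image)
    top_class = logits.argmax(dim=1).item()
    logits[0, top_class].backward()               # d(top score) / d(pixels)
    # Max over color channels gives one influence value per pixel.
    return image.grad.abs().max(dim=1).values.squeeze(0)
```

A heat map of influential pixels can show *where* the model looked, but not *why* it decided, which is the gap the paragraph above describes.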
AI systems also struggle to distinguish between correlation and causation. The infamous example often used to illustrate the difference is the correlation between drowning deaths and ice cream sales. An AI system fed statistics for these two items would not know that the two patterns correlate simply because they are both a function of warmer weather, and might conclude that to prevent drowning deaths, we should limit sales of ice cream. This type of problem can occur in a military fraud prevention system that receives purchase data by month. Such a system may wrongly conclude that fraud increases in September when spending increases, when in fact it is simply a function of year-end spending habits.
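The ice cream example is easy to reproduce numerically. In the hedged sketch below, both series are generated from temperature alone, yet their pairwise correlation comes out strong; a naive system that sees only those two columns has no basis for separating the confounder from a causal link.

```python
# Sketch: two series driven only by temperature still correlate strongly
# with each other, even though neither causes the other.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
temperature = rng.uniform(5, 35, size=365)                     # daily temperature (C)
ice_cream_sales = 50 * temperature + rng.normal(0, 100, 365)   # depends on temp only
drownings = 0.2 * temperature + rng.normal(0, 1, 365)          # depends on temp only

df = pd.DataFrame({"ice_cream_sales": ice_cream_sales, "drownings": drownings})
print(df.corr())  # strong correlation despite no causal link between the columns
```

The fraud-detection analogy is the same: unless the fiscal-year calendar is represented in the data, September spending spikes and fraud rates look causally linked.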
Even without these AI weaknesses, the main issue the military has to deal with right now is adversarial attacks. We have to accept that potential adversaries will try to fool or break any AI systems we field. Attempts will be made to fool image-recognition engines and sensors; cyberattacks will try to slip past intrusion-detection systems; and logistics systems will be fed altered data to clog supply lines with false demands.
Adversarial attacks can be divided into four categories: evasion, inference, poisoning, and extraction. These attacks have been shown to be easy to execute and often require little computing skill. Evasion attacks attempt to fool an AI system, often in hopes of avoiding detection, for example by hiding a cyberattack or convincing a sensor that a tank is a school bus.
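Evasion is the easiest category to sketch. The fast gradient sign method below, which assumes white-box access to a differentiable classifier, nudges each pixel slightly in the direction that most increases the model's error; the perturbation is often imperceptible to a human yet flips the prediction.

```python
# Sketch: fast gradient sign method (FGSM), the textbook evasion attack.
# Assumes white-box access to `model`; `image` and `label` are one example
# in the same (1, 3, H, W) / (1,) tensor format as the earlier sketches.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

That a few lines of code can defeat a state-of-the-art recognition model is exactly why any AI system fielded in a contested environment has to be treated as a target as well as a tool.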