Information warfare is a term used to describe various techniques and methods for the collection, distribution, modification, disruption, interference with, corruption, and degradation of information, done to gain an advantage over an adversary. For example, in a combat-specific scenario, information denial can restrict the intelligence available to a battlefield commander, while at the civilian level, sowing discord through a population can increase tensions and destabilize a region.
Information warfare has become increasingly prominent as the world has moved into what is popularly called the "Information Age," driven by the evolution of microchips, microcomputers, the internet and cyberspace, and associated information technologies. These and related technologies, such as social media, have increased the access individuals have to the world, and the access foreign governments and other agitating bodies have to the general populace, expanding the attack surface and the vulnerabilities these parties can exploit. On the battlefield, as militaries adopt connected technologies to increase command and control capabilities, they open themselves to information warfare attacks and must harden those technologies against them.
Traditional efforts at information warfare have included the use of espionage and information-gathering systems for militaries. Information warfare has been practiced for a long time, with some suggesting some form of it has been practiced since warfare has existed. For example, during the First and Second World Wars, aircraft were used not only to survey battlefields but also to drop leaflets or materials to foster goodwill in a local community while trying to intimidate an enemy into leaving. In the age of radio and television, these efforts continued and diversified. And the introduction of digital technologies further allowed information warfare to diversify to invade or hobble an adversary's IT infrastructure or otherwise defend that infrastructure from such attacks.
For the United States Military, there tend to be two acknowledged models of information warfare: the Cornerstones of Information Warfare model and the Network-Centric model developed by Vice Admiral Arthur Cebrowski. The Cornerstones model was developed in 1997 and defines information warfare as any action to deny, exploit, corrupt, or destroy the enemy's information and its functions while protecting the military from those same actions against itself. The Cornerstones model treats information as something that is collected, moved, stored, and transformed; however, it is unclear what transformation is or how it takes place. The model defines information attack measures as those actions to deny, exploit, corrupt, or destroy enemy information and its functions. This model encompasses traditional means of conducting information warfare, including psychological operations, electronic warfare, military deception, physical attack, and security measures.
Alternatively, the Network-Centric model sees an information grid that encompasses a sensor grid and an engagement grid. In this model, sensors and shooters are two classes of objects, and information warfare works toward command and control. Sensors and shooters provide data, and for data to become information, according to this model, there has to be a reduction of uncertainty in the data. Data can only be reduced in its uncertainty through the application or presence of knowledge; the model defines data without certainty as noise. The Network-Centric model is more focused on command and control operations, where the definition of information as separate from data becomes important in that it elevates information to a level of certainty allowing operations to act on it and further reduce uncertainty. This is also known as emphasizing information superiority. The Network-Centric model does not see information as something to be exploited, denied, or otherwise used in the same way as the Cornerstones model.
The fundamental idea of information seen across militaries is that information is not collected, stored, moved, or used to reduce uncertainty. Rather, information is generated in the course of reducing uncertainty so decisions can be made. This is echoed in the Network-Centric model, in which information is the reduction of uncertainty. This sees a decision-maker presented with data from the world and using established knowledge to reduce the uncertainty about the data. Once the uncertainty has been sufficiently reduced and a decision can be made with confidence, this data can be called information. This is also known as a situation assessment.
On the battlefield, there are only four tasks performed on data: it is collected, moved, stored, and used to reduce uncertainty. The efficiency of the process depends on the amount of data available and the ambiguity in the data. The more ambiguity, the more noise in the data, the less useful the data becomes, and the further from the above definition it is. This reduces the ability of a decision-maker to make confident decisions and can increase the need for more context and situational data to generate more information.
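The "reduction of uncertainty" described above can be made concrete with Shannon entropy, a standard information-theoretic measure of uncertainty. The scenario and the specific probability values below are hypothetical illustrations, not drawn from the models discussed; the sketch shows how a sensor report that concentrates belief on one hypothesis reduces measured uncertainty, which is one way to quantify "information generated":

```python
import math

def entropy(probs):
    """Shannon entropy in bits: a standard measure of uncertainty
    over a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical prior: four equally likely enemy courses of action.
prior = [0.25, 0.25, 0.25, 0.25]

# After a sensor report, one course of action becomes far more likely.
posterior = [0.85, 0.05, 0.05, 0.05]

# Information generated = uncertainty before minus uncertainty after.
information_gained = entropy(prior) - entropy(posterior)
print(f"Prior uncertainty:     {entropy(prior):.2f} bits")   # 2.00 bits
print(f"Posterior uncertainty: {entropy(posterior):.2f} bits")
print(f"Information gained:    {information_gained:.2f} bits")
```

Under this framing, noisy or corrupted data keeps the posterior spread out, so little information is generated and decisions remain uncertain.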
Protecting the data collected and ensuring that confidence in the data can be maintained increases the ability of decision-makers to complete their situation assessments and make those decisions. Further, it increases the chance data can be considered information. Part of protecting data includes concealing data while it is being collected, which can increase the efficiency of a friendly situation assessment or, when used against an enemy, reduce the efficiency of theirs. Often the efficiency of situation assessment is measured as the rate at which information is generated.
There are considered to be four main types of attack measures: degrade, corrupt, deny, and exploit. Degrade is the preferred term, compared with destroy, as data can be degraded simply by delaying it until its usefulness is reduced or destroyed in full or in part. For example, concealing data against a collection task or using jamming to reduce the capacity of a communication channel can degrade the quality of the data an enemy eventually collects. Corrupt refers to the insertion of false or spoofed data to create increased uncertainty. This could include the use of dummies on the battlefield to confuse force numbers, or psychological operations to confuse the data and information humans collect.
Denying an enemy information often means a direct attack on a data store or some other information or data collection process. For example, using a high-energy laser to blind or destroy optic sensors can be considered a denial of collection, or using a computer virus to destroy the operating systems of an enemy computer system can further deny the enemy's ability to collect information or conduct a situation assessment. Exploiting information is often defined as collecting against an adversary's movement of data, increasing the available data for a friendly situation assessment and making the generation of friendly information more efficient. Exploiting data can also mean introducing new data points to overload or confuse an adversary's situation assessment and reduce the efficiency with which the adversary develops information.
Much of the battlefield-centric view is focused on using or denying information for the efficiency of decision-making. The calculation of efficiency relates to the amount and quality of data available and the amount of ambiguity in the data; in this model, efficiency depends on the ability to generate and use information from data. Measuring this efficiency, though, is difficult, as there are no objective measures that can be applied to all capabilities of generating information. For example, using parametric radar to generate information versus intercepting adversary communications to generate information yields different efficiencies, based in part on the amount of traffic and noise in the data and the operator's capabilities.
Decision-making is a major component, if not the most important component, of information warfare from a battlefield-centric or command and control perspective. However, designing good decision-making strategies can be difficult, stemming from two facts:
- Each source of data comes with a cost, often in time. Radar parameters, for example, are easy to obtain at the pulse level, while scan characteristics require a long observation time and are therefore expensive to determine.
- Each source of data offers different information contributions and comes with a different amount of ambiguity, and the level of contribution depends on the current state of the problem and whether other data elements have been previously consulted.
Taking these facts into account means the contributions of data to the timely solution of a problem must be considered. Computers help in this case, as they can resolve conditional probabilities while using data from various data sources. The result of this method is a near-optimal, standard strategy for making decisions. However, the use of computers, especially if connected to a network of data sources, introduces another potential point of attack that can be exploited by an adversary in the case the computer and network are insufficiently protected.
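The conditional-probability resolution described above can be sketched with a simple sequential Bayesian update, combining reports from multiple data sources into a single belief. The scenario, the two hypotheses, and all likelihood values below are hypothetical illustrations, assuming conditionally independent sources; real fusion systems are far more elaborate:

```python
def bayes_update(prior, likelihoods):
    """Combine a prior over hypotheses with one source's conditional
    probabilities P(observation | hypothesis), returning the posterior."""
    unnormalized = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Hypothetical question: is the contact hostile or friendly?
belief = [0.5, 0.5]  # [P(hostile), P(friendly)]

# Each source supplies P(its observation | hostile), P(its observation | friendly).
radar_report = [0.7, 0.4]  # pulse parameters: cheap but weakly informative
comms_report = [0.9, 0.1]  # intercepted traffic: costly but highly informative

for report in (radar_report, comms_report):
    belief = bayes_update(belief, report)

print(f"P(hostile) after both sources: {belief[0]:.3f}")
```

Each source shifts the belief in proportion to how discriminating its observation is, which mirrors the point above that contributions depend on which data elements have already been consulted.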
For any battlefield, there are strategies brought to bear in a given situation, and information warfare is no different, with many potential strategies used to achieve the goal. In information warfare, the goal is not dissimilar from other battlefield scenarios: to gain operational advantage by fully integrating information functions, capabilities, and resources to optimize decision-making and warfighting effects. This view is expressed above in the battlefield-centric view of information warfare, where information is presented as a tangible part of a military's strategy on the battlefield. Strategies, here, refer to the specific actions that can be taken to achieve this goal. Any such strategy is focused on providing the following:
- Robust and agile command and control in all operating environments
- Superior knowledge of the current and predicted battlespace
- The ability to project power through both kinetic and non-kinetic means, including networks, cyber, and the electromagnetic spectrum
Assured Command and Control refers to the protection of the ability to communicate with and operate forces using a variety of methods and pathways that are flexible, resilient, and well-understood. This can include and require immediate workarounds in the case of a failure of normal communication channels and requires a defense-in-depth for information integrity, or the means to verify the accuracy and integrity of information.
Assured Command and Control requires decision-makers, be they commanders or tactical units, to have an awareness of the battlespace, including necessary information about a contact or target area and the air, land, or water around it. This includes being aware of the information available, what information is necessary, where to find the information, and how to access the information quickly, for a variety of potential situations. This is done by knowing where to find the data, gathering the data, sifting out appropriate data, and connecting the data to deliver the "so what" or the importance of that data to an operational commander or decision-maker. This can include providing an information-supported prediction of potential operational outcomes or actions of an adversary.
Integrated combat refers to using a systematic and deliberate methodology for the fusing of kinetic and non-kinetic weapons to achieve desired effects. This can include the deployment of cyber capabilities to project power to deter or de-escalate conflict by damaging an adversary's warfighting machinery or ability to make decisions. This non-kinetic method of warfare can be used either in lieu of or in concert with other (kinetic) ordnance, such as bombs or missiles, to specifically target portions of an adversary's warfighting infrastructure.
Further, the electromagnetic environment is fundamental to military operations and has been expressed as being critical to the national interests of the United States. This requires the electromagnetic environment to be treated similarly to a traditional domain of land, sea, air, and space. Some suggest future conflicts may be won specifically and only in the electromagnetic environment and cyberspace, with methods specific to these realms and without the use of kinetic weapon platforms.
In warfare, the information backbone, which includes reliable networks and communications circuits, works to give command and control the freedom to maneuver through cyberspace and has become an important infrastructure for decision-making and general warfighting. If a network or communication path goes down, the information backbone can offer immediate alternatives. This includes systems, sensors, databases, computers, and software applications that can fuse data into a concise and relevant visualization for knowledge of the operational environment. This information is also used to tailor the user's mission and strategies to achieve mission goals.
Further, moving into new information warfare strategies includes breaking down what the military often calls stovepipes, also known as information silos, to further consolidate, link, and integrate technologies, functionalities, and information sources to create smaller and more coherent tools. Making these tools available to appropriate networks and systems while offering a cohesive look and feel regardless of community or security domains can offer all levels of service members a unified look and feel to use these tools more intuitively. Further, offering a similar look and feel to these tools and systems can allow developers to maintain simultaneous version updates at all levels to maintain system security.
The application of combat and operational sensor data, intelligence, knowledge of the environment, and targeting information can allow users to execute the full range of mission goals. This can include the handling of knowledge flows, which work to ensure those who have a need to know, know, and they can use that knowledge or information to take appropriate action. This can be through all kinds of systems, such as email, messaging, websites, or other transmission channels, that can provide those with the information. Further, information can be packaged and stored in a logically organized system to allow other users to intuitively access knowledge or information. The information also has to be safeguarded, to ensure those without a need to know or without permission to access, or otherwise trying for unauthorized access cannot do so.
Another development in information warfare is the use of information as a weapon itself. This is also known as advanced electronic warfare, which can be used to inform and warn friendly forces, and deny adversary forces access to or the ability to actively use their information structures. Cyber capabilities can be also used to achieve specific operational effects in a battlespace while being integrated with more traditional, or kinetic, methods.
A concept common to information warfare is cyberwarfare. This is usually defined as a cyber attack or series of attacks that target either military infrastructure or a country's infrastructure. Cyberwarfare has the potential to wreak havoc on government or civilian infrastructure, disrupting critical systems, damaging the state, and potentially resulting in loss of life. However, what specific activity can be considered cyberwarfare has been debated. The United States Department of Defense recognizes the potential for malicious actors to threaten national security through use of the internet but has not offered a clearer definition.
Some definitions consider cyberwarfare to occur only when a cyber attack results in death. However, a more general definition has cyberwarfare typically involving a nation-state perpetrating cyber attacks on another, although this definition also allows attacks to be carried out against a nation-state by terrorist organizations or other non-state actors seeking to further hostile goals. There are several examples of cyberwarfare, but whether these acts constitute acts of war depends on the nation in question, and many are recognized as falling in a gray zone.
Concerns around cyberwarfare have led nations to engage in cyber wargames, real-life exercises or simulations that test how governments and private organizations respond to attack scenarios. These can expose gaps in defenses, improve coordination between entities, and help defenders learn to act quickly to protect infrastructure and save lives, and they are conducted to help cities, states, or countries improve their readiness. Through these practices, and through real-world events, the governments of many countries have issued operational national security policies to protect their information infrastructure. These policies typically focus on a layered defense approach, which includes the following:
- Securing the cyber ecosystem
- Raising awareness for cybersecurity
- Promoting open standards for combating cyber threats
- Implementing a national cybersecurity assurance framework
- Working with private organizations to improve cybersecurity capabilities
Any cyberwarfare defense strategy has to address the resilience of local businesses and enterprises to cyber attacks. Especially as businesses of all sizes become more integrated into society, ensuring their security measures are adequate to reduce the potential of a successful attack from a nation-state has become part of a national security approach. Businesses of all sizes can secure their networks by following measures such as those below:
- Creating obstacles to breaching the network
- Using web application firewalls to detect, investigate, and block malicious traffic
- Responding to a breach and restoring operations
- Facilitating cooperation between the public and private sectors
- Using local hackers to protect against foreign cyber threats
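One of the measures above, the web application firewall, works by inspecting incoming requests for known attack patterns before they reach an application. The following is a deliberately minimal sketch of that signature-matching idea; the three signatures and the `is_suspicious` helper are illustrative inventions, and production firewalls (such as those built on published WAF rule sets) add request normalization, anomaly scoring, and far richer rules:

```python
import re

# Hypothetical, minimal signature list for illustration only.
SIGNATURES = [
    re.compile(r"(?i)union\s+select"),  # SQL injection probe
    re.compile(r"(?i)<script\b"),       # cross-site scripting payload
    re.compile(r"\.\./"),               # directory traversal attempt
]

def is_suspicious(request_path_and_query: str) -> bool:
    """Flag a request if any known attack signature matches it."""
    return any(sig.search(request_path_and_query) for sig in SIGNATURES)

requests = [
    "/products?id=42",
    "/products?id=42 UNION SELECT password FROM users",
    "/files?name=../../etc/passwd",
]

for r in requests:
    print(f"{'BLOCK' if is_suspicious(r) else 'ALLOW'}: {r}")
```

The design trade-off is the same one the surrounding text implies: signatures block known attacks cheaply but must be continually updated, which is one reason public-private cooperation appears in the list above.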
Information warfare has typically targeted the general population of a nation-state, whether in their roles as civilians, in government positions, or in positions at various companies. In this scenario, information warfare can be used to compromise infrastructure, sow discord in a general population, or decrease trust in a nation-state's government. Further, it can be used to acquire information about an opponent, destroy information systems, and disrupt the flow of information. Even though such attacks, and attempts to subvert a population against itself or its government through propaganda, have existed for almost as long as humans have gone to war against each other, the internet and the increased connectivity of the world have increased the reach of information warfare and its potential to wreak havoc on a population and a nation-state.
Popular awareness of information warfare, and its potential impacts on society, has increased since, arguably, 2014. That year, Russia annexed Crimea, triggering a Russian-Ukrainian conflict categorized as a civil war, in which Russia was found to be influencing Ukrainians and the international community to promote Russian goals and its version of events. This was achieved through traditional media sources that had previously been compromised and controlled by Russia and its intelligence services. Perhaps more importantly, it was achieved through one of the most important developments to aid and abet information warfare, social media, where Russians were able to use various methods to influence Ukrainian citizens and spread Russian propaganda.
The internet has enhanced and vastly expanded the possibilities of data acquisition, information defense, and information disruption. It has made it easy for citizens of a nation-state to reach an international community, and for an international community to reach the citizens of a nation-state. Social media has further increased the ease and reach of this communication, providing popular and accessible places where wide-coverage, low-cost (dis)information campaigns can be executed. Further, social media offers a place where like-minded people can be, and often are, grouped, and which specific information or disinformation dissemination campaigns can target for maximum effect. This has spawned various new techniques to target civilians over the internet, including tactics such as troll factories, bots and bot farms, fake news or disinformation campaigns, and memetic warfare.
A "troll" on the internet is a person who posts inflammatory, insincere, digressive, or off-topic messages in an online community to provoke and upset others for their own amusement. The activity of "trolling" has been around as long as the earliest version of the internet when members of a message board would engage in these tactics for their own entertainment. This seemingly juvenile behavior was brought from those message boards to social media, where people have had a harder time discerning between trolls and honest interlocutors and actors on a platform.
This has spawned a tactic for information warfare in which individuals are contracted to act as "trolls" as part of a disinformation campaign. The resulting "troll factories" often have inconspicuous names, and they engage users in political and economic spheres where they aim to attack political opponents, attack a company, or cause other disruption in a population. To achieve these ends, troll factories use behaviors similar to the "trolls" they are named after, including hate speech, disinformation or fake news, and propaganda. These activities are supported by fake social media profiles. In the most obvious cases, these profiles will actively instigate a community or group the troll factory has targeted. In the most insidious examples, a fake social media profile will agree with a group in order to mislead it through accrued trust.
This trust, and regular incitement campaigns, can be further supported by entire websites created for the purpose of supporting any operations as necessary. Further, these posts can be replicated across various websites, social media sites, and different trolls in order to create an "amount" of evidence that appears to support itself. These activities can work to advance military and propaganda goals and, in some cases, turn populations and groups within a nation-state against each other, to make any military action more divisive and make people debate "concerns" and "facts" that may otherwise be extraneous to a given situation.
In more cynical activities, a country or government can take troll factories and aim them at their own population in a new kind of propaganda campaign to get the population to either engage in an activity or agree with a course of action the government has considered good or necessary but could not otherwise demand a population take without seeming too authoritarian or over-stepping democratic bounds.
A bot, short for robot and also called an internet bot, is a program that operates as an agent for a user or otherwise simulates a human. Bots can be used to automate repetitive tasks or carry out useful functions, but they can also be malicious, carrying malware or amplifying the attacks of troll factories. Bots have proliferated; some estimates have found that only 40 to 60 percent of users on Twitter are real people, with the rest being bots. This means that, every day, people on the social media platform engage with bots while treating them as if they were people. Bots can skew assessments through confirmation bias, amplify volumes of data with fiction, and render strategic decisions ineffective in the face of poor data.
On social media, bots are incredibly common, used for various campaigns as mundane as advertising and PR to try and gain favor for people or properties. For military and information warfare purposes, these same bots can be used to spread disinformation, engaging in the distribution, amplification, distortion, hijacking, flooding, and fracturing of information and for various purposes and goals. Bots can also be used in "botnets," which are similar to troll factories in that they use hundreds or more automated programs to progress toward specific ends.
Misinformation, disinformation, and fake news are as old as human war. Misinformation is often classified as unintentionally inaccurate information spread by oblivious actors, whereas disinformation is intentionally inaccurate information spread by nefarious actors. Disinformation has long been used in armed conflict and on the battlefield to trick adversaries, but its use against civilian populations has increased in the information age, as its dissemination has become increasingly easy and it can distract populations or create discord to draw attention away from, or toward, a specific issue.
Fake news is another name for disinformation that is often peddled over social media before reaching mainstream media. Fake news is often defined as false information passed off as factual, similar to disinformation. Fake news tends to be considered a part of psychological cyber warfare, which works to present a false picture of reality as the truth. Some definitions of fake news can include rumors in wide circulation, conspiracy theories that play on psychological factors (like confirmation bias: if an individual wants a political group to be criminals, a story saying they are criminals is more likely to be unquestioningly believed, as it confirms their bias), and even the delusions of leaders or others.
Fake news and disinformation are not limited to social media and can appear in mainstream news channels, radio, and podcasts, reaching people anywhere they consume their media. These are persuasion and deception tactics, which can create tension in a population and fuel political fires. Further, the continual alerts from news sources, blogs, social media, and headlines that make all news seem like breaking news have created negative feelings such as anxiety, hopelessness, despair, and sadness, fueled by the 24-hour news cycle. Fake news and disinformation play upon these negative emotions, which also create fertile ground for "alternative" facts or disinformation to be accepted by individuals who are conditioned to accept more and more extreme headlines and news.
Journalists reporting on war and related conflicts are trained to be extremely careful about what information is taken as true and what could be part of an intelligence disinformation campaign, and they work to verify their information while conducting investigations. However, as journalists increasingly work and source stories online, where disinformation now spreads, they have become targets of disinformation campaigns. The infrastructure developed by troll factories, and the replication of claims by bots, have made it increasingly difficult for journalists to verify information online, especially in matters of international relations.
These problems are compounded by the 24-hour news cycle and online news, which publish new stories continually to keep consumers on websites, television channels, and radio stations, in part to serve advertisers. This reduces the time journalists have to verify information, increasing their vulnerability to disinformation campaigns. A further complication journalists have to work with is the hacking of websites, either to corroborate disinformation passed along to a journalist or to place stories in opposition to the activities of a state on a news website.
The disinformation campaigns and the lack of verification or authentication in journalism, which have, as noted above, fed into a culture of paranoia, fear, anxiety, and a bias towards the worst possible outcome, have increased the level of hateful dialogue in the mainstream, in some cases even coming from politicians toward their opponents' constituents. They have also increased distrust in mainstream media, with more consumers reporting that they receive their news from social media or alternative media sources. This has led to calls to develop tools and businesses capable of verifying the credibility of media sources and signaling emerging risks, to help journalists debunk disinformation and restore trust in traditional media sources.
One suggested solution has been using companies or tools for "fact-checking," or the verification of a news story. However, these tools have in some cases been suggested to be as easily compromised by fake news, disinformation campaigns, and intelligence community interference as traditional journalism, especially as many of them pair traditional journalistic practice with artificial intelligence and natural language processing to try to determine what is truthful and what is not. Nor do they get around the psychological factors that, when studied, have shown that sociopolitical beliefs bias an individual's perception of an event.
Media users, whether utilizing traditional media or social media, have increasingly become the victims of information warfare. Unlike in a warzone, or during an open conflict, where individuals are prepared for some kind of information warfare campaign, regular media users in any country and at any time can become the unwitting targets of propaganda and disinformation spread through the media. In part, because of this, media users have become or are being made aware that they are the objects of disinformation activities with the aim of affecting their perception of reality.
This has created increasing distrust in information appearing in official circulation and traditional media, causing internet users to turn to alternative sources of information, such as social media and civil media, where the apparent transparency in the reporting of news and information suggests increased trust. However, one impact of social media and the algorithms that serve users with posts or videos they might be interested in is the "information bubble" or "echo chamber."
The "information bubble" is a term to describe a situation in which an individual's access to information is artificially restricted by the algorithms, which, in an effort to keep the user on the social media platform, serve that user with more content in line with what they have previously seen, rather than serving diverse sources of information on a given topic. These information bubbles result in a narrowing of an individual's understanding and knowledge about the world. It has affected politics in various countries; but studies have suggested that, as easy to infiltrate as these information bubbles are in information warfare, trying to diversify the media served to individuals will further entrench rather than soften their beliefs and attitudes towards those with differing views. Part of this problem comes in the increasing polarization of news media itself, as even traditional media has drifted from the value of objective news coverage towards partisan political news coverage, especially as those business models have proven more lucrative than objective news coverage has.
Although many technological solutions have been proposed to reduce the impact of propaganda, disinformation, and misinformation on a general population, one of the more difficult and perhaps most effective defenses against information warfare aimed at media users is increased media literacy. Improved media literacy can help media users think critically about information, especially before they share it, and can help them seek more diverse sources of information, without expecting users to change their media diet or restricting them in any way. By contrast, many technological solutions, other than deleting bot accounts and fake accounts, raise concerns about increased censorship of difficult conversations and ideas, if not the political erasure of some communities, whether good or bad.
While there is no hard beginning to information warfare, the importance of information and intelligence in war dates back at least to the writings of Sun Tzu, if not earlier. In the eighteenth and nineteenth centuries, leaders conducted information warfare through intelligence gathering, military deception, military information support operations, and operations security. These practices led to the increasing use of information warfare at the beginning of the twentieth century, especially as the French army during the First World War conducted some of the first electronic warfare operations by intercepting wireless and telephone communications.
Early espionage networks utilized information warfare techniques, such as the network established by Frederick the Great in the mid-1750s. This included gathering information from travelers about the tactics and weapons popular in their home countries and searching for any information that would enable him to develop character-pictures of the rulers and generals of various countries. However, this information gathering remained challenging, as verifying the information or ensuring its accuracy was difficult, if not impossible, in many scenarios.
Since the beginning of the twentieth century and the First World War, the technologies involved in information warfare have multiplied: from the French interception of wireless and telephone communications, to the cutting and rerouting of telegraph lines during trench warfare, to the increasing use of radio technology during the Second World War, and the famous cracking of the Enigma machine, which enabled Allied forces to intercept German communications, giving them up-to-date information on German troop movements and confirmation of front-line reports.
It could be said that the modern version of information warfare developed during the Cold War era. The Soviet Union engaged in what it called "active measures," which included manipulating the media, such as tampering with otherwise legitimate documentaries to aggravate tensions in West Germany, while the United States responded with its own efforts. This period is also notorious for espionage throughout West and East Berlin, to say nothing of the efforts that occurred inside the Soviet Union and the United States.
During this period, the U.S. Congress mandated that the CIA not allow any propaganda or covert efforts against the Soviet Union to reach the American public; essentially, the agency was prohibited from inadvertently propagandizing Americans. This was especially important as many American efforts against the Soviet Union included partial truths, if not outright lies.
One example was a propaganda story in which the U.S. claimed Soviet forces used toy bombs during the decade-long Soviet occupation of Afghanistan. The story sparked global outrage: U.S. news networks acquired staged footage of how these bombs supposedly worked, and other news agencies captured a "toy bomb" and exploded it to show its potentially devastating effect. The effect of the fake story was to knock the Soviet Union off balance.
Russia has largely been considered the first entity to unleash the new age of information warfare. With the fall of the Soviet Union and the declared end of the Cold War, global information warfare seemed to have ended for much of the world. While there were some information campaigns during the United States' wars in Afghanistan and Iraq at the beginning of the twenty-first century, they were not on the scale of the Cold War campaigns, and many focused on "hearts and minds" efforts, which tried to build relationships and trust between U.S. soldiers and local communities.
Despite claims that Russia started the internet-based version of information warfare, it arguably began in January 1999, when a website advertising the campaign for East Timorese independence from Indonesia was attacked by hackers operating from more than eighteen locations, including Australia, the Netherlands, and Japan. The attacks forced Connect-Ireland, the site's host, to shut the website down after its defenses had been breached. This has been considered one of the first hacking attacks orchestrated by a government. The website was reconfigured by mid-February 1999, but the August vote in favor of independence restarted the attacks and hacking attempts, with much of the campaign aimed at computers controlling banking, finance, military, and aviation platforms.
In March 1999, NATO forces began what would be an eleven-week bombardment of Yugoslavia. During this time, NATO's Southern Command (AFSOUTH) at Naples was subjected to a barrage of internet messages attempting to jam its communications. The attack ultimately failed: it did not disrupt troop movements or damage any equipment, and people were still able to pass information about troop movements and atrocities to NATO forces despite Yugoslav government controls and propaganda.
But in 2014, during the Russian incursion into Ukraine and the eventual annexation of Crimea, Russia deployed what has been suggested to be the modern version of information warfare. This approach, called reflexive control, has origins in Soviet doctrine and relies on exploiting an adversary's preexisting dispositions to push them toward Russia's preferred course of action. To do this, Russia engaged in various cyber campaigns as part of a hybrid warfare strategy that married an information campaign, conducted to gain an information advantage, with a kinetic campaign based on the information gleaned. However, during this period, the Russian information campaign, insofar as it worked to deny Russia's involvement in the unrest in Crimea, failed, as the denials could not stand in the face of concrete proof.
In 2022, the information warfare campaign Russia launched as part of its larger invasion of Ukraine was more extensive. This campaign included stories on the Russian news network RT that presented views of events differing from those in European and American media. The conflict extended into social media, where counter-programming flourished on messaging applications like Telegram, and where the Russian rationale for what it called a "special military operation" in Ukraine was used to advocate for Russian aggression and to smear Ukrainians. Cyber attacks against Ukraine have included malicious cyberwarfare aimed at the country's public, energy, media, financial, and business sectors. But the wider disinformation campaigns have arguably been more discussed and have had a wider impact.
These efforts extended toward destabilization, where Russian troll factories have been tied to disinformation campaigns that worked to exploit sensitive issues in other countries, such as COVID vaccines, racial tensions, and human rights. In a country like Latvia, where Russian media and troll factories were not removed from media or social media platforms, and where the impact of the Soviet era is still felt, people living together were found to become increasingly confrontational toward each other after being served starkly different accounts of the conflict.
These campaigns have made use of various social media platforms, such as Facebook, Twitter, TikTok, YouTube, and Telegram. But Russia has not been alone in using social media to advance its perspective on the conflict; Ukraine has done the same. For example, in the first week of the war, TikToks from a range of sources tagged with the hashtags #Russia and #Ukraine had amassed 37.2 billion and 8.5 billion views, respectively. The narratives presented by Russia and Ukraine through these videos were diametrically opposed, leading some to call the early days of the conflict a "TikTok War."
The "Ghostwriter" disinformation campaign used networks of proxy servers and virtual private networks to avoid detection while hacking the social media accounts of European political figures and news outlets to spread fabricated content critical of the North Atlantic Treaty Organization (NATO) and treaty countries across Eastern Europe. The campaign spread false content and false posts to encourage civilians across Eastern Europe to pressure their politicians to vote against inclusion in NATO and to erode regional support for the alliance. Originally believed to be a Russian campaign, it was later linked with "high confidence" to the Belarusian government, consistent with the support the Ghostwriter campaign had given to Belarusian government interests. The group continued to work with the Belarusian government, under autocratic President Alexander Lukashenko, stealing information and spreading further disinformation as the conflict between Russia and Ukraine escalated, a conflict that, according to some reports, largely hinges upon Ukraine's potential to join NATO.
In 2020, at the beginning of the COVID-19 pandemic, it was reported that espionage and foreign interference efforts were increasing as more communities and individuals conducted their business online. One report from the Canadian Security Intelligence Service noted that the People's Republic of China was targeting government and non-governmental organizations in Canada, including academic institutions, the private sector, and civil society, in part to steal sensitive information, classified documents, and technology. Further, the report found that agents from the People's Republic of China, among other foreign states, used Canada to covertly gather political, economic, and military information through targeted threat activities that sought to take advantage of the collaborative and open nature of Canada's society and government.
This was often done through individuals with little to no formal intelligence training, who often do not recognize that they pose an intelligence threat. These operations frequently extended to supporting foreign political agendas through deceptive influence over the Canadian government. Other efforts sought to monitor and intimidate various communities to fulfill strategic and economic objectives, or used Canada as a staging ground to target allied nations. Some of these efforts, during the early days of the pandemic, focused simply on pushing the blame for the COVID-19 pandemic onto the West and away from China, while also working to discredit democratic responses to the pandemic.
As the United States, by some estimates, leads the push toward networked soldiers through the internet of things (IoT), and as commercial IoT technologies and internet-connected devices increasingly become part of everyday life, information warfare and its related techniques represent a near-existential threat. For its military structure, the Pentagon reports receiving 10 million attempts to hack its systems a day; the National Nuclear Security Administration, part of the Energy Department, records 10 million hacks a day; and the state of Utah says it faces 20 million attempts per day, up from 1 million a day in 2015. These figures show that efforts against the United States have continued to increase.
The United States' strategy toward information warfare has shifted over time. Previously, the U.S. Information Agency (USIA) ran the country's efforts to combat global information warfare, and conducted its own information campaigns, from 1953 until the agency was dissolved in 1999. However, modern Russian information warfare campaigns, such as those noted above, have prompted some to call for the USIA to be reinstated and improved, as toward its end the agency was largely marginalized, limited, and separated from foreign policy and foreign policy goals, and was hobbled by figures such as Henry Kissinger and Senator J. William Fulbright, who questioned the agency and its usefulness.
The usefulness of the USIA was limited, in part, because the agency was never given a leadership role or the level of integration with foreign policy goals that would have allowed it to counter propaganda; even during its existence, countering propaganda was not part of the USIA's mandate but was led by the short-lived Active Measures Working Group. Further, active propaganda efforts in foreign countries were assigned to the U.S. Agency for Global Media (USAGM), which had operated under the USIA until 1999 and ran targeted operations in countries such as North Korea, Russia, China, Ukraine, and Indonesia, as well as across Latin America. However, when the USAGM was found to be operating separately from the wider foreign policy of a given administration, it lost credibility with policymakers. The story of the USIA and related U.S. information warfare agencies shows that such agencies require close links between their operations and strategic planning.
One of the efforts the United States has made to combat information warfare is Army Cyber Command. For some, however, Army Cyber Command is not yet capable of meeting the country's wider information warfare demands, even though its more all-encompassing vision of information warfare could be used to meet them as the U.S. military wrestles with doctrine in the information realm. Joint Publication 3-13, initially published in 2008, defines the field in which Army Cyber Command operates as the following:
The integrated employment of core capabilities of electronic warfare, computer network operations, psychological operations, military deception, and operations security, in concert with specific supporting and related capabilities, to influence, disrupt, corrupt, or usurp adversarial human and automated decision-making while protecting our own.
There are challenges to this remit, especially as some consider the United States' electronic warfare capabilities to have peaked during the 1970s and 1980s and to have since atrophied from underuse or disuse. As adversaries have raised the competitive stakes, information warfare is considered by many to be one of the few areas of warfare in which the United States has fallen behind. Although the country is often considered to lead in kinetic warfare capabilities and technologies, the world's increasing connectedness and the growing non-kinetic threat from competitors and adversaries have prompted calls for the United States to increase its capabilities in the information warfare realm, with a more aggressive Army Cyber Command considered a potential start.