Introduction
The current literature addresses the information revolution and its impact on modern society. Proponents have argued that a well-placed attack on a nation’s information infrastructure might permit an adversary to bypass that nation’s forces and strike directly at its infrastructure. Abuse of systems comes in many forms. Commonplace abuses rely on common motivations (e.g., greed, thrills), and many are only high-tech versions of carjacking and joyriding. Other, less common abuses are serious and difficult to anticipate. The owners of systems can be expected to protect themselves (to an economically optimal level) against commonplace threats, the probability and patterns of which can be predicted from experience. Less common but serious threats are less likely to be watched for because they arise from motives that surface less often. Systems that face a known pattern of threat (and whose owners would bear most or all of the cost of an attack) can determine an optimal level of protection. There is no reason to believe that these owners provide less protection against information attacks than they do against other threats to their well-being (e.g., shoplifting, embezzlement, customer lawsuits). In information warfare, there is no predetermined lead time between ignition and detonation. Bad code might be inserted into a system years before it is needed, simply because an opportunity to insert it arose unexpectedly. Software upgrades may clean such code out, and the longer the code sits, the greater the odds that it is found or ignites early. The threat of information attacks is real and can come from many different sources.
Main body
A large body of literature discusses the problem of information warfare and its impact on citizens. Bayles (2001), in “The Ethics of Computer Network Attack”, argues that information warfare opens a gap between what one appears able to do and what one can actually do. If appearance deters, actuality may be irrelevant. The counterargument, that deterrence depends on accurate mutual assessment leading to predictions of outcomes which would cause all but the clear winner to desist, is poorly supported by history. Understanding the enemy’s information warfare capabilities is almost a contradiction in terms: to understand a capability is to take a large step toward being able to nullify it. To know the holes in one’s system through which an enemy will attempt passage is to know what needs to be plugged. To know how well an opponent can hide from one’s sensors is to know which features of one’s sensors are easiest to spoof or evade, and thus what needs the most work. If an opponent knew how well one could counter it, that could only be because it sensed how one could do so, which creates a basis for counter-countermeasures, and so on. To make matters worse, any measure of a nation’s capability for information warfare may be meaningless unless measured against a specific opponent. One nation may be able to disrupt another’s information infrastructure if that infrastructure is centralized and protected by firewalls, but not if it is dispersed and protected by redundancy. Another nation may be stymied by firewalls but operate more easily against networks. Some nations may hide their forces with stealthy technology; others may use cover, concealment, and deception. The deterrence value of information warfare thus echoes longstanding debates over submarines and battleships. Similar ideas are expressed by Brenner (2007), who states that although important computer systems can be secured against hacker attacks at a reasonable cost, that does not mean they will be secured. Paradoxically, increasingly common and sophisticated attempts may be the best guarantor of the security of national computer systems: if the absence of important incidents lulls systems administrators into inattention, an opening is created for some group to launch a broad, simultaneous, disruptive attack across a variety of critical systems. For this reason, a sequence of pinpricks, or even a steady increase in attacks, is the wrong strategy: it creates its own inoculation. Strategic effectiveness requires attacking an infrastructure in force and all at once. The concept of a single government commander for information defense is, in any case, a stretch.
Current literature also pays special attention to future problems and threats posed by information warfare. Adams (2001) and Molina (2003) attempt to predict the impact of information technology on society. In “Future Warfare and the Decline of Human Decision-making”, Adams questions the criteria that differentiate an actionable information warfare attack from one that can be ignored. Special attention is given to how decision-making and problem-solving skills change under information warfare threats and limitations. The author criticizes information technology and its widespread implementation in the military. Hacker attacks, information warfare in microcosm, are numerous and for the most part trivial. There may be a million break-ins on the Internet every year. Most are home-grown, although some originate overseas, a fraction of which may be state-sponsored. Most of the million are pranks and do no damage, and even when damage is done, it is usually scarcely more than an annoyance. Even if an incident is grounds for individual punishment, it does not necessarily follow that it is sufficiently grave grounds for international retaliation; to retaliate against every break-in would tax the principle of proportionality. Defining an actionable incident means determining how much harm is enough. A threshold may be arbitrary, and no measure of the effect of an incident may be exact: if an attack disabled electronic payments, say, and customers were forced to use other means of payment, some might use cash, others might come back another day, and still others might never make the intended purchase.
Another layer of literature addresses the problems of cybercrime and the violations of laws and regulations made possible by information technology. Brenner (2007) and Intoccia and Moore (2006) claim that the United States’ struggle against a closed society raised the need for intelligence and, with that, the status of intelligence agencies. In a more open world (even with an increase in “peace” operations), the need for intelligence might logically seem to shrink, since open sources would mostly suffice; information warfare, however, brings the need back. As struggles over information, and thus intelligence, increasingly affect the conduct of conventional conflict, the mindset of intelligence is bound to pervade the warrior’s mental constructs. In conventional combat, information on the performance of systems is only the beginning of a strategy to counter those systems; a charging tank is terrifying even if the soldier knows its top speed. Data on the other side’s information warfare systems, by contrast, constitute much, even most, of what is required to defeat those systems. The United States (and other nations) must therefore hide the extent of its true capabilities (and vulnerabilities) and devote considerable effort to determining counterpart strengths and weaknesses. The more the struggle for information dominance determines the outcome of a war, the more public debate grows uninformed and therefore immaterial. Yet public influence on the generation and use of military power, an effective secondary form of civilian control, is meaningless if insufficient information is public. Intelligence is cousin to deception. As hiding and seeking assume larger roles in outcomes, each side will necessarily put more effort into testing the other’s capabilities, to see what is and is not detectable. One side may feint; the other may fake (ostensibly responding to false negatives and allowing some positives to seem to move unscathed).
Some authors analyze and evaluate the role of information warfare in global communication and social relations. Libicki (1998) examines the problem of war and peace as it applies to modern communication technologies. Information warfare offers opportunities for retrogression, and it presents two obstacles to the Services’ use of commercial systems. First, neither commercial hardware nor software is today well protected against painstaking malice. Commercial communications equipment, for instance, is rarely hardened against jamming or otherwise made invulnerable to spoofing (although spread-spectrum technologies in digital cellular phones offer some protection). Commercial software systems, developed for low-threat environments, are poorly protected against rogue code, and commercial networks are penetrated all the time. The military, which needs to operate in contested realms, cannot afford such vulnerability. Yet if dependent on today’s commercial systems, it has little choice but to insert security after the fact, and the more security is needed, the more often a proprietary solution becomes the less expensive option. Second, some in the Armed Services maintain that unless commercial hardware and software are rigorously inspected, no one can be sure they have not been tampered with. Most commercial electronics originate, in whole or in large part, in Asia. What guarantee is there that someone there did not sneak a circuit onto a chip that, on being awakened, will send out a previously unseen signal to disable or corrupt the unit it sits in? Software provides numerous opportunities for planted bugs. Information warfare as a policy issue has yet to break the surface into public consciousness. This source is informative, though subjective, presenting the author’s personal position and views.
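As a purely illustrative aside, not drawn from Libicki, the jamming resistance that spread-spectrum techniques provide can be sketched as frequency hopping: sender and receiver derive the same pseudo-random channel sequence from a shared seed, so a jammer that cannot predict the sequence blocks only a small fraction of transmissions. The channel count and seed values below are hypothetical.

```python
import random

# Minimal frequency-hopping sketch: both ends derive the same
# pseudo-random channel sequence from a shared seed, so a jammer
# parked on one channel hits only ~1/NUM_CHANNELS of the hops.
NUM_CHANNELS = 64    # hypothetical number of radio channels
SHARED_SEED = 1234   # hypothetical secret shared by sender and receiver

def hop_sequence(seed: int, length: int) -> list[int]:
    """Generate the channel-hop schedule both parties reproduce."""
    rng = random.Random(seed)
    return [rng.randrange(NUM_CHANNELS) for _ in range(length)]

jammed_channel = 17
hops = hop_sequence(SHARED_SEED, 1000)
blocked = sum(1 for ch in hops if ch == jammed_channel)
print(f"Jammer blocked {blocked}/1000 hops (~{blocked / 10:.1f}%)")
```

With 64 channels, a fixed-frequency jammer disrupts only about 1.6 percent of hops, which is the sense in which such techniques “offer some protection”.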
All authors agree that a policy of deterrence presumes that incident and response are tightly linked. But is it wise policy to promise a response regardless of the identity of the perpetrator? One would not want a retaliatory policy with no flexibility whatsoever, yet clarity is the hallmark of deterrence, and sophistication tends to cause blurriness. U.S. strategic retaliation as designed during the Cold War presumed a tough adversary; other potential attackers were lesser cases. In information warfare there is no canonical foe and no lesser case. Ordinarily, retaliation serves to deter the recurrence of incidents, yet the United States is vulnerable to attacks because systems security is weak, and weak systems security reflects the perception that potentially damaging attacks are rare. A sufficiently nasty attack might catch people’s attention and promote security, so a second attack would be harder to pull off. An explicit specification requires a nation to respond to what, in the case of information attacks, could prove to be gauzy circumstances; the lack of a specification does not prevent ad hoc retaliation. It is difficult to see how an explicitly declared deterrence policy could be made to work, but it is easy to see what the problems in trying would be. A declared policy that could not be reliably carried out would soon lack credibility. If thresholds were too low, or if proof of national sponsorship were not sufficiently convincing, retaliation would make the United States appear the aggressor. If thresholds were too high and standards of proof too strict, a policy of retaliation would prove hollow. If the United States were to retaliate against nations regardless of other political considerations, it would risk unwanted confrontation and escalation; if its responses were seen as too expedient, retaliation would seem merely a cover for more cynical purposes.
Shabazz (1999) and Walker (2000) examine information policy and security issues and their impact on society. Information technologies change people’s lifestyles and increase the importance of security and privacy. The authors argue that many innovations carry new security risks. Some Web browsers and spreadsheet macros allow the unwary to download viruses, and distributing software objects and agents over networks may introduce similar problems. If systems use what they learn to reconfigure themselves continuously, the classic response to suspected corruption (starting fresh with original media) will set back system capabilities. Privacy concerns, meanwhile, push people’s traditional lifestyles toward isolation and separation. The more a nation depends on the integrity of its information infrastructure, the more it can be put at risk by attacks there; the threat of massive disruption through information warfare has been posited as a potential successor to massive destruction by nuclear warfare. Similar ideas are expressed by Libicki (1998), who, as noted in the introduction, observes that abuse of systems comes in many forms and that less common but serious threats are the least likely to be watched for. Threats against individuals, although a potential tool of guerrilla warfare, are more probably motivated by private grudges. A fourth case, the theft of data, is simply a high-tech version of espionage, something the DOD already takes seriously every day. The fifth and sixth cases, corruption and disruption, best characterize the unexpectedness and malevolence of information warfare: attackers require an external goal, a concerted strategy, and the time to carry it out.
Emergent business opportunities are discussed by Brenner (2007) and Williams (2003). On the one hand, increasing demand for security systems and special programs gives computer businesses a chance to innovate and create new products. As noted in the introduction, systems that face a known pattern of threat (and whose owners would bear most or all of the cost of an attack) can determine an optimal level of protection, and there is no reason to believe that owners provide less protection against information attacks than against other threats to their well-being. Although many computer systems run with insufficient regard for security, they can be made quite secure. One dimension of protection is sheer restrictiveness: a system that is secured only by keeping out every bad guy makes it difficult or impossible for good guys to do their work. The second dimension is the resources (money, time, attention) spent on sophistication: a sophisticated system keeps bad guys out without great inconvenience to authorized users. The authors note a business opportunity in the unfortunate fact that systems must accept changes to core operating programs all the time. In the absence of sophisticated filters, a tight security curtain may be needed around the few applications and superusers allowed to initiate changes (authorized users might need to work from specific terminals hardwired to the network, an option in Digital’s VAX/VMS operating system), as sketched below.
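Purely as an illustration, not taken from Brenner or Williams, such a security curtain can be thought of as an allow-list that gates change-initiating operations to a few superusers working from designated terminals. The following is a minimal sketch; all user names, terminal IDs, and function names are hypothetical.

```python
# Illustrative sketch only: gate changes to core programs behind an
# allow-list of superusers and the specific terminals they may use.
# All names here (users, terminals, functions) are hypothetical.

AUTHORIZED_CHANGERS = {
    # superuser -> terminals hardwired for privileged work
    "ops_admin": {"TTY01", "TTY02"},
    "patch_mgr": {"TTY07"},
}

def may_initiate_change(user: str, terminal: str) -> bool:
    """True only if this user, at this terminal, may alter core programs."""
    return terminal in AUTHORIZED_CHANGERS.get(user, set())

def apply_core_update(user: str, terminal: str, patch: bytes) -> None:
    """Refuse any change request that does not come through the curtain."""
    if not may_initiate_change(user, terminal):
        raise PermissionError(f"{user}@{terminal} may not modify core programs")
    # ... verify, log, and install the patch here ...

# Allowed and refused attempts:
assert may_initiate_change("ops_admin", "TTY01")
assert not may_initiate_change("ops_admin", "TTY09")  # wrong terminal
assert not may_initiate_change("guest", "TTY01")      # not a superuser
```

The design point is that the check depends on both identity and location, so a stolen password alone does not open the curtain, which is the effect the hardwired-terminal option was meant to achieve.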
Williams (2003) states that the other problem with a single set of security policies is that each sector differs greatly not only in its vulnerabilities and in what an attack might do but, more importantly, in how government can influence its adoption of security measures (e.g., some sectors are regulated monopolies). Seeing to it that various private efforts at self-defense are not at odds can help. A high-level coordinator could ensure that the various agencies do what they are tasked to do; lower-level coordinators could work on across-the-board issues (e.g., public key infrastructures). Beyond these, no czar is needed. No good alternative exists to having system owners attend to their own protection. By contrast, having the government protect systems would require it to know the details of everyone’s operating systems and administrative practices, an alternative impossible to implement even if it did not violate commonly understood boundaries between private and public affairs. In cyberspace, forcible entry does not exist unless mandated by misguided policy. Two factors weigh against retaliation in kind: asymmetry and controllability. If a nation that sponsored an attack on the U.S. infrastructure itself lacked a reliable infrastructure to attack, it could not be substantially harmed in kind and therefore would not be deterred by an equal and opposite threat. Controllability, the ability not just to achieve effects but to predict their scope, is difficult. To predict what an attack on someone’s information system will do requires good intelligence about how to get into it, what to do inside, and what secondary effects might result. The more complex systems become, the harder it is to predict secondary effects, not only inside the system but also outside it, or even outside the country. Retaliation may produce nothing, may produce nothing that can be made to look like something, may produce something, may produce everything, or may affect third parties, including neutrals, friends, or U.S. interests. Any attempt to “war-room” an information crisis will find the commander armed with buttons connected to little outside the government’s immediate control. Repair and prevention are largely in the hands of system owners, who manage their own systems, employ their own systems administrators, and rarely need to call on shared resources (so there is little need for central allocation). The author argues that little evidence exists of recovery or protection synergy that cuts across sectors under attack (say, power companies and funds-transfer systems). The article is concise and detailed, offering readers interesting facts and ideas about information warfare and its impact on modern society, and it suggests that information warfare remains a phenomenon that must be understood separately from warfare as a whole.
The literature reviewed shows that information warfare is a global problem, and special attention is given to globalization and access to information technology. As one author observes, people rarely think about information warfare from first principles; for the most part, information warfare involves phenomena few people have experienced. It is warfare by virtue of analogy or, better, metaphor: it is warfare because it resembles activities that surely are warfare. Used properly, a metaphor can be a starting point for analysis, a littoral, as it were, between the land of the known and the ocean of the unfamiliar. A good metaphor can help frame questions that might otherwise not arise, it can illustrate relationships whose importance might otherwise be overlooked, and it can provide a useful heuristic device, a way to play with concepts, to hold them up to the light to catch the right reflections, and to tease out questions for further inquiry. The information domain, however, is almost entirely man-made. Thus command-and-control warfare may attack the enemy’s command structure, but that structure can itself be shaped almost at will. Hacker warfare proceeds entirely over a terrain of the defender’s making, be it hardware, networks, operating systems, applications, or access architecture. The success of attempts to deceive the other side’s system of systems is a function of its makeup from one year, day, or minute to the next. Psychological warfare against enemy commanders or troops works best when it plays off preconceived notions they share. Warfare in general has been likened to chess, in which two players contest over a fixed board using pieces with predetermined behaviors. The characteristics of space satellites descend from immutable laws of orbital mechanics, and, for the most part, even the land terrain precedes ground combat and sets the context for what works and what does not work in war.
Conclusion
Information warfare strategies tend to split into those dealing with attacks on or by electronic devices (as in intelligence-based warfare, electronic warfare, or hacker warfare) and those dealing with psychological warfare: bytes and memes, as it were. The intersection of the two is rather small, yet both strategies are often lumped into the same discipline, information warfare. The two can, however, be related as follows: because ascertaining the potential of computer warfare is difficult, its psychological impact may be disproportionate to its tangible impact. The power of computers in general, and of information warfare in particular, is not well understood by the public or by most military or national leaders. For this reason, computer-based information warfare can play a huge role in psychological warfare; conversely, powerful techniques may lack psychological impact, and some may invent an enemy whose information warfare tricks are so insidious that they deter themselves. Information technology also reduces any nation’s ability to understand the capabilities of another nation’s weapons systems, even conventional ones. The actual testing of weapons allows humans or sensors to see them in practice and to gauge how well they work. With increasing sensitivity to field hazards and decreasing costs of information technology, weapons are now often tested through simulation, leaving few opportunities for those not directly involved to measure performance or gauge effectiveness. In strategic terms, a nation can suddenly emerge as a force of surprising, even decisive, capability. If the capabilities of specific instruments of war are harder to measure, the outcome of a potential conflict is harder to forecast.
Bibliography
Adams, Th. K. 2001, Future Warfare and the Decline of Human Decision-making. Parameters, 31 (4), 57.
Aldrich, R. W. 2000, How Do You Know You Are at War in the Information Age? Houston Journal of International Law, 22 (2), 23.
Bayles, W. J. 2001, The Ethics of Computer Network Attack. Parameters, 31 (1), 44.
Brenner, S. W. 2007, “At Light Speed”: Attribution and Response to Cybercrime/Terrorism/Warfare. Journal of Criminal Law and Criminology, 97 (2), 379.
Frost, E. L. 1998, Horse Trading in Cyberspace: U.S. Trade Policy in the Information Age. Journal of International Affairs, 51 (2), 473.
Intoccia, G. F., Moore, J. W. 2006, Communications Technology, Warfare, and the Law: Is the Network a Weapon System? Houston Journal of International Law, 28 (3), 232.
Libicki, M. C. 1998, Information War, Information Peace. Journal of International Affairs, 51 (2), 411.
Molina, A. 2003, Cyberspace: The “Color Line” of the 21st Century. Social Justice, 30 (2), 143.
O’Connell, K., Tomes, R. 2003, Keeping the Information Edge. Policy Review, 122 (1), 19.
Shabazz, D. 1999, Internet Politics and the Creation of a Virtual World. International Journal on World Peace, 16 (1), 27.
Walker, G. K. 2000, Information Warfare and Neutrality. Vanderbilt Journal of Transnational Law, 33, 467.
Williams, Th. J. 2003, Strategic Leader Readiness and Competencies for Asymmetric Warfare. Parameters, 33 (2), 19.