
Supercharging


The power output of an engine depends upon the amount of air inducted per unit time, the degree of utilization of that air and the thermal efficiency of the engine. The amount of air inducted per unit time can be increased either by increasing the engine speed or by increasing the density of the intake air. The method of increasing the inlet air density, called supercharging, is usually employed to increase the power output of the engine. This is done by supplying air at a pressure higher than the pressure at which the engine naturally aspirates air from the atmosphere, using a pressure-boosting device called a supercharger.
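As a rough illustration of why raising intake density raises power, the sketch below assumes that power is approximately proportional to the mass of air inducted per cycle and uses the ideal gas law to estimate density; the boost pressure, charge temperature and baseline power are made-up example values, not figures from the text.

```python
# Rough sketch: power gain from supercharging, assuming power scales with
# intake air density (ideal gas law). All numerical values are examples only.

R_AIR = 287.0  # J/(kg*K), specific gas constant for air

def air_density(pressure_pa, temperature_k):
    """Ideal-gas density of air."""
    return pressure_pa / (R_AIR * temperature_k)

# Naturally aspirated: ambient conditions
rho_na = air_density(101_325, 298.0)

# Supercharged: roughly 0.5 bar boost, charge warmed by compression to ~330 K
rho_boost = air_density(151_325, 330.0)

baseline_power_kw = 100.0  # assumed naturally aspirated output
boosted_power_kw = baseline_power_kw * rho_boost / rho_na

print(f"Density ratio: {rho_boost / rho_na:.2f}")
print(f"Estimated boosted power: {boosted_power_kw:.0f} kW")
```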

OBJECTS OF SUPERCHARGING

The increase in the amount of air inducted per unit time by supercharging is obtained mainly to burn a greater amount of fuel in a given engine and thus increase its power output. The objects of supercharging include one or more of the following.

1. To increase the power output for a given weight and bulk of the engine. This is important for aircraft, marine and automotive engines where weight and space are important.

2. To compensate for the loss of power due to altitude. This mainly relates to aircraft engines, which lose power at an approximate rate of one percent per 100 meters of altitude. It is also relevant for other engines used at high altitudes.
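To make the rule of thumb in point 2 concrete, here is a minimal sketch that applies a loss rate of roughly one percent per 100 meters of altitude; the sea-level power and the altitude are arbitrary example values.

```python
# Sketch of the "about 1% power loss per 100 m of altitude" rule of thumb.
# Sea-level power and altitude are example values only.

def power_at_altitude(sea_level_power_kw, altitude_m, loss_per_100m=0.01):
    """Approximate engine power after the rule-of-thumb altitude derating."""
    derate = 1.0 - loss_per_100m * (altitude_m / 100.0)
    return sea_level_power_kw * max(derate, 0.0)

print(power_at_altitude(1000.0, 3000.0))  # ~700 kW at 3000 m for a 1000 kW engine
```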

A supercharger is an air compressor used for forced induction of an internal combustion engine. The greater mass flow-rate provides more oxygen to support combustion than would be available in a naturally-aspirated engine, which allows more fuel to be provided and more work to be done per cycle, increasing the power output of the engine.

A supercharger can be powered mechanically by a belt, gear, shaft, or chain connected to the engine's crankshaft. It can also be powered by an exhaust gas turbine. A turbine-driven supercharger is known as a turbosupercharger or turbocharger. The term supercharging technically refers to any pump that forces air into an engine—but in common usage, it refers to pumps that are driven directly by the engine as opposed to turbochargers that are driven by the pressure of the exhaust gases.

Thermal Barrier Coatings



Definition

Thermal barrier coatings (TBCs) are layer systems deposited on thermally highly loaded metallic components, for instance in gas turbines. The accompanying figure shows a stator blade of a stationary gas turbine furnished with a plasma-sprayed thermal barrier coating of YSZ (Siemens Power Generation). The coating, together with cooling of the component, causes a pronounced reduction of the metal temperature, which prolongs the component's lifetime. Alternatively, the use of thermal barrier coatings allows the process temperature to be raised, thus increasing efficiency.

Heat engines are designed with various factors in mind, such as durability, performance and efficiency, with the objective of minimizing the life-cycle cost. For example, the turbine inlet temperature of a gas turbine having advanced air cooling and improved component materials is about 1500°C. Metallic coatings were introduced to sustain these high temperatures. The trend for the most efficient gas turbines is to exploit recent advances in materials and cooling technology by moving to engine operating cycles that employ a large fraction of the maximum turbine inlet temperature capability for the entire operating cycle. Thermal barrier coatings (TBCs) perform the important function of insulating components, such as gas turbine and aero engine parts, that operate at elevated temperatures. TBCs are characterized by their low thermal conductivity, the coating bearing a large temperature gradient when exposed to heat flow. The most commonly used TBC material is yttria-stabilized zirconia (YSZ), which exhibits resistance to thermal shock and thermal fatigue up to 1150°C. YSZ is generally deposited by plasma spraying and electron beam physical vapour deposition (EBPVD) processes. It can also be deposited by HVOF spraying for applications such as blade tip wear prevention, where the wear-resistant properties of this material can also be exploited. The use of a TBC allows the process temperature to be raised and thus increases the efficiency.
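The insulating effect described above can be illustrated with a one-dimensional steady-state conduction estimate, ΔT = q·t/k; the heat flux, coating thickness and YSZ conductivity used below are typical order-of-magnitude assumptions, not values taken from the text.

```python
# Sketch: temperature drop across a YSZ thermal barrier coating, assuming
# 1-D steady-state conduction (delta_T = q * t / k). Example values only.

heat_flux_w_m2 = 1.0e6      # assumed heat flux through the blade wall, W/m^2
thickness_m = 300e-6        # assumed coating thickness, 300 micrometres
conductivity_w_mk = 1.0     # assumed conductivity of porous YSZ, W/(m*K)

delta_t = heat_flux_w_m2 * thickness_m / conductivity_w_mk
print(f"Approximate temperature drop across the coating: {delta_t:.0f} K")
# With these assumptions the ceramic layer alone carries a drop of ~300 K,
# which is why the underlying metal can run much cooler than the gas stream.
```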

Structure Of Thermal Barrier Coatings

A thermal barrier coating consists of two layers (a duplex structure). The first layer, a metallic one, is called the bond coat; its function is to protect the base material against oxidation and corrosion. The second layer is an oxide ceramic top coat, which is attached to the superalloy by the metallic bond coat. The oxide commonly used is zirconium oxide (ZrO2) stabilized with yttrium oxide (Y2O3). The metallic bond coat is an oxidation/hot-corrosion-resistant layer, empirically represented as an MCrAlY alloy, where

M - a base metal such as Ni, Co or Fe,
Cr, Al - chromium and aluminium,
Y - a reactive metal such as yttrium.

Coatings are well established as an important underpinning technology for the manufacture of aeroengine and industrial turbines. Higher turbine combustion temperatures are desirable for increased engine efficiency and environmental reasons (reduction in pollutant emissions, particularly NOx), but place severe demands on the physical and chemical properties of the basic materials of fabrication.

In this context, MCrAlY coatings (where M = Co, Ni or Co/Ni) are widely applied to first and second stage turbine blades and nozzle guide vanes, where they may be used as corrosion resistant overlays or as bond-coats for use with thermal barrier coatings. In the first and second stage of a gas turbine, metal temperatures may exceed 850°C, and two predominant corrosion mechanisms have been identified:

* Accelerated high temperature oxidation (>950°C), where reactions between the coating and oxidants in the gaseous phase produce oxides on the coating surface as well as internal penetration of oxides/sulphides within the coating, depending on the level of gas-phase contaminants.

* Type I hot corrosion (850 - 950°C), where corrosion occurs through reaction with salts deposited from the vapour phase (from impurities in the fuel). Molten sulphates flux the oxide scales, and non-protective scales, extensive internal sulphidation and a depletion zone of scale-forming elements characterize the microstructure.

Thermal barrier coatings are highly advanced material systems applied to metallic surfaces, such as gas turbine or aero-engine parts, operating at elevated temperatures. These coatings serve to insulate metallic components from large and prolonged heat loads by utilizing thermally insulating materials which can sustain an appreciable temperature difference between the load-bearing alloys and the coating surface. In doing so, these coatings allow for higher operating temperatures while limiting the thermal exposure of structural components, extending part life by reducing oxidation and thermal fatigue. In fact, in conjunction with active film cooling, TBCs permit working-fluid temperatures higher than the melting point of the metal airfoil in some turbine applications.

Background

Thermal barrier coatings are typically ceramic composites based on zirconia, alumina, and titanium. The high hardness, wear resistance and good chemical stability of TBCs make them very desirable in cutting-tool applications. TBCs also provide good resistance against the corrosive, high-temperature environment of aircraft engines. Wear resistance can increase by 200 to 500% with the addition of a TBC to a tool. Chemical vapor deposition (CVD), physical vapor deposition (PVD), and electron beam physical vapor deposition (EBPVD) are the primary deposition methods for these coatings.


The adhesion quality of the TBC to the substrate is considered to be one of the limiting factors for use of these materials. Previous studies using pull-off methods to determine the adhesion do not sufficiently describe the mechanism of failure. In addition, a large difference in the thermal expansion coefficient between the interface and the substrate is a potential cause for spalling of the coating. This research will attempt to characterize the mechanical properties of the interface, in particular the interface fracture resistance and progressive debonding.

Cutting Tool Applications

The addition of a TBC is credited with increasing the cutting speeds of tools and with permitting deeper cuts. In particular, TBCs provide the excellent wear resistance that is necessary in the harsh tool environment. Wear mechanisms of cutting tools include crater, attrition, flank, and abrasive wear. It has been shown that TBCs can limit the crater and attrition wear processes.

Multi-layer coatings increase performance as the combination exhibits the best qualities of each coating. These coatings are also known to produce finer grain sizes and minimize chipping. The research will demonstrate the reliability of the combinations already in service.

The Hy-Wire Car


Definition

Cars are immensely complicated machines, but when you get down to it, they do an incredibly simple job. Most of the complex stuff in a car is dedicated to turning wheels, which grip the road to pull the car body and passengers along. The steering system tilts the wheels side to side to turn the car, and brake and acceleration systems control the speed of the wheels.

Given that the overall function of a car is so basic (it just needs to provide rotary motion to wheels), it seems a little strange that almost all cars have the same collection of complex devices crammed under the hood and the same general mass of mechanical and hydraulic linkages running throughout. Why do cars necessarily need a steering column, brake and acceleration pedals, a combustion engine, a catalytic converter and the rest of it?

According to many leading automotive engineers, they don't; and more to the point, in the near future, they won't. Most likely, a lot of us will be driving radically different cars within 20 years. And the difference won't just be under the hood -- owning and driving cars will change significantly, too.

In this article, we'll look at one interesting vision of the future, General Motors' remarkable concept car, the Hy-wire. GM may never actually sell the Hy-wire to the public, but it is certainly a good illustration of various ways cars might evolve in the near future.

Hy-Wire Basics

Two basic elements largely dictate car design today: the internal combustion engine and mechanical and hydraulic linkages. If you've ever looked under the hood of a car, you know an internal combustion engine requires a lot of additional equipment to function correctly. No matter what else they do with a car, designers always have to make room for this equipment.

The same goes for mechanical and hydraulic linkages. The basic idea of this system is that the driver maneuvers the various actuators in the car (the wheels, brakes, etc.) more or less directly, by manipulating driving controls connected to those actuators by shafts, gears and hydraulics. In a rack-and-pinion steering system, for example, turning the steering wheel rotates a shaft connected to a pinion gear, which moves a rack gear connected to the car's front wheels. In addition to restricting how the car is built, the linkage concept also dictates how we drive: The steering wheel, pedal and gear-shift system were all designed around the linkage idea.
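As a small illustration of the rack-and-pinion linkage just described, the sketch below converts a steering-wheel angle into rack travel using the pinion's pitch radius; the radius and angle are hypothetical example values, not specifications of any real steering system.

```python
# Sketch of rack-and-pinion kinematics: rack travel = pinion pitch radius
# multiplied by the pinion rotation angle (in radians). Example values only.
import math

def rack_travel_mm(steering_wheel_deg, pinion_pitch_radius_mm=8.0):
    """Linear rack displacement for a given steering wheel rotation,
    assuming the wheel and pinion turn together (no intermediate gearing)."""
    return pinion_pitch_radius_mm * math.radians(steering_wheel_deg)

print(f"{rack_travel_mm(90.0):.1f} mm of rack travel for a 90 degree turn")
```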

Ubiquitous computing




Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by lots of people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives. Alan Kay of Apple calls this "Third Paradigm" computing.

Ubiquitous computing (ubicomp) is a post-desktop model of human-computer interaction in which information processing has been thoroughly integrated into everyday objects and activities. As opposed to the desktop paradigm, in which a single user consciously engages a single device for a specialized purpose, someone "using" ubiquitous computing engages many computational devices and systems simultaneously, in the course of ordinary activities, and may not necessarily even be aware that they are doing so.

Core concept

At their core, all models of ubiquitous computing (also called pervasive computing) share a vision of small, inexpensive, robust networked processing devices, distributed at all scales throughout everyday life and generally turned to distinctly common-place ends. For example, a domestic ubiquitous computing environment might interconnect lighting and environmental controls with personal biometric monitors woven into clothing so that illumination and heating conditions in a room might be modulated, continuously and imperceptibly. Another common scenario posits refrigerators "aware" of their suitably-tagged contents, able to both plan a variety of menus from the food actually on hand, and warn users of stale or spoiled food.
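The refrigerator scenario above is easy to caricature in a few lines of code; the item names, tags and dates below are entirely invented, purely to illustrate the idea of an appliance reasoning about its tagged contents.

```python
# Toy sketch of the "aware refrigerator" scenario: items carry machine-readable
# tags with expiry dates, and the appliance warns about anything past its date.
from datetime import date

contents = [  # hypothetical tagged items
    {"name": "milk", "expires": date(2024, 5, 1)},
    {"name": "eggs", "expires": date(2024, 6, 15)},
]

def stale_items(items, today):
    """Return the names of items whose expiry date has already passed."""
    return [item["name"] for item in items if item["expires"] < today]

print("Warn about:", stale_items(contents, date(2024, 6, 1)))  # -> ['milk']
```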

Ubiquitous computing presents challenges across computer science: in systems design and engineering, in systems modelling, and in user interface design. Contemporary human-computer interaction models, whether command-line, menu-driven, or GUI-based, are inappropriate and inadequate to the ubiquitous case. This suggests that the "natural" interaction paradigm appropriate to a fully robust ubiquitous computing environment has yet to emerge - although there is also recognition in the field that in many ways we are already living in a ubicomp world. Contemporary devices that lend some support to this latter idea include mobile phones, digital audio players, radio-frequency identification tags, GPS, and interactive whiteboards.

In his book The Rise of the Network Society, Manuel Castells suggests that there is an ongoing shift from already-decentralised, stand-alone microcomputers and mainframes towards entirely pervasive computing. In his model of a pervasive computing system, Castells uses the example of the Internet as the start of a pervasive computing system. The logical progression from that paradigm is a system where that networking logic becomes applicable in every realm of daily activity, in every location and every context. Castells envisages a system where billions of miniature, ubiquitous inter-communication devices will be spread worldwide, "like pigment in the wall paint".

Tripwire

A tripwire is a passive triggering mechanism, usually (and originally) employed for military purposes, although its principle has been used since prehistory in methods of trapping game.

Typically, a wire or cord is attached to some device for detecting or reacting to physical movement. From this basic meaning, several extended and metaphorical uses of the term have developed. For example, the Berlin Brigade stationed in the divided city of Berlin during the Cold War was given the mission to be the "tripwire" for a Soviet incursion into West Germany.

Military usage may designate a tripwire as a wire attached to one or more mines — normally bounding mines and the fragmentation type — in order to increase their activation area. Alternatively, tripwires are frequently used in boobytraps, whereby a tug on the wire (or release of tension on it) will detonate the explosives.

Soldiers sometimes detect the presence of tripwires by spraying the area with Silly String. If the string falls to the ground there are no tripwires. If there is a tripwire, the string will be suspended in the air without pulling the wire. It is being used by U.S. troops in Iraq for this purpose.

10 Gigabit Ethernet



The 10 Gigabit Ethernet standard encompasses a number of different physical layer (PHY) standards. As of 2008 10 Gigabit Ethernet is still an emerging technology with only 1 million ports shipped in 2007, and it remains to be seen which of the PHYs will gain widespread commercial acceptance. A networking device may support different PHY types by means of pluggable PHY modules.

At the time the 10 Gigabit Ethernet standard was developed there was much interest in 10GbE as a WAN transport and this led to the introduction of the concept of the WAN PHY for 10GbE. This operates at a slightly slower data-rate than the LAN PHY and adds some extra encapsulation. The WAN PHY and LAN PHY are specified to share the same PMDs (Physical Medium Dependent) so 10GBASE-LR and 10GBASE-LW can use the same optics. In terms of number of ports shipped the LAN PHY greatly outsells the WAN PHY.

From its inception, 10G Ethernet was intended to retain backward compatibility and full interoperability with 10/100/1000M bit/sec Ethernet while adding a tenfold increase in performance.

In 10/100 and Gigabit Ethernet, the MAC layer works in a linear manner - data moves serially in and out of the MAC layer with all the starting and ending control messages (including clocking and synchronization) embedded inside the datastream. With 10G Ethernet, the process is much more complex.

To attain a 10G bit/sec bandwidth rate, the IEEE altered the way the MAC layer interprets signaling. Rather than producing a serial stream, the 10G Ethernet MAC layer operates on data in parallel. The transmit and receive paths each comprise four data lanes, and the datastream, broken down into bytes, is handled in round-robin fashion across the four lanes, numbered 0 to 3. On the transmit path, for example, the first byte aligns to Lane 0, the second byte to Lane 1, the third byte to Lane 2, the fourth byte to Lane 3, the fifth byte back to Lane 0, and so on.
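Here is a minimal sketch of the round-robin byte striping described above: bytes of an outgoing frame are distributed across four transmit lanes, byte 0 to Lane 0, byte 1 to Lane 1, and so on. It only illustrates the lane-numbering pattern, not the actual 10GBASE signaling.

```python
# Sketch: round-robin striping of a byte stream across four transmit lanes
# (lane = byte_index % 4). Illustrative only.

def stripe_across_lanes(frame_bytes, num_lanes=4):
    """Return a list of per-lane byte lists, filled in round-robin order."""
    lanes = [[] for _ in range(num_lanes)]
    for index, byte in enumerate(frame_bytes):
        lanes[index % num_lanes].append(byte)
    return lanes

frame = bytes(range(10))  # ten example bytes, 0x00 .. 0x09
for lane_id, lane in enumerate(stripe_across_lanes(frame)):
    print(f"Lane {lane_id}: {lane}")
# Lane 0 carries bytes 0, 4, 8; Lane 1 carries 1, 5, 9; and so on.
```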


Ethernet frames have clearly defined beginning and ending boundaries, or delimiters. These are marked by special characters and a 12-byte interpacket gap (IPG) that dictates the minimum amount of space or idle time between packets.

Because of the parallel nature of the 10G Ethernet MAC layer, it is impossible to predict the lane in which the ending byte of the previous datastream will fall. This makes finding the starting bit - a requirement for maintaining timing and synchronization - more difficult. The 802.3ae standard mandated an elegant solution: the "start control character," or very first byte of a new data frame, must always align on Lane 0. However, this solution complicates the way the MAC handles the IPG, directly affecting performance. To address this, the IEEE provided vendors with three options: 1) pad (increase), 2) shrink, or 3) average the "minimum" IPG.
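To see how the start-control-character rule interacts with the interpacket gap, the sketch below pads an idle gap until the next frame's first byte would land on Lane 0; the lane count, gap size and frame length are example values, and the logic is a simplification of the 802.3ae options rather than the standard's exact algorithm.

```python
# Sketch: grow the interpacket gap so the next start byte aligns on Lane 0.
# A byte at stream position p is carried on lane (p % 4). Simplified example.

LANES = 4
MIN_IPG = 12  # minimum interpacket gap, in bytes

def padded_ipg(end_position, min_ipg=MIN_IPG, lanes=LANES):
    """Smallest gap >= min_ipg that puts the next byte on Lane 0.

    end_position is the stream index of the byte *after* the previous frame.
    """
    ipg = min_ipg
    while (end_position + ipg) % lanes != 0:
        ipg += 1  # pad the gap until the start character would fall on Lane 0
    return ipg

# A frame whose next free stream position is 61 needs the gap padded from
# 12 to 15 bytes so that the following start character lands on Lane 0.
print(padded_ipg(61))  # -> 15
```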

Windows DNA

Windows DNA is short for Windows Distributed interNet Applications Architecture, a marketing name for a collection of Microsoft technologies that enable the Windows platform and the Internet to work together. Some of the principal technologies comprising DNA include ActiveX, Dynamic HTML (DHTML) and COM. Windows DNA has been largely superseded by the Microsoft .NET Framework, and Microsoft no longer uses the term.


Touch Screen


A touchscreen is a display which can detect the presence and location of a touch within the display area. The term generally refers to touch or contact to the display of the device by a finger or hand. Touchscreens can also sense other passive objects, such as a stylus. However, if the object sensed is active, as with a light pen, the term touchscreen is generally not applicable. The ability to interact directly with a display typically indicates the presence of a touchscreen.

Until recently, most consumer touchscreens could only sense one point of contact at a time, and few have had the capability to sense how hard one is touching. This is starting to change with the commercialisation of multi-touch technology.

The touchscreen has two main attributes. First, it enables one to interact with what is displayed directly on the screen, where it is displayed, rather than indirectly with a mouse or touchpad. Secondly, it lets one do so without requiring any intermediate device, again, such as a stylus that needs to be held in the hand. Such displays can be attached to computers or, as terminals, to networks. They also play a prominent role in the design of digital appliances such as the personal digital assistant (PDA), satellite navigation devices and mobile phones.

Touchscreens emerged from academic and corporate research labs in the second half of the 1960s. One of the first places where they gained some visibility was in the terminal of a computer-assisted learning system that came out in 1972 as part of the PLATO project. They have subsequently become familiar in kiosk systems, such as in retail and tourist settings, on point-of-sale systems, on ATMs and on PDAs, where a stylus is sometimes used to manipulate the GUI and to enter data. The popularity of smart phones, PDAs, portable game consoles and many types of information appliances is driving the demand for, and the acceptance of, touchscreens.

The HP-150 from 1983 was probably the world's earliest commercial touchscreen computer. It did not actually have a touchscreen in the strict sense; instead, it had a 9" Sony CRT surrounded by infrared transmitters and receivers which detected the position of any non-transparent object on the screen.

Touchscreens are popular in heavy industry and in other situations, such as museum displays or room automation, where keyboard and mouse systems do not allow a satisfactory, intuitive, rapid, or accurate interaction by the user with the display's content.

Historically, the touchscreen sensor and its accompanying controller-based firmware have been made available by a wide array of after-market system integrators and not by display, chip or motherboard manufacturers. With time, however, display manufacturers and System on Chip (SoC) manufacturers worldwide have acknowledged the trend toward acceptance of touchscreens as a highly desirable user interface component and have begun to integrate touchscreen functionality into the fundamental design of their products. In the portable consumer electronics space, the touchscreen has evolved from the electronic organizer to the Apple iPhone, Samsung Eternity, LG Vu, and BlackBerry Storm.

A basic touchscreen has three main components: a touch sensor, a controller, and a software driver. The touchscreen is an input device, so it needs to be combined with a display and a PC or other device to make a complete touch input system.

1. Touch Sensor
A touch screen sensor is a clear glass panel with a touch responsive surface. The touch sensor/panel is placed over a display screen so that the responsive area of the panel covers the viewable area of the video screen. There are several different touch sensor technologies on the market today, each using a different method to detect touch input. The sensor generally has an electrical current or signal going through it and touching the screen causes a voltage or signal change. This voltage change is used to determine the location of the touch to the screen.
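As a simplified illustration of how a voltage change can be turned into a touch location, the sketch below maps two measured voltages to screen coordinates for a hypothetical 4-wire resistive panel; the reference voltage, readings and screen size are invented, and real controllers do considerably more filtering and calibration.

```python
# Sketch: converting measured sensor voltages into screen coordinates for a
# hypothetical 4-wire resistive touch panel. Values and scaling are invented.

V_REF = 3.3  # assumed reference voltage across the resistive layer, in volts

def touch_position(v_x, v_y, width_px=800, height_px=480, v_ref=V_REF):
    """Map the voltage measured on each axis to a pixel coordinate,
    assuming voltage varies linearly with touch position."""
    x = int((v_x / v_ref) * (width_px - 1))
    y = int((v_y / v_ref) * (height_px - 1))
    return x, y

print(touch_position(1.65, 0.825))  # roughly mid-screen in x, a quarter down in y
```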

2. Controller
The controller is a small PC card that connects between the touch sensor and the PC. It takes information from the touch sensor and translates it into information that the PC can understand. The controller is usually installed inside the monitor for integrated monitors, or it is housed in a plastic case for external touch add-ons/overlays. The controller determines what type of interface/connection you will need on the PC. Integrated touch monitors will have an extra cable connection on the back for the touchscreen. Controllers are available that can connect to a serial/COM port (PC) or to a USB port (PC or Macintosh). Specialized controllers are also available that work with DVD players and other devices.

3. Software Driver
The driver is a software update for the PC system that allows the touchscreen and computer to work together. It tells the computer's operating system how to interpret the touch event information that is sent from the controller. Most touch screen drivers today are a mouse-emulation type driver. This makes touching the screen the same as clicking your mouse at the same location on the screen. This allows the touchscreen to work with existing software and allows new applications to be developed without the need for touchscreen specific programming. Some equipment such as thin client terminals, DVD players, and specialized computer systems either do not use software drivers or they have their own built-in touch screen driver.
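The mouse-emulation behaviour described above can be sketched as a small event-translation loop in which touch events from the controller become pointer moves and left-button clicks. The event names and the injection function are hypothetical stand-ins, not any particular operating system's driver API.

```python
# Sketch of a mouse-emulation touch driver: controller touch events are
# translated into pointer moves and left-button clicks. inject_mouse_event
# is a hypothetical placeholder for an OS-specific call.

def inject_mouse_event(kind, x=None, y=None):
    # Stand-in for the real OS call that would move the cursor or click.
    print("mouse event:", kind, (x, y) if x is not None else "")

def handle_touch_event(event):
    """Translate one controller touch event into equivalent mouse events."""
    kind, x, y = event
    if kind == "down":       # finger touches the screen
        inject_mouse_event("move", x, y)
        inject_mouse_event("left_button_down")
    elif kind == "move":     # finger drags across the screen
        inject_mouse_event("move", x, y)
    elif kind == "up":       # finger lifts off
        inject_mouse_event("left_button_up")

for ev in [("down", 120, 80), ("move", 150, 90), ("up", 150, 90)]:
    handle_touch_event(ev)
```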

Touchscreen Add-ons and Integrated Touchscreen Monitors

We offer two main types of touchscreen products: touchscreen add-ons and integrated touchscreen monitors. Touchscreen add-ons are touchscreen panels that fit over an existing computer monitor. Integrated touchscreen monitors are computer displays that have the touchscreen built in. Both product types work in the same way, basically as an input device like a mouse or trackpad.

Touchscreens as Input Devices

All of the touchscreens that we offer basically work like a mouse. Once the software driver for the touchscreen is installed, the touchscreen emulates mouse functions. Touching the screen is basically the same as clicking your mouse at the same point on the screen. When you touch the touchscreen, the mouse cursor will move to that point and make a mouse click. You can tap the screen twice to perform a double-click, and you can also drag your finger across the touchscreen to perform drag-and-drops. The touchscreens will normally emulate left mouse clicks. Through software, you can also switch the touchscreen to perform right mouse clicks instead.

What Are Touchscreens Used For?

The touch screen is one of the easiest PC interfaces to use, making it the interface of choice for a wide variety of applications. Here are a few examples of how touch input systems are being used today:

Public Information Displays
Information kiosks, tourism displays, trade show displays, and other electronic displays are used by many people that have little or no computing experience. The user-friendly touch screen interface can be less intimidating and easier to use than other input devices, especially for novice users. A touchscreen can help make your information more easily accessible by allowing users to navigate your presentation by simply touching the display screen.

Retail and Restaurant Systems
Time is money, especially in a fast paced retail or restaurant environment. Touchscreen systems are easy to use so employees can get work done faster, and training time can be reduced for new employees. And because input is done right on the screen, valuable counter space can be saved. Touchscreens can be used in cash registers, order entry stations, seating and reservation systems, and more.

Customer Self-Service
In today's fast-paced world, waiting in line is one of the things that has yet to speed up. Self-service touch screen terminals can be used to improve customer service at busy stores, fast-service restaurants, transportation hubs, and more. Customers can quickly place their own orders or check themselves in or out, saving them time and decreasing wait times for other customers. Automated teller machines (ATMs) and airline e-ticket terminals are examples of self-service stations that can benefit from touchscreen input.


Control and Automation Systems
The touch screen interface is useful in systems ranging from industrial process control to home automation. By integrating the input device with the display, valuable workspace can be saved. And with a graphical interface, operators can monitor and control complex operations in real-time by simply touching the screen.

Computer Based Training
Because the touch screen interface is more user-friendly than other input devices, overall training time for computer novices, and therefore training expense, can be reduced. It can also help to make learning more fun and interactive, which can lead to a more beneficial training experience for both students and educators.


Assistive Technology
The touch screen interface can be beneficial to those that have difficulty using other input devices such as a mouse or keyboard. When used in conjunction with software such as on-screen keyboards, or other assistive technology, they can help make computing resources more available to people that have difficulty using computers.


And many more uses...
The touch screen interface is being used in a wide variety of applications to improve human-computer interaction. Other applications include digital jukeboxes, computerized gaming, student registration systems, multimedia software, financial and scientific applications, and more.

Unified Modeling Language (UML)

Unified Modeling Language (UML) is a standardized general-purpose modeling language in the field of software engineering. UML includes a set of graphical notation techniques to create abstract models of specific systems.

The Unified Modeling Language (UML) is a graphical language for visualizing, specifying and constructing the artifacts of a software-intensive system. The Unified Modeling Language offers a standard way to write a system's blueprints, including conceptual things such as business processes and system functions as well as concrete things such as programming language statements, database schemas, and reusable software components. UML combines the best practice from data modeling concepts such as entity relationship diagrams, business modeling (work flow), object modeling and component modeling. It can be used with all processes, throughout the software development life cycle, and across different implementation technologies.

Modeling

It is very important to distinguish between the UML model and the set of diagrams of a system. A diagram is a partial graphical representation of a system's model. The model also contains a "semantic backplane" — documentation such as written use cases that drive the model elements and diagrams.

UML diagrams represent three different views of a system model.

* Functional requirements view: emphasizes the functional requirements of the system from the user's point of view; includes use case diagrams.
* Static structural view: emphasizes the static structure of the system using objects, attributes, operations and relationships; includes class diagrams and composite structure diagrams (a small code sketch follows this list).
* Dynamic behavior view: emphasizes the dynamic behavior of the system by showing collaborations among objects and changes to the internal states of objects; includes sequence diagrams, activity diagrams and state machine diagrams.
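To connect the static structural view to something concrete, here is a small sketch of how a class box with attributes, operations and an association on a class diagram might correspond to code; the Customer and Order classes are invented examples, not part of the UML specification.

```python
# Sketch: a tiny class-diagram fragment rendered as code. A class box lists
# attributes and operations; the Customer -> Order link is an association.
# The classes themselves are invented purely for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:
    order_id: int               # attribute
    amount: float               # attribute

    def total(self) -> float:   # operation
        return self.amount

@dataclass
class Customer:
    name: str                                            # attribute
    orders: List[Order] = field(default_factory=list)    # association, multiplicity 0..*

    def place_order(self, order: Order) -> None:         # operation
        self.orders.append(order)

alice = Customer("Alice")
alice.place_order(Order(order_id=1, amount=25.0))
print(len(alice.orders))  # -> 1
```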

UML models can be exchanged among UML tools by using the XMI interchange format.


Cyberterrorism

What Is Cyberterrorism?

There have been several stumbling blocks to creating a clear and consistent definition of the term "cyberterrorism." First, as just noted, much of the discussion of cyberterrorism has been conducted in the popular media, where journalists typically strive for drama and sensation rather than for good operational definitions of new terms. Second, it has been especially common when dealing with computers to coin new words simply by placing the word "cyber," "computer," or "information" before another word. Thus, an entire arsenal of words—cybercrime, infowar, netwar, cyberterrorism, cyberharassment, virtual warfare, digital terrorism, cybertactics, computer warfare, cyberattack, and cyber-break-ins—is used to describe what some military and political strategists describe as the "new terrorism" of our times.

Fortunately, some efforts have been made to introduce greater semantic precision. Most notably, Dorothy Denning, a professor of computer science, has put forward an admirably unambiguous definition in numerous articles and in her testimony on the subject before the House Armed Services Committee in May 2000:
Cyberterrorism is the convergence of cyberspace and terrorism. It refers to unlawful attacks and threats of attacks against computers, networks and the information stored therein when done to intimidate or coerce a government or its people in furtherance of political or social objectives. Further, to qualify as cyberterrorism, an attack should result in violence against persons or property, or at least cause enough harm to generate fear. Attacks that lead to death or bodily injury, explosions, or severe economic loss would be examples. Serious attacks against critical infrastructures could be acts of cyberterrorism, depending on their impact. Attacks that disrupt nonessential services or that are mainly a costly nuisance would not.

Summary

* The potential threat posed by cyberterrorism has provoked considerable alarm. Numerous security experts, politicians, and others have publicized the danger of cyberterrorists hacking into government and private computer systems and crippling the military, financial, and service sectors of advanced economies.
* The potential threat is, indeed, very alarming. And yet, despite all the gloomy predictions, no single instance of real cyberterrorism has been recorded. This raises the question: just how real is the threat?
* Psychological, political, and economic forces have combined to promote the fear of cyberterrorism. From a psychological perspective, two of the greatest fears of modern time are combined in the term "cyberterrorism." The fear of random, violent victimization blends well with the distrust and outright fear of computer technology.
* Even before 9/11, a number of exercises identified apparent vulnerabilities in the computer networks of the U.S. military and energy sectors. After 9/11, the security and terrorism discourse soon featured cyberterrorism prominently, promoted by interested actors from the political, business, and security circles.
* Cyberterrorism is, to be sure, an attractive option for modern terrorists, who value its anonymity, its potential to inflict massive damage, its psychological impact, and its media appeal.
* Cyberfears have, however, been exaggerated. Cyberattacks on critical components of the national infrastructure are not uncommon, but they have not been conducted by terrorists and have not sought to inflict the kind of damage that would qualify as cyberterrorism.
* Nuclear weapons and other sensitive military systems, as well as the computer systems of the CIA and FBI, are "air-gapped," making them inaccessible to outside hackers. Systems in the private sector tend to be less well protected, but they are far from defenseless, and nightmarish tales of their vulnerability tend to be largely apocryphal.
* But although the fear of cyberterrorism may be manipulated and exaggerated, we can neither deny nor ignore it. Paradoxically, success in the "war on terror" is likely to make terrorists turn increasingly to unconventional weapons, such as cyberterrorism. And as a new, more computer-savvy generation of terrorists comes of age, the danger seems set to increase.


Introduction

An image from al Qaeda's website features a "cracked" or broken computer which indicates a potential cyberattack.

The threat posed by cyberterrorism has grabbed the attention of the mass media, the security community, and the information technology (IT) industry. Journalists, politicians, and experts in a variety of fields have popularized a scenario in which sophisticated cyberterrorists electronically break into computers that control dams or air traffic control systems, wreaking havoc and endangering not only millions of lives but national security itself. And yet, despite all the gloomy predictions of a cyber-generated doomsday, no single instance of real cyberterrorism has been recorded.

Just how real is the threat that cyberterrorism poses? Because most critical infrastructure in Western societies is networked through computers, the potential threat from cyberterrorism is, to be sure, very alarming. Hackers, although not motivated by the same goals that inspire terrorists, have demonstrated that individuals can gain access to sensitive information and to the operation of crucial services. Terrorists, at least in theory, could thus follow the hackers' lead and then, having broken into government and private computer systems, cripple or at least disable the military, financial, and service sectors of advanced economies. The growing dependence of our societies on information technology has created a new form of vulnerability, giving terrorists the chance to approach targets that would otherwise be utterly unassailable, such as national defense systems and air traffic control systems. The more technologically developed a country is, the more vulnerable it becomes to cyberattacks against its infrastructure.

Concern about the potential danger posed by cyberterrorism is thus well founded. That does not mean, however, that all the fears that have been voiced in the media, in Congress, and in other public forums are rational and reasonable. Some fears are simply unjustified, while others are highly exaggerated. In addition, the distinction between the potential and the actual damage inflicted by cyberterrorists has too often been ignored, and the relatively benign activities of most hackers have been conflated with the specter of pure cyberterrorism.

This report examines the reality of the cyberterrorism threat, present and future. It begins by outlining why cyberterrorism angst has gripped so many people, defines what qualifies as "cyberterrorism" and what does not, and charts cyberterrorism's appeal for terrorists. The report then looks at the evidence both for and against Western society's vulnerability to cyberattacks, drawing on a variety of recent studies and publications to illustrate the kinds of fears that have been expressed and to assess whether we need to be so concerned. The conclusion looks to the future and argues that we must remain alert to real dangers while not becoming victims of overblown fears.

Cyberterrorism Angst

The roots of the notion of cyberterrorism can be traced back to the early 1990s, when the rapid growth in Internet use and the debate on the emerging "information society" sparked several studies on the potential risks faced by the highly networked, high-tech-dependent United States. As early as 1990, the National Academy of Sciences began a report on computer security with the words, "We are at risk. Increasingly, America depends on computers. . . . Tomorrow's terrorist may be able to do more damage with a keyboard than with a bomb." At the same time, the prototypical term "electronic Pearl Harbor" was coined, linking the threat of a computer attack to an American historical trauma.

Psychological, political, and economic forces have combined to promote the fear of cyberterrorism. From a psychological perspective, two of the greatest fears of modern time are combined in the term "cyberterrorism." The fear of random, violent victimization blends well with the distrust and outright fear of computer technology. An unknown threat is perceived as more threatening than a known threat. Although cyberterrorism does not entail a direct threat of violence, its psychological impact on anxious societies can be as powerful as the effect of terrorist bombs. Moreover, the most destructive forces working against an understanding of the actual threat of cyberterrorism are a fear of the unknown and a lack of information or, worse, too much misinformation.

After 9/11, the security and terrorism discourse soon featured cyberterrorism prominently. This was understandable, given that more nightmarish attacks were expected and that cyberterrorism seemed to offer al Qaeda opportunities to inflict enormous damage. But there was also a political dimension to the new focus on cyberterrorism. Debates about national security, including the security of cyberspace, always attract political actors with agendas that extend beyond the specific issue at hand—and the debate over cyberterrorism was no exception to this pattern. For instance, Yonah Alexander, a terrorism researcher at the Potomac Institute—a think tank with close links to the Pentagon—announced in December 2001 the existence of an "Iraq Net." This network supposedly consisted of more than one hundred websites set up across the world by Iraq since the mid-nineties to launch denial-of-service (DoS) attacks against U.S. companies (such attacks render computer systems inaccessible, unusable, or inoperable). "Saddam Hussein would not hesitate to use the cyber tool he has. . . . It is not a question of if but when. The entire United States is the front line," Alexander claimed. (See Ralf Bendrath's article "The American Cyber-Angst and the Real World," published in 2003 in Bombs and Bandwidth, edited by Robert Latham.) Whatever the intentions of its author, such a statement was clearly likely to support arguments then being made for an aggressive U.S. policy toward Iraq. No evidence of an Iraq Net has yet come to light.

Combating cyberterrorism has become not only a highly politicized issue but also an economically rewarding one. An entire industry has emerged to grapple with the threat of cyberterrorism: think tanks have launched elaborate projects and issued alarming white papers on the subject, experts have testified to cyberterrorism's dangers before Congress, and private companies have hastily deployed security consultants and software designed to protect public and private targets. Following the 9/11 attacks, the federal government requested $4.5 billion for infrastructure security, and the FBI now boasts more than one thousand "cyber investigators."

Before September 11, 2001, George W. Bush, then a presidential candidate, warned that "American forces are overused and underfunded precisely when they are confronted by a host of new threats and challenges—the spread of weapons of mass destruction, the rise of cyberterrorism, the proliferation of missile technology." After the 9/11 attacks, President Bush created the Office of Cyberspace Security in the White House and appointed his former counterterrorism coordinator, Richard Clarke, to head it. The warnings came now from the president, the vice president, security advisors, and government officials: "Terrorists can sit at one computer connected to one network and can create worldwide havoc," cautioned Tom Ridge, director of the Department of Homeland Security, in a representative observation in April 2003. "[They] don't necessarily need a bomb or explosives to cripple a sector of the economy or shut down a power grid." These warnings certainly had a powerful impact on the media, on the public, and on the administration. For instance, a survey of 725 cities conducted in 2003 by the National League of Cities found that cyberterrorism ranked alongside biological and chemical weapons at the top of a list of city officials' fears.

The mass media have added their voice to the fearful chorus, running scary front-page headlines such as the following, which appeared in the Washington Post in June 2003: "Cyber-Attacks by Al Qaeda Feared, Terrorists at Threshold of Using Internet as Tool of Bloodshed, Experts Say." Cyberterrorism, the media have discovered, makes for eye-catching, dramatic copy. Screenwriters and novelists have likewise seen the dramatic potential, with movies such as the 1995 James Bond feature, Goldeneye, and 2002's Code Hunter and novels such as Tom Clancy and Steve R. Pieczenik's Netforce popularizing a wide range of cyberterrorist scenarios.

The net effect of all this attention has been to create a climate in which instances of hacking into government websites, online thefts of proprietary data from companies, and outbreaks of new computer viruses are all likely to be labeled by the media as suspected cases of "cyberterrorism." Indeed, the term has been improperly used and overused to such an extent that, if we are to have any hope of reaching a clear understanding of the danger posed by cyberterrorism, we must begin by defining it with some precision.

The Appeal of Cyberterrorism for Terrorists

Cyberterrorism is an attractive option for modern terrorists for several reasons.

* First, it is cheaper than traditional terrorist methods. All that the terrorist needs is a personal computer and an online connection. Terrorists do not need to buy weapons such as guns and explosives; instead, they can create and deliver computer viruses through a telephone line, a cable, or a wireless connection.
* Second, cyberterrorism is more anonymous than traditional terrorist methods. Like many Internet surfers, terrorists use online nicknames—"screen names"—or log on to a website as an unidentified "guest user," making it very hard for security agencies and police forces to track down the terrorists' real identity. And in cyberspace there are no physical barriers such as checkpoints to navigate, no borders to cross, and no customs agents to outsmart.
* Third, the variety and number of targets are enormous. The cyberterrorist could target the computers and computer networks of governments, individuals, public utilities, private airlines, and so forth. The sheer number and complexity of potential targets guarantee that terrorists can find weaknesses and vulnerabilities to exploit. Several studies have shown that critical infrastructures, such as electric power grids and emergency services, are vulnerable to a cyberterrorist attack because the infrastructures and the computer systems that run them are highly complex, making it effectively impossible to eliminate all weaknesses.
* Fourth, cyberterrorism can be conducted remotely, a feature that is especially appealing to terrorists. Cyberterrorism requires less physical training, psychological investment, risk of mortality, and travel than conventional forms of terrorism, making it easier for terrorist organizations to recruit and retain followers.
* Fifth, as the I LOVE YOU virus showed, cyberterrorism has the potential to affect directly a larger number of people than traditional terrorist methods, thereby generating greater media coverage, which is ultimately what terrorists want.

A Growing Sense of Vulnerability

Black Ice: The Invisible Threat of Cyber-Terror, a book published in 2003 and written by Computerworld journalist and former intelligence officer Dan Verton, describes the 1997 exercise code-named "Eligible Receiver," conducted by the National Security Agency (NSA). (The following account draws from "Black Ice," Computerworld, August 13, 2003.) The exercise began when NSA officials instructed a "Red Team" of thirty-five hackers to attempt to hack into and disrupt U.S. national security systems. They were told to play the part of hackers hired by the North Korean intelligence service, and their primary target was to be the U.S. Pacific Command in Hawaii. They were allowed to penetrate any Pentagon network but were prohibited from breaking any U.S. laws, and they could only use hacking software that could be downloaded freely from the Internet. They started mapping networks and obtaining passwords gained through "brute-force cracking" (a trial-and-error method of decoding encrypted data such as passwords or encryption keys by trying all possible combinations). Often they used simpler tactics such as calling somebody on the telephone, pretending to be a technician or high-ranking official, and asking for the password. The hackers managed to gain access to dozens of critical Pentagon computer systems. Once they entered the systems, they could easily create user accounts, delete existing accounts, reformat hard drives, scramble stored data, or shut systems down. They broke the network defenses with relative ease and did so without being traced or identified by the authorities.

The results shocked the organizers. In the first place, the Red Team had shown that it was possible to break into the U.S. Pacific military's command-and-control system and, potentially, cripple it. In the second place, the NSA officials who examined the experiment's results found that much of the private-sector infrastructure in the United States, such as the telecommunications and electric power grids, could easily be invaded and abused in the same way.

The vulnerability of the energy industry is at the heart of Black Ice. Verton argues that America's energy sector would be the first domino to fall in a strategic cyberterrorist attack against the United States. The book explores in frightening detail how the impact of such an attack could rival, or even exceed, the consequences of a more traditional, physical attack. Verton claims that during any given year, an average large utility company in the United States experiences about 1 million cyberintrusions. Data collected by Riptech, Inc.—a Virginia-based company specializing in the security of online information and financial systems—on cyberattacks during the six months following the 9/11 attacks showed that companies in the energy industry suffered intrusions at twice the rate of other industries, with the number of severe or critical attacks requiring immediate intervention averaging 12.5 per company.

Deregulation and the increased focus on profitability have made utilities and other companies move more and more of their operations to the Internet in search of greater efficiency and lower costs. Verton argues that the energy industry and many other sectors have become potential targets for various cyberdisruptions by creating Internet links (both physical and wireless) between their networks and supervisory control and data acquisition (SCADA) systems. These SCADA systems manage the flow of electricity and natural gas and control various industrial systems and facilities, including chemical processing plants, water purification and water delivery operations, wastewater management facilities, and a host of manufacturing firms. A terrorist's ability to control, disrupt, or alter the command and monitoring functions performed by these systems could threaten regional and possibly national security.

According to Symantec, one of the world's corporate leaders in the field of cybersecurity, new vulnerabilities to a cyberattack are being discovered all the time. The company reported that the number of "software holes" (software security flaws that allow malicious hackers to exploit the system) grew by 80 percent in 2002. Still, Symantec claimed that no single cyberterrorist attack was recorded (applying the definition that such an attack must originate in a country on the State Department's terror watch list). This may reflect the fact that terrorists do not yet have the required know-how. Alternatively, it may illustrate that hackers are not sympathetic to the goals of terrorist organizations—should the two groups join forces, however, the results could be devastating.

Equally alarming is the prospect of terrorists themselves designing computer software for government agencies. Remarkably, as Denning describes in "Is Cyber Terror Next?" at least one instance of such a situation is known to have occurred:

In March 2000, Japan's Metropolitan Police Department reported that a software system they had procured to track 150 police vehicles, including unmarked cars, had been developed by the Aum Shinryko cult, the same group that gassed the Tokyo subway in 1995, killing 12 people and injuring 6,000 more. At the time of the discovery, the cult had received classified tracking data on 115 vehicles. Further, the cult had developed software for at least 80 Japanese firms and 10 government agencies. They had worked as subcontractors to other firms, making it almost impossible for the organizations to know who was developing the software. As subcontractors, the cult could have installed Trojan horses to launch or facilitate cyber terrorist attacks at a later date.

Despite stepped-up security measures in the wake of 9/11, a survey of almost four hundred IT professionals conducted for the Business Software Alliance during June 2002 revealed widespread concern. (See Robyn Greenspan, "Cyberterrorism Concerns IT Pros," Internetnews.com, August 16, 2002.) About half (49 percent) of the IT professionals felt that an attack is likely, and more than half (55 percent) said the risk of a major cyberattack on the United States has increased since 9/11. The figure jumped to 59 percent among those respondents who are in charge of their company's computer and Internet security. Seventy-two percent agreed with the statement "there is a gap between the threat of a major cyberattack and the government's ability to defend against it," and the agreement rate rose to 84 percent among respondents who are most knowledgeable about security. Those surveyed were concerned about attacks not only on the government but also on private targets. Almost three-quarters (74 percent) believed that national financial institutions such as major national banks would be likely targets within the next year, and around two-thirds believed that attacks were likely to be launched within the next twelve months against the computer systems that run communications networks (e.g., telephones and the Internet), transportation infrastructure (e.g., air traffic control computer systems), and utilities (e.g., water stations, dams, and power plants).

A study released in December 2003 (and reported in the Washington Post on January 31, 2004) appeared to confirm the IT professionals' skepticism about the ability of the government to defend itself against cyberattack. Conducted by the House Government Reform Subcommittee on Technology, the study examined computer security in federal agencies over the course of a year and awarded grades. Scores were based on numerous criteria, including how well an agency trained its employees in security and the extent to which it met established security procedures such as limiting access to privileged data and eliminating easily guessed passwords. More than half the federal agencies surveyed received a grade of D or F. The Department of Homeland Security, which has a division devoted to monitoring cybersecurity, received the lowest overall score of the twenty-four agencies surveyed. Also earning an F was the Justice Department, the agency charged with investigating and prosecuting cases of hacking and other forms of cybercrime. Thirteen agencies improved their scores slightly compared with the previous year, nudging the overall government grade from an F up to a D. Commenting on these results, Rep. Adam H. Putnam (R-Fl.), chairman of the House Government Reform Subcommittee on Technology, declared that "the threat of cyberattack is real. . . . The damage that could be inflicted both in terms of financial loss and, potentially, loss of life is considerable."

Such studies, together with the enormous media interest in the subject, have fueled popular fears about cyberterrorism. A study by the Pew Internet and American Life Project found in 2003 that nearly half of the one thousand Americans surveyed were worried that terrorists could launch attacks through the networks connecting home computers and power utilities. The Pew study found that 11 percent of respondents were "very worried" and 38 percent were "somewhat worried" about an attack launched through computer networks. The survey was taken in early August, before the major blackout struck the Northeast and before several damaging new viruses afflicted computers throughout the country.

Is the Cyberterror Threat Exaggerated?

Amid all the dire warnings and alarming statistics that the subject of cyberterrorism generates, it is important to remember one simple statistic: so far, there has been no recorded instance of a terrorist cyberattack on U.S. public facilities, transportation systems, nuclear power plants, power grids, or other key components of the national infrastructure. Cyberattacks are common, but they have not been conducted by terrorists and they have not sought to inflict the kind of damage that would qualify them as cyberterrorism.

When U.S. troops recovered al Qaeda laptops in Afghanistan, officials were surprised to find its members more technologically adept than previously believed. They discovered structural and engineering software, electronic models of a dam, and information on computerized water systems, nuclear power plants, and U.S. and European stadiums. But the evidence did not suggest that al Qaeda operatives were planning cyberattacks, only that they were using the Internet to communicate and coordinate physical attacks. Neither al Qaeda nor any other terrorist organization appears to have tried to stage a serious cyberattack. For now, the most damaging attacks and intrusions, experts say, are typically carried out either by disgruntled corporate insiders intent on embezzlement or sabotage or by individual hackers—typically young and male—seeking thrills and notoriety. According to a report issued in 2002 by IBM Global Security Analysis Lab, 90 percent of hackers are amateurs with limited technical proficiency, 9 percent are more skilled at gaining unauthorized access but do not damage the files they read, and only 1 percent are highly skilled and intent on copying files or damaging programs and systems. Most hackers, it should be noted, concentrate on writing programs that expose security flaws in computer software, mainly in the operating systems produced by Microsoft. Their efforts in this direction have sometimes embarrassed corporations but have also been responsible for alerting the public and security professionals to major security flaws in software. Moreover, although there are hackers with the ability to damage systems, disrupt e-commerce, and force websites offline, the vast majority of hackers do not have the necessary skills and knowledge. The ones who do generally do not seek to wreak havoc. Douglas Thomas, a professor at the University of Southern California, spent seven years studying computer hackers in an effort to understand better who they are and what motivates them. Thomas interviewed hundreds of hackers and explored their "literature." In testimony on July 24, 2002, before the House Subcommittee on Government Efficiency, Financial Management and Intergovernmental Relations, Thomas argued that "with the vast majority of hackers, I would say 99 percent of them, the risk [of cyberterrorism] is negligible for the simple reason that those hackers do not have the skill or ability to organize or execute an attack that would be anything more than a minor inconvenience." His judgment was echoed in Assessing the Risks of Cyberterrorism, Cyber War, and Other Cyber Threats, a 2002 report for the Center for Strategic and International Studies, written by Jim Lewis, a sixteen-year veteran of the State and Commerce Departments. "The idea that hackers are going to bring the nation to its knees is too far-fetched a scenario to be taken seriously," Lewis argued. "Nations are more robust than the early analysts of cyberterrorism and cyberwarfare give them credit for. Infrastructure systems [are] more flexible and responsive in restoring service than the early analysts realized, in part because they have to deal with failure on a routine basis."

Many computer security specialists believe it is virtually impossible to use the Internet to inflict death on a large scale and scoff at the notion that terrorists would bother trying. The resilience of computer systems to attack, they point out, is no accident but rather the result of significant investments of time, money, and expertise. Nuclear weapons and other sensitive military systems enjoy the most basic form of Internet security. They are "air-gapped," meaning that they are not physically connected to the Internet and are therefore inaccessible to outside hackers. The Defense Department has been particularly vigilant in protecting key systems by isolating them from the Internet and even from the Pentagon's internal computer network. All new software must be submitted to the National Security Agency for security testing.

The 9/11 hijackings understandably led to an outcry that airliners are particularly susceptible to cyberterrorist attacks. In 2002, for instance, Senator Charles Schumer (D-N.Y.) described "the absolute havoc and devastation that would result if cyberterrorists suddenly shut down our air traffic control system, with thousands of planes in mid-flight." In fact, cybersecurity experts give some of their highest marks to the Federal Aviation Administration, which separates its administrative and air traffic control systems. And there is a reason the 9/11 hijackers used box-cutters instead of keyboards: it is impossible to hijack a plane remotely, which eliminates the possibility of a high-tech 9/11 scenario in which planes are used as weapons. Another source of concern is terrorist infiltration of the intelligence agencies. But here, too, the risk is described by Internet experts as slim. The CIA's classified computers are also air-gapped, as is the FBI's entire computer system.

That leaves less-well-protected secondary targets such as power grids, oil pipelines, and dams that do not present opportunities as nightmarish as do nuclear weapons systems but that nonetheless might be used to inflict other forms of mass destruction. Because most of these systems are in the private sector, they tend to be less secure than government systems. In addition, as noted above, companies increasingly use the Internet to manage supervisory control and data acquisition (SCADA) systems that control such processes as regulating the flow of oil in pipelines and the level of water in dams. To illustrate the supposed ease with which terrorists could seize control of a dam, a story in the Washington Post in June 2003 on al Qaeda cyberterrorism related an anecdote about a twelve-year-old who hacked into the SCADA system at Arizona's Theodore Roosevelt Dam in 1998 and was, according to the article, within mere keystrokes of unleashing millions of gallons of water upon helpless downstream communities. However, a subsequent investigation by the tech-news site CNet.com revealed the tale to be largely apocryphal; investigators concluded that the hacker could not have gained control of the dam and that no lives or property were ever at risk.

To assess the potential threat of cyberterrorism, experts such as Denning suggest that two questions be asked: Are there targets that are vulnerable to cyberattacks? And are there actors with the capability and motivation to carry out such attacks? The answer to the first question is yes: critical infrastructure systems are complex and therefore bound to contain weaknesses that might be exploited, and even systems that seem "hardened" to outside manipulation might be accessed by insiders, acting alone or in concert with terrorists, to cause considerable harm. But what of the second question?

Few people besides a company's own employees possess the specific technical know-how required to run a specialized SCADA system. The most commonly cited example of SCADA exploitation bears this out. In 2000, an Australian man used an Internet connection to release a million gallons of raw sewage along Queensland's Sunshine Coast after being turned down for a government job. When police arrested him, they discovered that he had worked for the company that designed the sewage treatment plant's control software.

It is possible, of course, that such disgruntled employees might be recruited by terrorist groups, but even if the terrorists did enlist inside help, the degree of damage they could cause would still be limited. As Joshua Green argues in "The Myth of Cyberterrorism" (which appeared in the Washington Monthly in November 2002), the employees of companies that handle power grids, oil and gas utilities, and communications are well rehearsed in dealing with the fallout from hurricanes, floods, tornadoes, and other natural disasters. They are equally adept at containing and remedying problems that stem from human action.

In August 1999, the Center for the Study of Terrorism and Irregular Warfare at the Naval Postgraduate School (NPS) in Monterey, California, issued a report titled Cyber-terror: Prospects and Implications. The report concluded that terrorists generally lack the wherewithal and human capital needed to mount attacks that involve more than annoying but relatively harmless hacks. The study examined five types of terrorist groups: religious, New Age, ethnonationalist separatist, revolutionary, and far-right extremists. Of these, only the religious groups were adjudged likely to seek the capacity to inflict massive damage. Hacker groups, the study determined, are psychologically and organizationally ill suited to cyberterrorism, and any massive disruption of the information infrastructure would run counter to their self-interest.

In October 2000, the NPS group issued a second report, this one examining the decision-making process by which substate groups engaged in armed resistance develop new operational methods, including cyberterrorism. The report concluded that while substate groups may find cyberterror attractive as a nonlethal weapon, terrorists have not yet integrated information technology into their strategy and tactics and that significant barriers between hackers and terrorists may prevent their integration into one group.

Another illustration of the limited likelihood of terrorists launching a highly damaging cyberattack comes from a simulation sponsored by the U.S. Naval War College. The college contracted with a research group to simulate a massive cyberattack on the nation's information infrastructure. Government hackers and security analysts gathered in July 2002 in Newport, R.I., for a war game dubbed "Digital Pearl Harbor." The results were far from devastating: the hackers failed to crash the Internet, although they did cause serious sporadic damage. According to a CNet.com report on the exercise published in August 2002, officials concluded that terrorists hoping to stage such an attack "would require a syndicate with significant resources, including $200 million, country-level intelligence and five years of preparation time."

Cyberterrorism Today and Tomorrow

It seems fair to say that the current threat posed by cyberterrorism has been exaggerated. No single instance of cyberterrorism has yet been recorded; U.S. defense and intelligence computer systems are air-gapped and thus isolated from the Internet; the systems run by private companies are more vulnerable to attack but also more resilient than is often supposed; the vast majority of cyberattacks are launched by hackers with few, if any, political goals and no desire to cause the mayhem and carnage of which terrorists dream. So, then, why has so much concern been expressed over a relatively minor threat?

The reasons are many. First, as Denning has observed, "cyberterrorism and cyberattacks are sexy right now. . . . [Cyberterrorism is] novel, original, it captures people's imagination." Second, the mass media frequently fail to distinguish between hacking and cyberterrorism and exaggerate the threat of the latter by reasoning from false analogies such as the following: "If a sixteen-year-old could do this, then what could a well-funded terrorist group do?" Ignorance is a third factor. Green argues that cyberterrorism merges two spheres—terrorism and technology—that many people, including most lawmakers and senior administration officials, do not fully understand and therefore tend to fear. Moreover, some groups are eager to exploit this ignorance. Numerous technology companies, still reeling from the collapse of the high-tech bubble, have sought to attract federal research grants by recasting themselves as innovators in computer security and thus vital contributors to national security. Law enforcement and security consultants are likewise highly motivated to have us believe that the threat to our nation's security is severe. A fourth reason is that some politicians, whether out of genuine conviction or out of a desire to stoke public anxiety about terrorism in order to advance their own agendas, have played the role of prophets of doom. And a fifth factor is ambiguity about the very meaning of "cyberterrorism," which has confused the public and given rise to countless myths.

Verton argues that "al Qaeda [has] shown itself to have an incessant appetite for modern technology" and provides numerous citations from bin Laden and other al Qaeda leaders to show their recognition of this new cyberweapon. In the wake of the 9/11 attacks, bin Laden reportedly gave a statement to an editor of an Arab newspaper claiming that "hundreds of Muslim scientists were with him who would use their knowledge . . . ranging from computers to electronics against the infidels." Sheikh Omar Bakri Muhammad, a supporter of bin Laden and often the conduit for his messages to the Western world, declared in an interview with Verton, "I would advise those who doubt al Qaeda's interest in cyber-weapons to take Osama bin Laden very seriously. The third letter from Osama bin Laden . . . was clearly addressing using the technology in order to destroy the economy of the capitalist states."

"While bin Laden may have his finger on the trigger, his grandchildren may have their fingers on the computer mouse," remarked Frank Cilluffo of the Office of Homeland Security in a statement that has been widely cited. Future terrorists may indeed see greater potential for cyberterrorism than do the terrorists of today. Furthermore, as Denning argues, the next generation of terrorists is now growing up in a digital world, one in which hacking tools are sure to become more powerful, simpler to use, and easier to access. Cyberterrorism may also become more attractive as the real and virtual worlds become more closely coupled. For instance, a terrorist group might simultaneously explode a bomb at a train station and launch a cyberattack on the communications infrastructure, thus magnifying the impact of the event. Unless these systems are carefully secured, conducting an online operation that physically harms someone may be as easy tomorrow as penetrating a website is today.

Paradoxically, success in the "war on terror" is likely to make terrorists turn increasingly to unconventional weapons such as cyberterrorism. The challenge before us is to assess what needs to be done to address this ambiguous but potential threat of cyberterrorism—but to do so without inflating its real significance and manipulating the fear it inspires. Denning and other terrorism experts conclude that, at least for now, hijacked vehicles, truck bombs, and biological weapons seem to pose a greater threat than does cyberterrorism. However, just as the events of 9/11 caught the world by surprise, so could a major cyberassault. The threat of cyberterrorism may be exaggerated and manipulated, but we can neither deny it nor dare to ignore it.

About the Report

The threat posed by cyberterrorism has grabbed headlines and the attention of politicians, security experts, and the public. But just how real is the threat? Could terrorists cripple critical military, financial, and service computer systems? This report charts the rise of cyberangst and examines the evidence cited by those who predict imminent catastrophe. Many of these fears, the report contends, are exaggerated: not a single case of cyberterrorism has yet been recorded, hackers are regularly mistaken for terrorists, and cyberdefenses are more robust than is commonly supposed. Even so, the potential threat is undeniable and seems likely to increase, making it all the more important to address the danger without inflating or manipulating it.

Gabriel Weimann is a senior fellow at the United States Institute of Peace and professor of communication at the University of Haifa, Israel. He has written widely on modern terrorism, political campaigns, and the mass media. This report complements a previous report, www.terror.net, issued in March 2004, which examined the variety of uses to which terrorists routinely put the Internet. Both reports distill some of the findings from an ongoing, six-year study of terrorism and the Internet. A book based on that larger study is to be published in 2006.

The views expressed in this report do not necessarily reflect views of the United States Institute of Peace, which does not advocate specific policy positions.

Online Resources

* Terror on the Internet: The New Arena, the New Challenges, USIP Press Books, 2006
* www.terror.net: How Modern Terrorism Uses the Internet, by Gabriel Weimann, Special Report 116, March 2004
* Terrorism in the Horn of Africa, Special Report 113, January 2004
* Global Terrorism after the Iraq War, Special Report 111, October 2003
* The Diplomacy of Counterterrorism: Lessons Learned, Ignored, and Disputed, Special Report 80, January 2002
* USIP Topics: Terrorism
* On the Web: Terrorism/Counter-Terrorism
* On the Web: U.S. National Security and Foreign Policy

Cyberterrorism is a controversial term. Some authors choose a very narrow definition, covering only disruption attacks against information systems carried out by known terrorist organizations for the primary purpose of creating alarm and panic. By this narrow definition, it is difficult to identify any instances of cyberterrorism. Cyberterrorism can also be defined much more generally, for example, as "the premeditated use of disruptive activities, or the threat thereof, against computers and/or networks, with the intention to cause harm or further social, ideological, religious, political or similar objectives."

As the Internet becomes more pervasive in all areas of human endeavor, individuals or groups can use the anonymity afforded by cyberspace to threaten citizens, specific groups (e.g., groups defined by ethnicity or belief), communities, and entire countries, without the inherent threat of capture, injury, or death to the attacker that being physically present would bring.

As the Internet continues to expand, and computer systems continue to be assigned more responsibility while becoming increasingly complex and interdependent, sabotage or terrorism via cyberspace may become a more serious threat.

Cyberterrorism can have a serious, large-scale impact on significant numbers of people. It can greatly weaken a country's economy, stripping it of resources and leaving it more vulnerable to military attack.

Cyberterror can also affect Internet-based businesses. Like brick-and-mortar retailers and service providers, most websites that produce income (whether through advertising, the sale of goods, or paid services) stand to lose money in the event of downtime caused by cybercriminals.

As Internet-based businesses grow in economic importance to countries, what would ordinarily be treated as cybercrime takes on a more political character and therefore becomes more closely associated with "terror."