Extreme Programming

Extreme Programming (XP) is a deliberate and disciplined approach to software development. About six years old, it has already been proven at many companies of all sizes and industries worldwide. XP succeeds because it stresses customer satisfaction: the methodology is designed to deliver the software your customer needs when it is needed, and it empowers software developers to respond confidently to changing customer requirements, even late in the life cycle.

This methodology also emphasizes teamwork. Managers, customers, and developers are all part of a team dedicated to delivering quality software, and XP implements a simple yet effective way to enable groupware-style development. XP improves a software project in four essential ways: communication, simplicity, feedback, and courage. XP programmers communicate with their customers and fellow programmers. They keep their design simple and clean. They get feedback by testing their software starting on day one. They deliver the system to the customers as early as possible and implement changes as suggested. On this foundation, XP programmers are able to respond courageously to changing requirements and technology.

XP is different. It is a lot like a jigsaw puzzle: there are many small pieces, and individually the pieces make no sense, but when combined a complete picture emerges. This is a significant departure from traditional software development methods and ushers in a change in the way we program.

If one or two developers have become bottlenecks because they own the core classes in the system and must make all the changes, try collective code ownership. You will also need unit tests; with those in place, let everyone make changes to the core classes whenever they need to. You can continue this way, adding the remaining practices as you can, until no problems are left. The first practice you add will seem easy; you are solving a large problem with a little extra effort. The second might seem easy too. But at some point between having a few XP rules and having all of them, it will take some persistence to make it work. It might seem tempting to abandon the new methodology and go back to what is familiar and comfortable, but continuing pays off in the end: your problems will be solved and your project will be under control.
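
Unit tests are what make collective code ownership safe in practice: anyone may change a core class, and the tests catch regressions immediately. Below is a minimal sketch using Python's standard unittest module; the Account class is a hypothetical stand-in for any shared core class, not something prescribed by XP itself.

import unittest

# Hypothetical shared core class; under collective ownership, any
# developer may change it, relying on the tests below for safety.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

class AccountTest(unittest.TestCase):
    def test_deposit_increases_balance(self):
        acct = Account()
        acct.deposit(10)
        self.assertEqual(acct.balance, 10)

    def test_rejects_non_positive_deposit(self):
        with self.assertRaises(ValueError):
            Account().deposit(0)

if __name__ == "__main__":
    unittest.main()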

Mobile IP

While Internet technologies largely succeed in overcoming the barriers of time and distance, existing Internet technologies have yet to fully accommodate increasing mobile computer usage. A promising technology for eliminating this barrier is Mobile IP. The emerging 3G mobile networks are set to make a huge difference to the international business community. 3G networks will provide sufficient bandwidth to run most business computer applications while still providing a reasonable user experience. However, 3G networks are not based on only one standard, but on a set of radio technology standards such as cdma2000, EDGE and WCDMA. It is easy to foresee that the mobile user will from time to time also want to connect to fixed broadband networks, wireless LANs and combinations of new technologies such as Bluetooth attached to, for example, cable TV and DSL access points.

In this light, a common macro mobility management framework is required in order to allow mobile users to roam between different access networks with little or no manual intervention. (Micro mobility issues, such as radio-specific mobility enhancements, are supposed to be handled within the specific radio technology.) The IETF has created the Mobile IP standard for this purpose.

Mobile IP differs from other efforts at mobility management in that it is not tied to one specific access technology. In earlier mobile cellular standards, such as GSM, the radio resource and mobility management was integrated vertically into one system. The same is true for mobile packet data standards such as CDPD (Cellular Digital Packet Data) and the internal packet data mobility protocol (GTP/MAP) of GPRS/UMTS networks. This vertical mobility management property is also inherent in the increasingly popular 802.11 wireless LAN standard.

Mobile IP can be seen as the least common mobility denominator, providing seamless macro mobility solutions among the diversity of accesses. Mobile IP defines a Home Agent as an anchor point with which the mobile client always has a relationship, and a Foreign Agent, which acts as the local tunnel endpoint at the access network the mobile client is visiting.
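
To make the Home Agent / Foreign Agent relationship concrete, here is a toy sketch of the registration-and-tunnelling idea in Python. The class names and packet format are invented for illustration; a real Mobile IP implementation uses the registration messages and IP-in-IP encapsulation defined by the IETF standard.

# Illustrative sketch of Mobile IP's anchor/tunnel idea; classes and
# packet format are hypothetical, not a protocol implementation.
class HomeAgent:
    def __init__(self):
        self.bindings = {}  # home address -> current care-of address

    def register(self, home_addr, care_of_addr):
        # Binding created when the mobile registers via a foreign agent.
        self.bindings[home_addr] = care_of_addr

    def forward(self, packet):
        # Traffic sent to the home address is tunnelled to the mobile's
        # care-of address (IP-in-IP encapsulation in the real protocol).
        care_of = self.bindings.get(packet["dst"])
        if care_of is None:
            return packet               # mobile is at home
        return {"dst": care_of, "payload": packet}

class ForeignAgent:
    def __init__(self, address):
        self.address = address          # advertised care-of address

    def deliver(self, tunnelled):
        # Decapsulate and hand the inner packet to the visiting mobile.
        return tunnelled["payload"]

ha = HomeAgent()
fa = ForeignAgent("198.51.100.7")
ha.register("192.0.2.10", fa.address)
tunnelled = ha.forward({"dst": "192.0.2.10", "data": "hello"})
print(fa.deliver(tunnelled))            # the original packet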

Motes

Sensor networks have been applied to various research areas at a number of academic institutions. In particular, environmental monitoring has received a lot of attention, with major projects at UCB, UCLA and other places. In addition, commercial pilot projects are starting to emerge; a number of start-up companies active in this space provide mote hardware as well as application software and back-end infrastructure solutions.

The University of California at Berkeley, in conjunction with the local Intel Lab, is conducting an environmental monitoring project using mote-based sensor networks on Great Duck Island off the coast of Maine. This endeavor includes the deployment of tens of motes and several gateways in a fairly harsh outdoor environment. The motes are equipped with a variety of environmental sensors (temperature, humidity, light, atmospheric pressure, motion, etc.). They form a self-organizing multi-hop sensor network that is linked via gateways to a base station on the island. There, the data is collected and transmitted via a satellite link to the Internet. This setup enables researchers to continuously monitor an endangered bird species on the island without constant perturbation of its habitat, gathering detailed data on the bird population and its environment around the clock.

The Intel Mote has been designed after a careful study of the application space for sensor networks. We have interviewed a number of researchers in this space and collected their feedback on desired improvements over currently available mote designs. A list of requests that have been repeatedly mentioned includes the following key items:
o Increased CPU processing power. In particular, applications such as acoustic sensing and localization require additional computational resources.
o Increased main memory size. Similarly, sensor network applications are beginning to stretch the limits of existing hardware designs. This need is amplified by the desire to perform localized computation on the motes.
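
The self-organizing multi-hop behaviour mentioned above can be illustrated with a few lines of Python. The topology, field names and routing-tree structure here are invented for the example; real motes build and repair such trees dynamically as radio conditions change.

# Toy multi-hop forwarding: each mote relays readings toward the
# gateway along a routing tree. Topology and readings are invented.
parent = {
    "mote3": "mote2",
    "mote2": "mote1",
    "mote1": "gateway",
}

def route_to_gateway(origin, reading):
    hop, path = origin, [origin]
    while hop != "gateway":
        hop = parent[hop]               # next hop toward the gateway
        path.append(hop)
    return {"origin": origin, "reading": reading, "path": path}

print(route_to_gateway("mote3", {"temp_c": 11.2, "humidity": 0.83}))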

Param 10000

Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985-1990). Cray himself never used the word "supercomputer"; a little-remembered fact is that he recognized only the word "computer." In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash." Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as IBM and HP, who purchased many of the 1980s companies to gain their experience, although Cray Inc. still specializes in building supercomputers.

SD2000 uses the PARAM 10000, which employs up to 4 UltraSPARC-II processors. PARAM systems can be extended into a cluster supercomputer; a clustered system with 1200 processors can deliver a peak performance of up to 1 TFLOPS. Even though the PARAM 10000 system is not ranked within the top 500 supercomputers, it has the potential to gain a high rank. It uses a variation of MPI developed at C-DAC. No performance data is available, although one would presume it will not differ greatly from that of other UltraSPARC-II based systems using MPI. Because SD2000 is a commercial product, it is difficult to gather detailed data about its algorithms and performance.
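
Since PARAM systems are programmed against MPI, a minimal message-passing example gives the flavour of the model. The sketch below uses the open-source mpi4py binding purely as an illustration; C-DAC's MPI variant is a different implementation of the same programming interface.

# Minimal MPI point-to-point exchange; run with: mpirun -n 2 python demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send({"msg": "hello from rank 0"}, dest=1, tag=0)
elif rank == 1:
    data = comm.recv(source=0, tag=0)
    print("rank 1 received:", data)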

PON Topologies

There are several topologies suitable for the access network: tree, ring, or bus. A PON can also be deployed in a redundant configuration, as a double ring or double tree; or redundancy may be added only to part of the PON, say the trunk of the tree. For the rest of this article we will focus our attention on the tree topology; however, most of the conclusions made are equally relevant to other topologies.

All transmissions in a PON are performed between an Optical Line Terminal (OLT) and Optical Network Units (ONUs). Therefore, in the downstream direction (from OLT to ONUs) a PON is a point-to-multipoint (P2MP) network, and in the upstream direction it is a multipoint-to-point (MP2P) network. The OLT resides in the local exchange (central office), connecting the optical access network to an IP, ATM, or SONET backbone. The ONU is located either at the curb (FTTC solution) or at the end-user location (FTTH, FTTB solutions), and provides broadband voice, data, and video services.
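
The asymmetry between the two directions can be sketched in a few lines. Downstream, the splitter copies every frame to every ONU, which filters by address; upstream, the OLT must grant non-overlapping transmission opportunities so that ONU bursts do not collide. The names and the simple slot scheme below are invented for illustration.

# Toy model of PON traffic flow (P2MP downstream, MP2P upstream).
ONUS = ["onu1", "onu2", "onu3"]

def downstream(frame):
    # Each ONU receives every frame but keeps only those addressed to
    # it or to the broadcast address.
    return [onu for onu in ONUS if frame["dst"] in (onu, "all")]

def upstream_grants():
    # The OLT assigns each ONU its own slot in the upstream cycle.
    return {onu: slot for slot, onu in enumerate(ONUS)}

print(downstream({"dst": "onu2", "payload": "video"}))  # ['onu2']
print(upstream_grants())  # {'onu1': 0, 'onu2': 1, 'onu3': 2}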

The advantages of using PONs in subscriber access networks are numerous.
1. PONs allow for long reach between central offices and customer premises, operating at distances of over 20 km.
2. PONs minimize fiber deployment in both the local exchange office and the local loop.
3. PONs provide higher bandwidth due to deeper fiber penetration, offering gigabit-per-second solutions.
4. Operating in the downstream as a broadcast network, PONs allow for video broadcasting as either IP video or analog video using a separate wavelength overlay.
5. PONs eliminate the need to install active multiplexers at splitting locations, thus relieving network operators of the need to power and maintain equipment in the field.
6. Being optically transparent end to end, PONs allow upgrades to higher bit rates or additional wavelengths.

Structured Cabling

As today's communication networks become more complex - as more users share peripherals, more mission-critical tasks are accomplished over networks, and the need for faster access to information increases - a good foundation for these networks becomes increasingly important. The first step toward the adaptability, flexibility and longevity required of today's networks begins with structured cabling: the foundation of any information system. It is vital that communications cabling be able to support a variety of applications and last for the life of a network. If that cabling is part of a well-designed structured cabling system, it can allow for easy administration of moves, adds and changes, and smooth migration to new network topologies. On the other hand, "worry-about-it-when-you-need-to" systems make moves, adds and changes a hassle and make new network topologies too difficult to implement. Network problems occur more often, and are more difficult and time-consuming to troubleshoot.

When communication systems fail, employees and assets sit idle, causing a loss of revenues and profits. Even worse, the perceptions of customers and suppliers can be adversely affected.

The purpose of this white paper is to present the advantages of using a standards-based structured cabling system for a business enterprise. The paper will cover a brief historical perspective of structured cabling, a review of the current standards, media types and performance criteria, and system design and installation recommendations. Particular attention will be given to the ANSI/TIA/EIA-568-A standard and the horizontal cabling subsystem in that standard.

The Evolution of Structured Cabling
In the early 1980s, when computers were first linked together in order to exchange information, many different cabling designs were used. Some companies built their systems to run over coaxial cables. Others thought that twinaxial or other cables would work best. With these cables, certain parameters had to be followed in order to make the system work.

Surface Computer

Surface Computer users can finger-paint digitally, resize and interact with photos and videos, and even "digitize" some real-life tasks, such as splitting up a restaurant bill or researching wines. The Surface Computer can recognize some real-world objects and create onscreen versions to interact with.

Microsoft has just announced its surface computing technology, a project that has been kept under wraps for five years. Using a giant table-like display, users are able to draw, interact with media, and use another new technology called domino tagging, in which a real-life object placed on the computer's surface is identified and becomes an on-screen object. Picture a surface that can recognize physical objects from a paintbrush to a cell phone and allows hands-on, direct control of content such as photos, music and maps.

Today at the Wall Street Journal's D: All Things Digital conference, Microsoft Corp. CEO Steve Ballmer will unveil Microsoft Surface™, the first in a new category of surface computing products from Microsoft that breaks down traditional barriers between people and technology. Surface turns an ordinary tabletop into a vibrant, dynamic surface that provides effortless interaction with all forms of digital content through natural gestures, touch and physical objects. Beginning at the end of this year, consumers will be able to interact with Surface in hotels, retail establishments, restaurants and public entertainment venues.

The intuitive user interface works without a traditional mouse or keyboard, allowing people to interact with content and information on their own or collaboratively with friends and family, just as in the real world. Surface is a 30-inch display in a table-like form factor that small groups can use at the same time. From digital finger painting to a virtual concierge, Surface brings natural interaction to the digital world in a new and exciting way.

Ubiquitous Networking

Mobile computing devices have changed the way we look at computing. Laptops and personal digital assistants (PDAs) have unchained us from our desktop computers. A group of researchers at AT&T Laboratories Cambridge is preparing to put a new spin on mobile computing: in addition to taking the hardware with you, they are designing a ubiquitous networking system that allows your applications to follow you wherever you go.

By using a small radio transmitter and a building full of special sensors, your desktop can be anywhere you are, not just at your workstation. At the press of a button, the computer closest to you in any room becomes your computer for as long as you need it. In addition to computers, the Cambridge researchers have designed the system to work for other devices, including phones and digital cameras. As we move closer to intelligent computers, they may begin to follow our every move.

The essence of mobile computing is that a user's applications are available, in a suitably adapted form, wherever that user goes. Within a richly equipped networked environment such as a modern office, the user need not carry any equipment around; the user interfaces of the applications themselves can follow the user as they move, using the equipment and networking resources available. We call these applications follow-me applications.

Typically, a context-aware application needs to know the location of users and equipment, and the capabilities of the equipment and networking infrastructure. In this paper we describe a sensor-driven, or sentient, computing platform that collects environmental data and presents that data in a form suitable for context-aware applications.
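
At its core, the follow-me idea reduces to locating the user and migrating the session to the nearest capable device. The sketch below is a deliberately simplified illustration; the device coordinates and the migration step are hypothetical, whereas the real platform drives this from fine-grained sensor data.

# Toy "follow-me" desktop: pick the display nearest the user's sensed
# position and move the session there. All names are invented.
import math

DEVICES = {
    "office-pc": (0.0, 0.0),
    "meeting-room-pc": (12.0, 4.0),
    "lab-pc": (25.0, 10.0),
}

def nearest_device(user_pos):
    return min(DEVICES, key=lambda d: math.dist(DEVICES[d], user_pos))

def follow_me(session, user_pos):
    target = nearest_device(user_pos)
    print(f"migrating session {session!r} to {target}")

follow_me("alice-desktop", (11.0, 5.0))  # -> meeting-room-pc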

Unlicensed Mobile Access

During the past year, mobile and integrated fixed/mobile operators announced an increasing number of fixed-mobile convergence initiatives, many of which are materializing in 2006. The majority of these initiatives are focused around UMA, the first standardized technology enabling seamless handover between mobile radio networks and WLANs. Clearly, in one way or another, UMA is a key agenda item for many operators.

Operators are looking at UMA to address the indoor voice market (i.e. to accelerate or control fixed-to-mobile substitution) as well as to enhance the performance of mobile services indoors. Furthermore, these operators are looking at UMA as a means to fend off the growing threat from new Voice-over-IP (VoIP) operators. However, when evaluating a new 3GPP standard like UMA, many operators ask themselves how well it fits with other network evolution initiatives, including:
o UMTS
o Soft MSCs
o IMS Data Services
o I-WLAN
o IMS Telephony
This whitepaper aims to clarify the position of UMA in relation to these other strategic initiatives. For a more comprehensive introduction to the UMA opportunity, refer to "The UMA Opportunity," available on the Kineto web site (www.kineto.com).

Mobile Network Reference Model

To best understand the role UMA plays in mobile network evolution, it is helpful to first introduce a reference model for today's mobile networks. Figure 1 provides a simplified model for the majority of 3GPP-based mobile networks currently in deployment.

Virtual LAN Technology

The network backbone is built from special-purpose devices and computers that simply transfer messages from one network to another. Before we look deeper into the topic of virtual LANs, let us review the basic devices used in the network backbone. They are

1. Bridges.
2. Switches.
3. Routers.
4. Gateways.
5. Hubs.

BRIDGES :- Bridges operate at the data link layer. They connect two LAN segments that use the same data link and network protocol.

SWITCHES :- Like bridges, switches operate at the data link layer. Switches connect two or more computers or network segments that use the same data link and network protocol.

ROUTERS :- Routers operate at the network layer. Routers connect two or more LANs that use the same or different data link protocols but the same network protocol; they provide both the basic system interconnection and the necessary translation between the data link protocols in both directions.

HUBS :- Hubs are physical layer devices that are really just multiple-port repeaters. When an electronic digital signal is received on a port, the signal is reamplified or regenerated and forwarded out all segments except the one from which it was received.
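
What a VLAN adds on top of these devices is a logical partition of the physical network: a VLAN-aware switch floods a frame only to ports belonging to the same VLAN as the port it arrived on. The port-to-VLAN map below is invented for illustration.

# Toy VLAN-aware flooding decision for a broadcast/unknown frame.
VLAN_OF_PORT = {1: 10, 2: 10, 3: 20, 4: 20}   # port -> VLAN id

def flood_ports(ingress_port):
    vlan = VLAN_OF_PORT[ingress_port]
    return [p for p, v in VLAN_OF_PORT.items()
            if v == vlan and p != ingress_port]

print(flood_ports(1))   # [2] - only the other port in VLAN 10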

Windows DNA

Today, the convergence of Internet and Windows computing technologies promises exciting new opportunities for savvy businesses: to create a new generation of computing solutions that dramatically improve the responsiveness of the organization, to more effectively use the Internet and the Web to reach customers directly, and to better connect people to information any time or any place. When a technology system delivers these results, it is called a Digital Nervous System. A Digital Nervous System relies on connected PCs and integrated software to make the flow of information rapid and accurate. It helps everyone act faster and make more informed decisions. It prepares companies to react to unplanned events. It allows people to focus on business, not technology.

Creating a true Digital Nervous System takes commitment, time, and imagination. It is not something every company will have the determination to do. But those who do will have a distinct advantage over those who don't. In creating a Digital Nervous System, organizations face many challenges: How can they take advantage of new Internet technologies while preserving existing investments in people, applications, and data? How can they build modern, scalable computing solutions that are dynamic and flexible to change? How can they lower the overall cost of computing while making complex computing environments work?

X-Internet

As the Internet expands, two new waves of innovation -- comprising what Forrester calls the X Internet -- are already eclipsing the Web: an executable Net that greatly improves the online experience and an extended Net that connects the real world.

An executable Net that supplants today's Web will move code to user PCs and cause devices to captivate consumers in ways static pages never could. Today's news, sports, and weather offered on static Web pages is essentially the same content presented on paper, making the online experience more like reading in a dusty library than participating in a new medium.

The extended Internet is reshaping technology's role in business through Internet devices and applications which sense, analyze, and control data, thereby providing more real-time information than ever before about what is going on in the real world.

The X Internet will not be a new invention, but rather the evolution of today's Internet of static Web pages and cumbersome e-commerce mechanisms into a Net that relies on executable software code to deliver more interactive experiences.

Executable Internet applications use downloaded code like Java and XML to enhance the user experience with pop-up menus, pick lists, graphics and simple calculations, according to a recent Forrester report entitled "The X Internet."

An easy way to understand how the X Internet will work is to imagine that a band wants to distribute a song over the Net. Rather than worrying about which audio player people want to use, an executable file will deliver the song and the audio player at the same time.

"With an executable, you can distribute movies the same way you distribute songs," Forrester research director and report author Carl Howe told NewsFactor Network. "It just makes the models work better."

Building the X-Net

The report also employs an example of a person building a house. With today's Internet, a builder would have to find, then try to follow, an article detailing how to frame a window. When it was time to install the bathroom, the would-be plumber would then have to find an article dealing with that topic.

Executable Internet applications would demonstrate to a builder, step-by-step, how to frame a window. When it came time to install the bathroom, the carpenter would simply be replaced by a plumber.

"Instead of reading a book, you have a conversation about the work you're trying to do," Howe wrote.

Forrester is also predicting the widespread adoption of another X Internet -- but this X stands for "extended." The extended Internet will include the widespread adoption of real-world appliances, like air conditioners or car tires, that communicate with owners or manufacturers via the Internet.

The extended Internet will come with the inclusion of cheap sensors in thousands of everyday products, an era that will begin around 2005, Forrester predicts.

Many people think the Internet and the Web are the same thing. They're not. The Internet is a piece of wire that goes from me to you and from me to 300 million other people in the world. The Web is software that I put on my end of the wire, and you put on your end, allowing us to exchange information. While the Internet (the wire) evolves gradually, the software on the wire can change quickly. Before the Web, other software was clamped onto the Internet: WAIS, Gopher, and Usenet were the dominant systems, and there were companies doing commerce using those software models. I call this the "executable Internet," or X Internet for short.

X Internet offers several important advantages over the Web: 1) it rides Moore's Law - the wide availability of cheap, powerful, low-real-estate processing; 2) it conserves ever-dear bandwidth - once the connection is made, only a small number of bits need be exchanged, unlike the Web, where lots of pages are shuttled out to the client; and 3) X Internet will be far more peer-to-peer, unlike the server-centric Web.

This scenario could be marred by two threats: viruses and lack of standards. Once executables start to move fluidly through the Net, viruses will have perfect conditions to propagate. Standards, or rather the lack thereof, will block the quick arrival of X Internet. I can't see Microsoft, Sun, IBM, or other traditionalists setting the standards. The Web-killer's design will emerge from pure research, academe, or open source - as did the Web.

What It Means -- No. 1: Web-centric companies get stuck holding the bag. They will wake up one day with hundreds of millions of dollars of legacy code on their hands. Yes, their brands will remain intact, but their technology will suddenly be very outmoded. Yahoo!, eBay, and AOL will find themselves competing with a new wave of commerce players that market, deliver, and service using the superior technology of X Internet. One of the upstarts will be Amazon.

Wireless Networked Digital Devices

The proliferation of mobile computing devices, including laptops, personal digital assistants (PDAs), and wearable computers, has created a demand for wireless personal area networks (PANs). PANs allow proximal devices to share information and resources. The mobile nature of these devices places unique requirements on PANs, such as low power consumption, frequent make-and-break connections, resource discovery and utilization, and international regulations. This paper examines wireless technologies appropriate for PANs and reviews promising research in resource discovery and service utilization.

We recognize the need for PDAs to be as manageable as mobile phones, as well as the restrictive screen and input area of the mobile phone - hence the need for a new breed of computing devices to fit the bill for a PAN. Such devices become especially relevant for mobile users such as surgeons and jet plane mechanics, who need both hands free and thus would need "wearable" computers.

This paper first examines the technology used for wireless communication. Putting a radio in a digital device provides physical connectivity; however, to make the device useful in a larger context, a networking infrastructure is required. The infrastructure allows devices to share data, applications, and resources such as printers, mass storage, and computation power. Defining a radio standard is a tractable problem, as demonstrated by the solutions presented in this paper. Designing a network infrastructure is much more complex. The second half of the paper describes several research projects that address components of the networking infrastructure.

Finally, there are questions that go beyond the scope of this paper, yet will have the greatest effect on the direction, capabilities, and future of this paradigm. Will these networking strategies be incompatible, like the various cellular phone systems in the United States, or will there be a standard upon which manufacturers and developers agree, like the GSM (Global System for Mobile communication) cellular phones in Europe? Communication demands compatibility, which is challenging in a heterogeneous marketplace. Yet by establishing and implementing compatible systems, manufacturers can offer more powerful and useful devices to their customers. Since these are, after all, digital devices living in a programmed digital world, compatibility and interoperation are possible.

Chameleon chip

Chameleon Systems Inc. of San Jose, California is one of a new breed of reconfigurable-processor makers. The company's first product, the Reconfigurable Communications Processor (RCP), is a reconfigurable processor whose design environment allows customers to convert their algorithms into hardware configurations on the fly.

Advantages

1. Early and fast designs
2. Enabling Field upgrades
3. Creating product differentiation for suppliers
4. Creating flexible & adaptive products
5. Reducing power
6. Reducing manufacturing costs
7. Increasing bandwidths

Disadvantages

1. Inertia – engineers are slow to change
2. RCP designs require a comprehensive set of tools
3. A learning curve for designers unfamiliar with reconfigurable logic

Applications

1. Wireless base stations
2. Packetized voice (VoIP)
3. Digital Subscriber Line (DSL)
4. Software Defined Radio (SDR)


Global System for Mobiles

A GSM network is composed of several functional entities, whose functions and interfaces are specified. Figure 1 shows the layout of a generic GSM network. The GSM network can be divided into three broad parts. The Mobile Station is carried by the subscriber. The Base Station Subsystem controls the radio link with the Mobile Station. The Network Subsystem, the main part of which is the Mobile services Switching Center (MSC), performs the switching of calls between the mobile users, and between mobile and fixed network users. The MSC also handles the mobility management operations. Not shown is the Operations and Maintenance Center, which oversees the proper operation and setup of the network. The Mobile Station and the Base Station Subsystem communicate across the Um interface, also known as the air interface or radio link. The Base Station Subsystem communicates with the Mobile services Switching Center across the A interface.

TETRA

Terrestrial Trunked Radio (TETRA), originally Trans-European Trunked Radio, is a specialist professional mobile radio and walkie-talkie standard used by police, fire departments, ambulance services and the military. Its main advantages over technologies such as GSM are:
the much lower frequency used, which permits very high levels of geographic coverage with a smaller number of transmitters, cutting infrastructure cost.
fast call set-up - a one to many group call is generally set-up within 0.5 seconds compared with the many seconds that are required for a GSM network.
the fact that its infrastructure can be separated from that of the public cellphone network, and made substantially more diverse and resilient by the fact that base stations can be some distance from the area served.
unlike most cellular technologies, TETRA networks typically provide a number of fall-back modes such as the ability for a base station to process local calls in the absence of the rest of the network, and for 'direct mode' where mobiles can continue to share channels directly if the infrastructure fails or is out-of-reach.
gateway mode - where a single mobile with connection to the network can act as a relay for other nearby mobiles that are out of contact with the infrastructure.
TETRA also provides a point-to-point function that traditional analogue emergency services radio systems didn't provide. This enables users to have a one-to-one trunked 'radio' link between sets without the need for the direct involvement of a control room operator/dispatcher.
unlike cellular technologies, which connect one subscriber to one other subscriber (one-to-one), TETRA is built to do one-to-one, one-to-many and many-to-many. These operational modes are directly relevant to public safety and professional users.
Radio aspects
TETRA uses a digital modulation scheme known as π/4 DQPSK, a form of phase shift keying, together with TDMA for multiple access. The symbol rate is 18,000 symbols per second, and each symbol maps to 2 bits. A single slot consists of 255 symbols, a single frame consists of 4 slots, and a multiframe (whose duration is approximately 1 second) consists of 18 frames. As a form of phase shift keying, the downlink power is constant. The downlink (i.e. the output of the base station) is a continuous transmission consisting of either specific communications with mobiles, synchronisation or other general broadcasts. Although the system uses 18 frames per second, only 17 of these are used for traffic channels, with the 18th frame reserved for signalling or synchronisation. TETRA does not employ amplitude modulation; however, the frame rate of 17.65 frames per second (18,000 symbols/s ÷ 255 symbols/slot ÷ 4 slots/frame), with a mobile transmitting in one slot per frame, is the cause of the perceived 'amplitude modulation' at about 17 Hz.
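
The numbers above fit together as follows; this short calculation simply reproduces the frame arithmetic described in the text.

# Worked numbers for the TETRA frame structure.
symbol_rate = 18_000            # symbols per second
bits_per_symbol = 2             # pi/4 DQPSK carries 2 bits per symbol
symbols_per_slot = 255
slots_per_frame = 4
frames_per_multiframe = 18

gross_bit_rate = symbol_rate * bits_per_symbol
frames_per_second = symbol_rate / symbols_per_slot / slots_per_frame
multiframe_seconds = frames_per_multiframe / frames_per_second

print(gross_bit_rate)                 # 36000 bit/s
print(round(frames_per_second, 2))    # 17.65 -> the ~17 Hz burst rate
print(round(multiframe_seconds, 2))   # 1.02 s, "approximately 1 second"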

OFDMA

Orthogonal Frequency Division Multiple Access (OFDMA) is a multiple access scheme for OFDM systems. It works by assigning a subset of subcarriers to individual users.
OFDMA features
OFDMA is the 'multi-user' version of OFDM
Functions by partitioning the resources in the time-frequency space, by assigning units along the OFDM signal index and OFDM sub-carrier index
Each OFDMA user transmits symbols using sub-carriers that remain orthogonal to those of other users
More than one sub-carrier can be assigned to one user to support high rate applications
Allows simultaneous transmission from several users ⇒ better spectral efficiency
Multiuser interference is introduced if there is a frequency synchronization error
The term 'OFDMA' is claimed to be a registered trademark by Runcom Technologies Ltd., with various other claimants to the underlying technologies through patents. It is used in the mobility mode of IEEE 802.16 WirelessMAN Air Interface standard, commonly referred to as WiMAX.
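
The core idea, assigning disjoint subcarrier sets to users, can be shown in a few lines. The subcarrier count and round-robin policy below are invented for illustration; real systems allocate subcarriers according to channel conditions and rate requirements.

# Toy OFDMA subcarrier assignment keeping users orthogonal.
NUM_SUBCARRIERS = 64

def assign_subcarriers(users):
    # Round-robin the subcarrier indices across users; listing a user
    # twice would give it a double share for a high-rate application.
    allocation = {u: [] for u in users}
    for sc in range(NUM_SUBCARRIERS):
        allocation[users[sc % len(users)]].append(sc)
    return allocation

alloc = assign_subcarriers(["alice", "bob", "carol", "dave"])
print(len(alloc["alice"]), alloc["alice"][:4])  # 16 [0, 4, 8, 12]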

SIDAC

The SIDAC, or SIlicon Diode for Alternating Current, is a semiconductor of the thyristor family. Also referred to as a SYDAC (Silicon thYristor for Alternating Current), bi-directional thyristor breakover diode, or more simply a bi-directional thyristor diode, it is technically specified as a bilateral voltage-triggered switch. Its operation is identical to that of the DIAC, the distinction in naming between the two devices being subject to the particular manufacturer. In general, SIDACs have higher breakover voltages and current-handling capacities than DIACs.

The operation of the SIDAC is quite simple and is functionally identical to that of a spark gap, or to two inverse-parallel Zener diodes. The SIDAC remains nonconducting until the applied voltage meets or exceeds its rated breakover voltage. Once entering this conductive state, the SIDAC continues to conduct, regardless of voltage, until the applied current falls below its rated holding current. At this point, the SIDAC returns to its initial nonconductive state to begin the cycle once again.

Somewhat uncommon in most electronics, the SIDAC is relegated to the status of a special-purpose device. However, where part counts are to be kept low, simple relaxation oscillators are needed, and the voltages are too low for practical operation of a spark gap, the SIDAC is an indispensable component.
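
The breakover-then-holding behaviour described above amounts to a two-state machine, sketched here with example ratings (the 120 V breakover and 50 mA holding current are illustrative values, not data for a specific part).

# Minimal state machine for SIDAC switching behaviour.
class Sidac:
    def __init__(self, v_bo=120.0, i_h=0.05):
        self.v_bo = v_bo            # rated breakover voltage (V)
        self.i_h = i_h              # rated holding current (A)
        self.conducting = False

    def step(self, voltage, current):
        if not self.conducting and abs(voltage) >= self.v_bo:
            self.conducting = True      # breakover: switch on
        elif self.conducting and abs(current) < self.i_h:
            self.conducting = False     # fell below holding current
        return self.conducting

s = Sidac()
print(s.step(voltage=50.0, current=0.0))    # False: below breakover
print(s.step(voltage=130.0, current=1.0))   # True: broke over
print(s.step(voltage=10.0, current=0.5))    # True: conducts regardless of voltage
print(s.step(voltage=10.0, current=0.01))   # False: dropped out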


Wibree

Wibree is an innovative digital radio technology that may soon become a benchmark for open wireless communication. Functionally similar to Bluetooth, it operates in the 2.4 GHz ISM band with a physical-layer bit rate of 1 Mbps.

It is intended for appliances such as wrist watches, wireless keyboards, toys and sports sensors, thanks to its key feature of very low power consumption over a range of about 10 meters (30 feet) using low-cost transceiver microchips, with an output power of -6 dBm.

Announced by Nokia on 3 October 2006, the technology is today licensed and further developed by several major companies, including Nordic Semiconductor, Broadcom Corporation, CSR, Epson, Suunto and Taiyo Yuden. According to Bob Iannucci, head of Nokia's research center, this groundbreaking technology, said to be ten times more capable than Bluetooth, will soon replace it. Nordic Semiconductor is already working on the technology and aims to bring out chips by mid-2007.

Trisil

A Trisil is an electronic component designed to protect electronic circuits against overvoltage. Unlike a Transil, it acts as a crowbar device, switching on when the voltage across it exceeds its breakover voltage. A Trisil is bipolar, behaving the same way in both directions. It is essentially a voltage-controlled triac without a gate. In 1982, the only manufacturer was Thomson SA. This type of crowbar protector is widely used for protecting telecom equipment from lightning-induced transients and induced currents from power lines. Other manufacturers of this type of device include Bourns and Littelfuse.

Rather than relying on the natural breakdown voltage of the device, an extra region is fabricated within the device to form a Zener diode. This allows much tighter control of the breakdown voltage. It is also possible to make gated versions of this type of protector. In this case, the gate is connected to the telecom circuit power supply (via a diode or transistor) so that the device will crowbar if the transient exceeds the power supply voltage. The main advantage of this configuration is that the protection voltage tracks the power supply, eliminating the problem of selecting a particular breakdown voltage for the protection circuit.

Space Shuttle

Previously, all ventures into space used giant rockets which, after a certain time, were directed back into the earth's atmosphere to be reduced to a cinder by the enormous heat of re-entry. After the crew and their capsule had been ejected, virtually all of that tremendously expensive equipment was destroyed after only one use.

Following are the main supporting systems of a space shuttle.

1. Propulsion system
2. External fuel tank
3. Space shuttle orbiter

Gate Valve

A gate valve is a valve that opens by lifting a round or rectangular gate out of the path of the fluid. The distinct feature of a gate valve is that the sealing surfaces between the gate and the seats are planar. The gate faces can form a wedge shape or they can be parallel. Gate valves are sometimes used for regulating flow, but many are not suited for that purpose, having been designed to be fully opened or closed. When fully open, the typical gate valve has no obstruction in the flow path, resulting in very low friction loss.
Bonnets provide leakproof closure for the valve body. Gate valves may have a screw-in, union, or bolted bonnet. Screw-in bonnet is the simplest, offering a durable, pressure-tight seal. Union bonnet is suitable for applications requiring frequent inspection and cleaning. It also gives the body added strength. Bolted bonnet is used for larger valves and higher pressure applications.
Another type of bonnet construction in a gate valve is the pressure seal bonnet. This construction is adopted for valves intended for high-pressure service, typically in excess of 2250 psi. The unique feature of the pressure seal bonnet is that the body-bonnet joint seal improves as the internal pressure in the valve increases, compared to other constructions, where increasing internal pressure tends to create leaks in the body-bonnet joint.

SMART MATERIAL

In the field of massive and complex manufacturing we now need materials with properties that can be manipulated according to our needs. Smart materials are among those unique materials: they can change shape or size simply when a little heat is added, or change from a liquid to a solid almost instantly when near a magnet. These materials include piezoelectric materials, magneto-rheostatic materials, electro-rheostatic materials, and shape memory alloys. Shape memory alloys are metals that exhibit two very unique properties: pseudo-elasticity (an almost rubber-like flexibility under loading) and the shape memory effect (the ability to be severely deformed and then return to the original shape simply by heating). These two properties are made possible through a solid-state phase change, that is, a molecular rearrangement in which the molecules remain closely packed so that the substance remains a solid. The two phases that occur in shape memory alloys are martensite and austenite.

Nanotechnology

Nanotechnology is the development and production of artefacts in which a dimension of less than 100 nanometres (nm) is critical to functioning (1 nm = 10⁻⁹ m, about 40 billionths of an inch). Nanotechnology is a hybrid science combining engineering and chemistry. Atoms and molecules stick together because they have complementary shapes that lock together, or charges that attract. As millions of these atoms are pieced together by nanomachines, a specific product begins to take shape. The goal of nanotechnology is to manipulate atoms individually and place them in a pattern to produce a desired structure. Nanotechnology is likely to change the way almost everything, including medicine, computers and cars, is designed and constructed. It holds out the promise of materials of precisely specified composition and properties, which could yield structures of unprecedented strength and computers of extraordinary compactness and power. Nanotechnology may lead to revolutionary methods of atom-by-atom manufacturing and to surgery on the cellular scale. Scientists have made some progress at building devices, including computer components, at nanoscales. Nanotechnology is anywhere from five to 15 years in the future.

Airbag

For years, trusty seat belts provided the sole form of passive restraint in our cars. There were debates about their safety, especially in relation to children, but over time much of the country adopted mandatory seat-belt laws. Statistics have shown that seat belts have saved thousands of lives that might have been lost in collisions. Airbags have been under development for many years; the first commercial airbags appeared in automobiles in the 1980s. They are a proven safety device that saves a growing number of lives and prevents a large number of head and chest injuries. Driver airbags reduce driver deaths by about 14 percent, and passenger bags reduce passenger deaths by about 11 percent.
People who use seat belts may think they do not need airbags, but they do. Airbags and lap/shoulder belts work together as a system, and one without the other isn't as effective. Deaths are 12 percent lower among drivers with belts and 9 percent lower among belted passengers.
Since the 1998 model year, all new cars have been required to have airbags on both the driver and passenger sides; light trucks came under the rule in 1999. Newer than steering-wheel-mounted or dashboard-mounted bags are seat-mounted, door-mounted and window airbags. Airbags are the subject of serious government and industry research and testing.
Airbags can cause some unintended adverse effects. Nearly all of these are minor injuries, like bruises and abrasions, that are more than offset by the lives airbags save. You can eliminate this risk, and position is what counts: serious inflation injuries occur primarily because of people's position when airbags first begin inflating.
Stopping an object's momentum requires force acting over a period of time. When a car crashes, the force required to stop an occupant is very great, because the car's momentum changes instantly while the passenger's does not. The goal of any supplemental restraint system is to help stop the passenger while doing as little damage to him or her as possible.
What an airbag wants to do is slow the passenger's speed to zero with little or no damage. The constraints it has to work within are huge: the airbag has only the space between the passenger and the steering wheel or dashboard, and a fraction of a second, to work with. Even that tiny amount of space and time is valuable.
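
The point about space and time can be made with a back-of-envelope impulse calculation: the average force needed to stop a passenger is the momentum change divided by the stopping time. The mass, speed and stopping times below are example values only.

# F = dp/dt: stretching the stop time slashes the average force.
mass = 75.0                     # passenger mass, kg
speed = 15.0                    # impact speed, m/s (~54 km/h)
momentum = mass * speed         # kg*m/s that must be removed

for stop_time in (0.005, 0.05):            # rigid surface vs airbag
    avg_force = momentum / stop_time
    print(f"stopping in {stop_time*1000:.0f} ms -> {avg_force/1000:.1f} kN")
# Ten times the stopping time means one tenth the average force.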

Cam less Engines

The cam has been an integral part of the IC engine since its invention. The cam controls the "breathing channels" of the IC engine, that is, the valves through which the fuel-air mixture (in SI engines) or air (in CI engines) is supplied and the exhaust is driven out.
Besieged by demands for better fuel economy, more power, and less pollution, motor engineers around the world are pursuing a radical "camless" design that promises to deliver the internal-combustion engine's biggest efficiency improvement in years. The aim of all this effort is liberation from a constraint that has handcuffed performance since the birth of the internal-combustion engine more than a century ago. Camless engine technology is soon to be a reality for commercial vehicles. In the camless valvetrain, the valve motion is controlled directly by a valve actuator; there is no camshaft or connecting mechanism. A precise electronic control circuit governs the operation of the mechanism, bringing more flexibility and accuracy to the opening and closing of the valves. This seminar looks at the working of the electronically controlled camless engine, its general features, and its benefits over the conventional engine.


The engines powering today's vehicles, whether they burn gasoline or diesel fuel, rely on a system of valves to admit fuel and air to the cylinders and to let exhaust gases escape after combustion. Rotating steel camshafts with precision-machined egg-shaped lobes, or cams, are the hard-tooled "brains" of the system. They push open the valves at the proper time and guide their closure, typically through an arrangement of pushrods, rocker arms, and other hardware. Stiff springs return the valves to their closed position.
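
What the camless arrangement buys is that valve events become software parameters that can change every cycle, instead of being fixed by cam geometry. The sketch below is purely illustrative; the angles and the simple load/speed correction stand in for the calibrated maps a real controller would interpolate.

# Toy per-cycle valve-event schedule for an electronically actuated valve.
def intake_events(rpm, load):
    # load in 0..1; later intake closing at high load and speed gives
    # more cylinder filling (an invented, illustrative relationship).
    intake_open = 350.0                           # crank degrees
    intake_close = 570.0 + 20.0 * load + 0.002 * rpm
    return intake_open, intake_close

print(intake_events(rpm=2000, load=0.2))   # (350.0, 578.0)
print(intake_events(rpm=5500, load=0.9))   # (350.0, 599.0)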

Ball Piston machines

Since the day machines with reciprocating pistons came into existence, efforts have been undertaken to improve their efficiency. The main drawbacks of reciprocating machines are the considerably large number of moving parts due to the presence of valves, greater inertial loads which reduce dynamic balance, and leakage and friction due to the presence of piston rings. The urge to invent has therefore turned to rotary machines.

One main advantage to be gained with a rotary machine is the reduction of inertial loads and better dynamic balance. The Wankel rotary engine has been the most successful example to date, but sealing problems contributed to its decline. From there came the idea of ball piston machines. In the compressor and pump arena, reduction of reciprocating mass in positive displacement machines has always been an objective, and has been achieved most effectively by lobe, gear, sliding vane, liquid ring, and screw compressors and pumps, but at the cost of hardware complexity or higher losses. Lobe, gear, and screw machines have relatively complex rotating element shapes and friction losses. Sliding vane machines have sealing and friction issues. Liquid ring compressors have fluid turbulence losses.

The new design concept of the ball piston engine uses a different approach that has many advantages, including low part count and simplicity of design, very low friction, low heat loss, high power-to-weight ratio, perfect dynamic balance, and cycle thermodynamic tailoring capability.

Diamond

Diamond is the hardest material known to mankind. When used on tools, diamond grinds away material at the micro (nano) level. Diamond is given a value of 10 on the Mohs hardness scale, devised by the German mineralogist Friedrich Mohs to indicate the relative hardness of substances on a rating scale from 1 to 10. Its hardness varies from diamond to diamond with the crystallographic direction; moreover, hardness on the same face or surface varies with the direction of the cut.

Diamond crystallizes in different forms. Eight- and twelve-sided crystal forms are most commonly found; cubical, rounded, and paired crystals are also common. Crystalline diamonds always separate cleanly along planes parallel to the faces. The specific gravity of pure diamond crystals is almost always 3.52. Other properties of diamond are frequently useful in differentiating between true diamonds and imitations: because diamonds are excellent conductors of heat, they are cold to the touch; most diamonds are not good electrical conductors and become charged with positive electricity when rubbed; diamond is resistant to attack by acids or bases; and transparent diamond crystals heated in oxygen burn at about 1470°F, forming carbon dioxide.

Pyrometers

The technique of measuring high temperatures is known as pyrometry, and the instrument employed is called a pyrometer. A pyrometer is a specialized type of thermometer used to measure the high temperatures encountered in the production and heat treatment of metals and alloys. Ordinary temperatures can be measured with an ordinary thermometer; a pyrometer is employed for higher temperatures.
Any metallic surface, when heated, emits radiation at wavelengths that are not visible at low temperatures; at about 540°C the radiation shifts to shorter wavelengths that are visible to the eye, and from the colour a judgement can be made as to the probable temperature. The colour scale is roughly as follows.

Dark red - 540°C
Red - 700°C
Bright red - 850°C
Orange - 900°C
Yellow - 1010°C
White - 1205°C and above

When a substance receives heat, changes in pressure, electrical resistance, radiation, thermoelectric e.m.f. and/or colour may take place. Any of these changes can be used for the measurement of temperature. In order to exercise precise control over heat treatment and melting operations in industry, temperature-measuring devices known as pyrometers are used. They also allow accurate measurement of the temperature of furnaces, molten metals and other heated materials.

Smart combustors

This seminar will review the state of the art in active control of gas turbine combustor processes. It first discusses recently developed approaches for active control of detrimental combustion instabilities using 'fast' injectors that modulate the fuel injection rate at the frequency of the instability with appropriate phase and gain. Next, it discusses two additional approaches for damping combustion instabilities: active modification of the combustion process characteristics, and open-loop modulation of the fuel injection rate at frequencies that differ from the instability frequency.

The second part of the seminar discusses active control of lean blowout in combustors that burn fuel in a lean premixed mode of combustion to reduce NOx emissions. This discussion describes recent developments in optical and acoustic sensing techniques that employ sophisticated data analysis approaches to detect the presence of lean blowout precursors in the measured data. It will be shown that this approach can be used to determine in advance the onset of lean blowout, and that the problem can be prevented by actively controlling the relative amounts of fuel supplied to the main premixed combustion region and a premixed pilot flame. The seminar will close with a discussion of research needs, with emphasis on the integration of active control and health monitoring and prognostication systems into a single combustor control system.
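
The first control approach above, modulating fuel at the instability frequency with a chosen gain and phase, can be caricatured in a few lines. Everything here is illustrative: the frequency, gain and phase are example numbers, and a real controller would identify the oscillation online from pressure-sensor data.

# Conceptual anti-phase fuel modulation for instability damping.
import math

F_INST = 210.0      # identified instability frequency, Hz (example)
GAIN = 0.15         # fraction of mean fuel flow to modulate
PHASE = math.pi     # phase chosen so heat release opposes the wave

def injector_command(t, pressure_amplitude):
    return GAIN * pressure_amplitude * math.sin(
        2 * math.pi * F_INST * t + PHASE)

for t in (0.000, 0.001, 0.002):
    print(f"t={t:.3f} s  command={injector_command(t, 1.0):+.3f}")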

Green Engine

This seminar presents the green engine concept: to increase the efficiency of the engine and avoid excessive pollution, a new method is adopted in which a ceramic coating (a non-metallic solid coating) is applied to parts such as the piston and crown of engines used in automobiles.

Factors affecting the efficiency

Incomplete combustion
Carbon deposition
Thermal shocking
Pollution control

To avoid these factors, we adopt the method of ceramic coating on the engine.

Features of Ceramic Coating

Various processes are used to apply this coating:

1. Physical vapour deposition
2. Chemical vapour deposition
3. Ion plating
4. Sputtering
5. HAIPAP

Advantages

1. It prevents the deposition of carbon over the cylinder head and piston

2. It acts as a thermal barrier which reduces the amount of heat leakage

3. It helps complete combustion of the fuel

4. It avoids thermal shocking

5. Together, the factors above increase the efficiency by up to 9% and reduce pollution considerably

Limitations

1. Additional reactions take place due to the coating.
2. High cost of coating.

E85

E85 is an alcohol fuel mixture of 85% ethanol and 15% gasoline, by volume. Ethanol derived from crops (bioethanol) is a biofuel.
E85 as a fuel is widely used in Sweden and is becoming increasingly common in the United States, mainly in the Midwest where corn is a major crop and is the primary source material for ethanol fuel production.
E85 is usually used in engines modified to accept higher concentrations of ethanol. Such flexible-fuel engines are designed to run on any mixture of gasoline and ethanol with up to 85% ethanol by volume. The primary differences from non-FFVs are the elimination of bare magnesium, aluminum, and rubber parts in the fuel system; the use of fuel pumps capable of operating with electrically conductive (ethanol) instead of non-conducting dielectric (gasoline) fuel; specially coated wear-resistant engine parts; fuel injection control systems with a wider range of pulse widths (for injecting approximately 30% more fuel); the selection of stainless steel fuel lines (sometimes lined with plastic); the selection of stainless steel fuel tanks in place of terne fuel tanks; and, in some cases, the use of acid-neutralizing motor oil. For vehicles with fuel-tank-mounted fuel pumps, additional measures to prevent arcing, as well as flame arrestors positioned in the tank's fill pipe, are also sometimes used.

CVCC

CVCC, or Compound Vortex Controlled Combustion, is a trademark of the Honda Motor Company for technology used to reduce automotive emissions. It allowed Honda's cars to meet 1970s US emission requirements without a catalytic converter, and first appeared on the 1975 ED1 engine. It is a form of stratified charge engine.
Honda CVCC engines have normal inlet and exhaust valves, plus a small auxiliary inlet valve which provides a relatively rich air/fuel mixture to a volume near the spark plug. The remaining air/fuel charge, drawn into the cylinder through the main inlet valve, is leaner than normal. The volume near the spark plug is contained by a small perforated metal plate. Upon ignition, flame fronts emerge from the perforations and ignite the remainder of the air/fuel charge. The rest of the engine cycle is as per a standard four-stroke engine.
This combination of a rich mixture near the spark plug, and a lean mixture in the cylinder allowed stable running, yet complete combustion of fuel, thus reducing CO (carbon monoxide) and hydrocarbon emissions.

Diesel Particulate Filter

A Diesel Particulate Filter, sometimes called a DPF, is a device designed to remove diesel particulate matter, or soot, from the exhaust gas of a diesel engine. Most are rated at 85% efficiency, but often attain efficiencies of over 90%. A diesel-powered vehicle with a filter installed will emit no visible smoke from its exhaust pipe.
In addition to collecting the particulate, a method must be provided to get rid of it. Some filters are single-use (disposable), while others are designed to burn off the accumulated particulate, either through the use of a catalyst (passive), or through an active technology such as a fuel burner which heats the filter to soot combustion temperatures, or through engine modifications (the engine is set to run in a specific way when the filter load reaches a predetermined level, either to heat the exhaust gases or to produce high amounts of NO2, which oxidizes the particulate at relatively low temperatures). This procedure is known as 'filter regeneration.' Fuel sulfur interferes with many regeneration strategies, so jurisdictions that are interested in reducing particulate emissions are also passing regulations governing fuel sulfur levels.
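
The active-regeneration logic reduces to a threshold rule on the estimated soot load; the sketch below shows the idea with invented thresholds and without the exhaust-temperature control a real engine management system performs.

# Toy regeneration trigger: start burning soot off at a load limit,
# stop once the filter is mostly clean. Numbers are illustrative.
SOOT_LIMIT_G = 40.0
REGEN_DONE_G = 5.0

def control_step(soot_g, regenerating):
    if not regenerating and soot_g >= SOOT_LIMIT_G:
        return True        # enter regeneration: heat exhaust, burn soot
    if regenerating and soot_g <= REGEN_DONE_G:
        return False       # filter clean: resume normal running
    return regenerating

state = False
for soot in (10, 25, 41, 30, 12, 4):
    state = control_step(soot, state)
    print(f"soot={soot:>2} g  regenerating={state}")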

Butterfly valve

A butterfly valve is a type of flow control device used to make a fluid start or stop flowing through a section of pipe. The valve is similar in operation to a ball valve. A flat circular plate is positioned in the center of the pipe; the plate has a rod through it connected to a handle on the outside of the valve. Rotating the handle turns the plate either parallel or perpendicular to the flow, opening or shutting off the flow. It is a very robust and reliable design. However, unlike the ball valve, the plate does not rotate out of the flow of the fluid, so a pressure drop is always induced in the flow.
There are three types of butterfly valve:
1. Resilient butterfly valve, which has a flexible rubber seat. Working pressure up to 1.6 MPa.
2. High-performance butterfly valve, which is usually double-eccentric in design. Working pressure up to 5.0 MPa.
3. Tricentric butterfly valve, which usually has a metal-seated design. Working pressure up to 10.0 MPa.
Butterfly valves are also commonly utilised in conjunction with carburetors to control the flow of air through the intake manifold, and hence the flow of fuel and air into an internal combustion engine. The butterfly valve in this circumstance is called a throttle, as it is 'throttling' the engine's aspiration. It is controlled via a cable or electronics by the right-most pedal in the driver's footwell (although adaptations for hand control do exist). This is why the accelerator pedal in some countries is called a throttle pedal.
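
To see why throttle response is non-linear, here is a first-order geometric sketch (an illustration of my own, ignoring the shaft and seal clearances): the plate's projection onto the bore shrinks as it rotates, so the open flow area grows roughly as 1 - cos(theta), where theta is the plate angle measured from fully closed.

import math

def open_area_fraction(theta_deg):
    """Approximate fraction of the bore left open by the plate (ignores shaft)."""
    return 1.0 - math.cos(math.radians(theta_deg))

for theta in (0, 30, 60, 90):
    print(f"plate at {theta:2d} deg: ~{open_area_fraction(theta):.0%} of bore open")

Under this model, half of the bore area only opens in the last 30 degrees of plate travel.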

Globe valves

Globe valves are named for their spherical body shape. The two halves of the valve body are separated by a baffle with a disc in the center. Globe valves operate by the screw action of a handwheel. They are used for applications requiring throttling and frequent operation. Since the baffle restricts flow, they are not recommended where full, unobstructed flow is required.

A bonnet provides leakproof closure for the valve body. Globe valves may have a screw-in, union, or bolted bonnet. A screw-in bonnet is the simplest, offering a durable, pressure-tight seal. A union bonnet is suitable for applications requiring frequent inspection or cleaning, and also gives the body added strength. A bolted bonnet is used for larger or higher-pressure applications.
Many globe valves have a class rating that corresponds to the pressure specifications of ANSI/ASME B16.34. Other valve types are often called globe-style valves because of the shape of the body or the way the disk closes; typical swing check valves, for example, could be called globe type.
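
Since globe valves are typically chosen for throttling service, capacity is usually expressed with a flow coefficient Cv. Below is a minimal sizing sketch using the conventional relation Q = Cv * sqrt(dP / SG) in US customary units; the Cv value here is just an assumed example:

import math

def flow_gpm(cv, dp_psi, specific_gravity=1.0):
    """Flow in US gallons per minute through a valve with coefficient Cv."""
    return cv * math.sqrt(dp_psi / specific_gravity)

# An assumed Cv of 44 with a 9 psi drop on water (SG = 1.0):
print(f"{flow_gpm(44, 9):.0f} gpm")  # ~132 gpm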

Stratified charge engine

The stratified charge engine is a type of internal-combustion engine, similar in some ways to the Diesel cycle, but running on normal gasoline. The name refers to the layering of the fuel/air mixture, the 'charge', inside the cylinder.

In a traditional Otto cycle engine the fuel and air are mixed outside the cylinder and are drawn into it during the intake stroke. The air/fuel ratio is kept very close to stoichiometric, which is defined as the exact amount of air necessary for complete combustion of the fuel. This mixture is easily ignited and burns smoothly.
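
As a worked example of 'stoichiometric', here is a sketch using the commonly quoted ~14.7:1 air/fuel mass ratio for gasoline. The lambda value expresses how far a mixture deviates from that ideal; values below 1.0 are rich, like the pocket near a CVCC spark plug, and values above 1.0 are lean:

STOICH_AFR = 14.7  # kg of air per kg of gasoline (commonly quoted value)

def lambda_value(air_mass_g, fuel_mass_g):
    """lambda = 1.0 at stoichiometric; < 1.0 is rich, > 1.0 is lean."""
    return (air_mass_g / fuel_mass_g) / STOICH_AFR

print(f"{lambda_value(500.0, 34.0):.2f}")  # ~1.00: stoichiometric
print(f"{lambda_value(500.0, 40.0):.2f}")  # ~0.85: rich mixture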

The problem with this design is that after the combustion process is complete, the resulting exhaust stream contains a considerable amount of free single atoms of oxygen and nitrogen, the result of the heat of combustion splitting the O2 and N2 molecules in the air. These readily react with each other to create NOx, a pollutant. In modern vehicles, a catalytic converter in the exhaust system re-combines the NOx back into O2 and N2.

A Diesel engine, on the other hand, injects the fuel into the cylinder directly. This has the advantage of avoiding premature spontaneous combustion (a problem known as detonation or ping that plagues Otto cycle engines) and allows the Diesel to run at much higher compression ratios, which leads to a more fuel-efficient engine. That is why Diesels are commonly found in applications where they run for long periods of time, such as in trucks.

BlueTec

BlueTec is DaimlerChrysler's name for its two nitrogen oxide (NOx) reducing systems for use in its Diesel automobile engines. One is a urea catalyst system called AdBlue; the other, called DeNOx, uses an oxidising catalytic converter and particulate filter combined with other NOx-reducing systems. Both systems were designed to cut emissions further than ever before. Mercedes-Benz introduced the E-Class (using the DeNOx system) and GL-Class (using AdBlue) at the 2006 North American International Auto Show as the E 320 and GL 320 Bluetec. This system makes these vehicles 45-state and 50-state legal respectively in the United States, and is expected to meet all emissions regulations through 2009. It also makes DaimlerChrysler the only car manufacturer in the US committed to selling diesel models in the 2007 model year.

MAP sensor

A MAP sensor (manifold absolute pressure) is one of the sensors used in an internal combustion engine's electronic control system. Engines that use a MAP sensor are typically fuel injected. The manifold absolute pressure sensor provides instantaneous pressure information to the engine's electronic control unit (ECU). This is necessary to calculate air density and determine the engine's air mass flow rate, which in turn is used to calculate the appropriate fuel flow. (See stoichiometry.)

An engine control system that uses manifold absolute pressure to calculate air mass is said to be using the speed-density method. Engine speed (RPM) and air temperature are also necessary to complete the speed-density calculation. Not all fuel-injected engines use a MAP sensor to infer mass air flow; some use a MAF (mass air flow) sensor.
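
A minimal speed-density sketch follows (the numbers are assumed placeholders; a production ECU reads volumetric efficiency from calibration tables indexed by RPM and load). Air density comes from the ideal gas law, and a four-stroke engine draws its displacement once every two crankshaft revolutions:

R_AIR = 287.0  # J/(kg*K), specific gas constant for dry air

def air_mass_flow_g_per_s(map_kpa, intake_temp_c, rpm, displacement_l, ve=0.85):
    """Speed-density air mass flow; ve is an assumed volumetric efficiency."""
    rho = (map_kpa * 1000.0) / (R_AIR * (intake_temp_c + 273.15))   # kg/m^3
    # A four-stroke engine draws its displacement once every two revolutions.
    vol_per_s = (displacement_l / 1000.0) * (rpm / 60.0) / 2.0      # m^3/s
    return rho * vol_per_s * ve * 1000.0                            # g/s

# Assumed example: 2.0 L engine at 3000 RPM, 80 kPa manifold pressure, 25 C air.
air = air_mass_flow_g_per_s(80.0, 25.0, 3000, 2.0)
print(f"air ~{air:.1f} g/s, stoichiometric fuel ~{air / 14.7:.2f} g/s")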

Valvetronic

The Valvetronic system is the first variable valve timing system to offer continuously variable timing (on both intake and exhaust camshafts) along with continuously variable intake valve lift, from ~0 to 10 mm, on the intake camshaft. Valvetronic-equipped engines are unique in that they rely on the amount of valve lift to throttle the engine rather than on a butterfly valve in the intake tract. In other words, in normal driving, the 'gas pedal' controls the Valvetronic hardware rather than the throttle plate.

First introduced by BMW on the 316ti compact in 2001, Valvetronic has since been added to many of BMW's engines. The Valvetronic system is coupled with BMW's proven double-VANOS to further enhance both power and efficiency across the engine speed range. Valvetronic will not be coupled with the 'High Precision Injection' (gasoline direct injection) technology of BMW's N53 and N54 engines, due to lack of room in the cylinder head.

Cylinder heads with Valvetronic use an extra set of rocker arms, called intermediate arms (lift scalers), positioned between the valve stem and the camshaft. These intermediate arms are able to pivot on a central point, by means of an extra, electronically actuated camshaft. This movement alone, without any movement of the intake camshaft, can open or close the intake valves.

Because the intake valves now have the ability to move from fully closed to fully open positions, and everywhere in between, the primary means of engine load control is transferred from the throttle plate to the intake valvetrain. By eliminating the throttle plate's 'bottleneck' in the intake tract, pumping losses are reduced, and fuel economy and responsiveness are improved.
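
Purely as an illustration of lift-based load control (the numbers below are assumed, not BMW's calibration), this sketch commands intake valve lift directly from pedal position, with the throttle plate notionally held wide open:

MAX_LIFT_MM = 10.0    # upper end of the ~0-10 mm range quoted above
IDLE_LIFT_MM = 0.3    # assumed small lift to keep the engine idling

def intake_lift_mm(pedal_frac):
    """Map pedal position (0.0-1.0) to intake valve lift instead of throttle angle."""
    pedal_frac = min(max(pedal_frac, 0.0), 1.0)
    return IDLE_LIFT_MM + pedal_frac * (MAX_LIFT_MM - IDLE_LIFT_MM)

print(intake_lift_mm(0.0), intake_lift_mm(0.5), intake_lift_mm(1.0))
# 0.3 5.15 10.0 -- engine load scales with lift, not with plate angle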

Regenerative brake

A regenerative brake is a device or system which allows a vehicle to recapture part of the kinetic energy that would otherwise be lost as heat when braking, and to make use of that energy either by storing it for future use or feeding it back into a power system for other vehicles to use.

It is similar to an electromagnetic brake, which generates heat instead of electricity and is unable to completely stop a rotor.

Regenerative brakes are a form of dynamo, a device first demonstrated in 1832 by Hippolyte Pixii. The dynamo's rotor slows as kinetic energy is converted to electrical energy through electromagnetic induction. The dynamo can be used as either a generator or a brake, converting motion into electricity, or it can be reversed to convert electricity into motion.

Using a dynamo as a regenerative brake was discovered coincident with the modern electric motor. In 1873, Zénobe Gramme attached the wires from two dynamos together. When one dynamo's rotor was turned, acting as a regenerative brake, the other became an electric motor.

It is estimated that regenerative braking systems in vehicles currently reach 31.3% electric generation efficiency, with most of the remaining energy being released as heat. The actual efficiency depends on numerous factors, such as the state of charge of the battery, how many wheels are equipped with the regenerative braking system, and whether the topology used is parallel or serial in nature. The system is no more effective at stopping a vehicle than conventional friction brakes, but it reduces the use of contact elements like brake pads, which eventually wear out. Traditional friction-based brakes must still be provided for situations requiring rapid, powerful braking.
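
To put the quoted 31.3% figure in perspective, here is a short worked example (the vehicle mass and speed are assumed, not from the article): the kinetic energy recovered when a 1500 kg car brakes from 60 km/h to a stop.

def recovered_energy_kj(mass_kg, speed_kmh, efficiency=0.313):
    """Kinetic energy captured by regenerative braking during one full stop."""
    v = speed_kmh / 3.6                          # convert km/h to m/s
    kinetic_kj = 0.5 * mass_kg * v ** 2 / 1000.0
    return kinetic_kj * efficiency

print(f"~{recovered_energy_kj(1500, 60):.0f} kJ recovered")  # ~65 kJ of ~208 kJ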