Extreme Programming

Extreme Programming (XP) is a deliberate and disciplined approach to software development. About six years old, it has already been proven at many companies of all sizes and industries worldwide. XP succeeds because it stresses customer satisfaction: the methodology is designed to deliver the software your customer needs when it is needed. XP empowers software developers to confidently respond to changing customer requirements, even late in the life cycle. The methodology also emphasizes teamwork: managers, customers, and developers are all part of a team dedicated to delivering quality software, and XP implements a simple yet effective way to enable groupware-style development.

XP improves a software project in four essential ways: communication, simplicity, feedback, and courage. XP programmers communicate with their customers and fellow programmers. They keep their design simple and clean. They get feedback by testing their software starting on day one. They deliver the system to the customers as early as possible and implement changes as suggested. With this foundation, XP programmers are able to courageously respond to changing requirements and technology.

XP is different. It is a lot like a jigsaw puzzle: there are many small pieces, and individually the pieces make no sense, but when combined a complete picture emerges. This is a significant departure from traditional software development methods and ushers in a change in the way we program.

If one or two developers have become bottlenecks because they own the core classes in the system and must make all the changes, try collective code ownership. You will also need unit tests; let everyone make changes to the core classes whenever they need to. You could continue this way until no problems are left, then add the remaining practices as you can. The first practice you add will seem easy: you are solving a large problem with a little extra effort. The second might seem easy too. But at some point between having a few XP rules and having all of them, it will take persistence to make XP work. By then your problems will have been solved and your project will be under control. It might seem tempting to abandon the new methodology and go back to what is familiar and comfortable, but continuing pays off in the end.
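Collective code ownership only works with a safety net of automated unit tests. As a minimal sketch (the function and its tests are invented for illustration, not taken from any particular project), a test written with Python's built-in unittest framework looks like this:

    import unittest

    def split_bill(total, people):
        """Divide a bill evenly among people, rounding to cents."""
        if people < 1:
            raise ValueError("need at least one person")
        return round(total / people, 2)

    class SplitBillTest(unittest.TestCase):
        def test_even_split(self):
            self.assertEqual(split_bill(90.00, 3), 30.00)

        def test_rejects_zero_people(self):
            with self.assertRaises(ValueError):
                split_bill(90.00, 0)

    if __name__ == "__main__":
        unittest.main()

Any developer who changes a shared class reruns the whole suite; a passing run is what makes shared ownership safe.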

Mobile IP

While Internet technologies largely succeed in overcoming the barriers of time and distance, existing Internet technologies have yet to fully accommodate increasing mobile computer usage. A promising technology for eliminating this barrier is Mobile IP. The emerging 3G mobile networks are set to make a huge difference to the international business community: 3G networks will provide sufficient bandwidth to run most business computer applications while still providing a reasonable user experience. However, 3G networks are not based on only one standard, but on a set of radio technology standards such as cdma2000, EDGE and WCDMA. It is easy to foresee that mobile users will from time to time also want to connect to fixed broadband networks, wireless LANs and mixtures of new technologies such as Bluetooth attached to, for example, cable TV and DSL access points.

In this light, a common macro-mobility management framework is required to allow mobile users to roam between different access networks with little or no manual intervention. (Micro-mobility issues, such as radio-specific mobility enhancements, are supposed to be handled within the specific radio technology.) The IETF has created the Mobile IP standard for this purpose.

Mobile IP differs from other mobility-management efforts in that it is not tied to one specific access technology. In earlier mobile cellular standards, such as GSM, radio resource and mobility management was integrated vertically into one system. The same is true for mobile packet data standards such as CDPD (Cellular Digital Packet Data) and the internal packet data mobility protocol (GTP/MAP) of GPRS/UMTS networks. This vertical mobility-management property is also inherent in the increasingly popular 802.11 wireless LAN standard.

Mobile IP can be seen as the least common mobility denominator, providing seamless macro-mobility solutions amid the diversity of accesses. Mobile IP defines a Home Agent as an anchor point with which the mobile client always has a relationship, and a Foreign Agent, which acts as the local tunnel endpoint at the access network the mobile client is visiting.
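As a rough sketch of the Home Agent / Foreign Agent idea (datagram forwarding only; registration signalling, authentication and lifetimes are omitted, and all addresses and names are invented for illustration), the home agent intercepts packets addressed to the mobile's home address and tunnels them to the care-of address at the visited network:

    # Minimal sketch of Mobile IP forwarding (illustrative only).
    class HomeAgent:
        def __init__(self):
            self.bindings = {}  # home address -> care-of address

        def register(self, home_addr, care_of_addr):
            """Record which foreign network the mobile is visiting."""
            self.bindings[home_addr] = care_of_addr

        def deliver(self, packet):
            care_of = self.bindings.get(packet["dst"])
            if care_of is None:
                return packet  # mobile is at home; deliver normally
            # Encapsulate and tunnel to the foreign agent.
            return {"dst": care_of, "payload": packet}

    class ForeignAgent:
        def receive(self, tunneled):
            """Decapsulate and hand the inner packet to the mobile."""
            return tunneled["payload"]

    ha, fa = HomeAgent(), ForeignAgent()
    ha.register("10.0.0.5", "192.168.1.20")
    inner = fa.receive(ha.deliver({"dst": "10.0.0.5", "payload": "data"}))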

Motes

Sensor networks have been applied to various research areas at a number of academic institutions. In particular, environmental monitoring has received a lot of attention, with major projects at UCB, UCLA and other places. In addition, commercial pilot projects are starting to emerge as well. A number of start-up companies are active in this space, providing mote hardware as well as application software and back-end infrastructure solutions.

The University of California at Berkeley, in conjunction with the local Intel Lab, is conducting an environmental monitoring project using mote-based sensor networks on Great Duck Island off the coast of Maine. This endeavor includes the deployment of tens of motes and several gateways in a fairly harsh outdoor environment. The motes are equipped with a variety of environmental sensors (temperature, humidity, light, atmospheric pressure, motion, etc.). They form a self-organizing multi-hop sensor network that is linked via gateways to a base station on the island. There, the data is collected and transmitted via a satellite link to the Internet. This setup enabled researchers to continuously monitor an endangered bird species on the island without constant perturbation of its habitat. The motes gather detailed data on the bird population and its environment around the clock, which would otherwise require intrusive manual observation.

The Intel Mote has been designed after a careful study of the application space for sensor networks. We have interviewed a number of researchers in this space and collected their feedback on desired improvements over currently available mote designs. Requests that have been repeatedly mentioned include the following key items:
o Increased CPU processing power. In particular, applications such as acoustic sensing and localization require additional computational resources.
o Increased main memory size. As above, sensor network applications are beginning to stretch the limits of existing hardware designs; this need is amplified by the desire to perform localized computation on the motes.
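To make "self-organizing multi-hop" concrete, here is a toy sketch (node names, topology and readings are all invented; real motes build the routing tree dynamically) of readings being relayed hop by hop toward a gateway:

    # Toy multi-hop forwarding: each mote relays toward its parent
    # in the routing tree until a reading reaches the gateway.
    parents = {"mote3": "mote2", "mote2": "mote1", "mote1": "gateway"}

    def route_to_gateway(origin, reading):
        hops, node = [origin], origin
        while node != "gateway":
            node = parents[node]   # next hop chosen by the routing tree
            hops.append(node)
        return hops, reading

    path, data = route_to_gateway("mote3", {"temp_c": 11.2, "humidity": 0.83})
    print(" -> ".join(path), data)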

Param 10000

Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985-1990). Cray himself never used the word "supercomputer"; a little-remembered fact is that he recognized only the word "computer." In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash." Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as IBM and HP, which purchased many of the 1980s companies to gain their experience, although Cray Inc. still specializes in building supercomputers.

SD2000 uses the PARAM 10000, which uses up to four UltraSPARC-II processors per node. The PARAM systems can be extended into a cluster supercomputer; a clustered system with 1,200 processors can deliver a peak performance of up to 1 TFLOPS. Even though the PARAM 10000 is not ranked among the top 500 supercomputers, it has the potential to gain a high rank. It uses a variation of MPI developed at C-DAC. No performance data is available, although one would presume that it will not be very different from that of other UltraSPARC-II based systems using MPI. Because SD2000 is a commercial product, it is impossible to gather detailed data about the algorithms and performance of the product.
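The PARAM systems are programmed with a message-passing model. As a generic illustration of that style (using the open-source mpi4py bindings, not C-DAC's own MPI variant; the payload is invented), a minimal MPI program that passes data between two processes looks like this:

    # Run with: mpiexec -n 2 python ping.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        # Process 0 sends a small work item to process 1.
        comm.send({"payload": list(range(4))}, dest=1, tag=7)
        print("rank 0 sent work to rank 1")
    elif rank == 1:
        # Process 1 blocks until the message arrives.
        msg = comm.recv(source=0, tag=7)
        print("rank 1 received", msg)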

PON Topologies

There are several topologies suitable for the access network: tree, ring, or bus. A PON can also be deployed in a redundant configuration, as a double ring or a double tree; or redundancy may be added only to a part of the PON, say the trunk of the tree. For the rest of this article, we will focus our attention on the tree topology; however, most of the conclusions made are equally relevant to other topologies.

All transmissions in a PON are performed between an Optical Line Terminal (OLT) and Optical Network Units (ONUs). Therefore, in the downstream direction (from OLT to ONUs), a PON is a point-to-multipoint (P2MP) network, and in the upstream direction it is a multipoint-to-point (MP2P) network. The OLT resides in the local exchange (central office), connecting the optical access network to an IP, ATM, or SONET backbone. The ONU is located either at the curb (FTTC solution) or at the end-user location (FTTH, FTTB solutions), and provides broadband voice, data, and video services.

The advantages of using PONs in subscriber access networks are numerous.
1. PONs allow for long reach between central offices and customer premises, operating at distances of over 20 km.
2. PONs minimize fiber deployment in both the local exchange office and the local loop.
3. PONs provide higher bandwidth due to deeper fiber penetration, offering gigabit-per-second solutions.
4. Operating in the downstream direction as a broadcast network, PONs allow for video broadcasting as either IP video or analog video using a separate wavelength overlay.
5. PONs eliminate the need to install active multiplexers at splitting locations, thus relieving network operators of maintaining and powering active equipment in the field.
6. Being optically transparent end to end, PONs allow upgrades to higher bit rates or additional wavelengths.
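A quick way to picture the P2MP/MP2P asymmetry (all names, frames and slot counts below are invented for illustration): downstream, the OLT broadcasts every frame and each ONU filters by its own ID; upstream, the ONUs must share the single fiber, for example by transmitting in assigned time slots:

    # Downstream: OLT broadcasts; each ONU keeps only frames addressed to it.
    frames = [("onu1", "voice"), ("onu2", "video"), ("onu1", "data")]
    onu1_rx = [payload for dst, payload in frames if dst == "onu1"]

    # Upstream: ONUs share one fiber by transmitting in assigned time slots
    # (a fixed round-robin grant; real PONs grant slots dynamically).
    slot_order = ["onu1", "onu2", "onu3"]
    def upstream_schedule(cycle):
        return slot_order[cycle % len(slot_order)]

    print(onu1_rx, [upstream_schedule(c) for c in range(5)])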

Structured Cabling

As today's communication networks become more complex (more users share peripherals, more mission-critical tasks are accomplished over networks, and the need for faster access to information increases), a good foundation for these networks becomes increasingly important. The first step toward the adaptability, flexibility and longevity required of today's networks begins with structured cabling, the foundation of any information system. It is vital that communications cabling be able to support a variety of applications and last for the life of a network. If that cabling is part of a well-designed structured cabling system, it can allow for easy administration of moves, adds and changes, and smooth migration to new network topologies. On the other hand, "worry-about-it-when-you-need-to" systems make moves, adds and changes a hassle and make new network topologies too difficult to implement. Network problems occur more often, and are more difficult and time-consuming to troubleshoot.

When communication systems fail, employees and assets sit idle, causing a loss of revenues and profits. Even worse, the perceptions of customers and suppliers can be adversely affected.

The purpose of this white paper is to present the advantages of using a standards-based structured cabling system for a business enterprise. The paper covers a brief historical perspective of structured cabling, a review of the current standards, media types and performance criteria, and system design and installation recommendations. Particular attention is given to the ANSI/TIA/EIA-568-A standard and the horizontal cabling subsystem in that standard.

The Evolution of Structured Cabling
In the early 1980s, when computers were first linked together in order to exchange information, many different cabling designs were used. Some companies built their systems to run over coaxial cables. Others thought that twinaxial or other cables would work best. With these cables, certain parameters had to be followed in order to make the system work.

Surface Computer

Surface Computer users can fingerpaint digitally, resize and interact with photos and videos, and even "digitize" some real-life events, such as splitting up a restaurant bill and researching wines. The Surface Computer can recognize some real-world objects and creates onscreen versions to interact with.

Microsoft has just announced its Surface Computing technology, a project that has been kept under wraps for five years. Using a giant table-like display, users are able to draw, interact with media, and use another new technology called domino tagging, in which a real-life object on the computer's surface is identified and becomes an on-screen object. Picture a surface that can recognize physical objects from a paintbrush to a cell phone and allows hands-on, direct control of content such as photos, music and maps.

Today at the Wall Street Journal's D: All Things Digital conference, Microsoft Corp. CEO Steve Ballmer will unveil Microsoft Surface™, the first in a new category of surface computing products from Microsoft that breaks down traditional barriers between people and technology. Surface turns an ordinary tabletop into a vibrant, dynamic surface that provides effortless interaction with all forms of digital content through natural gestures, touch and physical objects. Beginning at the end of this year, consumers will be able to interact with Surface in hotels, retail establishments, restaurants and public entertainment venues.

The intuitive user interface works without a traditional mouse or keyboard, allowing people to interact with content and information on their own or collaboratively with their friends and families, just like in the real world. Surface is a 30-inch display in a table-like form factor that small groups can use at the same time. From digital finger painting to a virtual concierge, Surface brings natural interaction to the digital world in a new and exciting way.

Ubiquitous Networking

Mobile computing devices have changed the way we look at computing. Laptops and personal digital assistants (PDAs) have unchained us from our desktop computers. A group of researchers at AT&T Laboratories Cambridge are preparing to put a new spin on mobile computing. In addition to taking the hardware with you, they are designing a ubiquitous networking system that allows your program applications to follow you wherever you go.

By using a small radio transmitter and a building full of special sensors, your desktop can be anywhere you are, not just at your workstation. At the press of a button, the computer closest to you in any room becomes your computer for as long as you need it. In addition to computers, the Cambridge researchers have designed the system to work for other devices, including phones and digital cameras. As we move closer to intelligent computers, they may begin to follow our every move.

The essence of mobile computing is that a user's applications are available, in a suitably adapted form, wherever that user goes. Within a richly equipped networked environment such as a modern office, the user need not carry any equipment around; the user interfaces of the applications themselves can follow the user as they move, using the equipment and networking resources available. We call these applications Follow-me applications.

Typically, a context-aware application needs to know the location of users and equipment, and the capabilities of the equipment and networking infrastructure. In this paper we describe a sensor-driven, or sentient, computing platform that collects environmental data and presents that data in a form suitable for context-aware applications.
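As a toy sketch of the Follow-me idea (room names, the device registry and the handoff logic are all invented for illustration; the real platform drives this from its location sensors), a sensor update moves the user's session to the nearest display:

    # Toy follow-me session: a location sensor reports the user's room,
    # and the user's desktop is re-hosted on that room's display.
    displays = {"office101": "wall-screen-A", "meeting2": "tabletop-B"}
    sessions = {}  # user -> device currently showing their desktop

    def user_moved(user, room):
        """Sensor callback: migrate the session to the local display."""
        device = displays.get(room)
        if device:
            sessions[user] = device
        return sessions.get(user)

    print(user_moved("alice", "office101"))  # wall-screen-A
    print(user_moved("alice", "meeting2"))   # tabletop-B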

Unlicensed Mobile Access

During the past year, mobile and integrated fixed/mobile operators announced an increasing number of fixed-mobile convergence initiatives, many of which are materializing in 2006. The majority of these initiatives are focused around UMA, the first standardized technology enabling seamless handover between mobile radio networks and WLANs. Clearly, in one way or another, UMA is a key agenda item for many operators.

Operators are looking at UMA to address the indoor voice market (i.e., to accelerate or control fixed-to-mobile substitution) as well as to enhance the performance of mobile services indoors. Furthermore, these operators are looking at UMA as a means to fend off the growing threat from new Voice-over-IP (VoIP) operators.

However, when evaluating a new 3GPP standard like UMA, many operators ask themselves how well it fits with other network evolution initiatives, including:
o UMTS
o Soft MSCs
o IMS Data Services
o I-WLAN
o IMS Telephony
This whitepaper aims to clarify the position of UMA in relation to these other strategic initiatives. For a more comprehensive introduction to the UMA opportunity, refer to "The UMA Opportunity," available on the Kineto web site (www.kineto.com).

Mobile Network Reference Model

To best understand the role UMA plays in mobile network evolution, it is helpful to first introduce a reference model for today's mobile networks. Figure 1 provides a simplified model for the majority of 3GPP-based mobile networks currently in deployment.

Virtual LAN Technology

The network backbone is built from special-purpose devices and computers that transfer messages from one network to another. Before we look deeper into the topic of virtual LANs, let us review the basic devices used in the network backbone. They are:

1. Bridges.
2. Switches.
3. Routers.
4. Gateways.
5. Hubs.

BRIDGES :-Bridges operate at the data link layer. They connect two LAN segments that use the same data link and network protocol.

SWITCHES :-Like bridges, switches operate at the data link layer. Switches connect two or more computers or network segments that use the same data link and network protocol.

ROUTERS :- Routers operate at the network layer. Routers connect two or more LANs that use the same or different data link protocols but the same network protocol, providing both the basic system interconnection and the necessary translation between the protocols in both directions.

GATEWAYS :- Gateways operate at the network layer and above. They connect two or more networks that may use different data link and network protocols, translating between the protocol stacks as needed.

HUBS :- Hubs are physical layer devices that are really just multiple-port repeaters. When an electronic digital signal is received on a port, the signal is reamplified or regenerated and forwarded out all segments except the segment from which the signal was received.
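To contrast a hub's flooding with a switch's selective forwarding (a toy model; the frame fields and port numbers are invented for illustration), here is a minimal learning-switch sketch:

    # Toy learning switch: learn source MAC -> port, forward selectively;
    # flood out all other ports (like a hub) only while the destination
    # is still unknown.
    table = {}

    def handle_frame(src_mac, dst_mac, in_port, num_ports=4):
        table[src_mac] = in_port                 # learn where src lives
        if dst_mac in table:
            return [table[dst_mac]]              # forward out one port
        return [p for p in range(num_ports) if p != in_port]  # flood

    print(handle_frame("aa:aa", "bb:bb", 0))  # unknown dst -> flood
    print(handle_frame("bb:bb", "aa:aa", 2))  # learned -> port 0 only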

Windows DNA

Today, the convergence of Internet and Windows computing technologies promises exciting new opportunities for savvy businesses: to create a new generation of computing solutions that dramatically improve the responsiveness of the organization, to more effectively use the Internet and the Web to reach customers directly, and to better connect people to information any time or any place. When a technology system delivers these results, it is called a Digital Nervous System. A Digital Nervous System relies on connected PCs and integrated software to make the flow of information rapid and accurate. It helps everyone act faster and make more informed decisions. It prepares companies to react to unplanned events. It allows people to focus on business, not technology.

Creating a true Digital Nervous System takes commitment, time, and imagination. It is not something every company will have the determination to do. But those who do will have a distinct advantage over those who don't. In creating a Digital Nervous System, organizations face many challenges: How can they take advantage of new Internet technologies while preserving existing investments in people, applications, and data? How can they build modern, scalable computing solutions that are dynamic and flexible to change? How can they lower the overall cost of computing while making complex computing environments work?

X- Internet

As the Internet expands, two new waves of innovation -- comprising what Forrester calls the X Internet -- are already eclipsing the Web: an executable Net that greatly improves the online experience and an extended Net that connects the real world.

An executable Net that supplants today's Web will move code to user PCs and cause devices to captivate consumers in ways static pages never could. Today's news, sports, and weather offered on static Web pages is essentially the same content presented on paper, making the online experience more like reading in a dusty library than participating in a new medium.

The extended Internet is reshaping technology's role in business through Internet devices and applications which sense, analyze, and control data, therefore providing more real-time information than ever before about what is going on in the real world.

The X Internet will not be a new invention, but rather the evolution of today's Internet of static Web pages and cumbersome e-commerce mechanisms into a Net that relies on executable software code to deliver more interactive experiences.

Executable Internet applications use downloaded code like Java and XML to enhance the user experience with pop-up menus, pick lists, graphics and simple calculations, according to a recent Forrester report entitled "The X Internet."

An easy way to understand how the X Internet will work is to imagine that a band wants to distribute a song over the Net. Rather than worrying about which audio player people want to use, an executable file will deliver the song and the audio player at the same time.

"With an executable, you can distribute movies the same way you distribute songs," Forrester research director and report author Carl Howe told NewsFactor Network. "It just makes the models work better."

Building the X-Net

The report also employs an example of a person building a house. With today's Internet, a builder would have to find, then try to follow, an article detailing how to frame a window. When it was time to install the bathroom, the would-be plumber would then have to find an article dealing with that topic.

Executable Internet applications would demonstrate to a builder, step-by-step, how to frame a window. When it came time to install the bathroom, the carpenter would simply be replaced by a plumber.

"Instead of reading a book, you have a conversation about the work you're trying to do," Howe wrote.

Forrester is also predicting the widespread adoption of another X Internet -- but this X stands for "extended." The extended Internet will include the widespread adoption of real-world appliances, like air conditioners or car tires, that communicate with owners or manufacturers via the Internet.

The extended Internet will come with the inclusion of cheap sensors in thousands of everyday products, an era that will begin around 2005, Forrester predicts.

Many people think the Internet and the Web are the same thing. They're not. The Internet is a piece of wire that goes from me to you and from me to 300 million other people in the world. The Web is software that I put on my end of the wire, and you put on your end -- allowing us to exchange information. While the Internet (the wire) evolves gradually, the software on the wire can change quickly. Before the Web, other software was clamped onto the Internet. WAIS, Gopher, and Usenet were the dominant systems, and there were companies that were doing commerce using those software models. I call this the "executable Internet," or X Internet, for short.

X Internet offers several important advantages over the Web: 1) it rides Moore's Law -- the wide availability of cheap, powerful, low-real-estate processing; 2) it leverages ever-dear bandwidth -- once the connection is made, a small number of bits will be exchanged, unlike the Web, where lots of pages are shuttled out to the client; and 3) X Internet will be far more peer-to-peer -- unlike the server-centric Web.

This scenario could be marred by two threats: viruses and lack of standards. Once executables start to move fluidly through the Net, viruses will have perfect conditions to propagate. Standards, or rather the lack thereof, will block the quick arrival of X Internet. I can't see Microsoft, Sun, IBM, or other traditionalists setting the standards. The Web-killer's design will emerge from pure research, academe, or open source -- as did the Web.

What It Means -- No. 1: Web-centric companies get stuck holding the bag. They will wake up one day with hundreds of millions of dollars of legacy code on their hands. Yes, their brands will remain intact, but their technology will suddenly be very outmoded. Yahoo!, eBay, and AOL will find themselves competing with a new wave of commerce players that market, deliver, and service using the superior technology of X Internet. One of the upstarts will "Amazon" Amazon.

Wireless Networked Digital Devices

The proliferation of mobile computing devices, including laptops, personal digital assistants (PDAs), and wearable computers, has created a demand for wireless personal area networks (PANs). PANs allow proximal devices to share information and resources. The mobile nature of these devices places unique requirements on PANs, such as low power consumption, frequent make-and-break connections, resource discovery and utilization, and international regulations. This paper examines wireless technologies appropriate for PANs and reviews promising research in resource discovery and service utilization.

We recognize the need for PDAs to be as manageable as mobile phones, as well as the restrictive screen and input area of the mobile phone; hence the need for a new breed of computing devices to fit the bill for a PAN. Such devices become especially relevant for mobile users, such as surgeons and jet plane mechanics, who need both hands free and thus would need "wearable" computers.

This paper first examines the technology used for wireless communication. Putting a radio in a digital device provides physical connectivity; however, to make the device useful in a larger context, a networking infrastructure is required. The infrastructure allows devices to share data, applications, and resources such as printers, mass storage, and computation power. Defining a radio standard is a tractable problem, as demonstrated by the solutions presented in this paper. Designing a network infrastructure is much more complex. The second half of the paper describes several research projects that address components of the networking infrastructure.

Finally, there are questions that go beyond the scope of this paper, yet will have the greatest effect on the direction, capabilities, and future of this paradigm. Will these networking strategies be incompatible, like the various cellular phone systems in the United States, or will there be a standard upon which manufacturers and developers agree, like the GSM (Global System for Mobile communication) cellular phones in Europe? Communication demands compatibility, which is challenging in a heterogeneous marketplace. Yet by establishing and implementing compatible systems, manufacturers can offer more powerful and useful devices to their customers. Since these are, after all, digital devices living in a programmed digital world, compatibility and interoperation are possible.

Chameleon chip

Chameleon Systems Inc. of San Jose, California, is one of a new breed of reconfigurable-processor makers. Its first product, the Chameleon chip, is a Reconfigurable Communications Processor (RCP) whose design environment allows customers to convert their algorithms into hardware configurations on the fly.

Advantages

1. Early and fast designs
2. Enabling Field upgrades
3. Creating product differentiation for suppliers
4. Creating flexible & adaptive products
5. Reducing power
6. Reducing manufacturing costs
7. Increasing bandwidths

Disadvantages

1. Inertia: engineers are slow to change
2. RCP designs require a comprehensive set of tools
3. A learning curve for designers unfamiliar with reconfigurable logic

Applications

1. Wireless base stations
2. Packetized voice (VoIP)
3. Digital Subscriber Line (DSL)
4. Software Defined Radio (SDR)


Global System for Mobiles

A GSM network is composed of several functional entities, whose functions and interfaces are specified. Figure 1 shows the layout of a generic GSM network. The GSM network can be divided into three broad parts. The Mobile Station is carried by the subscriber. The Base Station Subsystem controls the radio link with the Mobile Station. The Network Subsystem, the main part of which is the Mobile services Switching Center (MSC), performs the switching of calls between the mobile users, and between mobile and fixed network users. The MSC also handles the mobility management operations. Not shown is the Operations and Maintenance Center, which oversees the proper operation and setup of the network. The Mobile Station and the Base Station Subsystem communicate across the Um interface, also known as the air interface or radio link. The Base Station Subsystem communicates with the Mobile services Switching Center across the A interface.

TETRA

Terrestrial Trunked Radio (TETRA; originally Trans-European Trunked Radio) is a specialist professional mobile radio and walkie-talkie standard used by police, fire departments, ambulance services and the military. Its main advantages over technologies such as GSM are:
o The much lower frequency used, which permits very high levels of geographic coverage with a smaller number of transmitters, cutting infrastructure cost.
o Fast call set-up: a one-to-many group call is generally set up within 0.5 seconds, compared with the many seconds required on a GSM network.
o Its infrastructure can be separated from that of the public cellphone network and made substantially more diverse and resilient, since base stations can be some distance from the area served.
o Unlike most cellular technologies, TETRA networks typically provide a number of fall-back modes, such as the ability for a base station to process local calls in the absence of the rest of the network, and 'direct mode', in which mobiles can continue to share channels directly if the infrastructure fails or is out of reach.
o Gateway mode, in which a single mobile with a connection to the network can act as a relay for other nearby mobiles that are out of contact with the infrastructure.
o TETRA also provides a point-to-point function that traditional analogue emergency services radio systems did not provide. This enables users to have a one-to-one trunked 'radio' link between sets without the direct involvement of a control room operator/dispatcher.
o Unlike cellular technologies, which connect one subscriber to one other subscriber (one-to-one), TETRA is built to do one-to-one, one-to-many and many-to-many. These operational modes are directly relevant to public safety and professional users.
Radio aspects
TETRA uses a digital modulation scheme known as π/4 DQPSK, a form of phase-shift keying, and uses TDMA as its access method. The symbol rate is 18,000 symbols per second, and each symbol maps to 2 bits. A single slot consists of 255 symbols, a single frame consists of 4 slots, and a multiframe (whose duration is approximately 1 second) consists of 18 frames. As a form of phase-shift keying, the downlink power is constant. The downlink (i.e., the output of the base station) is a continuous transmission consisting of either specific communications with mobiles, synchronisation or other general broadcasts. Although the system uses 18 frames per multiframe, only 17 of these are used for the traffic channel, with the 18th frame reserved for signalling or synchronisation. TETRA does not employ amplitude modulation; however, the frame rate is about 17.65 frames per second (18,000 symbols/s divided by 255 symbols/slot divided by 4 slots/frame), which is the cause of the perceived 'amplitude modulation' at about 17.65 Hz.
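These figures are easy to verify from the numbers given above; a quick back-of-the-envelope check:

    # Frame-rate arithmetic from the TETRA figures quoted above.
    symbol_rate = 18000          # symbols per second
    symbols_per_slot = 255
    slots_per_frame = 4
    frames_per_multiframe = 18

    slot_rate = symbol_rate / symbols_per_slot          # ~70.59 slots/s
    frame_rate = slot_rate / slots_per_frame            # ~17.65 frames/s
    multiframe_s = frames_per_multiframe / frame_rate   # ~1.02 s

    print(round(frame_rate, 2), round(multiframe_s, 2))  # 17.65 1.02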

OFDMA

Orthogonal Frequency Division Multiple Access (OFDMA) is a multiple access scheme for OFDM systems. It works by assigning a subset of subcarriers to individual users.
OFDMA features
o OFDMA is the 'multi-user' version of OFDM.
o It functions by partitioning the resources in the time-frequency space, assigning units along the OFDM symbol index and OFDM sub-carrier index.
o Each OFDMA user transmits symbols using sub-carriers that remain orthogonal to those of other users.
o More than one sub-carrier can be assigned to one user to support high-rate applications.
o It allows simultaneous transmission from several users ⇒ better spectral efficiency.
o Multiuser interference is introduced if there is a frequency synchronization error.
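As a toy illustration of the idea (the subcarrier count, user names and round-robin policy are invented for illustration): partition the subcarrier set so each user's subset is disjoint, which is what keeps users orthogonal to one another:

    # Toy OFDMA allocation: 12 subcarriers split among 3 users.
    # Disjoint subsets keep the users' transmissions orthogonal.
    num_subcarriers = 12
    users = ["u1", "u2", "u3"]

    allocation = {u: [] for u in users}
    for sc in range(num_subcarriers):
        allocation[users[sc % len(users)]].append(sc)  # round-robin comb

    # A high-rate user could instead be granted extra subcarriers.
    print(allocation)  # {'u1': [0, 3, 6, 9], 'u2': [1, 4, 7, 10], ...}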
The term 'OFDMA' is claimed to be a registered trademark by Runcom Technologies Ltd., with various other claimants to the underlying technologies through patents. It is used in the mobility mode of IEEE 802.16 WirelessMAN Air Interface standard, commonly referred to as WiMAX.

SIDAC

The SIDAC, or SIlicon Diode for Alternating Current, is a semiconductor of the thyristor family. Also referred to as a SYDAC (Silicon thYristor for Alternating Current), a bi-directional thyristor breakover diode, or more simply a bi-directional thyristor diode, it is technically specified as a bilateral voltage-triggered switch. Its operation is identical to that of the DIAC; the distinction in naming between the two devices is subject to the particular manufacturer. In general, SIDACs have higher breakover voltages and current-handling capacities than DIACs.

The operation of the SIDAC is quite simple and is functionally identical to that of a spark gap, or to two inverse-parallel Zener diodes. The SIDAC remains nonconducting until the applied voltage meets or exceeds its rated breakover voltage. Once entering this conductive state, the SIDAC continues to conduct, regardless of voltage, until the applied current falls below its rated holding current. At this point, the SIDAC returns to its initial nonconductive state to begin the cycle once again.

Somewhat uncommon in most electronics, the SIDAC is relegated to the status of a special-purpose device. However, where part counts are to be kept low, simple relaxation oscillators are needed, and the voltages are too low for practical operation of a spark gap, the SIDAC is an indispensable component.
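The breakover-then-hold behaviour is a simple hysteresis loop, easy to model (the device ratings below are invented for illustration, not taken from any datasheet):

    # Toy SIDAC model: blocking until voltage reaches breakover, then
    # conducting until current drops below the holding current.
    V_BO, I_H = 110.0, 0.05   # breakover voltage (V), holding current (A)

    def step(conducting, voltage, current):
        """Return the next conduction state given the applied V and I."""
        if not conducting:                 # currently blocking
            return voltage >= V_BO         # fires at/above breakover
        return current >= I_H              # stays on until I < holding

    state = False
    for v, i in [(50, 0.0), (120, 0.8), (30, 0.3), (30, 0.01)]:
        state = step(state, v, i)
        print(v, i, "conducting" if state else "blocking")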


Wibree

Wibree is an innovative digital radio technology that may soon become a benchmark for open wireless communication. Working much like Bluetooth, it operates in the 2.4 GHz ISM band with a physical-layer bit rate of 1 Mbps.

It is aimed at appliances such as wrist watches, wireless keyboards, toys and sports sensors; its key feature is very low power consumption over ranges of about 10 meters (30 feet) using low-cost transceiver microchips, with an output power of -6 dBm.

Conceived by Nokia and announced on 3 October 2006, it is today licensed and further researched by several major companies, including Nordic Semiconductor, Broadcom Corporation, CSR, Epson, Suunto and Taiyo Yuden. According to Bob Iannucci, the head of Nokia's research centre, this groundbreaking technology, which is ten times more power-efficient than Bluetooth, may soon replace it. The corporate giant Nordic Semiconductor is already working on the technology in order to bring out chips by mid-2007.

Trisil

A Trisil is an electronic component designed to protect electronic circuits against overvoltage. Unlike a Transil it acts as a crowbar device, switching on when the voltage on it exceeds its breakover voltage.A Trisil is bipolar, behaving the same way in both directions. It is principally a voltage-controlled triac without gate. In 1982, the only manufacturer was Thomson SA. This type of crowbar protector is widely used for protecting telecom equipment from lightning induced transients and induced currents from power lines. Other manufacturers of this type of device include Bourns and Littelfuse. Rather than using the natural breakdown voltage of the device, an extra region is fabricated within the device to form a zener diode. This allows a much tighter control of the breakdown voltage. It is also possible to make gated versions of this type of protector. In this case, the gate is connected to the telecom circuit power supply (via a diode or transistor) so that the device will crowbar if the transient exceeds the power supply voltage. The main advantage of this configuration is that the protection voltage tracks the power supply, so eliminating the problem of selecting a particular breakdown voltage for the protection circuit.