
Atkinson cycle engine



The Atkinson cycle engine is a type of internal combustion engine invented by James Atkinson in 1882. The Atkinson cycle is designed to provide efficiency at the expense of power.
The Atkinson cycle allows the intake, compression, power, and exhaust strokes of the four-stroke cycle to occur in a single turn of the crankshaft. Owing to the linkage, the expansion ratio is greater than the compression ratio, leading to greater efficiency than with engines using the alternative Otto cycle.
The Atkinson cycle may also refer to a four-stroke engine in which the intake valve is held open longer than normal to allow a reverse flow of intake air into the intake manifold. This reduces the effective compression ratio and, when combined with an increased stroke and/or reduced combustion chamber volume, allows the expansion ratio to exceed the compression ratio while retaining a normal compression pressure. This is desirable for improved fuel economy because the compression ratio in a spark ignition engine is limited by the octane rating of the fuel used. A high expansion ratio delivers a longer power stroke, allowing more expansion of the combustion gases and reducing the amount of heat wasted in the exhaust. This makes for a more efficient engine.
The disadvantage of the four-stroke Atkinson cycle engine versus the more common Otto cycle engine is reduced power density. Because a smaller portion of the intake stroke is devoted to compressing the intake air, an Atkinson cycle engine does not intake as much air as would a similarly-designed and sized Otto cycle engine.
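To make the efficiency argument concrete, the sketch below compares the ideal air-standard Otto efficiency evaluated at an assumed effective compression ratio and at a larger assumed expansion ratio; in the idealized analysis, a real over-expanded (Atkinson-type) cycle falls between these two bounds. The ratios and the gamma value are illustrative assumptions, not data for any particular engine.

    # Illustrative air-standard comparison (not engine-specific data).
    # Ideal Otto-cycle efficiency: eta = 1 - r**(1 - gamma).
    # An over-expanded (Atkinson-type) cycle with expansion ratio greater than
    # compression ratio lies between the two bounds computed here.

    GAMMA = 1.4  # ratio of specific heats for air (assumed)

    def otto_efficiency(ratio, gamma=GAMMA):
        """Ideal Otto-cycle thermal efficiency for a given volume ratio."""
        return 1.0 - ratio ** (1.0 - gamma)

    compression_ratio = 10.0   # assumed effective compression ratio
    expansion_ratio = 13.0     # assumed geometric expansion ratio

    print(f"Otto bound at compression ratio {compression_ratio}: "
          f"{otto_efficiency(compression_ratio):.1%}")
    print(f"Otto bound at expansion ratio {expansion_ratio}: "
          f"{otto_efficiency(expansion_ratio):.1%}")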
Four-stroke engines that use this same type of intake valve motion but add forced induction (supercharging) are known as Miller cycle engines.
Multiple production vehicles use Atkinson cycle engines:
Toyota Prius hybrid electric (front-wheel-drive)
Ford Escape hybrid electric (front- and four-wheel drive)
In all of these vehicles, the lower power level of the Atkinson cycle engine is compensated for through the use of electric motors in a hybrid electric drive train. These electric motors can be used independent of, or in combination with, the Atkinson cycle engine.

Ball Piston machines

Ever since machines with reciprocating pistons came into existence, efforts have been made to improve their efficiency. The main drawbacks of reciprocating machines are the considerable number of moving parts due to the presence of valves, the large inertial loads that reduce dynamic balance, and the leakage and friction due to the presence of piston rings. This drive for improvement has led to rotary machines.
One main advantage to be gained with a rotary machine is a reduction of inertial loads and better dynamic balance. The Wankel rotary engine has been the most successful example to date, but sealing problems contributed to its decline. This is where the idea of ball piston machines came from. In the compressor and pump arena, reduction of reciprocating mass in positive displacement machines has always been an objective, and has been achieved most effectively by lobe, gear, sliding vane, liquid ring, and screw compressors and pumps, but at the cost of hardware complexity or higher losses. Lobe, gear, and screw machines have relatively complex rotating element shapes and friction losses. Sliding vane machines have sealing and friction issues. Liquid ring compressors have fluid turbulence losses. The new design concept of the ball piston engine uses a different approach that has many advantages, including low part count and simplicity of design, very low friction, low heat loss, high power-to-weight ratio, perfect dynamic balance, and cycle thermodynamic tailoring capability.

Oil Drilling


Oil and natural gas furnish about three-fourths of our energy needs, fueling our homes, workplaces, factories and transportation systems. Using a variety of methods, on land and at sea, small crews of specialized workers search for geologic formations that are likely to contain oil and gas. Seismic prospecting, a technique based on measuring the time it takes sound waves to travel through underground formations and return to the surface, has revolutionized oil and gas exploration.
After the presence of oil beneath the ground has been established, the most important conventional technique, known as rotary drilling, is employed to extract oil from the well. Advanced techniques such as horizontal directional drilling and laser drilling can also be employed in oil drilling.
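As a minimal illustration of the travel-time idea behind seismic prospecting, the sketch below converts a two-way reflection time into an approximate depth. The propagation velocity is an assumed value; real surveys use layered velocity models and many receivers.

    # Minimal sketch of the reflection-time idea behind seismic prospecting:
    # depth ~ velocity * two-way travel time / 2.  The velocity is an assumption.

    def reflector_depth(two_way_time_s, velocity_m_per_s=2500.0):
        """Estimate depth (m) of a reflecting formation from two-way travel time."""
        return velocity_m_per_s * two_way_time_s / 2.0

    print(reflector_depth(1.6))   # -> 2000.0 m for an assumed 2500 m/s velocity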


Sensotronic Brake Control (SBC) is the name given to an innovative electronically controlled brake system which Mercedes-Benz will fit to future passenger car models. Following on from the Mercedes innovations ABS, ASR, ESP® and Brake Assist, this system is regarded as yet another important milestone to enhance driving safety. With Sensotronic Brake Control electric impulses are used to pass the driver’s braking commands onto a microcomputer which processes various sensor signals simultaneously and, depending on the particular driving situation, calculates the optimum brake pressure for each wheel. As a result, SBC offers even greater active safety than conventional brake systems when braking in a corner or on a slippery surface. A high-pressure reservoir and electronically controllable valves ensure that maximum brake pressure is available much sooner. Moreover, the system offers innovative additional functions to reduce the driver’s workload. These include Traffic Jam Assist, which brakes the vehicle automatically in stop-and-go traffic once the driver takes his or her foot off the accelerator. The Soft-Stop function – another first – allows particularly soft and smooth stopping in town traffic.
Mechatronics – a new term is gaining popularity within the automotive industry and is rapidly developing into the catchword of a quiet technological revolution which in many fields stands century-old principles on their head. Mechatronics brings together two disciplines which in many cases were thought to be irreconcilable, namely mechanics and electronics.
Hence automobile functions which hitherto worked purely mechanically and partly with hydraulic assistance will in future be controlled by high-performance microcomputers and electronically controllable actuators. These either replace the conventional mechanical components or else enhance their function. The mechatronic interplay therefore opens up hitherto inconceivable possibilities to further raise the safety and comfort levels of modern passenger cars. For example: it was only thanks to mechatronics that an electronically controlled suspension system which instantly adapts to prevailing conditions when driving off, braking or cornering -- thus providing a totally new driving experience -- became a reality. In 1999 Mercedes-Benz launched this system under the name Active Body Control (ABC) in the flagship CL coupé, thereby signalling the advent of a new era of suspension technology.
This electronically controlled suspension system will quickly be followed by the electronic brake system: Mercedes-Benz and Bosch have teamed up on this benchmark development project which will shortly enter into series production at the Stuttgart automobile brand under the name Sensotronic Brake Control -- or SBC for short.
It turns the conventional hydraulic brake into an even more powerful mechatronic system. Its microcomputer is integrated into the car’s data network and processes information from various electronic control units. In this way, electric impulses and sensor signals can be instantly converted into braking commands, providing a marked safety and comfort gain for drivers.
Recently, the automotive industry has paid increasing attention to improving the safety and comfort of its vehicle models. The new Mercedes-Benz SL 500 illustrates this by incorporating technological innovations such as Active Body Control (ABC), the Electronic Stability Program (ESP) and Sensotronic Brake Control (SBC). Sensotronic Brake Control is an innovative electrohydraulic brake system which gives maximum safety and comfort when braking.
This seminar illustrates the mechanism and performance characteristics of Sensotronic Brake Control with the aid of the theory and mechanism of a common hydraulic brake system. In SBC, the various factors involved in braking, such as wheel speed, braking force for each wheel and steering angle, are sensed by electronic means. Electric impulses pass the driver's braking commands to a microcomputer, which processes the various sensor signals simultaneously and, depending on the particular driving situation, calculates the optimum brake pressure for each wheel. As a result, SBC offers even greater active safety than conventional braking systems when braking in a corner or on a slippery surface.
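Purely as an illustration of the signal flow just described (sensor inputs in, one brake-pressure command per wheel out), here is a toy sketch. The scaling rules and numbers are invented for illustration only and are not the actual Mercedes-Benz/Bosch control law.

    # Toy illustration of the SBC signal flow: sensor inputs in, one pressure
    # command per wheel out.  Scaling factors are invented for illustration.

    def per_wheel_pressure(pedal_demand_bar, wheel_slip, steering_angle_deg):
        """Return a dict of brake-pressure commands (bar), one per wheel."""
        pressures = {}
        for wheel, slip in wheel_slip.items():
            p = pedal_demand_bar
            p *= max(0.0, 1.0 - slip)              # back off a wheel close to locking
            if steering_angle_deg > 0 and wheel.endswith("left"):
                p *= 0.9                           # ease inner wheels in a left turn
            elif steering_angle_deg < 0 and wheel.endswith("right"):
                p *= 0.9
            pressures[wheel] = round(p, 1)
        return pressures

    slip = {"front_left": 0.05, "front_right": 0.02,
            "rear_left": 0.20, "rear_right": 0.03}
    print(per_wheel_pressure(80.0, slip, steering_angle_deg=10.0))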

Air Suspension System



Suspension is the term given to the system of
springs, shock absorbers and linkages that connects a vehicle to its wheels. Suspension systems serve a dual purpose – contributing to the car's handling and braking for good active safety and driving pleasure, and keeping vehicle occupants comfortable and reasonably well isolated from road noise, bumps, and vibrations. These goals are generally at odds, so the tuning of suspensions involves finding the right compromise. The suspension also protects the vehicle itself and any cargo or luggage from damage and wear. The design of front and rear suspension of a car may be different.
Conventional metal springs have several drawbacks which the air suspension system overcomes, so air suspension is increasingly preferred and used these days. Some of the advantages of this system are:
1) The automatic control devices installed in the vehicle make optimum use of the variable space available for wheel deflection.
2) The height of the automobile remains steady, so changes in headlamp alignment due to varying loads are restricted.
3) It helps to reduce dynamic loading while the vehicle is in motion, as the spring rate variation between laden and unladen weight is much smaller.
4) It gives a smooth and comfortable ride.
Air springs are classified into two types: 1) bellows type and 2) piston type. In a typical layout, the air springs are mounted on the front and rear axles. Atmospheric air first passes through a filter, where dirt is removed, and then on to the compressor. The air is compressed here and its pressure is raised from atmospheric to about 250 MPa. This pressure is maintained by the accumulator tank. A safety relief valve is provided on the accumulator as a safety device; it opens when the pressure rises above 250 MPa. The air then moves to the lift control valve and through the levelling valves to the air springs.
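As a rough, textbook-style illustration of how an air spring's rate depends on pressure, piston area and volume (not a figure taken from the text above), the sketch below uses the common approximation k ~ n*p*A^2/V for a polytropic gas spring, ignoring the change of effective area with deflection. Every number in it is an assumption.

    # Rough estimate of air-spring stiffness: k ~ n * p_abs * A**2 / V
    # (polytropic gas spring, constant effective area).  All values assumed.

    def air_spring_stiffness(p_abs_pa, area_m2, volume_m3, n=1.3):
        """Approximate vertical stiffness (N/m) of a simple air spring."""
        return n * p_abs_pa * area_m2 ** 2 / volume_m3

    k = air_spring_stiffness(p_abs_pa=8e5, area_m2=0.03, volume_m3=0.004)
    print(f"approx. stiffness: {k/1000:.0f} kN/m")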

Air Cylinders



Air cylinders resemble the cylinders used in steam engines or in hydraulic circuits. Air cylinders are also called actuators or air motors. An air cylinder is a device which converts pneumatic power into mechanical power by reducing the pressure of the compressed air to atmospheric pressure.
The advantage of an air cylinder is that it cannot be overloaded; it simply stalls when overloaded. The essential parts of an air cylinder are:
1. Cylinder tube
2. Piston
3. Piston rod
4. Air inlet
5. Air outlet
Classification of Air Cylinders
1. According to motion obtained: a. Rotating b. Non-rotating
2. According to operating loads: a. Light duty b. Medium duty c. Heavy duty
3. According to number of pressure faces of the piston: a. Single acting b. Double acting
4. According to piston arrangement: a. Tandem b. Duplex c. Double ended d. Air cushion e. Multi-position cylinder f. Impact cylinder g. Cable cylinder
5. According to mounting of the cylinder: a. Centre line mounting b. Rod end flange mounting c. Trunnion mounting d. Hinged mounting e. Horizontal pedestal mounting f. Rabbeted mounting
6. According to construction: a. Piston type b. Diaphragm type
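The thrust available from a double-acting cylinder follows directly from the supply pressure and the piston areas; the short sketch below illustrates this with assumed bore, rod and pressure values.

    # Simple force estimate for a double-acting air cylinder (illustrative only):
    # extend force = pressure * bore area; retract force is reduced by the rod area.

    import math

    def cylinder_forces(pressure_pa, bore_m, rod_m):
        """Return (extend_force_N, retract_force_N) for a double-acting cylinder."""
        bore_area = math.pi * bore_m ** 2 / 4.0
        rod_area = math.pi * rod_m ** 2 / 4.0
        return pressure_pa * bore_area, pressure_pa * (bore_area - rod_area)

    extend, retract = cylinder_forces(pressure_pa=6e5, bore_m=0.050, rod_m=0.020)
    print(f"extend: {extend:.0f} N, retract: {retract:.0f} N")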

Babbitt metal

Babbitt metal, also called white metal, is an alloy used to provide the bearing surface in a plain bearing. It was invented in 1839 by Isaac Babbitt in Taunton, Massachusetts, USA. The term is used today to describe a series of alloys used as a bearing metal. Babbitt metal is characterized by its resistance to galling.

Babbitt metal is an antifriction alloy of copper, antimony, and tin used as a bearing material for axles and crankshafts, and occasionally in dentistry for dies and counterdies in swaging dental plates. In present-day usage the term is applied to a whole class of silver-white bearing metals, or "white metals." These alloys usually consist of relatively hard crystals embedded in a softer matrix, a structure important for machine bearings. They are composed primarily of tin, copper, and antimony, with traces of other metals added in some cases and lead substituted for tin in others. Common compositions for Babbitt alloys:

90% tin 10% copper

89% tin 7% antimony 4% copper

80% lead 15% antimony 5% tin

Originally used as a cast-in-place bulk bearing material, it is now more commonly used as a thin surface layer in a complex, multi-metal structure.

Babbitt metal is soft and easily damaged, and seems at first sight an unlikely candidate for a bearing surface, but this appearance is deceptive. The structure of the alloy is made up of small hard crystals dispersed in a matrix of softer alloy. As the bearing wears the harder crystal is exposed, with the matrix eroding somewhat to provide a path for the lubricant between the high spots that provide the actual bearing surface.

VVT-i

VVT-i, or Variable Valve Timing with intelligence, is an automobile variable valve timing technology developed by Toyota. The Toyota VVT-i system replaces the Toyota VVT offered starting in 1991 on the 4A-GE 20-Valve engine. The VVT system is a 2-stage hydraulically controlled cam phasing system.

VVT-i, introduced in 1996, varies the timing of the intake valves by adjusting the relationship between the camshaft drive (belt, scissor-gear or chain) and intake camshaft. Engine oil pressure is applied to an actuator to adjust the camshaft position. In 1998, 'Dual' VVT-i (adjusting both intake and exhaust camshafts) was first introduced in the RS200 Altezza's 3S-GE engine. Dual VVT-i is also found in Toyota's new generation V6 engine, the 3.5L 2GR-FE V6. This engine can be found in the Avalon, RAV4, and Camry in the US, the Aurion in Australia, and various models in Japan, including the Estima. Other Dual VVT-i engines will be seen in upcoming Toyota models, including a new 4-cylinder Dual VVT-i engine for the new generation 2007/2008 Corolla. Another notable implementation of Dual VVT-i is the 2GR-FSE D-4S engine of the Lexus GS450h. By adjusting the valve timing, engine start and stop occur virtually unnoticeably at minimum compression, and fast heating of the catalytic converter to its light-off temperature is possible, thereby reducing HC emissions considerably.

In 1998, Toyota started offering a new technology, VVTL-i, which can alter valve lift (and duration) as well as valve timing. In the case of the 16-valve 2ZZ-GE, the engine has 2 camshafts, one operating intake valves and one operating exhaust valves. Each camshaft has two lobes per cylinder, one low-rpm lobe and one high-rpm, high-lift, long-duration lobe. Each cylinder has two intake valves and two exhaust valves. Each set of two valves is controlled by one rocker arm, which is operated by the camshaft. Each rocker arm has a slipper follower mounted to the rocker arm with a spring, allowing the slipper follower to move up and down with the high lobe without affecting the rocker arm. When the engine is operating below 6000 rpm, the low lobe is operating the rocker arm and thus the valves. When the engine is operating above 6000 rpm, the ECU activates an oil pressure switch which pushes a sliding pin under the slipper follower on each rocker arm. This, in effect, switches to the high lobe, causing high lift and longer duration.
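The lobe selection just described boils down to a threshold decision in the ECU; the toy sketch below mirrors that logic. The 6000 rpm figure comes from the text, while the warm-oil interlock is an assumed simplification, not Toyota's actual control code.

    # Toy sketch of the VVTL-i lobe selection described above.  The 6000 rpm
    # threshold is from the text; the warm-oil interlock is an assumption.

    def select_cam_lobe(engine_rpm, oil_warm=True):
        """Return which cam lobe drives the rocker arm in the current state."""
        if engine_rpm > 6000 and oil_warm:
            return "high lobe (high lift, long duration)"
        return "low lobe"

    for rpm in (3000, 6500):
        print(rpm, "->", select_cam_lobe(rpm))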

Toyota has now ceased production of its VVTL-i engines for most markets, because the engine does not meet Euro IV specifications for emissions. As a result, some Toyota models have been discontinued, including the Corolla T-Sport (Europe), Corolla Sportivo (Australia), Celica, Corolla XRS, Toyota Matrix XRS, and the Pontiac Vibe GT, all of which had the 2ZZ-GE engine fitted.

Two-way catalytic converters

A two-way catalytic converter has two simultaneous tasks:

  1. Oxidation of carbon monoxide to carbon dioxide: 2CO + O2 → 2CO2
  2. Oxidation of unburnt hydrocarbons (unburnt and partially-burnt fuel) to carbon dioxide and water: 2CxHy + (2x+y/2)O2 → 2xCO2 + yH2O

This type of catalytic converter is widely used on diesel engines to reduce hydrocarbon and carbon monoxide emissions. They were also used on spark ignition (gasoline) engines in US-market automobiles up until 1981, when they were replaced by three-way converters due to regulatory changes requiring reductions in NOx emissions. Reducing NOx emissions requires an additional reduction step; platinum-based catalysts can be used for this.

Instead of catalysis, diesel engines can use a true reactant, ammonia pyrolyzed in situ from urea, to reduce the NOx to nitrogen; see AdBlue.

The regulations regarding hydrocarbons vary according to the engine regulated, as well as the jurisdiction. In some cases, "non-methane hydrocarbons" are regulated, while in other cases, "total hydrocarbons" are regulated. Technology for one application (to meet a non-methane hydrocarbon standard) may not be suitable for use in an application that has to meet a total hydrocarbon standard. Methane is not toxic, but is more difficult to break down in a catalytic converter, so in effect a "non-methane hydrocarbon" standard can be considered to be looser. Since methane is a greenhouse gas, interest is rising in how to eliminate emissions of it.

Catalytic converter

A catalytic converter (colloquially, 'cat' or 'catcon') is a device used to reduce the toxicity of emissions from an internal combustion engine. First widely introduced on series-production automobiles in the US market for the 1975 model year to comply with tightening EPA regulations on auto exhaust, catalytic converters are still most commonly used in motor vehicle exhaust systems. Catalytic converters are also used on generator sets, forklifts, mining equipment, trucks, buses, trains, and other engine-equipped machines. A catalytic converter provides an environment for a chemical reaction wherein toxic combustion byproducts are converted to less-toxic gases. The catalytic converter was invented at Trinity College (Connecticut).

Three-way catalytic converters

A three-way catalytic converter has three simultaneous tasks:

  1. Reduction of nitrogen oxides to nitrogen and oxygen: 2NOx → xO2 + N2
  2. Oxidation of carbon monoxide to carbon dioxide: 2CO + O2 → 2CO2
  3. Oxidation of unburnt hydrocarbons (HC) to carbon dioxide and water: 2CxHy + (2x+y/2)O2 → 2xCO2 + yH2O

These three reactions occur most efficiently when the catalytic converter receives exhaust from an engine running slightly above the stoichiometric point. This is between 14.8 and 14.9 parts air to 1 part fuel, by weight, for gasoline (the ratio for LPG, natural gas and ethanol fuels is slightly different, requiring modified fuel system settings when using those fuels). When there is more oxygen than required, the system is said to be running lean, and is in an oxidizing condition. In that case, the converter's two oxidizing reactions (oxidation of CO and hydrocarbons) are favoured, at the expense of the reducing reaction. When there is excess fuel, the engine is running rich, and the reduction of NOx is favoured, at the expense of CO and HC oxidation. If an engine could be operated with infinitesimally small oscillations about the stoichiometric point for the fuel used, it would be theoretically possible to reach 100% conversion efficiency.
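The lean/rich bookkeeping above is often expressed as a lambda value, the actual air/fuel ratio divided by the stoichiometric ratio. The sketch below uses 14.85:1 as an assumed midpoint of the 14.8-14.9 range quoted for gasoline; the air and fuel masses are illustrative.

    # Lambda = (actual air/fuel ratio) / (stoichiometric ratio).
    # 14.85:1 is taken as a midpoint of the 14.8-14.9 range quoted for gasoline.

    STOICH_AFR_GASOLINE = 14.85  # parts air to 1 part fuel, by weight (assumed midpoint)

    def mixture_state(air_mass_g, fuel_mass_g, stoich=STOICH_AFR_GASOLINE):
        lam = (air_mass_g / fuel_mass_g) / stoich
        if lam > 1.0:
            return lam, "lean: oxidation of CO and HC favoured"
        if lam < 1.0:
            return lam, "rich: reduction of NOx favoured"
        return lam, "stoichiometric: all three reactions near peak efficiency"

    print(mixture_state(air_mass_g=150.0, fuel_mass_g=10.0))   # slightly lean
    print(mixture_state(air_mass_g=145.0, fuel_mass_g=10.0))   # slightly rich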

Since 1981, three-way catalytic converters have been at the heart of vehicle emission control systems in North American roadgoing vehicles, and have been used on "large spark ignition" (LSI) engines since 2001 in California and from 2004 in the other 49 states. LSI engines are used in forklifts, aerial boom lifts, ice resurfacing machines and construction equipment. The converters used in those types of machines are three-way types, and are designed to reduce combined NOx+HC emissions from 12 gram/BHP-hour to 3 gram/BHP-hour or less, as mandated by the United States Environmental Protection Agency's (EPA) 2004 regulations. A further drop to 2 gram/BHP-hour of NOx+HC emissions was mandated for 2007. (Note: NOx is the industry-standard short form for nitric oxide (NO) and nitrogen dioxide (NO2), both of which are smog precursors. HC is the industry short form for hydrocarbons.) The EPA intends to introduce emissions rules for stationary spark ignition engines, to take effect in January 2008.


Tiptronic transmission is the latest innovation in the field of automatic power transmission systems, combining the best of both worlds of transmission, manual and automatic. In automatic mode, Tiptronic offers a choice of five different gear patterns ranging from economy to sport. Porsche introduced the Tiptronic transmission in its sports cars.

Types of power transmission

There are four types of transmission:

1. Automatic

2. Semi Automatic

3. Manual transmission

4. Continuously variable transmission

The principle behind the continuously variable transmission is that of an automatic, but with no discrete gear selection. One such type was known as the Variomatic transmission; it relied on centrifugal force throwing out bob weights within a pulley assembly, with drive belts transmitting the power.

Tiptronic is a type of discrete automatic transmission developed by Porsche and used in its vehicles and those of its licensees. A Tiptronic transmission can operate just as the common type of automatic transmission, but it also allows the driver to override the automatic mode by moving the shift lever into a second (Tiptronic) shift gate equipped with two spring-loaded positions: "upshift" and "downshift". Once in this gate, the driver takes over most of the shifting decisions ordinarily performed by the transmission's computer, permitting, for example, the delaying of an upshift for increased acceleration or to increase the braking effect of the engine. On some models, the upshift and downshift operations can also be commanded by pushbuttons or paddle shifters installed on the steering wheel with an optional display in the instrument panel indicating the current gear selection.

Though Tiptronic transmissions allow the driver a certain measure of discrete control, the Tiptronic design is implemented using a torque converter like other automatic transmissions. A Tiptronic is not a computer controlled clutch-manual transmission or semi-automatic transmission. Most Tiptronic implementations still make some shifts automatically, primarily to protect the engine and transmission. For example, as used by Audi, a five-speed Tiptronic will make the upshifts from 1 to 2 automatically when moving off from a stop even when in Tiptronic mode; the transmission then waits for the user's upshift command before proceeding from 2 to 3, 3 to 4 and 4 to 5, although the transmission will still upshift if the redline is approached. On deceleration, the transmission will make all downshifts automatically to avoid running the engine at too-low an RPM although the user can accelerate any downshift (that would not violate the redline), thus allowing improved engine braking or preparation for future acceleration. There are some exceptions to this; the system used in the Aston Martin DB9 is designed to hold the gear at the engine's redline, though it will still downshift automatically. This system also allows the engine to blip the throttle during downshifts for a smoother shift, reducing the "jerk" by the wheels, which affects traction.
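The shift behaviour described for the Audi five-speed implementation lends itself to a small decision sketch, shown below. This is an illustrative toy, not any manufacturer's control code; the redline, the downshift rpm and the gear-step factor are assumptions.

    # Toy sketch of the Tiptronic-gate behaviour described above.
    # REDLINE_RPM, MIN_RPM_BEFORE_DOWNSHIFT and the 1.4 gear-step factor are assumed.

    REDLINE_RPM = 6500
    MIN_RPM_BEFORE_DOWNSHIFT = 1200

    def next_gear(gear, rpm, driver_request=None):
        """Return the gear after applying the sketched Tiptronic-gate rules."""
        if gear == 1 and rpm > 2000:            # automatic 1 -> 2 when moving off
            return 2
        if rpm >= REDLINE_RPM and gear < 5:     # forced upshift near the redline
            return gear + 1
        if rpm <= MIN_RPM_BEFORE_DOWNSHIFT and gear > 1:   # automatic downshift
            return gear - 1
        if driver_request == "up" and gear < 5 and rpm > MIN_RPM_BEFORE_DOWNSHIFT:
            return gear + 1
        if driver_request == "down" and gear > 1 and rpm * 1.4 < REDLINE_RPM:
            return gear - 1                     # refuse downshifts that would over-rev
        return gear

    print(next_gear(gear=2, rpm=4000, driver_request="up"))   # -> 3
    print(next_gear(gear=3, rpm=6600))                        # -> 4 (redline upshift)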

Most luxury vehicles with a Tiptronic transmission have two fully-automatic modes: One, identified as "Comfort" or similar, and another, usually called "Sport," which delays upshifts for a sportier driving at the expense of fuel, wear, comfort, and noise. Then, within each major mode there are additional hidden modes selected by the transmission itself; these modes adapt to the demands being placed upon the car by the driver. In this way, shift quality has been improved due to better electronic controls; these electronics modify the shift points to adapt to a given operator's driving style.

Some makers such as Aston Martin, BMW and Smart offer paddle shifters behind the steering wheel for controlling their similar transmissions.

Some systems such as Ferrari's F1-Superfast, Toyota SMT, and Volkswagen's DSG are different from Tiptronic transmissions in that they are actually based on sequential transmissions but have an electronically controlled clutch (or in Volkswagen's case, two clutches). These are generally not referred to as Tiptronics but are considered to be true semi-automatic transmissions.

Rheology

Rheology is the study of the deformation and flow of matter under the influence of an applied stress. The term was coined by Eugene Bingham, a professor at Lehigh University, in 1920, from a suggestion by a colleague, Markus Reiner. The term was inspired by Heraclitus's famous expression panta rhei, 'everything flows'. Rheology unites the seemingly unrelated fields of plasticity and non-Newtonian fluids by recognising that both these types of materials are unable to support a shear stress in static equilibrium. In this sense, a plastic solid is a fluid. Granular rheology refers to the continuum mechanical description of granular materials. One of the tasks of rheology is to empirically establish the relationships between deformations and stresses, respectively their derivatives, by adequate measurements. These experimental techniques are known as rheometry. Such relationships are then amenable to mathematical treatment by the established methods of continuum mechanics.

Rheology has important applications in engineering, geophysics, pharmaceutics and physiology. In particular, hemorheology, the study of blood flow, has an enormous medical significance. In geology, solid Earth materials that exhibit viscous flow over long time scales are known as rheids. In engineering, rheology has had its predominant application in the development and use of polymeric materials (plasticity theory has been similarly important for the design of metal forming processes, but in the engineering community is often not considered a part of rheology). Rheology modifiers are also a key element in the development of paints in achieving paints that will level but not sag on vertical surfaces.
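The stress/deformation-rate relationships that rheometry seeks to establish are often summarised by simple constitutive models. The sketch below evaluates three common ones (Newtonian, power-law and Bingham plastic) at a few shear rates; all parameter values are illustrative, not measured data.

    # Three common constitutive models relating shear stress (Pa) to shear rate (1/s).
    # Parameter values are illustrative only.

    def newtonian(shear_rate, viscosity=1.0):
        return viscosity * shear_rate

    def power_law(shear_rate, consistency=2.0, n=0.5):
        """n < 1: shear-thinning; n > 1: shear-thickening."""
        return consistency * shear_rate ** n

    def bingham(shear_rate, yield_stress=5.0, plastic_viscosity=0.8):
        """A Bingham plastic does not flow until the yield stress is exceeded."""
        return yield_stress + plastic_viscosity * shear_rate if shear_rate > 0 else 0.0

    for rate in (0.1, 1.0, 10.0):
        print(f"rate {rate:5.1f}: Newtonian {newtonian(rate):6.2f}  "
              f"power-law {power_law(rate):6.2f}  Bingham {bingham(rate):6.2f}")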

Filament-Wound Pipes

This topic is devoted to the analysis of the mechanical behavior of filament-wound pipes. The process of filament winding makes it possible to produce rotationally symmetric parts made of composite material containing polymers reinforced with long fibers. Most of the time, this process consists of winding a fiber tow coated with a thermosetting polymeric matrix around a mandrel, covering the entire mandrel after successive passes. Following a polymerization phase, the mandrel can be removed, leaving only the composite structure. In certain cases, the mandrel can be left in place and is then used as a liner.

Filament winding is used for the manufacture of gas and fluid vessels. One of its main uses, however, is to produce pipes for fluid transportation in the nuclear and oil industries, or for marine applications. Pipes made of composite materials offer good resistance to adverse environmental conditions; glass fibers are generally used for these applications. The loading of these pipe structures is often complex and involves studying the mechanical behavior under multiaxial loading.
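A first feel for that multiaxial loading comes from the thin-wall membrane stresses in a closed pipe under internal pressure, where the hoop stress is twice the axial stress. The sketch below computes both; the pressure and dimensions are assumptions, and a real filament-wound laminate requires a proper composite (laminate-level) analysis.

    # Thin-wall membrane stresses in a closed pipe under internal pressure:
    # hoop = p*r/t, axial = p*r/(2t).  Dimensions and pressure are assumed.

    def membrane_stresses(pressure_pa, radius_m, wall_m):
        hoop = pressure_pa * radius_m / wall_m
        axial = pressure_pa * radius_m / (2.0 * wall_m)
        return hoop, axial

    hoop, axial = membrane_stresses(pressure_pa=2e6, radius_m=0.10, wall_m=0.005)
    print(f"hoop: {hoop/1e6:.0f} MPa, axial: {axial/1e6:.0f} MPa (2:1 ratio)")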

The photoacoustic effect - production of sound from light - may be exploited for detection and localization of gas leaks on the surface of otherwise sealed components. The technique involves filling the test component with a photoactive tracer gas, and irradiating the component to produce photoacoustic sound from any leak site where a tracer gas cloud forms. This presentation describes experiments utilizing 10.6-micron radiation from a carbon-dioxide laser with sulfur hexafluoride as a tracer gas. Here, photoacoustic sounds from a laminar plume of sulfur hexafluoride and several NIST-traceable calibrated leak sources with leak rates between 1 cubic centimeter in 4.6 hrs and 1 cubic centimeter in 6.3 years were recorded with four or twelve microphones in a bandwidth from 3 kHz to ~100 kHz. The measured photoacoustic waveforms compare well with those from an approximate theoretical development based on the forced wave equation when the unmeasured size of the photoactive gas cloud is adjusted within the likely gas-cloud diffusion zone. However, for small gas clouds, the photoacoustic sound amplitudes predicted by the approximate theory fall far below the experimental observations, and several potential reasons for this mismatch will be offered. Interestingly, the higher measured signal amplitudes imply that the sensitivity of photoacoustic leak testing may reach or even exceed the capabilities of the most sensitive commercial leak test systems based on helium mass-spectrometers.
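For comparison, the two quoted calibrated leak rates can be converted to cubic centimetres per second; the short sketch below does the arithmetic.

    # Converting the quoted calibrated leak rates into cm^3/s for comparison.

    SECONDS_PER_HOUR = 3600.0
    SECONDS_PER_YEAR = 365.25 * 24 * 3600.0

    fast_leak = 1.0 / (4.6 * SECONDS_PER_HOUR)     # 1 cm^3 in 4.6 hours
    slow_leak = 1.0 / (6.3 * SECONDS_PER_YEAR)     # 1 cm^3 in 6.3 years

    print(f"fast leak: {fast_leak:.2e} cm^3/s")    # ~6.0e-05 cm^3/s
    print(f"slow leak: {slow_leak:.2e} cm^3/s")    # ~5.0e-09 cm^3/s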

The most fundamental consideration in CFD is how one treats a continuous fluid in a discretized fashion on a computer. One method is to discretize the spatial domain into small cells to form a volume mesh or grid, and then apply a suitable algorithm to solve the equations of motion (Euler equations for inviscid flow, Navier-Stokes equations for viscous flow). In addition, such a mesh can be either irregular (for instance consisting of triangles in 2D, or pyramidal solids in 3D) or regular; the distinguishing characteristic of the former is that each cell must be stored separately in memory. Lastly, if the problem is highly dynamic and occupies a wide range of scales, the grid itself can be dynamically modified in time, as in adaptive mesh refinement methods.

If one chooses not to proceed with a mesh-based method, a number of alternatives exist, notably:

  • smoothed particle hydrodynamics, a Lagrangian method of solving fluid problems,
  • Spectral methods, a technique where the equations are projected onto basis functions like the spherical harmonics and Chebyshev polynomials
  • Lattice Boltzmann methods, which simulate an equivalent mesoscopic system on a Cartesian grid, instead of solving the macroscopic system (or the real microscopic physics).

Methodology

In all of these approaches the same basic procedure is followed.

  1. The geometry (physical bounds) of the problem is defined.
  2. The volume occupied by the fluid is divided into discrete cells (the mesh).
  3. The physical modelling is defined - for example, the equations of motion + enthalpy + species conservation.
  4. Boundary conditions are defined. This involves specifying the fluid behaviour and properties at the boundaries of the problem. For transient problems, the initial conditions are also defined.
  5. The equations are solved iteratively, either as a steady-state or as a transient problem.
  6. Analysis and visualization of the resulting solution.
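As a minimal, self-contained illustration of these six steps, the sketch below solves the 1-D diffusion (heat) equation with an explicit finite-difference scheme on a uniform grid. The geometry, coefficients, boundary values and step counts are arbitrary assumptions chosen only to keep the example short and stable.

    # Minimal 1-D illustration of the steps above: explicit finite-difference
    # solution of u_t = alpha * u_xx on a uniform grid.  All values are assumed.

    # 1. geometry: a rod of length 1.0
    LENGTH, N_CELLS = 1.0, 50
    DX = LENGTH / N_CELLS

    # 2. mesh: one value per cell
    u = [0.0] * N_CELLS

    # 3. physical model: diffusion coefficient
    ALPHA = 1e-3

    # 4. boundary/initial conditions: hot left end, cold right end, cold interior
    U_LEFT, U_RIGHT = 1.0, 0.0

    # 5. solve iteratively (explicit time stepping, stable since ALPHA*DT/DX**2 = 0.2 < 0.5)
    DT = 0.2 * DX * DX / ALPHA
    for _ in range(15000):
        new_u = u[:]
        for i in range(N_CELLS):
            left = U_LEFT if i == 0 else u[i - 1]
            right = U_RIGHT if i == N_CELLS - 1 else u[i + 1]
            new_u[i] = u[i] + ALPHA * DT / DX**2 * (left - 2.0 * u[i] + right)
        u = new_u

    # 6. analyse/visualise: a coarse profile, approaching a straight line from hot to cold
    print([round(u[i], 2) for i in range(0, N_CELLS, 10)])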

Mechanosynthesis

In conventional chemical synthesis or chemosynthesis, reactive molecules encounter one another through random thermal motion in a liquid or vapor. In a hypothesized process of mechanosynthesis, reactive molecules would be attached to molecular mechanical systems, and their encounters would result from mechanical motions bringing them together in planned sequences, positions, and orientations. It is envisioned that mechanosynthesis would avoid unwanted reactions by keeping potential reactants apart, and would strongly favor desired reactions by holding reactants together in optimal orientations for many molecular vibration cycles. Mechanosynthetic systems would be designed to resemble some biological mechanisms.

While the description of mechanosynthesis given above has not yet been achieved, primitive mechanochemistry has been performed at cryogenic temperatures using scanning tunneling microscopes. So far, such devices provide the closest approach to fabrication tools for molecular engineering. Broader exploitation of mechanosynthesis awaits more advanced technology for constructing molecular machine systems - including a molecular assembler or precursors thereof.

Much of the excitement regarding mechanochemistry concerns its potential use in automated assembly of molecular-scale devices. Such techniques appear to have many applications in medicine, aviation, resource extraction, manufacturing and warfare. Most theoretical explorations of such machines have focused on using carbon, because of the many strong bonds it can form, the many types of chemistry these bonds permit, and the utility of these bonds in medical and mechanical applications. Carbon forms diamond, for example, which, if cheaply available, would be an excellent material for many machines. It has been suggested, notably by K. Eric Drexler, that mechanosynthesis will be fundamental to molecular manufacturing based on nanofactories capable of building macroscopic objects with atomic precision. The potential for these has been disputed, notably by Nobel Laureate Richard Smalley, leading to a famous dispute between the two of them - see nanotechnology. The Nanofactory Collaboration, founded by Robert Freitas and Ralph Merkle in 2000, is a focused ongoing effort involving 23 researchers from 10 organizations and 4 countries that is developing a practical research agenda specifically aimed at positionally-controlled diamond mechanosynthesis and diamondoid nanofactory development.

In practice, getting exactly one molecule to a known place on the microscope's tip is possible, but has proven difficult to automate. Since practical products require at least several hundred million atoms, this technique has not yet proven practical in forming a real product.

The goal of mechanoassembly research at this point focuses on overcoming these problems by calibration, and selection of appropriate synthesis reactions. The first product to be built by these means will probably be a specialized, very small (roughly 1,000 nanometers on a side) machine tool that can build copies of itself using mechanochemical means, under the control of an external computer. In the literature, such a tool is called an assembler or molecular assembler. Once assemblers exist, geometric growth (copies making copies) could reduce the cost of assemblers rapidly. Control by an external computer should then permit large groups of assemblers to construct large, useful projects to atomic precision. One such project would combine molecular-level conveyor belts with permanently-mounted assemblers to produce a factory.

In part to resolve this and related questions about the dangers of industrial accidents and runaway events equivalent to Chernobyl and Bhopal, and the more remote issue of ecophagy, grey goo and green goo (various potential disasters arising from runaway replicators, which could be built using mechanosynthesis) the UK Royal Society and UK Royal Academy of Engineering in 2003 commissioned a study to deal with these issues and larger social and ecological implications, led by mechanical engineering professor Ann Dowling. This was anticipated by some to take a strong position on these problems and potentials - and suggest any development path to a general theory of so-called mechanosynthesis. However, the Royal Society's nanotech report did not address molecular manufacturing at all, except to dismiss it along with grey goo.

Current technical proposals for nanofactories do not include self-replicating nanorobots, and recent ethical guidelines would prohibit development of unconstrained self-replication capabilities in nanomachines.

The technique of moving single atoms mechanically was proposed by Eric Drexler in his 1986 book Engines of Creation.

In 1989, researchers at IBM's Almaden Research Center successfully spelled the letters "IBM" in xenon atoms on a cryogenic nickel surface, grossly validating the approach. Since then, a number of research projects have undertaken to use similar techniques to store computer data in a compact fashion. More recently the technique has been used to explore novel physical chemistries, sometimes using lasers to excite the tips to particular energy states, or to examine the quantum chemistry of particular chemical bonds.

In 2003, Oyabu et al reported the first instance of purely mechanical-based covalent bond-making and bond-breaking, i.e., the first experimental demonstration of true mechanosynthesis -- albeit with silicon atoms, not carbon.

In 2005, the first patent application on diamond mechanosynthesis was filed.

See also molecular nanotechnology, a more general explanation of the possible products, and discussion of other assembly techniques.

Kalina cycle

The Kalina cycle is a thermodynamic cycle for converting thermal energy to mechanical power which uses a working fluid composed of at least two different components; the ratio between those components is varied in different parts of the system to increase thermodynamic reversibility and therefore overall thermodynamic efficiency. There are multiple variants of Kalina cycle systems, each suited to different types of heat sources. Several proof-of-concept power plants using the Kalina cycle have already been built.

The Kalina cycle was invented by the Russian engineer Aleksandr Kalina.

The Kalina cycle engine, which is at least 10 percent more efficient than comparable heat engines, is simple in design and can use readily available, off-the-shelf components. This new technology is similar to the Rankine cycle except that it heats a mixture of two fluids, such as ammonia and water, instead of a single fluid. Instead of being discarded as waste at the turbine exhaust, the dual-component vapor (70% ammonia, 30% water) enters a distillation subsystem. This subsystem creates three additional mixtures. One is a 40/60 mixture, which can be completely condensed against normal cooling sources. After condensing, it is pumped to a higher pressure, where it is mixed with a rich vapor produced during the distillation process. This recreates the 70/30 working fluid. The elevated pressure allows the working fluid to be completely condensed and returned to the boiler to complete the cycle. The mixture's composition varies throughout the cycle. The advantages of this process include variable-temperature boiling and condensing, and a high level of recuperation.
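The recombination step (mixing the 40/60 lean stream with rich vapor to recreate the 70/30 working fluid) is a simple mass balance, sketched below. The rich-vapor composition of 90% ammonia is an assumed illustrative value; the text does not specify it.

    # Mass balance for blending a lean ammonia-water stream with a rich vapour
    # to recreate the 70/30 working fluid.  The 90% rich composition is assumed.

    def lean_fraction(x_lean, x_rich, x_target):
        """Mass fraction of the lean stream needed so the blend hits x_target."""
        return (x_rich - x_target) / (x_rich - x_lean)

    X_LEAN, X_RICH, X_TARGET = 0.40, 0.90, 0.70   # ammonia mass fractions
    f_lean = lean_fraction(X_LEAN, X_RICH, X_TARGET)
    print(f"blend {f_lean:.0%} lean stream with {1 - f_lean:.0%} rich vapour")
    # check: 0.4*0.40 + 0.6*0.90 = 0.70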

The U.S. Department of Energy completed a power plant using a Kalina cycle engine in 1991, at the Energy Technology Engineering Center in Canoga Park, California. The power plant may also improve heat engine efficiency through better thermodynamic matching in the boiler and distillation subsystem, and through recuperation of the heat from the turbine exhaust. Data from the early operating trials confirmed the principle of the Kalina Cycle technology. The technology is now being used in geothermal power plants.

Lenoir cycle

The Lenoir cycle is an idealised thermodynamic cycle for the pulse jet engine. An ideal gas undergoes

  1. constant volume heating
  2. reversible adiabatic expansion.
  3. isobaric compression to the volume at the start of the cycle.

The expansion process is isentropic and hence involves no heat interaction. Energy is absorbed as heat during the constant volume process and rejected as heat during the constant pressure process.
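From the three processes listed above, the air-standard efficiency follows as eta = 1 - gamma*(rp^(1/gamma) - 1)/(rp - 1), where rp is the pressure ratio across the constant-volume heating. The sketch below evaluates this for an arbitrary illustrative pressure ratio.

    # Air-standard Lenoir-cycle efficiency derived from the three listed processes.
    # rp = pressure ratio of the constant-volume heating (illustrative value below).

    GAMMA = 1.4

    def lenoir_efficiency(pressure_ratio, gamma=GAMMA):
        return 1.0 - gamma * (pressure_ratio ** (1.0 / gamma) - 1.0) / (pressure_ratio - 1.0)

    print(f"{lenoir_efficiency(3.0):.1%}")   # about 17% for rp = 3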

Miller cycle

In engineering, the Miller cycle is a combustion process used in a type of four-stroke internal combustion engine. The Miller cycle was patented by Ralph Miller, an American engineer, in the 1940s.

This type of engine was first used in ships and stationary power-generating plants, but was adapted by Mazda for their KJ-ZEM V6, used in the Millenia sedan. More recently, Subaru has combined a Miller cycle flat-4 with a hybrid driveline for their 'Turbo Parallel Hybrid' car, known as the Subaru B5-TPH.

A traditional Otto cycle engine uses four 'strokes', of which two can be considered 'high power' – the compression stroke (high power consumption) and power stroke (high power production). Much of the internal power loss of an engine is due to the energy needed to compress the charge during the compression stroke, so systems that reduce this power consumption can lead to greater efficiency.

In the Miller cycle, the intake valve is left open longer than it would be in an Otto cycle engine. In effect, the compression stroke is two discrete cycles: the initial portion when the intake valve is open and the final portion when the intake valve is closed. This two-stage intake stroke creates the so-called 'fifth' cycle that the Miller cycle introduces. As the piston initially moves upwards in what is traditionally the compression stroke, the charge is being pushed back out the still-open valve. Typically this loss of charge air would result in a loss of power. However, in the Miller cycle, the piston is over-fed with charge air from a supercharger, so pushing some of the charge air back out into the intake manifold is entirely planned. The supercharger typically will need to be of the positive displacement type due to its ability to produce boost at relatively low engine speeds. Otherwise, low-rpm torque will suffer.

A key aspect of the Miller cycle is that the compression stroke actually starts only after the piston has pushed out this 'extra' charge and the intake valve closes. This happens at around 20% to 30% into the compression stroke. In other words, the actual compression occurs in the latter 70% to 80% of the compression stroke. The piston gets the same resulting compression as it would in a standard Otto cycle engine for 70% of the work.
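The effect of closing the intake valve 20% to 30% into the compression stroke can be quantified as an effective (trapped) compression ratio, as sketched below; the geometric ratio of 10:1 is an assumed illustrative value.

    # Effective compression ratio after late intake-valve closing.
    # Geometric compression ratio of 10.0 is an assumed illustrative value.

    def effective_compression_ratio(geometric_cr, fraction_pushed_back):
        """Trapped-volume compression ratio after late intake-valve closing."""
        # Treat clearance volume as 1 unit, swept volume as (geometric_cr - 1) units.
        swept, clearance = geometric_cr - 1.0, 1.0
        trapped = clearance + (1.0 - fraction_pushed_back) * swept
        return trapped / clearance

    for f in (0.20, 0.30):
        print(f"{f:.0%} pushed back -> effective CR "
              f"{effective_compression_ratio(10.0, f):.1f} (geometric 10.0)")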

The Miller cycle results in an advantage as long as the supercharger can compress the charge using less energy than the piston would use to do the same work. Over the entire compression range required by an engine, the supercharger is used to generate low levels of compression, where it is most efficient. Then, the piston is used to generate the remaining higher levels of compression, operating in the range where it is more efficient than a supercharger. Thus the Miller cycle uses the supercharger for the portion of the compression where it is best, and the piston for the portion where it is best. In total, this reduces the power needed to run the engine by 10% to 15%. To this end, successful production engines using this cycle have typically used variable valve timing to effectively switch off the Miller cycle in regions of operation where it does not offer an advantage.

In a typical spark ignition engine, the Miller cycle yields an additional benefit. The intake air is first compressed by the supercharger and then cooled by an intercooler. This lower intake charge temperature, combined with the lower compression of the intake stroke, yields a lower final charge temperature than would be obtained by simply increasing the compression of the piston. This allows ignition timing to be altered to beyond what is normally allowed before the onset of detonation, thus increasing the overall efficiency still further.

Efficiency is increased by raising the compression ratio. In a typical gasoline engine, the compression ratio is limited due to self-ignition (detonation) of the compressed, and therefore hot, air/fuel mixture. Due to the reduced compression stroke of a Miller cycle engine, a higher overall compression ratio (supercharger compression plus piston compression) is possible, and therefore a Miller cycle engine has a better efficiency.

It should be noted that the benefits of utilizing positive displacement superchargers do not come without a cost. 15% to 20% of the power generated by a supercharged engine is usually required to do the work of driving the supercharger, which compresses the intake charge (also known as boost).

A similar delayed-valve closing method is used in some modern versions of Atkinson cycle engines, but without the supercharging. These engines are generally found on hybrid electric vehicles, where efficiency is the goal, and the power lost compared to the Miller cycle is made up through the use of electric motors.

The internal combustion engine is a heat engine in which the burning of a fuel occurs in a confined space called a combustion chamber. This exothermic reaction of a fuel with an oxidizer creates gases of high temperature and pressure, which are permitted to expand. The defining feature of an internal combustion engine is that useful work is performed by the expanding hot gases acting directly to cause movement, for example by acting on pistons, rotors, or even by pressing on and moving the entire engine itself.

This contrasts with external combustion engines such as steam engines which use the combustion process to heat a separate working fluid, typically water or steam, which then in turn does work, for example by pressing on a steam actuated piston.

The term Internal Combustion Engine (ICE) is almost always used to refer specifically to reciprocating engines, Wankel engines and similar designs in which combustion is intermittent. However, continuous combustion engines, such as jet engines, most rockets and many gas turbines, are also very definitely internal combustion engines.

History

The first internal combustion engines did not have compression, but ran on an air/fuel mixture sucked or blown in during the first part of the intake stroke. The most significant distinction between modern internal combustion engines and the early designs is the use of compression and, in particular, in-cylinder compression.

  • 1206: Al-Jazari described a double-acting reciprocating piston pump with a crankshaft-connecting rod mechanism.
  • 1509: Leonardo da Vinci described a compressionless engine.
  • 1673: Christiaan Huygens described a compressionless engine.
  • 17th century: English inventor Sir Samuel Morland used gunpowder to drive water pumps, essentially creating the first rudimentary internal combustion engine.
  • 1780s: Alessandro Volta built a toy electric pistol in which an electric spark exploded a mixture of air and hydrogen, firing a cork from the end of the gun.
  • 1794: Robert Street built a compressionless engine whose principle of operation would dominate for nearly a century.
  • 1806: Swiss engineer François Isaac de Rivaz built an internal combustion engine powered by a mixture of hydrogen and oxygen.
  • 1823: Samuel Brown patented the first internal combustion engine to be applied industrially. It was compressionless and based on what Hardenberg calls the "Leonardo cycle," which, as the name implies, was already out of date at that time.
  • 1824: French physicist Sadi Carnot established the thermodynamic theory of idealized heat engines. This scientifically established the need for compression to increase the difference between the upper and lower working temperatures.
  • 1826 April 1: The American Samuel Morey received a patent for a compressionless "Gas or Vapor Engine."
  • 1838: a patent was granted to William Barnet (English). This was the first recorded suggestion of in-cylinder compression.
  • 1854: The Italians Eugenio Barsanti and Felice Matteucci patented the first working efficient internal combustion engine in London (patent no. 1072) but did not go into production with it. It was similar in concept to the successful Otto-Langen indirect engine, but was not so well worked out in detail.
  • 1856: in Florence at Fonderia del Pignone (now Nuovo Pignone, a subsidiary of General Electric), Pietro Benini realized a working prototype of the Barsanti-Matteucci engine, supplying 5 HP. In subsequent years he developed more powerful engines—with one or two pistons—which served as steady power sources, replacing steam engines.
  • 1860: Belgian Jean Joseph Etienne Lenoir (1822–1900) produced a gas-fired internal combustion engine similar in appearance to a horizontal double-acting steam beam engine, with cylinders, pistons, connecting rods, and flywheel in which the gas essentially took the place of the steam. This was the first internal combustion engine to be produced in numbers.
  • 1862: German inventor Nikolaus Otto designed an indirect-acting free-piston compressionless engine whose greater efficiency won the support of Langen and then most of the market, which at that time was mostly for small stationary engines fueled by lighting gas.
  • 1870: In Vienna, Siegfried Marcus put the first mobile gasoline engine on a handcart.
  • 1876: Nikolaus Otto, working with Gottlieb Daimler and Wilhelm Maybach, developed a practical four-stroke cycle (Otto cycle) engine. The German courts, however, did not hold his patent to cover all in-cylinder compression engines or even the four-stroke cycle, and after this decision, in-cylinder compression became universal.
  • 1879: Karl Benz, working independently, was granted a patent for his internal combustion engine, a reliable two-stroke gas engine, based on Nikolaus Otto's design of the four-stroke engine. Later, Benz designed and built his own four-stroke engine that was used in his automobiles, which became the first automobiles in production.
  • 1882: James Atkinson invented the Atkinson cycle engine. Atkinson’s engine had one power phase per revolution together with different intake and expansion volumes, making it more efficient than the Otto cycle.
  • 1891: Herbert Akroyd Stuart built his oil engine, leasing rights to Hornsby of England to build them. They built the first cold-start compression-ignition engines. In 1892, they installed the first ones in a water pumping station. In the same year, an experimental higher-pressure version produced self-sustaining ignition through compression alone.
  • 1892: Rudolf Diesel developed his Carnot heat engine type motor burning powdered coal dust.
  • 1893 February 23: Rudolf Diesel received a patent for the diesel engine.
  • 1896: Karl Benz invented the boxer engine, also known as the horizontally opposed engine, in which the corresponding pistons reach top dead center at the same time, thus balancing each other in momentum.
  • 1900: Rudolf Diesel demonstrated the diesel engine in the 1900 Exposition Universelle (World's Fair) using peanut oil (see biodiesel).
  • 1900: Wilhelm Maybach designed an engine built at Daimler Motoren Gesellschaft—following the specifications of Emil Jellinek—who required the engine to be named Daimler-Mercedes after his daughter. In 1902 automobiles with that engine were put into production by DMG.
  • 1908: New Zealand inventor, Ernest Godward started a motorcycle business in Invercargill and fitted the imported bikes with his own invention – a petrol economiser. His economisers worked as well in cars as they did in motorcycles.

Applications

Internal combustion engines are most commonly used for mobile propulsion systems. In mobile scenarios internal combustion is advantageous, since it can provide high power to weight ratios together with excellent fuel energy-density. These engines have appeared in almost all automobiles, motorbikes, many boats, and in a wide variety of aircraft and locomotives. Where very high power is required, such as jet aircraft, helicopters and large ships, they appear mostly in the form of gas turbines. They are also used for electric generators and by industry.

Internal combustion mechanics

The potato cannon uses the basic principle behind any reciprocating internal combustion engine: if a tiny amount of high-energy fuel (like gasoline) is put into a small, enclosed space and ignited, an incredible amount of energy is released in the form of expanding gas. That energy can be used to propel a potato 500 feet. In this case, the energy is translated into potato motion. It can also be used for more interesting purposes. For example, if a cycle can be created that sets off explosions like this hundreds of times per minute, and if that energy can be harnessed in a useful way, the result is the core of a car engine.

Almost all cars currently use what is called a four-stroke combustion cycle to convert gasoline into motion. The four-stroke approach is also known as the Otto cycle, in honor of Nikolaus Otto, who developed it in 1876. The four strokes are:

  1. Intake stroke
  2. Compression stroke
  3. Combustion stroke
  4. Exhaust stroke
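Since the four strokes together span two crankshaft revolutions (720 degrees), a crank angle can be mapped to the stroke in progress; the toy sketch below does this with valve events idealised at the dead centres.

    # Toy mapping from crank angle to stroke for the four-stroke cycle listed above.
    # The full cycle spans 720 degrees; valve events are idealised at dead centres.

    STROKES = ("intake", "compression", "combustion (power)", "exhaust")

    def stroke_at(crank_angle_deg):
        """Return which stroke is in progress at a given crank angle (0-720 deg)."""
        return STROKES[int(crank_angle_deg % 720) // 180]

    for angle in (90, 270, 450, 630):
        print(angle, "->", stroke_at(angle))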