January 20, 2019

More and more engineers, systems analysts, biochemists, city planners, medical practitioners, and individuals in the entertainment fields are moving toward computer simulation.  Let’s take a quick look at simulation, then explore several examples of how powerful this technology can be.


Simulation modelling is an excellent tool for analyzing and optimizing dynamic processes. Specifically, when mathematical optimization of complex systems becomes infeasible, and when conducting experiments within real systems is too expensive, time-consuming, or dangerous, simulation becomes a powerful tool. The aim of simulation is to support objective decision-making by means of dynamic analysis, to enable managers to plan their operations safely, and to save costs.

A computer simulation or a computer model is a computer program that attempts to simulate an abstract model of a particular system. … Computer simulations build on and are useful adjuncts to purely mathematical models in science, technology and entertainment.

Computer simulations have become a useful part of mathematical modelling of many natural systems in physics, chemistry and biology, human systems in economics, psychology, and social science and in the process of engineering new technology, to gain insight into the operation of those systems. They are also widely used in the entertainment fields.

Traditionally, the formal modeling of systems has been accomplished using mathematical models, which attempt to find analytical solutions to problems, enabling the prediction of a system’s behavior from a set of parameters and initial conditions.  Prediction is the key word in the overall process. One very critical part of the predictive process is designating the parameters properly: not only the upper and lower specifications but also the parameters that define intermediate processes.

The reliability and the trust people put in computer simulations depend on the validity of the simulation model.  The degree of trust is directly related to the software itself and the reputation of the company producing the software. There will be considerably more in this course regarding vendors providing software to companies wishing to simulate processes and solve complex problems.

Computer simulations find use in the study of dynamic behavior in environments that may be difficult or dangerous to reproduce in real life. For example, a nuclear blast may be represented with a mathematical model that takes into consideration various elements such as velocity, heat and radioactive emissions. Additionally, one may alter the equations by changing certain variables, such as the amount of fissionable material used in the blast.  Another application involves predictive efforts relative to weather systems.  The mathematics involved in these determinations is significantly complex and usually involves a branch of math called “chaos theory”.
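The weather remark above can be made concrete with a toy example. The logistic map is a classic one-line model from chaos theory; the sketch below (purely illustrative, not a weather model) shows how two starting values that differ by only one part in a million soon follow completely different trajectories.

```python
# Chaos in miniature: the logistic map x -> r*x*(1-x).
# At r = 4 the map is fully chaotic, so tiny uncertainties in the
# initial condition grow rapidly -- the same reason long-range
# weather prediction is so hard.
def logistic_trajectory(x0, r=4.0, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)   # differs by one part in a million
# After a few dozen steps the two trajectories typically bear
# no resemblance to each other.
print(f"step 30: {a[-1]:.4f} vs {b[-1]:.4f}")
```

Running it a few steps by hand confirms the start: 4 × 0.4 × 0.6 = 0.96, and from there the two runs drift apart.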

Simulations largely help in determining behaviors when individual components of a system are altered. Simulations can also be used in engineering to determine potential effects, such as the effect on river systems of constructing dams.  Some companies call these behaviors “what-if” scenarios because they allow the engineer or scientist to apply differing parameters to discern cause-and-effect interactions.

One great advantage a computer simulation has over a mathematical model is that it allows a visual representation of events and a timeline. You can actually see the action and chain of events with simulation and investigate the parameters for acceptance.  You can examine the limits of acceptability using simulation.   All components and assemblies have upper and lower specification limits and must perform within those limits.

Computer simulation is the discipline of designing a model of an actual or theoretical physical system, executing the model on a digital computer, and analyzing the execution output. Simulation embodies the principle of “learning by doing” — to learn about the system we must first build a model of some sort and then operate the model. The use of simulation is an activity that is as natural as a child who role plays. Children understand the world around them by simulating (with toys and figurines) most of their interactions with other people, animals and objects. As adults, we lose some of this childlike behavior but recapture it later on through computer simulation. To understand reality and all of its complexity, we must build artificial objects and dynamically act out roles with them. Computer simulation is the electronic equivalent of this type of role playing and it serves to drive synthetic environments and virtual worlds. Within the overall task of simulation, there are three primary sub-fields: model design, model execution and model analysis.
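As a small illustration of those three sub-fields, the hypothetical sketch below designs a toy model (a bracket hole that grows with temperature; the coefficients and limits are made up for illustration), executes it across thousands of randomized “what-if” scenarios, and analyzes how often the part stays within its specification limits:

```python
import random

# Model design: a hypothetical bracket whose hole diameter drifts with
# temperature. Coefficients here are illustrative, not real material data.
NOMINAL_MM = 10.0                       # nominal hole diameter (mm)
EXPANSION = 0.0005                      # assumed mm of growth per deg C
LOWER_SPEC, UPPER_SPEC = 9.99, 10.02    # acceptance limits (mm)

def hole_diameter(temp_c, machining_error):
    """Model execution: diameter at a given temperature."""
    return NOMINAL_MM + EXPANSION * (temp_c - 20.0) + machining_error

# Model execution: run the model across many what-if scenarios.
random.seed(1)
trials = 10_000
in_spec = 0
for _ in range(trials):
    temp = random.uniform(0, 60)        # operating temperature (deg C)
    error = random.gauss(0, 0.003)      # machining variation (mm)
    if LOWER_SPEC <= hole_diameter(temp, error) <= UPPER_SPEC:
        in_spec += 1

# Model analysis: estimated fraction of parts within specification.
print(f"{in_spec / trials:.1%} of simulated parts in spec")
```

The same three-phase pattern (design, execute, analyze) underlies far more elaborate commercial packages.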


The following examples are taken from computer screens and represent real-world situations and/or problems that need solutions.  As mentioned earlier, “what-ifs” may be realized by animating the computer model, providing cause-and-effect responses to desired inputs. Let’s take a look.

A great host of mechanical and structural problems may be solved by using computer simulation. The example above shows how the diameter of two matching holes may be affected by applying heat to the bracket.


The Newtonian and non-Newtonian flow of fluids, i.e. liquids and gases, has always been a subject of concern within piping systems.  Flow related to pressure and temperature may be approximated by simulation.


Electromagnetics is an extremely complex field. The digital image above strives to show how a magnetic field reacts to an applied voltage.

Chemical engineers are very concerned with reaction time when chemicals are mixed.  One example might be the ignition time when an oxidizer comes in contact with fuel.

Acoustics is the study of how sound propagates through a physical device or structure.

The transfer of heat from a warmer surface to a colder surface has always come into question. Simulation programs are extremely valuable in visualizing this transfer.


Equation-based modeling can show, through simulation, how a structure, in this case a metal plate, is affected when forces are applied.

In addition to computer simulation, we have AR (augmented reality) and VR (virtual reality).  Those subjects are fascinating but will require another post for another day.  Hope you enjoy this one.



One of the best things the automotive industry accomplishes is showing us what might be in our future.  They all have the finances, creative talent and vision to provide a glimpse into their “wish list” for upcoming vehicles.  Mercedes Benz has done just that with their futuristic F 015 Luxury in Motion.

In order to provide a foundation for the new autonomous F 015 Luxury in Motion research vehicle, an interdisciplinary team of experts from Mercedes-Benz has devised a scenario that incorporates different aspects of day-to-day mobility. Above and beyond its mobility function, this scenario perceives the motor car as a private retreat that additionally offers an important added value for society at large. (I like the word retreat.) If you take a look at how much time the “average” individual spends in his or her automobile or truck, we see the following:

  • On average, Americans drive 29.2 miles per day, making two trips with an average total duration of forty-six (46) minutes. This and other revealing data are the result of a ground-breaking study currently underway by the AAA Foundation for Traffic Safety and the Urban Institute.
  • Motorists age sixteen (16) years and older drive, on average, 29.2 miles per day or 10,658 miles per year.
  • Women take more driving trips, but men spend twenty-five (25) percent more time behind the wheel and drive thirty-five (35) percent more miles than women.
  • Both teenagers and seniors over the age of seventy-five (75) drive less than any other age group; motorists 30-49 years old drive an average 13,140 miles annually, more than any other age group.
  • The average distance and time spent driving increase in relation to higher levels of education. A driver with a grade school or some high school education drove an average of 19.9 miles and 32 minutes daily, while a college graduate drove an average of 37.2 miles and 58 minutes.
  • Drivers who reported living “in the country” or “a small town” drive greater distances (12,264 miles annually) and spend a greater amount of time driving than people who described living in a “medium sized town” or city (9,709 miles annually).
  • Motorists in the South drive the most (11,826 miles annually), while those in the Northeast drive the least (8,468 miles annually).
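A quick sanity check on the figures above: the daily and annual mileage quoted in the AAA study are consistent with each other.

```python
# The study reports 29.2 miles per day and 10,658 miles per year.
# Scaling the daily figure by 365 days reproduces the annual one.
miles_per_day = 29.2
annual_miles = miles_per_day * 365
print(f"{annual_miles:,.0f} miles per year")   # prints 10,658 miles per year
```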

With this being the case, why not enjoy it?

The F 015 made its debut at the Consumer Electronics Show in Las Vegas more than two years ago. It’s packed with advanced (or what was considered advanced in 2015) autonomous technology, and can, in theory, run for almost 900 kilometers on a mixture of pure electric power and a hydrogen fuel cell.

But while countless other vehicles are still trying to prove that cars can, literally, drive themselves, the Mercedes-Benz offering takes this for granted. Instead, this vehicle wants us to consider what we’ll actually do while the car is driving us around.

The steering wheel slides into the dashboard to create more of a “lounge” space. The seating configuration allows four people to face each other if they want to talk. And when the onboard conversation dries up, a bewildering collection of screens — one on the rear wall, and one on each of the doors — offers plenty of opportunity to interact with various media.

The F 015 could have done all of this as a flash-in-the-pan show car — seen at a couple of major events before vanishing without trace. But in fact, it has been touring almost constantly since that Vegas debut.

“Anyone who focuses solely on the technology has not yet grasped how autonomous driving will change our society,” emphasizes Dr Dieter Zetsche, Chairman of the Board of Management of Daimler AG and Head of Mercedes-Benz Cars. “The car is growing beyond its role as a mere means of transport and will ultimately become a mobile living space.”

The visionary research vehicle was born, a vehicle which raises comfort and luxury to a new level by offering a maximum of space and a lounge character on the inside. Every facet of the F 015 Luxury in Motion is the utmost reflection of the Mercedes way of interpreting the terms “modern luxury”, emotion and intelligence.

This innovative four-seater is a forerunner of a mobility revolution, and this is immediately apparent from its futuristic appearance. Sensuousness and clarity, the core elements of the Mercedes-Benz design philosophy, combine to create a unique, progressive aesthetic appeal.

OK, with this being the case, let us now take a pictorial look at what the “Benz” has to offer.

One look and you can see the car is definitely aerodynamic in styling.  I am very sure that much time has been spent with this “ride” in wind tunnels, with slipstreams being monitored carefully.  That is where drag coefficients are initially determined.

The two JPEGs above indicate the front and rear swept glass windshields that definitely reduce aerodynamic drag.

The interiors are the most striking feature of this automobile.

Please note, this version is a four-seater but with plenty of leg-room.

Each occupant has a touch screen, presumably for accessing wireless services or the Internet.  One thing: as yet there is no published list price for the car.  I’m sure that is being considered at this time, but there are no USD numbers to date.  Also, as mentioned, the car is self-driving, and that brings added complexities.  By design, this vehicle is a moving computer.  It has to be.  I am always very interested in the maintenance and training necessary to diagnose and repair a vehicle such as this.  Infrastructure MUST be in place to facilitate quick turnaround when trouble arises, both mechanical and electrical.

As always, I welcome your comments.


February 2, 2017

If you ever hear these words used relative to an investigation your doctor wants you to undertake—RUN AWAY.  I say this advisedly because I just experienced this test due to issues I was and am having with acid reflux.  The first test was a barium swallow with a pill.  This was not so bad and took a fairly short period of time. The motility test is definitely a horse of a different color.  Let’s examine the motility test and take a look at what all is involved.

ESOPHAGEAL MOTILITY:  We start first with a definition as follows:

An esophageal motility disorder is any medical disorder causing difficulty in swallowing, regurgitation of food and a spasm-type pain which can be brought on by an allergic reaction to certain foods. The most prominent one is dysphagia.  Dysphagia is the medical term used to describe difficulty swallowing. … In contrast, dysphagia is a symptom that only occurs when attempting to swallow. Globus can sometimes be seen in acid reflux disease, but more often, it is due to increased sensitivity in the throat or esophagus. There are several very popular over-the-counter medications to mitigate acid reflux.  Just a few are: 1.) TUMS, 2.) Alka-Seltzer, 3.) Milk of Magnesia, 4.) Pepto-Bismol, 5.) ZANTAC, 6.) Pepcid, 7.) Tagamet, and 8.) Prilosec OTC.  These medications work and work well, but I really wanted an answer as to WHY I was having the reflux.  For this, testing was necessary.

The tubular esophagus is a muscular organ, approximately 25 cm in length, with specialized sphincters at its proximal and distal ends. (That is, the upper and lower portions of the esophagus.) The upper esophageal sphincter (UES) is composed of several striated muscles, creating a tonically closed valve and preventing air from entering the gastrointestinal tract. The lower esophageal sphincter (LES) is composed entirely of smooth muscle and maintains a steady baseline tone to prevent gastric reflux into the esophagus.

Esophageal motility disorders are less common than mechanical and inflammatory diseases affecting the esophagus, such as reflux esophagitis, peptic strictures, and mucosal rings. The clinical presentation of a motility disorder is varied, but, classically, dysphagia and chest pain are reported. This was my case, chest pain accompanied with reflux after every meal. Before entertaining a diagnosis of a motility disorder, first and foremost, the physician must evaluate for a mechanical obstructing lesion. This is the motility test.

THE PROCEDURE: The procedure takes about forty-five (45) minutes from start to finish.  Please note, the patient, in this case ME, is fully awake so commands may be received and followed.

  • The nurse will verify that you had nothing by mouth in the last 6 hours prior to the test. It is a fasting test.  I also took none of the medications I normally take A.M. This is very important.
  • Your nostril and throat are numbed with a topical anesthetic while you are sitting upright. This topical anesthetic BURNS LIKE HELL and gives the sensation your nostril is stopped up. It actually is, I suppose.
  • A thin flexible tube about one-eighth inch in diameter (approximately the size of a pencil) is then passed through the nostril, down the back of the throat, into the esophagus and the stomach, while the patient swallows water.  (Are you getting this?)  The nurse snakes a tube with thirty-six (36) pressure-sensing rings or holes through your nose and down your throat right into the upper portion of your stomach. OH, by the way—you feel it all the way down!
  • The tube has holes in it that sense pressure along the esophagus. It will be positioned in different areas of your esophagus. The nurse moves the tube as the test progresses.
  • With the tube inside the esophagus, you will lie down on your left side.  This is to keep you from aspirating bile into your lungs if reflux does occur.  (Now do I have your attention?)
  • The nurse will give you small sips of water during the test to record the progression of the swallow.  Each sip is metered and measured using a syringe: five ml, ten ml, etc.
  • The contractions of the esophageal muscle will be measured at rest and during swallows.
  • Pressure recordings are made while the tube is in place and as the tube is slowly withdrawn.
  • The results of the manometry test are displayed as a graph with a wave pattern that can be interpreted to determine if the esophagus is functioning normally.  The digital image on the left below will indicate the location of the tube and on the right, the pressure spikes as you swallow. During the test, I started coughing and had difficulties in calming down.  With each cough, the tube would rattle around and bounce right and left hitting the walls of my esophagus.  Really great feeling.
  • Since your throat was numbed, you have to wait one hour after completion of the test before you can eat or drink anything. This is to protect you from burning your throat or choking.


The actual display on the monitor looks like the images below.  Again, location on the left and pressure on the right.


I will certainly say this; the nurse was very patient with me as the tube was inserted and withdrawn.  The insertion feels like someone trying to slip a garden hose through the eye of a needle. One of the most uncomfortable feelings I have ever had. I am told some patients simply cannot tolerate the test and have to bail out.  It really was a struggle for me but I decided I needed an answer more than I needed immediate relief.

The technology monitoring the pressure is fabulous and very accurate.  As it turns out, my problem seems to be with the lower sphincter valve. It does not close tightly enough to prevent acid reflux.  I have no idea as to what the “fix” might be.  I find that out on 14 February.  I suppose that information will be my Valentine’s Day present.  I can promise you two things: 1.) Ain’t no way I’m repeating the test—ever and 2.) if I have to live on Prilosec for the rest of my life I will.  No surgery.


March 22, 2015

This post used the following references as resources: 1.) Aviation Week and 2.) the Boeing Company website, for the 777 aircraft configurations and the history of the Boeing Company.

I don’t think there is much doubt that The Boeing Company is and has been the foremost company in the world when it comes to building commercial aircraft. The history of aviation, specifically commercial aviation, would NOT be complete without Boeing being in the picture. There have been five (5) companies that figured prominently in aviation history relative to the United States. Let’s take a look.


During the last one hundred (100) years, humans have gone from walking on Earth to walking on the moon. They went from riding horses to flying jet airplanes. With each decade, aviation technology crossed another frontier, and, with each crossing, the world changed.

During the 20th century, five companies charted the course of aerospace history in the United States. They were the Boeing Airplane Co., Douglas Aircraft Co., McDonnell Aircraft Corp., North American Aviation and Hughes Aircraft. By the dawning of the new millennium, they had joined forces to share a legacy of victory and discovery, cooperation and competition, high adventure and hard struggle.

Their stories began with five men who shared the vision that gave tangible wings to the eternal dream of flight. William Edward Boeing, born in 1881 in Detroit, Mich., began building floatplanes near Seattle, Wash. Donald Wills Douglas, born in 1892 in New York, began building bombers and passenger transports in Santa Monica, Calif. James Smith McDonnell, born in 1899 in Denver, Colo., began building jet fighters in St. Louis, Mo. James Howard “Dutch” Kindelberger, born in 1895 in Wheeling, W.Va., began building trainers in Los Angeles, Calif. Howard Hughes Jr. was born in Houston, Texas, in 1905. The Hughes Space and Communications Co. built the world’s first geosynchronous communications satellite in 1963.

These companies began their journey across the frontiers of aerospace at different times and under different circumstances. Their paths merged and their contributions are the common heritage of The Boeing Company today.

In 1903, two events launched the history of modern aviation. The Wright brothers made their first flight at Kitty Hawk, N.C., and twenty-two (22) year-old William Boeing left Yale engineering college for the West Coast.

After making his fortune trading forest lands around Grays Harbor, Wash., Boeing moved to Seattle, Wash., in 1908 and, two years later, went to Los Angeles, Calif., for the first American air meet. Boeing tried to get a ride in one of the airplanes, but not one of the dozen aviators participating in the event would oblige. Boeing came back to Seattle disappointed, but determined to learn more about this new science of aviation.

For the next five years, Boeing’s air travel was mostly theoretical, explored during conversations at Seattle’s University Club with George Conrad Westervelt, a Navy engineer who had taken several aeronautics courses from the Massachusetts Institute of Technology.

The two checked out biplane construction and were passengers on an early Curtiss Airplane and Motor Co.-designed biplane that required the pilot and passenger to sit on the wing. Westervelt later wrote that he “could never find any definite answer as to why it held together.” Both were convinced they could build a biplane better than any on the market.

In the autumn of 1915, Boeing returned to California to take flying lessons from another aviation pioneer, Glenn Martin. Before leaving, he asked Westervelt to start designing a new, more practical airplane. Construction of the twin-float seaplane began in Boeing’s boathouse, and they named it the B & W, after their initials. THIS WAS THE BEGINNING.  Boeing has since developed a position in global markets unparalleled by its competition.

This post is specifically involved with the 777 product and changes in the process of being made to upgrade that product to retain markets and fend off competition such as the Airbus. Let’s take a look.


In looking at the external physical characteristics, we see the following:


As you can see, this is one BIG aircraft, with a wingspan of approximately 200 feet and a length of 242 feet for the “300” version.  The external dimensions shown apply to both passenger and freight configurations; both are significantly large aircraft.

Looking at the internal layout for passengers, we see the following:



If we drill down to the nitty-gritty, we find the following:



As mentioned, the 777 also provides much-needed services for freight haulers the world over.  In looking at payload vs. range, we see the following global “footprint” and long-range capabilities from Dubai.  I have chosen Dubai, but similar “footprints” may be had from Hong Kong, London, Los Angeles, etc.


Even with these very impressive numbers, Boeing felt an upgrade was necessary to remain competitive with other aircraft manufacturers.


Ever careful with its stewardship of the cash-generating 777 program, Boeing is planning a series of upgrades to ensure the aircraft remains competitive in the long-range market well after the 777X derivative enters service.

The plan, initially revealed this past January, was presented in detail by the company for the first time on March 9 at the International Society of Transport Air Trading meeting in Arizona. Aimed at providing the equivalent of two percent (2%) fuel-burn savings in baseline performance, the rolling upgrade effort will also include a series of optional product improvements to increase capacity by up to fourteen (14) seats that will push the total potential fuel-burn savings on a per-seat basis to as much as five percent (5%) over the current 777-300ER by late 2016.

At least 0.5% of the overall specific fuel-burn savings will be gained from an improvement package for the aircraft’s GE90-115B engine, the first elements of which General Electric will test later this year.  The bulk of the savings will come from broad changes to reduce aerodynamic drag and structural weight. Additional optional improvements to the cabin will also provide operators with more seating capacity and upgraded features that would offer various levels of extra savings on a per-seat basis, depending on specific configurations and layouts.  The digital image below highlights the improvements announced.


“We are making improvements to the fuel-burn performance and the payload/range and, at the same time, adding features and functionality to allow the airlines to continue to keep the aircraft fresh in their fleets,” says 777 Chief Project Engineer and Vice President Larry Schneider. The upgrades, many of which will be retrofittable, come as Boeing continues to pursue new sales of the current-generation twin to help maintain the 8.3-per-month production rate until the transition to the 777X at the end of the decade. Robert Stallard, an analyst at RBS Europe, notes that Boeing has a firm backlog of 273 777-300s and 777Fs, which equates to around 2.7 years of current production. “We calculate that Boeing needs to get 272 new orders for the 777 to bridge the current gap and then transition production to the 777X,” he says.

The upgrades will also boost existing fleets, Boeing says. “Our 777s are operated by the world’s premier airlines and now we are seeing the Chinese carriers moving from 747 fleets to big twins,” says Schneider. “There are huge 777 fleets in Europe and the Middle East, as well as the U.S., so enabling [operators] to be able to keep those up to date and competitive in the market—even though some of them are 15 years old—is a big element of this.”

Initial parts of the upgrade are already being introduced and, in the tradition of the continuous improvements made to the family since it entered service, will be rolled into the aircraft between now and the third quarter of 2016. “There is not a single block point in 2016 where one aircraft will have everything on it. It is going to be a continuous spin-out of those capabilities,” Schneider says. Fuel-burn improvements to both the 777-200LR and -300ER were introduced early in the service life of both derivatives, and the family has also received several upgrades to the interior, avionics and maintenance features over the last decade.

The overall structural weight of the 777-300ER will be reduced by 1,200 lb. “When the -300ER started service in 2004 it was 1,800 lb. heavier, so we have seen a nice healthy improvement in weight,” he adds. The reductions have been derived from production-line improvements being introduced as part of the move to the automated drilling and riveting process for the fuselage, which Boeing expects will cut assembly flow time by almost half. The manufacturer is adopting the fuselage automated upright build (FAUB) process as part of moves to streamline production ahead of the start of assembly of the first 777-9X in 2017.

One significant assembly change is a redesign of the fuselage crown, which follows the simplified approach taken with the 787. “All the systems go through the crown, which historically is designed around a fore and aft lattice system that is quite heavy. This was designed with capability for growth, but that was not needed from a systems standpoint. So we are going to a system of tie rods and composite integration panels, like the 787. The combination has taken out hundreds of pounds and is a significant improvement for workers on the line who install it as an integrated assembly,” Schneider says. Other reductions will come from a shift to a lower weight, less dense form of cabin insulation and adoption of a lower density hydraulic fluid.

Boeing has also decided to remove the tail skid from the 777-300ER as a weight and drag reduction improvement after developing new flight control software to protect the tail during abused takeoffs and landings. “We redesigned the flight control system to enable pilots to fly like normal and give them full elevator authority, so they can control the tail down to the ground without touching it. The system precludes the aircraft from contacting the tail,” Schneider says. Although Boeing originally developed the baseline electronic tail skid feature to prevent this from occurring on the -300ER, the “old system allowed contact, and to be able to handle those loads we had a lot of structure in the airplane to transfer them through the tailskid up through the aft body into the fuselage,” he adds. “So there are hundreds of pounds in the structure, and to be able to take all that out with the enhanced tail strike-protection system is a nice improvement.”

Boeing is also reducing the drag of the 777 by making a series of aerodynamic changes to the wing based on design work conducted for the 787 and, perhaps surprisingly, the long-canceled McDonnell Douglas MD-12. The most visible change, which sharp-eyed observers will also be able to spot from below the aircraft, is a 787-inspired inboard flap fairing redesign.

“We are using some of the technology we developed on the 787 to use the fairing to influence the pressure distribution on the lower wing. In the old days, aerodynamicists were thrilled if you could put a fairing on an airplane for just the penalty of the skin friction drag. On the 787, we spent a lot of time working on the contribution of the flap fairing shape and camber to control the pressures on the lower wing surface.”

Although Schneider admits that the process was a little easier with the 787’s all-new wing, Boeing “went back and took a look at the 777 and we found a nice healthy improvement,” he says. The resulting fairing will be longer and wider, and although the larger wetted area will increase skin friction, the overall benefits associated with the optimized lift distribution over the whole wing will more than compensate. “It’s a little counterintuitive,” says Schneider, adding that wind-tunnel test results of the new shape showed close correlation with benefits predicted by computational fluid dynamics (CFD) analysis using the latest boundary-layer capabilities and Navier-Stokes codes.

Having altered the pressure distribution along the underside of the wing, Boeing is matching the change on the upper surface by reaching back to technology developed for the MD-12 in the 1990s. The aircraft’s outboard raked wingtip, a feature added to increase span with the development of the longer-range variants, will be modified with a divergent trailing edge. “Today it has very low camber, and by using some Douglas Aircraft technology from the MD-12 we get a poor man’s version of a supercritical airfoil,” says Schneider. The tweak will increase lift at the outboard wing, making span loading more elliptical and reducing induced drag.

Boeing has been conducting loads analysis on the 777 wing to “make sure we understand where all those loads will go,” he says. A related loads analysis to evaluate whether the revisions could also be incorporated into a potential retrofit kit will be completed this month. “When we figure out at which line number those two changes will come together (as they must be introduced simultaneously by necessity), we will do a single flight to ensure we don’t have any buffet issues from the change in lift distribution. That’s our certification plan,” Schneider says.

A third change to the wing will focus on reducing the base drag of the leading-edge slat by introducing a version with a sharper trailing edge. “The trailing-edge step has a bit of drag associated with it, so we will be making it sharper and smoothing the profile,” he explains. The revised part will be made thinner and introduced around mid-2016. Further drag reductions will be made by extending the seals around the inboard end of the elevator to reduce leakage and by making the passenger windows thicker to ensure they are fully flush with the fuselage surface. The latter change will be introduced in early 2016.

In another change adopted from the 787, Boeing also plans to alter the 777 elevator trim bias. The software-controlled change will move the elevator trailing edge position in cruise by up to 2 deg., inducing increased inverse camber. This will increase the download, reducing the overall trim drag and improving long-range cruise efficiency.

The package of changes means that range will be increased by 100 nm or, alternatively, an additional 5,000 lb. of payload can be carried. Some of this extra capacity could be utilized by changes in the cabin that will free up space for another fourteen (14) seats. These will include a revised seat track arrangement in the aft of the cabin to enable additional seats where the fuselage tapers. Some of the extra seating, which will increase overall seat count by three percent (3%), could feature the option of arm rests integrated into the cabin wall. Schneider says the added seats, on top of the baseline two percent (2%) fuel-burn improvement, will improve total operating efficiency by five percent (5%) on a block fuel per-seat basis.
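Those last figures can be sanity-checked with simple arithmetic (a back-of-the-envelope check of my own, not Boeing’s analysis): burning 2% less fuel while carrying 3% more seats compounds to roughly a 5% reduction in fuel burned per seat.

```python
# Back-of-the-envelope check of the claimed per-seat efficiency gain.
# Inputs are the article's figures: 2% lower block fuel, 3% more seats.
fuel_factor = 1.0 - 0.02   # block fuel relative to baseline
seat_factor = 1.0 + 0.03   # seat count relative to baseline

per_seat = fuel_factor / seat_factor       # fuel burned per seat, relative to baseline
improvement = (1.0 - per_seat) * 100.0     # percent improvement

print(f"Fuel per seat: {per_seat:.4f} of baseline")
print(f"Improvement:   {improvement:.1f}%")   # ~4.9%, in line with the quoted 5%
```

The two effects multiply rather than add, which is why the result lands just under the sum of 2% and 3%.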

Other cabin change options will include repackaged Jamco-developed lavatory units that provide the same internal space as today’s units but are eight (8) inches narrower externally. The redesign includes the option of a foldable wall between two modules, providing access for a disabled passenger and an assistant. Boeing is also developing noise-damping modifications to reduce cabin sound by up to 2.5 dB, full cabin-length LED lighting and a 787-style entryway around Door 2. Boeing is also preparing to offer a factory-fitted option for electrically controlled window shades, similar to the 777 system developed as an aftermarket modification by British Airways.


As you can see, the 777 is preparing to continue service for decades ahead by virtue of the modifications and improvements shown above.

As always, I welcome your comments.


February 2, 2014

Several months ago I posted an article entitled “GREEN AVIATION”.  That blog (hopefully) described several efforts to improve the fuel efficiency, measured in gallons per hour (GPH), of commercial aviation.  Those efforts are significant and involve the following:

  • Investigations into the use of “bio-fuels”
  • Improvements in aerodynamics of aircraft bodies including the flight surfaces
  • The use of adhesives instead of rivets and screws to fasten outer surfaces
  • The use of composite materials to lessen the overall weight of an aircraft

That effort continues in companies such as BOEING and governmental agencies such as NASA.  We must also factor educational institutions into the “mix”.  All three contribute greatly to the search for improvements aimed at reducing the use of precious, non-renewable fossil fuels.  The following is one such effort.

Engineers at NASA’s Langley Research Center in Hampton, Va., recently installed a 15-percent scale model based on a possible future aircraft design by the Boeing Company in the center’s Transonic Dynamics Tunnel. The 13-foot model is “semi-span,” meaning it looks like a plane cut in half. It is being used to assess the aeroelastic qualities of the unusual truss-braced wing configuration. (“Aeroelastic,” or “aeroelasticity,” is the study of how an aircraft flexes during flight in response to aerodynamic forces. The “truss” is the diagonal piece attached to the belly of the fuselage and the underside of the wing.)



Boeing designed the concept as part of SUGAR (Subsonic Ultra-Green Aircraft Research) to help conceive of airplane technologies and designs needed 20 years from now to meet projected fuel efficiency and other “green” aviation requirements. According to Boeing engineers the wind tunnel tests will help validate the analysis done during the SUGAR study, which predicts that the truss-braced wing would improve fuel consumption by 5 to 10 percent over advanced conventional wings. Boeing’s SUGAR work, as well as that of other teams studying advanced future aircraft concepts, is funded through NASA’s Fundamental Aeronautics Program’s Fixed Wing Project.

I will certainly keep you posted as to further developments in the “GREEN AVIATION” world.  It’s a fascinating technology.


January 25, 2014

My company, Cielo Technologies, LLC, is involved with designing equipment to automate manufacturing processes using work cell methodology.   Very frequently we have the need to include conveyors to move components into a cell and convey completed assemblies from a cell.  I recently was made aware of “MODULAR WAVE HANDLING” technology from research accomplished by FESTO AG & Co. KG in Germany.   I have used FESTO pneumatic equipment for many years and can certainly attest to their quality and technical support.  The information that follows is from that company.  Hope you enjoy this one.

The system itself is described as a “targeted transportation system” for moving very delicate items from one point to another.  The system can also be designed to off-load, route and combine material as required.  Materials such as fruits, vegetables and eggs require very special handling when conveyed because they bruise and break easily; today, most of this movement is done by hand for that reason.   The basic principle is derived from examining the wave motion that occurs on the “high seas”.  The actual concept may be shown as follows:



Just like waves in the ocean, the modular conveyor moves articles gently from point to point.  A cross-section of the conveyor is shown by the two JPEGS below.  Individual pneumatic modules are actuated providing motion to lift and lower upper sections of the conveyor, thus moving products forward.  This wave motion is controlled by software actuating spring-loaded modules strategically located under the solid surface.





The FESTO modules may be seen as follows:



As you can see, these modules can be “ganged”, forming an interlocking network of cylinders that produces the wave motion necessary for movement.  An individual module looks as follows:



You can see the entry port for the tubing that carries air pressure to valving internal to the device.  The presence or absence of pressure is determined by PLCs programmed to open or close the bellows assembly modules.  That configuration is given below.
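The control idea can be illustrated with a minimal sketch (entirely hypothetical; this is not FESTO’s actual control software): each module’s target lift height follows a sine wave that travels along the conveyor, so modules ahead of a product rise while those behind fall, carrying the product forward on the crest.

```python
import math

def module_heights(n_modules, t, wavelength=4.0, speed=1.0, amplitude=1.0):
    """Target lift height for each pneumatic module at time t.

    A traveling sine wave: modules rise and fall in sequence so the
    crest (and anything riding on it) moves down the conveyor.
    All parameter names and units here are illustrative assumptions.
    """
    k = 2 * math.pi / wavelength           # spatial frequency
    heights = []
    for i in range(n_modules):
        h = amplitude * math.sin(k * (i - speed * t))
        heights.append(max(0.0, h))        # a module can lift, but not pull down
    return heights

# Snapshot of an 8-module section at two instants;
# the raised "crest" shifts forward between the snapshots.
print([round(h, 2) for h in module_heights(8, t=0.0)])
print([round(h, 2) for h in module_heights(8, t=1.0)])
```

In a real system a PLC would translate each target height into a valve open/close command for the corresponding bellows module.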




I think this is a remarkably innovative solution to a problem that has been with us for years.  The product has not been commercialized as yet and, regrettably, we do not have cost figures for the individual modules or the fabric surface covering them.   As with any innovation, I suspect costs will drop as purchases are made and equipment installed.  I definitely intend to “stay close” to any developments and news from FESTO.  We consistently derive our designs from nature, and this is one great example of that.

Hope you enjoyed this one and please give me your comments.


July 27, 2013


When we consider the history of cryogenics we see that this technology, like most others, has been evolutionary rather than revolutionary.  Steady progress in the field has brought us to where we are today.  The cryogenics industry is flourishing, with new applications found every year.  For this reason, it remains viable and has given us processes that certainly benefit our daily lives.

The Invention

The invention of the thermometer by Galileo in 1592 may be considered the start of the science of thermodynamics.    Thermodynamics lies at the heart of temperature measurement, and certainly of measuring substances at extremely low temperatures.   A Galileo thermometer (or Galilean thermometer) is a device made with a sealed glass cylinder containing a clear liquid and several glass vessels of varying densities.   As the temperature changes, the density of the liquid changes, and the individual floats rise or sink depending on their density relative to the surrounding liquid.  Galileo discovered the principle on which this thermometer is based: the density of a liquid changes in proportion to its temperature.
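The working principle can be sketched in a few lines of Python (a toy illustration with made-up densities, not calibrated values): a float rises when it is less dense than the surrounding liquid, and as the liquid warms and its density drops, the floats sink one by one.

```python
def float_positions(liquid_density, float_densities):
    """Which floats rise and which sink in a Galileo thermometer.

    A float rises when it is less dense than the surrounding liquid
    and sinks when it is denser; the liquid's density falls as it warms.
    """
    return {d: ("rises" if d < liquid_density else "sinks")
            for d in float_densities}

# Hypothetical float densities in g/mL, checked against a cool
# (denser) liquid and then a warm (less dense) one.
floats = [0.995, 0.998, 1.001]
print(float_positions(1.000, floats))   # cool liquid: two floats rise
print(float_positions(0.996, floats))   # warm liquid: only the lightest rises
```

Reading the boundary between the floating and sunken groups is exactly how the instrument indicates temperature.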

Although named after Galileo, the thermometer described above was not invented by him.  Galileo did invent a thermometer, called Galileo’s air thermometer (more accurately termed a thermoscope), in or before the year 1603.    The so-called ‘Galileo thermometer’ was actually invented by a group of academics and technicians known as the Accademia del Cimento of Florence.  This group included one of Galileo’s pupils, Evangelista Torricelli, and Torricelli’s pupil Viviani.  Torricelli was an Italian physicist and mathematician, best known for his invention of the barometer, though he also contributed to the invention of the thermometer.     Details of the thermometer were published in the Saggi di naturali esperienze fatte nell’Academia del Cimento sotto la protezione del Serenissimo Principe Leopoldo di Toscana e descritte dal segretario di essa Accademia (1666), the Academy’s main publication. The English translation of this work (1684) describes the device (‘The Fifth Thermometer’) as ‘slow and lazy’, a description that is reflected in an alternative Italian name for the invention, the termometro lento (slow thermometer).  The outer vessel was filled with ‘rectified spirits of wine’ (a concentrated solution of ethanol in water); the weights of the glass bubbles were adjusted by grinding a small amount of glass from the sealed end; and a small air space was left at the top of the main vessel to allow ‘for the Liquor to rarefie’.

Guillaume Amontons first predicted the existence of an absolute zero in 1702, which marks the beginning of the science of low temperatures.    Around 1780, the liquefaction of a gas was achieved for the first time. It took almost 100 years before a so-called “permanent” gas, i.e. oxygen, was successfully liquefied.  Thereafter Linde and Claude founded the cryogenic industry, which today has annual sales of more than US $30 billion.  Kamerlingh Onnes and his Cryogenic Laboratory in Leiden worked in the field of low-temperature physics, which contributed to the experimental proof of the quantum theory.   Heike Kamerlingh Onnes (1853-1926; 1913 Nobel Prize winner for physics) liquefied the most difficult gas of all, helium. He liquefied it at the lowest temperature ever achieved in a laboratory to that date, 4.2 kelvins (the kelvin scale uses degrees the same size as Celsius degrees, but measured from absolute zero).  We will discuss the Kelvin temperature scale in depth later on in this course.  This marked a significant milestone in the history of cryogenics. Since that achievement, increased attention has been devoted to the study of physical phenomena of substances at very low temperatures.
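Since kelvin and Celsius degrees are the same size, converting between the two scales is a simple offset of 273.15. A quick illustrative snippet:

```python
def kelvin_to_celsius(k):
    """Kelvin and Celsius degrees are the same size; only the zero differs."""
    return k - 273.15

# Onnes's helium liquefaction temperature:
print(f"{kelvin_to_celsius(4.2):.2f} C")     # -268.95 C
# Absolute zero:
print(f"{kelvin_to_celsius(0.0):.2f} C")     # -273.15 C
```

So 4.2 K is only about four degrees above absolute zero, which is what made Onnes’s achievement so remarkable.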

Early Research

British chemists Michael Faraday (1791-1867) and Sir Humphry Davy (1778-1829) did pioneering work in low-temperature physics that led to the ongoing development of cryogenics.  In the early to middle 1800s they were able to produce gases by heating mixtures at one end of a sealed tube in the shape of an inverted “V.” A salt and ice mixture was used to cool the other end of the tube. This combination of reduced temperature and increased pressure caused the gas that was produced to liquefy (turn to a liquid). When they opened the tube, the liquid quickly evaporated and cooled to its normal boiling point.

In 1877, French mining engineer Louis Paul Cailletet announced that he had liquefied oxygen and nitrogen.  Cailletet was able to produce only a few droplets of these liquefied gases, however. In his research with oxygen, Cailletet collected the oxygen in a sturdy container and cooled it by evaporating sulphur dioxide in contact with the container. He then compressed the oxygen as much as possible with his equipment. Next he reduced the pressure suddenly, causing the oxygen to expand. The sudden cooling that resulted caused a few drops of liquid oxygen to form.

The need to store liquefied gases led to another important development in cryogenics.   In 1891 Scottish chemist James Dewar (1842-1923) introduced the container known today as the “Dewar flask.” The Dewar flask is actually two flasks, one within the other, separated by an evacuated space (a vacuum). The inside of the outer flask and the outside of the inner flask are coated with silver. The vacuum and the silvered sides of the container are designed to prevent heat passage.

Dewar was also the first person to liquefy hydrogen, in 1898.   Cryogenics, as we recognize it today, started in the late 1800s when Sir James Dewar (1842 – 1923) perfected a technique for compressing and storing gases from the atmosphere as liquids. (Some credit a Belgian team as being first to separate and liquefy gases, but being British we’ll stay with Sir James Dewar for now.)  These compressed gases were super cold, and any metal that came in contact with the ultra-low temperatures showed some interesting changes in its characteristics.

Sir James first liquefied hydrogen in 1898, and a year later he managed to solidify it – just think on that for a moment… This was before electricity was common in houses, when cars and buses were a rare find and photography a rich man’s hobby. By pure persistence and fantastic mental ability, a whole generation of ‘Gentleman Scientists’ managed to bring into existence many things we both rely on and take for granted today.

Sir James managed to study, and lay the cornerstones for, the production of a wide range of gases that we use in our everyday lives, mostly without even realizing it. He also invented the Thermos flask (how else was he to save his liquid gas samples?), the industrial version of which still uses his name – ‘Dewar’.

Later Accomplishments

In the 1940s, scientists discovered that by immersing some metals in liquid nitrogen they could increase the wear resistance of motor parts, particularly in aircraft engines, giving a longer in-service life. At the time this was little more than dipping a part into a flask of liquid nitrogen, leaving it there for an hour or two and then letting it return to room temperature. They managed to get the hardness they wanted, but parts became brittle. As some benefits could be found in this crude method, further research into the process was conducted. The applications at this stage were mostly military.

NASA led the way and perfected a method to gain the best results, consistently, for a whole range of metals. The performance increase in parts was significant but so was the cost of performing the process.

Work continued over the years to perfect the process: insulation materials improved, the method of moving the gas around the process developed and, most importantly, so did the ability to tightly control the rate of temperature change.

Technology enabled scientists to look deeper into the very structure of metals and better understand what was happening to the atoms and how they bond with carbon. They also started to better understand the role that temperature plays in the treatment of metals to affect the final characteristics (more information in the ‘How it works’ section).

As with most everything in our lives today, the microprocessor enabled a steady, continual reduction in the size of the control equipment required, as well as increasing the accuracy of that part of the process.

Since the mid-1990s, the process has started to become a commercially viable treatment in terms of cost of process vs. benefits in performance.



Freezing is one of the oldest and most widely used methods of food preservation.  This process preserves taste, texture, and nutritional value in foods better than any other method.  The freezing process combines the beneficial effects of low temperatures: microorganisms cannot grow, chemical reactions are reduced, and cellular metabolic reactions are delayed.    Cryogenic freezing is the method most often used to accomplish this food preservation.

The importance of freezing as a preservation method

Freezing preservation retains the quality of agricultural products over long storage periods. As a method of long-term preservation for fruits and vegetables, freezing is generally regarded as superior to canning and dehydration with respect to the retention of sensory attributes and nutritive properties (Fennema, 1977). The safety and nutritional quality of frozen products are maximized when high quality raw materials are used, good manufacturing practices are employed in the preservation process, and the products are kept at the specified temperatures.

The need for freezing and frozen storage

Freezing has been successfully employed for the long-term preservation of many foods, providing a significantly extended shelf life. The process involves lowering the product temperature generally to -18 °C or below (Fennema et al., 1973). The physical state of food material is changed when energy is removed by cooling below freezing temperature. The extreme cold simply retards the growth of microorganisms and slows down the chemical changes that affect quality or cause food to spoil (George, 1993).

Competing with new technologies of minimal processing of foods, industrial freezing remains the most satisfactory method for preserving quality during long storage periods (Arthey, 1993). When compared in terms of energy use, cost, and product quality, freezing requires the shortest processing time. Other conventional methods of preserving fruits and vegetables, including dehydration and canning, require less energy than the freezing process and frozen storage. However, when the overall cost is estimated, freezing costs can be kept as low as (or lower than) those of any other method of food preservation (Harris and Kramer, 1975).

Current status of frozen food industry in U.S. and other countries

The frozen food market is one of the largest and most dynamic sectors of the food industry. In spite of considerable competition between the frozen food industry and other sectors, extensive quantities of frozen foods are being consumed all over the world. The industry has recently grown to a value of over US$ 75 billion in the U.S. and Europe combined. Total retail sales of frozen foods in the U.S. alone reached US$ 27.3 billion in 2001 (AFFI, 2003). In Europe, frozen food consumption reached 11.1 million tons in 13 countries in the year 2000 (Quick Frozen Foods International, 2000).

Advantages of freezing technology in developing countries

Developed countries, mostly the U.S., dominate the international trade of fruits and vegetables. The U.S. is ranked number one as both importer and exporter, accounting for the highest percent of fresh produce in world trade. However, many developing countries still lead in the export of fresh exotic fruits and vegetables to developed countries (Mallett, 1993).

For developing countries, the application of freezing preservation is favorable with several main considerations. From a technical point of view, the freezing process is one of the most convenient and easiest of food preservation methods, compared with other commercial preservation techniques. The availability of different types of equipment for several different food products results in a flexible process in which degradation of initial food quality is minimal with proper application procedures. As mentioned earlier, the high capital investment of the freezing industry usually plays an important role in terms of economic feasibility of the process in developing countries. As for cost distribution, the freezing process and storage in terms of energy consumption constitute approximately 10 percent of the total cost (Person and Lohndal, 1993). Depending on the government regulations, especially in developing countries, energy cost for producers can be subsidized by means of lowering the unit price or reducing the tax percentage in order to enhance production. Therefore, in determining the economical convenience of the process, the cost related to energy consumption (according to energy tariffs) should be considered.

Frozen food industry in terms of annual sales in 2001
(Source: Information Resources)

Food item                        Sales (US$ millions)   % change vs. 2000
Total Frozen Food Sales                26 600                  6.1
Baked Goods                             1 400                  9.0
Breakfast Foods                         1 050                  4.1
Novelties                               1 900                 10.5
Ice Cream                               4 500                  5.7
Frozen Dessert/Fruit/Toppings             786                  5.4
Juices/Drinks                             827                 -9.7
Vegetables                              2 900                  4.3

Market share of frozen fruits and vegetables

Today, frozen fruits and vegetables constitute a large and important food group among frozen food products (Arthey, 1993). The historical development of commercial freezing systems designed for special food commodities helped shape the frozen food market. Technological innovations as early as 1869 led to the commercial development and marketing of some frozen foods. Early products saw limited distribution through retail establishments due to the insufficient supply of mechanical refrigeration. Retail distribution of frozen foods gained importance with the development of commercially frozen vegetables in 1929.

The frozen vegetable industry mostly grew after the development of scientific methods for blanching and processing in the 1940s. Only after the achievement of success in stopping enzymatic degradation, did frozen vegetables gain a strong retail and institutional appeal. Today, market studies indicate that considering overall consumption of frozen foods, frozen vegetables constitute a very significant proportion of world frozen-food categories (excluding ice cream) in Austria, Denmark, Finland, France, Germany, Italy, Netherlands, Norway, Sweden, Switzerland, UK, and the USA. The division of frozen vegetables in terms of annual sales in 2001 is shown in Table 3.

The commercialization history of frozen fruits is older than that of frozen vegetables. The commercial freezing of small fruits and berries began in the eastern part of the U.S. in about 1905 (Desrosier and Tressler, 1977). The main advantage of freezing preservation of fruits is the extended usage of frozen fruits during the off-season. Additionally, frozen fruits can be transported to remote markets that could not be accessed with fresh fruit. Also, freezing preservation makes year-round further processing of fruit products possible, such as jams, juice, and syrups from frozen whole fruit, slices, or pulps. In summary, the preservation of fruits by freezing has clearly become one of the most important preservation methods.


Superconductivity: Properties, History, Applications and Challenges

Superconductors differ fundamentally from conventional materials in the quantum-mechanical manner by which electrons, or electric currents, move through the material. It is these differences that give rise to the unique properties and performance benefits that differentiate superconductors from all other known conductors.  Superconductivity is achieved by using cryogenic cooling.

Unique Properties

• Zero resistance to direct current

• Extremely high current carrying density

• Extremely low resistance at high frequencies

• Extremely low signal dispersion

• High sensitivity to magnetic field

• Exclusion of externally applied magnetic field

• Rapid single flux quantum transfer

• Close to speed of light signal transmission

Zero resistance and high current density have a major impact on electric power transmission and also enable much smaller or more powerful magnets for motors, generators, energy storage, medical equipment and industrial separations. Low resistance at high frequencies and extremely low signal dispersion are key aspects in microwave components, communications technology and several military applications. Low resistance at higher frequencies also substantially reduces the challenges inherent to miniaturization brought about by resistive, or I²R, heating. The high sensitivity of superconductors to magnetic field provides a unique sensing capability, in many cases 1000x superior to today’s best conventional measurement technology. Magnetic field exclusion is important in multi-layer electronic component miniaturization, provides a mechanism for magnetic levitation and enables magnetic field containment of charged particles. The final two properties form the basis for digital electronics and computing well beyond the theoretical limits projected for semiconductors. All of these materials properties have been extensively demonstrated throughout the world.
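The resistive-heating point can be made concrete with a small illustrative calculation (hypothetical numbers of my own): Joule loss grows with the square of the current, and it is exactly this term that a superconductor eliminates for direct current.

```python
def resistive_loss_watts(current_amps, resistance_ohms):
    """Joule heating: P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms

# Hypothetical example: 1000 A through a conductor with 0.01 ohm of resistance
print(resistive_loss_watts(1000, 0.01))   # 10000.0 W lost as heat
# The same current through a superconductor (R = 0 for direct current):
print(resistive_loss_watts(1000, 0.0))    # 0.0 W
```

Because the loss scales with I², doubling the current in a conventional conductor quadruples the heat, which is why zero resistance matters so much for high-current power transmission and magnets.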


Cryopreservation of Haematopoietic Stem Cells

This routine procedure generally involves slow cooling in the presence of a cryoprotectant to avoid the damaging effects of intracellular ice formation. The cryoprotectant in popular use is dimethyl sulphoxide (DMSO), and the use of a controlled-rate freezing technique at 1 to 2 °C/min and rapid thawing is considered standard.  Passive cooling devices that employ mechanical refrigerators, generally at −80 °C, to cool the cells (so-called dump-freezing) generate cooling rates similar to those adopted in controlled-rate freezing, and the outcome from such protocols has generally been comparable.  Research has been undertaken to replace the largely empirical approach to developing an optimized protocol with a methodological one that takes into account the sequence of damaging events that occur during the freezing and thawing process.
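The controlled-rate step can be pictured with a minimal sketch (illustrative only, not a clinical protocol; the start temperature is an assumption): the setpoint is lowered linearly at a fixed rate until the storage temperature is reached.

```python
def cooling_setpoints(start_c=20.0, end_c=-80.0, rate_c_per_min=1.0):
    """Temperature setpoints, one per minute, for a linear controlled-rate ramp.

    Defaults reflect the figures in the text: roughly 1 degree C per minute
    down to a -80 degree C storage temperature, from an assumed 20 C start.
    """
    setpoints = []
    t = start_c
    while t > end_c:
        setpoints.append(t)
        t -= rate_c_per_min
    setpoints.append(end_c)   # hold at the storage temperature
    return setpoints

ramp = cooling_setpoints()
print(len(ramp))              # 101 setpoints: a 100-minute ramp at 1 C/min
print(ramp[0], ramp[-1])      # 20.0 -80.0
```

A real controlled-rate freezer follows a profile like this under feedback control, and may add holds or rate changes around the point where ice first forms.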

A Stirling Cycle cryocooler has been developed as an alternative to conventional liquid nitrogen controlled-rate freezers. Unlike liquid nitrogen systems, the Stirling Cycle freezer does not pose a contamination risk, can be used in sterile conditions and has no need for a constant supply of cryogen. Three types of samples from two species (murine embryos, human spermatozoa and human embryonic stem cells), each requiring different cooling protocols, were cryopreserved in the Stirling Cycle freezer. For comparison, cells were also frozen in a conventional liquid nitrogen controlled-rate freezer. Upon thawing, the rates of survival of viable cells were generally greater than 50% for mouse embryos and human embryonic stem cells, based on morphology (mouse embryos) and staining and colony formation (human embryonic stem cells). Survival rates of human spermatozoa frozen in the Stirling Cycle freezer, based on motility and dead-cell staining, were similar to those of samples frozen in a conventional controlled-rate freezer using liquid nitrogen.


There are many benefits of sub-cooling metals, including:

  • Reduces abrasive and adhesive wear. Treated material typically yields two to three times the production of non-treated material.
  • Permanently changes the structure of the metal, resulting in improved machining properties. Treated components may be ground after treatment and the benefits of treatment are retained.
  • Reduces the frequency and cost of tool re-manufacture. Worn treated tools require less material removal to restore a uniform cutting edge. Furthermore, treated tools may be reground more times before falling below the minimum acceptable dimensions.
  • Substantially reduces machine downtime caused by tool replacement.
  • Improves surface finish on material being manufactured with treated tooling. Treated tooling stays sharper and in tolerance longer than untreated tooling.
  • Reduces the likelihood of catastrophic tool failures due to stress fracture.
  • Relieves inherent/residual stress caused by manufacture.
  • Increases the overall durability of the treated product.

Cryogenic processing changes the structure of materials being treated; depending on the composition of the material, it does three things:

1. Turns retained austenite into martensite

2. Refines the carbide structure

3. Stress relieves

Cryogenic treatment of ferrous metals converts retained austenite to martensite and promotes the precipitation of very fine carbides.

Most heat treatments at best will leave somewhere between ten and twenty percent retained austenite in ferrous metals. Because austenite and martensite have different size crystal structures, there will be stresses built into the crystal structure where the two co-exist. Cryogenic processing eliminates these stresses by converting the majority of the retained austenite to martensite.

An important factor to keep in mind is that cryogenic processing is not a substitute for heat treating. If the product is poorly heat treated, cryogenic treatment cannot help it; likewise, if the product is overheated during re-manufacture or over-stressed during use, you may destroy the temper of the steel developed during the heat treatment process, rendering the cryogenic process useless. Cryogenic processing will not in itself harden metal like quenching and tempering; it is an additional treatment to heat treating.

This transformation itself can cause a problem in poorly heat-treated items: where there is too much retained austenite, it may result in dimensional change and possible stress points in the product being treated. This is why Cryogen Industries will not treat poorly heat-treated items.
The cryogenic metal treatment process also promotes the precipitation of small carbide particles in tool steels and suitable alloying metals. The fine carbides act as hard areas with a low coefficient of friction, which greatly add to the wear resistance of the metal.

A Japanese study of the role of carbides in the wear-resistance improvement of tool steel by cryogenic treatment concluded that the precipitation of fine carbides has more influence on the increase in wear resistance than does the removal of retained austenite.

The process also relieves residual stresses in metals and some forms of plastics; this has been proven by field studies conducted on products in high-impact scenarios where stress fractures are evident.

Cryogenic processing is not a coating; it changes the structure of the material being treated to the core and in reality works in synergy with coatings. As cryogenic treatment is a once-only process, you will never wear it off like a coating, but you will be able to sharpen, dress, or modify your tooling without damaging the treatment.

Tool Failure – Another good reason to cryogenically treat

Tooling failures that can occur include abrasive and adhesive wear, chipping, deformation, galling, catastrophic failure and stress fracture.

Abrasive wear results from friction between the tool and the work material. Adhesive wear occurs when the action of the tool being used exceeds the material’s ductile strength or the material is simply too hard to process.

Adhesive wear causes the formation of micro-cracks (stress fractures). These micro-cracks eventually interconnect, or network, and form fragments that pull out. This “pullout” looks like excessive abrasive wear on cutting edges when actually it is stress-fracture failure. When fragments form, both abrasive and adhesive wear occur because the fragments become wedged between the tool and the work piece, causing friction; this can then lead to poor finish or, at worst, catastrophic tool failure.

Catastrophic tooling failures can cause thousands of dollars in machine damage and production loss. This type of tool failure can cause warping and stress fractures to tool heads and decks as well as rotating and load-bearing assemblies.


Most of the world’s Natural Gas resources are remote from the market and their exploitation is constrained by factors such as transportation costs and market outlets. To increase the economic utilization of Natural Gas, techniques other than pipeline transmission or LNG shipment have been developed. Chemical conversion of gas (gas-to-liquids) to make it transportable as a liquid and add value to the products is now a proven technology.





March 6, 2013

The following resources were used to write this blog: 1.) “Vortex Tubes—Theory and Application”, by iProcessSmart; copyright 1999 and 2.) “The Ranque-Hilsch Vortex Tube”, by Giorgio De Vera, March 2010.

If you follow my postings and read any of my work, you know I mainly stay within subjects involving education and technology.  Sometimes I tackle subject matter off the “beaten path” but STEM (Science, Technology, Engineering and Mathematics) get most of my ink.

Recently, I was asked to get involved with specifying a vortex tube.  The application was very specific and frankly quite fascinating.  Well, with that said, I had to go back to school on this one.  Let’s take a look.



The vortex tube was invented quite by accident in 1928. George Ranque, a French physics student, was experimenting with a vortex-type pump he had developed when he noticed warm air exhausting from one end and cold air from the other. Ranque soon forgot about his pump and started a small firm to exploit the commercial potential for this strange device that produced hot and cold air with no moving parts. However, it soon failed and the vortex tube slipped into obscurity until 1945 when Rudolph Hilsch, a German physicist, published a widely read scientific paper on the device.

Much earlier, the great nineteenth century physicist, James Clerk Maxwell postulated that since heat involves the movement of molecules, we might someday be able to get hot and cold air from the same device with the help of a “friendly little demon” who would sort out and separate the hot and cold molecules of air.

Thus, the vortex tube has been variously known as the “Ranque Vortex Tube”, the “Hilsch Tube”, the “Ranque-Hilsch Tube”, and “Maxwell’s Demon”. By any name, it has in recent years gained acceptance as a simple, reliable and low cost answer to a wide variety of industrial spot-cooling problems.


The tube itself is a mechanical device that separates compressed air into an outer, high-temperature region and an inner, lower-temperature region. It operates as a refrigerating machine with a very simple geometry and no moving parts. It is used commercially in CNC machines, cooling suits, refrigerators, airplanes, etc. Other practical applications include cooling of laboratory equipment, quick startup of steam power generators, natural gas liquefaction, and particle separation in the waste-gas industry. The two configurations are shown below:

[Figure: Representation of Counter-Flow Type]

[Figure: Representation of Uni-Flow Type]

A vortex tube uses compressed air as a power source, has no moving parts, and produces hot air from one end and cold air from the other. The volume and temperature of these two airstreams are adjustable with a valve built into the hot air exhaust. Temperatures as low as -50°F (-46°C) and as high as +260°F (127°C) are possible.
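As a quick sanity check on the quoted range, the Fahrenheit figures convert to Celsius with the standard formula C = (F − 32) × 5⁄9. A minimal sketch (the temperature limits are simply the ones quoted above, not a property of any particular tube):

```python
def f_to_c(temp_f: float) -> float:
    """Convert a temperature from degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

# The extremes quoted for commercial vortex tubes:
print(round(f_to_c(-50)))   # -46 (°C), matching the -50°F low end
print(round(f_to_c(260)))   # 127 (°C), matching the +260°F high end
```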


Theories abound regarding the dynamics of a vortex tube. Here is one widely accepted explanation of the phenomenon:

Compressed air is supplied to the vortex tube and passes through nozzles that are tangent to an internal counterbore. These nozzles set the air in a vortex motion. This spinning stream of air turns 90° and passes down the hot tube in the form of a spinning shell, similar to a tornado. A valve at one end of the tube allows some of the warmed air to escape. What does not escape heads back down the tube as a second vortex inside the low-pressure area of the larger vortex. This inner vortex loses heat and exhausts through the other end as cold air.

One airstream moves up the tube and the other moves down the tube while both rotate in the same direction at the same angular velocity. That is, a particle in the inner stream completes one rotation in the same amount of time as a particle in the outer stream. However, because of the principle of conservation of angular momentum, the rotational speed of the smaller vortex might be expected to increase. (The conservation principle is demonstrated by spinning skaters, who can slow or speed up their spin by extending or drawing in their arms.) But in the vortex tube, the speed of the inner vortex remains the same, so angular momentum has been lost from the inner vortex. The energy that is lost shows up as heat in the outer vortex. Thus the outer vortex becomes warm, and the inner vortex is cooled.
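Whatever the internal mechanism, the two exit streams must still satisfy conservation of energy: with no external work or heat transfer, the inlet enthalpy equals the mass-weighted sum of the two exit enthalpies. Treating air as an ideal gas with constant specific heat, this gives T_in = μ·T_cold + (1 − μ)·T_hot, where μ is the cold mass fraction set by the hot-end valve. A minimal sketch of that balance, with purely illustrative numbers (not measured data from any real tube):

```python
def hot_exit_temp(t_in_c: float, t_cold_c: float, cold_fraction: float) -> float:
    """Hot-exit temperature from an adiabatic energy balance.

    t_in_c:        inlet air temperature (deg C)
    t_cold_c:      cold-exit temperature (deg C)
    cold_fraction: fraction mu of inlet mass leaving through the cold end (0..1)

    With constant cp the balance T_in = mu*T_cold + (1 - mu)*T_hot
    rearranges to T_hot = (T_in - mu*T_cold) / (1 - mu).
    (The Celsius offset cancels, so the linear balance holds in deg C too.)
    """
    mu = cold_fraction
    return (t_in_c - mu * t_cold_c) / (1.0 - mu)

# Illustrative case: 20°C inlet, with 60% of the flow exiting cold at -20°C.
print(hot_exit_temp(20.0, -20.0, 0.6))  # 80.0 -> the hot stream leaves at 80°C
```

Note how opening the hot valve (raising μ) forces the remaining hot stream to carry away the rejected energy at a higher temperature, which is exactly the adjustability described above.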


There are two classifications of the vortex tube, and both are currently in use in industry. The more popular is the counter-flow vortex tube. The hot air that exits from the far end of the tube is controlled by the cone valve, while the cold air exits through an orifice next to the inlet.

[Figure: Counter-Flow Vortex Tube]



On the other hand, the uni-flow vortex tube does not have its cold air orifice next to the inlet.

[Figure: Uni-Flow Vortex Tube]


Instead, the cold air comes out through a concentrically located annular exit in the cold valve. This type of vortex tube is used in applications where space and equipment cost are of high importance. The mechanism for the uni-flow tube is similar to the counter-flow tube. A radial temperature separation is still induced inside, but the efficiency of the uni-flow tube is generally less than that of the counter-flow tube.

This is a very brief explanation of vortex tubes but hopefully one that will pique interest in further study. I welcome your comments.
