For those who might be a little bit unsure as to the definition of machine vision (MV), let’s now define the term as follows:

“Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance in industry.”

There are non-industrial uses for MV also, such as 1.) law enforcement, 2.) security, 3.) facial recognition and 4.) robotic surgery.  With this being the case, there are several critical aspects of the technology that must be considered prior to purchasing an MV system or even discussing MV with a vendor.  We will now take a closer look at those critical factors.


As with any technology, there are certain elements critical to success. MV is no different.  There are six (6) basic and critical factors for choosing an imaging system.  These are as follows:

  • Resolution–A higher-resolution camera will undoubtedly help increase accuracy by yielding a clearer, more precise image for analysis.   The downside to higher resolution is slower speed. The resolution of the image required for an inspection is determined by two factors: 1.) the field of view required and 2.) the minimum dimension that must be resolved by the imaging system. Of course, lenses, lighting, mechanical placement and other factors come into play, but, if we confine our discussion to pixels, we can set these topics aside and focus on the camera characteristics. As an example, if a beverage packaging system requires verification that a case is full prior to sealing, it is necessary for the camera to image the contents from above and verify that twenty-four (24) bottle caps are present. Since the bottles and caps fit within the case, the caps are the smallest feature within the scene that must be resolved. Once the application parameters and smallest features have been determined, the required camera resolution can be roughly defined. It is anticipated that, when the case is imaged, the bottle caps will stand out as light objects within a dark background. With the bottle caps being round, the image will appear as circles bounded by two edges with a span between the edges. The edges are defined as points where the image makes a transition from dark to light or light to dark. The span is the diametrical distance between the edges. At this point, it is necessary to define the number of pixels that will represent each of these points. In this application, it would be sufficient to allow three pixels to define each of the two edges and four pixels to define the span. Therefore, a minimum of ten pixels should be used to define the 25mm bottle cap in the image. From this, we can determine that one pixel will represent 2.5mm of the object itself. Now we can determine the overall camera resolution.
Choosing 400mm as the horizontal field of view, the camera needs a minimum of 400/2.5 = 160 pixels of horizontal resolution. Vertically, with a 250mm field of view, the camera needs 250/2.5 = 100 pixels of vertical resolution. Adding a further 10% to each dimension to account for variations in the object location within the field of view results in the absolute minimum camera resolution. There are pros and cons to increasing image resolution.
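The arithmetic above can be collected into a short Python sketch (the function and parameter names are my own, not from any vision library; the numbers are the bottle-cap example's):

```python
def min_camera_resolution(fov_h_mm, fov_v_mm, feature_mm,
                          pixels_per_feature=10, margin=0.10):
    """Rough minimum sensor resolution for an inspection task.

    pixels_per_feature: pixels defining the smallest feature
    (3 per edge + 4 for the span = 10 in the bottle-cap example).
    margin: extra allowance for object-position variation.
    """
    mm_per_pixel = feature_mm / pixels_per_feature   # 25mm / 10 px = 2.5mm per pixel
    h = fov_h_mm / mm_per_pixel                      # 400 / 2.5 = 160 px
    v = fov_v_mm / mm_per_pixel                      # 250 / 2.5 = 100 px
    return round(h * (1 + margin)), round(v * (1 + margin))

print(min_camera_resolution(400, 250, 25))  # (176, 110)
```

With the 10% allowance, the 160 x 100 pixel minimum becomes roughly 176 x 110; any stock camera at or above that resolution would satisfy the example.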

Pros and cons of increasing resolution

Digital cameras transmit image data as a series of digital numbers that represent pixel values. A camera with a resolution of 200 x 100 pixels will have a total of 20,000 pixels, and, therefore, 20,000 digital values must be sent to the acquisition system. If the camera is operating at a data rate of 25MHz, it takes 40 nanoseconds to send each value. This results in a total time of approximately 0.0008 seconds, which equates to 1,250 frames per second. Increasing the camera resolution to 640 x 480 results in a total of 307,200 pixels, which is approximately 15 times greater. Using the same data rate of 25MHz, a total time of 0.012288 seconds, or 81.4 frames per second, is achieved. These values are approximations, and actual camera frame rates will be somewhat slower because exposure and setup times must be added, but it is apparent that an increase in camera resolution will result in a proportional decrease in camera frame rate. While a variety of camera output configurations will enable increased camera resolution without a sacrifice in frame rate, these are accompanied by additional complexity and associated higher costs.
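The frame-rate arithmetic in this paragraph can be checked with a few lines of Python (a sketch for a single-output camera; as noted above, real cameras add exposure and setup overhead):

```python
def max_frame_rate(width, height, data_rate_hz=25e6):
    """Upper-bound readout time and frame rate for a single-output
    camera sending one pixel value per clock cycle."""
    pixels = width * height
    readout_s = pixels / data_rate_hz
    return readout_s, 1.0 / readout_s

t, fps = max_frame_rate(200, 100)
print(t, round(fps))        # 0.0008 1250
_, fps = max_frame_rate(640, 480)
print(round(fps, 1))        # 81.4
```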

  • Speed of Exposure–Products moving rapidly down a conveyor line, such as candy or bottled products, require much faster exposure speeds from a vision system.  When selecting a digital camera, the speed of the object being imaged must therefore be considered.  Objects that are not moving during exposure can be handled by relatively simple, inexpensive cameras with perfectly satisfactory results. Objects moving continuously require other considerations. In still other cases, objects may be stationary only for very short periods of time and then move rapidly; if so, inspection during the stationary period is the most desirable.

Stationary or slow-moving objects: Area array cameras are well suited to imaging objects that are stationary or slow moving. Because the entire area array must be exposed at once, any movement during the exposure time will result in a blurred image. Motion blurring can, however, be controlled by reducing exposure times or using strobe lights.

Fast-moving objects: When using an area array camera for objects in motion, some consideration must be given to the amount of movement with respect to the exposure time of the camera and the object resolution, defined as the smallest feature of the object represented by one pixel. A rule of thumb when acquiring images of a moving object is that the exposure must occur in less time than it takes for the object to move beyond one pixel. If you are grabbing images of an object that is moving steadily at 1cm/second and the object resolution is set at 1 pixel/mm, then the absolute maximum exposure time is 1/10 of a second. There will be some amount of blur when using the maximum exposure time, since the object will have moved by an amount equal to 1 pixel on the camera sensor. In this case, it is preferable to set the exposure time to something faster than the maximum, possibly 1/20 of a second, to keep the blur within half a pixel. If the same object moving at 1cm/second has an object resolution of 1 pixel/micrometer, then a maximum exposure of 1/10,000 of a second would be required. How fast the exposure can be set will depend on what is available in the camera and whether you can get enough light on the object to obtain a good image. Additional tricks of the trade can be employed when attempting to obtain short exposure times of moving objects. In cases where a very short exposure time is required from a camera that does not have this capability, an application may make use of shutters or strobed illumination. Cameras that employ multiple outputs can also be considered if an application requires speeds beyond the capabilities of a single-output camera.
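The rule of thumb above reduces to one line of arithmetic; here is a minimal Python sketch (the helper name is my own):

```python
def max_exposure_s(speed_mm_per_s, object_res_px_per_mm, max_blur_px=1.0):
    """Longest exposure that keeps motion blur within max_blur_px.

    Rule of thumb from the text: expose in less time than the object
    takes to cross one pixel (or half a pixel, for safety).
    """
    px_per_s = speed_mm_per_s * object_res_px_per_mm
    return max_blur_px / px_per_s

# 1 cm/s at 1 pixel/mm, 1-pixel blur limit -> 1/10 of a second
print(max_exposure_s(10, 1))          # 0.1
# Half-pixel limit -> 1/20 of a second
print(max_exposure_s(10, 1, 0.5))     # 0.05
# 1 cm/s at 1 pixel/micrometer (1000 px/mm) -> 1/10,000 of a second
print(max_exposure_s(10, 1000))       # 0.0001
```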

  • Frame Rate–The frame rate of a camera is the number of complete frames a camera can send to an acquisition system within a predefined time period.  This period is usually stated as a specific number of frames per second.  As an example, a camera with a sensor resolution of 640 x 480 is specified with a maximum frame rate of 50 frames per second. Therefore, the camera needs 20 milliseconds to send one frame following an exposure. Some cameras are unable to take a subsequent exposure while the current exposure is being read, so they will require a fixed amount of time between exposures when no imaging takes place. Other types of cameras, however, are capable of reading one image while concurrently taking the next exposure. Therefore, the readout time and method of the camera must be considered when imaging moving objects. Further consideration must be given to the amount of time between frames when exposure may not be possible.
  • Spectral Response and Responsiveness–All digital cameras that employ electronic sensors are sensitive to light energy. The wavelength of light energy that cameras are sensitive to typically ranges from approximately 400 nanometers to a little beyond 1000 nanometers. There may be instances in imaging when it is desirable to isolate certain wavelengths of light that emanate from an object, and where the characteristics of a camera at the desired wavelength need to be defined.  A matching and selection process must be undertaken by application engineers to ensure proper usage of equipment relative to the needs at hand. Filters may be incorporated into the application to tune out the unwanted wavelengths, but it will still be necessary to know how well the camera will respond to the desired wavelength. The responsiveness of a camera defines how sensitive the camera is to a fixed amount of exposure, and it can be expressed in LUX or in DN/(nJ/cm^2). “LUX” is a common term among imaging engineers used to define sensitivity in photometric units over the range of visible light, whereas DN/(nJ/cm^2) is a radiometric expression that does not limit the response to visible light. In general, both terms state how the camera will respond to light. The radiometric expression of x DN/(nJ/cm^2) indicates that, for a known exposure of 1 nJ/cm^2, the camera will output pixel data of x DN (digital numbers, also known as grayscale). Gain is another feature available in some cameras that can provide various levels of responsiveness, so the responsiveness of a camera should be stated at a defined gain setting.
Be aware, however, that a camera may be said to have high responsiveness at a high gain setting, but increased noise level can lead to reduced dynamic range.
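As a small worked example of the radiometric expression (the responsivity figure here is invented for illustration, not taken from any real camera):

```python
def output_dn(responsivity_dn_per_njcm2, exposure_nj_per_cm2, bit_depth=8):
    """Predicted pixel value: responsivity (DN per nJ/cm^2) times
    exposure, clipped to the sensor's full-scale digital number."""
    full_scale = 2 ** bit_depth - 1
    return min(round(responsivity_dn_per_njcm2 * exposure_nj_per_cm2), full_scale)

# A hypothetical 8-bit camera rated 30 DN/(nJ/cm^2):
print(output_dn(30, 2.0))   # 60
print(output_dn(30, 20.0))  # 255 (saturated)
```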
  • Bit Depth–Digital cameras produce digital data, or pixel values. Being digital, this data has a specific number of bits per pixel, known as the pixel bit depth. This bit depth typically ranges from 8 to 16 bits. In monochrome cameras, the bit depth defines the quantity of gray levels from dark to light, where a pixel value of 0 is 100% dark and 255 (for 8-bit cameras) is 100% white. Values between 0 and 255 are shades of gray, with values near 0 being dark gray and values near 255 almost white. 10-bit data will produce 1024 distinct levels of gray, while 12-bit data will produce 4096 levels. Each application should be considered carefully to determine whether fine or coarse steps in grayscale are necessary. Machine vision systems commonly use 8-bit pixels, and going to 10 or 12 bits instantly doubles the data quantity, as another byte is required to transmit each pixel. This also results in decreased system speed, because two bytes per pixel are used but not all of the bits are significant. Higher bit depths can also increase the complexity of system integration, since they necessitate larger cable sizes, especially if a camera has multiple outputs.
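The gray-level counts quoted above follow directly from 2 raised to the bit depth; a quick Python check (the helper name is mine):

```python
def gray_levels(bit_depth):
    """Distinct gray levels available at a given pixel bit depth."""
    return 2 ** bit_depth

for bits in (8, 10, 12):
    bytes_on_wire = (bits + 7) // 8   # 10- and 12-bit pixels need 2 bytes
    print(bits, "bits:", gray_levels(bits), "levels,",
          bytes_on_wire, "byte(s) per pixel")
```

The byte count is why 10- or 12-bit data doubles the transmitted volume even though only a few extra bits are significant.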
  • Lighting–Perhaps no other aspect of vision system design and implementation has consistently caused more delay, cost overruns, and general consternation than lighting. Historically, lighting was often the last aspect specified, developed, and/or funded, if at all. And this approach was not entirely unwarranted, as until recently there was no real vision-specific lighting on the market, meaning lighting solutions typically consisted of standard incandescent or fluorescent consumer products, with various amounts of ambient contribution.  The following lighting sources are now commonly used in machine vision:
  • Fluorescent
  • Quartz Halogen – Fiber Optics
  • LED – Light Emitting Diode
  • Metal Halide (Mercury)
  • Xenon
  • High Pressure Sodium

Fluorescent, quartz-halogen, and LED are by far the most widely used lighting types in machine vision, particularly for small to medium scale inspection stations, whereas metal halide, xenon, and high pressure sodium are more typically used in large scale applications, or in areas requiring a very bright source. Metal halide, also known as mercury, is often used in microscopy because it has many discrete wavelength peaks, which complements the use of filters for fluorescence studies. A xenon source is useful for applications requiring a very bright, strobed light.

Historically, fluorescent and quartz halogen lighting sources have been used most commonly. In recent years, LED technology has improved in stability, intensity, and cost-effectiveness; however, it is still not as cost-effective for large area lighting deployment, particularly compared with fluorescent sources. On the other hand, if application flexibility, output stability, and longevity are important parameters, then LED lighting might be more appropriate. Depending on the exact lighting requirements, oftentimes more than one source type may be used for a specific implementation, and most vision experts agree that no one source type can adequately solve all lighting issues. It is important to consider not only a source’s brightness, but also its spectral content.  Microscopy applications, for example, often use a full-spectrum quartz halogen, xenon, or mercury source, particularly when imaging in color; however, a monochrome LED source is also useful for B&W CCD cameras, and now for color applications as well, with the advent of “all color – RGB” and white LED light heads. In those applications requiring high light intensity, such as high-speed inspections, it may be useful to match the source’s spectral output with the spectral sensitivity of your particular vision camera. For example, CMOS sensor based cameras are more IR sensitive than their CCD counterparts, imparting a significant sensitivity advantage in light-starved inspection settings when using IR LED or IR-rich tungsten sources.

Vendors must be contacted to recommend proper lighting relative to the job to be accomplished.


February 6, 2015

In John 18:38 we read the following from the King James Version of the Bible: “Pilate saith unto him, What is truth? And when he had said this, he went out again unto the Jews, and saith unto them, I find in him no fault at all.”  Pilate did not stay for an answer.

One of my favorite television programs was the X-Files.  It’s been off the air for some years now, but we are told it will return as a “mini-series” sometime in the very near future.  The original cast, i.e. Fox Mulder and Dana Scully, will again remind us—THE TRUTH IS OUT THERE.  The truth is definitely out there, as indicated by the men and women comprising the Large Synoptic Survey Telescope team.  They are definitely staying for answers.  The team members posed for a group photograph as seen below.



The Large Synoptic Survey Telescope (LSST) is a revolutionary facility which will produce an unprecedented wide-field astronomical survey of our universe using an 8.4-meter ground-based telescope. The LSST leverages innovative technology in all subsystems: 1.) the camera (3200 megapixels, the world’s largest digital camera), 2.) the telescope (simultaneous casting of the primary and tertiary mirrors; two aspherical optical surfaces on one substrate), and 3.) data management (30 terabytes of data nightly).  There will be almost instant alerts issued for objects that change in position or brightness.

The known forms of matter and types of energy experienced here on Earth account for only four percent (4%) of the universe. The remaining ninety-six percent (96%), though central to the history and future of the cosmos, remains shrouded in mystery. Two tremendous unknowns present one of the most tantalizing and essential questions in physics: what are dark energy and dark matter? LSST aims to expose both.


Something is driving the universe apart, accelerating the expansion begun by the Big Bang. This force accounts for seventy percent (70%) of the cosmos, yet is invisible and can only be “seen” by its effects on space. Because LSST is able to track cosmic movements over time, its images will provide some of the most precise measurements ever of our universe’s inflation. Light appears to stretch at the distant edges of space, a phenomenon known as red shift, and LSST may offer the key to understanding the cosmic anti-gravity behind it.


Einstein deduced that massive objects in the universe bend the path of light passing nearby, proving the curvature of space. One way of observing the invisible presence of dark matter is examining the way its heavy mass bends the light from distant stars. This technique is known as gravitational lensing. The extreme sensitivity of the LSST, as well as its wide field of view, will help assemble comprehensive data on these gravitational lenses, offering key clues to the presence of dark matter. The dense and mysterious substance acts as a kind of galactic glue, and it accounts for twenty-five percent (25 %) of the universe.

From its mountaintop site, LSST will image the entire visible sky every few nights, capturing changes over time from seconds to years. Ultimately, after 10 years of observation, a stunning time-lapse movie of the universe will be created.

As the LSST stitches together thousands of images of billions of galaxies, it will process and upload that information for applications beyond pure research. Frequent, real-time updates – 100 thousand a night – announcing the drift of a planet or the flicker of a dying star will be made available to both research institutions and interested astronomers.

In conjunction with platforms such as Google Earth, LSST will build a 3D virtual map of the cosmos, allowing the public to fly through space from the comfort of home.  ALLOWING THE PUBLIC is the operative phrase. For the very first time, the public will have access to information, as it is presented, relative to the cosmos.  LSST educational materials will clearly specify national and state science, math and technology standards that are met by the activity. The materials will enhance 21st century workforce skills, incorporate inquiry and problem solving, and ensure continual assessment embedded in instruction.


The decision to place LSST on Cerro Pachón in Chile was made by an international site selection committee based on a competitive process.  In short, modern telescopes are located in sparsely populated areas (to avoid light pollution), at high altitudes and in dry climates (to avoid cloud cover). In addition to those physical concerns, there are infrastructure issues. The ten best candidate sites in both hemispheres were studied by the site selection committee. Cerro Pachón was the overall winner in terms of quality of the site for astronomical imaging and available infrastructure. The result will be superb deep images from the ultraviolet to near infrared over the vast panorama of the entire southern sky.

The location is shown by the following digital image:

Construction Site

The actual site location, as you can see below, is a very rugged outcropping of rock now used by farmers needing food for their sheep.

The Mountain Location

The Observatory will be located about 500km (311 miles) north of Santiago, Chile, about 52km (32 miles) from La Serena, or 80km (50 miles) by road, at an altitude of 2200 meters (7,218 feet).  It lies on a 34,491Ha (85,227 acre) site known as “Estancia El Tortoral,” which was purchased by AURA on the open market in 1967 for use as an astronomical observatory.

When purchased, the land supported a number of subsistence farmers and goat herders. They were allowed to continue to live on the reserve after it was purchased by AURA and have gradually been leaving voluntarily for more lucrative jobs in the nearby towns.

As a result of departure of most of its human inhabitants and a policy combining environmental protection with “benign neglect” on the part of the Observatory, the property sees little human activity except for the roads and relatively small areas on the tops of Cerro Tololo and Cerro Pachon. As a result, much of the reserve is gradually returning to its natural state. Many native species of plants and animals, long thought in danger of extinction, are now returning. The last half of the trip to Tololo is an excellent opportunity to see a reasonably intact Chilean desert ecosystem.


LSST construction is underway, with the NSF funding authorized as of 1 August 2014.

Early development was funded by a number of small grants, with major contributions in January 2008 by software billionaire Charles Simonyi and Bill Gates of $20 million and $10 million, respectively.  $7.5 million was included in the U.S. President’s FY2013 NSF budget request. The Department of Energy is expected to fund construction of the digital camera component by the SLAC National Accelerator Laboratory, as part of its mission to understand dark energy.

Construction of the primary mirror at the University of Arizona‘s Steward Observatory Mirror Lab, the most critical and time-consuming part of a large telescope’s construction, is almost complete. Construction of the mold began in November 2007, mirror casting was begun in March 2008, and the mirror blank was declared “perfect” at the beginning of September 2008.  In January 2011, both M1 and M3 figures had completed generation and fine grinding, and polishing had begun on M3.

As of December 2014, the primary mirror is complete and awaiting final approval, and the mirror transport box is ready to receive it for storage until it is shipped to Chile.

The secondary mirror was manufactured by Corning of ultra low expansion glass and coarse-ground to within 40 μm of the desired shape. In November 2009, the blank was shipped to Harvard University for storage until funding to complete it was available. On October 21, 2014, the secondary mirror blank was delivered from Harvard to Exelis for fine grinding.

Site excavation began in earnest March 8, 2011, and the site had been leveled by the end of 2011. Also during that time, the design continued to evolve, with significant improvements to the mirror support system, stray-light baffles, wind screen, and calibration screen.

In November 2014, the LSST camera project, which is separately funded by the United States Department of Energy, passed its “critical decision 2” design review and is progressing toward full funding.

When completed, the facility will look as follows with the mirror mounted as given by the second JPEG:

Artist Rendition of Building


Telescope Relative to Building


The assembled mirror structure is given below.


In the LSST optical design, the primary (M1) and tertiary (M3) mirrors form a continuous surface without any vertical discontinuities. Because the two surfaces have different radii of curvature, a slight cusp is formed where the two surfaces meet, as seen in the figure below. This design makes it possible to fabricate both the primary and tertiary mirrors from a single monolithic substrate. We refer to this option as the M1-M3 monolith.


After a feasibility review was held on 23 June 2005, the LSST project team adopted the monolithic approach to fabricating the M1 and M3 surfaces as its baseline. In collaboration with the University of Arizona and the Steward Observatory Mirror Lab (SOML), construction has begun with detailed engineering of the mirror blank and the testing procedures for the M1-M3 monolith. The M1-M3 monolith blank will be formed from Ohara E6 low expansion glass using the spin casting process developed at SOML.

At 3.42 meters in diameter, the LSST secondary mirror will be the largest convex mirror ever made. The mirror is aspheric, with approximately 17 microns of departure from the best-fit sphere. The design uses a 100 mm thick solid meniscus blank made of a low expansion glass (e.g. ULE or Zerodur) similar to the glasses used by the SOAR and Discovery Channel telescopes. The mirror is actively supported by 102 axial and 6 tangent actuators. The alignment of the secondary to the M1-M3 monolith is accomplished by the 6 hexapod actuators between the mirror cell and support structure. The large conical baffle is necessary to prevent the direct reflection of starlight from the tertiary mirror into the science camera.


The truth is out there and projects such as the one described in this post AND the Large Hadron Collider at CERN certainly prove some people and institutions are not at all reluctant to search for that truth, the ultimate purpose being to discover where we come from.  Are we truly made from “star stuff”?



January 24, 2015

Two days ago I had the need to refresh my memory concerning the Second Law of Thermodynamics. Most of the work I do involves designing work cells to automate manufacturing processes but one client asked me to take a look at a problem involving thermodynamic and heat transfer processes.  The statement of the second law is as follows:

“It is impossible to extract an amount of heat “Qh” from a hot reservoir and use it all to do work “W”.  Some amount of heat “Qc” must be exhausted to a cold reservoir.”

Another way to say this is:

“It is not possible for heat to flow from a cooler body to a warmer body without any work being done to accomplish this flow.”
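The two statements amount to simple first-law bookkeeping: the work obtained is W = Qh - Qc, so the efficiency W/Qh is always less than 1 because Qc can never be zero. A minimal sketch, with invented reservoir values:

```python
def heat_engine(q_hot, q_cold):
    """Work output and thermal efficiency for one engine cycle.
    The second law says q_cold can never be zero."""
    work = q_hot - q_cold
    return work, work / q_hot

# Invented values, in joules, purely for illustration:
w, eta = heat_engine(q_hot=1000.0, q_cold=600.0)
print(w, eta)  # 400.0 0.4
```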

That refresher took about fifteen (15) minutes, but it made me realize just how far we have come relative to teaching and presenting subjects involving technology; i.e. STEM (Science, Technology, Engineering and Mathematics) related information.  Theory does not change.  Those giants upon whose shoulders we stand paved the way and set the course for discovery and advancement of so many technical disciplines, but one device has revolutionized teaching methods—the modern-day computer with accompanying software.

I would like to stay with thermodynamics to illustrate a point.  At the university I attended, we were required to have two semesters of heat transfer and two semesters of thermodynamics.  Both subjects were supposedly taken during the sophomore year, and both were offered in the department of mechanical engineering.   These courses were “busters” for many ME majors.  More than once they were the determining factors in the decision-making process as to whether or not to stay in engineering or try another field of endeavor.  The book was “Thermodynamics” by Gordon van Wylen, copyright 1959.  My sophomore year was 1962, so it was well before computers were used at the university level.  I remember poring over the steam tables, looking at saturation temperatures and pressures, trying to find specific volume, enthalpy, entropy and internal energy information.  It seemed as though interpolation was always necessary.  Have you ever tried negotiating a Mollier chart to pick off needed data? WARNING: YOU CAN GO BLIND TRYING.      Psychrometric charts presented the very same problem.  I remember one homework project in which we were expected to design a cooling tower for a commercial heating and air conditioning system.  All of the pertinent specifications were given, as well as the cooling necessary for transmission into the facility.   It was drudgery, and even though it was so long ago, I remember the “all-nighter” I pulled trying to get the final design on paper. Today, this information is readily available through software, obviously saving hours of time and greatly improving productivity.  I will say this: by the time these two courses were taken, you did understand the basic principles and associated theory for heat systems.

Remember conversion tables?  One of the most-used programs by working engineers is found by accessing “”.  This web site provides conversions between differing measurement systems for length, temperature, weight, area, density and power, and even has oddball classifications such as “fun stuff” and miscellaneous.  Fun stuff is truly interesting: the Chinese Zodiac, pig Latin, Morse code, dog years—all subheadings, and many, many more.  All possible without an exhaustive search through page after page of printed documentation.  All you have to do is log on.
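For flavor, here is what a few of the conversions such a site automates look like as plain Python (the factors are standard; the table layout is my own):

```python
# A small lookup table of unit conversions, keyed by (from, to):
CONVERSIONS = {
    ("km", "miles"): lambda v: v * 0.621371,
    ("m", "feet"):   lambda v: v * 3.28084,
    ("C", "F"):      lambda v: v * 9 / 5 + 32,
}

def convert(value, src, dst):
    """Apply the tabulated conversion from src units to dst units."""
    return CONVERSIONS[(src, dst)](value)

print(round(convert(500, "km", "miles"), 1))  # 310.7
print(round(convert(100, "C", "F")))          # 212
```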

The business courses I took (yes, we were required to take several non-technical courses) were just as laborious.  We constructed spreadsheets, and elaborate ones at that, for cost accounting and finance; all accomplished today with MS Excel.  One great feature of MS Excel is the Σ, or sum, feature.  When you have fifty (50) or more line items and it’s 2:30 in the morning and all you want to do is wrap things up and go to bed, this becomes a godsend.

I cannot imagine where we will be in twenty (20) years relative to improvements in technology. I just hope I’m around to see them.


December 14, 2014

One of the most enjoyable vacations my family and I have ever had was to Barcelona, Spain, a fabulous European city.   I cashed in four “frequent flyer” tickets for that eleven day event suspecting we would have a wonderful time.  I was, or we were, not disappointed.   One marvel of the “new world” is Antoni Gaudi’s Sagrada Familia.  A magnificent structure and one every engineer should visit.  Let’s take a look.


Sagrada Familia (Holy Family or, in Catalan, La Sagrada Família) is located in the Eixample district of Barcelona.  The Sagrada Familia was designed by the architect Antoni Gaudi, with construction beginning in 1883.  Construction was unfortunately stopped due to Gaudi’s sudden death in 1926 and resumed only in 1946, after many disputes regarding the design and financing.

Gaudí is one of the most outstanding figures of Catalan culture and international architecture. He was born in Baix Camp (Reus, Riudoms) on June 25, 1852, but attended school, studied, worked and lived with his family in Barcelona, and Barcelona became home to most of his great works. Gaudí was part of the Catalan Modernista movement, eventually transcending it with his nature-based organic style. Gaudí died on June 10, 1926, in Barcelona, Spain. He was first and foremost an architect, but he also designed furniture and worked in town planning and landscaping, among other disciplines. In all those fields he developed a highly expressive language of his own, thus creating a great body of work.  When work began on the Sagrada Familia in 1882, the architects, bricklayers and laborers worked in a very traditional way.  Gaudí took over the direction of that work and knew the tasks were extremely complex and difficult.   That being the case, he tried to take advantage of all the modern techniques available.  Among other resources, he had railway tracks laid with small wagons to transport the materials, brought in cranes to lift the weights, and located workshops on the site to make work easier for the craftsmen.

Gaudi’s picture is given below:


Today, the building of the church follows Gaudí’s original idea and, just as he himself did, the best techniques are applied to make the building work safer, more comfortable and faster. It is some time now since the old wagons gave way to powerful cranes, the old manual tools have been replaced by precise electric machines and the materials have been improved to ensure excellent quality in the building process and the final result.

Antoni Gaudí was run over and killed by a Barcelona tram in June 1926 but by that time, he had been working on his design for the Expiatory Church of the Holy Family for 43 years — almost his entire architectural career. For the last twelve of those years he worked on the Sagrada Familia to the exclusion of everything else, and during his last eighteen months he slept on site in his workshop in the church’s crypt, where he soon would be buried. But work was always slow because money was extremely tight, and at the time of his death only a fragment of the church had actually been built: the apse and melting candle wax Nativity façade with its peculiar spires that was to become the instantly recognizable symbol of the city of Barcelona. After Gaudí’s death, his inner circle of collaborators continued to work slowly in the violent days leading up to the outbreak of the Spanish Civil War, when a rampaging anarchist mob broke into the workshop and destroyed Gaudí’s remaining drawings and plaster models.

Today the present Church Technical Office and the management are charged with studying the complexity of Gaudí’s original project, performing the calculations and the building plans and directing the works as a whole.  According to the latest estimates, the Sagrada Familia will be completed by 2026, the centenary of Gaudí’s death. Over the next few years 10 more spires, the tallest reaching 170 meters in height, will dramatically transform the building’s roofline.

The Church is one of the major attractions of Barcelona, along with other buildings and places designed by Gaudí.  Looking at the graphics below, you can certainly see why the Sagrada Familia draws millions each year to Barcelona.  The designs are striking, and for now at least the interior is finished. Pope Benedict XVI consecrated the Sagrada Familia in November 2010, and it is reported that more than 3 million tourists visited in 2011.

As a university student in Barcelona in the early 1960s, Catalan architect Oscar Tusquets Blanca was one of the leaders of a campaign opposing any further construction work on Antoni Gaudí’s unfinished church. Now, 50 years later, after taking a guided tour of the Sagrada Familia in the company of one of the building’s project architects, he has publicly recanted. Writing in the March 2011 edition of the Italian architecture magazine Domus, Tusquets Blanca says some of the building’s finishes and decorative features — hand railings, stained glass and flooring — are not on a par with the whole, and the sculptures on the Passion façade by Josep Maria Subirachs are “pitiful.” But overall, he says, his tour of the Sagrada Familia left him “dumbfounded.”

We will explore very briefly the exterior and interior of the structure, hoping to indicate the complexities of design and fabrication.  You will certainly understand why completion is still years away.



As you can see, the church is meant to be a “neighborhood” church and is definitely accessible to the public at large.  Even though the digital image above does not do the overall footprint justice, it does show how the structure is positioned relative to surrounding buildings and streets.

The JPEG below will show the main structure when completed. You must agree, it is a daunting undertaking.


Given below is the structure as completed to date.

Sagrada Familia Front View.

The following slides will show you the craftsmanship of the exterior and how truly detailed the designs are.  I have no idea how long each took, but the work must have been painstaking.




The interior of the church is no less marvelous than the exterior and just as intricate.  The vaulted ceilings are truly individual works of art as can be seen from the following slides.







I hope at some point in your life you visit Barcelona and the Sagrada Familia.  You will come away knowing, as I did, that Gaudí was an absolute genius.  Time was not a factor for him in design or construction.  He knew he would not live to see the church’s completion but felt sure the craftsmen following him would bring it about.

As always, I welcome your comments.


The use of natural gas in the form of CNG (compressed natural gas) is becoming an accepted alternative to petroleum, i.e., gasoline.   In 2011, the use of natural gas as a fuel for automobiles and trucks rose 7.1%, part of a remarkable increase of thirty-eight percent (38%) since 2006.  That use has more than doubled in the past ten years, to almost thirty-nine (38.85) million cubic feet in 2011.  Frost & Sullivan estimates that by 2017, approximately eight percent (8%) of new North American Class 6-8 commercial vehicles will be natural-gas powered and annual sales will exceed 29,500 units.   Let’s get a better idea as to the various truck classifications.  The chart below provides information relative to the classifications as defined by the Department of Transportation (DOT).

Truck Classifications

As you can see, the classifications basically revolve around the gross weight of the vehicle.  Both classifications indicate heavy-duty vehicles.
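For readers who want the chart in code, here is a minimal sketch of a lookup from gross vehicle weight rating (GVWR) to truck class. The weight boundaries follow the commonly published FHWA/DOT ranges; the function name is my own, and the values should be verified against the current DOT chart before being relied upon.

```python
# Hypothetical lookup from gross vehicle weight rating (GVWR, in pounds)
# to DOT/FHWA truck class, using the commonly published class boundaries.
def truck_class(gvwr_lbs):
    bounds = [           # (class, upper GVWR bound in lbs)
        (1, 6000), (2, 10000), (3, 14000), (4, 16000),
        (5, 19500), (6, 26000), (7, 33000),
    ]
    for cls, max_lbs in bounds:
        if gvwr_lbs <= max_lbs:
            return cls
    return 8             # Class 8: anything over 33,000 lbs

print(truck_class(25000))  # a Class 6 medium/heavy truck
print(truck_class(80000))  # a Class 8 tractor-trailer
```

By these boundaries, the Class 6-8 heavy-duty range cited above covers everything over 19,500 lbs GVWR.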


Proven and Reliable – More than 11 million NGVs are in use worldwide, with about 110,000 in the U.S. Some tune-ups for NGVs have been extended by up to 50,000 miles. Some oil changes have been extended by up to 25,000 miles. Pipes and mufflers have lasted longer in NGVs because the natural gas does not react with the metals.

Economical – CNG fleet vehicles realize an overall cost savings of as much as 50% over gasoline, particularly after factoring in available alternative tax credits.  If we look at the relative cost and compare fuel types we see the following:

Mach Fuel Comparison

In my home town, Chattanooga, Tennessee, we see an average gasoline price of $2.57 per gallon, against a national average of $2.64 per gallon.   For CNG, the price per gasoline gallon equivalent, or GGE, is $1.55.  Defining GGE, we find the following:

Gasoline gallon equivalent (GGE) or gasoline-equivalent gallon (GEG) is the amount of alternative fuel it takes to equal the energy content of one liquid gallon of gasoline. GGE allows consumers to compare the energy content of competing fuels against a commonly known fuel—gasoline. GGE also compares gasoline to fuels sold as a gas (Natural Gas, Propane, and Hydrogen) and electricity.
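Because a GGE already normalizes energy content, the per-gallon and per-GGE prices above can be compared directly. Here is a quick sketch using the Chattanooga figures from the text; the 1,500 gallon-equivalents per year is an assumed fleet-vehicle figure, not from the article.

```python
# Comparing the article's Chattanooga prices on an energy-equivalent basis:
# $2.57/gal for gasoline vs. $1.55 per GGE for CNG.
gasoline_per_gal = 2.57   # $/gallon, from the text
cng_per_gge = 1.55        # $/GGE, from the text

savings_per_gge = gasoline_per_gal - cng_per_gge
percent_savings = savings_per_gge / gasoline_per_gal * 100

# Assumed annual consumption of 1,500 gallon-equivalents:
annual_savings = 1500 * savings_per_gge
print(f"{percent_savings:.1f}% cheaper per GGE; about ${annual_savings:,.0f} saved per year")
```

At these prices the saving is roughly 40% per gallon-equivalent, consistent with the “as much as 50%” fleet figure cited above once tax credits are factored in.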

Domestic Fuel – Natural gas supplies are abundant domestically, reducing our dependence on foreign oil and the impact of weather-related shortages.

Eco-Conscious – CNG vehicles are much cleaner than traditional vehicles, producing up to 90% lower emissions than gasoline or diesel. Natural gas is the cleanest burning fossil fuel today.  CNG vehicles produce the fewest emissions of all vehicle fuel types and emissions contain significantly less pollutants than gasoline.  Dedicated CNG vehicles release little or no emissions during fueling.

State Incentives – Some states offer tax credits for each vehicle converted to run on natural gas. Some states offer tax credits for purchasing a vehicle that runs on CNG. Other states offer car pool lane access if the vehicle runs on CNG.  In order for a “Clean Fuel” vehicle to travel in the Express Lanes, it must display a “Clean Fuel” sticker/decal, which costs $10.  Also, in several states CNG vehicles qualify for high occupancy vehicle (HOV) lane access, where applicable.


The following news release was issued by the Atlanta Journal and Constitution in July of 2013.

Atlanta Gas Light teams up with The Langdale Company

ATLANTA – July 23, 2013 – The first compressed natural gas (CNG) fueling station developed under the Atlanta Gas Light (AGL) CNG Program is now open in Valdosta, GA. Approved by the Georgia Public Service Commission (PSC) in 2012, the program is designed to expand public access to the CNG fueling infrastructure throughout the state and enhance Georgia’s role in the emerging CNG market in the southeastern U.S.  The Langdale Fuel Company of Valdosta was chosen as the recipient of funding from Atlanta Gas Light for the installation.

That company has worked with MARTA to outfit selected buses with CNG.  A graphic of one of those buses is given below:


The station itself looks very much like the “standard” filling station we are used to for dispensing gasoline.

Compressed Gas Filling


You drive up, put the hose in the filler, then start pumping.


The complexities of receiving and compressing natural gas are demonstrated by the graphic below.  As you can see, there is significant technology involved with a typical compression “event”.

CNG Storage and Piping



CNG is definitely a viable alternative fuel for consideration, AND there are several companies in the marketplace today that can retrofit an automobile engine with the equipment necessary to successfully run CNG as a primary fuel.  As always, I welcome your comments.

There should be no doubt that, with the advent of the Internet, our daily lives have changed in a remarkable fashion.  Figures from the Office for National Statistics (ONS) show that 36 million adults – or seventy-three percent (73%) – were daily internet users in 2013, up from the thirty-five percent (35%) recorded in 2006, when comparable records began.   These ONS figures are for the United Kingdom; the graphic below indicates the breakdown of Internet usage by geographic region worldwide.


As you can see, our friends in Asia lead the pack by a remarkable margin, accounting for forty-five percent (45.1%) of the world’s Internet users.  The chart below will indicate the increase in Internet usage by region as well as providing additional statistics.


If we look at penetration and growth, we see a huge increase just over the past five years.

The discussion today is not really about usage or the growth of the Internet.  We wish to discuss several applications that are revolutionizing our daily lives.  This revolution is generally called the Internet of Things or IoT.  Other terminology for IoT is M2M or machine to machine.  M2M is an absolutely fascinating use of technology with remarkable applications.  IoT generally refers to what some call the next-generation Internet, where physical objects are connected via Standard Internet Protocol or IP.

I was watching television several days ago when an advertisement began running.    The driver of an automobile looked back at her baby, snugly strapped into a car seat.  Her attention was diverted for only a second, but just long enough for a truck in front of her to stop abruptly.   Without her applying the brakes, the car came to a gentle stop.  A sensor in the grill of her vehicle detected zero movement of the truck ahead and sent that message to an onboard computer.  The computer relayed a signal to the brake cylinders, thereby applying pressure, and the car came to a stop.   Machines “talking” to machines.   Using an array of embedded sensors, actuators and a variety of other technologies, these loosely connected “things” can sense aspects of their environment and communicate that information over wired and wireless networks, without human intervention, for a variety of compelling uses.  This is a great representation of IoT.
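The braking sequence in that advertisement can be sketched as a simple sensor-to-actuator control function. Everything here (the names, the three-second threshold, the pressure ramp) is invented for illustration; a real automotive system would be far more sophisticated.

```python
# Toy model of the M2M braking loop: a range sensor reports the gap to
# the vehicle ahead and the closing speed; the controller returns a
# brake-pressure command (0.0 = none, 1.0 = full) with no driver input.
def brake_command(closing_speed_mps, gap_m):
    if closing_speed_mps <= 0:
        return 0.0                       # not closing on the obstacle
    time_to_impact = gap_m / closing_speed_mps
    if time_to_impact >= 3.0:
        return 0.0                       # plenty of margin: coast
    # Ramp toward full braking as time-to-impact shrinks to zero
    return min(1.0, (3.0 - time_to_impact) / 3.0)

print(brake_command(closing_speed_mps=10.0, gap_m=15.0))  # 0.5: moderate braking
print(brake_command(closing_speed_mps=0.0, gap_m=50.0))   # 0.0: no braking needed
```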


Let’s take a quick look at where some feel we are going relative to IoT and M2M.  The bullets below will give some indications as to what is to come.

  • The total economic value-added from IoT across industries will reach $1.9 trillion worldwide in 2020, as predicted by Gartner, “Magic Quadrant for Business Intelligence and Analytics Platforms”
  • Fifty billion devices will be connected to the Internet by 2020, predicts Cisco Corporation.
  • The remote patient monitoring market doubled from 2007 to 2011 and is projected to double again by 2016.  The data generated from sensors is sent to monitoring stations where audible and/or visual indications result when a patient is in trouble.
  • The utility smart grid transformation is expected to almost double the customer information system market, from $2.5 billion in 2013 to $5.5 billion in 2020, based on a study from Navigant Research.  Of course, this will allow utilities to provide power at lesser rates and with more regularity.
  • Wide deployment of IoT technologies in the auto industry could save $100 billion annually in accident reductions, according to McKinsey and Company.
  • The industrial Internet could add $10-15 trillion to global GDP, essentially doubling the US economy, says General Electric.
  • Seventy-five percent (75%) of global business leaders are exploring the economic opportunities of IoT, according to a report from The Economist.
  • The UK government recently approved 45 million pounds (US$76.26 million) in research funding for Internet of Things technologies.  (This is a huge sum of money.  The Crown feels it will be money well spent.)
  • Cities will spend $41 trillion in the next 20 years on infrastructure upgrades for IoT, according to Intel.
  • The number of developers involved in IoT activities will reach 1.7 million globally by the end of 2014, according to ABI Research estimates.

As the Internet of Things ramps up and millions of devices become connected to the Internet, there is also a push to enable communication among all types of devices available on the Internet.  These devices include process control systems, power line communication devices, precision machinery, and various types of infrastructure.  One very critical aspect of properly working IoT devices is the need for simulation.  Simulation is an essential element of building an IoT network.  These networks are becoming complex and ubiquitous, and the communication among them can be very unpredictable without considerable modeling.   As we saw in the example above, successful braking depends upon feedback between the engine and the actions of the automobile: as the brakes are applied, fuel injection to the engine must lessen and eventually stop.
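A discrete-event model is one common way to do the kind of simulation argued for here. The sketch below is a minimal, hypothetical example: devices publish messages on fixed periods, and we count the traffic over a time horizon before any hardware exists. The device names and periods are invented.

```python
# Minimal discrete-event simulation of periodic IoT message traffic.
import heapq

def simulate(devices, horizon_ms):
    """devices: list of (name, period_ms). Returns message count per device."""
    events = [(period, name, period) for name, period in devices]
    heapq.heapify(events)                     # ordered by next firing time
    counts = {name: 0 for name, _ in devices}
    while events and events[0][0] <= horizon_ms:
        t, name, period = heapq.heappop(events)
        counts[name] += 1
        heapq.heappush(events, (t + period, name, period))
    return counts

# 10 seconds of traffic: a 10 Hz brake sensor vs. a 0.2 Hz tire sensor
print(simulate([("brake_sensor", 100), ("tire_pressure", 5000)], 10_000))
# → {'brake_sensor': 100, 'tire_pressure': 2}
```

Even a toy model like this exposes peak message rates and queue growth, which is exactly the unpredictability the paragraph above says must be modeled before deployment.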


There are four (4) critical competencies needed by IT professionals for the design of systems to bring about successful and enduring operation of M2M devices.  These are as follows:

  • Learning how to design and implement embedded software.  Mechanical engineers need to interface regularly with software specialists so both design aspects of the product evolve concurrently.  The ME can no longer “throw it over the fence” and let the “boys” in IT solve the remainder of the problems.  Also, a much higher level of software development is needed WITH simulation prior to product launch.
  • Communication capabilities will become paramount in developing software.  Engineers will need to choose from dozens of proprietary and standard communication protocols, and factor in things like network protocols, potential radio frequency (RF) noise and interference, and the physical fit and placement of new communications components as a part of their requirements.  Mechanical and electromechanical design teams now have to think about communications and what domain constraints might affect layout and design.
  • Instrumentation is absolutely critical to amass, store and manage the data collected by “smart products”.  Understanding the functional aspects of how equipment is to behave will help design engineers anticipate potential failure modes much more effectively, which in turn affects how they specify instrumentation packages into designs.
  • Data security is an absolute MUST for any M2M application, and safeguards must be factored into IT considerations.  IT typically has ownership of the data, and with this being the case, IT needs to be folded in with the initial planning of the product or the assembly of components. Engineers CANNOT build devices in isolation if they want to take complete advantage of all possibilities.

As always, I welcome your comments.


October 11, 2014

What would you call a BIG story?  ISIL, Ebola Virus, Benghazi, IRS problems with Tea Party members, the search for the missing Malaysian jet?   All are big stories and certainly deserve necessary airtime and commentary.    There is one story that has gotten almost zero (0) airtime from the media and one story I feel is absolutely remarkable in importance relative to pushing the technological envelope.  The Mars MAVEN mission has been a huge success to date with the unmanned craft now orbiting the “red” planet.

MAVEN is an acronym for NASA’s Mars Atmosphere and Volatile EvolutioN spacecraft, which successfully entered Mars’ orbit at 10:24 p.m. EDT Sunday, Sept. 21, 2014 after traveling 442 million miles. The purpose of the mission is to study the Red Planet’s upper atmosphere as never before.  This is the first spacecraft dedicated to exploring the tenuous upper atmosphere of Mars, with the following objectives:


  1. Determine the role the loss of volatile gaseous substances to space from the Martian atmosphere has played through time.
  2. Determine the current state of the upper atmosphere, ionosphere, and interactions with the solar wind.
  3. Determine the current rates of escape of neutral gases and ions to space and the processes controlling them.
  4. Determine the ratio of stable isotopes in the Martian atmosphere.

There is some thought that by understanding the atmospheric conditions on Mars, we will gain better insights as to the evolutionary processes of that planet and maybe some ability to predict evolutionary processes on Earth.  Also, discussions are well underway relative to future establishment of colonies on Mars.  If that is to ever happen, we definitely will need additional information relative to atmospheric and surface conditions.


The graphic below is a pictorial of the MAVEN system.  This is somewhat “busy” but one which captures several significant specifics of the hardware including onboard instrumentation.


Please note the graphic at the bottom comparing what is believed to be early atmospheric conditions with current atmospheric conditions.  The loss of magnetic fields surrounding the planet is contributory to atmospheric losses.  Could this happen to Earth’s atmosphere?  That’s a question that we have yet to answer.  Additional specifics can be seen from the following:



After a 442 million mile trip, how did MAVEN hook up with Mars?  Very, very carefully.  The blue line in the graphic below shows the first part of MAVEN’s trajectory during its initial approach and the beginning of the 35-hour capture orbit. The red section of the line indicates the 33-minute engine burn that slows the spacecraft so it can be captured into Martian orbit. Mars’ orbit around the sun is indicated by the white line to the right of the planet, and the Martian moons’ orbits are dimly visible in the background.  This is a remarkable example of engineering and physics allowing for pinpoint accuracy relative to entry and the establishment of orbital stability.
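The 35-hour capture orbit quoted above can be sanity-checked with Kepler's third law, T² = 4π²a³/GM. The Mars gravitational parameter below is the standard published value; the calculation is my own back-of-the-envelope check, not from the mission documents.

```python
# Back-of-the-envelope semi-major axis of MAVEN's 35-hour capture orbit,
# from Kepler's third law rearranged as a = (GM * T^2 / (4*pi^2))^(1/3).
import math

GM_MARS = 4.2828e13                    # m^3/s^2, Mars gravitational parameter
T = 35 * 3600                          # orbital period in seconds

a = (GM_MARS * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"semi-major axis ≈ {a / 1000:,.0f} km")   # on the order of 26,000 km
```

That puts the spacecraft’s mean orbital distance at roughly 26,000 km, consistent with a highly elliptical capture orbit around a planet whose radius is only about 3,400 km.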



MAVEN carries three instrument suites with eight scientific instrument packages designed to study the upper atmosphere and ionosphere of Mars and its interactions with the solar wind.  Three of the instruments are located on the Articulating Payload Platform extending from the bus, including the Imaging Ultraviolet Spectrograph and a mass spectrometer that will sample the atmosphere in situ.  The hardware housing these three packages is shown as follows:


The Particles and Fields Package, built by the University of California at Berkeley with support from CU/LASP and Goddard Space Flight Center, contains six instruments that will characterize the solar wind and the ionosphere of the planet. The Remote Sensing Package, built by CU/LASP, will determine global characteristics of the upper atmosphere and ionosphere. The Neutral Gas and Ion Mass Spectrometer, provided by Goddard Space Flight Center, will measure the composition and isotopes of neutral gases and ions. MAVEN also carries a government-furnished Electra UHF radio, shown in the graphic below, which provides back-up data relay capability for the rovers on Mars’ surface.

Communication Module

Lockheed Martin, based in Littleton, Colorado, built the MAVEN spacecraft and provides mission operations. NASA’s Jet Propulsion Laboratory is providing navigation services, and CU/LASP conducts science operations and data distribution.


On February 19, the MAVEN team successfully completed the initial post-launch power-on and checkout of the spacecraft’s Electra ultra-high frequency (UHF) transceiver. This transceiver is shown in the graphic below.  The relay radio transmitter and receiver will be used for UHF communication with robots on the surface of Mars. Using the orbiter to relay data from Mars rovers and stationary landers boosts the amount of information that can be relayed back to Earth.

A part of NASA’s Mars Scout program, MAVEN is the culmination of 10 years of R&D. Some of that R&D went into designing the materials for the spacecraft’s instruments as well as for the satellite itself, which weighs about as much as a small car and has a 37 ft wingspan, including solar panel arrays.  That panel system is shown as follows:


As you can see from the JPEG, the array is huge but necessary to power the complete system.

The craft’s core structures are made with carbon fiber composites made by TenCate Advanced Composites. The company is experienced in the design and fabrication of composites for aerospace applications, having already supplied them to previous Mars missions, including the Curiosity rover. For MAVEN, which will orbit Mars for about one Earth year, TenCate engineered composite face sheets sandwiched between aluminum honeycomb sheets for the spacecraft’s primary bus structure.

Other materials in the orbiter include a cylindrical aluminum boat tail on the aft deck that provides engine structural support. The craft is kept at the correct operating temperature — 5F to 104F — using active thermal control and passive measures, such as several thermal materials for conducting or isolating heat. Most of the orbiter is enclosed within multi-layer insulation materials; the outside layer is black Kapton film coated with germanium.


Hopefully, you can see now why I feel MAVEN is a BIG story worthy of considerable air time.  It’s a modern-day engineering marvel.  I welcome your comments.

