For those who might be a little unsure of the definition of machine vision (MV), let’s define the term as follows:

“Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance in industry.”

There are non-industrial uses for MV as well, such as 1.) law enforcement, 2.) security, 3.) facial recognition and 4.) robotic surgery.  With this being the case, there are several critical aspects of the technology that must be considered prior to purchasing an MV system, or even discussing MV with a vendor.  We will now take a closer look at those critical factors.

CRITICAL FACTORS:

As with any technology, there are certain elements critical to success, and MV is no different.  There are six (6) basic and critical factors for choosing an imaging system.  These are as follows:

  • Resolution–A higher resolution camera will help increase accuracy by yielding a clearer, more precise image for analysis. The downside to higher resolution is slower speed. The image resolution required for an inspection is determined by two factors: 1.) the field of view required and 2.) the minimum dimension that must be resolved by the imaging system. Of course, lenses, lighting, mechanical placement and other factors come into play, but if we confine our discussion to pixels, we can set those topics aside and focus on the camera characteristics.

As an example, if a beverage packaging system requires verification that a case is full prior to sealing, the camera must image the contents from above and verify that twenty-four (24) bottle caps are present. Since the bottles and caps fit within the case, the caps are the smallest feature within the scene that must be resolved. Once the application parameters and smallest features have been determined, the required camera resolution can be roughly defined. When the case is imaged, the bottle caps will stand out as light objects against a dark background. Because the bottle caps are round, each will appear as a circle bounded by two edges with a span between them. The edges are the points where the image makes a transition from dark to light or light to dark; the span is the diametrical distance between the edges. Next, we define the number of pixels that will represent each of these features. In this application, it would be sufficient to allow three pixels for each of the two edges and four pixels for the span, so a minimum of ten pixels should be used to represent the 25mm bottle cap in the image. From this, we can determine that one pixel will represent 2.5mm of the object itself. Now we can determine the overall camera resolution. Choosing 400mm of the object to represent the horizontal field of view, the camera needs a minimum of 400/2.5 = 160 pixels of horizontal resolution. Vertically, the camera needs 250/2.5 = 100 pixels of vertical resolution. Adding a further 10% to each resolution to account for variations in the object’s location within the field of view gives the absolute minimum camera resolution.
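This arithmetic is easy to make repeatable in a short script. Below is a minimal sketch in Python using the numbers from the bottle-cap example (25mm cap, ten pixels per cap, a 400mm x 250mm field of view, and a 10% placement margin); the values are illustrative and should be adjusted for your own application.

    # Minimal sketch of the camera-resolution estimate from the bottle-cap example.
    cap_diameter_mm = 25.0             # smallest feature that must be resolved
    pixels_per_feature = 3 + 4 + 3     # edge + span + edge = 10 pixels minimum
    fov_h_mm, fov_v_mm = 400.0, 250.0  # field of view
    margin = 1.10                      # 10% allowance for object-position variation

    mm_per_pixel = cap_diameter_mm / pixels_per_feature    # 2.5 mm per pixel
    h_pixels = fov_h_mm / mm_per_pixel * margin            # 160 * 1.1 = 176
    v_pixels = fov_v_mm / mm_per_pixel * margin            # 100 * 1.1 = 110

    print(f"Minimum camera resolution: {h_pixels:.0f} x {v_pixels:.0f} pixels")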

Pros and cons of increasing resolution

Digital cameras transmit image data as a series of digital numbers that represent pixel values. A camera with a resolution of 200 x 100 pixels has a total of 20,000 pixels, and therefore 20,000 digital values must be sent to the acquisition system. If the camera is operating at a data rate of 25MHz, it takes 40 nanoseconds to send each value. This results in a total time of approximately 0.0008 seconds, which equates to 1,250 frames per second. Increasing the camera resolution to 640 x 480 results in a total of 307,200 pixels, approximately 15 times greater. At the same data rate of 25MHz, the total time becomes 0.012288 seconds, or 81.4 frames per second. These values are approximations, and actual camera frame rates will be somewhat slower because exposure and setup times must be added, but it is apparent that an increase in camera resolution results in a proportional decrease in camera frame rate. While a variety of camera output configurations will enable increased camera resolution without a sacrifice in frame rate, these are accompanied by additional complexity and associated higher costs.
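These readout figures are simple to reproduce. Here is a minimal sketch of the calculation, assuming a single-output camera and ignoring exposure and setup overhead as noted above.

    # Approximate maximum frame rate from resolution and pixel data rate.
    # Ignores exposure and setup time, so real cameras will be somewhat slower.
    def max_frame_rate(width, height, data_rate_hz=25e6):
        pixels = width * height              # digital values sent per frame
        readout_s = pixels / data_rate_hz    # 40 ns per value at 25 MHz
        return 1.0 / readout_s

    print(f"{max_frame_rate(200, 100):.0f} fps")   # ~1250 fps for 20,000 pixels
    print(f"{max_frame_rate(640, 480):.1f} fps")   # ~81.4 fps for 307,200 pixels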

  • Speed of Exposure—Products moving rapidly down a conveyor line require much faster exposure from a vision system; candy or bottled products moving at extremely high rates are typical examples. When selecting a digital camera, the speed of the object being imaged must be considered as well. Objects that do not move during exposure can be handled by relatively simple, inexpensive cameras with perfectly satisfactory results. Objects moving continuously require other considerations. In still other cases, objects may be stationary only for very short periods of time and then move rapidly; if so, inspection during the stationary period is the most desirable approach.

Stationary or slow-moving objects: Area array cameras are well suited to imaging objects that are stationary or slow moving. Because the entire area array must be exposed at once, any movement during the exposure time will result in a blurring of the image. Motion blurring can, however, be controlled by reducing exposure times or using strobe lights.

Fast-moving objects: When using an area array camera for objects in motion, some consideration must be given to the amount of movement with respect to the exposure time of the camera and the object resolution, defined as the smallest feature of the object represented by one pixel. A rule of thumb when acquiring images of a moving object is that the exposure must occur in less time than it takes for the object to move beyond one pixel. If you are grabbing images of an object that is moving steadily at 1cm/second and the object resolution is set at 1 pixel/mm, then the absolute maximum exposure time is 1/10 of a second. There will be some blur at the maximum exposure time, since the object will have moved by an amount equal to one pixel on the camera sensor. In this case, it is preferable to set the exposure time to something faster than the maximum, perhaps 1/20 of a second, to keep the motion within half a pixel. If the same object moving at 1cm/second has an object resolution of 1 pixel/micrometer, then a maximum exposure of 1/10,000 of a second would be required. How fast the exposure can be set depends on what is available in the camera and whether you can get enough light on the object to obtain a good image. Additional tricks of the trade can be employed when attempting to obtain short exposure times of moving objects. In cases where a very short exposure time is required from a camera that does not have this capability, an application may make use of shutters or strobed illumination. Cameras that employ multiple outputs can also be considered if an application requires speeds beyond the capabilities of a single-output camera.
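The rule of thumb above reduces to one line of arithmetic. A minimal sketch, using the numbers from the example:

    # Maximum exposure from the one-pixel-of-motion rule described above.
    def max_exposure_s(speed_mm_per_s, mm_per_pixel, max_blur_px=1.0):
        # Exposure must finish before the object moves max_blur_px pixels.
        return max_blur_px * mm_per_pixel / speed_mm_per_s

    print(max_exposure_s(10.0, 1.0))                    # 1 cm/s at 1 px/mm -> 0.1 s
    print(max_exposure_s(10.0, 1.0, max_blur_px=0.5))   # half-pixel budget -> 0.05 s
    print(max_exposure_s(10.0, 0.001))                  # 1 px/micrometer -> 0.0001 s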

  • Frame Rate–The frame rate of a camera is the number of complete frames a camera can send to an acquisition system within a predefined time period.  This period is usually stated as a specific number of frames per second.  As an example, a camera with a sensor resolution of 640 x 480 is specified with a maximum frame rate of 50 frames per second. Therefore, the camera needs 20 milliseconds to send one frame following an exposure. Some cameras are unable to take a subsequent exposure while the current exposure is being read, so they will require a fixed amount of time between exposures when no imaging takes place. Other types of cameras, however, are capable of reading one image while concurrently taking the next exposure. Therefore, the readout time and method of the camera must be considered when imaging moving objects. Further consideration must be given to the amount of time between frames when exposure may not be possible.
  • Spectral Response and Responsiveness–All digital cameras that employ electronic sensors are sensitive to light energy, typically at wavelengths ranging from approximately 400 nanometers to a little beyond 1000 nanometers. There may be instances in imaging when it is desirable to isolate certain wavelengths of light that emanate from an object, and where the characteristics of a camera at the desired wavelength need to be defined. A matching and selection process must be undertaken by application engineers to ensure proper usage of equipment relative to the needs at hand. Filters may be incorporated into the application to tune out unwanted wavelengths, but it will still be necessary to know how well the camera responds to the desired wavelength. The responsiveness of a camera defines how sensitive the camera is to a fixed amount of exposure, and it can be stated in LUX or DN/(nJ/cm^2). “LUX” is a common term among imaging engineers used to define sensitivity in photometric units over the range of visible light, whereas DN/(nJ/cm^2) is a radiometric expression that does not limit the response to visible light. In general, both terms state how the camera will respond to light. The radiometric expression of x DN/(nJ/cm^2) indicates that, for a known exposure of 1 nJ/cm^2, the camera will output pixel data of x DN (digital numbers, also known as grayscale). Gain is another feature available in some cameras that can provide various levels of responsiveness, so the responsiveness of a camera should be stated at a defined gain setting. Be aware, however, that a camera quoted as having high responsiveness at a high gain setting may also exhibit an increased noise level, which leads to reduced dynamic range.
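As a quick illustration of the radiometric expression, the sketch below scales a known exposure by a responsivity figure to get output grayscale. The responsivity value is hypothetical, not taken from any camera datasheet.

    # Output grayscale from the radiometric responsiveness relationship above:
    # output (DN) = responsivity in DN/(nJ/cm^2) x exposure in nJ/cm^2.
    responsivity_dn = 30.0    # hypothetical camera responsivity, DN/(nJ/cm^2)
    exposure_njcm2 = 4.0      # light energy reaching the sensor, nJ/cm^2

    output_dn = responsivity_dn * exposure_njcm2
    print(f"Pixel output: {output_dn:.0f} DN")   # 120 DN of grayscale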
  • Bit Depth–Digital cameras produce digital data, or pixel values. Being digital, this data has a specific number of bits per pixel, known as the pixel bit depth, which typically ranges from 8 to 16 bits. In monochrome cameras, the bit depth defines the quantity of gray levels from dark to light, where a pixel value of 0 is 100% dark and 255 (for 8-bit cameras) is 100% white. Values between 0 and 255 are shades of gray, where values near 0 are dark gray and values near 255 are almost white. 10-bit data will produce 1024 distinct levels of gray, while 12-bit data will produce 4096 levels. Each application should be considered carefully to determine whether fine or coarse steps in grayscale are necessary. Machine vision systems commonly use 8-bit pixels, and going to 10 or 12 bits instantly doubles the data quantity, as another byte is required to transmit each pixel. This also results in decreased system speed, because two bytes per pixel are used but not all of the bits are significant. Higher bit depths can also increase the complexity of system integration, since they necessitate larger cable sizes, especially if a camera has multiple outputs.
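The trade-off is easy to quantify. A minimal sketch, assuming pixels are packed into whole bytes for transmission as described above:

    # Gray levels and per-frame data volume as a function of pixel bit depth.
    import math

    def gray_levels(bits):
        return 2 ** bits    # 8 -> 256, 10 -> 1024, 12 -> 4096

    def frame_bytes(width, height, bits):
        bytes_per_pixel = math.ceil(bits / 8)   # 10- or 12-bit pixels need 2 bytes
        return width * height * bytes_per_pixel

    print(gray_levels(8), gray_levels(10), gray_levels(12))   # 256 1024 4096
    print(frame_bytes(640, 480, 8))    # 307,200 bytes per frame
    print(frame_bytes(640, 480, 12))   # 614,400 bytes -- doubled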
  • Lighting—Perhaps no other aspect of vision system design and implementation has consistently caused more delay, cost overruns, and general consternation than lighting. Historically, lighting was often the last aspect specified, developed, and/or funded, if at all. This approach was not entirely unwarranted, as until recently there was no real vision-specific lighting on the market, meaning lighting solutions typically consisted of standard incandescent or fluorescent consumer products, with various amounts of ambient contribution. The following lighting sources are now commonly used in machine vision:
  • Fluorescent
  • Quartz Halogen – Fiber Optics
  • LED – Light Emitting Diode
  • Metal Halide (Mercury)
  • Xenon
  • High Pressure Sodium

Fluorescent, quartz-halogen, and LED are by far the most widely used lighting types in machine vision, particularly for small to medium scale inspection stations, whereas metal halide, xenon, and high pressure sodium are more typically used in large scale applications, or in areas requiring a very bright source. Metal halide, also known as mercury, is often used in microscopy because it has many discrete wavelength peaks, which complements the use of filters for fluorescence studies. A xenon source is useful for applications requiring a very bright, strobed light.

Historically, fluorescent and quartz halogen lighting sources have been used most commonly. In recent years, LED technology has improved in stability, intensity, and cost-effectiveness; however, it is still not as cost-effective for large-area lighting, particularly compared with fluorescent sources. On the other hand, if application flexibility, output stability, and longevity are important parameters, then LED lighting might be more appropriate. Depending on the exact lighting requirements, more than one source type may be used for a specific implementation, and most vision experts agree that no single source type can adequately solve all lighting issues. It is important to consider not only a source’s brightness, but also its spectral content.  Microscopy applications, for example, often use a full-spectrum quartz halogen, xenon, or mercury source, particularly when imaging in color; a monochrome LED source is also useful for B&W CCD cameras, and now for color applications as well, with the advent of “all color – RGB” and white LED light heads. In applications requiring high light intensity, such as high-speed inspections, it may be useful to match the source’s spectral output with the spectral sensitivity of the particular vision camera. For example, CMOS-sensor-based cameras are more IR sensitive than their CCD counterparts, imparting a significant sensitivity advantage in light-starved inspection settings when using IR LED or IR-rich tungsten sources.

Vendors should be consulted to recommend proper lighting for the job to be accomplished.

HAPPY BIRTHDAY NASA

February 13, 2015


References for this post are taken from NASA Tech Briefs, Vol 39, No 2, February 2015.

In 1915 the National Advisory Committee for Aeronautics (NACA) was formed by our Federal government; March 3, 2015 marks the 100th birthday of that occasion.  The NACA was created by Congress over concerns that the U.S. was losing its edge in aviation technology to Europe.  WWI was raging at that time, and advances in aeronautics were at the forefront of the European efforts to win the war using “heavier than air” craft to pound the enemy.  The purpose of NACA was to “supervise and direct the scientific study of the problems of flight with a view to their practical solution.” State-of-the-art laboratories were constructed in Virginia, California and Ohio that led to fundamental advances in aeronautics enabling victory in WWII.  Those efforts also supported national security during the Cold War with the Soviet Union.  The DNA of the entire aircraft industry is infused with technology resulting from NACA and NASA research and development efforts.

HUMBLE BEGINNINGS

NACA was formed by employing twelve (12) unpaid individuals with an annual budget of $5,000.00.  Over the course of forty-three (43) years, the agency made fundamental breakthroughs in aeronautical technology in ways affecting the manner in which airplanes and spacecraft are designed, built, tested and flown today.  NACA’s early successes are as follows:

  • Cowling to improve the cooling of radial engines thereby reducing drag.
  • Wind tunnel testing simulating air density at different altitudes, which engineers used to design and test dozens of wing cross-sections.
  • Wind tunnel with slots in its walls that allowed researchers to take measurements of aerodynamic forces at supersonic speeds.
  • Design principles involving the shape of an aircraft’s wing in relation to the rest of the airplane to reduce drag and allow supersonic flight.
  • Distribution of reports and studies to aircraft manufacturers, allowing designs to benefit from R & D efforts.
  • Development of airfoil and propeller shapes which simplified aircraft design. These shapes eventually were incorporated into aircraft such as the P-51 Mustang.
  • Research and wind tunnel testing led to the adoption of the “coke-bottle” design that still influences our supersonic military aircraft of today.

As a result of NACA efforts, flight tests were initiated on the first supersonic experimental airplane, the X-1.  This aircraft was flown by Captain Chuck Yeager and paved the way for further research into supersonic aircraft leading to the development of swept-wing configurations.

After the Soviet Union launched Sputnik 1, the world’s first artificial satellite, in 1957, Congress responded to the nation’s fear of falling behind by passing the National Aeronautics and Space Act of 1958.  NASA was born.  The new agency, proposed by President Eisenhower, would be responsible for civilian human, satellite, and robotic space programs as well as aeronautical research and development. NACA was absorbed into the NASA framework.

ACHIEVEMENTS:

Looking at the achievements of NASA from that period of time, we see the following milestones:

  • 1959—Selection of seven (7) astronauts for Project Mercury.
  • 1960–Formation of NASA’s Marshall Space Flight Center with Dr. Wernher von Braun as director.
  • 1961—President Kennedy committed the nation to landing a man on the moon.
  • 1962—John Glenn became the first American to orbit the Earth, in Friendship 7.
  • 1965—Gemini IV stayed aloft four (4) days during which time Edward H. White II performed the first space walk.
  • 1968—James A. Lovell Jr., William A. Anders, and Frank Borman flew the historic mission to circle the moon.
  • 1969—The first lunar landing.

Remarkable achievements that absolutely captured the imagination of most Americans.  It is extremely unfortunate that our nearsighted Federal government has chosen to reduce NASA funding and eliminate many of the manned programs and hardware previously on the “books”. We have seemingly altered course, at least relative to manned space travel.  Let’s hope we can get back on track in future years.

THE TRUTH IS OUT THERE

February 6, 2015


In John 18:38 we read the following from the King James Version of the Bible: “Pilate saith unto him, What is truth? And when he had said this, he went out again unto the Jews, and saith unto them, I find in him no fault at all.”  Pilate did not stay for an answer.

One of my favorite television programs was the X-Files.  It’s been off the air for some years now, but we are told it will return as a “mini-series” sometime in the very near future.  The original cast, i.e. Fox Mulder and Dana Scully, will again remind us—THE TRUTH IS OUT THERE.  The truth is definitely out there, as indicated by the men and women comprising the Large Synoptic Survey Telescope team.  They are definitely staying for answers.  The team members posed for a group photograph, as seen below.

LSST Team

THE MISSION:

The Large Synoptic Survey Telescope (LSST) is a revolutionary facility which will produce an unprecedented wide-field astronomical survey of our universe using an 8.4-meter ground-based telescope. The LSST leverages innovative technology in all subsystems: 1.) the camera (3200 megapixels, the world’s largest digital camera), 2.) the telescope (simultaneous casting of the primary and tertiary mirrors; two aspherical optical surfaces on one substrate), and 3.) data management (30 terabytes of data nightly).  There will be almost instant alerts issued for objects that change in position or brightness.

The known forms of matter and types of energy experienced here on Earth account for only four percent (4%) of the universe. The remaining ninety-six percent (96%), though central to the history and future of the cosmos, remains shrouded in mystery. Two tremendous unknowns present one of the most tantalizing and essential questions in physics: What are dark energy and dark matter? LSST aims to expose both.

DARK ENERGY:

Something is driving the universe apart, accelerating the expansion begun by the Big Bang. This force accounts for seventy percent (70%) of the cosmos, yet is invisible and can only be “seen” by its effects on space. Because LSST is able to track cosmic movements over time, its images will provide some of the most precise measurements ever of our universe’s inflation. Light appears to stretch at the distant edges of space, a phenomenon known as red shift, and LSST may offer the key to understanding the cosmic anti-gravity behind it.

DARK MATTER:

Einstein deduced that massive objects in the universe bend the path of light passing nearby, proving the curvature of space. One way of observing the invisible presence of dark matter is examining the way its heavy mass bends the light from distant stars. This technique is known as gravitational lensing. The extreme sensitivity of the LSST, as well as its wide field of view, will help assemble comprehensive data on these gravitational lenses, offering key clues to the presence of dark matter. The dense and mysterious substance acts as a kind of galactic glue, and it accounts for twenty-five percent (25 %) of the universe.

From its mountaintop site, LSST will image the entire visible sky every few nights, capturing changes over time from seconds to years. Ultimately, after 10 years of observation, a stunning time-lapse movie of the universe will be created.

As the LSST stitches together thousands of images of billions of galaxies, it will process and upload that information for applications beyond pure research. Frequent and real time updates – 100 thousand a night – announcing the drift of a planet or the flicker of a dying star will be made available to both research institutions and interested astronomers.

In conjunction with platforms such as Google Earth, LSST will build a 3D virtual map of the cosmos, allowing the public to fly through space from the comfort of home.  ALLOWING THE PUBLIC is the operative phrase. For the very first time, the public will have access to information about the cosmos as it is gathered.  LSST educational materials will clearly specify National and State science, math and technology standards that are met by the activity. The materials will enhance 21st-century workforce skills, incorporate inquiry and problem solving, and ensure continual assessment embedded in instruction.

THE LOCATION:

The decision to place LSST on Cerro Pachón in Chile was made by an international site selection committee based on a competitive process.  In short, modern telescopes are located in sparsely populated areas (to avoid light pollution), at high altitudes and in dry climates (to avoid cloud cover). In addition to those physical concerns, there are infrastructure issues. The ten best candidate sites in both hemispheres were studied by the site selection committee. Cerro Pachón was the overall winner in terms of quality of the site for astronomical imaging and available infrastructure. The result will be superb deep images from the ultraviolet to near infrared over the vast panorama of the entire southern sky.

The location is shown in the following digital image:

Construction Site

The actual site location, as you can see below, is a very rugged outcropping of rock now used by farmers needing food for their sheep.

The Mountain Location

The Observatory will be located about 500km (311 miles) north of Santiago, Chile, and about 52km (32 miles), or 80km (50 miles) by road, from La Serena, at an altitude of 2200 meters (7,218 feet).  It lies on a 34,491-hectare (85,227-acre) site known as “Estancia El Tortoral,” which was purchased by AURA on the open market in 1967 for use as an astronomical observatory.

When purchased, the land supported a number of subsistence farmers and goat herders. They were allowed to continue to live on the reserve after it was purchased by AURA and have gradually been leaving voluntarily for more lucrative jobs in the nearby towns.

As a result of departure of most of its human inhabitants and a policy combining environmental protection with “benign neglect” on the part of the Observatory, the property sees little human activity except for the roads and relatively small areas on the tops of Cerro Tololo and Cerro Pachon. As a result, much of the reserve is gradually returning to its natural state. Many native species of plants and animals, long thought in danger of extinction, are now returning. The last half of the trip to Tololo is an excellent opportunity to see a reasonably intact Chilean desert ecosystem.

THE FACILITY:

LSST construction is underway, with the NSF funding authorized as of 1 August 2014.

Early development was funded by a number of small grants, with major contributions in January 2008 from software billionaire Charles Simonyi and Bill Gates of $20 million and $10 million, respectively.  $7.5 million was included in the U.S. President’s FY2013 NSF budget request. The Department of Energy is expected to fund construction of the digital camera component by the SLAC National Accelerator Laboratory, as part of its mission to understand dark energy.

Construction of the primary mirror at the University of Arizona‘s Steward Observatory Mirror Lab, the most critical and time-consuming part of a large telescope’s construction, is almost complete. Construction of the mold began in November 2007, mirror casting was begun in March 2008, and the mirror blank was declared “perfect” at the beginning of September 2008.  In January 2011, both M1 and M3 figures had completed generation and fine grinding, and polishing had begun on M3.

As of December 2014, the primary mirror is complete and awaiting final approval, and the mirror transport box is ready to receive it for storage until it is shipped to Chile.

The secondary mirror was manufactured by Corning of ultra-low-expansion glass and coarse-ground to within 40 μm of the desired shape. In November 2009, the blank was shipped to Harvard University for storage until funding to complete it was available. On October 21, 2014, the secondary mirror blank was delivered from Harvard to Exelis for fine grinding.

Site excavation began in earnest March 8, 2011, and the site had been leveled by the end of 2011. Also during that time, the design continued to evolve, with significant improvements to the mirror support system, stray-light baffles, wind screen, and calibration screen.

In November 2014, the LSST camera project, which is separately funded by the United States Department of Energy, passed its “critical decision 2” design review and is progressing toward full funding.

When completed, the facility will look as follows with the mirror mounted as given by the second JPEG:


Artist Rendition of Building

Telescope Relative to Building

MIRROR DESIGN:

The assembled mirror structure is given below.

Telescope

In the LSST optical design, the primary (M1) and tertiary (M3) mirrors form a continuous surface without any vertical discontinuities. Because the two surfaces have different radii of curvature, a slight cusp is formed where the two surfaces meet, as seen in the figure below. This design makes it possible to fabricate both the primary and tertiary mirrors from a single monolithic substrate. We refer to this option as the M1-M3 monolith.

MIRROR MONOLITH

After a feasibility review held on 23 June 2005, the LSST project team adopted the monolithic approach to fabricating the M1 and M3 surfaces as its baseline. In collaboration with the University of Arizona’s Steward Observatory Mirror Lab (SOML), construction has begun with detailed engineering of the mirror blank and the testing procedures for the M1-M3 monolith. The M1-M3 monolith blank will be formed from Ohara E6 low-expansion glass using the spin-casting process developed at SOML.

At 3.42 meters in diameter, the LSST secondary mirror will be the largest convex mirror ever made. The mirror is aspheric, with approximately 17 microns of departure from the best-fit sphere. The design uses a 100 mm thick solid meniscus blank made of a low-expansion glass (e.g. ULE or Zerodur) similar to the glasses used by the SOAR and Discovery Channel telescopes. The mirror is actively supported by 102 axial and 6 tangent actuators. The alignment of the secondary to the M1-M3 monolith is accomplished by the 6 hexapod actuators between the mirror cell and support structure. The large conical baffle is necessary to prevent the direct reflection of starlight from the tertiary mirror into the science camera.

SUMMARY:

The truth is out there and projects such as the one described in this post AND the Large Hadron Collider at CERN certainly prove some people and institutions are not at all reluctant to search for that truth, the ultimate purpose being to discover where we come from.  Are we truly made from “star stuff”?



Wonder how difficult it would be to land a mosquito on a speeding bullet?  What do you think?  Well, that’s just about the degree of difficulty in launching, navigating and landing the PHILAE spacecraft on the comet 67P/Churyumov–Gerasimenko.  Like all comets, Churyumov-Gerasimenko is named after its discoverers.

THE DISCOVERY:

It was first observed in 1969, when several astronomers from Kiev visited the Alma-Ata Astrophysical Institute in Kazakhstan to conduct a survey of comets.  Comet 67P is one of numerous short period comets which have orbital periods of less than 20 years and a low orbital inclination. Since their orbits are controlled by Jupiter’s gravity, they are also called Jupiter Family comets.  These comets are believed to originate from the Kuiper Belt, a large reservoir of small icy bodies located just beyond Neptune. As a result of collisions or gravitational perturbations, some of these icy objects are ejected from the Kuiper Belt and fall towards the Sun.

When they cross the orbit of Jupiter, the comets gravitationally interact with the massive planet. Their orbits gradually change as a result of these interactions until they are eventually thrown out of the Solar System or collide with another planet or the Sun.  Actually, the favored target for Rosetta was the periodic comet 46P/Wirtanen, but, after the launch was delayed, another regular visitor to the inner Solar System, 67P/Churyumov-Gerasimenko, was selected as a suitable replacement.

THE MISSION:

Philae is a robotic lander designed and launched by the European Space Agency.  The mission was called Rosetta. In November 1993, the International Rosetta Mission was approved as a Cornerstone Mission in ESA’s Horizons 2000 Science Program.  Rosetta’s industrial team involved more than 50 contractors from 14 European countries and the United States. The prime spacecraft contractor was Astrium Germany. Major subcontractors were Astrium UK (spacecraft platform), Astrium France (spacecraft avionics) and Alenia Spazio (assembly, integration and verification).

The duration of travel was more than ten years after departing Earth. (Now do you see the complexity?  It’s a “tough putt” to land a small object on a rapidly moving body after a ten-year journey.)  The Rosetta spacecraft is a work of engineering art in itself. It is basically a large aluminum box measuring 2.8 x 2.1 x 2.0 meters, with the scientific instruments mounted on ‘top’ of the box forming the Payload Support Module, while the subsystems are on the base, or Bus Support Module.

On one side of the orbiter is a 2.2-metre diameter communications dish with a steerable high-gain antenna. The Lander itself is attached to the opposite face.

Two enormous solar panel ‘wings’ extend from the sides. These wings, each 32 square meters in area, have a total span of about 32 meters tip to tip. Each assembly comprises five panels, and both may be rotated +/-180 degrees to catch the maximum amount of sunlight. A digital photograph of the Rosetta is given as follows:

CONFIGURATION OF ROSETTA

On 12 November 2014, the probe achieved the first-ever soft landing on a comet nucleus. Its instruments obtained the first images from a comet’s surface. PHILAE is tracked and operated from the European Space Operations Centre (ESOC) in Darmstadt, Germany.  Several of the instruments on PHILAE made the first direct analysis of a comet, sending back data that will be analyzed to determine the composition of the surface.

The Lander is named after the Philae obelisk, which bears a bilingual inscription and was used along with the Rosetta Stone to decipher Egyptian hieroglyphics.  A very condensed version of the mission is given by the JPEG below:

THE MISSION

An Ariane 5G+ rocket carrying the Rosetta spacecraft and PHILAE Lander was launched from French Guiana on 2 March 2004, and travelled for 3,907 days (10.7 years) to reach the target, Churyumov–Gerasimenko. Unlike the Deep Impact probe, PHILAE is not an impactor. Some of the instruments on the Lander were used for the first time as autonomous systems during the Mars flyby on 25 February 2007.  One camera system returned images while the Rosetta instruments were powered down, and another system took measurements of the Martian magnetosphere. Most of the other instruments need contact with the surface for analysis and stayed offline during the flyby. An optimistic estimate of mission length following touchdown was “four to five months”.

PHILAE CONFIGURATION:

Components of PHILAE are as follows:

SPACECRAFT COMPONENTS

A digital photograph of the Lander with the basic instrument packages is given below.

PHILAE LANDER CONFIGURATION

RESULTS:

The results of the landing and the investigation are striking.  The comet’s surface, as Nicolas Thomas of the University of Bern has discovered, is surprisingly complex. It has 19 distinct regions, characterized by features such as pits, wide depressions and smooth, dust-covered plains. It even sports things that look like sand dunes.

The surface is also, according to Fabrizio Capaccioni of the National Institute of Astrophysics  in Rome, drier than expected and rich in organic compounds. That may excite those who wonder how the chemicals needed for life’s development arrived on Earth. The comet’s interior, meanwhile, says Holger Sierks of the Max Planck Institute for Solar System Research, in Göttingen, Germany, has only half the density of water. It is therefore probably porous and fluffy. And it ejects jets of material into space particularly from the neck that connects the two halves of the comet’s peculiar dumbbell shape.

The reason for that shape, though, remains a mystery. Possibly, Dr. Sierks speculates, Churyumov-Gerasimenko is made up of two comets which collided and joined together. Determining the truth of this will require further investigation.  A depiction of the comet’s configuration is given as follows:

THE COMET ITSELF

CONCLUSIONS:

Number one—we now know that navigation and impact can be accomplished.  With that being the case, mining the subterranean riches of a great number of comets for minerals might be possible.  A greater “find” might be adding one piece to the puzzle as to whether or not there is life in places other than Earth.  We are just becoming able to investigate that possibility with marvelous devices such as Rosetta and PHILAE.  Time will tell.

As always, I welcome your comments.


The references for this post are derived from the publication NGV America (Natural Gas Vehicles), “Oil Price Volatility and the Continuing Case for Natural Gas as a Transportation Fuel”; 400 North Capitol St. NW, Washington, D.C. 20001; phone (202) 824-7360; fax (202) 824-9160; http://www.ngvamerica.org.

If you have read my posts, you realize that I am a staunch supporter of alternate fuels for transportation, specifically the use of LNG (liquefied natural gas) and CNG (compressed natural gas).  We all know that oil is a non-renewable resource—a precious resource and one that should be conserved if at all possible.  With that said, there has been a significant drop in gasoline prices over the past few weeks, which may lull us into thinking that conservation measures relative to oil-based fuels are no longer necessary.  Let’s take a look at several facts, and then we will strive to draw conclusions.

The chart below indicates the ebb and flow of crude oil vs. natural gas in BTU equivalence.  Please remember that a BTU, or British Thermal Unit, is the energy required to raise the temperature of one pound of water one degree Fahrenheit.  As you can see, the price of natural gas per barrel of energy equivalent has remained fairly stable since 2008 relative to the price of crude per barrel.  Natural gas, whether LNG or CNG, is considerably more “affordable” than crude oil.

Crude Oil vs Natural Gas

There are several reasons for the price of oil per barrel dropping over the past few months.  These are as follows:

  • World supply is currently outpacing world demand. Supply has reached historic levels.
  • Significant increase in US production due to hydraulic fracturing or fracking.
  • There is, to some degree, economic stagnation in Western Europe, thereby lessening the demand for crude and crude oil products.
  • The economies of China, India and other countries are slowing.
  • Geopolitical factors tend to affect crude oil inventories.

Over the long term, oil demand is likely to increase as economic growth returns to more normal levels and economic activity picks up. As has been the case in recent years, the developing countries, led by China and India, will likely lead the way in driving oil demand. The developed countries, including the U.S., are not expected to experience much growth in overall levels of petroleum use.

According to the International Energy Agency (IEA) and the U.S. Energy Information Administration (EIA), oil markets may turn the corner sometime in late 2015, as that is when these agencies are predicting that oil demand and supply will cross back over. These agencies also are forecasting 2015 prices in the mid- to high-$50 per barrel range. The most recent Short-Term Energy Outlook from EIA (January 2015 STEO) pegs the price of Brent oil at an average of $58 a barrel in 2015. That level reflects averages as low as $49 a barrel and a high of $67 a barrel in the latter part of the year. The WTI price of oil is expected to average $3 less than Brent for a 2015 average of $55 a barrel. For 2016, EIA’s January STEO forecasts average prices of $75 per barrel for Brent oil and $71 for WTI oil.

Another important issue is the current number of U.S. refineries.  In the U.S., virtually no new refineries have been built for several years and the number of operable refineries has dropped from 150 to 142 between 2009 and 2014.  One question—will diesel prices continue to fall?  Will the transportation sector of our economy continue to benefit from lower prices?   We must remember also that refineries have several potential markets for diesel fuel other than transportation uses, since it can be used for home heating, industrial purposes and as boiler fuel. The lead up to winter has increased home heating fuel demand, particularly in the northeast, which has likely also contributed to a slower decline in diesel prices.

What is the long-term projection for transportation-grade fuels?  The graphic below indicates that natural gas will continue to be the lowest-cost fuel relative to gasoline- and diesel-grade petroleum.

Projected Price Differentials

This is also supported by the following chart.  As you can see, natural gas prices have remained steady over the past few years.

Average Retail Fuel Prices in USA

Another key factor in assessing the long-term stability of transportation fuel prices is the cost of the commodity as a portion of its price at the pump. Market volatility and commodity price increases have a much larger impact on the economics of gasoline and diesel fuel prices than they do for natural gas. As shown below, as much as 70 percent of the cost of gasoline and 60 percent of diesel fuel is directly attributable to the commodity cost of oil, while only 20 percent of the cost of CNG is part of the commodity cost of natural gas. This is a key in understanding the volatile price swings of petroleum-based fuels compared to the stability of natural gas.

Price at the Pump

Proven, abundant and growing domestic reserves of natural gas are another influence on the long-term stability of natural gas prices. The recent estimates provided by the independent and non-partisan Colorado School of Mines’ Potential Gas Committee have included substantial increases to domestic reserves. The U.S. is now the number one producer of natural gas in the world.

Even with today’s lower oil prices, natural gas as a commodity is about one-third (a 3:1 ratio) the cost of oil per million Btu of energy supplied. In recent years, the price of oil has exceeded natural gas by a factor of 4:1, and by as much as 8:1 when oil was $140 a barrel and natural gas was trading at $3 per million Btu. Perhaps most relevant is that the fluctuations in these comparisons have been almost totally driven by the volatility of oil prices. As the earlier tables clearly demonstrate, natural gas pricing has been relatively consistent and stable and is projected to remain so for decades to come.
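Those ratios can be checked with a couple of lines of arithmetic. The sketch below assumes the standard conversion of roughly 5.8 million Btu per barrel of crude; the prices are the illustrative figures from the text.

    # Oil-to-gas price ratio on an energy (per-million-Btu) basis.
    MMBTU_PER_BARREL = 5.8   # approximate energy content of a barrel of crude

    def price_ratio(oil_usd_per_barrel, gas_usd_per_mmbtu):
        oil_usd_per_mmbtu = oil_usd_per_barrel / MMBTU_PER_BARREL
        return oil_usd_per_mmbtu / gas_usd_per_mmbtu

    print(f"{price_ratio(140.0, 3.0):.1f}:1")   # ~8:1 when oil peaked at $140
    print(f"{price_ratio(55.0, 3.0):.1f}:1")    # ~3:1 at the 2015 forecast of $55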

As you can see from the following chart, the abundance of natural gas is definitely THE key factor in ensuring the continuation of crude resources for generations to come.

US Natural Gas Future Supplies

Conclusion

  • History shows that the recent decline in world crude oil prices and related gasoline and diesel prices are likely to be short-lived. Oil prices will increase as the world economy rebounds.
  • Diesel fuel is influenced by a variety of other factors that will likely keep upward pressure on prices over the long run.
  • On a Btu basis, natural gas still has a 3:1 price advantage over oil. At the pump, average CNG prices are currently $0.75 to $1 lower than diesel.
  • The long-term stability and low prices for natural gas relative to oil are likely to remain for many years – perhaps even decades – based on well-documented economic models.
  • The long-term nature of fleet asset management suggests that it is prudent to continue to invest in transportation fuel portfolio diversification by transitioning more vehicles to natural gas. Fleets that have already made the investment in vehicles and infrastructure will continue to benefit from the stability of natural gas prices and their continuing economic advantage.
  • State and federal policymakers are likely to continue to promote fuel diversity and policies that encourage use of natural gas as a transportation fuel on the road to energy security.

As always, I welcome your comments.

WE’VE COME A LONG WAY

January 24, 2015


Two days ago I had the need to refresh my memory concerning the Second Law of Thermodynamics. Most of the work I do involves designing work cells to automate manufacturing processes but one client asked me to take a look at a problem involving thermodynamic and heat transfer processes.  The statement of the second law is as follows:

“It is impossible to extract an amount of heat “Qh” from a hot reservoir and use it all to do work “W”.  Some amount of heat “Qc” must be exhausted to a cold reservoir.”

Another way to say this is:

“It is not possible for heat to flow from a cooler body to a warmer body without any work being done to accomplish this flow.”

That refresher took about fifteen (15) minutes, but it made me realize just how far we have come relative to teaching and presenting subjects involving technology; i.e. STEM (Science, Technology, Engineering and Mathematics) related information.  Theory does not change.  Those giants upon whose shoulders we stand paved the way and set the course for discovery and advancement in so many technical disciplines, but one device has revolutionized teaching methods—the modern-day computer with accompanying software.

I would like to stay with thermodynamics to illustrate a point.  At the university I attended, we were required to have two semesters of heat transfer and two semesters of thermodynamics.  Both subjects were nominally taken during the sophomore year, and both were offered in the department of mechanical engineering.   These courses were “busters” for many ME majors.  More than once they were the determining factors in the decision as to whether to stay in engineering or try another field of endeavor.  The book was “Thermodynamics” by Gordon van Wylen, copyright 1959.  My sophomore year was 1962, so this was well before computers were used at the university level.  I remember poring over the steam tables, looking at saturation temperatures and saturation pressures, trying to find specific volume, enthalpy, entropy and internal energy information.  It seemed as though interpolation was always necessary.  Have you ever tried negotiating a Mollier chart to pick off needed data? WARNING: YOU CAN GO BLIND TRYING.      Psychrometric charts presented the very same problem.  I remember one homework project in which we were expected to design a cooling tower for a commercial heating and air conditioning system.  All of the pertinent specifications were given, as well as the cooling necessary for transmission into the facility.   It was drudgery, and even though it was so long ago, I remember the “all-nighter” I pulled trying to get the final design on paper. Today, this information is readily available through software, obviously saving hours of time and greatly improving productivity.  I will say this: by the time these two courses were taken, you did understand the basic principles and associated theory for heat systems.
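Today even the table lookup itself is a few lines of code. Below is a minimal sketch of the linear interpolation we once did by hand; the two saturation-pressure figures are from a standard steam table for 100°C and 110°C, so treat them as illustrative.

    # Linear interpolation between two steam-table rows, as once done by hand.
    def interpolate(x, x0, x1, y0, y1):
        return y0 + (x - x0) * (y1 - y0) / (x1 - x0)

    # Saturation pressure (kPa) at 100 C and 110 C from a standard steam table.
    p_sat_105 = interpolate(105.0, 100.0, 110.0, 101.325, 143.38)
    print(f"Approximate saturation pressure at 105 C: {p_sat_105:.1f} kPa")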

Remember conversion tables?  One of the programs most used by working engineers is found by accessing “onlineconversions.com”.  This web site provides conversions between differing measurement systems for length, temperature, weight, area, density and power, and it even has oddball classifications such as “fun stuff” and miscellaneous.  Fun stuff is truly interesting: the Chinese Zodiac, pig Latin, Morse code, dog years—all subheadings, and many, many more.  All possible without an exhaustive search through page after page of printed documentation.  All you have to do is log on.

The business courses I took (yes, we were required to take several non-technical courses) were just as laborious.  We constructed spreadsheets, and elaborate ones at that, for cost accounting and finance—all accomplished today with MS Excel.  One great feature of MS Excel is the Σ, or sum, feature.  When you have fifty (50) or more line items and it’s 2:30 in the morning and all you want to do is wrap things up and go to bed, this becomes a godsend.

I cannot imagine where we will be in twenty (20) years relative to improvements in technology. I just hope I’m around to see them.

MACHINE VISION

January 2, 2015


INTRODUCTION:

Machine vision is an evolving technology used to replace or complement manual inspections and measurements. The technology uses digital cameras and image processing software, and it is applied in a variety of different industries to automate production, increase production speed and yield, and improve product quality. One primary objective is discerning the quality of a product when high-speed production is required.  This industry is knowledge-driven and experiences ever-increasing complexity in the components and modules of machine vision systems. In the last few years, the markets pertaining to machine vision components and systems have grown significantly.

Machine vision, also known as “industrial vision” or “vision systems”, is primarily focused on computer vision in the context of industrial manufacturing processes, such as defect detection, and non-manufacturing processes, such as traffic control and healthcare. The inspection processes provide the responsive input needed for control; for example, robot control or defect verification. The system setup consists of cameras capturing, interpreting and signaling individual control systems relative to some pre-determined tolerance or requirement. These systems have become increasingly powerful while at the same time remaining easy to use. Recent advancements in machine vision technology, such as smart cameras and embedded machine vision systems, have increased the scope of machine vision markets for wider application in the industrial and non-industrial sectors.

INDUSTRIAL SPECIFICS:

Let’s take a very quick look at several components and systems used when applying vision to specific applications.

MACHINE VISION SETUP

You can see from the graphic above products advancing down a conveyor past cameras mounted on either side of the line.  These cameras are processing information relative to specifications pre-loaded into software.  One type of specification might be a physical dimension of the product itself.  The image for each may look similar to the following:

VISION SPECIFICS

In this example, 55.85 mm, 41.74 mm, and 13.37 mm are being investigated and represent the critical-to-quality information.  The computer program will have the “limits of acceptability”, i.e. maximum and minimum data.  Dimensions falling outside these limits will not be accepted, and the product will be removed from the conveyor for disposition.
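In software, this check amounts to a simple comparison against stored tolerances. The sketch below uses the nominal dimensions from the image; the tolerance bands themselves are hypothetical.

    # Minimal sketch of a "limits of acceptability" check for measured dimensions.
    LIMITS = {                      # (minimum, maximum) in mm -- hypothetical bands
        "length": (55.60, 56.10),   # nominal 55.85
        "width":  (41.50, 42.00),   # nominal 41.74
        "height": (13.20, 13.55),   # nominal 13.37
    }

    def inspect(measured):
        """Return True if every measured dimension falls within its limits."""
        for name, value in measured.items():
            lo, hi = LIMITS[name]
            if not (lo <= value <= hi):
                print(f"REJECT: {name} = {value} mm outside [{lo}, {hi}]")
                return False
        return True

    print(inspect({"length": 55.85, "width": 41.74, "height": 13.37}))  # True
    print(inspect({"length": 56.30, "width": 41.74, "height": 13.37}))  # False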

Another usage for machine vision is simple counting, as the following two JPEGs indicate.

BOTTLE INSPECTION

COUNTING

FACIAL RECOGNITION:

One example of a non-industrial application for machine vision is facial recognition.   This technology is generally considered to be one facet of the biometrics technology suite.  Facial recognition is playing a major role in identifying and apprehending suspected criminals, as well as individuals in the process of committing a crime or unwanted activity.  Casinos in Las Vegas are using facial recognition to spot “players” with shady records, or even employees complicit with individuals trying to get even with “the house”.   The technology incorporates visible and infrared modalities for face detection, image quality analysis, verification and identification.   Many companies add cloud-based image-matching technology to their product range, providing the ability to apply theory and innovation to challenging problems in the real world.  Facial recognition technology is extremely complex and depends upon many data points relative to the human face.

FACIAL RECOGNITION TECHNOLOGY

FACIAL

A grid is constructed of “surface features”; those features are then compared with photographs located in databases or archives.  In this fashion, positive identification can be accomplished.

One of the most successful cases for the use of such camera systems was the 2013 bombing at the Boston Marathon.   Cameras mounted at various locations around the site of the bombing captured photographs of Tamerlan and Dzhokhar Tsarnaev prior to their backpacks being positioned for the two blasts.  Even though this is not facial recognition in the truest sense of the word, there is no doubt the cameras were instrumental in identifying both criminals.

BOSTON BOMBING

LAW ENFORCEMENT AND TRAFFIC CONTROL:

Remember that last ticket you got for speeding?  Maybe, just maybe, that ticket came to you through the mail with a very “neat” picture of your license plate AND the speed at which you were traveling. Probably, there was a warning sign as follows:

PHOTO ENFORCEMENT ZONE

OK, so you did not see it.  Cameras such as the one below were mounted on the shoulder of the road and snapped a very telling photograph.

ROAD-SIDE CAMERAS

You were nailed.

CRITICAL FACTORS:

There are five (5) basic and critical factors for choosing an imaging system.  These are as follows:

  • Resolution–While a higher resolution camera will help increase accuracy by yielding a clearer, more precise image for analysis, the downside is slower speed.
  • Speed of Exposure—Products rapidly moving down a conveyor line will require much faster exposure speed from vision systems.  Such applications might be candy or bottled products moving at extremely fast rates.
  • Frame Rate–The frame rate of a camera is the number of complete frames that a camera can send to an acquisition system within a predefined time period, which is usually stated as a specific number of frames per second.
  • Spectral Response and Responsiveness–All digital cameras that employ electronic sensors are sensitive to light energy. The wavelength of light energy that cameras are sensitive to typically ranges from approximately 400 nanometers to a little beyond 1000 nanometers. There may be instances in imaging when it is desirable to isolate certain wavelengths of light that emanate from an object, and where characteristics of a camera at the desired wavelength may need to be defined.  A matching and selection process must be undertaken by application engineers to ensure proper usage of equipment relative to the needs at hand.
  • Bit Depth–Digital cameras produce digital data, or pixel values. Being digital, this data has a specific number of bits per pixel, known as the pixel bit depth.  Each application should be considered carefully to determine whether fine or coarse steps in grayscale are necessary. Machine vision systems commonly use 8-bit pixels, and going to 10 or 12 bits instantly doubles data quantity, as another byte is required to transmit the data. This also results in decreased system speed because two bytes per pixel are used, but not all of the bits are significant. Higher bit depths can also increase the complexity of system integration since higher bit depths necessitate larger cable sizes, especially if a camera has multiple outputs.

SUMMARY:

Machine vision technology will continue to grow as time goes by simply because it is the most efficient and practical, not to mention cost effective, method of obtaining desired results.  As always, I welcome your comments.
