WHERE THE JOBS ARE

May 1, 2015


The following data was taken from a survey done by nerdwallet.com:  Best Places for Engineers, 23 February 2015.

If you follow my postings, you know I primarily concentrate on the STEM (science, technology, engineering and mathematics) professions. I track the job market relative to job availability and salary rates across the country and the world. An online publication called NerdWallet recently published a very informative article on job availability for engineers. Here is the methodology used to produce the results.

Methodology:

The overall score for each of the metro areas was calculated using the following measures:

  1. Engineers per 1,000 total jobs (50% of each overall score). Data is from the Bureau of Labor Statistics May 2013 Metropolitan and Nonmetropolitan Area Occupational Employment and Wage Estimates.
  2. Annual mean wage for engineering jobs (25% of each overall score). Data is from the Bureau of Labor Statistics May 2013 Metropolitan and Nonmetropolitan Area Occupational Employment and Wage Estimates.
  3. Median gross rent for each place (25% of each overall score). Data is from the 2013 U.S. Census Bureau American Community Survey.

This study analyzed 350 of the largest metro areas in the U.S.
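NerdWallet does not publish its exact normalization, but the weighting itself is easy to picture. Here is a minimal sketch in Python, assuming each metric is min-max normalized across metros and that lower rent scores higher; the metro names and figures below are placeholders, not survey data.

```python
# Hypothetical illustration of NerdWallet-style weighted scoring.
# Assumes min-max normalization; rent is inverted (lower rent = better).
# The metro figures below are made-up placeholders, not survey data.

metros = {
    "Metro A": {"eng_per_1000": 60.0, "mean_wage": 103000, "median_rent": 725},
    "Metro B": {"eng_per_1000": 38.0, "mean_wage": 86000,  "median_rent": 650},
    "Metro C": {"eng_per_1000": 30.0, "mean_wage": 123000, "median_rent": 980},
}

def normalize(values, invert=False):
    lo, hi = min(values), max(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return [1.0 - s for s in scaled] if invert else scaled

names = list(metros)
eng  = normalize([metros[m]["eng_per_1000"] for m in names])
wage = normalize([metros[m]["mean_wage"]    for m in names])
rent = normalize([metros[m]["median_rent"]  for m in names], invert=True)

# Weights from the stated methodology: 50% / 25% / 25%.
for i, name in enumerate(names):
    score = 100 * (0.50 * eng[i] + 0.25 * wage[i] + 0.25 * rent[i])
    print(f"{name}: {score:.1f}")
```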

The following engineering fields, as defined by the Bureau of Labor Statistics, were used to compile the data: aerospace, biomedical, chemical, civil, computer hardware, electrical, electronics, environmental, health and safety engineers, industrial, marine engineers and naval architects, materials, mechanical, mining and geological engineers and all other engineers. This list just about covers the "waterfront" as far as working engineers are concerned. Let's take a look at the results.

List 1-10

List 11-20

In looking at the list above, we can make the following observations:

  • Eleven of the top twenty cities and areas are in the South. The list includes the following southern cities:
  1. Huntsville, Alabama

With a NASA flight center and an Army arsenal, Huntsville is nicknamed "The Rocket City" for good reason. Engineers make up 6% of its employed population and earn nearly $103,000 a year, which is higher than the national mean. Median rent is the second lowest in our top 10, at around $725 a month. Huntsville, a northern Alabama city near the Tennessee border, is a hub for aerospace engineers.

  2. Warner Robins, Georgia

Drive 90 minutes south of Atlanta and you'll hit Warner Robins, where nearly 4% of the working population is in engineering. Here you'll find Robins Air Force Base, which employs more than 25,000 people, and the Museum of Aviation, the U.S. Air Force's second-largest museum. However, engineers in this area earn the lowest salary of our top 10, around $86,000 a year, which is lower than the national mean.

  3. Palm Bay-Melbourne-Titusville, Florida

Aside from ocean views, the Palm Bay-Melbourne-Titusville area offers career opportunities for engineers, who make up about 3% of the employed population, earn almost $94,000 a year and pay around $876 in rent. Harris Corp., a worldwide telecommunications company, and Intersil Corp., a semiconductor manufacturer, are headquartered in the area, employing thousands.

  4. Houston-Sugar Land-Baytown, Texas

In the Lone Star State's most populous metro area, engineers earn their livelihood in the energy sector at companies including Phillips 66, Marathon Oil and Kinder Morgan. Engineers in this area make a mean salary of almost $123,000, which is the second highest in our top 20. This area also made our top 10 list of Best Places for STEM Graduates.

  5. Midland, Texas

As the saying goes, “Everything’s bigger in Texas,” including the engineering sector. Engineers here take home the largest salary of our top 20 — about $141,000 a year. Midland, with key industries including aerospace, oil and gas, has one of the lowest unemployment rates in the country, 2.6%, according to the U.S. Bureau of Labor Statistics.

  6. Decatur, Alabama

Just 25 miles west of our list’s leading place, Decatur engineers have access to many opportunities in Huntsville. But Decatur itself is home to a United Launch Alliance facility, where spacecraft launch equipment is manufactured. Engineers make up about 2% of Decatur’s workforce, making it the smallest engineering industry in our top 10. However, it still has more engineers per 1,000 employees than the national average.

  • All 20 locations have larger engineering industries than the national average of twelve (12) engineers for every 1,000 employees.
  • Engineers in thirteen (13) of the top twenty (20) places earn more than the national mean engineering salary, which is $92,170.
  • Fourteen (14) locations have lower median rent than the average U.S. metro area, which is $905 a month.
  • A great deal of employment results from proximity to universities and military-industrial complexes, although the "oil patch" certainly draws a great number of individuals in STEM professions.
  • Notably absent are the Northeast and the "Rust Belt"; i.e., the northern and midwestern states.

I also think certain factors, such as lower taxes, less commuting congestion, a milder climate, and a lower cost of living, contribute to companies locating in southern areas.

I hope you enjoyed this one. I will make every effort to keep this list current.  As always, I appreciate your comments.  Keep them coming.


Data for this post was taken from the following sources: 1.) Design News Daily, and 2.) the references given on the individual slides.

I have been a "blue-collar" working engineer since graduation in 1966. I think it's a marvelous profession and tremendously rewarding. I also find that engineering is one of the most trusted professions. When you are designing a bridge, a machine, a biomedical device, etc., there is little room for PC. Being politically correct will get you a bum design. You design toward accomplishing an objective or satisfying a consumer need. Also, you can't talk your way into success. You have to perform at every phase of the engineering program. There are processes in place that aid our efforts along the way. Some of these are as follows:

  • Six Sigma
  • Design for Six Sigma
  • QFD or Quality Function Deployment
  • FMEA or Failure Mode and Effects Analysis
  • Computational Fluid Dynamics
  • Reliability Engineering
  • HALT—Highly Accelerated Life Testing

There are others depending upon the branch of engineering in question.  There are also a large number of computer programs specifically written for each engineering discipline.
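As one concrete example from the list above, FMEA typically ranks each failure mode by a Risk Priority Number: RPN = severity x occurrence x detection, each scored from 1 (best) to 10 (worst). A minimal sketch, with made-up failure modes:

```python
# Minimal FMEA sketch: rank failure modes by Risk Priority Number.
# RPN = severity x occurrence x detection, each scored 1 (best) to 10 (worst).
# The failure modes and scores below are illustrative, not from a real study.

failure_modes = [
    # (description, severity, occurrence, detection)
    ("Bearing seizure",        9, 3, 4),
    ("Weld fatigue crack",     8, 2, 7),
    ("Sensor miscalibration",  5, 6, 3),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)

for desc, s, o, d in ranked:
    print(f"{desc:<24} RPN = {s * o * d}")
```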

With that being the case, what would you say are the highest-paying engineering disciplines? You might be surprised. I was. The following slides basically speak for themselves and represent entry-level, mid-level and high-end salaries for graduate engineers. Let's take a look at the top ten (10).

BIOMEDICAL

I’m not surprised at biomedical engineering being in the top ten.  There is a huge demand for “bio-engineers” due to rapid advances in technology and significant needs relative to non-invasive medical investigations.

The next one, Civil Engineering, does surprise me a little, although we live with a crumbling infrastructure. Much more needs to be accomplished to redesign, replace and upgrade our roads, dams, bridges, levees, etc. We are literally falling apart.

CIVIL

The next two should not surprise anyone.  IT is driving innovation in our time and the need for computer programmers, hardware engineers and software engineers will only increase as time goes by.

COMPUTER ENGINEERING HARDWARE

COMPUTER ENGINEERING SOFTWARE

Chemical engineering has always been one of the top engineering disciplines.  CEs can apply their “trade” to an extremely large number of endeavors.

CHEMICAL ENGINEERING

EE

During my time, EE jobs were the highest paying. They still are.

Years ago, environmental engineering was included in the CE discipline. Today, it is important enough to stand alone and provide excellent salary levels.

ENVIRONMENTAL

GEOLOGY AND MINING

Geology and mining engineering has taken off in recent years due to needs brought about by the oil industry. More than ever, new sources of natural gas and oil are needed. The term "fracking" was unknown ten, and certainly twenty, years ago.

Material science is one of the most fascinating areas of investigation undertaken in today's engineering world. Composite structures, "additive" manufacturing, adhesives, and a host of other areas of materials engineering are driving demand throughout the profession.


MATERIALS SCIENCE

MECHANICAL

I am a mechanical engineer and greatly enjoy the work I do in designing work cells to automate manufacturing and assembly processes.  The field is absolutely wide open.

I hope you enjoy this very brief look at the top ten disciplines. I also hope you will be encouraged to show this post to your children and grandchildren. Explain what engineers do and how our profession benefits mankind.

EMBRAER

March 27, 2015


You know Dasher and Dancer and Prancer and Vixen, Gulfstream and Piper and Beechcraft and Cessna; but do you recall the least-known aircraft of all? OK, so I'm not a poet or songwriter. Have you ever heard of an aircraft manufacturer called EMBRAER? Do you recognize their logotype?

LOGO

Well, I’ll bet you have flown on one of their aircraft.

HISTORY:

Embraer S.A. is a Brazilian aerospace conglomerate that produces commercial, military, executive and agricultural aircraft. The company also provides corporate and private aeronautical services. It is headquartered in São José dos Campos in the State of São Paulo.

On August 19, 1969, Embraer (Empresa Brasileira de Aeronáutica S.A.) was created. With the support of the Brazilian government, the company turned science and technology into engineering and industrial capacity. The Brazilian government had been seeking a domestic aircraft manufacturer, making several investment attempts during the 1940s and '50s to fulfill this need. Its first president, Ozires Silva, was appointed by the Brazilian government to run the company. EMBRAER initially produced one turboprop passenger aircraft, the Embraer EMB 110 Bandeirante, a project organized and executed by Ozires Silva. The first EMB 110 Bandeirante to be produced in series made its maiden flight on August 9, 1972. On the 19th of that same month, a public ceremony was held at the Embraer headquarters, attended by officials, employees and journalists from not only Brazil but several countries in South America. That aircraft is shown in the image below.

40 Years Ago

By the end of the '70s, the development of new products, such as the EMB 312 Tucano and the EMB 120 Brasilia, followed by the AMX program in cooperation with the Aeritalia (currently Alenia) and Aermacchi companies, allowed Embraer to reach a new technological and industrial level. At exactly 8:44 AM on April 8, 1982, the twin-engine EMB 121 Xingus PP-ZXA and PP-ZXB took off from São José dos Campos, piloted by Brasílico Freire Netto, Carlos Arlindo Rondom, Paulo César Schuler Remido and Luiz Carlos Miguez Urbano, en route to France. They were the first two aircraft of a total of forty-one (41) ordered by the French government for use in training military pilots from the Air Force (Armée de l'Air) and Naval Aviation (Aéronavale). The aircraft were delivered to the French authorities on April 16, at Le Bourget Airport. That aircraft may be seen as follows:

Commissioned by the French

The EMB 120 Brasilia aircraft became an important milestone in the history of Embraer. Developed as a response to the evolving demands of the regional air transport industry, its design took advantage of the most advanced technologies available at the time. It was the fastest, lightest and most economical airplane in its category. Most of the EMB 120s were sold in the United States and other destinations in the Western Hemisphere. Some European airlines such as Régional in France, Atlant-Soyuz Airlines in Russia, DAT in Belgium, and DLT in Germany also purchased EMB-120s. Serial production ended in 2001. As of 2007, it remained available for one-off orders, as it shares much of the production equipment with the ERJ-145 family, which is still being produced. The Angolan Air Force, for example, received a new EMB 120 in 2007. If you've done much flying at all, you probably have flown on the EMB 120. SkyWest Airlines operates the largest fleet of EMB 120s under the United Express and Delta Connection brands. Great Lakes Airlines operates six EMB 120s in its fleet, and Ameriflight flies eight as freighters. This configuration has been a real short-haul workhorse. Another, and possibly better, look is as follows:

Air Moldova

COMMERCIAL LONG-HAUL:

Another workhorse is the EMBRAER 195. That aircraft may be seen below. It costs approximately $40 million, about as expensive as the average narrow-body passenger jet, and seats 108 passengers in a typical layout, 8 more than the average narrow-body passenger plane. The maximum seating capacity is 122 passengers in an all-economy configuration. The 195 uses roughly $11.64 worth of fuel per nautical mile flown (assuming $6 per gallon of jet fuel). On a per-seat basis, this translates to being 7.3% more cost-efficient than the average aircraft.

A maximum range of 2,200 nautical miles (equal to 2,530 miles) makes this aircraft most appropriate for long domestic flights, or very short international flights.   With a service ceiling (max cruise altitude) of 41,000 feet, it is just slightly higher than the norm for this type of aircraft and can certainly get above most weather patterns along the flight route.
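As a quick back-of-the-envelope check, the figures quoted above imply a fuel burn of roughly 1.94 gallons per nautical mile and about eleven cents of fuel per seat-nautical-mile in the typical 108-seat layout:

```python
# Back-of-the-envelope check on the EMBRAER 195 figures quoted above.
fuel_cost_per_nmi = 11.64   # USD per nautical mile (given)
fuel_price        = 6.00    # USD per gallon (given assumption)
typical_seats     = 108     # typical layout (given)

burn_gal_per_nmi  = fuel_cost_per_nmi / fuel_price      # ~1.94 gal/nmi
cost_per_seat_nmi = fuel_cost_per_nmi / typical_seats   # ~$0.108 per seat-nmi

print(f"Implied fuel burn: {burn_gal_per_nmi:.2f} gal per nautical mile")
print(f"Fuel cost per seat: ${cost_per_seat_nmi:.3f} per seat-nautical-mile")
```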

EMBRAER 195

BUSINESS JET:

The Embraer EMB-505 Phenom 300 is a light jet aircraft developed by Embraer that can carry eight (8) or nine (9) occupants. It has a flying range of 1,971 nmi (3,650 km) and carried a price estimate between US$5 million and US$8 million in 2012.

At 45,000 feet (14,000 m), the Phenom 300 is pressurized to a cabin altitude of 6,600 feet (2,000 m). The jet features single-point refueling and an externally serviced private rear lavatory, refreshment center and baggage area. It received FAA Type Certification on 14 December 2009 as the Embraer EMB-505.

On 29 December 2009, Embraer delivered the first Phenom 300 to Executive Flight Services at the company's headquarters in São José dos Campos, Brazil. In just four years, the Phenom 300 climbed to the top position on the list of most-delivered business jets, with 60 units delivered in 2013. The Phenom 300 is the fastest seller in NetJets' inventory, which counts thirty-six (36) of them. A beautiful aircraft, with the ten (10) most recent deliveries totaling $90 million.

BUSINESS

MILITARY ISSUE:

Embraer has started work on modernizing a second production batch of Northrop F-5E fighters and F-model trainers for the Brazilian air force.

Three aircraft from a total of 11 are already being worked on at the company’s facilities in Gavião Peixoto, Brazil, with deliveries expected to start later this year. Embraer says it completed the delivery of a first batch of 46 modified F-5EM/FMs in 2012.  That aircraft is shown below.

Fighter

Both the modernized F-5M and AMX are being upgraded to a common avionics configuration. “What we are doing in Brazil is basically a commonality between the Super Tucano, F-5 and the AMX so that the pilots would not have many problems for transition,” Embraer says. “You also reduce costs and assist in training.”

The AMX and F-5 fleets are also receiving Elbit Systems-built radars, in addition to upgraded electronic warfare equipment, in-flight refueling systems and other improvements.

Meanwhile, the Brazilian navy is also upgrading its small fleet of 12 Douglas A-4 Skyhawk carrier-based light strike aircraft. At least one of the Skyhawks is currently being modernized at Gavião Peixoto, but Embraer could not immediately offer any details.

Alongside the modernization work for the Brazilian military, the factory at Gavião Peixoto is at work building a number of Super Tucanos for export customers in Angola and Indonesia.

Brazil has previously increased spending on defense in preparation for hosting the FIFA World Cup in 2014 and the Olympic Games in 2016.

There is also a growing realization in the country that it will have to work diligently in the future to protect its vast natural resources. This could unfortunately require military preparedness.

Another example of Embraer's military ability may be seen in the following aircraft:

Heavy Duty Cargo Aircraft

The Embraer KC-390 is a medium-size, twin-engine jet-powered military transport aircraft now under development.  It is able to perform aerial refueling and to transport cargo and troops and will be the heaviest aircraft the company has in its inventory.  It will be able to transport up to 21 metric tons (23 short tons) of cargo, including wheeled armored fighting vehicles.

AGRICULTURAL:

The Ipanema is the market leader, with 50 years of continuous production and over 1,300 units sold, representing about 75% of the nation's fleet in this segment and roughly a 60% share of Brazil's agricultural aviation market. Those decades of continuous production have been accompanied by constant research to improve the aircraft, with that effort always focused on the needs of the customers and the national agricultural market. The brand demonstrates the reliability, solidity and tradition of Ipanema. One other fact: the Ipanema is the first aircraft certified to fly powered solely by ethanol. In addition to the economic advantages and the improvement in engine performance, ethanol is a renewable source of energy, which helps protect the environment.

Agricultural

CONCLUSION:

As you can see, United States aircraft manufacturers do have competition, and excellent competition at that. This foreign entry keeps us on our toes.

FACIAL RECOGNITION

March 6, 2015


THE TECHNOLOGY:

Humans have always had the innate ability to recognize and distinguish between faces, yet computers have only recently shown the same ability. That ability results from proper software running on PCs with memory adequate to handle the mapping process.

In the mid-1960s, scientists began working to use computers to recognize human faces. This certainly was not easy at first. Facial recognition software and hardware have come a long way since those fledgling early days and definitely involve mathematical algorithms.

ALGORITHMS:

An algorithm is defined by Merriam-Webster as follows:

“a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation; broadly: a step-by-step procedure for solving a problem or accomplishing some end especially by a computer.”

Some facial recognition algorithms identify facial features by extracting landmarks, or features, from an image of the subject’s face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features. Other algorithms normalize a gallery of face images and then compress the face data, only saving the data in the image that is useful for face recognition. A probe image is then compared with the face data. One of the earliest successful systems is based on template matching techniques applied to a set of salient facial features, providing a sort of compressed face representation.

Recognition algorithms can be divided into two main approaches: geometric, which looks at distinguishing features, and photometric, a statistical approach that distills an image into values and compares those values with templates to eliminate variances.

Every face has numerous distinguishable landmarks, the different peaks and valleys that make up facial features. These landmarks are defined as nodal points. Each human face has approximately 80 nodal points. Some of those measured by the software are:

  • Distance between the eyes
  • Width of the nose
  • Depth of the eye sockets
  • The shape of the cheekbones
  • The length of the jaw line

These nodal points are measured, thereby creating a numerical code, called a face-print, that represents the face in the database.
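To make the geometric approach concrete, here is a minimal, hypothetical sketch: a few nodal-point distances are normalized by the eye-to-eye span (so image scale drops out) and two face-prints are compared by Euclidean distance. Real systems use far more points and far more robust statistics.

```python
# Illustrative face-print sketch (not any specific commercial algorithm).
# Nodal-point measurements are collapsed into a vector, normalized by
# the eye-to-eye span so overall image scale drops out, then compared
# by Euclidean distance. All coordinates below are hypothetical.
import math

def face_print(landmarks):
    """Build a scale-invariant measurement vector from (x, y) landmarks."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    eye_span = dist(landmarks["left_eye"], landmarks["right_eye"])
    features = [
        dist(landmarks["left_eye"], landmarks["right_eye"]),
        dist(landmarks["nose_left"], landmarks["nose_right"]),  # nose width
        dist(landmarks["chin"], landmarks["nose_tip"]),         # jaw-to-nose length
    ]
    return [f / eye_span for f in features]  # normalize out image scale

def match_score(print_a, print_b):
    """Smaller is more similar."""
    return math.dist(print_a, print_b)

probe = face_print({
    "left_eye": (30, 40), "right_eye": (70, 40),
    "nose_left": (42, 60), "nose_right": (58, 60),
    "nose_tip": (50, 62), "chin": (50, 95),
})
# Same geometry at twice the image scale: should match almost exactly.
candidate = face_print({
    "left_eye": (60, 80), "right_eye": (140, 80),
    "nose_left": (84, 120), "nose_right": (116, 120),
    "nose_tip": (100, 124), "chin": (100, 190),
})
print(f"Match distance: {match_score(probe, candidate):.4f}")  # ~0.0
```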

In the past, facial recognition software has relied on a 2D image to compare or identify another 2D image from the database. To be effective and accurate, the image captured needed to be of a face that was looking almost directly at the camera, with little variance of light or facial expression from the image in the database. This created quite a problem.

In most instances the images were not taken in a controlled environment. Even the smallest changes in light or orientation could reduce the effectiveness of the system, so they couldn’t be matched to any face in the database, leading to a high rate of failure. In the next section, we will look at ways to correct the problem.

A newly-emerging trend in facial recognition software uses a 3D model, which claims to provide more accuracy. Capturing a real-time 3D image of a person’s facial surface, 3D facial recognition uses distinctive features of the face — where rigid tissue and bone is most apparent, such as the curves of the eye socket, nose and chin — to identify the subject. These areas are all unique and don’t change over time.

Using depth and an axis of measurement that is not affected by lighting, 3D facial recognition can even be used in darkness and has the ability to recognize a subject at different view angles with the potential to recognize up to 90 degrees (a face in profile).

Using the 3D software, the system goes through a series of steps to verify the identity of an individual.

 

The nodal points or recognition points are demonstrated with the following graphic.

POINTS OF RECOGNITION

This is where Machine Vision or MV comes into the picture.  Without MV, facial recognition would not be possible.  An image must first be taken, then that image is digitized and processed.

MACHINE VISION:

Facial recognition is one example of a non-industrial application for machine vision (MV). This technology is generally considered to be one facet of the biometrics technology suite. Facial recognition is playing a major role in identifying and apprehending suspected criminals as well as individuals in the process of committing a crime or unwanted activity. Casinos in Las Vegas are using facial recognition to spot "players" with shady records or even employees complicit with individuals trying to get even with "the house". This technology incorporates visible and infrared modalities for face detection, image quality analysis, verification and identification. Many companies add cloud-based image-matching technology to their product ranges, providing the ability to apply theory and innovation to challenging problems in the real world. Facial recognition technology is extremely complex and depends upon many data points relative to the human face.

Facial recognition has a very specific methodology associated with it. You can see from the graphic above that points of recognition are "mapped," highlighting very specific characteristics of the human face. Tattoos, scars, feature shapes, etc. all play into identifying an individual. A grid is constructed of "surface features"; those features are then compared with photographs located in databases or archives. In this fashion, positive identification can be accomplished. The graphic below indicates the grid developed and used for the mapping process. Cameras are also shown that receive the image and send it to software used for comparisons.

MAPPING AND CAMERAS USED

One of the most successful cases for the use of this technology was last year's bombing at the Boston Marathon. Cameras mounted at various locations around the site of the bombing captured photographs of Tamerlan and Dzhokhar Tsarnaev prior to their backpacks being positioned for both blasts. Even though this is not facial recognition in the truest sense of the word, there is no doubt the cameras were instrumental in identifying both criminals.

TAMERLAN AND DZHOKHAR

Dzhokhar Tsarnaev is now the subject of a court case that will determine whether he lives or dies. There is no doubt, thanks to MV, concerning his guilt or innocence. He is guilty. Jurors in Boston heard harrowing testimony this week in his trial. Survivors, as well as police and first responders, recounted often-disturbing accounts of their suffering and the suffering of runners and spectators as a result of the attack. Facial recognition was paramount in his identification and ultimate capture.

As always, your comments are very welcome.


For those who might be a little bit unsure as to the definition of machine vision (MV), let’s now define the term as follows:

“Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance in industry.”

There are non-industrial uses for MV also, such as 1.) law enforcement, 2.) security, 3.) facial recognition and 4.) robotic surgery. With this being the case, there must be several critical, if not very critical, aspects to the technology that must be considered prior to purchasing an MV system or even discussing MV with a vendor. We will now take a closer look at those critical factors.

CRITICAL FACTORS:

As with any technology, there are certain elements critical to success. MV is no different.  There are six (6) basic and critical factors for choosing an imaging system.  These are as follows:

  • Resolution–A higher-resolution camera will undoubtedly help increase accuracy by yielding a clearer, more precise image for analysis. The downside to higher resolution is slower speed. The resolution required for an inspection is determined by two factors: 1.) the field of view required and 2.) the minimum dimension that must be resolved by the imaging system. Of course, lenses, lighting, mechanical placement and other factors come into play, but, if we confine our discussion to pixels, we can avoid having to entertain these topics. This allows us to focus on the camera characteristics.

As an example, if a beverage packaging system requires verification that a case is full prior to sealing, it is necessary for the camera to image the contents from above and verify that twenty-four (24) bottle caps are present. Since the bottles and caps fit within the case, the caps are the smallest feature within the scene that must be resolved. Once the application parameters and smallest features have been determined, the required camera resolution can be roughly defined. When the case is imaged, the bottle caps will stand out as light objects within a dark background. With the bottle caps being round, the image will appear as circles bounded by two edges with a span between the edges. The edges are defined as points where the image makes a transition from dark to light or light to dark. The span is the diametrical distance between the edges.

At this point, it is necessary to define the number of pixels that will represent each of these points. In this application, it would be sufficient to allow three pixels to define each of the two edges and four pixels to define the span. Therefore, a minimum of ten pixels should be used to define the 25mm bottle cap in the image. From this, we can determine that one pixel will represent 2.5mm of the object itself. Now we can determine the overall camera resolution. Choosing 400mm of the object to represent the horizontal field of view, the camera needs a minimum of 400/2.5 = 160 pixels of horizontal resolution. Vertically, the camera needs 250/2.5 = 100 pixels of vertical resolution. Adding a further 10% to each resolution to account for variations in the object location within the field of view gives the absolute minimum camera resolution. There are pros and cons to increasing image resolution, as follows.
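Before turning to those, the bottle-cap sizing arithmetic can be captured in a short helper. A minimal sketch under the same assumptions (three pixels per edge plus a four-pixel span, and a 10% placement margin):

```python
# Reproducing the bottle-cap sizing math above as a reusable sketch.

def min_camera_resolution(fov_mm, smallest_feature_mm, px_per_feature=10, margin=0.10):
    """fov_mm: (horizontal, vertical) field of view in mm.
    px_per_feature: pixels spanning the smallest feature (3 per edge + 4 span)."""
    mm_per_px = smallest_feature_mm / px_per_feature        # 25 mm / 10 px = 2.5 mm
    h = round(fov_mm[0] / mm_per_px * (1 + margin))         # 400 / 2.5 * 1.1 = 176
    v = round(fov_mm[1] / mm_per_px * (1 + margin))         # 250 / 2.5 * 1.1 = 110
    return h, v

print(min_camera_resolution((400, 250), smallest_feature_mm=25))  # (176, 110)
```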

Pros and cons of increasing resolution

Digital cameras transmit image data as a series of digital numbers that represent pixel values. A camera with a resolution of 200 x 100 pixels will have a total of 20,000 pixels, and, therefore, 20,000 digital values must be sent to the acquisition system. If the camera is operating at a data rate of 25MHz, it takes 40 nanoseconds to send each value. This results in a total time of approximately 0.0008 seconds, which equates to 1,250 frames per second. Increasing the camera resolution to 640 x 480 results in a total of 307,200 pixels, which is approximately 15 times greater. Using the same data rate of 25MHz, a total time of 0.012288 seconds, or 81.4 frames per second, is achieved. These values are approximations, and actual camera frame rates will be somewhat slower because we have to add exposure and setup times, but it is apparent that an increase in camera resolution will result in a proportional decrease in camera frame rate. While a variety of camera output configurations will enable increased camera resolution without a sacrifice in frame rate, these are accompanied by additional complexity and associated higher costs.
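That frame-rate arithmetic is simple enough to script. A sketch that ignores exposure and setup overhead, which is why real cameras run somewhat slower:

```python
# Frame-rate estimate from resolution and pixel clock, as in the text.
# Ignores exposure and setup overhead, so real cameras run somewhat slower.

def approx_fps(width_px, height_px, data_rate_hz=25e6):
    pixels = width_px * height_px
    readout_time = pixels / data_rate_hz   # seconds to clock out one frame
    return 1.0 / readout_time

print(f"{approx_fps(200, 100):.0f} fps")   # ~1250 fps
print(f"{approx_fps(640, 480):.1f} fps")   # ~81.4 fps
```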

  • Speed of Exposure—Products moving rapidly down a conveyor line will require much faster exposure speeds from vision systems. Such applications might be candy or bottled products moving at extremely fast rates. When selecting a digital camera, the speed of the object being imaged must be considered as well. Objects that do not move during exposure can be handled by relatively simple, inexpensive cameras with perfectly satisfactory results. Objects moving continuously require other considerations. In still other cases, objects may be stationary only for very short periods of time and then move rapidly; if so, inspection during the stationary period is the most desirable.

Stationary or slow-moving objects: Area array cameras are well suited to imaging objects that are stationary or slow moving. Because the entire area array must be exposed at once, any movement during the exposure time will result in a blurring of the image. Motion blurring can, however, be controlled by reducing exposure times or using strobe lights.

Fast-moving objects: When using an area array camera for objects in motion, some consideration must be given to the amount of movement with respect to the exposure time of the camera and the object resolution, defined here as the smallest feature of the object represented by one pixel. A rule of thumb when acquiring images of a moving object is that the exposure must occur in less time than it takes for the object to move beyond one pixel. If you are grabbing images of an object that is moving steadily at 1cm/second and the object resolution is set at 1 pixel/mm, then the absolute maximum exposure time is 1/10 of a second. There will be some amount of blur when using the maximum exposure time, since the object will have moved by an amount equal to 1 pixel on the camera sensor. In this case, it is preferable to set the exposure time to something faster than the maximum, possibly 1/20 of a second, to keep the object within half a pixel. If the same object moving at 1cm/second has an object resolution of 1 pixel/micrometer, then a maximum exposure of 1/10,000 of a second would be required. How fast the exposure can be set will depend on what is available in the camera and whether you can get enough light on the object to obtain a good image. Additional tricks of the trade can be employed when attempting to obtain short exposure times of moving objects. In cases where a very short exposure time is required from a camera that does not have this capability, an application may make use of shutters or strobed illumination. Cameras that employ multiple outputs can also be considered if an application requires speeds beyond the capabilities of a single-output camera.
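The one-pixel rule of thumb reduces to a one-line calculation. A sketch using the figures from the paragraph above:

```python
# Rule of thumb from the text: expose in less time than the object
# takes to cross one pixel (half a pixel leaves a comfortable margin).

def max_exposure_s(object_speed_mm_s, mm_per_pixel, pixel_fraction=1.0):
    return (mm_per_pixel * pixel_fraction) / object_speed_mm_s

speed = 10.0        # 1 cm/second, expressed in mm/s
resolution = 1.0    # object resolution: 1 pixel per mm

print(max_exposure_s(speed, resolution))                      # 0.1 s  (1/10)
print(max_exposure_s(speed, resolution, pixel_fraction=0.5))  # 0.05 s (1/20)
```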

  • Frame Rate–The frame rate of a camera is the number of complete frames a camera can send to an acquisition system within a predefined time period.  This period is usually stated as a specific number of frames per second.  As an example, a camera with a sensor resolution of 640 x 480 is specified with a maximum frame rate of 50 frames per second. Therefore, the camera needs 20 milliseconds to send one frame following an exposure. Some cameras are unable to take a subsequent exposure while the current exposure is being read, so they will require a fixed amount of time between exposures when no imaging takes place. Other types of cameras, however, are capable of reading one image while concurrently taking the next exposure. Therefore, the readout time and method of the camera must be considered when imaging moving objects. Further consideration must be given to the amount of time between frames when exposure may not be possible.
  • Spectral Response and Responsiveness–All digital cameras that employ electronic sensors are sensitive to light energy. The wavelength of light energy that cameras are sensitive to typically ranges from approximately 400 nanometers to a little beyond 1000 nanometers. There may be instances in imaging when it is desirable to isolate certain wavelengths of light that emanate from an object, and where the characteristics of a camera at the desired wavelength may need to be defined. A matching and selection process must be undertaken by application engineers to ensure proper usage of equipment relative to the needs at hand. Filters may be incorporated into the application to tune out the unwanted wavelengths, but it will still be necessary to know how well the camera will respond to the desired wavelength. The responsiveness of a camera defines how sensitive the camera is to a fixed amount of exposure. Responsiveness can be defined in LUX or DN/(nJ/cm^2). "LUX" is a common term among imaging engineers that is used to define sensitivity in photometric units over the range of visible light, whereas DN/(nJ/cm^2) is a radiometric expression that does not limit the response to visible light. In general, both terms state how the camera will respond to light. The radiometric expression of x DN/(nJ/cm^2) indicates that, for a known exposure of 1 nJ/cm^2, the camera will output pixel data of x DN (digital numbers, also known as grayscale). Gain is another feature available within some cameras that can provide various levels of responsiveness. The responsiveness of a camera should be stated at a defined gain setting. Be aware, however, that a camera may be said to have high responsiveness at a high gain setting, but increased noise level can lead to reduced dynamic range.
  • Bit Depth–Digital cameras produce digital data, or pixel values. Being digital, this data has a specific number of bits per pixel, known as the pixel bit depth, which typically ranges from 8 to 16 bits. In monochrome cameras, the bit depth defines the quantity of gray levels from dark to light, where a pixel value of 0 is 100% dark and 255 (for 8-bit cameras) is 100% white. Values between 0 and 255 will be shades of gray, where values near 0 are dark gray and values near 255 are almost white. 10-bit data will produce 1024 distinct levels of gray, while 12-bit data will produce 4096 levels. Each application should be considered carefully to determine whether fine or coarse steps in grayscale are necessary. Machine vision systems commonly use 8-bit pixels, and going to 10 or 12 bits instantly doubles data quantity, as another byte is required to transmit the data. This also results in decreased system speed because two bytes per pixel are used, but not all of the bits are significant. Higher bit depths can also increase the complexity of system integration, since they necessitate larger cable sizes, especially if a camera has multiple outputs.
  • Lighting—Perhaps no other aspect of vision system design and implementation has consistently caused more delay, cost overruns, and general consternation than lighting. Historically, lighting was often the last aspect specified, developed, and/or funded, if at all. And this approach was not entirely unwarranted, as until recently there was no real vision-specific lighting on the market, meaning lighting solutions typically consisted of standard incandescent or fluorescent consumer products, with various amounts of ambient contribution. The following lighting sources are now commonly used in machine vision:
  • Fluorescent
  • Quartz Halogen – Fiber Optics
  • LED – Light Emitting Diode
  • Metal Halide (Mercury)
  • Xenon
  • High Pressure Sodium

Fluorescent, quartz-halogen, and LED are by far the most widely used lighting types in machine vision, particularly for small to medium scale inspection stations, whereas metal halide, xenon, and high pressure sodium are more typically used in large scale applications, or in areas requiring a very bright source. Metal halide, also known as mercury, is often used in microscopy because it has many discrete wavelength peaks, which complements the use of filters for fluorescence studies. A xenon source is useful for applications requiring a very bright, strobed light.

Historically, fluorescent and quartz halogen lighting sources have been used most commonly. In recent years, LED technology has improved in stability, intensity, and cost-effectiveness; however, it is still not as cost-effective for large-area lighting deployment, particularly compared with fluorescent sources. On the other hand, if application flexibility, output stability, and longevity are important parameters, then LED lighting might be more appropriate. Depending on the exact lighting requirements, more than one source type may often be used for a specific implementation, and most vision experts agree that one source type cannot adequately solve all lighting issues. It is important to consider not only a source's brightness, but also its spectral content. Microscopy applications, for example, often use a full-spectrum quartz halogen, xenon, or mercury source, particularly when imaging in color; however, a monochrome LED source is also useful for B&W CCD cameras, and now for color applications as well, with the advent of "all color – RGB" and white LED light heads. In applications requiring high light intensity, such as high-speed inspections, it may be useful to match the source's spectral output with the spectral sensitivity of your particular vision camera. For example, CMOS sensor-based cameras are more IR sensitive than their CCD counterparts, imparting a significant sensitivity advantage in light-starved inspection settings when using IR LED or IR-rich tungsten sources.

Vendors must be contacted to recommend proper lighting relative to the job to be accomplished.

THE TRUTH IS OUT THERE

February 6, 2015


In John 18:38 we read the following from the King James Version of the Bible: “Pilate saith unto him, What is truth? And when he had said this, he went out again unto the Jews, and saith unto them, I find in him no fault at all.”  Pilate did not stay for an answer.

One of my favorite television programs was The X-Files. It's been off the air for some years now, but we are told it will return as a mini-series sometime in the very near future. The original characters, i.e. Fox Mulder and Dana Scully, will again remind us—THE TRUTH IS OUT THERE. The truth is definitely out there, as indicated by the men and women comprising the Large Synoptic Survey Telescope team. They are definitely staying for answers. The team members posed for a group photograph as seen below.

LSST Team

THE MISSION:

The Large Synoptic Survey Telescope (LSST) is a revolutionary facility which will produce an unprecedented wide-field astronomical survey of our universe using an 8.4-meter ground-based telescope. The LSST leverages innovative technology in all subsystems: 1.) the camera (3,200 megapixels, the world's largest digital camera), 2.) the telescope (simultaneous casting of the primary and tertiary mirrors; two aspherical optical surfaces on one substrate), and 3.) data management (30 terabytes of data nightly). There will be almost instant alerts issued for objects that change in position or brightness.

The known forms of matter and types of energy experienced here on Earth account for only four percent (4%) of the universe. The remaining ninety-six percent (96%), though central to the history and future of the cosmos, remains shrouded in mystery. Two tremendous unknowns present one of the most tantalizing and essential questions in physics: What are dark energy and dark matter? LSST aims to expose both.

DARK ENERGY:

Something is driving the universe apart, accelerating the expansion begun by the Big Bang. This force accounts for seventy percent (70%) of the cosmos, yet is invisible and can only be “seen” by its effects on space. Because LSST is able to track cosmic movements over time, its images will provide some of the most precise measurements ever of our universe’s inflation. Light appears to stretch at the distant edges of space, a phenomenon known as red shift, and LSST may offer the key to understanding the cosmic anti-gravity behind it.

DARK MATTER:

Einstein deduced that massive objects in the universe bend the path of light passing nearby, proving the curvature of space. One way of observing the invisible presence of dark matter is examining the way its heavy mass bends the light from distant stars. This technique is known as gravitational lensing. The extreme sensitivity of the LSST, as well as its wide field of view, will help assemble comprehensive data on these gravitational lenses, offering key clues to the presence of dark matter. The dense and mysterious substance acts as a kind of galactic glue, and it accounts for twenty-five percent (25%) of the universe.

From its mountaintop site, LSST will image the entire visible sky every few nights, capturing changes over time from seconds to years. Ultimately, after 10 years of observation, a stunning time-lapse movie of the universe will be created.

As the LSST stitches together thousands of images of billions of galaxies, it will process and upload that information for applications beyond pure research. Frequent, real-time updates – 100 thousand a night – announcing the drift of a planet or the flicker of a dying star will be made available to both research institutions and interested astronomers.

In conjunction with platforms such as Google Earth, LSST will build a 3D virtual map of the cosmos, allowing the public to fly through space from the comfort of home. ALLOWING THE PUBLIC is the operative phrase. For the very first time, the public will have access to information about the cosmos as it is gathered. LSST educational materials will clearly specify national and state science, math and technology standards that are met by each activity. The materials will enhance 21st-century workforce skills, incorporate inquiry and problem solving, and ensure continual assessment embedded in instruction.

THE LOCATION:

The decision to place LSST on Cerro Pachón in Chile was made by an international site selection committee based on a competitive process.  In short, modern telescopes are located in sparsely populated areas (to avoid light pollution), at high altitudes and in dry climates (to avoid cloud cover). In addition to those physical concerns, there are infrastructure issues. The ten best candidate sites in both hemispheres were studied by the site selection committee. Cerro Pachón was the overall winner in terms of quality of the site for astronomical imaging and available infrastructure. The result will be superb deep images from the ultraviolet to near infrared over the vast panorama of the entire southern sky.

The location is shown in the following image:

Construction Site

The actual site location, as you can see below, is a very rugged outcropping of rock now used by farmers needing food for their sheep.

The Mountain Location

The Observatory will be located about 500 km (310 miles) north of Santiago, Chile, and about 52 km (32 miles) from La Serena, or 80 km (50 miles) by road, at an altitude of 2,200 meters (7,218 feet). It lies on a 34,491-hectare (85,227-acre) site known as "Estancia El Tortoral," which was purchased by AURA on the open market in 1967 for use as an astronomical observatory.

When purchased, the land supported a number of subsistence farmers and goat herders. They were allowed to continue to live on the reserve after it was purchased by AURA and have gradually been leaving voluntarily for more lucrative jobs in the nearby towns.

As a result of departure of most of its human inhabitants and a policy combining environmental protection with “benign neglect” on the part of the Observatory, the property sees little human activity except for the roads and relatively small areas on the tops of Cerro Tololo and Cerro Pachon. As a result, much of the reserve is gradually returning to its natural state. Many native species of plants and animals, long thought in danger of extinction, are now returning. The last half of the trip to Tololo is an excellent opportunity to see a reasonably intact Chilean desert ecosystem.

THE FACILITY:

LSST construction is underway, with the NSF funding authorized as of 1 August 2014.

Early development was funded by a number of small grants, with major contributions in January 2008 by software billionaire Charles Simonyi and Bill Gates of $20 million and $10 million, respectively. $7.5 million is included in the U.S. President's FY2013 NSF budget request. The Department of Energy is expected to fund construction of the digital camera component by the SLAC National Accelerator Laboratory, as part of its mission to understand dark energy.

Construction of the primary mirror at the University of Arizona‘s Steward Observatory Mirror Lab, the most critical and time-consuming part of a large telescope’s construction, is almost complete. Construction of the mold began in November 2007, mirror casting was begun in March 2008, and the mirror blank was declared “perfect” at the beginning of September 2008.  In January 2011, both M1 and M3 figures had completed generation and fine grinding, and polishing had begun on M3.

As of December 2014, the primary mirror is complete and awaiting final approval, and the mirror transport box is ready to receive it for storage until it is shipped to Chile.

The secondary mirror was manufactured by Corning of ultra low expansion glass and coarse-ground to within 40 μm of the desired shape. In November 2009, the blank was shipped to Harvard University for storage until funding to complete it was available. On October 21, 2014, the secondary mirror blank was delivered from Harvard to Exelis for fine grinding.

Site excavation began in earnest March 8, 2011, and the site had been leveled by the end of 2011. Also during that time, the design continued to evolve, with significant improvements to the mirror support system, stray-light baffles, wind screen, and calibration screen.

In November 2014, the LSST camera project, which is separately funded by the United States Department of Energy, passed its "critical decision 2" design review and is progressing toward full funding.

When completed, the facility will look as shown below, with the mirror mounted as illustrated in the second image:


Artist Rendition of Building(2)

 

Telescope Relative to Building

MIRROR DESIGN:

The assembled mirror structure is given below.

Telescope

In the LSST optical design, the primary (M1) and tertiary (M3) mirrors form a continuous surface without any vertical discontinuities. Because the two surfaces have different radii of curvature, a slight cusp is formed where the two surfaces meet, as seen in the figure below. This design makes it possible to fabricate both the primary and tertiary mirrors from a single monolithic substrate. We refer to this option as the M1-M3 monolith.

MIRROR MONOLITH

After a feasibility review was held on 23 June 2005, the LSST project team adopted the monolithic approach to fabricating the M1 and M3 surfaces as its baseline. In collaboration with the University of Arizona and the Steward Observatory Mirror Lab (SOML), construction has begun with detailed engineering of the mirror blank and the testing procedures for the M1-M3 monolith. The M1-M3 monolith blank will be formed from Ohara E6 low-expansion glass using the spin casting process developed at SOML.

At 3.42 meters in diameter, the LSST secondary mirror will be the largest convex mirror ever made. The mirror is aspheric, with approximately 17 microns of departure from the best-fit sphere. The design uses a 100 mm thick solid meniscus blank made of a low-expansion glass (e.g. ULE or Zerodur) similar to the glasses used by the SOAR and Discovery Channel telescopes. The mirror is actively supported by 102 axial and 6 tangent actuators. The alignment of the secondary to the M1-M3 monolith is accomplished by the 6 hexapod actuators between the mirror cell and support structure. The large conical baffle is necessary to prevent the direct reflection of starlight from the tertiary mirror into the science camera.

SUMMARY:

The truth is out there and projects such as the one described in this post AND the Large Hadron Collider at CERN certainly prove some people and institutions are not at all reluctant to search for that truth, the ultimate purpose being to discover where we come from.  Are we truly made from “star stuff”?

 

WE’VE COME A LONG WAY

January 24, 2015


Two days ago I had the need to refresh my memory concerning the Second Law of Thermodynamics. Most of the work I do involves designing work cells to automate manufacturing processes, but one client asked me to take a look at a problem involving thermodynamic and heat transfer processes. The statement of the second law is as follows:

“It is impossible to extract an amount of heat “Qh” from a hot reservoir and use it all to do work “W”.  Some amount of heat “Qc” must be exhausted to a cold reservoir.”

Another way to say this is:

“It is not possible for heat to flow from a cooler body to a warmer body without any work being done to accomplish this flow.”
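To put numbers to those two statements, here is a minimal sketch of the heat-engine bookkeeping they imply; the reservoir temperatures and the heat input are chosen purely for illustration.

```python
# Numeric illustration of the second-law bookkeeping quoted above.
# A heat engine takes Qh from the hot reservoir, does work W, and must
# reject Qc = Qh - W to the cold reservoir; efficiency can never exceed
# the Carnot limit 1 - Tc/Th. Values below are illustrative.

T_hot, T_cold = 600.0, 300.0   # reservoir temperatures, kelvin
Q_hot = 1000.0                 # heat drawn from the hot reservoir, joules

carnot_limit = 1.0 - T_cold / T_hot          # 0.5 for these temperatures
W_max = carnot_limit * Q_hot                 # best possible work: 500 J
Q_cold_min = Q_hot - W_max                   # at least 500 J must be exhausted

print(f"Carnot limit: {carnot_limit:.0%}")
print(f"Max work: {W_max:.0f} J, minimum exhaust heat: {Q_cold_min:.0f} J")
```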

That refresher took about fifteen (15) minutes, but it made me realize just how far we have come relative to teaching and presenting subjects involving technology; i.e., STEM (Science, Technology, Engineering and Mathematics) related information. Theory does not change. Those giants upon whose shoulders we stand paved the way and set the course for discovery and advancement in so many technical disciplines, but one device has revolutionized teaching methods—the modern-day computer with accompanying software.

I would like to stay with thermodynamics to illustrate a point. At the university I attended, we were required to take two semesters of heat transfer and two semesters of thermodynamics. Both subjects were supposed to be taken during the sophomore year, and both were offered in the department of mechanical engineering. These courses were "busters" for many ME majors. More than once they were the determining factor in the decision as to whether to stay in engineering or try another field of endeavor. The book was "Thermodynamics" by Gordon van Wylen, copyright 1959. My sophomore year was 1962, so it was well before computers were used at the university level. I remember poring over the steam tables, looking at saturation temperatures and saturation pressures, trying to find specific volume, enthalpy, entropy and internal energy information. It seemed as though interpolation was always necessary. Have you ever tried negotiating a Mollier chart to pick off needed data? WARNING: YOU CAN GO BLIND TRYING. Psychrometric charts presented the very same problem. I remember one homework project in which we were expected to design a cooling tower for a commercial heating and air conditioning system. All of the pertinent specifications were given, as well as the cooling necessary for transmission into the facility. It was drudgery, and even though it was so long ago, I remember the "all-nighter" I pulled trying to get the final design on paper. Today, this information is readily available through software, obviously saving hours of time and greatly improving productivity. I will say this: by the time these two courses were taken, you did understand the basic principles and associated theory of heat systems.
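The interpolation chore described above is exactly the kind of thing software now does for us. A minimal sketch of linear interpolation between two saturation-table rows; the two table entries are rounded, approximate values for water, and real tables use much finer spacing precisely because the properties are nonlinear.

```python
# The steam-table interpolation chore, automated. Linear interpolation
# between two saturation-table rows; entries are rounded, approximate
# values for water, and real tables use much finer temperature spacing.

def interp(x, x0, x1, y0, y1):
    """Linearly interpolate y at x between table points (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Approximate saturation pressures for water (kPa).
t0, p0 = 100.0, 101.3
t1, p1 = 120.0, 198.5

print(f"{interp(110.0, t0, t1, p0, p1):.1f} kPa")  # ~149.9 kPa, linear estimate
```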

Remember conversion tables? One of the programs most used by working engineers is found by accessing "onlineconversions.com". This web site provides conversions between differing measurement systems for length, temperature, weight, area, density and power, and even has oddball classifications such as "fun stuff" and miscellaneous. Fun stuff is truly interesting: the Chinese Zodiac, pig Latin, Morse code, dog years—all subheadings, and many, many more. All possible without an exhaustive search through page after page of printed documentation. All you have to do is log on.

The business courses I took (yes, we were required to take several non-technical courses) were just as laborious. We constructed spreadsheets, and elaborate ones at that, for cost accounting and finance; all of that is accomplished today with MS Excel. One great feature of MS Excel is the Σ, or sum, feature. When you have fifty (50) or more line items and it's 2:30 in the morning and all you want to do is wrap things up and go to bed, this becomes a godsend.

I cannot imagine where we will be in twenty (20) years relative to improvements in technology. I just hope I’m around to see them.
