EMBRAER

March 27, 2015


You know Dasher and Dancer and Prancer and Vixen, Gulfstream and Piper and Beechcraft and Cessna; but do you recall the least-known aircraft of all?  OK, so I’m not a poet or songwriter.  Have you ever heard of an aircraft manufacturer called EMBRAER?  Do you recognize their logotype?

LOGO

Well, I’ll bet you have flown on one of their aircraft.

HISTORY:

Embraer S.A. is a Brazilian aerospace conglomerate that produces commercial, military, executive and agricultural aircraft.  The company also provides corporate and private aeronautical services. It is headquartered in São José dos Campos in the State of São Paulo.

On August 19, 1969, Embraer (Empresa Brasileira de Aeronáutica S.A.) was created. With the support of the Brazilian government, the company turned science and technology into engineering and industrial capacity. The Brazilian government had been seeking a domestic aircraft manufacturer, making several investment attempts during the 1940s and ’50s to fulfill this need. Its first president, Ozires Silva, was appointed by the Brazilian government to run the company. EMBRAER initially produced one turboprop passenger aircraft, the EMB 110 Bandeirante, a project organized and executed by Ozires Silva. The first series-production EMB 110 Bandeirante made its maiden flight on August 9, 1972. On the 19th of that same month, a public ceremony was held at the Embraer headquarters, attended by officials, employees and journalists from not only Brazil but several countries in South America. That aircraft is shown in the image below.

40 Years Ago

By the end of the ‘70s, the development of new products, such as the EMB 312 Tucano and the EMB 120 Brasilia, followed by the AMX program in cooperation with the Aeritalia (currently Alenia) and Aermacchi companies, allowed Embraer to reach a new technological and industrial level.  At exactly 8:44 AM, on April 8, 1982, the twin-engine EMB 121 Xingus PP-ZXA and PP-ZXB took off from São José dos Campos, piloted by Brasílico Freire Netto, Carlos Arlindo Rondom, Paulo César Schuler Remido and Luiz Carlos Miguez Urbano, en route to France. They were the first two aircraft of a total of forty-one (41) ordered by the French government for use in training military pilots from the Air Force (Armée de l’Air) and Naval Aviation (Aéronavale). The aircraft were delivered to the French authorities on April 16, at Le Bourget Airport.  That aircraft may be seen as follows:

Commissioned by the French

The EMB 120 Brasilia aircraft became an important milestone in the history of Embraer. Developed as a response to the evolving demands of the regional air transport industry, its design took advantage of the most advanced technologies available at the time. It was the fastest, lightest and most economical airplane in its category.  Most of the EMB 120s were sold in the United States and other destinations in the Western Hemisphere. Some European airlines such as Régional in France, Atlant-Soyuz Airlines in Russia, DAT in Belgium, and DLT in Germany also purchased EMB-120s. Serial production ended in 2001. As of 2007, it is still available for one-off orders, as it shares much of the production equipment with the ERJ-145 family, which is still being produced. The Angolan Air Force, for example, received a new EMB 120 in 2007.  If you’ve done much flying at all you probably have flown on the EMB 120. SkyWest Airlines operates the largest fleet of EMB 120s under the United Express and Delta Connection brands. Great Lakes Airlines operates six EMB 120s in its fleet, and Ameriflight flies eight as freighters.  This configuration has been a real short-haul workhorse. Another, and possibly better, look follows:

Air Moldova

COMMERCIAL LONG-HAUL:

Another workhorse is the EMBRAER 195.  That aircraft may be seen below.  It costs approximately $40 million, roughly the price of the average narrow-body passenger jet, and seats 108 passengers in a typical layout, 8 more than the average narrow-body passenger plane. The maximum seating capacity is 122 passengers in an all-economy class configuration.  The 195 uses roughly $11.64 worth of fuel per nautical mile flown (assuming $6 per gallon of jet fuel).  On a per-seat basis, this translates to being 7.3% more cost-efficient than the average aircraft.
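For readers who like to check the arithmetic, here is a minimal Python sketch of how those fuel figures relate. The $6-per-gallon price, the $11.64-per-nautical-mile figure and the 108-seat layout are simply the numbers quoted above, not Embraer data.

# Rough fuel-cost arithmetic for the EMBRAER 195 figures quoted above.
# Assumptions (from the text): $6.00 per gallon of jet fuel, $11.64 of fuel
# per nautical mile, and a typical layout of 108 seats.

FUEL_PRICE_PER_GALLON = 6.00   # USD, assumed fuel price from the text
FUEL_COST_PER_NM = 11.64       # USD per nautical mile, from the text
TYPICAL_SEATS = 108            # typical layout quoted above

gallons_per_nm = FUEL_COST_PER_NM / FUEL_PRICE_PER_GALLON
cost_per_seat_nm = FUEL_COST_PER_NM / TYPICAL_SEATS

print(f"Implied fuel burn: {gallons_per_nm:.2f} gal per nautical mile")
print(f"Fuel cost per seat-nautical-mile: ${cost_per_seat_nm:.3f}")

Run as-is, this shows an implied burn of roughly 1.9 gallons per nautical mile and about eleven cents of fuel per seat-nautical-mile.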

A maximum range of 2,200 nautical miles (equal to 2,530 miles) makes this aircraft most appropriate for long domestic flights, or very short international flights.   With a service ceiling (max cruise altitude) of 41,000 feet, it is just slightly higher than the norm for this type of aircraft and can certainly get above most weather patterns along the flight route.

EMBRAER 195

BUSINESS JET:

The Embraer EMB-505 Phenom 300 is a light jet developed by Embraer that can carry eight (8) or nine (9) occupants.  It has a flying range of 1,971 nmi (3,650 km) and was priced between US$5 million and US$8 million in 2012.

At 45,000 feet (14,000 m), the Phenom 300 is pressurized to a cabin altitude of 6,600 feet (2,000 m). The jet features single-point refueling and an externally serviced private rear lavatory, refreshment center and baggage area. It received FAA Type Certification on 14 December 2009 as the Embraer EMB-505.

On 29 December 2009 Embraer delivered the first Phenom 300 to Executive Flight Services at the company’s headquarters at São José dos Campos, Brazil.  In just four years, the Phenom 300 climbed to the top position on the list of most-delivered business jets, with 60 units delivered in 2013. The Phenom 300 is the fastest seller in NetJets’ inventory, which counts thirty-six (36).  It is a beautiful aircraft, with the ten (10) most recent deliveries totaling $90 million.

BUSINESS

MILITARY ISSUE:

Embraer has started work on modernizing a second batch of Northrop F-5E fighters and F-model trainers for the Brazilian air force.

Three aircraft from a total of 11 are already being worked on at the company’s facilities in Gavião Peixoto, Brazil, with deliveries expected to start later this year. Embraer says it completed the delivery of a first batch of 46 modified F-5EM/FMs in 2012.  That aircraft is shown below.

Fighter

Both the modernized F-5M and AMX are being upgraded to a common avionics configuration. “What we are doing in Brazil is basically a commonality between the Super Tucano, F-5 and the AMX so that the pilots would not have many problems for transition,” Embraer says. “You also reduce costs and assist in training.”

The AMX and F-5 fleets are also receiving Elbit Systems-built radars, in addition to upgraded electronic warfare equipment, in-flight refueling systems and other improvements.

Meanwhile, the Brazilian navy is also upgrading its small fleet of 12 Douglas A-4 Skyhawk carrier-based light strike aircraft. At least one of the Skyhawks is currently being modernized at Gavião Peixoto, but Embraer could not immediately offer any details.

Alongside the modernization work for the Brazilian military, the factory at Gavião Peixoto is at work building a number of Super Tucanos for export customers in Angola and Indonesia.

Brazil has previously increased defense spending to prepare for hosting the 2014 FIFA World Cup and the 2016 Olympic Games.

There is also a growing realization in the country that it will have to work diligently in the future to protect its vast natural resources. This could unfortunately require military preparedness.

Another example of Embraer’s military ability may be seen from the following aircraft:

Heavy Duty Cargo Aircraft

The Embraer KC-390 is a medium-size, twin-engine jet-powered military transport aircraft now under development.  It is able to perform aerial refueling and to transport cargo and troops and will be the heaviest aircraft the company has in its inventory.  It will be able to transport up to 21 metric tons (23 short tons) of cargo, including wheeled armored fighting vehicles.

AGRICULTURAL:

The Ipanema is the market leader, with 50 years of continuous production and over 1,300 units sold, representing about 75% of the nation’s fleet in this segment.  It leads the Brazilian agricultural aviation market with roughly a 60% share.  Decades of continuous production and constant research have steadily improved the aircraft, with that effort always focused on the needs of customers and the national agricultural market.  The brand demonstrates the reliability, solidity and tradition of Ipanema.  One other fact: the Ipanema is the first aircraft certified to fly powered solely by ethanol.  In addition to the economic advantages and the resulting improvement in engine performance, ethanol is a renewable source of energy, which helps protect the environment.

Agricultural

CONCLUSION:

As you can see, the United States aircraft manufacturers do have competition and excellent competition at that.    This foreign entry keeps us on our toes.

BOEING 777

March 22, 2015


This post used the following references as resources: 1.) Aviation Week and 2.) The Boeing Company web site, for the 777 aircraft configurations and the history of the company.

I don’t think there is much doubt that The Boeing Company is and has been the foremost company in the world when it comes to building commercial aircraft. The history of aviation, specifically commercial aviation, would NOT be complete without Boeing being in the picture. There have been five (5) companies that figured prominently in aviation history relative to the United States. Let’s take a look.

THE COMPANIES:

During the last one hundred (100) years, humans have gone from walking on Earth to walking on the moon. They went from riding horses to flying jet airplanes. With each decade, aviation technology crossed another frontier, and, with each crossing, the world changed.

During the 20th century, five companies charted the course of aerospace history in the United States. They were the Boeing Airplane Co., Douglas Aircraft Co., McDonnell Aircraft Corp., North American Aviation and Hughes Aircraft. By the dawning of the new millennium, they had joined forces to share a legacy of victory and discovery, cooperation and competition, high adventure and hard struggle.

Their stories began with five men who shared the vision that gave tangible wings to the eternal dream of flight. William Edward Boeing, born in 1881 in Detroit, Mich., began building floatplanes near Seattle, Wash. Donald Wills Douglas, born in 1892 in New York, began building bombers and passenger transports in Santa Monica, Calif. James Smith McDonnell, born in 1899 in Denver, Colo., began building jet fighters in St. Louis, Mo. James Howard “Dutch” Kindelberger, born in 1895 in Wheeling, W.Va., began building trainers in Los Angeles, Calif. Howard Hughes Jr. was born in Houston, Texas, in 1905. The Hughes Space and Communications Co. built the world’s first geosynchronous communications satellite in 1963.

These companies began their journey across the frontiers of aerospace at different times and under different circumstances. Their paths merged and their contributions are the common heritage of The Boeing Company today.

In 1903, two events launched the history of modern aviation. The Wright brothers made their first flight at Kitty Hawk, N.C., and twenty-two (22) year-old William Boeing left Yale engineering college for the West Coast.

After making his fortune trading forest lands around Grays Harbor, Wash., Boeing moved to Seattle, Wash., in 1908 and, two years later, went to Los Angeles, Calif., for the first American air meet. Boeing tried to get a ride in one of the airplanes, but not one of the dozen aviators participating in the event would oblige. Boeing came back to Seattle disappointed, but determined to learn more about this new science of aviation.

For the next five years, Boeing’s air travel was mostly theoretical, explored during conversations at Seattle’s University Club with George Conrad Westervelt, a Navy engineer who had taken several aeronautics courses from the Massachusetts Institute of Technology.

The two checked out biplane construction and were passengers on an early Curtiss Airplane and Motor Co.-designed biplane that required the pilot and passenger to sit on the wing. Westervelt later wrote that he “could never find any definite answer as to why it held together.” Both were convinced they could build a biplane better than any on the market.

In the autumn of 1915, Boeing returned to California to take flying lessons from another aviation pioneer, Glenn Martin. Before leaving, he asked Westervelt to start designing a new, more practical airplane. Construction of the twin-float seaplane began in Boeing’s boathouse, and they named it the B & W, after their initials. THIS WAS THE BEGINNING.  Boeing has since developed a position in global markets unparalleled by its competition.

This post is specifically concerned with the 777 product and the changes being made to upgrade that product to retain markets and fend off competition such as Airbus. Let’s take a look.

SPECIFICATION FOR THE 777:

In looking at the external physical characteristics, we see the following:

BOEING GENERAL EXTERNAL ARRANGEMENTS

As you can see, this is one BIG aircraft, with a wingspan of approximately 200 feet and a length of 242 feet for the “300” version.  The external dimensions shown cover both passenger and freighter configurations, and both are significantly large airplanes.

Looking at the internal layout for passengers, we see the following:

TYPICAL INTERIOR SEATING ARRANGEMENTS

TECHNICAL CHARACTERISTICS:

If we drill down to the nitty-gritty, we find the following:

TECHNICAL CHARACTERISTICS(1)

TECHNICAL CHARACTERISTICS(2)

As mentioned, the 777 also provides much-needed service for freight haulers the world over.  In looking at payload vs. range, we see the following global “footprint” and long-range capability from Dubai.  I have chosen Dubai, but similar “footprints” may be had from Hong Kong, London, Los Angeles, etc.

FREIGHTER PAYLOAD AND RANGE

Even with these very impressive numbers, Boeing felt an upgrade was necessary to remain competitive with other aircraft manufacturers.

UPGRADES:

Ever careful with its stewardship of the cash-generating 777 program, Boeing is planning a series of upgrades to ensure the aircraft remains competitive in the long-range market well after the 777X derivative enters service.

The plan, initially revealed this past January, was presented in detail by the company for the first time on March 9 at the International Society of Transport Air Trading meeting in Arizona. Aimed at providing the equivalent of two percent (2%) fuel-burn savings in baseline performance, the rolling upgrade effort will also include a series of optional product improvements to increase capacity by up to fourteen (14) seats that will push the total potential fuel-burn savings on a per-seat basis to as much as five percent (5%) over the current 777-300ER by late 2016.

At least 0.5% of the overall specific fuel-burn savings will be gained from an improvement package to the aircraft’s GE90-115B engine, the first elements of which General Electric will test later this year.  The bulk of the savings will come from broad changes to reduce aerodynamic drag and structural weight. Additional optional improvements to the cabin will also provide operators with more seating capacity and upgraded features that would offer various levels of extra savings on a per-seat basis, depending on specific configurations and layouts.  The digital below will highlight the improvements announced.

UPGRADES FOR 777

“We are making improvements to the fuel-burn performance and the payload/range and, at same time, adding features and functionality to allow the airlines to continue to keep the aircraft fresh in their fleets,” says 777 Chief Project Engineer and Vice President Larry Schneider. The upgrades, many of which will be retro-fittable, come as Boeing continues to pursue new sales of the current-generation twin to help maintain the 8.3-per-month production rate until the transition to the 777X at the end of the decade. Robert Stallard, an analyst at RBS Europe, notes that Boeing has a firm backlog of 273 777-300s and 777Fs, which equates to around 2.7 years of current production. “We calculate that Boeing needs to get 272 new orders for the 777 to bridge the current gap and then transition production phase on the 777X,” he says.

The upgrades will also boost existing fleets, Boeing says. “Our 777s are operated by the world’s premier airlines and now we are seeing the Chinese carriers moving from 747 fleets to big twins,” says Schneider. “There are huge 777 fleets in Europe and the Middle East, as well as the U.S., so enabling [operators] to be able to keep those up to date and competitive in the market—even though some of them are 15 years old—is a big element of this.”

Initial parts of the upgrade are already being introduced and, in the tradition of the continuous improvements made to the family since it entered service, will be rolled into the aircraft between now and the third quarter of 2016. “There is not a single block point in 2016 where one aircraft will have everything on it. It is going to be a continuous spin-out of those capabilities,” Schneider says. Fuel-burn improvements to both the 777-200LR and -300ER were introduced early in the service life of both derivatives, and the family has also received several upgrades to the interior, avionics and maintenance features over the last decade.

The overall structural weight of the 777-300ER will be reduced by 1,200 lb. “When the -300ER started service in 2004 it was 1,800 lb. heavier, so we have seen a nice healthy improvement in weight,” he adds. The reductions have been derived from production-line improvements being introduced as part of the move to the automated drilling and riveting process for the fuselage, which Boeing expects will cut assembly flow time by almost half. The manufacturer is adopting the fuselage automated upright build (FAUB) process as part of moves to streamline production ahead of the start of assembly of the first 777-9X in 2017.

One significant assembly change is a redesign of the fuselage crown, which follows the simplified approach taken with the 787. “All the systems go through the crown, which historically is designed around a fore and aft lattice system that is quite heavy. This was designed with capability for growth, but that was not needed from a systems standpoint. So we are going to a system of tie rods and composite integration panels, like the 787. The combination has taken out hundreds of pounds and is a significant improvement for workers on the line who install it as an integrated assembly,” Schneider says. Other reductions will come from a shift to a lower weight, less dense form of cabin insulation and adoption of a lower density hydraulic fluid.

Boeing has also decided to remove the tail skid from the 777-300ER as a weight and drag reduction improvement after developing new flight control software to protect the tail during abused takeoffs and landings. “We redesigned the flight control system to enable pilots to fly like normal and give them full elevator authority, so they can control the tail down to the ground without touching it. The system precludes the aircraft from contacting the tail,” Schneider says. Although Boeing originally developed the baseline electronic tail skid feature to prevent this from occurring on the -300ER, the “old system allowed contact, and to be able to handle those loads we had a lot of structure in the airplane to transfer them through the tailskid up through the aft body into the fuselage,” he adds. “So there are hundreds of pounds in the structure, and to be able to take all that out with the enhanced tail strike-protection system is a nice improvement.”

Boeing is also reducing the drag of the 777 by making a series of aerodynamic changes to the wing based on design work conducted for the 787 and, perhaps surprisingly, the long-canceled McDonnell Douglas MD-12. The most visible change, which sharp-eyed observers will also be able to spot from below the aircraft, is a 787-inspired inboard flap fairing redesign.

“We are using some of the technology we developed on the 787 to use the fairing to influence the pressure distribution on the lower wing. In the old days, aerodynamicists were thrilled if you could put a fairing on an airplane for just the penalty of the skin friction drag. On the 787, we spent a lot of time working on the contribution of the flap fairing shape and camber to control the pressures on the lower wing surface.”

Although Schneider admits that the process was a little easier with the 787’s all-new wing, Boeing “went back and took a look at the 777 and we found a nice healthy improvement,” he says. The resulting fairing will be longer and wider, and although the larger wetted area will increase skin friction, the overall benefits associated with the optimized lift distribution over the whole wing will more than compensate. “It’s a little counterintuitive,” says Schneider, adding that wind-tunnel test results of the new shape showed close correlation with benefits predicted by computational fluid dynamics (CFD) analysis using the latest boundary layer capabilities and Navier-Stokes codes.

Having altered the pressure distribution along the underside of the wing, Boeing is matching the change on the upper surface by reaching back to technology developed for the MD-12 in the 1990s. The aircraft’s outboard raked wingtip, a feature added to increase span with the development of the longer-range variants, will be modified with a divergent trailing edge. “Today it has very low camber, and by using some Douglas Aircraft technology from the MD-12 we get a poor man’s version of a supercritical airfoil,” says Schneider. The tweak will increase lift at the outboard wing, making span loading more elliptical and reducing induced drag.

Boeing has been conducting loads analysis on the 777 wing to “make sure we understand where all those loads will go,” he says. A related loads analysis to evaluate whether the revisions could also be incorporated into a potential retrofit kit will be completed this month. “When we figure out at which line number those two changes will come together (as they must be introduced simultaneously by necessity), we will do a single flight to ensure we don’t have any buffet issues from the change in lift distribution. That’s our certification plan,” Schneider says.

A third change to the wing will focus on reducing the base drag of the leading-edge slat by introducing a version with a sharper trailing edge. “The trailing-edge step has a bit of drag associated with it, so we will be making it sharper and smoothing the profile,” he explains. The revised part will be made thinner and introduced around mid-2016. Further drag reductions will be made by extending the seals around the inboard end of the elevator to reduce leakage and by making the passenger windows thicker to ensure they are fully flush with the fuselage surface. The latter change will be introduced in early 2016.

In another change adopted from the 787, Boeing also plans to alter the 777 elevator trim bias. The software-controlled change will move the elevator trailing edge position in cruise by up to 2 deg., inducing increased inverse camber. This will increase the download, reducing the overall trim drag and improving long-range cruise efficiency.

The package of changes means that range will be increased by 100nm or, alternatively, an additional 5,000 lb. of payload can be carried. Some of this extra capacity could be utilized by changes in the cabin that will free up space for another fourteen (14) seats. These will include a revised seat track arrangement in the aft of the cabin to enable additional seats where the fuselage tapers. Some of the extra seating, which will increase overall seat count by three percent (3%), could feature the option of arm rests integrated into the cabin wall. Schneider says the added seats, on top of the baseline  two percent (2%) fuel-burn improvement, will improve total operating efficiency by five percent (5%) on a block fuel per-seat basis.
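As a quick sanity check on that last claim, here is a small Python sketch showing how a roughly 2% block-fuel reduction and roughly 3% more seats compound to about a 5% improvement per seat. The percentages are the ones quoted above; the arithmetic is only illustrative, not Boeing's own calculation.

# Back-of-the-envelope check of the per-seat efficiency claim above:
# a ~2% block-fuel reduction combined with ~3% more seats compounds to
# roughly a 5% reduction in fuel burned per seat.

fuel_burn_reduction = 0.02   # 2% lower block fuel (baseline improvement)
seat_increase = 0.03         # 3% more seats (up to 14 extra)

relative_fuel_per_seat = (1 - fuel_burn_reduction) / (1 + seat_increase)
improvement = 1 - relative_fuel_per_seat

print(f"Fuel per seat falls by about {improvement:.1%}")  # ~4.9%, i.e. roughly 5%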

Other cabin change options will include repackaged Jamco-developed lavatory units that provide the same internal space as today’s units but are eight (8) inches narrower externally. The redesign includes the option of a foldable wall between two modules, providing access for a disabled passenger and an assistant. Boeing is also developing noise-damping modifications to reduce cabin sound by up to 2.5 dB, full cabin-length LED lighting and a 787-style entryway around Door 2. Boeing is also preparing to offer a factory-fitted option for electrically controlled window shades, similar to the 777 system developed as an aftermarket modification by British Airways.

CONCLUSIONS:

As you can see, the 777 is preparing to continue service for decades ahead by virtue of the modifications and improvements shown above.

As always, I welcome your comments.


The following post uses as references:  Bloomberg Business, National Council on Higher Education, The Business Insider, and The College Board.

May 7 (Bloomberg) — A group of bankers that advises the Federal Reserve’s Board of Governors has warned that farmland prices are inflating “a bubble” and growth in student-loan debt has “parallels to the housing crisis.”  “Recent growth in student-loan debt, to nearly $1 trillion, now exceeds credit-card outstandings and has parallels to the housing crisis,” the council said at its Feb. 3, 2012, meeting. The trend has continued, with the Consumer Financial Protection Bureau saying in March 2012 that student debt had topped a record $1 trillion.

I was extremely surprised when first reading this statement published in Bloomberg Business.  That surprise lasted about ten seconds.  My wife and I put three boys through college; Mercer University, Tulane University and the University of Georgia.  Even though they worked and had scholarships, the cost of a university education, even ten years ago, was daunting to a working engineer and his working wife.  I can categorically state the cost of tuition for our three increased between three (3) and ten (10) percent each year depending upon the school.  Have you purchased textbooks lately?  Our youngest son had a book bill approaching $600.00 one semester. He was an undergraduate.  Absolutely ridiculous.  Of course this is not to mention lab fees, parking permits, mandated university health insurance and a host of other requirements the universities levied upon students and their parents. The chart below will indicate the increases by year.  As you can see, these numbers are for public colleges.

TUITION INFLATION

The next chart will indicate tuition and total costs by region for two and four year colleges both public and private.

TUITION AND TOTAL COSTS

Seven in ten (10) seniors (69%) who graduated from public and nonprofit colleges in 2013 had student loan debt, with an average of $28,400 per borrower. This represents a two percent increase from the average debt of 2012 public and nonprofit graduates.  The map below indicates graphically the problem by region.

STUDENT LOAN BY STATE AND REGION

The twenty (20) high-debt public colleges had individual average debt levels ranging from $33,950 to $48,850, while the twenty (20) high-debt nonprofit colleges ranged from $41,750 to $71,350. Of the twenty (20) low-debt colleges listed, nine (9) were public and eleven (11) were nonprofit schools, with reported average debt levels ranging between $2,250 and $11,200.

Let’s now congratulate the class of 2014. You now “enjoy” being the class with the most individual student debt in history.  This comes at a time when job opportunities are at a minimum.

THE CLASS OF 2014

From the experience my wife and I had with our three boys, I’m not surprised at the following chart.  As you can see, those who wish to obtain a college degree are sometimes forced to secure loans due to the extremely high tuition, book and living expenses. In looking at the graph below, we see that number approaching seventy percent (70%).

MORE STUDENTS TAKING ON DEBT

The next one is really scary.  Take a look.

YOUNG PEOPLE AND WHAT THEY OWE VS WHAT THEY MAKE

Student debt is up approximately thirty-five percent (35%), while earned income is down five percent (5%), from 2009.

One individual in business has recognized the gravity of this issue—Mr. Mark Cuban.

Mark Cuban states:

“It’s inevitable at some point there will be a cap on student loan guarantees. And when that happens you’re going to see a repeat of what we saw in the housing market: when easy credit for buying or flipping a house disappeared we saw a collapse in the price of housing, and we’re going to see that same collapse in the price of student tuition, and that’s going to lead to colleges going out of business.”

I honestly believe Mr. Cuban is correct.  Either our economy improves, with significant increases in individual earning power, or mounting student debt will create a situation where smaller, less prestigious colleges, and even universities, will have to close.  The drop in enrollment will be significant.  We have already experienced that in our town, with two four-year colleges closing.

OK, the big question.  With the economy being in “the tank,” is a four-year college degree worth it?  Would it be better, and less stressful, to look at the “trades”?

  • Plumber. The median salary for a plumber was $50,180 in 2013, the BLS reports. The best-paid pulled in about $86,120, while those in the bottom 10 percent earned $29,590 a year.
  • Electrician Salary: $55,783 (average).
  • Average Machinist Salary: $37,000.
  • Auto Mechanic. The median annual salary for mechanic and automotive technicians was $36,710 in 2013. The highest earners in the field made about $61,210, while the lowest-paid took home $20,920.
  • CAD Technician Salary: $47,966 (average)

Please don’t misunderstand: I have a four-year degree in Engineering and love the profession.  The university experience is wonderful and extremely rewarding, but maybe learning a trade and going to night school to obtain that four-year degree is not such a bad idea after all, even if it does mean an eight- or ten-year journey.  If there is one thing I have learned in my seventy-two years, it is that we have time. YES, there is time to do what you wish to do.  You have to develop a plan, set realistic goals, stay focused and DO NOT GIVE UP.

I welcome your comments.

FACIAL RECOGNITION

March 6, 2015


THE TECHNOLOGY:

Humans have always had the innate ability to recognize and distinguish between faces, yet computers have only recently shown the same ability, and that ability results from proper software installed on PCs with memory adequate to handle the mapping process.

In the mid-1960s, scientists began working to use computers to recognize human faces.  This certainly was not easy at first. Facial recognition software and hardware have come a long way since those fledgling early days and definitely involve mathematical algorithms.

ALGORITHMS:

An algorithm is defined by Merriam-Webster as follows:

“a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation; broadly :  a step-by-step procedure for solving a problem or accomplishing some end especially by a computer.”

Some facial recognition algorithms identify facial features by extracting landmarks, or features, from an image of the subject’s face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features. Other algorithms normalize a gallery of face images and then compress the face data, only saving the data in the image that is useful for face recognition. A probe image is then compared with the face data. One of the earliest successful systems is based on template matching techniques applied to a set of salient facial features, providing a sort of compressed face representation.

Recognition algorithms can be divided into two main approaches: geometric, which looks at distinguishing features, and photometric, which is a statistical approach that distills an image into values and compares those values with templates to eliminate variances.

Every face has numerous, distinguishable landmarks, the different peaks and valleys that make up facial features. These landmarks are defined as nodal points. Each human face has approximately 80 nodal points. Some of these measured by the software are:

  • Distance between the eyes
  • Width of the nose
  • Depth of the eye sockets
  • The shape of the cheekbones
  • The length of the jaw line

These nodal points are measured, creating a numerical code, called a face-print, that represents the face in the database.
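A minimal Python sketch of that idea is shown below. It assumes landmark coordinates have already been extracted by some detector; the particular landmarks, the normalization by eye spacing and the matching threshold are illustrative assumptions, not any specific vendor’s algorithm.

import math

# Sketch of the "face-print" idea described above: measure a few nodal-point
# distances from detected landmarks, normalize them, and compare two prints
# with a simple distance metric.

def face_print(landmarks):
    """landmarks: dict of (x, y) points, e.g. eyes, nose, cheekbones, chin."""
    def dist(a, b):
        (x1, y1), (x2, y2) = landmarks[a], landmarks[b]
        return math.hypot(x2 - x1, y2 - y1)

    eye_gap = dist("left_eye", "right_eye")       # distance between the eyes
    features = [
        dist("nose_left", "nose_right"),          # width of the nose
        dist("left_cheek", "right_cheek"),        # cheekbone spread
        dist("nose_tip", "chin"),                 # lower-face / jaw length
    ]
    # Normalize by the inter-eye distance so the print is scale-invariant.
    return [f / eye_gap for f in features]

def same_person(print_a, print_b, threshold=0.05):
    """Crude comparison: Euclidean distance between prints under a threshold."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(print_a, print_b)))
    return d < threshold

A production system uses far more nodal points and far more robust matching, but the principle is the same: turn measurements into a numeric code and compare codes rather than raw images.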

In the past, facial recognition software has relied on a 2D image to compare or identify another 2D image from the database. To be effective and accurate, the image captured needed to be of a face that was looking almost directly at the camera, with little variance of light or facial expression from the image in the database. This created quite a problem.

In most instances the images were not taken in a controlled environment. Even the smallest changes in light or orientation could reduce the effectiveness of the system, so they couldn’t be matched to any face in the database, leading to a high rate of failure. In the next section, we will look at ways to correct the problem.

A newly-emerging trend in facial recognition software uses a 3D model, which claims to provide more accuracy. Capturing a real-time 3D image of a person’s facial surface, 3D facial recognition uses distinctive features of the face — where rigid tissue and bone is most apparent, such as the curves of the eye socket, nose and chin — to identify the subject. These areas are all unique and don’t change over time.

Using depth and an axis of measurement that is not affected by lighting, 3D facial recognition can even be used in darkness and has the ability to recognize a subject at different view angles with the potential to recognize up to 90 degrees (a face in profile).

Using the 3D software, the system goes through a series of steps to verify the identity of an individual.

 

The nodal points or recognition points are demonstrated with the following graphic.

POINTS OF RECOGNITION

This is where Machine Vision or MV comes into the picture.  Without MV, facial recognition would not be possible.  An image must first be taken, then that image is digitized and processed.

MACHINE VISION:

Facial recognition is one example of a non-industrial application for machine vision (MV).   The technology is generally considered to be one facet of the biometrics technology suite.  Facial recognition is playing a major role in identifying and apprehending suspected criminals as well as individuals in the process of committing a crime or unwanted activity.  Casinos in Las Vegas are using facial recognition to spot “players” with shady records or even employees complicit with individuals trying to get even with “the house”.   The technology incorporates visible and infrared modalities, face detection, image quality analysis, verification and identification.   Many companies have added cloud-based image-matching technology to their product ranges, providing the ability to apply theory and innovation to challenging problems in the real world.  Facial recognition technology is extremely complex and depends upon many data points relative to the human face.

Facial recognition has a very specific methodology associated with it. You can see from the graphic above that points of recognition are “mapped,” highlighting very specific characteristics of the human face.  Tattoos, scars, feature shapes, etc. all play into identifying an individual.  A grid is constructed of “surface features”; those features are then compared with photographs located in databases or archives.  In this fashion, positive identification can be accomplished. The graphic below indicates the grid developed and used for the mapping process.  Cameras are also shown that receive the image and send it to software used for comparisons.

MAPPING AND CAMERAS USED

One of the most successful cases for the use of this camera technology was the bombing at the 2013 Boston Marathon.   Cameras mounted at various locations around the site of the bombing captured photographs of Tamerlan and Dzhokhar Tsarnaev prior to their backpacks being positioned for the two blasts.  Even though this is not facial recognition in the truest sense of the word, there is no doubt the cameras were instrumental in identifying both criminals.

TAMERLAN AND DZHOKHAR

Dzhokhar Tsarnaev is now the subject of the court case that will determine whether he receives life in prison or death.  There is no doubt, thanks to MV, concerning his guilt or innocence.  He is guilty. Jurors in Boston heard harrowing testimony this week in his trial. Survivors, as well as police and first responders, recounted often-disturbing accounts of their suffering and the suffering of runners and spectators as a result of the attack. Facial recognition was paramount in his identification and ultimate capture.

As always, your comments are very welcome.


For those who might be a little bit unsure as to the definition of machine vision (MV), let’s now define the term as follows:

“ Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance in industry.”

There are non-industrial uses for MV also, such as 1.) law enforcement, 2.) security, 3.) facial recognition and 4.) robotic surgery.  With this being the case, there must be several critical, if not very critical, aspects to the technology that must be considered prior to purchasing an MV system or even discussing MV with a vendor.  We will now take a closer look at those critical factors.

CRITICAL FACTORS:

As with any technology, there are certain elements critical to success. MV is no different.  There are six (6) basic and critical factors for choosing an imaging system.  These are as follows:

  • Resolution–A higher resolution camera will undoubtedly help increase accuracy by yielding a clearer, more precise image for analysis.   The downside to higher resolution is slower speed. The resolution of the image required for an inspection is determined by two factors: 1.) the field of view required and 2.) the minimal dimension that must be resolved by the imaging system. Of course, lenses, lighting, mechanical placement and other factors come into play, but, if we confine our discussion to pixels, we can avoid having to entertain these topics.  This allows us to focus on the camera characteristics. Using an example, if a beverage packaging system requires verification that a case is full prior to sealing, it is necessary for the camera to image the contents from above and verify that twenty-four (24) bottle caps are present. Since the bottles and caps fit within the case, the caps are the smallest feature within the scene that must be resolved. Once the application parameters and smallest features have been determined, the required camera resolution can be roughly defined. It is anticipated that, when the case is imaged, the bottle caps will stand out as light objects within a dark background. With the bottle caps being round, the image will appear as circles bounded by two edges with a span between the edges. The edges are defined as points where the image makes a transition from dark to light or light to dark. The span is the diametrical distance between the edges. At this point, it is necessary to define the number of pixels that will represent each of these points. In this application, it would be sufficient to allow three pixels to define each of the two edges and four pixels to define the span. Therefore, a minimum of ten pixels should be used to define the 25mm bottle cap in the image. From this, we can determine that one pixel will represent 2.5mm of the object itself. Now we can determine the overall camera resolution. Taking 400mm as the horizontal dimension of the field of view, the camera needs a minimum of 400/2.5 = 160 pixels of horizontal resolution. Vertically, the camera needs 250/2.5 = 100 pixels of vertical resolution. Adding a further 10% to each resolution to account for variations in the object location within the field of view will result in the absolute minimum camera resolution. There are pros and cons to increasing resolution, as follows.

Pros and cons of increasing resolution

Digital cameras transmit image data as a series of digital numbers that represent pixel values. A camera with a resolution of 200 x 100 pixels will have a total of 20,000 pixels, and, therefore, 20,000 digital values must be sent to the acquisition system. If the camera is operating at a data rate of 25MHz, it takes 40 nanoseconds to send each value. This results in a total time of approximately .0008 seconds, which equates to 1,250 frames per second. Increasing the camera resolution to 640 x 480 results in a total of 307,200 pixels, which is approximately 15 times greater. Using the same data rate of 25MHz, a total time of 0.012288 seconds, or 81.4 frames per second, is achieved. These values are approximations and actual camera frame rates will be somewhat slower because we have to add exposure and setup times, but it is apparent that an increase in camera resolution will result in a proportional decrease in camera frame rate. While a variety of camera output configurations will enable increased camera resolution without a sacrifice in frame rate, these are accompanied by additional complexity and associated higher costs.
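The sizing and frame-rate arithmetic above is easy to script. The following Python sketch reproduces the bottle-cap example (25mm caps, a roughly 400mm x 250mm field of view, ten pixels per cap) and the frame-rate estimate at a 25MHz pixel clock; like the text, it deliberately ignores exposure and setup times.

# Reproduces the sizing arithmetic from the bottle-cap example and the
# frame-rate discussion above. Values come from the text and are illustrative.

CAP_DIAMETER_MM = 25.0
PIXELS_PER_CAP = 10              # 3 px per edge (x2) + 4 px across the span
FIELD_OF_VIEW_MM = (400.0, 250.0)
MARGIN = 1.10                    # add 10% for variation in part position

mm_per_pixel = CAP_DIAMETER_MM / PIXELS_PER_CAP           # 2.5 mm per pixel
h_pixels = FIELD_OF_VIEW_MM[0] / mm_per_pixel * MARGIN    # ~176 pixels
v_pixels = FIELD_OF_VIEW_MM[1] / mm_per_pixel * MARGIN    # ~110 pixels
print(f"Minimum camera resolution: {h_pixels:.0f} x {v_pixels:.0f} pixels")

# Frame-rate estimate: pixels per frame divided by the pixel clock.
DATA_RATE_HZ = 25e6
for width, height in [(200, 100), (640, 480)]:
    frame_time = width * height / DATA_RATE_HZ
    print(f"{width}x{height}: {frame_time*1e3:.2f} ms per frame, "
          f"about {1/frame_time:.0f} frames per second")

The output matches the figures above: roughly 1,250 frames per second at 200 x 100 and roughly 81 frames per second at 640 x 480, before exposure and readout overhead are considered.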

  • Speed of Exposure—Products rapidly moving down a conveyor line will require much faster exposure speeds from the vision system.  Such applications might be candy or bottled products moving at extremely fast rates. When selecting a digital camera, the speed of the object being imaged must be considered as well.  Objects that do not move during exposure can be handled perfectly well by a relatively simple and inexpensive camera or cameras, with perfectly satisfactory results. Objects moving continuously require other considerations. In other cases, objects may be stationary only for very short periods of time and then move rapidly; if this is the case, inspecting during the stationary period is the most desirable approach.

Stationary or slow-moving objects: Area array cameras are well suited to imaging objects that are stationary or slow moving. Because the entire area array must be exposed at once, any movement during the exposure time will result in a blurring of the image. Motion blurring can, however, be controlled by reducing exposure times or using strobe lights.

Fast-moving objects: When using an area array camera for objects in motion, some consideration must be given to the amount of movement with respect to the exposure time of the camera and the object resolution, defined here as the smallest feature of the object represented by one pixel. A rule of thumb when acquiring images of a moving object is that the exposure must occur in less time than it takes for the object to move beyond one pixel. If you are grabbing images of an object that is moving steadily at 1cm/second and the object resolution is already set at 1 pixel/mm, then the absolute maximum exposure time is 1/10 of a second. There will be some amount of blur when using the maximum exposure time, since the object will have moved by an amount equal to 1 pixel on the camera sensor. In this case, it is preferable to set the exposure time to something faster than the maximum, possibly 1/20 of a second, to keep the object within half a pixel. If the same object moving at 1cm/second has an object resolution of 1 pixel/micrometer, then a maximum exposure of 1/10,000 of a second would be required. How fast the exposure can be set will depend on what is available in the camera and whether you can get enough light on the object to obtain a good image. Additional tricks of the trade can be employed when attempting to obtain short exposure times of moving objects. In cases where a very short exposure time is required from a camera that does not have this capability, an application may make use of shutters or strobed illumination. Cameras that employ multiple outputs can also be considered if an application requires speeds beyond the capabilities of a single-output camera.
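That rule of thumb can be captured in a few lines of Python. The speeds and object resolutions below are the ones used in the example above, and the half-pixel figure is simply the tighter blur budget the text suggests; the helper name is illustrative.

# Sketch of the rule of thumb above: the exposure must finish before the
# object moves more than (a fraction of) one pixel.

def max_exposure_s(object_speed_mm_s, object_resolution_mm_per_px, blur_px=1.0):
    """Longest exposure (seconds) before motion blur exceeds blur_px pixels."""
    return blur_px * object_resolution_mm_per_px / object_speed_mm_s

# 1 cm/s with 1 pixel/mm resolution -> 0.1 s max, 0.05 s to stay within half a pixel
print(max_exposure_s(10.0, 1.0))               # 0.1
print(max_exposure_s(10.0, 1.0, blur_px=0.5))  # 0.05
# 1 cm/s with 1 pixel/micrometer (0.001 mm/px) -> 1/10,000 of a second
print(max_exposure_s(10.0, 0.001))             # 0.0001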

  • Frame Rate–The frame rate of a camera is the number of complete frames a camera can send to an acquisition system within a predefined time period.  This period is usually stated as a specific number of frames per second.  As an example, a camera with a sensor resolution of 640 x 480 is specified with a maximum frame rate of 50 frames per second. Therefore, the camera needs 20 milliseconds to send one frame following an exposure. Some cameras are unable to take a subsequent exposure while the current exposure is being read, so they will require a fixed amount of time between exposures when no imaging takes place. Other types of cameras, however, are capable of reading one image while concurrently taking the next exposure. Therefore, the readout time and method of the camera must be considered when imaging moving objects. Further consideration must be given to the amount of time between frames when exposure may not be possible.
  • Spectral Response and Responsiveness–All digital cameras that employ electronic sensors are sensitive to light energy. The wavelength of light that cameras are sensitive to typically ranges from approximately 400 nanometers to a little beyond 1000 nanometers. There may be instances in imaging when it is desirable to isolate certain wavelengths of light that emanate from an object, and where the characteristics of a camera at the desired wavelength need to be defined. A matching and selection process must be undertaken by application engineers to ensure proper usage of equipment relative to the needs at hand. Filters may be incorporated into the application to tune out the unwanted wavelengths, but it will still be necessary to know how well the camera responds to the desired wavelength. The responsiveness of a camera defines how sensitive the camera is to a fixed amount of exposure, and it can be expressed in LUX or in DN/(nJ/cm^2). “LUX” is a common term among imaging engineers used to define sensitivity in photometric units over the range of visible light, whereas DN/(nJ/cm^2) is a radiometric expression that does not limit the response to visible light. In general, both terms state how the camera will respond to light. The radiometric expression of x DN/(nJ/cm^2) indicates that, for a known exposure of 1 nJ/cm^2, the camera will output pixel data of x DN (digital numbers, also known as grayscale). Gain is another feature available within some cameras that can provide various levels of responsiveness. The responsiveness of a camera should be stated at a defined gain setting. Be aware, however, that a camera may be said to have high responsiveness at a high gain setting, but the increased noise level can lead to reduced dynamic range.
  • Bit Depth–Digital cameras produce digital data, or pixel values. Being digital, this data has a specific number of bits per pixel, known as the pixel bit depth, which typically ranges from 8 to 16 bits. In monochrome cameras, the bit depth defines the quantity of gray levels from dark to light, where a pixel value of 0 is 100% dark and 255 (for 8-bit cameras) is 100% white. Values between 0 and 255 will be shades of gray, where values near 0 are dark gray and values near 255 are almost white. 10-bit data will produce 1024 distinct levels of gray, while 12-bit data will produce 4096 levels. Each application should be considered carefully to determine whether fine or coarse steps in grayscale are necessary. Machine vision systems commonly use 8-bit pixels, and going to 10 or 12 bits instantly doubles data quantity, as another byte is required to transmit the data. This also results in decreased system speed because two bytes per pixel are used, but not all of the bits are significant. Higher bit depths can also increase the complexity of system integration, since they necessitate larger cable sizes, especially if a camera has multiple outputs.
  • Lighting—Perhaps no other aspect of vision system design and implementation has consistently caused more delay, cost overruns, and general consternation than lighting. Historically, lighting was often the last aspect specified, developed, and/or funded, if at all. This approach was not entirely unwarranted, as until recently there was no real vision-specific lighting on the market, meaning lighting solutions typically consisted of standard incandescent or fluorescent consumer products, with various amounts of ambient contribution.  The following lighting sources are now commonly used in machine vision:
  • Fluorescent
  • Quartz Halogen – Fiber Optics
  • LED – Light Emitting Diode
  • Metal Halide (Mercury)
  • Xenon
  • High Pressure Sodium

Fluorescent, quartz-halogen, and LED are by far the most widely used lighting types in machine vision, particularly for small to medium scale inspection stations, whereas metal halide, xenon, and high pressure sodium are more typically used in large scale applications, or in areas requiring a very bright source. Metal halide, also known as mercury, is often used in microscopy because it has many discrete wavelength peaks, which complements the use of filters for fluorescence studies. A xenon source is useful for applications requiring a very bright, strobed light.

Historically, fluorescent and quartz halogen lighting sources have been used most commonly. In recent years, LED technology has improved in stability, intensity, and cost-effectiveness; however, it is still not as cost-effective for large area lighting deployment, particularly compared with fluorescent sources. On the other hand, if application flexibility, output stability, and longevity are important parameters, then LED lighting might be more appropriate. Depending on the exact lighting requirements, more than one source type may often be used for a specific implementation, and most vision experts agree that one source type cannot adequately solve all lighting issues. It is important to consider not only a source’s brightness, but also its spectral content.  Microscopy applications, for example, often use a full-spectrum quartz halogen, xenon, or mercury source, particularly when imaging in color; however, a monochrome LED source is also useful for B&W CCD cameras, and now for color applications as well, with the advent of “all color – RGB” and white LED light heads. In applications requiring high light intensity, such as high-speed inspections, it may be useful to match the source’s spectral output with the spectral sensitivity of the particular vision camera. For example, CMOS sensor-based cameras are more IR sensitive than their CCD counterparts, imparting a significant sensitivity advantage in light-starved inspection settings when using IR LED or IR-rich tungsten sources.

Vendors must be contacted to recommend proper lighting relative to the job to be accomplished.

HAPPY BIRTHDAY NASA

February 13, 2015


References for this post are taken from NASA Tech Briefs, Vol 39, No 2, February 2015.

In 1915 the National Advisory Committee for Aeronautics (NACA) was formed by our Federal government.  March 3, 2015 marks the 100th birthday of that occasion.  The NACA was created by Congress over concerns that the U.S. was losing its edge in aviation technology to Europe.  WWI was raging at that time, and advances in aeronautics were at the forefront of the European efforts to win the war using “heavier than air” craft to pound the enemy.  The purpose of NACA was to “supervise and direct the scientific study of the problems of flight with a view to their practical solution.” State-of-the-art laboratories were constructed in Virginia, California and Ohio that led to fundamental advances in aeronautics, enabling victory in WWII. Those efforts also supported national security during the Cold War era with the Soviet Union.  The DNA of the entire aircraft industry is infused with technology resulting from NACA and NASA research and development efforts.

HUMBLE BEGINNINGS

NACA was formed by employing twelve (12) unpaid individuals with an annual budget of $5,000.00.  Over the course of forty-three (43) years, the agency made fundamental breakthroughs in aeronautical technology in ways affecting the manner in which airplanes and space craft are designed, built, tested and flown today.  NACA’s early successes are as follows:

  • Cowling to improve the cooling of radial engines thereby reducing drag.
  • Wind tunnel testing simulating air density at different altitudes, which engineers used to design and test dozens of wing cross-sections.
  • Wind tunnels with slots in the walls that allowed researchers to take measurements of aerodynamic forces at supersonic speeds.
  • Design principles involving the shape of an aircraft’s wing in relation to the rest of the airplane to reduce drag and allow supersonic flight.
  • Distribution of reports and studies to aircraft manufacturers allowing designs benefiting from R & D efforts.
  • Development of airfoil and propeller shapes which simplified aircraft design. These shapes eventually were incorporated into aircraft such as the P-51 Mustang.
  • Research and wind tunnel testing led to the adoption of the “coke-bottle” design that still influences our supersonic military aircraft of today.

As a result of NACA efforts, flight tests were initiated on the first supersonic experimental airplane, the X-1.  This aircraft was flown by Captain Chuck Yeager and paved the way for further research into supersonic aircraft leading to the development of swept-wing configurations.

After the Soviet Union launched Sputnik 1, the world’s first artificial satellite, in 1957, Congress responded to the nation’s fear of falling behind by passing the National Aeronautics and Space Act of 1958.  NASA was born.  The new agency, proposed by President Eisenhower, would be responsible for civilian human, satellite, and robotic space programs as well as aeronautical research and development. NACA was absorbed into the NASA framework.

ACHIEVEMENTS:

Looking at the achievements of NASA from that period of time, we see the following milestones:

  • 1959—Selection of seven (7) astronauts for Project Mercury.
  • 1960–Formation of NASA’s Marshall Space Flight Center with Dr. Wernher von Braun as director.
  • 1961—President Kennedy committed the nation to landing a man on the moon.
  • 1962—John Glenn became the first American to circle the Earth in Friendship 7.
  • 1965—Gemini IV stayed aloft four (4) days during which time Edward H. White II performed the first space walk.
  • 1968—James A. Lovell Jr., William A. Anders, and Frank Borman flew the historic mission to circle the moon.
  • 1969—The first lunar landing.

Remarkable achievements that absolutely captured the imagination of most Americans.  It is extremely unfortunate that our nearsighted Federal government has chosen to reduce NASA funding and eliminate many of the manned programs and hardware previously on the “books”. We have seemingly altered course, at least relative to manned space travel.  Let’s hope we can get back on track in future years.

THE TRUTH IS OUT THERE

February 6, 2015


In John 18:38 we read the following from the King James Version of the Bible: “Pilate saith unto him, What is truth? And when he had said this, he went out again unto the Jews, and saith unto them, I find in him no fault at all.”  Pilate did not stay for an answer.

One of my favorite television programs was The X-Files.  It’s been off the air for some years now, but we are told it will return as a “mini-series” sometime in the very near future.  The original cast, i.e. Fox Mulder and Dana Scully, will again remind us—THE TRUTH IS OUT THERE.  The truth is definitely out there, as indicated by the men and women comprising the Large Synoptic Survey Telescope team.  They are definitely staying for answers.  The team members posed for a group photograph as seen below.

LSST Team

THE MISSION:

The Large Synoptic Survey Telescope (LSST) is a revolutionary facility which will produce an unprecedented wide-field astronomical survey of our universe using an 8.4-meter ground-based telescope. The LSST leverages innovative technology in all subsystems: 1) the camera (3,200 megapixels, the world’s largest digital camera); 2) the telescope (simultaneous casting of the primary and tertiary mirrors; two aspherical optical surfaces on one substrate); and 3) data management (30 terabytes of data nightly).  There will be almost instant alerts issued for objects that change in position or brightness.
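
To get a feel for how a 3,200-megapixel camera adds up to tens of terabytes a night, here is a rough back-of-the-envelope sketch.  The bytes-per-pixel and exposures-per-night figures below are my own assumptions for illustration, not official LSST numbers.

# Back-of-the-envelope estimate of the nightly raw data volume (illustrative only).
pixels_per_exposure = 3.2e9   # 3,200 megapixels
bytes_per_pixel = 2           # typical 16-bit CCD readout (assumption)
exposures_per_night = 2000    # order-of-magnitude guess (assumption)

raw_bytes = pixels_per_exposure * bytes_per_pixel * exposures_per_night
print(f"Raw image data per night: {raw_bytes / 1e12:.1f} TB")   # about 12.8 TB
# Calibration frames and processed data products push the total toward
# the roughly 30 terabytes per night quoted above.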

The known forms of matter and types of energy experienced here on Earth account for only four percent (4%) of the universe. The remaining ninety-six percent (96%), though central to the history and future of the cosmos, remains shrouded in mystery. Two tremendous unknowns present one of the most tantalizing and essential questions in physics: What are dark energy and dark matter? LSST aims to expose both.

DARK ENERGY:

Something is driving the universe apart, accelerating the expansion begun by the Big Bang. This force accounts for seventy percent (70%) of the cosmos, yet it is invisible and can only be “seen” by its effects on space. Because LSST is able to track cosmic movements over time, its images will provide some of the most precise measurements ever of our universe’s expansion. Light from distant objects appears stretched, a phenomenon known as redshift, and LSST may offer the key to understanding the cosmic anti-gravity behind it.
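
For readers who want the textbook definition behind that last sentence (a standard formula, not anything specific to LSST), redshift compares the observed wavelength of light with the wavelength at which it was emitted:

z = \frac{\lambda_{\mathrm{obs}} - \lambda_{\mathrm{emit}}}{\lambda_{\mathrm{emit}}}

The more the universe has expanded while the light was in transit, the larger z becomes, which is why precise redshift measurements over billions of galaxies translate into a history of the expansion itself.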

DARK MATTER:

Einstein deduced that massive objects in the universe bend the path of light passing nearby, demonstrating the curvature of space. One way of observing the invisible presence of dark matter is to examine the way its mass bends the light from distant stars. This technique is known as gravitational lensing. The extreme sensitivity of the LSST, as well as its wide field of view, will help assemble comprehensive data on these gravitational lenses, offering key clues to the presence of dark matter. The dense and mysterious substance acts as a kind of galactic glue, and it accounts for twenty-five percent (25%) of the universe.
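
As a standard textbook illustration of the lensing idea (again, not an LSST-specific formula), the angular Einstein radius of a point-mass lens of mass M is

\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L D_S}}

where D_L, D_S and D_{LS} are the distances to the lens, to the source, and from lens to source. The heavier the intervening mass, the larger the distortion, so mapping these distortions across billions of galaxies amounts to mapping the dark matter itself.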

From its mountaintop site, LSST will image the entire visible sky every few nights, capturing changes over time from seconds to years. Ultimately, after 10 years of observation, a stunning time-lapse movie of the universe will be created.

As the LSST stitches together thousands of images of billions of galaxies, it will process and upload that information for applications beyond pure research. Frequent, real-time updates – roughly 100,000 a night – announcing the drift of a planet or the flicker of a dying star will be made available to both research institutions and interested astronomers.

In conjunction with platforms such as Google Earth, LSST will build a 3D virtual map of the cosmos, allowing the public to fly through space from the comfort of home.  ALLOWING THE PUBLIC is the operative phrase. For the very first time, the public will have access to information about the cosmos as it becomes available.  LSST educational materials will clearly specify the National and State science, math and technology standards that are met by each activity. These materials will enhance 21st-century workforce skills, incorporate inquiry and problem solving, and ensure continual assessment embedded in instruction.

THE LOCATION:

The decision to place LSST on Cerro Pachón in Chile was made by an international site selection committee based on a competitive process.  In short, modern telescopes are located in sparsely populated areas (to avoid light pollution), at high altitudes and in dry climates (to avoid cloud cover). In addition to those physical concerns, there are infrastructure issues. The ten best candidate sites in both hemispheres were studied by the site selection committee. Cerro Pachón was the overall winner in terms of quality of the site for astronomical imaging and available infrastructure. The result will be superb deep images from the ultraviolet to near infrared over the vast panorama of the entire southern sky.

The location is shown by the following digital:

Construction Site

The actual site location, as you can see below, is a very rugged outcropping of rock now used by local farmers as grazing land for their sheep.

The Mountain Location

The Observatory will be located about 500 km (310 miles) north of Santiago, Chile, and about 52 km (32 miles), or 80 km (50 miles) by road, from La Serena, at an altitude of 2,200 meters (about 7,200 feet).  It lies on a 34,491-hectare (85,227-acre) site known as “Estancia El Tortoral,” which was purchased by AURA on the open market in 1967 for use as an astronomical observatory.
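
For anyone who wants to check the conversions above, a quick sketch in Python (the conversion factors are the usual textbook values):

# Metric-to-US unit conversions used in the paragraph above.
KM_TO_MILES = 0.621371
M_TO_FEET = 3.28084

print(round(500 * KM_TO_MILES))    # about 311 miles
print(round(80 * KM_TO_MILES))     # about 50 miles by road
print(round(2200 * M_TO_FEET))     # about 7,218 feet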

When purchased, the land supported a number of subsistence farmers and goat herders. They were allowed to continue to live on the reserve after it was purchased by AURA and have gradually been leaving voluntarily for more lucrative jobs in the nearby towns.

As a result of the departure of most of its human inhabitants and a policy combining environmental protection with “benign neglect” on the part of the Observatory, the property sees little human activity except for the roads and the relatively small areas on the tops of Cerro Tololo and Cerro Pachón. Consequently, much of the reserve is gradually returning to its natural state. Many native species of plants and animals, long thought in danger of extinction, are now returning. The last half of the trip to Tololo is an excellent opportunity to see a reasonably intact Chilean desert ecosystem.

THE FACILITY:

LSST construction is underway, with NSF funding authorized as of August 1, 2014.

Early development was funded by a number of small grants, with major contributions in January 2008 from software billionaire Charles Simonyi and Bill Gates of $20 million and $10 million, respectively.  Another $7.5 million was included in the U.S. President’s FY2013 NSF budget request. The Department of Energy is expected to fund construction of the digital camera component through the SLAC National Accelerator Laboratory, as part of its mission to understand dark energy.

Construction of the primary mirror at the University of Arizona’s Steward Observatory Mirror Lab, the most critical and time-consuming part of a large telescope’s construction, is almost complete. Construction of the mold began in November 2007, mirror casting began in March 2008, and the mirror blank was declared “perfect” at the beginning of September 2008.  By January 2011, both the M1 and M3 figures had completed generation and fine grinding, and polishing had begun on M3.

As of December 2014, the primary mirror is complete and awaiting final approval, and the mirror transport box is ready to receive it for storage until it is shipped to Chile.

The secondary mirror was manufactured by Corning from ultra-low-expansion glass and coarse-ground to within 40 μm of the desired shape. In November 2009, the blank was shipped to Harvard University for storage until funding to complete it was available. On October 21, 2014, the secondary mirror blank was delivered from Harvard to Exelis for fine grinding.

Site excavation began in earnest March 8, 2011, and the site had been leveled by the end of 2011. Also during that time, the design continued to evolve, with significant improvements to the mirror support system, stray-light baffles, wind screen, and calibration screen.

In November 2014, the LSST camera project, which is separately funded by the United States Department of Energy, passed its “critical decision 2” design review and is progressing toward full funding.

When completed, the facility will look as follows with the mirror mounted as given by the second JPEG:


Artist’s Rendition of Building

 

Telescope Relative to Building

MIRROR DESIGN:

The assembled mirror structure is given below.

Telescope

In the LSST optical design, the primary (M1) and tertiary (M3) mirrors form a continuous surface without any vertical discontinuities. Because the two surfaces have different radii of curvature, a slight cusp is formed where the two surfaces meet, as seen in the figure below. This design makes it possible to fabricate both the primary and tertiary mirrors from a single monolithic substrate, referred to as the M1-M3 monolith.

MIRROR MONOLITH
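
As a rough paraxial illustration of why that cusp appears (my own sketch, not an LSST figure): near the optical axis the sag of a spherical surface of radius R is approximately

z(r) \approx \frac{r^2}{2R}, \qquad \frac{dz}{dr} = \frac{r}{R}

so even if the M1 and M3 heights are matched at the junction radius r_j, their slopes r_j/R_1 and r_j/R_3 cannot both match because R_1 \neq R_3. The surface therefore kinks slightly at the junction, which is the cusp described above.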

After a feasibility review was held on June 23, 2005, the LSST project team adopted the monolithic approach to fabricating the M1 and M3 surfaces as its baseline. In collaboration with the University of Arizona’s Steward Observatory Mirror Lab (SOML), work has begun on detailed engineering of the mirror blank and on the testing procedures for the M1-M3 monolith. The M1-M3 monolith blank will be formed from Ohara E6 low-expansion glass using the spin-casting process developed at SOML.

At 3.42 meters in diameter, the LSST secondary mirror will be the largest convex mirror ever made. The mirror is aspheric, with approximately 17 microns of departure from the best-fit sphere. The design uses a 100 mm thick solid meniscus blank made of a low-expansion glass (e.g. ULE or Zerodur), similar to the glasses used by the SOAR and Discovery Channel telescopes. The mirror is actively supported by 102 axial and 6 tangent actuators. The alignment of the secondary to the M1-M3 monolith is accomplished by the 6 hexapod actuators between the mirror cell and the support structure. The large conical baffle is necessary to prevent the direct reflection of starlight from the tertiary mirror into the science camera.

SUMMARY:

The truth is out there and projects such as the one described in this post AND the Large Hadron Collider at CERN certainly prove some people and institutions are not at all reluctant to search for that truth, the ultimate purpose being to discover where we come from.  Are we truly made from “star stuff”?

 
