October 11, 2014
What would you call a BIG story? ISIL, the Ebola virus, Benghazi, IRS problems with Tea Party members, the search for the missing Malaysian jet? All are big stories and certainly deserve airtime and commentary. There is one story, however, that has gotten almost zero (0) airtime from the media, and it is one I feel is absolutely remarkable in importance relative to pushing the technological envelope. The Mars MAVEN mission has been a huge success to date, with the unmanned craft now orbiting the “red” planet.
MAVEN is an acronym for NASA’s Mars Atmosphere and Volatile Evolution spacecraft, which successfully entered Mars’ orbit at 10:24 p.m. EDT Sunday, Sept. 21, 2014, after traveling 442 million miles. The purpose of the mission is to study the Red Planet’s upper atmosphere as never before. This is the first spacecraft dedicated to exploring the tenuous upper atmosphere of Mars, with the following objectives:
- Determine the role the loss of volatile gaseous substances to space from the Martian atmosphere has played through time.
- Determine the current state of the upper atmosphere, ionosphere, and interactions with the solar wind.
- Determine the current rates of escape of neutral gases and ions to space and the processes controlling them.
- Determine the ratio of stable isotopes in the Martian atmosphere.
There is some thought that by understanding the atmospheric conditions on Mars, we will gain better insights as to the evolutionary processes of that planet and maybe some ability to predict evolutionary processes on Earth. Also, discussions are well underway relative to future establishment of colonies on Mars. If that is to ever happen, we definitely will need additional information relative to atmospheric and surface conditions.
The graphic below is a pictorial of the MAVEN system. This is somewhat “busy” but one which captures several significant specifics of the hardware including onboard instrumentation.
Please note the graphic at the bottom comparing what is believed to be early atmospheric conditions with current atmospheric conditions. The loss of magnetic fields surrounding the planet is contributory to atmospheric losses. Could this happen to Earth’s atmosphere? That’s a question that we have yet to answer. Additional specifics can be seen from the following:
After a 442 million mile trip, how did MAVEN hook up with Mars? Very, very carefully. The blue line in the graphic below shows the first part of MAVEN’s trajectory during its initial approach and the beginning of the 35-hour capture orbit. The red section of the line indicates the 33-minute engine burn that slows the spacecraft so it can be captured into Martian orbit. Mars’ orbit around the sun is indicated by the white line to the right of the planet, and the Martian moons’ orbits are dimly visible in the background. This is a remarkable example of engineering and physics allowing for pinpoint accuracy relative to entry and the establishment of orbital stability.
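The physics of the capture burn can be sketched with the vis-viva equation. Only the 35-hour capture orbit comes from the description above; the arrival excess speed and periapsis altitude below are illustrative assumptions, not published mission values:

```python
import math

MU_MARS = 4.2828e4  # Mars gravitational parameter, km^3/s^2
R_MARS = 3396       # mean equatorial radius, km

def capture_delta_v(v_inf, r_p, period_s):
    """Delta-v at periapsis to drop from a hyperbolic approach into an
    elliptical capture orbit of the given period (vis-viva equation)."""
    # Semi-major axis of the capture orbit from Kepler's third law
    a = (MU_MARS * (period_s / (2 * math.pi)) ** 2) ** (1 / 3)
    v_hyp = math.sqrt(v_inf ** 2 + 2 * MU_MARS / r_p)  # approach speed at periapsis
    v_ell = math.sqrt(MU_MARS * (2 / r_p - 1 / a))     # capture-orbit speed at periapsis
    return v_hyp - v_ell

# Assumed values: ~2.65 km/s hyperbolic excess speed, ~380 km periapsis
# altitude, and the 35-hour capture orbit the post mentions.
dv = capture_delta_v(v_inf=2.65, r_p=R_MARS + 380, period_s=35 * 3600)
print(f"approximate capture burn: {dv:.2f} km/s")
```

The point of the sketch is the order of magnitude: a burn of well under a kilometer per second, applied for roughly half an hour, is enough to turn a flyby into a stable orbit.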
MAVEN carries three instrument suites with eight scientific instrument packages designed to study the upper atmosphere and ionosphere of Mars and its interactions with the solar wind. Three of the instruments are located on the Articulating Payload Platform extending from the bus, including the Imaging Ultraviolet Spectrograph and a mass spectrometer that will sample the atmosphere in situ. The hardware housing these three packages is shown as follows:
The Particles and Fields Package, built by the University of California at Berkeley with support from CU/LASP and Goddard Space Flight Center, contains six instruments that will characterize the solar wind and the ionosphere of the planet. The Remote Sensing Package, built by CU/LASP, will determine global characteristics of the upper atmosphere and ionosphere. The Neutral Gas and Ion Mass Spectrometer, provided by Goddard Space Flight Center, will measure the composition and isotopes of neutral gases and ions. MAVEN also carries a government-furnished Electra UHF radio, shown in the graphic below, which provides back-up data relay capability for the rovers on Mars’ surface.
Lockheed Martin, based in Littleton, Colorado, built the MAVEN spacecraft and provides mission operations. NASA’s Jet Propulsion Laboratory is providing navigation services, and CU/LASP conducts science operations and data distribution.
On February 19, the MAVEN team successfully completed the initial post-launch power-on and checkout of the spacecraft’s Electra ultra-high frequency (UHF) transceiver, shown in the graphic below. This relay radio transmitter and receiver will be used for UHF communication with robots on the surface of Mars. Using the orbiter to relay data from Mars rovers and stationary landers boosts the amount of information that can be returned to Earth.
A part of NASA’s Mars Scout program, MAVEN is the culmination of 10 years of R&D. Some of that R&D went into designing the materials for the spacecraft’s instruments as well as for the satellite itself, which weighs about as much as a small car and has a 37 ft wingspan, including solar panel arrays. That panel system is shown as follows:
As you can see from the JPEG, the array is huge but necessary to power the complete system.
The craft’s core structures are made with carbon fiber composites made by TenCate Advanced Composites. The company is experienced in the design and fabrication of composites for aerospace applications, having already supplied them to previous Mars missions, including earlier rover programs and Curiosity. For MAVEN, which will orbit Mars for about one Earth year, TenCate engineered an aluminum honeycomb core sandwiched between composite face sheets for the spacecraft’s primary bus structure.
Other materials in the orbiter include a cylindrical aluminum boat tail on the aft deck that provides engine structural support. The craft is kept at the correct operating temperature — 5F to 104F — using active thermal control and passive measures, such as several thermal materials for conducting or isolating heat. Most of the orbiter is enclosed within multi-layer insulation materials; the outside layer is black Kapton film coated with germanium.
Hopefully, you can see now why I feel MAVEN is a BIG story worthy of considerable air time. It’s a modern-day engineering marvel. I welcome your comments: firstname.lastname@example.org.
September 6, 2014
The following resources were used to produce this post: Internet Society, “Global Internet Report 2014”, SITEOPEDIA and HELPGUIDE.ORG, BBC News, “The Age of Internet Overload”.
WHAT IS THE INTERNET:
According to the Global Internet Society, “The Internet is a uniquely universal platform that uses the same standards in every country, so that every user can interact with every other user in ways unimaginable 10 years ago, regardless of the multitude of changes taking place.”
This statement sums it up in a very precise fashion. The Internet has undoubtedly changed the entire world. Open access to the Internet has revolutionized the way individuals communicate and collaborate, entrepreneurs and corporations conduct business, and governments and citizens interact. At the same time, the Internet established a revolutionary open model for its own development and governance, encompassing all stakeholders. Fundamentally, the Internet is a ‘network of networks’ whose protocols are designed to allow networks to interoperate. In the very beginning, these networks represented different academic, government, and research communities whose members needed to cooperate to develop common standards and manage joint resources. Later, as the Internet was commercialized, vendors and operators joined the open protocol development process and helped unleash the unprecedented era of growth and innovation.
INTERNET PENETRATION BY COUNTRY:
If we look at global Internet penetration by country, we see the following:
Internet penetration is high across the globe for all but the poorest countries, and daily Internet usage approaches one billion individuals. With numbers such as these, there should be no doubt that some users will develop obsessive/compulsive behavior amounting to addiction. With that being the case, what is Internet addiction?
Internet Addiction, otherwise known as computer addiction, online addiction, or Internet addiction disorder (IAD), covers a variety of impulse-control problems, including:
- Cybersex Addiction – compulsive use of Internet pornography, adult chat rooms, or adult fantasy role-play sites impacting negatively on real-life intimate relationships. The Internet is the cheapest, fastest, and most anonymous pornography source. Internet pornographers made over $1 billion in revenues dealing their merchandise on-line. The threat of pornography over the Internet cannot be discounted: 70 percent of children viewing pornography on the Internet do so in public schools and libraries (The Internet Online Summit, 1997). All of us realize that we are surrounded by various forms of pornography, whether noticing the “adult” section of videos at Blockbuster, surfing the Internet, seeing advertising which is clearly sexually suggestive, or innocently going to a movie that just happens to have some kind of sex scene.
- Cyber-Relationship Addiction – addiction to social networking, chat rooms, texting, and messaging to the point where virtual, online friends become more important than real-life relationships with family and friends. Facebook has 1.4 billion profiles, and 1.06 billion of those (or 15 percent of the world’s population) use Facebook regularly. Of those, 78 percent of users access Facebook on a mobile device a minimum of once a month. Every second, there are 8,000 likes on Instagram. Instagram launched in 2010, and boasts 200 million active users in 2014, with over 75 million users daily. Google+ has over 540 million profiles and over 300 million monthly active users. LinkedIn, launched in 2003, has 300 million users, and an average of two new members per second. Forty percent of users on LinkedIn check the site daily, and Mashable is the LinkedIn company with the most engaged following.
- Net Compulsions – such as compulsive online gaming, gambling, stock trading, or compulsive use of online auction sites such as eBay, often resulting in financial and job-related problems. Obsessive playing of off-line computer games, such as Solitaire or Minesweeper, or obsessive computer programming.
- Information Overload – compulsive web surfing or database searching, leading to lower work productivity and less social interaction with family and friends. An average US citizen on an average day consumes 100,500 words, whether that is email, messages on social networks, searching websites or anywhere else digitally. Take a look at the global statistics given below and consider what happens in sixty (60) seconds:
- 168 million e-mails sent
- 694,445 Google searches launched
- 695,000 Facebook updates attempted
- 370,000 Skype calls made
- 98,000 Tweets accomplished
- 20,000 new posts on TUMBLR
- 13,000 iPhone apps downloaded
- 6,600 new pictures on Flickr
- 1,500 new blog entries posted (just like this one)
- 600+ videos posted totaling over 25 hours duration on YouTube
The most common of these Internet addictions are cybersex, online gambling, and cyber-relationship addiction. Talk about busy.
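Scaled up, those per-minute figures become staggering daily totals. A quick sketch using a few of the post’s own numbers:

```python
# Per-minute global activity figures quoted above, scaled to a full day.
per_minute = {
    "e-mails sent": 168_000_000,
    "Google searches": 694_445,
    "Facebook updates": 695_000,
    "Skype calls": 370_000,
    "tweets": 98_000,
}

MINUTES_PER_DAY = 24 * 60  # 1,440

for activity, count in per_minute.items():
    print(f"{activity}: {count * MINUTES_PER_DAY:,} per day")
```

At that rate, e-mail alone works out to roughly 242 billion messages every day.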
SIGNS AND SYMPTOMS:
Signs and symptoms of Internet addiction vary from person to person. For example, there are no set hours per day or number of messages sent that indicate Internet addiction. But here are some general warning signs that your Internet use may have become a problem:
- Losing track of time online. Do you frequently find yourself on the Internet longer than you intended? Do a few minutes turn into a few hours? Do you get irritated or cranky if your online time is interrupted? From a business standpoint, I have often heard the Internet called a “black hole” when it comes to wasting time, primarily due to net-surfing. I will admit that, in my work as a consulting engineer, I use the Internet daily to investigate vendors and companies supplying services that complement my work. I don’t really consider this wasting time; it actually saves the time I would otherwise spend on phone calls, magazine searches, searches through the Thomas Register, etc.
- Having trouble completing tasks at work or home. Do you find laundry piling up and little food in the house for dinner because you’ve been busy online? Perhaps you find yourself working late more often because you can’t complete your work on time—then staying even longer when everyone else has gone home so you can use the Internet freely.
- Isolation from family and friends. Is your social life suffering because of all the time you spend online? Are you neglecting your family and friends? Do you feel like no one in your “real” life—even your spouse—understands you like your online friends?
- Feeling guilty or defensive about your Internet use. Are you sick of your spouse nagging you to get off the computer or put your smart phone down and spend time together? Do you hide your Internet use or lie to your boss and family about the amount of time you spend on the computer or mobile devices and what you do while you’re online?
- Feeling a sense of euphoria while involved in Internet activities. Do you use the Internet as an outlet when stressed, sad, or for sexual gratification or excitement? Have you tried to limit your Internet time but failed?
If we look at Internet usage relative to addiction, we see the following for the United States:
This calculates to 988 hours per year for men and 728 hours per year for women. How much time do you spend per year reading a good book, calling your mother, taking a course at a local technical school or university, volunteering in your community, etc? Have you improved your reading speed and reading comprehension lately? You get the picture.
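The daily-usage graphic is not reproduced here, but the yearly totals quoted above can be checked backward against it with a one-liner:

```python
# Yearly online hours quoted in the post, converted back to hours per day.
yearly_hours = {"men": 988, "women": 728}

for group, hours in yearly_hours.items():
    print(f"{group}: {hours / 365:.1f} hours online per day")
```

That works out to roughly 2.7 hours a day for men and 2.0 for women.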
SITEOPEDIA has conducted polls that indicate significant addiction can result from Internet usage. The graphic below will highlight the results of that poll. Note: those indicating they are not addicted may just be lying. The real rates of addiction are estimates at best.
Those indicating they are addicted might consider the following recourse:
- Recognize any underlying problems that may support your Internet addiction. If you are struggling with depression, stress, or anxiety, for example, Internet addiction might be a way to self-soothe rocky moods. Have you had problems with alcohol or drugs in the past? Does anything about your Internet use remind you of how you used to drink or use drugs to numb yourself? Recognize if you need to address treatment in these areas or return to group support meetings.
- Build your coping skills. Perhaps blowing off steam on the Internet is your way of coping with stress or angry feelings. Or maybe you have trouble relating to others, or are excessively shy with people in real life. Building skills in these areas will help you weather the stresses and strains of daily life without resorting to compulsive Internet use.
- Strengthen your support network. The more relationships you have in real life, the less you will need the Internet for social interaction. Set aside dedicated time each week for friends and family. If you are shy, try finding common interest groups such as a sports team, education class, or book reading club. This allows you to interact with others and let relationships develop naturally.
Modify your Internet use step by step:
- To help you see problem areas, keep a log of how much you use the Internet for non-work or non-essential activities. Are there times of day that you use the Internet more? Are there triggers in your day that make you stay online for hours at a time when you only planned to stay for a few minutes?
- Set goals for when you can use the Internet. For example, you might try setting a timer, scheduling use for certain times of day, or making a commitment to turn off the computer, tablet, or smart phone at the same time each night. Or you could reward yourself with a certain amount of online time once you’ve completed a homework assignment or finished the laundry, for instance.
- Replace your Internet usage with healthy activities. If you are bored and lonely, resisting the urge to get back online can be very difficult. Have a plan for other ways to fill the time, such as going to lunch with a coworker, taking a class, or inviting a friend over.
WHAT WE DO:
The fascinating thing about Internet usage is what we actually do with all that time. From the graphic below, we see legitimate usage of the Internet to accomplish “chores” and execute responsibilities. I think shopping online and paying bills certainly fall within reason.
Wasting time on the Internet is a matter of definition. Please keep in mind the graphic below indicates time per DAY. Left side men—right side women.
OK, now that I have your attention, where do we go next?
We just might be doomed as a society. Curb that habit. I welcome your comments:
September 1, 2014
There is absolutely no doubt the entire world is dependent upon the generation and transmission of electricity. Those countries without electrical power are considered third-world countries with no immediate hope of improving lives and living conditions. Yet there just may be alternatives to the generally held methods for generating electricity.
If we look at the definition for renewable energy, we see the following:
Renewable energy is derived from natural processes that are replenished constantly. In its various forms, it derives directly from the sun, or from heat generated deep within the earth. Included in the definition is electricity and heat generated from solar, wind, ocean, hydropower, biomass, geothermal resources, and biofuels and hydrogen derived from renewable resources.
We are all familiar with current methodologies for power generation: 1.) Hydroelectric, 2.) Nuclear, 3.) Coal, 4.) Oil, and 5.) Natural Gas. The graphic below indicates the percentage contributed by each generation type. This is for the United States; other countries choose generation methods based on the availability of resources and on political and cultural pressures. Germany, for example, is in the process of abandoning its use of nuclear energy for power generation, a cultural and political decision not entirely based upon scientific considerations.
You will notice that renewable energy was approximately 12.9 percent of the total generation within the United States in 2013. Please note also that hydroelectric is considered a source of renewable energy. This is shown by the graphic below. To break this down even further, we look at the following:
Renewable energy is represented by five (5) categories:
One additional possibility is the generation of electricity by tidal processes. This technology is in its infancy, with work being accomplished on a “demonstration” scale. It is an up-and-coming methodology but does not yet enjoy a place within the list above.
Just how much energy results from each renewable category?
From the data above we see there has been a growing dependence upon renewable technology as a source of electricity. Wind and biomass production are increasing while hydroelectric is decreasing; geothermal and solar remain about the same. The increase in energy production from biomass is especially significant.
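For context, the category shares behind the 12.9 percent renewable figure break down roughly as follows. The individual percentages below are approximate 2013 EIA-based figures and should be treated as illustrative, not exact:

```python
# Approximate 2013 U.S. net-generation shares (percent of total generation),
# loosely based on EIA data; the individual figures are illustrative.
renewable_share_pct = {
    "hydroelectric": 6.6,
    "wind": 4.1,
    "biomass": 1.5,
    "geothermal": 0.4,
    "solar": 0.3,
}

total_pct = sum(renewable_share_pct.values())
print(f"renewables combined: {total_pct:.1f}% of U.S. generation")
```

Note how hydroelectric still dominates the renewable column, accounting for roughly half of it.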
The Energy Information Agency (EIA) has collected the following data:
Why should governments and independent companies continue to consider renewable energy as a source of power? There are compelling reasons.
- ENVIRONMENTAL BENEFITS– For the most part, renewable sources of energy have minimal negative impact on our environment. They are paramount in reducing carbon dioxide emissions. Millions of people are exposed to toxic fumes from cooking fuels and kerosene lanterns, emissions from automobiles and energy sources for generating electricity. All result in chronic eye and lung conditions. Countries such as China and India have days where atmospheric particulate requires masks or face coverings when prolonged periods of outdoor activity are needed.
- ENERGY FOR THE FUTURE—Coal, oil, natural gas, and even nuclear fuel are expendable, non-renewable sources of energy. Once exhausted, they are gone forever, so prolonging their useful life is paramount. We will never completely remove ourselves from being a petro-based economy; too many by-products are made from petroleum, and it is fantasy to expect the total elimination of petroleum usage.
- JOBS AND ECONOMY—Investments in hardware and infrastructure for renewable energy require money but can create jobs. If you have been following the insanity relative to approval of the Keystone Pipeline, you know the argument. On a global basis, we see the following (PLEASE NOTE: the numbers are in billions of US dollars):
The point of this graph is to show the increasing investment, both in R & D efforts and in production of infrastructure, for the generation of renewable energy.
- ENERGY SECURITY– The U.S. imported approximately 10.6 million barrels per day of petroleum in 2012 from about 80 countries. We exported 3.2 MMbd of crude oil and petroleum products, resulting in net imports (imports minus exports) equaling 7.4 MMbd. Net imports accounted for 40% of the petroleum consumed in the United States, the lowest annual average since 1991.
“Petroleum” includes crude oil and refined petroleum products like gasoline, and biofuels like ethanol and biodiesel. In 2012, about 80% of gross petroleum imports were crude oil, and about 57% of all crude oil that was processed in U.S. refineries was imported.
The top five source countries of U.S. petroleum imports in 2012 were Canada, Mexico, Saudi Arabia, Venezuela, and Russia. Their respective rankings vary based on gross petroleum imports or net petroleum imports (gross imports minus exports). Net imports from OPEC countries accounted for 55% of U.S. net imports.
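The import arithmetic above is easy to verify:

```python
# 2012 U.S. petroleum trade figures quoted above, in million barrels per day (MMbd).
gross_imports = 10.6
exports = 3.2

net_imports = gross_imports - exports        # imports minus exports
consumption = net_imports / 0.40             # net imports were 40% of consumption

print(f"net imports: {net_imports:.1f} MMbd")
print(f"implied total consumption: {consumption:.1f} MMbd")
```

The implied total U.S. consumption of about 18.5 MMbd is consistent with the 40 percent figure.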
One disadvantage with renewable energy is that it is difficult to generate the quantities of electricity that are as large as those produced by traditional fossil fuel generators. This may mean that we need to reduce the amount of energy we use or simply build more energy facilities. It also indicates that the best solution to our energy problems may be to have a balance of many different power sources.
Another disadvantage of renewable energy sources is the reliability of supply. Renewable energy often relies on the weather for its source of power. Hydro generators need rain to fill dams to supply flowing water. Wind turbines need wind to turn the blades, and solar collectors need clear skies and sunshine to collect heat and make electricity. When these resources are unavailable so is the capacity to make energy from them. This can be unpredictable and inconsistent. The current cost of renewable energy technology is also far in excess of traditional fossil fuel generation. This is because it is a new technology and as such has extremely large capital cost.
CONCLUSIONS: It remains right and proper that the United States and other countries continue research and development relative to renewable sources of energy. The cost of power generation is increasing, and the depletion of non-renewable sources is of great concern. We must continue efforts to improve renewable power technologies to reduce the cost of infrastructure and delivery.
I would welcome your comments: email@example.com
August 9, 2014
One of the very best publications existing today is “NASA TECH BRIEFS, Engineering Solutions for Design & Manufacturing”. This monthly publication strives to transfer technology from NASA design centers to university and corporate entities in the hope that the research and development can be commercialized in some fashion. In my opinion, it is a marvelous resource and demonstrates avenues of investigation separate and apart from what we have come to know as the recognized NASA mission. As you well know, in the process of exploration there are many very useful “down-to-Earth” developments that can be utilized and commercialized to benefit manufacturing and our populace at large. These are enumerated in this publication. Several distinct areas within the magazine highlighting papers and studies may be seen as follows:
- Technology Focus: Mechanical Components
- Manufacturing & Prototyping
- Materials & Coatings
- Physical Sciences
- Patents of Note
- New For Design Engineers
As you can see, each of these areas concentrates upon differing subjects, all relating to engineering and product design.
Let me now mention several publications and papers coming from the Volume 38, Number 8 edition (the August 2014 magazine). This will give you some feel for the investigative work coming from NASA research centers across the country.
- “Extreme Low Frequency Acoustic Measurement System”, Langley Research Center, Hampton, Va.
- “Piezoelectric Actuated Valve for Operation in Extreme Conditions”, Jet Propulsion Laboratory, Pasadena, California.
- “Compact Active Vibration Control System”, Langley Research Center, Hampton, Va.
- “Rotary Series Elastic Actuator”, Lyndon B. Johnson Space Center, Houston, Texas.
- “HALT Technique to Predict the Reliability of Solder Joints in a Shorter Duration”, Jet Propulsion Laboratory, Pasadena, California.
I feel one of the great failures of our federal government is the abdication of manned space programs. WE REALLY SCREWED UP on this one. If you have read any of my previous postings on this subject, you will understand my complete and utter amazement relative to that decision by the Executive and Legislative branches of our government. This, to some extent, underscores the deplorable lack of vision existing at the highest levels. We have decided to let the Russians get us up and back. Very bad decision on our part. Now, it is important to note that NASA is far from dormant; NASA is working.
Let’s take a look at the various NASA locations and the areas of research they are undertaking.
- Ames Research Center: Technological Strengths: Information Technology, Biotechnology, Nanotechnology, Aerospace Operations Systems, Rotorcraft, Thermal Protection Systems.
- Armstrong Flight Research Center: Technological Strengths: Aerodynamics, Aeronautics Flight Testing, Aeropropulsion, Flight Systems, Thermal Testing Integrated Systems Test and Validation.
- Glenn Research Center: Technological Strengths: Aeropropulsion, Communications, Energy Technology, High-Temperature Materials Research.
- Goddard Space Flight Center: Technological Strengths: Earth and Planetary Science Missions, LIDAR, Cryogenic Systems, Tracking, Telemetry, Remote Sensing, Command.
- Jet Propulsion Laboratory: Technological Strengths: Near/Deep-Space Mission Engineering, Microspacecraft, Space Communications, Information Systems, Remote Sensing, Robotics.
- Johnson Space Center: Technological Strengths: Artificial Intelligence and Human Computer Interface, Life Sciences, Human Space Flight Operations, Avionics, Sensors, Communication.
- Kennedy Space Center: Technological Strengths: Fluids and Fluid Systems, Materials Evaluation, Process Engineering Command, Control, and Monitor Systems, Range Systems, Environmental Engineering and Management.
- Langley Research Center: Technological Strengths: Aerodynamics, Flight Systems, Materials, Structures, Sensors, Measurements, Information Sciences.
- Marshall Space Flight Center: Technological Strengths: Materials, Manufacturing, Nondestructive Evaluations, Biotechnology, Space Propulsion, Controls and Dynamics, Structures, Microgravity Processing.
- Stennis Space Center: Technological Strengths: Propulsion Systems, Test/Monitoring, Remote Sensing, Nonintrusive Instrumentation.
- NASA Headquarters: Technological Strengths: NASA Planning and Management.
I can strongly recommend to you the “Tech Brief” publication. It’s free. You may find further investigation into the areas of research can benefit you and your company. Take a look.
As always, I welcome your comments. Many thanks.
August 5, 2014
The following post is taken from a PDHonline course this author has written for professional engineers. The entire course may be found at PDHonline.org; look for Introduction to Reliability Engineering.
One of the most difficult issues when designing a product is determining how long it will last and how long it should last. If the product is robust to the point of lasting “forever,” the price of purchase will probably be prohibitive compared with the competition. If it “dies” the first week, you will eventually lose all sales momentum and your previous marketing efforts will be for naught. It is absolutely amazing to me how many products are dead on arrival. They don’t work, right out of the box. This is an indication of slipshod design, manufacturing, assembly, or all of the above.

It is definitely possible to design and build quality and reliability into a product so that the end user is very satisfied and feels he got his money’s worth. The medical, automotive, aerospace, and weapons industries are certainly dependent upon reliability methods to ensure safe and usable products, so premature failure is not an issue. The same can be said for consumer products if reliability methods are applied during the design phase of the development program. Reliability methodology will provide products that “fail safe,” if they fail at all. Component failures are not uncommon in any assembly of parts, but how a component fails can mean the difference between a product that just won’t work and one that can cause significant injury or even death to the user.

It is very interesting to note that German and Japanese companies have put more effort into designing in quality at the product development stage, while U.S. companies seem to place a greater emphasis on solving problems after a product has been developed. Engineers in the United States do an excellent job when cost-reducing a product through part elimination, standardization, material substitution, etc., but sometimes those efforts relegate reliability to the “back burner”.
Producibility, reliability, and quality start with design, at the beginning of the process, and should remain the primary concern throughout product development, testing and manufacturing.
QUALITY VS RELIABILITY:
There seems to be general confusion between quality and reliability. Quality is the “totality of features and characteristics of a product that bear on its ability to satisfy given needs; fitness for use”. “Reliability is a design parameter associated with the ability or inability of a product to perform as expected over a period of time”. It is definitely possible to have a product of considerable quality but one with questionable reliability. Quality AND reliability are crucial today given the degree of technological sophistication, even in consumer products. As you well know, the incorporation of computer-driven and/or computer-controlled products has exploded over the past two decades. There is now an engineering discipline called MECHATRONICS that focuses solely on combining mechanics, electronics, control engineering, and computing. Mr. Tetsuro Mori, a senior engineer working for a Japanese company called Yaskawa, first coined the term; the discipline is also alternately referred to as electromechanical systems. With added complexity comes the very real need to “design in” quality and reliability and to quantify the characteristics of operation, including the failure rate, the mean time between failures (MTBF), and the mean time to failure (MTTF). Adequate testing will also indicate which components and subsystems are susceptible to failure under given conditions of use. This information is critical to marketing, sales, engineering, manufacturing, quality and, of course, the VP of Finance who pays the bills.
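For the common case of a constant failure rate, reliability over time follows the exponential model R(t) = e^(−λt), where λ = 1/MTBF. A minimal sketch, using a hypothetical 50,000-hour MTBF:

```python
import math

def reliability(t_hours, mtbf_hours):
    """Probability a unit survives to time t under a constant failure
    rate (exponential model), where lambda = 1 / MTBF."""
    failure_rate = 1.0 / mtbf_hours
    return math.exp(-failure_rate * t_hours)

# Hypothetical unit: 50,000-hour MTBF, run continuously for one year (8,760 h).
r = reliability(8_760, 50_000)
print(f"probability of surviving one year: {r:.3f}")
```

Note the counterintuitive result: even with an MTBF more than five times the mission length, roughly one unit in six fails within the year.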
Every engineer involved with the design and manufacture of a product should have a basic knowledge of quality and reliability methods and practices.
I think it’s appropriate to define Reliability and Reliability Engineering. As you will see, there are several definitions, all basically saying the same thing, but important to mention, thereby grounding us for the course to follow.
“Reliability is, after all, engineering in its most practical form.”
James R. Schlesinger
Former Secretary of Defense
“Reliability is a projection of performance over periods of time and is usually defined as a quantifiable design parameter. Reliability can be formally defined as the probability or likelihood that a product will perform its intended function for a specified interval under stated conditions of use.”
John W. Priest
Engineering Design for Producibility and Reliability
“Reliability engineering provides the tools whereby the probability and capability of an item performing intended functions for specified intervals in specified environments without failure can be specified, predicted, designed-in, tested, demonstrated, packaged, transported, stored, installed, and started up; and their performance monitored and fed back to all organizations.”
“Reliability is the science aimed at predicting, analyzing, preventing and mitigating
failures over time.”
John D. Healy, PhD
“Reliability is blood, sweat, and tears engineering to find out what could go wrong, to organize that knowledge so it is useful to engineers and managers, and then to act on that knowledge.”
Ralph A. Evans
“The conditional probability, at a given confidence level, that the equipment will perform its intended function for a specified mission time when operating under the specified application and environmental stresses.”
The General Electric Company
“By its most primitive definition, reliability is the probability that no failures will occur in a given time interval of operation. This time interval may be a single operation, such as a mission, or a number of consecutive operations or missions. The opposite of reliability is unreliability, which is defined as the probability of failure in the same time interval.”
“Reliability Theory and Practice”
Personally, I like the definition given by Dr. Healy, although the phrase “performing intended functions for specified intervals in specified environments” (from the longer definition above) adds a reality that really should be there. Also, reliability data generally carries an associated confidence level. We will definitely discuss confidence levels later on and how they factor into the reliability process. Reliability, like all other disciplines, has its own specific vocabulary, and understanding “the words” is absolutely critical to the overall process we wish to follow.
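To make the “probability that no failures will occur in a given time interval” definition concrete, here is a minimal sketch assuming a constant failure rate (the exponential model, which we will refine later); the MTBF figure is hypothetical:

```python
import math

# Sketch of reliability as "probability of no failures in an interval,"
# assuming a constant failure rate (exponential model). Numbers are hypothetical.

mtbf = 12_500.0        # hours, assumed known from prior test data
lam = 1.0 / mtbf       # constant failure rate, failures per hour

def reliability(t_hours: float) -> float:
    """R(t): probability of operating t hours with no failure."""
    return math.exp(-lam * t_hours)

def unreliability(t_hours: float) -> float:
    """F(t) = 1 - R(t): probability of at least one failure by t hours."""
    return 1.0 - reliability(t_hours)

print(f"R(1000 h) = {reliability(1000):.3f}")
print(f"F(1000 h) = {unreliability(1000):.3f}")
```

The complementary relationship R(t) + F(t) = 1 is exactly the reliability/unreliability pairing in the definition quoted above.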
The main goal of reliability engineering is to minimize the failure rate by maximizing the MTTF. The two main goals of design for reliability are:
- Predict the reliability of an item, i.e., component, subsystem and system (fit the life model and/or estimate the MTTF or MTBF).
- Design the item to withstand the environments that promote failure. To do this, we must understand the Key Noise Parameters (KNPs) and Key Control Parameters (KCPs) of the entire system, or at least of the mission-critical subassemblies of the system.
The overall effort is concerned with eliminating early failures by observing their distribution and determining, accordingly, the length of time necessary for debugging and methods used to debug a system or subsystem. Further, it is concerned with preventing wearout failures by observing the statistical distribution of wearout and determining the preventative replacement periods for the various parts. This equates to knowing the MTTF and MTBF. Finally, its main attention is focused on chance failures and their prevention, reduction or complete elimination because it is the chance failures that most affect equipment reliability in actual operation. One method of accomplishing the above two goals is by the development and refinement of mathematical models. These models, properly structured, define and quantify the operation and usage of components and systems.
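The three failure regimes just described (early failures, chance failures, wearout) are commonly modeled with the Weibull distribution, whose shape parameter determines whether the hazard rate falls, stays flat, or rises over time. A sketch with illustrative parameters, not data from any real product:

```python
# The three failure regimes map naturally onto the Weibull hazard function
#   h(t) = (beta/eta) * (t/eta)**(beta - 1)
# beta < 1: decreasing hazard  -> early failures (debugging period)
# beta = 1: constant hazard    -> chance failures (reduces to the exponential model)
# beta > 1: increasing hazard  -> wearout

def weibull_hazard(t: float, beta: float, eta: float) -> float:
    """Instantaneous failure rate at time t for a Weibull(beta, eta) life model."""
    return (beta / eta) * (t / eta) ** (beta - 1)

eta = 10_000.0  # characteristic life in hours (hypothetical)
for beta, regime in [(0.5, "early failures"), (1.0, "chance failures"), (3.0, "wearout")]:
    h_early, h_late = weibull_hazard(1_000, beta, eta), weibull_hazard(9_000, beta, eta)
    trend = "falling" if h_late < h_early else ("flat" if h_late == h_early else "rising")
    print(f"beta={beta}: hazard is {trend} over time ({regime})")
```

Fitting beta and eta to observed failure times is one common way of “fitting the life model” mentioned in the goals above.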
No mechanical or electromechanical product will last forever without preventative maintenance and/or replacement of critical components. Reliability engineering seeks to discover the weakest link in the system or subsystem so that eventual product failure may be predicted and consequently forestalled. Operational interruptions may be eliminated by periodically replacing a part or an assembly of parts prior to failure. This predictive ability is achieved by knowing the mean time to failure (MTTF) and the mean time between failures (MTBF) for “mission critical” components and assemblies. With this knowledge, we can provide for continuous and safe operation, relative to a given set of environmental conditions and proper usage of the equipment itself. The test, analyze, and fix (TAAF) approach is used throughout reliability testing to discover which components are candidates for continuous “preventative maintenance” and possibly ultimate replacement. Sometimes designing redundancy into a system can prolong the operational life of a subsystem or system, but that is generally costly for consumer products; it is usually done only when the product absolutely must survive the most rigorous environmental conditions and circumstances. Most consumer products do not have redundant systems. Airplanes, medical equipment and aerospace equipment represent products that must have redundant systems for the sake of continued safety of those using the equipment. As mentioned earlier, at the very worst, we ALWAYS want our mechanism to “fail safe” with absolutely no harm to the end user or other equipment. This can be accomplished through engineering design and a strong adherence to accepted reliability practices. With this in mind, we start the process by recommending the following steps:
- Establish reliability goals and allocate reliability targets.
- Develop functional block diagrams for all critical systems
- Construct P-diagrams to identify and define KCPs and KNPs
- Benchmark current designs
- Identify the mission critical subsystems and components
- Conduct FMEAs
- Define and execute pre-production life tests; i.e. growth testing
- Conduct life predictions
- Develop and execute reliability audit plans
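Several of the steps above, particularly reliability allocation and the functional block diagrams, rest on simple block-diagram arithmetic for independent components: a series arrangement fails if ANY block fails, while a parallel (redundant) arrangement fails only if ALL blocks fail. A minimal sketch with hypothetical component reliabilities:

```python
# Series vs. parallel (redundant) system reliability for independent components.
# Component reliabilities below are hypothetical, for illustration only.

def series_reliability(component_reliabilities):
    """System works only if every component works: R = product of all R_i."""
    r_sys = 1.0
    for r in component_reliabilities:
        r_sys *= r
    return r_sys

def parallel_reliability(component_reliabilities):
    """System works if at least one component works: R = 1 - product of (1 - R_i)."""
    f_sys = 1.0
    for r in component_reliabilities:
        f_sys *= (1.0 - r)
    return 1.0 - f_sys

components = [0.95, 0.95]
print(f"Two 95% components in series:   {series_reliability(components):.4f}")
print(f"Same two components in parallel: {parallel_reliability(components):.4f}")
```

The same two parts yield a noticeably less reliable system in series and a markedly more reliable one in parallel, which is exactly why redundancy is reserved for mission-critical subsystems despite its cost.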
It is appropriate to mention now that this document assumes the product design is, at least, in the design confirmation phase of the development cycle and we have been given approval to proceed. Most NPI (New Product Introduction) methodologies carry a product through design guidance, design confirmation, pre-pilot, pilot and production phases. Generally, at the pre-pilot point, the design is solidified so that evaluation and reliability testing can be conducted with assurance that any and all changes will be fairly minor and will not involve a “wholesale” redesign of any component or subassembly. This is not to say that when “mission critical components” fail we do not make all efforts to correct the failure(s) and put the product back into reliability testing. At the pre-pilot phase, the market surveys, consumer focus studies and all of the QFD work have been accomplished and we have tentative specifications for our product. Initial prototypes have been constructed and upper management has “signed off” and given approval to proceed into the next development cycles of the project. ONE CAUTION: Any issues involving safety of use must be addressed regardless of how extensive the changes necessary for an adequate “fix” may become. This is imperative and must occur if failures arise, no matter what phase of the program is in progress.
Critical to these efforts will be conducting HALT (Highly Accelerated Life Testing) and HAST (Highly Accelerated Stress Testing) to “make the product fail”. This will involve DOE (Design of Experiments) planning to quantify AND verify FMEA estimates. Significant time may be saved by carefully structuring a reliability evaluation plan to be accomplished at the component, subsystem and system levels. If you couple these tests with appropriate field-testing, you will develop a product that will “go the distance” relative to your goals and stay well within your SCR (Service Call Rate) requirements. Reliability testing must be an integral part of the basic design process and time must be given to this effort. The NPI process always includes reliability testing and the assessment of the results from that testing. Invariably, some degree of component or subsystem redesign results from HALT or HAST because weaknesses are made known that can and will be eliminated by redesign. In times past, engineering practice has been to assign a “safety factor” to any design. This safety factor takes into consideration “unknowns” that may affect the basic design. Unfortunately, this may produce a design that is structurally robust but still fails due to Key Noise Parameters (KNPs) or Key Control Parameters (KCPs).
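As one concrete example of how accelerated-test results are related back to normal use conditions, the Arrhenius model is commonly applied for temperature-driven failure mechanisms. This is a sketch under stated assumptions; the activation energy is a hypothetical value, not a figure for any specific component:

```python
import math

# Arrhenius acceleration factor between an elevated stress temperature and
# the normal use temperature:
#   AF = exp((Ea/k) * (1/T_use - 1/T_stress)),  temperatures in kelvin.
# The activation energy Ea below (0.7 eV) is hypothetical.

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV per kelvin

def arrhenius_af(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Acceleration factor between stress and use temperatures (in Celsius)."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

af = arrhenius_af(ea_ev=0.7, t_use_c=40.0, t_stress_c=110.0)
print(f"Acceleration factor: {af:.0f}x")
# Under this model, 1,000 hours at 110 C stands in for roughly af * 1,000 hours at 40 C.
```

This is why a few weeks in an accelerated chamber can speak to years of field life, provided the failure mechanism really is temperature-driven and the model assumptions hold.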
As you might expect, this is a “lick and a promise” relative to the subject of reliability. It’s a very complex subject but one that has provided remarkable life and quality to consumer and commercial products. I would invite you to take a look at the literature and further your understanding of the “ins and outs” of the technology. As always, I welcome your comments.
July 29, 2014
Information for this post came from the NASA web site. All of the information relative to the program and the flight hardware is derived from same.
In my opinion, our country made a huge mistake in abdicating our hard-won position relative to manned space flight. Due to the very near-sighted government types in Washington D.C., we were perfectly willing to let the Russians carry our crews to and from the International Space Station (ISS). According to CNSNews.com – “Russia will charge the U.S. National Aeronautics and Space Administration (NASA) $71 million to transport just one American astronaut to the International Space Station aboard its Soyuz spacecraft in 2016”. That’s more than triple the $22 million per seat charged in 2006, according to a July 8 audit report by NASA’s inspector general. NASA, at this time, has little choice but to pay Russia’s inflated ticket prices. In August of 2011, the U.S. space agency retired its 30-year-old space shuttle program, and now NASA has no way of getting American astronauts to the space station. The Russian Soyuz is “the only vehicle capable of transporting crew to the ISS”. During the second half of 2011, the price per seat jumped to $43 million. The prices of seats for launches in 2014 and 2015 are $55.6 million and $60 million, respectively, the audit report noted. Again, in 2016, $71 million for a “ride” to the ISS. Could we not see that coming? Are they so blind in Washington that the obvious is overlooked? (Maybe we were trying to improve our golf game or possibly attending a fund raiser.) With issues in Crimea and Ukraine, we may be denied altogether.
Well, NASA does have one program, ORION, that promises to get manned-space efforts back on track. ORION will push the envelope and investigate manned-space flight well beyond low Earth orbit (LEO).
The spacecraft will launch on Exploration Flight Test-1 (EFT-1), an un-crewed mission planned for this year, 2014. This test will see Orion travel farther into space than any human-rated spacecraft has gone in more than 40 years. EFT-1 data will influence design decisions, validate existing computer models and innovative new approaches to space systems development, as well as reduce overall mission risks and costs. Lockheed Martin is the prime contractor for the EFT-1 flight. EFT-1 will take Orion to an altitude of approximately 3,600 miles above the Earth’s surface, more than 15 times farther than the International Space Station’s orbital position. By flying Orion out to those distances, NASA will be able to see how hardware and software perform in, and return from, deep space journeys. A depiction of EFT-1 may be seen in the graphic below. As you can see, the launch vehicle will be the DELTA IV Heavy rocket.
The Orion flight test vehicle is comprised of five primary elements which will be operated and evaluated during the test flight:
- The Launch Abort System (LAS) – Propels the Orion Crew Module to safety in an emergency during launch or ascent
- The Orion Crew Module (CM) – Houses and transports NASA’s astronauts during spaceflight missions
- The Service Module (SM) – Contains Orion’s propulsion, power and life-support systems
- The Spacecraft Adaptor and Fairings – Connects Orion to the launch vehicle
- The Multi-Purpose Crew Vehicle to Stage Adaptor (MSA) – Connects the entire vehicle structure to the kick stage of the rocket
The JPEGs below will indicate the basic configuration of the system and the five (5) modules comprising the “complete package”.
In the very first un-manned test mission, the following targets and goals will be explored:
- Programmatic Risk Reduction – Critical flight data collected from EFT-1 will validate Orion’s ability to withstand re-entry speeds approaching 20,000 miles per hour and safely return astronauts to Earth. Reentry at these speeds has not been attempted since the Apollo era, more than 40 years ago. The ablative shields will be given a remarkable test during reentry. Other systems will be evaluated relative to reducing possible risks.
- Technical Risk Reduction – Valuable data about key systems functions and capabilities such as kick stage processing on the launch pad, vehicle fueling and stacking, and crew module recovery will ensure these systems are designed and built correctly.
- Demonstrates Efficiencies – Gives NASA the chance to continue to refine its production and coordination processes, aligning with the agency’s commitment to build the world’s most cutting-edge spacecraft in the most cost-efficient manner. We sometimes look at the entirety of the assembly and fail to realize the tremendous number of individual components needing to network and perform together. This includes the redundant systems certainly required for a complex mission such as this.
- Enhances and Sustains Industry Partnerships – Orion’s design teams will gain important experience and training to ensure the industry is prepared for a launch of Orion in 2017 aboard the SLS. John Donne said: “No man is an Island, entire of itself; every man is a piece of the Continent, a part of the main.” NASA-ORION is the very same way. The teams will be evaluated as well as the “hardware” to make sure continued success is obtainable and everyone is on board relative to work assignments and job duties.
- Skill Sustainment – Focusing on mission flight-test objectives helps to reduce or eliminate risks to crew and refines Orion core-systems development. This is a big objective. Everyone comes home, and not in a body bag. The crew must remain safe at all times during takeoff, the mission and reentry.
The next few years will be exciting years for NASA, and ORION will definitely get us back into space. Manned missions will once again be on the agenda. Hopefully, the time away from manned flight will be no detriment to success, and mission-critical components will meet the demands of NASA engineers and scientists. I welcome your comments.
July 12, 2014
I really don’t know how I missed this one. This document deals with “phone sats”. You can get a better feel for the technology by taking a look at NASA press release 13-107. Let’s do that right now.
NASA Successfully Launches Three Smartphone Satellites
WASHINGTON — Three smartphones destined to become low-cost satellites rode to space Sunday aboard the maiden flight of Orbital Science Corp.’s Antares rocket from NASA’s Wallops Island Flight Facility in Virginia.
The trio of “PhoneSats” is operating in orbit, and may prove to be the lowest-cost satellites ever flown in space. The goal of NASA’s PhoneSat mission is to determine whether a consumer-grade smartphone can be used as the main flight avionics of a capable, yet very inexpensive, satellite.
Transmissions from all three PhoneSats have been received at multiple ground stations on Earth, indicating they are operating normally. The PhoneSat team at the Ames Research Center in Moffett Field, Calif., will continue to monitor the satellites in the coming days. The satellites are expected to remain in orbit for as long as two weeks.
“It’s always great to see a space technology mission make it to orbit — the high frontier is the ultimate testing ground for new and innovative space technologies of the future,” said Michael Gazarik, NASA’s associate administrator for space technology in Washington.
“Smartphones offer a wealth of potential capabilities for flying small, low-cost, powerful satellites for atmospheric or Earth science, communications, or other space-borne applications. They also may open space to a whole new generation of commercial, academic and citizen-space users.”
Satellites consisting mainly of the smartphones will send information about their health via radio back to Earth in an effort to demonstrate they can work as satellites in space. The spacecraft also will attempt to take pictures of Earth using their cameras. Amateur radio operators around the world can participate in the mission by monitoring transmissions and retrieving image data from the three satellites. Large images will be transmitted in small chunks and will be reconstructed through a distributed ground station network. The JPEGs below give an indication of the orbit.
The systems are now operating properly and orbiting Earth delivering information that will be used in evaluating the program. I feel NASA has married the private and public sectors to produce workable technology that will represent much lower costs yet, hopefully, the same results. Time will tell. According to Chad Frost, Chief of the Mission Design Division at NASA Ames, “We all carry around smartphones these days, so we’re intimately familiar with what a smartphone is and what it can do. And a few years ago, we had the intriguing idea that you might actually be able to build a spacecraft around a smartphone. So, we were very intrigued by the notion that you could build a very small spacecraft based entirely on consumer electronics devices and other low-cost systems.”
The configuration may be seen in the following JPEG:
PhoneSat is a nano-satellite, a category for spacecraft with a mass between one and ten kilograms. Additionally, PhoneSat is a 1U CubeSat, having a volume of around one liter. The PhoneSat Project strives to decrease the cost of satellites while not sacrificing performance. In an effort to achieve this goal, the project is based around Commercial Off-The-Shelf (COTS) electronics to provide functionality for as many parts as possible while still creating a reliable satellite. Two copies of PhoneSat 1.0 were launched in mid-April 2013 along with an early prototype of PhoneSat 2.0 referred to as PhoneSat 2.0.beta. PhoneSat 2.4 is sitting on the launch pad ready for lift-off. The PhoneSats use a Google Nexus smartphone running the Android 2.3.3 operating system. Two of the PhoneSats have standard smartphone cameras that were used to take images of Earth from space. The first JPEG in this post shows one of those pictures.
Now, here is a fact that blows me away. NASA engineers kept the total cost of the components for the three prototype satellites in the PhoneSat project between $3,500 and $7,000 by using primarily commercial hardware and keeping the design and mission objectives to a minimum.
NASA added items a satellite needs that the smartphones do not have — a larger, external lithium-ion battery bank and a more powerful radio for messages it sends from space. The smartphone’s ability to send and receive calls and text messages has been disabled. Each smartphone is housed in a standard cubesat structure, measuring about 4 inches square. The smartphone acts as the satellite’s onboard computer. Its sensors are used for attitude determination and its camera for Earth observation.
There are several phases to “powering-up” the PhoneSat system. These are as follows:
Phase 1: After the initialization phase, the phone enters phase 1, in which it performs a health check. During this phase, each sensor and subsystem is checked and the data is compiled into a standard health packet, stored on the smartphone’s SD card and transmitted over the beacon radio at a regular interval of 30 seconds. The last 10 health packets are stored on the SD card. After every 10 packets sent, the beacon radio is rebooted. This phase happens during the first 24 hours of the mission. The mission time is kept in the phone throughout the mission so that a system reboot during this phase does not reset the 24-hour countdown. A health packet consists of: Satellite ID, restart counter, reboot counter, Phase 1 count, Phase 2 count, time, battery voltage, temp 1, temp 2, accel X, accel Y, accel Z, Mag X, Mag Y, Mag Z, and the text “hello from the avcs”.
Phase 2: This phase starts once a full system health check has been performed. During this phase, image packets and health packets are sent to Earth through the beacon radio. A health packet is sent once for every 9 image packets downlinked.
This phase can be divided in 3 sub-phases:
• Health Data Measurements: Health data is measured and the 10 most recent samples are stored in the SD card.
• Health Data Downlink: Once 9 packets have been sent through the beacon containing image information, the 10th one is reserved for a health packet.
• Image Sequence: One picture is taken every minute until 100 pictures are taken and stored to the SD card. Pictures are then analyzed and the top image is selected. This image is packetized and compiled into standard image packets. These image packets are transmitted over the beacon radio coupled with health packets in the ratio explained above.
Safe Mode: If the watchdog detects that the phone is not sending any data to the radio for a certain period of time, the spacecraft functionality is reduced to the bare minimum. In this condition, the spacecraft only transmits health data containing the last 10 sensor data values stored in the SD card prior to failure. This mode lasts for 90 minutes. After this period, the spacecraft resumes its normal operations. A safe mode packet consists of: Satellite_ID, last 10 voltage values, last 10 temperature sensor 1 values, last 10 temperature sensor 2 values, text “SAFEMODE”.
The timeline for research and development started in 2009. Definite planning has gone into the program. You may see that timeline below.
As mentioned above, PhoneSat 2.4 is already scheduled for launch later this year, 2014. The technology is definitely evolving. NASA is working towards extremely low-cost deployments that provide workable communications to government agencies and private concerns.
I welcome your comments.