September 13, 2014
The data presented in this post results from work accomplished by The Pew Charitable Trusts.
I have a client located thirty-seven (37) miles from my business office. Fortunately, my commute to their facility is via our interstate highway system. It is absolutely amazing what I see traveling those seventy-four (74) miles most days of the work week. I see people reading the morning paper, ladies applying makeup, every third person talking on a cell phone or texting, people reading books, and TONS of people, mostly younger individuals, rocking out to the music they undoubtedly love. One unmistakable fact: you can’t miss the number of “big rigs” moving across our country. Regardless of the time of day, they are out in force.
Let’s take a look at several very interesting statistics relative to transportation:
- The transportation sector accounts for seventy percent (70%) of all petroleum consumption in the United States.
- Medium and heavy-duty trucks, using 2011 figures, represent seven percent (7%) of all vehicles on the road but consume twenty-five percent (25%) of the fuel used by all vehicles.
- In 2013, trucks consumed 2.7 million barrels of petroleum per day.
- Fuel is the single largest cost of owning and operating heavy-duty trucks, with the average cost per vehicle being $73,000 per year.
- The average fuel consumption for an “eighteen-wheeler” is six and one-half (6.5) miles per gallon.
- Goods and services provided by these trucks account for an average of $1,100 per year in added expense for each consumer. This is an indirect cost passed on to the purchaser.
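The figures above can be cross-checked with a quick back-of-the-envelope calculation. Note that the diesel price used below is my own assumption (roughly $4.00 per gallon, typical for this period), not a figure from the source data:

```python
# Sanity-check the annual fuel cost figure for a heavy-duty truck.
ANNUAL_FUEL_COST = 73_000   # dollars per year (from the statistics above)
MPG = 6.5                   # miles per gallon for an eighteen-wheeler
DIESEL_PRICE = 4.00         # dollars per gallon (assumed, not from the source)

gallons_per_year = ANNUAL_FUEL_COST / DIESEL_PRICE
miles_per_year = gallons_per_year * MPG

print(f"Gallons burned per year: {gallons_per_year:,.0f}")
print(f"Implied annual mileage:  {miles_per_year:,.0f}")
```

At roughly 118,000 miles per year, the implied mileage is consistent with a long-haul tractor-trailer, so the $73,000 figure passes a rough plausibility check.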
These facts are definitely eye-opening to me. In August of 2012, the U.S. Environmental Protection Agency (EPA) and the National Highway Traffic Safety Administration (NHTSA) finalized fuel efficiency and emission standards setting a CAFE (Corporate Average Fuel Economy) target of 54.5 MPG for light-duty trucks and passenger vehicles. These fuel consumption regulations take full effect in 2025. In September of 2011, the first-ever standards for medium and heavy-duty trucks were finalized by the same agencies. This standard covers the 2014 through 2018 model years. Data is now being accumulated to further refine the requirements.
These standards hope to bring about the following beneficial conditions:
- A $50 billion reduction in fuel cost to transportation companies. Truck owners would save approximately $30,000 per year per truck.
- A reduction in carbon pollution of approximately 270 million metric tons over the life of the program.
- A net fleet savings of $0.21 per mile, with the investment recovered in roughly thirteen months.
- Saving 1.4 million barrels of petroleum per day.
- Reduction of indirect costs to consumers of $250 in the near term and $450 over the longer term.
- Reduction of air-borne particulate saving health cost by $1.3 billion to $4.2 billion by 2030.
I think these goals are achievable but do present engineering challenges to auto and truck designers and manufacturers. We are now seeing great efforts towards compliance with designers looking at the following areas:
- Design of more efficient engines.
- Using computational fluid dynamics (CFD) methodology to investigate air flow around truck bodies.
- Lighter composite structures and materials to reduce the overall weight of cabs and trailers.
- Using alternate fuels such as CNG (compressed natural gas), fuel cells and on-board hydrogen production.
- Reduction or elimination of engine idling when a semi-truck is stationary.
- Disengagement of rotating gears when a truck is stopped.
All efforts are exploratory at this point but great progress is being made to meet the requirements. I would love to hear from you relative to this post.
September 6, 2014
The following resources were used to produce this post: Internet Society, “Global Internet Report 2014”; SITEOPEDIA; HELPGUIDE.ORG; and BBC News, “The Age of Internet Overload”.
WHAT IS THE INTERNET:
According to the Internet Society’s “Global Internet Report”, “The Internet is a uniquely universal platform that uses the same standards in every country, so that every user can interact with every other user in ways unimaginable 10 years ago, regardless of the multitude of changes taking place.”
This statement sums it up in a very precise fashion. The Internet has undoubtedly changed the entire world. Open access to the Internet has revolutionized the way individuals communicate and collaborate, entrepreneurs and corporations conduct business, and governments and citizens interact. At the same time, the Internet established a revolutionary open model for its own development and governance, encompassing all stakeholders. Fundamentally, the Internet is a ‘network of networks’ whose protocols are designed to allow networks to interoperate. In the very beginning, these networks represented different academic, government, and research communities whose members needed to cooperate to develop common standards and manage joint resources. Later, as the Internet was commercialized, vendors and operators joined the open protocol development process and helped unleash the unprecedented era of growth and innovation.
INTERNET PENETRATION BY COUNTRY:
If we look at global Internet penetration by country, we see the following:
Internet penetration is substantial in virtually every country outside the third world, and daily Internet usage approaches one billion individuals. With numbers such as these, there should be no doubt that some users will develop obsessive-compulsive patterns amounting to addiction. That being the case, what is Internet addiction?
Internet Addiction, otherwise known as computer addiction, online addiction, or Internet addiction disorder (IAD), covers a variety of impulse-control problems, including:
- Cybersex Addiction – compulsive use of Internet pornography, adult chat rooms, or adult fantasy role-play sites, negatively impacting real-life intimate relationships. The Internet is the cheapest, fastest, and most anonymous pornography source. Internet pornographers made over $1 billion in revenues selling their merchandise online. The threat of pornography over the Internet cannot be discounted: 70 percent of children viewing pornography on the Internet do so in public schools and libraries (The Internet Online Summit, 1997). All of us realize that we are surrounded by various forms of pornography, whether noticing the “adult” section of videos at Blockbuster, surfing the Internet, seeing advertising which is clearly sexually suggestive, or innocently going to a movie that just happens to have some kind of sex scene.
- Cyber-Relationship Addiction – addiction to social networking, chat rooms, texting, and messaging to the point where virtual, online friends become more important than real-life relationships with family and friends. Facebook has 1.4 billion profiles, and 1.06 billion of those (or 15 percent of the world’s population) use Facebook regularly. Of those, 78 percent of users access Facebook on a mobile device a minimum of once a month. Every second, there are 8,000 likes on Instagram. Instagram launched in 2010, and boasts 200 million active users in 2014, with over 75 million users daily. Google+ has over 540 million profiles and over 300 million monthly active users. LinkedIn, launched in 2003, has 300 million users, and an average of two new members per second. Forty percent of users on LinkedIn check the site daily, and Mashable is the LinkedIn company with the most engaged following.
- Net Compulsions – such as compulsive online gaming, gambling, stock trading, or compulsive use of online auction sites such as eBay, often resulting in financial and job-related problems. Obsessive playing of off-line computer games, such as Solitaire or Minesweeper, or obsessive computer programming.
- Information Overload – compulsive web surfing or database searching, leading to lower work productivity and less social interaction with family and friends. An average US citizen on an average day consumes 100,500 words, whether that is email, messages on social networks, searching websites or anywhere else digitally. Take a look at the global statistics given below and consider what happens in sixty (60) seconds:
- 168 million e-mails sent
- 694,445 Google searches launched
- 695,000 Facebook updates attempted
- 370,000 Skype calls made
- 98,000 Tweets accomplished
- 20,000 new posts on TUMBLR
- 13,000 iPhone apps downloaded
- 6,600 new pictures on Flickr
- 1,500 new blog entries posted (just like this one)
- 600+ videos posted totaling over 25 hours duration on YouTube
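To put those per-minute figures in perspective, a few of them can be scaled to a full day with simple arithmetic (the per-minute counts are taken directly from the list above):

```python
# Scale selected per-minute Internet statistics to per-day totals.
MINUTES_PER_DAY = 60 * 24  # 1,440 minutes in a day

per_minute = {
    "e-mails sent": 168_000_000,
    "Google searches": 694_445,
    "Tweets": 98_000,
}

for activity, count in per_minute.items():
    print(f"{activity}: {count * MINUTES_PER_DAY:,} per day")
```

Per-minute e-mail volume alone implies roughly 242 billion messages a day, which shows how quickly these per-minute figures compound.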
The most common of these Internet addictions are cybersex, online gambling, and cyber-relationship addiction. Talk about busy.
SIGNS AND SYMPTOMS:
Signs and symptoms of Internet addiction vary from person to person. For example, there are no set hours per day or number of messages sent that indicate Internet addiction. But here are some general warning signs that your Internet use may have become a problem:
- Losing track of time online. Do you frequently find yourself on the Internet longer than you intended? Do a few minutes turn into a few hours? Do you get irritated or cranky if your online time is interrupted? From a business standpoint, I have often heard the Internet called a “black hole” when it comes to wasting time, primarily due to net-surfing. I will admit that in my work as a consulting engineer I use the Internet on a daily basis to investigate vendors and companies supplying services that complement my work. I don’t really consider this wasting time; it actually saves the time I would otherwise spend on research through phone calls, magazine searches, searches through the Thomas Register, etc.
- Having trouble completing tasks at work or home. Do you find laundry piling up and little food in the house for dinner because you’ve been busy online? Perhaps you find yourself working late more often because you can’t complete your work on time—then staying even longer when everyone else has gone home so you can use the Internet freely.
- Isolation from family and friends. Is your social life suffering because of all the time you spend online? Are you neglecting your family and friends? Do you feel like no one in your “real” life—even your spouse—understands you like your online friends?
- Feeling guilty or defensive about your Internet use. Are you sick of your spouse nagging you to get off the computer or put your smart phone down and spend time together? Do you hide your Internet use or lie to your boss and family about the amount of time you spend on the computer or mobile devices and what you do while you’re online?
- Feeling a sense of euphoria while involved in Internet activities. Do you use the Internet as an outlet when stressed, sad, or for sexual gratification or excitement? Have you tried to limit your Internet time but failed?
If we look at Internet usage relative to addiction, we see the following for the United States:
This calculates to 988 hours per year for men and 728 hours per year for women. How much time do you spend per year reading a good book, calling your mother, taking a course at a local technical school or university, volunteering in your community, etc? Have you improved your reading speed and reading comprehension lately? You get the picture.
SITEOPEDIA has conducted polls that indicate significant addiction can result from Internet usage. The graphic below will highlight the results of that poll. Note: those indicating they are not addicted may just be lying. The real rates of addiction are estimates at best.
Those indicating they are addicted might consider the following recourse:
- Recognize any underlying problems that may support your Internet addiction. If you are struggling with depression, stress, or anxiety, for example, Internet addiction might be a way to self-soothe rocky moods. Have you had problems with alcohol or drugs in the past? Does anything about your Internet use remind you of how you used to drink or use drugs to numb yourself? Recognize if you need to address treatment in these areas or return to group support meetings.
- Build your coping skills. Perhaps blowing off steam on the Internet is your way of coping with stress or angry feelings. Or maybe you have trouble relating to others, or are excessively shy with people in real life. Building skills in these areas will help you weather the stresses and strains of daily life without resorting to compulsive Internet use.
- Strengthen your support network. The more relationships you have in real life, the less you will need the Internet for social interaction. Set aside dedicated time each week for friends and family. If you are shy, try finding common interest groups such as a sports team, education class, or book reading club. This allows you to interact with others and let relationships develop naturally.
Modify your Internet use step by step:
- To help you see problem areas, keep a log of how much you use the Internet for non-work or non-essential activities. Are there times of day that you use the Internet more? Are there triggers in your day that make you stay online for hours at a time when you only planned to stay for a few minutes?
- Set goals for when you can use the Internet. For example, you might try setting a timer, scheduling use for certain times of day, or making a commitment to turn off the computer, tablet, or smart phone at the same time each night. Or you could reward yourself with a certain amount of online time once you’ve completed a homework assignment or finished the laundry, for instance.
- Replace your Internet usage with healthy activities. If you are bored and lonely, resisting the urge to get back online can be very difficult. Have a plan for other ways to fill the time, such as going to lunch with a coworker, taking a class, or inviting a friend over.
WHAT WE DO:
The fascinating thing about Internet usage is what we actually do with all that time. From the graphic below, we see legitimate usage of the Internet to accomplish “chores” and execute responsibilities. I think shopping online and paying bills certainly fall within reason.
Wasting time on the Internet is a matter of definition. Please keep in mind the graphic below indicates time per DAY. Left side men—right side women.
OK, now that I have your attention, where do we go next?
We just might be doomed as a society. Curb that habit. I welcome your comments:
September 1, 2014
There is absolutely no doubt the entire world is dependent upon the generation and transmission of electricity. Countries without electrical power are considered third-world countries with no immediate hope of improving lives and living conditions. And yet, there just may be alternatives to the generally held methods for generating electricity.
If we look at the definition for renewable energy, we see the following:
Renewable energy is derived from natural processes that are replenished constantly. In its various forms, it derives directly from the sun, or from heat generated deep within the earth. Included in the definition is electricity and heat generated from solar, wind, ocean, hydropower, biomass, geothermal resources, and biofuels and hydrogen derived from renewable resources.
We are all familiar with current methodologies for power generation. These are 1) hydroelectric, 2) nuclear, 3) coal-fired, 4) oil-fired, and 5) natural gas-fired generation. The graphic below indicates the percentage of each generation type for the United States. Other countries use generation methods relative to the availability of resources, political pressures, and cultural pressures. Germany is in the process of abandoning its use of nuclear energy for power generation; this is a cultural and political decision and not entirely based upon scientific considerations.
You will notice that renewable energy accounted for approximately 12.9 percent of total generation within the United States in 2013. Please note also that hydroelectric is considered to be a source of renewable energy. This is shown by the graphic below. To break this down even further, we look at the following:
Renewable energy is represented by five (5) categories:
One additional possibility is generation of electricity by virtue of tidal processes. This technology is in its infancy, with work being accomplished on a “demonstration” scale. It is an up-and-coming methodology but right now does not enjoy a place within the list above.
Just how much energy results from each renewable category?
From the above we see there has been growing dependence upon renewable technology as a source of electricity. Wind and biomass production are increasing while hydroelectric is decreasing; geothermal and solar remain about the same. The increase in energy production from biomass is very significant.
The Energy Information Agency (EIA) has collected the following data:
Why should governments and independent companies continue to consider renewable energy as a source of power? There are compelling reasons.
- ENVIRONMENTAL BENEFITS– For the most part, renewable sources of energy have minimal negative impact on our environment. They are paramount in reducing carbon dioxide emissions. Millions of people are exposed to toxic fumes from cooking fuels and kerosene lanterns, emissions from automobiles and energy sources for generating electricity. All result in chronic eye and lung conditions. Countries such as China and India have days where atmospheric particulate requires masks or face coverings when prolonged periods of outdoor activity are needed.
- ENERGY FOR THE FUTURE—Coal, oil, natural gas, and even nuclear energy are non-renewable sources of energy. Once exhausted, they are gone forever, so stretching their supply is paramount. We will never completely remove ourselves from a petroleum-based economy; too many by-products are made from petroleum. It is fantasy to expect total elimination of petroleum usage.
- JOBS AND ECONOMY—Investments in hardware and infrastructure for renewable energy require money but can create jobs. If you have been following the insanity surrounding approval of the Keystone Pipeline, you know the argument. On a global basis, we can see the following (PLEASE NOTE: the numbers are in billions of US dollars):
The point of this graph is to show the increasing investment in R & D and infrastructure for renewable energy generation.
- ENERGY SECURITY– The U.S. imported approximately 10.6 million barrels per day of petroleum in 2012 from about 80 countries. We exported 3.2 MMbd of crude oil and petroleum products, resulting in net imports (imports minus exports) equaling 7.4 MMbd. Net imports accounted for 40% of the petroleum consumed in the United States, the lowest annual average since 1991.
“Petroleum” includes crude oil and refined petroleum products like gasoline, and biofuels like ethanol and biodiesel. In 2012, about 80% of gross petroleum imports were crude oil, and about 57% of all crude oil that was processed in U.S. refineries was imported.
The top five source countries of U.S. petroleum imports in 2012 were Canada, Mexico, Saudi Arabia, Venezuela, and Russia. Their respective rankings vary based on gross petroleum imports or net petroleum imports (gross imports minus exports). Net imports from OPEC countries accounted for 55% of U.S. net imports.
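The import arithmetic above is easy to verify, and it also lets us back out total U.S. consumption; the calculation below uses only the figures cited in this section:

```python
# Check the 2012 petroleum import arithmetic cited above (EIA figures).
gross_imports = 10.6     # million barrels per day (MMbd)
exports = 3.2            # MMbd
net_import_share = 0.40  # net imports as a fraction of U.S. consumption

net_imports = gross_imports - exports
implied_consumption = net_imports / net_import_share

print(f"Net imports: {net_imports:.1f} MMbd")
print(f"Implied U.S. consumption: {implied_consumption:.1f} MMbd")
```

The implied total consumption of roughly 18.5 million barrels per day is consistent with published EIA figures for 2012, so the 40% net-import share checks out.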
One disadvantage with renewable energy is that it is difficult to generate the quantities of electricity that are as large as those produced by traditional fossil fuel generators. This may mean that we need to reduce the amount of energy we use or simply build more energy facilities. It also indicates that the best solution to our energy problems may be to have a balance of many different power sources.
Another disadvantage of renewable energy sources is the reliability of supply. Renewable energy often relies on the weather for its source of power. Hydro generators need rain to fill dams to supply flowing water. Wind turbines need wind to turn the blades, and solar collectors need clear skies and sunshine to collect heat and make electricity. When these resources are unavailable, so is the capacity to make energy from them. This can be unpredictable and inconsistent. The current cost of renewable energy technology is also far in excess of that of traditional fossil fuel generation, because it is a new technology and as such carries extremely large capital costs.
CONCLUSIONS: It remains right and proper that the United States and other countries continue research and development relative to renewable sources of energy. The cost of power generation is increasing, and depletion of non-renewable sources is of great concern. We must continue efforts to improve renewable power technologies to reduce the cost of infrastructure and delivery.
I would welcome your comments: firstname.lastname@example.org
August 23, 2014
The other day I was visiting a client and discussing a project involving the application of a robotic system to an existing work cell. The process is somewhat complex and we all questioned which employee would manage the operation of the cell including the system. The system is a SCARA type. SCARA is an acronym for Selective Compliance Assembly Robot Arm or Selective Compliance Articulated Robot Arm.
In 1981, Sankyo Seiki, Pentel and NEC presented a completely new concept for assembly robots. The robot was developed under the guidance of Hiroshi Makino, a professor at the University of Yamanashi and was called the Selective Compliance Assembly Robot Arm or SCARA.
SCARAs are generally faster and cleaner than comparable Cartesian (X, Y, Z) robotic systems. Their single pedestal mount requires a small footprint and provides an easy, unhindered form of mounting. On the other hand, SCARAs can be more expensive than comparable Cartesian systems, and the controlling software requires inverse kinematics for linearly interpolated moves. This software typically comes with the SCARA, however, and is usually transparent to the end user. The SCARA system used in this work cell had a capacity of 100 programs with 100 data points per program. It was programmed via a “teach pendant” and a “jog” switch controlling the placement of the robotic arm over the material.
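The inverse-kinematics requirement mentioned above can be illustrated with a minimal sketch for a two-link planar arm, which is the kinematic core of a SCARA. The link lengths and target point below are arbitrary illustration values, not parameters of any particular robot:

```python
import math

def scara_ik(x, y, l1, l2):
    """Inverse kinematics for a two-link planar arm (SCARA shoulder/elbow).

    Returns the two joint angles (radians) that place the tool point at
    (x, y), using one of the two mirror-image solutions. Raises ValueError
    if the target is out of reach.
    """
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    cos_t2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_t2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(cos_t2)
    # Shoulder angle: angle to the target minus the offset from link 2.
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

# Example: reach the point (1.2, 0.5) with two 1.0 m links.
theta1, theta2 = scara_ik(1.2, 0.5, 1.0, 1.0)
print(f"shoulder: {math.degrees(theta1):.1f} deg, "
      f"elbow: {math.degrees(theta2):.1f} deg")
```

A linearly interpolated move is then just this calculation repeated at closely spaced points along the straight-line path between two taught positions, which is exactly the work the bundled controller software hides from the end user.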
Several names were mentioned as to who might ultimately, after training, be capable of taking on this task. When one individual was named, the retort was, “Not James, he is only half smart.” That got me to thinking about “smarts”. How smart is smart? At what point do we say smart is smart enough?
IQ CHARTS—WHO’S SMART
The concept of IQ, or intelligence quotient, was developed either by the German psychologist and philosopher Wilhelm Stern in 1912 or by Lewis Terman in 1916, depending on which of several sources you consult. Large-scale intelligence testing was accomplished even before either of these dates: in 1904, psychologist Alfred Binet was commissioned by the French government to create a testing system to differentiate intellectually normal children from those who were inferior.
From Binet’s work the IQ scale called the “Binet Scale,” (and later the “Simon-Binet Scale”) was developed. Sometime later, “intelligence quotient,” or “IQ,” entered our vocabulary. Lewis M. Terman revised the Simon-Binet IQ Scale, and in 1916 published the Stanford Revision of the Binet-Simon Scale of Intelligence (also known as the Stanford-Binet).
Intelligence tests are one of the most popular types of psychological tests in use today. On the majority of modern IQ tests, the average (or mean) score is set at 100 with a standard deviation of 15 so that scores conform to a normal distribution curve. This means that 68 percent of scores fall within one standard deviation of the mean (that is, between 85 and 115), and 95 percent of scores fall within two standard deviations (between 70 and 130). This may be shown from the following bell-shaped curve:
Why is the average score set to 100? Psychometricians, individuals who specialize in psychological measurement, utilize a process known as standardization in order to make it possible to compare and interpret the meaning of IQ scores. This process is accomplished by administering the test to a representative sample and using these scores to establish standards, usually referred to as norms, by which all individual scores can be compared. Since the average score is 100, experts can quickly assess individual test scores against the average to determine where these scores fall on the normal distribution.
The following scale resulted for classifying IQ scores:
Over 140 – Genius or almost genius
120 – 140 – Very superior intelligence
110 – 119 – Superior intelligence
90 – 109 – Average or normal intelligence
80 – 89 – Dullness
70 – 79 – Borderline deficiency in intelligence
Under 70 – Feeble-mindedness
Normal Distribution of IQ Scores
From the curve above, we see the following:
50% of IQ scores fall between 90 and 110
68% of IQ scores fall between 85 and 115
95% of IQ scores fall between 70 and 130
99.5% of IQ scores fall between 60 and 140
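Those percentages follow directly from the normal distribution with mean 100 and standard deviation 15. A short check using the normal cumulative distribution function (built from the standard library's error function) reproduces them:

```python
import math

def normal_cdf(x, mean=100.0, sd=15.0):
    """Cumulative probability of the normal distribution at x."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def pct_between(lo, hi):
    """Percentage of IQ scores expected to fall between lo and hi."""
    return 100.0 * (normal_cdf(hi) - normal_cdf(lo))

for lo, hi in [(90, 110), (85, 115), (70, 130), (60, 140)]:
    print(f"{lo}-{hi}: {pct_between(lo, hi):.1f}%")
```

The computed values (roughly 49.5%, 68.3%, 95.4%, and 99.2%) match the rounded figures listed above.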
Low IQ & Mental Retardation
An IQ under 70 is considered to indicate “mental retardation,” or limited mental ability. Consistent with the distribution above, approximately 2.5% of the population falls below 70 on IQ tests. The severity of the mental retardation is commonly broken into 4 levels:
50-70 – Mild mental retardation (85%)
35-50 – Moderate mental retardation (10%)
20-35 – Severe mental retardation (4%)
IQ < 20 – Profound mental retardation (1%)
High IQ & Genius IQ
Genius or near-genius IQ is considered to start around 140 to 145. Less than 1/4 of 1 percent fall into this category. Here are some common designations on the IQ scale:
115-124 – Above average
125-134 – Gifted
135-144 – Very gifted
145-164 – Genius
165-179 – High genius
180-200 – Highest genius
We are told Albert Einstein (“Big Al”) had an IQ over 160, which would definitely qualify him as one of the most intelligent people on the planet.
Looking at demographics, we see the following:
As you can see, the percentage of individuals considered to be geniuses is quite small: roughly 0.50 percent by this estimate. OK, who are these people?
- Stephen Hawking
Dr. Hawking is a man of Science, a theoretical physicist and cosmologist. Hawking has never failed to astonish everyone with his IQ level of 160. He was born in Oxford, England and has proven himself to be a remarkably intelligent person. Hawking is an Honorary Fellow of the Royal Society of Arts, a lifetime member of the Pontifical Academy of Sciences, and a recipient of the Presidential Medal of Freedom, the highest civilian award in the United States. Hawking was the Lucasian Professor of Mathematics at the University of Cambridge between 1979 and 2009. Hawking has a motor neuron disease related to amyotrophic lateral sclerosis (ALS), a condition that has progressed over the years. He is almost entirely paralyzed and communicates through a speech generating device. Even with this condition, he maintains a very active schedule demonstrating significant mental ability.
- Andrew Wiles
Sir Andrew John Wiles is a remarkably intelligent individual. Sir Andrew is a British mathematician, a member of the Royal Society, and a research professor at Oxford University. His specialty is number theory. He proved Fermat’s Last Theorem, and for this effort he was awarded a special silver plaque. It is reported that he has an IQ of 170.
- Paul Gardner Allen
Paul Gardner Allen is an American business magnate, investor and philanthropist, best known as the co-founder of The Microsoft Corporation. As of March 2013, he was estimated to be the 53rd-richest person in the world, with an estimated wealth of $15 billion. His IQ is reported to be 170. He is considered to be the most influential person in his field and known to be a good decision maker.
- Judit Polgar
Born in Hungary in 1976, Judit Polgár is a chess grandmaster. She is by far the strongest female chess player in history. In 1991, Polgár achieved the title of Grandmaster at the age of 15 years and 4 months, the youngest person to do so until then. Polgar is not only a chess master but a certified brainiac with a recorded IQ of 170. She lived a childhood filled with extensive chess training given by her father. She defeated nine former and current world champions including Garry Kasparov, Boris Spassky, and Anatoly Karpov. Quite amazing.
- Garry Kasparov
Garry Kasparov has totally amazed the world with his outstanding IQ of more than 190. He is a Russian chess Grandmaster, former World Chess Champion, writer, and political activist, considered by many to be the greatest chess player of all time. From 1986 until his retirement in 2005, Kasparov was ranked world No. 1 for 225 months. Kasparov became the youngest ever undisputed World Chess Champion in 1985 at age 22 by defeating then-champion Anatoly Karpov. He held the official FIDE world title until 1993, when a dispute with FIDE led him to set up a rival organization, the Professional Chess Association. In 1997 he became the first world champion to lose a match to a computer under standard time controls, when he lost to the IBM supercomputer Deep Blue in a highly publicized match. He continued to hold the “Classical” World Chess Championship until his defeat by Vladimir Kramnik in 2000.
- Rick Rosner
Richard G. “Rick” Rosner (born May 2, 1960), gifted with an amazing IQ of 192, is an American television writer and media figure known for his high intelligence test scores and his unusual career. There are reports that he has achieved some of the highest scores ever recorded on IQ tests designed to measure exceptional intelligence. He has become known for taking part in activities not usually associated with geniuses.
- Kim Ung-Yong
With a verified IQ of 210, Korean civil engineer Kim Ung-Yong is considered to be one of the smartest people on the planet. He was born March 7, 1963 and was definitely a child prodigy. He started speaking at the age of 6 months and was able to read Japanese, Korean, German, English and many other languages by his third birthday. When he was four years old, his father said he had memorized about 2,000 words in both English and German. He was writing poetry in Korean and Chinese and wrote two very short books of essays and poems (less than 20 pages). Kim was listed in the Guinness Book of World Records under “Highest IQ”; the book gave the boy’s score as about 210. Guinness retired the “Highest IQ” category in 1990 after concluding IQ tests were too unreliable to designate a single record holder.
- Christopher Hirata
Christopher Hirata’s IQ is approximately 225, which is phenomenal. He was a genius from childhood. At the age of 16, he was working with NASA on the Mars mission. At the age of 22, he obtained a PhD from Princeton University. Hirata teaches astrophysics at the California Institute of Technology.
- Marilyn vos Savant
Marilyn Vos Savant is said to have an IQ of 228. She is an American magazine columnist, author, lecturer, and playwright who rose to fame as a result of the listing in the Guinness Book of World Records under “Highest IQ.” Since 1986 she has written “Ask Marilyn,” a Parade magazine Sunday column where she solves puzzles and answers questions on various subjects.
- Terence Tao
Terence Tao is an Australian mathematician working in harmonic analysis, partial differential equations, additive combinatorics, ergodic Ramsey theory, random matrix theory, and analytic number theory. He currently holds the James and Carol Collins chair in mathematics at the University of California, Los Angeles, where, at the age of 24, he became the youngest person ever promoted to full professor. He was a co-recipient of the 2006 Fields Medal and the 2014 Breakthrough Prize in Mathematics.
Tao was a child prodigy, one of the subjects in the longitudinal research on exceptionally gifted children by education researcher Miraca Gross. His father told the press that at the age of two, during a family gathering, Tao attempted to teach a 5-year-old child arithmetic and English. According to Smithsonian Online Magazine, Tao could carry out basic arithmetic by the age of two. When asked by his father how he knew numbers and letters, he said he learned them from Sesame Street.
OK, now before you go running to jump from the nearest bridge, consider the statement below:
Persistence—President Calvin Coolidge said it better than anyone I have ever heard. “Nothing in the world can take the place of persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent. The slogan “Press on” has solved and always will solve the problems of the human race.”
I personally think Calvin really knew what he was talking about. Most of us get it done by persistence!! ‘Nuff said.
August 23, 2014
It is a very sad day when we lose an American citizen and doubly sad when the loss is due to terrorist activity. James Wright Foley, a photojournalist, was captured by ISIS while filming in Syria. He was held captive for over a year and beheaded by that remarkably brutal terrorist organization this past week. The gruesome video, posted on YouTube, has now been taken down.
I have seen digital photographs of children, Christian children, beheaded by these thugs. They will stop at nothing to spread fear throughout the Middle East and eventually to Western powers unless stopped. They apparently are well-funded, well-organized and use American weapons left behind when Iraqi military forces deserted their posts. Those forces, for the most part, offered no resistance to the ISIS movements from Syria into Iraq and, of course, we provided no incentives for them to turn back. We watched and did nothing. We did not heed the warning and now it appears the “cat is out of the bag”.
I have no idea what our response, if any, will be but reality indicates we must do something to stop this spread of terror. The only manner seemingly effective is elimination—kill them. They cannot be reasoned with and diplomacy obviously will not be the path leading to resolution of this growing problem. If we look at those areas now controlled by ISIS, spanning large parts of Syria and northern Iraq, the scope of the threat is obvious.
We are told they are coming for us and will not be satisfied until their flag flies from our White House.
All indications are there will be no “boots on the ground”. If a military response is planned, it will be by virtue of air power. Maybe that will be enough, but who really knows? With that being the case, let’s look at what we have in our arsenal.
This is the era of the F-22 Raptor – the world’s premier 5th Generation fighter.
The F-22 is the only fighter capable of simultaneously conducting air-to-air and air-to-ground combat missions with near impunity. This is accomplished with a never-before-seen standard of survivability even in the face of sophisticated airborne and ground-based threats.
In addition to being America’s premier air-superiority fighter, the F-22 evolved from its original concept to become a lethal, survivable and flexible multi-mission fighter. By taking advantage of emerging technologies, the F-22 has emerged as a superior platform for many diverse missions including intelligence gathering, surveillance, reconnaissance and electronic attack.
The Raptor is operational today, protecting our homeland and combat ready for worldwide deployment. F-22s are already assigned to multiple bases across the country.
F-35 Lightning II
The Lockheed Martin F-35 Lightning II is a family of single-seat, single-engine, all weather stealth multirole fighters currently under development. The fifth generation combat aircraft is designed to perform ground attack, reconnaissance, and air defense missions. The F-35 has three main models: the F-35A conventional takeoff and landing (CTOL) variant, the F-35B short take-off and vertical-landing (STOVL) variant, and the F-35C carrier-based CATOBAR (CV) variant.
The F-35 is descended from the X-35, which was the winning design of the Joint Strike Fighter (JSF) program. It is being designed and built by an aerospace industry team led by Lockheed Martin. Other major F-35 industry partners include Northrop Grumman, Pratt & Whitney and BAE Systems. The F-35 took its first flight on 15 December 2006. The United States plans to buy 2,443 aircraft. The F-35 variants are intended to provide the bulk of the manned tactical airpower for the U.S. Air Force, Marine Corps and Navy over the coming decades. Deliveries of the F-35 for the U.S. military are scheduled to be completed in 2037. It should be noted here that problems do exist with this aircraft and it is not yet fully operational.
The F-15E Strike Eagle is a superior next generation multi-role strike fighter that is available today. Its unparalleled range, persistence and weapons load make it the backbone of the U.S. Air Force (USAF). A complement of the latest advanced avionics systems gives the Strike Eagle the capability to perform air-to-air or air-to-surface missions at all altitudes, day or night, in any weather.
The F-15 is a twin-engine, high-performance, all-weather air superiority fighter. First flown in 1972, the Eagle entered U.S. Air Force service in 1974. The Eagle’s most notable characteristics are its great acceleration and maneuverability. It was the first U.S. fighter with engine thrust greater than the basic weight of the aircraft, allowing it to accelerate while in a vertical climb. Its great power, light weight and large wing area combine to make the Eagle very agile.
The F-15 has been produced in single-seat and two-seat versions in its many years of USAF service. The two-seat F-15E Strike Eagle version is a dual-role fighter that can engage both ground and air targets. F-15C, -D, and -E models participated in OPERATION DESERT STORM in 1991, accounting for 32 of 36 USAF air-to-air victories and also attacking Iraqi ground targets. F-15s also served in Bosnia (1994), downed three Serbian MiG-29 fighters in OPERATION ALLIED FORCE (1999), and enforced no-fly zones over Iraq in the 1990s. Eagles also hit Afghan targets in OPERATION ENDURING FREEDOM, and the F-15E version performed air-to-ground missions in OPERATION IRAQI FREEDOM.
The General Dynamics (now Lockheed Martin) F-16 Fighting Falcon is a single-engine multirole fighter aircraft originally developed by General Dynamics for the United States Air Force (USAF). Designed as an air superiority day fighter, it evolved into a successful all-weather multirole aircraft. Over 4,500 aircraft have been built since production was approved in 1976. Although no longer being purchased by the U.S. Air Force, improved versions are still being built for export customers. In 1993, General Dynamics sold its aircraft manufacturing business to the Lockheed Corporation, which in turn became part of Lockheed Martin after a 1995 merger with Martin Marietta.
The F-117A Nighthawk is the world’s first operational aircraft designed to exploit low-observable stealth technology. The unique design of the single-seat F-117A provides exceptional combat capabilities. About the size of an F-15 Eagle, the twin-engine aircraft is powered by two General Electric F404 turbofan engines and has quadruple redundant fly-by-wire flight controls. Air refuelable, it supports worldwide commitments and adds to the deterrent strength of the U.S. military forces.
The first F-117A was delivered in 1982, and the last delivery was in the summer of 1990. The F-117A production decision was made in 1978 with a contract awarded to Lockheed Advanced Development Projects, the “Skunk Works,” in Burbank, Calif. The first flight was in 1981, only 31 months after the full-scale development decision. Lockheed-Martin delivered 59 stealth fighters to the Air Force between August 1982 and July 1990. Five additional test aircraft belong to the company.
The McDonnell Douglas (now Boeing) F/A-18 Hornet is a twin-engine supersonic, all-weather carrier-capable multirole combat jet, designed as both a fighter and attack aircraft (F/A designation for Fighter/Attack). Designed by McDonnell Douglas and Northrop, the F/A-18 was derived from the latter’s YF-17 in the 1970s for use by the United States Navy and Marine Corps. The Hornet is also used by the air forces of several other nations. The U.S. Navy’s Flight Demonstration Squadron, the Blue Angels, has used the Hornet since 1986.
The F/A-18 has a top speed of Mach 1.8 (1,190 mph or 1,915 km/h at 40,000 ft or 12,190 m). It can carry a wide variety of bombs and missiles, including air-to-air and air-to-ground, supplemented by the 20 mm M61 Vulcan cannon. It is powered by two General Electric F404 turbofan engines, which give the aircraft a high thrust-to-weight ratio. The F/A-18 has excellent aerodynamic characteristics, primarily attributed to its leading edge extensions (LEX). The fighter’s primary missions are fighter escort, fleet air defense, Suppression of Enemy Air Defenses (SEAD), air interdiction, close air support and aerial reconnaissance. Its versatility and reliability have proven it to be a valuable carrier asset, though it has been criticized for its lack of range and payload compared to its earlier contemporaries, such as the Grumman F-14 Tomcat in the fighter and strike fighter role, and the Grumman A-6 Intruder and LTV A-7 Corsair II in the attack role.
The A-10 Thunderbolt II, affectionately nicknamed “The Warthog,” was developed for the United States Air Force by the OEM Team from Fairchild Republic Company, now a part of Northrop Grumman Corporation Aerospace Systems Eastern Region, located in Bethpage, NY and St. Augustine, FL. Following in the footsteps of the legendary P-47 Thunderbolt, the OEM Team was awarded a study contract in the 1960s to define requirements for a new Close Air Support aircraft, rugged and survivable, to protect combat troops on the ground. This initial study was followed by a prototype development contract for the A-X and a final fly-off competition resulting in the selection of the A-10 Thunderbolt II.
Selection of the A-10 Thunderbolt II for this mission was based on the dramatic low-altitude maneuverability, lethality, “get home safe” survivability, and mission-capable maintainability designed into the jet by the OEM team. This design features a titanium “bathtub” that protects the pilot from injury, and dual-redundant flight control systems that allow the pilot to fly the aircraft out of enemy range, despite severe damage such as complete loss of hydraulic capability. These features have been utilized to great effect in both the Desert Storm conflict of the 1990s and in the more recent Enduring Freedom, Iraqi Freedom, and Global War on Terror engagements.
In 1987, the A-10 OEM Team and all A-10 assets were acquired by Grumman Corporation from Fairchild Republic Company; they are now part of the Northrop Grumman Aerospace Systems Eastern Region, presently partnered with Lockheed Martin Systems Integration as a member of the A-10 Prime Team.
The Harrier today is one of the truly unique and most widely known of military aircraft. It is unique as the only fixed wing V/STOL aircraft in the free world. It also is unusual in the international nature of its development, which brought the design from the first British P.1127 prototype to the AV-8B Harrier II of today.
When the Harrier II was first flown in the fall of 1981, 21 years had elapsed since the original Hawker P.1127 first hovered in untethered flight. This basic design, only one of many promising concepts of the time, has weathered its growing up period and reached maturity in the AV-8B.
The 1957 design for the P.1127 was based on a French engine concept, adopted and improved upon by the British. The project was funded by the British Bristol Engine Co. and by the U.S. Government through the Mutual Weapons Development Program.
With the basic configuration of the engine largely determined and with development work under way, Hawker Aircraft Ltd. engineers directed their attention to designing a V/STOL aircraft that would use the engine. Without government/military customer support, they produced a single-engine attack-reconnaissance design that was as simple a V/STOL aircraft as could be devised. Other than the engine’s swivelling nozzles, the reaction control system was the only complication in the effort to provide V/STOL capability.
The F-14 Tomcat is a supersonic, twin-engine, variable sweep wing, two-place strike fighter manufactured by Grumman Aircraft Corporation. The multiple tasks of navigation, target acquisition, electronic counter measures (ECM), and weapons employment are divided between the pilot and the radar intercept officer (RIO). Primary missions include precision strike against ground targets, air superiority, and fleet air defense.
The F-14 has completed its decommissioning from the U.S. Navy. It was slated to remain in service through at least 2008, but all F-14A and F-14B airframes have already been retired, and the last two squadrons, the VF-31 Tomcatters and the VF-213 Black Lions, both flying the “D” models, arrived for their last fly-in at Naval Air Station Oceana on March 10, 2006.
I think ISIS, or ISIL, is a real and present danger. We can no longer talk our way around this situation; this is no solution. Playing golf is no solution. Waiting for the next administration is no solution. The only recourse we have is to kill them. Do not let ISIS live to see another sunrise. Let’s let them enjoy their seventy-seven (77) virgins sooner rather than later. I would enjoy your comments.
August 9, 2014
One of the very best publications existing today is “NASA TECH BRIEFS, Engineering Solutions for Design & Manufacturing”. This monthly publication strives to transfer technology from NASA design centers to university and corporate entities in hopes that the research and development can be commercialized in some fashion. In my opinion, it is a marvelous resource and demonstrates avenues of investigation separate and apart from what we have come to know as the recognized NASA mission. As you well know, in the process of exploration, there are many very useful “down-to-Earth” developments that can be utilized and commercialized to benefit manufacturing and our populace at large. These are enumerated in this publication. Several distinct areas within the magazine highlight papers and studies:
- Technology Focus: Mechanical Components
- Manufacturing & Prototyping
- Materials & Coatings
- Physical Sciences
- Patents of Note
- New For Design Engineers
As you can see, each of these areas concentrates upon differing subjects, all relating to engineering and product design.
Let me now mention several publications and papers coming from the Volume 38, Number 8 edition. This will give you some feel for the investigative work coming from the NASA research centers across our country. These are in the August 2014 magazine.
- “Extreme Low Frequency Acoustic Measurement System”, Langley Research Center, Hampton, Va.
- “Piezoelectric Actuated Valve for Operation in Extreme Conditions”, Jet Propulsion Laboratory, Pasadena, California.
- “Compact Active Vibration Control System”, Langley Research Center, Hampton, Va.
- “Rotary Series Elastic Actuator”, Johnson Space Center, Houston, Texas.
- “HALT Technique to Predict the Reliability of Solder Joints in a Shorter Duration”, Jet Propulsion Laboratory, Pasadena, California.
I feel one of the great failures of our federal government is the abdication of manned space programs. WE REALLY SCREWED UP on this one. If you have read any of my previous postings on this subject, you will understand my complete and utter amazement relative to that decision by the Executive and Legislative branches of our government. This, to some extent, underscores the deplorable lack of vision existing at the highest levels. We have decided to let the Russians get us up and back. Very bad decision on our part. Now, it is important to note that NASA is far from dormant; NASA is working.
Let’s take a look at the various NASA locations and the areas of research they are undertaking.
- Ames Research Center: Technological Strengths: Information Technology, Biotechnology, Nanotechnology, Aerospace Operations Systems, Rotorcraft, Thermal Protection Systems.
- Armstrong Flight Research Center: Technological Strengths: Aerodynamics, Aeronautics Flight Testing, Aeropropulsion, Flight Systems, Thermal Testing Integrated Systems Test and Validation.
- Glenn Research Center: Technological Strengths: Aeropropulsion, Communications, Energy Technology, High-Temperature Materials Research.
- Goddard Space Flight Center: Technological Strengths: Earth and Planetary Science Missions, LIDAR, Cryogenic Systems, Tracking, Telemetry, Remote Sensing, Command.
- Jet Propulsion Laboratory: Technological Strengths: Near/Deep-Space Mission Engineering, Microspacecraft, Space Communications, Information Systems, Remote Sensing, Robotics.
- Johnson Space Center: Technological Strengths: Artificial Intelligence and Human Computer Interface, Life Sciences, Human Space Flight Operations, Avionics, Sensors, Communication.
- Kennedy Space Center: Technological Strengths: Fluids and Fluid Systems, Materials Evaluation, Process Engineering Command, Control, and Monitor Systems, Range Systems, Environmental Engineering and Management.
- Langley Research Center: Technological Strengths: Aerodynamics, Flight Systems, Materials, Structures, Sensors, Measurements, Information Sciences.
- Marshall Space Flight Center: Technological Strengths: Materials, Manufacturing, Nondestructive Evaluations, Biotechnology, Space Propulsion, Controls and Dynamics, Structures, Microgravity Processing.
- Stennis Space Center: Technological Strengths: Propulsion Systems, Test/Monitoring, Remote Sensing, Nonintrusive Instrumentation.
- NASA Headquarters: Technological Strengths: NASA Planning and Management.
I can strongly recommend the “Tech Briefs” publication to you. It’s free. You may find that further investigation into these areas of research can benefit you and your company. Take a look.
As always, I welcome your comments. Many thanks.
August 5, 2014
The following post is taken from a PDHonline course this author has written for professional engineers. The entire course may be found at PDHonline.org. Look for Introduction to Reliability Engineering.
One of the most difficult issues when designing a product is determining how long it will last and how long it should last. If the product is robust to the point of lasting “forever”, the purchase price will probably be prohibitive compared with the competition. If it “dies” the first week, you will eventually lose all sales momentum and your previous marketing efforts will be for naught. It is absolutely amazing to me how many products are dead on arrival. They don’t work, right out of the box. This is an indication of slipshod design, manufacturing, assembly or all of the above. It is definitely possible to design and build quality and reliability into a product so that the end user is very satisfied and feels as though he got his money’s worth. The medical, automotive, aerospace and weapons industries are certainly dependent upon reliability methods to ensure safe and usable products so premature failure is not an issue. The same thing can be said for consumer products if reliability methods are applied during the design phase of the development program. Reliability methodology will provide products that “fail safe”, if they fail at all. Component failures are not uncommon to any assembly of parts, but how that component fails can mean the difference between a product that just won’t work and one that can cause significant injury or even death to the user. It is very interesting to note that German and Japanese companies have put more effort into designing in quality at the product development stage. U.S. companies seem to place a greater emphasis on solving problems after a product has been developed. Engineers in the United States do an excellent job when cost-reducing a product through part elimination, standardization, material substitution, etc., but sometimes those efforts relegate reliability to the “back burner”.
Producibility, reliability, and quality start with design, at the beginning of the process, and should remain the primary concern throughout product development, testing and manufacturing.
QUALITY VS RELIABILITY:
There seems to be general confusion between quality and reliability. Quality is the “totality of features and characteristics of a product that bear on its ability to satisfy given needs; fitness for use”. “Reliability is a design parameter associated with the ability or inability of a product to perform as expected over a period of time”. It is definitely possible to have a product of considerable quality but one with questionable reliability. Quality AND reliability are crucial today with the degree of technological sophistication, even in consumer products. As you well know, the incorporation of computer-driven and/or computer-controlled products has exploded over the past two decades. There is now an engineering discipline called MECHATRONICS that focuses solely on combining mechanics, electronics, control engineering and computing. Mr. Tetsuro Mori, a senior engineer working for a Japanese company called Yaskawa, first coined this term. The discipline is also alternately referred to as electromechanical systems. With added complexity comes the very real need to “design in” quality and reliability and to quantify the characteristics of operation, including the failure rate, the “mean time between failures” (MTBF) and the “mean time to failure” (MTTF). Adequate testing will also indicate what components and subsystems are susceptible to failure under given conditions of use. This information is critical to marketing, sales, engineering, manufacturing, quality and, of course, the VP of Finance who pays the bills.
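To make the failure rate and MTBF terms concrete, here is a minimal Python sketch; the failure times are made up purely for illustration, not drawn from any real test program.

```python
# Hypothetical operating hours between successive failures for one
# repairable unit under test -- illustrative numbers only.
uptimes_between_failures = [1200.0, 950.0, 1430.0, 1100.0, 880.0]

# MTBF for a repairable item: total operating time divided by failure count.
mtbf = sum(uptimes_between_failures) / len(uptimes_between_failures)

# In the flat "useful life" region of the bathtub curve, the failure rate
# is approximately constant and is simply the reciprocal of MTBF.
failure_rate = 1.0 / mtbf

print(f"MTBF: {mtbf:.1f} h, failure rate: {failure_rate:.6f} failures/h")
```

For non-repairable items the same arithmetic yields MTTF rather than MTBF; the distinction is whether the item is restored to service after a failure.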
Every engineer involved with the design and manufacture of a product should have a basic knowledge of quality and reliability methods and practices.
I think it’s appropriate to define Reliability and Reliability Engineering. As you will see, there are several definitions, all basically saying the same thing, but important to mention, thereby grounding us for the course to follow.
“Reliability is, after all, engineering in its most practical form.”
James R. Schlesinger
Former Secretary of Defense
“Reliability is a projection of performance over periods of time and is usually defined as a quantifiable design parameter. Reliability can be formally defined as the probability or likelihood that a product will perform its intended function for a specified interval under stated conditions of use.”
John W. Priest
Engineering Design for Producibility and Reliability
“Reliability engineering provides the tools whereby the probability and capability of an item performing intended functions for specified intervals in specified environments without failure can be specified, predicted, designed-in, tested, demonstrated, packaged, transported, stored, installed, and started up; and their performance monitored and fed back to all organizations.”
“Reliability is the science aimed at predicting, analyzing, preventing and mitigating failures over time.”
John D. Healy, PhD
“Reliability is blood, sweat, and tears engineering: to find out what could go wrong, to organize that knowledge so it is useful to engineers and managers, and then to act on that knowledge.”
Ralph A. Evans
“The conditional probability, at a given confidence level, that the equipment will perform its intended function for a specified mission time when operating under the specified application and environmental stresses.”
The General Electric Company
“By its most primitive definition, reliability is the probability that no failures will occur in a given time interval of operation. This time interval may be a single operation, such as a mission, or a number of consecutive operations or missions. The opposite of reliability is unreliability, which is defined as the probability of failure in the same time interval.”
“Reliability Theory and Practice”
Personally, I like the definition given by Dr. Healy, although the phrase “performing intended functions for specified intervals in specified environments” adds a reality to the definition that really should be there. Also, reliability data generally carries an associated confidence level. We will definitely discuss confidence level later on and how it factors into the reliability process. Reliability, like all other disciplines, has its own specific vocabulary, and understanding “the words” is absolutely critical to the overall process we wish to follow.
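As a rough illustration of why a confidence level matters, the following Python sketch simulates repeated 20-failure test campaigns against a made-up “true” MTBF and shows how widely the point estimate scatters from one campaign to the next:

```python
import random

random.seed(42)  # reproducible sketch

TRUE_MTBF = 1000.0   # hypothetical "true" MTBF in hours
SAMPLES = 20         # failures observed per test campaign
TRIALS = 2000        # number of simulated campaigns

estimates = []
for _ in range(TRIALS):
    # Exponential inter-failure times with rate 1/TRUE_MTBF.
    failures = [random.expovariate(1.0 / TRUE_MTBF) for _ in range(SAMPLES)]
    estimates.append(sum(failures) / SAMPLES)

estimates.sort()
lo = estimates[int(0.05 * TRIALS)]   # 5th percentile of the estimates
hi = estimates[int(0.95 * TRIALS)]   # 95th percentile
print(f"90% of 20-failure MTBF estimates fell between {lo:.0f} h and {hi:.0f} h")
```

The spread between the percentiles is exactly what a stated confidence interval captures: a single test program gives one point estimate, and the confidence level quantifies how far that estimate could plausibly sit from the truth.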
The main goal of reliability engineering is to minimize failure rate by maximizing MTTF. The two main goals of design for reliability are:
- Predict the reliability of an item, i.e. component, subsystem and system (fit the life model and/or estimate the MTTF or MTBF)
- Design for environments that promote failure.  To do this, we must understand the KNPs and the KCPs of the entire system or at least the mission critical subassemblies of the system.
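A minimal sketch of the first goal, assuming the constant-failure-rate (exponential) life model and purely illustrative subsystem MTTF values: in a series system every subsystem must survive, so reliabilities multiply and failure rates add.

```python
import math

def reliability(t_hours: float, mttf_hours: float) -> float:
    """Survival probability R(t) = exp(-t/MTTF) under a constant failure rate."""
    return math.exp(-t_hours / mttf_hours)

# Illustrative subsystem MTTFs in hours (not real data).
subsystem_mttf = [5000.0, 8000.0, 12000.0]

# Series system: failure rates add, so the system MTTF is the
# reciprocal of the summed subsystem failure rates.
system_rate = sum(1.0 / m for m in subsystem_mttf)
system_mttf = 1.0 / system_rate

print(f"System MTTF: {system_mttf:.0f} h")
print(f"R(1000 h) = {reliability(1000.0, system_mttf):.3f}")
```

Note how the system MTTF ends up well below the weakest subsystem's MTTF, which is exactly why mission-critical subassemblies deserve the most attention.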
The overall effort is concerned with eliminating early failures by observing their distribution and determining, accordingly, the length of time necessary for debugging and methods used to debug a system or subsystem. Further, it is concerned with preventing wearout failures by observing the statistical distribution of wearout and determining the preventative replacement periods for the various parts. This equates to knowing the MTTF and MTBF. Finally, its main attention is focused on chance failures and their prevention, reduction or complete elimination because it is the chance failures that most affect equipment reliability in actual operation. One method of accomplishing the above two goals is by the development and refinement of mathematical models. These models, properly structured, define and quantify the operation and usage of components and systems.
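The preventive-replacement idea above, observing the wearout distribution and setting replacement periods from it, can be sketched with a two-parameter Weibull model; the eta and beta values here are purely illustrative.

```python
import math

def weibull_reliability(t: float, eta: float, beta: float) -> float:
    """R(t) for a two-parameter Weibull; beta > 1 models wearout."""
    return math.exp(-((t / eta) ** beta))

def replacement_interval(eta: float, beta: float, r_min: float) -> float:
    """Largest operating time t with R(t) >= r_min, solved in closed form."""
    return eta * (-math.log(r_min)) ** (1.0 / beta)

# Hypothetical wearout population: characteristic life eta = 4000 h, shape beta = 3.
t_replace = replacement_interval(4000.0, 3.0, 0.95)
print(f"Replace before {t_replace:.0f} h to keep R(t) >= 95%")
```

A shape parameter near 1 would instead indicate chance (random) failures, for which scheduled replacement buys nothing; that is why fitting the distribution comes before setting the replacement period.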
No mechanical or electromechanical product will last forever without preventative maintenance and/or replacement of critical components. Reliability engineering seeks to discover the weakest link in the system or subsystem so any eventual product failure may be predicted and consequently forestalled. Any operational interruption may be eliminated by periodically replacing a part or an assembly of parts prior to failure. This predictive ability is achieved by knowing the mean time to failure (MTTF) and the mean time between failures (MTBF) for “mission critical” components and assemblies. With this knowledge, we can provide for continuous and safe operation, relative to a given set of environmental conditions and proper usage of the equipment itself. The test, analyze and fix (TAAF) approach is used throughout reliability testing to discover what components are candidates for continuous “preventative maintenance” and possibly ultimate replacement. Sometimes designing redundancy into a system can prolong the operational life of a subsystem or system, but that is generally costly for consumer products. Usually, this is only done when the product absolutely must survive the most rigorous environmental conditions and circumstances. Most consumer products do not have redundant systems. Airplanes, medical equipment and aerospace equipment represent products that must have redundant systems for the sake of continued safety for those using the equipment. As mentioned earlier, at the very worst, we ALWAYS want our mechanism to “fail safe” with absolutely no harm to the end user or other equipment. This can be accomplished through engineering design and a strong adherence to accepted reliability practices. With this in mind, we start this process by recommending the following steps:
- Establish reliability goals and allocate reliability targets.
- Develop functional block diagrams for all critical systems
- Construct P-diagrams to identify and define KCPs and KNPs
- Benchmark current designs
- Identify the mission critical subsystems and components
- Conduct FMEAs
- Define and execute pre-production life tests; i.e. growth testing
- Conduct life predictions
- Develop and execute reliability audit plans
It is appropriate to mention now that this document assumes the product design is, at least, in the design confirmation phase of the development cycle and we have been given approval to proceed. Most NPI methodologies carry a product through design guidance, design confirmation, pre-pilot, pilot and production phases. Generally, at the pre-pilot point, the design is solidified so that evaluation and reliability testing can be conducted with assurance that any and all changes will be fairly minor and will not involve a “wholesale” redesign of any component or subassembly. This is not to say that when “mission critical” components fail we do not make all efforts to correct the failure(s) and put the product back into reliability testing. At the pre-pilot phase, the market surveys, consumer focus studies and all of the QFD work have been accomplished and we have tentative specifications for our product. Initial prototypes have been constructed and upper management has “signed off” and given approval to proceed into the next development cycles of the project. ONE CAUTION: Any issues involving safety of use must be addressed regardless of any changes becoming necessary for an adequate “fix”. This is imperative and must occur if failures arise, no matter what phase of the program is in progress.
Critical to these efforts will be conducting HALT and HAST testing to “make the product fail”. This will involve DOE (Design of Experiments) planning to quantify AND verify FMEA estimates. Significant time may be saved by carefully structuring a reliability evaluation plan to be accomplished at the component, subsystem and system levels. If you couple these tests with appropriate field testing, you will develop a product that will “go the distance” relative to your goals and stay well within your SCR (Service Call Rate) requirements. Reliability testing must be an integral part of the basic design process and time must be given to this effort. The NPI process always includes reliability testing and the assessment of results from that testing. Invariably, some degree of component or subsystem redesign results from HALT or HAST because weaknesses are made known that can and will be eliminated by redesign. In times past, the engineering approach has always been to assign a “safety factor” to any design process. This safety factor takes into consideration “unknowns” that may affect the basic design. Unfortunately, this may produce a design that is structurally robust but fails due to Key Noise Parameters (KNPs) or Key Control Parameters (KCPs).
As you might expect, this is a “lick and a promise” relative to the subject of reliability. It’s a very complex subject but one that has provided remarkable life and quality to consumer and commercial products. I would invite you to take a look at the literature and further your understanding of the “ins and outs” of the technology. As always, I welcome your comments.