October 29, 2014
The use of natural gas in the form of CNG (compressed natural gas) is becoming an accepted alternative to petroleum-based fuels such as gasoline. In 2011, the use of natural gas as a fuel for automobiles and trucks was growing at 7.1% per year, a remarkable increase of thirty-eight percent (38%) since 2006. That use has more than doubled in the past ten years, to almost thirty-nine (38.85) million cubic feet in 2011. Frost & Sullivan estimates that by 2017 approximately eight percent (8%) of new North American Class 6-8 commercial vehicles will be natural-gas powered and that annual sales will exceed 29,500 units. Let’s get a better idea as to the various truck classifications. The chart below provides information on the classifications as defined by the Department of Transportation (DOT).
As you can see, the classifications basically revolve around the gross weight of the vehicle. Both classifications indicate heavy-duty vehicles.
Proven and Reliable – More than 11 million NGVs are in use worldwide, with about 110,000 in the U.S. Tune-up intervals for NGVs have been extended by up to 50,000 miles, and oil-change intervals by up to 25,000 miles. Pipes and mufflers last longer in NGVs because natural gas does not react with the metals.
Economical – CNG fleet vehicles realize an overall cost savings of as much as 50% over gasoline, particularly after factoring in available alternative tax credits. If we look at the relative cost and compare fuel types we see the following:
In my hometown, Chattanooga, Tennessee, we see an average gasoline price of $2.57 per gallon, against a national average of $2.64 per gallon. For CNG, the price is $1.55 per gasoline gallon equivalent, or GGE. Defining GGE, we find the following:
Gasoline gallon equivalent (GGE) or gasoline-equivalent gallon (GEG) is the amount of alternative fuel it takes to equal the energy content of one liquid gallon of gasoline. GGE allows consumers to compare the energy content of competing fuels against a commonly known fuel—gasoline. GGE also compares gasoline to fuels sold as a gas (Natural Gas, Propane, and Hydrogen) and electricity.
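Because GGE puts both fuels on the same energy basis, the prices quoted above can be compared directly. The sketch below uses the $2.57/gal gasoline and $1.55/GGE CNG prices from this post; the annual mileage and fuel economy are illustrative assumptions, not figures from the post.

```python
def annual_fuel_cost(miles_per_year, mpg, price_per_gallon_equiv):
    """Cost of a year's fuel, given economy in miles per gallon (or per GGE)."""
    return miles_per_year / mpg * price_per_gallon_equiv

MILES = 15_000   # assumed annual mileage
MPG = 25         # assumed fuel economy, same energy basis for both fuels

gasoline = annual_fuel_cost(MILES, MPG, 2.57)   # price from the post
cng = annual_fuel_cost(MILES, MPG, 1.55)        # price per GGE from the post

print(f"Gasoline: ${gasoline:,.2f}/yr, CNG: ${cng:,.2f}/yr, "
      f"savings: {1 - cng / gasoline:.0%}")
```

On these assumptions the fuel-only saving is about 40%; the larger figures quoted for fleets include tax credits and higher mileage.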
Domestic Fuel – Natural gas supplies are abundant domestically, reducing our dependence on foreign oil and the impact of weather-related shortages.
Eco-Conscious – CNG vehicles are much cleaner than traditional vehicles, producing up to 90% lower emissions than gasoline or diesel. Natural gas is the cleanest-burning fossil fuel available today. CNG vehicles produce the fewest emissions of all vehicle fuel types, and their emissions contain significantly fewer pollutants than gasoline’s. Dedicated CNG vehicles release little or no emissions during fueling.
State Incentives – Some states offer tax credits for each vehicle converted to run on natural gas, or for purchasing a vehicle that runs on CNG. Others grant carpool-lane access if the vehicle runs on CNG. In order for a “Clean Fuel” vehicle to travel in the Express Lanes it must display a “Clean Fuel” sticker/decal, which costs $10. Also, in several states CNG vehicles qualify for high-occupancy vehicle (HOV) lane access, where applicable.
The following news release appeared in the Atlanta Journal-Constitution in July of 2013.
Atlanta Gas Light teams up with The Langdale Company
ATLANTA – July 23, 2013 – The first compressed natural gas (CNG) fueling station developed under the Atlanta Gas Light (AGL) CNG Program is now open in Valdosta, GA. Approved by the Georgia Public Service Commission (PSC) in 2012, the program is designed to expand public access to the CNG fueling infrastructure throughout the state and enhance Georgia’s role in the emerging CNG market in the southeastern U.S. The Langdale Fuel Company of Valdosta was chosen as the recipient of funding from Atlanta Gas Light for the installation.
That company has worked with MARTA to outfit selected buses with CNG. A graphic of one of those buses is given below:
The station itself looks very much like the “standard” filling station we are used to for dispensing gasoline.
You drive up, put the hose in the filler, then start pumping.
The complexities of receiving and compressing natural gas are demonstrated by the graphic below. As you can see, there is significant technology involved with a typical compression “event”.
CNG is definitely a viable alternative fuel for consideration AND there are several companies in the marketplace today that can retrofit an automobile engine with the equipment necessary to run CNG successfully as a primary fuel. As always, I welcome your comments.
October 25, 2014
There should be no doubt that, with the advent of the Internet, our daily lives have changed in a remarkable fashion. Figures from the Office for National Statistics (ONS) show that 36 million adults – or seventy-three percent (73%) – were daily internet users in 2013, up from the thirty-five percent (35%) recorded in 2006, when comparable records began. Those figures are for the UK; the graphic below indicates the breakdown by geographic region worldwide.
As you can see, our friends in Asia lead the pack by a remarkable margin, accounting for forty-five percent (45.1%) of those engaging the Internet on a daily basis. The chart below indicates the increase in Internet usage by region as well as providing additional statistics.
If we look at penetration and growth, we see a huge increase just over the past five years.
The discussion today is not really about usage or the growth of the Internet. We wish to discuss several applications that are revolutionizing our daily lives. This revolution is generally called the Internet of Things, or IoT. Other terminology for IoT is M2M, or machine-to-machine. M2M is an absolutely fascinating use of technology with remarkable applications. IoT generally refers to what some call the next-generation Internet, where physical objects are connected via the standard Internet Protocol (IP).
I was watching television several days ago when an advertisement began running. The driver of an automobile was required to look back at her baby, snugly strapped into a car seat. Her attention was diverted for only a second, but just long enough for a truck in front of her to stop abruptly. Without her applying the brakes, the car came to a gentle stop. A sensor in the grille of her vehicle detected zero movement of the truck ahead and sent that message to an onboard computer. The computer relayed a signal to the brake cylinders, thereby applying pressure, and the car came to a stop. Machines “talking” to machines. Using an array of embedded sensors, actuators and a variety of other technologies, these loosely connected “things” can sense aspects of their environment and communicate that information over wired and wireless networks, without human intervention, for a variety of compelling uses. This is a great representation of IoT.
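The braking scenario above is a sense-decide-actuate chain, and can be sketched in a few lines. The function name, thresholds, and message format here are invented purely for illustration; a production system would use an automotive bus such as CAN rather than Python dictionaries.

```python
def radar_reading_to_command(closing_speed_mps, gap_m, min_gap_m=10.0):
    """Decide a brake command from a (simulated) grille-radar reading.

    Hypothetical rule: if the gap to the obstacle is below a safe
    minimum and we are closing on it, brake harder the faster we close.
    """
    if gap_m <= min_gap_m and closing_speed_mps > 0:
        return {"actuator": "brake", "pressure_pct": min(100, int(closing_speed_mps * 10))}
    return {"actuator": "brake", "pressure_pct": 0}

# The truck ahead has stopped: we close at 8 m/s with only 6 m of gap.
cmd = radar_reading_to_command(closing_speed_mps=8.0, gap_m=6.0)
print(cmd)  # {'actuator': 'brake', 'pressure_pct': 80}
```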
ECONOMY AND THE FUTURE
Let’s take a quick look at where some feel we are going relative to IoT and M2M. The bullets below will give some indications as to what is to come.
- The total economic value-added from IoT across industries will reach $1.9 trillion worldwide in 2020, as predicted by Gartner, “Magic Quadrant for Business Intelligence and Analytics Platforms”
- Fifty billion devices will be connected to the Internet by 2020, predicts Cisco Corporation.
- The remote patient monitoring market doubled from 2007 to 2011 and is projected to double again by 2016. The data generated from sensors is sent to monitoring stations where audible and/or visual indications result when a patient is in trouble.
- The utility smart grid transformation is expected to almost double the customer information system market, from $2.5 billion in 2013 to $5.5 billion in 2020, based on a study from Navigant Research. Of course, this will allow utilities to provide power at lesser rates and with more regularity.
- Wide deployment of IoT technologies in the auto industry could save $100 billion annually in accident reductions, according to McKinsey and Company.
- The industrial Internet could add $10-15 trillion to global GDP, essentially doubling the US economy, says General Electric.
- Seventy-five percent (75%) of global business leaders are exploring the economic opportunities of IoT, according to a report from The Economist.
- The UK government recently approved 45 million pounds (US$76.26 million) in research funding for Internet of Things technologies. (This is a huge sum of money. The Crown feels it will be money well spent.)
- Cities will spend $41 trillion in the next 20 years on infrastructure upgrades for IoT, according to Intel.
- The number of developers involved in IoT activities will reach 1.7 million globally by the end of 2014, according to ABI Research estimates.
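Several of the projections above imply a compound annual growth rate. As a quick sketch, the smart-grid bullet ($2.5 billion in 2013 growing to $5.5 billion in 2020) works out as follows; the function is generic and only the two dollar figures come from the bullet list.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Smart-grid customer information system market: $2.5B (2013) -> $5.5B (2020)
growth = cagr(2.5, 5.5, 2020 - 2013)
print(f"Implied growth: {growth:.1%} per year")  # Implied growth: 11.9% per year
```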
As the Internet of Things ramps up and millions of devices become connected to the Internet, there is also a push to enable communication among all types of devices available on the Internet. These devices include process control systems, power line communication devices, precision machinery, and various types of infrastructure. One very critical aspect of properly working IoT devices is the need for simulation. Simulation is an essential element of building an IoT network. These networks are starting to become complex and ubiquitous, and the communication among them can be very unpredictable without considerable modeling. As we saw in the example above, successful braking is dependent upon feedback between the engine and actions of the automobile. As the braking systems are applied, fuel injection to the engine must lessen and eventually stop.
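The braking feedback loop just described can be modeled with even a toy discrete-time simulation before any hardware exists. In the sketch below, every constant (time step, deceleration) is invented for illustration; the point is only that the loop steps speed down while fuel injection stays cut.

```python
def simulate_stop(speed_mps, dt=0.1, brake_decel=6.0):
    """Step a vehicle's speed down to zero under full braking; return elapsed seconds."""
    t = 0.0
    while speed_mps > 0:
        # Feedback: while the brake actuator is applied, fuel injection
        # is held at zero before the deceleration is applied this step.
        speed_mps = max(0.0, speed_mps - brake_decel * dt)
        t += dt
    return round(t, 1)

print(simulate_stop(27.0), "seconds to stop from 27 m/s (about 60 mph)")
```

Even this crude model lets a designer ask "what if the message is delayed one time step?" long before a vehicle is built.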
There are four (4) critical competencies needed by IT professionals for the design of systems to bring about successful and enduring operation of M2M devices. These are as follows:
- Learning how to design and implement embedded software. Mechanical engineers need to interface regularly with software specialists so both design aspects of the product evolve concurrently. The ME can no longer “throw it over the fence” and let the “boys” in IT solve the remainder of the problems. Also, a much higher level of software development is needed WITH simulation prior to product launch.
- Communication capabilities will become paramount in developing software. Engineers will need to choose from dozens of proprietary and standard communication protocols, and factor in things like network protocols, potential radio frequency (RF) noise and interference, and the physical fit and placement of new communications components as a part of their requirements. Mechanical and electromechanical design teams now have to think about communications and what domain constraints might affect layout and design.
- Instrumentation is absolutely critical to amass, store and manage data collected by “smart products”. Understanding the functional aspects of how equipment is to behave will help design engineers anticipate potential failure modes much more effectively, which in turn affects how they specify instrumentation packages into designs.
- Data security is an absolute MUST for any M2M application, and safeguards must be factored into IT considerations. IT typically has ownership of the data and, with this being the case, needs to be folded in with initial planning of the product or the assembly of components. Engineers CANNOT build devices in isolation if they want to take complete advantage of all possibilities.
As always, I welcome your comments.
October 11, 2014
What would you call a BIG story? ISIL, Ebola Virus, Benghazi, IRS problems with Tea Party members, the search for the missing Malaysian jet? All are big stories and certainly deserve necessary airtime and commentary. There is one story that has gotten almost zero (0) airtime from the media and one story I feel is absolutely remarkable in importance relative to pushing the technological envelope. The Mars MAVEN mission has been a huge success to date with the unmanned craft now orbiting the “red” planet.
MAVEN is an acronym for NASA’s Mars Atmosphere and Volatile Evolution spacecraft, which successfully entered Mars’ orbit at 10:24 p.m. EDT Sunday, Sept. 21, 2014, after traveling 442 million miles. The purpose of the mission is to study the Red Planet’s upper atmosphere as never before. This is the first spacecraft dedicated to exploring the tenuous upper atmosphere of Mars, with the following objectives:
- Determine the role the loss of volatile gaseous substances to space from the Martian atmosphere has played through time.
- Determine the current state of the upper atmosphere, ionosphere, and interactions with the solar wind.
- Determine the current rates of escape of neutral gases and ions to space and the processes controlling them.
- Determine the ratio of stable isotopes in the Martian atmosphere.
There is some thought that by understanding the atmospheric conditions on Mars, we will gain better insights as to the evolutionary processes of that planet and maybe some ability to predict evolutionary processes on Earth. Also, discussions are well underway relative to future establishment of colonies on Mars. If that is to ever happen, we definitely will need additional information relative to atmospheric and surface conditions.
The graphic below is a pictorial of the MAVEN system. This is somewhat “busy” but one which captures several significant specifics of the hardware including onboard instrumentation.
Please note the graphic at the bottom comparing what is believed to be early atmospheric conditions with current atmospheric conditions. The loss of magnetic fields surrounding the planet is contributory to atmospheric losses. Could this happen to Earth’s atmosphere? That’s a question that we have yet to answer. Additional specifics can be seen from the following:
After a 442 million mile trip, how did MAVEN hook up with Mars? Very, very carefully. The blue line in the graphic below shows the first part of MAVEN’s trajectory during its initial approach and the beginning of the 35-hour capture orbit. The red section of the line indicates the 33-minute engine burn that slows the spacecraft so it can be captured into Martian orbit. Mars’ orbit around the sun is indicated by the white line to the right of the planet, and the Martian moons’ orbits are dimly visible in the background. This is a remarkable example of engineering and physics allowing for pinpoint accuracy relative to entry and the establishment of orbital stability.
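The capture maneuver can be checked with back-of-envelope orbital mechanics via the vis-viva relation, v² = GM(2/r − 1/a). Mars’ gravitational parameter below is a textbook value; the 380 km periapsis and 44,600 km apoapsis are assumed here purely to illustrate an ellipse whose period comes out near the 35 hours quoted for the capture orbit, not published MAVEN mission data.

```python
import math

GM_MARS = 4.2828e13      # gravitational parameter of Mars, m^3/s^2
R_MARS = 3_389_500.0     # mean radius of Mars, m

def vis_viva(r, a):
    """Orbital speed at radius r for an orbit with semi-major axis a."""
    return math.sqrt(GM_MARS * (2.0 / r - 1.0 / a))

r_peri = R_MARS + 380e3        # assumed periapsis radius
r_apo = R_MARS + 44_600e3      # assumed apoapsis radius
a = (r_peri + r_apo) / 2.0     # semi-major axis of the ellipse

period_h = 2 * math.pi * math.sqrt(a**3 / GM_MARS) / 3600
print(f"Periapsis speed: {vis_viva(r_peri, a):.0f} m/s; period: {period_h:.1f} h")
```

With these assumed dimensions the spacecraft must be moving at roughly 4.6 km/s at periapsis, which is what the 33-minute burn slows it to achieve.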
MAVEN carries three instrument suites with eight scientific instrument packages designed to study the upper atmosphere and ionosphere of Mars and its interactions with the solar wind. Three of the instruments are located on the Articulating Payload Platform extending from the bus, including the Imaging Ultraviolet Spectrograph and a mass spectrometer that will sample the atmosphere in situ. The hardware housing these three packages is shown as follows:
The Particles and Fields Package, built by the University of California at Berkeley with support from CU/LASP and Goddard Space Flight Center, contains six instruments that will characterize the solar wind and the ionosphere of the planet. The Remote Sensing Package, built by CU/LASP, will determine global characteristics of the upper atmosphere and ionosphere. The Neutral Gas and Ion Mass Spectrometer, provided by Goddard Space Flight Center, will measure the composition and isotopes of neutral gases and ions. MAVEN also carries a government-furnished Electra UHF radio, shown in the graphic below, which provides back-up data relay capability for the rovers on Mars’ surface.
Lockheed Martin, based in Littleton, Colorado, built the MAVEN spacecraft and provides mission operations. NASA’s Jet Propulsion Laboratory is providing navigation services, and CU/LASP conducts science operations and data distribution.
On February 19, the MAVEN team successfully completed the initial post-launch power-on and checkout of the spacecraft’s Electra ultra-high frequency (UHF) transceiver, shown in the graphic below. This relay radio will be used for UHF communication with robots on the surface of Mars. Using the orbiter to relay data from Mars rovers and stationary landers boosts the amount of information that can be returned to Earth.
A part of NASA’s Mars Scout program, MAVEN is the culmination of 10 years of R&D. Some of that R&D went into designing the materials for the spacecraft’s instruments as well as for the satellite itself, which weighs about as much as a small car and has a 37 ft wingspan, including solar panel arrays. That panel system is shown as follows:
As you can see from the JPEG, the array is huge but necessary to power the complete system.
The craft’s core structures are made with carbon fiber composites made by TenCate Advanced Composites. The company is experienced in the design and fabrication of composites for aerospace applications, having already supplied them to previous Mars missions, including earlier rovers such as Curiosity. For MAVEN, which will orbit Mars for about one Earth year, TenCate engineered composite face sheets sandwiched between aluminum honeycomb sheets for the spacecraft’s primary bus structure.
Other materials in the orbiter include a cylindrical aluminum boat tail on the aft deck that provides engine structural support. The craft is kept at the correct operating temperature — 5F to 104F — using active thermal control and passive measures, such as several thermal materials for conducting or isolating heat. Most of the orbiter is enclosed within multi-layer insulation materials; the outside layer is black Kapton film coated with germanium.
Hopefully, you can see now why I feel MAVEN is a BIG story worthy of considerable air time. It’s a modern-day engineering marvel. I welcome your comments: email@example.com.
September 6, 2014
The following resources were used to produce this post: Internet Society, “Global Internet Report 2014”, SITEOPEDIA and HELPGUIDE.ORG, BBC News, “The Age of Internet Overload”.
WHAT IS THE INTERNET:
According to the Global Internet Society, “The Internet is a uniquely universal platform that uses the same standards in every country, so that every user can interact with every other user in ways unimaginable 10 years ago, regardless of the multitude of changes taking place.”
This statement sums it up in a very precise fashion. The Internet has undoubtedly changed the entire world. Open access to the Internet has revolutionized the way individuals communicate and collaborate, entrepreneurs and corporations conduct business, and governments and citizens interact. At the same time, the Internet established a revolutionary open model for its own development and governance, encompassing all stakeholders. Fundamentally, the Internet is a ‘network of networks’ whose protocols are designed to allow networks to interoperate. In the very beginning, these networks represented different academic, government, and research communities whose members needed to cooperate to develop common standards and manage joint resources. Later, as the Internet was commercialized, vendors and operators joined the open protocol development process and helped unleash the unprecedented era of growth and innovation.
INTERNET PENETRATION BY COUNTRY:
If we look at global Internet penetration by country, we see the following:
Internet penetration is substantial in all countries other than those considered third-world, and daily usage approaches one billion individuals. There should be no doubt that with numbers such as these, there will be those whose obsessive-compulsive tendencies produce addiction. With that being the case, what is Internet addiction?
Internet Addiction, otherwise known as computer addiction, online addiction, or Internet addiction disorder (IAD), covers a variety of impulse-control problems, including:
- Cybersex Addiction – compulsive use of Internet pornography, adult chat rooms, or adult fantasy role-play sites impacting negatively on real-life intimate relationships. The Internet is the cheapest, fastest, and most anonymous pornography source. Internet pornographers made over $1 billion in revenues dealing their merchandise on-line. The threat of pornography over the Internet cannot be discounted: 70 percent of children viewing pornography on the Internet do so in public schools and libraries (The Internet Online Summit, 1997). All of us realize that we are surrounded by various forms of pornography, whether noticing the “adult” section of videos at Blockbuster, surfing the Internet, seeing advertising which is clearly sexually suggestive, or innocently going to a movie that just happens to have some kind of sex scene.
- Cyber-Relationship Addiction – addiction to social networking, chat rooms, texting, and messaging to the point where virtual, online friends become more important than real-life relationships with family and friends. Facebook has 1.4 billion profiles, and 1.06 billion of those (or 15 percent of the world’s population) use Facebook regularly. Of those, 78 percent of users access Facebook on a mobile device a minimum of once a month. Every second, there are 8,000 likes on Instagram. Instagram launched in 2010, and boasts 200 million active users in 2014, with over 75 million users daily. Google+ has over 540 million profiles and over 300 million monthly active users. LinkedIn, launched in 2003, has 300 million users, and an average of two new members per second. Forty percent of users on LinkedIn check the site daily, and Mashable is the LinkedIn company with the most engaged following.
- Net Compulsions – such as compulsive online gaming, gambling, stock trading, or compulsive use of online auction sites such as eBay, often resulting in financial and job-related problems. Obsessive playing of off-line computer games, such as Solitaire or Minesweeper, or obsessive computer programming.
- Information Overload – compulsive web surfing or database searching, leading to lower work productivity and less social interaction with family and friends. An average US citizen on an average day consumes 100,500 words, whether that is email, messages on social networks, searching websites or anywhere else digitally. Take a look at the global statistics given below and consider what happens in sixty (60) seconds:
- 168 million e-mails sent
- 694,445 Google searches launched
- 695,000 Facebook updates attempted
- 370,000 Skype calls made
- 98,000 Tweets accomplished
- 20,000 new posts on TUMBLR
- 13,000 iPhone apps downloaded
- 6,600 new pictures on Flickr
- 1,500 new blog entries posted (just like this one)
- 600+ videos posted totaling over 25 hours duration on YouTube
The most common of these Internet addictions are cybersex, online gambling, and cyber-relationship addiction. Talk about busy.
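Busy indeed: scaling a few of those per-minute figures to per-day totals makes the volume concrete. The arithmetic below uses only the numbers from the list above.

```python
MINUTES_PER_DAY = 24 * 60   # 1,440

# Per-minute figures taken from the list above
per_minute = {
    "e-mails sent": 168_000_000,
    "Google searches": 694_445,
    "Tweets": 98_000,
}

per_day = {name: count * MINUTES_PER_DAY for name, count in per_minute.items()}
print(f"{per_day['e-mails sent']:,} e-mails per day")  # 241,920,000,000 e-mails per day
```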
SIGNS AND SYMPTOMS:
Signs and symptoms of Internet addiction vary from person to person. For example, there are no set hours per day or number of messages sent that indicate Internet addiction. But here are some general warning signs that your Internet use may have become a problem:
- Losing track of time online. Do you frequently find yourself on the Internet longer than you intended? Do a few minutes turn into a few hours? Do you get irritated or cranky if your online time is interrupted? From a business standpoint, I have often heard the Internet called a “black hole” when it comes to wasting time, primarily due to net-surfing. I will admit that in the work I do as a consulting engineer, I use the Internet on a daily basis to investigate vendors and companies supplying services to complement my work. I don’t really consider this wasting time; it actually saves time otherwise spent in research through phone calls, magazine searches, searches through the Thomas Register, etc.
- Having trouble completing tasks at work or home. Do you find laundry piling up and little food in the house for dinner because you’ve been busy online? Perhaps you find yourself working late more often because you can’t complete your work on time—then staying even longer when everyone else has gone home so you can use the Internet freely.
- Isolation from family and friends. Is your social life suffering because of all the time you spend online? Are you neglecting your family and friends? Do you feel like no one in your “real” life—even your spouse—understands you like your online friends?
- Feeling guilty or defensive about your Internet use. Are you sick of your spouse nagging you to get off the computer or put your smart phone down and spend time together? Do you hide your Internet use or lie to your boss and family about the amount of time you spend on the computer or mobile devices and what you do while you’re online?
- Feeling a sense of euphoria while involved in Internet activities. Do you use the Internet as an outlet when stressed, sad, or for sexual gratification or excitement? Have you tried to limit your Internet time but failed?
If we look at Internet usage relative to addiction, we see the following for the United States:
This calculates to 988 hours per year for men and 728 hours per year for women. How much time do you spend per year reading a good book, calling your mother, taking a course at a local technical school or university, volunteering in your community, etc? Have you improved your reading speed and reading comprehension lately? You get the picture.
SITEOPEDIA has conducted polls that indicate significant addiction can result from Internet usage. The graphic below will highlight the results of that poll. Note: those indicating they are not addicted may just be lying. The real rates of addiction are estimates at best.
Those indicating they are addicted might consider the following recourse:
- Recognize any underlying problems that may support your Internet addiction. If you are struggling with depression, stress, or anxiety, for example, Internet addiction might be a way to self-soothe rocky moods. Have you had problems with alcohol or drugs in the past? Does anything about your Internet use remind you of how you used to drink or use drugs to numb yourself? Recognize if you need to address treatment in these areas or return to group support meetings.
- Build your coping skills. Perhaps blowing off steam on the Internet is your way of coping with stress or angry feelings. Or maybe you have trouble relating to others, or are excessively shy with people in real life. Building skills in these areas will help you weather the stresses and strains of daily life without resorting to compulsive Internet use.
- Strengthen your support network. The more relationships you have in real life, the less you will need the Internet for social interaction. Set aside dedicated time each week for friends and family. If you are shy, try finding common interest groups such as a sports team, education class, or book reading club. This allows you to interact with others and let relationships develop naturally.
Modify your Internet use step by step:
- To help you see problem areas, keep a log of how much you use the Internet for non-work or non-essential activities. Are there times of day that you use the Internet more? Are there triggers in your day that make you stay online for hours at a time when you only planned to stay for a few minutes?
- Set goals for when you can use the Internet. For example, you might try setting a timer, scheduling use for certain times of day, or making a commitment to turn off the computer, tablet, or smart phone at the same time each night. Or you could reward yourself with a certain amount of online time once you’ve completed a homework assignment or finished the laundry, for instance.
- Replace your Internet usage with healthy activities. If you are bored and lonely, resisting the urge to get back online can be very difficult. Have a plan for other ways to fill the time, such as going to lunch with a coworker, taking a class, or inviting a friend over.
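The "keep a log" and "set goals" suggestions above can be sketched as a minimal session logger. The class, the one-hour goal, and the way sessions are recorded are hypothetical, invented here for illustration rather than taken from any real tool.

```python
class SessionLog:
    """Track non-essential online sessions against a daily time goal."""

    def __init__(self, daily_goal_minutes=60):
        self.daily_goal_s = daily_goal_minutes * 60
        self.sessions = []                      # list of (start_s, end_s) pairs

    def record(self, start_s, end_s):
        self.sessions.append((start_s, end_s))

    def total_seconds(self):
        return sum(end - start for start, end in self.sessions)

    def over_goal(self):
        return self.total_seconds() > self.daily_goal_s

log = SessionLog(daily_goal_minutes=60)
log.record(0, 45 * 60)        # a 45-minute morning session
log.record(50 * 60, 80 * 60)  # a 30-minute session later on
print(log.total_seconds() // 60, "minutes online;",
      "over goal" if log.over_goal() else "within goal")  # 75 minutes online; over goal
```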
WHAT WE DO:
The fascinating thing about Internet usage is what we actually do with all that time. From the graphic below, we see legitimate usage of the Internet to accomplish “chores” and execute responsibilities. I think shopping online and paying bills certainly fall within reason.
Wasting time on the Internet is a matter of definition. Please keep in mind the graphic below indicates time per DAY. Left side men—right side women.
OK, now that I have your attention, where do we go next?
We just might be doomed as a society. Curb that habit. I welcome your comments:
September 1, 2014
There is absolutely no doubt the entire world is dependent upon the generation and transmission of electricity. Those countries without electrical power are considered third-world countries with no immediate hope of improving lives and living conditions. And yet, there just may be alternatives to generally held methods for generating electricity.
If we look at the definition for renewable energy, we see the following:
Renewable energy is derived from natural processes that are replenished constantly. In its various forms, it derives directly from the sun, or from heat generated deep within the earth. Included in the definition is electricity and heat generated from solar, wind, ocean, hydropower, biomass, geothermal resources, and biofuels and hydrogen derived from renewable resources.
We are all familiar with current methodologies for power generation: 1.) hydroelectric, 2.) nuclear, 3.) coal-fired, 4.) oil-fired, and 5.) natural-gas-fired. The graphic below indicates the percentage of generation by each technique for the United States. Other countries choose generation methods according to the availability of resources and to political and cultural pressures. Germany is in the process of abandoning its use of nuclear energy for power generation; this is a cultural and political decision, not one based entirely upon scientific considerations.
You will notice that renewable energy was approximately 12.9 percent of the total generation within the United States in 2013. Please note also that hydroelectric is considered a source of renewable energy. This is shown by the graphic below. To break this down even further, we look at the following:
Renewable energy is represented by five (5) categories:
One additional possibility is generation of electricity by virtue of tidal processes. This technology is in its infancy, with work being accomplished on a “demonstration” scale. It is an up-and-coming methodology but does not yet enjoy a place within the list above.
Just how much energy results from each renewable category?
From the above we see there has been growing dependence upon renewable technology as a source of electricity. Wind and biomass production are increasing while hydroelectric is decreasing; geothermal and solar remain about the same. The increase in energy production by biomass is very significant.
The Energy Information Administration (EIA) has collected the following data:
Why should governments and independent companies continue to consider renewable energy as a source of power? There are compelling reasons.
- ENVIRONMENTAL BENEFITS– For the most part, renewable sources of energy have minimal negative impact on our environment and are paramount in reducing carbon dioxide emissions. Millions of people are exposed to toxic fumes from cooking fuels and kerosene lanterns, emissions from automobiles, and emissions from electricity generation, all of which result in chronic eye and lung conditions. Countries such as China and India have days when atmospheric particulates require masks or face coverings during prolonged periods of outdoor activity.
- ENERGY FOR THE FUTURE—Coal, oil, natural gas, and even nuclear energy are finite, non-renewable sources of energy. Once exhausted, they are gone forever, so prolonging their availability is paramount. We will never completely remove ourselves from being a petroleum-based economy; too many by-products are made from petroleum, and it is fantasy to expect total elimination of petroleum usage.
- JOBS AND ECONOMY – Investment in hardware and infrastructure for renewable energy use requires money but can create jobs. If you have been following the insanity relative to approval of the Keystone Pipeline, you know the argument. On a global basis, we can see the following (PLEASE NOTE: the numbers are in billions of US dollars):
The point of this graph is to show the increasing investment in R & D efforts and in the infrastructure that allows generation of renewable energy.
- ENERGY SECURITY– The U.S. imported approximately 10.6 million barrels per day of petroleum in 2012 from about 80 countries. We exported 3.2 MMbd of crude oil and petroleum products, resulting in net imports (imports minus exports) equaling 7.4 MMbd. Net imports accounted for 40% of the petroleum consumed in the United States, the lowest annual average since 1991.
“Petroleum” includes crude oil and refined petroleum products like gasoline, and biofuels like ethanol and biodiesel. In 2012, about 80% of gross petroleum imports were crude oil, and about 57% of all crude oil that was processed in U.S. refineries was imported.
The top five source countries of U.S. petroleum imports in 2012 were Canada, Mexico, Saudi Arabia, Venezuela, and Russia. Their respective rankings vary based on gross petroleum imports or net petroleum imports (gross imports minus exports). Net imports from OPEC countries accounted for 55% of U.S. net imports.
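The import figures quoted above hang together arithmetically; a quick sketch (using only the numbers stated in the text) checks them:

```python
# Verify the 2012 U.S. petroleum trade figures quoted above.
# All volumes in million barrels per day (MMbd), per the cited EIA data.
gross_imports = 10.6
exports = 3.2

# Net imports are defined as imports minus exports.
net_imports = gross_imports - exports
print(round(net_imports, 1))  # 7.4 MMbd, matching the text

# The text states net imports were 40% of consumption, implying
# total U.S. consumption of roughly:
consumption = net_imports / 0.40
print(round(consumption, 1))  # ~18.5 MMbd
```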
One disadvantage of renewable energy is that it is difficult to generate electricity in quantities as large as those produced by traditional fossil fuel generators. This may mean that we need to reduce the amount of energy we use or simply build more energy facilities. It also indicates that the best solution to our energy problems may be to maintain a balance of many different power sources.
Another disadvantage of renewable energy sources is reliability of supply. Renewable energy often relies on the weather for its source of power: hydro generators need rain to fill dams and supply flowing water, wind turbines need wind to turn the blades, and solar collectors need clear skies and sunshine to collect heat and make electricity. When these resources are unavailable, so is the capacity to make energy from them. This can be unpredictable and inconsistent. The current cost of renewable energy technology is also far in excess of traditional fossil fuel generation because it is a new technology and, as such, carries extremely large capital costs.
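The weather-dependence point above is commonly quantified as a plant's capacity factor: actual energy delivered divided by what the nameplate rating could deliver running continuously. A minimal sketch, using illustrative numbers rather than figures from this article:

```python
# Capacity factor: actual annual energy output divided by the energy the
# plant would produce running at nameplate capacity all year.
HOURS_PER_YEAR = 8760

def capacity_factor(energy_mwh: float, nameplate_mw: float) -> float:
    """Fraction of theoretical maximum annual output actually delivered."""
    return energy_mwh / (nameplate_mw * HOURS_PER_YEAR)

# A hypothetical 100 MW wind farm producing 306,600 MWh in a year -- the
# shortfall from 876,000 MWh reflects hours when the wind did not blow.
cf = capacity_factor(306_600, 100)
print(f"{cf:.0%}")  # 35%
```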
CONCLUSIONS: It remains right and proper that the United States and other countries continue research and development relative to renewable sources of energy. The cost of power generation is increasing and depletion of non-renewable sources is of great concern. We must continue efforts to improve technologies of renewable power to reduce the cost of infrastructure and delivery.
I would welcome your comments: firstname.lastname@example.org
August 9, 2014
One of the very best publications existing today is “NASA TECH BRIEFS, Engineering Solutions for Design & Manufacturing”. This monthly publication strives to transfer technology from NASA design centers to university and corporate entities in hopes that the research and development can be commercialized in some fashion. In my opinion, it is a marvelous resource and demonstrates avenues of investigation separate and apart from what we have come to know as the recognized NASA mission. As you well know, in the process of exploration, there are many very useful “down-to-Earth” developments that can be utilized and commercialized to benefit manufacturing and our populace at large. These are enumerated in this publication. Several distinct areas within the magazine highlighting papers and studies may be seen as follows:
- Technology Focus: Mechanical Components
- Manufacturing & Prototyping
- Materials & Coatings
- Physical Sciences
- Patents of Note
- New For Design Engineers
As you can see, each of these areas concentrates upon differing subjects, all relating to engineering and product design.
Let me now mention several publications and papers coming from the Volume 38, Number 8 edition. This will give you some feel for the investigative work coming from the NASA research centers across our country. These are in the August 2014 magazine.
- “Extreme Low Frequency Acoustic Measurement System”, Langley Research Center, Hampton, Va.
- “Piezoelectric Actuated Valve for Operation in Extreme Conditions”, Jet Propulsion Laboratory, Pasadena, California.
- “Compact Active Vibration Control System”, Langley Research Center, Hampton, Va.
- “Rotary Series Elastic Actuator”, L.B.J Space Center, Houston, Texas.
- “HALT Technique to Predict the Reliability of Solder Joints in a Shorter Duration”, Jet Propulsion Laboratory, Pasadena, California.
I feel one of the great failures of our federal government is the abdication of manned space programs. WE REALLY SCREWED UP on this one. If you have read any of my previous postings on this subject, you will understand my complete and utter amazement relative to that decision by the Executive and Legislative branches of our government. This, to some extent, underscores the deplorable lack of vision existing at the highest levels. We have decided to let the Russians get us up and back. Very bad decision on our part. Now, it is important to note that NASA is far from being dormant; NASA is working.
Let’s take a look at the various NASA locations and the areas of research they are undertaking.
- Ames Research Center: Technological Strengths: Information Technology, Biotechnology, Nanotechnology, Aerospace Operations Systems, Rotorcraft, Thermal Protection Systems.
- Armstrong Flight Research Center: Technological Strengths: Aerodynamics, Aeronautics Flight Testing, Aeropropulsion, Flight Systems, Thermal Testing Integrated Systems Test and Validation.
- Glenn Research Center: Technological Strengths: Aeropropulsion, Communications, Energy Technology, High-Temperature Materials Research.
- Goddard Space Flight Center: Technological Strengths: Earth and Planetary Science Missions, LIDAR, Cryogenic Systems, Tracking, Telemetry, Remote Sensing, Command.
- Jet Propulsion Laboratory: Technological Strengths: Near/Deep-Space Mission Engineering, Microspacecraft, Space Communications, Information Systems, Remote Sensing, Robotics.
- Johnson Space Center: Technological Strengths: Artificial Intelligence and Human Computer Interface, Life Sciences, Human Space Flight Operations, Avionics, Sensors, Communication.
- Kennedy Space Center: Technological Strengths: Fluids and Fluid Systems, Materials Evaluation, Process Engineering Command, Control, and Monitor Systems, Range Systems, Environmental Engineering and Management.
- Langley Research Center: Technological Strengths: Aerodynamics, Flight Systems, Materials, Structures, Sensors, Measurements, Information Sciences.
- Marshall Space Flight Center: Technological Strengths: Materials, Manufacturing, Nondestructive Evaluations, Biotechnology, Space Propulsion, Controls and Dynamics, Structures, Microgravity Processing.
- Stennis Space Center: Technological Strengths: Propulsion Systems, Test/Monitoring, Remote Sensing, Nonintrusive Instrumentation.
- NASA Headquarters: Technological Strengths: NASA Planning and Management.
I can strongly recommend to you the “Tech Brief” publication. It’s free. You may find further investigation into the areas of research can benefit you and your company. Take a look.
As always, I welcome your comments. Many thanks.
August 5, 2014
The following post is taken from a PDHonline course this author has written for professional engineers. The entire course may be found at PDHonline.org. Look for Introduction to Reliability Engineering.
One of the most difficult issues when designing a product is determining how long it will last and how long it should last. If the product is robust to the point of lasting “forever,” the purchase price will probably be prohibitive compared with the competition. If it “dies” the first week, you will eventually lose all sales momentum and your previous marketing efforts will be for naught. It is absolutely amazing to me how many products are dead on arrival. They don’t work, right out of the box. This is an indication of slipshod design, manufacturing, assembly, or all of the above. It is definitely possible to design and build quality and reliability into a product so that the end user is very satisfied and feels as though he got his money’s worth. The medical, automotive, aerospace and weapons industries are certainly dependent upon reliability methods to ensure safe and usable products, so premature failure is not an issue. The same thing can be said for consumer products if reliability methods are applied during the design phase of the development program. Reliability methodology will provide products that “fail safe,” if they fail at all. Component failures are not uncommon in any assembly of parts, but how a component fails can mean the difference between a product that just won’t work and one that can cause significant injury or even death to the user. It is very interesting to note that German and Japanese companies have put more effort into designing in quality at the product development stage; U.S. companies seem to place a greater emphasis on solving problems after a product has been developed. Engineers in the United States do an excellent job when cost-reducing a product through part elimination, standardization, material substitution, etc., but sometimes those efforts relegate reliability to the “back burner”.
Producibility, reliability, and quality start with design, at the beginning of the process, and should remain the primary concern throughout product development, testing and manufacturing.
QUALITY VS RELIABILITY:
There seems to be general confusion between quality and reliability. Quality is the “totality of features and characteristics of a product that bear on its ability to satisfy given needs; fitness for use”. “Reliability is a design parameter associated with the ability or inability of a product to perform as expected over a period of time”. It is definitely possible to have a product of considerable quality but one with questionable reliability. Quality AND reliability are crucial today, given the degree of technological sophistication, even in consumer products. As you well know, the incorporation of computer-driven and/or computer-controlled products has exploded over the past two decades. There is now an engineering discipline called MECHATRONICS that focuses solely on combining mechanics, electronics, control engineering and computing. Mr. Tetsuro Mori, a senior engineer working for a Japanese company called Yaskawa, first coined this term. The discipline is also alternately referred to as electromechanical systems. With added complexity comes the very real need to “design in” quality and reliability and to quantify the characteristics of operation, including the failure rate, the “mean time between failures” (MTBF) and the “mean time to failure” (MTTF). Adequate testing will also indicate what components and subsystems are susceptible to failure under given conditions of use. This information is critical to marketing, sales, engineering, manufacturing, quality and, of course, the VP of Finance who pays the bills.
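The MTBF and MTTF just mentioned are straightforward to compute from test or field records. A minimal sketch with hypothetical data (not figures from the course):

```python
# MTBF vs. MTTF from field records -- hypothetical data for illustration.
# MTBF applies to repairable items: total operating time / number of failures.
# MTTF applies to non-repairable items: the mean of the observed lifetimes.

def mtbf(total_operating_hours: float, failures: int) -> float:
    """Mean time between failures for a repairable system."""
    return total_operating_hours / failures

def mttf(lifetimes_hours: list[float]) -> float:
    """Mean time to failure for non-repairable components."""
    return sum(lifetimes_hours) / len(lifetimes_hours)

# A repairable pump fleet: 12,000 fleet-hours with 4 repairs logged.
print(mtbf(12_000, 4))  # 3000.0 hours

# Five non-repairable fuses run to failure:
print(mttf([900, 1_100, 1_000, 950, 1_050]))  # 1000.0 hours
```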
Every engineer involved with the design and manufacture of a product should have a basic knowledge of quality and reliability methods and practices.
I think it’s appropriate to define Reliability and Reliability Engineering. As you will see, there are several definitions, all basically saying the same thing, but important to mention, thereby grounding us for the course to follow.
“Reliability is, after all, engineering in its most practical form.”
James R. Schlesinger
Former Secretary of Defense
“Reliability is a projection of performance over periods of time and is usually defined as a quantifiable design parameter. Reliability can be formally defined as the probability or likelihood that a product will perform its intended function for a specified interval under stated conditions of use. “
John W. Priest
Engineering Design for Producibility and Reliability
“ Reliability engineering provides the tools whereby the probability and capability of an item performing intended functions for specified intervals in specified environments without failure can be specified, predicted, designed-in, tested, demonstrated, packaged, transported, stored installed, and started up; and their performance monitored and fed back to all organizations.”
“Reliability is the science aimed at predicting, analyzing, preventing and mitigating failures over time.”
John D. Healy, PhD
“Reliability is blood, sweat, and tears engineering to find out what could go wrong, to organize that knowledge so it is useful to engineers and managers, and then to act on that knowledge.”
Ralph A. Evans
“The conditional probability, at a given confidence level, that the equipment will perform its intended function for a specified mission time when operating under the specified application and environmental stresses.”
The General Electric Company
“By its most primitive definition, reliability is the probability that no failures will occur in a given time interval of operation. This time interval may be a single operation, such as a mission, or a number of consecutive operations or missions. The opposite of reliability is unreliability, which is defined as the probability of failure in the same time interval “.
“Reliability Theory and Practice”
Personally, I like the definition given by Dr. Healy, although the phrase “performing intended functions for specified intervals in specified environments” adds a reality to the definition that really should be there. Also, reliability data generally carry an associated confidence level. We will definitely discuss confidence level later on and how it factors into the reliability process. Reliability, like all other disciplines, has its own specific vocabulary, and understanding “the words” is absolutely critical to the overall process we wish to follow.
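The “probability that no failures will occur in a given time interval” definition above becomes concrete under the common constant-failure-rate (exponential) assumption, where R(t) = exp(-t/MTBF). The MTBF and mission time below are illustrative:

```python
import math

# Reliability R(t) is the probability of zero failures in an operating
# interval t; unreliability is F(t) = 1 - R(t). Under a constant failure
# rate lambda = 1/MTBF, the exponential model gives R(t) = exp(-t/MTBF).

def reliability(t_hours: float, mtbf_hours: float) -> float:
    """P(no failure in t hours), constant failure rate assumed."""
    return math.exp(-t_hours / mtbf_hours)

# Illustrative: a unit with a 2,000-hour MTBF flown on a 100-hour mission.
r = reliability(100, 2_000)
print(f"R = {r:.3f}, F = {1 - r:.3f}")  # R = 0.951, F = 0.049
```

Note that a point estimate like this carries no confidence level by itself; as the text says, stated confidence is discussed separately.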
The main goal of reliability engineering is to minimize the failure rate by maximizing MTTF. The two main goals of design for reliability are:
- Predict the reliability of an item, i.e. component, subsystem and system (fit the life model and/or estimate the MTTF or MTBF)
- Design against environments that promote failure. To do this, we must understand the KNPs (Key Noise Parameters) and KCPs (Key Control Parameters) of the entire system, or at least of the mission-critical subassemblies of the system.
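The first goal, fitting the life model, can be sketched with a simple two-parameter Weibull fit via median-rank regression (Benard's approximation). The failure times below are hypothetical test data, not figures from the course:

```python
import math

# Two-parameter Weibull fit by median-rank regression. The Weibull CDF
# linearizes as ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta), so an ordinary
# least-squares line through the ranked data yields both parameters.

def weibull_fit(failure_times):
    """Return (beta, eta): Weibull shape and characteristic life."""
    t = sorted(failure_times)
    n = len(t)
    xs = [math.log(ti) for ti in t]
    # Benard's median rank for the i-th ordered failure: (i - 0.3) / (n + 0.4)
    ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4)))
          for i in range(1, n + 1)]
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    eta = math.exp(mx - my / beta)  # characteristic life (63.2% point)
    return beta, eta

# Hypothetical hours-to-failure from a six-unit life test:
beta, eta = weibull_fit([410, 530, 620, 700, 780, 890])
# beta > 1 suggests a wearout mode; beta < 1 suggests infant mortality.
print(f"beta = {beta:.2f}, eta = {eta:.0f} hours")
```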
The overall effort is concerned with eliminating early failures by observing their distribution and determining, accordingly, the length of time necessary for debugging and methods used to debug a system or subsystem. Further, it is concerned with preventing wearout failures by observing the statistical distribution of wearout and determining the preventative replacement periods for the various parts. This equates to knowing the MTTF and MTBF. Finally, its main attention is focused on chance failures and their prevention, reduction or complete elimination because it is the chance failures that most affect equipment reliability in actual operation. One method of accomplishing the above two goals is by the development and refinement of mathematical models. These models, properly structured, define and quantify the operation and usage of components and systems.
No mechanical or electromechanical product will last forever without preventative maintenance and/or replacement of critical components. Reliability engineering seeks to discover the weakest link in the system or subsystem so any eventual product failure may be predicted and consequently forestalled. Any operational interruption may be eliminated by periodically replacing a part or an assembly of parts prior to failure. This predictive ability is achieved by knowing the mean time to failure (MTTF) and the mean time between failures (MTBF) for “mission critical” components and assemblies. With this knowledge, we can provide for continuous and safe operation, relative to a given set of environmental conditions and proper usage of the equipment itself. The test, analyze, and fix (TAAF) approach is used throughout reliability testing to discover which components are candidates for continuous “preventative maintenance” and possibly ultimate replacement. Sometimes designing redundancy into a system can prolong the operational life of a subsystem or system, but that is generally costly for consumer products. Usually, this is done only when the product absolutely must survive the most rigorous environmental conditions and circumstances. Most consumer products do not have redundant systems. Airplanes, medical equipment and aerospace equipment represent products that must have redundant systems for the sake of continued safety for those using the equipment. As mentioned earlier, at the very worst, we ALWAYS want our mechanism to “fail safe” with absolutely no harm to the end user or other equipment. This can be accomplished through engineering design and a strong adherence to accepted reliability practices. With this in mind, we start this process by recommending the following steps:
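The redundancy trade-off just described can be quantified for independent components: a series string fails if any part fails, while a parallel (redundant) group fails only if all parts fail. The component reliability below is an illustrative assumption:

```python
# Series vs. parallel (redundant) reliability for independent components.

def series(reliabilities):
    """System survives only if every component survives."""
    out = 1.0
    for r in reliabilities:
        out *= r
    return out

def parallel(reliabilities):
    """System fails only if every component fails."""
    fail = 1.0
    for r in reliabilities:
        fail *= (1.0 - r)
    return 1.0 - fail

r = 0.95  # assumed single-component mission reliability
print(round(series([r, r, r]), 3))   # three in series: 0.857
print(round(parallel([r, r]), 4))    # redundant pair:  0.9975
```

This is why redundancy is reserved for mission-critical applications: the reliability gain is real, but every redundant unit adds cost and weight.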
- Establish reliability goals and allocate reliability targets.
- Develop functional block diagrams for all critical systems
- Construct P-diagrams to identify and define KCPs and KNPs
- Benchmark current designs
- Identify the mission critical subsystems and components
- Conduct FMEAs
- Define and execute pre-production life tests; i.e. growth testing
- Conduct life predictions
- Develop and execute reliability audit plans
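The first step above, allocating reliability targets, can be sketched with equal apportionment: for n independent subsystems in series, each must achieve the n-th root of the system goal. The numbers are illustrative, not from the course:

```python
# Equal apportionment of a system reliability goal across n independent
# subsystems in series: R_sys = R_sub ** n, so R_sub = R_sys ** (1/n).

def equal_apportionment(r_system: float, n_subsystems: int) -> float:
    """Reliability target per subsystem under equal apportionment."""
    return r_system ** (1.0 / n_subsystems)

# Illustrative: a 0.90 system goal spread over 5 series subsystems.
target = equal_apportionment(0.90, 5)
print(round(target, 3))  # 0.979 -- each subsystem must far exceed the system goal
```

In practice, allocations are usually weighted by subsystem complexity or criticality rather than split equally, but the series relationship is the same.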
It is appropriate to mention now that this document assumes the product design is, at least, in the design confirmation phase of the development cycle and we have been given approval to proceed. Most NPI methodologies carry a product through design guidance, design confirmation, pre-pilot, pilot and production phases. Generally, at the pre-pilot point, the design is solidified so that evaluation and reliability testing can be conducted with assurance that any and all changes will be fairly minor and will not involve a “wholesale” redesign of any component or subassembly. This is not to say that when “mission critical components” fail we do not make all efforts to correct the failure(s) and put the product back into reliability testing. At the pre-pilot phase, the market surveys, consumer focus studies and all of the QFD work have been accomplished and we have tentative specifications for our product. Initial prototypes have been constructed and upper management has “signed off” and given approval to proceed into the next development cycles of the project. ONE CAUTION: Any issues involving safety of use must be addressed regardless of any changes becoming necessary for an adequate “fix”. This is imperative and must occur if failures arise, no matter what phase of the program is in progress.
Critical to these efforts will be conducting HALT and HAST testing to “make the product fail”. This will involve DOE (Design of Experiments) planning to quantify AND verify FMEA estimates. Significant time may be saved by carefully structuring a reliability evaluation plan to be accomplished at the component, subsystem and system levels. If you couple these tests with appropriate field-testing, you will develop a product that will “go the distance” relative to your goals and stay well within your SCR (Service Call Rate) requirements. Reliability testing must be an integral part of the basic design process and time must be given to this effort. The NPI process always includes reliability testing and the assessment of the results from that testing. Invariably, some degree of component or subsystem redesign results from HALT or HAST because weaknesses are made known that can and will be eliminated by redesign. In times past, engineering effort has always been to assign a “safety factor” to any design process. This safety factor takes into consideration “unknowns” that may affect the basic design. Unfortunately, this may produce a design that is structurally robust but fails due to Key Noise Parameters (KNPs) or Key Control Parameters (KCPs).
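The time compression HALT and HAST buy is often modeled, for temperature stress, with the Arrhenius acceleration factor. The activation energy and temperatures below are illustrative assumptions, not values from the course:

```python
import math

# Arrhenius acceleration factor for temperature-accelerated life testing:
#   AF = exp( (Ea / k) * (1/T_use - 1/T_stress) ),  temperatures in kelvin.
# One hour at the stress temperature ages the product like AF hours of use.

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Acceleration factor of the stress condition over the use condition."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Assumed 0.7 eV activation energy, 40 C use vs. a 110 C HAST chamber:
af = arrhenius_af(0.7, 40, 110)
print(round(af, 1))  # one chamber hour ~ this many field hours
```

The factor is extremely sensitive to the assumed activation energy, which is why the failure mechanism must be understood before the test results are extrapolated.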
As you might expect, this is a “lick and a promise” relative to the subject of reliability. It’s a very complex subject but one that has provided remarkable life and quality to consumer and commercial products. I would invite you to take a look at the literature and further your understanding of the “ins and outs” of the technology. As always, I welcome your comments.