According to Electronic Design magazine, electronic waste is the fastest-growing form of waste. Electromechanical waste results from the Digital Revolution, which refers to the advancement of technology from analog electronic and mechanical devices to the digital technology available today. The era began during the 1980s and is ongoing. The Digital Revolution also marks the beginning of the Information Era.

The Digital Revolution is sometimes also called the Third Industrial Revolution. The development and advancement of digital technologies started with one fundamental idea: The Internet. Here is a brief timeline of how the Digital Revolution progressed:

  • 1947-1979 – The transistor, which was introduced in 1947, paved the way for the development of advanced digital computers. The government, military and other organizations made use of computer systems during the 1950s and 1960s. This research eventually led to the creation of the World Wide Web.
  • 1980s – The computer became a familiar machine and by the end of the decade, being able to use one became a necessity for many jobs. The first cellphone was also introduced during this decade.
  • 1990s – By 1992, the World Wide Web had been introduced, and by 1996 the Internet became a normal part of most business operations. By the late 1990s, the Internet became a part of everyday life for almost half of the American population.
  • 2000s – By this decade, the Digital Revolution had begun to spread all over the developing world; mobile phones were commonly seen, the number of Internet users continued to grow, and the television started to transition from using analog to digital signals.
  • 2010 and beyond – By this decade, Internet users made up more than 25 percent of the world’s population. Mobile communication had also become very important, as nearly 70 percent of the world’s population owned a mobile phone. The connection between Internet websites and mobile gadgets had become a standard in communication. It was predicted that by 2015, tablet computers, with their use of the Internet and the promise of cloud computing services, would far surpass personal computers. This would allow users to consume media and use business applications on their mobile devices, applications that would otherwise be too demanding for such devices to handle.

In the United States, E-waste represents approximately two percent (2%) of America’s trash in landfills, but seventy percent (70%) of the overall toxic waste.  America recycles about 679,000 tons of E-waste annually, and that figure does not include a large portion of electronics such as TVs, DVD and VCR players, and related TV electronics. According to the EPA, E-waste is still the fastest-growing municipal waste stream.  Not only is electromechanical waste a major environmental problem, it also contains valuable resources that could generate revenue and be used again.  Cell phones and other electronic items contain high amounts of precious metals, such as gold and silver.  Americans dump phones containing more than sixty million dollars ($60,000,000) in gold and silver each year.

The United States and China generated the most e-waste last year, thirty-two percent (32%) of the world’s total. However, on a per capita basis, several countries famed for their environmental awareness and recycling records lead the way. Norway sits on top of the world’s electronic waste mountain, generating 62.4 pounds per inhabitant.

Technology has made a significant difference in the ability to deal with and handle E-waste products.  One country, Japan, is making a major effort to deal with the problem. Japan has approximately one hundred (100) major electronic waste facilities, as well as numerous smaller, local collection and processing facilities.  Of those one hundred major plants, more than thirty (30) utilize the Kubota Vertical Shredder to reduce the overall size of the assemblies. Recycling technology company swissRTec has announced that one of its key products, the Kubota Vertical Shredder, is now available in the United States to handle E-waste.

WHY IS E-WASTE RECYCLING IMPORTANT:

If we look at why recycling E-waste is important, we see the following:

  • Rich Source of Raw Materials: Internationally, only ten to fifteen percent (10-15%) of the gold in e-waste is successfully recovered; the rest is lost. Ironically, electronic waste contains deposits of precious metal estimated to be between forty and fifty (40 and 50) times richer than ores mined from the earth, according to the United Nations.
  • Solid Waste Management: The explosion of growth in the electronics industry, combined with short product life cycles, has led to a rapid escalation in the generation of solid waste.
  • Toxic Materials: Because old electronic devices contain toxic substances such as lead, mercury, cadmium and chromium, proper processing is essential to ensure that these materials are not released into the environment. They may also contain other heavy metals and potentially toxic chemical flame retardants.
  • International Movement of Hazardous Waste: The uncontrolled movement of e-waste to countries where cheap labor and primitive approaches to recycling result in health risks to local residents exposed to the release of toxins continues to be an issue of concern.

We are fortunate in Chattanooga to have an E-cycling station.  Forerunner Computer Recycling does just that.  Here is a cut from their web site:

“… with more than 15 years in the computer / e-waste recycling field, Forerunner Computer Recycling has given Chattanooga companies a responsible option to dispose of end-of-life-cycle and surplus computer equipment. All Chattanooga-based companies face the task of safely disposing of older equipment and their e-waste. The EPA estimates that as many as 500 million computers will soon become obsolete.

As Chattanooga businesses upgrade existing PCs, more computers and other e-waste are finding their way into the waste stream. According to the EPA, over two million tons of electronics waste is discarded each year and goes to U.S. landfills.

Now you have a partner in the computer / e-waste recycling business who understands your need to safely dispose of your computer and electronic equipment in an environmentally responsible manner.

By promoting reuse – computer recycling and electronic recycling – Forerunner Computer Recycling extends the life of computer equipment and reduces e-waste. Recycle your computers, recycle your electronics.”

CONCLUSIONS:

I definitely encourage you to look up the recycling E-waste facility in your city or county.  You will be doing our environment a great service in doing so.


One source for this post is the Forbes article “U.S. Dependence on Foreign Oil Hits 30-Year Low” by Mike Patton.  Other sources were used as well.

The United States is at this point in time “energy independent,” for the most part.   Do you remember the ’70s and how, at times, it was extremely difficult to buy gasoline?  If you were driving during the 1970s, you certainly must remember waiting in line for an hour or more just to put gas in the ol’ car. Thanks to the OPEC oil embargo, petroleum was in short supply. At that time, America’s need for crude oil was soaring while U.S. production was falling. As a result, the U.S. was becoming increasingly dependent on foreign suppliers. Things have changed a great deal since then. Beginning in the mid-2000s, America’s dependence on foreign oil began to decline.  One of the reasons for this decline is the abundance of natural gas, or methane, found in the US.

“At the rate of U.S. dry natural gas consumption in 2015 of about 27.3 Tcf (trillion cubic feet) per year, the United States has enough natural gas to last about 86 years. The actual number of years will depend on the amount of natural gas consumed each year, natural gas imports and exports, and additions to natural gas reserves. Jul 25, 2017”
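The arithmetic behind that eighty-six-year estimate is easy to check. Here is a minimal sketch; note that the total resource figure is back-calculated by multiplying the two quoted numbers, not stated in the quote itself:

```python
# Back-of-the-envelope check of the "86 years" figure quoted above.
annual_consumption_tcf = 27.3   # U.S. dry natural gas consumption, 2015 (Tcf/year)
years_of_supply = 86            # figure quoted in the article

# The implied recoverable resource (consumption rate x years of supply):
implied_reserves_tcf = annual_consumption_tcf * years_of_supply
print(f"Implied recoverable resource: {implied_reserves_tcf:.0f} Tcf")  # ~2,348 Tcf

# Years of supply at any assumed consumption rate:
def years_remaining(reserves_tcf, consumption_tcf_per_year):
    return reserves_tcf / consumption_tcf_per_year

print(f"{years_remaining(implied_reserves_tcf, 27.3):.0f} years")  # 86 at 2015 rates
```

As the quote notes, the actual number of years would shift with consumption, trade, and reserve additions; raising assumed consumption in `years_remaining` shortens the horizon proportionally.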

For most of the one hundred and fifty (150) years of U.S. oil and gas production, natural gas has played second fiddle to oil. That appeared to change in the mid-2000s, when natural gas became the star of the shale revolution and eight of every ten (10) rigs were chasing gas targets.

But natural gas turned out to be a shooting star. Thanks to the industry’s incredible success in leveraging game-changing technology to commercialize ultralow-permeability reservoirs, the market was looking at a supply glut by 2010, with prices below producer break-even values in many dry gas shale plays.

Everyone knows what happened next. The shale revolution quickly transitioned to crude oil production, and eight of every ten (10) rigs suddenly were drilling liquids. What many in the industry did not realize initially, however, is that tight oil and natural gas liquids plays would yield substantial associated gas volumes. With ongoing, dramatic per-well productivity increases in shale plays, and associated dry gas flowing from liquids resource plays, the beat just keeps going with respect to growth in oil, NGL and natural gas supplies in the United States.

Today’s market conditions certainly are not what had once been envisioned for clean, affordable and reliable natural gas. But producers can rest assured that vision of a vibrant, growing and stable market will become a reality; it just will take more time to materialize. There is no doubt that significant demand growth is coming, driven by increased consumption in industrial plants and natural gas-fired power generation, as well as exports, including growing pipeline exports to Mexico and overseas shipments of liquefied natural gas.

Just over the horizon, the natural gas star is poised to again shine brightly. But in the interim, what happens to the supply/demand equation? This is a critically important question for natural gas producers, midstream companies and end-users alike.

Natural gas production in the lower-48 states has increased from less than fifty (50) billion cubic feet a day (Bcf/d) in 2005 to about 70 Bcf/d today. This is an increase of forty (40%) percent over nine years, or a compound annual growth rate of about four (4%) percent. There is no indication that this rate of increase is slowing. In fact, with continuing improvements in drilling efficiency and effectiveness, natural gas production is forecast to reach almost ninety (90) Bcf/d by 2020, representing another twenty-nine (29%) percent increase over 2014 output.
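The growth figures above can be verified with quick arithmetic; this sketch uses the approximate production values cited in the paragraph:

```python
# Checking the cited growth figures for lower-48 natural gas production.
production_2005 = 50.0   # Bcf/d, approximate 2005 level
production_today = 70.0  # Bcf/d, approximate current level
years = 9

total_growth = production_today / production_2005 - 1
cagr = (production_today / production_2005) ** (1 / years) - 1

print(f"Total growth: {total_growth:.0%}")      # 40%
print(f"CAGR over {years} years: {cagr:.1%}")   # ~3.8%, i.e. "about 4 percent"

# The forecast of ~90 Bcf/d by 2020 versus 2014 output:
forecast_growth = 90.0 / 70.0 - 1
print(f"Forecast increase: {forecast_growth:.0%}")  # ~29%
```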

Most of this production growth is concentrated in a few extremely prolific producing regions. Four of these are in a fairway that runs from the Texas Gulf Coast to North Dakota through the middle section of the country, and encompasses the Eagle Ford, the Permian Basin, the Granite Wash, the SouthCentral Oklahoma Oil Play and other basins in Oklahoma, and the Williston Basin. The other major producing region is the Marcellus and Utica shales in the Northeast. Almost all the natural gas supply growth is coming from these regions.

We are at the point where this abundance can allow US companies to export LNG, or liquefied natural gas.   To move this cleaner-burning fuel across oceans, natural gas must be converted into liquefied natural gas (LNG), a process called liquefaction. LNG is natural gas that has been cooled to –260° F (–162° C), changing it from a gas into a liquid that is 1/600th of its original volume.  This would be the same requirement for Dayton: the methane gas captured would need to be liquefied and stored.  The LNG is then transported in a vessel similar to the one shown below:
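The temperature and volume figures above can be double-checked with simple conversions. A quick sketch; the 1,000-cubic-meter cargo used here is an illustrative value, not from the text:

```python
# Unit check on the LNG figures: -260 F in Celsius, and the 1/600 volume reduction.
def f_to_c(temp_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32) * 5 / 9

lng_temp_c = f_to_c(-260)
print(f"-260 F = {lng_temp_c:.0f} C")  # about -162 C, matching the text

# An illustrative cargo: 1,000 cubic meters of gas shrinks to ~1.67 cubic meters as LNG.
gas_volume_m3 = 1000
lng_volume_m3 = gas_volume_m3 / 600
print(f"{gas_volume_m3} m^3 of gas -> {lng_volume_m3:.2f} m^3 of LNG")
```

That 600-to-1 shrinkage is precisely what makes ocean transport economical: a tanker hold carries six hundred times more energy as liquid than it could as gas at atmospheric pressure.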

As you might expect, a vessel such as this requires very specific designs relative to the containment area.  A cut-a-way is given below to indicate just how exacting that design must be to accomplish, without mishap, the transportation of LNG to other areas of the world.

Loading LNG from storage onto the vessel is no easy matter either and requires another significant expenditure of capital.

For this reason, LNG facilities around the world are somewhat limited in number.  The map below indicates their locations.

A typical LNG station, with both process and loading areas, may be seen below.  This one is in Darwin.

CONCLUSIONS:

With natural gas in great supply, there will follow increasing demand around the world for this precious commodity.  We already see automobiles using LNG instead of gasoline as a primary fuel.  Also, the cost of LNG is significantly less than that of gasoline, even with average gasoline prices across the US being around $2.00 per gallon.  According to AAA, the national average for regular, unleaded gasoline has fallen for thirty-five (35) out of thirty-six (36) days to $2.21 per gallon and sits at the lowest mark for this time of year since 2004. Gas prices continue to drop in most parts of the country due to abundant fuel supplies and declining crude oil costs. Average prices are about fifty-five (55) cents less than a year ago, which is motivating millions of Americans to take advantage of cheap gas by taking long road trips this summer.

I think the bottom line is: natural gas is here to stay.


Elon Musk has warned again about the dangers of artificial intelligence, saying that it poses “vastly more risk” than the apparent nuclear capabilities of North Korea do. I feel sure Mr. Musk is talking about the long-term dangers and not short-term realities.   Mr. Musk is shown in the picture below.

This is not the first time Musk has stated that AI could potentially be one of the most dangerous international developments. He said in October 2014 that he considered it humanity’s “biggest existential threat”, a view he has repeated several times while making investments in AI startups and organizations, including OpenAI, to “keep an eye on what’s going on”.  As he tweeted: “Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA would not make flying safer. They’re there for good reason.”

Musk again called for regulation, previously doing so directly to US governors at their annual national meeting in Providence, Rhode Island.  Musk’s tweets coincide with the testing of an AI designed by OpenAI to play the multiplayer online battle arena (Moba) game Dota 2, which successfully managed to win all its 1-v-1 games at the International Dota 2 championships against many of the world’s best players competing for a $24.8m (£19m) prize fund.

The AI displayed the ability to predict where human players would deploy forces and improvise on the spot, in a game where sheer speed of operation does not correlate with victory, meaning the AI was simply better, not just faster than the best human players.

Musk backed the non-profit AI research company OpenAI in December 2015, taking up a co-chair position. OpenAI’s goal is to develop AI “in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”. But it is not the first group to take on human players in a gaming scenario. Google’s Deepmind AI outfit, in which Musk was an early investor, beat the world’s best players in the board game Go and has its sights set on conquering the real-time strategy game StarCraft II.

Musk envisions a situation like the one found in the movie “I, Robot,” with humanoid robotic systems shown below: robots that can think for themselves. It is a great movie, but consider its premise. Set on a future Earth (2035 A.D.) where robots are common assistants and workers for their human owners, it tells the story of “robotophobic” Chicago police detective Del Spooner’s investigation into the murder of Dr. Alfred Lanning, who works at U.S. Robotics.  Let me clue you in: the robot did it.

I am sure this audience is familiar with Isaac Asimov’s Three Laws of Robotics.

  • First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov’s three laws suggest there will be no “Rise of the Machines” like the one depicted in the very popular movie.   For the three laws to be null and void, we would have to enter a world of “singularity”.  The term singularity describes the moment when a civilization changes so much that its rules and technologies are incomprehensible to previous generations. Think of it as a point of no return in history. Most thinkers believe the singularity will be jump-started by extremely rapid technological and scientific changes. These changes will be so fast, and so profound, that every aspect of our society will be transformed, from our bodies and families to our governments and economies.

A good way to understand the singularity is to imagine explaining the internet to somebody living in the year 1200. Your frames of reference would be so different that it would be almost impossible to convey how the internet works, let alone what it means to our society. You are on the other side of what seems like a singularity to our person from the Middle Ages. But from the perspective of a future singularity, we are the medieval ones. Advances in science and technology mean that singularities might happen over periods much shorter than 800 years. And nobody knows for sure what the hell they’ll bring.

Author Ken MacLeod has a character describe the singularity as “the Rapture for nerds” in his novel The Cassini Division, and the turn of phrase stuck, becoming a popular way to describe the singularity. (Note: MacLeod didn’t actually coin this phrase; he says he got it from a satirical essay in an early-1990s issue of Extropy.) Catherynne Valente recently argued for an expansion of the term to include what she calls “personal singularities,” moments where a person is altered so much that she becomes unrecognizable to her former self. This definition could include post-human experiences, and post-human (my words) would describe a robotic future.

Could this happen?  Elon Musk has an estimated net worth of $13.2 billion, making him the 87th richest person in the world, according to Forbes. His fortune owes much to his stake in Tesla Motors Inc. (TSLA), of which he remains CEO and chief product architect. Musk made his first fortune as a cofounder of PayPal, the online payments system that was sold to eBay for $1.5 billion in 2002.  In other words, he is no dummy.

I think it is very wise to listen to people like Musk and heed any and all warnings they may give. The Executive, Legislative and Judicial branches of our country are too busy trying to get reelected to bother with such warnings and when “catch-up” is needed, they always go overboard with rules and regulations.  Now is the time to develop proper and binding laws and regulations—when the technology is new.

Astrolabe

October 25, 2017


Information for the following post was taken from an article entitled “It’s Official: Earliest Known Marine Astrolabe Found in Shipwreck” by Laura Geggel, senior writer for LiveScience, 25 October 2017.

It’s amazing to me how much history is yet to be discovered, understood and transmitted to readers such as you and me.   I read a fascinating article some months ago indicating the history we do NOT know far exceeds the history we DO know.  Of course, the “winners” get to write their version of what happened.  This is as it has always been. In the great and grand scheme of things, we have artifacts and mentifacts.

ARTIFACT:

“Any object made by human beings, especially with a view to subsequent use.  A handmade object, such as a tool, or the remains of one, such as a shard of pottery, characteristic of an earlier time or cultural stage, especially such an object found at an archaeological excavation.”

MENTIFACT:

“Mentifact (sometimes called a “psychofact”) is a term coined by Sir Julian Sorell Huxley, used together with the related terms “sociofact” and “artifact” to describe how cultural traits, such as “beliefs, values, ideas,” take on a life of their own spanning over generations, and are conceivable as objects in themselves.”

The word astrolabe is defined as follows:

The astrolabe is a very ancient astronomical computer for solving problems relating to time and the position of the Sun and stars.  Several types of astrolabes have been made.  By far the most popular type is the planispheric astrolabe, on which the celestial sphere is projected onto the plane of the equator.  A typical old astrolabe was made of brass and was approximately six (6) inches in diameter, although much larger and smaller astrolabes were also fabricated.

The subject for this post is the device shown as follows:

FIND:

More than 500 years ago, a fierce storm sank a ship carrying the earliest known marine astrolabe — a device that helped sailors navigate at sea, new research finds. Divers found the artifact in 2014, but were unsure exactly what it was at the time. Now, thanks to a 3D-imaging scanner, scientists were able to find etchings on the bronze disc that confirmed it was an astrolabe.

“It was fantastic to apply our 3D scanning technology to such an exciting project and help with the identification of such a rare and fascinating item,” Mark Williams, a professorial fellow at the Warwick Manufacturing Group at the University of Warwick, in the United Kingdom, said in a statement. Williams and his team did the scan.

 

The marine astrolabe likely dates to between 1495 and 1500, and was aboard a ship known as the Esmeralda, which sank in 1503. The Esmeralda was part of a fleet led by Portuguese explorer Vasco da Gama, the first known person to sail directly from Europe to India.

In 2014, an expedition led by Blue Water Recoveries excavated the Esmeralda shipwreck and recovered the astrolabe. But because researchers couldn’t discern any navigational markings on the almost seven-inch-diameter (17.5 centimeters) disc, they were cautious about labeling it without further evidence.

Now, the new scan reveals etchings around the edge of the disc, each separated by five degrees, Williams found. This detail proves it’s an astrolabe, as these markings would have helped mariners measure the height of the sun above the horizon at noon — a strategy that helped them figure out their location while at sea, Williams said.  The disc is also engraved with the Portuguese coat of arms and the personal emblem of Dom Manuel I, Portugal’s king from 1495 to 1521.  “Usually we are working on engineering-related challenges, so to be able to take our expertise and transfer that to something totally different and so historically significant was a really interesting opportunity,” Williams said.
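To see why those five-degree markings mattered, here is a simplified sketch of the noon-sight calculation a marine astrolabe supported. The altitude and declination values are illustrative, and the formula covers only the common case of an observer north of the sun:

```python
# Simplified noon-sight latitude calculation of the kind a marine astrolabe
# enabled: measure the sun's altitude at local noon with the graduated edge,
# then combine it with the sun's declination (tabulated in period almanacs).
def latitude_from_noon_sight(sun_altitude_deg, sun_declination_deg):
    """Observer's latitude in degrees, for an observer north of the sun:
    latitude = zenith distance (90 - altitude) + solar declination."""
    return 90.0 - sun_altitude_deg + sun_declination_deg

# Illustrative example: sun measured 50 degrees above the horizon at noon,
# declination +10 degrees (a spring date):
lat = latitude_from_noon_sight(50.0, 10.0)
print(f"Estimated latitude: {lat:.1f} degrees north")  # 50.0
```

With markings every five degrees, a navigator could read altitude only to within a couple of degrees, so a single sight fixed latitude to very roughly a hundred miles; that was still a dramatic improvement over dead reckoning alone.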

CONCLUSIONS:

Only through three-dimensional scanning techniques could the purpose of this device be known.  Once again, modern technology allows for the unveiling of the truth.  The engravings indicating Portugal’s king nailed down the time period.  This is a significant find, and it confirms details of early voyages throughout history.

AUGMENTED REALITY (AR)

October 13, 2017


Ask just about anybody to give a definition of Virtual Reality (VR) and they will take a stab at it. This is because the gaming and entertainment segments of our population have used VR as a new tool to promote games such as SuperHot VR, Rock Band VR, House of the Dying Sun, Minecraft VR, Robo Recall, and others.  If you ask them about Augmented Reality, or AR, they will probably give you the definition of VR, or nothing at all.

Augmented reality, sometimes called Mixed Reality, is a technology that merges real-world objects or the environment with virtual elements generated by sensory input devices for sound, video, graphics, or GPS data.  Unlike VR, which completely replaces the real world with a virtual world, AR operates in real time and is interactive with objects found in the environment, providing an overlaid virtual display over the real one.

While popularized by gaming, AR technology has shown a prowess for bringing an interactive digital world into a person’s perceived real world, where the digital aspect can reveal more information about a real-world object that is seen in reality.  This is basically what AR strives to do.  We are going to take a look at several very real applications of AR to indicate the possibilities of this technology.

  • Augmented Reality has found a home in healthcare, aiding preventative measures and letting professionals receive information on the status of patients. Healthcare giant Cigna recently launched a program called BioBall that uses Microsoft HoloLens technology in an interactive game to test for blood pressure and body mass index, or BMI. Patients hold a light, medium-sized ball in their hands in a one-minute race to capture all the images that flash on the screen in front of them. The BioBall senses a player’s heartbeat. At the University of Maryland’s Augmentarium virtual and augmented reality laboratory, the school is using AR in healthcare to improve how ultrasound is administered to a patient.  Physicians wearing an AR device can look at both the patient and the ultrasound device while images flash on the “hood” of the AR device itself.
  • AR is opening up new methods to teach young children a variety of subjects they might not be interested in learning or, in some cases, help those who have trouble in class catching up with their peers. The University of Helsinki’s AR program helps struggling kids learn science by enabling them to virtually interact with the molecule movement in gases, gravity, sound waves, and airplane wind physics.   AR creates new types of learning possibilities by transporting “old knowledge” into a new format.
  • Projection-based AR is emerging as a new way to cast virtual elements into the real world without the use of bulky headgear or glasses. That is why AR is becoming a very popular alternative for use in the office or during meetings. Startups such as Lampix and Lightform are working on projection-based augmented reality for use in the boardroom, retail displays, hospitality rooms, digital signage, and other applications.
  • In Germany, a company called FleetBoard is developing application software that tracks logistics for truck drivers to help with the long series of pre-departure checks before setting off cross-country or for local deliveries. The FleetBoard Vehicle Lens app uses a smartphone and software to provide live image recognition to identify the truck’s number plate.  The relevant information is superimposed in AR, thus speeding up the pre-departure process.
  • Last winter, Delft University of Technology in the Netherlands started working with first responders in using AR as a tool in crime scene investigation. The handheld AR system allows on-scene investigators and remote forensic teams to minimize the potential for site contamination.  This could be extremely helpful in finding traces of DNA, preserving evidence, and getting medical help from an outside source.
  • Sandia National Laboratories is working with AR as a tool to improve security training for users who are protecting vulnerable areas such as nuclear weapons or nuclear materials. The physical security training helps guide users through real-world examples such as theft or sabotage in order to be better prepared when an event takes place.  The training can be accomplished remotely and cheaply using standalone AR headsets.
  • In Finland, the VTT Technical Research Center recently developed an AR tool for the European Space Agency (ESA) for astronauts to perform real-time equipment monitoring in space. AR prepares astronauts with in-depth practice by coordinating the activities with experts in a mixed-reality situation.
  • U.S.-based Daqri International uses computer vision for industrial AR to enable data visualization while working on machinery or in a warehouse. Daqri’s glasses and headsets display project data, tasks that need to be completed, and potential problems with machinery, and can even show where an object needs to be placed or repaired.

CONCLUSIONS:

Augmented Reality merges real-world objects with virtual elements generated by sensory input devices to provide great advantages to the user.  No longer are gaming and entertainment the sole objectives of its use.  This brings to life a “new normal” for professionals seeking more and better technology to provide solutions to real-world problems.

DEGREE OR NO DEGREE

October 7, 2017


The availability of information in books (as always), on the Internet, through seminars and professional shows, scientific publications, podcasts, Webinars, etc. is amazing in today’s “digital age”.  That raises the question: is a college degree really necessary?   Can you rise to a level of competence and succeed by being self-taught?  For most, a college degree is the way to open doors. For a precious few, however, no help is needed.

Let’s look at twelve (12) individuals who did just that.

The co-founder of Apple and the force behind the iPod, iPhone, and iPad, Steve Jobs attended Reed College, an academically-rigorous liberal arts college with a heavy emphasis on social sciences and literature. Shortly after enrolling in 1972, however, he dropped out and took a job as a technician at Atari.

Legendary industrialist Howard Hughes is often said to have graduated from Caltech, but the truth is that the California school has no record of his having attended classes there. He did enroll at Rice University in Texas in 1924, but dropped out prematurely due to the death of his father.

Arguably Harvard’s most famous dropout, Bill Gates was already an accomplished software programmer when he started as a freshman at the Massachusetts campus in 1973. His passion for software actually began before high school, at the Lakeside School in Seattle, Washington, where he was programming in BASIC by age 13.

Just like his fellow Microsoft co-founder Bill Gates, Paul Allen was a college dropout.

Like Gates, he was also a star student (a perfect score on the SAT) who honed his programming skills at the Lakeside School in Seattle. Unlike Gates, however, he went on to study at Washington State University before leaving in his second year to work as a programmer at Honeywell in Boston.

Even for his time, Thomas Edison had little formal education. His schooling didn’t start until age eight, and then only lasted a few months.

Edison said that he learned most of his reading, writing, and math at home from his mother. Still, he became known as one of America’s most prolific inventors, amassing 1,093 U.S. patents and changing the world with such devices as the phonograph, fluoroscope, stock ticker, motion picture camera, mechanical vote recorder, and long-lasting incandescent electric light bulb. He is also credited with patenting a system of electrical power distribution for homes, businesses, and factories.

Michael Dell, founder of Dell Computer Corp., seemed destined for a career in the computer industry long before he dropped out of the University of Texas. He purchased his first calculator at age seven, applied to take a high school equivalency exam at age eight, and performed his first computer teardown at age 15.

A pioneer of early television technology, Philo T. Farnsworth was a brilliant student who dropped out of Brigham Young University after the death of his father, according to Biography.com.

Although born in a log cabin, Farnsworth quickly grasped technical concepts, sketching out his revolutionary idea for a television vacuum tube while still in high school, much to the confusion of teachers and fellow students.

Credited with inventing the controls that made fixed-wing powered flight possible, the Wright Brothers had little formal education.

Neither attended college, but they gained technical knowledge from their experiences working with printing presses, bicycles, and motors. By doing so, they were able to develop a three-axis controller, which served as the means to steer and maintain the equilibrium of an aircraft.

Stanford Ovshinsky managed to amass 400 patents covering subjects ranging from nickel-metal hydride batteries to amorphous silicon semiconductors to hydrogen fuel cells, all without the benefit of a college education. He is best known for his formation of Energy Conversion Devices and his pioneering work in nickel-metal hydride batteries, which have been widely used in hybrid and electric cars, as well as laptop computers, digital cameras, and cell phones.

Preston Tucker, designer of the ill-fated 1948 Tucker sedan, worked as a machinist, police officer, and car salesman, but was not known to have attended college. Still, he managed to become founder of the Tucker Aviation Corp. and the Tucker Corp.

Larry Ellison dropped out of his pre-med studies at the University of Illinois in his second year and left the University of Chicago after only one term, but his brief academic experiences eventually led him to the top of the computer industry.

A Harvard dropout, Mark Zuckerberg was considered a prodigy before he even set foot on campus.

He began doing BASIC programming in middle school, created an instant messaging system while in high school, and learned to read and write French, Hebrew, Latin, and ancient Greek prior to enrolling in college.

CONCLUSIONS:

In conclusion, I want to leave you with a quote from President Calvin Coolidge:

Nothing in this world can take the place of persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent.


WHERE WE ARE:

The manufacturing industry remains an essential component of the U.S. economy. In 2016, manufacturing accounted for almost twelve percent (11.7%) of the U.S. gross domestic product (GDP) and contributed slightly over two trillion dollars ($2.18 trillion) to our economy. Every dollar spent in manufacturing adds close to two dollars ($1.81) to the economy, because it drives activity in auxiliary sectors such as logistics, retail, and business services. I personally think this is a striking number when you compare that contribution to other sectors of our economy. Interestingly enough, according to recent research, manufacturing could constitute as much as thirty-three percent (33%) of the U.S. GDP if both its entire value chain and its production for other sectors are included.

Research from the Bureau of Labor Statistics shows that employment in manufacturing has been trending up since January 2017. After double-digit gains in the first quarter of 2017, six thousand (6,000) new jobs were added in April. Currently, the manufacturing industry employs 12,396,000 people, more than nine percent (9%) of the U.S. workforce. Nonetheless, many experts are concerned that these employment gains will soon be halted by the ever-rising adoption of automation. Yet automation is inevitable, and as in previous industrial revolutions, it is likely to result in job creation in the long term. A look back at the Industrial Revolution shows why.
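The figures above combine into some simple back-of-the-envelope arithmetic. A small sketch (the numbers come from the text; my reading that the $1.81 is added on top of the original manufacturing dollar is an assumption):

```python
# Back-of-the-envelope arithmetic using the figures cited above.
manufacturing_gdp = 2.18e12   # manufacturing's 2016 contribution, in dollars
gdp_share = 0.117             # 11.7% of U.S. GDP
multiplier = 1.81             # dollars added elsewhere per manufacturing dollar

us_gdp = manufacturing_gdp / gdp_share   # implied total U.S. GDP
ripple = manufacturing_gdp * multiplier  # induced activity in auxiliary sectors

print(f"Implied U.S. GDP: ${us_gdp / 1e12:.1f} trillion")
print(f"Induced activity in other sectors: ${ripple / 1e12:.1f} trillion")
```

Run as-is, this implies a total U.S. GDP of roughly $18.6 trillion and close to $3.9 trillion of induced activity in other sectors.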

INDUSTRIAL REVOLUTION:

The Industrial Revolution began in the late 18th century when a series of new inventions such as the spinning jenny and steam engine transformed manufacturing in Britain. The changes in British manufacturing spread across Europe and America, replacing traditional rural lifestyles as people migrated to cities in search of work. Men, women and children worked in the new factories operating machines that spun and wove cloth, or made pottery, paper and glass.

Women under 20 comprised the majority of all factory workers, according to an article on the Industrial Revolution by the Economic History Association. Many power loom workers, and most water frame and spinning jenny workers, were women. However, few women were mule spinners, and male workers sometimes violently resisted attempts to hire women for this position, although some women did work as assistant mule spinners. Many children also worked in the factories and mines, operating the same dangerous equipment as adult workers. As you might suspect, this was a great departure from times prior to the revolution.

WHERE WE ARE GOING:

In an attempt to create more jobs, the new administration is reassessing free trade agreements, levying tariffs on imports, and promising tax incentives to manufacturers to keep their production plants in the U.S. Yet while these measures are certainly making the U.S. more attractive for manufacturers, they're unlikely to directly increase the number of jobs in the sector. What they will do, however, is free up more capital for manufacturers to invest in automation. This will have the following benefits:

  • Automation will reduce production costs and make U.S. companies more competitive in the global market. High domestic operating costs—in large part due to comparatively high wages—compromise the U.S. manufacturing industry's position as the world leader. Our main competitor is China, where low-cost production plants currently produce almost eighteen percent (17.6%) of the world's goods—just zero point six percent (0.6%) less than the U.S. Automation allows manufacturers to reduce labor costs and streamline processes. Lower manufacturing costs result in lower product prices, which in turn will increase demand.


  • Automation increases productivity and improves quality. Smart manufacturing processes that make use of technologies such as robotics, big data, analytics, sensors, and the IoT are faster, safer, more accurate, and more consistent than traditional assembly lines. Robotics provide 24/7 labor, while automated systems perform real-time monitoring of the production process. Irregularities, such as equipment failures or quality glitches, can be immediately addressed. Connected plants use sensors to keep track of inventory and equipment performance, and automatically send orders to suppliers when necessary. All of this combined minimizes downtime, while maximizing output and product quality.
  • Manufacturers will re-invest in innovation and R&D. Cutting-edge technologies, such as robotics, additive manufacturing, and augmented reality (AR), are likely to be widely adopted within a few years. For example, Apple® CEO Tim Cook recently announced the tech giant's $1 billion investment fund aimed at assisting U.S. companies practicing advanced manufacturing. To remain competitive, manufacturers will have to re-invest a portion of their profits in R&D. An important aspect of innovation will involve determining how to integrate increasingly sophisticated technologies with human functions to create highly effective solutions that support manufacturers' outcomes.
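The connected-plant behavior described above—sensors tracking inventory and automatically sending orders to suppliers—can be sketched in a few lines. The part names, trigger levels, and quantities here are hypothetical, chosen only to illustrate the idea:

```python
from dataclasses import dataclass

@dataclass
class StockItem:
    name: str
    on_hand: int        # current sensor-reported count
    reorder_point: int  # trigger level (hypothetical threshold)
    reorder_qty: int    # quantity to request from the supplier

def check_inventory(items):
    """Return the purchase orders a connected plant would send automatically."""
    orders = []
    for item in items:
        if item.on_hand <= item.reorder_point:
            orders.append((item.name, item.reorder_qty))
    return orders

inventory = [
    StockItem("bearing-6204", on_hand=12, reorder_point=50, reorder_qty=500),
    StockItem("m8-bolt", on_hand=900, reorder_point=200, reorder_qty=1000),
]
print(check_inventory(inventory))  # only the bearing falls below its trigger
```

In a real plant the counts would stream in from networked sensors and the orders would go out over a supplier API, but the decision logic is this simple threshold check.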


HOW AUTOMATION WILL AFFECT THE WORKFORCE:

Now, let’s look at the five ways in which automation will affect the workforce.

  • Certain jobs will be eliminated. By 2025, 3.5 million jobs will be created in manufacturing—yet due to the skills gap, two (2) million will remain unfilled. Certain repetitive jobs, primarily on the assembly line, will be eliminated. This trend is with us right now. Retraining of employees is imperative.
  • Current jobs will be modified. In sixty percent (60%) of all occupations, thirty percent (30%) of the tasks can be automated. For the first time, we hear the word "co-bot." A co-bot (collaborative robot) enables robot-assisted manufacturing in which an employee works side by side with a robotic system. It's happening right now.
  • New jobs will be created. There are several ways automation will create new jobs. First, lower operating costs will make U.S. products more affordable, which will result in rising demand. This in turn will increase production volume and create more jobs. Second, while automation can streamline and optimize processes, there are still tasks that haven’t been or can’t be fully automated. Supervision, maintenance, and troubleshooting will all require a human component for the foreseeable future. Third, as more manufacturers adopt new technologies, there’s a growing need to fill new roles such as data scientists and IoT engineers. Fourth, as technology evolves due to practical application, new roles that integrate human skills with technology will be created and quickly become commonplace.
  • There will be a skills gap between eliminated jobs and modified or new roles. Manufacturers should partner with educational institutions that offer vocational training in STEM fields. By offering students on-the-job training, they can foster a skilled and loyal workforce.  Manufacturers need to step up and offer additional job training.  Employees need to step up and accept the training that is being offered.  Survival is dependent upon both.
  • The manufacturing workforce will keep evolving. Manufacturers must invest in talent acquisition and development—both to build expertise in-house and to facilitate continuous innovation. Ten years ago, would you have heard the words RFID, biometrics, stereolithography, or additive manufacturing? I don't think so. The workforce MUST keep evolving, because technology will only improve and become an ever-more-present force on the manufacturing floor.
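Two of the figures in the list above are worth spelling out. A quick sketch (the assumption that the two percentages can simply be multiplied—that is, that tasks outside the affected occupations are not automatable at all—is mine, so treat the result as a rough upper bound, not a forecast):

```python
# Combining two figures quoted above into an illustrative upper bound.
jobs_created = 3.5e6   # manufacturing jobs projected to be created by 2025
jobs_unfilled = 2.0e6  # expected to go unfilled due to the skills gap

occupations_affected = 0.60  # share of occupations containing automatable tasks
tasks_automatable = 0.30     # share of tasks automatable within those occupations
overall_share = occupations_affected * tasks_automatable

print(f"Jobs actually filled by 2025: {jobs_created - jobs_unfilled:,.0f}")
print(f"Automatable share of all work (if the rates multiply): {overall_share:.0%}")
```

Under those assumptions, only 1.5 million of the projected jobs get filled, and at most about 18 percent of all work is automatable—which is exactly why the retraining point above matters.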

As always, I welcome your comments.
