December 12, 2017

The other day I was visiting a client and discussing a project involving the application of a robotic system to an existing work cell.  The process is somewhat complex and we all questioned which employee would manage the operation of the cell including the system.  The system is a SCARA type.  SCARA is an acronym for Selective Compliance Assembly Robot Arm or Selective Compliance Articulated Robot Arm.

In 1981, Sankyo Seiki, Pentel, and NEC presented a completely new concept for assembly robots. The robot was developed under the guidance of Hiroshi Makino, a professor at the University of Yamanashi, and was called the Selective Compliance Assembly Robot Arm, or SCARA.

SCARAs are generally faster and cleaner than comparable Cartesian (X, Y, Z) robotic systems.  Their single pedestal mount requires a small footprint and provides an easy, unhindered form of mounting. On the other hand, SCARAs can be more expensive than comparable Cartesian systems, and the controlling software requires inverse kinematics for linear interpolated moves. That software, however, typically ships with the SCARA and is usually transparent to the end-user.  The SCARA system used in this work cell had the capability of one hundred (100) programs with one hundred (100) data points per program.  It was programmed using a “teach pendant” and a “jog” switch controlling the placement of the robotic arm over the material.

Several names were mentioned as to who might ultimately, after training, be capable of taking on this task.  When one individual was named, the retort was, “not James, he is only half smart.”  That got me to thinking about “smarts.”  How smart is smart?  At what point do we say smart is smart enough?


The concept of IQ, or intelligence quotient, was developed either by the German psychologist and philosopher Wilhelm Stern in 1912 or by Lewis Terman in 1916, depending on which of several sources you consult.  Intelligence testing itself, however, was already being accomplished on a large scale before either of those dates. In 1904, psychologist Alfred Binet was commissioned by the French government to create a testing system to differentiate intellectually normal children from those who were inferior.

From Binet’s work the IQ scale called the “Binet Scale,” (and later the “Simon-Binet Scale”) was developed. Sometime later, “intelligence quotient,” or “IQ,” entered our vocabulary.  Lewis M. Terman revised the Simon-Binet IQ Scale, and in 1916 published the Stanford Revision of the Binet-Simon Scale of Intelligence (also known as the Stanford-Binet).

Intelligence tests are one of the most popular types of psychological tests in use today. On the majority of modern IQ tests, the average (or mean) score is set at 100 with a standard deviation of 15 so that scores conform to a normal distribution curve.  This means that 68 percent of scores fall within one standard deviation of the mean (that is, between 85 and 115), and 95 percent of scores fall within two standard deviations (between 70 and 130).  This may be shown from the following bell-shaped curve:

Why is the average score set to 100?  Psychometricians, the specialists who design and study psychological tests and measurements, utilize a process known as standardization in order to make it possible to compare and interpret the meaning of IQ scores. This process is accomplished by administering the test to a representative sample and using these scores to establish standards, usually referred to as norms, by which all individual scores can be compared. Since the average score is 100, experts can quickly assess individual test scores against the average to determine where these scores fall on the normal distribution.
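The 68 and 95 percent figures quoted above follow directly from the normal distribution. A quick sketch in Python, using only the standard library’s statistics.NormalDist, confirms them (the mean of 100 and standard deviation of 15 come from the discussion above; the code is purely illustrative):

```python
from statistics import NormalDist

# IQ scores modeled as a normal distribution: mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Fraction of scores within one and two standard deviations of the mean.
within_1sd = iq.cdf(115) - iq.cdf(85)
within_2sd = iq.cdf(130) - iq.cdf(70)

print(f"85-115: {within_1sd:.1%}")  # about 68%
print(f"70-130: {within_2sd:.1%}")  # about 95%
```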

The following scale resulted for classifying IQ scores:

IQ Scale

Over 140 – Genius or almost genius
120 – 140 – Very superior intelligence
110 – 119 – Superior intelligence
90 – 109 – Average or normal intelligence
80 – 89 – Dullness
70 – 79 – Borderline deficiency in intelligence
Under 70 – Feeble-mindedness

Normal Distribution of IQ Scores

From the curve above, we see the following:

50% of IQ scores fall between 90 and 110
68% of IQ scores fall between 85 and 115
95% of IQ scores fall between 70 and 130
99.5% of IQ scores fall between 60 and 140

Low IQ & Mental Retardation

An IQ under 70 is considered “mental retardation,” or limited mental ability. Roughly 2.5 percent of the population falls below 70 on IQ tests (the lower tail outside the 95 percent of scores that fall between 70 and 130). The severity of the mental retardation is commonly broken into four levels:

50-70 – Mild mental retardation (85%)
35-50 – Moderate mental retardation (10%)
20-35 – Severe mental retardation (4%)
Below 20 – Profound mental retardation (1%)

High IQ & Genius IQ

Genius or near-genius IQ is considered to start around 140 to 145. Less than 1/4 of 1 percent fall into this category. Here are some common designations on the IQ scale:

115-124 – Above average
125-134 – Gifted
135-144 – Very gifted
145-164 – Genius
165-179 – High genius
180-200 – Highest genius

We are told “Big Al” had an IQ over 160, which would definitely qualify him as one of the most intelligent people on the planet.

As you can see, the percentage of individuals considered to be genius is quite small: well under one percent of the population.  OK, who are these people?

  1. Stephen Hawking

Dr. Hawking is a man of Science, a theoretical physicist and cosmologist.  Hawking has never failed to astonish everyone with his IQ level of 160. He was born in Oxford, England and has proven himself to be a remarkably intelligent person.   Hawking is an Honorary Fellow of the Royal Society of Arts, a lifetime member of the Pontifical Academy of Sciences, and a recipient of the Presidential Medal of Freedom, the highest civilian award in the United States.  Hawking was the Lucasian Professor of Mathematics at the University of Cambridge between 1979 and 2009. Hawking has a motor neuron disease related to amyotrophic lateral sclerosis (ALS), a condition that has progressed over the years. He is almost entirely paralyzed and communicates through a speech generating device. Even with this condition, he maintains a very active schedule demonstrating significant mental ability.

  2. Andrew Wiles

Sir Andrew John Wiles is a remarkably intelligent individual.  Sir Andrew is a British mathematician, a member of the Royal Society, and a research professor at Oxford University.  His specialty is number theory.  He proved Fermat’s Last Theorem, and for this effort he was awarded a special silver plaque by the International Mathematical Union.  It is reported that he has an IQ of 170.

  3. Paul Gardner Allen

Paul Gardner Allen is an American business magnate, investor and philanthropist, best known as the co-founder of The Microsoft Corporation. As of March 2013, he was estimated to be the 53rd-richest person in the world, with an estimated wealth of $15 billion. His IQ is reported to be 170. He is considered to be the most influential person in his field and known to be a good decision maker.

  4. Judit Polgar

Born in Hungary in 1976, Judit Polgár is a chess grandmaster. She is by far the strongest female chess player in history. In 1991, Polgár achieved the title of Grandmaster at the age of 15 years and 4 months, the youngest person to do so until then. Polgar is not only a chess master but a certified brainiac with a recorded IQ of 170. She lived a childhood filled with extensive chess training given by her father. She defeated nine former and current world champions including Garry Kasparov, Boris Spassky, and Anatoly Karpov.  Quite amazing.

  5. Garry Kasparov

Garry Kasparov has totally amazed the world with his outstanding IQ of more than 190. He is a Russian chess Grandmaster, former World Chess Champion, writer, and political activist, considered by many to be the greatest chess player of all time. From 1986 until his retirement in 2005, Kasparov was ranked world No. 1 for 225 months.  Kasparov became the youngest ever undisputed World Chess Champion in 1985 at age 22 by defeating then-champion Anatoly Karpov.   He held the official FIDE world title until 1993, when a dispute with FIDE led him to set up a rival organization, the Professional Chess Association. In 1997 he became the first world champion to lose a match to a computer under standard time controls, when he lost to the IBM supercomputer Deep Blue in a highly publicized match. He continued to hold the “Classical” World Chess Championship until his defeat by Vladimir Kramnik in 2000.

  6. Rick Rosner

Gifted with an amazing IQ of 192, Richard G. “Rick” Rosner (born May 2, 1960) is an American television writer and media figure known for his high intelligence-test scores and his unusual career. There are reports that he has achieved some of the highest scores ever recorded on IQ tests designed to measure exceptional intelligence. He has become known for taking part in activities not usually associated with geniuses.

  7. Kim Ung-Yong

With a verified IQ of 210, Korean civil engineer Kim Ung-Yong is considered to be one of the smartest people on the planet.  He was born March 7, 1963, and was definitely a child prodigy.  He started speaking at the age of 6 months and was able to read Japanese, Korean, German, English, and many other languages by his third birthday. When he was four years old, his father said he had memorized about 2,000 words in both English and German.  He was writing poetry in Korean and Chinese and wrote two very short books of essays and poems (less than 20 pages). Kim was listed in the Guinness Book of World Records under “Highest IQ”; the book gave the boy’s score as about 210. (Guinness retired the “Highest IQ” category in 1990 after concluding IQ tests were too unreliable to designate a single record holder.)

  8. Christopher Hirata

Christopher Hirata’s IQ is approximately 225, which is phenomenal. He was a genius from childhood. At the age of 16, he was working with NASA on the Mars mission.  At the age of 22, he obtained a PhD from Princeton University.  Hirata is teaching astrophysics at the California Institute of Technology.

  9. Marilyn vos Savant

Marilyn Vos Savant is said to have an IQ of 228. She is an American magazine columnist, author, lecturer, and playwright who rose to fame as a result of the listing in the Guinness Book of World Records under “Highest IQ.” Since 1986 she has written “Ask Marilyn,” a Parade magazine Sunday column where she solves puzzles and answers questions on various subjects.

  10. Terence Tao

Terence Tao is an Australian mathematician working in harmonic analysis, partial differential equations, additive combinatorics, ergodic Ramsey theory, random matrix theory, and analytic number theory.  He currently holds the James and Carol Collins chair in mathematics at the University of California, Los Angeles, where he became, at age 24, the youngest person ever promoted to full professor. He was a co-recipient of the 2006 Fields Medal and the 2014 Breakthrough Prize in Mathematics.

Tao was a child prodigy, one of the subjects in the longitudinal research on exceptionally gifted children by education researcher Miraca Gross. His father told the press that at the age of two, during a family gathering, Tao attempted to teach a 5-year-old child arithmetic and English. According to Smithsonian Online Magazine, Tao could carry out basic arithmetic by the age of two. When asked by his father how he knew numbers and letters, he said he learned them from Sesame Street.

OK, now before you go running to jump from the nearest bridge, consider the statement below:

Persistence—President Calvin Coolidge said it better than anyone I have ever heard. “Nothing in the world can take the place of persistence. Talent will not; nothing is more common than unsuccessful men with talent.   Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent.  The slogan “Press on” has solved and always will solve the problems of the human race.” 

I personally think Calvin really knew what he was talking about.  Most of us get it done by persistence!!  ’Nuff said.



December 10, 2017

If you read technical literature with some hope of keeping up with the latest trends in technology, you find words and phrases such as AI (Artificial Intelligence) and DL (Deep Learning). They seem to be used interchangeably, but the facts deny that premise.  Let’s look.

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine-learning methods based on learning data representations, as opposed to task-specific algorithms. (NOTE: The key words here are MACHINE-LEARNING.) The ability of computers to learn can be supervised, semi-supervised, or unsupervised.  The prospect of developing learning mechanisms and software to control machine mechanisms is frightening to many but definitely very interesting to most.  Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks.  Machine learning is a method by which human neural networks are emulated by physical hardware: i.e., computers and computer programming.  Never in the history of our species has this degree of success been possible; only now, with the advent of very powerful computers and programs capable of handling “big data,” has it become so.

With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.  The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs.  Because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before. Deep learning is a class of machine learning algorithms that accomplish the following:

  • Use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input.
  • Learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners.
  • Learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
  • Use some form of gradient descent for training via backpropagation.
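All four of these points (layered nonlinear units, supervised learning, a hierarchy of representations, and gradient descent via backpropagation) can be illustrated with a toy two-layer network trained on the classic XOR problem. This is a minimal sketch, not any production system; the layer sizes, learning rate, and iteration count are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a task a single linear unit cannot solve, but two layers can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of nonlinear processing units; each layer feeds the next.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # gradient-descent step size
for _ in range(5000):
    # Forward pass: each successive layer uses the previous layer's output.
    h = sigmoid(X @ W1 + b1)      # hidden layer
    out = sigmoid(h @ W2 + b2)    # output layer

    # Backpropagation: squared-error gradients, propagated layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update of every weight and bias.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # predictions should approach [0, 1, 1, 0]
```

The hidden layer here is exactly the “hierarchy of concepts” in miniature: it learns intermediate features (roughly, OR-like and AND-like detectors) from which the output layer composes XOR.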

Layers that have been used in deep learning include hidden layers of an artificial neural network and sets of propositional formulas.  They may also include latent variables organized layer-wise in deep generative models such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.


Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.

An ANN is based on a collection of connected units called artificial neurons, (analogous to axons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that it sends downstream.

Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times.
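The weighted-sum-and-squash behavior of a single artificial neuron, and the way signals travel layer to layer, can be sketched in a few lines of Python. The inputs and weights below are made-up numbers for illustration, not a trained network:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals, one weight per connection (synapse),
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # squashed by a sigmoid so the neuron's state stays between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

# Signals traveling from the input layer through a hidden layer to an output:
# the outputs of the first layer become the inputs of the next.
hidden = [neuron([0.5, 0.9], w, 0.0) for w in ([1.0, -1.0], [0.5, 0.5])]
output = neuron(hidden, [2.0, -1.0], 0.1)
print(round(output, 3))
```

Learning, in this picture, is nothing more than adjusting the weight and bias numbers so that the final output moves toward the desired answer.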

The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.

Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.

As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, playing “Go”).


Just what applications could take advantage of “deep learning?”


A common evaluation set for image classification is the MNIST database. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with the TIMIT speech corpus, its small size allows multiple configurations to be tested. A comprehensive list of results on this set is available.

Deep learning-based image recognition has become “superhuman”, producing more accurate results than human contestants. This first occurred in 2011.

Deep learning-trained vehicles now interpret 360° camera views.   Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes.

The iPhone X, I am told, uses facial recognition as one method of ensuring safety and a potential hacker’s ultimate failure to unlock the phone.


Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of a) identifying the style period of a given painting, b) “capturing” the style of a given painting and applying it in a visually pleasing manner to an arbitrary photograph, and c) generating striking imagery based on random visual input fields.


Neural networks have been used for implementing language models since the early 2000s.  LSTM helped to improve machine translation and language modeling.  Other key techniques in this field are negative sampling  and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep-learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as probabilistic context free grammar (PCFG) implemented by an RNN.   Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.  Deep neural architectures provide the best results for constituency parsing,  sentiment analysis,  information retrieval,  spoken language understanding,  machine translation, contextual entity linking, writing style recognition and others.
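The “point in a vector space” idea above can be made concrete with a toy embedding table and cosine similarity as the distance measure. The vectors here are invented for the example and far smaller than real word2vec embeddings, which typically have 100-300 dimensions:

```python
import numpy as np

# Toy embedding table: each word is a point in a vector space.
# These 4-dimensional vectors are made up for illustration only.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.8, 0.9, 0.1, 0.3]),
    "apple": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine(a, b):
    # Similarity of direction, ignoring vector length.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1
print(cosine(embeddings["king"], embeddings["apple"]))  # much smaller
```

In a trained embedding, words used in similar contexts end up near each other in exactly this sense, which is what lets a network treat “king” and “queen” as related while keeping “apple” far away.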

Google Translate (GT) uses a large end-to-end long short-term memory network, the Google Neural Machine Translation (GNMT) system.  GNMT uses an example-based machine translation method in which the system “learns from millions of examples.”  It translates whole sentences at a time, rather than pieces. Google Translate supports over one hundred languages.  The network encodes the “semantics of the sentence rather than simply memorizing phrase-to-phrase translations.”  GT can translate directly from one language to another, rather than using English as an intermediate.


A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects.  Research has explored use of deep learning to predict biomolecular target, off-target and toxic effects of environmental chemicals in nutrients, household products and drugs.

AtomNet is a deep learning system for structure-based rational drug design.   AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis.


Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM (recency, frequency, monetary value) variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.


Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music recommendations.  Multiview deep learning has been applied for learning user preferences from multiple domains.  The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.


An autoencoder ANN was used in bioinformatics, to predict gene ontology annotations and gene-function relationships.

In medical informatics, deep learning was used to predict sleep quality based on data from wearables and predictions of health complications from electronic health record data.


Finding the appropriate mobile audience for mobile advertising is always challenging since there are many data points that need to be considered and assimilated before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.



Deep learning:

  • Has best-in-class performance on problems, significantly outperforming other solutions in multiple domains, including speech, language, vision, and playing games like Go. This isn’t by a little bit, but by a significant amount.
  • Reduces the need for feature engineering, one of the most time-consuming parts of machine-learning practice.
  • Is an architecture that can be adapted to new problems relatively easily (e.g., vision, time series, language, etc., using techniques like convolutional neural networks, recurrent neural networks, and long short-term memory).


On the other hand, deep learning:

  • Requires a large amount of data; if you only have thousands of examples, deep learning is unlikely to outperform other approaches.
  • Is extremely computationally expensive to train. The most complex models take weeks to train using hundreds of machines equipped with expensive GPUs.
  • Does not have much in the way of a strong theoretical foundation, which leads to the next disadvantage.
  • Involves determining the topology/flavor/training method/hyperparameters, which is a black art with no theory to guide you.
  • Produces models whose learned behavior is not easy to comprehend. Other classifiers (e.g., decision trees, logistic regression, etc.) make it much easier to understand what’s going on.


Whether we like it or not, deep learning will continue to develop.  As equipment, and the ability to capture and store huge amounts of data, continues to improve, the machine-learning process will only get better.  There will come a time when we will see a “rise of the machines.”  Let’s just hope humans have the ability to control those machines.

Elon Musk has warned again about the dangers of artificial intelligence, saying that it poses “vastly more risk” than the apparent nuclear capabilities of North Korea do. I feel sure Mr. Musk is talking about the long-term dangers and not short-term realities.  Mr. Musk is shown in the digital picture below.

This is not the first time Musk has stated that AI could potentially be one of the most dangerous international developments. He said in October 2014 that he considered it humanity’s “biggest existential threat,” a view he has repeated several times while making investments in AI startups and organizations, including OpenAI, to “keep an eye on what’s going on.”  “Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA would not make flying safer. They’re there for good reason.”

Musk again called for regulation, previously doing so directly to US governors at their annual national meeting in Providence, Rhode Island.  Musk’s tweets coincide with the testing of an AI designed by OpenAI to play the multiplayer online battle arena (MOBA) game Dota 2, which successfully managed to win all its 1-v-1 games at the International Dota 2 championships against many of the world’s best players competing for a $24.8m (£19m) prize fund.

The AI displayed the ability to predict where human players would deploy forces and improvise on the spot, in a game where sheer speed of operation does not correlate with victory, meaning the AI was simply better, not just faster than the best human players.

Musk backed the non-profit AI research company OpenAI in December 2015, taking up a co-chair position. OpenAI’s goal is to develop AI “in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”. But it is not the first group to take on human players in a gaming scenario. Google’s Deepmind AI outfit, in which Musk was an early investor, beat the world’s best players in the board game Go and has its sights set on conquering the real-time strategy game StarCraft II.

Musk envisions a situation like that found in the movie “I, Robot,” with humanoid robotic systems shown below.  Robots that can think for themselves. Great movie, but consider the setting: a future Earth (2035 A.D.) where robots are common assistants and workers for their human owners. It is the story of “robotophobic” Chicago Police Detective Del Spooner’s investigation into the murder of Dr. Alfred Lanning, who works at U.S. Robotics.  Let me clue you in: the robot did it.

I am sure this audience is familiar with Isaac Asimov’s Three Laws of Robotics.

  • First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov’s three laws suggest there will be no “Rise of the Machines” like the one the very popular movie depicts.  For the three laws to be null and void, we would have to enter a world of “singularity.”  The term singularity describes the moment when a civilization changes so much that its rules and technologies are incomprehensible to previous generations. Think of it as a point-of-no-return in history. Most thinkers believe the singularity will be jump-started by extremely rapid technological and scientific changes. These changes will be so fast, and so profound, that every aspect of our society will be transformed, from our bodies and families to our governments and economies.

A good way to understand the singularity is to imagine explaining the internet to somebody living in the year 1200. Your frames of reference would be so different that it would be almost impossible to convey how the internet works, let alone what it means to our society. You are on the other side of what seems like a singularity to our person from the Middle Ages. But from the perspective of a future singularity, we are the medieval ones. Advances in science and technology mean that singularities might happen over periods much shorter than 800 years. And nobody knows for sure what the hell they’ll bring.

Author Ken MacLeod has a character describe the singularity as “the Rapture for nerds” in his novel The Cassini Division, and the turn of phrase stuck, becoming a popular way to describe the singularity. (Note: MacLeod didn’t actually coin this phrase – he says he got the phrase from a satirical essay in an early-1990s issue of Extropy.) Catherynne Valente argued recently for an expansion of the term to include what she calls “personal singularities,” moments where a person is altered so much that she becomes unrecognizable to her former self. This definition could include post-human experiences. Post-human (my words) would describe robotic future.

Could this happen?  Elon Musk has an estimated net worth of $13.2 billion, making him the 87th richest person in the world, according to Forbes. His fortune owes much to his stake in Tesla Motors Inc. (TSLA), of which he remains CEO and chief product architect. Musk made his first fortune as a cofounder of PayPal, the online payments system that was sold to eBay for $1.5 billion in 2002.  In other words, he is no dummy.

I think it is very wise to listen to people like Musk and heed any and all warnings they may give. The Executive, Legislative and Judicial branches of our country are too busy trying to get reelected to bother with such warnings and when “catch-up” is needed, they always go overboard with rules and regulations.  Now is the time to develop proper and binding laws and regulations—when the technology is new.

Portions of the following post were taken from the September 2017 Machine Design Magazine.

We all like to keep up with salary levels within our chosen profession.  It’s a great indicator of where we stand relative to our peers and the industry we participate in.  The state of the engineering profession has always been relatively stable. Engineers are as essential to the job market as doctors are to medicine. Even in the face of automation and the fear many have of losing their jobs to robots, engineers are still in high demand.  I personally do not think most engineers will be displaced by robotic systems.  That fear properly belongs to on-line manufacturing positions with duties that are repetitive in nature.  As long as engineers can think, they will have employment.

The Machine Design Annual Salary & Career Report collected information and opinions from more than two thousand (2,000) Machine Design readers. The employee outlook is very good, with thirty-three percent (33%) indicating they are staying with their current employer and thirty-six percent (36%) of employers focusing on job retention. This is up fifteen percent (15%) from 2016.  From those who responded to the survey, the average reported salary for engineers across the country was $99,922; almost sixty percent (57.9%) reported a salary increase while only ten percent (9.7%) reported a salary decrease. The top three earning industries with the largest work forces were 1.) industrial controls systems and equipment, 2.) research & development, and 3.) medical products. Among these industries, the average salary was $104,193. The West Coast looks like the best place for engineers to earn a living; the average salary in the states of California, Washington, and Oregon was $116,684. Of course, the cost of living in these three states is definitely higher than in other regions of the country.


As is the ongoing trend in engineering, the profession is dominated by male engineers, with seventy-one percent (71%) being over fifty (50) years of age. However, the MD report shows an upswing of young engineers entering the profession.  One effort that has been underway for some years now is encouraging more women to enter the profession.  With so much of the engineering workforce being over fifty, there is a definite need to attract new participants.    There was an increase in engineers between twenty-five (25) and thirty-five (35) years of age, up from 5.6% to 9.2%.  The percentage of individuals entering the profession increased as well, with engineers with less than fourteen (14) years of experience increasing five percent (5%) from last year.  Even with all the challenges of engineering, ninety-two percent (92%) would still recommend the engineering profession to their children, grandchildren and others. One engineer responded, "In fact, wherever I'll go, I always will have an engineer's point of view. Trying to understand how things work, and how to improve them."


When asked about foreign labor forces, fifty-four percent (54%) believe H1-B visas hurt engineering employment opportunities and sixty-one percent (61%) support measures to reform the system. In terms of outsourcing, fifty-two percent (52%) reported their companies outsource work—the main reason being lack of in-house talent. However, seventy-three percent (73%) of the outsourced work goes to other U.S. locations. When discussing the future of the job force, fifty-five percent (55%) of engineers believe there is a job shortage, specifically in the skilled labor area. An overwhelming eighty-seven percent (87%) believe that we lack a skilled labor force. According to the MD readers, the strongest place for job growth is in automation at forty-five percent (45%) and the strongest place to look for skilled laborers is in vocational schools at thirty-two percent (32%). The future of engineering depends not only on the new engineers in school today, but also on younger people just starting their science, technology, engineering, and mathematics (STEM) interests. With the average engineer being fifty (50) years old or older, the future of engineering will rely heavily on new engineers willing to carry the torch—eighty-seven percent (87%) of our engineers believe there needs to be more focus on STEM at an earlier age to make sure the future of engineering is secure.

With that being the case, let us now look at the numbers.

The engineering profession is a "graying" profession, as mentioned earlier.  The next graphic will indicate that, for the most part, those in engineering have been in it for the "long haul".  They are "lifers".  This fact speaks volumes when trying to influence young men and women to consider the field of engineering.  If you look at "years in the profession", "work location" and "years at present employer", we see the following:

The slide below is a surprise to me and I think the first time the question has been asked by Machine Design.  How much of your engineering training is theory vs. practice? You can see the greatest response is almost fourteen percent (13.6%) with a fifty/fifty balance between theory and practice.  In my opinion, this is as it should be.

"The theory can be learned in a school, but the practical applications need to be learned on the job. The academic world is out of touch with the current reality of practical applications since they do not work in that area." "My university required three internships prior to graduating. This allowed them to focus significantly on theoretical, fundamental knowledge and have the internships bolster the practical."


The demands made on engineers by their respective companies can sometimes be time-consuming.  The respondents indicated the following certifications their companies felt necessary.




The lowest salary is found in contract design and manufacturing.  Even this salary would be much desired by just about any individual.

As we mentioned earlier, the West Coast provides the highest salary, with several states in the New England area coming in a fairly close second.



This one should be no surprise.  The greater number of years in the profession—the greater the salary level.  Forty (40) plus years provides an average salary of approximately $100,000.  Management, as you might expect, makes the highest salary with an average being $126,052.88.



As mentioned earlier, outsourcing is a huge concern to the engineering community. The chart below indicates where the jobs go.



Most engineers will tell you they stay in the profession because they love the work. The euphoria created by a "really neat" design stays with an engineer much longer than an elevated paycheck.  Engineers love solving problems.  Only two percent (2%) told MD they are not satisfied at all with their profession or current employer.  This is significant.

Any reason or reasons for leaving the engineering profession are shown by the following graphic.


As mentioned earlier, engineers are very worried about the H1-B visa program and trade policies issued by President Trump and the Legislative Branch of our country.  The Trans-Pacific Partnership has been "nixed" by President Trump, but trade policies such as NAFTA and trade with the EU are still of great concern to engineers.  Trade with China, patent infringement, and cyber security remain big issues with the STEM professions and certainly engineers.



I think it's very safe to say that, for the most part, engineers are very satisfied with the profession and the salary levels offered by the profession.  Job satisfaction is high, making the dawn of a new day something NOT to be dreaded.


October 13, 2017

Depending on the location, you can ask just about anybody to give a definition of Virtual Reality (VR) and they will take a stab at it. This is because gaming and the entertainment segments of our population have used VR as a new tool to promote games such as SuperHot VR, Rock Band VR, House of the Dying Sun, Minecraft VR, Robo Recall, and others.  If you ask them about Augmented Reality or AR they probably will give you the definition of VR or nothing at all.

Augmented reality, sometimes called Mixed Reality, is a technology that merges real-world objects or the environment with virtual elements generated by sensory input devices for sound, video, graphics, or GPS data.  Unlike VR, which completely replaces the real world with a virtual world, AR operates in real time and is interactive with objects found in the environment, providing an overlaid virtual display over the real one.

While popularized by gaming, AR technology has shown a prowess for bringing an interactive digital world into a person’s perceived real world, where the digital aspect can reveal more information about a real-world object that is seen in reality.  This is basically what AR strives to do.  We are going to take a look at several very real applications of AR to indicate the possibilities of this technology.

  • Augmented Reality has found a home in healthcare, aiding preventative measures by giving professionals information on the status of patients. Healthcare giant Cigna recently launched a program called BioBall that uses Microsoft HoloLens technology in an interactive game to test for blood pressure and body mass index (BMI). Patients hold a light, medium-sized ball in their hands in a one-minute race to capture all the images that flash on the screen in front of them. The BioBall senses a player's heartbeat. At the University of Maryland's Augmentarium virtual and augmented reality laboratory, the school is using AR in healthcare to improve how ultrasound is administered to a patient.  Physicians wearing an AR device can look at both a patient and the ultrasound device while images flash on the "hood" of the AR device itself.
  • AR is opening up new methods to teach young children a variety of subjects they might not be interested in learning or, in some cases, help those who have trouble in class catching up with their peers. The University of Helsinki’s AR program helps struggling kids learn science by enabling them to virtually interact with the molecule movement in gases, gravity, sound waves, and airplane wind physics.   AR creates new types of learning possibilities by transporting “old knowledge” into a new format.
  • Projection-based AR is emerging as a new way to cast virtual elements into the real world without the use of bulky headgear or glasses. That is why AR is becoming a very popular alternative for use in the office or during meetings. Startups such as Lampix and Lightform are working on projection-based augmented reality for use in the boardroom, retail displays, hospitality rooms, digital signage, and other applications.
  • In Germany, a company called FleetBoard is in the development phase for application software that tracks logistics for truck drivers to help with the long series of pre-departure checks before setting off cross-country or for local deliveries. The FleetBoard Vehicle Lens app uses a smartphone and software to provide live image recognition to identify the truck's number plate.  The relevant information is superimposed in AR, thus speeding up the pre-departure process.
  • Last winter, Delft University of Technology in the Netherlands started working with first responders in using AR as a tool in crime scene investigation. The handheld AR system allows on-scene investigators and remote forensic teams to minimize the potential for site contamination.  This could be extremely helpful in finding traces of DNA, preserving evidence, and getting medical help from an outside source.
  • Sandia National Laboratories is working with AR as a tool to improve security training for users who are protecting vulnerable areas such as nuclear weapons or nuclear materials. The physical security training helps guide users through real-world examples such as theft or sabotage in order to be better prepared when an event takes place.  The training can be accomplished remotely and cheaply using standalone AR headsets.
  • In Finland, the VTT Technical Research Center recently developed an AR tool for the European Space Agency (ESA) for astronauts to perform real-time equipment monitoring in space. AR prepares astronauts with in-depth practice by coordinating the activities with experts in a mixed-reality situation.
  • The U.S. company Daqri International uses computer vision for industrial AR to enable data visualization while working on machinery or in a warehouse. These glasses and headsets from Daqri display project data, tasks that need to be completed, and potential problems with machinery, or even where an object needs to be placed or repaired.


Augmented Reality merges real-world objects with virtual elements generated by sensory input devices to provide great advantages to the user.  No longer is gaming and entertainment the sole objective of its use.  This brings to life a “new normal” for professionals seeking more and better technology to provide solutions to real-world problems.

In preparation for this post, I asked my fifteen-year-old grandson to define product logistics and product supply chain.  He looked at me as though I had just fallen off a turnip truck.  I said, "You know, how does a manufacturer or producer of products get those products to the customer—the eventual user of the device or commodity?  How does that happen?"  He replied, "I really need to go do my homework.  Can I think about this and give you an answer tomorrow?"


Let’s take a look at Logistics and Supply Chain Management:

“Logistics typically refers to activities that occur within the boundaries of a single organization and Supply Chain refers to networks of companies that work together and coordinate their actions to deliver a product to market. Also, traditional logistics focuses its attention on activities such as procurement, distribution, maintenance, and inventory management. Supply Chain Management (SCM) acknowledges all of traditional logistics and also includes activities such as marketing, new product development, finance, and customer service” – from Essential of Supply Chain Management by Michael Hugos.

"Logistics is about getting the right product, to the right customer, in the right quantity, in the right condition, at the right place, at the right time, and at the right cost (the seven Rs of Logistics)" – from Supply Chain Management: A Logistics Perspective by John J. Coyle et al.

Now, that wasn't so difficult, was it?  A good way to look at it is as follows:


There have been remarkable advancements in supply chain logistics over the past decade.  Most of those advancements have resulted from companies bringing digital technologies into the front office, the warehouse, and transportation to the eventual customer.   Mobile technologies are certainly changing how products are tracked outside the four walls of the warehouse and the distribution center.  Real-time logistics management is within the grasp of many very savvy shippers.  To be clear:

Mobile networking refers to technology that can support voice and/or data network connectivity wirelessly, via a radio transmission solution. The most familiar application of mobile networking is the mobile phone, tablet, or iPad.  From real-time goods tracking to routing assistance to the Internet of Things (IoT), "cutting wires" in the area that lies between the warehouse and the customer's front door is gaining ground as shippers grapple with fast order fulfillment, smaller order sizes, and ever-evolving customer expectations.

In return for their tech investments, shippers and logistics managers are gaining benefits such as shortened lead times, improved supply chain visibility, error reductions, optimized transportation networks and better inventory management.  If we combine these advantages, we see that "wireless" communications are helping companies work smarter and more efficiently in today's very fast-paced business world.


Let’s look now at six (6) mobility trends.

  1. Increasingly Sophisticated Vehicle Communications—There was a time when the only contact a driver had with home base was after an action, such as load drop-off, took place or when there was an in-route problem. Today, as you might expect, truck drivers, pilots and others responsible for getting product to the customer can communicate real-time.  Cell phones have revolutionized and made possible real-time communication.
  2. Trucking Apps—By 2015, Frost & Sullivan indicated the size of the mobile trucking app market had hit $35.4 billion. Mobile apps targeting logistics are being launched almost constantly. With the launch of UBER Freight, the competition in the trucking app space has heated up considerably, pressing incumbents to innovate and move much faster than ever before.
  3. It's Not Just for the Big Guys Anymore: At one time, fleet mobility solutions were reserved for larger companies that could afford them.  As technology has advanced and become more mainstream and affordable, so have fleet mobility solutions.
  4. Mobility Helps Pinpoint Performance and Productivity Gaps: Knowing where everything is at any given time is "golden". It is the Holy Grail for every logistics manager.  Mobility is putting that goal within their reach.
  5. More Data Means More Mobile Technology to Generate and Support Logistics: One great problem that is now being solved is how to handle perishable goods and refrigerated consumer items.  Shippers who handle these commodities are now using sensors to detect trailer temperatures, dead batteries, and other problems that would impact their cargos.  Using sensors, and the data they generate, shippers can make much better business decisions and head off problems before they occur.  Sensors, if monitored properly, can indicate trends and predict eventual problems.
  6. Customers Want More Information and Data—They Want It Now: Customers expect real-time shipment data at their fingertips without having to pick up a telephone or send an e-mail.  Right now, that information is available quickly online or with a smartphone.
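The sensor-driven monitoring described in item 5 can be sketched in a few lines. This is an illustrative example only; the function name, the 40 °F limit, and the readings are my own assumptions, not from any real fleet-telematics product.

```python
# Hypothetical sketch: flagging refrigerated-trailer temperature readings
# that exceed a spoilage limit, and warning when recent readings are
# trending upward toward that limit.

def check_trailer(readings, limit_f=40.0, window=3):
    """Return readings at/above the limit, plus a trend warning when the
    last `window` readings are strictly rising and within 5 degrees F of it."""
    over_limit = [(i, t) for i, t in enumerate(readings) if t >= limit_f]
    recent = readings[-window:]
    rising = len(recent) == window and all(
        a < b for a, b in zip(recent, recent[1:])
    )
    return {
        "over_limit": over_limit,
        "trend_warning": rising and recent[-1] > limit_f - 5,
    }

# A trailer still under the limit but warming steadily gets an early warning,
# letting the shipper act before the cargo is at risk.
status = check_trailer([34.0, 34.5, 35.2, 36.1])
print(status)  # {'over_limit': [], 'trend_warning': True}
```

The point of the trend check is exactly what the paragraph describes: acting on the direction of the data, not just the current value.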


The world is changing at light speed, and mobility communications is one technology making this possible.  I have no idea as to where we will be in ten years, but it just might be exciting.

Portions of the following post were taken from an article by Rob Spiegel publishing through Design News Daily.

Two former Apple design engineers – Anna Katrina Shedletsky and Samuel Weiss – have leveraged machine learning to help brand owners improve their manufacturing lines. Their company, Instrumental, uses artificial intelligence (AI) to identify and fix problems with the goal of helping clients ship on time. The AI system consists of camera-equipped inspection stations that allow brand owners to remotely manage product lines at their contract manufacturing facilities with the purpose of maximizing up-time, quality and speed. Their digital photo is shown as follows:

Shedletsky and Weiss took what they learned from years of working with Apple contract manufacturers and put it into AI software.

"The experience with Apple opened our eyes to what was possible. We wanted to build artificial intelligence for manufacturing. The technology had been proven in other industries and could be applied to the manufacturing industry; it's part of the evolution of what is happening in manufacturing. The product we offer today solves a very specific need, but it also works toward overall intelligence in manufacturing."

Shedletsky spent six (6) years working at Apple prior to founding Instrumental with fellow Apple alum Weiss, who serves as Instrumental's CTO (Chief Technical Officer).  The two took their experience in solving manufacturing problems and created the AI fix. "After spending hundreds of days at manufacturers responsible for millions of Apple products, we gained a deep understanding of the inefficiencies in the new-product development process," said Shedletsky. "There's no going back, robotics and automation have already changed manufacturing. Intelligence like the kind we are building will change it again. We can radically improve how companies make products."

There are a number of examples of big and small companies with problems that prevent them from shipping products on time. Delays are expensive and can cause the loss of a sale. One day of delay at a start-up could cost $10,000 in sales. For a large company, the cost could be millions. "There are hundreds of issues that need to be found and solved. They are difficult and they have to be solved one at a time," said Shedletsky. "You can get on a plane, go to a factory and look at failure analysis so you can see why you have problems. Or, you can reduce the amount of time needed to identify and fix the problems by analyzing them remotely, using a combo of hardware and software."

Instrumental combines hardware and software that takes images of each unit at key stages of assembly on the line. The system then makes those images remotely searchable and comparable in order for the brand owner to learn and react to assembly line data. Engineers can then take action on issues. "The station goes onto the assembly line in China," said Shedletsky. "We get the data into the cloud to discover issues the contract manufacturer doesn't know they have. With the data, you can do failure analysis and reduce the time it takes to find an issue and correct it."
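The core idea of comparing each unit's image against a known-good reference can be sketched simply. To be clear, Instrumental's actual system is proprietary; the code below is my own toy illustration, using small lists of numbers as stand-ins for grayscale images and a hypothetical deviation threshold.

```python
# Illustrative sketch: flag an assembly-line unit whose photo deviates
# too far from a "golden" reference image of a known-good unit.

def mean_abs_diff(img_a, img_b):
    """Mean absolute per-pixel difference between two equal-sized images."""
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    return sum(abs(a - b) for a, b in zip(flat_a, flat_b)) / len(flat_a)

def flag_unit(unit_img, golden_img, threshold=10.0):
    """Return True when a unit's image deviates beyond the threshold."""
    return mean_abs_diff(unit_img, golden_img) > threshold

golden = [[100, 100], [100, 100]]   # reference photo of a good unit
good   = [[101,  99], [100, 102]]   # normal lighting/position jitter
bad    = [[100, 100], [100,  10]]   # a dark region, e.g. missing component
print(flag_unit(good, golden), flag_unit(bad, golden))  # False True
```

Real systems use far more robust comparisons than a raw pixel difference, but the workflow is the same: capture at key assembly stages, compare against the reference, and surface outliers to an engineer.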


Artificial intelligence (AI) is intelligence exhibited by machines.  In computer science, the field of AI research defines itself as the study of “intelligent agents“: any device that perceives its environment and takes actions that maximize its chance of success at some goal.   Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of "artificial intelligence", having become a routine technology.  Capabilities currently classified as AI include successfully understanding human speech,  competing at a high level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.
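The "intelligent agent" definition above — a device that perceives its environment and takes actions that maximize its chance of success at some goal — can be made concrete with a trivial sketch. The thermostat scenario and all names here are my own toy example, not from any particular AI framework.

```python
# A minimal agent in the textbook sense: perceive the environment (room
# temperature), then choose the action that best advances the goal
# (reaching a target temperature).

def agent_step(perceived_temp, target=70):
    """Pick the action leaving the room closest to the target."""
    actions = {"heat": +2, "cool": -2, "idle": 0}
    return min(actions, key=lambda a: abs(perceived_temp + actions[a] - target))

# Perceive-act loop: the agent heats a cold room until the goal is met,
# then idles.
temp = 64
for _ in range(4):
    temp += {"heat": 2, "cool": -2, "idle": 0}[agent_step(temp)]
print(temp)  # 70
```

Even this toy agent has the defining shape: a percept comes in, an action is selected by how well it serves the goal, and the loop repeats. Everything from chess engines to autonomous cars elaborates on that loop with richer percepts, actions, and success measures.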


Some would have you believe that AI IS the future and we will succumb to the "Rise of the Machines".  I'm not so melodramatic.  I feel AI has progressed and will progress to the point where great time savings and reductions in labor may be realized.   Anna Katrina Shedletsky and Samuel Weiss realize the potential and feel there will be no going back from this disruptive technology.   Moving AI to the factory floor will produce great benefits to manufacturing and other commercial enterprises.   There is also a significant possibility that job creation will occur as a result.  All is not doom and gloom.
