SEVEN DEADLY SINS

August 4, 2018


The web site given below is a great site for mechanical engineers and other engineering types involved with projects, technological and otherwise.  The “Seven Deadly Sins” caught my attention because these traits apply to just about all projects, including those we undertake at home. Let’s take a look.

  1. Rushing projects

More haste, less speed. In other words, if you’ve left things to the last minute or you have taken on too much just to impress your superiors and can’t cope with the workload, it’s a recipe for design disaster.

Mechanical design is a complex process, and most projects that require thought also require planning.  If you wish to build a deck for your home—you MUST plan. You need plenty of time to think, plan, reflect, analyze and create. If you’re pressed for time, you’ll probably start cutting corners to get finished quickly and make glaring errors that won’t get picked up soon enough, because you don’t have time to go back over your work and check it. To avoid this, make sure you have a well-organized work schedule, don’t take on too much, and plan the process of each design carefully before starting.

  2. Poor attention to detail

This is a very broad mistake, but worth mentioning in its own right as it’s so important to develop the right mindset.  The devil is truly in the details. You need to be able to focus on the design or project for adequate periods of time and get into the habit of coming back to take a second or even third look at your design.  Going over it with a fine-tooth comb is not time wasted.

  3. Getting the dimensions wrong

Even some of the best engineering minds in the world get it wrong sometimes. Just look at the mistakes NASA has made over the years. One of their biggest was the loss of the $125 million Mars Climate Orbiter in 1999. The error came about when engineers from the contractor Lockheed Martin used imperial measurements, while NASA engineers used metric. The conversions were incorrect, which wasn’t picked up by either team, causing the vessel to pass about 25 km closer to the planet than planned, dipping into the atmosphere and causing it to overheat. The moral of the story? Check your dimensions and conversions. In fact, don’t just check them, double or triple check them, then get someone else to check them. Especially when there’s $125 million on the line! How many times have you heard it: measure twice, cut once?
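Since the lesson here is unit bookkeeping, a small sketch may help. The function name, conversion constant, and numbers below are my own illustration (not NASA’s or Lockheed’s actual code); the point is that tagging a quantity with its unit turns a silent metric/imperial mix-up into a loud failure:

```python
# Hypothetical sketch: carry the unit along with the number so a
# metric/imperial mix-up fails loudly instead of silently.
POUND_FORCE_SECONDS_TO_NEWTON_SECONDS = 4.44822

def impulse_in_newton_seconds(value, unit):
    """Convert a thruster impulse to SI units. Reject units we don't know."""
    if unit == "N*s":
        return value
    if unit == "lbf*s":
        return value * POUND_FORCE_SECONDS_TO_NEWTON_SECONDS
    raise ValueError(f"Unknown unit: {unit!r} -- check your conversions!")

# The same raw number means very different things in the two systems:
print(impulse_in_newton_seconds(100.0, "lbf*s"))  # about 444.8 N*s
print(impulse_in_newton_seconds(100.0, "N*s"))    # 100.0 N*s
```

A mismatch of more than a factor of four in thrust accounting is exactly the kind of error that a second checker, human or software, should catch.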

  4. Falling behind the curve

Don’t get left behind. Not staying up-to-date with industry developments or the latest technology is a big mistake for mechanical design engineers and for individuals considering and planning projects. In this technological age things change fast, so make sure your knowledge is relevant.  The latest “gadget” may be just the device you need to make a good project a great project.   Also, depending upon the project, building codes and building permits may come into play. Some years ago, I built a backyard deck adjacent to my house.  It was a big project for me and necessitated a building permit from my community.  I found that out when I was visited by one of our local commissioners. The project was delayed until I had the necessary permit.

  5. Not thinking about the assembly process

It’s easy to get wrapped up in your design and forget about the practicality of actually putting it together. Make sure you are thinking about misassembly during the design. Try to foolproof your design; in other words, you want to make sure that, if possible, the pieces can only go together in one way to avoid the chance of misassembly. I’m sure you all have heard about the guy who built a boat in his basement only to discover he had to disassemble the boat in order to get it out of his basement.   In manufacturing, this is known as ‘poka-yoke’.

  6. Not applying common sense checks

Make sure the results of your calculations and planning make sense. Always question everything you do. “Question it, check it, and check it again” is a good motto to live by.

  7. No consideration of design presentation

At the end of the day, your design is going to be seen by lots of people, including your “significant other”.  It needs to be clear, not just to you, but to everyone else. Also, make sure you are constantly practicing and developing your interpersonal skills. There’s a good chance you’ll have to explain your design, and the rationale for it, in person, so figure out beforehand how you’re going to communicate the concepts and practicalities of the design.  You need to make sure that when a neighbor asks, “Why did you do it that way?”, you have a logical answer.

Just a thought.


I feel that most individuals, certainly most adults, wonder if anyone is out there.  Are there other planets with intelligent life, and is that life humanoid or at least somewhat intelligent?  The first task would be to define intelligence.  Don’t laugh, but this does have some merit and has been considered by behavioral scientists for a significant length of time.  On Earth, human intelligence took nearly four billion years to develop. If living beings develop advanced technology, they can make their existence known to the Universe. A working definition of “intelligent” includes self-awareness, use of tools, and use of language. There are other defining traits, as follows:

  • Crude perceptive abilities: Like concept of a handshake (sending a message and acknowledging receipt of one sent by you)
  • Crude communication abilities: Some primitive language and vocabulary
  • Sentience: Should be capable of original thought and motivation, some form of self-awareness
  • Retention: Ability to remember and recall information at will
  • Some form of mathematical ability like counting

Please feel free to apply your own definition to intelligence. You will probably come as close as anyone to a workable one.

TESS:

NASA is looking, and one manner in which the search occurs is with the new satellite TESS.

The Transiting Exoplanet Survey Satellite (TESS) is an Explorer-class planet finder.   TESS will pick up the search for exoplanets as the Kepler Space Telescope runs out of fuel.

Kepler, which has discovered more than 4,500 potential and confirmed exoplanets, launched in 2009. After a mechanical failure in 2013, it entered a new phase of campaigns to survey other areas of the sky for exoplanets, called the K2 mission. This enabled researchers to discover even more exoplanets, understand the evolution of stars and gain insight into supernovae and black holes.

Soon, Kepler’s mission will end, and it will be abandoned in space, orbiting the sun and never getting closer to Earth than the moon.

The spaceborne all-sky transit survey TESS will identify planets ranging from Earth-sized to gas giants, orbiting a wide range of stellar types at a wide range of orbital distances. The principal goal of the TESS mission is to detect small planets with bright host stars in the solar neighborhood, so that detailed characterizations of the planets and their atmospheres can be performed. TESS is only one satellite used to determine if there are any “Goldilocks” planets among nearby star systems. TESS will survey an area four hundred (400) times larger than Kepler observed. This includes two hundred thousand (200,000) of the brightest nearby stars. Over the course of two years, the four wide-field cameras on board will stare at different sectors of the sky for days at a time.

TESS will begin by looking at the Southern Hemisphere sky for the first year and move to the Northern Hemisphere in the second year. It can accomplish this lofty goal by dividing the sky into thirteen (13) sections and looking at each one for twenty-seven (27) days before moving on to the next.
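The arithmetic behind that schedule is easy to sanity-check. This is a back-of-envelope calculation (not mission software) multiplying the figures quoted above:

```python
# Sanity check of the TESS survey plan described above:
# 13 sectors per hemisphere, 27 days staring at each one.
SECTORS_PER_HEMISPHERE = 13
DAYS_PER_SECTOR = 27

days_per_hemisphere = SECTORS_PER_HEMISPHERE * DAYS_PER_SECTOR
print(days_per_hemisphere)          # 351 days -- just under one year per hemisphere
print(2 * SECTORS_PER_HEMISPHERE)   # 26 sectors over the full two-year mission
```

At 351 days per hemisphere, two hemispheres fit neatly into the planned two-year mission.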

The various missions launched to discover exoplanets may be seen below.

As mentioned earlier, TESS will monitor the brightness of more than two hundred thousand (200,000) stars during a two-year mission, searching for temporary drops in brightness caused by planetary transits. Transits occur when a planet’s orbit carries it directly in front of its parent star as viewed from Earth. TESS is expected to catalog more than fifteen hundred (1,500) transiting exoplanet candidates, including a sample of approximately five hundred (500) Earth-sized and ‘Super Earth’ planets, with radii less than twice that of the Earth. TESS will detect small rock-and-ice planets orbiting a diverse range of stellar types and covering a wide span of orbital periods, including rocky worlds in the habitable zones of their host stars.  This is a major undertaking and, as you might suspect, joint ventures are an absolute must.  With that being the case, the major partners in this endeavor may be seen as follows:
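The transit idea described above can be sketched in a few lines. This is a toy illustration only; real pipelines deal with noise, detrending and period folding, and the function name and the 1% dip threshold here are my own assumptions:

```python
# Toy sketch of transit detection: flag temporary dips in a star's
# brightness below its typical (median) level.
def find_transit_dips(brightness, threshold=0.99):
    """Return indices where flux drops below `threshold` x the median level."""
    ordered = sorted(brightness)
    median = ordered[len(ordered) // 2]
    return [i for i, flux in enumerate(brightness) if flux < threshold * median]

# Invented light curve: a steady star with a shallow dip while a planet crosses it.
light_curve = [1.00, 1.00, 1.00, 0.98, 0.98, 1.00, 1.00, 1.00]
print(find_transit_dips(light_curve))  # [3, 4]
```

A 2% dip like the one flagged here is actually deep by transit standards; an Earth-sized planet crossing a Sun-like star dims it by closer to 0.01%, which is why space-based photometry is needed.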

The project overview is given by the next pictorial.

In summary:

TESS will tile the sky with 26 observation sectors:

  • At least 27 days staring at each 24° × 96° sector
  • Brightest 200,000 stars at 1-minute cadence
  • Full frame images with 30-minute cadence
  • Map Southern hemisphere in first year
  • Map Northern hemisphere in second year
  • Sectors overlap at ecliptic poles for sensitivity to smaller and longer period planets in JWST Continuous Viewing Zone (CVZ)

TESS observes from unique High Earth Orbit (HEO):

  • Unobstructed view for continuous light curves
  • Two 13.7-day orbits per observation sector
  • Stable 2:1 resonance with Moon’s orbit
  • Thermally stable and low-radiation
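The 2:1 lunar resonance in the list above is consistent with the 13.7-day orbit figure; the check is a single division, using an approximate 27.3-day lunar orbital period:

```python
# The 2:1 resonance means TESS completes two orbits for every one orbit
# of the Moon, so its period is half the lunar period.
SIDEREAL_MONTH_DAYS = 27.3  # approximate lunar orbital period

tess_orbit_days = SIDEREAL_MONTH_DAYS / 2
print(round(tess_orbit_days, 2))  # 13.65 -- matching the quoted 13.7-day orbit
```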

The physical hardware looks as follows:

You can’t tell much about the individual components from the digital picture above, but suffice it to say that TESS is a significant improvement relative to Kepler as far as technology goes.  The search continues, and I do not know what will happen if we ever discover ET.  Imagine the areas of life that would be affected.

 

 

DEEP LEARNING

December 10, 2017


If you read technical literature with some hope of keeping up with the latest trends in technology, you find words and phrases such as AI (Artificial Intelligence) and DL (Deep Learning). They seem to be used interchangeably, but the facts deny that premise.  Let’s look.

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine-learning methods based on learning data representations, as opposed to task-specific algorithms. (NOTE: The key words here are MACHINE LEARNING.) The ability of computers to learn can be supervised, semi-supervised or unsupervised.  The prospect of developing learning mechanisms and software to control machine mechanisms is frightening to many but definitely very interesting to most.  Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks.  Machine learning here is a method by which biological neural networks are imitated in physical hardware: i.e., computers and computer programming.  Never before in the history of our species has this degree of success been possible; only now, with the advent of very powerful computers and programs capable of handling “big data,” has it become so.

With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.  The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs.  Because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before. Deep learning is a class of machine learning algorithms that accomplish the following:


  • Use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input.
  • Learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners.
  • Learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
  • Use some form of gradient descent for training via backpropagation.
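A minimal sketch of the first bullet, a cascade of nonlinear layers where each layer feeds the next, can be written in plain Python. The weights below are hand-picked rather than learned (in practice they would be adjusted by gradient descent and backpropagation, per the last bullet), and are chosen so that the two-layer network computes XOR, something no single linear layer can do:

```python
import math

def sigmoid(x):
    """A common nonlinear activation, squashing any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: weighted sums of the inputs followed by the nonlinearity."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(a, b):
    # First layer: two hidden units. Second layer consumes the first's output.
    hidden = layer([a, b], weights=[[10, 10], [-10, -10]], biases=[-5, 15])
    output = layer(hidden, weights=[[10, 10]], biases=[-15])
    return round(output[0])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # prints the XOR truth table
```

Stacking more such layers, and learning the weights from data instead of choosing them by hand, is essentially what makes a network “deep”.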

Layers that have been used in deep learning include hidden layers of an artificial neural network and sets of propositional formulas.  They may also include latent variables organized layer-wise in deep generative models such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.

ARTIFICIAL NEURAL NETWORKS:

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.

An ANN is based on a collection of connected units called artificial neurons, (analogous to axons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that it sends downstream.

Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times.

The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.

Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.

As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, playing “Go”).

APPLICATIONS:

Just what applications could take advantage of “deep learning?”

IMAGE RECOGNITION:

A common evaluation set for image classification is the MNIST database data set. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size allows multiple configurations to be tested. A comprehensive list of results on this set is available.

Deep learning-based image recognition has become “superhuman”, producing more accurate results than human contestants. This first occurred in 2011.

Deep learning-trained vehicles now interpret 360° camera views.   Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes.

The iPhone X, I am told, uses facial recognition as one method of ensuring safety and a potential hacker’s ultimate failure to unlock the phone.

VISUAL ART PROCESSING:

Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of a) identifying the style period of a given painting, b) “capturing” the style of a given painting and applying it in a visually pleasing manner to an arbitrary photograph, and c) generating striking imagery based on random visual input fields.

NATURAL LANGUAGE PROCESSING:

Neural networks have been used for implementing language models since the early 2000s.  LSTM helped to improve machine translation and language modeling.  Other key techniques in this field are negative sampling  and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep-learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as probabilistic context free grammar (PCFG) implemented by an RNN.   Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.  Deep neural architectures provide the best results for constituency parsing,  sentiment analysis,  information retrieval,  spoken language understanding,  machine translation, contextual entity linking, writing style recognition and others.
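The “point in a vector space” idea can be made concrete with a toy example. The three-dimensional vectors below are invented for illustration (real word2vec embeddings have hundreds of dimensions learned from large corpora); cosine similarity then measures how close two words sit:

```python
import math

# Made-up 3-d "embeddings": related words get nearby points on purpose.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u)) *
            math.sqrt(sum(b * b for b in v)))
    return dot / norm

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # close to 1
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # much lower
```

Feeding vectors like these, rather than raw words, into an RNN input layer is what lets the network exploit the geometry of word meaning.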

Google Translate (GT) uses a large end-to-end long short-term memory network.   GNMT uses an example-based machine translation method in which the system “learns from millions of examples.  It translates “whole sentences at a time, rather than pieces. Google Translate supports over one hundred languages.   The network encodes the “semantics of the sentence rather than simply memorizing phrase-to-phrase translations”.  GT can translate directly from one language to another, rather than using English as an intermediate.

DRUG DISCOVERY AND TOXICOLOGY:

A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects.  Research has explored use of deep learning to predict biomolecular target, off-target and toxic effects of environmental chemicals in nutrients, household products and drugs.

AtomNet is a deep learning system for structure-based rational drug design.   AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis.

CUSTOMER RELATIONS MANAGEMENT:

Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.
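For readers unfamiliar with the RFM variables mentioned above (recency, frequency, monetary value), a small sketch with invented customer data shows how they might be computed per customer before any learning happens:

```python
from datetime import date

# Hypothetical sketch: compute the three RFM variables for one customer.
# The purchase history below is invented for illustration.
def rfm(purchases, today):
    """purchases: list of (date, amount). Returns (recency_days, frequency, monetary)."""
    recency = (today - max(d for d, _ in purchases)).days  # days since last purchase
    frequency = len(purchases)                             # number of purchases
    monetary = sum(amount for _, amount in purchases)      # total spend
    return recency, frequency, monetary

history = [(date(2017, 10, 1), 40.0), (date(2017, 11, 20), 25.0)]
print(rfm(history, today=date(2017, 12, 10)))  # (20, 2, 65.0)
```

A reinforcement-learning model would then score marketing actions as a function of state variables like these.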

RECOMMENDATION SYSTEMS:

Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music recommendations.  Multiview deep learning has been applied for learning user preferences from multiple domains.  The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.

BIOINFORMATICS:

An autoencoder ANN was used in bioinformatics, to predict gene ontology annotations and gene-function relationships.

In medical informatics, deep learning was used to predict sleep quality based on data from wearables and predictions of health complications from electronic health record data.

MOBILE ADVERTISING:

Finding the appropriate mobile audience for mobile advertising is always challenging since there are many data points that need to be considered and assimilated before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.

ADVANTAGES AND DISADVANTAGES:

ADVANTAGES:

  • Delivers best-in-class performance, significantly outperforming other solutions in multiple domains, including speech, language, vision, and playing games like Go. This isn’t by a little bit, but by a significant amount.
  • Reduces the need for feature engineering, one of the most time-consuming parts of machine-learning practice.
  • Is an architecture that can be adapted to new problems relatively easily (e.g., vision, time series, language, etc.) using techniques like convolutional neural networks, recurrent neural networks, and long short-term memory.

DISADVANTAGES:

  • Requires a large amount of data — if you only have thousands of examples, deep learning is unlikely to outperform other approaches.
  • Is extremely computationally expensive to train. The most complex models take weeks to train using hundreds of machines equipped with expensive GPUs.
  • Does not have much in the way of a strong theoretical foundation. This leads to the next disadvantage.
  • Determining the topology/flavor/training method/hyperparameters for deep learning is a black art with no theory to guide you.
  • What is learned is not easy to comprehend. Other classifiers (e.g., decision trees, logistic regression, etc.) make it much easier to understand what’s going on.

SUMMARY:

Whether we like it or not, deep learning will continue to develop.  As equipment and the ability to capture and store huge amounts of data continue to improve, the machine-learning process will only get better.  There may come a time when we see a “rise of the machines”.  Let’s just hope humans have the ability to control those machines.


Portions of the following post were taken from the September 2017 Machine Design Magazine.

We all like to keep up with salary levels within our chosen profession.  It’s a great indicator of where we stand relative to our peers and the industry we participate in.  The state of the engineering profession has always been relatively stable. Engineers are as essential to the job market as doctors are to medicine. Even in the face of automation and the fear many have of losing their jobs to robots, engineers are still in high demand.  I personally do not think most engineers will be out-placed by robotic systems.  That fear definitely resides with on-line manufacturing positions with duties that are repetitive in nature.  As long as engineers can think, they will have employment.

The Machine Design Annual Salary & Career Report collected information and opinions from more than two thousand (2,000) Machine Design readers. The employee outlook is very good, with thirty-three percent (33%) indicating they are staying with their current employer and thirty-six percent (36%) of employers focusing on job retention. This is up fifteen percent (15%) from 2016.  From those who responded to the survey, the average reported salary for engineers across the country was $99,922, and almost sixty percent (57.9%) reported a salary increase while about ten percent (9.7%) reported a salary decrease. The top three earning industries with the largest work forces were 1.) industrial controls systems and equipment, 2.) research & development, and 3.) medical products. Among these industries, the average salary was $104,193. The West Coast looks like the best place for engineers to earn a living, with the average salary in the states of California, Washington, and Oregon at $116,684. Of course, the cost of living in these three states is definitely higher than in other regions of the country.

PROFILE OF THE ENGINEER IN THE USA TODAY:

As is the ongoing trend in engineering, the profession is dominated by male engineers, with seventy-one percent (71%) being over fifty (50) years of age. However, the MD report shows an upswing of young engineers entering the profession.  One effort that has been underway for some years now is encouraging more women to enter the profession.  With seventy-one percent (71%) of the engineering workforce being over fifty, there is a definite need to attract new participants.  There was an increase in engineers between twenty-five (25) and thirty-five (35) years of age, up from 5.6% to 9.2%.  The percentage of individuals entering the profession increased as well, with engineers with less than fourteen (14) years of experience increasing five percent (5%) from last year.  Even with all the challenges of engineering, ninety-two percent (92%) would still recommend the engineering profession to their children, grandchildren and others. One engineer responded, “In fact, wherever I’ll go, I always will have an engineer’s point of view. Trying to understand how things work, and how to improve them.”

 

When asked about foreign labor forces, fifty-four percent (54%) believe H1-B visas hurt engineering employment opportunities, and sixty-one percent (61%) support measures to reform the system. In terms of outsourcing, fifty-two percent (52%) reported their companies outsource work, the main reason being lack of in-house talent. However, seventy-three percent (73%) of the outsourced work goes to other U.S. locations. When discussing the future of the job force, fifty-five percent (55%) of engineers believe there is a job shortage, specifically in the skilled labor area. An overwhelming eighty-seven percent (87%) believe that we lack a skilled labor force. According to the MD readers, the strongest place for job growth is in automation at forty-five percent (45%), and the strongest place to look for skilled laborers is in vocational schools at thirty-two percent (32%). The future of engineering depends not only on the new engineers in school today, but also on younger people just starting to develop their science, technology, engineering, and mathematics (STEM) interests. With the average engineer being fifty (50) years old or older, the future of engineering will rely heavily on new engineers willing to carry the torch; eighty-seven percent (87%) of our engineers believe there needs to be more focus on STEM at an earlier age to make sure the future of engineering is secure.

That being the case, let us now look at the numbers.

The engineering profession is a “graying” profession, as mentioned earlier.  The next digital picture will indicate that, for the most part, those in engineering have been in for the “long haul”.  They are “lifers”.  This fact speaks volumes when trying to influence young men and women to consider the field of engineering.  If you look at “years in the profession”, “work location” and “years at present employer”, we see the following:

The slide below is a surprise to me, and I think it is the first time the question has been asked by Machine Design: how much of your engineering training is theory vs. practice? You can see the greatest response, almost fourteen percent (13.6%), is a fifty/fifty balance between theory and practice.  In my opinion, this is as it should be.

“The theory can be learned in a school, but the practical applications need to be learned on the job. The academic world is out of touch with the current reality of practical applications since they do not work in that area.” “My university required three internships prior to graduating. This allowed them to focus significantly on theoretical, fundamental knowledge and have the internships bolster the practical.”

ENGINEERING CERTIFICATIONS:

The demands made on engineers by their respective companies can sometimes be time-consuming.  The respondents indicated the following certifications their companies felt necessary.

 

 

SALARIES:

The lowest salary is found with contract design and manufacturing.  Even this salary would be much desired by just about any individual.

As we mentioned earlier, the West Coast provides the highest salary, with several states in the New England area coming in a fairly close second.

 

SALARY LEVELS VS. EXPERIENCE:

This one should be no surprise.  The greater number of years in the profession—the greater the salary level.  Forty (40) plus years provides an average salary of approximately $100,000.  Management, as you might expect, makes the highest salary with an average being $126,052.88.

OUTSOURCING:

 

As mentioned earlier, outsourcing is a huge concern to the engineering community. The chart below indicates where the jobs go.

JOB SATISFACTION:

 

Most engineers will tell you they stay in the profession because they love the work. The euphoria created by a “really neat” design stays with an engineer much longer than an elevated paycheck.  Engineers love solving problems.  Only two percent (2%) told MD they are not satisfied at all with their profession or current employer.  This is significant.

Any reason or reasons for leaving the engineering profession are shown by the following graphic.

ENGINEERING AND SOCIETY: 

As mentioned earlier, engineers are very worried about the H1-B visa program and trade policies issued by President Trump and the Legislative Branch of our country.  The Trans-Pacific Partnership has been “nixed” by President Trump, but trade agreements such as NAFTA and trade with the EU are still of great concern to engineers.  Trade with China, patent infringement, and cyber security remain big issues within the STEM professions and certainly for engineers.

 

CONCLUSIONS:

I think it’s very safe to say that, for the most part, engineers are very satisfied with the profession and the salary levels it offers.  Job satisfaction is great, making the dawn of a new day something NOT to be dreaded.

AUGMENTED REALITY (AR)

October 13, 2017


Depending on the location, you can ask just about anybody to give a definition of Virtual Reality (VR) and they will take a stab at it. This is because gaming and the entertainment segments of our population have used VR as a new tool to promote games such as SuperHot VR, Rock Band VR, House of the Dying Sun, Minecraft VR, Robo Recall, and others.  If you ask them about Augmented Reality or AR they probably will give you the definition of VR or nothing at all.

Augmented reality, sometimes called Mixed Reality, is a technology that merges real-world objects or the environment with virtual elements generated by sensory input devices for sound, video, graphics, or GPS data.  Unlike VR, which completely replaces the real world with a virtual world, AR operates in real time and is interactive with objects found in the environment, providing an overlaid virtual display over the real one.

While popularized by gaming, AR technology has shown a prowess for bringing an interactive digital world into a person’s perceived real world, where the digital aspect can reveal more information about a real-world object that is seen in reality.  This is basically what AR strives to do.  We are going to take a look at several very real applications of AR to indicate the possibilities of this technology.

  • Augmented Reality has found a home in healthcare, aiding preventative measures for professionals to receive information relative to the status of patients. Healthcare giant Cigna recently launched a program called BioBall that uses Microsoft HoloLens technology in an interactive game to test for blood pressure and body mass index, or BMI. Patients hold a light, medium-sized ball in their hands in a one-minute race to capture all the images that flash on the screen in front of them. The BioBall senses a player’s heartbeat. At the University of Maryland’s Augmentarium virtual and augmented reality laboratory, the school is using AR in healthcare to improve how ultrasound is administered to a patient.  Physicians wearing an AR device can look at both a patient and the ultrasound device while images flash on the “hood” of the AR device itself.
  • AR is opening up new methods to teach young children a variety of subjects they might not otherwise be interested in learning or, in some cases, to help those who have trouble in class catch up with their peers. The University of Helsinki’s AR program helps struggling kids learn science by letting them virtually interact with molecular movement in gases, gravity, sound waves, and airplane wind physics. AR creates new types of learning possibilities by transporting “old knowledge” into a new format.
  • Projection-based AR is emerging as a new way to cast virtual elements into the real world without the use of bulky headgear or glasses. That is why AR is becoming a very popular alternative for use in the office or during meetings. Startups such as Lampix and Lightform are working on projection-based augmented reality for use in the boardroom, retail displays, hospitality rooms, digital signage, and other applications.
  • In Germany, a company called FleetBoard is developing application software that tracks logistics for truck drivers, helping with the long series of pre-departure checks before setting off cross-country or on local deliveries. The FleetBoard Vehicle Lens app uses a smartphone and live image recognition to identify the truck’s number plate; the relevant information is then superimposed in AR, speeding up the pre-departure process.
  • Last winter, Delft University of Technology in the Netherlands started working with first responders in using AR as a tool in crime scene investigation. The handheld AR system allows on-scene investigators and remote forensic teams to minimize the potential for site contamination.  This could be extremely helpful in finding traces of DNA, preserving evidence, and getting medical help from an outside source.
  • Sandia National Laboratories is working with AR as a tool to improve security training for users who protect vulnerable assets such as nuclear weapons or nuclear materials. The physical security training guides users through real-world scenarios such as theft or sabotage so they are better prepared if an event takes place. The training can be accomplished remotely and cheaply using standalone AR headsets.
  • In Finland, the VTT Technical Research Center recently developed an AR tool for the European Space Agency (ESA) for astronauts to perform real-time equipment monitoring in space. AR prepares astronauts with in-depth practice by coordinating the activities with experts in a mixed-reality situation.
  • In the U.S., Daqri International uses computer vision for industrial AR to enable data visualization while working on machinery or in a warehouse. Daqri’s glasses and headsets display project data, tasks that need to be completed, and potential problems with machinery, or even where an object needs to be placed or repaired.

CONCLUSIONS:

Augmented Reality merges real-world objects with virtual elements generated by sensory input devices, providing great advantages to the user. No longer are gaming and entertainment the sole objectives of its use. This brings to life a “new normal” for professionals seeking more and better technology to solve real-world problems.


WHERE WE ARE:

The manufacturing industry remains an essential component of the U.S. economy. In 2016, manufacturing accounted for almost twelve percent (11.7%) of the U.S. gross domestic product (GDP) and contributed slightly over two trillion dollars ($2.18 trillion) to our economy. Every dollar spent in manufacturing adds close to two dollars ($1.81) to the economy because it contributes to development in auxiliary sectors such as logistics, retail, and business services. I personally think this is a striking number when you compare that contribution to other sectors of our economy. Interestingly enough, according to recent research, manufacturing could constitute as much as thirty-three percent (33%) of U.S. GDP if both its entire value chain and its production for other sectors are included.

Research from the Bureau of Labor Statistics shows that employment in manufacturing has been trending up since January of 2017. After double-digit gains in the first quarter of 2017, six thousand (6,000) new jobs were added in April. Currently, the manufacturing industry employs 12,396,000 people, more than nine percent (9%) of the U.S. workforce. Nonetheless, many experts are concerned that these employment gains will soon be halted by the ever-rising adoption of automation. Yet automation is inevitable, and as in previous industrial revolutions, it is likely to result in job creation in the long term. A look back at the Industrial Revolution shows why.
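A bit of back-of-the-envelope arithmetic shows these figures hang together. The calculation below is purely illustrative, using the approximate numbers quoted above:

```python
# Back-of-the-envelope check of the manufacturing figures cited above.
# All inputs are the approximate values quoted in the text.

manufacturing_output = 2.18e12   # dollars contributed to GDP in 2016
gdp_share = 0.117                # manufacturing share of U.S. GDP (11.7%)
multiplier = 1.81                # added economic activity per dollar spent

# The quoted share implies a total U.S. GDP of roughly $18.6 trillion,
# which matches the actual 2016 figure:
implied_gdp = manufacturing_output / gdp_share
print(f"Implied GDP: ${implied_gdp / 1e12:.1f} trillion")

# Counting the $1.81 of auxiliary-sector activity per manufacturing dollar,
# the sector's total economic footprint is about $6.1 trillion:
total_impact = manufacturing_output * (1 + multiplier)
print(f"Total impact incl. auxiliary sectors: ${total_impact / 1e12:.2f} trillion")
```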

INDUSTRIAL REVOLUTION:

The Industrial Revolution began in the late 18th century when a series of new inventions such as the spinning jenny and steam engine transformed manufacturing in Britain. The changes in British manufacturing spread across Europe and America, replacing traditional rural lifestyles as people migrated to cities in search of work. Men, women and children worked in the new factories operating machines that spun and wove cloth, or made pottery, paper and glass.

Women under 20 made up the majority of all factory workers, according to an article on the Industrial Revolution by the Economic History Association. Many power-loom workers, and most water-frame and spinning-jenny workers, were women. However, few women were mule spinners, and male workers sometimes violently resisted attempts to hire women for that position, although some women did work as assistant mule spinners. Many children also worked in the factories and mines, operating the same dangerous equipment as adult workers. As you might suspect, this was a great departure from life before the revolution.

WHERE WE ARE GOING:

In an attempt to create more jobs, the new administration is reassessing free trade agreements, leveraging tariffs on imports, and promising tax incentives to manufacturers that keep their production plants in the U.S. Yet while these measures are certainly making the U.S. more attractive for manufacturers, they’re unlikely to directly increase the number of jobs in the sector. What they will do, however, is free up more capital for manufacturers to invest in automation. This will have the following benefits:

  • Automation will reduce production costs and make U.S. companies more competitive in the global market. High domestic operating costs, due in large part to comparatively high wages, compromise the U.S. manufacturing industry’s position as the world leader. Our main competitor is China, where low-cost production plants currently produce almost eighteen percent (17.6%) of the world’s goods, just zero-point-six percent (0.6%) less than the U.S. Automation allows manufacturers to reduce labor costs and streamline processes. Lower manufacturing costs result in lower product prices, which in turn will increase demand.

Low-cost production plants in China currently produce 17.6% of the world’s goods—just 0.6% less than the U.S.

  • Automation increases productivity and improves quality. Smart manufacturing processes that make use of technologies such as robotics, big data, analytics, sensors, and the IoT are faster, safer, more accurate, and more consistent than traditional assembly lines. Robotics provide 24/7 labor, while automated systems perform real-time monitoring of the production process. Irregularities, such as equipment failures or quality glitches, can be immediately addressed. Connected plants use sensors to keep track of inventory and equipment performance, and automatically send orders to suppliers when necessary. All of this combined minimizes downtime, while maximizing output and product quality.
  • Manufacturers will re-invest in innovation and R&D. Cutting-edge technologies such as robotics, additive manufacturing, and augmented reality (AR) are likely to be widely adopted within a few years. For example, Apple® CEO Tim Cook recently announced the tech giant’s $1 billion investment fund aimed at assisting U.S. companies practicing advanced manufacturing. To remain competitive, manufacturers will have to re-invest a portion of their profits in R&D. An important aspect of innovation will be determining how to integrate increasingly sophisticated technologies with human functions to create highly effective solutions that support manufacturers’ outcomes.

Technologies such as robotics, additive manufacturing, and augmented reality are likely to be widely adopted soon. To remain competitive, manufacturers will have to re-invest a portion of their profits in R&D.

HOW AUTOMATION WILL AFFECT THE WORKFORCE:

Now, let’s look at the five ways in which automation will affect the workforce.

  • Certain jobs will be eliminated. By 2025, 3.5 million jobs will be created in manufacturing—yet due to the skills gap, two (2) million will remain unfilled. Certain repetitive jobs, primarily on the assembly line, will be eliminated. This trend is with us right now, and retraining of employees is imperative.
  • Current jobs will be modified. In sixty percent (60%) of all occupations, thirty percent (30%) of the tasks can be automated. For the first time, we hear the word “co-bot”: robot-assisted manufacturing in which an employee works side-by-side with a robotic system. It’s happening right now.
  • New jobs will be created. There are several ways automation will create new jobs. First, lower operating costs will make U.S. products more affordable, which will result in rising demand. This in turn will increase production volume and create more jobs. Second, while automation can streamline and optimize processes, there are still tasks that haven’t been or can’t be fully automated. Supervision, maintenance, and troubleshooting will all require a human component for the foreseeable future. Third, as more manufacturers adopt new technologies, there’s a growing need to fill new roles such as data scientists and IoT engineers. Fourth, as technology evolves due to practical application, new roles that integrate human skills with technology will be created and quickly become commonplace.
  • There will be a skills gap between eliminated jobs and modified or new roles. Manufacturers should partner with educational institutions that offer vocational training in STEM fields. By offering students on-the-job training, they can foster a skilled and loyal workforce.  Manufacturers need to step up and offer additional job training.  Employees need to step up and accept the training that is being offered.  Survival is dependent upon both.
  • The manufacturing workforce will keep evolving. Manufacturers must invest in talent acquisition and development—both to build expertise in-house and to facilitate continuous innovation. Ten years ago, would you have heard the words RFID, biometrics, stereolithography, or additive manufacturing? I don’t think so. The workforce MUST keep evolving because technology will only improve and become an ever-more-present force on the manufacturing floor.
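The second bullet’s figures imply a rough upper bound on how much of all work could be automated. A quick, purely illustrative calculation:

```python
# Illustrative arithmetic for the figures cited above: if 30% of tasks
# can be automated in 60% of occupations (and, for simplicity, none
# elsewhere), the overall share of tasks affected is at most:
occupations_affected = 0.60
tasks_automatable = 0.30
overall_share = occupations_affected * tasks_automatable
print(f"{overall_share:.0%} of all tasks")  # 18% of all tasks
```

In other words, even under these aggressive assumptions, the bulk of work still requires a human in the loop, which is exactly why the “co-bot” model is emerging.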

As always, I welcome your comments.


Portions of the following post were taken from an article by Rob Spiegel published through Design News Daily.

Two former Apple design engineers, Anna Katrina Shedletsky and Samuel Weiss, have leveraged machine learning to help brand owners improve their manufacturing lines. Their company, Instrumental, uses artificial intelligence (AI) to identify and fix problems with the goal of helping clients ship on time. The AI system consists of camera-equipped inspection stations that allow brand owners to remotely manage product lines at their contract manufacturing facilities in order to maximize up-time, quality, and speed. Their digital photo is shown as follows:

Shedletsky and Weiss took what they learned from years of working with Apple contract manufacturers and put it into AI software.

“The experience with Apple opened our eyes to what was possible. We wanted to build artificial intelligence for manufacturing. The technology had been proven in other industries and could be applied to the manufacturing industry; it’s part of the evolution of what is happening in manufacturing. The product we offer today solves a very specific need, but it also works toward overall intelligence in manufacturing.”

Shedletsky spent six (6) years working at Apple prior to founding Instrumental with fellow Apple alum Weiss, who serves as Instrumental’s CTO (Chief Technical Officer). The two took their experience in solving manufacturing problems and created the AI fix. “After spending hundreds of days at manufacturers responsible for millions of Apple products, we gained a deep understanding of the inefficiencies in the new-product development process,” said Shedletsky. “There’s no going back; robotics and automation have already changed manufacturing. Intelligence like the kind we are building will change it again. We can radically improve how companies make products.”

There are numerous examples of big and small companies with problems that prevent them from shipping products on time. Delays are expensive and can cause the loss of a sale. One day of delay at a start-up could cost $10,000 in sales; for a large company, the cost could be millions. “There are hundreds of issues that need to be found and solved. They are difficult and they have to be solved one at a time,” said Shedletsky. “You can get on a plane, go to a factory and look at failure analysis so you can see why you have problems. Or, you can reduce the amount of time needed to identify and fix the problems by analyzing them remotely, using a combo of hardware and software.”

Instrumental combines hardware and software to take images of each unit at key stages of assembly on the line. The system then makes those images remotely searchable and comparable so the brand owner can learn from and react to assembly-line data. Engineers can then take action on issues. “The station goes onto the assembly line in China,” said Shedletsky. “We get the data into the cloud to discover issues the contract manufacturer doesn’t know they have. With the data, you can do failure analysis and reduce the time it takes to find an issue and correct it.”
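Instrumental’s actual system is proprietary, but the core idea, comparing images of each unit against a known-good reference and flagging the outliers, can be sketched in a few lines. Everything below (the flat-list image representation, the threshold, the function names) is a hypothetical illustration, not Instrumental’s API:

```python
# Hypothetical sketch of reference-based visual inspection: compare each
# unit's image against a known-good "golden" image and flag units whose
# mean absolute pixel difference exceeds a threshold. Images are modeled
# as flat lists of grayscale values (0-255) for simplicity.

def mean_abs_diff(image, reference):
    """Average per-pixel absolute difference between two same-size images."""
    assert len(image) == len(reference)
    return sum(abs(a - b) for a, b in zip(image, reference)) / len(image)

def flag_anomalies(units, reference, threshold=10.0):
    """Return the IDs of units whose images deviate from the reference."""
    return [uid for uid, img in units if mean_abs_diff(img, reference) > threshold]

# Toy data: a 4-pixel "golden" image and three assembled units.
golden = [100, 100, 100, 100]
units = [
    ("unit-1", [101, 99, 100, 100]),   # normal process variation
    ("unit-2", [100, 100, 100, 102]),  # normal process variation
    ("unit-3", [100, 180, 175, 100]),  # e.g. a misplaced component
]
print(flag_anomalies(units, golden))  # ['unit-3']
```

A production system would of course use learned models rather than a fixed threshold, but the workflow is the same: image each unit, compare against references in the cloud, and surface only the deviations to an engineer.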

WHAT IS AI:

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.
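The “intelligent agent” definition above can be made concrete with a toy example. The sketch below (the thermostat setting, the utility function, and all the names are illustrative) shows the perceive-then-act loop the definition describes:

```python
# Minimal "intelligent agent" in the textbook sense: perceive the
# environment, then choose the action that maximizes estimated success.
# The environment and utility function here are toy illustrations.

def perceive(environment):
    """Sense the current state of the environment."""
    return environment["state"]

def choose_action(state, actions, utility):
    """Pick the action with the highest estimated utility in this state."""
    return max(actions, key=lambda a: utility(state, a))

# Toy thermostat-style agent: keep the temperature near a 70-degree goal.
def utility(state, action):
    predicted = state + {"heat": 2, "cool": -2, "idle": 0}[action]
    return -abs(predicted - 70)  # closer to the goal is better

env = {"state": 65}  # current temperature reading
state = perceive(env)
print(choose_action(state, ["heat", "cool", "idle"], utility))  # heat
```

Swap the hand-written utility function for one learned from data and you have the colloquial sense of AI: the machine “learns” what success looks like instead of being told.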

As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of “artificial intelligence”, having become a routine technology. Capabilities currently classified as AI include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

FUTURE:

Some would have you believe that AI IS the future and that we will succumb to the “Rise of the Machines”. I’m not so melodramatic. I feel AI has progressed, and will continue to progress, to the point where great time savings and reductions in labor may be realized. Anna Katrina Shedletsky and Samuel Weiss realize the potential and feel there will be no going back from this disruptive technology. Moving AI to the factory floor will produce great benefits for manufacturing and other commercial enterprises. There is also a significant possibility that job creation will occur as a result. All is not doom and gloom.
