AUGMENTED REALITY (AR)

October 13, 2017


Ask just about anybody to give a definition of Virtual Reality (VR) and they will take a stab at it. This is because the gaming and entertainment segments of our population have used VR as a new tool to promote games such as SuperHot VR, Rock Band VR, House of the Dying Sun, Minecraft VR, Robo Recall, and others.  Ask them about Augmented Reality (AR), however, and they will probably give you the definition of VR, or nothing at all.

Augmented reality, sometimes called Mixed Reality, is a technology that merges real-world objects or the environment with virtual elements generated by computer sensory input such as sound, video, graphics, or GPS data.  Unlike VR, which completely replaces the real world with a virtual one, AR operates in real time and is interactive with objects found in the environment, overlaying a virtual display on the real one.
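
To make the idea concrete, here is a minimal sketch, in Python with OpenCV, of AR’s core move: drawing virtual elements over a live, real-world camera view. Real AR systems add tracking, depth sensing, and object recognition on top of this; the label text and box coordinates below are purely hypothetical stand-ins for a recognized object.

```python
# Toy illustration of an AR overlay: virtual graphics drawn on top of a
# live camera feed. Purely a sketch; real AR adds tracking and recognition.
import cv2

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Virtual element: a box and label pinned over a hypothetical region
    # of interest, as if an object there had been recognized.
    cv2.rectangle(frame, (100, 100), (300, 260), (0, 255, 0), 2)
    cv2.putText(frame, "recognized object: info overlay", (100, 90),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("AR overlay sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```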

While popularized by gaming, AR technology has proven adept at bringing an interactive digital world into a person’s perceived real world, where the digital layer can reveal more information about a real-world object the person is looking at.  That, in essence, is what AR strives to do.  We are going to take a look at several very real applications of AR that indicate the possibilities of this technology.

  • Augmented Reality has found a home in healthcare, helping professionals receive information on the status of their patients. Healthcare giant Cigna recently launched a program called BioBall that uses Microsoft HoloLens technology in an interactive game to test blood pressure and body mass index (BMI). Patients hold a light, medium-sized ball in their hands in a one-minute race to capture all the images that flash on the screen in front of them. The BioBall senses a player’s heartbeat. At the University of Maryland’s Augmentarium virtual and augmented reality laboratory, the school is using AR in healthcare to improve how ultrasound is administered to a patient.  Physicians wearing an AR device can look at both the patient and the ultrasound device while images flash on the “hood” of the AR device itself.
  • AR is opening up new methods to teach young children a variety of subjects they might not otherwise be interested in learning and, in some cases, to help those who have trouble in class catch up with their peers. The University of Helsinki’s AR program helps struggling kids learn science by enabling them to virtually interact with molecule movement in gases, gravity, sound waves, and airplane wind physics.   AR creates new types of learning possibilities by transporting “old knowledge” into a new format.
  • Projection-based AR is emerging as a new way to cast virtual elements into the real world without the use of bulky headgear or glasses, which is why it is becoming a very popular alternative for use in the office or during meetings. Startups such as Lampix and Lightform are working on projection-based augmented reality for use in the boardroom, retail displays, hospitality rooms, digital signage, and other applications.
  • In Germany, a company called FleetBoard is in the development phase for application software that tracks logistics for truck drivers, helping with the long series of pre-departure checks before setting off cross-country or for local deliveries. The FleetBoard Vehicle Lens app uses a smartphone and software to provide live image recognition that identifies the truck’s number plate.  The relevant information is superimposed in AR, thus speeding up the pre-departure process.
  • Last winter, Delft University of Technology in the Netherlands started working with first responders in using AR as a tool in crime scene investigation. The handheld AR system allows on-scene investigators and remote forensic teams to minimize the potential for site contamination.  This could be extremely helpful in finding traces of DNA, preserving evidence, and getting medical help from an outside source.
  • Sandia National Laboratories is working with AR as a tool to improve security training for users who are protecting vulnerable sites such as those housing nuclear weapons or nuclear materials. The physical security training guides users through real-world scenarios such as theft or sabotage so they are better prepared when an event takes place.  The training can be accomplished remotely and cheaply using standalone AR headsets.
  • In Finland, the VTT Technical Research Center recently developed an AR tool for the European Space Agency (ESA) for astronauts to perform real-time equipment monitoring in space. AR prepares astronauts with in-depth practice by coordinating the activities with experts in a mixed-reality situation.
  • U.S.-based Daqri International uses computer vision for industrial AR to enable data visualization while working on machinery or in a warehouse. Daqri’s glasses and headsets display project data, tasks that need to be completed, and potential problems with machinery, and can even show where an object needs to be placed or repaired.

CONCLUSIONS:

Augmented Reality merges real-world objects with virtual elements generated by sensory input devices to provide great advantages to the user.  No longer are gaming and entertainment the sole objectives of its use.  This brings to life a “new normal” for professionals seeking more and better technology to provide solutions to real-world problems.

DEGREE OR NO DEGREE

October 7, 2017


The availability of information in books (as always), on the Internet, through seminars and professional shows, scientific publications, podcasts, Webinars, etc. is amazing in today’s “digital age”.  That raises the question—is a college degree really necessary?   Can you rise to a level of competence and succeed by being self-taught?  For most, a college degree is the way to open doors. For a precious few, however, no help is needed.

Let’s look at twelve (12) individuals who did just that.

The co-founder of Apple and the force behind the iPod, iPhone, and iPad, Steve Jobs attended Reed College, an academically rigorous liberal arts college with a heavy emphasis on social sciences and literature. Shortly after enrolling in 1972, however, he dropped out and took a job as a technician at Atari.

Legendary industrialist Howard Hughes is often said to have graduated from Cal Tech, but the truth is that the California school has no record of his having attended classes there. He did enroll at Rice University in Texas in 1924, but dropped out prematurely due to the death of his father.

Arguably Harvard’s most famous dropout, Bill Gates was already an accomplished software programmer when he started as a freshman at the Massachusetts campus in 1973. His passion for software actually began before high school, at the Lakeside School in Seattle, Washington, where he was programming in BASIC by age 13.

Just like his fellow Microsoft co-founder Bill Gates, Paul Allen was a college dropout.

Like Gates, he was also a star student (a perfect score on the SAT) who honed his programming skills at the Lakeside School in Seattle. Unlike Gates, however, he went on to study at Washington State University before leaving in his second year to work as a programmer at Honeywell in Boston.

Even for his time, Thomas Edison had little formal education. His schooling didn’t start until age eight, and then only lasted a few months.

Edison said that he learned most of his reading, writing, and math at home from his mother. Still, he became known as one of America’s most prolific inventors, amassing 1,093 U.S. patents and changing the world with such devices as the phonograph, fluoroscope, stock ticker, motion picture camera, mechanical vote recorder, and long-lasting incandescent electric light bulb. He is also credited with patenting a system of electrical power distribution for homes, businesses, and factories.

Michael Dell, founder of Dell Computer Corp., seemed destined for a career in the computer industry long before he dropped out of the University of Texas. He purchased his first calculator at age seven, applied to take a high school equivalency exam at age eight, and performed his first computer teardown at age 15.

A pioneer of early television technology, Philo T. Farnsworth was a brilliant student who dropped out of Brigham Young University after the death of his father, according to Biography.com.

Although born in a log cabin, Farnsworth quickly grasped technical concepts, sketching out his revolutionary idea for a television vacuum tube while still in high school, much to the confusion of teachers and fellow students.

Credited with inventing the controls that made fixed-wing powered flight possible, the Wright Brothers had little formal education.

Neither attended college, but they gained technical knowledge from their experiences working with printing presses, bicycles, and motors. By doing so, they were able to develop a three-axis controller, which served as the means to steer and maintain the equilibrium of an aircraft.

Stanford Ovshinsky managed to amass 400 patents covering subjects ranging from nickel-metal hydride batteries to amorphous silicon semiconductors to hydrogen fuel cells, all without the benefit of a college education. He is best known for his formation of Energy Conversion Devices and his pioneering work in nickel-metal hydride batteries, which have been widely used in hybrid and electric cars, as well as laptop computers, digital cameras, and cell phones.

Preston Tucker, designer of the ill-fated 1948 Tucker sedan, worked as a machinist, police officer, and car salesman, but was not known to have attended college. Still, he managed to become founder of the Tucker Aviation Corp. and the Tucker Corp.

Larry Ellison dropped out of his pre-med studies at the University of Illinois in his second year and left the University of Chicago after only one term, but his brief academic experiences eventually led him to the top of the computer industry.

A Harvard dropout, Mark Zuckerberg was considered a prodigy before he even set foot on campus.

He began doing BASIC programming in middle school, created an instant messaging system while in high school, and learned to read and write French, Hebrew, Latin, and ancient Greek prior to enrolling in college.

CONCLUSIONS:

In conclusion, I want to leave you with a quote from President Calvin Coolidge:

Nothing in this world can take the place of persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent.


WHERE WE ARE:

The manufacturing industry remains an essential component of the U.S. economy.  In 2016, manufacturing accounted for almost twelve percent (11.7%) of the U.S. gross domestic product (GDP) and contributed slightly over two trillion dollars ($2.18 trillion) to our economy. Every dollar spent in manufacturing adds close to two dollars ($1.81) to the economy, because it contributes to development in auxiliary sectors such as logistics, retail, and business services.  I personally think this is a striking number when you compare that contribution to other sectors of our economy.  Interestingly enough, according to recent research, manufacturing could constitute as much as thirty-three percent (33%) of U.S. GDP if both its entire value chain and its production for other sectors are included.

Research from the Bureau of Labor Statistics shows that employment in manufacturing has been trending up since January of 2017. After double-digit gains in the first quarter of 2017, six thousand (6,000) new jobs were added in April.  Currently, the manufacturing industry employs 12,396,000 people, which equals more than nine percent (9%) of the U.S. workforce.   Nonetheless, many experts are concerned that these employment gains will soon be halted by the ever-rising adoption of automation. Yet automation is inevitable—and, as in the previous industrial revolutions, it is likely to result in job creation in the long term.  Let’s look back at the Industrial Revolution.
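
First, though, those figures are worth a quick back-of-the-envelope check. A few lines of Python (values rounded; the one added assumption is that the quoted shares and totals refer to the same year) show that they hang together:

```python
# Back-of-the-envelope check of the 2016 manufacturing figures above.
mfg_contribution = 2.18e12   # dollars manufacturing contributed to GDP
mfg_share = 0.117            # manufacturing share of GDP (11.7%)
print(f"implied U.S. GDP: ${mfg_contribution / mfg_share / 1e12:.1f} trillion")
# -> about $18.6 trillion, consistent with U.S. GDP in 2016

mfg_jobs = 12_396_000        # manufacturing employment
workforce_share = 0.09       # quoted share of the U.S. workforce (9%)
print(f"implied workforce: {mfg_jobs / workforce_share / 1e6:.0f} million workers")
# -> about 138 million workers
```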

INDUSTRIAL REVOLUTION:

The Industrial Revolution began in the late 18th century when a series of new inventions such as the spinning jenny and steam engine transformed manufacturing in Britain. The changes in British manufacturing spread across Europe and America, replacing traditional rural lifestyles as people migrated to cities in search of work. Men, women and children worked in the new factories operating machines that spun and wove cloth, or made pottery, paper and glass.

Women under 20 comprised the majority of all factory workers, according to an article on the Industrial Revolution by the Economic History Association. Many power loom workers, and most water frame and spinning jenny workers, were women. However, few women were mule spinners, and male workers sometimes violently resisted attempts to hire women for this position, although some women did work as assistant mule spinners. Many children also worked in the factories and mines, operating the same dangerous equipment as adult workers.  As you might suspect, this was a great departure from times prior to the revolution.

WHERE WE ARE GOING:

In an attempt to create more jobs, the new administration is reassessing free trade agreements, leveraging tariffs on imports, and promising tax incentives to manufacturers to keep their production plants in the U.S. Yet while these measures are certainly making the U.S. more attractive for manufacturers, they’re unlikely to directly increase the number of jobs in the sector. What it will do, however, is free up more capital for manufacturers to invest in automation. This will have the following benefits:

  • Automation will reduce production costs and make U.S. companies more competitive in the global market. High domestic operating costs—in large part due to comparatively high wages—compromise the U.S. manufacturing industry’s position as the world leader. Our main competitor is China, where low-cost production plants currently produce almost eighteen percent (17.6%) of the world’s goods—just zero point six percent (0.6%) less than the U.S. Automation allows manufacturers to reduce labor costs and streamline processes. Lower manufacturing costs result in lower product prices, which in turn will increase demand.

  • Automation increases productivity and improves quality. Smart manufacturing processes that make use of technologies such as robotics, big data, analytics, sensors, and the IoT are faster, safer, more accurate, and more consistent than traditional assembly lines. Robotics provide 24/7 labor, while automated systems perform real-time monitoring of the production process. Irregularities, such as equipment failures or quality glitches, can be immediately addressed. Connected plants use sensors to keep track of inventory and equipment performance, and automatically send orders to suppliers when necessary. All of this combined minimizes downtime, while maximizing output and product quality.
  • Manufacturers will re-invest in innovation and R&D. Cutting-edge technologies, such as robotics, additive manufacturing, and augmented reality (AR), are likely to be widely adopted within a few years. For example, Apple CEO Tim Cook recently announced the tech giant’s $1 billion investment fund aimed at assisting U.S. companies practicing advanced manufacturing. To remain competitive, manufacturers will have to re-invest a portion of their profits in R&D. An important aspect of innovation will involve determining how to integrate increasingly sophisticated technologies with human functions to create highly effective solutions that support manufacturers’ outcomes.

HOW AUTOMATION WILL AFFECT THE WORKFORCE:

Now, let’s look at the five ways in which automation will affect the workforce.

  • Certain jobs will be eliminated.  By 2025, 3.5 million jobs will be created in manufacturing—yet, due to the skills gap, two (2) million will remain unfilled. Certain repetitive jobs, primarily on the assembly line, will be eliminated.  This trend is with us right now.  Retraining of employees is imperative.
  • Current jobs will be modified.  In sixty percent (60%) of all occupations, thirty percent (30%) of the tasks can be automated.  For the first time, we hear the word “co-bot”: a collaborative robot, used in robot-assisted manufacturing where an employee works side by side with a robotic system.  It’s happening right now.
  • New jobs will be created. There are several ways automation will create new jobs. First, lower operating costs will make U.S. products more affordable, which will result in rising demand. This in turn will increase production volume and create more jobs. Second, while automation can streamline and optimize processes, there are still tasks that haven’t been or can’t be fully automated. Supervision, maintenance, and troubleshooting will all require a human component for the foreseeable future. Third, as more manufacturers adopt new technologies, there’s a growing need to fill new roles such as data scientists and IoT engineers. Fourth, as technology evolves due to practical application, new roles that integrate human skills with technology will be created and quickly become commonplace.
  • There will be a skills gap between eliminated jobs and modified or new roles. Manufacturers should partner with educational institutions that offer vocational training in STEM fields. By offering students on-the-job training, they can foster a skilled and loyal workforce.  Manufacturers need to step up and offer additional job training.  Employees need to step up and accept the training that is being offered.  Survival is dependent upon both.
  • The manufacturing workforce will keep evolving. Manufacturers must invest in talent acquisition and development—both to build expertise in-house and to facilitate continuous innovation.  Ten years ago, would you have heard the words RFID, biometrics, stereolithography, or additive manufacturing?  I don’t think so.  The workforce MUST keep evolving because technology will only improve and become an ever more present force on the manufacturing floor.

As always, I welcome your comments.

AN AVERAGE DAY FOR DATA

August 4, 2017


I am sure you have heard the phrase “big data” and possibly wondered just what that terminology relates to.  Let’s get the “official” definition, as follows:

The amount of data that’s being created and stored on a global level is almost inconceivable, and it just keeps growing. That means there’s even more potential to glean key insights from business information – yet only a small percentage of data is actually analyzed. What does that mean for businesses? How can they make better use of the raw information that flows into their organizations every day?

The concept gained momentum in the early 2000s when industry analyst Doug Laney articulated the now-mainstream definition of big data as the three Vs (volume, velocity, and variety), to which variability and complexity are often added:

  • Volume. Organizations collect data from a variety of sources, including business transactions, social media and information from sensor or machine-to-machine data. In the past, storing it would’ve been a problem – but new technologies (such as Hadoop) have eased the burden.
  • Velocity. Data streams in at an unprecedented speed and must be dealt with in a timely manner. RFID tags, sensors and smart metering are driving the need to deal with torrents of data in near-real time. (A toy sketch of this dimension appears after this list.)
  • Variety. Data comes in all types of formats – from structured, numeric data in traditional databases to unstructured text documents, email, video, audio, stock ticker data and financial transactions.
  • Variability. In addition to the increasing velocities and varieties of data, data flows can be highly inconsistent, with periodic peaks. Is something trending in social media? Daily, seasonal and event-triggered peak data loads can be challenging to manage. Even more so with unstructured data.
  • Complexity. Today’s data comes from multiple sources, which makes it difficult to link, match, cleanse and transform data across systems. However, it’s necessary to connect and correlate relationships, hierarchies and multiple data linkages or your data can quickly spiral out of control.
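
As promised in the velocity item above, here is a toy Python sketch of that dimension: each reading is processed the moment it arrives, against a rolling window, rather than being stored for later batch analysis. The spike rule and the simulated feed are invented for illustration.

```python
# Toy sketch of "velocity": react to each reading in near-real time.
from collections import deque

window = deque(maxlen=1000)   # rolling window of the last 1,000 readings

def on_reading(value: float) -> None:
    """Ingest one reading and react to the rolling picture immediately."""
    window.append(value)
    rolling_avg = sum(window) / len(window)
    if value > 2 * rolling_avg:   # crude spike test, for illustration
        print(f"spike: {value:.1f} (rolling average {rolling_avg:.1f})")

for v in [10, 11, 9, 10, 12, 45, 10]:   # simulated sensor/RFID feed
    on_reading(v)                        # prints a spike alert at 45
```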

AN AVERAGE DAY IN THE LIFE OF BIG DATA:

A picture is worth a thousand words, but let us now quantify, on a daily basis, what we mean by big data.

  • YouTube’s viewers are watching a billion (1,000,000,000) hours of video each day.
  • We perform over forty thousand (40,000) searches per second on Google alone. That is approximately three and one-half (3.5) billion searches per day and roughly one point two (1.2) trillion searches per year, worldwide. (A quick sanity check of these conversions appears after this list.)
  • Five years ago, IBM estimated that two point five (2.5) exabytes (2.5 billion gigabytes) of data were generated every day. That figure has only grown since then.
  • The number of e-mails sent per day is around 269 billion, which works out to roughly ninety-eight (98) trillion e-mails per year. Globally, the data stored in data centers will quintuple by 2020 to reach 915 exabytes.  This is up 5.3-fold from 171 exabytes in 2015, a compound annual growth rate (CAGR) of forty percent (40%).
  • On average, an autonomous car will churn out 4 TB of data per day when factoring in cameras, radar, sonar, GPS, and LIDAR, and that is based on just one hour of driving per day.  Every autonomous car will generate the data equivalent of almost 3,000 people.
  • By 2024, mobile networks will see machine-to-machine (M2M) connections jump ten-fold to 2.3 billion, from 250 million in 2014, according to Machina Research.
  • The data collected by BMW’s current fleet of 40 prototype autonomous cars during a single test session would fill a stack of CDs 60 miles high.
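
As flagged in the list above, the per-second, per-day, and per-year search figures, and the data-center growth rate, are easy to verify; a minimal Python check:

```python
# Sanity check of the conversions quoted in the list above.
SECONDS_PER_DAY = 60 * 60 * 24

searches_per_second = 40_000
per_day = searches_per_second * SECONDS_PER_DAY
per_year = per_day * 365
print(f"{per_day / 1e9:.2f} billion searches/day")      # -> 3.46 billion
print(f"{per_year / 1e12:.2f} trillion searches/year")  # -> 1.26 trillion

# 171 EB (2015) -> 915 EB (2020) is a 5.35-fold rise over five years.
cagr = (915 / 171) ** (1 / 5) - 1
print(f"implied CAGR: {cagr:.1%}")                      # -> 39.9%, i.e. ~40%
```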

We have become a world that lives “by the numbers”, and I’m not sure that is altogether troubling.  At no time in our history have we had access to data that informs, misinforms, directs, and challenges us as we do now.  How we use that data makes all the difference in our daily lives.  I have a great friend named Joe McGuinness. His favorite expression: “It’s about time we learn to separate the fly s____t from the pepper.”  If we apply this phrase to big data, he may just be correct. Be careful out there.


One of the best things the automotive industry accomplishes is showing us what might be in our future.  Automakers have the finances, creative talent, and vision to provide a glimpse into their “wish list” for upcoming vehicles.  Mercedes-Benz has done just that with its futuristic F 015 Luxury in Motion.

In order to provide a foundation for the new autonomous F 015 Luxury in Motion research vehicle, an interdisciplinary team of experts from Mercedes-Benz has devised a scenario that incorporates different aspects of day-to-day mobility. Above and beyond its mobility function, this scenario perceives the motor car as a private retreat that additionally offers an important added value for society at large. (I like the word retreat.) If you take a look at how much time the “average” individual spends in his or her automobile or truck, we see the following:

  • On average, Americans drive 29.2 miles per day, making two trips with an average total duration of forty-six (46) minutes. This and other revealing data are the result of a ground-breaking study currently underway by the AAA Foundation for Traffic Safety and the Urban Institute. (A quick consistency check of these figures appears after this list.)
  • Motorists age sixteen (16) years and older drive, on average, 29.2 miles per day or 10,658 miles per year.
  • Women take more driving trips, but men spend twenty-five (25) percent more time behind the wheel and drive thirty-five (35) percent more miles than women.
  • Both teenagers and seniors over the age of seventy-five (75) drive less than any other age group; motorists 30-49 years old drive an average of 13,140 miles annually, more than any other age group.
  • The average distance and time spent driving increase in relation to higher levels of education. A driver with a grade school or some high school education drove an average of 19.9 miles and 32 minutes daily, while a college graduate drove an average of 37.2 miles and 58 minutes.
  • Drivers who reported living “in the country” or “a small town” drive greater distances (12,264 miles annually) and spend a greater amount of time driving than people who described living in a “medium sized town” or city (9,709 miles annually).
  • Motorists in the South drive the most (11,826 miles annually), while those in the Northeast drive the least (8,468 miles annually).
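
As noted in the first item, the AAA figures are internally consistent; a short Python check confirms the daily-to-annual conversion:

```python
# Consistency check on the AAA driving figures quoted above.
miles_per_day = 29.2
print(miles_per_day * 365)   # -> 10658.0, the quoted 10,658 miles per year

minutes_per_day = 46
print(f"{minutes_per_day * 365 / 60:.0f} hours behind the wheel per year")
# -> about 280 hours
```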

With this being the case, why not enjoy it?

The F 015 made its debut at the Consumer Electronics Show in Las Vegas more than two years ago. It’s packed with advanced (or what was considered advanced in 2015) autonomous technology, and can, in theory, run for almost 900 kilometers on a mixture of pure electric power and a hydrogen fuel cell.

But while countless other vehicles are still trying to prove that cars can, literally, drive themselves, the Mercedes-Benz offering takes this for granted. Instead, this vehicle wants us to consider what we’ll actually do while the car is driving us around.

The steering wheel slides into the dashboard to create more of a “lounge” space. The seating configuration allows four people to face each other if they want to talk. And when the onboard conversation dries up, a bewildering collection of screens — one on the rear wall, and one on each of the doors — offers plenty of opportunity to interact with various media.

The F 015 could have done all of this as a flash-in-the-pan show car — seen at a couple of major events before vanishing without a trace. But in fact, it has been touring almost constantly since that Vegas debut.

“Anyone who focuses solely on the technology has not yet grasped how autonomous driving will change our society,” emphasizes Dr Dieter Zetsche, Chairman of the Board of Management of Daimler AG and Head of Mercedes-Benz Cars. “The car is growing beyond its role as a mere means of transport and will ultimately become a mobile living space.”

Thus the visionary research vehicle was born: a vehicle that raises comfort and luxury to a new level by offering a maximum of space and a lounge-like character inside. Every facet of the F 015 Luxury in Motion reflects the Mercedes way of interpreting the terms “modern luxury”, emotion, and intelligence.

This innovative four-seater is a forerunner of a mobility revolution, and this is immediately apparent from its futuristic appearance. Sensuousness and clarity, the core elements of the Mercedes-Benz design philosophy, combine to create a unique, progressive aesthetic appeal.

OK, with this being the case, let us now take a pictorial look at what the “Benz” has to offer.

One look and you can see the car is definitely aerodynamic in styling.  I am very sure that much time has been spent with this “ride” in wind tunnels, with slipstreams being monitored carefully.  That is where drag coefficients are initially determined.

The two photos above show the front and rear swept glass windshields, which definitely reduce aerodynamic drag.

The interiors are the most striking feature of this automobile.

Please note, this version is a four-seater but with plenty of leg-room.

Each occupant has a touch screen, presumably for accessing wireless services or the Internet.  One thing to note: as yet there is no published list price for the car.  I’m sure that is being considered at this time, but there are no USD numbers to date.  Also, as mentioned, the car is self-driving, and that brings added complexities.  By design, this vehicle is a moving computer.  It has to be.  I am always very interested in the maintenance and training necessary to diagnose and repair a vehicle such as this.  Infrastructure MUST be in place to facilitate quick turnaround when trouble arises, both mechanical and electrical.

As always, I welcome your comments.


Portions of the following post were taken from an article by Rob Spiegel published through Design News Daily.

Two former Apple design engineers, Anna Katrina Shedletsky and Samuel Weiss, have leveraged machine learning to help brand owners improve their manufacturing lines. Their company, Instrumental, uses artificial intelligence (AI) to identify and fix problems with the goal of helping clients ship on time. The AI system consists of camera-equipped inspection stations that allow brand owners to remotely manage product lines at their contract manufacturing facilities, with the purpose of maximizing up-time, quality, and speed.

Shedletsky and Weiss took what they learned from years of working with Apple contract manufacturers and put it into AI software.

“The experience with Apple opened our eyes to what was possible. We wanted to build artificial intelligence for manufacturing. The technology had been proven in other industries and could be applied to the manufacturing industry. It’s part of the evolution of what is happening in manufacturing. The product we offer today solves a very specific need, but it also works toward overall intelligence in manufacturing.”

Shedletsky spent six (6) years working at Apple prior to founding Instrumental with fellow Apple alum Weiss, who serves as Instrumental’s CTO (Chief Technical Officer).  The two took their experience in solving manufacturing problems and created the AI fix. “After spending hundreds of days at manufacturers responsible for millions of Apple products, we gained a deep understanding of the inefficiencies in the new-product development process,” said Shedletsky. “There’s no going back; robotics and automation have already changed manufacturing. Intelligence like the kind we are building will change it again. We can radically improve how companies make products.”

There are numerous examples of big and small companies with problems that prevent them from shipping products on time. Delays are expensive and can cause the loss of a sale. One day of delay at a start-up could cost $10,000 in sales. For a large company, the cost could be millions. “There are hundreds of issues that need to be found and solved. They are difficult and they have to be solved one at a time,” said Shedletsky. “You can get on a plane, go to a factory and look at failure analysis so you can see why you have problems. Or, you can reduce the amount of time needed to identify and fix the problems by analyzing them remotely, using a combo of hardware and software.”

Instrumental combines hardware and software that takes images of each unit at key states of assembly on the line. The system then makes those images remotely searchable and comparable in order for the brand owner to learn and react to assembly line data. Engineers can then take action on issues. “The station goes onto the assembly line in China,” said Shedletsky. “We get the data into the cloud to discover issues the contract manufacturer doesn’t know they have. With the data, you can do failure analysis and reduce the time it takes to find an issue and correct it.”
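
Instrumental has not published its algorithms, but the general idea described above, comparing images of each unit at key assembly states against known-good references so deviations can be found remotely, can be sketched in a few lines of Python. This is an illustration of the concept only; the method, file names, and thresholds are all hypothetical.

```python
# Minimal sketch of reference-based visual inspection, assuming aligned
# grayscale images of each unit at one assembly station. Illustrative
# only: Instrumental's actual methods are not public, and every name,
# path, and threshold here is hypothetical.
import numpy as np
from PIL import Image

def load_gray(path):
    """Load an image as a grayscale array normalized to [0, 1]."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float64) / 255.0

def build_reference(golden_paths):
    """Per-pixel mean and spread over a set of known-good units."""
    stack = np.stack([load_gray(p) for p in golden_paths])
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6  # avoid divide-by-zero

def flag_unit(unit_path, mean, std, z_thresh=4.0, frac_thresh=0.01):
    """Flag a unit if too many pixels deviate strongly from the reference."""
    z = np.abs(load_gray(unit_path) - mean) / std
    return (z > z_thresh).mean() > frac_thresh

mean, std = build_reference(["golden_01.png", "golden_02.png", "golden_03.png"])
if flag_unit("unit_4711.png", mean, std):
    print("unit_4711: possible assembly defect, route to failure analysis")
```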

WHAT IS AI:

Artificial intelligence (AI) is intelligence exhibited by machines.  In computer science, the field of AI research defines itself as the study of “intelligent agents“: any device that perceives its environment and takes actions that maximize its chance of success at some goal.   Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

As machines become increasingly capable, abilities once thought to require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of “artificial intelligence”, having become a routine technology.  Capabilities currently classified as AI include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.
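
The “intelligent agent” definition above maps naturally onto a few lines of code: perceive the environment, then choose the action with the best estimated progress toward a goal. The thermostat-style example below is purely illustrative; the names, effects, and utility function are invented.

```python
# Minimal sketch of an "intelligent agent": perceive, then pick the
# action that maximizes estimated success at some goal. Illustrative only.

def agent_step(percept, actions, utility):
    """Choose the action with the highest estimated utility."""
    return max(actions, key=lambda action: utility(percept, action))

# Hypothetical thermostat-style agent whose goal is a 21 C setpoint.
EFFECT = {"heat": 2.0, "cool": -2.0, "idle": 0.0}  # assumed action effects

def comfort(percept, action):
    """Negative distance from the setpoint after the action's effect."""
    return -abs(21.0 - (percept["temperature"] + EFFECT[action]))

print(agent_step({"temperature": 18.0}, ["heat", "cool", "idle"], comfort))
# -> "heat"
```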

FUTURE:

Some would have you believe that AI IS the future and we will succumb to the “Rise of the Machines”.  I’m not so melodramatic.  I feel AI has progressed, and will progress, to the point where great time savings and reductions in labor may be realized.   Anna Katrina Shedletsky and Samuel Weiss realize the potential and feel there will be no going back from this disruptive technology.   Moving AI to the factory floor will produce great benefits to manufacturing and other commercial enterprises.   There is also a significant possibility that job creation will occur as a result.  All is not doom and gloom.


Various definitions of product lifecycle management or PLM have been issued over the years but basically: product lifecycle management is the process of managing the entire lifecycle of a product from inception, through engineering design and manufacture, to service and disposal of manufactured products.  PLM integrates people, data, processes and business systems and provides a product information backbone for companies and their extended enterprise.

In recent years, great emphasis has been put on the disposal of a product after its service life has been met.  How to get rid of a product or component is extremely important. Disposal methodology is covered by RoHS standards for the European Community.  If you sell into the EU, you will have to designate proper disposal.  Dumping in a landfill is no longer appropriate.

Since this course deals with the application of PLM to industry, we will now look at various industry definitions.

Industry Definitions

“PLM is a strategic business approach that applies a consistent set of business solutions in support of the collaborative creation, management, dissemination, and use of product definition information across the extended enterprise, and spanning from product concept to end of life integrating people, processes, business systems, and information. PLM forms the product information backbone for a company and its extended enterprise.” Source:  CIMdata

“Product life cycle management or PLM is an all-encompassing approach for innovation, new product development and introduction (NPDI) and product information management from initial idea to the end of life.  PLM Systems is an enabling technology for PLM integrating people, data, processes, and business systems and providing a product information backbone for companies and their extended enterprise.” Source:  PLM Technology Guide

“The core of PLM (product life cycle management) is in the creation and central management of all product data and the technology used to access this information and knowledge. PLM as a discipline emerged from tools such as CAD, CAM and PDM, but can be viewed as the integration of these tools with methods, people and the processes through all stages of a product’s life.” Source:  Wikipedia article on Product Lifecycle Management

“Product life cycle management is the process of managing product-related design, production and maintenance information. PLM may also serve as the central repository for secondary information, such as vendor application notes, catalogs, customer feedback, marketing plans, archived project schedules, and other information acquired over the product’s life.” Source:  Product Lifecycle Management

“It is important to note that PLM is not a definition of a piece, or pieces, of technology. It is a definition of a business approach to solving the problem of managing the complete set of product definition information-creating that information, managing it through its life, and disseminating and using it throughout the lifecycle of the product. PLM is not just a technology, but is an approach in which processes are as important, or more important than data.” Source:  CIMdata

“PLM or Product Life Cycle Management is a process or system used to manage the data and design process associated with the life of a product from its conception and envisioning through its manufacture, to its retirement and disposal. PLM manages data, people, business processes, manufacturing processes, and anything else pertaining to a product. A PLM system acts as a central information hub for everyone associated with a given product, so a well-managed PLM system can streamline product development and facilitate easier communication among those working on/with a product.” Source:  Aras

Hopefully, you can see that PLM deals with methodologies from “white napkin design to landfill disposal”.  Please note, documentation is critical to all aspects of PLM, and good document production, storage, and retrieval are extremely important to the overall process.  We are talking about CAD, CAM, CAE, DFSS, laboratory testing notes, etc.  In other words, “the whole nine yards” of product life.   If you work in a company with ISO certification, PLM is a great method to ensure retaining that certification.
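
As a minimal sketch of that “product information backbone”, consider one central record that carries a product’s data and documents through named lifecycle stages. The field names and stages below are illustrative only, not a reference to any real PLM system.

```python
# Minimal sketch of a central PLM product record. Illustrative only.
from dataclasses import dataclass, field

STAGES = ["concept", "design", "manufacture", "service", "disposal"]

@dataclass
class ProductRecord:
    part_number: str
    description: str
    stage: str = "concept"
    # stage -> filed documents (CAD models, test notes, RoHS declarations)
    documents: dict = field(default_factory=dict)

    def attach(self, doc_path):
        """File a document under the record's current lifecycle stage."""
        self.documents.setdefault(self.stage, []).append(doc_path)

    def advance(self):
        """Move the record to the next lifecycle stage, if any remain."""
        idx = STAGES.index(self.stage)
        if idx < len(STAGES) - 1:
            self.stage = STAGES[idx + 1]

rec = ProductRecord("PN-1001", "sensor housing")
rec.attach("cad/housing_rev_a.step")   # filed under "concept"
rec.advance()                          # concept -> design
print(rec.stage, rec.documents)
```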

In looking at the four stages of a product’s lifecycle, we see the following:

Four Stages of Product Life Cycle—Marketing and Sales:

The first stage is introduction, when the product is brought into the market. In this stage, there’s heavy marketing activity and product promotion, and the product is put into limited outlets in a few channels for distribution. Sales take off slowly in this stage. The need is to create awareness, not profits.

The second stage is growth. In this stage, sales take off, the market knows of the product; other companies are attracted, profits begin to come in and market shares stabilize.

The third stage is maturity, where sales grow at slowing rates and finally stabilize. In this stage, products get differentiated, price wars and sales promotion become common and a few weaker players exit.

The fourth stage is decline. Here, sales drop because consumers may have changed or the product is no longer relevant or useful. Price wars continue, several products are withdrawn, and cost control becomes the way out for most products in this stage.

Benefits of PLM Relative to the Four Stages of Product Life:

Considering the benefits of Product Lifecycle Management, we realize the following:

  • Reduced time to market
  • Increased full-price sales
  • Improved product quality and reliability
  • Reduced prototyping costs
  • More accurate and timely request-for-quote generation
  • Ability to quickly identify potential sales opportunities and revenue contributions
  • Savings through the re-use of original data
  • A framework for product optimization
  • Reduced waste
  • Savings through the complete integration of engineering workflows
  • Documentation that can assist in proving compliance for RoHS or Title 21 CFR Part 11
  • Ability to provide contract manufacturers with access to a centralized product record
  • Seasonal fluctuation management
  • Improved forecasting to reduce material costs
  • Maximized supply chain collaboration
  • Much better “troubleshooting” when field problems arise, accomplished through laboratory testing and reliability testing documentation

PLM considers not only the four stages of a product’s lifecycle but all of the work prior to marketing and sales AND disposal after the product is removed from commercialization.   With this in mind, why is PLM a necessary business technique today?  Because of increases in technology, manpower, and the specialization of departments, PLM was needed to integrate all activity toward the design, manufacture, and support of the product. Back in the late 1960s, when the F-15 Eagle was conceived and developed, almost all manufacturing and design processes were done by hand.  Blueprints or drawings needed to make the parts for the F-15 were created on paper. No electronics, no e-mails; all paper for documents. This caused a lack of efficiency in design and manufacturing compared to today’s technology.  OK, here is another example of today’s technology and the application of PLM.

If we look at the processes for Boeing’s DREAMLINER, we see that the 787 Dreamliner has about 2.3 million parts per airplane.  Development and production of the 787 has involved large-scale collaboration with numerous suppliers worldwide. They include everything from “fasten seatbelt” signs to jet engines and vary in size from small fasteners to large fuselage sections. Some parts are built by Boeing, and others are purchased from supplier partners around the world.  In 2012, Boeing purchased approximately seventy-five (75) percent of its supplier content from U.S. companies. On the 787 program, content from non-U.S. suppliers accounts for about thirty (30) percent of purchased parts and assemblies.  PLM, or Boeing’s version of PLM, was used to bring about commercialization of the 787 Dreamliner.

 
