Elon Musk has warned again about the dangers of artificial intelligence, saying that it poses “vastly more risk” than the apparent nuclear capabilities of North Korea do. I feel sure Mr. Musk is talking about the long-term dangers and not short-term realities.

This is not the first time Musk has stated that AI could potentially be one of the most dangerous international developments. He said in October 2014 that he considered it humanity’s “biggest existential threat”, a view he has repeated several times while making investments in AI startups and organizations, including OpenAI, to “keep an eye on what’s going on”. As he put it: “Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA would not make flying safer. They’re there for good reason.”

Musk again called for regulation, previously doing so directly to US governors at their annual national meeting in Providence, Rhode Island.  Musk’s tweets coincide with the testing of an AI designed by OpenAI to play the multiplayer online battle arena (Moba) game Dota 2, which successfully managed to win all its 1-v-1 games at the International Dota 2 championships against many of the world’s best players competing for a $24.8m (£19m) prize fund.

The AI displayed the ability to predict where human players would deploy forces and improvise on the spot, in a game where sheer speed of operation does not correlate with victory, meaning the AI was simply better, not just faster, than the best human players.

Musk backed the non-profit AI research company OpenAI in December 2015, taking up a co-chair position. OpenAI’s goal is to develop AI “in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”. But it is not the first group to take on human players in a gaming scenario. Google’s DeepMind AI outfit, in which Musk was an early investor, beat the world’s best players in the board game Go and has its sights set on conquering the real-time strategy game StarCraft II.

Musk envisions a situation like the one found in the movie “I, Robot”: humanoid robots that can think for themselves. Great movie—but the time-frame was set in a future Earth (2035 A.D.) where robots are common assistants and workers for their human owners. The story follows “robotophobic” Chicago Police Detective Del Spooner’s investigation into the murder of Dr. Alfred Lanning, who works at U.S. Robotics.  Let me clue you in—the robot did it.

I am sure this audience is familiar with Isaac Asimov’s Three Laws of Robotics.

  • First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov’s three laws suggest there will be no “Rise of the Machines” like the very popular movie depicts.   For the three laws to be null and void, we would have to enter a world of “singularity”.  The term singularity describes the moment when a civilization changes so much that its rules and technologies are incomprehensible to previous generations. Think of it as a point-of-no-return in history. Most thinkers believe the singularity will be jump-started by extremely rapid technological and scientific changes. These changes will be so fast, and so profound, that every aspect of our society will be transformed, from our bodies and families to our governments and economies.

A good way to understand the singularity is to imagine explaining the internet to somebody living in the year 1200. Your frames of reference would be so different that it would be almost impossible to convey how the internet works, let alone what it means to our society. You are on the other side of what seems like a singularity to our person from the Middle Ages. But from the perspective of a future singularity, we are the medieval ones. Advances in science and technology mean that singularities might happen over periods much shorter than 800 years. And nobody knows for sure what the hell they’ll bring.

Author Ken MacLeod has a character describe the singularity as “the Rapture for nerds” in his novel The Cassini Division, and the turn of phrase stuck, becoming a popular way to describe the singularity. (Note: MacLeod didn’t actually coin this phrase – he says he got the phrase from a satirical essay in an early-1990s issue of Extropy.) Catherynne Valente argued recently for an expansion of the term to include what she calls “personal singularities,” moments where a person is altered so much that she becomes unrecognizable to her former self. This definition could include post-human experiences. Post-human (my words) would describe a robotic future.

Could this happen?  Elon Musk has an estimated net worth of $13.2 billion, making him the 87th richest person in the world, according to Forbes. His fortune owes much to his stake in Tesla Motors Inc. (TSLA), of which he remains CEO and chief product architect. Musk made his first fortune as a cofounder of PayPal, the online payments system that was sold to eBay for $1.5 billion in 2002.  In other words, he is no dummy.

I think it is very wise to listen to people like Musk and heed any and all warnings they may give. The Executive, Legislative and Judicial branches of our country are too busy trying to get reelected to bother with such warnings, and when “catch-up” is needed, they always go overboard with rules and regulations.  Now is the time to develop proper and binding laws and regulations—when the technology is new.


WHERE WE ARE:

The manufacturing industry remains an essential component of the U.S. economy.  In 2016, manufacturing accounted for almost twelve percent (11.7%) of the U.S. gross domestic product (GDP) and contributed slightly over two trillion dollars ($2.18 trillion) to our economy. Every dollar spent in manufacturing adds close to two dollars ($1.81) to the economy because it contributes to development in auxiliary sectors such as logistics, retail, and business services.  I personally think this is a striking number when you compare that contribution to other sectors of our economy.  Interestingly enough, according to recent research, manufacturing could constitute as much as thirty-three percent (33%) of the U.S. GDP if both its entire value chain and production for other sectors are included.

Research from the Bureau of Labor Statistics shows that employment in manufacturing has been trending up since January of 2017. After double-digit gains in the first quarter of 2017, six thousand (6,000) new jobs were added in April.  Currently, the manufacturing industry employs 12,396,000 people, which equals more than nine percent (9%) of the U.S. workforce.   Nonetheless, many experts are concerned that these employment gains will soon be halted by the ever-rising adoption of automation. Yet automation is inevitable—and like in the previous industrial revolutions, automation is likely to result in job creation in the long term.  If we look back at the Industrial Revolution, we can see why.

INDUSTRIAL REVOLUTION:

The Industrial Revolution began in the late 18th century when a series of new inventions such as the spinning jenny and steam engine transformed manufacturing in Britain. The changes in British manufacturing spread across Europe and America, replacing traditional rural lifestyles as people migrated to cities in search of work. Men, women and children worked in the new factories operating machines that spun and wove cloth, or made pottery, paper and glass.

Women under 20 comprised the majority of all factory workers, according to an article on the Industrial Revolution by the Economic History Association. Many power loom workers, and most water frame and spinning jenny workers, were women. However, few women were mule spinners, and male workers sometimes violently resisted attempts to hire women for this position, although some women did work as assistant mule spinners. Many children also worked in the factories and mines, operating the same dangerous equipment as adult workers.  As you might suspect, this was a great departure from times prior to the revolution.

WHERE WE ARE GOING:

In an attempt to create more jobs, the new administration is reassessing free trade agreements, levying tariffs on imports, and promising tax incentives to manufacturers to keep their production plants in the U.S. Yet while these measures are certainly making the U.S. more attractive for manufacturers, they’re unlikely to directly increase the number of jobs in the sector. What they will do, however, is free up more capital for manufacturers to invest in automation. This will have the following benefits:

  • Automation will reduce production costs and make U.S. companies more competitive in the global market. High domestic operating costs—in large part due to comparatively high wages—compromise the U.S. manufacturing industry’s position as the world leader. Our main competitor is China, where low-cost production plants currently produce almost eighteen percent (17.6%) of the world’s goods—just zero-point-six percent (0.6%) less than the U.S. Automation allows manufacturers to reduce labor costs and streamline processes. Lower manufacturing costs result in lower product prices, which in turn will increase demand.


  • Automation increases productivity and improves quality. Smart manufacturing processes that make use of technologies such as robotics, big data, analytics, sensors, and the IoT are faster, safer, more accurate, and more consistent than traditional assembly lines. Robotics provide 24/7 labor, while automated systems perform real-time monitoring of the production process. Irregularities, such as equipment failures or quality glitches, can be immediately addressed. Connected plants use sensors to keep track of inventory and equipment performance, and automatically send orders to suppliers when necessary. All of this combined minimizes downtime, while maximizing output and product quality. (A minimal sketch of such a monitoring loop follows this list.)
  • Manufacturers will re-invest in innovation and R&D. Cutting-edge technologies, such as robotics, additive manufacturing, and augmented reality (AR), are likely to be widely adopted within a few years. For example, Apple® CEO Tim Cook recently announced the tech giant’s $1 billion investment fund aimed at assisting U.S. companies practicing advanced manufacturing. To remain competitive, manufacturers will have to re-invest a portion of their profits in R&D. An important aspect of innovation will involve determining how to integrate increasingly sophisticated technologies with human functions to create highly effective solutions that support manufacturers’ outcomes.
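Here is the monitoring sketch promised above: a minimal, hypothetical control loop that reads plant-floor sensor values, flags irregularities, and triggers a supplier order when inventory runs low. The station names, thresholds, and reorder logic are invented for illustration and are not taken from any particular vendor.

```python
# Minimal sketch of a connected-plant monitoring loop: read sensor values,
# flag irregularities, and reorder stock automatically.
# All names and thresholds are hypothetical, for illustration only.

REORDER_POINT = 50          # inventory level that triggers a supplier order
VIBRATION_LIMIT = 4.0       # mm/s RMS; above this we suspect equipment trouble

def check_station(reading):
    """Return a list of alerts for one station's sensor reading."""
    alerts = []
    if reading["vibration_mm_s"] > VIBRATION_LIMIT:
        alerts.append(f"{reading['station']}: vibration high, schedule maintenance")
    if reading["parts_in_bin"] < REORDER_POINT:
        alerts.append(f"{reading['station']}: inventory low, send supplier order")
    return alerts

# Example readings as they might arrive from plant-floor sensors.
readings = [
    {"station": "press-01", "vibration_mm_s": 2.1, "parts_in_bin": 240},
    {"station": "weld-03",  "vibration_mm_s": 5.7, "parts_in_bin": 35},
]

for r in readings:
    for alert in check_station(r):
        print(alert)
```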


HOW AUTOMATION WILL AFFECT THE WORKFORCE:

Now, let’s look at the five ways in which automation will affect the workforce.

  • Certain jobs will be eliminated.  By 2025, 3.5 million jobs will be created in manufacturing—yet due to the skills gap, two (2) million will remain unfilled. Certain repetitive jobs, primarily on the assembly line, will be eliminated.  This trend is with us right now.  Retraining of employees is imperative.
  • Current jobs will be modified.  In sixty percent (60%) of all occupations, thirty percent (30%) of the tasks can be automated.  For the first time, we hear the word “co-bot”.  A co-bot enables robot-assisted manufacturing, in which an employee works side-by-side with a robotic system.  It’s happening right now.
  • New jobs will be created. There are several ways automation will create new jobs. First, lower operating costs will make U.S. products more affordable, which will result in rising demand. This in turn will increase production volume and create more jobs. Second, while automation can streamline and optimize processes, there are still tasks that haven’t been or can’t be fully automated. Supervision, maintenance, and troubleshooting will all require a human component for the foreseeable future. Third, as more manufacturers adopt new technologies, there’s a growing need to fill new roles such as data scientists and IoT engineers. Fourth, as technology evolves due to practical application, new roles that integrate human skills with technology will be created and quickly become commonplace.
  • There will be a skills gap between eliminated jobs and modified or new roles. Manufacturers should partner with educational institutions that offer vocational training in STEM fields. By offering students on-the-job training, they can foster a skilled and loyal workforce.  Manufacturers need to step up and offer additional job training.  Employees need to step up and accept the training that is being offered.  Survival is dependent upon both.
  • The manufacturing workforce will keep evolving. Manufacturers must invest in talent acquisition and development—both to build expertise in-house and to facilitate continuous innovation.  Ten years ago, would you have heard the words, RFID, Biometrics, Stereolithography, Additive manufacturing?  I don’t think so.  The workforce MUST keep evolving because technology will only improve and become a more-present force on the manufacturing floor.

As always, I welcome your comments.


Portions of the following post were taken from an article by Rob Spiegel published through Design News Daily.

Two former Apple design engineers, Anna Katrina Shedletsky and Samuel Weiss, have leveraged machine learning to help brand owners improve their manufacturing lines. Their company, Instrumental, uses artificial intelligence (AI) to identify and fix problems with the goal of helping clients ship on time. The AI system consists of camera-equipped inspection stations that allow brand owners to remotely manage product lines at their contract manufacturing facilities with the purpose of maximizing up-time, quality and speed.

Shedletsky and Weiss took what they learned from years of working with Apple contract manufacturers and put it into AI software.

“The experience with Apple opened our eyes to what was possible. We wanted to build artificial intelligence for manufacturing. The technology had been proven in other industries and could be applied to the manufacturing industry; it’s part of the evolution of what is happening in manufacturing. The product we offer today solves a very specific need, but it also works toward overall intelligence in manufacturing.”

Shedletsky spent six (6) years working at Apple prior to founding Instrumental with fellow Apple alum Weiss, who serves as Instrumental’s CTO (Chief Technical Officer).  The two took their experience in solving manufacturing problems and created the AI fix. “After spending hundreds of days at manufacturers responsible for millions of Apple products, we gained a deep understanding of the inefficiencies in the new-product development process,” said Shedletsky. “There’s no going back, robotics and automation have already changed manufacturing. Intelligence like the kind we are building will change it again. We can radically improve how companies make products.”

There are a number of examples of big and small companies with problems that prevent them from shipping products on time. Delays are expensive and can cause the loss of a sale. One day of delay at a start-up could cost $10,000 in sales. For a large company, the cost could be millions. “There are hundreds of issues that need to be found and solved. They are difficult and they have to be solved one at a time,” said Shedletsky. “You can get on a plane, go to a factory and look at failure analysis so you can see why you have problems. Or, you can reduce the amount of time needed to identify and fix the problems by analyzing them remotely, using a combo of hardware and software.”

Instrumental combines hardware and software that takes images of each unit at key states of assembly on the line. The system then makes those images remotely searchable and comparable in order for the brand owner to learn and react to assembly line data. Engineers can then take action on issues. “The station goes onto the assembly line in China,” said Shedletsky. “We get the data into the cloud to discover issues the contract manufacturer doesn’t know they have. With the data, you can do failure analysis and reduce the time it takes to find an issue and correct it.”
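Instrumental has not published its algorithms, but the core idea of comparing images of units at the same assembly state can be sketched very simply. The example below is a hypothetical illustration that flags any unit whose photo differs too much from a known-good “golden” reference; a production system would use trained models rather than raw pixel differences.

```python
# Minimal sketch of image-based anomaly flagging on an assembly line.
# A real system like the one described above would use trained models; here
# we simply compare each unit's image against a known-good reference frame.

import numpy as np

def mean_abs_difference(unit_image: np.ndarray, reference: np.ndarray) -> float:
    """Average per-pixel difference between a unit photo and the golden reference."""
    return float(np.mean(np.abs(unit_image.astype(float) - reference.astype(float))))

def flag_units(images, reference, threshold=12.0):
    """Return indices of units whose images deviate more than `threshold` from the reference."""
    return [i for i, img in enumerate(images)
            if mean_abs_difference(img, reference) > threshold]

# Tiny synthetic demo: unit 1 has a "missing component" (a dark patch).
reference = np.full((64, 64), 200, dtype=np.uint8)
good_unit = reference.copy()
bad_unit = reference.copy()
bad_unit[20:40, 20:40] = 0

print(flag_units([good_unit, bad_unit], reference))   # -> [1]
```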

WHAT IS AI:

Artificial intelligence (AI) is intelligence exhibited by machines.  In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.   Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of “artificial intelligence”, having become a routine technology.  Capabilities currently classified as AI include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.
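The “intelligent agent” definition above maps directly onto a perceive-decide-act loop. Here is a minimal, purely illustrative sketch: a thermostat-style agent whose environment and decision rules are invented for the example.

```python
# A bare-bones "intelligent agent" in the textbook sense: it perceives its
# environment and takes the action expected to serve its goal. The toy
# thermostat environment here is invented purely for illustration.

class ThermostatAgent:
    def __init__(self, target_temp: float):
        self.target = target_temp          # the goal the agent tries to achieve

    def perceive(self, environment: dict) -> float:
        return environment["temperature"]  # sensor reading

    def act(self, temperature: float) -> str:
        # Choose the action expected to move the world toward the goal.
        if temperature < self.target - 0.5:
            return "heat_on"
        if temperature > self.target + 0.5:
            return "heat_off"
        return "hold"

agent = ThermostatAgent(target_temp=21.0)
for temp in (18.2, 20.9, 23.4):
    print(temp, "->", agent.act(agent.perceive({"temperature": temp})))
```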

FUTURE:

Some would have you believe that AI IS the future and we will succumb to the “Rise of the Machines”.  I’m not so melodramatic.  I feel AI has progressed and will progress to the point where great time saving and reduction in labor may be realized.   Anna Katrina Shedletsky and Samuel Weiss realize the potential and feel there will be no going back from this disruptive technology.   Moving AI to the factory floor will produce great benefits to manufacturing and other commercial enterprises.   There is also a significant possibility that job creation will occur as a result.  All is not doom and gloom.

COLLABORATIVE ROBOTICS

June 26, 2017


I want to start this discussion by defining collaboration.  According to Merriam-Webster:

  • to work jointly with others or together especially in an intellectual endeavor (“An international team of scientists collaborated on the study.”)
  • to cooperate with or willingly assist an enemy of one’s country and especially an occupying force (“suspected of collaborating with the enemy”)
  • to cooperate with an agency or instrumentality with which one is not immediately connected.

We are going to adopt the first definition: to work jointly with others.  Well, what if the “others” are robotic systems?

Collaborative robots, or cobots as they have come to be known, are robotic systems designed to operate collaboratively or in conjunction with humans.  “The term ‘collaborative robot’ is a verb, not a noun. The collaboration is dependent on what the robot is doing, not the robot itself.”  With that in mind, collaborative robotic systems and applications generally combine some or all of the following characteristics:

  • They are designed to be safe around people. This is accomplished by using sensors to prevent touching or by limiting the force if the system touches a human or a combination of both.
  • They are often relatively light weight and can be moved from task to task as needed. This means they can be portable or mobile and can be mounted on movable tables.
  • They do not require skill to program. Most cobots are simple enough that anyone who can use a smartphone or tablet can teach or program them. Most robotic systems of this type are programmed by using a “teach pendant”. The simplest can allow up to ninety (90) programs to be installed.
  • Just as a power saw is intended to help, not replace, the carpenter, the cobot is generally intended to assist, not replace, the production worker. (This is where the collaboration gets its name. It assists the human in accomplishing a task.)  The production worker generally works side-by-side with the robot.
  • Collaborative robots are generally simpler than more traditional robots, which makes them cheaper to buy, operate and maintain.

There are two basic approaches to making cobots safe. One approach, taken by Universal, Rethink and others, is to make the robot inherently safe. If it makes contact with a human co-worker, it immediately stops so the worker feels no more than a gentle nudge. Rounded surfaces help make that nudge even more gentle. This approach limits the maximum load that the robot can handle as well as the speed. A robot moving a fifty (50) pound part at high speed will definitely hurt no matter how quickly it can stop upon making contact.

A sensor-based approach allows collaborative use in faster and heavier applications. Traditionally, physical barriers such as cages or light curtains have been used to stop the robot when a person enters the perimeter. Modern sensors can be more discriminating, sensing not only the presence of a person but their location as well. This allows the robot to slow down, work around the person or stop as the situation demands to maintain safety. When the person moves away, the robot can automatically resume normal operation.
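The sensor-based approach just described amounts to scaling the robot's speed with the measured distance to the nearest person. The sketch below is a hypothetical illustration of that logic only; the distances and speed limits are invented, and a real installation would take its values from a risk assessment against the applicable safety standards.

```python
# Hypothetical speed-and-separation logic for a sensor-based collaborative cell.
# The distances and speed limits are illustrative only.

STOP_DISTANCE_M = 0.5      # person closer than this: protective stop
SLOW_DISTANCE_M = 1.5      # person closer than this: reduce speed
FULL_SPEED = 1.0           # fraction of the robot's programmed speed

def allowed_speed(nearest_person_m: float) -> float:
    """Return the speed fraction the robot may run at, given the nearest person's distance."""
    if nearest_person_m < STOP_DISTANCE_M:
        return 0.0                          # stop until the person moves away
    if nearest_person_m < SLOW_DISTANCE_M:
        # Scale linearly from zero at the stop boundary up to full speed.
        span = SLOW_DISTANCE_M - STOP_DISTANCE_M
        return FULL_SPEED * (nearest_person_m - STOP_DISTANCE_M) / span
    return FULL_SPEED                       # nobody nearby: resume normal operation

for d in (0.3, 0.9, 2.0):
    print(f"person at {d} m -> speed {allowed_speed(d):.0%}")
```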

No discussion of robot safety can ignore the end-of-arm tooling (EOAT).  If the robot and operator are handing parts back and forth, the tooling needs to be designed so that, if the person gets their fingers caught, they can’t be hurt.

The next digital photographs will give you some idea as to how humans and robotic systems can work together and the tasks they can perform.

The following statistics are furnished by “Digital Engineering” February 2017.

  • By 2020, more than three (3) million workers on a global basis will be supervised by a “robo-boss”.
  • Forty-five (45) percent of all work activities could be automated using already demonstrated technology and fifty-nine (59) percent of all manufacturing activities could be automated, given technical considerations.
  • At the present time, fifty-nine (59) percent of US manufacturers are using some form of robotic technology.
  • Artificial Intelligence (AI) will replace sixteen (16) percent of American jobs by 2025 and will create nine (9) percent of American jobs.
  • By 2018, six (6) billion connected devices will be used to assist commerce and manufacturing.

CONCLUSIONS: OK, why am I posting this message?  Robotic systems and robots themselves WILL become more and more familiar to us as the years go by.  They are already in use in a tremendous number of factories and on manufacturing floors.  Right now, most of the robotic work cells used in manufacturing are NOT collaborative.  The systems are SCARA (the SCARA acronym stands for Selective Compliance Assembly Robot Arm or Selective Compliance Articulated Robot Arm) type and perform a pick-and-place function or a very specific task such as laying down a bead of adhesive on a plastic or metal part.  Employee training will be necessary if robotic systems are used and if those systems are collaborative in nature.  In other words—get ready for it.  Train for this to happen so that when it does you are prepared.


If you work or have worked in manufacturing you know robotic systems have definitely had a distinct impact on assembly, inventory acquisition from storage areas and finished-part warehousing.   There is considerable concern that the “rise of the machines” will eventually replace individuals performing a variety of tasks.  I personally do not feel this will be the case, although there is no doubt robotic systems have found their way onto the manufacturing floor.

From the “Executive Summary World Robotics 2016 Industrial Robots”, we see the following:

2015:  Robot sales increased by 15% to 253,748 units, by far the highest level ever recorded for one year. The main driver of the growth in 2015 was general industry, with an increase of 33% compared to 2014, in particular the electronics industry (+41%), the metal industry (+39%), and the chemical, plastics and rubber industry (+16%). Robot sales in the automotive industry increased only moderately in 2015 after a five-year period of continued considerable increase. China significantly expanded its leading position as the biggest market, with a share of 27% of the total supply in 2015.

In looking at the chart below, we can see the sales picture with perspective and show how system sales have increased from 2003.

It is very important to note that five (5) major markets represented seventy-five percent (75%) of the total sales volume in 2015: China, the Republic of Korea, Japan, the United States, and Germany.

As you can see from the bar chart above, these five markets’ share of sales volume increased from seventy percent (70%) in 2014. Since 2013, China has been the biggest robot market in the world, with continued dynamic growth. With sales of about 68,600 industrial robots in 2015 – an increase of twenty percent (20%) compared to 2014 – China alone surpassed Europe’s total sales volume (50,100 units). Chinese robot suppliers installed about 20,400 units according to the information from the China Robot Industry Alliance (CRIA). Their sales volume was about twenty-nine percent (29%) higher than in 2014. Foreign robot suppliers increased their sales by seventeen percent (17%) to 48,100 units (including robots produced by international robot suppliers in China). The market share of Chinese robot suppliers grew from twenty-five percent (25%) in 2013 to twenty-nine percent (29%) in 2015. Between 2010 and 2015, the total supply of industrial robots in China increased by about thirty-six percent (36%) per year on average.

About 38,300 units were sold to the Republic of Korea, fifty-five percent (55%) more than in 2014. The increase is partly due to a number of companies which started to report their data only in 2015. The actual growth rate in 2015 is estimated at about thirty percent (30%) to thirty-five percent (35%).

In 2015, robot sales in Japan increased by twenty percent (20%) to about 35,000 units reaching the highest level since 2007 (36,100 units). Robot sales in Japan followed a decreasing trend between 2005 (reaching the peak at 44,000 units) and 2009 (when sales dropped to only 12,767 units). Between 2010 and 2015, robot sales increased by ten percent (10%) on average per year (CAGR).

Robot installations in the United States continued to increase in 2015, by five percent (5%), to a peak of 27,504 units. The driver of this continued growth since 2010 was the ongoing trend to automate production in order to strengthen American industries on the global market and to keep manufacturing at home, and in some cases, to bring back manufacturing that had previously been sent overseas.

Germany is the fifth largest robot market in the world. In 2015, the number of robots sold increased slightly to a new record high at 20,105 units compared to 2014 (20,051 units). In spite of the high robot density of 301 units per 10,000 employees, annual sales are still very high in Germany. Between 2010 and 2015, annual sales of industrial robots increased by an average of seven percent (7%) in Germany (CAGR).
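Several of the growth figures above are compound annual growth rates (CAGR). As a quick worked example, the formula and the Japanese numbers quoted earlier can be checked as follows; the implied 2010 base is computed from the quoted figures rather than taken from the report.

```python
# Compound annual growth rate (CAGR): the constant yearly rate that takes a
# starting value to an ending value over n years.
#   CAGR = (end / start) ** (1 / n) - 1

def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

def implied_start(end: float, rate: float, years: int) -> float:
    return end / (1 + rate) ** years

# Japan (figures quoted above): roughly 35,000 units sold in 2015 and an
# average 10% CAGR for 2010-2015. The 2010 base below is implied by those two
# numbers, not taken directly from the report.
base_2010 = implied_start(35_000, 0.10, 5)
print(round(base_2010))                      # ~21,732 units in 2010
print(f"{cagr(base_2010, 35_000, 5):.1%}")   # 10.0%, as expected
```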

From the graphic below, you can see which industries employ robotic systems the most.

Growth rates are not projected to lessen through 2019.

A fascinating development involves the assistance of human endeavor by robotic systems.  This fairly new technology is called collaborative robots, or COBOTS.  Let’s get a definition.

COBOTS:

A cobot or “collaborative robot” is a robot designed to assist human beings as a guide or assistor in a specific task. A regular robot is designed to be programmed to work more or less autonomously. In one approach to cobot design, the cobot allows a human to perform certain operations successfully if they fit within the scope of the task and to steer the human on a correct path when the human begins to stray from or exceed the scope of the task.

“The term ‘collaborative’ is used to distinguish robots that collaborate with humans from robots that work behind fences without any direct interaction with humans.  In contrast, ‘articulated’, ‘cartesian’, ‘delta’ and ‘SCARA’ distinguish different robot kinematics.”

Traditional industrial robots excel at applications that require extremely high speeds, heavy payloads and extreme precision.  They are reliable and very useful for many types of high volume, low mix applications.  But they pose several inherent challenges for higher mix environments, particularly in smaller companies.  First and foremost, they are very expensive, particularly when considering programming and integration costs.  They require specialized engineers working over several weeks or even months to program and integrate them to do a single task.  And they don’t multi-task easily between jobs since that setup effort is so substantial.  Plus, they can’t be readily integrated into a production line with people because they are too dangerous to operate in close proximity to humans.

For small manufacturers with limited budgets, space and staff, a collaborative robot such as Baxter (shown below) is an ideal fit because it overcomes many of these challenges.  It’s extremely intuitive, integrates seamlessly with other automation technologies, is very flexible and is quite affordable with a base price of only $25,000.  As a result, Baxter is well suited for many applications, such as those requiring manual labor and a high degree of flexibility, that are currently unmet by traditional technologies.

Baxter is one example of collaborative robotics and some say is by far the safest, easiest, most flexible and least costly robot of its kind today.  It features a sophisticated multi-tier safety design that includes a smooth, polymer exterior with fewer pinch points; back-drivable joints that can be rotated by hand; and series elastic actuators which help it to minimize the likelihood of injury during inadvertent contact.

It’s also incredibly simple to use.  Line workers and other non-engineers can quickly learn to train the robot themselves, by hand.  With Baxter, the robot itself is the interface, with no teaching pendant or external control system required.  And with its ease of use and diverse skill set, Baxter is extremely flexible, capable of being utilized across multiple lines and tasks in a fraction of the time and cost it would take to re-program other robots.  Plus, Baxter is made in the U.S.A., which is a particularly appealing aspect for many of our customers looking to re-shore their own production operations.

The digital picture above shows a lady working alongside a collaborative robotic system, both performing a specific task. She feels right at home with her mechanical friend only because such usage demands a great element of safety.

Certifiable safety is the most important precondition for a collaborative robot system to be applied to an industrial setting.  Available solutions that fulfill the requirements imposed by safety standardization often show limited performance or productivity gains, as most of today’s implemented scenarios are often limited to very static processes. This means a strict stop and go of the robot process, when the human enters or leaves the work space.

Collaborative systems are still a work in progress, but the technology has greatly expanded in use, primarily because safety requirements can now be satisfied.  Upcoming years will only produce greater acceptance, and do not be surprised if you see robots and humans working side by side on every manufacturing floor over the next decade.

As always, I welcome your comments.


Biomedical Engineering may be a fairly new term to some of you.   What is a biomedical engineer?  What do they do? What companies do they work for?  What educational background is necessary for becoming a biomedical engineer?  These are good questions.  From LifeScience we have the following definition:

“Biomedical engineering, or bioengineering, is the application of engineering principles to the fields of biology and health care. Bioengineers work with doctors, therapists and researchers to develop systems, equipment and devices in order to solve clinical problems.”

Biomedical engineering has evolved over the years in response to advancements in science and technology.  This is NOT a new classification for engineering involvement.  Engineers have been at this for a while.  Throughout history, humans have made increasingly more effective devices to diagnose and treat diseases and to alleviate, rehabilitate or compensate for disabilities or injuries. One example is the evolution of hearing aids to mitigate hearing loss through sound amplification. The ear trumpet, a large horn-shaped device that was held up to the ear, was the only “viable form” of hearing assistance until the mid-20th century, according to the Hearing Aid Museum. Electrical devices had been developed before then, but were slow to catch on, the museum said on its website.

The possibilities of a bioengineer’s charge are numerous.

The equipment envisioned, designed, prototyped, tested and eventually commercialized has made a resounding contribution and value-added to our healthcare system.  OK, that’s all well and good but exactly what do bioengineers do on a daily basis?  What do they hope to accomplish?   Please direct your attention to the digital figure below.  As you can see, the world of the bioengineer can be somewhat complex with many options available.

The breadth of activity of biomedical engineers is significant. The field has moved from being concerned primarily with the development of medical devices in the 1950s and 1960s to include a wider ranging set of activities. As illustrated in the figure above, the field of biomedical engineering now includes many new career areas. These areas include:

  • Application of engineering system analysis (physiologic modeling, simulation, and control) to biological problems
  • Detection, measurement, and monitoring of physiologic signals (i.e., biosensors and biomedical instrumentation)
  • Diagnostic interpretation via signal-processing techniques of bioelectric data (a minimal filtering sketch follows this list)
  • Therapeutic and rehabilitation procedures and devices (rehabilitation engineering)
  • Devices for replacement or augmentation of bodily functions (artificial organs)
  • Computer analysis of patient-related data and clinical decision making (i.e., medical informatics and artificial intelligence)
  • Medical imaging; that is, the graphical display of anatomic detail or physiologic function
  • The creation of new biologic products (i.e., biotechnology and tissue engineering)
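Touching on the signal-processing item above, diagnostic interpretation of bioelectric data usually begins with simple signal conditioning. The sketch below smooths a synthetic noisy waveform with a moving average; it is a generic illustration only and not tied to any particular device or clinical method.

```python
# Generic signal-conditioning sketch for bioelectric data: smooth a noisy
# signal with a trailing moving average before any diagnostic interpretation.
# The waveform here is synthetic; real pipelines use clinically validated filters.

import math
import random

def moving_average(signal, window=5):
    """Smooth `signal` with a simple trailing moving average."""
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

random.seed(0)
# Synthetic "physiologic" waveform: a slow sine wave plus measurement noise.
raw = [math.sin(2 * math.pi * i / 50) + random.gauss(0, 0.3) for i in range(200)]
smooth = moving_average(raw, window=7)

print(f"raw spread:      {max(raw) - min(raw):.2f}")
print(f"smoothed spread: {max(smooth) - min(smooth):.2f}")   # noticeably smaller
```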

Typical pursuits of biomedical engineers include:

  • Research in new materials for implanted artificial organs
  • Development of new diagnostic instruments for blood analysis
  • Writing software for analysis of medical research data
  • Analysis of medical device hazards for safety and efficacy
  • Development of new diagnostic imaging systems
  • Design of telemetry systems for patient monitoring
  • Design of biomedical sensors
  • Development of expert systems for diagnosis and treatment of diseases
  • Design of closed-loop control systems for drug administration
  • Modeling of the physiologic systems of the human body
  • Design of instrumentation for sports medicine
  • Development of new dental materials
  • Design of communication aids for individuals with disabilities
  • Study of pulmonary fluid dynamics
  • Study of biomechanics of the human body
  • Development of material to be used as replacement for human skin

I think you will agree, these areas of interest encompass any one of several engineering disciplines, e.g., mechanical, chemical, electrical, computer science, and even civil engineering as applied to facilities and hospital structures.

RISE OF THE MACHINES

March 20, 2017


Movie making today is truly remarkable.  To me, one of the very best parts is animation created by computer graphics.  I’ve attended “B” movies just to see the graphic displays created by talented programmers.  The “Terminator” series, at least the first movie in that series, really captures the creative essence of graphic design technology.  I won’t replay the movie for you but, the “terminator” goes back in time to carry out its prime directive—kill John Connor.  The terminator, a robotic humanoid, has decision-making capability as well as human-like mobility that allows the plot to unfold.  Artificial intelligence or AI is a fascinating technology many companies are working on today.  Let’s get a proper definition of AI as follows:

“the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Question:  Are Siri, Cortana, and Alexa eventually going to be more literate than humans? Anyone excited about the recent advancements in artificial intelligence (AI) and machine learning should also be concerned about human literacy. That’s according to Project Literacy, a global campaign, backed by education company Pearson, aimed at creating awareness and fighting against illiteracy.

Project Literacy, which has been raising awareness for its cause at SXSW 2017, recently released a report, “2027: Human vs. Machine Literacy,” that projects machines powered by AI and voice recognition will surpass the literacy levels of one in seven American adults in the next ten (10) years. “While these systems currently have a much shallower understanding of language than people do, they can already perform tasks similar to simple text search task…exceeding the abilities of millions of people who are nonliterate,” Kate James, Project Literacy spokesperson and Chief Corporate Affairs and Global Marketing Officer at Pearson, wrote in the report. In light of this, the organization is calling for “society to commit to upgrading its people at the same rate as upgrading its technology, so that by 2030 no child is born at risk of poor literacy.”  (I would invite you to re-read this statement and shudder in your boots as I did.)

While the past twenty-five (25) years have seen disappointing progress in U.S. literacy, there have been huge gains in linguistic performance by a totally different type of actor – computers. Dramatic advances in natural language processing (Hirschberg and Manning, 2015) have led to the rise of language technologies like search engines and machine translation that “read” text and produce answers or translations that are useful for people. While these systems currently have a much shallower understanding of language than people do, they can already perform tasks similar to the simple text search task above – exceeding the abilities of millions of people who are nonliterate.

According to the National Center for Education Statistics, machine literacy has already exceeded the literacy abilities of the estimated three percent (3%) of non-literate adults in the US.

Comparing demographic data from the Global Developer Population and Demographic Study 2016 v2 and the 2015 Digest of Education Statistics finds there are more software engineers in the U.S. than school teachers. “We are focusing so much on teaching algorithms and AI to be better at language that we are forgetting that fifty percent (50%) of adults cannot read a book written at an eighth grade level,” Project Literacy said in a statement.  I retired from General Electric Appliances.   Each engineer was required to write, or at least draft, the Use and Care Manuals for specific cooking products.  We were instructed to 1.) use plenty of graphic examples and 2.) write for a fifth-grade audience.  Even with that, we know from experience that many consumers never read their Use and Care Manual and have no intention of doing so.  With this being the case, many of the truly cool features are never used.  They may as well buy the most basic product.
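For what it is worth, “write for a fifth-grade audience” is usually checked with a readability formula such as the Flesch-Kincaid grade level. The sketch below applies that formula with a very crude syllable counter, so treat the result as an approximation rather than a tool; the sample sentence is invented.

```python
# Rough Flesch-Kincaid grade-level estimate:
#   grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
# The syllable counter is a crude heuristic (it counts vowel groups),
# so the result is approximate.

import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = "Press the button to start the cleaning cycle. The oven locks automatically during cleaning."
print(f"Estimated grade level: {fk_grade(sample):.1f}")   # about 8 with this crude counter
```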

Research done by Business Insider reveals that thirty-two (32) million Americans cannot currently read a road sign. Yet at the same time there are ten (10) million self-driving cars predicted to be on the roads by 2020. (One could argue this will further eliminate the need for literacy, but that is debatable.)

Citing research from Venture Scanner, Project Literacy found that in 2015 investment in AI technologies, including natural language processing, speech recognition, and image recognition, reached $47.2 billion. Meanwhile, data on US government spending shows that the 2017 U.S. Federal Education Budget for schools (pre-primary through secondary school) is $40.4 billion.  I’m not too sure funding for education always goes to benefit students’ education. In other words, throwing more money at this problem may not always provide desired results, but there is no doubt funding for AI will only increase.

“Human literacy levels have stalled since 2000. At any time, this would be a cause for concern, when one in ten people worldwide…still cannot read a road sign, a voting form, or a medicine label,” James wrote in the report. “In popular discussion about advances in artificial intelligence, it is easy…”

CONCLUSION:  AI will only continue to advance and there will come a time when robotic systems will be programmed with basic decision-making skills.  To me, this is not only fascinating but more than a little scary.

THE NEXT FIVE (5) YEARS

February 15, 2017


As you well know, there are many projections relative to economies, the stock market, sports teams, entertainment, politics, technology, etc.   People the world over have given their projections for what might happen in 2017.  The world of computing technology is absolutely no different.  Certain information for this post is taken from the IEEE Computer Society’s Computer magazine web site (computer.org/computer).  These guys are pretty good at projections and have been correct multiple times over the past two decades.  They take their information from the IEEE.

The IEEE Computer Society is the world’s leading membership organization dedicated to computer science and technology. Serving more than 60,000 members, the IEEE Computer Society is the trusted information, networking, and career-development source for a global community of technology leaders that includes researchers, educators, software engineers, IT professionals, employers, and students.  In addition to conferences and publishing, the IEEE Computer Society is a leader in professional education and training, and has forged development and provider partnerships with major institutions and corporations internationally. These rich, self-selected, and self-paced programs help companies improve the quality of their technical staff and attract top talent while reducing costs.

With these credentials, you might expect them to be on the cutting edge of computer technology and development and be ahead of the curve as far as computer technology projections.  Let’s take a look.  Some of this absolutely blows me away.

Human-Brain Interface:

This effort first started within the medical profession and is continuing as research progresses.  It’s taken time but after more than a decade of engineering work, researchers at Brown University and a Utah company, Blackrock Microsystems, have commercialized a wireless device that can be attached to a person’s skull and transmit via radio thought commands collected from a brain implant. Blackrock says it will seek clearance for the system from the U.S. Food and Drug Administration, so that the mental remote control can be tested in volunteers, possibly as soon as this year.

The device was developed by a consortium, called BrainGate, which is based at Brown and was among the first to place implants in the brains of paralyzed people and show that electrical signals emitted by neurons inside the cortex could be recorded, then used to steer a wheelchair or direct a robotic arm (see “Implanting Hope”).

A major limit to these provocative experiments has been that patients can only use the prosthetic with the help of a crew of laboratory assistants. The brain signals are collected through a cable screwed into a port on their skull, then fed along wires to a bulky rack of signal processors. “Using this in the home setting is inconceivable or impractical when you are tethered to a bunch of electronics,” says Arto Nurmikko, the Brown professor of engineering who led the design and fabrication of the wireless system.
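BrainGate's actual decoding pipelines are far more sophisticated, but the basic step of turning recorded neural activity into a movement command can be sketched as a simple linear decoder. Everything below (the channel count, firing rates, and weights) is invented for illustration; real systems calibrate the decoder from training sessions with the user.

```python
# Toy sketch of the decoding step in a brain-computer interface: map recorded
# neural firing rates to a 2-D cursor (or wheelchair) velocity command.
# The firing rates and decoder weights are invented for illustration; real
# systems learn them during calibration sessions with the user.

import numpy as np

rng = np.random.default_rng(1)

n_channels = 96                                        # e.g., a 96-electrode cortical array
weights = rng.normal(0, 0.05, size=(2, n_channels))    # stand-in for a calibrated decoder

def decode_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Linear decoder: velocity = W @ firing_rates (vx, vy in arbitrary units)."""
    return weights @ firing_rates

# One 100-ms "bin" of spike counts from the implant (synthetic).
firing_rates = rng.poisson(lam=10, size=n_channels).astype(float)
vx, vy = decode_velocity(firing_rates)
print(f"cursor command: vx={vx:+.2f}, vy={vy:+.2f}")
```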

Capabilities and Hardware Projections:

Unless you have been living in a tree house for the last twenty years you know digital security is a huge problem.  IT professionals and companies writing code will definitely continue working on how to make our digital world more secure.  That is a given.

Exascale:

We can forget Moore’s Law, which refers to an observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention.  Moore’s law predicts that this trend will continue into the foreseeable future. Although the pace has slowed, the number of transistors per square inch has since doubled approximately every 18 months. This is used as the current definition of Moore’s law.  We are well beyond that, with processing speed literally progressing at “warp six”.
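As a quick illustration of what an 18-month doubling period compounds to, here is the arithmetic; the starting transistor count is arbitrary and only the growth factor matters.

```python
# What "doubling every 18 months" compounds to over a few years.
# The starting transistor count is arbitrary; only the growth factor matters.

def growth_factor(years: float, doubling_period_years: float = 1.5) -> float:
    return 2 ** (years / doubling_period_years)

start = 1_000_000                      # arbitrary baseline transistor count
for years in (3, 6, 15):
    factor = growth_factor(years)
    print(f"after {years:>2} years: x{factor:,.0f}  ({start * factor:,.0f} transistors)")
```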

Non-Volatile Memory:

If you are an old guy like me, you can remember when computer memory cost an arm and a leg.  Take a look at the JPEG below and you will get an idea as to how memory costs have decreased over the years.

Chart: hard-drive cost per gigabyte over time.

As you can see, costs have dropped remarkably over the years.

The remaining predictions on the list cover photonics and power-conservative multicore processors.

CONCLUSION:

If you combine the above predictions with 1.) Big Data, 2.) Internet of Things (IoT), 3.) Wearable Technology, 4.) Manufacturing 4.0, 5.) Biometrics, and other fast-moving technologies you have a world in which “only the adventurous thrive”.  If you do not like change, I recommend you enroll in a monastery.  You will not survive gracefully without technology on the rampage. Just a thought.


One of the items on my bucket list has been to attend the Consumer Electronics Show in Las Vegas.  (I probably need to put a rush on this one because the clock is ticking.)  For 50 years, CES has been the launching pad for innovation and new technology.  Much of this technology has changed the world. Held in Las Vegas every year, it is the world’s gathering place for all who thrive on the business of consumer technologies and where next-generation innovations are introduced to the commercial marketplace.   The International Consumer Electronics Show (International CES) showcases more than 3,800 exhibiting companies, including manufacturers, developers and suppliers of consumer technology hardware, content, technology delivery systems and more; a conference program with more than three hundred (300) conference sessions; and more than one hundred sixty-five thousand (165,000) attendees from one hundred fifty (150) countries.  It is owned and produced by the Consumer Technology Association (CTA)™ — formerly the Consumer Electronics Association (CEA)® — the technology trade association representing the $287 billion U.S. consumer technology industry, and it attracts the world’s business leaders and pioneering thinkers to a forum where the industry’s most relevant issues are addressed.  The range of products is immense, as seen from the listing of product categories below.

PRODUCT CATEGORIES:

  • 3D Printing
  • Accessories
  • Augmented Reality
  • Audio
  • Communications Infrastructure
  • Computer Hardware/Software/Services
  • Content Creation & Distribution
  • Digital/Online Media
  • Digital Imaging/Photography
  • Drones
  • Electronic Gaming
  • Fitness and Sports
  • Health and Biotech
  • Internet Services
  • Personal Privacy & Cyber Security
  • Robotics
  • Sensors
  • Smart Home
  • Startups
  • Vehicle Technology
  • Video
  • Wearables
  • Wireless Devices & Services

If we look at world-changing revolution and evolution coming from CES over the years, we may see the following advances in technology, most of which are now commercialized:

  • Videocassette Recorder (VCR), 1970
  • Laserdisc Player, 1974
  • Camcorder and Compact Disc Player, 1981
  • Digital Audio Technology, 1990
  • Compact Disc – Interactive, 1991
  • Digital Satellite System (DSS), 1994
  • Digital Versatile Disk (DVD), 1996
  • High Definition Television (HDTV), 1998
  • Hard-disc VCR (PVR), 1999
  • Satellite Radio, 2000
  • Microsoft Xbox and Plasma TV, 2001
  • Home Media Server, 2002
  • Blu-Ray DVD and HDTV PVR, 2003
  • HD Radio, 2004
  • IP TV, 2005
  • Convergence of content and technology, 2007
  • OLED TV, 2008
  • 3D HDTV, 2009
  • Tablets, Netbooks and Android Devices, 2010
  • Connected TV, Smart Appliances, Android Honeycomb, Ford’s Electric Focus, Motorola Atrix, Microsoft Avatar Kinect, 2011
  • Ultrabooks, 3D OLED, Android 4.0 Tablets, 2012
  • Ultra HDTV, Flexible OLED, Driverless Car Technology, 2013
  • 3D Printers, Sensor Technology, Curved UHD, Wearable Technologies, 2014
  • 4K UHD, Virtual Reality, Unmanned Systems, 2015

Why don’t we do this: let’s now take a very brief look at several exhibit categories to get a feel for the products.  Here we go.

Augmented Reality (AR):

Through specially designed hardware and software full of cameras, sensors, algorithms and more, your perception of reality can be instantly altered in context with your environment. Applications include sports scores showing on TV during a match, the path of a trajectory overlaying an image, gaming, construction plans and more.  VR (virtual reality) equipment is becoming extremely popular, not only with consumers, but with the Department of Defense, the Department of Motor Vehicles, and companies turning to the technology for training purposes.


Cyber Security:

The Cyber & Personal Security Marketplace will feature innovations ranging from smart wallets and safe payment apps to secure messaging and private Internet access.  If you have never been hacked, you are one in a million.  I really don’t think there are many people who have remained unaffected by digital fraud.  One entire section of the CES is devoted to cyber security.


E-Commerce:

Enterprise solutions are integral for business. From analytics, consulting, integration and cyber security to e-commerce and mobile payment, the options are ever-evolving.  As you well know, each year the number of online shoppers increases and will eventually outpace the number of shoppers visiting “brick-and-mortar” stores.  Some feel this may mean the demise of shopping centers altogether.


Self-Driving Autonomous Automobiles:

Some say if you are five years old or under you may never need a driver’s license.  I personally think this is a little far-fetched but who knows.  Self-driving automobiles are featured prominently at the CES.


Virtual Reality (VR):

Whether it will be the launch of the next wave of immersive multimedia for virtual reality systems and environments or gaming hardware, software and accessories designed for mobile, PCs or consoles, these exhibitors are sure to energize, empower and excite at CES 2017.


i-Products:

From electronic plug-ins to fashionable cases, speakers, headphones and exciting new games and applications, the product Marketplace will feature the latest third-party accessories and software for your Apple iPod®, iPhone® and iPad® devices.


3-D Printing:

Most 3D printers are used for building prototypes for the medical, aerospace, engineering and automotive industries. But with the advancement of the digital technology supporting it, these machines are moving toward more compact units with affordable price points for today’s consumer.


Robotic Systems:

The Robotics Marketplace will showcase intelligent, autonomous machines that are changing the way we live at work, at school, at the doctor’s office and at home.


Healthcare and Wellness:

Digital health continues to grow at an astonishing pace, with innovative solutions ranging from diagnosing, monitoring and treating illnesses to advancements in health care delivery and smarter lifestyles.


Sports Technology:

In a world where an athlete’s success hinges on milliseconds or millimeters, high-performance improvement and feedback are critical.


CONCLUSIONS:

I think it’s amazing and to our credit as a country that CES exists and presents, on an annual basis, designs and visions from the best and brightest.  A great show-place for ideas the world over from established companies and companies who wish to make their mark on technology.  Can’t wait to go—maybe next year.  As always, I welcome your comments.

FARADAY FUTURE FFZERO1

January 5, 2017


I certainly had no idea engineers and automobile manufacturers have been working on autonomous or driverless automobiles for years. Experiments have been conducted on automating automobiles since the 1920s.  Very promising trials took place in the 1950s and work has proceeded since then. The first self-sufficient and truly autonomous cars appeared in the 1980s, with Carnegie Mellon University’s Navlab and ALV projects in 1984 and Mercedes-Benz and Bundeswehr University Munich’s Eureka Prometheus Project in 1987. Since then, numerous major companies and research organizations have developed working prototype autonomous vehicles, including Mercedes-Benz, General Motors, Continental Automotive Systems, Autoliv Inc., Bosch, Nissan, Toyota, Audi, Volvo, Vislab from the University of Parma, Oxford University, and Google.  In July 2013, Vislab demonstrated BRAiVE, a vehicle that moved autonomously on a mixed traffic route open to public traffic.

As of 2013, four U.S. states have passed laws permitting autonomous cars: Nevada, Florida, California, and Michigan.  With the intensity involved, I’m quite sure there will be others to follow.   In Europe, cities in Belgium, France, Italy and the UK are planning to operate transport systems for driverless cars, and Germany, the Netherlands, and Spain have allowed testing robotic cars in traffic.

There is absolutely no way progress could be accomplished without technologies such as GPS, proximity sensors, visual cameras and of course the software necessary to drive each system and integrate each system so success may result.  These technologies will continue to improve over the next few years.  I heard a comment yesterday that indicated if your son or daughter is under ten years old, he or she may never have the need for a driver’s license.  Time will tell.

CLASSIFICATIONS OF DRIVERLESS VEHICLES:

SAE International (formerly the Society of Automotive Engineers) has developed a set of levels, or classifications, of driving automation for autonomous automobiles.  These levels are as follows:

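The original chart is not reproduced here, so as a stand-in, here is a summary of the widely cited SAE J3016 scheme, which defines six levels of driving automation (0 through 5), expressed as a small Python mapping.

```python
# Levels of driving automation per SAE J3016, summarized here as a stand-in
# for the chart that originally accompanied this post.

SAE_LEVELS = {
    0: "No Automation - the human driver does everything",
    1: "Driver Assistance - steering OR speed support (e.g., adaptive cruise control)",
    2: "Partial Automation - steering AND speed support; the driver must supervise",
    3: "Conditional Automation - the system drives in some conditions; the driver must take over on request",
    4: "High Automation - the system drives itself within a defined domain, no driver fallback needed",
    5: "Full Automation - the system drives itself anywhere a human driver could",
}

for level, description in SAE_LEVELS.items():
    print(f"Level {level}: {description}")
```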

FARADAY FUTURE FFZERO1:  THE CAR

I would like to introduce to you now the FARADAY FUTURE FFZERO1.   Faraday Future’s 1,000-horsepower concept car should make Tesla very, very nervous.  The media announcement was made just this week and is as follows:

LAS VEGAS — With a thumping bass soundtrack in a lengthy airplane hangar-like building in Vegas, Faraday Future unveiled their new FF 91 electric “super car” on 4 January 2017.

The automaker was criticized at last year’s Consumer Electronics Show (CES) for showing off their FFZERO1 concept car, which turned out to be more style than substance. This year’s unveiling of the FF 91 was different, in that they attempted to show off a real vehicle that consumers will be able to order soon.

Filled with more hyperbole and superlatives than a car show and tech conference combined, Faraday Future promises the fastest acceleration for a production automobile at 0-60mph in 2.39s with a whopping 1050hp. They also laid claim to the most advanced battery technology in the industry, and boldly claimed they would disrupt all aspects of the car industry.
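To put the 0-60 mph claim in perspective, the implied average acceleration works out to a bit over 1 g; a quick check:

```python
# Average acceleration implied by the claimed 0-60 mph in 2.39 s.

MPH_TO_MS = 0.44704
g = 9.81                        # m/s^2

v = 60 * MPH_TO_MS              # 60 mph is about 26.8 m/s
a = v / 2.39                    # average acceleration in m/s^2
print(f"{a:.1f} m/s^2  (~{a / g:.2f} g)")   # about 11.2 m/s^2, roughly 1.15 g
```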

Faraday Future even dared to put themselves on a roadmap of “historical steps in technology,” equating their electric vehicle to the creation of the electric motor by Michael Faraday, alternating current by Nikola Tesla and even the internet by Tim Berners-Lee.  Digital pictures that follow will indicate the overall design of the vehicle. The first JPEG shows the initial rollout and introduction at the CES 2017 this week.

Photos: the unveiling and media announcement at CES 2017, and several views of the Faraday Future concept vehicle.

DESCRIPTION:

First off, although it’s a concept high-performance one-seater, it rides on FF’s new Variable Platform Architecture (VPA), on which it will base all its future cars. Essentially, it’s a skateboard-style chassis that allows FF to easily scale the platform up or down for different vehicle types.  Moreover, with this layout, FF can have one, two or three motor setups, making for front-, rear- or all-wheel drive. And, from a safety standpoint, the structure also makes for larger crumple zones. While the variable chassis is all well and good, you won’t spend any time interacting with it, really. You will, however, spend lots of time in the FF cabin. Thankfully, that’s been as well thought out as the platform.

Inside the FFZERO1, just like future FF production cars, the steering column has been fitted with a smartphone. This allows it to become the focal point for the interface between the driver and the car — from sitting behind the wheel or from inside the owner’s home. When commanded by that smartphone, the autonomous FFZERO1 (oh, yeah, it can drive itself, too) can come retrieve the driver.  More on that as we move along.

The driver sits at a perfect 45-degree angle that is most beneficial to circulation, in a seat derived from NASA designs. There, the driver can easily view the propeller-shaped, asymmetric instrument panel. Moreover, in this electric race car, the driver wears a unique Halo Safety System with integrated head and neck support, oxygen and water supply — combined into a prototype helmet.

Rethinking where passengers are placed in a vehicle, since all the power components are beneath the driver rather than in front, Faraday Future designers pushed the driver near the front and shaped around the single seat a “perfectly aerodynamic teardrop profile.” This is accented by FF’s soon-to-be signature ‘UFO line’ that runs around the center of the vehicle. This mystical line is, as FF put it, “intended to give the sense that this vehicle is not completely of this world.”

Combining form and function, FF has created aero tunnels that run through the interior length of the vehicle. These allow air to flow through the car rather than around it. More than accentuating the alien look of the thing, the tunnels also dramatically reduce drag and improve battery cooling. This does away with any need for a bulky, space-stealing radiator.  This is truly an innovative design and one that surely will be copied by other manufacturers.

Amazingly, all of this was pulled together in just 18 months, when a team of multidisciplinary experts from the technology, automotive, aerospace and digital content industries came together to create a new line of electric cars. Apparently working nights and weekends, FF was able to take the all-digital FFZERO1 and turn it into the concept model you see today.

The FFZERO1 unveiling comes after news of FF’s plans to invest $1 billion, reportedly backed by the Chinese, in the creation of a 3 million-square-foot manufacturing facility in North Las Vegas. FF plans to break ground on this phase one investment in the next few weeks, ultimately employing 4,500 people.

Now, if you’re anything like me, you’re already wondering how such a team and design happened to come together so quickly and create something that seems not only promising but also industry-changing. Is Faraday Future the cover for the long-rumored Apple Car set to debut in 2019? I guess we’ll have to wait and see.

SELF-DRIVING:

A very impressive demonstration was the self-parking capability of the vehicle itself.


The company demonstrated a self-parking capability in the lot outside, showing the car searching the aisles for an empty space and then backing into it.

COSTS AND AVAILABILITY:

Faraday plans to release the FF91 in 2018. To pre-order, hopefuls will need to provide a refundable $5,000 (£4,080) deposit.  Prospective buyers were told they would be able to connect to the forthcoming car via a virtual “FFID” account.

“For the car to have a 130-kWh battery pack, it would be very heavy, and very expensive – extremely expensive to have a battery that size.”  On stage, Faraday executive Peter Savagian explained that the FF 91 would be chargeable from various electrical standards. He added its range would extend to 482 miles (775 km) when driven at 55 mph. Many analysts expect interest in electric vehicles to continue to rise in coming years. “We estimate around one in 10 vehicles will be electric or hybrid by 2020, at around 8 million vehicles,” said Simon Bryant of analyst firm Futuresource.  I personally feel this is very optimistic, but time will certainly tell.  I do not plan on owning a driverless vehicle in my lifetime, but who knows.
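Taking the quoted 130-kWh pack and 482-mile range at face value, the implied efficiency is easy to check. This is a rough calculation only; it ignores charging losses and assumes the entire pack is usable.

```python
# Implied efficiency from the quoted figures: a 130 kWh pack and a 482-mile
# (775 km) range at 55 mph. Rough check only.

pack_kwh = 130
range_miles = 482
range_km = 775

print(f"{range_miles / pack_kwh:.1f} miles per kWh")       # ~3.7
print(f"{1000 * pack_kwh / range_miles:.0f} Wh per mile")   # ~270
print(f"{100 * pack_kwh / range_km:.1f} kWh per 100 km")    # ~16.8
```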

As always, I welcome your comments.