I think everyone is probably aware of Isaac Asimov’s “Three Laws of Robotics”.  If not, let me present a refresher as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws are fairly straightforward.  In her book “Trampled by Unicorns”, Maelle Gavet presents what she feels must be the corresponding laws of AI.  If I may, let me quote from that book as follows:

  • AI must not harm a human being, or indeed humanity in general, nor allow a human being to come to harm through inaction.
  • AI must obey orders given by human beings, except where such orders would conflict with Asimov’s First Law.
  • AI should not be trained on human data sets where the human source of that data has not given their explicit consent.  Furthermore, those human sources should be able to opt out at any time, and have all of their data permanently deleted.
  • Creators of AI (companies and individuals) must be held fully accountable for the effects of their technology.  If the AI component of an autonomous vehicle, for example, makes a decision that ends up killing someone, the buck would stop with, say, Tesla, as the OEM—no ifs, buts, or lame excuses. (You cannot or at least should not blame an individual user of equipment for a programming error.)
  • People must be alerted immediately whenever they begin talking to, or otherwise interacting with, AI, where they might reasonably expect to be engaging with other humans. Today’s status quo, in which, say, customer enquiries are handled by bots with human names, is the start of a very slippery slope. (A person must be given the option as to whether or not they might engage when an AI source is announced.)
  • Companies behind AI used to determine access to certain services, such as medical care, school admissions, loans, or mortgage applications, or for criminal justice purposes, such as sentencing, must make their source code openly available upon request.  A bit like Freedom of Information, this would ensure transparency and protect against built-in bias.  There will be exceptions, of course; how the IRS uses AI to identify fraudsters, or how financial services firms use it to spot fraudulent behavior, are examples.  These should be kept to a bare minimum.

CONCLUSIONS:  I’m sure there will be other rules and regulations relative to the use of AI as time goes along, but these represent a good start.  One thing is implied—there will be a great increase in the number of companies using AI in their products and general services.  We have turned the corner, and companies will be seeking possible uses for AI to streamline their product offerings and save money.  The point here is that individuals on the payroll represent a liability to many CEOs and CFOs.  If a computer program can do the job, they will gladly make the change.  It’s just inevitable.

Experts say AI will change one hundred percent (100%) of jobs over the next ten (10) years, but there is a fear that the next generation isn’t prepared for the shift. It’s imperative for teachers to learn how to infuse their content and curriculum with the knowledge, skills, and values driving innovation in AI today so that their students are prepared to be successful in the modern workforce, regardless of their career paths.

According to Jeff Jarvis, director of the Tow-Knight Center for Entrepreneurial Journalism at the City University of New York, “By 2025, artificial intelligence will be built into the algorithmic architecture of countless functions of business and communication, increasing relevance, reducing noise, increasing efficiency, and reducing risk across everything from finding information to making transactions. If robot cars are not yet driving on their own, robotic and intelligent functions will be taking over more of the work of manufacturing and moving.”

Stowe Boyd, lead researcher for GigaOM Research, predicted, “Pizzas will not be delivered by teenagers hoping for a tip. Food will be raised by robotic vehicles, even in small plot urban farms that will become the norm, since so many people will have lost their jobs to ‘bots. Your X-rays will be reviewed by a battery of Watson-grade AIs, and humans will only be pulled in when the machines disagree. Robotic sex partners will be a commonplace, although the source of scorn and division, the way that critics today bemoan selfies as an indicator of all that’s wrong with the world.”

Lillie Coney, a legislative director specializing in technology policy in the U.S. House of Representatives, replied, “It is not the large things that will make AI acceptable it will be the small things—portable devices that can aid a person or organization in accomplishing desired outcomes well. AI embedded into everyday technology that proves to save time, energy, and stress that will push consumer demand for it.”

People who have their ears to the ground all say—AI is coming.  Be ready.

DIGITAL TRANSFORMATION

June 22, 2020


OK, I admit, I generally read an online document by printing it out first.  It’s not the size of my monitor or the font size or the font type.  I suppose I’m really “old-school,” and the feel of a piece of paper in my hand is preferable.  One more thing: I’m always writing in the margins, making notes, checking references, and summarizing, and it helps to have a paper copy.   Important documents are saved to my hard drive AND saved in a hard-copy file. I probably do need a digital transformation.

The June issue of “Control Engineering” published an excellent article on digital transformation with the following definition: “Digital transformation is about transforming and changing the business for the future and creating new and better ways of doing that business.”    In other words, it’s about becoming faster and more efficient, with fewer errors.  Digital transformation creates new capabilities and new processes, reduces capital and operating costs, empowers teams, improves decision making, and creates new and better products and services for customers.   All of this requires communicating effectively, with every individual understanding the vocabulary.  This is where we sometimes get confused.  We say one thing but mean quite another.  I would like now to describe and define several words and phrases used when discussing digital transformation.

  • Artificial Intelligence (AI)—Systems that can analyze great amounts of data and extract trends and knowledge from seemingly incoherent numbers.
  • Industrial Internet of Things (IIoT)—Smart devices, smart machines, and smart sensors only work and make sense when they are connected and can talk to one another.
  • Machine Learning (ML)—Smart machines create and extend their own mathematical models to make decisions, and even predictions, without having to be explicitly programmed; they essentially learn from the past and from the world around them.  (A minimal sketch of this idea follows this list.)
  • Augmented Reality (AR)—Anything and everything in the real world can be enhanced, or augmented by digital transformation. It does not have to be only visual; it can be any or all of the five (5) senses.
  • Virtual Reality (VR)—Virtual reality has been around for some time now in the world of gaming.  It is also being used for simulations, training, and graphic instruction.
  • Digital Twin—Digital twins are connected to their physical counterparts to create cyber-physical systems.  Digital twins get continuous real-time data streams from the physical twin, becoming a digital replica.
  • Digital Thread—A digital thread provides data from start to finish for processes—manufacturing and otherwise.
  • Manufacturing Execution Systems (MES)—Software systems that track, document, and control the execution of manufacturing orders on the factory floor.
  • Radio Frequency Identification (RFID)—A system that interrogates and records data relative to parts, subassemblies, and overall assemblies.
  • Advanced Robotics—Autonomous robotic systems that facilitate manufacturing, parts “picking and placing”, and other operations that can be automated using robotic systems.
  • Collaborative Robotic Systems—Systems that interact with humans to accomplish a specific task.
  • Mobile Internet—Cell phones, iPads, laptops, etc.  Any system that can “travel” with an individual user.
  • 3D Printing—Additive manufacturing that builds a product by adding material layer by layer to form a finished part.
  • Cloud and Edge Computing—On-demand data storage and on-demand computing power from any location.
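
To make the machine-learning entry above concrete, here is a minimal, hypothetical Python sketch: it fits a simple model to made-up historical sensor readings and uses it to predict a future value, which is the essence of learning from past data rather than following explicitly programmed rules.  The numbers and the linear model are illustrative assumptions, not a real dataset or any particular vendor’s method.

```python
# A minimal, hypothetical "learning from the past" example.
# The readings below are made up; the model is a simple least-squares line.
import numpy as np

# Hypothetical history: hours of machine operation vs. measured vibration level
hours = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
vibration = np.array([0.11, 0.13, 0.17, 0.20, 0.24, 0.27])

# "Learn" a model from past data instead of hand-coding rules
slope, intercept = np.polyfit(hours, vibration, deg=1)

# Use what was learned to predict an unseen future condition
predicted = slope * 100.0 + intercept
print(f"Predicted vibration at 100 hours: {predicted:.3f}")
```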

I am sure other words describing technology will emerge from the digital transformation age.  We all need to get used to it because there is absolutely no going back.  Jump in and become familiar with the available technology that can and will transform the manner in which we do business.

CRYSTAL BALL

January 30, 2020


Abraham Lincoln once said, “The best way to predict your future is to create it.”  At first this might seem obvious, but I think it shows remarkable insight.  Engineers and scientists the world over have been doing that for centuries.

Charles H. Duell was the Commissioner of the US Patent Office in 1899.  Mr. Duell’s most famous attributed utterance is “everything that can be invented has been invented.”  The only thing this proves is that P.T. Barnum was correct—there is a fool born every minute.  Mr. Duell just may fit that mold.

The November/December 2019 edition of “Industry Week” provided an article entitled “TOP 10 TECHNOLOGIES TO WATCH IN 2020”.  I personally would say to watch in the decade of the twenties.  Let’s take a look at what their predictions are.  The article was written specifically to address manufacturing in the ’20s, but I feel the items will apply to professions other than manufacturing.  You’re going to like this one.

INDUSTRIAL INTERNET OF THINGS (IIOT): I’ve been writing about this one. The Internet of Things is a system of interrelated computing devices, mechanical and digital machines, objects, animals, or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.  In a data-fueled environment, IIoT provides the means to gather data in near real-time fashion from seamlessly connected devices.  The IoT and the IIoT are happening right now.  It truly is an idea whose time has come.
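
As a rough illustration of that definition, the short Python sketch below packages a sensor reading with a unique device identifier and a timestamp so it could be sent over a network with no human in the loop.  The device, the fields, and the fixed reading are all invented for illustration; a real node would publish this payload to a gateway or broker.

```python
# Hypothetical IIoT sensor node: a unique identifier plus a machine-readable
# payload that can cross a network without human involvement.
import json
import time
import uuid

DEVICE_ID = str(uuid.uuid4())  # unique identifier for this (imaginary) sensor

def read_temperature_c():
    # Stand-in for a real sensor driver; returns a fixed value for illustration
    return 71.4

def build_payload():
    return json.dumps({
        "device_id": DEVICE_ID,
        "timestamp": time.time(),
        "temperature_c": read_temperature_c(),
    })

if __name__ == "__main__":
    # A real node would publish this to a gateway or broker; here we just print it
    print(build_payload())
```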

EDGE COMPUTING:  As production equipment continues to advance, equipment cannot always wait for data to move across the network before taking action.  Edge computing puts vital processing power where it is needed, only transmitting vital information back through the network. In the beginning, there was One Big Computer. Then, in the Unix era, we learned how to connect to that computer using dumb (not a pejorative) terminals. Next, we had personal computers, which was the first time regular people really owned the hardware that did the work.

Right now, in 2020, we’re firmly in the cloud computing era. Many of us still own personal computers, but we mostly use them to access centralized services like Dropbox, Gmail, Office 365, and Slack. Additionally, devices like Amazon Echo, Google Chromecast, and the Apple TV are powered by content and intelligence that’s in the cloud — as opposed to the DVD box set of Little House on the Prairie or CD-ROM copy of Encarta you might’ve enjoyed in the personal computing era.

As centralized as this all sounds, the truly amazing thing about cloud computing is that a seriously large percentage of all companies in the world now rely on the infrastructure, hosting, machine learning, and compute power of a very select few cloud providers: Amazon, Microsoft, Google, and IBM.

The advent of edge computing as a buzzword you should perhaps pay attention to is the realization by these companies that there isn’t much growth left in the cloud space. Almost everything that can be centralized has been centralized. Most of the new opportunities for the “cloud” lie at the “edge.”

So, what is edge?

The word edge in this context means literal geographic distribution. Edge computing is computing that’s done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work. It doesn’t mean the cloud will disappear. It means the cloud is coming to you.
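
A minimal sketch of that idea, assuming a made-up temperature sensor and an invented alarm threshold: the raw readings are processed locally at the edge, and only a small summary plus any out-of-range values would be sent back toward the cloud.

```python
# Edge-style processing sketch: analyze readings locally, transmit only what matters.
from statistics import mean

RAW_READINGS = [70.9, 71.0, 71.2, 98.6, 71.1, 70.8, 102.3, 71.0]  # made-up data
UPPER_LIMIT = 90.0  # invented alarm threshold

def process_at_edge(readings):
    anomalies = [r for r in readings if r > UPPER_LIMIT]
    return {
        "count": len(readings),
        "average": round(mean(readings), 2),
        "anomalies": anomalies,  # only this small summary would cross the network
    }

if __name__ == "__main__":
    print(process_at_edge(RAW_READINGS))
    # {'count': 8, 'average': 78.36, 'anomalies': [98.6, 102.3]}
```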

5G NETWORK:  As manufacturers continue to embrace mobile technology, 5G provides the stability and speed needed to wirelessly process the growing data sets common in today’s production environments.  5G is crucial as manufacturers close the last mile to connect the entire array of devices to the IIOT. 5G components and networks allow wearable technology, hand-held devices, and fast data acquisition on the factory floor or in the retail establishment.

3-D PRINTING:  The rise of the experience economy is ushering in the need for mass customization. The ongoing maturity of 3D printing and additive manufacturing is answering the call with the ability to leverage an ever-growing list of new materials.

WEARABLES: From monitoring employee health to providing augmented training and application assistance, a growing array of wearable form factors represents an intriguing opportunity for manufacturing to put a host of other technologies in action including AI, machine learning, virtual reality and augmented reality.

ARTIFICIAL INTELLIGENCE (AI) AND MACHINE LEARNING (ML):  AI, and more specifically ML, empower manufacturers to benefit from data-based insights specific to their individual operations.  Advancing the evolution from preventive to predictive maintenance is just the beginning.  AI fuels opportunities within generative design, enhanced robotic collaboration, and improved market understanding.

ROBOTICS/AUTOMATION:   It’s not going to stop.  The increasingly collaborative nature of today’s robots is refining how manufacturers maximize automated environments—often leveraging cobots to handle difficult yet repetitive tasks. OK, what is a cobot?  Cobots, or collaborative robots, are robots intended to interact with humans in a shared space or to work safely in close proximity. Cobots stand in contrast to traditional industrial robots, which are designed to work autonomously with safety assured by isolation from human contact.

BLOCKCHAIN:  The manufacturing-centric use cases for blockchain, an inherently secure technology, include auditable supply chain optimization, improved product trust, better maintenance tracking, IIOT device verification, and reduction of systematic failures.  A blockchain, originally block chain, is a growing list of records, called blocks, that are linked using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data. By design, a blockchain is resistant to modification of the data.
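
The structure described above is easy to see in a toy example.  The Python sketch below builds a three-block chain where each block stores a timestamp, some transaction data, and the hash of the previous block; tampering with an earlier block breaks the later links.  The part numbers and quantities are invented, and this is a teaching sketch, not a production ledger.

```python
# Toy blockchain: each block holds a timestamp, data, and the previous block's hash.
import hashlib
import json
import time

def hash_block(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(data, previous_hash):
    return {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}

# Build a tiny three-block chain (contents are invented)
genesis = new_block("genesis", previous_hash="0" * 64)
block_1 = new_block({"part": "A-100", "qty": 12}, previous_hash=hash_block(genesis))
block_2 = new_block({"part": "B-220", "qty": 3}, previous_hash=hash_block(block_1))

# Verify the chain by recomputing each predecessor's hash
assert block_1["previous_hash"] == hash_block(genesis)
assert block_2["previous_hash"] == hash_block(block_1)
print("chain is consistent")

# Tamper with an earlier block and the later link no longer checks out
genesis["data"] = "forged"
print(block_1["previous_hash"] == hash_block(genesis))  # False
```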

QUANTUM COMPUTING:  According to the recent IBM report, “Exploring Quantum Computing Use Cases for Manufacturing”, quantum computing’s entry into the manufacturing realm will allow companies to solve problems impossible to address with conventional computers. Potential benefits include the ability to discover, design, and develop materials with higher strength-to-weight ratios, batteries that offer significantly higher energy densities, as well as more efficient synthetic and catalytic processes that could help with energy generation and carbon capture.

DRONES:  From the ability to make just-in-time component deliveries to potentially fueling AI engines with operational observations, drones represent a significant opportunity to optimize production environments.   It is imperative that legislation be written to give our FAA guidelines relative to drone usage.  Right now, that is not really underway. 

CONCLUSIONS:  Maybe Mr. Duell was incorrect in his pronouncement.  We are definitely not done.

ARTIFICIAL INTELLIGENCE

February 12, 2019


Just what do we know about Artificial Intelligence or AI?  Portions of this post were taken from Forbes Magazine.

John McCarthy first coined the term artificial intelligence in 1956 when he invited a group of researchers from a variety of disciplines including language simulation, neuron nets, complexity theory and more to a summer workshop called the Dartmouth Summer Research Project on Artificial Intelligence to discuss what would ultimately become the field of AI. At that time, the researchers came together to clarify and develop the concepts around “thinking machines” which up to this point had been quite divergent. McCarthy is said to have picked the name artificial intelligence for its neutrality; to avoid highlighting one of the tracks being pursued at the time for the field of “thinking machines” that included cybernetics, automation theory and complex information processing. The proposal for the conference said, “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Today, modern dictionary definitions focus on AI being a sub-field of computer science and how machines can imitate human intelligence (being human-like rather than becoming human). The English Oxford Living Dictionary gives this definition: “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Merriam-Webster defines artificial intelligence this way:

  1. A branch of computer science dealing with the simulation of intelligent behavior in computers.
  2. The capability of a machine to imitate intelligent human behavior.

About thirty (30) years ago, a professor at the Harvard Business School (Dr. Shoshana Zuboff) articulated three laws based on research into the consequences that widespread computing would have on society. Dr. Zuboff had degrees in philosophy and social psychology, so she was definitely ahead of her time relative to the then-unknown field of AI.  In her book “In the Age of the Smart Machine: The Future of Work and Power”, she postulated the following three laws:

  • Everything that can be automated will be automated
  • Everything that can be informated will be informated. (NOTE: Informated was coined by Zuboff to describe the process of turning descriptions and measurements of activities, events and objects into information.)
  • In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

At that time there was definitely a significant lack of computing power.  That ship has sailed; computing power is no longer the great hindrance to AI advancement that it certainly once was.

 

WHERE ARE WE?

In a recent speech, Russian president Vladimir Putin made an incredibly prescient statement: “Artificial intelligence is the future, not only for Russia, but for all of humankind.” He went on to highlight both the risks and rewards of AI and concluded by declaring that whatever country comes to dominate this technology will be the “ruler of the world.”

As someone who closely monitors global events and studies emerging technologies, I think Putin’s lofty rhetoric is entirely appropriate. Funding for global AI startups has grown at a sixty percent (60%) compound annual growth rate since 2010. More significantly, the international community is actively discussing the influence AI will exert over both global cooperation and national strength. In fact, the United Arab Emirates just recently appointed its first state minister responsible for AI.

Automation and digitalization have already had a radical effect on international systems and structures. And considering that this technology is still in its infancy, every new development will only deepen the effects. The question is: Which countries will lead the way, and which ones will follow behind?

If we look at the criteria necessary for advancement, there are seven countries in the best position to rule the world with the help of AI.  These countries are as follows:

  • Russia
  • The United States of America
  • China
  • Japan
  • Estonia
  • Israel
  • Canada

The United States and China are currently in the best position to reap the rewards of AI. These countries have the infrastructure, innovations, and initiative necessary to evolve AI into something with broadly shared benefits. In fact, China expects to dominate AI globally by 2030. The United States could still maintain its lead if it makes AI a top priority and makes the necessary investments, while also pulling together all required government and private sector resources.

Ultimately, however, winning and losing will not be determined by which country gains the most growth through AI. It will be determined by how the entire global community chooses to leverage AI — as a tool of war or as a tool of progress.

Ideally, the country that uses AI to rule the world will do it through leadership and cooperation rather than automated domination.

CONCLUSIONS:  We dare not neglect this disruptive technology.  We cannot afford to lose this battle.


Elon Musk has warned again about the dangers of artificial intelligence, saying that it poses “vastly more risk” than the apparent nuclear capabilities of North Korea do. I feel sure Mr. Musk is talking about the long-term dangers and not short-term realities.

This is not the first time Musk has stated that AI could potentially be one of the most dangerous international developments. He said in October 2014 that he considered it humanity’s “biggest existential threat”, a view he has repeated several times while making investments in AI startups and organizations, including OpenAI, to “keep an eye on what’s going on”.  As he later tweeted: “Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA would not make flying safer. They’re there for good reason.”

Musk again called for regulation, previously doing so directly to US governors at their annual national meeting in Providence, Rhode Island.  Musk’s tweets coincide with the testing of an AI designed by OpenAI to play the multiplayer online battle arena (Moba) game Dota 2, which successfully managed to win all its 1-v-1 games at the International Dota 2 championships against many of the world’s best players competing for a $24.8m (£19m) prize fund.

The AI displayed the ability to predict where human players would deploy forces and improvise on the spot, in a game where sheer speed of operation does not correlate with victory, meaning the AI was simply better, not just faster than the best human players.

Musk backed the non-profit AI research company OpenAI in December 2015, taking up a co-chair position. OpenAI’s goal is to develop AI “in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”. But it is not the first group to take on human players in a gaming scenario. Google’s Deepmind AI outfit, in which Musk was an early investor, beat the world’s best players in the board game Go and has its sights set on conquering the real-time strategy game StarCraft II.

Musk envisions a situation found in the movie “I, Robot”: robots that can think for themselves. Great movie—the story is set in a future Earth (2035 A.D.) where robots are common assistants and workers for their human owners, and it follows “robotophobic” Chicago Police Detective Del Spooner’s investigation into the murder of Dr. Alfred Lanning, who works at U.S. Robotics.  Let me clue you in—the robot did it.

I am sure this audience is familiar with Isaac Asimov’s Three Laws of Robotics.

  • First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov’s three laws suggest there will be no “Rise of the Machines” like the very popular movie depicts.   For the three laws to be null and void, we would have to enter a world of “singularity”.  The term singularity describes the moment when a civilization changes so much that its rules and technologies are incomprehensible to previous generations. Think of it as a point-of-no-return in history. Most thinkers believe the singularity will be jump-started by extremely rapid technological and scientific changes. These changes will be so fast, and so profound, that every aspect of our society will be transformed, from our bodies and families to our governments and economies.

A good way to understand the singularity is to imagine explaining the internet to somebody living in the year 1200. Your frames of reference would be so different that it would be almost impossible to convey how the internet works, let alone what it means to our society. You are on the other side of what seems like a singularity to our person from the Middle Ages. But from the perspective of a future singularity, we are the medieval ones. Advances in science and technology mean that singularities might happen over periods much shorter than 800 years. And nobody knows for sure what the hell they’ll bring.

Author Ken MacLeod has a character describe the singularity as “the Rapture for nerds” in his novel The Cassini Division, and the turn of phrase stuck, becoming a popular way to describe the singularity. (Note: MacLeod didn’t actually coin this phrase – he says he got it from a satirical essay in an early-1990s issue of Extropy.) Catherynne Valente argued recently for an expansion of the term to include what she calls “personal singularities,” moments where a person is altered so much that she becomes unrecognizable to her former self. This definition could include post-human experiences. Post-human (my words) would describe a robotic future.

Could this happen?  Elon Musk has an estimated net worth of $13.2 billion, making him the 87th richest person in the world, according to Forbes. His fortune owes much to his stake in Tesla Motors Inc. (TSLA), of which he remains CEO and chief product architect. Musk made his first fortune as a cofounder of PayPal, the online payments system that was sold to eBay for $1.5 billion in 2002.  In other words, he is no dummy.

I think it is very wise to listen to people like Musk and heed any and all warnings they may give. The Executive, Legislative and Judicial branches of our country are too busy trying to get reelected to bother with such warnings and when “catch-up” is needed, they always go overboard with rules and regulations.  Now is the time to develop proper and binding laws and regulations—when the technology is new.


Portions of the following post were taken from an article by Rob Spiegel published through Design News Daily.

Two former Apple design engineers, Anna Katrina Shedletsky and Samuel Weiss, have leveraged machine learning to help brand owners improve their manufacturing lines. The company, Instrumental, uses artificial intelligence (AI) to identify and fix problems with the goal of helping clients ship on time. The AI system consists of camera-equipped inspection stations that allow brand owners to remotely manage product lines at their contract manufacturing facilities with the purpose of maximizing up-time, quality, and speed.

Shedletsky and Weiss took what they learned from years of working with Apple contract manufacturers and put it into AI software.

“The experience with Apple opened our eyes to what was possible. We wanted to build artificial intelligence for manufacturing. The technology had been proven in other industries and could be applied to the manufacturing industry.  It’s part of the evolution of what is happening in manufacturing. The product we offer today solves a very specific need, but it also works toward overall intelligence in manufacturing.”

Shedletsky spent six (6) years working at Apple prior to founding Instrumental with fellow Apple alum Weiss, who serves as Instrumental’s CTO (Chief Technical Officer).  The two took their experience in solving manufacturing problems and created the AI fix. “After spending hundreds of days at manufacturers responsible for millions of Apple products, we gained a deep understanding of the inefficiencies in the new-product development process,” said Shedletsky. “There’s no going back; robotics and automation have already changed manufacturing. Intelligence like the kind we are building will change it again. We can radically improve how companies make products.”

There are a number of examples of big and small companies with problems that prevent them from shipping products on time. Delays are expensive and can cause the loss of a sale. One day of delay at a start-up could cost $10,000 in sales. For a large company, the cost could be millions. “There are hundreds of issues that need to be found and solved. They are difficult and they have to be solved one at a time,” said Shedletsky. “You can get on a plane, go to a factory and look at failure analysis so you can see why you have problems. Or, you can reduce the amount of time needed to identify and fix the problems by analyzing them remotely, using a combo of hardware and software.”

Instrumental combines hardware and software that takes images of each unit at key states of assembly on the line. The system then makes those images remotely searchable and comparable in order for the brand owner to learn and react to assembly line data. Engineers can then take action on issues. “The station goes onto the assembly line in China,” said Shedletsky. “We get the data into the cloud to discover issues the contract manufacturer doesn’t know they have. With the data, you can do failure analysis and reduce the time it takes to find an issue and correct it.”
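
To illustrate the general idea of comparing assembly images against a known-good reference, here is a heavily simplified, hypothetical sketch.  It is not Instrumental’s actual algorithm; it simply flags a unit whose image differs too much from a reference, using arrays as stand-ins for photos and an invented threshold.

```python
# Hypothetical illustration only: flag a unit whose assembly image differs too
# much from a known-good reference. Arrays stand in for photos; the threshold
# is invented. This is NOT Instrumental's actual algorithm.
import numpy as np

THRESHOLD = 10.0  # invented mean-absolute-difference limit

def flag_unit(reference_image, unit_image, threshold=THRESHOLD):
    diff = np.abs(reference_image.astype(float) - unit_image.astype(float))
    score = float(diff.mean())
    return score > threshold, score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.integers(0, 255, size=(64, 64), dtype=np.uint8)

    good_unit = reference.copy()          # matches the reference image
    bad_unit = reference.copy()
    bad_unit[20:40, 20:40] = 0            # simulated missing component

    print(flag_unit(reference, good_unit))  # (False, 0.0)
    print(flag_unit(reference, bad_unit))   # (True, <some larger score>)
```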

WHAT IS AI:

Artificial intelligence (AI) is intelligence exhibited by machines.  In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.   Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.
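
That definition of an intelligent agent can be shown with a deliberately tiny example: something that perceives its environment and chooses the action expected to move it toward a goal.  The thermostat scenario, setpoint, and step sizes below are invented for illustration.

```python
# Minimal "intelligent agent" loop: perceive the environment, pick the action
# expected to serve the goal, act. The thermostat goal and numbers are invented.
SETPOINT = 21.0  # hypothetical target temperature, degrees C

def perceive(environment):
    return environment["temperature"]

def decide(temperature):
    # Choose the action most likely to move toward the goal
    if temperature < SETPOINT - 0.5:
        return "heat"
    if temperature > SETPOINT + 0.5:
        return "cool"
    return "idle"

def act(environment, action):
    if action == "heat":
        environment["temperature"] += 0.3
    elif action == "cool":
        environment["temperature"] -= 0.3

if __name__ == "__main__":
    room = {"temperature": 18.0}
    for _ in range(15):
        act(room, decide(perceive(room)))
    print(round(room["temperature"], 1))  # 20.7, inside the band around the setpoint
```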

As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of “artificial intelligence”, having become a routine technology.  Capabilities currently classified as AI include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

FUTURE:

Some would have you believe that AI IS the future and we will succumb to the “Rise of the Machines”.  I’m not so melodramatic.  I feel AI has progressed, and will continue to progress, to the point where great time savings and reductions in labor may be realized.   Anna Katrina Shedletsky and Samuel Weiss realize the potential and feel there will be no going back from this disruptive technology.   Moving AI to the factory floor will produce great benefits to manufacturing and other commercial enterprises.   There is also a significant possibility that job creation will occur as a result.  All is not doom and gloom.