DIGITAL TRANSFORMATION

June 22, 2020


OK, I admit it: I generally read an online document by printing it out first.  It’s not the size of my monitor or the font size or the font type.  I suppose I’m really “old-school,” and the feel of a piece of paper in my hand is preferable.  One more thing: I’m always writing in the margins, making notes, checking references, and summarizing, and it helps to have a paper copy.   Important documents are saved to my hard drive AND saved in a hard-copy file. I probably do need a digital transformation.

The June issue of “Control Engineering” published an excellent article on digital transformation with the following definition: “Digital transformation is about transforming and changing the business for the future and creating new and better ways of doing that business.”    In other words, it is about becoming more efficient and faster while making fewer errors.  Digital transformation creates new capabilities and new processes, reduces capital and operating costs, empowers teams, improves decision making, and creates new and better products and services for customers.   All of this depends on communicating effectively, with every individual understanding the vocabulary.  This is where we sometimes get confused: we say one thing but mean quite another.  I would now like to define several words and phrases used when discussing digital transformation.

  • Artificial Intelligence (AI)—Systems that can analyze great amounts of data and extract trends and knowledge from seemingly incoherent numbers.
  • Industrial Internet of Things (IIoT)—Smart devices, smart machines, and smart sensors only work and make sense when they are connected and can talk to one another.
  • Machine Learning (ML)—Smart machines create and extend their own mathematical models to make decisions, and even predictions, without having to be programmed; they essentially learn from the past and from the world around them.  (A minimal worked sketch follows this list.)
  • Augmented Reality (AR)—Anything and everything in the real world can be enhanced, or augmented, with digital information. It does not have to be only visual; it can involve any or all of the five (5) senses.
  • Virtual Reality (VR)—Virtual reality has been around for some time in the world of gaming.  It is also being used to create simulations, deliver training, and provide instruction in a graphic manner.
  • Digital Twin—Digital twins are connected to their physical counterparts to create cyber-physical systems.  Digital twins get continuous real-time data streams from the physical twin, becoming a digital replica.
  • Digital Thread—A digital thread provides data from start to finish for processes—manufacturing and otherwise.
  • Manufacturing Execution Systems (MES)—Software systems that track, document, and control the execution of manufacturing orders on the plant floor.
  • Radio Frequency Identification (RFID)—A system that uses radio-frequency tags to interrogate and record data relative to parts, subassemblies, and overall assemblies.
  • Advanced Robotics—Autonomous robotic systems that facilitate manufacturing, parts “picking and placing”, and other operations that can be automated using robotic systems.
  • Collaborative Robotic Systems—Systems that interact with humans to accomplish a specific task.
  • Mobile Internet—Cell phones, iPads, laptops, etc.  Any system that can “travel” with an individual user.
  • 3D Printing—Additive manufacturing that builds a product by adding material layer by layer to form a finished part.
  • Cloud and Edge Computing—On-demand data storage and on-demand computing power from any location.
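
Since these definitions can feel abstract, here is a minimal worked sketch of the machine-learning idea, in plain Python with made-up sensor readings: it fits a least-squares line to noisy data and extrapolates the trend, which is the simplest possible case of extracting knowledge from seemingly incoherent numbers.

    # Minimal illustration of "extracting a trend from numbers" (hypothetical data).
    readings = [(0, 2.1), (1, 2.9), (2, 4.2), (3, 4.8), (4, 6.1)]  # (hour, reading)

    n = len(readings)
    sx = sum(x for x, _ in readings)
    sy = sum(y for _, y in readings)
    sxx = sum(x * x for x, _ in readings)
    sxy = sum(x * y for x, y in readings)

    # Least-squares slope and intercept of the best-fit line.
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n

    print(f"trend: {slope:.2f} units per hour")                    # the extracted "knowledge"
    print(f"predicted reading at hour 6: {intercept + slope * 6:.2f}")

Real ML systems fit far richer models to far more data, but the principle is the same: the model is derived from the data rather than programmed by hand.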

I am sure other words describing technology will result from the digital transformation age.  We all need to get used to it because there is absolutely no going back.  Jump in and become familiar with available technology that can and will transform the manner in which we do business.


First, let us define a collaborative robot or cobot:  “Cobots, or collaborative robots, are robots intended to interact with humans in a shared space or to work safely in close proximity.  Cobots stand in contrast to traditional industrial robots which are designed to work autonomously with safety assured by isolation from human contact.   Cobot safety may rely on lightweight construction materials, rounded edges, and limits on speed or force. Safety may also require sensors and software to assure good collaborative behavior.”

A picture is probably worth a thousand words so take a look.

You will notice the lady above is “collaborating” with the robotic system.  Together they are performing an assembly operation.


The robotic system shown above is drilling a hole in flat metal material while the worker watches.  The drill pattern has been previously chosen and programmed into the computer driving the system.

HISTORY:

The first definition of a cobot comes from a 1999 US patent filing for “an apparatus and method for direct physical integration between a person and a general-purpose manipulator controlled by a computer.”   This description basically refers to what we would now call an Intelligent Assist Device, or IAD. The IAD is the ancestor of modern cobots and resulted from the efforts of General Motors to implement robotics in the automotive sector of our economy.   This new device could move in a non-caged environment to help human workers in assembly operations.  For safety reasons, it had no internal source of motion power.  Please note the “non-caged” description: most non-cobot robotic systems are surrounded by safety barriers to protect employees, while cobots generally are not.

In 2004, robotics developer KUKA released the LBR3, a lightweight cobot with motion of its own.  It was the result of a long collaboration between the company and the German Aerospace Center (DLR).  Its motion-control capabilities were later refined in two updated versions released in 2008 and 2013.

In 2008, Universal Robots released the UR5, a cobot that could safely operate alongside employees, eliminating the need for safety caging or fencing.  The robot helped launch the era of flexible, user-friendly, and very cost-effective collaborative robots.  These gave small-to-medium manufacturers the possibility of automating their facilities without investing in cost-prohibitive technology or a complete make-over of their manufacturing capability.

As with all revolutionary technology, cobots were initially met with significant skepticism by the manufacturing industry.  Many facility managers saw them as technological marvels but questioned the possibility of integrating them into actual working environments. Today, however, the market for industrial cobots has an annual growth rate of fifty percent (50%), and it is estimated that it will hit three billion USD ($3.00 billion) in global revenue by the end of 2020.

There are limitations at the present time relative to applying cobots to manufacturing processes. The most important ones are the need for fine dexterity (for example, when picking up small and delicate pieces) and the ability to make decisions rapidly to avoid obstacles without stopping production.   Some of these issues are being overcome by integrating vision systems that allow the cobot to adapt to environmental changes.  These include obstacles of various natures, variation in the position of the objects the cobot is supposed to pick up, and the locations where they must be dropped off.   This new technology not only eliminates the need for precise positioning but allows manufacturers to finally combine safety and maximum productivity.  The increased sensitivity will allow several cobots to work together independently, performing different tasks without colliding.  A rough sketch of the positioning idea follows.
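
As a rough, hypothetical illustration of that vision-guided adaptation (not any particular vendor’s API), the sketch below shows the arithmetic involved: the camera reports where the part actually is, and the controller shifts the taught pick point by the measured offset instead of requiring precise fixturing. The calibration constant and coordinates are invented for the example.

    # Hypothetical vision-guided pick correction (illustrative only).
    MM_PER_PIXEL = 0.25             # assumed camera calibration
    NOMINAL_PICK = (150.0, 80.0)    # taught pick point in mm (robot frame)
    EXPECTED_CENTROID = (320, 240)  # pixel location of a perfectly placed part

    def corrected_pick(measured_centroid):
        # Shift the taught point by the part's measured offset.
        dx_px = measured_centroid[0] - EXPECTED_CENTROID[0]
        dy_px = measured_centroid[1] - EXPECTED_CENTROID[1]
        return (NOMINAL_PICK[0] + dx_px * MM_PER_PIXEL,
                NOMINAL_PICK[1] + dy_px * MM_PER_PIXEL)

    # The part arrived 12 pixels right and 8 pixels low of its expected spot:
    print(corrected_pick((332, 248)))   # -> (153.0, 82.0)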


The magazine “Foundry Management & Technology” is used as a source for this post.

If you follow the literature at all, you know that robotic systems have gained significant usage in manufacturing methodologies.  Now, when I say robotic systems, I mean a system of the type shown below.

This is a “pick-and-place” or SCARA (Selective Compliance Articulated Robot Arm) type system.  We are definitely not talking about the one shown below.

Humanoid robotic systems are well into the future.  We are talking about robotic systems used strictly in manufacturing work cells.

From experience, the cost of deploying a robotic system can go well beyond the price tag of the robot itself.  You have direct installation costs, cost for electrical and pneumatic inputs, cost for tooling, jigs, fixtures, grippers, welding rigs, costs for engineering and robotic maintenance, insurance, etc.  All of these costs MUST be factored in to discover, or at least estimate, the overall cost of operating a system. 

A report by the Boston Consulting Group suggests that in order to arrive at a solid cost estimate for robotic systems, customers should multiply the machine’s cost by a minimum of three.  In other words, if a six-axis robot costs $65,000.00, customers should budget $195,000.00 for the entire investment. This is a great “rule-of-thumb” that should represent a starting point. Due to the varying nature of manufacturing facilities, estimated costs fluctuate dramatically according to the specific industrial sector and size of the operation.  Please keep in mind that these costs are not always linear in nature and may vary during the machinery lifecycle.

Let’s look at an example. A manufacturer plans to use two SCARA robots to automate a pick-and-place process.  The robots will operate three shifts daily, six days per week, forty-eight (48) weeks per year.  Equivalent labor would require two operators per shift, equating to six (6) operators generating the same throughput over the same period of time.  Now, using the lowest average salary of a U.S. production employee, we would have to pay approximately $25,000.00 per employee per year or approximately $150,000.00 per year.  When employing robotic systems, human labor is not completely eliminated. A good rule-of-thumb for labor estimation alongside a robotic system is twenty-five percent (25%) of existing labor costs.  This would reduce the human labor to $37,500.00 per year—a great savings producing an acceptable ROI. This estimating method does NOT account for down time of equipment for maintenance and/or parts replacement.  That must be factored into the mix as well.  There will also be some expense for training personnel to monitor and use the equipment.  This involves training to set up the systems and initiate the manufacturing process. 
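
The arithmetic above is simple enough to capture in a few lines of Python. This sketch just restates the example’s rule-of-thumb numbers (the 3x budget multiplier, six displaced operators at $25,000, and 25% residual labor), using the six-axis price from the earlier paragraph as a stand-in for the SCARA units; remember it still omits downtime, maintenance, and training.

    # Rough robot-cell payback estimate using the rules of thumb above.
    robot_price = 65_000                 # earlier six-axis figure, used as a stand-in
    investment = robot_price * 3 * 2     # BCG ~3x rule, two robots

    operators = 6                        # 2 per shift x 3 shifts
    salary = 25_000                      # lowest average U.S. production salary
    labor_before = operators * salary    # $150,000 per year
    labor_after = labor_before * 0.25    # ~25% of prior labor remains

    annual_savings = labor_before - labor_after
    print(f"investment:           ${investment:,}")         # $390,000
    print(f"annual labor savings: ${annual_savings:,.0f}")  # $112,500
    print(f"simple payback:       {investment / annual_savings:.1f} years")  # ~3.5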

Robotic systems are predictable.  They can eliminate human error.  They do not take lunch breaks and if maintained properly can provide years of usable production. The payback is there and if a suitable vendor is chosen, a great marriage will occur.  Vendor support when operating a robotic system is an absolute must—a must.

CRYSTAL BALL

January 30, 2020


Abraham Lincoln once said, “The best way to predict your future is to create it.”  At first this might seem obvious, but I think it shows remarkable insight.  Engineers and scientists the world over have been doing that for centuries.

Charles H. Duell was the Commissioner of the US Patent Office in 1899.  Mr. Duell’s most famous attributed utterance is “everything that can be invented has been invented.”  The only thing this proves: P.T. Barnum was correct—there is a fool born every minute.  Mr. Duell just may fit that mold.

The November/December 2019 edition of “Industry Week” provided an article entitled “TOP 10 TECHNOLOGIES TO WATCH IN 2020”.  I personally would say in the decade of the twenties.  Let’s take a look at their predictions.  The article was written specifically to address manufacturing in the decade of the ’20s, but I feel the items will apply to professions other than manufacturing.  You’re going to like this one.

INDUSTRIAL INTERNET OF THINGS (IIOT): I’ve been writing about this one. The Internet of Things is a system of interrelated computing devices, mechanical and digital machines, objects, animals, or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.  In a data-fueled environment, the IIoT provides the means to gather data in near real time from seamlessly connected devices.  The IoT and the IIoT are happening right now.  It truly is an idea whose time has come.

EDGE COMPUTING:  As production equipment continues to advance, it cannot always wait for data to move across the network before taking action.  Edge computing puts vital processing power where it is needed, only transmitting vital information back through the network. In the beginning, there was One Big Computer. Then, in the Unix era, we learned how to connect to that computer using dumb (not a pejorative) terminals. Next, we had personal computers, which was the first time regular people really owned the hardware that did the work.

Right now, in 2020, we’re firmly in the cloud computing era. Many of us still own personal computers, but we mostly use them to access centralized services like Dropbox, Gmail, Office 365, and Slack. Additionally, devices like Amazon Echo, Google Chromecast, and the Apple TV are powered by content and intelligence that’s in the cloud — as opposed to the DVD box set of Little House on the Prairie or CD-ROM copy of Encarta you might’ve enjoyed in the personal computing era.

As centralized as this all sounds, the truly amazing thing about cloud computing is that a seriously large percentage of all companies in the world now rely on the infrastructure, hosting, machine learning, and compute power of a very select few cloud providers: Amazon, Microsoft, Google, and IBM.

The advent of edge computing as a buzzword you should perhaps pay attention to is the realization by these companies that there isn’t much growth left in the cloud space. Almost everything that can be centralized has been centralized. Most of the new opportunities for the “cloud” lie at the “edge.”

So, what is edge?

The word edge in this context means literal geographic distribution. Edge computing is computing that’s done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work. It doesn’t mean the cloud will disappear. It means the cloud is coming to you.
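
A toy sketch may make “computing at the source of the data” concrete. Suppose a vibration sensor samples far faster than the network should carry; the hypothetical edge node below summarizes each batch locally and transmits only the summary (send_to_cloud is a stand-in for whatever uplink is actually used).

    import random
    import statistics

    def read_sensor():
        # Stand-in for a high-rate vibration sensor on the machine.
        return random.gauss(0.0, 1.0)

    def send_to_cloud(summary):
        # Placeholder for the real uplink (MQTT, HTTPS, etc.).
        print("uplink:", summary)

    # Edge node: reduce 1,000 raw samples to one small record.
    samples = [read_sensor() for _ in range(1000)]
    peak = max(abs(s) for s in samples)
    summary = {
        "mean": round(statistics.fmean(samples), 4),
        "peak": round(peak, 4),
        "alarm": peak > 4.0,   # the decision is made locally, no round trip
    }
    send_to_cloud(summary)     # 1,000 readings, one message over the network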

5G NETWORK:  As manufacturers continue to embrace mobile technology, 5G provides the stability and speed needed to wirelessly process the growing data sets common in today’s production environments.  5G is crucial as manufacturers close the last mile to connect the entire array of devices to the IIoT. 5G components and networks allow wearable technology, hand-held devices, and fast data acquisition on the factory floor or in the retail establishment.

3-D PRINTING:  The rise of the experience economy is ushering in the need for mass customization. The ongoing maturity of 3D printing and additive manufacturing is answering the call with the ability to leverage an ever-growing list of new materials.

WEARABLES: From monitoring employee health to providing augmented training and application assistance, a growing array of wearable form factors represents an intriguing opportunity for manufacturing to put a host of other technologies in action including AI, machine learning, virtual reality and augmented reality.

ARTIFICIAL INTELLIGENCE (AI) AND MACHINE LEARNING (ML):  AI, and more specifically ML, empower manufacturers to benefit from data-based insights specific to their individual operations.  Advancing the evolution from preventive to predictive maintenance is just the beginning.  AI fuels opportunities within generative design, enhanced robotic collaboration, and improved market understanding.

ROBOTICS/AUTOMATION:   It’s not going to stop.  The increasingly collaborative nature of today’s robots is refining how manufacturers maximize automated environments—often leveraging cobots to handle difficult yet repetitive tasks. OK, what is a cobot?  Cobots, or collaborative robots, are robots intended to interact with humans in a shared space or to work safely in close proximity. Cobots stand in contrast to traditional industrial robots, which are designed to work autonomously with safety assured by isolation from human contact.

BLOCKCHAIN:  The manufacturing-centric use cases for blockchain, an inherently secure technology, include auditable supply chain optimization, improved product trust, better maintenance tracking, IIoT device verification, and reduction of systematic failures.  A blockchain, originally “block chain,” is a growing list of records, called blocks, that are linked using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data. By design, a blockchain is resistant to modification of the data.
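
That definition translates almost line for line into code. The minimal sketch below (illustrative, not a real ledger) shows why a blockchain resists modification: each block stores the hash of the previous block, so altering an old record breaks every link that follows.

    import hashlib
    import json
    import time

    def make_block(prev_hash, transactions):
        block = {
            "timestamp": time.time(),
            "transactions": transactions,
            "prev_hash": prev_hash,    # the cryptographic link to the prior block
        }
        # The block's identity is the hash of its entire contents.
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    chain = [make_block("0" * 64, ["genesis"])]
    chain.append(make_block(chain[-1]["hash"], ["lot 42 shipped to plant B"]))
    chain.append(make_block(chain[-1]["hash"], ["lot 42 received, QA passed"]))

    # Tampering with an early block invalidates every later link:
    chain[1]["transactions"] = ["lot 42 diverted"]
    recomputed = hashlib.sha256(json.dumps(
        {k: chain[1][k] for k in ("timestamp", "transactions", "prev_hash")},
        sort_keys=True).encode()).hexdigest()
    print("chain broken:", recomputed != chain[2]["prev_hash"])   # True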

QUANTUM COMPUTING:  According to the recent IBM report, “Exploring Quantum Computing Use Cases for Manufacturing”, quantum computing’s entry into the manufacturing realm will allow companies to solve problems impossible to address with conventional computers. Potential benefits include the ability to discover, design, and develop materials with better strength-to-weight ratios, batteries that offer significantly higher energy densities, and more efficient synthetic and catalytic processes that could help with energy generation and carbon capture.

DRONES:  From the ability to make just-in-time component deliveries to potentially fueling AI engines with operational observations, drones represent a significant opportunity to optimize production environments.   It is imperative that legislation be written to give our FAA guidelines relative to drone usage.  Right now, that is not really underway. 

CONCLUSIONS:  Maybe Mr. Duell was incorrect in his pronouncement.  We are definitely not done.


Elon Musk has warned again about the dangers of artificial intelligence, saying that it poses “vastly more risk” than the apparent nuclear capabilities of North Korea. I feel sure Mr. Musk is talking about the long-term dangers and not short-term realities.   Mr. Musk is shown in the digital picture below.

This is not the first time Musk has stated that AI could potentially be one of the most dangerous international developments. He said in October 2014 that he considered it humanity’s “biggest existential threat”, a view he has repeated several times while making investments in AI startups and organizations, including OpenAI, to “keep an eye on what’s going on”.  “Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA would not make flying safer. They’re there for good reason.”

Musk again called for regulation, previously doing so directly to US governors at their annual national meeting in Providence, Rhode Island.  Musk’s tweets coincide with the testing of an AI designed by OpenAI to play the multiplayer online battle arena (Moba) game Dota 2, which successfully managed to win all its 1-v-1 games at the International Dota 2 championships against many of the world’s best players competing for a $24.8m (£19m) prize fund.

The AI displayed the ability to predict where human players would deploy forces and improvise on the spot, in a game where sheer speed of operation does not correlate with victory, meaning the AI was simply better, not just faster than the best human players.

Musk backed the non-profit AI research company OpenAI in December 2015, taking up a co-chair position. OpenAI’s goal is to develop AI “in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”. But it is not the first group to take on human players in a gaming scenario. Google’s DeepMind AI outfit, in which Musk was an early investor, beat the world’s best players in the board game Go and has its sights set on conquering the real-time strategy game StarCraft II.

Musk envisions a situation found in the movie “I, Robot”, with humanoid robotic systems shown below. Robots that can think for themselves. It is a great movie. The time frame is a future Earth (2035 A.D.) where robots are common assistants and workers for their human owners, and it tells the story of “robotophobic” Chicago police detective Del Spooner’s investigation into the murder of Dr. Alfred Lanning, who works at U.S. Robotics.  Let me clue you in—the robot did it.

I am sure this audience is familiar with Isaac Asimov’s Three Laws of Robotics.

  • First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov’s three laws suggest there will be no “Rise of the Machines” like the very popular movie depicts.   For the three laws to be null and void, we would have to enter a world of “singularity”.  The term singularity describes the moment when a civilization changes so much that its rules and technologies are incomprehensible to previous generations. Think of it as a point-of-no-return in history. Most thinkers believe the singularity will be jump-started by extremely rapid technological and scientific changes. These changes will be so fast, and so profound, that every aspect of our society will be transformed, from our bodies and families to our governments and economies.

A good way to understand the singularity is to imagine explaining the internet to somebody living in the year 1200. Your frames of reference would be so different that it would be almost impossible to convey how the internet works, let alone what it means to our society. You are on the other side of what seems like a singularity to our person from the Middle Ages. But from the perspective of a future singularity, we are the medieval ones. Advances in science and technology mean that singularities might happen over periods much shorter than 800 years. And nobody knows for sure what the hell they’ll bring.

Author Ken MacLeod has a character describe the singularity as “the Rapture for nerds” in his novel The Cassini Division, and the turn of phrase stuck, becoming a popular way to describe the singularity. (Note: MacLeod didn’t actually coin this phrase – he says he got the phrase from a satirical essay in an early-1990s issue of Extropy.) Catherynne Valente argued recently for an expansion of the term to include what she calls “personal singularities,” moments where a person is altered so much that she becomes unrecognizable to her former self. This definition could include post-human experiences. Post-human (my words) would describe robotic future.

Could this happen?  Elon Musk has an estimated net worth of $13.2 billion, making him the 87th richest person in the world, according to Forbes. His fortune owes much to his stake in Tesla Motors Inc. (TSLA), of which he remains CEO and chief product architect. Musk made his first fortune as a cofounder of PayPal, the online payments system that was sold to eBay for $1.5 billion in 2002.  In other words, he is no dummy.

I think it is very wise to listen to people like Musk and heed any and all warnings they may give. The Executive, Legislative, and Judicial branches of our country are too busy trying to get reelected to bother with such warnings, and when “catch-up” is needed, they always go overboard with rules and regulations.  Now is the time to develop proper and binding laws and regulations—when the technology is new.

ROBONAUTS

September 4, 2016


OK, if you are like me, you’re sitting there asking yourself just what on Earth is a robonaut?  A robot is an electromechanical device used primarily to take the labor, and sometimes the danger, out of human activity.  As you well know, robotic systems have been in use for many years, with each year providing systems of increasing sophistication.  An astronaut is an individual operating in outer space.  Let’s take a proper definition of ROBONAUT as provided by NASA.

“A Robonaut is a dexterous humanoid robot built and designed at NASA Johnson Space Center in Houston, Texas. Our challenge is to build machines that can help humans work and explore in space. Working side by side with humans, or going where the risks are too great for people, Robonauts will expand our ability for construction and discovery. Central to that effort is a capability we call dexterous manipulation, embodied by an ability to use one’s hand to do work, and our challenge has been to build machines with dexterity that exceeds that of a suited astronaut.”

My information is derived from “NASA Tech Briefs”, Vol 40, No 7, July 2016 publication.

If you had your own personal robotic system, what would you ask that system to do?  Several options surface in my world as follows: 1.) Mow the lawn, 2.) Trim hedges, 3.) Wash my cars, 4.) Clean the gutters, 5.) Vacuum floors in our house, 6.) Wash windows, and 7.) Do the laundry.   (As you can see, I’m not really into yard work or even house work.)  Just about all of the tasks I do on a regular basis are home-grown, outdoor jobs and time-consuming.

For NASA, the International Space Station (ISS) has become a marvelous test-bed for developing the world’s most advanced robotic technology—technology that definitely represents the cutting edge in space exploration and ground research.  The ISS now hosts a significant array of state-of-the-art robotic projects including human-scale dexterous robots and free-flying robots.  (NOTE:  The Astrobee free-flyer robotic system developed for NASA consists of structure, propulsion, power, guidance, navigation and control (GN&C), command and data handling (C&DH), avionics, communications, dock mechanism, and perching arm subsystems. The Astrobee element is designed to be self-contained and capable of autonomous localization, orientation, navigation, and holonomic motion as well as autonomous resupply of consumables while operating inside the USOS.)  These robotic systems are not only enabling the future of human-robot space exploration but promising extraordinary benefits for Earth-bound applications.

The initial purpose for exploring the design and fabrication of a humanoid robotic system was to assist astronauts in completing tasks in which an additional pair or pairs of hands would be very helpful, or to perform jobs either too hazardous or too mundane for crewmembers.  Robonaut 2 was NASA’s first humanoid robot in space and was selected as the NASA Government Invention of the Year for 2014. Many outstanding inventions were considered for this award, but Robonaut 2 was chosen after a challenging review by the NASA selection committee that evaluated the robot in the following areas: 1.) Aerospace Significance, 2.) Industry Significance, 3.) Humanitarian Significance, 4.) Technology Readiness Level, 5.) NASA Use, and 6.) Industry Use and Creativity. Robonaut 2 technologies have resulted in thirty-nine (39) issued patents, with several more under review. The NASA Invention of the Year is a first for a humanoid robot and another in a series of firsts for Robonaut 2 that include: first robot inside a human space vehicle operating without a cage, and first robot to work with human-rated tools in space.  The R2 system developed by NASA is shown in the following JPEGs:

R2 Robotic System

R2 Robotic System(2)

R2 Robotic System(3)

R2 powered up for the first time in August 2011. Since that time, robotics engineers have tested R2 on the ISS, completing tasks ranging from velocity air measurements to handrail cleaning—simple but necessary tasks that require a great deal of crew time.   R2 also has the on-board tasks of flipping switches and pushing buttons, each time controlled by space station crew members through the use of virtual reality gear. According to Steve Gaddis, “we are currently working on teaching him how to look for handrails and avoid obstacles.”

The Robonaut project has been conducting research in robotics technology on board the International Space Station (ISS) since 2012.  Recently, the original upper body humanoid robot was upgraded by the addition of two climbing manipulators (“legs”), more capable processors, and new sensors. While Robonaut 2 (R2) has been working through checkout exercises on orbit following the upgrade, technology development on the ground has continued to advance. Through the Active Reduced Gravity Offload System (ARGOS), the Robonaut team has been able to develop technologies that will enable full operation of the robotic testbed on orbit using similar robots located at the Johnson Space Center. Once these technologies have been vetted in this way, they will be implemented and tested on the R2 unit on board the ISS. The goal of this work is to create a fully-featured robotics research platform on board the ISS to increase the technology readiness level of technologies that will aid in future exploration missions.

One advantage of a humanoid design is that Robonaut can take over simple, repetitive, or especially dangerous tasks on places such as the International Space Station. Because R2 is approaching human dexterity, tasks such as changing out an air filter can be performed without modifications to the existing design.

More and more we are seeing robotic systems do the work of humans.  It is just a matter of time before we see their usage here on terra firma.  I mean human-type robotic systems used to serve man.  Let’s just hope we do not evolve into the “age of the machines”.  I think I may take another look at the movie Terminator.

ENCODERS

May 21, 2016


Once a month a group of guys and I get together for lunch.  Great friends needing to solve the world’s problems.  (Here lately, it’s taken much longer than the one and one-half hours we spend during our meeting.)  One of our friends, call him Joe, just underwent surgery for prostate cancer.  This procedure is called a prostatectomy and is done every day.  His description of the “event” was fascinating.  To begin with, the surgeon was about twenty (20) feet from the operating table. Yes, that’s correct; the entire surgery was accomplished via robotic systems. OK, why is this procedure more desirable than the “standard” procedure?   The robotic-assisted approach is less invasive, reduces bleeding, and offers large 3-D views of the operating field. The mechanical arms of the robotic system are controlled by the surgeon and provide greater precision than the human hand.  This allows the surgeon more control when separating nerves and muscles from the prostate. It benefits patients by lowering the risk of side effects, such as erectile dysfunction and incontinence, while also completely removing cancerous tissue.  The equipment looks very similar, if not identical, to the one given in the JPEG below.  Let’s take a look.

Prostate Surgery and Robotic Systems

As you can see, the electromechanical devices are remarkably sophisticated and represent significant advantages in medical technology.  The equipment you are seeing above is called the “patient side cart”. It looks as follows:

Surgical Side Cart

During a robotic prostatectomy, the patient side cart is positioned next to the operating table.  The system you see above is a da Vinci robotic arm arranged to provide entry points into the human body and prostate.  EndoWrist instruments, and the da Vinci InSite Vision System, are mounted onto the robot’s electromechanical arms, representing the surgeon’s left and right hands. They provide the functionality to perform complex tissue manipulation through the entry points, or ports.  EndoWrist instruments include forceps, scissors, electrocautery, scalpels, and other surgical tools. If the surgeon needs to change an EndoWrist instrument, common during robotic prostatectomy, the instrument is withdrawn from the surgical system using controls at the console. Typically, an operating room nurse standing near the patient physically removes the EndoWrist instruments and replaces them with new instruments.

There are certainly other types of surgery performed today using robotic systems.

One electromechanical device that helps to make this remarkable procedure possible is called an encoder.  Let’s define an encoder.

An encoder is a sensor of mechanical motion that generates digital signals in response to motion. As an electro-mechanical device, an encoder is able to provide motion control system users with information concerning position, velocity and direction. There are two different types of encoders: linear and rotary. A linear encoder responds to motion along a path, while a rotary encoder responds to rotational motion. An encoder is generally categorized by the means of its output. An incremental encoder generates a train of pulses which can be used to determine position and speed. An absolute encoder generates unique bit configurations to track positions directly.

As you might expect, knowing the exact position of a medical device used during surgery is absolutely critical to the outcome.  The surgeon MUST know the angular position of the device at all times to ensure no errors are made.  Nerves, tendons and muscles must be left intact.  This information is provided by encoders and encoder data systems.

ENCODER TYPES:

Linear and rotary encoders are broken down into two main types: the absolute encoder and the incremental encoder. The construction of these two types of encoders is quite similar; however, they differ in physical properties and the interpretation of movement.

Incremental rotary encoders utilize a transparent disk containing equally spaced opaque sections to determine movement. Light from a light-emitting diode passes through the glass disk and is detected by a photodetector. This causes the encoder to generate a train of equally spaced pulses as it rotates. The output of an incremental rotary encoder is measured in pulses per revolution, which is used to keep track of position or determine speed.  This type of encoder is used in the medical system given above.
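
To make “pulses per revolution” concrete, position and speed fall out of simple arithmetic on the pulse count. A minimal sketch, assuming a 1,000-PPR disk:

    PPR = 1000   # pulses per revolution of the code disk (assumed)

    def angle_degrees(pulse_count):
        # Each pulse is one fixed angular step: 360/1000 = 0.36 degrees here.
        return (pulse_count % PPR) * 360.0 / PPR

    def speed_rpm(pulses_in_window, window_seconds):
        # Pulses counted over a time window give rotational speed.
        revolutions = pulses_in_window / PPR
        return revolutions / window_seconds * 60.0

    print(angle_degrees(250))      # 90.0 -> a quarter turn
    print(speed_rpm(5000, 2.0))    # 150.0 RPM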

Absolute encoders utilize a stationary mask between the photodetector and the encoder disk, as shown below. The output signal generated from an absolute encoder is in digital bits that correspond to a unique position. The bit configuration is produced by the light received by the photodetector as the disk rotates. The light configuration received is translated into Gray code. As a result, each position has its own unique bit configuration.
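
Gray code is used on the disk because only one bit changes between adjacent positions, so a reading taken mid-transition can never be off by more than one count. Converting it back to an ordinary binary position is a standard bit manipulation, sketched here for an assumed 8-bit (256-position) disk:

    def binary_to_gray(n):
        # Encode: each Gray bit is the XOR of adjacent binary bits.
        return n ^ (n >> 1)

    def gray_to_binary(gray):
        # Decode: cascade XORs down the bit positions to undo the encoding.
        binary = gray
        gray >>= 1
        while gray:
            binary ^= gray
            gray >>= 1
        return binary

    # Adjacent disk positions differ by exactly one bit:
    for position in range(4):
        print(position, format(binary_to_gray(position), "03b"))
    # 0 000 / 1 001 / 2 011 / 3 010

    # The round trip holds for every position on an 8-bit disk:
    assert all(gray_to_binary(binary_to_gray(n)) == n for n in range(256))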

Typical construction for a rotary encoder is given as follows:

Rotary Encoders

Please note the following features:

  • Electrical connection to the right of the encoder body.
  • Encoder shaft that couples to the medical device.
  • Electrical specifications indicating the device is driven by a five (5) volt +/- 5% source.

Encoder Specifics

You can see from the above illustrated parts breakdown that a rotary encoder is quite technical in design.

SYSTEM ACCURACY:

System accuracy is critical, especially during surgery. Let’s look.

An encoder’s performance is typically stated as resolution, rather than accuracy of measurement. The encoder may be able to resolve movement into precise bits very accurately, but the accuracy of each bit is limited by the quality of the machine motion being monitored. For example, if there are deflections of machine elements under load, or if there is a drive screw with 0.1 inch of play, using a 1000 count-per-turn encoder with an output reading to 0.001 inch will not improve the 0.1 inch tolerance on the measurement. The encoder only reports position; it cannot improve on the basic accuracy of the shaft motion from which the position is sensed.  The lesson for a surgical device is that the encoder’s fine resolution only translates into accuracy if the mechanics it monitors are equally precise; overall system accuracy is set by the weakest link in the chain.
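
The distinction is easy to see numerically. Using the figures from the paragraph above, plus an assumed screw lead, the encoder resolves far finer than the mechanical play, so the play dominates what the system can actually guarantee:

    counts_per_turn = 1000
    screw_lead_in = 0.5        # assumed: table advance per screw turn, inches
    mechanical_play_in = 0.1   # drive-screw play, from the example above

    # One encoder count corresponds to this much linear travel:
    resolution_in = screw_lead_in / counts_per_turn   # 0.0005 in per count

    print(f"encoder resolution: {resolution_in:.4f} in/count")
    print(f"mechanical play:    {mechanical_play_in:.4f} in")

    # Worst-case position error is set by the sloppier of the two:
    print(f"achievable accuracy: ~{max(resolution_in, mechanical_play_in):.4f} in")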

CONCLUSIONS: 

TECHNOLOGY DELIVERS.  Our lives are much better served by advancing technology, and certainly by technology applied to the medical profession. This is the reason engineers and technologists endure the rigor necessary to develop talents that ultimately will be directed toward solving problems and advancing the technology you have seen in the post above.

As always, I welcome your comments.  bobjengr@comcast.net
