AUGMENTED REALITY (AR)

October 13, 2017


Ask just about anybody to give a definition of Virtual Reality (VR) and they will take a stab at it. This is because the gaming and entertainment segments of our population have used VR as a new tool to promote games such as SuperHot VR, Rock Band VR, House of the Dying Sun, Minecraft VR, Robo Recall, and others. If you ask them about Augmented Reality (AR), however, they will probably give you the definition of VR or nothing at all.

Augmented reality, sometimes called Mixed Reality, is a technology that merges real-world objects or the environment with virtual elements generated by sensory input devices for sound, video, graphics, or GPS data. Unlike VR, which completely replaces the real world with a virtual one, AR operates in real time and is interactive with objects found in the environment, overlaying a virtual display on the real one.

While popularized by gaming, AR technology has shown real promise in bringing an interactive digital world into a person’s perceived real world, where the digital layer can reveal more information about a real-world object the person is viewing. We are going to take a look at several very real applications of AR to indicate the possibilities of this technology.

  • Augmented Reality has found a home in healthcare, aiding preventative measures by letting professionals receive information on the status of patients. Healthcare giant Cigna recently launched a program called BioBall that uses Microsoft HoloLens technology in an interactive game to test for blood pressure and body mass index (BMI). Patients hold a light, medium-sized ball in their hands in a one-minute race to capture all the images that flash on the screen in front of them. The BioBall senses a player’s heartbeat. At the University of Maryland’s Augmentarium virtual and augmented reality laboratory, the school is using AR in healthcare to improve how ultrasound is administered to a patient. Physicians wearing an AR device can look at both the patient and the ultrasound device while images flash on the “hood” of the AR device itself.
  • AR is opening up new methods to teach young children a variety of subjects they might not otherwise be interested in learning or, in some cases, to help those who have trouble in class catch up with their peers. The University of Helsinki’s AR program helps struggling kids learn science by enabling them to virtually interact with molecule movement in gases, gravity, sound waves, and airplane wind physics. AR creates new types of learning possibilities by transporting “old knowledge” into a new format.
  • Projection-based AR is emerging as a new way to cast virtual elements into the real world without the use of bulky headgear or glasses. That is why AR is becoming a very popular alternative for use in the office or during meetings. Startups such as Lampix and Lightform are working on projection-based augmented reality for use in the boardroom, retail displays, hospitality rooms, digital signage, and other applications.
  • In Germany, a company called FleetBoard is developing application software that tracks logistics for truck drivers to help with the long series of pre-departure checks before setting off cross-country or on local deliveries. The FleetBoard Vehicle Lens app uses a smartphone and live image recognition to identify the truck’s number plate. The relevant information is superimposed in AR, thus speeding up the pre-departure process.
  • Last winter, Delft University of Technology in the Netherlands began working with first responders on using AR as a tool in crime scene investigation. The handheld AR system allows on-scene investigators and remote forensic teams to minimize the potential for site contamination. This could be extremely helpful in finding traces of DNA, preserving evidence, and getting medical help from an outside source.
  • Sandia National Laboratories is working with AR as a tool to improve security training for users who are protecting vulnerable areas such as nuclear weapons or nuclear materials. The physical security training helps guide users through real-world examples such as theft or sabotage in order to be better prepared when an event takes place.  The training can be accomplished remotely and cheaply using standalone AR headsets.
  • In Finland, the VTT Technical Research Center recently developed an AR tool for the European Space Agency (ESA) for astronauts to perform real-time equipment monitoring in space. AR prepares astronauts with in-depth practice by coordinating the activities with experts in a mixed-reality situation.
  • U.S.-based Daqri International uses computer vision for industrial AR to enable data visualization while working on machinery or in a warehouse. Daqri’s glasses and headsets display project data, tasks that need to be completed, and potential problems with machinery, or even where an object needs to be placed or repaired.

CONCLUSIONS:

Augmented Reality merges real-world objects with virtual elements generated by sensory input devices to provide great advantages to the user. No longer are gaming and entertainment the sole objectives of its use. This brings to life a “new normal” for professionals seeking more and better technology to provide solutions to real-world problems.


AN AVERAGE DAY FOR DATA

August 4, 2017


I am sure you have heard the phrase “big data” and possibly wondered just what that terminology relates to.  Let’s get the “official” definition, as follows:

The amount of data that’s being created and stored on a global level is almost inconceivable, and it just keeps growing. That means there’s even more potential to glean key insights from business information – yet only a small percentage of data is actually analyzed. What does that mean for businesses? How can they make better use of the raw information that flows into their organizations every day?

The concept gained momentum in the early 2000s when industry analyst Doug Laney articulated the now-mainstream definition of big data as the four Vs (volume, velocity, variety, and variability) plus complexity:

  • Volume. Organizations collect data from a variety of sources, including business transactions, social media, and information from sensor or machine-to-machine data. In the past, storing it would have been a problem, but new technologies (such as Hadoop) have eased the burden.
  • Velocity. Data streams in at an unprecedented speed and must be dealt with in a timely manner. RFID tags, sensors, and smart metering are driving the need to deal with torrents of data in near-real time.
  • Variety. Data comes in all types of formats, from structured, numeric data in traditional databases to unstructured text documents, email, video, audio, stock ticker data, and financial transactions.
  • Variability. In addition to the increasing velocities and varieties of data, data flows can be highly inconsistent, with periodic peaks. Is something trending in social media? Daily, seasonal, and event-triggered peak data loads can be challenging to manage, even more so with unstructured data.
  • Complexity. Today’s data comes from multiple sources, which makes it difficult to link, match, cleanse, and transform data across systems. However, it is necessary to connect and correlate relationships, hierarchies, and multiple data linkages, or your data can quickly spiral out of control.

AN AVERAGE DAY IN THE LIFE OF BIG DATA:

A picture is worth a thousand words, but let us now quantify, on a daily basis, what we mean by big data.

  • YouTube’s viewers are watching a billion (1,000,000,000) hours of videos each day.
  • We perform over forty thousand (40,000) searches per second on Google alone. That is approximately three and one-half (3.5) billion searches per day and roughly one point two (1.2) trillion searches per year, world-wide.
  • Five years ago, IBM estimated that two point five (2.5) exabytes (2.5 billion gigabytes) of data were generated every day. The number has grown since then.
  • The number of e-mails sent per day is around 269 billion. That is roughly ninety-eight (98) trillion e-mails per year. Globally, the data stored in data centers will quintuple by 2020 to reach 915 exabytes. This is up 5.3-fold, with a compound annual growth rate (CAGR) of forty percent (40%), from 171 exabytes in 2015.
  • On average, an autonomous car will churn out 4 TB of data per day when factoring in cameras, radar, sonar, GPS, and LIDAR, and that figure assumes just one hour of driving per day. Every autonomous car will generate the data equivalent of almost 3,000 people.
  • By 2024, mobile networks will see machine-to-machine (M2M) connections jump roughly ten-fold to 2.3 billion from 250 million in 2014, according to Machina Research.
  • The data collected by BMW’s current fleet of 40 prototype autonomous cars during a single test session would fill a stack of CDs 60 miles high.
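The per-day and per-year figures quoted above are easy to sanity-check with a few lines of arithmetic (a rough check; rounding explains the small gap between the computed ~920 EB and the quoted 915 EB):

```python
# Sanity-check the daily/yearly big data figures quoted above.

SECONDS_PER_DAY = 24 * 60 * 60                       # 86,400

# Google searches: 40,000 per second -> per day and per year
searches_per_day = 40_000 * SECONDS_PER_DAY          # 3.456 billion
searches_per_year = searches_per_day * 365           # ~1.26 trillion

print(f"{searches_per_day / 1e9:.2f} billion searches/day")
print(f"{searches_per_year / 1e12:.2f} trillion searches/year")

# Data-center storage: 171 EB in 2015 growing at 40% CAGR over 5 years
projected_2020 = 171 * 1.40 ** 5                     # ~920 EB, quoted as 915
print(f"{projected_2020:.0f} EB projected for 2020")
```

The 5.3-fold growth quoted for data-center storage is simply 1.40 raised to the fifth power.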

We have become a world that lives “by the numbers,” and I am not sure that is altogether a bad thing. At no time in our history have we had access to data that informs, misinforms, directs, and challenges us as we do now. How we use that data makes all the difference in our daily lives. I have a great friend named Joe McGuinness. His favorite expression: “It’s about time we learn to separate the fly s____t from the pepper.” If we apply this phrase to big data, he may just be correct. Be careful out there.

DIGITAL READINESS GAPS

April 23, 2017


This post uses as one reference the “Digital Readiness Gaps” report by the Pew Research Center. This report explores, as we will now, the attitudes and behaviors that underpin individual preparedness and comfort in using digital tools for learning.

HOW DO ADULTS LEARN?  Good question. I suppose there are many ways, but I can certainly tell you that adults my age, over seventy, learn in a manner much different from my grandchildren, under twenty. I think of “book learning” first and digital as a backup. They head straight for their iPad or iPhone. GOOGLE is a verb and not a company name as far as they are concerned. (I am actually getting there with digital search methods and now start with GOOGLE, but I reference multiple sources before being satisfied with only one. For some reason, I still trust books as opposed to digital.)

According to Malcolm Knowles, a pioneer in adult learning, there are six (6) main characteristics of adult learners, as follows:

  • Adult learning is self-directed/autonomous
    Adult learners are actively involved in the learning process such that they make choices relevant to their learning objectives.
  • Adult learning utilizes knowledge & life experiences
    Under this approach educators encourage learners to connect their past experiences with their current knowledge-base and activities.
  • Adult learning is goal-oriented
    The motivation to learn is increased when the relevance of the “lesson” through real-life situations is clear, particularly in relation to the specific concerns of the learner.
  • Adult learning is relevancy-oriented
    One of the best ways for adults to learn is by relating the assigned tasks to their own learning goals. If it is clear that the activities they are engaged in directly contribute to achieving their personal learning objectives, then they will be inspired and motivated to engage in projects and successfully complete them.
  • Adult learning highlights practicality
    Placement is a means of helping students to apply the theoretical concepts learned inside the classroom into real-life situations.
  • Adult learning encourages collaboration
    Adult learners thrive in collaborative relationships with their educators. When learners are considered by their instructors as colleagues, they become more productive. When their contributions are acknowledged, then they are willing to put out their best work.

One very important note: these six characteristics encompass the “digital world” as well as conventional methods, e.g., books, magazines, newspapers, etc.

As mentioned above, a recent Pew Research Center report shows that adoption of technology for adult learning in both personal and job-related activities varies by people’s socio-economic status, their race and ethnicity, and their level of access to home broadband and smartphones. Another report showed that some users are unable to make the internet and mobile devices function adequately for key activities such as looking for jobs.

Specifically, the Pew report made their assessment relative to American adults according to five main factors:

  • Their confidence in using computers
  • Their facility with getting new technology to work
  • Their use of digital tools for learning
  • Their ability to determine the trustworthiness of online information
  • Their familiarity with contemporary “education tech” terms

It is important to note that the report addresses only the adult proclivity for digital learning, not learning by any other means; that is, just the availability of digital devices to facilitate learning. If we look at the “conglomerate” from the PIAA Fact Sheet, we see the following:

The Pew analysis details several distinct groups of Americans who fall along a spectrum of digital readiness from relatively more prepared to relatively hesitant. Those who tend to be hesitant about embracing technology in learning are below average on the measures of readiness, such as needing help with new electronic gadgets or having difficulty determining whether online information is trustworthy. Those whose profiles indicate a higher level of preparedness for using tech in learning are collectively above average on measures of digital readiness.  The chart below will indicate their classifications.

The breakdown is as follows:

Relatively Hesitant – 52% of adults in three distinct groups. This overall cohort is made up of three different clusters of people who are less likely to use digital tools in their learning. This has to do, in part, with the fact that these groups have generally lower levels of involvement with personal learning activities. It is also tied to their professed lower level of digital skills and trust in the online environment.

  • A group of 14% of adults make up The Unprepared. This group has both low levels of digital skills and limited trust in online information. The Unprepared rank at the bottom of those who use the internet to pursue learning, and they are the least digitally ready of all the groups.
  • We call one small group Traditional Learners, and they make up 5% of Americans. They are active learners, but use traditional means to pursue their interests. They are less likely to fully engage with digital tools, because they have concerns about the trustworthiness of online information.
  • A larger group, The Reluctant, make up 33% of all adults. They have higher levels of digital skills than The Unprepared, but very low levels of awareness of new “education tech” concepts and relatively lower levels of performing personal learning activities of any kind. This is correlated with their general lack of use of the internet in learning.

Relatively more prepared – 48% of adults in two distinct groups. This cohort is made up of two groups who are above average in their likeliness to use online tools for learning.

  • A group we call Cautious Clickers comprises 31% of adults. They have tech resources at their disposal, trust and confidence in using the internet, and the educational underpinnings to put digital resources to use for their learning pursuits. But they have not waded into e-learning to the extent the Digitally Ready have and are not as likely to have used the internet for some or all of their learning.
  • Finally, there are the Digitally Ready. They make up 17% of adults, and they are active learners and confident in their ability to use digital tools to pursue learning. They are aware of the latest “ed tech” tools and are, relative to others, more likely to use them in the course of their personal learning. The Digitally Ready, in other words, have high demand for learning and use a range of tools to pursue it – including, to an extent significantly greater than the rest of the population, digital outlets such as online courses or extensive online research.
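As a quick arithmetic check, the five group shares quoted above do partition the adult population exactly:

```python
# The five Pew readiness groups and their quoted shares of U.S. adults.
hesitant = {"Unprepared": 14, "Traditional Learners": 5, "Reluctant": 33}
prepared = {"Cautious Clickers": 31, "Digitally Ready": 17}

# The two cohort totals quoted in the report, and a full partition of 100%.
assert sum(hesitant.values()) == 52   # "Relatively Hesitant" cohort
assert sum(prepared.values()) == 48   # "Relatively more prepared" cohort
assert sum(hesitant.values()) + sum(prepared.values()) == 100
```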

CONCLUSIONS:

To me, one of the greatest lessons from my university days was this: NEVER STOP LEARNING. I had one professor, Dr. Bob Maxwell, who told us the half-life of a graduate engineer is approximately five (5) years. If you stop learning, your knowledge will be obsolete in five years. At the pace of technology today, that may be five months. You never stop learning AND you embrace existing technology. In other words: do digital. Digital is your friend. GOOGLE, no matter how flawed, can give you answers much quicker than other sources, and it is readily available and just plain handy. At least start there; then trust, but verify.

THE NEXT FIVE (5) YEARS

February 15, 2017


As you well know, there are many projections relative to economies, stock markets, sports teams, entertainment, politics, technology, etc. People the world over have given their projections for what might happen in 2017. The world of computing technology is absolutely no different. Certain information for this post is taken from the “COMPUTER.org/computer” web site. These folks are pretty good at projections and have been correct multiple times over the past two decades. They take their information from the IEEE.

The IEEE Computer Society is the world’s leading membership organization dedicated to computer science and technology. Serving more than 60,000 members, the IEEE Computer Society is the trusted information, networking, and career-development source for a global community of technology leaders that includes researchers, educators, software engineers, IT professionals, employers, and students.  In addition to conferences and publishing, the IEEE Computer Society is a leader in professional education and training, and has forged development and provider partnerships with major institutions and corporations internationally. These rich, self-selected, and self-paced programs help companies improve the quality of their technical staff and attract top talent while reducing costs.

With these credentials, you might expect them to be on the cutting edge of computer technology and development and be ahead of the curve as far as computer technology projections.  Let’s take a look.  Some of this absolutely blows me away.

HUMAN-BRAIN INTERFACE:

This effort first started within the medical profession and continues as research progresses. It has taken time, but after more than a decade of engineering work, researchers at Brown University and a Utah company, Blackrock Microsystems, have commercialized a wireless device that can be attached to a person’s skull and transmit, via radio, thought commands collected from a brain implant. Blackrock says it will seek clearance for the system from the U.S. Food and Drug Administration, so that the mental remote control can be tested in volunteers, possibly as soon as this year.

The device was developed by a consortium, called BrainGate, which is based at Brown and was among the first to place implants in the brains of paralyzed people and show that electrical signals emitted by neurons inside the cortex could be recorded, then used to steer a wheelchair or direct a robotic arm (see “Implanting Hope”).

A major limit to these provocative experiments has been that patients can only use the prosthetic with the help of a crew of laboratory assistants. The brain signals are collected through a cable screwed into a port on their skull, then fed along wires to a bulky rack of signal processors. “Using this in the home setting is inconceivable or impractical when you are tethered to a bunch of electronics,” says Arto Nurmikko, the Brown professor of engineering who led the design and fabrication of the wireless system.

CAPABILITIES-HARDWARE PROJECTION:

Unless you have been living in a tree house for the last twenty years, you know digital security is a huge problem. IT professionals and companies writing code will definitely continue working on how to make our digital world more secure. That is a given.

EXASCALE:

We can forget Moore’s law as originally stated. The law refers to an observation made by Intel co-founder Gordon Moore in 1965: he noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention, and he predicted that the trend would continue into the foreseeable future. Although the pace has slowed, the number of transistors per square inch has since doubled approximately every 18 months, and this is used as the current definition of Moore’s law. We are well beyond that, with processing speed progressing at “warp six”.
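A quick way to see what an 18-month doubling period implies is to put the rule into a formula (a simple illustration, not a claim about any particular chip):

```python
# Moore's law as a growth formula: a doubling every 18 months means
# count(t) = count_0 * 2 ** (months / 18).

def moores_law(count_0: float, months: int, doubling_months: int = 18) -> float:
    """Projected transistor count after `months`, doubling every `doubling_months`."""
    return count_0 * 2 ** (months / doubling_months)

# Over one decade (120 months), density grows by 2 ** (120 / 18), roughly 100x.
growth = moores_law(1.0, 120)
print(f"about {growth:.0f}x in 10 years")
```

Compounding is the whole story here: a fixed doubling interval turns a decade into two orders of magnitude.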

NON-VOLATILE MEMORY:

If you are an old guy like me, you can remember when computer memory cost an arm and a leg. Take a look at the JPEG below and you will get an idea of how memory costs have decreased over the years.

[Chart: hard-drive cost per gigabyte over time]

As you can see, costs have dropped remarkably over the years.

PHOTONICS:

[Image: photonics projection text]

POWER-CONSERVATIVE MULTICORES:

[Image: power-conservative multicores projection text]

CONCLUSION:

If you combine the above predictions with 1.) Big Data, 2.) the Internet of Things (IoT), 3.) Wearable Technology, 4.) Manufacturing 4.0, 5.) Biometrics, and other fast-moving technologies, you have a world in which “only the adventurous thrive”. If you do not like change, I recommend you enroll in a monastery. You will not survive gracefully with technology on the rampage. Just a thought.


FACTS:

  • 707,758 motor vehicles were reported stolen in the United States in 2015, up three point one (3.1) percent from 2014, according to the FBI.
  • A motor vehicle was stolen in the United States every forty-five (45) seconds in 2015.
  • Eight of the top ten cities with the highest rate of vehicle theft in 2015 were in California, according to the National Insurance Crime Bureau.
  • Nationwide, the 2015 motor vehicle theft rate per 100,000 people was 220.2, up two point two (2.2) percent from 2014. The highest rate was reported in the West: 371.5, up eight point two (8.2) percent from 342.2 in 2014.
  • In 2015, only thirteen point one (13.1) percent of motor vehicle thefts were cleared, either by arrest or by exceptional means, compared with nineteen point four (19.4) percent for all property crimes. Very disappointing statistics indeed.
  • Autos accounted for 74.7 percent of all motor vehicles stolen in 2015, trucks and buses accounted for 14.8 percent and other vehicles for 10.5 percent.
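The “one theft every forty-five seconds” claim follows directly from the annual total, as a two-line check shows:

```python
# Check the "a motor vehicle was stolen every 45 seconds" claim against
# the FBI's 2015 annual total quoted above.
thefts_2015 = 707_758
seconds_per_year = 365 * 24 * 60 * 60        # 31,536,000

interval = seconds_per_year / thefts_2015    # ~44.6 seconds per theft
print(f"one theft every {interval:.1f} seconds")
```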

Given below are the cities in which most vehicles are stolen:

[Table: top ten cities for stolen vehicles]

TOP TEN VEHICLES STOLEN:

The National Insurance Crime Bureau ranked the 10 most stolen vehicles in the country with data from the NCIC. Let’s take a look.  The actual numbers are in parentheses.

  1. Honda Accord (52,244)
  2. Honda Civic (49,430)
  3. Ford pickup (full size) (29,396)
  4. Chevrolet pickup (full size) (27,771)
  5. Toyota Camry (15,466)
  6. Ram pickup (full size) (11,212)
  7. Toyota Corolla (10,547)
  8. Nissan Altima (10,374)
  9. Dodge Caravan (9,798)
  10. Chevrolet Impala (9,225)

Automotive engineers continue to examine smartphone systems and design to provide models for the development of an increasingly sophisticated user experience, with large center information displays and capacitive touchscreens being good examples. Now designers are adding another smartphone feature, the fingerprint sensor, to modernize the driver’s interface to functions in and beyond the automobile. This and other forms of biometric authentication show great promise if implemented with sensitivity to user privacy and the extremes of the automotive operating environment.

BIOMETRICS:

Just what is the science of Biometrics?

Biometrics may be a fairly new term to some individuals so it is entirely appropriate at this time to define the technology.  This will lay the groundwork for the discussion to follow.  According to the International Biometric Society:

“Biometrics is used to refer to the emerging field of technology devoted to identification of individuals using biological traits, such as those based on retinal or iris scanning, fingerprints, or face recognition.”

The terms “Biometrics” and “Biometry” have been used since early in the 20th century to refer to the field of development of statistical and mathematical methods applicable to data analysis problems in the biological sciences.

From the Free Dictionary, we see the following definition:

  • The statistical study of biological phenomena.
  • The measurement of physical characteristics, such as fingerprints, DNA, or retinal patterns for use in verifying the identity of individuals.
  • Biometrics refers to metrics related to human characteristics. Biometric authentication (or realistic authentication) is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance.

Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. They are often categorized as physiological versus behavioral characteristics. Physiological characteristics are related to the shape of the body; examples include, but are not limited to, fingerprints, palm veins, and odor/scent. Behavioral characteristics are related to the pattern of behavior of a person, including but not limited to typing rhythm, gait, and voice. Some researchers have coined the term behaviometrics to describe the latter class of biometrics.

More traditional means of access control include token-based identification systems, such as a driver’s license or passport, and knowledge-based identification systems, such as a password or personal identification number.  Since biometric identifiers are unique to individuals, they are more reliable in verifying identity than token and knowledge-based methods; however, the collection of biometric identifiers raises privacy concerns about the ultimate use of this information.

The oldest biometric identifier is facial recognition. The dimensions, proportions, and physical attributes of a person’s face are unique, and we recognize them very early in life: a child will (obviously) recognize a parent, a brother, or a sister. It is only since the advent of computers and accompanying software that the ability to quantify facial features has become possible.

The FBI has long been a leader in biometrics and has used various forms of biometric identification since its very earliest days. The Bureau assumed responsibility for managing the national fingerprint collection in 1924. As you know, fingerprints vary from person to person (even identical twins have different prints) and do not change over time. As a result, they are an effective way of identifying fugitives and helping to prove both guilt and innocence.

AUTOMOTIVE BIOMETRICS USING FINGERPRINT TECHNOLOGY:

What areas of a typical vehicle might benefit from specifically identifying a human being and matching that person to a particular car? Several possibilities come to mind:

  • Secure access
  • Ignition permission
  • Seat reservations
  • On-board communication systems
  • Anti-theft programs
  • Driving license suspension programs

All of these would ensure privacy and access. The two digital photographs below will serve to indicate how this methodology might work for an automobile.

[Photo: starting the car with a fingerprint reader]

The fingerprint reader can be located in the steering wheel so the driver can keep better focus on the road. This is definitely desirable if biometric fingerprints are used for purposes other than starting the vehicle.

[Photo: starting the car with a fingerprint reader, second view]

With this in mind, there are three mainstream fingerprint-sensing technologies available for automotive applications. These are as follows:

  • Capacitive Sensing—This is used in the world’s best-selling smartphones due to its very small size: a sensing pad a few tens of microns thick and a small controller allow for very low power consumption.
  • Optical Fingerprint Sensing—Optical sensors are highly reliable and accurate, and so are widely used at border crossings. However, the sensors require a backlight to illuminate the finger, and they are still bulky compared to capacitive solutions.
  • Ultrasonic Sensing—This offers reliable detection of fingerprints in 3D, but it has not found its way into mainstream mobile devices and is relatively expensive.
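However the print is captured, the ignition-permission logic described above amounts to comparing a captured template against enrolled drivers with a tuned threshold. Here is a minimal sketch; all names are hypothetical, and the byte-matching "matcher" is a stand-in for the minutiae-based matching a real system would use:

```python
# Sketch (hypothetical) of gating ignition on a fingerprint match.
# Real matchers compare extracted minutiae features against a tuned
# similarity threshold, never exact template equality.

MATCH_THRESHOLD = 0.90

def similarity(template_a: bytes, template_b: bytes) -> float:
    """Stand-in matcher: fraction of positions with matching bytes."""
    matches = sum(a == b for a, b in zip(template_a, template_b))
    return matches / max(len(template_a), len(template_b))

def ignition_permitted(captured: bytes, enrolled: list[bytes]) -> bool:
    """Allow ignition only if the captured print matches an enrolled driver."""
    return any(similarity(captured, t) >= MATCH_THRESHOLD for t in enrolled)

# Example: an enrolled driver's template versus an unknown finger
driver = bytes(range(32))
stranger = bytes(reversed(range(32)))
print(ignition_permitted(driver, [driver]))     # True
print(ignition_permitted(stranger, [driver]))   # False
```

A threshold, rather than exact equality, is essential: no two captures of the same finger produce identical raw data.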

CONCLUSIONS:

I believe biometrics will play a much bigger role in the automotive industry over the next few years.  Biometric fingerprinting could be used in a host of areas including:

  • Access to cabin compartment
  • Starting
  • Accessing cellphone communications
  • Allowing application software on a cellphone to start the car remotely, so warming up the vehicle in very cold climates could be made possible.

Now, here is the downside. Someone has to be capable of troubleshooting a failed device and fixing it if difficulties arise. As complexity grows, we move more toward replacing than fixing, and replacement is costly.

As always, I welcome your comments.

DIALYSIS PUMPS

February 8, 2017


I entered the university shortly after Sir Isaac Newton and Gottfried Leibniz invented calculus. (OK, I am not quite that old, but you get the picture.) At any rate, I have been a mechanical engineer for a lengthy period of time. If I had to do it all over again, I would choose biomedical engineering instead of mechanical engineering. Biomedical really fascinates me. The medical “hardware” and software available today are absolutely marvelous. As with most great technologies, the progress has been evolutionary instead of revolutionary. One such evolution has been the development of the insulin pump to facilitate administering insulin to patients suffering from diabetes.

On my way to exercise on Monday, Wednesday, and Friday, I pass three dialysis clinics. I am amazed that on some days the parking lots are not only full, but cars are parked on the roads on either side of the buildings. Almost always, I see at least one ambulance parked in front of a clinic, having delivered a patient to the facility. In Chattanooga proper there are nine (9) clinics, and there are approximately 3,306 dialysis centers in the United States. These centers employ 127,671 individuals and bring in twenty-two billion dollars ($22B) in revenue, with a four point four percent (4.4%) annual growth rate. Truly, diabetes has reached epidemic proportions in our country.

Diabetes is not only one of the most common chronic diseases; it is also complex and difficult to treat. Insulin is often administered between meals to keep blood sugar within a target range, with doses determined by the number of carbohydrates ingested. Four hundred (400) million adults worldwide suffer from diabetes, with one and one-half (1.5) million deaths on an annual basis. It is no wonder that so many scientists, inventors, and pharmaceutical and medical device companies are turning their attention to improving insulin delivery devices. There are today several delivery options, as follows:

  • Syringes
  • Pens
  • Insulin Injection Aids
  • Inhaled Insulin Devices
  • External Pumps
  • Implantable Pumps

Insulin pumps, especially the newer devices, have several advantages over traditional injection methods.  These advantages make using pumps a preferable treatment option.  In addition to eliminating the need for injections at work, at the gym, in restaurants and other settings, the pumps are highly adjustable thus allowing the patient to make precise changes based on exercise levels and types of food being consumed.

These delivery devices require: 1.) an insulin cartridge, 2.) a battery-operated pump, and 3.) computer chips that allow the patient to control the dosage.  A more detailed list of components is given below.  Most modern devices have a display window or graphical user interface (GUI) and selection keys to facilitate making changes and administering insulin.  A typical pump is shown below:

[Image: a typical insulin pump]

Generally, insulin pumps consist of a reservoir, a microcontroller with battery, flexible catheter tubing, and a subcutaneous needle. When the first insulin pumps were created in the 1970s and 1980s, they were quite bulky (think 1980s cell phone). In contrast, most pumps today are a little smaller than a pager. The controller and reservoir are usually housed together. Patients often wear the pump on a belt clip or place it in a pocket, as shown below. A basic interface lets the patient adjust the rate of insulin or select a pre-set. The insulins used are rapid-acting, and the reservoir typically holds 200-300 units of insulin. The catheter is similar to most IV tubing (often smaller in diameter) and connects directly to the needle. Patients insert the needle into the abdominal wall, although the upper arm or thigh can be used. The needle infusion set can be attached with any number of adhesives, but tape can do in a pinch. The needle needs to be re-sited every 2-3 days.

[Image: insulin pump worn on a belt clip]

As you can see from the image above, the device itself can be clipped onto clothing and worn during the day for continuous use.
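Since the reservoir typically holds 200-300 units, a quick back-of-the-envelope calculation shows how long one fill lasts. This is a minimal sketch with hypothetical usage figures (1 U/hr basal, 20 U of boluses per day); actual usage is patient-specific.

```python
# Rough estimate of how long a pump reservoir lasts. The usage figures
# below are hypothetical examples, not medical guidance.

def reservoir_days(reservoir_units, basal_u_per_hr, bolus_u_per_day):
    """Days until the reservoir runs dry at a steady usage pattern."""
    daily_use = basal_u_per_hr * 24 + bolus_u_per_day
    return reservoir_units / daily_use

# A full 300-unit reservoir at 1.0 U/hr basal plus 20 U of boluses a day
# is consumed at 44 U/day, so it lasts a bit under seven days:
print(round(reservoir_days(300, 1.0, 20), 1))  # 6.8
```

That result squares with practice: the infusion set is changed every few days regardless, so the reservoir rarely runs dry first.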

The pump can help an individual patient more closely mimic the way a healthy pancreas functions. Through Continuous Subcutaneous Insulin Infusion (CSII), the pump replaces the need for frequent injections by delivering precise doses of rapid-acting insulin 24 hours a day to closely match the body’s needs.  Two definitions should be understood relative to insulin usage.  These are as follows:

  • Basal Rate: A programmed rate of small amounts of insulin delivered continuously, mimicking the basal insulin production of the pancreas for the normal functions of the body (not including food). The programmed rate is determined by your healthcare professional based on your personal needs, and the delivery can be customized to your daily routine. For example, it can be suspended, increased, or decreased for a set time frame, which is not possible with basal insulin injections.
  • Bolus Dose: Additional insulin delivered “on demand” to match the food you are going to eat or to correct high blood sugar. Insulin pumps have bolus calculators that help you calculate your bolus amount based on settings pre-determined by your healthcare professional for your specific needs.
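The bolus calculators just mentioned typically combine a food dose (carbohydrates divided by a carb ratio) with a correction dose (glucose above target divided by a correction factor), then subtract insulin still active from earlier boluses. Here is a minimal sketch of that arithmetic; every number in it is a hypothetical illustration, not medical guidance.

```python
# Sketch of the arithmetic behind a pump's bolus calculator.
# All parameter values below are hypothetical, not medical guidance.

def bolus_units(carbs_g, bg_mgdl, target_mgdl,
                carb_ratio, correction_factor, insulin_on_board=0.0):
    """Suggested bolus: food coverage plus correction, minus active insulin."""
    food = carbs_g / carb_ratio                    # units to cover the meal
    correction = max(bg_mgdl - target_mgdl, 0) / correction_factor
    return max(food + correction - insulin_on_board, 0.0)

# 60 g of carbs, glucose at 180 mg/dL versus a 120 mg/dL target,
# 1 U per 10 g of carbs, 1 U lowering glucose ~50 mg/dL, 0.5 U on board:
print(round(bolus_units(60, 180, 120, 10, 50, 0.5), 2))  # 6.7
```

Note that the correction term is clamped at zero: the calculator never subtracts insulin for a below-target reading, it simply does not add a correction.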

A modern insulin pump can accomplish both basal and bolus needs as the situation demands.
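The basal side of that behavior, a programmed schedule plus temporary increases, decreases, or suspension, can be sketched as a simple lookup. The schedule and percentages here are hypothetical illustrations; a real schedule comes from the prescriber.

```python
# Sketch of a programmable basal schedule with a temporary-rate override.
# Times and rates are hypothetical examples, not medical guidance.

# (start hour, units per hour) -- each entry applies until the next one
BASAL_SCHEDULE = [(0, 0.8), (6, 1.1), (12, 0.9), (22, 0.7)]

def basal_rate(hour, temp_percent=100):
    """Basal rate for a given hour, scaled by a temporary-rate percentage
    (e.g. 50 during exercise, 120 during illness, 0 to suspend)."""
    rate = BASAL_SCHEDULE[0][1]
    for start, r in BASAL_SCHEDULE:
        if hour >= start:
            rate = r
    return rate * temp_percent / 100

print(basal_rate(8))                # 1.1 -- normal mid-morning rate
print(round(basal_rate(8, 50), 2))  # 0.55 -- halved for exercise
print(basal_rate(23, 0))            # 0.0 -- delivery suspended
```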

The benefits relative to traditional methods are as follows:

  • Easier dosing: Calculating insulin requirements can be a complex task with many different aspects to consider. It is important that the device ensures accurate dosing by taking into account any insulin already in the body, current glucose levels, carbohydrate intake, and personal insulin settings.
  • Greater flexibility: The pump must be capable of instant adjustment, whether to allow for exercise, to compensate during illness, or to deliver small boluses to cover meals and snacks. With more-modern devices this can be done with the touch of a button. There should also be a temporary basal rate option to proportionally reduce or increase the basal insulin rate, during exercise or illness for example.
  • More convenience: The device should offer the additional convenience of a wirelessly connected blood glucose meter. Such a meter automatically sends blood glucose values to the pump, allowing more accurate calculations and discreet delivery of insulin boluses.
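The "insulin already in the body" factor in the easier-dosing point is usually called insulin on board (IOB). Real pumps model it with curved activity profiles; the simplest common approximation is a straight-line decay over the insulin's duration of action, sketched here with hypothetical numbers.

```python
# Simplest insulin-on-board model: a bolus decays linearly to zero over
# the insulin's duration of action. Real pumps use curved profiles.

def insulin_on_board(bolus_u, hours_since, duration_hr=4.0):
    """Units of a bolus still considered active, assuming linear decay."""
    if hours_since >= duration_hr:
        return 0.0
    return bolus_u * (1 - hours_since / duration_hr)

# One hour after a 6 U bolus with a 4-hour duration of action,
# three quarters of the dose is still counted as active:
print(insulin_on_board(6.0, 1.0))  # 4.5
```

Tracking IOB this way is what lets a bolus calculator avoid "insulin stacking," i.e. dosing again on top of insulin that has not finished working.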

These wonderful devices all result from technology and technological advances.  Needs DO generate devices.  I hope you enjoy this post and as always, I welcome your comments.


One of the items on my bucket list has been to attend the Consumer Electronics Show in Las Vegas.  (I probably need to put a rush on this one because the clock is ticking.)  For 50 years, CES has been the launching pad for innovation and new technology, much of which has changed the world. Held in Las Vegas every year, it is the world’s gathering place for all who thrive on the business of consumer technologies and where next-generation innovations are introduced to the commercial marketplace.   The International Consumer Electronics Show (International CES) showcases more than 3,800 exhibiting companies, including manufacturers, developers and suppliers of consumer technology hardware, content, technology delivery systems and more; a conference program with more than three hundred (300) sessions; and more than one hundred sixty-five thousand (165,000) attendees from one hundred fifty (150) countries.  It is owned and produced by the Consumer Technology Association (CTA)™, formerly the Consumer Electronics Association (CEA)®, the technology trade association representing the $287 billion U.S. consumer technology industry, and it attracts the world’s business leaders and pioneering thinkers to a forum where the industry’s most relevant issues are addressed.  The range of products is immense, as seen from the listing of product categories below.

PRODUCT CATEGORIES:

  • 3D Printing
  • Accessories
  • Augmented Reality
  • Audio
  • Communications Infrastructure
  • Computer Hardware/Software/Services
  • Content Creation & Distribution
  • Digital/Online Media
  • Digital Imaging/Photography
  • Drones
  • Electronic Gaming
  • Fitness and Sports
  • Health and Biotech
  • Internet Services
  • Personal Privacy & Cyber Security
  • Robotics
  • Sensors
  • Smart Home
  • Startups
  • Vehicle Technology
  • Video
  • Wearables
  • Wireless Devices & Services

If we look at the world-changing revolution and evolution coming from CES over the years, we see the following advances in technology, most of which are now commercialized:

  • Videocassette Recorder (VCR), 1970
  • Laserdisc Player, 1974
  • Camcorder and Compact Disc Player, 1981
  • Digital Audio Technology, 1990
  • Compact Disc – Interactive, 1991
  • Digital Satellite System (DSS), 1994
  • Digital Versatile Disk (DVD), 1996
  • High Definition Television (HDTV), 1998
  • Hard-disc VCR (PVR), 1999
  • Satellite Radio, 2000
  • Microsoft Xbox and Plasma TV, 2001
  • Home Media Server, 2002
  • Blu-Ray DVD and HDTV PVR, 2003
  • HD Radio, 2004
  • IP TV, 2005
  • Convergence of content and technology, 2007
  • OLED TV, 2008
  • 3D HDTV, 2009
  • Tablets, Netbooks and Android Devices, 2010
  • Connected TV, Smart Appliances, Android Honeycomb, Ford’s Electric Focus, Motorola Atrix, Microsoft Avatar Kinect, 2011
  • Ultrabooks, 3D OLED, Android 4.0 Tablets, 2012
  • Ultra HDTV, Flexible OLED, Driverless Car Technology, 2013
  • 3D Printers, Sensor Technology, Curved UHD, Wearable Technologies, 2014
  • 4K UHD, Virtual Reality, Unmanned Systems, 2015

Let’s now take a very brief look at several exhibit categories to get a feel for the products.  Here we go.

Augmented Reality (AR):

Through specially designed hardware and software full of cameras, sensors, algorithms and more, your perception of reality can be instantly altered in context with your environment. Applications include sports scores shown on TV during a match, a ball’s trajectory overlaid on the broadcast image, gaming, construction plans and more.  AR and VR equipment is becoming extremely popular, not only with consumers but also with the Department of Defense, departments of motor vehicles, and companies turning to the technology for training purposes.


Cyber Security:

The Cyber & Personal Security Marketplace will feature innovations ranging from smart wallets and safe payment apps to secure messaging and private Internet access.  If you have never been hacked, you are one in a million.  I really don’t think there are many people who have remained unaffected by digital fraud.  One entire section of the CES is devoted to cyber security.


E-Commerce:

Enterprise solutions are integral for business. From analytics, consulting, integration and cyber security to e-commerce and mobile payment, the options are ever-evolving.  As you well know, each year the number of online shoppers increases and will eventually outpace the number of shoppers visiting “brick-and-mortar” stores.  Some feel this may spell the demise of shopping centers altogether.


Self-Driving Autonomous Automobiles:

Some say that if you are five years old or under, you may never need a driver’s license.  I personally think this is a little far-fetched, but who knows?  Self-driving automobiles are featured prominently at CES.


Virtual Reality (VR):

Whether it is the launch of the next wave of immersive multimedia for virtual reality systems and environments, or gaming hardware, software and accessories designed for mobile, PCs or consoles, these exhibitors are sure to energize, empower and excite at CES 2017.


i-Products:

From electronic plug-ins to fashionable cases, speakers, headphones and exciting new games and applications, the product Marketplace will feature the latest third-party accessories and software for your Apple iPod®, iPhone® and iPad® devices.


3-D Printing:

Most 3D printers are used for building prototypes for the medical, aerospace, engineering and automotive industries. But with the advancement of the digital technology supporting it, these machines are moving toward more compact units with affordable price points for today’s consumer.


Robotic Systems:

The Robotics Marketplace will showcase intelligent, autonomous machines that are changing the way we live at work, at school, at the doctor’s office and at home.


Healthcare and Wellness:

Digital health continues to grow at an astonishing pace, with innovative solutions for diagnosing, monitoring and treating illnesses, to advancements in health care delivery and smarter lifestyles.


Sports Technology:

In a world where an athlete’s success hinges on milliseconds or millimeters, high-performance improvement and feedback are critical.


CONCLUSIONS:

I think it’s amazing, and to our credit as a country, that CES exists and presents, on an annual basis, designs and visions from the best and brightest.  It is a great showplace for ideas the world over, from established companies and from newcomers who wish to make their mark on technology.  Can’t wait to go; maybe next year.  As always, I welcome your comments.
