CLOUD COMPUTING

May 20, 2017


OK, you have heard the term over and over again, but just what is cloud computing? Simply put, cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics, and more—over the Internet (“the cloud”). Companies offering these computing services are called cloud providers and typically charge based on usage, similar to how you’re billed for water or electricity at home. The cloud provides shared processing resources and data to computers and other devices on demand: a shared pool of configurable computing resources (networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions give users and enterprises the ability to store and process their data in either privately owned or third-party data centers that may be located far from the user, whether across a city or across the world. By sharing resources, cloud computing achieves coherence and economies of scale, much like a utility such as the electricity grid.
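
To make the pay-for-what-you-use, provision-on-demand idea concrete, here is a minimal Python sketch that stores and retrieves a file in a cloud object store using Amazon’s boto3 library. The bucket name is a hypothetical placeholder, and the sketch assumes AWS credentials are already configured on the machine.

    import boto3  # the AWS SDK for Python

    # Connect to the S3 object-storage service; credentials are read
    # from the environment or from ~/.aws/credentials.
    s3 = boto3.client("s3")

    # Upload a local file to a bucket. Storage is provisioned on demand,
    # and billing is based on what is actually stored and transferred.
    s3.upload_file("report.pdf", "example-bucket", "backups/report.pdf")

    # Retrieve the same object from anywhere with an Internet connection.
    s3.download_file("example-bucket", "backups/report.pdf", "report-copy.pdf")

No server was purchased, racked, or maintained to do this; that is the utility model in miniature.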

ADVANTAGES AND DISADVANTAGES:

Any new technology has an upside and a downside, and the cloud is no exception. There are obviously advantages and disadvantages when using it. Let’s take a look.

 Advantages

  • Lower cost for desktop clients since the applications are running in the cloud. This means clients with smaller hard drive requirements and possibly even no CD or DVD drives.
  • A business’s peak computing needs can be offloaded to the cloud, saving the funds normally spent on additional in-house servers.
  • Lower maintenance costs. This includes both hardware and software cost reductions, since client machine requirements are much lower and software purchase costs are eliminated altogether for applications running in the cloud.
  • Automatic application software updates for applications in the cloud. This is another maintenance savings.
  • Vastly increased computing power availability. The scalability of the server farm provides this advantage.
  • The scalability of virtual storage provides effectively unlimited storage capacity.

 Disadvantages

  • Requires an “always on” Internet connection.
  • There are clearly concerns with data security, e.g. questions like: “If I can get to my data using a web browser, who else can?”
  • Concerns for loss of data.
  • Reliability. Service interruptions are rare but can happen; Google, for example, has already had an outage.

MAJOR CLOUD SERVICE PROVIDERS:

The following names are very recognizable.  Everyone knows the “open-market” cloud service providers.

  • AMAZON
  • SALESFORCE
  • GOOGLE
  • IBM
  • MICROSOFT
  • SUN MICROSYSTEMS
  • ORACLE
  • AT&T

PRIVATE CLOUD SERVICE PROVIDERS:

With all the interest in cloud computing as a service, there is also an emerging concept of private clouds. It is a bit reminiscent of the early days of the Internet and the importing of that technology into the enterprise as intranets. The concerns for security and reliability outside corporate control are very real and troublesome aspects of the otherwise attractive technology of cloud computing services. The IT world has not forgotten the eight-hour downtime of the Amazon S3 cloud service on July 20, 2008. A private cloud means that the technology must be bought, built and managed within the corporation. A company purchases cloud technology usable inside the enterprise to develop cloud applications that have the flexibility of running on the private cloud or outside on the public clouds. This “hybrid environment” is in fact the direction some believe the enterprise community will be going, and some of the products that support this approach are listed below, followed by a small sketch of the pattern.

  • Elastra (http://www.elastra.com) is developing a server that can be used as a private cloud in a data center. Tools are available to design applications that will run in both private and public clouds.
  • 3Tetra (http://www.3tetra.com) is developing a grid operating system called ParaScale that will aggregate disk storage.
  • Cassatt (http://www.cassatt.com) will be offering technology that can be used for resource pooling.
  • NComputing (http://www.ncomputing.com) has developed a desktop PC virtualization system that allows up to 30 users to share the same PC, each with their own keyboard, monitor and mouse. Strong claims are made about savings on PC costs, IT complexity and power consumption by customers in government, industry and education.
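
The “hybrid environment” mentioned above boils down to the same application code being pointed at either an in-house endpoint or a public provider. Here is a minimal Python sketch of that pattern; both endpoint URLs are hypothetical placeholders, not real services.

    import os

    # The same client code can target a private or a public cloud;
    # deployment chooses between these hypothetical endpoints via configuration.
    ENDPOINTS = {
        "private": "https://cloud.internal.example.com",
        "public": "https://storage.provider.example.com",
    }

    def storage_endpoint() -> str:
        # Default to the private cloud unless configured otherwise.
        target = os.environ.get("CLOUD_TARGET", "private")
        return ENDPOINTS[target]

    print(storage_endpoint())

Applications written this way can “burst” to the public cloud at peak load while keeping sensitive workloads in-house.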

CONCLUSION:

OK, clear as mud—right?  For me, the biggest misconception is the terminology itself—the cloud.   The word “cloud” seems to imply an IT system in the sky.  The exact opposite is the case.  The cloud is an earth-based IT system serving as a universal host.  A network of computers. A network of servers.  No cloud.


As a parent, you absolutely dread that call from your child indicating he or she has a problem—maybe a huge problem.  On April 25th of this year we received a call from our oldest son.  He was taking a late lunch at a local restaurant in downtown Chattanooga when he suddenly collapsed, fell backwards and hit his head on the sidewalk.  An onlooker rushed over to help him and quickly decided he needed a visit to the Memorial Hospital emergency room.  Something just did not feel right.  He called us on the way to the ER. Once in the ER, approximately five (5) hours and one CAT scan later, the attending physician informed us that our son had a great deal of fluid collecting at the top of his brain and a great deal of swelling.  The decision was made to move him to Erlanger Hospital, which has better facilities for neurological surgery should it become necessary.  At 1:32 A.M. Wednesday morning we received word that our son had a tumor at the base of his brain stem.  It was somewhat smaller than a tennis ball and, in all probability, had been growing for the last ten years.  Surgery was necessary, and quickly, to avoid a stroke or a heart attack.  The tumor was pressing on the spinal canal and nerve bundles.  Much delay at this point would be catastrophic.  It is amazing to me that there were no signs of difficulty prior to his fall.  Nothing to tell us a problem existed at all.

Erlanger referred us to the Semmes-Murphey Clinic in Memphis, where all documentation from Memorial and Erlanger had been sent.  Founded one hundred (100) years ago by Eustace Semmes, MD, and Francis Murphey, MD, the Semmes-Murphey Neurologic & Spine Institute has been a leader in the development of technology and procedures that improve the quality of care for patients with neurological and spine disorders. This continuing leadership has made the Semmes-Murphey name instantly recognizable to physicians across the country and the world, many of whom refer their patients there for treatment.  Dr. Madison Michael performed the eight (8) hour surgery nine (9) days ago to remove the tumor.  He is a miracle worker.  The surgery was successful, but with lingering issues needing to be addressed as time allows and physical therapy dictates. Our son has lost hearing in his left ear, has double vision, some atrophy in his extremities, and a loss of stability.  There was also great difficulty swallowing for three days after surgery, and at one time we felt a feeding tube might need to be inserted.  That proved not to be the case.  The swallow test stages are as follows:

  • Water
  • Applesauce
  • Jell-O-like substance
  • Oatmeal
  • Solid food

He did eventually pass.

We have a long road of recovery ahead of us, but there is optimism he can regain most, if not all, of his cognitive and physical abilities.  We do suspect the hearing is gone and will never return, but he is alive.

CRANIAL NERVES:

Our brain is a remarkably delicate and wonderful piece of equipment.  The ultimate computer with absolutely no equal.  Let’s take a look.

The cranial nerves exist as a set of twelve (12) paired nerves arising directly from the brain. The first two nerves (olfactory and optic) arise from the cerebrum, whereas the remaining ten emerge from the brain stem. This is where our son’s tumor was located so the surgery would have to be performed by one of the very best neurosurgeons in the United States.  That’s Dr. Michael.

The names of the cranial nerves relate to their function, and they are also identified by Roman numerals (I-XII). The images below indicate the specific locations of the cranial nerves and the functions they perform.

You can see from the above the complexity of the brain and what each area contributes to cognitive, mobility and sensory abilities.  A remarkably impressive central computer.

The image below shows the approximate location relative to positioning of the nerve bundles and the functions those nerves provide.


Doctor Michael indicated the nerves are like spider webs, and to be successful those nerves would have to be pushed away to allow access to the tumor.   The diagram below indicates the twelve (12) nerve bundles as follows:

Olfactory–This sensory nerve contributes to the sense of smell. It arises from specialized cells in what is termed the olfactory epithelium and carries information from the nasal epithelium to the olfactory center in the brain.

Optic–This sensory nerve transmits visual information to the brain. Specifically, it carries signals from the retina by way of the retinal ganglion cells.

Oculomotor–This motor nerve arises from the midbrain. Its functions include raising the eyelid, rotating the eyeball, constricting the pupil on exposure to light and operating several other eye muscles.

Trochlear–This motor nerve also arises from the midbrain and performs the function of turning the eye, by way of the superior oblique muscle.

Trigeminal–This is the largest of the cranial nerves and performs many sensory functions related to the nose, eyes, tongue and teeth. It is divided into three branches: the ophthalmic, maxillary and mandibular nerves. It is a mixed nerve, performing both sensory and motor functions.

Abducent–This motor nerve arises from the pons and performs the function of turning the eye laterally.

Facial–This nerve is responsible for the different facial expressions. It also performs some sensory functions, carrying information about touch on the face and taste from the front of the tongue. It emerges from the brain stem.

Vestibulocochlear–This sensory nerve provides information related to the balance of the head and the sense of sound or hearing. It carries vestibular as well as cochlear information to the brain and lies near the inner ear.

Glossopharyngeal–This sensory nerve carries information about temperature, pressure and related sensations from the pharynx (the initial portion of the throat) and part of the tongue and palate. It also serves some of the taste buds and salivary glands, and it carries some motor functions, such as helping in swallowing food.

Vagus–This is also a mixed nerve, carrying both motor and sensory functions. It deals with the pharynx, larynx, esophagus, trachea, bronchi, part of the heart and the palate, constricting the muscles of those areas. On the sensory side, it contributes to the sense of taste.

Spinal accessory–As the name intimates, this motor nerve arises in part from the spinal cord and supplies the trapezius and other surrounding muscles, providing muscle movement of the shoulders and surrounding neck.

Hypoglossal–This is a typical motor nerve that deals with the muscles of the tongue.

CONCLUSION: I do not wish anyone to gain this information the way we did.  It’s fascinating, but I could have gone a lifetime not needing to know it.  Just my thoughts.

DIGITAL READINESS GAPS

April 23, 2017


This post uses as one reference the “Digital Readiness Gaps” report by the Pew Research Center.  That report explores, as we will now, the attitudes and behaviors that underpin individual preparedness and comfort in using digital tools for learning.

HOW DO ADULTS LEARN?  Good question. I suppose there are many ways, but I can certainly tell you that adults my age, over seventy, learn in a manner much different from my grandchildren, under twenty.  I think of “book learning” first and digital as a backup.  They head straight for their iPad or iPhone.  GOOGLE is a verb and not a company name as far as they are concerned.  (I’m actually getting there with digital search methods and now start with GOOGLE, but I reference multiple sources before being satisfied with only one. For some reason, I still trust books as opposed to digital.)

According to Malcolm Knowles, a pioneer in adult learning, there are six (6) main characteristics of adult learners, as follows:

  • Adult learning is self-directed/autonomous
    Adult learners are actively involved in the learning process such that they make choices relevant to their learning objectives.
  • Adult learning utilizes knowledge & life experiences
    Under this approach educators encourage learners to connect their past experiences with their current knowledge-base and activities.
  • Adult learning is goal-oriented
    The motivation to learn is increased when the relevance of the “lesson” through real-life situations is clear, particularly in relation to the specific concerns of the learner.
  • Adult learning is relevancy-oriented
    One of the best ways for adults to learn is by relating the assigned tasks to their own learning goals. If it is clear that the activities they are engaged in directly contribute to achieving their personal learning objectives, then they will be inspired and motivated to engage in projects and successfully complete them.
  • Adult learning highlights practicality
    Placement is a means of helping students to apply the theoretical concepts learned inside the classroom into real-life situations.
  • Adult learning encourages collaboration
    Adult learners thrive in collaborative relationships with their educators. When learners are considered by their instructors as colleagues, they become more productive. When their contributions are acknowledged, then they are willing to put out their best work.

One very important note: these six characteristics encompass both the “digital world” and conventional methods, i.e. books, magazines, newspapers, etc.

As mentioned above, a recent Pew Research Center report shows that adoption of technology for adult learning in both personal and job-related activities varies by people’s socio-economic status, their race and ethnicity, and their level of access to home broadband and smartphones. Another report showed that some users are unable to make the internet and mobile devices function adequately for key activities such as looking for jobs.

Specifically, the Pew report made their assessment relative to American adults according to five main factors:

  • Their confidence in using computers
  • Their facility with getting new technology to work
  • Their use of digital tools for learning
  • Their ability to determine the trustworthiness of online information
  • Their familiarity with contemporary “education tech” terms

It is important to note that the report addresses only adults’ proclivity for digital learning and not learning by any other means; just the availability of digital devices to facilitate learning. If we look at the “conglomerate” from the PIAAC Fact Sheet, we see the following:

The Pew analysis details several distinct groups of Americans who fall along a spectrum of digital readiness from relatively more prepared to relatively hesitant. Those who tend to be hesitant about embracing technology in learning are below average on the measures of readiness, such as needing help with new electronic gadgets or having difficulty determining whether online information is trustworthy. Those whose profiles indicate a higher level of preparedness for using tech in learning are collectively above average on measures of digital readiness.  The chart below will indicate their classifications.

The breakdown is as follows:

Relatively Hesitant – 52% of adults in three distinct groups. This overall cohort is made up of three different clusters of people who are less likely to use digital tools in their learning. This has to do, in part, with the fact that these groups have generally lower levels of involvement with personal learning activities. It is also tied to their professed lower level of digital skills and trust in the online environment.

  • A group of 14% of adults make up The Unprepared. This group has both low levels of digital skills and limited trust in online information. The Unprepared rank at the bottom of those who use the internet to pursue learning, and they are the least digitally ready of all the groups.
  • We call one small group Traditional Learners, and they make up 5% of Americans. They are active learners, but use traditional means to pursue their interests. They are less likely to fully engage with digital tools because they have concerns about the trustworthiness of online information.
  • A larger group, The Reluctant, make up 33% of all adults. They have higher levels of digital skills than The Unprepared, but very low levels of awareness of new “education tech” concepts and relatively lower levels of performing personal learning activities of any kind. This is correlated with their general lack of use of the internet in learning.

Relatively more prepared – 48% of adults in two distinct groups. This cohort is made up of two groups who are above average in their likelihood to use online tools for learning.

  • A group we call Cautious Clickers comprises 31% of adults. They have tech resources at their disposal, trust and confidence in using the internet, and the educational underpinnings to put digital resources to use for their learning pursuits. But they have not waded into e-learning to the extent the Digitally Ready have and are not as likely to have used the internet for some or all of their learning.
  • Finally, there are the Digitally Ready. They make up 17% of adults, and they are active learners and confident in their ability to use digital tools to pursue learning. They are aware of the latest “ed tech” tools and are, relative to others, more likely to use them in the course of their personal learning. The Digitally Ready, in other words, have high demand for learning and use a range of tools to pursue it – including, to an extent significantly greater than the rest of the population, digital outlets such as online courses or extensive online research.

CONCLUSIONS:

To me, one of the greatest lessons from my university days—NEVER STOP LEARNING.  I had one professor, Dr. Bob Maxwell, who told us the half-life of a graduate engineer is approximately five (5) years: if you stop learning, the information you have will become obsolete in five years.  At the pace of technology today, that may be five months.  You never stop learning AND you embrace existing technology.  In other words—do digital. Digital is your friend.  GOOGLE, no matter how flawed, can give you answers much quicker than other sources, and it’s readily available and just plain handy.  At least start there; then trust, but verify.


If you work or have worked in manufacturing, you know robotic systems have definitely had a distinct impact on assembly, inventory acquisition from storage areas and finished-part warehousing.   There is considerable concern that the “rise of the machines” will eventually replace individuals performing a variety of tasks.  I personally do not feel this will be the case, although there is no doubt robotic systems have found their way onto the manufacturing floor.

From the “Executive Summary World Robotics 2016 Industrial Robots”, we see the following:

2015:  Robot sales increased by 15% to 253,748 units, by far the highest level ever recorded for one year. The main driver of the growth in 2015 was general industry, with an increase of 33% compared to 2014, in particular the electronics industry (+41%), the metal industry (+39%), and the chemical, plastics and rubber industry (+16%). Robot sales in the automotive industry increased only moderately in 2015, after a five-year period of continued considerable growth. China significantly expanded its leading position as the biggest market, with a share of 27% of the total supply in 2015.

The chart below puts the sales picture in perspective, showing how system sales have increased since 2003.

It is very important to note that seventy-five percent (75%) of global robot sales came from five (5) major markets in 2015: China, the Republic of Korea, Japan, the United States, and Germany.

As you can see from the bar chart above, those five markets’ share of sales volume increased from seventy percent (70%) in 2014. Since 2013, China has been the biggest robot market in the world, with continued dynamic growth. With sales of about 68,600 industrial robots in 2015 – an increase of twenty percent (20%) compared to 2014 – China alone surpassed Europe’s total sales volume (50,100 units). Chinese robot suppliers installed about 20,400 units, according to information from the China Robot Industry Alliance (CRIA); their sales volume was about twenty-nine percent (29%) higher than in 2014. Foreign robot suppliers increased their sales by seventeen percent (17%) to 48,100 units (including robots produced by international robot suppliers in China). The market share of Chinese robot suppliers grew from twenty-five percent (25%) in 2013 to twenty-nine percent (29%) in 2015. Between 2010 and 2015, the total supply of industrial robots to China increased by about thirty-six percent (36%) per year on average (CAGR).

About 38,300 units were sold to the Republic of Korea, fifty-five percent (55%) more than in 2014. The increase is partly due to a number of companies which started to report their data only in 2015. The actual growth rate in 2015 is estimated at about thirty percent (30%) to thirty-five percent (35%).

In 2015, robot sales in Japan increased by twenty percent (20%) to about 35,000 units reaching the highest level since 2007 (36,100 units). Robot sales in Japan followed a decreasing trend between 2005 (reaching the peak at 44,000 units) and 2009 (when sales dropped to only 12,767 units). Between 2010 and 2015, robot sales increased by ten percent (10%) on average per year (CAGR).

Robot installations in the United States continued to increase in 2015, by five percent (5%), to a peak of 27,504 units. The driver of this continued growth since 2010 has been the ongoing trend to automate production in order to strengthen American industries in the global market and to keep manufacturing at home, and in some cases to bring back manufacturing that had previously been sent overseas.

Germany is the fifth largest robot market in the world. In 2015, the number of robots sold increased slightly to a new record high at 20,105 units compared to 2014 (20,051 units). In spite of the high robot density of 301 units per 10,000 employees, annual sales are still very high in Germany. Between 2010 and 2015, annual sales of industrial robots increased by an average of seven percent (7%) in Germany (CAGR).
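
The CAGR figures quoted above come from the standard compound-annual-growth-rate formula, which is easy to check yourself. Here is a short Python sketch; the unit counts in the example are illustrative placeholders, not figures from the IFR report.

    def cagr(start_units: float, end_units: float, years: int) -> float:
        """Compound annual growth rate between two annual sales figures."""
        return (end_units / start_units) ** (1.0 / years) - 1.0

    # Hypothetical example: sales rising from 20,000 to 32,210 units
    # over five years works out to roughly 10% per year.
    print(f"CAGR: {cagr(20_000, 32_210, 5):.1%}")  # -> CAGR: 10.0%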

From the graphic below, you can see which industries employ robotic systems the most.

Growth rates will not lessen, with projections through 2019 as follows:

A fascinating development involves the assistance of human endeavor by robotic systems.  This fairly new technology is called collaborative robots, or COBOTS.  Let’s get a definition.

COBOTS:

A cobot or “collaborative robot” is a robot designed to assist human beings as a guide or assistor in a specific task. A regular robot is designed to be programmed to work more or less autonomously. In one approach to cobot design, the cobot allows a human to perform certain operations successfully if they fit within the scope of the task and to steer the human on a correct path when the human begins to stray from or exceed the scope of the task.

The term “collaborative” is used to distinguish robots that collaborate with humans from robots that work behind fences without any direct interaction with humans.  In contrast, terms such as “articulated,” “cartesian,” “delta” and “SCARA” distinguish different robot kinematics.

Traditional industrial robots excel at applications that require extremely high speeds, heavy payloads and extreme precision.  They are reliable and very useful for many types of high volume, low mix applications.  But they pose several inherent challenges for higher mix environments, particularly in smaller companies.  First and foremost, they are very expensive, particularly when considering programming and integration costs.  They require specialized engineers working over several weeks or even months to program and integrate them to do a single task.  And they don’t multi-task easily between jobs since that setup effort is so substantial.  Plus, they can’t be readily integrated into a production line with people because they are too dangerous to operate in close proximity to humans.

For small manufacturers with limited budgets, space and staff, a collaborative robot such as Baxter (shown below) is an ideal fit because it overcomes many of these challenges.  It’s extremely intuitive, integrates seamlessly with other automation technologies, is very flexible and is quite affordable with a base price of only $25,000.  As a result, Baxter is well suited for many applications, such as those requiring manual labor and a high degree of flexibility, that are currently unmet by traditional technologies.

Baxter is one example of collaborative robotics and some say is by far the safest, easiest, most flexible and least costly robot of its kind today.  It features a sophisticated multi-tier safety design that includes a smooth, polymer exterior with fewer pinch points; back-drivable joints that can be rotated by hand; and series elastic actuators which help it to minimize the likelihood of injury during inadvertent contact.

It’s also incredibly simple to use.  Line workers and other non-engineers can quickly learn to train the robot themselves, by hand.  With Baxter, the robot itself is the interface, with no teaching pendant or external control system required.  And with its ease of use and diverse skill set, Baxter is extremely flexible, capable of being utilized across multiple lines and tasks in a fraction of the time and cost it would take to re-program other robots.  Plus, Baxter is made in the U.S.A., which is a particularly appealing aspect for many customers looking to re-shore their own production operations.

The digital picture above shows a lady working alongside a collaborative robotic system, both performing a specific task. She can feel right at home with her mechanical co-worker only because the design demands a great element of safety.

Certifiable safety is the most important precondition for a collaborative robot system to be applied in an industrial setting.  Available solutions that fulfill the requirements imposed by safety standards often show limited performance or productivity gains, as most of today’s implemented scenarios are limited to very static processes. This means a strict stop-and-go of the robot process whenever a human enters or leaves the workspace.
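
That stop-and-go behavior amounts to a very simple control loop. The Python sketch below is purely illustrative (it is not any vendor’s safety API; certified safety logic runs on dedicated, safety-rated hardware): the robot halts whenever a presence sensor reports a human in the shared workspace and resumes when it is clear.

    import random
    import time

    class Robot:
        """Toy stand-in for a robot controller."""
        def stop(self):
            print("robot: halted")
        def resume(self):
            print("robot: running")

    def human_in_workspace() -> bool:
        # Placeholder for a safety-rated presence sensor such as a
        # light curtain or laser scanner; simulated with a coin flip.
        return random.random() < 0.2

    def control_loop(robot: Robot, cycles: int = 10) -> None:
        for _ in range(cycles):
            if human_in_workspace():
                robot.stop()    # strict stop: a human has entered
            else:
                robot.resume()  # workspace clear: continue the task
            time.sleep(0.1)     # poll the sensor ten times a second

    control_loop(Robot())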

Collaborative systems are still a work in progress, but the technology has greatly expanded in use, primarily because these safety requirements can now be satisfied.  Upcoming years will only bring greater acceptance; do not be surprised if you see robots and humans working side by side on every manufacturing floor within the next decade.

As always, I welcome your comments.


BIOGRAPHY:

Born in Louisiana in 1925, Elmore Leonard was inspired by Remarque’s All Quiet on the Western Front. Leonard’s determination to be a writer stayed with him through a stint in the U.S. Navy and a job in advertising. His early credits are mostly Westerns, including 3:10 to Yuma. When that genre became less popular, Leonard turned to crime novels set in Detroit, Michigan, including Get Shorty, Jackie Brown and Out of Sight. The prolific writer died in Detroit on August 20, 2013, at age 87.

Famed Western/crime novelist Elmore John Leonard Jr. was born on October 11, 1925, in New Orleans, Louisiana. The early part of Leonard’s youth was largely defined by his family’s constant moves, which were the result of his father’s job as a site locator for General Motors. Not long after his 9th birthday, however, Leonard’s family found a permanent home in Detroit, Michigan.

It was in Detroit that Leonard got hooked on a serialization of the Erich Maria Remarque novel All Quiet on the Western Front in the Detroit Times. The book became an inspiration for Leonard, who decided he wanted to try fiction writing as well. He wrote his first play that same year, when he was in fifth grade, and would go on to write for his high school paper.

After graduating from high school in 1943 and serving three subsequent years in the U.S. Navy, Leonard returned home and enrolled at the University of Detroit. As a college student, he pushed himself to write more, and graduated in 1950 with a dual degree in English and philosophy. Still an unknown, however, Leonard didn’t have the means to strike out on his own as a writer. Instead, he found work with an advertising agency, using his off time to draft stories—many of them Westerns.

When the popular demand for Westerns waned in the 1960s, Leonard focused on a new genre: crime. With stories often set against the gritty background of his native Detroit, Leonard’s crime novels, complete with rich dialogue and flawed central characters, earned the writer a group of dedicated readers. It wasn’t until the 1980s, however, that Leonard truly became a star. The man who never got enough publicity buzz, according to his fans, was suddenly appearing everywhere. In 1984, he landed on the cover of Newsweek under the label the “Dickens of Detroit.” Hollywood came calling shortly after, and many of Leonard’s novels were adapted into movies, including the crime smashes Get Shorty and Jackie Brown.

THE HOT KID:

That’s where we come in.  The title “HOT KID” refers to young Deputy U.S. Marshal Carl Webster, a quick-drawing, incredibly slick young man who wants to become the most famous lawman west of the Mississippi and does little to hide his vanity. At fifteen (15) years of age, Webster witnessed the vicious Emmet Long shooting an officer in a drugstore robbery, but what rankled him the most was that Long snatched away Webster’s peach ice-cream cone and called him a “greaser.” Webster gets his revenge six years later by making Long the first in what will become an impressive list of vanquished outlaws, and he seals his fame with a cool catchphrase: “If I have to pull my weapon, I’ll shoot to kill.” (Funny how often he “has” to pull it.) Webster’s chief rival is Jack Belmont, the black-hearted son of an oil millionaire who’s out to show up his dad by knocking off more banks than Pretty Boy Floyd. Both stand to gain from the purple pen of Tony Antonelli, a True Detective magazine writer who follows the story as it develops, and plans to stretch his two-cents-a-word bounty to the limit.

In “The Hot Kid”, bank robbers have become so common that “thief” seems close to a legitimate occupation, right alongside gun moll, bootlegger, and prostitute. Set over thirteen (13) years in ’20s and ’30s Oklahoma and Kansas City, the book is populated by characters looking to make names for themselves, joining legends like Bonnie and Clyde, Machine Gun Kelly, and John Dillinger in headlines and crime magazines across the country. In this world, notoriety means more than money, and that counts for figures on both sides of the law, who engage in a game of one-upmanship that has little to do with the usual interests of crime or justice. Though Leonard doesn’t sketch them as broadly as the colorful hoods found in his contemporary crime novels, the ambitions of these larger-than-life characters take on infectiously comical dimensions.

READER COMMENTS:

I certainly enjoyed the book and must admit it was my first Elmore Leonard read.  I do NOT know why I have not stumbled upon his works before, since he has written eighty-seven books.  I think he is an acquired taste.  There is absolutely no doubt, at least in my mind, about his writing ability.  The very fact he has remained a “top read” over the years is a testament to his style being accepted by most avid readers.  He is concise and brief with rhetoric. He knows how to paint a story and keep the reader interested.  This is not one of those books you cannot put down, but it is one you definitely want to finish.  In changing from Westerns to crime, he maintains your interest to the point you really must find out how the darn thing ends.  I can definitely recommend “The Hot Kid” to you. It’s fairly short and will involve a couple of days, on and off, of your time.  READ IT.

I like to include reviews from others who have read the book, and I do this frequently. Remember, there is not much difference between a lump of coal and a diamond: everyone has their own perspective, and that is what the comments below provide.

DAVID:   FOUR STAR:  My first Elmore Leonard novel. He’s a terse, pacey author, and The Hot Kid is pretty much Hollywood in a book, but a nicely-filmed Hollywood with engaging if not terribly deep characters.

ANDREW P:   FOUR STAR:  This book came to my attention in an unusual way. I just listened to the audible version of NOS4A2 by Joe Hill and at the end the author gives some recommendations on audio books. ‘The Hot Kid’ was one that he praised so I used my next audible credit on it.

EVA SMITH:  FIVE STAR:   In one of life’s little coincidences, I was sorting through books and came across two by Elmore Leonard. I’d read them so long ago that I’d forgotten most of the plot points and the writing was so good that I gave both of them a re-read. Mr. Leonard picked that week to die so I saw it as a sign that I should seek out more of his books. Just finished “The Hot Kid.” Excellent.

BENJAMIN THOMAS:  FIVE STAR:  Elmore Leonard is a writer after my own heart. He started with westerns and then turned to crime fiction, becoming one of the best-selling crime fiction writers of all time. When I saw the audio book, “The Hot Kid” on the library shelves this time, I just couldn’t pass it up because I knew I’d be in for a treat. I also needed a relatively short book this time so I could complete it before the end of the year.

JEFF DICKSON:  FIVE STAR:  A really, really good tale by Leonard. Story is of a hot shot U.S. Marchall (sp) in Oklahoma and Kansas City area during the depression years and one particular inept criminal he goes after. Highly recommended.

STEVE:  TWO STAR:  This might be my last Leonard novel. Starts out strong, but then the conversations begin sounding familiar. This is probably a good beach book for some, but I found that the writing was a bit too breezy, the dialogue a bit too hip. At this point in his career, I’m tempted to say Leonard can write these in his sleep, but there’s some nice historical details that shows he’s not on auto-pilot. For those who like Leonard, and his period pieces, check out a lesser known title, The Moonshine War.

As always, I welcome your comments.


At one time in the world there were only two distinct branches of engineering: civil and military.

The word engineer was initially used in the context of warfare, dating back to 1325, when engine’er (literally, one who operates an engine) referred to “a constructor of military engines”.  In this context, “engine” referred to a military machine, i.e., a mechanical contraption used in war (for example, a catapult).

As the design of civilian structures such as bridges and buildings developed as a technical discipline, the term civil engineering entered the lexicon as a way to distinguish between those specializing in the construction of such non-military projects and those involved in the older discipline. As the prevalence of civil engineering outstripped engineering in a military context and the number of disciplines expanded, the original military meaning of the word “engineering” is now largely obsolete. In its place, the term “military engineering” has come to be used.

OK, so that’s how we got here.  If you follow my posts you know I primarily concentrate on STEM (science, technology, engineering and mathematics) professions.  Engineering is somewhat uppermost since I am a mechanical engineer.

There are many branches of the engineering profession: distinct areas of endeavor that attract individuals and capture their professional lives.  Several of these are as follows:

  • Electrical Engineering
  • Mechanical Engineering
  • Civil Engineering
  • Chemical Engineering
  • Biomedical Engineering
  • Engineering Physics
  • Nuclear Engineering
  • Petroleum Engineering
  • Materials Engineering

Of course, there are others, but the one I wish to concentrate on in this post is a growing branch of engineering—biomedical engineering. Biomedical engineering, or bioengineering, is the application of engineering principles to the fields of biology and health care. Bioengineers work with doctors, therapists and researchers to develop systems, equipment and devices in order to solve clinical problems.  The scope of a bioengineer’s charge is correspondingly broad, as the examples below will show.

Biomedical engineering has evolved over the years in response to advancements in science and technology.  This is NOT a new classification for engineering involvement.  Engineers have been at this for a while.  Throughout history, humans have made increasingly more effective devices to diagnose and treat diseases and to alleviate, rehabilitate or compensate for disabilities or injuries. One example is the evolution of hearing aids to mitigate hearing loss through sound amplification. The ear trumpet, a large horn-shaped device that was held up to the ear, was the only “viable form” of hearing assistance until the mid-20th century, according to the Hearing Aid Museum. Electrical devices had been developed before then, but were slow to catch on, the museum said on its website.

The works of Alexander Graham Bell and Thomas Edison on sound transmission and amplification in the late 19th and early 20th centuries were applied to make the first tabletop hearing aids. These were followed by the first portable (or “luggable”) devices using vacuum-tube amplifiers powered by large batteries. However, the first wearable hearing aids had to await the development of the transistor by William Shockley and his team at Bell Laboratories. The subsequent development of micro-integrated circuits and advanced battery technology has led to miniature hearing aids that fit entirely within the ear canal.

Let’s take a very quick look at several devices designed by biomedical engineering personnel.

MAGNETIC RESONANCE IMAGING:

POSITRON EMISSION TOMOGRAPHY (PET) SCAN:

NOTE: PET scans represent a different technology relative to MRIs. The scan uses a special dye that has radioactive tracers. These tracers are injected into a vein in your arm. Your organs and tissues then absorb the tracer.

BLOOD CHEMISTRY MONITORING EQUIPMENT:

ELECTROCARDIOGRAM MONITORING DEVICE (EKG):

INSULIN PUMP:

COLONOSCOPY:

THE PROFESSION:

Biomedical engineers design and develop medical systems, equipment and devices. According to the U.S. Bureau of Labor Statistics (BLS), this requires in-depth knowledge of the operational principles of the equipment (electronic, mechanical, biological, etc.) as well as knowledge about the application for which it is to be used. For instance, in order to design an artificial heart, an engineer must have extensive knowledge of electrical engineering, mechanical engineering and fluid dynamics, as well as an in-depth understanding of cardiology and physiology. Designing a lab-on-a-chip requires knowledge of electronics, nanotechnology, materials science and biochemistry. In order to design prosthetic replacement limbs, expertise in mechanical engineering and material properties as well as biomechanics and physiology is essential.

The critical skills needed by a biomedical engineer include a well-rounded understanding of several areas of engineering as well as the specific area of application. This could include studying physiology, organic chemistry, biomechanics or computer science. Continuing education and training are also necessary to keep up with technological advances and potential new applications.

SCHOOLS OFFERING BIO-ENGINEERING:

If we take a look at the top schools offering Biomedical engineering, we see the following:

  • MIT
  • Stanford
  • University of California-San Diego
  • Rice University
  • University of California-Berkeley
  • University of Pennsylvania
  • University of Michigan—Ann Arbor
  • Georgia Tech
  • Johns Hopkins
  • Duke University

As you can see, these are among the most prestigious schools in the United States.  They have had established engineering programs for decades.  Bio-engineering does not represent a new discipline for them.  There are several others and I would definitely recommend you go online to take a look if you are interested in seeing a complete list of colleges and universities offering a four (4) or five (5) year degree.

SALARY LEVELS:

The median annual wage for biomedical engineers was $86,950 in May 2014. The median wage is the wage at which half the workers in an occupation earned more than that amount and half earned less. The lowest ten (10) percent earned less than $52,680, and the highest ten (10) percent earned more than $139,350.  As you might expect, salary levels vary depending upon several factors (a short sketch of how these statistics are computed follows the list):

  • Years of experience
  • Location within the United States
  • Size of company
  • Research facility and corporate structure
  • Bonus or profit sharing arrangement of company
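
The median and percentile figures above are plain order statistics, simple to compute. Here is a quick Python sketch using made-up salaries, not BLS data.

    import statistics

    # Hypothetical salaries, not BLS data.
    salaries = [52_000, 61_000, 74_500, 86_950, 97_000, 120_000, 139_500]

    # Half the values fall below the median, half above.
    print(statistics.median(salaries))  # -> 86950

    # Cut points dividing the data into ten groups; the first and last
    # approximate the "lowest 10 percent" and "highest 10 percent" boundaries.
    deciles = statistics.quantiles(salaries, n=10)
    print(deciles[0], deciles[-1])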

EXPECTATIONS FOR EMPLOYMENT:

In their list of top jobs for 2015, CNNMoney classified biomedical engineering as the 37th best job in the US, and of the jobs in the top 37, biomedical engineering’s 10-year job growth was the third highest (27%), behind Information Assurance Analyst (37%) and Product Analyst (32%). CNN previously reported biomedical engineer as the top job in the US in 2012, with a predicted 10-year growth rate of nearly 62%, and “Biomedical Engineer” was listed as a high-paying, low-stress job by Time magazine.  There is absolutely no doubt that medical technology will advance as time goes on, so biomedical engineers will continue to be in demand.

As always, I welcome your comments.

RISE OF THE MACHINES

March 20, 2017


Movie making today is truly remarkable.  To me, one of the very best parts is animation created by computer graphics.  I’ve attended “B” movies just to see the graphic displays created by talented programmers.  The “Terminator” series, at least the first movie in that series, really captures the creative essence of graphic design technology.  I won’t replay the movie for you, but the “terminator” goes back in time to carry out its prime directive—kill John Connor.  The terminator, a robotic humanoid, has decision-making capability as well as human-like mobility that allows the plot to unfold.  Artificial intelligence, or AI, is a fascinating technology many companies are working on today.  Let’s get a proper definition of AI:

“the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Question:  Are Siri, Cortana, and Alexa eventually going to be more literate than humans? Anyone excited about the recent advancements in artificial intelligence (AI) and machine learning should also be concerned about human literacy. That’s according to Project Literacy, a global campaign, backed by education company Pearson, aimed at creating awareness of and fighting illiteracy.

Project Literacy, which has been raising awareness for its cause at SXSW 2017, recently released a report, “2027: Human vs. Machine Literacy,” that projects machines powered by AI and voice recognition will surpass the literacy levels of one in seven American adults in the next ten (10) years. “While these systems currently have a much shallower understanding of language than people do, they can already perform tasks similar to a simple text search task…exceeding the abilities of millions of people who are nonliterate,” Kate James, Project Literacy spokesperson and Chief Corporate Affairs and Global Marketing Officer at Pearson, wrote in the report. In light of this, the organization is calling for “society to commit to upgrading its people at the same rate as upgrading its technology, so that by 2030 no child is born at risk of poor literacy.”  (I would invite you to re-read this statement and shudder in your boots as I did.)

While the past twenty-five (25) years have seen disappointing progress in U.S. literacy, there have been huge gains in linguistic performance by a totally different type of actor – computers. Dramatic advances in natural language processing (Hirschberg and Manning, 2015) have led to the rise of language technologies like search engines and machine translation that “read” text and produce answers or translations that are useful for people. While these systems currently have a much shallower understanding of language than people do, they can already perform tasks similar to the simple text search task above – exceeding the abilities of millions of people who are nonliterate.
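
The “simple text search task” the report refers to is exactly the kind of shallow reading a few lines of code can already do. Here is a minimal sketch, using nothing beyond the Python standard library; the medicine-label text is an invented example.

    def search(document: str, query: str) -> list[int]:
        """Return the indices of lines containing the query string --
        the shallow "reading" a search engine performs."""
        return [i for i, line in enumerate(document.splitlines())
                if query.lower() in line.lower()]

    label = "Take two tablets daily.\nDo not exceed four tablets in 24 hours."
    print(search(label, "tablets"))  # -> [0, 1]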

According to the National Center for Education Statistics, machine literacy has already exceeded the literacy abilities of the estimated three percent (3%) of nonliterate adults in the US.

Comparing demographic data from the Global Developer Population and Demographic Study 2016 v2 and the 2015 Digest of Education Statistics finds there are more software engineers in the U.S. than school teachers. “We are focusing so much on teaching algorithms and AI to be better at language that we are forgetting that fifty percent (50%) of adults cannot read a book written at an eighth grade level,” Project Literacy said in a statement.  I retired from General Electric Appliances, where each engineer was required to write, or at least draft, the Use and Care Manuals for specific cooking products.  We were instructed to 1.) use plenty of graphic examples and 2.) write for a fifth-grade audience.  Even with that, we know from experience that many consumers never use and have no intention of reading their Use and Care Manual.  That being the case, many of the truly cool features are never used; they may as well buy the most basic product.
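
“Written at an eighth grade level” is itself something a machine can estimate. Below is a rough Python sketch of the Flesch-Kincaid grade-level formula; the syllable counter is a crude vowel-group heuristic, so treat the output as a ballpark figure.

    import re

    def syllables(word: str) -> int:
        # Crude heuristic: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fk_grade(text: str) -> float:
        """Flesch-Kincaid grade = 0.39*(words/sentences)
        + 11.8*(syllables/words) - 15.59."""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syls = sum(syllables(w) for w in words)
        return 0.39 * len(words) / sentences + 11.8 * syls / len(words) - 15.59

    text = "Turn the oven off. Let the pan cool before you clean it."
    print(round(fk_grade(text), 1))  # -> 1.5 (early grade-school level)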

Research done by Business Insider reveals that thirty-two (32) million Americans cannot currently read a road sign. Yet at the same time there are ten (10) million self-driving cars predicted to be on the roads by 2020. (One could argue this will further eliminate the need for literacy, but that is debatable.)  If we look at literacy rates for the top ten (10) countries on our planet we see the following:

Citing research from Venture Scanner, Project Literacy found that in 2015 investment in AI technologies, including natural language processing, speech recognition, and image recognition, reached $47.2 billion. Meanwhile, data on US government spending shows that the 2017 U.S. federal education budget for schools (pre-primary through secondary school) is $40.4 billion.  I’m not too sure funding for education always goes to benefit students’ education; in other words, throwing more money at this problem may not always provide the desired results. But there is no doubt that funding for AI will only increase.

“Human literacy levels have stalled since 2000. At any time, this would be a cause for concern, when one in ten people worldwide…still cannot read a road sign, a voting form, or a medicine label,” James wrote in the report. “In popular discussion about advances in artificial intelligence, it is easy…”

CONCLUSION:  AI will only continue to advance and there will come a time when robotic systems will be programmed with basic decision-making skills.  To me, this is not only fascinating but more than a little scary.
