CLOUD COMPUTING

May 20, 2017


OK, you have heard the term over and over again, but just what is cloud computing? Simply put, cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics, and more—over the Internet (“the cloud”). Companies offering these computing services are called cloud providers and typically charge based on usage, similar to how you’re billed for water or electricity at home. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions give users and enterprises the ability to store and process their data in either privately owned or third-party data centers that may be located far from the user, whether across a city or across the world. Cloud computing relies on the sharing of resources to achieve coherence and economies of scale, much like a public utility such as the electricity grid.
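
To make “rapid provisioning” concrete, here is a minimal sketch of requesting and releasing a server on demand. It assumes Amazon’s boto3 Python library and an AWS account with credentials already configured; the machine image ID below is a placeholder, not a real one.

```python
# Minimal sketch of on-demand provisioning with the boto3 AWS SDK
# (pip install boto3). Assumes AWS credentials are already configured;
# the AMI ID is a placeholder, not a real machine image.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Request a single small virtual server ("instance") on demand.
instances = ec2.create_instances(
    ImageId="ami-12345678",   # placeholder machine image
    InstanceType="t2.micro",  # small, pay-per-use instance size
    MinCount=1,
    MaxCount=1,
)
print("Provisioned:", instances[0].id)

# Release the resource when finished; billing stops once it terminates.
instances[0].terminate()
```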

ADVANTAGES AND DISADVANTAGES:

Any new technology has an upside and a downside, and the cloud is no exception.  Let’s take a look.

 Advantages

  • Lower cost for desktop clients since the applications are running in the cloud. This means clients with smaller hard drive requirements and possibly even no CD or DVD drives.
  • Peak computing needs of a business can be offloaded into cloud applications, saving the funds normally used for additional in-house servers.
  • Lower maintenance costs. This includes both hardware and software cost reductions, since client machines are much cheaper and software purchase costs are eliminated altogether for applications running in the cloud.
  • Automatic application software updates for applications in the cloud. This is another maintenance savings.
  • Vastly increased computing power availability. The scalability of the server farm provides this advantage.
  • The scalability of virtual storage provides effectively unlimited storage capacity.

 Disadvantages

  • Requires an “always on” Internet connection.
  • There are clear concerns with data security, e.g., “If I can get to my data using a web browser, who else can?”
  • Concerns for loss of data.
  • Reliability. Service interruptions are rare but can happen. Google has already had an outage.

MAJOR CLOUD SERVICE PROVIDERS:

The following names are very recognizable.  Everyone knows the “open-market” cloud service providers.

  • AMAZON
  • SALESFORCE
  • GOOGLE
  • IBM
  • MICROSOFT
  • SUN MICROSYSTEMS
  • ORACLE
  • AT & T

PRIVATE CLOUD SERVICE PROVIDERS:

With all the interest in cloud computing as a service, there is also an emerging concept of private clouds. It is a bit reminiscent of the early days of the Internet and the importing of that technology into the enterprise as intranets. The concerns for security and reliability outside corporate control are very real and troublesome aspects of the otherwise attractive technology of cloud computing services. The IT world has not forgotten the eight-hour downtime of Amazon’s S3 cloud storage on July 20, 2008. A private cloud means that the technology must be bought, built, and managed within the corporation. A company purchases cloud technology usable inside the enterprise to develop cloud applications that have the flexibility to run on the private cloud or outside on public clouds. This “hybrid environment” is in fact the direction some believe the enterprise community will be going, and some of the products that support this approach are listed below.

  • Elastra (http://www.elastra.com) is developing a server that can be used as a private cloud in a data center. Tools are available to design applications that will run in both private and public clouds.
  • 3Tetra (http://www.3tetra.com) is developing a grid operating system called ParaScale that will aggregate disk storage.
  • Cassatt (http://www.cassatt.com) will be offering technology that can be used for resource pooling.
  • NComputing (http://www.ncomputing.com) has developed a desktop PC virtualization system that allows up to 30 users to share the same PC, each with their own keyboard, monitor, and mouse. Strong claims are made about savings on PC costs, IT complexity, and power consumption by customers in government, industry, and education.

CONCLUSION:

OK, clear as mud—right?  For me, the biggest misconception is the terminology itself—the cloud.  The word “cloud” seems to imply an IT system in the sky.  The exact opposite is the case.  The cloud is an earth-based IT system serving as a universal host.  A network of computers. A network of servers.  No cloud.

NATIONAL TELEPHONE DAY

April 25, 2017


OK, are you ready for a bit of ridiculous trivia?  Today, 25 April 2017, is National Telephone Day.  I do not think there will be any denial that the telephone has revolutionized communication the world over.

It was February 14, 1876, when Marcellus Bailey, one of Alexander Graham Bell’s attorneys, rushed into the US Patent Office in Washington, D.C., to file for what would later be called the telephone. Later that same day, Elisha Gray filed a patent caveat for a similar device. A caveat is an intent to file for a patent. There is also a third contender, Antonio Meucci.  Mr. Meucci filed a caveat in November of 1871 for a talking telegraph but failed to renew it due to hardships. Because Bell’s patent was submitted first, it was awarded to him on March 7, 1876. Gray contested this decision in court, but without success.

Born March 3, 1847, in Edinburgh, United Kingdom, Bell was an instructor at a boys’ boarding school. The sounds of speech were an integral part of his life. His father developed a “Visible Speech” system for deaf students to communicate. Bell would later become a friend and benefactor of Helen Keller. Three days after his patent was approved, Bell spoke the first words by telephone to his assistant: “Mr. Watson, come here! I want to see you!”  By May of the same year, Bell and his team were ready for a public demonstration, and there would be no better place than the World’s Fair in Philadelphia. On May 10, 1876, in a crowded Machinery Hall, a man’s voice was transmitted from a small horn and carried out through a speaker to the audience. One year later, the White House installed its first phone. The telephone revolution had begun. Bell Telephone Company was founded on July 9, 1877, and the first public telephone lines were installed from Boston to Somerville, Massachusetts, the same year.  By the end of the decade, there were nearly 50,000 phones in the United States.  In May of 1967, the 100 millionth telephone was installed.

Growing up in the ’50s, I remember the rotary telephone shown in the picture below.  We were on a three-party line.  As I recall, ours was a two-ring phone call.  Of course, there was snooping.  Big-time snooping by the other two families on our line.

Let’s take a quick look at how the cell phone has literally taken over this communication method.

  • The number of mobile devices rose nine (9) percent in the first six months of 2011, to 327.6 million — more than the 315 million people living in the U.S., Puerto Rico, Guam and the U.S. Virgin Islands. Wireless network data traffic rose 111 percent, to 341.2 billion megabytes, during the same period.
  • Nearly two-thirds of Americans are now smartphone owners, and for many these devices are a key entry point to the online world. Sixty-four percent (64%) of American adults now own a smartphone of some kind, up from thirty-five percent (35%) in the spring of 2011. Smartphone ownership is especially high among younger Americans, as well as those with relatively high income and education levels.
  • Ten percent (10%) of Americans own a smartphone but do not have any other form of high-speed internet access at home beyond their phone’s data plan.
  • Using a broader measure of the access options available to them, fifteen percent (15%) of Americans own a smartphone but say that they have a limited number of ways to get online other than their cell phone.
  • Younger adults — Fifteen percent (15%) of Americans ages 18-29 are heavily dependent on a smartphone for online access.
  • Those with low household incomes and levels of educational attainment — Some thirteen percent (13%) of Americans with an annual household income of less than $30,000 per year are smartphone-dependent. Just one percent (1%) of Americans from households earning more than $75,000 per year rely on their smartphones to a similar degree for online access.
  • Non-whites — Twelve percent (12%) of African Americans and thirteen percent (13%) of Latinos are smartphone-dependent, compared with four percent (4%) of whites.
  • Sixty-two percent (62%) of smartphone owners have used their phone in the past year to look up information about a health condition.
  • Fifty-seven percent (57%) have used their phone to do online banking.
  • Forty-four percent (44%) have used their phone to look up real estate listings or other information about a place to live.
  • Forty-three percent (43%) to look up information about a job.
  • Forty percent (40%) to look up government services or information.
  • Thirty percent (30%) to take a class or get educational content.
  • Eighteen percent (18%) to submit a job application.
  • Sixty-eight percent (68%) of smartphone owners use their phone at least occasionally to follow along with breaking news events, with thirty-three percent (33%) saying that they do this “frequently.”
  • Sixty-seven percent (67%) use their phone to share pictures, videos, or commentary about events happening in their community, with thirty-five percent (35%) doing so frequently.
  • Fifty-six percent (56%) use their phone at least occasionally to learn about community events or activities, with eighteen percent (18%) doing this “frequently.”

OK, by now you get the picture.  The graphic below will basically summarize the cell phone phenomenon relative to other digital devices including desktop and laptop computers. By the way, laptop and desktop computer purchases have somewhat declined due to the increased usage of cell phones for communication purposes.

The number of smartphone users in the United States, in millions, from 2012 to a projected 2021 is given below.

CONCLUSION: “Big Al” (Mr. Bell, that is) probably knew he was on to something.  At any rate, the trend will continue toward infinity over the next few decades.

 

DIGITAL READINESS GAPS

April 23, 2017


This post uses as one reference the “Digital Readiness Gaps” report by the Pew Research Center.  This report explores, as we will here, the attitudes and behaviors that underpin individual preparedness and comfort in using digital tools for learning.

HOW DO ADULTS LEARN?  Good question. I suppose there are many ways, but I can certainly tell you that adults my age, over seventy, learn in a manner much different from my grandchildren, under twenty.  I think of “book learning” first and digital as a backup.  They head straight for their iPad or iPhone.  GOOGLE is a verb and not a company name as far as they are concerned.  (I’m actually getting there with digital search methods and now start with GOOGLE, but I reference multiple sources before being satisfied with only one. For some reason, I still trust books as opposed to digital.)

According to Mr. Malcolm Knowles, who was a pioneer in adult learning, there are six (6) main characteristics of adult learners, as follows:

  • Adult learning is self-directed/autonomous
    Adult learners are actively involved in the learning process such that they make choices relevant to their learning objectives.
  • Adult learning utilizes knowledge & life experiences
    Under this approach educators encourage learners to connect their past experiences with their current knowledge-base and activities.
  • Adult learning is goal-oriented
    The motivation to learn is increased when the relevance of the “lesson” through real-life situations is clear, particularly in relation to the specific concerns of the learner.
  • Adult learning is relevancy-oriented
    One of the best ways for adults to learn is by relating the assigned tasks to their own learning goals. If it is clear that the activities they are engaged in directly contribute to achieving their personal learning objectives, then they will be inspired and motivated to engage in projects and successfully complete them.
  • Adult learning highlights practicality
    Placement is a means of helping students apply the theoretical concepts learned inside the classroom to real-life situations.
  • Adult learning encourages collaboration
    Adult learners thrive in collaborative relationships with their educators. When learners are considered by their instructors as colleagues, they become more productive. When their contributions are acknowledged, then they are willing to put out their best work.

One very important note: these six characteristics encompass the “digital world” and conventional methods; i.e. books, magazines, newspapers, etc.

As mentioned above, a recent Pew Research Center report shows that adoption of technology for adult learning in both personal and job-related activities varies by people’s socio-economic status, their race and ethnicity, and their level of access to home broadband and smartphones. Another report showed that some users are unable to make the internet and mobile devices function adequately for key activities such as looking for jobs.

Specifically, the Pew report assessed American adults according to five main factors:

  • Their confidence in using computers
  • Their facility with getting new technology to work
  • Their use of digital tools for learning
  • Their ability to determine the trustworthiness of online information
  • Their familiarity with contemporary “education tech” terms

It is important to note that the report addresses only adults’ proclivity for digital learning, not learning by any other means; just the availability of digital devices to facilitate learning. If we look at the “conglomerate” from the PIAAC Fact Sheet, we see the following:

The Pew analysis details several distinct groups of Americans who fall along a spectrum of digital readiness from relatively more prepared to relatively hesitant. Those who tend to be hesitant about embracing technology in learning are below average on the measures of readiness, such as needing help with new electronic gadgets or having difficulty determining whether online information is trustworthy. Those whose profiles indicate a higher level of preparedness for using tech in learning are collectively above average on measures of digital readiness.  The chart below will indicate their classifications.

The breakdown is as follows:

Relatively Hesitant – 52% of adults in three distinct groups. This overall cohort is made up of three different clusters of people who are less likely to use digital tools in their learning. This has to do, in part, with the fact that these groups have generally lower levels of involvement with personal learning activities. It is also tied to their professed lower level of digital skills and trust in the online environment.

  • A group of 14% of adults make up The Unprepared. This group has both low levels of digital skills and limited trust in online information. The Unprepared rank at the bottom of those who use the internet to pursue learning, and they are the least digitally ready of all the groups.
  • We call one small group Traditional Learners, and they make up 5% of Americans. They are active learners, but use traditional means to pursue their interests. They are less likely to fully engage with digital tools, because they have concerns about the trustworthiness of online information.
  • A larger group, The Reluctant, make up 33% of all adults. They have higher levels of digital skills than The Unprepared, but very low levels of awareness of new “education tech” concepts and relatively lower levels of performing personal learning activities of any kind. This is correlated with their general lack of use of the internet in learning.

Relatively more prepared – 48% of adults in two distinct groups. This cohort is made up of two groups who are above average in their likelihood to use online tools for learning.

  • A group we call Cautious Clickers comprises 31% of adults. They have tech resources at their disposal, trust and confidence in using the internet, and the educational underpinnings to put digital resources to use for their learning pursuits. But they have not waded into e-learning to the extent the Digitally Ready have and are not as likely to have used the internet for some or all of their learning.
  • Finally, there are the Digitally Ready. They make up 17% of adults, and they are active learners and confident in their ability to use digital tools to pursue learning. They are aware of the latest “ed tech” tools and are, relative to others, more likely to use them in the course of their personal learning. The Digitally Ready, in other words, have high demand for learning and use a range of tools to pursue it – including, to an extent significantly greater than the rest of the population, digital outlets such as online courses or extensive online research.

CONCLUSIONS:

To me, one of the greatest lessons from my university days—NEVER STOP LEARNING.  I had one professor, Dr. Bob Maxwell, who told us the half-life of a graduate engineer is approximately five (5) years.  If you stop learning, the information you receive will become obsolete in five years.  At the pace of technology today, that may be five months.  You never stop learning AND you embrace current technology.  In other words—do digital. Digital is your friend.  GOOGLE, no matter how flawed, can give you answers much quicker than other sources, and it’s readily available and just plain handy.  At least start there; then trust but verify.


If you work or have worked in manufacturing, you know robotic systems have definitely had a distinct impact on assembly, inventory acquisition from storage areas, and finished-part warehousing.   There is considerable concern that the “rise of the machines” will eventually replace individuals performing a variety of tasks.  I personally do not feel this will be the case, although there is no doubt robotic systems have found their way onto the manufacturing floor.

From the “Executive Summary World Robotics 2016 Industrial Robots”, we see the following:

2015:  Robot sales increased by 15% to 253,748 units, by far the highest level ever recorded for one year. The main driver of the growth in 2015 was general industry, with an increase of 33% compared to 2014, in particular the electronics industry (+41%), the metal industry (+39%), and the chemical, plastics and rubber industry (+16%). Robot sales in the automotive industry increased only moderately in 2015, after a five-year period of continued considerable increase. China significantly expanded its leading position as the biggest market, with a share of 27% of the total supply in 2015.

The chart below puts the sales picture in perspective, showing how system sales have increased since 2003.

It is very important to note that seventy-five percent (75%) of global robot sales comes from just five (5) countries. The five major markets in 2015 were China, the Republic of Korea, Japan, the United States, and Germany.

As you can see from the bar chart above, this share increased from seventy percent (70%) in 2014. Since 2013, China has been the biggest robot market in the world, with continued dynamic growth. With sales of about 68,600 industrial robots in 2015 – an increase of twenty percent (20%) compared to 2014 – China alone surpassed Europe’s total sales volume (50,100 units). Chinese robot suppliers installed about 20,400 units according to information from the China Robot Industry Alliance (CRIA). Their sales volume was about twenty-nine percent (29%) higher than in 2014. Foreign robot suppliers increased their sales by seventeen percent (17%) to 48,100 units (including robots produced by international robot suppliers in China). The market share of Chinese robot suppliers grew from twenty-five percent (25%) in 2013 to twenty-nine percent (29%) in 2015. Between 2010 and 2015, the total supply of industrial robots increased by about thirty-six percent (36%) per year on average.
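
Growth figures like that thirty-six percent (36%) are compound annual growth rates (CAGR). Here is a quick sketch of the arithmetic in Python; the 2015 unit count is from the report, while the 2010 figure is an assumed round number used only for illustration.

```python
# Sketch of the CAGR arithmetic behind "about 36% per year on average."
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end_value / start_value) ** (1 / years) - 1

units_2010 = 15_000  # assumed approximate China supply in 2010 (illustrative)
units_2015 = 68_600  # China supply in 2015, per the report

rate = cagr(units_2010, units_2015, years=5)
print(f"Average annual growth: {rate:.1%}")  # roughly 36%
```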

About 38,300 units were sold to the Republic of Korea, fifty-five percent (55%) more than in 2014. The increase is partly due to a number of companies that started to report their data only in 2015. The actual growth rate in 2015 is estimated at about thirty percent (30%) to thirty-five percent (35%).

In 2015, robot sales in Japan increased by twenty percent (20%) to about 35,000 units reaching the highest level since 2007 (36,100 units). Robot sales in Japan followed a decreasing trend between 2005 (reaching the peak at 44,000 units) and 2009 (when sales dropped to only 12,767 units). Between 2010 and 2015, robot sales increased by ten percent (10%) on average per year (CAGR).

Robot installations in the United States continued to increase in 2015, by five percent (5%) to a peak of 27,504 units. The driver of this continued growth since 2010 has been the ongoing trend to automate production in order to strengthen American industries in the global market, to keep manufacturing at home and, in some cases, to bring back manufacturing that had previously been sent overseas.

Germany is the fifth largest robot market in the world. In 2015, the number of robots sold increased slightly to a new record high at 20,105 units compared to 2014 (20,051 units). In spite of the high robot density of 301 units per 10,000 employees, annual sales are still very high in Germany. Between 2010 and 2015, annual sales of industrial robots increased by an average of seven percent (7%) in Germany (CAGR).

From the graphic below, you can see which industries employ robotic systems the most.

Growth rates will not lessen, with projections through 2019 as follows:

A fascinating development involves the assistance of human endeavor by robotic systems.  This fairly new technology is called collaborative robots, or COBOTS.  Let’s get a definition.

COBOTS:

A cobot or “collaborative robot” is a robot designed to assist human beings as a guide or assistor in a specific task. A regular robot is designed to be programmed to work more or less autonomously. In one approach to cobot design, the cobot allows a human to perform certain operations successfully if they fit within the scope of the task and to steer the human on a correct path when the human begins to stray from or exceed the scope of the task.

“The term ‘collaborative’ is used to distinguish robots that collaborate with humans from robots that work behind fences without any direct interaction with humans. In contrast, terms such as articulated, cartesian, delta, and SCARA distinguish different robot kinematics.”

Traditional industrial robots excel at applications that require extremely high speeds, heavy payloads and extreme precision.  They are reliable and very useful for many types of high volume, low mix applications.  But they pose several inherent challenges for higher mix environments, particularly in smaller companies.  First and foremost, they are very expensive, particularly when considering programming and integration costs.  They require specialized engineers working over several weeks or even months to program and integrate them to do a single task.  And they don’t multi-task easily between jobs since that setup effort is so substantial.  Plus, they can’t be readily integrated into a production line with people because they are too dangerous to operate in close proximity to humans.

For small manufacturers with limited budgets, space and staff, a collaborative robot such as Baxter (shown below) is an ideal fit because it overcomes many of these challenges.  It’s extremely intuitive, integrates seamlessly with other automation technologies, is very flexible and is quite affordable with a base price of only $25,000.  As a result, Baxter is well suited for many applications, such as those requiring manual labor and a high degree of flexibility that are currently unmet by traditional technologies.

Baxter is one example of collaborative robotics and some say is by far the safest, easiest, most flexible and least costly robot of its kind today.  It features a sophisticated multi-tier safety design that includes a smooth, polymer exterior with fewer pinch points; back-drivable joints that can be rotated by hand; and series elastic actuators which help it to minimize the likelihood of injury during inadvertent contact.

It’s also incredibly simple to use.  Line workers and other non-engineers can quickly learn to train the robot themselves, by hand.  With Baxter, the robot itself is the interface, with no teaching pendant or external control system required.  And with its ease of use and diverse skill set, Baxter is extremely flexible, capable of being utilized across multiple lines and tasks in a fraction of the time and cost it would take to re-program other robots.  Plus, Baxter is made in the U.S.A., which is a particularly appealing aspect for many customers looking to re-shore their own production operations.

The picture above shows a lady working alongside a collaborative robotic system, both performing a specific task. She feels right at home with her mechanical friend because safe use is a core design requirement of these systems.

Certifiable safety is the most important precondition for a collaborative robot system to be applied in an industrial setting.  Available solutions that fulfill the requirements imposed by safety standardization often show limited performance or productivity gains, as most of today’s implemented scenarios are limited to very static processes. This means a strict stop-and-go of the robot process when a human enters or leaves the workspace, as sketched below.
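
To see what that stop-and-go policy amounts to, here is a toy sketch in Python. The sensor and robot interfaces are hypothetical placeholders; real collaborative cells rely on certified safety controllers, not application code.

```python
# Toy sketch of the "strict stop and go" policy described above.
# `sensor` and `robot` are hypothetical placeholder objects.
import time

SAFE_DISTANCE_M = 1.0  # assumed separation threshold, in meters

def safety_loop(sensor, robot):
    while True:
        distance = sensor.nearest_human_distance()  # hypothetical reading
        if distance < SAFE_DISTANCE_M:
            robot.stop()    # human inside the workspace: halt the process
        else:
            robot.resume()  # workspace clear: continue the process
        time.sleep(0.01)    # re-check roughly 100 times per second
```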

Collaborative systems are still a work in progress, but their use has expanded greatly, primarily because safety requirements can now be satisfied.  Upcoming years will only bring greater acceptance, and do not be surprised if you see robots and humans working side by side on every manufacturing floor within the next decade.

As always, I welcome your comments.


Biomedical Engineering may be a fairly new term to some of you.   What is a biomedical engineer?  What do they do? What companies do they work for?  What educational background is necessary for becoming a biomedical engineer?  These are good questions.  From LiveScience we have the following definition:

“Biomedical engineering, or bioengineering, is the application of engineering principles to the fields of biology and health care. Bioengineers work with doctors, therapists and researchers to develop systems, equipment and devices in order to solve clinical problems.”

Biomedical engineering has evolved over the years in response to advancements in science and technology.  This is NOT a new classification for engineering involvement.  Engineers have been at this for a while.  Throughout history, humans have made increasingly more effective devices to diagnose and treat diseases and to alleviate, rehabilitate or compensate for disabilities or injuries. One example is the evolution of hearing aids to mitigate hearing loss through sound amplification. The ear trumpet, a large horn-shaped device that was held up to the ear, was the only “viable form” of hearing assistance until the mid-20th century, according to the Hearing Aid Museum. Electrical devices had been developed before then, but were slow to catch on, the museum said on its website.

The possibilities of a bioengineer’s charge are as follows:

The equipment envisioned, designed, prototyped, tested, and eventually commercialized has made a resounding contribution and added value to our healthcare system.  OK, that’s all well and good, but exactly what do bioengineers do on a daily basis?  What do they hope to accomplish?   Please direct your attention to the figure below.  As you can see, the world of the bioengineer can be somewhat complex, with many options available.

The breadth of activity of biomedical engineers is significant. The field has moved from being concerned primarily with the development of medical devices in the 1950s and 1960s to include a wider ranging set of activities. As illustrated in the figure above, the field of biomedical engineering now includes many new career areas. These areas include:

  • Application of engineering system analysis (physiologic modeling, simulation, and control) to biological problems
  • Detection, measurement, and monitoring of physiologic signals (i.e., biosensors and biomedical instrumentation)
  • Diagnostic interpretation via signal-processing techniques of bioelectric data
  • Therapeutic and rehabilitation procedures and devices (rehabilitation engineering)
  • Devices for replacement or augmentation of bodily functions (artificial organs)
  • Computer analysis of patient-related data and clinical decision making (i.e., medical informatics and artificial intelligence)
  • Medical imaging; that is, the graphical display of anatomic detail or physiologic function
  • The creation of new biologic products (i.e., biotechnology and tissue engineering)

Typical pursuits of biomedical engineers include:

  • Research in new materials for implanted artificial organs
  • Development of new diagnostic instruments for blood analysis
  • Writing software for analysis of medical research data
  • Analysis of medical device hazards for safety and efficacy
  • Development of new diagnostic imaging systems
  • Design of telemetry systems for patient monitoring
  • Design of biomedical sensors
  • Development of expert systems for diagnosis and treatment of diseases
  • Design of closed-loop control systems for drug administration (see the sketch following this list)
  • Modeling of the physiologic systems of the human body
  • Design of instrumentation for sports medicine
  • Development of new dental materials
  • Design of communication aids for individuals with disabilities
  • Study of pulmonary fluid dynamics
  • Study of biomechanics of the human body
  • Development of material to be used as replacement for human skin
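
To make one of these pursuits concrete, here is a minimal sketch of the closed-loop drug administration idea noted above: a proportional-integral controller steering an infusion rate toward a target concentration. The gains and the one-compartment “patient” model are invented for illustration, not clinical values.

```python
# Toy sketch of closed-loop drug administration (see the list above):
# a proportional-integral controller steers infusion toward a target
# concentration. All numbers are invented, not clinical values.
TARGET = 4.0       # desired drug concentration (arbitrary units)
KP, KI = 0.8, 0.1  # assumed controller gains
CLEARANCE = 0.2    # assumed fraction of drug eliminated per time step

concentration = 0.0
integral = 0.0
for step in range(20):
    error = TARGET - concentration
    integral += error
    infusion = max(0.0, KP * error + KI * integral)  # pump cannot run backward
    # Crude one-compartment patient model: uptake minus elimination.
    concentration += infusion - CLEARANCE * concentration
    print(f"step {step:2d}: infusion={infusion:5.2f} conc={concentration:5.2f}")
```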

I think you will agree these areas of interest encompass any one of several engineering disciplines, e.g., mechanical, chemical, electrical, computer science, and even civil engineering as applied to facilities and hospital structures.


I know I’m spoiled.  I like to know that when I get behind the wheel, put the key in the ignition, start my vehicle, pull out of the driveway, etc., I can get to my destination without mechanical issues.  I think we all are basically there.  Now, to do that, you have to maintain your “ride.”  I have a 1999 Toyota PreRunner with 308,000-plus miles. Every three thousand miles I have it serviced.  Too much, you say?  Well, I do have 308K and it’s still humming like a Singer sewing machine.

Mr. Charles Murray has been following the automotive industry for over thirty years.  Mr. Murray is also a senior editor for Design News magazine.  Much of the information below comes from his recent post on the TEN MOST UNRELIABLE VEHICLES.  Each year Consumer Reports receives over one-half million consumer surveys with reliability information on the vehicles readers drive.  The story is not always a good one.  Let’s take a look at what CU readers consider the most unreliable vehicles and why.

Please keep in mind this is a CU report based upon feedback from vehicle owners.  Please do not shoot the messenger.  As always, I welcome your comments and hope this helps your buying research.

THE NEXT FIVE (5) YEARS

February 15, 2017


As you well know, there are many projections relative to economies, the stock market, sports teams, entertainment, politics, technology, etc.   People the world over have given their projections for what might happen in 2017.  The world of computing technology is absolutely no different.  Certain information for this post is taken from the IEEE Computer Society’s Computer magazine website (computer.org/computer).  These guys are pretty good at projections and have been correct multiple times over the past two decades.

The IEEE Computer Society is the world’s leading membership organization dedicated to computer science and technology. Serving more than 60,000 members, the IEEE Computer Society is the trusted information, networking, and career-development source for a global community of technology leaders that includes researchers, educators, software engineers, IT professionals, employers, and students.  In addition to conferences and publishing, the IEEE Computer Society is a leader in professional education and training, and has forged development and provider partnerships with major institutions and corporations internationally. These rich, self-selected, and self-paced programs help companies improve the quality of their technical staff and attract top talent while reducing costs.

With these credentials, you might expect them to be on the cutting edge of computer technology and development and be ahead of the curve as far as computer technology projections.  Let’s take a look.  Some of this absolutely blows me away.

HUMAN-BRAIN INTERFACE:

This effort started within the medical profession and is continuing as research progresses.  It has taken time, but after more than a decade of engineering work, researchers at Brown University and a Utah company, Blackrock Microsystems, have commercialized a wireless device that can be attached to a person’s skull and transmit, via radio, thought commands collected from a brain implant. Blackrock says it will seek clearance for the system from the U.S. Food and Drug Administration, so that the mental remote control can be tested in volunteers, possibly as soon as this year.

The device was developed by a consortium, called BrainGate, which is based at Brown and was among the first to place implants in the brains of paralyzed people and show that electrical signals emitted by neurons inside the cortex could be recorded, then used to steer a wheelchair or direct a robotic arm (see “Implanting Hope”).
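
The decoding step in that pipeline is, at its simplest, a mapping from recorded neural firing rates to a movement command. Below is a heavily simplified sketch; the weights and firing rates are invented, and real BrainGate decoders are trained per patient with methods such as Kalman filters.

```python
# Heavily simplified sketch of neural decoding: map firing rates from a
# 96-channel electrode array to a 2-D velocity command. Weights and
# rates are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_channels = 96  # channel count of a typical implanted electrode array

# Assumed linear decoder: firing rates (spikes/s) -> (vx, vy) velocity.
weights = rng.normal(scale=0.02, size=(2, n_channels))
firing_rates = rng.poisson(lam=20, size=n_channels).astype(float)

velocity = weights @ firing_rates  # decoded cursor/arm velocity command
print("Decoded velocity command:", velocity)
```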

A major limit to these provocative experiments has been that patients can only use the prosthetic with the help of a crew of laboratory assistants. The brain signals are collected through a cable screwed into a port on their skull, then fed along wires to a bulky rack of signal processors. “Using this in the home setting is inconceivable or impractical when you are tethered to a bunch of electronics,” says Arto Nurmikko, the Brown professor of engineering who led the design and fabrication of the wireless system.

CAPABILITIES-HARDWARE PROJECTION:

Unless you have been living in a tree house for the last twenty years, you know digital security is a huge problem.  IT professionals and companies writing code will definitely continue working on how to make our digital world more secure.  That is a given.

EXASCALE:

We can forget Moore’s Law, which refers to an observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention, and he predicted that this trend would continue into the foreseeable future. Although the pace has slowed, the number of transistors per square inch has since doubled approximately every 18 months. This is used as the current definition of Moore’s law.  We are well beyond that, with processing speed progressing at “warp six.”
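
The arithmetic behind that doubling is simple compounding, as this quick sketch shows:

```python
# Doubling every 18 months multiplies transistor density by
# 2 ** (months / 18) over any span of time.
def density_multiplier(years: float, doubling_months: float = 18.0) -> float:
    return 2 ** (years * 12 / doubling_months)

for years in (3, 6, 12):
    print(f"{years:2d} years -> x{density_multiplier(years):,.0f}")
# 3 years -> x4, 6 years -> x16, 12 years -> x256
```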

NON-VOLATILE MEMORY:

If you are an old guy like me, you can remember when computer memory cost an arm and a leg.  Take a look at the chart below and you will get an idea of how memory costs have decreased over the years.

(Chart: hard-drive cost per gigabyte)

As you can see, costs have dropped remarkably over the years.
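
If you want to put a number on “remarkably,” the decline can be read as an average annual rate. A minimal sketch, where both price points are assumed round figures for illustration only:

```python
# Read a cost-per-gigabyte decline as an average annual rate.
# Both price points are assumed round figures, not measured data.
def annual_decline(start_cost: float, end_cost: float, years: int) -> float:
    return 1 - (end_cost / start_cost) ** (1 / years)

cost_1995 = 1000.0  # assumed dollars per GB in 1995 (illustrative)
cost_2015 = 0.03    # assumed dollars per GB in 2015 (illustrative)
print(f"Average decline: {annual_decline(cost_1995, cost_2015, 20):.0%}/yr")
```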

PHOTONICS:


POWER-CONSERVATIVE MULTICORES:


CONCLUSION:

If you combine the above predictions with 1.) Big Data, 2.) the Internet of Things (IoT), 3.) Wearable Technology, 4.) Manufacturing 4.0, 5.) Biometrics, and other fast-moving technologies, you have a world in which “only the adventurous thrive.”  If you do not like change, I recommend you enroll in a monastery.  You will not survive gracefully with technology on the rampage. Just a thought.
