CLOUD COMPUTING

May 20, 2017


OK, you have heard the term over and over again, but just what is cloud computing? Simply put, cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics, and more—over the Internet (“the cloud”). Companies offering these computing services are called cloud providers, and they typically charge for them based on usage, similar to how you’re billed for water or electricity at home. It is a type of Internet-based computing that provides shared processing resources and data to computers and other devices on demand: a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions give users and enterprises various ways to store and process their data in privately owned or third-party data centers that may be located far from the user, anywhere from across a city to across the world. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, much like a public utility such as the electricity grid.
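
To make the pay-for-what-you-use idea concrete, here is a minimal sketch of talking to a cloud provider’s storage service from a few lines of Python, using Amazon’s boto3 SDK. The bucket name is hypothetical and the snippet assumes AWS credentials are already configured on the machine; the point is simply that the provider’s data center, not your PC, does the storing.

    # Minimal sketch: store and list files in a cloud object store (Amazon S3 via boto3).
    # Assumes AWS credentials are configured; "my-example-bucket" is a hypothetical bucket.
    import boto3

    s3 = boto3.client("s3")

    # Upload a local file; you are billed only for the storage and requests you actually use.
    s3.upload_file("report.pdf", "my-example-bucket", "backups/report.pdf")

    # List what is stored under the "backups/" prefix.
    response = s3.list_objects_v2(Bucket="my-example-bucket", Prefix="backups/")
    for obj in response.get("Contents", []):
        print(obj["Key"], obj["Size"], "bytes")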

ADVANTAGES AND DISADVANTAGES:

Any new technology has an upside and downside. There are obviously advantages and disadvantages when using the cloud.  Let’s take a look.

 Advantages

  • Lower cost for desktop clients since the applications are running in the cloud. This means clients with smaller hard drive requirements and possibly even no CD or DVD drives.
  • Peak computing needs of a business can be offloaded into cloud applications, saving the funds normally used for additional in-house servers.
  • Lower maintenance costs. This includes both hardware and software: client machines can be much cheaper, and software purchase costs are eliminated altogether for applications running in the cloud.
  • Automatic application software updates for applications in the cloud. This is another maintenance savings.
  • Vastly increased computing power availability. The scalability of the server farm provides this advantage (a brief sketch of automatic scaling follows this list).
  • The scalability of virtual storage provides effectively unlimited storage capacity.
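
To illustrate the peak-load and scalability points above, here is a rough sketch of letting a cloud provider add and remove servers automatically instead of buying extra in-house hardware. It uses Amazon’s boto3 SDK; the group name, launch template and subnet ID are hypothetical and assumed to already exist in the account.

    # Rough sketch: elastic capacity via an auto scaling group (AWS, via boto3).
    # All names and IDs below are hypothetical placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling")

    # Run between 2 and 20 servers, depending on demand.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-tier",
        LaunchTemplate={"LaunchTemplateName": "web-server-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=20,
        VPCZoneIdentifier="subnet-1234abcd",  # hypothetical subnet
    )

    # Track average CPU at about 60%: instances are added under load and removed when idle.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-tier",
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 60.0,
        },
    )

The business appeal is exactly the point above: the extra servers exist only while the peak lasts, and the bill follows suit.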

 Disadvantages

  • Requires an “always on” Internet connection.
  • There are clear concerns with data security, e.g., “If I can get to my data using a web browser, who else can?”
  • Concerns for loss of data.
  • Reliability. Service interruptions are rare but can happen. Google has already had an outage.

MAJOR CLOUD SERVICE PROVIDERS:

The following names are very recognizable.  Everyone knows the “open-market” cloud service providers.

  • AMAZON
  • SALESFORCE
  • GOOGLE
  • IBM
  • MICROSOFT
  • SUN MICROSYSTEMS
  • ORACLE
  • AT & T

PRIVATE CLOUD SERVICE PROVIDERS:

With all the interest in cloud computing as a service, there is also an emerging concept of private clouds. It is a bit reminiscent of the early days of the Internet and the importing of that technology into the enterprise as intranets. The concerns about security and reliability outside corporate control are very real and troublesome aspects of the otherwise attractive technology of cloud computing services. The IT world has not forgotten the eight-hour downtime of the Amazon S3 cloud service on July 20, 2008. A private cloud means that the technology must be bought, built and managed within the corporation. A company purchases cloud technology usable inside the enterprise to develop cloud applications that have the flexibility to run on the private cloud or outside on public clouds. This “hybrid environment” is in fact the direction that some believe the enterprise community will be going, and some of the products that support this approach are listed below.

  • Elastra (http://www.elastra.com) is developing a server that can be used as a private cloud in a data center. Tools are available to design applications that will run in both private and public clouds.
  • 3Tetra (http://www.3tetra.com) is developing a grid operating system called ParaScale that will aggregate disk storage.
  • Cassatt (http://www.cassatt.com) will be offering technology that can be used for resource pooling.
  • NComputing (http://www.ncomputing.com) has developed a desktop PC virtualization system that allows up to 30 users to share the same PC, each with their own keyboard, monitor and mouse. Strong claims are made about savings on PC costs, IT complexity and power consumption by customers in government, industry and education communities.

CONCLUSION:

OK, clear as mud—right?  For me, the biggest misconception is the terminology itself—the cloud.   The word “cloud” seems to imply an IT system in the sky.  The exact opposite is the case.  The cloud is an earth-based IT system serving as a universal host.  A network of computers. A network of servers.  No cloud.

NATIONAL TELEPHONE DAY

April 25, 2017


OK, are you ready for a bit of ridiculous trivia?  Today, 25 April 2017, is National Telephone Day.  I do not think there will be any denial that the telephone has revolutionized communication the world over.

It was February 14, 1876, when Marcellus Bailey, one of Alexander Graham Bell’s attorneys, rushed into the U.S. Patent Office to file for what would later be called the telephone. Later that same day, Elisha Gray filed a patent caveat for a similar device. A caveat is an intent to file for a patent. There is also a third contender, Antonio Meucci.  Mr. Meucci filed a caveat in November of 1871 for a talking telegraph but failed to renew it due to hardships. Because Bell’s patent was submitted first, it was awarded to him on March 7, 1876. Gray contested this decision in court, but without success.

Born March 3, 1847, in Edinburgh, United Kingdom, Bell was an instructor at a boys’ boarding school. The sounds of speech were an integral part of his life. His father developed a “Visible Speech” system for deaf students to communicate. Bell would later become a friend and benefactor of Helen Keller. Three days after his patent was approved, Bell spoke the first words by telephone to his assistant: “Mr. Watson, come here! I want to see you!”  By May of the same year, Bell and his team were ready for a public demonstration, and there would be no better place than the World’s Fair in Philadelphia. On May 10, 1876, in a crowded Machinery Hall, a man’s voice was transmitted from a small horn and carried out through a speaker to the audience. One year later, the White House installed its first phone. The telephone revolution began. Bell Telephone Company was founded on July 9, 1877, and the first public telephone lines were installed from Boston to Somerville, Massachusetts, the same year.  By the end of the decade, there were nearly 50,000 phones in the United States.  In May of 1967, the 1 millionth telephone was installed.

Growing up in the ’50s, I remember the rotary telephone shown in the picture below.  We were on a three-party line.  As I recall, ours was a two-ring phone call.  Of course, there was snooping.  Big time snooping by the other two families on our line.

Let’s take a quick look at how the cell phone has literally taken over this communication method.

  • The number of mobile devices rose nine (9) percent in the first six months of 2011, to 327.6 million — more than the 315 million people living in the U.S., Puerto Rico, Guam and the U.S. Virgin Islands. Wireless network data traffic rose 111 percent, to 341.2 billion megabytes, during the same period.
  • Nearly two-thirds of Americans are now smartphone owners, and for many these devices are a key entry point to the online world. Sixty-four percent (64%) of American adults now own a smartphone of some kind, up from thirty-five percent (35%) in the spring of 2011. Smartphone ownership is especially high among younger Americans, as well as those with relatively high income and education levels.
  • Ten percent (10%) of Americans own a smartphone but do not have any other form of high-speed internet access at home beyond their phone’s data plan.
  • Using a broader measure of the access options available to them, fifteen percent (15%) of Americans own a smartphone but say that they have a limited number of ways to get online other than their cell phone.
  • Younger adults — Fifteen percent (15%) of Americans ages 18-29 are heavily dependent on a smartphone for online access.
  • Those with low household incomes and levels of educational attainment — Some thirteen percent (13%) of Americans with an annual household income of less than $30,000 per year are smartphone-dependent. Just one percent (1%) of Americans from households earning more than $75,000 per year rely on their smartphones to a similar degree for online access.
  • Non-whites — Twelve percent (12%) of African Americans and thirteen percent (13%) of Latinos are smartphone-dependent, compared with four percent (4%) of whites.
  • Sixty-two percent (62%) of smartphone owners have used their phone in the past year to look up information about a health condition.
  • Fifty-seven percent (57%) have used their phone to do online banking.
  • Forty-four percent (44%) have used their phone to look up real estate listings or other information about a place to live.
  • Forty-three percent (43%) to look up information about a job.
  • Forty percent (40%) to look up government services or information.
  • Thirty percent (30%) to take a class or get educational content.
  • Eighteen percent (18%) to submit a job application.
  • Sixty-eight percent (68%) of smartphone owners use their phone at least occasionally to follow along with breaking news events, with thirty-three percent (33%) saying that they do this “frequently.”
  • Sixty-seven percent (67%) use their phone to share pictures, videos, or commentary about events happening in their community, with 35% doing so frequently.
  • Fifty-six percent (56%) use their phone at least occasionally to learn about community events or activities, with eighteen percent (18%) doing this “frequently.”

OK, by now you get the picture.  The graphic below will basically summarize the cell phone phenomenon relative to other digital devices including desktop and laptop computers. By the way, laptop and desktop computer purchases have somewhat declined due to the increased usage of cell phones for communication purposes.

The number of smartphone users in the United States, in millions, from 2012 through a projected 2021 is given below.

CONCLUSION: “Big Al” (Mr. Bell, that is) probably knew he was on to something.  At any rate, the trend will continue towards infinity over the next few decades.

 

RISE OF THE MACHINES

March 20, 2017


Movie making today is truly remarkable.  To me, one of the very best parts is animation created by computer graphics.  I’ve attended “B” movies just to see the graphic displays created by talented programmers.  The “Terminator” series, at least the first movie in that series, really captures the creative essence of graphic design technology.  I won’t replay the movie for you, but the “terminator” goes back in time to carry out its prime directive—kill John Connor.  The terminator, a robotic humanoid, has decision-making capability as well as human-like mobility that allows the plot to unfold.  Artificial intelligence, or AI, is a fascinating technology many companies are working on today.  Let’s start with a proper definition of AI:

“the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Question:  Are Siri, Cortana, and Alexa eventually going to be more literate than humans? Anyone excited about the recent advancements in artificial intelligence (AI) and machine learning should be concerned about human literacy as well. That’s according to Project Literacy, a global campaign backed by education company Pearson and aimed at creating awareness and fighting illiteracy.

Project Literacy, which has been raising awareness for its cause at SXSW 2017, recently released a report, “2027: Human vs. Machine Literacy,” that projects machines powered by AI and voice recognition will surpass the literacy levels of one in seven American adults in the next ten (10) years. “While these systems currently have a much shallower understanding of language than people do, they can already perform tasks similar to [a] simple text search task…exceeding the abilities of millions of people who are nonliterate,” Kate James, Project Literacy spokesperson and Chief Corporate Affairs and Global Marketing Officer at Pearson, wrote in the report. In light of this, the organization is calling for “society to commit to upgrading its people at the same rate as upgrading its technology, so that by 2030 no child is born at risk of poor literacy.”  (I would invite you to re-read this statement and shudder in your boots as I did.)

While the past twenty-five (25) years have seen disappointing progress in U.S. literacy, there have been huge gains in linguistic performance by a totally different type of actor – computers. Dramatic advances in natural language processing (Hirschberg and Manning, 2015) have led to the rise of language technologies like search engines and machine translation that “read” text and produce answers or translations that are useful for people. While these systems currently have a much shallower understanding of language than people do, they can already perform tasks similar to the simple text search task described in the report – exceeding the abilities of millions of people who are nonliterate.
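
To see how modest that “simple text search task” really is, here is a toy sketch in Python: given a question, it ranks short passages by shared words. Real search engines and translation systems are far more sophisticated; the passages and query below are made up, and the point is only that this kind of matching is mechanical work a machine does easily, while millions of nonliterate adults cannot.

    # Toy text search: rank passages by how many words they share with the query.
    def score(query: str, passage: str) -> int:
        return len(set(query.lower().split()) & set(passage.lower().split()))

    passages = [
        "Take one tablet twice daily with food.",
        "Polling stations are open from 6 a.m. to 6 p.m.",
        "Speed limit 45 unless otherwise posted.",
    ]

    query = "how many tablets should I take daily"
    best = max(passages, key=lambda p: score(query, p))
    print(best)  # -> "Take one tablet twice daily with food."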

According to the National Center for Education Statistics, machine literacy has already exceeded the literacy abilities of the estimated three percent (3%) of nonliterate adults in the U.S.

Comparing demographic data from the Global Developer Population and Demographic Study 2016 v2 and the 2015 Digest of Education Statistics shows there are more software engineers in the U.S. than school teachers. “We are focusing so much on teaching algorithms and AI to be better at language that we are forgetting that fifty percent (50%) of adults cannot read a book written at an eighth-grade level,” Project Literacy said in a statement.  I retired from General Electric Appliances.   Each engineer was required to write, or at least draft, the Use and Care Manuals for specific cooking products.  We were instructed to 1.) use plenty of graphic examples and 2.) write for a fifth-grade audience.  Even with that, we know from experience that many consumers never read, and have no intention of reading, their Use and Care Manual.  With this being the case, many of the truly cool features are never used.  They may as well buy the most basic product.

Research done by Business Insider reveals that thirty-two (32) million Americans cannot currently read a road sign. Yet at the same time, ten (10) million self-driving cars are predicted to be on the roads by 2020. (One could argue this will further eliminate the need for literacy, but that is debatable.)  If we look at literacy rates for the top ten (10) countries on our planet, we see the following:

Citing research from Venture Scanner, Project Literacy found that in 2015 investment in AI technologies, including natural language processing, speech recognition, and image recognition, reached $47.2 billion. Meanwhile, data on U.S. government spending shows that the 2017 U.S. Federal Education Budget for schools (pre-primary through secondary school) is $40.4 billion.  I’m not too sure funding for education always goes to benefit students’ education. In other words, throwing more money at this problem may not always provide desired results, but there is no doubt that funding for AI will only increase.

“Human literacy levels have stalled since 2000. At any time, this would be a cause for concern, when one in ten people worldwide…still cannot read a road sign, a voting form, or a medicine label,” James wrote in the report. “In popular discussion about advances in artificial intelligence, it is easy…”

CONCLUSION:  AI will only continue to advance and there will come a time when robotic systems will be programmed with basic decision-making skills.  To me, this is not only fascinating but more than a little scary.

THE NEXT FIVE (5) YEARS

February 15, 2017


As you well know, there are many projections relative to economies, the stock market, sports teams, entertainment, politics, technology, etc.   People the world over have given their projections for what might happen in 2017.  The world of computing technology is absolutely no different.  Certain information for this post is taken from the IEEE Computer Society’s Computer magazine website (computer.org/computer).  These guys are pretty good at projections and have been correct multiple times over the past two decades.  They take their information from the IEEE.

The IEEE Computer Society is the world’s leading membership organization dedicated to computer science and technology. Serving more than 60,000 members, the IEEE Computer Society is the trusted information, networking, and career-development source for a global community of technology leaders that includes researchers, educators, software engineers, IT professionals, employers, and students.  In addition to conferences and publishing, the IEEE Computer Society is a leader in professional education and training, and has forged development and provider partnerships with major institutions and corporations internationally. These rich, self-selected, and self-paced programs help companies improve the quality of their technical staff and attract top talent while reducing costs.

With these credentials, you might expect them to be on the cutting edge of computer technology and development and be ahead of the curve as far as computer technology projections.  Let’s take a look.  Some of this absolutely blows me away.

HUMAN-BRAIN INTERFACE

This effort first started within the medical profession and is continuing as research progresses.  It’s taken time, but after more than a decade of engineering work, researchers at Brown University and a Utah company, Blackrock Microsystems, have commercialized a wireless device that can be attached to a person’s skull and transmit via radio the thought commands collected from a brain implant. Blackrock says it will seek clearance for the system from the U.S. Food and Drug Administration so that the mental remote control can be tested in volunteers, possibly as soon as this year.

The device was developed by a consortium, called BrainGate, which is based at Brown and was among the first to place implants in the brains of paralyzed people and show that electrical signals emitted by neurons inside the cortex could be recorded, then used to steer a wheelchair or direct a robotic arm (see “Implanting Hope”).

A major limit to these provocative experiments has been that patients can only use the prosthetic with the help of a crew of laboratory assistants. The brain signals are collected through a cable screwed into a port on their skull, then fed along wires to a bulky rack of signal processors. “Using this in the home setting is inconceivable or impractical when you are tethered to a bunch of electronics,” says Arto Nurmikko, the Brown professor of engineering who led the design and fabrication of the wireless system.

HARDWARE CAPABILITIES PROJECTION

Unless you have been living in a tree house for the last twenty years, you know digital security is a huge problem.  IT professionals and companies writing code will definitely continue working on how to make our digital world more secure.  That is a given.

EXASCALE

We can almost forget Moore’s Law, which refers to an observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention.  Moore’s law predicts that this trend will continue into the foreseeable future. Although the pace has slowed, the number of transistors per square inch has since doubled approximately every 18 months. This is used as the current definition of Moore’s law.  We are well beyond that, with processing speed literally progressing at “warp six”.
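
As a back-of-the-envelope illustration of the 18-month doubling, here is a small sketch that projects transistor counts forward from an assumed starting point (the Intel 4004’s roughly 2,300 transistors in 1971). The starting figure and time span are only for illustration; as noted above, the real pace has slowed, so the idealized law overshoots actual 2017 chips.

    # Idealized Moore's law: doubling every 18 months.
    def projected_transistors(start_count: float, years: float, doubling_months: float = 18.0) -> float:
        doublings = (years * 12.0) / doubling_months
        return start_count * 2 ** doublings

    # About 46 years from 1971 to 2017 -> roughly 30 doublings, a factor of about a billion.
    print(f"{projected_transistors(2300, 46):.2e}")  # ~3.9e12; real chips hold far fewer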

NON-VOLATILE MEMORY

If you are an old guy like me, you can remember when computer memory cost an arm and a leg.  Take a look at the chart below and you get an idea of how memory costs have decreased over the years.

[Figure: hard-drive cost per gigabyte]

As you can see, costs have dropped remarkably over the years.

PHOTONICS

[Figure: photonics projection text]

POWER-CONSERVATIVE MULTICORES

[Figure: power-conservative multicores projection text]

CONCLUSION:

If you combine the above predictions with 1.) Big Data, 2.) Internet of Things (IoT), 3.) Wearable Technology, 4.) Manufacturing 4.0, 5.) Biometrics, and other fast-moving technologies, you have a world in which “only the adventurous thrive”.  If you do not like change, I recommend you enroll in a monastery; with technology on the rampage, you will not survive gracefully otherwise. Just a thought.


Forbes Magazine recently published what they consider to be the top ten (10) trends in technology.  It’s a very interesting list and I could not argue with any item. The writer of the Forbes article is David W. Cearley.  Mr. Cearley is the vice president and Gartner Fellow at Gartner.  He specializes in analyzing emerging and strategic business and technology trends and explores how these trends shape the way individuals and companies derive value from technology.   Let’s take a quick look.

  • DEVICE MESH—This trend takes us far beyond our desktop PC, tablet or even our cell phone.  The trend encompasses the full range of endpoints with which humans might interact. In other words, just about anything you interact with could possibly be linked to the internet for instant access.  This could mean individual devices interacting with each other in a fashion desired by user programming: machine to machine (M2M).
  • AMBIENT USER EXPERIENCE–All of our digital interactions can become synchronized into a continuous and ambient digital experience that preserves our experience across traditional boundaries of devices, time and space. The experience blends physical, virtual and electronic environments, and uses real-time contextual information as the ambient environment changes or as the user moves from one place to another.
  • 3-D PRINTING MATERIALS—If you are not familiar with “additive manufacturing” you are really missing a fabulous technology. Right now, 3-D printing is somewhat in its infancy, but progress is being made not just weekly or monthly but daily.  The range of materials that can be used for the printing process is improving in a remarkable manner. You really need to look into this.
  • INFORMATION OF EVERYTHING— Everything surrounding us in the digital mesh is producing, using and communicating with virtually unmeasurable amounts of information. Organizations must learn how to identify what information provides strategic value, how to access data from different sources, and explore how algorithms leverage Information of Everything to fuel new business designs. I’m sure by now you have heard of “big data”.  Information of everything will provide mountains of data that must be sifted through so usable “stuff” results.  This will continue to be an ever-increasing task for programmers.
  • ADVANCED MACHINE LEARNING–Rise of the machines: machines talking to each other and learning from each other.  (Maybe a little more frightening than it should be.) Advanced machine learning lets smart machines learn from data rather than relying only on explicit programming, and it is what powers the autonomous agents and things described later in this list.
  • ADAPTIVE SECURITY ARCHITECTURE— The complexities of digital business and the algorithmic economy, combined with an emerging “hacker industry,” significantly increase the threat surface for an organization. IT leaders must focus on detecting and responding to threats, as well as more traditional blocking and other measures to prevent attacks. I don’t know if you have ever had your identity stolen but it is NOT fun.  Corrections are definitely time-consuming.
  • ADVANCED SYSTEM ARCHITECTURE–The digital mesh and smart machines require intense computing architecture demands to make them viable for organizations. They’ll get this added boost from ultra-efficient neuromorphic architectures. Systems built on graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) will function more like human brains and are particularly suited to the deep learning and other pattern-matching algorithms that smart machines use. FPGA-based architecture will allow distribution with less power into the tiniest Internet of Things (IoT) endpoints, such as homes, cars, wristwatches and even human beings.
  • MESH APP AND SERVICE ARCHITECTURE–The mesh app and service architecture is what enables delivery of apps and services to the flexible and dynamic environment of the digital mesh. This architecture will serve users’ requirements as they vary over time. It brings together the many information sources, devices, apps, services and microservices into a flexible architecture in which apps extend across multiple endpoint devices and can coordinate with one another to produce a continuous digital experience.
  • INTERNET OF THINGS (IoT) ARCHITECTURE AND PLATFORMS–IoT platforms exist behind the mesh app and service architecture. The technologies and standards in the IoT platform form a base set of capabilities for communicating, controlling, managing and securing endpoints in the IoT. The platforms aggregate data from endpoints behind the scenes from an architectural and a technology standpoint to make the IoT a reality.
  • AUTONOMOUS AGENTS AND THINGS–Advanced machine learning gives rise to a spectrum of smart machine implementations — including robots, autonomous vehicles, virtual personal assistants (VPAs) and smart advisors — that act in an autonomous (or at least semiautonomous) manner. This feeds into the ambient user experience in which an autonomous agent becomes the main user interface. Instead of interacting with menus, forms and buttons on a smartphone, the user speaks to an app, which is really an intelligent agent (a minimal sketch of such an agent follows this list).
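
Since several of these trends come back to machine learning and agents that replace menus and buttons, here is a minimal, hedged sketch of the idea: a tiny text classifier that maps a typed (or transcribed) request to an intent. It uses the scikit-learn library; the training phrases and intent labels are invented for illustration and bear no relation to any commercial assistant.

    # Toy "autonomous agent" front end: map an utterance to an intent with a tiny classifier.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    training_phrases = [
        "what is the weather tomorrow", "will it rain today",
        "remind me to call the dentist", "set a reminder for my meeting",
        "give me directions to the airport", "navigate to the nearest gas station",
    ]
    intents = ["weather", "weather", "reminder", "reminder", "navigation", "navigation"]

    # Bag-of-words features plus naive Bayes: crude, but it shows an utterance
    # becoming a decision with no menus, forms or buttons involved.
    agent = make_pipeline(CountVectorizer(), MultinomialNB())
    agent.fit(training_phrases, intents)

    print(agent.predict(["do I need an umbrella today"]))   # likely ['weather']
    print(agent.predict(["remind me about the dentist"]))   # likely ['reminder']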

CONCLUSIONS:  You have certainly noticed by now that ALL of the trends, with the exception of 3-D Printing, are rooted in Internet access and Internet protocols.  We are headed towards a totally connected world in which our every move is traceable.  Traceable unless we choose to fly under the radar.

FASTER THAN A ’57 CHEVY

November 5, 2016


I grew up in the ’50s, in the post-World War Two (WWII) decade.  It truly was a very simple time compared to the chaotic, time-obsessed, “hair-on-fire”, get-it-done-at-any-cost times we experience today.  One expression I remember very clearly was “faster than a ’57 Chevy”.  Anything over walking speed was faster than a ’57 Chevy.  This, of course, was handed down from the older kids to guys my age.  The object of that expression may be seen below.

[Figure: a ’57 Chevy]

(I told you those were much simpler days.) If we only knew what was coming down the pike, we would never have used that expression.  You know what is really faster than a ’57 Chevy?  Let’s take a look.

This month, the Top 500 biannual ranking of the world’s fastest publicly known supercomputers will be updated.  The list release will coincide with SC16, the International Conference for High Performance Computing, Networking, Storage and Analysis, held in Salt Lake City from November 13 to November 18. The last Top 500 update, in June, revealed that China maintained its grip on the number one spot with the new and surprising Sunway TaihuLight machine, which reached ninety-three (93) petaflops, or ninety-three quadrillion calculations per second, or “faster than a ’57 Chevy”.

Let’s refresh our memory.  A petaflop is a measure of a computer’s processing speed and can be expressed in any of the following ways (a quick sanity check follows the list):

  • A quadrillion (thousand trillion) floating point operations per second (FLOPS)
  • A thousand teraflops
  • 10 to the 15th power FLOPS
  • 2 to the 50th power FLOPS (the binary-prefix definition, about 1.126 quadrillion FLOPS, roughly 12.6 percent more than the decimal definition)
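
As a quick sanity check on those definitions, the small sketch below compares the decimal and binary-prefix values and converts the Sunway TaihuLight’s 93 petaflops into raw operations per second.

    # Petaflop definitions, checked numerically.
    PETAFLOP_DECIMAL = 10 ** 15   # one quadrillion FLOPS
    PETAFLOP_BINARY = 2 ** 50     # about 1.126 quadrillion FLOPS

    print(PETAFLOP_BINARY / PETAFLOP_DECIMAL)  # ~1.126, i.e. the two differ by ~12.6%
    print(93 * PETAFLOP_DECIMAL)               # 93 petaflops = 9.3e16 operations per second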

THE LIST

NUMBER 1: National Supercomputing Center in Wuxi, China: Sunway TaihuLight with 10,649,600 cores drawing 15,371 kW.  The Sunway is shown below:

[Figure: Sunway TaihuLight]

NUMBER 2:  National Super Computer Center in Guangzhou, China: Tianhe-2 (MilkyWay-2) with 3,120,000 cores drawing 17,808 kW.

[Figure: Tianhe-2 (MilkyWay-2)]

NUMBER 3:  DOE/SC/Oak Ridge National Laboratory, United States:  Titan (Cray XK7) with 560,640 cores drawing 8,209 kW.

[Figure: Titan Cray XK7]

NUMBER 4:  DOE/NNSA/LLNL, United States:  Sequoia (BlueGene/Q) with 1,572,864 cores drawing 7,890 kW.

[Figure: Sequoia BlueGene/Q]

NUMBER 5:  RIKEN Advanced Institute for Computational Science, Japan: K Computer (SPARC64) with 705,024 cores drawing 12,660 kW.

[Figure: K Computer SPARC64]

Not only has China outdone itself in terms of the fastest supercomputer, it is now home to the largest number of supercomputers on the list.  One hundred sixty-seven (167) to be exact, slightly more than the United States.  This year marks the first time since the Top 500 rankings began twenty-three (23) years ago that the United States cannot lay claim to the most machines on the list. All is lost—well, not quite.

In September, the DOE’s Exascale Computing Project (ECP) announced the first round of funding for advanced computers.  It awarded $39.8 million to fifteen (15) application-development proposals for full funding and seven proposals for seed funding.  This is significant and will provide the financing necessary to keep up with, and even surpass, the Chinese.

HISTORY:

If we look at the history of computing power, we see the following:

[Figure: history of computing power performance]

Compare that with the cost of computing power:

[Figure: evolution of computing power costs]

CONCLUSIONS: 

As you can see, the advances in computing power are remarkable but come at a significant cost.  Speed vs. cost.  This is one very expensive technology.  Fortunately, the computers we mortals use do not require the speeds cutting-edge technology demands.  With that being the case, the cost of computing power on the domestic scale has decreased significantly over the years.

IOT

September 17, 2016


The graphic for this post is taken from the article “The IoT is Not a DIY Project”, Desktop Engineering, June 16, 2016.

OK, I’m connected.  Are you really?  Do you know what completely connected means?  Well, it does appear the numbers are truly in.  The world’s top research firms and business technology prognosticators all agree that the Internet of Things, or IoT, is growing at an amazing pace.  This very fact indicates there are new revenue models for business that any executive would be extremely foolish to ignore.  The possibilities for additional revenue streams are staggering. Every design engineering team across the globe has been asked to design products that build connectivity into their structures and operating environments.  Can your product “talk to and through the internet”?   To prove a point, let’s look at several numbers that represent reality.

[Figure: IoT by the numbers]

This IoT chart indicates where we are and where we might be going over the next few years and decades. There were over six and one-half billion (6.6 billion) connected “things” by the end of 2015.  Everything from refrigerators to automobiles is in the process of being connected, or will be connected, to the internet.  This connectivity allows communication from the device to the user of the device, and it can tie the device to GPS tracking, thereby detailing its location down to mere feet, if not inches. (NOTE:  The desirability of this feature is somewhat in question, but it is definitely possible.)
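
For a sense of what “connected” means in code, here is a minimal sketch of a device reporting a reading and a GPS fix to an IoT broker over MQTT, a lightweight publish/subscribe protocol widely used for device-to-cloud messaging. It uses the paho-mqtt publish helper; the broker hostname, topic and device ID are hypothetical.

    # Minimal connected "thing": publish one telemetry reading over MQTT.
    import json
    import paho.mqtt.publish as publish

    reading = {
        "device_id": "fridge-0042",    # hypothetical appliance identifier
        "temperature_c": 3.7,
        "lat": 38.25, "lon": -85.76,   # GPS fix, if the product has one
    }

    # One-shot publish; a production device would hold an authenticated connection
    # and buffer readings whenever the network drops.
    publish.single(
        topic="appliances/fridge-0042/telemetry",
        payload=json.dumps(reading),
        hostname="broker.example.com",  # hypothetical broker
        port=1883,
    )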

Imagine: $1.3 TRILLION in worldwide spending by 2019 to connect hardware to the internet.  This is a prediction by IDC, a market research firm that follows IT services and market demand for them.  This huge number reflects the fact, as shown above, that forty percent (40%) of the top one hundred (100) discrete manufacturers will rely on connected products to provide equipment and services to customers by 2018.

Now, connectivity does not come freely or without barriers. Some of these are as follows:

  • THE NEED: It’s all about the business. IoT is a classic example of organizations needing to take a step back and determine whether there is a strong business case for pursuing IoT before they get on board with implementation. Championing IoT simply because it’s the latest technology may be enough for engineers, but it means nothing to customers or the company’s financials unless there is a smart business strategy to back it up. The customer DRIVES incorporation of IoT into your product or your service.  If the entity does not need IoT—DON’T DO IT.  Who needs a refrigerator that communicates with the internet?  Maybe yes—maybe no.
  • Resources don’t come cheap. IoT commands a great deal of expertise in areas where most companies are lacking. By some estimates, it can take over one hundred and fifty (150) months of manpower and an investment in eleven (11) unique long-term roles to sufficiently develop and support a full IoT-connected product development stack. Most companies evaluating the IoT space aren’t software development or connectivity experts and would be better served focusing engineering resources on core competencies. IoT for even the largest company is a definite commitment.  You probably cannot do it yourself in your spare time.  Don’t even think about it.
  • Growing pains come with scale. Even if the initial IoT implementation goes off without a hitch, scaling the system to accommodate a larger universe of “things,” additional features and product lines typically brings new, unanticipated challenges. It’s critical to make sure your system design is future-proofed from the start, and building for scale adds complexity to an already complex project. Plan for the future and future expansion of IoT.  Things in the business world generally increase if immediate success or even partial success is accomplished.
  • Security is a top concern. There are multiple vulnerability points in an IoT system, and many engineering organizations don’t have the internal expertise to address them sufficiently. Rather than staffing up a dedicated security organization, companies should consider aligning with external partners with proven, connected product security expertise. To me, this is the greatest concern. We read every day about web sites and digital systems being hacked.  It still represents a HUGE problem with the internet.  Encryption to lessen or eliminate hacking is a definite need (a brief sketch of a TLS-secured device connection follows this list).
  • Identity management challenges. Related to security, this is a critical step to ensuring users can control their own IoT devices, and there are limitations on who can make changes or initiate updates. Again, it’s an area where many engineering organizations lack sufficient competency.
  • Data deluge. Connected products spin off a massive amount of data, which requires competency in data management practices and new Big Data technologies. Not only that, but the IoT data needs to be organized and integrated into existing business systems. For engineering organizations light on data management manpower, this can be a problem, not to mention, a huge impediment to the success of any connected business. Data organization is the great need here.  Mountains of data can result from IoT.  Determine what data you need to further your business and improve customer service.  I feel the 80/20 rule might apply here.
  • Long-term maintenance. If you build an IoT system on your own, you’re probably going to have to support it on your own, which requires an additional investment in manpower. A system built in-house will require frequent updates over the course of its lifetime, which can quickly eat up entire budgets and consume already stretched engineering resources. Remember, if you build an IoT system you MUST maintain that system—always.
  • Time-to-revenue delays. It takes time and effort to build these systems from scratch, and every hour spent on engineering prolongs development and increases the time-to-market cycle. Companies trying to ride the IoT wave need to get products into the hands of customers as soon as possible to stay abreast of competition and maximize financial gain. I cannot stress too much the need for focus teams inquiring from potential customers their wants and desires relative to incorporating connectivity into products and services.  Ask your customers up front what they want.
  • On-going interoperability requirements. Maintaining full control of the IoT technology stack also means being responsible for on-going integration requests and keeping up with continuously changing standards. As the connected product business matures, this can be a lot of work that could be handled more efficiently by a third party.
  • Service distractions. The work involved in managing in-house solutions can distract from one of the more important advantages of IoT: Gaining a picture of product usage and customer requirements that can be leveraged for optimized, proactive service. If companies are spending all their time troubleshooting their own IoT hardware and software, they have less time to devote to customers’ problems or growing their IoT-enabled business.
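
Returning to the security and identity-management bullets above, here is a rough sketch of the same kind of telemetry publish hardened with TLS and a per-device certificate, so the broker can encrypt the traffic and verify which device is talking. The hostnames, credentials and file paths are hypothetical; it again relies on the paho-mqtt publish helper.

    # Hardened publish: TLS on port 8883 plus per-device credentials and certificate.
    import json
    import paho.mqtt.publish as publish

    payload = json.dumps({"device_id": "fridge-0042", "temperature_c": 3.7})

    publish.single(
        topic="appliances/fridge-0042/telemetry",
        payload=payload,
        hostname="broker.example.com",               # hypothetical broker
        port=8883,                                   # MQTT over TLS
        auth={"username": "fridge-0042", "password": "per-device-secret"},
        tls={
            "ca_certs": "/etc/iot/ca.pem",           # broker's certificate authority
            "certfile": "/etc/iot/device.crt",       # this device's identity certificate
            "keyfile": "/etc/iot/device.key",
        },
    )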

If I were a stock trader doing business with the markets on a daily basis, I certainly would look at those companies “folding into” and providing services for IoT methodologies.  Also, businesses need to listen to their customers and gauge the importance of incorporating internet connectivity into the products and services they provide.  This, apparently, is the way business is going, and so as not to be left out or lose your customer base, you may have to yield to the wishes of your clients.  Just a thought.

As always, I welcome your comments.
