AN AVERAGE DAY FOR DATA

August 4, 2017


I am sure you have heard the phrase “big data” and possibly wondered just what that term refers to.  Let’s start with the “official” definition, as follows:

The amount of data that’s being created and stored on a global level is almost inconceivable, and it just keeps growing. That means there’s even more potential to glean key insights from business information – yet only a small percentage of data is actually analyzed. What does that mean for businesses? How can they make better use of the raw information that flows into their organizations every day?

The concept gained momentum in the early 2000s when industry analyst Doug Laney articulated the now-mainstream definition of big data as the three Vs (volume, velocity and variety), to which many analysts add two more dimensions, variability and complexity:

  • Volume. Organizations collect data from a variety of sources, including business transactions, social media and information from sensor or machine-to-machine data. In the past, storing it would’ve been a problem – but new technologies (such as Hadoop) have eased the burden.
  • Velocity. Data streams in at an unprecedented speed and must be dealt with in a timely manner. RFID tags, sensors and smart metering are driving the need to deal with torrents of data in near-real time.
  • Variety. Data comes in all types of formats – from structured, numeric data in traditional databases to unstructured text documents, email, video, audio, stock ticker data and financial transactions.
  • Variability. In addition to the increasing velocities and varieties of data, data flows can be highly inconsistent with periodic peaks. Is something trending in social media? Daily, seasonal and event-triggered peak data loads can be challenging to manage. Even more so with unstructured data.
  • Complexity. Today’s data comes from multiple sources, which makes it difficult to link, match, cleanse and transform data across systems. However, it’s necessary to connect and correlate relationships, hierarchies and multiple data linkages or your data can quickly spiral out of control.

AN AVERAGE DAY IN THE LIFE OF BIG DATA:

A picture is worth a thousand words, but let us now quantify, on a daily basis, what we mean by big data.

  • YouTube’s viewers are watching a billion (1,000,000,000) hours of videos each day.
  • We perform over forty thousand (40,000) searches per second on Google alone. That is approximately three and one-half (3.5) billion searches per day and roughly one point two (1.2) trillion searches per year, world-wide (see the quick arithmetic check after this list).
  • Five years ago, IBM estimated two point five (2.5) exabytes (2.5 billion gigabytes) of data generated every day. It has grown since then.
  • The number of e-mails sent per day is around 269 billion. That is roughly ninety-eight (98) trillion e-mails per year. Globally, the data stored in data centers will grow 5.3-fold by 2020 to reach 915 exabytes, a compound annual growth rate (CAGR) of forty percent (40%) from 171 exabytes in 2015.
  • On average, an autonomous car will churn out 4 TB of data per day, when factoring in cameras, radar, sonar, GPS and LIDAR – and that assumes only about one hour of driving per day.  Every autonomous car will generate the data equivalent of almost 3,000 people.
  • By 2024, mobile networks will see machine-to-machine (M2M) connections jump nearly ten-fold to 2.3 billion from 250 million in 2014, according to Machina Research.
  • The data collected by BMW’s current fleet of 40 prototype autonomous cars during a single test session would fill a stack of CDs 60 miles high.
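
As a sanity check on two of the figures above (the Google search volume and the data-center storage growth), here is a minimal back-of-the-envelope calculation. The starting numbers are the ones quoted in the list; everything else is simple arithmetic.

```python
# Back-of-the-envelope checks for two of the "average day" figures above.

SECONDS_PER_DAY = 24 * 60 * 60          # 86,400
searches_per_second = 40_000            # figure quoted above

searches_per_day = searches_per_second * SECONDS_PER_DAY
searches_per_year = searches_per_day * 365

print(f"Google searches per day:  {searches_per_day:,}")    # ~3.5 billion
print(f"Google searches per year: {searches_per_year:,}")   # ~1.26 trillion

# Data-center storage growth: 171 exabytes (2015) to 915 exabytes (2020).
start_eb, end_eb, years = 171, 915, 5
cagr = (end_eb / start_eb) ** (1 / years) - 1
print(f"Growth multiple: {end_eb / start_eb:.1f}x, CAGR: {cagr:.0%}")  # ~5.4x, ~40%
```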

We have become a world that lives “by the numbers” and I’m not sure that’s altogether troubling.  At no time in our history have we had access to data that informs, misinforms, directs, challenges, etc., as we do today.  How we use that data makes all the difference in our daily lives.  I have a great friend named Joe McGuinness. His favorite expression: “It’s about time we learn to separate the fly s_____t from the pepper.”  If we apply that phrase to big data, he may just be correct. Be careful out there.


Portions of the following post were taken from an article by Rob Spiegel published through Design News Daily.

Two former Apple design engineers – Anna Katrina Shedletsky and Samuel Weiss – have leveraged machine learning to help brand owners improve their manufacturing lines. Their company, Instrumental, uses artificial intelligence (AI) to identify and fix problems with the goal of helping clients ship on time. The AI system consists of camera-equipped inspection stations that allow brand owners to remotely manage product lines at their contract manufacturing facilities with the purpose of maximizing up-time, quality and speed. Their digital photo is shown as follows:

Shedletsky and Weiss took what they learned from years of working with Apple contract manufacturers and put it into AI software.

“The experience with Apple opened our eyes to what was possible. We wanted to build artificial intelligence for manufacturing. The technology had been proven in other industries and could be applied to the manufacturing industry; it’s part of the evolution of what is happening in manufacturing. The product we offer today solves a very specific need, but it also works toward overall intelligence in manufacturing.”

Shedletsky spent six (6) years working at Apple prior to founding Instrumental with fellow Apple alum Weiss, who serves as Instrumental’s CTO (Chief Technical Officer).  The two took their experience in solving manufacturing problems and created the AI fix. “After spending hundreds of days at manufacturers responsible for millions of Apple products, we gained a deep understanding of the inefficiencies in the new-product development process,” said Shedletsky. “There’s no going back; robotics and automation have already changed manufacturing. Intelligence like the kind we are building will change it again. We can radically improve how companies make products.”

There are numerous examples of big and small companies with problems that prevent them from shipping products on time. Delays are expensive and can cause the loss of a sale. One day of delay at a start-up could cost $10,000 in sales. For a large company, the cost could be millions. “There are hundreds of issues that need to be found and solved. They are difficult and they have to be solved one at a time,” said Shedletsky. “You can get on a plane, go to a factory and look at failure analysis so you can see why you have problems. Or, you can reduce the amount of time needed to identify and fix the problems by analyzing them remotely, using a combo of hardware and software.”

Instrumental combines hardware and software that takes images of each unit at key states of assembly on the line. The system then makes those images remotely searchable and comparable in order for the brand owner to learn from and react to assembly-line data. Engineers can then take action on issues. “The station goes onto the assembly line in China,” said Shedletsky. “We get the data into the cloud to discover issues the contract manufacturer doesn’t know they have. With the data, you can do failure analysis and reduce the time it takes to find an issue and correct it.”
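
Instrumental’s actual pipeline is proprietary, but the basic idea of comparing images of each unit at a given assembly stage against a known-good reference can be sketched in a few lines. The sketch below is purely illustrative: the image paths and the pixel-difference threshold are hypothetical, and a production system would use trained models rather than a simple difference score.

```python
# Illustrative sketch only: flag assembly-stage images that deviate from a
# known-good reference. Paths and threshold are hypothetical placeholders.
from pathlib import Path

import numpy as np
from PIL import Image

def load_grayscale(path: Path, size=(256, 256)) -> np.ndarray:
    """Load an inspection image, convert to grayscale, and normalize to [0, 1]."""
    img = Image.open(path).convert("L").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

def anomaly_score(unit_img: np.ndarray, reference_img: np.ndarray) -> float:
    """Mean absolute pixel difference between a unit and the golden reference."""
    return float(np.mean(np.abs(unit_img - reference_img)))

reference = load_grayscale(Path("station3/golden_unit.png"))   # hypothetical path
threshold = 0.08                                               # hypothetical cutoff

for unit_path in sorted(Path("station3/units").glob("*.png")):
    score = anomaly_score(load_grayscale(unit_path), reference)
    status = "REVIEW" if score > threshold else "ok"
    print(f"{unit_path.name}: score={score:.3f} -> {status}")
```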

WHAT IS AI:

Artificial intelligence (AI) is intelligence exhibited by machines.  In computer science, the field of AI research defines itself as the study of “intelligent agents“: any device that perceives its environment and takes actions that maximize its chance of success at some goal.   Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.
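
To make the “intelligent agent” definition concrete, here is a minimal sketch of the perceive-act loop it describes: an agent observes its environment and picks whichever action it expects to move it closer to a goal. The environment, actions, and scoring function are all invented for illustration and do not represent any particular AI system.

```python
# Minimal "intelligent agent" loop: perceive the environment, then choose the
# action expected to maximize progress toward a goal. Toy illustration only.

GOAL = 10  # the agent wants the environment's state to reach this value

def perceive(environment: dict) -> int:
    """Sense the current state of the (toy) environment."""
    return environment["state"]

def expected_utility(state: int, action: int) -> float:
    """Score an action by how close it is expected to bring us to the goal."""
    return -abs(GOAL - (state + action))

def choose_action(state: int, actions=(-1, 0, 1)) -> int:
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(state, a))

environment = {"state": 4}
for step in range(10):
    state = perceive(environment)
    if state == GOAL:
        break
    action = choose_action(state)
    environment["state"] = state + action      # act on the environment
    print(f"step {step}: state {state} -> {environment['state']}")
```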

As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of “artificial intelligence”, having become a routine technology.  Capabilities currently classified as AI include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

FUTURE:

Some would have you believe that AI IS the future and we will succumb to the “Rise of the Machines”.  I’m not so melodramatic.  I feel AI has progressed, and will continue to progress, to the point where great time savings and reductions in labor may be realized.   Anna Katrina Shedletsky and Samuel Weiss realize the potential and feel there will be no going back from this disruptive technology.   Moving AI to the factory floor will produce great benefits for manufacturing and other commercial enterprises.   There is also a significant possibility that job creation will occur as a result.  All is not doom and gloom.

CLOUD COMPUTING

May 20, 2017


OK, you have heard the term over and over again, but just what is cloud computing? Simply put, cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics, and more—over the Internet (“the cloud”). Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you’re billed for water or electricity at home. It is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications and services), which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in either privately owned or third-party data centers that may be located far from the user, ranging in distance from across a city to across the world. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, much like a public utility such as the electricity grid.
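
The phrase “rapidly provisioned and released with minimal management effort” is easier to picture in code. The sketch below uses a made-up CloudClient interface – it is not any real provider’s SDK – just to show the on-demand, pay-per-use pattern the definition describes.

```python
# Conceptual sketch of on-demand provisioning. "CloudClient" is an invented,
# provider-agnostic stand-in, not a real SDK; real providers expose similar
# create/use/release calls plus usage-based billing.

class CloudClient:
    def __init__(self, region: str):
        self.region = region
        self.servers = []

    def provision_server(self, cpu: int, memory_gb: int) -> dict:
        """Request a virtual server from the shared resource pool."""
        server = {"id": len(self.servers) + 1, "cpu": cpu, "memory_gb": memory_gb}
        self.servers.append(server)
        return server

    def release_server(self, server: dict) -> None:
        """Return the server to the pool; billing stops when you release it."""
        self.servers.remove(server)

client = CloudClient(region="us-east")
web_server = client.provision_server(cpu=2, memory_gb=4)   # seconds, not weeks
print(f"Running server {web_server['id']} in {client.region}")
client.release_server(web_server)                          # pay only while used
```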

ADVANTAGES AND DISADVANTAGES:

Any new technology has an upside and downside. There are obviously advantages and disadvantages when using the cloud.  Let’s take a look.

 Advantages

  • Lower cost for desktop clients since the applications are running in the cloud. This means clients with smaller hard drive requirements and possibly even no CD or DVD drives.
  • Peak computing needs of a business can be offloaded onto cloud applications, saving the funds normally used for additional in-house servers.
  • Lower maintenance costs. This includes both hardware and software cost reductions, since client machines can be much cheaper and software purchase costs are eliminated altogether for applications running in the cloud.
  • Automatic application software updates for applications in the cloud. This is another maintenance savings.
  • Vastly increased computing power availability. The scalability of the server farm provides this advantage.
  • The scalability of virtual storage provides effectively unlimited storage capacity.

 Disadvantages

  • Requires an “always on” Internet connection.
  • There are clear concerns with data security, e.g., “If I can get to my data using a web browser, who else can?”
  • Concerns for loss of data.
  • Reliability. Service interruptions are rare but can happen. Google has already had an outage.

MAJOR CLOUD SERVICE PROVIDERS:

The following names are very recognizable.  Everyone knows the “open-market” cloud service providers.

  • AMAZON
  • SALESFORCE
  • GOOGLE
  • IBM
  • MICROSOFT
  • SUN MICROSYSTEMS
  • ORACLE
  • AT & T

PRIVATE CLOUD SERVICE PROVIDERS:

With all the interest in cloud computing as a service, there is also an emerging concept of private clouds. It is a bit reminiscent of the early days of the Internet and the importing of that technology into the enterprise as intranets. The concerns for security and reliability outside corporate control are very real and troublesome aspects of the otherwise attractive technology of cloud computing services. The IT world has not forgotten the eight-hour downtime of the Amazon S3 cloud service on July 20, 2008. A private cloud means that the technology must be bought, built and managed within the corporation. A company purchases cloud technology usable inside the enterprise to develop cloud applications that have the flexibility to run on the private cloud or outside on public clouds. This “hybrid environment” is in fact the direction that some believe the enterprise community will be going, and some of the products that support this approach are listed below.

  • Elastra (http://www.elastra.com ) is developing a server that can be used as a private cloud in a data center. Tools are available to design applications that will run in both private and public clouds.
  • 3Tetra (http://www.3tetra.com ) is developing a grid operating system called ParaScale that will aggregate disk storage.
  • Cassatt (http://www.cassatt.com ) will be offering technology that can be used for resource pooling.
  • Ncomputing ( http://www.ncomputing.com ) has developed a standard desktop PC virtualization software system that allows up to 30 users to use the same PC system with their own keyboard, monitor and mouse. Strong claims are made about savings on PC costs, IT complexity and power consumption by customers in government, industry and education communities.

CONCLUSION:

OK, clear as mud—right?  For me, the biggest misconception is the terminology itself—the cloud.   The word “cloud” seems to imply an IT system in the sky.  The exact opposite is the case.  The cloud is an earth-based IT system serving as a universal host.  A network of computers. A network of servers.  No cloud.


If you work or have worked in manufacturing, you know robotic systems have definitely had a distinct impact on assembly, inventory acquisition from storage areas and finished-part warehousing.   There is considerable concern that the “rise of the machines” will eventually replace individuals performing a variety of tasks.  I personally do not feel this will be the case, although there is no doubt robotic systems have found their way onto the manufacturing floor.

From the “Executive Summary World Robotics 2016 Industrial Robots”, we see the following:

2015:  Robot sales increased by 15% to 253,748 units, by far the highest level ever recorded for one year. The main driver of the growth in 2015 was general industry, with an increase of 33% compared to 2014, in particular the electronics industry (+41%), the metal industry (+39%), and the chemical, plastics and rubber industry (+16%). Robot sales in the automotive industry only moderately increased in 2015 after a five-year period of continued considerable increase. China has significantly expanded its leading position as the biggest market with a share of 27% of the total supply in 2015.

The chart below puts the sales picture in perspective and shows how system sales have increased since 2003.

It is very important to note that seventy-five percent (75%) of global robot sales come from five (5) countries.

There were five major markets representing seventy-five percent (75%) of the total sales volume in 2015:  China, the Republic of Korea, Japan, the United States, and Germany.

As you can see from the bar chart above, that share is up from seventy percent (70%) in 2014. Since 2013, China has been the biggest robot market in the world, with continued dynamic growth. With sales of about 68,600 industrial robots in 2015 – an increase of twenty percent (20%) compared to 2014 – China alone surpassed Europe’s total sales volume (50,100 units). Chinese robot suppliers installed about 20,400 units according to information from the China Robot Industry Alliance (CRIA). Their sales volume was about twenty-nine percent (29%) higher than in 2014. Foreign robot suppliers increased their sales by seventeen percent (17%) to 48,100 units (including robots produced by international robot suppliers in China). The market share of Chinese robot suppliers grew from twenty-five percent (25%) in 2013 to twenty-nine percent (29%) in 2015. Between 2010 and 2015, the total supply of industrial robots in China increased by about thirty-six percent (36%) per year on average.

About 38,300 units were sold to the Republic of Korea, fifty-five percent (55%) more than in 2014. The increase is partly due to a number of companies which started to report their data only in 2015. The actual growth rate in 2015 is estimated at about thirty percent (30%) to thirty-five percent (35%).

In 2015, robot sales in Japan increased by twenty percent (20%) to about 35,000 units reaching the highest level since 2007 (36,100 units). Robot sales in Japan followed a decreasing trend between 2005 (reaching the peak at 44,000 units) and 2009 (when sales dropped to only 12,767 units). Between 2010 and 2015, robot sales increased by ten percent (10%) on average per year (CAGR).

The increase in robot installations in the United States continued in 2015, rising by five percent (5%) to a peak of 27,504 units. The driver of this continued growth since 2010 has been the ongoing trend to automate production in order to strengthen American industries in the global market and to keep manufacturing at home – and, in some cases, to bring back manufacturing that had previously been sent overseas.

Germany is the fifth largest robot market in the world. In 2015, the number of robots sold increased slightly to a new record high at 20,105 units compared to 2014 (20,051 units). In spite of the high robot density of 301 units per 10,000 employees, annual sales are still very high in Germany. Between 2010 and 2015, annual sales of industrial robots increased by an average of seven percent (7%) in Germany (CAGR).

From the graphic below, you can see which industries employ robotic systems the most.

Growth rates will not lessen, with projections through 2019 as follows:

A fascinating development involves the assistance of human endeavor by robotic systems.  This fairly new technology is called collaborative robots, or COBOTS.  Let’s get a definition.

COBOTS:

A cobot or “collaborative robot” is a robot designed to assist human beings as a guide or assistor in a specific task. A regular robot is designed to be programmed to work more or less autonomously. In one approach to cobot design, the cobot allows a human to perform certain operations successfully if they fit within the scope of the task and to steer the human on a correct path when the human begins to stray from or exceed the scope of the task.
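
As a rough illustration of the “steer the human back on path” idea, the sketch below clamps a hand-guided tool position to an allowed work corridor. The corridor limits and positions are invented numbers; real cobots do this with force/torque sensing and certified safety controllers.

```python
# Toy illustration of a cobot "virtual guide": the human moves the tool freely
# along the task path, but the controller nudges it back when it drifts
# outside an allowed corridor. All numbers are invented for illustration.

CORRIDOR_MIN_Y = -0.05   # meters, allowed deviation from the task path
CORRIDOR_MAX_Y = 0.05

def guide(tool_y: float) -> float:
    """Clamp the tool's lateral position to the corridor around the task path."""
    return max(CORRIDOR_MIN_Y, min(CORRIDOR_MAX_Y, tool_y))

# Simulated hand-guided positions (meters off the nominal path):
human_inputs = [0.00, 0.03, 0.09, -0.12, 0.01]
for y in human_inputs:
    corrected = guide(y)
    note = "steered back" if corrected != y else "within scope"
    print(f"human at {y:+.2f} m -> cobot allows {corrected:+.2f} m ({note})")
```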

The term “collaborative” is used to distinguish robots that collaborate with humans from robots that work behind fences without any direct interaction with humans.  In contrast, terms such as articulated, cartesian, delta and SCARA describe different robot kinematics.

Traditional industrial robots excel at applications that require extremely high speeds, heavy payloads and extreme precision.  They are reliable and very useful for many types of high volume, low mix applications.  But they pose several inherent challenges for higher mix environments, particularly in smaller companies.  First and foremost, they are very expensive, particularly when considering programming and integration costs.  They require specialized engineers working over several weeks or even months to program and integrate them to do a single task.  And they don’t multi-task easily between jobs since that setup effort is so substantial.  Plus, they can’t be readily integrated into a production line with people because they are too dangerous to operate in close proximity to humans.

For small manufacturers with limited budgets, space and staff, a collaborative robot such as Baxter (shown below) is an ideal fit because it overcomes many of these challenges.  It’s extremely intuitive, integrates seamlessly with other automation technologies, is very flexible and is quite affordable with a base price of only $25,000.  As a result, Baxter is well suited for many applications, such as those requiring manual labor and a high degree of flexibility, that are currently unmet by traditional technologies.

Baxter is one example of collaborative robotics and some say is by far the safest, easiest, most flexible and least costly robot of its kind today.  It features a sophisticated multi-tier safety design that includes a smooth, polymer exterior with fewer pinch points; back-drivable joints that can be rotated by hand; and series elastic actuators which help it to minimize the likelihood of injury during inadvertent contact.

It’s also incredibly simple to use.  Line workers and other non-engineers can quickly learn to train the robot themselves, by hand.  With Baxter, the robot itself is the interface, with no teaching pendant or external control system required.  And with its ease of use and diverse skill set, Baxter is extremely flexible, capable of being utilized across multiple lines and tasks in a fraction of the time and cost it would take to re-program other robots.  Plus, Baxter is made in the U.S.A., which is a particularly appealing aspect for many of our customers looking to re-shore their own production operations.

The digital picture above shows a lady working alongside a collaborative robotic system, both performing a specific task. The lady feels right at home with her mechanical friend only because such usage demands a great element of safety.

Certifiable safety is the most important precondition for a collaborative robot system to be applied in an industrial setting.  Available solutions that fulfill the requirements imposed by safety standardization often show limited performance or productivity gains, as most of today’s implemented scenarios are limited to very static processes. This means a strict stop and go of the robot process when the human enters or leaves the work space.
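
That “strict stop and go” behavior amounts to a very small state machine: if a person is detected inside the collaborative workspace the robot halts, and it resumes only once the space is clear. A minimal sketch, with an invented sensor feed standing in for a real safety sensor, looks like this:

```python
# Minimal stop-and-go safety logic for a collaborative workcell: halt whenever
# a human is detected in the shared workspace, resume when it is clear.
# The sensor readings below are an invented stand-in for a real safety sensor.

def robot_state(human_in_workspace: bool) -> str:
    return "STOPPED" if human_in_workspace else "RUNNING"

# Simulated safety-sensor readings over time (True = human detected):
sensor_feed = [False, False, True, True, False, True, False]

for t, human_present in enumerate(sensor_feed):
    print(f"t={t}: human_present={human_present} -> robot {robot_state(human_present)}")
```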

Collaborative systems are still a work in progress, but their use has greatly expanded, primarily because safety requirements can now be satisfied.  Upcoming years will only produce greater acceptance, and do not be surprised if you see robots and humans working side by side on every manufacturing floor over the next decade.

As always, I welcome your comments.

RISE OF THE MACHINES

March 20, 2017


Movie making today is truly remarkable.  To me, one of the very best parts is animation created by computer graphics.  I’ve attended “B” movies just to see the graphic displays created by talented programmers.  The “Terminator” series, at least the first movie in that series, really captures the creative essence of graphic design technology.  I won’t replay the movie for you, but the “terminator” goes back in time to carry out its prime directive—kill John Connor.  The terminator, a robotic humanoid, has decision-making capability as well as human-like mobility that allows the plot to unfold.  Artificial intelligence, or AI, is a fascinating technology many companies are working on today.  Let’s get a proper definition of AI, as follows:

“the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Question:  Are Siri, Cortana, and Alexa eventually going to be more literate than humans? Anyone excited about the recent advancements in artificial intelligence (AI) and machine learning should also be concerned about human literacy. That’s according to Project Literacy, a global campaign, backed by education company Pearson, aimed at creating awareness of and fighting against illiteracy.

Project Literacy, which has been raising awareness for its cause at SXSW 2017, recently released a report, “2027: Human vs. Machine Literacy,” that projects machines powered by AI and voice recognition will surpass the literacy levels of one in seven American adults in the next ten (10) years. “While these systems currently have a much shallower understanding of language than people do, they can already perform tasks similar to simple text search task…exceeding the abilities of millions of people who are nonliterate,” Kate James, Project Literacy spokesperson and Chief Corporate Affairs and Global Marketing Officer at Pearson, wrote in the report. In light of this, the organization is calling for “society to commit to upgrading its people at the same rate as upgrading its technology, so that by 2030 no child is born at risk of poor literacy.”  (I would invite you to re-read this statement and shudder in your boots as I did.)

While the past twenty-five (25) years have seen disappointing progress in U.S. literacy, there have been huge gains in linguistic performance by a totally different type of actor – computers. Dramatic advances in natural language processing (Hirschberg and Manning, 2015) have led to the rise of language technologies like search engines and machine translation that “read” text and produce answers or translations that are useful for people. While these systems currently have a much shallower understanding of language than people do, they can already perform tasks similar to the simple text search task above – exceeding the abilities of millions of people who are nonliterate.
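
The “simple text search task” the report compares against is genuinely simple for a machine: essentially a keyword lookup. A minimal example of what that kind of task looks like in code is below; the sample text is invented for illustration.

```python
# The kind of "simple text search task" referred to above: find which lines of
# a text mention a given word. The sample text is invented for illustration.

text = """Take one tablet twice daily with food.
Do not exceed four tablets in 24 hours.
Keep out of reach of children."""

def search(text: str, keyword: str) -> list[str]:
    """Return every line that contains the keyword (case-insensitive)."""
    return [line for line in text.splitlines() if keyword.lower() in line.lower()]

print(search(text, "tablet"))   # lines mentioning the dosage
```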

According to the National Center for Education Statistics, machine literacy has already exceeded the literacy abilities of the estimated three percent (3%) of non-literate adults in the US.

Comparing demographic data from the Global Developer Population and Demographic Study 2016 v2 and the 2015 Digest of Education Statistics finds there are more software engineers in the U.S. than school teachers. “We are focusing so much on teaching algorithms and AI to be better at language that we are forgetting that fifty percent (50%) of adults cannot read a book written at an eighth grade level,” Project Literacy said in a statement.  I retired from General Electric Appliances.   Each engineer was required to write, or at least draft, the Use and Care Manuals for specific cooking products.  We were instructed to 1.) use plenty of graphic examples and 2.) write for a fifth-grade audience.  Even with that, we know from experience that many consumers never use and have no intention of reading their Use and Care Manual.  With this being the case, many of the truly cool features are never used.  They may as well buy the most basic product.

Research done by Business Insider reveals that thirty-two (32) million Americans cannot currently read a road sign. Yet at the same time there are ten (10) million self-driving cars predicted to be on the roads by 2020. (One could argue this will further eliminate the need for literacy, but that is debatable.)  If we look at literacy rates for the top ten (10) countries on our planet we see the following:

Citing research from Venture Scanner, Project Literacy found that in 2015 investment in AI technologies, including natural language processing, speech recognition, and image recognition, reached $47.2 billion. Meanwhile, data on US government spending shows that the 2017 U.S. Federal Education Budget for schools (pre-primary through secondary school) is $40.4 billion.  I’m not too sure funding for education always goes to benefit students’ education. In other words, throwing more money at this problem may not always provide the desired results, but there is no doubt funding for AI will only increase.

“Human literacy levels have stalled since 2000. At any time, this would be a cause for concern, when one in ten people worldwide…still cannot read a road sign, a voting form, or a medicine label,” James wrote in the report. “In popular discussion about advances in artificial intelligence, it is easy…”

CONCLUSION:  AI will only continue to advance and there will come a time when robotic systems will be programmed with basic decision-making skills.  To me, this is not only fascinating but more than a little scary.

THE NEXT FIVE (5) YEARS

February 15, 2017


As you well know, there are many projections relative to economies, the stock market, sports teams, entertainment, politics, technology, etc.   People the world over have given their projections for what might happen in 2017.  The world of computing technology is absolutely no different.  Certain information for this post is taken from the IEEE Computer Society’s Computer magazine web site (COMPUTER.org/computer).  These guys are pretty good at projections and have been correct multiple times over the past two decades.  They take their information from the IEEE.

The IEEE Computer Society is the world’s leading membership organization dedicated to computer science and technology. Serving more than 60,000 members, the IEEE Computer Society is the trusted information, networking, and career-development source for a global community of technology leaders that includes researchers, educators, software engineers, IT professionals, employers, and students.  In addition to conferences and publishing, the IEEE Computer Society is a leader in professional education and training, and has forged development and provider partnerships with major institutions and corporations internationally. These rich, self-selected, and self-paced programs help companies improve the quality of their technical staff and attract top talent while reducing costs.

With these credentials, you might expect them to be on the cutting edge of computer technology and development and be ahead of the curve as far as computer technology projections.  Let’s take a look.  Some of this absolutely blows me away.

HUMAN-BRAIN INTERFACE:

This effort first started within the medical profession and is continuing as research progresses.  It’s taken time but after more than a decade of engineering work, researchers at Brown University and a Utah company, Blackrock Microsystems, have commercialized a wireless device that can be attached to a person’s skull and transmit via radio thought commands collected from a brain implant. Blackrock says it will seek clearance for the system from the U.S. Food and Drug Administration, so that the mental remote control can be tested in volunteers, possibly as soon as this year.

The device was developed by a consortium, called BrainGate, which is based at Brown and was among the first to place implants in the brains of paralyzed people and show that electrical signals emitted by neurons inside the cortex could be recorded, then used to steer a wheelchair or direct a robotic arm (see “Implanting Hope”).

A major limit to these provocative experiments has been that patients can only use the prosthetic with the help of a crew of laboratory assistants. The brain signals are collected through a cable screwed into a port on their skull, then fed along wires to a bulky rack of signal processors. “Using this in the home setting is inconceivable or impractical when you are tethered to a bunch of electronics,” says Arto Nurmikko, the Brown professor of engineering who led the design and fabrication of the wireless system.

[Chart: projected hardware capabilities]

Unless you have been living in a tree house for the last twenty years you know digital security is a huge problem.  IT professionals and companies writing code will definitely continue working on how to make our digital world more secure.  That is a given.

EXASCALE:

We can forget Moore’s Law, which refers to an observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention.  Moore’s law predicts that this trend will continue into the foreseeable future. Although the pace has slowed, the number of transistors per square inch has since doubled approximately every 18 months, and this is used as the current definition of Moore’s law.  We are well beyond that, with processing speed literally progressing at “warp six”.
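
Moore’s observation can be written as a simple doubling formula: if transistor density doubles roughly every 18 months, then N(t) = N0 * 2^(t / 1.5), with t in years. A quick computation shows how fast that compounds; the starting count is an arbitrary round number chosen purely for illustration.

```python
# Moore's-law style doubling: transistor count doubling every 18 months.
# The starting count (1 million) is an arbitrary round number for illustration.

DOUBLING_PERIOD_YEARS = 1.5
start_transistors = 1_000_000

def transistors_after(years: float, start: int = start_transistors) -> float:
    """N(t) = N0 * 2**(t / doubling_period)."""
    return start * 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (1.5, 3, 6, 15):
    print(f"after {years:>4} years: ~{transistors_after(years):,.0f} transistors")
```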

NON-VOLATILE MEMORY:

If you are an old guy like me, you can remember when computer memory cost an arm and a leg.  Take a look at the JPEG below and you get an idea as to how memory costs have decreased over the years.

[Chart: hard-drive cost per gigabyte over time]

As you can see, costs have dropped remarkably over the years.

PHOTONICS:

[Image: text describing the photonics prediction]

POWER-CONSERVATIVE MULTICORES:

[Image: text describing the power-conservative multicores prediction]

CONCLUSION:

If you combine the above predictions with 1.) Big Data, 2.) Internet of Things (IoT), 3.) Wearable Technology, 4.) Manufacturing 4.0, 5.) Biometrics, and other fast-moving technologies, you have a world in which “only the adventurous thrive”.  If you do not like change, I recommend you enroll in a monastery.  You will not survive gracefully with technology on the rampage. Just a thought.


Forbes Magazine recently published what they consider to be the top ten (10) trends in technology.  It’s a very interesting list and I could not argue with any item. The writer of the Forbes article is David W. Cearley.  Mr. Cearley is the vice president and Gartner Fellow at Gartner.  He specializes in analyzing emerging and strategic business and technology trends and explores how these trends shape the way individuals and companies derive value from technology.   Let’s take a quick look.

  • DEVICE MESH—This trend takes us far beyond our desktop PC, Tablet or even our cell phone.  The trend encompasses the full range of endpoints with which humans might interact. In other words, just about anything you interact with could possibly be linked to the internet for instant access.  This could mean individual devices interacting with each other in a fashion desired by user programming.  Machine to machine, M2M.
  • AMBIENT USER EXPERIENCE–All of our digital interactions can become synchronized into a continuous and ambient digital experience that preserves our experience across traditional boundaries of devices, time and space. The experience blends physical, virtual and electronic environments, and uses real-time contextual information as the ambient environment changes or as the user moves from one place to another.
  • 3-D PRINTING MATERIALS—If you are not familiar with “additive manufacturing” you are really missing a fabulous technology. Right now, 3-D Printing is somewhat in its infancy but progress is not just weekly or monthly but daily.  The range of materials that can be used for the printing process improves in a remarkable manner. You really need to look into this.
  • INFORMATION OF EVERYTHING— Everything surrounding us in the digital mesh is producing, using and communicating with virtually unmeasurable amounts of information. Organizations must learn how to identify what information provides strategic value, how to access data from different sources, and explore how algorithms leverage Information of Everything to fuel new business designs. I’m sure by now you have heard of “big data”.  Information of everything will provide mountains of data that must be sifted through so usable “stuff” results.  This will continue to be an ever-increasing task for programmers.
  • ADVANCED MACHINE LEARNING– Rise of the Machines.  Machines talking to each other and learning from each other.  (Maybe a little more frightening than it should be.) Advanced machine learning is what allows smart machines such as robots, autonomous vehicles, virtual personal assistants (VPAs) and smart advisors to learn on their own; the autonomous behavior it enables is described under “Autonomous Agents and Things” at the end of this list.
  • ADAPTIVE SECURITY ARCHITECTURE— The complexities of digital business and the algorithmic economy, combined with an emerging “hacker industry,” significantly increase the threat surface for an organization. IT leaders must focus on detecting and responding to threats, as well as more traditional blocking and other measures to prevent attacks. I don’t know if you have ever had your identity stolen but it is NOT fun.  Corrections are definitely time-consuming.
  • ADVANCED SYSTEM ARCHITECTURE–The digital mesh and smart machines place intense demands on computing architecture to make them viable for organizations. They’ll get this added boost from ultra-efficient neuromorphic architectures. Systems built on graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) will function more like human brains and are particularly suited to deep learning and other pattern-matching algorithms that smart machines use. FPGA-based architecture will allow distribution with less power into the tiniest Internet of Things (IoT) endpoints, such as homes, cars, wristwatches and even human beings.
  • MESH APP AND SERVICE ARCHITECTURE–The mesh app and service architecture is what enables delivery of apps and services to the flexible and dynamic environment of the digital mesh. This architecture will serve users’ requirements as they vary over time. It brings together the many information sources, devices, apps, services and microservices into a flexible architecture in which apps extend across multiple endpoint devices and can coordinate with one another to produce a continuous digital experience.
  • INTERNET OF THINGS (IoT) ARCHITECTURE AND PLATFORMS–IoT platforms exist behind the mesh app and service architecture. The technologies and standards in the IoT platform form a base set of capabilities for communicating, controlling, managing and securing endpoints in the IoT. The platforms aggregate data from endpoints behind the scenes from an architectural and a technology standpoint to make the IoT a reality.
  • AUTONOMOUS AGENTS AND THINGS–Advanced machine learning gives rise to a spectrum of smart machine implementations — including robots, autonomous vehicles, virtual personal assistants (VPAs) and smart advisors — that act in an autonomous (or at least semiautonomous) manner. This feeds into the ambient user experience in which an autonomous agent becomes the main user interface. Instead of interacting with menus, forms and buttons on a smartphone, the user speaks to an app, which is really an intelligent agent.

CONCLUSIONS:  You have certainly noticed by now that ALL of the trends, with the exception of 3-D Printing, are rooted in Internet access and Internet protocols.  We are headed towards a totally connected world in which our every move is traceable.  Traceable unless we choose to fly under the radar.
