AN AVERAGE DAY FOR DATA

August 4, 2017


I am sure you have heard the phrase “big data” and possibly wondered just what that terminology relates to.  Let’s get the “official” definition, as follows:

The amount of data that’s being created and stored on a global level is almost inconceivable, and it just keeps growing. That means there’s even more potential to glean key insights from business information – yet only a small percentage of data is actually analyzed. What does that mean for businesses? How can they make better use of the raw information that flows into their organizations every day?

The concept gained momentum in the early 2000s when industry analyst Doug Laney articulated the now-mainstream definition of big data as the three Vs (volume, velocity and variety), plus two additional dimensions, variability and complexity:

  • Volume: Organizations collect data from a variety of sources, including business transactions, social media and information from sensor or machine-to-machine data. In the past, storing it would’ve been a problem – but new technologies (such as Hadoop) have eased the burden.
  • Velocity: Data streams in at an unprecedented speed and must be dealt with in a timely manner. RFID tags, sensors and smart metering are driving the need to deal with torrents of data in near-real time.
  • Variety: Data comes in all types of formats – from structured, numeric data in traditional databases to unstructured text documents, email, video, audio, stock ticker data and financial transactions.
  • Variability: In addition to the increasing velocities and varieties of data, data flows can be highly inconsistent with periodic peaks. Is something trending in social media? Daily, seasonal and event-triggered peak data loads can be challenging to manage. Even more so with unstructured data.
  • Complexity: Today’s data comes from multiple sources, which makes it difficult to link, match, cleanse and transform data across systems. However, it’s necessary to connect and correlate relationships, hierarchies and multiple data linkages or your data can quickly spiral out of control.

AN AVERAGE DAY IN THE LIFE OF BIG DATA:

A picture is worth a thousand words, but let us now quantify, on a daily basis, what we mean by big data.

  • YouTube’s viewers are watching a billion (1,000,000,000) hours of videos each day.
  • We perform over forty thousand (40,000) searches per second on Google alone. That is approximately three and one-half (3.5) billion searches per day and roughly one point two (1.2) trillion searches per year, world-wide.
  • Five years ago, IBM estimated two point five (2.5) exabytes (2.5 billion gigabytes) of data generated every day. It has grown since then.
  • The number of e-mails sent per day is around 269 billion. That is roughly ninety-eight (98) trillion e-mails per year. Globally, the data stored in data centers will quintuple by 2020 to reach 915 exabytes. This is up 5.3-fold from 171 exabytes in 2015, a compound annual growth rate (CAGR) of forty percent (40%).
  • On average, an autonomous car will churn out 4 TB of data per day when factoring in cameras, radar, sonar, GPS and LIDAR – and that is based on just one hour of driving per day. Every autonomous car will generate the data equivalent of almost 3,000 people.
  • By 2024, mobile networks will see machine-to-machine (M2M) connections jump ten-fold, to 2.3 billion from 250 million in 2014, according to Machina Research.
  • The data collected by BMW’s current fleet of 40 prototype autonomous cars during a single test session would fill a stack of CDs 60 miles high.
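The per-second and per-day search figures above can be sanity-checked with a few lines of arithmetic (the 40,000-searches-per-second rate is the one quoted above; everything else follows from it):

```python
# Sanity check of the Google search figures quoted above.
SEARCHES_PER_SECOND = 40_000  # rate quoted in the text

searches_per_day = SEARCHES_PER_SECOND * 60 * 60 * 24
searches_per_year = searches_per_day * 365

print(f"Per day:  {searches_per_day / 1e9:.2f} billion")     # ~3.46 billion
print(f"Per year: {searches_per_year / 1e12:.2f} trillion")  # ~1.26 trillion
```

Both results round to the "3.5 billion per day" and "1.2 trillion per year" figures in the list.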

We have become a world that lives “by the numbers,” and I am not sure that is altogether troubling. At no time in our history have we had access to data that informs, misinforms, directs, challenges, etc., as we have at this time. How we use that data makes all the difference in our daily lives. I have a great friend named Joe McGuinness. One of his favorite expressions is: “It’s about time we learn to separate the fly s_____t from the pepper.” If we apply this phrase to big data, he may just be correct. Be careful out there.


One of the best things the automotive industry accomplishes is showing us what might be in our future.  They all have the finances, creative talent and vision to provide a glimpse into their “wish list” for upcoming vehicles.  Mercedes Benz has done just that with their futuristic F 015 Luxury in Motion.

In order to provide a foundation for the new autonomous F 015 Luxury in Motion research vehicle, an interdisciplinary team of experts from Mercedes-Benz has devised a scenario that incorporates different aspects of day-to-day mobility. Above and beyond its mobility function, this scenario perceives the motor car as a private retreat that additionally offers an important added value for society at large. (I like the word retreat.) If you take a look at how much time the “average” individual spends in his or her automobile or truck, we see the following:

  • On average, Americans drive 29.2 miles per day, making two trips with an average total duration of forty-six (46) minutes. This and other revealing data are the result of a ground-breaking study currently underway by the AAA Foundation for Traffic Safety and the Urban Institute.
  • Motorists age sixteen (16) years and older drive, on average, 29.2 miles per day or 10,658 miles per year.
  • Women take more driving trips, but men spend twenty-five (25) percent more time behind the wheel and drive thirty-five (35) percent more miles than women.
  • Both teenagers and seniors over the age of seventy-five (75) drive less than any other age group; motorists 30-49 years old drive an average 13,140 miles annually, more than any other age group.
  • The average distance and time spent driving increase in relation to higher levels of education. A driver with a grade school or some high school education drove an average of 19.9 miles and 32 minutes daily, while a college graduate drove an average of 37.2 miles and 58 minutes.
  • Drivers who reported living “in the country” or “a small town” drive greater distances (12,264 miles annually) and spend a greater amount of time driving than people who described living in a “medium sized town” or city (9,709 miles annually).
  • Motorists in the South drive the most (11,826 miles annually), while those in the Northeast drive the least (8,468 miles annually).
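The mileage figures above are internally consistent, as a quick check shows:

```python
# Cross-check of the AAA Foundation driving figures quoted above.
miles_per_day = 29.2                  # average daily mileage from the study
annual_miles = miles_per_day * 365    # should match the quoted annual figure

print(round(annual_miles))  # 10658
```

The 29.2 miles per day and 10,658 miles per year figures are the same number expressed on two time scales.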

With this being the case, why not enjoy it?

The F 015 made its debut at the Consumer Electronics Show in Las Vegas more than two years ago. It’s packed with advanced (or what was considered advanced in 2015) autonomous technology, and can, in theory, run for almost 900 kilometers on a mixture of pure electric power and a hydrogen fuel cell.

But while countless other vehicles are still trying to prove that cars can, literally, drive themselves, the Mercedes-Benz offering takes this for granted. Instead, this vehicle wants us to consider what we’ll actually do while the car is driving us around.

The steering wheel slides into the dashboard to create more of a “lounge” space. The seating configuration allows four people to face each other if they want to talk. And when the onboard conversation dries up, a bewildering collection of screens — one on the rear wall, and one on each of the doors — offers plenty of opportunity to interact with various media.

The F 015 could have done all of this as a flash-in-the-pan show car — seen at a couple of major events before vanishing without trace. But in fact, it has been touring almost constantly since that Vegas debut.

“Anyone who focuses solely on the technology has not yet grasped how autonomous driving will change our society,” emphasizes Dr Dieter Zetsche, Chairman of the Board of Management of Daimler AG and Head of Mercedes-Benz Cars. “The car is growing beyond its role as a mere means of transport and will ultimately become a mobile living space.”

The visionary research vehicle was born, a vehicle which raises comfort and luxury to a new level by offering a maximum of space and a lounge character on the inside. Every facet of the F 015 Luxury in Motion is the utmost reflection of the Mercedes way of interpreting the terms “modern luxury”, emotion and intelligence.

This innovative four-seater is a forerunner of a mobility revolution, and this is immediately apparent from its futuristic appearance. Sensuousness and clarity, the core elements of the Mercedes-Benz design philosophy, combine to create a unique, progressive aesthetic appeal.

OK, with this being the case, let us now take a pictorial look at what the “Benz” has to offer.

One look and you can see the car is definitely aerodynamic in styling.  I am very sure that much time has been spent with this “ride” in wind tunnels with slip streams being monitored carefully.  That is where drag coefficients are determined initially.

The two JPEGs above show the front and rear swept glass windshields, which definitely reduce aerodynamic drag.

The interiors are the most striking feature of this automobile.

Please note, this version is a four-seater but with plenty of leg-room.

Each occupant has a touch screen, presumably for accessing wireless services or the Internet. One thing to note: as yet there is no published list price for the car. I am sure pricing is being considered at this time, but there are no USD numbers to date. Also, as mentioned, the car is self-driving, and that brings added complexities. By design, this vehicle is a moving computer. It has to be. I am always very interested in the maintenance and training necessary to diagnose and repair a vehicle such as this. Infrastructure MUST be in place to facilitate quick turnaround when trouble arises, both mechanical and electrical.

As always, I welcome your comments.

THE USS GERALD R. FORD

July 28, 2017


This past Saturday the U.S. Navy commissioned its most powerful warship yet: the USS Gerald R. Ford. The nuclear-powered aircraft carrier is outfitted with state-of-the-art technology and capacity for more aircraft and weaponry than ever before. A digital photograph of the carrier is given below. As you can see, it is a massive vessel with a length of 1,106 feet, roughly the length of three football fields. The extra room allows for an expanded flight deck, making it easier for jets and drones to maneuver. Additionally, it features a better-positioned “island” structure, giving the captain of the ship improved visibility.

It features an electromagnetic launch system and advanced arresting gear for faster and more efficient take-offs and landings. The vessel is outfitted with a touchscreen navigation display in place of a traditional throttle and is equipped with a reactor plant that can power the ship for up to twenty (20) years without refueling. Imagine, twenty (20) years without refueling!

With more than double the electrical capacity and more automated equipment, the ship can sail with at least six hundred (600) fewer crew members. Once officially deployed in 2020, it will house 2,600 sailors. The Navy says this manning level will save more than four billion dollars ($4 billion) over the ship’s fifty (50)-year lifespan.
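For a rough sense of scale, here is a back-of-the-envelope calculation. The link between the savings and the crew numbers is my assumption, for illustration only; the Navy’s $4 billion figure is the one quoted above:

```python
# Rough illustration of the quoted $4 billion lifetime savings, assuming
# (my assumption) it derives from roughly 600 fewer crew over 50 years.
savings_total = 4_000_000_000   # dollars, quoted by the Navy
crew_reduction = 600
lifespan_years = 50

savings_per_sailor_year = savings_total / (crew_reduction * lifespan_years)
print(f"${savings_per_sailor_year:,.0f} per sailor per year")  # $133,333
```

That per-sailor figure is plausible once pay, benefits, training and berthing are all counted, which is why manning reductions matter so much over a hull’s life.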

Built by Huntington Ingalls Industries, the Gerald R. Ford (CVN 78) is the first new aircraft carrier design since the USS Nimitz (CVN 68), or the first new design in about forty (40) years. The Navy has ordered three of these Ford-class carriers, with a combined price tag of approximately forty-two billion dollars ($42 billion), of which this is the very first.

This first Ford-class ship, CVN 78, is named after former U.S. President Gerald R. Ford to pay tribute to his lifetime of service to the nation in the Navy and the U.S. government. During World War II, Ford was a Navy lieutenant commander serving on the light carrier USS Monterey (CVL 26).

The Ford class is designed with sailors and safety in mind. The design incorporates features to improve quality of life on board and to make maintenance easier. A great deal of consideration was put into improved survivability against a wide range of current and anticipated future threats. Obviously, that helps keep sailors safer. The digital photograph below will give you some idea as to areas of improved survivability.

The military will be able to launch about 33 percent more aircraft than it can from the older carriers. The more aircraft the military can launch, the more bombs it can hit a target with.

SPECIFICATIONS:

I have a sneaking feeling there are other marvelous improvements, but those are certainly classified. All in all, this is one mean machine and, with our aging fleet of naval ships, a very welcome addition.


Various definitions of product lifecycle management or PLM have been issued over the years but basically: product lifecycle management is the process of managing the entire lifecycle of a product from inception, through engineering design and manufacture, to service and disposal of manufactured products.  PLM integrates people, data, processes and business systems and provides a product information backbone for companies and their extended enterprise.

In recent years, great emphasis has been put on the disposal of a product after its service life has been met. How to get rid of a product or component is extremely important. Disposal methodology is covered by RoHS standards for the European Community. If you sell into the EU, you will have to designate proper disposal. Dumping in a landfill is no longer appropriate.

Since this course deals with the application of PLM to industry, we will now look at various industry definitions.

Industry Definitions

“PLM is a strategic business approach that applies a consistent set of business solutions in support of the collaborative creation, management, dissemination, and use of product definition information across the extended enterprise, and spanning from product concept to end of life integrating people, processes, business systems, and information. PLM forms the product information backbone for a company and its extended enterprise.” Source: CIMdata

“Product life cycle management or PLM is an all-encompassing approach for innovation, new product development and introduction (NPDI) and product information management from initial idea to the end of life.  PLM Systems is an enabling technology for PLM integrating people, data, processes, and business systems and providing a product information backbone for companies and their extended enterprise.” Source:  PLM Technology Guide

“The core of PLM (product life cycle management) is in the creation and central management of all product data and the technology used to access this information and knowledge. PLM as a discipline emerged from tools such as CAD, CAM and PDM, but can be viewed as the integration of these tools with methods, people and the processes through all stages of a product’s life.” Source:  Wikipedia article on Product Lifecycle Management

“Product life cycle management is the process of managing product-related design, production and maintenance information. PLM may also serve as the central repository for secondary information, such as vendor application notes, catalogs, customer feedback, marketing plans, archived project schedules, and other information acquired over the product’s life.” Source:  Product Lifecycle Management

“It is important to note that PLM is not a definition of a piece, or pieces, of technology. It is a definition of a business approach to solving the problem of managing the complete set of product definition information-creating that information, managing it through its life, and disseminating and using it throughout the lifecycle of the product. PLM is not just a technology, but is an approach in which processes are as important, or more important than data.” Source:  CIMdata

“PLM or Product Life Cycle Management is a process or system used to manage the data and design process associated with the life of a product from its conception and envisioning through its manufacture, to its retirement and disposal. PLM manages data, people, business processes, manufacturing processes, and anything else pertaining to a product. A PLM system acts as a central information hub for everyone associated with a given product, so a well-managed PLM system can streamline product development and facilitate easier communication among those working on/with a product.” Source: Aras

A pictorial representation of PLM may be seen as follows:

Hopefully, you can see that PLM deals with methodologies from “white napkin design to landfill disposal”. Please note, documentation is critical to all aspects of PLM, and good document production, storage and retrieval are extremely important to the overall process. We are talking about CAD, CAM, CAE, DFSS, laboratory testing notes, etc.; in other words, “the whole nine yards” of product life. If you work in a company with ISO certification, PLM is a great method to help ensure you retain that certification.
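To make the “product information backbone” idea concrete, here is a minimal sketch of a PLM-style product record. All class and field names are hypothetical, invented for illustration; they do not come from any particular PLM system:

```python
# Minimal sketch of a PLM-style product record: one central object carrying
# revision-controlled documents from design through disposal.
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str       # e.g. "CAD model", "lab test notes"
    revision: str   # e.g. "A", "B", "C"

@dataclass
class ProductRecord:
    part_number: str
    stage: str = "concept"   # concept -> design -> production -> service -> disposal
    documents: list = field(default_factory=list)

    def add_document(self, name, revision="A"):
        self.documents.append(Document(name, revision))

record = ProductRecord("PN-1001")
record.add_document("CAD model")
record.add_document("DFSS worksheet")
print(len(record.documents))  # 2
```

The point is the central hub: every document from CAD to lab notes hangs off one record, which is what makes retrieval for an ISO audit tractable.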

In looking at the four stages of a product’s lifecycle, we see the following:

Four Stages of Product Life Cycle—Marketing and Sales:

The first stage is introduction, when the product is brought into the market. In this stage there is heavy marketing activity and product promotion, and the product is put into limited outlets in a few channels for distribution. Sales take off slowly in this stage. The need is to create awareness, not profits.

The second stage is growth. In this stage, sales take off, the market knows of the product; other companies are attracted, profits begin to come in and market shares stabilize.

The third stage is maturity, where sales grow at slowing rates and finally stabilize. In this stage, products get differentiated, price wars and sales promotion become common and a few weaker players exit.

The fourth stage is decline. Here, sales drop, as consumers may have changed, the product is no longer relevant or useful. Price wars continue, several products are withdrawn and cost control becomes the way out for most products in this stage.
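The four stages above can be sketched as an ordered sequence (a simplification: real products can be relaunched or withdrawn early):

```python
# The four marketing stages as an ordered sequence, making the
# "next stage" relationship explicit.
STAGES = ["introduction", "growth", "maturity", "decline"]

def next_stage(stage):
    i = STAGES.index(stage)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None

print(next_stage("growth"))   # maturity
print(next_stage("decline"))  # None (end of the commercial life)
```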

Benefits of PLM Relative to the Four Stages of Product Life:

Considering the benefits of Product Lifecycle Management, we realize the following:

  • Reduced time to market
  • Increase full price sales
  • Improved product quality and reliability
  • Reduced prototyping costs
  • More accurate and timely request for quote generation
  • Ability to quickly identify potential sales opportunities and revenue contributions
  • Savings through the re-use of original data
  • Framework for product optimization
  • Reduced waste
  • Savings through the complete integration of engineering workflows
  • Documentation that can assist in proving compliance with RoHS or Title 21 CFR Part 11
  • Ability to provide contract manufacturers with access to a centralized product record
  • Seasonal fluctuation management
  • Improved forecasting to reduce material costs
  • Maximize supply chain collaboration
  • Allowing for much better “troubleshooting” when field problems arise. This is accomplished by laboratory testing and reliability testing documentation.

PLM considers not only the four stages of a product’s lifecycle but all of the work prior to marketing and sales AND disposal after the product is removed from commercialization. With this in mind, why is PLM a necessary business technique today? Because of increases in technology, manpower and specialization of departments, PLM was needed to integrate all activity toward the design, manufacture and support of the product. Back in the late 1960s, when the F-15 Eagle was conceived and developed, almost all design and manufacturing processes were done by hand. Blueprints or drawings needed to make the parts for the F-15 were created on paper. No electronics, no e-mails – all paper for documents. This caused a lack of efficiency in design and manufacturing compared to today’s technology. OK, here is another example of today’s technology and the application of PLM.

If we look at the processes for Boeing’s DREAMLINER, we see the 787 Dreamliner has about 2.3 million parts per airplane. Development and production of the 787 has involved a large-scale collaboration with numerous suppliers worldwide. The parts include everything from “fasten seatbelt” signs to jet engines and vary in size from small fasteners to large fuselage sections. Some parts are built by Boeing, and others are purchased from supplier partners around the world. In 2012, Boeing purchased approximately seventy-five (75) percent of its supplier content from U.S. companies. On the 787 program, content from non-U.S. suppliers accounts for about thirty (30) percent of purchased parts and assemblies. PLM, or Boeing’s version of PLM, was used to bring about commercialization of the 787 Dreamliner.

 

COLLABORATIVE ROBOTICS

June 26, 2017


I want to start this discussion with defining collaboration.  According to Merriam-Webster:

  • to work jointly with others or together, especially in an intellectual endeavor (“an international team of scientists collaborated on the study”)
  • to cooperate with or willingly assist an enemy of one’s country, and especially an occupying force (“suspected of collaborating with the enemy”)
  • to cooperate with an agency or instrumentality with which one is not immediately connected.

We are going to adopt the first definition: to work jointly with others. Well, what if the “others” are robotic systems?

Collaborative robots, or cobots as they have come to be known, are robotic systems designed to operate collaboratively, or in conjunction, with humans. The term “collaborative robot” is really a verb, not a noun: the collaboration depends on what the robot is doing, not on the robot itself. With that in mind, collaborative robotic systems and applications generally combine some or all of the following characteristics:

  • They are designed to be safe around people. This is accomplished by using sensors to prevent touching or by limiting the force if the system touches a human or a combination of both.
  • They are often relatively light weight and can be moved from task to task as needed. This means they can be portable or mobile and can be mounted on movable tables.
  • They do not require skill to program. Most cobots are simple enough that anyone who can use a smartphone or tablet can teach or program them. Most robotic systems of this type are programmed by using a “teach pendant”. The simplest can allow up to ninety (90) programs to be installed.
  • Just as a power saw is intended to help, not replace, the carpenter, the cobot is generally intended to assist, not replace, the production worker. (This is where the collaboration gets its name: it assists the human in accomplishing a task.) The production worker generally works side by side with the robot.
  • Collaborative robots are generally simpler than more traditional robots, which makes them cheaper to buy, operate and maintain.

There are two basic approaches to making cobots safe. One approach, taken by Universal, Rethink and others, is to make the robot inherently safe. If it makes contact with a human co-worker, it immediately stops so the worker feels no more than a gentle nudge. Rounded surfaces help make that nudge even more gentle. This approach limits the maximum load that the robot can handle as well as the speed. A robot moving a fifty (50) pound part at high speed will definitely hurt no matter how quickly it can stop upon making contact.

A sensor-based approach allows collaborative use in faster and heavier applications. Traditionally, physical barriers such as cages or light curtains have been used to stop the robot when a person enters the perimeter. Modern sensors can be more discriminating, sensing not only the presence of a person but their location as well. This allows the robot to slow down, work around the person or stop as the situation demands to maintain safety. When the person moves away, the robot can automatically resume normal operation.
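The sensor-based approach above can be sketched as a simple speed-scaling rule. The distance thresholds below are invented for illustration; real systems follow the speed-and-separation requirements of standards such as ISO/TS 15066:

```python
# Sketch of sensor-based collaborative operation: scale robot speed by the
# distance to the nearest detected person. Thresholds are illustrative only.
FULL_SPEED_DISTANCE = 2.0  # metres: beyond this, run at full speed
STOP_DISTANCE = 0.5        # metres: within this, stop completely

def speed_factor(person_distance_m):
    if person_distance_m >= FULL_SPEED_DISTANCE:
        return 1.0
    if person_distance_m <= STOP_DISTANCE:
        return 0.0
    # Linear slowdown between the two thresholds
    return (person_distance_m - STOP_DISTANCE) / (FULL_SPEED_DISTANCE - STOP_DISTANCE)

print(speed_factor(3.0))   # 1.0  (person far away: full speed)
print(speed_factor(1.25))  # 0.5  (person approaching: half speed)
print(speed_factor(0.3))   # 0.0  (person too close: stop)
```

When the person moves away, the same rule automatically ramps the speed back up, which is exactly the resume-on-clear behavior described above.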

No discussion of robot safety can ignore the end-of-arm tooling (EOAT).  If the robot and operator are handing parts back and forth, the tooling needs to be designed so that, if the person gets their fingers caught, they can’t be hurt.

The next digital photographs will give you some idea as to how humans and robotic systems can work together and the tasks they can perform.

The following statistics are furnished by “Digital Engineering” February 2017.

  • By 2020, more than three (3) million workers on a global basis will be supervised by a “robo-boss”.
  • Forty-five (45) percent of all work activities could be automated using already demonstrated technology and fifty-nine (59) percent of all manufacturing activities could be automated, given technical considerations.
  • At the present time, fifty-nine (59) percent of US manufacturers are using some form of robotic technology.
  • Artificial Intelligence (AI) will replace sixteen (16) percent of American jobs by 2025 and will create nine (9) percent of American jobs.
  • By 2018, six (6) billion connected devices will be used to assist commerce and manufacturing.

CONCLUSIONS: OK, why am I posting this message? Robotic systems, and robots themselves, WILL become more and more familiar to us as the years go by. They are already used in a tremendous number of factories and on manufacturing floors. Right now, most of the robotic work cells used in manufacturing are NOT collaborative. The systems are SCARA type (the SCARA acronym stands for Selective Compliance Assembly Robot Arm or Selective Compliance Articulated Robot Arm) and perform a pick-and-place function or a very specific task, such as laying down a bead of adhesive on a plastic or metal part. Employee training will be necessary if robotic systems are used, especially if those systems are collaborative in nature. In other words, get ready for it. Train for this to happen so that when it does you are prepared.


Information for this post is taken from the following companies:

  • Wohlers Associates
  • Gartner
  • Oerlikon
  • SmartTech Publishing

3-D ADDITIVE MANUFACTURING:

Before we get up and running, let us define “additive manufacturing”.

Additive Manufacturing, or AM, is an appropriate name to describe the technologies that build 3D objects by adding layer upon layer of material, whether the material is plastic, metal, concrete or human tissue. Believe it or not, additive manufacturing is now, on a limited basis, able to construct objects from human tissue to repair body parts that have been damaged or are absent.

Common to AM technologies is the use of a computer, 3D modeling software (Computer Aided Design, or CAD), machine equipment and layering material. Once a CAD model is produced, the AM equipment reads in data from the CAD file and lays down successive layers of liquid, powder, sheet material or other material in a layer-upon-layer fashion to fabricate a 3D object.
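The layer-upon-layer idea reduces to simple arithmetic: the number of slices the machine deposits is the part height divided by the layer thickness. The values below are illustrative, not from any specific machine:

```python
# Layer-by-layer fabrication in one line of arithmetic: slice count equals
# part height divided by layer thickness. Values are illustrative only.
part_height_mm = 50.0
layer_thickness_mm = 0.1   # typical polymer layers run roughly 0.05-0.3 mm

layers = round(part_height_mm / layer_thickness_mm)
print(layers)  # 500
```

This is also why layer thickness is the basic trade-off in AM: halving it doubles the layer count, and with it the build time, in exchange for finer surface detail.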

The term AM encompasses many technologies including subsets like 3D Printing, Rapid Prototyping (RP), Direct Digital Manufacturing (DDM), layered manufacturing and additive fabrication.

AM application is limitless. Early use of AM in the form of Rapid Prototyping focused on preproduction visualization models. More recently, AM is being used to fabricate end-use products in aircraft, dental restorations, medical implants, automobiles, and even fashion products.

RAPID PROTOTYPING & MANUFACTURING (RP&M) TECHNOLOGIES:

There are several viable options available today that take advantage of rapid prototyping technologies.   All of the methods shown below are considered to be rapid prototyping and manufacturing technologies.

  • (SLA) Stereolithography
  • (SLS) Selective Laser Sintering
  • (FDM) Fused Deposition Modeling
  • (3DP) Three-Dimensional Printing
  • (Pjet) Poly-Jet
  • (LOM) Laminated Object Manufacturing

PRODUCT POSSIBILITIES:

Frankly, if the configuration can be programmed, it can be printed. The possibilities are absolutely endless.

Assortment of components: flange mount and external gear.

Bone fragment depicting a fractured bone.  This printed product will aid the efforts of a surgeon to make the necessary repair.

More and more, 3D printing is used to model teeth and jaw lines prior to extensive dental work. It gives the dental surgeon a better look at a patient’s mouth prior to surgery.

You can see the intricate detail of the Eiffel Tower and the shoe sole in the JPEGs above. 3D printing can provide an enormous amount of detail to the end user.

THE MARKET:

3D printing is a disruptive technology that is definitely on the rise.  Let’s take a look at future possibilities and current practices.

GROWTH:

Wohlers Associates has been tracking the market for machines that produce metal parts for fourteen (14) years. The Wohlers Report 2014 marks only the second time the company has published detailed information on metal-based AM machine unit sales by year. The following chart shows that 348 such machines were sold in 2013, compared to 198 in 2012, growth of an impressive 75.8%.

The additive manufacturing industry grew by 17.4% in worldwide revenues in 2016, reaching $6.063 billion.

MATERIALS USED:

Nearly one-half of the 3D printing/additive manufacturing service providers surveyed in 2016 offered metal printing.

GLOBAL MARKETS:

NUMBER OF VENDORS OFFERING EQUIPMENT:

The number of companies producing and selling additive manufacturing equipment:

  • 2014—49
  • 2015—62
  • 2016—97

USERS:

Worldwide shipments of 3D printers were projected to reach 455,772 units in 2016, with 6.7 million units expected to be shipped by 2020.

More than 278,000 desktop 3D printers (under $5,000) were sold worldwide last year, according to Wohlers Associates. The report has a chart to illustrate and it looks like the proverbial hockey stick that you hear venture capitalists talk about: Growth that moves rapidly from horizontal to vertical (from 2010 to 2015 for desktop).

According to Wohlers Report 2016, the additive manufacturing (AM) industry grew 25.9% (CAGR, compound annual growth rate) to $5.165 billion in 2015. Frequently called 3D printing by those outside of manufacturing circles, the industry growth consists of all AM products and services worldwide. The CAGR for the previous three years was 33.8%. Over the past 27 years, the CAGR for the industry is an impressive 26.2%. Clearly, this is not a market segment that is declining, as you might otherwise read.
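The growth figures above can be checked with the standard CAGR formula, CAGR = (end/start)**(1/years) - 1:

```python
# Compound annual growth rate, and a back-calculation from the quoted data.
def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1 / years) - 1

# $5.165B in 2015 after 25.9% growth implies roughly $4.10B in 2014:
revenue_2014 = 5.165 / 1.259
print(f"{revenue_2014:.2f}")  # 4.10

# A 26.2% CAGR sustained for 27 years multiplies revenue roughly 535-fold:
print(round(1.262 ** 27))
```

The 535-fold figure shows why a quarter-century of 26% compounding turns a niche prototyping technology into a multi-billion-dollar industry.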

THE MARKET:

  • About 20 to 25% of the $26.5 billion market forecast for 2021 is expected to be the result of metal additive manufacturing.
  • The market for polymers and plastics for 3D printing will reach $3.2 billion by 2022
  • The primary market for metal additive manufacturing, including systems and power materials, will grow to over $6.6 billion by 2026.

CONCLUSIONS:

We see more and more products and components manufactured by 3D printing processes. Additive manufacturing is just now enjoying acceptance from larger and more established companies whose products are in effect “mission critical”. As material choices continue to grow, a greater number of applications will emerge. For the foreseeable future, additive manufacturing is one of the technologies to be associated with.

CLOUD COMPUTING

May 20, 2017


OK, you have heard the term over and over again, but just what is cloud computing? Simply put, cloud computing is the delivery of computing services over the Internet (“the cloud”): servers, storage, databases, networking, software, analytics and more. Companies offering these computing services are called cloud providers, and they typically charge for cloud computing services based on usage, similar to how you are billed for water or electricity at home.

It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications and services) which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in either privately owned or third-party data centers that may be located far from the user, ranging in distance from across a city to across the world. Cloud computing relies on the sharing of resources to achieve coherence and economies of scale, much like the electricity grid.
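A toy example of the “billed like a utility” model. The rates below are invented for illustration; real providers publish their own price sheets:

```python
# Toy metered-pricing calculation: pay only for what you use.
RATE_PER_GB_MONTH = 0.023  # storage, USD (hypothetical rate)
RATE_PER_VM_HOUR = 0.10    # compute, USD (hypothetical rate)

def monthly_bill(storage_gb, vm_hours):
    return storage_gb * RATE_PER_GB_MONTH + vm_hours * RATE_PER_VM_HOUR

# 500 GB stored plus one virtual machine running all month (~720 hours):
print(f"${monthly_bill(500, 720):.2f}")  # $83.50
```

The key property is that the bill scales with usage: shut the VM down and the compute charge stops, just as a light switch stops the electricity meter.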

ADVANTAGES AND DISADVANTAGES:

Any new technology has an upside and a downside, and the cloud is no exception.  Let’s take a look.

 Advantages

  • Lower cost for desktop clients, since the applications run in the cloud. Clients need smaller hard drives and possibly no CD or DVD drives at all.
  • A business’s peak computing needs can be offloaded to cloud applications, saving the funds normally spent on additional in-house servers.
  • Lower maintenance costs, for both hardware and software: client machines can be far cheaper, and purchase costs are eliminated altogether for software running in the cloud.
  • Automatic software updates for applications in the cloud, which is a further maintenance saving.
  • Vastly increased computing power on demand, thanks to the scalability of the provider’s server farms.
  • Effectively unlimited storage capacity, thanks to the scalability of virtual storage.
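The peak-offload advantage above comes down to a simple rule: provision just enough capacity for the current load, and release it when the peak passes. A minimal sketch, assuming a made-up per-server capacity of 100 requests per second:

```python
import math

def servers_needed(requests_per_second: float,
                   capacity_per_server: float = 100.0) -> int:
    """Scale out to meet demand and back in when it falls.

    capacity_per_server is an assumed illustrative figure, not a
    benchmark for any real machine.
    """
    if requests_per_second <= 0:
        return 0  # nothing running, nothing billed
    # Round up: 250 req/s at 100 req/s per server needs 3 servers.
    return math.ceil(requests_per_second / capacity_per_server)
```

An in-house data center must be sized for the worst-case peak and sits idle the rest of the time; a cloud deployment can re-run this calculation continuously and pay only for the servers it actually holds.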

 Disadvantages

  • Requires an “always on” Internet connection.
  • There are clear concerns about data security, raising questions like: “If I can get to my data using a web browser, who else can?”
  • Concerns about loss of data.
  • Reliability. Service interruptions are rare but do happen; Google has already had an outage.

MAJOR CLOUD SERVICE PROVIDERS:

The following names are very recognizable.  Everyone knows the “open-market” cloud service providers.

  • AMAZON
  • SALESFORCE
  • GOOGLE
  • IBM
  • MICROSOFT
  • SUN MICROSYSTEMS
  • ORACLE
  • AT & T

PRIVATE CLOUD SERVICE PROVIDERS:

With all the interest in cloud computing as a service, there is also an emerging concept of private clouds. It is a bit reminiscent of the early days of the Internet and the importing of that technology into the enterprise as intranets. Concerns about security and reliability outside corporate control are very real and troublesome aspects of the otherwise attractive technology of cloud computing services. The IT world has not forgotten the eight-hour downtime of Amazon’s S3 cloud service on July 20, 2008. A private cloud means that the technology must be bought, built and managed within the corporation: a company purchases cloud technology usable inside the enterprise to develop cloud applications that have the flexibility of running on the private cloud or outside on public clouds. This “hybrid environment” is, in fact, the direction that some believe the enterprise community will take, and some of the products that support this approach are listed below.

  • Elastra (http://www.elastra.com) is developing a server that can be used as a private cloud in a data center. Tools are available to design applications that will run in both private and public clouds.
  • 3Tetra (http://www.3tetra.com) is developing a grid operating system called ParaScale that will aggregate disk storage.
  • Cassatt (http://www.cassatt.com) will be offering technology that can be used for resource pooling.
  • Ncomputing (http://www.ncomputing.com) has developed a standard desktop PC virtualization software system that allows up to 30 users to share the same PC, each with their own keyboard, monitor and mouse. Strong claims are made about savings on PC costs, IT complexity and power consumption by customers in government, industry and education.
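The hybrid-environment idea described above, in which workloads can run on the private cloud or burst out to a public one, boils down to a placement rule. A minimal sketch, with an entirely hypothetical policy (sensitive data stays in-house, and paid-for private capacity is used before public capacity):

```python
# Hypothetical hybrid-cloud placement rule for illustration only;
# real products apply far richer policies than this.

def place_workload(sensitive: bool, private_capacity_free: int) -> str:
    """Return where a workload should run: 'private' or 'public'."""
    if sensitive:
        return "private"          # keep regulated data inside the firewall
    if private_capacity_free > 0:
        return "private"          # use already-purchased hardware first
    return "public"               # burst to the public cloud at peak load
```

The attraction of the hybrid approach is exactly this flexibility: the enterprise keeps control where it must and rents capacity where it can.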

CONCLUSION:

OK, clear as mud—right?  For me, the biggest misconception is the terminology itself—the cloud.   The word “cloud” seems to imply an IT system in the sky.  The exact opposite is the case.  The cloud is an earth-based IT system serving as a universal host: a network of computers, a network of servers.  No cloud.
