AMAZING GRACE

October 3, 2017


There are many people responsible for the revolutionary development and commercialization of the modern-day computer.  Just a few of those names are given below, many of whom you have probably never heard of.  Let’s take a look.

COMPUTER REVOLUTIONARIES:

  • Howard Aiken–Aiken was the original conceptual designer behind the Harvard Mark I computer in 1944.
  • Grace Murray Hopper–Hopper popularized the term “debugging” in 1947 after an actual moth was removed from a computer. Her ideas about machine-independent programming led to the development of COBOL, one of the first modern programming languages. On top of it all, the Navy destroyer USS Hopper is named after her.
  • Ken Thompson and Dennis Ritchie–These guys invented Unix in 1969, the importance of which CANNOT be overstated. Consider this: your fancy Apple computer relies almost entirely on their work.
  • Doug and Gary Carlston–This team of brothers co-founded Brøderbund Software, a successful gaming company that operated from 1980-1999. In that time, they were responsible for churning out or marketing revolutionary computer games like Myst and Prince of Persia, helping bring computing into the mainstream.
  • Ken and Roberta Williams–This husband and wife team founded On-Line Systems in 1979, which later became Sierra Online. The company was a leader in producing graphical adventure games throughout the advent of personal computing.
  • Seymour Cray–Cray was a supercomputer architect whose computers were the fastest in the world for many decades. He set the standard for modern supercomputing.
  • Marvin Minsky–Minsky was a professor at MIT and oversaw the AI Lab, a hotspot of hacker activity, where he let prominent programmers like Richard Stallman run free. Were it not for his open-mindedness, programming skill, and ability to recognize that important things were taking place, the AI Lab wouldn’t be remembered as the talent incubator that it is.
  • Bob Albrecht–He founded the People’s Computer Company and developed a sincere passion for encouraging children to get involved with computing. He’s responsible for ushering in innumerable new young programmers and is one of the first modern technology evangelists.
  • Steve Dompier–At a time when computer speech was just barely being realized, Dompier made his computer sing. It was a trick he unveiled at the first meeting of the Homebrew Computer Club in 1975.
  • John McCarthy–McCarthy invented Lisp, the second-oldest high-level programming language that’s still in use to this day. He’s also responsible for bringing mathematical logic into the world of artificial intelligence — letting computers “think” by way of math.
  • Doug Engelbart–Engelbart is most noted for inventing the computer mouse in the mid-1960s, but he’s made numerous other contributions to the computing world. He created early GUIs and was even a member of the team that developed the now-ubiquitous hypertext.
  • Ivan Sutherland–Sutherland received the prestigious Turing Award in 1988 for inventing Sketchpad, the predecessor to the type of graphical user interfaces we use every day on our own computers.
  • Tim Paterson–He wrote QDOS, an operating system that he sold to Bill Gates in 1980. Gates rebranded it as MS-DOS, selling it to the point that it became the most widely used operating system of the day. (How ’bout them apples?)
  • Dan Bricklin–He’s “the father of the spreadsheet.” Working in 1979 with Bob Frankston, he created VisiCalc, a predecessor to Microsoft Excel. It was the killer app of the time — people were buying computers just to run VisiCalc.
  • Bob Kahn and Vint Cerf–Prolific internet pioneers, these two teamed up to build the Transmission Control Protocol and the Internet Protocol, better known as TCP/IP. These are the fundamental communication technologies at the heart of the Internet.
  • Niklaus Wirth–Wirth designed several programming languages, but is best known for creating Pascal. He won a Turing Award in 1984 for “developing a sequence of innovative computer languages.”

ADMIRAL GRACE MURRAY HOPPER:

At this point, I want to highlight Admiral Grace Murray Hopper, or “Amazing Grace” as she is called in the computer world and the United States Navy.  Admiral Hopper’s picture is shown below.

Born in New York City in 1906, Grace Hopper joined the U.S. Navy during World War II and was assigned to program the Mark I computer. She continued to work in computing after the war, leading the team that created the first computer language compiler, which led to the popular COBOL language. She resumed active naval service at the age of 60, becoming a rear admiral before retiring in 1986. Hopper died in Virginia in 1992.

Born Grace Brewster Murray in New York City on December 9, 1906, Grace Hopper studied math and physics at Vassar College. After graduating from Vassar in 1928, she proceeded to Yale University, where, in 1930, she received a master’s degree in mathematics. That same year, she married Vincent Foster Hopper, becoming Grace Hopper (a name that she kept even after the couple’s 1945 divorce). Starting in 1931, Hopper began teaching at Vassar while also continuing to study at Yale, where she earned a Ph.D. in mathematics in 1934—becoming one of the first few women to earn such a degree.

After the war, Hopper remained with the Navy as a reserve officer. As a research fellow at Harvard, she worked with the Mark II and Mark III computers. She was at Harvard when a moth was found to have shorted out the Mark II, and is sometimes given credit for the invention of the term “computer bug”—though she didn’t actually author the term, she did help popularize it.

Hopper retired from the Naval Reserve in 1966, but her pioneering computer work meant that she was recalled to active duty—at the age of 60—to tackle standardizing communication between different computer languages. She would remain with the Navy for 19 years. When she retired in 1986, at age 79, she was a rear admiral as well as the oldest serving officer in the service.

Saying that she would be “bored stiff” if she stopped working entirely, Hopper took another job post-retirement and stayed in the computer industry for several more years. She was awarded the National Medal of Technology in 1991—becoming the first female individual recipient of the honor. At the age of 85, she died in Arlington, Virginia, on January 1, 1992. She was laid to rest in the Arlington National Cemetery.

CONCLUSIONS:

In 1997, the guided missile destroyer USS Hopper was commissioned by the Navy in San Francisco. In 2004, the University of Missouri honored Hopper with a computer museum on its campus, dubbed “Grace’s Place.” On display are early computers and computer components to educate visitors on the evolution of the technology. In addition to her programming accomplishments, Hopper’s legacy includes encouraging young people to learn how to program. The Grace Hopper Celebration of Women in Computing Conference is a technical conference that encourages women to become part of the world of computing, while the Association for Computing Machinery offers a Grace Murray Hopper Award. Additionally, on her birthday in 2013, Hopper was remembered with a “Google Doodle.”

In 2016, Hopper was posthumously honored with the Presidential Medal of Freedom by Barack Obama.

Who said women could not “do” STEM (Science, Technology, Engineering and Mathematics)?


HACKED OFF

October 2, 2017


Portions of this post are taken from an article by Rob Spiegel of Design News Daily.

You can now anonymously hire a cybercriminal online for as little as six to ten dollars ($6 to $10) per hour, says Rodney Joffe, senior vice president at Neustar, a cybersecurity company. As it becomes easier to engineer such attacks, with costs falling, more businesses are getting targeted. About thirty-two (32) percent of information technology professionals surveyed said DDoS attacks cost their companies $100,000 an hour or more. That percentage is up from thirty (30) percent reported in 2014, according to Neustar’s survey of over 500 high-level IT professionals. The data was released Monday.

Hackers are costing consumers and companies between $375 billion and $575 billion annually, according to a study published this past Monday, a number only expected to grow as online information stealing expands with increased Internet use.  This number blows my mind.   I actually had no idea the costs were so great.  Great and increasing.

Online crime is estimated at 0.8 percent of worldwide GDP, with developed countries in regions including North America and Europe losing more than countries in Latin America or Africa, according to the new study published by the Center for Strategic and International Studies and funded by cybersecurity firm McAfee.

That amount rivals the share of worldwide GDP – 0.9 percent – that is spent on managing the narcotics trade. The difference in costs between developed and developing nations may be due to better accounting or transparency in developed nations, as the cost of online crime can be difficult to measure and some companies do not disclose when they are hacked for fear of damage to their reputations, the report said.

Cyber attacks have changed in recent years. Gone are the days when relatively benign bedroom hackers entered organizations to show off their skills.  No longer is it a guy in the basement of his or her mom’s home eating Doritos.  Attackers now are often sophisticated criminals who target employees with access to the organization’s crown jewels. Instead of using blunt force, these savvy criminals use age-old human fallibility to con unwitting employees into handing over the keys to the vault.  Professional criminals like the crime opportunities they’ve found on the internet; it’s far less dangerous than slinging guns. Cybersecurity is getting worse. Criminal gangs have discovered they can carry out crime more effectively over the internet, with less chance of getting caught.  Hacking individual employees is often the easiest way into a company, and one of the cheapest and most effective ways to target an organization is to target its people. Attackers use psychological tricks that have been exploited throughout human history, and the internet lets those con tricks be carried out on a large scale. The criminals do reconnaissance to find out about targets over email, then take advantage of key human traits.

One common attack comes as an email impersonating a CEO or supplier. The email looks like it came from your boss or a regular supplier, but it’s actually targeted to a specific professional in the organization.  The email might say, “We’ve acquired a new organization. We need to pay them. We need the company’s bank details, and we need to keep this quiet so it won’t affect our stock price.” The email will go on to say, “We only trust you, and you need to do this immediately.” The email comes from a criminal, using triggers like flattery, saying, “You’re the most trusted individual in the organization.” The criminals play on authority and create the panic of time pressure. Believe it or not, my consulting company has gotten these messages, the most recent being one purporting to come from Experian.

Even long-term attacks can be launched using this tactic of a CEO message. A company in Malaysia received kits purporting to come from the CEO.  The users were told the kit needed to be installed. It took months before the company found out it didn’t come from the CEO at all.

Instead of relying on more advanced technology, some of the new hackers are deploying classic con moves, playing against personal foibles. They are taking advantage of those base aspects of human nature and how we’re taught to behave.  We have to make sure we have better awareness, and for cybersecurity training to work, it has to be engaging and have an impact.

As well as entering the email stream, hackers are identifying the personal interests of victims on social media. Every kind of media is used for attacks. Social media is used to carry out reconnaissance, to identify targets and learn about them.  Users need to see what attackers can find out about them on Twitter or Facebook. The trick hackers use is to pretend they know the target; then they get close through personal interaction on social media. You can look at an organization on Twitter and see who works in finance. Then they take a good look across social platforms to find those individuals and see if they go to a class each week or if they traveled to Iceland in 1996.  You can put together a spear-phishing campaign where you say, “Hey, I went on this trip with you.”

CONCLUSIONS:

The counter-action to personal hacking is education and awareness. The company can identify potential weaknesses and potential targets and then change the vulnerable aspects of the corporate environment.  We have to look at the culture of the organization. Those who are under pressure are targets. They don’t have time to study each email they get. We also have to discourage reliance on email.   Hackers also exploit the culture of fear, where people are punished for their mistakes. Those are the people most in danger. We need to create a culture where if someone makes a mistake, they can immediately come forward. The quicker someone comes forward, the quicker we can deal with it.


In preparation for this post, I asked my fifteen-year-old grandson to define product logistics and product supply chain.  He looked at me as though I had just fallen off a turnip truck.  I said, you know, how does a manufacturer or producer of products get those products to the customer—the eventual user of the device or commodity?  How does that happen? His reply: “I really need to go do my homework.  Can I think about this and give you an answer tomorrow?”

SUPPLY CHAIN LOGISTICS:

Let’s take a look at Logistics and Supply Chain Management:

“Logistics typically refers to activities that occur within the boundaries of a single organization, and Supply Chain refers to networks of companies that work together and coordinate their actions to deliver a product to market. Also, traditional logistics focuses its attention on activities such as procurement, distribution, maintenance, and inventory management. Supply Chain Management (SCM) acknowledges all of traditional logistics and also includes activities such as marketing, new product development, finance, and customer service.” – from Essentials of Supply Chain Management by Michael Hugos.

“Logistics is about getting the right product, to the right customer, in the right quantity, in the right condition, at the right place, at the right time, and at the right cost (the seven Rs of Logistics).” – from Supply Chain Management: A Logistics Perspective by John J. Coyle et al.

Now, that wasn’t so difficult, was it?  A good way to look at it is as follows:

MOBILITY AND THE SUPPLY CHAIN:

There have been remarkable advancements in supply chain logistics over the past decade.  Most of those advancements have resulted from companies bringing digital technologies into the front office, the warehouse, and transportation to the eventual customer.   Mobile technologies are certainly changing how products are tracked outside the four walls of the warehouse and the distribution center.  Real-time logistics management is within the grasp of many very savvy shippers.  To be clear:

Mobile networking refers to technology that can support voice and/or data network connectivity using wireless radio transmission. The most familiar applications of mobile networking are the mobile phone and the tablet.  From real-time goods tracking to routing assistance to the Internet of Things (IoT), “cutting wires” in the area that lies between the warehouse and the customer’s front door is gaining ground as shippers grapple with fast order fulfillment, smaller order sizes, and ever-evolving customer expectations.
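
As a rough illustration of the kind of "cutting wires" described above, here is a minimal sketch, in Python, of a delivery vehicle's telematics unit pushing a real-time location update over its mobile data connection. The endpoint URL, device ID, and payload fields are hypothetical stand-ins, not any particular vendor's API:

```python
# Minimal sketch of a mobile location update from a delivery vehicle.
# The endpoint URL, device ID, and payload fields are hypothetical;
# a real deployment would add authentication and retry logic.
import time
import requests

TRACKING_ENDPOINT = "https://example.com/api/v1/location-updates"  # hypothetical

def send_location_update(device_id: str, lat: float, lon: float) -> bool:
    """Push one GPS fix over the mobile network; return True on success."""
    payload = {
        "device_id": device_id,
        "latitude": lat,
        "longitude": lon,
        "timestamp": time.time(),
    }
    try:
        resp = requests.post(TRACKING_ENDPOINT, json=payload, timeout=5)
        return resp.status_code == 200
    except requests.RequestException:
        return False  # no connectivity; caller can queue the fix and retry later

if __name__ == "__main__":
    send_location_update("TRUCK-042", 35.0456, -85.3097)
```

Nothing more than this is needed, conceptually, for a shipper to see a truck's position update on a map every few seconds.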

In return for their tech investments, shippers and logistics managers are gaining benefits such as shortened lead times, improved supply chain visibility, error reductions, optimized transportation networks and better inventory management.  If we combine these advantages, we see that “wireless” communications are helping companies work smarter and more efficiently in today’s very fast-paced business world.

MOBILITY TRENDS:

Let’s look now at six (6) mobility trends.

  1. Increasingly Sophisticated Vehicle Communications—There was a time when the only contact a driver had with home base was after an action, such as load drop-off, took place or when there was an en-route problem. Today, as you might expect, truck drivers, pilots and others responsible for getting product to the customer can communicate in real time.  Cell phones have revolutionized and made possible real-time communication.
  2. Trucking Apps—By 2015, Frost & Sullivan indicated the mobile trucking app market had hit $35.4 billion. Mobile apps targeting logistics are being launched almost constantly. With the launch of UBER Freight, the competition in the trucking app space has heated up considerably, pressing incumbents to innovate and move much faster than ever before.
  3. It’s Not Just for the Big Guys Anymore: At one time, fleet mobility solutions were reserved for larger companies that could afford them.  As technology has advanced and become more mainstream and affordable, so have fleet mobility solutions.
  4. Mobility Helps Pinpoint Performance and Productivity Gaps: Knowing where everything is at any one given time is “golden”. It is the Holy Grail for every logistics manager.  Mobility is putting that goal within their reach.
  5. More Data Means More Mobile Technology to Generate and Support Logistics: One great problem that is now being solved is how to handle perishable goods and refrigerated consumer items.  Shippers who handle these commodities are now using sensors to detect trailer temperatures, dead batteries, and other problems that would impact their cargoes.  Using sensors, and the data they generate, shippers can make much better business decisions and head off problems before they occur.  Sensors, if monitored properly, can indicate trends and predict eventual problems, as the sketch after this list illustrates.
  6. Customers Want More Information and Data—They Want It Now: Customers now expect real-time shipment data to be at their fingertips without having to pick up a telephone or send an e-mail.  Right now, that information is available quickly online or with a smartphone.
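
As a rough illustration of item 5 above, here is a minimal sketch of how a shipper might watch a trailer's temperature readings for both hard limit violations and a steady upward drift. The thresholds, window size, and sample readings are made up for illustration:

```python
# Minimal sketch of trailer temperature monitoring (item 5 above).
# Thresholds, window size, and the sample readings are invented for illustration.
from collections import deque

MAX_TEMP_C = 5.0     # refrigerated cargo should stay at or below this
TREND_WINDOW = 6     # number of recent readings used to spot an upward drift

def check_readings(readings):
    """Yield alerts for limit violations and steady upward temperature drift."""
    recent = deque(maxlen=TREND_WINDOW)
    for reading in readings:
        recent.append(reading)
        if reading > MAX_TEMP_C:
            yield f"ALERT: trailer at {reading:.1f} C exceeds {MAX_TEMP_C:.1f} C limit"
        elif len(recent) == TREND_WINDOW and all(
            later > earlier for earlier, later in zip(recent, list(recent)[1:])
        ):
            yield "WARNING: temperature rising steadily; check the refrigeration unit"

if __name__ == "__main__":
    sample = [2.1, 2.3, 2.8, 3.2, 3.9, 4.4, 5.6]  # hypothetical hourly readings
    for alert in check_readings(sample):
        print(alert)
```

The point is the second branch: the sensor data lets the shipper act on the trend before the cargo is ever at risk, not just after a limit is breached.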

CONCLUSIONS: 

The world is changing at light speed, and mobility communications is one technology making this possible.  I have no idea as to where we will be in ten years, but it just might be exciting.

MULTITASKING

September 14, 2017


THE DEFINITION:

“Multitasking, in a human context, is the practice of doing multiple things simultaneously, such as editing a document or responding to email while attending a teleconference.”

THE PROCESS:

The concept of multitasking began in a computing context. Computer multitasking, similarly to human multitasking, refers to performing multiple tasks at the same time. In a computer, multitasking refers to things like running more than one application simultaneously.   Modern-day computers are designed for multitasking. For humans, however, multitasking has been decisively proven to be an ineffective way to work. Research going back to the 1980s has indicated repeatedly that performance suffers when people multitask.
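
To show what multitasking looks like on the machine side, here is a minimal sketch using Python's standard threading module: the operating system interleaves two independent tasks so that both make progress "at the same time," which is exactly what modern computers are built to do (and what humans are not). The task names and timings are arbitrary.

```python
# Minimal sketch of computer multitasking: the operating system (and the
# Python interpreter) interleaves these two tasks rather than running them
# strictly one after the other. Task names and timings are arbitrary.
import threading
import time

def task(name: str, delay: float) -> None:
    for i in range(3):
        print(f"{name}: step {i + 1}")
        time.sleep(delay)  # simulate work; lets the other task run meanwhile

if __name__ == "__main__":
    t1 = threading.Thread(target=task, args=("download", 0.2))
    t2 = threading.Thread(target=task, args=("report", 0.3))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    print("Both tasks finished; neither waited for the other to complete first.")
```

Running it prints the two tasks' steps interleaved rather than in sequence, with no loss of correctness. Humans, as the rest of this post argues, pay a steep price for the same kind of switching.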

REALITY:

Multitasking is not a natural human trait.  In a few hundred years, natural evolution may improve human abilities, but for now, we are just not good at it.  In 2007, an ABC Evening News broadcast reported, “People are interrupted once every ten and one-half (10.5) minutes.  It takes twenty-three (23) minutes to regain your train of thought.  People lose two point one (2.1) hours each day in the process of multitasking.”

A great article entitled “No Task Left Behind” by Gloria Mark indicated that a person juggled twelve (12) work spheres each day and fifty-seven percent (57%) of the work got interrupted.  As a result, twenty-three percent (23%) of the work to be accomplished that day got pushed to the next day and beyond. That was the case twelve years ago.  We all have been there, trying to make the most of each day only to return home with frustration and more to do the next day.

Experience tells us that:

  • For students, an increase in multitasking predicted poorer academic results.
  • Multitaskers took longer to complete tasks and produced more errors.
  • People had more difficulty retaining new information while multitasking.
  • When tasks involved making selections or producing actions, even very simple tasks performed concurrently were impaired.
  • Multitaskers lost a significant amount of time switching back and forth between tasks, reducing their productivity up to forty percent (40%).
  • Habitual multitaskers were less effective than non-multitaskers even when doing one task at any given time because their ability to focus was impaired.
  • Multitasking temporarily causes an IQ drop of 10 points, the equivalent of going without sleep for a full night.
  • Multitaskers typically think they are more effective than is actually the case.
  • There are limited amounts of energy for any one given day.
  • Multitasking can lessen interpersonal skills and actually detract from the effectiveness of the total workforce.
  • It encourages procrastination.
  • A distracted mind may become a permanent condition.

THE MYTH OF MULTITASKING:

People believe multitasking is a positive attribute, one to be admired. But multitasking is simply the lack of self-discipline. Multitasking is really switching your attention from one task to another to another, instead of giving yourself over to a single task. Multitasking is easy; disciplined focus and attention are difficult.

The quality of your work is determined by how much of your time, your focus and your attention you give it. While multitasking feels good and feels busy, the quality of the work is never what it could be with the creator’s full attention. More and more, this is going to be apparent to those who are judging the work, especially when compared to work of someone who is disciplined and who has given the same or similar project their full focus and attention.

MENTAL FLOW:

In positive psychology, flow, also known as the zone, is the mental state of operation in which a person performing an activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity.

The individual who coined the phrase “flow” was Mihaly Csikszentmihalyi. (Please do NOT ask me to pronounce Dr. Csikszentmihalyi’s last name.)  He made the following statement:

“The best moments in our lives are not the passive, receptive, relaxing times… The best moments usually occur if a person’s body or mind is stretched to its limits in a voluntary effort to accomplish something difficult and worthwhile.”

– Mihaly Csikszentmihalyi  

EIGHT CHARACTERISTICS OF “FLOW”:

  1. Complete concentration on the task.  By this we mean really complete.
  2. Clarity of goals and reward in mind, and immediate feedback. There is nothing to focus and concentrate on when no goals indicate completion.
  3. Transformation of time (speeding up/slowing down of time). When in full “flow” mode, you lose track of time.
  4. The experience is intrinsically rewarding; it is an end in itself.
  5. Effortlessness and ease.
  6. There is a balance between challenge and skills.
  7. Actions and awareness are merged, losing self-conscious rumination.
  8. There is a feeling of control over the task.

I personally do not get there often, but the point is—you cannot get in the “zone,” and you will not be able to achieve mental “flow,” when you are in multitasking mode.  It just will not happen.

As always, I welcome your comments.

V2V TECHNOLOGY

September 9, 2017


You probably know this by now if you read my postings—my wife and I love to go to the movies.  I said GO TO THE MOVIES, not download movies but GO.  If you go to a matinée, and if you are a senior, you get a reduced rate.  We do that. Normally a movie beginning at 4:00 P.M. will get you out by 6:00 or 6:30 P.M. Just in time for dinner. Coming from the Carmike Cinema on South Terrace, I looked left and slowly moved over to the inside lane—just in time to hit a car in my “blind spot.”  Low-impact “touching,” but nevertheless an accident.  All cars, I’m told, have blind spots and ours certainly does.  Side mirrors do NOT cover all areas to the left and right of any vehicle.   Maybe there is a looming solution to that dilemma.

V2V:

The global automotive industry seems poised on the brink of a “Brave New World” in which connectivity and sensor technologies come together to create systems that can eliminate life-threatening collisions and enable automobiles that drive themselves.  Known as Cooperative Intelligent Transportation Systems, vehicle-to-vehicle or V2V technologies open the door for automobiles to share information and interact with each other, as well as with emerging smart infrastructure. These systems not only make transportation safer but also offer the promise of reducing traffic congestion.

Smart features of V2V promise to enhance driver awareness via traffic alerts, providing notifications on congestion, obstacles, lane changing, traffic merging and railway crossing alerts.  Additional applications include:

  • Blind spot warnings
  • Forward collision warnings
  • Sudden brake-ahead warnings
  • Approaching emergency vehicle warnings
  • Rollover warnings
  • Travel condition data to improve maintenance services.

The Department of Transportation report “Vehicle-to-Vehicle Communications: Readiness of V2V Technology for Application” (DOT HS 812 014) already details the technology as follows:

“The purpose of this research report is to assess the readiness for application of vehicle-to-vehicle (V2V) communications, a system designed to transmit basic safety information between vehicles to facilitate warnings to drivers concerning impending crashes. The United States Department of Transportation and NHTSA have been conducting research on this technology for more than a decade. This report explores technical, legal, and policy issues relevant to V2V, analyzing the research conducted thus far, the technological solutions available for addressing the safety problems identified by the agency, the policy implications of those technological solutions, legal authority and legal issues such as liability and privacy. Using this report and other available information, decision-makers will determine how to proceed with additional activities involving vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P) technologies.”

The agency estimates there are approximately five (5) million annual vehicle crashes, with attendant property damage, injuries, and fatalities. While it may seem obvious, if technology can help drivers avoid crashes, the damage due to crashes simply never occurs.  This is the intent of an operative V2V automotive system. While “vehicle-resident” crash avoidance technologies can be highly beneficial, V2V communications represent an additional step in helping to warn drivers about impending danger. V2V communications use on-board dedicated short-range radio communication devices to transmit messages about a vehicle’s speed, heading, brake status, and other information to other vehicles, and to receive the same information from their messages, with range and “line-of-sight” capabilities that exceed current and near-term “vehicle-resident” systems — in some cases, nearly twice the range. This longer detection distance and ability to “see” around corners or “through” other vehicles helps V2V-equipped vehicles perceive some threats sooner than sensors, cameras, or radar can, and warn drivers accordingly. V2V technology can also be fused with those vehicle-resident technologies to provide even greater benefits than either approach alone. V2V can augment vehicle-resident systems, with the combined system extending the ability of the overall safety package to address crash scenarios not covered by V2V communications alone, such as lane and road departure. A fused system could also improve accuracy, potentially leading to better warning timing and fewer false warnings.

Communications represent the keystone of V2V systems.  The current technology builds upon a wireless standard called Dedicated Short-Range Communication, or DSRC.  DSRC is based upon the IEEE 802.11p protocol.  Transmissions in these systems consist of highly secure, short-to-medium-range, high-speed wireless communication channels, which enable vehicles to connect with each other for short periods of time.  Using DSRC, two or more vehicles can exchange basic safety messages, which describe each vehicle’s speed, position, heading, acceleration rate, size and braking status.  The system sends these messages to the onboard units of surrounding vehicles ten (10) times per second, where they are interpreted and provide warnings to the driver.  To achieve this, V2V systems leverage telematics to track vehicles via GPS, monitoring the location, movements, behavior and status of each vehicle.
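
To give a feel for the data involved, here is a minimal sketch of a basic safety message being assembled and broadcast ten (10) times per second. The field names are paraphrased from the description above, the vehicle state values are invented, and a plain UDP broadcast stands in for the actual DSRC (IEEE 802.11p) radio link:

```python
# Minimal sketch of a V2V basic safety message broadcast at 10 Hz.
# Field names are paraphrased from the description above; a plain UDP
# broadcast stands in for the DSRC / IEEE 802.11p radio link, and the
# vehicle state values are hard-coded for illustration.
import json
import socket
import time
from dataclasses import dataclass, asdict

@dataclass
class BasicSafetyMessage:
    vehicle_id: str
    speed_mps: float        # speed in meters per second
    heading_deg: float      # compass heading
    latitude: float
    longitude: float
    accel_mps2: float       # longitudinal acceleration
    brake_active: bool
    length_m: float         # vehicle size

def broadcast_bsm(port: int = 47000) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    msg = BasicSafetyMessage("WXY-1234", 26.8, 92.5, 35.0456, -85.3097, -0.4, False, 4.8)
    for _ in range(10):                    # one second's worth of messages
        sock.sendto(json.dumps(asdict(msg)).encode(), ("255.255.255.255", port))
        time.sleep(0.1)                    # ten messages per second

if __name__ == "__main__":
    broadcast_bsm()
```

A real deployment would use the standardized BSM format and the message-signing security layer discussed below; this sketch only illustrates the ten-per-second cadence and the kind of state each vehicle shares with its neighbors.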

Based on preliminary information, NHTSA currently estimates that the V2V equipment and supporting communications functions (including a security management system) would cost approximately $341 to $350 per vehicle in 2020 dollars. It is possible that the cost could decrease to approximately $209 to $227 by 2058, as manufacturers gain experience producing this equipment (the learning curve). These costs would also include an additional $9 to $18 per year in fuel costs due to added vehicle weight from the V2V system. Estimated costs for the security management system range from $1 to $6 per vehicle, and they will increase over time due to the need to support an increasing number of vehicles with the V2V technologies. The communications costs range from $3 to $13 per vehicle. Cost estimates are not expected to change significantly with the inclusion of V2V-based safety applications, since the applications themselves are software and their costs are negligible.  Based on preliminary estimates, the total projected preliminary annual costs of the V2V system fluctuate year after year but generally show a declining trend. The estimated total annual costs range from $0.3 to $2.1 billion in 2020, with the specific costs being dependent upon the technology implementation scenarios and discount rates. The costs peak at $1.1 to $6.4 billion between 2022 and 2024, and then they gradually decrease to $1.1 to $4.6 billion.

In terms of safety impacts, the agency estimates annually that just two of many possible V2V safety applications, IMA (Intersection Movement Assist) and LTA (Left Turn Assist), would on an annual basis potentially prevent 25,000 to 592,000 crashes, save 49 to 1,083 lives, avoid 11,000 to 270,000 MAIS 1-5 injuries, and reduce 31,000 to 728,000 property-damage-only crashes by the time V2V technology had spread through the entire fleet. We chose those two applications for analysis at this stage because they are good illustrations of benefits that V2V can provide above and beyond the safety benefits of vehicle-resident cameras and sensors. Of course, the number of lives potentially saved would likely increase significantly with the implementation of additional V2V and V2I safety applications that would be enabled if vehicles were equipped with DSRC capability.

CONCLUSIONS: 

It is apparent to me that we are driving (pardon the pun) towards self-driving automobiles. I have no idea as to when this technology will become fully adopted, if ever.  If that happens in part or across the vehicle spectrum, there will need to be some form of V2V. One car definitely needs to know where other cars are relative to position, speed, acceleration, and overall movement. My wife NEVER goes to sleep or naps while I’m driving—OK maybe one time as mentioned previously.  She is always remarkably attentive and aware when I’m behind the wheel.  This comes from experience gained over fifty-two years of marriage.  “The times they are a-changing”.   The great concern I have is how we are to maintain the systems and how “hackable” they may become.  As I awoke this morning, I read the following:

The credit reporting agency Equifax said Thursday that hackers gained access to sensitive personal data — Social Security numbers, birth dates and home addresses — for up to 143 million Americans, a major cybersecurity breach at a firm that serves as one of the three major clearinghouses for Americans’ credit histories.

I am sure, like me, that gives you pause.  If hackers can do that, just think about the chaos that can occur if V2V systems can be accessed and controlled.  Talk about keeping one up at night.

As always, I welcome your comments.


Various definitions of product lifecycle management or PLM have been issued over the years but basically: product lifecycle management is the process of managing the entire lifecycle of a product from inception, through engineering design and manufacture, to service and disposal of manufactured products.  PLM integrates people, data, processes and business systems and provides a product information backbone for companies and their extended enterprise.

In recent years, great emphasis has been put on disposal of a product after its service life has been met.  How to get rid of a product or component is extremely important. Disposal methodology is covered by RoHS standards for the European Community.  If you sell into the EU, you will have to designate proper disposal.  Dumping in a landfill is no longer appropriate.

Since this course deals with the application of PLM to industry, we will now look at various industry definitions.

Industry Definitions

“PLM is a strategic business approach that applies a consistent set of business solutions in support of the collaborative creation, management, dissemination, and use of product definition information across the extended enterprise, and spanning from product concept to end of life, integrating people, processes, business systems, and information. PLM forms the product information backbone for a company and its extended enterprise.” Source:  CIMdata

“Product life cycle management or PLM is an all-encompassing approach for innovation, new product development and introduction (NPDI) and product information management from initial idea to the end of life.  PLM Systems is an enabling technology for PLM integrating people, data, processes, and business systems and providing a product information backbone for companies and their extended enterprise.” Source:  PLM Technology Guide

“The core of PLM (product life cycle management) is in the creation and central management of all product data and the technology used to access this information and knowledge. PLM as a discipline emerged from tools such as CAD, CAM and PDM, but can be viewed as the integration of these tools with methods, people and the processes through all stages of a product’s life.” Source:  Wikipedia article on Product Lifecycle Management

“Product life cycle management is the process of managing product-related design, production and maintenance information. PLM may also serve as the central repository for secondary information, such as vendor application notes, catalogs, customer feedback, marketing plans, archived project schedules, and other information acquired over the product’s life.” Source:  Product Lifecycle Management

“It is important to note that PLM is not a definition of a piece, or pieces, of technology. It is a definition of a business approach to solving the problem of managing the complete set of product definition information-creating that information, managing it through its life, and disseminating and using it throughout the lifecycle of the product. PLM is not just a technology, but is an approach in which processes are as important, or more important than data.” Source:  CIMdata

“PLM or Product Life Cycle Management is a process or system used to manage the data and design process associated with the life of a product from its conception and envisioning through its manufacture, to its retirement and disposal. PLM manages data, people, business processes, manufacturing processes, and anything else pertaining to a product. A PLM system acts as a central information hub for everyone associated with a given product, so a well-managed PLM system can streamline product development and facilitate easier communication among those working on/with a product.” Source:  Aras

A pictorial representation of PLM may be seen as follows:

Hopefully, you can see that PLM deals with methodologies from “white napkin design to landfill disposal.”  Please note, documentation is critical to all aspects of PLM, and good document production, storage and retrieval are extremely important to the overall process.  We are talking about CAD, CAM, CAE, DFSS, laboratory testing notes, etc.  In other words, “the whole nine yards of product life.”   If you work in a company with ISO certification, PLM is a great method to ensure retaining that certification.

In looking at the four stages of a product’s lifecycle, we see the following:

Four Stages of Product Life Cycle—Marketing and Sales:

Introduction: When the product is brought into the market. In this stage, there’s heavy marketing activity, product promotion and the product is put into limited outlets in a few channels for distribution. Sales take off slowly in this stage. The need is to create awareness, not profits.

The second stage is growth. In this stage, sales take off, the market knows of the product; other companies are attracted, profits begin to come in and market shares stabilize.

The third stage is maturity, where sales grow at slowing rates and finally stabilize. In this stage, products get differentiated, price wars and sales promotion become common and a few weaker players exit.

The fourth stage is decline. Here, sales drop, as consumers may have changed, the product is no longer relevant or useful. Price wars continue, several products are withdrawn and cost control becomes the way out for most products in this stage.

Benefits of PLM Relative to the Four Stages of Product Life:

Considering the benefits of Product Lifecycle Management, we realize the following:

  • Reduced time to market
  • Increased full-price sales
  • Improved product quality and reliability
  • Reduced prototyping costs
  • More accurate and timely request for quote generation
  • Ability to quickly identify potential sales opportunities and revenue contributions
  • Savings through the re-use of original data
  • Framework for product optimization
  • Reduced waste
  • Savings through the complete integration of engineering workflows
  • Documentation that can assist in proving compliance for RoHS or Title 21 CFR Part 11
  • Ability to provide contract manufacturers with access to a centralized product record
  • Seasonal fluctuation management
  • Improved forecasting to reduce material costs
  • Maximize supply chain collaboration
  • Allowing for much better “troubleshooting” when field problems arise. This is accomplished by laboratory testing and reliability testing documentation.

PLM considers not only the four stages of a product’s lifecycle but all of the work prior to marketing and sales AND disposal after the product is removed from commercialization.   With this in mind, why is PLM a necessary business technique today?  Because of increases in technology, manpower and specialization of departments, PLM was needed to integrate all activity toward the design, manufacturing and support of the product. Back in the late 1960s when the F-15 Eagle was conceived and developed, almost all manufacturing and design processes were done by hand.  Blueprints or drawings needed to make the parts for the F-15 were created on paper. No electronics, no emails – all paper for documents. This caused a lack of efficiency in design and manufacturing compared to today’s technology.  OK, here is another example of today’s technology and the application of PLM.

If we look at the processes for Boeing’s DREAMLINER, we see the 787 Dreamliner has about 2.3 million parts per airplane.  Development and production of the 787 has involved large-scale collaboration with numerous suppliers worldwide. Parts include everything from “fasten seatbelt” signs to jet engines and vary in size from small fasteners to large fuselage sections. Some parts are built by Boeing, and others are purchased from supplier partners around the world.  In 2012, Boeing purchased approximately seventy-five (75) percent of its supplier content from U.S. companies. On the 787 program, content from non-U.S. suppliers accounts for about thirty (30) percent of purchased parts and assemblies.  PLM, or Boeing’s version of PLM, was used to bring about commercialization of the 787 Dreamliner.

 

CLOUD COMPUTING

May 20, 2017


OK, you have heard the term over and over again but, just what is cloud computing? Simply put, cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics, and more—over the Internet (“the cloud”). Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you’re billed for water or electricity at home. It is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications and services), which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in either privately owned or third-party data centers that may be located far from the user–ranging in distance from across a city to across the world. Cloud computing relies on sharing of resources to achieve coherence and economy of scale, similar to a utility like the electricity grid.
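
As a concrete, hedged example of consuming such a pay-per-use service, the sketch below stores a file in Amazon S3 object storage using the boto3 library. The bucket name and file names are placeholders, and valid AWS credentials are assumed to already be configured on the machine:

```python
# Minimal sketch of using a cloud storage service (Amazon S3 via boto3).
# The bucket name and file names are placeholders; AWS credentials are
# assumed to be configured already (e.g., via environment variables).
import boto3

def upload_report(local_path: str, bucket: str, key: str) -> str:
    """Upload a local file to S3 and return its object URI."""
    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, key)   # pay-per-use storage "in the cloud"
    return f"s3://{bucket}/{key}"

if __name__ == "__main__":
    print(upload_report("quarterly_report.pdf", "example-company-reports", "2017/q3.pdf"))
```

Notice what is missing: no server to buy, no disk to install, no capacity planning. The provider meters the storage and bills for what is used, which is the whole point of the utility comparison above.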

ADVANTAGES AND DISADVANTAGES:

Any new technology has an upside and downside. There are obviously advantages and disadvantages when using the cloud.  Let’s take a look.

 Advantages

  • Lower cost for desktop clients since the applications are running in the cloud. This means clients with smaller hard drive requirements and possibly even no CD or DVD drives.
  • Peak computing needs of a business can be offloaded into cloud applications, saving the funds normally used for additional in-house servers.
  • Lower maintenance costs. This includes both hardware and software cost reductions, since client machines cost much less and software purchase costs are eliminated altogether for applications running in the cloud.
  • Automatic application software updates for applications in the cloud. This is another maintenance savings.
  • Vastly increased computing power availability. The scalability of the server farm provides this advantage.
  • The scalability of virtual storage provides unlimited storage capacity.

 Disadvantages

  • Requires an “always on” Internet connection.
  • There are clearly concerns with data security, e.g., questions like: “If I can get to my data using a web browser, who else can?”
  • Concerns for loss of data.
  • Reliability. Service interruptions are rare but can happen. Google has already had an outage.

MAJOR CLOUD SERVICE PROVIDERS:

The following names are very recognizable.  Everyone knows the “open-market” cloud service providers.

  • AMAZON
  • SALESFORCE
  • GOOGLE
  • IBM
  • MICROSOFT
  • SUN MICROSYSTEMS
  • ORACLE
  • AT & T

PRIVATE CLOUD SERVICE PROVIDERS:

With all the interest in cloud computing as a service, there is also an emerging concept of private clouds. It is a bit reminiscent of the early days of the Internet and the importing of that technology into the enterprise as intranets. The concerns for security and reliability outside corporate control are very real and troublesome aspects of the otherwise attractive technology of cloud computing services. The IT world has not forgotten about the eight-hour downtime of the Amazon S3 cloud service on July 20, 2008. A private cloud means that the technology must be bought, built and managed within the corporation. A company will be purchasing cloud technology usable inside the enterprise for development of cloud applications that have the flexibility of running on the private cloud or outside on the public clouds. This “hybrid environment” is in fact the direction that some believe the enterprise community will be going, and some of the products that support this approach are listed below.

  • Elastra (http://www.elastra.com) is developing a server that can be used as a private cloud in a data center. Tools are available to design applications that will run in both private and public clouds.
  • 3Tetra (http://www.3tetra.com) is developing a grid operating system called ParaScale that will aggregate disk storage.
  • Cassatt (http://www.cassatt.com) will be offering technology that can be used for resource pooling.
  • Ncomputing (http://www.ncomputing.com) has developed a standard desktop PC virtualization software system that allows up to 30 users to use the same PC system with their own keyboard, monitor and mouse. Strong claims are made about savings on PC costs, IT complexity and power consumption by customers in government, industry and education communities.

CONCLUSION:

OK, clear as mud—right?  For me, the biggest misconception is the terminology itself—the cloud.   The word “cloud” seems to imply an IT system in the sky.  The exact opposite is the case.  The cloud is an earth-based IT system serving as a universal host.  A network of computers. A network of servers.  No cloud.
