AUGMENTED REALITY (AR)

October 13, 2017


Just about anywhere you go, you can ask just about anybody to give a definition of Virtual Reality (VR) and they will take a stab at it. This is because the gaming and entertainment segments of our population have used VR as a new tool to promote games such as Superhot VR, Rock Band VR, House of the Dying Sun, Minecraft VR, Robo Recall, and others.  If you ask those same people about Augmented Reality or AR, they will probably give you the definition of VR, or nothing at all.

Augmented reality, sometimes called Mixed Reality, is a technology that merges real-world objects or the environment with virtual elements generated from sensory input such as sound, video, graphics, or GPS data.  Unlike VR, which completely replaces the real world with a virtual one, AR operates in real time and is interactive with objects found in the environment, overlaying a virtual display on top of the real one.
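To make the "overlay" idea concrete, here is a minimal sketch (not any particular AR product) that draws a virtual label and box on top of live camera frames using OpenCV. The camera index, the "detected" region, and the label text are all assumptions made purely for illustration.

```python
# Minimal sketch of the AR "overlay" idea: draw virtual text and a marker box
# on top of live camera frames. Assumes OpenCV is installed and a webcam is
# available at index 0; the "detected object" box is hard-coded for illustration.
import cv2

cap = cv2.VideoCapture(0)          # open the default camera
while True:
    ok, frame = cap.read()         # grab one real-world frame
    if not ok:
        break
    # Pretend a real detector found an object of interest in this region.
    x, y, w, h = 100, 80, 200, 150
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, "Patient BP: 120/80", (x, y - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("AR overlay sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```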

While popularized by gaming, AR technology has shown real prowess in bringing an interactive digital layer into a person’s perceived world, where the digital content can reveal more information about the real-world object being viewed.  This is basically what AR strives to do.  We are going to take a look at several very real applications of AR to indicate the possibilities of this technology.

  • Augmented Reality has found a home in healthcare, helping professionals receive preventative information about the status of their patients. Healthcare giant Cigna recently launched a program called BioBall that uses Microsoft HoloLens technology in an interactive game to test blood pressure and body mass index (BMI). Patients hold a light, medium-sized ball in their hands in a one-minute race to capture all the images that flash on the screen in front of them; the BioBall senses a player’s heartbeat. At the University of Maryland’s Augmentarium virtual and augmented reality laboratory, the school is using AR in healthcare to improve how ultrasound is administered to a patient.  Physicians wearing an AR device can look at both the patient and the ultrasound device while images flash on the “hood” of the AR device itself.
  • AR is opening up new methods to teach young children a variety of subjects they might not be interested in learning or, in some cases, to help those who have trouble in class catch up with their peers. The University of Helsinki’s AR program helps struggling kids learn science by enabling them to virtually interact with the movement of molecules in gases, gravity, sound waves, and airplane wind physics.   AR creates new types of learning possibilities by transporting “old knowledge” into a new format.
  • Projection-based AR is emerging as a new way to cast virtual elements into the real world without the use of bulky headgear or glasses, which is why it is becoming a very popular alternative for use in the office or during meetings. Startups such as Lampix and Lightform are working on projection-based augmented reality for use in the boardroom, retail displays, hospitality rooms, digital signage, and other applications.
  • In Germany, a company called FleetBoard is developing application software that tracks logistics for truck drivers to help with the long series of pre-departure checks before setting off cross-country or for local deliveries. The FleetBoard Vehicle Lens app uses a smartphone and software to provide live image recognition that identifies the truck’s number plate.  The relevant information is superimposed in AR, thus speeding up the pre-departure process.
  • Last winter, Delft University of Technology in the Netherlands started working with first responders on using AR as a tool in crime scene investigation. The handheld AR system allows on-scene investigators and remote forensic teams to minimize the potential for site contamination.  This could be extremely helpful in finding traces of DNA, preserving evidence, and getting medical help from an outside source.
  • Sandia National Laboratories is working with AR as a tool to improve security training for users who are protecting vulnerable areas such as nuclear weapons or nuclear materials. The physical security training helps guide users through real-world examples such as theft or sabotage in order to be better prepared when an event takes place.  The training can be accomplished remotely and cheaply using standalone AR headsets.
  • In Finland, the VTT Technical Research Center recently developed an AR tool for the European Space Agency (ESA) for astronauts to perform real-time equipment monitoring in space. AR prepares astronauts with in-depth practice by coordinating the activities with experts in a mixed-reality situation.
  • U.S.-based Daqri International uses computer vision for industrial AR to enable data visualization while working on machinery or in a warehouse. Glasses and headsets from Daqri display project data, tasks that need to be completed, and potential problems with machinery, or even where an object needs to be placed or repaired.

CONCLUSIONS:

Augmented Reality merges real-world objects with virtual elements generated by sensory input devices to provide great advantages to the user.  No longer are gaming and entertainment the sole objectives of its use.  This brings to life a “new normal” for professionals seeking more and better technology to provide solutions to real-world problems.


Portions of the following post were taken from an article by Rob Spiegel published through Design News Daily.

Two former Apple design engineers, Anna Katrina Shedletsky and Samuel Weiss, have leveraged machine learning to help brand owners improve their manufacturing lines. Their company, Instrumental, uses artificial intelligence (AI) to identify and fix problems with the goal of helping clients ship on time. The AI system consists of camera-equipped inspection stations that allow brand owners to remotely manage product lines at their contract manufacturing facilities with the purpose of maximizing uptime, quality, and speed.

Shedletsky and Weiss took what they learned from years of working with Apple contract manufacturers and put it into AI software.

“The experience with Apple opened our eyes to what was possible. We wanted to build artificial intelligence for manufacturing. The technology had been proven in other industries and could be applied to the manufacturing industry; it’s part of the evolution of what is happening in manufacturing. The product we offer today solves a very specific need, but it also works toward overall intelligence in manufacturing.”

Shedletsky spent six (6) years working at Apple prior to founding Instrumental with fellow Apple alum Weiss, who serves as Instrumental’s CTO (Chief Technical Officer).  The two took their experience in solving manufacturing problems and created the AI fix. “After spending hundreds of days at manufacturers responsible for millions of Apple products, we gained a deep understanding of the inefficiencies in the new-product development process,” said Shedletsky. “There’s no going back, robotics and automation have already changed manufacturing. Intelligence like the kind we are building will change it again. We can radically improve how companies make products.”

There are numerous examples of big and small companies with problems that prevent them from shipping products on time. Delays are expensive and can cause the loss of a sale. One day of delay at a start-up could cost $10,000 in sales. For a large company, the cost could be millions. “There are hundreds of issues that need to be found and solved. They are difficult and they have to be solved one at a time,” said Shedletsky. “You can get on a plane, go to a factory and look at failure analysis so you can see why you have problems. Or, you can reduce the amount of time needed to identify and fix the problems by analyzing them remotely, using a combo of hardware and software.”

Instrumental combines hardware and software that takes images of each unit at key states of assembly on the line. The system then makes those images remotely searchable and comparable so the brand owner can learn from and react to assembly-line data. Engineers can then take action on issues. “The station goes onto the assembly line in China,” said Shedletsky. “We get the data into the cloud to discover issues the contract manufacturer doesn’t know they have. With the data, you can do failure analysis and reduce the time it takes to find an issue and correct it.”
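Instrumental has not published its algorithms, but the general idea of remote, image-based screening can be sketched very simply: compare each unit’s photo against a known-good “golden” reference and flag units that differ too much. The file names and threshold below are hypothetical, invented only to illustrate the concept.

```python
# Toy sketch of flagging assembly-line images that differ from a known-good
# reference unit. This is NOT Instrumental's algorithm -- just the general idea
# of remote, image-based anomaly screening. File names and threshold are made up.
import numpy as np
from PIL import Image

def difference_score(reference_path: str, unit_path: str) -> float:
    """Mean absolute pixel difference between a unit photo and the reference."""
    ref = np.asarray(Image.open(reference_path).convert("L"), dtype=np.float32)
    unit = np.asarray(Image.open(unit_path).convert("L"), dtype=np.float32)
    if ref.shape != unit.shape:
        # Resize the unit image to the reference size before comparing.
        unit = np.asarray(
            Image.open(unit_path).convert("L").resize(ref.shape[::-1]),
            dtype=np.float32,
        )
    return float(np.abs(ref - unit).mean())

if __name__ == "__main__":
    score = difference_score("golden_unit.png", "unit_00123.png")
    print("difference score:", score)
    if score > 12.0:   # hypothetical threshold tuned on historical images
        print("Flag unit for engineering review")
```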

WHAT IS AI:

Artificial intelligence (AI) is intelligence exhibited by machines.  In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.  Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of “artificial intelligence”, having become a routine technology.  Capabilities currently classified as AI include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.
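The “intelligent agent” definition above (perceive the environment, act to maximize success at a goal) can be illustrated with a toy perceive-decide-act loop. The guessing-game environment and agent below are invented purely for illustration and do not represent any real AI system.

```python
# Toy "intelligent agent": it perceives feedback from its environment and takes
# the action it expects will maximize success at its goal (finding the target).
import random

class GuessingEnvironment:
    """Hidden number; the percept tells the agent whether its guess was low or high."""
    def __init__(self, low=0, high=100):
        self.target = random.randint(low, high)

    def percept(self, guess: int) -> str:
        if guess < self.target:
            return "too low"
        if guess > self.target:
            return "too high"
        return "goal reached"

class BisectingAgent:
    """Keeps an internal model (a range) and acts to narrow in on the goal."""
    def __init__(self, low=0, high=100):
        self.low, self.high = low, high

    def act(self) -> int:
        return (self.low + self.high) // 2     # action expected to do best

    def update(self, guess: int, percept: str) -> None:
        if percept == "too low":
            self.low = guess + 1
        elif percept == "too high":
            self.high = guess - 1

env = GuessingEnvironment()
agent = BisectingAgent()
for step in range(1, 8):
    guess = agent.act()                 # act on the environment
    feedback = env.percept(guess)       # perceive the result
    print(f"step {step}: guessed {guess}, percept: {feedback}")
    if feedback == "goal reached":
        break
    agent.update(guess, feedback)       # adjust behavior toward the goal
```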

FUTURE:

Some would have you believe that AI IS the future and that we will succumb to the “Rise of the Machines”.  I’m not so melodramatic.  I feel AI has progressed, and will continue to progress, to the point where great time savings and reductions in labor may be realized.   Anna Katrina Shedletsky and Samuel Weiss realize the potential and feel there will be no going back from this disruptive technology.   Moving AI to the factory floor will produce great benefits for manufacturing and other commercial enterprises.   There is also a significant possibility that job creation will occur as a result.  All is not doom and gloom.


Various definitions of product lifecycle management, or PLM, have been issued over the years, but basically: product lifecycle management is the process of managing the entire lifecycle of a product from inception, through engineering design and manufacture, to service and disposal of the manufactured product.  PLM integrates people, data, processes, and business systems and provides a product information backbone for companies and their extended enterprise.
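To make the “product information backbone” idea a little more concrete, here is a minimal, hypothetical sketch of a product record that carries documents and history through the lifecycle stages. The field names and stages are invented for illustration; commercial PLM systems are far richer than this.

```python
# Minimal, hypothetical sketch of a product record that follows an item through
# its lifecycle stages. Field names and stages are invented for illustration.
from dataclasses import dataclass, field
from datetime import date
from typing import List

LIFECYCLE_STAGES = ["concept", "design", "manufacture", "service", "disposal"]

@dataclass
class Document:
    name: str          # e.g. a CAD model, test report, or Use and Care Manual
    doc_type: str      # "CAD", "CAM", "CAE", "test", "manual", ...
    revision: str

@dataclass
class ProductRecord:
    part_number: str
    description: str
    stage: str = "concept"
    documents: List[Document] = field(default_factory=list)
    history: List[str] = field(default_factory=list)

    def attach(self, doc: Document) -> None:
        self.documents.append(doc)
        self.history.append(f"{date.today()}: attached {doc.name} rev {doc.revision}")

    def advance(self) -> None:
        i = LIFECYCLE_STAGES.index(self.stage)
        if i < len(LIFECYCLE_STAGES) - 1:
            self.stage = LIFECYCLE_STAGES[i + 1]
            self.history.append(f"{date.today()}: moved to {self.stage}")

record = ProductRecord("787-DOOR-001", "Passenger door assembly")
record.attach(Document("door_assembly.step", "CAD", "B"))
record.advance()   # concept -> design
print(record.stage, [d.name for d in record.documents])
```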

In recent years, great emphasis has been put on the disposal of a product after its service life has ended.  How to get rid of a product or component is extremely important. Disposal methodology is covered by RoHS standards for the European Community.  If you sell into the EU, you will have to designate proper disposal.  Dumping in a landfill is no longer appropriate.

Since this course deals with the application of PLM to industry, we will now look at various industry definitions.

Industry Definitions

“PLM is a strategic business approach that applies a consistent set of business solutions in support of the collaborative creation, management, dissemination, and use of product definition information across the extended enterprise, and spanning from product concept to end of life integrating people, processes, business systems, and information. PLM forms the product information backbone for a company and its extended enterprise.” Source:  CIMdata

“Product life cycle management or PLM is an all-encompassing approach for innovation, new product development and introduction (NPDI) and product information management from initial idea to the end of life.  PLM Systems is an enabling technology for PLM integrating people, data, processes, and business systems and providing a product information backbone for companies and their extended enterprise.” Source:  PLM Technology Guide

“The core of PLM (product life cycle management) is in the creation and central management of all product data and the technology used to access this information and knowledge. PLM as a discipline emerged from tools such as CAD, CAM and PDM, but can be viewed as the integration of these tools with methods, people and the processes through all stages of a product’s life.” Source:  Wikipedia article on Product Lifecycle Management

“Product life cycle management is the process of managing product-related design, production and maintenance information. PLM may also serve as the central repository for secondary information, such as vendor application notes, catalogs, customer feedback, marketing plans, archived project schedules, and other information acquired over the product’s life.” Source:  Product Lifecycle Management

“It is important to note that PLM is not a definition of a piece, or pieces, of technology. It is a definition of a business approach to solving the problem of managing the complete set of product definition information-creating that information, managing it through its life, and disseminating and using it throughout the lifecycle of the product. PLM is not just a technology, but is an approach in which processes are as important, or more important than data.” Source:  CIMdata

“PLM or Product Life Cycle Management is a process or system used to manage the data and design process associated with the life of a product from its conception and envisioning through its manufacture, to its retirement and disposal. PLM manages data, people, business processes, manufacturing processes, and anything else pertaining to a product. A PLM system acts as a central information hub for everyone associated with a given product, so a well-managed PLM system can streamline product development and facilitate easier communication among those working on/with a product.” Source:  Aras


Hopefully, you can see that PLM deals with methodologies from “white napkin design to landfill disposal”.  Please note that documentation is critical to all aspects of PLM, and good document production, storage, and retrieval are extremely important to the overall process.  We are talking about CAD, CAM, CAE, DFSS, laboratory testing notes, etc.  In other words, “the whole nine yards of product life”.   If you work in a company with ISO certification, PLM is a great method to ensure retaining that certification.

In looking at the four stages of a product’s lifecycle, we see the following:

Four Stages of Product Life Cycle—Marketing and Sales:

Introduction: When the product is brought into the market. In this stage, there’s heavy marketing activity, product promotion and the product is put into limited outlets in a few channels for distribution. Sales take off slowly in this stage. The need is to create awareness, not profits.

The second stage is growth. In this stage, sales take off, the market knows of the product; other companies are attracted, profits begin to come in and market shares stabilize.

The third stage is maturity, where sales grow at slowing rates and finally stabilize. In this stage, products get differentiated, price wars and sales promotion become common and a few weaker players exit.

The fourth stage is decline. Here, sales drop, as consumers may have changed, the product is no longer relevant or useful. Price wars continue, several products are withdrawn and cost control becomes the way out for most products in this stage.

Benefits of PLM Relative to the Four Stages of Product Life:

Considering the benefits of Product Lifecycle Management, we realize the following:

  • Reduced time to market
  • Increased full-price sales
  • Improved product quality and reliability
  • Reduced prototyping costs
  • More accurate and timely request for quote generation
  • Ability to quickly identify potential sales opportunities and revenue contributions
  • Savings through the re-use of original data
  • Framework for product optimization
  • Reduced waste
  • Savings through the complete integration of engineering workflows
  • Documentation that can assist in proving compliance for RoHS or Title 21 CFR Part 11
  • Ability to provide contract manufacturers with access to a centralized product record
  • Seasonal fluctuation management
  • Improved forecasting to reduce material costs
  • Maximize supply chain collaboration
  • Allowing for much better “troubleshooting” when field problems arise. This is accomplished by laboratory testing and reliability testing documentation.

PLM considers not only the four stages of a product’s lifecycle but also all of the work prior to marketing and sales AND disposal after the product is removed from commercialization.   With this in mind, why is PLM a necessary business technique today?  Because of increases in technology, manpower, and specialization of departments, PLM was needed to integrate all activity toward the design, manufacturing, and support of the product. Back in the late 1960s, when the F-15 Eagle was conceived and developed, almost all manufacturing and design processes were done by hand.  Blueprints or drawings needed to make the parts for the F-15 were created on paper. No electronics, no emails; all paper documents. This caused a lack of efficiency in design and manufacturing compared to today’s technology.  OK, another example of today’s technology and the application of PLM.

If we look at the processes for Boeing’s 787 Dreamliner, we see it has about 2.3 million parts per airplane.  Development and production of the 787 has involved large-scale collaboration with numerous suppliers worldwide. The parts include everything from “fasten seatbelt” signs to jet engines and vary in size from small fasteners to large fuselage sections. Some parts are built by Boeing, and others are purchased from supplier partners around the world.  In 2012, Boeing purchased approximately seventy-five (75) percent of its supplier content from U.S. companies. On the 787 program, content from non-U.S. suppliers accounts for about thirty (30) percent of purchased parts and assemblies.  PLM, or Boeing’s version of PLM, was used to bring about commercialization of the 787 Dreamliner.

 

COGNITIVE ABILITY

June 10, 2017


In 2013 my mother died of Alzheimer’s disease.  She was ninety-two (92) years old.  My father suffered significant dementia and passed away in 2014.  He was ninety-three (93) and one day.  We provided a birthday cake for him but unfortunately, he was unable to eat because he did not understand the significance and had no appetite remaining at all. Dementia is an acquired condition characterized by a decline in at least two cognitive domains (e.g., loss of memory, attention, language, or visuospatial or executive functioning) that is severe enough to affect social or occupational functioning. The passing of both parents demanded a search for methodologies to prolong cognitive ability. What, if anything, can we do to remain “brain healthy” well into our eighties and nineties?  Neurologists tell us we all will experience diminished mental abilities as we age but can we lengthen our brain’s ability to reason and perform?  The answer is a resounding YES.  Let’s take a look at activities the medical profession recommends to do just that.

  • READ—What is the difference between someone who does not know how to read and someone who does know but never cracks a book? ANSWER: Absolutely nothing.   If the end result is knowledge and/or pleasure gained, they both are equal.  Reading books and other materials with vivid imagery is not only fun, it also allows us to create worlds in our own minds. Researchers have found that visual imagery is simply automatic. Participants were able to identify photos of objects faster if they’d just read a sentence that described the object visually, suggesting that when we read a sentence, we automatically bring up pictures of objects in our minds. Any kind of reading provides stimulation for your brain, but different types of reading give different experiences with varying benefits. Stanford University researchers have found that close literary reading in particular gives your brain a workout in multiple complex cognitive functions, while pleasure reading increases blood flow to different areas of the brain. They concluded that reading a novel closely for literary study and thinking about its value is an effective brain exercise, more effective than simple pleasure reading alone.
  • MAKE MORE MISTAKES—Now, we are talking about engaging life, or JUST DO IT. Every endeavor must be accompanied by calculating the risks vs. rewards, always keeping safety and general well-being in mind.  It took me a long time to get the courage to write and publish, but the rewards have been outstanding on a personal level.
  • LEARN FROM OTHER’S MISTAKES—Less painful than “learning the hard way” but just as beneficial. Reading about the efforts of successful people and the mistakes they made along the way can go a long way to our avoiding the same pitfalls.
  • LEARN TO CONTROL YOUR BREATHING—This one really surprises me. Medical textbooks suggest that the normal respiratory rate for adults is only 12 breaths per minute at rest; older textbooks often give even smaller values (e.g., 8-10 breaths per minute). Most modern adults breathe much faster (about 15-20 breaths per minute) than their normal breathing frequency. Respiratory rates in sick persons are usually higher, generally about 20 breaths/min or more, and numerous studies report that respiratory rates in terminally ill people with cancer, HIV/AIDS, cystic fibrosis, and other conditions are usually over 30 breaths/min.  Learning to control respiratory rate is one factor in maintaining a healthy brain.
  • EXERCISE—This seems to be a no-brainer (pardon the pun), but thousands, maybe hundreds of thousands, of people NEVER exercise. For most healthy adults, the Department of Health and Human Services recommends these exercise guidelines: get at least 150 minutes of moderate aerobic activity or 75 minutes of vigorous aerobic activity a week, or a combination of moderate and vigorous activity.  That is the minimum.
  • VISUALIZE YOUR OUTCOME—You have heard this before from world-class athletes. Picture yourself accomplishing the goal or goals you have established.  Make winning a foregone conclusion.
  • FOCUS ON THE LITTLE THINGS—For want of a nail the shoe was lost. For want of a shoe the horse was lost. For want of a horse the rider was lost. You have often heard “don’t sweat the small stuff”, but people who accomplish things pay attention to detail.
  • WRITE—Nothing can clear the mind like writing down your thoughts. You have to organize, plan, visualize and execute when writing.
  • LEARN A NEW LANGUAGE—This is a tough one for most adults but, learning a new language stimulates areas of your brain. Scientists have long held the theory that the left and right hemisphere of your brain control different functions when it comes to learning. The left hemisphere is thought to control language, math and logic, while the right hemisphere is responsible for spatial abilities, visual imagery, music and your ability to recognize faces. The left hemisphere of your brain also controls the movement on the right side of your body. The left hemisphere of the brain contains parts of the parietal lobe, temporal lobe and the occipital lobe, which make up your language control center. In these lobes, two regions known as the Wernicke area and the Broca area allow you to understand and recognize, read and speak language patterns — including the ability to learn foreign languages.
  • SLEEP—“The evidence is clear that better brain and physical health in older people is related to getting an average of seven to eight hours of sleep every 24 hours,” said Sarah Lock, the council’s executive director and AARP senior vice president. The evidence on whether naps are beneficial to brain health in older adults is still unclear. If you must nap, limit it to 30 minutes in the early afternoon; longer naps late in the day can disrupt nighttime sleep. Get up at the same time every day, seven days a week. (You will not like this one.) Keep the bedroom for sleeping, not watching TV or reading or playing games on your smartphone or tablet.
  • DIET—A “brain-healthy” diet can go a long way to promoting cognitive ability. Keeping weight off and maintaining an acceptable body mass index (BMI) can certainly promote improved mental ability.
  • LEARN TO PROGRAM—This is another tough one. Programming is difficult, tedious, time-consuming and can be extremely frustrating.  You must have the patience of Job to be a successful programmer, but it is mind-stimulating and can benefit cognitive ability.
  • TRAVEL—As much as you can, travel. Travel is a marvelous learning experience and certainly broadens an individual’s outlook.  New experiences, new and interesting people, new languages, all contribute to mental stimulation and improve cognitive ability.
  • LESSEN MIND-NUMBING TELEVISION—Enough said here. Read a good book.
  • APPLY THE KNOWLEDGE YOU HAVE—Trust me on this one, you are a lot smarter than you think you are. Apply what you know to any one given situation. You will be surprised at the outcome and how your success will fuel additional successes.
  • REDUCE EXPOSURE TO SOCIAL MEDIA—Social media can become a time-robbing exercise that removes you from real life. Instead of reading about the experiences of others, bring about experiences in your own life.

CONCLUSIONS:  As always, I welcome your comments.

CLOUD COMPUTING

May 20, 2017


OK, you have heard the term over and over again, but just what is cloud computing? Simply put, cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics, and more—over the Internet (“the cloud”). Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you’re billed for water or electricity at home. It is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications, and services), which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in either privately owned or third-party data centers that may be located far from the user, ranging in distance from across a city to across the world. Cloud computing relies on the sharing of resources to achieve coherence and economy of scale, much like a utility such as the electricity grid.
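One way to see the “provision on demand, pay for usage” idea is through a cloud provider’s API. The sketch below uses Amazon S3 via the boto3 library; it assumes AWS credentials are already configured on the machine, and the bucket name is hypothetical (bucket names must be globally unique in practice).

```python
# Minimal sketch of the "pay for what you use, provision on demand" idea using
# Amazon S3 via boto3. Assumes AWS credentials are already configured locally;
# the bucket name is hypothetical and must be globally unique in practice.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Provision storage on demand -- no hardware purchase, just an API call.
s3.create_bucket(Bucket="example-cloud-demo-bucket")

# Store data in the provider's data center and read it back over the Internet.
s3.put_object(Bucket="example-cloud-demo-bucket",
              Key="hello.txt",
              Body=b"Hello from the cloud")
obj = s3.get_object(Bucket="example-cloud-demo-bucket", Key="hello.txt")
print(obj["Body"].read().decode())
```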

ADVANTAGES AND DISADVANTAGES:

Any new technology has an upside and downside. There are obviously advantages and disadvantages when using the cloud.  Let’s take a look.

 Advantages

  • Lower cost for desktop clients since the applications are running in the cloud. This means clients with smaller hard drive requirements and possibly even no CD or DVD drives.
  • Peak computing needs of a business can be offloaded into cloud applications, saving the funds normally used for additional in-house servers.
  • Lower maintenance costs. This includes both hardware and software cost reductions, since client machine requirements are much lower and software purchase costs are eliminated altogether for applications running in the cloud.
  • Automatic application software updates for applications in the cloud. This is another maintenance savings.
  • Vastly increased computing power availability. The scalability of the server farm provides this advantage.
  • The scalability of virtual storage provides unlimited storage capacity.

 Disadvantages

  • Requires an “always on” Internet connection.
  • There are clearly concerns with data security, e.g., questions like: “If I can get to my data using a web browser, who else can?”
  • Concerns for loss of data.
  • Reliability. Service interruptions are rare but can happen. Google has already had an outage.

MAJOR CLOUD SERVICE PROVIDERS:

The following names are very recognizable.  Everyone knows the “open-market” cloud service providers.

  • AMAZON
  • SALESFORCE
  • GOOGLE
  • IBM
  • MICROSOFT
  • SUN MICROSYSTEMS
  • ORACLE
  • AT & T

PRIVATE CLOUD SERVICE PROVIDERS:

With all the interest in cloud computing as a service, there is also an emerging concept of private clouds. It is a bit reminiscent of the early days of the Internet and the importing of that technology into the enterprise as intranets. The concerns for security and reliability outside corporate control are very real and troublesome aspects of the otherwise attractive technology of cloud computing services. The IT world has not forgotten the eight-hour downtime of the Amazon S3 cloud service on July 20, 2008. A private cloud means that the technology must be bought, built, and managed within the corporation. A company purchases cloud technology usable inside the enterprise for development of cloud applications that have the flexibility of running on the private cloud or outside on public clouds. This “hybrid environment” is in fact the direction that some believe the enterprise community will be going, and some of the products that support this approach are listed below.

  • Elastra (http://www.elastra.com ) is developing a server that can be used as a private cloud in a data center. Tools are available to design applications that will run in both private and public clouds.
  • 3Tetra (http://www.3tetra.com ) is developing a grid operating system called ParaScale that will aggregate disk storage.
  • Cassatt (http://www.cassatt.com ) will be offering technology that can be used for resource pooling.
  • NComputing (http://www.ncomputing.com ) has developed a standard desktop PC virtualization software system that allows up to 30 users to share the same PC with their own keyboard, monitor, and mouse. Strong claims are made about savings on PC costs, IT complexity, and power consumption by customers in government, industry, and education communities.

CONCLUSION:

OK, clear as mud, right?  For me, the biggest misconception is the terminology itself—the cloud.   The word “cloud” seems to imply an IT system in the sky.  The exact opposite is the case.  The cloud is an earth-based IT system serving as a universal host.  A network of computers. A network of servers.  No cloud.


My wife and I love going to the movies.  Notice I say “going to the movies”.  Not so much downloading a film and watching from the couch.  We usually combine our movie watching with dinner afterward; discussing the film over a hot plate of “something”.  Something usually fattening. With that being the case, have you noticed lately the number of movies dedicated to designated heroes “saving the day” and the number of Sci-Fi movies that have been and will be released in the 2017 calendar year?

Let’s take a quick look at what we have coming down the pike for 2017.

  • POWER RANGERS—March 24, 2017
  • WONDER WOMAN—June 2, 2017
  • SPIDER-MAN: HOMECOMING—July 7, 2017
  • RESIDENT EVIL: THE FINAL CHAPTER—January 27, 2017
  • GUARDIANS OF THE GALAXY, VOLUME 2—May 5, 2017
  • THOR: RAGNAROK—November 3, 2017
  • TRANSFORMERS: THE LAST KNIGHT—June 23, 2017
  • OKJA—June 27, 2017
  • DARK TOWER—July 28, 2017
  • STAR WARS: THE LAST JEDI—December 17, 2017


If we take a look at history, we find the top ten (10) grossing movies are as follows:

  • 10. Star Wars: Episode II — Attack of the Clones (2002) –$311 Million
  • 9. Star Wars: Episode III — Revenge of the Sith (2005) –$380 Million
  • 8. Independence Day (1996) –$306 Million
  • 7. Back to the Future (1985) –$211 Million
  • 6. Star Wars: Episode I — The Phantom Menace (1999) –$475 Million
  • 5. Return of the Jedi (1983) –$309 Million
  • 4. The Empire Strikes Back (1980) –$290 Million
  • 3. Avatar (2009) –$761 Million
  • 2. ET: The Extra-Terrestrial (1982) –$435 Million
  • 1. Star Wars (1977) –$461 Million

The box office amounts represent dollars earned during the year of release.  If you adjust those figures to 2017 dollars, the amounts become much greater.
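As a rough illustration of that adjustment, here is a back-of-the-envelope inflation calculation; the CPI figures are approximate values assumed only for the example.

```python
# Back-of-the-envelope inflation adjustment of a 1977 box-office figure to
# 2017 dollars. The CPI values are approximate and used only for illustration.
CPI_1977 = 60.6        # approximate U.S. CPI-U annual average for 1977
CPI_2017 = 245.1       # approximate U.S. CPI-U annual average for 2017

star_wars_1977_gross = 461_000_000          # dollars earned in year of release
adjusted = star_wars_1977_gross * CPI_2017 / CPI_1977
print(f"Star Wars (1977) gross in 2017 dollars: ${adjusted / 1e9:.2f} billion")
```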

Notice a trend?  Six (6) of the top ten grossing movies involve the Star Wars series.  We absolutely love seeing the “good guys” win.  Good against evil.  We go to these “flicks” hoping beyond hope the heroes will save the day, the good guys will get the girl, civilizations will win, all live happily ever after.  I love to see the computerized graphics and how the script unfolds using the graphics to illustrate and define the story.  These days, spectacular computer generated effects sequences are commonplace in everything from big-budget films to television, games, and even commercial advertising. But that wasn’t always the case—before 3D computer graphics became the norm, the world was a slightly duller place. Aliens were made of plastic instead of pixels. Superman needed wires in order to fly. Animations were created with pencils and paintbrushes.   Blockbusters look better than ever thanks to a talented army of 3D modelers, animators, render technicians, and warehouses full of the computers that do all the math.

Computer animations make the “hero-type” movies but still the questions must be asked—Why do we need heroes? Let’s take a look.

  1. We’re born to have heroes— More than a half-century ago, Carl Jung proposed the idea that all humans have collectively inherited unconscious images, ideas, or thoughts, which he called archetypes.  These archetypes reflect common experiences that all humans (and their ancestors) have shared over millions of years of evolution, and the main purpose of these archetypes is to prepare us for these common experiences.  Two such archetypes, according to Jung, are heroes and demons.  Current research appears to support Jung – scientists have found that newborn babies are equipped with a readiness for language, for numbers, for their parents’ faces — and even a preference for people who are moral.  Humans appear to be innately prepared for certain people and tasks, and we believe this may include encounters with heroes.
  2. Heroes nurture us when we’re young— Research has shown that when people are asked to name their own personal heroes, the first individuals who often come to mind are parents and caretakers.  All of us owe whatever success we’ve had in life to the people who were there for us when we were young, vulnerable, and developing.  When we recognize the great sacrifices that these nurturers and caretakers have made for us, we’re likely to call them our heroes.
  3. Heroes reveal our missing qualities— Heroes educate us about right and wrong.  Most fairytales and children’s stories serve this didactic purpose, showing kids the kinds of behaviors that are needed to succeed in life, to better society, and to overcome villainy.  It is during our youth that we most need good, healthy adult role models who demonstrate exemplary behavior.  But adults need heroic models as well.  Heroes reveal to us the kinds of qualities we need to be in communion with others.
  4. Heroes save us when we’re in trouble— This principle explains the powerful appeal of comic book superheroes.  People seemingly can’t get enough of Batman, Superman, Spiderman, Iron Man, and many others. We are moved by stories of magical beings with superhuman powers who can instantly remove danger and make everything right.  This principle also explains our extreme admiration for society’s true heroic protectors – law enforcement officers, firefighters, EMTs, paramedics, and military personnel.
  5. Heroes pick us up when we’re down— Life inevitably hands us personal setbacks and failings.  Failed relationships, failed businesses, and health problems are common life experiences for us.  Our research has shown that it is during these phases of great personal challenge in our lives that heroes are most likely to inspire us to overcome whatever adversity we’re facing.  Heroes lift us up when we’re personally in danger of falling down emotionally, physically, or spiritually.  I think this is one reason we all love the movies.  We can become winners through the actions of our heroes.
  6. Heroes validate our preferred moral worldview — One fascinating theory in psychology is called terror management theory, which proposes that people’s fear of death strengthens their allegiance to cultural values. Just the simple act of reminding people of their mortality leads them to exaggerate whatever moral tendencies they already have.  For example, studies have shown that reminders of death lead people to reward do-gooders and punish bad-doers more than they normally would.  Just thinking about the fragility of life can lead us to need and to value heroes.
  7. Heroes provide dramatic, entertaining stories— Psychologists have long been aware of the power of a good, juicy narrative.  Stories of heroes and heroic myth have entertained humans since the dawn of recorded history.  Joseph Campbell documented recurring patterns in these hero stories in his seminal book, and virtually all hero stories feature these time-honored patterns.  Today’s media are all too aware of our hunger for hero stories and take great delight in building celebrity heroes up and then tearing them down.  People have always been drawn to human drama and they always will be. Admit it—we all love a great story.
  8. Heroes solve problems— Research has shown that people’s heroes are not just paragons of morality. They also show superb competencies directed toward the goal of solving society’s most vexing problems.  Jonas Salk developed the first polio vaccine.  George Washington Carver introduced crop rotation into agriculture. Stephanie Kwolek invented the material in bullet-proof vests that has saved the lives of countless law enforcement officers.  Heroes give us wisdom and save lives with their brains, not just with their brawn.  We do NOT get much problem solving from Congress nowadays, so just maybe we substitute that inability with guys who really do things.
  9. Heroes deliver justice — People from all cultures possess a strong desire for justice.  After members of the Boston police captured the Boston Marathon bomber, crowds of citizens lined the streets to applaud their heroes.  Research has shown that we need to believe that we live in a just world where good things happen to good people and bad things happen to bad people.  The preamble to the 1950s Superman television show spoke of Superman’s never-ending quest for “truth, justice, and the American way”.  Heroes quench our thirst for fairness and lawfulness.
  10.  Heroes give us hope — Independent of our own personal well-being, we cannot help but recognize that the world is generally a troubled place rife with warfare, poverty, famine, and unrest.  Heroes are beacons of light amidst this vast darkness. Heroes prove to us that no matter how much suffering there is in the world, there are supremely good people around whom we can count on to do the right thing, even when most other people are not. Heroes bring light into a dark world.

Hope you enjoy this one as always—I would love to receive your comments.

RISE OF THE MACHINES

March 20, 2017


Movie making today is truly remarkable.  To me, one of the very best parts is animation created by computer graphics.  I’ve attended “B” movies just to see the graphic displays created by talented programmers.  The “Terminator” series, at least the first movie in that series, really captures the creative essence of graphic design technology.  I won’t replay the movie for you, but the terminator goes back in time to carry out its prime directive—kill John Connor.  The terminator, a robotic humanoid, has decision-making capability as well as human-like mobility that allows the plot to unfold.  Artificial intelligence or AI is a fascinating technology many companies are working on today.  Let’s get a proper definition of AI as follows:

“the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Question:  Are Siri, Cortana, and Alexa eventually going to be more literate than humans? Anyone excited about the recent advancements in artificial intelligence (AI) and machine learning should also be concerned about human literacy. That’s according to Project Literacy, a global campaign, backed by education company Pearson, aimed at creating awareness of and fighting against illiteracy.

Project Literacy, which has been raising awareness for its cause at SXSW 2017, recently released a report, “2027: Human vs. Machine Literacy,” that projects machines powered by AI and voice recognition will surpass the literacy levels of one in seven American adults in the next ten (10) years. “While these systems currently have a much shallower understanding of language than people do, they can already perform tasks similar to simple text search task…exceeding the abilities of millions of people who are nonliterate,” Kate James, Project Literacy spokesperson and Chief Corporate Affairs and Global Marketing Officer at Pearson, wrote in the report. In light of this, the organization is calling for “society to commit to upgrading its people at the same rate as upgrading its technology, so that by 2030 no child is born at risk of poor literacy.”  (I would invite you to re-read this statement and shudder in your boots as I did.)

While the past twenty-five (25) years have seen disappointing progress in U.S. literacy, there have been huge gains in linguistic performance by a totally different type of actor – computers. Dramatic advances in natural language processing (Hirschberg and Manning, 2015) have led to the rise of language technologies like search engines and machine translation that “read” text and produce answers or translations that are useful for people. While these systems currently have a much shallower understanding of language than people do, they can already perform tasks similar to the simple text search task above – exceeding the abilities of millions of people who are nonliterate.

According to the National Center for Education Statistics, machine literacy has already exceeded the literacy abilities of the estimated three percent (3%) of non-literate adults in the US.

Comparing demographic data from the Global Developer Population and Demographic Study 2016 v2 and the 2015 Digest of Education Statistics shows there are more software engineers in the U.S. than school teachers. “We are focusing so much on teaching algorithms and AI to be better at language that we are forgetting that fifty percent (50%) of adults cannot read a book written at an eighth-grade level,” Project Literacy said in a statement.  I retired from General Electric Appliances.   Each engineer was required to write, or at least produce the first draft of, the Use and Care Manuals for specific cooking products.  We were instructed to 1.) use plenty of graphic examples and 2.) write for a fifth-grade audience.  Even with that, we know from experience that many consumers never use and have no intention of reading their Use and Care Manual.  With this being the case, many of the truly cool features are never used.  They may as well buy the most basic product.

Research done by Business Insider reveals that thirty-two (32) million Americans cannot currently read a road sign. Yet at the same time there are ten (10) million self-driving cars predicted to be on the roads by 2020. (One could argue this will further eliminate the need for literacy, but that is debatable.)

Citing research from Venture Scanner, Project Literacy found that in 2015 investment in AI technologies, including natural language processing, speech recognition, and image recognition, reached $47.2 billion. Meanwhile, data on US government spending shows that the 2017 U.S. Federal Education Budget for schools (pre-primary through secondary school) is $40.4 billion.  I’m not too sure funding for education always goes to benefit students’ education. In other words, throwing more money at this problem may not always provide desired results, but there is no doubt that funding for AI will only increase.

“Human literacy levels have stalled since 2000. At any time, this would be a cause for concern, when one in ten people worldwide…still cannot read a road sign, a voting form, or a medicine label,” James wrote in the report. “In popular discussion about advances in artificial intelligence, it is easy…”

CONCLUSION:  AI will only continue to advance and there will come a time when robotic systems will be programmed with basic decision-making skills.  To me, this is not only fascinating but more than a little scary.
