SMARTS

March 17, 2019


Who was the smartest person in the history of our species? Solomon, Albert Einstein, Jesus, Nikola Tesla, Isaac Newton, Leonardo da Vinci, Stephen Hawking—who would you name?  We have had several individuals who broke the curve relative to intelligence.  The Oxford Dictionary of the English Language defines IQ as:

“an intelligence test score that is obtained by dividing mental age, which reflects the age-graded level of performance as derived from population norms, by chronological age and multiplying by 100: a score of 100 thus indicates performance at exactly the normal level for that age group. Abbreviation: IQ”

An intelligence quotient or IQ is a score derived from one of several standardized tests designed to measure intelligence.  The term “IQ” is a translation of the German Intelligenz-Quotient and was coined by the German psychologist William Stern in 1912.  It was a method Dr. Stern proposed for scoring early modern children’s intelligence tests, such as those developed by Alfred Binet and Théodore Simon in the early twentieth century.  Although the term “IQ” is still in use, the scoring of modern IQ tests such as the Wechsler Adult Intelligence Scale is based on a projection of the subject’s measured rank onto the Gaussian bell curve with a center value of one hundred (100) and a standard deviation of fifteen (15).  The Stanford-Binet IQ test has a standard deviation of sixteen (16).  As you can see from the graphic below, roughly sixty-eight percent (68%) of the human population has an IQ between eighty-five and one hundred and fifteen.  From one hundred and fifteen to one hundred and thirty you are considered to be highly intelligent.  Above one hundred and thirty you are exceptionally gifted.
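For readers who like to see the arithmetic, here is a minimal Python sketch of the two scoring methods described above: Stern’s original ratio IQ and the modern deviation IQ built on a bell curve with a mean of 100 and a standard deviation of 15. The example numbers are hypothetical, chosen only for illustration.

```python
# Minimal sketch of the two IQ scoring methods described above.
# The example values are made up purely for illustration.

def ratio_iq(mental_age, chronological_age):
    """Stern's original ratio IQ: mental age / chronological age * 100."""
    return mental_age / chronological_age * 100

def deviation_iq(raw_score, population_mean, population_sd, scale_sd=15):
    """Modern deviation IQ: place the raw score on a bell curve centered
    at 100 with a standard deviation of 15 (16 for the Stanford-Binet)."""
    z = (raw_score - population_mean) / population_sd
    return 100 + scale_sd * z

print(ratio_iq(mental_age=12, chronological_age=10))          # 120.0
print(deviation_iq(raw_score=130, population_mean=100,
                   population_sd=20))                          # 122.5
```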

What are several qualities of highly intelligent people?  Let’s look.

QUALITIES:

  • They exercise a great deal of self-control
  • They are very curious
  • They are avid readers
  • They are intuitive
  • They love learning
  • They are adaptable
  • They are risk-takers
  • They are NOT over-confident
  • They are open-minded
  • They are somewhat introverted

You probably know individuals who fit this profile.  We are going to look at one right now:  John von Neumann.

JOHN von NEUMANN:

The Financial Times of London celebrated John von Neumann as “The Man of the Century” on Dec. 24, 1999. The headline hailed him as the “architect of the computer age,” not only the “most striking” person of the 20th century, but its “pattern-card”—the pattern from which modern man, like the newest fashion collection, is cut.

The Financial Times and others characterize von Neumann’s importance for the development of modern thinking by what are termed his three great accomplishments, namely:

(1) Von Neumann is the inventor of the computer. All computers in use today have the “architecture” von Neumann developed, which makes it possible to store the program, together with data, in working memory.
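To make the “stored program” idea concrete, here is a minimal, hypothetical sketch of a toy von Neumann-style machine in Python. The three-instruction “architecture” and the program are invented for illustration; the point is simply that the instructions live in the same memory as the data they operate on, and the machine fetches and executes them in a loop.

```python
# Toy illustration of the von Neumann idea: instructions and data share
# one memory, and the machine fetches, decodes, and executes in a loop.
# The tiny instruction set below is invented purely for illustration.

memory = [
    ("LOAD", 10),    # address 0: load the value stored at address 10
    ("ADD", 11),     # address 1: add the value stored at address 11
    ("HALT", None),  # address 2: stop
    None, None, None, None, None, None, None,
    40,              # address 10: data
    2,               # address 11: data
]

accumulator = 0
program_counter = 0
while True:
    opcode, operand = memory[program_counter]   # fetch the next instruction
    program_counter += 1
    if opcode == "LOAD":
        accumulator = memory[operand]
    elif opcode == "ADD":
        accumulator += memory[operand]
    elif opcode == "HALT":
        break

print(accumulator)   # 42
```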

(2) By comparing human intelligence to computers, von Neumann laid the foundation for “Artificial Intelligence,” which is taken to be one of the most important areas of research today.

(3) Von Neumann used his “game theory” to develop a dominant tool for economic analysis, which gained recognition in 1994 when the Nobel Prize for economic sciences was awarded to John C. Harsanyi, John F. Nash, and Reinhard Selten.

John von Neumann, original name János Neumann (born December 28, 1903, Budapest, Hungary—died February 8, 1957, Washington, D.C.), was a Hungarian-born American mathematician. As an adult, he appended von to his surname; the hereditary title had been granted to his father in 1913. Von Neumann grew from child prodigy to one of the world’s foremost mathematicians by his mid-twenties. Important work in set theory inaugurated a career that touched nearly every major branch of mathematics. Von Neumann’s gift for applied mathematics took his work in directions that influenced quantum theory, the theory of automata, economics, and defense planning. Von Neumann pioneered game theory and, along with Alan Turing and Claude Shannon, was one of the conceptual inventors of the stored-program digital computer.

Von Neumann exhibited signs of genius in early childhood: he could joke in Classical Greek and, for a family stunt, he could quickly memorize a page from a telephone book and recite its numbers and addresses. Von Neumann learned languages and math from tutors and attended Budapest’s most prestigious secondary school, the Lutheran Gymnasium. The Neumann family fled Béla Kun’s short-lived communist regime in 1919 for a brief and relatively comfortable exile split between Vienna and the Adriatic resort of Abbazia. Upon completion of von Neumann’s secondary schooling in 1921, his father discouraged him from pursuing a career in mathematics, fearing that there was not enough money in the field. As a compromise, von Neumann simultaneously studied chemistry and mathematics. He earned a degree in chemical engineering from the Swiss Federal Institute of Technology in Zurich and a doctorate in mathematics (1926) from the University of Budapest.

OK, that’s all well and good, but do we know the IQ of Dr. John von Neumann?

John von Neumann’s IQ is estimated at 190, which puts him in super-genius territory and in the top 0.1% of the world’s population.

With his marvelous IQ, he wrote one hundred and fifty (150) published papers in his life: sixty (60) in pure mathematics, twenty (20) in physics, and sixty (60) in applied mathematics, with the remainder on special mathematical or non-mathematical subjects. His last work, an unfinished manuscript written while in the hospital and later published in book form as The Computer and the Brain, gives an indication of the direction of his interests at the time of his death. It discusses how the brain can be viewed as a computing machine. The book is speculative in nature, but discusses several important differences between brains and computers of his day (such as processing speed and parallelism), as well as suggesting directions for future research. Memory is one of the central themes in his book.

I told you he was smart!

TELECOMMUTING

March 13, 2019


Our two oldest granddaughters have new jobs.  Both, believe it or not, telecommute.  That’s right, they do NOT drive to work.  They work from home—every day of the week and sometimes on Saturday.  Both ladies work for companies not remotely close to their homes in Atlanta.  The headquarters for these companies are hundreds of miles away and in other states.

Even the word is fairly new!  A few years ago, there was no such “animal” as telecommuting, and today it’s considered by progressive companies as “kosher”.   Companies such as AT&T, Blue Cross-Blue Shield, Southwest Airlines, The Home Shopping Network, Amazon and even Home Depot allow selected employees to “mail it in”.  The interesting thing: efficiency and productivity are not lessened and, in most cases, improve.   Let’s look at several very interesting facts regarding this trend in conducting business.  This information comes from a website called “FlexJobs.com”.

  1. Three point three (3.3) million full-time professionals, excluding volunteers and the self-employed, consider their home as their primary workplace.
  2. Telecommuters save between six hundred ($600) and one thousand ($1,000) dollars on annual dry-cleaning expenses, more than eight hundred ($800) on coffee and lunch, enjoy a tax break of about seven hundred and fifty ($750), save five hundred and ninety ($590) on their professional wardrobe, save one thousand one hundred and twenty ($1,120) on gas, and avoid over three hundred ($300) dollars in car maintenance costs. (See the quick tally after this list.)
  3. Telecommuters save two hundred and sixty (260) hours per year by not commuting on a daily basis.
  4. Work-from-home programs help businesses save about two thousand ($2,000) per person per year and reduce turnover by fifty (50%) percent.
  5. The typical telecommuter is a college graduate, about forty-nine (49) years old, who works for a company with fewer than one hundred (100) employees.
  6. Seventy-three percent (73%) of remote workers are satisfied with the company they work for and feel that their managers are concerned about their well-being and morale.
  7. For every one real work-from-home job, there are sixty job scams.
  8. Most telecommuters (53 percent) work more than forty (40) hours per week.
  9. Telecommuters work harder to create a friendly, cooperative, and positive work environment for themselves and their teams.
  10. Work-from-home professionals (82 percent) were able to lower their stress levels by working remotely. Eighty (80) percent report improved morale, seventy (70) percent increased productivity, and sixty-nine (69) percent miss fewer days from work.
  11. Half of the U.S. workforce have jobs that are compatible with remote work.
  12. Remote workers enjoy more sleep, eat healthier, and get more physical exercise.
  13. Telecommuters are fifty (50) percent less likely to quit their jobs.
  14. When looking at in-office workers and telecommuters, forty-five (45) percent of telecommuters love their job, while twenty-four (24) percent of in-office workers love their jobs.
  15. Four in ten (10) freelancers have completed projects entirely from home.
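Just for fun, here is a quick tally of what the per-person savings cited in item 2 above add up to over a year. This is a minimal Python sketch; where the item gives a range (dry cleaning), the low end is used.

```python
# Quick tally of the annual per-person savings cited in item 2 above.
savings = {
    "dry cleaning": 600,
    "coffee and lunch": 800,
    "tax break": 750,
    "professional wardrobe": 590,
    "gas": 1120,
    "car maintenance": 300,
}
print(sum(savings.values()))   # 4160 -- roughly $4,000+ per year
```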

OK, what are the individual and company benefits resulting from this activity?  These might be as follows:

  • Significant reduction in energy usage by company.
  • Reduction in individual carbon footprint. (It has been estimated that 9,500 pounds of CO2 per year per person could be avoided if the employee works from home.  Most of this is avoidance of cranking up the “tin lizzie”.)
  • Reduction in office expenses in the form of space, desk, chair, tables, lighting, telephone equipment, and computer connections, etc.
  • Reduction in the number of sick days taken due to illnesses from communicable diseases.
  • Fewer “in-office” distractions allowing for greater focus on work.  These might include: 1.) Monday morning congregation at the water cooler to discuss the game on Saturday, 2.) Birthday parties, 3.) Mary Kay meetings, etc., etc.  You get the picture!

In the state where I live (Tennessee), the number of telecommuters has risen eighteen (18) percent relative to 2011.  Some 489,000 adults across Tennessee work from home on a regular basis.  Most of these employees do NOT work for themselves in family-owned businesses but for large companies that allow the activity.  Also, many of these employees work for out-of-state concerns, thus creating ideal situations for both worker and employer.   At Blue Cross of Tennessee, one in six individuals goes to work by staying at home.   Working at home definitely does not mean there is no personal communication with supervisors and peers; such meetings are factored into each work week, some required at least on a monthly basis.

Four point three (4.3) million employees (3.2% of the workforce) now work from home at least half the time.  Regular work-at-home, among the non-self-employed population, has grown by 140% since 2005, nearly 10x faster than the rest of the workforce or the self-employed.  Of course, this marvelous transition has only been made possible by internet connections, and in most cases, the computer technology at home equals or surpasses that found at “work”.   We all know this trend will continue, as well it should.

 

I welcome your comments and love to know your “telecommuting” stories.  Please send responses to: bobjengr@comcast.net.

ARTIFICIAL INTELLIGENCE

February 12, 2019


Just what do we know about Artificial Intelligence or AI?  Portions of this post were taken from Forbes Magazine.

John McCarthy first coined the term artificial intelligence in 1956 when he invited a group of researchers from a variety of disciplines including language simulation, neuron nets, complexity theory and more to a summer workshop called the Dartmouth Summer Research Project on Artificial Intelligence to discuss what would ultimately become the field of AI. At that time, the researchers came together to clarify and develop the concepts around “thinking machines” which up to this point had been quite divergent. McCarthy is said to have picked the name artificial intelligence for its neutrality; to avoid highlighting one of the tracks being pursued at the time for the field of “thinking machines” that included cybernetics, automation theory and complex information processing. The proposal for the conference said, “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Today, modern dictionary definitions focus on AI being a sub-field of computer science and how machines can imitate human intelligence (being human-like rather than becoming human). The English Oxford Living Dictionary gives this definition: “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Merriam-Webster defines artificial intelligence this way:

  1. A branch of computer science dealing with the simulation of intelligent behavior in computers.
  2. The capability of a machine to imitate intelligent human behavior.

About thirty (30) years ago, a professor at the Harvard Business School (Dr. Shoshana Zuboff) articulated three laws based on research into the consequences that widespread computing would have on society. Dr. Zuboff had degrees in philosophy and social psychology, so she was definitely ahead of her time relative to the unknown field of AI.  In her book “In the Age of the Smart Machine: The Future of Work and Power”, she postulated the following three laws:

  • Everything that can be automated will be automated
  • Everything that can be informated will be informated. (NOTE: Informated was coined by Zuboff to describe the process of turning descriptions and measurements of activities, events and objects into information.)
  • In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

At that time there was definitely a significant lack of computing power.  That ship has sailed; computing power is no longer the great hindrance to AI advancement that it certainly once was.

 

WHERE ARE WE?

In a recent speech, Russian president Vladimir Putin made an incredibly prescient statement: “Artificial intelligence is the future, not only for Russia, but for all of humankind.” He went on to highlight both the risks and rewards of AI and concluded by declaring that whatever country comes to dominate this technology will be the “ruler of the world.”

As someone who closely monitors global events and studies emerging technologies, I think Putin’s lofty rhetoric is entirely appropriate. Funding for global AI startups has grown at a sixty percent (60%) compound annual growth rate since 2010. More significantly, the international community is actively discussing the influence AI will exert over both global cooperation and national strength. In fact, the United Arab Emirates just recently appointed its first state minister responsible for AI.

Automation and digitalization have already had a radical effect on international systems and structures. And considering that this technology is still in its infancy, every new development will only deepen the effects. The question is: Which countries will lead the way, and which ones will follow behind?

If we look at the criteria necessary for advancement, there are seven countries in the best position to rule the world with the help of AI.  These countries are as follows:

  • Russia
  • The United States of America
  • China
  • Japan
  • Estonia
  • Israel
  • Canada

The United States and China are currently in the best position to reap the rewards of AI. These countries have the infrastructure, innovations and initiative necessary to evolve AI into something with broadly shared benefits. In fact, China expects to dominate AI globally by 2030. The United States could still maintain its lead if it makes AI a top priority and makes the necessary investments while also pulling together all required government and private-sector resources.

Ultimately, however, winning and losing will not be determined by which country gains the most growth through AI. It will be determined by how the entire global community chooses to leverage AI — as a tool of war or as a tool of progress.

Ideally, the country that uses AI to rule the world will do it through leadership and cooperation rather than automated domination.

CONCLUSIONS:  We dare not neglect this disruptive technology.  We cannot afford to lose this battle.

COMPUTER SIMULATION

January 20, 2019


More and more engineers, systems analysts, biochemists, city planners, medical practitioners, and individuals in entertainment fields are moving towards computer simulation.  Let’s take a quick look at simulation; then we will discover several examples of how very powerful this technology can be.

WHAT IS COMPUTER SIMULATION?

Simulation modelling is an excellent tool for analyzing and optimizing dynamic processes. Specifically, when mathematical optimization of complex systems becomes infeasible, and when conducting experiments within real systems is too expensive, time consuming, or dangerous, simulation becomes a powerful tool. The aim of simulation is to support objective decision making by means of dynamic analysis, to enable managers to safely plan their operations, and to save costs.

A computer simulation or a computer model is a computer program that attempts to simulate an abstract model of a particular system. … Computer simulations build on and are useful adjuncts to purely mathematical models in science, technology and entertainment.

Computer simulations have become a useful part of mathematical modelling of many natural systems in physics, chemistry and biology, human systems in economics, psychology, and social science and in the process of engineering new technology, to gain insight into the operation of those systems. They are also widely used in the entertainment fields.

Traditionally, the formal modeling of systems has been done with mathematical models, which attempt to find analytical solutions to problems, enabling the prediction of the system’s behavior from a set of parameters and initial conditions.  The word prediction is a very important word in the overall process. One very critical part of the predictive process is designating the parameters properly: not only the upper and lower specifications but also the parameters that define intermediate processes.

The reliability and the trust people put in computer simulations depend on the validity of the simulation model.  The degree of trust is directly related to the software itself and the reputation of the company producing the software. There will be considerably more in this course regarding vendors providing software to companies wishing to simulate processes and solve complex problems.

Computer simulations find use in the study of dynamic behavior in an environment that may be difficult or dangerous to implement in real life. For example, a nuclear blast may be represented with a mathematical model that takes into consideration various elements such as velocity, heat and radioactive emissions. Additionally, one may implement changes to the equation by changing certain other variables, like the amount of fissionable material used in the blast.  Another application involves predictive efforts relative to weather systems.  The mathematics involved in these determinations is significantly complex and usually involves a branch of math called “chaos theory”.
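To see why weather-style predictions are so sensitive, here is a minimal sketch of the logistic map, a textbook example from chaos theory: two starting values that differ by only one part in a million drift completely apart after a few dozen iterations. The parameter values are standard textbook choices, not taken from any particular weather model.

```python
# Minimal chaos-theory illustration: the logistic map x -> r*x*(1-x).
# Two nearly identical initial conditions diverge after a few dozen steps,
# which is why long-range prediction of chaotic systems is so hard.
r = 3.9
x_a, x_b = 0.500000, 0.500001   # differ by one part in a million

for step in range(60):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)

print(abs(x_a - x_b))   # the two trajectories are now far apart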

Simulations largely help in determining behaviors when individual components of a system are altered. Simulations can also be used in engineering to determine potential effects, such as the effect on river systems of constructing dams.  Some companies call these behaviors “what-if” scenarios because they allow the engineer or scientist to apply differing parameters to discern cause-and-effect interactions.

One great advantage a computer simulation has over a mathematical model is that it allows a visual representation of events and their timeline. You can actually see the action and chain of events with simulation and investigate the parameters for acceptance.  You can examine the limits of acceptability using simulation.   All components and assemblies have upper and lower specification limits and must perform within those limits.

Computer simulation is the discipline of designing a model of an actual or theoretical physical system, executing the model on a digital computer, and analyzing the execution output. Simulation embodies the principle of “learning by doing” — to learn about the system we must first build a model of some sort and then operate the model. The use of simulation is an activity that is as natural as a child who role plays. Children understand the world around them by simulating (with toys and figurines) most of their interactions with other people, animals and objects. As adults, we lose some of this childlike behavior but recapture it later on through computer simulation. To understand reality and all of its complexity, we must build artificial objects and dynamically act out roles with them. Computer simulation is the electronic equivalent of this type of role playing and it serves to drive synthetic environments and virtual worlds. Within the overall task of simulation, there are three primary sub-fields: model design, model execution and model analysis.
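As a toy end-to-end example of those three sub-fields, here is a minimal Python sketch that designs a model (Newton’s law of cooling), executes it with simple time-stepping, and analyzes the output by running two “what-if” cases with different parameter values, as described earlier. The numbers are invented for illustration only.

```python
# Toy example of the three sub-fields named above:
#   model design    -> Newton's law of cooling, dT/dt = -k * (T - T_ambient)
#   model execution -> simple Euler time-stepping
#   model analysis  -> compare two "what-if" cases with different k values

def simulate_cooling(t_start, t_ambient, k, dt=1.0, steps=60):
    """Return the temperature after `steps` time steps of size `dt`."""
    temperature = t_start
    for _ in range(steps):
        temperature += -k * (temperature - t_ambient) * dt
    return temperature

# What-if: how much does doubling the cooling coefficient matter?
for k in (0.02, 0.04):
    final = simulate_cooling(t_start=90.0, t_ambient=20.0, k=k)
    print(f"k={k}: final temperature ~ {final:.1f}")
```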

REAL-WORLD SIMULATION:

The following examples are taken from computer screens representing real-world situations and/or problems that need solutions.  As mentioned earlier, “what-ifs” may be realized by animating the computer model, providing cause-and-effect responses to desired inputs. Let’s take a look.

A great host of mechanical and structural problems may be solved by using computer simulation. The example above shows how the diameter of two matching holes may be affected by applying heat to the bracket.

 

The Newtonian and non-Newtonian flow of fluids, i.e. liquids and gases, has always been a subject of concern within piping systems.  Flow related to pressure and temperature may be approximated by simulation.

 


Electromagnetics is an extremely complex field. The digital image above strives to show how a magnetic field reacts to applied voltage.

Chemical engineers are very concerned with reaction time when chemicals are mixed.  One example might be the ignition time when an oxidizer comes in contact with fuel.

Acoustics, or how sound propagates through a physical device or structure, can also be simulated.

The transfer of heat from a warmer surface to a colder surface has always come into question. Simulation programs are extremely valuable in visualizing this transfer.

 

Equation-based modeling can be simulated showing how a structure, in this case a metal plate, can be affected when forces are applied.

In addition to computer simulation, we have AR (augmented reality) and VR (virtual reality).  Those subjects are fascinating but will require another post for another day.  Hope you enjoy this one.

 

 

WEARABLE TECHNOLOGY

January 12, 2019


Wearable technology’s evolution is not about the gadget on the wrist or the arm but about what is done with the data these devices collect, say most computational biologists. Before we go on, let’s define wearable technology:

“Wearable technology (also called wearable gadgets) is a category of technology devices that can be worn by a consumer and often include tracking information related to health and fitness. Other wearable tech gadgets include devices that have small motion sensors to take photos and sync with your mobile devices.”

Several examples of wearable technology may be seen by the following digital photographs.

You can all recognize the “watches” shown above. I have one on right now.  For Christmas this year, my wife gave me a Fitbit Charge 3.  I can monitor: 1.) Number of steps per day, 2.) Pulse rate, 3.) Calories burned during the day, 4.) Time of day, 5.) Number of stairs climbed per day, 6.) Miles walked or run per day, and 7.) Several items I can program in from the app on my digital phone.  It is truly a marvelous device.

Other wearables provide very different information and capture data of much greater import.

The device above is manufactured by a company called Lumus.  This company focuses on products that provide new dimensions for the human visual experience. It offers cutting-edge eyewear displays that can be used in various applications including gaming, movie watching, text reading, web browsing, and interaction with the interface of wearable computers. Lumus does not aim to produce self-branded products. Instead, the company wants to work with various original equipment manufacturers (OEMs) to enable the wider use of its technologies.  This is truly ground-breaking technology being used today on a limited basis.

Wearable technology is aiding individuals with decreasing eyesight to see as most people see.  The methodology is explained in the following digital image.

Glucose levels may be monitored by the device shown above. No longer is it necessary to prick your finger to draw a small droplet of blood to determine glucose levels.  The device can do that on a continuous basis and without a cumbersome test device.

There are many people all over the world suffering from “A-fib” (atrial fibrillation).  Periodic monitoring becomes a necessity, and one of the best methods of accomplishing that is shown by the devices below. A watch monitors pulse rate and sends that information via Bluetooth to an app downloaded on your cell phone.

Four Benefits of Wearable Health Technology are as follows:

  • Real-time data collection. Wearables can already collect an array of data like activity levels, sleep and heart rate, among others. …
  • Continuous monitoring. …
  • Prediction and alerting. …
  • Empowering patients.

Major advances in sensor and micro-electromechanical systems (MEMS) technologies are allowing much more accurate measurements and facilitating believable data that can be used to track movements and health conditions on any one given day.  In many cases, the data captured can be downloaded into a computer and transmitted to a medical practitioner for documentation.

Sensor miniaturization is a key driver for space-constrained wearable design.  Motion sensors are now available in tiny packages measuring 2 x 2 millimeters.  As mentioned, specific medical sensors can be used to track 1.) Heart rate variability, 2.) Oxygen levels, 3.) Cardiac health, 4.) Blood pressure, 5.) Hemoglobin, 6.) Glucose levels and 7.) Body temperature.  These medical devices represent a growing market due to their higher accuracy and greater performance.  These facts make them less prone to the price pressures that designers commonly face when designing consumer wearables.
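As one concrete example of what such sensor data looks like once it reaches software, here is a minimal, hypothetical Python sketch that computes RMSSD, a common heart rate variability metric, from a short list of made-up beat-to-beat intervals. Real wearables process far more data than this, but the arithmetic is the same idea.

```python
# Minimal sketch: compute RMSSD (root mean square of successive differences),
# a common heart-rate-variability metric, from beat-to-beat (RR) intervals in
# milliseconds. The interval values below are made up purely for illustration.
import math

rr_intervals_ms = [812, 805, 830, 790, 818, 802, 825]

successive_diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
rmssd = math.sqrt(sum(d * d for d in successive_diffs) / len(successive_diffs))

avg_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
heart_rate_bpm = 60000 / avg_rr   # 60,000 ms per minute / average interval

print(f"heart rate ~ {heart_rate_bpm:.0f} bpm, RMSSD ~ {rmssd:.1f} ms")
```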

One great advantage for these devices now is the ability to hold a charge for a much longer period of time.  My Fitbit has a battery life of seven (7) days.  That’s really unheard of relative to times past.

CONCLUSION:  Wearable designs are building a whole new industry one gadget at a time.  MEMS sensors represent an intrinsic part of this design movement. Wearable designs have come a long way from counting steps in fitness trackers, and they are already applying machine-learning algorithms to classify and analyze data.


My posts are not necessarily aimed at providing public service announcements, but I just could not pass this one up.  Take a look.

On November first of 2018, Honeywell released a study finding that forty-four percent (44%) of the USB drives scanned by their software at fifty (50) customer locations contained at least one unsecured file.  In twenty-six percent (26%) of those cases, the detected file was capable of causing what company officials called “a serious disruption by causing individuals to lose visibility or control of their operations”.  Honeywell began talking up its SMX (Secure Media Exchange) technology at its North American user group meeting in 2016, when removable media like flash drives were already a top pathway for attackers to gain access to a network. SMX, launched officially in 2018, is designed to manage USB security by giving users a place to plug in and check devices for approved use. The SMX Intelligence Gateway is used to analyze files in conjunction with the Advanced Threat Intelligence Exchange (ATIX), Honeywell’s threat intelligence cloud. Not only has SMX made USB use safer, but Honeywell has gained access to a significant amount of information about the methodology of attacks being attempted through these devices.
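SMX itself is a commercial product, but the underlying idea of checking removable media before it touches a control network can be sketched very simply. Here is a minimal, hypothetical Python sketch that hashes every file on a mounted drive and flags anything matching a placeholder list of known-bad hashes; a real gateway such as SMX does far more (behavioral analysis, cloud threat intelligence, device control), and the drive path and hash list below are invented for illustration.

```python
# Minimal, hypothetical sketch of the idea behind a USB "check-in" station:
# hash every file on a mounted drive and flag matches against a known-bad list.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {"0" * 64}   # placeholder, not a real malware hash

def scan_drive(mount_point):
    """Hash every file under mount_point and return those on the bad list."""
    mount = Path(mount_point)
    flagged = []
    if not mount.is_dir():          # nothing mounted, nothing to scan
        return flagged
    for file_path in mount.rglob("*"):
        if file_path.is_file():
            digest = hashlib.sha256(file_path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                flagged.append(file_path)
    return flagged

print(scan_drive("/media/usb0"))    # hypothetical mount point
```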

“The data showed much more serious threats than we expected,” said Eric Knapp, director of strategic innovation for Honeywell Industrial Cyber Security. “And taken together, the results indicate that a number of these threats were targeted and intentional.” Though Honeywell has long suspected the very real USB threats for industrial operators, the data confirmed a surprising scope and severity of threats, Knapp said, adding, “Many of which can lead to serious and dangerous situations at sites that handle industrial processes.”

The threats targeted a range of industrial sites, including refineries, chemical plants and pulp and paper facilities around the world. About one in six of the threats specifically targeted industrial control systems (ICSs) or Internet of Things (IoT) devices. (DEFINITION OF IoT: The Internet of Things (IoT) refers to the use of intelligently connected devices and systems to leverage data gathered by embedded sensors and actuators in machines and other physical objects. In other words, the IoT refers to any physical object connected to a network.)

Among the threats detected, fifteen percent (15%) were high-profile, well-known issues such as Triton, Mirai and WannaCry, as well as variants of Stuxnet. Though these threats have been known to be in the wild, what the Honeywell Industry Cyber Security team considered worrisome was the fact that these threats were trying to get into industrial control facilities through removable storage devices in a relatively high density.

“That high-potency threats were at all prevalent on USB drives bound for industrial control facility use is the first concern. As ICS security experts are well aware, it only takes one instance of malware bypassing security defenses to rapidly execute a successful, widespread attack,” Honeywell’s report noted. “Second, the findings also confirm that such threats do exist in the wild, as the high-potency malware was detected among day-to-day routine traffic, not pure research labs or test environments. Finally, as historical trends have shown, newly emerging threat techniques such as Triton, which target safety instrumented systems, can provoke copycat attackers. Although more difficult and sophisticated to accomplish, such newer threat approaches can indicate the beginnings of a new wave of derivative or copycat attacks.”

In comparative tests, up to eleven percent (11%) of the threats discovered were not reliably detected by more traditional anti-malware technology. Although the type and behavior of the malware detected varied considerably, trojans—which can be spread very effectively through USB devices—accounted for fifty-five percent (55%) of the malicious files. Other malware types discovered included bots (eleven percent), hack-tools (six percent) and potentially unwanted applications (five percent).

“Customers already know these threats exist, but many believe they aren’t the targets of these high-profile attacks,” Knapp said. “This data shows otherwise and underscores the need for advanced systems to detect these threats.”

CONCLUSION:  Some companies and organizations have outlawed USB drives entirely for obvious reasons.  Also, there is some indication that companies, generally off-shore, have purposely embedded malware within USB drives to access information on a random level.  It becomes imperative that we take great care in choosing vendors providing USB drives and other external means of capturing data.  You can never be too safe.

HOW MUCH IS TOO MUCH?

December 15, 2018


How many “screen-time” hours do you spend each day?  Any idea? Now, let’s face facts: for an adult working a full-time job, hours of daily screen time may be a necessity.  We all know that, but how about our children and grandchildren?

I’m old enough to remember when television was a laboratory novelty and telephones were “ringer-types” affixed to the cleanest wall in the house.  No laptops, no desktops, no cell phones, no Gameboys, etc., etc.  You get the picture.  That, as we all know, is a far cry from where we are today.

Today’s children have grown up with a vast array of electronic devices at their fingertips. They can’t imagine a world without smartphones, tablets, and the internet.  If you do not believe this just ask them. One of my younger grandkids asked me what we did before the internet.  ANSWER: we played outside, did our chores, called our friends and family members.

The advances in technology mean today’s parents are the first generation who have to figure out how to limit screen-time for children.  This is a growing requirement for reasons we will discuss later.  While digital devices can provide endless hours of entertainment and they can offer educational content, unlimited screen time can be harmful. The American Academy of Pediatrics recommends parents place a reasonable limit on entertainment media. Despite those recommendations, children between the ages of eight (8) and eighteen (18) average seven and one-half (7 ½) hours of entertainment media per day, according to a 2010 study by the Henry J. Kaiser Family Foundation.  Can you imagine over seven (7) hours per day?  When I read this it just blew my mind.

But it’s not just kids who are getting too much screen time. Many parents struggle to impose healthy limits on themselves too. The average adult spends over eleven (11) hours per day behind a screen, according to the Kaiser Family Foundation.  I’m very sure that most of this is job related but most people do not work eleven hours behind their desk each day.

Let’s now look at what the experts say:

  • Children under age two (2) spend about forty-two (42) minutes, children ages two (2) to four (4) spend two (2) hours and forty (40) minutes, and kids ages five (5) to eight (8) spend nearly three (3) hours (2:58) with screen media daily. About thirty-five (35) percent of children’s screen time is spent with a mobile device, compared to four (4) percent in 2011. Oct 19, 2017
  • Children aged eighteen (18) months to two (2) years can watch or use high-quality programs or apps if adults watch or play with them to help them understand what they’re seeing. Children aged two to five (2-5) years should have no more than one hour a day of screen time with adults watching or playing with them.
  • The American Academy of Pediatrics released new guidelines on how much screen time is appropriate for children. … Excessive screen time can also lead to “Computer Vision Syndrome,” which is a combination of headaches, eye strain, fatigue, blurry vision for distance, and excessive dry eyes. August 21, 2017
  • Pediatricians: No more than two (2) hours of screen time daily for kids. Children should be limited to less than two hours of entertainment-based screen time per day, and shouldn’t have TVs or Internet access in their bedrooms, according to new guidelines from pediatricians. October 28, 2013

OK, why?

  • Obesity: Too much time engaging in sedentary activity, such as watching TV and playing video games, can be a risk factor for becoming overweight.
  • Sleep Problems:  Although many parents use TV to wind down before bed, screen time before bed can backfire. The light emitted from screens interferes with the sleep cycle in the brain and can lead to insomnia.
  • Behavioral Problems: Elementary school-age children who watch TV or use a computer more than two hours per day are more likely to have emotional, social, and attention problems. Excessive TV viewing has even been linked to increased bullying behavior.
  • Educational problems: Elementary school-age children who have televisions in their bedrooms do worse on academic testing.  This is an established fact.  At this time in our history we need educated adults who can get the job done.  We do not need dummies.
  • Violence: Exposure to violent TV shows, movies, music, and video games can cause children to become desensitized to it. Eventually, they may use violence to solve problems and may imitate what they see on TV, according to the American Academy of Child and Adolescent Psychiatry.

When very small children get hooked on tablets and smartphones, says Dr. Aric Sigman, an associate fellow of the British Psychological Society and a Fellow of Britain’s Royal Society of Medicine, they can unintentionally cause permanent damage to their still-developing brains. Too much screen time too soon, he says, “is the very thing impeding the development of the abilities that parents are so eager to foster through the tablets. The ability to focus, to concentrate, to lend attention, to sense other people’s attitudes and communicate with them, to build a large vocabulary—all those abilities are harmed.”

Between birth and age three, for example, our brains develop quickly and are particularly sensitive to the environment around us. In medical circles, this is called the critical period, because the changes that happen in the brain during these first tender years become the permanent foundation upon which all later brain function is built. In order for the brain’s neural networks to develop normally during the critical period, a child needs specific stimuli from the outside environment. These are rules that have evolved over centuries of human evolution, but—not surprisingly—these essential stimuli are not found on today’s tablet screens. When a young child spends too much time in front of a screen and not enough getting required stimuli from the real world, her development becomes stunted.

CONCLUSION: This digital age is wonderful if used properly and recognized as having hazards that may create lasting negative effects.  Use wisely.

SOCIAL MEDIA

June 27, 2018


DEFINITION:

Social media is typically defined today as: “Web sites and applications that enable users to create and share content or to participate in social networking” – Oxford Dictionaries.

Now that we have cleared that up, let’s take a look at the very beginning of social media.

Six Degrees, according to several sources, was the first modern-day attempt at providing access to communication relative to the “marvelous world” of social media. (I have chosen to put marvelous world in quotes because I’m not too sure it’s that marvelous. There is an obvious downside.)  Six Degrees was launched in 1997 and was definitely the first modern social network. It allowed users to create a profile and to become friends with other users. While the site is no longer functional, at one time it was actually quite popular and had approximately a million members at its peak.

Other sources indicate that social media has been around for the better part of forty (40) years with Usenet appearing in 1979.  Usenet is the first recorded network that enabled users to post news to newsgroups.  Although these Usenets and similar bulletin boards heralded the launch of the first, albeit very rudimentary, social networks, social media never really took off until almost thirty (30) years later, following the roll out of Facebook in 2006. Usenet was not identified as “social media” so the exact term was not used at that time.

If we take a very quick look at Internet and Social Media usage, we find the following:

As you can see from above, social media is incredibly popular and in use hourly if not minute-by-minute.  It’s big in our society today, across the world and wherever it is allowed.

If we look at the fifteen most popular sites we see the following:

Without a doubt, the gorilla in the room is Facebook.

Facebook statistics

  • Facebook adds 500,000 new users a day – that’s six new profiles a second (see the quick arithmetic after this list) – and just under a quarter of adults in the US visit their account at least once a month
  • The average (mean) number of Facebook friends is 155
  • There are 60 million active small business pages (up from 40 million in 2015), 5 million of which pay for advertising
  • There are thought to be 270 million fake Facebook profiles (there were only 81 million in 2015)
  • Facebook accounts for 1% of social logins made by consumers to sign into the apps and websites of publishers and brands.
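That “six new profiles a second” figure from the first bullet is easy to sanity-check with a line or two of arithmetic:

```python
# Sanity check of "500,000 new users a day - that's six new profiles a second"
new_users_per_day = 500_000
seconds_per_day = 24 * 60 * 60               # 86,400
print(new_users_per_day / seconds_per_day)   # ~5.8 profiles per second
```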

It’s important we look at all social media sites, so if we look at daily usage for the most popular websites, we see the following:

BENEFITS:

  • Ability to connect to other people all over the world. One of the most obvious pros of using social networks is the ability to instantly reach people from anywhere. Use Facebook to stay in touch with your old high school friends who’ve relocated all over the country, get on Google Hangouts with relatives who live halfway around the world, or meet brand new people on Twitter from cities or regions you’ve never even heard of before.
  • Easy and instant communication. Now that we’re connected wherever we go, we don’t have to rely on our landlines, answering machines or snail mail to contact somebody. We can simply open up our laptops or pick up our smartphones and immediately start communicating with anyone on platforms like Twitter or one of the many social messaging apps.
  • Real-time news and information discovery. Gone are the days of waiting around for the six o’clock news to come on TV or for the delivery boy to bring the newspaper in the morning. If you want to know what’s going on in the world, all you need to do is jump on social media. An added bonus is that you can customize your news and information discovery experiences by choosing to follow exactly what you want.
  • Great opportunities for business owners. Business owners and other types of professional organizations can connect with current customers, sell their products and expand their reach using social media. There are actually lots of entrepreneurs and businesses out there that thrive almost entirely on social networks and wouldn’t even be able to operate without it.
  • General fun and enjoyment. You have to admit that social networking is just plain fun sometimes. A lot of people turn to it when they catch a break at work or just want to relax at home. Since people are naturally social creatures, it’s often quite satisfying to see comments and likes show up on our own posts, and it’s convenient to be able to see exactly what our friends are up to without having to ask them directly.

DISADVANTAGES:

  • Information overwhelm. With so many people now on social media tweeting links and posting selfies and sharing YouTube videos, it sure can get pretty noisy. Becoming overwhelmed by too many Facebook friends to keep up with or too many Instagram photos to browse through isn’t all that uncommon. Over time, we tend to rack up a lot of friends and followers, and that can lead to lots of bloated news feeds with too much content we’re not all that interested in.
  • Privacy issues. With so much sharing going on, issues over privacy will always be a big concern. Whether it’s a question of social sites owning your content after it’s posted, becoming a target after sharing your geographical location online, or even getting in trouble at work after tweeting something inappropriate – sharing too much with the public can open up all sorts of problems that sometimes can’t ever be undone.
  • Social peer pressure and cyber bullying. For people struggling to fit in with their peers – especially teens and young adults – the pressure to do certain things or act a certain way can be even worse on social media than it is at school or any other offline setting. In some extreme cases, the overwhelming pressure to fit in with everyone posting on social media or becoming the target of a cyber-bullying attack can lead to serious stress, anxiety and even depression.
  • Online interaction substitution for offline interaction. Since people are now connected all the time and you can pull up a friend’s social profile with a click of your mouse or a tap of your smartphone, it’s a lot easier to use online interaction as a substitute for face-to-face interaction. Some people argue that social media actually promotes antisocial human behavior.
  • Distraction and procrastination. How often do you see someone look at their phone? People get distracted by all the social apps and news and messages they receive, leading to all sorts of problems like distracted driving or the lack of gaining someone’s full attention during a conversation. Browsing social media can also feed procrastination habits and become something people turn to in order to avoid certain tasks or responsibilities.
  • Sedentary lifestyle habits and sleep disruption. Lastly, since social networking is all done on some sort of computer or mobile device, it can sometimes promote too much sitting down in one spot for too long. Likewise, staring into the artificial light from a computer or phone screen at night can negatively affect your ability to get a proper night’s sleep. (Here’s how you can reduce that blue light, by the way.)

Social media is NOT going away any time soon.  Those who choose to use it will continue using it although there are definite privacy issues. The top five (5) issues discussed by users are as follows:

  • Account hacking and impersonation.
  • Stalking and harassment
  • Being compelled to turn over passwords
  • The very fine line between effective marketing and privacy intrusion
  • The privacy downside with location-based services

I think these issues are very important and certainly must be considered when using ANY social media platform.  Remember—someone is ALWAYS watching.

 


Portions of this post are taken from the January 2018 article written by John Lewis of “Vision Systems”.

I feel there is considerable confusion between Artificial Intelligence (AI), Machine Learning and Deep Learning.  Seemingly, we use these terms and phrases interchangeably, yet they certainly have different meanings.  Natural intelligence is the intelligence displayed by humans and certain animals. Why don’t we do the numbers:

AI:

Artificial Intelligence refers to machines mimicking human cognitive functions such as problem solving or learning.  When a machine understands human speech or can compete with humans in a game of chess, AI applies.  There are several surprising opinions about AI as follows:

  • Sixty-one percent (61%) of people see artificial intelligence making the world a better place
  • Fifty-seven percent (57%) would prefer an AI doctor perform an eye exam
  • Fifty-five percent (55%) would trust an autonomous car. (I’m really not there as yet.)

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names. This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry.

MACHINE LEARNING:

Machine Learning is the current state-of-the-art application of AI and largely responsible for its recent rapid growth. Based upon the idea of giving machines access to data so that they can learn for themselves, machine learning has been enabled by the internet, and the associated rise in digital information being generated, stored and made available for analysis.

Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level understanding. Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.
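“Acting without being explicitly programmed” is easier to see in code than in prose. Here is a minimal sketch in plain Python: rather than hand-coding the rule that relates x to y, the program learns the slope and intercept of a line from example data by gradient descent. The tiny data set is made up for illustration.

```python
# Minimal machine-learning sketch: learn y = w*x + b from examples by
# gradient descent, instead of programming the rule (w=2, b=1) by hand.
data = [(1, 3), (2, 5), (3, 7), (4, 9)]   # made-up examples of y = 2x + 1

w, b = 0.0, 0.0
learning_rate = 0.01
for _ in range(5000):
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (w * x + b) - y
        grad_w += 2 * error * x
        grad_b += 2 * error
    w -= learning_rate * grad_w / len(data)
    b -= learning_rate * grad_b / len(data)

print(round(w, 2), round(b, 2))   # close to 2.0 and 1.0
```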

DEEP LEARNING:

Deep Learning concentrates on a subset of machine-learning techniques, with the term “deep” generally referring to the number of hidden layers in the deep neural network.  While a conventional neural network may contain a few hidden layers, a deep network may have tens or hundreds of layers.  In deep learning, a computer model learns to perform classification tasks directly from text, sound or image data. In the case of images, deep learning requires substantial computing power and involves feeding large amounts of labeled data through a multi-layer neural network architecture to create a model that can classify the objects contained within the image.
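As a toy illustration of what “hidden layers” means, here is a minimal sketch of a forward pass through a small network with two hidden layers, using only the Python standard library. The weights are random rather than learned, and the input vector is made up; a real deep network would have trained weights and far more layers and units, but the flow of data through successive layers is the same idea.

```python
# Toy forward pass through a network with two hidden layers. The weights are
# random rather than learned; the point is only to show data flowing through
# successive layers, which is what "deep" refers to.
import random

def layer(inputs, n_outputs):
    """One fully connected layer with a ReLU activation and random weights."""
    outputs = []
    for _ in range(n_outputs):
        weights = [random.uniform(-1, 1) for _ in inputs]
        total = sum(w * x for w, x in zip(weights, inputs))
        outputs.append(max(0.0, total))          # ReLU
    return outputs

random.seed(0)
x = [0.5, -1.2, 3.0, 0.7]        # a made-up input vector (e.g. pixel features)
hidden1 = layer(x, 8)            # first hidden layer
hidden2 = layer(hidden1, 8)      # second hidden layer
scores  = layer(hidden2, 3)      # three output "classes"
print(scores.index(max(scores))) # index of the highest-scoring class
```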

CONCLUSIONS:

Brave new world we are living in.  Someone said that AI is definitely the future of computing power and, eventually, of robotic systems that could possibly replace humans.  I just hope the programmers adhere to Dr. Isaac Asimov’s three laws:

 

  • The First Law of Robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

 

  • The Second Law of Robotics: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

 

  • The Third Law of Robotics: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

With those words, science-fiction author Isaac Asimov changed how the world saw robots. Where they had largely been Frankenstein-esque, metal monsters in the pulp magazines, Asimov saw the potential for robotics as more domestic: as a labor-saving device; the ultimate worker. In doing so, he continued a literary tradition of speculative tales: What happens when humanity remakes itself in its image?

As always, I welcome your comments.


The convergence of “smart” microphones, new digital signal processing technology, voice recognition and natural language processing has opened the door for voice interfaces.  Let’s first define a “smart device”.

A smart device is an electronic device, generally connected to other devices or networks via different wireless protocols such as Bluetooth, NFC, Wi-Fi, 3G, etc., that can operate to some extent interactively and autonomously.

I am told by my youngest granddaughter that all the cool kids now have in-home, voice-activated devices like Amazon Echo or Google Home. These devices can play your favorite music, answer questions, read books, control home automation, and all those other things people thought the future was about in the 1960s. For the most part, the speech recognition of the devices works well; although you may find yourself with an extra dollhouse or two occasionally. (I do wonder if they speak “southern” but that’s another question for another day.)

A smart speaker is, essentially, a speaker with added internet connectivity and “smart assistant” voice-control functionality. The smart assistant is typically Amazon Alexa or Google Assistant, both of which are independently managed by their parent companies and have been opened up for other third-parties to implement into their hardware. The idea is that the more people who bring these into their homes, the more Amazon and Google have a “space” in every abode where they’re always accessible.

Let me first state that my family does not, as yet, have a smart device but we may be inching in that direction.  If we look at numbers, we see the following projections:

  • 175 million smart devices will be installed in a majority of U.S. households by 2022 with at least seventy (70) million households having at least one smart speaker in their home. (Digital Voice Assistants Platforms, Revenues & Opportunities, 2017-2022. Juniper Research, November 2017.)
  • Amazon sold over eleven (11) million Alexa voice-controlled Amazon Echo devices in 2016. That number was expected to double for 2017. (Smart Home Devices Forecast, 2017 to 2022 (US), Forrester Research, October 2017.)
  • Amazon Echo accounted for 70.6% of all voice-enabled speaker users in the United States in 2017, followed by Google Home at 23.8%. (eMarketer, April 2017)
  • In 2018, 38.5 million millennials are expected to use voice-enabled digital assistants—such as Amazon Alexa, Apple Siri, Google Now and Microsoft Cortana—at least once per month. (eMarketer, April 2017.)
  • The growing smart speaker market is expected to hit 56.3 million shipments, globally in 2018. (Canalys Research, January 2018)
  • The United States will remain the most important market for smart speakers in 2018, with shipments expected to reach 38.4 million units. China is a distant second at 4.4 million units. (Canalys Research, April 2018.)

With that being the case, let’s now look at which smart speakers are now commercialized and available either through online purchase or in retail markets:

  • Amazon Echo Spot–$114.99
  • Sonos One–$199.00
  • Google Home–$129.00
  • Amazon Echo Show–$179.99
  • Google Home Max–$399.00
  • Google Home Mini–$49.00
  • Fabriq Choros–$69.99
  • Amazon Echo (Second Generation)–$84.99
  • Harman Kardon Invoke–$199.00
  • Amazon Echo Plus–$149.00

CONCLUSIONS:  If you are interested in purchasing one from the list above, I would definitely recommend you do your homework.  Investigate the services provided by a smart speaker to make sure you are getting what you desire.  Be aware that there will certainly be additional items entering the marketplace as time goes by.  GOOD LUCK.
