WHY DID I NOT THINK OF THAT?

February 17, 2018


Portions of this post were taken from Design News Daily.

How many times have you said that? It's called the Eureka moment – a sudden flash of intuition that leads us down a path to a wonderful, new, productive solution. Most of us have had such moments, but a select few have parlayed them into something grand, something that changes the world. That was the case for Arthur Fry, inventor of the Post-it Note, and Richard James, inventor of the Slinky toy. They took simple ideas – a sticky note, a coil spring – and touched hundreds of millions of lives with them. Given below are nine Eureka moments that produced workable, usable devices that have revolutionized everyday tasks and made life easier for all of us. Let's take a look.

If you could see my computer and its associated screen, you would see a "ton" of Post-it Notes, most with scribbles, PINs, telephone numbers, and so on. We all use them.

Legend has it that Post-It Note inventor Arthur Fry conjured up the idea for his product when the little scraps of paper in his Sunday hymnal kept falling out. To solve the problem, he used an adhesive developed by a 3M colleague, Dr. Spencer Silver. Silver’s reusable, pressure-sensitive adhesive was failing to stir interest inside 3M until Fry came along and made the mental connection to his hymnal.

In 1974, the two partnered to put the adhesive on small sheets of yellow paper and … a mythic product was born. They passed their sticky notes to fellow employees, who loved them. “I thought, what we have here isn’t just a bookmark,” Fry said. “It’s a whole new way to communicate.” They later put their product on the market, receiving an even stronger reaction. Lee Iacocca and other Fortune 500 CEOs reportedly wrote to praise it. Post-It Notes, as they soon became known, eventually were sold in more than 100 countries. At one point, it was estimated that the average professional received 11 messages on Post-It Notes per day. Fry received 3M’s Golden Step Award, was named a corporate researcher, became a member of the company’s Carlton Society and was appointed to its Circle of Technical Excellence.

(Image source: By Tinkeringbell – Own work, Public Domain/Wikipedia)

Ansa baby bottles are virtually impossible to find today, but they were all the rage in the mid-1980s.

The bottles, which have a hole in the middle to make them easy for babies to hold, were the brainchild of William and Nickie Campbell of Muskogee, OK, who designed them for their infant son. After filing for patents in 1984, they took out a loan, launched the Ansa Bottle Co., manufactured the plastic bottles, and enjoyed immediate success. They received editorial coverage in American Baby and Mothers Today, while inking deals with Sears, K-Mart, Walgreens, and Target, according to The Oklahoman. Their bottles even went on display in the Museum of Modern Art in New York City.

(Image source: US Patent Office)

Rolling luggage is an accepted fact of air travel today, but it wasn't always so, and I'm not sure what we would do without it now. The concept was slow to take hold and achieved acceptance in two distinct steps. The first step occurred in 1970, when inventor Bernard Sadow observed an airport worker rolling a heavy machine on a wheeled skid. Sadow, who was at the time dragging his own luggage through customs after a trip to Aruba, had the proverbial "eureka moment," according to The New York Times. Sadow's solution to the problem was a suitcase with four wheels and a pull strap. To his surprise, however, the idea was slow to take off. That's where the second step came in. In 1987, a Northwest Airlines pilot and workshop tinkerer named Robert Plath took it to the next level, developing an upright, two-wheeled suitcase with a long stiff handle. Plath's so-called "Rollaboard" was the missing ingredient to the success of rolling luggage.

Today, his 30-year-old concept dominates air travel and is built by countless manufacturers — any patents having long since expired. The initial slow acceptance remains a mystery to many, however. Sadow, looking back at it years later, attributed the consumer reluctance to men who refused to take the easy way out. “It was a very macho thing,” he said.

(Image source photo: Design News)

OK, who on the planet has NOT owned and/or played with a Slinky? In 1943, naval mechanical engineer Richard James was developing springs for instruments when he accidentally knocked one to the floor, permanently altering the future of toy manufacturing. The spring subsequently stepped "in a series of arcs to a stack of books, to a tabletop, and to the floor, where it recoiled itself and stood upright," writes Wikipedia. James reportedly realized that with the right steel properties, he could make a spring walk, which is exactly what he did. Using a $500 loan, he made 400 "Slinky" coil springs at a local machine shop, demonstrated them at a Gimbels department store in Philadelphia, and sold his entire inventory in ninety (90) minutes. From there, Slinky became a legend, reaching sales of 300 million units in 60 years. Today, engineers attribute Slinky's sales to the taming of the product's governing physical principles — Hooke's Law and the force of gravity. But advertising executives argue that its monumental sales were a product of clever TV commercials. The song "Everyone knows it's Slinky" (recognized by virtually everyone who lived through the 1960s and 1970s) is considered the longest-running jingle in advertising history.

(Image source: Wikipedia)

The Band-Aid (or “Band-Aid brand,” as Johnson & Johnson calls it) is in essence a simple concept – an adhesive strip with a small bandage attached. Still, its success is undeniable. The idea originated with Johnson & Johnson employees Thomas Anderson and Earle Dickson in 1920. Dickson made the prototype for his wife, who frequently burned herself while cooking, enabling her to dress her wounds without help. Dickson introduced the concept to his bosses, who quickly launched it into production.

Today, it is copied by many generic products, but the Band-Aid brand lives on. Band-Aid is accepted around the world, with more than 100 billion having been sold.

(Image source photo: Design News)

Today, it's hard to imagine that an upside-down bottle was once considered an innovation. But it was. Ketchup maker H.J. Heinz launched a revolution in packaging after deciding that its customers were tired of banging on the side of glass bottles, waiting for their product to ooze out. The unlikely hero of this revolution was Paul Brown, a molding shop owner in Midland, MI, who designed a special valve for bottles of ketchup and other viscous liquids, according to an article in the McClatchy newspapers. Brown's valve enabled ketchup bottles to be stored upside down without leaking. It also allowed liquids to be easily dispensed when the bottle was squeezed, and sucked back inside when the force was released.

Brown was said to have built 111 failed injection-molded silicone prototypes before finding the working design. To his lasting delight, the successful concept found use not only with Heinz, but with makers of baby food, shampoo, and cosmetics, as well as with NASA for space flights. In retrospect, he said the final design was the result of an unusual intellectual approach. "I would pretend I was silicone, and if I was injected into a mold, what I would do," he told McClatchy. The technique apparently worked: Brown eventually sold his business for about $13 million in 1995.

Players of pinball may take the game's dual flippers for granted, but they were an innovation when Steve Kordek devised them in 1948. Working for the Genco Co. in Chicago (a company he became acquainted with after stepping into its lobby to escape a heavy rain), Kordek became the father of the two-flipper pinball game. His lasting contribution was simple, yet innovative — the use of direct current (DC) to actuate the flippers, rather than alternating current (AC). DC, he found, made the flippers more controllable, yet less costly to manufacture. Over six decades, Kordek reached legendary status in the industry, producing games for Genco, Bally Manufacturing, and Williams Manufacturing, always employing his dual-flipper design. He worked until 2003, designing the Vacation America game (based on the National Lampoon's Vacation movies) at age 92. But it was his DC-based, dual-flipper design that shaped his legacy. "It was really revolutionary, and pretty much everyone followed suit," David Silverman, executive director of the National Pinball Hall of Fame, told The New York Times in 2012. "And it's stayed the standard for 60 years."

(Image source: By ElHeineken, own work/Wikipedia)

It’s difficult to know whether any individual has ever been credited with the design of the ergonomic bent snow shovel, but the idea is nevertheless making money … for somebody. Bent-handle snow shovels today are sold at virtually every hardware store and home center in the northern United States, and they’re a critical winter tool for millions of homeowners. The idea is that by putting a bend in the shaft, the horizontal moment arm between the shovel handle and the tip is shorter, putting less strain on the user’s lower back. Although there’s some argument on that point, it was recently proven by engineering graduate students at the University of Calgary, according to a story on CTVNews.com.

Studying the bent-handle shovels in the school’s biomechanics laboratory, engineers concluded that they require less bending on the part of users, and therefore reduce mechanical loads on the lower back by 16 percent. “I think that’s a pretty substantial reduction,” researcher Ryan Lewinson told CTVNews. “Over the course of shoveling an entire driveway, that probably would add up to something pretty meaningful.”
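To make the moment-arm reasoning concrete, here is a minimal back-of-the-envelope sketch in Python. All numbers (snow load, arm lengths) are assumptions chosen only to illustrate how a shorter horizontal moment arm translates directly into a smaller moment about the lower back; they are not taken from the Calgary study.

```python
# Back-of-the-envelope moment comparison for straight vs. bent shovel shafts.
# All inputs are illustrative assumptions, not measured values.
G = 9.81             # gravitational acceleration, m/s^2
SNOW_MASS = 5.0      # kg of snow on the blade (assumed)

STRAIGHT_ARM = 1.00  # m, horizontal distance from lower back to blade (assumed)
BENT_ARM = 0.84      # m, shorter arm produced by the bend in the shaft (assumed)

def back_moment(mass_kg: float, arm_m: float) -> float:
    """Static moment (N*m) the lower back must resist to hold the load."""
    return mass_kg * G * arm_m

straight = back_moment(SNOW_MASS, STRAIGHT_ARM)
bent = back_moment(SNOW_MASS, BENT_ARM)

print(f"straight shaft: {straight:.1f} N*m")
print(f"bent shaft:     {bent:.1f} N*m")
print(f"reduction:      {1 - bent / straight:.0%}")  # ~16% with these assumed arms
```

With these assumed geometries, the reduction in the static moment is simply proportional to the reduction in the horizontal arm, which is the intuition behind the bent handle.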

(Image source photo: Design News)

Erno Rubik, a Hungarian sculptor and professor of architecture, invented his famous game cube while trying to solve a structural problem. Although his goal had been to put moving parts together in a mechanism that wouldn’t fall apart, it gradually dawned on Rubik that he had created a puzzle of sorts.

His puzzle consisted of 26 miniature cubes, each having an inward extension that interlocked with the other cubes, allowing them to move independently and in different directions. Initially called the Magic Cube, it was released in Budapest toy shops in 1977. It was later licensed to the Ideal Toy Co. in 1980, which renamed it Rubik's Cube to make it more distinctive. Its broader release started a craze in the early 1980s. Rubik's Cube won Toy of the Year awards in Germany, France, the UK, US, Finland, Sweden, and Italy. Between 1980 and 1983, 200 million cubes were sold worldwide. Clubs of "speedcubers" popped up around the world, it appeared on the cover of Scientific American, books were written about it, and The Washington Post called it "a puzzle that's moving like fast food right now." Today, Rubik's Cube continues to sell and enthusiasts continue to test their skill against it. Total sales are said to have passed 300 million. In 2017, a speedcuber named SeungBeom Cho set a world record for solving the puzzle in 4.59 seconds.

(Image source photo: Design News)

CONCLUSIONS:  We all have ideas.  The difference is persistence in developing and marketing those ideas.


ABIBLIOPHOBIA

January 10, 2018


Abibliophobia is the fear of running out of reading material. Basically, you take the Greek root for whatever you are afraid of, join it with an -o- if needed, and couple the result with -phobia. If you have any experience with libraries, the Internet, the back of soup cans, etc., you know there is more than enough material out there to be read and digested. It amazes me that this word has just "popped up" over the last few years.

Now, the World Wide Web is a cavernous source of reading material.  Indeed, it’s a bigger readers’ repository than the world has ever known, so it seems rather ironic that the term abibliophobia appears to have been coined on the Web during the last three or four years. It would seem impossible for anyone with regular access to the Internet to be an abibliophobe (someone suffering from a fear of running out of reading material) or to become abibliophobic when more and more reading matter is available by the hour.  Let’s look at just what is available to convince the abibliophobic individual that there is no fear of running out of reading material.

  • There are more than 440 million blogs in the world. By October 2011, there were an estimated 173 million blogs; Nielsen estimates that by the end of 2011, that number had climbed to 181 million. That was four years after Tumblr launched, and in May 2011, there were just 17.5 million Tumblr blogs. Today, there are over 360 million blogs on Tumblr alone, and there are millions more on other platforms. While there are some reliable statistics on the number of blogs in 2011, things have changed dramatically with the rise of services like Tumblr, WordPress, Squarespace, Medium and more. Exactly how many blogs there are in the world is difficult to know, but what's clear is that blogs online number in the hundreds of millions. The total number of blogs on Tumblr, Squarespace, and WordPress alone exceeds 440 million. In actuality, the total number of blogs in the world likely greatly exceeds this number. We do know that content is being consumed online more widely, more quickly, and more voraciously than ever before.
  • According to WordPress, 76.3 million posts are published on WordPress each month, and more than 409 million people view 22.3 billion blog pages each month. It’s interesting to see that there are about 1 billion websites and blogs in the world today. But that figure is not as helpful as looking at the other statistics involving blogging. For example, did you know that more than 409 million people on WordPress view more than 23.6 billion pages each month? Did you know that each month members produce 69.5 million new posts?
  • Websites with a blog have over 434% more indexed pages.
  • 76% of online marketers say they plan to add more content over the 2018 year.
  • There are an estimated 119,487 libraries of all kinds in the United States today.
  • Estimates of the total number of libraries in the world vary widely; Russia, India and China are thought to have about 50,000 each.

Thanks to Johannes Gensfleisch zur Laden zum Gutenberg, the written word flourished after he invented the printing press. Gutenberg, in 1439, was the first European to use movable type. Among his many contributions to printing are: the invention of a process for mass-producing movable type; the use of oil-based ink for printing books; adjustable molds; mechanical movable type; and the use of a wooden printing press similar to the agricultural screw presses of the period. His truly epochal invention was the combination of these elements into a practical system that allowed the mass production of printed books and was economically viable for printers and readers alike. Gutenberg's method for making type is traditionally considered to have included a type metal alloy and a hand mold for casting type. The alloy was a mixture of lead, tin, and antimony melted at a relatively low temperature for faster and more economical casting. His invention was a game-changing event for all prospective readers the world over. No longer would there be a fear of, or an absence of, material to read.

CONCLUSIONS:

I think the basic conclusion here is not the fear of having no reading material but the fear of reading.

  • If I read, I might miss my favorite TV programs.
  • If I read, I might miss that important phone call.
  • Why read when I can TWEET?
  • Why read when I can stream Netflix or HULU?
  • I’m such a slow reader. It just takes too much time.
  • I cannot find any subject I’m really that interested in.
  • I really have no quiet place to read.
  • ___________________ Fill in the blanks.

Reading does take a commitment, so why not set goals and commit?

YOU KNOW YOU’RE OLD WHEN

December 16, 2017


Your grandchildren start graduating from college or a university system.  One of our oldest granddaughters graduated this past Wednesday from Georgia State University in Atlanta.  Magna Cum Laude.  The commencement program is shown below.

QUICK FACTS ABOUT GEORGIA STATE UNIVERSITY:

There are several very interesting facts about Georgia State as follows:

  • 7 campuses
  • 10 colleges and schools
  • 51,000+ students from every county in Georgia, every state in the U.S. and 170 countries. (This one blew my mind. Fifty-one thousand students?????? I’m sure that includes part-time, online, and night students but fifty-one thousand?)
  • 3,000+ international students. All you have to do is look at the graduating class and try to pronounce the names to see there is a significant international presence.
  • In this graduating class, forty-four percent (44%) of the graduates were culturally diverse or came from non-native-born American backgrounds.
  • 250+ degree programs in 100 fields of study at the Atlanta Campus — the widest variety in the state
  • 30+ associate degree pathways at five campuses and through the largest online program in the state
  • $2.5 billion annual economic impact on metro Atlanta*
  • 84 research centers
  • 72 study-abroad programs in 45 countries
  • 400+ student organizations, including 31 fraternities and sororities
  • 9,500+ degrees conferred each year
  • 240,276 all-time degrees conferred
  • 88% of faculty with the highest degree in their field
  • A campus without boundaries in downtown Atlanta, the leading economic center of the Southeast with the world’s busiest airport and third most Fortune 500 companies of any U.S. city, where internships, jobs and connections to the world’s business, government, healthcare, nonprofit and cultural communities are just blocks away.
  • One more fact that I would like to throw in. Traffic in Atlanta is the worst—the very worst on the planet.  Orange barrels everywhere with associated road work.  If you plan on taking a look at the campus, plan your trip then double or triple the time for transit when in “hot—Lanta”.

The graduation ceremony was held in Georgia Tech's McCamish Pavilion. This is a beautiful arena. Ground was broken for the construction of Tech's new on-campus arena on May 5, 2011, and eighteen (18) months later, the Yellow Jackets had a state-of-the-art building with 8,600 seats and a luxurious club area, which provides a cozy view of the court. The lower level seating bowl has 6,935 seats, and the balcony level seats 1,665. There were approximately fifteen hundred graduates who walked that day, so the pavilion was just about full of family, friends, faculty, and assorted people getting in from the thirty-two (32) degree cold weather. It was a beautiful day though.

(NOTE:  I want to apologize for the quality of the digital pictures below.  The lighting was not very good and our vantage point gave us a great overall view of the ceremony but at a distance.)

You can see from the picture above the size of the pavilion.  McCamish is the site of Yellow Jacket basketball.  Note the vacant seats behind the overhead screen.  Other than these vacant seats, the auditorium was absolutely full.

In the picture above, all prospective graduates were standing for the opening ceremonies. Hopefully, you can see the bagpipers coming down the aisle to open the event.

Georgia State used the overhead screen in a marvelous way by posting the name and college of each graduate. It was an excellent use of the overhead, and those of us in the upper seats were able to get a great look at our graduate.

When the ceremony was over and all of the graduates had walked, balloons were released from overhead netting. That's when the mortar boards started flying.

Our granddaughter and her mother and father are shown in this picture. Next year, our oldest granddaughter, this young lady's sister, will graduate from Georgia State. Both granddaughters have two degrees, indicating hard, intense, focused work over four and five years. We are certainly proud of their considerable efforts. Remarkable work ethic for both. Their futures look very bright.

 

HALF SMART

December 12, 2017


The other day I was visiting a client and discussing a project involving the application of a robotic system to an existing work cell.  The process is somewhat complex and we all questioned which employee would manage the operation of the cell including the system.  The system is a SCARA type.  SCARA is an acronym for Selective Compliance Assembly Robot Arm or Selective Compliance Articulated Robot Arm.

In 1981, Sankyo Seiki, Pentel and NEC presented a completely new concept for assembly robots. The robot was developed under the guidance of Hiroshi Makino, a professor at the University of Yamanashi, and was called the Selective Compliance Assembly Robot Arm, or SCARA.

SCARAs are generally faster and cleaner than comparable Cartesian (X, Y, Z) robotic systems. Their single pedestal mount requires a small footprint and provides an easy, unhindered form of mounting. On the other hand, SCARAs can be more expensive than comparable Cartesian systems, and the controlling software requires inverse kinematics for linear interpolated moves. This software typically comes with the SCARA, however, and is usually transparent to the end user. The SCARA system used in this work cell had the capability of one hundred (100) programs with one hundred (100) data points per program. It was programmed by means of a "teach pendant" and a "jog" switch controlling the placement of the robotic arm over the material.
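The controller software mentioned above is proprietary, but the underlying idea is easy to sketch. Below is a minimal, generic inverse-kinematics example in Python for the two rotary joints of a planar SCARA-style arm; the link lengths, target points and the scara_ik helper are illustrative assumptions, not the actual work-cell program.

```python
import math

def scara_ik(x, y, l1=0.35, l2=0.30, elbow="right"):
    """Return (theta1, theta2) in radians for a planar two-link arm reaching (x, y).

    l1 and l2 are assumed link lengths in metres. Raises ValueError if the
    target lies outside the arm's reach.
    """
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # law of cosines
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    s2 = math.sqrt(1.0 - c2 * c2)
    if elbow == "left":
        s2 = -s2
    theta2 = math.atan2(s2, c2)                                    # elbow angle
    theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)  # shoulder angle
    return theta1, theta2

# A "linear interpolated move" is just the IK solved at points along a straight line.
start, end, steps = (0.45, 0.10), (0.20, 0.40), 50
path = [
    scara_ik(start[0] + (end[0] - start[0]) * i / steps,
             start[1] + (end[1] - start[1]) * i / steps)
    for i in range(steps + 1)
]
print(f"{len(path)} joint-angle pairs computed for the move")
```

In practice the operator never sees this math; the teach pendant records the end points and the controller fills in the joint angles along the path.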

Several names were mentioned as to who might ultimately, after training, be capable of taking on this task. When one individual was named, the retort was: "Not James, he is only half smart." That got me to thinking about "smarts." How smart is smart? At what point do we say smart is smart enough?

IQ CHARTS—WHO’S SMART

The concept of IQ, or intelligence quotient, was developed by either the German psychologist and philosopher Wilhelm Stern in 1912 or by Lewis Terman in 1916, depending on which of several sources you consult. Intelligence testing was being done on a large scale before either of these dates. In 1904, psychologist Alfred Binet was commissioned by the French government to create a testing system to differentiate intellectually normal children from those who were inferior.

From Binet’s work the IQ scale called the “Binet Scale,” (and later the “Simon-Binet Scale”) was developed. Sometime later, “intelligence quotient,” or “IQ,” entered our vocabulary.  Lewis M. Terman revised the Simon-Binet IQ Scale, and in 1916 published the Stanford Revision of the Binet-Simon Scale of Intelligence (also known as the Stanford-Binet).

Intelligence tests are one of the most popular types of psychological tests in use today. On the majority of modern IQ tests, the average (or mean) score is set at 100 with a standard deviation of 15 so that scores conform to a normal distribution curve.  This means that 68 percent of scores fall within one standard deviation of the mean (that is, between 85 and 115), and 95 percent of scores fall within two standard deviations (between 70 and 130).  This may be shown from the following bell-shaped curve:

Why is the average score set to 100? Psychometricians, the specialists who study psychological measurement, utilize a process known as standardization in order to make it possible to compare and interpret the meaning of IQ scores. This process is accomplished by administering the test to a representative sample and using these scores to establish standards, usually referred to as norms, by which all individual scores can be compared. Since the average score is 100, experts can quickly assess individual test scores against the average to determine where these scores fall on the normal distribution.

The following scale resulted for classifying IQ scores:

IQ Scale

Over 140 – Genius or almost genius
120 – 140 – Very superior intelligence
110 – 119 – Superior intelligence
90 – 109 – Average or normal intelligence
80 – 89 – Dullness
70 – 79 – Borderline deficiency in intelligence
Under 70 – Feeble-mindedness

Normal Distribution of IQ Scores

From the curve above, we see the following:

50% of IQ scores fall between 90 and 110
68% of IQ scores fall between 85 and 115
95% of IQ scores fall between 70 and 130
99.5% of IQ scores fall between 60 and 140
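Those percentages fall straight out of the normal distribution. As a quick check, here is a minimal sketch (assuming SciPy is installed) that computes the share of scores inside each band for a mean of 100 and a standard deviation of 15:

```python
from scipy.stats import norm

MEAN, SD = 100, 15
iq = norm(loc=MEAN, scale=SD)

for low, high in [(90, 110), (85, 115), (70, 130), (60, 140)]:
    share = iq.cdf(high) - iq.cdf(low)   # probability mass between the two scores
    print(f"{low}-{high}: {share:.1%} of IQ scores")
```

The computed values closely match the rounded figures listed above.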

Low IQ & Mental Retardation

An IQ under 70 is considered "mental retardation" or limited mental ability. Roughly 2.5 percent of the population falls below 70 on IQ tests, consistent with the distribution above. The severity of the mental retardation is commonly broken into 4 levels:

50-70 – Mild mental retardation (85%)
35-50 – Moderate mental retardation (10%)
20-35 – Severe mental retardation (4%)
IQ < 20 – Profound mental retardation (1%)

High IQ & Genius IQ

Genius or near-genius IQ is considered to start around 140 to 145. Less than 1/4 of 1 percent fall into this category. Here are some common designations on the IQ scale:

115-124 – Above average
125-134 – Gifted
135-144 – Very gifted
145-164 – Genius
165-179 – High genius
180-200 – Highest genius

We are told “Big Al” had an IQ over 160 which would definitely qualify him as being one the most intelligent people on the planet.

As you can see, the percentage of individuals considered to be genius is quite small, well under one percent of the population. OK, who are these people?

  1. Stephen Hawking

Dr. Hawking is a man of Science, a theoretical physicist and cosmologist.  Hawking has never failed to astonish everyone with his IQ level of 160. He was born in Oxford, England and has proven himself to be a remarkably intelligent person.   Hawking is an Honorary Fellow of the Royal Society of Arts, a lifetime member of the Pontifical Academy of Sciences, and a recipient of the Presidential Medal of Freedom, the highest civilian award in the United States.  Hawking was the Lucasian Professor of Mathematics at the University of Cambridge between 1979 and 2009. Hawking has a motor neuron disease related to amyotrophic lateral sclerosis (ALS), a condition that has progressed over the years. He is almost entirely paralyzed and communicates through a speech generating device. Even with this condition, he maintains a very active schedule demonstrating significant mental ability.

  2. Andrew Wiles

Sir Andrew John Wiles is a remarkably intelligent individual. Sir Andrew is a British mathematician, a member of the Royal Society, and a research professor at Oxford University. His specialty is number theory. He proved Fermat's Last Theorem, and for this effort he was awarded a special silver plaque. It is reported that he has an IQ of 170.

  3. Paul Gardner Allen

Paul Gardner Allen is an American business magnate, investor and philanthropist, best known as the co-founder of The Microsoft Corporation. As of March 2013, he was estimated to be the 53rd-richest person in the world, with an estimated wealth of $15 billion. His IQ is reported to be 170. He is considered to be the most influential person in his field and known to be a good decision maker.

  4. Judit Polgar

Born in Hungary in 1976, Judit Polgár is a chess grandmaster. She is by far the strongest female chess player in history. In 1991, Polgár achieved the title of Grandmaster at the age of 15 years and 4 months, the youngest person to do so until then. Polgar is not only a chess master but a certified brainiac with a recorded IQ of 170. She lived a childhood filled with extensive chess training given by her father. She defeated nine former and current world champions including Garry Kasparov, Boris Spassky, and Anatoly Karpov.  Quite amazing.

  5. Garry Kasparov

Garry Kasparov has totally amazed the world with his outstanding IQ of more than 190. He is a Russian chess Grandmaster, former World Chess Champion, writer, and political activist, considered by many to be the greatest chess player of all time. From 1986 until his retirement in 2005, Kasparov was ranked world No. 1 for 225 months.  Kasparov became the youngest ever undisputed World Chess Champion in 1985 at age 22 by defeating then-champion Anatoly Karpov.   He held the official FIDE world title until 1993, when a dispute with FIDE led him to set up a rival organization, the Professional Chess Association. In 1997 he became the first world champion to lose a match to a computer under standard time controls, when he lost to the IBM supercomputer Deep Blue in a highly publicized match. He continued to hold the “Classical” World Chess Championship until his defeat by Vladimir Kramnik in 2000.

  6. Rick Rosner

Gifted with an amazing IQ of 192, Richard G. "Rick" Rosner (born May 2, 1960) is an American television writer and media figure known for his high intelligence test scores and his unusual career. There are reports that he has achieved some of the highest scores ever recorded on IQ tests designed to measure exceptional intelligence. He has become known for taking part in activities not usually associated with geniuses.

  7. Kim Ung-Yong

With a verified IQ of 210, Korean civil engineer Kim Ung-Yong is considered to be one of the smartest people on the planet. He was born March 7, 1963 and was definitely a child prodigy. He started speaking at the age of 6 months and was able to read Japanese, Korean, German, English and many other languages by his third birthday. When he was four years old, his father said he had memorized about 2,000 words in both English and German. He was writing poetry in Korean and Chinese and wrote two very short books of essays and poems (less than 20 pages). Kim was listed in the Guinness Book of World Records under "Highest IQ"; the book gave the boy's score as about 210. (Guinness retired the "Highest IQ" category in 1990 after concluding IQ tests were too unreliable to designate a single record holder.)

  8. Christopher Hirata

Christopher Hirata's IQ is approximately 225, which is phenomenal. He was a genius from childhood. At the age of 16, he was working with NASA on its Mars mission. At the age of 22, he obtained a PhD from Princeton University. Hirata is teaching astrophysics at the California Institute of Technology.

  9. Marilyn vos Savant

Marilyn Vos Savant is said to have an IQ of 228. She is an American magazine columnist, author, lecturer, and playwright who rose to fame as a result of the listing in the Guinness Book of World Records under “Highest IQ.” Since 1986 she has written “Ask Marilyn,” a Parade magazine Sunday column where she solves puzzles and answers questions on various subjects.

  10. Terence Tao

Terence Tao is an Australian mathematician working in harmonic analysis, partial differential equations, additive combinatorics, ergodic Ramsey theory, random matrix theory, and analytic number theory. He currently holds the James and Carol Collins chair in mathematics at the University of California, Los Angeles, where he became the youngest person ever promoted to full professor, at the age of 24. He was a co-recipient of the 2006 Fields Medal and the 2014 Breakthrough Prize in Mathematics.

Tao was a child prodigy, one of the subjects in the longitudinal research on exceptionally gifted children by education researcher Miraca Gross. His father told the press that at the age of two, during a family gathering, Tao attempted to teach a 5-year-old child arithmetic and English. According to Smithsonian Online Magazine, Tao could carry out basic arithmetic by the age of two. When asked by his father how he knew numbers and letters, he said he learned them from Sesame Street.

OK, now before you go running to jump from the nearest bridge, consider the statement below:

Persistence—President Calvin Coolidge said it better than anyone I have ever heard. “Nothing in the world can take the place of persistence. Talent will not; nothing is more common than unsuccessful men with talent.   Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent.  The slogan “Press on” has solved and always will solve the problems of the human race.” 

I personally think Calvin really knew what he was talking about. Most of us get it done by persistence!! 'Nuff said.

DEEP LEARNING

December 10, 2017


If you read technical literature with some hope of keeping up with the latest trends in technology, you find words and phrases such as AI (artificial intelligence) and DL (deep learning). They seem to be used interchangeably, but the facts deny that premise. Let's look.

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine-learning methods based on learning data representations, as opposed to task-specific algorithms. (NOTE: The key words here are MACHINE LEARNING.) The learning can be supervised, semi-supervised or unsupervised. The prospect of developing learning mechanisms and software to control machine mechanisms is frightening to many but definitely very interesting to most. Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. Machine learning, in this sense, is a method by which the behavior of biological neural networks is approximated in physical hardware: i.e., computers and computer programming. Never in the history of our species has this degree of success been possible; only now, with the advent of very powerful computers and programs capable of handling "big data," has it become achievable.

With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.  The basic idea—that software can simulate the neocortex's large array of neurons in an artificial "neural network"—is decades old, and it has led to as many disappointments as breakthroughs.  Because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before. Deep learning is a class of machine learning algorithms that accomplish the following:

  • Use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input.
  • Learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners.
  • Learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
  • Use some form of gradient descent for training via backpropagation.

Layers that have been used in deep learning include hidden layers of an artificial neural network and sets of propositional formulas.  They may also include latent variables organized layer-wise in deep generative models such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.
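As a concrete, toy-sized illustration of the ideas in that list (stacked nonlinear layers, supervised learning, and gradient descent via backpropagation), here is a minimal NumPy sketch of a two-layer network learning the XOR function. It is only a sketch; real deep-learning systems use many more layers, far more data, and specialized frameworks.

```python
import numpy as np

# Toy supervised problem: learn XOR with one hidden (nonlinear) layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for _ in range(5000):
    # Forward pass: each layer uses the previous layer's output as its input.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass (backpropagation): push the error gradient back through the layers.
    grad_output = (output - y) * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)

    # Gradient-descent updates.
    W2 -= learning_rate * hidden.T @ grad_output
    b2 -= learning_rate * grad_output.sum(axis=0)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0)

print(np.round(output, 2))  # should approach [[0], [1], [1], [0]]
```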

ARTIFICIAL NEURAL NETWORKS:

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.

An ANN is based on a collection of connected units called artificial neurons (analogous to the neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal sent downstream.

Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times.

The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.

Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.

As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, playing "Go").

APPLICATIONS:

Just what applications could take advantage of “deep learning?”

IMAGE RECOGNITION:

A common evaluation set for image classification is the MNIST database. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with the TIMIT speech corpus, its relatively small size allows multiple configurations to be tested. A comprehensive list of results on this set is available.

Deep learning-based image recognition has become “superhuman”, producing more accurate results than human contestants. This first occurred in 2011.
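For readers who want to see what an MNIST experiment looks like in practice, here is a minimal sketch assuming TensorFlow/Keras is installed. The tiny architecture and hyperparameters are illustrative only; they are not those of any record-setting "superhuman" system.

```python
import tensorflow as tf

# Load the 60,000 training / 10,000 test handwritten-digit images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5, validation_split=0.1)
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f"test accuracy: {test_acc:.3f}")
```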

Deep learning-trained vehicles now interpret 360° camera views.   Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes.

The iPhone X, I am told, uses facial recognition as one method of ensuring security and a potential hacker's ultimate failure to unlock the phone.

VISUAL ART PROCESSING:

Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of a) identifying the style period of a given painting, b) “capturing” the style of a given painting and applying it in a visually pleasing manner to an arbitrary photograph, and c) generating striking imagery based on random visual input fields.

NATURAL LANGUAGE PROCESSING:

Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling. Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep-learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as a probabilistic context-free grammar (PCFG) implemented by an RNN. Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing. Deep neural architectures provide the best results for constituency parsing, sentiment analysis, information retrieval, spoken language understanding, machine translation, contextual entity linking, writing style recognition and others.
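To make the word-embedding idea concrete, here is a minimal sketch assuming the gensim library (version 4.x) is available. The three-sentence corpus is obviously a toy; with real text, nearby points in the resulting vector space correspond to words used in similar contexts.

```python
from gensim.models import Word2Vec

# Toy corpus: each "sentence" is already tokenized into a list of words.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "make", "good", "pets"],
]

# Each word becomes a point (vector) positioned relative to the other words.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=200)

print(model.wv["cat"][:5])           # first few coordinates of the "cat" vector
print(model.wv.most_similar("cat"))  # words closest to "cat" in the vector space
```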

Google Translate (GT) uses a large end-to-end long short-term memory network. Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system "learns from millions of examples." It translates whole sentences at a time, rather than pieces. Google Translate supports over one hundred languages. The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations." GT can translate directly from one language to another, rather than using English as an intermediate.

DRUG DISCOVERY AND TOXICOLOGY:

A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects.  Research has explored use of deep learning to predict biomolecular target, off-target and toxic effects of environmental chemicals in nutrients, household products and drugs.

AtomNet is a deep learning system for structure-based rational drug design.   AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis.

CUSTOMER RELATIONS MANAGEMENT:

Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM (recency, frequency, monetary value) variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.

RECOMMENDATION SYSTEMS:

Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music recommendations.  Multiview deep learning has been applied for learning user preferences from multiple domains.  The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.
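A latent factor model is easiest to see in its simplest form: factorizing a user-by-item rating matrix into low-dimensional user and item vectors. The NumPy sketch below is an illustrative toy, not the deep-learning music recommender described above; the ratings matrix and the number of factors are assumptions.

```python
import numpy as np

# Toy user-by-item ratings (0 means "not rated").
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)
mask = R > 0                       # only fit the observed ratings

rng = np.random.default_rng(0)
k = 2                              # number of latent factors (assumed)
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

lr, reg = 0.02, 0.02
for _ in range(5000):
    err = (R - U @ V.T) * mask     # error on observed entries only
    U += lr * (err @ V - reg * U)  # gradient steps on the squared error
    V += lr * (err.T @ U - reg * V)

print(np.round(U @ V.T, 1))        # predicted ratings, including the missing ones
```

The predictions for the zero entries are the "recommendations"; deep-learning recommenders replace the simple dot product with learned nonlinear layers.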

BIOINFORMATICS:

An autoencoder ANN was used in bioinformatics, to predict gene ontology annotations and gene-function relationships.

In medical informatics, deep learning was used to predict sleep quality based on data from wearables and predictions of health complications from electronic health record data.

MOBILE ADVERTISING:

Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points must be considered and assimilated before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, high-dimensional advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.

ADVANTAGES AND DISADVANTAGES:

ADVANTAGES:

  • Delivers best-in-class performance, significantly outperforming other solutions in multiple domains. This includes speech, language, vision, playing games like Go, etc. This isn't by a little bit, but by a significant amount.
  • Reduces the need for feature engineering, one of the most time-consuming parts of machine learning practice.
  • Is an architecture that can be adapted to new problems relatively easily (e.g. vision, time series, language, etc.) using techniques like convolutional neural networks, recurrent neural networks, long short-term memory, etc.

DISADVANTAGES:

  • Requires a large amount of data — if you only have thousands of examples, deep learning is unlikely to outperform other approaches.
  • Is extremely computationally expensive to train. The most complex models take weeks to train using hundreds of machines equipped with expensive GPUs.
  • Does not have much in the way of a strong theoretical foundation. This leads to the next disadvantage.
  • Determining the topology/flavor/training method/hyperparameters for deep learning is a black art with no theory to guide you.
  • What is learned is not easy to comprehend. Other classifiers (e.g. decision trees, logistic regression etc.) make it much easier to understand what’s going on.

SUMMARY:

Whether we like it or not, deep learning will continue to develop.  As equipment and the ability to capture and store huge amounts of data continue, the machine-learning process will only improve.  There will come a time when we will see a “rise of the machines”.  Let’s just hope humans have the ability to control those machines.

WHY I DRINK WINE

December 6, 2017


Over the years I have developed a taste for wine.  Please note, I am definitely NOT an expert and do not even come close to being an expert.  As a matter of fact, when the waiter brings the wine list I immediately hand it to my wife to ponder and ultimately place the order. One of the things on my “bucket list” (and I had better get to it) is taking a wine-appreciation course—preferably in Italy.

The wine industry is a fascinating commercial enterprise and not without hazards; namely, weather and disease.  Even with that being the case, most individuals can appreciate production efforts on an annual basis.   Let’s take a look.

In April 2017 the International Organization of Vine and Wine (OIV) released its annual report on the state of the wine industry for the year 2016. Global output that year declined slightly, just over three percent (3%), from 2015 production. Bad weather created the issues with production. According to the OIV, the top ten (10) producers were as follows:

  • Italy—50.9 million hectoliters
  • France—43.5 million hectoliters
  • Spain—39.3 million hectoliters
  • United States—23.9 million hectoliters
  • Australia—13.0 million hectoliters
  • China—11.4 million hectoliters
  • South Africa—10.5 million hectoliters
  • Chile—10.1 million hectoliters
  • Argentina—9.4 million hectoliters
  • Germany—9.0 million hectoliters

Now, if we go from supply to demand, we find the following:

  • United States—31.8 million hectoliters
  • France—27.0 million hectoliters
  • Italy—22.5 million hectoliters
  • Germany—20.2 million hectoliters
  • China—17.3 million hectoliters
  • United Kingdom—12.9 million hectoliters
  • Spain—9.9 million hectoliters
  • Argentina—9.4 million hectoliters
  • Russia—9.3 million hectoliters
  • Australia—5.4 million hectoliters

A partial list of countries and associated consumption is given as follows:

Now, there may be other reasons and experiences when drinking wine as shown by the digital pictures below.

GREAT LOGIC ON THIS ONE.

This is called supreme rationalization.

I truly believe Mr. Handy has his head in the right place.  He is doing humanity a great service.

SELF-EXPLANATORY!!!!!!!!

We’ve all been there.

This very well could be the result of enjoying a glass (or several glasses) much too much.

As always, I welcome your comments.

HILLBILLY ELEGY

November 9, 2017


Hillbilly Elegy is without a doubt one of the best-written, most important books I have ever read. It is a remarkably insightful account of J.D. Vance growing up in a significantly dysfunctional family and only realizing that fact as he became older and compared his family with others. As you read this book, you realize it is a "major miracle" he escaped the continuing cycle of mental and physical abuse prevalent among poor, white, Eastern Kentucky "hillbilly" families. When the family moved to Ohio, the abuse continued. Even though financial conditions improved, the ingrained patterns of family behavior remained.

"I grew up poor, in the Rust Belt, in an Ohio steel town that has been hemorrhaging jobs and hope for as long as I can remember." That's how J. D. Vance begins one of the saddest and most fascinating books, "Hillbilly Elegy: A Memoir of a Family and Culture in Crisis." Published by Harper, this book has been on the NYT best-seller list since its first publication and has rarely dipped below number ten on anyone's list. Vance was born in Kentucky and raised by his grandparents, as a self-described "hillbilly," in Middletown, Ohio, home of the once-mighty Armco Steel. His family struggled with poverty and domestic violence, of which he and his sister were victims. His mother was addicted to drugs—first to painkillers, then to heroin. Many of his neighbors were jobless and on welfare. Vance escaped their fate by joining the Marines after high school and serving in Iraq. Afterward, he attended Ohio State and Yale Law School, where he was mentored by Amy Chua, a law professor and tiger mom. He now lives in San Francisco and works at Mithril Capital Management, the investment firm helmed by Peter Thiel. It seems safe to say that Vance, who is now in his early thirties, has seen a wider swath of America than most people. The life he lived during his adolescent years is absolutely foreign to the life this writer has lived. This makes the descriptive information in his book valuable and gives a glimpse into another way of life.

“Hillbilly Elegy” is a regional memoir about Vance’s Scots-Irish family, one of many who have lived and worked in Appalachia for generations. For perhaps a century, Vance explains, the region was on an upward trajectory. Family men worked as sharecroppers, then as coal miners, then as steelworkers; families inched their way toward prosperity, often moving north in pursuit of work.  Vance’s family moved about a hundred miles, from Kentucky to Ohio; like many families, they are “hillbilly transplants.” In mid-century Middletown, where Armco Steel built schools and parks along the Great Miami River, Vance’s grandparents were able to live a middle-class life, driving back to the hollers of Kentucky every weekend to visit relatives and friends. Many families, on a regular basis, sent money back to their relatives in Appalachian Kentucky for aid and support consequently “keeping their boat afloat”.

Middletown’s industrial jobs began to disappear in the seventies and eighties. Today, its main street is full of shuttered storefronts, and is a haven for drug dealers at night. Vance reports that, in 2014, more people died from drug overdoses than from natural causes in Butler County, where Middletown is located. Families are disintegrating: neighbors listen as kitchen-table squabbles escalate and come to blows, and single mothers raise the majority of children (Vance himself had fifteen “stepdads” while growing up). Although many people identify as religious, church attendance is at historic lows. High-school graduation rates are sinking, and few students go on to college. Columbus, Ohio, one of the fastest-growing cities in America, is just ninety minutes’ drive from Middletown, but the distance feels unbridgeable. Vance uses the psychological term “learned helplessness” to describe the resignation of his peers, many of whom have given up on the idea of upward mobility in a region that they see as permanently left behind. Writing in a higher register, he says that there is something “almost spiritual about the cynicism” in his home town.

Mr. Vance mentions Martin Seligman as being one psychologist that aids his efforts in understanding the “mechanics” of his family life. Commonly known as the founder of Positive Psychology, Martin Seligman is a leading authority in the fields of Positive Psychology, resilience, learned helplessness, depression, optimism and pessimism. He is also a recognized authority on interventions that prevent depression, and build strengths and well-being.

Learned helplessness, in psychology, is a mental state in which an organism forced to bear aversive stimuli, or stimuli that are painful or otherwise unpleasant, becomes unable or unwilling to avoid subsequent encounters with those stimuli, even if they are "escapable," presumably because it has learned that it cannot. This describes the culture that Mr. Vance grew up in and the culture he had desperately tried to escape—helplessness.

Vance makes the proper decision when he enlists in the Marine Corps for four (4) years.  This action took place after high school graduation.  Just graduating from high school is remarkable.  The Marine Corps instilled in Vance a spirit in which just about anything is possible including enrolling and completing study at Ohio State University and then going on to Yale Law School.  He escapes his environment but has difficulty in escaping his lack of understanding of how the world works.  There are several chapters in his book that give a vivid description of those social necessities he lacks. “You can take the boy out of Kentucky but you can’t take Kentucky out of the boy”.  This is one of my favorite quotes from the book and Vance lives that quote but works diligently to make course corrections as he progresses through Yale and beyond.

In my opinion, this is a “must-read” book. As a matter of fact, it should be read more than once to fully understand the details presented.  READ THIS BOOK.
