WHY DID I NOT THINK OF THAT?

February 17, 2018


Portions of this post were taken from Design News Daily.

How many times have you said that? It’s called the Eureka moment – a sudden flash of intuition that leads us down a path to a wonderful, new, productive solution. Most of us have had such moments, but a select few have parlayed them into something grand, something that changes the world. That was the case for Arthur Fry, inventor of the Post-It Note, and Richard James, inventor of the Slinky toy. They took simple ideas – such as a sticky note and a coil spring – and touched hundreds of millions of lives with them.  Given below are nine Eureka moments that produced workable, usable devices that have made life easier for all of us. Let’s take a look.

If you could see my computer and its screen, you would see a “ton” of Post-it Notes, most with scribbles, PINs, telephone numbers, and so on.  We all use them.

Legend has it that Post-It Note inventor Arthur Fry conjured up the idea for his product when the little scraps of paper in his Sunday hymnal kept falling out. To solve the problem, he used an adhesive developed by a 3M colleague, Dr. Spencer Silver. Silver’s reusable, pressure-sensitive adhesive was failing to stir interest inside 3M until Fry came along and made the mental connection to his hymnal.

In 1974, the two partnered to put the adhesive on small sheets of yellow paper and … a mythic product was born. They passed their sticky notes to fellow employees, who loved them. “I thought, what we have here isn’t just a bookmark,” Fry said. “It’s a whole new way to communicate.” They later put their product on the market, receiving an even stronger reaction. Lee Iacocca and other Fortune 500 CEOs reportedly wrote to praise it. Post-It Notes, as they soon became known, eventually were sold in more than 100 countries. At one point, it was estimated that the average professional received 11 messages on Post-It Notes per day. Fry received 3M’s Golden Step Award, was named a corporate researcher, became a member of the company’s Carlton Society and was appointed to its Circle of Technical Excellence.

(Image source: By Tinkeringbell – Own work, Public Domain/Wikipedia)

Ansa baby bottles are virtually impossible to find today, but they were all the rage in the mid-1980s.

The bottles, which have a hole in the middle to make them easy for babies to hold, were the brainchild of William and Nickie Campbell of Muskogee, OK, who designed them for their infant son. After filing for patents in 1984, they took out a loan, launched the Ansa Bottle Co., manufactured the plastic bottles, and enjoyed immediate success. They received editorial coverage in American Baby and Mothers Today, while inking deals with Sears, K-Mart, Walgreens, and Target, according to The Oklahoman. Their bottles even went on display in the Museum of Modern Art in New York City.

(Image source: US Patent Office)

Rolling luggage is an accepted fact of air travel today, but it wasn’t always so, and I’m not sure what we would do without it now.  The concept was slow to take hold, and achieved acceptance in two distinct steps. The first step occurred in 1970, when inventor Bernard Sadow observed an airport worker rolling a heavy machine on a wheeled skid. Sadow, who was at the time dragging his own luggage through customs after a trip to Aruba, had the proverbial “eureka moment,” according to The New York Times. Sadow’s solution to the problem was a suitcase with four wheels and a pull strap. To his surprise, however, the idea was slow to take off. That’s where the second step came in. In 1987, a Northwest Airlines pilot and workshop tinkerer named Robert Plath took it to the next level — developing an upright, two-wheeled suitcase with a long stiff handle. Plath’s so-called “Rollaboard” was the missing ingredient to the success of rolling luggage.

Today, his 30-year-old concept dominates air travel and is built by countless manufacturers — any patents having long since expired. The initial slow acceptance remains a mystery to many, however. Sadow, looking back at it years later, attributed the consumer reluctance to men who refused to take the easy way out. “It was a very macho thing,” he said.

(Image source photo: Design News)

OK, who on the planet has NOT owned and/or played with a Slinky?  In 1943, Naval mechanical engineer Richard James was developing springs for instruments when he accidentally knocked one to the floor, permanently altering the future of toy manufacturing. The spring subsequently stepped “in a series of arcs to a stack of books, to a tabletop, and to the floor, where it recoiled itself and stood upright,” according to Wikipedia. James reportedly realized that with the right steel properties, he could make a spring walk, which is exactly what he did. Using a $500 loan, he made 400 “Slinky” coil springs at a local machine shop, demonstrated them at a Gimbels department store in Philadelphia, and sold his entire inventory in ninety (90) minutes. From there, Slinky became a legend, reaching sales of 300 million units in 60 years. Today, engineers attribute Slinky’s sales to the taming of the product’s governing physical principles — Hooke’s Law and the force of gravity. But advertising executives argue that its monumental sales were a product of clever TV commercials. The song, “Everyone knows it’s Slinky” (recognized by virtually everyone who lived through the 1960s and 1970s), is considered the longest-running jingle in advertising history.

(Image source: Wikipedia)
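Those governing principles, Hooke’s Law and gravity, are easy to demonstrate in a few lines. Below is a minimal sketch of a mass on a spring, stepped forward with simple Euler integration; the constants are illustrative guesses, not measured Slinky values.

```python
# A mass on a spring under Hooke's Law (F = -k*x) plus gravity,
# integrated with an explicit Euler step. Constants are illustrative.

k = 10.0     # spring constant (N/m)
m = 0.25     # mass (kg)
g = 9.81     # gravitational acceleration (m/s^2)
dt = 0.01    # time step (s)

x, v = 0.0, 0.0          # displacement from natural length (m), velocity (m/s)
positions = []
for _ in range(1000):    # simulate 10 seconds
    f = -k * x - m * g   # Hooke's restoring force plus weight (down is negative)
    v += (f / m) * dt
    x += v * dt
    positions.append(x)

# Released from rest, the mass oscillates about the static equilibrium -m*g/k
equilibrium = -m * g / k
print(round(equilibrium, 3))   # about -0.245 m with these constants
```

The walking behavior of a real Slinky adds a traveling compression wave on top of this basic oscillation, which is where the “right steel properties” come in.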

The Band-Aid (or “Band-Aid brand,” as Johnson & Johnson calls it) is in essence a simple concept – an adhesive strip with a small bandage attached. Still, its success is undeniable. The idea originated with Johnson & Johnson employees Thomas Anderson and Earle Dickson in 1920. Dickson made the prototype for his wife, who frequently burned herself while cooking, enabling her to dress her wounds without help. Dickson introduced the concept to his bosses, who quickly launched it into production.

Today, it is copied by many generic products, but the Band-Aid brand lives on. Band-Aid is accepted around the world, with more than 100 billion having been sold.

(Image source photo: Design News)

Today, it’s hard to imagine that an upside-down bottle was once considered an innovation. But it was. Ketchup maker H.J. Heinz launched a revolution in packaging after deciding that its customers were tired of banging on the side of glass bottles, waiting for their product to ooze out. The unlikely hero of their revolution was Paul Brown, a molding shop owner in Midland, MI, who designed a special valve for bottles of ketchup and other viscous liquids, according to an article in the McClatchy Newspapers. Brown’s valve enabled ketchup bottles to be stored upside down without leaking. It also allowed liquids to be easily delivered when the bottle was squeezed, and sucked back inside when force was released.

Brown was said to have built 111 failed injection-molded silicone prototypes before finding the working design. To his lasting delight, the successful concept found use not only with Heinz, but with makers of baby food, shampoo, and cosmetics, as well as with NASA for space flights. In retrospect, he said the final design was the result of an unusual intellectual approach. “I would pretend I was silicone, and if I was injected into a mold, what I would do,” he told McClatchy. The technique apparently worked: Brown eventually sold his business for about $13 million in 1995.

Players of pinball may take the games’ dual flippers for granted, but they were an innovation when Steve Kordek devised them in 1948. Working for the Genco Co. in Chicago (a company he became acquainted with after stepping into its lobby to escape a heavy rain), Kordek became the father of the two-flipper pinball game. His lasting contribution was simple, yet innovative — the use of direct current (DC) to actuate the flippers, rather than alternating current (AC). DC, he found, made the flippers more controllable, yet less costly to manufacture. Over six decades, Kordek reached legendary status in the industry, producing games for Genco, Bally Manufacturing, and Williams Manufacturing, always employing his dual-flipper design. He worked until 2003, designing the Vacation America game (based on the National Lampoon Vacation movies) at age 92. But it was his DC-based, dual flipper design that shaped his legacy. “It was really revolutionary, and pretty much everyone followed suit,” David Silverman, executive director of the National Pinball Hall of Fame told The New York Times in 2012. “And it’s stayed the standard for 60 years.”

(Image source: By ElHeineken, own work/Wikipedia)

It’s difficult to know whether any individual has ever been credited with the design of the ergonomic bent snow shovel, but the idea is nevertheless making money … for somebody. Bent-handle snow shovels today are sold at virtually every hardware store and home center in the northern United States, and they’re a critical winter tool for millions of homeowners. The idea is that by putting a bend in the shaft, the horizontal moment arm between the shovel handle and the tip is shorter, putting less strain on the user’s lower back. Although there’s some argument on that point, the claim was recently put to the test by engineering graduate students at the University of Calgary, according to a story on CTVNews.com.

Studying the bent-handle shovels in the school’s biomechanics laboratory, engineers concluded that they require less bending on the part of users, and therefore reduce mechanical loads on the lower back by 16 percent. “I think that’s a pretty substantial reduction,” researcher Ryan Lewinson told CTVNews. “Over the course of shoveling an entire driveway, that probably would add up to something pretty meaningful.”

(Image source photo: Design News)
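The intuition behind that 16 percent figure is simple statics: the torque about the lower back scales with the horizontal moment arm to the loaded scoop. A back-of-envelope sketch, with purely illustrative lengths and loads (not values from the Calgary study):

```python
# Back-of-envelope statics for the bent-handle claim: lumbar torque
# scales with the horizontal spine-to-scoop moment arm. All numbers
# below are illustrative guesses, not measurements from the study.

load_n = 50.0              # weight of snow on the scoop (newtons)

straight_arm_m = 0.90      # horizontal spine-to-scoop distance, straight shaft
bent_arm_m = 0.75          # shorter distance allowed by the bend in the shaft

torque_straight = load_n * straight_arm_m   # N*m about the lower back
torque_bent = load_n * bent_arm_m

reduction = (torque_straight - torque_bent) / torque_straight
print(f"{reduction:.0%} lower lumbar torque")   # ~17% with these guesses
```

Note that the load cancels out of the ratio: the percentage reduction depends only on the two moment arms, which is why a fixed-geometry handle can deliver a consistent benefit.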

Erno Rubik, a Hungarian sculptor and professor of architecture, invented his famous game cube while trying to solve a structural problem. Although his goal had been to put moving parts together in a mechanism that wouldn’t fall apart, it gradually dawned on Rubik that he had created a puzzle of sorts.

His puzzle consisted of 26 miniature cubes, each having an inward extension that interlocked with the other cubes, allowing them to move independently and in different directions. Initially called the Magic Cube, it was released in Budapest toy shops in 1977. It was later licensed to the Ideal Toy Co. in 1980, which renamed it Rubik’s Cube to make it more distinctive. Its broader release started a craze in the early 1980s. Rubik’s Cube won Toy of the Year Awards in Germany, France, the UK, US, Finland, Sweden, and Italy. Between 1980 and 1983, 200 million cubes were sold worldwide. Clubs of “speedcubers” popped up around the world, it appeared on the cover of Scientific American, books were written about it, and The Washington Post called it “a puzzle that’s moving like fast food right now.” Today, Rubik’s Cube continues to sell and enthusiasts continue to test their skill against it. Total sales are said to have passed 300 million. In 2017, a speedcuber named SeungBeom Cho set a world record for solving the puzzle in 4.59 seconds.

(Image source photo: Design News)

CONCLUSIONS:  We all have ideas.  The difference is persistence in developing and marketing those ideas.


The convergence of “smart” microphones, new digital signal processing technology, voice recognition and natural language processing has opened the door for voice interfaces.  Let’s first define a “smart device”.

A smart device is an electronic device, generally connected to other devices or networks via different wireless protocols such as Bluetooth, NFC, Wi-Fi, 3G, etc., that can operate to some extent interactively and autonomously.

I am told by my youngest granddaughter that all the cool kids now have in-home, voice-activated devices like Amazon Echo or Google Home. These devices can play your favorite music, answer questions, read books, control home automation, and all those other things people thought the future was about in the 1960s. For the most part, the speech recognition of the devices works well; although you may find yourself with an extra dollhouse or two occasionally. (I do wonder if they speak “southern” but that’s another question for another day.)

A smart speaker is, essentially, a speaker with added internet connectivity and “smart assistant” voice-control functionality. The smart assistant is typically Amazon Alexa or Google Assistant, each managed by its parent company and opened up for third parties to implement in their own hardware. The idea is that the more people bring these into their homes, the more Amazon and Google have a “space” in every abode where they’re always accessible.
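Conceptually, these assistants all run the same loop: listen for a wake word, turn speech into text, match an intent, and respond. A toy, text-only sketch of that pipeline is below; real devices do this on audio with dedicated DSP hardware, and every keyword and canned response here is invented for illustration.

```python
# Toy, text-only sketch of the smart-speaker pipeline: wake-word check,
# then crude keyword-spotting "intent recognition," then a response.
# All keywords and responses are made up for illustration.

WAKE_WORD = "computer"

INTENTS = {
    "play": "Playing your favorite music.",
    "weather": "Sunny with a chance of snow shoveling.",
    "lights": "Turning on the living room lights.",
}

def handle_utterance(utterance):
    words = utterance.lower().split()
    if not words or words[0] != WAKE_WORD:
        return None                      # not addressed to the assistant
    for keyword, response in INTENTS.items():
        if keyword in words[1:]:         # crude keyword-spotting "NLU"
            return response
    return "Sorry, I didn't catch that."

print(handle_utterance("computer play some jazz"))   # → Playing your favorite music.
print(handle_utterance("what time is it"))           # → None
```

The commercial assistants replace each of these stages with trained models (acoustic wake-word detection, large-vocabulary speech recognition, statistical intent parsing), but the control flow is the same.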

Let me first state that my family does not, as yet, have a smart device but we may be inching in that direction.  If we look at numbers, we see the following projections:

  • 175 million smart devices will be installed in a majority of U.S. households by 2022, with at least seventy (70) million households having at least one smart speaker in their home. (Digital Voice Assistants: Platforms, Revenues & Opportunities, 2017-2022. Juniper Research, November 2017.)
  • Amazon sold over eleven (11) million Alexa voice-controlled Amazon Echo devices in 2016. That number was expected to double for 2017. (Smart Home Devices Forecast, 2017 to 2022 (US). Forrester Research, October 2017.)
  • Amazon Echo accounted for 70.6% of all voice-enabled speaker users in the United States in 2017, followed by Google Home at 23.8%. (eMarketer, April 2017)
  • In 2018, 38.5 million millennials are expected to use voice-enabled digital assistants—such as Amazon Alexa, Apple Siri, Google Now and Microsoft Cortana—at least once per month. (eMarketer, April 2017.)
  • The growing smart speaker market is expected to hit 56.3 million shipments, globally in 2018. (Canalys Research, January 2018)
  • The United States will remain the most important market for smart speakers in 2018, with shipments expected to reach 38.4 million units. China is a distant second at 4.4 million units. (Canalys Research, April 2018.)

With that being the case, let’s now look at which smart speakers are commercialized and available today, either online or in retail stores:

  • Amazon Echo Spot–$114.99
  • Sonos One–$199.00
  • Google Home–$129.00
  • Amazon Echo Show–$179.99
  • Google Home Max–$399.00
  • Google Home Mini–$49.00
  • Fabriq Choros–$69.99
  • Amazon Echo (Second Generation)–$84.99
  • Harman Kardon Invoke–$199.00
  • Amazon Echo Plus–$149.00

CONCLUSIONS:  If you are interested in purchasing one from the list above, I would definitely recommend you do your homework.  Investigate the services provided by a smart speaker to make sure you are getting what you desire.  Be aware that additional items will certainly enter the marketplace as time goes by.  GOOD LUCK.

THE NEXT COLD WAR

February 3, 2018


I’m old enough to remember the Cold War waged by the United States and the Soviet Union.  The term “Cold War” first appeared in a 1945 essay by the English writer George Orwell called “You and the Atomic Bomb”.

HOW DID THIS START:

During World War II, the United States and the Soviet Union fought together as allies against the Axis powers, Germany, Japan and Italy. However, the relationship between the two nations was a tense one. Americans had long been wary of Soviet communism and concerned about Russian leader Joseph Stalin’s tyrannical, blood-thirsty rule of his own country. For their part, the Soviets resented the Americans’ decades-long refusal to treat the USSR as a legitimate part of the international community as well as their delayed entry into World War II, which resulted in the deaths of tens of millions of Russians. After the war ended, these grievances ripened into an overwhelming sense of mutual distrust and enmity. Postwar Soviet expansionism in Eastern Europe fueled many Americans’ fears of a Russian plan to control the world. Meanwhile, the USSR came to resent what they perceived as American officials’ bellicose rhetoric, arms buildup and interventionist approach to international relations. In such a hostile atmosphere, no single party was entirely to blame for the Cold War; in fact, some historians believe it was inevitable.

American officials encouraged the development of atomic weapons like the ones that had ended World War II. Thus, began a deadly “arms race.” In 1949, the Soviets tested an atom bomb of their own. In response, President Truman announced that the United States would build an even more destructive atomic weapon: the hydrogen bomb, or “superbomb.” Stalin followed suit.

The ever-present threat of nuclear annihilation had a great impact on American domestic life as well. People built bomb shelters in their backyards. They practiced attack drills in schools and other public places. The 1950s and 1960s saw an epidemic of popular films that horrified moviegoers with depictions of nuclear devastation and mutant creatures. In these and other ways, the Cold War was a constant presence in Americans’ everyday lives.

SPACE AND THE COLD WAR:

Space exploration served as another dramatic arena for Cold War competition. On October 4, 1957, a Soviet R-7 intercontinental ballistic missile launched Sputnik (Russian for “traveler”), the world’s first artificial satellite and the first man-made object to be placed into the Earth’s orbit. Sputnik’s launch came as a surprise, and not a pleasant one, to most Americans. In the United States, space was seen as the next frontier, a logical extension of the grand American tradition of exploration, and it was crucial not to lose too much ground to the Soviets. In addition, this demonstration of the overwhelming power of the R-7 missile–seemingly capable of delivering a nuclear warhead into U.S. air space–made gathering intelligence about Soviet military activities particularly urgent.

In 1958, the U.S. launched its own satellite, Explorer I, designed by the U.S. Army under the direction of rocket scientist Wernher von Braun, and what came to be known as the Space Race was underway. That same year, President Dwight Eisenhower signed a public order creating the National Aeronautics and Space Administration (NASA), a federal agency dedicated to space exploration, as well as several programs seeking to exploit the military potential of space. Still, the Soviets were one step ahead, launching the first man into space in April 1961.

THE COLD WAR AND AI (ARTIFICIAL INTELLIGENCE):

Our country NEEDS to consider AI as an extension of the Cold War.  Make no mistake about it, AI will definitely play into the hands of a few desperate dictators or individuals in future years.  A country that believes its adversaries have, or will get, AI weapons will need them too, to retaliate or to deter foreign use against the US. Wide use of AI-powered cyberattacks may still be some time away. Countries might agree to a proposed Digital Geneva Convention to limit AI conflict. But that won’t stop AI attacks by independent nationalist groups, militias, criminal organizations, terrorists and others – and countries can back out of treaties. It’s almost certain, therefore, that someone will turn AI into a weapon – and that everyone else will do so too, even if only out of a desire to be prepared to defend themselves.

With Russia embracing AI, nations that don’t, or that restrict AI development, risk becoming unable to compete – economically or militarily – with countries wielding developed AIs. Advanced AIs can create advantage for a nation’s businesses, not just its military, and those without AI may be severely disadvantaged. Perhaps most importantly, having sophisticated AIs in many countries could provide a deterrent against attacks, as happened with nuclear weapons during the Cold War.

The Congress of the United States and the Executive Branch need to “lose” the high school mentality and get back in the game.  They need to address the future instead of living in the past OR we the people need to vote them all out and start over.

 

AUTOMOTIVE FUTURE

January 25, 2018


Portions of this post are taken from Design News Daily Magazine, January publication.

The Detroit Auto Show has a weirdly duplicitous vibe these days. The biggest companies that attend make sure to talk about things that make them sound future-focused, almost benevolent. They talk openly about autonomy, electrification, and even embracing other forms of transportation. But they do this while doling out product announcements that are very much about meeting the current demands of consumers who, enjoying low gas prices, want trucks and crossover SUVs. With that said, it really is interesting to take a look at several “concept” cars.  Cars we just may be driving in the future, if not the near future.  Let’s take a look right now.

Guangzhou Automobile Co. (better known as GAC Motor) stole the show in Detroit, at least if we take their amazing claims at face value. The Chinese automaker rolled out the Enverge electric concept car, which is said to have a 373-mile all-electric range based on a 71-kWh battery. Incredibly, it is also reported to have a wireless recharge time of just 10 minutes for a 240-mile range. Enverge’s power numbers are equally impressive: 235 HP and 302 lb-ft of torque, with a 0-62 mph time of just 4.4 seconds. GAC, the sixth biggest automaker in China, told the Detroit audience that it would start selling cars in the US by Q4 2019. The question is whether its extraordinary performance numbers will hold up to EPA scrutiny.  If GAC can live up to its specifications, it may have the real deal here.  Very impressive.
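As a quick sanity check on the range claim (a back-of-envelope calculation, not EPA methodology), 373 miles from a 71-kWh pack implies the consumption figure below; for comparison, efficient production EVs of the era ran very roughly 240 to 300 Wh per mile, so the claim is aggressive but not physically absurd.

```python
# Back-of-envelope check of GAC's claimed range against the pack size.

battery_wh = 71 * 1000          # 71-kWh pack, in watt-hours
claimed_range_mi = 373          # claimed all-electric range

wh_per_mile = battery_wh / claimed_range_mi
print(round(wh_per_mile))        # → 190 (Wh per mile)
```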

As autonomous vehicle technology advances, automakers are already starting to examine the softer side of that market – that is, how will humans interact with the machines? And what are some of the new applications for the technology? That’s where Ford’s pizza delivery car came in. The giant automaker started delivering Domino’s pizzas in Ann Arbor, MI, late last year with an autonomous car. In truth, the car had a driver at the wheel, sitting behind a window screen. But the actual delivery was automated: Customers were alerted by a text; a rear window rolled down; an automated voice told them what to do, and they grabbed the pie. Ford engineers were surprised to find that the humans weren’t intimidated by the technology. “In the testing we did, people interacted nicely with the car,” Ford autonomous car research engineer Wayne Williams told Design News. “They talked to it as if it were a robot. They waved when it drove away. Kids loved it. They’d come running up to it.” The message to Ford was clear – autonomous cars are about more than just personal transportation. Delivery services are a real possibility, too.

Most of today’s autonomous cars use unsightly, spinning Lidar buckets atop their roofs. At the auto show, Toyota talked about an alternative Lidar technology that’s sleek and elegant. You have to admit that for now, the autonomous cars look UGLY—really ugly.  Maybe Toyota has the answer.

In a grand rollout, Lexus introduced a concept car called the LF-1 Limitless. The LF-1 is what we’ve all come to expect from modern concept cars – a test bed for numerous power trains and autonomous vehicle technologies. It can be propelled by a fuel cell, hybrid, plug-in hybrid, all-electric or gasoline power train. And its automated driving system includes a “miniaturized supercomputer with links to navigation data, radar sensors, and cameras for a 360-degree view of your surroundings with predictive capabilities.” The sensing technologies are all part of a system known as “Chauffeur mode.” Lexus explained that the LF-1 is setting the stage for bigger things: By 2025, every new Lexus around the world will be available as a dedicated electrified model or will have an electrified option.

Nissan’s Xmotion concept, which is said to combine Japanese aesthetics with SUV styling, includes seven digital screens. Three main displays join left- and right-side screens across the instrument panel. There’s also a “digital room mirror” in the ceiling and a center console display. Moreover, the displays can be controlled by gestures and even eye motions, enabling drivers to focus on the task of driving. A Human Machine Interface also allows drivers to easily switch from Nissan’s ProPilot automated driving system to a manual mode.

Cadillac showed off its Super Cruise technology, which is said to be the only semi-autonomous driving system that actually monitors the driver’s attention level. If the driver is attentive, Super Cruise can do amazing things – tooling along for hours on a divided highway with no intersections, for example, while handling all the steering, acceleration and braking. GM describes it as an SAE Level 2 autonomous system. It’s important because it shows autonomous vehicle technology has left the lab and is making its debut on production vehicles. Super Cruise launched late in 2017 on the Cadillac CT6 (shown here).

In a continuing effort to understand the relationship between self-driving cars and humans, Ford Motor Co. and Virginia Tech displayed an autonomous test vehicle that communicates its intent to other drivers, bicyclists, and pedestrians. Such communication is important, Ford engineers say, because “designing a way to replace the head nod or hand wave is fundamental to ensuring safe and efficient operation of self-driving vehicles.”

Infiniti rolled out the Q Inspiration luxury sedan concept, which combines its variable compression ratio engine with Nissan’s ProPilot semi-autonomous vehicle technology. Infiniti claims the engine combines “turbo charged gasoline power with the torque and efficiency of a hybrid or diesel.” Known as the VC-Turbo, the four-cylinder engine continually transforms itself, adjusting its compression ratio to optimize power and fuel efficiency. At the same time, the sedan features ProPilot Assist, which provides assisted steering, braking and acceleration during driving. You can see from the image below, the photographers were there covering the Infiniti.

The eye-catching Concept-i vehicle provided a more extreme view of the distant future, when vehicles will be equipped with artificial intelligence (AI). Meant to anticipate people’s needs and improve their quality of life, Concept-i is all about communicating with the driver and occupants. An AI agent named Yui uses light, sound, and even touch, instead of traditional screens, to communicate information. Colored lights in the footwells, for example, indicate whether the vehicle is an autonomous or manual drive; projectors in the rear deck project outside views onto the seat pillar to warn drivers about potential blind spots, and a next-generation heads-up display keeps the driver’s eyes and attention on the road. Moreover, the vehicle creates a feeling of warmth inside by emanating sweeping lines of light around it. Toyota engineers created the Concept-i features based on their belief that “mobility technology should be warm, welcoming, and above all, fun.”

CONCLUSIONS:  To be quite honest, I was not really blown away by this year’s offerings.  I LOVE the Infiniti and the Toyota concept cars shown above.  The American models did not capture my attention.  Just a thought.

WORLD’S RICHEST

December 29, 2017


OK, it is once again time to make those New Year’s resolutions.  Health, finances, weight loss, quit smoking, cut out sugar, daily exercise, etc. You get the drill.   All of those resolutions we get tired of and basically forget by the end of February.  If you had all the money in the world, as some do, you might not even make resolutions.  You might sit back and watch it roll in.  Let’s take a quick look.

According to the Bloomberg Billionaires Index, 2017 proved to be an outstanding year for the world’s richest people, who watched their net worth rise 23 percent from $4.4 trillion in 2016 to $5.3 trillion by the end of trading on Tuesday, December 26.

The following graph indicates the progress of the world’s richest through the 2017 year.  As you can see, the world’s richest individuals added a very cool one trillion dollars ($1 trillion USD) to their collective wealth.  Now, that’s across the entire group of richest people, but even so it’s a huge sum of “dinero.”

Take a look at these dudes below.  Do you know who they are?  I’m going to let you ponder this over the weekend, but they all “look familiar” and they are all very, very wealthy.

WINNERS:

  • The U.S. has the largest presence on the index, with 159 billionaires. They added $315 billion, an eighteen (18%) percent gain that gives them a collective net worth of $2 trillion.
  • Russia’s twenty-seven (27) richest people put behind them the economic pain that followed President Vladimir Putin’s 2014 annexation of Crimea, adding $29 billion for a combined $275 billion, surpassing the collective net worth they had before western economic sanctions began.
  • It was also a banner year for tech moguls, with the fifty-seven (57) technology billionaires on the index adding $262 billion, a thirty-five (35%) percent increase that was the most of any sector on the ranking.
  • Facebook Inc. co-founder Mark Zuckerberg had the fourth-largest U.S. dollar increase on the index, adding $22.6 billion, or forty-five (45%) percent, and filed plans to sell eighteen (18%) percent of his stake in the social media giant as part of his plan to give away the majority of his $72.6 billion fortune.
  • In all, the 440 billionaires on the index who added to their fortunes in 2017 gained a combined $1.05 trillion.
  • The Bloomberg index discovered sixty-seven (67) hidden billionaires in 2017.
  • Renaissance Technologies’ Henry Laufer was identified with a net worth of $4 billion in April. Robert Mercer, 71, who plans to step down as co-CEO of the world’s most profitable trading fund on Jan. 1, couldn’t be confirmed as a billionaire.
  • Two fish billionaires were caught: Russia’s Vitaly Orlov and Chuck Bundrant of Trident Seafoods.
  • A Brazilian tycoon who built a $1.3 billion fortune with Latin America’s biggest wind developer was interviewed in April.
  • Two New York real estate moguls were identified, Ben Ashkenazy and Joel Wiener.
  • Several technology startup billionaires were identified, including the chief executive officer of Roku Inc. and the two co-founders of Wayfair Inc.
  • Investor euphoria created a number of bitcoin billionaires, including Tyler and Cameron Winklevoss, with the value of the cryptocurrency soaring to more than $16,000 Tuesday, up from $1,140 on Jan. 4. The leap came with a chorus of warnings, including from Janet Yellen, who called the emerging tender a “highly speculative asset” at her last news conference as chair of the Federal Reserve, on Dec. 13.

I’m not going to highlight the losers because even their monetary losses leave them as millionaires and billionaires.  I know this post makes your day, but I tell you these things to indicate that maybe, just maybe, it is possible to achieve monetary success in 2018.  I DO KNOW IT’S POSSIBLE TO TRY.  Now, when I say success, I’m not necessarily talking about millions and certainly not billions—enough to cover the basic expenses with a little left over for FUN.

Here’s hoping you all have a marvelous NEW YEAR.  Remember—clean slate.  Starting over. Have a great year.

DEEP LEARNING

December 10, 2017


If you read technical literature with some hope of keeping up with the latest trends in technology, you find words and phrases such as AI (Artificial Intelligence) and DL (Deep Learning). They seem to be used interchangeably, but the facts deny that premise.  Let’s look.

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine-learning methods based on learning data representations, as opposed to task-specific algorithms. (NOTE: The key words here are MACHINE LEARNING.) The ability of computers to learn can be supervised, semi-supervised or unsupervised.  The prospect of developing learning mechanisms and software to control machine mechanisms is frightening to many but definitely very interesting to most.  Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks.  Machine learning is a method by which the behavior of biological neural networks is approximated in physical hardware: i.e. computers and computer programming.  Never in the history of our species has this degree of success been possible; only with the advent of very powerful computers and programs capable of handling “big data” has it become so.

With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.  The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs.  Because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before. Deep learning is a class of machine learning algorithms that accomplish the following:

  • Use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input.
  • Learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners.
  • Learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
  • Use some form of gradient descent for training via backpropagation.

Layers that have been used in deep learning include hidden layers of an artificial neural network and sets of propositional formulas.  They may also include latent variables organized layer-wise in deep generative models such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.
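The "cascade of layers" idea described above can be made concrete with a small sketch. Below is a minimal, self-contained Python illustration (my own toy example, not any particular library's API) of a two-layer cascade in which each nonlinear unit takes a weighted sum of its inputs, and each layer feeds the next:

```python
import math

def layer(inputs, weights, biases):
    """One layer of nonlinear processing units: each unit takes a
    weighted sum of every input and applies a sigmoid nonlinearity."""
    return [
        1.0 / (1.0 + math.exp(-(sum(x * w for x, w in zip(inputs, ws)) + b)))
        for ws, b in zip(weights, biases)
    ]

# A cascade of two layers: the second layer's input is the first layer's output.
x = [0.5, -1.0]                                                     # raw features
h = layer(x, weights=[[0.8, -0.2], [0.3, 0.9]], biases=[0.0, 0.1])  # hidden layer
y = layer(h, weights=[[1.0, -1.0]], biases=[0.0])                   # output layer
print(y)
```

The weights here are arbitrary; in a real network they would be learned by gradient descent, as the last bullet above notes.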

ARTIFICIAL NEURAL NETWORKS:

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.

An ANN is based on a collection of connected units called artificial neurons, (analogous to axons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that it sends downstream.

Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times.

The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.
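Backpropagation, mentioned above, can be sketched in a few lines. This toy example (assumed values, not a real training recipe) applies the chain rule through a squared error and a sigmoid to nudge a single weight toward a target output:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train one weight so that input 1.0 produces an output close to 0.9.
w, x, target, lr = 0.0, 1.0, 0.9, 1.0
for _ in range(1000):
    y = sigmoid(w * x)
    # Backpropagation: chain rule through squared error and sigmoid,
    # then adjust the weight against the gradient.
    grad = (y - target) * y * (1.0 - y) * x
    w -= lr * grad

print(round(sigmoid(w * x), 2))  # converges near 0.9
```

A full network repeats this same chain-rule step layer by layer, passing error information in the reverse direction exactly as the paragraph above describes.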

Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.

As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, playing “Go”).

APPLICATIONS:

Just what applications could take advantage of “deep learning?”

IMAGE RECOGNITION:

A common evaluation set for image classification is the MNIST database data set. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT (a comparable benchmark data set for speech), its small size allows multiple configurations to be tested. A comprehensive list of results on this set is available.

Deep learning-based image recognition has become “superhuman”, producing more accurate results than human contestants. This first occurred in 2011.

Deep learning-trained vehicles now interpret 360° camera views.   Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes.

The iPhone X, I am told, uses facial recognition as one method of ensuring security and a potential hacker’s ultimate failure to unlock the phone.

VISUAL ART PROCESSING:

Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of a) identifying the style period of a given painting, b) “capturing” the style of a given painting and applying it in a visually pleasing manner to an arbitrary photograph, and c) generating striking imagery based on random visual input fields.

NATURAL LANGUAGE PROCESSING:

Neural networks have been used for implementing language models since the early 2000s.  LSTM helped to improve machine translation and language modeling.  Other key techniques in this field are negative sampling  and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep-learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as probabilistic context free grammar (PCFG) implemented by an RNN.   Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.  Deep neural architectures provide the best results for constituency parsing,  sentiment analysis,  information retrieval,  spoken language understanding,  machine translation, contextual entity linking, writing style recognition and others.
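As a rough illustration of the "position in a vector space" idea above, here is a toy sketch with hand-picked three-dimensional vectors (real embeddings such as word2vec are learned from text and typically have hundreds of dimensions). Similarity between words is measured by the cosine of the angle between their vectors:

```python
import math

# Hand-picked toy vectors; real word2vec embeddings are learned, not chosen.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: values near 1.0 mean the words point the
    same way in the embedding space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Related words sit closer together than unrelated ones.
print(cosine(vectors["king"], vectors["queen"]) > cosine(vectors["king"], vectors["apple"]))  # True
```

Deeper layers built on top of such vectors are what let the architectures above parse sentences and detect paraphrasing.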

Google Translate (GT) uses a large end-to-end long short-term memory network.  Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system “learns from millions of examples.”  It translates “whole sentences at a time, rather than pieces.”  Google Translate supports over one hundred languages.  The network encodes the “semantics of the sentence rather than simply memorizing phrase-to-phrase translations,” and GT can translate directly from one language to another, rather than using English as an intermediate.

DRUG DISCOVERY AND TOXICOLOGY:

A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects.  Research has explored use of deep learning to predict biomolecular target, off-target and toxic effects of environmental chemicals in nutrients, household products and drugs.

AtomNet is a deep learning system for structure-based rational drug design.   AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis.

CUSTOMER RELATIONS MANAGEMENT:

Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.

RECOMMENDATION SYSTEMS:

Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music recommendations.  Multiview deep learning has been applied for learning user preferences from multiple domains.  The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.

BIOINFORMATICS:

An autoencoder ANN was used in bioinformatics, to predict gene ontology annotations and gene-function relationships.

In medical informatics, deep learning was used to predict sleep quality based on data from wearables and predictions of health complications from electronic health record data.

MOBILE ADVERTISING:

Finding the appropriate mobile audience for mobile advertising is always challenging since there are many data points that need to be considered and assimilated before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.

ADVANTAGES AND DISADVANTAGES:

ADVANTAGES:

  • Has best-in-class performance on problems that significantly outperforms other solutions in multiple domains. This includes speech, language, vision, playing games like Go etc. This isn’t by a little bit, but by a significant amount.
  • Reduces the need for feature engineering, one of the most time-consuming parts of machine learning practice.
  • Is an architecture that can be adapted to new problems relatively easily (e.g., vision, time series, language, etc.) using techniques like convolutional neural networks, recurrent neural networks, and long short-term memory.

DISADVANTAGES:

  • Requires a large amount of data — if you only have thousands of examples, deep learning is unlikely to outperform other approaches.
  • Is extremely computationally expensive to train. The most complex models take weeks to train using hundreds of machines equipped with expensive GPUs.
  • Does not have much in the way of a strong theoretical foundation. This leads to the next disadvantage.
  • Determining the topology/flavor/training method/hyperparameters for deep learning is a black art with no theory to guide you.
  • What is learned is not easy to comprehend. Other classifiers (e.g. decision trees, logistic regression etc.) make it much easier to understand what’s going on.

SUMMARY:

Whether we like it or not, deep learning will continue to develop.  As equipment improves and the ability to capture and store huge amounts of data grows, the machine-learning process will only get better.  There may come a time when we see a “rise of the machines.”  Let’s just hope humans retain the ability to control those machines.

BITCOIN

December 9, 2017


I have been hearing a great deal about Bitcoin lately, specifically on the early-morning television business channels. I am not too sure what this is all about, so I thought I would take a look.  First, an “official” definition.

Bitcoin is a cryptocurrency and worldwide payment system. It is the first decentralized digital currency, as the system works without a central bank or single administrator. … Bitcoin was invented by an unknown person or group of people under the name Satoshi Nakamoto and released as open-source software in 2009.

The “unknown” part really disturbs me, as do the “cryptocurrency” aspects, but let’s continue.  Do you remember the Star Trek episodes in which someone asks, “How much does it cost?” and the answer is “_______ credits”? That is essentially what Bitcoin is: digital currency. No one controls Bitcoin; bitcoins aren’t printed, like dollars or euros. They’re produced by people, and increasingly businesses, running computers all around the world, using software that solves mathematical problems. Any physical object representing a “coin” is just a token; the currency itself exists only digitally.

Bitcoin transactions are completed when a “block” is added to the blockchain database that underpins the currency; however, this can be a laborious process.  Segwit2x proposed moving bitcoin’s transaction data outside of the block and onto a parallel track to allow more transactions to take place. The changes happened in November, and it remains to be seen whether they will have a positive or negative impact on the price of bitcoin in the long term.

It’s been an incredible 2017 for bitcoin growth, with its value quadrupling in the past six months, surpassing the value of an ounce of gold for the first time. It means if you invested £2,000 five years ago, you would be a millionaire today.

You cannot “churn out” an unlimited number of Bitcoin. The bitcoin protocol – the rules that make bitcoin work – says that only twenty-one (21) million bitcoins can ever be created by miners. However, these coins can be divided into smaller parts (the smallest divisible amount is one hundred millionth of a bitcoin and is called a ‘Satoshi’, after the founder of bitcoin).
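The supply cap and the smallest unit described above work out to a fixed number of indivisible units, as simple arithmetic shows:

```python
# Bitcoin's hard supply cap, expressed in its smallest unit.
MAX_BITCOINS = 21_000_000
SATOSHIS_PER_BITCOIN = 100_000_000  # one satoshi = one hundred-millionth of a bitcoin

total_satoshis = MAX_BITCOINS * SATOSHIS_PER_BITCOIN
print(total_satoshis)  # 2100000000000000 (2.1 quadrillion satoshis)
```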

Conventional currency has been based on gold or silver. Theoretically, you knew that if you handed over a dollar at the bank, you could get some gold back (although this didn’t actually work in practice). But bitcoin isn’t based on gold; it’s based on mathematics. To me this is absolutely fascinating.  Around the world, people are using software programs that follow a mathematical formula to produce bitcoins. The mathematical formula is freely available, so that anyone can check it. The software is also open source, meaning that anyone can look at it to make sure that it does what it is supposed to.
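The “mathematical problems” that mining software solves are, in essence, hash puzzles. Here is a drastically simplified sketch of the idea (real bitcoin mining uses double SHA-256 and an enormously harder target, but the principle is the same):

```python
import hashlib

def mine(data, difficulty=4):
    """Toy proof-of-work: find a nonce so that SHA-256(data + nonce)
    begins with `difficulty` zero hex digits.  Finding the nonce takes
    brute force, but anyone can verify it with a single hash."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block data")
print(digest[:8])
```

This asymmetry (hard to find, trivial to check) is what lets anyone verify the work, mirroring the open, checkable formula the paragraph above describes.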

SPECIFIC CHARACTERISTICS:

  1. It’s decentralized

The bitcoin network isn’t controlled by one central authority. Every machine that mines bitcoin and processes transactions makes up a part of the network, and the machines work together. That means that, in theory, one central authority can’t tinker with monetary policy and cause a meltdown – or simply decide to take people’s bitcoins away from them, as the European Central Bank decided to do in Cyprus in early 2013. And if some part of the network goes offline for some reason, the money keeps on flowing.

  2. It’s easy to set up

Conventional banks make you jump through hoops simply to open a bank account. Setting up merchant accounts for payment is another Kafkaesque task, beset by bureaucracy. However, you can set up a bitcoin address in seconds, no questions asked, and with no fees payable.

  3. It’s anonymous

Well, kind of. Users can hold multiple bitcoin addresses, and they aren’t linked to names, addresses, or other personally identifying information.

  4. It’s completely transparent

Bitcoin stores details of every single transaction that ever happened in the network in a huge version of a general ledger, called the blockchain. The blockchain tells all. If you have a publicly used bitcoin address, anyone can tell how many bitcoins are stored at that address. They just don’t know that it’s yours. There are measures people can take to make their activities more opaque on the bitcoin network, though, such as not using the same bitcoin addresses consistently, and not transferring lots of bitcoin to a single address.
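The “huge general ledger” can be sketched in miniature. In this toy illustration (not the real Bitcoin block format), each block records the hash of the previous block, so tampering with any past transaction breaks every later link:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny chain: every block references the hash of its predecessor.
chain, prev = [], "0" * 64  # an all-zero hash marks the genesis block
for txs in [["alice -> bob: 1.0"], ["bob -> carol: 0.5"]]:
    block = {"transactions": txs, "prev_hash": prev}
    chain.append(block)
    prev = block_hash(block)

# Verification: each block must reference its predecessor's exact hash.
valid = all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print(valid)  # True
```

Changing any earlier transaction changes that block's hash, so the next block's stored `prev_hash` no longer matches and the chain fails verification.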

  5. Transaction fees are minuscule

Your bank may charge you a £10 fee for international transfers. Bitcoin doesn’t.

  6. It’s fast

You can send money anywhere and it will arrive minutes later, as soon as the bitcoin network processes the payment.

  7. It’s non-repudiable

When your bitcoins are sent, there’s no getting them back, unless the recipient returns them to you. They’re gone forever.

WHERE TO BUY AND SELL

I definitely recommend you do your homework before buying Bitcoin, because its value behaves like a roller coaster, but there are several exchanges on which Bitcoin can be purchased or sold.  Good luck.

CONCLUSIONS:

Is Bitcoin a bubble? It’s a natural question to ask—especially after Bitcoin’s price shot up from $12,000 to $15,000 this past week.

Brent Goldfarb is a business professor at the University of Maryland, and William Deringer is a historian at MIT. Both have done research on the history and economics of bubbles, and they talked to Ars by phone this week as Bitcoin continues its surge.

Both academics saw clear parallels between the bubbles they’ve studied and Bitcoin’s current rally. Bubbles tend to be driven either by new technologies (like railroads in 1840s Britain or the Internet in the 1990s) or by new financial innovations (like the financial engineering that produced the 2008 financial crisis). Bitcoin, of course, is both a new technology and a major financial innovation.

“A lot of bubbles historically involve some kind of new financial technology the effects of which people can’t really predict,” Deringer told Ars. “These new financial innovations create enthusiasm at a speed that is greater than people are able to reckon with all the consequences.”

Neither scholar wanted to predict when the current Bitcoin boom would end. But Goldfarb argued that we’re seeing classic signs that often occur near the end of a bubble. The end of a bubble, he told us, often comes with “a high amount of volatility and a lot of excitement.”

Goldfarb expects that in the coming months we’ll see more “stories about people who got fabulously wealthy on bitcoin.” That, in turn, could draw in more and more novice investors looking to get in on the action. From there, some triggering event will start a panic that will lead to a market crash.

“Uncertainty of valuation is often a huge issue in bubbles,” Deringer told Ars. Unlike a stock or bond, Bitcoin pays no interest or dividends, making it hard to figure out how much the currency ought to be worth. “It is hard to pinpoint exactly what the fundamentals of Bitcoin are,” Deringer said.

That uncertainty has allowed Bitcoin’s value to soar 1,000-fold over the last five years. But it could also make the market vulnerable to crashes if investors start to lose confidence.

I would say travel at your own risk.

 
