The convergence of “smart” microphones, new digital signal processing technology, voice recognition and natural language processing has opened the door for voice interfaces.  Let’s first define a “smart device”.

A smart device is an electronic device, generally connected to other devices or networks via different wireless protocols such as Bluetooth, NFC, Wi-Fi, 3G, etc., that can operate to some extent interactively and autonomously.

I am told by my youngest granddaughter that all the cool kids now have in-home, voice-activated devices like Amazon Echo or Google Home. These devices can play your favorite music, answer questions, read books, control home automation, and all those other things people thought the future was about in the 1960s. For the most part, the speech recognition of the devices works well; although you may find yourself with an extra dollhouse or two occasionally. (I do wonder if they speak “southern” but that’s another question for another day.)

A smart speaker is, essentially, a speaker with added internet connectivity and “smart assistant” voice-control functionality. The smart assistant is typically Amazon Alexa or Google Assistant, both of which are independently managed by their parent companies and have been opened up for other third-parties to implement into their hardware. The idea is that the more people who bring these into their homes, the more Amazon and Google have a “space” in every abode where they’re always accessible.

Let me first state that my family does not, as yet, have a smart device but we may be inching in that direction.  If we look at numbers, we see the following projections:

  • 175 million smart devices will be installed in U.S. households by 2022, with at least seventy (70) million households having at least one smart speaker. (Digital Voice Assistants: Platforms, Revenues & Opportunities, 2017-2022. Juniper Research, November 2017.)
  • Amazon sold over eleven (11) million Alexa voice-controlled Amazon Echo devices in 2016. That number was expected to double for 2017. (Smart Home Devices Forecast, 2017 to 2022 (US). Forrester Research, October 2017.)
  • Amazon Echo accounted for 70.6% of all voice-enabled speaker users in the United States in 2017, followed by Google Home at 23.8%. (eMarketer, April 2017)
  • In 2018, 38.5 million millennials are expected to use voice-enabled digital assistants—such as Amazon Alexa, Apple Siri, Google Now and Microsoft Cortana—at least once per month. (eMarketer, April 2017.)
  • The growing smart speaker market is expected to hit 56.3 million shipments globally in 2018. (Canalys Research, January 2018.)
  • The United States will remain the most important market for smart speakers in 2018, with shipments expected to reach 38.4 million units. China is a distant second at 4.4 million units. (Canalys Research, April 2018.)

With that being the case, let’s now look at the smart speakers that are now commercialized and available for purchase online or in retail stores:

  • Amazon Echo Spot–$114.99
  • Sonos One–$199.00
  • Google Home–$129.00
  • Amazon Echo Show–$179.99
  • Google Home Max–$399.00
  • Google Home Mini–$49.00
  • Fabriq Choros–$69.99
  • Amazon Echo (Second Generation)–$84.99
  • Harman Kardon Invoke–$199.00
  • Amazon Echo Plus–$149.00

CONCLUSIONS:  If you are interested in purchasing one from the list above, I would definitely recommend you do your homework.  Investigate the services provided by a smart speaker to make sure you are getting what you desire.  Be aware that additional models will certainly enter the marketplace as time goes by.  GOOD LUCK.


Jeanne Calment was a typical woman of her time. Born in Arles, France, in 1875, she lived a rather unremarkable life by most accounts — except for one thing. When she died in 1997 at the age of 122, she was on record as the oldest person to have ever lived. “I just kept getting older and couldn’t help it,” she once said.

So, what does the extraordinary life of this ordinary woman have to do with us today? More than you might think. In her day, living to be one hundred was extremely rare. But today in the United States, people one hundred and over represent the second-fastest-growing age group in the country. The fastest? Just think of that.  Many sixty-five-year-olds today will live well into their 90s.

Think of it another way: A ten-year-old child today — maybe your grandchild — has a fifty (50) percent chance of living to the age of one hundred and four.  Some demographers have even speculated that the first person to live to one hundred and fifty (150) is alive today.

I’m not suggesting that we should expect to live to one hundred and twenty-two (122), but as individuals and as a society, we need to prepare for a time when it is common to live to one hundred (100). We have to create a new mind-set around aging and solutions for helping us to live better as we live longer — what is called  Disrupt Aging. There are three areas where this is really important: health, wealth and self.

HEALTH:  As we think about living to one hundred (100), we simply cannot continue doing the same things we’ve been doing with regard to health. Our health has more to do with the choices we make each day in how we live our lives than with an occasional visit to the doctor’s office. We’re beginning to embrace a new vision and a new culture of health that focuses more on preventing disease and emphasizes well-being throughout our lives.  How many Big Macs have you had this week? President Trump is said to drink six to eight Diet Cokes PER DAY.  When was the last time you exercised?  How about reducing your stress level? (Let me mention right now that I’m preaching to the choir. I probably need to look in a mirror before launching this post.)  Back in March of 2017, I had a hip replacement.  My recovery, for my age, is right on target.  I know several friends who have had hip, knee, shoulder and even ankle replacements.  What ails us, if it’s skeletal, can probably be fixed.  The cardiovascular system is much trickier and requires constant vigilance, but it can be done.

WEALTH:  One of the things people fear most about living longer is that they will outlive their money. Unfortunately, for many this fear is a very real one, especially for younger people who tend to view saving for retirement as an exercise in futility. My mom and dad did just that; as a result, I’m still working.  I enjoy working, so it’s not drudgery day after day, but I’m certainly old enough to retire. Then again, I just replaced the starter on my truck–$598.00. The range in our kitchen was definitely on its last legs, and I do mean last legs.  Have you bought one of those lately? Go rob a bank.  My parents ran out of money and had to survive on Social Security and a reverse mortgage.  Not good. I would recommend to anyone: look carefully at the reverse mortgage before you sign on the dotted line.  What if, instead of saving for retirement, we think of saving to do the things we’ve always wanted to do? In other words, saving not for the absence of financial hardship but for the means to thrive and afford to live the life you want to live — saving for life.  The golden rule here is: start early, even if it means a few dollars per month.

SELF:  Finally, we need to challenge outdated attitudes and stereotypes about aging. Research shows that our self-perceptions of aging influence not only how we age, but also our health status as we get older. More positive self-perceptions of aging are associated with living longer with less disability.

We need to get rid of the outdated stereotypes about aging and spark new solutions, so more of us can choose how we want to live as we age. For young people, living to one hundred is not a pipe dream, it’s a real possibility. And it’s up to us to help them realize and prepare for it, because Jeanne Calment’s strategy of just getting older because she “couldn’t help it” isn’t going to cut it.

You can see from the chart below that we are living longer. It’s going to happen, and with the marvelous medical treatment we have today, one hundred years is not that far-fetched.



January 25, 2018

Portions of this post are taken from Design News Daily Magazine, January publication.

The Detroit Auto Show has a weirdly duplicitous vibe these days. The biggest companies that attend make sure to talk about things that make them sound future-focused, almost benevolent. They talk openly about autonomy, electrification, and even embracing other forms of transportation. But they do this while doling out product announcements that are very much about meeting the current demands of consumers who, enjoying low gas prices, want trucks and crossover SUVs. With that said, it really is interesting to take a look at several “concept” cars: cars we just may be driving in the future, if not the near future.  Let’s take a look right now.

Guangzhou Automobile Co. (better known as GAC Motor) stole the show in Detroit, at least if we take their amazing claims at face value. The Chinese automaker rolled out the Enverge electric concept car, which is said to have a 373-mile all-electric range based on a 71-kWh battery. Incredibly, it is also reported to have a wireless recharge time of just 10 minutes for a 240-mile range. Enverge’s power numbers are equally impressive: 235 HP and 302 lb-ft of torque, with a 0-62 mph time of just 4.4 seconds. GAC, the sixth biggest automaker in China, told the Detroit audience that it would start selling cars in the US by Q4 2019. The question is whether its extraordinary performance numbers will hold up to EPA scrutiny.  If GAC can live up to its specifications, it may have the real deal here.  Very impressive.

As autonomous vehicle technology advances, automakers are already starting to examine the softer side of that market – that is, how will humans interact with the machines? And what are some of the new applications for the technology? That’s where Ford’s pizza delivery car came in. The giant automaker started delivering Domino’s pizzas in Ann Arbor, MI, late last year with an autonomous car. In truth, the car had a driver at the wheel, sitting behind a window screen. But the actual delivery was automated: Customers were alerted by a text; a rear window rolled down; an automated voice told them what to do, and they grabbed the pie. Ford engineers were surprised to find that the humans weren’t intimidated by the technology. “In the testing we did, people interacted nicely with the car,” Ford autonomous car research engineer Wayne Williams told Design News. “They talked to it as if it were a robot. They waved when it drove away. Kids loved it. They’d come running up to it.” The message to Ford was clear – autonomous cars are about more than just personal transportation. Delivery services are a real possibility, too.

Most of today’s autonomous cars use unsightly, spinning Lidar buckets atop their roofs. At the auto show, Toyota talked about an alternative Lidar technology that’s sleek and elegant. You have to admit that for now, the autonomous cars look UGLY—really ugly.  Maybe Toyota has the answer.

In a grand rollout, Lexus introduced a concept car called the LF-1 Limitless. The LF-1 is what we’ve all come to expect from modern concept cars – a test bed for numerous power trains and autonomous vehicle technologies. It can be propelled by a fuel cell, hybrid, plug-in hybrid, all-electric or gasoline power train. And its automated driving system includes a “miniaturized supercomputer with links to navigation data, radar sensors, and cameras for a 360-degree view of your surroundings with predictive capabilities.” The sensing technologies are all part of a system known as “Chauffeur mode.” Lexus explained that the LF-1 is setting the stage for bigger things: By 2025, every new Lexus around the world will be available as a dedicated electrified model or will have an electrified option.

Nissan’s Xmotion concept, which is said to combine Japanese aesthetics with SUV styling, includes seven digital screens. Three main displays join left- and right-side screens across the instrument panel. There’s also a “digital room mirror” in the ceiling and a center console display. Moreover, the displays can be controlled by gestures and even eye motions, enabling drivers to focus on the task of driving. A Human Machine Interface also allows drivers to easily switch from Nissan’s ProPilot automated driving system to a manual mode.

Cadillac showed off its Super Cruise technology, which is said to be the only semi-autonomous driving system that actually monitors the driver’s attention level. If the driver is attentive, Super Cruise can do amazing things – tooling along for hours on a divided highway with no intersections, for example, while handling all the steering, acceleration and braking. GM describes it as an SAE Level 2 autonomous system. It’s important because it shows autonomous vehicle technology has left the lab and is making its debut on production vehicles. Super Cruise launched late in 2017 on the Cadillac CT6 (shown here).

In a continuing effort to understand the relationship between self-driving cars and humans, Ford Motor Co. and Virginia Tech displayed an autonomous test vehicle that communicates its intent to other drivers, bicyclists, and pedestrians. Such communication is important, Ford engineers say, because “designing a way to replace the head nod or hand wave is fundamental to ensuring safe and efficient operation of self-driving vehicles.”

Infiniti rolled out the Q Inspiration luxury sedan concept, which combines its variable compression ratio engine with Nissan’s ProPilot semi-autonomous vehicle technology. Infiniti claims the engine combines “turbocharged gasoline power with the torque and efficiency of a hybrid or diesel.” Known as the VC-Turbo, the four-cylinder engine continually transforms itself, adjusting its compression ratio to optimize power and fuel efficiency. At the same time, the sedan features ProPilot Assist, which provides assisted steering, braking and acceleration during driving. You can see from the photo below that the photographers were there covering the Infiniti.

The eye-catching Concept-i vehicle provided a more extreme view of the distant future, when vehicles will be equipped with artificial intelligence (AI). Meant to anticipate people’s needs and improve their quality of life, Concept-i is all about communicating with the driver and occupants. An AI agent named Yui uses light, sound, and even touch, instead of traditional screens, to communicate information. Colored lights in the footwells, for example, indicate whether the vehicle is an autonomous or manual drive; projectors in the rear deck project outside views onto the seat pillar to warn drivers about potential blind spots, and a next-generation heads-up display keeps the driver’s eyes and attention on the road. Moreover, the vehicle creates a feeling of warmth inside by emanating sweeping lines of light around it. Toyota engineers created the Concept-i features based on their belief that “mobility technology should be warm, welcoming, and above all, fun.”

CONCLUSIONS:  To be quite honest, I was not really blown away by this year’s offerings.  I LOVE the Infiniti and the Toyota concept car shown above.  The American models did not capture my attention. Just a thought.

One source for this post is the Forbes Magazine article “U.S. Dependence on Foreign Oil Hits 30-Year Low” by Mr. Mike Patton.  Other sources were used as well.

The United States is at this point in time “energy independent”—for the most part.  Do you remember the ’70s and how, at times, it was extremely difficult to buy gasoline?  If you were driving during the 1970s, you certainly remember waiting in line for an hour or more just to put gas in the ol’ car. Thanks to the OPEC oil embargo, petroleum was in short supply. At that time, America’s need for crude oil was soaring while U.S. production was falling. As a result, the U.S. was becoming increasingly dependent on foreign suppliers. Things have changed a great deal since then. Beginning in the mid-2000s, America’s dependence on foreign oil began to decline.  One of the reasons for this decline is the abundance of natural gas, or methane, in the US.

“At the rate of U.S. dry natural gas consumption in 2015 of about 27.3 Tcf (trillion cubic feet) per year, the United States has enough natural gas to last about 86 years. The actual number of years will depend on the amount of natural gas consumed each year, natural gas imports and exports, and additions to natural gas reserves.” (July 25, 2017)

For most of the one hundred and fifty (150) years of U.S. oil and gas production, natural gas has played second fiddle to oil. That appeared to change in the mid-2000s, when natural gas became the star of the shale revolution, and eight of every ten (10) rigs were chasing gas targets.

But natural gas turned out to be a shooting star. Thanks to the industry’s incredible success in leveraging game-changing technology to commercialize ultralow-permeability reservoirs, the market was looking at a supply glut by 2010, with prices below producer break-even values in many dry gas shale plays.

Everyone knows what happened next. The shale revolution quickly transitioned to crude oil production, and eight of every ten (10) rigs suddenly were drilling liquids. What many in the industry did not realize initially, however, is that tight oil and natural gas liquids plays would yield substantial associated gas volumes. With ongoing, dramatic per-well productivity increases in shale plays, and associated dry gas flowing from liquids resource plays, the beat just keeps going with respect to growth in oil, NGL and natural gas supplies in the United States.

Today’s market conditions certainly are not what had once been envisioned for clean, affordable and reliable natural gas. But producers can rest assured that vision of a vibrant, growing and stable market will become a reality; it just will take more time to materialize. There is no doubt that significant demand growth is coming, driven by increased consumption in industrial plants and natural gas-fired power generation, as well as exports, including growing pipeline exports to Mexico and overseas shipments of liquefied natural gas.

Just over the horizon, the natural gas star is poised to again shine brightly. But in the interim, what happens to the supply/demand equation? This is a critically important question for natural gas producers, midstream companies and end-users alike.

Natural gas production in the lower-48 states has increased from less than fifty (50) billion cubic feet a day (Bcf/d) in 2005 to about 70 Bcf/d today. This is an increase of forty (40%) percent over nine years, or a compound annual growth rate of about four (4%) percent. There is no indication that this rate of increase is slowing. In fact, with continuing improvements in drilling efficiency and effectiveness, natural gas production is forecast to reach almost ninety (90) Bcf/d by 2020, representing another twenty-nine (29%) percent increase over 2014 output.
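The growth arithmetic quoted above is easy to verify. Here is a quick sketch (the production figures are the ones cited; the code just does the compounding):

```python
# Sanity-check the natural gas production growth figures cited above.
start_bcfd = 50.0   # lower-48 production in 2005, Bcf/d (approximate)
later_bcfd = 70.0   # production "today" (circa 2014), Bcf/d
years = 9

total_growth = later_bcfd / start_bcfd - 1.0             # overall increase
cagr = (later_bcfd / start_bcfd) ** (1.0 / years) - 1.0  # compound annual rate

print(f"Total increase: {total_growth:.0%}")    # 40%
print(f"CAGR over {years} years: {cagr:.1%}")   # 3.8%, roughly 4% per year

# The 2020 forecast of ~90 Bcf/d versus 2014 output of ~70 Bcf/d:
print(f"2020 vs. 2014: {90.0 / 70.0 - 1.0:.0%}")  # 29%
```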

Most of this production growth is concentrated in a few extremely prolific producing regions. Four of these are in a fairway that runs from the Texas Gulf Coast to North Dakota through the middle section of the country, and encompasses the Eagle Ford, the Permian Basin, the Granite Wash, the South Central Oklahoma Oil Play and other basins in Oklahoma, and the Williston Basin. The other major producing region is the Marcellus and Utica shales in the Northeast. Almost all the natural gas supply growth is coming from these regions.

We are at the point where this abundance allows US companies to export LNG, or liquefied natural gas.  To move this cleaner-burning fuel across oceans, natural gas must be converted into liquefied natural gas (LNG), a process called liquefaction. LNG is natural gas that has been cooled to –260° F (–162° C), changing it from a gas into a liquid that is 1/600th of its original volume.  This would be the same requirement for Dayton: the methane gas captured would need to be liquefied and stored.  The LNG is then transported in a vessel similar to the one shown below:
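As a quick aside, the liquefaction temperature converts exactly as quoted; a two-line check (a sketch, nothing more):

```python
def f_to_c(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

# The -260 deg F liquefaction temperature quoted above:
print(round(f_to_c(-260)))  # -162, matching the -162 deg C figure
```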

As you might expect, a vessel such as this requires very specific designs relative to the containment area.  A cutaway is given below to indicate just how exacting that design must be to accomplish, without mishap, the transportation of LNG to other areas of the world.

Loading LNG from storage to the vessel is no easy matter either and requires another significant expenditure of capital.

For this reason, LNG facilities around the world are somewhat limited in number.  The map below indicates their locations.

A typical LNG station, covering both processing and loading, may be seen below.  This one is in Darwin.


With natural gas in great supply, demand for this precious commodity will grow across the world.  We already see automobiles using natural gas instead of gasoline as a primary fuel.  Also, the cost of natural gas as a fuel is significantly less than gasoline, even with average US gasoline prices around $2.00 per gallon.  According to AAA, the national average for regular, unleaded gasoline has fallen for thirty-five (35) out of thirty-six (36) days to $2.21 per gallon and sits at the lowest mark for this time of year since 2004. Gas prices continue to drop in most parts of the country due to abundant fuel supplies and declining crude oil costs. Average prices are about fifty-five (55) cents less than a year ago, which is motivating millions of Americans to take advantage of cheap gas by taking long road trips this summer.

I think the bottom line is: natural gas is here to stay.


January 6, 2018

OKAY, how many of you have already said this year, “MAN, I have to lose some weight”?  I have a dear friend who put on a little weight over a couple of years, and he commented: “Twenty or twenty-five pounds every year and pretty soon it adds up.”  It does add up.  Let’s look at several numbers from the CDC and other sources.

  • The CDC estimates that three-quarters (3/4) of the American population will likely be overweight or obese by 2020. The latest figures, as of 2014, show that more than one-third (36.5%) of U.S. adults age twenty (20) and older and seventeen percent (17%) of children and adolescents aged two through nineteen (2–19) years were obese.
  • American obesity rates are on the rise, a Gallup poll finds. Americans have become even fatter than before, with nearly twenty-eight (28%) percent saying they are clinically obese. For example, a person 5'5" tall is considered overweight at 150 pounds; at 180 pounds this person has a BMI of thirty (30) and is considered obese.
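For reference, BMI is just weight over height squared. The sketch below uses the standard CDC formula for pounds and inches; the 5'5" example height is the one that makes the 180-pound figure work out to a BMI of 30:

```python
def bmi(weight_lb, height_in):
    """Body mass index from pounds and inches (CDC formula: 703 * lb / in^2)."""
    return 703.0 * weight_lb / height_in ** 2

# A person 5'5" (65 inches) tall:
print(round(bmi(150, 65), 1))  # 25.0, the start of "overweight"
print(round(bmi(180, 65), 1))  # 30.0, the "obese" threshold cited above
```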

Now, you might say—we are in good company:  According to the World Health Organization, the following countries have the highest rates of obesity.

  • Republic of Nauru. Formerly known as Pleasant Island, this tiny island country in the South Pacific has a population of only 9,300.
  • American Samoa
  • Tokelau
  • Tonga
  • French Polynesia
  • Republic of Kiribati
  • Saudi Arabia
  • Panama

There is absolutely no doubt that more and more Americans are overweight, even surpassing the magic BMI number of 30.  We all know what a reduction in weight can do for us on an individual basis, but have you ever considered what a reduction in weight can do for “other items”—namely, hardware?

  • Using lightweight components (composite materials) and high-efficiency engines enabled by advanced materials in one-quarter of U.S. fleet trucks and automobiles could save more than five (5) billion gallons of fuel annually by 2030, according to the US Energy Department Vehicle Technologies Office.
  • According to the Oak Ridge National Laboratory, the Department of Energy’s Carbon Fiber Technology Facility has the capacity to produce up to twenty-five (25) tons of carbon fiber per year.
  • Replacing heavy steel with high-strength steel, aluminum, or glass fiber-reinforced polymer composites can decrease component weight by ten to sixty percent (10-60 %). Longer term, materials such as magnesium and carbon fiber-reinforced composites could reduce the weight of some components by fifty to seventy-five percent (50-75%).
  • It costs $10,000 per pound to put one pound of payload into Earth orbit. NASA’s goal is to reduce the cost of getting to space down to hundreds of dollars per pound within twenty-five (25) years and tens of dollars per pound within forty (40) years.
  • The SpaceX Falcon Heavy rocket will be the first rocket ever to break the $1,000-per-pound-to-orbit barrier—less than a tenth as much as the Shuttle. (SpaceX press release, July 13, 2017.)
  • The Solar Impulse 2 flew 40,000 km without fuel. The 3,257-pound solar plane used sandwiched carbon fiber and honeycombed alveolate foam for the fuselage, cockpit and wing spars.

So you see, reduction in weight can have lasting effects for just about every person and some pieces of hardware.  Let’s you and me get it off.


December 29, 2017

OK, it is once again time to make those New Year’s resolutions.  Health, finances, weight loss, quit smoking, cut out sugar, daily exercise, etc. You get the drill.   All of those resolutions we get tired of and basically forget by the end of February.  If you had all the money in the world, as some do, you might not even make resolutions.  You might sit back and watch it roll in.  Let’s take a quick look.

According to the Bloomberg Billionaires Index, 2017 proved to be an outstanding year for the world’s richest people, who watched their net worth rise 23 percent, from $4.4 trillion in 2016 to $5.3 trillion by the end of trading on Tuesday, December 26.

The following graph indicates the progress of the world’s richest through 2017.  As you can see, the world’s richest individuals added a very cool one trillion dollars ($1 trillion USD) to their collective wealth.  That’s the entire group of richest people, but even so it’s a huge sum of “dinero”.

Take a look at these dudes below.  Do you know who they are?  I’m going to let you ponder this over the weekend, but they all “look familiar” and they are all very, very wealthy.


  • The U.S. has the largest presence on the index, with 159 billionaires. They added $315 billion, an eighteen (18%) percent gain that gives them a collective net worth of $2 trillion.
  • Russia’s twenty-seven (27) richest people put behind them the economic pain that followed President Vladimir Putin’s 2014 annexation of Crimea, adding $29 billion to reach $275 billion, surpassing the collective net worth they had before western economic sanctions began.
  • It was also a banner year for tech moguls, with the fifty-seven (57) technology billionaires on the index adding $262 billion, a thirty-five (35%) percent increase that was the most of any sector on the ranking.
  • Facebook Inc. co-founder Mark Zuckerberg had the fourth-largest U.S. dollar increase on the index, adding $22.6 billion, or forty-five (45%) percent, and filed plans to sell eighteen (18%) percent of his stake in the social media giant as part of his plan to give away the majority of his $72.6 billion fortune.
  • In all, the 440 billionaires on the index who added to their fortunes in 2017 gained a combined $1.05 trillion.
  • The Bloomberg index discovered sixty-seven (67) hidden billionaires in 2017.
  • Renaissance Technologies’ Henry Laufer was identified with a net worth of $4 billion in April. Robert Mercer, 71, who plans to step down as co-CEO of the world’s most profitable trading fund on Jan. 1, couldn’t be confirmed as a billionaire.
  • Two fish billionaires were caught: Russia’s Vitaly Orlov and Chuck Bundrant of Trident Seafood.
  • A Brazilian tycoon who built a $1.3 billion fortune with Latin America’s biggest wind developer was interviewed in April.
  • Two New York real estate moguls were identified, Ben Ashkenazy and Joel Wiener.
  • Several technology startup billionaires were identified, including the chief executive officer of Roku Inc. and the two co-founders of Wayfair Inc.
  • Investor euphoria created a number of bitcoin billionaires, including Tyler and Cameron Winklevoss, with the value of the cryptocurrency soaring to more than $16,000 Tuesday, up from $1,140 on Jan. 4. The leap came with a chorus of warnings, including from Janet Yellen, who called the emerging tender a “highly speculative asset” at her last news conference as chair of the Federal Reserve, on Dec. 13.

I’m not going to highlight the losers because even their monetary losses leave them as millionaires and billionaires.  I know this post makes your day, but I tell you these things to indicate that maybe, just maybe, it is possible to achieve monetary success in 2018.  I DO KNOW IT’S POSSIBLE TO TRY.  Now, when I say success, I’m not necessarily talking about millions and certainly not billions—enough to cover the basic expenses with a little left over for FUN.

Here’s hoping you all have a marvelous NEW YEAR.  Remember—clean slate.  Starting over. Have a great year.


December 10, 2017

If you read technical literature with some hope of keeping up with the latest trends in technology, you find words and phrases such as AI (Artificial Intelligence) and DL (Deep Learning). They seem to be used interchangeably, but the facts deny that premise.  Let’s look.

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine-learning methods based on learning data representations, as opposed to task-specific algorithms. (NOTE: The key words here are MACHINE LEARNING.) The learning can be supervised, semi-supervised or unsupervised.  The prospect of developing learning mechanisms and software to control machines is frightening to many but definitely very interesting to most.  Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks.  Machine learning, in this sense, loosely imitates human neural networks with physical hardware: i.e., computers and computer programming.  Never in the history of our species has this degree of success been possible; only now, with the advent of very powerful computers and programs capable of handling “big data,” has it become possible.

With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.  The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs.  Because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before. Deep learning is a class of machine learning algorithms that accomplish the following:

  • Use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input.
  • Learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners.
  • Learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
  • Use some form of gradient descent for training via backpropagation.
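The four properties above can be seen in even a toy network. The sketch below is a minimal two-layer network, written in plain Python, trained by gradient descent via backpropagation on the classic XOR problem; the layer sizes, learning rate and iteration count are illustrative choices, not a recipe.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Nonlinear processing unit: squashes any input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# 2 inputs -> 3 hidden units -> 1 output
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0] * 3
W2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = 0.0

# Supervised learning: labeled examples of XOR.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    # Each layer takes the previous layer's output as its input.
    h = [sigmoid(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
         for j in range(3)]
    y = sigmoid(sum(w * hj for w, hj in zip(W2, h)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr = 0.5
before = loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: push the output error back through the
        # chain rule and nudge every weight downhill (gradient descent).
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(3):
            dh = dy * W2[j] * h[j] * (1 - h[j])
            W2[j] -= lr * dy * h[j]
            for i in range(2):
                W1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy
after = loss()
print(after < before)  # training should reduce the squared error
```

Real deep-learning systems differ mainly in scale: many more layers, millions of weights, and far more efficient gradient computation.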

Layers that have been used in deep learning include hidden layers of an artificial neural network and sets of propositional formulas.  They may also include latent variables organized layer-wise in deep generative models such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.


Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.

An ANN is based on a collection of connected units called artificial neurons (analogous to neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, increasing or decreasing the strength of the signal sent downstream.
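A single artificial neuron of the kind just described is only a few lines of code. In this sketch (the weights, bias and inputs are arbitrary illustrative values), the neuron sums its weighted input signals, adds a bias, and squashes the result into (0, 1), giving it the bounded "state" mentioned above.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals, one weight per synapse.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Sigmoid activation keeps the neuron's state between 0 and 1.
    return 1.0 / (1.0 + math.exp(-total))

state = neuron([0.5, 0.9], [0.4, -0.2], 0.1)
print(0.0 < state < 1.0)  # the output is always a bounded "state"
```

Learning, in this picture, is nothing more than adjusting the weights and bias so the neuron's output moves toward the desired signal.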

Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times.

The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.

Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.

As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude fewer than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, playing “Go”).


Just what applications could take advantage of “deep learning”?


A common evaluation set for image classification is the MNIST database. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with the TIMIT speech corpus, its small size allows multiple configurations to be tested. A comprehensive list of results on this set is available.

Deep learning-based image recognition has become “superhuman”, producing more accurate results than human contestants. This first occurred in 2011.

Deep learning-trained vehicles now interpret 360° camera views.   Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes.

The iPhone X, I am told, uses facial recognition as one method of ensuring security and a would-be hacker’s ultimate failure to unlock the phone.


Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of a) identifying the style period of a given painting, b) “capturing” the style of a given painting and applying it in a visually pleasing manner to an arbitrary photograph, and c) generating striking imagery based on random visual input fields.


Neural networks have been used for implementing language models since the early 2000s.  LSTM helped to improve machine translation and language modeling.  Other key techniques in this field are negative sampling  and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep-learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as probabilistic context free grammar (PCFG) implemented by an RNN.   Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.  Deep neural architectures provide the best results for constituency parsing,  sentiment analysis,  information retrieval,  spoken language understanding,  machine translation, contextual entity linking, writing style recognition and others.
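The "position in a vector space" idea behind word embeddings is easy to demonstrate. In the toy sketch below, the three-dimensional vectors are invented for illustration (real word2vec embeddings have hundreds of dimensions learned from text), but the principle is the same: related words sit closer together, as measured here by cosine similarity.

```python
import math

# Invented 3-dimensional "embeddings"; real ones are learned from text.
embedding = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "apple": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a)) *
            math.sqrt(sum(y * y for y in b)))
    return dot / norm

# Semantically related words are closer than unrelated ones.
print(cosine(embedding["king"], embedding["queen"]) >
      cosine(embedding["king"], embedding["apple"]))
```

Feeding such vectors, rather than raw words, into an RNN is what lets the network generalize across words that appear in similar contexts.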

Google Translate (GT) uses a large end-to-end long short-term memory network.  Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system “learns from millions of examples.”  It translates “whole sentences at a time, rather than pieces.”  Google Translate supports over one hundred languages.  The network encodes the “semantics of the sentence rather than simply memorizing phrase-to-phrase translations.”  GT can translate directly from one language to another, rather than using English as an intermediate.


A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects.  Research has explored use of deep learning to predict biomolecular target, off-target and toxic effects of environmental chemicals in nutrients, household products and drugs.

AtomNet is a deep learning system for structure-based rational drug design.   AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis.


Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.
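For readers unfamiliar with the term, the RFM variables mentioned above are Recency (days since last purchase), Frequency (number of purchases) and Monetary value (total spend). The sketch below computes them from invented purchase records; a deep reinforcement learner would then use such features as the state on which it bases its marketing actions.

```python
from datetime import date

# Invented purchase histories: (purchase date, amount spent).
today = date(2018, 1, 1)
customers = {
    "alice": [(date(2017, 12, 20), 40.0), (date(2017, 11, 5), 25.0)],
    "bob":   [(date(2017, 3, 14), 10.0)],
}

def rfm(purchases):
    # Recency: days since the most recent purchase.
    recency = (today - max(d for d, _ in purchases)).days
    # Frequency: how many purchases were made.
    frequency = len(purchases)
    # Monetary: total amount spent.
    monetary = sum(amount for _, amount in purchases)
    return recency, frequency, monetary

print(rfm(customers["alice"]))  # (12, 2, 65.0)
```

Summing a customer's predicted future value over such states is what gives the learned value function its "customer lifetime value" interpretation.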


Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music recommendations.  Multiview deep learning has been applied for learning user preferences from multiple domains.  The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.
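A latent factor model of the kind recommendation systems use can be sketched in a few lines: each user and each item gets a small learned vector, and a predicted rating is their dot product. The ratings, sizes and learning rate below are invented for illustration; real systems factor matrices with millions of users and items.

```python
import random

random.seed(1)

# Sparse observed ratings: (user, item) -> rating.
ratings = {(0, 0): 5.0, (0, 1): 1.0, (1, 0): 4.0, (1, 2): 2.0}
n_users, n_items, k = 2, 3, 2  # k = number of latent factors

U = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
V = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]

def predict(u, i):
    # Predicted rating = dot product of user and item latent vectors.
    return sum(a * b for a, b in zip(U[u], V[i]))

def sse():
    return sum((predict(u, i) - r) ** 2 for (u, i), r in ratings.items())

lr = 0.05
before = sse()
for _ in range(500):
    for (u, i), r in ratings.items():
        err = predict(u, i) - r
        for f in range(k):
            gu, gv = err * V[i][f], err * U[u][f]
            U[u][f] -= lr * gu  # gradient step on the user vector
            V[i][f] -= lr * gv  # gradient step on the item vector
after = sse()
print(after < before)  # factorization fits the observed ratings
```

Once trained, `predict` also produces scores for (user, item) pairs that were never rated, which is exactly what a recommender needs.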


An autoencoder ANN was used in bioinformatics, to predict gene ontology annotations and gene-function relationships.

In medical informatics, deep learning was used to predict sleep quality based on data from wearables and predictions of health complications from electronic health record data.


Finding the appropriate mobile audience for mobile advertising is always challenging since there are many data points that need to be considered and assimilated before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.



  • Has best-in-class performance, significantly outperforming other solutions in multiple domains, including speech, language, vision, and playing games like Go. This isn’t by a little bit, but by a significant amount.
  • Reduces the need for feature engineering, one of the most time-consuming parts of machine-learning practice.
  • Is an architecture that can be adapted to new problems relatively easily (e.g., vision, time series, language), using techniques like convolutional neural networks, recurrent neural networks, and long short-term memory.


  • Requires a large amount of data; if you only have thousands of examples, deep learning is unlikely to outperform other approaches.
  • Is extremely computationally expensive to train. The most complex models take weeks to train using hundreds of machines equipped with expensive GPUs.
  • Does not have much in the way of a strong theoretical foundation, which leads to the next disadvantage.
  • Makes determining the topology, training method and hyperparameters a black art with no theory to guide you.
  • Is not easy to interpret. Other classifiers (e.g., decision trees, logistic regression) make it much easier to understand what’s going on.


Whether we like it or not, deep learning will continue to develop.  As hardware and the ability to capture and store huge amounts of data continue to improve, the machine-learning process will only get better.  There will come a time when we see a “rise of the machines.”  Let’s just hope humans retain the ability to control those machines.
