ARTIFICIAL INTELLIGENCE

February 12, 2019


Just what do we know about Artificial Intelligence or AI?  Portions of this post were taken from Forbes Magazine.

John McCarthy first coined the term artificial intelligence in 1956 when he invited a group of researchers from a variety of disciplines including language simulation, neuron nets, complexity theory and more to a summer workshop called the Dartmouth Summer Research Project on Artificial Intelligence to discuss what would ultimately become the field of AI. At that time, the researchers came together to clarify and develop the concepts around “thinking machines” which up to this point had been quite divergent. McCarthy is said to have picked the name artificial intelligence for its neutrality; to avoid highlighting one of the tracks being pursued at the time for the field of “thinking machines” that included cybernetics, automation theory and complex information processing. The proposal for the conference said, “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Today, modern dictionary definitions focus on AI being a sub-field of computer science and how machines can imitate human intelligence (being human-like rather than becoming human). The English Oxford Living Dictionary gives this definition: “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Merriam-Webster defines artificial intelligence this way:

  1. A branch of computer science dealing with the simulation of intelligent behavior in computers.
  2. The capability of a machine to imitate intelligent human behavior.

About thirty (30) years ago, a professor at the Harvard Business School (Dr. Shoshana Zuboff) articulated three laws based on research into the consequences that widespread computing would have on society. Dr. Zuboff had degrees in philosophy and social psychology, so she was definitely ahead of her time relative to the then-unknown field of AI.  In her book “In the Age of the Smart Machine: The Future of Work and Power”, she postulated the following three laws:

  • Everything that can be automated will be automated
  • Everything that can be informated will be informated. (NOTE: Informated was coined by Zuboff to describe the process of turning descriptions and measurements of activities, events and objects into information.)
  • In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

At that time there was definitely a significant lack of computing power.  That ship has sailed, and computing power is no longer the great hindrance to AI advancement it certainly once was.

 

WHERE ARE WE?

In a recent speech, Russian president Vladimir Putin made an incredibly prescient statement: “Artificial intelligence is the future, not only for Russia, but for all of humankind.” He went on to highlight both the risks and rewards of AI and concluded by declaring that whatever country comes to dominate this technology will be the “ruler of the world.”

As someone who closely monitors global events and studies emerging technologies, I think Putin’s lofty rhetoric is entirely appropriate. Funding for global AI startups has grown at a sixty percent (60%) compound annual growth rate since 2010. More significantly, the international community is actively discussing the influence AI will exert over both global cooperation and national strength. In fact, the United Arab Emirates just recently appointed its first state minister responsible for AI.

Automation and digitalization have already had a radical effect on international systems and structures. And considering that this technology is still in its infancy, every new development will only deepen the effects. The question is: Which countries will lead the way, and which ones will follow behind?

If we look at the criteria necessary for advancement, there are seven countries in the best position to rule the world with the help of AI.  These countries are as follows:

  • Russia
  • The United States of America
  • China
  • Japan
  • Estonia
  • Israel
  • Canada

The United States and China are currently in the best position to reap the rewards of AI. These countries have the infrastructure, innovations and initiative necessary to evolve AI into something with broadly shared benefits. In fact, China expects to dominate AI globally by 2030. The United States could still maintain its lead if it makes AI a top priority and commits the necessary investments while also pulling together all required government and private sector resources.

Ultimately, however, winning and losing will not be determined by which country gains the most growth through AI. It will be determined by how the entire global community chooses to leverage AI — as a tool of war or as a tool of progress.

Ideally, the country that uses AI to rule the world will do it through leadership and cooperation rather than automated domination.

CONCLUSIONS:  We dare not neglect this disruptive technology.  We cannot afford to lose this battle.


COMPUTER SIMULATION

January 20, 2019


More and more engineers, systems analysts, biochemists, city planners, medical practitioners, and individuals in entertainment fields are moving toward computer simulation.  Let’s take a quick look at simulation, then discover several examples of how very powerful this technology can be.

WHAT IS COMPUTER SIMULATION?

Simulation modelling is an excellent tool for analyzing and optimizing dynamic processes. Specifically, when mathematical optimization of complex systems becomes infeasible, and when conducting experiments within real systems is too expensive, time consuming, or dangerous, simulation becomes a powerful tool. The aim of simulation is to support objective decision making by means of dynamic analysis, to enable managers to safely plan their operations, and to save costs.

A computer simulation or a computer model is a computer program that attempts to simulate an abstract model of a particular system. … Computer simulations build on and are useful adjuncts to purely mathematical models in science, technology and entertainment.

Computer simulations have become a useful part of mathematical modelling of many natural systems in physics, chemistry and biology, human systems in economics, psychology, and social science and in the process of engineering new technology, to gain insight into the operation of those systems. They are also widely used in the entertainment fields.

Traditionally, the formal modeling of systems has been possible using mathematical models, which attempt to find analytical solutions to problems, enabling the prediction of the behavior of the system from a set of parameters and initial conditions.  The word prediction is a very important word in the overall process. One very critical part of the predictive process is designating the parameters properly: not only the upper and lower specifications, but also the parameters that define intermediate processes.

The reliability and the trust people put in computer simulations depend on the validity of the simulation model.  The degree of trust is directly related to the software itself and the reputation of the company producing the software. There will be considerably more in this course regarding vendors providing software to companies wishing to simulate processes and solve complex problems.

Computer simulations find use in the study of dynamic behavior in an environment that may be difficult or dangerous to implement in real life. For example, a nuclear blast may be represented with a mathematical model that takes into consideration various elements such as velocity, heat and radioactive emissions. Additionally, one may implement changes to the equation by changing certain other variables, like the amount of fissionable material used in the blast.  Another application involves predictive efforts relative to weather systems.  The mathematics involved in these determinations is significantly complex and usually involves a branch of math called “chaos theory”.
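To illustrate why weather-style prediction is so hard, here is a minimal Python sketch (not taken from the original post) of the logistic map, a textbook example from chaos theory. Two starting values that differ by one part in a million produce trajectories that diverge completely within a few dozen steps, which is exactly the sensitivity to initial conditions that limits long-range forecasts.

```python
# Illustrative sketch: sensitivity to initial conditions, the hallmark of
# chaotic systems such as weather models.
# The logistic map x_{n+1} = r * x_n * (1 - x_n) is chaotic for r = 4.0.

def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate the logistic map from initial condition x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)   # baseline initial condition
b = logistic_trajectory(0.300001)   # perturbed by one part in a million

for n in (0, 10, 20, 30):
    print(f"step {n:2d}: baseline={a[n]:.6f}  perturbed={b[n]:.6f}  diff={abs(a[n] - b[n]):.6f}")
```

By step 30 the two runs bear no resemblance to each other, even though they began essentially identical.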

Simulations largely help in determining behaviors when individual components of a system are altered. Simulations can also be used in engineering to determine potential effects, such as the effect on river systems of constructing dams.  Some companies call these behaviors “what-if” scenarios because they allow the engineer or scientist to apply differing parameters to discern cause-and-effect interactions.

One great advantage a computer simulation has over a mathematical model is that it allows a visual representation of events and a timeline. You can actually see the action and chain of events with simulation and investigate the parameters for acceptance.  You can examine the limits of acceptability using simulation.  All components and assemblies have upper and lower specification limits and must perform within those limits.
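As a hedged illustration of checking performance against upper and lower specification limits, the following Python sketch runs a simple Monte Carlo simulation of a hole diameter; the nominal value, process spread and limits are invented for the example.

```python
# Illustrative sketch (hypothetical numbers): Monte Carlo check of how often a
# simulated hole diameter falls outside its upper/lower specification limits.
import random

NOMINAL = 10.000            # nominal diameter, mm (assumed)
SIGMA   = 0.012             # process standard deviation, mm (assumed)
LSL, USL = 9.970, 10.030    # lower/upper specification limits, mm (assumed)

random.seed(42)
trials = 100_000
out_of_spec = sum(
    1 for _ in range(trials)
    if not (LSL <= random.gauss(NOMINAL, SIGMA) <= USL)
)

print(f"Estimated fraction out of spec: {out_of_spec / trials:.4%}")
```

Changing SIGMA or the limits and re-running is exactly the kind of "what-if" exercise described above.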

Computer simulation is the discipline of designing a model of an actual or theoretical physical system, executing the model on a digital computer, and analyzing the execution output. Simulation embodies the principle of “learning by doing” — to learn about the system we must first build a model of some sort and then operate the model. The use of simulation is an activity that is as natural as a child who role plays. Children understand the world around them by simulating (with toys and figurines) most of their interactions with other people, animals and objects. As adults, we lose some of this childlike behavior but recapture it later on through computer simulation. To understand reality and all of its complexity, we must build artificial objects and dynamically act out roles with them. Computer simulation is the electronic equivalent of this type of role playing and it serves to drive synthetic environments and virtual worlds. Within the overall task of simulation, there are three primary sub-fields: model design, model execution and model analysis.

REAL-WORLD SIMULATION:

The following examples are taken from computer screens representing real-world situations and/or problems that need solutions.  As mentioned earlier, “what-ifs” may be explored by animating the computer model, providing cause-and-effect responses to desired inputs. Let’s take a look.

A great host of mechanical and structural problems may be solved by using computer simulation. One example shows how the diameter of two matching holes may be affected by applying heat to the bracket.

 

The Newtonian and non-Newtonian flow of fluids, i.e. liquids and gases, has always been a subject of concern within piping systems.  Flow related to pressure and temperature may be approximated by simulation.
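Real piping studies of this kind are done with CFD packages, but a minimal sketch can still convey the idea. The Python snippet below uses the classic Hagen-Poiseuille relation for ideal laminar, Newtonian flow to show how flow rate responds to pressure drop and to the temperature-dependent viscosity of the fluid; the dimensions and viscosities are assumptions chosen to keep the flow laminar.

```python
# Illustrative sketch: Hagen-Poiseuille relation for ideal laminar, Newtonian
# flow in a circular pipe: Q = pi * r^4 * dP / (8 * mu * L).
# Dimensions and viscosities below are assumptions, not values from the post.
import math

def volumetric_flow(radius_m, delta_p_pa, viscosity_pa_s, length_m):
    """Volumetric flow rate (m^3/s) for laminar Newtonian pipe flow."""
    return math.pi * radius_m**4 * delta_p_pa / (8.0 * viscosity_pa_s * length_m)

# Viscosity drops as water warms, so the same pressure drop moves more fluid.
for label, mu in [("cold water (~5 C)", 1.5e-3), ("warm water (~50 C)", 0.55e-3)]:
    q = volumetric_flow(radius_m=0.001, delta_p_pa=1_000,
                        viscosity_pa_s=mu, length_m=1.0)
    print(f"{label}: Q = {q * 1e6:.2f} mL/s")
```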

 


Electromagnetics is an extremely complex field. One simulation shows how a magnetic field reacts to an applied voltage.

Chemical engineers are very concerned with reaction time when chemicals are mixed.  One example might be the ignition time when an oxidizer comes in contact with fuel.

Acoustics, or how sound propagates through a physical device or structure, may also be studied through simulation.

The transfer of heat from a warmer surface to a colder surface has always come into question. Simulation programs are extremely valuable in visualizing this transfer.
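As a minimal sketch of how such a heat-transfer simulation works (the material properties and geometry here are assumed, not taken from the post), the Python snippet below marches the one-dimensional heat equation forward in time with an explicit finite-difference scheme, showing heat spreading from the warm end of a bar toward the cold end.

```python
# Illustrative sketch: explicit finite-difference solution of the 1-D heat
# equation dT/dt = alpha * d2T/dx2.  Material properties are assumed.
alpha = 1e-4                 # thermal diffusivity, m^2/s (assumed)
n, dx, dt = 21, 0.01, 0.1    # grid points, spacing (m), time step (s)
assert alpha * dt / dx**2 <= 0.5, "explicit scheme stability condition"

T = [20.0] * n               # bar initially at 20 C
T[0], T[-1] = 100.0, 20.0    # warm left end, cold right end (held fixed)

for step in range(2000):     # march forward in time
    new = T[:]
    for i in range(1, n - 1):
        new[i] = T[i] + alpha * dt / dx**2 * (T[i + 1] - 2 * T[i] + T[i - 1])
    T = new

print(["%.1f" % t for t in T[::5]])   # temperatures sampled along the bar
```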

 

Equation-based modeling can be simulated to show how a structure, in this case a metal plate, is affected when forces are applied.

In addition to computer simulation, we have AR (augmented reality) and VR (virtual reality).  Those subjects are fascinating but will require another post for another day.  Hope you enjoy this one.

 

 


Portions of this post are taken from the January 2018 article written by John Lewis of “Vision Systems”.

I feel there is considerable confusion between Artificial Intelligence (AI), Machine Learning and Deep Learning.  We seem to use these terms and phrases interchangeably, yet they certainly have different meanings.  Natural learning is the intelligence displayed by humans and certain animals. Why don’t we do the numbers:

AI:

Artificial Intelligence refers to machines mimicking human cognitive functions such as problem solving or learning.  When a machine understands human speech or can compete with humans in a game of chess, AI applies.  There are several surprising opinions about AI as follows:

  • Sixty-one percent (61%) of people see artificial intelligence making the world a better place
  • Fifty-seven percent (57%) would prefer an AI doctor perform an eye exam
  • Fifty-five percent (55%) would trust an autonomous car. (I’m really not there as yet.)

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names. This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry.

MACHINE LEARNING:

Machine Learning is the current state-of-the-art application of AI and largely responsible for its recent rapid growth. Based upon the idea of giving machines access to data so that they can learn for themselves, machine learning has been enabled by the internet, and the associated rise in digital information being generated, stored and made available for analysis.

Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level understanding. Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.
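As a small, hedged illustration (it assumes scikit-learn is installed and uses an invented toy dataset), the sketch below shows the core idea: instead of writing explicit rules, we hand the program labeled examples and let it fit a model it can then apply to new cases.

```python
# Illustrative sketch (assumes scikit-learn is installed): the program is not
# given explicit rules; it learns a decision boundary from labeled examples.
from sklearn.linear_model import LogisticRegression

# Toy data: [hours_studied, hours_slept] -> passed exam (1) or not (0).
X = [[1, 4], [2, 5], [3, 6], [6, 7], [7, 8], [8, 6], [2, 8], [9, 5]]
y = [0, 0, 0, 1, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                        # "learning" step: fit parameters to the data

print(model.predict([[4, 6], [8, 7]]))          # predictions for new students
print(model.predict_proba([[4, 6], [8, 7]]))    # class probabilities
```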

DEEP LEARNING:

Deep Learning concentrates on a subset of machine-learning techniques, with the term “deep” generally referring to the number of hidden layers in the deep neural network.  While a conventional neural network may contain a few hidden layers, a deep network may have tens or hundreds of layers.  In deep learning, a computer model learns to perform classification tasks directly from text, sound or image data. In the case of images, deep learning requires substantial computing power and involves feeding large amounts of labeled data through a multi-layer neural network architecture to create a model that can classify the objects contained within the image.
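A minimal sketch of what “multiple hidden layers” looks like in code is shown below; it assumes TensorFlow/Keras is installed and simply defines a small classifier for 28x28 images with two hidden layers and ten output classes. It is an illustration of the idea, not a specific architecture discussed above.

```python
# Illustrative sketch (assumes TensorFlow/Keras is installed): a network with
# several hidden layers that classifies 28x28 images into 10 labeled classes.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # image -> 784 inputs
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),     # hidden layer 2
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()   # lists the layers and their trainable parameters
# model.fit(images, labels, epochs=5) would feed labeled data through the layers.
```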

CONCLUSIONS:

Brave new world we are living in.  Someone said that AI is definitely the future of computing power and will eventually drive robotic systems that could possibly replace humans.  I just hope the programmers adhere to Dr. Isaac Asimov’s three laws:

 

  • The First Law of Robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law of Robotics: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • The Third Law of Robotics: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

With those words, science-fiction author Isaac Asimov changed how the world saw robots. Where they had largely been Frankenstein-esque, metal monsters in the pulp magazines, Asimov saw the potential for robotics as more domestic: as a labor-saving device; the ultimate worker. In doing so, he continued a literary tradition of speculative tales: What happens when humanity remakes itself in its image?

As always, I welcome your comments.


The convergence of “smart” microphones, new digital signal processing technology, voice recognition and natural language processing has opened the door for voice interfaces.  Let’s first define a “smart device”.

A smart device is an electronic device, generally connected to other devices or networks via different wireless protocols such as Bluetooth, NFC, Wi-Fi, 3G, etc., that can operate to some extent interactively and autonomously.

I am told by my youngest granddaughter that all the cool kids now have in-home, voice-activated devices like Amazon Echo or Google Home. These devices can play your favorite music, answer questions, read books, control home automation, and all those other things people thought the future was about in the 1960s. For the most part, the speech recognition of the devices works well; although you may find yourself with an extra dollhouse or two occasionally. (I do wonder if they speak “southern” but that’s another question for another day.)

A smart speaker is, essentially, a speaker with added internet connectivity and “smart assistant” voice-control functionality. The smart assistant is typically Amazon Alexa or Google Assistant, both of which are independently managed by their parent companies and have been opened up for other third-parties to implement into their hardware. The idea is that the more people who bring these into their homes, the more Amazon and Google have a “space” in every abode where they’re always accessible.

Let me first state that my family does not, as yet, have a smart device but we may be inching in that direction.  If we look at numbers, we see the following projections:

  • 175 million smart devices will be installed in a majority of U.S. households by 2022 with at least seventy (70) million households having at least one smart speaker in their home. (Digital Voice Assistants Platforms, Revenues & Opportunities, 2017-2022. Juniper Research, November 2017.)
  • Amazon sold over eleven (11) million Alexa voice-controlled Amazon Echo devices in 2016. That number was expected to double for 2017. (Smart Home Devices Forecast, 2017 to 2022 (US), Forrester Research, October 2017.)
  • Amazon Echo accounted for 70.6% of all voice-enabled speaker users in the United States in 2017, followed by Google Home at 23.8%. (eMarketer, April 2017)
  • In 2018, 38.5 million millennials are expected to use voice-enabled digital assistants—such as Amazon Alexa, Apple Siri, Google Now and Microsoft Cortana—at least once per month. (eMarketer, April 2017.)
  • The growing smart speaker market is expected to hit 56.3 million shipments, globally in 2018. (Canalys Research, January 2018)
  • The United States will remain the most important market for smart speakers in 2018, with shipments expected to reach 38.4 million units. China is a distant second at 4.4 million units. (Canalys Research, April 2018.)

With that being the case, let’s now look at what smart speakers are now commercialized and available either as online purchases or retail markets:

  • Amazon Echo Spot–$114.99
  • Sonos One–$199.00
  • Google Home–$129.00
  • Amazon Echo Show–$179.99
  • Google Home Max–$399.00
  • Google Home Mini–$49.00
  • Fabriq Choros–$69.99
  • Amazon Echo (Second Generation)–$84.99
  • Harman Kardon Invoke–$199.00
  • Amazon Echo Plus–$149.00

CONCLUSIONS:  If you are interested in purchasing one from the list above, I would definitely recommend you do your homework.  Investigate the services provided by a smart speaker to make sure you are getting what you desire.  Be aware that there will certainly be additional items entering the marketplace as time goes by.  GOOD LUCK.

THE NEXT COLD WAR

February 3, 2018


I’m old enough to remember the Cold War waged between the United States and the Soviet Union.  The term “Cold War” first appeared in a 1945 essay by the English writer George Orwell called “You and the Atomic Bomb”.

HOW DID THIS START:

During World War II, the United States and the Soviet Union fought together as allies against the Axis powers, Germany, Japan and Italy. However, the relationship between the two nations was a tense one. Americans had long been wary of Soviet communism and concerned about Russian leader Joseph Stalin’s tyrannical, blood-thirsty rule of his own country. For their part, the Soviets resented the Americans’ decades-long refusal to treat the USSR as a legitimate part of the international community as well as their delayed entry into World War II, which resulted in the deaths of tens of millions of Russians. After the war ended, these grievances ripened into an overwhelming sense of mutual distrust and enmity. Postwar Soviet expansionism in Eastern Europe fueled many Americans’ fears of a Russian plan to control the world. Meanwhile, the USSR came to resent what they perceived as American officials’ bellicose rhetoric, arms buildup and interventionist approach to international relations. In such a hostile atmosphere, no single party was entirely to blame for the Cold War; in fact, some historians believe it was inevitable.

American officials encouraged the development of atomic weapons like the ones that had ended World War II. Thus began a deadly “arms race.” In 1949, the Soviets tested an atom bomb of their own. In response, President Truman announced that the United States would build an even more destructive atomic weapon: the hydrogen bomb, or “superbomb.” Stalin followed suit.

The ever-present threat of nuclear annihilation had a great impact on American domestic life as well. People built bomb shelters in their backyards. They practiced attack drills in schools and other public places. The 1950s and 1960s saw an epidemic of popular films that horrified moviegoers with depictions of nuclear devastation and mutant creatures. In these and other ways, the Cold War was a constant presence in Americans’ everyday lives.

SPACE AND THE COLD WAR:

Space exploration served as another dramatic arena for Cold War competition. On October 4, 1957, a Soviet R-7 intercontinental ballistic missile launched Sputnik (Russian for “traveler”), the world’s first artificial satellite and the first man-made object to be placed into the Earth’s orbit. Sputnik’s launch came as a surprise, and not a pleasant one, to most Americans. In the United States, space was seen as the next frontier, a logical extension of the grand American tradition of exploration, and it was crucial not to lose too much ground to the Soviets. In addition, this demonstration of the overwhelming power of the R-7 missile–seemingly capable of delivering a nuclear warhead into U.S. air space–made gathering intelligence about Soviet military activities particularly urgent.

In 1958, the U.S. launched its own satellite, Explorer I, designed by the U.S. Army under the direction of rocket scientist Wernher von Braun, and what came to be known as the Space Race was underway. That same year, President Dwight Eisenhower signed a public order creating the National Aeronautics and Space Administration (NASA), a federal agency dedicated to space exploration, as well as several programs seeking to exploit the military potential of space. Still, the Soviets were one step ahead, launching the first man into space in April 1961.

THE COLD WAR AND AI (ARTIFICIAL INTELLIGENCE):

Our country NEEDS to consider AI as an extension of the cold war.  Make no mistake about it, AI will definitely play into the hands of a few desperate dictators or individuals in future years.  A country that thinks its adversaries have or will get AI weapons will need them also to retaliate or deter foreign use against the US. Wide use of AI-powered cyberattacks may still be some time away. Countries might agree to a proposed Digital Geneva Convention to limit AI conflict. But that won’t stop AI attacks by independent nationalist groups, militias, criminal organizations, terrorists and others – and countries can back out of treaties. It’s almost certain, therefore, that someone will turn AI into a weapon – and that everyone else will do so too, even if only out of a desire to be prepared to defend themselves. With Russia embracing AI, other nations that don’t or those that restrict AI development risk becoming unable to compete – economically or militarily – with countries wielding developed AIs. Advanced AIs can create advantage for a nation’s businesses, not just its military, and those without AI may be severely disadvantaged. Perhaps most importantly, though, having sophisticated AIs in many countries could provide a deterrent against attacks, as happened with nuclear weapons during the Cold War.

The Congress of the United States and the Executive Branch need to “lose” the high school mentality and get back in the game.  They need to address the future instead of living in the past OR we the people need to vote them all out and start over.

 

DEEP LEARNING

December 10, 2017


If you read technical literature with some hope of keeping up with the latest trends in technology, you find words and phrases such as AI (Artificial Intelligence) and DL (Deep Learning). They seem to be used interchangeably, but the facts deny that premise.  Let’s look.

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine-learning methods based on learning data representations, as opposed to task-specific algorithms. (NOTE: The key words here are MACHINE LEARNING.) The ability of computers to learn can be supervised, semi-supervised or unsupervised.  The prospect of developing learning mechanisms and software to control machine mechanisms is frightening to many but definitely very interesting to most.  Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks.  Machine learning approximates such networks in software running on physical hardware: i.e., computers and computer programming.  Never before in the history of our species has this degree of success been possible; only now, with the advent of very powerful computers and programs capable of handling “big data,” has it become achievable.

With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.  The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs.  Because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before. Deep learning is a class of machine learning algorithms that accomplish the following:


  • Use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input.
  • Learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners.
  • Learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
  • Use some form of gradient descent for training via backpropagation.

Layers that have been used in deep learning include hidden layers of an artificial neural network and sets of propositional formulas.  They may also include latent variables organized layer-wise in deep generative models such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.
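To make the characteristics listed above concrete, here is a minimal sketch (plain NumPy, invented toy problem) of a two-layer network trained with gradient descent via backpropagation on XOR: each layer feeds the next, the error gradient is propagated backward, and the weights are nudged downhill.

```python
# Illustrative sketch: a tiny two-layer network trained by gradient descent via
# backpropagation on the XOR problem (cascaded nonlinear layers, supervised
# learning, gradient descent, backpropagation).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)            # XOR labels

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))          # hidden layer weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))          # output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: each layer uses the previous layer's output as its input.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error gradient layer by layer (backpropagation).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))   # typically converges toward [[0], [1], [1], [0]]
```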

ARTIFICIAL NEURAL NETWORKS:

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.

An ANN is based on a collection of connected units called artificial neurons (analogous to the neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that it sends downstream.

Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times.

The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.

Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.

As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, playing “Go”).

APPLICATIONS:

Just what applications could take advantage of “deep learning?”

IMAGE RECOGNITION:

A common evaluation set for image classification is the MNIST data set. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. Its small size allows multiple configurations to be tested. A comprehensive list of results on this set is available.
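For anyone who wants to confirm those numbers, a short check is shown below; it assumes TensorFlow/Keras is installed, which bundles a loader for MNIST.

```python
# Illustrative sketch (assumes TensorFlow/Keras): loading MNIST and confirming
# the 60,000-example training set and 10,000-example test set described above.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape, y_train.shape)   # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)     # (10000, 28, 28) (10000,)
```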

Deep learning-based image recognition has become “superhuman”, producing more accurate results than human contestants. This first occurred in 2011.

Deep learning-trained vehicles now interpret 360° camera views.   Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes.

The iPhone X, I am told, uses facial recognition as one method of ensuring security and a potential hacker’s ultimate failure to unlock the phone.

VISUAL ART PROCESSING:

Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of a) identifying the style period of a given painting, b) “capturing” the style of a given painting and applying it in a visually pleasing manner to an arbitrary photograph, and c) generating striking imagery based on random visual input fields.

NATURAL LANGUAGE PROCESSING:

Neural networks have been used for implementing language models since the early 2000s.  LSTM helped to improve machine translation and language modeling.  Other key techniques in this field are negative sampling  and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep-learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as probabilistic context free grammar (PCFG) implemented by an RNN.   Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.  Deep neural architectures provide the best results for constituency parsing,  sentiment analysis,  information retrieval,  spoken language understanding,  machine translation, contextual entity linking, writing style recognition and others.
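Here is a minimal, hedged sketch of the “word as a point in a vector space” idea: the 4-dimensional vectors below are invented for illustration (a real system such as word2vec learns hundreds of dimensions from a large corpus), and cosine similarity measures how close two words sit in that space.

```python
# Illustrative sketch: words as points in a vector space.  These 4-dimensional
# vectors are made up; a real system (e.g., word2vec) would learn them from data.
import numpy as np

embedding = {
    "king":  np.array([0.8, 0.6, 0.1, 0.0]),
    "queen": np.array([0.7, 0.7, 0.1, 0.1]),
    "apple": np.array([0.0, 0.1, 0.9, 0.6]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 means the same direction in the vector space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embedding["king"], embedding["queen"]))   # high: related words
print(cosine(embedding["king"], embedding["apple"]))   # low: unrelated words
```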

Google Translate (GT) uses a large end-to-end long short-term memory network.  Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system “learns from millions of examples.”  It translates whole sentences at a time, rather than pieces. Google Translate supports over one hundred languages.  The network encodes the “semantics of the sentence rather than simply memorizing phrase-to-phrase translations.”  GT can translate directly from one language to another, rather than using English as an intermediate.

DRUG DISCOVERY AND TOXICOLOGY:

A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects.  Research has explored use of deep learning to predict biomolecular target, off-target and toxic effects of environmental chemicals in nutrients, household products and drugs.

AtomNet is a deep learning system for structure-based rational drug design.   AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis.

CUSTOMER RELATIONS MANAGEMENT:

Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM (recency, frequency, monetary value) variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.
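For readers unfamiliar with RFM, here is a small sketch (the transaction table and column names are invented) of how the three variables are computed from raw order data before any model is applied.

```python
# Illustrative sketch (hypothetical data and column names): computing the RFM
# variables (recency, frequency, monetary value) from raw order records.
import pandas as pd

orders = pd.DataFrame({
    "customer": ["a", "a", "b", "c", "c", "c"],
    "date": pd.to_datetime(["2017-01-05", "2017-03-01", "2017-02-10",
                            "2017-01-20", "2017-02-15", "2017-03-10"]),
    "amount": [120.0, 80.0, 40.0, 200.0, 150.0, 90.0],
})
snapshot = pd.Timestamp("2017-04-01")   # "today" for the recency calculation

rfm = orders.groupby("customer").agg(
    recency=("date", lambda d: (snapshot - d.max()).days),  # days since last order
    frequency=("date", "count"),                            # number of orders
    monetary=("amount", "sum"),                             # total spend
)
print(rfm)
```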

RECOMMENDATION SYSTEMS:

Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music recommendations.  Multiview deep learning has been applied for learning user preferences from multiple domains.  The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.

BIOINFORMATICS:

An autoencoder ANN was used in bioinformatics to predict gene ontology annotations and gene-function relationships.

In medical informatics, deep learning was used to predict sleep quality based on data from wearables and predictions of health complications from electronic health record data.

MOBILE ADVERTISING:

Finding the appropriate mobile audience for mobile advertising is always challenging since there are many data points that need to be considered and assimilated before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.

ADVANTAGES AND DISADVANTAGES:

ADVANTAGES:

  • Delivers best-in-class performance that significantly outperforms other solutions in multiple domains, including speech, language, vision and playing games like Go. This isn’t by a little bit, but by a significant amount.
  • Reduces the need for feature engineering, one of the most time-consuming parts of machine learning practice.
  • Provides an architecture that can be adapted to new problems relatively easily (e.g. vision, time series, language, etc.) using techniques like convolutional neural networks, recurrent neural networks and long short-term memory.

DISADVANTAGES:

  • Requires a large amount of data — if you only have thousands of examples, deep learning is unlikely to outperform other approaches.
  • Is extremely computationally expensive to train. The most complex models take weeks to train using hundreds of machines equipped with expensive GPUs.
  • Does not have much in the way of a strong theoretical foundation. This leads to the next disadvantage.
  • Determining the topology/flavor/training method/hyperparameters for deep learning is a black art with no theory to guide you.
  • What is learned is not easy to comprehend. Other classifiers (e.g. decision trees, logistic regression etc.) make it much easier to understand what’s going on.

SUMMARY:

Whether we like it or not, deep learning will continue to develop.  As hardware and the ability to capture and store huge amounts of data continue to improve, the machine-learning process will only get better.  There will come a time when we will see a “rise of the machines”.  Let’s just hope humans retain the ability to control those machines.

DARK NET

December 6, 2017


Most of the individuals who read my postings are very well-informed and know that Tim Berners-Lee “invented” the World Wide Web.  In my opinion, the Web is a resounding technological improvement in communication.  It has been a game-changer in the truest sense of the word.  I think there are legitimate uses which save tremendous time.  There are also illegitimate uses, as we shall see.


BIOGRAPHY:

In 1989, while working at CERN, the European Particle Physics Laboratory in Geneva, Switzerland, Tim Berners-Lee proposed a global hypertext project, to be known as the World Wide Web. Based on the earlier “Enquire” work, his efforts were designed to allow people to work together by combining their knowledge in a web of hypertext documents.  Sir Tim wrote the first World Wide Web server, “httpd”, and the first client, “WorldWideWeb”, a what-you-see-is-what-you-get hypertext browser/editor which ran in the NeXTStep environment. This work began in October 1990.  The program “WorldWideWeb” was first made available within CERN in December, and on the Internet at large in the summer of 1991.

Through 1991 and 1993, Tim continued working on the design of the Web, coordinating feedback from users across the Internet. His initial specifications of URIs, HTTP and HTML were refined and discussed in larger circles as the Web technology spread.

Tim Berners-Lee graduated from the Queen’s College at Oxford University, England, in 1976. While there he built his first computer with a soldering iron, TTL gates, an M6800 processor and an old television.

He spent two years with Plessey Telecommunications Ltd (Poole, Dorset, UK) a major UK Telecom equipment manufacturer, working on distributed transaction systems, message relays, and bar code technology.

In 1978 Tim left Plessey to join D.G Nash Ltd (Ferndown, Dorset, UK), where he wrote, among other things, typesetting software for intelligent printers and a multitasking operating system.

His year and one-half spent as an independent consultant included a six-month stint (Jun-Dec 1980) as consultant software engineer at CERN. While there, he wrote for his own private use his first program for storing information including using random associations. Named “Enquire” and never published, this program formed the conceptual basis for the future development of the World Wide Web.

From 1981 until 1984, Tim worked at John Poole’s Image Computer Systems Ltd, with technical design responsibility. Work here included real time control firmware, graphics and communications software, and a generic macro language. In 1984, he took up a fellowship at CERN, to work on distributed real-time systems for scientific data acquisition and system control. Among other things, he worked on FASTBUS system software and designed a heterogeneous remote procedure call system.

In 1994, Tim founded the World Wide Web Consortium at the Laboratory for Computer Science (LCS). This lab later merged with the Artificial Intelligence Lab in 2003 to become the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT). Since that time he has served as the Director of the World Wide Web Consortium, a Web standards organization which develops interoperable technologies (specifications, guidelines, software, and tools) to lead the Web to its full potential. The Consortium has host sites located at MIT, at ERCIM in Europe, and at Keio University in Japan as well as offices around the world.

In 1999, he became the first holder of 3Com Founders chair at MIT. In 2008 he was named 3COM Founders Professor of Engineering in the School of Engineering, with a joint appointment in the Department of Electrical Engineering and Computer Science at CSAIL where he also heads the Decentralized Information Group (DIG). In December 2004 he was also named a Professor in the Computer Science Department at the University of Southampton, UK. From 2006 to 2011 he was co-Director of the Web Science Trust, launched as the Web Science Research Initiative, to help create the first multidisciplinary research body to examine the Web.

In 2008 he founded and became Director of the World Wide Web Foundation.  The Web Foundation is a non-profit organization devoted to achieving a world in which all people can use the Web to communicate, collaborate and innovate freely.  The Web Foundation works to fund and coordinate efforts to defend the Open Web and further its potential to benefit humanity.

In June 2009, then Prime Minister Gordon Brown announced that Sir Tim would work with the UK Government to help make data more open and accessible on the Web, building on the work of the Power of Information Task Force. Sir Tim was a member of the Public Sector Transparency Board, tasked to drive forward the UK Government’s transparency agenda, and he has promoted open government data globally.

In 2011 he was named to the Board of Trustees of the Ford Foundation, a globally oriented private foundation with the mission of advancing human welfare. He is President of the UK’s Open Data Institute which was formed in 2012 to catalyze open data for economic, environmental, and social value.

He is the author, with Mark Fischetti, of the book “Weaving the Web” on the past, present and future of the Web.

On March 18, 2013, Sir Tim, along with Vinton Cerf, Robert Kahn, Louis Pouzin and Marc Andreessen, was awarded the Queen Elizabeth Prize for Engineering for “ground-breaking innovation in engineering that has been of global benefit to humanity.”

It should be very obvious from this rather short biography that Sir Tim is definitely a “heavy hitter”.

DARK WEB:

I honestly don’t think Sir Tim realized the full gravity of his work and certainly never dreamed there might develop a “dark web”.

The Dark Web is World Wide Web content existing on dark nets, networks which overlay the public Internet.  These networks require specific software, configurations or authorization to access. They are NOT open forums as we know the web to be at this time.  The dark web forms part of the Deep Web, which is not indexed by search engines such as GOOGLE, BING, Yahoo, Ask.com, AOL, Blekko.com, Wolframalpha, DuckDuckGo, Waybackmachine, or ChaCha.com.  The dark nets which constitute the Dark Web include small, friend-to-friend peer-to-peer networks, as well as large, popular networks like Freenet, I2P, and Tor, operated by public organizations and individuals. Users of the Dark Web refer to the regular web as the Clearnet due to its unencrypted nature.

A December 2014 study by Gareth Owen from the University of Portsmouth found the most commonly requested type of content on Tor was child pornography, followed by black markets, while the individual sites with the highest traffic were dedicated to botnet operations.  Botnet is defined as follows:

“a network of computers created by malware and controlled remotely, without the knowledge of the users of those computers: The botnet was used primarily to send spam emails.”

Hackers built the botnet to carry out DDoS attacks.

Many whistle-blowing sites maintain a presence, as do political discussion forums.  Cloned websites and other scam sites are numerous.  Many hackers sell their services individually or as part of groups. There are reports of crowd-funded assassinations and hit men for hire.  Sites associated with Bitcoin, fraud-related services and mail-order services are some of the most prolific.

Commercial dark net markets, which mediate transactions for illegal drugs and other goods, attracted significant media coverage starting with the popularity of Silk Road and its subsequent seizure by legal authorities. Other markets sell software exploits and weapons.

As you can see, the uses for the dark net are quite lovely, lovely indeed.  As with any great development such as the Internet, nefarious uses can and do present themselves.  I would stay away from the dark net.  Just don’t go there.  Hope you enjoy this one and please send me your comments.
