SOCIAL MEDIA

June 27, 2018


DEFINITION:

Social media is typically defined today as "web sites and applications that enable users to create and share content or to participate in social networking" (Oxford Dictionaries).

Now that we have cleared that up, let’s take a look at the very beginning of social media.

Six Degrees, according to several sources, was the first modern-day attempt at what we now call the "marvelous world" of social media. (I have chosen to put marvelous world in quotes because I'm not too sure it's that marvelous. There is an obvious downside.)  Launched in 1997, Six Degrees was the first modern social network: it allowed users to create a profile and become friends with other users. While the site is no longer functional, it was actually quite popular at one time, with approximately a million members at its peak.

Other sources indicate that social media has been around for the better part of forty (40) years, with Usenet appearing in 1979.  Usenet is the first recorded network that enabled users to post news to newsgroups.  Although Usenet and similar bulletin boards heralded the launch of the first, albeit very rudimentary, social networks, social media never really took off until almost thirty (30) years later, following the public rollout of Facebook in 2006. Of course, nobody called Usenet "social media" at the time; the term did not yet exist.

If we take a very quick look at Internet and Social Media usage, we find the following:

As you can see from the figures above, social media is incredibly popular and in use hourly if not minute-by-minute.  It is big in societies across the world, wherever it is allowed.

If we look at the fifteen most popular sites we see the following:

Without a doubt, the gorilla in the room is Facebook.

Facebook statistics

  • Facebook adds 500,000 new users a day – that's six new profiles a second – and just under a quarter of adults in the US visit their account at least once a month
  • The average (mean) number of Facebook friends is 155
  • There are 60 million active small business pages (up from 40 million in 2015), 5 million of which pay for advertising
  • There are thought to be 270 million fake Facebook profiles (there were only 81 million in 2015)
  • Facebook accounts for 1% of social logins made by consumers to sign into the apps and websites of publishers and brands.

It’s important that we look at all social media sites, so if we examine daily usage for the most popular web sites, we see the following:

BENEFITS:

  • Ability to connect to other people all over the world. One of the most obvious pros of using social networks is the ability to instantly reach people from anywhere. Use Facebook to stay in touch with your old high school friends who’ve relocated all over the country, get on Google Hangouts with relatives who live halfway around the world, or meet brand new people on Twitter from cities or regions you’ve never even heard of before.
  • Easy and instant communication. Now that we’re connected wherever we go, we don’t have to rely on our landlines, answering machines or snail mail to contact somebody. We can simply open up our laptops or pick up our smartphones and immediately start communicating with anyone on platforms like Twitter or one of the many social messaging apps.
  • Real-time news and information discovery. Gone are the days of waiting around for the six o’clock news to come on TV or for the delivery boy to bring the newspaper in the morning. If you want to know what’s going on in the world, all you need to do is jump on social media. An added bonus is that you can customize your news and information discovery experiences by choosing to follow exactly what you want.
  • Great opportunities for business owners. Business owners and other types of professional organizations can connect with current customers, sell their products and expand their reach using social media. There are actually lots of entrepreneurs and businesses out there that thrive almost entirely on social networks and wouldn’t even be able to operate without it.
  • General fun and enjoyment. You have to admit that social networking is just plain fun sometimes. A lot of people turn to it when they catch a break at work or just want to relax at home. Since people are naturally social creatures, it’s often quite satisfying to see comments and likes show up on our own posts, and it’s convenient to be able to see exactly what our friends are up to without having to ask them directly.

DISADVANTAGES:

  • Information overwhelm. With so many people now on social media tweeting links and posting selfies and sharing YouTube videos, it sure can get pretty noisy. Becoming overwhelmed by too many Facebook friends to keep up with or too many Instagram photos to browse through isn’t all that uncommon. Over time, we tend to rack up a lot of friends and followers, and that can lead to lots of bloated news feeds with too much content we’re not all that interested in.
  • Privacy issues. With so much sharing going on, issues over privacy will always be a big concern. Whether it’s a question of social sites owning your content after it’s posted, becoming a target after sharing your geographical location online, or even getting in trouble at work after tweeting something inappropriate – sharing too much with the public can open up all sorts of problems that sometimes can’t ever be undone.
  • Social peer pressure and cyber bullying. For people struggling to fit in with their peers – especially teens and young adults – the pressure to do certain things or act a certain way can be even worse on social media than it is at school or any other offline setting. In some extreme cases, the overwhelming pressure to fit in with everyone posting on social media or becoming the target of a cyber-bullying attack can lead to serious stress, anxiety and even depression.
  • Online interaction substitution for offline interaction. Since people are now connected all the time and you can pull up a friend’s social profile with a click of your mouse or a tap of your smartphone, it’s a lot easier to use online interaction as a substitute for face-to-face interaction. Some people argue that social media actually promotes antisocial human behavior.
  • Distraction and procrastination. How often do you see someone look at their phone? People get distracted by all the social apps and news and messages they receive, leading to all sorts of problems like distracted driving or the lack of gaining someone’s full attention during a conversation. Browsing social media can also feed procrastination habits and become something people turn to in order to avoid certain tasks or responsibilities.
  • Sedentary lifestyle habits and sleep disruption. Lastly, since social networking is all done on some sort of computer or mobile device, it can sometimes promote too much sitting down in one spot for too long. Likewise, staring into the artificial light from a computer or phone screen at night can negatively affect your ability to get a proper night’s sleep.

Social media is NOT going away any time soon.  Those who choose to use it will continue using it although there are definite privacy issues. The top five (5) issues discussed by users are as follows:

  • Account hacking and impersonation
  • Stalking and harassment
  • Being compelled to turn over passwords
  • The very fine line between effective marketing and privacy intrusion
  • The privacy downside with location-based services

I think these issues are very important and certainly must be considered with using ANY social media platform.  Remember—someone is ALWAYS watching.

 


Portions of this post are taken from the January 2018 article written by John Lewis of “Vision Systems”.

I feel there is considerable confusion between Artificial Intelligence (AI), Machine Learning and Deep Learning.  Seemingly, we use these terms and phrases interchangeably, yet they have quite different meanings.  Natural intelligence is the intelligence displayed by humans and certain animals. Why don’t we do the numbers:

AI:

Artificial Intelligence refers to machines mimicking human cognitive functions such as problem solving or learning.  When a machine understands human speech or can compete with humans in a game of chess, AI applies.  There are several surprising opinions about AI as follows:

  • Sixty-one percent (61%) of people see artificial intelligence making the world a better place
  • Fifty-seven percent (57%) would prefer an AI doctor perform an eye exam
  • Fifty-five percent (55%) would trust an autonomous car. (I’m really not there as yet.)

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names. This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry.

MACHINE LEARNING:

Machine Learning is the current state-of-the-art application of AI and largely responsible for its recent rapid growth. Based upon the idea of giving machines access to data so that they can learn for themselves, machine learning has been enabled by the internet, and the associated rise in digital information being generated, stored and made available for analysis.

Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level understanding. In short, machine learning is an application of artificial intelligence that gives systems the ability to learn and improve automatically from experience, focusing on computer programs that can access data and use it to learn for themselves.
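
To make the idea of “learning without being explicitly programmed” concrete, here is a minimal sketch using the scikit-learn library and its bundled iris dataset (my choice of tools, not anything named above): no rules are hand-written, the classifier infers them from labeled examples.

```python
# A minimal "learn from examples" sketch; assumes scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Labeled examples: flower measurements (X) and their species labels (y).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# No explicit rules are programmed; the model derives them from the training data.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# The learned model is then judged on examples it has never seen.
print("accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))
```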

DEEP LEARNING:

Deep Learning concentrates on a subset of machine-learning techniques, with the term “deep” generally referring to the number of hidden layers in the neural network.  While a conventional neural network may contain a few hidden layers, a deep network may have tens or hundreds of layers.  In deep learning, a computer model learns to perform classification tasks directly from text, sound or image data. In the case of images, deep learning requires substantial computing power and involves feeding large amounts of labeled data through a multi-layer neural network architecture to create a model that can classify the objects contained within the image.
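
As an illustration of what those hidden layers look like in code, here is a hedged sketch using the Keras API (my assumption; the text above does not name a framework) of a small image classifier whose “depth” comes from stacking several hidden layers. A production network would be far deeper.

```python
import tensorflow as tf  # assumes TensorFlow 2.x with the bundled Keras API

# A small "deep" classifier for 28x28 grayscale images with 10 possible classes.
# The depth comes from the stack of hidden Dense layers between input and output.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),     # image pixels in
    tf.keras.layers.Dense(256, activation="relu"),     # hidden layer 1
    tf.keras.layers.Dense(128, activation="relu"),     # hidden layer 2
    tf.keras.layers.Dense(64, activation="relu"),      # hidden layer 3
    tf.keras.layers.Dense(10, activation="softmax"),   # class probabilities out
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # prints the layer stack that the word "deep" refers to
```

Calling model.fit() on labeled images would be the “feeding large amounts of labeled data through a multi-layer neural network architecture” step described above.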

CONCLUSIONS:

Brave new world we are living in.  Someone said that AI is definitely the future of computing power and eventually robotic systems that could possibly replace humans.  I just hope the programmers adhere to Dr. Isaac Asimov’s three laws:

 

  • The First Law of Robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law of Robotics: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • The Third Law of Robotics: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

With those words, science-fiction author Isaac Asimov changed how the world saw robots. Where they had largely been Frankenstein-esque, metal monsters in the pulp magazines, Asimov saw the potential for robotics as more domestic: as a labor-saving device; the ultimate worker. In doing so, he continued a literary tradition of speculative tales: What happens when humanity remakes itself in its image?

As always, I welcome your comments.


The convergence of “smart” microphones, new digital signal processing technology, voice recognition and natural language processing has opened the door for voice interfaces.  Let’s first define a “smart device”.

A smart device is an electronic device, generally connected to other devices or networks via different wireless protocols such as Bluetooth, NFC, Wi-Fi, 3G, etc., that can operate to some extent interactively and autonomously.

I am told by my youngest granddaughter that all the cool kids now have in-home, voice-activated devices like Amazon Echo or Google Home. These devices can play your favorite music, answer questions, read books, control home automation, and all those other things people thought the future was about in the 1960s. For the most part, the speech recognition of the devices works well; although you may find yourself with an extra dollhouse or two occasionally. (I do wonder if they speak “southern” but that’s another question for another day.)

A smart speaker is, essentially, a speaker with added internet connectivity and “smart assistant” voice-control functionality. The smart assistant is typically Amazon Alexa or Google Assistant, both of which are managed by their parent companies and have been opened up for third parties to build into their hardware. The idea is that the more people who bring these into their homes, the more Amazon and Google have a “space” in every abode where they’re always accessible.

Let me first state that my family does not, as yet, have a smart device but we may be inching in that direction.  If we look at numbers, we see the following projections:

  • 175 million smart devices will be installed in a majority of U.S. households by 2022 with at least seventy (70) million households having at least one smart speaker in their home. (Digital Voice Assistants Platforms, Revenues & Opportunities, 2017-2022. Juniper Research, November 2017.)
  • Amazon sold over eleven (11) million Alexa voice-controlled Amazon Echo devices in 2016. That number was expected to double for 2017. (Smart Home Devices Forecast, 2017 to 2022 (US). Forrester Research, October 2017.)
  • Amazon Echo accounted for 70.6% of all voice-enabled speaker users in the United States in 2017, followed by Google Home at 23.8%. (eMarketer, April 2017)
  • In 2018, 38.5 million millennials are expected to use voice-enabled digital assistants—such as Amazon Alexa, Apple Siri, Google Now and Microsoft Cortana—at least once per month. (eMarketer, April 2017.)
  • The growing smart speaker market is expected to hit 56.3 million shipments, globally in 2018. (Canalys Research, January 2018)
  • The United States will remain the most important market for smart speakers in 2018, with shipments expected to reach 38.4 million units. China is a distant second at 4.4 million units. (Canalys Research, April 2018.)

With that being the case, let’s now look at which smart speakers are commercially available, either online or through retail outlets:

  • Amazon Echo Spot–$114.99
  • Sonos One–$199.00
  • Google Home–$129.00
  • Amazon Echo Show–$179.99
  • Google Home Max–$399.00
  • Google Home Mini–$49.00
  • Fabriq Choros–$69.99
  • Amazon Echo (Second Generation) –$84.99
  • Harman Kardon Evoke–$199.00
  • Amazon Echo Plus–$149.00

CONCLUSIONS:  If you are interested in purchasing one from the list above, I would definitely recommend you do your homework.  Investigate the services provided by a smart speaker to make sure you are getting what you desire.  Be aware that additional products will certainly enter the marketplace as time goes by.  GOOD LUCK.

THE NEXT COLD WAR

February 3, 2018


I’m old enough to remember the Cold War waged by the United States and Russia.  The term “Cold War” first appeared in a 1945 essay by the English writer George Orwell called “You and the Atomic Bomb”.

HOW DID THIS START:

During World War II, the United States and the Soviet Union fought together as allies against the Axis powers, Germany, Japan and Italy. However, the relationship between the two nations was a tense one. Americans had long been wary of Soviet communism and concerned about Russian leader Joseph Stalin’s tyrannical, blood-thirsty rule of his own country. For their part, the Soviets resented the Americans’ decades-long refusal to treat the USSR as a legitimate part of the international community as well as their delayed entry into World War II, which resulted in the deaths of tens of millions of Russians. After the war ended, these grievances ripened into an overwhelming sense of mutual distrust and enmity. Postwar Soviet expansionism in Eastern Europe fueled many Americans’ fears of a Russian plan to control the world. Meanwhile, the USSR came to resent what they perceived as American officials’ bellicose rhetoric, arms buildup and interventionist approach to international relations. In such a hostile atmosphere, no single party was entirely to blame for the Cold War; in fact, some historians believe it was inevitable.

American officials encouraged the development of atomic weapons like the ones that had ended World War II. Thus, began a deadly “arms race.” In 1949, the Soviets tested an atom bomb of their own. In response, President Truman announced that the United States would build an even more destructive atomic weapon: the hydrogen bomb, or “superbomb.” Stalin followed suit.

The ever-present threat of nuclear annihilation had a great impact on American domestic life as well. People built bomb shelters in their backyards. They practiced attack drills in schools and other public places. The 1950s and 1960s saw an epidemic of popular films that horrified moviegoers with depictions of nuclear devastation and mutant creatures. In these and other ways, the Cold War was a constant presence in Americans’ everyday lives.

SPACE AND THE COLD WAR:

Space exploration served as another dramatic arena for Cold War competition. On October 4, 1957, a Soviet R-7 intercontinental ballistic missile launched Sputnik (Russian for “traveler”), the world’s first artificial satellite and the first man-made object to be placed into the Earth’s orbit. Sputnik’s launch came as a surprise, and not a pleasant one, to most Americans. In the United States, space was seen as the next frontier, a logical extension of the grand American tradition of exploration, and it was crucial not to lose too much ground to the Soviets. In addition, this demonstration of the overwhelming power of the R-7 missile–seemingly capable of delivering a nuclear warhead into U.S. air space–made gathering intelligence about Soviet military activities particularly urgent.

In 1958, the U.S. launched its own satellite, Explorer I, designed by the U.S. Army under the direction of rocket scientist Wernher von Braun, and what came to be known as the Space Race was underway. That same year, President Dwight Eisenhower signed a public order creating the National Aeronautics and Space Administration (NASA), a federal agency dedicated to space exploration, as well as several programs seeking to exploit the military potential of space. Still, the Soviets were one step ahead, launching the first man into space in April 1961.

THE COLD WAR AND AI (ARTIFICIAL INTELLIGENCE):

Our country NEEDS to consider AI as an extension of the cold war.  Make no mistake about it, AI will definitely play into the hands of a few desperate dictators or individuals in future years.  A country that thinks its adversaries have or will get AI weapons will need them also to retaliate or deter foreign use against the US. Wide use of AI-powered cyberattacks may still be some time away. Countries might agree to a proposed Digital Geneva Convention to limit AI conflict. But that won’t stop AI attacks by independent nationalist groups, militias, criminal organizations, terrorists and others – and countries can back out of treaties. It’s almost certain, therefore, that someone will turn AI into a weapon – and that everyone else will do so too, even if only out of a desire to be prepared to defend themselves. With Russia embracing AI, other nations that don’t or those that restrict AI development risk becoming unable to compete – economically or militarily – with countries wielding developed AIs. Advanced AIs can create advantage for a nation’s businesses, not just its military, and those without AI may be severely disadvantaged. Perhaps most importantly, though, having sophisticated AIs in many countries could provide a deterrent against attacks, as happened with nuclear weapons during the Cold War.

The Congress of the United States and the Executive Branch need to “lose” the high school mentality and get back in the game.  They need to address the future instead of living in the past OR we the people need to vote them all out and start over.

 


According to Electronic Design magazine, electronic waste is the fastest-growing form of waste. E-waste is a product of the Digital Revolution, which refers to the advancement of technology from analog electronic and mechanical devices to the digital technology available today. The era started during the 1980s and is ongoing. The Digital Revolution also marks the beginning of the Information Era.

The Digital Revolution is sometimes also called the Third Industrial Revolution. The development and advancement of digital technologies started with one fundamental idea: The Internet. Here is a brief timeline of how the Digital Revolution progressed:

  • 1947-1979 – The transistor, which was introduced in 1947, paved the way for the development of advanced digital computers. The government, military and other organizations made use of computer systems during the 1950s and 1960s. This research eventually led to the creation of the World Wide Web.
  • 1980s – The computer became a familiar machine and by the end of the decade, being able to use one became a necessity for many jobs. The first cellphone was also introduced during this decade.
  • 1990s – By 1992, the World Wide Web had been introduced, and by 1996 the Internet became a normal part of most business operations. By the late 1990s, the Internet became a part of everyday life for almost half of the American population.
  • 2000s – By this decade, the Digital Revolution had begun to spread all over the developing world; mobile phones were commonly seen, the number of Internet users continued to grow, and the television started to transition from using analog to digital signals.
  • 2010 and beyond – By this decade, Internet users made up more than 25 percent of the world’s population. Mobile communication has also become very important, as nearly 70 percent of the world’s population owns a mobile phone. The connection between Internet websites and mobile gadgets has become a standard in communication. It was predicted that by 2015 tablet computers would far surpass personal computers in Internet use, with cloud computing services allowing users to consume media and run business applications on their mobile devices, applications that would otherwise be too much for such devices to handle.

In the United States, E-waste represents approximately two percent (2%) of America’s trash in landfills, but seventy percent (70%) of the overall toxic waste.  America recycles about 679,000 tons of E-waste annually, and that figure does not include a large portion of electronics such as TVs, DVD and VCR players, and related TV electronics. According to the EPA, E-waste is still the fastest growing municipal waste stream.  Not only is E-waste a major environmental problem, it also contains valuable resources that could generate revenue and be used again.  Cell phones and other electronic items contain high amounts of precious metals, such as gold and silver.  Americans dump phones containing more than sixty million dollars ($60,000,000) in gold and silver each year.

The United States and China generated the most e-waste last year – thirty-two percent (32%) of the world’s total. However, on a per capita basis, several countries famed for their environmental awareness and recycling records lead the way. Norway sits on top of the world’s electronic waste mountain, generating 62.4 pounds per inhabitant.

Technology has made a significant difference in the ability to deal with and handle E-waste products.  One country, Japan, is making a major effort to address the problem. Japan has approximately one hundred (100) major electronic waste facilities, as well as numerous smaller, local collection and operating facilities.  Of those one hundred major plants, more than thirty (30) utilize the Kubota Vertical Shredder to reduce the overall size of the assemblies. Recycling technology company swissRTec has announced that one of its key products, the Kubota Vertical Shredder, is now available in the United States to take care of E-waste.

WHY IS E-WASTE RECYCLING IMPORTANT:

If we look at why recycling E-waste is important, we see the following:

  • Rich source of raw materials. Internationally, only ten to fifteen percent (10-15%) of the gold in e-waste is successfully recovered; the rest is lost. Ironically, electronic waste contains deposits of precious metals estimated to be between forty and fifty (40 and 50) times richer than ores mined from the earth, according to the United Nations.
  • Solid waste management. The explosion of growth in the electronics industry, combined with short product life cycles, has led to a rapid escalation in the generation of solid waste.
  • Toxic materials. Because old electronic devices contain toxic substances such as lead, mercury, cadmium and chromium, proper processing is essential to ensure that these materials are not released into the environment. They may also contain other heavy metals and potentially toxic chemical flame retardants.
  • International movement of hazardous waste. The uncontrolled movement of e-waste to countries where cheap labor and primitive approaches to recycling result in health risks for local residents exposed to the release of toxins continues to be an issue of concern.

We are fortunate in Chattanooga to have an E-cycling station.  Forerunner Computer Recycling does just that.  Here is an excerpt from their web site:

“… with more than 15 years in the computer \ e waste recycling field, Forerunner Computer Recycling has given Chattanooga companies a responsible option to dispose end of life cycle and surplus computer equipment. All Chattanooga based companies face the task of safely disposing of older equipment and their e waste. The EPA estimates that as many as 500 million computers \e- waste will soon become obsolete.

As Chattanooga businesses upgrade existing PCs, more computers and other e waste are finding their way into the waste stream. According to the EPA, over two million tons of electronics waste is discarded each year and goes to U.S. landfills.

Now you have a partner in the computer \ e waste recycling business who understands your need to safely dispose of your computer and electronic equipment in an environmentally responsible manner.

By promoting reuse – computer recycling and electronic recycling – Forerunner Computer Recycling extends the life of computer equipment and reduce e waste. Recycle your computers, recycle your electronics.”

CONCLUSIONS:

I definitely encourage you to look up the recycling E-waste facility in your city or county.  You will be doing our environment a great service in doing so.

DEEP LEARNING

December 10, 2017


If you read technical literature with some hope of keeping up with the latest trends in technology, you find words and phrases such as AI (Artificial Intelligence) and DL (Deep Learning). They seem to be used interchangeably, but the facts deny that premise.  Let’s look.

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine-learning methods based on learning data representations, as opposed to task-specific algorithms. (NOTE: The key words here are MACHINE LEARNING.) The learning can be supervised, semi-supervised or unsupervised.  The prospect of developing learning mechanisms and software to control machines is frightening to many but definitely very interesting to most.  Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks.  Machine learning of this kind approximates biological neural networks in physical hardware, i.e. computers and computer programming.  Never in the history of our species has this degree of success been possible; only now, with the advent of very powerful computers and programs capable of handling “big data,” has it become achievable.

With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.  The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs.  Because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before. Deep learning is a class of machine learning algorithms that accomplish the following:

  • Use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input.
  • Learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners.
  • Learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
  • Use some form of gradient descent for training via backpropagation.

Layers that have been used in deep learning include hidden layers of an artificial neural network and sets of propositional formulas.  They may also include latent variables organized layer-wise in deep generative models such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.
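
To ground the list above (layered nonlinear processing, supervised learning, and gradient descent via backpropagation), here is a minimal NumPy sketch of a two-layer network trained on the classic XOR problem. The layer sizes, learning rate and loss are illustrative choices of mine, not anything prescribed by the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

# One hidden layer of 4 units feeding one output unit.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 2.0

for step in range(10000):
    # Forward pass: each layer consumes the previous layer's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation: push the squared-error gradient back layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```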

ARTIFICIAL NEURAL NETWORKS:

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.

An ANN is based on a collection of connected units called artificial neurons, (analogous to axons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that it sends downstream.

Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times.

The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.

Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.

As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, playing “Go”).

APPLICATIONS:

Just what applications could take advantage of “deep learning?”

IMAGE RECOGNITION:

A common evaluation set for image classification is the MNIST database data set. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size allows multiple configurations to be tested. A comprehensive list of results on this set is available.
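
For readers who want to poke at MNIST themselves, the dataset ships with several libraries. A short sketch using the Keras loader (my choice; nothing above mandates it) confirms the 60,000/10,000 split mentioned:

```python
import tensorflow as tf  # assumes TensorFlow 2.x; the loader downloads MNIST on first use

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,) -- training examples
print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,) -- test examples
```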

Deep learning-based image recognition has become “superhuman”, producing more accurate results than human contestants. This first occurred in 2011.

Deep learning-trained vehicles now interpret 360° camera views.   Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes.

The iPhone X, I am told, uses facial recognition as one method of ensuring safety and a potential hacker’s ultimate failure to unlock the phone.

VISUAL ART PROCESSING:

Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of a) identifying the style period of a given painting, b) “capturing” the style of a given painting and applying it in a visually pleasing manner to an arbitrary photograph, and c) generating striking imagery based on random visual input fields.

NATURAL LANGUAGE PROCESSING:

Neural networks have been used for implementing language models since the early 2000s.  LSTM helped to improve machine translation and language modeling.  Other key techniques in this field are negative sampling  and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep-learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as probabilistic context free grammar (PCFG) implemented by an RNN.   Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.  Deep neural architectures provide the best results for constituency parsing,  sentiment analysis,  information retrieval,  spoken language understanding,  machine translation, contextual entity linking, writing style recognition and others.
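
As a toy illustration of word embeddings such as word2vec, here is a hedged sketch using the gensim library (an assumption on my part, written against the gensim 4.x API): each word in a tiny corpus becomes a point in a vector space, and nearby points correspond to words used in similar contexts.

```python
from gensim.models import Word2Vec  # assumes gensim 4.x

# A toy corpus of tokenized "sentences"; a real model would train on millions of words.
sentences = [
    ["deep", "learning", "uses", "neural", "networks"],
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["neural", "networks", "learn", "vector", "representations", "of", "words"],
]

model = Word2Vec(sentences, vector_size=20, window=2, min_count=1, epochs=200, seed=0)

vec = model.wv["neural"]                        # the word as a point in a 20-dim vector space
print(vec.shape)                                # (20,)
print(model.wv.most_similar("neural", topn=2))  # nearest words by cosine similarity
```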

Google Translate (GT) uses a large end-to-end long short-term memory network.  Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system “learns from millions of examples.”  It translates whole sentences at a time, rather than pieces. Google Translate supports over one hundred languages.  The network encodes the “semantics of the sentence rather than simply memorizing phrase-to-phrase translations.”  GT can translate directly from one language to another, rather than using English as an intermediate.

DRUG DISCOVERY AND TOXICOLOGY:

A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects.  Research has explored use of deep learning to predict biomolecular target, off-target and toxic effects of environmental chemicals in nutrients, household products and drugs.

AtomNet is a deep learning system for structure-based rational drug design.   AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis.

CUSTOMER RELATIONSHIP MANAGEMENT:

Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.
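
RFM stands for recency, frequency and monetary value. The deep reinforcement learning itself is beyond a short snippet, but the RFM variables it is defined over are easy to compute; here is a small pandas sketch over a hypothetical transaction log (the column names and data are invented for illustration).

```python
import pandas as pd

# Hypothetical purchase history: one row per transaction.
tx = pd.DataFrame({
    "customer": ["a", "a", "b", "c", "c", "c"],
    "date": pd.to_datetime(["2018-01-05", "2018-03-01", "2018-02-10",
                            "2018-01-20", "2018-02-15", "2018-03-20"]),
    "amount": [20.0, 35.0, 15.0, 50.0, 5.0, 25.0],
})

today = tx["date"].max()
rfm = tx.groupby("customer").agg(
    recency=("date", lambda d: (today - d.max()).days),  # days since last purchase
    frequency=("date", "count"),                          # number of purchases
    monetary=("amount", "sum"),                           # total spend
)
print(rfm)
```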

RECOMMENDATION SYSTEMS:

Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music recommendations.  Multiview deep learning has been applied for learning user preferences from multiple domains.  The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.
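
A latent factor model of the kind mentioned above can be sketched in a few lines of NumPy: each user and each item gets a small hidden vector, and gradient descent fits their dot products to the known ratings so the missing cells can be predicted. The rating matrix and hyperparameters below are invented for illustration, not taken from the research described.

```python
import numpy as np

# Ratings matrix: rows = users, columns = items, 0 = not yet rated.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
observed = R > 0
k = 2  # number of latent factors per user/item

rng = np.random.default_rng(1)
U = rng.normal(scale=0.1, size=(R.shape[0], k))  # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))  # item factors
lr, reg = 0.01, 0.02

for _ in range(5000):
    err = (R - U @ V.T) * observed     # error only on observed ratings
    U += lr * (err @ V - reg * U)      # gradient steps with L2 regularization
    V += lr * (err.T @ U - reg * V)

print(np.round(U @ V.T, 1))  # filled-in matrix; zeros are now predicted ratings
```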

BIOINFORMATICS:

An autoencoder ANN has been used in bioinformatics to predict gene ontology annotations and gene-function relationships.

In medical informatics, deep learning was used to predict sleep quality based on data from wearables and predictions of health complications from electronic health record data.

MOBILE ADVERTISING:

Finding the appropriate mobile audience for mobile advertising is always challenging since there are many data points that need to be considered and assimilated before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.

ADVANTAGES AND DISADVANTAGES:

ADVANTAGES:

  • Has best-in-class performance that significantly outperforms other solutions in multiple domains, including speech, language, vision, and playing games like Go. This isn’t by a little bit, but by a significant amount.
  • Reduces the need for feature engineering, one of the most time-consuming parts of machine learning practice.
  • Is an architecture that can be adapted to new problems relatively easily (e.g. vision, time series, language), using techniques like convolutional neural networks, recurrent neural networks, and long short-term memory.

DISADVANTAGES:

  • Requires a large amount of data — if you only have thousands of examples, deep learning is unlikely to outperform other approaches.
  • Is extremely computationally expensive to train. The most complex models take weeks to train using hundreds of machines equipped with expensive GPUs.
  • Does not have much in the way of a strong theoretical foundation. This leads to the next disadvantage.
  • Determining the topology/flavor/training method/hyperparameters for deep learning is a black art with no theory to guide you.
  • What is learned is not easy to comprehend. Other classifiers (e.g. decision trees, logistic regression etc.) make it much easier to understand what’s going on.

SUMMARY:

Whether we like it or not, deep learning will continue to develop.  As equipment and the ability to capture and store huge amounts of data continue, the machine-learning process will only improve.  There will come a time when we will see a “rise of the machines”.  Let’s just hope humans have the ability to control those machines.

BITCOIN

December 9, 2017


I have been hearing a great deal about Bitcoin lately, specifically on the early-morning television business channels. I am not too sure what this is all about, so I thought I would take a look.  First, an “official” definition.

Bitcoin is a cryptocurrency and worldwide payment system. It is the first decentralized digital currency, as the system works without a central bank or single administrator. … Bitcoin was invented by an unknown person or group of people under the name Satoshi Nakamoto and released as open-source software in 2009.

The “unknown” part really disturbs me, as do the “cryptocurrency” aspects, but let’s continue.  Do you remember the Star Trek episodes in which someone asks, “how much does it cost?” and the answer is “_______ credits”?  This is specifically what Bitcoin is: digital currency. No one controls Bitcoin; bitcoins aren’t printed, like dollars or euros – they’re produced by people, and increasingly businesses, running computers all around the world, using software that solves mathematical problems. (Physical objects representing a “coin” do exist, but they are novelty tokens; the currency itself is entirely digital.)

Bitcoin transactions are completed when a “block” is added to the blockchain database that underpins the currency; however, this can be a laborious process.  Segwit2x proposed moving bitcoin’s transaction data outside of the block and onto a parallel track to allow more transactions to take place. The changes happened in November, and it remains to be seen whether they will have a positive or negative impact on the price of bitcoin in the long term.

It’s been an incredible 2017 for bitcoin growth, with its value quadrupling in the past six months, surpassing the value of an ounce of gold for the first time. It means if you invested £2,000 five years ago, you would be a millionaire today.

You cannot “churn out” an unlimited number of Bitcoin. The bitcoin protocol – the rules that make bitcoin work – say that only twenty-one (21) million bitcoins can ever be created by miners. However, these coins can be divided into smaller parts (the smallest divisible amount is one hundred millionth of a bitcoin and is called a ‘Satoshi’, after the founder of bitcoin).

Conventional currency has been based on gold or silver. Theoretically, you knew that if you handed over a dollar at the bank, you could get some gold back (although this didn’t actually work in practice). But bitcoin isn’t based on gold; it’s based on mathematics. To me this is absolutely fascinating.  Around the world, people are using software programs that follow a mathematical formula to produce bitcoins. The mathematical formula is freely available, so that anyone can check it. The software is also open source, meaning that anyone can look at it to make sure that it does what it is supposed to.
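
The “mathematical problems” that mining software solves are, at heart, a brute-force search for a hash of a particular shape. A toy proof-of-work sketch using Python’s standard hashlib (vastly simplified compared to the real Bitcoin protocol, with made-up block data) gives the flavor:

```python
import hashlib

def mine(block_data, difficulty=4):
    """Search for a nonce whose SHA-256 digest of (data + nonce) starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Made-up block contents; real blocks reference the previous block's hash,
# which is what chains the ledger together.
nonce, digest = mine("prev=00ab12...; alice pays bob 0.001 BTC")
print("nonce:", nonce)
print("hash: ", digest)
```

Raising the difficulty by one leading zero makes the search roughly sixteen times longer on average, which is how the real network throttles the rate at which new blocks, and therefore new bitcoins, can be produced.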

SPECIFIC CHARACTERISTICS:

  1. It’s decentralized

The bitcoin network isn’t controlled by one central authority. Every machine that mines bitcoin and processes transactions makes up a part of the network, and the machines work together. That means that, in theory, one central authority can’t tinker with monetary policy and cause a meltdown – or simply decide to take people’s bitcoins away from them, as the Central European Bank decided to do in Cyprus in early 2013. And if some part of the network goes offline for some reason, the money keeps on flowing.

  2. It’s easy to set up

Conventional banks make you jump through hoops simply to open a bank account. Setting up merchant accounts for payment is another Kafkaesque task, beset by bureaucracy. However, you can set up a bitcoin address in seconds, no questions asked, and with no fees payable.

  3. It’s anonymous

Well, kind of. Users can hold multiple bitcoin addresses, and they aren’t linked to names, addresses, or other personally identifying information.

  4. It’s completely transparent

Bitcoin stores details of every single transaction that ever happened in the network in a huge version of a general ledger, called the blockchain. The blockchain tells all. If you have a publicly used bitcoin address, anyone can tell how many bitcoins are stored at that address. They just don’t know that it’s yours. There are measures that people can take to make their activities opaquer on the bitcoin network, though, such as not using the same bitcoin addresses consistently, and not transferring lots of bitcoin to a single address.

  5. Transaction fees are minuscule

Your bank may charge you a £10 fee for international transfers. Bitcoin doesn’t.

  6. It’s fast

You can send money anywhere and it will arrive minutes later, as soon as the bitcoin network processes the payment.

  7. It’s non-repudiable

When your bitcoins are sent, there’s no getting them back, unless the recipient returns them to you. They’re gone forever.

WHERE TO BUY AND SELL

I definitely recommend you do your homework before buying Bitcoin, because its value is a roller coaster in nature, but there are several exchanges on which Bitcoin can be purchased or sold.  Good luck.

CONCLUSIONS:

Is Bitcoin a bubble? It’s a natural question to ask—especially after Bitcoin’s price shot up from $12,000 to $15,000 this past week.

Brent Goldfarb is a business professor at the University of Maryland, and William Deringer is a historian at MIT. Both have done research on the history and economics of bubbles, and they talked to Ars by phone this week as Bitcoin continues its surge.

Both academics saw clear parallels between the bubbles they’ve studied and Bitcoin’s current rally. Bubbles tend to be driven either by new technologies (like railroads in 1840s Britain or the Internet in the 1990s) or by new financial innovations (like the financial engineering that produced the 2008 financial crisis). Bitcoin, of course, is both a new technology and a major financial innovation.

“A lot of bubbles historically involve some kind of new financial technology the effects of which people can’t really predict,” Deringer told Ars. “These new financial innovations create enthusiasm at a speed that is greater than people are able to reckon with all the consequences.”

Neither scholar wanted to predict when the current Bitcoin boom would end. But Goldfarb argued that we’re seeing classic signs that often occur near the end of a bubble. The end of a bubble, he told us, often comes with “a high amount of volatility and a lot of excitement.”

Goldfarb expects that in the coming months we’ll see more “stories about people who got fabulously wealthy on bitcoin.” That, in turn, could draw in more and more novice investors looking to get in on the action. From there, some triggering event will start a panic that will lead to a market crash.

“Uncertainty of valuation is often a huge issue in bubbles,” Deringer told Ars. Unlike a stock or bond, Bitcoin pays no interest or dividends, making it hard to figure out how much the currency ought to be worth. “It is hard to pinpoint exactly what the fundamentals of Bitcoin are,” Deringer said.

That uncertainty has allowed Bitcoin’s value to soar 1,000-fold over the last five years. But it could also make the market vulnerable to crashes if investors start to lose confidence.

I would say travel at your own risk.

 
