DEEP LEARNING

December 10, 2017


If you read technical literature with some hope of keeping up with the latest trends in technology, you encounter words and phrases such as AI (Artificial Intelligence) and DL (Deep Learning). They seem to be used interchangeably, but the facts deny that premise.  Let’s take a look.

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine-learning methods based on learning data representations, as opposed to task-specific algorithms. (NOTE: The key words here are MACHINE LEARNING.) The learning can be supervised, semi-supervised or unsupervised.  The prospect of developing learning mechanisms and software to control machines is frightening to many but definitely very interesting to most.  Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks.  Machine learning, in this sense, is a method by which the behavior of biological neural networks is approximated by physical hardware: i.e., computers and computer programming.  Never before in the history of our species has this degree of success been possible–only now, with the advent of very powerful computers and programs capable of handling “big data,” has it become achievable.

With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.  The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs.  Because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before. Deep learning is a class of machine learning algorithms that accomplish the following:

  • Use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input.
  • Learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners.
  • Learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
  • Use some form of gradient descent for training via backpropagation (a minimal code sketch follows this list).
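
To make the bulleted description concrete, here is a minimal sketch, in Python with NumPy, of a tiny two-layer network trained by gradient descent via backpropagation on the classic XOR problem. The layer sizes, learning rate, iteration count and data are illustrative choices only, not taken from any particular framework or from the text above.

```python
import numpy as np

# Toy data: the XOR problem (not linearly separable, so a hidden layer is needed)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: each layer consumes the previous layer's output
    h = sigmoid(X @ W1 + b1)          # hidden layer of nonlinear processing units
    out = sigmoid(h @ W2 + b2)        # output layer

    # Backward pass: propagate the error gradient layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates of weights and biases
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # typically converges toward [0, 1, 1, 0]
```

Each layer feeds its output forward to the next, and the error gradient is passed back through the layers to adjust the weights, which is exactly the cascade-plus-backpropagation pattern the list describes.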

Layers that have been used in deep learning include hidden layers of an artificial neural network and sets of propositional formulas.  They may also include latent variables organized layer-wise in deep generative models such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.

ARTIFICIAL NEURAL NETWORKS:

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.

An ANN is based on a collection of connected units called artificial neurons (analogous to biological neurons in an animal brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal sent downstream.
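
As a small illustration of the paragraph above, and nothing more than that, a single artificial neuron forms a weighted sum of its incoming signals, adds a bias, and squashes the result into the 0-to-1 range; the weights are what change as learning proceeds. The numbers below are arbitrary.

```python
import numpy as np

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals, squashed to a value between 0 and 1
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

signals = np.array([0.2, 0.9, 0.4])     # outputs of upstream neurons
weights = np.array([1.5, -0.8, 0.3])    # connection strengths (learned)
print(neuron(signals, weights, bias=0.1))
```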

Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times.

The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.

Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.

As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, playing “Go”).

APPLICATIONS:

Just what applications could take advantage of “deep learning?”

IMAGE RECOGNITION:

A common evaluation set for image classification is the MNIST database. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with the TIMIT speech corpus, its small size allows multiple configurations to be tested. A comprehensive list of results on this set is available.
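
For readers who want to see what “testing a configuration” on MNIST looks like in practice, here is a rough sketch using the TensorFlow/Keras API. It assumes TensorFlow is installed, downloads the dataset on first run, and is not one of the benchmark configurations referred to above.

```python
import tensorflow as tf

# Load the 60,000 training / 10,000 test handwritten digits
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 inputs
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))   # [test loss, test accuracy]
```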

Deep learning-based image recognition has become “superhuman”, producing more accurate results than human contestants. This first occurred in 2011.

Deep learning-trained vehicles now interpret 360° camera views.   Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes.

The iPhone X, I am told, uses facial recognition as one method of ensuring security and a would-be hacker’s ultimate failure to unlock the phone.

VISUAL ART PROCESSING:

Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of a) identifying the style period of a given painting, b) “capturing” the style of a given painting and applying it in a visually pleasing manner to an arbitrary photograph, and c) generating striking imagery based on random visual input fields.

NATURAL LANGUAGE PROCESSING:

Neural networks have been used for implementing language models since the early 2000s.  LSTM (long short-term memory) networks helped to improve machine translation and language modeling.  Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep-learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as a probabilistic context-free grammar (PCFG) implemented by an RNN.  Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.  Deep neural architectures provide the best results for constituency parsing, sentiment analysis, information retrieval, spoken language understanding, machine translation, contextual entity linking, writing style recognition and others.
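
As a hedged illustration of the word-embedding idea described above, the snippet below trains word2vec-style vectors on a toy corpus using the gensim library and then inspects a word’s position in the vector space. The corpus, dimensions and parameter values are made up for illustration, and the parameter names assume gensim 4.x.

```python
from gensim.models import Word2Vec

# A toy corpus: each "sentence" is a list of tokens
sentences = [
    ["deep", "learning", "uses", "neural", "networks"],
    ["neural", "networks", "learn", "representations"],
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["vectors", "represent", "words", "in", "a", "vector", "space"],
]

# Train small word2vec embeddings (sizes chosen only for illustration)
model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, epochs=200)

print(model.wv["neural"])                # the word's position in vector space
print(model.wv.most_similar("neural"))   # nearest words by cosine similarity
```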

Google Translate (GT) uses a large end-to-end long short-term memory network.  Its Google Neural Machine Translation (GNMT) system uses an example-based machine translation method in which the system “learns from millions of examples.”  It translates “whole sentences at a time, rather than pieces.” Google Translate supports over one hundred languages.  The network encodes the “semantics of the sentence rather than simply memorizing phrase-to-phrase translations.”  GT can translate directly from one language to another, rather than using English as an intermediate.

DRUG DISCOVERY AND TOXICOLOGY:

A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects.  Research has explored use of deep learning to predict biomolecular target, off-target and toxic effects of environmental chemicals in nutrients, household products and drugs.

AtomNet is a deep learning system for structure-based rational drug design.   AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis.

CUSTOMER RELATIONS MANAGEMENT:

Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM (recency, frequency, monetary value) variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.

RECOMMENDATION SYSTEMS:

Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music recommendations.  Multiview deep learning has been applied for learning user preferences from multiple domains.  The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.

BIOINFORMATICS:

An autoencoder ANN was used in bioinformatics, to predict gene ontology annotations and gene-function relationships.
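
An autoencoder of the kind mentioned above is simply a network trained to reconstruct its own input through a narrow hidden layer, so the hidden activations become compact learned features. The sketch below, with invented dimensions and random stand-in data rather than the published bioinformatics model, shows the idea in Keras (assuming TensorFlow is installed).

```python
import numpy as np
import tensorflow as tf

# Stand-in data: 1,000 samples with 200 binary features (e.g., annotation flags)
data = np.random.default_rng(0).integers(0, 2, size=(1000, 200)).astype("float32")

inputs = tf.keras.Input(shape=(200,))
encoded = tf.keras.layers.Dense(32, activation="relu")(inputs)        # encoder: compress
decoded = tf.keras.layers.Dense(200, activation="sigmoid")(encoded)   # decoder: reconstruct

autoencoder = tf.keras.Model(inputs, decoded)
encoder = tf.keras.Model(inputs, encoded)   # reuse the trained encoder half

autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(data, data, epochs=10, batch_size=32)   # target equals input

features = encoder.predict(data)   # 32-dimensional learned features
print(features.shape)              # (1000, 32)
```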

In medical informatics, deep learning was used to predict sleep quality based on data from wearables and predictions of health complications from electronic health record data.

MOBILE ADVERTISING:

Finding the appropriate mobile audience for mobile advertising is always challenging since there are many data points that need to be considered and assimilated before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.

ADVANTAGES AND DISADVANTAGES:

ADVANTAGES:

  • Delivers best-in-class performance, significantly outperforming other solutions in multiple domains, including speech, language, vision and playing games like Go. This isn’t by a little bit, but by a significant amount.
  • Reduces the need for feature engineering, one of the most time-consuming parts of machine-learning practice.
  • Provides an architecture that can be adapted to new problems relatively easily (e.g., vision, time series, language), using techniques like convolutional neural networks, recurrent neural networks and long short-term memory.

DISADVANTAGES:

  • Requires a large amount of data — if you only have thousands of examples, deep learning is unlikely to outperform other approaches.
  • Is extremely computationally expensive to train. The most complex models take weeks to train using hundreds of machines equipped with expensive GPUs.
  • Does not have much in the way of a strong theoretical foundation. This leads to the next disadvantage.
  • Determining the topology/flavor/training method/hyperparameters for deep learning is a black art with no theory to guide you.
  • What is learned is not easy to comprehend. Other classifiers (e.g., decision trees, logistic regression) make it much easier to understand what’s going on.

SUMMARY:

Whether we like it or not, deep learning will continue to develop.  As hardware and the ability to capture and store huge amounts of data continue to improve, the machine-learning process will only get better.  There will come a time when we see a “rise of the machines”.  Let’s just hope humans retain the ability to control those machines.

BITCOIN

December 9, 2017


I have been hearing a great deal about Bitcoin lately, specifically on the early-morning television business channels. I am not too sure what this is all about, so I thought I would take a look.  First, an “official” definition.

Bitcoin is a cryptocurrency and worldwide payment system. It is the first decentralized digital currency, as the system works without a central bank or single administrator. … Bitcoin was invented by an unknown person or group of people under the name Satoshi Nakamoto and released as open-source software in 2009.

The “unknown” part really disturbs me, as do the “cryptocurrency” aspects, but let’s continue.  Do you remember the Star Trek episodes in which someone asks how much something costs and the answer is “_______ credits”?  This is essentially what Bitcoin is: digital currency. No one controls Bitcoin, and bitcoins aren’t printed like dollars or euros – they’re produced by people, and increasingly businesses, running computers all around the world, using software that solves mathematical problems.

Bitcoin transactions are completed when a “block” is added to the blockchain database that underpins the currency; however, this can be a laborious process.  Segwit2x proposes moving bitcoin’s transaction data outside of the block and on to a parallel track to allow more transactions to take place. The changes happened in November and it remains to be seen if those changes will have a positive or negative impact on the price of bitcoin in the long term.

It’s been an incredible 2017 for bitcoin growth, with its value quadrupling in the past six months, surpassing the value of an ounce of gold for the first time. It means if you invested £2,000 five years ago, you would be a millionaire today.

You cannot “churn out” an unlimited number of bitcoins. The bitcoin protocol – the rules that make bitcoin work – says that only twenty-one (21) million bitcoins can ever be created by miners. However, these coins can be divided into smaller parts (the smallest divisible amount is one hundred-millionth of a bitcoin and is called a ‘satoshi’, after the founder of bitcoin).
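
A quick arithmetic sketch of the divisibility point above; the 21 million cap and the definition of a satoshi come straight from the preceding paragraph, while the 0.003 BTC figure is just an example amount.

```python
SATOSHIS_PER_BTC = 100_000_000   # 1 bitcoin = 100 million satoshis
MAX_SUPPLY_BTC = 21_000_000      # hard cap set by the protocol

print(MAX_SUPPLY_BTC * SATOSHIS_PER_BTC)   # 2,100,000,000,000,000 satoshis in total
print(0.003 * SATOSHIS_PER_BTC)            # 0.003 BTC expressed in satoshis: 300,000
```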

Conventional currency has been based on gold or silver. Theoretically, you knew that if you handed over a dollar at the bank, you could get some gold back (although this didn’t actually work in practice). But bitcoin isn’t based on gold; it’s based on mathematics. To me this is absolutely fascinating.  Around the world, people are using software programs that follow a mathematical formula to produce bitcoins. The mathematical formula is freely available, so that anyone can check it. The software is also open source, meaning that anyone can look at it to make sure that it does what it is supposed to.
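
The “mathematical problems” that mining software solves are hash puzzles: find a number (a nonce) that makes the hash of a block’s contents meet a difficulty condition. The toy version below uses Python’s standard hashlib and a made-up, very low difficulty; real Bitcoin mining hashes a block header twice with SHA-256 against a numeric target, at vastly higher difficulty.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce so SHA-256(block_data + nonce) starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine("block #1: Alice pays Bob 0.5 BTC")
print(nonce)   # anyone can re-run the hash once to verify the answer cheaply
```

Finding the nonce takes many attempts, but checking it takes one hash, which is why anyone on the network can verify the work without trusting the miner.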

SPECIFIC CHARACTERISTICS:

  1. It’s decentralized

The bitcoin network isn’t controlled by one central authority. Every machine that mines bitcoin and processes transactions makes up a part of the network, and the machines work together. That means that, in theory, one central authority can’t tinker with monetary policy and cause a meltdown – or simply decide to take people’s bitcoins away from them, as the Central European Bank decided to do in Cyprus in early 2013. And if some part of the network goes offline for some reason, the money keeps on flowing.

  2. It’s easy to set up

Conventional banks make you jump through hoops simply to open a bank account. Setting up merchant accounts for payment is another Kafkaesque task, beset by bureaucracy. However, you can set up a bitcoin address in seconds, no questions asked, and with no fees payable.

  3. It’s anonymous

Well, kind of. Users can hold multiple bitcoin addresses, and they aren’t linked to names, addresses, or other personally identifying information.

  4. It’s completely transparent

Bitcoin stores details of every single transaction that ever happened in the network in a huge version of a general ledger, called the blockchain. The blockchain tells all. If you have a publicly used bitcoin address, anyone can tell how many bitcoins are stored at that address. They just don’t know that it’s yours. There are measures that people can take to make their activities more opaque on the bitcoin network, though, such as not using the same bitcoin addresses consistently, and not transferring lots of bitcoin to a single address.

  5. Transaction fees are minuscule

Your bank may charge you a £10 fee for international transfers. Bitcoin doesn’t.

  6. It’s fast

You can send money anywhere and it will arrive minutes later, as soon as the bitcoin network processes the payment.

  7. It’s non-repudiable

When your bitcoins are sent, there’s no getting them back, unless the recipient returns them to you. They’re gone forever.

WHERE TO BUY AND SELL

I definitely recommend you do your homework before buying Bitcoin, because its value is a roller coaster, but there are several exchanges through which Bitcoin can be purchased or sold.  Good luck.

CONCLUSIONS:

Is Bitcoin a bubble? It’s a natural question to ask—especially after Bitcoin’s price shot up from $12,000 to $15,000 this past week.

Brent Goldfarb is a business professor at the University of Maryland, and William Deringer is a historian at MIT. Both have done research on the history and economics of bubbles, and they talked to Ars by phone this week as Bitcoin continues its surge.

Both academics saw clear parallels between the bubbles they’ve studied and Bitcoin’s current rally. Bubbles tend to be driven either by new technologies (like railroads in 1840s Britain or the Internet in the 1990s) or by new financial innovations (like the financial engineering that produced the 2008 financial crisis). Bitcoin, of course, is both a new technology and a major financial innovation.

“A lot of bubbles historically involve some kind of new financial technology the effects of which people can’t really predict,” Deringer told Ars. “These new financial innovations create enthusiasm at a speed that is greater than people are able to reckon with all the consequences.”

Neither scholar wanted to predict when the current Bitcoin boom would end. But Goldfarb argued that we’re seeing classic signs that often occur near the end of a bubble. The end of a bubble, he told us, often comes with “a high amount of volatility and a lot of excitement.”

Goldfarb expects that in the coming months we’ll see more “stories about people who got fabulously wealthy on bitcoin.” That, in turn, could draw in more and more novice investors looking to get in on the action. From there, some triggering event will start a panic that will lead to a market crash.

“Uncertainty of valuation is often a huge issue in bubbles,” Deringer told Ars. Unlike a stock or bond, Bitcoin pays no interest or dividends, making it hard to figure out how much the currency ought to be worth. “It is hard to pinpoint exactly what the fundamentals of Bitcoin are,” Deringer said.

That uncertainty has allowed Bitcoin’s value to soar 1,000-fold over the last five years. But it could also make the market vulnerable to crashes if investors start to lose confidence.

I would say travel at your own risk.

 

DARK NET

December 6, 2017


Most of the individuals who read my postings are very well-informed and know that Tim Berners-Lee “invented” the World Wide Web.  In my opinion, the Web is a resounding technological improvement in communication.  It has been a game-changer in the truest sense of the word.  I think there are legitimate uses which save tremendous time.  There are also illegitimate uses, as we shall see.

BIOGRAPHY:

In 1989, while working at CERN, the European Particle Physics Laboratory in Geneva, Switzerland, Tim Berners-Lee proposed a global hypertext project, to be known as the World Wide Web. Based on the earlier “Enquire” work, his efforts were designed to allow people to work together by combining their knowledge in a web of hypertext documents.  Sir Tim wrote the first World Wide Web server, “httpd”, and the first client, “WorldWideWeb”, a what-you-see-is-what-you-get hypertext browser/editor which ran in the NeXTStep environment. This work began in October 1990.  The program “WorldWideWeb” was first made available within CERN in December, and on the Internet at large in the summer of 1991.

From 1991 through 1993, Tim continued working on the design of the Web, coordinating feedback from users across the Internet. His initial specifications of URIs, HTTP and HTML were refined and discussed in larger circles as the Web technology spread.

Tim Berners-Lee graduated from the Queen’s College at Oxford University, England, in 1976. While there he built his first computer with a soldering iron, TTL gates, an M6800 processor and an old television.

He spent two years with Plessey Telecommunications Ltd (Poole, Dorset, UK) a major UK Telecom equipment manufacturer, working on distributed transaction systems, message relays, and bar code technology.

In 1978 Tim left Plessey to join D.G Nash Ltd (Ferndown, Dorset, UK), where he wrote, among other things, typesetting software for intelligent printers and a multitasking operating system.

His year and one-half spent as an independent consultant included a six-month stint (Jun-Dec 1980) as consultant software engineer at CERN. While there, he wrote for his own private use his first program for storing information including using random associations. Named “Enquire” and never published, this program formed the conceptual basis for the future development of the World Wide Web.

From 1981 until 1984, Tim worked at John Poole’s Image Computer Systems Ltd, with technical design responsibility. Work here included real time control firmware, graphics and communications software, and a generic macro language. In 1984, he took up a fellowship at CERN, to work on distributed real-time systems for scientific data acquisition and system control. Among other things, he worked on FASTBUS system software and designed a heterogeneous remote procedure call system.

In 1994, Tim founded the World Wide Web Consortium at the Laboratory for Computer Science (LCS). This lab later merged with the Artificial Intelligence Lab in 2003 to become the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT). Since that time he has served as the Director of the World Wide Web Consortium, a Web standards organization which develops interoperable technologies (specifications, guidelines, software, and tools) to lead the Web to its full potential. The Consortium has host sites located at MIT, at ERCIM in Europe, and at Keio University in Japan as well as offices around the world.

In 1999, he became the first holder of 3Com Founders chair at MIT. In 2008 he was named 3COM Founders Professor of Engineering in the School of Engineering, with a joint appointment in the Department of Electrical Engineering and Computer Science at CSAIL where he also heads the Decentralized Information Group (DIG). In December 2004 he was also named a Professor in the Computer Science Department at the University of Southampton, UK. From 2006 to 2011 he was co-Director of the Web Science Trust, launched as the Web Science Research Initiative, to help create the first multidisciplinary research body to examine the Web.

In 2008 he founded and became Director of the World Wide Web Foundation.  The Web Foundation is a non-profit organization devoted to achieving a world in which all people can use the Web to communicate, collaborate and innovate freely.  The Web Foundation works to fund and coordinate efforts to defend the Open Web and further its potential to benefit humanity.

In June 2009, then Prime Minister Gordon Brown announced that Sir Tim would work with the UK Government to help make data more open and accessible on the Web, building on the work of the Power of Information Task Force. Sir Tim was a member of the Public Sector Transparency Board, tasked with driving forward the UK Government’s transparency agenda, and he has promoted open government data globally.

In 2011 he was named to the Board of Trustees of the Ford Foundation, a globally oriented private foundation with the mission of advancing human welfare. He is President of the UK’s Open Data Institute which was formed in 2012 to catalyze open data for economic, environmental, and social value.

He is the author, with Mark Fischetti, of the book “Weaving the Web” on the past, present and future of the Web.

On March 18, 2013, Sir Tim, along with Vinton Cerf, Robert Kahn, Louis Pouzin and Marc Andreessen, was awarded the Queen Elizabeth Prize for Engineering for “ground-breaking innovation in engineering that has been of global benefit to humanity.”

It should be very obvious from this rather short biography that Sir Tim is definitely a “heavy hitter”.

DARK WEB:

I honestly don’t think Sir Tim realized the full gravity of his work and certainly never dreamed there might develop a “dark web”.

The Dark Web is the public World Wide Web content existing on dark nets or networks which overlay the public Internet.  These networks require specific software, configurations or authorization to access. They are NOT open forums as we know the web to be at this time.  The dark web forms part of the Deep Web, which is not indexed by search engines such as GOOGLE, BING, Yahoo, Ask.com, AOL, Blekko.com, Wolframalpha, DuckDuckGo, Waybackmachine, or ChaCha.com.  The dark nets which constitute the Dark Web include small, friend-to-friend peer-to-peer networks, as well as large, popular networks like Freenet, I2P, and Tor, operated by public organizations and individuals. Users of the Dark Web refer to the regular web as the Clearnet due to its unencrypted nature.

A December 2014 study by Gareth Owen from the University of Portsmouth found the most commonly requested type of content on Tor was child pornography, followed by black markets, while the individual sites with the highest traffic were dedicated to botnet operations.  Botnet is defined as follows:

“a network of computers created by malware and controlled remotely, without the knowledge of the users of those computers: The botnet was used primarily to send spam emails.”

Hackers also build botnets to carry out DDoS (distributed denial-of-service) attacks.

Many whistle-blowing sites maintain a presence, as do political discussion forums.  Cloned websites and other scam sites are numerous.  Many hackers sell their services individually or as part of groups. There are reports of crowd-funded assassinations and hit men for hire.  Sites associated with Bitcoin, fraud-related services and mail-order services are some of the most prolific.

Commercial dark net markets, which mediate transactions for illegal drugs and other goods, attracted significant media coverage starting with the popularity of Silk Road and its subsequent seizure by legal authorities. Other markets sell software exploits and weapons.  Even a brief survey reveals the kinds of activity commonly found on the dark net.

As you can see, the uses for the dark net are quite lovely, lovely indeed.  As with any great development such as the Internet, nefarious uses can and do present themselves.  I would stay away from the dark net.  Just don’t go there.  Hope you enjoy this one and please send me your comments.


OKAY first, let us define “OPEN SOURCE SOFTWARE” as follows:

Open-source software (OSS) is computer software with its source-code made available with a license in which the copyright holder provides the rights to study, change, and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. The benefits include:

  • COST—Generally, open source software is free.
  • FLEXIBILITY—Computer specialists can alter the software to fit their needs for the program(s) they are writing code for.
  • FREEDOM—Generally, no issues with patents or copyrights.
  • SECURITY—The one issue with security arises when using open source software as embedded code, due to compatibility issues.
  • ACCOUNTABILITY—Once again, there are no issues with accountability and producers of the code are known.

A very detailed article written by Jacob Beningo has seven (7) excellent points for avoiding, like the plague, open source software.  Given below are his arguments.

REASON 1—LACKS TRACEABLE SOFTWARE DEVELOPMENT LIFE CYCLE–Open source software usually starts with an ingenious developer working out of their garage or basement, hoping to create code that is very functional and useful. Eventually, multiple developers with spare time on their hands get involved. The software evolves, but it doesn’t really follow a traceable design cycle or even follow best practices. These various developers implement what they want or push the code in the direction that meets their needs. The result is software that works in limited situations and circumstances, and users need to cross their fingers and pray that their needs and conditions match them.

REASON 2—DESIGNED FOR FUNCTIONALITY AND NOT ROBUSTNESS–Open source software is often written for functionality only: for example, code that accesses and writes to an SD card or communicates over a USB connection. The issue here is that while the code functions, it generally is not robust and was never designed to anticipate problems.  Very quickly, developers can find that their free open source software is merely functional and can’t stand up to real-world pressures. Developers will find themselves having to dig through unknown terrain trying to figure out how best to improve it or handle errors that weren’t expected by the original developers.

REASON 3—ACCIDENTALLY EXPOSING CONFIDENTIAL INTELLECTUAL PROPERTY–There are several different licensing schemes that open source software developers use. Some really do give away the farm; however, there are also licenses that require any modifications or even associated software to be released as open source. If close attention is not being paid, a developer could find themselves having to release confidential code and algorithms to the world. The “free” software just cost the company its code, or, if it wants to be protected, money on attorney fees to make sure it isn’t giving everything away by using “free” software.

REASON 4—LACKING AUTOMATED AND/OR MANUAL TESTING–A formalized testing process, especially automated testing, is critical to ensuring that a code base is robust and has sufficient quality to meet its needs. I’ve seen open source Python projects that include automated testing, which is encouraging, but for low-level firmware and embedded systems we seem to still lag behind the rest of the software industry. Without automated tests, we have no way to know if integrating that open source component broke something that we won’t notice until we go to production.

REASON 5—POOR DOCUMENTATION OR DOCUMENTATION THAT IS LACKING COMPLETELY–Documentation has been getting better among open source projects that have been around for a long time or that have strong commercial backing. Smaller projects though that are driven by individuals tend to have little to no documentation. If the open source code doesn’t have documentation, putting it into practice or debugging it is going to be a nightmare and more expensive than just getting commercial or industrial-grade software.

REASON 6—REAL-TIME SUPPORT IS LACKING–There are few things more frustrating than doing everything you can to get something to work or debugged and you just hit the wall. When this happens, the best way to resolve the issue is to get support. The problem with open source is that there is no guarantee that you will get the support you need in a timely manner to resolve any issues. Sure, there are forums and social media to request help but those are manned by people giving up their free time to help solve problems. If they don’t have the time to dig into a problem, or the problem isn’t interesting or is too complex, then the developer is on their own.

REASON 7—INTEGRATION IS NEVER AS EASY AS IT SEEMS–The website was found; the demonstration video was awesome. This is the component to use. Look at how easy it is! The source is downloaded and the integration begins. Months later, integration is still going on. What appeared easy quickly turned complex because the same platform or toolchain wasn’t being used. “Minor” modifications had to be made. The rabbit hole just keeps getting deeper but after this much time has been sunk into the integration, it cannot be for naught.

CONCLUSIONS:

I personally am by no means completely against open source software. It’s been extremely helpful and beneficial in certain circumstances. I have used open source software, namely Java, as embedded software for several programs I have written.   It’s important, though, not to use software just because it’s free.  Developers need to recognize their requirements, needs, and the level of robustness required for their product, and appropriately develop or source software that meets those needs rather than blindly selecting software because it’s “free.”  IN OTHER WORDS—BE CAREFUL!


Elon Musk has warned again about the dangers of artificial intelligence, saying that it poses “vastly more risk” than the apparent nuclear capabilities of North Korea. I feel sure Mr. Musk is talking about the long-term dangers and not short-term realities.

This is not the first time Musk has stated that AI could potentially be one of the most dangerous international developments. He said in October 2014 that he considered it humanity’s “biggest existential threat”, a view he has repeated several times while making investments in AI startups and organizations, including OpenAI, to “keep an eye on what’s going on”.  “Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA would not make flying safer. They’re there for good reason.”

Musk again called for regulation, previously doing so directly to US governors at their annual national meeting in Providence, Rhode Island.  Musk’s tweets coincide with the testing of an AI designed by OpenAI to play the multiplayer online battle arena (Moba) game Dota 2, which successfully managed to win all its 1-v-1 games at the International Dota 2 championships against many of the world’s best players competing for a $24.8m (£19m) prize fund.

The AI displayed the ability to predict where human players would deploy forces and improvise on the spot, in a game where sheer speed of operation does not correlate with victory, meaning the AI was simply better, not just faster than the best human players.

Musk backed the non-profit AI research company OpenAI in December 2015, taking up a co-chair position. OpenAI’s goal is to develop AI “in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”. But it is not the first group to take on human players in a gaming scenario. Google’s Deepmind AI outfit, in which Musk was an early investor, beat the world’s best players in the board game Go and has its sights set on conquering the real-time strategy game StarCraft II.

Musk envisions a situation like the one found in the movie “I, Robot”: humanoid robotic systems that can think for themselves. Great movie, but the time frame was set on a future Earth (2035 A.D.) where robots are common assistants and workers for their human owners. It is the story of “robotophobic” Chicago police detective Del Spooner’s investigation into the murder of Dr. Alfred Lanning, who works at U.S. Robotics.  Let me clue you in—the robot did it.

I am sure this audience is familiar with Isaac Asimov’s Three Laws of Robotics.

  • First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov’s three laws suggest there will be no “Rise of the Machines” like the one the very popular movie depicts.   For the three laws to be null and void, we would have to enter a world of “singularity”.  The term singularity describes the moment when a civilization changes so much that its rules and technologies are incomprehensible to previous generations. Think of it as a point-of-no-return in history. Most thinkers believe the singularity will be jump-started by extremely rapid technological and scientific changes. These changes will be so fast, and so profound, that every aspect of our society will be transformed, from our bodies and families to our governments and economies.

A good way to understand the singularity is to imagine explaining the internet to somebody living in the year 1200. Your frames of reference would be so different that it would be almost impossible to convey how the internet works, let alone what it means to our society. You are on the other side of what seems like a singularity to our person from the Middle Ages. But from the perspective of a future singularity, we are the medieval ones. Advances in science and technology mean that singularities might happen over periods much shorter than 800 years. And nobody knows for sure what the hell they’ll bring.

Author Ken MacLeod has a character describe the singularity as “the Rapture for nerds” in his novel The Cassini Division, and the turn of phrase stuck, becoming a popular way to describe the singularity. (Note: MacLeod didn’t actually coin this phrase – he says he got the phrase from a satirical essay in an early-1990s issue of Extropy.) Catherynne Valente argued recently for an expansion of the term to include what she calls “personal singularities,” moments where a person is altered so much that she becomes unrecognizable to her former self. This definition could include post-human experiences. Post-human (my words) would describe robotic future.

Could this happen?  Elon Musk has an estimated net worth of $13.2 billion, making him the 87th richest person in the world, according to Forbes. His fortune owes much to his stake in Tesla Motors Inc. (TSLA), of which he remains CEO and chief product architect. Musk made his first fortune as a cofounder of PayPal, the online payments system that was sold to eBay for $1.5 billion in 2002.  In other words, he is no dummy.

I think it is very wise to listen to people like Musk and heed any and all warnings they may give. The Executive, Legislative and Judicial branches of our country are too busy trying to get reelected to bother with such warnings and when “catch-up” is needed, they always go overboard with rules and regulations.  Now is the time to develop proper and binding laws and regulations—when the technology is new.

THEY GOT IT ALL WRONG

November 15, 2017


We all have heard that necessity is the mother of invention.  There have been wonderful advances in technology since the Industrial Revolution but some inventions haven’t really captured the imagination of many people, including several of the smartest people on the planet.

Consider, for example, this group: Thomas Edison, Lord Kelvin, Steve Ballmer, Robert Metcalfe, and Albert Augustus Pope. Despite backgrounds of amazing achievement and even brilliance, all share the dubious distinction of making some of the worst technological predictions in history and I mean the very worst.

Had they been right, history would be radically different and today, there would be no airplanes, moon landings, home computers, iPhones, or Internet. Fortunately, they were wrong.  And that should tell us something: Even those who shape the future can’t always get a handle on it.

Let’s take a look at several forecasts that were most publicly, painfully incorrect. From Edison to Kelvin to Ballmer, here are 10 of the worst technological predictions in history.

“Heavier-than-air flying machines are impossible.” William Thomson (often referred to as Lord Kelvin), mathematical physicist and engineer, President, Royal Society, in 1895.

A prolific scientific scholar whose name is commonly associated with the history of math and science, Lord Kelvin was nevertheless skeptical about flight. In retrospect, it is often said that Kelvin was quoted out of context, but his aversion to flying machines was well known. At one point, he is said to have publicly declared that he “had not the smallest molecule of faith in aerial navigation.” OK, go tell that to Wilbur and Orville.

“Fooling around with alternating current is just a waste of time. No one will use it, ever.” Thomas Edison, 1889.

Thomas Edison’s brilliance was unassailable. A prolific inventor, he earned 1,093 patents in areas ranging from electric power to sound recording to motion pictures and light bulbs. But he believed that alternating current (AC) was unworkable and its high voltages were dangerous. As a result, he battled those who supported the technology. His so-called “war of currents” came to an end, however, when AC grabbed a larger market share, and he was forced out of the control of his own company.

 

“Computers in the future may weigh no more than 1.5 tons.” Popular Mechanics Magazine, 1949.

The oft-repeated quotation, which has virtually taken on a life of its own over the years, is actually condensed. The original quote was: “Where a calculator like the ENIAC today is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and perhaps weigh only 1.5 tons.” Stated either way, though, the quotation delivers a clear message: Computers are mammoth machines, and always will be. Prior to the emergence of the transistor as a computing tool, no one, including Popular Mechanics, foresaw the incredible miniaturization that was about to begin.

 

“Television won’t be able to hold on to any market it captures after the first six months. People will soon get tired of staring at a plywood box every night.” Darryl Zanuck, 20th Century Fox, 1946.

Hollywood film producer Darryl Zanuck earned three Academy Awards for Best Picture, but proved he had little understanding of the tastes of Americans when it came to technology. Television provided an alternative to the big screen and a superior means of influencing public opinion, despite Zanuck’s dire predictions. Moreover, the technology didn’t wither after six months; it blossomed. By the 1950s, many homes had TVs. In 2013, 79% of the world’s households had them.

 

“I predict the Internet will go spectacularly supernova and in 1996 catastrophically collapse.” Robert Metcalfe, founder of 3Com, in 1995.

An MIT-educated electrical engineer who co-invented Ethernet and founded 3Com, Robert Metcalfe is a holder of the National Medal of Technology, as well as an IEEE Medal of Honor. Still, he apparently was one of many who failed to foresee the unbelievable potential of the Internet. Today, 47% of the 7.3 billion people on the planet use the Internet. Metcalfe is currently a professor of innovation and Murchison Fellow of Free Enterprise at the University of Texas at Austin.

“There’s no chance that the iPhone is going to get any significant market share.” Steve Ballmer, former CEO, Microsoft Corp., in 2007.

A magna cum laude Harvard math graduate with an estimated $33 billion in personal wealth, Steve Ballmer had an amazing tenure at Microsoft. Under his leadership, Microsoft’s annual revenue surged from $25 billion to $70 billion, and its net income jumped 215%. Still, his insights failed him when it came to the iPhone. Apple sold 6.7 million iPhones in its first five quarters, and by the end of fiscal year 2010, its sales had grown to 73.5 million.

 

 

“After the rocket quits our air and starts on its longer journey, its flight would be neither accelerated nor maintained by the explosion of the charges it then might have left.” The New York Times, 1920.

The New York Times was sensationally wrong when it assessed the future of rocketry in 1920, but few people of the era were in a position to dispute their declaration. Forty-one years later, astronaut Alan Shepard was the first American to enter space and 49 years later, Neil Armstrong set foot on the moon, laying waste to the idea that rocketry wouldn’t work. When Apollo 11 was on its way to the moon in 1969, the Times finally acknowledged the famous quotation and amended its view on the subject.

“With over 15 types of foreign cars already on sale here, the Japanese auto industry isn’t likely to carve out a big share of the market for itself.” Business Week, August 2, 1968.

Business Week seemed to be on safe ground in 1968, when it predicted that Japanese market share in the auto industry would be minuscule. But the magazine’s editors underestimated the American consumer’s growing distaste for the domestic concept of planned obsolescence. By the 1970s, Americans were flocking to Japanese dealerships, in large part because Japanese manufacturers made inexpensive, reliable cars. That trend has continued over the past 40 years. In 2016, Japanese automakers built more cars in the US than Detroit did.

“You cannot get people to sit over an explosion.” Albert Augustus Pope, founder, Pope Manufacturing, in the early 1900s.

Albert Augustus Pope thought he saw the future when he launched production of electric cars in Hartford, CT, in 1897. Listening to the quiet performance of the electrics, he made his now-famous declaration about the future of the internal combustion engine. Despite his preference for electrics, however, Pope also built gasoline-burning cars, laying the groundwork for future generations of IC engines. In 2010, there were more than one billion vehicles in the world, the majority of which used internal combustion propulsion.

 

 

 

“I have traveled the length and breadth of this country and talked to the best people, and I can assure you that data processing is a fad that won’t last out the year.” Editor, Prentice Hall Books, 1957.

The concept of data processing was a head-scratcher in 1957, especially for the unnamed Prentice Hall editor who uttered the oft-quoted prediction of its demise. The prediction has since been used in countless technical presentations, usually as an example of our inability to see the future. Amazingly, the editor’s forecast has recently begun to look even worse, as Internet of Things users search for ways to process the mountains of data coming from a new breed of connected devices. By 2020, experts predict there will be 30 to 50 billion such connected devices sending their data to computers for processing.

CONCLUSIONS:

Last but not least, Charles Holland Duell in 1898 was appointed as the United States Commissioner of Patents, and held that post until 1901.  In that role, he is famous for purportedly saying “Everything that can be invented has been invented.”  Well Charlie, maybe not.

ASTROLABE

October 25, 2017


Information for the following post was taken from an article entitled “It’s Official: Earliest Known Marine Astrolabe Found in Shipwreck” by Laura Geggel, senior writer for LiveScience, 25 October 2017.

It’s amazing to me how much history is yet to be discovered, understood and transmitted to readers such as you and me.   I read a fascinating article some months ago indicating the history we do NOT know far exceeds the history we DO know.  Of course, the “winners” get to write their version of what happened.  This is as it has always been. In the great and grand scheme of things, we have artifacts and mentifacts.

ARTIFACT:

“Any object made by human beings, especially with a view to subsequent use.  A handmade object, as a tool, or the remains of one, as a shard of pottery, characteristic of an earlier time or cultural stage, especially such an object found at an archaeological excavation.”

MENTIFACT:

“Mentifact (sometimes called a “psychofact”) is a term coined by Sir Julian Sorell Huxley, used together with the related terms “sociofact” and “artifact” to describe how cultural traits, such as “beliefs, values, ideas,” take on a life of their own spanning over generations, and are conceivable as objects in themselves.”

The word astrolabe is defined as follows:

The astrolabe is a very ancient astronomical computer for solving problems relating to time and the position of the Sun and stars.  Several types of astrolabes have been made.  By far, the most popular type is the planispheric astrolabe, on which the celestial sphere is projected onto the plane of the equator.  A typical old astrolabe was made of brass and was approximately six (6) inches in diameter, although much larger and smaller astrolabes were also fabricated.

The subject of this post is one such device.

FIND:

More than 500 years ago, a fierce storm sank a ship carrying the earliest known marine astrolabe — a device that helped sailors navigate at sea, new research finds. Divers found the artifact in 2014, but were unsure exactly what it was at the time. Now, thanks to a 3D-imaging scanner, scientists were able to find etchings on the bronze disc that confirmed it was an astrolabe.

“It was fantastic to apply our 3D scanning technology to such an exciting project and help with the identification of such a rare and fascinating item,” Mark Williams, a professorial fellow at the Warwick Manufacturing Group at the University of Warwick, in the United Kingdom, said in a statement. Williams and his team did the scan.

 

The marine astrolabe likely dates to between 1495 and 1500, and was aboard a ship known as the Esmeralda, which sank in 1503. The Esmeralda was part of a fleet led by Portuguese explorer Vasco da Gama, the first known person to sail directly from Europe to India.

In 2014, an expedition led by Blue Water Recoveries excavated the Esmeralda shipwreck and recovered the astrolabe. But because researchers couldn’t discern any navigational markings on the almost seven (7) inch-diameter (17.5 centimeters) disc, they were cautious about labeling it without further evidence.

Now, the new scan reveals etchings around the edge of the disc, each separated by five degrees, Williams found. This detail proves it’s an astrolabe, as these markings would have helped mariners measure the height of the sun above the horizon at noon — a strategy that helped them figure out their location while at sea, Williams said.  The disc is also engraved with the Portuguese coat of arms and the personal emblem of Dom Manuel I, Portugal’s king from 1495 to 1521.  “Usually we are working on engineering-related challenges, so to be able to take our expertise and transfer that to something totally different and so historically significant was a really interesting opportunity,” Williams said.
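
As a rough illustration of how those markings were used: a mariner measured the sun’s altitude above the horizon at local noon, then combined it with the sun’s declination for the date (taken from printed tables) to estimate latitude. The simplified calculation below assumes the sun is observed toward the equator; it sketches the principle rather than a full navigational sight reduction.

```python
def latitude_from_noon_sight(sun_altitude_deg: float, sun_declination_deg: float) -> float:
    """Simplified noon-sight latitude, with the sun observed toward the equator."""
    zenith_distance = 90.0 - sun_altitude_deg   # angle from straight overhead to the sun
    return zenith_distance + sun_declination_deg

# Example: sun measured 60 degrees above the horizon, declination -10 degrees
print(latitude_from_noon_sight(60.0, -10.0))   # roughly 20 degrees north
```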

CONCLUSIONS:

The only manner in which the use of this device could be confirmed was through three-dimensional scanning techniques.  Once again, modern technology allows for the unveiling of the truth.  The engraved emblem of Portugal’s king nailed down the time period.  This is a significant find and confirms early voyages throughout history.
