Portions of this post are taken from the January 2018 article written by John Lewis of “Vision Systems”.

I feel there is considerable confusion among Artificial Intelligence (AI), Machine Learning and Deep Learning.  We seem to use these terms and phrases interchangeably, yet they certainly have different meanings.  Natural intelligence, by contrast, is the intelligence displayed by humans and certain animals. Why don’t we do the numbers:

AI:

Artificial Intelligence refers to machines mimicking human cognitive functions such as problem solving or learning.  When a machine understands human speech or can compete with humans in a game of chess, AI applies.  There are several surprising opinions about AI:

  • Sixty-one percent (61%) of people see artificial intelligence making the world a better place
  • Fifty-seven percent (57%) would prefer an AI doctor perform an eye exam
  • Fifty-five percent (55%) would trust an autonomous car. (I’m really not there as yet.)

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names. This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry.

MACHINE LEARNING:

Machine Learning is the current state-of-the-art application of AI and largely responsible for its recent rapid growth. Based upon the idea of giving machines access to data so that they can learn for themselves, machine learning has been enabled by the internet, and the associated rise in digital information being generated, stored and made available for analysis.

Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level understanding. As an application of artificial intelligence, machine learning focuses on developing computer programs that can access data and use it to learn and improve from experience on their own.
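
To make that idea concrete, here is a minimal sketch in Python of a program that learns a classification rule from labeled examples instead of being explicitly programmed with one.  It assumes the scikit-learn library and its built-in iris flower dataset, which are my illustrative choices rather than anything referenced above.

    # A minimal "learning from data" sketch using scikit-learn (assumed installed).
    # The program is never given rules for telling the flower species apart;
    # it infers them from labeled measurements.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)               # 150 flower measurements plus species labels
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = DecisionTreeClassifier(random_state=0)  # no hand-written classification rules anywhere
    model.fit(X_train, y_train)                     # the "experience": labeled training data
    print("Accuracy on flowers the model has never seen:", model.score(X_test, y_test))

The point is simply that the decision logic comes from the data, not from the programmer.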

DEEP LEARNING:

Deep Learning concentrates on a subset of machine-learning techniques, with the term “deep” generally referring to the number of hidden layers in the neural network.  While a conventional neural network may contain only a few hidden layers, a deep network may have tens or hundreds of layers.  In deep learning, a computer model learns to perform classification tasks directly from text, sound or image data. In the case of images, deep learning requires substantial computing power and involves feeding large amounts of labeled data through a multi-layer neural network architecture to create a model that can classify the objects contained within the image.
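
As a rough illustration of what those hidden layers look like in code, here is a small sketch using TensorFlow/Keras and the MNIST handwritten-digit images.  The library, dataset and layer sizes are my own illustrative choices, not anything prescribed above, and a genuinely deep production network would have far more layers and demand far more compute.

    # A deliberately small "deep" network: labeled images fed through several hidden layers.
    import tensorflow as tf

    # 60,000 labeled 28x28 grayscale digit images for training, 10,000 for testing.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0      # scale pixel values to [0, 1]

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),     # 28x28 image -> 784 inputs
        tf.keras.layers.Dense(128, activation="relu"),     # hidden layer 1
        tf.keras.layers.Dense(128, activation="relu"),     # hidden layer 2
        tf.keras.layers.Dense(64, activation="relu"),      # hidden layer 3
        tf.keras.layers.Dense(10, activation="softmax"),   # one output per digit class 0-9
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=3)                  # learn from the labeled examples
    model.evaluate(x_test, y_test)                         # classify digits it has never seen

Stacking many more such layers (typically convolutional ones for images) is what pushes a model from “a few hidden layers” into genuinely deep territory.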

CONCLUSIONS:

It is a brave new world we are living in.  Someone said that AI is definitely the future of computing power and will eventually produce robotic systems that could possibly replace humans.  I just hope the programmers adhere to Dr. Isaac Asimov’s three laws:

 

  • The First Law of Robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law of Robotics: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • The Third Law of Robotics: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

With those words, science-fiction author Isaac Asimov changed how the world saw robots. Where they had largely been Frankenstein-esque metal monsters in the pulp magazines, Asimov saw the potential for robotics as more domestic: a labor-saving device; the ultimate worker. In doing so, he continued a literary tradition of speculative tales: what happens when humanity remakes itself in its own image?

As always, I welcome your comments.


OKAY first, let us define “OPEN SOURCE SOFTWARE” as follows:

Open-source software (OSS) is computer software with its source code made available under a license in which the copyright holder grants the rights to study, change, and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative, public manner. The benefits include:

  • COST—Generally, open source software is free.
  • FLEXIBILITY—Computer specialists can alter the software to fit their needs for the programs they are writing.
  • FREEDOM—Generally, there are no issues with patents or copyrights.
  • SECURITY—The one issue is security: using open source software in embedded code can bring compatibility problems.
  • ACCOUNTABILITY—Once again, there are no issues with accountability; the producers of the code are known.

A very detailed article written by Jacob Beningo gives seven (7) excellent reasons for avoiding open source software like the plague.  Given below are his arguments.

REASON 1—LACKS TRACEABLE SOFTWARE DEVELOPMENT LIFE CYCLE–Open source software usually starts with an ingenious developer working out of their garage or basement, hoping to create code that is functional and useful. Eventually, multiple developers with spare time on their hands get involved. The software evolves, but it doesn’t really follow a traceable design cycle or even best practices. These various developers implement what they want or push the code in the direction that meets their needs. The result is software that works in limited situations and circumstances, and users need to cross their fingers and pray that their needs and conditions match those of the original developers.

REASON 2—DESIGNED FOR FUNCTIONALITY AND NOT ROBUSTNESS–Open source software is often written for functionality only, for example, writing data to an SD card or communicating over a USB connection. The issue is that while the code functions, it generally is not robust and was never designed to anticipate problems. Developers assume the free software can stand up to real-world pressures, but this is rarely the case; they quickly discover that it is merely functional. They then find themselves digging through unknown terrain, trying to figure out how best to improve the code or handle errors that the original developers never expected.

REASON 3—ACCIDENTALLY EXPOSING CONFIDENTIAL INTELLECTUAL PROPERTY–There are several different licensing schemes that open source software developers use. Some really do give away the farm; however, there are also licenses that require any modifications, or even associated software, to be released as open source. If close attention is not paid, a developer could find themselves having to release confidential code and algorithms to the world. The “free” software just cost the company its code; or, if the company wants to be protected, it now needs to spend money on attorney fees to make sure it isn’t giving everything away by using “free” software.

REASON 4—LACKING AUTOMATED AND/OR MANUAL TESTING–A formalized testing process, especially automated testing, is critical to ensuring that a code base is robust and of sufficient quality to meet its needs. I’ve seen open source Python projects that include automated testing, which is encouraging, but for low-level firmware and embedded systems we still seem to lag behind the rest of the software industry. Without automated tests, we have no way to know whether integrating that open source component broke something that we won’t notice until we go to production.
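
As a small, hypothetical illustration of what such automated testing can look like, the sketch below wraps a third-party CRC routine in a few pytest checks before it is integrated.  The module name vendor_lib and the function crc16 are placeholders of my own invention, not a real package.

    # test_crc16.py -- hypothetical regression tests around an open source routine.
    # "vendor_lib.crc16" is a placeholder standing in for whatever component is being integrated.
    import pytest
    from vendor_lib import crc16


    def test_known_vector():
        # CRC-16/CCITT-FALSE of the ASCII string "123456789" is the published check value 0x29B1.
        assert crc16(b"123456789") == 0x29B1


    def test_empty_input():
        # Pin down edge-case behavior so a future "upgrade" cannot silently change it.
        assert crc16(b"") == 0xFFFF


    def test_rejects_text_input():
        # Assumed behavior of this placeholder: plain strings should be refused.
        with pytest.raises(TypeError):
            crc16("not bytes")

Running pytest on a file like this before and after every update of the component gives an early warning that something broke, long before the code reaches production.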

REASON 5—POOR DOCUMENTATION OR DOCUMENTATION THAT IS LACKING COMPLETELY–Documentation has been getting better among open source projects that have been around for a long time or that have strong commercial backing. Smaller projects though that are driven by individuals tend to have little to no documentation. If the open source code doesn’t have documentation, putting it into practice or debugging it is going to be a nightmare and more expensive than just getting commercial or industrial-grade software.

REASON 6—REAL-TIME SUPPORT IS LACKING–There are few things more frustrating than doing everything you can to get something working or debugged and still hitting a wall. When this happens, the best way to resolve the issue is to get support. The problem with open source is that there is no guarantee you will get the support you need in a timely manner. Sure, there are forums and social media where you can request help, but those are manned by people giving up their free time to solve problems. If they don’t have the time to dig into a problem, or the problem isn’t interesting or is too complex, then the developer is on their own.

REASON 7—INTEGRATION IS NEVER AS EASY AS IT SEEMS–The website was found; the demonstration video was awesome. This is the component to use. Look at how easy it is! The source is downloaded and the integration begins. Months later, integration is still going on. What appeared easy quickly turned complex because the same platform or toolchain wasn’t being used. “Minor” modifications had to be made. The rabbit hole just keeps getting deeper but after this much time has been sunk into the integration, it cannot be for naught.

CONCLUSIONS:

I personally am by no means completely against open source software. It’s been extremely helpful and beneficial in certain circumstances. I have used open source software, namely Java, as embedded software for several programs I have written.  It’s important, though, not to use software just because it’s free.  Developers need to recognize their requirements, their needs, and the level of robustness required for their product, and then appropriately develop or source software that meets those needs rather than blindly selecting software because it’s “free.”  IN OTHER WORDS—BE CAREFUL!

AMAZING GRACE

October 3, 2017


There are many people responsible for the revolutionary development and commercialization of the modern-day computer.  Just a few of those names are given below, many of whom you have probably never heard of.  Let’s take a look.

COMPUTER REVOLUTIONARIES:

  • Howard Aiken–Aiken was the original conceptual designer behind the Harvard Mark I computer in 1944.
  • Grace Murray Hopper–Hopper popularized the term “debugging” in 1947 after an actual moth was removed from a computer. Her ideas about machine-independent programming led to the development of COBOL, one of the first modern programming languages. On top of it all, the Navy destroyer USS Hopper is named after her.
  • Ken Thompson and Dennis Ritchie–These guys invented Unix in 1969, the importance of which CANNOT be overstated. Consider this: your fancy Apple computer relies almost entirely on their work.
  • Doug and Gary Carlston–This team of brothers co-founded Brøderbund Software, a successful gaming company that operated from 1980 to 1999. In that time, they were responsible for churning out or marketing revolutionary computer games like Myst and Prince of Persia, helping bring computing into the mainstream.
  • Ken and Roberta Williams–This husband and wife team founded On-Line Systems in 1979, which later became Sierra Online. The company was a leader in producing graphical adventure games throughout the advent of personal computing.
  • Seymour Cray–Cray was a supercomputer architect whose computers were the fastest in the world for many decades. He set the standard for modern supercomputing.
  • Marvin Minsky–Minsky was a professor at MIT and oversaw the AI Lab, a hotspot of hacker activity, where he let prominent programmers like Richard Stallman run free. Were it not for his open-mindedness, programming skill, and ability to recognize that important things were taking place, the AI Lab wouldn’t be remembered as the talent incubator that it is.
  • Bob Albrecht–He founded the People’s Computer Company and developed a sincere passion for encouraging children to get involved with computing. He’s responsible for ushering in innumerable new young programmers and is one of the first modern technology evangelists.
  • Steve Dompier–At a time when computer speech was just barely being realized, Dompier made his computer sing. It was a trick he unveiled at the first meeting of the Homebrew Computer Club in 1975.
  • John McCarthy–McCarthy invented Lisp, the second-oldest high-level programming language that’s still in use to this day. He’s also responsible for bringing mathematical logic into the world of artificial intelligence — letting computers “think” by way of math.
  • Doug Engelbart–Engelbart is most noted for inventing the computer mouse in the mid-1960s, but he’s made numerous other contributions to the computing world. He created early GUIs and was even a member of the team that developed the now-ubiquitous hypertext.
  • Ivan Sutherland–Sutherland received the prestigious Turing Award in 1988 for inventing Sketchpad, the predecessor to the type of graphical user interfaces we use every day on our own computers.
  • Tim Paterson–He wrote QDOS, an operating system that he sold to Bill Gates in 1980. Gates rebranded it as MS-DOS, selling it to the point that it became the most widely used operating system of the day. (How ‘bout them apples?)
  • Dan Bricklin–He’s “The Father of the Spreadsheet.” Working in 1979 with Bob Frankston, he created VisiCalc, a predecessor to Microsoft Excel. It was the killer app of the time — people were buying computers just to run VisiCalc.
  • Bob Kahn and Vint Cerf–Prolific internet pioneers, these two teamed up to build the Transmission Control Protocol and the Internet Protocol, better known as TCP/IP. These are the fundamental communication technologies at the heart of the Internet.
  • Niklaus Wirth–Wirth designed several programming languages but is best known for creating Pascal. He won a Turing Award in 1984 for “developing a sequence of innovative computer languages.”

ADMIRAL GRACE MURRAY HOPPER:

At this point, I want to highlight Admiral Grace Murray Hopper, or “Amazing Grace” as she is called in the computer world and the United States Navy.  Admiral Hopper’s picture is shown below.

Born in New York City in 1906, Grace Hopper joined the U.S. Navy during World War II and was assigned to program the Mark I computer. She continued to work in computing after the war, leading the team that created the first computer language compiler, which led to the popular COBOL language. She resumed active naval service at the age of 60, becoming a rear admiral before retiring in 1986. Hopper died in Virginia in 1992.

Born Grace Brewster Murray in New York City on December 9, 1906, Grace Hopper studied math and physics at Vassar College. After graduating from Vassar in 1928, she proceeded to Yale University, where, in 1930, she received a master’s degree in mathematics. That same year, she married Vincent Foster Hopper, becoming Grace Hopper (a name that she kept even after the couple’s 1945 divorce). Starting in 1931, Hopper began teaching at Vassar while also continuing to study at Yale, where she earned a Ph.D. in mathematics in 1934—becoming one of the first few women to earn such a degree.

After the war, Hopper remained with the Navy as a reserve officer. As a research fellow at Harvard, she worked with the Mark II and Mark III computers. She was at Harvard when a moth was found to have shorted out the Mark II, and is sometimes given credit for the invention of the term “computer bug”—though she didn’t actually author the term, she did help popularize it.

Hopper retired from the Naval Reserve in 1966, but her pioneering computer work meant that she was recalled to active duty—at the age of 60—to tackle standardizing communication between different computer languages. She would remain with the Navy for 19 years. When she retired in 1986, at age 79, she was a rear admiral as well as the oldest serving officer in the service.

Saying that she would be “bored stiff” if she stopped working entirely, Hopper took another job post-retirement and stayed in the computer industry for several more years. She was awarded the National Medal of Technology in 1991—becoming the first female individual recipient of the honor. At the age of 85, she died in Arlington, Virginia, on January 1, 1992. She was laid to rest in the Arlington National Cemetery.

CONCLUSIONS:

In 1997, the guided missile destroyer USS Hopper was commissioned by the Navy in San Francisco. In 2004, the University of Missouri honored Hopper with a computer museum on its campus, dubbed “Grace’s Place.” On display are early computers and computer components to educate visitors on the evolution of the technology. In addition to her programming accomplishments, Hopper’s legacy includes encouraging young people to learn how to program. The Grace Hopper Celebration of Women in Computing Conference is a technical conference that encourages women to become part of the world of computing, while the Association for Computing Machinery offers a Grace Murray Hopper Award. Additionally, on her birthday in 2013, Hopper was remembered with a “Google Doodle.”

In 2016, Hopper was posthumously honored with the Presidential Medal of Freedom by Barack Obama.

Who said women could not “do” STEM (Science, Technology, Engineering and Mathematics)?
