SOCIAL MEDIA

June 27, 2018


DEFINITION:

Social media is typically defined today as “web sites and applications that enable users to create and share content or to participate in social networking” – Oxford Dictionaries.

Now that we have cleared that up, let’s take a look at the very beginning of social media.

Six Degrees, according to several sources, was the first modern-day attempt at providing access to the “marvelous world” of social media. (I have chosen to put marvelous world in quotes because I’m not too sure it’s that marvelous. There is an obvious downside.)  Launched in 1997, Six Degrees allowed users to create a profile and become friends with other users. While the site is no longer functional, it was at one time quite popular, with approximately a million members at its peak.

Other sources indicate that social media has been around for the better part of forty (40) years, with Usenet appearing in 1979.  Usenet is the first recorded network that enabled users to post news to newsgroups.  Although Usenet and similar bulletin board systems heralded the first, albeit very rudimentary, social networks, social media never really took off until almost thirty (30) years later, after Facebook opened to the general public in 2006. Of course, Usenet was never called “social media”; the term simply did not exist at the time.

If we take a very quick look at Internet and Social Media usage, we find the following:

As you can see, social media is incredibly popular and in use hourly, if not minute by minute.  It is big in our society today, all across the world, wherever it is allowed.

If we look at the fifteen most popular sites, we see the following:

Without a doubt, the gorilla in the room is Facebook.

Facebook statistics

  • Facebook adds 500,000 new users a day – that’s six new profiles a second – and just under a quarter of adults in the US visit their account at least once a month
  • The average (mean) number of Facebook friends is 155
  • There are 60 million active small business pages (up from 40 million in 2015), 5 million of which pay for advertising
  • There are thought to be 270 million fake Facebook profiles (there were only 81 million in 2015)
  • Facebook accounts for 1% of social logins made by consumers to sign into the apps and websites of publishers and brands.

It’s important that we look at all social media sites, so if we look at daily usage for the most popular websites, we see the following:

BENEFITS:

  • Ability to connect to other people all over the world. One of the most obvious pros of using social networks is the ability to instantly reach people from anywhere. Use Facebook to stay in touch with your old high school friends who’ve relocated all over the country, get on Google Hangouts with relatives who live halfway around the world, or meet brand new people on Twitter from cities or regions you’ve never even heard of before.
  • Easy and instant communication. Now that we’re connected wherever we go, we don’t have to rely on our landlines, answering machines or snail mail to contact somebody. We can simply open up our laptops or pick up our smartphones and immediately start communicating with anyone on platforms like Twitter or one of the many social messaging apps.
  • Real-time news and information discovery. Gone are the days of waiting around for the six o’clock news to come on TV or for the delivery boy to bring the newspaper in the morning. If you want to know what’s going on in the world, all you need to do is jump on social media. An added bonus is that you can customize your news and information discovery experiences by choosing to follow exactly what you want.
  • Great opportunities for business owners. Business owners and other types of professional organizations can connect with current customers, sell their products and expand their reach using social media. There are actually lots of entrepreneurs and businesses out there that thrive almost entirely on social networks and wouldn’t even be able to operate without it.
  • General fun and enjoyment. You have to admit that social networking is just plain fun sometimes. A lot of people turn to it when they catch a break at work or just want to relax at home. Since people are naturally social creatures, it’s often quite satisfying to see comments and likes show up on our own posts, and it’s convenient to be able to see exactly what our friends are up to without having to ask them directly.

DISADVANTAGES:

  • Information overwhelm. With so many people now on social media tweeting links and posting selfies and sharing YouTube videos, it sure can get pretty noisy. Becoming overwhelmed by too many Facebook friends to keep up with or too many Instagram photos to browse through isn’t all that uncommon. Over time, we tend to rack up a lot of friends and followers, and that can lead to lots of bloated news feeds with too much content we’re not all that interested in.
  • Privacy issues. With so much sharing going on, issues over privacy will always be a big concern. Whether it’s a question of social sites owning your content after it’s posted, becoming a target after sharing your geographical location online, or even getting in trouble at work after tweeting something inappropriate – sharing too much with the public can open up all sorts of problems that sometimes can’t ever be undone.
  • Social peer pressure and cyber bullying. For people struggling to fit in with their peers – especially teens and young adults – the pressure to do certain things or act a certain way can be even worse on social media than it is at school or any other offline setting. In some extreme cases, the overwhelming pressure to fit in with everyone posting on social media or becoming the target of a cyber-bullying attack can lead to serious stress, anxiety and even depression.
  • Online interaction substitution for offline interaction. Since people are now connected all the time and you can pull up a friend’s social profile with a click of your mouse or a tap of your smartphone, it’s a lot easier to use online interaction as a substitute for face-to-face interaction. Some people argue that social media actually promotes antisocial human behavior.
  • Distraction and procrastination. How often do you see someone look at their phone? People get distracted by all the social apps and news and messages they receive, leading to all sorts of problems like distracted driving or the lack of gaining someone’s full attention during a conversation. Browsing social media can also feed procrastination habits and become something people turn to in order to avoid certain tasks or responsibilities.
  • Sedentary lifestyle habits and sleep disruption. Lastly, since social networking is all done on some sort of computer or mobile device, it can sometimes promote too much sitting down in one spot for too long. Likewise, staring into the artificial light from a computer or phone screen at night can negatively affect your ability to get a proper night’s sleep.

Social media is NOT going away any time soon.  Those who choose to use it will continue using it although there are definite privacy issues. The top five (5) issues discussed by users are as follows:

  • Account hacking and impersonation
  • Stalking and harassment
  • Being compelled to turn over passwords
  • The very fine line between effective marketing and privacy intrusion
  • The privacy downside with location-based services

I think these issues are very important and certainly must be considered when using ANY social media platform.  Remember—someone is ALWAYS watching.

 


DISCRIMINATION

June 20, 2018


When I think of discrimination, I automatically think of whites discriminating against blacks. I’m sure that’s because I’m from the southeastern part of the United States, although there is ample evidence that discrimination occurs in every state.   There are other ways in which discrimination can occur.

From the New York Times we read the following:

“A group that is suing Harvard University is demanding that it publicly release admissions data on hundreds of thousands of applicants, saying the records show a pattern of discrimination against Asian-Americans going back decades.

The group was able to view the documents through its lawsuit, which was filed in 2014 and challenges Harvard’s admissions policies. The plaintiffs said in a letter to the court last week that the documents were so compelling that there was no need for a trial, and that they would ask the judge to rule summarily in their favor based on the documents alone.

The plaintiffs also say that the public — which provides more than half a billion dollars a year in federal funding to Harvard — has a right to see the evidence that the judge will consider in her decision.

Harvard counters that the documents are tantamount to trade secrets, and that even in the unlikely event that the judge agrees to decide the case without a trial, she is likely to use only a fraction of the evidence in her decision. Only that portion, the university says, should be released.”

There is no doubt that Harvard University makes considerable efforts to be “all-inclusive”.  They discriminate against whites and Asian-Americans in favor of African-Americans, Hispanics and the LGBT community.  That is a fact and a form of discrimination.

The EEOC tells us the following are methods of discrimination:

I recently read a horrible story about a young man in India.  This guy had completed a course of study at the prestigious Indian Institute of Technology in Delhi, earning a master’s degree in computer science.  He fell in love with a classmate and asked her father for her hand in marriage.  The father said absolutely not: “My daughter will not marry an untouchable, a Dalit.”  Now, Article 17 of the Indian Constitution abolishes untouchability and makes it punishable by law, and the Scheduled Castes and Scheduled Tribes (Prevention of Atrocities) Act of 1989 spells out the safeguards against caste discrimination and violence. The daughter honored her father, and they did not get married.  The young man moved to the United States and is now a citizen working for an aerospace company in New England. He is happily married with three children—all citizens.

The term caste was first used by Portuguese travelers who came to India in the 16th century. Caste comes from the Spanish and Portuguese word “casta”, which means “race”, “breed”, or “lineage”. Many Indians use the term “jati”. There are 3,000 castes and 25,000 sub-castes in India, each related to a specific occupation. A caste system is a class structure determined by birth. Loosely, it means that in some societies, if your parents are poor, you’re going to be poor, too. The same goes for being rich: if your parents were rich, you will be rich.   According to one long-held theory about the origins of South Asia’s caste system, Aryans from central Asia invaded South Asia and introduced the caste system as a means of controlling the local populations. The Aryans defined key roles in society, then assigned groups of people to them.

If a Hindu were asked to explain the nature of the caste system, he or she might tell the story of Brahma — the four-headed, four-handed deity worshipped as the creator of the universe. According to an ancient text known as the Rigveda, the division of Indian society was based on Brahma’s divine manifestation of four groups. Priests and teachers were cast from his mouth, rulers and warriors from his arms, merchants and traders from his thighs, and workers and peasants from his feet.  Others might present a biological explanation of India’s stratification system, based on the notion that all living things inherit a particular set of qualities. Some inherit wisdom and intelligence, some get pride and passion, and others are stuck with less fortunate traits. Proponents of this theory attribute all aspects of one’s lifestyle — social status, occupation, and even diet — to these inherent qualities and thus use them to explain the foundation of the caste system.

The caste structure may be seen in the diagram below.

India’s caste system has four main classes (also called varnas) based originally on personality, profession, and birth. In descending order, the classes are as follows:

  • Brahmana (now more commonly spelled Brahmin): Consists of those engaged in scriptural education and teaching, essential for the continuation of knowledge.
  • Kshatriya: Take on all forms of public service, including administration, maintenance of law and order, and defense.
  • Vaishya: Engage in commercial activity as businessmen.
  • Shudra: Work as semi-skilled and unskilled laborers.

You will notice that the “untouchables” are not even considered a class of Indian society. Traditionally, the groups characterized as untouchable were those whose occupations and habits of life involved ritually polluting activities, of which the most important were (1) taking life for a living, a category that included, for example, fishermen; (2) killing or disposing of dead cattle or working with their hides for a living; (3) pursuing activities that brought the participant into contact with emissions of the human body, such as feces, urine, sweat, and spittle, a category that included such occupational groups as sweepers and washermen; and (4) eating the flesh of cattle or of domestic pigs and chickens, a category into which most of the indigenous tribes of India fell.

As mentioned earlier, Article 17 of the Indian Constitution was introduced to abolish untouchability.  Do you really think that happened?  Of course not.  Indians of the Dalit classification, and there are millions of them, still face rejection and discrimination on a daily basis.  Maybe we here in “los estados unidos” have it better than we think.


Portions of this post are taken from the January 2018 article written by John Lewis of “Vision Systems”.

I feel there is considerable confusion between Artificial Intelligence (AI), Machine Learning and Deep Learning.  We seemingly use these terms and phrases interchangeably, yet they certainly have different meanings.  Natural intelligence is the intelligence displayed by humans and certain animals. Why don’t we do the numbers:

AI:

Artificial Intelligence refers to machines mimicking human cognitive functions such as problem solving or learning.  When a machine understands human speech or can compete with humans in a game of chess, AI applies.  There are several surprising opinions about AI as follows:

  • Sixty-one percent (61%) of people see artificial intelligence making the world a better place
  • Fifty-seven percent (57%) would prefer an AI doctor perform an eye exam
  • Fifty-five percent (55%) would trust an autonomous car. (I’m really not there as yet.)

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names. This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry.

MACHINE LEARNING:

Machine Learning is the current state-of-the-art application of AI and largely responsible for its recent rapid growth. Based upon the idea of giving machines access to data so that they can learn for themselves, machine learning has been enabled by the internet, and the associated rise in digital information being generated, stored and made available for analysis.

Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level understanding. Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. It focuses on the development of computer programs that can access data and use it to learn for themselves.
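To make that idea concrete, here is a minimal sketch of my own (not from the cited article) using the open-source scikit-learn library, which I am assuming is installed. Notice that the model is never given rules for telling the flower species apart; it works them out from labeled example data:

    # A minimal machine-learning sketch using scikit-learn (assumed installed).
    # The classifier learns to label iris flowers from example measurements
    # instead of being explicitly programmed with rules.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    X, y = load_iris(return_X_y=True)              # measurements and species labels
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)     # hold back data the model never sees

    model = LogisticRegression(max_iter=1000)      # a simple, classic learning algorithm
    model.fit(X_train, y_train)                    # "learn" from the training examples

    predictions = model.predict(X_test)            # generalize to unseen examples
    print("Accuracy on unseen data:", accuracy_score(y_test, predictions))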

DEEP LEARNING:

Deep Learning concentrates on a subset of machine-learning techniques, with the term “deep” generally referring to the number of hidden layers in the neural network.  While a conventional neural network may contain a few hidden layers, a deep network may have tens or hundreds of layers.  In deep learning, a computer model learns to perform classification tasks directly from text, sound or image data. In the case of images, deep learning requires substantial computing power and involves feeding large amounts of labeled data through a multi-layer neural network architecture to create a model that can classify the objects contained within the image.
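Again purely as an illustration of my own rather than anything from the article, here is a minimal sketch of a small deep network using the Keras API that ships with TensorFlow (assumed installed). Note the several stacked hidden layers, the “deep” part, and the labeled image data being fed through them:

    # A minimal deep-learning sketch using the Keras API bundled with TensorFlow.
    # The network stacks several hidden layers (the "deep" part) and learns to
    # classify handwritten digits directly from labeled pixel data.
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0    # scale pixel values to 0-1

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),   # 784 input pixels
        tf.keras.layers.Dense(128, activation="relu"),   # hidden layer 1
        tf.keras.layers.Dense(64, activation="relu"),    # hidden layer 2
        tf.keras.layers.Dense(10, activation="softmax")  # one output per digit class
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, validation_split=0.1)
    print(model.evaluate(x_test, y_test))                # loss and accuracy on unseen images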

CONCLUSIONS:

It is a brave new world we are living in.  Someone said that AI is definitely the future of computing power, eventually leading to robotic systems that could possibly replace humans.  I just hope the programmers adhere to Dr. Isaac Asimov’s three laws:

 

  • The First Law of Robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law of Robotics: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • The Third Law of Robotics: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

With those words, science fiction author Isaac Asimov changed how the world saw robots. Where they had largely been Frankenstein-esque metal monsters in the pulp magazines, Asimov saw the potential for robotics as something more domestic: a labor-saving device, the ultimate worker. In doing so, he continued a literary tradition of speculative tales: what happens when humanity remakes itself in its own image?

As always, I welcome your comments.

DARK NET

December 6, 2017


Most of the individuals who read my postings are very well-informed and know that Tim Berners-Lee “invented” the World Wide Web (not the Internet itself, which predates his work).  In my opinion, the Web is a resounding technological improvement in communication.  It has been a game-changer in the truest sense of the word.  I think there are legitimate uses which save tremendous time.  There are also illegitimate uses, as we shall see.

A photograph of Sir Tim Berners-Lee is shown below:

BIOGRAPHY:

In 1989, while working at CERN, the European Particle Physics Laboratory in Geneva, Switzerland, Tim Berners-Lee proposed a global hypertext project, to be known as the World Wide Web. Based on the earlier “Enquire” work, his efforts were designed to allow people to work together by combining their knowledge in a web of hypertext documents.  Sir Tim wrote the first World Wide Web server, “httpd”, and the first client, “WorldWideWeb”, a what-you-see-is-what-you-get hypertext browser/editor which ran in the NeXTStep environment. This work began in October 1990.  The program “WorldWideWeb” was first made available within CERN in December of that year, and on the Internet at large in the summer of 1991.

From 1991 through 1993, Tim continued working on the design of the Web, coordinating feedback from users across the Internet. His initial specifications of URIs, HTTP and HTML were refined and discussed in larger circles as the Web technology spread.
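For readers who have never looked under the hood, here is a tiny sketch of my own (not Sir Tim’s code) showing the request/response exchange that his HTTP specification defines, using nothing but Python’s standard library; example.com is simply a placeholder host reserved for documentation:

    # A minimal illustration of the HTTP request/response cycle defined by
    # Berners-Lee's specifications, using only Python's standard library.
    # "example.com" is a placeholder host reserved for documentation purposes.
    import http.client

    conn = http.client.HTTPConnection("example.com", 80)
    conn.request("GET", "/")                   # the verb and the resource (a URI path)
    response = conn.getresponse()              # status line, headers and body come back

    print(response.status, response.reason)    # e.g. 200 OK
    print(response.getheader("Content-Type"))  # metadata carried in the HTTP headers
    body = response.read()                     # the HTML document itself
    print(body[:200])                          # the first few bytes of the markup
    conn.close()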

Tim Berners-Lee graduated from the Queen’s College at Oxford University, England, in 1976. While there he built his first computer with a soldering iron, TTL gates, an M6800 processor and an old television.

He spent two years with Plessey Telecommunications Ltd (Poole, Dorset, UK) a major UK Telecom equipment manufacturer, working on distributed transaction systems, message relays, and bar code technology.

In 1978 Tim left Plessey to join D.G Nash Ltd (Ferndown, Dorset, UK), where he wrote, among other things, typesetting software for intelligent printers and a multitasking operating system.

His year and one-half spent as an independent consultant included a six-month stint (Jun-Dec 1980) as consultant software engineer at CERN. While there, he wrote for his own private use his first program for storing information including using random associations. Named “Enquire” and never published, this program formed the conceptual basis for the future development of the World Wide Web.

From 1981 until 1984, Tim worked at John Poole’s Image Computer Systems Ltd, with technical design responsibility. Work here included real time control firmware, graphics and communications software, and a generic macro language. In 1984, he took up a fellowship at CERN, to work on distributed real-time systems for scientific data acquisition and system control. Among other things, he worked on FASTBUS system software and designed a heterogeneous remote procedure call system.

In 1994, Tim founded the World Wide Web Consortium at the Laboratory for Computer Science (LCS). This lab later merged with the Artificial Intelligence Lab in 2003 to become the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT). Since that time he has served as the Director of the World Wide Web Consortium, a Web standards organization which develops interoperable technologies (specifications, guidelines, software, and tools) to lead the Web to its full potential. The Consortium has host sites located at MIT, at ERCIM in Europe, and at Keio University in Japan as well as offices around the world.

In 1999, he became the first holder of 3Com Founders chair at MIT. In 2008 he was named 3COM Founders Professor of Engineering in the School of Engineering, with a joint appointment in the Department of Electrical Engineering and Computer Science at CSAIL where he also heads the Decentralized Information Group (DIG). In December 2004 he was also named a Professor in the Computer Science Department at the University of Southampton, UK. From 2006 to 2011 he was co-Director of the Web Science Trust, launched as the Web Science Research Initiative, to help create the first multidisciplinary research body to examine the Web.

In 2008 he founded and became Director of the World Wide Web Foundation.  The Web Foundation is a non-profit organization devoted to achieving a world in which all people can use the Web to communicate, collaborate and innovate freely.  The Web Foundation works to fund and coordinate efforts to defend the Open Web and further its potential to benefit humanity.

In June 2009, then Prime Minister Gordon Brown announced that Sir Tim would work with the UK Government to help make data more open and accessible on the Web, building on the work of the Power of Information Task Force. Sir Tim was a member of the Public Sector Transparency Board, tasked with driving forward the UK Government’s transparency agenda, and he has promoted open government data globally.

In 2011 he was named to the Board of Trustees of the Ford Foundation, a globally oriented private foundation with the mission of advancing human welfare. He is President of the UK’s Open Data Institute which was formed in 2012 to catalyze open data for economic, environmental, and social value.

He is the author, with Mark Fischetti, of the book “Weaving the Web” on the past, present and future of the Web.

On March 18, 2013, Sir Tim, along with Vinton Cerf, Robert Kahn, Louis Pouzin and Marc Andreessen, was awarded the Queen Elizabeth Prize for Engineering for “ground-breaking innovation in engineering that has been of global benefit to humanity.”

It should be very obvious from this rather short biography that Sir Tim is definitely a “heavy hitter”.

DARK WEB:

I honestly don’t think Sir Tim realized the full gravity of his work and certainly never dreamed there might develop a “dark web”.

The Dark Web is World Wide Web content that exists on darknets, overlay networks that ride on top of the public Internet but require specific software, configurations or authorization to access. They are NOT open forums as we know the web to be at this time.  The dark web forms part of the Deep Web, the portion of the web that is not indexed by search engines such as Google, Bing, Yahoo, Ask.com, AOL, Blekko.com, Wolfram Alpha, DuckDuckGo, the Wayback Machine, or ChaCha.com.  The darknets which constitute the Dark Web include small, friend-to-friend peer-to-peer networks, as well as large, popular networks like Freenet, I2P and Tor, operated by public organizations and individuals. Users of the Dark Web refer to the regular web as the Clearnet due to its unencrypted nature.

A December 2014 study by Gareth Owen from the University of Portsmouth found the most commonly requested type of content on Tor was child pornography, followed by black markets, while the individual sites with the highest traffic were dedicated to botnet operations.  Botnet is defined as follows:

“a network of computers created by malware and controlled remotely, without the knowledge of the users of those computers: The botnet was used primarily to send spam emails.”

Hackers also build botnets to carry out DDoS (distributed denial-of-service) attacks.

Many whistle-blowing sites maintain a presence, as do political discussion forums.  Cloned websites and other scam sites are numerous.   Many hackers sell their services individually or as part of groups. There are reports of crowd-funded assassinations and hit men for hire.   Sites associated with Bitcoin, fraud-related services and mail-order services are some of the most prolific.

Commercial darknet markets, which mediate transactions for illegal drugs and other goods, attracted significant media coverage starting with the popularity of Silk Road and its subsequent seizure by legal authorities. Other markets sell software exploits and weapons.  A very brief look at the table below will indicate the activity commonly found on the dark net.

As you can see, the uses for the dark net are quite lovely, lovely indeed.  As with any great development such as the Internet, nefarious uses can and do present themselves.  I would stay away from the dark net.  Just don’t go there.  Hope you enjoy this one and please send me your comments.


OKAY, first let us define “OPEN SOURCE SOFTWARE” as follows:

Open-source software (OSS) is computer software with its source code made available under a license in which the copyright holder provides the rights to study, change, and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. The benefits include:

  • COST—Generally, open source software is free.
  • FLEXIBILITY—Computer specialists can alter the software to fit their needs for the program(s) they are writing code for.
  • FREEDOM—Generally, no issues with patents or copyrights.
  • SECURITY—One issue with security can arise when open source software is embedded in other code, due to compatibility issues.
  • ACCOUNTABILITY—Once again, there are no issues with accountability and producers of the code are known.

A very detailed article written by Jacob Beningo offers seven (7) excellent reasons for avoiding, like the plague, open source software.  Given below are his arguments.

REASON 1—LACKS A TRACEABLE SOFTWARE DEVELOPMENT LIFE CYCLE–Open source software usually starts with an ingenious developer working out of their garage or basement, hoping to create code that is very functional and useful. Eventually multiple developers with spare time on their hands get involved. The software evolves, but it doesn’t really follow a traceable design cycle or even follow best practices. These various developers implement what they want or push the code in the direction that meets their needs. The result is software that works in limited situations and circumstances, and users need to cross their fingers and pray that their needs and conditions match them.

REASON 2—DESIGNED FOR FUNCTIONALITY AND NOT ROBUSTNESS–Open source software is often written for functionality only, for example, accessing and writing to an SD card or communicating over a USB connection. The issue here is that while the code functions, it generally is not robust and was never designed to anticipate issues. Very quickly, developers can find that their free open source software is merely functional and can’t stand up to real-world pressures. They will find themselves having to dig through unknown terrain trying to figure out how best to improve it or handle errors that weren’t expected by the original developers.

REASON 3—ACCIDENTALLY EXPOSING CONFIDENTIAL INTELLECTUAL PROPERTY–There are several different licensing schemes that open source software developers use. Some really do give away the farm; however, there are also licenses that require any modifications, or even associated software, to be released as open source. If close attention is not being paid, a developer could find themselves having to release confidential code and algorithms to the world. The “free” software just cost the company by revealing its code, and if it wants to stay protected, it now needs to spend money on attorney fees to make sure it isn’t giving everything away by using “free” software.

REASON 4—LACKING AUTOMATED AND/OR MANUAL TESTING–A formalized testing process, especially automated tests, is critical to ensuring that a code base is robust and has sufficient quality to meet its needs. I’ve seen open source Python projects that include automated testing, which is encouraging, but for low-level firmware and embedded systems we seem to still lag behind the rest of the software industry. Without automated tests, we have no way to know if integrating that open source component broke something that we won’t notice until we go to production.
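As a point of reference, and purely as a sketch of my own, here is roughly what a first automated test for an open source component might look like using Python’s built-in unittest module. The little crc8 routine is a hypothetical stand-in for whatever open source code is actually being integrated:

    # A minimal automated-test sketch using Python's built-in unittest module.
    # "crc8" is a hypothetical stand-in for an open source routine being integrated;
    # the tests pin down expected behavior so later changes that break it get caught.
    import unittest

    def crc8(data: bytes) -> int:
        """Toy 8-bit checksum used here only as the code under test."""
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc

    class TestCrc8(unittest.TestCase):
        def test_empty_input(self):
            self.assertEqual(crc8(b""), 0)              # nothing to accumulate

        def test_result_is_deterministic(self):
            self.assertEqual(crc8(b"123456789"), crc8(b"123456789"))

        def test_detects_single_bit_flip(self):
            self.assertNotEqual(crc8(b"\x01"), crc8(b"\x00"))

    if __name__ == "__main__":
        unittest.main()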

REASON 5—POOR DOCUMENTATION OR DOCUMENTATION THAT IS LACKING COMPLETELY–Documentation has been getting better among open source projects that have been around for a long time or that have strong commercial backing. Smaller projects, though, that are driven by individuals tend to have little to no documentation. If the open source code doesn’t have documentation, putting it into practice or debugging it is going to be a nightmare and more expensive than just getting commercial or industrial-grade software.

REASON 6—REAL-TIME SUPPORT IS LACKING–There are few things more frustrating than doing everything you can to get something to work or debugged and you just hit the wall. When this happens, the best way to resolve the issue is to get support. The problem with open source is that there is no guarantee that you will get the support you need in a timely manner to resolve any issues. Sure, there are forums and social media to request help but those are manned by people giving up their free time to help solve problems. If they don’t have the time to dig into a problem, or the problem isn’t interesting or is too complex, then the developer is on their own.

REASON 7—INTEGRATION IS NEVER AS EASY AS IT SEEMS–The website was found; the demonstration video was awesome. This is the component to use. Look at how easy it is! The source is downloaded and the integration begins. Months later, integration is still going on. What appeared easy quickly turned complex because the same platform or toolchain wasn’t being used. “Minor” modifications had to be made. The rabbit hole just keeps getting deeper but after this much time has been sunk into the integration, it cannot be for naught.

CONCLUSIONS:

I personally am by no means completely against open source software. It’s been extremely helpful and beneficial in certain circumstances. I have used open source software, namely JAVA, as embedded software for several programs I have written.   It’s important, though, not to use software just because it’s free.  Developers need to recognize their requirements, needs, and the level of robustness required for their product, and appropriately develop or source software that meets those needs rather than blindly selecting software because it’s “free.”  IN OTHER WORDS—BE CAREFUL!

AMAZING GRACE

October 3, 2017


There are many people responsible for the revolutionary development and commercialization of the modern-day computer.  Just a few of those names are given below, many of whom you have probably never heard of.  Let’s take a look.

COMPUTER REVOLUTIONARIES:

  • Howard Aiken–Aiken was the original conceptual designer behind the Harvard Mark I computer in 1944.
  • Grace Murray Hopper–Hopper coined the term “debugging” in 1947 after removing an actual moth from a computer. Her ideas about machine-independent programming led to the development of COBOL, one of the first modern programming languages. On top of it all, the Navy destroyer USS Hopper is named after her.
  • Ken Thompson and Dennis Ritchie–These guys invented Unix in 1969, the importance of which CANNOT be overstated. Consider this: your fancy Apple computer relies almost entirely on their work.
  • Doug and Gary Carlston–This team of brothers co-founded Brøderbund Software, a successful gaming company that operated from 1980-1999. In that time, they were responsible for churning out or marketing revolutionary computer games like Myst and Prince of Persia, helping bring computing into the mainstream.
  • Ken and Roberta Williams–This husband and wife team founded On-Line Systems in 1979, which later became Sierra Online. The company was a leader in producing graphical adventure games throughout the advent of personal computing.
  • Seymour Cray–Cray was a supercomputer architect whose computers were the fastest in the world for many decades. He set the standard for modern supercomputing.
  • Marvin Minsky–Minsky was a professor at MIT and oversaw the AI Lab, a hotspot of hacker activity, where he let prominent programmers like Richard Stallman run free. Were it not for his open-mindedness, programming skill, and ability to recognize that important things were taking place, the AI Lab wouldn’t be remembered as the talent incubator that it is.
  • Bob Albrecht–He founded the People’s Computer Company and developed a sincere passion for encouraging children to get involved with computing. He’s responsible for ushering in innumerable new young programmers and is one of the first modern technology evangelists.
  • Steve Dompier–At a time when computer speech was just barely being realized, Dompier made his computer sing. It was a trick he unveiled at the first meeting of the Homebrew Computer Club in 1975.
  • John McCarthy–McCarthy invented Lisp, the second-oldest high-level programming language that’s still in use to this day. He’s also responsible for bringing mathematical logic into the world of artificial intelligence — letting computers “think” by way of math.
  • Doug Engelbart–Engelbart is most noted for inventing the computer mouse in the mid-1960s, but he’s made numerous other contributions to the computing world. He created early GUIs and was even a member of the team that developed the now-ubiquitous hypertext.
  • Ivan Sutherland–Sutherland received the prestigious Turing Award in 1988 for inventing Sketchpad, the predecessor to the type of graphical user interfaces we use every day on our own computers.
  • Tim Paterson–He wrote QDOS, an operating system that he sold to Bill Gates in 1980. Gates rebranded it as MS-DOS, selling it to the point that it became the most widely-used operating system of the day. (How ‘bout them apples?)
  • Dan Bricklin–He’s “The Father of the Spreadsheet.” Working in 1979 with Bob Frankston, he created VisiCalc, a predecessor to Microsoft Excel. It was the killer app of the time — people were buying computers just to run VisiCalc.
  • Bob Kahn and Vint Cerf–Prolific internet pioneers, these two teamed up to build the Transmission Control Protocol and the Internet Protocol, better known as TCP/IP. These are the fundamental communication technologies at the heart of the Internet.
  • Niklaus Wirth–Wirth designed several programming languages, but is best known for creating Pascal. He won a Turing Award in 1984 for “developing a sequence of innovative computer languages.”

ADMIRAL GRACE MURRAY HOPPER:

At this point, I want to highlight Admiral Grace Murray Hopper, or “Amazing Grace” as she is called in the computer world and the United States Navy.  Admiral Hopper’s picture is shown below.

Born in New York City in 1906, Grace Hopper joined the U.S. Navy during World War II and was assigned to program the Mark I computer. She continued to work in computing after the war, leading the team that created the first computer language compiler, which led to the popular COBOL language. She resumed active naval service at the age of 60, becoming a rear admiral before retiring in 1986. Hopper died in Virginia in 1992.

Born Grace Brewster Murray in New York City on December 9, 1906, Grace Hopper studied math and physics at Vassar College. After graduating from Vassar in 1928, she proceeded to Yale University, where, in 1930, she received a master’s degree in mathematics. That same year, she married Vincent Foster Hopper, becoming Grace Hopper (a name that she kept even after the couple’s 1945 divorce). Starting in 1931, Hopper began teaching at Vassar while also continuing to study at Yale, where she earned a Ph.D. in mathematics in 1934—becoming one of the first few women to earn such a degree.

After the war, Hopper remained with the Navy as a reserve officer. As a research fellow at Harvard, she worked with the Mark II and Mark III computers. She was at Harvard when a moth was found to have shorted out the Mark II, and is sometimes given credit for the invention of the term “computer bug”—though she didn’t actually author the term, she did help popularize it.

Hopper retired from the Naval Reserve in 1966, but her pioneering computer work meant that she was recalled to active duty—at the age of 60—to tackle standardizing communication between different computer languages. She would remain with the Navy for 19 years. When she retired in 1986, at age 79, she was a rear admiral as well as the oldest serving officer in the service.

Saying that she would be “bored stiff” if she stopped working entirely, Hopper took another job post-retirement and stayed in the computer industry for several more years. She was awarded the National Medal of Technology in 1991—becoming the first female individual recipient of the honor. At the age of 85, she died in Arlington, Virginia, on January 1, 1992. She was laid to rest in the Arlington National Cemetery.

CONCLUSIONS:

In 1997, the guided missile destroyer USS Hopper was commissioned by the Navy in San Francisco. In 2004, the University of Missouri honored Hopper with a computer museum on their campus, dubbed “Grace’s Place.” On display are early computers and computer components to educate visitors on the evolution of the technology. In addition to her programming accomplishments, Hopper’s legacy includes encouraging young people to learn how to program. The Grace Hopper Celebration of Women in Computing Conference is a technical conference that encourages women to become part of the world of computing, while the Association for Computing Machinery offers a Grace Murray Hopper Award. Additionally, on her birthday in 2013, Hopper was remembered with a “Google Doodle.”

In 2016, Hopper was posthumously honored with the Presidential Medal of Freedom by Barack Obama.

Who said women could not “do” STEM (Science, Technology, Engineering and Mathematics)?

HACKED OFF

October 2, 2017


Portions of this post are taken from an article by Rob Spiegel of Design News Daily.

You can now anonymously hire a cybercriminal online for as little as six to ten dollars ($6 to $10) per hour, says Rodney Joffe, senior vice president at Neustar, a cybersecurity company. As it becomes easier to engineer such attacks, with costs falling, more businesses are getting targeted. About thirty-two (32) percent of information technology professionals surveyed said DDoS attacks cost their companies $100,000 an hour or more. That percentage is up from thirty (30) percent reported in 2014, according to Neustar’s survey of over 500 high-level IT professionals. The data was released Monday.

Hackers are costing consumers and companies between $375 and $575 billion, annually, according to a study published this past Monday, a number only expected to grow as online information stealing expands with increased Internet use.  This number blows my mind.   I actually had no idea the costs were so great.  Great and increasing.

Online crime is estimated at 0.8 percent of worldwide GDP, with developed countries in regions including North America and Europe losing more than countries in Latin American or Africa, according to the new study published by the Center for Strategic and International Studies and funded by cybersecurity firm McAfee.

That amount rivals the share of worldwide GDP – 0.9 percent – that is spent on managing the narcotics trade. The difference in costs between developed and developing nations may be due to better accounting or transparency in developed nations, as the cost of online crime can be difficult to measure and some companies do not disclose when they are hacked for fear of damage to their reputations, the report said.
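As a rough sanity check of my own on those figures, assuming worldwide GDP of roughly $75 trillion at the time of the study (my assumption, not a number from the report), the dollar estimates and the GDP percentages do line up:

    # Back-of-the-envelope check of the study's figures.
    # World GDP of ~$75 trillion is an assumption, not a number from the report.
    world_gdp = 75e12                  # assumed worldwide GDP in US dollars
    low, high = 375e9, 575e9           # study's estimated annual cost of online crime

    print(f"Low estimate:  {low / world_gdp:.2%} of world GDP")   # about 0.50%
    print(f"High estimate: {high / world_gdp:.2%} of world GDP")  # about 0.77%
    print(f"Narcotics comparison: {0.009 * world_gdp / 1e9:.0f} billion dollars")  # 0.9% of GDP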

Cyber attacks have changed in recent years. Gone are the days when relatively benign bedroom hackers entered organizations to show off their skills.  No longer is it a guy in the basement of his or her mom’s home eating Doritos.  Attackers now are often sophisticated criminals who target employees who have access to the organization’s jewels. Instead of using blunt force, these savvy criminals use age-old human fallibility to con unwitting employees into handing over the keys to the vault.  Professional criminals like the crime opportunities they’ve found on the internet; it’s far less dangerous than slinging guns. Cybersecurity is getting worse. Criminal gangs have discovered they can carry out crime more effectively over the internet, with less chance of getting caught.   Hacking individual employees is often the easiest way into a company.  One of the cheapest and most effective ways to target an organization is to target its people. Attackers use psychological tricks that have been used throughout human history.   Using the internet, con tricks can be carried out on a large scale. The criminals do reconnaissance to find out about targets over email. Then they effectively take advantage of key human traits.

One common attack comes as an email impersonating a CEO or supplier. The email looks like it came from your boss or a regular supplier, but it’s actually targeted at a specific professional in the organization.   The email might say, ‘We’ve acquired a new organization. We need to pay them. We need the company’s bank details, and we need to keep this quiet so it won’t affect our stock price.’ The email will go on to say, ‘We only trust you, and you need to do this immediately.’ The email comes from a criminal, using triggers like flattery: ‘You’re the most trusted individual in the organization.’ The criminals play on authority and create the panic of time pressure. Believe it or not, my consulting company has gotten these messages, the most recent being one claiming to come from Experian.

Even long-term attacks can be launched using this tactic of a CEO message. A company in Malaysia received kits purporting to come from the CEO, and the users were told the kits needed to be installed. It took months before the company found out they didn’t come from the CEO at all.

Instead of relying on more advanced technology, some of the new hackers are deploying classic con moves, playing against personal foibles. They take advantage of the base aspects of human nature and how we’re taught to behave.   We have to make sure we have better awareness. For cybersecurity awareness to be engaging, it has to have an impact.

As well as entering the email stream, hackers are identifying the personal interests of victims on social media. Every kind of media is used for attacks. Social media is used to carry out reconnaissance, to identify targets and learn about them.  Users need to see what attackers can find out about them on Twitter or Facebook. The trick hackers use is to pretend they know the target; then they get close through personal interaction on social media. You can look at an organization on Twitter and see who works in finance. Then the attackers take a good look across social platforms to find those individuals and see if they go to a class each week or if they traveled to Iceland in 1996.  You can put together a spear-phishing campaign where you say, ‘Hey, I went on this trip with you.’
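On the defensive side, here is a simplified sketch of my own, not anything from the article, of one kind of automated check a mail gateway can apply against CEO impersonation: flag senders whose domain merely looks like the organization’s real domain. The domain names are made up for illustration, and real filters use many more signals than this:

    # A simplified sketch of a lookalike-sender check, the kind of heuristic a mail
    # gateway might apply against CEO-impersonation emails. The domain names are
    # made-up examples; real filters rely on many more signals than this.
    import difflib

    TRUSTED_DOMAINS = {"example-corp.com"}            # the organization's real domain (made up)

    def sender_domain(from_address: str) -> str:
        return from_address.rsplit("@", 1)[-1].lower()

    def looks_suspicious(from_address: str) -> bool:
        domain = sender_domain(from_address)
        if domain in TRUSTED_DOMAINS:
            return False                               # exact match: treat as internal
        for trusted in TRUSTED_DOMAINS:
            similarity = difflib.SequenceMatcher(None, domain, trusted).ratio()
            if similarity > 0.8:                       # close but not identical: likely spoof
                return True
        return False                                   # unrelated domain: handled by other rules

    print(looks_suspicious("ceo@example-corp.com"))    # False, the genuine domain
    print(looks_suspicious("ceo@examp1e-corp.com"))    # True, a lookalike using the digit "1"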

CONCLUSIONS:

The counter-action to personal hacking is education and awareness. The company can identify potential weaknesses and potential targets and then change the vulnerable aspects of the corporate environment.  We have to look at the culture of the organization. Those who are under pressure are targets. They don’t have time to study each email they get. We also have to discourage reliance on email.   Hackers also exploit the culture of fear, where people are punished for their mistakes. Those are the people most in danger. We need to create a culture where if someone makes a mistake, they can immediately come forward. The quicker someone comes forward, the quicker we can deal with it.
