BELA LISBOA

October 21, 2017


My wife and I heard rave reviews about a new restaurant in Chattanooga called Bela Lisboa, so we decided to visit very early this past Friday evening. (When I say early I mean early. It was just past five o’clock as they were opening the doors.) We were their first customers, so please do not be put off by the vacant tables. Bela Lisboa has been open for approximately three (3) months. Our server, YaYa, indicated their busiest hours were from seven to nine in the evening. One very important note: there is “free” parking in the back. The location is 417 Frazier Avenue in the North Shore area, and parking can be a real problem during the evening hours. Park in back.

It was a MARVELOUS experience and a wonderful surprise. The food was excellent, the service was flawless—every dish was extremely well-prepared. If I may, let me give you a digital tour of the evening.

As I mentioned, we were their first customers, although people began entering as we were finishing our meal. You can get a feel for the seating arrangement and decorations from the photograph above. There are additional tables to the left of the photograph and more positioned towards the front of the establishment. There is also a bar, which is not shown in the picture.

Our server was a young lady named YaYa. She was very knowledgeable regarding the menu items and the specials for the evening, and very attentive yet not “hovering”. (We don’t like hovering!) She is not a native of Chattanooga but has lived here for ten (10) years and was one of the first employees of Bela Lisboa.

OK, let’s go to the food.  We decided that since we had not been there before, we would order multiple dishes from the “starter” menu.  That turned out to be the best thing we could have done.

Let me state emphatically—I do not like HUMMUS—never have for some reason. This dish was the first served and it was wonderfully well prepared. I am now a believer—at least in the hummus served by Bela Lisboa. As you can see, the bread served was basted with olive oil.

Breaded Calamari Rings with House Spicy Marinara. Notice the yellow pepper added to the dish. I do love calamari, which is one of the most popular dishes in Portugal. The spicy sauce was great but not “three alarm”. It was delicious on top of the rings.

Fig-infused Goat Cheese with Honey, Walnut, and Balsamic Reduction. OK, that name is a mouthful. Once again, I’m not really a fan of goat cheese, but this was truly Good with a capital “G”. This could have been my meal alone.

Poached Shrimp in Garlic & Olive Oil. Who does not like shrimp? Served in a skillet and piping hot.

Salmon Tartare With Red Onion, Mango, Coriander, Olive Oil. In years past, I had a bad experience with steak tartare, so I was a little nervous about this one also, but it was fabulous. I mean really fabulous.

The owner and chef of Bela Lisboa is David Filippini. He is from Portugal and owned restaurants there prior to coming to the United States. We did not meet him this evening, but we met the manager during dinner and had a conversation with him as we were leaving. We certainly let him know what a great experience this was. Bela Lisboa is the only restaurant serving Portuguese food in Chattanooga, even though our city is becoming much more oriented to food from other parts of the world.

Now, I would like to show you reviews from others who have enjoyed the experience. Take a look at those given below.

CONCLUSIONS:

The good news is—Bela Lisboa is in Chattanooga. The bad news is—Bela Lisboa is in Chattanooga, and most of you reading this post are not. That is one good reason to make the visit to the “sunny south”. Also, I want to convey that Chattanooga is a marvelous town, one which has become a “destination city” due to the great scenery, the wonderful and welcoming people, marvelous restaurants such as Bela Lisboa, and just plenty of “stuff” to do. When visiting, you MUST try Bela Lisboa! Please come take a look for yourself.


DISTRACTIONS

October 18, 2017


Is there anyone in the United States who does NOT use our road systems on a daily basis?  Only senior citizens in medical facilities and those unfortunate enough to have health problems stay off the roads.  I have a daily commute of approximately thirty-seven (37) miles, one way, and you would not believe what I see.  Then again, maybe you would.  You’ve been there, done that, got the “T” shirt.

It’s no surprise to learn that in-vehicle infotainment systems cause driver distraction, but recent news from the AAA Foundation for Traffic Safety indicated the problem may be worse than we thought. A study released by the organization showed that the majority of today’s infotainment technologies are complex, frustrating, and maybe even dangerous to use. Working with researchers from the University of Utah, AAA analyzed the systems in thirty (30) vehicles, rating them on how much visual and cognitive demand they placed on drivers. The conclusion: none of the thirty (30) produced low demand. Twenty-three (23) of the systems generated “high” or “very high” demand.

“Removing eyes from the road for just two seconds doubles the risk for a crash,” AAA wrote in a press release. “With one in three adults using the systems available while driving, AAA cautions that using these technologies while behind the wheel can have dangerous consequences.”

In the study, University of Utah researchers examined visual (eyes-on-the-road) and cognitive (mental) demands of each system, and looked at the time required to complete tasks. Tasks included the use of voice commands and touch screens to make calls, send texts, tune the radio and program navigation. And the results were uniformly disappointing—really disappointing.

We are going to look at the twelve (12) vehicles categorized by researchers as having “very high demand” infotainment systems. The vehicles vary from entry-level to luxury and sedan to SUV, but they all share one common trait: AAA says their systems distract drivers. This, to me, is very discouraging. Here we go.

CONCLUSIONS:

I’m definitely NOT saying don’t buy these cars, but the distraction factor is worth knowing about and compensating for when driving.


Portions of the following post were taken from the September 2017 issue of Machine Design magazine.

We all like to keep up with salary levels within our chosen profession. It’s a great indicator of where we stand relative to our peers and the industry we participate in. The state of the engineering profession has always been relatively stable. Engineers are as essential to the job market as doctors are to medicine. Even in the face of automation and the fear many have of losing their jobs to robots, engineers are still in high demand. I personally do not think most engineers will be displaced by robotic systems. That fear rightly resides with production-line manufacturing positions whose duties are repetitive in nature. As long as engineers can think, they will have employment.

The Machine Design Annual Salary & Career Report collected information and opinions from more than two thousand (2,000) Machine Design readers. The employee outlook is very good, with thirty-three percent (33%) indicating they are staying with their current employer and thirty-six percent (36%) of employers focusing on job retention. This is up fifteen percent (15%) from 2016. Among those who responded to the survey, the average reported salary for engineers across the country was $99,922; almost fifty-eight percent (57.9%) reported a salary increase while just under ten percent (9.7%) reported a salary decrease. The top three earning industries with the largest work forces were 1.) industrial controls systems and equipment, 2.) research & development, and 3.) medical products. Among these industries, the average salary was $104,193. The West Coast looks like the best place for engineers to earn a living, with the average salary in California, Washington, and Oregon coming in at $116,684. Of course, the cost of living in these three states is definitely higher than in other regions of the country.
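For readers who like to see how numbers such as these come together, here is a minimal sketch (in Python, using made-up sample figures, NOT the actual Machine Design dataset) of how summary statistics like an average salary and the percentage reporting an increase are computed from raw survey responses:

```python
# Toy illustration only: fabricated responses, not Machine Design data.
# Each respondent reports a current salary and last year's salary.
responses = [
    {"salary": 92_000, "last_year": 88_000},
    {"salary": 115_000, "last_year": 115_000},
    {"salary": 78_500, "last_year": 81_000},
    {"salary": 104_000, "last_year": 99_500},
]

average_salary = sum(r["salary"] for r in responses) / len(responses)
pct_up = 100 * sum(r["salary"] > r["last_year"] for r in responses) / len(responses)
pct_down = 100 * sum(r["salary"] < r["last_year"] for r in responses) / len(responses)

print(f"Average salary: ${average_salary:,.0f}")   # Average salary: $97,375
print(f"Reported an increase: {pct_up:.1f}%")      # Reported an increase: 50.0%
print(f"Reported a decrease: {pct_down:.1f}%")     # Reported a decrease: 25.0%
```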

PROFILE OF THE ENGINEER IN THE USA TODAY:

As is the ongoing trend in engineering, the profession is dominated by male engineers, with seventy-one percent (71%) being over fifty (50) years of age. However, the MD report shows an up-swing of young engineers entering the profession. One effort that has been underway for some years now is encouraging more women to enter the field. With so much of the engineering workforce being over fifty, there is a definite need to attract new participants. There was an increase in engineers aged between twenty-five (25) and thirty-five (35), up from 5.6% to 9.2%. The percentage of individuals entering the profession increased as well, with engineers with less than fourteen (14) years of experience increasing five percent (5%) from last year. Even with all the challenges of engineering, ninety-two percent (92%) would still recommend the engineering profession to their children, grandchildren, and others. One engineer responded, “In fact, wherever I’ll go, I always will have an engineer’s point of view. Trying to understand how things work, and how to improve them.”


When asked about foreign labor forces, fifty-four percent (54%) believe H-1B visas hurt engineering employment opportunities, and sixty-one percent (61%) support measures to reform the system. In terms of outsourcing, fifty-two percent (52%) reported their companies outsource work—the main reason being lack of in-house talent. However, seventy-three percent (73%) of the outsourced work goes to other U.S. locations. When discussing the future of the job force, fifty-five percent (55%) of engineers believe there is a job shortage, specifically in the skilled labor area. An overwhelming eighty-seven percent (87%) believe that we lack a skilled labor force. According to the MD readers, the strongest place for job growth is in automation at forty-five percent (45%), and the strongest place to look for skilled laborers is in vocational schools at thirty-two percent (32%). The future of engineering is dependent not only on the new engineers in school today, but also on younger people just starting to develop their science, technology, engineering, and mathematics (STEM) interests. With the average engineer being fifty (50) years old or older, the future of engineering will rely heavily on new engineers willing to carry the torch—eighty-seven percent (87%) of our engineers believe there needs to be more focus on STEM at an earlier age to make sure the future of engineering is secure.

With that being the case, let us now look at the numbers.

The engineering profession is a “graying” profession, as mentioned earlier. The next digital picture will indicate that, for the most part, those in engineering have been in it for the “long haul”. They are “lifers”. This fact speaks volumes when trying to influence young men and women to consider the field of engineering. If you look at “years in the profession”, “work location”, and “years at present employer”, we see the following:

The slide below is a surprise to me and, I think, the first time the question has been asked by Machine Design: how much of your engineering training is theory vs. practice? You can see the largest single response, almost fourteen percent (13.6%), reported a fifty/fifty balance between theory and practice. In my opinion, this is as it should be.

“The theory can be learned in a school, but the practical applications need to be learned on the job. The academic world is out of touch with the current reality of practical applications since they do not work in that area.”

“My university required three internships prior to graduating. This allowed them to focus significantly on theoretical, fundamental knowledge and have the internships bolster the practical.”

ENGINEERING CERTIFICATIONS:

The demands made on engineers by their respective companies can sometimes be time-consuming. The respondents indicated the following certifications their companies felt were necessary.

SALARIES:

The lowest salary is found in contract design and manufacturing. Even this salary would be much desired by just about any individual.

As we mentioned earlier, the West Coast provides the highest salary, with several states in the New England area coming in a fairly close second.

SALARY LEVELS VS. EXPERIENCE:

This one should be no surprise. The greater the number of years in the profession—the greater the salary level. Forty (40) plus years provides an average salary of approximately $100,000. Management, as you might expect, makes the highest salary, with the average being $126,052.88.

OUTSOURCING:


As mentioned earlier, outsourcing is a huge concern to the engineering community. The chart below indicates where the jobs go.

JOB SATISFACTION:


Most engineers will tell you they stay in the profession because they love the work. The euphoria created by a “really neat” design stays with an engineer much longer than an elevated paycheck. Engineers love solving problems. Only two percent (2%) told MD they are not satisfied at all with their profession or current employer. This is significant.

The reasons given for leaving the engineering profession are shown in the following graphic.

ENGINEERING AND SOCIETY: 

As mentioned earlier, engineers are very worried about the H-1B visa program and trade policies issued by President Trump and the Legislative Branch of our country. The Trans-Pacific Partnership has been “nixed” by President Trump, but trade policies such as NAFTA and trade with the EU are still of great concern to engineers. Trade with China, patent infringement, and cyber security remain big issues with the STEM professions and certainly engineers.


CONCLUSIONS:

I think it’s very safe to say that, for the most part, engineers are very satisfied with the profession and the salary levels it offers. Job satisfaction is high, making the dawn of a new workday something NOT to be dreaded.

AUGMENTED REALITY (AR)

October 13, 2017


Ask just about anybody, anywhere, to give a definition of Virtual Reality (VR) and they will take a stab at it. This is because the gaming and entertainment segments of our population have used VR as a new tool to promote games such as SuperHot VR, Rock Band VR, House of the Dying Sun, Minecraft VR, Robo Recall, and others. If you ask them about Augmented Reality, or AR, they will probably give you the definition of VR or nothing at all.

Augmented reality, sometimes called Mixed Reality, is a technology that merges real-world objects or the environment with virtual elements generated by sensory input devices for sound, video, graphics, or GPS data.  Unlike VR, which completely replaces the real world with a virtual world, AR operates in real time and is interactive with objects found in the environment, providing an overlaid virtual display over the real one.
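To make the idea of an overlay concrete, here is a minimal sketch (my own toy example in Python, not code from any particular AR toolkit) of the core step every AR system performs: projecting a known real-world 3D point onto 2D screen coordinates so a virtual label can be drawn on top of the real object. Real systems add tracking, calibration, and rendering on top of this idea.

```python
# Toy pinhole-camera projection: maps a camera-space 3D point (meters)
# to pixel coordinates where a virtual overlay could be drawn.
def project_to_screen(point_3d, focal_px=800.0, cx=640.0, cy=360.0):
    """Project (x, y, z) in camera space to (u, v) in pixels."""
    x, y, z = point_3d
    if z <= 0:
        return None  # behind the camera; nothing to overlay
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return (u, v)

# A machine part detected 2 m in front of the camera, slightly left and up:
print(project_to_screen((-0.25, 0.10, 2.0)))  # (540.0, 400.0) -> draw label here
```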

While popularized by gaming, AR technology has shown prowess in bringing an interactive digital world into a person’s perceived real world, where the digital layer can reveal additional information about the real-world object being viewed. This is basically what AR strives to do. We are going to take a look at several very real applications of AR to indicate the possibilities of this technology.

  • Augmented Reality has found a home in healthcare, aiding preventative care by letting professionals receive information on the status of patients. Healthcare giant Cigna recently launched a program called BioBall that uses Microsoft HoloLens technology in an interactive game to test for blood pressure and body mass index (BMI). Patients hold a light, medium-sized ball in their hands in a one-minute race to capture all the images that flash on the screen in front of them. The BioBall senses a player’s heartbeat. At the University of Maryland’s Augmentarium virtual and augmented reality laboratory, the school is using AR in healthcare to improve how ultrasound is administered to a patient. Physicians wearing an AR device can look at both a patient and the ultrasound device while images flash on the “hood” of the AR device itself.
  • AR is opening up new methods to teach young children a variety of subjects they might not be interested in learning or, in some cases, to help those who have trouble in class catch up with their peers. The University of Helsinki’s AR program helps struggling kids learn science by enabling them to virtually interact with molecule movement in gases, gravity, sound waves, and airplane wind physics. AR creates new types of learning possibilities by transporting “old knowledge” into a new format.
  • Projection-based AR is emerging as a new way to cast virtual elements into the real world without the use of bulky headgear or glasses. That is why AR is becoming a very popular alternative for use in the office or during meetings. Startups such as Lampix and Lightform are working on projection-based augmented reality for use in the boardroom, retail displays, hospitality rooms, digital signage, and other applications.
  • In Germany, a company called FleetBoard is developing application software that tracks logistics for truck drivers to help with the long series of pre-departure checks before setting off cross-country or on local deliveries. The FleetBoard Vehicle Lens app uses a smartphone and software to provide live image recognition to identify the truck’s number plate. The relevant information is superimposed in AR, thus speeding up the pre-departure process.
  • Last winter, Delft University of Technology in the Netherlands started working with first responders in using AR as a tool in crime scene investigation. The handheld AR system allows on-scene investigators and remote forensic teams to minimize the potential for site contamination.  This could be extremely helpful in finding traces of DNA, preserving evidence, and getting medical help from an outside source.
  • Sandia National Laboratories is working with AR as a tool to improve security training for users who are protecting vulnerable areas such as nuclear weapons or nuclear materials. The physical security training helps guide users through real-world examples such as theft or sabotage in order to be better prepared when an event takes place.  The training can be accomplished remotely and cheaply using standalone AR headsets.
  • In Finland, the VTT Technical Research Center recently developed an AR tool for the European Space Agency (ESA) for astronauts to perform real-time equipment monitoring in space. AR prepares astronauts with in-depth practice by coordinating the activities with experts in a mixed-reality situation.
  • U.S.-based Daqri uses computer vision for industrial AR to enable data visualization while working on machinery or in a warehouse. Daqri’s glasses and headsets display project data, tasks that need to be completed, and potential problems with machinery, or even where an object needs to be placed or repaired.

CONCLUSIONS:

Augmented Reality merges real-world objects with virtual elements generated by sensory input devices to provide great advantages to the user. No longer are gaming and entertainment the sole objectives of its use. This brings to life a “new normal” for professionals seeking more and better technology to provide solutions to real-world problems.

DEGREE OR NO DEGREE

October 7, 2017


The availability of information in books (as always), on the Internet, through seminars and professional shows, scientific publications, podcasts, webinars, etc. is amazing in today’s “digital age”. That raises the question—is a college degree really necessary? Can you rise to a level of competence and succeed by being self-taught? For most, a college degree is the way to open doors. For a precious few, however, no help is needed.

Let’s look at twelve (12) individuals who did just that.

The co-founder of Apple and the force behind the iPod, iPhone, and iPad, Steve Jobs attended Reed College, an academically-rigorous liberal arts college with a heavy emphasis on social sciences and literature. Shortly after enrolling in 1972, however, he dropped out and took a job as a technician at Atari.

Legendary industrialist Howard Hughes is often said to have graduated from Caltech, but the truth is that the California school has no record of his having attended classes there. He did enroll at Rice University in Texas in 1924, but dropped out prematurely due to the death of his father.

Arguably Harvard’s most famous dropout, Bill Gates was already an accomplished software programmer when he started as a freshman on the Cambridge, Massachusetts campus in 1973. His passion for software actually began before high school, at the Lakeside School in Seattle, Washington, where he was programming in BASIC by age 13.

Just like his fellow Microsoft co-founder Bill Gates, Paul Allen was a college dropout.

He, too, was a star student (with a perfect score on the SAT) who honed his programming skills at the Lakeside School in Seattle. Unlike Gates, however, he went on to study at Washington State University before leaving in his second year to work as a programmer at Honeywell in Boston.

Even for his time, Thomas Edison had little formal education. His schooling didn’t start until age eight, and then only lasted a few months.

Edison said that he learned most of his reading, writing, and math at home from his mother. Still, he became known as one of America’s most prolific inventors, amassing 1,093 U.S. patents and changing the world with such devices as the phonograph, fluoroscope, stock ticker, motion picture camera, mechanical vote recorder, and long-lasting incandescent electric light bulb. He is also credited with patenting a system of electrical power distribution for homes, businesses, and factories.

Michael Dell, founder of Dell Computer Corp., seemed destined for a career in the computer industry long before he dropped out of the University of Texas. He purchased his first calculator at age seven, applied to take a high school equivalency exam at age eight, and performed his first computer teardown at age 15.

A pioneer of early television technology, Philo T. Farnsworth was a brilliant student who dropped out of Brigham Young University after the death of his father, according to Biography.com.

Although born in a log cabin, Farnsworth quickly grasped technical concepts, sketching out his revolutionary idea for a television vacuum tube while still in high school, much to the confusion of teachers and fellow students.

Credited with inventing the controls that made fixed-wing powered flight possible, the Wright Brothers had little formal education.

Neither attended college, but they gained technical knowledge from their experiences working with printing presses, bicycles, and motors. By doing so, they were able to develop a three-axis controller, which served as the means to steer and maintain the equilibrium of an aircraft.

Stanford Ovshinsky managed to amass 400 patents covering subjects ranging from nickel-metal hydride batteries to amorphous silicon semiconductors to hydrogen fuel cells, all without the benefit of a college education. He is best known for his formation of Energy Conversion Devices and his pioneering work in nickel-metal hydride batteries, which have been widely used in hybrid and electric cars, as well as laptop computers, digital cameras, and cell phones.

Preston Tucker, designer of the ill-fated 1948 Tucker sedan, worked as a machinist, police officer, and car salesman, but was not known to have attended college. Still, he managed to become founder of the Tucker Aviation Corp. and the Tucker Corp.

Larry Ellison dropped out of his pre-med studies at the University of Illinois in his second year and left the University of Chicago after only one term, but his brief academic experiences eventually led him to the top of the computer industry.

A Harvard dropout, Mark Zuckerberg was considered a prodigy before he even set foot on campus.

He began doing BASIC programming in middle school, created an instant messaging system while in high school, and learned to read and write French, Hebrew, Latin, and ancient Greek prior to enrolling in college.

CONCLUSIONS:

In conclusion, I want to leave you with a quote from President Calvin Coolidge:

Nothing in this world can take the place of persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent.

AMAZING GRACE

October 3, 2017


There are many people responsible for the revolutionary development and commercialization of the modern-day computer. Just a few of those names are given below, many of whom you have probably never heard of. Let’s take a look.

COMPUTER REVOLUTIONARIES:

  • Howard Aiken–Aiken was the original conceptual designer behind the Harvard Mark I computer in 1944.
  • Grace Murray Hopper–Hopper popularized the term “debugging” in 1947 after an actual moth was removed from a computer. Her ideas about machine-independent programming led to the development of COBOL, one of the first modern programming languages. On top of it all, the Navy destroyer USS Hopper is named after her.
  • Ken Thompson and Dennis Ritchie–These guys invented Unix in 1969, the importance of which CANNOT be overstated. Consider this: your fancy Apple computer relies almost entirely on their work.
  • Doug and Gary Carlston–This team of brothers co-founded Brøderbund Software, a successful gaming company that operated from 1980 to 1999. In that time, they were responsible for churning out or marketing revolutionary computer games like Myst and Prince of Persia, helping bring computing into the mainstream.
  • Ken and Roberta Williams–This husband and wife team founded On-Line Systems in 1979, which later became Sierra Online. The company was a leader in producing graphical adventure games throughout the advent of personal computing.
  • Seymour Cray–Cray was a supercomputer architect whose computers were the fastest in the world for many decades. He set the standard for modern supercomputing.
  • Marvin Minsky–Minsky was a professor at MIT and oversaw the AI Lab, a hotspot of hacker activity, where he let prominent programmers like Richard Stallman run free. Were it not for his open-mindedness, programming skill, and ability to recognize that important things were taking place, the AI Lab wouldn’t be remembered as the talent incubator that it is.
  • Bob Albrecht–He founded the People’s Computer Company and developed a sincere passion for encouraging children to get involved with computing. He’s responsible for ushering in innumerable new young programmers and is one of the first modern technology evangelists.
  • Steve Dompier–At a time when computer speech was just barely being realized, Dompier made his computer sing. It was a trick he unveiled at the first meeting of the Homebrew Computer Club in 1975.
  • John McCarthy–McCarthy invented Lisp, the second-oldest high-level programming language that’s still in use to this day. He’s also responsible for bringing mathematical logic into the world of artificial intelligence — letting computers “think” by way of math.
  • Doug Engelbart–Engelbart is most noted for inventing the computer mouse in the mid-1960s, but he’s made numerous other contributions to the computing world. He created early GUIs and was even a member of the team that developed the now-ubiquitous hypertext.
  • Ivan Sutherland–Sutherland received the prestigious Turing Award in 1988 for inventing Sketchpad, the predecessor to the type of graphical user interfaces we use every day on our own computers.
  • Tim Paterson–He wrote QDOS, an operating system that he sold to Bill Gates in 1980. Gates rebranded it as MS-DOS, selling it to the point that it became the most widely used operating system of the day. (How ’bout them apples?)
  • Dan Bricklin–He’s “The Father of the Spreadsheet.” Working in 1979 with Bob Frankston, he created VisiCalc, a predecessor to Microsoft Excel. It was the killer app of the time — people were buying computers just to run VisiCalc.
  • Bob Kahn and Vint Cerf–Prolific internet pioneers, these two teamed up to build the Transmission Control Protocol and the Internet Protocol, better known as TCP/IP. These are the fundamental communication technologies at the heart of the Internet.
  • Niklaus Wirth–Wirth designed several programming languages, but is best known for creating Pascal. He won a Turing Award in 1984 for “developing a sequence of innovative computer languages.”

ADMIRAL GRACE MURRAY HOPPER:

At this point, I want to highlight Admiral Grace Murray Hopper, or “Amazing Grace” as she is called in the computer world and the United States Navy. Admiral Hopper’s picture is shown below.

Born in New York City in 1906, Grace Hopper joined the U.S. Navy during World War II and was assigned to program the Mark I computer. She continued to work in computing after the war, leading the team that created the first computer language compiler, which led to the popular COBOL language. She resumed active naval service at the age of 60, becoming a rear admiral before retiring in 1986. Hopper died in Virginia in 1992.

Born Grace Brewster Murray in New York City on December 9, 1906, Grace Hopper studied math and physics at Vassar College. After graduating from Vassar in 1928, she proceeded to Yale University, where, in 1930, she received a master’s degree in mathematics. That same year, she married Vincent Foster Hopper, becoming Grace Hopper (a name that she kept even after the couple’s 1945 divorce). Starting in 1931, Hopper began teaching at Vassar while also continuing to study at Yale, where she earned a Ph.D. in mathematics in 1934—becoming one of the first few women to earn such a degree.

After the war, Hopper remained with the Navy as a reserve officer. As a research fellow at Harvard, she worked with the Mark II and Mark III computers. She was at Harvard when a moth was found to have shorted out the Mark II, and is sometimes given credit for the invention of the term “computer bug”—though she didn’t actually author the term, she did help popularize it.

Hopper retired from the Naval Reserve in 1966, but her pioneering computer work meant that she was recalled to active duty—at the age of 60—to tackle standardizing communication between different computer languages. She would remain with the Navy for 19 years. When she retired in 1986, at age 79, she was a rear admiral as well as the oldest serving officer in the service.

Saying that she would be “bored stiff” if she stopped working entirely, Hopper took another job post-retirement and stayed in the computer industry for several more years. She was awarded the National Medal of Technology in 1991—becoming the first female individual recipient of the honor. At the age of 85, she died in Arlington, Virginia, on January 1, 1992. She was laid to rest in the Arlington National Cemetery.

CONCLUSIONS:

In 1997, the guided missile destroyer USS Hopper was commissioned by the Navy in San Francisco. In 2004, the University of Missouri honored Hopper with a computer museum on its campus, dubbed “Grace’s Place.” On display are early computers and computer components to educate visitors on the evolution of the technology. In addition to her programming accomplishments, Hopper’s legacy includes encouraging young people to learn how to program. The Grace Hopper Celebration of Women in Computing Conference is a technical conference that encourages women to become part of the world of computing, while the Association for Computing Machinery offers a Grace Murray Hopper Award. Additionally, on her birthday in 2013, Hopper was remembered with a “Google Doodle.”

In 2016, Hopper was posthumously honored with the Presidential Medal of Freedom by Barack Obama.

Who said women could not “do” STEM (Science, Technology, Engineering and Mathematics)?

HACKED OFF

October 2, 2017


Portions of this post are taken from an article by Rob Spiegel of Design News Daily.

You can now anonymously hire a cybercriminal online for as little as six to ten dollars ($6 to $10) per hour, says Rodney Joffe, senior vice president at Neustar, a cybersecurity company. As it becomes easier to engineer such attacks, and as costs fall, more businesses are getting targeted. About thirty-two percent (32%) of information technology professionals surveyed said DDoS attacks cost their companies $100,000 an hour or more. That percentage is up from thirty percent (30%) reported in 2014, according to Neustar’s survey of over 500 high-level IT professionals. The data was released this past Monday.

Hackers are costing consumers and companies between $375 billion and $575 billion annually, according to a study published this past Monday, a number only expected to grow as online information stealing expands with increased Internet use. This number blows my mind. I actually had no idea the costs were so great. Great and increasing.

Online crime is estimated at 0.8 percent of worldwide GDP, with developed countries in regions including North America and Europe losing more than countries in Latin America or Africa, according to the new study published by the Center for Strategic and International Studies and funded by cybersecurity firm McAfee.

That amount rivals the share of worldwide GDP – 0.9 percent – that is spent on managing the narcotics trade. The difference in reported costs between developed and developing nations may be due to better accounting or transparency in developed nations, as the cost of online crime can be difficult to measure and some companies do not disclose when they are hacked, for fear of damage to their reputations, the report said.

Cyber attacks have changed in recent years. Gone are the days when relatively benign bedroom hackers entered organizations to show off their skills. No longer is it a guy in the basement of his mom’s home eating Doritos. Attackers now are often sophisticated criminals who target employees with access to the organization’s jewels. Instead of using blunt force, these savvy criminals use age-old human fallibility to con unwitting employees into handing over the keys to the vault.

Professional criminals like the crime opportunities they’ve found on the internet; it’s far less dangerous than slinging guns. Criminal gangs have discovered they can carry out crime more effectively online, with less chance of getting caught, and cybersecurity is getting worse as a result.

Hacking individual employees is often the easiest way into a company. One of the cheapest and most effective ways to target an organization is to target its people. Attackers use psychological tricks that have been used throughout human history. Using the internet, con tricks can be carried out on a large scale. The criminals do reconnaissance to find out about targets over email. Then they effectively take advantage of key human traits.

One common attack comes as an email impersonating a CEO or supplier. The email looks like it came from your boss or a regular supplier, but it’s actually targeted to a specific professional in the organization. The email might say, ‘We’ve acquired a new organization. We need to pay them. We need the company’s bank details, and we need to keep this quiet so it won’t affect our stock price.’ The email will go on to say, ‘We only trust you, and you need to do this immediately.’ The email comes from a criminal, using triggers like flattery, saying, ‘You’re the most trusted individual in the organization.’ The criminals play on authority and create the panic of time pressure. Believe it or not, my consulting company has gotten these messages, the most recent being one impersonating Experian.
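Those triggers (authority, secrecy, flattery, time pressure) are concrete enough that even a naive script can flag them. The sketch below is a toy heuristic of my own for illustration, not a production filter or anything from the article:

```python
# Toy heuristic: flag the social-engineering triggers described above.
# Phrase lists are illustrative, not a real detection ruleset.
TRIGGER_PHRASES = {
    "authority": ["ceo", "your boss", "the chairman"],
    "secrecy": ["keep this quiet", "confidential", "tell no one"],
    "flattery": ["most trusted", "we only trust you"],
    "time pressure": ["immediately", "urgent", "right away"],
}

def phishing_triggers(email_text: str) -> list[str]:
    """Return the trigger categories whose phrases appear in the email."""
    text = email_text.lower()
    return [
        category
        for category, phrases in TRIGGER_PHRASES.items()
        if any(phrase in text for phrase in phrases)
    ]

email = ("We've acquired a new organization and need the company's bank "
         "details immediately. We only trust you. Keep this quiet.")
print(phishing_triggers(email))  # ['secrecy', 'flattery', 'time pressure']
```

Real attackers vary their wording constantly, of course, which is exactly why the education and awareness discussed below matter more than any keyword filter.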

Even long-term attacks can be launched using this tactic of a CEO message. A company in Malaysia received kits purporting to come from the CEO. The users were told the kits needed to be installed. It took months before the company found out they didn’t come from the CEO at all.

Instead of increased technology, some of the new hackers are deploying classic con moves, playing against personal foibles. They are taking advantage of those base aspects of human nature and how we’re taught to behave. We have to make sure we have better awareness. For cybersecurity training to be engaging, it has to have an impact.

As well as entering the email stream, hackers are identifying the personal interests of victims on social media. Every kind of media is used for attacks. Social media is used to carry out reconnaissance, to identify targets and learn about them. Users need to see what attackers can find out about them on Twitter or Facebook. The trick hackers use is to pretend they know the target; then they get close through personal interaction on social media. You can look at an organization on Twitter and see who works in finance. Then the attackers take a good look across social platforms to find those individuals and see if they go to a class each week or if they traveled to Iceland in 1996. You can put together a spear-phishing program where you say, ‘Hey, I went on this trip with you.’

CONCLUSIONS:

The counter-action to personal hacking is education and awareness. The company can identify potential weaknesses and potential targets and then change the vulnerable aspects of the corporate environment.  We have to look at the culture of the organization. Those who are under pressure are targets. They don’t have time to study each email they get. We also have to discourage reliance on email.   Hackers also exploit the culture of fear, where people are punished for their mistakes. Those are the people most in danger. We need to create a culture where if someone makes a mistake, they can immediately come forward. The quicker someone comes forward, the quicker we can deal with it.
