TEACHING COMPUTER PROGRAMMING

September 9, 2016


If you read my posts on a regular basis you know I have been in the technical community all of my adult life.  I started my university education long before computers or even hand-held calculators were available.  My first recollection of working with computers resulted from a fairly small punch-card system available to the teaching staff in the engineering department.  Everything was analog—not digital.  The digital revolution has allowed technology to advance at a rate absolutely unheard of in the history of our species.  We are moving at light speed in most engineering and scientific disciplines.  There is no way my class of 1966 would have dreamed of RFID (radio frequency identification), biometric engineering, rapid prototyping, CFD (computational fluid dynamics), CAD (computer-aided design), FEA (finite element analysis) and a hundred more fascinating technologies.

With this being the case, “introduction to slide rule operations” classes have been replaced with computer programming classes.  This is as it should be.  The first “computer” I owned was an HP-35.  WONDERFUL MACHINE.

The HP-35 was Hewlett-Packard’s first pocket calculator and the world’s first scientific pocket calculator – a calculator with trigonometric and exponential functions.  I’m pretty sure most of you do not know or even remember what an HP-35 looks like.  Let’s take a look.

HP-35 Calculator

  • The HP-35 was 5.8 inches (150 mm) long and 3.2 inches (81 mm) wide, said to have been designed to fit into one of William Hewlett’s shirt pockets. I suspect this is the case because my HP-35 fit very nicely into my shirt pocket.
  • It was the first scientific calculator to fly in space, in 1972. This was quite a feat and removed a great deal of extrapolation from the astronauts’ workload. Prior to this, they used a slide rule to perform calculations other than addition and subtraction.
  • It was the first pocket calculator with a numeric range that covered 200 decades (more precisely 199: from 10^-99 to 10^+99).
  • The LED display power requirement was responsible for the HP-35’s short battery life between charges — about three hours. To extend operating time and avoid wearing out the on/off slide switch, users would press the decimal point key to force the display to illuminate just a single LED junction. For me, this was a huge issue.  When I took my PE exam in 1974, the battery on my HP-35 died, requiring me to complete the exam with my slide rule.  REAL BUMMER!!!  You do not forget those days.
  • The HP-35 calculated arithmetic, logarithmic, and trigonometric functions, but the complete implementation used only 767 carefully chosen instructions (7,670 bits).
  • Introduction of the HP-35 and similar scientific calculators by Texas Instruments soon thereafter signaled the demise of the slide rule as a status symbol among science and engineering students. Slide rule holsters rapidly gave way to “electronic slide rule” holsters, and colleges began to drop slide-rule classes from their curricula.  One course all engineers were required to take at the university I attended was how to use a slide rule.  That was the “gold standard”.  Also, if you strapped that rule to your belt, all the girls knew you were an engineering student.  That was big in the 60’s.
  • 100,000 HP-35 calculators were sold in the first year, and over 300,000 by the time it was discontinued in 1975—3½ years after its introduction.
  • In 2007 HP introduced a revised HP 35s calculator in memory of the original.
  • An emulation of the HP-35 is available for the Apple iPhone and iPad.

My very first computer course was PASCAL. At that time, it was the teaching language of choice for beginners wishing to know something about computer programming.  Pascal is a general-purpose, high-level language that was originally developed by Niklaus Wirth in the early 1970s. It was developed for teaching programming as a systematic discipline and to produce reliable and efficient programs. Pascal is an Algol-based language and includes many constructs of Algol; Algol 60 is essentially a subset of Pascal. Pascal offers several data types and programming structures.

Good teaching languages exhibit the following characteristics:

  • Easy to learn.
  • Structured language.
  • Produces transparent, efficient and reliable programs.
  • Can be compiled on a variety of computer platforms.

There are hundreds of programming languages in use today. How can you know which one to learn first?   Why not start by learning one of the top ten (10) most popular ones? That way you will always be able to discuss your capabilities with a prospective employer.   Learning a programming language is not easy, but it can be very rewarding. You will have a lot of questions at first. Just remember to get help when you need it! You can find the answer to almost everything on Google nowadays, so there is no excuse for failure. Also remember that it takes years to become an expert programmer. Don’t expect to get good overnight. Just keep learning something new every day and eventually you will be competent enough to get the job done.

In today’s educational system, the most frequently taught programming languages are as follows:

Most Popular Teaching Languages

Let’s take a very quick look at descriptive information relative to each programming language.

  • Python is an interpreted, multi-paradigm programming language written by Guido van Rossum in the late 1980s and intended for general programming purposes. Python was not named after the snake but actually after the Monty Python comedy group. Python is characterized by its use of indentation for readability, and by its encouragement of elegant code by making developers do similar things in similar ways (see the short sketch following this list). Python is used as the main programming choice of both Google and Ubuntu.
  • Java uses a compiler, and is an object-oriented language released in 1995 by Sun Microsystems. Java is the number one programming language today for many reasons. First, it is a well-organized language with a strong library of reusable software components. Second, programs written in Java can run on many different computer architectures and operating systems because of the use of the JVM (Java virtual machine). Sometimes this is referred to as code portability or even WORA (write once, run anywhere). Third, Java is the language most likely to be taught in university computer science classes. A lot of computer science theory books written in the past decade use Java in the code examples. So learning Java syntax is a good idea even if you never actually code in it.
  • MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include math and computation, algorithm development, data analysis, and visualization.
  • C is a compiled, procedural language developed in 1972 by Dennis Ritchie for use in the UNIX operating system. Although designed to be portable in nature, C programs must be specifically compiled for computers with different architectures and operating systems. This helps make them lightning fast. Although C is a relatively old language, it is still widely used for system programming, writing other programming languages, and in embedded systems.
  • C++ is a compiled, multi-paradigm language written as an update to C in 1979 by Bjarne Stroustrup. It attempts to be backwards-compatible with C and brings object-orientation, which helps in larger projects. Despite its age, C++ is used to create a wide array of applications from games to office suites.
  • Scheme is a functional programming language and one of the two main dialects of the programming language Lisp. Unlike Common Lisp, the other main dialect, Scheme follows a minimalist design philosophy specifying a small standard core with powerful tools for language extension. Scheme was created during the 1970s at the MIT AI Lab and released by its developers, Guy L. Steele and Gerald Jay Sussman, via a series of memos now known as the Lambda Papers. The Scheme language is standardized in an official IEEE standard and a de facto standard called the Revised^n Report on the Algorithmic Language Scheme (RnRS). The most widely implemented standard is R5RS (1998); a new standard, R6RS, was ratified in 2007. Scheme has a diverse user base due to its compactness and elegance, but its minimalist philosophy has also caused wide divergence between practical implementations, so much so that the Scheme Steering Committee calls it “the world’s most unportable programming language” and “a family of dialects” rather than a single language.
  • Scratch is a free visual programming language developed to help simplify the process of creating and programming animations, games, music, interactive stories and more.  The Scratch programming language is primarily targeted at children ages eight and older, and is designed to teach computational thinking using a simple but powerful building-block approach to software development that focuses more on problem solving than on specific syntax.
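
To give a concrete taste of the first language on that list, below is a minimal Python sketch (my own illustration, not taken from any particular curriculum) showing how indentation alone defines the structure of a program:

```python
# Python marks blocks with indentation rather than braces or BEGIN/END.
def classify_scores(scores):
    """Split exam scores into passing and failing lists."""
    passing, failing = [], []
    for score in scores:      # everything indented below belongs to the loop
        if score >= 60:       # the if/else bodies are indented one level further
            passing.append(score)
        else:
            failing.append(score)
    return passing, failing

print(classify_scores([55, 72, 90, 48]))  # ([72, 90], [55, 48])
```

Notice there is not a single brace or END statement; the layout is the syntax, which is a large part of why Python reads so cleanly to beginners.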

SUMMARY:  As mentioned above—learning a programming language is not easy, but it can be very rewarding. Don’t expect to get good overnight. Just keep learning something new every day and eventually you will be competent enough to get the job done.  I really struggled with PASCAL.  It seemed as though I studied all day and half the night.  I had a full-time job and attended school after hours.  It was tough, but rewarding when I finally got to the point where I actually could program and see time-saving results from the programs written.  The best advice I can give—hang in there.  It is worth the effort.

MY CAR–MY COMPUTER

September 8, 2016


In 1964 I became the very proud owner of a gun-metal grey, four-cylinder Ford Falcon.  My first car. I was the third owner but treated my ride as though it was a brand new Lamborghini.  It got me to and from the university, which was one hundred and eight (108) miles from home.  This was back in the days when gasoline was $0.84 per gallon.  No power brakes—no power steering—no power seats—no power door locks—no power windows—no fuel injection.  Very basic automobile, but it was mine and very appreciated by its owner.  OK—don’t laugh, but shown below is a JPEG of the car type.

Ford Falcon

Mine was grey, as mentioned, but the same body style.  (Really getting nostalgic now.)

I purchased instruction manuals on how to work on the engine, transmission and other parts of the car, so I basically did my own maintenance and made all repairs and adjustments.  I can remember the engine compartment being large enough to stand in.  I had the four-cylinder model, so there was more than enough room to get to the carburetor, starter/alternator, oil pan, spark plug wires, etc., etc.

Evolution of the automobile has been significant since those days.  The most basic cars of today are dependent upon digital technology with the most sophisticated versions being rolling computers. Let’s now flash forward and take a look at what is available today.   We will use the latest information from the Ford Motor Company as an example.

Ford says the 2016 F-150 has more than 150 million (yes, that’s million) lines of code in various computer systems sprinkled under the hood.    To put that in some perspective, a smartphone’s operating system has about twelve (12) million lines of code.  The space shuttle had about 400,000 lines.  Why so much software in a truck?  According to the company, it’s part of the Ford Smart Mobility plan to be a leader in connectivity, mobility, autonomous vehicles, the customer experience, and data analytics.  Ford says it wants to be an auto and mobility company—in other words, hardware is becoming software, hence a moving computer to some degree.  This is where all up-scale cars and trucks are going in this decade and beyond.

If we look at vehicle technology, we get some idea as to what automobile owners expect, or at least would love to have, in their cars.  The following chart will indicate that. Quite frankly, I was surprised at the chart.

What Drivers Want

This is happening today—right now as you can see from the Ford F-150 information above.  Consumers DEMAND information and entertainment as they glide down the Interstates.   Let’s now take a look at connectivity and technology advances over the past decade.

  • Gasoline-Electric Hybrid Drivetrains
  • Direct Fuel Injection
  • Advanced/Variable/Compound Turbocharging
  • Dual-Clutch Transmissions
  • Torque-Vectoring Differentials
  • Satellite Radio and Multimedia Device Integration
  • Tire-Pressure Monitoring
  • OnStar Availability
  • On-Board Wi-Fi
  • The Availability of HUM— (Verizon Telematics, a subsidiary of the biggest US wireless carrier, has launched a new aftermarket telematics vehicle platform that gives drivers detailed information on their car’s health and how to get help in the event of an emergency or car trouble.)
  • Complete Move from Analog to Digital Technology, Including Instrumentation.
  • Great Improvements in Security, i.e. Keyless Entry.
  • Ability to Pre-set “Creature Comforts” such as Seat Position, Lighting, etc.
  • Navigation, GPS Availability
  • Safety—Air Bag Technology
  • Ability to Parallel Park on Some Vehicles
  • Information to Provide Fuel Monitoring and Distance Remaining Relative to Fuel Usage
  • Rear Mounted Radar
  • Night Vision with Pedestrian Detection
  • Automatic High-Beam Control
  • Sensing Devices to Stop Car When Approaching Another Vehicle
  • Sensing to Driver and Passenger Side to Avoid Collision

All of these are made possible as a result of on-board computers with embedded technology.  Now, here is one problem I see—all of these marvelous digital devices will, at some point, need to be repaired or replaced.  That takes trained personnel using the latest maintenance manuals and diagnostic equipment (a flavor of which is sketched below).  The days of the shade-tree mechanic are over forever; that was once upon a time.  Of course, you could move to Cuba; as far as automobiles go, Cuba is still in the ’50s.  I personally love the inter-connectivity and information sharing the most modern automobiles are equipped with today.  I love state-of-the-art as it is applied to vehicles.  If we examine crash statistics, we see great improvements in safety as a result of these marvelous “adders”, not to mention significant improvement in creature comforts.
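
To give a flavor of what that diagnostic equipment does, here is a minimal Python sketch of decoding a single on-board-diagnostics reading. The response bytes are invented for illustration, but the decoding formula for engine speed (OBD-II mode 01, PID 0x0C) comes from the published OBD-II standard:

```python
# OBD-II mode 01, PID 0x0C reports engine speed in two data bytes, A and B.
# The standard decoding formula is RPM = (256 * A + B) / 4.

def decode_rpm(a: int, b: int) -> float:
    """Decode the two data bytes of an OBD-II engine-speed response."""
    return (256 * a + b) / 4

# Hypothetical bytes as a scan tool might receive them (not from a real vehicle):
a, b = 0x0B, 0x54
print(f"Engine speed: {decode_rpm(a, b):.0f} RPM")  # (256*11 + 84)/4 = 725 RPM
```

A real scan tool does this for hundreds of parameters and trouble codes, which is exactly why the manufacturer’s manuals and equipment matter.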

Hope you enjoy this one.

PAYCHECK 2016

August 28, 2016


The following post is taken from information furnished by Mr. Rob Spiegel of Design News Daily.

We all are interested in how we stack up pay-wise relative to our peers.  Most companies have policies prohibiting discussions about individual pay because every paycheck is somewhat different due to deductions.   The number of dependents, health care options, and savings options all play a role in the bottom line—take-home pay.  That’s the reason it is very important to have a representative baseline of average salaries for the professional disciplines.  That is what this post is about.  Just how much should an engineering graduate expect upon graduation in the year 2016?  Let’s take a very quick look.

The average salaries for engineering grads entering the job market range from $62,000 to $64,000 — except for one notable standout. According to the 2016 Salary Survey from The National Association of Colleges and Employers, petroleum engineering majors are expected to enter their field making around $98,000/year. Clearly, petroleum engineering majors are projected to earn the top salaries among engineering graduates this year.

Petroleum Engineers

Actually, I can understand this high salary for petroleum engineers.  Petroleum is a non-renewable resource with diminishing availability.  Apparently, the “easy” deposits have been discovered—the tough ones, not so much.  The locations of undiscovered petroleum deposits represent some of the most difficult conditions on Earth.  They deserve the pay they get.

Chemical Engineering

DuPont at one time had the slogan, “Better living through chemistry.”  That remains true to this day.  Chemical engineers provide value-added products from medicine to materials.  From the drugs we take to the materials we use, chemistry plays a vital role in kicking the can down the road.

Electrical Engineering

When I was a graduate, back in the dark ages, electrical engineers garnered the highest salaries.   Transistors, relays, and optical devices were new and gaining acceptance in diverse markets.  Electrical engineers were on the cutting edge of this revolution.  I still remember changing tubes in radios and even TV sets when their useful life was over.  Transistor technology was absolutely earth-shattering, and EEs were riding the crest of that technology wave.

Computer Engineering

Computer and software engineering are here to stay because computers have changed our lives in a remarkably dramatic fashion.  We will NEVER go back to performing even the least tedious task with pencil and paper.  We often talk about disruptive technology—game changers.  Computer science is just that.

Mechanical Engineering

I am a mechanical engineer and have enjoyed the benefits of ME technology since graduation fifty years ago.  Now, we see a great combination of mechanical and electrical with the advent of mechatronics.  This is a very specialized field providing the best of both worlds.

Software Engineering

Materials Engineering

Materials engineering is a fascinating field for a rising freshman and should be considered as a future path.  Composite materials and additive manufacturing have broadened this field in a remarkable fashion.  If I had to do it over again, I would certainly consider materials engineering.

Systems Engineering

Systems engineering involves putting it all together.  A critical task considering “big data”, the cloud, internet exchanges, broadband developments, etc.  Someone has to make sense of it all and that’s the job of the systems engineer.

Hope you enjoyed this one. I look forward to your comments.

US CYBER COMMAND

August 4, 2016


It is absolutely amazing how many “hacks” have been perpetrated upon Federal agencies of the United States.  The same statement could be made for non-Federal institutions such as banks, independent companies, and commercial establishments from Starbucks to Target to the DNC.  Let’s see if we can quantify the extent by looking at just a few relative to our Federal government.

  • Department of Health and Human Services (HHS), August 2014.
  • White House, October 2014.
  • National Oceanic and Atmospheric Administration (NOAA), November 2014.
  • United States Postal Service (USPS), November 2014.
  • Department of State, November 2014.
  • Federal Aviation Administration (FAA), April 2015. 
  • Department of Defense, April 2015.
  • St. Louis Federal Reserve, May 2015.
  • Internal Revenue Service May 2015. 
  • U.S. Army Web site, June 2015.
  • Office of Personnel Management (OPM), June 2015. 
  • Census Bureau, July 2015.
  • Pentagon, August 2015. 

The list is very impressive but extremely troubling. QUESTION:  Are top U.S. government leaders serious about cyber security and cyber warfare, or not?  If the answer is a resounding YES, it’s time to prove it.  Is cyber security high enough on the list of national defense priorities to warrant its own unified command? Clearly, the answer is YES.

Two major breaches last year of U.S. government databases holding personnel records and security-clearance files exposed sensitive information about at least twenty-two point one (22.1) million people, including not only federal employees and contractors but their families and friends, U.S. officials said Thursday.

The total vastly exceeds all previous estimates, and marks the most detailed accounting by the Office of Personnel Management of how many people were affected by cyber intrusions that U.S. officials have privately said were traced to the Chinese government.

Think twenty-two point one (22.1) million names, Social Security numbers, telephone numbers, and addresses being held by the Chinese government.  So again, clearly the time for an independent Cyber Security Command is upon us or approaching quickly.

DoD COMMAND STRUCTURE:

At the present time, there are nine (9) unified combatant commands that exist today in the United States Department of Defense.  These are as follows:

  • U.S. Africa Command based in Stuttgart, Germany
  • U.S. Central Command based at MacDill Air Force Base, Florida
  • U.S. European Command based in Stuttgart, Germany
  • U.S. Northern Command at Peterson Air Force Base, Colorado
  • U.S. Pacific Command at Camp H.M. Smith, Hawaii
  • U.S. Southern Command in Doral, Florida
  • U.S. Special Operations Command at MacDill, Florida
  • U.S. Strategic Command at Offutt Air Force Base, Nebraska
  • U.S. Transportation Command at Scott Air Force Base, Illinois

Placing Cyber Command among these organizations would take it from under the U.S. Strategic Command where it resides today as an armed forces sub-unified command.

PRECEDENT FOR CHANGE:

Over our history there have been two major structural changes to our Federal Government, both certainly needed for added security and safety.

UNITED STATES AIR FORCE:

World War II had been over for two years and the Korean War lay three years ahead when the Air Force ended a 40-year association with the U.S. Army to become a separate service. The U.S. Air Force thus entered a new era in which airpower became firmly established as a major element of the nation’s defense and one of its chief hopes for deterring war. The Department of the Air Force was created when President Harry S Truman signed the National Security Act of 1947.

Lawmakers explained why they felt the U.S. needed to evolve the Army Air Corps into an independent branch in a Declaration of Policy at the beginning of the National Security Act of 1947: To provide a comprehensive program for the future security of the United States; to provide three military departments: the Army, the Navy, and the Air Force; to provide for their coordination and unified direction under civilian control and to provide for the effective strategic direction and operation of the armed forces under unified control.

General Carl A. Spaatz became the first Chief of Staff of the Air Force on 26 September 1947. When General Spaatz assumed his new position, the first Secretary of the Air Force, W. Stuart Symington, was already on the job, having been sworn in on 18 September 1947.  He had been Assistant Secretary of War for Air and had already worked closely with General Spaatz.  The new Air Force was fortunate to have these two men as its first leaders. They regarded air power as an instrument of national policy and of great importance to national defense.  Both men also knew how to promote air power and win public support for the Air Force.

HOMELAND SECURITY:

Eleven days after the September 11, 2001, terrorist attacks, President George W. Bush announced that he would create an Office of Homeland Security in the White House and appoint Pennsylvania Governor Tom Ridge as the director. The office would oversee and coordinate a comprehensive national strategy to safeguard the country against terrorism, and respond to any future attacks.

Executive Order 13228, issued on October 8, 2001, established two entities within the White House to determine homeland security policy.  The first was the Office of Homeland Security (OHS) within the Executive Office of the President, tasked to develop and implement a national strategy to coordinate federal, state, and local counter-terrorism efforts to secure the country from and respond to terrorist threats or attacks.  The second was the Homeland Security Council (HSC), composed of Cabinet members responsible for homeland security-related activities, which was to advise the President on homeland security matters, mirroring the role the National Security Council (NSC) plays in national security.

Before the establishment of the Department of Homeland Security, homeland security activities were spread across more than forty (40) federal agencies and an estimated 2,000 separate Congressional appropriations accounts. In February 2001, the U.S. Commission on National Security/21st Century (Hart-Rudman Commission) issued its Phase III Report, recommending significant and comprehensive institutional and procedural changes throughout the executive and legislative branches in order to meet future national security challenges. Among these recommendations was the creation of a new National Homeland Security Agency to consolidate and refine the missions of the different departments and agencies that had a role in U.S. homeland security.

In March 2001, Representative Mac Thornberry (R-TX) proposed a bill to create a National Homeland Security Agency, following the recommendations of the U.S. Commission on National Security/21st Century (Hart-Rudman Commission). The bill combined FEMA, Customs, the Border Patrol, and several infrastructure offices into one agency responsible for homeland security-related activities. Hearings were held, but Congress took no further action on the bill.

CONCLUSIONS:

From the two examples above, i.e. the formation of the USAF and of Homeland Security, we see there is precedent for separating Federal activities and making those activities stand-alone entities.  This is what needs to be accomplished here.  I know the arguments about increasing the size of government, and these are very valid, but if done properly, the size could possibly be reduced by improving efficiency and consolidating activities.  Now is the time for CYBER COMMAND.

SMARTS

August 2, 2016


On 13 October 2014 at 9:32 A.M. my ninety-two (92) year-old mother died of Alzheimer’s.   It was a very peaceful passing, but as her only son it was very painful to witness her gradual memory loss and the demise of all cognitive skills.  Even though there is no cure, there are certain medications that can arrest progression to a point.  None were effective in her case.

Her condition once again piqued my interest in intelligence (I.Q.), smarts, intellect.  Are we born with an I. Q. we cannot improve? How do cultural and family environment affect intelligence? What activities diminish I.Q., if any?  Just how much of our brain’s abilities does the average working-class person need and use each day? Obviously, some professions require greater intellect than others. How is I.Q. distributed over our species in general?

IQ tests are the most reliable (i.e. consistent) and valid (i.e. accurate and meaningful) type of psychometric test that psychologists make use of. They are well-established as a good measure of general intelligence, or g.  IQ tests are widely used in many contexts – educational, professional and for leisure. Universities use IQ tests (e.g. SAT entrance exams) to select students, companies use IQ tests (job aptitude tests) to screen applicants, and high IQ societies such as Mensa use IQ test scores as membership criteria.

The following bell-shaped curve will demonstrate approximate distribution of intellect for our species.

Bell Shaped Curve

The area under the curve between scores corresponds to the percentage (%) in the population. The scores on this IQ bell curve are color-coded in ‘standard deviation units’. A standard deviation is a measure of the spread of the distribution, with fifteen (15) points representing one standard deviation for most IQ tests. Nearly seventy percent (70%) of the population score between eighty-five (85) and one hundred and fifteen (115) – i.e. plus and minus one standard deviation. A very small percentage of the population (about 0.1% or 1 in 1000) have scores less than fifty-five (55) or greater than one hundred and forty-five (145) – that is, more than three (3) standard deviations out!

As you can see, the mean I.Q. is approximately one hundred (100), with ninety-five percent (95%) of the general population scoring between seventy (70) and one hundred and thirty (130) – plus and minus two standard deviations. Only about two percent (2%) of the population score greater than one hundred and thirty (130), and a tremendously small fraction, roughly 0.1%, score in the genius range, greater than one hundred and forty-five (145). A quick check of these figures against the normal distribution is sketched below.
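
Here is that check as a minimal Python sketch (standard library only), computing the fraction of a normal distribution falling within one, two, and three standard deviations of the mean:

```python
import math

def fraction_within(k):
    """Fraction of a normal distribution within k standard deviations of the mean."""
    return math.erf(k / math.sqrt(2))

# With a mean of 100 and a standard deviation of 15 IQ points:
for k in (1, 2, 3):
    lo, hi = 100 - 15 * k, 100 + 15 * k
    print(f"IQ {lo}-{hi}: {100 * fraction_within(k):.1f}% of the population")
# IQ 85-115: 68.3%
# IQ 70-130: 95.4%
# IQ 55-145: 99.7%
```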

OK, who’s smart?  Let’s look.

PRESENT AND LIVING:

  • Garry Kasparov—190.  Born in 1963 in Baku, in what is now Azerbaijan, Garry Kasparov is arguably the most famous chess player of all time. When he was seven, Kasparov enrolled at Baku’s Young Pioneer Palace; then at ten he started to train at the school of legendary Soviet chess player Mikhail Botvinnik. In 1980 Kasparov qualified as a grandmaster, and five years later he became the then youngest-ever outright world champion. He retained the championship title until 1993, and has held the position of world number one-ranked player for three times longer than anyone else. In 1996 he famously took on IBM computer Deep Blue, winning with a score of 4–2 – although he lost to a much upgraded version of the machine the following year. In 2005 Kasparov retired from chess to focus on politics and writing. He has a reported IQ of 190.
  • Philip Emeagwali—190. Dr. Philip Emeagwali, who has been called the “Bill Gates of Africa,” was born in Nigeria in 1954. Like many African schoolchildren, he dropped out of school at age 14 because his father could not continue paying Emeagwali’s school fees. However, his father continued teaching him at home, and every day Emeagwali performed mental exercises such as solving 100 math problems in one hour. His father taught him until Philip “knew more than he did.”
  • Marilyn vos Savant—228. Marilyn vos Savant’s intelligence quotient (I.Q.) score of 228 is certainly one of the highest ever recorded.  This very high I.Q. gave the St. Louis-born writer instant celebrity and earned her the sobriquet “the smartest person in the world.” Although vos Savant’s family was aware of her exceptionally high I.Q. scores on the Stanford-Binet test when she was ten (10) years old (she is also recognized as having the highest I.Q. score ever recorded by a child), her parents decided to withhold the information from the public in order to avoid commercial exploitation and assure her a normal childhood.
  • Mislav Predavec—192.  Mislav Predavec is a Croatian mathematics professor with a reported IQ of 190. “I always felt I was a step ahead of others. As material in school increased, I just solved the problems faster and better,” he has explained. Predavec was born in Zagreb in 1967, and his unique abilities were obvious from a young age. As for his adult achievements, since 2009 Predavec has taught at Zagreb’s Schola Medica Zagrabiensis. In addition, he runs trading company Preminis, having done so since 1989. And in 2002 Predavec founded exclusive IQ society GenerIQ, which forms part of his wider IQ society network. “Very difficult intelligence tests are my favorite hobby,” he has said. In 2012 the World Genius Directory ranked Predavec as the third smartest person in the world.
  • Rick Rosner—191.  U.S. television writer and pseudo-celebrity Richard Rosner is an unusual case. Born in 1960, he has led a somewhat checkered professional life: as well as writing for Jimmy Kimmel Live! and other TV shows, Rosner has, he says, been employed as a stripper, doorman, male model and waiter. In 2000 he infamously appeared on Who Wants to Be a Millionaire? answering a question about the altitude of capital cities incorrectly and reacting by suing the show, albeit unsuccessfully. Rosner placed second in the World Genius Directory’s 2013 Genius of the Year Awards; the site lists his IQ at 192, which places him just behind Greek psychiatrist Evangelos Katsioulis. Rosner reportedly hit the books for 20 hours a day to try and outdo Katsioulis, but to no avail.
  • Christopher Langan—210.  Born in San Francisco in 1952, self-educated Christopher Langan is a special kind of genius. By the time he turned four, he’d already taught himself how to read.  At high school, according to Langan, he tutored himself in “advanced math, physics, philosophy, Latin and Greek, all that.” What’s more, he allegedly got 100 percent on his SAT test, even though he slept through some of it. Langan attended Montana State University but dropped out. Rather like the titular character in 1997 movie Good Will Hunting, Langan didn’t choose an academic career; instead, he worked as a doorman and developed his Cognitive-Theoretic Model of the Universe during his downtime. In 1999, on TV newsmagazine 20/20, neuropsychologist Robert Novelly stated that Langan’s IQ – said to be between 195 and 210 – was the highest he’d ever measured. Langan has been dubbed “the smartest man in America.”
  • Evangelos Katsioulis—198. Katsioulis is known for his high intelligence test scores.  There are several reports that he has achieved the highest scores ever recorded on IQ tests designed to measure exceptional intelligence.   Katsioulis has a reported IQ of 205 on the Stanford-Binet scale with a standard deviation of 16, which is equivalent to an IQ of 198.4 on the more common 15-point scale.
  • Kim Ung-Yong—210.   Before The Guinness Book of World Records withdrew its Highest IQ category in 1990, South Korean former child prodigy Kim Ung-Yong made the list with a score of 210. Kim was born in Seoul in 1963, and by the time he turned three, he could already read Korean, Japanese, English and German. When he was just eight years old, Kim moved to America to work at NASA. “At that time, I led my life like a machine. I woke up, solved the daily assigned equation, ate, slept, and so forth,” he has explained. “I was lonely and had no friends.” While he was in the States, Kim allegedly obtained a doctorate degree in physics, although this is unconfirmed. In any case, in 1978 he moved back to South Korea and went on to earn a Ph.D. in civil engineering.
  • Christopher Hirata—225.   Astrophysicist Chris Hirata was born in Michigan in 1982, and at the age of 13 he became the youngest U.S. citizen to receive an International Physics Olympiad gold medal. When he turned 14, Hirata apparently began studying at the California Institute of Technology, and he would go on to earn a bachelor’s degree in physics from the school in 2001. At 16 – with a reported IQ of 225 – he started doing work for NASA, investigating whether it would be feasible for humans to settle on Mars. Then in 2005 he went on to obtain a Ph.D. in physics from Princeton. Hirata is currently a physics and astronomy professor at The Ohio State University. His specialist fields include dark energy, gravitational lensing, the cosmic microwave background, galaxy clustering, and general relativity. “If I were to say Chris Hirata is one in a million, that would understate his intellectual ability,” said a member of staff at his high school in 1997.
  • Terence Tao—230.  Born in Adelaide in 1975, Australian former child prodigy Terence Tao didn’t waste any time flexing his educational muscles. When he was two years old, he was able to perform simple arithmetic. By the time he was nine, he was studying college-level math courses. And in 1988, aged just 13, he became the youngest gold medal recipient in International Mathematical Olympiad history – a record that still stands today. In 1992 Tao achieved a master’s degree in mathematics from Flinders University in Adelaide, the institution from which he’d attained his B.Sc. the year before. Then in 1996, aged 20, he earned a Ph.D. from Princeton, turning in a thesis entitled “Three Regularity Results in Harmonic Analysis.” Tao’s long list of awards includes a 2006 Fields Medal, and he is currently a mathematics professor at the University of California, Los Angeles.
  • Stephen Hawking—235. Guest appearances on TV shows such as The Simpsons, Futurama and Star Trek: The Next Generation have helped cement English astrophysicist Stephen Hawking’s place in the pop cultural domain. Hawking was born in 1942, and in 1959, when he was 17 years old, he received a scholarship to read physics and chemistry at Oxford University. He earned a bachelor’s degree in 1962 and then moved on to Cambridge to study cosmology. Diagnosed with motor neuron disease at the age of 21, Hawking became depressed and almost gave up on his studies. However, inspired by his relationship with his fiancée – and soon to be first wife – Jane Wilde, he returned to his academic pursuits and obtained his Ph.D. in 1965. Hawking is perhaps best known for his pioneering theories on black holes and his bestselling 1988 book A Brief History of Time.

PAST GENIUS:

The individuals above are living.  Let’s take a very quick look at several past geniuses.  I’m sure you know the names.

  • Johann Goethe—210-225
  • Albert Einstein—205-225
  • Leonardo da Vinci—180-220
  • Isaac Newton—190-200
  • James Maxwell—190-205
  • Copernicus—160-200
  • Gottfried Leibniz—182-205
  • William Sidis—200-300
  • Carl Gauss—250-300
  • Voltaire—190-200

As you can see, these guys are heavy hitters.   I strongly suspect there are many we have not mentioned: individuals who have achieved but never gotten the opportunity to, let’s just say, shine.  OK, where does that leave the rest of us? There is GOOD news.  Calvin Coolidge said it best with the following quote:

“Nothing in this world can take the place of persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent.”

President Calvin Coolidge.

I think this says it all.  As always, I welcome your comments.


A web site called “The Best Schools” recently published a list of the top twenty (20) professions they feel are the most viable and stable for the next decade.   They have identified twenty (20) jobs representing a variety of industries that are not only thriving now, but are expected to grow throughout the next ten (10) years. Numbers were taken from projections by the Bureau of Labor Statistics (BLS) for 2010 to 2020.  I would like to list those jobs for you now as the BLS sees them.  Please note, these are in alphabetical order.

  • Accountant/Auditor
  • Biomedical Engineer
  • Brick mason, Block mason, and Stone mason
  • Civil Engineer
  • Computer Systems Analyst
  • Dental Hygienist
  • Financial Examiner
  • Health Educator
  • Home Health Aide
  • Human Resources Specialist
  • Interpreter/Translator
  • Management Analyst
  • Market Research Analyst
  • Meeting/Event Planner
  • Mental Health Counselor and Family Therapist
  • Physical Therapist and Occupational Therapist
  • Physician and Surgeon
  • Registered Nurse
  • Software Developer
  • Veterinarian

I would like now to present what the BLS indicates will be job growth for the engineering disciplines.  Job prospects for engineers over the next ten (10) years are very positive, and according to the BLS, most engineering disciplines will experience growth over the coming decade.

Professions such as biomedical engineering will see stellar growth of twenty-three percent (23%) over the next ten (10) years, while nuclear engineering will actually see a four percent (4%) decline in jobs over the coming decade.

The engineering profession is expected to follow the range of average job growth — about five percent (5%) — through 2024. Engineers, however, are expected to earn more, beginning right after graduation.  Two smart moves that will help engineering job prospects, according to the latest stats, include post-graduate education and the willingness to move into management. This is no different than it has always been.  I would also recommend taking a look at an MBA, after you receive your MS degree in your specific field of endeavor.

[Chart: projected ten-year job growth for the engineering disciplines, including mechanical, petroleum, materials, aerospace, civil, biomedical, nuclear, chemical, computer hardware, industrial, electrical, mining, computer programming, environmental, and health and safety]

CONCLUSIONS:

I think it can be said that any profession in the fields of engineering and health services will be somewhat insulated from fluctuations in the economy over the next ten years.  We are getting older and apparently fatter.   Both “conditions” require healthcare specialists.  Older medical and engineering practitioners are retiring at a very fast rate, and many of the positions available are due to those retirements.  At the present time, companies in the United States cannot find enough engineers and engineering technicians to fill available jobs.  There is a huge skills gap in our country left unfilled due to lack of training and lack of motivation on the part of able-bodied individuals.  It is a significant problem that must be solved as we progress into the twenty-first century.  My recommendation—BE AN ENGINEER. The jobs for the next twenty years are out there.  Just a thought.

MOORE’S LAW

June 10, 2016


There is absolutely no doubt the invention and development of chip technology has changed the world and made possible a remarkable number of devices we seemingly cannot live without.  It has also made possible miniaturization of electronics considered impossible thirty years ago.  This post is about the rapid improvement in that technology, and those of you who read my posts are probably very familiar with Moore’s Law.  Let us restate and refresh our memories.

“Moore’s law is the observation that, over the history of computing hardware, the number of transistors in a dense integrated circuit has doubled approximately every two years.”

Chart of Moore's Law

You can see from the graphic above that the law is represented in graph form with the actual “chip” designation given.  Most people will be familiar with Moore’s Law, which was not so much a law as a prediction given by Intel’s Gordon Moore.   His theory was stated in 1965.  Currently, the density of components on a silicon wafer is close to reaching its physical limit, but there are promising technologies that should supersede the transistor to overcome this “shaky” fact.  Just who is Dr. Gordon Moore?  We will get to him right after the quick sketch below of how fast that doubling compounds.
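
This is my own back-of-the-envelope illustration, not Intel data. Starting from the roughly 2,300 transistors of Intel’s 1971 4004 processor and doubling every two years:

```python
# Moore's law as a simple compounding rule: counts double every two years.
def projected_transistors(n0, years, doubling_period=2.0):
    """Project a transistor count `years` after a starting count of n0."""
    return n0 * 2 ** (years / doubling_period)

# Starting from the Intel 4004's roughly 2,300 transistors in 1971:
for year in (1971, 1981, 1991, 2001, 2011):
    count = projected_transistors(2300, year - 1971)
    print(f"{year}: ~{count:,.0f} transistors")
```

Five doublings per decade means a factor of thirty-two every ten years, which is why the vertical axis on charts like the one above is always logarithmic.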

GORDON E. MOORE:

Gordon Earle Moore was born January 3, 1929.  He is an American businessman, co-founder and Chairman Emeritus of Intel Corporation, and the author of Moore’s law.  Moore was born in San Francisco, California, and grew up in nearby Pescadero. He attended Sequoia High School in Redwood City and initially went to San Jose State University.  After two years he transferred to the University of California, Berkeley, from which he received a Bachelor of Science degree in chemistry in 1950.

In September 1950, Moore matriculated at the California Institute of Technology (Caltech), where he received a PhD in chemistry with a minor in physics, awarded in 1954. Moore conducted postdoctoral research at the Applied Physics Laboratory at Johns Hopkins University from 1953 to 1956.

Moore joined MIT and Caltech alumnus William Shockley at the Shockley Semiconductor Laboratory division of Beckman Instruments, but left with the “traitorous eight” when Sherman Fairchild agreed to fund their efforts to create the influential Fairchild Semiconductor corporation.

In July 1968, Robert Noyce and Moore founded NM Electronics, which later became Intel Corporation, where Moore served as Executive Vice President until 1975.   He then became President.  In April 1979, Moore became Chairman of the Board and Chief Executive Officer, holding that position until April 1987, when he became Chairman of the Board. He was named Chairman Emeritus of Intel Corporation in 1997.  Under Noyce, Moore, and later Andrew Grove, Intel has pioneered new technologies in the areas of computer memory, integrated circuits and microprocessor design.  A picture of Dr. Moore is given as follows:

Gordon Moore

JUST HOW DO YOU MAKE A COMPUTER CHIP?

We are going to use Intel as our example although there are several “chip” manufacturers in the world.  The top ten (10) are as follows:

  • INTEL = $48.7 billion in sales
  • Samsung = $28.6 billion in sales
  • Texas Instruments = $14 billion in sales.
  • Toshiba = $12.7 billion in sales
  • Renesas = $10.6 billion in sales
  • Qualcomm = $10.2 billion in sales
  • ST Microelectronics = $9.7 billion in sales
  • Hynix = $9.3 billion in sales
  • Micron = $7.4 billion in sales
  • Broadcom = $7.2 billion in sales

As you can see, INTEL is by far the biggest, producing the greatest number of computer chips.

The deserts of Arizona are home to Intel’s Fab 32, a $3 billion factory that is performing one of the most complicated electrical engineering feats of our time.  It’s here that processors with components measuring just forty-five (45) millionths of a millimeter across are manufactured, ready to be shipped to motherboard manufacturers all over the world.  Creating these complicated miniature systems is impressive enough, but it’s not the processors’ diminutive size that’s the most startling or impressive part of the process. It may seem an impossible transformation, but these fiendishly complex components are made from nothing more glamorous than sand. Such a transformative feat isn’t simple. The production process requires more than three hundred (300) individual steps.

STEP ONE:

Sand is composed of silica (also known as silicon dioxide), and is the starting point for making a processor. Sand used in the building industry is often yellow, orange or red due to impurities, but the type chosen in the manufacture of silicon is a much purer form known as silica sand, which is usually recovered by quarrying. To extract the element silicon from the silica, it must be reduced (in other words, have the oxygen removed from it). This is accomplished by heating a mixture of silica and carbon in an electric arc furnace to a temperature in excess of 2,000°C.  The carbon reacts with the oxygen in the molten silica to produce carbon dioxide (a by-product) and silicon, which settles in the bottom of the furnace. The remaining silicon is then treated with oxygen to reduce any calcium and aluminum impurities. The end result of this process is a substance referred to as metallurgical-grade silicon, which is up to ninety-nine percent (99%) pure.
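
Written as a net chemical equation (my condensation of the description above; the real furnace chemistry involves intermediate steps):

$$\mathrm{SiO_2} + \mathrm{C} \;\xrightarrow{\;>2000^{\circ}\mathrm{C}\;}\; \mathrm{Si} + \mathrm{CO_2}$$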

This is not nearly pure enough for semiconductor manufacture, however, so the next job is to refine the metallurgical-grade silicon further. The silicon is ground to a fine powder and reacted with gaseous hydrogen chloride in a fluidized bed reactor at 300°C giving a liquid compound of silicon called trichlorosilane.

Impurities such as iron, aluminum, boron and phosphorus also react to give their chlorides, which are then removed by fractional distillation. The purified trichlorosilane is vaporized and reacted with hydrogen gas at 1,100°C so that the elemental silicon is retrieved.

During the reaction, silicon is deposited on the surface of an electrically heated ultra-pure silicon rod to produce a silicon ingot. The end result is referred to as electronic-grade silicon, and has a purity of 99.999999 percent. (Incredible purity.)

STEP TWO:

Although pure to a very high degree, raw electronic-grade silicon has a polycrystalline structure. In other words, it’s made of many small silicon crystals, with defects called grain boundaries. Because these anomalies affect local electronic behavior, polycrystalline silicon is unsuitable for semiconductor manufacturing. To turn it into a usable material, the silicon must be transformed into single crystals that have a regular atomic structure. This transformation is achieved through the Czochralski Process. Electronic-grade silicon is melted in a rotating quartz crucible and held at just above its melting point of 1,414°C. A tiny crystal of silicon is then dipped into the molten silicon and slowly withdrawn while being continuously rotated in the opposite direction to the rotation of the crucible. The crystal acts as a seed, causing silicon from the crucible to crystallize around it. This builds up a rod – called a boule – that comprises a single silicon crystal. The diameter of the boule depends on the temperature in the crucible, the rate at which the crystal is ‘pulled’ (which is measured in millimeters per hour) and the speed of rotation. A typical boule measures 300mm in diameter.

STEP THREE:

Integrated circuits are essentially planar, which is to say that they’re formed on the surface of the silicon. To maximize the surface area of silicon available for making chips, the boule is sliced up into discs called wafers. The wafers are just thick enough to allow them to be handled safely during semiconductor fabrication; 300mm wafers are typically 0.775mm thick. Sawing is carried out using a wire saw that cuts multiple slices simultaneously, in the same way that some kitchen gadgets cut an egg into several slices in a single operation.

Silicon saws differ from kitchen tools in that the wire is constantly moving and carries with it a slurry of silicon carbide, the same abrasive material that forms the surface of ‘wet-dry’ sandpaper. The sharp edges of each wafer are then smoothed to prevent the wafers from chipping during later processes.

Next, in a procedure called ‘lapping’, the surfaces are polished using an abrasive slurry until the wafers are flat to within an astonishing 2μm (two thousandths of a millimeter). The wafer is then etched in a mixture of nitric, hydrofluoric and acetic acids. The nitric acid oxidizes the surfaces to give a thin layer of silicon dioxide – which the hydrofluoric acid immediately dissolves away to leave a clean silicon surface – and the acetic acid controls the reaction rate. The result of all this refining and treating is an even smoother and cleaner surface.

STEP FOUR:

In many of the subsequent steps, the electrical properties of the wafer will be modified through exposure to ion beams, hot gases and chemicals. But this needs to be done selectively to specific areas of the wafer in order to build up the circuit.  A multistage process is used to create an oxide layer in the shape of the required circuit features. In some cases, this procedure can be achieved using ‘photoresist’, a photosensitive chemical not dissimilar to that used in making photographic film (just as described in steps B, C and D, below).

Where hot gases are involved, however, the photoresist would be destroyed, making another, more complicated method of masking the wafer necessary. To overcome the problem, a patterned oxide layer is applied to the wafer so that the hot gases only reach the silicon in those areas where the oxide layer is missing. Applying the oxide layer mask to the wafer is a multistage process, illustrated as follows.

(A) The wafer is heated to a high temperature in a furnace. The surface layer of silicon reacts with the oxygen present to create a layer of silicon dioxide.

(B) A layer of photoresist is applied. The wafer is spun in a vacuum so that the photoresist spreads out evenly over the surface before being baked dry.

(C) The wafer is exposed to ultraviolet light through a photographic mask or film. This mask defines the required pattern of circuit features. This process has to be carried out many times, once for each chip or rectangular cluster of chips on the wafer. The film is moved between each exposure using a machine called a ‘stepper’.

(D) The next stage is to develop the latent circuit image. This process is carried out using an alkaline solution. During this process, those parts of the photoresist that were exposed to the ultraviolet soften in the solution and are washed away.

(E) The photoresist isn’t sufficiently durable to withstand the hot gases used in some steps, but it is able to withstand hydrofluoric acid, which is now used to dissolve those parts of the silicon oxide layer where the photoresist has been washed away.

(F) Finally, a solvent is used to remove the remaining photoresist, leaving a patterned oxide layer in the shape of the required circuit features.

STEP FIVE:

The fundamental building block of a processor is a type of transistor called a MOSFET.  There are “P” channels and “N” channels. The first step in creating a circuit is to create n-type and p-type regions. The method Intel uses for its 90nm process and beyond is given below:

(A) The wafer is exposed to a beam of boron ions. These implant themselves into the silicon through the gaps in a layer of photoresist to create areas called ‘p-wells’. These are, confusingly enough, used in the n-channel MOSFETs.

A boron ion is a boron atom that has had an electron removed, thereby giving it a positive charge. This charge allows the ions to be accelerated electrostatically in much the same way that electrons are accelerated towards the front of a CRT television, giving them enough energy to become implanted into the silicon.

(B) A different photoresist pattern is now applied, and a beam of phosphorus ions is used in the same way to create ‘n-wells’ for the p-channel MOSFETs.

(C) In the final ion implantation stage, following the application of yet another photoresist, another beam of phosphorus ions is used to create the n-type regions in the p-wells that will act as the source and drain of the n-channel MOSFETs. This has to be carried out separately from the creation of the n-wells because it needs a greater concentration of phosphorus ions to create n-type regions in p-type silicon than it takes to create n-type regions in pure, un-doped silicon.

(D) Next, following the deposition of a patterned oxide layer (because, once again, the photoresist would be destroyed by the hot gas used here), a layer of silicon-germanium doped with boron (which is a p-type material) is applied.

That’s just about it.  I know this is long and torturous, but we did say there were approximately three hundred (300) steps in producing a chip.

OVERALL SUMMARY:

The way a chip works is the result of how a chip’s transistors and gates are designed and the ultimate use of the chip. Design specifications that include chip size, number of transistors, testing, and production factors are used to create schematics—symbolic representations of the transistors and interconnections that control the flow of electricity through a chip.

Designers then make stencil-like patterns, called masks, of each layer. Designers use computer-aided design (CAD) workstations to perform comprehensive simulations and tests of the chip functions. To design, test, and fine-tune a chip and make it ready for fabrication takes hundreds of people.

The “recipe” for making a chip varies depending on the chip’s proposed use. Making chips is a complex process requiring hundreds of precisely controlled steps that result in patterned layers of various materials built one on top of another.

A photolithographic “printing” process is used to form a chip’s multilayered transistors and interconnects (electrical circuits) on a wafer. Hundreds of identical processors are created in batches on a single silicon wafer (a rough estimate of how many is sketched after the image below).  A JPEG of an INTEL wafer is given as follows:

Chip Wafer
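
As a rough check on “hundreds per wafer” (my own sketch using a widely quoted first-order approximation, not Intel’s figures), the die count can be estimated from the wafer diameter and the die area, with a correction for the partial dies lost around the circular edge:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Estimate whole dies per wafer: gross wafer area over die area,
    minus a correction term for partial dies along the circular edge."""
    d, a = wafer_diameter_mm, die_area_mm2
    return int(math.pi * d**2 / (4 * a) - math.pi * d / math.sqrt(2 * a))

# A hypothetical 100 mm^2 processor die on a standard 300 mm wafer:
print(dies_per_wafer(300, 100))  # roughly 640 whole dies
```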

Once all the layers are completed, a computer performs a process called wafer sort test. The testing ensures that the chips perform to design specifications.

After fabrication, it’s time for packaging. The wafer is cut into individual pieces called die. The die is packaged between a substrate and a heat spreader to form a completed processor. The package protects the die and delivers critical power and electrical connections when placed directly into a computer circuit board or mobile device, such as a smartphone or tablet.  The chip below is an INTEL Pentium 4 version.

INTEL Pentium Chip

Intel makes chips that have many different applications and use a variety of packaging technologies. Intel packages undergo final testing for functionality, performance, and power. Chips are electrically coded, visually inspected, and packaged in protective shipping material for shipment to Intel customers and retail.

CONCLUSIONS:

Genius is a wonderful thing, and Dr. Gordon E. Moore is certainly a genius.  I think such genius is never celebrated enough.  We know the entertainment “stars”, sports “stars”, and political want-to-bes get their press coverage, but nine out of ten individuals do not know those who have contributed significantly to better lives for us.  People such as Dr. Moore.   Today is the funeral of Cassius Clay, AKA Muhammad Ali.  A great boxer and, we are told, a really kind man.  I have no doubt both are true.  His funeral has been televised and has been on-going for about four (4) hours now.  Do you think Dr. Moore will get the recognition Mr. Ali is getting when he dies?  Just a thought.
