SMARTS

March 17, 2019


Who was the smartest person in the history of our species? Solomon, Albert Einstein, Jesus, Nikola Tesla, Isaac Newton, Leonardo da Vinci, Stephen Hawking—who would you name?  We’ve had several individuals who broke the curve when it comes to intelligence.  The Oxford Dictionary of the English Language defines IQ as follows:

“an intelligence test score that is obtained by dividing mental age, which reflects the age-graded level of performance as derived from population norms, by chronological age and multiplying by 100: a score of 100 thus indicates performance at exactly the normal level for that age group. Abbreviation: IQ”

An intelligence quotient or IQ is a score derived from one of several standardized tests designed to measure intelligence.  The term “IQ” is a translation of the German Intelligenzquotient and was coined by the German psychologist William Stern in 1912.  It was a method Dr. Stern proposed for scoring early modern children’s intelligence tests such as those developed by Alfred Binet and Théodore Simon in the early twentieth century.  Although the term “IQ” is still in use, the scoring of modern IQ tests such as the Wechsler Adult Intelligence Scale is now based on projecting the subject’s measured rank onto a Gaussian bell curve with a center value of one hundred (100) and a standard deviation of fifteen (15), rather than on the mental-age quotient.  The Stanford-Binet IQ test has a standard deviation of sixteen (16).  Roughly sixty-eight percent (68%) of the human population has an IQ between eighty-five and one hundred and fifteen.  From one hundred and fifteen to one hundred and thirty you are considered to be highly intelligent.  Above one hundred and thirty you are exceptionally gifted.
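
To make the two scoring conventions above concrete, here is a minimal Python sketch (an illustration only, not any official scoring procedure). It computes Stern's old ratio IQ and converts a modern deviation IQ score into an approximate population percentile, assuming the Wechsler-style normal distribution with mean 100 and standard deviation 15.

# Minimal sketch of the two IQ conventions discussed above.
# Assumes a normal distribution with mean 100 and SD 15 (Wechsler convention);
# the Stanford-Binet historically used SD 16.
from statistics import NormalDist

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's original ratio IQ: (mental age / chronological age) * 100."""
    return (mental_age / chronological_age) * 100.0

def deviation_iq_percentile(score: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Approximate percentage of the population scoring at or below `score`."""
    return NormalDist(mu=mean, sigma=sd).cdf(score) * 100.0

if __name__ == "__main__":
    print(f"Ratio IQ for mental age 12, chronological age 10: {ratio_iq(12, 10):.0f}")
    for score in (85, 100, 115, 130):
        print(f"Deviation IQ {score}: ~{deviation_iq_percentile(score):.1f}th percentile")

The gap between the percentiles printed for 115 and 85 (roughly 84% minus 16%) is where the approximately 68% figure quoted above comes from.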

What are several qualities of highly intelligent people?  Let’s look.

QUALITIES:

  • They have a great deal of self-control
  • They are very curious
  • They are avid readers
  • They are intuitive
  • They love learning
  • They are adaptable
  • They are risk-takers
  • They are NOT over-confident
  • They are open-minded
  • They are somewhat introverted

You probably know individuals who fit this profile.  We are going to look at one right now:  John von Neumann.

JOHN von NEUMANN:

The Financial Times of London celebrated John von Neumann as “The Man of the Century” on Dec. 24, 1999. The headline hailed him as the “architect of the computer age,” not only the “most striking” person of the 20th century, but its “pattern-card”—the pattern from which modern man, like the newest fashion collection, is cut.

The Financial Times and others characterize von Neumann’s importance for the development of modern thinking by what are termed his three great accomplishments, namely:

(1) Von Neumann is the inventor of the computer. All computers in use today have the “architecture” von Neumann developed, which makes it possible to store the program, together with data, in working memory.

(2) By comparing human intelligence to computers, von Neumann laid the foundation for “Artificial Intelligence,” which is taken to be one of the most important areas of research today.

(3) Von Neumann used his “game theory” to develop a dominant tool for economic analysis, which gained recognition in 1994 when the Nobel Prize for economic sciences was awarded to John C. Harsanyi, John F. Nash, and Reinhard Selten.

John von Neumann, original name János Neumann (born December 28, 1903, Budapest, Hungary—died February 8, 1957, Washington, D.C.), was a Hungarian-born American mathematician. As an adult, he appended von to his surname; the hereditary title had been granted to his father in 1913. Von Neumann grew from child prodigy to one of the world’s foremost mathematicians by his mid-twenties. Important work in set theory inaugurated a career that touched nearly every major branch of mathematics. Von Neumann’s gift for applied mathematics took his work in directions that influenced quantum theory, the theory of automata, economics, and defense planning. Von Neumann pioneered game theory and, along with Alan Turing and Claude Shannon, was one of the conceptual inventors of the stored-program digital computer.

Von Neumann exhibited signs of genius in early childhood: he could joke in Classical Greek and, for a family stunt, he could quickly memorize a page from a telephone book and recite its numbers and addresses. Von Neumann learned languages and math from tutors and attended Budapest’s most prestigious secondary school, the Lutheran Gymnasium. The Neumann family fled Béla Kun’s short-lived communist regime in 1919 for a brief and relatively comfortable exile split between Vienna and the Adriatic resort of Abbazia. Upon completion of von Neumann’s secondary schooling in 1921, his father discouraged him from pursuing a career in mathematics, fearing that there was not enough money in the field. As a compromise, von Neumann simultaneously studied chemistry and mathematics. He earned a degree in chemical engineering from the Swiss Federal Institute of Technology in Zurich and a doctorate in mathematics (1926) from the University of Budapest.

OK, that’s all well and good, but do we know the IQ of Dr. John von Neumann?

John von Neumann’s IQ is often cited as 190, which is considered super-genius territory and places him in the top 0.1% of the world’s population.

With his marvelous IQ, he wrote one hundred and fifty (150) published papers in his life: sixty (60) in pure mathematics, twenty (20) in physics, and sixty (60) in applied mathematics. His last work, an unfinished manuscript written while in the hospital and later published in book form as The Computer and the Brain, gives an indication of the direction of his interests at the time of his death. It discusses how the brain can be viewed as a computing machine. The book is speculative in nature, but discusses several important differences between brains and computers of his day (such as processing speed and parallelism), as well as suggesting directions for future research. Memory is one of the central themes in his book.

I told you he was smart!

COMPUTER SIMULATION

January 20, 2019


More and more engineers, systems analysts, biochemists, city planners, medical practitioners, and individuals in the entertainment fields are moving toward computer simulation.  Let’s take a quick look at simulation, and then we will explore several examples of how powerful this technology can be.

WHAT IS COMPUTER SIMULATION?

Simulation modelling is an excellent tool for analyzing and optimizing dynamic processes. Specifically, when mathematical optimization of complex systems becomes infeasible, and when conducting experiments within real systems is too expensive, time consuming, or dangerous, simulation becomes a powerful tool. The aim of simulation is to support objective decision making by means of dynamic analysis, to enable managers to safely plan their operations, and to save costs.

A computer simulation or a computer model is a computer program that attempts to simulate an abstract model of a particular system. … Computer simulations build on and are useful adjuncts to purely mathematical models in science, technology and entertainment.

Computer simulations have become a useful part of mathematical modelling of many natural systems in physics, chemistry and biology, human systems in economics, psychology, and social science and in the process of engineering new technology, to gain insight into the operation of those systems. They are also widely used in the entertainment fields.

Traditionally, the formal modeling of systems has been done with mathematical models, which attempt to find analytical solutions to problems, enabling the prediction of the system’s behavior from a set of parameters and initial conditions.  The word prediction is very important in the overall process. One critical part of the predictive process is designating the parameters properly: not only the upper and lower specifications but also the parameters that define intermediate processes.
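
As a toy example of predicting behavior from a set of parameters and an initial condition, the sketch below steps Newton's law of cooling forward in time with a simple Euler integration; the cooling coefficient and temperatures are invented purely for illustration, not taken from any real process.

# Toy sketch: predict behavior from parameters and an initial condition.
# Model: Newton's law of cooling, dT/dt = -k * (T - T_ambient),
# integrated with a simple explicit Euler time step.

def simulate_cooling(t_initial: float, t_ambient: float, k: float,
                     dt: float = 0.1, steps: int = 600) -> list[float]:
    """Return the predicted temperature trajectory for the given parameters."""
    temps = [t_initial]
    t = t_initial
    for _ in range(steps):
        t += dt * (-k * (t - t_ambient))   # Euler update
        temps.append(t)
    return temps

if __name__ == "__main__":
    # Made-up parameters: a part cooling from 90 C toward a 20 C ambient.
    trajectory = simulate_cooling(t_initial=90.0, t_ambient=20.0, k=0.05)
    print(f"Predicted temperature after 60 time units: {trajectory[-1]:.1f} C")

Changing the coefficient k or the initial temperature and re-running is exactly the kind of "what-if" exercise described later in this post.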

The reliability of, and the trust people put in, computer simulations depend on the validity of the simulation model.  The degree of trust is directly related to the software itself and the reputation of the company producing the software. There will be considerably more in this course regarding vendors providing software to companies wishing to simulate processes and solve complex problems.

Computer simulations find use in the study of dynamic behavior in environments that may be difficult or dangerous to implement in real life. For example, a nuclear blast may be represented with a mathematical model that takes into consideration various elements such as velocity, heat, and radioactive emissions. Additionally, one may implement changes to the equation by changing certain variables, like the amount of fissionable material used in the blast.  Another application involves predictive efforts relative to weather systems.  The mathematics involved in these determinations is significantly complex and usually involves a branch of math called “chaos theory”.
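
Weather modeling itself is far beyond a short example, but the classic logistic map below (a standard textbook illustration, not a weather model) shows the hallmark of the chaos theory mentioned above: two initial conditions differing by one part in a million produce completely different trajectories within a few dozen iterations.

# Illustrative sketch of sensitive dependence on initial conditions,
# using the logistic map x_{n+1} = r * x_n * (1 - x_n) with r in the
# chaotic regime. A textbook example, not a weather model.

def logistic_trajectory(x0: float, r: float = 3.9, n: int = 50) -> list[float]:
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

if __name__ == "__main__":
    a = logistic_trajectory(0.500000)
    b = logistic_trajectory(0.500001)   # initial condition differs by one millionth
    for step in (0, 10, 25, 50):
        print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")

This sensitivity to initial conditions is a major reason long-range weather prediction degrades so quickly even with accurate models.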

Simulations largely help in determining behaviors when individual components of a system are altered. Simulations can also be used in engineering to determine potential effects, such as the effect on river systems of constructing dams.  Some companies call these behaviors “what-if” scenarios because they allow the engineer or scientist to apply differing parameters to discern cause-and-effect interactions.

One great advantage a computer simulation has over a mathematical model is that it allows a visual representation of events and the timeline. You can actually see the action and chain of events with simulation and investigate the parameters for acceptance.  You can examine the limits of acceptability using simulation.   All components and assemblies have upper and lower specification limits and must perform within those limits.
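
As a hedged illustration of examining the limits of acceptability, the sketch below runs a small Monte Carlo simulation of a hypothetical hole-diameter process and counts how many simulated parts fall outside the specification limits; the nominal size, process spread, and limits are made-up values chosen only for the example.

# Hypothetical sketch: Monte Carlo check of parts against upper/lower
# specification limits. The nominal diameter, process spread, and limits
# below are invented for illustration only.
import random

NOMINAL = 10.000            # mm, made-up nominal hole diameter
SIGMA = 0.015               # mm, made-up process standard deviation
LSL, USL = 9.960, 10.040    # made-up lower/upper specification limits

def simulate_parts(n: int = 100_000, seed: int = 42) -> float:
    """Return the fraction of simulated parts that fall outside the spec limits."""
    rng = random.Random(seed)
    out_of_spec = sum(
        1 for _ in range(n)
        if not (LSL <= rng.gauss(NOMINAL, SIGMA) <= USL)
    )
    return out_of_spec / n

if __name__ == "__main__":
    print(f"Estimated out-of-spec fraction: {simulate_parts():.4%}")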

Computer simulation is the discipline of designing a model of an actual or theoretical physical system, executing the model on a digital computer, and analyzing the execution output. Simulation embodies the principle of “learning by doing” — to learn about the system we must first build a model of some sort and then operate the model. The use of simulation is an activity that is as natural as a child who role plays. Children understand the world around them by simulating (with toys and figurines) most of their interactions with other people, animals and objects. As adults, we lose some of this childlike behavior but recapture it later on through computer simulation. To understand reality and all of its complexity, we must build artificial objects and dynamically act out roles with them. Computer simulation is the electronic equivalent of this type of role playing and it serves to drive synthetic environments and virtual worlds. Within the overall task of simulation, there are three primary sub-fields: model design, model execution and model analysis.

REAL-WORLD SIMULATION:

The following examples are taken from computer screens representing real-world situations and/or problems that need solutions.  As mentioned earlier, “what-ifs” may be realized by animating the computer model, providing cause-and-effect responses to desired inputs. Let’s take a look.

A great host of mechanical and structural problems may be solved by using computer simulation. The example above shows how the diameter of two matching holes may be affected by applying heat to the bracket.

 

The Newtonian and non-Newtonian flow of fluids, i.e. liquids and gases, has always been a subject of concern within piping systems.  Flow related to pressure and temperature may be approximated by simulation.

 


Electromagnetics is an extremely complex field. The graphic above strives to show how a magnetic field reacts to applied voltage.

Chemical engineers are very concerned with reaction time when chemicals are mixed.  One example might be the ignition time when an oxidizer comes in contact with fuel.

Acoustics, or how sound propagates through a physical device or structure, can also be simulated.

The transfer of heat from a warmer surface to a colder surface has always come into question. Simulation programs are extremely valuable in visualizing this transfer.

 

Equation-based modeling can simulate how a structure, in this case a metal plate, is affected when forces are applied.

In addition to computer simulation, we have AR (augmented reality) and VR (virtual reality).  Those subjects are fascinating but will require another post for another day.  Hope you enjoy this one.

 

 

SCUTOIDS

July 31, 2018


Just who is considered the “father of geometry”?  Do you know the answer?  Euclid enters history as one of the greatest mathematicians of all time and is often referred to as the father of geometry. The standard geometry most of us learned in school is called Euclidean geometry.  My geometry teacher in high school was Mr. Willard Millsaps.  OK, you asked how I remember that teacher’s name—he was magic. I graduated in 1961 from Chattanooga Central High School so it is a minor miracle that I remember anything, but I do remember Mr. Millsaps.

Euclid gathered all the knowledge developed in Greek mathematics at that time and created his great work, a book called ‘The Elements’ (c. 300 BCE). This treatise is unequaled in the history of science and could safely lay claim to being the most influential non-religious book of all time.

Euclid probably attended Plato’s academy in Athens before moving to Alexandria, in Egypt. At this time, the city had a huge library, and the ready availability of papyrus made it the center for books, the major reasons why great minds such as Heron of Alexandria and Euclid based themselves there.   According to the ancient accounts of Plutarch, Aulus Gellius, Ammianus Marcellinus, and Orosius, the library was accidentally burned during or after Caesar’s siege of Alexandria in 48 BC.  The library was arguably one of the largest and most significant libraries of the ancient world, but details are a mixture of history and legend. Its main purpose was to show off the wealth of Egypt, with research as a lesser goal, but its contents were used to aid the ruler of Egypt. At any rate, its loss was significant.

You would certainly think that from 300 BCE to the present day just about every geometric figure under the sun would have been discovered but that just might not be the case.  Researchers from the University of Seville found a new configuration of shapes:  “twisted prisms”.  These prisms are found in nature, more specifically within the cells that make up skin and line many organs. Scutoids are the true shape of epithelial cells that protect organisms against infections and take in nutrients.

These “blocks” were previously represented as prism-shaped, but research published in the peer-reviewed journal Nature Communications suggests they have a specific curve and look unlike any other known shape. The researchers observed the structure in fruit-flies and zebrafish.

The scutoid is six-sided at the top and five-sided on the bottom, with one triangular side. The reason it has been so complex to define is that epithelial cells must move and join together to organize themselves “and give the organs their final shape,” Luisma Escudero, of the University of Seville’s Biology faculty, said in a release.  A picture is truly worth a thousand words, so given below is an artist’s rendition of a “twisted prism” or SCUTOID.

This shape — new to math, not to nature — is the form that a group of cells in the body takes in order to pack tightly and efficiently into the tricky curves of organs, scientists reported in a new paper, published July 27 in the journal Nature Communications. As mentioned earlier, the cells, called epithelial cells, line most surfaces in an animal’s body, including the skin, other organs and blood vessels. These cells are typically described in biology books as column-like or having some sort of prism shape — two parallel faces and a certain number of parallelogram sides. Sometimes, they can also be described as a bottle-like form of a prism called a “frustum.”

But by using computational modeling, the group of scientists found that epithelial cells can take a new shape, previously unrecognized by mathematics, when they have to pack together tightly to form the bending parts of organs. The scientists named the shape “scutoid” after a triangle-shaped part of a beetle’s thorax called the scutellum. The researchers later confirmed the presence of the new shape in the epithelial cells of fruit-fly salivary glands and embryos.

By packing into scutoids, the cells minimize their energy use and maximize how stable they are when they pack, the researchers said in a statement. And uncovering such elegant mathematics of nature can provide engineers with new models to inspire delicate human-made tissues.

“If you are looking to grow artificial organs, this discovery could help you build a scaffold to encourage this kind of cell packing, accurately mimicking nature’s way to efficiently develop tissues,” study co-senior author Javier Buceta, an associate professor in the Department of Bioengineering at Lehigh University in Pennsylvania, said in the statement.

The results of the study surprised the researchers. “One does not normally have the opportunity to discover, much less name, a new shape,” Buceta said in the statement.

CONCLUSIONS:

I just wonder how many more things we do not know about our universe and the planet we inhabit. I think as technology advances and we become more adept at investigating, we will discover an encyclopedia full of “unknowns”.

AUGMENTED REALITY (AR)

October 13, 2017


Depending on the location, you can ask just about anybody to give a definition of Virtual Reality (VR) and they will take a stab at it. This is because gaming and the entertainment segments of our population have used VR as a new tool to promote games such as SuperHot VR, Rock Band VR, House of the Dying Sun, Minecraft VR, Robo Recall, and others.  If you ask them about Augmented Reality or AR they probably will give you the definition of VR or nothing at all.

Augmented reality, sometimes called Mixed Reality, is a technology that merges real-world objects or the environment with virtual elements generated by sensory input devices for sound, video, graphics, or GPS data.  Unlike VR, which completely replaces the real world with a virtual world, AR operates in real time and is interactive with objects found in the environment, providing an overlaid virtual display over the real one.

While popularized by gaming, AR technology has shown a prowess for bringing an interactive digital world into a person’s perceived real world, where the digital aspect can reveal more information about a real-world object that is seen in reality.  This is basically what AR strives to do.  We are going to take a look at several very real applications of AR to indicate the possibilities of this technology.

  • Augmented Reality has found a home in healthcare, aiding preventative measures by giving professionals information on the status of patients. Healthcare giant Cigna recently launched a program called BioBall that uses Microsoft HoloLens technology in an interactive game to test for blood pressure and body mass index or BMI. Patients hold a light, medium-sized ball in their hands in a one-minute race to capture all the images that flash on the screen in front of them. The BioBall senses a player’s heartbeat. At the University of Maryland’s Augmentarium virtual and augmented reality laboratory, the school is using AR in healthcare to improve how ultrasound is administered to a patient.  Physicians wearing an AR device can look at both a patient and the ultrasound device while images flash on the “hood” of the AR device itself.
  • AR is opening up new methods to teach young children a variety of subjects they might not be interested in learning or, in some cases, help those who have trouble in class catch up with their peers. The University of Helsinki’s AR program helps struggling kids learn science by enabling them to virtually interact with molecular movement in gases, gravity, sound waves, and airplane wind physics.   AR creates new types of learning possibilities by transporting “old knowledge” into a new format.
  • Projection-based AR is emerging as a new way to cast virtual elements into the real world without the use of bulky headgear or glasses. That is why AR is becoming a very popular alternative for use in the office or during meetings. Startups such as Lampix and Lightform are working on projection-based augmented reality for use in the boardroom, retail displays, hospitality rooms, digital signage, and other applications.
  • In Germany, a company called FleetBoard is in the development phase for application software that tracks logistics for truck drivers to help with the long series of pre-departure checks before setting off cross-country or for local deliveries. The FleetBoard Vehicle Lens app uses a smartphone and software to provide live image recognition to identify the truck’s number plate.  The relevant information is superimposed in AR, thus speeding up the pre-departure process.
  • Last winter, Delft University of Technology in the Netherlands started working with first responders in using AR as a tool in crime scene investigation. The handheld AR system allows on-scene investigators and remote forensic teams to minimize the potential for site contamination.  This could be extremely helpful in finding traces of DNA, preserving evidence, and getting medical help from an outside source.
  • Sandia National Laboratories is working with AR as a tool to improve security training for users who are protecting vulnerable areas such as nuclear weapons or nuclear materials. The physical security training helps guide users through real-world examples such as theft or sabotage in order to be better prepared when an event takes place.  The training can be accomplished remotely and cheaply using standalone AR headsets.
  • In Finland, the VTT Technical Research Center recently developed an AR tool for the European Space Agency (ESA) for astronauts to perform real-time equipment monitoring in space. AR prepares astronauts with in-depth practice by coordinating the activities with experts in a mixed-reality situation.
  • The U.S. company Daqri International uses computer vision for industrial AR to enable data visualization while working on machinery or in a warehouse. These glasses and headsets from Daqri display project data, tasks that need to be completed, and potential problems with machinery, or even where an object needs to be placed or repaired.

CONCLUSIONS:

Augmented Reality merges real-world objects with virtual elements generated by sensory input devices to provide great advantages to the user.  No longer are gaming and entertainment the sole objectives of its use.  This brings to life a “new normal” for professionals seeking more and better technology to provide solutions to real-world problems.

AMAZING GRACE

October 3, 2017


There are many people responsible for the revolutionary development and commercialization of the modern-day computer.  Just a few of those names are given below, many of whom you have probably never heard of.  Let’s take a look.

COMPUTER REVOLUTIONARIES:

  • Howard Aiken–Aiken was the original conceptual designer behind the Harvard Mark I computer in 1944.
  • Grace Murray Hopper–Hopper coined the term “debugging” in 1947 after removing an actual moth from a computer. Her ideas about machine-independent programming led to the development of COBOL, one of the first modern programming languages. On top of it all, the Navy destroyer USS Hopper is named after her.
  • Ken Thompson and Dennis Ritchie–These guys invented Unix in 1969, the importance of which CANNOT be overstated. Consider this: your fancy Apple computer relies almost entirely on their work.
  • Doug and Gary Carlson–This team of brothers co-founded Brøderbund Software, a successful gaming company that operated from 1980-1999. In that time, they were responsible for churning out or marketing revolutionary computer games like Myst and Prince of Persia, helping bring computing into the mainstream.
  • Ken and Roberta Williams–This husband and wife team founded On-Line Systems in 1979, which later became Sierra Online. The company was a leader in producing graphical adventure games throughout the advent of personal computing.
  • Seymour Cray–Cray was a supercomputer architect whose computers were the fastest in the world for many decades. He set the standard for modern supercomputing.
  • Marvin Minsky–Minsky was a professor at MIT and oversaw the AI Lab, a hotspot of hacker activity, where he let prominent programmers like Richard Stallman run free. Were it not for his open-mindedness, programming skill, and ability to recognize that important things were taking place, the AI Lab wouldn’t be remembered as the talent incubator that it is.
  • Bob Albrecht–He founded the People’s Computer Company and developed a sincere passion for encouraging children to get involved with computing. He’s responsible for ushering in innumerable new young programmers and is one of the first modern technology evangelists.
  • Steve Dompier–At a time when computer speech was just barely being realized, Dompier made his computer sing. It was a trick he unveiled at the first meeting of the Homebrew Computer Club in 1975.
  • John McCarthy–McCarthy invented Lisp, the second-oldest high-level programming language that’s still in use to this day. He’s also responsible for bringing mathematical logic into the world of artificial intelligence — letting computers “think” by way of math.
  • Doug Engelbart–Engelbart is most noted for inventing the computer mouse in the mid-1960s, but he’s made numerous other contributions to the computing world. He created early GUIs and was even a member of the team that developed the now-ubiquitous hypertext.
  • Ivan Sutherland–Sutherland received the prestigious Turing Award in 1988 for inventing Sketchpad, the predecessor to the type of graphical user interfaces we use every day on our own computers.
  • Tim Paterson–He wrote QDOS, an operating system that he sold to Bill Gates in 1980. Gates rebranded it as MS-DOS, selling it to the point that it became the most widely used operating system of the day. (How ‘bout them apples?)
  • Dan Bricklin–He’s “The Father of the Spreadsheet.” Working in 1979 with Bob Frankston, he created VisiCalc, a predecessor to Microsoft Excel. It was the killer app of the time — people were buying computers just to run VisiCalc.
  • Bob Kahn and Vint Cerf–Prolific internet pioneers, these two teamed up to build the Transmission Control Protocol and the Internet Protocol, better known as TCP/IP. These are the fundamental communication technologies at the heart of the Internet.
  • Niklaus Wirth–Wirth designed several programming languages, but is best known for creating Pascal. He won a Turing Award in 1984 for “developing a sequence of innovative computer languages.”

ADMIRAL GRACE MURRAY HOPPER:

At this point, I want to highlight Admiral Grace Murray Hopper, or “Amazing Grace” as she is called in the computer world and the United States Navy.  Admiral Hopper’s picture is shown below.

Born in New York City in 1906, Grace Hopper joined the U.S. Navy during World War II and was assigned to program the Mark I computer. She continued to work in computing after the war, leading the team that created the first computer language compiler, which led to the popular COBOL language. She resumed active naval service at the age of 60, becoming a rear admiral before retiring in 1986. Hopper died in Virginia in 1992.

Born Grace Brewster Murray in New York City on December 9, 1906, Grace Hopper studied math and physics at Vassar College. After graduating from Vassar in 1928, she proceeded to Yale University, where, in 1930, she received a master’s degree in mathematics. That same year, she married Vincent Foster Hopper, becoming Grace Hopper (a name that she kept even after the couple’s 1945 divorce). Starting in 1931, Hopper began teaching at Vassar while also continuing to study at Yale, where she earned a Ph.D. in mathematics in 1934—becoming one of the first few women to earn such a degree.

After the war, Hopper remained with the Navy as a reserve officer. As a research fellow at Harvard, she worked with the Mark II and Mark III computers. She was at Harvard when a moth was found to have shorted out the Mark II, and is sometimes given credit for the invention of the term “computer bug”—though she didn’t actually author the term, she did help popularize it.

Hopper retired from the Naval Reserve in 1966, but her pioneering computer work meant that she was recalled to active duty—at the age of 60—to tackle standardizing communication between different computer languages. She would remain with the Navy for 19 years. When she retired in 1986, at age 79, she was a rear admiral as well as the oldest serving officer in the service.

Saying that she would be “bored stiff” if she stopped working entirely, Hopper took another job post-retirement and stayed in the computer industry for several more years. She was awarded the National Medal of Technology in 1991—becoming the first female individual recipient of the honor. At the age of 85, she died in Arlington, Virginia, on January 1, 1992. She was laid to rest in the Arlington National Cemetery.

CONCLUSIONS:

In 1997, the guided missile destroyer USS Hopper was commissioned by the Navy in San Francisco. In 2004, the University of Missouri honored Hopper with a computer museum on its campus, dubbed “Grace’s Place.” On display are early computers and computer components to educate visitors on the evolution of the technology. In addition to her programming accomplishments, Hopper’s legacy includes encouraging young people to learn how to program. The Grace Hopper Celebration of Women in Computing Conference is a technical conference that encourages women to become part of the world of computing, while the Association for Computing Machinery offers a Grace Murray Hopper Award. Additionally, on her birthday in 2013, Hopper was remembered with a “Google Doodle.”

In 2016, Hopper was posthumously honored with the Presidential Medal of Freedom by Barack Obama.

Who said women could not “do” STEM (Science, Technology, Engineering and Mathematics)?


WHERE WE ARE:

The manufacturing industry remains an essential component of the U.S. economy.  In 2016, manufacturing accounted for almost twelve percent (11.7%) of the U.S. gross domestic product (GDP) and contributed slightly over two trillion dollars ($2.18 trillion) to our economy. Every dollar spent in manufacturing adds close to two dollars ($1.81) to the economy because it contributes to development in auxiliary sectors such as logistics, retail, and business services.  I personally think this is a striking number when you compare that contribution to other sectors of our economy.  Interestingly enough, according to recent research, manufacturing could constitute as much as thirty-three percent (33%) of the U.S. GDP if both its entire value chain and production for other sectors are included.

Research from the Bureau of Labor Statistics shows that employment in manufacturing has been trending up since January of 2017. After double-digit gains in the first quarter of 2017, six thousand (6,000) new jobs were added in April.  Currently, the manufacturing industry employs 12,396,000 people, which equals more than nine percent (9%) of the U.S. workforce.   Nonetheless, many experts are concerned that these employment gains will soon be halted by the ever-rising adoption of automation. Yet automation is inevitable—and as in the previous industrial revolutions, it is likely to result in job creation in the long term.  Let’s look back at the Industrial Revolution.

INDUSTRIAL REVOLUTION:

The Industrial Revolution began in the late 18th century when a series of new inventions such as the spinning jenny and steam engine transformed manufacturing in Britain. The changes in British manufacturing spread across Europe and America, replacing traditional rural lifestyles as people migrated to cities in search of work. Men, women and children worked in the new factories operating machines that spun and wove cloth, or made pottery, paper and glass.

Women under 20 comprised the majority of all factory workers, according to an article on the Industrial Revolution by the Economic History Association. Many power loom workers, and most water frame and spinning jenny workers, were women. However, few women were mule spinners, and male workers sometimes violently resisted attempts to hire women for this position, although some women did work as assistant mule spinners. Many children also worked in the factories and mines, operating the same dangerous equipment as adult workers.  As you might suspect, this was a great departure from times prior to the revolution.

WHERE WE ARE GOING:

In an attempt to create more jobs, the new administration is reassessing free trade agreements, leveraging tariffs on imports, and promising tax incentives to manufacturers to keep their production plants in the U.S. Yet while these measures are certainly making the U.S. more attractive for manufacturers, they’re unlikely to directly increase the number of jobs in the sector. What it will do, however, is free up more capital for manufacturers to invest in automation. This will have the following benefits:

  • Automation will reduce production costs and make U.S. companies more competitive in the global market. High domestic operating costs—in large part due to comparatively high wages—compromise the U.S. manufacturing industry’s position as the world leader. Our main competitor is China, where low-cost production plants currently produce almost eighteen percent (17.6%) of the world’s goods—just zero point six percent (0.6%) less than the U.S. Automation allows manufacturers to reduce labor costs and streamline processes. Lower manufacturing costs result in lower product prices, which in turn will increase demand.

Low-cost production plants in China currently produce 17.6% of the world’s goods—just 0.6% less than the U.S.

  • Automation increases productivity and improves quality. Smart manufacturing processes that make use of technologies such as robotics, big data, analytics, sensors, and the IoT are faster, safer, more accurate, and more consistent than traditional assembly lines. Robotics provide 24/7 labor, while automated systems perform real-time monitoring of the production process. Irregularities, such as equipment failures or quality glitches, can be immediately addressed. Connected plants use sensors to keep track of inventory and equipment performance, and automatically send orders to suppliers when necessary. All of this combined minimizes downtime, while maximizing output and product quality.
  • Manufacturers will re-invest in innovation and R&D. Cutting-edge technologies, such as robotics, additive manufacturing, and augmented reality (AR), are likely to be widely adopted within a few years. For example, Apple® CEO Tim Cook recently announced the tech giant’s $1 billion investment fund aimed at assisting U.S. companies practicing advanced manufacturing. To remain competitive, manufacturers will have to re-invest a portion of their profits in R&D. An important aspect of innovation will involve determining how to integrate increasingly sophisticated technologies with human functions to create highly effective solutions that support manufacturers’ outcomes.

Technologies such as robotics, additive manufacturing, and augmented reality are likely to be widely adopted soon. To remain competitive, manufacturers will have to re-invest a portion of their profits in R&D.

HOW AUTOMATION WILL AFFECT THE WORKFORCE:

Now, let’s look at the five ways in which automation will affect the workforce.

  • Certain jobs will be eliminated.  By 2025, 3.5 million jobs will be created in manufacturing—yet due to the skills gap, two (2) million will remain unfilled. Certain repetitive jobs, primarily on the assembly line, will be eliminated.  This trend is with us right now.  Retraining of employees is imperative.
  • Current jobs will be modified.  In sixty percent (60%) of all occupations, thirty percent (30%) of the tasks can be automated.  For the first time, we hear the word “co-bot”.  A co-bot is a collaborative robot used in robot-assisted manufacturing, where an employee works side by side with a robotic system.  It’s happening right now.
  • New jobs will be created. There are several ways automation will create new jobs. First, lower operating costs will make U.S. products more affordable, which will result in rising demand. This in turn will increase production volume and create more jobs. Second, while automation can streamline and optimize processes, there are still tasks that haven’t been or can’t be fully automated. Supervision, maintenance, and troubleshooting will all require a human component for the foreseeable future. Third, as more manufacturers adopt new technologies, there’s a growing need to fill new roles such as data scientists and IoT engineers. Fourth, as technology evolves due to practical application, new roles that integrate human skills with technology will be created and quickly become commonplace.
  • There will be a skills gap between eliminated jobs and modified or new roles. Manufacturers should partner with educational institutions that offer vocational training in STEM fields. By offering students on-the-job training, they can foster a skilled and loyal workforce.  Manufacturers need to step up and offer additional job training.  Employees need to step up and accept the training that is being offered.  Survival is dependent upon both.
  • The manufacturing workforce will keep evolving. Manufacturers must invest in talent acquisition and development—both to build expertise in-house and to facilitate continuous innovation.  Ten years ago, would you have heard the words, RFID, Biometrics, Stereolithography, Additive manufacturing?  I don’t think so.  The workforce MUST keep evolving because technology will only improve and become a more-present force on the manufacturing floor.

As always, I welcome your comments.

CLOUD COMPUTING

May 20, 2017


OK, you have heard the term over and over again, but just what is cloud computing? Simply put, cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics, and more—over the Internet (“the cloud”). Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you’re billed for water or electricity at home. It is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications and services), which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in either privately owned or third-party data centers that may be located far from the user—ranging in distance from across a city to across the world. Cloud computing relies on the sharing of resources to achieve coherence and economies of scale, similar to a public utility such as the electricity grid.
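
As one small, concrete example of consuming storage as a metered cloud service, the sketch below uses Amazon's boto3 SDK for Python to upload a file to S3 object storage. The bucket name and file path are placeholders, and the call assumes AWS credentials are already configured in your environment.

# Minimal sketch of pay-as-you-go cloud storage using the AWS SDK (boto3).
# Assumes AWS credentials are already configured (e.g., via environment
# variables or ~/.aws/credentials); the bucket name below is a placeholder.
import boto3

def upload_report(local_path: str, bucket: str = "example-bucket") -> None:
    """Upload a local file to S3 object storage; you pay only for what is stored."""
    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, "reports/monthly.csv")
    print(f"Uploaded {local_path} to s3://{bucket}/reports/monthly.csv")

if __name__ == "__main__":
    upload_report("monthly.csv")

The same few lines work whether the bucket physically lives in a data center across town or across the world, which is the essence of the on-demand, location-independent model described above.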

ADVANTAGES AND DISADVANTAGES:

Any new technology has an upside and downside. There are obviously advantages and disadvantages when using the cloud.  Let’s take a look.

 Advantages

  • Lower cost for desktop clients since the applications are running in the cloud. This means clients with smaller hard drive requirements and possibly even no CD or DVD drives.
  • Peak computing needs of a business can be offloaded into cloud applications, saving the funds normally used for additional in-house servers.
  • Lower maintenance costs. This includes both hardware and software cost reductions, since client machine requirements are much lower and software purchase costs are eliminated altogether for applications running in the cloud.
  • Automatic application software updates for applications in the cloud. This is another maintenance savings.
  • Vastly increased computing power availability. The scalability of the server farm provides this advantage.
  • The scalability of virtual storage provides unlimited storage capacity.

 Disadvantages

  • Requires an “always on” Internet connection.
  • There are clearly concerns with data security, e.g., questions like: “If I can get to my data using a web browser, who else can?”
  • Concerns for loss of data.
  • Reliability. Service interruptions are rare but can happen. Google has already had an outage.

MAJOR CLOUD SERVICE PROVIDERS:

The following names are very recognizable.  Everyone knows the “open-market” cloud service providers.

  • AMAZON
  • SALESFORCE
  • GOOGLE
  • IBM
  • MICROSOFT
  • SUN MICROSYSTEMS
  • ORACLE
  • AT & T

PRIVATE CLOUD SERVICE PROVIDERS:

With all the interest in cloud computing as a service, there is also an emerging concept of private clouds. It is a bit reminiscent of the early days of the Internet and the importing of that technology into the enterprise as intranets. The concerns for security and reliability outside corporate control are very real and troublesome aspects of the otherwise attractive technology of cloud computing services. The IT world has not forgotten about the eight-hour downtime of the Amazon S3 cloud service on July 20, 2008. A private cloud means that the technology must be bought, built, and managed within the corporation. Will a company be purchasing cloud technology usable inside the enterprise for development of cloud applications, with the flexibility of running on the private cloud or outside on the public clouds? This “hybrid environment” is in fact the direction that some believe the enterprise community will be going, and some of the products that support this approach are listed below.

  • Elastra (http://www.elastra.com ) is developing a server that can be used as a private cloud in a data center. Tools are available to design applications that will run in both private and public clouds.
  • 3Tetra (http://www.3tetra.com ) is developing a grid operating system called ParaScale that will aggregate disk storage.
  • Cassatt (http://www.cassatt.com) will be offering technology that can be used for resource pooling.
  • Ncomputing (http://www.ncomputing.com) has developed a standard desktop PC virtualization software system that allows up to 30 users to use the same PC system with their own keyboard, monitor and mouse. Strong claims are made about savings on PC costs, IT complexity and power consumption by customers in government, industry and education communities.

CONCLUSION:

OK, clear as mud—right?  For me, the biggest misconception is the terminology itself—the cloud.   The word “cloud” seems to imply an IT system in the sky.  The exact opposite is the case.  The cloud is an earth-based IT system serving as a universal host.  A network of computers. A network of servers.  No cloud.

BOEING 777

March 22, 2015


This post used the following references as resources: 1.) Aviation Week and 2.) the Boeing Company website for the 777 aircraft configurations and the history of the Boeing Company.

I don’t think there is much doubt that The Boeing Company is and has been the foremost company in the world when it comes to building commercial aircraft. The history of aviation, specifically commercial aviation, would NOT be complete without Boeing being in the picture. There have been five (5) companies that figured prominently in aviation history relative to the United States. Let’s take a look.

THE COMPANIES:

During the last one hundred (100) years, humans have gone from walking on Earth to walking on the moon. They went from riding horses to flying jet airplanes. With each decade, aviation technology crossed another frontier, and, with each crossing, the world changed.

During the 20th century, five companies charted the course of aerospace history in the United States. They were the Boeing Airplane Co., Douglas Aircraft Co., McDonnell Aircraft Corp., North American Aviation and Hughes Aircraft. By the dawning of the new millennium, they had joined forces to share a legacy of victory and discovery, cooperation and competition, high adventure and hard struggle.

Their stories began with five men who shared the vision that gave tangible wings to the eternal dream of flight. William Edward Boeing, born in 1881 in Detroit, Mich., began building floatplanes near Seattle, Wash. Donald Wills Douglas, born in 1892 in New York, began building bombers and passenger transports in Santa Monica, Calif. James Smith McDonnell, born in 1899 in Denver, Colo., began building jet fighters in St. Louis, Mo. James Howard “Dutch” Kindelberger, born in 1895 in Wheeling, W.Va., began building trainers in Los Angeles, Calif. Howard Hughes Jr. was born in Houston, Texas, in 1905. The Hughes Space and Communications Co. built the world’s first geosynchronous communications satellite in 1963.

These companies began their journey across the frontiers of aerospace at different times and under different circumstances. Their paths merged and their contributions are the common heritage of The Boeing Company today.

In 1903, two events launched the history of modern aviation. The Wright brothers made their first flight at Kitty Hawk, N.C., and twenty-two (22) year-old William Boeing left Yale engineering college for the West Coast.

After making his fortune trading forest lands around Grays Harbor, Wash., Boeing moved to Seattle, Wash., in 1908 and, two years later, went to Los Angeles, Calif., for the first American air meet. Boeing tried to get a ride in one of the airplanes, but not one of the dozen aviators participating in the event would oblige. Boeing came back to Seattle disappointed, but determined to learn more about this new science of aviation.

For the next five years, Boeing’s air travel was mostly theoretical, explored during conversations at Seattle’s University Club with George Conrad Westervelt, a Navy engineer who had taken several aeronautics courses from the Massachusetts Institute of Technology.

The two checked out biplane construction and were passengers on an early Curtiss Airplane and Motor Co.-designed biplane that required the pilot and passenger to sit on the wing. Westervelt later wrote that he “could never find any definite answer as to why it held together.” Both were convinced they could build a biplane better than any on the market.

In the autumn of 1915, Boeing returned to California to take flying lessons from another aviation pioneer, Glenn Martin. Before leaving, he asked Westervelt to start designing a new, more practical airplane. Construction of the twin-float seaplane began in Boeing’s boathouse, and they named it the B & W, after their initials. THIS WAS THE BEGINNING.  Boeing has since developed a position in global markets unparalleled relative to the competition.

This post is specifically involved with the 777 product and the changes being made to upgrade that product to retain markets and fend off competition such as Airbus. Let’s take a look.

SPECIFICATION FOR THE 777:

In looking at the external physical characteristics, we see the following:

BOEING GENERAL EXTERNAL ARRANGEMENTS

As you can see, this is one BIG aircraft, with a wingspan of approximately 200 feet and a length of 242 feet for the “300” version.  The external dimensions are for passenger and freight configurations.  Both enjoy significantly large external dimensions.

Looking at the internal layout for passengers, we see the following:

TYPICAL INTERIOR SEATING ARRANGEMENTS

TECHNICAL CHARACTERISTICS:

If we drill down to the nitty-gritty, we find the following:

TECHNICAL CHARACTERISTICS(1)

TECHNICAL CHARACTERISTICS(2)

As mentioned, the 777 also provides much-needed services for freight haulers the world over.  In looking at payload vs. range, we see the following global “footprint” and long-range capabilities from Dubai.  I have chosen Dubai, but similar “footprints” may be had from Hong Kong, London, Los Angeles, etc.

FREIGHTER PAYLOAD AND RANGE

Even with these very impressive numbers, Boeing felt an upgrade was necessary to remain competitive to other aircraft manufacturers.

UPGRADES:

Ever careful with its stewardship of the cash-generating 777 program, Boeing is planning a series of upgrades to ensure the aircraft remains competitive in the long-range market well after the 777X derivative enters service.

The plan, initially revealed this past January, was presented in detail by the company for the first time on March 9 at the International Society of Transport Aircraft Trading meeting in Arizona. Aimed at providing the equivalent of two percent (2%) fuel-burn savings in baseline performance, the rolling upgrade effort will also include a series of optional product improvements to increase capacity by up to fourteen (14) seats, which will push the total potential fuel-burn savings on a per-seat basis to as much as five percent (5%) over the current 777-300ER by late 2016.

At least 0.5% of the overall specific fuel-burn savings will be gained from an improvement package to the aircraft’s GE90-115B engine, the first elements of which General Electric will test later this year.  The bulk of the savings will come from broad changes to reduce aerodynamic drag and structural weight. Additional optional improvements to the cabin will also provide operators with more seating capacity and upgraded features that would offer various levels of extra savings on a per-seat basis, depending on specific configurations and layouts.  The graphic below highlights the improvements announced.

UPGRADES FOR 777

“We are making improvements to the fuel-burn performance and the payload/range and, at same time, adding features and functionality to allow the airlines to continue to keep the aircraft fresh in their fleets,” says 777 Chief Project Engineer and Vice President Larry Schneider. The upgrades, many of which will be retro-fittable, come as Boeing continues to pursue new sales of the current-generation twin to help maintain the 8.3-per-month production rate until the transition to the 777X at the end of the decade. Robert Stallard, an analyst at RBS Europe, notes that Boeing has a firm backlog of 273 777-300s and 777Fs, which equates to around 2.7 years of current production. “We calculate that Boeing needs to get 272 new orders for the 777 to bridge the current gap and then transition production phase on the 777X,” he says.

The upgrades will also boost existing fleets, Boeing says. “Our 777s are operated by the world’s premier airlines and now we are seeing the Chinese carriers moving from 747 fleets to big twins,” says Schneider. “There are huge 777 fleets in Europe and the Middle East, as well as the U.S., so enabling [operators] to be able to keep those up to date and competitive in the market—even though some of them are 15 years old—is a big element of this.”

Initial parts of the upgrade are already being introduced and, in the tradition of the continuous improvements made to the family since it entered service, will be rolled into the aircraft between now and the third quarter of 2016. “There is not a single block point in 2016 where one aircraft will have everything on it. It is going to be a continuous spin-out of those capabilities,” Schneider says. Fuel-burn improvements to both the 777-200LR and -300ER were introduced early in the service life of both derivatives, and the family has also received several upgrades to the interior, avionics and maintenance features over the last decade.

The overall structural weight of the 777-300ER will be reduced by 1,200 lb. “When the -300ER started service in 2004 it was 1,800 lb. heavier, so we have seen a nice healthy improvement in weight,” he adds. The reductions have been derived from production-line improvements being introduced as part of the move to the automated drilling and riveting process for the fuselage, which Boeing expects will cut assembly flow time by almost half. The manufacturer is adopting the fuselage automated upright build (FAUB) process as part of moves to streamline production ahead of the start of assembly of the first 777-9X in 2017.

One significant assembly change is a redesign of the fuselage crown, which follows the simplified approach taken with the 787. “All the systems go through the crown, which historically is designed around a fore and aft lattice system that is quite heavy. This was designed with capability for growth, but that was not needed from a systems standpoint. So we are going to a system of tie rods and composite integration panels, like the 787. The combination has taken out hundreds of pounds and is a significant improvement for workers on the line who install it as an integrated assembly,” Schneider says. Other reductions will come from a shift to a lower weight, less dense form of cabin insulation and adoption of a lower density hydraulic fluid.

Boeing has also decided to remove the tail skid from the 777-300ER as a weight and drag reduction improvement after developing new flight control software to protect the tail during abused takeoffs and landings. “We redesigned the flight control system to enable pilots to fly like normal and give them full elevator authority, so they can control the tail down to the ground without touching it. The system precludes the aircraft from contacting the tail,” Schneider says. Although Boeing originally developed the baseline electronic tail skid feature to prevent this from occurring on the -300ER, the “old system allowed contact, and to be able to handle those loads we had a lot of structure in the airplane to transfer them through the tailskid up through the aft body into the fuselage,” he adds. “So there are hundreds of pounds in the structure, and to be able to take all that out with the enhanced tail strike-protection system is a nice improvement.”

Boeing is also reducing the drag of the 777 by making a series of aerodynamic changes to the wing based on design work conducted for the 787 and, perhaps surprisingly, the long-canceled McDonnell Douglas MD-12. The most visible change, which sharp-eyed observers will also be able to spot from below the aircraft, is a 787-inspired inboard flap fairing redesign.

“We are using some of the technology we developed on the 787 to use the fairing to influence the pressure distribution on the lower wing. In the old days, aerodynamicists were thrilled if you could put a fairing on an airplane for just the penalty of the skin friction drag. On the 787, we spent a lot of time working on the contribution of the flap fairing shape and camber to control the pressures on the lower wing surface.”

Although Schneider admits that the process was a little easier with the 787’s all-new wing, Boeing “went back and took a look at the 777 and we found a nice healthy improvement,” he says. The resulting fairing will be longer and wider, and although the larger wetted area will increase skin friction, the overall benefits associated with the optimized lift distribution over the whole wing will more than compensate. “It’s a little counterintuitive,” says Schneider, adding that wind-tunnel test results of the new shape showed close correlation with benefits predicted by computational fluid dynamics (CFD) analysis using the latest boundary layer capabilities and Navier-Stokes codes.

Having altered the pressure distribution along the underside of the wing, Boeing is matching the change on the upper surface by reaching back to technology developed for the MD-12 in the 1990s. The aircraft’s outboard raked wingtip, a feature added to increase span with the development of the longer-range variants, will be modified with a divergent trailing edge. “Today it has very low camber, and by using some Douglas Aircraft technology from the MD-12 we get a poor man’s version of a supercritical airfoil,” says Schneider. The tweak will increase lift at the outboard wing, making span loading more elliptical and reducing induced drag.

Boeing has been conducting loads analysis on the 777 wing to “make sure we understand where all those loads will go,” he says. A related loads analysis to evaluate whether the revisions could also be incorporated into a potential retrofit kit will be completed this month. “When we figure out at which line number those two changes will come together (the two changes must be introduced together), we will do a single flight to ensure we don’t have any buffet issues from the change in lift distribution. That’s our certification plan,” Schneider says.

A third change to the wing will focus on reducing the base drag of the leading-edge slat by introducing a version with a sharper trailing edge. “The trailing-edge step has a bit of drag associated with it, so we will be making it sharper and smoothing the profile,” he explains. The revised part will be made thinner and introduced around mid-2016. Further drag reductions will be made by extending the seals around the inboard end of the elevator to reduce leakage and by making the passenger windows thicker to ensure they are fully flush with the fuselage surface. The latter change will be introduced in early 2016.

In another change adopted from the 787, Boeing also plans to alter the 777 elevator trim bias. The software-controlled change will move the elevator trailing edge position in cruise by up to 2 deg., inducing increased inverse camber. This will increase the download, reducing the overall trim drag and improving long-range cruise efficiency.

The package of changes means that range will be increased by 100 nm or, alternatively, an additional 5,000 lb. of payload can be carried. Some of this extra capacity could be utilized by changes in the cabin that will free up space for another fourteen (14) seats. These will include a revised seat-track arrangement in the aft of the cabin to enable additional seats where the fuselage tapers. Some of the extra seating, which will increase overall seat count by three percent (3%), could feature the option of armrests integrated into the cabin wall. Schneider says the added seats, on top of the baseline two percent (2%) fuel-burn improvement, will improve total operating efficiency by five percent (5%) on a block fuel per-seat basis.
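
As a quick sanity check on those figures, here is a short Python calculation (my own arithmetic, not Boeing's published method) showing how a two percent (2%) fuel-burn reduction combined with roughly three percent (3%) more seats works out to about five percent (5%) on a block-fuel-per-seat basis. The normalized baseline values are simply assumptions for illustration.

```python
# Back-of-the-envelope check of the per-seat efficiency claim.

baseline_fuel = 1.00       # normalized block fuel for a given mission
baseline_seats = 1.00      # normalized seat count

improved_fuel = baseline_fuel * (1 - 0.02)    # 2% fuel-burn improvement
improved_seats = baseline_seats * (1 + 0.03)  # ~3% more seats

baseline_per_seat = baseline_fuel / baseline_seats
improved_per_seat = improved_fuel / improved_seats

gain = 1 - improved_per_seat / baseline_per_seat
print(f"Block fuel per seat improves by about {gain:.1%}")  # roughly 4.9%
```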

Other cabin change options will include repackaged Jamco-developed lavatory units that provide the same internal space as today’s units but are eight (8) inches narrower externally. The redesign includes the option of a foldable wall between two modules, providing access for a disabled passenger and an assistant. Boeing is also developing noise-damping modifications to reduce cabin sound by up to 2.5 dB, full cabin-length LED lighting and a 787-style entryway around Door 2. Boeing is also preparing to offer a factory-fitted option for electrically controlled window shades, similar to the 777 system developed as an aftermarket modification by British Airways.

CONCLUSIONS:

As you can see, the 777 is being prepared to continue in service for decades ahead by virtue of the modifications and improvements described above.

As always, I welcome your comments.

MAKER DAY

March 17, 2013


This past Saturday I had the great opportunity of attending an event called Maker Day.  It was sponsored by CoLab, Inc. in my home town of Chattanooga, Tennessee.  CoLab is a company dedicated to fostering innovation in the Chattanooga/Hamilton County area, and this was the first event organized specifically to demonstrate 3-D printing.  CoLab is a tremendous asset to our city, which is becoming well known in the Southeast for technological advancements.  We are also very fortunate to have the “SIM Center” located on the campus of the University of Tennessee at Chattanooga.  That organization provides project work involving “computational engineering,” an incredible technology in itself.

If you remember an earlier posting from last year, 3-D printing is an “additive manufacturing” technology that depends upon metered deposition of material in a prescribed manner determined by solid modeling.  There are several “additive manufacturing” processes in use today.

In each case, the same basic workflow is followed to produce the model:

BASIC PROCESS:

  • Create a 3-D model of the component using a computer-aided design (CAD) program.  There are various CAD modeling programs available today, but the “additive manufacturing” process MUST begin by developing a three-dimensional representation of the part to be produced.  It is important to note that an experienced CAD engineer/designer is an indispensable component for success.  In fact, RP&M processes could not come to fruition until three-dimensional solid modeling was available.
  • Generally, the CAD file must go through a CAD-to-RP&M translator.  This step assures the CAD data is input to the modeling machine in the “tessellated” STL format, which has become the standard for RP&M processes.  With this operation, the boundary surfaces of the object are represented as numerous tiny triangles.  (This step is indispensable to the process!)
  • The next step involves generating supports in a separate CAD file.  CAD designers/engineers may accomplish this task directly or with special software; one such package is “Bridgeworks”.  Supports are needed and used for the following three reasons:
  1. To ensure that the recoater blade will not strike the platform upon which the part is being built.
  2. To ensure that any small distortions of the platform will not lead to problems during part building.
  3. To provide a simple means of removing the part from the platform upon completion.
  • Next, the appropriate software will “chop” the CAD model into thin layers, typically 5 to 10 layers per millimeter (a rough layer-count example appears a bit further below).  Software has improved greatly over the past years, and these improvements allow for much better surface finishes and much better detail in part description.  The part and supports must be sliced, or mathematically sectioned, by the computer into a series of parallel horizontal planes, like the floors of a very tall building.  Also during this process, the layer thickness, the intended building style, the cure depth, the desired hatch spacing, the line-width compensation values and the shrinkage compensation factor(s) are selected and assigned.
  • Merging is the next step, where the supports, the part, and any additional supports and parts have their computer representations combined.  This is crucial and allows for the production of multiple parts connected by a “web” which can be broken apart after the parts are built.
  • Next, certain operational parameters are selected, such as the number of recoater-blade sweeps per layer, the sweep period, and the desired “Z”-wait.  All of these parameters must be selected by the programmer.  “Z”-wait is the time, in seconds, the system is instructed to pause after recoating; the purpose of this intentional pause is to allow any resin-surface nonuniformities to undergo fluid-dynamic relaxation.
  • Now we “build the model.”  The 3-D printer “paints” one layer, exposing the material in the tank and hardening it.  The resin polymerization process begins at this time, and the physical three-dimensional object is created.  Each layer goes through the following cycle (a minimal sketch of this per-layer cycle appears just after this list):
    1. Leveling—Typical resins undergo about five percent (5%) to seven percent (7%) total volumetric shrinkage.  Of this amount, roughly fifty percent (50%) to seventy percent (70%) occurs in the vat as a result of laser-induced polymerization.  With this being the case, a level-compensation module is built into the RP&M software program.  Upon completion of laser drawing on each layer, a sensor checks the resin level.  In the event the sensor detects a resin level that is not within the tolerance band, a plunger is activated by means of a computer-controlled precision stepper motor and the resin level is corrected to within the needed tolerance.
    2. Deep Dip—Under computer control, the “Z”-stage motor moves the platform down a prescribed amount to ensure that parts with large flat areas can be properly recoated.  When the platform is lowered, a substantial depression is generated on the resin surface.  The time required to close the surface depression has been determined from both viscous fluid dynamic analysis and experimental test results.
    3. Elevate—Under the influence of gravity, the resin fills the depression created during the previous step.  The “Z” stage, again under computer control, now elevates the uppermost part layer above the free resin surface.  This is done so that during the next step, only the excess resin beyond the desired layer thickness need be moved.  If this were not the case, additional resin would be disturbed.
    4. Sweep—The recoater blade traverses the vat from front to back and sweeps the excess resin from the part.  As soon as the recoater blade has completed its motion, the system is ready for the next step.
    5. Platform Drop—The platform then drops down a fraction of a millimeter and the process is repeated, layer by layer, until the entire model is produced.  As you can see, the thinner the layer, the finer and more detailed the resulting part.
    6. Draining—Part completion and draining.
    7. Removal—The part is then removed from the supporting platform and readied for any post-processing operations.
  • Next, heat treating and firing may occur for further hardening.  This phase is termed the post-cure operation.
  • After heat treating and firing, the part may be machined, sanded, painted, etc., until the final product meets the initial specifications.  As mentioned earlier, there have been considerable developments in the materials used for the process, and it is entirely possible that the part may be applied to an assembly or subassembly so that the designed function may be observed.  No longer is the component necessarily for “show and tell” only.
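
To tie the numbered build steps together, here is a minimal Python sketch of the per-layer recoat-and-expose cycle.  The SimulatedSLAMachine class, its method names, and every constant are assumptions made purely for illustration; this is not any vendor's actual machine software.

```python
# Illustrative per-layer cycle: leveling, deep dip, elevate, sweep, Z-wait,
# expose. All names and numbers are assumptions, not vendor settings.

LAYER_THICKNESS_MM = 0.1     # assumed layer thickness (10 layers per mm)
LEVEL_TOLERANCE_MM = 0.05    # assumed resin-level tolerance band
DEEP_DIP_MM = 5.0            # assumed deep-dip depth
Z_WAIT_S = 2.0               # assumed pause for surface relaxation

class SimulatedSLAMachine:
    """A stand-in for the real machine so the sketch runs end to end."""
    def read_resin_level_error_mm(self): return 0.08   # pretend level drift
    def move_plunger_mm(self, mm): print(f"plunger {mm:+.2f} mm")
    def move_platform_mm(self, mm): print(f"platform {mm:+.2f} mm")
    def sweep_recoater(self): print("recoater sweep")
    def wait_seconds(self, s): print(f"z-wait {s:.1f} s")
    def expose_layer(self, i): print(f"expose layer {i}")

def build_layer(machine, layer_index):
    # Leveling: correct the resin level if it drifts outside tolerance.
    error = machine.read_resin_level_error_mm()
    if abs(error) > LEVEL_TOLERANCE_MM:
        machine.move_plunger_mm(-error)
    # Deep dip: drop the platform well below the surface so large flat
    # areas recoat properly.
    machine.move_platform_mm(-DEEP_DIP_MM)
    # Elevate: bring the part back up, leaving only the excess resin above
    # the desired layer thickness to be swept off.
    machine.move_platform_mm(DEEP_DIP_MM - LAYER_THICKNESS_MM)
    # Sweep: the recoater blade removes the excess resin.
    machine.sweep_recoater()
    # Z-wait: pause so the resin surface can relax before exposure.
    machine.wait_seconds(Z_WAIT_S)
    # Expose: the laser "paints" and cures this cross-section.
    machine.expose_layer(layer_index)

if __name__ == "__main__":
    m = SimulatedSLAMachine()
    for layer in range(3):   # three layers, just to show the cycle
        build_layer(m, layer)
```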

 The entire procedure may take as long as 72 hours, depending upon size and complexity of the part, but the results are remarkably usable and applications are abundant. 
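
As a rough example of why build times can stretch toward that figure, the short Python calculation below estimates layer count and build time for a hypothetical part.  The part height and seconds-per-layer values are my own assumptions; only the 5-to-10-layers-per-millimeter figure comes from the process description above.

```python
# Rough slicing/build-time estimate: total time scales with layer count.

part_height_mm = 150.0      # assumed part height
layers_per_mm = 10          # "5 to 10 layers per millimeter" (finest case)
seconds_per_layer = 45.0    # assumed draw + recoat time per layer

layer_count = int(part_height_mm * layers_per_mm)
build_hours = layer_count * seconds_per_layer / 3600.0

print(f"{layer_count} layers, roughly {build_hours:.1f} hours of build time")
# 1,500 layers at 45 s each is about 18.8 hours, before post-cure and finishing.
```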

JPEGS FROM MAKER DAY EVENT

 If I may, I would now like to show several JPEGs from the event.  A very short description will follow each photograph.  I would like to state that I’m not Ansel Adams so some of the photographs are a bit borderline in quality.  Please forgive me for that.  Hopefully the content is worthwhile and will demonstrate the equipment used in the 3-D processes. 

 Assembly Hall (1)

 The demonstration was held in the Hamilton County/ Chattanooga Public Library.  The photo above does not really indicate the number of people attending but the day was a great success.  I’m told approximately three thousand (3,000) individuals did attend during the five-hour presentation.  Great turnout for the very first exhibition.

3-D Printing with Computer Image(2)

This photograph demonstrates that the first step is developing a three-dimensional model of the part to be printed.  The computer screen to the right of the printer shows the model being produced.  The “black box” is the printer itself.  The purple coil located at the back of the printer is the material being deposited onto the platform.  The platform indexes as the material is deposited.  A better look at a typical print head may be seen below:

Print Head (2)

One of the greatest advances in 3-D printing is the significant number of materials that can now be used for the printing process.  The picture below shows just some of the options available.

Materials(2)

The assembly below demonstrates a manufacturing plant layout assembled using 3-D printing techniques.  Individual modules were printed and assembled to provide the overall layout.  Please note the detail and complexity of the overall production.

Astec Plant Layout-3 D Printing(2)

One of the more unusual methods used in 3-D printing is the four-bar robotic system, demonstrated in the JPEG below.  Again, please note the spool of “green” material to the lower left of the photograph.  This material feeds up and over the equipment to the dispense head shown in the very center of the picture.

4-Bar 3-D Printer(3)

 This is a marvelous technology and one gaining acceptance as a viable manufacturing technique for component parts as well as prototypes.  I certainly hope this posting will give you cause for further investigation.  Many thanks.


It’s that time again.  As you know, the magazine “Design News” publishes a yearly study detailing the results of a questionnaire sent to working engineers covering salaries, job satisfaction, and the overall state of the profession.

Results this year are based on 1,684 usable replies.  At a 95% confidence level, these results are accurate and projectable within a ±2.4% margin of error.  If you are not comfortable with statistics, a 95% confidence level means that if the survey were repeated many times, about 95% of the resulting estimates would fall within that ±2.4% band of the true value; the margin of error is driven by the sample size.  This is an excellent result statistically and makes this study one that can be believed.  All of the data used in this document, including the graphics, is provided by Design News; the text and descriptive information are mine.  I definitely hope you enjoy the study.  Here goes.
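
For readers who want to see where the ±2.4% figure comes from, here is a quick Python check (my calculation, not Design News’) using the standard worst-case margin-of-error formula for 1,684 replies at a 95% confidence level.

```python
# Margin of error for a survey proportion: z * sqrt(p * (1 - p) / n).

import math

n = 1684            # usable replies
z = 1.96            # z-score for a 95% confidence level
p = 0.5             # worst-case proportion maximizes the margin of error

margin = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {margin:.1%}")   # about +/- 2.4%
```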

 Let’s first take a look at a general overall presentation as to where we stand by region within the United States.   These are average salary levels.

As you can see, the New England region once again wins the prize for having the highest average salary; then the Southwest, then the Pacific Northwest.  As a point of reference, when I entered the engineering work force in 1966, I was offered a $15,000 annual salary by Pratt and Whitney.   That sum was the “going rate” for graduating mechanical engineers.  EEs were higher with industrial engineers bringing up the rear.  From the standpoint of averages, let’s take a look at several conglomerate numbers:

You can see from this chart that the “average” annual salary is about $97K.  The stat that blew me away was the 46-hour work week.  I certainly wish I had had supervisors who believed in less than 50 hours; most of mine thought you were just getting warmed up around 40 to 45 hours.  My, how times have changed.  We are going to delve deeper into the various elements of the profession and highlight certain responses that indicate basic demographics.

 When considering base salary with no bonus factored in, we see the following:

Please keep in mind that entry-level starting salaries will be at the lower figures, AND differing disciplines AND differing industries will have averages and starting salaries commensurate with industry standards.  For instance, the average salary for a mechanical engineer in the appliance business will be lower than the average salary for a mechanical engineer working in medical engineering or aerospace.  The complexity of the product dictates the level of compensation to a great degree.  Please note also that the chart above is a “bell-shaped” curve, as it should be, with 56% of the salaries falling between $60K and $110K.  Very few true engineering types make north of $150K per year, and I suspect these would be individuals having P&L responsibilities in addition to managing teams of people.  (Now you see why there are more doctors, lawyers, bankers, etc. than engineers.)

 As mentioned earlier, the discipline and its complexity dictate to a large degree the salary brackets.  You get a feel for this fact with the chart below.

I consider manufacturing engineering to be one of the most important engineering fields, and yet it always seems to be the “anchor man” relative to compensation.  Manufacturing is a tough environment, with seemingly fewer resources to do a given job.

Classifications are further broken down into education levels as follows:

 

Those individuals with PhDs usually work for educational institutions and not industry or manufacturing; theirs is the world of R&D and teaching.  The ranks are overwhelmingly filled with bachelor’s and master’s degrees.  In my day, the thing to do was obtain a BS and follow up with an MBA.  In this economy, just finding a job within your field can be quite challenging.  If we further break down education vs. age, we find the following:

As you might suspect, the greater the years of service, the greater the compensation.

 

One of the things we find with engineers is that they are not “job hoppers”: the average length of service with any one company is 13 years.  I find this to be fascinating.  I will let you draw your own conclusions here, but for me it says that most engineers are fairly satisfied with the companies they keep.

The relative company size also has a definite bearing on compensation.

When I retired from GE in 2005, my total compensation package was $93K.   This is after 18 years with the company.  As you can see from the next chart, this is right in line with the Design News data, even for appliance manufacturing.

Let’s look now at the likes and dislikes of the profession at large.  Most engineers do like their jobs, as is evident from the next chart.

Job satisfaction was not too difficult to achieve.

One slide that is very telling is given below.  I can definitely identify with this chart, although when the word engineer is used, most people think of a guy who drives a train and not a graduate problem-solver or designer.  We certainly have a long way to go in educating the general public as to what engineers really do.  As for fully using engineering skills and feeling overworked and underpaid, don’t we all feel that way at times?  These two categories are definitely subjective, based upon an individual engineer’s “feel” for the job.

 

One definite “downer”:

Engineers, like most professionals, feel very uncomfortable about the future and what might happen.  Only 42% indicate they are not very concerned at all.  The chart above is supported by the feelings indicated by the following:

This write-up is definitely not meant to make a political statement but it is an election year and elections do determine the future of our country.   If you read the individual statements beside the pie-chart, you will get an indication as to some of the responses.

I would like to thank Design News for doing another marvelous job and I give them all the credit for putting together the data from which this blog is developed.   Let’s hope 2013 is a year of prosperity for all professions and we can present “great reviews” next August.
