AUTOMOTIVE FUTURE

January 25, 2018


Portions of this post are taken from the January issue of Design News Daily magazine.

The Detroit Auto Show has a weirdly duplicitous vibe these days. The biggest companies that attend make sure to talk about things that make them sound future-focused, almost benevolent. They talk openly about autonomy, electrification, and even embracing other forms of transportation. But they do this while doling out product announcements that are very much about meeting the current demands of consumers who, enjoying low gas prices, want trucks and crossover SUVs. With that said, it really is interesting to take a look at several “concept” cars: cars we just may be driving in the future, if not the near future. Let’s take a look right now.

Guangzhou Automobile Co. (better known as GAC Motor) stole the show in Detroit, at least if we take their amazing claims at face value. The Chinese automaker rolled out the Enverge electric concept car, which is said to have a 373-mile all-electric range based on a 71-kWh battery. Incredibly, it is also reported to have a wireless recharge time of just 10 minutes for a 240-mile range. Enverge’s power numbers are equally impressive: 235 HP and 302 lb-ft of torque, with a 0-62 mph time of just 4.4 seconds. GAC, the sixth biggest automaker in China, told the Detroit audience that it would start selling cars in the US by Q4 2019. The question is whether its extraordinary performance numbers will hold up to EPA scrutiny. If GAC can live up to these specifications, it may have the real deal here. Very impressive.
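As a quick plausibility check (my own back-of-the-envelope arithmetic, not a GAC or EPA figure), the claimed range and battery size imply an efficiency well above what production EVs were delivering at the time:

```python
# Back-of-the-envelope check of GAC's claimed Enverge numbers (my own
# arithmetic, not an official GAC or EPA figure).
claimed_range_mi = 373      # claimed all-electric range, miles
battery_kwh = 71            # claimed battery capacity, kWh

efficiency_mi_per_kwh = claimed_range_mi / battery_kwh
print(f"Implied efficiency: {efficiency_mi_per_kwh:.2f} mi/kWh")
# ~5.25 mi/kWh, well above the roughly 3 to 4 mi/kWh that typical
# production EVs achieved on the EPA cycle in 2018, which is exactly
# why the claim deserves scrutiny.
```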

As autonomous vehicle technology advances, automakers are already starting to examine the softer side of that market – that is, how will humans interact with the machines? And what are some of the new applications for the technology? That’s where Ford’s pizza delivery car came in. The giant automaker started delivering Domino’s pizzas in Ann Arbor, MI, late last year with an autonomous car. In truth, the car had a driver at the wheel, sitting behind a window screen. But the actual delivery was automated: Customers were alerted by a text; a rear window rolled down; an automated voice told them what to do, and they grabbed the pie. Ford engineers were surprised to find that the humans weren’t intimidated by the technology. “In the testing we did, people interacted nicely with the car,” Ford autonomous car research engineer Wayne Williams told Design News. “They talked to it as if it were a robot. They waved when it drove away. Kids loved it. They’d come running up to it.” The message to Ford was clear – autonomous cars are about more than just personal transportation. Delivery services are a real possibility, too.

Most of today’s autonomous cars use unsightly, spinning Lidar buckets atop their roofs. At the auto show, Toyota talked about an alternative Lidar technology that’s sleek and elegant. You have to admit that for now, the autonomous cars look UGLY—really ugly.  Maybe Toyota has the answer.

In a grand rollout, Lexus introduced a concept car called the LF-1 Limitless. The LF-1 is what we’ve all come to expect from modern concept cars – a test bed for numerous power trains and autonomous vehicle technologies. It can be propelled by a fuel cell, hybrid, plug-in hybrid, all-electric or gasoline power train. And its automated driving system includes a “miniaturized supercomputer with links to navigation data, radar sensors, and cameras for a 360-degree view of your surroundings with predictive capabilities.” The sensing technologies are all part of a system known as “Chauffeur mode.” Lexus explained that the LF-1 is setting the stage for bigger things: By 2025, every new Lexus around the world will be available as a dedicated electrified model or will have an electrified option.

Nissan’s Xmotion concept, which is said to combine Japanese aesthetics with SUV styling, includes seven digital screens. Three main displays join left- and right-side screens across the instrument panel. There’s also a “digital room mirror” in the ceiling and a center console display. Moreover, the displays can be controlled by gestures and even eye motions, enabling drivers to focus on the task of driving. A Human Machine Interface also allows drivers to easily switch from Nissan’s ProPilot automated driving system to a manual mode.

Cadillac showed off its Super Cruise technology, which is said to be the only semi-autonomous driving system that actually monitors the driver’s attention level. If the driver is attentive, Super Cruise can do amazing things – tooling along for hours on a divided highway with no intersections, for example, while handling all the steering, acceleration and braking. GM describes it as an SAE Level 2 autonomous system. It’s important because it shows autonomous vehicle technology has left the lab and is making its debut on production vehicles. Super Cruise launched late in 2017 on the Cadillac CT6.

In a continuing effort to understand the relationship between self-driving cars and humans, Ford Motor Co. and Virginia Tech displayed an autonomous test vehicle that communicates its intent to other drivers, bicyclists, and pedestrians. Such communication is important, Ford engineers say, because “designing a way to replace the head nod or hand wave is fundamental to ensuring safe and efficient operation of self-driving vehicles.”

Infiniti rolled out the Q Inspiration luxury sedan concept, which combines its variable compression ratio engine with Nissan’s ProPilot semi-autonomous vehicle technology. Infiniti claims the engine combines “turbocharged gasoline power with the torque and efficiency of a hybrid or diesel.” Known as the VC-Turbo, the four-cylinder engine continually transforms itself, adjusting its compression ratio to optimize power and fuel efficiency. At the same time, the sedan features ProPilot Assist, which provides assisted steering, braking and acceleration during driving. Photographers were out in force covering the Infiniti.

Toyota’s eye-catching Concept-i vehicle provided a more extreme view of the distant future, when vehicles will be equipped with artificial intelligence (AI). Meant to anticipate people’s needs and improve their quality of life, Concept-i is all about communicating with the driver and occupants. An AI agent named Yui uses light, sound, and even touch, instead of traditional screens, to communicate information. Colored lights in the footwells, for example, indicate whether the vehicle is in autonomous or manual drive; projectors in the rear deck project outside views onto the seat pillar to warn drivers about potential blind spots, and a next-generation heads-up display keeps the driver’s eyes and attention on the road. Moreover, the vehicle creates a feeling of warmth inside by emanating sweeping lines of light around it. Toyota engineers created the Concept-i features based on their belief that “mobility technology should be warm, welcoming, and above all, fun.”

CONCLUSIONS:  To be quite honest, I was not really blown away by this year’s offerings. I LOVE the Infiniti and the Toyota concept cars discussed above. The American models did not capture my attention. Just a thought.


DEEP LEARNING

December 10, 2017


If you read technical literature with some hope of keeping up with the latest trends in technology, you find words and phrases such as AI (Artificial Intelligence) and DL (Deep Learning). They seem to be used interchangeably, but the facts deny that premise. Let’s take a look.

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine-learning methods based on learning data representations, as opposed to task-specific algorithms. (NOTE: The key words here are MACHINE LEARNING.) The learning can be supervised, semi-supervised or unsupervised. The prospect of developing learning mechanisms and software to control machines is frightening to many but definitely very interesting to most. Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. In this context, machine learning loosely mimics the brain’s neural networks in physical hardware and software: computers and computer programs. Never before in the history of our species has this degree of success been possible. Only with the advent of very powerful computers and programs capable of handling “big data” has it become achievable.

With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.  The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs.  Because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before. Deep learning is a class of machine learning algorithms that accomplish the following:

  • Use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input.
  • Learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners.
  • Learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
  • Use some form of gradient descent for training via backpropagation.

Layers that have been used in deep learning include hidden layers of an artificial neural network and sets of propositional formulas.  They may also include latent variables organized layer-wise in deep generative models such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.
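To make the listed ideas concrete, here is a minimal from-scratch sketch in NumPy: a cascade of two nonlinear layers trained by gradient descent via backpropagation on a toy problem (XOR). All names and sizes are illustrative only, not a production recipe.

```python
import numpy as np

# Minimal illustration of the ideas above: layered nonlinear processing
# units trained with gradient descent via backpropagation. Toy XOR data.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: input -> hidden (nonlinear) -> output
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: each layer feeds the next
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of squared error, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [0, 1, 1, 0]
```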

ARTIFICIAL NEURAL NETWORKS:

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.

An ANN is based on a collection of connected units called artificial neurons (analogous to the neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that it sends downstream.

Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times.

The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.

Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.

As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, playing “Go”).

APPLICATIONS:

Just what applications could take advantage of “deep learning?”

IMAGE RECOGNITION:

A common evaluation set for image classification is the MNIST database data set. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with the TIMIT speech corpus, its small size allows multiple configurations to be tested. A comprehensive list of results on this set is available.
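For readers who want to see what an MNIST experiment looks like in practice, here is a minimal sketch using the Keras API. It is illustrative only: a small fully connected network, not the record-setting architectures referred to below.

```python
import tensorflow as tf

# Small fully connected classifier on the MNIST set described above
# (60,000 training / 10,000 test images). Illustrative sketch only.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))   # [loss, accuracy] on the test set
```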

Deep learning-based image recognition has become “superhuman”, producing more accurate results than human contestants. This first occurred in 2011.

Deep learning-trained vehicles now interpret 360° camera views.   Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes.

The iPhone X, I am told, uses facial recognition as one method of securing the phone and ensuring a potential hacker’s ultimate failure to unlock it.

VISUAL ART PROCESSING:

Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of a) identifying the style period of a given painting, b) “capturing” the style of a given painting and applying it in a visually pleasing manner to an arbitrary photograph, and c) generating striking imagery based on random visual input fields.

NATURAL LANGUAGE PROCESSING:

Neural networks have been used for implementing language models since the early 2000s. Long short-term memory (LSTM) networks helped to improve machine translation and language modeling. Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep-learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as a probabilistic context-free grammar (PCFG) implemented by an RNN. Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing. Deep neural architectures provide the best results for constituency parsing, sentiment analysis, information retrieval, spoken language understanding, machine translation, contextual entity linking, writing style recognition and others.
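As a toy illustration of word embeddings, the sketch below trains word2vec on a tiny made-up corpus and looks up the resulting vectors. The gensim library and the corpus are my own choices for the example; the text above does not name a specific tool.

```python
from gensim.models import Word2Vec

# Tiny stand-in corpus; real embeddings are trained on far larger text.
sentences = [
    ["deep", "learning", "uses", "neural", "networks"],
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["neural", "networks", "learn", "representations"],
]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1)

vec = model.wv["neural"]          # a point in a 50-dimensional vector space
print(vec.shape)                  # (50,)
print(model.wv.most_similar("neural", topn=3))
```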

Google Translate (GT) uses a large end-to-end long short-term memory network. Its Google Neural Machine Translation (GNMT) system uses an example-based machine translation method in which the system “learns from millions of examples.” It translates whole sentences at a time, rather than pieces. Google Translate supports over one hundred languages. The network encodes the “semantics of the sentence rather than simply memorizing phrase-to-phrase translations.” GT can translate directly from one language to another, rather than using English as an intermediate.

DRUG DISCOVERY AND TOXICOLOGY:

A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects.  Research has explored use of deep learning to predict biomolecular target, off-target and toxic effects of environmental chemicals in nutrients, household products and drugs.

AtomNet is a deep learning system for structure-based rational drug design.   AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis.

CUSTOMER RELATIONS MANAGEMENT:

Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM (recency, frequency, monetary value) variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.
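To make “RFM variables” concrete, here is a small, hypothetical pandas computation; the column names and figures are made up purely for illustration.

```python
import pandas as pd

# Hypothetical transaction log used to derive the three RFM variables:
# recency (days since last order), frequency (order count), monetary (total spend).
tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3],
    "order_date": pd.to_datetime(
        ["2017-11-02", "2017-12-20", "2017-08-15",
         "2017-10-01", "2017-12-28", "2017-06-30"]),
    "amount": [40.0, 25.0, 120.0, 60.0, 80.0, 15.0],
})
as_of = pd.Timestamp("2018-01-01")

rfm = tx.groupby("customer_id").agg(
    recency_days=("order_date", lambda d: (as_of - d.max()).days),
    frequency=("order_date", "count"),
    monetary=("amount", "sum"),
)
print(rfm)
```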

RECOMMENDATION SYSTEMS:

Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music recommendations.  Multiview deep learning has been applied for learning user preferences from multiple domains.  The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.

BIOINFORMATICS:

An autoencoder ANN was used in bioinformatics to predict gene ontology annotations and gene-function relationships.
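For context, an autoencoder is simply a network trained to reconstruct its own input through a narrower hidden layer. The sketch below is a generic Keras example on random stand-in data, not the actual bioinformatics model cited above.

```python
import numpy as np
import tensorflow as tf

# Generic autoencoder sketch: random data stands in for whatever feature
# vectors (e.g., annotation profiles) a real study would use.
x = np.random.rand(1000, 64).astype("float32")

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(64,)),  # encoder
    tf.keras.layers.Dense(64, activation="sigmoid"),                  # decoder
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=10, batch_size=32)   # learn to reconstruct the input
```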

In medical informatics, deep learning was used to predict sleep quality based on data from wearables and predictions of health complications from electronic health record data.

MOBILE ADVERTISING:

Finding the appropriate mobile audience for mobile advertising is always challenging since there are many data points that need to be considered and assimilated before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.

ADVANTAGES AND DISADVANTAGES:

ADVANTAGES:

  • Delivers best-in-class performance, significantly outperforming other solutions in multiple domains, including speech, language, vision and playing games like Go. This isn’t by a little bit, but by a significant amount.
  • Reduces the need for feature engineering, one of the most time-consuming parts of machine learning practice.
  • Is an architecture that can be adapted to new problems relatively easily (e.g., vision, time series, language) using techniques like convolutional neural networks, recurrent neural networks and long short-term memory.

DISADVANTAGES:

  • Requires a large amount of data — if you only have thousands of examples, deep learning is unlikely to outperform other approaches.
  • Is extremely computationally expensive to train. The most complex models take weeks to train using hundreds of machines equipped with expensive GPUs.
  • Does not have much in the way of a strong theoretical foundation. This leads to the next disadvantage.
  • Determining the topology/flavor/training method/hyperparameters for deep learning is a black art with no theory to guide you.
  • What is learned is not easy to comprehend. Other classifiers (e.g. decision trees, logistic regression etc.) make it much easier to understand what’s going on.

SUMMARY:

Whether we like it or not, deep learning will continue to develop.  As equipment and the ability to capture and store huge amounts of data continue, the machine-learning process will only improve.  There will come a time when we will see a “rise of the machines”.  Let’s just hope humans have the ability to control those machines.


Portions of this post are taken from the publication “Industry Week”, Bloomberg View, 30 October 2017.

The Bloomberg report begins by stating: “The industrial conglomerate has lost $100 billion in market value this year as investors came to terms with the dawning reality that GE’s businesses don’t generate enough cash to support its rich dividend.”

Do you in your wildest dreams think that Jack Welch, former CEO of GE, would have produced results such as this?  I do NOT think so.  Welch “lived” with the guys on Wall Street.  These pitiful results come to us from Mr. Jeffrey Immelt.  It’s also now clear that years of streamlining didn’t go far enough as challenges of dumpster-fire proportions at its power and energy divisions overshadowed what were actually pretty good third-quarter health-care and aviation numbers.  Let me mention right now that I feel entitled to sound off about the results: I retired from a GE facility, The Roper Corporation, in 2005.

New CEO John Flannery’s pledge to divest twenty billion dollars ($20 billion) in assets perhaps risks another piecemeal breakup, but as details leak on the divestitures and other changes Flannery is contemplating, there’s at least a shot he could be positioning the company for something more drastic.  Now back to Immelt.

Immelt took over the top position at GE in 2001. Early attempts at changing the culture to meet Immelt’s ideas about what the corporate culture should look like were not very successful. It was during the financial crisis that he began to think differently. It seems as if his thinking followed three paths. First, get rid of the financial areas of the company because they were just a diversion to what needed to be done. Second, make GE into a company focused upon industrial goods. And, third, create a company that would tie the industrial goods to information technology so that the physical and the informational would all be of one package. The results of Immelt’s thinking are not impressive and did not position GE for company growth in the twenty-first century.

Any potential downsizing by Flannery will please investors who have viewed the digital foray as an expensive pet project of Immelt’s, but it’s sort of a weird thing to do if you still want to turn GE into a top-ten software company — as is the divestiture of the digital-facing Centricity health-care IT operations that GE is reportedly contemplating.  Perhaps a wholesale breakup of General Electric Co. isn’t such an improbable idea after all.

One argument against a breakup of GE was that it would detract from the breadth of expertise and resources that set the company apart in the push to make industrial machinery of all kinds run more efficiently. But now, GE’s approach to digital appears to be changing. Rather than trying to be everything for everyone, the company is refocusing digital marketing efforts on customers in its core businesses and deepening partnerships with tech giants including Microsoft Corp and Apple Inc. It hasn’t announced any financial backers yet, but that’s a possibility former CEO Jeff Immelt intimated before he departed. GE’s digital spending is a likely target of its cost-cutting push.

The company is unlikely to abandon digital altogether. Industrial customers have been trained to expect data-enhanced efficiency, and GE has to offer that to be competitive. As Flannery said at GE’s Minds and Machines conference last week, “A company that just builds machines will not survive.” But if all we’re ultimately talking about here is smarter equipment, as opposed to a whole new software ecosystem, GE doesn’t necessarily need a health-care, aviation and power business.

Creating four or five mini-GEs would likely mean tax penalties.  That’s not in and of itself a reason to maintain a portfolio that’s not working. If it was, GE wouldn’t also be contemplating a sale of its transportation division. But one of GE’s flaws in the minds of investors right now is its financial complexity, and there’s something to be said for a complete rethinking of the way it’s put together. For what it’s worth, the average of JPMorgan Chase & Co. analyst Steve Tusa’s sum-of-the-parts analyses points to a twenty-dollar ($20) valuation — almost in line with GE’s closing price of $20.79 on Friday. Whatever premium the whole company once commanded over the value of its parts has been significantly weakened.

Wall Street is torn on General Electric, the one-time favorite blue chip for long-term investors, which is now facing an identity crisis and possible dividend cut. Major research shops downgraded and upgraded the industrial company following its third-quarter earnings miss this past Friday. The firm’s September quarter profits were hit by restructuring costs and weak performance from its power and oil and gas businesses. It was the company’s first earnings report under CEO John Flannery, who replaced Jeff Immelt in August. Two firms reduced their ratings for General Electric shares due to concerns about dividend cuts at its Nov. 13 analyst meeting. The company has a 4.2 percent dividend yield. General Electric shares declined 6.3 percent Monday to close at $22.32 a share after the reports. The percentage drop is the largest for the stock in six years. Its shares are down twenty-five (25) percent year to date through Friday versus the S&P 500’s fifteen (15) percent return.

At the end of the day, it comes down to what kind of company GE wants to be. The financial realities of a breakup might be painful, but so would years’ worth of pain in its power business as weak demand and pricing pressures drive a decline to a new normal of lower profitability. Does it really matter, then, what the growth opportunities are in aviation and health care? As head of M&A at GE, Flannery was at least partly responsible for the Alstom SA acquisition that swelled the size of the now-troubled power unit inside GE. If there really are “no sacred cows,” he has a chance to rewrite that legacy.

CONCLUSIONS:

Times are changing and GE had better change with those times or the company faces significant additional difficulties.  Direction must be left to the board of directors, but it’s very obvious that accommodations to suit the present business climate are definitely in order.


At one time there were only two distinct branches of engineering: civil and military.

The word engineer was initially used in the context of warfare, dating back to 1325 when engine’er (literally, one who operates an engine) referred to “a constructor of military engines”.  In this context, “engine” referred to a military machine, i. e., a mechanical contraption used in war (for example, a catapult).

As the design of civilian structures such as bridges and buildings developed as a technical discipline, the term civil engineering entered the lexicon as a way to distinguish between those specializing in the construction of such non-military projects and those involved in the older discipline. As the prevalence of civil engineering outstripped engineering in a military context and the number of disciplines expanded, the original military meaning of the word “engineering” is now largely obsolete. In its place, the term “military engineering” has come to be used.

OK, so that’s how we got here.  If you follow my posts you know I primarily concentrate on STEM (science, technology, engineering and mathematics) professions.  Engineering is somewhat uppermost since I am a mechanical engineer.

There are many branches of the engineering profession: distinct areas of endeavor that attract individuals and capture their professional lives.  Several of these are as follows:

  • Electrical Engineering
  • Mechanical Engineering
  • Civil Engineering
  • Chemical Engineering
  • Biomedical Engineering
  • Engineering Physics
  • Nuclear Engineering
  • Petroleum Engineering
  • Materials Engineering

Of course, there are others but the one I wish to concentrate on with this post is the growing branch of engineering—Biomedical Engineering. Biomedical engineering, or bioengineering, is the application of engineering principles to the fields of biology and health care. Bioengineers work with doctors, therapists and researchers to develop systems, equipment and devices in order to solve clinical problems.  The scope of a bioengineer’s charge is broad, as the history and examples below show.

Biomedical engineering has evolved over the years in response to advancements in science and technology.  This is NOT a new classification for engineering involvement.  Engineers have been at this for a while.  Throughout history, humans have made increasingly more effective devices to diagnose and treat diseases and to alleviate, rehabilitate or compensate for disabilities or injuries. One example is the evolution of hearing aids to mitigate hearing loss through sound amplification. The ear trumpet, a large horn-shaped device that was held up to the ear, was the only “viable form” of hearing assistance until the mid-20th century, according to the Hearing Aid Museum. Electrical devices had been developed before then, but were slow to catch on, the museum said on its website.

The works of Alexander Graham Bell and Thomas Edison on sound transmission and amplification in the late 19th and early 20th centuries were applied to make the first tabletop hearing aids. These were followed by the first portable (or “luggable”) devices using vacuum-tube amplifiers powered by large batteries. However, the first wearable hearing aids had to await the development of the transistor by William Shockley and his team at Bell Laboratories. Subsequent development of micro-integrated circuits and advanced battery technology has led to miniature hearing aids that fit entirely within the ear canal.

Let’s take a very quick look at several devices designed by biomedical engineering personnel.

MAGNETIC RESONANCE IMAGING:

POSITRON EMISSION TOMOGRAPHY (PET) SCAN:

NOTE: PET scans represent a different technology relative to MRIs. The scan uses a special dye that has radioactive tracers. These tracers are injected into a vein in your arm. Your organs and tissues then absorb the tracer.

BLOOD CHEMISTRY MONITORING EQUIPMENT:

ELECTROCARDIOGRAM MONITORING DEVICE (EKG):

INSULIN PUMP:

COLONOSCOPY:

THE PROFESSION:

Biomedical engineers design and develop medical systems, equipment and devices. According to the U.S. Bureau of Labor Statistics (BLS), this requires in-depth knowledge of the operational principles of the equipment (electronic, mechanical, biological, etc.) as well as knowledge about the application for which it is to be used. For instance, in order to design an artificial heart, an engineer must have extensive knowledge of electrical engineering, mechanical engineering and fluid dynamics as well as an in-depth understanding of cardiology and physiology. Designing a lab-on-a-chip requires knowledge of electronics, nanotechnology, materials science and biochemistry. In order to design prosthetic replacement limbs, expertise in mechanical engineering and material properties as well as biomechanics and physiology is essential.

The critical skills needed by a biomedical engineer include a well-rounded understanding of several areas of engineering as well as the specific area of application. This could include studying physiology, organic chemistry, biomechanics or computer science. Continuing education and training are also necessary to keep up with technological advances and potential new applications.

SCHOOLS OFFERING BIO-ENGINEERING:

If we take a look at the top schools offering Biomedical engineering, we see the following:

  • MIT
  • Stanford
  • University of California-San Diego
  • Rice University
  • University of California-Berkeley
  • University of Pennsylvania
  • University of Michigan—Ann Arbor
  • Georgia Tech
  • Johns Hopkins
  • Duke University

As you can see, these are among the most prestigious schools in the United States.  They have had established engineering programs for decades.  Bio-engineering does not represent a new discipline for them.  There are several others and I would definitely recommend you go online to take a look if you are interested in seeing a complete list of colleges and universities offering a four (4) or five (5) year degree.

SALARY LEVELS:

The median annual wage for biomedical engineers was $86,950 in May 2014. The median wage is the wage at which half the workers in an occupation earned more than that amount and half earned less. The lowest ten (10) percent earned less than $52,680, and the highest ten (10) percent earned more than $139,350.  As you might expect, salary levels vary depending upon several factors:

  • Years of experience
  • Location within the United States
  • Size of company
  • Research facility and corporate structure
  • Bonus or profit sharing arrangement of company

EXPECTATIONS FOR EMPLOYMENT:

In their list of top jobs for 2015, CNNMoney classified Biomedical Engineering as the 37th best job in the US, and of the jobs in the top 37, Biomedical Engineering’s 10-year job growth was the third highest (27%), behind Information Assurance Analyst (37%) and Product Analyst (32%). CNN previously reported Biomedical Engineer as the top job in the US in 2012, with a predicted 10-year growth rate of nearly 62%. ‘Biomedical Engineer’ was also listed as a high-paying, low-stress job by Time magazine.  There is absolutely no doubt that medical technology will advance as time goes on, so biomedical engineers will continue to be in demand.

As always, I welcome your comments.

NANOMATERIALS

May 13, 2016


In recent months there has been considerable information regarding nanomaterials and how those materials are providing significant breakthroughs in R&D.  Let’s first define a nanomaterial.

DEFINITION:

“Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 1000 nanometers (1 nm = 10⁻⁹ meter), but is usually 1 to 100 nm (the usual definition of nanoscale).”

Obviously microscopic in nature but extremely effective when applied properly to a process.  Further descriptions are as follows:

A description of a nanomaterial must include the average particle size, allowing for aggregation or clumping of the individual particles, and the particle number size distribution (the range from the smallest to the largest particle present in the preparation).

Detailed assessments may include the following:

  1. Physical properties:
  • Size, shape, specific surface area, and ratio of width and height
  • Whether they stick together
  • Number size distribution
  • How smooth or bumpy their surface is
  • Structure, including crystal structure and any crystal defects
  • How well they dissolve
  2. Chemical properties:
  • Molecular structure
  • Composition, including purity, and known impurities or additives
  • Whether it is held in a solid, liquid or gas
  • Surface chemistry
  • Attraction to water molecules or oils and fats

A number of techniques for tracking nanoparticles exist with an ever-increasing number under development. Realistic ways of preparing nanomaterials for test of their possible effects on biological systems are also being developed.

There are naturally occurring nanoparticles, such as volcanic ash and soot from forest fires, as well as the incidental byproducts of combustion processes (e.g., welding, diesel engines).  These are usually physically and chemically heterogeneous and often termed ultrafine particles. Engineered nanoparticles are intentionally produced and designed with very specific properties relative to shape, size, surface properties and chemistry. These properties are reflected in aerosols, colloids, or powders. Often, the behavior of nanomaterials may depend more on surface area than on particle composition itself. Relative surface area is one of the principal factors that enhance a nanomaterial’s reactivity, strength and electrical properties.
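The surface-area argument is easy to quantify. For a sphere, the surface-area-to-volume ratio is 3/r, so shrinking a particle by a factor of 1,000 multiplies its relative surface area by the same factor. The snippet below is a simple illustration of my own, not a figure taken from the sources above.

```python
import math

# For a sphere, surface area / volume = 3 / r, so the ratio grows as
# particles shrink; this is why nanoscale materials are so reactive.
def surface_to_volume_ratio(radius_nm):
    area = 4 * math.pi * radius_nm ** 2
    volume = (4.0 / 3.0) * math.pi * radius_nm ** 3
    return area / volume          # equals 3 / radius_nm

for r in (1000.0, 100.0, 10.0, 1.0):          # radii in nanometers
    print(f"r = {r:7.1f} nm  ->  SA/V = {surface_to_volume_ratio(r):.3f} per nm")
# A 1 nm particle has 1,000x the surface area per unit volume of a 1 um particle.
```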

Engineered nanoparticles may be bought from commercial vendors or generated via experimental procedures by researchers in the laboratory (e.g., CNTs can be produced by laser ablation, HiPCO or high-pressure carbon monoxide, arc discharge, and chemical vapor deposition (CVD)). Examples of engineered nanomaterials include: carbon buckyballs or fullerenes; carbon nanotubes; metal or metal oxide nanoparticles (e.g., gold, titanium dioxide); and quantum dots, among many others.

Nanotube

The digital photograph above shows a nanotube, which is a member of the fullerene structural family. (NOTE: A fullerene is a molecule of carbon in the form of a hollow sphere, ellipsoid, tube, and many other shapes. Spherical fullerenes are also called Buckminsterfullerenes or buckyballs, which resemble the balls used in soccer. Cylindrical fullerenes are called carbon nanotubes or buckytubes. Fullerenes are similar in structure to graphite, which is composed of stacked graphene sheets of linked hexagonal rings.) Their name is derived from their long, hollow structure with walls formed by one-atom-thick sheets of carbon, called graphene. These sheets are rolled at specific and discrete angles where the combination of the rolling angle and radius defines the nanotube properties; for example, whether the individual nanotube shell is a metal or semiconductor.  Nanotubes are categorized as single-walled nanotubes (SWNTs) or multi-walled nanotubes (MWNTs). Individual nanotubes naturally align themselves into “ropes” held together by van der Waals forces, more specifically, pi-stacking.
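The “rolling angle” point can be made concrete with the standard (n, m) roll-up indices used to describe single-walled nanotubes. The rule sketched below is the textbook one, not something taken from this post: a tube is metallic when n minus m is divisible by 3, and semiconducting otherwise.

```python
import math

# Standard chirality rule for single-walled carbon nanotubes. Diameter
# follows from the graphene lattice constant a ~= 0.246 nm.
A_NM = 0.246

def describe_nanotube(n, m):
    diameter = (A_NM / math.pi) * math.sqrt(n * n + n * m + m * m)
    kind = "metallic" if (n - m) % 3 == 0 else "semiconducting"
    return f"({n},{m}): diameter = {diameter:.2f} nm, {kind}"

for indices in [(10, 10), (17, 0), (13, 6)]:
    print(describe_nanotube(*indices))
```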

The JPEG below shows a nanoplate material.

NANOPLATE

Nanoplating uses nanometer-scale materials, combining them in engineered and industrial coating processes to incorporate new and improved features in the finished product.

USES OF NANOTECHNOLOGY:

Let’s look at today’s uses for nanotechnology so you can get a good picture as to where the field is going.

  • Stain-repellent Eddie Bauer Nano-Care™ khakis, with surface fibers of 10 to 100 nanometers, use a process that coats each fiber of fabric with “nano-whiskers.” Developed by Nano-Tex, a Burlington Industries subsidiary. Dockers also makes khakis, a dress shirt and even a tie treated with what they call “Stain Defender”, another example of the same nanoscale cloth treatment.
    Impact: Dry cleaners, detergent and stain-removal makers, carpet and furniture makers, window covering maker.
  • BASF’s annual sales of aqueous polymer dispersion products amount to around $1.65 billion. All of them contain polymer particles ranging from ten to several hundred nanometers in size. Polymer dispersions are found in exterior paints, coatings and adhesives, or are used in the finishing of paper, textiles and leather. Nanotechnology also has applications in the food sector. Many vitamins and their precursors, such as carotenoids, are insoluble in water. However, when skillfully produced and formulated as nanoparticles, these substances can easily be mixed with cold water, and their bioavailability in the human body also increases. Many lemonades and fruit juices contain these specially formulated additives, which often also provide an attractive color. In the cosmetics sector, BASF has for several years been among the leading suppliers of UV absorbers based on nanoparticulate zinc oxide. Incorporated in sun creams, the small particles filter the high-energy radiation out of sunlight. Because of their tiny size, they remain invisible to the naked eye and so the cream is transparent on the skin.
  • Sunscreens are utilizing nanoparticles that are extremely effective at absorbing light, especially in the ultra-violet (UV) range. Due to the particle size, they spread more easily, cover better, and save money since you use less. And they are transparent, unlike traditional screens which are white. These sunscreens are so successful that by 2001 they had captured 60% of the Australian sunscreen market.  Impact: Makers of sunscreen have to convert to using nanoparticles. And other product manufacturers, like packaging makers, will find ways to incorporate them into packages to reduce UV exposure and subsequent spoilage. The $480B packaging and $300B plastics industries will be directly affected.
  • Using aluminum nanoparticles, Argonide has created rocket propellants that burn at double the rate. They also produce copper nanoparticles that are incorporated into automotive lubricant to reduce engine wear.
  • AngstroMedica has produced a nanoparticulate-based synthetic bone. “Human bone is made of a calcium and phosphate composite called Hydroxyapatite. By manipulating calcium and phosphate at the molecular level, we have created a patented material that is identical in structure and composition to natural bone. This novel synthetic bone can be used in areas where the natural bone is damaged or removed, such as in the treatment of fractures and soft tissue injuries.”
  • Nanodyne makes a tungsten-carbide-cobalt composite powder (grain size less than 15nm) that is used to make a sintered alloy as hard as diamond, which is in turn used to make cutting tools, drill bits, armor plate, and jet engine parts.
    Impact: Every industry that makes parts or components whose properties must include hardness and durability.
  • Wilson Double Core tennis balls have a nanocomposite coating that keeps them bouncing twice as long as an old-style ball. Made by InMat LLC, this nanocomposite is a mix of butyl rubber, intermingled with nanoclay particles, giving the ball substantially longer shelf life. Impact: Tires are the next logical extension of this technology: it would make them lighter (better mileage) and longer lasting (better cost performance).
  • Applied Nanotech recently demonstrated a 14″ monochrome display based on electron emission from carbon nanotubes.  Impact: Once the process is perfected, costs will go down, and the high-end market will start being filled. Shortly thereafter, and hand-in-hand with the predictable drop in price of CNTs, production economies-of-scale will enable the costs to drop further still, at which time we will see nanotube-based screens in use everywhere CRTs and view screens are used today.
  • China’s largest coal company (Shenhua Group) has licensed technology from Hydrocarbon Technologies that will enable it to liquefy coal and turn it into gas. The process uses a gel-based nanoscale catalyst, which improves the efficiency and reduces the cost.  Impact: “If the technology lives up to its promise and can economically transform coal into diesel fuel and gasoline, coal-rich countries such as the U.S., China and Germany could depend far less on imported oil. At the same time, acid-rain pollution would be reduced because the liquefaction strips coal of harmful sulfur.”

CONCLUSION:

I’m sure the audience I attract will get the significance of nanotechnology and its existing uses in today’s commercial markets.  This is a growing technology and one in which significant R&D effort is being applied.  I think the words are “STAND BY”; there is more to come in the immediate future.

 

THE WORLD’S BEST

October 3, 2015


Data for each university was taken from Wikipedia.  I checked the information for each school for accuracy and found Wikipedia to be correct in every case.

USA Today recently published an article from the London-based “Times Higher Education World University Rankings”.  This organization was founded in 2004 for the sole purpose of evaluating universities across the world.  Evaluations are accomplished using the following areas of university life:

  • Teaching ability and qualification of individual teachers
  • International outlook
  • Reputation of university
  • Research initiatives
  • Student-staff ratios
  • Income from industries
  • Female-male ratios
  • Quality of student body
  • Citations

There were thirteen (13) performance criteria in the total evaluation.  The nine (9) above give an indication as to the depth of the investigation. Eight hundred (800) universities from seventy (70) countries were evaluated.  This year, there were only sixty-three (63) out of two hundred (200) schools that made the “best in the world” list. Let’s take a look at the top fifteen (15).  These are in order.

  1. California Institute of Technology–The California Institute of Technology, or Caltech, is a private research university located in Pasadena, California, United States. The school was founded as a preparatory and vocational institution by Amos G. Throop in 1891. Even from the early years, the college attracted influential scientists such as George Ellery Hale, Arthur Amos Noyes, and Robert Andrews Millikan. The vocational and preparatory schools were disbanded and spun off in 1910, and the college assumed its present name in 1921. In 1934, Caltech was elected to the Association of American Universities, and the antecedents of NASA‘s Jet Propulsion Laboratory, which Caltech continues to manage and operate, were established between 1936 and 1943 under Theodore von Kármán. The university is one among a small group of Institutes of Technology in the United States which tends to be primarily devoted to the instruction of technical arts and applied sciences.
  2. Oxford University–The University of Oxford (informally Oxford University or simply Oxford) is a collegiate research university located in Oxford, England. While having no known date of foundation, there is evidence of teaching as far back as 1096, making it the oldest university in the English-speaking world and the world’s second-oldest surviving university. It grew rapidly from 1167, when Henry II banned English students from attending the University of Paris. After disputes between students and Oxford townsfolk in 1209, some academics fled northeast to Cambridge, where they established what became the University of Cambridge. The two “ancient universities” are frequently jointly referred to as “Oxbridge“.
  3. Stanford University–Stanford University (officially Leland Stanford Junior University) is a private research university in Stanford, California. It is definitely one of the world’s most prestigious institutions, with the top position in numerous rankings and measures in the United States. Stanford was founded in 1885 by Leland Stanford, former Governor of and U.S. Senator from California. Mr. Stanford was a railroad tycoon. He and his wife, Jane Lathrop Stanford, started the school in memory of their only child, Leland Stanford, Jr., who had died of typhoid fever at age 15 the previous year. Stanford was opened on October 1, 1891 as a coeducational and non-denominational institution. Tuition was free until 1920. The university struggled financially after Leland Stanford’s 1893 death and after much of the campus was damaged by the 1906 San Francisco earthquake. Following World War II, Provost Frederick Terman supported faculty and graduates’ entrepreneurialism to build self-sufficient local industry in what would later be known as Silicon Valley. By 1970, Stanford was home to a linear accelerator, and was one of the original four ARPANET nodes (precursor to the Internet).
  4. Cambridge University–The University of Cambridge (abbreviated as Cantab in post-nominal letters, sometimes referred to as Cambridge University) is a collegiate public research university in Cambridge, England. Founded in 1209, Cambridge is the second-oldest university in the English-speaking world and the world’s fourth-oldest surviving university. It grew out of an association of scholars who left the University of Oxford after a dispute with townsfolk. The two ancient universities share many common features and are often jointly referred to as “Oxbridge“.
  5. Massachusetts Institute of Technology–The Massachusetts Institute of Technology (MIT) is a private research university in Cambridge, Massachusetts. Founded in 1861 in response to the increasing industrialization of the United States, MIT adopted a European polytechnic university model and stressed laboratory instruction in applied science and engineering. Researchers worked on computers, radar, and inertial guidance during World War II and the Cold War. Post-war defense research contributed to the rapid expansion of the faculty and campus. The current 168-acre campus opened in 1916 and now covers over one (1) mile along the northern bank of the Charles River basin.
  6. Harvard University–Harvard University is a private Ivy League research university in Cambridge, Massachusetts and was established in 1636. Its history, influence and wealth have made it one of the most prestigious universities in the world. Established originally by the Massachusetts legislature and soon thereafter named for John Harvard, its first benefactor, Harvard is the oldest institution of higher learning in the United States. The Harvard Corporation (formally, the President and Fellows of Harvard College) is its first chartered corporation. Although never formally affiliated with any denomination, the early College primarily trained Congregationalist and Unitarian clergy. Its curriculum and student body were gradually secularized during the 18th century, and by the 19th century Harvard had emerged as the central cultural establishment among Boston elites. Following the American Civil War, President Charles W. Eliot‘s long tenure (1869–1909) transformed the college and affiliated professional schools into a modern research university; Harvard was a founding member of the Association of American Universities in 1900. James Bryant Conant led the university through the Great Depression and World War II and began to reform the curriculum and liberalize admissions after the war. The undergraduate college became coeducational after its 1977 merger with Radcliffe College.
  7. Princeton University–Princeton University is a private Ivy League research university in Princeton, New Jersey. It was founded in 1746 as the College of New Jersey. Princeton was the fourth chartered institution of higher education in the Thirteen Colonies and thus one of the nine Colleges established before the American Revolution. The institution moved to Newark in 1747, then to the current site nine years later, where it was renamed Princeton University in 1896.
  8. Imperial College of London— Imperial College London is a public research university located in London, United Kingdom. The Imperial College of Science and Technology was founded in 1907, as a constituent college of the federal University of London, by merging the City and Guilds College, the Royal School of Mines and the Royal College of Science. The college grew through mergers, including with St Mary’s Hospital Medical School, Charing Cross and Westminster Medical School, the Royal Postgraduate Medical School and the National Heart and Lung Institute, to be known as The Imperial College of Science, Technology and Medicine. The college established the Imperial College Business School in 2005, thus covering subjects in science, engineering, medicine and business. Imperial College London became an independent university in 2007 during its centennial celebration.
  9. ETH Zurich— ETH Zürich (Swiss Federal Institute of Technology in Zurich; German: Eidgenössische Technische Hochschule Zürich) is an engineering, science, technology, mathematics and management university in the city of Zürich, Switzerland. Like its sister institution EPFL, it is an integral part of the Swiss Federal Institutes of Technology Domain (ETH Domain) that is directly subordinate to Switzerland’s Federal Department of Economic Affairs, Education and Research.
  10. University of Chicago— The University of Chicago (U of C, Chicago, or U Chicago) is a private research university in Chicago, Illinois. Established in 1890, the University of Chicago consists of The College, various graduate programs, interdisciplinary committees organized into four academic research divisions and seven professional schools. Beyond the arts and sciences, Chicago is also well known for its professional schools, which include the Pritzker School of Medicine, the University of Chicago Booth School of Business, the Law School, the School of Social Service Administration, the Harris School of Public Policy Studies, the Graham School of Continuing Liberal and Professional Studies and the Divinity School. The university currently enrolls approximately 5,000 students in the College and around 15,000 students overall.
  11. Johns Hopkins— The Johns Hopkins University (commonly referred to as Johns Hopkins, JHU, or simply Hopkins) is a private research university in Baltimore, Maryland. Founded in 1876, the university was named after its first benefactor, the American entrepreneur, abolitionist, and philanthropist Johns Hopkins. His $7 million bequest—of which half financed the establishment of The Johns Hopkins Hospital—was the largest philanthropic gift in the history of the United States at the time. Daniel Coit Gilman, who was inaugurated as the institution’s first president on February 22, 1876, led the university to revolutionize higher education in the U.S. by integrating teaching and research.
  12. Yale University— Yale University is a private Ivy League research university in New Haven, Connecticut. Founded in 1701 in Saybrook Colony as the Collegiate School, the University is the third-oldest institution of higher education in the United States. In 1718, the school was renamed Yale College in recognition of a gift from Elihu Yale, a governor of the British East India Company, and in 1731 received a further gift of land and slaves from Bishop Berkeley. Established to train Congregationalist ministers in theology and sacred languages, by 1777 the school’s curriculum began to incorporate humanities and sciences, and in the 19th century it gradually incorporated graduate and professional instruction, awarding the first Ph.D. in the United States in 1861 and organizing as a university in 1887.
  13. University of California Berkeley— The University of California, Berkeley (also referred to as Berkeley, UC Berkeley, California or simply Cal) is a public research university located in Berkeley, California. It is the flagship campus of the University of California system, one of three parts in the state’s public higher education plan, which also includes the California State University system and the California Community Colleges System.
  14. University College of London— University College London (UCL) is a public research university in London, England and a constituent college of the federal University of London. Recognized as one of the leading multidisciplinary research universities in the world, UCL is the largest higher education institution in London and the largest postgraduate institution in the UK by enrollment.  Founded in 1826 as London University, UCL was the first university institution established in London and the earliest in England to be entirely secular, to admit students regardless of their religion and to admit women on equal terms with men. The philosopher Jeremy Bentham is commonly regarded as the spiritual father of UCL, as his radical ideas on education and society were the inspiration to its founders, although his direct involvement in its foundation was limited. UCL became one of the two founding colleges of the University of London in 1836. It has grown through mergers, including with the Institute of Neurology (in 1997), the Eastman Dental Institute (in 1999), the School of Slavonic and East European Studies (in 1999), the School of Pharmacy (in 2012) and the Institute of Education (in 2014).
  15. Columbia University— Columbia University (officially Columbia University in the City of New York) is a private Ivy League research university in Upper Manhattan, New York City. Originally established in 1754 as King’s College by royal charter of George II of Great Britain, it is the oldest institution of higher learning in New York State, as well as one of the country’s nine colonial colleges. After the revolutionary war, King’s College briefly became a state entity, and was renamed Columbia College in 1784. A 1787 charter placed the institution under a private board of trustees before it was further renamed Columbia University in 1896, when the campus was moved from Madison Avenue to its current location in Morningside Heights, occupying land of 32 acres (13 ha). Columbia is one of the fourteen founding members of the Association of American Universities, and was the first school in the United States to grant the M.D. degree.

 

As you can see, individuals in leadership positions across the world consider formal education to be one of the great assets to an individual, a country and our species in general.  Higher education can, though not always, drive us to discover, invent, and commercialize technology that advances our way of life and promotes health.  The entire university experience is remarkably beneficial to an individual’s understanding of the world and world events.

It is very safe to assume the faculty of each school is top-notch and attending students are serious over-achievers. (Then again, maybe not.)  I would invite your attention to the web site listing the two hundred schools considered—the top two hundred.  Maybe your school is on the list.  As always, I invite your comments.

HAPPY BIRTHDAY NASA

February 13, 2015


References for this post are taken from NASA Tech Briefs, Vol 39, No 2, February 2015.

In 1915, the National Advisory Committee for Aeronautics (NACA) was formed by our Federal government.  March 3, 2015 marks the 100th anniversary of that event.  The NACA was created by Congress over concerns the U.S. was losing its edge in aviation technology to Europe.  WWI was raging at that time, and advances in aeronautics were at the forefront of the European efforts to win the war using “heavier than air” craft to pound the enemy.  The purpose of NACA was to “supervise and direct the scientific study of the problems of flight with a view to their practical solution.” State-of-the-art laboratories were constructed in Virginia, California and Ohio that led to fundamental advances in aeronautics enabling victory in WW II. Those efforts also supported national security efforts during the Cold War era with Russia.  The DNA of the entire aircraft industry is infused with technology resulting from NACA and NASA research and development efforts.

HUMBLE BEGINNINGS

NACA was formed by employing twelve (12) unpaid individuals with an annual budget of $5,000.00.  Over the course of forty-three (43) years, the agency made fundamental breakthroughs in aeronautical technology in ways affecting the manner in which airplanes and spacecraft are designed, built, tested and flown today.  NACA’s early successes are as follows:

  • Cowling to improve the cooling of radial engines thereby reducing drag.
  • Wind tunnel testing simulating air density at different altitudes, which engineers used to design and test dozens of wing cross-sections.
  • Wind tunnel with slots in walls that allowed researchers to take measurements of aerodynamic forces at supersonic speeds.
  • Design principles involving the shape of an aircraft’s wing in relation to the rest of the airplane to reduce drag and allow supersonic flight.
  • Distribution of reports and studies to aircraft manufacturers allowing designs benefiting from R & D efforts.
  • Development of airfoil and propeller shapes which simplified aircraft design. These shapes eventually were incorporated into aircraft such as the P-51 Mustang.
  • Research and wind tunnel testing led to the adoption of the “coke-bottle” design that still influences our supersonic military aircraft of today.

As a result of NACA efforts, flight tests were initiated on the first supersonic experimental airplane, the X-1.  This aircraft was flown by Captain Chuck Yeager and paved the way for further research into supersonic aircraft leading to the development of swept-wing configurations.

After the Soviet Union launched Sputnik 1 in 1957, the world’s first artificial satellite, Congress responded to the nation’s fear of falling behind by passing the National Aeronautics and Space Act of 1958.  NASA was born.  The new agency, proposed by President Eisenhower, would be responsible for civilian human, satellite, and robotic space programs as well as aeronautical research and development. NACA was absorbed into the NASA framework.

ACHIEVEMENTS:

Looking at the achievements of NASA from that period of time, we see the following milestones:

  • 1959—Selection of seven (7) astronauts for Project Mercury.
  • 1960–Formation of NASA’s Marshall Space Flight Center with Dr. Wernher von Braun as director.
  • 1961—President Kennedy structured a commitment to land a man on the moon.
  • 1962—John Glenn became the first American to circle the Earth in Friendship 7.
  • 1965—Gemini IV stayed aloft four (4) days during which time Edward H. White II performed the first space walk.
  • 1968—James A. Lovell Jr., William A. Anders, and Frank Borman flew the historic mission to circle the moon.
  • 1969—The first lunar landing.

Remarkable achievements that absolutely captured the imagination of most Americans.  It is extremely unfortunate that our nearsighted Federal government has chosen to reduce NASA funding and eliminate many of the manned programs and hardware previously on the “books”. We have seemingly altered course, at least relative to manned space travel.  Let’s hope we can get back on track in future years.
