INFLUENCERS

June 6, 2020


Some of the most remarkably written articles are found in the publication “Building Design + Construction”.  This monthly magazine covers architecture, engineering and construction (AEC) and describes building projects and designs around the world.  Many projects underway are re-constructions and/or refurbishments of existing structures, i.e. schools, churches, office buildings, etc.  The point I’m trying to make is that the writing is superb, innovative and certainly relevant.  The April edition featured INFLUENCERS.

If you investigate websites, you will find an ever-increasing number of articles related to influencer marketing.  Influencer marketing is becoming, or I should say already is, a significant factor in a person choosing one product over another.  One of our granddaughters is an influencer and her job is fascinating.  Let’s look.

DEFINITION:

  • the power to affect the purchasing decisions of others because of his or her authority, knowledge, position, or relationship with his or her audience.
  • a following in a distinct niche, with whom he or she actively engages. The size of the following depends on the size of the niche.

CLASSIFICATIONS:

There are various classifications depending upon circumstances.  Those are given below.

Mega-Influencers—Mega-influencers are the people with a vast number of followers on their social networks. Facebook, Instagram, Twitter, Snapchat, YouTube, etc. are the social instruments upon which influencers ply their trade.  Although there are no fixed rules on the boundaries between the different types of influencers, a common view is that mega-influencers have more than 1 million followers on at least one social platform.  President Donald Trump, Kim Kardashian, Hillary Clinton and of course several others may be classified as mega-influencers.

Macro-Influencers—Macro-influencers are one step down from the mega-influencers, and they may be more accessible as influencer marketers. You would consider people with followers in the range between 40,000 and one million on a social network to be macro-influencers.
This group tends to consist of two types of people: B-grade celebrities who haven’t yet made it to the big time, and successful online experts who have built up more significant followings than the typical micro-influencers. The latter type of macro-influencer is likely to be more useful for firms engaging in influencer marketing.

Micro-Influencers—Micro-influencers are ordinary, everyday people who have become known for their knowledge about some specialist niche. As such, they have usually gained a sizable social media following amongst devotees of that niche. Of course, it is not just the number of followers that indicates a level of influence; it is the relationship and interaction that a micro-influencer has with his or her followers.

Nano-Influencers—The newest influencer type to gain recognition is the nano-influencer. These people only have a small number of followers, but they tend to be experts in an obscure or highly specialized field. You can think of nano-influencers as the proverbial big fish in a small pond. In many cases, they have fewer than one thousand (1,000) followers, but those followers will be keen and interested, willing to engage with the nano-influencer and listen to his/her opinions.

If we look further, we can “drill down” to the various internet platforms hosting the influencers’ content.

Bloggers— Bloggers and influencers in social media have the most authentic and active relationships with their fans.  Brands are now recognizing and encouraging this.  Blogging has been connected to influencer marketing for some time now.  There are many highly influential blogs on the internet.  If a popular blogger positively mentions your product in a post, it can lead to the blogger’s supporters wanting to try out the specific product.

YouTubers—Rather than each video maker having their own site, most create a channel on YouTube.  Brands often align with popular YouTube content creators.

Podcasts— Podcasting is a relatively recent form of online content that is rapidly growing in popularity.  It has made quite a few household names, possibly best epitomized by John Lee Dumas of Entrepreneurs on Fire.  If you have not yet had the opportunity to enjoy podcasts, Digital Trends has put together a comprehensive list of the best podcasts of 2019.  Our youngest son has a podcast called CalmCash.  He does a great job and is remarkably creative.

Social Posts Only— The vast majority of influencers now make their name on social media.  While you will find influencers on all leading social channels, the standout network in recent years has been Instagram, where many influencers craft their posts around various stunning images.   

Now, if we go back to “Building Design + Construction”, they interviewed five influencers who apply their skills to the AEC profession.  I will give you, through their comments, the thrust of their efforts:

CHRISTINE WILLIAMSON— “My goal is to help teach architects about building science and construction.  I want to show how the “AEC” parts fit together.”

BOB BORSON—He is the cohost of the Life of an Architect podcast, which gets about two hundred and sixty (260) downloads per day.  He would be a nano-influencer.  “‘Influencer’ is a ridiculous word.  If you have to tell people you’re an influencer, you’re not.”  His words only.

AMY BAKER—Launched her Instagram account in 2018 and is the host for SpecFunFacts.  She discusses specifications and contracts and has around one thousand (1,000) followers.

CATHERINE MENG—Ms. Meng is the host of the Design Voice podcast.

MATT RISINGER—Mr. Risinger hosts “Buildshownetwork”.   He first published Matt Risinger’s Green Building blog in 2006.  This was the manner in which he publicized his new homebuilding company in Austin, Texas.   To date, he has seven hundred (700) plus videos on YouTube.  Right now, he has six hundred thousand (600,000) subscribers.

CONCLUSIONS:  From the above descriptions and the five individual influencers detailed in the AEC magazine, you can get some idea as to how influencers ply their trade and support design and building endeavors.  Hope you enjoyed this one.


Okay, there will be a test after you read this post.  Here we go.  Do you know these people?

  • Beyoncé
  • Jennifer Lopez
  • Mariah Carey
  • Lady Gaga
  • Ariana Grande
  • Katy Perry
  • Miley Cyrus
  • Karen Uhlenbeck

Don’t feel bad.  I didn’t know either.  This is Karen Uhlenbeck—the mathematician we do not know.  For some unknown reason we all (even me) know the “pop” stars by name: who their significant other or others are, their children, their latest hit single, who they recently “dumped”, where they vacationed, etc., etc.  We know this. I would propose that the lady whose picture is shown below has contributed more to humankind than all the individuals listed above.  Then again, that’s just me.

For the first time, one of the top prizes in mathematics has been given to a woman.  I find this hard to believe because we all know that “girls” can’t do math.  Your mamas told you that and you remembered it.  (I suppose Dr. Uhlenbeck’s mom was doing her nails and forgot to mention that to her.)

This past Tuesday, the Norwegian Academy of Science and Letters announced it has awarded this year’s Abel Prize — an award modeled on the Nobel Prizes — to Karen Uhlenbeck, an emeritus professor at the University of Texas at Austin. The award cites “the fundamental impact of her work on analysis, geometry and mathematical physics.”   Uhlenbeck won for her foundational work in geometric analysis, which combines the technical power of analysis—a branch of math that extends and generalizes calculus—with the more conceptual areas of geometry and topology. She is the first woman to receive the prize, which carries six (6) million Norwegian kroner (approximately $700,000), since it was first given in 2003.

One of Dr. Uhlenbeck’s advances in essence described the complex shapes of soap films not in a bubble bath but in abstract, high-dimensional curved spaces. In later work, she helped put a rigorous mathematical underpinning to techniques widely used by physicists in quantum field theory to describe fundamental interactions between particles and forces. (How many think Beyoncé could do that?)

In the process, she helped pioneer a field known as geometric analysis, and she developed techniques now commonly used by many mathematicians. As a matter of fact, she invented the field.

“She did things nobody thought about doing,” said Sun-Yung Alice Chang, a mathematician at Princeton University who served on the five-member prize committee, “and after she did, she laid the foundations for that branch of mathematics.”

An example of objects studied in geometric analysis is a minimal surface. Analogous to a geodesic, a curve that minimizes path length, a minimal surface minimizes area; think of a soap film, a minimal surface that minimizes energy. Analysis focuses on the differential equations governing variations of surface area, whereas geometry and topology focus on the minimal surface representing a solution to the equations. Geometric analysis weaves together both approaches, resulting in new insights.
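
For the mathematically inclined, the central objects can be written down compactly. What follows is my own summary in standard notation, not language from the prize citation: for a surface described as a graph z = u(x, y) over a region Ω, the area functional and the minimal surface equation it leads to are

    A(u) = \iint_\Omega \sqrt{1 + |\nabla u|^2} \, dx \, dy,
    \qquad
    \nabla \cdot \left( \frac{\nabla u}{\sqrt{1 + |\nabla u|^2}} \right) = 0.

Analysis studies the first expression and its variations; the second, the Euler-Lagrange equation of the first, is the equation a soap film satisfies, and it is the geometric object that geometry and topology then interpret.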

The field did not exist when Uhlenbeck began graduate school in the mid-1960s, but tantalizing results linking analysis and topology had begun to emerge. In the early 1980s, Uhlenbeck and her collaborators did ground-breaking work in minimal surfaces. They showed how to deal with singular points, that is, points where the minimal surface is no longer smooth or where the solution to the equations is not defined. They proved that there are only finitely many singular points and showed how to study them by expanding them into “bubbles.” As a technique, bubbling made a deep impact and is now a standard tool.

Born in 1942 to an engineer and an artist, Uhlenbeck is a mountain-loving hiker who learned to surf at the age of forty (40). As a child she was a voracious reader and “was interested in everything,” she said in an interview last year with Celebratio.org. “I was always tense, wanting to know what was going on and asking questions.”

She initially majored in physics as an undergraduate at the University of Michigan. But her impatience with lab work and a growing love for math led her to switch majors. She nevertheless retained a lifelong passion for physics, and centered much of her research on problems from that field.  In physics, a gauge theory is a kind of field theory, formulated in the language of the geometry of fiber bundles; the simplest example is electromagnetism. One of the most important gauge theories from the 20th century is Yang-Mills theory, which underlies the standard model of elementary particle physics. Uhlenbeck and other mathematicians began to realize that the Yang-Mills equations have deep connections to problems in geometry and topology. By the early 1980s, she laid the analytic foundations for mathematical investigation of the Yang-Mills equations.
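
For readers who want to see the objects involved, here they are in modern differential-geometric shorthand (again my own summary, using standard conventions, not wording from the article): a connection A on a bundle has curvature F_A, and the Yang-Mills equations single out special connections:

    F_A = dA + A \wedge A, \qquad d_A \ast F_A = 0,

where ∗ is the Hodge star and d_A the covariant exterior derivative. Solutions are critical points of the Yang-Mills energy ∫|F_A|², and Uhlenbeck's analytic results are what made a rigorous study of this equation possible.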

Dr. Uhlenbeck, who lives in Princeton, N.J., learned that she won the prize on Sunday morning.

“When I came out of church, I noticed that I had a text message from Alice Chang that said, Would I please accept a call from Norway?” Dr. Uhlenbeck said. “When I got home, I called Norway back and they told me.”

Who said women can’t do math?

SMARTS

March 17, 2019


Who was the smartest person in the history of our species? Solomon, Albert Einstein, Jesus, Nikola Tesla, Isaac Newton, Leonardo da Vinci, Stephen Hawking—who would you name?  We’ve had several individuals who broke the curve relative to intelligence.   As defined by the Oxford Dictionary of the English Language, IQ is:

“an intelligence test score that is obtained by dividing mental age, which reflects the age-graded level of performance as derived from population norms, by chronological age and multiplying by 100: a score of 100 thus indicates performance at exactly the normal level for that age group. Abbreviation: IQ”

An intelligence quotient or IQ is a score derived from one of several different intelligence measures.  Standardized tests are designed to measure intelligence.  The term “IQ” is a translation of the German Intelligenz-Quotient and was coined by the German psychologist William Stern in 1912.  This was a method proposed by Dr. Stern to score early modern children’s intelligence tests such as those developed by Alfred Binet and Théodore Simon in the early twentieth century.  Although the term “IQ” is still in use, the scoring of modern IQ tests such as the Wechsler Adult Intelligence Scale is now based on a projection of the subject’s measured rank onto the Gaussian bell curve with a center value of one hundred (100) and a standard deviation of fifteen (15).  The Stanford-Binet IQ test has a standard deviation of sixteen (16).  As you can see from the graphic below, roughly sixty-eight percent (68%) of the human population has an IQ between eighty-five and one hundred and fifteen.  From one hundred and fifteen to one hundred and thirty you are considered to be highly intelligent.  Above one hundred and thirty you are exceptionally gifted.
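
Those scale parameters are easy to play with.  Below is a quick sketch using only the Python standard library and the numbers cited above (mean 100, standard deviation 15 for Wechsler, 16 for Stanford-Binet):

    # Percentiles on the IQ scale, using the parameters cited in the text.
    from statistics import NormalDist

    wechsler = NormalDist(mu=100, sigma=15)

    # Share of the population between 85 and 115 (within one SD of the mean):
    print(f"IQ 85-115: {wechsler.cdf(115) - wechsler.cdf(85):.1%}")   # ~68.3%

    # Share above 130 ("exceptionally gifted" in the post's terms):
    print(f"IQ above 130: {1 - wechsler.cdf(130):.2%}")               # ~2.28%

    # The same cutoff is slightly less rare on the Stanford-Binet scale:
    stanford_binet = NormalDist(mu=100, sigma=16)
    print(f"IQ above 130 (SD 16): {1 - stanford_binet.cdf(130):.2%}") # ~3.04%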

What are several qualities of highly intelligent people?  Let’s look.

QUALITIES:

  • A great deal of self-control
  • Very curious
  • They are avid readers
  • They are intuitive
  • They love learning
  • They are adaptable
  • They are risk-takers
  • They are NOT over-confident
  • They are open-minded
  • They are somewhat introverted

You probably know individuals who fit this profile.  We are going to look at one right now:  John von Neumann.

JOHN von NEUMANN:

The Financial Times of London celebrated John von Neumann as “The Man of the Century” on Dec. 24, 1999. The headline hailed him as the “architect of the computer age,” not only the “most striking” person of the 20th century, but its “pattern-card”—the pattern from which modern man, like the newest fashion collection, is cut.

The Financial Times and others characterize von Neumann’s importance for the development of modern thinking by what are termed his three great accomplishments, namely:

(1) Von Neumann is the inventor of the computer. All computers in use today have the “architecture” von Neumann developed, which makes it possible to store the program, together with data, in working memory.  (A toy sketch of this idea appears after this list.)

(2) By comparing human intelligence to computers, von Neumann laid the foundation for “Artificial Intelligence,” which is taken to be one of the most important areas of research today.

(3) Von Neumann used his “game theory” to develop a dominant tool for economic analysis, which gained recognition in 1994 when the Nobel Prize for economic sciences was awarded to John C. Harsanyi, John F. Nash, and Reinhard Selten.
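
To make accomplishment (1) concrete, here is the promised toy sketch.  The opcodes and memory layout are invented for this illustration; the point is only that instructions and data occupy the same memory, which is the defining trait of the von Neumann architecture:

    # A toy stored-program machine: instructions and data share one memory.
    LOAD, ADD, STORE, HALT = "LOAD", "ADD", "STORE", "HALT"

    # The program occupies cells 0-3; the data occupies cells 4-6 of the SAME memory.
    memory = [
        (LOAD, 4),     # 0: acc = memory[4]
        (ADD, 5),      # 1: acc += memory[5]
        (STORE, 6),    # 2: memory[6] = acc
        (HALT, 0),     # 3: stop
        2, 3, 0,       # 4-6: data
    ]

    acc, pc = 0, 0                        # accumulator and program counter
    while True:
        op, addr = memory[pc]             # fetch
        pc += 1
        if op == LOAD:                    # decode and execute
            acc = memory[addr]
        elif op == ADD:
            acc += memory[addr]
        elif op == STORE:
            memory[addr] = acc
        elif op == HALT:
            break

    print(memory[6])   # -> 5; the result was written back into the shared memory

Because a program is just data in memory, a program can in principle read or even rewrite itself, which is exactly what makes stored-program machines so flexible.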

John von Neumann, original name János Neumann (born December 28, 1903, Budapest, Hungary—died February 8, 1957, Washington, D.C.), was a Hungarian-born American mathematician. As an adult, he appended von to his surname; the hereditary title had been granted his father in 1913. Von Neumann grew from child prodigy to one of the world’s foremost mathematicians by his mid-twenties. Important work in set theory inaugurated a career that touched nearly every major branch of mathematics. Von Neumann’s gift for applied mathematics took his work in directions that influenced quantum theory, the theory of automata, economics, and defense planning. Von Neumann pioneered game theory and, along with Alan Turing and Claude Shannon, was one of the conceptual inventors of the stored-program digital computer.

Von Neumann did exhibit signs of genius in early childhood: he could joke in Classical Greek and, for a family stunt, he could quickly memorize a page from a telephone book and recite its numbers and addresses. Von Neumann learned languages and math from tutors and attended Budapest’s most prestigious secondary school, the Lutheran Gymnasium. The Neumann family fled Béla Kun’s short-lived communist regime in 1919 for a brief and relatively comfortable exile split between Vienna and the Adriatic resort of Abbazia. Upon completion of von Neumann’s secondary schooling in 1921, his father discouraged him from pursuing a career in mathematics, fearing that there was not enough money in the field. As a compromise, von Neumann simultaneously studied chemistry and mathematics. He earned a degree in chemical engineering from the Swiss Federal Institute of Technology in Zurich and a doctorate in mathematics (1926) from the University of Budapest.

OK, that’s all well and good, but do we know the IQ of Dr. John von Neumann?

John von Neumann’s IQ is estimated at 190, which is considered super-genius territory and puts him in the top 0.1% of the world’s population.

With his marvelous IQ, he wrote one hundred and fifty (150) published papers in his life; sixty (60) in pure mathematics, twenty (20) in physics, and sixty (60) in applied mathematics. His last work, an unfinished manuscript written while in the hospital and later published in book form as The Computer and the Brain, gives an indication of the direction of his interests at the time of his death. It discusses how the brain can be viewed as a computing machine. The book is speculative in nature, but discusses several important differences between brains and computers of his day (such as processing speed and parallelism), as well as suggesting directions for future research. Memory is one of the central themes in his book.

I told you he was smart!

OUR SHRINKING WORLD

March 16, 2019


We sometimes do not realize how miniaturization has affected our everyday lives.  Electromechanical products have become smaller and smaller, one great example being the cell phone we carry and use every day.  Before we look at several examples, let’s get a definition of miniaturization.

Miniaturization is the trend to manufacture ever smaller mechanical, optical and electronic products and devices. Examples include the miniaturization of mobile phones and computers, and vehicle engine downsizing. In electronics, Moore’s Law predicted that the number of transistors on an integrated circuit for minimum component cost doubles every eighteen (18) months. This enables processors to be built in smaller sizes. In short, miniaturization refers to the evolution of primarily electronic devices as they become smaller, faster and more efficient. Miniaturization also includes mechanical components, although it is sometimes very difficult to reduce the size of a functioning part.
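
As a back-of-the-envelope illustration of that doubling rate, consider the sketch below.  The starting count is roughly that of an early-1970s microprocessor and is used here purely for illustration:

    # Transistor count under an assumed 18-month doubling period (Moore's Law
    # as popularized; the figures here are illustrative, not historical data).
    start_count = 2_300          # roughly the first commercial microprocessor
    years = 30
    doublings = years / 1.5      # one doubling every 18 months
    final_count = start_count * 2 ** doublings
    print(f"{doublings:.0f} doublings -> about {final_count:,.0f} transistors")
    # 20 doublings -> about 2,411,724,800 transistors

Twenty doublings turn a few thousand transistors into a few billion, which is why the chip in your phone dwarfs anything from the 1970s.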

The revolution of electronic miniaturization began during World War II and continues to change the world today. Miniaturization of computer technology has been the source of a seemingly endless battle between technology giants all over the world. The market has become so competitive that the companies developing microprocessors are constantly working to build a smaller microchip than their competitors’, and as a result, computers become obsolete almost as soon as they are commercialized.  The concept that underlies technological miniaturization is “the smaller the better”: smaller is faster, smaller is cheaper, smaller is more profitable.

It is not just companies that profit from miniaturization advances; entire nations reap rewards through the capitalization of new developments. Devices such as personal computers, cellular telephones, portable radios, and camcorders have created massive markets through miniaturization and brought billions of dollars to the countries where they were designed and built. In the 21st century, almost every electronic device has a computer chip inside. The goal of miniaturization is to make these devices smaller and more powerful, and thus available everywhere. It has been said, however, that the time for continued miniaturization is limited: the smaller the computer chip gets, the more difficult it becomes to shrink the components that fit on the chip.  I personally do not think this is the case, but I am a mechanical engineer and not an electronic or electrical engineer.  I use the products but I do not develop the products.

The world of miniaturization would not be possible at all if it were not for semiconductor technology.  Devices made of semiconductors, notably silicon, are essential components of most electronic circuits.  A process of lithography is used to create circuitry layered over a silicon substrate. A transistor is a semiconductor device with three connections, capable of amplification in addition to rectification. Miniaturization entails increasing the number of transistors that can fit on a single chip while shrinking the size of the chip. As the surface area of a chip decreases, the task of designing newer and faster circuits becomes more difficult, as there is less room left for the components that make the computer run faster and store more data.

There is no better example of miniaturization than cell phone development.  The digital picture you see below will give some indication as to the development of the cell phone and how the physical size has decreased over the years.  The cell phone to the far left is where it all started.  To the right, where we are today.  If you look at the modern-day cell phone you see a remarkable difference in size AND ability to communicate.  This is all possible due to shrinking computer chips.

One of the most striking changes due to miniaturization is the application of digital equipment in a modern-day aircraft cockpit.  The JPEG below is a mockup of an actual Convair 880 cockpit.  With analog gauges, an engineering panel and an exterior shell, this cockpit reads as 1960s/1970s-style design and fabrication.  In fact, this is the actual cockpit mockup that was used in the classic comedy film “Airplane!”.

Now, let us take a look at a digital cockpit.  Notice any differences?  Cleaner, with far fewer instruments.  The GUI, or graphical user interface, can take the place of numerous dials and gauges that clutter and possibly confuse a pilot’s vision.

I think you have the picture, so I would challenge you this upcoming week to take a look at the electromechanical items we take for granted and discover how they have been reduced in size.  You just may be surprised.

 

SPACEIL’s BERESHEET

March 5, 2019


If you read my posts at all, you know I am solidly behind our space efforts, whether by NASA or private companies.  In my opinion, the United States of America made a HUGE mistake in withdrawing funding for manned missions AND discontinuing efforts to colonize the moon.  We are now dependent upon Russia to take our astronauts to the ISS.   That may end soon with successful launches from SpaceX and Virgin Galactic.  The headway they are making is very interesting.

Israel has also made headline news just recently with the successful launch of a spacecraft now on its way to a landing on the moon’s surface.   A digital photograph of the lander is shown below.

The story of this effort is fascinating and started in 2010 with a Facebook post. “Who wants to go to the moon?” wrote Yariv Bash, a computer engineer. A couple of friends, Kfir Damari and Yonatan Winetraub, responded, and the three met at a bar in Holon, a city south of Tel Aviv. At 30, Mr. Bash was the oldest. “As the alcohol levels in our blood increased, we became more determined,” Mr. Winetraub recalled.  They formed a nonprofit, SpaceIL, to undertake the task. More than eight years later, the product of their dreams, a small spacecraft called Beresheet, blasted off this past Thursday night atop a SpaceX Falcon 9 rocket at the Cape Canaveral Air Force Station in Florida.  Beresheet is a joint project of the nonprofit group SpaceIL and the company Israel Aerospace Industries.

Israel’s first lunar lander has notched another important milestone — its first in-space selfie. The newly released photo shows the robotic lander, known as Beresheet, looking back at Earth from a distance of 23,363.5 miles (37,600 kilometers).

“In the photo of Earth, taken during a slow spin of the spacecraft, Australia is clearly visible,” mission team members wrote in an image description today (March 5). “Also seen is the plaque installed on the spacecraft, with the Israeli flag and the inscriptions ‘Am Yisrael Chai’ and ‘Small Country, Big Dreams.'”

The entire Beresheet mission, including launch, cost about $100 million, team members have said.

Beresheet’s ride through space hasn’t been entirely smooth. Shortly after liftoff, team members noticed that the craft’s star trackers, which are critical to navigation, are susceptible to blinding by solar radiation. And Beresheet’s computer performed a reset unexpectedly just before the craft’s second planned engine burn.

Mission team members have overcome these issues. For example, they traced the computer reset to cosmic radiation and firmed up Beresheet’s defenses with a software update. The lander was then able to execute the engine burn, which put Beresheet back on track toward the moon.  This recovery indicates solid control of the mission and the ability to make a mid-course correction if needed.  In other words, they know what they are doing.

I would be very surprised if Israel stopped with this success.  I am sure they have other missions they are considering.  They do have competition. Prior to Israel’s attempt, only three other countries had “soft-landed” on the moon:  the USA, Russia and China.  The Chinese have already stated they want to colonize the moon and make it their base for further exploration.  We know the direction they are going.  I just hope we get serious about a colony on the moon and give up, for the present time, sending men and women to Mars.  Any Mars mission at this time would be nuts.

 

As always, I welcome your opinion.


With the federal government pulling out of manned space flight, private companies have ample opportunity to fill in the gaps.  Of course, these companies MUST have adequate funding, trained personnel and proper facilities to launch the vehicles, equipment and support systems that will take man and machine to the outer reaches of space.  The list of companies was quite surprising to me.  Let’s take a look.

These are just the launch vehicles.  There is also a huge list of manufacturers making manned rovers and orbiters, research craft and tech demonstrators, propulsion systems, satellite launchers, space manufacturing, space mining, space stations, space settlements, spacecraft components, and spaceliners.   I will not publish that list, but these companies are easy to discover by searching for each category heading.  To think we are not involved in space is obviously a misconception.

 

ARTIFICIAL INTELLIGENCE

February 12, 2019


Just what do we know about Artificial Intelligence or AI?  Portions of this post were taken from Forbes Magazine.

John McCarthy first coined the term artificial intelligence in 1956 when he invited a group of researchers from a variety of disciplines including language simulation, neuron nets, complexity theory and more to a summer workshop called the Dartmouth Summer Research Project on Artificial Intelligence to discuss what would ultimately become the field of AI. At that time, the researchers came together to clarify and develop the concepts around “thinking machines” which up to this point had been quite divergent. McCarthy is said to have picked the name artificial intelligence for its neutrality; to avoid highlighting one of the tracks being pursued at the time for the field of “thinking machines” that included cybernetics, automation theory and complex information processing. The proposal for the conference said, “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Today, modern dictionary definitions focus on AI being a sub-field of computer science and how machines can imitate human intelligence (being human-like rather than becoming human). The English Oxford Living Dictionary gives this definition: “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Merriam-Webster defines artificial intelligence this way:

  1. A branch of computer science dealing with the simulation of intelligent behavior in computers.
  2. The capability of a machine to imitate intelligent human behavior.

About thirty (30) years ago, a professor at the Harvard Business School, Dr. Shoshana Zuboff, articulated three laws based on research into the consequences that widespread computing would have on society. Dr. Zuboff holds degrees in philosophy and social psychology, so she was definitely ahead of her time relative to the then-unknown field of AI.  In her book “In the Age of the Smart Machine: The Future of Work and Power”, she postulated the following three laws:

  • Everything that can be automated will be automated
  • Everything that can be informated will be informated. (NOTE: Informated was coined by Zuboff to describe the process of turning descriptions and measurements of activities, events and objects into information.)
  • In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

At that time there was definitely a significant lack of computing power.  That ship has sailed; computing power is no longer the great hindrance to AI advancement that it certainly once was.

 

WHERE ARE WE?

In a recent speech, Russian President Vladimir Putin made an incredibly prescient statement: “Artificial intelligence is the future, not only for Russia, but for all of humankind.” He went on to highlight both the risks and rewards of AI and concluded by declaring that whatever country comes to dominate this technology will be the “ruler of the world.”

As someone who closely monitors global events and studies emerging technologies, I think Putin’s lofty rhetoric is entirely appropriate. Funding for global AI startups has grown at a sixty percent (60%) compound annual growth rate since 2010. More significantly, the international community is actively discussing the influence AI will exert over both global cooperation and national strength. In fact, the United Arab Emirates just recently appointed its first state minister responsible for AI.
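
To put that sixty percent (60%) compound annual growth rate in perspective, here is a quick calculation (the baseline value is arbitrary; only the growth multiple matters):

    # What a 60% compound annual growth rate implies over seven years.
    baseline_2010 = 1.0          # arbitrary starting funding level
    cagr = 0.60
    years = 7                    # 2010 -> 2017
    multiple = baseline_2010 * (1 + cagr) ** years
    print(f"Growth multiple after {years} years: {multiple:.0f}x")
    # -> about 27x the 2010 level

Compounding at that rate turns one dollar of 2010 funding into roughly twenty-seven dollars seven years later, which explains the intensity of the national competition described below.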

Automation and digitalization have already had a radical effect on international systems and structures. And considering that this technology is still in its infancy, every new development will only deepen the effects. The question is: Which countries will lead the way, and which ones will follow behind?

If we look at the criteria necessary for advancement, there are seven countries in the best position to rule the world with the help of AI.  These countries are as follows:

  • Russia
  • The United States of America
  • China
  • Japan
  • Estonia
  • Israel
  • Canada

The United States and China are currently in the best position to reap the rewards of AI. These countries have the infrastructure, innovations and initiative necessary to evolve AI into something with broadly shared benefits. In fact, China expects to dominate AI globally by 2030. The United States could still maintain its lead if it makes AI a top priority and makes the necessary investments while also pulling together all required government and private sector resources.

Ultimately, however, winning and losing will not be determined by which country gains the most growth through AI. It will be determined by how the entire global community chooses to leverage AI — as a tool of war or as a tool of progress.

Ideally, the country that uses AI to rule the world will do it through leadership and cooperation rather than automated domination.

CONCLUSIONS:  We dare not neglect this disruptive technology.  We cannot afford to lose this battle.

COMPUTER SIMULATION

January 20, 2019


More and more engineers, systems analysts, biochemists, city planners, medical practitioners and individuals in entertainment fields are moving towards computer simulation.  Let’s take a quick look at simulation; then we will discover several examples of how very powerful this technology can be.

WHAT IS COMPUTER SIMULATION?

Simulation modelling is an excellent tool for analyzing and optimizing dynamic processes. Specifically, when mathematical optimization of complex systems becomes infeasible, and when conducting experiments within real systems is too expensive, time consuming, or dangerous, simulation becomes a powerful tool. The aim of simulation is to support objective decision making by means of dynamic analysis, to enable managers to safely plan their operations, and to save costs.

A computer simulation or a computer model is a computer program that attempts to simulate an abstract model of a particular system. … Computer simulations build on and are useful adjuncts to purely mathematical models in science, technology and entertainment.

Computer simulations have become a useful part of mathematical modelling of many natural systems in physics, chemistry and biology, human systems in economics, psychology, and social science and in the process of engineering new technology, to gain insight into the operation of those systems. They are also widely used in the entertainment fields.

Traditionally, the formal modeling of systems has been done using mathematical models, which attempt to find analytical solutions to problems, enabling the prediction of the system’s behavior from a set of parameters and initial conditions.  The word prediction is a very important word in the overall process. One very critical part of the predictive process is designating the parameters properly: not only the upper and lower specifications but also the parameters that define intermediate processes.

The reliability of, and the trust people put in, computer simulations depend on the validity of the simulation model.  The degree of trust is directly related to the software itself and the reputation of the company producing the software. There is considerably more to be said about the vendors providing software to companies wishing to simulate processes and solve complex problems.

Computer simulations find use in the study of dynamic behavior in an environment that may be difficult or dangerous to implement in real life. For example, a nuclear blast may be represented with a mathematical model that takes into consideration various elements such as velocity, heat and radioactive emissions. Additionally, one may implement changes to the equation by changing certain other variables, like the amount of fissionable material used in the blast.  Another application involves predictive efforts relative to weather systems.  The mathematics involved in these determinations is significantly complex and usually involves a branch of math called “chaos theory”.
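
Chaos is easy to demonstrate in a few lines of code.  The classic example below is my own illustration (the post does not specify a model): the logistic map, in which two starting conditions differing by one part in a million drift completely apart, which is exactly why long-range weather prediction is so hard:

    # Sensitivity to initial conditions: the logistic map x -> r*x*(1-x).
    r = 3.9                        # parameter value in the chaotic regime
    x1, x2 = 0.500000, 0.500001    # nearly identical starting conditions

    for step in range(1, 31):
        x1 = r * x1 * (1 - x1)
        x2 = r * x2 * (1 - x2)
        if step % 10 == 0:
            print(f"step {step}: x1={x1:.6f}  x2={x2:.6f}")
    # After roughly 30 steps the two trajectories bear no resemblance
    # to each other, despite the one-in-a-million initial difference.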

Simulations largely help in determining behaviors when individual components of a system are altered. Simulations can also be used in engineering to determine potential effects, such as the effect on river systems of constructing dams.  Some companies call these behaviors “what-if” scenarios because they allow the engineer or scientist to apply differing parameters to discern cause-effect interaction.

One great advantage a computer simulation has over a mathematical model is that it allows a visual representation of events and their timeline. You can actually see the action and chain of events with simulation and investigate the parameters for acceptance.  You can examine the limits of acceptability using simulation.   All components and assemblies have upper and lower specification limits and must perform within those limits.

Computer simulation is the discipline of designing a model of an actual or theoretical physical system, executing the model on a digital computer, and analyzing the execution output. Simulation embodies the principle of “learning by doing” — to learn about the system we must first build a model of some sort and then operate the model. The use of simulation is an activity that is as natural as a child who role plays. Children understand the world around them by simulating (with toys and figurines) most of their interactions with other people, animals and objects. As adults, we lose some of this childlike behavior but recapture it later on through computer simulation. To understand reality and all of its complexity, we must build artificial objects and dynamically act out roles with them. Computer simulation is the electronic equivalent of this type of role playing and it serves to drive synthetic environments and virtual worlds. Within the overall task of simulation, there are three primary sub-fields: model design, model execution and model analysis.
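
Here is a minimal sketch of those three sub-fields in action; the scenario and all of its numbers are invented for illustration.  The model is a projectile with simple air drag, execution integrates the flight forward in time, and the analysis is a “what-if” sweep over the drag parameter:

    import math

    # Model design: a projectile with simple air drag (illustrative physics).
    def simulate_range(drag, v0=50.0, angle_deg=45.0, dt=0.001, g=9.81):
        """Integrate the flight and return the horizontal distance traveled."""
        angle = math.radians(angle_deg)
        x, y = 0.0, 0.0
        vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
        while y >= 0.0:                       # model execution: step until impact
            speed = math.hypot(vx, vy)
            vx -= drag * speed * vx * dt      # drag opposes the motion
            vy -= (g + drag * speed * vy) * dt
            x += vx * dt
            y += vy * dt
        return x

    # Model analysis: a "what-if" sweep over the drag parameter.
    for drag in (0.0, 0.01, 0.02, 0.05):
        print(f"drag={drag:.2f} -> range = {simulate_range(drag):.1f} m")
    # With drag = 0 this approximates the textbook v0^2*sin(2*angle)/g,
    # about 254.8 m, which is a handy check on the model.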

REAL-WORLD SIMULATION:

The following examples are taken from computer screens representing real-world situations and/or problems that need solutions.  As mentioned earlier, “what-ifs” may be realized by animating the computer model, providing cause-effect responses to desired inputs. Let’s take a look.

A great host of mechanical and structural problems may be solved by using computer simulation. The example above shows how the diameter of two matching holes may be affected by applying heat to the bracket.

 

The Newtonian and non-Newtonian flow of fluids, i.e. liquids and gases, has always been a subject of concern within piping systems.  Flow related to pressure and temperature may be approximated by simulation.

 


Electromagnetics is an extremely complex field. The digital image above strives to show how a magnetic field reacts to applied voltage.

Chemical engineers are very concerned with reaction time when chemicals are mixed.  One example might be the ignition time when an oxidizer comes in contact with fuel.

Acoustics, or how sound propagates through a physical device or structure, may also be simulated.

The transfer of heat from a warmer surface to a colder one has always come into question. Simulation programs are extremely valuable in visualizing this transfer.
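
As a tiny example of the kind of computation behind such a visualization (a finite-difference sketch written for illustration, not any vendor’s code), here is heat diffusing along a rod whose ends are held hot and cold:

    # 1-D heat conduction by explicit finite differences (illustrative only).
    n = 11                  # number of points along the rod
    alpha = 0.4             # diffusion number; must be <= 0.5 for stability
    temps = [20.0] * n      # the rod starts at a uniform 20 degrees C
    temps[0], temps[-1] = 100.0, 0.0    # fixed hot and cold ends

    for step in range(500):
        new = temps[:]
        for i in range(1, n - 1):
            # Each interior point relaxes toward the mean of its neighbors.
            new[i] = temps[i] + alpha * (temps[i-1] - 2*temps[i] + temps[i+1])
        temps = new

    print(["%.0f" % t for t in temps])
    # Heat flows from the hot end toward the cold end, and the temperatures
    # approach a straight-line profile at steady state.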

 

Equation-based modeling can be simulated, showing how a structure, in this case a metal plate, can be affected when forces are applied.

In addition to computer simulation, we have AR, or augmented reality, and VR, or virtual reality.  Those subjects are fascinating but will require another post for another day.  Hope you enjoy this one.

 

 

WEARABLE TECHNOLOGY

January 12, 2019


Wearable technology’s evolution is not about the gadget on the wrist or the arm but what is done with the data these devices collect, say most computational biologists. Before we go on, let’s define wearable technology:

“Wearable technology (also called wearable gadgets) is a category of technology devices that can be worn by a consumer and often include tracking information related to health and fitness. Other wearable tech gadgets include devices that have small motion sensors to take photos and sync with your mobile devices.”

Several examples of wearable technology may be seen by the following digital photographs.

You can all recognize the “watches” shown above. I have one on right now.  For Christmas this year, my wife gave me a Fitbit Charge 3.  I can monitor: 1.) Number of steps per day, 2.) Pulse rate, 3.) Calories burned during the day, 4.) Time of day, 5.) Number of stairs climbed per day, 6.) Miles walked or run per day, and 7.) Several items I can program in from the app on my digital phone.  It is truly a marvelous device.

Other wearables provide very different information and capture data of much greater import.

The device above is manufactured by a company called Lumus.  This company focuses on products that provide new dimensions for the human visual experience. It offers cutting-edge eyewear displays that can be used in various applications including gaming, movie watching, text reading, web browsing, and interaction with the interface of wearable computers. Lumus does not aim to produce self-branded products. Instead, the company wants to work with various original equipment manufacturers (OEMs) to enable the wider use of its technologies.  This is truly ground-breaking technology being used today on a limited basis.

Wearable technology is also helping individuals with declining eyesight to see as most people see.  The methodology is explained in the following digital image.

Glucose levels may be monitored by the device shown below. No longer is it necessary to prick your finger to draw a small droplet of blood to determine glucose levels.  The device can do that on a continuous basis and without a cumbersome test device.

There are many people all over the world suffering from “A-fib”.  Periodic monitoring becomes a necessity, and one of the best methods of accomplishing that is shown by the devices below. A watch monitors pulse rate and sends that information via Bluetooth to an app downloaded on your cell phone.

Four Benefits of Wearable Health Technology are as follows:

  • Real-time data collection. Wearables can already collect an array of data like activity levels, sleep and heart rate, among others.
  • Continuous monitoring.
  • Prediction and alerting (a minimal sketch of such an alert appears after this list).
  • Empowering patients.
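
To illustrate the prediction-and-alerting idea, here is a minimal sketch; the readings and the threshold are invented, and real devices use far more sophisticated analysis.  A rolling average over recent heart-rate samples raises an alert when it climbs too high:

    from collections import deque

    # Rolling-average heart-rate alert (toy example with invented numbers).
    WINDOW = 5            # samples in the rolling average
    THRESHOLD = 120.0     # alert above this average, in beats per minute

    readings = [72, 75, 74, 118, 125, 131, 128, 124, 90, 80]   # simulated BPM
    recent = deque(maxlen=WINDOW)

    for minute, bpm in enumerate(readings):
        recent.append(bpm)
        if len(recent) == WINDOW:
            avg = sum(recent) / WINDOW
            if avg > THRESHOLD:
                print(f"minute {minute}: ALERT, average heart rate {avg:.0f} bpm")

Averaging over a window keeps one noisy reading from triggering a false alarm, which is the same reason real wearables smooth their sensor data before alerting.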

Major advances in sensor and micro-electromechanical systems (MEMS) technologies are allowing much more accurate measurements and facilitating believable data that can be used to track movements and health conditions on any one given day.  In many cases, the data captured can be downloaded into a computer and transmitted to a medical practitioner for documentation.

Sensor miniaturization is a key driver for space-constrained wearable design.  Motion sensors are now available in tiny packages measuring 2 x 2 millimeters.  As mentioned, specific medical sensors can be used to track 1.) Heart rate variability, 2.) Oxygen levels, 3.) Cardiac health, 4.) Blood pressure, 5.) Hemoglobin, 6.) Glucose levels and 7.) Body temperature.  These medical devices represent a growing market due to their higher accuracy and greater performance.  These facts make them less prone to the price pressures that designers commonly face when designing consumer wearables.

One great advantage for these devices now is the ability to hold a charge for a much longer period of time.  My Fitbit has a battery life of seven (7) days.  That’s really unheard of relative to times past.

CONCLUSION:  Wearable designs are building a whole new industry one gadget at a time.  MEMS sensors represent an intrinsic part of this design movement. Wearable designs have come a long way from counting steps in fitness trackers, and they are already applying machine-learning algorithms to classify and analyze data.

HOW MUCH IS TOO MUCH?

December 15, 2018


How many “screen-time” hours do you spend each day?  Any idea? Now, let’s face facts: for an adult working a full-time job, hours of daily screen time may be a necessity.  We all know that, but how about our children and grandchildren?

I’m old enough to remember when television was a laboratory novelty and telephones were “ringer-types” affixed to the cleanest wall in the house.  No laptops, no desktops, no cell phones, no Game Boys, etc., etc.  You get the picture.  That, as we all know, is a far cry from where we are today.

Today’s children have grown up with a vast array of electronic devices at their fingertips. They can’t imagine a world without smartphones, tablets, and the internet.  If you do not believe this, just ask them. One of my younger grandkids asked me what we did before the internet.  ANSWER: we played outside, did our chores, and called our friends and family members.

The advances in technology mean today’s parents are the first generation who have to figure out how to limit screen time for children.  This is a growing requirement, for reasons we will discuss later.  While digital devices can provide endless hours of entertainment and can offer educational content, unlimited screen time can be harmful. The American Academy of Pediatrics recommends parents place a reasonable limit on entertainment media. Despite those recommendations, children between the ages of eight (8) and eighteen (18) average seven and one-half (7 ½) hours of entertainment media per day, according to a 2010 study by the Henry J. Kaiser Family Foundation.  Can you imagine over seven (7) hours per day?  When I read this it just blew my mind.

But it’s not just kids who are getting too much screen time. Many parents struggle to impose healthy limits on themselves too. The average adult spends over eleven (11) hours per day behind a screen, according to the Kaiser Family Foundation.  I’m sure much of this is job-related, but most people do not work eleven hours behind a desk each day.

Let’s now look at what the experts say:

  • Children under age two (2) spend about forty-two (42) minutes, children ages two (2) to four (4) spend two (2) hours and forty (40) minutes, and kids ages five (5) to eight (8) spend nearly three (3) hours (2:58) with screen media daily. About thirty-five (35) percent of children’s screen time is spent with a mobile device, compared to four (4) percent in 2011. (October 19, 2017)
  • Children aged eighteen (18) months to two (2) years can watch or use high-quality programs or apps if adults watch or play with them to help them understand what they’re seeing. Children aged two to five (2-5) years should have no more than one hour a day of screen time, with adults watching or playing with them.
  • The American Academy of Pediatrics released new guidelines on how much screen time is appropriate for children. … Excessive screen time can also lead to “Computer Vision Syndrome,” which is a combination of headaches, eye strain, fatigue, blurry vision for distance, and excessively dry eyes. (August 21, 2017)
  • Pediatricians: no more than two (2) hours of screen time daily for kids. Children should be limited to less than two hours of entertainment-based screen time per day, and shouldn’t have TVs or internet access in their bedrooms, according to new guidelines from pediatricians. (October 28, 2013)

OK, why?

  • Obesity: Too much time engaging in sedentary activity, such as watching TV and playing video games, can be a risk factor for becoming overweight.
  • Sleep Problems:  Although many parents use TV to wind down before bed, screen time before bed can backfire. The light emitted from screens interferes with the sleep cycle in the brain and can lead to insomnia.
  • Behavioral Problems: Elementary school-age children who watch TV or use a computer more than two hours per day are more likely to have emotional, social, and attention problems. Excessive TV viewing has even been linked to increased bullying behavior.
  • Educational problems: Elementary school-age children who have televisions in their bedrooms do worse on academic testing.  This is an established fact—established.  At this time in our history we need educated adults that can get the job done.  We do not need dummies.
  • Violence: Exposure to violent TV shows, movies, music, and video games can cause children to become desensitized to it. Eventually, they may use violence to solve problems and may imitate what they see on TV, according to the American Academy of Child and Adolescent Psychiatry.

When very small children get hooked on tablets and smartphones, says Dr. Aric Sigman, an associate fellow of the British Psychological Society and a Fellow of Britain’s Royal Society of Medicine, they can unintentionally cause permanent damage to their still-developing brains. Too much screen time too soon, he says, “is the very thing impeding the development of the abilities that parents are so eager to foster through the tablets. The ability to focus, to concentrate, to lend attention, to sense other people’s attitudes and communicate with them, to build a large vocabulary—all those abilities are harmed.”

Between birth and age three, for example, our brains develop quickly and are particularly sensitive to the environment around us. In medical circles, this is called the critical period, because the changes that happen in the brain during these first tender years become the permanent foundation upon which all later brain function is built. In order for the brain’s neural networks to develop normally during the critical period, a child needs specific stimuli from the outside environment. These are rules that have evolved over the long course of human evolution, but, not surprisingly, these essential stimuli are not found on today’s tablet screens. When a young child spends too much time in front of a screen and not enough getting required stimuli from the real world, her development becomes stunted.

CONCLUSION: This digital age is wonderful if used properly and recognized as having hazards that may create lasting negative effects.  Use wisely.
