Astrophysics for People in a Hurry was written by Neil deGrasse Tyson.  I think the best place to start is with a brief bio of Dr. Tyson.

NEIL deGRASSE TYSON was born October 5, 1958, in New York City. When he was nine years old, his interest in astronomy was sparked by a trip to the Hayden Planetarium at the American Museum of Natural History in New York. Tyson followed that passion and received a bachelor’s degree in physics from Harvard University in Cambridge, Massachusetts, in 1980 and a master’s degree in astronomy from the University of Texas at Austin in 1983. He began writing a question-and-answer column for the University of Texas’s popular astronomy magazine StarDate, and material from that column later appeared in his books Merlin’s Tour of the Universe (1989) and Just Visiting This Planet (1998).

Tyson then earned a master’s (1989) and a doctorate in astrophysics (1991) from Columbia University, New York City. He was a postdoctoral research associate at Princeton University from 1991 to 1994, when he joined the Hayden Planetarium as a staff scientist. His research dealt with problems relating to galactic structure and evolution. He became acting director of the Hayden Planetarium in 1995 and director in 1996. From 1995 to 2005 he wrote monthly essays for Natural History magazine, some of which were collected in Death by Black Hole: And Other Cosmic Quandaries (2007), and in 2000 he wrote an autobiography, The Sky Is Not the Limit: Adventures of an Urban Astrophysicist. His later books include Astrophysics for People in a Hurry (2017).

You can see from his biography that Dr. Tyson is a “heavy hitter” who knows his subject inside and out.  His newest book, “Astrophysics for People in a Hurry,” respects its readers’ time.  During the summer of 2017 it held the number-one spot on the New York Times best-seller list for four (4) consecutive months, and it has remained on that list since its publication. The book is small, only two hundred nine (209) pages, but please do not let its brevity fool you.  It is extremely well written and “loaded” with facts relevant to the subject matter, very concise and to the point.   I would now like to give you some idea of the content by copying several passages from the book.  Short passages that will indicate what you will be dealing with as a reader.

  • In the beginning, nearly fourteen billion years ago, all the space and all the matter and all the energy of the known universe was contained in a volume less than one-trillionth the size of the period that ends this sentence.
  • As the universe aged through 10^-35 seconds, it continued to expand, diluting all concentrations of energy, and what remained of the unified forces split into the “electroweak” and the “strong nuclear” forces.
  • As the cosmos continued to expand and cool, growing larger than the size of our solar system, the temperature dropped rapidly below a trillion degrees Kelvin.
  • After cooling, one electron for every proton has been “frozen” into existence. As the cosmos continues to cool, dropping below a hundred million degrees, protons fuse with other protons as well as with neutrons, forming atomic nuclei and hatching a universe in which ninety percent of these nuclei are hydrogen and ten percent are helium, along with trace amounts of deuterium (heavy hydrogen), tritium (even heavier hydrogen), and lithium.
  • For the first billion years, the universe continued to expand and cool as matter gravitated into the massive concentrations we call galaxies. Nearly a hundred billion of them formed, each containing hundreds of billions of stars that undergo thermonuclear fusion in their cores.

Dr. Tyson also discusses Dark Matter, Dark Energy, Invisible Light, the Exoplanet Earth, and many other fascinating subjects that can be absorbed in “quick time.”  It is a GREAT read and one I can definitely recommend to you.


ARECIBO

September 27, 2017


Hurricane Maria, as you well know, has caused massive damage to the island of Puerto Rico.  At this writing, the entire island is without power and is struggling to exist without water, telephone communication, health and sanitation facilities.   The digital pictures below will give some indication as to the devastation.

Maria made landfall in the southeastern part of the U.S. territory Wednesday with winds reaching 155 miles per hour, knocking out electricity across the island. The storm’s amazingly strong winds flooded parts of downtown San Juan, downed trees, and ripped the roofs from homes. Puerto Rico has little financial wherewithal to navigate a major catastrophe, given its decision in May to seek protection from creditors after a decade of economic decline, excessive borrowing, and the loss of residents to the U.S. mainland.  Right now, PR is totally dependent upon the United States for recovery.

Imagine winds strong enough to toss and position an automobile in the fashion shown above.  I cannot even tell the make of this car, but we must assume it weighs at least two thousand pounds, and yet it was thrown through the air like a paper airplane.

One huge issue is clearing roads so supplies for relief and medical attention can be delivered to the people.  This is a huge task.

One question I had—how about Arecibo?  Did the radio telescope survive, and if so, what damage was sustained?  The digital image below shows the Arecibo Radio Telescope during “better times.”

Five decades ago, scientists sought a radio telescope that was close to the equator, according to Arecibo’s website. This location would allow the telescope to track planets passing overhead, while also probing the nature of the ionosphere — the layer of the atmosphere in which charged particles produce the northern lights.  The telescope is part of the National Astronomy and Ionosphere Center. The National Science Foundation has a cooperative agreement with the three entities that operate it: SRI International, the Universities Space Research Association, and UMET (Metropolitan University). That radio telescope has provided an absolute wealth of information about our solar system and about bodies outside it.

The Arecibo Observatory contains the second-largest radio telescope in the world, and that telescope has been out of service ever since Hurricane Maria hit Puerto Rico on Sept. 20. Maria hit the island as a Category 4 hurricane.

While Puerto Rico suffered catastrophic damage across the island, the Arecibo Observatory suffered “relatively minor damages,” Francisco Córdova, the director of the observatory, said in a Facebook post on Sunday (Sept. 24).

In Córdova’s words: “Still standing after #HurricaneMaria! We suffered some damages, but nothing that can’t be repaired or replaced! More updates to follow in the coming days as we complete our detailed inspections. We stand together with Puerto Rico as we recover from this storm. #PRStrong”

Despite Córdova’s optimistic message, staff members and other residents of Puerto Rico are in a pretty bad situation. Power has yet to be restored to the island since the storm hit, and people are running out of fuel for generators. With roads still blocked by fallen trees and debris, transporting supplies to people in need is no simple task.

National Geographic’s Nadia Drake, who has been in contact with the observatory and has provided extensive updates via Twitter, reported that “some staff who have lost homes in town are moving on-site” to the facility, which weathered the storm pretty well overall. Drake also reported that the observatory “will likely be serving as a FEMA emergency center,” helping out members of the community who lost their homes in the storm.

The mission of Arecibo will continue, but it may be a long time before the radio telescope is fully functional.  Let’s just hope the lives of the people manning the telescope can be put back in order quickly so that its important work may continue.

ROBONAUTS

September 4, 2016


OK, if you are like me, you’re sitting there asking yourself just what on Earth is a Robonaut?  A robot is an electromechanical device used primarily to take the labor, and sometimes the danger, out of human activity.  As you well know, robotic systems have been in use for many years, with each year providing systems of increasing sophistication.  An astronaut is an individual operating in outer space.  Let’s take a proper definition for ROBONAUT as provided by NASA.

“A Robonaut is a dexterous humanoid robot built and designed at NASA Johnson Space Center in Houston, Texas. Our challenge is to build machines that can help humans work and explore in space. Working side by side with humans, or going where the risks are too great for people, Robonauts will expand our ability for construction and discovery. Central to that effort is a capability we call dexterous manipulation, embodied by an ability to use one’s hand to do work, and our challenge has been to build machines with dexterity that exceeds that of a suited astronaut.”

My information is derived from “NASA Tech Briefs”, Vol 40, No 7, July 2016 publication.

If you had your own personal robotic system, what would you ask that system to do?  Several options surface in my world, as follows: 1.) Mow the lawn, 2.) Trim hedges, 3.) Wash my cars, 4.) Clean the gutters, 5.) Vacuum floors in our house, 6.) Wash windows, and 7.) Do the laundry.   (As you can see, I’m not really into yard work or even housework.)  Just about all of the tasks I do on a regular basis are outdoor, time-consuming jobs.

For NASA, the International Space Station (ISS) has become a marvelous test-bed for developing the world’s most advanced robotic technology—technology that definitely represents the cutting edge in space exploration and ground research.  The ISS now hosts a significant array of state-of-the-art robotic projects, including human-scale dexterous robots and free-flying robots.  (NOTE:  NASA’s Astrobee free-flying robotic system consists of structure, propulsion, power, guidance, navigation and control (GN&C), command and data handling (C&DH), avionics, communications, dock mechanism, and perching arm subsystems. The Astrobee element is designed to be self-contained and capable of autonomous localization, orientation, navigation, and holonomic motion, as well as autonomous resupply of consumables while operating inside the USOS.)  These robotic systems are not only enabling the future of human-robot space exploration but promising extraordinary benefits for Earth-bound applications.

The initial purpose for exploring the design and fabrication of a humanoid robotic system was to assist astronauts in completing tasks in which an additional pair or pairs of hands would be very helpful, or to perform jobs either too hazardous or too mundane for crewmembers.  Robonaut 2 was NASA’s first humanoid robot in space and was selected as the NASA Government Invention of the Year for 2014. Many outstanding inventions were considered for this award, but Robonaut 2 was chosen after a challenging review by the NASA selection committee that evaluated the robot in the following areas: 1.) Aerospace Significance, 2.) Industry Significance, 3.) Humanitarian Significance, 4.) Technology Readiness Level, 5.) NASA Use, and 6.) Industry Use and Creativity. Robonaut 2 technologies have resulted in thirty-nine (39) issued patents, with several more under review. The NASA Invention of the Year is a first for a humanoid robot and another in a series of firsts for Robonaut 2 that include: first robot inside a human space vehicle operating without a cage, and first robot to work with human-rated tools in space.  The R2 system developed by NASA is shown in the following JPEGs:

R2 Robotic System

R2 Robotic System(2)

R2 Robotic System(3)

 


R2 was first powered up in August 2011. Since that time, robotics engineers have tested R2 on the ISS, completing tasks ranging from air velocity measurements to handrail cleaning—simple but necessary tasks that require a great deal of crew time.   R2 also has an on-board task of flipping switches and pushing buttons, each time controlled by space station crew members through the use of virtual reality gear. According to Steve Gaddis, “we are currently working on teaching him how to look for handrails and avoid obstacles.”

The Robonaut project has been conducting research in robotics technology on board the International Space Station (ISS) since 2012.  Recently, the original upper body humanoid robot was upgraded by the addition of two climbing manipulators (“legs”), more capable processors, and new sensors. While Robonaut 2 (R2) has been working through checkout exercises on orbit following the upgrade, technology development on the ground has continued to advance. Through the Active Reduced Gravity Offload System (ARGOS), the Robonaut team has been able to develop technologies that will enable full operation of the robotic testbed on orbit using similar robots located at the Johnson Space Center. Once these technologies have been vetted in this way, they will be implemented and tested on the R2 unit on board the ISS. The goal of this work is to create a fully-featured robotics research platform on board the ISS to increase the technology readiness level of technologies that will aid in future exploration missions.

One advantage of a humanoid design is that Robonaut can take over simple, repetitive, or especially dangerous tasks on places such as the International Space Station. Because R2 is approaching human dexterity, tasks such as changing out an air filter can be performed without modifications to the existing design.

More and more we are seeing robotic systems do the work of humans.  It is just a matter of time before we see their usage here on terra firma.  I mean human-type robotic systems used to serve man.  Let’s just hope we do not evolve into the “age of the machines.”  I think I may take another look at the movie The Terminator.

SMARTS

August 2, 2016


On 13 October 2014 at 9:32 A.M. my ninety-two (92) year old mother died of Alzheimer’s disease.   It was a very peaceful passing, but as her only son it was very painful to witness her gradual memory loss and the loss of all cognitive skills.  Even though there is no cure, there are certain medications that can arrest progression to a point.  None were effective in her case.

Her condition once again piqued my interest in intelligence (I.Q.), smarts, intellect.  Are we born with an I.Q. we cannot improve? How do cultural and family environments affect intelligence? What activities diminish I.Q., if any?  Just how much of our brain’s abilities does the average working-class person need and use each day? Obviously, some professions require greater intellect than others. How is I.Q. distributed over our species in general?

IQ tests are the most reliable (i.e., consistent) and valid (i.e., accurate and meaningful) type of psychometric test that psychologists make use of. They are well established as a good measure of general intelligence, or g.  IQ tests are widely used in many contexts – educational, professional, and for leisure. Universities use IQ-like tests (e.g., SAT entrance exams) to select students, companies use IQ tests (job aptitude tests) to screen applicants, and high-IQ societies such as Mensa use IQ test scores as membership criteria.

The following bell-shaped curve will demonstrate approximate distribution of intellect for our species.

Bell Shaped Curve

The area under the curve between two scores corresponds to the percentage (%) of the population in that range. The scores on this IQ bell curve are color-coded in ‘standard deviation units’. A standard deviation is a measure of the spread of the distribution, with fifteen (15) points representing one standard deviation for most IQ tests. Nearly seventy percent (70%) of the population score between eighty-five (85) and one hundred fifteen (115) – i.e., plus and minus one standard deviation. A very small percentage of the population (about 0.1%, or 1 in 1,000) have scores less than fifty-five (55) or greater than one hundred forty-five (145) – that is, more than three (3) standard deviations out!

As you can see, the mean I.Q. is approximately one hundred (100), with ninety-five percent (95%) of the general population scoring between seventy (70) and one hundred thirty (130). Only about two percent (2%) of the population score greater than one hundred thirty (130), and a tremendously small fraction, roughly 0.1%, score in the genius range, greater than one hundred forty-five (145).
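If you would like to check these percentages yourself, they fall straight out of the normal distribution. Here is a minimal sketch, assuming the standard IQ scaling of mean 100 and standard deviation 15 (the function and variable names are my own, not from any IQ test publisher):

```python
from math import erf, sqrt

def pct_within(k):
    """Fraction of a normal distribution lying within +/- k standard deviations."""
    return erf(k / sqrt(2))

mean, sd = 100, 15  # conventional IQ scaling

for k in (1, 2, 3):
    lo, hi = mean - k * sd, mean + k * sd
    # prints roughly 68.27%, 95.45%, and 99.73% for k = 1, 2, 3
    print(f"IQ {lo}-{hi} (+/-{k} SD): {pct_within(k) * 100:.2f}% of the population")
```

Running this confirms the figures above: “nearly seventy percent” within one standard deviation is really 68.3%, and the share beyond three standard deviations on both tails combined is about 0.27%, so each single tail (say, above 145) is roughly 0.13%.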

OK, who’s smart?  Let’s look.

PRESENT AND LIVING:

  • Garry Kasparov—190.  Born in 1963 in Baku, in what is now Azerbaijan, Garry Kasparov is arguably the most famous chess player of all time. When he was seven, Kasparov enrolled at Baku’s Young Pioneer Palace; then at ten he started to train at the school of legendary Soviet chess player Mikhail Botvinnik. In 1980 Kasparov qualified as a grandmaster, and five years later he became the then youngest-ever outright world champion. He retained the championship title until 1993, and has held the position of world number one-ranked player for three times longer than anyone else. In 1996 he famously took on IBM computer Deep Blue, winning with a score of 4–2 – although he lost to a much upgraded version of the machine the following year. In 2005 Kasparov retired from chess to focus on politics and writing. He has a reported IQ of 190.
  • Philip Emeagwali—190. Dr. Philip Emeagwali, who has been called the “Bill Gates of Africa,” was born in Nigeria in 1954. Like many African schoolchildren, he dropped out of school at age 14 because his father could not continue paying Emeagwali’s school fees. However, his father continued teaching him at home, and every day Emeagwali performed mental exercises such as solving 100 math problems in one hour. His father taught him until Philip “knew more than he did.”
  • Marilyn vos Savant—228. Marilyn vos Savant’s intelligence quotient (I.Q.) score of 228 is certainly one of the highest ever recorded.  This very high I.Q. gave the St. Louis-born writer instant celebrity and earned her the sobriquet “the smartest person in the world.” Although vos Savant’s family was aware of her exceptionally high I.Q. scores on the Stanford-Binet test when she was ten (10) years old (she is also recognized as having the highest I.Q. score ever recorded by a child), her parents decided to withhold the information from the public in order to avoid commercial exploitation and assure her a normal childhood.
  • Mislav Predavec—192.  Mislav Predavec is a Croatian mathematics professor with a reported IQ of 192. “I always felt I was a step ahead of others. As material in school increased, I just solved the problems faster and better,” he has explained. Predavec was born in Zagreb in 1967, and his unique abilities were obvious from a young age. As for his adult achievements, since 2009 Predavec has taught at Zagreb’s Schola Medica Zagrabiensis. In addition, he runs trading company Preminis, having done so since 1989. And in 2002 Predavec founded exclusive IQ society GenerIQ, which forms part of his wider IQ society network. “Very difficult intelligence tests are my favorite hobby,” he has said. In 2012 the World Genius Directory ranked Predavec as the third smartest person in the world.
  • Rick Rosner—192.  U.S. television writer and pseudo-celebrity Richard Rosner is an unusual case. Born in 1960, he has led a somewhat checkered professional life: as well as writing for Jimmy Kimmel Live! and other TV shows, Rosner has, he says, been employed as a stripper, doorman, male model and waiter. In 2000 he infamously appeared on Who Wants to Be a Millionaire? answering a question about the altitude of capital cities incorrectly and reacting by suing the show, albeit unsuccessfully. Rosner placed second in the World Genius Directory’s 2013 Genius of the Year Awards; the site lists his IQ at 192, which places him just behind Greek psychiatrist Evangelos Katsioulis. Rosner reportedly hit the books for 20 hours a day to try and outdo Katsioulis, but to no avail.
  • Christopher Langan—210.  Born in San Francisco in 1952, self-educated Christopher Langan is a special kind of genius. By the time he turned four, he’d already taught himself how to read.  At high school, according to Langan, he tutored himself in “advanced math, physics, philosophy, Latin and Greek, all that.” What’s more, he allegedly got 100 percent on his SAT test, even though he slept through some of it. Langan attended Montana State University but dropped out. Rather like the titular character in 1997 movie Good Will Hunting, Langan didn’t choose an academic career; instead, he worked as a doorman and developed his Cognitive-Theoretic Model of the Universe during his downtime. In 1999, on TV newsmagazine 20/20, neuropsychologist Robert Novelly stated that Langan’s IQ – said to be between 195 and 210 – was the highest he’d ever measured. Langan has been dubbed “the smartest man in America.”
  • Evangelos Katsioulis—198. Katsioulis is known for his high intelligence test scores.  There are several reports that he has achieved the highest scores ever recorded on IQ tests designed to measure exceptional intelligence.   Katsioulis has a reported IQ 205 on the Stanford-Binet scale with standard deviation of 16, which is equivalent to an IQ 198.4.
  • Kim Ung-Yong—210.   Before The Guinness Book of World Records withdrew its Highest IQ category in 1990, South Korean former child prodigy Kim Ung-Yong made the list with a score of 210. Kim was born in Seoul in 1963, and by the time he turned three, he could already read Korean, Japanese, English and German. When he was just eight years old, Kim moved to America to work at NASA. “At that time, I led my life like a machine. I woke up, solved the daily assigned equation, ate, slept, and so forth,” he has explained. “I was lonely and had no friends.” While he was in the States, Kim allegedly obtained a doctorate degree in physics, although this is unconfirmed. In any case, in 1978 he moved back to South Korea and went on to earn a Ph.D. in civil engineering.
  • Christopher Hirata—225.   Astrophysicist Chris Hirata was born in Michigan in 1982, and at the age of 13 he became the youngest U.S. citizen to receive an International Physics Olympiad gold medal. When he turned 14, Hirata apparently began studying at the California Institute of Technology, and he would go on to earn a bachelor’s degree in physics from the school in 2001. At 16 – with a reported IQ of 225 – he started doing work for NASA, investigating whether it would be feasible for humans to settle on Mars. Then in 2005 he went on to obtain a Ph.D. in physics from Princeton. Hirata is currently a physics and astronomy professor at The Ohio State University. His specialist fields include dark energy, gravitational lensing, the cosmic microwave background, galaxy clustering, and general relativity. “If I were to say Chris Hirata is one in a million, that would understate his intellectual ability,” said a member of staff at his high school in 1997.
  • Terence Tao—230.  Born in Adelaide in 1975, Australian former child prodigy Terence Tao didn’t waste any time flexing his educational muscles. When he was two years old, he was able to perform simple arithmetic. By the time he was nine, he was studying college-level math courses. And in 1988, aged just 13, he became the youngest gold medal recipient in International Mathematical Olympiad history – a record that still stands today. In 1992 Tao achieved a master’s degree in mathematics from Flinders University in Adelaide, the institution from which he’d attained his B.Sc. the year before. Then in 1996, aged 20, he earned a Ph.D. from Princeton, turning in a thesis entitled “Three Regularity Results in Harmonic Analysis.” Tao’s long list of awards includes a 2006 Fields Medal, and he is currently a mathematics professor at the University of California, Los Angeles.
  • Stephen Hawking—235. Guest appearances on TV shows such as The Simpsons, Futurama, and Star Trek: The Next Generation have helped cement English astrophysicist Stephen Hawking’s place in the pop cultural domain. Hawking was born in 1942, and in 1959, when he was 17 years old, he received a scholarship to read physics and chemistry at Oxford University. He earned a bachelor’s degree in 1962 and then moved on to Cambridge to study cosmology. Diagnosed with motor neuron disease at the age of 21, Hawking became depressed and almost gave up on his studies. However, inspired by his relationship with his fiancée – and soon to be first wife – Jane Wilde, he returned to his academic pursuits and obtained his Ph.D. in 1965. Hawking is perhaps best known for his pioneering theories on black holes and his bestselling 1988 book A Brief History of Time.

PAST GENIUS:

The individuals above are living.  Let’s take a very quick look at several past geniuses.  I’m sure you know the names.

  • Johann Goethe—210–225
  • Albert Einstein—205–225
  • Leonardo da Vinci—180–220
  • Isaac Newton—190–200
  • James Maxwell—190–205
  • Copernicus—160–200
  • Gottfried Leibniz—182–205
  • William Sidis—200–300
  • Carl Gauss—250–300
  • Voltaire—190–200

As you can see, these guys are heavy hitters.   I strongly suspect there are many we have not mentioned: individuals who achieved but never got the opportunity to, let’s just say, shine.  OK, where does that leave the rest of us? There is GOOD news.  Calvin Coolidge said it best with the following quote:

“Nothing in this world can take the place of persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent.”

President Calvin Coolidge.

I think this says it all.  As always, I welcome your comments.

JUNO SPACECRAFT

July 21, 2016


The following information was taken from the NASA web site and the Machine Design Magazine.

BACKGROUND:

After an almost five-year journey to the solar system’s largest planet, NASA’s Juno spacecraft successfully entered Jupiter’s orbit during a thirty-five (35) minute engine burn. Confirmation the burn was successful was received on Earth at 8:53 p.m. PDT (11:53 p.m. EDT) Monday, July 4. A message from NASA is as follows:

“Independence Day always is something to celebrate, but today we can add to America’s birthday another reason to cheer — Juno is at Jupiter,” said NASA administrator Charlie Bolden. “And what is more American than a NASA mission going boldly where no spacecraft has gone before? With Juno, we will investigate the unknowns of Jupiter’s massive radiation belts to delve deep into not only the planet’s interior, but into how Jupiter was born and how our entire solar system evolved.”

Confirmation of a successful orbit insertion was received from Juno tracking data monitored at the navigation facility at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, as well as at the Lockheed Martin Juno operations center in Littleton, Colorado. The telemetry and tracking data were received by NASA’s Deep Space Network antennas in Goldstone, California, and Canberra, Australia.

“This is the one time I don’t mind being stuck in a windowless room on the night of the 4th of July,” said Scott Bolton, principal investigator of Juno from Southwest Research Institute in San Antonio. “The mission team did great. The spacecraft did great. We are looking great. It’s a great day.”

Preplanned events leading up to the orbital insertion engine burn included changing the spacecraft’s attitude to point the main engine in the desired direction and then increasing the spacecraft’s rotation rate from 2 to 5 revolutions per minute (RPM) to help stabilize it.

The burn of Juno’s 645-Newton Leros-1b main engine began on time at 8:18 p.m. PDT (11:18 p.m. EDT), decreasing the spacecraft’s velocity by 1,212 miles per hour (542 meters per second) and allowing Juno to be captured in orbit around Jupiter. Soon after the burn was completed, Juno turned so that the sun’s rays could once again reach the 18,698 individual solar cells that give Juno its energy.
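Those burn numbers are easy to sanity-check with a little arithmetic. The sketch below uses only the figures quoted above (542 m/s, a 35-minute burn, the 645-newton Leros-1b engine); the implied spacecraft mass at the end is my own rough back-of-the-envelope estimate, not a NASA figure:

```python
# Quick arithmetic check of the reported Jupiter-orbit-insertion burn.
MPS_TO_MPH = 3600 / 1609.344        # meters/second -> miles/hour (exact)

delta_v = 542.0                     # reported velocity change, m/s
burn_time = 35 * 60                 # 35-minute burn, in seconds
thrust = 645.0                      # Leros-1b main engine thrust, newtons

# 542 m/s converts to roughly the 1,212 mph quoted in the article
print(f"delta-v: {delta_v * MPS_TO_MPH:.0f} mph")

accel = delta_v / burn_time         # average deceleration over the burn
print(f"average deceleration: {accel:.3f} m/s^2")

# F = m*a gives a crude implied average mass (ignores propellant burned off)
print(f"implied average spacecraft mass: {thrust / accel:.0f} kg")
```

The conversion lands within a fraction of a mile per hour of the quoted 1,212 mph, which is a nice consistency check on the press-release figures.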

“The spacecraft worked perfectly, which is always nice when you’re driving a vehicle with 1.7 billion miles on the odometer,” said Rick Nybakken, Juno project manager from JPL. “Jupiter orbit insertion was a big step and the most challenging remaining in our mission plan, but there are others that have to occur before we can give the science team the mission they are looking for.”

Can you imagine a 1.7 billion (yes that’s with a “B”) mile journey AND the ability to monitor the process?  This is truly an engineering feat that should make history.   (Too bad our politicians are busy getting themselves elected and reelected.)

Over the next few months, Juno’s mission and science teams will perform final testing on the spacecraft’s subsystems, final calibration of science instruments and some science collection.

“Our official science collection phase begins in October, but we’ve figured out a way to collect data a lot earlier than that,” said Bolton. “Which when you’re talking about the single biggest planetary body in the solar system is a really good thing. There is a lot to see and do here.”

Juno’s principal goal is to understand the origin and evolution of Jupiter. With its suite of nine science instruments, Juno will investigate the existence of a solid planetary core, map Jupiter’s intense magnetic field, measure the amount of water and ammonia in the deep atmosphere, and observe the planet’s auroras. The mission also will let us take a giant step forward in our understanding of how giant planets form and the role these titans played in putting together the rest of the solar system. As our primary example of a giant planet, Jupiter also can provide critical knowledge for understanding the planetary systems being discovered around other stars.

The Juno spacecraft launched on Aug. 5, 2011 from Cape Canaveral Air Force Station in Florida. JPL manages the Juno mission for NASA. Juno is part of NASA’s New Frontiers Program, managed at NASA’s Marshall Space Flight Center in Huntsville, Alabama, for the agency’s Science Mission Directorate. Lockheed Martin Space Systems in Denver built the spacecraft. The California Institute of Technology in Pasadena manages JPL for NASA.

SYSTEMS:

Before we list the systems, let’s take a look at the physical “machine”.

Juno Configuration

As you can see, the design is truly remarkable and includes the following modules:

  • SOLAR PANELS—Juno requires 18,000 solar cells to gather enough energy for its journey, 508 million miles from our sun.  In January, Juno broke the record as the first solar-powered spacecraft to fly further than 493 million miles from the sun.
  • RADIATION VAULT—During its polar orbit, Juno will repeatedly pass through the intense radiation belt that surrounds Jupiter’s equator, charged by ions and particles from Jupiter’s atmosphere and moons suspended in Jupiter’s colossal magnetic field. The radiation belt, which measures 1,000 times the human toxicity level, has a radio-frequency signature that can be detected from Earth and extends into Earth’s orbit.
  • GRAVITY SCIENCE EXPERIMENT—Using advanced gravity science tools; Juno will create a detailed map of Jupiter’s gravitational field to infer Jupiter’s mass distribution and internal structure.
  • VECTOR MAGNETOMETER (MAG)—Juno’s next mission is to map Jupiter’s massive magnetic field, which extends approximately two (2) million miles toward the sun, shielding Jupiter from solar flares.  It also tails out for more than six hundred (600) million miles in solar orbit.  The dynamo is more than 20,000 times greater than that of the Earth.
  • MICROWAVE RADIOMETERS–Microwave radiomometers (MWR) will detect six (6) microwave and radio frequencies generated by the atmosphere’s thermal emissions.  This will aid in determining the depths of various cloud forms.
  • DETAILED MAPPING OF AURORA BOREALIS AND PLASMA CONTENT—As Juno passes Jupiter’s poles, cameral will capture high-resolution images of aurora borealis, and particle detectors will analyze the plasmas responsible for them.  Not only are Jupiter’s auroras much larger than those of Earth, they are also much more frequent because they are created by atmospheric plasma rather than solar flares.
  • JEDI MEASURES HIGH-ENERGY PARTICLES–Three Jupiter energetic particle detector instruments (JEDIs) will measure the angular distribution of high-energy particles as they interact with Jupiter’s upper atmospheres and inner magnetospheres to contribute to Jupiter’s northern and southern lights.
  • JADE MEASURE OF LOW-ENERGY PARTICLES—JADE, the Jovian Aurora Distributions Experiment, works in conjunction with DEDI to measure the angular distribution of lower-energy electrons and ions ranging from zero (0) to thirty (30) electron volts.
  • WAVES MEASURES PLASMA MOVEMENT—The radio/plasma wave experiment, called WAVES, will be used to measure radio frequencies  (50 Hz to 40 MHz) generated by the plasma in the magnetospheres.
  • UVS,JIRAM CAPTURE NORTHERN/SOUTHERN LIGHTS—By capturing wavelength of seventy (70) to two hundred and five (205) nm, an ultraviolet imager/spectrometer (UVS) will generate images of the auroras UV spectrum to view the auroras during the Jovian day.
  • HIGH-RESOLUTION CAMERA—JunoCam, a high-resolution color camera, will capture red, green and blue wavelengths photos of Jupiter’s atmosphere and aurora.  The NASA team expects the camera to last about seven orbits before being destroyed by radiation.
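To put the magnetometer’s “20,000 times stronger” figure in perspective, a simple dipole-field estimate shows why Jupiter’s field dwarfs Earth’s.  This is only an illustrative sketch: the dipole moments and radii below are rough textbook values, and Jupiter’s real field is far more complex than a pure dipole.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def dipole_field(moment, r):
    """Equatorial field strength (tesla) of a magnetic dipole at distance r (m)."""
    return MU0 * moment / (4 * math.pi * r ** 3)

# Approximate dipole moments (A*m^2); Jupiter's is taken as 20,000x Earth's,
# consistent with the figure quoted above.
EARTH_MOMENT = 8.0e22
JUPITER_MOMENT = 20_000 * EARTH_MOMENT

EARTH_RADIUS = 6.371e6    # m
JUPITER_RADIUS = 7.149e7  # m (equatorial)

b_earth = dipole_field(EARTH_MOMENT, EARTH_RADIUS)
b_jupiter = dipole_field(JUPITER_MOMENT, JUPITER_RADIUS)

print(f"Earth surface field:    ~{b_earth * 1e6:.0f} microtesla")
print(f"Jupiter cloud-top field: ~{b_jupiter * 1e6:.0f} microtesla")
```

Even though Jupiter is roughly eleven times Earth’s radius (and field falls off with the cube of distance), its cloud-top field still comes out about an order of magnitude stronger than Earth’s surface field.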

CONCLUSION:

This technology is truly amazing to me.  Think of the planning, the engineering design, the testing, the computer programming needed to bring this program to fruition.  Amazing!

 

MOORE’S LAW

June 10, 2016


There is absolutely no doubt the invention and development of chip technology has changed the world and made possible a remarkable number of devices we seemingly cannot live without.  It has also made possible the miniaturization of electronics considered impossible thirty years ago.  This post is about the rapid improvement in that technology, and those of you who read my posts are probably very familiar with Moore’s Law.  Let us restate it and refresh our memories.

“Moore’s Law” is the observation that, over the history of computing hardware, the number of transistors in a dense integrated circuit has doubled approximately every two years.

Chart of Moore's Law

You can see from the graphic above that the law is represented in graph form with the actual “chip” designations given.  Most people will be familiar with Moore’s Law, which was not so much a law as a prediction made by Intel’s Gordon Moore in 1965.  Currently, the density of components on a silicon wafer is close to reaching its physical limit, but there are promising technologies that may supersede transistors to overcome this “shaky” fact.  Just who is Dr. Gordon Moore?
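The doubling rule is simple enough to put into code.  Here is a quick sketch, assuming a clean two-year doubling period (which real chips only approximate) and using the 1971 Intel 4004, with about 2,300 transistors, as an illustrative baseline:

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project transistor count under Moore's Law from a baseline chip."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

# Project forward a few decades from the 4004 baseline.
for year in (1971, 1981, 1991, 2001, 2011):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

By 2011 the projection lands in the low billions of transistors, which is in fact the right ballpark for high-end processors of that era, a remarkable run for a prediction made in 1965.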

GORDON E. MOORE:

Gordon Earle Moore was born January 3, 1929.  He is an American businessman, co-founder and Chairman Emeritus of Intel Corporation, and the author of Moore’s law.  Moore was born in San Francisco, California, and grew up in nearby Pescadero. He attended Sequoia High School in Redwood City and initially went to San Jose State University.  After two years he transferred to the University of California, Berkeley, from which he received a Bachelor of Science degree in chemistry in 1950.

In September 1950, Moore matriculated at the California Institute of Technology (Caltech), where he received a PhD in chemistry with a minor in physics in 1954. Moore conducted postdoctoral research at the Applied Physics Laboratory at Johns Hopkins University from 1953 to 1956.

Moore joined MIT and Caltech alumnus William Shockley at the Shockley Semiconductor Laboratory division of Beckman Instruments, but left with the “traitorous eight” when Sherman Fairchild agreed to fund their efforts to create the influential Fairchild Semiconductor corporation.

In July 1968, Robert Noyce and Moore founded NM Electronics, which later became Intel Corporation, where Moore served as Executive Vice President until 1975, when he became President.  In April 1979, Moore became Chairman of the Board and Chief Executive Officer, holding both posts until April 1987, when he stepped down as CEO and remained Chairman of the Board. He was named Chairman Emeritus of Intel Corporation in 1997.  Under Noyce, Moore, and later Andrew Grove, Intel pioneered new technologies in the areas of computer memory, integrated circuits, and microprocessor design.  A picture of Dr. Moore is given as follows:

Gordon Moore

JUST HOW DO YOU MAKE A COMPUTER CHIP?

We are going to use Intel as our example although there are several “chip” manufacturers in the world.  The top ten (10) are as follows:

  • Intel = $48.7 billion in sales
  • Samsung = $28.6 billion in sales
  • Texas Instruments = $14 billion in sales
  • Toshiba = $12.7 billion in sales
  • Renesas = $10.6 billion in sales
  • Qualcomm = $10.2 billion in sales
  • ST Microelectronics = $9.7 billion in sales
  • Hynix = $9.3 billion in sales
  • Micron = $7.4 billion in sales
  • Broadcom = $7.2 billion in sales

As you can see, INTEL is by far the biggest, producing the greatest number of computer chips.

The deserts of Arizona are home to Intel’s Fab 32, a $3 billion factory that is performing one of the most complicated electrical engineering feats of our time.  It’s here that processors with components measuring just forty-five (45) millionths of a millimeter across are manufactured, ready to be shipped to motherboard manufacturers all over the world.  Creating these complicated miniature systems is impressive enough, but it’s not the processors’ diminutive size that’s the most startling or impressive part of the process. It may seem an impossible transformation, but these fiendishly complex components are made from nothing more glamorous than sand. Such a transformative feat isn’t simple. The production process requires more than three hundred (300) individual steps.

STEP ONE:

Sand is composed of silica (also known as silicon dioxide), and is the starting point for making a processor. Sand used in the building industry is often yellow, orange or red due to impurities, but the type chosen in the manufacture of silicon is a much purer form known as silica sand, which is usually recovered by quarrying. To extract the element silicon from the silica, it must be reduced (in other words, have the oxygen removed from it). This is accomplished by heating a mixture of silica and carbon in an electric arc furnace to a temperature in excess of 2,000°C.  The carbon reacts with the oxygen in the molten silica to produce carbon dioxide (a by-product) and silicon, which settles to the bottom of the furnace. The remaining silicon is then treated with oxygen to reduce any calcium and aluminum impurities. The end result of this process is a substance referred to as metallurgical-grade silicon, which is up to ninety-nine percent (99%) pure.

This is not nearly pure enough for semiconductor manufacture, however, so the next job is to refine the metallurgical-grade silicon further. The silicon is ground to a fine powder and reacted with gaseous hydrogen chloride in a fluidized bed reactor at 300°C giving a liquid compound of silicon called trichlorosilane.

Impurities such as iron, aluminum, boron and phosphorous also react to give their chlorides, which are then removed by fractional distillation. The purified trichlorosilane is vaporized and reacted with hydrogen gas at 1,100°C so that the elemental silicon is retrieved.

During the reaction, silicon is deposited on the surface of an electrically heated ultra-pure silicon rod to produce a silicon ingot. The end result is referred to as electronic-grade silicon, and has a purity of 99.999999 percent. (Incredible purity.)
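That purity figure is easier to appreciate with a little arithmetic.  A quick illustrative sketch: 99.999999% pure means about one impurity atom per hundred million silicon atoms, and yet even that leaves an enormous absolute number of impurity atoms in every gram.

```python
# Electronic-grade silicon: 99.999999% pure, i.e. ~1 impurity per 10^8 atoms.
AVOGADRO = 6.022e23          # atoms per mole
SILICON_MOLAR_MASS = 28.09   # grams per mole

purity = 0.99999999
impurity_fraction = 1 - purity

atoms_per_gram = AVOGADRO / SILICON_MOLAR_MASS
impurity_atoms_per_gram = atoms_per_gram * impurity_fraction

print(f"Si atoms per gram:       {atoms_per_gram:.2e}")
print(f"Impurity atoms per gram: {impurity_atoms_per_gram:.2e}")
```

Roughly 10^14 stray atoms per gram still remain, which is why later doping steps, where impurities are added deliberately and precisely, matter so much.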

STEP TWO:

Although pure to a very high degree, raw electronic-grade silicon has a polycrystalline structure. In other words, it’s made of many small silicon crystals, with defects called grain boundaries. Because these anomalies affect local electronic behavior, polycrystalline silicon is unsuitable for semiconductor manufacturing. To turn it into a usable material, the silicon must be transformed into single crystals that have a regular atomic structure. This transformation is achieved through the Czochralski Process. Electronic-grade silicon is melted in a rotating quartz crucible and held at just above its melting point of 1,414°C. A tiny crystal of silicon is then dipped into the molten silicon and slowly withdrawn while being continuously rotated in the opposite direction to the rotation of the crucible. The crystal acts as a seed, causing silicon from the crucible to crystallize around it. This builds up a rod – called a boule – that comprises a single silicon crystal. The diameter of the boule depends on the temperature in the crucible, the rate at which the crystal is ‘pulled’ (which is measured in millimeters per hour) and the speed of rotation. A typical boule measures 300mm in diameter.

STEP THREE:

Integrated circuits are essentially planar, which is to say that they’re formed on the surface of the silicon. To maximize the surface area of silicon available for making chips, the boule is sliced up into discs called wafers. The wafers are just thick enough to allow them to be handled safely during semiconductor fabrication. 300mm wafers are typically 0.775mm thick. Sawing is carried out using a wire saw that cuts multiple slices simultaneously, in the same way that some kitchen gadgets cut an egg into several slices in a single operation.
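To get a feel for the slicing step, here is a back-of-envelope estimate of how many wafers one boule yields.  The 0.775mm wafer thickness is from the text above; the kerf width (material lost to the saw) and the boule length are assumptions I have made purely for illustration, not Intel figures.

```python
def wafers_from_boule(boule_length_mm, wafer_thickness_mm=0.775, kerf_mm=0.15):
    """Estimate wafer yield from one boule: each slice consumes one
    wafer thickness plus the kerf lost to the wire saw."""
    return int(boule_length_mm / (wafer_thickness_mm + kerf_mm))

# A hypothetical 1.5 m usable boule length:
print(wafers_from_boule(1500))  # -> 1621
```

Well over a thousand wafers per boule, which is part of why the enormous cost of growing and refining the crystal can be spread across so many finished processors.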

Silicon saws differ from kitchen tools in that the wire is constantly moving and carries with it a slurry of silicon carbide, the same abrasive material that forms the surface of ‘wet-dry’ sandpaper. The sharp edges of each wafer are then smoothed to prevent the wafers from chipping during later processes.

Next, in a procedure called ‘lapping’, the surfaces are polished using an abrasive slurry until the wafers are flat to within an astonishing 2μm (two thousandths of a millimeter). The wafer is then etched in a mixture of nitric, hydrofluoric and acetic acids. The nitric acid oxidizes the surfaces to give a thin layer of silicon dioxide – which the hydrofluoric acid immediately dissolves away to leave a clean silicon surface – and the acetic acid controls the reaction rate. The result of all this refining and treating is an even smoother and cleaner surface.

STEP FOUR:

In many of the subsequent steps, the electrical properties of the wafer will be modified through exposure to ion beams, hot gases and chemicals. But this needs to be done selectively to specific areas of the wafer in order to build up the circuit.  A multistage process is used to create an oxide layer in the shape of the required circuit features. In some cases, this procedure can be achieved using ‘photoresist’, a photosensitive chemical not dissimilar to that used in making photographic film (just as described in steps B, C, and D below).

Where hot gases are involved, however, the photoresist would be destroyed, making another, more complicated method of masking the wafer necessary. To overcome the problem, a patterned oxide layer is applied to the wafer so that the hot gases only reach the silicon in those areas where the oxide layer is missing. Applying the oxide layer mask to the wafer is a multistage process, illustrated as follows.

(A) The wafer is heated to a high temperature in a furnace. The surface layer of silicon reacts with the oxygen present to create a layer of silicon dioxide.

(B) A layer of photoresist is applied. The wafer is spun in a vacuum so that the photoresist spreads out evenly over the surface before being baked dry.

(C) The wafer is exposed to ultraviolet light through a photographic mask or film. This mask defines the required pattern of circuit features. This process has to be carried out many times, once for each chip or rectangular cluster of chips on the wafer. The film is moved between each exposure using a machine called a ‘stepper’.

(D) The next stage is to develop the latent circuit image. This process is carried out using an alkaline solution. During this process, those parts of the photoresist that were exposed to the ultraviolet soften in the solution and are washed away.

(E) The photoresist isn’t sufficiently durable to withstand the hot gases used in some steps, but it is able to withstand hydrofluoric acid, which is now used to dissolve those parts of the silicon oxide layer where the photoresist has been washed away.

(F) Finally, a solvent is used to remove the remaining photoresist, leaving a patterned oxide layer in the shape of the required circuit features.
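The six steps above can be sketched as operations on a tiny grid of booleans, where True means material is present at that spot.  This is a toy model of the masking logic only, not a process simulation; the 4x4 checkerboard mask is an invented example pattern.

```python
# Toy model of steps (A)-(F): each layer is a grid of booleans (True = present).
SIZE = 4

# (A)+(B): oxide and photoresist initially cover the whole wafer.
oxide = [[True] * SIZE for _ in range(SIZE)]
resist = [[True] * SIZE for _ in range(SIZE)]

# (C): the mask; True means UV light passes through at that spot.
mask = [[(r + c) % 2 == 0 for c in range(SIZE)] for r in range(SIZE)]

# (D): develop - resist that was exposed to UV softens and washes away.
resist = [[resist[r][c] and not mask[r][c] for c in range(SIZE)]
          for r in range(SIZE)]

# (E): hydrofluoric acid dissolves oxide wherever the resist is gone.
oxide = [[oxide[r][c] and resist[r][c] for c in range(SIZE)]
         for r in range(SIZE)]

# (F): solvent strips the remaining resist; the patterned oxide stays behind.
resist = [[False] * SIZE for _ in range(SIZE)]

# The surviving oxide is the exact inverse of the mask pattern.
assert all(oxide[r][c] != mask[r][c] for r in range(SIZE) for c in range(SIZE))
print("patterned oxide, first row:", oxide[0])
```

The point of the model is that the pattern is transferred twice, from mask to resist and from resist to oxide, ending up with oxide exactly where the mask blocked the light.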

STEP FIVE:

The fundamental building block of a processor is a type of transistor called a MOSFET.  There are “P” channels and “N” channels. The first step in creating a circuit is to create n-type and p-type regions. The method Intel uses for its 90nm process and beyond is as follows:

(A) The wafer is exposed to a beam of boron ions. These implant themselves into the silicon through the gaps in a layer of photoresist to create areas called ‘p-wells’. These are, confusingly enough, used in the n-channel MOSFETs.

A boron ion is a boron atom that has had an electron removed, thereby giving it a positive charge. This charge allows the ions to be accelerated electrostatically in much the same way that electrons are accelerated towards the front of a CRT television, giving them enough energy to become implanted into the silicon.
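That electrostatic acceleration follows directly from the implant energy E = qV.  A quick sketch of the resulting ion speed; note that the 20 kV accelerating potential here is a made-up example value, not an Intel process specification.

```python
import math

ELEMENTARY_CHARGE = 1.602e-19   # coulombs
BORON_MASS = 11 * 1.661e-27     # kg, approx. mass of a boron-11 ion

def implant_velocity(voltage):
    """Speed (m/s) of a singly charged boron ion accelerated
    through `voltage` volts: E = qV, then v = sqrt(2E/m)."""
    energy = ELEMENTARY_CHARGE * voltage  # kinetic energy gained, joules
    return math.sqrt(2 * energy / BORON_MASS)

# Hypothetical 20 kV accelerating potential:
v = implant_velocity(20_000)
print(f"~{v / 1000:.0f} km/s")
```

Even a modest accelerating voltage gives the ion a speed of hundreds of kilometers per second, which is what lets it bury itself beneath the silicon surface rather than bouncing off.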

(B) A different photoresist pattern is now applied, and a beam of phosphorous ions is used in the same way to create ‘n-wells’ for the p-channel MOSFETs.

(C) In the final ion implantation stage, following the application of yet another photoresist, another beam of phosphorous ions is used to create the n-type regions in the p-wells that will act as the source and drain of the n-channel MOSFETs. This has to be carried out separately from the creation of the n-wells because it needs a greater concentration of phosphorous ions to create n-type regions in p-type silicon than it takes to create n-type regions in pure, un-doped silicon.

(D) Next, following the deposition of a patterned oxide layer (because, once again, the photoresist would be destroyed by the hot gas used here), a layer of silicon-germanium doped with boron (which is a p-type material) is applied.

That’s just about it.  I know this is long and torturous but we did say there were approximately three hundred steps in producing a chip.

OVERALL SUMMARY:

The way a chip works is the result of how a chip’s transistors and gates are designed and the ultimate use of the chip. Design specifications that include chip size, number of transistors, testing, and production factors are used to create schematics—symbolic representations of the transistors and interconnections that control the flow of electricity through a chip.

Designers then make stencil-like patterns, called masks, of each layer. Designers use computer-aided design (CAD) workstations to perform comprehensive simulations and tests of the chip functions. To design, test, and fine-tune a chip and make it ready for fabrication takes hundreds of people.

The “recipe” for making a chip varies depending on the chip’s proposed use. Making chips is a complex process requiring hundreds of precisely controlled steps that result in patterned layers of various materials built one on top of another.

A photolithographic “printing” process is used to form a chip’s multilayered transistors and interconnects (electrical circuits) on a wafer. Hundreds of identical processors are created in batches on a single silicon wafer.  A JPEG of an INTEL wafer is given as follows:

Chip Wafer

Once all the layers are completed, a computer performs a process called wafer sort test. The testing ensures that the chips perform to design specifications.
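Wafer sort results are often summarized with a simple yield model.  Here is a sketch using the classic Poisson defect model; the defect density and die area are invented for illustration and are not Intel figures.

```python
import math

def poisson_yield(defect_density_per_cm2, die_area_cm2):
    """Fraction of dies expected to pass wafer sort under a Poisson
    defect model: yield = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Hypothetical: 0.2 defects per cm^2 on a 1.5 cm^2 die.
y = poisson_yield(0.2, 1.5)
print(f"expected yield: {y:.1%}")
```

The model captures why chip size matters so much: doubling the die area more than doubles the fraction of dies lost to random defects, since the exponent scales with area.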

After fabrication, it’s time for packaging. The wafer is cut into individual pieces called die. The die is packaged between a substrate and a heat spreader to form a completed processor. The package protects the die and delivers critical power and electrical connections when placed directly into a computer circuit board or mobile device, such as a smartphone or tablet.  The chip below is an INTEL Pentium 4 version.

INTEL Pentium Chip

Intel makes chips that have many different applications and use a variety of packaging technologies. Intel packages undergo final testing for functionality, performance, and power. Chips are electrically coded, visually inspected, and packaged in protective shipping material for shipment to Intel customers and retail.

CONCLUSIONS:

Genius is a wonderful thing and Dr. Gordon E. Moore was certainly a genius.  I think such genius is never celebrated enough.  We know the entertainment “stars,” sports “stars,” and political “wannabes” get their press coverage, but nine out of ten individuals do not know those who have contributed significantly to better lives for us. People such as Dr. Moore.   Today is the funeral of Cassius Clay, AKA Muhammad Ali.  A great boxer and, we are told, a really kind man.  I have no doubt both are true.  His funeral has been televised and ongoing for about four (4) hours now.  Do you think Dr. Moore will get the recognition Mr. Ali is getting when he dies?  Just a thought.

R & D SPINOFFS

March 12, 2016


Last week I posted an article on WordPress entitled “Global Funding”.  The post was a prognostication relative to total global funding for research and development in all disciplines from 2016 through 2020.  I certainly hope there are no arguments as to the benefits of R & D.  R & D is the backbone of technology; the manner in which science pushes the technological envelope is research and development.  The National Aeronautics and Space Administration (NASA) has provided a great number of spinoffs that greatly affect everyday lives and remove drudgery from activities that would otherwise consume a great deal of time and just plain sweat.  The magazine “NASA Tech Briefs,” March 2016, presented forty such spinoffs demonstrating the great benefits of NASA programs over the years.  I’m not going to present all forty, but let’s take a look at a few to get a flavor of how NASA R & D has influenced consumers the world over.  Here we go.

  • DIGITAL IMAGE SENSORS—The CMOS active pixel sensor in most digital image-capturing devices was invented when NASA needed to miniaturize cameras for interplanetary missions.  It is also widely used in medical imaging and dental X-ray devices.
  • Aeronautical Winglets—Key aerodynamic advances made by NASA researchers led to the up-turned tips of wings known as “winglets.”  Winglets are used by nearly all modern aircraft and have saved literally billions of dollars in fuel costs.
  • Precision GPS—Beginning in the early 1990s, NASA’s Jet Propulsion Laboratories (JPL) developed software capable of correcting for GPS errors.  NASA monitors the integrity of global GPS data in real time for the U.S. Air Force, which administers the positioning service world-wide.
  • Memory Foam—Memory foam was invented by NASA-funded researchers looking for ways to keep test pilots cushioned during flights.  Today, memory foam makes for more comfortable beds, couches, and chairs, as well as better shoes, movie theater seats, and even football helmets.
  • Truck Aerodynamics—Nearly all trucks on the road have been shaped by NASA.  Agency research in aerodynamic design led to the curves and contours that help modern big rigs cut through the air with less drag. Perhaps as much as 6,800 gallons of diesel per year per truck has been saved.
  • Invisible Braces for Teeth—A company working with NASA invented the translucent ceramic that became the critical component for the first “invisible” dental braces, which went on to become one of the best-selling orthodontic products of all time.
  • Tensile Fabric for Architecture—A material originally developed for spacesuits can be seen all over the world in stadiums, arenas, airports, pavilions, malls, and museums. BirdAir Inc. developed the fabric from a fiberglass and Teflon composite that once protected Apollo astronauts as they roamed the lunar surface.  Today, that same fabric shades and protects people in public places.
  • Supercritical Wing—NASA engineers at Langley Research Center improved wing designs resulting in remarkable performance of an aircraft approaching the speed of sound.
  • Phase-change Materials—Research on next-generation spacesuits included the development of phase-change materials, which can absorb, hold, and release heat to keep people comfortable.  This technology is now found in blankets, bed sheets, dress shirts, T-shirts, undergarments, and other products.
  • Cardiac Pump—Hundreds of people in need of a heart transplant have been kept alive thanks to a cardiac pump designed with the help of NASA expertise in simulating fluid-flow through rocket engines.  This technology served as a “bridge” to the transplant methodology.
  • Flexible Aerogel—Aerogel is a porous material in which the liquid component of the gel has been carefully dried out and replaced by gas, leaving a solid that is almost entirely air.  It long held the record as the world’s lightest solid, and is one of the most effective insulators in existence.
  • Digital Fly-By-Wire—For the first seventy (70) years of human flight, pilots used controls that connected directly to aircraft components through cables and pushrods. A partnership between NASA and Draper Laboratory in the 1970s resulted in the first plane flown digitally, in which a computer collected all of the input from the pilot’s controls and used that information to command the aerodynamic surfaces.
  • Cochlear Implants—One of the pioneers in early cochlear implant technology was Adam Kissiah, an engineer at Kennedy Space Center.  Mr. Kissiah was hearing-impaired and used NASA technology to greatly improve hearing devices by developing implants that worked by electric impulses rather than sound amplification.
  • Radiant Barrier—To keep people and spacecraft safe from harmful radiation, NASA developed a method for depositing a thin metal coating on a material to make it highly reflective. On Earth, it has become known as radiant barrier technology.
  • Gigapan Photography—Since 2004, new generations of Mars rovers have been stunning the world with high-resolution imagery.  Though equipped with only one-megapixel cameras, the Spirit and Opportunity rovers have a robotic platform and software that allow them to combine dozens of shots into a single photograph.
  • Anti-icing Technology—NASA has spent many years solving problems related to ice accumulation on flight surfaces.  These breakthroughs have been applied to commercial aircraft flight.
  • Emergency Blanket—So-called space blankets, also known as emergency blankets, were first developed by NASA in 1964.  The highly reflective insulators are often included in emergency kits, and are used by long-distance runners and fire-team personnel.
  • Firefighter Protection—NASA helped develop a line of polymer textiles for use in spacesuits and vehicles.  Dubbed, PBI, the heat and flame-resistant fiber is now used in numerous firefighting, military, motor sports, and other applications.

These are just a few of the many NASA spinoffs that have solved down-to-earth problems for people over the world.  Let’s continue funding NASA to ensure future wonderful and usable technology.
