CLOUD COMPUTING

May 20, 2017


OK, you have heard the term over and over again, but just what is cloud computing? Simply put, cloud computing is the delivery of computing services (servers, storage, databases, networking, software, analytics, and more) over the Internet ("the cloud"). Companies offering these computing services are called cloud providers, and they typically charge for cloud computing services based on usage, similar to how you're billed for water or electricity at home. It is a form of Internet-based computing that provides shared processing resources and data to computers and other devices on demand: a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (computer networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions give users and enterprises the ability to store and process their data in either privately owned or third-party data centers that may be located far from the user, anywhere from across a city to across the world. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, much like a public utility such as the electricity grid.
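To make "rapidly provisioned and released" concrete, here is a minimal sketch, assuming an AWS account and the boto3 Python SDK; the region and AMI ID are placeholders, and real usage would also handle credentials and error checking. It simply rents a small virtual server on demand and then releases it so the metered billing stops.

```python
# Minimal sketch: provisioning and releasing a virtual server on demand
# using the AWS SDK for Python (boto3). The region and AMI ID below are
# placeholders; substitute values valid for your own account.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Rent a small virtual server ("instance"), billed by usage...
response = ec2.run_instances(
    ImageId="ami-12345678",      # placeholder machine image
    InstanceType="t2.micro",     # small, inexpensive instance size
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)

# ...and release it when the peak load has passed, so billing stops.
ec2.terminate_instances(InstanceIds=[instance_id])
```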

ADVANTAGES AND DISADVANTAGES:

Any new technology has an upside and downside. There are obviously advantages and disadvantages when using the cloud.  Let’s take a look.

 Advantages

  • Lower cost for desktop clients since the applications are running in the cloud. This means clients with smaller hard drive requirements and possibly even no CD or DVD drives.
  • Peak computing needs of a business can be offloaded onto cloud applications, saving the funds normally used for additional in-house servers.
  • Lower maintenance costs. This includes both hardware and software cost reductions, since client machines can be much less expensive and software purchase costs are eliminated altogether for applications running in the cloud.
  • Automatic application software updates for applications in the cloud. This is another maintenance savings.
  • Vastly increased computing power availability. The scalability of the server farm provides this advantage.
  • The scalability of virtual storage provides effectively unlimited storage capacity.

 Disadvantages

  • Requires an “always on” Internet connection.
  • There are clear concerns about data security, e.g., questions like: “If I can get to my data using a web browser, who else can?”
  • Concerns about loss of data.
  • Reliability. Service interruptions are rare but can happen. Google has already had an outage.

MAJOR CLOUD SERVICE PROVIDERS:

The following names are very recognizable.  Everyone knows the “open-market” cloud service providers.

  • AMAZON
  • SALESFORCE
  • GOOGLE
  • IBM
  • MICROSOFT
  • SUN MICROSYSTEMS
  • ORACLE
  • AT & T

PRIVATE CLOUD SERVICE PROVIDERS:

With all the interest in cloud computing as a service, there is also an emerging concept of private clouds. It is a bit reminiscent of the early days of the Internet and the importing of that technology into the enterprise as intranets. The concerns about security and reliability outside corporate control are very real and troublesome aspects of the otherwise attractive technology of cloud computing services. The IT world has not forgotten the eight-hour downtime of Amazon’s S3 cloud service on July 20, 2008. A private cloud means that the technology must be bought, built and managed within the corporation. A company purchases cloud technology usable inside the enterprise for the development of cloud applications that have the flexibility to run on the private cloud or outside on public clouds. This “hybrid environment” is in fact the direction that some believe the enterprise community will be going, and some of the products that support this approach are listed below.

  • Elastra (http://www.elastra.com) is developing a server that can be used as a private cloud in a data center. Tools are available to design applications that will run in both private and public clouds.
  • 3Tetra (http://www.3tetra.com) is developing a grid operating system called ParaScale that will aggregate disk storage.
  • Cassatt (http://www.cassatt.com) will be offering technology that can be used for resource pooling.
  • Ncomputing (http://www.ncomputing.com) has developed a standard desktop PC virtualization software system that allows up to 30 users to use the same PC system with their own keyboard, monitor and mouse. Strong claims are made about savings on PC costs, IT complexity and power consumption by customers in government, industry and education.

CONCLUSION:

OK, clear as mud, right?  For me, the biggest misconception is the terminology itself: the cloud.   The word “cloud” seems to imply an IT system in the sky.  The exact opposite is the case.  The cloud is an earth-based IT system serving as a universal host.  A network of computers. A network of servers.  No cloud.

DIGITAL READINESS GAPS

April 23, 2017


This post uses as one reference the “Digital Readiness Gaps” report by the Pew Research Center.  That report explores, as we will now, the attitudes and behaviors that underpin individuals’ preparedness and comfort in using digital tools for learning.

HOW DO ADULTS LEARN?  Good question. I suppose there are many ways, but I can certainly tell you that adults my age, over seventy, learn in a manner much different from my grandchildren, under twenty.  I think of “book learning” first and digital as a backup.  They head straight for their iPad or iPhone.  GOOGLE is a verb and not a company name as far as they are concerned.  (I’m actually getting there with the digital search methods and now start with GOOGLE, but I reference multiple sources before being satisfied with only one. For some reason, I still trust books as opposed to digital.)

According to Malcolm Knowles, who was a pioneer in adult learning, there are six (6) main characteristics of adult learners, as follows:

  • Adult learning is self-directed/autonomous
    Adult learners are actively involved in the learning process such that they make choices relevant to their learning objectives.
  • Adult learning utilizes knowledge & life experiences
    Under this approach educators encourage learners to connect their past experiences with their current knowledge-base and activities.
  • Adult learning is goal-oriented
    The motivation to learn is increased when the relevance of the “lesson” through real-life situations is clear, particularly in relation to the specific concerns of the learner.
  • Adult learning is relevancy-oriented
    One of the best ways for adults to learn is by relating the assigned tasks to their own learning goals. If it is clear that the activities they are engaged in directly contribute to achieving their personal learning objectives, then they will be inspired and motivated to engage in projects and successfully complete them.
  • Adult learning highlights practicality
    Placement is a means of helping students to apply the theoretical concepts learned inside the classroom to real-life situations.
  • Adult learning encourages collaboration
    Adult learners thrive in collaborative relationships with their educators. When learners are considered by their instructors as colleagues, they become more productive. When their contributions are acknowledged, then they are willing to put out their best work.

One very important note: these six characteristics encompass the “digital world” and conventional methods; i.e. books, magazines, newspapers, etc.

As mentioned above, a recent Pew Research Center report shows that adoption of technology for adult learning in both personal and job-related activities varies by people’s socio-economic status, their race and ethnicity, and their level of access to home broadband and smartphones. Another report showed that some users are unable to make the internet and mobile devices function adequately for key activities such as looking for jobs.

Specifically, the Pew report made its assessment of American adults according to five main factors:

  • Their confidence in using computers
  • Their facility with getting new technology to work
  • Their use of digital tools for learning
  • Their ability to determine the trustworthiness of online information
  • Their familiarity with contemporary “education tech” terms

It is important to note that the report addresses only adults’ proclivity for digital learning and not learning by any other means; that is, the availability of digital devices to facilitate learning. If we look at the “conglomerate” from the PIAA Fact Sheet, we see the following:

The Pew analysis details several distinct groups of Americans who fall along a spectrum of digital readiness, from relatively more prepared to relatively hesitant. Those who tend to be hesitant about embracing technology in learning are below average on the measures of readiness, such as needing help with new electronic gadgets or having difficulty determining whether online information is trustworthy. Those whose profiles indicate a higher level of preparedness for using tech in learning are collectively above average on measures of digital readiness.  The breakdown below indicates their classifications.

The breakdown is as follows:

Relatively Hesitant – 52% of adults in three distinct groups. This overall cohort is made up of three different clusters of people who are less likely to use digital tools in their learning. This has to do, in part, with the fact that these groups have generally lower levels of involvement with personal learning activities. It is also tied to their professed lower level of digital skills and trust in the online environment.

  • A group of 14% of adults make up The Unprepared. This group has both low levels of digital skills and limited trust in online information. The Unprepared rank at the bottom of those who use the internet to pursue learning, and they are the least digitally ready of all the groups.
  • We call one small group Traditional Learners, and they make up 5% of Americans. They are active learners, but use traditional means to pursue their interests. They are less likely to fully engage with digital tools because they have concerns about the trustworthiness of online information.
  • A larger group, The Reluctant, make up 33% of all adults. They have higher levels of digital skills than The Unprepared, but very low levels of awareness of new “education tech” concepts and relatively lower levels of performing personal learning activities of any kind. This is correlated with their general lack of use of the internet in learning.

Relatively more prepared – 48% of adults in two distinct groups. This cohort is made up of two groups who are above average in their likelihood of using online tools for learning.

  • A group we call Cautious Clickers comprises 31% of adults. They have tech resources at their disposal, trust and confidence in using the internet, and the educational underpinnings to put digital resources to use for their learning pursuits. But they have not waded into e-learning to the extent the Digitally Ready have and are not as likely to have used the internet for some or all of their learning.
  • Finally, there are the Digitally Ready. They make up 17% of adults, and they are active learners and confident in their ability to use digital tools to pursue learning. They are aware of the latest “ed tech” tools and are, relative to others, more likely to use them in the course of their personal learning. The Digitally Ready, in other words, have high demand for learning and use a range of tools to pursue it – including, to an extent significantly greater than the rest of the population, digital outlets such as online courses or extensive online research.

CONCLUSIONS:

To me, one of the greatest lessons from my university days is this: NEVER STOP LEARNING.  I had one professor, Dr. Bob Maxwell, who told us the half-life of a graduate engineer is approximately five (5) years.  If you stop learning, the information you have will become obsolete in five years.  At the pace of technology today, that may be five months.  You never stop learning AND you embrace current technology.  In other words: do digital. Digital is your friend.  GOOGLE, no matter how flawed, can give you answers much quicker than other sources, and it’s readily available and just plain handy.  At least start there; then trust, but verify.


Biomedical Engineering may be a fairly new term to some of you.   What is a biomedical engineer?  What do they do? What companies do they work for? What educational background is necessary for becoming a biomedical engineer?  These are good questions.  From LiveScience we have the following definition:

“Biomedical engineering, or bioengineering, is the application of engineering principles to the fields of biology and health care. Bioengineers work with doctors, therapists and researchers to develop systems, equipment and devices in order to solve clinical problems.”

Biomedical engineering has evolved over the years in response to advancements in science and technology.  This is NOT a new classification for engineering involvement.  Engineers have been at this for a while.  Throughout history, humans have made increasingly more effective devices to diagnose and treat diseases and to alleviate, rehabilitate or compensate for disabilities or injuries. One example is the evolution of hearing aids to mitigate hearing loss through sound amplification. The ear trumpet, a large horn-shaped device that was held up to the ear, was the only “viable form” of hearing assistance until the mid-20th century, according to the Hearing Aid Museum. Electrical devices had been developed before then, but were slow to catch on, the museum said on its website.

The possibilities of a bioengineer’s charge are broad.  The equipment envisioned, designed, prototyped, tested and eventually commercialized has made a resounding contribution, and added value, to our healthcare system.  OK, that’s all well and good, but exactly what do bioengineers do on a daily basis?  What do they hope to accomplish?   As you will see below, the world of the bioengineer can be somewhat complex, with many options available.

The breadth of activity of biomedical engineers is significant. The field has moved from being concerned primarily with the development of medical devices in the 1950s and 1960s to include a much wider-ranging set of activities. The field of biomedical engineering now includes many new career areas. These areas include:

  • Application of engineering system analysis (physiologic modeling, simulation, and control) to biological problems
  • Detection, measurement, and monitoring of physiologic signals (i.e., biosensors and biomedical instrumentation)
  • Diagnostic interpretation via signal-processing techniques of bioelectric data (a brief sketch follows this list)
  • Therapeutic and rehabilitation procedures and devices (rehabilitation engineering)
  • Devices for replacement or augmentation of bodily functions (artificial organs)
  • Computer analysis of patient-related data and clinical decision making (i.e., medical informatics and artificial intelligence)
  • Medical imaging; that is, the graphical display of anatomic detail or physiologic function
  • The creation of new biologic products (i.e., biotechnology and tissue engineering)
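To make the signal-processing item above a little more concrete, here is a minimal, hypothetical sketch using NumPy and SciPy (not any particular medical product): it band-pass filters a synthetic, ECG-like waveform, the kind of conditioning step a biomedical instrumentation engineer might prototype.

```python
# Hypothetical sketch of one signal-processing step for bioelectric data:
# band-pass filtering a noisy, ECG-like waveform with SciPy.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 360.0                      # sampling rate in Hz (typical for ECG recordings)
t = np.arange(0, 10, 1 / fs)    # ten seconds of samples

# Synthetic "ECG-like" signal: a 1.2 Hz rhythm plus baseline wander and noise
signal = (np.sin(2 * np.pi * 1.2 * t)
          + 0.5 * np.sin(2 * np.pi * 0.3 * t)   # baseline wander
          + 0.2 * np.random.randn(t.size))      # measurement noise

# Band-pass filter (0.5-40 Hz), a common choice for ECG conditioning
b, a = butter(N=4, Wn=[0.5, 40.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, signal)

print("raw std: %.3f, filtered std: %.3f" % (signal.std(), filtered.std()))
```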

Typical pursuits of biomedical engineers include:

  • Research in new materials for implanted artificial organs
  • Development of new diagnostic instruments for blood analysis
  • Writing software for analysis of medical research data
  • Analysis of medical device hazards for safety and efficacy
  • Development of new diagnostic imaging systems
  • Design of telemetry systems for patient monitoring
  • Design of biomedical sensors
  • Development of expert systems for diagnosis and treatment of diseases
  • Design of closed-loop control systems for drug administration
  • Modeling of the physiologic systems of the human body
  • Design of instrumentation for sports medicine
  • Development of new dental materials
  • Design of communication aids for individuals with disabilities
  • Study of pulmonary fluid dynamics
  • Study of biomechanics of the human body
  • Development of material to be used as replacement for human skin

I think you will agree that these areas of interest encompass any one of several engineering disciplines, i.e., mechanical, chemical, electrical, computer science, and even civil engineering as applied to facilities and hospital structures.


It really does creep up on you, the pain that is.  It was minimal at first, for a few months, but at least livable.  I thought I could exercise and stretch to lessen the discomfort, and that did work to a great degree.  That was approximately seven (7) months ago. Then reality set in: the pain was so great that something had to be done.

In the decade of the eighties, I was an avid runner with thoughts of running a marathon or even marathons. My dream was to run the New York City and Boston Marathons first, then concentrate on local 10K events. After one year I would concentrate on the Atlanta Marathon; at least that was the plan.  I was clocking about twenty to thirty miles per week with that goal in mind.    All of my running was on pavement, with three five-mile runs on Monday, Wednesday and Friday and a ten-mile run on Saturday.  It did seem reasonable. I would drive the courses to get exact mileage and vary the routes just to mix it up a little and bring about new scenery.  After several weeks, I noticed pains starting to develop around the twenty-five-mile-per-week mark.  They did go away but always returned toward the latter part of each week.   Medical examinations would later show the beginning of arthritis in my right hip.  I shortened my distances hoping to alleviate the pain, and that worked to some extent for a period of time.

Time caught up with me.  The pains were so substantial I could not tie my shoe laces or stoop to pick up an article on the floor.   It was time to pull the trigger.

TOTAL HIP REPLACEMENT:

In a total hip replacement (also called total hip arthroplasty), the damaged bone and cartilage is removed and replaced with prosthetic components.

  • The damaged femoral head is removed and replaced with a metal stem that is placed into the hollow center of the femur. The femoral stem may be either cemented or “press fit” into the bone. One of the first steps in the procedure is dislocating the hip so that the damaged femoral head can be removed.
  • A metal or ceramic ball is placed on the upper part of the stem. This ball replaces the damaged femoral head that was removed.
  • The damaged cartilage surface of the socket (acetabulum) is removed and replaced with a metal socket. Screws or cement are sometimes used to hold the socket in place.
  • A plastic, ceramic, or metal spacer is inserted between the new ball and the socket to allow for a smooth gliding surface.

I chose to have an epidural so recovery would be somewhat quicker and the aftereffects lessened.  I do not regret that choice and would recommend it to anyone undergoing hip replacement.  One day home and I’m following my doctor’s orders to a “T”, doing everything and then some to make sure I touch all of the bases.  I was very tempted to pull up YouTube to see how the surgery was accomplished, but after hearing it was more carpentry than medicine, I decided I would delay that investigation for a year, or forever.  Some things I just might not need to know.

    Sorry for this post being somewhat short but the meds are wearing off and I need to “reload”.  I promise to do better in the very near future.

THE NEXT FIVE (5) YEARS

February 15, 2017


As you well know, there are many projections relative to economies, the stock market, sports teams, entertainment, politics, technology, etc.   People the world over have given their projections for what might happen in 2017.  The world of computing technology is absolutely no different.  Certain information for this post is taken from Computer magazine’s website (computer.org/computer).  These guys are pretty good at projections and have been correct multiple times over the past two decades.  They take their information from the IEEE.

The IEEE Computer Society is the world’s leading membership organization dedicated to computer science and technology. Serving more than 60,000 members, the IEEE Computer Society is the trusted information, networking, and career-development source for a global community of technology leaders that includes researchers, educators, software engineers, IT professionals, employers, and students.  In addition to conferences and publishing, the IEEE Computer Society is a leader in professional education and training, and has forged development and provider partnerships with major institutions and corporations internationally. These rich, self-selected, and self-paced programs help companies improve the quality of their technical staff and attract top talent while reducing costs.

With these credentials, you might expect them to be on the cutting edge of computer technology and development and to be ahead of the curve as far as computer technology projections go.  Let’s take a look.  Some of this absolutely blows me away.

HUMAN-BRAIN INTERFACE:

This effort first started within the medical profession and is continuing as research progresses.  It’s taken time but after more than a decade of engineering work, researchers at Brown University and a Utah company, Blackrock Microsystems, have commercialized a wireless device that can be attached to a person’s skull and transmit via radio thought commands collected from a brain implant. Blackrock says it will seek clearance for the system from the U.S. Food and Drug Administration, so that the mental remote control can be tested in volunteers, possibly as soon as this year.

The device was developed by a consortium, called BrainGate, which is based at Brown and was among the first to place implants in the brains of paralyzed people and show that electrical signals emitted by neurons inside the cortex could be recorded, then used to steer a wheelchair or direct a robotic arm (see “Implanting Hope”).

A major limit to these provocative experiments has been that patients can only use the prosthetic with the help of a crew of laboratory assistants. The brain signals are collected through a cable screwed into a port on their skull, then fed along wires to a bulky rack of signal processors. “Using this in the home setting is inconceivable or impractical when you are tethered to a bunch of electronics,” says Arto Nurmikko, the Brown professor of engineering who led the design and fabrication of the wireless system.

[Figure: hardware capabilities projection]

Unless you have been living in a tree house for the last twenty years you know digital security is a huge problem.  IT professionals and companies writing code will definitely continue working on how to make our digital world more secure.  That is a given.

EXASCALE COMPUTING:

We can forget Moore’s Law, which refers to an observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention.  Moore’s Law predicts that this trend will continue into the foreseeable future. Although the pace has slowed, the number of transistors per square inch has since doubled approximately every 18 months, and this is used as the current definition of Moore’s Law.  We are well beyond that, with processing speed progressing at “warp six”.
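As a back-of-the-envelope illustration of what “doubling approximately every 18 months” implies (the starting transistor count and time span below are made up for the example, not measured data):

```python
# Back-of-the-envelope illustration of Moore's Law style doubling.
# Starting transistor count and time span are illustrative, not measured data.
start_count = 1_000_000        # assume a chip with one million transistors
years = 15
doubling_period_years = 1.5    # "approximately every 18 months"

doublings = years / doubling_period_years
end_count = start_count * 2 ** doublings
print(f"After {years} years: about {end_count:,.0f} transistors")
# 15 years / 1.5 years per doubling = 10 doublings -> a 1,024x increase
```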

NON-VOLATILE MEMORY:

If you are an old guy like me, you can remember when computer memory cost an arm and a leg.  Take a look at the JPEG below and you get an idea as to how memory costs have decreased over the years.

[Figure: hard drive cost per gigabyte over time]

As you can see, costs have dropped remarkably over the years.

PHOTONICS:

[Figure: text describing the photonics prediction]

POWER-CONSERVATIVE MULTICORES:

[Figure: text describing the power-conservative multicores prediction]

CONCLUSION:

If you combine the above predictions with 1.) Big Data, 2.) the Internet of Things (IoT), 3.) Wearable Technology, 4.) Manufacturing 4.0, 5.) Biometrics, and other fast-moving technologies, you have a world in which “only the adventurous thrive”.  If you do not like change, I recommend you enroll in a monastery.  You will not survive gracefully with technology on the rampage. Just a thought.

DIALYSIS PUMPS

February 8, 2017


I entered the university shortly after Sir Isaac Newton and Gottfried Leibniz invented calculus. (OK, I’m not quite that old, but you do get the picture.) At any rate, I’ve been a mechanical engineer for a lengthy period of time.  If I had to do it all over again, I would choose Biomedical Engineering instead of mechanical engineering.  Biomedical really fascinates me.  The medical “hardware” and software available today is absolutely marvelous.  As with most great technologies, it has been evolutionary instead of revolutionary.    One such evolution has been the development of the insulin pump to facilitate administering insulin to patients suffering with diabetes.

On my way to exercise on Monday, Wednesday and Friday, I pass three dialysis clinics.  I am amazed that on some days the parking lots are not only full, but cars are parked on the roads on either side of the buildings. Almost always, I see at least one ambulance parked in front of the clinic, having delivered a patient to the facility.  In Chattanooga proper there are nine (9) clinics, and there are approximately 3,306 dialysis centers in the United States. These centers employ 127,671 individuals and bring in twenty-two billion dollars ($22B) in revenue, with a four-point-four percent (4.4%) annual growth rate. Truly, diabetes has reached epidemic proportions in our country.

Diabetes is not only one of the most common chronic diseases, it is also complex and difficult to treat.  Insulin is often administered between meals to keep blood sugar within a target range, with the dose determined in part by the number of carbohydrates ingested. Four hundred (400) million adults worldwide suffer from diabetes, with one and one-half million (1.5 million) deaths on an annual basis.  It is no wonder that so many scientists, inventors, and pharmaceutical and medical device companies are turning their attention to improving insulin delivery devices.   There are today several delivery options, as follows:

  • Syringes
  • Pens
  • Insulin Injection Aids
  • Inhaled Insulin Devices
  • External Pumps
  • Implantable Pumps

Insulin pumps, especially the newer devices, have several advantages over traditional injection methods.  These advantages make using pumps a preferable treatment option.  In addition to eliminating the need for injections at work, at the gym, in restaurants and other settings, the pumps are highly adjustable thus allowing the patient to make precise changes based on exercise levels and types of food being consumed.

These delivery devices require: 1.) An insulin cartridge, 2.) A battery-operated pump, and 3.) Computer chips that allow the patient to control the dosage.  A detailed list of components is given below.  Most modern devices have a display window or graphical user interface (GUI) and selection keys to facilitate changes and administering insulin.  A typical pump is shown as follows:

[Figure: a typical insulin pump]

Generally, insulin pumps consist of a reservoir, a microcontroller with battery, flexible catheter tubing, and a subcutaneous needle. When the first insulin pumps were created in the 1970s and ’80s, they were quite bulky (think 1980s cell phone). In contrast, most pumps today are a little smaller than a pager. The controller and reservoir are usually housed together. Patients often will wear the pump on a belt clip or place it in a pocket, as shown below. A basic interface lets the patient adjust the rate of insulin or select a pre-set. The insulins used are rapid-acting, and the reservoir typically holds 200-300 units of insulin. The catheter is similar to most IV tubing (often smaller in diameter) and connects directly to the needle. Patients usually insert the needle into their abdominal wall, although the upper arm or thigh can be used. The needle infusion set can be attached via any number of adhesives, but tape can do in a pinch. The needle needs to be re-sited every 2-3 days.

[Figure: insulin pump clipped onto clothing and worn during the day]

As you can see from the above JPEG, the device itself can be clipped onto clothing and worn during the day for continued use.

The pump can help an individual patient more closely mimic the way a healthy pancreas functions. The pump, through a Continuous Subcutaneous Insulin Infusion (CSII), replaces the need for frequent injections by delivering precise doses of rapid-acting insulin 24 hours a day to closely match your body’s needs.  Two definitions should be understood relative to insulin usage.  These are as follows:

  • Basal Rate: A programmed insulin rate made of small amounts of insulin delivered continuously mimics the basal insulin production by the pancreas for normal functions of the body (not including food). The programmed rate is determined by your healthcare professional based on your personal needs. This basal rate delivery can also be customized according to your specific daily needs. For example, it can be suspended or increased / decreased for a definite time frame: this is not possible with basal insulin injections.
  • Bolus Dose: Additional insulin can be delivered “on demand” to match the food you are going to eat or to correct high blood sugar. Insulin pumps have bolus calculators that help you calculate your bolus amount based on settings that are pre-determined by your healthcare professional and again based on your special needs.
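To make the bolus idea concrete, here is a minimal sketch of the arithmetic a bolus calculator performs. The carbohydrate ratio, correction factor, and target value below are invented for illustration only; real settings are prescribed by a healthcare professional, and real pumps also account for “insulin on board” from earlier boluses.

```python
# Minimal, illustrative bolus calculation. All settings are hypothetical
# examples; actual values are prescribed by a healthcare professional.
def bolus_units(carbs_g, glucose_mgdl, target_mgdl=110,
                carb_ratio=10.0, correction_factor=40.0, insulin_on_board=0.0):
    """Estimate a bolus dose in units of rapid-acting insulin.

    carb_ratio:        grams of carbohydrate covered by 1 unit of insulin
    correction_factor: mg/dL that 1 unit of insulin lowers blood glucose
    insulin_on_board:  units still active from a previous bolus
    """
    meal_bolus = carbs_g / carb_ratio
    correction = max(glucose_mgdl - target_mgdl, 0) / correction_factor
    return max(meal_bolus + correction - insulin_on_board, 0.0)

# Example: 60 g of carbs, current glucose 190 mg/dL, 0.5 units still on board
print(round(bolus_units(60, 190, insulin_on_board=0.5), 1))   # -> 7.5 units
```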

A modern insulin pump can accomplish both basal and bolus needs as the situation demands.

The benefits relative to traditional methods are as follows:

  • Easier dosing: calculating insulin requirements can be a complex task with many different aspects to be considered. It is important that the device ensures accurate dosing by taking into account any insulin already in the body, the current glucose levels, carbohydrate intake and personal insulin settings.
  • Greater flexibility:  The pump must be capable of instant adjustment to allow for exercise, during illness or to deliver small boluses to cover meals and snacks. This can easily be done with a touch of a button with the more-modern devices. There should be a temporary basal rate option to proportionally reduce or increase the basal insulin rate, during exercise or illness, for example.
  • More convenience: The device must offer additional convenience of a wirelessly connected blood glucose meter. This meter automatically sends blood glucose values to the pump, allowing more accurate calculations and to deliver insulin boluses discreetly.

These wonderful devices all result from technology and technological advances.  Needs DO generate devices.  I hope you enjoy this post and as always, I welcome your comments.

HUBBLE CONSTANT

January 28, 2017


The following information was taken from SPACE.com and NASA.

Until just recently I did not know there was a Hubble Constant.  The term had never popped up on my radar.  For this reason, I thought it might be noteworthy to discuss the meaning and the implications.

THE HUBBLE CONSTANT:

The Hubble Constant is the unit of measurement used to describe the expansion of the universe. The Hubble Constant (Ho) is one of the most important numbers in cosmology because it is needed to estimate the size and age of the universe. This long-sought number indicates the rate at which the universe is expanding, from the primordial “Big Bang.”

The Hubble Constant can be used to determine the intrinsic brightness and masses of stars in nearby galaxies, examine those same properties in more distant galaxies and galaxy clusters, deduce the amount of dark matter present in the universe, obtain the scale size of faraway galaxy clusters, and serve as a test for theoretical cosmological models. The Hubble Constant can be stated as a simple mathematical expression, Ho = v/d, where v is the galaxy’s radial outward velocity (in other words, motion along our line-of-sight), d is the galaxy’s distance from earth, and Ho is the current value of the Hubble Constant.  However, obtaining a true value for Ho is very complicated. Astronomers need two measurements. First, spectroscopic observations reveal the galaxy’s redshift, indicating its radial velocity. The second measurement, the most difficult value to determine, is the galaxy’s precise distance from earth. Reliable “distance indicators,” such as variable stars and supernovae, must be found in galaxies. The value of Ho itself must be cautiously derived from a sample of galaxies that are far enough away that motions due to local gravitational influences are negligibly small.

The units of the Hubble Constant are “kilometers per second per megaparsec.” In other words, for each megaparsec of distance, the velocity of a distant object appears to increase by some value. (A megaparsec is 3.26 million light-years.) For example, if the Hubble Constant were determined to be 50 km/s/Mpc, a galaxy at 10 Mpc would have a redshift corresponding to a radial velocity of 500 km/s.
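As a quick sanity check of that example, and of the rough age scale hidden in the constant, here is a short sketch of the arithmetic (the 70.4 km/s/Mpc figure is the NASA estimate quoted below; 1/Ho is only a rough age scale, not a precise age):

```python
# Sketch of the Hubble-law arithmetic: v = H0 * d, and 1/H0 as a rough
# age scale for the universe. Values match the examples in the text.
KM_PER_MPC = 3.086e19          # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0 = 50.0                      # km/s per Mpc, the example value above
d = 10.0                       # distance in Mpc
v = H0 * d
print(f"Recession velocity: {v:.0f} km/s")          # -> 500 km/s

# "Hubble time" 1/H0, a rough upper bound on the age of the universe
H0_measured = 70.4             # km/s per Mpc (the NASA figure quoted in the text)
hubble_time_s = KM_PER_MPC / H0_measured
print(f"1/H0 ~ {hubble_time_s / SECONDS_PER_YEAR / 1e9:.1f} billion years")
```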

The cosmos has been getting bigger since the Big Bang kick-started the growth about 13.82 billion years ago.  The universe’s expansion, in fact, is accelerating as the universe gets bigger.  As of March 2013, NASA estimates the rate of expansion at about 70.4 kilometers per second per megaparsec. A megaparsec is a million parsecs, or about 3.26 million light-years, so this is almost unimaginably fast. Using data solely from NASA’s Wilkinson Microwave Anisotropy Probe (WMAP), the rate is slightly faster, at about 71 km/s per megaparsec.

The constant was first proposed by Edwin Hubble (whose name is also used for the Hubble Space Telescope). Hubble was an American astronomer who studied galaxies, particularly those that are far away from us. In 1929 — based on a realization from astronomer Harlow Shapley that galaxies appear to be moving away from the Milky Way — Hubble found that the farther these galaxies are from Earth, the faster they appear to be moving, according to NASA.

While scientists then understood the phenomenon to be galaxies moving away from each other, today astronomers know that what is actually being observed is the expansion of the universe. No matter where you are located in the cosmos, you would see the same phenomenon happening at the same speed.

Hubble’s initial calculations have been refined over the years, as more and more sensitive telescopes have been used to make the measurements. These include the Hubble Space Telescope (which examined a kind of variable star called Cepheid variables) and WMAP, which extrapolated based on measurements of the cosmic microwave background — a constant background temperature in the universe that is sometimes called the “afterglow” of the Big Bang.

THE BIG BANG:

The Big Bang theory is an effort to explain what happened at the very beginning of our universe. Discoveries in astronomy and physics have shown beyond a reasonable doubt that our universe did in fact have a beginning. Prior to that moment there was nothing; during and after that moment there was something: our universe. The big bang theory is an effort to explain what happened during and after that moment.

According to the standard theory, our universe sprang into existence as “singularity” around 13.7 billion years ago. What is a “singularity” and where does it come from? Well, to be honest, that answer is unknown.  Astronomers simply don’t know for sure. Singularities are zones which defy our current understanding of physics. They are thought to exist at the core of “black holes.” Black holes are areas of intense gravitational pressure. The pressure is thought to be so intense that finite matter is actually squished into infinite density (a mathematical concept which truly boggles the mind). These zones of infinite density are called “singularities.” Our universe is thought to have begun as an infinitesimally small, infinitely hot, infinitely dense, something – a singularity. Where did it come from? We don’t know. Why did it appear? We don’t know.

After its initial appearance, it apparently inflated (the “Big Bang”), expanded and cooled, going from very, very small and very, very hot, to the size and temperature of our current universe. It continues to expand and cool to this day and we are inside of it: incredible creatures living on a unique planet, circling a beautiful star clustered together with several hundred billion other stars in a galaxy soaring through the cosmos, all of which is inside of an expanding universe that began as an infinitesimal singularity which appeared out of nowhere for reasons unknown. This is the Big Bang theory.

THREE STEPS IN MEASURING THE HUBBLE CONSTANT:

The illustration below shows the three steps astronomers used to measure the universe’s expansion rate to an unprecedented accuracy, reducing the total uncertainty to 2.4 percent.

Astronomers made the measurements by streamlining and strengthening the construction of the cosmic distance ladder, which is used to measure accurate distances to galaxies near and far from Earth.

Beginning at left, astronomers use Hubble to measure the distances to a class of pulsating stars called Cepheid Variables, employing a basic tool of geometry called parallax. This is the same technique that surveyors use to measure distances on Earth. Once astronomers calibrate the Cepheids’ true brightness, they can use them as cosmic yardsticks to measure distances to galaxies much farther away than they can with the parallax technique. The rate at which Cepheids pulsate provides an additional fine-tuning to the true brightness, with slower pulses for brighter Cepheids. The astronomers compare the calibrated true brightness values with the stars’ apparent brightness, as seen from Earth, to determine accurate distances.
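That comparison of calibrated true (absolute) brightness with apparent brightness boils down to the standard distance-modulus relation, m - M = 5 log10(d / 10 pc). Here is a small sketch with made-up magnitudes, purely for illustration:

```python
# Sketch of the standard distance-modulus relation used when comparing a
# Cepheid's calibrated true brightness with its apparent brightness.
# The magnitudes below are made up for illustration.
import math

def distance_parsecs(apparent_mag, absolute_mag):
    """Distance from the distance modulus: m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

m = 20.0    # hypothetical apparent magnitude as seen from Earth
M = -4.0    # hypothetical calibrated absolute magnitude of the Cepheid
d_pc = distance_parsecs(m, M)
print(f"Distance: {d_pc:.3e} parsecs (~{d_pc / 1e6:.1f} Mpc)")
```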

Once the Cepheids are calibrated, astronomers move beyond our Milky Way to nearby galaxies [shown at center]. They look for galaxies that contain Cepheid stars and another reliable yardstick, Type Ia supernovae, exploding stars that flare with the same amount of brightness. The astronomers use the Cepheids to measure the true brightness of the supernovae in each host galaxy. From these measurements, the astronomers determine the galaxies’ distances.

They then look for supernovae in galaxies located even farther away from Earth. Unlike Cepheids, Type Ia supernovae are brilliant enough to be seen from relatively longer distances. The astronomers compare the true and apparent brightness of distant supernovae to measure out to the distance where the expansion of the universe can be seen [shown at right]. They compare those distance measurements with how the light from the supernovae is stretched to longer wavelengths by the expansion of space. They use these two values to calculate how fast the universe expands with time, called the Hubble constant.

[Figure: the three steps to measuring the Hubble constant]

Now, that’s simple, isn’t it?  OK, not really.   It’s actually somewhat painstaking and, as you can see, extremely detailed.  To our credit, the constant can be measured.

CONCLUSIONS:

This is a rather off-the-wall post, but one I certainly hope you enjoy.  Technology is a marvelous thing, working to clarify and define where we came from and how we got here.
