April 25, 2017
OK, are you ready for a bit of ridiculous trivia? Today, 25 April 2017, is National Telephone Day. I do not think there will be any denial that the telephone has revolutionized communication the world over.
It was February 14, 1876, when Marcellus Bailey, one of Alexander Graham Bell’s attorneys, rushed into the US Patent Office in Boston to file for what would later be called the telephone. Later that same day, Elisha Gray filed a patent caveat for a similar device. A caveat is an intent to file for a patent. There is also a third contender, Antonio Meucci. Mr. Meucci filed a caveat in November of 1871 for a talking telegraph but failed to renew it due to financial hardships. Because Bell’s patent was submitted first, it was awarded to him on March 7, 1876. Gray contested the decision in court, but without success.
Born March 3, 1847, in Edinburgh, Scotland, Bell was an instructor at a boys’ boarding school. The sounds of speech were an integral part of his life. His father developed a “Visible Speech” system to help deaf students communicate. Bell would later become a friend and benefactor of Helen Keller. Three days after his patent was approved, Bell spoke the first words by telephone to his assistant: “Mr. Watson, come here! I want to see you!” By May of the same year, Bell and his team were ready for a public demonstration, and there could be no better place than the World’s Fair in Philadelphia. On May 10, 1876, in a crowded Machinery Hall, a man’s voice was transmitted from a small horn and carried out through a speaker to the audience. One year later, the White House installed its first phone. The telephone revolution had begun. Bell Telephone Company was founded on July 9, 1877, and the first public telephone lines were installed from Boston to Somerville, Massachusetts, the same year. By the end of the decade, there were nearly 50,000 phones in the United States. In May of 1967, the 100 millionth telephone was installed.
Growing up in the ’50s, I remember the rotary telephone shown in the picture below. We were on a three-party line. As I recall, ours was a two-ring phone call. Of course, there was snooping. Big-time snooping by the other two families on our line.
Let’s take a quick look at how the cell phone has literally taken over this communication method.
- The number of mobile devices rose nine (9) percent in the first six months of 2011, to 327.6 million — more than the 315 million people living in the U.S., Puerto Rico, Guam and the U.S. Virgin Islands. Wireless network data traffic rose 111 percent, to 341.2 billion megabytes, during the same period.
- Nearly two-thirds of Americans are now smartphone owners, and for many these devices are a key entry point to the online world. Sixty-four percent (64%) of American adults now own a smartphone of some kind, up from thirty-five percent (35%) in the spring of 2011. Smartphone ownership is especially high among younger Americans, as well as those with relatively high income and education levels.
- Ten percent (10%) of Americans own a smartphone but do not have any other form of high-speed internet access at home beyond their phone’s data plan.
- Using a broader measure of the access options available to them, fifteen percent (15%) of Americans own a smartphone but say that they have a limited number of ways to get online other than their cell phone.
- Younger adults — Fifteen percent (15%) of Americans ages 18-29 are heavily dependent on a smartphone for online access.
- Those with low household incomes and levels of educational attainment — Some thirteen percent (13%) of Americans with an annual household income of less than $30,000 per year are smartphone-dependent. Just one percent (1%) of Americans from households earning more than $75,000 per year rely on their smartphones to a similar degree for online access.
- Non-whites — Twelve percent (12%) of African Americans and thirteen percent (13%) of Latinos are smartphone-dependent, compared with four percent (4%) of whites.
- Sixty-two percent (62%) of smartphone owners have used their phone in the past year to look up information about a health condition.
- Fifty-seven percent (57%) have used their phone to do online banking.
- Forty-four percent (44%) have used their phone to look up real estate listings or other information about a place to live.
- Forty-three percent (43%) to look up information about a job.
- Forty percent (40%) to look up government services or information.
- Thirty percent (30%) to take a class or get educational content.
- Eighteen percent (18%) to submit a job application.
- Sixty-eight percent (68%) of smartphone owners use their phone at least occasionally to follow along with breaking news events, with thirty-three percent (33%) saying that they do this “frequently.”
- Sixty-seven percent (67%) use their phone to share pictures, videos, or commentary about events happening in their community, with thirty-five percent (35%) doing so frequently.
- Fifty-six percent (56%) use their phone at least occasionally to learn about community events or activities, with eighteen percent (18%) doing this “frequently.”
OK, by now you get the picture. The graphic below summarizes the cell phone phenomenon relative to other digital devices, including desktop and laptop computers. By the way, laptop and desktop computer purchases have somewhat declined due to the increased usage of cell phones for communication purposes.
The number of smart phone users in the United States from 2012 to a projected 2021 in millions is given below.
CONCLUSION: “Big Al” (Mr. Bell, that is) probably knew he was on to something. At any rate, the trend will continue toward infinity over the next few decades.
April 23, 2017
This post uses as one reference the “Digital Readiness Gaps” report by the Pew Center. This report explores, as we will now, attitudes and behaviors that underpin individual preparedness and comfort in using digital tools for learning.
HOW DO ADULTS LEARN? Good question. I suppose there are many ways, but I can certainly tell you that adults my age, over seventy, learn in a manner much different from my grandchildren, under twenty. I think of “book learning” first and digital as a backup. They head straight for their iPad or iPhone. GOOGLE is a verb and not a company name as far as they are concerned. (I’m actually getting there with the digital search methods and now start with GOOGLE, but I reference multiple sources before being satisfied with only one. For some reason, I still trust books as opposed to digital.)
According to Malcolm Knowles, a pioneer in adult learning, there are six (6) main characteristics of adult learners, as follows:
- Adult learning is self-directed/autonomous
Adult learners are actively involved in the learning process such that they make choices relevant to their learning objectives.
- Adult learning utilizes knowledge & life experiences
Under this approach educators encourage learners to connect their past experiences with their current knowledge-base and activities.
- Adult learning is goal-oriented
The motivation to learn is increased when the relevance of the “lesson” through real-life situations is clear, particularly in relation to the specific concerns of the learner.
- Adult learning is relevancy-oriented
One of the best ways for adults to learn is by relating the assigned tasks to their own learning goals. If it is clear that the activities they are engaged in directly contribute to achieving their personal learning objectives, then they will be inspired and motivated to engage in projects and successfully complete them.
- Adult learning highlights practicality
Placement is a means of helping students to apply the theoretical concepts learned inside the classroom into real-life situations.
- Adult learning encourages collaboration
Adult learners thrive in collaborative relationships with their educators. When learners are considered by their instructors as colleagues, they become more productive. When their contributions are acknowledged, then they are willing to put out their best work.
One very important note: these six characteristics encompass the “digital world” and conventional methods; i.e. books, magazines, newspapers, etc.
As mentioned above, a recent Pew Research Center report shows that adoption of technology for adult learning in both personal and job-related activities varies by people’s socio-economic status, their race and ethnicity, and their level of access to home broadband and smartphones. Another report showed that some users are unable to make the internet and mobile devices function adequately for key activities such as looking for jobs.
Specifically, the Pew report made their assessment relative to American adults according to five main factors:
- Their confidence in using computers,
- Their facility with getting new technology to work,
- Their use of digital tools for learning,
- Their ability to determine the trustworthiness of online information,
- Their familiarity with contemporary “education tech” terms.
It is important to note that the report addresses only adults’ proclivity for digital learning, not learning by any other means; it covers just the availability of digital devices to facilitate learning. If we look at the “conglomerate” from the PIAAC Fact Sheet, we see the following:
The Pew analysis details several distinct groups of Americans who fall along a spectrum of digital readiness from relatively more prepared to relatively hesitant. Those who tend to be hesitant about embracing technology in learning are below average on the measures of readiness, such as needing help with new electronic gadgets or having difficulty determining whether online information is trustworthy. Those whose profiles indicate a higher level of preparedness for using tech in learning are collectively above average on measures of digital readiness. The chart below will indicate their classifications.
The breakdown is as follows:
Relatively Hesitant – 52% of adults in three distinct groups. This overall cohort is made up of three different clusters of people who are less likely to use digital tools in their learning. This has to do, in part, with the fact that these groups have generally lower levels of involvement with personal learning activities. It is also tied to their professed lower level of digital skills and trust in the online environment.
- A group of 14% of adults make up The Unprepared. This group has both low levels of digital skills and limited trust in online information. The Unprepared rank at the bottom of those who use the internet to pursue learning, and they are the least digitally ready of all the groups.
- We call one small group Traditional Learners, and they make up 5% of Americans. They are active learners but use traditional means to pursue their interests. They are less likely to fully engage with digital tools because they have concerns about the trustworthiness of online information.
- A larger group, The Reluctant, make up 33% of all adults. They have higher levels of digital skills than The Unprepared, but very low levels of awareness of new “education tech” concepts and relatively lower levels of performing personal learning activities of any kind. This is correlated with their general lack of use of the internet in learning.
Relatively more prepared – 48% of adults in two distinct groups. This cohort is made up of two groups who are above average in their likelihood to use online tools for learning.
- A group we call Cautious Clickers comprises 31% of adults. They have tech resources at their disposal, trust and confidence in using the internet, and the educational underpinnings to put digital resources to use for their learning pursuits. But they have not waded into e-learning to the extent the Digitally Ready have and are not as likely to have used the internet for some or all of their learning.
- Finally, there are the Digitally Ready. They make up 17% of adults, and they are active learners and confident in their ability to use digital tools to pursue learning. They are aware of the latest “ed tech” tools and are, relative to others, more likely to use them in the course of their personal learning. The Digitally Ready, in other words, have high demand for learning and use a range of tools to pursue it – including, to an extent significantly greater than the rest of the population, digital outlets such as online courses or extensive online research.
To me, one of the greatest lessons from my university days is this: NEVER STOP LEARNING. I had one professor, Dr. Bob Maxwell, who told us the half-life of a graduate engineer is approximately five (5) years. If you stop learning, the information you have will become obsolete in five years. At the pace of technology today, that may be five months. You never stop learning AND you embrace existing technology. In other words: do digital. Digital is your friend. GOOGLE, no matter how flawed, can give you answers much quicker than other sources, and it’s readily available and just plain handy. At least start there; then trust but verify.
April 22, 2017
If you work or have worked in manufacturing, you know robotic systems have definitely had a distinct impact on assembly, inventory acquisition from storage areas, and finished-part warehousing. There is considerable concern that the “rise of the machines” will eventually replace individuals performing a variety of tasks. I personally do not feel this will be the case, although there is no doubt robotic systems have found their way onto the manufacturing floor.
From the “Executive Summary World Robotics 2016 Industrial Robots”, we see the following:
2015: Robot sales increased by 15% to 253,748 units, by far the highest level ever recorded for one year. The main driver of the growth in 2015 was general industry, with an increase of 33% compared to 2014, in particular the electronics industry (+41%), the metal industry (+39%), and the chemical, plastics and rubber industry (+16%). Robot sales in the automotive industry increased only moderately in 2015 after a five-year period of continued considerable growth. China significantly expanded its leading position as the biggest market, with a share of 27% of the total supply in 2015.
In looking at the chart below, we can see the sales picture with perspective and show how system sales have increased from 2003.
It is very important to note that five (5) major markets represented seventy-five percent (75%) of the total sales volume in 2015: China, the Republic of Korea, Japan, the United States, and Germany.
As you can see from the bar chart above, this share increased from seventy percent (70%) in 2014. Since 2013, China has been the biggest robot market in the world, with continued dynamic growth. With sales of about 68,600 industrial robots in 2015 – an increase of twenty percent (20%) compared to 2014 – China alone surpassed Europe’s total sales volume (50,100 units). Chinese robot suppliers installed about 20,400 units, according to information from the China Robot Industry Alliance (CRIA); their sales volume was about twenty-nine percent (29%) higher than in 2014. Foreign robot suppliers increased their sales by seventeen percent (17%) to 48,100 units (including robots produced by international robot suppliers in China). The market share of Chinese robot suppliers grew from twenty-five percent (25%) in 2013 to twenty-nine percent (29%) in 2015. Between 2010 and 2015, the total supply of industrial robots increased by about thirty-six percent (36%) per year on average.
About 38,300 units were sold to the Republic of Korea, fifty-five percent (55%) more than in 2014. The increase is partly due to a number of companies that started to report their data only in 2015. The actual growth rate in 2015 is estimated at about thirty percent (30%) to thirty-five percent (35%).
In 2015, robot sales in Japan increased by twenty percent (20%) to about 35,000 units reaching the highest level since 2007 (36,100 units). Robot sales in Japan followed a decreasing trend between 2005 (reaching the peak at 44,000 units) and 2009 (when sales dropped to only 12,767 units). Between 2010 and 2015, robot sales increased by ten percent (10%) on average per year (CAGR).
Robot installations in the United States continued to increase in 2015, by five percent (5%) to a peak of 27,504 units. The driver of this continued growth since 2010 has been the ongoing trend to automate production in order to strengthen American industries on the global market and keep manufacturing at home, and in some cases, to bring back manufacturing that had previously been sent overseas.
Germany is the fifth largest robot market in the world. In 2015, the number of robots sold increased slightly to a new record high at 20,105 units compared to 2014 (20,051 units). In spite of the high robot density of 301 units per 10,000 employees, annual sales are still very high in Germany. Between 2010 and 2015, annual sales of industrial robots increased by an average of seven percent (7%) in Germany (CAGR).
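The CAGR figures quoted for these markets follow from simple compounding. Here is a minimal sketch; the 2010 baseline below is back-calculated for illustration only, not a reported figure:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that
    turns start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Germany sold 20,105 units in 2015; the report cites ~7% average annual
# growth for 2010-2015, which implies a 2010 baseline of roughly
# 20,105 / 1.07**5 units (illustrative back-calculation only).
implied_2010 = 20105 / (1 + 0.07) ** 5
rate = cagr(implied_2010, 20105, 5)
print(f"{rate:.1%}")  # 7.0%
```

The same function reproduces the ten percent (10%) figure cited for Japan and the thirty-six percent (36%) figure cited for China, given each market's 2010 and 2015 unit counts.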
From the graphic below, you can see which industries employ robotic systems the most.
Growth rates will not lessen with projections through 2019 being as follows:
A fascinating development involves the assistance of human endeavor by robotic systems. This fairly new technology is called collaborative robots, or COBOTS. Let’s get a definition.
A cobot or “collaborative robot” is a robot designed to assist human beings as a guide or assistant in a specific task. A regular robot is designed to be programmed to work more or less autonomously. In one approach to cobot design, the cobot allows a human to perform certain operations successfully if they fit within the scope of the task and steers the human back onto a correct path when the human begins to stray from or exceed the scope of the task.
The term “collaborative” is used to distinguish robots that collaborate with humans from robots that work behind fences without any direct interaction with humans. In contrast, terms such as articulated, cartesian, delta, and SCARA distinguish different robot kinematics.
Traditional industrial robots excel at applications that require extremely high speeds, heavy payloads and extreme precision. They are reliable and very useful for many types of high volume, low mix applications. But they pose several inherent challenges for higher mix environments, particularly in smaller companies. First and foremost, they are very expensive, particularly when considering programming and integration costs. They require specialized engineers working over several weeks or even months to program and integrate them to do a single task. And they don’t multi-task easily between jobs since that setup effort is so substantial. Plus, they can’t be readily integrated into a production line with people because they are too dangerous to operate in close proximity to humans.
For small manufacturers with limited budgets, space and staff, a collaborative robot such as Baxter (shown below) is an ideal fit because it overcomes many of these challenges. It’s extremely intuitive, integrates seamlessly with other automation technologies, is very flexible and is quite affordable with a base price of only $25,000. As a result, Baxter is well suited for many applications, such as those requiring manual labor and a high degree of flexibility, that are currently unmet by traditional technologies.
Baxter is one example of collaborative robotics and some say is by far the safest, easiest, most flexible and least costly robot of its kind today. It features a sophisticated multi-tier safety design that includes a smooth, polymer exterior with fewer pinch points; back-drivable joints that can be rotated by hand; and series elastic actuators which help it to minimize the likelihood of injury during inadvertent contact.
It’s also incredibly simple to use. Line workers and other non-engineers can quickly learn to train the robot themselves, by hand. With Baxter, the robot itself is the interface, with no teaching pendant or external control system required. And with its ease of use and diverse skill set, Baxter is extremely flexible, capable of being utilized across multiple lines and tasks in a fraction of the time and cost it would take to re-program other robots. Plus, Baxter is made in the U.S.A., which is a particularly appealing aspect for many of our customers looking to re-shore their own production operations.
The digital picture above shows a lady working alongside a collaborative robotic system, both performing a specific task. The lady feels right at home with her mechanical friend because the usage demands a great element of safety.
Certifiable safety is the most important precondition for a collaborative robot system to be applied in an industrial setting. Available solutions that fulfill the requirements imposed by safety standardization often show limited performance or productivity gains, as most of today’s implemented scenarios are limited to very static processes. This means a strict stop-and-go of the robot process when the human enters or leaves the work space.
Collaborative systems are still a work in progress, but their use has greatly expanded, primarily because safety requirements can now be satisfied. The coming years will only bring greater acceptance, and do not be surprised if you see robots and humans working side by side on every manufacturing floor over the next decade.
As always, I welcome your comments.
March 10, 2017
It really does creep up on you—the pain that is. Minimal at first for a few months but at least livable. I thought I could exercise and stretch to lessen the discomfort and that did work to a great degree. That was approximately seven (7) months ago. Reality did set in with the pain being so great that something had to be done.
In the decade of the eighties, I was an avid runner with thoughts of running a marathon or even marathons. My dream was to run the New York City and Boston Marathons first, then concentrate on local 10K events. After one year, I would concentrate on the Atlanta Marathon—at least that was the plan. I was clocking about twenty to thirty miles per week with that goal in mind. All of my running was on pavement, with three five-mile runs on Monday, Wednesday, and Friday and a ten-mile run on Saturday. It did seem reasonable. I would drive the courses to get exact mileage and vary the routes just to mix it up a little and bring about new scenery. After several weeks, I noticed pains starting to develop around the twenty-five-miles-per-week distance. They would go away but always returned toward the latter part of each week. Medical examinations would later show the beginning of arthritis in my right hip. I shortened my distances hoping to alleviate the pain, and that worked to some extent for a period of time.
Time caught up with me. The pains were so substantial I could not tie my shoe laces or stoop to pick up an article on the floor. It was time to pull the trigger.
TOTAL HIP REPLACEMENT:
In a total hip replacement (also called total hip arthroplasty), the damaged bone and cartilage is removed and replaced with prosthetic components.
- The damaged femoral head is removed and replaced with a metal stem that is placed into the hollow center of the femur. The femoral stem may be either cemented or “press fit” into the bone. One of the first steps of the procedure is dislocating the hip so the damaged femoral head can be removed.
- A metal or ceramic ball is placed on the upper part of the stem. This ball replaces the damaged femoral head that was removed.
- The damaged cartilage surface of the socket (acetabulum) is removed and replaced with a metal socket. Screws or cement are sometimes used to hold the socket in place.
- A plastic, ceramic, or metal spacer is inserted between the new ball and the socket to allow for a smooth gliding surface.
I chose to have an epidural so recovery would be somewhat quicker and the aftereffects lessened. I do not regret that choice and would recommend it to anyone undergoing hip replacement. One day home and I’m following my doctor’s orders to a “T”, doing everything and then some to make sure I touch all of the bases. I was very tempted to pull up YouTube to see how the surgery was accomplished, but after hearing it was more carpentry than medicine, I decided I would delay that investigation for a year, or forever. Some things I just might not need to know.
Sorry for this post being somewhat short but the meds are wearing off and I need to “reload”. I promise to do better in the very near future.
February 8, 2017
I entered the university shortly after Sir Isaac Newton and Gottfried Leibniz invented calculus. (OK, I’m not quite that old, but you get the picture.) At any rate, I’ve been a mechanical engineer for a lengthy period of time. If I had to do it all over again, I would choose biomedical engineering instead of mechanical engineering. Biomedical really fascinates me. The medical “hardware” and software available today is absolutely marvelous. As with most great technologies, it has been evolutionary instead of revolutionary. One such evolution has been the development of the insulin pump to facilitate administering insulin to patients suffering from diabetes.
On my way to exercise Monday, Wednesday and Friday, I pass three dialysis clinics. I am amazed that on some days the parking lots are, not only full, but cars are parked on the roads on either side of the buildings. Almost always, I see at least one ambulance parked in front of the clinic having delivered a patient to the facilities. In Chattanooga proper, there are nine (9) clinics and approximately 3,306 dialysis centers in the United States. These centers employ 127,671 individuals and bring in twenty-two billion dollars ($22B) in revenue. There is a four-point four percent (4.4%) growth rate on an annual basis. Truly, diabetes has reached epidemic proportions in our country.
Diabetes is not only one of the most common chronic diseases, it is also complex and difficult to treat. Insulin is often administered between meals to keep blood sugar within a target range. The dose is determined in part by the number of carbohydrates ingested. Four hundred (400) million adults worldwide suffer from diabetes, with one and one-half million (1.5 million) deaths on an annual basis. It is no wonder that so many scientists, inventors, and pharmaceutical and medical device companies are turning their attention to improving insulin delivery devices. There are today several delivery options, as follows:
- Insulin Injection Aids
- Inhaled Insulin Devices
- External Pumps
- Implantable Pumps
Insulin pumps, especially the newer devices, have several advantages over traditional injection methods. These advantages make using pumps a preferable treatment option. In addition to eliminating the need for injections at work, at the gym, in restaurants and other settings, the pumps are highly adjustable thus allowing the patient to make precise changes based on exercise levels and types of food being consumed.
These delivery devices require: 1.) an insulin cartridge, 2.) a battery-operated pump, and 3.) computer chips that allow the patient to control the dosage. A detailed list of components is given below. Most modern devices have a display window or graphical user interface (GUI) and selection keys to facilitate making changes and administering insulin. A typical pump is shown as follows:
Generally, insulin pumps consist of a reservoir, a microcontroller with battery, flexible catheter tubing, and a subcutaneous needle. When the first insulin pumps were created in the 1970s and 1980s, they were quite bulky (think 1980s cell phone). In contrast, most pumps today are a little smaller than a pager. The controller and reservoir are usually housed together. Patients often will wear the pump on a belt clip or place it in a pocket as shown below. A basic interface lets the patient adjust the rate of insulin or select a pre-set. The insulins used are rapid-acting, and the reservoir typically holds 200-300 units of insulin. The catheter is similar to most IV tubing (often smaller in diameter) and connects directly to the needle. Patients insert the needle into their abdominal wall, although the upper arm or thigh can be used. The needle infusion set can be attached via any number of adhesives, but tape can do in a pinch. The needle needs to be re-sited every 2-3 days.
As you can see from the above JPEG, the device itself can be clipped onto clothing and worn during the day for continued use.
The pump can help an individual patient more closely mimic the way a healthy pancreas functions. The pump, through a Continuous Subcutaneous Insulin Infusion (CSII), replaces the need for frequent injections by delivering precise doses of rapid-acting insulin 24 hours a day to closely match your body’s needs. Two definitions should be understood relative to insulin usage. These are as follows:
- Basal Rate: A programmed insulin rate made of small amounts of insulin delivered continuously mimics the basal insulin production by the pancreas for normal functions of the body (not including food). The programmed rate is determined by your healthcare professional based on your personal needs. This basal rate delivery can also be customized according to your specific daily needs. For example, it can be suspended or increased / decreased for a definite time frame: this is not possible with basal insulin injections.
- Bolus Dose: Additional insulin can be delivered “on demand” to match the food you are going to eat or to correct high blood sugar. Insulin pumps have bolus calculators that help you calculate your bolus amount based on settings that are pre-determined by your healthcare professional and again based on your special needs.
A modern insulin pump can accomplish both basal and bolus needs as the situation demands.
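To make the bolus concept concrete, here is a minimal sketch of the kind of arithmetic a pump's bolus calculator performs. All parameter names and values are illustrative assumptions, not any manufacturer's algorithm; real settings are prescribed by a healthcare professional:

```python
def bolus_units(carbs_g: float, current_bg: float, target_bg: float,
                carb_ratio: float, correction_factor: float,
                insulin_on_board: float = 0.0) -> float:
    """Estimate a bolus dose: insulin to cover carbohydrates, plus a
    correction for high blood glucose, minus insulin still active."""
    carb_dose = carbs_g / carb_ratio                       # units to cover the meal
    correction = max(current_bg - target_bg, 0.0) / correction_factor
    return max(carb_dose + correction - insulin_on_board, 0.0)

# Example: 60 g meal, blood glucose of 180 mg/dL vs a 120 mg/dL target,
# 1 unit per 10 g of carbs, 1 unit lowers glucose by 50 mg/dL,
# 0.5 units still on board: 6.0 + 1.2 - 0.5 = 6.7 units
print(bolus_units(60, 180, 120, 10, 50, 0.5))  # 6.7
```

The basal rate, by contrast, is not calculated per meal; it is a pre-programmed continuous drip that the pump delivers around the clock.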
The benefits relative to traditional methods are as follows:
- Easier dosing: Calculating insulin requirements can be a complex task with many different aspects to be considered. It is important that the device ensures accurate dosing by taking into account any insulin already in the body, the current glucose levels, carbohydrate intake and personal insulin settings.
- Greater flexibility: The pump must be capable of instant adjustment to allow for exercise, during illness or to deliver small boluses to cover meals and snacks. This can easily be done with a touch of a button with the more-modern devices. There should be a temporary basal rate option to proportionally reduce or increase the basal insulin rate, during exercise or illness, for example.
- More convenience: The device must offer additional convenience of a wirelessly connected blood glucose meter. This meter automatically sends blood glucose values to the pump, allowing more accurate calculations and to deliver insulin boluses discreetly.
These wonderful devices all result from technology and technological advances. Needs DO generate devices. I hope you enjoy this post and as always, I welcome your comments.
January 30, 2017
Certain portions of the information for this post come from the article entitled “How to Build Trump’s Controversial Wall” by Mr. Chris Wiltz. Chris is a writer for Design News Daily.
OK, President Donald Trump indicated during the pre-nomination televised debates that if elected President, he would authorize building a wall between Mexico and the United States AND get the Mexican government to pay for it. Now, as President, he seems intent on fulfilling that somewhat lofty campaign promise. From an engineering standpoint, how do you do that?
A direct quote from President Trump: “We are in the middle of a crisis on our southern border: The unprecedented surge of illegal migrants from Central America is harming both Mexico and the United States,” Trump said in remarks reported by Reuters. “And I believe the steps we will take starting right now will improve the safety in both of our countries. … A nation without borders is not a nation.”
An analysis done by Politico estimates that doing just that would cost at least $5.1 billion US (not including annual maintenance costs). According to Politico: “Those estimates come from a 2009 report from the Government Accountability Office [GAO], which found that it costs an average of $3.9 million to build one mile of fencing. About 670 miles of fencing is already up along the 1,989-mile southern border, so finishing the fence that’s already there would cost about $5.1 billion.
But the actual cost is likely much higher, according to experts. The vast majority of the existing border fence is single-layer fencing near urban areas, which is considerably easier to build. Much of the remaining 1,300 miles runs through rough terrains and remote areas without roads, so it’s fair to assume the per-mile cost of finishing the fence would be on the higher end of the GAO’s estimates, which was $15.1 million per mile.”
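Politico’s arithmetic is easy to check. Here is a quick back-of-the-envelope sketch using the GAO per-mile figures quoted above (the variable names are mine):

```python
# Border and fencing figures from the GAO report quoted above.
BORDER_MILES = 1989
EXISTING_FENCE_MILES = 670
LOW_COST_PER_MILE = 3.9e6    # GAO average: easier, single-layer fencing
HIGH_COST_PER_MILE = 15.1e6  # GAO high end: rough terrain, remote areas

remaining = BORDER_MILES - EXISTING_FENCE_MILES
low_estimate = remaining * LOW_COST_PER_MILE
high_estimate = remaining * HIGH_COST_PER_MILE

print(f"Remaining miles: {remaining}")                     # 1319
print(f"Low estimate:  ${low_estimate / 1e9:.1f} billion")  # $5.1 billion
print(f"High estimate: ${high_estimate / 1e9:.1f} billion") # $19.9 billion
```

At the GAO average, the remaining roughly 1,319 miles does come to about $5.1 billion, matching Politico; at the high-end per-mile figure, it is closer to $20 billion.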
This is obviously a huge amount of money, and the time necessary appears to be years, not months or certainly weeks. For comparison, the Great Wall of China was built over well more than 2,000 years. Many imperial dynasties and kingdoms built, rebuilt, and extended walls many times. Earlier walls subsequently eroded due to environmental conditions and the materials used. The latest imperial construction was performed by the Ming Dynasty (1368–1644), by which time the length was over 6,000 kilometers (3,700 miles).
HOW WOULD WE DO IT:
In a September 2015 article for The National Memo, a structural engineer writing under the pseudonym Ali F. Rhuzkan took on the challenge of mapping out the logistics of constructing Trump’s wall. I do not know why a pseudonym was used, but his article was very interesting.
Rhuzkan writes: “A successful border wall must be effective, cheap, and easily maintained. It should be built from readily available materials and should take advantage of the capabilities of the existing labor force. The wall should reach about five feet underground to deter tunneling, and should terminate about 20 feet above grade to deter climbing.”
A rendition of his design looks as follows:
According to Rhuzkan, assuming the wall would be constructed using pre-cast concrete (cast in a factory, then shipped to the construction site), building a roughly 2,000-mile border wall to the specifications needed to meet the President’s demands would require about 12,600,000 cubic yards of concrete. “In other words, this wall would contain over three times the amount of concrete used to build the Hoover Dam,” Rhuzkan writes. “Such a wall would be greater in volume than all six pyramids of the Giza Necropolis … That quantity of concrete could pave a one-lane road from New York to Los Angeles, going the long way around the Earth…”
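Out of curiosity, we can run that concrete figure backwards to see what average wall thickness it implies. The 2,000-mile length and the 25-foot overall height (5 feet below grade plus 20 feet above) come from the article; the rest is my own back-of-the-envelope arithmetic, not a figure from Rhuzkan:

```python
# Figures from the article:
MILES = 2000
HEIGHT_FT = 25                 # 5 ft below grade + 20 ft above grade
CONCRETE_CU_YD = 12_600_000

# Unit conversions:
FT_PER_MILE = 5280
CU_FT_PER_CU_YD = 27

wall_face_area_sq_ft = MILES * FT_PER_MILE * HEIGHT_FT
concrete_cu_ft = CONCRETE_CU_YD * CU_FT_PER_CU_YD
implied_avg_thickness_ft = concrete_cu_ft / wall_face_area_sq_ft

print(f"Implied average thickness: {implied_avg_thickness_ft:.2f} ft")  # 1.29 ft
```

An implied average of about 1.3 feet of concrete across the full face of the wall is plausible once the footing and foundation are averaged in, so the 12.6-million-cubic-yard figure passes a rough sanity check.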
And this is just the concrete. One also has to factor in the amount of steel needed to reinforce such a structure – about 5 billion pounds by Rhuzkan’s estimation – as well as the labor, production, and shipping costs of all the pieces. Not to mention that the wall would have to be built and regularly maintained by workers who would, ideally, be paid and not enslaved.
If you need a visual of what such a wall would look like, a group of interns at Estudio 3.14, a design firm based in Guadalajara, México, has created a conceptual rendering that they’ve dubbed the Prison Wall. Estudio 3.14’s concept envisions a wall that crosses multiple terrains (hills, desert, a river, even the city of Tijuana) and also includes a built-in prison to detain those seeking to cross the border illegally, as well as a shopping mall and a viewpoint for tourists. By its renderings, the studio estimates the wall could employ up to 6 million people. As for why it’s pink, the studio said in a statement: “Because the wall has to be beautiful, it has been inspired by Luis Barragán’s pink walls that are emblematic of Mexico.”
I have a twenty (20) foot ladder in my workshop downstairs. If I have one, those crossing the border illegally can probably get one too. Here are my conclusions:
- A twenty (20) foot wall is much too short. Forty (40) or even fifty (50) feet will be necessary in some places.
- A five (5) foot depth is much, much too shallow. I could tunnel under a five-foot barrier. At least fifteen (15) feet will be necessary in some places.
- It would be wonderfully wise if someone could and would estimate the maintenance cost on an annual basis so we know what’s coming.
- It does not matter how high the wall is; additional patrolling by our Border Patrol will be necessary. Please estimate the added costs for that.
- Please forget the government of Mexico paying for the wall. IT WILL NOT HAPPEN. President Trump has indicated he may assign added import taxes to pay for the wall. Those will be passed on to the American people. You know that.
- I hope it’s obvious that I do not know the complete answer to this one, but you have to give credit to President Trump. He is trying and, in my opinion, making progress, if not waves.
As always, I welcome your comments.
January 28, 2017
The following information was taken from SPACE.com and NASA.
Until just recently I did not know there was a Hubble Constant. The term had never popped up on my radar. For this reason, I thought it might be worthwhile to discuss its meaning and implications.
THE HUBBLE CONSTANT:
The Hubble Constant is the unit of measurement used to describe the expansion of the universe. The Hubble Constant (Ho) is one of the most important numbers in cosmology because it is needed to estimate the size and age of the universe. This long-sought number indicates the rate at which the universe has been expanding since the primordial “Big Bang.”
The Hubble Constant can be used to determine the intrinsic brightness and masses of stars in nearby galaxies, examine those same properties in more distant galaxies and galaxy clusters, deduce the amount of dark matter present in the universe, obtain the scale size of faraway galaxy clusters, and serve as a test for theoretical cosmological models. The Hubble Constant can be stated as a simple mathematical expression, Ho = v/d, where v is the galaxy’s radial outward velocity (in other words, motion along our line-of-sight), d is the galaxy’s distance from Earth, and Ho is the current value of the Hubble Constant. However, obtaining a true value for Ho is very complicated. Astronomers need two measurements. First, spectroscopic observations reveal the galaxy’s redshift, indicating its radial velocity. The second measurement, the most difficult value to determine, is the galaxy’s precise distance from Earth. Reliable “distance indicators,” such as variable stars and supernovae, must be found in galaxies. The value of Ho itself must be cautiously derived from a sample of galaxies that are far enough away that motions due to local gravitational influences are negligibly small.
The units of the Hubble Constant are “kilometers per second per megaparsec.” In other words, for each megaparsec of distance, the velocity of a distant object appears to increase by some value. (A megaparsec is 3.26 million light-years.) For example, if the Hubble Constant was determined to be 50 km/s/Mpc, a galaxy at 10 Mpc would have a redshift corresponding to a radial velocity of 500 km/s.
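That example is easy to verify with a couple of lines of Python; Hubble’s law is just multiplication:

```python
def radial_velocity(H0_km_s_per_mpc, distance_mpc):
    """Hubble's law: recession velocity v = Ho * d."""
    return H0_km_s_per_mpc * distance_mpc

# The example from the text: Ho = 50 km/s/Mpc, galaxy at 10 Mpc.
print(radial_velocity(50, 10))  # 500 (km/s)
```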
The cosmos has been getting bigger since the Big Bang kick-started the growth about 13.82 billion years ago. In fact, the universe’s expansion is accelerating as it gets bigger. As of March 2013, NASA estimates the rate of expansion at about 70.4 kilometers per second per megaparsec. A megaparsec is a million parsecs, or about 3.3 million light-years, so this is almost unimaginably fast. Using data solely from NASA’s Wilkinson Microwave Anisotropy Probe (WMAP), the rate is slightly faster, at about 71 km/s per megaparsec.
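Incidentally, the Hubble Constant also gives a quick estimate of the universe’s age: inverting Ho yields what cosmologists call the Hubble time. This naive calculation of mine ignores how the expansion rate has changed over cosmic history, yet it lands remarkably close to the 13.82-billion-year figure quoted above:

```python
KM_PER_MPC = 3.0857e19      # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in one billion years

def hubble_time_gyr(H0_km_s_per_mpc):
    """Naive age estimate 1/Ho, ignoring deceleration/acceleration."""
    seconds = KM_PER_MPC / H0_km_s_per_mpc
    return seconds / SECONDS_PER_GYR

# NASA's March 2013 value of 70.4 km/s/Mpc:
print(f"{hubble_time_gyr(70.4):.1f} billion years")  # 13.9 billion years
```

The agreement is partly a coincidence of how matter and dark energy have traded off over time, but it shows why Ho is the key to the age question.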
The constant was first proposed by Edwin Hubble (whose name is also used for the Hubble Space Telescope). Hubble was an American astronomer who studied galaxies, particularly those that are far away from us. In 1929 — based on a realization from astronomer Harlow Shapley that galaxies appear to be moving away from the Milky Way — Hubble found that the farther these galaxies are from Earth, the faster they appear to be moving, according to NASA.
While scientists then understood the phenomenon to be galaxies moving away from each other, today astronomers know that what is actually being observed is the expansion of the universe. No matter where you are located in the cosmos, you would see the same phenomenon happening at the same speed.
Hubble’s initial calculations have been refined over the years, as more and more sensitive telescopes have been used to make the measurements. These include the Hubble Space Telescope (which examined a kind of variable star called Cepheid variables) and WMAP, which extrapolated based on measurements of the cosmic microwave background — a constant background temperature in the universe that is sometimes called the “afterglow” of the Big Bang.
THE BIG BANG:
The Big Bang theory is an effort to explain what happened at the very beginning of our universe. Discoveries in astronomy and physics have shown beyond a reasonable doubt that our universe did in fact have a beginning. Prior to that moment there was nothing; during and after that moment there was something: our universe. The Big Bang theory is an effort to explain what happened during and after that moment.
According to the standard theory, our universe sprang into existence as a “singularity” around 13.7 billion years ago. What is a “singularity” and where does it come from? Well, to be honest, that answer is unknown. Astronomers simply don’t know for sure. Singularities are zones that defy our current understanding of physics. They are thought to exist at the core of “black holes.” Black holes are areas of intense gravitational pressure. The pressure is thought to be so intense that finite matter is actually squished into infinite density (a mathematical concept that truly boggles the mind). These zones of infinite density are called “singularities.” Our universe is thought to have begun as an infinitesimally small, infinitely hot, infinitely dense something – a singularity. Where did it come from? We don’t know. Why did it appear? We don’t know.
After its initial appearance, it apparently inflated (the “Big Bang”), expanded and cooled, going from very, very small and very, very hot, to the size and temperature of our current universe. It continues to expand and cool to this day and we are inside of it: incredible creatures living on a unique planet, circling a beautiful star clustered together with several hundred billion other stars in a galaxy soaring through the cosmos, all of which is inside of an expanding universe that began as an infinitesimal singularity which appeared out of nowhere for reasons unknown. This is the Big Bang theory.
THREE STEPS IN MEASURING THE HUBBLE CONSTANT:
The illustration below shows the three steps astronomers used to measure the universe’s expansion rate to an unprecedented accuracy, reducing the total uncertainty to 2.4 percent.
Astronomers made the measurements by streamlining and strengthening the construction of the cosmic distance ladder, which is used to measure accurate distances to galaxies near and far from Earth.
Beginning at left, astronomers use Hubble to measure the distances to a class of pulsating stars called Cepheid Variables, employing a basic tool of geometry called parallax. This is the same technique that surveyors use to measure distances on Earth. Once astronomers calibrate the Cepheids’ true brightness, they can use them as cosmic yardsticks to measure distances to galaxies much farther away than they can with the parallax technique. The rate at which Cepheids pulsate provides an additional fine-tuning to the true brightness, with slower pulses for brighter Cepheids. The astronomers compare the calibrated true brightness values with the stars’ apparent brightness, as seen from Earth, to determine accurate distances.
Once the Cepheids are calibrated, astronomers move beyond our Milky Way to nearby galaxies [shown at center]. They look for galaxies that contain Cepheid stars and another reliable yardstick, Type Ia supernovae, exploding stars that flare with the same amount of brightness. The astronomers use the Cepheids to measure the true brightness of the supernovae in each host galaxy. From these measurements, the astronomers determine the galaxies’ distances.
They then look for supernovae in galaxies located even farther away from Earth. Unlike Cepheids, Type Ia supernovae are brilliant enough to be seen from relatively longer distances. The astronomers compare the true and apparent brightness of distant supernovae to measure out to the distance where the expansion of the universe can be seen [shown at right]. They compare those distance measurements with how the light from the supernovae is stretched to longer wavelengths by the expansion of space. They use these two values to calculate how fast the universe expands with time, called the Hubble constant.
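The last rung of that ladder can be sketched numerically. The standard distance-modulus relation, m − M = 5·log10(d) − 5 (with d in parsecs), converts the gap between apparent brightness m and calibrated true brightness M into a distance. The supernova numbers below are illustrative values I have assumed, not measurements from the study described above:

```python
def distance_mpc(apparent_m, absolute_M):
    """Invert the distance modulus m - M = 5*log10(d_pc) - 5."""
    d_pc = 10 ** ((apparent_m - absolute_M + 5) / 5)
    return d_pc / 1e6  # convert parsecs to megaparsecs

# Assumed example: a Type Ia supernova calibrated (via Cepheids) to
# absolute magnitude -19.3, observed at apparent magnitude 16.7, in a
# galaxy whose redshift gives a recession velocity of 11,100 km/s.
d = distance_mpc(16.7, -19.3)
H0 = 11100 / d  # Hubble's law rearranged: Ho = v / d

print(f"distance ≈ {d:.0f} Mpc, Ho ≈ {H0:.0f} km/s/Mpc")  # ≈ 158 Mpc, ≈ 70
```

Comparing many such supernova distances against their redshifts, rather than a single made-up pair like this one, is what pins the constant down to the few-percent level.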
Now, that’s simple, isn’t it? OK, not really. It’s actually painstaking and, as you can see, extremely detailed. To our credit, though, the constant can be measured.
This is a rather off-the-wall post, but one I certainly hope you enjoy. Technology is a marvelous thing, working to clarify and define where we came from and how we got here.