With the federal government pulling out of manned space flight, private companies have ample opportunity to fill in the gaps.  Of course, these companies MUST have adequate funding, trained personnel and proper facilities to launch the vehicles, equipment and support systems that will take people and payloads to the outer reaches of space.  The list of companies was quite surprising to me.  Let’s take a look.

These are just the launch vehicles.  There is also a huge list of manufacturers making manned rovers and orbiters, research craft and tech demonstrators, propulsion systems, satellite launchers, space manufacturing, space mining, space stations, space settlements, spacecraft components, and spaceliners.   I will not publish that list, but these companies are easy to discover by searching on the heading for each category.  To think we are not involved in space is obviously a misconception.

 


ARTIFICIAL INTELLIGENCE

February 12, 2019


Just what do we know about Artificial Intelligence or AI?  Portions of this post were taken from Forbes Magazine.

John McCarthy first coined the term artificial intelligence in 1956 when he invited a group of researchers from a variety of disciplines including language simulation, neuron nets, complexity theory and more to a summer workshop called the Dartmouth Summer Research Project on Artificial Intelligence to discuss what would ultimately become the field of AI. At that time, the researchers came together to clarify and develop the concepts around “thinking machines” which up to this point had been quite divergent. McCarthy is said to have picked the name artificial intelligence for its neutrality; to avoid highlighting one of the tracks being pursued at the time for the field of “thinking machines” that included cybernetics, automation theory and complex information processing. The proposal for the conference said, “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Today, modern dictionary definitions focus on AI being a sub-field of computer science and how machines can imitate human intelligence (being human-like rather than becoming human). The English Oxford Living Dictionary gives this definition: “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Merriam-Webster defines artificial intelligence this way:

  1. A branch of computer science dealing with the simulation of intelligent behavior in computers.
  2. The capability of a machine to imitate intelligent human behavior.

About thirty (30) years ago, a professor at the Harvard Business School, Dr. Shoshana Zuboff, articulated three laws based on research into the consequences that widespread computing would have on society. Dr. Zuboff held degrees in philosophy and social psychology, so she was definitely ahead of her time relative to the then-unknown field of AI.  In her book “In the Age of the Smart Machine: The Future of Work and Power”, she postulated the following three laws:

  • Everything that can be automated will be automated
  • Everything that can be informated will be informated. (NOTE: Informated was coined by Zuboff to describe the process of turning descriptions and measurements of activities, events and objects into information.)
  • In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

At that time there was definitely a significant lack of computing power.  That ship has sailed; computing power is no longer the great hindrance to AI advancement that it certainly once was.

 

WHERE ARE WE?

In a recent speech, Russian president Vladimir Putin made an incredibly prescient statement: “Artificial intelligence is the future, not only for Russia, but for all of humankind.” He went on to highlight both the risks and rewards of AI and concluded by declaring that whatever country comes to dominate this technology will be the “ruler of the world.”

As someone who closely monitors global events and studies emerging technologies, I think Putin’s lofty rhetoric is entirely appropriate. Funding for global AI startups has grown at a sixty percent (60%) compound annual growth rate since 2010. More significantly, the international community is actively discussing the influence AI will exert over both global cooperation and national strength. In fact, the United Arab Emirates just recently appointed its first state minister responsible for AI.
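Just to put a sixty percent (60%) compound annual growth rate in perspective, here is a quick back-of-the-envelope calculation in Python. The 2010 baseline figure is purely illustrative, not a real funding number:

```python
# Compound annual growth: value after n years = baseline * (1 + rate) ** n
baseline = 1.0   # illustrative 2010 funding level (arbitrary units)
rate = 0.60      # 60% compound annual growth rate

for year in range(9):   # 2010 through 2018
    print(2010 + year, round(baseline * (1 + rate) ** year, 1))
```

At that rate, funding multiplies more than forty-fold in eight years, which is why the rhetoric is so heated.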

Automation and digitalization have already had a radical effect on international systems and structures. And considering that this technology is still in its infancy, every new development will only deepen the effects. The question is: Which countries will lead the way, and which ones will follow behind?

If we look at the criteria necessary for advancement, here are the seven countries in the best position to rule the world with the help of AI.  These countries are as follows:

  • Russia
  • The United States of America
  • China
  • Japan
  • Estonia
  • Israel
  • Canada

The United States and China are currently in the best position to reap the rewards of AI. These countries have the infrastructure, innovations and initiative necessary to evolve AI into something with broadly shared benefits. In fact, China expects to dominate AI globally by 2030. The United States could still maintain its lead if it makes AI a top priority and makes the necessary investments while also pulling together all required government and private-sector resources.

Ultimately, however, winning and losing will not be determined by which country gains the most growth through AI. It will be determined by how the entire global community chooses to leverage AI — as a tool of war or as a tool of progress.

Ideally, the country that uses AI to rule the world will do it through leadership and cooperation rather than automated domination.

CONCLUSIONS:  We dare not neglect this disruptive technology.  We cannot afford to lose this battle.

COMPUTER SIMULATION

January 20, 2019


More and more engineers, systems analysts, biochemists, city planners, medical practitioners, and individuals in entertainment fields are moving towards computer simulation.  Let’s take a quick look at simulation; then we will discover several examples of how very powerful this technology can be.

WHAT IS COMPUTER SIMULATION?

Simulation modelling is an excellent tool for analyzing and optimizing dynamic processes. Specifically, when mathematical optimization of complex systems becomes infeasible, and when conducting experiments within real systems is too expensive, time consuming, or dangerous, simulation becomes a powerful tool. The aim of simulation is to support objective decision making by means of dynamic analysis, to enable managers to safely plan their operations, and to save costs.

A computer simulation or a computer model is a computer program that attempts to simulate an abstract model of a particular system. … Computer simulations build on and are useful adjuncts to purely mathematical models in science, technology and entertainment.

Computer simulations have become a useful part of mathematical modelling of many natural systems in physics, chemistry and biology, human systems in economics, psychology, and social science and in the process of engineering new technology, to gain insight into the operation of those systems. They are also widely used in the entertainment fields.

Traditionally, the formal modeling of systems has been possible using mathematical models, which attempt to find analytical solutions to problems, enabling the prediction of the behavior of the system from a set of parameters and initial conditions.  The word prediction is a very important word in the overall process. One very critical part of the predictive process is designating the parameters properly: not only the upper and lower specifications but also the parameters that define intermediate processes.
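To make the idea of “parameters and initial conditions” concrete, here is a minimal sketch in Python using Newton’s law of cooling. The constants are invented for the example, and a real simulation package would use far more sophisticated solvers:

```python
# Newton's law of cooling: dT/dt = -k * (T - T_ambient)
# The parameters (k, T_ambient) and the initial condition (T at t=0)
# fully determine the predicted behavior.
k = 0.1           # cooling constant, per minute (parameter)
T_ambient = 20.0  # ambient temperature, deg C (parameter)
T = 90.0          # initial temperature, deg C (initial condition)
dt = 1.0          # time step, minutes

for minute in range(1, 31):
    T += -k * (T - T_ambient) * dt   # simple Euler integration step
    if minute % 10 == 0:
        print(f"t = {minute:2d} min, predicted T = {T:.1f} C")
```

Change any parameter or the initial condition and the prediction changes with it, which is exactly why designating them properly is so critical.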

The reliability of, and the trust people put in, computer simulations depends on the validity of the simulation model.  The degree of trust is directly related to the software itself and the reputation of the company producing the software. There will be considerably more in this course regarding vendors providing software to companies wishing to simulate processes and solve complex problems.

Computer simulations find use in the study of dynamic behavior in environments that may be difficult or dangerous to implement in real life. For example, a nuclear blast may be represented with a mathematical model that takes into consideration various elements such as velocity, heat and radioactive emissions. Additionally, one may implement changes to the model by changing certain variables, like the amount of fissionable material used in the blast.  Another application involves predicting the behavior of weather systems.  The mathematics involved in these determinations is significantly complex and usually draws on a branch of math called “chaos theory”.
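The hallmark of chaos theory is extreme sensitivity to initial conditions. Here is a minimal sketch using the classic Lorenz system, a famous toy model historically connected to weather research (not an actual forecast model), showing two nearly identical starting points drifting far apart:

```python
# Lorenz system with the standard textbook parameters, integrated with
# crude Euler steps; good enough to demonstrate sensitive dependence.
def step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)           # one initial condition
b = (1.0, 1.0, 1.0 + 1e-9)    # differs by one part in a billion

for i in range(1, 5001):
    a = step(*a)
    b = step(*b)
    if i % 1000 == 0:
        print(f"step {i}: x-separation = {abs(a[0] - b[0]):.6f}")
```

The billionth-of-a-unit difference eventually grows to the size of the whole system, which is why long-range weather prediction is so hard.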

Simulations largely help in determining behaviors when individual components of a system are altered. Simulations can also be used in engineering to determine potential effects, such as the effect on river systems of constructing dams.  Some companies call these studies “what-if” scenarios because they allow the engineer or scientist to apply differing parameters and discern the cause-effect interactions.
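As a toy illustration of a “what-if” sweep, the sketch below varies one parameter of an imaginary dam, the release rate, and compares the predicted reservoir volume after thirty days. All figures are invented for the example:

```python
# "What-if" scenario: sweep one parameter and compare outcomes.
inflow = 120.0        # river inflow, cubic meters per second (illustrative)
start_volume = 400.0  # starting reservoir volume, millions of cubic meters

for release in (80.0, 120.0, 160.0):   # what-if: three candidate release rates
    volume = start_volume
    for day in range(30):
        # net change per day, converted to millions of cubic meters
        volume += (inflow - release) * 86_400 / 1e6
    print(f"release {release:5.1f} m^3/s -> volume after 30 days: {volume:6.1f} Mm^3")
```

Each run answers one “what if” question; a commercial package automates exactly this kind of sweep over many parameters at once.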

One great advantage a computer simulation has over a mathematical model is that it allows a visual representation of events and a timeline. You can actually see the action and chain of events with simulation and investigate the parameters for acceptance.  You can examine the limits of acceptability using simulation.   All components and assemblies have upper and lower specification limits and must perform within those limits.
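Here is a minimal sketch of how simulation can exercise those specification limits: a Monte Carlo run that asks what fraction of simulated parts falls between the lower and upper limits. The dimensions and tolerances are invented for the example:

```python
import random

# Monte Carlo tolerance check against lower/upper specification limits.
LSL, USL = 9.95, 10.05   # lower and upper specification limits, mm
random.seed(42)          # fixed seed so the run is repeatable

trials = 100_000
in_spec = sum(
    1 for _ in range(trials)
    if LSL <= random.gauss(10.0, 0.02) <= USL   # simulated manufacturing variation
)
print(f"{100 * in_spec / trials:.2f}% of simulated parts are within specification")
```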

Computer simulation is the discipline of designing a model of an actual or theoretical physical system, executing the model on a digital computer, and analyzing the execution output. Simulation embodies the principle of “learning by doing” — to learn about the system we must first build a model of some sort and then operate the model. The use of simulation is an activity that is as natural as a child who role plays. Children understand the world around them by simulating (with toys and figurines) most of their interactions with other people, animals and objects. As adults, we lose some of this childlike behavior but recapture it later on through computer simulation. To understand reality and all of its complexity, we must build artificial objects and dynamically act out roles with them. Computer simulation is the electronic equivalent of this type of role playing and it serves to drive synthetic environments and virtual worlds. Within the overall task of simulation, there are three primary sub-fields: model design, model execution and model analysis.

REAL-WORLD SIMULATION:

The following examples are taken from computer screens representing real-world situations and/or problems that need solutions.  As mentioned earlier, “what-ifs” may be realized by animating the computer model, providing cause-effect responses to desired inputs. Let’s take a look.

A great host of mechanical and structural problems may be solved by using computer simulation. The example above shows how the diameter of two matching holes may be affected by applying heat to the bracket.
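The physics behind that example can be sketched with the linear thermal-expansion formula; a hole in a heated part grows right along with the surrounding material. The numbers below are illustrative, assuming a steel bracket:

```python
# Linear thermal expansion: delta_D = alpha * D * delta_T
# A hole expands with the material around it, so the same formula applies.
alpha = 12e-6    # thermal expansion coefficient for steel, per deg C (approximate)
D = 10.0         # nominal hole diameter, mm
delta_T = 150.0  # temperature rise, deg C

delta_D = alpha * D * delta_T
print(f"hole grows by {delta_D * 1000:.0f} micrometers, to {D + delta_D:.3f} mm")
```

A full simulation adds geometry, constraints and non-uniform heating, but this is the basic effect it is resolving.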

 

The Newtonian and non-Newtonian flow of fluids, i.e. liquids and gases, has always been a subject of concern within piping systems.  Flow related to pressure and temperature may be approximated by simulation.
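For the simplest Newtonian case, laminar flow in a straight pipe, there is even a closed-form result, the Hagen-Poiseuille equation; simulation takes over where such formulas give out. A sketch with illustrative numbers for a light oil:

```python
import math

# Hagen-Poiseuille: flow of a Newtonian fluid in laminar pipe flow.
#   Q = pi * delta_P * r**4 / (8 * mu * L)
delta_P = 5000.0  # pressure drop along the pipe, Pa (illustrative)
r = 0.01          # pipe radius, m
L = 2.0           # pipe length, m
mu = 0.1          # dynamic viscosity of a light oil, Pa*s (approximate)

Q = math.pi * delta_P * r**4 / (8 * mu * L)
print(f"predicted flow: {Q * 1000:.3f} liters per second")
```

Note the fourth-power dependence on radius: halving the pipe diameter cuts the flow sixteen-fold, exactly the kind of sensitivity simulation makes visible.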

 


Electromagnetics is an extremely complex field. The digital image above strives to show how a magnetic field reacts to an applied voltage.
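A back-of-the-envelope version of that voltage-to-field coupling, for an idealized solenoid (every value here is invented for the example):

```python
import math

# Applied voltage -> coil current (Ohm's law) -> field inside a long solenoid.
V = 12.0                  # applied voltage, volts
R = 3.0                   # coil resistance, ohms
n = 2000                  # winding density, turns per meter
mu0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

I = V / R        # Ohm's law
B = mu0 * n * I  # B = mu0 * n * I for an ideal long solenoid
print(f"current: {I:.1f} A, field inside the coil: {B * 1000:.1f} mT")
```

Real electromagnetic simulation solves Maxwell’s equations over a meshed geometry; this one-line physics is merely the intuition behind the pictures.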

Chemical engineers are very concerned with reaction time when chemicals are mixed.  One example might be the ignition time when an oxidizer comes in contact with fuel.
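That temperature sensitivity is commonly captured by the Arrhenius equation, where the characteristic reaction time scales roughly as the inverse of the rate constant. The constants below are invented for the example:

```python
import math

# Arrhenius equation: k = A * exp(-Ea / (R * T))
# Hotter mixture -> larger rate constant -> shorter time to react (~1/k).
A = 1.0e10      # pre-exponential factor, 1/s (illustrative)
Ea = 120_000.0  # activation energy, J/mol (illustrative)
R = 8.314       # universal gas constant, J/(mol*K)

for T in (500.0, 600.0, 700.0):  # temperatures in kelvin
    k = A * math.exp(-Ea / (R * T))
    print(f"T = {T:.0f} K: k = {k:9.3e} 1/s, characteristic time ~ {1 / k:9.3e} s")
```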

Acoustics, or how sound propagates through a physical device or structure, may also be simulated.

The transfer of heat from a warmer surface to a colder surface has always come into question. Simulation programs are extremely valuable in visualizing this transfer.
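The steady-state version of that conduction problem reduces to Fourier’s law, which a simulation then extends to complicated shapes and transient conditions. Illustrative values for a brick wall:

```python
# Fourier's law for steady conduction through a flat wall: q = k * A * dT / L
k = 0.8    # thermal conductivity of brick, W/(m*K) (approximate)
A = 10.0   # wall area, m^2
dT = 25.0  # temperature difference, warm side minus cold side, deg C
L = 0.2    # wall thickness, m

q = k * A * dT / L
print(f"heat flows from the warm side to the cold side at {q:.0f} W")
```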

 

Equation-based modeling can show how a structure, in this case a metal plate, is affected when forces are applied.
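Equation-based modeling at its simplest is Hooke’s law. A toy version for a metal plate loaded in tension (the dimensions and load are invented for the example):

```python
# Hooke's law for a plate in uniaxial tension: stress = F / A, strain = stress / E
F = 50_000.0      # applied tensile force, N
width = 0.1       # plate width, m
thickness = 0.01  # plate thickness, m
E = 200e9         # Young's modulus of steel, Pa (approximate)

A = width * thickness
stress = F / A       # Pa
strain = stress / E  # dimensionless elongation per unit length
print(f"stress: {stress / 1e6:.0f} MPa, strain: {strain * 100:.4f}%")
```

A finite-element package solves this same relationship at thousands of points across the plate, which is what produces the colorful stress plots.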

In addition to computer simulation, we have AR (augmented reality) and VR (virtual reality).  Those subjects are fascinating but will require another post for another day.  I hope you enjoy this one.

 

 

WEARABLE TECHNOLOGY

January 12, 2019


Wearable technology’s evolution is not about the gadget on the wrist or the arm but what is done with the data these devices collect, say most computational biologists. Before we go on, let’s define wearable technology:

“Wearable technology (also called wearable gadgets) is a category of technology devices that can be worn by a consumer and often include tracking information related to health and fitness. Other wearable tech gadgets include devices that have small motion sensors to take photos and sync with your mobile devices.”

Several examples of wearable technology may be seen in the following digital photographs.

You can all recognize the “watches” shown above. I have one on right now.  For Christmas this year, my wife gave me a Fitbit Charge 3.  I can monitor: 1.) Number of steps per day, 2.) Pulse rate, 3.) Calories burned during the day, 4.) Time of day, 5.) Number of stairs climbed per day, 6.) Miles walked or run per day, and 7.) Several items I can program in from the app on my digital phone.  It is truly a marvelous device.

Other wearables provide very different information and collect data of much greater import.

The device above is manufactured by a company called Lumus.  This company focuses on products that provide new dimensions for the human visual experience. It offers cutting-edge eyewear displays that can be used in various applications including gaming, movie watching, text reading, web browsing, and interaction with the interface of wearable computers. Lumus does not aim to produce self-branded products. Instead, the company wants to work with various original equipment manufacturers (OEMs) to enable the wider use of its technologies.  This is truly ground-breaking technology being used today on a limited basis.

Wearable technology is aiding individuals with failing eyesight to see as most people see.  The methodology is explained with the following digital image.

Glucose levels may be monitored by the device shown above. No longer is it necessary to prick your finger to draw a small droplet of blood to determine glucose levels.  The device can do that on a continuous basis and without a cumbersome test device.

There are many people all over the world suffering from “A-fib”.  Periodic monitoring becomes a necessity, and one of the best methods of accomplishing that is shown by the devices below. A watch monitors pulse rate and sends that information via Bluetooth to an app downloaded on your cell phone.

Four Benefits of Wearable Health Technology are as follows:

  • Real-time data collection. Wearables can already collect an array of data like activity levels, sleep and heart rate, among others. …
  • Continuous monitoring. …
  • Prediction and alerting. …
  • Empowering patients.

Major advances in sensor and micro-electromechanical systems (MEMS) technologies are allowing much more accurate measurements and facilitating believable data that can be used to track movements and health conditions on any one given day.  In many cases, the data captured can be downloaded into a computer and transmitted to a medical practitioner for documentation.

Sensor miniaturization is a key driver for space-constrained wearable design.  Motion sensors are now available in tiny packages measuring 2 x 2 millimeters.  As mentioned, specific medical sensors can be used to track 1.) Heart rate variability, 2.) Oxygen levels, 3.) Cardiac health, 4.) Blood pressure, 5.) Hemoglobin, 6.) Glucose levels and 7.) Body temperature.  These medical devices represent a growing market due to their higher accuracy and greater performance.  These facts make them less prone to the price pressures that designers commonly face when designing consumer wearables.
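As a taste of what the firmware behind these sensors does, here is a minimal sketch that turns raw inter-beat (RR) intervals into a heart rate and a simple heart-rate-variability figure. The interval values are made up for the example:

```python
import statistics

# Hypothetical inter-beat (RR) intervals in milliseconds from a pulse sensor.
rr_ms = [810, 795, 830, 805, 820, 790, 815, 800]

heart_rate = 60_000 / statistics.mean(rr_ms)  # beats per minute
sdnn = statistics.stdev(rr_ms)                # SDNN, a common simple HRV measure

print(f"heart rate: {heart_rate:.0f} bpm, HRV (SDNN): {sdnn:.1f} ms")
```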

One great advantage for these devices now is the ability to hold a charge for a much longer period of time.  My Fitbit has a battery life of seven (7) days.  That’s really unheard of relative to times past.

CONCLUSION:  Wearable designs are building a whole new industry one gadget at a time.  MEMS sensors represent an intrinsic part of this design movement. Wearable designs have come a long way from counting steps in fitness trackers, and they are already applying machine-learning algorithms to classify and analyze data.

HOW MUCH IS TOO MUCH?

December 15, 2018


How many “screen-time” hours do you spend each day?  Any idea? Now, let’s face facts: for an adult working a full-time job, hours of daily screen time may be a necessity.  We all know that, but how about our children and grandchildren?

I’m old enough to remember when television was a laboratory novelty and telephones were “ringer-types” affixed to the cleanest wall in the house.  No laptops, no desktops, no cell phones, no Gameboys, etc., etc.  You get the picture.  That, as we all know, is a far cry from where we are today.

Today’s children have grown up with a vast array of electronic devices at their fingertips. They can’t imagine a world without smartphones, tablets, and the internet.  If you do not believe this just ask them. One of my younger grandkids asked me what we did before the internet.  ANSWER: we played outside, did our chores, called our friends and family members.

The advances in technology mean today’s parents are the first generation who have to figure out how to limit screen-time for children.  This is a growing requirement for reasons we will discuss later.  While digital devices can provide endless hours of entertainment and they can offer educational content, unlimited screen time can be harmful. The American Academy of Pediatrics recommends parents place a reasonable limit on entertainment media. Despite those recommendations, children between the ages of eight (8) and eighteen (18) average seven and one-half (7 ½) hours of entertainment media per day, according to a 2010 study by the Henry J. Kaiser Family Foundation.  Can you imagine over seven (7) hours per day?  When I read this it just blew my mind.

But it’s not just kids who are getting too much screen time. Many parents struggle to impose healthy limits on themselves too. The average adult spends over eleven (11) hours per day behind a screen, according to the Kaiser Family Foundation.  I’m very sure that most of this is job related but most people do not work eleven hours behind their desk each day.

Let’s now look at what the experts say:

  • Children under age two (2) spend about forty-two (42) minutes, children ages two (2) to four (4) spend two (2) hours and forty (40) minutes, and kids ages five (5) to eight (8) spend nearly three (3) hours (2:58) with screen media daily. About thirty-five (35) percent of children’s screen time is spent with a mobile device, compared to four (4) percent in 2011. (October 19, 2017)
  • Children aged eighteen (18) months to two (2) years can watch or use high-quality programs or apps if adults watch or play with them to help them understand what they’re seeing. Children aged two to five (2-5) years should have no more than one hour a day of screen time with adults watching or playing with them.
  • The American Academy of Pediatrics released new guidelines on how much screen time is appropriate for children. … Excessive screen time can also lead to “Computer Vision Syndrome”, which is a combination of headaches, eye strain, fatigue, blurry vision for distance, and excessive dry eyes. (August 21, 2017)
  • Pediatricians: no more than two (2) hours of screen time daily for kids. Children should be limited to less than two hours of entertainment-based screen time per day, and shouldn’t have TVs or Internet access in their bedrooms, according to new guidelines from pediatricians. (October 28, 2013)

OK, why?

  • Obesity: Too much time engaging in sedentary activity, such as watching TV and playing video games, can be a risk factor for becoming overweight.
  • Sleep Problems:  Although many parents use TV to wind down before bed, screen time before bed can backfire. The light emitted from screens interferes with the sleep cycle in the brain and can lead to insomnia.
  • Behavioral Problems: Elementary school-age children who watch TV or use a computer more than two hours per day are more likely to have emotional, social, and attention problems. Excessive TV viewing has even been linked to increased bullying behavior.
  • Educational problems: Elementary school-age children who have televisions in their bedrooms do worse on academic testing.  This is an established fact—established.  At this time in our history we need educated adults that can get the job done.  We do not need dummies.
  • Violence: Exposure to violent TV shows, movies, music, and video games can cause children to become desensitized to it. Eventually, they may use violence to solve problems and may imitate what they see on TV, according to the American Academy of Child and Adolescent Psychiatry.

When very small children get hooked on tablets and smartphones, says Dr. Aric Sigman, an associate fellow of the British Psychological Society and a Fellow of Britain’s Royal Society of Medicine, they can unintentionally cause permanent damage to their still-developing brains. Too much screen time too soon, he says, “is the very thing impeding the development of the abilities that parents are so eager to foster through the tablets. The ability to focus, to concentrate, to lend attention, to sense other people’s attitudes and communicate with them, to build a large vocabulary—all those abilities are harmed.”

Between birth and age three, for example, our brains develop quickly and are particularly sensitive to the environment around us. In medical circles, this is called the critical period, because the changes that happen in the brain during these first tender years become the permanent foundation upon which all later brain function is built. In order for the brain’s neural networks to develop normally during the critical period, a child needs specific stimuli from the outside environment. These are rules that have evolved over centuries of human evolution, but—not surprisingly—these essential stimuli are not found on today’s tablet screens. When a young child spends too much time in front of a screen and not enough getting required stimuli from the real world, her development becomes stunted.

CONCLUSION: This digital age is wonderful if used properly and recognized as having hazards that may create lasting negative effects.  Use wisely.

JUMPER—THE BOOK

December 8, 2018


Jumper begins with Davey, a child who has spent the entirety of his life being verbally and physically abused by his alcoholic father. When reading the book, I immediately took a very sympathetic stance relative to Davey’s situation. I cannot imagine growing up in a household with this atmosphere.  He and his mother were routinely pummeled by the “man of the house”, and the brutality at times was graphic.   When I say graphic, I mean Davey’s mother had to have reconstructive surgery after her last beating.  This is when she left.  When she did leave, unable to deal with the abuse she suffered, it only got worse for him. He was abandoned by the only person in the world who ever cared for him and left with the man who frequently beat him bloody.

Davey finally finds escape when he discovers his ability to Jump, or teleport, to any place that he has previously been and can remember well enough to picture in his mind. He discovers this ability quite by accident.  His mother lies comatose on the kitchen floor, having been beaten by her husband, and Davey is lying on the floor with his father on top of him throwing punches.  He visualizes the only safe place he knows, the local public library.  That’s when he first jumps.  He has no idea how he did this.

After the beating, he runs away and tries to make a new life for himself. It is definitely not easy for a seventeen-year-old out on his own, with no money, no driver’s license, no passport, no Social Security number and no birth certificate.  No identification at all. Out of desperation, he finally decides the only way he can survive is to rob a bank using his powers. This happens in the movie as well but is one of the few similarities between the two—very few.  Davey’s desperate circumstances and real need, however, are deeply delved into in the book.   He is forced to steal the money just to survive, promising himself to one day pay it back, something he actually, eventually does.  With the money he is able to improve his living standards and actually begin to enjoy his young life, no longer having to worry about the abuse.

He meets a girl named Millie, falls in love, and over the course of the novel finds someone who is willing to listen to his story.  This includes all of the horrible, terrible things that he has had to live through, and has kept pent up inside himself his entire life. She urges him to seek out his mother, and he does just that but the result is a terrible event that determines, to some extent, his future.

In my opinion, the book is much, much better than the movie.  The characters are vivid and compelling, with Davy and Millie trying to determine the method by which Davy is able to teleport. (NOTE: Teleportation is the theoretical transfer of matter or energy from one point to another without traversing the physical space between them.) The book does NOT come to any conclusions, but they do establish the fact that there is a portal through which Davy leaps when he jumps.

What Others Think:

I think this is a terrific book but I would like you to read the book and judge for yourself. I also would like to give you what others think.

Mar 17, 2014  Gavin rated it really liked it

I’ve wanted to read this ever since I watched the Jumper movie. Teleportation movies and books are always fun. The biggest surprise is that this book was nothing like the movie. The only thing they had in common was the teleporting main character.

This was a surprisingly dark sci-fi that spent more time pondering moral dilemmas and exploring Davey’s emotional reaction to the various mishaps that befell him than it did on action sequences. The action and the pace did pick up a bit towards the end.

Davey was a tortured soul with a bit of bitterness about him, but for all his faults he was mostly likable.

Overall this was an enjoyable sci-fi read worth a 4-star rating. I’ll definitely read the rest of the books in the series at some point.

Nov 21, 2013  Eric Allen rated it it was amazing

Jumper
By Steven Gould

A Retroview by Eric Allen

When this book came out, back in 1992, I was in my teens, had just finished the latest installment of The Wheel of Time, and I was looking for something else to read. So, I did the thing that all geeks do, and asked the librarian for a recommendation. She handed me Jumper with a wink and told me that I had better hurry because the book was about to be banned at that library. Being a teenaged boy at the time, these were the exact words needed to sell me on it. And I must say, I was really blown away by it. It was a book written for someone my age, that wasn’t afraid to treat me like an adult, showing such things as homosexual child rape, child abuse, alcoholism, graphic terrorist attacks, and it even used the dreaded F word like FOUR WHOLE TIMES!!! No wonder that behind Catcher in the Rye, it is the most banned children’s book in history. A fact that the author is extraordinarily proud of.

Dec 05, 2017  Skip rated it really liked it

Davy Rice has a special gift: the ability to transport himself to any spot he wants, which he discovers when being beaten by his abusive father or about to be raped by a long-haul trucker. He flees his small town, moving to NY, where he settles down after jumping into a bank and taking almost $1 million. He falls in love with a college student in Oklahoma, and eventually decides to find his mother, who deserted him. But disaster strikes and Davy begins to use his gift to find the culprit, drawing the unwanted attention of the NSA and NY Police Department. Improbable, of course, but Davy is a moral, sensitive protagonist, dealing with complex issues.

Sherwood Smith rated it it was amazing

I call it science fiction though the jumping is probably fantasy, but the book is treated like SF. The original book, not the novelisation for the movie, was heart-wrenching, funny, fast-paced, poignant, and so very real in all the good ways, as the teen protagonist discovers he can teleport from place to place, at first to escape his abusive dad. Then he wants to do good . . . and discovers that there are consequences–from both sides.

I’m sorry that the movie appears to have removed all the heart from it, leaving just the violence, without much motivation, judging from the novelization that appeared afterward. No doubt many readers liked it, but that was not my cup of tea.

CONCLUSIONS:

As I mentioned above, read the book and determine for yourself if it’s a winner.  Easy to read: three hundred and forty-five (345) pages, double-spaced.  A good night’s work.

THE WAR TO END ALL WARS

November 11, 2018


Exactly one hundred (100) years ago today (November 11) the first world war ended.  World War I, which introduced industrialized killing to a world utterly unprepared for it, ended at 11 a.m. on Nov. 11, 1918 — the eleventh hour of the eleventh day of the eleventh month.

That day was described in America as Armistice Day. Church bells would sound at 11 a.m. and people would observe a moment of silence to remember the men who died in the 1914-18 war. In 1954, after World War II and the Korean War, Congress renamed the day Veterans Day. Let’s take a brief look at several reasons for the term “War to End All Wars”.

  • By that first Christmas, over 300,000 Frenchmen had been killed, wounded or captured. During the same period, the Germans suffered 800,000 casualties.
  • “Throughout four years of war, casualties on both sides on the western front averaged 2,250 dead and almost 5,000 wounded every day,” Joseph Persico writes in his “11th Month, 11th Day, 11th Hour”.
  • Battles large and small were fought on three continents — Africa, Asia and especially Europe — and the war claimed some nine million combatants and an estimated seven million more civilian lives.
  • America, which entered the war in April 1917, lost 53,402 of her sons in combat and another 63,114 to non-combat deaths, according to the Department of Veterans Affairs.
  • 204,000 American soldiers were wounded.
  • Germany lost 2.05 million men, while Russia lost 1.8 million. Great Britain lost 885,000 men — more than twice the number of Americans killed in World War II.
  • France’s losses were catastrophic. Fully 1.397 million men, 4.29 percent of France’s population, died in the war.

I would like now to present a timeline relative to the events of WWI.  This may be somewhat long, but on this Veterans Day I think it is very important.

1914:

  • June 28—A Serb teenager, Gavrilo Princip, kills Austrian Archduke Franz Ferdinand
  • July 28—Austria-Hungary declares war on Serbia
  • August 1—Germany declares war on Russia
  • August 4—Germany declares war on France
  • August 23—Japan declares war on Germany
  • September—Battle of the Marne stops the German advance in France
  • October 29—Ottoman Empire enters the war
  • November—Beginning of trench warfare
  • December 25—Unofficial Christmas Truce

1915:

  • February—German U-boat campaign marks the first large use of submarines in warfare
  • April—Allied troops land at Gallipoli, Turkey, a defining moment for Australia and New Zealand
  • April 22—First use of a chemical weapon, chlorine gas, near Ypres, Belgium
  • May 7—British ship Lusitania sunk by German U-boat
  • May 23—Italy enters the war against Austria-Hungary
  • October—Bulgaria joins the war on the side of the Central Powers

1916:

  • February 21—Battle of Verdun begins
  • March 9—Germany declares war on Portugal
  • July 1—Battle of the Somme begins; the first mass use of tanks comes later in the battle (September)
  • August 27—Romania enters the war and is invaded by Germany
  • September 4—British take Dar es Salaam in German East Africa
  • October—Soldier Adolf Hitler is wounded
  • December 23—Allied forces defeat Turkish forces in the Sinai Peninsula

1917:

  • March—Baghdad falls to Anglo-Indian forces
  • April 6—United States declares war on Germany
  • April—Battle of Vimy Ridge, a defining moment for Canada
  • July—The last Russian offensive ends in failure as their revolution nears; the inconclusive Battle of Passchendaele begins in Belgium
  • October 15—Spy Mata Hari is executed by a French firing squad
  • October 26—Brazil declares war joining the Allied Powers
  • December—Battle of Jerusalem

1918:

  • March 3—Treaty of Brest-Litovsk ends Russia’s involvement in the war on the Eastern Front
  • April 21—Legendary German fighter pilot known as the Red Baron is shot down and killed near Amiens, France
  • June—Battle of Belleau Wood, a defining moment for the United States
  • July 21—A German submarine fires on Cape Cod, the only attack on the U.S. mainland during the war
  • September 26—Battle of the Meuse-Argonne begins
  • October 30—Ottoman Empire signs armistice with the Allies
  • October 31—Dissolution of Austro-Hungarian Empire
  • November 9—Germany’s Kaiser Wilhelm II abdicates
  • November 11—Germany signs the armistice ending the war

You can see from the chronology of major events above that this was a war of global significance, a war the likes of which our planet had never known.  As we know, it was not a war that ended all wars.  We never learned that lesson.
