BITCOIN

December 9, 2017


I have been hearing a great deal about Bitcoin lately, specifically on the early-morning television business channels. I am not too sure what this is all about, so I thought I would take a look. First, an “official” definition.

Bitcoin is a cryptocurrency and worldwide payment system. It is the first decentralized digital currency, as the system works without a central bank or single administrator. … Bitcoin was invented by an unknown person or group of people under the name Satoshi Nakamoto and released as open-source software in 2009.

The “unknown” part really disturbs me, as does the “cryptocurrency” aspect, but let’s continue.  Do you remember the Star Trek episodes in which someone asks, ‘how much does it cost?’ and the answer is ‘_______ credits’?  That is essentially what Bitcoin is: digital currency. No one controls Bitcoin, and bitcoins aren’t printed like dollars or euros – they’re produced by people, and increasingly businesses, running computers all around the world, using software that solves mathematical problems.

Bitcoin transactions are completed when a “block” is added to the blockchain database that underpins the currency; however, this can be a laborious process.  SegWit2x proposes moving bitcoin’s transaction data outside of the block and onto a parallel track to allow more transactions to take place. The changes happened in November, and it remains to be seen whether they will have a positive or negative impact on the price of bitcoin in the long term.
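
For readers who like to see the mechanics, here is a minimal, illustrative sketch (in Python, my own toy example, not actual Bitcoin source code) of the core idea: each block carries the hash of the previous block, so a transaction only becomes part of the permanent record once a block containing it is appended to the chain.

    import hashlib
    import json

    def block_hash(block):
        # Hash the block's contents; real Bitcoin hashes a binary block header (twice).
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append_block(chain, transactions):
        # Link the new block to the tip of the chain via the previous block's hash.
        prev = block_hash(chain[-1]) if chain else "0" * 64
        block = {"prev_hash": prev, "transactions": transactions}
        chain.append(block)
        return block

    chain = []
    append_block(chain, [])                           # a stand-in "genesis" block
    append_block(chain, ["Alice pays Bob 0.5 BTC"])   # this payment is now part of the chain
    print(block_hash(chain[-1]))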

It’s been an incredible 2017 for bitcoin growth, with its value quadrupling in the past six months and surpassing the value of an ounce of gold for the first time. It means that if you had invested £2,000 five years ago, you would be a millionaire today.

You cannot “churn out” an unlimited number of Bitcoin. The bitcoin protocol – the rules that make bitcoin work – says that only twenty-one (21) million bitcoins can ever be created by miners. However, these coins can be divided into smaller parts (the smallest divisible amount is one hundred millionth of a bitcoin and is called a ‘satoshi’, after the founder of bitcoin).
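
The supply cap and the divisibility are simple enough to express in a couple of lines. A quick sketch (Python, purely illustrative):

    SATOSHIS_PER_BTC = 100_000_000   # 1 bitcoin = 100 million satoshis
    MAX_BTC = 21_000_000             # the hard cap written into the bitcoin protocol

    def btc_to_satoshis(btc):
        return round(btc * SATOSHIS_PER_BTC)

    print(btc_to_satoshis(0.00000001))    # 1 -> the smallest unit, one satoshi
    print(MAX_BTC * SATOSHIS_PER_BTC)     # 2,100,000,000,000,000 satoshis can ever exist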

Conventional currency has been based on gold or silver. Theoretically, you knew that if you handed over a dollar at the bank, you could get some gold back (although this didn’t actually work in practice). But bitcoin isn’t based on gold; it’s based on mathematics. To me this is absolutely fascinating.  Around the world, people are using software programs that follow a mathematical formula to produce bitcoins. The mathematical formula is freely available, so that anyone can check it. The software is also open source, meaning that anyone can look at it to make sure that it does what it is supposed to.
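
The “mathematical problem” that mining software solves is essentially a hashing puzzle: keep adjusting a number (a nonce) until the hash of the block data falls below a target, and anyone can re-run the hash to verify the answer. A toy sketch of that idea in Python (with a drastically simplified difficulty, not the real network rules):

    import hashlib

    def mine(block_data, difficulty=4):
        # Search for a nonce whose hash starts with `difficulty` zero hex digits
        # (a stand-in for the real network's target threshold).
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce, digest
            nonce += 1

    nonce, digest = mine("Alice pays Bob 0.5 BTC")
    print(nonce, digest)   # the proof of work: easy to check, costly to find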

SPECIFIC CHARACTERISTICS:

  1. It’s decentralized

The bitcoin network isn’t controlled by one central authority. Every machine that mines bitcoin and processes transactions makes up a part of the network, and the machines work together. That means that, in theory, one central authority can’t tinker with monetary policy and cause a meltdown – or simply decide to take people’s bitcoins away from them, as the European Central Bank decided to do in Cyprus in early 2013. And if some part of the network goes offline for some reason, the money keeps on flowing.

  2. It’s easy to set up

Conventional banks make you jump through hoops simply to open a bank account. Setting up merchant accounts for payment is another Kafkaesque task, beset by bureaucracy. However, you can set up a bitcoin address in seconds, no questions asked, and with no fees payable.

  3. It’s anonymous

Well, kind of. Users can hold multiple bitcoin addresses, and they aren’t linked to names, addresses, or other personally identifying information.

  4. It’s completely transparent

Bitcoin stores details of every single transaction that ever happened in the network in a huge version of a general ledger, called the blockchain. The blockchain tells all. If you have a publicly used bitcoin address, anyone can tell how many bitcoins are stored at that address; they just don’t know that it’s yours (see the sketch after this list). There are measures that people can take to make their activities more opaque on the bitcoin network, though, such as not using the same bitcoin addresses consistently, and not transferring lots of bitcoin to a single address.

  5. Transaction fees are minuscule

Your bank may charge you a £10 fee for international transfers. Bitcoin doesn’t.

  6. It’s fast

You can send money anywhere and it will arrive minutes later, as soon as the bitcoin network processes the payment.

  7. It’s non-repudiable

When your bitcoins are sent, there’s no getting them back, unless the recipient returns them to you. They’re gone forever.
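
As promised under item 4, here is a small illustrative sketch (Python, with made-up data rather than a real blockchain query) of the transparency point: given a public ledger of transactions, anyone can tally how many bitcoins sit at a given address without knowing who owns it.

    from collections import defaultdict

    # A toy public ledger: (from_address, to_address, amount) records.
    ledger = [
        ("coinbase", "1AliceXYZ", 12.5),
        ("1AliceXYZ", "1BobQRS", 2.0),
        ("1BobQRS", "1CarolTUV", 0.5),
    ]

    def balances(ledger):
        totals = defaultdict(float)
        for sender, receiver, amount in ledger:
            if sender != "coinbase":        # newly minted coins have no sender
                totals[sender] -= amount
            totals[receiver] += amount
        return dict(totals)

    print(balances(ledger))   # anyone can compute this; the people behind the addresses stay hidden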

WHERE TO BUY AND SELL

I definitely recommend you do your homework before buying Bitcoin because the value is a roller coaster in nature, but given below are several exchanges on which Bitcoin can be purchased or sold.  Good luck.

CONCLUSIONS:

Is Bitcoin a bubble? It’s a natural question to ask—especially after Bitcoin’s price shot up from $12,000 to $15,000 this past week.

Brent Goldfarb is a business professor at the University of Maryland, and William Deringer is a historian at MIT. Both have done research on the history and economics of bubbles, and they talked to Ars by phone this week as Bitcoin continues its surge.

Both academics saw clear parallels between the bubbles they’ve studied and Bitcoin’s current rally. Bubbles tend to be driven either by new technologies (like railroads in 1840s Britain or the Internet in the 1990s) or by new financial innovations (like the financial engineering that produced the 2008 financial crisis). Bitcoin, of course, is both a new technology and a major financial innovation.

“A lot of bubbles historically involve some kind of new financial technology the effects of which people can’t really predict,” Deringer told Ars. “These new financial innovations create enthusiasm at a speed that is greater than people are able to reckon with all the consequences.”

Neither scholar wanted to predict when the current Bitcoin boom would end. But Goldfarb argued that we’re seeing classic signs that often occur near the end of a bubble. The end of a bubble, he told us, often comes with “a high amount of volatility and a lot of excitement.”

Goldfarb expects that in the coming months we’ll see more “stories about people who got fabulously wealthy on bitcoin.” That, in turn, could draw in more and more novice investors looking to get in on the action. From there, some triggering event will start a panic that will lead to a market crash.

“Uncertainty of valuation is often a huge issue in bubbles,” Deringer told Ars. Unlike a stock or bond, Bitcoin pays no interest or dividends, making it hard to figure out how much the currency ought to be worth. “It is hard to pinpoint exactly what the fundamentals of Bitcoin are,” Deringer said.

That uncertainty has allowed Bitcoin’s value to soar 1,000-fold over the last five years. But it could also make the market vulnerable to crashes if investors start to lose confidence.

I would say travel at your own risk.

 


OKAY first, let us define “OPEN SOURCE SOFTWARE” as follows:

Open-source software (OSS) is computer software with its source-code made available with a license in which the copyright holder provides the rights to study, change, and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. The benefits include:

  • COST—Generally, open source software is free.
  • FLEXIBILITY—Computer specialists can alter the software to fit the needs of the program(s) they are writing.
  • FREEDOM—Generally, no issues with patents or copyrights.
  • SECURITY—The one caveat is combining open source software with embedded code, where compatibility issues can arise.
  • ACCOUNTABILITY—Once again, there are no issues with accountability, and the producers of the code are known.

A very detailed article written by Jacob Beningo offers seven (7) reasons for avoiding open source software like the plague.  Given below are his arguments.

REASON 1—LACKS TRACEABLE SOFTWARE DEVELOPMENT LIFE CYCLE–Open source software usually starts with an ingenious developer working out of their garage or basement, hoping to create code that is very functional and useful. Eventually, multiple developers with spare time on their hands get involved. The software evolves, but it doesn’t really follow a traceable design cycle or even follow best practices. These various developers implement what they want or push the code in the direction that meets their needs. The result is software that works in limited situations and circumstances, and users need to cross their fingers and pray that their needs and conditions match them.

REASON 2—DESIGNED FOR FUNCTIONALITY AND NOT ROBUSTNESS–Open source software is often written for functionality only, for example accessing and writing to an SD card or communicating over a USB connection. The issue here is that while the code functions, it generally is not robust and was never designed to anticipate issues.  While the software is free, developers can very quickly find that their open source software is merely functional and can’t stand up to real-world pressures. They will find themselves having to dig through unknown terrain, trying to figure out how best to improve the code or handle errors that weren’t expected by the original developers.

REASON 3—ACCIDENTALLY EXPOSING CONFIDENTIAL INTELLECTUAL PROPERTY–There are several different licensing schemes that open source software developers use. Some really do give away the farm; however, there are also licenses that require any modifications, or even associated software, to be released as open source. If close attention is not being paid, a developer could find themselves having to release confidential code and algorithms to the world. The “free” software just cost the company by revealing its code; or, if it wants to be protected, it now needs to spend money on attorney fees to make sure it isn’t giving it all away by using “free” software.

REASON 4—LACKING AUTOMATED AND/OR MANUAL TESTING–A formalized testing process, especially automated tests, is critical to ensuring that a code base is robust and has sufficient quality to meet its needs. I’ve seen open source Python projects that include automated testing, which is encouraging, but for low-level firmware and embedded systems we seem to still lag behind the rest of the software industry. Without automated tests, we have no way to know if integrating that open source component broke something in it that we won’t notice until we go to production.
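
To make Beningo’s point concrete, this is the kind of thin automated test you might wrap around an open source component before trusting it in production. The parse_frame routine below is a hypothetical stand-in for code pulled from an imagined open source library, not a real package:

    import unittest

    def parse_frame(raw):
        # Hypothetical open source routine brought into the project.
        if len(raw) < 3 or raw[0] != 0x7E:
            raise ValueError("bad frame")
        return {"length": raw[1], "payload": raw[2:]}

    class TestParseFrame(unittest.TestCase):
        def test_valid_frame(self):
            frame = parse_frame(bytes([0x7E, 2, 0xAB, 0xCD]))
            self.assertEqual(frame["length"], 2)

        def test_rejects_garbage(self):
            with self.assertRaises(ValueError):
                parse_frame(b"\x00\x00")

    if __name__ == "__main__":
        unittest.main()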

REASON 5—POOR DOCUMENTATION OR DOCUMENTATION THAT IS LACKING COMPLETELY–Documentation has been getting better among open source projects that have been around for a long time or that have strong commercial backing. Smaller projects though that are driven by individuals tend to have little to no documentation. If the open source code doesn’t have documentation, putting it into practice or debugging it is going to be a nightmare and more expensive than just getting commercial or industrial-grade software.

REASON 6—REAL-TIME SUPPORT IS LACKING–There are few things more frustrating than doing everything you can to get something to work or debugged and you just hit the wall. When this happens, the best way to resolve the issue is to get support. The problem with open source is that there is no guarantee that you will get the support you need in a timely manner to resolve any issues. Sure, there are forums and social media to request help but those are manned by people giving up their free time to help solve problems. If they don’t have the time to dig into a problem, or the problem isn’t interesting or is too complex, then the developer is on their own.

REASON 7—INTEGRATION IS NEVER AS EASY AS IT SEEMS–The website was found; the demonstration video was awesome. This is the component to use. Look at how easy it is! The source is downloaded and the integration begins. Months later, integration is still going on. What appeared easy quickly turned complex because the same platform or toolchain wasn’t being used. “Minor” modifications had to be made. The rabbit hole just keeps getting deeper but after this much time has been sunk into the integration, it cannot be for naught.

CONCLUSIONS:

I personally am by no means completely against open source software. It’s been extremely helpful and beneficial in certain circumstances. I have used open source, namely Java, as embedded software for several programs I have written.   It’s important, though, not to use software just because it’s free.  Developers need to recognize their requirements, needs, and the level of robustness required for their product, and appropriately develop or source software that meets those needs rather than blindly selecting software because it’s “free.”  IN OTHER WORDS—BE CAREFUL!


Elon Musk has warned again about the dangers of artificial intelligence, saying that it poses “vastly more risk” than the apparent nuclear capabilities of North Korea do. I feel sure Mr. Musk is talking about the long-term dangers and not short-term realities.

This is not the first time Musk has stated that AI could potentially be one of the most dangerous international developments. He said in October 2014 that he considered it humanity’s “biggest existential threat”, a view he has repeated several times while making investments in AI startups and organizations, including OpenAI, to “keep an eye on what’s going on”.  As he tweeted: “Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA would not make flying safer. They’re there for good reason.”

Musk again called for regulation, having previously done so directly to US governors at their annual national meeting in Providence, Rhode Island.  Musk’s tweets coincide with the testing of an AI designed by OpenAI to play the multiplayer online battle arena (MOBA) game Dota 2, which successfully managed to win all its 1-v-1 games at the International Dota 2 championships against many of the world’s best players competing for a $24.8m (£19m) prize fund.

The AI displayed the ability to predict where human players would deploy forces and improvise on the spot, in a game where sheer speed of operation does not correlate with victory, meaning the AI was simply better, not just faster than the best human players.

Musk backed the non-profit AI research company OpenAI in December 2015, taking up a co-chair position. OpenAI’s goal is to develop AI “in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”. But it is not the first group to take on human players in a gaming scenario. Google’s DeepMind AI outfit, in which Musk was an early investor, beat the world’s best players in the board game Go and has its sights set on conquering the real-time strategy game StarCraft II.

Musk envisions a situation like the one found in the movie “I, Robot”, with humanoid robotic systems that can think for themselves. Great movie—but the time-frame was set in a future Earth (2035 A.D.) where robots are common assistants and workers for their human owners. It is the story of “robotophobic” Chicago Police Detective Del Spooner’s investigation into the murder of Dr. Alfred Lanning, who works at U.S. Robotics.  Let me clue you in—the robot did it.

I am sure this audience is familiar with Isaac Asimov’s Three Laws of Robotics.

  • First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov’s three laws suggest there will be no “Rise of the Machines” like the very popular movie depicts.   For the three laws to be null and void, we would have to enter a world of “singularity”.  The term singularity describes the moment when a civilization changes so much that its rules and technologies are incomprehensible to previous generations. Think of it as a point-of-no-return in history. Most thinkers believe the singularity will be jump-started by extremely rapid technological and scientific changes. These changes will be so fast, and so profound, that every aspect of our society will be transformed, from our bodies and families to our governments and economies.

A good way to understand the singularity is to imagine explaining the internet to somebody living in the year 1200. Your frames of reference would be so different that it would be almost impossible to convey how the internet works, let alone what it means to our society. You are on the other side of what seems like a singularity to our person from the Middle Ages. But from the perspective of a future singularity, we are the medieval ones. Advances in science and technology mean that singularities might happen over periods much shorter than 800 years. And nobody knows for sure what the hell they’ll bring.

Author Ken MacLeod has a character describe the singularity as “the Rapture for nerds” in his novel The Cassini Division, and the turn of phrase stuck, becoming a popular way to describe the singularity. (Note: MacLeod didn’t actually coin this phrase – he says he got the phrase from a satirical essay in an early-1990s issue of Extropy.) Catherynne Valente argued recently for an expansion of the term to include what she calls “personal singularities,” moments where a person is altered so much that she becomes unrecognizable to her former self. This definition could include post-human experiences. Post-human (my words) would describe a robotic future.

Could this happen?  Elon Musk has an estimated net worth of $13.2 billion, making him the 87th richest person in the world, according to Forbes. His fortune owes much to his stake in Tesla Motors Inc. (TSLA), of which he remains CEO and chief product architect. Musk made his first fortune as a cofounder of PayPal, the online payments system that was sold to eBay for $1.5 billion in 2002.  In other words, he is no dummy.

I think it is very wise to listen to people like Musk and heed any and all warnings they may give. The Executive, Legislative and Judicial branches of our country are too busy trying to get reelected to bother with such warnings and when “catch-up” is needed, they always go overboard with rules and regulations.  Now is the time to develop proper and binding laws and regulations—when the technology is new.

THEY GOT IT ALL WRONG

November 15, 2017


We all have heard that necessity is the mother of invention.  There have been wonderful advances in technology since the Industrial Revolution but some inventions haven’t really captured the imagination of many people, including several of the smartest people on the planet.

Consider, for example, this group: Thomas Edison, Lord Kelvin, Steve Ballmer, Robert Metcalfe, and Albert Augustus Pope. Despite backgrounds of amazing achievement and even brilliance, all share the dubious distinction of making some of the worst technological predictions in history and I mean the very worst.

Had they been right, history would be radically different and today, there would be no airplanes, moon landings, home computers, iPhones, or Internet. Fortunately, they were wrong.  And that should tell us something: Even those who shape the future can’t always get a handle on it.

Let’s take a look at several forecasts that were most publicly, painfully incorrect. From Edison to Kelvin to Ballmer, read on for 10 of the worst technological predictions in history.

“Heavier-than-air flying machines are impossible.” William Thomson (often referred to as Lord Kelvin), mathematical physicist and engineer, President, Royal Society, in 1895.

A prolific scientific scholar whose name is commonly associated with the history of math and science, Lord Kelvin was nevertheless skeptical about flight. In retrospect, it is often said that Kelvin was quoted out of context, but his aversion to flying machines was well known. At one point, he is said to have publicly declared that he “had not the smallest molecule of faith in aerial navigation.” OK, go tell that to Wilbur and Orville.

“Fooling around with alternating current is just a waste of time. No one will use it, ever.” Thomas Edison, 1889.

Thomas Edison’s brilliance was unassailable. A prolific inventor, he earned 1,093 patents in areas ranging from electric power to sound recording to motion pictures and light bulbs. But he believed that alternating current (AC) was unworkable and its high voltages were dangerous. As a result, he battled those who supported the technology. His so-called “war of currents” came to an end, however, when AC grabbed a larger market share, and he was forced out of the control of his own company.

 

“Computers in the future may weigh no more than 1.5 tons.” Popular Mechanics Magazine, 1949.

The oft-repeated quotation, which has virtually taken on a life of its own over the years, is actually condensed. The original quote was: “Where a calculator like the ENIAC today is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and perhaps weigh only 1.5 tons.” Stated either way, though, the quotation delivers a clear message: Computers are mammoth machines, and always will be. Prior to the emergence of the transistor as a computing tool, no one, including Popular Mechanics, foresaw the incredible miniaturization that was about to begin.

 

“Television won’t be able to hold on to any market it captures after the first six months. People will soon get tired of staring at a plywood box every night.” Darryl Zanuck, 20th Century Fox, 1946.

Hollywood film producer Darryl Zanuck earned three Academy Awards for Best Picture, but proved he had little understanding of the tastes of Americans when it came to technology. Television provided an alternative to the big screen and a superior means of influencing public opinion, despite Zanuck’s dire predictions. Moreover, the technology didn’t wither after six months; it blossomed. By the 1950s, many homes had TVs. In 2013, 79% of the world’s households had them.

 

“I predict the Internet will go spectacularly supernova and in 1996 catastrophically collapse.” Robert Metcalfe, founder of 3Com, in 1995.

An MIT-educated electrical engineer who co-invented Ethernet and founded 3Com, Robert Metcalfe is a holder of the National Medal of Technology, as well as an IEEE Medal of Honor. Still, he apparently was one of many who failed to foresee the unbelievable potential of the Internet. Today, 47% of the 7.3 billion people on the planet use the Internet. Metcalfe is currently a professor of innovation and Murchison Fellow of Free Enterprise at the University of Texas at Austin.

“There’s no chance that the iPhone is going to get any significant market share.” Steve Ballmer, former CEO, Microsoft Corp., in 2007.

A magna cum laude Harvard math graduate with an estimated $33 billion in personal wealth, Steve Ballmer had an amazing tenure at Microsoft. Under his leadership, Microsoft’s annual revenue surged from $25 billion to $70 billion, and its net income jumped 215%. Still, his insights failed him when it came to the iPhone. Apple sold 6.7 million iPhones in its first five quarters, and by the end of fiscal year 2010, its sales had grown to 73.5 million.

 

 

“After the rocket quits our air and starts on its longer journey, its flight would be neither accelerated nor maintained by the explosion of the charges it then might have left.” The New York Times, 1920.

The New York Times was sensationally wrong when it assessed the future of rocketry in 1920, but few people of the era were in a position to dispute their declaration. Forty-one years later, astronaut Alan Shepard was the first American to enter space and 49 years later, Neil Armstrong set foot on the moon, laying waste to the idea that rocketry wouldn’t work. When Apollo 11 was on its way to the moon in 1969, the Times finally acknowledged the famous quotation and amended its view on the subject.

“With over 15 types of foreign cars already on sale here, the Japanese auto industry isn’t likely to carve out a big share of the market for itself.” Business Week, August 2, 1968.

Business Week seemed to be on safe ground in 1968, when it predicted that Japanese market share in the auto industry would be minuscule. But the magazine’s editors underestimated the American consumer’s growing distaste for the domestic concept of planned obsolescence. By the 1970s, Americans were flocking to Japanese dealerships, in large part because Japanese manufacturers made inexpensive, reliable cars. That trend has continued over the past 40 years. In 2016, Japanese automakers built more cars in the US than Detroit did.

“You cannot get people to sit over an explosion.” Albert Augustus Pope, founder, Pope Manufacturing, in the early 1900s.

Albert Augustus Pope thought he saw the future when he launched production of electric cars in Hartford, CT, in 1897. Listening to the quiet performance of the electrics, he made his now-famous declaration about the future of the internal combustion engine. Despite his preference for electrics, however, Pope also built gasoline-burning cars, laying the groundwork for future generations of IC engines. In 2010, there were more than one billion vehicles in the world, the majority of which used internal combustion propulsion.

 

 

 

“I have traveled the length and breadth of this country and talked to the best people, and I can assure you that data processing is a fad that won’t last out the year.” Editor, Prentice Hall Books, 1957.

The concept of data processing was a head-scratcher in 1957, especially for the unnamed Prentice Hall editor who uttered the oft-quoted prediction of its demise. The prediction has since been used in countless technical presentations, usually as an example of our inability to see the future. Amazingly, the editor’s forecast has recently begun to look even worse, as Internet of Things users search for ways to process the mountains of data coming from a new breed of connected devices. By 2020, experts predict there will be 30 to 50 billion such connected devices sending their data to computers for processing.

CONCLUSIONS:

Last but not least, Charles Holland Duell in 1898 was appointed as the United States Commissioner of Patents, and held that post until 1901.  In that role, he is famous for purportedly saying “Everything that can be invented has been invented.”  Well Charlie, maybe not.

MONEY AND BANK SAFETY

November 1, 2017


Do you ever wonder if the money, hard-earned money, you earn every week or month is safe?

According to the FDIC:  “The basic FDIC coverage is good for up to $250,000 per depositor per bank. If you have more than that in a failed bank, the FDIC might choose to cover your losses, but there is no promise to do so.” Sep 13, 2016

The Federal Deposit Insurance Corporation (FDIC) preserves and promotes public confidence in the U.S. financial system by insuring deposits in banks and thrift institutions for up to $250,000 per depositor, per insured bank, for each ownership category, and by identifying, monitoring and addressing risks to the deposit insurance funds.  This is the law.  Good to know.
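
As a rough worked example (my own simplification of the rule quoted above, ignoring joint accounts and other ownership-category subtleties), the insured and uninsured portions of a deposit split like this:

    FDIC_LIMIT = 250_000   # per depositor, per insured bank, per ownership category

    def fdic_split(balance, limit=FDIC_LIMIT):
        insured = min(balance, limit)
        uninsured = max(balance - limit, 0)
        return insured, uninsured

    print(fdic_split(180_000))   # (180000, 0)       fully covered
    print(fdic_split(400_000))   # (250000, 150000)  the excess is at risk if the bank fails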

The following list will indicate that our banking system has experienced some “hard times” in the recent past.  Let’s take a look at bank failures in this country and then we will look at the safest countries relative to bank and customer money.

BANK FAILURES:

The following list is taken from the web site: Bankrate.com.

YEAR                      NUMBER OF BANK FAILURES

2016 (Estimated)                          1

2015 (Estimated)                          8

2014 (Estimated)                         18

2013 (Estimated)                         14

2012 (Estimated)                         51

2011 (Official)                          92

2010 (Official)                         157

2009 (Official)                         140

As you can see, from 2009 through 2016 the figures above total four hundred and eighty-one (481) bank failures in this country.
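
A quick arithmetic check of the table above (Python):

    failures = {2016: 1, 2015: 8, 2014: 18, 2013: 14,
                2012: 51, 2011: 92, 2010: 157, 2009: 140}
    print(sum(failures.values()))   # 481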

Now, the Survey of Consumer Finances is conducted and published every three years, most recently in 2013. According to the Federal Reserve, “the survey data include information on families’ balance sheets, pensions, income, and demographic characteristics.” Data from previous SCF years show significant changes in checking account balances since 2001. Our analysis of average checking account balances based on the same data follows:

YEAR     AVERAGE CHECKING BALANCE

2013                       $9,132

2010                       $7,036

2007                       $6,203

2004                       $7,382

2001                       $6,404

As you can see, most people are definitely covered if and when their individual bank fails.  That raises the question:  what are the safest countries in which to deposit money?  Let’s take a look. Some may be very surprising.

SAFEST COUNTRIES IN WHICH TO BANK:

  1. Czech Republic — The Czech banking sector is unusual in that foreign-owned lenders dominate the industry, but consumers don’t seem to mind, ranking them the 14th safest in the world.
  2. Guatemala — The densely populated Central American nation of 15.5 million people has three key players in its banking system — Banco Industrial, Banco G&T Continental, and Banco de Desarrollo Rural. All three are seen as being fairly sound, according to the WEF’s survey.
  3. Luxembourg — It’s no surprise Luxembourg scores highly, as the country is famous for its financial sector. Its Banque et Caisse d’Épargne de l’État is often cited as one of the safest on earth.
  4. Panama — As the country has no central bank, Panamanian lenders are run conservatively, with capital ratios almost twice the required minimum on average. Traditionally seen as a tax haven, the country has made substantial strides to shake off that reputation since the financial crisis.
  5. Sweden — Although Swedish lenders are being squeezed by the Riksbank’s negative interest rate policy, Swedish banks are still among the safest in the world, according to the WEF.
  6. Chile — In July, ratings agency Fitch cut the outlook of the country’s banking system to negative, based on “weakening asset quality and profitability,” but that hasn’t spooked Chileans, according to the WEF.
  7. Singapore — Singapore is renowned as one of the world’s great financial centres, and the soundness of its banking sector reflects that.
  8. Norway — As an oil-reliant economy, Norway has faced serious issues in recent years, and in August, its banking system had its outlook cut to negative by Moody’s. However, the country’s banks remain very sound, the WEF’s survey suggests.
  9. Hong Kong — Another global financial centre, Hong Kong is home to arms of most of the world’s biggest banks, and some of the world’s safest financial institutions.
  10. Australia — A small group of four major banks divide up most of Australia’s banking sector, while foreign banks are tightly regulated, making sure the system is sturdy.
  11. New Zealand — New Zealand’s banking sector is dominated by a group of five financial players. Decent profits and growth without too much competition has seen the sector thrive, although it slips from second last year to fourth in 2016.
  12. Canada — Canadian banks have long been a byword for stability. The country has had only two small regional bank failures in almost 100 years, and had zero failures during the Great Depression of the 1930s. Last year, the country’s banks were seen as the safest on earth, so confidence has clearly slipped a little.
  13. South Africa — South Africa’s so-called ‘Big Four’ — Standard Bank, FirstRand Bank, Nedbank, and Barclays Africa — dominate the country’s consumer sector, and are widely seen to be pretty safe, with only one other nation scoring higher.
  14. Finland — Finland’s banking sector is dominated by co-operative and savings banks, which take little risk. The country’s central bank governor, Erkki Liikanen, has led the way on proposals to split investment banking and deposit-taking activities at European lenders. Ranked fourth in 2015’s list, Finland’s banks have got even safer this year.

According to the same company that made the list above, the United States ranked thirty-sixth (36th) in depositor safety.

CONCLUSIONS:

I’m definitely not saying run out tomorrow and transfer all of your money to a bank located in one of the countries above, but really, can’t we do better as a country?  Can’t the FED just get out of the way?  Regulations and banking philosophy are to blame for the failures given above—not to mention plain OLE GREED.  REMEMBER WELLS FARGO?

 

AUGMENTED REALITY (AR)

October 13, 2017


Depending on the location, you can ask just about anybody to give a definition of Virtual Reality (VR) and they will take a stab at it. This is because the gaming and entertainment segments of our population have used VR as a new tool to promote games such as SuperHot VR, Rock Band VR, House of the Dying Sun, Minecraft VR, Robo Recall, and others.  If you ask them about Augmented Reality, or AR, they will probably give you the definition of VR or nothing at all.

Augmented reality, sometimes called Mixed Reality, is a technology that merges real-world objects or the environment with virtual elements generated by sensory input devices for sound, video, graphics, or GPS data.  Unlike VR, which completely replaces the real world with a virtual world, AR operates in real time and is interactive with objects found in the environment, providing an overlaid virtual display over the real one.
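
As a concrete illustration of “an overlaid virtual display over the real one,” here is a minimal sketch assuming the opencv-python package is installed. It simply stamps a virtual label onto live camera frames at a fixed screen position; real AR systems add tracking, sensor fusion, and 3-D registration on top of this basic idea.

    import cv2   # assumes the opencv-python package is installed

    cap = cv2.VideoCapture(0)        # the default camera supplies the "real world"
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Overlay a virtual element on top of the real-world frame.
        cv2.putText(frame, "Virtual overlay: status OK", (30, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("AR sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()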

While popularized by gaming, AR technology has shown a prowess for bringing an interactive digital world into a person’s perceived real world, where the digital aspect can reveal more information about a real-world object that is seen in reality.  This is basically what AR strives to do.  We are going to take a look at several very real applications of AR to indicate the possibilities of this technology.

  • Augmented Reality has found a home in healthcare, aiding preventative measures for professionals to receive information relative to the status of patients. Healthcare giant Cigna recently launched a program called BioBall that uses Microsoft HoloLens technology in an interactive game to test for blood pressure and body mass index, or BMI. Patients hold a light, medium-sized ball in their hands in a one-minute race to capture all the images that flash on the screen in front of them. The BioBall senses a player’s heartbeat. At the University of Maryland’s Augmentarium virtual and augmented reality laboratory, the school is using AR in healthcare to improve how ultrasound is administered to a patient.  Physicians wearing an AR device can look at both a patient and the ultrasound device while images flash on the “hood” of the AR device itself.
  • AR is opening up new methods to teach young children a variety of subjects they might not be interested in learning or, in some cases, help those who have trouble in class catching up with their peers. The University of Helsinki’s AR program helps struggling kids learn science by enabling them to virtually interact with the molecule movement in gases, gravity, sound waves, and airplane wind physics.   AR creates new types of learning possibilities by transporting “old knowledge” into a new format.
  • Projection-based AR is emerging as a new way to cast virtual elements into the real world without the use of bulky headgear or glasses. That is why AR is becoming a very popular alternative for use in the office or during meetings. Startups such as Lampix and Lightform are working on projection-based augmented reality for use in the boardroom, retail displays, hospitality rooms, digital signage, and other applications.
  • In Germany, a company called FleetBoard is in the development phase for application software that tracks logistics for truck drivers to help with the long series of pre-departure checks before setting off cross-country or for local deliveries. The FleetBoard Vehicle Lens app uses a smartphone and software to provide live image recognition to identify the truck’s number plate.  The relevant information is superimposed in AR, thus speeding up the pre-departure process.
  • Last winter, Delft University of Technology in the Netherlands started working with first responders in using AR as a tool in crime scene investigation. The handheld AR system allows on-scene investigators and remote forensic teams to minimize the potential for site contamination.  This could be extremely helpful in finding traces of DNA, preserving evidence, and getting medical help from an outside source.
  • Sandia National Laboratories is working with AR as a tool to improve security training for users who are protecting vulnerable areas such as nuclear weapons or nuclear materials. The physical security training helps guide users through real-world examples such as theft or sabotage in order to be better prepared when an event takes place.  The training can be accomplished remotely and cheaply using standalone AR headsets.
  • In Finland, the VTT Technical Research Center recently developed an AR tool for the European Space Agency (ESA) for astronauts to perform real-time equipment monitoring in space. AR prepares astronauts with in-depth practice by coordinating the activities with experts in a mixed-reality situation.
  • In the U.S., Daqri International uses computer vision for industrial AR to enable data visualization while working on machinery or in a warehouse. These glasses and headsets from Daqri display project data, tasks that need to be completed, and potential problems with machinery, or even where an object needs to be placed or repaired.

CONCLUSIONS:

Augmented Reality merges real-world objects with virtual elements generated by sensory input devices to provide great advantages to the user.  No longer is gaming and entertainment the sole objective of its use.  This brings to life a “new normal” for professionals seeking more and better technology to provide solutions to real-world problems.

AMAZING GRACE

October 3, 2017


There are many people responsible for the revolutionary development and commercialization of the modern-day computer.  Just a few of those names are given below, many of whom you have probably never heard of.  Let’s take a look.

COMPUTER REVOLUTIONARIES:

  • Howard Aiken–Aiken was the original conceptual designer behind the Harvard Mark I computer in 1944.
  • Grace Murray Hopper–Hopper coined the term “debugging” in 1947 after removing an actual moth from a computer. Her ideas about machine-independent programming led to the development of COBOL, one of the first modern programming languages. On top of it all, the Navy destroyer USS Hopper is named after her.
  • Ken Thompson and Dennis Ritchie–These guys invented Unix in 1969, the importance of which CANNOT be overstated. Consider this: your fancy Apple computer relies almost entirely on their work.
  • Doug and Gary Carlston–This team of brothers co-founded Brøderbund Software, a successful gaming company that operated from 1980-1999. In that time, they were responsible for churning out or marketing revolutionary computer games like Myst and Prince of Persia, helping bring computing into the mainstream.
  • Ken and Roberta Williams–This husband and wife team founded On-Line Systems in 1979, which later became Sierra Online. The company was a leader in producing graphical adventure games throughout the advent of personal computing.
  • Seymour Cray–Cray was a supercomputer architect whose computers were the fastest in the world for many decades. He set the standard for modern supercomputing.
  • Marvin Minsky–Minsky was a professor at MIT and oversaw the AI Lab, a hotspot of hacker activity, where he let prominent programmers like Richard Stallman run free. Were it not for his open-mindedness, programming skill, and ability to recognize that important things were taking place, the AI Lab wouldn’t be remembered as the talent incubator that it is.
  • Bob Albrecht–He founded the People’s Computer Company and developed a sincere passion for encouraging children to get involved with computing. He’s responsible for ushering in innumerable new young programmers and is one of the first modern technology evangelists.
  • Steve Dompier–At a time when computer speech was just barely being realized, Dompier made his computer sing. It was a trick he unveiled at the first meeting of the Homebrew Computer Club in 1975.
  • John McCarthy–McCarthy invented Lisp, the second-oldest high-level programming language that’s still in use to this day. He’s also responsible for bringing mathematical logic into the world of artificial intelligence — letting computers “think” by way of math.
  • Doug Engelbart–Engelbart is most noted for inventing the computer mouse in the mid-1960s, but he’s made numerous other contributions to the computing world. He created early GUIs and was even a member of the team that developed the now-ubiquitous hypertext.
  • Ivan Sutherland–Sutherland received the prestigious Turing Award in 1988 for inventing Sketchpad, the predecessor to the type of graphical user interfaces we use every day on our own computers.
  • Tim Paterson–He wrote QDOS, an operating system that he sold to Bill Gates in 1980. Gates rebranded it as MS-DOS, selling it to the point that it became the most widely-used operating system of the day. (How ‘bout them apples?)
  • Dan Bricklin–He’s “The Father of the Spreadsheet.” Working in 1979 with Bob Frankston, he created VisiCalc, a predecessor to Microsoft Excel. It was the killer app of the time — people were buying computers just to run VisiCalc.
  • Bob Kahn and Vint Cerf–Prolific internet pioneers, these two teamed up to build the Transmission Control Protocol and the Internet Protocol, better known as TCP/IP. These are the fundamental communication technologies at the heart of the Internet.
  • Niklaus Wirth–Wirth designed several programming languages, but is best known for creating Pascal. He won a Turing Award in 1984 for “developing a sequence of innovative computer languages.”

ADMIRAL GRACE MURRAY HOPPER:

At this point, I want to highlight Admiral Grace Murray Hopper, or “Amazing Grace” as she is called in the computer world and the United States Navy.

Born in New York City in 1906, Grace Hopper joined the U.S. Navy during World War II and was assigned to program the Mark I computer. She continued to work in computing after the war, leading the team that created the first computer language compiler, which led to the popular COBOL language. She resumed active naval service at the age of 60, becoming a rear admiral before retiring in 1986. Hopper died in Virginia in 1992.

Born Grace Brewster Murray in New York City on December 9, 1906, Grace Hopper studied math and physics at Vassar College. After graduating from Vassar in 1928, she proceeded to Yale University, where, in 1930, she received a master’s degree in mathematics. That same year, she married Vincent Foster Hopper, becoming Grace Hopper (a name that she kept even after the couple’s 1945 divorce). Starting in 1931, Hopper began teaching at Vassar while also continuing to study at Yale, where she earned a Ph.D. in mathematics in 1934—becoming one of the first few women to earn such a degree.

After the war, Hopper remained with the Navy as a reserve officer. As a research fellow at Harvard, she worked with the Mark II and Mark III computers. She was at Harvard when a moth was found to have shorted out the Mark II, and is sometimes given credit for the invention of the term “computer bug”—though she didn’t actually author the term, she did help popularize it.

Hopper retired from the Naval Reserve in 1966, but her pioneering computer work meant that she was recalled to active duty—at the age of 60—to tackle standardizing communication between different computer languages. She would remain with the Navy for 19 years. When she retired in 1986, at age 79, she was a rear admiral as well as the oldest serving officer in the service.

Saying that she would be “bored stiff” if she stopped working entirely, Hopper took another job post-retirement and stayed in the computer industry for several more years. She was awarded the National Medal of Technology in 1991—becoming the first female individual recipient of the honor. At the age of 85, she died in Arlington, Virginia, on January 1, 1992. She was laid to rest in the Arlington National Cemetery.

CONCLUSIONS:

In 1997, the guided missile destroyer USS Hopper was commissioned by the Navy in San Francisco. In 2004, the University of Missouri honored Hopper with a computer museum on their campus, dubbed “Grace’s Place.” On display are early computers and computer components to educate visitors on the evolution of the technology. In addition to her programming accomplishments, Hopper’s legacy includes encouraging young people to learn how to program. The Grace Hopper Celebration of Women in Computing Conference is a technical conference that encourages women to become part of the world of computing, while the Association for Computing Machinery offers a Grace Murray Hopper Award. Additionally, on her birthday in 2013, Hopper was remembered with a “Google Doodle.”

In 2016, Hopper was posthumously honored with the Presidential Medal of Freedom by Barack Obama.

Who said women could not “do” STEM (Science, Technology, Engineering and Mathematics)?
