LIBRARY OF CONGRESS

July 22, 2017


About two weeks ago I visited our Chattanooga Hamilton County Bicentennial Public Library.  The library is right downtown and performs a great service to the citizens of the tri-state area—or at one time did.  Let me explain.  I needed to check out a book on Product Lifecycle Management (PLM) for a course I’m writing for PDHonline.com.  PDHonline is an online publisher providing continuing education units (CEUs) for individuals needing twelve (12) or twenty-four (24) credit units per year.  Enough of that.

The science and technical material has always been on the second floor, providing a wealth of information for gear-heads like me.  At one time, the library maintained up-to-date information on most subjects, technical and otherwise.  I have been told in times past: “if we don’t have it—we can order it for you”.  I was absolutely amazed at what I found.  The floor was almost vacant.  All of the technical books and material were gone.  There were no stacks—no books—no periodicals providing monthly information.  You could have turned the second floor into a bowling alley with room for a bar and grill.  (I suggested that to the librarian on my way out.)  I went over to the desk to inquire as to where all the books were.  All the technical “stuff”.  I was told, “The Public Library is now focusing on cultural information and is no longer a research library.  You can find most of that information online.  Besides, those who visit the library on a regular basis voted to eliminate our research capability.”  I inquired, “You mean to tell me I can check out ‘Fifty Shades of Grey’ but can’t find information on ANY technical subject?”  I am assuming with that comment I am no longer on her Christmas card list.  It did not go over very well, and by the way, I did not get a vote.  What genius made that decision anyway?  That statement also went over like a lead balloon.  I left.

I decided to take a look at what complexities might be involved in getting a library card from the Library of Congress.  That led me to obtaining information on the Library.  This is what I found.

HISTORY:

The Library of Congress was established by an act of Congress in 1800.  President John Adams signed a bill providing for the transfer of the seat of government from Philadelphia to the new capital city of Washington. The legislation described a reference library for Congress only, containing “such books as may be necessary for the use of Congress – and for putting up a suitable apartment for containing them therein…”

Established with $5,000 appropriated by the legislation, the original library was housed in the new Capitol until August 1814, when invading British troops set fire to the Capitol Building, burning and pillaging the contents of the small library.  Within a month, retired President Thomas Jefferson offered his personal library as a replacement. Jefferson had spent fifty (50) years accumulating books, “putting by everything which related to America, and indeed whatever was rare and valuable in every science”; his library was considered to be one of the finest in the United States.  In offering his collection to Congress, Jefferson anticipated controversy over the nature of his collection, which included books in foreign languages and volumes of philosophy, science, literature, and other topics not normally viewed as part of a legislative library. He wrote, “I do not know that it contains any branch of science which Congress would wish to exclude from their collection; there is, in fact, no subject to which a Member of Congress may not have occasion to refer.”

In January 1815, Congress accepted Jefferson’s offer, appropriating $23,950 for his 6,487 books, and the foundation was laid for a great national library. The Jeffersonian concept of universality, the belief that all subjects are important to the library of the American legislature, is the philosophy and rationale behind the comprehensive collecting policies of today’s Library of Congress.

Ainsworth Rand Spofford, Librarian of Congress from 1864 to 1897, applied Jefferson’s philosophy on a grand scale and built the Library into a national institution. Spofford was responsible for the copyright law of 1870, which required all copyright applicants to send to the Library two copies of their work. This resulted in a flood of books, pamphlets, maps, music, prints, and photographs. Facing a shortage of shelf space at the Capitol, Spofford convinced Congress of the need for a new building, and in 1873 Congress authorized a competition to design plans for the new Library.

In 1886, after many proposals and much controversy, Congress authorized construction of a new Library building in the style of the Italian Renaissance in accordance with a design prepared by Washington architects John L. Smithmeyer and Paul J. Pelz.  The Congressional authorization was successful because of the hard work of two key Senators: Daniel W. Voorhees (Indiana), who served as chairman of the Joint Committee from 1879 to 1881, and Justin S. Morrill (Vermont), chairman of the Senate Committee on Buildings and Grounds.

In 1888, General Thomas Lincoln Casey, chief of the Army Corps of Engineers, was placed in charge of construction. His chief assistant was Bernard R. Green, who was intimately involved with the building until his death in 1914. Beginning in 1892, a new architect, Edward Pearce Casey, the son of General Casey, began to supervise the interior work, including sculptural and painted decoration by more than 50 American artists. When the Library of Congress building opened its doors to the public on November 1, 1897, it was hailed as a glorious national monument and “the largest, the costliest, and the safest” library building in the world.

FACTS AND INFORMATION:

Today’s Library of Congress is an unparalleled world resource. The collection of more than 164 million items includes more than 38.6 million cataloged books and other print materials in 470 languages; more than 70 million manuscripts; the largest rare book collection in North America; and the world’s largest collection of legal materials, films, maps, sheet music and sound recordings.

In fiscal year 2016 (October 2015 to September 2016), the Library of Congress …

  • Responded to more than 1 million reference requests from Congress, the public and other federal agencies and delivered approximately 18,380 volumes from the Library’s collections to congressional offices
  • Registered 414,269 claims to copyright through its U.S. Copyright Office
  • Circulated nearly 22 million copies of Braille and recorded books and magazines to more than 800,000 blind and physically handicapped reader accounts
  • Circulated more than 997,000 items for use inside and outside the Library
  • Preserved more than 10.5 million items from the Library’s collections
  • Recorded a total of 164,403,119 items in the collections, including:
      • 24,189,688 cataloged books in the Library of Congress classification system
      • 14,660,079 items in the non-classified print collections, including books in large type and raised characters, incunabula (books printed before 1501), monographs and serials, bound newspapers, pamphlets, technical reports, and other printed material
      • 125,553,352 items in the non-classified (special) collections, including:
          • 3,670,573 audio materials (discs, tapes, talking books, and other recorded formats)
          • 70,685,319 manuscripts
          • 5,581,756 maps
          • 17,153,167 microforms
          • 1,809,351 moving images
          • 8,189,340 items of sheet music
          • 15,071,355 visual materials, including:
              • 14,290,385 photographs
              • 107,825 posters
              • 673,145 prints and drawings
          • 3,392,491 other items (including machine-readable items)
  • Welcomed nearly 1.8 million onsite visitors and recorded 92.8 million visits and more than 454 million page views on the Library’s web properties
  • Employed 3,149 permanent staff members
  • Operated with a total fiscal 2016 appropriation of $642.04 million, including the authority to spend $42.13 million in receipts

I think anyone would admit, 2016 was a big year.  If we look at the library itself, we see the following grand structure inside and out:

As you might expect, the building itself is very imposing.

This is one view of the rotunda and the layout of the reading desks.

A very creative layout, highlighting the arrangement of the desks in a circular pattern.

The reading desks from ground level.

CONCLUSIONS:

I intend to apply for a library card to the Library of Congress only because they have a mail-order arrangement any citizen and non-governmental type can use.  Better than buying book after book that probably will not be read more than once.  The process is not that difficult and the paperwork is fairly straightforward, at least for the federal government.


Various definitions of product lifecycle management, or PLM, have been offered over the years, but basically: product lifecycle management is the process of managing the entire lifecycle of a product from inception, through engineering design and manufacture, to service and disposal.  PLM integrates people, data, processes, and business systems and provides a product information backbone for companies and their extended enterprise.
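
To make that “product information backbone” idea concrete, here is a minimal sketch in Python of a product record that carries its documents through the lifecycle stages named above.  The class and field names are my own invention for illustration, not any vendor’s PLM schema.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List

    class Stage(Enum):
        """Lifecycle stages, from inception to disposal."""
        CONCEPT = 1
        DESIGN = 2
        MANUFACTURE = 3
        SERVICE = 4
        DISPOSAL = 5

    @dataclass
    class Document:
        title: str    # e.g. a CAD model, test report, or service bulletin
        stage: Stage  # the lifecycle stage that produced it

    @dataclass
    class ProductRecord:
        """One product's central record: current stage plus every document."""
        name: str
        stage: Stage = Stage.CONCEPT
        documents: List[Document] = field(default_factory=list)

        def attach(self, title: str) -> None:
            """File a document against the current lifecycle stage."""
            self.documents.append(Document(title, self.stage))

        def advance(self) -> None:
            """Move the product to the next lifecycle stage."""
            if self.stage is not Stage.DISPOSAL:
                self.stage = Stage(self.stage.value + 1)

    # One product accumulating documentation across its life:
    widget = ProductRecord("Widget-100")
    widget.attach("concept_sketch.pdf")
    widget.advance()                 # CONCEPT -> DESIGN
    widget.attach("widget_100.cad")
    print(widget.stage, len(widget.documents))  # Stage.DESIGN 2

The point of the sketch is the single record: everyone working on the product files against, and reads from, the same backbone.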

In recent years, great emphasis has been placed on disposal of a product after its service life has ended.  How to get rid of a product or component is extremely important.  Disposal methodology is covered by the RoHS standards for the European Community.  If you sell into the EU, you will have to designate proper disposal.  Dumping in a landfill is no longer appropriate.

Since this course deals with the application of PLM to industry, we will now look at various industry definitions.

Industry Definitions

“PLM is a strategic business approach that applies a consistent set of business solutions in support of the collaborative creation, management, dissemination, and use of product definition information across the extended enterprise, spanning from product concept to end of life and integrating people, processes, business systems, and information.  PLM forms the product information backbone for a company and its extended enterprise.”  Source:  CIMdata

“Product life cycle management or PLM is an all-encompassing approach for innovation, new product development and introduction (NPDI) and product information management from initial idea to the end of life.  PLM Systems is an enabling technology for PLM integrating people, data, processes, and business systems and providing a product information backbone for companies and their extended enterprise.” Source:  PLM Technology Guide

“The core of PLM (product life cycle management) is in the creation and central management of all product data and the technology used to access this information and knowledge. PLM as a discipline emerged from tools such as CAD, CAM and PDM, but can be viewed as the integration of these tools with methods, people and the processes through all stages of a product’s life.” Source:  Wikipedia article on Product Lifecycle Management

“Product life cycle management is the process of managing product-related design, production and maintenance information. PLM may also serve as the central repository for secondary information, such as vendor application notes, catalogs, customer feedback, marketing plans, archived project schedules, and other information acquired over the product’s life.” Source:  Product Lifecycle Management

“It is important to note that PLM is not a definition of a piece, or pieces, of technology.  It is a definition of a business approach to solving the problem of managing the complete set of product definition information: creating that information, managing it through its life, and disseminating and using it throughout the lifecycle of the product.  PLM is not just a technology; it is an approach in which processes are as important as, or more important than, data.”  Source:  CIMdata

“PLM or Product Life Cycle Management is a process or system used to manage the data and design process associated with the life of a product from its conception and envisioning through its manufacture, to its retirement and disposal.  PLM manages data, people, business processes, manufacturing processes, and anything else pertaining to a product.  A PLM system acts as a central information hub for everyone associated with a given product, so a well-managed PLM system can streamline product development and facilitate easier communication among those working on/with a product.”  Source:  Aras

A pictorial representation of PLM may be seen as follows:

Hopefully, you can see that PLM deals with methodologies from “white napkin design to landfill disposal”.  Please note, documentation is critical to all aspects of PLM, and good document production, storage, and retrieval are extremely important to the overall process.  We are talking about CAD, CAM, CAE, DFSS, laboratory testing notes, etc.  In other words, “the whole nine yards of product life”.  If you work in a company with ISO certification, PLM is a great method to ensure retaining that certification.

In looking at the four stages of a product’s lifecycle, we see the following:

Four Stages of Product Life Cycle—Marketing and Sales:

The first stage is introduction, when the product is brought into the market.  In this stage there is heavy marketing activity and product promotion, and the product is put into limited outlets in a few channels for distribution.  Sales take off slowly in this stage.  The need is to create awareness, not profits.

The second stage is growth. In this stage, sales take off, the market knows of the product; other companies are attracted, profits begin to come in and market shares stabilize.

The third stage is maturity, where sales grow at slowing rates and finally stabilize. In this stage, products get differentiated, price wars and sales promotion become common and a few weaker players exit.

The fourth stage is decline.  Here, sales drop as consumer tastes change and the product is no longer relevant or useful.  Price wars continue, several products are withdrawn, and cost control becomes the way out for most products in this stage.

Benefits of PLM Relative to the Four Stages of Product Life:

Considering the benefits of Product Lifecycle Management, we realize the following:

  • Reduced time to market
  • Increased full-price sales
  • Improved product quality and reliability
  • Reduced prototyping costs
  • More accurate and timely request-for-quote generation
  • Ability to quickly identify potential sales opportunities and revenue contributions
  • Savings through the re-use of original data
  • A framework for product optimization
  • Reduced waste
  • Savings through the complete integration of engineering workflows
  • Documentation that can assist in proving compliance for RoHS or Title 21 CFR Part 11
  • Ability to provide contract manufacturers with access to a centralized product record
  • Seasonal fluctuation management
  • Improved forecasting to reduce material costs
  • Maximized supply chain collaboration
  • Much better “troubleshooting” when field problems arise, supported by laboratory-testing and reliability-testing documentation

PLM considers not only the four stages of a product’s lifecycle but all of the work prior to marketing and sales AND disposal after the product is removed from commercialization.  With this in mind, why is PLM a necessary business technique today?  Because of increases in technology, manpower, and specialization of departments, PLM is needed to integrate all activity toward the design, manufacture, and support of the product.  Back in the late 1960s, when the F-15 Eagle was conceived and developed, almost all manufacturing and design processes were done by hand.  Blueprints or drawings needed to make the parts for the F-15 were created on paper.  No electronics, no emails – all paper for documents.  This caused a lack of efficiency in design and manufacturing compared to today’s technology.  OK, another example of today’s technology and the application of PLM.

If we look at the processes for Boeing’s DREAMLINER, we see the 787 Dreamliner has about 2.3 million parts per airplane.  Development and production of the 787 has involved a large-scale collaboration with numerous suppliers worldwide.  They include everything from “fasten seatbelt” signs to jet engines and vary in size from small fasteners to large fuselage sections.  Some parts are built by Boeing, and others are purchased from supplier partners around the world.  In 2012, Boeing purchased approximately seventy-five (75) percent of its supplier content from U.S. companies.  On the 787 program, content from non-U.S. suppliers accounts for about thirty (30) percent of purchased parts and assemblies.  PLM, or Boeing’s version of PLM, was used to bring about commercialization of the 787 Dreamliner.

 


Information for this post is taken from the following companies:

  • Wohlers Associates
  • Gartner
  • Oerlikon
  • SmartTech Publishing

3-D ADDITIVE MANUFACTURING:

I think before we get up and running we should define “additive manufacturing”.

Additive Manufacturing, or AM, is an appropriate name to describe the technologies that build 3D objects by adding layer upon layer of material, whether the material is plastic, metal, concrete, or even human tissue.  Believe it or not, additive manufacturing is now able, on a limited basis, to construct objects from human tissue to repair body parts that have been damaged or are absent.

Common to AM technologies is the use of a computer, 3D modeling software (Computer Aided Design, or CAD), machine equipment, and layering material.  Once a CAD sketch is produced, the AM equipment reads in data from the CAD file and lays down or adds successive layers of liquid, powder, sheet material, or other media in a layer-upon-layer fashion to fabricate a 3D object.
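
To show the layer-upon-layer idea in code, here is a minimal Python sketch of the slicing step.  Real slicers work on triangle meshes (e.g., STL files exported from CAD); a simple cone stands in for the CAD model here, and the dimensions are illustrative only.

    import math

    def slice_cone(height_mm: float, base_radius_mm: float, layer_mm: float):
        """Slice a cone into horizontal layers, returning (z, radius) pairs.

        Each pair describes the circular cross-section the machine would
        trace at that height, which is the essence of additive layering."""
        layers = []
        for i in range(math.ceil(height_mm / layer_mm)):
            z = min(i * layer_mm, height_mm)
            r = base_radius_mm * (1 - z / height_mm)  # cone tapers linearly
            layers.append((round(z, 3), round(r, 3)))
        return layers

    # A 20 mm tall cone at a typical 0.2 mm layer height yields 100 layers:
    for z, r in slice_cone(20.0, 10.0, 0.2)[:3]:
        print(f"layer at z = {z} mm, cross-section radius {r} mm")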

The term AM encompasses many technologies including subsets like 3D Printing, Rapid Prototyping (RP), Direct Digital Manufacturing (DDM), layered manufacturing and additive fabrication.

The applications of AM are limitless.  Early use of AM, in the form of rapid prototyping, focused on preproduction visualization models.  More recently, AM is being used to fabricate end-use products in aircraft, dental restorations, medical implants, automobiles, and even fashion products.

RAPID PROTOTYPING & MANUFACTURING (RP&M) TECHNOLOGIES:

There are several viable options available today that take advantage of rapid prototyping technologies.   All of the methods shown below are considered to be rapid prototyping and manufacturing technologies.

  • (SLA) Stereolithography
  • (SLS) Selective Laser Sintering
  • (FDM) Fused Deposition Modeling
  • (3DP) Three-Dimensional Printing
  • (PJET) PolyJet
  • (LOM) Laminated Object Manufacturing

PRODUCT POSSIBILITIES:

Frankly, if the configuration can be programmed, it can be printed.  The possibilities are absolutely endless.

Assortment of components: flange mount and external gear.

Bone fragment depicting a fractured bone.  This printed product will aid the efforts of a surgeon to make the necessary repair.

More and more, 3D printing is used to model teeth and jaw lines prior to extensive dental work.  It gives the dental surgeon a better look at a patient’s mouth prior to surgery.

You can see the intricate detail of the Eiffel Tower and the shoe sole in the JPEGs above.  3D printing can provide an enormous amount of detail to the end user.

THE MARKET:

3D printing is a disruptive technology that is definitely on the rise.  Let’s take a look at future possibilities and current practices.

GROWTH:

Wohlers Associates has been tracking the market for machines that produce metal parts for fourteen (14) years.  The Wohlers Report 2014 marks only the second time the company has published detailed information on metal-based AM machine unit sales by year.  The following chart shows that 348 such machines were sold in 2013, compared to 198 in 2012, growth of an impressive 75.8%.

The additive manufacturing industry grew by 17.4% in worldwide revenues in 2016, reaching $6.063 billion.

MATERIALS USED:

Nearly one-half of the 3D printing/additive manufacturing service providers surveyed in 2016 offered metal printing.

GLOBAL MARKETS:

NUMBER OF VENDORS OFFERING EQUIPMENT:

The number of companies producing and selling additive manufacturing equipment has grown as follows:

  • 2014—49
  • 2015—62
  • 2016—97

USERS:

Worldwide shipments of 3D printers were projected to reach 455,772 units in 2016, with 6.7 million units expected to be shipped by 2020.

More than 278,000 desktop 3D printers (under $5,000) were sold worldwide last year, according to Wohlers Associates.  The report has a chart to illustrate this, and it looks like the proverbial hockey stick that you hear venture capitalists talk about: growth that moves rapidly from horizontal to vertical (from 2010 to 2015 for desktop).

According to Wohlers Report 2016, the additive manufacturing (AM) industry grew 25.9% (CAGR, or compound annual growth rate) to $5.165 billion in 2015.  Frequently called 3D printing by those outside manufacturing circles, the industry’s growth consists of all AM products and services worldwide.  The CAGR for the previous three years was 33.8%.  Over the past 27 years, the CAGR for the industry is an impressive 26.2%.  Clearly, this is not a market segment that is declining, whatever you might otherwise read.
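
Since these reports lean heavily on growth rates, a quick worked check of the arithmetic may help: one-period growth is simply end/start - 1, and CAGR compounds that over multiple years.  The Python snippet below reproduces two figures quoted above (the 75.8% machine-sales jump and the 17.4% revenue growth).

    def growth(start: float, end: float) -> float:
        """One-period growth rate."""
        return end / start - 1

    def cagr(start: float, end: float, years: float) -> float:
        """Compound annual growth rate over the given number of years."""
        return (end / start) ** (1 / years) - 1

    # Metal AM machine sales: 198 units (2012) -> 348 units (2013)
    print(f"{growth(198, 348):.1%}")       # 75.8%

    # Industry revenue: $5.165B (2015) -> $6.063B (2016)
    print(f"{growth(5.165, 6.063):.1%}")   # 17.4%

    # Over a single year, CAGR reduces to the one-period growth rate:
    print(f"{cagr(5.165, 6.063, 1):.1%}")  # 17.4%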

MARKET FORECAST:

  • About 20 to 25% of the $26.5 billion market forecast for 2021 is expected to be the result of metal additive manufacturing.
  • The market for polymers and plastics for 3D printing will reach $3.2 billion by 2022.
  • The primary market for metal additive manufacturing, including systems and powder materials, will grow to over $6.6 billion by 2026.

CONCLUSIONS:

We see more and more products and components manufactured by 3D printing processes.  Additive manufacturing is just now enjoying acceptance from larger and more established companies whose products are, in effect, “mission critical”.  As material choices continue to grow, a greater number of applications will emerge.  For the foreseeable future, additive manufacturing is one of the technologies to be associated with.

THINKING FAST AND SLOW

June 13, 2017


Thinking, Fast and Slow is a remarkably well-written book by Dr. Daniel Kahneman.  Then again, why would it not be?  Dr. Kahneman is a Nobel Laureate in Economics.  He takes the reader on a tour of the mind and explains the two systems that drive the way we think.  System One (1) is fast, intuitive, and emotional.  System Two (2) is considerably slower, more deliberative, and more logical.  He engages the reader in a very lively conversation about how we think, revealing where we can and cannot trust our intuitions and how we can tap into the benefits of slow thinking.  One great thing about the book is how he offers practical and enlightening insights into how choices are made in both the corporate world and our personal lives.  He provides different techniques to guard against the mental glitches that often get us into trouble.  He uses multiple examples in each chapter that demonstrate the principles of System One and System Two.  This greatly improves the readability of the book and makes understanding much more possible.

Human irrationality is Kahneman’s great theme.  There are essentially three phases to his career.  First, he and his coworker Amos Tversky devised a series of ingenious experiments revealing twenty-plus “cognitive biases” — unconscious errors of reasoning that distort our judgment of the world.  Typical of these is the “anchoring effect”: our tendency to be influenced by irrelevant numbers that we happen to be exposed to.  (In one experiment, for instance, experienced German judges were inclined to give a shoplifter a longer sentence if they had just rolled a pair of dice loaded to give a high number.)  In the second phase, Kahneman and Tversky showed that people making decisions under uncertain conditions do not behave in the way that economic models have traditionally assumed; they do not “maximize utility.”  The two researchers then developed an alternative account of decision making, one more faithful to human psychology, which they called “prospect theory.”  (It was for this achievement that Kahneman was awarded the Nobel.)  In the third phase of his career, mainly after the death of Tversky, Kahneman delved into “hedonic psychology”: the science of happiness, its nature and its causes.  His findings in this area have proven disquieting, partly because one of the key experiments involved a deliberately prolonged colonoscopy.  (Very interesting chapter.)

“Thinking, Fast and Slow” spans all three of these phases.  It is an astonishingly rich book: lucid, profound, full of intellectual surprises and self-help value.  It is consistently entertaining and frequently touching, especially when Kahneman is recounting his collaboration with Tversky.  (“The pleasure we found in working together made us exceptionally patient; it is much easier to strive for perfection when you are never bored.”)  So impressive is its vision of flawed human reason that the New York Times columnist David Brooks recently declared that Kahneman and Tversky’s work “will be remembered hundreds of years from now,” and that it is “a crucial pivot point in the way we see ourselves.”  They are, Brooks said, “like the Lewis and Clark of the mind.”

One of the marvelous things about the book is how thoroughly it is referenced.  Page after page of references went into formulating the text.  To his credit, he has definitely done his homework, and his years of research into the subject matter make this text one of the foremost in the field of decision making.

This book was the winner of the National Academy of Sciences Best Book Award and the Los Angeles Times Book Prize.  It was also selected by the New York Times Book Review as one of the ten (10) best books of 2011.

DANIEL KAHNEMAN:

Daniel Kahneman is a Senior Scholar at the Woodrow Wilson School of Public and International Affairs. He is also Professor of Psychology and Public Affairs Emeritus at the Woodrow Wilson School, the Eugene Higgins Professor of Psychology Emeritus at Princeton University, and a fellow of the Center for Rationality at the Hebrew University in Jerusalem.

He was awarded the Nobel Prize in Economic Sciences in 2002 for his pioneering work integrating insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty. Much of this work was carried out collaboratively with Amos Tversky.

In addition to the Nobel prize, Kahneman has been the recipient of many other awards, among them the Distinguished Scientific Contribution Award of the American Psychological Association (1982) and the Grawemeyer Prize (2002), both jointly with Amos Tversky, the Warren Medal of the Society of Experimental Psychologists (1995), the Hilgard Award for Career Contributions to General Psychology (1995), and the Lifetime Contribution Award of the American Psychological Association (2007).

Professor Kahneman was born in Tel Aviv but spent his childhood years in Paris, France, before returning to Palestine in 1946.  He received his bachelor’s degree in psychology (with a minor in mathematics) from Hebrew University in Jerusalem, and in 1954 he was drafted into the Israel Defense Forces, serving principally in its psychology branch.  In 1958, he came to the United States, and he earned his Ph.D. in Psychology from the University of California, Berkeley, in 1961.

During the past several years, the primary focus of Professor Kahneman’s research has been the study of various aspects of experienced utility (that is, the utility of outcomes as people actually live them).

CONCLUSIONS: 

This is one book I can definitely recommend to you, but one caution—it is a lengthy book and at times tedious.  His examples are very detailed but contain subject matter that we all can relate to.  The decision-making processes for matters confronting everyone on an everyday basis are brought to life, with pros and cons being the focus.  You can certainly tell he relies upon probability theory in explaining the choices individuals make and how those choices may be proper or improper.  THIS IS ONE TO READ.

MELTING POT

May 28, 2017


Once each month I receive a summary of charges for prescription medications from our healthcare provider.  How much the plan pays, how much we pay, where I am relative to co-pays, etc.  I always read the document, but this month I noticed information printed in several languages indicating phone numbers for those individuals who do not speak English.  That list is given below.  As you can see, my provider has their bases covered.  This points to the fact that our country is definitely a “melting pot” of differing ethnicities, religions, and cultures in general.  English-only is a thing of the past.  There are plenty of households in which English is not the primary or native language.  I certainly feel people try to assimilate but, as we all know, English is very difficult to learn if it is not your first language.  This fact got me to thinking: just how diverse are we?  Let’s take a look.

The figure above is the fourth sheet from my medical provider.  As I mentioned, they seem to have all of the bases covered which is exactly what I would do if I were them.

The bar chart below was a definite surprise to me.  According to the 2000 census, close to forty-three (43) million Americans reported German ancestry, making it the largest single ancestry group.  You can read the chart below to see how the various cultural backgrounds contribute to the overall “melting pot” of the United States.  Of course, this varies from one part of our country to another.  In the Southeast, the predominant lineage is from England, Scotland, Ireland, and Africa.

If we look at cultural diversity by state, we may see the following:

Population demographics from the most recent census present the following:

Social scientists have only recently begun to evaluate multiculturalism as public policy. Keith Banting and Will Kymlicka of Queen’s University in Ontario, Canada, have constructed a multiculturalism policy index (MCP Index) that measures the extent to which eight types of policies appear in twenty-one (21) Western nations. The index accounts for the presence or absence of multicultural policies across these countries at three distinct points — 1980, 2000, and 2010 — thus capturing policy changes over time.  This information is captured below.

The countries were each evaluated for an official affirmation of multiculturalism; multiculturalism in the school curriculum; inclusion of ethnic representation/sensitivity in public media and licensing; exemptions from dress codes in public laws; acceptance of dual citizenship; funding of ethnic organizations to support cultural activities; funding of bilingual and mother-tongue instruction; and affirmative action for immigrant groups.

According to Pew Research, the most and least multicultural countries are as follows:

This multicultural map of the world is based on an analysis of data reported in a new study of cultural diversity and economic development by researcher Erkan Gören of the University of Oldenburg in Germany.  In his paper, Gören measured the amount of cultural diversity in each of more than 180 countries.  To arrive at his estimates, he combined data on ethnicity and race with a measure based on the similarity of languages spoken by major ethnic or racial groups.  “The hypothesis is that groups speaking the same or highly related languages should also have similar cultural values,” said Gören in an email.

He then used his language and ethnicity measures to compute a cultural diversity score for each country, ranging from 0 to 1, with larger scores indicating more diversity and smaller values representing less.  The usual suspects lead the list of culturally diverse countries: Chad, Cameroon, Nigeria, Togo, and the Democratic Republic of the Congo.  These and other African countries typically rank high on any diversity index because of their multitude of tribal groups and languages.  The only Western country to break into the top 20 most diverse is Canada.  The United States ranks near the middle, slightly more diverse than Russia but slightly less diverse than Spain.
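
Gören’s actual index is more sophisticated than anything I can reproduce here, but the flavor of such scores can be captured with a small sketch: a Greenberg-style measure in which each pair of groups is discounted by how similar their languages are.  The population shares and similarity values below are invented for illustration.

    def diversity(shares, similarity):
        """Greenberg-style diversity score in [0, 1].

        Roughly: the chance that two randomly drawn people belong to
        linguistically dissimilar groups.  0 = homogeneous, 1 = maximal.
        shares:     population share of each group (must sum to 1)
        similarity: symmetric matrix of language similarity in [0, 1]"""
        score = 1.0
        for i, p_i in enumerate(shares):
            for j, p_j in enumerate(shares):
                score -= p_i * p_j * similarity[i][j]
        return score

    # Hypothetical country: two groups speaking closely related languages
    shares = [0.7, 0.3]
    similarity = [[1.0, 0.8],
                  [0.8, 1.0]]
    print(round(diversity(shares, similarity), 3))  # 0.084 -- not very diverse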

Argentina, the Comoros, Haiti, the Dominican Republic, Rwanda and Uruguay rank as the world’s least diverse countries. Argentina may be a surprise, what with all those Germans and Italians pouring into the country after one world war or the other. But Spanish is nearly universally spoken in Argentina, 97% of the country is white and more than nine-in-ten Argentines are at least nominally Roman Catholic, according to the CIA’s World Factbook. The presence of Rwanda at the bottom of the list likely is, in part, a grim reminder of the mass slaughter of Tutsi by the dominant Hutu majority in 1994 in what came to be known as the Rwandan Genocide.

A caution: cultural diversity is a different concept from ethnic diversity.  As a result, a map of the world reflecting ethnic diversity looks somewhat different from one based on Gören’s cultural diversity measure, which combines the language and ethnicity profiles of a country.  The Harvard and Gören maps show that the most diverse countries in the world are found in Africa.  The United States falls near the middle, while Canada and Mexico are more diverse than the US.

I have had the great fortune to travel to several non-English-speaking countries over my lifetime, and I can tell you most do NOT accommodate the languages visitors or nonresidents speak.  Generally, and it may have changed over the last five or six years, if you cannot speak the native language you just might be in trouble.

 

CLOUD COMPUTING

May 20, 2017


OK, you have heard the term over and over again, but just what is cloud computing?  Simply put, cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics, and more—over the Internet (“the cloud”).  Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you’re billed for water or electricity at home.  It is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand.  It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications, and services), which can be rapidly provisioned and released with minimal management effort.  Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in either privately owned or third-party data centers that may be located far from the user, ranging in distance from across a city to across the world.  Cloud computing relies on the sharing of resources to achieve coherence and economies of scale, much like a public utility such as the electricity grid.
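
To illustrate the utility-style billing model, here is a minimal Python sketch of metered, pay-per-use charging.  The rates and meter names are invented for illustration; they are not any real provider’s price sheet.

    # Hypothetical pay-per-use rates, in the spirit of "billed like electricity".
    RATES = {
        "compute_hours":    0.05,  # $ per VM-hour
        "storage_gb_month": 0.02,  # $ per GB stored per month
        "egress_gb":        0.09,  # $ per GB transferred out
    }

    def monthly_bill(usage: dict) -> float:
        """Sum metered usage against the per-unit rates."""
        return sum(RATES[meter] * qty for meter, qty in usage.items())

    # One VM running all month, 100 GB stored, 50 GB of outbound traffic:
    print(monthly_bill({"compute_hours": 720,
                        "storage_gb_month": 100,
                        "egress_gb": 50}))  # 42.5

The key point is that you pay only for what the meters record, rather than buying and maintaining the hardware outright.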

ADVANTAGES AND DISADVANTAGES:

Any new technology has an upside and downside. There are obviously advantages and disadvantages when using the cloud.  Let’s take a look.

 Advantages

  • Lower cost for desktop clients since the applications are running in the cloud. This means clients with smaller hard drive requirements and possibly even no CD or DVD drives.
  • Peak computing needs of a business can be offloaded into cloud applications, saving the funds normally used for additional in-house servers.
  • Lower maintenance costs.  This includes both hardware and software cost reductions, since client machines can be much cheaper and software purchase costs are eliminated altogether for applications running in the cloud.
  • Automatic application software updates for applications in the cloud. This is another maintenance savings.
  • Vastly increased computing power availability. The scalability of the server farm provides this advantage.
  • The scalability of virtual storage provides unlimited storage capacity.

 Disadvantages

  • Requires an “always on” Internet connection.
  • There are clear concerns with data security, e.g., questions like: “If I can get to my data using a web browser, who else can?”
  • Concerns for loss of data.
  • Reliability. Service interruptions are rare but can happen. Google has already had an outage.

MAJOR CLOUD SERVICE PROVIDERS:

The following names are very recognizable.  Everyone knows the “open-market” cloud service providers.

  • AMAZON
  • SALESFORCE
  • GOOGLE
  • IBM
  • MICROSOFT
  • SUN MICROSYSTEMS
  • ORACLE
  • AT & T

PRIVATE CLOUD SERVICE PROVIDERS:

With all the interest in cloud computing as a service, there is also an emerging concept of private clouds.  It is a bit reminiscent of the early days of the Internet and the importing of that technology into the enterprise as intranets.  The concerns for security and reliability outside corporate control are very real and troublesome aspects of the otherwise attractive technology of cloud computing services.  The IT world has not forgotten about the eight-hour downtime of the Amazon S3 cloud server on July 20, 2008.  A private cloud means that the technology must be bought, built, and managed within the corporation.  A company will be purchasing cloud technology usable inside the enterprise for development of cloud applications that have the flexibility to run on the private cloud or outside on the public clouds.  This “hybrid environment” is in fact the direction that some believe the enterprise community will be going, and some of the products that support this approach are listed below.

  • Elastra (http://www.elastra.com) is developing a server that can be used as a private cloud in a data center.  Tools are available to design applications that will run in both private and public clouds.
  • 3Tetra (http://www.3tetra.com) is developing a grid operating system called ParaScale that will aggregate disk storage.
  • Cassatt (http://www.cassatt.com) will be offering technology that can be used for resource pooling.
  • Ncomputing (http://www.ncomputing.com) has developed a standard desktop PC virtualization software system that allows up to 30 users to use the same PC system with their own keyboard, monitor, and mouse.  Strong claims are made about savings on PC costs, IT complexity, and power consumption by customers in government, industry, and education communities.
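
The “hybrid environment” described above boils down to a placement decision: run work on the private cloud while capacity lasts, and burst the overflow to a public cloud.  Here is a minimal Python sketch of that policy; the job names and capacity figure are invented, and a real scheduler is of course far more involved.

    from typing import Dict, List, Tuple

    def place(jobs: List[Tuple[str, int]], private_capacity: int) -> Dict[str, str]:
        """Assign each (name, cpus) job to the private cloud until its
        capacity is exhausted, then overflow ("burst") to the public cloud."""
        placement: Dict[str, str] = {}
        used = 0
        for name, cpus in jobs:
            if used + cpus <= private_capacity:
                placement[name] = "private"
                used += cpus
            else:
                placement[name] = "public"
        return placement

    print(place([("web", 4), ("batch", 8), ("analytics", 16)],
                private_capacity=12))
    # {'web': 'private', 'batch': 'private', 'analytics': 'public'}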

CONCLUSION:

OK, clear as mud—right?  For me, the biggest misconception is the terminology itself—the cloud.  The word “cloud” seems to imply an IT system in the sky.  The exact opposite is the case.  The cloud is an earth-based IT system serving as a universal host.  A network of computers.  A network of servers.  No cloud.

DIGITAL READINESS GAPS

April 23, 2017


This post uses as one reference the “Digital Readiness Gaps” report by the Pew Research Center.  This report explores, as we will now, the attitudes and behaviors that underpin individual preparedness and comfort in using digital tools for learning.

HOW DO ADULTS LEARN?  Good question.  I suppose there are many ways, but I can certainly tell you that adults my age, over seventy, learn in a manner much different from my grandchildren, under twenty.  I think of “book learning” first and digital as a backup.  They head straight for their iPad or iPhone.  GOOGLE is a verb and not a company name as far as they are concerned.  (I’m actually getting there with the digital search methods and now start with GOOGLE, but I reference multiple sources before being satisfied with only one reference.  For some reason, I still trust books as opposed to digital.)

According to Malcolm Knowles, a pioneer in adult learning, there are six (6) main characteristics of adult learners, as follows:

  • Adult learning is self-directed/autonomous
    Adult learners are actively involved in the learning process such that they make choices relevant to their learning objectives.
  • Adult learning utilizes knowledge & life experiences
    Under this approach educators encourage learners to connect their past experiences with their current knowledge-base and activities.
  • Adult learning is goal-oriented
    The motivation to learn is increased when the relevance of the “lesson” through real-life situations is clear, particularly in relation to the specific concerns of the learner.
  • Adult learning is relevancy-oriented
    One of the best ways for adults to learn is by relating the assigned tasks to their own learning goals.  If it is clear that the activities they are engaged in directly contribute to achieving their personal learning objectives, then they will be inspired and motivated to engage in projects and successfully complete them.
  • Adult learning highlights practicality
    Placement is a means of helping students apply the theoretical concepts learned inside the classroom to real-life situations.
  • Adult learning encourages collaboration
    Adult learners thrive in collaborative relationships with their educators. When learners are considered by their instructors as colleagues, they become more productive. When their contributions are acknowledged, then they are willing to put out their best work.

One very important note: these six characteristics encompass both the “digital world” and conventional methods, i.e., books, magazines, newspapers, etc.

As mentioned above, a recent Pew Research Center report shows that adoption of technology for adult learning in both personal and job-related activities varies by people’s socio-economic status, their race and ethnicity, and their level of access to home broadband and smartphones. Another report showed that some users are unable to make the internet and mobile devices function adequately for key activities such as looking for jobs.

Specifically, the Pew report made its assessment of American adults according to five main factors:

  • Their confidence in using computers
  • Their facility with getting new technology to work
  • Their use of digital tools for learning
  • Their ability to determine the trustworthiness of online information
  • Their familiarity with contemporary “education tech” terms

It is important to note: the report addresses only the adult proclivity relative to digital learning and not learning by any other means; just the availability of digital devices to facilitate learning.  If we look at the “conglomerate” from the PIAAC Fact Sheet, we see the following:

The Pew analysis details several distinct groups of Americans who fall along a spectrum of digital readiness from relatively more prepared to relatively hesitant. Those who tend to be hesitant about embracing technology in learning are below average on the measures of readiness, such as needing help with new electronic gadgets or having difficulty determining whether online information is trustworthy. Those whose profiles indicate a higher level of preparedness for using tech in learning are collectively above average on measures of digital readiness.  The chart below will indicate their classifications.

The breakdown is as follows:

Relatively Hesitant – 52% of adults in three distinct groups. This overall cohort is made up of three different clusters of people who are less likely to use digital tools in their learning. This has to do, in part, with the fact that these groups have generally lower levels of involvement with personal learning activities. It is also tied to their professed lower level of digital skills and trust in the online environment.

  • A group of 14% of adults make up The Unprepared.  This group has both low levels of digital skills and limited trust in online information.  The Unprepared rank at the bottom of those who use the internet to pursue learning, and they are the least digitally ready of all the groups.
  • We call one small group Traditional Learners, and they make up 5% of Americans.  They are active learners, but use traditional means to pursue their interests.  They are less likely to fully engage with digital tools, because they have concerns about the trustworthiness of online information.
  • A larger group, The Reluctant, make up 33% of all adults.  They have higher levels of digital skills than The Unprepared, but very low levels of awareness of new “education tech” concepts and relatively lower levels of performing personal learning activities of any kind.  This is correlated with their general lack of use of the internet in learning.

Relatively more prepared – 48% of adults in two distinct groups.  This cohort is made up of two groups who are above average in their likelihood to use online tools for learning.

  • A group we call Cautious Clickers comprises 31% of adults.  They have tech resources at their disposal, trust and confidence in using the internet, and the educational underpinnings to put digital resources to use for their learning pursuits.  But they have not waded into e-learning to the extent the Digitally Ready have and are not as likely to have used the internet for some or all of their learning.
  • Finally, there are the Digitally Ready. They make up 17% of adults, and they are active learners and confident in their ability to use digital tools to pursue learning. They are aware of the latest “ed tech” tools and are, relative to others, more likely to use them in the course of their personal learning. The Digitally Ready, in other words, have high demand for learning and use a range of tools to pursue it – including, to an extent significantly greater than the rest of the population, digital outlets such as online courses or extensive online research.

CONCLUSIONS:

To me, one of the greatest lessons from my university days: NEVER STOP LEARNING.  I had one professor, Dr. Bob Maxwell, who told us the half-life of a graduate engineer is approximately five (5) years.  If you stop learning, the information you have will become obsolete in five years.  At the pace of technology today, that may be five months.  You never stop learning AND you embrace existing technology.  In other words—do digital.  Digital is your friend.  GOOGLE, no matter how flawed, can give you answers much quicker than other sources, and it’s readily available and just plain handy.  At least start there; then trust but verify.
