July 12, 2014
I really don’t know how I missed this one. This document deals with “phone sats”. You can get a better feel for the technology by taking a look at NASA press release 13-107. Let’s do that right now.
NASA Successfully Launches Three Smartphone Satellites
WASHINGTON — Three smartphones destined to become low-cost satellites rode to space Sunday aboard the maiden flight of Orbital Sciences Corp.’s Antares rocket from NASA’s Wallops Flight Facility in Virginia.
The trio of “PhoneSats” is operating in orbit, and may prove to be the lowest-cost satellites ever flown in space. The goal of NASA’s PhoneSat mission is to determine whether a consumer-grade smartphone can be used as the main flight avionics of a capable, yet very inexpensive, satellite.
Transmissions from all three PhoneSats have been received at multiple ground stations on Earth, indicating they are operating normally. The PhoneSat team at the Ames Research Center in Moffett Field, Calif., will continue to monitor the satellites in the coming days. The satellites are expected to remain in orbit for as long as two weeks.
“It’s always great to see a space technology mission make it to orbit — the high frontier is the ultimate testing ground for new and innovative space technologies of the future,” said Michael Gazarik, NASA’s associate administrator for space technology in Washington.
“Smartphones offer a wealth of potential capabilities for flying small, low-cost, powerful satellites for atmospheric or Earth science, communications, or other space-borne applications. They also may open space to a whole new generation of commercial, academic and citizen-space users.”
Satellites consisting mainly of the smartphones will send information about their health via radio back to Earth in an effort to demonstrate they can work as satellites in space. The spacecraft also will attempt to take pictures of Earth using their cameras. Amateur radio operators around the world can participate in the mission by monitoring transmissions and retrieving image data from the three satellites. Large images will be transmitted in small chunks and will be reconstructed through a distributed ground station network. The JPEGs shown below give an indication of the orbit.
The systems are now operating properly, orbiting Earth and delivering information that will be used to evaluate the program. I feel NASA has married the private and public sectors to produce workable technology at much lower cost yet, hopefully, with the same results. Time will tell.

According to Chad Frost, Chief of the Mission Design Division at NASA Ames, “We all carry around smartphones these days, so we’re intimately familiar with what a smartphone is and what it can do. And a few years ago, we had the intriguing idea that you might actually be able to build a spacecraft around a smartphone. So, we were very intrigued by the notion that you could build a very small spacecraft based entirely on consumer electronics devices and other low-cost systems.”
The configuration may be seen in the following JPEG:
PhoneSat is a nano-satellite, meaning its mass falls between one and ten kilograms. Additionally, PhoneSat is a 1U CubeSat, with a volume of around one liter. The PhoneSat Project strives to decrease the cost of satellites without sacrificing performance. In an effort to achieve this goal, the project is based around Commercial Off-The-Shelf (COTS) electronics to provide functionality for as many parts as possible while still creating a reliable satellite. Two copies of PhoneSat 1.0 were launched in mid-April 2013 along with an early prototype of PhoneSat 2.0 referred to as PhoneSat 2.0.beta. PhoneSat 2.4 is sitting on the launch pad ready for lift-off. The PhoneSats use a Google Nexus smartphone running the Android 2.3.3 operating system. Two of the PhoneSats have standard smartphone cameras that were used to take images of Earth from space. The first JPEG in this post shows one of those pictures.
Now, here is a fact that blows me away. NASA engineers kept the total cost of the components for the three prototype satellites in the PhoneSat project between $3,500 and $7,000 by using primarily commercial hardware and keeping the design and mission objectives to a minimum.
NASA added items a satellite needs that the smartphones do not have — a larger, external lithium-ion battery bank and a more powerful radio for messages it sends from space. The smartphone’s ability to send and receive calls and text messages has been disabled. Each smartphone is housed in a standard CubeSat structure, measuring about four inches on a side. The smartphone acts as the satellite’s onboard computer. Its sensors are used for attitude determination and its camera for Earth observation.
There are several phases to “powering-up” the PhoneSat system. These are as follows:
Phase 1: After the initialization phase, the phone enters phase 1, in which it performs a health check. During this phase, each sensor and subsystem is checked and the data is compiled into a standard health packet, stored on the smartphone’s SD card and transmitted over the beacon radio at a regular 30-second interval. The last 10 health packets are stored on the SD card. After every 10 packets sent, the beacon radio is rebooted. This phase takes place during the first 24 hours of the mission. The mission time is kept in the phone throughout the mission so that a system reboot during this phase does not reset the 24-hour countdown. A health packet consists of: Satellite ID, restart counter, reboot counter, Phase 1 count, Phase 2 count, time, battery voltage, temp 1, temp 2, accel X, accel Y, accel Z, Mag X, Mag Y, Mag Z, text “hello from the avcs”.
Phase 2: This phase starts once a full system health check has been performed. During this phase, image packets and health packets are sent to Earth through the beacon radio. A health packet is sent once for every 9 image packets downlinked.
This phase can be divided into three sub-phases:
• Health Data Measurements: Health data is measured and the 10 most recent samples are stored in the SD card.
• Health Data Downlink: Once 9 packets have been sent through the beacon containing image information, the 10th one is reserved for a health packet.
• Image Sequence: One picture is taken every minute until 100 pictures are taken and stored to the SD card. Pictures are then analyzed and the top image is selected. This image is packetized and compiled into standard image packets. These image packets are transmitted over the beacon radio coupled with health packets in the ratio explained above.
Safe Mode: If the watchdog detects that the phone is not sending any data to the radio for a certain period of time, the spacecraft functionality is reduced to the bare minimum. In this condition, the spacecraft only transmits health data containing the last 10 sensor data values stored in the SD card prior to failure. This mode lasts for 90 minutes. After this period, the spacecraft resumes its normal operations. A safe mode packet consists of: Satellite_ID, last 10 voltage values, last 10 temperature sensor 1 values, last 10 temperature sensor 2 values, text “SAFEMODE”.
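The packet layout and phase logic described above can be sketched in a few lines of Python. This is purely illustrative: the field names, dictionary keys and function names below are my own assumptions drawn from this post, not NASA’s actual flight software.

```python
import time

# Hypothetical sketch of the Phase 1 health packet described above.
# Field order follows the post; the comma-separated encoding is assumed.
def build_health_packet(sat_id, counters, sensors):
    fields = [
        sat_id,
        counters["restart"], counters["reboot"],
        counters["phase1"], counters["phase2"],
        int(time.time()),
        sensors["battery_v"],
        sensors["temp1"], sensors["temp2"],
        sensors["accel_x"], sensors["accel_y"], sensors["accel_z"],
        sensors["mag_x"], sensors["mag_y"], sensors["mag_z"],
        "hello from the avcs",
    ]
    return ",".join(str(f) for f in fields)

# Phase 2 interleaving: one health packet for every 9 image packets,
# i.e. every 10th packet downlinked is a health packet.
def next_packet_type(packets_sent):
    return "health" if packets_sent % 10 == 9 else "image"
```

The 9:1 interleave simply reserves every tenth beacon slot for health data, matching the ratio described in the Health Data Downlink sub-phase.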
The timeline for research and development started in 2009. Definite planning has gone into the program. You may see that timeline below.
As mentioned above, PhoneSat 2.4 is already scheduled for launch later this year, 2014. The technology is definitely evolving. NASA is working towards extremely low-cost deployments that provide workable communications to government agencies and private concerns.
I welcome your comments.
July 3, 2014
One of the services my company (Cielo Technologies, LLC) provides is locating resources for clients, both individual and commercial. We find people and vendors that can do “stuff”: people and companies that can perform successfully, on time, to the specifications given as part of a contractual arrangement. In short, we provide sourcing services for commercial concerns, ones that can get the job done.
In 2006, I received a call from a manufacturing company providing extension springs for doors used on residential cooking products. This company has been in business since 1974, with springs being the first product produced. Due to decreasing demand for the product and increasing costs for hard-drawn and oil-tempered wire, management made the decision to out-source manufacturing efforts. I immediately started searching for vendors, both domestic and foreign, and eventually interviewed thirty-seven (37) companies for the products required. During that search, the name ALIBABA came up frequently—very frequently. Let’s take a look at this company.
Alibaba Group was established in 1999 by 18 people led by Jack Ma, a former English teacher from Hangzhou, China. Jack Ma chose the name because it is well-known around the world and can be easily pronounced in many languages. According to Mr. Ma, “One day I was in San Francisco in a coffee shop, and I was thinking Alibaba is a good name. And then a waitress came, and I said do you know about Alibaba? And she said yes. I said what do you know about Alibaba, and she said ‘Alibaba and 40 thieves.’ And I said yes, this is the name! Then I went onto the street and found 30 people and asked them, ‘Do you know Alibaba?’ People from India, people from Germany, people from Tokyo and China… They all knew about Alibaba. Alibaba — open sesame. Alibaba is a kind, smart business person, and he helped the village. So…easy to spell, and globally known. Alibaba opens sesame for small- to medium-sized companies. We also registered the name Alimama, in case someone wants to marry us!”

E-commerce is global, so the company needed a name that was globally recognized. Alibaba brings to mind “open sesame,” representing the hope that their platforms would open a doorway to improved sales and even fortune for small businesses. From the outset, the company’s founders shared a belief that the Internet would level the playing field by enabling small enterprises to leverage innovation and technology to grow and compete more effectively in the domestic and global economies. Since launching its first website helping small Chinese exporters, manufacturers and entrepreneurs sell internationally, Alibaba Group has grown into a global leader in online and mobile commerce. Today the company and its related companies operate leading wholesale and retail online marketplaces as well as Internet-based businesses offering advertising and marketing services, electronic payment, cloud-based computing and network services and mobile solutions, among others.
As of March 31, 2014, Alibaba employed more than 22,000 people around the world. Quite a jump from the original eighteen. As of December 31, 2013, the company maintained seventy-three (73) offices in mainland China and sixteen (16) offices outside mainland China. In 2012, two of Alibaba’s portals together handled 1.1 trillion yuan ($170 billion) in sales, more than competitors eBay and Amazon.com combined. In March 2013, The Economist estimated its valuation at between $55 billion and more than $120 billion. The following timeline indicates the growth of the company.
- In May 2003, Taobao was founded as a consumer e-commerce platform.
- In December 2004, Alipay, which started as a service on the Taobao platform, became a separate business.
- In October 2005, Alibaba Group took over the operation of China Yahoo! as part of its strategic partnership with Yahoo! Inc.
- In November 2007, Alibaba.com successfully listed on the Hong Kong Stock Exchange.
- In April 2008, Taobao established Taobao Mall (Tmall.com), a retail website, to complement its C2C marketplace.
- In September 2008, Alibaba Group R&D Institute was established.
- In September 2009, Alibaba Group established Alibaba Cloud Computing in conjunction with its 10-year anniversary.
- In May 2010, Alibaba Group announced a plan to earmark 0.3% of its annual revenues to fund environmental protection initiatives.
- In October 2010, Taobao beta-launched eTao as a shopping search engine.
- In June 2011, Alibaba Group reorganized Taobao into three separate companies: Taobao Marketplace, Taobao Mall (Tmall.com) and eTao.
- In July 2011, Alibaba Cloud Computing launched its first self-developed mobile operating system, Aliyun OS, on the K-Touch Cloud Smartphone.
- In January 2012, Tmall.com changed its Chinese name as part of a rebranding exercise.
- In March 2014, Alibaba Group said it would begin the process of filing for an initial public offering in the U.S.
- Prior to its IPO filing on Form F-1 as a foreign issuer in the U.S., Alibaba undertook an aggressive acquisition spree – previously atypical for the company – acquiring numerous majority and minority stakes in companies including micro-blogging service Weibo, China Vision Holdings, and car sharing service Lyft.
- On May 6, 2014 Alibaba Group filed registration documents to go public in the U.S. in what may be one of the biggest initial public offerings in American history.
- On June 5, 2014, Alibaba Group agreed to take a 50 percent stake in Guangzhou Evergrande Football Club, winners of the 2013 Asian Champions League, for 1.2 billion yuan ($192 million).
- In June 2014, Alibaba acquired the Chinese mobile internet firm UCWeb. The price of the purchase has not been disclosed, but the company did claim that the acquisition creates the biggest merger in the history of China’s internet sector.
MARKETS AND SALES:
Mr. Ma was definitely on to something as the chart below will indicate. The projection through 2017 is dramatic.
China’s online shopping market is absolutely dominated by Alibaba.
If we look at other companies related to the Internet, we see the following, in billions:
The gross merchandise volume in 2013 looked as follows:
As you can see, the company is a “player” on the global stage.
In the very near future, Alibaba will issue an IPO. At this time, the Wall Street Journal estimates the IPO could be one of the largest in corporate history. Only time will tell.
By the way, I placed the spring business with a company in Texas. We wanted to keep the product “at home” for several reasons: 1) communication, 2) transportation, 3) import complexities, 4) changing exchange rates and 5) “Buy American.” Even so, Alibaba is still a great source for purchased products. I would invite you to take a look.
I always welcome your comments.
June 30, 2014
OK, what is an encoder? Who cares? What do they do? Why should I know about them? How are they used? Let’s first start by defining the process of encoding in general. According to the Merriam-Webster dictionary the definition of encoding is:
“to convert (as a body of information) from one system of communication into another; especially: to convert (a message) into code”.
Now that we have the definition, are there devices, mechanical or otherwise, allowing for encoding of information from one system of communication to another? A resounding YES! For our purposes, an encoder is an electromechanical device that converts information from one format or code to another, for the purposes of standardization, speed, secrecy, security or compression. Encoders are sensors for monitoring the position, angle and speed of moving mechanisms. There are applications requiring very precise placement of components relative to a datum or mating surface. Essentially, encoders can be categorized as rotary or linear. Rotary encoders are sub-divided into incremental and absolute encoders. Many processes require exact positioning of mechanisms, either linear or rotary. In some applications, such as remote surgery using robotic systems, position and angle are absolutely critical. Encoders provide this information to software and controllers.
Linear encoders are sub-divided into wire-draw and non-contact types. A linear encoder provides frictionless length measurement and position determination: it is a sensor, transducer or reading-head linked to a scale that specifies the position of a part relative to a datum point. The sensor reads the scale and converts position into an analog or digital signal that is transformed into a digital readout. Movement is determined from changes in position over time. Both optical and magnetic linear encoders function in this way; it is their physical properties that make them different.
The JPEGs below will indicate the “hardware” typically used relative to linear encoders.
A rotary encoder, also called a shaft encoder, converts the angular position or motion of a shaft or axle to an analog or digital code. A magnetic rotary encoder consists of two parts: a rotor and a sensor. The rotor turns with the shaft and contains alternating, evenly spaced north and south poles around its circumference. The sensor detects the small field shifts at these pole transitions (N to S and S to N). There are many methods of detecting magnetic field changes, but the two primary types used in encoders are Hall effect and magnetoresistive. Hall effect sensors work by detecting a change in voltage caused by magnetic deflection of electrons. Magnetoresistive sensors detect a change in resistance caused by a magnetic field.
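The incremental variety mentioned earlier can be illustrated with a small quadrature-decoding sketch. The two sensor channels (A and B) produce a two-bit Gray-code sequence, and counting valid transitions yields a signed position count. This is the generic textbook scheme, not any particular vendor’s implementation:

```python
# Valid quadrature transitions for the assumed clockwise Gray sequence
# 00 -> 01 -> 11 -> 10 -> 00; the reverse order counts down.
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b01, 0b00): -1, (0b11, 0b01): -1, (0b10, 0b11): -1, (0b00, 0b10): -1,
}

def decode(states):
    """Accumulate a signed count from successive A/B channel states."""
    count = 0
    for prev, curr in zip(states, states[1:]):
        # Unknown pairs (no change, or an invalid 2-step jump) add nothing.
        count += TRANSITIONS.get((prev, curr), 0)
    return count

# Four valid clockwise transitions make one full electrical cycle.
print(decode([0b00, 0b01, 0b11, 0b10, 0b00]))  # 4
```

Dividing the accumulated count by the encoder’s counts-per-revolution would then give shaft angle; that constant depends on the specific device.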
Two rotary encoder configurations may be seen as follows:
This type of encoder would require a shaft coupling to operate.
For this encoder, the shaft would be fitted into the opening shown and secured with key-seat or other fastening mechanism.
In each case, electrical connections are necessary to send encoded data to a software package then to a controller mechanism.
TYPICAL USES FOR ENCODERS:
The mechanical world would be a very different place if it were not for linear and rotary encoders. Let’s take a look at real-life uses for both.
- Automotive GPS and radios
- Medical equipment
- Audio/visual recording/mixing equipment
- Transportation equipment
- Fitness equipment
- Test and measurement equipment
- Agricultural equipment
- Construction equipment
- Pulse/signal generators
As with any technology, there are advantages and disadvantages:

Advantages:
- Highly reliable and accurate
- Fuses optical and digital technology
- Can be incorporated into existing applications

Disadvantages:
- Subject to magnetic or radio interference (magnetic encoders)
- Direct light-source interference (optical encoders)
- Susceptible to dirt, oil and dust contaminants
I might note the disadvantages can be compensated for by applying appropriate shielding and components to the overall assembly.
Sophisticated robotic systems use encoders in many places to ensure accuracy when mechanisms must be positioned precisely. Users of the equipment are usually oblivious to their presence. They work silently, performing predetermined tasks as dictated by software.
June 18, 2014
Several days ago I published a blog concerning “Conflict Minerals.” That legislation is a very real attempt to lessen or eliminate trade in minerals and substances mined to support destructive political actions taken to subjugate populations. Engineers and management must evaluate the products they receive to ensure none contain conflict minerals. Companion legislation has been issued by the European Union (EU) to ensure environmental issues are also addressed. RoHS is the abbreviated name for this directive. Any company doing business in the European Community must adhere to RoHS requirements. This is mandated policy affecting all manufacturers supplying domestic consumer products or commercial products. The purpose of RoHS is to require companies to quantify six (6) materials used in the manufacture and assembly of products. Let’s take a look.

The European Union set forth the RoHS (Restriction of Hazardous Substances) Directive to establish environmental guidelines and legislation to reduce the presence of six (6) materials deemed hazardous to the environment. To comply, products entering the EU must not have a homogeneous presence of these materials above the following levels by weight percentage:
- Lead (Pb) < 0.1%
- Mercury (Hg) < 0.1%
- Cadmium (Cd) < 0.01%
- Hexavalent Chromium (CrVI) < 0.1%
- Polybrominated Biphenyls (PBB) < 0.1%
- Polybrominated Diphenyl Ethers (PBDE) < 0.1%
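The limits above lend themselves to a simple screening check. The sketch below is illustrative only; the substance keys and sample material data are my own, and real compliance rests on supplier certification and laboratory testing, not a lookup table:

```python
# RoHS maximum allowed concentrations, % by weight, per homogeneous material.
ROHS_LIMITS = {
    "Pb": 0.1, "Hg": 0.1, "Cd": 0.01,
    "CrVI": 0.1, "PBB": 0.1, "PBDE": 0.1,
}

def rohs_violations(material):
    """Return the restricted substances that exceed their RoHS limit."""
    return [s for s, limit in ROHS_LIMITS.items()
            if material.get(s, 0.0) > limit]

# Example: a traditional tin-lead solder, far over the lead limit.
solder = {"Pb": 37.0, "Hg": 0.0, "Cd": 0.005}   # % by weight
print(rohs_violations(solder))                   # ['Pb']
```

Note the check is per homogeneous material, which is exactly why concentration data must flow through the supply chain to the final producer.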
RoHS compliance is determined by a combination of supplier certification and engineering design verification. The directive applies to equipment as defined by a section of the WEEE directive. The Waste Electrical and Electronic Equipment Directive (WEEE Directive) is European Community directive 2002/96/EC on waste electrical and electronic equipment (WEEE) which, together with the RoHS Directive 2002/95/EC, became European law in February 2003. The WEEE Directive set collection, recycling and recovery targets for all types of electrical goods, with a minimum rate of 4 kilograms per head of population per annum recovered for recycling by 2009. The RoHS Directive set restrictions upon European manufacturers as to the material content of new electronic equipment placed on the market.

The symbol adopted by the European Council to represent waste electrical and electronic equipment comprises a crossed-out wheelie bin, with or without a single black line underneath. The black line indicates that goods were placed on the market after 2005, when the Directive came into force. Goods without the black line were manufactured between 2002 and 2005. In such instances, these are treated as “historic WEEE” and fall outside reimbursement via producer compliance schemes. The following numeric categories apply:
- Large household appliances.
- Small household appliances.
- IT & Telecommunications equipment (although infrastructure equipment is exempt in some countries)
- Consumer equipment.
- Lighting equipment—including light bulbs.
- Electronic and electrical tools.
- Toys, leisure, and sports equipment.
- Medical devices (exemption removed in July 2011)
- Monitoring and control instruments (exemption removed in July 2011)
- Automatic dispensers.
- Semiconductor devices.
Batteries are not included within the scope of RoHS. However, in Europe, batteries fall under the European Commission’s 1991 Battery Directive (91/157/EEC), which was recently increased in scope and approved in the form of the new battery directive, version 2003/0282 COD, which will be official when submitted to and published in the EU’s Official Journal. While the first Battery Directive addressed possible trade-barrier issues brought about by disparate implementation among European member states, the new directive more explicitly highlights improving and protecting the environment from the negative effects of the waste contained in batteries. It also contains a program for more ambitious recycling of industrial, automotive and consumer batteries, gradually increasing the rate of manufacturer-provided collection sites to 45% by 2016. It also sets limits of 5 ppm mercury and 20 ppm cadmium for batteries, except those used in medical, emergency or portable power-tool devices. Though not setting quantitative limits on quantities of lead, lead-acid, nickel and nickel-cadmium in batteries, it cites a need to restrict these substances and to provide for recycling up to 75% of batteries containing them. There are also provisions for marking batteries with symbols indicating metal content and recycling-collection information.

RoHS also does not apply to fixed industrial plant and tools. Compliance is the responsibility of the company that puts the product on the market, as defined in the Directive; suppliers of components and sub-assemblies are not responsible for product compliance. Of course, given that the regulation is applied at the homogeneous-material level, data on substance concentrations needs to be transferred through the supply chain to the final producer. An IPC standard, IPC-1752, has recently been developed and published to facilitate this data exchange. It is enabled through two PDF forms that are free to use.
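The battery directive’s quantitative limits can be expressed in the same screening style. Again, a hedged sketch: the exemption-category names are my own shorthand, and the directive’s actual legal scope and exemptions are more nuanced than a boolean check:

```python
# New battery directive limits as described above: 5 ppm mercury,
# 20 ppm cadmium, with exemptions for certain device categories.
BATTERY_LIMITS_PPM = {"Hg": 5, "Cd": 20}
EXEMPT_USES = {"medical", "emergency", "power-tool"}   # shorthand names

def battery_compliant(content_ppm, use="consumer"):
    """Rough screen of a battery's metal content against the directive."""
    if use in EXEMPT_USES:
        return True
    return all(content_ppm.get(metal, 0) <= limit
               for metal, limit in BATTERY_LIMITS_PPM.items())

print(battery_compliant({"Hg": 2, "Cd": 25}))              # False
print(battery_compliant({"Hg": 2, "Cd": 25}, "medical"))   # True
```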
RoHS applies to these products in the EU whether made within the EU or imported. Certain exemptions apply, and these are updated on occasion by the EU. The RoHS 2 directive (2011/65/EU) is an evolution of the original directive; it became law on 21 July 2011 and took effect 2 January 2013. It addresses the same substances as the original directive while improving regulatory conditions and legal clarity. It requires periodic re-evaluations that facilitate the gradual broadening of its requirements to cover additional electronic and electrical equipment, cables and spare parts. The CE logo now indicates compliance, and the RoHS 2 declaration of conformity is now detailed. In 2012, a final report from the European Commission revealed that some EU Member States considered all toys to be under the scope of the primary RoHS Directive 2002/95/EC, irrespective of whether their primary or secondary functions used electric currents or electromagnetic fields. With the implementation of RoHS 2, the Recast Directive 2011/65/EU, all the concerned Member States must comply with the new regulation.

The bottom line: it remains a complex world, and global issues abound. These issues affect companies trying to market and sell their products to countries far and wide. We will not be successful unless we “play their game.” Maybe that’s as it should be. I welcome your comments.
June 14, 2014
Several days ago one of my clients asked me to investigate the possible presence of conflict minerals in the fabrication and assembly of the products they provide. I had heard the phrase but, quite frankly, had to school myself on its full meaning and the possible ramifications should I find such minerals incorporated in a product or process. Here is what I found.
Let us first define conflict minerals and conflict resources. Conflict resources are natural resources extracted in conflict zones and sold to perpetuate fighting. The proceeds of the sale are used for the purchase of weapons, providing salary for members of the conflict groups and supplies needed to continue fighting. The most prominent contemporary example is the eastern provinces of the Democratic Republic of the Congo, where various armies, rebel groups, and outside actors have profited while contributing to violence and exploitation during wars within the region.
The most commonly mined conflict minerals are cassiterite (for tin), wolframite (for tungsten), coltan (for tantalum), and gold ore, which are extracted from the eastern Congo and passed through a variety of intermediaries before being purchased by multinational electronics companies. These minerals are essential in the manufacture of a variety of devices, including consumer electronics such as mobile phones, laptops and MP3 players, filaments for electric lamps, electron and television tubes, electrical contact points for automobile distributors, heating elements for electrical furnaces, and space, missile and high-temperature applications. I’m sure we are all familiar with gold, but the other three (3) minerals need some explanation.
Cassiterite is a heavy, brown-to-black mineral, tin oxide (SnO2), crystallizing in the tetragonal system. It is found as short prismatic crystals and as irregular masses, usually in veins and replacement deposits associated with granites. Since it is hard, heavy and resistant to weathering, it often concentrates in alluvial deposits derived from cassiterite-bearing rocks. It is the principal ore of tin and is mined in many countries; the most important sources are Malaysia, Thailand, China, Indonesia, Bolivia and Russia. Except for Bolivia, nearly all of this production is from alluvial deposits.
The mineral wolframite, an iron and manganese tungstate, is a principal ore of tungsten. Tungsten carbide is an important compound in the metalworking, mining and petroleum industries. Alloys such as high-speed steel, cristite and stellite, used in high-speed tools, contain tungsten. Other important tungsten compounds are calcium and magnesium tungstates, used in fluorescent lighting, and tungsten disulfide, used as a high-temperature lubricant at temperatures up to 500 °C. Tungsten compounds also find uses in the chemical, paint and tanning industries.
COLTAN OR TANTALITE:
The industry term columbite-tantalite has been shortened to “coltan” in central Africa. Tantalite is a black, heavy mineral not prized by collectors and little known outside high-technology circles. But since 2000, short-lived spikes in the price of this “black gold” have sparked environmentally damaging mining rushes in central Africa. These disruptive events have caused concern around the world. Tantalite yields the metal tantalum, whose strength, chemistry and electronic properties make it valuable in many high-tech and medical applications. Tantalum makes excellent capacitors for premium electronics, including cellular telephones and laptop computers, so the enormous worldwide demand for consumer electronics has put a strain on the supply of tantalite ore. Prices have occasionally soared to hundreds of dollars per kilogram. Central Africa is so desperately poor that thousands of men have rushed to the jungle to mine coltan. Prostitution, price-gouging and other disruptions went with them; moreover, the various armies in this war-torn region, both official and amateur, have moved in to take over the trade. Miners invade pristine forests, including the national parks. Besides destroying the land, they shoot the local wildlife—gorillas, okapis and other rare species—for food.
Gold is the biggest source of conflict mineral trade in Congo and is most responsible for the ongoing bloody conflicts. Gold has soared in price on the commodity markets in recent years, and Congo is literally sitting on a gold-mine worth tens if not hundreds of billions of dollars. Despite promises by President Kabila to clean up the mining industry, corruption remains rife and thousands of small-scale unofficial mines scatter the country.
We might add another mineral to the list: diamonds, or “blood diamonds.” Some diamonds have helped fund devastating civil wars in Africa, destroying the lives of millions. Profits from the trade in conflict diamonds, worth billions of dollars, were used by warlords and rebels to buy arms during the devastating wars in Angola, the Democratic Republic of Congo (DRC) and Sierra Leone, wars that have cost an estimated 3.7 million lives.
You can now see why these minerals have been designated “conflict minerals”. Our federal government has entered the fray with the Dodd-Frank Act.
SECTION 1502 OF THE DODD-FRANK ACT:
On August 22, 2012, the SEC issued a final rule on conflict minerals pursuant to Dodd-Frank Section 1502. The rule describes the assessment and reporting requirements for U.S. issuers whose products contain conflict minerals. These minerals – tin, tantalum, tungsten and gold – are used in a wide range of products across numerous industries. Section 1502 of the Dodd-Frank Act addresses the problem of conflict by setting requirements for due diligence, reporting and public disclosure, and is designed to ensure accountability and discourage companies from doing business in ways that ultimately support exploitation and conflict. Dodd-Frank requires the following from all companies:
- Assess: Evaluate the design, implementation or operation of an organization’s conflict minerals program against the requirements of Dodd-Frank 1502, OECD frameworks and leading practices.
- Program planning: Jump-start conflict minerals compliance by scoping program requirements and defining a roadmap with details such as key steps, resource requirements, and timing.
- Design & build: Perform detailed design and development of a company’s conflict minerals program. A pilot program may be used as part of this stage to refine approach.
- Implement & operate: Implement and operate the conflict minerals program, including management and administration of the supplier reasonable country of origin inquiry (RCOI) and due diligence processes (with appropriate management oversight).
- Audit: Provide an independent performance audit or attestation audit over the design and execution of the conflict minerals due diligence, either for an SEC registrant or for other organizations to provide to their customers.
To aid manufacturers and suppliers in this effort, a reporting template has been structured to list and categorize any use, and percentage of use, of conflict minerals. The Electronic Industry Citizenship Coalition and the Global e-Sustainability Initiative have produced an Excel-based Reporting Template that helps companies collect information related to conflict minerals. Revision two of the template allows for more detailed reporting at the product level and contains a list of known smelters around the world. These templates are the most common way to share information, but gathering the underlying data from your suppliers about the origin of the conflict minerals used in your products may require considerable effort. Copies of the template are available online, and it is straightforward to use. That sourcing question is the one issue I am having right now in fulfilling my client’s request: where do you get the information?
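The core of the reporting exercise is collecting one declaration per supplier and rolling them up by mineral, flagging gaps that need follow-up. Here is a minimal Python sketch of that roll-up; the field names (supplier, mineral, smelter) are hypothetical stand-ins, not the actual CMRT column layout:

```python
# Aggregate hypothetical supplier conflict-minerals declarations.
# A real CMRT is an Excel workbook with its own layout; this sketch
# just shows the roll-up logic on simplified records.
from collections import defaultdict

def summarize_declarations(declarations):
    """Group declarations by mineral and flag entries missing smelter data."""
    by_mineral = defaultdict(list)
    missing_smelter = []
    for d in declarations:
        by_mineral[d["mineral"]].append(d["supplier"])
        if not d.get("smelter"):
            missing_smelter.append((d["supplier"], d["mineral"]))
    return dict(by_mineral), missing_smelter

# Illustrative (made-up) supplier responses:
declarations = [
    {"supplier": "Acme Plating", "mineral": "gold", "smelter": "Smelter A"},
    {"supplier": "Beta Solder", "mineral": "tin", "smelter": ""},
    {"supplier": "Gamma Caps", "mineral": "tantalum", "smelter": "Smelter B"},
]

by_mineral, gaps = summarize_declarations(declarations)
print(by_mineral["tin"])  # ['Beta Solder']
print(gaps)               # [('Beta Solder', 'tin')] -- needs follow-up
```

The follow-up list is exactly the hard part described above: the tooling is simple, but each flagged entry means chasing a supplier for origin data.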
I welcome your comments and certainly hope this information is helpful to you.
June 3, 2014
The following facts were taken from “WaterAid.org” and “Conservation.org”.
“Water, water, everywhere, And all the boards did shrink; Water, water, everywhere, Nor any drop to drink.”
You remember this one from your high school days. “The Rime of the Ancient Mariner” is the longest major poem by the English poet Samuel Taylor Coleridge, written in 1797–98 and published in 1798 in the first edition of Lyrical Ballads. Along with the other poems in Lyrical Ballads, it signaled a shift to modern poetry and the beginning of British Romantic literature. In the mariner’s case, the water was there but undrinkable. What if you lived in an area where water was not there at all, or where you had to journey hours each day to fetch it? This is the way millions of people live on a daily basis.
In some parts of our world, water is in remarkably short supply. I have said in one or two previous posts that I believe future wars will be fought over water and not necessarily oil, gold, silver, the grab for power, territory, etc. (Of course, I have been laughed at for saying this.) The human body needs food and water to survive. A human can go for more than three weeks without food; Mahatma Gandhi survived twenty-one (21) days of complete starvation. Water is a different story. At least 60% of the adult body is made of water, and every living cell in the body needs it to keep functioning. Water acts as a lubricant for our joints, regulates our body temperature through sweating and respiration, and helps to flush waste. The maximum time an individual can go without water seems to be about a week, an estimate that would certainly be shorter in difficult conditions, like broiling heat.
Water scarcity already affects every continent and is both a natural and a human-made phenomenon. There exists enough freshwater on the planet for seven billion people, but it is distributed unevenly, and too much of it is wasted, polluted and unsustainably managed. Water scarcity is defined as the point at which the aggregate impact of all users impinges on the supply or quality of water, under prevailing institutional arrangements, to the extent that the demand from all sectors, including the environment, cannot be satisfied fully. It is a relative concept and can occur at any level of supply or demand. Scarcity may be a social construct (a product of affluence, expectations and customary behavior) or the consequence of altered supply patterns, stemming from climate change, for example.
Around 1.2 billion people, or almost one-fifth of the world’s population, live in areas of physical scarcity, and 500 million people are approaching this situation. Another 1.6 billion people, or almost one-quarter of the world’s population, face economic water shortage, meaning their countries lack the infrastructure needed to take water from rivers and aquifers.
Water scarcity is among the main problems faced by many societies in the twenty-first century, with water use having grown at more than twice the rate of population increase over the last century. Although there is no global water scarcity as such right now, an increasing number of regions are chronically short of water.
- Around 700 million people in 43 countries suffer today from water scarcity.
- By 2025, 1.8 billion people will be living in countries or regions with absolute water scarcity, and two-thirds of the world’s population could be living under water-stressed conditions.
- With the existing climate change scenario, almost half the world’s population will be living in areas of high water stress by 2030, including between 75 million and 250 million people in Africa. In addition, water scarcity in some arid and semi-arid places will displace between 24 million and 700 million people.
- Sub-Saharan Africa has the largest number of water-stressed countries of any region.
- Water covers 70% of our planet, and it is easy to think that it will always be plentiful. However, freshwater—the stuff we drink, bathe in, irrigate our farm fields with—is incredibly rare. Only 3% of the world’s water is fresh, and roughly two-thirds of that (about 68.6%) is locked away in glaciers and polar ice caps or otherwise unavailable for our use. Most of the remainder is groundwater, stored deep beneath the Earth’s surface in underground aquifers. That leaves only about 1.3% of Earth’s freshwater in surface sources such as lakes, rivers, and streams. Yet it is surface water that humans and other species rely upon for their biological needs.
- As a result, some 1.1 billion people worldwide lack access to water, and a total of 2.7 billion find water scarce for at least one month of the year. Inadequate sanitation is also a problem for 2.4 billion people—they are exposed to diseases, such as cholera and typhoid fever, and other water-borne illnesses. Two million people, mostly children, die each year from diarrheal diseases alone.
- Around the world, 768 million people don’t have access to safe water, and every day 1,400 children under the age of five die from water-related diseases.
- Many of the water systems that keep ecosystems thriving and feed a growing human population have become stressed. Rivers, lakes and aquifers are drying up or becoming too polluted to use. More than half the world’s wetlands have disappeared. Agriculture consumes more water than any other source and wastes much of that through inefficiencies. Climate change is altering patterns of weather and water around the world, causing shortages and droughts in some areas and floods in others.
- In 60 percent of European cities with more than 100,000 people, groundwater is being used at a faster rate than it can be replenished. (Source: World Business Council For Sustainable Development (WBCSD))
- Achieving universal access to safe water and sanitation would save 2.5 million lives every year.
(WHO, Global Burden of Disease 2004 Update, Geneva: WHO, 2008)
- Over 500,000 children die every year from diarrhea caused by unsafe water and poor sanitation – that’s more than 1,400 children a day.
(Inter-agency Group for Child Mortality Estimate (IGME) 2014, led by UNICEF and WHO)
- Diarrhea is the third biggest killer of children under five years old in Sub-Saharan Africa.
(Child Health Epidemiology Reference Group (CHERG) 2012)
- Every year, around 60 million children are born into homes without access to sanitation.
- For every $1 invested in water and sanitation, an average of $4 is returned in increased productivity.
(Hutton, Global costs and benefits of drinking-water supply and sanitation interventions to reach the MDG target and universal coverage, WHO, Geneva).
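The freshwater fractions quoted above imply a striking number when multiplied out. A quick back-of-envelope check, using only the percentages cited in this post:

```python
# Back-of-envelope check of the freshwater fractions cited above.
total_fresh = 0.03   # ~3% of all water on Earth is fresh
frozen = 0.686       # ~68.6% of freshwater locked in glaciers and ice caps
surface = 0.013      # ~1.3% of freshwater sits in lakes, rivers, streams

# Fraction of ALL water on Earth that is accessible surface freshwater:
surface_of_all = total_fresh * surface
print(f"{surface_of_all:.5f}")  # 0.00039, i.e. about 0.04% of all water
```

In other words, the surface water that most species depend on is roughly four ten-thousandths of the planet's water, which is why local distribution matters so much.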
What if something could be done about the water shortage? Something dramatic that would lessen the burden for millions of people in countries where water is a huge problem.
Designer Arturo Vittori believes the solution to this catastrophe lies not in high technology, but in sculptures that look like giant-sized objects from a furniture catalogue. The graphic below will show the overall design of his “water tower”. As you will see, it’s striking.
These two towers are installed and working AND producing water from atmospheric moisture. The marvelous thing to me is the accumulation of water in the most arid climates.
Vittori and his team have worked on this specific design for two years to ensure the towers are stable, efficient and easily maintained by villagers. Because the towers are built from locally sourced materials, villagers are able to maintain, repair and clean the towers themselves. Each water tower comprises two parts: a juncus or bamboo exoskeleton and an internal plastic mesh that has been likened to the bags oranges come in. The nylon and polypropylene fibers act as a scaffold for condensation; as droplets of dew form, they follow the mesh down into a water basin at the base of the structure. The nylon fibers look as follows:
His stunning water towers stand nearly thirty (30) feet tall and are estimated to collect at least 25 gallons of potable water per day, sustainably and hygienically, by harvesting atmospheric water vapor. The collection devices are called WarkaWater towers, inspired by the Warka tree, a large fig tree native to Ethiopia that is commonly used as a community gathering space. Inside each tower is the plastic mesh material, made from fibers that act as micro tunnels for daily condensation. As droplets form, they flow along the mesh pattern into the basin at the base of the tower.
“WarkaWater is designed to provide clean water as well as ensure long-term environmental, financial and social sustainability,” he says. “Once locals have the necessary know-how, they will be able to teach other villages and communities to build the WarkaWater towers.” Each tower costs approximately $550 and can be built in under a week by a four-person team using locally available materials.
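The yield and cost figures above can be put in human terms with a little arithmetic. A minimal sketch, using the post's figures of 25 gallons/day and $550 per tower; the 7.5 liters/person/day minimum for drinking and basic hygiene is an outside assumption (a commonly cited WHO planning figure), not from this post:

```python
# Rough yield arithmetic from the figures in the post.
GAL_TO_L = 3.785                       # liters per US gallon
daily_yield_l = 25 * GAL_TO_L          # ~94.6 liters per day per tower
people_served = daily_yield_l / 7.5    # assumed 7.5 L/person/day minimum
cost_usd = 550

print(round(daily_yield_l, 1))  # 94.6
print(int(people_served))       # 12 people at the assumed minimum level
```

Even at that conservative minimum, a single $550 tower covering a dozen people's basic daily water needs helps explain the design's appeal.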
The JPEG below shows a lady lacing the exoskeleton of an individual tower.
Please note the amazement of the lady looking at the tower and the work the assembly person is accomplishing.
This is the top view of the tower. I feel it is not only utilitarian but definitely a work of art. Obviously, remarkably functional and capable, under the right circumstances, of saving lives on a daily basis, not to mention lessening serious disease.
I welcome your comments.
February 10, 2014
“Of all the virtues we can learn, no trait is more useful, more essential for survival and more likely to improve the quality of life than the ability to transform adversity into an enjoyable challenge.” Those words were spoken by Mihaly Csikszentmihalyi.
Csikszentmihalyi is noted for his work in the study of happiness and creativity, but is best known as the architect of the notion of flow and for his years of research and writing on the topic. He is the author of many books and over 120 articles or book chapters. Martin Seligman, former president of the American Psychological Association, described Csikszentmihalyi as the world’s leading researcher on positive psychology. Csikszentmihalyi once said: “Repression is not the way to virtue. When people restrain themselves out of fear, their lives are by necessity diminished. Only through freely chosen discipline can life be enjoyed and still kept within the bounds of reason.” His works are influential and are widely cited. Csikszentmihalyi received his B.A. in 1960 and his PhD in 1965, both from the University of Chicago.
In his seminal work, “Flow: The Psychology of Optimal Experience”, Csikszentmihalyi outlines his theory that people are happiest when they are in a state of flow: a state of concentration or complete absorption in the activity and situation at hand. It is a state in which people are so involved in an activity that nothing else seems to matter. The idea of flow is identical to the feeling of being “in the zone” or “in the groove”. The flow state is an optimal state of intrinsic motivation, in which the person is fully immersed in what he is doing. This is a feeling everyone has at times, characterized by great absorption, engagement, fulfillment, and skill, during which temporal concerns (time, food, ego-self, etc.) are typically ignored.
Personally, I really don’t think this is a theory but an actual fact. Have you ever been so absorbed in a project or endeavor you lost track of time? I think we all have. I can remember one incident where I continued to work long after “the bell rang”. It was eight o’clock on a Friday evening and I was still working. My wife called to ask, “Are you coming home tonight?” It was a difficult project needing complete concentration on my part, and I simply lost track of time. Dr. Csikszentmihalyi indicates this “zoned-out” feeling is one means to survival and can have a great influence on improving our quality of life. It is one way to “turn lemons into lemonade”. According to psychologists dealing with cognitive issues, there is a wide array of responses to adversity, but the one that can be most debilitating is catastrophizing. People catastrophize when they turn everyday inconveniences into major setbacks and those setbacks into disasters. Catastrophizing often involves destructive rumination over bad events; these ruminations represent the most damaging connections you can make. Catastrophizing is a definite, remarkable barrier to flow. There is no “zone” if you are in a panic or bent double with worry or doubt.
One extraordinary book discussing adversity is “Adversity Quotient” by Dr. Paul G. Stoltz. I can definitely recommend this book to you. It is a marvelous “read” in which questions are asked and results scored to determine your AQ. In your brain, catastrophizing is like any other response to adversity: you are merely following a subconscious neurological groove, a pathway made more efficient and discernible by repeated use. To halt this pattern, you must interrupt it, stopping it dead in its path. You may create the neurological interrupt by using any of the following eight (8) techniques.
- Slam your palm on a very hard surface and shout “Stop!” This sounds a little sophomoric, but it has been proven to work. The very act trains your system to interrupt the “gloom and doom” scenario and get back to reality.
- Focus intently on an unrelated object. A bit quieter and less dramatic, thus better in public situations.
- Wear a rubber band on your wrist and snap it. This very physical reaction produces a slight pain that just might snap you back into reality.
- Distract yourself with an unrelated activity.
- Alter your state with exercise. Of course, this is somewhat impractical in some (maybe most) situations. You can’t drop and start doing pushups when your mind wanders to the adversity at hand.
Reframers are devices to put the adversity into perspective. Your focus becomes intense and inward.
- Refocus on your purpose. “Why am I doing this?” This reminds you of the reason you were originally involved. It can provide the “big picture” and allows you to focus. Why did you choose this job, this project, this company, this location, etc.
- Get small. Getting small involves consciously putting yourself in a situation where you are dwarfed by what surrounds you.
- Help someone else. Without a doubt, one of the most powerful tools to manage adversity is putting your problems into perspective by helping someone else with bigger problems than your own.
I read a fascinating article in “Psychology Today” regarding personality types that call every negative event a catastrophe. To my great surprise, these are for the most part Type “A” personalities: the movers, the shakers, the people needing to get on with it; those of us who make a mountain out of a molehill. I suppose I should have expected this. At any rate, there are methods to lessen the effects of the “slings and arrows of outrageous fortune.”
I certainly hope you enjoyed this post. It’s different from what I normally write about, but fascinating at any rate.
January 25, 2014
Information for this posting is derived from the following sources: Design News Daily (Ms. Ann Thryft) and NASA Ames Research Center.
It is no secret that missions sponsored by the United States using manned spacecraft have been eliminated from the federal budget. After Apollo, the “wise men” in our federal government decided there was no need to continue the effort: “We can always hitch a ride with the Russians”. Hitching a ride has turned out to be extremely expensive, not to mention giving up our hard-won position of technological dominance in that field of endeavor. One day we will wake up to discover the Chinese have landed a man on the Moon and have declared that body to be their real estate. Abdication of our position in manned space does not mean NASA is not working and shifting its focus to other areas of research. One absolutely fascinating area is described as TENSEGRITY. Tensegrity uses Super Ball Bots, which look like spheres but are constructed quite differently. They are being designed to go to Saturn’s moon Titan.
Like the robotic droplets, the Super Ball Bots’ main mission is gathering scientific data. The larger version, with a mass of 75 kg, will carry all three scientific instrumentation packages: Atmospheric and Meteorology, Analytical Chemistry, and Imaging. The smaller version, with a mass of 40 kg, will carry only the Atmospheric and Meteorology and Imaging packages. These are described in some detail in a presentation given last spring by the main researchers at NASA Ames Research Center. What’s different about them is that they’re constructed according to the principles of “tensegrity,” a term coined by Buckminster Fuller, who is known for popularizing the geodesic dome. The term combines “tension” and “structural integrity.” It works on principles of distributing force through a structure that differ from those of rigid structures. Tensegrity’s global distribution of force gives maximum strength to a structure without adding a lot of weight, and minimizes the number of points of local weakness. Many natural forms are constructed this way, such as microtubules and microfilaments within cells. The human skeleton is an example of biotensegrity.
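Maxwell's rigidity count gives a quick feel for why tensegrity structures behave so differently from rigid frames. A minimal sketch using the classic six-strut tensegrity icosahedron; this textbook geometry is an assumption for illustration, not the exact member count of NASA's design:

```python
# Maxwell-Calladine count for a pin-jointed 3-D frame:
#   b - (3n - 6) = s - m
# where b = members, n = nodes, s = states of self-stress, m = mechanisms.
struts, cables = 6, 24        # classic 6-strut tensegrity icosahedron
nodes = 2 * struts            # each strut contributes two endpoints
b = struts + cables           # total members (bars + cables)
excess = b - (3 * nodes - 6)  # equals s - m

print(nodes, b, excess)  # 12 30 0
```

The count comes out to zero: the structure has exactly as many self-stress states as mechanisms, so pre-tensioning the cables stiffens the otherwise floppy modes. That prestress-stiffened flexibility is what lets the robot absorb a hard landing and then roll.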
A description of the missions is given by NASA as follows:
Small, lightweight and low-cost missions will become increasingly important to NASA’s exploration goals. Ideally, teams of small, collapsible robots, weighing only a few kilograms apiece, will be conveniently packed during launch and would reliably separate and unpack at their destination. Such robots will allow rapid, reliable in-situ exploration of hazardous destinations such as Titan, where imprecise terrain knowledge and unstable precipitation cycles make single-robot exploration problematic. Unfortunately, landing lightweight conventional robots is difficult with current technology. Current robot designs are delicate, requiring a complex combination of devices such as parachutes, retrorockets and impact balloons to minimize impact forces and to place a robot in a proper orientation. Instead, we are developing a radically different robot based on a “tensegrity” built purely upon tensile and compression elements. Such robots can be both a landing and a mobility platform, allowing for a dramatically simpler mission profile and reduced costs. These multi-purpose robots can be lightweight, absorb strong impacts, are redundant against single-point failures, can recover from different landing orientations and are easy to collapse and uncollapse. These properties allow for unique mission profiles that can be carried out with low cost and high reliability. We believe tensegrity robot technology can play a critical role in future planetary exploration.
The phases for this effort are as follows:
Achieving Objective 1:
Our Phase II study will build a prototype tensegrity landing and mobility platform in hardware. The primary focus will be on demonstrating mobility, and formal evaluation of payload protection in hardware.
Achieving Objective 2:
In Phase II we will attempt to show that control is robust and practical. In Phase II we propose to evaluate closed-loop control methods that allow the tensegrity to sense and navigate to a desired location. In addition we will evaluate control for larger tensegrity designs, robustness in difficult environments, and extend control to low-gravity environments.
Achieving Objective 3:
Tensegrity robots have the potential to revolutionize many different mission destinations. We will extend our Phase I trade-study for a Titan mission to include the critically important thermal and energy analysis, large-scale tensegrities that are capable of offering more payload protection and improved mobility, as well as low-gravity landing and mobility analysis unique to small asteroids.
Significance to NIAC:
Completing Objective 1 will give insights into costs, performance, risks, development time and technologies that will be needed to make a viable platform. In addition it will dramatically improve confidence that tensegrity structures are a good platform for landing and mobility. Completing Objective 2 will validate that a tensegrity robot is a viable mobility platform that could dramatically reduce cost and increase reliability of missions that need mobility. Completing Objective 3 will allow us to evaluate the costs, risks and benefits for using tensegrities for a wide range of missions.
Significance to NASA in General:
Success in this study could dramatically reduce costs and increase reliability for all NASA missions that use robotics or need a landing platform.
The NASA program undertaken by Ames is fascinating. Let’s take a look. The mission itself is given with the following two slides:
The following slide indicates how the ball-bots are deployed.
You can see from the JPG above that the structure is composed of interlinking rods and tubes capable of supporting scientific instrumentation while surviving a “hard” landing on a rigid surface. The basic concepts are given below.
Three structural types were considered.
To me, the most remarkable feature is how the bots are collapsed for storage and travel.
An actual device is as follows:
There were multiple schools involved with the research and development of these devices. The NIAC project report credits twenty-six (26) students, from five (5) schools “state-side” and three (3) schools in various parts of the world. Marvelous collaboration on this one project, WITH published papers spreading the information. In my opinion, this is the way we move the ball down the field. I definitely applaud the work of NASA and the institutions involved with this project.
I definitely welcome your comments.
November 30, 2013
I wrote the following document for PDHonline.org some months ago to demonstrate the possible uses of light cure adhesives. This is a fascinating technology and one gaining importance as differing materials become commercialized. Hope you enjoy it and please send me your comments as you see fit.
At the present time, adhesive manufacturers offer products classified as Cyanoacrylates, Epoxies, Hot Melts, Silicones, Urethanes, Acrylics (one-part and two-part) and Light-cures. These classifications provide products from manufacturers with specific characteristics that allow for bonding, gasketing, potting and encapsulating, retaining, thread-locking and thread-sealing.
Light-cure adhesive technology offers a new approach to bonding similar or dissimilar substrates using either ultraviolet (UV) light or light within the visible spectrum. Extremely rapid cure times, superior depth of cure (up to four inches) and easy dispensing are only three of the benefits of using these adhesives with the appropriate processes. The newer visible light-cure materials can offer adhesion comparable to most commercially available UV adhesives, with particularly high adhesion on polycarbonate and polyvinylchloride (PVC) materials. All of this equates to lower assembly cost, more freedom when designing components and products, and the saving of valuable production time. This method of adhesion is extremely valuable when bonding thin films, when heightened safety relative to skin and eyes is needed, and when bonding heat-sensitive materials. The process can lessen, or eliminate, the need for costly and harmful chemicals in the workplace and can be solvent-free and non-hazardous. The use of light-cure adhesives results in a very clean and “friendly” worker environment with no significant material disposal costs. There is no need to mix, prime or rush to apply the adhesive, and dispensing time is minimal. We will discuss other benefits and some disadvantages later on in our course.
ADVANTAGES AND DISADVANTAGES:
Let us now list the relative advantages and disadvantages of using UV and visible light-curing adhesives.
1.) Reduced labor costs
2.) Simplified automation when automation is used
3.) Easier alignment of parts before cure
4.) Improved in-line inspection
5.) Reduced work in-process
6.) Shorter cycle times due to rapid curing of components
7.) Shorter lead times to customer possibly leading to reduced inventories
8.) Fewer assembly stations required due to rapid cure times
9.) No racking during cure
10.) No mixing generally required
11.) No pot life issues meaning generally much less waste of materials
12.) Reduced dispensing costs
13.) No hazardous waste due to purging or poor mixing
14.) No static mixers
15.) Easier to operate and maintain dispensing systems
16.) Better work acceptance
17.) No explosion proof equipment required
18.) Reduced health issues
19.) Reduced regulatory costs; i.e. reduced restrictions on volatile organic compounds
20.) Reduced disposal costs
21.) Very fast cure times
22.) Ideal for heat sensitive films and thin components
23.) Lower energy consumption required during processing of adhesive systems
24.) Visible light-cure adhesives cure through colored or tinted substrates
25.) Allows for miniaturization of component parts needing bonding or potting
26.) Improved manufacturing yield, quality and reliability
27.) Low odor
28.) RoHS compliant
29.) UL recognized materials available
30.) Low entrainment of moisture due to rapid cure times
31.) Solvent free
32.) Reduced material and process costs
As with any process or adhesive material, there are several disadvantages. These are as follows:
1.) Expenditure for curing equipment is necessary
2.) Shielding when UV light is used may be necessary
3.) UV blocking eye protection may be necessary depending upon the processing equipment
4.) A radiometer may be necessary to measure the intensity of the UV light
5.) When using UV light, the light source MUST reach the bond line if a complete cure is to be achieved. This means that transmission of light through at least one substrate is crucial. Some substrates contain UV inhibitors to lessen or eliminate degradation of the component; these inhibitors block penetration and lessen adhesion, necessitating another method of bonding. (This is by far the biggest disadvantage of UV curing.) A graphic depiction that illustrates the principle is given below.
6.) The mechanical properties may not meet specified requirements for tensile strength, shear strength, peel strength, etc.
7.) In some cases when potting depth is a factor, materials may not cure through.
8.) Rapid cure may be too fast allowing no repositioning of mating components
9.) Engineering specifications must be exact and specific denoting brand, part number and method of application relative to adhesive.
10.) Educating workers applying light-cure adhesives is a MUST.
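The bond-line transmission problem in item 5 can be illustrated with the Beer-Lambert law, which gives the fraction of incident light surviving passage through an absorbing substrate. The attenuation coefficients below are made-up values for illustration only, not measured data for any real material:

```python
# Beer-Lambert attenuation of UV light through a substrate.
# Coefficients are hypothetical, chosen only to contrast a clear
# substrate with one loaded with UV inhibitors.
import math

def transmitted_fraction(alpha_per_mm, thickness_mm):
    """Fraction of incident UV intensity reaching the bond line."""
    return math.exp(-alpha_per_mm * thickness_mm)

clear = transmitted_fraction(0.1, 3.0)      # ~74% reaches the adhesive
inhibited = transmitted_fraction(2.0, 3.0)  # ~0.25% -- cure will likely fail

print(f"{clear:.2f} {inhibited:.4f}")  # 0.74 0.0025
```

This is why a radiometer reading at the lamp is not enough: what matters is the intensity that actually arrives at the bond line after passing through the part.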
When we discuss applications, we find they generally fall into one of several basic categories; i.e. 1.) Bonding, 2.) Sealing, 3.) Cured-In-Place Gaskets, 4.) Potting and 5.) Coating. With this in mind, we can see the following product applications now using the light-cure technology:
1.) Musical instruments
2.) Sporting equipment
3.) Optics (eye glasses)
4.) Electronic assemblies
5.) Appliance assembly (refrigeration, laundry, etc.)
6.) Strain relief for wires and cord sets
7.) Conformal coating for PC boards
8.) Parts tacking
9.) Coil terminating
The development of light-curing adhesives has been enhanced by the latest generation of curing equipment. This equipment includes both flood and spot (point-source) configurations using bulb- or lamp-based systems. In addition, equipment utilizing LED technology is now available for use with these adhesives. The benefit here is that LEDs generate focused wavelengths, creating an appreciably tighter output range relative to conventional lamp technologies. Furthermore, because superfluous light and heat are not emitted, LED technology has proven to be both highly efficient and highly cost-effective. As might be expected, given their small size, LED curing systems provide a light source that is perfect for curing tiny component parts.
As you can see, many industries use this technology and as materials improve, more and more will continue to do so. FASCINATING TECHNOLOGY.
November 16, 2013
Rapid prototyping is definitely a technology that has changed, and is changing, the way companies and commercial entities do business. We can certainly say this “emerging technology” has gained tremendous momentum over the past decade. Its applications and uses represent a “best practice” for manufacturers and producers in general.
Being able to obtain prototype parts quickly allows a company to test for component form, fit and function, and can help launch a product much faster than the competition. It allows for adjustments in design, materials, size, shape, assembly, color and manufacturability of individual components and subassemblies. Rapid prototyping is one methodology that allows this to happen. It is also an extremely valuable tool for sales and marketing evaluation at the earliest stages of any program. Generally, an engineering scope study is initially performed in which all elements of the development program are evaluated. Having the ability to obtain parts “up front” provides a valuable advantage and definitely complements the decision-making process. Several rapid prototyping processes are available to today’s product design teams, while other prototyping processes utilize traditional manufacturing methods such as 1.) CNC Machining, 2.) Laser Cutting, 3.) Water Jet Cutting, 4.) EDM Machining, etc. Rapid prototyping technologies emerged in the ’80s and have improved considerably over a relatively short period of time. When I started my career as a young engineer, the only process available for obtaining and producing prototype components was as follows:
- Produce an orthogonal drawing of the component. This drawing was a two-dimensional rendition, including auxiliary views, and generally did NOT use geometric dimensioning and tolerancing methodologies, which opened the way for various interpretations of the part itself. Solid modeling did not exist at that time.
- Take that drawing or drawings to the model shop so initial prototypes could be made. Generally, one prototype would be made for immediate examination. Any remaining parts would be scheduled depending upon approval of the design engineer or engineering manager. We were after “basic intent”—that came first. When the first prototype was approved, the model shop made the others required.
- Wait anywhere from one to four or more weeks for your parts so the initial evaluation process could occur. From these initial prototypes we would examine form, fit and function.
- Apply the component to the assembly or subassembly for initial trials.
- Alter the drawing(s) to reflect needed changes.
- Resubmit the revised drawing(s) to the model shop for the first iteration of the design. (NOTE: This creates a REV 1 drawing, which continues the "paper trail" and hopefully ensures proper documentation.)
- Again, apply the component for evaluation.
- Repeat the process until engineering, engineering management, quality control, manufacturing management, etc., sign off on the components.
The entire process could take weeks or sometimes months to complete. Things have changed considerably. The advent of three-dimensional modeling, i.e. solid modeling, has given the engineer a tremendous tool for evaluating designs and iterating before the very first "hard" prototype is produced. As we shall see later on, a solid model of the component, produced using CAE and CAD techniques, is the first prerequisite for rapid prototyping. Several options are available when deciding upon the best approach to and means by which RP&M technology is used. As prototyping processes continue to evolve, product designers will need to determine which technology is best for a specific application.
INDUSTRIES USING RP&M PROCESSES:
As you might expect, many disciplines and industries are willing to take advantage of new, cost-saving, fast methods of producing component parts. RP&M has become the "best practice" and the accepted approach to "one-off" parts. Progressive companies must look past the prototyping stereotypes and develop manufacturing strategies that utilize additive manufacturing equipment, processes and materials for high-volume production. The pie chart below indicates several of the industries now taking advantage of the technology and their approximate percentage of use.
One statistic that surprised me is the percentage of use by the medical profession. I am not too surprised by the seventeen percent from automotive, because the development of stereolithography was actually co-sponsored by Chrysler Automotive. Consumer electronics, at roughly eighteen percent (18.4%), is another industry that has adopted the process and benefits from fast prototyping methodologies. When getting there first is the name of the game, being able to obtain component parts in two to three days is a remarkable advantage. Many of these products have a "lifetime" of about eighteen months at best, so time is of the essence.
The bar chart below compares sales of RP&M services provided by vendors with sales by companies providing RP&M machines to manufacturers and independent providers. As you can see, the trends are definitely upward. Rapid prototyping has found a very real place with progressive companies and institutions in this country and the world over.
There are several viable options available today that take advantage of rapid prototyping technologies. All of the methods shown below are considered to be rapid prototyping and manufacturing technologies.
- (SLA) Stereolithography
- (SLS) Selective Laser Sintering
- (FDM) Fused Deposition Modeling
- (3DP) Three Dimensional Printing
- (PJET) PolyJet
- (LOM) Laminated Object Manufacturing
Stereolithography was the first approach to rapid prototyping, and all of the other methods represent "offshoots" or variations of this one basic technology. The processes given above are termed "additive manufacturing" processes because material is "added to" the part, ultimately producing the final form detailed by the 3-D model and companion specifications. This course will address the existing technology for all of these processes and compare them so that intelligent decisions may be made as to which process is the most viable for any given part to be prototyped.
As a result of the prototyping options given above, there are many materials available to facilitate assembly and trial after completion of the model. We are going to discuss processes vs. materials vs. post-forming and secondary operations later in this course. The variety of materials available today is remarkable and to a great extent, the material selection is dependent upon the process selected. We will certainly discuss this facet of the technology.
As you might expect, there is a definite methodology for creating actual parts, and the processes do not vary greatly from method to method. We are going to detail the sequential steps in the process. This detail will form the “backbone” for later discussions involving the mechanical and electronic operation of the equipment itself. These steps apply to all of the RP&M processes.
- Create a 3-D model of the component using a computer-aided design (CAD) program. Various CAD modeling programs are available today, but the "additive manufacturing" process MUST begin by developing a three-dimensional representation of the part to be produced. It is important to note that an experienced CAD engineer/designer is an indispensable component of success. As you can see, RP&M had to wait for three-dimensional modeling before the technology could come to fruition.
- Generally, the CAD file must go through a CAD-to-RP&M translator. This step assures the CAD data is input to the modeling machine in the "tessellated" STL format, which has become the standard for RP&M processes. With this operation, the boundary surfaces of the object are represented as numerous tiny triangles. (This step is indispensable to the process!)
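To make the "tessellated" idea concrete, here is a minimal sketch in Python (not tied to any particular RP&M package) of what one ASCII STL facet looks like. The facet normal is computed from the vertices by the right-hand rule; a production STL exporter would of course emit thousands of such facets.

```python
# Minimal illustration of the ASCII STL format: the boundary surface of a
# part is approximated by triangular facets, each carrying an outward
# unit normal. This is a sketch, not a production STL exporter.

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def facet_normal(a, b, c):
    """Unit normal of triangle (a, b, c) by the right-hand rule."""
    u = tuple(b[i] - a[i] for i in range(3))
    v = tuple(c[i] - a[i] for i in range(3))
    n = cross(u, v)
    length = sum(x * x for x in n) ** 0.5
    return tuple(x / length for x in n)

def facet_to_stl(a, b, c):
    """Render one triangle as an ASCII STL facet record."""
    n = facet_normal(a, b, c)
    lines = ["  facet normal %f %f %f" % n, "    outer loop"]
    for vert in (a, b, c):
        lines.append("      vertex %f %f %f" % vert)
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

# One facet lying in the X-Y plane; its normal points along +Z.
print("solid demo")
print(facet_to_stl((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
print("endsolid demo")
```

Each facet is self-describing, which is why slicing software can consume STL files from any CAD system that can tessellate its surfaces.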
- The next step involves generating supports in a separate CAD file. CAD designers/engineers may accomplish this task directly or with special software; one such package is "Bridgeworks". Supports are needed and used for the following three reasons:
- To ensure that the re-coater blade will not strike the platform upon which the part is being built.
- To ensure that any small distortions of the platform will not lead to problems during part building.
- To provide a simple means of removing the part from the platform upon completion.
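One simple way to see where supports come from: a common heuristic, sketched below, flags any facet whose surface normal points steeply downward as an overhang that needs support. The 45-degree threshold is an assumption for illustration, a widely used rule of thumb rather than a figure from this text.

```python
import math

# Heuristic overhang check: a facet needs support if its normal points
# downward at less than `threshold_deg` from straight down. The 45-degree
# default is a common rule of thumb, not a universal constant.

def needs_support(normal, threshold_deg=45.0):
    nx, ny, nz = normal
    if nz >= 0:
        return False                 # facing sideways or up: no overhang
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    # Angle between the facet normal and straight down (0, 0, -1).
    angle_from_down = math.degrees(math.acos(-nz / length))
    return angle_from_down < threshold_deg

# A facet facing straight down clearly needs support;
# a vertical wall (normal in the X-Y plane) does not.
print(needs_support((0.0, 0.0, -1.0)))
print(needs_support((1.0, 0.0, 0.0)))
```

Support-generation software such as "Bridgeworks" does considerably more than this (it also builds the support geometry itself), but the overhang test is the essential idea.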
- Slicing—The appropriate software then "chops" the CAD model into thin layers, typically 5 to 10 layers per millimeter (mm). Software has improved greatly over the past years, and these improvements allow for much better surface finishes and much better detail in part definition. The part and supports must be sliced, or mathematically sectioned, by the computer into a series of parallel horizontal planes, like the floors of a very tall building. Also during this process, the layer thickness, the intended building style, the cure depth, the desired hatch spacing, the line-width compensation values and the shrinkage compensation factor(s) are selected and assigned.
- Merging—Next, the supports, the part and any additional supports and parts have their computer representations combined. This is crucial and allows for the production of multiple parts connected by a "web" which can be broken apart after the parts are built.
- Parameter Selection—Certain operational parameters are then selected, such as the number of re-coater blade sweeps per layer, the sweep period and the desired "Z"-wait. All of these parameters must be selected by the programmer. "Z"-wait is the time, in seconds, the system is instructed to pause after recoating; the purpose of this intentional pause is to allow any resin surface non-uniformities to undergo fluid dynamic relaxation.
- Building—Now we "build the model". The 3-D printer "paints" one layer at a time, exposing the material in the tank and hardening it. The resin polymerization process begins at this time, and the physical three-dimensional object is created. The build cycle consists of the following steps:
- Leveling—Typical resins undergo about five percent (5%) to seven percent (7%) total volumetric shrinkage. Of this amount, roughly fifty percent (50%) to seventy percent (70%) occurs in the vat as a result of laser-induced polymerization. For this reason, a level compensation module is built into the RP&M software. Upon completion of laser drawing on each layer, a sensor checks the resin level. If the sensor detects a resin level outside the tolerance band, a plunger driven by a computer-controlled precision stepper motor corrects the resin level to within the needed tolerance.
- Deep Dip—Under computer control, the "Z"-stage motor moves the platform down a prescribed amount to ensure that parts with large flat areas can be properly recoated. When the platform is lowered, a substantial depression is generated on the resin surface. The time required to close this depression has been determined from both viscous fluid dynamic analysis and experimental test results.
- Elevate—Under the influence of gravity, the resin fills the depression created during the previous step. The "Z" stage, again under computer control, now elevates the uppermost part layer above the free resin surface. This is done so that during the next step only the excess resin beyond the desired layer thickness need be moved; otherwise, additional resin would be disturbed.
- Sweep—The re-coater blade traverses the vat from front to back and sweeps the excess resin from the part. As soon as the re-coater blade has completed its motion, the system is ready for the next step.
- Platform Drop—The platform then drops a fraction of a millimeter and the cycle repeats, layer by layer, until the entire model is produced. As you can see, the thinner the layer, the finer and more detailed the resulting part.
- Draining—Upon completion, the part is drained of excess resin.
- Removal—The part is then removed from the supporting platform and readied for any post-processing operations.
- Next, heat treating and firing may occur for further hardening. This phase is termed the post-cure operation.
- After heat treating and firing, the part may be machined, sanded, painted, etc., until the final product meets the initial specifications. As mentioned earlier, there have been considerable developments in the materials used for the process, and it is entirely possible to apply the part to an assembly or subassembly so that the designed function may be observed. No longer is the component necessarily for "show and tell" only.
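The layer-by-layer build cycle described above can be sketched as a simple control loop. Everything here is illustrative: the function names, the layer thickness and the tolerance band stand in for a real machine's control software, which no public API is claimed for.

```python
# Sketch of the SLA build cycle: draw, level-check, deep dip, elevate,
# sweep, drop, repeat. Each hardware action is a stub that prints what
# the real machine would do; names and numbers are hypothetical.

LAYER_THICKNESS_MM = 0.125   # within the 5-10 layers per mm range above
LEVEL_TOLERANCE_MM = 0.05    # hypothetical resin-level tolerance band

def draw_layer(n):
    print("laser draws layer", n)

def check_resin_level():
    print("  level sensor checks resin; plunger corrects if out of band")

def deep_dip():
    print("  deep dip: platform drops so large flat areas can recoat")

def elevate():
    print("  elevate: part raised to just below the free resin surface")

def sweep():
    print("  sweep: re-coater blade removes excess resin")

def drop_platform(dz):
    print("  platform drops", dz, "mm")

def build(part_height_mm):
    layers = int(round(part_height_mm / LAYER_THICKNESS_MM))
    for layer in range(1, layers + 1):
        draw_layer(layer)
        check_resin_level()
        deep_dip()
        elevate()
        sweep()
        drop_platform(LAYER_THICKNESS_MM)
    return layers

print(build(0.5), "layers built")   # a tiny 0.5 mm tall demo "part"
```

The point of the sketch is simply that every physical step in the list above maps to one action per layer, which is why layer thickness drives both detail and build time.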
The entire procedure may take as long as 72 hours, depending upon size and complexity of the part, but the results are remarkably usable and applications are abundant.
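As a rough worked example of where those hours go (all the timing figures below are assumptions for illustration, not vendor data), build time scales with part height divided by layer thickness:

```python
# Back-of-the-envelope build-time estimate. The per-layer cycle time is
# an assumed figure; real values depend on machine, resin, and the size
# of each cross-section being drawn.

def estimated_hours(part_height_mm, layer_thickness_mm=0.125,
                    seconds_per_layer=60.0):
    layers = part_height_mm / layer_thickness_mm
    return layers * seconds_per_layer / 3600.0

# A 150 mm tall part at 0.125 mm layers is 1,200 layers; at one minute
# per layer that is already 20 hours of machine time.
print(estimated_hours(150.0), "hours")
```

Halving the layer thickness doubles the layer count, so the trade-off between surface detail and build time is direct and unavoidable.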
The applications for RP&M technology are as numerous as your imagination. With the present state of the art, extremely accurate, detailed and refined prototypes may be produced. Components and structures that were impossible or extremely difficult to model are made possible today with existing methods and equipment. We will now take a look at figures representing very “real” components fabricated with rapid modeling techniques. Some of the applications are as follows:
- Dental Prototypes
- Orthopedic Prototypes
- Sculpture prototypes
- Prototypes for manufactured components
- Items used to decorate sets for plays, operas, etc
- Forensic investigations
- Surgical procedure planning
- Molds for investment castings
- Architectural models
- Scaled models
- Complex trays for fiber optics
- Light pipes for electronic devices
In addition to speed, very fine and intricate surface finishes may be achieved, depending upon the material and process used to create the part. We have taken a look at the industries using RP&M (Figure 1), so let us now consider the various uses for the technology itself. Looking at Figure 3 below, we find the following major uses:
- Visual aids for engineering 16.5%
- Functional models 16.1%
- Fit and assembly 15.6%
- Patterns for prototype tooling 13.4%
- Patterns for cast metal 9.2%
Over seventy percent (70%) of total use falls within the five categories above. This in no way negates or lessens the importance of the other uses, but obviously visual aids, functional models and models to prove form, fit and function top the list.
DIGITAL PHOTOS DEPICTING USES FOR RP&M TECHNOLOGY:
Everyone says a "picture is worth a thousand words," so let's take a very quick pictorial look at some of the many applications noted in the text and figures above. The following JPEGs should give you an idea of the uses that exist for RP&M technologies. These digital photographs are from actual models created for very specific purposes. Let's take a look at parts actually produced by "additive" manufacturing.
These are just a few of the possibilities: great detail with remarkable surface finish. Just as the technology is improving, the materials are improving as well, giving the design engineer greater choices. I definitely hope you will use this post to investigate this remarkable technology further.