Welcome to the knowledge-based learning site. Talk is cheap, but knowledge-based talk can be expensive, especially when professionals are involved. Here's your chance to understand the basics, share a topic, or just make a comment without spending an arm and a leg. The rules for using the knowledge-based topics site are pretty straightforward. Be sure to consult a professional before acting on any of the suggestions. Thanks to all our participants who help us to make this happen. Please send your knowledge-based email to us. We will expand the site in the near future.

"Digital-to-Analog conversion"
Digital-to-analog conversion is a process in which signals having a few (usually two) defined levels or states (digital) are converted into signals having a theoretically infinite number of states (analog). A common example is the processing, by a modem, of computer data into audio-frequency (AF) tones that can be transmitted over a twisted pair telephone line. The circuit that performs this function is a digital-to-analog converter (DAC). Basically, digital-to-analog conversion is the opposite of analog-to-digital conversion. In most cases, if an analog-to-digital converter (ADC) is placed in a communications circuit after a DAC, the digital signal output is identical to the digital signal input. Also, in most instances when a DAC is placed after an ADC, the analog signal output is identical to the analog signal input. Binary digital impulses, all by themselves, appear as long strings of ones and zeros, and have no apparent meaning to a human observer. But when a DAC is used to decode the binary digital signals, meaningful output appears. This might be a voice, a picture, a musical tune, or mechanical motion. Both the DAC and the ADC are of significance in some applications of digital signal processing. The intelligibility or fidelity of an analog signal can often be improved by converting the analog input to digital form using an ADC, then clarifying the digital signal, and finally converting the "cleaned-up" digital impulses back to analog form using a DAC.
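A DAC's basic transfer function can be sketched in a few lines of code: each digital input code maps to one of a fixed set of evenly spaced output voltages. The 8-bit width and 5 V reference below are illustrative assumptions, not values from the text.

```python
# Minimal sketch of an N-bit DAC transfer function: an input code of
# N binary digits maps to one of 2**N evenly spaced output voltages.
# The 8-bit width and 5.0 V reference are illustrative assumptions.

def dac(code: int, bits: int = 8, vref: float = 5.0) -> float:
    """Return the analog output voltage for a digital input code."""
    if not 0 <= code < 2 ** bits:
        raise ValueError("code out of range")
    return vref * code / (2 ** bits - 1)

print(dac(0))     # 0.0 V (all zeros)
print(dac(255))   # 5.0 V (all ones)
print(dac(128))   # mid-scale, about 2.51 V
```

A real DAC realizes this mapping in hardware (for example with a binary-weighted resistor network); the function above only shows the code-to-voltage relationship.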
"Analog-to-Digital conversion"
Analog-to-digital conversion is an electronic process in which a continuously variable (analog) signal is changed, without altering its essential content, into a multi-level (digital) signal. The input to an analog-to-digital converter (ADC) consists of a voltage that varies among a theoretically infinite number of values. Examples are sine waves, the waveforms representing human speech, and the signals from a conventional television camera. The output of the ADC, in contrast, has defined levels or states. The number of states is almost always a power of two -- that is, 2, 4, 8, 16, etc. The simplest digital signals have only two states, and are called binary. All whole numbers can be represented in binary form as strings of ones and zeros. Digital signals propagate more efficiently than analog signals, largely because digital impulses, which are well-defined and orderly, are easier for electronic circuits to distinguish from noise, which is chaotic. This is the chief advantage of digital modes in communications. Computers "talk" and "think" in terms of binary digital data; a microprocessor can analyze analog data only after it has been converted into digital form. A typical telephone modem makes use of an ADC to convert the incoming audio from a twisted-pair line into signals the computer can understand. In a digital signal processing system, an ADC is required if the signal input is analog.
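The quantization an ADC performs, and the round-trip property described above (an ADC placed after a DAC reproduces the digital input), can be sketched as follows. The 8-bit width and 5 V reference are illustrative assumptions.

```python
# Sketch of ADC quantization plus the round-trip property: running a
# DAC's output back through an ADC recovers the original digital code.
# The 8-bit width and 5.0 V reference are illustrative assumptions.

def adc(voltage: float, bits: int = 8, vref: float = 5.0) -> int:
    """Quantize an analog voltage into one of 2**bits digital levels."""
    levels = 2 ** bits
    code = round(voltage / vref * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp to the valid range

def dac(code: int, bits: int = 8, vref: float = 5.0) -> float:
    """Map a digital code back to an analog voltage."""
    return vref * code / (2 ** bits - 1)

# An ADC placed after a DAC reproduces the digital input exactly:
assert all(adc(dac(code)) == code for code in range(256))
print(adc(2.5))   # a mid-scale voltage quantizes to code 128
```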

"Security Camera and Security Camera System"
Security cameras, or closed circuit TV (CCTV), were initially developed as a means of security for banks. Today, however, they have developed into something more than just surveillance used for commercial purposes. The technology of security cameras these days has become simple and inexpensive enough to be used in home security systems and for everyday surveillance.
Development of Technology
The first security cameras used in public places were crude and conspicuous. They used low-definition black-and-white systems without even the ability to zoom or pan. Advances in technology led to more sophisticated systems which became the precursors to the modern security camera. Today, security or CCTV cameras use small high-definition color cameras that can not only focus to resolve minute detail but also semi-automatically track objects by linking the control of the cameras to a computer. For instance, a typical modern security camera can track movement across a scene where there should be no movement. It may also be able to lock onto a single object in a busy environment and follow it. Because the system is computerized, it is now possible to let this tracking process work between cameras. In the United Kingdom, specifically in London, security cameras are used in combination with computer imaging systems to track car number plates. This is a security measure taken by the government against crimes such as car theft. Information from security cameras on car number plates is also used to generate billing information. The latest development in security cameras, aside from the new imaging techniques, is computerized monitoring. This allows the camera operator to run more cameras, since he no longer needs to endlessly look at all the screens. The computerized system of this new security camera tracks the behavior of people, searching for particular types of movement, particular types of clothing or baggage.

"Definitions/Abbreviations/Index/Units"
Alternating Current (AC) ~ means the voltage supply is constantly changing value
Amperes (A) ~ unit of current ~ usually abbreviated as Amps
Assembler ~ Software that converts your assembly code into machine codes
Assembly Code ~ Low level language mostly used for Microcontrollers such as 8051
Binary ~ implies two possible values, 0 or 1
BIOS ~ Basic Input Output System - A small program that handles basic input and output operations
Bit ~ a binary digit ~ one piece of information, either 0 or 1 (0 or 5 volts usually)
BJT ~ Bipolar Junction Transistor ~ a type of transistor
Byte ~ 8 bits
Capacitance (C) ~ has units of Farads (F) ~ usually given in micro farads (uF) ~ a measure of the amount of charge a device can store.
Capacitor ~ Stores charge (like a battery) and can be used to buffer power supply lines to provide extra charge when needed. Can also be used in other places to filter out sudden changes in voltage. The amount of charge a capacitor can store is measured by its capacitance. The unit of measurement is the Farad
CGA ~ Color Graphics Adapter
CMOS ~ Complementary Metal Oxide Semiconductor
Compiler ~ Software that converts high level language (C, Pascal, etc.) into machine codes
Current ~ Current is what flows through a wire. Think of it as water flowing in a river. The current flows from one point to another point just like water in a river. Current flows from points of high voltage to points of low voltage. Current can be shown in circuit diagrams by using arrows as in Figure 1. The arrow shows which way the current is flowing. An I is usually included beside the arrow to indicate current.
Direct Current (DC) ~ means the voltage supply has a constant output; the voltage is not varying
Decimal ~ normal number system with values 0 to 9
Digital ~ implies two possible values usually given by binary values, 0 or 1
Diode ~ component that allows current to flow in only one direction
DMA ~ Direct Memory Access
EDO ~ Extended Data Out
EGA ~ Enhanced Graphics Adapter
EMS ~ Expanded Memory Specification
Emulator ~ Software which accepts machine codes (or possibly higher level languages) and converts those commands into signals on a piece of hardware which can actually be used in place of a real micro-controller (or processor) in a physical system. Can accept signals from the other system hardware just as the real device would do.
EPROM ~ Erasable Programmable Read Only Memory - A device which can be programmed with data. Requires ultraviolet light to erase. (Usually takes 15 - 30 minutes to erase)
Farad ~ unit of measurement for capacitance
FET ~ Field Effect Transistor
Flash Memory ~ A memory device that can be quickly (<30 seconds) erased and reprogrammed without having to be erased with ultraviolet light.
Inductor ~ A device that can create and store a magnetic field
Kilo (k) ~ prefix meaning 1000 (1 kilo ohm = 1000 ohms = 1 kohm)
Large Word ~ 32 bits
LCD ~ Liquid Crystal Display
Machine Codes ~ 8, 16, or 32 bit numbers that are instructions/commands for a computer chip
Micro (u) ~ prefix meaning 0.000001 (1 micro farad = 0.000001 Farads = 1 uF)
Milli (m) ~ Prefix meaning 0.001 (1 milliAmp = 0.001 Amps = 1 mA)
MOS ~ Metal Oxide Semiconductor ~ a type of transistor material
Nibble ~ 4 bits
Node ~ a connection point between two or more components
Ohms ~ unit of measurement for resistance
Parallel Connection ~ for components ~ connecting two components with two common points
Parallel Connection ~ for computers ~ transmission of data over several parallel lines simultaneously, through a parallel port
pF ~ pico Farad
Potentiometer ~ fancy name for variable resistor
Pull Down Resistor ~ A resistor connected from any point to Ground to pull that point to Ground when no other voltages are present. Can be a large resistor for a weak pull (maybe 100K) or a small resistor for a strong pull (maybe 1K).
Pull Up Resistor ~ A resistor connected from any point to Vcc to pull that point to Vcc when no other voltages are present. Can be a large resistor for a weak pull (maybe 100K) or a small resistor for a strong pull (maybe 1K).
Quad Word ~ 64 bits
Rail ~ usually refers to a power supply node
Resistance (R) ~ measure of the opposition to current flow (higher resistance means less current flow), has units of ohms
Resistor ~ component with predetermined resistance
Series Connection ~ for components ~ connecting two components with one common point
Series Connection ~ for computers ~ transmission of data over two lines at most, one for receive, one for transmit, through a serial port
SIMM ~ Single Inline Memory Module
Simulator ~ Software that accepts machine codes (or possibly higher level languages) and simulates what a computer chip (or microcontroller) would do with those machine codes.
Transistor ~ component that acts like a switch
Transformer ~ a device which changes voltage levels
Truth Table ~ a table which gives the results of an operation
uF ~ micro Farad ~ see Micro and Farad
Unity Gain ~ An amplifier configuration where the output voltage equals the input voltage (Gain = 1)
VA ~ Volt Amps ~ a measure of apparent power (equal to Watts for a purely resistive load)
Variable Resistor ~ component that allows you to vary its resistance
Vcc ~ One of the power supply voltages, often 5 Volts DC in digital systems
VGA ~ Video Graphics Array
Voltage (V) ~ has units of volts (V)
Volts (V) ~ unit of measurement of voltage
Voltage ~ Voltage indicates the electrical potential of a point. Voltage is measured in volts. If we continue the river comparison, a point at the top of a hill would be at a high voltage level and a point at the bottom of a hill would be at a low voltage level. Then, just as water flows from a high point to a low point, current flows from a point of high voltage to a point of low voltage. If one point is at 5 volts and another point is at 0 volts then when a wire is connected between them, current will flow from the point at 5 volts to the point at 0 volts. There are two special cases that we give names. One is when the current is zero (open circuit) and the other is when the voltage is zero (short circuit).
Watts ~ A measure of power found by multiplying the Voltage by the Current
Word ~ 16 bits
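The voltage, current, resistance, and power entries above are tied together by Ohm's law (I = V / R) and the power relation (P = V x I). A small sketch using the units defined in this list; the 5 V and 1 kohm values are illustrative.

```python
# Ohm's law (I = V / R) and power (P = V * I), using the units defined
# above: volts, amps, ohms, and watts. Example values are illustrative.

def current_amps(voltage_v: float, resistance_ohm: float) -> float:
    return voltage_v / resistance_ohm   # I = V / R

def power_watts(voltage_v: float, amps: float) -> float:
    return voltage_v * amps             # P = V * I

i = current_amps(5.0, 1000.0)   # 5 V across a 1 kohm resistor
print(i)                        # 0.005 A, i.e. 5 mA
print(power_watts(5.0, i))      # 0.025 W, i.e. 25 mW
```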

"FUNCTIONAL MRI"
Recently it was discovered that magnetic resonance imaging can be used to map changes in brain haemodynamics that correspond to mental operations, extending traditional anatomical imaging to include maps of human brain function. The ability to observe both the structures of the brain and which of those structures participate in specific functions comes from the development of a new technique called functional magnetic resonance imaging, or fMRI. It provides high-resolution, noninvasive reports of neural activity detected by a blood-oxygen-level-dependent signal. This ability to directly observe brain function opens several new opportunities to advance human understanding of brain organization, as well as a potential new standard for assessing neurological status and risk in neurosurgery. Functional MRI is based on the increase in blood flow to the local blood vessels during neural activity in the brain. This results in a corresponding local reduction in deoxyhaemoglobin, because the increase in blood flow occurs without an increase of similar magnitude in oxygen consumption. Since deoxyhaemoglobin is paramagnetic, it alters the magnetic resonance image signal. Thus deoxyhaemoglobin becomes an endogenous contrast-enhancing agent, and serves as the source of the signal for fMRI. Using an appropriate imaging sequence, human cortical functions can be observed without the use of exogenous contrast-enhancing agents on a clinical strength scanner. Functional activity of the brain determined from the magnetic resonance signal has so far confirmed known anatomically distinct processing areas in the visual cortex, the motor cortex, and Broca's area of speech and language-related activities.
The main advantages of fMRI as a technique to image brain activity related to a specific task or sensory process are that 1) the signal does not require injections of radioactive isotopes, 2) the total scan time required can be very short, i.e., on the order of 1.5 to 2.0 min per run (depending on the paradigm), and 3) the in-plane resolution of the functional image is generally about 1.5 x 1.5 mm, although resolutions of less than 1 mm are possible. It may be remembered that the functional images obtained by the earlier method of positron emission tomography (PET) required injections of radioactive isotopes, multiple acquisitions, and therefore extended imaging times. Furthermore, the expected resolution of PET images is much larger than the usual fMRI pixel size. PET, moreover, usually requires that multiple individual brain images be combined in order to obtain a reliable signal. Consequently, information on a single patient is compromised and limited to a certain number of imaging sessions. These limitations, while acceptable for many neuroscience applications, make PET less suitable for neurosurgical or treatment planning for a specific individual.

"Cable Modem and Signals"
In the area of science and technology, "cable modem" is a common term. However, not all people are familiar with it; some don't even know what it is. Actually, a cable modem is a special type of modem, which is a device that modulates an analog carrier signal to encode digital information and demodulates such a carrier signal to decode the transmitted information. It is designed to modulate a data signal over cable television infrastructure. According to some experts in the field, a cable modem should not be confused with older LAN systems like 10base2 or 10base5, which employed coaxial cables: electrical cables composed of a round conducting wire surrounded by an insulating spacer, a cylindrical conducting sheath, and a final insulating layer. It is also important to understand that a cable modem should not be confused with 10broad36, which employed the same kind of cable as CATV systems do. Furthermore, a cable modem is primarily utilized for delivering broadband internet access, taking advantage of unused bandwidth on a cable television network. Along with digital subscriber line technology, cable modems ushered in the age of broadband internet access in some developed countries. Before the introduction of broadband, internet access involved slow dial-up connections over the public switched telephone network. Nowadays, the users in a neighborhood usually share the available bandwidth provided by a single coaxial cable line. Because of this, the connection speed can vary depending on how many people are using the service at the same time. In most cases, this shared line is noted as the weak point of cable internet access.
Even so, cable modems are what most cable networks rely on, and the shared bandwidth is managed to ensure good network performance.

"Antenna"
An antenna is a specialized transducer that converts radio-frequency (RF) fields into alternating current (AC) or vice-versa. There are two basic types: the receiving antenna, which intercepts RF energy and delivers AC to electronic equipment, and the transmitting antenna, which is fed with AC from electronic equipment and generates an RF field. In computer and Internet wireless applications, the most common type of antenna is the dish antenna, used for satellite communications. Dish antennas are generally practical only at microwave frequencies (above approximately 3 GHz). The dish consists of a paraboloidal or spherical reflector with an active element at its focus. When used for receiving, the dish collects RF from a distant source and focuses it at the active element. When used for transmitting, the active element radiates RF that is collimated by the reflector for delivery in a specific direction. At frequencies below 3 GHz, many different types of antennas are used. The simplest is a length of wire, connected at one end to a transmitter or receiver. More often, the radiating/receiving element is placed at a distance from the transmitter or receiver, and AC is delivered to or from the antenna by means of an RF transmission line, also called a feed line or feeder.

"Liquid Crystal Communication"
Liquid crystals are a class of liquids whose molecules are more orderly than molecules in regular fluids. Because of this orderliness, when these liquids interact with light, they can affect the light like crystals do. Making droplets of liquid crystals is nothing new; the basic technology has been around since the mid-1980s. Today you can find such droplets in the window-walls of some executives' offices. With the flip of a switch, the office's transparent windows magically change to opaque walls somewhat like frosted glass.

"Hybrid Cars: The Magic Braking"
You have undoubtedly seen one of the hybrid cars on the road. You have probably heard that they are unlike any other fossil fuel or electric car. They are sort of both. If you owned one of these hybrid cars, you would put gasoline into it, just like you do for your regular car. You would not have to recharge it, like an electric car. Still, your hybrid car would be capable of using half the gasoline that your regular car does for the same trip! How is that possible? The secret is in the braking. When you step on your brakes, what happens? The car slows down because two metal blocks in your wheels rub together. This friction-based braking produces a lot of heat, just like the palms of your hands get warm when you rub them together rapidly. This heat is basically wasted energy. Hybrid cars have a more intelligent braking system, so-called regenerative braking. Instead of letting the car's kinetic energy be wasted as heat, they use the electric motor as a generator during braking, converting that energy into electricity stored in the battery (and hence self-charging), or into the spin of a flywheel, for later use. The onboard computer then calculates the best time to use this stored energy and reduce combustion engine use. Thus a hybrid car drives on its combustion engine only part of the time. This switch between combustion engine and electric motor power is in most cases so seamless that you don't even notice it. This concept is ingenious and environment-friendly.
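The amount of energy available for recovery can be estimated from the car's kinetic energy, one half of mass times speed squared. A rough sketch; the mass, speed, and 60% recovery efficiency below are illustrative assumptions, not figures from the text.

```python
# Rough sketch of the energy regenerative braking can recapture.
# The 1500 kg mass, 20 m/s speed, and 60% recovery efficiency are
# illustrative assumptions, not figures from the text.

def braking_energy_j(mass_kg: float, speed_mps: float) -> float:
    """Kinetic energy (joules) a conventional brake would turn into heat."""
    return 0.5 * mass_kg * speed_mps ** 2

def recovered_energy_j(mass_kg: float, speed_mps: float,
                       efficiency: float = 0.6) -> float:
    """Portion of that energy a regenerative system might store."""
    return braking_energy_j(mass_kg, speed_mps) * efficiency

print(braking_energy_j(1500, 20))     # ~300 kJ for a stop from 20 m/s
print(recovered_energy_j(1500, 20))   # ~180 kJ back into the battery
```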

"Smoke Detectors"
How does a smoke detector 'know' when there is a fire? Smoke detectors use one of two different methods to do their job, and for both methods the basic operating assumption is the cliche 'where there's smoke there's fire'. Smoke is, of course, essential to the operation of a smoke detector, and it is the physical interaction of smoke particles with either light or nuclear radiation that is the basis of a detector's operation. The principle by which an optical smoke detector works can be readily seen by shining a small laser pointer into a foggy sky. Rays of light can ordinarily be seen only when they shine directly into one's eyes; they cannot be seen from the side. The laser, however, is clearly visible in the fog. As the light strikes the small suspended droplets of fog, some of it is reflected away at angles to the original path, and these reflections are what make the beam of light visible from the side. In an optical smoke detector, light travels down a path from an emitter to a detector. This light passes the opening of a tube positioned at right angles to its path. When smoke enters the light path, some of the light bounces off the suspended smoke particles and passes down this tube. It is detected there by a photocell, whose current triggers the alarm sound of the smoke detector. Optical smoke detectors are good, but they can be fairly easily fooled by other air-borne materials, leading to false alarms. Ionizing smoke detectors use a very small amount of the radioactive element Americium-241 as a source of ionizing radiation. As the atoms of Am-241 break down they emit positively-charged alpha particles. These energetic, charged particles interact with nitrogen and oxygen molecules in the air to produce corresponding ions. The heart of an ionizing smoke detector is a set of electrically charged plates constructed in such a way that this constant flow of ions produces a measurable current.
When even a small amount of smoke enters an ionizing smoke detector, the smoke particles interfere with the ionization process, causing an interruption in the flow of ions to the detector plates and a loss of current to the circuit. This loss of current allows another circuit to become active, and when this happens the alarm is sounded. Ionizing smoke detectors are more common than optical smoke detectors. They are not only considerably cheaper to build, but are more sensitive to smoke itself.

"Infrared Headphones"
Infrared headphones use infrared light to carry an information signal from a transmitter to a receiver. Sounds simple enough, but the actual process is very complicated. The human ear gathers sound as compression waves pass through and distort the air. These sympathetic distortions produce resonant vibrations in parts of the ear, which in turn trigger nerve impulses that are interpreted by the brain as various sounds. In no way is the human ear equipped to utilize either electrical impulses or beams of light as sound sources. Earphones 'translate' information from these sources into something that we can hear. In typical headphones, an electrical signal travels from the signal source to a pair of tiny speakers. The speakers contain a diaphragm attached to an electromagnet. As current through the electromagnet varies with the electrical signal from the source, the diaphragm vibrates in response. These vibrations translate through the air in the wearer's ear passages and into the ear. In wireless headphones, the signal is carried by a beam of infrared light, rather than by solid wires. This requires the action of a 'translator' in the sending unit to convert the electrical signal from the source into a stream of data that can be expressed with infrared light. It also requires the action of an 'interpreter' in the receiving unit to convert the infrared data stream back into an electrical signal that will drive the small speakers of the headphones. As a data carrying device, an infrared light source may seem quite limited. It can, after all, have only two operating states: 'on' and 'off'. Yet this simple limitation lends itself perfectly to digital transmission. In this mode, the analog signal from the source can be translated into a series of 'on' and 'off' signals, forming a digital data stream. Alternatively, the infrared light can serve as the carrier for a modulated signal. 
The modulation pattern of the light can mimic the on and off signals of the digital data stream. However the infrared light is utilized, it is emitted from the source, effectively 'broadcasting' its content, to be picked up by a receiving unit. The receiving unit is the infrared sensor on the TV, the VCR or DVD machine, or on the infrared headphone set. The transmitted signal thus captured is electronically 'decoded' and converted back into the corresponding electrical impulses that drive the tiny speakers in the headset.
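The 'translator' and 'interpreter' idea above can be sketched as converting a byte stream into a series of on (1) / off (0) light states and back again. A real infrared link adds modulation and error handling; this minimal sketch shows only the round trip.

```python
# Sketch of the 'translator'/'interpreter' idea: bytes are expressed
# as a stream of on (1) / off (0) light states and decoded back.
# Real IR links add modulation and error handling; this is only the
# core round-trip idea.

def to_bits(data: bytes) -> list[int]:
    """Translate bytes into a stream of on/off states, MSB first."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def from_bits(bits: list[int]) -> bytes:
    """Interpret the on/off stream back into bytes."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

stream = to_bits(b"Hi")
print(stream[:8])          # bits for 'H' (0x48): [0, 1, 0, 0, 1, 0, 0, 0]
print(from_bits(stream))   # b'Hi' -- the round trip restores the signal
```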

"Fiber Optics "
The sun is shining; it's a brilliant day. The springboard flexes powerfully under your feet as you launch into a graceful arc through the air and into the crystal clear water below. Arms extended, you let the momentum of your dive take you back toward the surface. As you near the surface, the interface between the water and the air, you notice something interesting. You can't see out of the water! Instead, you see the inside of the pool reflected clearly in a shimmering, silvery mirror. What you have just seen is the principle that makes fiber optics both practical and functional. The phenomenon is known as 'total internal reflection', or TIR. The principle of TIR has been known or at least suspected since the 1840s, when Daniel Colladon and Jacques Babinet first designed and built water fountain displays in which the streams of water also guided or carried light to enhance the display. As the theory and understanding of the behaviour of light improved, the ability to utilize the principle of TIR also improved. In essence, an interface between two materials, such as between water and air or between glass and air, acts as a reflective surface. Glass that has been drawn into long, thin, and highly flexible fibers, and is then coated with a non-absorbing material, provides an interface that reflects essentially all light back into the fiber itself, allowing none to escape through the periphery of the glass fiber. The reflected light beam bounces back and forth from interface to interface along the length of the fiber, until it exits the end of the fiber as an exact image of the light that first entered the fiber. As a communications or message carrier, optical fibers alone are not enough. Ordinary light, and even polarized light, contains a vast range of wavelengths, all in different phases of their vibratory cycles. The laser is the final key that makes fiber optics feasible for communication purposes.
Since the light waves from a laser are all within a very narrow range of wavelengths and are all in the same phase of their vibratory cycles, the signal, and the message it carries, does not get all twisted about and mashed into an incomprehensible blur by the countless reflections experienced as it passes from one end of the fiber to the other.
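The 'silvery mirror' condition described above sets in beyond the critical angle, which Snell's law gives as arcsin(n2/n1) for light inside the denser material. A small sketch; the refractive indices below are typical textbook values, used as illustrative assumptions.

```python
import math

# Sketch of the total-internal-reflection condition: light stays inside
# the denser medium when it strikes the interface beyond the critical
# angle arcsin(n_outside / n_inside). The indices below (water/air and
# a typical fiber core/cladding pair) are illustrative textbook values.

def critical_angle_deg(n_inside: float, n_outside: float) -> float:
    """Critical angle (degrees from the normal) for total internal reflection."""
    return math.degrees(math.asin(n_outside / n_inside))

print(critical_angle_deg(1.33, 1.00))   # water to air: ~48.8 degrees
print(critical_angle_deg(1.48, 1.46))   # fiber core to cladding: ~80.6 degrees
```

The core/cladding example shows why fibers guide light so well: because the two indices are close, only rays travelling nearly parallel to the fiber axis are trapped, and those are exactly the rays that propagate along its length.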

"GPS (Global Positioning System)"
The GPS, or Global Positioning System, is the high-tech application of one of the most fundamental principles of geometry. Surveyors routinely use geometry and triangulation to map and lay out areas of land. Until recently they used high quality optical telescopes called 'theodolites' and mechanical measuring devices to carry out the surveying process. But as technology has changed, so has the surveyor's craft. The laser, digital electronics, space travel, and several other technological advances have all combined to make surveying and triangulation far more precise and accurate than they used to be, and allow measurements to be routinely obtained from distances that traditional surveyors could only dream about. GPS, the Global Positioning System, has come about as a natural development of the advances in surveying technology. It consists of a series of 24 satellites in orbit about 12,500 miles (20,200 kilometers) above Earth. Each satellite orbits Earth once every 12 hours, and each carries a highly accurate clock with the ability to measure time to 3 billionths of a second. All 24 of the satellite clocks are synchronized with each other and each one broadcasts its own time signature. The GPS receiver is programmed to read the time signature of four satellite signals, and to measure the difference in time between receipt of the four signals. Since the signals all travel at exactly the same speed, and all of the satellites are different distances away from any particular point on the planet, each signal takes a measurably different amount of time to reach a particular receiver. This time difference is used by the receiver to calculate the distance to each of the four satellite sources and thus triangulate the exact location of the receiver on the planet's surface. To complete the system, five ground stations located throughout the world monitor and maintain the proper functioning of the satellites. The GPS can fix one's location anywhere on the planet to within a few meters, and specialized survey-grade techniques can reach accuracies of inches.
This allows very precise navigation and control of the movement of people and things on the planet's surface. Unfortunately, this sort of accuracy could be useful to an enemy. For years the U.S. government intentionally scrambled the civilian signal slightly to reduce the available accuracy (a policy known as Selective Availability, discontinued in 2000), just enough to avoid untoward use of the positioning system while maintaining an acceptable degree of accuracy for the system to be generally useful. The GPS is already being used to produce the most accurate maps ever, for surveying and documentation, for prospecting, for on-the-fly navigation systems, and in agriculture to help regulate the application and use of fertilizers. Other uses for this ingenious system are being developed every day.
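The timing arithmetic the system rests on can be sketched directly: distance is the speed of light times the signal's travel time, which is why billionth-of-a-second clock accuracy matters. The ~67 ms travel time below is an illustrative value for a satellite roughly 20,000 km away.

```python
# Sketch of the GPS ranging arithmetic: distance = speed of light x
# travel time. The 67 ms travel time is an illustrative value for a
# satellite roughly 20,000 km from the receiver.

C = 299_792_458.0  # speed of light in a vacuum, metres per second

def distance_m(travel_time_s: float) -> float:
    """Range to a satellite given the signal's travel time."""
    return C * travel_time_s

print(distance_m(0.067) / 1000)   # ~20,086 km for a 67 ms travel time
print(distance_m(3e-9))           # the 3 ns clock resolution quoted
                                  # above corresponds to ~0.9 m of range
```

Ranges like these from four satellites, combined, pin down the receiver's position; the fourth measurement also lets the receiver solve for its own clock error.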

"How Can A Bullet-proof Vest Stop A Bullet? "
Here's an experiment: take the small coil springs from a dozen or so retractable pens and roll them together in a heap until they are thoroughly tangled and entwined. Now try to pull them apart from end to end. You should find them extremely difficult to pull apart this way, as anyone who has ever tried to untangle a 'Slinky' toy will know. Individually, those little coil springs offer only a little resistance and can be completely stretched out very easily. But together they seem to acquire extra strength from each other, and it becomes increasingly difficult to stretch any of them. When they are tangled together, one has to stretch all of them in order to stretch any one of them. What this experiment gives you is an analogous image of what happens inside a 'bullet-proof' vest. A bullet fired from a gun has kinetic energy and momentum due to its mass and the velocity at which it travels. That bullet carries out its function by delivering its load of kinetic energy completely to its target. When it strikes the target, the transfer of energy is achieved as the bullet stops moving; the more quickly the bullet stops, the more rapidly the energy is transferred. This is the principle behind the 'knock down power' of any bullet-cartridge combination. A bullet-proof vest accepts the energy from the bullet and dissipates it so that only a small portion is passed on to the actual target, the person who is wearing the vest. That small portion of energy will probably still be enough to knock the wearer flat on his or her backside; it still hurts a lot, and will almost certainly leave a very unpleasant bruise at the point of impact. But if the vest has done its job, the bullet has not penetrated, and the person wearing it gets to walk away essentially unharmed. The secret to this is in the material used inside the vest. Believe it or not, a bullet-proof vest is filled with nothing more than several loose layers of a light plastic fabric. But not just any plastic will do the job.
This application calls for plastic fibers of exceptionally high tensile strength: fibers that take a great deal of energy to stretch even the tiniest amount (not fibers that will stretch a lot before they break...). In this case, those fibers are made of an aramid (aromatic polyamide) plastic known familiarly as 'Kevlar'. Kevlar is the proprietary name for the material; it is becoming more common to refer to the material generically as aramid. Fibers of Kevlar don't stretch readily when put under tension. In fact, this material is even harder to stretch than steel! But it weighs a great deal less than an equivalent volume of steel fibers would weigh.
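The energy argument above can be put in rough numbers: the same kinetic energy delivered over a longer stopping distance means a much smaller average force on the target. This is only a back-of-envelope sketch; the bullet mass, velocity, and stopping distances below are assumed, illustrative figures, not measured data.

```python
# Sketch: how fast a bullet stops sets how hard it hits.
# All numbers here are assumed for illustration.

def average_force_n(mass_kg: float, speed_m_s: float, stop_dist_m: float) -> float:
    """Average force = kinetic energy / stopping distance (work-energy theorem)."""
    kinetic_energy = 0.5 * mass_kg * speed_m_s ** 2   # joules
    return kinetic_energy / stop_dist_m

# A ~8 g handgun bullet at ~360 m/s carries roughly 520 J.
bare = average_force_n(0.008, 360, 0.002)  # stopping abruptly in ~2 mm
vest = average_force_n(0.008, 360, 0.04)   # spread over ~4 cm of deforming vest
print(f"abrupt stop: {bare:,.0f} N; vest stop: {vest:,.0f} N")
```

Same energy either way; spreading the stop over twenty times the distance cuts the average force by a factor of twenty.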

"How We Use Crystals To Tell Time"
Quartz clock operation is based on the piezoelectric property of quartz crystals. If you apply an electric field to the crystal, it changes its shape; if you squeeze or bend it, it generates an electric field. When the crystal is placed in a suitable electronic circuit, this interaction between mechanical stress and electric field causes it to vibrate and generate an electric signal of relatively constant frequency that can be used to operate an electronic clock display. Quartz crystal clocks were better than the mechanical clocks that preceded them because they had no gears or escapements to disturb their regular frequency. Even so, they still relied on a mechanical vibration whose frequency depended critically on the crystal's size, shape and temperature, so no two crystals could be exactly alike, with exactly the same frequency. Such quartz clocks and watches continue to dominate the market in numbers because their performance is excellent for their price. But the timekeeping performance of quartz clocks has been substantially surpassed by atomic clocks.
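A concrete example of the constant-frequency signal in action: common watch crystals are cut to vibrate at 32,768 Hz. That number is chosen because it is exactly 2 to the 15th power, so a chain of fifteen simple divide-by-two circuits (flip-flops) reduces it to the one pulse per second that drives the display. A minimal sketch of that divider chain:

```python
# Why watch crystals run at 32,768 Hz: it is 2**15, so fifteen
# divide-by-two stages bring it down to exactly 1 Hz.

freq = 32_768   # crystal frequency in Hz
stages = 0
while freq > 1:
    freq //= 2  # each flip-flop halves the frequency
    stages += 1

print(stages, freq)  # 15 stages, ending at 1 Hz
```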

"Pass the Basalt"
Advanced composite materials technology is a field that is growing both quickly and steadily. That new fiber materials and applications will be developed is the proverbial 'no brainer'. However, basalt fiber represents one of those little strokes of simple genius that appear once in a while. Basalt itself is familiar from the columnar formations in volcanic deposits. That same columnar structure is a clue to the molecular behaviour of basalt, a hint that it might be a viable fiber-forming material. Molten basalt can indeed be extruded into fibers, but what was basalt in the first place if not molten rock ejected from the vent (a volcano...) of a very large furnace (the Earth...)? Where else do we see this happening? How about in the metals industry, where millions of tons of molten rock are ejected from somewhat smaller furnaces each year in the form of slag? Indeed, basalt fiber is now produced in quantity in two source grades: 'basalt', and 'modified basalt' or slag. Basalt fibers can be processed into all the fabric forms currently available with glass fiber, and they can be substituted directly into any application for which glass fiber is suitable. Basalt fiber materials are proving to be a very useful alternative in applications calling for a more robust version of glass fiber, and in other applications that have traditionally been the domain of rock fibers such as asbestos. Since basalt is itself a rock fiber, it exhibits far better heat resistance than glass fiber does, withstanding conditions that would quickly destroy glass constructs. It also exhibits significantly higher chemical stability than glass fiber. Because basalt fiber is a recently developed material, research into its potential applications has only just begun. The properties of basalt fiber all but guarantee that its major uses will be in the construction trades, but it will undoubtedly see far broader applications as well.

"Red Dot Replacing Cross Hairs"
A bullet fired from a gun becomes subject to the pull of gravity and begins to fall the instant it leaves the gun barrel. The farther from the gun the bullet travels, the lower to the ground it gets. To compensate for this, guns are sighted in such a way that the bullet is actually travelling upwards when it leaves the barrel. The bullet then follows a 'ballistic' trajectory: it rises to a maximum height, then falls until it hits either its target or the ground. Sight adjustments are made so that the bullet's flight path crosses the aiming point at a specific distance from the muzzle. The cross-hairs in a traditional telescopic sight serve as a reference point for the shooter: they are adjusted so that when the shooter looks through the sight, she or he sees the desired point of impact exactly where the cross-hairs cross. This type of sighting system requires the shooter to place his or her full attention on the weapon rather than on the target. 'Laser sights' function in much the same way. But instead of a pair of crossed hairs (traditionally made of spider silk) inside a telescope, they use a beam of laser light from a well-constructed laser device. The position of the laser beam relative to the barrel of the gun is adjusted so that the highly visible red dot of the laser coincides with the desired point of impact of the bullet at a specific distance. With this type of system, the shooter's attention is fully on the target itself rather than on a point six inches in front of the eye. He or she then has only to look for the red dot, position it on the target, and pull the trigger. In more complex, advanced weapons systems, the laser serves the same purpose, but usually in a more technological way.
Electronic detection systems 'recover' the laser signal and feed data back into critical direction/distance control systems that allow a ground-based weapon system to shoot at and reliably strike a desired target several kilometers away. In other systems, a laser beam sighted onto a target by a 'spotter' is detected by an incoming missile, which then uses that signal to guide its flight path directly to the target. The missile itself may have been fired from a hundred kilometers or more away, yet it will strike within 10 centimeters of its intended target.
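The gravity drop that sights must compensate for is easy to estimate. Over flat ground, and ignoring air drag, a bullet falls about one half of g times the flight time squared below its launch line. The muzzle velocity below is an assumed round figure for illustration, not data for any particular cartridge.

```python
# Sketch of the ballistic drop the sight corrects for.
# Assumes a flat range, no air drag, and an illustrative muzzle velocity.

G = 9.81     # gravitational acceleration, m/s^2
V = 800.0    # assumed muzzle velocity, m/s

def drop_m(distance_m: float) -> float:
    """Distance the bullet falls below its launch line after flying distance_m."""
    t = distance_m / V            # time of flight, ignoring drag
    return 0.5 * G * t * t

for d in (100, 200, 300):
    print(f"{d} m downrange: falls about {drop_m(d) * 100:.0f} cm")
```

Because the drop grows with the square of the distance, a sight zeroed at one range will shoot noticeably low at twice that range.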

"DNA Translation"
DNA translation is the process that converts an mRNA sequence into the string of amino acids that forms a protein. This fundamental process builds the proteins that make up much of every cell. It also marks the final step in the journey from DNA sequence to functional protein: the last piece of the central dogma of molecular biology.

"Catch A Shooting Star"
A meteor, sometimes called a 'shooting star,' can be the brightest object in the night sky, yet meteoroids are the smallest bodies in the solar system that can be observed by eye. Wandering through space, perhaps as debris left behind by a comet, meteoroids enter the earth's atmosphere, are heated by friction, and for a few seconds streak across the sky as meteors with glowing trails. A brilliant meteor, called a fireball, may weigh many kilograms, but even a meteor weighing less than a gram can produce a beautiful trail. Some of these visitors from space are large enough to survive (at least partially) their trip through the atmosphere and strike the ground as meteorites. Fireballs are sometimes followed by trails of light that persist for up to 30 minutes; some, called bolides, explode with a loud thunderous sound. How can a particle the size of a grain of sand produce such a spectacular sight? The answer is the speed at which the meteoroid enters the earth's atmosphere. Many meteoroids travel at 60-70 kilometers per second. During their trip through the atmosphere, meteoroids collide with air molecules, knocking away material and stripping electrons from the meteor. When the stripped atoms recapture electrons, light is emitted. The color of the light depends on the temperature and the material being 'excited.' Each day as many as 4 billion meteors, most minuscule in size, fall to earth. Their masses total several tons, seemingly a large amount, but negligible compared to the earth's total mass of 6,600,000,000,000,000,000,000 tons.
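The speed really is the whole story, because kinetic energy grows with the square of velocity. A quick calculation along the lines of the passage, comparing a one-gram meteoroid at 65 km/s with a rifle bullet (the bullet's mass and speed are assumed, typical-order figures):

```python
# Why a sand grain can light up the sky: kinetic energy scales with v**2.

def kinetic_energy_j(mass_kg: float, speed_m_s: float) -> float:
    """KE = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

grain = kinetic_energy_j(0.001, 65_000)   # 1-gram meteoroid at 65 km/s
bullet = kinetic_energy_j(0.010, 900)     # assumed ~10 g rifle bullet at 900 m/s
print(f"meteoroid: {grain:.2e} J, bullet: {bullet:.2e} J, "
      f"ratio: {grain / bullet:.0f}x")
```

A one-gram pebble at meteoric speed carries hundreds of times the energy of a rifle bullet, all of it dumped into the upper atmosphere in a few seconds.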

"Who Invented Zero?"
Many concepts that we all take for granted sounded strange and foreign when first introduced. Take the number zero for instance. Any first-grader can recognize and use zeros. It seems so logical and is such a basic part of how we do math. Zero equals nothing. What could be simpler? Yet early civilizations, even those that had a great proficiency with numbers, didn't have a concept for zero and didn't seem to miss it. Before the time of Christ, early Babylonians and Hindus from India began using a symbol that eventually evolved into our numeral 0; the zero that we use today comes from the Hindu symbol. Both cultures used it to tell one number from another. For example, to distinguish a 4 from a 400 they would use the symbol for zero twice. But they didn't use zero as a numeral. They wouldn't compute 400 - 0 = 400. This was an enormous conceptual leap nonetheless, for it led to our modern-day concept of place value. It is much easier to represent twenty bags of grain with the numeral 2 and the symbol 0 than as twenty separate marks as other cultures did. The concept of zero stayed pretty much with the peoples of the Fertile Crescent and the Indus peninsula. The Greeks and Romans didn't use zero. And neither did the post-Roman European cultures, who continued to use Roman numerals. It wasn't until the Moorish invasions of Northern Africa and Southern Europe that the concept of zero, both as a place holder and as a numeral, began making its way into Europe. The Italian mathematician Fibonacci was one of the first to present the concept of zero to Europe. Slowly, over the centuries, the Europeans began using Arabic numbers, including zeros. They were reluctant adopters, for they also continued to use Roman numerals. But zero's time had come, and that's a good thing, for advancements in mathematics lean heavily on this symbol for nothing.

"Turning Oil Into Gas"
When you see all those cars at the gas station filling up with unleaded, you may not stop to think about how that gasoline got there. It wasn't pumped out of the ground in that form. The same goes for jet fuel. It didn't start out that way; it took a long refining process to become fuel. You could never fly a jet with gasoline, but the two products come from the same source: crude oil. Many petroleum products are used to make some sort of vehicle move: gasoline, diesel fuel, and jet fuel. Other products are made from petroleum, too. You might not be surprised to know that heating oil and asphalt are made from petroleum. But what about crayons, floor polish, ice chests, mascara, volleyballs, guitar strings, roller skate wheels, bubblegum, and eyeglass frames? Most people don't realize that petroleum is the raw material for a lot more than gas. Crude oil comes out of the ground thick and dark, something like molasses. That's nothing like the gasoline that is pumped into your car! As it's refined, impurities are removed and the different products are separated out. There are three steps to the refining process: distilling, converting, and treating. Distilling: Oil is heated in large, tall towers. The heat vaporizes the components of the oil, which then separate into layers according to their boiling points. Heavier elements like sludge sink to the bottom, and lighter gases such as propane rise to the top. Suspended in the middle, you'll find the oil that will be made into gasoline and jet fuel. As the layers of light-to-heavy products separate, they're sent through pipes to different areas for further processing. Converting: To convert oils, heavy hydrocarbon molecules are 'cracked' into lighter, smaller molecules. Just as cooking changes the characteristics of foods, this 'cooking' changes the molecular structure of the oil. It is done by causing a reaction between the oil and hydrogen under high pressure and heat.
Cracking breaks about 70 percent of the petroleum into gasoline; the rest becomes diesel and jet fuel. The products are then blended with other components to create different octane levels of gasoline. Conversion finishes with reforming, in which hydrogen is removed from lower-octane gasoline to raise its octane rating. Treating: Treating removes still more impurities, such as sulfur and nitrogen, which would otherwise cause air pollution. Nitrogen goes through a process called water washing, which converts it into ammonia; the ammonia is turned into farm fertilizer. After distilling, converting, and treating, the crude oil is blended to create the finished products that we recognize. Blending helps make sure that the gas or fuel is the same every time.
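The yield figure quoted above can be turned into simple barrel arithmetic. This is only an illustration: the 70 percent gasoline figure comes from the passage, while the even diesel/jet split of the remainder is an assumption made here, not a real refinery number.

```python
# Rough arithmetic on the cracking yield quoted above:
# 70% of the cracked petroleum becomes gasoline, the rest diesel and jet fuel.
# The 50/50 diesel-vs-jet split is an assumption for illustration only.

def crack(barrels: float) -> dict:
    gasoline = barrels * 0.70
    rest = barrels - gasoline
    return {"gasoline": gasoline, "diesel": rest / 2, "jet": rest / 2}

print(crack(100))  # splits 100 barrels using the quoted 70% figure
```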

"The Truth About Atomic And Hydrogen Bombs"
In the 1930s, Enrico Fermi and other scientists studying the properties of radioactive materials observed an interesting phenomenon. They found that the readings taken with a Geiger counter were lower when taken through water than when taken through air. It wasn't immediately obvious what this meant, but they soon realized that the water was acting as a moderator, slowing down the subatomic particles emitted by the radioactive material. This observation eventually allowed the construction of the first 'atomic pile', in which a chain reaction of splitting nuclei could be maintained in a controlled manner. In a nuclear chain reaction, a particle emitted from one atomic nucleus strikes other nuclei, causing them to split apart and emit particles that in their turn strike other nuclei, and so on in a continuing process. Without the intervention of a moderating medium, the process can go on in an uncontrolled manner. Each nucleus that splits apart and emits a particle releases a certain amount of energy. When the amount of material present exceeds a certain threshold quantity, the 'critical mass', so many particles and so much energy are released that the chain reaction runs wild. This is the process of 'nuclear fission' that defines an atomic bomb. The same process, controlled with a good moderating medium, allows the gradual release and capture of the same energy, which is the basis of the nuclear power generating station. The incident at Chernobyl some years ago stands as a grim reminder of the close kinship between the destructive force of the atomic bomb and the constructive generation of electricity in the nuclear reactor. In 1952, people watched the testing of the first hydrogen bomb with some fear. For the first time in history, a force was to be purposely unleashed over which man had no control whatsoever and that served no purpose other than destruction.
There was a fear that the detonation of that first bomb would also initiate the destruction of the world. This fear was based on the exceedingly small but finite probability that the explosion would initiate an unstoppable chain reaction in the most common element in the world: hydrogen. These fears were perhaps not totally unfounded, as a rumor persists that the energy liberated by that bomb exceeded the best theoretical calculations by as much as twenty percent, raising the question of where the extra energy came from. And yet this amazingly destructive force also presents a source of hope for mankind. Research continues into ways to harness the incredible power produced by the nuclear fusion process. Success would mean abundant, cheap energy for the whole world.
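The difference between a controlled reactor and a runaway chain reaction comes down to one number: how many new fissions, on average, each fission triggers. A toy model of the generation-by-generation growth described above (the numbers are illustrative, not physical):

```python
# Toy model of a chain reaction: each generation of fissions triggers,
# on average, k new fissions. k < 1 dies out (subcritical), k = 1 holds
# steady (a controlled reactor), k > 1 runs away (a bomb).

def fissions_after(generations: int, k: float, start: float = 1.0) -> float:
    """Expected fissions in the final generation of a simple branching model."""
    n = start
    for _ in range(generations):
        n *= k          # each fission triggers k more, on average
    return n

for k in (0.9, 1.0, 1.5):
    print(f"k={k}: {fissions_after(50, k):,.3f} fissions in generation 50")
```

Fifty generations take only a tiny fraction of a second in a real device, which is why the tip from just under 1 to just over 1 is so consequential.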

"Ants Are Wimpy"

It's common knowledge that ants can lift many times their own weight. We are frequently told they can lift 10, 20, or even 50 times their weight. It is most often stated something like this: an ant can lift over its head objects that weigh 20 times what the ant weighs. This is the equivalent of a 220 pound (100 kilogram) man lifting 4,400 pounds (2,000 kilograms) over his head! Seems incredible, doesn't it? That's like lifting a new VW Beetle, with five big men inside, over your head. A person who could do that would be a superman! Are ants superstrong then? No. The reason they can lift so much more than they weigh is that they are very small. If we were that small, we could do it too. A small animal lifting many times its weight is not the same as a large animal lifting many times its weight. The reason has to do with simple geometry and the characteristics of muscles. As an animal, or any object, grows in size, its volume and weight increase much faster than its height. If a 220 lb (100 kg) man were to grow ten times taller, his weight would increase by a factor of 1,000 (the cube of the scale factor). He would weigh 220,000 lbs (100,000 kg)! The strength of his muscles, on the other hand, would increase only with their cross-sectional area, by the square of the scale factor, or a factor of 100. He would be 100 times stronger but 1,000 times heavier. His muscle strength could never grow as fast as his weight. It's simple geometry. There is little reason, then, to compare the strength of small animals to big animals when it comes to how much more than their weight they can lift.
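The square-cube argument above fits in a few lines of arithmetic: scale a body up by a factor s, and weight grows like s cubed while muscle strength grows like s squared, so the strength-to-weight ratio falls off as 1/s.

```python
# The square-cube law behind 'superstrong' ants: strength-to-weight
# ratio shrinks in proportion to body size.

def strength_to_weight(scale: float) -> float:
    strength = scale ** 2    # muscle force ~ cross-sectional area
    weight = scale ** 3      # mass ~ volume
    return strength / weight

print(strength_to_weight(10))   # a 10x-taller man keeps only 1/10 the ratio
```

Run the other way, the same formula says a creature a hundred times smaller than us gets a hundredfold boost in strength relative to its weight, which is exactly the ant's "secret."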