WELCOME


Thursday, January 19, 2012

Microwave

This post was written to fulfill an assignment for the course "Pembelajaran Fisika di Kelas Internasional" (Teaching Physics in the International Class), taught by Mr. Taufiq.

Microwaves are radio waves with wavelengths ranging from as long as one meter to as short as one millimeter, or equivalently, with frequencies between 300 MHz (0.3 GHz) and 300 GHz. This broad definition includes both UHF and EHF (millimeter waves), and various sources use different boundaries. In all cases, microwave includes the entire SHF band (3 to 30 GHz, or 10 to 1 cm) at minimum, with RF engineering often putting the lower boundary at 1 GHz (30 cm), and the upper around 100 GHz (3 mm).
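As a quick sanity check on these band boundaries, wavelength and frequency are related by λ = c/f. A minimal Python sketch (the function name is ours, for illustration):

    c = 299_792_458  # speed of light in vacuum, m/s

    def wavelength(freq_hz):
        """Free-space wavelength in metres for a given frequency in hertz."""
        return c / freq_hz

    for f in (300e6, 1e9, 3e9, 30e9, 100e9, 300e9):
        print(f"{f / 1e9:6.1f} GHz -> {wavelength(f) * 100:8.2f} cm")

Running this reproduces the figures quoted above: 300 MHz is roughly 100 cm (1 m), 1 GHz is 30 cm, 3 GHz is 10 cm, 30 GHz is 1 cm, 100 GHz is 0.30 cm (3 mm), and 300 GHz is 0.10 cm (1 mm).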
Apparatus and techniques may be described qualitatively as "microwave" when the wavelengths of signals are roughly the same as the dimensions of the equipment, so that lumped-element circuit theory is inaccurate. As a consequence, practical microwave technique tends to move away from the discrete resistors, capacitors, and inductors used with lower-frequency radio waves. Instead, distributed circuit elements and transmission-line theory are more useful methods for design and analysis. Open-wire and coaxial transmission lines give way to waveguides and stripline, and lumped-element tuned circuits are replaced by cavity resonators or resonant lines. Effects of reflection, polarization, scattering, diffraction, and atmospheric absorption usually associated with visible light are of practical significance in the study of microwave propagation. The same equations of electromagnetic theory apply at all frequencies.
The prefix "micro-" in "microwave" is not meant to suggest a wavelength in the micrometer range. It indicates that microwaves are "small" compared to waves used in typical radio broadcasting, in that they have shorter wavelengths. The boundaries between far infrared light, terahertz radiation, microwaves, and ultra-high-frequency radio waves are fairly arbitrary and are used variously between different fields of study.
Electromagnetic waves longer (lower frequency) than microwaves are called "radio waves". Electromagnetic radiation with shorter wavelengths may be called "millimeter waves", terahertz radiation or even T-rays. Definitions differ for millimeter wave band, which the IEEE defines as 110 GHz to 300 GHz.
Above 300 GHz, the absorption of electromagnetic radiation by Earth's atmosphere is so great that it is in effect opaque, until the atmosphere becomes transparent again in the so-called infrared and optical window frequency ranges.

Application: The Microwave Oven

A microwave oven (often referred to colloquially simply as a "microwave") is a kitchen appliance that heats food by dielectric heating, using microwave radiation to excite polarized molecules within the food. Microwave ovens heat foods quickly and efficiently, and because excitation is fairly uniform in the outer 1 inch (25 mm) to 1.5 inches (38 mm) of a dense (high water content) food item, food is more evenly heated throughout (except in thick dense objects) than generally occurs in other cooking techniques.
Raytheon invented the first microwave oven after World War II from radar technology developed during the war. Named the 'Radarange', it was first sold in 1947. Raytheon later licensed its patents for a home-use microwave oven that was first introduced by Tappan in 1955, but these units were still too large and expensive for general home use. The countertop microwave oven was first introduced in 1967 by the Amana Corporation, which had been acquired in 1965 by Raytheon.
Microwave ovens are popular for reheating previously cooked foods and cooking vegetables. They are also useful for rapid heating of otherwise slowly prepared cooking items, such as hot butter, fats, and melted chocolate. Unlike conventional ovens, microwave ovens usually do not directly brown or caramelize food, since they rarely attain the necessary temperatures to do so. Exceptions occur in rare cases where the oven is used to heat frying oil and other very oily items (such as bacon), which attain far higher temperatures than that of boiling water. The boiling-range temperatures produced in high-water-content foods give microwave ovens a limited role in professional cooking, since these temperatures make them unsuitable for achieving culinary effects that require the flavors produced by the higher temperatures of frying, browning, or baking. However, additional heat sources can be added to microwave packaging, or into combination microwave ovens, to produce these other heating effects, and microwave heating may cut the overall time needed to prepare such dishes.

A microwave oven works by passing non-ionizing microwave radiation, usually at a frequency of 2.45 gigahertz (GHz)—a wavelength of 122 millimetres (4.80 in)—through the food. Microwave radiation is between common radio and infrared frequencies. Water, fat, and other substances in the food absorb energy from the microwaves in a process called dielectric heating. Many molecules (such as those of water) are electric dipoles, meaning that they have a partial positive charge at one end and a partial negative charge at the other, and therefore rotate as they try to align themselves with the alternating electric field of the microwaves. Rotating molecules hit other molecules and put them into motion, thus dispersing energy. This energy, when dispersed as molecular vibration in solids and liquids (i.e., as both potential energy and kinetic energy of atoms), is heat.
Microwave heating is more efficient on liquid water than on frozen water, where the movement of molecules is more restricted. It is also less efficient on fats and sugars (which have a smaller molecular dipole moment) than on liquid water. Microwave heating is sometimes explained as a resonance of water molecules, but this is incorrect: such resonance only occurs in water vapor at much higher frequencies, at about 20 GHz. Moreover, large industrial/commercial microwave ovens operating at the common large industrial-oven microwave heating frequency of 915 MHz—wavelength 328 millimetres (12.9 in)—also heat water and food perfectly well.
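Both wavelengths quoted above follow from λ = c/f, as this small check confirms:

    c = 299_792_458  # speed of light, m/s
    for f_hz, label in ((2.45e9, "consumer oven"), (915e6, "industrial oven")):
        print(f"{label}: {c / f_hz * 1000:.0f} mm")   # -> 122 mm and 328 mm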
Sugars and triglycerides (fats and oils) absorb microwaves due to the dipole moments of their hydroxyl groups or ester groups. However, due to the lower specific heat capacity of fats and oils and their higher vaporization temperature, they often attain much higher temperatures inside microwave ovens. This can induce temperatures in oil or very fatty foods like bacon far above the boiling point of water, and high enough to induce some browning reactions, much in the manner of conventional broiling or deep fat frying. Foods high in water content and with little oil rarely exceed the temperature of boiling water.
Microwave heating can cause localized thermal runaways in some materials with low thermal conductivity which also have dielectric constants that increase with temperature. An example is glass, which can exhibit thermal runaway in a microwave to the point of melting. Additionally, microwaves can melt certain types of rocks, producing small quantities of synthetic lava. Some ceramics can also be melted, and may even become clear upon cooling. Thermal runaway is more typical of electrically conductive liquids such as salty water.
A common misconception is that microwave ovens cook food "from the inside out", meaning from the center of the entire mass of food outwards. This idea arises from heating behavior seen when an absorbent layer of water lies beneath a less absorbent, drier layer at the surface of a food; in this case, the deposition of heat inside a food can exceed that on its surface. In most cases, however, with a uniformly structured or reasonably homogeneous food item, microwaves are absorbed in the outer layers of the item in a manner somewhat similar to heat from other methods. Depending on water content, the depth of initial heat deposition may be several centimetres or more with microwave ovens, in contrast to broiling (infrared) or convection heating - methods which deposit heat thinly at the food surface. Penetration depth of microwaves is dependent on food composition and the frequency, with lower microwave frequencies (longer wavelengths) penetrating further.

Heating Efficiency
A microwave oven converts only part of its electrical input into microwave energy. A typical consumer microwave oven consumes 1100 W of electricity in producing 700 W of microwave power, an efficiency of 64%. The other 400 W are dissipated as heat, mostly in the magnetron tube. Additional power is used to operate the lamps, AC power transformer, magnetron cooling fan, food turntable motor and the control circuits. Such wasted heat, along with heat from the product being microwaved, is exhausted as warm air through cooling vents.
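The efficiency figure is simply the ratio of microwave output to electrical input; a one-line check:

    input_w, output_w = 1100, 700
    print(f"efficiency: {output_w / input_w:.0%}, dissipated: {input_w - output_w} W")
    # -> efficiency: 64%, dissipated: 400 W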

source :
http://en.wikipedia.org/wiki/Microwave_oven
http://en.wikipedia.org/wiki/Microwave

Wednesday, January 11, 2012

Seismometer

This post was written to fulfill an assignment for the course "Pembelajaran Fisika di Kelas Internasional" (Teaching Physics in the International Class), taught by Mr. Taufiq.

 
Seismometers are instruments that measure motions of the ground, including those of seismic waves generated by earthquakes, volcanic eruptions, and other seismic sources. Records of seismic waves allow seismologists to map the interior of the Earth, and locate and measure the size of these different sources.
The word derives from the Greek σεισμός, seismós, a shaking or quake, from the verb σείω, seíō, to shake; and μέτρον, métron, measure.
Seismograph is another Greek term from seismós and γράφω, gráphō, to draw. It is often used to mean seismometer, though it is more applicable to the older instruments in which the measuring and recording of ground motion were combined than to modern systems, in which these functions are separated. Both types provide a continuous record of ground motion; this distinguishes them from seismoscopes, which merely indicate that motion has occurred, perhaps with some simple measure of how large it was.

Basic principles

Inertial seismometers consist of two essential components:
  • A weight, usually called the internal mass, that can move relative to the instrument frame, but is attached to it by a system (such as a spring) that will hold it fixed relative to the frame if there is no motion, and also damp out any motions once the motion of the frame stops.
  • A means of recording the motion of the mass relative to the frame, or the force needed to keep it from moving.
Any motion of the ground moves the frame. The mass tends not to move because of its inertia, and by measuring the motion between the frame and the mass, the motion of the ground can be determined, even though the mass does move.
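A toy numerical model of this principle (our own illustrative parameters, not any particular instrument): the displacement x of the mass relative to the frame obeys m·x'' + c·x' + k·x = -m·a_ground, so recording x reveals the ground's acceleration.

    import numpy as np

    m, k = 0.5, 200.0                  # proof mass (kg) and spring constant (N/m)
    c = 0.7 * 2 * np.sqrt(k * m)       # damping at 70% of critical

    dt = 1e-3
    t = np.arange(0.0, 5.0, dt)
    a_ground = 0.01 * np.sin(2 * np.pi * 1.0 * t)   # 1 Hz ground shaking, m/s^2

    x, v, record = 0.0, 0.0, []
    for a in a_ground:
        acc = -a - (c * v + k * x) / m   # relative acceleration of the mass
        v += acc * dt                    # semi-implicit Euler integration
        x += v * dt
        record.append(x)

    print(f"peak mass-frame displacement: {1e6 * max(abs(r) for r in record):.0f} um")

The recorded relative displacement (tens of micrometres here) is what the optical, mechanical, or electronic readout described below actually measures.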
Early seismometers used optical levers or mechanical linkages to amplify the small motions involved, recording on soot-covered paper or photographic paper.
Modern instruments use electronics. In some systems, the mass is held nearly motionless relative to the frame by an electronic negative feedback loop. The motion of the mass relative to the frame is measured, and the feedback loop applies a magnetic or electrostatic force to keep the mass nearly motionless. The voltage needed to produce this force is the output of the seismometer, which is recorded digitally. In other systems the weight is allowed to move, and its motion produces a voltage in a coil attached to the mass and moving through the magnetic field of a magnet attached to the frame. This design is often used in the geophones used in seismic surveys for oil and gas.
Professional seismic observatories usually have instruments measuring three axes: north-south, east-west, and the vertical. If only one axis can be measured, this is usually the vertical because it is less noisy and gives better records of some seismic waves.
The foundation of a seismic station is critical. A professional station is sometimes mounted on bedrock. The best mountings may be in deep boreholes, which avoid thermal effects, ground noise and tilting from weather and tides. Other instruments are often mounted in insulated enclosures on small buried piers of unreinforced concrete. Reinforcing rods and aggregates would distort the pier as the temperature changes. A site is always surveyed for ground noise with a temporary installation before pouring the pier and laying conduit.

Early example 

The principle can be shown by an early special purpose seismometer. This consisted of a large stationary pendulum, with a stylus on the bottom. As the earth starts to move, the heavy mass of the pendulum has the inertia to stay still in the non-earth frame of reference. The result is that the stylus scratches a pattern corresponding with the Earth's movement. This type of strong motion seismometer recorded upon smoked glass (glass with carbon soot). While not sensitive enough to detect distant earthquakes, this instrument could indicate the direction of the pressure waves and thus help find the epicenter of a local earthquake - such instruments were useful in the analysis of the 1906 San Francisco earthquake. Further re-analysis was performed in the 1980s using these early recordings, enabling a more precise determination of the initial fault break location in Marin County and its subsequent progression, mostly to the south.

Early designs

After 1880, most seismometers were descended from those developed by the team of John Milne, James Alfred Ewing and Thomas Gray, who worked in Japan from 1880 to 1895. These seismometers used damped horizontal pendulums. After World War II, these were adapted into the widely used Press-Ewing seismometer.
Later, professional suites of instruments for the world-wide standard seismographic network had one set of instruments tuned to oscillate at fifteen seconds, and the other at ninety seconds, each set measuring in three directions. Amateurs or observatories with limited means tuned their smaller, less sensitive instruments to ten seconds. The basic damped horizontal pendulum seismometer swings like the gate of a fence. A heavy weight is mounted on the point of a long (from 10 cm to several meters) triangle, hinged at its vertical edge. As the ground moves, the weight stays unmoving, swinging the "gate" on the hinge.
The advantage of a horizontal pendulum is that it achieves very low frequencies of oscillation in a compact instrument. The "gate" is slightly tilted, so the weight tends to slowly return to a central position. The pendulum is adjusted (before the damping is installed) to oscillate once per three seconds, or once per thirty seconds. The general-purpose instruments of small stations or amateurs usually oscillate once per ten seconds. A pan of oil is placed under the arm, and a small sheet of metal mounted on the underside of the arm drags in the oil to damp oscillations. The level of oil, position on the arm, and angle and size of the sheet are adjusted until the damping is "critical", that is, on the verge of oscillation. The hinge is very low friction, often torsion wires, so the only friction is the internal friction of the wire. Small seismographs with low proof masses are placed in a vacuum to reduce disturbances from air currents.
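For a simple mass-and-spring model, critical damping corresponds to c = 2√(km); a short sketch with illustrative numbers for a three-second instrument:

    import math

    m = 1.0                                  # proof mass, kg (illustrative)
    k = 4.4                                  # effective spring constant, N/m
    period = 2 * math.pi * math.sqrt(m / k)  # natural period of oscillation
    c_crit = 2 * math.sqrt(k * m)            # critical damping coefficient
    print(f"period: {period:.1f} s, critical damping: {c_crit:.1f} N*s/m")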
Zollner described torsionally-suspended horizontal pendulums as early as 1869, but developed them for gravimetry rather than seismometry.
Early seismometers had an arrangement of levers on jeweled bearings, to scratch smoked glass or paper. Later, mirrors reflected a light beam to a direct-recording plate or roll of photographic paper. Briefly, some designs returned to mechanical movements to save money. In mid-twentieth-century systems, the light was reflected to a pair of differential electronic photosensors called a photomultiplier. The voltage generated in the photomultiplier was used to drive galvanometers which had a small mirror mounted on the axis. The moving reflected light beam would strike the surface of the turning drum, which was covered with photo-sensitive paper. The expense of developing photo sensitive paper caused many seismic observatories to switch to ink or thermal-sensitive paper.
Another relatively simple device was used in the late 19th and early 20th century. This consisted of a pendulum free to swing in any direction, with a scribe at the bottom touching a smoked glass plate. While not providing time information or information on distant earthquakes, these devices did give accurate initial shock directions and proved useful in a late-20th-century analysis of the 1906 San Francisco earthquake.

Modern instruments

Modern instruments use electronic sensors, amplifiers, and recording devices. Most are broadband covering a wide range of frequencies. Some seismometers can measure motions with frequencies from 500 Hz to 0.00118 Hz (1/500 = 0.002 seconds per cycle, to 1/0.00118 = 850 seconds per cycle). The mechanical suspension for horizontal instruments remains the garden-gate described above. Vertical instruments use some kind of constant-force suspension, such as the LaCoste suspension. The LaCoste suspension uses a zero-length spring to provide a long period (high sensitivity). Some modern instruments use a "triaxial" design, in which three identical motion sensors are set at the same angle to the vertical but 120 degrees apart on the horizontal. Vertical and horizontal motions can be computed from the outputs of the three sensors.
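A sketch of recovering vertical and horizontal motion from such a symmetric triaxial arrangement, assuming the standard "Galperin" geometry in which the three sensor axes are tilted at the same angle from the vertical (cos β = 1/√3) and spaced 120° apart in azimuth:

    import numpy as np

    beta = np.arccos(1 / np.sqrt(3))      # common tilt of each sensor axis
    azimuths = np.radians([0, 120, 240])  # sensors spaced 120 degrees apart

    # Each row is one sensor's axis expressed in (east, north, up) coordinates.
    A = np.array([[np.sin(beta) * np.cos(az),
                   np.sin(beta) * np.sin(az),
                   np.cos(beta)] for az in azimuths])

    uvw = np.array([0.30, -0.10, 0.20])   # raw outputs of the three sensors
    e, n, z = np.linalg.solve(A, uvw)     # sensor output = axis . ground motion
    print(f"east={e:.3f}  north={n:.3f}  vertical={z:.3f}")

Because each sensor output is just the projection of the ground-motion vector onto that sensor's axis, inverting the 3x3 geometry matrix recovers the conventional components.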
Seismometers unavoidably introduce some distortion into the signals they measure, but professionally-designed systems have carefully characterized frequency transforms.
Modern sensitivities come in three broad ranges: geophones, 50 to 750 V/m; local geologic seismographs, about 1,500 V/m; and teleseismographs, used for world survey, about 20,000 V/m. Instruments come in three main varieties: short period, long period and broadband. The short- and long-period instruments measure velocity and are very sensitive; however, they 'clip' the signal or go off-scale for ground motion that is strong enough to be felt by people. A 24-bit analog-to-digital conversion channel is commonplace. Practical devices are linear to roughly one part per million.
Delivered seismometers come with two styles of output: analog and digital. Analog seismographs require analog recording equipment, possibly including an analog-to-digital converter. The output of a digital seismograph can be simply input to a computer. It presents the data in a standard digital format (often "SE2" over Ethernet).


Teleseismometers

 

The modern broadband seismograph can record a very broad range of frequencies. It consists of a small "proof mass", confined by electrical forces, driven by sophisticated electronics. As the earth moves, the electronics attempt to hold the mass steady through a feedback circuit. The amount of force necessary to achieve this is then recorded.
In most designs the electronics holds a mass motionless relative to the frame. This device is called a "force balance accelerometer". It measures acceleration instead of velocity of ground movement. Basically, the distance between the mass and some part of the frame is measured very precisely, by a linear variable differential transformer. Some instruments use a linear variable differential capacitor.
That measurement is then amplified by electronic amplifiers attached to parts of an electronic negative feedback loop. One of the amplified currents from the negative feedback loop drives a coil very like a loudspeaker, except that the coil is attached to the mass, and the magnet is mounted on the frame. The result is that the mass stays nearly motionless.
Most instruments measure the ground motion directly using the distance sensor. The voltage generated in a sense coil on the mass by the magnet directly measures the instantaneous velocity of the ground. The current to the drive coil provides a sensitive, accurate measurement of the force between the mass and frame, thus directly measuring the ground's acceleration (using f = ma, where f is force, m is mass and a is acceleration).
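A sketch of that force-balance readout, with an illustrative coil force constant K in newtons per ampere (in a real instrument this value would come from calibration):

    K = 2.0        # coil force constant, N/A (hypothetical calibration value)
    m = 0.1        # proof mass, kg
    i = 2.5e-6     # measured feedback current, A

    a = K * i / m  # f = m*a with f = K*i, so a = K*i/m
    print(f"ground acceleration: {a:.2e} m/s^2")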
One of the continuing problems with sensitive vertical seismographs is the buoyancy of their masses. The uneven changes in pressure caused by wind blowing on an open window can easily change the density of the air in a room enough to cause a vertical seismograph to show spurious signals. Therefore, most professional seismographs are sealed in rigid gas-tight enclosures. For example, this is why a common Streckeisen model has a thick glass base that must be glued to its pier without bubbles in the glue.
It might seem logical to make the heavy magnet serve as the mass, but that subjects the seismograph to errors when the Earth's magnetic field moves. This is also why a seismograph's moving parts are constructed from materials that interact minimally with magnetic fields. A seismograph is also sensitive to changes in temperature, so many instruments are constructed from low-expansion materials such as nonmagnetic invar.
The hinges on a seismograph are usually patented, and by the time the patent has expired, the design has been improved. The most successful public domain designs use thin foil hinges in a clamp.
Another issue is that the transfer function of a seismograph must be accurately characterized, so that its frequency response is known. This is often the crucial difference between professional and amateur instruments. Most instruments are characterized on a variable frequency shaking table.

Strong-motion seismometers

Another type of seismometer is a digital strong-motion seismometer, or accelerograph. The data from such an instrument is essential to understand how an earthquake affects manmade structures.
A strong-motion seismometer measures acceleration. This can be mathematically integrated later to give velocity and position. Strong-motion seismometers are not as sensitive to ground motions as teleseismic instruments but they stay on scale during the strongest seismic shaking.
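A minimal sketch of that double integration with NumPy, on a synthetic acceleration trace (real strong-motion records also need baseline correction, which is skipped here):

    import numpy as np

    dt = 0.01                                  # sample interval: 100 Hz
    t = np.arange(0.0, 10.0, dt)
    accel = 0.5 * np.sin(2 * np.pi * 2.0 * t)  # synthetic acceleration, m/s^2

    velocity = np.cumsum(accel) * dt           # first integration -> m/s
    position = np.cumsum(velocity) * dt        # second integration -> m

    print(f"peak velocity: {np.abs(velocity).max():.4f} m/s")
    print(f"peak displacement: {1000 * np.abs(position).max():.2f} mm")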

Other forms

 

Accelerographs and geophones are often heavy cylindrical magnets with a spring-mounted coil inside. As the case moves, the coil tends to stay stationary, so the magnetic field cuts the wires, inducing current in the output wires. They respond to frequencies from several hundred hertz down to 4.5 Hz, or even as low as 1 Hz in higher-quality models. Some have electronic damping, a low-budget way to get some of the performance of the closed-loop wide-band geologic seismographs.
Strain-beam accelerometers constructed as integrated circuits are too insensitive for geologic seismographs (2002), but are widely used in geophones.
Some other sensitive designs measure the current generated by the flow of a non-corrosive ionic fluid through an electret sponge or a conductive fluid through a magnetic field.

Modern recording

Today, the most common recorder is a computer with an analog-to-digital converter, a disk drive and an internet connection; for amateurs, a PC with a sound card and associated software is adequate. Most systems record continuously, but some record only when a signal is detected, as shown by a short-term increase in the variation of the signal, compared to its long-term average (which can vary slowly because of changes in seismic noise).
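That detection rule is commonly implemented as a short-term-average to long-term-average (STA/LTA) ratio; a simplified sketch with illustrative window lengths and threshold:

    import numpy as np

    def sta_lta_first_trigger(signal, sta=50, lta=1000, threshold=4.0):
        """Return the first sample index where STA/LTA exceeds threshold, else None."""
        power = signal ** 2
        for i in range(lta, len(signal)):
            short = power[i - sta:i].mean()   # short-term average: recent signal
            long_ = power[i - lta:i].mean()   # long-term average: background noise
            if long_ > 0 and short / long_ > threshold:
                return i
        return None

    rng = np.random.default_rng(0)
    trace = rng.normal(0, 1, 5000)
    trace[3000:3200] += rng.normal(0, 10, 200)   # inject a synthetic "event"
    print("triggered at sample:", sta_lta_first_trigger(trace))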

Interconnected seismometers

Seismometers spaced in an array can also be used to precisely locate, in three dimensions, the source of an earthquake, using the time it takes for seismic waves to propagate away from the hypocenter, the initiating point of fault rupture (see also Earthquake location). Interconnected seismometers are also used to detect underground nuclear test explosions. These seismometers are often used as part of large-scale, multi-million-dollar governmental or scientific projects, but some organizations, such as the Quake-Catcher Network, use residential-size detectors built into computers to detect earthquakes as well.
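A toy two-dimensional sketch of the location idea: grid-search for the epicenter that best matches the arrival times at three stations, assuming a uniform wave speed and a known origin time (real methods also solve for depth and origin time):

    import numpy as np

    v = 6.0                                                        # assumed wave speed, km/s
    stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # station coords, km
    observed = np.linalg.norm(stations - [40.0, 60.0], axis=1) / v # travel times, s

    best, best_err = None, np.inf
    for x in range(101):
        for y in range(101):
            predicted = np.linalg.norm(stations - [x, y], axis=1) / v
            err = np.sum((predicted - observed) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    print("estimated epicenter:", best)    # recovers (40, 60)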
In reflection seismology, arrays of seismometers are used to image sub-surface features. The data are reduced to images using algorithms similar to tomography. The data-reduction methods resemble those of computer-aided tomographic medical imaging X-ray machines (CAT scans), or imaging sonars.
A world-wide array of seismometers can actually image the interior of the Earth in wave-speed and transmissivity. This type of system uses events such as earthquakes, impact events or nuclear explosions as wave sources. The first efforts at this method used manual data reduction from paper seismograph charts. Modern digital seismograph records are better adapted to direct computer use. With inexpensive seismometer designs and internet access, amateurs and small institutions have even formed a "public seismograph network."
Seismographic systems used for petroleum or other mineral exploration historically used an explosive and a wireline of geophones unrolled behind a truck. Now most short-range systems use "thumpers" that hit the ground, and some small commercial systems have such good digital signal processing that a few sledgehammer strikes provide enough signal for short-distance refractive surveys. Exotic cross or two-dimensional arrays of geophones are sometimes used to perform three-dimensional reflective imaging of subsurface features. Basic linear refractive geomapping software (once a black art) is available off-the-shelf, running on laptop computers, using strings as small as three geophones. Some systems now come in an 18" (0.5 m) plastic field case with a computer, display and printer in the cover.
Small seismic imaging systems are now sufficiently inexpensive to be used by civil engineers to survey foundation sites, locate bedrock, and find subsurface water.

source : http://en.wikipedia.org/wiki/Seismometer

Optical Tweezer

This post was written to fulfill an assignment for the course "Pembelajaran Fisika di Kelas Internasional" (Teaching Physics in the International Class), taught by Mr. Taufiq.

An optical tweezer, more formally called a "single-beam gradient force trap", is a scientific instrument that uses a focused laser beam to exert a small attractive or repulsive force (typically on the order of piconewtons, 10^-12 N), which depends on the refractive-index contrast between a microscopic dielectric object and its surroundings, in order to physically hold and move the object. Optical tweezers have been used successfully to investigate a variety of biological systems in recent years.

History and Development
The detection of optical scattering and gradient forces on micrometer-scale particles was first reported in 1970 by Arthur Ashkin, a scientist working at Bell Labs. A year later, Ashkin and colleagues reported the first observation of what is now commonly known as an optical trap: a focused beam of light capable of holding microscopic particles stably in three dimensions.
One of the authors of the seminal 1986 paper, Steven Chu, who would later serve as United States Secretary of Energy, went on to use optical trapping in his research on the cooling and trapping of neutral atoms. This work earned Chu the Nobel Prize in Physics, shared with Claude Cohen-Tannoudji and William D. Phillips. In an interview, Steven Chu described how Ashkin had first envisioned optical trapping as a method for capturing atoms. Ashkin was able to trap larger particles (10 to 10,000 nanometers in diameter), and this inspired Chu to extend the technique to neutral atoms (0.1 nanometer in diameter) by making use of resonant laser light and a magnetic gradient trap.
In the late 1980s, Arthur Ashkin and Joseph M. Dziedzic demonstrated the first application of the technology to the biological sciences, using it to trap a single tobacco mosaic virus and Escherichia coli bacteria. Throughout the 1990s and afterwards, researchers such as Carlos Bustamante, James Spudich, and Steven Block pioneered the use of optical trap force spectroscopy to characterize molecular-scale biological motors. These molecular motors are ubiquitous in molecular biology, and are responsible for mechanical motion and action within the cell. Optical traps allow biophysicists to observe the forces and dynamics of nanoscale motors at the single-molecule level; optical trap force spectroscopy has led to a greater understanding of the stochastic nature of these force-generating molecules.
Optical traps have proven very useful in areas outside of biology as well. For instance, in 2003 optical trapping techniques were applied to cell sorting: by creating a large-scale optical intensity pattern over the sample area, cells can be sorted according to their intrinsic optical characteristics. Optical traps have also been used to probe the cytoskeleton, measure the visco-elastic properties of biopolymers, and study cell death.


Physics of Optical Trapping

Dielectric objects are attracted to the center of the beam, slightly above the beam waist, as described in the text. The force applied to the object depends linearly on its displacement from the trap center, just as in a simple spring system.

General Explanation
Optical tweezers are capable of manipulating nanometer- and micrometer-sized dielectric particles by exerting extremely small forces via a highly focused laser beam. The beam is typically focused by sending it through a microscope objective. The narrowest point of the focused beam, known as the beam waist (so called because it resembles a waist), contains a very strong electric field gradient. Dielectric particles are therefore attracted along the gradient to the region of strongest electric field, which is the center of the beam. The laser light also tends to apply a force on particles in the beam along the direction of beam propagation. This is easy to understand if light is regarded as a stream of particles, each imparting momentum to any small dielectric particle in its path. This is known as the scattering force, and it results in the particle being held slightly downstream of the exact position of the beam waist, as shown in the figure.
Optical traps are very sensitive instruments, capable of manipulating and detecting sub-nanometer displacements of sub-micrometer dielectric particles. For this reason, they are often used to manipulate and study single molecules, by attaching a bead to the molecule of interest. DNA, and the proteins and enzymes that interact with it, are commonly studied in this way.
For quantitative scientific measurement, most optical traps are operated in such a way that the dielectric particle rarely moves far from the trap center. The reason is that the force applied to the particle is linear with respect to its displacement from the center of the trap, as long as the displacement is small. In this way, an optical trap can be treated as a simple spring system that obeys Hooke's law.
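Under that spring approximation the restoring force is simply F = -kx; a small sketch converting a measured bead displacement into a force, with an illustrative trap stiffness typical of the piconewton regime:

    k = 0.05e-3       # trap stiffness: 0.05 pN/nm expressed in N/m (illustrative)
    x = 20e-9         # bead displacement from the trap centre: 20 nm
    force = -k * x    # Hooke's law restoring force, newtons
    print(f"restoring force: {abs(force) * 1e12:.1f} pN")   # -> 1.0 pN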
A More Detailed Explanation of Optical Trapping

A more precise explanation of optical-trap behavior depends on the size of the trapped particle relative to the wavelength of the laser light used to trap it. Where the particle's dimensions are much larger than the wavelength, a simple ray-optics treatment is sufficient. Where the wavelength of the light is much larger than the particle's dimensions, the particle can be treated as an electric dipole in an electric field. For particles whose dimensions are comparable to the wavelength of the trapping beam, an accurate model must solve the time-dependent or time-harmonic Maxwell's equations with appropriate boundary conditions.


The Ray-Optics Approach


In cases where the diameter of the trapped particle is large compared to the wavelength of the light, the trapping phenomenon can be explained using ray optics. As shown in the figure, individual rays of light emitted from the laser are refracted as they pass through the dielectric bead. As a result, each ray exits in a direction different from the one in which it entered. Since light carries momentum, this change in direction indicates a change in momentum. By Newton's third law, there must be an equal and opposite momentum change imparted to the particle.

Most optical traps operate with a Gaussian beam (TEM00 mode) profile. In this case, if the particle is displaced from the center of the beam, as shown in the picture, it experiences a net force returning it to the center of the trap, because the more intense rays near the beam center impart a larger momentum change toward the trap center than the weaker peripheral rays, whose momentum change is directed away from it.

If the particle is located at the center of the beam, the individual rays refract through it symmetrically, producing zero net lateral force. The net force in this case lies along the axial direction of the trap, where it cancels the scattering force of the laser light. This cancellation of the axial gradient force and the scattering force is what causes the bead to be trapped stably slightly downstream of the beam waist.



source : http://kurniafisika.wordpress.com/2010/10/16/optical-tweezer-penjebak-optikal-sebuah-penggerak-nano-berbasis-optik/

Tuesday, January 10, 2012

Ultrasonography (USG)

This post was written to fulfill an assignment for the course "Pembelajaran Fisika di Kelas Internasional" (Teaching Physics in the International Class), taught by Mr. Taufiq.




Sonography is an application of the SONAR principle in medical imaging where the surfaces of internal organs and their inner structure can be depicted (see the image below). This imaging modality has the advantage of using nonionizing radiation and is frequently used to provide correlative anatomical information for nuclear medicine images. An overview of basic features of sonographic imaging is provided in this chapter.
We'll start with a consideration of relevant characteristics of sound waves before describing various imaging methods.

Sound Waves

Sound waves travelling through air consist of periodic fluctuations in air pressure, called compressions and rarefactions, as illustrated below:

Illustration of a vibrating tuning fork producing a sequence of compressions and rarefactions in the surrounding air.
These longitudinal waves travel with a velocity, v, of about 330 m/s in air at STP, and at higher velocities in denser media, such as water and soft tissue. Indeed a medium is needed for the waves to propagate - remember that the physics behind the statement 'In space no one can hear you scream', which was used to promote the movie Alien, is quite correct!
A sequence of compressions and rarefactions is referred to as one cycle, as illustrated. The wavelength, λ, is defined as the length of one cycle and the frequency, f, as the number of cycles which pass a fixed point every second. These quantities are related through the famous equation:
v = f λ
The human ear is sensitive to sound frequencies up to about 20 kHz, and waves of higher frequency are referred to as ultrasound. Much higher frequencies are used in diagnostic sonography, in the range 1-15 MHz. Low frequencies in this range can be used to image large deep structures, while high frequencies can be used for small, superficial objects.
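As a worked example of v = fλ: taking the average soft-tissue velocity of 1,540 m/s (see the table below), the wavelengths across the diagnostic range run from about 1.5 mm down to a tenth of a millimetre:

    v = 1540.0                        # average speed of sound in soft tissue, m/s
    for f_mhz in (1, 5, 15):
        wavelength_mm = 1000 * v / (f_mhz * 1e6)
        print(f"{f_mhz:2d} MHz -> {wavelength_mm:.2f} mm")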
Medium                  Velocity (m/s)
Air                     331
Brain                   1,541
Kidney                  1,561
Liver                   1,549
Muscle                  1,585
Fat                     1,450
Soft Tissue (average)   1,540
The velocity of ultrasound waves is identical to that of sound waves in the same medium, and is given in the table above for a range of internal organs. Note that a velocity of 1,540 m/s is generally assumed for soft tissue in sonographic imaging; it represents an average over a number of tissues, muscles and organs.
Ultrasound waves are generally produced in pulses for sonographic imaging, with the time interval between pulses used to detect ultrasound echoes produced within the body. This technique exploits what's known as the Pulse-Echo Principle, as illustrated in the diagram below. The upper half of the diagram depicts an ultrasound transducer emitting one pulse of ultrasound into a hypothetical body, which is assumed to consist of just two tissues. The lower half of the diagram depicts the situation after the ultrasound pulse has encountered the interface between the two tissues. A reflected pulse is shown travelling back towards the transducer, i.e. the echo, and a transmitted pulse is seen to continue into the second tissue.
The length of time taken for the pulse, once produced by the transducer, to travel to the interface and the echoed pulse to return is termed the pulse-echo time, t, and its measurement allows the depth, d, of the interface to be determined using the following equation:
d = (vt/2)
Note that in this equation:
  • the average velocity of ultrasound in the tissue is used, and
  • the factor of 2 arises because the pulse and its echo must travel the same distance, one from the transducer to the interface and the other from the interface back to the transducer:
Illustration of the pulse-echo principle.
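A minimal sketch of the depth calculation d = vt/2, for an echo returning 65 microseconds after the pulse was emitted, assuming the average soft-tissue velocity:

    v = 1540.0          # assumed average velocity in soft tissue, m/s
    t = 65e-6           # measured pulse-echo time, s
    depth = v * t / 2   # halved because the pulse travels out and back
    print(f"interface depth: {100 * depth:.1f} cm")   # -> 5.0 cm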
Ultrasound transducers exploit the piezoelectric effect to cause a crystal to vibrate at ultrasound frequencies. The resultant vibrations generate pulses of compressions and rarefactions which propagate through the tissues. Echoes produced by tissue interfaces are then detected by the same crystal - exploiting the piezoelectric effect once again.
The ultrasound pulse becomes attenuated as it passes through tissue and four phenomena result when it encounters a tissue interface, as illustrated below:

Illustration of phenomena which result when an ultrasound pulse encounters a tissue interface.
Interface           Reflection Coefficient (%)
Soft Tissue - Air   99.9
Fat - Muscle        1.08
Fat - Kidney        0.64
Muscle - Liver      1.5
Some of the energy in the pulse is scattered through a process called non-specular reflection, some of it generates an echo in a specular reflection process, some of it is transmitted through the interface to produce further echoes at other interfaces and a small amount is absorbed. The reflectivity of an interface depends on the acoustic impedance of the two tissues involved, and representative values are shown in the table.
Notice in the table that a huge reflection can occur at a soft tissue - air interface. It's for this reason that a coupling medium is used between the transducer and the patient's skin. Internal reflections are seen in the table to be of the order of 1%, yielding a useful transparency for imaging purposes.

Ultrasound Scanner

A simplified block diagram of a sonography system is shown in the figure below. The type of scanner shown operates using a linear array transducer, which we'll learn more about shortly. We can see the Master Timer in the top right of the figure. This circuit sets the number of ultrasound pulses which the transducer generates every second - a factor referred to as the Pulse Repetition Frequency (PRF). It's also seen that echo pulses picked up by the transducer are amplified by a Receiver Amplifier, whose output is demodulated before being fed to a Scan Converter so that the location and amplitude of detected echoes can be displayed.

Simplified block diagram of an ultrasound scanner which uses a linear array transducer.
The Time-Gain Compensation (TGC) circuit provides for selective amplification of the echo signals so as to compensate for attenuation of distant ultrasound echoes and suppress more proximal ones. The switch array is used to excite the multiple crystals in the transducer as shown below:

In the simplest arrangement, each crystal generates an ultrasound pulse one after the other so that sequential lines of tissue can be rapidly and continuously insonated.
The ultrasound image is referred to as a B-Mode scan and consists of a 2D representation of the echo pattern in a cross-section of tissue with the transducer position at the top of the image. The locations of echo-producing tissue interfaces are generally represented by bright pixels on a dark background, with the amplitude of each echo signal being represented by the pixel value - see the image on the right.
The image shown was actually acquired using a more sophisticated transducer called a phased array, which generates a sector-shaped scan. This type of transducer also uses a linear array of small crystals, but with them excited in complex timing sequences, controlled by delay circuitry - as shown in the figure below. The ultrasound beam can be steered to scan a region in this manner while being focussed at different depths simultaneously.

Illustration of a phased array transducer.
There are numerous other transducer designs, each of which have particular advantages in different clinical applications. Two designs of mechanical transducer are shown below as examples. The left panel illustrates a transducer with a single crystal which is rocked back and forth during the scanning process, while the right panel illustrates a rotating arrangement of single crystals:

Illustration of two designs of mechanical transducer.
Components of the scan conversion circuitry are illustrated in the following figure:

Simplified block diagram of the scan converter of an ultrasound scanner.
The figure illustrates the process of digitizing the analogue echo signals using an analogue-to-digital converter (ADC) and applying pre-processing to the digital data using an input look-up table (ILUT) prior to storage in random access memory. This memory is filled in the sequence which was used to scan the patient and read out in a manner suitable for the display device, which is typically an LCD monitor. Prior to display, the image data can be post-processed using an output look-up table (OLUT) so that contrast enhancement and other processing functions can be applied. Note that we've encountered this type of digital image processing in a more general form in another chapter of this wikibook. The box labelled μP represents a microprocessor which is used to control this scan conversion circuitry, as well as many other functions of the scanner, e.g. the timing used for phased array emission and reception.
A digital image resolution widely used in sonography is 512 x 512 x 8-bits - a magnified view of the central region of the liver scan shown earlier is provided below to illustrate the digital nature of the data:
Magnified view of the central region of the liver scan shown earlier.
We conclude this section with photos of a sonography system and typical transducers:

Doppler Ultrasound

The Doppler Effect is widely exploited in the remote measurement of moving objects, and can be used in medical sonography to generate images (and sounds!) of flowing blood. The effect is demonstrated by all wave-like phenomena, be they longitudinal or transverse waves, and has been used with light, for instance, to reveal that we live in an expanding universe! It's also exploited using radio waves in highway speed traps, and can be experienced with sound waves when an ambulance passes by with its siren blaring.
Let's take the example of a train engine sounding its whistle, as illustrated in the diagram below:

Illustration of the origin of the Doppler Effect
When the train is stationary, and there isn't a wind blowing, the sound will emanate equally from the whistle in all directions, as illustrated on the left. When the train is moving, however, sound frequencies will be increased in the forward direction and reduced in the opposite direction, as illustrated on the right, to an extent dependent on the velocity of the train. This apparent change in frequency of the sound waves experienced by a stationary listener is referred to as the Doppler Shift.
The situation for exploiting the Doppler Effect for the detection of blood flow is illustrated in the following diagram:

Illustration of blood flow detection using the Doppler Effect with ultrasound waves.
The diagram shows a Doppler transducer placed on the skin and aimed at an angle, θ, towards a blood vessel, which contains blood flowing with a velocity of u m/s at any instant. The transducer emits ultrasound waves of frequency, fo, and echoes generated by moving reflectors in the blood, e.g. red blood cells, have a frequency, fr. The difference between these two frequencies, Δf, is related to the velocity of the flowing reflectors through the following equation:

Δf = fr - fo = 2 fo u cos(θ) / v

where v is the velocity of sound in the medium.
So, for instance, when ultrasound with a frequency in the range 2-10 MHz is applied in medicine to detect blood flowing in arteries (where typical velocities are 0-5 m/s), the equation above reveals that the frequency differences will be in the audible range of sound frequencies, i.e. 0-15 kHz. These signals can therefore be fed through speakers so that the flow can be heard.
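A quick check of that claim with the Doppler equation above, using sample values of our own choosing:

    import math

    v = 1540.0                # speed of sound in tissue, m/s
    fo = 5e6                  # transmit frequency, Hz
    u = 0.5                   # blood speed, m/s
    theta = math.radians(45)  # angle between beam and flow

    delta_f = 2 * fo * u * math.cos(theta) / v
    print(f"Doppler shift: {delta_f:.0f} Hz")   # about 2.3 kHz, comfortably audible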
It's also possible to examine the frequency content of Doppler shifts, by computing their Fourier transforms, to reveal subtle details of the distribution of blood velocities during a cardiac cycle. It's more common, however, to produce images of the distribution of frequency shifts within blood vessels using techniques such as Colour-Flow or Colour-Power imaging. These techniques are used to automatically fuse Doppler signals with B-Mode ultrasound images, as illustrated below:

A colour-flow image on the left with a colour-power image on the right.
Colour-flow processing is sensitive to the direction of blood flow, i.e. it can detect both positive and negative Doppler shifts, and uses a colour look-up table (CLUT) so that shifts in one direction are displayed in shades of red with those in the other direction in shades of blue - as illustrated by a patient's jugular vein and carotid artery depicted in the left panel of the figure above. A simplified block diagram of a sonography system used for such imaging is shown below:
Block diagram of a colour-flow sonography imaging system.
The system uses a beam former circuit to excite the crystals of the phased array transducer for B-Mode imaging and Doppler shift detection in a rapid alternating manner, with the echo signals being fed to B-Mode scanning circuitry and the Doppler signals fed to an autocorrelation detector for analysis. Output data from these circuits are then blended within the scan conversion and formatting circuitry, prior to display of the fused image.
As a final point, note that the colour-power image displayed above does not contain any blood-flow direction information, since this technique computes the power of reflected Doppler-shifted pulses instead of their frequency content.

source : http://en.wikibooks.org/wiki/Basic_Physics_of_Nuclear_Medicine/Sonography_%26_Nuclear_Medicine