Friday, May 29, 2009

Regular Light Bulbs Made Super-Efficient with Ultra-Fast Laser

SOURCE

Chunlei Guo stands in front of his femtosecond laser, which can double the efficiency of a regular incandescent light bulb. Credit: University of Rochester
(PhysOrg.com) -- An ultra-powerful laser can turn regular incandescent light bulbs into power-sippers, say optics researchers at the University of Rochester. The process could make a light as bright as a 100-watt bulb consume less electricity than a 60-watt bulb while remaining far cheaper and radiating a more pleasant light than a fluorescent bulb can.
The laser process creates a unique array of nano- and micro-scale structures on the surface of a regular tungsten filament—the tiny wire inside a light bulb—and these structures make the tungsten far more effective at radiating light.
The findings will be published in an upcoming journal issue.
"We've been experimenting with the way ultra-fast lasers change metals, and we wondered what would happen if we trained the laser on a filament," says Chunlei Guo, associate professor of optics at the University of Rochester. "We fired the right through the glass of the bulb and altered a small area on the filament. When we lit the bulb, we could actually see this one patch was clearly brighter than the rest of the filament, but there was no change in the bulb's energy usage."
The key to creating the super-filament is an ultra-brief, ultra-intense beam of light called a femtosecond laser pulse. The laser burst lasts only a few quadrillionths of a second. To get a grasp of that kind of speed, consider that a femtosecond is to a second what a second is to about 32 million years. During its brief burst, Guo's laser unleashes as much power as the entire grid of North America onto a spot the size of a needle point. That intense blast forces the surface of the metal to form nanostructures and microstructures that dramatically alter how efficiently light can radiate from the filament.
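That time analogy is easy to verify with a line of arithmetic (a minimal Python sketch using only the figures in the article):

```python
# Check the analogy: 1 fs is to 1 s what 1 s is to ~32 million years.
FEMTOSECONDS_PER_SECOND = 1e15          # 1 s contains 10^15 femtoseconds

SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 seconds per year
years = FEMTOSECONDS_PER_SECOND / SECONDS_PER_YEAR
print(f"1e15 seconds is about {years / 1e6:.0f} million years")  # ~32 million
```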
In 2006, Guo and his assistant, Anatoliy Vorobyev, used a similar laser process to turn any metal pitch black. The surface structures created on the metal were incredibly effective at capturing incoming radiation, such as light.
"There is a very interesting 'take more, give more' law in nature governing the amount of light going in and coming out of a material," says Guo. Since the black metal was extremely good at absorbing light, he and Vorobyev set out to study the reverse process—that the blackened filament would radiate light more effectively as well.
"We knew it should work in theory," says Guo, "but we were still surprised when we turned up the power on this bulb and saw just how much brighter the processed spot was."
In addition to increasing the brightness of a bulb, Guo's process can be used to tune the color of the light as well. In 2008, his team used a similar process to change the color of nearly any metal to blue, golden, and gray, in addition to the black he'd already accomplished. Guo and Vorobyev used that knowledge of how to control the size and shape of the nanostructures—and thus what colors of light those structures absorb and radiate—to change the amount of each wavelength of light the tungsten filament radiates. Though Guo cannot yet make a simple bulb shine pure blue, for instance, he can change the overall radiated spectrum so that the tungsten, which normally radiates a yellowish light, could radiate a more purely white light.
Guo's team has even been able to make a filament radiate partially polarized light, which until now has been impossible to do without special filters that reduce the bulb's efficiency. By creating nanostructures in tight, parallel rows, some light that emits from the filament becomes polarized.
The team is now working to discover what other aspects of a common light bulb they might be able to control. Fortunately, despite the incredible intensity involved, the femtosecond laser can be powered by a simple wall outlet, meaning that when the process is refined, implementing it to augment regular light bulbs should be relatively simple.
Guo is also announcing this month in Applied Physics Letters a technique using a similar femtosecond process to make a piece of metal automatically move liquid around its surface, even lifting a liquid up against gravity.
Source: University of Rochester (news : web)

World's largest laser opens: We are close to achieving nuclear fusion



Scientists for decades have been hunting for ways to harness the enormous force of the sun and stars to supply energy here on Earth. The National Ignition Facility at the Lawrence Livermore Laboratory may spark the light at the end of the tunnel.
The facility was dedicated today (May 29) at a ceremony attended by numerous state and national officials.
Roughly the size of three football fields, the facility houses the world’s largest laser. Within the next three years, its 192 laser beams will deliver massive amounts of energy to a pea-sized target. That target, filled with hydrogen fuel, will in turn release 10 to 100 times more energy than the amount injected by the laser.

When all of the lasers’ energy slams the target, it will generate unprecedented conditions in the target materials - temperatures of more than 100 million degrees and pressures more than 100 billion times that of Earth’s atmosphere. These conditions are similar to those in stars and in the cores of giant planets. Achieving these conditions will ignite nuclear fusion, the reaction that gives the sun and the stars their immense power. Mimicking and controlling this highly volatile process - tantamount to creating a star in a laboratory - could lead to ways to produce plentiful, clean and safe energy.
While demonstrating nuclear fusion as a viable means for abundant clean energy may be the most exciting offshoot of NIF research, another of its roles is to study the conditions associated with the inner workings of nuclear weapons.
The NIF is a cornerstone of a critical national security mission to ensure the reliability and safety of the U.S. nuclear stockpile without conducting underground testing. At NIF, scientists will be able to provide data for supercomputer simulations that replicate conditions that exist inside a thermonuclear weapon.
NIF experiments will also help scientists who are trying to understand the universe in many fundamental ways, including astrophysicists learning about the hot, dense interiors of large planets, stars and other phenomena.
Provided by University of California

Researchers make breakthrough in the quantum control of light


This image represents a quantum state with zero, three and six photons simultaneously. The theory is on left and the experiment is on the right. Image: UCSB.
Researchers at UC Santa Barbara have recently demonstrated a breakthrough in the quantum control of photons, the energy quanta of light. This is a significant result in quantum computation, and could eventually have implications in banking, drug design, and other applications.
In a paper to be published in today's issue of the journal Nature, UCSB physics researchers Max Hofheinz, John Martinis, and Andrew Cleland document how they used a superconducting electronic circuit known as a Josephson phase qubit to prepare highly unusual quantum states using microwave-frequency photons. The breakthrough is the result of four years of work in the laboratories of Cleland and Martinis.
The project is funded by the federal agency called the Intelligence Advanced Research Projects Activity, or IARPA. The government is particularly interested in quantum computing because of the way banking and other important communications are currently encrypted. Encryption codes based on large numbers with hundreds of digits are changed daily and would take years of traditional computing to break. Quantum computers could potentially break those codes quickly, destroying current encryption schemes.
In the experiments, the photons were stored in a microwave cavity, a "light trap" in which the light bounces back and forth as if between two mirrors. In earlier work, these researchers showed they could create and store photons, one at a time, with up to 15 photons stored at one time in the light trap. The research shows that they can create states in which the light trap simultaneously has different numbers of photons stored in it. For example, it can have zero, three, and six photons at the same time. Measuring the state by counting how many photons are stored forces the trap to "decide" how many there are; but prior to counting, the light trap exists in a quantum superposition, with all three outcomes possible.
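To make such a superposition concrete, here is a toy numpy sketch (not the UCSB team's apparatus or code; just the standard state-vector picture of an equal superposition of zero, three, and six photons):

```python
import numpy as np

DIM = 7  # truncated Fock space: photon numbers 0..6

# Equal superposition of the |0>, |3>, and |6> photon-number states.
psi = np.zeros(DIM, dtype=complex)
psi[[0, 3, 6]] = 1.0
psi /= np.linalg.norm(psi)

# Counting photons "forces a decision": each allowed outcome is equally likely.
probabilities = np.abs(psi) ** 2
for n, p in enumerate(probabilities):
    if p > 0:
        print(f"P(count {n} photons) = {p:.3f}")  # 1/3 each for n = 0, 3, 6
```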
Explaining the paradoxical simultaneity of quantum states, Cleland said that it's like having your cake and eating it -- at the same time.
"These superposition states are a fundamental concept in quantum mechanics, but this is the first time they have been controllably created with light," Cleland said. Martinis added, "This experiment can be thought of as a quantum digital-to-analog converter." As digital-to-analog converters are key components in classical communication devices (for example, producing the sound waveforms in cell phones), this experiment might enable more advanced communication protocols for the transmission of quantum information.
First author Hofheinz designed and performed the measurements. He is a postdoctoral researcher from Germany who has been working at UCSB for the last two years on this project. The devices used to perform the experiment were made by Haohua Wang, a postdoctoral researcher from China, who is second author on the Nature publication.
The scientists said their research is leading to the construction of a quantum computer, which will have applications in information encryption and in solving or simulating problems that are not amenable to solution using standard computers.
Source: University of California - Santa Barbara (news : web)

Theorists Reveal Path to True Muonium


In this artist's depiction of how experimentalists could create true muonium, an electron (blue) and a positron (red) collide, producing a virtual photon (green) and then a muonium atom, made of a muon (small yellow) and an anti-muon (small purple). The muonium atom then decays back into a virtual photon and then a positron and an electron. Overlaying this process is a figure indicating the structure of the muonium atom: one muon (large yellow) and one anti-muon (large purple). Credit: Graphic: Terry Anderson/SLAC
(PhysOrg.com) -- True muonium, a long-theorized but never-seen atom, might be observed in future experiments, thanks to recent theoretical work by researchers at the Department of Energy's SLAC National Accelerator Laboratory and Arizona State University. True muonium was first theorized more than 50 years ago, but until now no one had uncovered an unambiguous method by which it could be created and observed.
"We don't usually work in this area, but one day we were idly talking about how experimentalists could create exotic states of matter," said SLAC theorist Stanley Brodsky, who worked with Arizona State's Richard Lebed on the result. "As our conversation progressed, we realized 'Gee…we just figured out how to make true muonium.'"
True muonium is made of a muon and an anti-muon, and is distinguished from what's also been called "muonium"—an atom made of an electron and an anti-muon. Both muons and anti-muons are created frequently in nature when energetic particles from space strike the earth's atmosphere. Yet both have a fleeting existence, and their combination, true muonium, decays naturally into other particles in a few trillionths of a second. This makes observation of the exotic atom quite difficult.
In a paper published on Tuesday in Physical Review Letters, Brodsky and Lebed describe two methods by which electron-positron accelerators could detect the signature of true muonium's formation and decay.
In the first method, an accelerator's electron and positron beams are arranged to merge, crossing at a glancing angle. Such a collision would produce a single photon, which would then transform into a single true muonium atom that would be thrown clear of the other particle debris. Because the newly created true muonium atoms would be traveling so fast that the laws of relativity govern their behavior, they would decay much more slowly than they otherwise would, making detection easier.
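The time-dilation argument can be made concrete with a short sketch. The rest-frame lifetime below is an illustrative placeholder matching the article's "a few trillionths of a second", and the speeds are assumptions, not values from the paper:

```python
import math

def dilated_lifetime(rest_lifetime_s: float, beta: float) -> float:
    """Lab-frame lifetime of a particle moving at speed beta = v/c,
    stretched by the Lorentz factor gamma = 1 / sqrt(1 - beta^2)."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * rest_lifetime_s

rest_lifetime = 2e-12   # seconds; illustrative "few trillionths of a second"
for beta in (0.9, 0.99, 0.999):
    print(f"beta = {beta}: lab lifetime = {dilated_lifetime(rest_lifetime, beta):.2e} s")
```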
In the second method, the electron and positron beams collide head-on. This would produce a true muonium atom and a photon, tangled up in a cloud of particle debris. Yet simply by recoiling against each other, the true muonium and the photon would push one another out of the debris cloud, creating a unique signature not previously searched for.
"It's very likely that people have already created true muonium in this second way," Brodsky said. "They just haven't detected it."
In their paper, Lebed and Brodsky also describe a possible, but more difficult, means by which experimentalists could create true tauonium, a bound state of a tau lepton and its antiparticle. The tau was first created at SLAC's SPEAR storage ring, a feat for which SLAC physicist Martin Perl received the 1995 Nobel Prize in physics.
Brodsky attributes the pair's successful work to a confluence of events: various unrelated lectures, conversations and ideas over the years, pieces of which came together suddenly during his conversation with Lebed.
"Once you pull all of the ideas together, you say 'Of course! Why not?' Brodsky said. "That's the process of science—you try to relate everything new to what you already know, creating logical connections."
Now that those logical connections are firmly in place, Brodsky said he hopes that one of the world's colliders will perform the experiments he and Lebed describe, asking, "Who doesn't want to see a new form of matter that no one's ever seen before?"
More information: "Production of the Smallest QED Atom: True Muonium," Physical Review Letters
Source: SLAC National Accelerator Laboratory (news : web)

Monday, May 18, 2009

All about Antigravity: 25 very interesting scientific documents



1. arXiv:0904.2394 [ps, pdf, other]
Title: Levitating Dark Matter
Authors: Nemanja Kaloper, Antonio Padilla
Comments: 17 pages LaTeX
Subjects: Cosmology and Extragalactic Astrophysics (astro-ph.CO); High Energy Astrophysical Phenomena (astro-ph.HE); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)


2. arXiv:0902.3871 [ps, pdf, other]
Title: Dark energy and the mass of the Local Group
Authors: A.D. Chernin, P. Teerikorpi, M.J. Valtonen, G.G. Byrd, V.P. Dolgachev, L.M. Domozhilova
Comments: 7 pages, 1 figure, submitted to ApJL
Subjects: Cosmology and Extragalactic Astrophysics (astro-ph.CO)


3. arXiv:0901.4055 [ps, pdf, other]
Title: Extended General Relativity: large-scale antigravity and short-scale gravity with \omega=-1 from five dimensional vacuum
Authors: Jose Edgar Madriz Aguilar, Mauricio Bellini
Comments: 7 pages, no figures
Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Extragalactic Astrophysics (astro-ph.CO); High Energy Physics - Theory (hep-th)


4. arXiv:0811.1008 [ps, pdf, other]
Title: A constraint on antigravity of antimatter from precision spectroscopy of simple atoms
Authors: Savely G. Karshenboim (Max-Planck-Institut fuer Quantenoptik, Garching and D.I. Mendeleev Institute for Metrology, St.Petersburg)
Subjects: General Relativity and Quantum Cosmology (gr-qc); Atomic Physics (physics.atom-ph)
5. arXiv:0811.0522 [pdf]
Title: Primitive Virtual Negative Charge
Authors: Kiyoung Kim
Comments: 33 pages, 8 figures
Subjects: General Physics (physics.gen-ph)


6. arXiv:0803.2864 [pdf]
Title: Exact 'antigravity-field' solutions of Einstein's equation
Authors: Franklin S. Felber
Comments: 3 pages, 3 figures; this version shows correspondence of exact 'antigravity' field, calculated from a metric first derived by Hartle, Thorne, and Price, with weak 'antigravity' fields calculated from retarded potentials in Ref. [2]; also adds impulse calculation
Subjects: General Physics (physics.gen-ph)


7. arXiv:0710.4316 [src]
Title: Can the new Neutrino Telescopes and LHC reveal the gravitational proprieties of antimatter?
Authors: Dragan Slavkov Hajdukovic
Comments: This paper has been withdrawn by the author. Full version replacing partial results is at: arXiv:gr-qc/0612088v2
Subjects: General Relativity and Quantum Cosmology (gr-qc)


8. arXiv:0706.4171 [ps, pdf, other]
Title: Local dark energy: HST evidence from the vicinity of the M 81/M 82 galaxy group
Authors: A.D. Chernin, I.D. Karachentsev, O.G. Kashibadze, D.I. Makarov, P. Teerikorpi, M.J. Valtonen, V.P. Dolgachev, L.M. Domozhilova
Comments: 17 pages, 1 figure
Journal-ref: Astrophys.50:405-415,2007
Subjects: Astrophysics (astro-ph)


9. arXiv:0706.4068 [ps, pdf, other]
Title: Detection of dark energy near the Local Group with the Hubble Space Telescope
Authors: A.D. Chernin, I.D. Karachentsev, P. Teerikorpi, M.J. Valtonen, G.G. Byrd, Yu.N. Efremov, V.P. Dolgachev, L.M. Domozhilova, D.I. Makarov, Yu.V. Baryshev
Comments: 11 pages, 1 figure
Subjects: Astrophysics (astro-ph)


10. arXiv:0704.2753 [ps, pdf, other]
Title: Local dark energy: HST evidence from the expansion flow around Cen A/M83 galaxy group
Authors: A. D. Chernin, I. D. Karachentsev, D. I. Makarov, O. G. Kashibadze, P. Teerikorpi, M. J. Valtonen, V. P. Dolgachev, L. M. Domozhilova
Subjects: Astrophysics (astro-ph)


11. arXiv:gr-qc/0702142 [src]
Title: Concerning production and decay of mini black holes
Authors: Dragan Slavkov Hajdukovic
Comments: This paper has been withdrawn by the author. Full version replacing partial results is at: arXiv:gr-qc/0612088v2
Subjects: General Relativity and Quantum Cosmology (gr-qc)


12. arXiv:gr-qc/0701168 [src]
Title: Antigravity as the basis for a New Interpretation of the Planck Length
Authors: Dragan Slavkov Hajdukovic
Comments: This paper has been withdrawn by the author. Full version replacing partial results is at: arXiv:gr-qc/0612088v2
Subjects: General Relativity and Quantum Cosmology (gr-qc)


13. arXiv:gr-qc/0612088 [pdf]
Title: Black holes, neutrinos and gravitational proprieties of antimatter
Authors: Dragan Slavkov Hajdukovic
Comments: This new version is four times longer than the first one, and consequently contains much more results
Subjects: General Relativity and Quantum Cosmology (gr-qc); Astrophysics (astro-ph); High Energy Physics - Theory (hep-th)


14. arXiv:gr-qc/0604076 [ps, pdf, other]
Title: 'Antigravity' Propulsion and Relativistic Hyperdrive
Authors: Franklin S. Felber
Comments: 4 pages, 4 figures, 2 video clips. To be presented at 25th International Space Development Conference, Los Angeles, 4-7 May 2006
Subjects: General Relativity and Quantum Cosmology (gr-qc)


15. arXiv:astro-ph/0603226 [ps, pdf, other]
Title: Non-Friedmann cosmology for the Local Universe, significance of the universal Hubble constant and short-distance indicators of dark energy
Authors: Arthur D. Chernin, Pekka Teerikorpi, Yurij V. Baryshev
Comments: 10 pages, 1 figure, submitted to A&A
Subjects: Astrophysics (astro-ph)


16. arXiv:astro-ph/0602102 [pdf]
Title: Hubble's law and Superluminity Recession Velocities
Authors: Leonid S. Sitnikov
Comments: 7 pages, 3 figures
Subjects: Astrophysics (astro-ph)


17. arXiv:gr-qc/0602041 [pdf]
Title: Testing existence of antigravity
Authors: Dragan Slavkov Hajdukovic
Subjects: General Relativity and Quantum Cosmology (gr-qc)


18. arXiv:gr-qc/0509105 [ps, pdf, other]
Title: Physics of Gravitational Interaction: Geometry of Space or Quantum Field in Space?
Authors: Yurij Baryshev
Comments: 9 pages, to be published in the Proceedings of the 1st Crisis in Cosmology Conference, AIP proceedings series
Subjects: General Relativity and Quantum Cosmology (gr-qc)


19. arXiv:astro-ph/0506070 [ps, pdf, other]
Title: Co-existence of Gravity and Antigravity: The Unification of Dark Matter and Dark Energy
Authors: Xiang-Song Chen
Comments: 3 pages, no figure; discussions added that low-energy gravitons can also serve as both dark matter and dark energy; references added
Subjects: Astrophysics (astro-ph)


20. arXiv:hep-th/0506067 [ps, pdf, other]
Title: Ultra-large distance modification of gravity from Lorentz symmetry breaking at the Planck scale
Authors: D.S. Gorbunov, S.M. Sibiryakov
Comments: 28 pages
Journal-ref: JHEP 0509 (2005) 082
Subjects: High Energy Physics - Theory (hep-th)


21. arXiv:physics/0506017 [pdf]
Title: Zero-point energy of vacuum fluctuation as a candidate for dark energy versus a new conjecture of antigravity based on the modified Einstein field equation in general relativity
Authors: Guang-jiong Ni
Comments: 11 pages,1 figure
Subjects: General Physics (physics.gen-ph)


22. arXiv:gr-qc/0505099 [ps, pdf, other]
Title: Exact Relativistic 'Antigravity' Propulsion
Authors: F. S. Felber
Comments: 4 pages, 3 figures, changed format only, attached 5 AVI files (animated exact solutions of black holes incident on initially stationary payloads)
Subjects: General Relativity and Quantum Cosmology (gr-qc)


23. arXiv:gr-qc/0505098 [pdf]
Title: Weak 'Antigravity' Fields in General Relativity
Authors: F. S. Felber
Comments: 5 pages, 3 figures, 1 table. Updates include: (1) Large Hadron Collider off-line experiment designed to test relativistic gravity; (2) demonstration that weak 'antigravity' fields correspond with new exact solutions calculated from an exact metric first derived by Hartle, Thorne, and Price
Subjects: General Relativity and Quantum Cosmology (gr-qc)


24. arXiv:gr-qc/0411096 [ps, pdf, other]
Title: The warp drive and antigravity
Authors: Homer G. Ellis
Comments: 6 pages, AMSTeX, 1 Encapsulated PostScript figure
Subjects: General Relativity and Quantum Cosmology (gr-qc)


25. arXiv:gr-qc/0411064 [ps, pdf, other]
Title: Symmetry relating Gravity with Antigravity: A possible resolution of the Cosmological Constant Problem?
Authors: Israel Quiros
Comments: 3 pages, no figures, revtex
Subjects: General Relativity and Quantum Cosmology (gr-qc)

Saturday, May 16, 2009

Super-efficient Transistor Material Predicted


(PhysOrg.com) -- New work by condensed-matter theorists at the Stanford Institute for Materials and Energy Science at SLAC National Accelerator Laboratory points to a material that could one day be used to make faster, more efficient computer processors.
In a paper published online Sunday in Nature Physics, SIMES researchers Xiao-Liang Qi and Shou-Cheng Zhang, with colleagues from the Chinese Academy of Sciences and Tsinghua University in Beijing, predict a material that should exhibit the quantum spin Hall effect at room temperature. In this exotic state of matter, electrons flow without dissipating heat, meaning a transistor made of the material would be drastically more efficient than anything available today. This effect was previously thought to occur only at extremely low temperatures. Now the race is on to confirm the room-temperature prediction experimentally.
Zhang has been one of the leading physicists working on the quantum spin Hall effect; in 2006 he predicted its existence in mercury telluride, which experimentalists confirmed a year later. However, the mercury telluride had to be cooled to a frigid 30 millikelvins, much too cold for real-world applications.
In their hunt for a material that exhibited the quantum spin Hall effect, Zhang and Qi knew they were looking for a solid with a highly unusual energy landscape. In a normal semiconductor, the outermost electrons of an atom prefer to stay in the valence band, where they are orbiting atoms, rather than the higher-energy conduction band, where they move freely through the material. Think of the conduction band as a flat plain pitted with small valence-band valleys. Electrons naturally "roll" down into these valleys and stay there, unless pushed out. But in a material that exhibits the quantum spin Hall effect, this picture inverts; the valence-band valleys rise to become hills, and the electrons roll down to roam the now lower-energy conduction band plain. In mercury telluride, this inversion did occur, but just barely; the hills were so slight that a tiny amount of energy was enough to push the electrons back up, meaning the material had to be kept extremely cold.
When Zhang, Qi and their colleagues calculated this energy landscape for four promising materials, three showed the hoped-for inversion. In one, bismuth selenide, the theoretical conduction band plain is so much lower than the valence band hills that even room temperature energy can't push the electrons back up. In physics terms, the conduction band and valence band are now inverted, with a sizeable energy gap between them.
"The difference [from mercury telluride] is that the gap is much larger, so we believe the effect could happen at room temperature," Zhang explained.
Materials that exhibit the quantum spin Hall effect are called topological insulators; a chunk of this material acts like an empty metal box that's completely insulating on the inside, but conducting on the surface. Additionally, the direction of each electron's movement on the surface decides its spin, an intrinsic property of electrons. This leads to surprising consequences.
Qi likens electrons traveling through a metal to cars driving along a busy road. When an electron encounters an impurity, it acts like a frustrated driver in a traffic jam, and makes a U-turn, dissipating heat. But in a topological insulator, Qi said, "Nature gives us a no U-turn rule." Instead of reversing their trajectories, electrons cruise coolly around impurities. This means the quantum spin Hall effect, like superconductivity, enables current to flow without dissipating energy, but unlike superconductivity, the effect doesn't rely on interactions between electrons.
Qi points out that, because current only flows on their surfaces, topological insulators shouldn't be seen as a way to make more efficient power lines. Instead, these novel compounds would be ideal for fabricating tinier and tinier transistors that transport information via electron spin.
"Usually you need magnets to inject spins, manipulate them, and read them out," Qi said. "Because the current and spin are always locked [in a topological insulator], you can control the spin by the current. This may lead to a new way of designing devices like transistors."
These tantalizing characteristics arise from underlying physics that seems to marry relativity and condensed matter science. Zhang and Qi's paper reveals that electrons on the surface of a topological insulator are governed by a so-called "Dirac cone," meaning that their momentum and energy are related according to the laws of relativity rather than the quantum mechanical rules that are usually used to describe electrons in a solid.
"On this surface, the electrons behave like a relativistic, massless particle," Qi said. "We are living in a low speed world here, where nothing is relativistic, but on this boundary, relativity emerges."
"What are the two greatest physics discoveries of the last century? Relativity and quantum mechanics." Zhang said. "In the semiconductor industry in the last 50 years, we've only used quantum mechanics, but to solve all these interesting frontier problems, we need to use both in a very essential way."
Zhang and Qi's new predictions are already spurring a surge of experiments to test whether these promising materials will indeed act as room-temperature topological insulators.
"The best feedback you can get is that there are lots of experiments going on," he said.
More information: http://www.nature.com/nphys/journal/vaop/ncurrent/abs/nphys1270.html
Provided by SLAC National Accelerator Laboratory (news : web)

Thursday, May 14, 2009

A 'cloaking device' -- it's all done with mirrors

SOURCE

Scanning electron microscope images of the cloaking device. Top: Light passes through silicon posts as it bounces off a deformed reflector. Varying density of the silicon posts bends light to compensate for the distortion in the reflector. Bottom: a close-up of the array of silicon posts, each about 50 billionths of a meter in diameter. Image: Nanophotonics Group
(PhysOrg.com) -- Somewhat the way Harry Potter can cover himself with a cloak and become invisible, Cornell researchers have developed a device that can make it seem that a bump in a carpet -- or, indeed, any flat surface -- isn't there.
So far the illusion works only at the microscopic scale, but the researchers suggest that the basic principle might eventually be scaled up for military and communications applications, or perhaps used in reverse to concentrate solar energy.
Devices that bend microwaves around small objects have previously been demonstrated, but this is the first cloaking device to work at optical frequencies, the researchers said.
The experimental device was built by Michal Lipson, associate professor of electrical and computer engineering, and colleagues in her Nanophotonics Research Group, based on a design by British physicists. It bends light bouncing off a reflective surface in a way that corrects for the distortion caused by a bump in the surface. Imagine controlling the light in front of a funhouse mirror so that reflections look perfectly normal, and the mirror looks flat.
A similar device has been reported by University of California-Berkeley researchers.
On a silicon wafer, Lipson's group made a tiny reflector about 30 microns (millionths of a meter) long with a 5-micron-wide bump in the middle, then placed an array of vertical silicon posts, each 50 nanometers (billionths of a meter) in diameter, in front of it. Because the posts are much smaller than the wavelength of the light, the light behaves as if it were passing through a solid whose density varies with the density of the posts. As light passes between regions of high and low density it is refracted, or bent, in the same way light is refracted as it passes from air to glass. By designing smooth transitions of the density of posts, the researchers could control the path of the light to compensate for the distortion caused by the bump.
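The effective-medium idea behind the post array can be sketched numerically. The crude volume-fraction mixing rule and the index values below are illustrative assumptions, not the Cornell team's actual design method:

```python
import numpy as np

# Toy effective-medium model: sub-wavelength silicon posts in air act like a
# medium whose refractive index tracks the local fill fraction of silicon.
N_SILICON = 3.5   # illustrative refractive index of silicon
N_AIR = 1.0

def effective_index(fill_fraction: float) -> float:
    """Volume-fraction average of permittivities (a crude mixing rule)."""
    eps = fill_fraction * N_SILICON**2 + (1 - fill_fraction) * N_AIR**2
    return float(np.sqrt(eps))

# Grading the post density from sparse to dense grades the index smoothly,
# which is what steers light to compensate for the bump.
for f in np.linspace(0.0, 0.5, 6):
    print(f"fill fraction {f:.1f} -> n_eff = {effective_index(f):.2f}")
```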
As a result, an observer looking at light reflected from the mirror sees a flat mirror, with no sign of the bump. The device is expected to work over a range of wavelengths from infrared into visible red light, the researchers said.
Of course it's still a long way to cloaking tanks on a battlefield. For starters, the thing being hidden has to hide behind a mirror, and the presence of a mirror would be a giveaway. A practical cloaking device also would have to adjust in real time to changing configurations of the object behind it.
A variation of the method might be used to bend light around an object, the researchers suggested, and a light-bending device could be made much larger by using technology that stamps or molds nanoscale patterns onto a surface.
Such refraction control might also be used in reverse, they added, to concentrate light in a small area to efficiently collect solar energy.
"At the core is the fact that we're manipulating , telling it where to go and how to behave," said Carl Poitras, a research associate on the Cornell team.
The device was manufactured at the Cornell Nanoscale Facility, which is supported by the National Science Foundation.
Provided by Cornell University (news : web)

Researchers develop new method for producing transparent conductors

(PhysOrg.com) -- Researchers at UCLA have developed a new method for producing a hybrid graphene-carbon nanotube, or G-CNT, for potential use as a transparent conductor in solar cells and consumer electronic devices. These G-CNTs could provide a cheaper and much more flexible alternative to materials currently used in these and similar applications.
Yang Yang, a professor of materials science and engineering at the UCLA Henry Samueli School of Engineering and Applied Science and a member of UCLA's California NanoSystems Institute (CNSI), and Richard Kaner, a UCLA professor of chemistry and biochemistry and a CNSI member, outline their new processing method in research published today.
Transparent conductors are an integral part of many electronic devices, including flat-panel televisions, plasma displays and touch panels, as well as solar cells. The current gold standard for transparent conductors is indium tin oxide (ITO), which has several limitations. ITO is expensive, both because of its production costs and a relative scarcity of indium, and it is rigid and fragile.
The G-CNT hybrid, the researchers say, provides an ideal high-performance alternative to ITO in electronics with moving parts. Graphene is an excellent electrical conductor, and carbon nanotubes are good candidates for transparent conductors because they provide conduction of electricity using very little material. Yang and Kaner's new single-step method for combining the two is easy, inexpensive, scalable and compatible with flexible applications. G-CNTs produced this way already provide comparable performance to current ITOs used in flexible applications.
The new method builds on Yang and Kaner's previous research, published online in November 2008, which introduced a method for producing graphene, a single layer of carbon atoms, by soaking graphite oxide in a hydrazine solution. The researchers have now found that placing both graphite oxide and carbon nanotubes in a hydrazine solution produces not only graphene but a hybrid layer of graphene and carbon nanotubes.
"To our knowledge this is the first report of dispersing CNTs in anhydrous hydrazine," Yang said. "This is important because our method does not require the use of surfactants, which have traditionally been used in these solution processes and can degrade intrinsic electronic and mechanical properties."
G-CNTs are also ideal candidates for use as electrodes in polymer solar cells, one of Yang's main research projects. One of the benefits of polymer, or plastic, solar cells is that plastic is flexible. But until an alternative to ITOs, which lose efficiency upon flexing, can be found, this potential cannot be exploited. G-CNTs retain efficiency when flexed and also are compatible with plastics. Flexible solar cells could be used in a variety of materials, including the drapes of homes.
"The potential of this material (G-CNT) is not limited to improvements in the physical arrangements of the components," said Vincent Tung, a doctoral student working jointly in Yang's and Kaner's labs and the first author of the study. "With further work, G-CNTs have the potential to provide the building blocks of tomorrow's optical electronics."
Source: University of California - Los Angeles

Wednesday, May 13, 2009

New element found to be a superconductor

(PhysOrg.com) -- Of the 92 naturally occurring elements, add another to the list of those that are superconductors. James S. Schilling, Ph.D., professor of physics in Arts & Sciences at Washington University in St. Louis, and Mathew Debessai — his doctoral student at the time — discovered that europium becomes superconducting at 1.8 K (-456 °F) and 80 GPa (790,000 atmospheres) of pressure, making it the 53rd known elemental superconductor and the 23rd at high pressure.
Debessai, who receives his doctorate in physics at Washington University's Commencement May 15, 2009, is now a postdoctoral research associate at Washington State University.
"It has been seven years since someone discovered a new elemental superconductor," Schilling said. "It gets harder and harder because there are fewer elements left in the periodic table."
This discovery adds data to help improve scientists' theoretical understanding of superconductivity, which could lead to the design of room-temperature superconductors that could be used for efficient energy transport and storage.
The results are published in the May 15, 2009, issue of Physical Review Letters in an article titled "Pressure-induced Superconducting State of Europium Metal at Low Temperatures."
Schilling's research is supported by a four-year $500,000 grant from the National Science Foundation, Division of Materials Research.
Europium belongs to a group of elements called the rare earth elements. These elements are magnetic; therefore, they are not superconductors.
"Superconductivity and magnetism hate each other. To get superconductivity, you have to kill the magnetism," Schilling explained.
Of the rare earths, europium is most likely to lose its magnetism under high pressures due to its electronic structure. In an elemental solid almost all rare earths are trivalent, which means that each atom releases three electrons to conduct electricity.
"However, when europium atoms condense to form a solid, only two electrons per atom are released and europium remains magnetic. Applying sufficient pressure squeezes a third electron out and europium metal becomes trivalent. Trivalent europium is nonmagnetic, thus opening the possibility for it to become superconducting under the right conditions," Schilling said.
Schilling uses a diamond anvil cell to generate such high pressures on a sample. A circular metal gasket separates two opposing 0.17-carat diamond anvils with faces (culets) 0.18 mm in diameter. The sample is placed in a small hole in the gasket, flanked by the faces of the diamond anvils.
Pressure is applied to the sample space by inflating a doughnut-like bellow with helium gas. Much like a woman in stilettos exerts more pressure on the ground than an elephant does because the woman's force is spread over a smaller area, a small amount of helium gas pressure (60 atmospheres) creates a large force (1.5 tons) on the tiny sample space, thus generating extremely high pressures on the sample.
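The stiletto arithmetic can be checked directly: the same force produces wildly different pressures over the large bellows area and the tiny culet face. A rough sketch using the figures in the text; the culet result is an upper bound, since in practice the metal gasket carries much of the load and the pressure is not uniform (the sample here sits at 80 GPa):

```python
import math

# Force generated by the helium-inflated bellows (from the text: ~1.5 tons).
force_n = 1500 * 9.81                       # newtons

# Pressure if that force were spread over the 0.18-mm-diameter culet face.
culet_radius_m = 0.18e-3 / 2
culet_area_m2 = math.pi * culet_radius_m ** 2
print(f"culet area: {culet_area_m2:.2e} m^2")
print(f"upper-bound pressure: {force_n / culet_area_m2 / 1e9:.0f} GPa")

# The bellows needs only ~60 atm of helium because its area is large.
bellows_area_m2 = force_n / (60 * 101325)
print(f"implied bellows area: {bellows_area_m2 * 1e4:.0f} cm^2")
```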
Unique electrical, magnetic properties
Superconducting materials have unique electrical and magnetic properties. They have no electrical resistance, so current will flow through them forever, and they are diamagnetic, meaning that a magnet held above them will levitate.
These properties can be exploited to create powerful magnets for medical imaging, make power lines that transport electricity efficiently or make efficient power generators.
However, there are no known materials that are superconductors at room temperature and pressure. All known superconducting materials have to be cooled to extreme temperatures and/or compressed at high pressure.
"At ambient pressure, the highest temperature at which a material becomes superconducting is 134 K (-218 °F). This material is complex because it is a mixture of five different elements. We do not understand why it is such a good superconductor," Schilling said.
Scientists do not have enough theoretical understanding to be able to design a combination of elements that will be superconducting at room temperature and pressure. Schilling's result provides more data to help refine current theoretical models of superconductivity.
"Theoretically, the elemental solids are relatively easy to understand because they only contain one kind of atom," Schilling said. "By applying pressure, however, we can bring the elemental solids into new regimes, where theory has difficulty understanding things.
"When we understand the element's behavior in these new regimes, we might be able to duplicate it by combining the into different compounds that superconduct at higher temperatures."
Schilling will present his findings at the 22nd biennial International Conference on High Pressure Science and Technology in July 2009 in Tokyo, Japan.
Provided by Washington University in St. Louis (news : web)

Ion trap quantum computing


(PhysOrg.com) -- “Right now, classical computers are faster than quantum computers,” René Stock tells PhysOrg.com. “The goal of quantum computing is to eventually speed up the time scale of solving certain important problems, such as factoring and data search, so that quantum computing can not only compete with, but far outperform, classical computing on large scale problems. One of the most promising ways to possibly do this is with ion traps.”
Stock, a post-doc at the University of Toronto, points out that ion-trap quantum computing has made a lot of progress in the last 10 years. “Ions in traps have been one of the most successful physical implementations of quantum computing.” Stock believes that it is possible to use ion-trap quantum computing to create measurement-based quantum computers that could compete with classical computers for very large and complex problems - and even on smaller scale problems. His work on the subject, done with Daniel James, appears in Physical Review Letters: “Scalable, High-Speed Measurement-Based Quantum Computer Using Trapped Ions.”
“One of the most important considerations in quantum computing is the fact that, for problems such as factoring, quantum algorithms scale polynomially with the size of the problem, rather than exponentially as the best classical algorithms do.” This polynomial scaling is what makes quantum computing so threatening to data encryption. In order to make data encryption more secure, one usually increases the number of bits used. “Because of the exponential scaling, breaking data encryption quickly becomes impossible using standard classical computers or even networks of computers,” Stock explains. “The improved scaling with quantum computers could be one of the biggest threats to data encryption and security.”
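The scaling contrast can be illustrated with textbook complexity formulas. This is a hedged sketch, not a benchmark: it uses the standard heuristic cost of the general number field sieve against a roughly cubic gate count for Shor's algorithm, with all constant factors dropped, so only the growth rates are meaningful:

```python
import math

def gnfs_cost(bits: int) -> float:
    """Sub-exponential asymptotic cost of the general number field sieve
    (constants dropped): exp((64/9 * ln n)^(1/3) * (ln ln n)^(2/3))."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9 * ln_n) ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def shor_cost(bits: int) -> float:
    """Roughly cubic operation count for Shor's algorithm (constants dropped)."""
    return bits ** 3

for bits in (512, 1024, 2048):
    ratio = gnfs_cost(bits) / shor_cost(bits)
    print(f"{bits}-bit key: classical/quantum cost ratio ~ 10^{math.log10(ratio):.0f}")
```

Doubling the key size barely slows the polynomial-time attack while the classical cost explodes, which is exactly why longer keys defend against classical computers but not quantum ones.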
While this sounds promising, Stock points out that there are still problems with quantum information processing: “While scaling would be better with quantum computing, current quantum information processing is too slow to compete with classical computers even on large factoring problems that take five months to solve.”
The way ion-trap quantum computing works now - or at least is envisioned to work - requires that ions be shuttled back and forth around the trap architecture. Stock explains that this takes time. “As the complexity of problems and the size of the quantum computation to be implemented increases, the time issue becomes even more important. We wanted to figure out how we could change the time scale,” Stock explains. “We found that we could speed up the processing by using an array of trapped ions and by parallelizing entangling operations.”
“Instead of moving ions around,” Stock continues, “you apply a two-ion operation between all neighboring ions at the same time. The created multipartite ‘entangled’ array of ions is a resource for quantum computing.” Actual computing is then based on measurement of ions in the array in a prescribed order and using a slightly different measurement basis for each ion. “In this scheme, it is the time required to read out information from the ions that critically determines the operational time scale of the quantum computer,” Stock says.
Stock describes the measurement component as vital to this model of quantum computing. Instead of exciting the ions and getting them to emit a photon and measuring the photon, Stock and his colleague instead devised a different way in which they were able to measure the quantum bit encoded in a calcium ion. “You can use an ionization process to speed up measurement, since the electron can be extracted faster from the atom than you can get a photon out of an atom. The extracted electron is then guided onto a detector by the ion trap itself.” All of this takes place on a nanosecond time scale. “By speeding up the measurement,” Stock insists, “we can speed up the operation capability of the quantum computer.”
Stock points out that this scheme would be impractical as far as taking over common use from classical computers. “The lattice would have thousands of ions, which would need to be controlled, and carefully stored and protected. It means that the computer would be relatively large and impractical.”
Uses for such a quantum computer are not limited to breaking data encryption. “This process would allow us to take problems of great complexity and still solve them on a humanly possible timescale. This could provide the key to modeling complex systems - especially perhaps in biology - that we can’t solve now. This would be a tremendous advantage over classical computing.”
More information: Stock, René and James, Daniel. “Scalable, High-Speed Measurement-Based Quantum Computer Using Trapped Ions.” Physical Review Letters (2009). Available online: http://link.aps.org/doi/10.1103/PhysRevLett.102.170501 .
Copyright 2009 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.

Tuesday, May 12, 2009

Too much entanglement can destroy the power of quantum computers!


Computers that exploit quantum effects appear capable of outperforming their classical brethren. For example, a quantum computer can efficiently factor a whole number, while there is no known algorithm for our modern classical computers to efficiently perform this task [1]. Given this extra computational punch, a natural question to ask is “What gives quantum computers their added computational power?” This question is intrinsically hard—try asking yourself where the power of a traditional classical computer comes from and you will find yourself pondering questions at the heart of the vast and challenging field known as computational complexity. In spite of this, considerable progress has been made in answering the question of when a quantum system is not capable of offering a computational speedup. A particularly compelling story has emerged from the study of entanglement—a peculiar quantum mechanical quality describing the interdependence of measurements made between parts of a quantum system. This work has shown that a quantum system without enough entanglement existing at some point in the process of a computation cannot be used to build a quantum computer that outperforms a classical computer [2]. Since entangled quantum systems cannot be replicated by local classical theories, the idea that entanglement is required for speedup seems very natural. But now two groups [3, 4] have published papers in Physical Review Letters that put forth a surprising result: sometimes too much entanglement can destroy the power of quantum computers!
Both papers focus on a model called the “one-way quantum computer,” which was invented by Hans Briegel and Robert Raussendorf in 2001 [5]. A one-way quantum computation begins with a special quantum state entangled across many quantum subsystems, and the computation proceeds as a measurement is made on each subsystem. The actual form of each of the measurements in the sequence of measurements is determined by the outcome of previous measurements (Fig. 1), and one can think of the measurements as an adaptive program executed on the substrate of the entangled quantum state. A particularly nice property of the one-way quantum computing model is that it separates quantum computing into two processes—the preparation of a special initial quantum state and a series of adaptive measurements. In this way we may view the initial quantum state as a resource that can boost localized measurements and classical computation up into quantum realms. Investigations have revealed numerous quantum states that can be used as the special initial state to build a fully functioning quantum computer. But how special is this initial quantum state? Will any entangled quantum state do?
The two papers approach this problem from slightly different perspectives, but both arrive at convincing answers to these questions. David Gross at Technische Universität Braunschweig in Germany, Steven Flammia at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and Jens Eisert at the University of Potsdam, Germany, pursue this question directly in terms of entanglement [3]. They first show that if a certain quantification of entanglement—known as the geometric measure of entanglement—is too large, then any scheme that mimics the one-way quantum computation model cannot outperform classical computers. In fact, they show that the measurements in this case could be replaced by randomly flipping a coin, without significantly changing the effect of the computation. Thus while these states have a large amount of entanglement, they cannot be used to build a one-way quantum computer. Gross, Flammia, and Eisert also show that if one picks a random quantum state, it will, with near certainty, be a state that has a high value of geometric entanglement. The random states they consider are drawn via a probability distribution known as the Haar measure, which is the probability distribution that arises naturally when one insists that the probability of drawing a particular state not depend in any way on the basis of states one uses to describe a quantum system. Gross et al.’s findings show that not only do states that are too entangled to allow one-way quantum computation exist, they are actually generic among all quantum states.
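The geometric measure of entanglement can be sketched for the smallest interesting case, two qubits. In one common convention it is defined through the largest overlap between the state and any product state, E_G(ψ) = -log2 max |⟨a,b|ψ⟩|². The brute-force random search below is an illustrative toy, not the analytical tools used in the papers:

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric_entanglement_2q(psi: np.ndarray, trials: int = 200_000) -> float:
    """Approximate E_G(psi) = -log2 max |<a,b|psi>|^2 by random search
    over product states |a> (x) |b> of two qubits."""
    best = 0.0
    for _ in range(trials):
        a = rng.normal(size=2) + 1j * rng.normal(size=2)
        b = rng.normal(size=2) + 1j * rng.normal(size=2)
        a /= np.linalg.norm(a)
        b /= np.linalg.norm(b)
        overlap = abs(np.vdot(np.kron(a, b), psi)) ** 2
        best = max(best, overlap)
    return -np.log2(best)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)            # maximally entangled pair
product = np.array([1, 0, 0, 0], dtype=complex)       # unentangled |00> state
print(f"Bell state:    E_G ~ {geometric_entanglement_2q(bell):.2f}")     # ~1.0
print(f"product state: E_G ~ {geometric_entanglement_2q(product):.2f}")  # ~0.0
```

The papers' point is about the other end of this scale: for states whose E_G is very large, local measurements behave almost like coin flips, so no computational advantage survives.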
Michael J. Bremner and Andreas Winter of the University of Bristol in the UK and Caterina Mora at the University of Waterloo in Canada take a slightly different route to finding states that are not useful for one-way quantum computation [4]. They begin by showing that a random quantum state (again drawn from the Haar measure) is not useful for one-way quantum computation with high probability, confirming the result of Gross et al. But they also show it is possible to choose a random quantum state from an even smaller class of states than the completely random quantum states and still end up with a state not useful for one-way quantum computation. This more limited class of states has even less entanglement (though still quite a lot) than those considered by Gross et al., but they can still be useless for one-way quantum computation.
The bottom line is that entanglement, like most good things in life, must be consumed in moderation. For the one-way quantum computation model, a randomly chosen initial state followed by adaptive measurements is not going to give you a quantum computer. Part of the reason for this, as revealed by Gross et al., is that a randomly chosen initial state has too much geometric entanglement. But even states with less entanglement may be useless for one-way quantum computation. All is according to the color of the crystal through which you look, however, and one may naturally ask: What do all of these statements about the power of initial random quantum states have to do with the real world? It is thought, for example, that perfectly random quantum states (drawn from the Haar measure) cannot be produced efficiently on a quantum computer. So, while it may be that a perfectly random quantum state isn’t useful for one-way quantum computation, maybe the states that exist in nature, which can be constructed efficiently, actually are useful. It is known, for example, that the ground states of certain chains of interacting spins can be used for one-way quantum computation. A recent preprint by Richard Low [6] hints, however, that even states that exist in nature might also be in the class of useless states considered by Gross et al. and Bremner et al. In particular, Low has shown that there is a way to efficiently construct a class of entangled random quantum states that are not useful for one-way quantum computation. Thus the kinds of generic situations that both groups consider cannot be ruled out on the grounds that no physical model efficiently prepares such states: quantum states that are impotent for one-way quantum computation may be the norm and not the exception. The implications of this for the viability of one-way quantum computation are probably not dire, but it does point out how special the states that can be useful for this model need to be—as well as the clever thinking needed to think this model up in the first place.
Finally, one can take a step back and ask “What are the implications of these results for understanding the source of the power of quantum computation?” Entanglement, in quite a real sense, is not the full answer to this question. The results of these two papers drill a deeper hole into the view of those who believe that the largeness of entanglement, and of entanglement alone, should be the useful discriminating factor between quantum and classical computation. From the perspective of theoretical computer science, this is not too surprising. One of the big open questions in this field is whether what is efficiently computable on a classical computer is the same as what is efficiently computable on a computer that operates according to different laws of the universe—a universe where a computer can nondeterministically branch (in computer science, this is known as the P versus NP question). This latter nondeterminism isn’t the kind a physicist normally thinks about. Instead it is a nondeterminism in which one can select out which of the nondeterministic branches of a universe one wishes to live in. This nondeterminism is not the way in which our universe appears to work, but it is one way the world could work (i.e., a possible set of laws of physics).
Trying to understand why our classical computers cannot efficiently compute what could be efficiently computed in these nondeterministic worlds is the holy grail of computer science research. The failure to solve this problem is similar to saying there is no known way to write down a quantity that succinctly quantifies why modern computers are different from computers that exist in the nondeterministic world. We should not be surprised, then, if there is no way to write down a quantity that quantifies why a quantum computer is powerful. After all, quantum physics is just another set of laws that operate differently than classical laws. While it is easy to view this through a negative lens, in actuality it should provide the wind behind research into quantum algorithms: there is still much to be discovered about where quantum computers might offer computational advantages over classical computers. Just be aware that creating too much entanglement followed by a series of measurements may not be the best way to get the answer.

Monday, May 11, 2009

A new microscopic swimmer, a corkscrew that rotates in a magnetic field.

SOURCE

Researchers Ambarish Ghosh (left) and Peer Fischer of the Rowland Institute at Harvard have devised a new microscopic swimmer, a corkscrew that rotates in a magnetic field.
(PhysOrg.com) -- Harvard researchers have created a new type of microscopic swimmer: a magnetized spiral that corkscrews through liquids and is able to deliver chemicals and push loads larger than itself.
Though other researchers have created similar devices in the past, Peer Fischer, a junior fellow at the Rowland Institute at Harvard, said the new nano-robot is the only swimmer that can be precisely controlled in solution.
At just two microns long and 200 to 300 nanometers wide, the corkscrew swimmer is about the size of a bacterial cell. The work was published online May 4 in the journal Nano Letters. Fischer and Rowland Institute postdoctoral research associate Ambarish Ghosh were able to control the tiny device well enough to use it to write “R @ H” for “Rowland at Harvard” within a space that’s less than the width of a human hair.
Using nano-structured surfaces, the scientists make micro-robots that can be propelled through liquids with unprecedented control and precision. Each micro-robot is essentially a glass screw with a screw-pitch that is less than the wavelength of visible light. The body is made from glass, and a magnetic material (cobalt) is added to magnetize and drive these “artificial swimmers” with a magnetic field.

Further, they were able to use it to push a 5 micron bead — which had a volume more than 1,000 times that of the swimmer — and were also able to control two of the swimmers simultaneously.
“It really has good control. It’s exactly doing what we want it to do,” Fischer said.
The Rowland Institute was created by legendary Polaroid founder Edwin Land in 1980 as the Rowland Institute for Science, a nonprofit, basic research laboratory. It maintained its scientific mission in 2002, when it merged with Harvard and became the Rowland Institute at Harvard.
Fischer said the strength of his and Ghosh’s work is not just the swimmer’s performance but also its manufacturing method, which allows many swimmers to be created simultaneously.
The devices are made by exposing a silicon wafer to silicon dioxide vapor. The wafer is slowly rotated as the vapor condenses, growing the devices in a corkscrew shape. They are then shaken loose, sprayed with cobalt, and magnetized. Because they are lying on their sides when the cobalt is applied, the process provides a magnetic “handle” to rotate the corkscrews with.
“You can make hundreds of millions in a square centimeter,” Fischer said. “Even if you use only a few percent, that’s still a lot. … You can make a lot of them very quickly.”
Fischer and Ghosh took one last step, which didn’t improve the swimmers’ functionality, but allowed them to be tracked: they coated them with a fluorescent chemical.
Once the swimmers were complete, the researchers surrounded them with three magnetic coils, allowing the team to precisely adjust the magnetic field and control the tiny devices in three dimensions.
The microscopic world of the nano-swimmer is different from the one we experience when going for a swim, Fischer said. Because the device operates at such a tiny scale, water that we move through relatively easily, thin and runny as it is, appears thicker to the nano-swimmers, more like honey. The swimmers meet so much resistance to their forward motion that they really need to drill their way forward, he said.
The devices move at about the speed of bacteria: 40 micrometers (a micrometer is a millionth of a meter) per second.
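A rough calculation shows why water feels so thick at this scale. The Reynolds number, Re = ρvL/μ, compares inertial forces to viscous forces; plugging in the swimmer's size and speed quoted above, together with textbook values for water, gives a number far below one. The Python sketch below is a back-of-the-envelope check, not a calculation from the paper, and using the swimmer's length as the characteristic scale is an assumption.

    # Back-of-the-envelope Reynolds number for the corkscrew swimmer.
    # Size and speed are the figures quoted in the article; the water
    # properties are textbook values. Re << 1 means viscous drag
    # completely dominates inertia.
    rho = 1000.0      # density of water, kg/m^3
    mu = 1.0e-3       # dynamic viscosity of water, Pa*s
    v = 40e-6         # swimmer speed, m/s
    length = 2e-6     # swimmer length, m

    Re = rho * v * length / mu
    print(f"Reynolds number ~ {Re:.0e}")  # ~ 8e-05

At a Reynolds number of roughly a hundred-thousandth, coasting is impossible: the instant the corkscrew stops turning, it stops moving.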
Though applications in drug delivery, microsurgery, and other aspects of medicine seem apparent, Fischer said it’s too early to speak about those realistically.
However, Fischer said the artificial swimmers can be used to test some of these ideas and could have almost immediate applications in research, being used to shuttle chemicals in and out of cells or testing the strength and properties of membranes, for example.
More information: http://pubs.acs.org/doi/abs/10.1021/nl900186w
Provided by Harvard University

Ultra-dense Deuterium May Be Nuclear Fuel Of The Future

SOURCE

ScienceDaily (May 12, 2009) — A material that is a hundred thousand times heavier than water and more dense than the core of the Sun is being produced at the University of Gothenburg. The scientists working with this material are aiming for an energy process that is both more sustainable and less damaging to the environment than the nuclear power used today.
Imagine a material so heavy that a cube with sides of length 10 cm weighs 130 tonnes, a material whose density is significantly greater than that of the material in the core of the Sun. Such a material is being produced and studied by scientists in Atmospheric Science at the Department of Chemistry, the University of Gothenburg.
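Those figures are easy to check. The short Python sketch below uses only the numbers quoted in the article, plus a textbook value of roughly 150 g/cm3 for the density of the solar core, which is an assumption not taken from the article.

    # Checking the density claims for ultra-dense deuterium.
    mass = 130_000.0         # kg: a 10 cm cube "weighs 130 tonnes"
    volume = 0.1 ** 3        # m^3: volume of that cube
    density = mass / volume  # 1.3e8 kg/m^3

    print(density / 1000.0)  # ~130,000: about a hundred thousand times water
    print(density / 1.5e5)   # ~900: far denser than the core of the Sun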
Towards commercial use
So far, only microscopic amounts of the new material have been produced. New measurements that have been published in two scientific journals, however, have shown that the distance between atoms in the material is much smaller than in normal matter. Leif Holmlid, Professor in the Department of Chemistry, believes that this is an important step on the road to commercial use of the material.
The material is produced from heavy hydrogen, also known as deuterium, and is therefore known as “ultra-dense deuterium”. It is believed that ultra-dense deuterium plays a role in the formation of stars, and that it is probably present in giant planets such as Jupiter.
An efficient fuel
So what can this super-heavy material be used for?
“One important justification for our research is that ultra-dense deuterium may be a very efficient fuel in laser driven nuclear fusion. It is possible to achieve nuclear fusion between deuterium nuclei using high-power lasers, releasing vast amounts of energy”, says Leif Holmlid.
The laser technology has long been tested on frozen deuterium, known as “deuterium ice”, but results have been poor. It has proved to be very difficult to compress the deuterium ice sufficiently for it to attain the high temperature required to ignite the fusion.
Energy source of the future
Ultra-dense deuterium is a million times more dense than frozen deuterium, making it relatively easy to create a nuclear fusion reaction using high-power pulses of laser light.
“If we can produce large quantities of ultra-dense deuterium, the fusion process may become the energy source of the future. And it may become available much earlier than we have thought possible”, says Leif Holmlid.
“Further, we believe that we can design the deuterium fusion such that it produces only helium and hydrogen as its products, both of which are completely non-hazardous. It will not be necessary to deal with the highly radioactive tritium that is planned for use in other types of future fusion reactors, and this means that laser-driven nuclear fusion as we envisage it will be both more sustainable and less damaging to the environment than other methods that are being developed.”
Deuterium – brief facts
Deuterium is an isotope of hydrogen that is found in large quantities in water: out of every ten thousand hydrogen atoms, more than one has a deuterium nucleus. The isotope is denoted “2H” or “D”, and is commonly known as “heavy hydrogen”. Deuterium is used in a number of conventional nuclear reactors in the form of heavy water (D2O), and it will probably also be used as fuel in fusion reactors in the future.
Adapted from materials provided by University of Gothenburg.

Particles, Molecules Prefer Not To Mix

SOURCE

ScienceDaily (May 11, 2009) — In the world of small things, shape, order and orientation are surprisingly important, according to findings from a new study by chemists at Washington University in St. Louis.
Lev Gelb, WUSTL associate professor of chemistry, his graduate student Brian Barnes, and postdoctoral researcher Daniel Siderius used computer simulations to study a very simple model of molecules on surfaces, one that looks a lot like the computer game "Tetris." They have found that the shapes in this model (and in the game) do a number of surprising things.
WUSTL chemists headed by Lev Gelb simulated the motions and behavior of particles on a lattice and found "birds of a feather flock together." It's plainly evident that, in this four-component mixture of squares, rods, S shapes and Z shapes, the shapes all make little clusters, rather than completely mixing together. Tetris, anyone?
"First, different shapes don't mix very well with each other; each shape prefers to associate with others of the same kind," Gelb says. "When you put a lot of different shapes together, they separate from each other on microscopic scales, forming little clusters of nearly pure fluids. This is true even for the mirror-image shapes.
"Second, the structures of the pure (single-shape) fluids are quite complex and not what we might have predicted. There is a very strong tendency for some of the shapes, like rods and S- and Z- shapes, to align in the same direction. Finally, how `different looking' the shapes are isn't a good predictor for how well they mix; it turns out that the hard-to-predict characteristic structures of the fluids are more important than the shapes themselves, in this regard."
The researchers used Monte Carlo computer simulations of a simple lattice model (think of the lattice as a checkerboard), on which are placed "tetrominoes," which are S-, Z-, L-, J-, T-, rod- and square-shaped pieces.
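To see what such a model looks like in practice, the Python sketch below is a minimal illustration with assumed details, not the authors' code: hard tetrominoes on a periodic square lattice, moved by random single-piece translations that are accepted only when they cause no overlap, followed by a crude count of whether like shapes end up touching like shapes. The published simulations are far more sophisticated, including rotations and all seven pieces.

    import random

    # Hard tetrominoes on a periodic square lattice: pieces interact only
    # through excluded volume, as in an athermal lattice fluid.
    L = 40
    SHAPES = {                # cell offsets for four of the seven shapes
        "S": [(0, 1), (0, 2), (1, 0), (1, 1)],
        "Z": [(0, 0), (0, 1), (1, 1), (1, 2)],
        "I": [(0, 0), (0, 1), (0, 2), (0, 3)],
        "O": [(0, 0), (0, 1), (1, 0), (1, 1)],
    }

    def cells(shape, pos):
        """Lattice cells covered by `shape` anchored at `pos` (wraparound)."""
        return [((pos[0] + dr) % L, (pos[1] + dc) % L)
                for dr, dc in SHAPES[shape]]

    occupied = {}             # cell -> piece index
    pieces = []               # piece index -> (shape, anchor)

    # Random sequential filling: keep a trial placement only if it fits.
    for _ in range(4000):
        shape = random.choice(list(SHAPES))
        pos = (random.randrange(L), random.randrange(L))
        if all(c not in occupied for c in cells(shape, pos)):
            for c in cells(shape, pos):
                occupied[c] = len(pieces)
            pieces.append((shape, pos))

    # Monte Carlo moves: a hard-core translation is accepted iff no overlap.
    for _ in range(200_000):
        pid = random.randrange(len(pieces))
        shape, pos = pieces[pid]
        for c in cells(shape, pos):      # lift the piece off the lattice
            del occupied[c]
        dr, dc = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        new = ((pos[0] + dr) % L, (pos[1] + dc) % L)
        if all(c not in occupied for c in cells(shape, new)):
            pos = new                    # accept; otherwise put it back
        pieces[pid] = (shape, pos)
        for c in cells(shape, pos):
            occupied[c] = pid

    # Tally nearest-neighbour contacts between distinct pieces.
    like = unlike = 0
    for (r, c), pid in occupied.items():
        for nb in (((r + 1) % L, c), (r, (c + 1) % L)):
            other = occupied.get(nb)
            if other is not None and other != pid:
                if pieces[other][0] == pieces[pid][0]:
                    like += 1
                else:
                    unlike += 1
    print(f"like-shape contacts: {like}, unlike-shape contacts: {unlike}")

Watching how the ratio of like to unlike contacts drifts over longer runs is the crude analogue of the small-scale separation Gelb describes.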
Gelb and his colleagues use simulations to develop an atomic-scale understanding of the behavior of complex systems. They want to understand how molecules and nanoparticles of different shapes interact with each other, to gain a better understanding of self-assembly, which is important in the development of new, strong materials and of designed catalysts.
Lining up
Gelb says that there has long been interest in self-assembly and in designing things that will assemble into predictable structures. Most researchers try to hold simple shapes together energetically, using some sort of chemical lock and key, such as DNA or hydrogen bonds. But if the particles have more shape to them, surprising things can happen.
"People have known for a long time when you make round nanoparticles and deposit them on a surface and you do it well, they make a nice, crystalline lattice," Gelb says. "If you do mixtures of two sizes you can get a number of different patterns with them. But if the particles aren't round, if they are short rods or things with more structure, it gets much more complicated quickly, and there's much less known about that."
The chemists also studied all 21 mixtures of two different shapes, as well as many combinations of three or more shapes.
"In all of the binary mixtures you get small-scale phase separation, which is counterintuitive," Gelb says. "It's not that the shapes repel each other. When there's no special repulsion between things or no stronger interaction between things of the same shape, you expect things to mix really well. In fact, that's not what happens."
Using ideas from classical thermodynamics and solution theory, the team was able to understand this separation using two different quantities. One is the virial coefficient, which measures the overlap between two shapes. They found that the shapes adopt alignments that minimize this overlap. Another is the volume of mixing. If you mix two liquids together, the volume of the mixture isn't necessarily the same as the volume of pure liquids you started with. In a mixture of water and ethanol, for instance, the volume of the mixture is smaller by about five percent than the sum of the original volumes. They found that in this model the volume always goes up when mixing different shapes.
Small world
"That's another indication that they don't mix well," Gelb says. "They take up more space when you mix them than when you allow them to be separate."
The model provides information on a very small world.
"If you think of the shapes as molecules sticking to a crystalline surface they would be a few Angstroms wide," says Barnes. "If you relate the model to nanoparticles, the shapes would be much larger, on the scale of tens of nanometers across."
In explaining the alignment phenomenon, Siderius offers the analogy of a roomful of people trying to circulate among each other.
"If they're all randomly placed, they'd bump shoulders frequently," he says. "But if they aligned a bit, everyone could move around more freely, which increases the entropy. In the past, we'd think of an ordered system as being low in entropy, but in this case the ordered state is high entropy."
Does it have anything to do with Tetris?
"Well, it suggests that one of the reasons the game is difficult is that the shapes don't fit together as well as we might think," says Gelb. "That, and they come down too fast."
The results were published in the online edition of the journal Langmuir on April 27, 2009.
Journal reference:
Barnes et al. Structure, Thermodynamics, and Solubility in Tetromino Fluids. Langmuir, 2009. DOI: 10.1021/la900196b
Adapted from materials provided by Washington University in St. Louis.

Sunday, May 10, 2009

Post-Quantum Correlations: Exploring the Limits of Quantum Nonlocality

This figure shows levels of nonlocality as measured by the CHSH Bell inequality. Classical nonlocal correlations (green) are at 2 and below; quantum nonlocal correlations (red) are above 2 but below Tsirelson’s bound (BQ); and post-quantum nonlocal correlations (light blue) are above and, in some cases, below Tsirelson’s bound. BCC marks the “bound of triviality,” above which correlations are unlikely to exist. In the current study, scientists found that post-quantum correlated nonlocal boxes (dark blue line) are also unlikely to exist, despite some boxes being arbitrarily close to being classical. Image credit: Brunner and Skrzypczyk. ©2009 APS.

SOURCE

(PhysOrg.com) -- When it comes to nonlocal correlations, some correlations are more nonlocal than others. As the subject of study for several decades, nonlocal correlations (for example, quantum entanglement) exist between two objects when they can somehow directly influence each other even when separated by a large distance. Because these correlations require “passion-at-a-distance” (a term coined by physicist Abner Shimony), they violate the principle of locality, which states that nothing can travel faster than the speed of light (even though quantum correlations cannot be used to communicate faster than the speed of light). Besides being a fascinating phenomenon, nonlocality can also lead to powerful techniques in computing, cryptography, and information processing.
Quantum Limits
Despite advances in quantum research, physicists still don’t fully understand the fundamental nature of nonlocality. In 1980, mathematician Boris Tsirelson found that quantum correlations are bounded by an upper limit; quantum nonlocality is only so strong. Later, in 1994, physicists Sandu Popescu and Daniel Rohrlich made another surprising discovery: a particular kind of correlation might exist above the “Tsirelson bound,” as well as below the bound, in a certain range (see image). These so-called post-quantum correlations are therefore “more nonlocal” than quantum correlations.
“Tsirelson's bound represents the most nonlocal ‘boxes’ that can be created with quantum mechanics,” Nicolas Brunner, a physicist at the University of Bristol, told PhysOrg.com. “Nonlocality here is measured by the degree of violation of a Bell inequality. So, quantum non-locality appears to be limited. The big question is why. That is, is there a good physical reason why post-quantum correlations don’t seem to exist in nature?”
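The numbers behind these bounds are easy to reproduce. In the standard CHSH test, each party chooses between two measurement settings and the figure of merit is S = E(A0,B0) + E(A0,B1) + E(A1,B0) - E(A1,B1); local classical boxes obey S <= 2. For a maximally entangled pair measured in a single plane, the quantum correlator is E(ta, tb) = cos(ta - tb), and the textbook angle choices in the Python sketch below saturate Tsirelson's bound. This is a standard illustration, not anything specific to the new study.

    import math

    # CHSH value for a maximally entangled pair, using the quantum
    # correlator E(ta, tb) = cos(ta - tb) for measurements in one plane.
    def E(ta, tb):
        return math.cos(ta - tb)

    A = [0.0, math.pi / 2]           # Alice's two measurement settings
    B = [math.pi / 4, -math.pi / 4]  # Bob's two measurement settings

    S = E(A[0], B[0]) + E(A[0], B[1]) + E(A[1], B[0]) - E(A[1], B[1])
    print(S, 2 * math.sqrt(2))       # both ~ 2.828: Tsirelson's bound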
In a recent study, Brunner and coauthor Paul Skrzypczyk, also of the University of Bristol, propose an explanation for why post-quantum correlations are unlikely to exist, which may reveal insight into why quantum nonlocality is bounded, as well as into the underlying difference between quantum and post-quantum correlations.
In their study, Brunner and Skrzypczyk have shown that a certain class of post-quantum correlations is unlikely to exist because it makes communication complexity trivial. The triviality arises because the nonlocality of these correlations can be enhanced beyond a critical limit, and, surprisingly, this remains possible even when the correlations start out arbitrarily close to classical ones (giving an arbitrarily small violation of Bell’s inequality). As previous research has suggested, any theory in which communication complexity is trivial is very unlikely to exist.
Beyond Quantum
“’Post-quantum’ means beyond quantum,” Brunner explained. “This term applies to correlations, which are conveniently - and probably most simply - described by ‘black boxes.’ The basic idea is the following: imagine a black box shared by two distant parties Alice and Bob; each party is allowed to ask a question to the box (or make a measurement on the box, if you prefer) and then gets an answer (a measurement outcome). By repeating this procedure many times, and at the end comparing their respective results, Alice and Bob can identify what their box is doing. For instance, it could be that the outcomes are always the same whenever Alice and Bob choose the same questions. This kind of behavior is a correlation; knowing one outcome, it is possible to deduce the other one, since both outcomes are correlated.
“Now, it happens that there exist different types of correlations: basically, those that can be understood with classical physics (where correlations originate from a common cause), and those that cannot. This second type of correlation is called nonlocal, in the sense that it cannot be explained by a common cause. A priori it is not obvious whether some correlations are local or not. The way physicists can tell is by testing a Bell inequality; when a Bell inequality is violated, the correlations cannot be local; that is, there cannot exist a common cause for these correlations.
“Now, an amazing thing about quantum mechanics is that it allows one to construct boxes that are non-local. This is quantum nonlocality. Now, it happens that not all nonlocal boxes can be constructed in quantum mechanics. Thus there exist correlations which are unobtainable in quantum mechanics. These are called post-quantum correlations. In general, post-quantum correlations can be above Tsirelson’s bound, but in some very specific cases, they can also be below.”
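This black-box picture is easy to act out in code. The Python sketch below, an illustration rather than software from the study, simulates the standard Popescu-Rohrlich box, whose answers always satisfy a XOR b = x AND y even though each answer on its own looks like a fair coin flip. It then estimates the four correlators exactly as Alice and Bob would, by asking many questions and comparing notes afterwards; the CHSH combination comes out at the post-quantum maximum of 4.

    import random

    def pr_box(x, y):
        """One use of a Popescu-Rohrlich box on questions (x, y)."""
        a = random.randint(0, 1)  # Alice's answer: locally a fair coin
        b = a ^ (x & y)           # Bob's answer: enforces a XOR b = x AND y
        return a, b

    def correlator(x, y, trials=20000):
        """Estimate E(x, y) = P(a = b) - P(a != b) by repeated questioning."""
        agree = 0
        for _ in range(trials):
            a, b = pr_box(x, y)
            agree += (a == b)
        return 2 * agree / trials - 1

    S = (correlator(0, 0) + correlator(0, 1)
         + correlator(1, 0) - correlator(1, 1))
    print(S)  # ~ 4: the maximum, well beyond Tsirelson's bound of ~2.83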
‘Distilling’ Post-Quantum Nonlocality
To demonstrate that post-quantum correlations are unlikely to exist in nature, Brunner and Skrzypczyk developed a protocol for deterministically distilling nonlocality in post-quantum states. That is, the technique refines weakly nonlocal states into states with greater nonlocality. In this context, “distillation” can also be thought of as “purifying,” “amplifying,” or “maximizing” the nonlocality of post-quantum correlations. Since nonlocal correlations are more useful when they are stronger, maximizing nonlocality has significant implications for quantum information protocols. The physicists’ protocol works specifically with “correlated nonlocal boxes,” a particular class of post-quantum boxes.
Brunner and Skrzypczyk’s distillation protocol builds on a recent breakthrough by another team (Forster et al.), who presented the first nonlocality distillation protocol just a few months ago. However, the Forster protocol can distill correlated nonlocal boxes only up to a certain point, violating a Bell inequality called the Clauser-Horne-Shimony-Holt (CHSH) inequality only up to CHSH = 3. While this value is greater than Tsirelson’s bound of 2.82, it does not reach the bound of 3.26, which marks the point at which communication complexity becomes trivial.
Taking a step forward, Brunner and Skrzypczyk’s protocol can distill nonlocality all the way up to the maximum nonlocality of the Popescu-Rohrlich box, which is 4. In passing the 3.26 bound of triviality, they show that these post-quantum correlated nonlocal boxes do indeed collapse communication complexity.
The distillation protocol is executed by two distant parties who share two weakly correlated nonlocal boxes. Each party can input one bit into a box to receive one output bit, simulating a binary input/binary output system with local operations. As the scientists explain, a distillation protocol can be viewed as a way of classically wiring the two boxes together. The protocol is a choice of four wirings, one for each pair of inputs from Alice and Bob. The wiring (an algorithm) that determines the output bits of the boxes transforms the two nonlocal boxes into a single correlated nonlocal box with stronger nonlocality than either individual box.
Importantly, this protocol can take any correlated nonlocal box that violates the CHSH inequality by less than the 3.26 limit and distill it beyond 3.26. In other words, any correlated nonlocal box that did not already make communication complexity trivial can be made to do so. Surprisingly, some of these boxes can even be arbitrarily close to being classical (violating the CHSH inequality by an arbitrarily small amount), and yet, since they can be distilled beyond the “bound of triviality,” they still collapse communication complexity. According to previous studies of triviality, such boxes are very unlikely to exist, even those below Tsirelson’s bound.
Trivial Complexity
Theoretically, when communication complexity is trivial, even the most complex problems can be solved with a minimum amount of communication. In the following example, Brunner explains what would happen in real life if a single bit of information could solve any problem.
“Communication complexity is an information-processing task,” Brunner said. “Here is an example. Suppose you and I would like to meet during the next year; given our respective agendas, we would like to know whether there is a day on which both of us are free. It doesn’t matter what that day is; we just want to know whether such a day exists or not.
“Since we are in distant locations, we must send each other some information to solve the problem. For instance, if I send you all of the information in my agenda, then you could find out whether a meeting is possible (and so solve the problem). But that implies I must send you a significant quantity of information (many bits). It turns out that in classical physics (or, if you prefer, in everyday life), there is no better strategy; I really have to send you all that information. In quantum physics, although there exist stronger correlations than in classical physics (quantum nonlocal correlations), I would still have to send you an enormous amount of communication.
“Now, the really astonishing thing is that, if you have access to certain post-quantum correlations (post-quantum boxes), a single bit of communication is enough to solve this problem! In other words, communication complexity becomes trivial in these theories, since one bit of communication is enough to solve any problem like this one. Importantly, in classical or quantum physics, communication complexity is not trivial. More generally, for computer scientists, a world in which communication complexity becomes trivial is highly unlikely to exist. Previously, it was known that post-quantum boxes with a very high degree of violation of a Bell inequality make communication complexity trivial; now, the astonishing thing about our result is that we show that some correlations with a very small degree of violation of a Bell inequality - but indeed not accessible with quantum mechanics - can also make communication complexity trivial.”
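The classic construction behind this collapse, due to Wim van Dam, is short enough to spell out; the Python sketch below illustrates that construction, not the protocol of the new paper. To compute the inner product mod 2 of two n-bit strings, Alice and Bob feed each pair of bits into its own shared PR box, and the XOR structure of the boxes telescopes so that Alice needs to transmit exactly one bit, however large n is. Since any Boolean function, the agenda problem included, can be rewritten as an inner product of locally computed (if much longer) strings, one communicated bit then suffices for everything.

    import random

    def pr_box(x, y):
        """One shared Popescu-Rohrlich box: outputs obey a XOR b = x AND y."""
        a = random.randint(0, 1)
        return a, a ^ (x & y)

    def inner_product_one_bit(x_bits, y_bits):
        """Distributed inner product mod 2 with n PR boxes and ONE sent bit."""
        alice_parity = bob_parity = 0
        for x, y in zip(x_bits, y_bits):
            a, b = pr_box(x, y)
            alice_parity ^= a
            bob_parity ^= b
        # Alice transmits alice_parity, a single bit; Bob finishes locally.
        return alice_parity ^ bob_parity  # XOR of all x_i AND y_i

    n = 1000
    x = [random.randint(0, 1) for _ in range(n)]
    y = [random.randint(0, 1) for _ in range(n)]
    assert inner_product_one_bit(x, y) == sum(a & b for a, b in zip(x, y)) % 2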
Post-Quantum Future
In the future, Brunner and Skrzypczyk hope to find improved distillation protocols that work for a wider variety of post-quantum nonlocal boxes, not only correlated nonlocal boxes. More research is also needed to explain why correlations do not seem to exist in the gap between Tsirelson’s bound and the bound of triviality. Ultimately, this line of research could help draw a distinction between quantum and post-quantum theories, with important theoretical implications.
“The greatest implications of our results are the following,” Brunner said. “First, they give new evidence that certain post-quantum theories allow for a dramatic increase in communication power compared to quantum mechanics, and therefore appear very unlikely to exist in nature. The nice thing, in particular, is that some of these theories allow only for a little nonlocality (as measured by the degree of violation of a Bell inequality). Thus our result is a striking demonstration that we still have no clue how to correctly measure nonlocality. Finally, it is one step further towards an information-theoretic axiom for quantum mechanics.”
More information: Nicolas Brunner and Paul Skrzypczyk. “Nonlocality Distillation and Postquantum Theories with Trivial Communication Complexity.” Physical Review Letters 102, 160403 (2009).