MIT physicists build world’s first fermion microscope

Scientists at the Massachusetts Institute of Technology have created a microscope that they claim can image the fundamental particles that make up all matter in the universe (Credit: Jose-Luis Olivares/MIT)

Researchers working at the Massachusetts Institute of Technology (MIT) claim to have created a method to better observe fermions – the sub-atomic building blocks of matter – by constructing a microscope capable of viewing them in groups of a thousand at a time. A laser technique is used to herd the fermions into a viewing area and then freeze them in place so all of the captured particles can be imaged simultaneously.

In the entire known universe, there are only two types of particles: fermions and bosons. In simple terms, fermions are all the particles that make up matter (for example, electrons), and bosons are all the particles that carry force (for example, photons).

Fermions include electrons, neutrons, quarks and protons, as well as atoms made up of an odd total number of such particles. However, due to the strange (and not completely understood) nature of these particles with regard to their quantum spin states, scientists often opt to employ gases of ultra-cold fermionic atoms as proxies for other fermions.

Over the last two decades, physicists studying ultracold atomic gases of bosonic atoms have been able to do so relatively easily, because bosons can occupy the same quantum state in boundless numbers. Fermions, however, are much harder to manipulate for imaging: they cannot be held in the same quantum state in large numbers, and they are much more difficult to cool to the temperatures required to slow them down enough to view them.
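
The practical difference shows up in how many particles a single quantum state can hold on average. The short Python sketch below (with purely illustrative energies in arbitrary units) evaluates the standard Bose-Einstein and Fermi-Dirac occupation formulas: the boson occupancy can climb far above one, while the fermion occupancy can never exceed one, which is the Pauli exclusion principle in statistical form.

```python
import math

def bose_einstein(e, mu, kT):
    """Mean number of bosons in a state of energy e."""
    return 1.0 / (math.exp((e - mu) / kT) - 1.0)

def fermi_dirac(e, mu, kT):
    """Mean number of fermions in a state of energy e (never exceeds 1)."""
    return 1.0 / (math.exp((e - mu) / kT) + 1.0)

# Illustrative numbers only: a state sitting slightly above the chemical potential.
e, mu = 1.0, 0.9
for kT in (1.0, 0.1, 0.01):
    print(f"kT={kT:5.2f}   bosons: {bose_einstein(e, mu, kT):9.4f}   fermions: {fermi_dirac(e, mu, kT):6.4f}")
```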

Physicists at Harvard University successfully created a boson microscope that could resolve individual bosons in an optical lattice as far back as 2009, and in 2010 the Max Planck Institute of Quantum Optics developed a second boson microscope. Though these microscopes exposed the behavior of bosons, their counterparts, fermions, remained elusive without an equivalent fermion microscope.

“We wanted to do what these groups had done for bosons, but for fermions,” says Martin Zwierlein, a professor of physics at MIT and a member of the team working on the project. “And it turned out it was much harder for fermions, because the atoms we use are not so easily cooled. So we had to find a new way to cool them while looking at them.”

Studying fermions requires a way to reduce their temperature, and therefore their movement, to a point low enough to image them. However, even the techniques behind the first-ever laboratory realization of Bose-Einstein condensation in 1995 (work that earned a Nobel Prize in 2001), or later laser-cooling methods that brought atoms to within a few ten-thousandths of a degree above absolute zero, are insufficient to achieve the cooling required to image fermionic atoms.
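
For a sense of the temperature scale that ordinary laser cooling can reach, the sketch below works out the textbook Doppler cooling limit, T_D = ħΓ / 2k_B, for potassium (the atom used here), taking the natural linewidth of its D2 cooling transition as roughly 2π × 6 MHz. That figure and the resulting value of about 140 microkelvin are approximate back-of-the-envelope numbers, not the MIT team's.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
k_B  = 1.380649e-23      # Boltzmann constant, J/K

# Natural linewidth of potassium's D2 cooling transition (approximate textbook value).
gamma = 2 * math.pi * 6.0e6   # rad/s

# Doppler cooling limit: T_D = hbar * gamma / (2 * k_B)
T_doppler = hbar * gamma / (2 * k_B)
print(f"Doppler limit for potassium: {T_doppler * 1e6:.0f} microkelvin")   # roughly 140 uK
```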

To overcome this problem, the MIT researchers first created an optical lattice, using laser beams to form an arrangement of light “wells”, each of which could trap and hold a single fermion in place (a technique similar to that used by the University of California to capture cesium atoms and image rotons). After several stages of laser cooling, followed by evaporative cooling of the gas (in this case, potassium), the atoms reached a temperature just above absolute zero, cold enough to hold individual fermions in place on the optical lattice.
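
The lattice itself is simply the standing-wave intensity pattern of interfering laser beams, which acts as a periodic array of potential wells with one well every half wavelength. A minimal one-dimensional sketch of that idea (the wavelength and depth are illustrative placeholders, not the experiment's actual parameters):

```python
import numpy as np

wavelength = 1064e-9      # lattice laser wavelength, metres (illustrative choice)
a = wavelength / 2        # well spacing from two counter-propagating beams
V0 = 1.0                  # lattice depth, arbitrary units

def lattice_potential(x):
    """Standard 1D optical-lattice potential: a periodic row of light 'wells'."""
    return V0 * np.sin(np.pi * x / a) ** 2

x = np.linspace(0.0, 4 * a, 9)            # sample a few points across four sites
print("spacing between wells:", a, "m")   # 532 nm for a 1064 nm laser
print(lattice_potential(x).round(3))      # the zeros mark the bottoms of the wells
```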

As the fermions settle into this lower energy state, they release photons of light, which can be captured by the microscope and used to locate each fermion's exact position within the lattice with an accuracy finer than the wavelength of light.
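
Sub-wavelength accuracy is possible because many photons are collected from the same pinned atom: the diffraction-limited spot each atom produces is broad, but its centre can be estimated to a small fraction of that width. A minimal sketch of the idea using simulated photon positions (not real microscope data):

```python
import numpy as np

rng = np.random.default_rng(0)

true_position = 123.4    # atom's true position in nanometres (made-up value)
psf_width = 300.0        # diffraction-limited spot size, roughly a wavelength
n_photons = 2000         # photons collected from the fluorescing atom

# Each detected photon lands somewhere within the broad diffraction-limited spot.
photon_hits = rng.normal(true_position, psf_width, size=n_photons)

# Averaging many hits localizes the atom far below the optical wavelength.
estimate = photon_hits.mean()
uncertainty = psf_width / np.sqrt(n_photons)
print(f"estimated position: {estimate:.1f} nm  (+/- {uncertainty:.1f} nm)")
```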

“That means I know where they are, and I can maybe move them around with a little tweezer to any location, and arrange them in any pattern I’d like,” said Zwierlein.

Unfortunately, this stability was tenuous because – when light was shone upon the atoms to view them – individual photons were able to knock them out of place.

The team resolved this by employing a two-beam approach in which lasers of differing frequencies were used to alter the fermion atom’s energy state. The two beams were fired at the atom simultaneously so that light at one frequency was absorbed by the particle, which then emitted a corresponding photon in response. This, in turn, forced the particle into a lower energy state, cooling it further by reducing its excitation.
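
The description matches the general bookkeeping of two-beam (Raman-style) cooling, in which each absorption-and-emission cycle tends to carry away one quantum of the atom's motion in its well. The toy model below is only a cartoon of that bookkeeping under assumed probabilities; it is not the MIT group's actual scheme.

```python
import random

random.seed(1)

n = 20            # starting number of motional quanta in the well (toy value)
P_COOL = 0.95     # chance a two-beam cycle removes one quantum (assumed)
P_HEAT = 0.05     # chance photon recoil adds a quantum back (assumed)

cycles = 0
while n > 0 and cycles < 10_000:
    if random.random() < P_COOL:
        n -= 1    # driven transition to the next-lower motional state
    if random.random() < P_HEAT:
        n += 1    # spontaneous-emission recoil occasionally heats the atom
    cycles += 1

print(f"motional quanta left after {cycles} cycles: {n}")
```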

The upshot of this research, according to the team, is that high-resolution imaging of more than 1,000 fermionic atoms at the same time will help improve our fundamental understanding of these elusive particles. Because electrons are also fermions, it is hoped that this information may eventually aid research into high-temperature superconductors, with their promise of lossless energy transport, as well as the development of quantum computers.

“The Fermi gas microscope, together with the ability to position atoms at will, might be an important step toward the realization of a quantum computer based on fermions,” said Zwierlein. “One would thus harness the power of the very same intricate quantum rules that so far hamper our understanding of electronic systems.”

References: http://www.gizmag.com/

Smartphone and tablet could be used for cheap, portable medical biosensing

A diagram of the CNBP system (Credit: Centre for Nanoscale BioPhotonics)

As mobile technology progresses, we’re seeing more and more examples of low-cost diagnostic systems being created for use in developing nations and remote locations. One of the latest incorporates little more than a smartphone, tablet, polarizer and box to test body fluid samples for diseases such as arthritis, cystic fibrosis and acute pancreatitis.

Developed at Australia’s Centre for Nanoscale BioPhotonics (CNBP), the setup utilizes fluorescent microscopy, a process in which dyes added to a sample cause specific biomarkers to glow when exposed to bright light.

To use it, clinicians deposit a dyed fluid sample in a well plate (basically a transparent sample-holding tray), put that plate on the screen of a tablet that’s in the box, and place a piece of polarizing glass over the plate compartment that contains the fluid. They then put their smartphone on top of the box, so that its camera lines up with that compartment.

Once the tablet is powered up, the light from its screen causes the targeted biomarkers to fluoresce (assuming they’re present in the first place). The polarizer allows light given off by those biomarkers to stand out from the tablet’s light, while an app on the phone analyzes the color and intensity of the fluorescence to help make a diagnosis.
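
On the phone side, the readout can be as simple as measuring the colour and brightness of the sample compartment in the captured photo. The sketch below illustrates that kind of analysis with Pillow and NumPy; the filename, crop coordinates and threshold are placeholders, not the CNBP app's actual calibration.

```python
import numpy as np
from PIL import Image

# Photo of the well plate taken by the smartphone camera (placeholder filename).
img = np.asarray(Image.open("sample_photo.jpg").convert("RGB"), dtype=float)

# Crop the region covering the sample well (placeholder coordinates).
well = img[400:600, 700:900]

# Average colour and overall intensity of the fluorescence in that region.
mean_rgb = well.reshape(-1, 3).mean(axis=0)
intensity = mean_rgb.mean()

# Compare against a calibration threshold to flag a positive result (assumed value).
THRESHOLD = 80.0
print("mean RGB:", np.round(mean_rgb, 1), "intensity:", round(intensity, 1))
print("biomarker detected" if intensity > THRESHOLD else "below detection threshold")
```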

“This type of fluorescent testing can be carried out by a variety of devices but in most cases the readout requires professional research laboratory equipment, which costs many tens of thousands of dollars,” says Ewa Goldys, CNBP’s deputy director. “What we’ve done is develop a device with a minimal number of commonly available components … The results can be analyzed by simply taking an image and the readout is available immediately.”

The free smartphone app will be available as of June 15th, via the project website. A paper on the research was recently published in the journal Sensors.

References: http://www.gizmag.com/

Self-folding robot walks, swims, climbs, dissolves

A demo that sparked interest at the ICRA 2015 conference in Seattle centered on an origami robot built by researchers from the computer science and artificial intelligence lab at MIT and the department of informatics at the Technische Universität in Germany. Their paper, “An untethered miniature origami robot that self-folds, walks, swims, and degrades,” was co-authored by Shuhei Miyashita, Steven Guitron, Marvin Ludersdorfer, Cynthia R. Sung and Daniela Rus, and focuses on an origami robot that does just what the title suggests. A video showing the robot in action showcases each move.

One can watch the robot walking on a trajectory, walking on human skin, delivering a block; swimming (the robot has a boat-shaped body so that it can float on water with roll and pitch stability); carrying a load (0.3 g robot); climbing a slope; and digging through a stack. It also shows how a polystyrene model robot dissolves in acetone.

Evan Ackerman of IEEE Spectrum reported on the Seattle demo. Unfolded, the robot has a magnet and PVC sandwiched between laser-cut structural layers (polystyrene or paper). How it folds: when placed on a heating element, the PVC contracts, and folds form where the structural layers have been cut, said Ackerman. The self-folding takes place on a flat sheet, and the robot folded itself in a few seconds. Kelsey Atherton in Popular Science said, “Underneath it all, hidden like the Wizard of Oz behind his curtain, sit four electromagnetic coils, which turn on and off and makes the robot move forward in a direction set by its shape.”
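
Atherton's description suggests the folded robot itself carries no electronics: the external coils are simply energized in a repeating sequence, and the magnet inside the body is dragged along step by step. A minimal sketch of that kind of open-loop drive is shown below; the coil interface and timing are hypothetical stand-ins, not the researchers' actual controller.

```python
import time

class Coil:
    """Hypothetical handle to one of the four electromagnets under the surface."""
    def __init__(self, name):
        self.name = name
    def on(self):
        print(f"{self.name} ON")    # stand-in for energizing the real coil
    def off(self):
        print(f"{self.name} OFF")

coils = [Coil(f"coil_{i}") for i in range(4)]

def step_sequence(cycles, dwell=0.05):
    """Energize the coils one after another; the oscillating field rocks the
    robot's magnet, and the folded shape turns that rocking into forward motion."""
    for _ in range(cycles):
        for coil in coils:
            coil.on()
            time.sleep(dwell)       # how long each coil stays energized (assumed)
            coil.off()

step_sequence(cycles=3)
```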

When placed in the tank of acetone, the robot dissolves, except for the magnet. The authors noted that the “minimal body materials” in their design enabled the robot to completely dissolve in a liquid environment, “a difficult challenge to accomplish if the robot had a more complex architecture.”

Possible future directions include self-folding sensors built into the body of the robot, which could lead to autonomous operation, eventually even inside the human body. The authors wrote, “Such autonomous ‘4D-printed’ robots could be used at unreachable sites, including those encountered in both in vivo and bionic biological treatment.”

Atherton said, for example, that future designs based on this robot could be even smaller and could work as medical devices sent under the skin.

IEEE Spectrum’s Ackerman said it marked “the first time that a robot has been able to demonstrate a complete life cycle like this.”

Origami robots, reconfigurable robots that can fold themselves into arbitrary shapes, were discussed in an MIT News article last year quoting Ronald Fearing, a professor of electrical engineering and computer science at the University of California at Berkeley. Origami robotics, he said, is “a pretty powerful concept, because cutting planar things and folding is an inherently very low-cost process.” He added, “Folding, I think, is a good way to get to the smaller robots.”

References: http://phys.org/

How Computers Can Teach Themselves to Recognize Cats

In June 2012, a network of 16,000 computers trained itself to recognize a cat by looking at 10 million images from YouTube videos. Today, the technique is used in everything from Google image searches to Facebook’s newsfeed algorithms.

The feline recognition feat was accomplished using “deep learning,” an approach to machine learning that works by exposing a computer program to a large set of raw data and having it discover more and more abstract concepts. “What it’s about is allowing the computer to learn how to represent information in a more meaningful way, and doing so at several levels of representation,” said Yoshua Bengio, a computer scientist at the University of Montreal in Canada, who co-authored an article on the subject, published today (May 27) in the journal Nature. [Science Fact or Fiction? The Plausibility of 10 Sci-Fi Concepts]

“There are many ways you can represent information, some of which allow a human decision maker to make a decision more easily,” Bengio told Live Science. For example, when light hits a person’s eye, the photons stimulate neurons in the retina to fire, sending signals to the brain’s visual cortex, which perceives them as an image. This image in the brain is abstract, but it’s a more useful representation for making decisions than a collection of photons.

Similarly, deep learning allows a computer (or set of computers) to take a bunch of raw data — in the form of pixels on a screen, for example — and construct higher and higher levels of abstraction. It can then use these abstract concepts to make decisions, such as whether a picture of a furry blob with two eyes and whiskers is a cat.
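
The phrase “higher and higher levels of abstraction” maps directly onto the layers of a neural network: raw pixels go in, each layer re-represents the output of the previous one, and the final layer makes the decision. Below is a minimal two-layer sketch in NumPy with random, untrained weights; it only shows the structure of the idea, not a working cat detector. Training such weights on millions of images is what the 16,000-machine system did at a vastly larger scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Raw data: a fake 8x8 grayscale "image", flattened to 64 pixel values.
pixels = rng.random(64)

# Layer 1 re-represents the pixels as 16 intermediate features;
# layer 2 turns those features into a single "is it a cat?" score.
W1, b1 = rng.standard_normal((16, 64)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal((1, 16)) * 0.1, np.zeros(1)

features = relu(W1 @ pixels + b1)         # first level of abstraction
cat_score = sigmoid(W2 @ features + b2)   # decision built on that abstraction

print(f"cat probability (untrained, so meaningless): {cat_score[0]:.3f}")
```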

“Think of a child learning,” Bengio said. “Initially, the child may see the world in a very simple way, but at some point, the child’s brain clicks, and she discovers an abstraction.” The child can use that abstraction to learn other abstractions, he added.

The self-learning approach has led to dramatic advances in speech- and image-recognition software. It is used in many Internet and mobile phone products, and even self-driving cars, Bengio said.

Deep learning is an important part of many forms of “weak” artificial intelligence, nonsentient intelligence focused on a narrow task, but it could become a component of “strong” artificial intelligence — the kind of AI depicted in movies like “Ex Machina” and “Her.”

But Bengio doesn’t share the fears about strong AI that billionaire entrepreneur Elon Musk, world-famous physicist Stephen Hawking and others have been voicing.

“I do subscribe to the idea that, in some undetermined future, AI could be a problem,” Bengio said, “but we’re so far from [strong AI taking over] that it’s not going to be a problem.”

However, he said there are more immediate issues to be concerned about, such as how AI will impact personal privacy and the job market. “They’re less sexy, but these are the questions that should be used for debate,” Bengio said.

References: http://www.livescience.com/