Listening with Lasers: Hybrid Technique Sees Into Human Body

A human skull is, on average, about 6.8 millimeters (0.3 inches) thick, roughly the thickness of the latest smartphone. Human skin, meanwhile, is about 2 to 3 millimeters (0.1 inches) deep, or about the depth of three grains of salt. Thin as these layers are, they present major hurdles for any kind of imaging with laser light.

Why? The photons in laser light scatter when they encounter biological tissue. Corralling tiny photons to obtain meaningful details about tissue has proven to be one of the most challenging problems laser researchers have faced to date. However, researchers at Washington University in St. Louis (WUSTL) decided to eliminate the photon roundup completely and use scattering to their advantage. The result: an imaging technique that can peer right through the skull, penetrating tissue at depths of up to 7 centimeters (about 2.8 inches).

The photoacoustic effect

The approach, which combines laser light and ultrasound, is based on the photoacoustic effect, a concept first discovered by Alexander Graham Bell in the 1880s. In his work, Bell discovered that the rapid interruption of a focused light beam produces sound.

To produce the photoacoustic effect, Bell focused a beam of light on a selenium block and then rapidly interrupted the beam with a rotating slotted disk, producing audible sound waves. He showed that the effect depended on the block absorbing the light, and that the strength of the acoustic signal depended on how much light the material absorbed.

“We combine some very old physics with a modern imaging concept,” said WUSTL researcher Lihong Wang, who pioneered the approach. Wang and his WUSTL colleagues were the first to describe functional photoacoustic tomography (PAT) and 3D photoacoustic microscopy (PAM).

The two techniques follow the same basic principles: When the researchers shine a pulsed laser beam into biological tissue, the beam spreads out and generates a small, but rapid rise in temperature. This produces sound waves that are detected by conventional ultrasound transducers. Image reconstruction software converts the sound waves into high-resolution images.
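The depth information in both techniques comes down to acoustic time of flight: sound moves through soft tissue at a roughly known speed, so the delay between the laser pulse and the arrival of the ultrasound wave at a transducer reveals how deep the absorbing structure sits. The sketch below is only a back-of-the-envelope illustration of that idea; the speed of sound and the timing numbers are generic assumptions, not figures from the WUSTL systems.

```python
# Toy illustration of the photoacoustic principle described above: a short
# laser pulse is absorbed at some depth, the heated spot emits an ultrasound
# wave, and the arrival time at the detector indicates how deep the absorber
# sits. All values are illustrative assumptions.

SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, a typical value for soft tissue

def absorber_depth(arrival_time_s: float) -> float:
    """Estimate absorber depth from the acoustic arrival time.

    The laser pulse reaches the absorber almost instantly (light travels
    far faster than sound), so essentially all of the delay is the sound
    wave travelling one way back to the transducer.
    """
    return SPEED_OF_SOUND_TISSUE * arrival_time_s

# An echo arriving about 45.5 microseconds after the laser pulse implies an
# absorber roughly 7 cm deep -- the depth quoted for the WUSTL work.
print(f"{absorber_depth(45.5e-6) * 100:.1f} cm")
```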

Following a tortuous path

Wang began exploring the combination of sound and light as a postdoctoral researcher. At the time, he developed computer models of photons as they traveled through biological material. This work led to an NSF Faculty Early Career Development (CAREER) grant to study ultrasound encoding of laser light to “trick” information out of the laser beam.

Unlike other optical imaging techniques, photoacoustic imaging detects ultrasonic waves induced by absorbed photons, no matter how many times the photons have scattered. Multiple external detectors capture the sound waves regardless of their original locations. “While the light travels on a highly tortuous path, the ultrasonic wave propagates in a clean and well-defined fashion,” said Wang. “We see optical absorption contrast by listening to the object.”

Because the approach does not require injecting imaging agents, researchers can study biological material in its natural environment. Using photoacoustic imaging, researchers can visualize a range of biological material, from cells and their component parts to tissue and organs. Scientists can even detect single red blood cells in blood, or fat and protein deposits in arteries.

While PAT and PAM are primarily used in laboratory settings, Wang and others are working on multiple clinical applications. In one example, researchers use PAM to study the trajectory of blood cells as they flow through vessels in the brain.

“By seeing individual blood cells, researchers can start to identify what’s happening to the cells as they move through the vessels. Watching how these cells move could act as an early warning system to allow detection of potential blockage sites,” said Richard Conroy, director of the Division of Applied Science and Technology at the U.S. National Institute of Biomedical Imaging and Bioengineering.

Minding the gap

PAT and PAM images can be correlated with those generated by other techniques, such as magnetic resonance imaging (MRI) or positron emission tomography (PET), which makes the approaches complementary. “One imaging modality can’t do everything,” said Conroy. “Comparing results from different modalities provides a more detailed understanding of what is happening from the cell level to the whole animal.”

The approach could help bridge the gap between animal and human research, especially in neuroscience.

“Photoacoustic imaging is helping us understand how the mouse brain works,” said Wang. “We can then apply this information to better understand how the human brain works.” Wang, along with his team, is applying both PAT and PAM to study mouse brain function.

One of the challenges currently facing neuroscientists is the lack of available tools to study brain activity, Wang said. “The holy grail of brain research is to image action potentials,” said Wang. (An action potential occurs when electrical signals travel along axons, the long fibers that carry signals away from the nerve cell body.) With funding from the U.S. BRAIN Initiative, Wang and his group are now developing a PAT system to capture images every one-thousandth of a second, fast enough to image action potentials in the brain.

“Photoacoustic imaging fills a gap between light microscopy and ultrasound,” said Conroy. “The game-changing aspect of this [Wang’s] approach is that it has redefined our understanding of how deep we can see with light-based imaging.”

References: http://www.livescience.com/

Fifty years of Shakey, the “world’s first electronic person”

Timelapse image of Shakey in action

Robots are increasingly becoming part of our everyday lives, and many roboticists believe that we are on the verge of a robot revolution that will do for goods and services what the Internet did for information. If so, then a lot of the credit goes to a 50-year-old box on wheels called Shakey: the “world’s first electronic person.”

In the mid-1960s, computers were undergoing the first big jump in their evolution since “electronic brains” became practical in the 1940s. Computer architecture was much more sophisticated, scientists and engineers had a better grasp of the technology, transistors were shrinking mainframes so they filled a room instead of a really big room, and some researchers were convinced that computers and robots would soon be able to enter real world settings.

Among these was Charlie Rosen, one of the pioneers in the field of artificial intelligence and founder of the Artificial Intelligence Center at SRI International (then known as the Stanford Research Institute). He believed that computer simulations had reached the stage where it was possible to produce problem-solving robots capable of working in factories.

At the same time, ARPA, the ancestor of today’s DARPA, was interested in finding ways to use robots and artificial intelligence for military reconnaissance. One idea was that it might be possible to build some sort of robotic scout for the Army, so, along with the National Science Foundation and the Office of Naval Research, the agency hired SRI to look into such a scout.

The result was Shakey.

Officially, the timeline for project Shakey ran from 1966 to 1972, but like all complicated endeavors, its origins go back a bit further, as the SRI team, consisting of project manager Rosen, Peter Hart, Marty Tenenbaum, Nils Nilsson, and others, gathered and drew up outlines and proposals for the project. Hart, who was there, recently gave a keynote address at the ICRA conference in Seattle calling this year the 50th anniversary.

Dates aside, what was remarkable about the Shakey project wasn’t just what it accomplished, but what its ambitions showed about the state of robotics. What Rosen et al. set out to do indicated that they were either decades ahead of their time, or in way over their heads.

The purpose of the Shakey project wasn’t just to produce a robot, but one that was mobile, autonomous, equipped with vision, capable of mapping its surroundings and solving problems, and able to take commands in natural English. Even today, combining only a fraction of these would be ambitious. To combine them all using 1966 technology would have given Dr. Frankenstein pause, but Shakey went on to become the first robot to combine motion, perception, and problem solving in one mobile package.

But why was it called Shakey? The answer was in its construction. Looking like a very early draft of R2D2, Shakey was a stack of gear. On the bottom was a motorized platform containing the drive wheels, push bar, and cat’s whiskers bump sensors. Above this was the electronic components box, then the “head” consisting of a black and white television camera, microphone, and a spinning prism rangefinder with the radio antenna on the very top. This arrangement was practical, but not very stable, hence the name.

According to Rosen: “We worked for a month trying to find a good name for it, ranging from Greek names to whatnot, and then one of us said, ‘Hey, it shakes like hell and moves around, let’s just call it Shakey.’”

Even standing about five feet tall, Shakey didn’t seem very large until someone informed you that he (the team always called it “he”) actually weighed several tons. The visible part was just his mobile extension. The actual brain filled an entire room in another part of the lab with a second computer acting as a control interface. Originally, the mainframe was a 64K SDS-940 computer programmed in Fortran and Lisp, which was later replaced with a 192K PDP-10 around 1969. To put that into perspective, there are handheld games today with much more power than both of these put together.

Shakey lived in his own little world, which was made up of a series of rooms, doors, and objects. The walls had baseboards and everything was painted white and red to provide contrast for the monochromatic vision system. The robot’s job was to navigate these rooms, planning how to carry out tasks and solving any problems that might arise.

Early versions of Shakey envisioned manipulators, but this was later deemed unnecessarily complicated, so the final design worked by pushing objects around.

Much like other computers of the day, Shakey was given commands by teletype and responded via teletype and cathode ray tube. The robot was controlled by a tiered series of actions, which allowed Shakey to assess situations and find answers to problems.

At the bottom were low-level action programs, which were given to Shakey in plain English with commands like “go,” “pan,” and “tilt.” These were automatically translated into predicate calculus and provided the building blocks for intermediate-level actions, such as “go to,” which told the robot to travel to a particular location.

Telling Shakey where to go may seem simple, but if the location was in another room or behind an obstacle, he had to set up subgoals or waypoints and a route to reach them, as sketched below. It was even more complicated if Shakey was told to find an object and move it somewhere else.
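As a rough sketch of that layering, here is how an intermediate-level action such as “go to” might expand into the kind of low-level commands mentioned above. The decomposition, the “turn” command, and the coordinates are simplifying assumptions for illustration, not Shakey’s actual implementation, which worked in predicate calculus.

```python
import math

# Hypothetical sketch: expand an intermediate-level "go to" action into
# low-level commands like those named in the article ("go", "pan", "tilt").
# The "turn" command and the geometry are assumptions for illustration only.

def go_to(current, target, heading_deg=0.0):
    """Yield low-level commands that move from `current` to `target`
    (both (x, y) positions in feet) and re-centre the camera."""
    dx, dy = target[0] - current[0], target[1] - current[1]
    bearing = math.degrees(math.atan2(dy, dx))
    yield ("turn", bearing - heading_deg)   # face the waypoint
    yield ("go", math.hypot(dx, dy))        # roll straight toward it
    yield ("pan", 0.0)                      # point the camera ahead again
    yield ("tilt", 0.0)

for command in go_to(current=(0, 0), target=(10, 5)):
    print(command)
```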

Shakey navigated by dead reckoning. That is, he counted turns of the drive wheels and deduced his location from them. This accumulated errors, however, so Shakey had to back it up with visual information. His pattern-recognition algorithms could pick out things like object outlines or room corners, allowing him to build up a map of his world and adjust his navigation.
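A minimal sketch of that dead-reckoning bookkeeping is shown below. The wheel radius, wheel spacing, and revolution counts are invented numbers, not Shakey’s specifications, but the drift such an estimate accumulates is exactly the kind of error the camera had to correct.

```python
import math

# Dead reckoning as described above: integrate wheel revolutions into an
# estimate of position (x, y) and heading (theta). Differential-drive
# odometry with made-up geometry; small per-step errors accumulate, which
# is why Shakey cross-checked the estimate against visual landmarks.

WHEEL_RADIUS = 0.1  # metres (assumed)
WHEEL_BASE = 0.4    # metres between the drive wheels (assumed)

def update_pose(x, y, theta, left_revs, right_revs):
    """Advance the pose estimate given revolutions of each drive wheel."""
    d_left = 2 * math.pi * WHEEL_RADIUS * left_revs
    d_right = 2 * math.pi * WHEEL_RADIUS * right_revs
    d_centre = (d_left + d_right) / 2          # distance travelled
    d_theta = (d_right - d_left) / WHEEL_BASE  # change in heading
    x += d_centre * math.cos(theta + d_theta / 2)
    y += d_centre * math.sin(theta + d_theta / 2)
    return x, y, theta + d_theta

pose = (0.0, 0.0, 0.0)
for left, right in [(1.0, 1.0), (0.5, 0.8), (1.0, 1.0)]:
    pose = update_pose(*pose, left, right)
print(pose)  # any encoder noise would make this estimate drift over time
```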

The key to this was STRIPS (Stanford Research Institute Problem Solver), an artificial-intelligence planning language that allowed Shakey to string actions and subgoals together into a plan. It also gave him the ability to recover from surprises, such as a misplaced box, an unknown room, an unexpected obstacle, or an obstructed door, and to learn from past mistakes by combining commands, intermediate actions, and preconditions to produce new actions.

Eventually, Shakey was able to solve surprisingly complex puzzles, such as the monkey-and-banana problem. In Shakey’s version, he had to push a box onto a platform, which meant he first had to figure out how to move a ramp into position. The task took the robot days to accomplish, with many false starts and repetitions, but at the time it was an unprecedented demonstration of robotics and artificial intelligence.
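The sketch below shows the flavor of STRIPS-style planning on a stripped-down version of that box-and-ramp task: each action lists the facts it requires and the facts it adds or deletes, and a simple search strings actions together until the goal holds. The predicates and actions here are invented for illustration and are far simpler than Shakey’s actual domain description.

```python
# A toy STRIPS-style planner: states are frozensets of facts, each action
# has preconditions, an add list, and a delete list, and a breadth-first
# search strings actions together until the goal is satisfied. The domain
# is a simplified invention inspired by Shakey's box-and-ramp task.

ACTIONS = {
    "push_ramp_to_platform": (
        {"ramp_at_start"},                     # preconditions
        {"ramp_at_platform"},                  # facts added
        {"ramp_at_start"},                     # facts deleted
    ),
    "push_box_up_ramp": (
        {"ramp_at_platform", "box_at_start"},
        {"box_on_platform"},
        {"box_at_start"},
    ),
}

def plan(start, goal):
    """Return a list of action names that achieves `goal` from `start`."""
    frontier = [(frozenset(start), [])]
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.pop(0)
        if goal <= state:                      # every goal fact holds
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:                   # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"ramp_at_start", "box_at_start"}, {"box_on_platform"}))
# -> ['push_ramp_to_platform', 'push_box_up_ramp']
```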

Shakey caused a sensation among the public and even among scientists. He was described in Life magazine in 1970 as “the first electronic person,” and thinkers like Marvin Minsky were seriously predicting true artificial intelligence of superhuman power within three to eight years on the strength of Shakey’s performance.

If nothing else, it’s a cautionary tale about overestimating the state of the art. Bear in mind that this was a time when it was commonly believed that an invincible chess-playing computer would be built any day now instead of decades into the future – contemporary chess computers were lucky if they could manage a legal game, much less win one.

The Shakey project carried on until 1972 when the Defense Department started to get impatient for results. One general was even reported to ask: “Can you mount a 36-inch bayonet on it?” Unfortunately, despite remarkable advances, Shakey was still a creature of the laboratory and funding dried up.

Today, Shakey is on display at the Computer History Museum in Mountain View, California. However, it’s fair to say that without Shakey, today’s robots wouldn’t have been possible.

“Shakey, the first mobile robot to reason about its actions, was groundbreaking not only to robotics, but to artificial intelligence as well, as it led to fundamental advances in visual analysis, route finding, and planning of complex actions,” says Ray Perrault, PhD, director of SRI International’s Artificial Intelligence Center. “Shakey also helped open the possibilities of computer science to the public’s imagination, and put SRI’s Artificial Intelligence Center on the map.”

More information about Shakey, along with video of the robot in action, is available from SRI International.

References: http://www.gizmag.com/

 

3D-printed objects created entirely from wood cellulose

The same material that gives trees their structural integrity can now be used to 3D print tiny chairs, electrical circuits, and other objects

The 3D printing revolution brings with it a harmful side effect: the special inks that it uses are derived (for the most part) from environmentally-unfriendly processes involving fossil fuels and toxic byproducts. But now scientists at Chalmers University of Technology have succeeded in using cellulose – the most abundant organic compound on the planet – in a 3D printer. They were also able to create electrically-conductive materials by adding carbon nanotubes.

To be specific, the researchers used nanocellulose obtained from wood pulp. This is the stuff that forms the scaffolding that lets trees stand tall. It’s available in massive quantities, it’s biodegradable, renewable, and incredibly strong, and reusing it keeps the carbon it has locked away from entering the atmosphere as carbon dioxide.

Normally, 3D printing uses a heated liquid form of plastic or metal that hardens and solidifies as it cools and dries. But cellulose doesn’t melt when heated, so it hasn’t previously been considered a suitable printing material.

The researchers mixed the nanocellulose into a hydrogel of 95 to 99 percent water, which allowed it to be dispensed by a 3D bioprinter, and in some instances added carbon nanotubes so that the printed material could conduct electricity. The very high water content of the resulting printer gel meant that the drying process had to be carefully controlled so as not to lose the object’s 3D structure. The scientists found that they could also deliberately allow a printed structure to collapse into a thin film (such as a circuit).

“Potential applications range from sensors integrated with packaging, to textiles that convert body heat to electricity, and wound dressings that can communicate with healthcare workers,” says lead researcher Paul Gatenholm. “Our research group now moves on with the next challenge: to use all wood biopolymers besides cellulose.”

The researchers presented their findings at the New Materials From Trees conference in Stockholm earlier this week.

References: http://www.gizmag.com/

Leap second to make 61-second minute at end of June

The leap second on June 30 will keep atomic clocks in synch with everyday timekeeping

If you’re one of those people who just can’t find the time to fit everything you want to do into a day, then mark June 30 on your calendar. On that Tuesday you’ll have a little extra time on your hands because, just after 23:59:59 GMT, the world’s clocks will insert an extra second, making the day 24 hours and one second long.
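In practice the extra second shows up as an additional clock reading, 23:59:60, squeezed in between the last second of June 30 and the first second of July 1. The snippet below simply prints that sequence of labels as an illustration of the convention, since standard datetime types generally cannot represent a second numbered 60.

```python
# The leap second is inserted as an extra reading, 23:59:60 UTC, between
# the end of June 30 and the start of July 1, 2015. Shown here as plain
# labels, because Python's datetime type cannot represent second 60.
for stamp in (
    "2015-06-30 23:59:58 UTC",
    "2015-06-30 23:59:59 UTC",
    "2015-06-30 23:59:60 UTC",  # the leap second
    "2015-07-01 00:00:00 UTC",
):
    print(stamp)
```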

Although a standard year is 365 days long, the Earth actually makes its journey around the Sun in about 365.25 days. This means that, over time, the calendar will start to get out of synch with the Sun, and the Vernal Equinox marking the beginning of spring will get later and later in the year. In fact, by 1582, the old Julian calendar introduced by the Romans was off by a full 10 days.

To prevent this, the modern Gregorian calendar includes leap years. Most of us are familiar with the formula of adding an extra day every four years, but it’s actually a bit more complicated: a year divisible by four is a leap year, unless it is also divisible by 100, in which case it is not, unless it is also divisible by 400, in which case it is a leap year after all. And even this is an approximation, but a necessary one if the seasons are to keep matching the calendar dates.
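Written out as code, the full Gregorian rule is short; this is the standard formulation rather than anything specific to the timekeeping bodies discussed below.

```python
def is_leap_year(year: int) -> bool:
    """Gregorian rule: every 4th year is a leap year, except century
    years, unless the century year is divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 2000 and 2016 qualify; 1900 and 2015 do not.
print([year for year in (1900, 2000, 2015, 2016) if is_leap_year(year)])
```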

The leap second is based on a similar, but much more subtle and complex problem: how to reconcile the length of the day with the length of the second. At first this seems like a non-problem, because a second was traditionally defined as 1/86,400th of a day, so whatever the length of the day, the second should, by definition, match it.

The problem is that the Earth’s day has a maddeningly inconsistent length. The Earth is constantly being pulled at by the Moon, the Sun, and the planets, creating tides that slowly but surely slow down the Earth’s rotation. Worse, the Earth isn’t solid. Much of it is in a molten liquid or plastic state. To see how this affects the day, try spinning a hard-boiled egg and a raw one (in the shell, of course). The hard-boiled one will spin like a top, while the raw one will fall over because the liquid yolk and white are sloshing about. A similar thing happens to the Earth, causing all sorts of unpredictable wobbles.

Added to this is the fact that the Earth’s crust isn’t stable either. Continents move, ice caps and glaciers grow and shrink, and land masses are pressed down and rebound as ice ages come and go and sea levels change. When you add in volcanoes and earthquakes, it’s a wonder that the day is as steady as it is. Yet the day’s length does change when measured astronomically against quasars and with GPS, and, according to NASA, the day has lengthened by an average of 2.5 milliseconds since 1820.

This variation doesn’t mean much in everyday life and it may seem like most people could live with a day that’s off by a couple of thousandths of a second, but we live in a world that requires extremely precise timing in order to function. Navigation, astronomy, mobile phones, satellites, the internet, submarines, and a huge number of other systems rely on extremely precise clocks – in this case, atomic clocks.

Thanks to atomic clocks, we now have two definitions of the second. The first is the imprecise one based on the rotation of the Earth; the other, the official one adopted by the scientific community at the General Conference on Weights and Measures in 1967, is based on the oscillations of a cesium atom, with one second defined as 9,192,631,770 oscillations of the atom’s microwave signal.

This is where the leap second comes in. It is specified by the International Earth Rotation and Reference Systems Service (IERS) in Paris and relies on some 200 atomic clocks in 50 national laboratories to keep the world’s radio- and internet-controlled timepieces within 0.9 seconds of the time told by the Earth’s rotation.

The tricky bit is taking the extreme accuracy of the atomic clocks and matching them to the more variable rotation of the Earth, which by 1972 was already 10 seconds out of synch. To remedy this discrepancy, the atomic clocks are used to keep tabs on the Earth’s rotation. When astronomical measurements indicate that the clocks and the Earth are getting too far out of step, a correction is calculated and periodically applied, producing what is now called Coordinated Universal Time (UTC).

Since January 1, 1972, there have been 26 leap seconds. They aren’t nearly as regular as leap years: since 1999, leap seconds have been added at intervals of 7, 3, 3.5, and 3 years. When a leap second occurs, clocks and watches showing legal time must synch with a new time signal or stop for one second.

Though the leap second is an established standard, its use remains controversial in horological circles due to the expense of keeping the two time systems in synch, and a decision on whether to continue the practice is expected to be made in November by the World Radiocommunication Conference (WRC-15) of the International Telecommunication Union.

References: http://www.gizmag.com/