Stretchy ‘Origami Batteries’ Could Power Smart Clothing

Stretchy batteries inspired by origami could power smartwatches and other wearable electronics, researchers say.

Increasingly, scientists worldwide are developing flexible electronics, such as video displays and solar panels, that could one day make their way into clothing and even human bodies. But one limitation of these devices is the scarcity of equally flexible batteries to power them or store energy they generate.

Although prior research has produced bendable batteries, developing stretchy versions has proven more challenging, researchers said. Now, inventors have created lithium-ion batteries that can stretch to more than 150 percent of their original size while remaining capable of powering devices.

Hanqing Jiang, an associate professor of mechanical and aerospace engineering at Arizona State University in Tempe, came up with the new device after “talking with an origami artist who showed me some famous origami patterns,” he said. One of these patterns, known as the Miura-ori fold, is currently used to fold large maps into small rectangles, and was originally invented to help pack solar panels efficiently on spacecraft.

One problem with using principles of origami to create electronics is that folding often produces uneven surfaces. This can make it difficult to integrate these devices with other electronics, the researchers said.

Instead, Jiang and his colleagues used a variation of origami known as kirigami to create their stretchable batteries. Whereas conventional origami uses only folding to create structures, kirigami uses both folding and cutting. The technique results in structures whose surfaces can stay even after stretching.

“We found a new approach to make stretchable structures using conventional manufacturing approaches,” Jiang said.

The batteries were created using slurries of graphite and lithium cobalt dioxide, which together can store and release electricity. These slurries were coated onto sheets of aluminum foil, and kirigami techniques were then used to fold and cut the sheets into stretchy serpentine shapes.

In experiments, the new batteries could power a Samsung Gear 2 smartwatch even when stretched, the researchers said. The batteries could easily be sewn into a stretchy wristband, which suggests they could be used in flexible wearable devices.

Another research team recently developed a battery that could stretch to 300 percent of its original size. In that device, the energy-storing materials were sandwiched between thin sheets of silicone rubber. Jiang said his new battery has an advantage over this earlier design because it is compatible with commercially available manufacturing technologies.

The researchers are now working on creating microscopic origami patterns to combine stretchable batteries with microelectronics. Jiang and his colleagues detailed their findings online June 11 in the journal Scientific Reports.

References: http://www.livescience.com/

World’s Thinnest Light Bulb Created from Graphene

Graphene, a form of carbon famous for being stronger than steel and more conductive than copper, can add another wonder to the list: making light.

Researchers have developed a light-emitting graphene transistor that works in the same way as the filament in a light bulb.

“We’ve created what is essentially the world’s thinnest light bulb,” study co-author James Hone, a mechanical engineer at Columbia University in New York, said in a statement.

Scientists have long wanted to create a tiny “light bulb” to place on a chip, enabling what are called photonic circuits, which run on light rather than electric current. The problem has been one of size and temperature: incandescent filaments must get extremely hot before they can produce visible light. The new graphene device, however, is so efficient and tiny that the resulting technology could offer new ways to make displays or study high-temperature phenomena at small scales, the researchers said.

Making light

When electric current is passed through an incandescent light bulb’s filament — usually made of tungsten — the filament heats up and glows. Electrons moving through the material knock against electrons in the filament’s atoms, giving them energy. Those electrons return to their former energy levels and emit photons (light) in the process. Crank up the current and voltage enough and the filament hits temperatures of about 5,400 degrees Fahrenheit (3,000 degrees Celsius). This is one reason light bulbs either have no air in them or are filled with an inert gas such as argon: At those temperatures, tungsten would react with the oxygen in air and simply burn.
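
To see why a filament must run that hot, Wien’s displacement law gives the wavelength at which an ideal blackbody emits most strongly at a given temperature. A minimal sketch, treating the filament as a blackbody (a real filament only approximates one):

```python
# Wien's displacement law: peak emission wavelength of an ideal blackbody.
WIEN_B = 2.898e-3  # Wien's displacement constant, meter-kelvin

def peak_wavelength_nm(temp_kelvin: float) -> float:
    """Wavelength (in nanometers) of peak blackbody emission at temp_kelvin."""
    return WIEN_B / temp_kelvin * 1e9

# At the ~3,000 K operating temperature cited in the article, the emission
# peak sits just beyond the red end of the visible range (~966 nm), which
# is why cooler filaments produce mostly invisible infrared.
print(f"{peak_wavelength_nm(3000):.0f} nm")
```

Only at such temperatures does enough of the emission spill into the visible band to make a useful light source.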

In the new study, the scientists used strips of graphene a few microns across and from 6.5 to 14 microns in length, each spanning a trench of silicon like a bridge. (A micron is one-millionth of a meter; for comparison, a human hair is about 90 microns thick.) Electrodes were attached to the ends of each graphene strip. Just like tungsten, graphene lights up when a current is run through it. But there is an added twist: graphene conducts heat less efficiently as its temperature increases, which means the heat stays concentrated in a spot at the center rather than being relatively evenly distributed, as it is in a tungsten filament.

Myung-Ho Bae, one of the study’s authors, told Live Science trapping the heat in one region makes the lighting more efficient. “The temperature of hot electrons at the center of the graphene is about 3,000 K [4,940 F], while the graphene lattice temperature is still about 2,000 K [3,140 F],” he said. “It results in a hotspot at the center and the light emission region is focused at the center of the graphene, which also makes for better efficiency.” It’s also the reason the electrodes at either end of the graphene don’t melt.

As for why this is the first time light has been made from graphene, study co-leader Yun Daniel Park, a professor of physics at Seoul National University, noted that graphene is usually embedded in or in contact with a substrate.

“Physically suspending graphene essentially eliminates pathways in which heat can escape,” Park said. “If the graphene is on a substrate, much of the heat will be dissipated to the substrate. Before us, other groups had only reported inefficient radiation emission in the infrared from graphene.”

The light emitted from the graphene also reflected off the silicon that each strip was suspended above. The reflected light interferes with the emitted light, producing an emission pattern with peaks at different wavelengths. That opened up another possibility: tuning the light by varying the distance to the silicon.
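
The tuning idea can be sketched with the simplest interference condition: peaks occur when the round trip down to the reflector and back is a whole number of wavelengths. This is an illustrative simplification (it ignores the phase shift at the silicon surface), and the 1-micron gap used below is a hypothetical figure, not one from the study:

```python
# Simplified interference model for light emitted a distance d above a mirror:
# constructive interference when the round trip 2*d equals m wavelengths.
# Phase shifts at the reflecting surface are ignored, so this is illustrative.
def peak_wavelengths_nm(gap_nm: float, lo_nm: float = 400, hi_nm: float = 700):
    """Wavelengths in [lo_nm, hi_nm] that interfere constructively for a gap."""
    peaks = []
    m = 1
    while True:
        wavelength = 2 * gap_nm / m  # condition: 2 * d = m * lambda
        if wavelength < lo_nm:
            break
        if wavelength <= hi_nm:
            peaks.append(wavelength)
        m += 1
    return peaks

# A hypothetical 1,000 nm gap puts several peaks inside the visible range;
# changing the gap moves the peaks, which is the tuning knob described above.
print(peak_wavelengths_nm(1000))
```
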

The principle of the graphene is simple, Park said, but it took a long time to discover.

“It took us nearly five years to figure out the exact mechanism but everything (all the physics) fit. And, the project has turned out to be some kind of a Columbus’ Egg,” he said, referring to a legend in which Christopher Columbus challenged a group of men to make an egg stand on its end; they all failed and Columbus solved the problem by just cracking the shell at one end so that it had a flat bottom.

References: http://www.livescience.com/

Interview: Lockheed Martin’s Todd Danko on the DRC Finals and the future of robotics

Team Trooper’s robot named Leo competed in the recent DARPA Robotic Challenge

One of the 24 teams competing at the 2015 DARPA Robotics Challenge, and the only team fielded by a large private company, was Lockheed Martin’s Team Trooper and its robot Leo. To find out more about what goes into programming a humanoid robot and the future of robotics, we talked to the team leader, Todd Danko.

Danko: The whole point is to look for new opportunities. We think that investing in mobile manipulation, which is effectively what we’re doing, will be very useful in future applications like underwater or space robotics; places where it’s very difficult or impossible to get people to do those tasks. You can use our systems to tell robots what to do, and let the robots do those things.

Why did Lockheed opt for using Boston Dynamics’s humanoid Atlas robot?

We’re in a world that’s complementary to the humanoid form. If you have a very constrained task, like in a factory, a humanoid isn’t the right answer. You want to optimize your robot to solve those problems. If you don’t have a single problem to optimize, and you want a more general robot, then a humanoid robot makes more sense.

Does the humanoid robot produce any challenges in creating software?

It produces tremendous challenges. Just getting a humanoid robot to stay still and not fall over when you touch it takes a lot of software on its own. You have to have a good dynamic model of the robot and run it constantly at a thousand hertz so it’s harder for the robot to fall over. A robot with wheels, meanwhile, is mechanically stable even if you turn everything off. It just sits there.

Were there any surprises working on this project?

Generally, one thing that surprised me is the state of the art of robotics, and especially humanoid robotics. It’s one thing to see them in a movie or in specific real demos, but with those demos you’re seeing the best of what could be, so the state of the art is actually a lot more primitive than many people think it is. It’s exciting to be able to help grow that.

It’s a poor analogy, but what would you compare a humanoid robot of today to in terms of its development?

I could get into trouble because I don’t have much expertise in human development, but one of our goals was to work toward a robot with the capabilities of a two-year-old. I can’t say whether we exceeded that or not, but like a two-year-old, there’s definitely some misbehavior in these robots. They don’t always do what we tell them to.

We noticed in the competition that the robots moved very slowly, with a lot of starting and stopping while they solved a problem or awaited an order. What hurdles have to be overcome before we see robots that are fast and articulate enough to be practical?

There are two sides to this. On one side, robots are already practical in many ways. Probably the most successful robots are much simpler than those [at the competition]. On top of that, you have to consider the consequences of an error: if you have a lot of runs or a lot of time, you may give your autonomous system a lot of latitude to make decisions rather than constantly approving it before it proceeds. In a competition like this, there are greater consequences, and if a mistake is about to happen that a human can stop, then we should prevent it from happening.

Secondly, there are lots of things that different groups are working on to improve the performance of robots as a whole. You’ll notice that the robots are untethered. Power-wise, they were lasting at least an hour in most cases, but how useful is an hour of robot time? Maybe you want something that lasts a day, or even a couple of days. We need better batteries, better power systems, and more efficient actuators, so there’s more power and less use of power down the line.

I’ve never met a perception algorithm that couldn’t use more processors to be better. The same goes for planning: parallelizing provides more possibilities to come up with more solutions. And there’s still a lot of room for improvement in the state of the art in perception. We’re very good at recognizing specific objects, but more needs to be done in recognizing categories of objects, or in recognizing something never seen before and figuring out how it could be used.

On top of all this, I think there’s still a role for the human to help the robot know what it is that needs to be done. It may be best to have a human in a safe place who can communicate the “what” and allow the robot to do the dangerous things. That’s something I think we can see in the near future.

It’s the cliché question, but how long do you think it will be before we see robots like this as part of people’s lives?

It’s going to be a long time before we see a robot like this. Look at what [happened in the competition] and how many people it took to keep that robot from destroying itself. There’s a lot of work that needs to be done before we’re contributing in a way that’s not a burden to the operators. That goes back to all those things we talked about that need to be improved, so it’s going to be quite some time. On the other hand, simple robots are already being used in applications today. We’re just using more complicated robots in more complicated situations as we go in that direction of complexity.

References: http://www.gizmag.com/

Physicists develop ultrasensitive nanomechanical biosensor

Two young researchers working at the MIPT Laboratory of Nanooptics and Plasmonics, Dmitry Fedyanin and Yury Stebunov, have developed an ultracompact, highly sensitive nanomechanical sensor for analyzing the chemical composition of substances and detecting biological objects, such as viral disease markers, which appear when the immune system responds to incurable or hard-to-cure diseases, including HIV, hepatitis, herpes, and many others. The sensor will enable doctors to identify tumor markers, whose presence in the body signals the emergence and growth of cancerous tumors.

The sensitivity of the new device is best characterized by one key feature: According to its developers, the sensor can track changes of just a few kilodaltons in the mass of a cantilever in real time. One dalton is roughly the mass of a proton or neutron, and several thousand daltons correspond to the mass of individual proteins and DNA molecules. So the new optical sensor will allow diseases to be diagnosed long before they can be detected by any other method, paving the way for a new generation of diagnostics.
The device, described in an article published in the journal Scientific Reports, is an optical or, more precisely, optomechanical chip. “We’ve been following the progress made in the development of micro- and nanomechanical biosensors for quite a while now, and can say that no one has been able to introduce a simple and scalable technology for parallel monitoring that would be ready to use outside a laboratory. So our goal was not only to achieve the high sensitivity of the sensor and make it compact, but also make it scalable and compatible with standard microelectronics technologies,” the researchers said.
Unlike similar devices, the new sensor has no complex junctions and can be produced through the standard CMOS process technology used in microelectronics. The sensor contains no electronic circuitry, and its design is very simple. It consists of two parts: a photonic (or plasmonic) nanoscale waveguide to control the optical signal, and a cantilever hanging over the waveguide.

A cantilever, or beam, is a long, thin strip of microscopic dimensions (5 micrometers long, 1 micrometer wide and 90 nanometers thick), fixed at one end to the chip. To get an idea of how it works, imagine pressing one end of a ruler tightly to the edge of a table and allowing the other end to hang freely in the air. If you snap the free end with your other hand, the ruler will make mechanical oscillations at a certain frequency. That’s how the cantilever works. The difference between the oscillations of the ruler and the cantilever is only the frequency, which depends on the materials and geometry: while the ruler oscillates at several tens of hertz, the frequency of the cantilever’s oscillations is measured in megahertz. In other words, it makes a few million oscillations per second.
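
That megahertz figure can be checked against standard beam theory. A minimal sketch of the first-mode resonance of a fixed-free rectangular beam, using the dimensions quoted above; the material properties are an assumption (the article does not name the cantilever material, so silicon-like values are used):

```python
import math

# First-mode flexural resonance of a rectangular cantilever, from
# Euler-Bernoulli beam theory. Material values below are ASSUMED
# (silicon-like); the article does not specify the material.
E = 169e9     # Young's modulus, Pa (assumed, silicon)
RHO = 2330.0  # density, kg/m^3 (assumed, silicon)

def fundamental_freq_hz(length_m: float, width_m: float, thickness_m: float) -> float:
    """First-mode resonance frequency (Hz) of a fixed-free rectangular beam."""
    area = width_m * thickness_m                 # cross-section area
    inertia = width_m * thickness_m ** 3 / 12.0  # second moment of area
    k1 = 1.875104  # first root of the cantilever frequency equation
    return (k1 ** 2 / (2 * math.pi)) * math.sqrt(E * inertia / (RHO * area)) / length_m ** 2

# Dimensions from the article: 5 um long, 1 um wide, 90 nm thick.
f = fundamental_freq_hz(5e-6, 1e-6, 90e-9)
print(f"{f / 1e6:.1f} MHz")  # lands in the low-megahertz range, as the text says
```

With these assumed properties the result comes out at roughly 5 MHz, consistent with the article’s "a few million oscillations per second."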

There are two optical signals traveling through the waveguide during oscillations: The first sets the cantilever in motion, and the second allows the signal containing information about that movement to be read out. The inhomogeneous electromagnetic field of the control signal’s optical mode induces a dipole moment in the cantilever and simultaneously acts on that dipole, so that the cantilever starts to oscillate.
The sinusoidally modulated control signal makes the cantilever oscillate with an amplitude of up to 20 nanometers. These oscillations determine the parameters of the second signal, whose output power depends on the cantilever’s position.

The highly localized optical modes of nanoscale waveguides, which create a strong electric field intensity gradient, are key to inducing cantilever oscillations. Because the changes of the electromagnetic field in such systems occur over tens of nanometers, researchers use the term “nanophotonics.” Without the nanoscale waveguide and the cantilever, the chip simply wouldn’t work: a large cantilever cannot be set oscillating by freely propagating light, and the effect of chemical changes on its surface on the oscillation frequency would be less noticeable.

Cantilever oscillations make it possible to determine the chemical composition of the environment in which the chip is placed. That’s because the frequency of mechanical vibrations depends not only on the cantilever’s dimensions and material properties, but also on the mass of the oscillating system, which changes during a chemical reaction between the cantilever and the environment. By placing different reagents on the cantilever, researchers can make it react with specific substances or even biological objects. If you place antibodies to certain viruses on the cantilever, it will capture the viral particles in the analyzed environment. Depending on the captured viruses or the layer of chemically reactive substances on the cantilever, the oscillations will occur at a lower or higher amplitude, and the electromagnetic wave passing through the waveguide will be scattered by the cantilever differently, which can be seen in changes in the intensity of the readout signal.
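
The mass-sensing principle behind this can be sketched with the standard small-perturbation rule for a resonator: added mass lowers the resonance frequency by roughly Δf ≈ −f·Δm/(2m). This is an illustrative simplification (a real cantilever has an effective-mass factor that depends on where the particle lands), and the silicon-like density is assumed, since the article does not name the material:

```python
# Resonance-frequency shift from mass loading, via the small-perturbation
# approximation df ~ -f * dm / (2 * m). Effective-mass and landing-position
# corrections for a real cantilever are omitted; this is illustrative only.
DALTON_KG = 1.66054e-27  # one dalton in kilograms

def freq_shift_hz(f0_hz: float, resonator_mass_kg: float, added_mass_kg: float) -> float:
    """Approximate (negative) frequency shift when added_mass_kg sticks to the resonator."""
    return -f0_hz * added_mass_kg / (2 * resonator_mass_kg)

# Cantilever from the article (5 um x 1 um x 90 nm), with an ASSUMED
# silicon-like density of 2330 kg/m^3; mass comes out near one picogram.
cantilever_mass_kg = 2330.0 * 5e-6 * 1e-6 * 90e-9

# A hypothetical 5-kilodalton molecule binding to the cantilever:
shift = freq_shift_hz(5e6, cantilever_mass_kg, 5000 * DALTON_KG)
print(shift)  # negative: captured mass always pulls the frequency down
```

The shift is minuscule relative to the megahertz carrier, which is why detecting kilodalton-scale changes demands the high-precision optical readout described above.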

Calculations by the researchers show that the new sensor will combine high sensitivity with comparative ease of production and miniature dimensions, allowing it to be used in all kinds of portable devices, such as smartphones and wearable electronics. One chip, several millimeters in size, can accommodate several thousand such sensors, configured to detect different particles or molecules. Thanks to the simplicity of the design, the price will most likely depend on the number of sensors, and the device should be much more affordable than competing technologies.

References: http://phys.org/