Style software gives fashion tips after judging what you wear


Now even computers are going to be critical of how we look: algorithms are getting into style. New software judges outfits from a photograph and offers tips to make them look even more chic.

“Not everyone has access to an expert,” says Raquel Urtasun, a computer scientist at the University of Toronto, Canada, who developed the software with colleagues in Spain. “You can imagine something like this being used [to style photos for] dating sites and Facebook profiles.”

Fashion is as tough for machines to master as it is for us, if not more so, largely because it is so subjective. What’s popular now may become passé in a few months, and what works well in a particular culture or setting could be wildly inappropriate in another: think about clothes for date night and clothes for the office. And before a computer can work any of this out, it has to correctly identify each item of clothing being worn.

To teach the software about fashion, Urtasun’s team showed it thousands of pictures from Chictopia, a popular style website. The more positive votes left by other users, the more “fashionable” the software perceived the look to be.

It also noted other information about the photo, such as the user’s geographic location, the date they had posted it, the background of the picture, and written descriptions of the clothing.

The resulting software uses this information to categorise outfits and make suggestions based on what was successful for others in similar situations – for example, to add black boots or try something in pastel. The team plans to hone the results further by showing it a more diverse array of photos from other sources.
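As a rough illustration of how such a system might be built (a minimal sketch under stated assumptions, not the team’s actual model), the snippet below trains a simple regression to predict a popularity score from an outfit’s written description and location. Every dataset, feature and model choice here is hypothetical.

```python
# Hypothetical sketch of a "fashionability" scorer in the spirit of the
# setup described above: outfit metadata in, a popularity score out.
# The data, features and model are illustrative assumptions only.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Toy examples standing in for style-site posts: a written description,
# a city, and the number of positive votes the look received.
posts = [
    ("black boots leather jacket", "Toronto", 120),
    ("pastel summer dress sandals", "Barcelona", 85),
    ("grey office suit white shirt", "New York", 40),
    ("neon windbreaker cargo pants", "Berlin", 15),
]
descriptions = [p[0] for p in posts]
cities = [p[1] for p in posts]
votes = np.array([p[2] for p in posts], dtype=float)

# Text features from the clothing descriptions.
vectorizer = TfidfVectorizer()
text_features = vectorizer.fit_transform(descriptions)

# One-hot encode the city as a crude stand-in for location context.
city_index = {c: i for i, c in enumerate(sorted(set(cities)))}
rows = list(range(len(cities)))
cols = [city_index[c] for c in cities]
city_onehot = csr_matrix(
    (np.ones(len(cities)), (rows, cols)),
    shape=(len(cities), len(city_index)),
)

X = hstack([text_features, city_onehot])
y = np.log1p(votes)  # compress the heavy-tailed vote counts

model = Ridge(alpha=1.0).fit(X, y)

# Score a new outfit: a higher prediction means "more fashionable".
new = hstack([vectorizer.transform(["black boots pastel skirt"]),
              csr_matrix((1, len(city_index)))])
print(np.expm1(model.predict(new)))
```

A real system would of course need far richer inputs, starting with the image itself; the point of the sketch is only the supervised setup, with user votes serving as the training signal.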

Urtasun presented the work at the Computer Vision and Pattern Recognition conference in Boston, Massachusetts, earlier this month. Her team plans to improve the software so that it can automate the work of a human stylist.

Alexandra Greenawalt, a personal stylist in New York City, is understandably sceptical about computers muscling in on her patch. Looking good is about more than the latest trends, she says.

When dressing clients, she considers a wide range of factors, including their age, occupation and body shape. An effective algorithm would need to take all that into account, too.

Still, she is curious to watch the technology develop. “What will be interesting to see is if it can predict fashion before it happens or just based on likes in the past,” she says. “I would imagine the teens and 20-year-olds who are very much wanting to be in fashion would find it valuable.”

References: http://www.newscientist.com/

Stretchy ‘Origami Batteries’ Could Power Smart Clothing


Stretchy batteries inspired by origami could power smartwatches and other wearable electronics, researchers say.

Increasingly, scientists worldwide are developing flexible electronics, such as video displays and solar panels, that could one day make their way into clothing and even human bodies. But one limitation of these devices is the scarcity of equally flexible batteries to power them or store energy they generate.

Although prior research has created bendable batteries, developing stretchy versions has proven more challenging, researchers said. Now, inventors have created lithium-ion batteries that can stretch to more than 150 percent of their original size while remaining capable of powering devices.

Hanqing Jiang, an associate professor of mechanical and aerospace engineering at Arizona State University in Tempe, came up with the new device after “talking with an origami artist who showed me some famous origami patterns,” he said. One of these patterns, known as the Miura-ori fold, is currently used to fold large maps into small rectangles, and was originally invented to help pack solar panels efficiently on spacecraft.

One problem with using principles of origami to create electronics is that folding often produces uneven surfaces. This can make it difficult to integrate these devices with other electronics, the researchers said.

Instead, Jiang and his colleagues used a variation of origami known as kirigami to create their stretchable batteries. Whereas conventional origami uses only folding to create structures, kirigami uses both folding and cutting. The technique results in structures whose surfaces can stay even after stretching.

“We found a new approach to make stretchable structures using conventional manufacturing approaches,” Jiang said.

The batteries were created using slurries of graphite and lithium cobalt dioxide, which together can store and release electricity. These slurries were coated onto sheets of aluminum foil, and kirigami techniques were then used to fold and cut the sheets into stretchy serpentine shapes.

In experiments, the new batteries could power a Samsung Gear 2 smartwatch even when stretched, the researchers said. The batteries could easily be sewn into a stretchy wristband, which suggests they could be used in flexible wearable devices.

Another research team recently developed a battery that could stretch to 300 percent of its original size. In that device, the energy-storing materials were sandwiched between thin sheets of silicone rubber. Jiang said his new battery has an advantage over the earlier one because it is compatible with commercially available manufacturing technologies.

The researchers are now working on creating microscopic origami patterns to combine stretchable batteries with microelectronics. Jiang and his colleagues detailed their findings online June 11 in the journal Scientific Reports.

References: http://www.livescience.com/

World’s Thinnest Light Bulb Created from Graphene


Graphene, a form of carbon famous for being stronger than steel and more conductive than copper, can add another wonder to the list: making light.

Researchers have developed a light-emitting graphene transistor that works in the same way as the filament in a light bulb.

“We’ve created what is essentially the world’s thinnest light bulb,” study co-author James Hone, a mechanical engineer at Columbia University in New York, said in a statement.

Scientists have long wanted to create a teensy “light bulb” to place on a chip, enabling what are called photonic circuits, which run on light rather than electric current. The problem has been one of size and temperature: incandescent filaments must get extremely hot before they can produce visible light. This new graphene device, however, is so efficient and tiny that the resulting technology could offer new ways to make displays or study high-temperature phenomena at small scales, the researchers said.

Making light

When electric current is passed through an incandescent light bulb’s filament, usually made of tungsten, the filament heats up and glows. Electrons moving through the material knock against electrons in the filament’s atoms, giving them energy. Those electrons emit photons (light) as they return to their former energy levels. Crank up the current and voltage enough, and an incandescent bulb’s filament reaches temperatures of about 5,400 degrees Fahrenheit (3,000 degrees Celsius). This is one reason light bulbs either have no air in them or are filled with an inert gas like argon: at those temperatures, tungsten would react with the oxygen in air and simply burn.
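A quick worked example shows why the filament has to be so hot. Wien’s displacement law (standard blackbody physics, not from the study) puts the peak of thermal emission at a wavelength of b/T, where b ≈ 2.898 × 10⁻³ m·K; only at thousands of kelvins does that peak even approach the visible band.

```python
# Wien's displacement law: peak wavelength of blackbody emission.
# Standard textbook physics, included only to illustrate why
# incandescent filaments must run extremely hot to emit visible light.
WIEN_B = 2.898e-3  # Wien's constant, in metre-kelvins

def peak_wavelength_nm(temp_kelvin):
    """Return the blackbody emission peak in nanometres."""
    return WIEN_B / temp_kelvin * 1e9

for t_c in (1000, 2000, 3000):  # filament temperature, Celsius
    t_k = t_c + 273.15
    print(f"{t_c} C -> peak at {peak_wavelength_nm(t_k):.0f} nm")
# Visible light spans roughly 380-750 nm, so even at 3,000 C the peak
# sits near 885 nm in the near-infrared; only the short-wavelength
# tail of the spectrum is visible as a glow.
```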

In the new study, the scientists used strips of graphene a few microns across and 6.5 to 14 microns long, each spanning a trench in silicon like a bridge. (A micron is one-millionth of a meter; a human hair is about 90 microns thick.) Electrodes were attached to the ends of each graphene strip. Just like tungsten, graphene lights up when a current is run through it. But there is an added twist: graphene conducts heat less efficiently as its temperature increases, which means the heat stays concentrated in a spot at the center rather than being relatively evenly distributed, as it is in a tungsten filament.

Myung-Ho Bae, one of the study’s authors, told Live Science that trapping the heat in one region makes the lighting more efficient. “The temperature of hot electrons at the center of the graphene is about 3,000 K [4,940 F], while the graphene lattice temperature is still about 2,000 K [3,140 F],” he said. “It results in a hotspot at the center and the light emission region is focused at the center of the graphene, which also makes for better efficiency.” It’s also the reason the electrodes at either end of the graphene don’t melt.

As for why this is the first time light has been made from graphene, study co-leader Yun Daniel Park, a professor of physics at Seoul National University, noted that graphene is usually embedded in or in contact with a substrate.

“Physically suspending graphene essentially eliminates pathways in which heat can escape,” Park said. “If the graphene is on a substrate, much of the heat will be dissipated to the substrate. Before us, other groups had only reported inefficient radiation emission in the infrared from graphene.”

The light emitted from the graphene also reflected off the silicon that each piece was suspended in front of. The reflected light interferes with the emitted light, producing a pattern of emission with peaks at different wavelengths. That opened up another possibility: tuning the light by varying the distance to the silicon.
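A simplified way to see the tuning effect is the textbook mirror-interference model below (an assumption for exposition, not the paper’s full analysis): treat the silicon as a mirror a distance d behind the emitter, so reflected light travels an extra path of 2d and picks up a phase flip at the silicon surface.

```latex
% Illustrative mirror-interference model (an exposition assumption,
% not the paper's full treatment): emitter suspended a distance d in
% front of a reflecting silicon surface; reflection adds a phase
% shift of about pi. Constructive interference requires
\[
  \frac{2\pi (2d)}{\lambda} + \pi = 2\pi m
  \quad\Longrightarrow\quad
  \lambda_m = \frac{4d}{2m-1}, \qquad m = 1, 2, 3, \ldots
\]
% so changing the gap d shifts which emission wavelengths are
% enhanced -- the tuning described above.
```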

The principle behind the graphene light bulb is simple, Park said, but it took a long time to discover.

“It took us nearly five years to figure out the exact mechanism but everything (all the physics) fit. And, the project has turned out to be some kind of a Columbus’ Egg,” he said, referring to a legend in which Christopher Columbus challenged a group of men to make an egg stand on its end; they all failed and Columbus solved the problem by just cracking the shell at one end so that it had a flat bottom.

References: http://www.livescience.com/

Interview: Lockheed Martin’s Todd Danko on the DRC Finals and the future of robotics


Team Trooper’s robot, Leo, competed in the recent DARPA Robotics Challenge

One of the 24 teams competing at the 2015 DARPA Robotics Challenge, and the only team fielded by a large private company, was Lockheed Martin’s Team Trooper and its robot Leo. To find out more about what goes into programming a humanoid robot and the future of robotics, we talked to the team leader, Todd Danko.

Danko: The whole point is to look for new opportunities. We think that investing in mobile manipulation, which is effectively what we’re doing, will be very useful in future applications like underwater or space robotics; places where it’s very difficult or impossible to get people to do those tasks. You can use our systems to tell robots what to do, and let the robots do those things.

Why did Lockheed opt to use Boston Dynamics’ humanoid Atlas robot?

We’re in a world that’s complementary to the humanoid form. If you have a very constrained task, like in a factory, a humanoid isn’t the right answer. You want to optimize your robot to solve those problems. If you don’t have a single problem to optimize, and you want a more general robot, then a humanoid robot makes more sense.


Does the humanoid robot produce any challenges in creating software?

It produces tremendous challenges. Just having a humanoid robot stand still and not fall over when you touch it takes a lot of software on its own. You have to have a good dynamic model of the robot and run it constantly at a thousand hertz so the robot is less likely to fall over. A wheeled robot, by contrast, is mechanically stable even if you turn everything off. It just sits there.
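To make “run it constantly at a thousand hertz” concrete, here is a minimal sketch of a fixed-rate balance loop. Everything in it (the PD gains, the sensor and actuator stubs) is a hypothetical placeholder, not Team Trooper’s actual control software.

```python
# Hypothetical sketch of a 1,000 Hz balance loop, illustrating the
# fixed-rate control idea from the interview. The sensor, actuator,
# and gains are invented placeholders.
import time

LOOP_HZ = 1000
DT = 1.0 / LOOP_HZ
KP, KD = 120.0, 8.0  # made-up PD gains on torso lean angle

def read_lean_angle():
    """Placeholder for an IMU read: torso lean from vertical, radians."""
    return 0.0

def apply_ankle_torque(torque):
    """Placeholder for sending a torque command to the ankle joints."""
    pass

prev_angle = read_lean_angle()
next_tick = time.monotonic()
for _ in range(5 * LOOP_HZ):  # run the loop for five seconds
    angle = read_lean_angle()
    rate = (angle - prev_angle) / DT
    # Simple PD law: push back against the lean and damp its velocity.
    apply_ankle_torque(-KP * angle - KD * rate)
    prev_angle = angle
    # Sleep until the next 1 ms tick to hold the loop rate steady.
    next_tick += DT
    time.sleep(max(0.0, next_tick - time.monotonic()))
```

If a loop like this stalls for even a few milliseconds, the robot’s estimate of its own state goes stale and it can topple, which is why the update rate matters as much as the control law.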

Were there any surprises working on this project?

Generally, one thing that surprised me is the state of the art of robotics, and especially humanoid robotics. It’s one thing to see robots in a movie or in specific real-world demos, but those demos show you the best of what could be, so the state of the art is actually a lot more primitive than many people think it is. It’s exciting to be able to help grow that.


It’s a poor analogy, but what would you compare a humanoid robot of today to in terms of its development?

I could get into trouble because I don’t have much expertise in human development, but one of our goals was to work toward a robot with the capabilities of a two-year-old. I can’t say whether we exceeded that or not, but like a two-year-old, there’s definitely some misbehavior in these robots. They don’t always do what we tell them to.

We noticed in the competition that the robots moved very slowly, with a lot of starting and stopping while they solved a problem or awaited an order. What hurdles have to be overcome before we see robots that are fast and articulate enough to be practical?

There are two sides to this. On one side, robots are already practical in many ways. Probably the most successful robots are much simpler than those [at the competition]. On top of that, you have to consider the consequences of an error: if you have a lot of runs or a lot of time, you may give your autonomous system a lot of latitude to make decisions rather than constantly approving each step before it proceeds. In a competition like this, there are greater consequences, and if a mistake that a human can stop is about to happen, then we should prevent it from happening.

Secondly, there are lots of things that different groups are working on to improve the performance of robots as a whole. You notice that the robots are untethered. Power-wise, they were lasting at least an hour in most cases, but how useful is an hour of robot time? Maybe you want something that lasts a day, or even a couple of days. We need better batteries, better power systems and more efficient actuators, so there’s more power and less use of power down the line.


I’ve never met a perception algorithm that couldn’t use more processors to be better. The same goes for planning: parallelizing gives you more possibilities to come up with more solutions. And there’s still a lot of room for improvement in the state of the art in perception. We’re very good at recognizing specific objects, but more needs to be done on recognizing categories of objects, or on recognizing something never seen before and working out how it could be used.

On top of all this, I think there’s still a role for the human to help the robot know what it is that needs to be done. It may be best to have a human in a safe place who can communicate the “what” and allow the robot to do the dangerous things. That’s something I think we can see in the near future.

It’s the cliché question, but how long do you think it will be before we see robots like this as part of people’s lives?

It’s going to be a long time before we see a robot like this. Look at what [happened in the competition] and how many people it took to keep that robot from destroying itself. There’s a lot of work that needs to be done before we’re contributing in a way that’s not a burden to the operators. That goes back to all those things we talked about that need to be improved, so it’s going to be quite some time. On the other hand, simple robots are already being used in applications today. We’re just using more complicated robots in more complicated situations as we go in that direction of complexity.

References: http://www.gizmag.com/