Centimeter-long origami robot climbs inclines, swims, and carries loads


At the recent International Conference on Robotics and Automation, MIT researchers presented a printable origami robot that folds itself up from a flat sheet of plastic when heated and measures about a centimeter from front to back.

Weighing only a third of a gram, the robot can swim, climb an incline, traverse rough terrain, and carry a load twice its weight. Other than the self-folding plastic sheet, the robot’s only component is a permanent magnet affixed to its back. Its motions are controlled by external magnetic fields.
“The entire walking motion is embedded into the mechanics of the robot body,” says Cynthia R. Sung, an MIT graduate student in electrical engineering and computer science and one of the robot’s co-developers. “In previous [origami] robots, they had to design electronics and motors to actuate the body itself.”
Joining Sung on the paper describing the robot are her advisor, Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science; first author Shuhei Miyashita, a postdoc in Rus’ lab; Steven Guitron, who just received his bachelor’s degree in mechanical engineering from MIT; and Marvin Ludersdorfer of the Technical University of Munich.
Fantastic Voyage

The robot’s design was motivated by a hypothetical application in which tiny sheets of material would be injected into the human body, navigate to an intervention site, fold themselves up, and, when they had finished their assigned tasks, dissolve. To that end, the researchers built their prototypes from liquid-soluble materials. One prototype robot dissolved almost entirely in acetone (the permanent magnet remained); another had components that were soluble in water.
“We complete the cycle from birth through life, activity, and the end of life,” Miyashita says. “The circle is closed.”
In all of the researchers’ prototypes, the self-folding sheets had three layers. The middle layer always consisted of polyvinyl chloride, a plastic commonly used in plumbing pipes, which contracts when heated. In the acetone-soluble prototype, the outer layers were polystyrene.
Slits cut into the outer layers by a laser cutter guide the folding process. If two slits on opposite sides of the sheet are of different widths, then when the middle layer contracts, it forces the narrower slit’s edges together, and the sheet bends in the opposite direction. In their experiments, the researchers found that the sheet would begin folding at about 150 degrees Fahrenheit.
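
As a toy restatement of that rule (the widths here are invented for illustration, and this is not the researchers' design software), the fold direction can be encoded in a few lines:

```python
def fold_direction(top_slit_mm, bottom_slit_mm):
    """Encode the slit rule: the narrower slit's edges jam together first,
    so the sheet bends away from that face. Widths are hypothetical."""
    if top_slit_mm == bottom_slit_mm:
        return "no fold"  # symmetric slits: the contraction stays balanced
    return "folds downward" if top_slit_mm < bottom_slit_mm else "folds upward"

print(fold_direction(0.5, 1.0))  # narrower slit on top -> folds downward
```
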
Once the robot has folded itself up, the proper application of a magnetic field to the permanent magnet on its back causes its body to flex. The friction between the robot’s front feet and the ground is great enough that the front feet stay fixed while the back feet lift. Then, another sequence of magnetic fields causes the robot’s body to twist slightly, which breaks the front feet’s adhesion, and the robot moves forward.

Outside control

In their experiments, the researchers positioned the robot on a rectangular stage with an electromagnet at each of its four corners. They were able to vary the strength of the electromagnets’ fields rapidly enough that the robot could move nearly four body lengths a second.
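
For a roughly one-centimetre robot, four body lengths a second works out to about 4 cm/s. A minimal sketch of how a corner-coil gait controller might be structured is below; the set_coil_currents driver and the gait table values are hypothetical stand-ins, not the authors' actual field sequence.

```python
import time

# Hypothetical driver: set_coil_currents stands in for whatever hardware
# interface energises the four corner electromagnets. The gait table is an
# illustrative guess at a flex/twist cycle, not the published field sequence.
GAIT_CYCLE = [
    # (NW, NE, SW, SE) coil currents, arbitrary units
    (1.0, 1.0, 0.0, 0.0),  # field tilts forward: body flexes, back feet lift
    (1.0, 0.0, 0.0, 1.0),  # diagonal field: slight twist frees the front feet
    (0.0, 0.0, 1.0, 1.0),  # field tilts back: body relaxes onto new footing
    (0.0, 1.0, 1.0, 0.0),  # opposite twist: reset posture for the next step
]

def walk(set_coil_currents, n_steps, step_period_s=0.25):
    """Cycle the four coils through one gait phase at a time."""
    for _ in range(n_steps):
        for currents in GAIT_CYCLE:
            set_coil_currents(*currents)
            time.sleep(step_period_s / len(GAIT_CYCLE))
```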

In addition to the liquid-soluble versions of their robot, the researchers also built a prototype whose outer layers were electrically conductive. Inspired by earlier work from Rus and Miyashita, the researchers envision that a tiny, conductive robot could act as a tiny sensor. Contact with other objects—whether chemical accretions in a mechanical system or microorganisms or cells in the body—would disrupt a current passing through the robot in a characteristic way, and that electrical signal could be relayed to human operators.
“Making small robots is particularly challenging, because you don’t just take off-the-shelf components and bolt them together,” says Hod Lipson, a professor of mechanical and aerospace engineering at Cornell University, who studies robotics. “It’s a challenging angle of robotics, and they’ve been able to solve it.”
“They use digital manufacturing techniques so that the intelligence of the manufacturing is embedded in the material,” Lipson adds. “I think the techniques they describe would scale to smaller and smaller dimensions, so they by no means have reached a limit.”

References:http://phys.org/

A computer algorithm to quantify creativity in art networks


A team of researchers at Rutgers University has taken on the novel task of getting a computer to rate paintings made by the masters, based on their creativity. They have written a paper describing their approach and the results they have obtained in running their algorithm and have posted it on the preprint server arXiv.

The value of art lies in the eye of the beholder: one person may find that a particular painting moves them to tears, while another feels nothing—such is the intangible nature of the human mind and its reaction to stimuli. Creativity, on the other hand, is a little more easily recognized, whether in art, the sciences or other areas. In this new effort, the team at Rutgers sought to bring some science to the fine art of creativity recognition as it applies to one of the most recognized fine arts—paintings done by masters over the years. Traditionally, labeling a work of art as creative has fallen to art scholars with years of training, background and love of the work—a creative work has to offer something new, of course, but it must also, according to the researchers, have demonstrated some degree of influence, i.e. have been copied by others who came after. They set out to create an algorithm that, once finished, could rate the works of masters based on nothing but creativity.
To create that algorithm, the team started with what are known as classemes—features whereby a computer recognizes an object in a picture and assigns it to a particular category. Next, they tapped an easily accessible database of famous paintings, WikiArt, which holds, among other things, approximately 62,000 images of famous paintings. Finally, they applied theoretical work from network science to help figure out which paintings were a clear influence on the creation of other paintings.
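
As a rough sketch of that idea: assuming each painting is described by a classeme feature vector and a creation date, a toy creativity score can reward paintings that differ from earlier work (originality) yet resemble later work (influence). The similarity measure and scoring below are simplifications of the paper's network-based formulation, not the authors' implementation.

```python
import numpy as np

def creativity_scores(features, years):
    """Toy creativity score over a collection of paintings.

    features: (n_paintings, n_classemes) array of classeme vectors
    years:    (n_paintings,) array of creation dates
    """
    # cosine similarity between every pair of paintings
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T

    scores = np.zeros(len(years))
    for i in range(len(years)):
        earlier = sim[i, years < years[i]]   # similarity to predecessors
        later = sim[i, years > years[i]]     # similarity to successors
        originality = 1.0 - earlier.mean() if earlier.size else 1.0
        influence = later.mean() if later.size else 0.0
        scores[i] = originality + influence  # unlike the past, like the future
    return scores
```
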
Putting it all together and running the algorithm generated a list of paintings ranked by creativity. The approach apparently worked: the researchers report that, for the most part, their algorithm’s results matched art experts’ assessments very closely, with only a few exceptions here and there. The team suggests the algorithm could be used in other contexts as well, such as sculpture or literature, and likely in scientific applications too.

References:http://phys.org/

Machines learn to understand how we speak


At Apple’s recent Worldwide Developers Conference, one of the tent-pole items was the addition of new intelligent voice-recognition features for its personal assistant, Siri, in iOS 9, the latest update to its mobile operating system.

Now, instead of asking Siri to “remind me about Kevin’s birthday tomorrow”, you can rely on context and just ask Siri to “remind me of this” while viewing the Facebook event for the birthday. It will know what you mean.

Technology like this has also existed in Google devices for a little while now – thanks to OK Google – bringing us ever closer to context-aware voice recognition.

But how does it all work? Why is context so important and how does it tie in with voice recognition?
To answer that question, it’s worthwhile looking back at how voice recognition works and how it relates to another important area, natural language processing.

A brief history of voice recognition

Voice recognition has been in the public consciousness for a long time. Rather than tapping on a keyboard, wouldn’t it be nice to speak to a computer in natural language and have it understand everything you say?
Ever since Captain Kirk’s conversation with the computer aboard the USS Enterprise in the original Star Trek series in the 1960s (and Scotty’s failed attempt to talk to a 20th-century computer in one of the later Original Series movies) we’ve dreamed about how this might work.

Even movies set in more recent times have flirted with the idea of better voice recognition. The technology-focused Sneakers from 1992 features Robert Redford painfully collecting snippets of an executive’s voice and playing them back with a tape recorder into a computer to gain voice access to the system.

But the simplicity of the science-fiction depictions belies a complexity in the process of voice-recognition technology. Before a computer can even understand what you mean, it needs to be able to understand what you said.
This involves a complex process that includes audio sampling, feature extraction and then actual speech recognition to recognise individual sounds and convert them to text.
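
For the feature-extraction step, a minimal sketch using the widely used librosa audio library might look like the following; the file name is illustrative, and MFCCs are one standard choice of ear-inspired feature rather than the only one.

```python
import librosa  # common audio-analysis library

# Load a short clip of speech at a 16 kHz sampling rate
# (the file name is illustrative).
y, sr = librosa.load("utterance.wav", sr=16000)

# Feature extraction: MFCCs are a standard ear-inspired representation,
# built on the mel scale's approximation of human frequency resolution.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, n_frames)
```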

Researchers have been working on this technology for many years. They have developed techniques that extract features in a similar way to the human ear and recognise them as phonemes and sounds that human beings make as part of their speech. This involves the use of artificial neural networks, hidden Markov models and other ideas that are all part of the broad field of artificial intelligence.
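
A classic (pre-deep-learning) recogniser pairs such features with one hidden Markov model per phoneme. A toy sketch with the third-party hmmlearn library follows; random data stands in for real MFCC frames so the sketch runs on its own.

```python
import numpy as np
from hmmlearn import hmm  # third-party HMM library, used for illustration

# Random frames stand in for real (n_frames, 13) MFCC features.
mfcc_frames = np.random.randn(200, 13)

# A small Gaussian HMM, as might model one phoneme in a classic recogniser.
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
model.fit(mfcc_frames)  # learn state transitions and emission statistics

# With one model per phoneme, recognition picks the phoneme whose model
# assigns a segment the highest likelihood.
print(model.score(mfcc_frames))  # log-likelihood of the frames
```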

Through these models, speech-recognition rates have improved. Error rates of less than 8% were reported this year by Google.

But even with these advancements, auditory recognition is only half the battle. Once a computer has gone through this process, it only has the text that replicates what you said. But you could have said anything at all.

The next step is natural language processing.

Did you get the gist?

Once a machine has converted what you say into text, it then has to understand what you’ve actually said. This process is called “natural language processing”. It is arguably more difficult than voice recognition itself, because human language is so full of context and semantics.

Anybody who has used earlier voice-recognition systems can testify as to how difficult this can be. Early systems had a very limited vocabulary and you were required to say commands in just the right way to ensure that the computer understood them.

This was true not only for voice-recognition systems, but even for textual input systems, where the order of the words and the inclusion of certain words made a large difference to how the system processed the command. This was because early language-processing systems used hard rules and decision trees to interpret commands, so any deviation from these commands caused problems.
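
A caricature of that brittleness: a parser built on exact patterns (the command table here is invented for illustration) fails the moment the wording shifts.

```python
# A caricature of an early rule-based parser: exact patterns only.
COMMANDS = {
    "remind me about kevin's birthday tomorrow": "create_reminder",
    "call martin": "place_call",
}

def parse(utterance):
    # Any deviation in wording or word order falls straight through.
    return COMMANDS.get(utterance.lower().strip(), "ERROR: not recognised")

print(parse("Remind me about Kevin's birthday tomorrow"))   # create_reminder
print(parse("Tomorrow, remind me about Kevin's birthday"))  # ERROR: ...
```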

Newer systems, however, use machine-learning algorithms similar to the hidden Markov models used in speech recognition to build a vocabulary. These systems still need to be taught, but they are able to make softer decisions based on weightings of the individual words used. This allows for more flexible queries, where the language used can be changed but the content of the query can remain the same.

This is why it’s possible to ask Siri either to “schedule a calendar appointment for 9am to pick up my dry-cleaning” or “enter pick up my dry-cleaning in my calendar for 9am” and get the same result.
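
A minimal sketch of the softer, weighting-based approach, using scikit-learn and a tiny invented training set: a bag-of-words classifier maps differently worded requests to the same intent because it weighs which words appear rather than demanding an exact pattern.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: several phrasings per intent.
texts = [
    "schedule a calendar appointment for 9am to pick up my dry-cleaning",
    "enter pick up my dry-cleaning in my calendar for 9am",
    "put lunch with Sam in my calendar for noon",
    "remind me about Kevin's birthday tomorrow",
    "set a reminder for Kevin's birthday",
    "don't let me forget Kevin's birthday tomorrow",
]
labels = ["calendar", "calendar", "calendar",
          "reminder", "reminder", "reminder"]

# Bag-of-words weighting: word order matters far less than word choice.
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(texts, labels)

# A rephrased dry-cleaning request still lands on the same intent.
print(classifier.predict(["add dry-cleaning pickup to my calendar at 9am"]))
# ['calendar']
```
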
But how do you deal with different voices?

Despite these advancements there are still challenges in this space. In the field of voice recognition, accents and pronunciation can still cause problems.

Because of the way the systems work, different pronunciation of phonemes can cause the system to not recognise what you’ve said. This is especially true when the phonemes in a word seem (to non-locals) to bear no relation to the way it is pronounced, such as the British cities of “Leicester” or “Glasgow”.

Even Australian cities such as “Melbourne” seem to trip up some Americans. While to an Australian the pronunciation of Melbourne is very obvious, the different way that phonemes are used in America means that they often pronounce it wrong (to parochial ears).

Anybody who has heard a GPS system mispronounce Ipswich as “eyp-swich” knows this also goes both ways. The only way around this is to train the system in the different ways words are pronounced. But with the variation in accents (and even pronunciation within accents) this can be quite a large and complex process.
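
One common way to encode that training is a pronunciation lexicon listing every accepted phoneme sequence per word. The entries below use rough ARPAbet-style transcriptions and are illustrative, not drawn from any particular recogniser.

```python
# A hypothetical pronunciation lexicon: each word maps to every phoneme
# sequence (rough ARPAbet) that the recogniser should accept for it.
LEXICON = {
    "leicester": ["L EH S T ER"],                        # "LESS-ter"
    "glasgow":   ["G L AA S G OW", "G L AE Z G OW"],     # regional variants
    "melbourne": ["M EH L B ER N", "M EH L B AO R N"],   # local vs. US habit
    "ipswich":   ["IH P S W IH CH"],                     # not "eyp-swich"
}

def matches(word, heard_phonemes):
    """True if the heard phoneme string is any accepted variant."""
    return heard_phonemes in LEXICON.get(word, [])

print(matches("melbourne", "M EH L B ER N"))  # True for either variant
```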

On the language-processing side, the issue is predominantly one of context. The Siri example given in the opening shows the state of the art in contextual language processing. But all you need to do is pay attention to a conversation for a few minutes to realise how much we have to change the way we speak to give machines extra context.

For instance, how often do you ask somebody:

Did you get my e-mail?

But what you actually mean is:

Did you get my e-mail? If you did, have you read it and can you please provide a reply as response to this question?
Things get even more complicated when you want to engage in a conversation with a machine, asking an initial question and the follow-up questions, such as “What is Martin’s number?”, followed by “Call him” or “Text him”.
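
A minimal sketch of the dialogue-state bookkeeping this requires: track the most recently mentioned person so a pronoun like "him" can be resolved. The contact data and matching rules are invented for illustration.

```python
# Invented contact data for illustration.
CONTACTS = {"Martin": "+61 400 000 000"}

class Dialogue:
    def __init__(self):
        self.last_person = None

    def handle(self, utterance):
        u = utterance.lower()
        for name in CONTACTS:
            if name.lower() in u:
                self.last_person = name      # remember who was mentioned
        if self.last_person is None:
            return "Sorry, who do you mean?"
        number = CONTACTS[self.last_person]
        if "number" in u:
            return f"{self.last_person}'s number is {number}"
        if "call him" in u or "text him" in u:
            # pronoun resolution: "him" -> most recently mentioned person
            return f"Contacting {self.last_person} on {number}"
        return "Sorry, I didn't understand."

d = Dialogue()
print(d.handle("What is Martin's number?"))  # Martin's number is ...
print(d.handle("Call him"))                  # Contacting Martin on ...
```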

Machines are improving when it comes to understanding context, but they still have a way to go!

Automatic translation

So, we have made great progress in a lot of different areas to get to this point. But there are still challenges ahead in accent recognition, implied meaning in language, and context in conversations. This means it might still be a while before we have those computers from Star Trek interpreting everything we say.

But rest assured. We are slowly getting closer, with recent advancements from Microsoft in automatic translation showing that, if we get it right, the result can be very cool.

Google has recently revealed technology that uses a combination of image or voice recognition, natural language processing and the camera on your smartphone to automatically translate signs and short conversations from one language to another for you. It will even try to match the font so that the sign looks the same, but in English!

So you no longer need to ponder over a menu written in Italian, or wonder how to order from a waiter who doesn’t speak English—Google has you covered. Not quite the USS Enterprise, but certainly closer!

Michael Cowling is Senior Lecturer & Discipline Leader, Mobile Computing & Applications at Central Queensland University.

References:http://phys.org/

Soft Robotic Tentacles Pick Up Ant Without Crushing It


Tiny soft robotic tentacles might be ideal for delicate microscopic surgery, say researchers, who were able to use the teensy “limbs” to pick up an ant without damaging its body.

In experiments, these new tentacles also wrapped around other tiny items — such as fish eggs, which deform and burst easily when handled by hard tweezers — without damaging them, scientists added.

Conventional robots are built from rigid parts, making them vulnerable to harm from bumps, scrapes, twists and falls, as well as preventing them from wriggling past obstacles. Increasingly, researchers are developing robots made from soft, elastic plastic and rubber and inspired by octopuses, worms and starfish. These soft robots resist many of the kinds of damage, and can overcome many of the obstacles, that impair hard robots.

However, miniaturizing soft robots for tiny applications has proved challenging. Soft robots typically move with the aid of compressed air that is forced in and out of many tiny pneumatic channels running through their limbs, essentially inflating and deflating like balloons. However, scientists have faced challenges when trying to create microscopic versions of such limbs. For example, the hollow channels in soft robots are often created by dissolving away unwanted matter, but ensuring that all such material gets dissolved is a complicated task at microscopic scales.

These new robot tentacles can grab and squeeze items by moving in a spiraling manner, much like elephant trunks, octopus arms, plant tendrils and monkey tails.

[Image: The soft robotic micro-tentacle wraps around a delicate fish egg. Scale bar: 0.5 mm. Credit: Jaeyoun Kim / Iowa State University]
The microscopic tubes are 5 to 8 millimeters long, about the length of the average red ant. Each tube has walls 8 to 32 microns thick and hollow channels 100 to 125 microns wide. In comparison, the average width of a human hair is about 100 microns.

To make these microscopic tubes, the researchers dipped thin wires or optical fibers in liquid silicone rubber and then stripped the hollow pipes off the rods once the fluid had solidified. The researchers inflated and deflated the tubes using syringes as pumps.

The hollow channel inside each tube did not run straight down its middle — rather, by letting gravity pull on the silicone rubber as it solidified, one side of each tube was thicker than the other. When air is pumped into each tube, the thin side will bend more than the thick side, allowing the tube to coil.
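
As a toy model of that mechanism (not the authors' mechanics): if each short segment of the tube is assumed to bend by an angle proportional to the applied pressure and to the wall-thickness asymmetry, the number of coil turns falls out directly. The gain constant and pressure value below are made up for illustration.

```python
import math

def coil_turns(pressure_kpa, thin_wall_um, thick_wall_um,
               n_segments=20, gain=0.05):
    """Toy kinematic model of the micro-tentacle's coiling.

    Each segment is assumed to bend in proportion to the pressure and
    to the wall-thickness asymmetry (the thin side stretches more).
    The proportionality `gain` is invented for illustration.
    """
    asymmetry = (thick_wall_um - thin_wall_um) / thick_wall_um
    bend_per_segment = gain * pressure_kpa * asymmetry   # radians
    total_angle = n_segments * bend_per_segment
    return total_angle / (2 * math.pi)                   # full turns

# e.g. 8 um vs 32 um walls at a modest (invented) pressure
print(round(coil_turns(pressure_kpa=40, thin_wall_um=8, thick_wall_um=32), 2))
```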

Ordinarily, these microscopic tubes can only coil once when inflated. However, the scientists augmented the ability of the tubes to flex by adding rings of silicone rubber onto their exteriors that “amplified the single-turn coiling into multi-turn spiraling,” study co-author Jaeyoun Kim, an electrical engineer at Iowa State University, told Live Science.

These new tentacles could pick up and hold an ant whose waist was about 400 microns wide without damaging its body. The researchers suggest these tentacles could help safely and delicately manipulate blood vessels or even embryos in minimally invasive surgeries. “The gentle spiraling and scooping motion of our micro-tentacle will definitely help,” Kim said.

Kim and his colleagues, Jungwook Paek and Inho Cho, detailed their findings online today (June 11) in the journal Scientific Reports.

References:http://www.livescience.com/