World’s Thinnest Light Bulb Created from Graphene


Graphene, a form of carbon famous for being stronger than steel and more conductive than copper, can add another wonder to the list: making light.

Researchers have developed a light-emitting graphene transistor that works in the same way as the filament in a light bulb.

“We’ve created what is essentially the world’s thinnest light bulb,” study co-author James Hone, a mechanical engineer at Columbia University in New York, said in a statement.

Scientists have long wanted to create a teensy “light bulb” to place on a chip, enabling what are called photonic circuits, which run on light rather than electric current. The problem has been one of size and temperature: incandescent filaments must get extremely hot before they produce visible light. This new graphene device, however, is so tiny and efficient that the resulting technology could offer new ways to make displays or to study high-temperature phenomena at small scales, the researchers said.

Making light

When electric current is passed through an incandescent light bulb’s filament, usually made of tungsten, the filament heats up and glows. Electrons moving through the material collide with electrons in the filament’s atoms, kicking them to higher energy levels; when those electrons fall back to their former levels, they emit photons (light) in the process. Crank up the current and voltage enough and an incandescent filament reaches temperatures of about 5,400 degrees Fahrenheit (3,000 degrees Celsius). This is one reason light bulbs either contain no air or are filled with an inert gas such as argon: at those temperatures, tungsten would react with the oxygen in air and simply burn.
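
For context, a filament at that temperature radiates roughly like a blackbody, and Wien’s displacement law (standard textbook physics, not a figure from the study) gives the wavelength at which its emission peaks. At 3,000 degrees Celsius, about 3,273 K:

\[
\lambda_{\max} = \frac{b}{T} \approx \frac{2.898\times10^{-3}\ \mathrm{m\cdot K}}{3273\ \mathrm{K}} \approx 885\ \mathrm{nm}
\]

That peak lies in the near-infrared, so only the short-wavelength tail of the glow is visible, which is why a filament must run so hot to look bright, and why most of its power is wasted as heat.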

In the new study, the scientists used strips of graphene a few microns across and 6.5 to 14 microns long, each spanning a trench in silicon like a bridge. (A micron is one-millionth of a meter; a human hair is about 90 microns thick.) Electrodes were attached to the ends of each graphene strip. Just as with tungsten, run a current through graphene and the material lights up. But there is a twist: graphene conducts heat less efficiently as its temperature rises, so the heat stays concentrated in a spot at the center rather than being distributed relatively evenly, as it is in a tungsten filament.

Myung-Ho Bae, one of the study’s authors, told Live Science that trapping the heat in one region makes the lighting more efficient. “The temperature of hot electrons at the center of the graphene is about 3,000 K [4,940 F], while the graphene lattice temperature is still about 2,000 K [3,140 F],” he said. “It results in a hotspot at the center, and the light emission region is focused at the center of the graphene, which also makes for better efficiency.” It’s also the reason the electrodes at either end of the graphene don’t melt.

As for why this is the first time light has been made from graphene, study co-leader Yun Daniel Park, a professor of physics at Seoul National University, noted that graphene is usually embedded in or in contact with a substrate.

“Physically suspending graphene essentially eliminates pathways in which heat can escape,” Park said. “If the graphene is on a substrate, much of the heat will be dissipated to the substrate. Before us, other groups had only reported inefficient radiation emission in the infrared from graphene.”

The light emitted from the graphene also reflected off the silicon beneath each suspended strip. The reflected light interferes with the directly emitted light, producing an emission pattern with peaks at particular wavelengths. That opened up another possibility: tuning the light by varying the distance to the silicon.
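
In a simplified picture (not spelled out in the article), the silicon acts as a mirror a distance $d$ below the emitter: the reflected wave travels an extra distance of $2d$ and picks up roughly a half-wave phase shift on reflection, so emission peaks appear near wavelengths satisfying

\[
2d = \left(m + \tfrac{1}{2}\right)\lambda_m, \qquad m = 0, 1, 2, \ldots
\]

Changing $d$ then shifts each peak wavelength, which is the tuning knob the researchers describe.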

The principle behind the graphene light source is simple, Park said, but it took a long time to uncover.

“It took us nearly five years to figure out the exact mechanism but everything (all the physics) fit. And, the project has turned out to be some kind of a Columbus’ Egg,” he said, referring to a legend in which Christopher Columbus challenged a group of men to make an egg stand on its end; they all failed and Columbus solved the problem by just cracking the shell at one end so that it had a flat bottom.

References: http://www.livescience.com/

Interview: Lockheed Martin’s Todd Danko on the DRC Finals and the future of robotics


Team Trooper’s robot, Leo, competed in the recent DARPA Robotics Challenge

One of the 24 teams competing at the 2015 DARPA Robotics Challenge, and the only team fielded by a large private company, was Lockheed Martin’s Team Trooper and its robot Leo. To find out more about what goes into programming a humanoid robot and the future of robotics, we talked to the team leader, Todd Danko.

Danko: The whole point is to look for new opportunities. We think that investing in mobile manipulation, which is effectively what we’re doing, will be very useful in future applications like underwater or space robotics; places where it’s very difficult or impossible to get people to do those tasks. You can use our systems to tell robots what to do, and let the robots do those things.

Why did Lockheed opt for using Boston Dynamics’s humanoid Atlas robot?

We’re in a world that’s complementary to the humanoid form. If you have a very constrained task, like in a factory, a humanoid isn’t the right answer. You want to optimize your robot to solve those problems. If you don’t have a single problem to optimize, and you want a more general robot, then a humanoid robot makes more sense.


Does the humanoid form pose any challenges in creating software?

It produces tremendous challenges. Just getting a humanoid robot to stay still, and not fall over if you touch it, takes a lot of software on its own. You have to have a good dynamic model of the robot and run it constantly at a thousand hertz so that the robot doesn’t fall over. A robot with wheels, meanwhile, is mechanically stable even if you turn everything off. It just sits there.
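
To make the “thousand hertz” point concrete, here is a minimal Python sketch of a fixed-rate balance loop, with the robot reduced to an inverted pendulum and a hypothetical PD controller on ankle torque. All of the numbers are illustrative; none of this comes from Team Trooper’s actual software.

```python
import math

DT = 0.001                         # 1,000 Hz control period, as Danko describes
MASS, LENGTH, G = 80.0, 1.0, 9.81  # hypothetical robot: 80 kg, 1 m center of mass
KP, KD = 900.0, 120.0              # hand-tuned PD gains (illustrative)

theta, omega = 0.05, 0.0           # start 0.05 rad off vertical -- a light shove

for step in range(2000):           # simulate two seconds of the loop
    # Controller: ankle torque opposing the measured lean and lean rate.
    torque = -KP * theta - KD * omega
    # Plant: inverted-pendulum dynamics, integrated at the same 1 kHz rate.
    alpha = (MASS * G * LENGTH * math.sin(theta) + torque) / (MASS * LENGTH ** 2)
    omega += alpha * DT
    theta += omega * DT

print(f"lean after 2 s: {theta:.4f} rad")  # decays back toward vertical
```

Even this toy version shows the point: turn the controller off and the “robot” falls over, whereas a wheeled base needs none of this.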

Were there any surprises working on this project?

Generally, one thing that surprised me is the state of the art of robotics, and especially humanoid robotics. It’s one thing to see them in a movie or some real robots in specific demos, but with those demos you’re seeing the best of what could be, so the state of the art is actually a lot more primitive than many people think it is. It’s exciting to be able to help grow that.


It’s a poor analogy, but what would you compare a humanoid robot of today to in terms of its development?

I could get into trouble because I don’t have much expertise in human development, but one of our goals was to work toward a robot with the capabilities of a two-year-old. I can’t say whether we exceeded that or not, but like a two-year-old, there’s definitely some misbehavior in these robots. They don’t always do what we tell them to.

We noticed in the competition that the robots moved very slowly, with a lot of starting and stopping while they solved a problem or awaited an order. What hurdles have to be overcome before we see robots that are fast and articulate enough to be practical?

There are two sides to this. On one side, robots are already practical in many ways. Probably the most successful robots are much simpler than those [at the competition]. On top of that, you have to consider the consequences of an error: if you have a lot of runs or a lot of time, you may give your autonomous system a lot of latitude to make decisions rather than constantly approving it before it proceeds. In a competition like this, there are greater consequences, and if a mistake happens that a human can stop, then we should prevent it from happening.

Secondly, there are lots of things that different groups are working on to improve the performance of robots as a whole. You notice that the robots are untethered. Power-wise, they were lasting an hour at least in most cases, but how useful is an hour of robot time? Maybe you want something that lasts a day, or even a couple of days. We need better batteries, better power systems and more efficient actuators, so that down the line there’s more power available and less power consumed.


I’ve never met a perception algorithm that couldn’t use more processors to be better. The same goes for planning: parallelizing gives you more possibilities to come up with more solutions. And there’s still a lot of room for improvement in the state of the art in perception itself. We’re very good at recognizing specific objects, but more needs to be done in recognizing categories of objects, or recognizing something never seen before and working out how it could be used.

On top of all this, I think there’s still a role for the human to help the robot know what it is that needs to be done. It may be best to have a human in a safe place who can communicate the “what” and allow the robot to do the dangerous things. That’s something I think we can see in the near future.

It’s the cliché question, but how long do you think it will be before we see robots like this as part of people’s lives?

It’s going to be a long time before we see a robot like this. Look at what [happened in the competition] and how many people it took to keep that robot from destroying itself. There’s a lot of work that needs to be done before these robots contribute in a way that’s not a burden to their operators. That goes back to all those things we talked about that need to be improved, so it’s going to be quite some time. On the other hand, simple robots are already being used in applications today. We’re just using more complicated robots in more complicated situations as we move in that direction of complexity.

References: http://www.gizmag.com/

Physicists develop ultrasensitive nanomechanical biosensor


Two young researchers working at the MIPT Laboratory of Nanooptics and Plasmonics, Dmitry Fedyanin and Yury Stebunov, have developed an ultracompact, highly sensitive nanomechanical sensor for analyzing the chemical composition of substances and detecting biological objects, such as the viral disease markers that appear when the immune system responds to incurable or hard-to-cure diseases, including HIV, hepatitis, herpes and many others. The sensor will also enable doctors to identify tumor markers, whose presence in the body signals the emergence and growth of cancerous tumors.

The sensitivity of the new device is best characterized by one key figure: according to its developers, the sensor can track, in real time, changes of just a few kilodaltons in the mass of the cantilever. One dalton is roughly the mass of a proton or neutron, and several thousand daltons is the mass of an individual protein or DNA molecule. The new optical sensor should therefore allow diseases to be diagnosed long before they can be detected by any other method, paving the way for a new generation of diagnostics.

The device, described in an article published in the journal Scientific Reports, is an optical or, more precisely, optomechanical chip. “We’ve been following the progress made in the development of micro- and nanomechanical biosensors for quite a while now, and can say that no one has been able to introduce a simple and scalable technology for parallel monitoring that would be ready to use outside a laboratory. So our goal was not only to achieve the high sensitivity of the sensor and make it compact, but also make it scalable and compatible with standard microelectronics technologies,” the researchers said.

Unlike similar devices, the new sensor has no complex junctions and can be produced using the standard CMOS process technology of microelectronics. The sensor doesn’t contain a single electrical circuit, and its design is very simple. It consists of two parts: a photonic (or plasmonic) nanowaveguide to control the optical signal, and a cantilever hanging over the waveguide.


A cantilever, or beam, is a long, thin strip of microscopic dimensions (5 micrometers long, 1 micrometer wide and 90 nanometers thick), fixed firmly to the chip at one end. To get an idea of how it works, imagine pressing one end of a ruler tightly to the edge of a table and letting the other end hang freely in the air. If you snap the free end with your other hand, the ruler oscillates mechanically at a certain frequency. The cantilever works the same way. The only difference between the ruler’s oscillations and the cantilever’s is the frequency, which depends on the materials and the geometry: while the ruler oscillates at several tens of hertz, the cantilever’s oscillation frequency is measured in megahertz. In other words, it completes a few million oscillations per second.
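
That megahertz figure can be sanity-checked with the standard Euler–Bernoulli formula for the fundamental flexural mode of a rectangular cantilever of length $L$ and thickness $t$. The article does not name the cantilever’s material, so the numbers below assume silicon ($E \approx 170$ GPa, $\rho \approx 2330$ kg/m³):

\[
f_1 \approx 0.1615\,\frac{t}{L^{2}}\sqrt{\frac{E}{\rho}}
\approx 0.1615 \times \frac{90\times10^{-9}\,\mathrm{m}}{\left(5\times10^{-6}\,\mathrm{m}\right)^{2}} \times 8.5\times10^{3}\,\mathrm{m/s}
\approx 5\ \mathrm{MHz},
\]

so a beam of these dimensions does indeed ring at a few million oscillations per second, in line with the article.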

Two optical signals pass through the waveguide during operation: the first sets the cantilever in motion, and the second reads out a signal containing information about that motion. The inhomogeneous electromagnetic field of the control signal’s optical mode induces a dipole moment in the cantilever and, at the same time, exerts a force on that dipole, so the cantilever starts to oscillate.
The sinusoidally modulated control signal makes the cantilever oscillate at an amplitude of up to 20 nanometers. The oscillations determine the parameters of the second signal, the output power of which depends on the cantilever’s position.
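
The driving mechanism described here is, in schematic terms, an optical gradient force (the paper’s exact expressions may differ). The mode’s field $\mathbf{E}$ induces a dipole $\mathbf{p} = \alpha\mathbf{E}$ in the cantilever, and the strongly inhomogeneous field pulls on that dipole with a time-averaged force of

\[
\mathbf{F} \approx \tfrac{1}{2}\,\alpha\,\nabla\lvert\mathbf{E}\rvert^{2},
\]

so modulating the control signal’s power sinusoidally at the cantilever’s mechanical resonance modulates this force and drives the oscillation resonantly.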


The highly localized optical modes of nanowaveguides, which create a strong gradient in the electric field intensity, are key to inducing the cantilever’s oscillations. Because the electromagnetic field in such systems changes over tens of nanometers, researchers use the term “nanophotonics.” Without the nanoscale waveguide and cantilever, the chip simply wouldn’t work: a large cantilever cannot be set oscillating by freely propagating light, and the effect of chemical changes on its surface on the oscillation frequency would be less noticeable.

Cantilever oscillations make it possible to determine the chemical composition of the environment in which the chip is placed. That’s because the frequency of mechanical vibrations depends not only on the cantilever’s dimensions and material properties, but also on the mass of the oscillating system, which changes as the cantilever reacts chemically with its environment. By coating the cantilever with different reagents, researchers can make it react with specific substances or even biological objects. If you place antibodies to certain viruses on the cantilever, it will capture any matching viral particles in the analyzed environment. Depending on the captured virus or the layer of chemically reacted substance, the oscillations occur at a lower or higher amplitude, and the electromagnetic wave passing through the waveguide is scattered by the cantilever differently, which shows up as changes in the intensity of the readout signal.
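
The link between accreted mass and the measured signal follows from the textbook resonator relation $f \propto \sqrt{k/m}$: a small added mass $\Delta m$ on a cantilever of effective mass $m$ shifts the resonance frequency by approximately

\[
\frac{\Delta f}{f_0} \approx -\frac{\Delta m}{2m}.
\]

As a rough estimate (again assuming silicon), the 5 µm × 1 µm × 90 nm beam described above has a mass of about a picogram, on the order of $6\times10^{11}$ daltons, so a change of a few kilodaltons corresponds to a fractional frequency shift of order $10^{-9}$; this is why the optical readout must resolve extremely small changes in the cantilever’s motion.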

Calculations done by the researchers suggest that the new sensor will combine high sensitivity with comparative ease of production and miniature dimensions, allowing it to be used in portable devices such as smartphones and wearable electronics. A single chip a few millimeters across could accommodate several thousand such sensors, each configured to detect different particles or molecules. Thanks to the simplicity of the design, the price will most likely depend on the number of sensors, and should be far more affordable than competing devices.

References: http://phys.org/

For app developers, more big changes are coming soon

The App Store revolutionized the tech world when it opened in summer 2008, spawning a billion-dollar industry in one fell swoop. It was neither the first nor the largest back then, but the store quickly exploded in popularity, prompting Apple co-founder Steve Jobs to say “it is going to be very hard for others to catch up.”

The store was one of the big stars this week at Apple’s annual Worldwide Developers Conference in San Francisco, with CEO Tim Cook’s announcement that it had recently “passed a major milestone, with 100 billion app downloads” since the store opened its virtual doors.
Many in the app-developer fold say the business, thanks to the marketplace created by the App Store and other outlets like Google Play, is still in its infancy and mobile apps in the next few years will continue to change human behavior in unimaginable ways.
From the “Internet of Things,” where apps will help connect an estimated 50 billion devices to the Internet by 2020 and transform the way we relate to our homes and workplaces, to the continued democratization of software, where tech novices will be able to build their own apps, the digital landscape will shift at a breakneck speed.
At the same time, the way apps come into being could also go through a seismic shift. The small independent app-makers who early on helped make the App Store the success it is today will find it harder to survive there, while large corporations will dominate the stage as their in-house coders custom-tailor more and more apps to meet their customers’ needs.
“We’re seeing big companies taking over” the mobile-app industry, said Mark Wilcox, a business analyst with VisionMobile. As a result, according to the firm’s recent survey, nearly half of the developers who want to make money building apps actually make zero or next to nothing. “Large companies, and especially game publishers, take all the top spots on the App Store and most of the revenue,” he said. “The little guys are struggling to compete.”
Calling the momentum “absolutely staggering,” Cook told developers this week that the App Store has forever changed the way we think of software and the way we all increasingly use it in our daily lives.
Connecting everyday objects, from home-heating systems to toasters, will continue to be a major focus for developers, with one survey showing that 53 percent of respondents said they were already working on so-called IoT – or “Internet of Things” – apps. Wearable tech, like the new Apple Watch, could host thousands of new apps this year alone, from health and fitness monitors to tools not yet envisioned.
In a clear nod to the future of apps already unfolding on wearable technology, Cook used his keynote address to introduce Kevin Lynch, Apple’s vice president for technology, to talk about watchOS 2, the first major update for the Apple Watch since it was unveiled last September. Lynch said developers could soon use the new software to build native, or in-watch, apps that would allow users to tap directly into the watch’s burgeoning bounty without having to rely on their iPhones for access.
Another budding trend features strategically placed beacons, small devices in the physical world that interact with apps, which in turn will collect and process mountains of data. An in-app sale offer triggered on your phone by a beacon inside the Wal-Mart you just entered is an example of this technology. Over time, all that data collected from our phones about our daily patterns will then guide and improve the software we’ll use to work and play.
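
As a toy illustration of that trigger logic (pure Python, with made-up beacon IDs and offers rather than any real BLE or store API), the app-side logic can be as simple as a lookup from a detected beacon’s identifier to an offer:

```python
# Hypothetical beacon-to-offer table; the IDs and offers are invented for illustration.
OFFERS = {
    ("f7826da6", 1001): "10% off garden tools today",
    ("f7826da6", 1002): "2-for-1 on paper towels, aisle 7",
}

def on_beacon_detected(uuid_prefix: str, minor: int) -> None:
    """Called by the platform's BLE layer when a known beacon comes into range."""
    offer = OFFERS.get((uuid_prefix, minor))
    if offer:
        print(f"Push notification: {offer}")  # stand-in for a real notification API

# Simulated detection event, e.g. walking past a store's entrance beacon:
on_beacon_detected("f7826da6", 1001)
```

In a real deployment the detection callback would come from the phone OS’s beacon-ranging APIs and the offer table would live server-side, but the shape of the logic is the same.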
“The ‘Internet of Things’ is happening quickly,” said 22-year-old Ashu Desai, whose Make School is teaching college and high-school students how to build apps. “We’ll see apps where your phone will know more and more about your surroundings. There will be a massive proliferation of sensors that will be everywhere so apps can send you the temperature of your hot tub, lock and unlock your doors, and turn on your stove remotely.”
In a way, the future of apps is already here, with an increasing number of them not on public view at the App Store but quietly being harnessed by teams within private companies and organizations, from giants like Salesforce to stage crews at musical venues to small enterprises like contractors and electricians.
Consultant Richard Carlton helps companies use programs like Apple-owned FileMaker to create their own proprietary apps that allow colleagues to collaborate on a shared database they can all access from their mobile devices.
“These mobile tools allow people who aren’t coders to build their own solutions and share them with their fellow employees,” he said. “For example, we’ve helped plumbers create apps they can use to update their work schedules on their phones. This software lets you sign contracts in the field, take photos and enter them in a database, or do property inspections. And this costs the company a quarter of what they’d pay to have a professional app developer do it.”
In other trends in the coming year, there will be more video ads playing on our smartphone screens, more crowdfunding to launch app startups and more developers leaving Google and Apple to become consultants who’ll build apps for corporate clients like The Home Depot.
“Every company out there is turning to mobile, whether it’s retail or airlines or real estate,” said Shravan Goli, president of the tech-jobs site Dice.com. With big companies storming into the market, the coming years will be tough for the independents, said Craig Hockenberry with app-design firm The Icon Factory.
“The people who want to survive solely off the puzzle game or the camera app are the ones having a problem right now,” he said. “When the App Store opened, our first app sold well because there wasn’t a lot of competition. We were a big fish in a small pond. Now the pond is more like an ocean.”
----
What’s the future of apps?
We asked five attendees at Apple’s annual Worldwide Developers Conference this week in San Francisco for their take on what’s ahead.
Jenna Hoffstein
Educational app developer, Boston
“We’ll see a broader use of apps in schools, supporting teachers and giving kids more engaging ways to learn math and science.”
Ashok Ramamoorthy
Product manager, India
“All your business will develop around your (enterprise) app. If you’re not taking advantage of that, you’re losing money.”
Igor Ievsiukov
Developer, Ukraine
“Apps will be smarter and they’ll distract the user less. Their functions will be more personalized and personalized more precisely.”
Amy Wardrop
Digital product manager, Sydney
“The future of apps is all about experiential, the actual experience of being human. Wearable health and fitness devices, for example, will provide personal analytics, with more layering of information from both humans and their environment.”
Ashish Singh
Developer, India
“Apps will become part of every aspect of our lives, with virtual-reality apps more prevalent. With an app and a pair of VR glasses, you’ll be able to virtually tour a property for sale, museums or vacation destinations.”

References: http://phys.org/