Live broadcasting app Periscope pops up on Android

Following the much-hyped iOS launch back in March, Twitter’s live broadcasting app Periscope has now landed on Android. Unveiled on Tuesday, the app carries the same functionality as its iOS sibling, but with a few minor differences unique to the Android platform.

When Periscope debuted earlier this year, it generated much discussion about the future of broadcasting. From that point on, anybody wielding an iOS device could stream the action live from their camera to anybody willing to tune in.

Much like Twitter itself, it quickly became a popular tool for celebrities and was adopted by everybody from Jimmy Fallon to Ringo Starr. What’s more, it raised interesting questions about piracy, with this month’s Pay-Per-View Mayweather-Pacquiao bout beamed live to the smartphones of non-paying sports fans all around the world.

Android users running version 4.4 (KitKat) or later can now download the Periscope app for free from Google Play. As it does on iOS, the app integrates with Twitter, offering users a list of suggested accounts to follow the first time they sign in. The home screen displays live and recent streams from people you follow, along with featured streams suggested by the app.

A shiny red button at the bottom right of the screen starts a broadcast of your own, which can be either public or private, the latter streamed only to followers you select. Give the broadcast a title, tag the location if you wish and you’re away, bringing a summary of your lunch or a fire in Brooklyn live to the mobile screens of anybody who is interested.

In a blog post, Periscope’s developers note a few differences between the Android and iOS versions. Beyond an interface inspired by Material Design, Google’s visual language, Android users can configure the app to push a notification when somebody they follow on Twitter broadcasts for the first time, and another when somebody they follow shares somebody else’s broadcast. Another added feature is the ability to resume watching a broadcast from where you left off, should you be interrupted by a phone call or message.

References: http://www.gizmag.com/

Researchers may have discovered fountain of youth by reversing aging in human cells

Researchers in Japan have found that it may be possible to delay or even reverse human aging, at least at the most basic level of human cell lines. In the process, the scientists from the University of Tsukuba also found that the regulation of two genes is related to how we age.

The new findings challenge one of the currently popular theories of aging, which lays the blame for humans’ inevitable downhill slide on mutations that accumulate in our mitochondrial DNA over time. Mitochondria are sometimes likened to cellular “furnaces” that produce energy through cellular respiration. Damage to mitochondrial DNA results in changes or mutations in the DNA sequence that build up and are associated with familiar signs of aging like hair loss, osteoporosis and, of course, reduced lifespan.

So goes the theory, at least. But the Tsukuba researchers suggest that something else may be going on within our cells. Their research indicates that the issue may not be that mitochondrial DNA becomes damaged, but rather that genes get turned “off” or “on” over time. Most intriguingly, the team, led by Professor Jun-Ichi Hayashi, was able to flip the switches on a few genes back to their youthful position, effectively reversing the aging process.

The researchers came to this conclusion by comparing the mitochondrial function of fibroblast cell lines from children under 12 years of age with that of cell lines from elderly people between 80 and 97. As expected, the older cells had reduced cellular respiration, but they did not show more DNA damage than the cells from children. This discovery led the team to propose that the reduced cellular function is tied to epigenetic regulation: changes that alter the physical structure of DNA without affecting the DNA sequence itself, causing genes to be turned on or off. Unlike mutations that damage the sequence, as in the aforementioned theory of aging, epigenetic changes could potentially be reversed by genetically reprogramming cells to an embryonic stem cell-like state, effectively turning back the clock on aging.

For a broad comparison, imagine that a power surge hits your home’s electrical system. If the home is not properly wired, irreversible damage or even a fire may result. In another home, however, the same surge simply trips a switch in the circuit breaker box, and flipping that breaker back to the “on” position leaves everything operating as good as new. In essence, the Tsukuba team is proposing that our DNA may not become fried with age as previously thought, but rather simply requires someone to access its genetic breaker box to reverse aging.

To test the theory, the researchers identified two genes associated with mitochondrial function and experimented with turning them on or off. In doing so, they were able to create defects in, or restore, cellular respiration. These two genes regulate the production of glycine, an amino acid, in mitochondria, and in one of the more promising findings, a 97-year-old cell line had its cellular respiration restored after glycine was added for 10 days.

The researchers’ findings were published this month in the journal Scientific Reports.

Whether this process could be a fountain of youth for humans, and not just for human fibroblast cell lines, remains to be seen, and much more testing is required. However, if the theory holds, glycine supplements could one day become a powerful tool for life extension.

Similar research from the Salk Institute has also recently looked at other ways to slow down or stop aging at a cellular level, while yet another team is looking into a new class of drugs called senolytics that could help slow aging.

References: http://www.gizmag.com/

New algorithm lets autonomous robots divvy up assembly tasks

Today’s industrial robots are remarkably efficient—as long as they’re in a controlled environment where everything is exactly where they expect it to be.

But put them in an unfamiliar setting, where they have to think for themselves, and their efficiency plummets. And the difficulty of on-the-fly motion planning increases exponentially with the number of robots involved. For even a simple collaborative task, a team of, say, three autonomous robots might have to think for several hours to come up with a plan of attack.

This week, at the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation, a group of MIT researchers was nominated for two best-paper awards for a new algorithm that can significantly reduce robot teams’ planning time. The plan the algorithm produces may not be perfectly efficient, but in many cases the savings in planning time will more than offset the added execution time.

The researchers also tested the viability of their algorithm by using it to guide a crew of three robots in the assembly of a chair.

“We’re really excited about the idea of using robots in more extensive ways in manufacturing,” says Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science, whose group developed the new algorithm. “For this, we need robots that can figure things out for themselves more than current robots do. We see this algorithm as a step in that direction.”

Rus is joined on the paper by three researchers in her lab: first author Mehmet Dogar, a postdoc, and Andrew Spielberg and Stuart Baker, both graduate students in electrical engineering and computer science.

Grasping consequences

The problem the researchers address is one in which a group of robots must perform an assembly operation that has a series of discrete steps, some of which require multirobot collaboration. At the outset, none of the robots knows which parts of the operation it will be assigned: Everything’s determined on the fly.

Computationally, the problem is already complex enough, given that at any stage of the operation, any of the robots could perform any of the actions, and during the collaborative phases, they have to avoid colliding with each other. But what makes planning really time-consuming is determining the optimal way for each robot to grasp each object it’s manipulating, so that it can successfully complete not only the immediate task, but also those that follow it.

“Sometimes, the grasp configuration may be valid for the current step but problematic for the next step because another robot or sensor is needed,” Rus says. “The current grasping formation may not allow room for a new robot or sensor to join the team. So our solution considers a multiple-step assembly operation and optimizes how the robots place themselves in a way that takes into account the entire process, not just the current step.”

The key to the researchers’ algorithm is that it defers its most difficult decisions about grasp position until it’s made all the easier ones. That way, it can be interrupted at any time, and it will still have a workable assembly plan. If it hasn’t had time to compute the optimal solution, the robots may on occasion have to drop and regrasp the objects they’re holding. But in many cases, the extra time that takes will be trivial compared to the time required to compute a comprehensive solution.

Principled procrastination

The algorithm begins by devising a plan that completely ignores the grasping problem. This is the equivalent of a plan in which all the robots would drop everything after every stage of the assembly operation, then approach the next stage as if it were a freestanding task.

Then the algorithm considers the transition from one stage of the operation to the next from the perspective of a single robot and a single part of the object being assembled. If it can find a grasp position for that robot and that part that will work in both stages of the operation, but which won’t require any modification of any of the other robots’ behavior, it will add that grasp to the plan. Otherwise, it postpones its decision.

Once it’s handled all the easy grasp decisions, it revisits the ones it’s postponed. Now, it broadens its scope slightly, revising the behavior of one or two other robots at one or two points in the operation, if necessary, to effect a smooth transition between stages. But again, if even that expanded scope proves too limited, it defers its decision.

If the algorithm were permitted to run to completion, its last few grasp decisions might require the modification of every robot’s behavior at every step of the assembly process, which can be a hugely complex task. It will often be more efficient to just let the robots drop what they’re holding a few times rather than to compute the optimal solution.
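
The paper’s actual planner is not reproduced here, but the deferral strategy described above can be sketched in a few lines of Python. Everything below is an illustrative assumption: the step labels, the (robot, part) assignments and the pick_grasp helper stand in for the real grasp-feasibility computation, which in practice involves geometric and collision checks.

# Hypothetical sketch of the deferred-decision idea described above;
# data structures and helper names are illustrative, not the authors' code.

def plan_grasps(steps, assignments, pick_grasp, scopes=(0, 1, 2)):
    """Choose a grasp for each (robot, part) assignment at each step.

    steps       -- ordered list of step labels
    assignments -- dict: step -> list of (robot, part) pairs handled there
    pick_grasp  -- callable (robot, part, step, next_step, scope) returning a
                   grasp that works across both steps, or None; `scope` is how
                   many *other* robots we are allowed to re-plan to make it fit
    Returns the plan plus the decisions left unresolved ("drop and regrasp").
    """
    plan = {}
    pending = [(robot, part, step, nxt)
               for step, nxt in zip(steps, steps[1:] + [None])
               for robot, part in assignments.get(step, [])]

    # Take the easy decisions first (scope 0), then widen the scope and
    # revisit whatever was postponed. The loop can be cut off at any point:
    # unresolved grasps simply become drop-and-regrasp moves between steps.
    for scope in scopes:
        still_pending = []
        for robot, part, step, nxt in pending:
            grasp = pick_grasp(robot, part, step, nxt, scope)
            if grasp is not None:
                plan[(robot, part, step)] = grasp
            else:
                still_pending.append((robot, part, step, nxt))
        pending = still_pending

    return plan, pending

# Toy usage: one robot holding the seat across two steps, with a grasp that
# only becomes feasible once we may re-plan one other robot (scope 1).
steps = ["hold_seat", "attach_back"]
assignments = {"hold_seat": [("r1", "seat")], "attach_back": [("r1", "seat")]}
def pick_grasp(robot, part, step, nxt, scope):
    return "side_grasp" if scope >= 1 else None
print(plan_grasps(steps, assignments, pick_grasp))
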
In addition to their experiments with real robots, the researchers also ran a host of simulations involving more complex assembly operations. In some, they found that their algorithm could, in minutes, produce a workable plan that involved just a few drops, where the optimal solution took hours to compute. In others, the optimal solution was intractable; it would have taken millennia to compute. But their algorithm could still produce a workable plan.

“With an elegant heuristic approach to a complex planning problem, Rus’s group has shown an important step forward in multirobot cooperation by demonstrating how three mobile arms can figure out how to assemble a chair,” says Bradley Nelson, a professor of robotics and intelligent systems at the Swiss Federal Institute of Technology in Zurich. “My biggest concern about their work is that it will ruin one of the things I like most about Ikea furniture: assembling it myself at home.”

References: http://phys.org/

The Future of Your PC’s Hardware

Since the dawn of electronics, we’ve had only three types of circuit components: resistors, inductors, and capacitors. But in 1971, UC Berkeley researcher Leon Chua theorized the possibility of a fourth type of component, one that would be able to measure the flow of electric current: the memristor. Now, just 37 years later, Hewlett-Packard has built one.
What is it? As its name implies, the memristor can “remember” how much current has passed through it. And by alternating the amount of current that passes through it, a memristor can also become a one-element circuit component with unique properties. Most notably, it can save its electronic state even when the current is turned off, making it a great candidate to replace today’s flash memory.
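
HP’s actual device physics isn’t detailed in this article, but the frequently cited linear ion-drift model gives a feel for that “memory” behavior: the device’s internal state integrates the current that has flowed through it, and its resistance depends on that state. The Python sketch below uses that model with illustrative, made-up parameter values.

# Minimal sketch of a linear ion-drift memristor model; parameter values are
# illustrative only, not HP's device data.
R_ON, R_OFF = 100.0, 16000.0    # ohms: resistance when fully doped / undoped
D = 10e-9                       # metres: thickness of the device
MU_V = 1e-14                    # dopant mobility (m^2 s^-1 V^-1)
w = 0.1 * D                     # metres: initial width of the doped region

def step(current, dt):
    """Advance the internal state by one time step and return the resistance."""
    global w
    w += MU_V * (R_ON / D) * current * dt         # state integrates the current
    w = min(max(w, 0.0), D)                       # clamp to the physical device
    return R_ON * (w / D) + R_OFF * (1 - w / D)   # memristance depends on w

# Drive current through the device for a while, then stop: the resistance it
# ends up at persists at zero current, which is what makes it non-volatile.
for _ in range(5000):
    resistance = step(current=1e-3, dt=1e-5)
print(f"resistance after writing: {resistance:.0f} ohms")

Reading a stored value is then just a matter of measuring the resistance with a current small enough not to disturb the state.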

Memristors will theoretically be cheaper and far faster than flash memory, and allow far greater memory densities. They could also replace RAM chips as we know them, so that, after you turn off your computer, it will remember exactly what it was doing when you turn it back on, and return to work instantly. This lowering of cost and consolidating of components may lead to affordable, solid-state computers that fit in your pocket and run many times faster than today’s PCs.

Someday the memristor could spawn a whole new type of computer, thanks to its ability to remember a range of electrical states rather than the simplistic “on” and “off” states that today’s digital processors recognize. By working with a dynamic range of data states in an analog mode, memristor-based computers could be capable of far more complex tasks than just shuttling ones and zeroes around.
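
To make the “range of states” point concrete, here is a small back-of-the-envelope calculation, not from the article: a cell that can reliably distinguish N levels stores log2(N) bits, so multi-level elements need far fewer cells for the same data than strictly binary ones.

# Rough arithmetic only: bits per cell grows with the number of
# distinguishable levels, so fewer multi-level cells are needed per kilobyte.
import math

for levels in (2, 4, 16, 256):
    bits_per_cell = math.log2(levels)
    cells_per_kilobyte = math.ceil(8192 / bits_per_cell)   # 8192 bits in 1 KB
    print(f"{levels:>3} levels -> {bits_per_cell:4.1f} bits/cell, "
          f"{cells_per_kilobyte:>4} cells per kilobyte")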

When is it coming? Researchers say that no real barrier prevents implementing the memristor in circuitry immediately. But it’s up to the business side to push products through to commercial reality. Memristors made to replace flash memory (at a lower cost and lower power consumption) will likely appear first; HP’s goal is to offer them by 2012. Beyond that, memristors will likely replace both DRAM and hard disks in the 2014-to-2016 time frame. As for memristor-based analog computers, that step may take 20-plus years.

32-Core CPUs From Intel and AMD

If your CPU has only a single core, it’s officially a dinosaur. In fact, quad-core computing is now commonplace; you can even get laptop computers with four cores today. But we’re really just at the beginning of the core wars: Leadership in the CPU market will soon be decided by who has the most cores, not who has the fastest clock speed.
What is it? With the gigahertz race largely abandoned, both AMD and Intel are trying to pack more cores onto a die in order to continue to improve processing power and aid with multitasking operations. Miniaturizing chips further will be key to fitting these cores and other components into a limited space. Intel will roll out 32-nanometer processors (down from today’s 45nm chips) in 2009.

When is it coming? Intel has been very good about sticking to its road map. A six-core CPU based on the Itanium design should be out imminently, after which Intel will shift focus to a brand-new architecture called Nehalem, to be marketed as Core i7. Core i7 will feature up to eight cores, with eight-core systems available in 2009 or 2010. (And an eight-core AMD project called Montreal is reportedly on tap for 2009.)

After that, the timeline gets fuzzy. Intel reportedly canceled a 32-core project called Keifer, slated for 2010, possibly because of its complexity (the company won’t confirm this, though). That many cores requires a new way of dealing with memory; apparently you can’t have 32 brains pulling out of one central pool of RAM. But we still expect cores to proliferate when the kinks are ironed out: 16 cores by 2011 or 2012 is plausible (when transistors are predicted to drop again in size to 22nm), with 32 cores by 2013 or 2014 easily within reach. Intel says “hundreds” of cores may come even farther down the line.
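
The “one central pool of RAM” concern maps onto how software has to be written for many cores: work scales best when each core operates mostly on its own slice of the data rather than contending for one shared structure. Here is a toy Python sketch of that partition-then-combine pattern; the workload is invented purely for illustration.

# Toy partition-then-combine pattern: each worker process gets its own slice
# of the data, so the cores are not all contending for one shared structure.
from multiprocessing import Pool
import os

def partial_sum(chunk):
    # Each worker touches only its own slice of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    cores = os.cpu_count() or 4
    chunk_size = len(data) // cores + 1
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool(processes=cores) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)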

References: http://www.pcworld.com