How Computers Can Teach Themselves to Recognize Cats


In June 2012, a network of 16,000 computer processors trained itself to recognize a cat by looking at 10 million images from YouTube videos. Today, the technique is used in everything from Google image searches to Facebook’s newsfeed algorithms.

The feline recognition feat was accomplished using “deep learning,” an approach to machine learning that works by exposing a computer program to a large set of raw data and having it discover more and more abstract concepts. “What it’s about is allowing the computer to learn how to represent information in a more meaningful way, and doing so at several levels of representation,” said Yoshua Bengio, a computer scientist at the University of Montreal in Canada, who co-authored an article on the subject, published today (May 27) in the journal Nature.

“There are many ways you can represent information, some of which allow a human decision maker to make a decision more easily,” Bengio told Live Science. For example, when light hits a person’s eye, the photons stimulate neurons in the retina to fire, sending signals to the brain’s visual cortex, which perceives them as an image. This image in the brain is abstract, but it’s a more useful representation for making decisions than a collection of photons.
Similarly, deep learning allows a computer (or set of computers) to take a bunch of raw data — in the form of pixels on a screen, for example — and construct higher and higher levels of abstraction. It can then use these abstract concepts to make decisions, such as whether a picture of a furry blob with two eyes and whiskers is a cat.
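The idea of building higher and higher levels of abstraction can be sketched in a few lines of code. The snippet below is a toy illustration, not an actual cat detector: each layer re-represents its input, and the layer sizes and the “edges/parts/cat” labels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    # Each layer re-represents its input: a linear map followed by a
    # nonlinearity, so stacked layers can build increasingly abstract features.
    return np.maximum(0.0, x @ weights)  # ReLU nonlinearity

# Hypothetical sizes: a 64-pixel image re-represented at three levels,
# ending in a single "cat vs. not cat" score.
pixels = rng.random(64)             # raw data: pixel intensities
w1 = rng.standard_normal((64, 32))  # pixels -> edge-like features (level 1)
w2 = rng.standard_normal((32, 16))  # edges -> part-like features (level 2)
w3 = rng.standard_normal((16, 1))   # parts -> a "cat" score (level 3)

h1 = layer(pixels, w1)
h2 = layer(h1, w2)
score = layer(h2, w3)
```

In a real deep-learning system the weights are not random; they are learned from millions of examples, which is what lets the intermediate representations become meaningful.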

“Think of a child learning,” Bengio said. “Initially, the child may see the world in a very simple way, but at some point, the child’s brain clicks, and she discovers an abstraction.” The child can use that abstraction to learn other abstractions, he added.

The self-learning approach has led to dramatic advances in speech- and image-recognition software. It is used in many Internet and mobile phone products, and even self-driving cars, Bengio said.

Deep learning is an important part of many forms of “weak” artificial intelligence, nonsentient intelligence focused on a narrow task, but it could become a component of “strong” artificial intelligence — the kind of AI depicted in movies like “Ex Machina” and “Her.”

But Bengio doesn’t share the fears about strong AI that billionaire entrepreneur Elon Musk, world-famous physicist Stephen Hawking and others have been voicing.

“I do subscribe to the idea that, in some undetermined future, AI could be a problem,” Bengio said, “but we’re so far from [strong AI taking over] that it’s not going to be a problem.”

However, he said there are more immediate issues to be concerned about, such as how AI will impact personal privacy and the job market. “They’re less sexy, but these are the questions that should be up for debate,” Bengio said.

References: http://www.livescience.com/

MIT’s robotic cheetah can now leap over obstacles


The last time we heard from the researchers working on MIT’s robotic cheetah project, they had untethered their machine to let it bound freely across the campus lawns. Wireless and with a new spring in its step, the robot hit speeds of 10 mph (16 km/h) and could jump 13 in (33 cm) into the air. The quadrupedal robot has now been given another upgrade in the form of a LIDAR system and special algorithms, allowing it to detect and leap over obstacles in its path.

MIT’s robotic cheetah project has been in the works for a few years now. The team’s view is that the efficiency with which Earth’s fastest animal goes about its business holds many lessons for the world of robotic engineering. This line of thinking has inspired other like-minded projects, with DARPA and Boston Dynamics both working on robotic cheetahs of their own.

The MIT team says it has now trained the first four-legged robot capable of jumping over hurdles autonomously as it runs. With an onboard LIDAR system, the machine is now able to use reflections from a laser to map the terrain. This data is paired with a special algorithm that dictates the robot’s next moves.

The first part of this algorithm sees the robot identify an upcoming obstacle and determine both its size and the distance to it. The second part of the algorithm is what enables the robot to manage its approach, determining the best position from which to jump and safely make it over the top. The robot’s stride is adjusted if need be, speeding up or slowing down to take off from the ideal launch point. This algorithm works in around 100 milliseconds and is run on the fly, dynamically tuning the robot’s approach with every step.

Right as the robot goes to leave the ground, a third part of the algorithm helps it work out the optimal jumping trajectory. This involves taking the obstacle height and speed of approach to calculate how much force is required from its electric motors to propel it up and over the hurdle.
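The physics behind that third step can be illustrated with a point-mass ballistic model. This is a back-of-the-envelope sketch, not the robot’s real force computation; the 30 kg mass and 5 cm clearance margin are assumed values for illustration.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def min_launch_speed(obstacle_height, clearance=0.05):
    """Minimum vertical takeoff speed (m/s) to clear an obstacle,
    from the ballistic relation v^2 = 2 g h."""
    return math.sqrt(2.0 * G * (obstacle_height + clearance))

def required_impulse(mass, obstacle_height, clearance=0.05):
    """Impulse (N*s) the motors must deliver at takeoff:
    change in vertical momentum = mass * launch speed."""
    return mass * min_launch_speed(obstacle_height, clearance)

# e.g. the 45 cm hurdles from the treadmill tests, for a
# hypothetical 30 kg robot:
v = min_launch_speed(0.45)
impulse = required_impulse(30.0, 0.45)
```

Dividing that impulse by the brief ground-contact time of a running jump gives a sense of why the leap demands such large peak forces from the electric motors.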

Putting the cheetah’s new capabilities to the test, the team first set it down to run on a treadmill while tethered. Running at an average speed of 5 mph (8 km/h), the robot was able to clear obstacles up to 18 in (45 cm) with a success rate of around 70 percent. The cheetah was then unleashed onto an indoor test track, running freely with more space and longer approach times to prepare its jumps, clearing about 90 percent of obstacles.


“A running jump is a truly dynamic behavior,” says Sangbae Kim, assistant professor of mechanical engineering at MIT. “You have to manage balance and energy, and be able to handle impact after landing. Our robot is specifically designed for those highly dynamic behaviors.”

Kim and his team will now look to improve the robot further so that it can leap over obstacles on softer terrain such as grass. They will demonstrate the cheetah’s new capabilities at the DARPA Robotics Challenge in June. Gizmag will be trackside to bring you a closer look.

References: http://www.gizmag.com/

Mathematician designs social sustainability software


Edgar Antonio Valdés Porras has designed software and a service-oriented theoretical methodology to support urban sustainability which, if implemented, would strengthen economic impact points and infrastructure in Mexico and the Netherlands.

The Mexican specialist designs algorithms that solve problems of communication and interaction between various economic sectors, as a step toward implementing a system of software services.
Using the scrum methodology, he has developed a system that starts from the specific problems of users or residents and turns them into algorithms. “To create a network, points of impact are identified, their reach is analyzed, the services that can be applied are determined, and then their effectiveness is monitored.”
The network of services currently being designed by Valdés Porras facilitates the interconnection of government, technology and agricultural products. The Netherlands already has an effective network of communication and transport. One example is the port of Rotterdam, which is surrounded by unloading centers and warehouses that support its function. A system like the one proposed by the Mexican researcher would improve the efficiency of various production processes.
The mathematician works in the Netherlands developing software aimed at social sustainability and at cultural and agricultural programs, which help solve several problems by making various tools available to the population. Improving social programs, for example, helps to reduce vandalism.
To implement a sustainable service, research is required to map the location, its geographical qualities, its infrastructure and its population attributes, and to generate a base of technological and social services to support the strategy to be implemented. Each of these aspects corresponds to a network node, and together they form a micro-network that seeks to harness all resources efficiently to create sustainable cities. The entire process takes an average of four years.
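The node-and-micro-network idea can be sketched as a small graph. This is an illustrative toy only: the node names and edges are hypothetical, not taken from Valdés Porras’ actual system.

```python
# A micro-network as an adjacency map: each aspect of the study becomes a
# node, and an edge links an aspect to the aspects it feeds into.
micro_network = {
    "location_map":   ["geography", "infrastructure"],
    "geography":      ["infrastructure"],
    "infrastructure": ["services"],
    "population":     ["services"],
    "services":       [],  # the service base the strategy builds on
}

def reaches(graph, start, target):
    """Depth-first search: can `start` reach `target` in the graph?"""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return False

# A sanity check on the design: every aspect should ultimately feed the
# services layer.
all_connected = all(reaches(micro_network, n, "services") for n in micro_network)
```

A check like this is one way to verify that no node in the micro-network is isolated from the services the strategy depends on.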
He plans to bring the system to his homeland. He states that “one of the current problems in Mexico is the centralization of resources, which are distributed incorrectly (most are located in the capital, Mexico City). We need to organize different cities to take advantage of all resources. We need to look at microgrids and create sustainable cities that take advantage of the topology of the country.”
“We must solve the problem from the root, not with just a Band-aid. In the Netherlands, the range of possibilities is reviewed and then a decision is made. Mexico should do the same,” says Valdés Porras.

References: http://phys.org/

Live broadcasting app Periscope pops up on Android


Following the much-hyped iOS launch back in March, Twitter’s live broadcasting app Periscope has now landed on Android. Unveiled on Tuesday, the app carries the same functionality as its iOS sibling, but with a few minor differences unique to the Android platform.

When Periscope debuted earlier this year, it generated much discussion about the future of broadcasting. From that point, anybody wielding an iOS device could stream all the action live from their camera to anybody willing to tune in.

Much like Twitter itself, it quickly became a popular tool for celebrities and was adopted by everybody from Jimmy Fallon to Ringo Starr. What’s more, it raised interesting questions about piracy, with this month’s Pay-Per-View Mayweather-Pacquiao bout beamed live to the smartphones of non-paying sports fans all around the world.

Android users running version 4.4 (KitKat) or later can now freely download the Periscope app from Google Play. As it does on iOS, the app integrates with Twitter, offering users a list of suggested accounts to follow the first time they sign in. The home screen displays live and recent streams from people you follow, along with featured streams suggested by the app.

A shiny red button at the bottom right of the screen begins a broadcast of your own, which can be public or private, the latter streamed only to followers you select. Give the broadcast a title, tag the location if you wish, and you’re away, bringing a summary of your lunch or a fire in Brooklyn live to the mobile screens of anybody who is interested.

In a blog post, Periscope’s developers note a few differences between the Android and iOS versions. Beyond an interface inspired by Material Design, Google’s visual language, Android users can configure the app to push notifications when somebody they follow on Twitter broadcasts for the first time, or when somebody they follow shares somebody else’s broadcast. Another added feature is the ability to resume watching a broadcast from where you left off, should you be interrupted by a phone call or message.

References: http://www.gizmag.com/