NIST revises key computer security publication on random number generation


In response to public concerns about cryptographic security, the National Institute of Standards and Technology (NIST) has formally revised its recommended methods for generating random numbers, a crucial element in protecting private messages and other types of electronic data. The action implements changes to the methods that were proposed by NIST last year in a draft document issued for public comment.

The updated document, Recommendation for Random Number Generation Using Deterministic Random Bit Generators, describes algorithms that can be used to reliably generate random numbers, a key step in data encryption.

One of the most significant changes to the document is the removal of the Dual_EC_DRBG algorithm, often referred to colloquially as the “Dual Elliptic Curve” random number generator. The algorithm has been controversial because of concerns that it might contain a weakness that attackers could exploit to predict the outcome of random number generation. NIST continues to recommend the other three algorithms that were included in the previous version of the Recommendation, which was released in early 2012.

The revised version also contains several other notable changes. One concerns the CTR_DRBG—one of the three remaining random number algorithms—and allows additional options for its use. Another change recommends reintroducing randomness into deterministic algorithms as often as it is practical, because refreshing them provides additional protection against attack. The document also includes a link to examples that can help developers to implement the SP 800-90A random number generators correctly.
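
The implementation examples linked in the document cover the full, specified constructions. As a purely illustrative sketch of the reseeding principle in Python — a toy HMAC-based generator, not the SP 800-90A CTR_DRBG, with every name here hypothetical — a deterministic generator can periodically fold fresh entropy into its internal state:

```python
# Toy illustration of reseeding a deterministic bit generator.
# NOT the SP 800-90A CTR_DRBG construction; for intuition only.
import hmac, hashlib, os

class SimpleDRBG:
    def __init__(self, entropy: bytes):
        # Internal state derived from an initial entropy input.
        self._key = hashlib.sha256(entropy).digest()
        self._counter = 0

    def reseed(self, entropy: bytes) -> None:
        # Mix fresh entropy into the state so future output does not
        # depend solely on the original seed.
        self._key = hmac.new(self._key, entropy, hashlib.sha256).digest()
        self._counter = 0

    def generate(self, n_bytes: int) -> bytes:
        out = b""
        while len(out) < n_bytes:
            self._counter += 1
            out += hmac.new(self._key, self._counter.to_bytes(8, "big"),
                            hashlib.sha256).digest()
        # Advance the key after each request so past output cannot be
        # recomputed from a later state compromise.
        self._key = hmac.new(self._key, b"update", hashlib.sha256).digest()
        return out[:n_bytes]

drbg = SimpleDRBG(os.urandom(32))
print(drbg.generate(16).hex())
drbg.reseed(os.urandom(32))   # periodic reseeding, as the revision recommends
print(drbg.generate(16).hex())
```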

The revised publication reflects public comments received on a draft version, released late last year.

References: http://phys.org/

Throwable tactical camera gets commercial release


Unseen areas are troublesome for police and first responders: Rooms can harbor dangerous gunmen, while collapsed buildings can conceal survivors. Now Bounce Imaging, founded by an MIT alumnus, is giving officers and rescuers a safe glimpse into the unknown.

In July, the Boston-based startup will release its first line of tactical spheres, equipped with cameras and sensors, that can be tossed into potentially hazardous areas to instantly transmit panoramic images of those areas back to a smartphone.

“It basically gives a quick assessment of a dangerous situation,” says Bounce Imaging CEO Francisco Aguilar MBA ’12, who invented the device, called the Explorer.

Launched in 2012 with help from the MIT Venture Mentoring Service (VMS), Bounce Imaging will deploy 100 Explorers to police departments nationwide, with aims of branching out to first responders and other clients in the near future.

The softball-sized Explorer is covered in a thick rubber shell. Inside is a camera with six lenses, peeking out at different indented spots around the circumference, and LED lights. When activated, the camera snaps photos from all lenses, a few times every second. Software uploads these disparate images to a mobile device and stitches them together rapidly into full panoramic images. There are plans to add sensors for radiation, temperature, and carbon monoxide in future models.

For this first manufacturing run, the startup aims to gather feedback from police, who operate in what Aguilar calls a “reputation-heavy market.” “You want to make sure you deliver well for your first customer, so they recommend you to others,” he says.

Steered right through VMS

Over the years, the Explorer has drawn favorable coverage from Wired, the BBC, NBC, Popular Science, and Time—which named the device one of the best inventions of 2012. Bounce Imaging also earned top prizes at the 2012 MassChallenge Competition and the 2013 MIT IDEAS Global Challenge.

Instrumental in Bounce Imaging’s early development, however, was the VMS, which Aguilar turned to shortly after forming Bounce Imaging at the MIT Sloan School of Management. Classmate and U.S. Army veteran David Young MBA ’12 joined the project early to provide an end-user’s perspective.

“The VMS steered us right in many ways,” Aguilar says. “When you don’t know what you’re doing, it’s good to have other people who are guiding you and counseling you.”

Leading Bounce Imaging’s advisory team was Jeffrey Bernstein SM ’84, a computer scientist who had co-founded several tech startups—including PictureTel, launched directly out of graduate school with the late MIT professor David Staelin—before coming to VMS as a mentor in 2007.

Bernstein says that, over roughly two years, the VMS mentors helped Bounce Imaging navigate funding and partnering strategies, recruit a core team of engineers, and establish its first market, rather than focusing on technical challenges. “The particulars of the technology are usually not the primary areas of focus in VMS,” Bernstein says. “You need to understand the market, and you need good people.”

In that way, Bernstein adds, Bounce Imaging already had a leg up. “Unlike many ventures I’ve seen, the Bounce Imaging team came in with a very clear idea of what need they were addressing and why this was important for real people,” he says.

Bounce Imaging still reaches out to its VMS mentors for advice. Another “powerful resource for alumni companies,” Aguilar says, was a VMS list of previously mentored startups. Over the years, Aguilar has pinged that list for a range of advice, including on manufacturing and funding issues. “It’s such a powerful list, because MIT alumni companies are amazingly generous to each other,” Aguilar says.

The right first market

From a mentor’s perspective, Bernstein sees Bounce Imaging’s current commercial success as a result of “finding that right first market,” which helped it overcome early technical challenges. “They got a lot of really good customer feedback really early and formed a real understanding of the market, allowing them to develop a product without a lot of uncertainty,” he says.

Aguilar conceived of the Explorer after the 2010 Haiti earthquake, as a student at both MIT Sloan and the Kennedy School of Government at Harvard University. International search-and-rescue teams, he learned, could not easily find survivors trapped in the rubble, as they were using cumbersome fiber-optic cameras, which were difficult to maneuver and too expensive for wide use. “I started looking into low-cost, very simple technologies to pair with your smartphone, so you wouldn’t need special training or equipment to look into these dangerous areas,” Aguilar says.

The Explorer was initially developed for first responders. But after being swept up in a flurry of national and international attention from winning the $50,000 grand prize at the 2012 MassChallenge, Bounce Imaging started fielding numerous requests from police departments—which became its target market.

Months of rigorous testing with departments across New England led Bounce Imaging from a clunky prototype of the Explorer—“a Medusa of cables and wires in a 3D-printed shell that was nowhere near throwable,” Aguilar says—through about 20 further iterations.

But they also learned key lessons about what police needed. Among the most important lessons, Aguilar says, is that police are under so much pressure in potentially dangerous situations that they need something very easy to use. “We had loaded the system up with all sorts of options and buttons and nifty things—but really, they just wanted a picture,” Aguilar says.

Neat tricks

Today’s Explorer is designed with a few “neat tricks,” Aguilar says. First is a custom, six-lensed camera that pulls raw images from its lenses simultaneously into a single processor. This reduces complexity and cost compared with using six separate cameras.

The ball also acts as its own wireless hotspot, creating a network that a mobile device uses to quickly grab those images—“because a burning building probably isn’t going to have Wi-Fi, but we still want … to work with a first responder’s existing smartphone,” Aguilar says.

But the key innovation, Aguilar says, is the image-stitching software, developed by engineers at the Costa Rican Institute of Technology. The software’s algorithms, Aguilar says, vastly reduce computational load and work around noise and other image-quality problems. Because of this, it can stitch multiple images in a fraction of a second, compared with about one minute through other methods.
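
Bounce Imaging’s stitching algorithms are proprietary and have not been published. As a rough, generic illustration of the kind of operation involved, the sketch below uses OpenCV’s off-the-shelf stitcher to combine several overlapping frames into one panorama; the filenames are hypothetical, and a generic pipeline like this is exactly the slower kind of approach the team’s software improves on:

```python
# Generic panorama stitching with OpenCV, for illustration only.
# This is not Bounce Imaging's method; it just shows the class of
# operation the Explorer's software performs much faster.
import cv2

# Hypothetical filenames for six simultaneous captures from the ball's lenses.
frames = [cv2.imread(f"lens_{i}.jpg") for i in range(6)]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"Stitching failed with status {status}")
```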

In fact, after the Explorer’s release, Aguilar says Bounce Imaging may option its image-stitching technology for drones, video games, movies, or smartphone technologies. “Our main focus is making sure the [Explorer] works well in the market,” Aguilar says. “And then we’re trying to see what exciting things we can do with the imaging processing, which could vastly reduce computational requirements for a range of industries developing around immersive video.”

References: http://phys.org/

Computer vision and mobile technology could help blind people ‘see’


Computer scientists are developing new adaptive mobile technology which could enable blind and visually-impaired people to ‘see’ through their smartphone or tablet.

Funded by a Google Faculty Research Award, specialists in computer vision and machine learning based at the University of Lincoln, UK, are aiming to embed a smart vision system in mobile devices to help people with sight problems navigate unfamiliar indoor environments.

Based on preliminary work on assistive technologies done by the Lincoln Centre for Autonomous Systems, the team plans to use colour and depth sensor technology inside new smartphones and tablets, like the recent Project Tango by Google, to enable 3D mapping and localisation, navigation and object recognition. The team will then develop the best interface to relay that to users – whether that is vibrations, sounds or the spoken word.

Project lead Dr Nicola Bellotto, an expert on machine perception and human-centred robotics from Lincoln’s School of Computer Science, said: “This project will build on our previous research to create an interface that can be used to help people with visual impairments.

“There are many visual aids already available, from guide dogs to cameras and wearable sensors. Typical problems with the latter are usability and acceptability. If people were able to use technology embedded in devices such as smartphones, it would not require them to wear extra equipment which could make them feel self-conscious. There are also existing smartphone apps that are able to, for example, recognise an object or speak text to describe places. But the sensors embedded in the device are still not fully exploited. We aim to create a system with ‘human-in-the-loop’ that provides good localisation relevant to visually impaired users and, most importantly, that understands how people observe and recognise particular features of their environment.”

The research team, which includes Dr Oscar Martinez Mozos, a specialist in machine learning and quality of life technologies, and Dr Grzegorz Cielniak, who works in mobile robotics and machine perception, aims to develop a system that will recognise visual clues in the environment. This data would be detected through the device camera and used to identify the type of room as the user moves around the space.
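
The Lincoln system is still under development and its internals have not been published. The minimal sketch below only illustrates the general idea of the room-identification step described above, using a crude colour-histogram descriptor and a nearest-neighbour classifier; all data and labels are made up for illustration:

```python
# Illustrative sketch of "identify the type of room from camera frames".
# Assumed approach for explanation only, not the Lincoln team's system.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def colour_histogram(frame: np.ndarray, bins: int = 8) -> np.ndarray:
    """Very crude scene descriptor: a flattened per-channel colour histogram."""
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    return np.concatenate(hist).astype(float)

# Hypothetical training data: frames labelled by room type.
rng = np.random.default_rng(0)
train_frames = rng.integers(0, 256, size=(20, 64, 64, 3))
train_labels = ["kitchen", "corridor", "office", "bathroom"] * 5

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit([colour_histogram(f) for f in train_frames], train_labels)

# Classify a new frame as the user moves through the space.
new_frame = rng.integers(0, 256, size=(64, 64, 3))
print(clf.predict([colour_histogram(new_frame)]))
```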

A key aspect of the system will be its capacity to adapt to individual users’ experiences, modifying the guidance it provides as the machine ‘learns’ from its surroundings and from human interaction. So the more accustomed the user becomes to the technology, the quicker and easier it would be to identify the environment.
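
Again as an assumed illustration rather than the project’s actual method, an incremental (online) learner is one simple way such adaptation could work: each time the user confirms or corrects the system’s guess, that example is folded back into the model:

```python
# Sketch of online adaptation with incremental updates (assumed approach).
import numpy as np
from sklearn.linear_model import SGDClassifier

rooms = ["kitchen", "corridor", "office"]
clf = SGDClassifier()

# Initial fit on a small batch of labelled feature vectors (made-up data).
X0 = np.random.rand(30, 24)
y0 = np.random.choice(rooms, size=30)
clf.partial_fit(X0, y0, classes=rooms)

# Later, each time the user confirms where they actually are,
# fold that single example back into the model.
x_new, confirmed_label = np.random.rand(1, 24), "office"
clf.partial_fit(x_new, [confirmed_label])
```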

References: http://phys.org/

I always feel like somebody’s watching me…


What power can individuals have over their data when their every move online is being tracked? Researchers at the Cambridge Computer Laboratory are building new systems that shift the power back to individual users, and could make personal data faster to access and at much lower cost.

It’s a fact of modern life – with every click, every tweet, every Facebook Like, we hand over information about ourselves to organisations that are desperate to know all of our secrets, in the hope that those secrets can be used to sell us something.

Companies have been collecting every possible scrap of information from their customers since long before the internet age, but with more powerful computers, cheaper storage and ubiquitous online use, the methods organisations use to gather information about people have become ever-more sophisticated. And sometimes those organisations know us better than our own families or friends.

For example, several years ago, data analysis tools used by the US retailer Target had become so precise that they were able to determine, with astonishing accuracy, whether a woman was pregnant and how far along she was, based on her purchase of certain products. And in one particularly embarrassing incident, Target knew that a teenage girl was pregnant before her father did, much to her father’s displeasure.

“What Target learned from that incident is that marketing too accurately can really make people squeamish,” says Professor Jon Crowcroft of the University’s Computer Laboratory. “But if they made their marketing a little less accurate by increasing the amount of privacy they give their customers, they found they can still retain or increase their customer base without making people feel as if they’re being spied on.”

Crowcroft’s research is in the area of ‘privacy by design’ – systems that allow us to live in the digital world and protect our privacy at the same time. As the concept of the Internet of Things – internet-connected washing machines, toasters and televisions – becomes reality, Crowcroft insists that privacy by design is needed to address the massive power imbalance that occurs when our personal data is shared with, and sold by, corporations, governments and other organisations.

But privacy by design doesn’t mean disconnecting from the online world and putting on a tinfoil hat – far from it. “There’s already a lot of data stored about each and every one of us – the things we buy, the food we eat, the health issues we have – and for each of these market segments, there are perfectly legitimate uses for that data,” adds Crowcroft. “Collecting healthcare data is fantastically useful for tracking pandemics, preventative care, more efficient treatment, public health – those are all perfectly reasonable and positive uses for big data. At the same time, most sites gather information in order to target ads more accurately, and most people are actually okay with that. So the question then becomes, what is privacy by design?”

“What we’re trying to do is develop processing frameworks that would allow this data to be useful and to be used, without the somewhat creepy feeling that you’re constantly being watched,” says Crowcroft’s colleague Dr Richard Mortier.

The type of system that Crowcroft and Mortier envision is one in which the user has the scope to allow access to their data on a case-by-case basis, rather than having it harvested whether they like it or not: computations are performed where the data is gathered, and the results are pushed back to the organisation that wants the data.

“We can change the big data problem completely by moving where the data is processed,” explains Mortier. “Rather than having systems where all of the data is gathered in some huge central location and processed, if you reconstruct the system so that the data is processed in the same place it’s gathered, individuals would be able to take some of the control of their information back from corporations and surveillance organisations. Instead of one huge central processing node, we want to see billions of smaller nodes, which would make information quicker to access, and could potentially be stored at lower overall cost.”

Crowcroft and Mortier have designed and partially built systems where a person’s data stays local to them, and they can have the option to decide what is shared and with whom. For example, a patient can share their healthcare data with their GP, but the GP would have to get authorisation from the patient before sharing that data with a pharmaceutical company.
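
As a minimal sketch of this idea — an assumed design, not the Cambridge researchers’ actual system, with every name here hypothetical — the raw data could stay on the user’s own node, with only the results of explicitly authorised computations returned to the requesting organisation:

```python
# Minimal sketch: raw data stays local; only approved, aggregated results leave.
from statistics import mean

class PersonalDataNode:
    def __init__(self, owner: str, records: list[float]):
        self._owner = owner
        self._records = records          # raw data never leaves this node
        self._approved = set()           # queries the owner has authorised

    def authorise(self, query_name: str) -> None:
        self._approved.add(query_name)

    def run_query(self, query_name: str) -> float:
        if query_name not in self._approved:
            raise PermissionError(f"{self._owner} has not authorised '{query_name}'")
        if query_name == "average":
            return mean(self._records)   # only the aggregate is returned
        raise ValueError(f"Unknown query '{query_name}'")

# A GP (or any organisation) gets results only after the owner authorises them.
node = PersonalDataNode("alice", [7.2, 6.8, 7.5])
node.authorise("average")
print(node.run_query("average"))         # aggregate only; raw records stay put
```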

“People realise they’re being marketed to, but I don’t think they realise the scale of it – it really is a hidden menace,” says Crowcroft. “The point is that we could build systems that could stop that completely, and re-enable it on the basis of a level playing field. We want to see systems where people have agency over their data, giving them the ability to allow or prevent certain types of access.”

Contrary to what some people may assume about the nature of digital life, adds Crowcroft, the vast majority of people highly value their own privacy. He points to the launch and subsequent withdrawal of Google Glass, a head-mounted computer worn like eyeglasses. “People started wearing these things into restaurants and other diners wouldn’t put up with it, because they didn’t want to be recorded while eating their lunch – it really creeped people out,” he says.

“And that’s in a public space: imagine the same sort of thing happening in a private space. It’s about the asymmetry and the idea that this is being done to you and you have no comeback. The problem with digital infrastructures is you don’t see them, and to a certain extent companies depend on people not understanding them – we can build systems where there are mechanisms through which they can be understood.”

Crowcroft and Mortier recognise that they’ll never convince everyone to ditch cloud computing and switch to a decentralised system. But that isn’t their goal. “It takes a while to show that new ways of doing things can really work,” says Crowcroft. “If these sorts of systems become a reasonably widely used alternative, it will go a long way towards keeping companies and cloud storage providers honest. The very small number of providers leads to the exploitation of the network effect, where they have a strong monopolistic position over a certain type of data. And monopolies are not good for economies. If a decentralised system is more ethical, enough people using it may incentivise the big providers to be more ethical too.”

References: http://phys.org/