How computers are learning to make human software work more efficiently


Computer scientists have a history of borrowing ideas from nature, such as evolution. When it comes to optimising computer programs, a very interesting evolution-inspired approach has emerged over the past five or six years that could bring incalculable benefits to industry and, eventually, consumers. We call it genetic improvement.

Genetic improvement involves writing an automated “programmer” that manipulates the source code of a piece of software through trial and error, with a view to making it work more efficiently. This might include swapping lines of code around, deleting lines and inserting new ones – very much like a human programmer. Each manipulation is then tested against some quality measure to determine whether the new version of the code is an improvement over the old one. It is about taking large software systems and altering them slightly to achieve better results.
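To make the loop concrete, here is a minimal sketch in Python of the trial-and-error process just described. The mutate and fitness names are assumptions made for illustration only – they are not part of any system mentioned in this article – and real genetic-improvement tools work on parsed program representations rather than raw text lines.

import random

def mutate(source_lines):
    # Apply one random edit: delete a line, swap two lines, or copy a line elsewhere.
    candidate = list(source_lines)
    op = random.choice(["delete", "swap", "copy"])
    i = random.randrange(len(candidate))
    j = random.randrange(len(candidate))
    if op == "delete" and len(candidate) > 1:
        del candidate[i]
    elif op == "swap":
        candidate[i], candidate[j] = candidate[j], candidate[i]
    else:
        candidate.insert(j, candidate[i])
    return candidate

def improve(source_lines, fitness, generations=1000):
    # Keep a mutated version only if it scores at least as well on the quality
    # measure (for example: number of tests passed, minus a runtime penalty).
    best = list(source_lines)
    best_score = fitness(best)
    for _ in range(generations):
        candidate = mutate(best)
        score = fitness(candidate)
        if score >= best_score:
            best, best_score = candidate, score
    return best

The caller supplies the fitness function; in practice it would compile the candidate, run a test suite and measure whatever property is being optimised.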

The benefits

These interventions can bring a variety of benefits in the realm of what programmers describe as the functional properties of a piece of software. They might improve how fast a program runs, for instance, or remove bugs. They can also be used to help transplant old software to new hardware.
The potential does not stop there. Because genetic improvement operates on source code, it can also improve the so-called non-functional properties. These include all the features that are not concerned purely with the input-output behaviour of programs, such as the amount of bandwidth or energy that the software consumes. These are often particularly tricky for a human programmer to deal with, given the already challenging problem of building correctly functioning software in the first place.
We have seen a few examples of genetic improvement beginning to be recognised in recent years – albeit still within universities for the moment. A good early one dates from 2009, when such an automated “programmer”, built by the University of New Mexico and the University of Virginia, fixed 55 out of 105 bugs in various kinds of software, ranging from a media player to a Tetris game. For this it won $5,000 (£3,173) and a Gold Humie Award, which is given for human-competitive results produced by genetic and evolutionary computation.
In the past year, UCL in London has overseen two research projects that have demonstrated the field’s potential (full disclosure: both have involved co-author William Langdon). The first involved a genetic-improvement program that could take a large, complex piece of software with more than 50,000 lines of code and speed up its operation by a factor of 70.
The second carried out the first automated wholesale transplant of one piece of software into a larger one by taking a linguistic translator called Babel and inserting it into an instant-messaging system called Pidgin.

Nature and computers

To understand the scale of the opportunity, you have to appreciate that software is a unique engineering material. In other areas of engineering, such as electrical and mechanical engineering, you might build a computational model before you build the final product, since it allows you to push your understanding and test a particular design. On the other hand, software is its own model. A computational model of software is still a computer program. It is a true representation of the final product, which maximises your ability to optimise it with an automated programmer.

As we mentioned at the beginning, there is a rich tradition of computer scientists borrowing ideas from nature. Nature inspired genetic algorithms, for example, which crunch through the millions of possible answers to a real-life problem with many variables to come up with the best one. Examples include anything from devising a wholesale road distribution network to fine-tuning the design of an engine.
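As a rough illustration of what such an algorithm does, the following minimal Python sketch shows the selection-crossover-mutation loop. The real-valued parameter encoding and the fitness function (say, a simulator scoring an engine design) are assumptions made purely for this example.

import random

def genetic_algorithm(fitness, n_params, pop_size=50, generations=200):
    # Assumes n_params >= 2 and pop_size >= 4. Start from a random population
    # of candidate parameter vectors; higher fitness is better.
    population = [[random.uniform(-1.0, 1.0) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # rank candidates, best first
        parents = population[: pop_size // 2]        # selection: keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)      # one-point crossover
            child = a[:cut] + b[cut:]
            k = random.randrange(n_params)           # small random mutation
            child[k] += random.gauss(0.0, 0.1)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)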

Though the evolution metaphor has become something of a millstone in this context, genetic algorithms have had a number of successes, producing results that are comparable with, or even better than, those of human programmers.

Evolution also inspired genetic programming, which attempts to build programs from scratch using small sets of instructions. It is limited, however. One of its many criticisms is that it cannot even evolve the sort of program that would typically be expected of a first-year undergraduate, and will not therefore scale up to the huge software systems that are the backbone of large multinationals.

This makes genetic improvement a particularly interesting offshoot of the discipline. Instead of trying to write a whole program from scratch, it succeeds by making small numbers of tiny changes. Nor does the approach have to confine itself to improvement as such: the Babel/Pidgin example showed that it can extend to transplanting a piece of software into a larger program, in much the same way that surgeons transplant body organs from donors to recipients. This is a reminder that the overall goal is automated software engineering. Whatever nature can teach us when it comes to developing this fascinating new field, we should grab it with both hands.

References: http://phys.org/

 

Solar-powered hydrogen generation using two of the most abundant elements on Earth


By smoothing the surface of hematite, a team of researchers has achieved “unassisted” water splitting, using the abundant rust-like mineral together with silicon to capture and store solar energy in the form of hydrogen gas.

One potential clean energy future requires an economical, efficient, and relatively simple way to generate copious amounts of hydrogen for use in fuel cells and hydrogen-powered vehicles. Hydrogen is often produced by using electricity to split water molecules into hydrogen and oxygen; ideally, that electricity would be generated directly from sunlight, with no external power source. Hematite – a mineral form of iron oxide – used in conjunction with silicon has shown some promise in this area, but low conversion efficiencies have slowed research. Now scientists have discovered a way to make substantial improvements, raising hopes of using two of the most abundant elements on Earth to produce hydrogen efficiently.

Hematite holds potential for use in low-power photoelectrochemical water splitting (where energy, in the form of light, is the input and chemical energy is the output) to release hydrogen, due to its low turn-on voltage of less than 0.3 volts when exposed to sunlight. Unfortunately, that voltage is too low to initiate water splitting on its own, so a number of improvements to the surface of hematite have been sought to improve current flow.

In this vein, researchers from Boston College, UC Berkeley, and China’s University of Science and Technology have hit upon the technique of “re-growing” the hematite, so that a smoother surface is obtained along with a higher energy yield. In fact, this new version has doubled the electrical output, and moved one step closer to enabling practical, large-scale energy-harvesting and hydrogen generation.

“By simply smoothing the surface characteristics of hematite, this close cousin of rust can be improved to couple with silicon, which is derived from sand, to achieve complete water splitting for solar hydrogen generation,” said Boston College associate professor of chemistry Dunwei Wang. “This unassisted water splitting, which is very rare, does not require expensive or scarce resources.”

Building on previous work that realized gains in photoelectrochemical turn-on voltage from the use of smooth surface coatings, the team re-assessed the hematite’s surface structure using a synchrotron at the Lawrence Berkeley National Laboratory. Focusing on correcting the hematite’s surface deficiencies to see if this would yield improvements, the researchers used physical vapor deposition to layer hematite onto a borosilicate glass substrate and create a photoanode. They then baked the devices to produce a thin, even film of iron oxide across their surfaces.

Subsequent tests of this new amalgam resulted in an immediate improvement in turn-on voltage, and a substantial increase in photovoltage from 0.24 volts to 0.80 volts. Whilst this new hydrogen harvesting process only realized an efficiency of 0.91 percent, it is the very first time that the combination of hematite and amorphous silicon has been shown to produce any meaningful efficiencies of conversion at all.
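For context, conversion figures of this kind are usually quoted as solar-to-hydrogen (STH) efficiency. Assuming that is what the 0.91 percent refers to, the usual definition relates the photocurrent delivered with no external bias to the incident solar power via the 1.23 V thermodynamic potential of water splitting:

\eta_{\mathrm{STH}} = \frac{j_{\mathrm{op}} \times 1.23\,\mathrm{V} \times \eta_{F}}{P_{\mathrm{in}}}

where j_op is the operating photocurrent density (mA/cm²), η_F is the Faradaic efficiency for hydrogen evolution, and P_in is the incident solar power, typically 100 mW/cm² under standard illumination.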

As a result, this research marks progress towards photoelectrochemical energy harvesting that is entirely self-sufficient, uses abundantly available materials, and is easy to produce.

“This offers new hope that efficient and inexpensive solar fuel production by readily available natural resources is within reach,” said Wang. “Getting there will contribute to a sustainable future powered by renewable energy.”

References: http://www.gizmag.com/

Human Organs-on-Chips wins Design of the Year 2015


The winner and all of the nominated projects are currently on display at the Design Museum until March 31, 2016

A micro-device lined with living human cells able to mimic the function of living organs has been declared the overall winner of the Design Museum’s Design of the Year Award for 2015.

Something of a departure from last year’s winner, the Heydar Aliyev Center by Zaha Hadid, Human Organs-on-Chips is the competition’s first winner from the field of medicine in its eight-year history. Designed by Donald Ingber and Dan Dongeun Huh at Harvard University’s Wyss Institute, the Human Organs-on-Chips project comprises a series of chips that mimic real human organs, including a lung-on-a-chip and a gut-on-a-chip.

As we previously reported, the research could prove beneficial in evaluating the safety and efficacy of potential medical treatments, in addition to lessening demands on animal testing, accelerating drug discovery, and decreasing development and treatment costs.


“One of the most important things about the Designs of the Year award is the chance that it gives the museum to explore new territory,” says London’s Design Museum Director, Deyan Sudjic. “The team of scientists that produced this remarkable object don’t come from a conventional design background. But what they have done is clearly a brilliant piece of design.”

The winner and all of the nominated projects are currently on display at the Design Museum until March 31, 2016.

References: http://www.gizmag.com/

Data transfer technology that increases speed of remote file access


Fujitsu Laboratories has developed a software-based technology to increase data-transfer speeds when accessing files on remote enterprise file-sharing servers. When file-sharing servers are hosted remotely in the cloud, slow upload and download speeds caused by network latency have been an issue for typical file-sharing systems. Newly developed software that relays communications between the client and server significantly reduces the number of round trips – which accumulate when obtaining information such as file names and sizes for many files over a remote network – and thereby lowers the effects of network latency. In an internal experiment, file transfers were confirmed to be up to ten times faster when dealing with many small files, and transfers of large files were up to twenty times faster when combined with the deduplication technology Fujitsu Laboratories announced last year. Simply installing this software on a client and server increases file-access speeds for existing file-sharing systems.

In file sharing, files are stored on a server connected to a network so that multiple clients can share the same files; enterprises use this to share information and manage documents. Previously, individual locations maintained their own file-sharing servers on-site, but in order to improve security and reduce operating costs through combined management, server consolidation has become more common, as have opportunities to access file-sharing servers remotely. With CIFS and SMB, the two network file-sharing protocols most widely used in file-sharing systems, network latency can impose significant wait times for accessing files, creating demand for greater speed.

Technological Issues

Fujitsu Laboratories has already developed a deduplication technology for use with remote data transfers, which accelerates the process by avoiding retransmissions of previously sent data. This technology can be applied to a variety of situations, but it has had limited effectiveness with the CIFS and SMB file-sharing protocols because of their unique processes. Improving networks and installing specialized hardware are other ways of increasing speeds, but these are expensive, and installation of specialized hardware has limited effectiveness when handling large numbers of small files only a few kilobytes in size. The CIFS and SMB file-sharing protocols have the following unique processes and challenges.
1. When copying a folder containing a large number of files, file-attribute information is requested separately for each file, and over a remote network the accumulation of these requests causes significant latency (Figure 1).
2. When sending relatively large files, the data is split into pieces tens of kilobytes in size and header information is attached to each piece. Because this header information is updated on every transfer, the transmitted data differs even when the same file is sent, which makes deduplication ineffective.

Fujitsu Laboratories has developed a technology that accelerates data transfers for file-sharing servers using only software. Key features of the technology are as follows.


1. Collective proxy read-ahead of multiple files and proxy responses

With this technology, a module that accelerates data transfers is installed on both the client and the server (Figure 2). The server-side module: 1) detects when a download of a folder containing multiple files has started; 2) reads ahead, acting as a proxy for the client, all of the files being downloaded as a batch; 3) bundles these read-ahead files together and transmits them to the client-side module; and 4) the client-side module then answers the client’s subsequent data requests, acting as a proxy for the server. In this way, the amount of communication generated by obtaining file attributes, such as file names and sizes for many files, is greatly reduced, as are the delays caused by network latency.
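The following is a minimal Python sketch of this batching idea; it is not Fujitsu’s implementation, and the class and method names are hypothetical.

import os

class ServerSideModule:
    # On detecting that a folder download has started, read ahead all of its
    # files as a proxy for the client and return them as a single bundle.
    def __init__(self, root):
        self.root = root

    def bundle_folder(self, folder):
        bundle = {}
        folder_path = os.path.join(self.root, folder)
        for name in os.listdir(folder_path):
            path = os.path.join(folder_path, name)
            if os.path.isfile(path):
                with open(path, "rb") as f:
                    data = f.read()
                bundle[name] = {"size": len(data), "data": data}
        return bundle   # one WAN round trip instead of several per file

class ClientSideModule:
    # Once the bundle has arrived, answer the client's per-file attribute and
    # read requests locally, acting as a proxy for the server.
    def __init__(self):
        self.cache = {}

    def receive_bundle(self, folder, bundle):
        self.cache[folder] = bundle

    def get_attributes(self, folder, name):
        return {"name": name, "size": self.cache[folder][name]["size"]}

    def read_file(self, folder, name):
        return self.cache[folder][name]["data"]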

2. Effective deduplication due to header separation

Fujitsu Laboratories developed a technology that works in the server-side module to separate the transmitted data into headers and file contents. This makes deduplication of retransmitted data more precise, leading to more effective network traffic reduction.
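A minimal sketch of the idea follows, with a message layout assumed only for illustration: each protocol message is split into its header and its payload, and only the payload is checked against a cache of previously sent data, since the header changes on every transfer.

import hashlib

def dedup_stream(messages, seen):
    # messages: iterable of (header_bytes, payload_bytes) pairs
    # seen: dict mapping payload hashes to payloads already sent to the peer
    out = []
    for header, payload in messages:
        digest = hashlib.sha256(payload).hexdigest()
        if digest in seen:
            # Payload already exists on the far side: send only a short reference.
            out.append((header, ("ref", digest)))
        else:
            seen[digest] = payload
            out.append((header, ("data", payload)))
        # Headers are always sent verbatim; separating them out is what keeps
        # repeated payloads byte-identical and therefore deduplicable.
    return out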
In Fujitsu Laboratories’ internal experiment, software that implements this technology was found to have the following effects.
Increase in speed of multiple small file transfers: In a test environment that simulated the network latency of accessing a file-sharing server in Kawasaki from a location in Kyushu, batch downloads of folders containing one hundred 1-KB files were found to be ten times faster.
Increase in speed of large file transfers: In the same test environment, a download of a single 10 MB file was found to be as much as twenty times faster (compared with having no acceleration technologies such as deduplication).
Because this technology is implemented as software, it can be installed on existing file-sharing systems. It can also be applied to cloud and server-virtualization environments, mobile devices and so on, and can be extended to a variety of network services, enabling more efficient file sharing and joint development between remote locations.
Fujitsu Laboratories plans to incorporate this technology into a product as a function of a WAN optimization solution during fiscal 2015, following internal testing at Fujitsu.

References: http://phys.org/