Monthly Archives: June 2016

Infinitesimal computing device

Scientists and other creative thinkers began to realize Feynman’s nanotechnological visions.

In the spirit of Feynman’s insight, and in response to the challenges he issued as a way to inspire scientific and engineering creativity, electrical and computer engineers at UC Santa Barbara have developed a design for a functional nanoscale computing device. The concept involves a dense, three-dimensional circuit operating on an unconventional type of logic that could, theoretically, be packed into a block no bigger than 50 nanometers on any side.

“Novel computing paradigms are needed to keep up with the demand for faster, smaller and more energy-efficient devices,” said Gina Adam, postdoctoral researcher at UCSB’s Department of Computer Science and lead author of the paper “Optimized stateful material implication logic for three dimensional data manipulation,” published in the journal Nano Research. “In a regular computer, data processing and memory storage are separated, which slows down computation. Processing data directly inside a three-dimensional memory structure would allow more data to be stored and processed much faster.”

While efforts to shrink computing devices have been ongoing for decades — in fact, Feynman’s challenges as he presented them in his 1959 talk have been met — scientists and engineers continue to carve out room at the bottom for even more advanced nanotechnology. A nanoscale 8-bit adder operating within a 50-by-50-by-50-nanometer volume, put forth as part of the current Feynman Grand Prize challenge by the Foresight Institute, has not yet been achieved. However, the continuing development and fabrication of progressively smaller components is bringing this virus-sized computing device closer to reality, said Dmitri Strukov, a UCSB professor of computer science.

“Our contribution is that we improved the specific features of that logic and designed it so it could be built in three dimensions,” he said.

Key to this development is the use of a logic system called material implication logic combined with memristors — circuit elements whose resistance depends on the amount and direction of the charge that has previously flowed through them. Unlike the conventional computing logic and circuitry found in our present computers and other devices, in this form of computing, logic operation and information storage happen simultaneously and locally. This greatly reduces the need for components and space typically used to perform logic operations and to move data back and forth between operation and memory storage. The result of the computation is immediately stored in a memory element, which prevents data loss in the event of power outages — a critical function in autonomous systems such as robotics.
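
Material implication (p IMP q, read as "p implies q" and equivalent to NOT p OR q) is, together with a reset to logic 0, enough to build any Boolean function, which is why it maps so naturally onto memristive memory. The Python sketch below is a minimal bit-level illustration of the idea, not the UCSB design itself; each memristor is reduced to a single stored bit that is overwritten in place.

```python
# Minimal bit-level sketch of stateful material implication (IMPLY) logic.
# Each "memristor" is reduced to one bit: its high- or low-resistance state.
# In a real device the result of IMPLY is written directly into the target
# memristor, so the logic operation and the storage of its result coincide.

def imply(p: int, q: int) -> int:
    """Stateful IMPLY: the target state q is overwritten with (NOT p) OR q."""
    return int((not p) or q)

def nand(p: int, q: int) -> int:
    """NAND from two IMPLY steps and one work bit cleared to 0; the standard
    construction showing that IMPLY plus reset is functionally complete."""
    s = 0               # work memristor reset to logic 0
    s = imply(q, s)     # s becomes NOT q
    s = imply(p, s)     # s becomes (NOT p) OR (NOT q) = NAND(p, q)
    return s

for p in (0, 1):
    for q in (0, 1):
        print(f"p={p} q={q}  p IMPLY q = {imply(p, q)}  NAND = {nand(p, q)}")
```

Chaining gates built this way is, in principle, how larger circuits such as the 8-bit adder targeted by the Feynman Grand Prize could be assembled inside the memory itself.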

In addition, the researchers reconfigured the traditionally two-dimensional architecture of the memristor into a three-dimensional block, which could then be stacked and packed into the space required to meet the Feynman Grand Prize Challenge.

“Previous groups have shown that individual blocks can be scaled to very small dimensions, let’s say 10-by-10 nanometers,” said Strukov, who worked at technology company Hewlett-Packard’s labs when it ramped up development of memristors and material implication logic. By applying those results to his group’s developments, he said, the challenge could easily be met.

Tiny memristors are being heavily researched in academia and in industry for their promising uses in memory storage and neuromorphic computing. While implementations of material implication logic are still rather exotic and not yet mainstream, uses for it could emerge at any time, particularly in energy-scarce systems such as robotics and medical implants.

Online word of mouth marketing

“We were initially approached by an online game provider that used a ‘freemium’ model — players could play for free, but could receive upgrades by paying a fee to become premium users,” says William Rand, an assistant professor of business management at NC State and co-author of a paper on the work. “The company wanted to know what incentives would be most likely to convince players to become premium users. That was the impetus for the work, but what we found is actually relevant for any company or developer interested in incentivizing user investment in apps or online services.”

A preliminary assessment indicated that access to new content was not the primary driver in convincing players to pay a user fee. Instead, player investment seemed to be connected to a player’s social networks.

To learn more, the researchers evaluated three months’ worth of data on 1.4 million users of the online game, including when each player began playing the game; each player’s in-game connections with other players; and whether a player became a premium user.

Using that data, the researchers created a computer model using agent-based modeling, a method that creates a computational agent to represent a single user or group of users. The computer model allowed them to assess the role that social connections may have played in getting players to pay user fees. They found that two different behavioral models worked very well, but in different ways.
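
As a rough illustration of what agent-based modeling looks like in this setting, the sketch below simulates a population of players whose chance of paying the fee rises with the number of premium friends they have. All parameters and the random friendship graph are hypothetical placeholders, not values estimated from the game data.

```python
import random

# Illustrative agent-based sketch (not the calibrated model from the study):
# each agent is a player, and a free player's chance of converting to premium
# in a given step grows with the number of friends who have already converted.

random.seed(0)

N_PLAYERS = 1000
AVG_FRIENDS = 8
BASE_RATE = 0.001       # hypothetical spontaneous-conversion probability
SOCIAL_WEIGHT = 0.05    # hypothetical boost per converted friend

# Random friendship graph standing in for the in-game connection data.
friends = {i: random.sample(range(N_PLAYERS), AVG_FRIENDS) for i in range(N_PLAYERS)}
premium = set()

for day in range(90):   # roughly three months of daily steps
    for player in range(N_PLAYERS):
        if player in premium:
            continue
        converted = sum(f in premium for f in friends[player])
        if random.random() < min(BASE_RATE + SOCIAL_WEIGHT * converted, 1.0):
            premium.add(player)

print(f"{len(premium)} of {N_PLAYERS} simulated players became premium users")
```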

“We found that the best model for accurately predicting the overall rate of players becoming premium users was the so-called ‘Bass model,’ which holds that the larger the fraction of direct connections you have who use a product, the more likely you are to use the product,” Rand says.

However, the researchers found that the best model for predicting the behavior of any specific individual was the complex contagion model.

“The Bass model looks at the fraction of your direct connections who adopt a product, whereas the complex contagion model simply looks at the overall number of your direct connections who adopt,” Rand says.
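
A minimal way to see the difference is to write the two adoption rules side by side, as in the sketch below: the Bass-style rule depends on the fraction of a player's connections who have adopted, while the complex-contagion rule depends on the absolute count crossing a threshold. The coefficients are invented purely for illustration.

```python
# Side-by-side sketch of the two adoption rules as described above.
# All coefficients and thresholds are hypothetical.

def bass_adoption_prob(adopted_friends: int, total_friends: int,
                       p: float = 0.01, q: float = 0.3) -> float:
    """Bass-style rule: the probability grows with the FRACTION of a
    player's direct connections who have already adopted."""
    fraction = adopted_friends / total_friends if total_friends else 0.0
    return min(p + q * fraction, 1.0)

def complex_contagion_prob(adopted_friends: int,
                           threshold: int = 3, base: float = 0.02) -> float:
    """Complex-contagion rule: what matters is the absolute NUMBER of
    adopting connections; crossing a threshold sharply raises the odds."""
    return 0.9 if adopted_friends >= threshold else base

# A player with 20 friends, 3 of whom are already premium:
print(bass_adoption_prob(3, 20))    # small fraction -> modest probability
print(complex_contagion_prob(3))    # threshold reached -> high probability
```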

Both techniques have utility for businesses. For example, being able to predict how many players would become premium users could help a company make sustainable business decisions; whereas being able to predict the behavior of an individual player may help a company target players who are near the threshold of becoming premium users.

“By merging these two modeling approaches, we created a tool that would allow a company to predict how many additional premium users it would gain, depending on various degrees of investment in marketing to individual players who are near the threshold of becoming premium users,” Rand says. “This could be used to make informed decisions about how much to invest in ‘seeded,’ or targeted, marketing in order to capitalize on word-of-mouth marketing.”

Controlling light tips

A new study reports that researchers have demonstrated a way to control light with light using one third — in some cases, even less — of the energy typically required. The advancement, coupled with other developments, could ultimately lead to more powerful, energy-efficient computer chips and other optics-based technologies.

“Typically, symmetry connotes harmony and beauty. But not in this case. We’ve developed technology — an asymmetric metawaveguide — that enables a weak control laser beam to manipulate a much more intense laser signal,” says Liang Feng, PhD, assistant professor in the Department of Electrical Engineering at the University at Buffalo’s School of Engineering and Applied Sciences, and the study’s lead author.

The study — “Metawaveguide for Asymmetric Interferometric Light-Light Switching” — was published Oct. 31, 2016, in the journal Physical Review Letters. It was co-authored by researchers at the California Institute of Technology and the City University of New York.

The study reports that the metawaveguide — a tiny rectangular box made of silicon, the semiconducting material for computer chips — creates asymmetric reflections of the two beams of light, which enables the weaker beam to control the other beam.
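
As a very rough illustration of why asymmetric reflection matters, the toy calculation below simply adds the transmitted signal to the reflected control beam at the output. The coefficients are hypothetical and the model ignores the actual metawaveguide physics, but it shows how a control beam carrying only a third of the signal's power can fully switch the output if the structure reflects the control much more strongly than it reflects the signal.

```python
import numpy as np

# Toy two-beam interference calculation; the coefficients are assumptions,
# not values from the paper. The output on the signal side is taken to be
# the coherent sum of the transmitted signal and the reflected control beam.
# Asymmetric reflection means the control can be reflected strongly while
# the signal is barely reflected at all, so a weak control beam can still
# cancel or reinforce a much stronger signal.

E_signal = 1.0                          # signal amplitude (power = 1, arbitrary units)
E_control = np.sqrt(1 / 3)              # control carries one third of the signal power
t_signal = 0.5                          # assumed transmission seen by the signal
r_control = np.sqrt(1 - t_signal ** 2)  # assumed (large) reflection seen by the control

for phase in (0.0, np.pi):              # relative phase of the control beam
    out = t_signal * E_signal + r_control * E_control * np.exp(1j * phase)
    print(f"control phase {phase:.2f} rad -> output power {abs(out) ** 2:.3f}")
```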

Basis for machine learning systems decisions

Neural nets, however, are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it’s sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque.

At the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions.

“In real-world applications, sometimes people really want to know why the model makes the predictions it does,” says Tao Lei, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “One major reason that doctors don’t trust machine-learning methods is that there’s no evidence.”

“It’s not only the medical domain,” adds Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and Lei’s thesis advisor. “It’s in any domain where the cost of making the wrong prediction is very high. You need to justify why you did it.”

“There’s a broader aspect to this work, as well,” says Tommi Jaakkola, an MIT professor of electrical engineering and computer science and the third coauthor on the paper. “You may not want to just verify that the model is making the prediction in the right way; you might also want to exert some influence in terms of the types of predictions that it should make. How does a layperson communicate with a complex model that’s trained with algorithms that they know nothing about? They might be able to tell you about the rationale for a particular prediction. In that sense it opens up a different way of communicating with the model.”

Virtual brains

Neural networks are so called because they mimic — approximately — the structure of the brain. They are composed of a large number of processing nodes that, like individual neurons, are capable of only very simple computations but are connected to each other in dense networks.

In a process referred to as “deep learning,” training data is fed to a network’s input nodes, which modify it and feed it to other nodes, which modify it and feed it to still other nodes, and so on. The values stored in the network’s output nodes are then correlated with the classification category that the network is trying to learn — such as the objects in an image, or the topic of an essay.

Over the course of the network’s training, the operations performed by the individual nodes are continuously modified to yield consistently good results across the whole set of training examples. By the end of the process, the computer scientists who programmed the network often have no idea what the nodes’ settings are. Even if they do, it can be very hard to translate that low-level information back into an intelligible description of the system’s decision-making process.
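
To make the black-box point concrete, here is a minimal feed-forward network trained with plain gradient descent on a toy problem; the task, network size and learning rate are arbitrary choices for the sketch. After training, the learned behavior lives entirely in arrays of numbers that offer no human-readable rationale.

```python
import numpy as np

# A minimal feed-forward network trained by gradient descent on a toy task
# (XOR). The point is not the task: after training, the learned behavior is
# just arrays of weights with no human-readable explanation attached.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer of 8 "nodes"
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output node

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: each layer of nodes transforms the data and passes it on.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce error on the training set.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print("predictions:", out.ravel().round(2))
print("learned hidden weights (opaque to a human reader):")
print(W1.round(2))
```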

In the new paper, Lei, Barzilay, and Jaakkola specifically address neural nets trained on textual data. To enable interpretation of a neural net’s decisions, the CSAIL researchers divide the net into two modules. The first module extracts segments of text from the training data, and the segments are scored according to their length and their coherence: The shorter the segment, and the more of it that is drawn from strings of consecutive words, the higher its score.

The segments selected by the first module are then passed to the second module, which performs the prediction or classification task. The modules are trained together, and the goal of training is to maximize both the score of the extracted segments and the accuracy of prediction or classification.
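
A schematic way to express that joint objective is sketched below: a candidate rationale (a binary mask over the words) is scored for brevity and contiguity, and that penalty is added to the prediction loss. The weights and example masks are invented for illustration, and the "maximize the score" phrasing above corresponds to minimizing this penalty.

```python
# Schematic of the joint objective, not the authors' code. The generator
# proposes a binary mask over the words of a document; the encoder predicts
# from the masked words only. Training trades prediction error against how
# short and how contiguous the selected rationale is.

def rationale_penalty(mask, sparsity_weight=0.1, coherence_weight=0.05):
    """Penalize rationales that are long or broken into many pieces."""
    length = sum(mask)                                        # words kept
    breaks = sum(abs(a - b) for a, b in zip(mask, mask[1:]))  # 0<->1 transitions
    return sparsity_weight * length + coherence_weight * breaks

def joint_objective(prediction_loss, mask):
    """What the two modules are trained, jointly, to minimize."""
    return prediction_loss + rationale_penalty(mask)

# Two candidate rationales for an 8-word sentence, equally predictive here:
contiguous = [0, 0, 1, 1, 1, 0, 0, 0]   # one short, unbroken segment
scattered  = [1, 0, 1, 0, 1, 0, 1, 0]   # fragmented selection

print(joint_objective(prediction_loss=0.20, mask=contiguous))  # lower (better)
print(joint_objective(prediction_loss=0.20, mask=scattered))   # higher (worse)
```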