Category Archives: Internet

Phone networks revealed

The encryption scheme used for second-generation (2G) mobile phone data can be hacked within seconds by exploiting weaknesses and using common hardware, researchers at the Agency for Science, Technology and Research (A*STAR), Singapore, show. The ease of the attack demonstrates an urgent need for the 2G Global System for Mobile Communications (GSM) encryption scheme to be updated.

GSM was first deployed 25 years ago and has since become the global standard for mobile communications, used in nearly every country and comprising more than 90 per cent of the global user base.

“GSM uses an encryption scheme called the A5/1 stream cipher to protect data,” explains Jiqiang Lu from the A*STAR Institute for Infocomm Research. “A5/1 uses a 64-bit secret key and a complex keystream generator to make it resistant to elementary attacks such as exhaustive key searches and dictionary attacks.”

Any encryption scheme can be hacked given sufficient time and data, so security engineers usually try to create an encryption scheme that would demand an unfeasible amount of time to crack. But, as GSM gets older, weaknesses in the A5/1 cipher and advances in technology have rendered GSM communications susceptible to attack.

Straightforward ‘brute force’ attacks that guess the secret key from the data stream are still prohibitively time-consuming, and although A5/1 was reported to have been successfully attacked in 2010, the details of the attack were kept secret. By exploiting weaknesses in the A5/1 cipher, Lu and his colleagues have now demonstrated the first real-time attack using a relatively small amount of data.

“We used a rainbow table, which is constructed iteratively offline as a set of chains relating the secret key to the cipher output,” says Lu. “When an output is received during an attack, the attacker identifies the relevant chain in the rainbow table and regenerates it, which gives a result that is very likely to be the secret key of the cipher.”

Using two specific exploits, Lu’s team was able to reduce the effective complexity of the key to a level that allowed a rainbow table to be constructed in 55 days using consumer computer hardware, making a successful online attack possible, in most cases within just nine seconds.
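To make the chain-and-regenerate idea above concrete, here is a minimal sketch of a rainbow-table attack on a toy 16-bit cipher standing in for A5/1. Everything in it (the key size, chain length, number of chains and the hash-based stand-in for the keystream generator) is a simplifying assumption for illustration, not the parameters of the real attack.

```python
# Minimal rainbow-table sketch against a toy cipher (NOT A5/1 itself).
import hashlib

KEY_BITS = 16                     # toy key space (A5/1 uses 64 bits)
CHAIN_LEN = 32                    # steps per chain
NUM_CHAINS = 2048                 # chains stored offline

def cipher_output(key: int) -> int:
    """Toy stand-in for the keystream the cipher produces under `key`."""
    digest = hashlib.sha256(key.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big")

def reduce_to_key(output: int, position: int) -> int:
    """Rainbow tables use a different reduction function per chain column."""
    return (output ^ (position * 0x9E3779B9)) & ((1 << KEY_BITS) - 1)

def build_table():
    """Offline phase: store only (end_key -> start_key) per chain."""
    table = {}
    for start in range(NUM_CHAINS):
        key = start
        for pos in range(CHAIN_LEN):
            key = reduce_to_key(cipher_output(key), pos)
        table[key] = start            # index chains by their end point
    return table

def attack(observed_output: int, table):
    """Online phase: locate the chain containing the observed output,
    then regenerate it from its start to recover a candidate key."""
    for guess_pos in range(CHAIN_LEN - 1, -1, -1):
        # Walk the observed output forward to a chain end point.
        key = reduce_to_key(observed_output, guess_pos)
        for pos in range(guess_pos + 1, CHAIN_LEN):
            key = reduce_to_key(cipher_output(key), pos)
        if key in table:
            # Regenerate the matching chain and test each candidate key.
            candidate = table[key]
            for pos in range(CHAIN_LEN):
                if cipher_output(candidate) == observed_output:
                    return candidate
                candidate = reduce_to_key(cipher_output(candidate), pos)
    return None

if __name__ == "__main__":
    secret = 1234
    recovered = attack(cipher_output(secret), build_table())
    print("recovered key:", recovered)   # may miss: tables are probabilistic
```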

Capture the crush in biological cells

“Biological processes that make life happen and cause diseases largely take place inside cells, which can be studied with microscopes and other techniques, but not in enough detail,” said Michael Feig, an MSU professor of biochemistry and molecular biology who led the research project. “Our research has revealed unprecedented details about what exactly takes place inside biological cells, and how proteins in particular behave in their natural environment.”

The team set out to examine whether the crowding in biological cells alters the properties of biological molecules and their ability to carry out their function. Armed with access to the “K computer,” a supercomputer housed at the RIKEN Advanced Institute for Computational Science in Kobe, Japan, the research team was able to conduct computer simulations that model the cellular interior of a bacterium, and show a detailed view of how the various molecular components interact in a very dense environment.

“Our computer simulations were not too far away from simulating an entire cell in full atomistic detail,” Feig said. “These simulations exceeding 100 million atoms are the largest simulations of this kind and are several orders of magnitude larger than what is typical research work today.”

The powerful computer simulation led to a discovery that some proteins may not be as stable in very dense environments, losing the structures necessary for biological function. The research also found that this cellular environment might bring proteins involved in related biological processes closer to each other, which would enhance the overall efficiency of the cell in converting food to energy.

“Proteins in cells are squeezed together like people in the Tokyo subway during rush hour, where the crush violates personal space. But for proteins this is sometimes more welcome than we thought,” Feig said.

A third major finding is that smaller molecules, such as those providing food and carrying energy, appear to be distracted by the many opportunities to interact with the larger proteins, affecting their biological function.

“This is a breakthrough achievement in understanding how the molecules that biochemists normally study interact in real life conditions,” said Thomas Sharkey, chair of the Department of Biochemistry and Molecular Biology at MSU. “It will provide critical insights that will be used by people working to cure cancer and other diseases that depend on the cellular processes that are now much better understood.”

But this is just the beginning of detailed whole-cell simulations, according to Feig.

“Future studies will aim to reach longer time scales, and to move towards larger and more complex cells, especially human cells, to better relate to human diseases,” Feig said.

Detecting malicious websites before they cause harm

In a paper presented at the 2016 ACM Conference on Computer and Communications Security on Oct. 27, the researchers describe a system called PREDATOR that distinguishes between legitimate and malicious purchasers of new websites. In doing so, the system yields important insights into how those two groups behave differently online even before the malicious users have done anything obviously bad or harmful. These early signs of likely evil-doers help security professionals take preemptive measures, instead of waiting for a security threat to surface.

“The intuition has always been that the way that malicious actors use online resources somehow differs fundamentally from the way legitimate actors use them,” Feamster explained. “We were looking for those signals: what is it about a domain name that makes it automatically identifiable as a bad domain name?”

Once a website begins to be used for malicious purposes — when it’s linked to in spam email campaigns, for instance, or when it installs malicious code on visitors’ machines — then defenders can flag it as bad and start blocking it. But by then, the site has already been used for the very kinds of behavior that we want to prevent. PREDATOR, which stands for Proactive Recognition and Elimination of Domain Abuse at Time-Of-Registration, gets ahead of the curve.

The researchers’ techniques rely on the assumption that malicious users will exhibit registration behavior that differs from that of normal users, such as buying and registering lots of domains at once to take advantage of bulk discounts, so that they can quickly and cheaply adapt when their sites are noticed and blacklisted. Additionally, criminals will often register multiple sites using slight variations on names: changing words like “home” and “homes” or switching word orders in phrases.
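As a rough illustration of the registration-time signals just described (not the actual PREDATOR feature set), the sketch below computes two simple features per domain: how many domains arrived in the same registration batch, and how similar their names are to one another. The record fields and the similarity measure are hypothetical.

```python
# Hypothetical registration-time features inspired by the patterns above.
from difflib import SequenceMatcher
from collections import defaultdict

def registration_features(records):
    """`records` is a list of dicts like
    {"domain": "cheap-homes-deals.com", "registrant": "r42", "batch_id": 7}."""
    by_batch = defaultdict(list)
    for r in records:
        by_batch[(r["registrant"], r["batch_id"])].append(r["domain"])

    features = {}
    for (registrant, batch), domains in by_batch.items():
        # Signal 1: how many domains were registered together (bulk-discount abuse).
        batch_size = len(domains)
        # Signal 2: average pairwise name similarity ("home" vs "homes", reordered words).
        sims = [SequenceMatcher(None, a, b).ratio()
                for i, a in enumerate(domains) for b in domains[i + 1:]]
        avg_similarity = sum(sims) / len(sims) if sims else 0.0
        for d in domains:
            features[d] = {"batch_size": batch_size,
                           "avg_name_similarity": avg_similarity}
    return features

# Example: a bulk batch of near-identical names scores high on both signals.
records = [
    {"domain": "cheap-home-deals.com", "registrant": "r42", "batch_id": 7},
    {"domain": "cheap-homes-deals.com", "registrant": "r42", "batch_id": 7},
    {"domain": "deals-cheap-homes.com", "registrant": "r42", "batch_id": 7},
]
print(registration_features(records))
```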

By identifying such patterns, Feamster and his collaborators were able to start sifting through the more than 80,000 new domains registered every day to preemptively identify which ones were most likely to be used for harm.

Testing their results against known blacklisted websites, they found that PREDATOR detected 70 percent of malicious websites based solely on information known at the time those domains were first registered. The false positive rate of the PREDATOR system, or rate of legitimate sites that were incorrectly identified as malicious by the tool, was only 0.35 percent.
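For reference, the two figures quoted above correspond to the standard detection-rate and false-positive-rate definitions, sketched here with hypothetical counts.

```python
# Detection rate and false-positive rate, with made-up counts for illustration.
def detection_rate(true_positives, total_malicious):
    return true_positives / total_malicious

def false_positive_rate(false_positives, total_legitimate):
    return false_positives / total_legitimate

print(detection_rate(700, 1000))        # 0.70  -> "70 percent of malicious websites"
print(false_positive_rate(35, 10000))   # 0.0035 -> "only 0.35 percent"
```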

Being able to detect malicious sites at the moment of registration, before they’re being used, can have multiple security benefits, Feamster said. Those sites can be blocked sooner, making it difficult to use them to cause as much harm — or, indeed, any harm at all if the operators are not permitted to purchase them. “PREDATOR can achieve early detection, often days or weeks before existing blacklists, which generally cannot detect domain abuse until an attack is already underway,” the authors write in their paper. “The key advantage is to respond promptly for defense and limit the window during which miscreants might profitably use a domain.”

Infinitesimal computing device

Scientists and other creative thinkers began to realize Feynman’s nanotechnological visions.

In the spirit of Feynman’s insight, and in response to the challenges he issued as a way to inspire scientific and engineering creativity, electrical and computer engineers at UC Santa Barbara have developed a design for a functional nanoscale computing device. The concept involves a dense, three-dimensional circuit operating on an unconventional type of logic that could, theoretically, be packed into a block no bigger than 50 nanometers on any side.

“Novel computing paradigms are needed to keep up with the demand for faster, smaller and more energy-efficient devices,” said Gina Adam, postdoctoral researcher at UCSB’s Department of Computer Science and lead author of the paper “Optimized stateful material implication logic for three dimensional data manipulation,” published in the journal Nano Research. “In a regular computer, data processing and memory storage are separated, which slows down computation. Processing data directly inside a three-dimensional memory structure would allow more data to be stored and processed much faster.”

While efforts to shrink computing devices have been ongoing for decades — in fact, Feynman’s challenges as he presented them in his 1959 talk have been met — scientists and engineers continue to carve out room at the bottom for even more advanced nanotechnology. A nanoscale 8-bit adder operating within a 50-by-50-by-50-nanometer volume, put forth as part of the current Feynman Grand Prize challenge by the Foresight Institute, has not yet been achieved. However, the continuing development and fabrication of progressively smaller components is bringing this virus-sized computing device closer to reality, said Dmitri Strukov, a UCSB professor of computer science.

“Our contribution is that we improved the specific features of that logic and designed it so it could be built in three dimensions,” he said.

Key to this development is the use of a logic system called material implication logic combined with memristors — circuit elements whose resistance depends on the magnitude and direction of the current that has most recently flowed through them. Unlike the conventional computing logic and circuitry found in our present computers and other devices, in this form of computing, logic operation and information storage happen simultaneously and locally. This greatly reduces the need for components and space typically used to perform logic operations and to move data back and forth between operation and memory storage. The result of the computation is immediately stored in a memory element, which prevents data loss in the event of power outages — a critical function in autonomous systems such as robotics.
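As a minimal sketch of what stateful material implication logic means in practice, the toy model below represents memristor cells as stored bits and implements the IMPLY operation, which writes its result back into one of the operand cells; two IMPLY steps plus a reset yield NAND, a universal gate. The class and operation names are illustrative, and the model ignores the device physics entirely.

```python
# Toy model of stateful IMPLY logic: compute-in-memory on stored bits.
class MemristorArray:
    """Each cell stores one bit; IMPLY writes its result back into the target
    cell, so computation and storage happen in the same place."""
    def __init__(self, bits):
        self.cells = dict(bits)

    def FALSE(self, target):
        self.cells[target] = 0                      # unconditional reset

    def IMPLY(self, source, target):
        p, q = self.cells[source], self.cells[target]
        self.cells[target] = int((not p) or q)      # q := p -> q

def nand(a_bit, b_bit):
    """NAND from one FALSE and two IMPLY steps; since NAND is universal,
    any logic (such as an 8-bit adder) can in principle be built this way."""
    m = MemristorArray({"a": a_bit, "b": b_bit, "s": 0})
    m.FALSE("s")
    m.IMPLY("a", "s")     # s = NOT a
    m.IMPLY("b", "s")     # s = NOT b OR NOT a = NAND(a, b)
    return m.cells["s"]

for a in (0, 1):
    for b in (0, 1):
        print(f"NAND({a},{b}) = {nand(a, b)}")
```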

In addition, the researchers reconfigured the traditionally two-dimensional architecture of the memristor into a three-dimensional block, which could then be stacked and packed into the space required to meet the Feynman Grand Prize Challenge.

“Previous groups have shown that individual blocks can be scaled to very small dimensions, let’s say 10-by-10 nanometers,” said Strukov, who worked at technology company Hewlett-Packard’s labs when they ramped up development of memristors and material implication logic. By applying those results to his group’s developments, he said, the challenge could easily be met.

The tiny memristors are being heavily researched in academia and in industry for their promising uses in memory storage and neuromorphic computing. While implementations of material implication logic are rather exotic and not yet mainstream, uses for it could emerge at any time, particularly in energy-scarce systems such as robotics and medical implants.

Online word of mouth marketing

“We were initially approached by an online game provider that used a ‘freemium’ model — players could play for free, but could receive upgrades by paying a fee to become premium users,” says William Rand, an assistant professor of business management at NC State and co-author of a paper on the work. “The company wanted to know what incentives would be most likely to convince players to become premium users. That was the impetus for the work, but what we found is actually relevant for any company or developer interested in incentivizing user investment in apps or online services.”

A preliminary assessment indicated that access to new content was not the primary driver in convincing players to pay a user fee. Instead, player investment seemed to be connected to a player’s social networks.

To learn more, the researchers evaluated three months’ worth of data on 1.4 million users of the online game, including when each player began playing the game; each player’s in-game connections with other players; and whether a player became a premium user.

Using that data, the researchers created a computer model using agent-based modeling, a method that creates a computational agent to represent a single user or group of users. The computer model allowed them to assess the role that social connections may have played in getting players to pay user fees. They found that two different behavioral models worked very well, but in different ways.

“We found that the best model for accurately predicting the overall rate of players becoming premium users was the so-called ‘Bass model,’ which holds that the larger the fraction of direct connections you have who use a product, the more likely you are to use the product,” Rand says.

However, the researchers found that the best model for predicting the behavior of any specific individual was the complex contagion model.

“The Bass model looks at the fraction of your direct connections who adopt a product, whereas the complex contagion model simply looks at the overall number of your direct connections who adopt,” Rand says.
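The contrast Rand draws can be captured in a few lines. The sketch below implements the two rules in their simplest forms, following the article's descriptions; the probabilities, weights and threshold are made-up parameters, whereas the study itself fit agent-based versions of these models to the 1.4 million players' data.

```python
# Two toy adoption rules: fraction-based (Bass-style) vs count-based (complex contagion).
def bass_style_adoption_prob(adopted_neighbors, total_neighbors, p=0.01, q=0.4):
    """Bass-style rule: adoption pressure grows with the *fraction* of direct
    connections who have adopted (p = external influence, q = imitation)."""
    if total_neighbors == 0:
        return p
    return p + q * (adopted_neighbors / total_neighbors)

def complex_contagion_adopts(adopted_neighbors, threshold=3):
    """Complex-contagion rule: adoption requires a minimum *number* of
    adopting connections, regardless of how many connections there are."""
    return adopted_neighbors >= threshold

# A player with 40 in-game connections, 4 of whom pay for the game:
print(bass_style_adoption_prob(4, 40))    # modest pressure (10% of connections adopted)
print(complex_contagion_adopts(4))        # True: 4 adopting connections >= threshold of 3
```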

Both techniques have utility for businesses. For example, being able to predict how many players would become premium users could help a company make sustainable business decisions; whereas being able to predict the behavior of an individual player may help a company target players who are near the threshold of becoming premium users.

“By merging these two modeling approaches, we created a tool that would allow a company to predict how many additional premium users it would gain, depending on various degrees of investment in marketing to individual players who are near the threshold of becoming premium users,” Rand says. “This could be used to make informed decisions about how much to invest in ‘seeded,’ or targeted, marketing in order to capitalize on word-of-mouth marketing.”

Controlling light with light

A new study reports that researchers have demonstrated a way to control light with light using one third — in some cases, even less — of the energy typically required. The advancement, coupled with other developments, could ultimately lead to more powerful, energy-efficient computer chips and other optics-based technologies.

“Typically, symmetry connotes harmony and beauty. But not in this case. We’ve developed technology — an asymmetric metawaveguide — that enables a weak control laser beam to manipulate a much more intense laser signal,” says Liang Feng, PhD, assistant professor in the Department of Electrical Engineering at the University at Buffalo’s School of Engineering and Applied Sciences, and the study’s lead author.

The study — “Metawaveguide for Asymmetric Interferometric Light-Light Switching” — was published today (Oct. 31, 2016) in the journal Physical Review Letters. It was co-authored by researchers at California Institute of Technology and the City University of New York.

The study reports that the metawaveguide — a tiny rectangular box made of silicon, the semiconducting material for computer chips — creates asymmetric reflections of the two beams of light, which enables the weaker beam to control the other beam.

The basis for machine-learning systems’ decisions

But neural nets are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it’s sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque.

At the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions.

“In real-world applications, sometimes people really want to know why the model makes the predictions it does,” says Tao Lei, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “One major reason that doctors don’t trust machine-learning methods is that there’s no evidence.”

“It’s not only the medical domain,” adds Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and Lei’s thesis advisor. “It’s in any domain where the cost of making the wrong prediction is very high. You need to justify why you did it.”

“There’s a broader aspect to this work, as well,” says Tommi Jaakkola, an MIT professor of electrical engineering and computer science and the third coauthor on the paper. “You may not want to just verify that the model is making the prediction in the right way; you might also want to exert some influence in terms of the types of predictions that it should make. How does a layperson communicate with a complex model that’s trained with algorithms that they know nothing about? They might be able to tell you about the rationale for a particular prediction. In that sense it opens up a different way of communicating with the model.”

Virtual brains

Neural networks are so called because they mimic — approximately — the structure of the brain. They are composed of a large number of processing nodes that, like individual neurons, are capable of only very simple computations but are connected to each other in dense networks.

In a process referred to as “deep learning,” training data is fed to a network’s input nodes, which modify it and feed it to other nodes, which modify it and feed it to still other nodes, and so on. The values stored in the network’s output nodes are then correlated with the classification category that the network is trying to learn — such as the objects in an image, or the topic of an essay.

Over the course of the network’s training, the operations performed by the individual nodes are continuously modified to yield consistently good results across the whole set of training examples. By the end of the process, the computer scientists who programmed the network often have no idea what the nodes’ settings are. Even if they do, it can be very hard to translate that low-level information back into an intelligible description of the system’s decision-making process.
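For readers unfamiliar with the mechanics sketched above, here is a deliberately tiny forward pass through a two-layer network; the weights are arbitrary stand-ins for values that training would normally adjust.

```python
# A two-layer forward pass: input values flow through "nodes" that modify them.
import math

def forward(inputs, hidden_weights, output_weights):
    # Each hidden node computes a weighted sum of its inputs, then a simple
    # nonlinearity (ReLU), mimicking the "simple computations" of neurons.
    hidden = [max(0.0, sum(w * x for w, x in zip(row, inputs)))
              for row in hidden_weights]
    score = sum(w * h for w, h in zip(output_weights, hidden))
    return 1 / (1 + math.exp(-score))           # squashed into a class probability

print(forward(inputs=[0.5, -1.2, 3.0],
              hidden_weights=[[0.2, -0.4, 0.1], [0.7, 0.3, -0.5]],
              output_weights=[1.5, -2.0]))
```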

In the new paper, Lei, Barzilay, and Jaakkola specifically address neural nets trained on textual data. To enable interpretation of a neural net’s decisions, the CSAIL researchers divide the net into two modules. The first module extracts segments of text from the training data, and the segments are scored according to their length and their coherence: The shorter the segment, and the more of it that is drawn from strings of consecutive words, the higher its score.

The segments selected by the first module are then passed to the second module, which performs the prediction or classification task. The modules are trained together, and the goal of training is to maximize both the score of the extracted segments and the accuracy of prediction or classification.
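A rough sketch of that joint training objective is shown below, assuming a binary selection mask over the input words. The penalty terms reward short, contiguous rationales, as described above; the specific weights and the toy prediction loss are illustrative assumptions rather than the paper's exact formulation.

```python
# Toy rationale objective: prediction loss plus penalties for long or
# fragmented selections.
def rationale_penalty(mask, sparsity_weight=0.1, coherence_weight=0.05):
    selected = sum(mask)                              # total words kept
    transitions = sum(abs(mask[i] - mask[i - 1])      # breaks between
                      for i in range(1, len(mask)))   # selected spans
    return sparsity_weight * selected + coherence_weight * transitions

def training_objective(prediction_loss, mask):
    """Both modules are trained together to minimise this joint objective."""
    return prediction_loss + rationale_penalty(mask)

words = "the beer pours a thick creamy head and tastes great".split()
mask  = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]   # rationale: "and tastes great"
print([w for w, m in zip(words, mask) if m])
print(training_objective(prediction_loss=0.3, mask=mask))
```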

The difference between coherence and control

Such is life for the scientists of the Martinis Group at UC Santa Barbara and Google, Inc., as they explore the exciting but also still somewhat counter-intuitive world of quantum computing. In a paper published in the journal Nature Physics, they and colleagues at Tulane University in New Orleans demonstrate a relatively simple yet complete platform for quantum processing, integrating the control of three superconducting qubits.

“We’re probing the edge of our capability,” said the paper’s lead author, Pedram Roushan. There have been quite a few efforts to build and study individual parts of a quantum processor, he explained, but this particular project involves putting them all together in a basic building block that can be fully controlled and potentially scaled up into a functional quantum computer.

However, before a fully practicable quantum computer — with all its potential for vast, rapid and simultaneous calculations — can be made, a variety of sometimes unpredictable, spontaneous phenomena arise that must be understood as the researchers pursue greater control and sophistication in their system.

“You’re dealing with particles — qubits in this case — that are interacting with one another, and they’re interacting with external fields,” Roushan said. “This all leads to very complicated physics.”

To help solve this particular many-body problem, he explained, their fully controllable quantum processing system had to be built from a single qubit up, in order to give the researchers opportunities to more clearly understand the states, behaviors and interactions that can occur.

By engineering the pulse sequences used to manipulate the spins of the photons in their system, the researchers created an artificial magnetic field affecting their closed loop of three qubits, causing the photons to interact strongly not only with each other, but also with the pseudo-magnetic field. No small feat.

“Naturally most systems where there is good control are photonic systems,” said co-author Charles Neill. Unlike electrons, charge-less photons generally tend not to interact with each other nor with external magnetic fields, he explained. “In this article we show that we can get them to interact with each other very strongly, and interact with a magnetic field very strongly, which are the two things you need to do to get them to do interesting physics with photons,” Neill said.

Another advantage of this synthetic condensed-matter system is the ability to drive it into its lowest-lying energy state — called the ground state — to probe its properties.

But with more control comes the potential for more decoherence. The more the researchers strove for programmability and the ability to influence and read the qubits, the more open their system became to error and loss of information.

“The more control we have over a quantum system, the more complex algorithms we would be able to run,” said co-author Anthony Megrant. “However, every time we add a control line, we’re also introducing a new source of decoherence.” At the level of a single qubit, a tiny margin of error may be tolerated, the researchers explained, but even with a relatively small increase in the number of qubits, the potential for error multiplies exponentially.

“There are these corrections that are intrinsically quantum mechanical, and then they start to matter at the level of precision that we’re getting at,” Neill said.

To combat the potential for error while increasing their level of control, the team had to reconsider both the architecture of their circuit and the material that was being used in it. Instead of their traditionally single-level, planar layout, the researchers redesigned the circuit to allow control lines to “cross over” others via a self-supporting metallic “bridge.” The dielectric — the insulating material between the conducting control wires — was itself found to be a major source of errors.

“All deposited dielectrics that we know of are very lossy,” Megrant said, and so a more precisely fabricated and less defective substrate was brought in to minimize the likelihood of decoherence.

Software could help save the planet

Researchers at Lancaster University’s Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do.

The system — called REx — is being developed with vast energy-hungry data centres in mind. By being able to rapidly adjust to optimally deal with a huge multitude of tasks, servers controlled by REx would need to do less processing, therefore consuming less energy.

REx works using ‘micro-variation’ — where a large library of building blocks of software components (such as memory caches, and different forms of search and sort algorithms) can be selected and assembled automatically in response to the task at hand.

“Everything is learned by the live system, assembling the required components and continually assessing their effectiveness in the situations to which the system is subjected,” said Dr Barry Porter, lecturer at Lancaster University’s School of Computing and Communications. “Each component is sufficiently small that it is easy to create natural behavioural variation. By autonomously assembling systems from these micro-variations we then see REx create software designs that are automatically formed to deal with their task.

“As we use connected devices on a more frequent basis, and as we move into the era of the Internet of Things, the volume of data that needs to be processed and distributed is rapidly growing. This is causing a significant demand for energy through millions of servers at data centres. An automated system like REx, able to find the best performance in any conditions, could offer a way to significantly reduce this energy demand,” Dr Porter added.

In addition, as modern software systems are increasingly complex — consisting of millions of lines of code — they need to be maintained by large teams of software developers at significant cost. It is broadly acknowledged that this level of complexity and management is unsustainable. As well as saving energy in data centres, self-assembling software models could also have significant advantages by improving our ability to develop and maintain increasingly complex software systems for a wide range of domains, including operating systems and Internet infrastructure.

REx is built using three complementary layers. At the base level a novel component-based programming language called Dana enables the system to find, select and rapidly adapt the building blocks of software. A perception, assembly and learning framework (PAL) then configures and perceives the behaviour of the selected components, and an online learning process learns the best software compositions in real-time by taking advantage of statistical learning methods known as ‘linear bandit models’.
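The paper describes ‘linear bandit models’ for this online learning step; as a simplified stand-in, the sketch below uses a basic epsilon-greedy bandit to choose among a handful of hypothetical component compositions by their measured response times. The composition names, timings and algorithm choice are all illustrative assumptions, not REx's actual implementation.

```python
# Simplified online learning loop: pick the software composition that has
# been measured to perform best, while occasionally exploring alternatives.
import random

compositions = ["lru_cache+binary_search", "no_cache+binary_search",
                "lru_cache+linear_scan", "no_cache+linear_scan"]

def measure_response_time(composition):
    """Stand-in for running the assembled system and timing real requests."""
    base = {"lru_cache+binary_search": 5.0, "no_cache+binary_search": 9.0,
            "lru_cache+linear_scan": 12.0, "no_cache+linear_scan": 20.0}
    return random.gauss(base[composition], 1.0)

def choose_composition(avg_time, counts, epsilon=0.1):
    if random.random() < epsilon or not counts:
        return random.choice(compositions)          # explore a variation
    return min(avg_time, key=avg_time.get)          # exploit the fastest so far

avg_time, counts = {}, {}
for step in range(500):
    c = choose_composition(avg_time, counts)
    t = measure_response_time(c)
    counts[c] = counts.get(c, 0) + 1
    avg_time[c] = avg_time.get(c, 0) + (t - avg_time.get(c, 0)) / counts[c]

print("learned best composition:", min(avg_time, key=avg_time.get))
```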

The work is presented in the paper ‘REx: A Development Platform and Online Learning Approach for Runtime Emergent Software Systems’ at the conference ‘OSDI ’16 12th USENIX Symposium on Operating Systems Design and Implementation’. The research has been partially supported by the Engineering and Physical Sciences Research Council (EPSRC) and by a PhD scholarship from Brazil.

Wearable fitness tracker

The wearable device industry is estimated to grow to more than $30 billion by 2020. These sensors, often worn as bracelets or clips, count the number of steps we take each day; the number of hours we sleep; and monitor our blood pressure, heart rate, pulse and blood sugar levels.

The list of biophysical functions these devices can measure is growing rapidly. “But nobody has yet figured out a way to translate the information gathered by these devices into measures of health and longevity, let alone monetize this information — until now,” says S. Jay Olshansky, professor of epidemiology and biostatistics at the University of Illinois at Chicago School of Public Health and chief scientist at Lapetus Solutions, who is lead author on the paper. The researchers report that for the first time, the trillions of data points collected by wearable sensors can now be translated into empirically-verified measures of health risks and longevity — measures that have significant financial value to third parties like mortgage lenders, life insurance companies, marketers and researchers.

In the study, Olshansky and colleagues use the number of steps taken daily — a measure collected by almost all wearable sensors — and show how, using scientifically verified formulas, the step data can be translated into measures of health risk. By combining step count with age, sex, height, weight, walking speed, stride length, steps per mile and calories burned per step, they can derive the reduction in risk of death and expected gain in life expectancy and healthy-life expectancy if that same level of physical activity — in this case walking — is continued.
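A hedged sketch of that translation is below. The conversion from step counts to distance and pace is straightforward arithmetic using a common stride-length rule of thumb; the final mapping from pace to a mortality-risk reduction is a hypothetical placeholder, not the study's validated formula.

```python
# Steps -> distance and pace (simple arithmetic), then a PLACEHOLDER risk mapping.
def daily_walking_profile(steps_per_day, height_in, minutes_active):
    stride_length_ft = height_in * 0.413 / 12      # common rule-of-thumb stride estimate
    distance_miles = steps_per_day * stride_length_ft / 5280
    pace_mph = distance_miles / (minutes_active / 60)
    return distance_miles, pace_mph

def hypothetical_risk_reduction(pace_mph):
    """Placeholder only: assumes faster sustained walking maps to a larger
    reduction in mortality risk, capped at an arbitrary 35%."""
    return min(0.35, max(0.0, 0.10 * pace_mph))

miles, pace = daily_walking_profile(steps_per_day=10000, height_in=66,
                                    minutes_active=90)
print(f"{miles:.1f} miles at {pace:.1f} mph "
      f"-> ~{hypothetical_risk_reduction(pace):.0%} (illustrative only)")
```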

“In effect, we can take the data collected by your Fitbit and translate that into scientifically verified measures of health risk,” Olshansky said. “For example, we know that a 65-year-old, 5-foot-6-inch male weighing 175 pounds will reduce his risk of death by 33 percent if he regularly walks at a pace of 4 miles per hour,” Olshansky said. “The fact that it significantly reduces this man’s risk of death is valuable to the person walking, and also valuable to companies interested in interacting with someone with his level of daily physical activity.”

In the new health-data economy, your health information, once processed into longevity and health risk, will have a market.

“Imagine getting paid to upload your wearable sensor information to a new health data cloud,” Olshansky said. “Not only would researchers and companies be interested, but your own physician could access the data at your next physical to see, in effect, how you’d ‘driven’ your body since your last visit. That information would provide a much better, more accurate picture of your overall health than the snapshot you get from blood and urine collected on the day of your once-a-year checkup.”