
iTnews: Cyber security author discusses economics of protecting cyberspace

Friday, May 23, 2008


As a journalist at iTnews:

U.S. economists have launched a book that claims to document the first systematic analysis of the economics of protecting cyberspace.

Titled ‘Cyber Security: Economic Strategies and Public Policy Alternatives’, the book explores private sector security decisions, as well as the role of governments in facilitating and encouraging proactive cyber security investment strategies.

Besides individual concerns about identity theft, authors Michael Gallaher, Albert Link and Brent Rowe warn against larger threats such as a potential attack on national energy infrastructure.

Rowe, who is a research economist at research institute RTI International, spoke with iTnews about the book and its recommendations for private and public sector managers and strategists involved in cyber security.

Why is cyber security important?

Cyberspace is the nervous system of business today -- it links our critical infrastructures across both public and private institutions in sectors ranging from food and agriculture, water supply and public health, to energy, transportation and financial services.

This information control system is composed of hundreds of thousands of interconnected computers, servers, routers, switches and fibre-optic cables that allow our critical infrastructure to work. When this infrastructure is breached, the costs can mount very quickly.

Cyber security breaches are costly to businesses in terms of direct damages and future lost opportunities associated with stifling innovation, as well as to individuals in terms of identity theft.

In what way is cyber security a matter of national and homeland security?

In addition to the time and monetary costs imposed on businesses and individuals as a result of cyber security breaches, a cyber attack could be designed to have far more calamitous effects than those described above.

For example, a complex and coordinated attack could be focused on the U.S. energy infrastructure, which has been shown to be relatively insecure, and potentially knock out power for days or weeks.

When did cyber security become an issue for private and public institutions? What has changed to make it an issue?

While there are no consistent estimates of the annual cost of security compromises to the private or public sector, a rough estimate is that in 2006 cyber security breaches accounted for nearly US$1 billion in costs in the United States.

Such costs have risen over the past decade to the point where organisations are focusing more on their information security investments. In many cases, information security officers in major corporations now have much more significant roles in company planning activities.

Companies are facing large costs, individuals are confronting issues such as identity theft, and experts believe that larger threats -- for example, a potential attack on the U.S. energy infrastructure -- are looming.

What are some common mistakes that public and private sector organisations make in securing their IT infrastructure?

From a social perspective, organisations under-invest in cyber security because they are not penalised when their lax security allows attackers to use them as a staging point, or to compromise their hosts and create botnets.

However, private sector organisations do not have the information they need to make efficient decisions from a private perspective -- what’s in their best interest -- or a social perspective -- what’s in the best interest of society. They collect what information they can with given resources, and then make decisions based on their budget constraints.

In some cases, they may underestimate the costs imposed by security breaches; however, there is no research to support the assertion that they are not acting in their best interest given the information they have available.

What is the Government’s role in cyber security?

Our research suggests that there are at least two barriers that prevent organisations from investing at the socially desirable level in cyber security; government’s role should be to help remove these barriers.

These barriers are also referred to as market failures and include: limited reliable, cost-effective information upon which an organisation can make informed cyber security investment decisions; and the cost externalities that spill over to other organisations and to consumers as a result of a security breach.

As a result, any cyber security investment that an organisation makes, particularly of a proactive nature, will likely generate social benefits in excess of private benefits. Thus, government should encourage such investments by removing or lessening these barriers.
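As a hypothetical illustration of that spillover argument (all figures below are invented for illustration and are not taken from the book), here is a minimal Python sketch of how a firm weighing only its private benefit can rationally decline a security investment that would still pay off for society as a whole:

# Hypothetical illustration of under-investment due to cost externalities.
# All numbers are invented; "spillover_benefit" stands for breach costs that
# would fall on customers and other organisations rather than the firm itself.
cost_of_control = 100_000      # annual cost of a proactive security measure
private_benefit = 80_000       # expected breach losses avoided by the firm itself
spillover_benefit = 50_000     # expected losses avoided by third parties

firm_invests = private_benefit > cost_of_control
socially_worthwhile = (private_benefit + spillover_benefit) > cost_of_control

print(f"Firm invests on its own? {firm_invests}")                 # False
print(f"Investment socially worthwhile? {socially_worthwhile}")   # True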

In the past, government has attempted to develop and motivate the use of new technologies or standards that would improve security. Moving forward, new strategies are needed; for example, as suggested in our book, external public information is likely to motivate the adoption and implementation of proactive cyber security investment strategies.

How much investment should be made in cyber security?

According to our estimates, organisations spend, on average, a little less than 6 percent of their IT budgets on cyber security. However, as I offered in my previous answer, organisations are investing what they perceive to be an optimal amount given the information they have and their resource constraints.

There is no perfect level or type of cyber security investments that all organisations should make; this again points to the information problem that exists. The world of cyber security threats and solutions is constantly changing.

What were the most surprising results of your analysis of the economics of cyber security?

We identified several very interesting relationships that we believe should motivate the government and the public policy arena more broadly to act.

First, we were surprised by how much small businesses relied on outside contractors, and how unaware they were of the implications of their actions on their business and others.

Small businesses shared a focus on the bottom line as the main driver for any internal investment decisions, resulting in a lack of proactive security spending.

Overall, despite significant spending on security as a portion of their IT budgets (approximately 10 percent), our research suggests that small businesses are making the most strikingly socially inefficient security investments of any industry group with which we spoke.

Second, and of particular importance from the perspective of informing public policy, we found a relationship between organisations’ reliance on external public resources (e.g., surveys, ISO and NIST recommendations) when making cyber security investment decisions and the proactive nature of their cyber security strategies.

Since pursuing a proactive, preventative strategy is likely to reduce computer system breaches and hence the flow of attacks through an organisation to other organisations, it follows that one important role for government is the provision of information on state-of-the-art technologies and procedures that promote proactive cyber security approaches.


iTnews: CeBIT 08: Senator Lundy lobbies for Open Source change

Wednesday, May 21, 2008


As a journalist at iTnews:

The recent change of government could be an opportunity for the Australian Open Source community to bring their “free and open” philosophy to the public domain.

Speaking at the Open CeBIT conference in Sydney today, Senator Kate Lundy said that the newly-appointed Rudd Government represents a creative peak in public policy, as evidenced by the Australia 2020 Summit that was held in April.

“I’m really glad to be off the opposition benches and on to the government benches,” said Lundy, who is Senator for the Australian Capital Territory.

“The fact is, we got some really creative ideas from the 2020 Summit,” she said.

So far, suggestions to the Government have included: more resources for the use of Open Source in the education sector and not-for-profit organisations; government uptake of open standards; amendments to copyright laws; and the use of IPv6 as a platform for innovation.

Lundy also described debates about allowing open access to Crown copyright material, open access to government-funded research, and Open Source licensing of software that is developed with taxpayers’ money.

“Governments tend to want to hold onto that [software] as an asset, and a lot of opportunities for innovation are lost that way,” Lundy said.

An open philosophy could benefit Australia by providing the foundations for innovation, digital knowledge and open technology, Lundy said.

Already, the philosophy is gaining momentum through a business uptake of Open Source software, from the backend database layer to business applications.

According to a 2008 IT spending and priorities study by Australian analyst firm Longhaus, Open Source is likely to become a fully integrated dimension of the overall software market by 2012.

While 14 percent of IT decision makers in medium to large organisations claimed to have no intention of using Open Source solutions, Longhaus Research Director Sam Higgins said that these were simply “organisations in denial”.

“There are many proprietary distributors that are filling the distributor role [for Open Source software],” Higgins explained.

“Open Source is one of those inevitable features that is becoming inherent in enterprise technology,” he said.

Defying conventional expectations about Free and Open Source Software, the Longhaus study found the major drivers for Open Source adoption to be licensing and convenience, and not cost.

“When we talk about ‘free’, it may not necessarily be about cost; today, it’s much more about freedom of choice,” he said.

But for the business uptake of the Open Source philosophy to spread to policy makers, stakeholders must play an active role in lobbying for change, Lundy said.

Describing a governmental bias towards the risk-averse position of inertia, Lundy encouraged the Open Source community to work together to present a strong, compelling position to the Government.

“We’ve got all the evidence we need; I think the next step is to grasp the political agenda,” she said.

“Part of this challenge is to get these ideas into a cohesive summary and present it to policy makers. Unless we can do this as an Open Source community, it’s going to be really hard to bring about change,” she said.

“My own view is that Australia is quite a lot greater than the sum of its parts. Let’s not deny ourselves the tools that will help us achieve our potential,” she concluded.


iTnews: Researcher discusses iPod supercomputer

Friday, May 09, 2008


As a journalist at iTnews:

Microprocessors from portable electronics like iPods could yield low-cost, low-power supercomputers for specialised scientific applications, according to computer scientist John Shalf.

Along with a research team from the U.S. Department of Energy’s Lawrence Berkeley National Laboratory, Shalf is designing a supercomputer based on low-power embedded microprocessors, which has the sole purpose of improving global climate change predictions.

Shalf spoke with iTnews about the desktop and embedded chip markets, inefficiencies in current supercomputing designs, and how the Berkeley design will achieve a peak performance of 200 petaflops while consuming less than four megawatts of power.

How do mobile devices come into play in your supercomputer design?

The motivation is that we’ve gotten to the point where the cost of powering new supercomputing systems is getting very close to the cost of buying them.

When you move from the old [approach to] supercomputing, which is performance-limited, to supercomputing where the limiting concerns are power and cost, then all the lessons that we need to learn are already well understood by people who manufacture microprocessors for cell phones.

A lot of [current] supercomputers have USB ports and sound chips -- they will never be used and yet they consume power. They [manufacturers] call it commodity off the shelf [COTS] technology, where if you want to have things cheap, you leverage the mass market.

Now that the market has moved away from desktops down to individual cell phones, it’s going to change the entire computing industry, I think.

In terms of investment in microprocessor technology, it used to be dominated by desktop machines, but now the iPhones and iPods are where all the money for research into advanced microprocessor designs is going. We’re leveraging that trend, and we’re kind of like the early adopters of that idea.

How will the Berkeley design require less power than current approaches to supercomputing?

The desktop and server chip markets, which we’ve been basing our supercomputer designs on, emphasise serial performance -- that is, high clock frequencies that make things that aren’t parallel, like Microsoft Word or PowerPoint, run as fast as they can.

However, when you look at the physics of how power consumption is related to clock frequency, power scales with voltage squared times frequency, and voltage itself scales roughly with clock frequency. So if you reduce the clock frequency modestly, you get a roughly cubic power efficiency benefit.

Take a high-end server chip that consumes 120W running at 2GHz: if we just drop the clock frequency to 600MHz, we can get the wattage down to 0.09W.
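To make the arithmetic behind that claim concrete, here is a minimal Python sketch, assuming dynamic power scales as voltage squared times frequency with voltage roughly proportional to frequency (so power scales roughly with the cube of frequency). The 120W, 2GHz and 600MHz figures come from the quote above; the remaining gap down to the quoted 0.09W presumably reflects the much simpler embedded core design discussed next.

# Cubic power-scaling sketch (assumes P ~ V^2 * f with V roughly proportional to f).
base_power_w = 120.0      # high-end server chip
base_freq_ghz = 2.0
target_freq_ghz = 0.6     # 600 MHz, embedded-class clock rate

scale = target_freq_ghz / base_freq_ghz
scaled_power_w = base_power_w * scale ** 3   # frequency/voltage scaling alone

print(f"Power from cubic scaling alone: ~{scaled_power_w:.1f} W")   # ~3.2 W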

Another way to reduce power is to remove anything that you don’t need for that particular device from the processor.

[Partner company] Tensilica can create 200 new microprocessor designs per year. Their tools allow them to tailor-make special processors for each new thing they want to do, and they can do it very fast.

We’re using their design tools to make a microprocessor that removes everything that we don’t need for this climate application.

Can the same concept be used in general purpose supercomputing? Are general purpose computers a feasible concept?

In order for this [the Berkeley approach] to work, you need a problem that runs in parallel, because you need more of these [low clock frequency] processors to match the performance of a really big one. It happens that scientific applications already have plenty of parallelism available.

While the desktop chips -- the Intels, the AMDs -- can’t really play this game because things like Microsoft Word aren’t running in parallel, we can exploit this way beyond the ability of the desktop chip.

I think they [general purpose supercomputers] are a realistic idea, and there’s still a place for them: there will continue to be large systems that handle a broad array of applications.

We’re saying that for certain computational problems, ours is the correct approach, but it doesn’t supplant the need for general purpose computing, because there are many problems that are much smaller than the petaflop or exaflop scale.

What is your research team working on currently?

We’re currently doing this iterative process where we adjust all the aspects of the processor -- how much memory it has, how fast the memory is, how many instructions it does per clock cycle. All of these things are fixed in a conventional desktop chip, but we can adjust everything about the microprocessor design.

We have something that automatically tunes the software after we make a hardware change, then we benchmark it, measure how much power it takes, then we change the hardware again. We keep on iterating to come up with the optimal hardware and software solution for power.
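As a rough, self-contained sketch of that loop -- the benchmark model and parameter ranges below are invented placeholders, not the team’s actual tool chain -- the process amounts to sweeping candidate hardware parameters, re-benchmarking each design, and keeping the best performance per watt:

# Toy hardware/software co-tuning sweep. The performance/power model is
# invented for illustration; the real flow retunes the climate code and
# measures power on emulated hardware after every hardware change.
def toy_benchmark(cores, mem_mb, issue_width):
    perf = cores * issue_width * min(mem_mb, 512) ** 0.5
    power = 0.09 * cores * issue_width + 0.001 * mem_mb
    return perf, power

best_cfg, best_ratio = None, 0.0
for cores in (64, 128, 256):
    for mem_mb in (128, 256, 512):
        for issue_width in (1, 2):
            perf, power = toy_benchmark(cores, mem_mb, issue_width)
            if perf / power > best_ratio:
                best_cfg, best_ratio = (cores, mem_mb, issue_width), perf / power

print("Best performance-per-watt configuration (toy model):", best_cfg)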

When do you expect to achieve tangible results?

We want to demonstrate the first prototype in November, but that would just be one processing element in the system. The way that we’re able to do that is using something called RAMP [Research Accelerator for Multiple Processors] at U.C. Berkeley, which is a system that allows us to prototype new hardware designs without actually building them from scratch.

If we were to actually get enough money to create some chips, it would take us an additional six months or so to get chips out.

How much will the Berkeley supercomputer cost to build?

The cost of putting all the components in the same place is probably in the $30-50 million range. A lot of that cost is just the cost of memory chips.

However, I’d point out that that’s the typical cost of buying a supercomputing system from IBM or Cray. So it’s on the same order of cost that we currently put into systems that deliver one thousandth of the performance we need to solve this problem.

Are any other groups taking the same approach to supercomputing as yours?

There is something called MD-GRAPE which is in Japan [and used for] molecular dynamics. They [researchers] showed that by designing a custom chip for that application, they could do something that was 1,000 times more efficient than using conventional microprocessors.

It cost them a total of $9 million to do that.

Another group is D.E. Shaw Research that has built a system called Anton. They are also using Tensilica processors, and that system is 100 times to 1,000 times more powerful than is achievable using conventional microprocessors.

MD-GRAPE is an older system. Anton, they just booted up the first nodes a couple of months ago, and they’ve been testing that out, demonstrating that it works.

How does the Berkeley approach differ from the other projects?

It’s [climate change] a new application area, and we’re also leveraging off-the-shelf design tools more and depending less on fully customised hardware, which requires a lot more energy and time investment.

Do you foresee any commercial opportunities for the technology?

People have spent such a long time saying you can’t compete against the big microprocessor companies to create an efficient machine for science, and that was definitely true when power wasn’t a limiting factor. But now, we need to show the feasibility of this approach so that it can change the way that we design machines for supercomputing.

If we’re successful as researchers, IBM or Intel -- if they see a market for this -- will turn around and do this.


iTnews: Spintronics professor wins grant, discusses technology

Wednesday, May 07, 2008


As a journalist at iTnews:

The U.S. Department of Defense has invested half a million dollars in a grant that aims to further the emerging field of quantum spin-based electronics.

Last week, spintronics researcher Ian Appelbaum was awarded US$484,370 by the government agency’s Experimental Program to Stimulate Competitive Research (DEPSCoR).

Spintronic devices are expected to be faster, smaller, and smarter than present-day gadgets. Potential devices include instant-on computers, cell phones and other devices that require much less power to operate.

Appelbaum, who is an assistant professor of electrical and computer engineering at the University of Delaware, spoke to iTnews about the potential of spintronics and his research into using the semiconductor silicon to enhance the speed and design of spintronic integrated circuits.

What first drew you to researching spintronics?

This is a new field where several fundamental challenges were, and still are, begging for solutions. The opportunity to personally make an impact on a field which has the potential to transform technology, and have fun doing it, really excited me.

How does spintronics work?

Electrons carry electric currents because they have mass and electric charge. In addition, they also have an intrinsic magnetic moment, called "spin".

However, the orientations of these electron magnetic moments are random in nonmagnetic materials, so the presence of spin can for the most part be ignored in traditional electrical engineering applications.

The field of semiconductor spintronics aims to utilise "spin-polarised" electrons, where the magnetic orientations of electrons are more or less aligned to each other, for information processing using semiconducting materials, whether it is binary or quantum logic, or for constructing better interconnects between electronic devices.

Could you please briefly explain the research that has led to you being awarded the DEPSCoR grant?

My group was the first to demonstrate -- last year, in the journal "Nature" -- spin-polarised electron injection, transport, and detection in the semiconductor silicon.

This is significant because silicon is technologically and economically the most important semiconductor, since it is the materials basis for microelectronics.

Moreover, silicon is an ideal material for spintronics because electrons in this material maintain their spin orientations for much longer than in other semiconductors.

With this new grant, we hope to bring semiconductor spintronics closer to a pathway out of the lab environment and in a direction toward actual applications.

What do you feel is the most exciting possible outcome of spintronics?

Spintronics is touted as having the potential to enable lower-power, smaller and faster logic technologies, among other benefits. However, if you make a comparison to the history of many other technologies, I think the most exciting things to come out of the field will invariably be the ones not predicted a priori.

Are there any competing technologies to spintronics?

The International Technology Roadmap for Semiconductors (ITRS) has identified several competing technologies for future logic applications, like molecular electronics and single-electron devices.

Both of these alternatives are squarely in early research stages. It's impossible to predict which technology will emerge as the best to supplant present-day microelectronics technology.

How long will it be before spintronics-based devices reach the market?

Strictly speaking, there have been spintronics devices on the market for over ten years: the magnetic read head in hard drives utilises spin-polarised currents in metals and, more recently, metal/oxide tunnelling devices.

The science behind this technology won Albert Fert and Peter Grünberg last year's Nobel Prize in Physics. How long until a semiconductor spintronics device reaches market? I don't know, but I hope my work brings us closer to it.
