
iTnews: HP collaborates to identify Wi-Fi dead zones cheaply

Tuesday, September 30, 2008


As a journalist at iTnews:

A collaborative effort by HP Labs and Rice University has produced a technique that could lower the cost of identifying ‘dead zones’ in large wireless networks.

The technique enables Wi-Fi architects to test and refine their layouts before a network is deployed.

According to Joshua Robinson, a graduate student at Rice University, there currently is no standard industry practice to identify Wi-Fi dead zones.

“The frequency of dead zones has actually been a huge obstacle to deploying city-wide wireless networks,” he told iTnews.

“Since companies don't advertise how they find dead zones, it's hard to say authoritatively what happens.”

Some providers employ expensive, exhaustive measurement studies that require the network to be tested from every location from which potential users may wish to connect.

Other approaches involve taking a few measurements in an ad hoc fashion and fixing any remaining dead zones after the network is deployed.

According to Robinson, the goal of the new technique is to focus measurement efforts on ‘trouble areas’ that potentially could be dead zones.

The technique identifies locations at which the network should be tested by combining wireless signal models with publicly-available information about basic topography, street locations and land use.

“We develop accurate predictions and use these predictions to avoid spending a lot of measurements in areas that have clearly very good or very poor performance,” he explained.

“This is how we are able to use a small number of measurements to more accurately find a network's performance and identify all the dead zones.”
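
The paper itself details the measurement-selection algorithm; the sketch below only illustrates the general idea of spending measurements where a propagation model is least certain. The candidate points, land-use attenuation values, propagation constants and signal thresholds are invented for illustration and are not figures from the Rice/HP work.

    import math

    # Hypothetical candidate measurement points: (x, y) metres from an access point,
    # plus a crude land-use attenuation factor (dB) looked up from public GIS data.
    candidates = [
        {"pos": (120, 40), "clutter_db": 6},    # open street
        {"pos": (300, 150), "clutter_db": 18},  # dense foliage
        {"pos": (450, 90), "clutter_db": 4},    # open parkland
    ]

    TX_POWER_DBM = 20                # assumed transmit power
    GOOD_RSSI, BAD_RSSI = -65, -85   # assumed 'clearly good' / 'clearly dead' cut-offs

    def predicted_rssi(point):
        """Rough log-distance path-loss prediction plus land-use clutter."""
        d = max(math.hypot(*point["pos"]), 1.0)
        path_loss = 30 + 25 * math.log10(d)   # assumed propagation constants
        return TX_POWER_DBM - path_loss - point["clutter_db"]

    # Spend measurements only where the model is unsure: neither clearly good nor clearly dead.
    to_measure = [p for p in candidates if BAD_RSSI < predicted_rssi(p) < GOOD_RSSI]

    for p in to_measure:
        print("measure at", p["pos"], "predicted %.1f dBm" % predicted_rssi(p))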

The research won best-paper honours at the annual MobiCom ’08 wireless conference in San Francisco this month.

Because it requires five times fewer measurements than a grid sampling strategy, and ‘far fewer’ than an exhaustive measurement study, the new technique could reduce labour and equipment costs, the researchers say.

Robinson expects municipalities, companies and non-profit organisations looking to deploy city-wide wireless mesh networks to benefit most from the new technique.

As an example, he named the Technology-for-All (TFA) network, which is being built by Rice University in partnership with a local non-profit organisation to provide free wireless Internet access to an under-served neighbourhood in Houston, Texas.

“We currently serve around 4000 people,” he told iTnews. “Since we do not have a big budget to test the network, techniques to reduce the cost and time involved in finding our dead zones are very helpful.”

Besides the TFA deployment, the new technique also has been tested on Google’s wireless network in Mountain View, California.

When compared with exhaustive measurement studies of both networks, the technique was found to achieve approximately 90 percent accuracy while requiring less than two percent of the number of measurements performed.

In the short term, Rice University researchers will be focusing on extending their research for use in the network planning process.

HP Labs has a definite commercial interest in the project and has been involved in prior deployments in Taipei. However, no plans for commercialisation of the technique have been announced as yet.


iTnews: Beware smartphone data leakage, Marshal warns

Friday, September 26, 2008


As a journalist at iTnews:

The increasing use of Blackberry, iPhone and other smartphone devices in the enterprise could put corporate data at risk, content security vendor Marshal warns.

According to Marshal’s Asia-Pacific Vice President, Jeremy Hulse, companies need to govern the use of smartphones, which enable a greater number of people to access company data from anywhere.

While notebook computers have enabled similar data mobility in the past, Hulse expects the burgeoning smartphone culture to introduce new risks to enterprise security.

“You don’t really pull a notebook out as much as a smartphone,” he noted. “The risk is pulling a smartphone out with friends at a bar, leaving it around, or losing it in a public place.”

Highlighting the importance of financial and strategic data to a corporation, Hulse said businesses should pay more attention to defining and protecting their critical information.

“The level of risk [posed by smartphones] depends on the type of information that people are pushing down to mobile devices, and the locations they are accessing this information from,” he told iTnews.

“They [businesses] have to ask themselves, ‘Do people need to access corporate information on mobile devices?’”

Their market pervasiveness and small size mean that mobile phones and PDAs are currently far more likely to be lost or left behind than notebook computers.

According to a recent survey by privacy vendor Credant Technologies, a total of 62,000 mobile devices have been left in London cabs during the past six months.

While personal data theft and identity fraud have been the main worries associated with lost mobile devices in the past, Hulse expects corporate data loss soon to steal the spotlight.

“It’s only a matter of time, especially with the amount of storage available in new devices,” he said.

Besides instilling a corporate culture of greater care when accessing company data on a smartphone, Hulse suggests the use of technologies such as content filtering and hardware and software locks.

While he could not identify manufacturers of mobile devices that offer particularly good or bad security, Hulse noted that some vendors have collaborated with Microsoft to install technology that wipes a device’s memory clean in case of loss or theft.

Other vendors offer software that provides a standard operating environment across mobile devices and enterprise desktop computers, which could enable organisations to monitor and filter the transfer of sensitive data.

Noting that smartphone technology could benefit employees’ productivity, Hulse said security should not be seen as a barrier to mobility, but an enabler to maximise the benefit from mobile technology investments.

“I think smartphones can actually be really productive, but I think they need to be looked at in terms of security,” he told iTnews.

“The capability [for increased productivity] is there, but training for staff needs to be there too,” he said.


iTnews: In-the-cloud security to grow in economic crisis, Webroot predicts

Thursday, September 25, 2008


As a journalist at iTnews:

The economic downturn could give the software as a service (SaaS) model an edge over traditional security software provision, Webroot believes.

In Sydney this week to speak at Gartner’s IT Security Summit, Webroot CTO Gerhard Eschelbeck highlighted the difficulty of managing an evolving threat landscape amid staff shortages and tightening budgets.

While e-mail security has dominated the spotlight previously, Webroot research indicates that the Web has been a greater source of threats in recent times.

The research also found that more than half of Web security decision makers feel that keeping security products up-to-date is challenging, and almost 40 percent believe their companies devote insufficient resources to Web security.

“Nowadays, the threat landscape is changing in so far that the variants of malware is exploding compared to five years ago,” Eschelbeck said, noting that malware variants in circulation today are five times as numerous as those five years ago.

“It is clear that the traditional applications are reaching some of their physical limits,” he told iTnews.

Compared to traditional, on-premise security applications, Webroot’s SaaS offerings provide businesses with the ability to outsource the burden of security.

Companies’ e-mail and Web traffic is filtered through Webroot’s data centres in Australia to detect and remove any malware.

Webroot has invested approximately $1m in its Australian operations, including a newly-launched data centre in Sydney and eight support staff across Sydney and Melbourne.

According to Charles Heunemann, Webroot’s Managing Director of Asia Pacific Operations, the company has provisioned for ‘significant growth’ in the region.

The Sydney data centre currently is used to only a ‘single digit percentage’ of its capacity, he said.

“What we’ve got is a situation where we’ve come from an early adopter stage to a wider use of SaaS,” Heunemann said of the growing market for in-the-cloud security applications.

The rise of SaaS was said to be a result of economic pressure to deliver more value through IT and a trend towards outsourcing ‘non-core’ applications.

In the Asia-Pacific region, the market for SaaS is experiencing a Compound Annual Growth Rate (CAGR) of 44 percent, Heunemann said, compared to a CAGR of 13 percent for on-premise software.

Webroot estimates that in-the-cloud security applications currently have a market penetration of 8 percent in Australia, 4 percent in Asia and 25 percent in the U.K.

“The genesis of security technology tends to be in the U.K.,” Eschelbeck said, expecting Australian adoption to reach similar figures ‘before long’.

In terms of Eschelbeck’s research into the ‘Laws of Vulnerabilities’, the SaaS model could reduce the threat posed by current Web-based malware by narrowing organisations’ window of exposure.

“The Laws of Vulnerabilities is to do with how quickly organisations are patching their systems,” he explained. “There is a physical limit to how quickly companies can patch, so there is always a window of exposure of five to nine days.”

“The huge advantage that it [SaaS] brings is moving the responsibility [of patching] from the organisation to the provider,” he said.

Heunemann noted that through providing a specialised security service to multiple organisations, service providers such as Webroot also have access to economies of scale and greater visibility of the overall threat landscape.

Webroot also is able to provide its customers with some level of insurance, through guaranteeing a minimum malware capture and detection rate.

“Compared to the traditional model where you’re looking at deploying an on-site application to protect against malware, the SaaS model essentially is a pay-as-you-go option that scales very well linearly, as you grow,” Eschelbeck said.

“Typically, IT departments are not overstaffed -- I’ve never heard of overstaffing as being an issue for these departments -- so the advantage [of SaaS] is taking the burden [of security] away from end users,” he said.


iTnews: Ruxcon hacker conference opens arms to security pros

Wednesday, September 24, 2008


As a journalist at iTnews:

Community-organised hacker conference, Ruxcon, is aiming to attract a ‘more diverse field’ of attendees to its annual event in November.

Now in its sixth year, Ruxcon is expected to bring together some 350 vulnerability enthusiasts from across Australia.

Ruxcon 2008 will be the sixth such event since its launch in 2003. Over the years, the conference has evolved from a technical, specialist event to have a broader security focus, according to conference organiser Chris Spencer.

“The focus is going to be a little more professional this time around,” Spencer told iTnews. “We want to start attracting a more diverse field of security professionals.”

“I don’t think there is much of an Australian hacking community anymore; the security industry has commercialised vulnerability research, so that there just isn’t a vibrant hacking community,” he said.

Spencer compared Ruxcon to high-profile hacker conferences such as Defcon and Black Hat in the U.S., describing Ruxcon as a hobbyist, community-driven event.

Since its inception, the conference has been organised by the same group of volunteers, all of whom have day jobs in the Australian security industry.

When not organising Ruxcon, Spencer works as a vulnerability researcher. Other organisers include consultants and security administrators.

Ruxcon 2008 will be held at the University of Technology, Sydney from 29 to 30 November, and costs $60 to attend.

Presenters hailing from Australia, New Zealand, Italy and France will discuss topics such as how Graphics Processing Units (GPUs) can be used by malware, and heap exploitation theory in Windows Vista.

Despite the vulnerabilities and methods that will be discussed at the conference, Spencer notes that Ruxcon has not encountered any resistance from vendors to date.

And previous years’ sponsorship by vendors such as Google and VeriSign’s iDefense has not impacted presentations such as ‘Google Hacking’, which will be discussed this year.

“When they [Ruxcon presenters] present on these topics, they are doing it from a research background and not a malicious standpoint,” Spencer said of the risks of revealing vulnerabilities.

“As a whole, it’s [Ruxcon] just about putting on a demonstration of the talent we have in Australia,” he said.


iTnews: Angel investor seeks geek chic innovations

Tuesday, September 23, 2008


As a journalist at iTnews:

Australian Web site developer Geekdom is seeking geeky ideas for its online business incubation program.

The program, dubbed ‘Geeksville’, targets projects in Australia and offers successful applicants angel funding, mentoring and business resources.

According to Ian Naylor, Chief Technology Officer of Geekdom, the program aims to ‘bridge the gap’ between budding and business-ready stages of an idea.

“Typically VCs will not fund an idea from inception,” Naylor explained. “Effectively, they are stage two investors once the business has been bootstrapped for a period and a modicum of success has been achieved.”

“The core area of our business is developing with investment in mind,” he said. “Using our venture capital connections, we have a good idea of what has a chance to be picked up.”

Geeksville was formed early this year, when Geekdom executives realised that its core Web site development business structure could support external as well as internal innovations.

The incubation program has now been running for six months and is slated to publicly launch its first batch of start-up companies soon.

A six-month time to market is among a loose set of selection criteria for Geeksville hopefuls. Other criteria include innovative ideas for the online market and a project’s manageability.

While the incubator program has a distinct focus on Australian innovations, it encourages applicants to focus on both local and international markets to create scalable opportunities.

Along with salary, resources, mentoring and business structure, successful applicants also receive access to other resources within Geekdom’s parent organisation, the Photon Group.

The Photon Group encompasses more than 50 other companies, including strategic communications company Love, Naked Communications, and European public relations consultancy Hotwire.

“As an incubator we see our value not just in providing money, but structure and resources,” Naylor told iTnews, “so really the less you [applicants] have, other than the idea, the more value we can add.”


iTnews: Video game GPUs find use in biological modelling

Monday, September 22, 2008


As a journalist at iTnews:

Forget supercomputers -- researchers have devised a method of using graphics processing technology from video games in complex biological modelling.

Applied to a new area in biology called agent-based modelling, the technology could enable a wider range of researchers and medical practitioners to simulate processes graphically.

Agent-based modelling simulates the behaviours of complex systems by predicting the actions and interactions of its various components.

Currently, researchers in this field either write their own code or use toolkits to build it, with the resulting code running directly on a computer’s CPU (central processing unit).

According to Roshan D'Souza of Michigan Technological University, such code often has limited scalability, so large-scale models frequently require the use of a supercomputer.

D’Souza is attempting to make large-scale modelling more accessible using graphics processing units (GPUs), which video games currently use to perform parallel computation for realistic rendering of scenes.

Using GPUs, researchers could simulate large-scale models with tens of millions of cells on a regular desktop costing under US$1400, he expects.
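
D’Souza’s framework itself is not reproduced here; the sketch below only shows why agent-based models suit graphics hardware: every agent applies the same simple rule, so updates can be expressed as element-wise array operations of the kind GPUs execute across thousands of threads. The epidemic rule and its parameters are invented for illustration, and NumPy on a CPU stands in for the GPU kernels.

    import numpy as np

    # Toy, well-mixed epidemic: each agent is healthy (0) or infected (1), and every
    # agent applies the same local rule each step -- the uniform, data-parallel update
    # that maps naturally onto GPU threads (vectorised here with NumPy as a stand-in).
    rng = np.random.default_rng(0)
    n_agents = 10_000_000                                      # tens of millions fit on a desktop
    state = (rng.random(n_agents) < 0.001).astype(np.uint8)    # 0.1% initially infected

    INFECTION_RATE = 0.05   # assumed per-step chance of infection, scaled by prevalence
    RECOVERY_RATE = 0.01    # assumed per-step chance of recovery

    for step in range(10):
        prevalence = state.mean()
        catch = (state == 0) & (rng.random(n_agents) < INFECTION_RATE * prevalence)
        recover = (state == 1) & (rng.random(n_agents) < RECOVERY_RATE)
        state = np.where(catch, 1, np.where(recover, 0, state)).astype(np.uint8)
        print(f"step {step}: {prevalence:.4%} infected")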

“Currently, if you want to simulate a model anywhere close to what we are handling, you have to go to a supercomputer costing a few million quid,” he told iTnews.

“GPUs are cheap – [costing around] $400 -- and are a bang for the buck,” he said. “You can do sufficiently large models to handle quite a large portion of the scenarios.”

Another benefit of using GPUs is that they produce real-time simulations, whereas supercomputers calculate simulations offline without real-time visual display.

Results of GPU simulations may not be any better than those produced by a supercomputer, nor do GPUs replace supercomputers in simulating ultra-large models comprising billions of agents, D’Souza said.

However, by providing a cost-effective method for large-scale modelling, GPUs could be used by physicians to do diagnostics or to plan individualised treatment plans for patients.

“The big picture is this: in the near future, when your local physician starts using ABMs [agent-based models] … he/she will not be able to access a supercomputer,” D’Souza said.

“But for sure he/she can buy a GPU and insert it into his/her desktop,” he said.

In the past, GPUs have been used for research into fluid simulation, n-body dynamics and protein folding.

The application of GPUs in agent-based modelling came about by ‘accident’, D’Souza said, while he was investigating CAD (computer-aided design) tools to detect errors in mechanical engineering designs.

“I used GPUs for this,” he recalls, “then I was looking around for other ‘stuff’ to do with the GPU. I found that people had already worked on molecular dynamics, fluids, protein folding etc – but agent-based modelling was open.”

“With a $1,400 desktop, we can beat a computing cluster,” he said of the technology. “We are effectively democratising supercomputing and putting these powerful tools into the hands of any researcher.”

“Every time I present this research, I make it a point to thank the millions of video gamers who have inadvertently made this possible,” he said.


iTnews: 'Spoken tokens' touted as ultimate security

Friday, September 19, 2008


As a journalist at iTnews:

An Australian market for biometric voice authentication is taking shape from early adopters in the banking, government and service industries.

According to Chuck Buffum, who is Vice President of Caller Authentication Solutions for Nuance Communications, more Australian companies are likely to be authenticating their customers with ‘spoken tokens’ within the next few months.

In Sydney to meet with customers this week, Buffum expects the local voice authentication market to grow to within three months of more mature markets such as the U.S., U.K. and Canada by mid-2009.

“It’s very early in the market take up,” he said, estimating there to be eight to 10 consumer-facing biometric voice authentication deployments in the world currently.

“That will change in the next few months,” he said, citing discussions with Australian banks and government agencies, which are expected to deploy the technology within six to nine months.

Currently, Nuance’s voice authentication technology is used by subscription television provider Austar to handle orders for its premium services and movies.

The authentication and call steering system is said to have achieved an initial return on investment in less than 12 months, and currently handles more than 1.5 million (51 percent) calls per year.

Besides the convenience and cost benefits of an automated voice authentication system, Buffum expects the biometric technology to provide consumers with greater account security.

“All around the world, there is more and more attention on protecting personal information and keeping that secure,” he told iTnews.

“Companies need to be encouraged to recognise that vulnerability and use suitable technologies to protect their customers,” he said.

Buffum warned of the public availability of personal information on the Internet. Such information often is used to secure accounts with traditional authentication protocols, he noted.

Social networking sites such as Facebook were highlighted as sources from which malicious persons may obtain information such as a person’s date of birth and hometown.

Search engine queries can be used to obtain yet more personal information about a target, as evidenced by the hacking of U.S. vice presidential candidate Sarah Palin’s e-mail account this week.

With the right design and security settings, biometric voice authentication methods could provide greater security by combining ‘something you know’, such as a password, with ‘something you are’, through your voice print, Buffum said.

Binary voice prints in Nuance’s software consist of 28 features, including vocal characteristics such as the geometry of one’s vocal tract and behavioural patterns such as cadence, rhythm and volume.

The system determines the statistical likelihood that a caller matches his or her saved voice print, and authenticates the caller according to the security threshold set by the company.

Some transactions such as airline timetables may require lower security thresholds than others, such as banking, Buffum said.
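
Nuance does not publish its scoring internals, but the trade-off Buffum describes can be illustrated with a generic threshold check: the same match score is accepted for a low-risk transaction and rejected for a high-risk one. The scores and thresholds below are invented for illustration, not values from Nuance’s software.

    # Illustrative verification decision: one match score, different per-application thresholds.
    THRESHOLDS = {
        "airline_timetable": 0.60,   # low risk: favour convenience
        "banking": 0.92,             # high risk: favour security
    }

    def authenticate(match_score, application):
        """match_score is a hypothetical likelihood that the caller matches the enrolled voice print."""
        return match_score >= THRESHOLDS[application]

    score = 0.85   # hypothetical output of the voice-print comparison
    for app in THRESHOLDS:
        print(app, "->", "accepted" if authenticate(score, app) else "rejected")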

Noting that ‘nothing is as good as iris scanning’, Buffum estimates the security level of voice authentication to be similar to that of fingerprint technology, and ahead of face scanning.

A hacker could make an audio recording of an account holder’s interaction with a voice authentication system and play it back to gain access, but such attacks could be thwarted by requesting the caller to repeat a random series of words, he said.

“The Mission Impossible squad can always break in,” he said, “but with this technology, you can at least keep everyone else out.”

“2009 is the year of the early adopters,” Buffum said of the Australian voice authentication market.

“Hopefully you’ll be counting them [deployments] on more than one hand -- but that’s around the figure we’re looking at.”


iTnews: Mobile operators warned of 'dumb pipes' ISP scenario

Wednesday, September 17, 2008


As a journalist at iTnews:

Mobile broadband operators need to look beyond flat-rate access offerings to future-proof their business, analysts say.

Recent research from analyst firm Ovum suggests that operators could bolster their revenues by leveraging subscriber information and network-based assets.

Melbourne-based Ovum analyst Nathan Burley likened the current mobile broadband market to last decade’s fixed broadband offerings that bundled access with services such as e-mail, hosting and content.

As consumers turned to other Web sites for their online activities, fixed broadband ISPs were muscled out of content revenue and into what analysts call a ‘dumb pipes’ scenario.

Mobile broadband operators such as Telstra currently support the bulk of their customers’ activities through services such as mobile TV, games and customised music sites.

But with the growing popularity of smartphones that feature more user-friendly Web browsers, mobile broadband operators may soon go the way of their fixed-line counterparts, Ovum predicts.

“If you look at these smartphones, browsers on the phones are getting better and you are seeing users of these handsets use the phones for their own content on the Internet,” Burley said.

Burley named the Blackberry Bold, Nokia N96, HTC Touch Diamond and Apple’s iPhone as examples of game-changing devices.

“In terms of the iPhone, Apple’s strategy to some degree is to relegate the [mobile broadband] operator to offering access only,” he said.

“It’s a very open model for content providers,” he told iTnews.

While this open model could mean less content-based revenue for mobile operators, Burley points out that external content could bolster data access revenue.

Additionally, mobile operators could have a place in the value chain for supporting third-party content and other services by leveraging in-depth information about their customers and network-based assets such as location.

Areas such as social networking applications could benefit from customer metadata, Burley said, highlighting opportunities for advertising revenue.

“On the Internet, if you look at the people [companies] who have prevailed in terms of advertising revenue, it [their success] sits on a lot of information about their subscribers,” he said.

But not all operators will be successful in gaining a share of the advertising pot, analysts predict, due to the strong competition between broadcasters, Internet businesses and traditional media.

Ovum suggests that mobile broadband operators consider options such as: having their own paid-for content and service offerings; offering their own and third-party content on an ad-supported basis; and providing access to free Internet-based content to drive usage and hence access revenues.


iTnews: 'Cognitive radios' to improve wireless devices

Tuesday, September 16, 2008


As a journalist at iTnews:

Researchers are developing intelligent radios that can sense their surroundings and adjust their mode of operation accordingly.

Dubbed ‘cognitive radios’, the technology is expected to reach the market within five years, finding uses in public safety devices and wireless networks.

Cognitive radios build on the concept of ‘software defined radio’, in which most functions in a radio device are performed by software-controlled digital electronic circuits.

Similar to how a modern day cell phone signs on to different networks while roaming, a cognitive radio is designed to be adaptive to its situation.

“A cognitive radio is aware of its environment, its own capabilities, the rules within which it can operate, and its operator’s needs and privileges,” explained Charles W. Bostian, an Alumni Distinguished Professor of Electrical and Computer Engineering at Virginia Tech.

“It is capable of changing its operating modes in ways that maximise things that the user wants while staying within the rules ... is capable of learning in the process and of developing configurations that its designer never anticipated.”

Adaptive, cognitive radios could enable techniques such as dynamic frequency sharing, in which radios automatically locate unused frequencies, or share channels based on a priority system.

In public safety, cognitive radios also could be used to provide interoperability between various signals and automatically adjust radio performance.

“For example, if its user is inside a building where there is little or no public safety radio coverage, the radio may automatically switch to a VoIP mode and reach the dispatcher through a WiFi access point and a telephone line,” Bostian said.

“These ideas are research topics now, and some of them will soon reach the market,” he said.

“It will mean more business and more equipment to sell for the wireless infrastructure manufacturers. They have nothing to lose and a lot to gain.”

Microsoft is researching the technology’s potential to alleviate bandwidth scarcity in wireless networking through its Kognitiv Networking Over White Spaces (KNOWS) research project.

The project aims to opportunistically access unused portions of the TV spectrum, and has already produced a prototype that scans the TV spectrum for unused frequencies.

When an open frequency band is located, the device is designed to switch to it dynamically, in a way that does not interfere with incumbent TV receivers.
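
The KNOWS control logic is not described in detail in the article; the sketch below only illustrates the sense-then-switch behaviour: measure each candidate channel, treat anything above a power threshold as an incumbent broadcaster, and hop to the quietest vacant channel or stay silent. The channel numbers, threshold and random 'measurement' are assumptions made for illustration.

    import random

    CANDIDATE_CHANNELS = [21, 22, 23, 27, 31]   # hypothetical TV channels to consider
    INCUMBENT_THRESHOLD_DBM = -90               # assumed: stronger than this means a broadcaster is present

    def sense_channel_power(channel):
        """Stand-in for a real spectrum measurement; returns received power in dBm."""
        return random.uniform(-110, -60)

    def pick_free_channel():
        """Return the quietest channel with no apparent incumbent, or None if all are occupied."""
        readings = {ch: sense_channel_power(ch) for ch in CANDIDATE_CHANNELS}
        free = {ch: power for ch, power in readings.items() if power < INCUMBENT_THRESHOLD_DBM}
        return min(free, key=free.get) if free else None

    channel = pick_free_channel()
    if channel is None:
        print("no vacant channel -- staying silent to protect TV receivers")
    else:
        print("switching to vacant channel", channel)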

“I think dynamic spectrum access will be the first application for commercial wireless services like WiFi and WiMax,” Bostian told iTnews. “There is movement toward doing this in vacant U.S. TV channels.”

According to Bostian, current challenges in the development of cognitive radio are reducing cost and improving battery life.

While the technology is expected to reach the market within five years, it will take twice as long to become commonplace, Virginia Tech researchers predict.


iTnews: Contact centres urged to look offshore for staff

Monday, September 15, 2008


As a journalist at iTnews:

Australian contact centres need to look offshore to solve staffing issues, claims customer service outsourcing agency, Convergys.

Convergys’s statement comes on the heels of the 2008 Australian Contact Centre Industry Benchmark Study, which highlighted high levels of employee attrition in the contact centre industry.

Conducted by ACA Research, the study found staff turnover, difficulty in recruiting and inadequate headcount to be the industry’s three greatest challenges.

Staff attrition was ascribed to the fact that only one-third of contact centre staff view their job as a career, while other survey respondents described their job as part-time, gateway, and transition gigs.

“In Australia, a lot of industries just don’t see a contact centre job as a career,” said Max Tennant, Senior Account Executive of Convergys.

“Top that off with the highly transactional functions in most contact centre jobs, and it just doesn’t make it [contact centre work] an appealing job for young people,” he said.

Tennant suggests that contact centre operators move some functions offshore, where the ‘highly transactional, monotonous functions’ that are required may be considered appealing.

He named the Philippines and India as prime offshore locations due to the language and customer service skills that are available.

“Culturally, the offshore agents are a lot more open to these transactional tasks that Australian agents may find monotonous,” he said, describing ‘hierarchical’ tendencies in the Asian culture, and adding that offshored contact centres tend to pay three times the minimum wage in host countries.

Contact centre operators could direct specific customer requests to locations with staff adept at those functions, Tennant said, noting that while Indian contact centres have been found to excel in backoffice functions, staff in the Philippines traditionally provide a better customer service experience.

“If you ask your customers, ‘would you prefer an offshore or onshore agent’, they will ask for onshore, because that’s what they’re culturally used to,” he noted.

“But if you say, ‘look, here’s our situation -- you can wait 45 minutes for XYZ and our operating hours are 9-5 -- would you like 24/7 service and all these other options if we provide offshore agents?’ They will say yes.”

“It’s important that offshoring needs to be a part of an organisation’s overall strategy and is not just a cost savings thing,” he said.

Teleworking and process automation previously have been heralded as solutions to staffing problems in the Australian contact centre industry.

However, Tennant said that some transactions, once automated, do not provide a satisfactory customer experience, and he expects there to be too few Australian teleworkers to provide the headcount required to satisfy customer demand around the clock.

Convergys maintains 84 contact centres worldwide, which house a total of 45,000 agent stations -- most of which are staffed around the clock, Tennant said.

A majority of Convergys’s facilities are located in the U.S., where the business originated. The company plans to direct more of its clients to its 9 contact centres in India, and 14 in the Philippines.


iTnews: Supercomputing revenue to grow $6.4b by 2012

Friday, September 12, 2008


As a journalist at iTnews:

High Performance Computers (HPCs) are extending their reach from academic research to government, manufacturing and financial industries, analysts say.

According to analyst firm IDC, revenue from HPCs -- which are commonly termed ‘supercomputers’ -- will grow by $6.4 billion during the next five years.

Industries experiencing the most growth in HPC applications are expected to be software engineering, mechanical design, weather, financial and digital animation.

Universities, which in 2007 spent more than $2.1 billion on HPCs, will continue to lead HPC spending with a forecasted spending of $3.2 billion in 2012.

Although optical and quantum technologies have received much attention as potential supercomputing technologies, most HPCs today are built from high-performance configurations of ordinary technologies.

“Most high performance computers today are based on commodity technologies that are readily available in the open market,” explained Steve Conway, who is IDC’s Research Vice President of Technical Computing and Steering Committee Member of the HPC User Forum.

Speaking with iTnews in the lead up to IDC’s HPC roadshow in Sydney this month, Conway said that common HPC configurations include standard x86 microprocessors from AMD or Intel, and the standard Linux or Microsoft Windows operating system.

“The trick is in how these technologies are connected together in supercomputers that may have 100,000 or more processors each in some cases,” he said. “Increasingly, fibre optical connections are used to link the components together.”

Outside of the academic sphere, current supercomputers have found uses in aircraft design for Boeing, assembly line modelling for Procter & Gamble, and manufacturing simulation for Whirlpool.

IBM and Hewlett-Packard lead the pack, each holding 32.9 percent of the HPC market. Dell currently owns 17.8 percent market share, while Cray owns 1.1 percent.

Looking further ahead, Conway expects supercomputers to reach speeds 1,000 times faster than today’s best by 2017.

These ‘exaflop’ computers will perform 10^18 calculations per second and will be able to process the informational equivalent of all 20 million volumes in the New York Public Library system in less than one second, Conway said.

But as energy concerns come to the fore, analysts expect the cost and availability of electricity to be the biggest challenge for the HPC market.

While petascale supercomputers slated for delivery in the 2010 timeframe are expected to require as much as 20MW of power, similar projects could demand 60MW in the 2015-2017 timeframe.

“Today's biggest supercomputers consume enough electricity to power a small city,” Conway said. “In some cases where the infrastructure is lacking, adequate power is simply not available to an HPC site at any price.”

“The biggest challenge will likely be the cost and availability of adequate electricity to operate computers this large and power-hungry,” he said.

“[Given] the sharply rising costs of oil and gas, it is no surprise that power and cooling has become one of the top few issues for HPC users.”


iTnews: University of Sydney takes records management online

Thursday, September 11, 2008


As a journalist at iTnews:

The University of Sydney has completed the second stage of a decade-long move from paper to an electronic records keeping system.

Working with systems integrator Alphawest during the past four years, the university has implemented the Records Online 2 Web interface, which allows staff to create and manage student records online.

The implementation, which is estimated to have cost $160,000, has already delivered an annual return on investment of $675,000.

According to the University of Sydney’s acting registrar, Tim Robinson, Records Online 2 currently has 1,500 users out of a prime audience of 2,800 administrative staff.

It is estimated to have saved 10 minutes per week of physical filing work and reduced the need to create 10 or more physical files each year for 40 percent of its users.
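
The article does not say how the $675,000 figure was calculated, but a rough back-of-envelope shows the kind of arithmetic involved, assuming the 10-minute weekly saving applies to each of the 1,500 current users and a loaded staff cost of about $52 an hour -- both assumptions of this sketch, not figures supplied by the university.

    # Back-of-envelope check of the reported annual saving; the per-user scope and
    # hourly rate are illustrative guesses, not figures from the University of Sydney.
    users = 1_500
    minutes_saved_per_user_per_week = 10
    weeks_per_year = 52
    loaded_hourly_cost = 52.0          # assumed average loaded staff cost, dollars per hour

    hours_saved = users * minutes_saved_per_user_per_week * weeks_per_year / 60
    annual_saving = hours_saved * loaded_hourly_cost
    print(f"{hours_saved:,.0f} hours/year, roughly ${annual_saving:,.0f}")   # about $676,000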

Staff members are being introduced to the system in groups, in line with the project’s cautious approach to change management.

“One of the fundamental things we decided from the outset was that we weren’t going to go for the big king hit,” Robinson told iTnews.

“Making things simple is always time consuming,” he said, describing an aim to make the records keeping system user-friendly while maintaining a suitable privacy and access regime.

“An enormous amount of thought went into it; we knew that if it wasn’t simple, it would not be used,” he said.

The university took its first step away from its traditional paper registry in 2000, with the implementation of the Captira management tool in the back end.

Online search functionality was added shortly after, but it wasn’t until 2007 that users were able to access 98 percent of records management functions online.

The latest implementation of Records Online 2 now integrates the university’s records system with its business system, as well as allowing staff to access records via Microsoft Outlook.

Robinson noted that each stage of the implementation required ‘a lot’ of training for staff, estimating the cost of training to equal that of hardware and software combined.

“With an organisation as big as the Uni, we knew that we couldn’t train all these people in one go,” he told iTnews, describing one-on-one, roadshow, and small group training options.

After all administrative staff members are trained, the next challenge for Robinson’s team will be to introduce Records Online 2 to academic staff, so they can access student records more efficiently.

Looking forward, Robinson expects the university to be looking to update its records management system again within the next three years.

He mentioned Microsoft Office SharePoint and an Open Source system that is in development at Curtin University as candidates.

“We’re always looking at what the next stage is,” he said. “I’m guessing by three years time, there will be some significant changes, and we’d be looking to update our systems.”


iTnews: Researchers find racial bias in virtual worlds

As a journalist at iTnews:

Real-world behaviours and racial biases could carry forward into virtual worlds such as Second Life, social psychologists say.

According to a study that was conducted in There.com, virtual world avatars respond to social cues in the same ways that people do in the real world.

There.com is a relatively unstructured virtual world that brands itself as an online getaway where users can hang out with friends and explore an immense, unusual landscape.

Users, who were unaware that they were part of a psychological study, were approached by a researcher’s avatar for either a ‘foot-in-the-door’ (FITD) or ‘door-in-the-face’ (DITF) experiment.

The FITD technique works by first asking a participant to comply with a small request -– which, in this experiment, was “Can I take a screenshot of you?” -- followed by a moderate request: “Would you teleport to Duda Beach with me and let me take a screenshot of you?”

Participants who fulfilled the small request are expected to see themselves as helpful, and thus to be more likely to fulfil the subsequent larger request.

The DITF technique works in the opposite way: the experimenter first makes an unreasonably large request, to which the responder is expected to say no, followed by a more moderate request.

In the DITF condition, that large request was to have screenshots taken in 50 different locations, which would have required about two hours of teleporting and travelling.

As the researchers expected, DITF participants were found to be more likely to comply with the moderate request when it was preceded by the large request, than when the moderate request was presented alone.

But while results of the FITD experiment revealed no racial bias, the effect of the DITF technique was significantly reduced when the experimenter took the form of a dark-skinned avatar.

White avatars in the DITF experiment received about a 20 percent increase in compliance with the moderate request; for dark-skinned avatars, the increase was 8 percent.

According to the researchers, skin colour had no effect on FITD experiments because the elicited psychological effect is related to how a person views himself or herself, and not others.

However, the DITF technique is said to reflect a psychological tendency to reciprocate the requester's ‘concession’ from a relatively unreasonable request to a more moderate request, and thus is affected by whether the requester is deemed worthy of impressing.

The finding is consistent with previous DITF studies -- in real and virtual worlds -- that demonstrate that physical characteristics, such as race, gender and physical attractiveness, affect judgment of others.

Numerous studies done in the real world show that people are more uncomfortable with minorities and are less likely to help them.

“This study suggests that interactions among strangers within the virtual world are very similar to interactions between strangers in the real world,” said Paul W. Eastwick, who conducted the study at Northwestern University.

“You would think when you're wandering around this fantasyland … that you might behave differently,” he said. “But people exhibited the same type of behaviour -- and the same type of racial bias -- that they show in the real world all the time.”


iTnews: Kaspersky Lab patents dynamic antivirus technology

Wednesday, September 10, 2008


As a journalist at iTnews:

Kaspersky Lab has patented a method of antivirus scanning that assesses files according to when and how they first appeared on the computer.


The method has been granted Patent No. 7,392,544 by the U.S. Patent and Trademark Office. Internally, it has been unofficially named ‘FirstTimeCheck’.

By dynamically varying the scanning level and set of tools used for file scanning, FirstTimeCheck is expected to minimise the impact of antivirus scanning on the overall system performance.

The technology also makes it possible to extend the time taken to scan new files and files received via ‘high-risk’ sources such as suspicious Web sites, P2P networks and e-mail attachments.
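
Kaspersky has not published the patented logic, but the behaviour described -- scan harder when a file is new or arrived through a risky channel, and lightly when it has already been seen -- can be sketched as a simple policy function. The source categories, scan tiers and age cut-off below are assumptions for illustration, not details from the patent.

    from datetime import datetime, timedelta

    HIGH_RISK_SOURCES = {"p2p", "email_attachment", "suspicious_web"}   # assumed categories

    def choose_scan_level(first_seen, source):
        """Pick how aggressively to scan a file based on when and how it first appeared.
        The tiers ('quick', 'standard', 'deep') are invented for this sketch."""
        age = datetime.now() - first_seen
        if source in HIGH_RISK_SOURCES:
            return "deep"                 # spend extra time on risky channels
        if age < timedelta(days=1):
            return "standard"             # new, but from an ordinary source
        return "quick"                    # previously seen file: minimal overhead

    print(choose_scan_level(datetime.now(), "p2p"))                          # deep
    print(choose_scan_level(datetime.now() - timedelta(days=30), "local"))   # quick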

“In the past, antivirus products were scanning files with a standard set of technologies,” said Nikolay Grebennikov, Kaspersky Lab’s Vice-President for Research & Development.

“Now, the antivirus arsenal includes many new technologies, which significantly raise detection rate, but use more RAM and CPU time.”

“The standard solution for this situation is to limit the usage of these technologies to a level where antivirus scanning will not affect users’ activities too much," he told iTnews.

Grebennikov explained that the ‘standard solution’ to limit the resource usage of antivirus programs has been to set strict limits on the time taken to scan files.

He described such methods as a ‘compromise’ that may affect the quality of scanning and decrease the level of user protection.

“The main advantage of our method is that it minimises the impact on the overall system performance,” Grebennikov told iTnews.

“Normally such a deep system scan would greatly affect performance of the computer it is running on, but our new technology manages to bypass this negative effect to ensure maximum system performance all the time,” he said.

Kaspersky Lab is working on implementing FirstTimeCheck in current products and plans to implement the technology in the next version of its consumer products.


iTnews: Red Hat to launch Enterprise Messaging, Realtime, Grid

As a journalist at iTnews:

Red Hat is launching a new platform for managing next-generation architecture in enterprise data centres and High Performance Computers (HPCs).

Dubbed Enterprise Messaging, Realtime, Grid (MRG), Red Hat’s new offering is said to highlight a long-term industry trend towards utility computing.

The software integrates a standardised messaging platform and realtime capabilities with grid management technology, and could speed communications, promote interoperability and enable more flexible load and resource management.

According to Red Hat’s Global Product Manager Bryan Che, current e-mail and messaging offerings tend to be diverse and specialised, and to lack interoperability.

In Sydney this week to meet with Red Hat’s local team and customers about Enterprise MRG, Che explained that a lack of interoperability could mean that organisations are faced with architectural challenges and are unable to reap the benefits of a holistic messaging ecosystem.

“If you look at the messaging space today, there’s no standard for messaging,” he said. “This means that if you buy one product from Vendor A, and another from Vendor B, they are not interoperable with each other.”

“[Enterprise MRG] is a pretty transformative stand for the industry,” he said. “We believe there to be some pretty transformative effects in how applications are written and how they operate with each other.”

To date, Red Hat’s messaging platform has been deployed by organisations such as Goldman Sachs and Credit Suisse, as well as JP Morgan, with whom the software was developed in collaboration.

The messaging platform is supported by a realtime component, which enables deterministic performance of a system using fine-grained control.

Red Hat’s realtime technology was developed in collaboration with the upstream Linux kernel community and has been optimised for use in the standard Red Hat Enterprise Linux environment.

The grid component of Enterprise MRG builds on the University of Wisconsin’s Condor Project, which was first developed in the 1980s and now is used in the Open Science Grid and IBM’s Blue Gene supercomputers.

The technology is expected to enable organisations to manage their resources and load with greater dynamic flexibility.

Similar to virtualisation, grid technology handles load using a shared pool of resources.

However, grid technology also adds the ability to integrate traditional server resources with others, such as unused desktops and cloud computing capacity.

“The virtualisation model is the first step towards the eventual goal of ‘I want to be much more efficient and how do I best manage my resources’,” Che explained.

“I think there is a lot of converging trends,” he said, describing Enterprise MRG and a rising demand for utility computing.

“Fundamentally, we’re going beyond HPC; what we’re looking to do [with Enterprise MRG] is provide you with the capability to take advantage of any company resource available to you,” he said.

Since it was publicly announced in December 2007, Enterprise MRG has been made available via limited release in North America and Europe.

Enterprise MRG will be available in Australia by the end of the year, when version 1.1 of the software is released globally.


iTnews: Eyeball reflexes to improve biometric authentication

Tuesday, September 09, 2008


As a journalist at iTnews:

Researchers are developing a new approach to user authentication that they say cannot be spoofed, even with the most sophisticated contact lenses or surgery.

The method is based on eye saccades, which are the rapid, tiny, reflex movements of a user’s eyeball.

Since reflexes are said to be beyond conscious control, the researchers expect it to be impossible for attackers to adequately replicate an authorised individual’s saccades.

“Biometric information can easily be leaked or copied,” said Masakatsu Nishigaki, a researcher at Shizuoka University in Japan.

“It is therefore desirable to devise biometric authentication that does not require biometric information to be kept secret,” he said.

Nishigaki and his research partner Daisuke Arai are working towards the long-term aim of creating an authentication system which is directly based on differences in human reflexes.

However, the researchers have so far been unable to identify any human reflexes that differ enough between individuals to be used for user authentication.

Currently, the researchers are investigating what Nishigaki describes as an ‘indirect’ method of extracting differences in human reflexes.

The method examines the blind spot, which is a fixed region on the retina of the eye, and determines its position relative to the direction of the gaze.

User authentication is carried out by displaying a target within and outside a person’s blind spot, and using eye tracking technology to measure the reflex time taken until the eye movements occur.

As the method still relies on the shape of the user’s eye, an impostor may be successful with the use of surgery or sophisticated contact lenses, the researchers say.

However, the researchers expect each pattern of responses to be unique to the individual and the method to reduce greatly the likelihood of spoofing.
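
The researchers describe the method only in outline; the sketch below illustrates one simple way such a check could work, comparing measured reflex times at a few target positions against an enrolled profile. The positions, timings and tolerance are all invented for illustration.

    # Compare measured saccade reflex times (milliseconds) against an enrolled profile.
    # Target positions are labelled by where they fall relative to the enrolled user's
    # blind spot; all numbers are invented for this sketch.
    enrolled_profile = {"inside_blind_spot": 410, "edge_of_blind_spot": 245, "outside": 210}
    TOLERANCE_MS = 25   # assumed per-position tolerance

    def matches_profile(measured):
        """True if every measured reflex time is within tolerance of the enrolled value."""
        return all(abs(measured[pos] - expected) <= TOLERANCE_MS
                   for pos, expected in enrolled_profile.items())

    genuine = {"inside_blind_spot": 402, "edge_of_blind_spot": 251, "outside": 204}
    impostor = {"inside_blind_spot": 230, "edge_of_blind_spot": 240, "outside": 215}  # blind spot elsewhere

    print(matches_profile(genuine))    # True
    print(matches_profile(impostor))   # False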

In a preliminary experiment with ten test subjects, the researchers achieved a spoofing success rate of zero with their method.

As the authentication system requires a costly point-of-gaze detection device, Nishigaki expects the system to be used only for situations that require highly sensitive, important information to be secured.

Further research is required to investigate the consistency of saccade response times, position and size of the blind spot, and conduct trials involving larger groups of people, Nishigaki said.


iTnews: Australia gears up for Software Freedom Day 2008

Monday, September 08, 2008


As a journalist at iTnews:

Free and Open Source Software zealots in Australia will celebrate Software Freedom Day 2008 with free software, CDs and seminars this month.

On September 20, teams from metropolitan and regional Australia will join more than 300 teams from around the world to celebrate the annual, grassroots event.

Although all teams share a common cause, each team is run independently and will celebrate Software Freedom Day in its own way.

Melbourne event organiser Donna Benjamin, of the Linux Users of Victoria, described the day as a ‘Think Global, Act Local’ type of celebration.

“Each team plans and designs their own event, with little to no intervention or interaction with other teams,” she explained.

“So, it's hard to know what's happening in each city -- but at the same time, it means each event is relevant to the local community where it takes place.”

According to the Software Freedom Day Web page, events have been planned in Adelaide, Bathurst, Canberra, Tasmania, Melbourne and Newcastle.

Canberra Linux Users Group plans to distribute free software on 8,000 CDs that have been funded by Linux Australia.

Meanwhile, the Launceston team in Tasmania will be staging a day-long event at the Gateway Baptist Church Hall to showcase free and Open Source software use for the home and business.

Melbourne’s Software Freedom Day celebrations are supported by the Victorian Government, and will include a series of free talks, live software demonstrations, and free software giveaways.

As well as promoting Open Source software by groups such as Red Hat and Fedora, Melbourne event organisers are promoting the freedom to run, copy, distribute, change and improve software without needing to ask or pay for permission to do so.


iTnews: Energy level probe could speed quantum computing's advent

Friday, September 05, 2008


As a journalist at iTnews:

A newly developed technique of characterising artificial atoms could speed the development of quantum computers.

The technique, called 'amplitude spectroscopy', allows researchers to probe the energy level structure of artificial atoms so that they can be used in quantum computing as quantum bits, or qubits.

"There are several things one needs to determine about a candidate technology before it can be considered a qubit technology," said William Oliver, who developed the technique with a team of researchers at the Massachusetts Institute of Technology (MIT).

"One key piece of information is the energy-level structure of the qubit," he told iTnews.

"Although only the lowest two energy levels are utilized as the qubit, the other energy levels may influence its behaviour," Oliver explained. "Knowing where those levels are facilitate the engineering of the qubit and its control."

Due to the laws of quantum physics, an atomic-scale system may exist in multiple energy states at any one time, which is known as a 'superposition' of states.

While traditional techniques exist for characterising the states of atoms and molecules, the energy levels of artificial atoms occupy a wide swath of frequencies that can be difficult to measure.

Using amplitude spectroscopy, the researchers have been able to characterise quantum entities over frequencies that range from tens to hundreds of gigahertz.

The technique works by measuring the interference patterns resulting from the superposition of multiple energy levels in an atomic-scale system. Artificial atoms are probed at a single, fixed frequency that pushes the atom through its energy-state transitions, and the response is measured.

Although practical quantum computers remain far off, Oliver expects them also to rely on interference to perform calculations.

"Quantum computers are still rather far from commercial realisation; we remain today at the single or few-qubit level," he told iTnews.

"Techniques like amplitude spectroscopy allow us to look carefully at the structure of our qubits.

"Through clever algorithms, the net result of the interference should coalesce the output of the computer into a single state that we can easily measure with high probability," he said.


iTnews: Researchers warn of Orwellian technology

As a journalist at iTnews:

The combination of ICT and pervasive computing could enable individual activity to be monitored even more closely than George Orwell imagined in his novel 1984, social scientists warn.

While users have reported digital privacy concerns in several surveys, they are not taking appropriate measures to protect themselves or their data, according to social psychologist Saadi Lahlou.

Describing a ‘privacy dilemma’ brought about by the fact that technology requires personal information to deliver better or customised services, Lahlou warns that such data may later be used in another context, and against users’ interests.

Lahlou mentioned Gmail as an example of his personal experience with the privacy dilemma.

“I feel that it is actually not reasonable to leave all my mail in someone else’s hands; but I am, as most of us, taken in this privacy dilemma,” he told iTnews.

“It is such a good indexation service of my own mail and so easy to use that I prefer not to think about the possible consequences of misuse or accident.”

He referenced ‘the system’ of interconnected data-collection devices including mobile phones, Web sites and surveillance cameras that can search, analyse and predict the actions of individuals.

“We are creating a system that will be aware of all that we do … virtually from cradle to grave,” Lahlou wrote in the journal Social Science Information. “The system as a whole will know more about us than we know about ourselves.”

Besides the risks to individuals of having their data used inappropriately, Lahlou suggests that an interconnected system could also pose dangers to culture and organisations.

“Subjects who are aware of being constantly monitored with their actions traced will tend to behave exactly according to the rules, in what is called ‘agentic’ manner,” he explained.

“No rule can in every case exactly encompass the complexity of reality,” he told iTnews. “There is a need for some free space of initiative when one should go to adapt the rules to be more efficient.”

Lahlou highlighted common acts such as lying to be polite, or ‘playing with rules’ in a professional capacity, as examples of situations in which privacy is necessary.

In such situations, technology that enables users to use different sets of information about themselves in different situations could be beneficial, he suggests, proposing a new definition of privacy, termed ‘face-keeping’.

“We all have many faces -- combinations of role and status -- but each one is used only in some settings,” he explained.

He suggests that a new set of guidelines be developed for system designers that emphasises what designers should do, rather than unrealistically focussing on control.

“We are all responsible for the world we build; it would be foolish to lock ourselves in a straightjacket of continuous control,” he told iTnews.

“But the user should not always be the one who carries the burden of protection,” he said.

“[Companies] should include privacy in the design specification of the software they build or buy, and not merely consider this as a cumbersome constraint.”


iTnews: Fighting fire with fire

Thursday, September 04, 2008


As a journalist at iTnews:

The Web is a pretty nasty place, according to reverse engineer and privacy advocate Mike Perry -- and he should know.

At underground hacker convention DEFCON last month, Perry revealed vulnerabilities in cookies used by sites such as Gmail, Facebook and LinkedIn.

As if publicising the security flaws weren't enough, Perry will be releasing an automated hacking tool that exploits them.

The self-proclaimed 'mad computer scientist' spoke with iTnews about the vulnerability, his plans, and the online security landscape.

What security issues will be exposed with the release of your https hacking tool?

There are actually two vulnerabilities here. The first is that many sites do not secure their content via https past the initial login page. This allows an attacker to steal their users' cookies and impersonate them on the local network whenever they use the site.

A tool to do this (Robert Graham's 'Hamster') has been circulating for a year, but there has been no response from the major sites.

The second vulnerability is that many sites that do use https past the login page do not mark their cookies as 'secure'. This is what allows an attacker to induce a user's browser to transmit those cookies over unsecured, regular http connections, where the attacker can observe them and impersonate the user.
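For readers who run web applications, the remedies Perry alludes to are small. The sketch below is illustrative only -- it is not code from any of the sites named, and the host name and cookie value are placeholders -- showing a plain-http listener that only redirects to an https origin, plus a session cookie issued with the Secure and HttpOnly attributes so the browser never sends it over an unencrypted connection.

    import { createServer } from "http";

    // Fix one: anything arriving over plain http is redirected to the https
    // origin, so no application content or cookies ever travel in the clear.
    // "mail.example.com" is a placeholder host.
    createServer((req, res) => {
      res.statusCode = 301;
      res.setHeader("Location", `https://mail.example.com${req.url ?? "/"}`);
      res.end();
    }).listen(80);

    // Fix two: the https application issues its session cookie with the
    // Secure attribute (and HttpOnly, keeping scripts away from it) --
    // exactly the flag the second vulnerability says sites forget to set.
    function sessionCookieHeader(token: string): string {
      return `session=${token}; Path=/; Secure; HttpOnly`;
    }
    // e.g. response.setHeader("Set-Cookie", sessionCookieHeader("opaque-token"));

Marking the cookie 'secure' costs a single attribute in the Set-Cookie header, which is why the continued absence of the flag on major sites has drawn Perry's criticism.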

Why are you releasing your https hacking tool to the public?

There are two issues I am trying to tackle here. One is to launch a more direct assault against the trend towards 'security theater' -- providing the show of security to people while not actually protecting them at all.

This is exactly what websites exposed to the first vulnerability are doing, and have been doing in the face of a publicly available exploit for over a year.

The second goal is to ensure that the second vulnerability is well publicised and well understood - because it is a subtle one that even many web developers do not consider.

Both of these goals have required the threat of an automated tool before any real progress could be made towards addressing them.

Again, I waited a full year after announcing the vulnerability without a proof of concept exploit, and nothing happened. It was only the existence of and the threat of release of the tool that has caused things to move forward.

When will the tool be released?

I am still waiting for a limited time while major sites (such as Google and Microsoft) continue to work on fixing the issue.

However, eventually we'll reach the point at which the major sites that intend to fix the issue have done so, and all we have left are sites that have no intention of investing in the security of their users, or at least no intention of doing so in a timely fashion.

At this point, I will make the tool more widely available, and attempt to use the publicity to encourage people to move away from these sites towards their more secure counterparts.

How easily exploitable is the https cookie vulnerability? Do you expect there to have been many accounts hacked this way so far?

I have seen anecdotal accounts of 'security theater' webmail accounts (such as Yahoo mail) being hijacked in the comments sections of various articles about the tool.

These were likely performed by the 'sidejacking' tool, or a similar independently derived method, since my tool has only been shown to a limited number of people, and even then was only in a reliable, working state very shortly before DEFCON.

So yes, people have begun to exploit this vulnerability even though I have delayed my tool from public release.

What information can typically be obtained using the https cookie vulnerability?

The risks are quite large for affected sites, and very frequently run all the way up to complete identity theft and access to financial data. An incomplete list of sites that are vulnerable (including the type of information available) is here.

Have you had any discussions with owners and administrators of large vulnerable sites so far?

The only sites to even respond to my attempts to contact them have been Google, Microsoft, Twitter, and LinkedIn.

LinkedIn has given several indications that they do not intend to provide SSL protection for the ability to edit profiles or view user messages on the site. The exact statement I received was that ‘this is an attack against the end-user, not the web application itself’, which I suspect is the attitude many sites have towards this issue.

How have Web sites like Gmail, Facebook and Hotmail been able to get away with this vulnerability in the past?

I think it stems from three factors: lack of awareness on the part of their users, a desire for ‘usability’, and a desire to avoid the expense of providing secured connections to their users.

To their credit, Gmail has been the most proactive about fixing this: in fact they are the only major email provider to offer complete SSL at all. It's just that their multi-service single sign-on system has made it difficult to properly implement this securely. They are working on fixing this, though.

What is your opinion of the security of most popular consumer Web sites?

In general the web is a pretty nasty place. A lot of this stems from the way the web was designed: as an open, stateless, and mostly unauthenticated medium where sites can load content from other sites, refer their users to other sites, and have them execute almost arbitrary actions automatically.

This requires each site to do a lot of custom, independent legwork to secure things from this originally open state, and a lot of them end up getting bits and pieces wrong -- sometimes even fundamental pieces that are fully supported in major browsers, such as the cookie issue we see here.

As more and more people - Internet pros and newbies alike - begin to use social networking Web sites, do you think online security demands will change?

I'm not sure. I certainly hope so. However, while Internet security pros are well aware of these issues, they are a minority.

Without widespread publicity to create a market differentiator around web security, it is going to be hard for people to 'vote with their feet' to avoid insecure sites.

By taking this issue to the public and releasing this tool, I am trying to create this differentiator. It's my opinion that sites that are willfully negligent in securing their users do not deserve any customers at all.

What does a reverse engineer like yourself do? What sparked your interest in privacy, security and censorship resistance?

In general, reverse engineers help to bridge knowledge gaps by figuring out how systems behave so that products and services can interoperate. At least this is the most common legal form of reverse engineering.

I actually came to privacy, security, and censorship resistance through my independent study of reverse engineering at university.

Right around the turn of the century, all of these ideas came under attack in my country [USA] via rather draconian laws such as the PATRIOT Act and the DMCA. Because of the vague nature of these laws and the climate of surveillance and fear, it was necessary to be very careful about what I studied and how, while the legal climate stabilised.

It has since become a bit more clear exactly what is legal and what is not, but for a student facing these very vague and overreaching laws while just trying to learn, it was a very frightening time, and I naturally sought ways to protect myself.

We still have a long way to go, of course. Many security professionals and computer researchers are still afraid to travel to the USA, and several that do face extreme difficulty at customs. I've even heard of cases where they have been flat out refused entry.

What is your opinion of privacy - or lack thereof - in today's world? What is your opinion of information-rich companies like Google?

It's pretty scary. Many companies are compiling a large amount of data about us, and often simply because we willingly cede it over to them without thinking about the consequences.

Privacy policies are often a joke, riddled with exceptions, loopholes and rapidly changing terms, and I believe they are not even regarded as binding contracts by the courts.

I don't think society has had time to evaluate the consequences of all of this data being accumulated by these organisations. Whether it is stolen or leaked, used in lawsuits, divorce cases or custody battles, or turned into a political weapon to manipulate our public officials, the gathering (and often sale) of all this data is very dangerous, even if it is held under the strictest of safeguards.

It is my hope that the more enlightened companies will begin to realise the importance of allowing people to 'opt-out' of this constant surveillance.

Google in particular is showing some signs of understanding the need for projects like Tor (an anonymity, privacy, and censorship resistance network which I volunteer for) to exist and mature, to allow this 'opt-out' option. But only time will tell how it will all shake out.

more

iTnews: Scientists to collaborate on Google's OpenSocial

Wednesday, September 03, 2008


As a journalist at iTnews:

Researchers have launched a new social network to support long-distance collaboration between scientists.

Built on Google’s OpenSocial platform, the newly launched Laboratree Research Management System joins tools such as Facebook and IBM’s SameTime in the online social networking and collaboration arena.

Unlike consumer or commercially available tools, however, Laboratree has a heavy focus on academic research and includes features such as scientific applications, data and document management, and project messaging.

According to its developers at the Indiana University School of Medicine, Laboratree aims to facilitate day-to-day research activities in a way that eliminates barriers to entry by using the familiar structure of social networking.

“My thinking is that we should actually try to do things with a social network -- that is, we should consider the social network the model by which we do things,” said Sean Mooney, who is an assistant professor of medical and molecular genetics at the university.

“We didn’t use an existing network because we created very sophisticated group and project features not offered by other sites,” he told iTnews. “Our focus is on providing scientists with tools specifically useful for researchers.”

Laboratree has been ten months in the making, and is said to have grown from a desire for tools to solve organisation, collaboration, messaging, and document control issues in Mooney’s own lab.

The system is completely Web-based and runs on Linux-based webservers. It allows scientists to maintain their own unique profile, create groups for their labs, manage individual projects, and invite users to collaborate as ‘colleagues’.

In addition to pre-installed features, the system supports embedded applications built on the OpenSocial platform.

“We believe that the culture of science tends toward more openness,” Mooney told iTnews.

“For example, many publicly funded researchers are required to share their data after they publish, and many scientific journals have adopted open access models.”

“We see the same for OpenSocial,” he said. “If an application developer develops a tool for Laboratree, other scientists are welcome to embed it somewhere else -- Orkut for example.”
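For developers, an embedded Laboratree application would look much like any other OpenSocial gadget. The sketch below is purely illustrative -- the element id, function name and rendering details are hypothetical and not taken from Laboratree -- and shows the kind of OpenSocial 0.8 JavaScript (written here as TypeScript, with the container-supplied globals declared) a gadget might use to fetch the viewing scientist and their colleagues.

    // Illustrative sketch only: a gadget body that runs inside an OpenSocial
    // 0.8 container, not standalone. The container supplies the opensocial
    // and gadgets globals, declared here for the type checker.
    declare const opensocial: any;
    declare const gadgets: any;

    function loadColleagues(): void {
      const req = opensocial.newDataRequest();

      // Ask the container for the person viewing the gadget...
      req.add(req.newFetchPersonRequest(opensocial.IdSpec.PersonId.VIEWER), "viewer");

      // ...and for the people connected to them (a site's 'colleagues' map
      // onto OpenSocial's generic FRIENDS group).
      const idSpec = opensocial.newIdSpec({ userId: "VIEWER", groupId: "FRIENDS" });
      req.add(req.newFetchPeopleRequest(idSpec), "colleagues");

      req.send((resp: any) => {
        const viewer = resp.get("viewer").getData();
        const colleagues = resp.get("colleagues").getData();
        // Rendering is up to the gadget; here we simply list display names
        // into a hypothetical placeholder element.
        let html = `<p>${viewer.getDisplayName()}'s colleagues:</p><ul>`;
        colleagues.each((person: any) => {
          html += `<li>${person.getDisplayName()}</li>`;
        });
        html += "</ul>";
        document.getElementById("content")!.innerHTML = html;
      });
    }

    gadgets.util.registerOnLoadHandler(loadColleagues);

Because such a gadget talks only to the container's generic people API, the same code could, as Mooney notes, be dropped into Orkut or any other OpenSocial container.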

Response to Laboratree within the scientific community has been ‘very enthusiastic’ so far, Mooney said. At the time of launch, the site had around 500 users and visitors from 32 countries.

Looking forward, Mooney expects Laboratree’s user base to grow among the global scientific community.

“The stereotype of the lab bound scientist who does not interact with others is a bit of a misnomer,” he told iTnews.

“Today, science is heavily collaborative, and big science is only enabled by many experts working on various aspects of that big problem.”

“We are solving a problem most scientists share,” he said. “As we grow, it will be interesting to see where our user base is strongest.”

more

iTnews: Digital Education Revolution revolts

Tuesday, September 02, 2008


As a journalist at iTnews:

The Rudd Government’s $1.2 billion Digital Education Revolution could be nothing more than a futile grab at an intangible, unsustainable future, experts have warned.

Announced in February as a part of Labor’s budget forecast, the Digital Education Revolution aims to provide all students in Years 9 to 12 with access to ICT.

The program pledges a total of $100 million over the next four years towards the provision of high-speed broadband connections to schools.

The remaining $1.1 billion will go towards the National Secondary School Computer Fund, which is expected to supply schools with enough computers to reach a minimum ratio of one device for every two students.

If successful, the Digital Education Revolution could see Australia overtake the U.S. state of Maine as home to the largest digital education rollout of its kind in the world.

But despite the Rudd Government’s efforts, the Digital Education Revolution has been labelled by some as ‘unsustainable’ and ‘appalling’.

“Every kid having a laptop is a great idea, but it’s really only a small part of the problem,” said Bruce Dixon, a self-proclaimed technology evangelist and the president of the Anytime Anywhere Learning Foundation.

“[This is] a fact that seems to have escaped the attention of our illustrious leaders,” he said.

Speaking at the Expanding Learning Horizons 2008 conference in Lorne this week, Dixon criticised Labor’s rollout strategy in which schools with the least amount of technology are the first to receive funding.

“Basically, the Government has said they’re going to give the most money to the schools who are least ready for it,” he said.

“I think it’s one of the most appalling digital education initiatives in the world,” he said, speculating that the program was created with minimal consultation with the industry, and an ill-defined implementation plan.

As a positive example, Dixon highlighted discussions with Singapore’s Minister of Education in which the country’s educational goals -- rather than technology per se -- took centre stage.

Mary Burns, who is a Senior Technology Specialist at the U.S.-based Education Development Center (EDC), agrees that digital education programs should focus primarily on educational goals instead of hardware and software tools.

Studies by EDC have highlighted student achievement, the learning of transformative skills, and digital equity as indicators of a digital education program’s success.

“I think with any major technology infusion, it’s important to have conversations about vision and change,” she said at the conference, noting that the mere provision of computer hardware does not necessarily ensure digital equity among students.

Burns cited a 2006 study of home broadband usage that found that while children in high income families tend to use their computers for educational purposes, children in low income families tend to use their computers for entertainment.

“It’s not just about how technology will help us; it’s about how do we do this and how we assess if kids are learning what we want them to be learning,” she said.

“We just haven’t figured out how to use these things well,” she said.

more

iTnews: Teachers urged to go virtual

As a journalist at iTnews:

Web 2.0 technologies such as blogging, wikis and virtual worlds are disrupting traditional ways of teaching, education experts claim.

According to technology and education consultant Sheryl Nussbaum-Beach, educators have yet to adopt ‘not-quite-now’ technologies that their students already embrace.

Speaking at the Expanding Learning Horizons 2008 conference in Lorne this week, Nussbaum-Beach described a future that would take place in ‘immersive worlds that haven’t been invented yet’.

She explained that although parents and teachers have traditionally snubbed virtual worlds such as Second Life and World of Warcraft, these worlds could be platforms for learning skills such as networking, entrepreneurialism, and problem solving.

Nussbaum-Beach’s claims echo those of Gartner, which last year predicted that 80 percent of all Fortune 500 companies will be using virtual worlds by the year 2011.

“Immersive worlds are where our students are spending their time; we [educators] need to be there too,” she said. “The more options we make available to students, the better it [their education] is.”

But virtual worlds are only a small part of what is expected to be an upwards trend in the value of social and intellectual capital.

Labelling social and intellectual capital as ‘the new economic values’ in the global economy, Nussbaum-Beach highlighted the importance of leveraging collective knowledge through social networking and relationship building.

Blogs and microblogging sites such as Twitter and Plurk were mentioned as Web 2.0 technologies that could help students build networks of contacts -- as were the popular social networking sites, Facebook and MySpace.

Addressing concerns of frivolity, Nussbaum-Beach described MySpace as a ‘Lord of the Flies’ scenario in which unchaperoned children’s online personas had been ‘distorted by what kids perceive to be pop culture’.

She expects issues of privacy, safety and ethics to be lessened with digital education programs that teach students rules of etiquette, literacy, and street smarts that apply online.

“Between Facebook and MySpace, MySpace is considered the ghetto of the two – so MySpace is where we [educators] need to be,” she said.

“If you don’t know how to use these tools online, you won’t be able to give it [that knowledge] away to your students.”

“Nobody’s indoctrinating kids to be good digital citizens online,” she said. “No one is teaching these kids that their future employers are going to be Googling them, and this is what they will see.”

To groom students as critical thinkers, Nussbaum-Beach suggests that educators develop a holistic network of learning in which school is merely one node.

Informal education, performance, games, mentors, communities, and self-learning were mentioned as other avenues through which students can grow.

“I think that we’re seeing a change in the educational landscape,” she said. “We are the first generation of teachers who are preparing kids for jobs that haven’t even been invented yet.”

“The truth is, computers will never replace teachers, but teachers who are able to use technology to learn, build, [and] create are going to replace those who are not,” she said.

more