iTnews: Can Australia lead global R&D?

Monday, June 23, 2008


As a journalist at iTnews:

The Australian technology industry needs the support of the Government to secure an innovative advantage for the country, according to the Australian Information Industry Association (AIIA).

Discussing a recent study of scientific research and development by U.S. think-tank RAND, AIIA Chief Executive Officer Ian Birks said leadership in scientific innovation would be “critical to the future of the Australian economy”.

“It is very important that Australia is competitive in the emerging global information economy,” Birks told iTnews.

“It will be our ability to establish ourselves as a centre for innovation and technology excellence that will to a large degree determine our success,” he said.

But in the global arena, Australia may be lagging behind in generating sufficient opportunities to attract and retain the world’s top talent.

According to the RAND study, which was sponsored by the U.S. Federal Government, the U.S. is currently considered the dominant world power in science and technology.

RAND researchers James Hosek and Titus Galama found the U.S. to account for 40 percent of global spending on scientific research and development (R&D), 70 percent of the world’s Nobel Prize winners, and three-quarters of the world’s top 40 universities.

By analysing wage data about foreign-born science and engineering workers, the researchers concluded that the U.S. has become “increasingly reliant on foreign-born workers” to build and maintain its lead.

AIIA’s Birks attributed the historical success of the U.S. in attracting foreign talent to the country’s ability to finance a supportive policy environment.

“The reality is that talent gravitates towards the most interesting work and the best packages on offer,” he said. “Silicon Valley is an excellent example.”

Phil Robertson, Chief Operating Officer of NICTA, agreed.

“There is no doubt that the US is an example of a country that has used sustained long-term funding programs, such as DARPA and NASA, to underpin long-term research programs and fund the attraction of people from other countries to work in the US,” he told iTnews.

Noting the importance of technology as a driver of change across society and the economy, Robertson expects countries that have invested in ICT R&D to reap dual benefits: being technology producers as well as being at the forefront of technology use.

While Australia has previously settled for being a user -- and not a creator -- of technology, the situation is changing, Robertson said.

“We are now recognising more widely the importance of building an Australian ICT industry,” he said.

“By creating technology that is used across the world we will make greater economic gains than just productivity gains.”

“Government, research and industry need to work together to achieve this vision and I think there is growing momentum behind this,” he said.

In the U.S., economic fluctuations and tightening immigration restrictions could create a window of opportunity for other countries to take the lead in the technology industry.

In an interview with iTnews, RAND’s Hosek said that the U.S. government’s decision to reduce the annual cap on H-1B skilled immigration visas from 195,000 to 65,000 is “not a step in the right direction”.

And with the U.S. economic recession yielding unexpected reductions in federal and corporate revenues, Hosek expects R&D spending to decline.

“We view this as a globalised market for science and engineering development. Any country with a serious interest in scientific development is going to be a competitor,” he said, naming Europe, Brazil and China.

But according to NICTA’s Robertson, nations should approach technology R&D as a global collaboration and not a competition.

“The technology research industry is a global one, so we should no longer be thinking in terms of ‘brain drain’ or ‘brain gain’,” he said.

“What matters is the total amount being done in this country, and researchers will have a healthy flow between countries.”

Much of NICTA’s research is conducted in collaboration with research institutes around the world.

An estimated one-third of NICTA’s researchers are from overseas, a drawing power Robertson attributes to the organisation’s well-defined research areas, vibrancy, and funding certainty.

On a global scale, however, Australia’s funding of ICT R&D is low when compared with OECD peers.

According to the Australian Bureau of Statistics, overall funding for R&D stands at 1.76 percent of the country’s Gross Domestic Product (GDP), below the OECD average of 2.26 percent and the European Union’s 2010 target of three percent.

Applied research in ICT accounts for a mere 1.6 percent (roughly $21 million) of the Australian Government’s total expenditure of more than $1.3 billion on applied research across all industries.

“I think there is a growing awareness in Australia that international competitiveness in support for R&D is linked to our future economic success,” Robertson said. “We need to turn this awareness into increased funding for ICT R&D.”

“If Australia offers an environment to tackle new and exciting problems in a strategic way we will attract research talent with the drive to make a difference,” he said.

AIIA’s Birks pointed the finger of responsibility at government and industry bodies, which have the opportunity to create an open environment that attracts the investment of global companies.

As in the RAND analysis of the U.S. environment, policy initiatives that stimulate and support infrastructure, workforce and education will be critical to Australia’s success, Birks said.

R&D taxation incentives and industry development programs, such as the recently discontinued Commercial Ready scheme, were also mentioned as important components of an innovative environment.

“While there are many things that we are doing well, I think we are still in the middle of the pack in these areas,” Birks said.

“Australia has a recognised talent base and a strong spirit of entrepreneurship that both promote international investment in science and technology leadership in this country.”

“What we need to take advantage of current opportunities, however, is consistent policy and government support across the board to stimulate that investment and Australian innovation in general,” he said.


iTnews: The technologist's guide to the near future

Monday, June 16, 2008


As a journalist at iTnews:

Technological convergence has shaped many aspects of today’s world, from the creation of medical technologies to devastating events like 9/11, according to U.S. physicist and science fiction author Stanley Schmidt.

In his newly released non-fiction book, The Coming Convergence, Schmidt investigates how today’s rapid pace of innovation will produce technologies that could either greatly benefit human lives or lead to an Orwellian dystopia.

Schmidt spoke with iTnews about the book, his expectations, and the approaching technological “singularity” that has been predicted to herald the end of the human era.

What does the title of your book, 'The Coming Convergence', refer to?

Much of what happens in our world, from lifesaving medical technologies like CAT scans to disasters like the 9/11 World Trade Center attack, results from seemingly unrelated technologies coming together to do things that none of them could do alone.

This has happened in the past, but is now happening faster and producing more dramatic changes in how we live than ever before. The major “streams” now rushing together include biotechnology, information technology, nanotechnology, and cognitive science.

What will the technological convergence produce? How will it change businesses? How will it change lifestyles?

Nobody can say with certainty what it will produce. What we can do -- and must, if we are to reap the huge potential benefits while avoiding the equally great potential dangers -- is to imagine as many of the possible changes as we can, and try to steer our future toward the ones we like.

For just a few examples: We already see a great deal of business moving out of factories and stores into the internet and homes. We can expect this trend to become much more pronounced, and some manufacturing may move into space, or be done by in-home appliances called “synthesizers” that can make and recycle a wide range of goods.

In a best-case scenario, virtually everybody can live a longer, healthier, safer, more independent life than ever before. In a worst -- but just as possible -- case, we could get something much like George Orwell’s 1984, because any would-be Big Brother already has means at his disposal beyond anything Orwell imagined. On the other hand, so does the resistance.

Is there a point of convergence that we are working towards, or are we moving along a continuum? When can we expect to see a world in which the human brain is directly linked to machines?

It’s very difficult to say when or even whether any particular thing will happen, as many converging factors determine that -- as well as what catches on and what doesn’t.

If the right technological and social factors had come together, we might have had holographic television and flying cars by now, but we don’t. On the other hand, we do have the internet, which has already changed the world in ways far beyond any imagined in earlier science fiction.

We don’t know of any endpoint for what’s happening, but some writers, notably Vernor Vinge, have speculated that as different technologies push each other along, the curves of change will grow steeper and steeper, and may eventually reach a “Singularity.”

That’s a condition where change is so fast that civilisation is transformed almost instantly into something so radically different from what we now have that we would find it hard even to recognise.

How has convergence of past technologies shaped today's world and current technological advancements?

A vast amount of the world we live in results from such convergences. In the CAT scan example I mentioned earlier, we have a lifesaving diagnostic technique that depends on the combination of medical understanding, x-ray imaging, and high-speed computers that can work with very large amounts of data.

The 9/11 example resulted from big-building technology and aviation coming together in ways their inventors never anticipated.

And, of course, in most such cases we can’t point to a single person and say, “He invented big buildings” or “She invented airplanes.” Each of these phenomena is itself the result of earlier convergences.

Similarly, the technologies leading current change, such as biotechnology, depend on earlier technologies coming together, as x-ray diffraction, computing, and several other fields did to decipher the structure of DNA and the genetic code.

What is driving today's innovation? What reasons do we have to expect direct communication links between the human brain and machines, for example?

Sometimes it’s commerce, sometimes it’s military, sometimes it’s simply the desire to learn how nature works and what can be done with it. If somebody sees a way to make a profit by doing something new, he or she will. The public will decide whether it succeeds by deciding whether or not to buy it.

Most of the extrapolations I talk about in the book are just that: possibilities that we can see because we’ve already seen their early forms. I’ve already read, for example, news reports of experimental prosthetics being controlled directly by their wearers’ nervous systems. There was even one case of a monkey directly controlling a “robot monkey” located on the other side of the world.

Who is responsible for directing and driving technological advancement? What is your advice to these parties?

Everybody! It might seem that the topics I talk about in The Coming Convergence would be of interest only to techno-geeks, but in reality they’re going to profoundly affect the future lives of every one of us.

So anyone who votes, shops, travels, teaches, raises children, or otherwise participates in our civilisation needs to become aware of the possibilities and how he or she can affect which ones become realities.

Citizens can and must affect what governments do; consumers can and must affect what businesses do. Let’s try to do it wisely, and make the future as good as it can be.


iTnews: To code or not to code?

Friday, June 06, 2008


As a journalist at iTnews:

The rise of Open Source software adoption has brought with it increased awareness of non-proprietary programming technologies.

Free and open languages such as PHP and Ruby are catching up fast with proprietary technologies like Microsoft’s .NET and Oracle’s platforms.

And while the myriad programming languages may resemble the Biblical confusion of tongues, technologists agree that there is a time and place for each of the languages -- whether proprietary or Open Source.

“Open Source is great for many customers,” said Jeff Doyle, Product Manager of HP’s Exstream software brand. “However, sometimes proprietary languages are better suited for particular applications.”

Doyle expects the absence of proprietary restrictions to allow Open Source languages to be used freely and with greater flexibility.

However, he noted that the revenue generated by proprietary languages encourages vendors to make frequent improvements to their products, which in turn attract users.

“Proprietary software vendors are very motivated to add features and functions into proprietary languages because proprietary languages generate revenue,” he said.

“Therefore, I do not see a complete replacement of proprietary languages by Open Source products for a very, very long time.”

According to Kan Kawashima, Exstream’s Japan country manager, Exstream developers tend to be well-versed in a range of platforms and languages, including .NET, Java, Perl and PHP.

This versatility is expected to enable Exstream to integrate its products with various systems and remain flexible enough to meet customer demands and aggressive development schedules.

Meanwhile, Melbourne-based consultancy Shine Technologies has built its business on the Ruby on Rails Web application framework, which is based on the Open Source Ruby programming language.

Operating on the principle of not promoting technology for technology’s sake, Shine evaluates technology on its ability to deliver a business benefit.

According to the consultancy's principal, Mark Johnson, Ruby on Rails was chosen for its value proposition in certain situations.

“For web-based applications, Ruby on Rails offers much faster speed-to-market, yet flattens the learning curve for both us and our clients,” he said.

“The resulting code is smaller, easier to manage and the release process is extremely efficient.”

Open Source technologies appeal to University of New South Wales academic John Shepherd because of cost advantages and the ease of access to the technical mechanisms behind Open Source platforms.

A lecturer in database programming at the university’s School of Computer Science and Engineering, Shepherd builds his teaching around the PHP programming language on top of the PostgreSQL database.

While Shepherd acknowledged that proprietary technologies such as Oracle and .NET are more prevalent in the industry, he considers Open Source systems more conducive to teaching programming concepts that will better equip students to adapt to new versions of existing technologies.

“As far as we’re concerned, programming languages are really just a vehicle for teaching the underlying ideas,” Shepherd said.

“We’re not teaching a particular system; we’re teaching the ideas behind the system. We know that systems are going to change in two years, so what’s the point in training someone in the gory details of some system that is going to be replaced by version two in three years’ time?”

UNSW’s Computer Science and Engineering students are taught the C programming language in their first year to cover basic procedural programming. Second-year students are taught Java in the context of object-oriented design.

In later years, students are taught whichever language lecturers deem most appropriate for the topic area that is covered.

“I happened to choose PHP because it’s simple and easy for students to pick up,” Shepherd said.

“It’s a higher level of abstraction than C; if you want to be doing the same thing in C, you’re writing five times as much code. And some of the constructs – hash tables and the huge number of libraries it provides – are convenient.”
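
To make the contrast concrete, here is a minimal, illustrative sketch (our own, not from the article or any UNSW course material) of what a simple word-count tally involves in C, which has no built-in hash table. In PHP the same job reduces to incrementing $counts[$word] inside a loop over the words.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define TABLE_SIZE 1024   /* fixed capacity keeps the sketch simple */

    struct entry {
        char *key;
        int   count;
    };

    static struct entry table[TABLE_SIZE];   /* zero-initialised: all keys NULL */

    /* djb2 string hash. */
    static unsigned long hash(const char *s)
    {
        unsigned long h = 5381;
        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h;
    }

    /* Increment the count for a word, inserting it on first sight.
       Collisions are resolved by linear probing. */
    static void increment(const char *word)
    {
        unsigned long i = hash(word) % TABLE_SIZE;
        unsigned long start = i;

        while (table[i].key != NULL) {
            if (strcmp(table[i].key, word) == 0) {
                table[i].count++;
                return;
            }
            i = (i + 1) % TABLE_SIZE;
            if (i == start) {          /* table full: give up */
                fprintf(stderr, "hash table full\n");
                exit(EXIT_FAILURE);
            }
        }
        table[i].key = malloc(strlen(word) + 1);   /* keep our own copy of the key */
        strcpy(table[i].key, word);
        table[i].count = 1;
    }

    int main(void)
    {
        const char *words[] = { "php", "c", "php", "java", "php" };
        size_t n;

        for (n = 0; n < sizeof words / sizeof words[0]; n++)
            increment(words[n]);

        for (n = 0; n < TABLE_SIZE; n++)
            if (table[n].key != NULL)
                printf("%s: %d\n", table[n].key, table[n].count);
        return 0;
    }

Even this toy version needs a hash function, collision handling and manual memory management; the PHP equivalent is a two-line loop, roughly the ratio Shepherd describes.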

Cost considerations aside, Shepherd said that access to the source code of Open Source systems has teaching advantages as well.

“With open source, I just like the notion that you have access to the source code for systems that you’re running, and even better, if you find bugs in it, you can fix it yourself and post your fixes to the Web site for that open source system,” he said.

“Oracle is not going to give you the source code, so you go with an Open Source database like PostgreSQL or MySQL.”

According to IT services and solutions provider Dimension Data, however, Open Source options may not always be the most cost-effective in the industry.

Dimension Data’s General Manager for Application Integration, Peter Menadue, described dealings with a large Australian bank, which required a specific application to be written by the IT department for an internal customer in commercial banking.

The IT department researched technologies including ColdFusion, PHP, Java and .NET, and received quotes ranging from $150,000 for Java development to $60,000 for .NET development.

But the most effective solution turned out to be Microsoft’s SharePoint, with which the IT department was able to satisfy the bank’s requirements for about $20,000.

“The customer was extremely happy; they got a result really quickly, they really liked it, they didn’t have to get anyone to maintain any code,” Menadue said. “That is a flavour that we’re seeing more and more of.”

“Clearly there’s still a lot of development going on, but I think it’s more targeted. People are looking at more of these platforms to, in some cases, avoid code, or at least, avoid the first block of development if you like, and only do the development where they are really adding high value.”

“It’s just a way of reducing ongoing maintenance costs, and means that you buy into a platform and keep riding that wave,” he said.

Describing the shift towards leveraging existing platforms as a trend towards “non-programming”, Menadue noted that the most useful programming languages are those that are most compatible with the platforms being extended.

SharePoint-based projects thus require expertise in .NET, he said, noting that Dimension Data tends to prefer the C# language within the .NET environment.

“I think the basic premise is this: if you’re writing a specific application, either you write the code yourself, or you leverage some functionality that’s already been written,” he said.

“Five, ten years ago, the default action would have been to develop something.”

“I think people are trying to really define exactly where it makes sense for them to do it, rather than just leverage something where somebody has already made some sort of developmental investment,” he said.


iTnews: Robotics handbook explores past, present and future

Monday, June 02, 2008


As a journalist at iTnews:

The dream to create machines that are skilled and intelligent is now becoming part of our world’s striking reality, according to robotics expert Bruno Siciliano.

Together with Oussama Khatib, a Professor of Computer Science at Stanford University, Siciliano has edited the newly launched "Handbook of Robotics", which aims to make the increasingly complex field of robotics more accessible to engineers, doctors, computer scientists and designers.

Siciliano, a Professor of Control and Robotics at the University of Naples, Italy, and President of the IEEE Robotics and Automation Society, spoke with iTnews about the book and its topics, which range from the foundations of robotics to its social and ethical implications.

Who is your target audience for the "Handbook of Robotics"?

The handbook was conceived to provide a valuable resource not only for robotics experts, but also for newcomers to this expanding field [such as] engineers, medical doctors, computer scientists, and designers.

Why do we need such a handbook?

The undertaking of the project was motivated by the rapid growth of the field.

With the ever-increasing number of publications in journals, conference proceedings and monographs, it is difficult for those involved in robotics, particularly those just entering the field, to stay abreast of its wide range of developments.

This task is made even more arduous by the very multidisciplinary nature of robotics.

How prevalent are robots in everyday life?

Robots today are making a considerable impact on many aspects of modern life, from industrial manufacturing to healthcare, transportation, and exploration of the deep space and sea. Tomorrow, robots will be as pervasive and personal as today’s personal computers.

What is the potential of robots in the near future, and how will they compare with robots of today?

In the 1990s, research was boosted by the need to resort to robots to address human safety in hazardous environments (field robotics), to enhance the human operator’s ability and reduce his or her fatigue (human augmentation), or by the desire to develop products with wide potential markets aimed at improving the quality of life (service robotics).

A common denominator of these application scenarios was the need to operate in largely unstructured environments, which ultimately requires greater abilities and a higher degree of autonomy.

By the dawn of the new millennium, robotics had undergone a major transformation in scope and dimension.

This expansion has been brought about by the maturity of the field and the advances in its related technologies. From a largely dominant industrial focus, robotics has been rapidly expanding into the challenges of the human world (human-centered and life-like robotics).

The new generation of robots is expected to safely and dependably cohabit with humans in homes, workplaces, and communities, providing support in services, entertainment, education, healthcare, manufacturing, and assistance.

What is the global landscape like for robotics currently?

Even though the first industrial robots were designed and built in the USA in the 1960s, they have matured elsewhere over the following decades.

Mostly due to the needs of their automotive industries, combined with the high cost of labour, Europe and Japan have taken over global technological leadership in industrial robotics in the past decades. In the US, most robotics research is currently funded through military, space and security programs.

In Japan, robot manufacturers can rely on broad public acceptance: robots are seen as useful helpers (co-workers to their human counterparts) rather than as job-killers. Japanese manufacturers also enjoy a strong home market with the highest density of robots, cover a larger spectrum of robot types, and are typically part of huge vertically integrated industrial conglomerates that can build up massive R&D and commercial power.

In Europe, by contrast, the robotics industry is strong but still quite fragmented and dispersed. Industry observers agree on two global trends: (i) due to saturation in the classical (automotive) markets, all major manufacturers will need to identify new areas to maintain growth; and (ii) the rapid development of the technology areas that form the basis of robotics – mechatronics, computers, sensors, programming, human interfaces – bears huge potential for totally new application scenarios.

Clearly, these developments may also result in a dramatic re-distribution of the market share of robot manufacturers in future application scenarios.

Will robotics ever become an issue for policy makers? How so?

Robotics is rapidly becoming one of the leading fields of science and technology, so that very soon humanity is going to coexist with a totally new class of technological artefacts: robots. It will be an event rich in ethical, social and economic problems.

It is the first time in history that humanity has approached the challenge of replicating an intelligent and autonomous entity. This compels the scientific community to examine closely the very concept of intelligence – in humans, animals, and machines – from a cybernetic standpoint.

In fact, complex concepts like autonomy, learning, consciousness, evaluation, free will, decision making, freedom and emotions, among many others, will have to be analysed, taking into account that the same concept does not have the same reality and semantic meaning in humans, animals, and machines.

From this standpoint, it is natural and necessary that robotics draws on several other disciplines, like Logic, Linguistics, Neuroscience, Psychology, Biology, Physiology, Philosophy, Literature, Natural History, Anthropology, Art and Design. Robotics de facto unifies the so-called two cultures, Science and the Humanities. The effort to design Roboethics should take account of this specificity.

What is Roboethics?

Roboethics is an applied ethics whose objective is to develop scientific, cultural, and technical tools that can be shared by different social groups and belief systems. These tools aim to promote and encourage the development of robotics for the advancement of human society and individuals, and to help prevent its misuse against humankind.

This means that experts shall view robotics as a whole – in spite of its current early stage, which recalls a melting pot – so that they can achieve a vision of robotics’ future.

When do you expect robots to be an unavoidable part of life? What needs to be achieved before this happens?

The dream to create machines that are skilled and intelligent has been part of humanity from the beginning of time. This dream is now becoming part of our world’s striking reality.

Reaching for the human frontier, robotics is vigorously engaged in the growing challenges of new emerging domains. Interacting, exploring, and working with humans, the new generation of robots will increasingly touch people and their lives.

The credible prospect of practical robots among humans is the result of half a century of scientific endeavour that established robotics as a modern scientific discipline.

Today, new communities of users and developers are forming, with growing connections to the core of robotics research. A strategic goal for the robotics community is one of outreach and scientific cooperation with these communities. Future developments and expected growth of the field will largely depend on the research community’s abilities to achieve this objective.
