
Showing posts from 2008

Moving on....

I started a new job at the beginning of the month. I'm no longer working at the National e-Science Centre and that means that I won't be involved with Grid Computing Now!. I was granted a day from my new job to host the webinar on cloud computing with Ross Cooney, but that was my last commitment for GCN!. It's been almost 6 years since I joined NeSC and it was about this time 4 years ago that we were writing the proposal for the Knowledge Transfer Network that became GCN!. Looking back, the experience has taught me a lot as we evolved the KTN to be most effective. It took a lot of effort to get ourselves recognised - to "build the brand" in marketing speak - and I think the work paid off. Now I have the chance to actually practise some of what I preached. My new position is Head of Development Services in the Information Systems Group of the University of Edinburgh, and one of our goals for the next year or so is to roll out a service-oriented architecture

Entering the Era of the Cloud

The Grid Computing Now! webinar on Cloud computing is now available on the GCN web site. We had to make some last-minute changes because Alan Williamson was unable to join us; so after Ross Cooney finished his presentation, he and I had an extended discussion, including several questions sent in by the audience. It went very well; a couple of times I wondered whether I should bring the broadcast to an early close, only to receive new questions from the audience that kept the debate going. We covered many issues, but perhaps the key issue was when to use cloud and when to keep provision in-house. This depends on measurement and requirements (doesn't everything?). In the case of EMailCloud, Ross estimates that if a server will be kept well utilised for more than 8 hours a day, it is cheaper to run that machine in-house, while using the cloud for peak loads, disaster recovery, and so forth. We went into more detail than that - if you're interested, watch the webinar! We also
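To make the break-even reasoning concrete, here is a minimal sketch in Python. The prices are invented for illustration (they are not figures from the webinar); the point is simply that hourly cloud rental wins at low utilisation and a flat in-house cost wins once the machine is busy most of the day.

```python
# Break-even between renting a cloud instance by the hour and running an
# equivalent server in-house. Both prices below are assumptions made up for
# this example, not numbers from the webinar.

CLOUD_PRICE_PER_HOUR = 0.40    # assumed hourly rental (pounds)
INHOUSE_COST_PER_DAY = 3.20    # assumed amortised hardware + power + admin (pounds/day)

def daily_cloud_cost(hours_used: float) -> float:
    """Cloud cost scales with the hours the instance actually runs."""
    return CLOUD_PRICE_PER_HOUR * hours_used

def breakeven_hours() -> float:
    """Utilisation (hours/day) above which in-house becomes cheaper."""
    return INHOUSE_COST_PER_DAY / CLOUD_PRICE_PER_HOUR

if __name__ == "__main__":
    for hours in (2, 8, 24):
        print(f"{hours:2d} h/day: cloud £{daily_cloud_cost(hours):.2f}, "
              f"in-house £{INHOUSE_COST_PER_DAY:.2f}")
    print(f"Break-even at roughly {breakeven_hours():.0f} hours/day")
```

With these made-up prices the break-even happens to land at 8 hours a day, which echoes the rule of thumb Ross described; real prices would move the crossover, but the shape of the argument stays the same.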

Synching the 2.0 web

Advocates of web 2.0 suggest that we can access nearly all of the services we need from web suppliers. We can edit our documents, store our photos or company data, and run our applications. It sounds great - but what happens when the web is unavailable? Over the last few years I have travelled quite a bit and I've often found myself in places with no wifi connectivity - or at least none at a price I'm willing to pay. So I value having a copy of my data on my laptop, so that I can carry on working. I've put forward this argument at a couple of events recently. At an excellent session on Web 2.0 and science at the UK e-Science All Hands Meeting, the response was that 3G coverage will soon be sufficient to give us access almost everywhere. The next generation will take it for granted, the way they take GSM talk coverage for granted already. I have to admit that this scenario seems quite likely, although of course there are still places that don't even have talk co

Cloud Computing Panel

I’ve just attended a panel session on Cloud Computing in Newcastle, which gave several points of view on the uptake and applicability of Cloud. The discussion covered Software as a Service (e.g. SalesForce, EMailCloud), Platform as a Service (e.g. Google App Engine, Arjuna) and Infrastructure as a Service (e.g. Flexiscale, Amazon EC2). The optimistic view, taken by the majority of the panel, was that we are on a journey towards cloud computing becoming the norm for business computing. Duncan Mactear of 4Projects sounded a more cautious note; his company provides SaaS for the construction industry but does not use cloud; instead their servers are hosted in a third-party data centre. Tony Lucas of Flexiscale countered that 10 years ago, similar companies weren’t even using hosting services. Sarat Pedirela of Hedgehog Lab, an ISV, pointed out that the appropriate infrastructure will depend on the type of application. Currently, Hedgehog use cloud for non-critical application

Webinar: Powering your business with Cloud Computing

On October 14th, I will be hosting a Grid Computing Now! web seminar on the topic of Cloud Computing. We have lined up two very interesting speakers who are using Cloud now to make businesses work. Ross Cooney had a good technological solution to sell but couldn't make it economic until Cloud Computing allowed him to pay for his computation only when he needed it. He will discuss the instant benefits and long-term impact of cloud computing on the development, competitiveness and scalability of your application. Alan Williamson created the BlueDragon Java CFML runtime engine that powers MySpace.com. He advises several businesses and will give an overview of the different types of services available and how to avoid being locked in to a single supplier. You can register for this event here.

Technology Strategy Board: Information Day, 22nd October

I've been asked to publicise the following event. The Technology Strategy Board has arranged an Information Day for Wednesday 22nd October to outline the various R&D Competitions being planned over the next 9 months. This Information Day will give delegates an opportunity to find out about the activities of the Technology Strategy Board, to gain an understanding of the application process for Collaborative R&D Competitions, and to learn about other Technology Strategy Board activities. The event, being held at the Hyatt Regency Hotel in Central Birmingham, will open at 09:30 for a 10:00 start and will close at approximately 16:30; a full agenda will be available shortly. To register for this event, please click on the following link and complete the on-line registration form. For more information on the Technology Strategy Board, please visit their web site.

Competition: Grid Solutions for a Greener Planet

This is a reminder that Grid Computing Now! is running a competition to find uses of grid technology to reduce human impact on climate change. The competition is open to anyone who is 18 or over and resident in the UK. So get your thinking caps on and submit your best ideas! The topic is deliberately wide, as is the interpretation of "grid", to allow a wide scope for proposals. The deadline for entries has been extended to Friday October 17. This extension is particularly intended to give more time to university staff and students who wish to enter. The initial proposal just requires 1,000 words describing the proposed solution. See the competition web page for background information and details of how to enter. There are two tracks, one for IT professionals and the other for everyone else (including students).

AHM 2008

I was pleased by our workshop on research opportunities this week. Our speakers met several people who were interested in their work and might contribute to taking it further. It's hard to measure the outcomes of these events, because the collaborations that we are aiming to catalyse may take months to firm up and then may take much longer to produce actual results, but the first impressions are positive. Some of the networking happened outside the workshop itself, of course. That is the advantage of face-to-face meetings; sometimes all you need is to bring the right people together for the first few minutes. Also, you can follow serendipitous links, such as when a colleague pointed me at the workshop on declarative data centres that Microsoft Research Cambridge and HP Labs organised earlier this year. I think the UK is building a critical mass in data centre management and I hope this can be encouraged to the point where it becomes a viable industry. Beyond our workshop, ther

Workshop on Research Opportunities

This week will see the annual conference for UK e-Science, which for historical reasons is called the e-Science All-Hands Meeting. I have organised a knowledge transfer workshop for the Tuesday afternoon, with the aim of presenting research opportunities for e-Science in the UK commercial and public sectors. We have four excellent speakers lined up. Mark Ferrar is the Director of Infrastructure Architecture for NHS Connecting for Health in England. Mark is interested in opportunities for using the processing power available to the NHS to improve clinical outcomes, for example by running HPC models and diagnosis applications. Liam Newcombe is tackling the question of "Green IT" in data centres. This is a big topic in the industry, because energy prices are rising and carbon accounting is being deployed. Liam has developed an open-source integrated model of data centres for the BCS and the Carbon Trust. He is looking for collaborators to further improve this model. A

Greening the desktop

I attended an interesting workshop this week. It was one of the series that Peter James has put together for his SusteIT project; this one focussed on desktop PCs. The talks and panels looked at measurement of energy use, procurement options, desktop grids, power management and thin clients. In any sizeable organisation, desktop PCs use a large amount of electricity and there are many options available for reducing this consumption - and saving money too. This is the second time that I've seen a British university do the sums and expect to save £250,000 a year. The panel on power management tools was interesting. These seem to be coming of age at last. Operating systems have had support for managing individual computers, but a large organisation needs a system for managing thousands of PCs, with different policies for different groups, and of course the important facility to wake machines up in time for the distribution of updates. James Osborne gave an interesting analysis
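To give a sense of how an estate-wide saving on that scale can arise, here is a back-of-the-envelope sketch in Python. Every figure is an assumption chosen for illustration (estate size, idle power draw, hours reclaimed by power management, electricity price), not data from the workshop.

```python
# Rough estimate of what desktop power management might save across a large
# estate. All figures are assumptions for illustration only.

NUM_PCS = 10_000              # assumed number of desktop PCs in the estate
IDLE_POWER_KW = 0.08          # assumed draw of a PC left on but idle (80 W)
HOURS_SAVED_PER_DAY = 14      # assumed hours/day the PC could sleep instead
DAYS_PER_YEAR = 250           # assumed number of days the saving applies
PRICE_PER_KWH = 0.10          # assumed electricity price (pounds)

kwh_saved = NUM_PCS * IDLE_POWER_KW * HOURS_SAVED_PER_DAY * DAYS_PER_YEAR
print(f"~{kwh_saved:,.0f} kWh/year, roughly £{kwh_saved * PRICE_PER_KWH:,.0f} saved")
```

With these invented numbers the answer comes out around £280,000 a year, the same order of magnitude as the university figure quoted above; the real value of the exercise is seeing which assumptions the headline number is most sensitive to.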

A cloud + a fringe = a silver lining?

The Edinburgh Festival Fringe is over for another year. Hundreds of performers have given thousands of shows to a huge number of culture seekers. Meanwhile, the IT industry has notched up another very public failure. The new box office system failed to cope with the demand for tickets on the first day and had to be patched hurriedly. It struggled along after that but was unable to implement the special offer that had been planned for the last few days of the festival. It must be a challenge to cope with the huge interest on the first day of ticket sales and then to manage the varying load for the next three months. For the other nine months of the year, of course, the system isn't needed at all. Does this sound like a candidate for cloud computing? I don't know how the existing system is implemented, but if it doesn't already use cloud to react to large swings in demand, perhaps the service providers should consider this option.
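To illustrate the kind of elasticity I have in mind, here is a minimal sketch of a threshold-based scaling rule. The capacities and bounds are invented for the example, and a real box office would need much more (session state, payment back-ends, a database that can keep up), but it shows how a cloud deployment can track a spiky load.

```python
# Toy scaling rule: size the web-server fleet to the observed request rate,
# within a floor for quiet months and a ceiling for launch day. All numbers
# are invented for illustration.

import math

REQUESTS_PER_SERVER = 200          # assumed requests/second one server can handle
MIN_SERVERS, MAX_SERVERS = 2, 50   # assumed fleet bounds

def servers_needed(request_rate: float) -> int:
    """Choose a fleet size for the current request rate."""
    wanted = math.ceil(request_rate / REQUESTS_PER_SERVER)
    return max(MIN_SERVERS, min(MAX_SERVERS, wanted))

for rate in (50, 1_500, 9_000):    # quiet day, busy day, first morning of sales
    print(f"{rate:>5} req/s -> {servers_needed(rate)} servers")
```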

A holistic use of thin clients

Yesterday saw a workshop on Sustainable IT at the new Queen Margaret University campus in Musselburgh. The workshop was ostensibly about "new ways of working" but another major focus was on how the adoption of thin clients allowed the architects to design a more environmentally friendly campus. Thin clients use less power on the desktop than PCs, which means less heat is generated in the classrooms, labs and open working areas. For QMU, this meant that the building can use natural cooling and ventilation, saving considerably more energy in addition to the saving from the terminals themselves. The switch to thin clients was also used to introduce the use of virtual desktops. These let staff and students access their work from anywhere on the campus and from home. Staff are encouraged to work from home when it suits them. On campus, staff now work in open plan areas rather than offices; overall, the extra freedom seems to outweigh any disadvantage from the change. The

Progress with our "Green IT" theme

We've been running our Green IT theme for several months and I'm pleased with the progress we've made. We've run two webinars and spoken at several events, including the ITU symposium on ICTs and Climate Change and Oxford University's conference on Low Carbon ICT. As an example sector, we are working with the JISC-funded project on Sustainable IT in Higher Education - partly because this is a sector in which we can publicise progress without too many strings attached. We helped Peter James organise the first SusteIT workshop, which was held in Cardiff on June 19th. This has already caused quite a stir; Cardiff have done a good job of procuring an energy-efficient machine hall and other education establishments seem to be looking to this as an exemplar. Later this year, we will be helping Peter with two more workshops that fall under our remit. Green IT has to consider all aspects of running an IT service. As such, it is a broader topic than we can cover.

Competition: Grid solutions for a greener planet

Grid Computing Now! is running a competition to find uses of grid technology to reduce human impact on climate change. The competition is open to anyone who is 18 or over and resident in the UK. So get your thinking caps on and submit your best ideas! The topic is deliberately wide, as is the interpretation of "grid", to allow a wide scope for proposals. There will be three stages to the competition: an initial proposal of 1,000 words; an apprentice workshop, where finalists can speak to grid architects and academics to develop their ideas; and the final presentation at The British Computer Society on December 1st 2008. The deadline for registering your interest is July 31st, but the initial submission is not required until September 1st. See the competition web page for background information and details of how to enter. There are two tracks, one for IT professionals and the other for everyone else (including students).

Combining BOINC and BitTorrent

A busy schedule and an unusually flakey wi-fi setup have conspired to limit my blogging activity from OGF23 this week. This is just a quick post to note a rather neat idea that was demonstrated by one part of the CoreGrid project. BOINC is the infrastructure used by volunteer computing projects such as SETI@home and ClimatePrediction.net. The system sends out jobs to be run on people's home computers and collects the results. Handling all this network traffic puts quite a load on the central server. What the CoreGrid demo has done is to combine BOINC with the BitTorrent peer-to-peer data distribution system, so that the load is distributed. This may not be news to some of you, as the paper was published last year. I like it; it's a simple idea to solve an immediate problem. I've heard of other projects looking at the potential of BitTorrent as well.
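For readers who don't know either system, here is a conceptual sketch of the idea as I understand it: a work unit's input data is identified by a content hash and fetched from other volunteers where possible, with the project's central server used only as a fallback. The names and helpers below are invented for illustration; they are not the real BOINC or BitTorrent APIs.

```python
# Conceptual sketch only: fetch a work unit's input from volunteer peers
# (BitTorrent style) before falling back to the central project server.
# The data structures and helper functions are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkUnit:
    job_id: str
    data_hash: str       # content hash identifying the input data
    central_url: str     # fallback location on the project server

# Stand-in "network": a couple of peers that may or may not hold the data.
PEER_CACHES = {
    "peer-a": {"abc123": b"climate model input deck"},
    "peer-b": {},
}

def fetch_from_peer(peer: str, data_hash: str) -> Optional[bytes]:
    return PEER_CACHES.get(peer, {}).get(data_hash)

def fetch_from_server(url: str) -> bytes:
    return b"climate model input deck"   # the central copy always exists

def fetch_input(unit: WorkUnit, peers: list[str]) -> bytes:
    """Prefer peers; only hit the central server when no peer has the data."""
    for peer in peers:
        data = fetch_from_peer(peer, unit.data_hash)
        if data is not None:
            return data
    return fetch_from_server(unit.central_url)

print(fetch_input(WorkUnit("wu-42", "abc123", "https://example.org/wu-42"),
                  ["peer-b", "peer-a"]))
```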

Cross-Enterprise Document Sharing

At HC2008 I was introduced to the XDS standard for Cross-Enterprise Document Sharing (not to be confused with several other uses of the XDS acronym). XDS is a profile of the ebXML registry standards and other related standards, specifying a system for sharing medical documents. The important point is that XDS is supported by several major vendors and has been deployed in clinical health systems in several countries. As far as I'm aware, it has not made any impact in the grid world. The basic XDS standard can be extended to address particular use cases or to add functionality. One popular extension handles DICOM files (a format widely used in medical imaging). Another (XCA) supports federated registries, removing the single point of failure of the basic model. I think it is worth investigating for other e-Science applications. It may be simpler to leverage this work than to reinvent it. Also, it might be possible to apply our work on e-infrastructure security to t

PGC08 to discuss Green HPC

GridToday reports that the Platform Global Conference will include a panel on whether HPC data centres can "go green". The strategies they will discuss include energy-directed scheduling, i.e. dynamically allocating workload to minimise electricity consumption. It so happens that I am organising a workshop at OGF23 on exactly this subject. We will be looking at the details of what is needed to make this idea a reality.
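As a flavour of what "energy-directed scheduling" might mean in its simplest form, here is a toy greedy placement rule: send each job to whichever site currently has the cheapest electricity and enough free capacity. The sites, prices and job sizes are invented for illustration; a real scheduler would also have to weigh data locality, deadlines and cooling behaviour.

```python
# Toy "energy-directed" placement: put each job at the cheapest site with room.
# Sites, prices and job sizes are invented for illustration only.

from typing import Optional

SITES = {
    "site-a": {"pence_per_kwh": 9.0,  "free_nodes": 64},
    "site-b": {"pence_per_kwh": 7.5,  "free_nodes": 16},
    "site-c": {"pence_per_kwh": 12.0, "free_nodes": 128},
}

def place_job(nodes_needed: int) -> Optional[str]:
    """Greedy: choose the cheapest site that can still fit the job."""
    candidates = [name for name, s in SITES.items() if s["free_nodes"] >= nodes_needed]
    if not candidates:
        return None
    best = min(candidates, key=lambda name: SITES[name]["pence_per_kwh"])
    SITES[best]["free_nodes"] -= nodes_needed
    return best

for job, size in [("physics-analysis", 32), ("climate-run", 16), ("rendering", 8)]:
    print(f"{job}: {size} nodes -> {place_job(size)}")
```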

Ian Foster finds an interesting take on Green IT

Ian Foster notes a project at the University of Notre Dame near Chicago which is distributing research computing facilities in order to provide heat to campus buildings. This is an interesting trade-off: on the one hand, the distribution of resources means that the waste heat is put to good use; on the other hand, it's possible that each distributed installation is less efficient than a good-quality centralised machine hall. This is another example of where we need good quality models and well-measured example deployments to help us decide which approaches give the best results.

Grids & e-Health

This week, I attended Healthcare Computing 2008 to get an update on the current state of e-health in the UK and to explore how grid technology can contribute. Health Informatics is a broad subject and it isn't possible to engage with the whole field, but I see three main areas of potential engagement. The path most followed by the academic community to date is that of linking together data used in clinical trials or in health research. This is a natural fit for the e-science community as it extends existing work on secure access to distributed research data. The medical world imposes more security constraints, which adds academic interest, but is otherwise familiar to the e-scientists. It is also of a scale that is manageable in research projects. Successful projects include PsyGrid, which is now deployed across the NHS research centres in mental health and in bioinformatics. So this is the first area of engagement. A natural question is whether this experience with research

Garbled Grid Hype

There has been some rather confused coverage in the press about the grid infrastructure that supports the Large Hadron Collider at CERN. The Times has an article that casts the grid as a "superfast internet", with the emphasis on the high-bandwidth links that have been laid to support the LHC data dispersal. It also talks about the numbers of servers connected to the LHC grid, but without clarifying the distinction between bandwidth and processing power. It also implies that the LHC grid is the only grid, which perhaps we can forgive the journalists for, as plenty of technical people still refer to "the grid" as if there were only one. A Yahoo article, taken from Sky News, goes rather further, claiming that "the internet, as we know it, could be obsolete within a decade". The phrase, "as we know it", lends a wonderful vagueness to the claim. The article goes on to say that the Grid was the brainchild of CERN, which of course is an exaggeration

Grid infrastructure for clinical trials

Ian Foster pointed readers of his blog at a good article about CaBIG, the Cancer Biomedical Informatics Grid. This major project is led by the US National Cancer Institute and also involves Cancer Research Labs in the UK, with the aim of sharing data between cancer researchers. The UK's OGSA-DAI system is a major component of the deployed system. It's particularly interesting to note that the system includes support for clinical trials. Clinical trials are time-consuming and expensive, so many people want better systems for managing them and a number of grid projects are tackling this area. PsyGrid is one such project in the UK. They're not exactly blowing their own trumpet about this, but their system is being used as the studies and trials platform for the Mental Health Research Network, and has been selected by the National Institute for Health Research (NIHR) as the platform for providing electronic data collection services to support studies and trials across the

May 7 Webinar: Energy efficient data centres

I am very pleased with the line-up for our webinar on May 7th. We will continue our investigation into best practice for running energy efficient data centres, following our successful webinar last October. This time we will have an emphasis on the impact of virtualisation and on how one can model and measure the effectiveness of proposed improvements. Nic Barnes will explain what Merrill Lynch observed when they applied virtualisation on the desktop and in the server room. They found that the benefits were in some cases partially offset by losses elsewhere. Nic will demonstrate the importance of measuring real gains in practice. Liam Newcombe will describe the work of the BCS in developing a model for predicting data centre efficiency. This approach will allow managers to plan and evaluate designs in advance of their implementation. Liam will show that the choice of metrics requires careful analysis. As always, viewers will be encouraged to ask questions. Any that we can'
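As one small illustration of why metric choice needs care, consider PUE (Power Usage Effectiveness: total facility power divided by IT power), probably the most commonly quoted data-centre metric. It is not necessarily the metric the BCS model uses, and the figures below are invented, but they show how a site can "improve" its PUE while its absolute energy consumption rises.

```python
# PUE = total facility power / IT equipment power. Invented figures showing
# that a better (lower) PUE does not guarantee lower overall consumption.

def pue(total_facility_kw: float, it_kw: float) -> float:
    return total_facility_kw / it_kw

before = {"it_kw": 500.0, "total_kw": 1000.0}   # assumed: small IT load, poor cooling
after  = {"it_kw": 900.0, "total_kw": 1530.0}   # assumed: more servers, better cooling

for label, site in (("before", before), ("after", after)):
    print(f"{label}: PUE {pue(site['total_kw'], site['it_kw']):.2f}, "
          f"{site['total_kw']:.0f} kW total")
# PUE falls from 2.00 to 1.70, yet total draw rises by more than 50%.
```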

Virtualisation, HPC and Cloud Computing

Virtualisation has obvious benefits for much commercial IT, where existing servers often have utilisation rates of 10% or less. It's less clear whether virtualisation is so useful for High-Performance Computing (HPC), where systems are often kept running at utilisation rates above 90%. Nevertheless, there are potential benefits from adopting some aspects of virtualisation in these environments. The question is, do the benefits outweigh the cost in performance? This was the topic of yesterday's workshop on System Level Virtualisation for HPC (which was part of EuroSys 2008). The workshop was rather small but I did learn quite a bit. A couple of talks investigated the performance hits of using virtualised OS's instead of addressing the hardware directly. This varied, depending on the type of application; if IO was minimal, the slowdown was minimal too. An "average" slowdown seemed to be on the order of 8%. Samuel Thibault of Xensource looked at ways of using

Webinar on licensing, virtualisation and grid

Licensing has been an ongoing issue for virtualising and grid-enabling applications. Many licensing models do not transfer easily to a world where applications and virtual servers can be run on many processors and moved from machine to machine. Vendors want to ensure that they are paid for the full use of their applications, while IT managers want to know how much they can expect to pay in a given accounting period. Everyone is aware of the problem and several people have been trying to work out ways forward, but the issue is still unresolved. Grid Computing Now! are bringing together some of the key players in the industry for a webinar on Thursday 10 April at 14:00 GMT: David Gittins of Capgemini, who has been working with the public sector on this challenge; Neil Sanderson, Product Manager for Virtualisation at Microsoft; Mark Cresswell of Scalable Solutions, a leader in the provision of tools for monitoring and reporting usage; and Ian Osborne, Project Director of Grid Computing Now!
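One concrete shape the "monitoring and reporting usage" side of this can take is metering: record where and for how long each licensed application actually runs, then aggregate per accounting period. The sketch below is an invented illustration of that idea, not a description of any vendor's tool mentioned above.

```python
# Toy usage metering for licences in a virtualised environment: record
# (application, host, hours) events as workloads move around, then report
# the total per application for the accounting period. Purely illustrative.

from collections import defaultdict

usage_log = [
    # (application, VM it ran on, hours of use) - invented sample data
    ("cfd-solver", "vm-03", 5.0),
    ("cfd-solver", "vm-17", 2.5),    # same app, migrated to another VM
    ("render-farm", "vm-03", 12.0),
]

def usage_report(log):
    totals = defaultdict(float)
    for app, _vm, hours in log:
        totals[app] += hours
    return dict(totals)

print(usage_report(usage_log))   # e.g. {'cfd-solver': 7.5, 'render-farm': 12.0}
```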

Low Carbon ICT Conference

Yesterday I attended the Low Carbon ICT Conference organised by Oxford University, with funding from JISC. This was a good and useful day, with speakers covering a range of topics, and a small number of exhibits. Some of the speakers were familiar - Zafar Chaudry gave his excellent talk on the benefits of virtualisation in one of our webinars, while Liam Newcombe will be speaking in our next-but-one webinar on May 7th. Others were new to me. The speakers' talks covered office ICT equipment, virtualisation, data centres, PC manufacturing, sustainability entrepreneurship and enterprise planning. I was more at home with the techie end of things, but this is not a problem that can be tackled by technology alone so it was good to see the broad participation of this conference. I was aware of most of the technical issues, although I was surprised at how much energy is used by desktop PCs and peripherals (see below). I was also interested to hear Juergen Heidegger recommend the use

A Research Strategy for the Century of Information

"This is the Century of Information" - G. Brown, November 2007. This week, I attended a think-a-thon about a strategy document that the e-Science community in the UK is developing. The goal is to put in place the right mechanisms for ensuring that UK research can make the most of new computing technologies and methods. We have many success stories from the e-science programme; the question is, how do we build on those successes and make the techniques available to everyone? One point that arose from the workshop is that we need different types of successes. Most of the examples put forward were of good research enabled in a range of domains (GeoSciences, BioScience, Chemistry, Physics, Social Science, etc.). We also found examples of advances in Computer Science itself, rather than just using CS to support other fields; this is essential if we are to engage CS academics. Beyond academia, we need examples of knowledge transfer to industry. This is where the Grid Computi

Big Green

With the twin growth of supercomputing demand on the one hand and energy cost on the other, it makes sense for companies to produce supercomputers that use less power. IBM started the trend with the Blue Gene series, of course, although that project was more about building the fastest computer while keeping energy consumption somewhat reasonable. More recently, others have joined the fold. One such is SiCortex, who claim to have designed their systems from the chip level up to minimise power consumption. Their SC5832 machine offers 5832 1GFlops 64-bit processors for 20kW, while for the SC648 they claim half a teraflop powered from a standard wall socket. That's not the only approach, of course. Floating-point accelerator boards and Graphics Processing Units are being used to boost computing power for specific applications while keeping costs low. The Register has a good overview article from November's SuperComputing conference. Meanwhile, in a talk at the Mardi Gras C
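Taking the quoted SC5832 figures at face value, a quick calculation gives its headline efficiency; the snippet below simply does that arithmetic.

```python
# Performance per watt for the SC5832, using only the figures quoted above.

processors = 5832
gflops_each = 1.0        # 1 GFlops per processor, as quoted
power_watts = 20_000     # 20 kW, as quoted

total_gflops = processors * gflops_each
print(f"{total_gflops / 1000:.2f} TFlops peak")
print(f"{total_gflops * 1000 / power_watts:.0f} MFlops per watt")
```

That works out at roughly 5.8 TFlops peak and about 290 MFlops per watt - the kind of number you would want to compare against conventional clusters of the same era before taking the "green" claim on trust.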

Green Broadband can save the planet?

This blog posting is an oddity - Bill St. Arnaud suggests a micropayment scheme whereby consumers pay for broadband services by increasing the cost of their household electricity (or car mileage) for a certain duration. The utility companies then pay this amount to the broadband service provider. So the service provider gets income from a source other than advertising, the consumer gets a reasonably priced product, and the consumer has an incentive to reduce their energy usage because the price has gone up for that period. I don't see it happening, myself, but it's an interesting piece of lateral thinking.

More on "green" data centres

In order to expand our "community of practice" on energy-efficient data centres, we have visited several people or groups who have an interest in the topic. Just over a week ago, some NeSC colleagues and I visited BRE's Scottish office. BRE used to be known as the Buildings Research Establishment and the folk at their East Kilbride office are particularly interested in sustainable development. So far they've been mainly working with housing and small businesses, but they seem potentially interested in modelling larger establishments. They are also interested in multi-level modelling of heat flow within buildings; something our e-science connections could definitely help with. We also had a good conversation about DC power circuits. This week, I was invited to speak at a meeting of the Russell University Group IT directors (RUGIT) - i.e. the people responsible for ICT at many of our leading universities. I presented some of the outcomes from the HTC week in N

Data Protection, TPM and Grids

This week, the e-Science Institute launched a new research theme which should be of great relevance to industry as well as scientists - in fact, it may even help ordinary consumers to protect our own privacy online. The theme is about "Trust and Security in Virtual Communities". Andrew Martin, the theme leader, explained its aim in a webcast talk. The problem that Andrew is exploring is how we can trust a grid infrastructure to protect our sensitive data. In addition, how can we trust the results that we get back from running a job on "the computing cloud"? To give one concrete example, Andrew was involved in the climateprediction.net project, which encouraged people to contribute their PCs' spare cycles to run climate modelling simulations. This raised several security issues. From the users' point of view, could they trust that the climateprediction.net program would not hijack their PC? Conversely, could the scientists trust that the data sets r
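On that second question, one mitigation volunteer computing projects already use (independently of the trusted-computing approaches this theme may explore) is redundancy: send the same work unit to several volunteers and only accept a result once a quorum of replicas agree. A minimal sketch, with an assumed quorum of two:

```python
# Quorum-based result validation for redundantly computed work units.
# The quorum size and sample results are assumptions for illustration.

from collections import Counter
from typing import Optional

QUORUM = 2   # assumed: matching replicas needed before a result is trusted

def canonical_result(replica_results: list[str]) -> Optional[str]:
    """Return the agreed value if any result reaches the quorum, else None."""
    if not replica_results:
        return None
    value, count = Counter(replica_results).most_common(1)[0]
    return value if count >= QUORUM else None

print(canonical_result(["0.37", "0.37", "0.41"]))   # -> '0.37' (two agree)
print(canonical_result(["0.37", "0.41"]))           # -> None (no quorum yet)
```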