EDIT: when I say "on mass" in the poll, of course I mean "en masse" 
Climate Prediction.NET
These guys are launching a new distributed computing project today, modelling climate change.
We have already had a short thread about this in the Distributed@MURC forum here.
I thought I'd post it in the Lounge as I know that (a) not all of us venture into the dist@MURC forum, and (b) most of the existing projects don't appeal to some.
This is a project run, it seems, by Oxford Uni, the Open University, and the UK Met Office.
Brief blurb:
Climate models predict significant changes to the Earth's climate in the coming century. But there is a huge range in what they predict - how should we deal with this uncertainty? If they are over-estimating the speed and scale of climate change, we may end up panicking unnecessarily and investing huge amounts of money trying to avert a problem which doesn't turn out to be as serious as the models suggested. Alternatively, if the models are under-estimating the change, we will end up doing too little, too late in the mistaken belief that the changes will be manageably small and gradual.
To cope with this problem we need to evaluate our confidence in the predictions from climate models. In other words we need to quantify the uncertainty in these predictions. By participating in the experiment, you can help us to do this in a way that would not otherwise be possible.
Even with the incredible speed of today's supercomputers, climate models have to include the effects of small-scale physical processes (such as clouds) through simplifications (parameterizations). There is a range of uncertainty in the precise values of many of the parameters used - we do not know precisely what value is most realistic. Sometimes this range can be an order of magnitude! This means that any single forecast represents only one of many possible ways the climate could develop.
How can we assess and reduce this uncertainty?
There are two complementary approaches to this problem:
1. Improve the parameterizations while narrowing the range of uncertainty in the parameters. This is a continuous process and requires:
   - Improving the models, using the latest supercomputers as they become available.
   - Gathering more and more (mainly satellite) data on a wide range of atmospheric variables (such as wind speed, cloud cover, temperature...).
2. Carry out large numbers of model runs in which the parameters are varied within their current range of uncertainty. Reject those which fail to model past climate successfully and use the remainder to study future climate.
The second of these is the climateprediction.net approach. Our intention is to run hundreds of thousands of state-of-the-art climate models with slightly different physics in order to represent the whole range of uncertainties in all the parameterizations. This technique, known as ensemble forecasting, requires an enormous amount of computing power, far beyond the currently available resources of cutting-edge supercomputers. The only practical solution is to appeal to distributed computing, which combines the power of thousands of ordinary computers, each computer tackling one small but key part of the global problem.
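To make that second approach a bit more concrete, here is a tiny toy sketch in Python of the "vary the parameters, reject the bad hindcasts, keep the rest for the forecast" idea. The model, parameter range and "observed" numbers are all made up purely for illustration; this is not the actual model the climateprediction.net client runs.
```python
# Toy illustration only: a made-up one-parameter "climate model",
# not the real model distributed by climateprediction.net.
import random

def toy_model(sensitivity, years, forcing=0.03):
    # Warming grows linearly with time, scaled by the uncertain sensitivity.
    return sensitivity * forcing * years

OBSERVED_PAST_WARMING = 0.6   # pretend record: ~0.6 C over the last century
TOLERANCE = 0.2               # accept runs within +/- 0.2 C of that record

accepted = []
for _ in range(100000):
    # Vary the uncertain parameter within its assumed range of uncertainty.
    sensitivity = random.uniform(0.05, 0.5)

    # Hindcast test: reject runs that fail to model past climate.
    if abs(toy_model(sensitivity, years=100) - OBSERVED_PAST_WARMING) > TOLERANCE:
        continue

    # Use the surviving runs to study future climate (here, 200 years ahead).
    accepted.append(toy_model(sensitivity, years=200))

print("%d of 100000 runs accepted" % len(accepted))
print("200-year warming spread: %.2f to %.2f C" % (min(accepted), max(accepted)))
```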
I'd be interested in your views on this project (particularly Brian's, actually). I have moved one machine over myself to have a go, and whilst there is no facility to put team statistics together yet, the feature is apparently on the way; when it's there it would be nice to have a Matroxusers team.
The client is actually one of the nicest and most "modern"-looking I have seen for a distributed project, and the team seems very professional. It is Windows-only for now, although a Linux client is on the way, I think.
Regards
Gnep