It is a good project, helping to predict how our climate will change over the next century. We run the climate models they send us and help to make them more accurate.
We have a nice team going at the moment, ranking (just barely) in the Top 50. Of course, more are welcome to join.
Another plus is that since the project is relatively young, we can catch up more easily than in some of the older projects, as the leaders are not too far ahead.
Some notes, though: this project demands quite a bit of horsepower (an ~800 MHz CPU is the minimum), uses about 600 MB of disk space while running, and, most importantly, if you join you must complete your model; otherwise the data is wasted and the project is hurt.
Climate models predict significant changes to the Earth's climate in the coming century. But there is a huge range in what they predict - how should we deal with this uncertainty? If they are over-estimating the speed and scale of climate change, we may end up panicking unnecessarily and investing huge amounts of money trying to avert a problem which doesn't turn out to be as serious as the models suggested. Alternatively, if the models are under-estimating the change, we will end up doing too little, too late in the mistaken belief that the changes will be manageably small and gradual.
To cope with this problem we need to evaluate our confidence in the predictions from climate models. In other words we need to quantify the uncertainty in these predictions. By participating in the experiment, you can help us to do this in a way that would not otherwise be possible.
Even with the incredible speed of today's supercomputers, climate models have to include the effects of small-scale physical processes (such as clouds) through simplifications (parameterizations). There is a range of uncertainty in the precise values of many of the parameters used - we do not know precisely what value is most realistic. Sometimes this range can be an order of magnitude! This means that any single forecast represents only one of many possible ways the climate could develop.
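To see why that matters, here is a deliberately over-simplified sketch in Python (my own illustration, not anything from the project's code): a toy model in which the warming produced by a fixed forcing depends on a single uncertain feedback parameter, with a made-up range spanning an order of magnitude, as mentioned above.

```python
# Illustrative only: a zero-dimensional "climate model" where equilibrium
# warming for a given forcing depends on one uncertain feedback parameter.
# The numbers are invented for the sketch, not the project's real ranges.
import random

FORCING_W_M2 = 3.7          # roughly the forcing from doubling CO2
LAMBDA_RANGE = (0.3, 3.0)   # assumed feedback parameter range (W/m^2 per K),
                            # spanning an order of magnitude

def equilibrium_warming(feedback_param: float) -> float:
    """Equilibrium temperature change dT = F / lambda for the toy model."""
    return FORCING_W_M2 / feedback_param

random.seed(0)
samples = [random.uniform(*LAMBDA_RANGE) for _ in range(10_000)]
warmings = [equilibrium_warming(lam) for lam in samples]

print(f"warming spread: {min(warmings):.1f} K to {max(warmings):.1f} K")
# A single run with one chosen value of lambda reports just one of these
# outcomes, hiding how wide the plausible range really is.
```

Even this trivial toy shows a spread of several degrees from one uncertain parameter; a full climate model has many such parameters.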
How can we assess and reduce this uncertainty?
There are two complementary approaches to this problem:
1. Improve the parameterizations while narrowing the range of uncertainty in the parameters. This is a continuous process and requires:
   - Improving the models, using the latest supercomputers as they become available.
   - Gathering more and more (mainly satellite) data on a wide range of atmospheric variables (such as wind speed, cloud cover, temperature, ...).
2. Carry out large numbers of model runs in which the parameters are varied within their current range of uncertainty. Reject those which fail to model past climate successfully and use the remainder to study future climate.
The second approach is the climateprediction.net approach. Our intention is to run hundreds of thousands of state-of-the-art climate models with slightly different physics in order to represent the whole range of uncertainties in all the parameterizations. This technique, known as ensemble forecasting, requires an enormous amount of computing power, far beyond the currently available resources of cutting-edge supercomputers. The only practical solution is to appeal to distributed computing, which combines the power of thousands of ordinary computers, each tackling one small but key part of the global problem.
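As a rough picture of how such an ensemble works, here is a minimal Python sketch under stated assumptions (the toy model, parameter ranges, "observed" warming, and tolerance below are all made up; they are not climateprediction.net's actual model or data): draw perturbed parameters for each member, reject members that fail to reproduce past warming, and use the survivors to bound the future projection.

```python
# A minimal sketch of a perturbed-parameter ensemble with hindcast filtering.
# Everything numerical here is invented for illustration.
import random

random.seed(42)

OBSERVED_PAST_WARMING = 1.1   # K, illustrative "observations"
TOLERANCE = 0.2               # K, acceptance window for the hindcast
PAST_FORCING = 2.0            # W/m^2, toy forcing over the historical period
FUTURE_FORCING = 5.0          # W/m^2, toy forcing for the future scenario

def toy_model(feedback, uptake, forcing):
    """Warming of a toy climate: part of the forcing is taken up by the
    ocean, the rest is divided by the feedback parameter."""
    return (forcing * (1.0 - uptake)) / feedback

def sample_member():
    """Draw one set of perturbed parameters from assumed uncertainty ranges."""
    feedback = random.uniform(0.5, 2.0)   # W/m^2 per K
    uptake = random.uniform(0.1, 0.5)     # fraction of forcing into the ocean
    return feedback, uptake

ensemble = [sample_member() for _ in range(100_000)]

# Reject members that fail to model past climate within the tolerance.
survivors = [
    (f, u) for f, u in ensemble
    if abs(toy_model(f, u, PAST_FORCING) - OBSERVED_PAST_WARMING) < TOLERANCE
]

# Use the remainder to study future climate.
projections = sorted(toy_model(f, u, FUTURE_FORCING) for f, u in survivors)
print(f"{len(survivors)} of {len(ensemble)} members accepted")
print(f"projected warming range: {projections[0]:.1f} K to {projections[-1]:.1f} K")
```

In the real project, each ensemble member is a full climate model run on a volunteer's computer rather than a one-line formula, which is exactly why the distributed-computing approach is needed.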