OK, the experiments ran faster than expected. I changed strategies and found good examples to put in the article (good meaning: small enough to be printed, but still complex enough to show the benefits and drawbacks).
Currently I'm finalizing the first article, then I can finalize the second one. For the third I will need to run the application on real-world data, which means some software development first (I need to import the data).
But I think for now I'm back online and crunching.
The algorithm transforms raster data using a fuzzy, self-optimizing artificially intelligent system.
I'll give an example of the problem we need to solve.
Imagine you have air pollution data, containing concentrations of some pollutant, supplied on a raster. And then you have, e.g., population data, also supplied on a raster.
But the rasters don't align (they have different cell sizes and/or different orientations), which makes it nearly impossible to estimate which part of the population is exposed to which concentration. So one of the rasters has to be transformed to match the other, so that every cell in one raster corresponds to a cell in the other.

The most used approach does this very simplistically, by simply considering the amount of overlap between two cells to determine the portion that is mapped to each cell. This assumes a uniform distribution within each cell (other approaches assume a smooth distribution across the map). But in reality, the distribution can be different: point sources of pollutants and line sources, e.g. roads, alter the distribution. So my method takes other known information about the pollutant into account and uses that to remap the given raster onto a different raster while maintaining accuracy. As it allows for loosely connected data, it uses an artificially intelligent system to judge how the additional information can be used.
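To make the "amount of overlap" baseline concrete, here is a minimal sketch of that simple approach (not my method): it assumes both rasters are axis-aligned, that cell values are concentrations (so overlapping areas are averaged, not summed), and that the value is uniformly distributed within each source cell, which is exactly the assumption that breaks down for point and line sources. All function and parameter names here are illustrative only.

```python
import numpy as np

def overlap_length(a0, a1, b0, b1):
    """Length of the overlap between intervals [a0, a1] and [b0, b1]."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def area_weighted_remap(src, src_origin, src_cell, dst_shape, dst_origin, dst_cell):
    """Remap a source raster onto a destination grid by area-weighted overlap.

    src        : 2D array of concentrations on the source raster
    src_origin : (y, x) of the source raster's lower-left corner
    src_cell   : source cell size
    dst_*      : same quantities for the destination grid
    """
    dst = np.zeros(dst_shape)
    weight = np.zeros(dst_shape)
    rows_s, cols_s = src.shape
    rows_d, cols_d = dst_shape
    for i in range(rows_d):
        y0_d = dst_origin[0] + i * dst_cell
        y1_d = y0_d + dst_cell
        for j in range(cols_d):
            x0_d = dst_origin[1] + j * dst_cell
            x1_d = x0_d + dst_cell
            for k in range(rows_s):
                y0_s = src_origin[0] + k * src_cell
                dy = overlap_length(y0_d, y1_d, y0_s, y0_s + src_cell)
                if dy == 0.0:
                    continue
                for l in range(cols_s):
                    x0_s = src_origin[1] + l * src_cell
                    dx = overlap_length(x0_d, x1_d, x0_s, x0_s + src_cell)
                    if dx == 0.0:
                        continue
                    a = dx * dy  # overlapping area: the only weight this method uses
                    dst[i, j] += a * src[k, l]
                    weight[i, j] += a
    # Average by total overlapping area; destination cells with no overlap stay 0.
    return np.divide(dst, weight, out=np.zeros_like(dst), where=weight > 0)
```

The point of the sketch is the weighting line: the only information used is geometric overlap, so any sub-cell structure in the source data is smeared out.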
At the moment, the prototype is implemented and I'm running it on artificial data. I need to find some datasets that show where the method works and where it still falls short (I already know why, and how to solve those shortcomings, but I need to publish the current progress first). I would also like to find datasets that require different parameters, to show that the self-learning aspect is important.
So last night I ran 96 different datasets (the calculation takes between 10 and 30 minutes per dataset), but one of the external libraries crashed after 10 datasets. Some examples trigger rounding errors in the libraries, and then they crash... So now I have to bypass that one...
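One way to keep a single bad dataset from killing the whole overnight batch is to isolate each run and log the failures. This is only a hedged sketch, not my actual harness: `process_one` is a placeholder for whatever runs the remapping on one dataset, and catching `Exception` only helps if the library raises an error; if native code crashes the process outright, each dataset would need to run in its own subprocess instead.

```python
import logging
import traceback

def run_batch(dataset_names, process_one):
    """Run process_one on each dataset, skipping over failures.

    Returns a dict of results for the datasets that succeeded and a
    list of the ones that failed, so they can be inspected later.
    """
    results, failed = {}, []
    for name in dataset_names:
        try:
            results[name] = process_one(name)
        except Exception:
            logging.error("Dataset %s failed:\n%s", name, traceback.format_exc())
            failed.append(name)
    return results, failed
```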
I have to suspend for a few days...
Sorry guys... I need to suspend crunching for a while.
I need to run some simulations and test cases to verify a newly developed algorithm and generate output for use in publications.