Hello all!

I need to compute the mean of a 30-meter grid raster over US Census street
blocks in the Northeast region (approx. 1.9 million block polygons). I came up
with a script that runs in parallel, but it still takes days to complete, even
with the i7 CPU fully loaded. My question: is there any other way to improve
the processing speed with the set-up below? (I have included only the backbone
of the script to save space.) I am new to R, so any help would be much
appreciated. Thanks a lot!

Regards,
Denys
____________________________

library(sp)
library(rgdal)
library(raster)
library(foreach)
library(spatial.tools)

Grid30m <- raster("raster.tif")     # load main 30 m grid raster into R
# State FIPS codes whose street-block polygons are to be processed.
# Note: these must be character strings, not numbers -- c(09, ...) would
# drop the leading zero and "block_09" would become "block_9".
NEfips <- c("09", "10", "11", ....)
ShapePath <- "T:\\..."              # path to the block shapefiles

sfQuickInit(cpus=8)                 # set up and register the parallel back-end

for (x in NEfips) {
    State <- paste("block_", x, sep="")
    # load the street-block polygons for this state
    Blocks <- readOGR(dsn=ShapePath, layer=State, stringsAsFactors=FALSE)
    BlocksReproject <- spTransform(Blocks, crs(Grid30m))

    # Parallel loop: mean raster value per street-block polygon.
    # Note BlocksReproject[i, ] -- the comma matters. For a
    # SpatialPolygonsDataFrame, BlocksReproject[i] selects the i-th
    # attribute COLUMN (keeping all polygons), so each iteration would
    # extract over the entire state.
    GridMean <- foreach(i=1:nrow(BlocksReproject), .combine='c',
                        .packages="raster") %dopar% {
        extract(Grid30m, BlocksReproject[i, ], fun=mean)
    }
    write.table(...)    # write out the per-block mean raster statistics
}
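For what it's worth, `raster::extract` also accepts the whole polygon layer in a single call, which avoids the per-polygon call overhead of looping one polygon at a time. A minimal self-contained sketch is below; the toy raster and square polygons are stand-ins for `Grid30m` and `BlocksReproject` (whether one vectorized call beats the `foreach` loop on the full 1.9 million blocks would need testing on the real data):

```r
library(sp)
library(raster)

# Toy stand-ins for Grid30m and BlocksReproject, so the snippet runs
# without external files: a 10x10 grid and two square polygons.
r <- raster(nrows = 10, ncols = 10, xmn = 0, xmx = 10, ymn = 0, ymx = 10)
values(r) <- 1:100
p1 <- Polygons(list(Polygon(cbind(c(0, 4, 4, 0), c(0, 0, 4, 4)))), "1")
p2 <- Polygons(list(Polygon(cbind(c(5, 9, 9, 5), c(5, 5, 9, 9)))), "2")
polys <- SpatialPolygons(list(p1, p2), proj4string = crs(r))

# One vectorized call instead of one extract() call per polygon:
# with df = TRUE the result is a data.frame of polygon IDs and means.
means <- extract(r, polys, fun = mean, df = TRUE)
print(means)
```

With `sp = TRUE` instead of `df = TRUE`, `extract` attaches the means back onto the polygon object, which may be convenient before `write.table`.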


_______________________________________________
R-sig-Geo mailing list
[email protected]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo
