[R-sig-Geo] A new version (1.2.0) of the “spm” package for spatial predictive modelling is now on CRAN. [SEC=UNCLASSIFIED]

2019-02-24 Thread Li Jin
Dear R users,



A new version (1.2.0) of the “spm” package for spatial predictive modelling is 
now available on CRAN.



The introductory vignette is available here:

https://cran.rstudio.com/web/packages/spm/vignettes/spm.html


In this version, two additional functions, avi and rvi, have been added, and 
some typos in the help files have been corrected.

avi: to calculate averaged variable importance (avi) for random forest; and
rvi: to calculate relative variable influence (rvi) for generalised boosted 
regression modelling (gbm).
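
For reference, a minimal sketch of how they might be called (the argument
layout here is an assumption based on other spm functions such as RFcv; see
?avi and ?rvi for the authoritative interface):

library(spm)
data(sponge)   # count data shipped with spm
# averaged variable importance for random forest (interface assumed)
avi1 <- avi(sponge[, -3], sponge[, 3])
# relative variable influence for gbm (family argument assumed, as in gbmpred)
rvi1 <- rvi(sponge[, -3], sponge[, 3], family = "poisson")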



As always, if you find any bugs or have any suggestions, please send me an 
email! Thanks in advance!



Kind regards,

Jin Li, PhD | Spatial Modeller / Computational Statistician
National Earth and Marine Observations | Environmental Geoscience Division
t: +61 2 6249 9899 | www.ga.gov.au




Re: [R-sig-Geo] [DKIM] Random Forest and OOB error [SEC=UNCLASSIFIED]

2018-06-04 Thread Li Jin
Hi Waldir,

Please check library(spm). The functions RFcv and rgcv in library(spm) provide 
better options for assessing the performance of random forest than the OOB 
error alone.
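
For example, a sketch reusing the RFcv call pattern that appears in an earlier
thread in this archive (rgcv is assumed to take the same arguments):

library(spm)
library(sp)
data(meuse)
set.seed(999)
# cross-validated accuracy measures for random forest
rfcv1 <- RFcv(meuse[, c(5, 4, 7, 8)], meuse[, 6], predacc = "ALL")
rfcv1$vecv  # variance explained by cross-validation
rfcv1$mae   # mean absolute error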

Kind regards,
Jin

-Original Message-
From: R-sig-Geo [mailto:r-sig-geo-boun...@r-project.org] On Behalf Of Waldir de 
Carvalho Junior
Sent: Tuesday, 5 June 2018 3:38 AM
To: r-sig-geo@r-project.org
Subject: [DKIM] [R-sig-Geo] Random Forest and OOB error

Hi
how can I get and save the "OOB estimate of error rate" from a randomForest
model?
I am running a loop and want to save the OOB error from each iteration.
I need the value to create a new data.frame with the OOB errors from
various model tests.
Thanks in advance
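
For reference, the OOB estimate can be pulled straight from the fitted model;
a sketch of such a loop, assuming a classification forest from
library(randomForest) (for regression, rf$mse holds the per-tree OOB MSE):

library(randomForest)
data(iris)
oob <- numeric(10)
for (i in 1:10) {
  rf <- randomForest(Species ~ ., data = iris, ntree = 500)
  # the last row of err.rate holds the final OOB estimate of error rate
  oob[i] <- rf$err.rate[rf$ntree, "OOB"]
}
oob.results <- data.frame(run = 1:10, oob_error = oob)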

-- 

Waldir de Carvalho Junior



[R-sig-Geo] xgboost: problems with predictions for count data [SEC=UNCLASSIFIED]

2018-04-03 Thread Li Jin
Hi All,

I tried to use xgboost to model and predict count data. The predictions are, 
however, not as expected, as shown below.
# sponge count data in library(spm)
library(spm)
data(sponge)
data(sponge.grid)
names(sponge)
[1] "easting"  "northing" "sponge"   "tpi3" "var7" "entro7"   "bs34"
 "bs11"
names(sponge.grid)
[1] "easting"  "northing" "tpi3" "var7" "entro7"   "bs34" "bs11"
range(sponge[, c(3)])
[1]  1 39 # count sample data

# the expected predictions are:
set.seed(1234)
gbmpred1 <- gbmpred(sponge[, -c(3)], sponge[, 3], sponge.grid[, c(1:2)], 
sponge.grid, family = "poisson", n.cores=2)
range(gbmpred1$Predictions)
[1] 10.04643 31.39230 # the expected predictions

# Here are results from xgboost
# use count:poisson
library(xgboost)
xgbst2.1 <- xgboost(data = as.matrix(sponge[, -c(3)]), label = sponge[, 3], 
max_depth = 2, eta = 0.001, nthread = 6, nrounds = 3000, objective = 
"count:poisson")
xgbstpred2 <- predict(xgbst2.1, as.matrix(sponge.grid))
head(xgbstpred2)
range(xgbstpred2)
[1] 1.109032 4.083049 # much lower than expected
table(xgbstpred2)
1.10903215408325 1.26556181907654   3.578040599823 4.08304929733276
36535 27144093015351
# only four distinct predicted values - why?

   plot(gbmpred1$Predictions, xgbstpred2)

   # use reg:linear
xgbst2.2 <- xgboost(data = as.matrix(sponge[, -c(3)]), label = sponge[, 3], 
max_depth = 2, eta = 0.001, nthread = 6, nrounds = 3000, objective = 
"reg:linear")
xgbstpred2.2 <- predict(xgbst2.2, as.matrix(sponge.grid))
head(xgbstpred2.2)
table(xgbstpred2.2)
range(xgbstpred2.2)
[1]  9.019174 23.060669 # much closer to, but still lower than, expected

   plot(gbmpred1$Predictions, xgbstpred2.2)

# use count:poisson and subsample = 0.5
set.seed(1234)
param <- list(max_depth = 2, eta = 0.001, gamma = 0.001, subsample = 0.5, 
silent = 1, nthread = 6, objective = "count:poisson")
xgbst2.4 <- xgboost(data = as.matrix(sponge[, -c(3)]), label = sponge[, 3], 
params = param, nrounds = 3000)
xgbstpred2.4 <- predict(xgbst2.4, as.matrix(sponge.grid))
head(xgbstpred2.4)
table(xgbstpred2.4)
range(xgbstpred2.4)
[1] 1.188561 3.986767 # much lower than expected

   plot(gbmpred1$Predictions, xgbstpred2.4)
  plot(xgbstpred2.2, xgbstpred2.4)

All these were run in R 3.3.3 on Windows:
> Sys.info()
 sysname  release
   "Windows"  "7 x64"
 version
"build 7601, Service Pack 1"
 machine
"x86-64"

Have I mis-specified or missed some parameters, or is there a bug in xgboost? 
I am grateful for any help.

Kind regards,
Jin

Jin Li, PhD | Spatial Modeller / Computational Statistician
National Earth and Marine Observations | Environmental Geoscience Division
t: +61 2 6249 9899 | www.ga.gov.au



[R-sig-Geo] A new version (1.1.0) of the “spm” package for spatial predictive modelling released on CRAN [SEC=UNCLASSIFIED]

2018-03-21 Thread Li Jin
Dear R users,



A new version (1.1.0) of the “spm” package for spatial predictive modelling is 
now available on CRAN.



The introductory vignette is available here:

https://cran.rstudio.com/web/packages/spm/vignettes/spm.html



There are several new enhancements to the package, including a fast version of 
random forest using ranger (rg) from library(ranger) and the ability to convert 
relevant error measures to the accuracy measure VEcv. A full list of changes is 
shown below.



New Features:

1. Added eight functions to implement random forest using ranger (rg) in 
library(ranger) (a minimal sketch follows this list).

2. Added a new function, tovecv, to convert relevant error measures to the 
accuracy measure VEcv.

3. Added some accuracy measures for categorical data and one further accuracy 
measure for numerical data in function pred.acc.

4. Added the variances of predictions to relevant prediction functions.

5. Revised RFcv etc. to use pred.acc.

6. Removed samples with missing values in data(hard).

7. Updated vignette accordingly.
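
A minimal sketch of the new ranger-based cross-validation (the argument layout
is assumed to mirror RFcv; see ?rgcv for the authoritative interface):

library(spm)
data(sponge)
set.seed(1234)
# ranger-based random forest cross-validation (interface assumed to match RFcv)
rgcv1 <- rgcv(sponge[, -3], sponge[, 3], predacc = "ALL")
rgcv1$vecv  # accuracy as variance explained by cross-validation (VEcv)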



Comments, suggestions and contributions are welcome and much appreciated!



Kind regards,

Jin Li, PhD | Spatial Modeller / Computational Statistician
National Earth and Marine Observations | Environmental Geoscience Division
t: +61 2 6249 9899 | www.ga.gov.au



Re: [R-sig-Geo] [DKIM] Re: [DKIM] Re: Interpolating snowfall values on a Digital Elevation Model [SEC=UNCLASSIFIED]

2018-02-22 Thread Li Jin
Agreed, Michael. Please see the refs provided for some demonstrations along a 
latitudinal gradient.

From: Michael Sumner [mailto:mdsum...@gmail.com]
Sent: Thursday, 22 February 2018 11:26 PM
To: Li Jin
Cc: Dominik Schneider; r-sig-geo@r-project.org
Subject: [DKIM] Re: [R-sig-Geo] [DKIM] Re: Interpolating snowfall values on a 
Digital Elevation Model [SEC=UNCLASSIFIED]

Some thoughts.


On Wed, 21 Feb 2018 at 09:09 Li Jin <jin...@ga.gov.au> wrote:
The statement ‘the kriging functions in R still don't accept lat/long’ is 
incorrect. Please check the gstat and spm packages for details. When your data 
is collected within a single UTM zone, it is a good idea to project the data 
using UTM. If the data spans two or more UTM zones, you need to use a 
different projection system. The references provided demonstrate that the 
commonly used WGS84 is as good as the relevant projection systems.

From: Dominik Schneider [mailto:dominik.schnei...@colorado.edu]
Sent: Wednesday, 21 February 2018 5:02 AM
To: Li Jin
Cc: Stefano Sofia; r-sig-geo@r-project.org
Subject: Re: [DKIM] Re: [R-sig-Geo] Interpolating snowfall values on a Digital 
Elevation Model [SEC=UNCLASSIFIED]

The effects of spatial reference systems on interpolations and accuracy are 
minimal, and lat and long can be used.
Fair enough, thanks for sending the references. But, as far as I know, the 
kriging functions in R still don't accept lat/long.



Any such advice is completely dependent on the study area, and the goals of the 
study. UTM is really bad advice generally, it's just a simplistic system we've 
inherited and is used way too much, a self-fulfilling prophecy. Whether 
standard tools should or shouldn't accept data as given is a crux philosophical 
point, no tool in R is smart enough to know whether it's "correct enough" to 
assume one way or another. You can't assume any measurement represents reality 
in any projection, it depends how far, how much, how large - you can't traverse 
from local neighbourhood scales to continental, for example - you'd make 
different choices regarding compromises at *some such point*.

Please don't ever advise use of UTM without specific caveats about the scope 
and extent of the research - which is impossible in general - learn to use map 
projections with the compromises they entail, there's nothing stopping creating 
a local new one, from any of the main families with PROJ.4, and with many 
variants of compromises on area, length, shape and scale.

I tend not to say anything about this topic in this environment, but this time 
the back and forth is particularly misleading IMO.

We actually have the worst of worlds at the moment, with many softwares 
opinionatedly preventing one from making educational mistakes. There's no real 
authority, lots of opinion and habit, lots of exploration but not enough 
pushing and argument - I advise keeping an open mind and exploring deeply.

Cheers, Mike.

On Mon, Feb 19, 2018 at 8:54 PM, Li Jin <jin...@ga.gov.au> wrote:
The effects of spatial reference systems on interpolations and accuracy are 
minimal, and lat and long can be used. Please see the following studies for 
details.

Jiang, W., Li, J., 2013. Are Spatial Modelling Methods Sensitive to Spatial 
Reference Systems for Predicting Marine Environmental Variables, 20th 
International Congress on Modelling and Simulation: Adelaide, Australia, pp. 
387-393.
Jiang, W., Li, J., 2014. The effects of spatial reference systems on the 
predictive accuracy of spatial interpolation methods. Record 2014/01. 
Geoscience Australia: Canberra, pp 33. 
http://dx.doi.org/10.11636/Record.2014.001.
Turner, A.J., Li, J., Jiang, W., 2017. Effects of Spatial Reference Systems on 
the Accuracy of Spatial Predictive Modelling along a Latitudinal Gradient, 22nd 
International Congress on Modelling and Simulation: Hobart, Tasmania, 
Australia, pp. 106-112.


-Original Message-
From: R-sig-Geo [mailto:r-sig-geo-boun...@r-project.org] On Behalf Of Dominik 
Schneider
Sent: Wednesday, 14 February 2018 3:21 AM
To: Stefano Sofia
Cc: r-sig-geo@r-project.org
Subject: [DKIM] Re: [R-sig-Geo] Interpolating snowfall values on a Digital 
Elevation Model

You can't use a lat/long coordinate system when kriging because the concept of 
distance is ambiguous. Convert all your data to a UTM grid like you had in 
your first post and it should work.

Another note: it looks like you are working at 0.01 deg, which is on the order 
of 1 km resolution, so you may find other covariates such as aspect, slope, 
and w

Re: [R-sig-Geo] [DKIM] Re: Interpolating snowfall values on a Digital Elevation Model [SEC=UNCLASSIFIED]

2018-02-20 Thread Li Jin
The statement ‘the kriging functions in R still don't accept lat/long’ is 
incorrect. Please check the gstat and spm packages for details. When your data 
is collected within a single UTM zone, it is a good idea to project the data 
using UTM. If the data spans two or more UTM zones, you need to use a 
different projection system. The references provided demonstrate that the 
commonly used WGS84 is as good as the relevant projection systems.
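
For example, a minimal sketch (not from the original thread) of ordinary
kriging on unprojected coordinates with gstat, which computes great-circle
distances when the data are in a geographic CRS; spTransform here assumes the
rgdal-era sp workflow:

library(sp)
library(rgdal)   # provides spTransform for Spatial* objects in this era
library(gstat)
data(meuse)
coordinates(meuse) <- ~x+y
proj4string(meuse) <- CRS("+init=epsg:28992")   # Dutch RD New
meuse.ll <- spTransform(meuse, CRS("+proj=longlat +datum=WGS84"))
v <- variogram(log(zinc) ~ 1, meuse.ll)   # distances in km (great circle)
m <- fit.variogram(v, vgm("Exp"))
ok <- krige(log(zinc) ~ 1, meuse.ll, meuse.ll, model = m)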

From: Dominik Schneider [mailto:dominik.schnei...@colorado.edu]
Sent: Wednesday, 21 February 2018 5:02 AM
To: Li Jin
Cc: Stefano Sofia; r-sig-geo@r-project.org
Subject: Re: [DKIM] Re: [R-sig-Geo] Interpolating snowfall values on a Digital 
Elevation Model [SEC=UNCLASSIFIED]

The effects of spatial reference systems on interpolations and accuracy are 
minimal, and lat and long can be used.
Fair enough, thanks for sending the references. But, as far as I know, the 
kriging functions in R still don't accept lat/long.



On Mon, Feb 19, 2018 at 8:54 PM, Li Jin <jin...@ga.gov.au> wrote:
The effects of spatial reference systems on interpolations and accuracy are 
minimal, and lat and long can be used. Please see the following studies for 
details.

Jiang, W., Li, J., 2013. Are Spatial Modelling Methods Sensitive to Spatial 
Reference Systems for Predicting Marine Environmental Variables, 20th 
International Congress on Modelling and Simulation: Adelaide, Australia, pp. 
387-393.
Jiang, W., Li, J., 2014. The effects of spatial reference systems on the 
predictive accuracy of spatial interpolation methods. Record 2014/01. 
Geoscience Australia: Canberra, pp 33. 
http://dx.doi.org/10.11636/Record.2014.001.
Turner, A.J., Li, J., Jiang, W., 2017. Effects of Spatial Reference Systems on 
the Accuracy of Spatial Predictive Modelling along a Latitudinal Gradient, 22nd 
International Congress on Modelling and Simulation: Hobart, Tasmania, 
Australia, pp. 106-112.


-Original Message-
From: R-sig-Geo [mailto:r-sig-geo-boun...@r-project.org] On Behalf Of Dominik 
Schneider
Sent: Wednesday, 14 February 2018 3:21 AM
To: Stefano Sofia
Cc: r-sig-geo@r-project.org
Subject: [DKIM] Re: [R-sig-Geo] Interpolating snowfall values on a Digital 
Elevation Model

You can't use a lat/long coordinate system when kriging because the concept of 
distance is ambiguous. Convert all your data to a UTM grid like you had in 
your first post and it should work.

Another note: it looks like you are working at 0.01 deg, which is on the order 
of 1 km resolution, so you may find other covariates such as aspect, slope, 
wind sheltering/exposure, and terrain roughness useful for estimating snow on 
the ground. See some of the earliest papers by Carroll, Cressie, and Elder.

Carroll, S. S., and N. Cressie (1996), A comparison of geostatistical 
methodologies used to estimate snow water equivalent, *JAWRA Journal of the 
American Water Resources Association*, *32*(2), 267–278, 
doi:10.1111/j.1752-1688.1996.tb03450.x.

Carroll, S. S., and N. Cressie (1997), Spatial modeling of snow water 
equivalent using covariances estimated from spatial and geomorphic attributes, 
*Journal of Hydrology*, *190*(1-2), 42–59.

Balk, B., and K. Elder (2000), Combining binary decision tree and 
geostatistical methods to estimate snow distribution in a mountain watershed, 
*Water Resources Research*, *36*(1), 13–26, doi:10.1029/1999WR900251.

Erxleben, J., K. Elder, and R. Davis (2002), Comparison of spatial 
interpolation methods for estimating snow distribution in the Colorado Rocky 
Mountains, *Hydrological Processes*, *16*(18), 3627–3649, doi:10.1002/hyp.1239.

Erickson, T. A., M. W. Williams, and A. Winstral (2005), Persistence of 
topographic controls on the spatial distribution of snow in rugged mountain 
terrain, Colorado, United States, *Water Resour. Res.*, *41*(4), W04014, 
doi:10.1029/2003WR002973.


On Tue, Feb 13, 2018 at 3:45 AM, Stefano Sofia 
<stefano.so...@regione.marche.it> wrote:

> Dear Daniel and list users,
> I tried to follow the instructions but I encountered two kinds of errors.
> This is reproducible code:
>
> 
> ---
> library(automap)
> library(ggplot2)
> library(gstat)
> library(raster)
> library(rasterVis)
> library(rgdal)
> library(maptools)
>
> ## LOADING DEM
> ita_DEM <- getData('alt', country='ITA', mask=TRUE)
> crs(ita_DEM) <- "+init=epsg:4326 +proj=longlat +datum=WGS84 +no_defs
> +ellps=WGS84 +towgs84=0,0,0"
> #ita_DEM <- as(ita_DEM, "SpatialGridDataFrame")
> str(ita_DEM)
>
> ## LOADING RAINFALL DATA
> rain_data <- data.frame(Cumulata=c(11.8, 9.0, 8.0, 36.6, 9.4),
> Long_Cent=c(12.61874, 12.78690, 12.96756, 13.15599, 13.2815

Re: [R-sig-Geo] [DKIM] Re: Interpolating snowfall values on a Digital Elevation Model [SEC=UNCLASSIFIED]

2018-02-19 Thread Li Jin
The effects of spatial reference systems on interpolations and accuracy are 
minimal, and lat and long can be used. Please see the following studies for 
details.

Jiang, W., Li, J., 2013. Are Spatial Modelling Methods Sensitive to Spatial 
Reference Systems for Predicting Marine Environmental Variables, 20th 
International Congress on Modelling and Simulation: Adelaide, Australia, pp. 
387-393.
Jiang, W., Li, J., 2014. The effects of spatial reference systems on the 
predictive accuracy of spatial interpolation methods. Record 2014/01. 
Geoscience Australia: Canberra, pp 33. 
http://dx.doi.org/10.11636/Record.2014.001.
Turner, A.J., Li, J., Jiang, W., 2017. Effects of Spatial Reference Systems on 
the Accuracy of Spatial Predictive Modelling along a Latitudinal Gradient, 22nd 
International Congress on Modelling and Simulation: Hobart, Tasmania, 
Australia, pp. 106-112.


-Original Message-
From: R-sig-Geo [mailto:r-sig-geo-boun...@r-project.org] On Behalf Of Dominik 
Schneider
Sent: Wednesday, 14 February 2018 3:21 AM
To: Stefano Sofia
Cc: r-sig-geo@r-project.org
Subject: [DKIM] Re: [R-sig-Geo] Interpolating snowfall values on a Digital 
Elevation Model

You can't use a lat/long coordinate system when kriging because the concept of 
distance is ambiguous. Convert all your data to a UTM grid like you had in 
your first post and it should work.

Another note: it looks like you are working at 0.01 deg, which is on the order 
of 1 km resolution, so you may find other covariates such as aspect, slope, 
wind sheltering/exposure, and terrain roughness useful for estimating snow on 
the ground. See some of the earliest papers by Carroll, Cressie, and Elder.

Carroll, S. S., and N. Cressie (1996), A comparison of geostatistical 
methodologies used to estimate snow water equivalent, *JAWRA Journal of the 
American Water Resources Association*, *32*(2), 267–278, 
doi:10.1111/j.1752-1688.1996.tb03450.x.

Carroll, S. S., and N. Cressie (1997), Spatial modeling of snow water 
equivalent using covariances estimated from spatial and geomorphic attributes, 
*Journal of Hydrology*, *190*(1-2), 42–59.

Balk, B., and K. Elder (2000), Combining binary decision tree and 
geostatistical methods to estimate snow distribution in a mountain watershed, 
*Water Resources Research*, *36*(1), 13–26, doi:10.1029/1999WR900251.

Erxleben, J., K. Elder, and R. Davis (2002), Comparison of spatial 
interpolation methods for estimating snow distribution in the Colorado Rocky 
Mountains, *Hydrological Processes*, *16*(18), 3627–3649, doi:10.1002/hyp.1239.

Erickson, T. A., M. W. Williams, and A. Winstral (2005), Persistence of 
topographic controls on the spatial distribution of snow in rugged mountain 
terrain, Colorado, United States, *Water Resour. Res.*, *41*(4), W04014, 
doi:10.1029/2003WR002973.


On Tue, Feb 13, 2018 at 3:45 AM, Stefano Sofia 
<stefano.so...@regione.marche.it> wrote:

> Dear Daniel and list users,
> I tried to follow the instructions but I encountered two kinds of errors.
> This is reproducible code:
>
> 
> ---
> library(automap)
> library(ggplot2)
> library(gstat)
> library(raster)
> library(rasterVis)
> library(rgdal)
> library(maptools)
>
> ## LOADING DEM
> ita_DEM <- getData('alt', country='ITA', mask=TRUE)
> crs(ita_DEM) <- "+init=epsg:4326 +proj=longlat +datum=WGS84 +no_defs
> +ellps=WGS84 +towgs84=0,0,0"
> #ita_DEM <- as(ita_DEM, "SpatialGridDataFrame")
> str(ita_DEM)
>
> ## LOADING RAINFALL DATA
> rain_data <- data.frame(Cumulata=c(11.8, 9.0, 8.0, 36.6, 9.4), 
> Long_Cent=c(12.61874, 12.78690, 12.96756, 13.15599, 13.28157), 
> Lat_Cent=c(43.79447, 43.85185, 43.76267, 43.03470, 43.08003), 
> Altitude=c(112.20, 42.93, 36.14, 747, 465))
>
> stations <- data.frame(rain_data$Long_Cent, rain_data$Lat_Cent) 
> rain_data <- SpatialPointsDataFrame(stations, rain_data,
> proj4string=CRS("+init=epsg:4326"))
> stations <- SpatialPoints(stations, 
> proj4string=CRS("+init=epsg:4326"))
>
> ## EXTRACT THE ELEVATION VALUES TO MY POINTS 
> rain_data$ExtractedElevationValues <- extract(x=ita_DEM, y=stations)
>
> ## CREATE GRID FOR KRIGING OUTPUT
> minx <-  rain_data@bbox[1,1]
> maxx <- rain_data@bbox[1,2]
> miny <- rain_data@bbox[2,1]
> maxy <- rain_data@bbox[2,2]
> pixel <- 0.01
> grd <- expand.grid(x=seq(minx, maxx, by=pixel), y=seq(miny, maxy,
> by=pixel))
> coordinates(grd) <- ~x+y
> gridded(grd) <- TRUE
> proj4string(grd) <- CRS("+init=epsg:4326")
>
> ## KRIGING: autoKrige(YourMeasurements ~ YourExtractedElevationValues,
> ## YourMeasurementLocations, TargetGrid)
> OK_snow <- autoKrige(Cumulata ~ rain_data$ExtractedElevationValues,
> rain_data, grd)
> 
> ---
>
> The error I get is:
> Error in autoKrige(Cumulata ~ rain_data$ExtractedElevationValues,
> rain_data,  :
>   Either input_data or 

Re: [R-sig-Geo] [DKIM] Re: Fw: Why is there a large predictive difference for Univ. Kriging? [SEC=UNCLASSIFIED]

2017-11-22 Thread Li Jin
That is something I am still working on.

Variances of the predictions by spatial predictive models are not well defined 
yet, and there are many ways to do this but none of them is satisfactory. 
Although people may argue that kriging methods can produce such a variance, the 
kriging variance in fact does not reflect the uncertainty expected. Please see 
Goovaerts 1997 'Geostatistics for Natural Resources Evaluation' for details, or 
simply see my review 'A Review of Spatial Interpolation Methods for 
Environmental Scientists' for a short explanation.

From: Joelle k. Akram [mailto:chino_to...@hotmail.com]
Sent: Thursday, 23 November 2017 10:01 AM
To: Li Jin; Tomislav Hengl; r-sig-geo@r-project.org
Subject: [DKIM] Re: [DKIM] Re: [R-sig-Geo] Fw: Why is there a large predictive 
difference for Univ. Kriging? [SEC=UNCLASSIFIED]


Jin,

Is there any way to get the variances of the predictions in spm?




From: Li Jin <jin...@ga.gov.au>
Sent: November 22, 2017 3:28 PM
To: Tomislav Hengl; Joelle k. Akram; r-sig-geo@r-project.org
Subject: RE: [DKIM] Re: [R-sig-Geo] Fw: Why is there a large predictive 
difference for Univ. Kriging? [SEC=UNCLASSIFIED]

Let's try spm and see what we can achieve. All these scripts were directly 
modified from examples in spm.
> library(spm)
> library(sp)
> library(gstat)
> data(meuse)

> set.seed(999)
> rfcv1 <- RFcv(meuse[, c(5,4,7,8)], meuse[, 6], predacc = "ALL") # I used the 
> same predictors in the same order as in your model for comparison purposes.
> rfcv1$mae
[1] 53.54404 # This is much lower than that for KED

> set.seed(999)
> rfcv1 <- rfokcv(meuse[, c(1,2)], meuse[, c(5,4,7,8)], meuse[, 6], predacc = 
> "ALL")
> rfcv1$mae
[1] 42.22274 # This one further improved the accuracy in comparison with that 
for RF

> set.seed(999)
> rfcv1 <- rfidwcv(meuse[, c(1,2)], meuse[, c(5,4,7,8)], meuse[, 6], predacc = 
> "ALL")
> rfcv1$mae
[1] 42.60406 # This one is similar to RFOK

You may try rfcv1$vecv for each method to see how accurate the models are.

I guess the results speak for themselves about which method should be used.
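
For reference, VEcv boils down to a one-line function of the observed values
and the cross-validated predictions; a sketch of the formula (the vecv
component returned above should match it):

vecv.by.hand <- function(obs, pred) {
  # variance explained by cross-validation, in percent
  (1 - sum((obs - pred)^2) / sum((obs - mean(obs))^2)) * 100
}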

-Original Message-
From: R-sig-Geo [mailto:r-sig-geo-boun...@r-project.org] On Behalf Of Tomislav 
Hengl
Sent: Thursday, 23 November 2017 8:42 AM
To: Joelle k. Akram; r-sig-geo@r-project.org
Subject: [DKIM] Re: [R-sig-Geo] Fw: Why is there a large predictive difference 
for Univ. Kriging?


Any type of kriging is a convex predictor which means that predictions at 
sampling locations will exactly match measured numbers. That is why you get 
MAE_train = 0.

The actual MAE of your predictions is 85.9. This is not that bad considering 
that the range of values is 113-1839. If you repeat the CV process e.g. 10 
times you will get a more stable estimate of MAE. Even more interesting is the 
simple mean error (ME), which tells you whether there is an over-estimation or 
under-estimation problem. Also, plotting observed vs predicted (as in
http://gsif.isric.org/lib/exe/detail.php/wiki:xyplot_predicted_vs_observerd_edgeroi.png?id=wiki%3Asoilmapping_using_mla)
gives you a graphical idea of whether there are any problems with your model.
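
For example, a sketch of such a repeated split, using the linear trend alone 
as a stand-in for the full kriging model to keep it short:

library(sp)
data(meuse)
set.seed(999)
maes <- mes <- numeric(10)
for (i in 1:10) {
  idx <- sample(seq_len(nrow(meuse)), size = 0.7 * nrow(meuse))
  fit <- lm(log(zinc) ~ lead + copper + elev + dist, data = meuse[idx, ])
  pred <- exp(predict(fit, newdata = meuse[-idx, ]))  # naive back-transform
  err <- meuse$zinc[-idx] - pred
  maes[i] <- mean(abs(err))  # mean absolute error
  mes[i] <- mean(err)        # mean error: sign shows over/under-estimation
}
mean(maes); mean(mes)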

HTH

Tom Hengl



On 2017-11-22 21:34, Joelle k. Akram wrote:
> Hi Tom,
>
>
> I tried splitting the data into 'training' set and a 'holdout' sample
> set as in my original post. I seem to be getting consistent results,
> i.e., a large predictive difference in terms of MAE between both sets.
> The MAE_train =0.1165816 and MAE_holdOut = 85.91126. In my
> opinion, this significant difference is an indication of over-fitting
> on the training sample set for the semi-variogram modeling. The code
> is below.  Any of your insights are welcome.
>
>
> demo(meuse, echo=FALSE)
>   set.seed(999)
>   sel.d = complete.cases(meuse@data[,c("lead","copper","elev",
> "dist")])
>   meuse = meuse[sel.d,]
>   Training_ids <- sample(seq_len(nrow(meuse)), size = (0.7*
> nrow(meuse)))
>   Training_sample = meuse[Training_ids,]
>   Holdout_sample = meuse[-Training_ids,]
>   # Generate VGM using Training set
>   Training_sample.geo <- as.geodata(Training_sample["zinc"])
>   ## add covariates:
>   Training_sample.geo$covariate =
> Training_sample@data[,c("lead","copper","elev", "dist")]
> trend = ~ lead+copper+elev+dist
>   zinc.vgm <- likfit(Training_sample.geo, lambda=0, trend = trend,
>
> ini=c(var(log1p(Training_sample.geo$data)),800),
> fix.psiA = FALSE, fix.psiR = FALSE)
>
> # do prediction for locations in Training set
>   locs2 = Training_sample@coords
>   KC = krige.control(trend.d = trend, trend.l = ~
>Training_sample$lea

Re: [R-sig-Geo] [DKIM] Re: Fw: Why is there a large predictive difference for Univ. Kriging? [SEC=UNCLASSIFIED]

2017-11-22 Thread Li Jin
Let's try spm and see what we can achieve. All these scripts were directly 
modified from examples in spm.
> library(spm)
> library(sp)
> library(gstat)
> data(meuse)

> set.seed(999)
> rfcv1 <- RFcv(meuse[, c(5,4,7,8)], meuse[, 6], predacc = "ALL") # I used the 
> same predictors in the same order as in your model for comparison purposes.
> rfcv1$mae
[1] 53.54404 # This is much lower than that for KED

> set.seed(999)
> rfcv1 <- rfokcv(meuse[, c(1,2)], meuse[, c(5,4,7,8)], meuse[, 6], predacc = 
> "ALL")
> rfcv1$mae
[1] 42.22274 # This one further improved the accuracy in comparison with that 
for RF

> set.seed(999)
> rfcv1 <- rfidwcv(meuse[, c(1,2)], meuse[, c(5,4,7,8)], meuse[, 6], predacc = 
> "ALL") 
> rfcv1$mae
[1] 42.60406 # This one is similar to RFOK

You may try rfcv1$vecv for each method to see how accurate the models are.

I guess the results speak for themselves about which method should be used.

-Original Message-
From: R-sig-Geo [mailto:r-sig-geo-boun...@r-project.org] On Behalf Of Tomislav 
Hengl
Sent: Thursday, 23 November 2017 8:42 AM
To: Joelle k. Akram; r-sig-geo@r-project.org
Subject: [DKIM] Re: [R-sig-Geo] Fw: Why is there a large predictive difference 
for Univ. Kriging?


Any type of kriging is a convex predictor which means that predictions at 
sampling locations will exactly match measured numbers. That is why you get 
MAE_train = 0.

The actual MAE of your predictions is 85.9. This is not that bad considering 
that the range of values is 113-1839. If you repeat the CV process e.g. 10 
times you will get a more stable estimate of MAE. Even more interesting is the 
simple mean error (ME), which tells you whether there is an over-estimation or 
under-estimation problem. Also, plotting observed vs predicted (as in
http://gsif.isric.org/lib/exe/detail.php/wiki:xyplot_predicted_vs_observerd_edgeroi.png?id=wiki%3Asoilmapping_using_mla)
gives you a graphical idea of whether there are any problems with your model.

HTH

Tom Hengl



On 2017-11-22 21:34, Joelle k. Akram wrote:
> Hi Tom,
> 
> 
> I tried splitting the data into 'training' set and a 'holdout' sample 
> set as in my original post. I seem to be getting consistent results, 
> i.e., a large predictive difference in terms of MAE between both sets.
> The MAE_train =0.1165816 and MAE_holdOut = 85.91126. In my 
> opinion, this significant difference is an indication of over-fitting 
> on the training sample set for the semi-variogram modeling. The code 
> is below.  Any of your insights are welcome.
> 
> 
> demo(meuse, echo=FALSE)
>   set.seed(999)
>   sel.d = complete.cases(meuse@data[,c("lead","copper","elev", 
> "dist")])
>   meuse = meuse[sel.d,]
>   Training_ids <- sample(seq_len(nrow(meuse)), size = (0.7* 
> nrow(meuse)))
>   Training_sample = meuse[Training_ids,]
>   Holdout_sample = meuse[-Training_ids,]
>   # Generate VGM using Training set
>   Training_sample.geo <- as.geodata(Training_sample["zinc"])
>   ## add covariates:
>   Training_sample.geo$covariate =
> Training_sample@data[,c("lead","copper","elev", "dist")]
> trend = ~ lead+copper+elev+dist
>   zinc.vgm <- likfit(Training_sample.geo, lambda=0, trend = trend,
>                        
> ini=c(var(log1p(Training_sample.geo$data)),800),
> fix.psiA = FALSE, fix.psiR = FALSE)
> 
> # do prediction for locations in Training set
>   locs2 = Training_sample@coords
>   KC = krige.control(trend.d = trend, trend.l = ~
>                        Training_sample$lead+Training_sample$copper+
>                        Training_sample$elev+Training_sample$dist,
> obj.model = zinc.vgm)
>   zinc_train <- krige.conv(Training_sample.geo, locations=locs2, 
> krige=KC)
>   # do prediction for new locations in Hold-Out sample set
>   newlocs2 = Holdout_sample@coords
>   KC2 = krige.control(trend.d = trend, trend.l = ~
>                        Holdout_sample$lead+Holdout_sample$copper+
>                       Holdout_sample$elev+Holdout_sample$dist, 
> obj.model = zinc.vgm)
>   zinc_holdout <- krige.conv(Training_sample.geo, locations=newlocs2,
> krige=KC2)
>   # Computing Predictive errors for Training and Hold Out samples 
> respectively
>   training_prediction_error_term <- Training_sample$zinc - 
> zinc_train$predict
>   holdout_prediction_error_term <- Holdout_sample$zinc - 
> zinc_holdout$predict
> 
>   # Function that returns Mean Absolute Error
>   mae <- function(error)
>   {
>     mean(abs(error))
>   }
>   # Mean Absolute Error metric :
>   # UK Predictive errors for Training sample set , and UK Predictive 
> Errors for HoldOut sample set
>   print(mae(training_prediction_error_term)) #Error for Training 
> sample set
>   print(mae(holdout_prediction_error_term)) #Error for Hold out sample 
> set
> 
> 
> 
> 
> ________________________________
> From: Tomislav Hengl
> Sent: November 22, 2017 8:17 AM
> To: Joelle k. Akram; r-sig-geo@r-project.org
> Subject: Re: [R-sig-Geo] Fw: Why is there a large predictive 
> difference for 

Re: [R-sig-Geo] [DKIM] Re: [DKIM] Fw: Why is there a large predictive difference forUniv. Kriging? [SEC=UNCLASSIFIED]

2017-11-22 Thread Li Jin
Now I guess you need to understand how KED works. This paper 
https://doi.org/10.1016/j.envsoft.2013.12.008 may give you some clues. 
Theoretically, the MAE of KED for the training dataset should be 0 due to its 
nature (hint: exactness).
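
A quick way to see the exactness property, reusing the objects from your
script (a sketch): the training residuals of an exact interpolator collapse
to roughly zero.

summary(Training_sample$zinc - prediction_training_data)  # ~ 0 throughout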

From: Joelle k. Akram [mailto:chino_to...@hotmail.com]
Sent: Wednesday, 22 November 2017 5:38 PM
To: Li Jin; r-sig-geo@r-project.org
Subject: Re: [DKIM] Re: [R-sig-Geo] [DKIM] Fw: Why is there a large predictive 
difference forUniv. Kriging? [SEC=UNCLASSIFIED]


Jin,



do you think there is potential evidence of overfitting for KED, given the 
large difference in MAE between the train and holdout sets?


From: Li Jin <jin...@ga.gov.au>
Sent: November 22, 2017 3:28 PM
To: Tomislav Hengl; Joelle k. Akram; r-sig-geo@r-project.org
difference forUniv. Kriging? [SEC=UNCLASSIFIED]


For both models, the MAE for holdout is larger than that for the training. That 
is expected.



From: Joelle k. Akram [mailto:chino_to...@hotmail.com]
Sent: Wednesday, 22 November 2017 12:49 PM
To: Li Jin; r-sig-geo@r-project.org
Subject: Re: [DKIM] Re: [R-sig-Geo] [DKIM] Fw: Why is there a large predictive 
difference forUniv. Kriging? [SEC=UNCLASSIFIED]



thanks Jin. The reason I am very surprised by the MAE_train and MAE_holdOut 
differences is due to my comparison of the KED (i.e., Univ krig. code in my 
initial message post) with Linear Regression.



Please see below for the Linear Regression code, where the MAE_training_set = 
90.1 and the MAE_holdOut_set = 97.4.

On the other hand, KED gave me MAE_training_set = 1 and the MAE_holdOut_set = 
76.5.



Given that KED is a linear model (i.e. Linear Reg + Ord Krig.) I am surprised 
by these differences. Any insight from your end is appreciated.



cat("\014")

rm(list=ls())

cls <- function() cat(rep("\n",100))

cls()

graphics.off()

setwd("C:/Users/Ravi Persad/Desktop/OwenSound_Region25_UR010")

options(scipen = 999)

graphics.off()







library(sp)

library(gstat)

data(meuse)

dataset= meuse

set.seed(999)



# Split Meuse Dataset into Training and HoldOut Sample datasets

Training_ids <- sample(seq_len(nrow(dataset)), size = (0.7* nrow(dataset)))



Training_sample = dataset[Training_ids,]

Holdout_sample_allvars = dataset[-Training_ids,]



holdoutvars_df <-(dataset[,which(names(dataset) %in% 
c("x","y","lead","copper","elev","dist"))])

Hold_out_sample = holdoutvars_df[-Training_ids,]



coordinates(Training_sample) <- c('x','y')

coordinates(Hold_out_sample) <- c('x','y')



# Semivariogram modeling

m1  <- variogram(log(zinc)~lead+copper+elev+dist, Training_sample)

m <- vgm("Exp")

m <- fit.variogram(m1, m)





# Apply Linear regression to Training dataset

train_model <- lm(log(zinc)~lead+copper+elev+dist, Training_sample)

prediction_training_data <- expm1(predict(train_model,newdata =Training_sample 
))



# Apply Linear Regression to Hold Out dataset

prediction_holdout_data <- expm1(predict(train_model,newdata =Hold_out_sample ))



# Computing Predictive errors for Training and Hold Out samples respectively

training_prediction_error_term <- Training_sample$zinc - 
prediction_training_data

holdout_prediction_error_term <- Holdout_sample_allvars$zinc - 
prediction_holdout_data



# Function that returns Mean Absolute Error

mae <- function(error)

{

  mean(abs(error))

}



# Mean Absolute Error metric :

# UK Predictive errors for Training sample set , and UK Predictive Errors for 
HoldOut sample set

print(mae(training_prediction_error_term)) #Error for Training sample set

print(mae(holdout_prediction_error_term)) #Error for Hold out sample set







From: Li Jin <jin...@ga.gov.au>
Sent: November 21, 2017 6:36 PM
To: Li Jin; Joelle k. Akram; r-sig-geo@r-project.org
Subject: RE: [DKIM] Re: [R-sig-Geo] [DKIM] Fw: Why is there a large predictive 
difference forUniv. Kriging? [SEC=UNCLASSIFIED]



BTW, to your question: the first MAE measures the goodness of fit; the second 
measures the predictive accuracy. The second paper below has partially 
addressed this.

-Original Message-
From: R-sig-Geo [mailto:r-sig-geo-boun...@r-project.org] On Behalf Of Li Jin
Sent: Wednesday, 22 November 2017 12:22 PM
To: Joelle k. Akram; r-sig-geo@r-project.org
Subject: [DKIM] Re: [R-sig-Geo] [DKIM] Fw: Why is there a large predictive 
difference forUniv. Kriging? [SEC=UNCLASSIFIED]

Although regression models are transparent, their predictive accuracy is poor 
in many cases, especially in environmental modelling, because of non-linear 
relationsh

Re: [R-sig-Geo] [DKIM] Re: [DKIM] Fw: Why is there a large predictive difference forUniv. Kriging? [SEC=UNCLASSIFIED]

2017-11-21 Thread Li Jin
For both models, the MAE for holdout is larger than that for the training. That 
is expected.

From: Joelle k. Akram [mailto:chino_to...@hotmail.com]
Sent: Wednesday, 22 November 2017 12:49 PM
To: Li Jin; r-sig-geo@r-project.org
Subject: Re: [DKIM] Re: [R-sig-Geo] [DKIM] Fw: Why is there a large predictive 
difference forUniv. Kriging? [SEC=UNCLASSIFIED]


thanks Jin. The reason I am very surprised by the MAE_train and MAE_holdOut 
differences is due to my comparison of the KED (i.e., Univ krig. code in my 
initial message post) with Linear Regression.



Please see below for the Linear Regression code, where the MAE_training_set = 
90.1 and the MAE_holdOut_set = 97.4.

On the other hand, KED gave me MAE_training_set = 1 and the MAE_holdOut_set = 
76.5.



Given that KED is a linear model (i.e. Linear Reg + Ord Krig.) I am surprised 
by these differences. Any insight from your end is appreciated.


cat("\014")
rm(list=ls())
cls <- function() cat(rep("\n",100))
cls()
graphics.off()
setwd("C:/Users/Ravi Persad/Desktop/OwenSound_Region25_UR010")
options(scipen = 999)
graphics.off()



library(sp)
library(gstat)
data(meuse)
dataset= meuse
set.seed(999)

# Split Meuse Dataset into Training and HoldOut Sample datasets
Training_ids <- sample(seq_len(nrow(dataset)), size = (0.7* nrow(dataset)))

Training_sample = dataset[Training_ids,]
Holdout_sample_allvars = dataset[-Training_ids,]

holdoutvars_df <-(dataset[,which(names(dataset) %in% 
c("x","y","lead","copper","elev","dist"))])
Hold_out_sample = holdoutvars_df[-Training_ids,]

coordinates(Training_sample) <- c('x','y')
coordinates(Hold_out_sample) <- c('x','y')

# Semivariogram modeling
m1  <- variogram(log(zinc)~lead+copper+elev+dist, Training_sample)
m <- vgm("Exp")
m <- fit.variogram(m1, m)


# Apply Linear regression to Training dataset
train_model <- lm(log(zinc)~lead+copper+elev+dist, Training_sample)
prediction_training_data <- expm1(predict(train_model, newdata = Training_sample))

# Apply Linear Regression to Hold Out dataset
prediction_holdout_data <- expm1(predict(train_model, newdata = Hold_out_sample))

# Computing Predictive errors for Training and Hold Out samples respectively
training_prediction_error_term <- Training_sample$zinc - 
prediction_training_data
holdout_prediction_error_term <- Holdout_sample_allvars$zinc - 
prediction_holdout_data

# Function that returns Mean Absolute Error
mae <- function(error)
{
  mean(abs(error))
}

# Mean Absolute Error metric :
# UK Predictive errors for Training sample set , and UK Predictive Errors for 
HoldOut sample set
print(mae(training_prediction_error_term)) #Error for Training sample set
print(mae(holdout_prediction_error_term)) #Error for Hold out sample set



From: Li Jin <jin...@ga.gov.au>
Sent: November 21, 2017 6:36 PM
To: Li Jin; Joelle k. Akram; r-sig-geo@r-project.org
Subject: RE: [DKIM] Re: [R-sig-Geo] [DKIM] Fw: Why is there a large predictive 
difference forUniv. Kriging? [SEC=UNCLASSIFIED]

BTW, to your question: the first MAE measures the goodness of fit; the second 
measures the predictive accuracy. The second paper below has partially 
addressed this.

-Original Message-
From: R-sig-Geo [mailto:r-sig-geo-boun...@r-project.org] On Behalf Of Li Jin
Sent: Wednesday, 22 November 2017 12:22 PM
To: Joelle k. Akram; r-sig-geo@r-project.org
Subject: [DKIM] Re: [R-sig-Geo] [DKIM] Fw: Why is there a large predictive 
difference forUniv. Kriging? [SEC=UNCLASSIFIED]

Although regression models are transparent, their predictive accuracy is poor 
in many cases, especially in environmental modelling, because of non-linear 
relationships and interactions. If your modelling purpose is to generate 
spatial predictions, I would suggest trying spm first.
As to the assessment of predictive models, MAE has its limitations, and you may 
be interested in https://doi.org/10.1016/j.envsoft.2016.02.004 and 
https://doi.org/10.1371/journal.pone.0183250.

From: Joelle k. Akram [mailto:chino_to...@hotmail.com]
Sent: Wednesday, 22 November 2017 12:13 PM
To: Li Jin; r-sig-geo@r-project.org
Subject: Re: [DKIM] [R-sig-Geo] Fw: Why is there a large predictive difference 
forUniv. Kriging? [SEC=UNCLASSIFIED]


no problem Jin. I am looking for a regression model that is transparent, i.e., 
where I can obtain the regression fitting coefficients (betas) for each 
covariate. Do you recommend any in spm to use?

Also, which do you think, from your experience, will have similar predictive 
performance (MAE) for both the training sample set and the hold-out sample 
test set?

cheers,
Chris

From: Li Jin <jin...@ga.gov.au>

Re: [R-sig-Geo] [DKIM] Re: [DKIM] Fw: Why is there a large predictive difference forUniv. Kriging? [SEC=UNCLASSIFIED]

2017-11-21 Thread Li Jin
BTW, to your question: the first MAE measures the goodness of fit; the second 
measures the predictive accuracy. The second paper below has partially 
addressed this.

-Original Message-
From: R-sig-Geo [mailto:r-sig-geo-boun...@r-project.org] On Behalf Of Li Jin
Sent: Wednesday, 22 November 2017 12:22 PM
To: Joelle k. Akram; r-sig-geo@r-project.org
Subject: [DKIM] Re: [R-sig-Geo] [DKIM] Fw: Why is there a large predictive 
difference forUniv. Kriging? [SEC=UNCLASSIFIED]

Although regression models are transparent, their predictive accuracy is poor 
in many cases, especially in environmental modelling, because of non-linear 
relationships and interactions. If your modelling purpose is to generate 
spatial predictions, I would suggest trying spm first.
As to the assessment of predictive models, MAE has its limitations, and you may 
be interested in https://doi.org/10.1016/j.envsoft.2016.02.004 and 
https://doi.org/10.1371/journal.pone.0183250.

From: Joelle k. Akram [mailto:chino_to...@hotmail.com]
Sent: Wednesday, 22 November 2017 12:13 PM
To: Li Jin; r-sig-geo@r-project.org
Subject: Re: [DKIM] [R-sig-Geo] Fw: Why is there a large predictive difference 
forUniv. Kriging? [SEC=UNCLASSIFIED]


no problem Jin. I am looking for a regression model that is transparent, i.e., 
where I can obtain the regression fitting coefficients (betas) for each 
covariate. Do you recommend any in spm to use?

Also, which do you think, from your experience, will have similar predictive 
performance (MAE) for both the training sample set and the hold-out sample 
test set?

cheers,
Chris

From: Li Jin <jin...@ga.gov.au>
Sent: November 21, 2017 6:07 PM
To: Joelle k. Akram; r-sig-geo@r-project.org
Subject: RE: [DKIM] [R-sig-Geo] Fw: Why is there a large predictive difference 
forUniv. Kriging? [SEC=UNCLASSIFIED]


They are not yet.



From: Joelle k. Akram [mailto:chino_to...@hotmail.com]
Sent: Wednesday, 22 November 2017 11:56 AM
To: Li Jin; r-sig-geo@r-project.org
Subject: [DKIM] Re: [DKIM] [R-sig-Geo] Fw: Why is there a large predictive 
difference forUniv. Kriging? [SEC=UNCLASSIFIED]



Hi Jin,



thank you for sharing. I was having a read of your paper "Application of 
machine learning methods to spatial interpolation of environmental variables", 
on which the spm package is based.



In Table 1 from the paper you compare many algorithms. I was interested in 
assessing RKglm, RKgls, RKlm. Are these available in spm?



thanks

Chris



________

From: Li Jin <jin...@ga.gov.au>
Sent: November 21, 2017 5:33 PM
To: Joelle k. Akram; r-sig-geo@r-project.org
Subject: RE: [DKIM] [R-sig-Geo] Fw: Why is there a large predictive difference 
forUniv. Kriging? [SEC=UNCLASSIFIED]



Hi Chris,
The UK used here is usually called kriging with an external drift (KED). It 
is, in fact, a linear model plus kriging, and it assumes a linear relationship 
that is usually not true. It has been tested in several studies and was 
outperformed by machine learning methods like RF, RFOK, RFIDW etc. I have 
released an R package, spm, to introduce these methods. It is easy to use, as 
demonstrated in vignette('spm').
Hope this helps.
Regards,
Jin

-Original Message-
From: R-sig-Geo [mailto:r-sig-geo-boun...@r-project.org] On Behalf Of Joelle k. 
Akram
Sent: Wednesday, 22 November 2017 11:08 AM
To: r-sig-geo@r-project.org
Subject: [DKIM] [R-sig-Geo] Fw: Why is there a large predictive difference 
forUniv. Kriging?






I am using the Meuse dataset for universal kriging (UK) via the gstat library 
in R. I am following a strategy used in machine learning, where data is 
partitioned into a training set and a hold-out set. The training set is used 
for defining the regression model and the semivariogram.

I employ UK to predict on both the training sample set and the hold-out sample 
set. However, the mean absolute errors (MAE) between the predictions of the 
response variable (i.e., zinc for the Meuse dataset) and the actual values are 
very different. I would expect them to be similar, or at least closer. So far 
I have MAE_training_set = 1 and MAE_holdOut_set = 76.5. My code is below and 
advice is welcome.

library(sp)
library(gstat)
data(meuse)
dataset= meuse
set.seed(999)

# Split Meuse Dataset into Training and HoldOut Sample datasets
Training_ids <- sample(seq_len(nrow(dataset)), size = (0.7* nrow(dataset)))

Training_sample = dataset[Training_ids,]
Holdout_sample_allvars = dataset[-Training_ids,]

holdoutvars_df <

Re: [R-sig-Geo] [DKIM] Fw: Why is there a large predictive difference forUniv. Kriging? [SEC=UNCLASSIFIED]

2017-11-21 Thread Li Jin
They are not yet.

From: Joelle k. Akram [mailto:chino_to...@hotmail.com]
Sent: Wednesday, 22 November 2017 11:56 AM
To: Li Jin; r-sig-geo@r-project.org
Subject: [DKIM] Re: [DKIM] [R-sig-Geo] Fw: Why is there a large predictive 
difference forUniv. Kriging? [SEC=UNCLASSIFIED]


Hi Jin,



thank you for sharing. I was having a read of your paper "Application of 
machine learning methods to spatial interpolation of environmental variables", 
on which the spm package is based.



In Table 1 from the paper you compare many algorithms. I was interested in 
assessing RKglm, RKgls, RKlm. Are these available in spm?



thanks

Chris

____
From: Li Jin <jin...@ga.gov.au>
Sent: November 21, 2017 5:33 PM
To: Joelle k. Akram; r-sig-geo@r-project.org
Subject: RE: [DKIM] [R-sig-Geo] Fw: Why is there a large predictive difference 
forUniv. Kriging? [SEC=UNCLASSIFIED]

Hi Chris,
The UK used here is usually called kriging with an external drift (KED). It 
is, in fact, a linear model plus kriging, and it assumes a linear relationship 
that is usually not true. It has been tested in several studies and was 
outperformed by machine learning methods like RF, RFOK, RFIDW etc. I have 
released an R package, spm, to introduce these methods. It is easy to use, as 
demonstrated in vignette('spm').
Hope this helps.
Regards,
Jin

-Original Message-
From: R-sig-Geo [mailto:r-sig-geo-boun...@r-project.org] On Behalf Of Joelle k. 
Akram
Sent: Wednesday, 22 November 2017 11:08 AM
To: r-sig-geo@r-project.org
Subject: [DKIM] [R-sig-Geo] Fw: Why is there a large predictive difference 
forUniv. Kriging?






I am using the Meuse dataset for universal kriging (UK) via the gstat library 
in R. I am following a strategy used in machine learning, where data is 
partitioned into a training set and a hold-out set. The training set is used 
for defining the regression model and the semivariogram.

I employ UK to predict on both the training sample set and the hold-out sample 
set. However, the mean absolute errors (MAE) between the predictions of the 
response variable (i.e., zinc for the Meuse dataset) and the actual values are 
very different. I would expect them to be similar, or at least closer. So far 
I have MAE_training_set = 1 and MAE_holdOut_set = 76.5. My code is below and 
advice is welcome.

library(sp)
library(gstat)
data(meuse)
dataset= meuse
set.seed(999)

# Split Meuse Dataset into Training and HoldOut Sample datasets
Training_ids <- sample(seq_len(nrow(dataset)), size = (0.7* nrow(dataset)))

Training_sample = dataset[Training_ids,]
Holdout_sample_allvars = dataset[-Training_ids,]

holdoutvars_df <-(dataset[,which(names(dataset) %in% 
c("x","y","lead","copper","elev","dist"))])
Hold_out_sample = holdoutvars_df[-Training_ids,]

coordinates(Training_sample) <- c('x','y')
coordinates(Hold_out_sample) <- c('x','y')

# Semivariogram modeling
m1 <- variogram(log(zinc)~lead+copper+elev+dist, Training_sample)
m <- vgm("Exp")
m <- fit.variogram(m1, m)


# Apply Univ Krig to Training dataset
prediction_training_data <- krige(log(zinc)~lead+copper+elev+dist, 
Training_sample, Training_sample, model = m)
prediction_training_data <- expm1(prediction_training_data$var1.pred)

# Apply Univ Krig to Hold Out dataset
prediction_holdout_data <- krige(log(zinc)~lead+copper+elev+dist, 
Training_sample, Hold_out_sample, model = m)
prediction_holdout_data <- expm1(prediction_holdout_data$var1.pred)

# Computing Predictive errors for Training and Hold Out samples respectively
training_prediction_error_term <- Training_sample$zinc - prediction_training_data
holdout_prediction_error_term <- Holdout_sample_allvars$zinc - prediction_holdout_data



# Function that returns Mean Absolute Error
mae <- function(error) {
  mean(abs(error))
}

# Mean Absolute Error metric :
# UK Predictive errors for Training sample set , and UK Predictive Errors for 
HoldOut sample set
print(mae(training_prediction_error_term)) #Error for Training sample set
print(mae(holdout_prediction_error_term)) #Error for Hold out sample set


cheers,

Kristopher (Chris)


Re: [R-sig-Geo] [DKIM] Fw: Why is there a large predictive difference forUniv. Kriging? [SEC=UNCLASSIFIED]

2017-11-21 Thread Li Jin
Hi Chris,
The UK used here is usually called kriging with an external drift (KED). It 
is, in fact, a linear model plus kriging, and it assumes a linear relationship 
that is usually not true. It has been tested in several studies and was 
outperformed by machine learning methods like RF, RFOK, RFIDW etc. I have 
released an R package, spm, to introduce these methods. It is easy to use, as 
demonstrated in vignette('spm').
Hope this helps.
Regards,
Jin

-Original Message-
From: R-sig-Geo [mailto:r-sig-geo-boun...@r-project.org] On Behalf Of Joelle k. 
Akram
Sent: Wednesday, 22 November 2017 11:08 AM
To: r-sig-geo@r-project.org
Subject: [DKIM] [R-sig-Geo] Fw: Why is there a large predictive difference 
forUniv. Kriging?






I am using the Meuse dataset for universal kriging (UK) via the gstat library 
in R. I am following a strategy used in machine learning, where data is 
partitioned into a training set and a hold-out set. The training set is used 
for defining the regression model and the semivariogram.

I employ UK to predict on both the training sample set and the hold-out sample 
set. However, the mean absolute errors (MAE) between the predictions of the 
response variable (i.e., zinc for the Meuse dataset) and the actual values are 
very different. I would expect them to be similar, or at least closer. So far 
I have MAE_training_set = 1 and MAE_holdOut_set = 76.5. My code is below and 
advice is welcome.

library(sp)
library(gstat)
data(meuse)
dataset= meuse
set.seed(999)

# Split Meuse Dataset into Training and HoldOut Sample datasets
Training_ids <- sample(seq_len(nrow(dataset)), size = (0.7* nrow(dataset)))

Training_sample = dataset[Training_ids,]
Holdout_sample_allvars = dataset[-Training_ids,]

holdoutvars_df <-(dataset[,which(names(dataset) %in% 
c("x","y","lead","copper","elev","dist"))])
Hold_out_sample = holdoutvars_df[-Training_ids,]

coordinates(Training_sample) <- c('x','y')
coordinates(Hold_out_sample) <- c('x','y')

# Semivariogram modeling
m1 <- variogram(log(zinc)~lead+copper+elev+dist, Training_sample)
m <- vgm("Exp")
m <- fit.variogram(m1, m)


# Apply Univ Krig to Training dataset
prediction_training_data <- krige(log(zinc)~lead+copper+elev+dist, 
Training_sample, Training_sample, model = m)
prediction_training_data <- expm1(prediction_training_data$var1.pred)

# Apply Univ Krig to Hold Out dataset
prediction_holdout_data <- krige(log(zinc)~lead+copper+elev+dist, 
Training_sample, Hold_out_sample, model = m)
prediction_holdout_data <- expm1(prediction_holdout_data$var1.pred)

# Computing Predictive errors for Training and Hold Out samples respectively
training_prediction_error_term <- Training_sample$zinc - prediction_training_data
holdout_prediction_error_term <- Holdout_sample_allvars$zinc - prediction_holdout_data



# Function that returns Mean Absolute Error
mae <- function(error) {
  mean(abs(error))
}

# Mean Absolute Error metric :
# UK Predictive errors for Training sample set , and UK Predictive Errors for 
HoldOut sample set
print(mae(training_prediction_error_term)) #Error for Training sample set
print(mae(holdout_prediction_error_term)) #Error for Hold out sample set


cheers,

Kristopher (Chris)



[R-sig-Geo] A new R package - spm: Spatial Predictive Modelling, is now available on the CRAN [SEC=UNCLASSIFIED]

2017-08-27 Thread Li Jin
Hi All,

Just thought you might be interested in a recently released R package, spm: 
Spatial Predictive Modelling. 

It aims to introduce some novel, accurate, hybrid geostatistical and machine 
learning methods for spatial predictive modelling. It currently contains two 
commonly used geostatistical methods, two machine learning methods, four hybrid 
methods and two averaging methods.

For each method, two functions are provided: one for assessing the predictive 
errors and accuracy of the method based on cross-validation, and one for 
generating spatial predictions using the method. They all use data.frame as 
input data. Moreover, two further functions are provided for accuracy 
assessment. These functions attempt to simplify and streamline the model 
evaluation and model application processes, which may help users apply these 
methods to their data to improve modelling efficiency as well as predictive 
accuracy.
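
As a flavour of the workflow, here is a minimal cross-validation sketch (the 
petrel seabed-sediment dataset ships with the package; the argument names 
follow the pattern in the RFcv help page, so treat the exact call as 
illustrative rather than definitive):

library(spm)
data(petrel)                      # seabed sediment samples shipped with spm
set.seed(1234)
# 10-fold cross-validation of random forest for gravel content;
# predacc = "VEcv" requests variance explained by cross-validation (%)
rf.acc <- RFcv(trainx = petrel[, c("bathy", "dist", "relief", "slope")],
               trainy = petrel$gravel, cv.fold = 10, predacc = "VEcv")
rf.acc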

It can be downloaded from CRAN now.  

Any feedback and comments are much appreciated! 

Kind regards,

Jin Li, PhD | Spatial Modeller / Computational Statistician
National Earth and Marine Observations | Environmental Geoscience Division 
t:  +61 2 6249 9899    www.ga.gov.au



Geoscience Australia Disclaimer: This e-mail (and files transmitted with it) is 
intended only for the person or entity to which it is addressed. If you are not 
the intended recipient, then you have received this e-mail by mistake and any 
use, dissemination, forwarding, printing or copying of this e-mail and its file 
attachments is prohibited. The security of emails transmitted cannot be 
guaranteed; by forwarding or replying to this email, you acknowledge and accept 
these risks.

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Re: [R-sig-Geo] Error with loading rJava for spcosa [SEC=UNCLASSIFIED]

2016-11-27 Thread Li Jin
Thank you very much, Roger! The suggestions are very helpful.
Best wishes,
Jin

-Original Message-
From: Roger Bivand [mailto:roger.biv...@nhh.no] 
Sent: Friday, 25 November 2016 7:24 PM
To: Li Jin
Cc: r-sig-geo@r-project.org
Subject: Re: [R-sig-Geo] Error with loading rJava for spcosa [SEC=UNCLASSIFIED]

On Thu, 24 Nov 2016, Li Jin wrote:

> Hi All,
>
> I have been using library(spcosa) in R version 3.2.3 (2015-12-10) and all 
> worked well, until today.
>
> The error was as below when I called:
>>  library(spcosa)
> Loading required package: rJava
> Error : .onLoad failed in loadNamespace() for 'rJava', details:
>  call: fun(libname, pkgname)
>  error: JAVA_HOME cannot be determined from the Registry
> Error: package 'rJava' could not be loaded
>
> Although rJava was installed, I reinstalled it, successfully as:
>>   install.packages("rJava")
> --- Please select a CRAN mirror for use in this session ---
> trying URL 'https://cloud.r-project.org/bin/windows/contrib/3.2/rJava_0.9-8.zip'
> Content type 'application/zip' length 765340 bytes (747 KB)
> downloaded 747 KB
>
> package 'rJava' successfully unpacked and MD5 sums checked
>
> The downloaded binary packages are in
>
> C:\Users\u09672\AppData\Local\Temp\1\Rtmp2PslO0\downloaded_packages
>
> But when I called:
>> library(rJava)
> Error in get(Info[i, 1], envir = env) :
>   lazy-load database 'C:/LegacyApps/R-3.2.3/library/rJava/R/rJava.rdb' is corrupt
> In addition: Warning messages:
> 1: package 'rJava' was built under R version 3.2.5
> 2: In get(Info[i, 1], envir = env) : internal error -3 in R_decompress1
> Error: package or namespace load failed for 'rJava'
>
> When I called:
>>  library(spcosa)
> Loading required package: rJava
> Error in get(Info[i, 1], envir = env) :
>   lazy-load database 'C:/LegacyApps/R-3.2.3/library/rJava/R/rJava.rdb' is corrupt
> In addition: Warning messages:
> 1: package 'rJava' was built under R version 3.2.5
> 2: In get(Info[i, 1], envir = env) : internal error -3 in R_decompress1
> Error: package 'rJava' could not be loaded
>
> Looks like something is wrong with loading "rJava". The only thing that has 
> changed is that my PC was refreshed recently.

Three possibilities: you may need rJava built for R 3.2.3 - you are warned 
about the mismatch; the refreshing of your PC may have taken away something 
that rJava needed when it loaded; and the rJava Windows binary may be corrupted 
on your mirror - try a different mirror.

Try replacing R 3.2.3 with the current R 3.3.2, and update all your packages 
(all Windows binary packages for R < 3.3 were built using an older compiler, 
all for R >= 3.3 with a newer compiler, and they are not mutually compatible if 
they include compiled code).
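
If a 64-bit Java is in fact installed, setting JAVA_HOME by hand before 
loading rJava sometimes works around the Registry lookup (a sketch; the path 
below is purely hypothetical - point it at whatever Java installation is 
actually present):

Sys.getenv("JAVA_HOME")  # an empty string means R cannot see a Java install
Sys.setenv(JAVA_HOME = "C:/Program Files/Java/jre1.8.0_111")  # hypothetical path
library(rJava)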

Roger

>
> Any suggestions on how to fix this loading problem? Thanks!
>
>
>> sessionInfo()
> R version 3.2.3 (2015-12-10)
> Platform: x86_64-w64-mingw32/x64 (64-bit)
> Running under: Windows 7 x64 (build 7601) Service Pack 1
>
> locale:
> [1] LC_COLLATE=English_Australia.1252  LC_CTYPE=English_Australia.1252
> [3] LC_MONETARY=English_Australia.1252 LC_NUMERIC=C
> [5] LC_TIME=English_Australia.1252
>
> attached base packages:
> [1] stats graphics  grDevices utils datasets  methods   base
>
> other attached packages:
> [1] raster_2.5-8 sp_1.2-3
>
> loaded via a namespace (and not attached):
> [1] tools_3.2.3 Rcpp_0.12.7 grid_3.2.3  lattice_0.20-34
>>
>
> Please note this error persists for:  R version 3.3.1 (2016-06-21)
>
> Kind regards,
> Jin
>
> Jin Li, PhD | Spatial Modeller / Computational Statistician
> National Earth and Marine Observations | Environmental Geoscience Division
> t:  +61 2 6249 9899
>
> Geoscience Australia Disclaimer: This e-mail (and files transmitted with it) 
> is intended only for the person or entity to which it is addressed. If you 
> are not the intended recipient, then you have received this e-mail by mistake 
> and any use, dissemination, forwarding, printing or copying of this e-mail 
> and its file attachments is prohibited. The security of emails transmitted 
> cannot be guaranteed; by forwarding or replying to this email, you 
> acknowledge and accept these risks.
>
> [[alternative HTML version deleted]]
>
> ___
> R-sig-Geo mailing list
> R-sig-Geo@r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
>

--
Roger Bivand
Department of Economics, Norwegian School of Economics, Helleveien 30, N-5045 
Bergen, Norway.
voice: +47 55 95 93 55; fax +

[R-sig-Geo] Error with loading rJava for spcosa [SEC=UNCLASSIFIED]

2016-11-24 Thread Li Jin
Hi All,

I have been using library(spcosa) in R version 3.2.3 (2015-12-10) and all 
worked well, until today.

The error was as below when I called:
>  library(spcosa)
Loading required package: rJava
Error : .onLoad failed in loadNamespace() for 'rJava', details:
  call: fun(libname, pkgname)
  error: JAVA_HOME cannot be determined from the Registry
Error: package 'rJava' could not be loaded

Although rJava was installed, I reinstalled it, successfully as:
>   install.packages("rJava")
--- Please select a CRAN mirror for use in this session ---
trying URL 'https://cloud.r-project.org/bin/windows/contrib/3.2/rJava_0.9-8.zip'
Content type 'application/zip' length 765340 bytes (747 KB)
downloaded 747 KB

package 'rJava' successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\u09672\AppData\Local\Temp\1\Rtmp2PslO0\downloaded_packages

But when I called:
> library(rJava)
Error in get(Info[i, 1], envir = env) :
  lazy-load database 'C:/LegacyApps/R-3.2.3/library/rJava/R/rJava.rdb' is 
corrupt
In addition: Warning messages:
1: package 'rJava' was built under R version 3.2.5
2: In get(Info[i, 1], envir = env) : internal error -3 in R_decompress1
Error: package or namespace load failed for 'rJava'

When I called:
>  library(spcosa)
Loading required package: rJava
Error in get(Info[i, 1], envir = env) :
  lazy-load database 'C:/LegacyApps/R-3.2.3/library/rJava/R/rJava.rdb' is 
corrupt
In addition: Warning messages:
1: package 'rJava' was built under R version 3.2.5
2: In get(Info[i, 1], envir = env) : internal error -3 in R_decompress1
Error: package 'rJava' could not be loaded

Looks like something is wrong with loading "rJava". The only thing that has 
changed is that my PC was refreshed recently.

Any suggestions on how to fix this loading problem? Thanks!


> sessionInfo()
R version 3.2.3 (2015-12-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1

locale:
[1] LC_COLLATE=English_Australia.1252  LC_CTYPE=English_Australia.1252
[3] LC_MONETARY=English_Australia.1252 LC_NUMERIC=C
[5] LC_TIME=English_Australia.1252

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

other attached packages:
[1] raster_2.5-8 sp_1.2-3

loaded via a namespace (and not attached):
[1] tools_3.2.3 Rcpp_0.12.7 grid_3.2.3  lattice_0.20-34
>

Please note this error persists for:  R version 3.3.1 (2016-06-21)

Kind regards,
Jin

Jin Li, PhD | Spatial Modeller / Computational Statistician
National Earth and Marine Observations | Environmental Geoscience Division
t:  +61 2 6249 9899

Geoscience Australia Disclaimer: This e-mail (and files transmitted with it) is 
intended only for the person or entity to which it is addressed. If you are not 
the intended recipient, then you have received this e-mail by mistake and any 
use, dissemination, forwarding, printing or copying of this e-mail and its file 
attachments is prohibited. The security of emails transmitted cannot be 
guaranteed; by forwarding or replying to this email, you acknowledge and accept 
these risks.
-


[[alternative HTML version deleted]]

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo


[R-sig-Geo] Error with gstat::predict [SEC=UNCLASSIFIED]

2016-10-30 Thread Li Jin
Hi All,

I need to use the predict {gstat} function in one of my functions for an R 
package. I use RStudio to build the package. When I specified gstat::predict in 
the function, I received the following error:

Error: 'predict' is not an exported object from 'namespace:gstat'
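
(The likely cause: gstat supplies predict() for gstat objects as an S3 method, 
predict.gstat, registered on the stats generic rather than exported, so 
gstat::predict finds nothing. Calling the generic dispatches correctly; a 
minimal sketch, with an illustrative variogram model:)

library(sp)
library(gstat)
data(meuse)
data(meuse.grid)
coordinates(meuse) <- ~ x + y
coordinates(meuse.grid) <- ~ x + y
g <- gstat(formula = log(zinc) ~ 1, data = meuse,
           model = vgm(0.6, "Sph", 900, 0.05))  # illustrative parameters
p <- predict(g, newdata = meuse.grid)  # the generic dispatches to predict.gstat

(In a package, this would likely mean declaring importFrom(stats, predict) in 
the NAMESPACE and listing gstat in Imports, rather than calling gstat::predict.)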

The session information is:

> sessionInfo()

R version 3.3.1 (2016-06-21)

Platform: x86_64-w64-mingw32/x64 (64-bit)

Running under: Windows 7 x64 (build 7601) Service Pack 1



locale:

[1] LC_COLLATE=English_Australia.1252  LC_CTYPE=English_Australia.1252

[3] LC_MONETARY=English_Australia.1252 LC_NUMERIC=C

[5] LC_TIME=English_Australia.1252



attached base packages:

[1] stats graphics  grDevices utils datasets  methods   base



other attached packages:

[1] myrpackage_0.0.0.9000 devtools_1.12.0



loaded via a namespace (and not attached):

[1] tools_3.3.1   withr_1.0.2   memoise_1.0.0 digest_0.6.10

Is this a bug? Any suggestions? Many thanks in advance!

Kind regards,
Jin

Jin Li, PhD
Spatial Modeller/Computational Statistician  |  National Earth and Marine 
Observations
Environmental Geoscience Division  |  GEOSCIENCE AUSTRALIA

Phone:  +61 2 6249 9899    Fax:  +61 2 6249 
Email:  jin...@ga.gov.au    Web:  www.ga.gov.au
101 Jerrabomberra Avenue Symonston ACT
GPO Box 378 Canberra ACT 2601 Australia
Applying geoscience to Australia’s most important challenges



Geoscience Australia Disclaimer: This e-mail (and files transmitted with it) is 
intended only for the person or entity to which it is addressed. If you are not 
the intended recipient, then you have received this e-mail by mistake and any 
use, dissemination, forwarding, printing or copying of this e-mail and its file 
attachments is prohibited. The security of emails transmitted cannot be 
guaranteed; by forwarding or replying to this email, you acknowledge and accept 
these risks.
-


[[alternative HTML version deleted]]

___
R-sig-Geo mailing list
R-sig-Geo@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-geo