Thank you very much for the replies. I found the papers suggested
by Mahmut very useful, as I have been asked to find a method to
spatialize soil salinity measures (ECe and SAR) in order to
investigate and forecast the salinity risk in coastal areas under
saline-water irrigation (temporal changes will be elaborated with
numerical modelling).

First I will consider the use of Regression Kriging (the Hengl and
Rossiter references) and Indicator Kriging, even though the literature
suggests the latter has a higher RMSE than UK, KED (and RK) in
estimating the unknown values (assessed with cross-validation) when
dealing with data affected by local and global trends. Is this a
general, intrinsic problem of IK?
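The kind of cross-validated RMSE comparison described above can be sketched in a few lines. This is a minimal pure-NumPy example of leave-one-out cross-validation for ordinary kriging; the sample locations, values, and exponential covariance parameters are all illustrative stand-ins, not real ECe data or a fitted variogram:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a salinity sample: 30 locations with a
# smooth regional trend plus noise (illustrative, not real data).
xy = rng.uniform(0.0, 100.0, size=(30, 2))
z = 0.05 * xy[:, 0] + 0.03 * xy[:, 1] + rng.normal(0.0, 0.5, 30)

def exp_cov(h, sill=1.0, rang=30.0, nugget=0.1):
    """Exponential covariance; parameters are illustrative, not fitted."""
    return np.where(h == 0.0, sill + nugget, sill * np.exp(-h / rang))

def ok_predict(xy_obs, z_obs, x0):
    """Ordinary kriging prediction at x0 (Lagrange-augmented system)."""
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_cov(np.linalg.norm(xy_obs - x0, axis=1))
    w = np.linalg.solve(A, b)
    return w[:n] @ z_obs

# Leave-one-out cross-validation: hold out each point, predict it from
# the rest, and summarise the prediction errors as an RMSE.
errors = []
for i in range(len(z)):
    keep = np.arange(len(z)) != i
    errors.append(ok_predict(xy[keep], z[keep], xy[i]) - z[i])
rmse = float(np.sqrt(np.mean(np.square(errors))))
print(f"LOOCV RMSE (ordinary kriging): {rmse:.3f}")
```

Running the same loop with an RK or IK predictor in place of `ok_predict` gives directly comparable RMSE figures for the competing models.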

Giovanni

2008/1/14, Ashton Shortridge <[EMAIL PROTECTED]>:
> On Monday 14 January 2008, G. Allegri wrote:
> > Hi everyone,
> > this is my very first post on this mailing list. I need to produce a
> > map of soil salinity spatial variability. I have chosen to compare the
> > reliability of OK, Regression Kriging, and Indicator Kriging estimates
> > on my data set. I know that the kriging variance depends only on the
> > geometrical configuration of my samples and not on their actual
> > values, but I thought the kriging variance could be used as a measure
> > of estimation quality. However, reading Goovaerts (p. 184) I
> > understand that "the kriging standard deviation cannot be used as a
> > direct measure of estimation error".
> > Some say that simulations would give a better result, but I found an
> > archive post from Pebesma saying that for many simulations the error
> > variance tends to the kriging variance...
> > So, what kriging statistic should I use to assess local estimation
> > precision?
> >
> > Giovanni
>
> Hi Giovanni,
>
> You are correct - the kriging standard deviation is an internal measure of
> model uncertainty. You can easily shrink this standard deviation by changing
> the model - for example, lowering the sill of your variogram model. However,
> this does not mean you've improved model predictions!
>
> One common approach is cross-validation, in which you leave each of your
> observations out in turn and use the remaining points to predict the value at
> that location. Then you can compare the predicted values to the actual ones.
> By cross-validating alternative models and comparing their predictive
> capability at the observations, you may be able to contrast model predictive
> performance.
>
> This is not a complete method of model validation, but in the absence of
> additional data, it can be a useful approach.
>
> Yours,
>
> Ashton
>
>
> --
> Ashton Shortridge
> Associate Professor                     [EMAIL PROTECTED]
> Dept of Geography                       http://www.msu.edu/~ashton
> 235 Geography Building                  ph (517) 432-3561
> Michigan State University               fx (517) 432-1671
>
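Ashton's point that the kriging standard deviation is internal to the model can be shown numerically. In this small NumPy sketch (hypothetical sample locations, an illustrative exponential covariance with range 30 and no nugget), the ordinary kriging variance depends only on the sample geometry and the covariance model, so halving the sill exactly halves the reported uncertainty while the data values never enter at all:

```python
import numpy as np

rng = np.random.default_rng(1)
# 20 hypothetical sample locations and one prediction point.
xy = rng.uniform(0.0, 100.0, size=(20, 2))
x0 = np.array([50.0, 50.0])

def ok_variance(xy_obs, x0, sill):
    """Ordinary kriging variance at x0 for an exponential covariance
    with the given sill (range 30, no nugget; illustrative values)."""
    n = len(xy_obs)
    cov = lambda h: sill * np.exp(-h / 30.0)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(xy_obs - x0, axis=1))
    w = np.linalg.solve(A, b)
    # sigma^2 = C(0) - sum_i w_i C(x_i, x0) - mu, folded into w @ b.
    return sill - w @ b

v_full = ok_variance(xy, x0, sill=2.0)
v_half = ok_variance(xy, x0, sill=1.0)
print(f"variance with sill 2.0: {v_full:.4f}")
print(f"variance with sill 1.0: {v_half:.4f}")
# Rescaling the sill rescales the variance by the same factor, without
# changing the kriging weights or predictions at all.
```

This is exactly why cross-validation against held-out observations, rather than the reported kriging variance, is the safer basis for comparing models.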