Tyler Dean Rudolph wrote:
I am trying to estimate home range size using the plug-in method with
kernel density estimation in the kernel smoothing (ks) package. Unless
there is another way I am not familiar with, in order to calculate spatial
area under the space I need to convert my kde
On Thu, Nov 26, 2009 at 6:56 PM, David Winsemius dwinsem...@comcast.net
wrote:
is that for some as yet unarticulated reason (after looking at the documentation for adehabitat)
is that for some as yet unarticulated reason, you have decided that the
methods used in prior publications in your domain are not the best and
I am trying to estimate home range size using 2 different methods in the
adehabitat package, but I am slightly confounded by the results.
## Attached is an R object file containing animal relocations with a field
## for id, and x y coordinates (in metres)
load("temp")
require(adehabitat)
## This
I am trying to estimate home range size using the plug-in method with kernel
density estimation in the kernel smoothing (ks) package. Unless there is
another way I am not familiar with, in order to calculate spatial area under
the space I need to convert my kde () object into a spatial object
=tmp.Hpi)
Tyler
http://old.nabble.com/file/p26533942/temp temp
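One way to get an area out of a ks fit without converting to a spatial object: a kde() object stores its evaluation grid in eval.points and the density surface in estimate, so the area inside a given contour can be approximated by counting grid cells. A rough sketch (the object names, the simulated relocations, and the 95% contour are assumptions, not from the original post):

```r
library(ks)

## Simulated relocations (x, y in metres)
set.seed(1)
xy <- cbind(rnorm(200, sd = 500), rnorm(200, sd = 300))

## Plug-in bandwidth matrix and kernel density estimate
tmp.Hpi <- Hpi(xy)
fhat <- kde(x = xy, H = tmp.Hpi)

## Density level enclosing 95% of the distribution
lev <- contourLevels(fhat, cont = 95)

## Approximate 95% home-range area: grid-cell area times the
## number of cells whose density is at or above that level
cell <- diff(fhat$eval.points[[1]])[1] * diff(fhat$eval.points[[2]])[1]
area.m2 <- sum(fhat$estimate >= lev) * cell
```

This is only a grid approximation; its accuracy depends on the resolution of the kde() evaluation grid.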
David Winsemius wrote:
On Nov 26, 2009, at 12:40 PM, T.D.Rudolph wrote:
I am trying to estimate home range size using the plug-in method
with kernel
density estimation in the kernel smoothing (ks) package. Unless
hello there,
Is there a way of truncating in the opposite direction so as to retain only
the values to the right of the decimal??
i.e. rather than:
trunc(39.5)
[1] 39
i would get something like:
revtrunc(39.5)
[1] 0.5
I've been searching to no avail but I imagine there is a very simple
want for negative numbers. One possibility:
revtrunc <- function(x) { sign(x) * (x - floor(x)) }
revtrunc(39.5)
[1] 0.5
revtrunc(-39.5)
[1] -0.5
Sarah
On Thu, Apr 16, 2009 at 5:30 PM, T.D.Rudolph prairie.pic...@gmail.com
wrote:
hello there,
Is there a way of truncating in the opposite
Is there any way to identify or infer the inflection points in a smooth
spline object? I am doing a comparison of various methods of time-series
analysis (polynomial regression, spline smoothing, recursive partitioning)
and I am specifically interested in obtaining the julian dates associated
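For a smooth.spline fit, one way to locate inflection points is to evaluate the second derivative on a fine grid with predict(..., deriv = 2) and look for sign changes. A sketch (the simulated series and grid resolution are assumptions):

```r
## Example fit: a noisy seasonal signal over julian dates
set.seed(1)
jd <- 1:365
y <- sin(jd / 58) + rnorm(365, sd = 0.1)
fit <- smooth.spline(jd, y)

## Second derivative of the fitted spline on a fine grid
grid <- seq(min(jd), max(jd), length.out = 2000)
d2 <- predict(fit, grid, deriv = 2)$y

## Inflection points: grid values where the second derivative changes sign
infl <- grid[which(diff(sign(d2)) != 0)]
```

The julian dates of interest would then be the values in infl, to whatever precision the grid allows.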
Is there any package or operation in R designed to conduct or facilitate
Quadrat Variance analysis of spatial data? Any leads would be much
appreciated as I have found very little in my searches thus far.
Tyler
--
View this message in context:
Does anyone know an alternate way of calculating R2n with spatial data other
than converting values into ltraj format (adehabitat package)?
I have a series of geographic xy coordinates (in metres). I imagine I can
subtract x1y1 from x2y2 to get the spatial difference (dxdy), but what
function
Can anyone shed any light on this topic for us?
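R2n (net squared displacement) is just the squared distance of each relocation from the first one, so it can be computed directly from the coordinates without going through ltraj. A sketch assuming vectors x and y in metres, ordered chronologically:

```r
## Example coordinates (metres), ordered in time
x <- c(0, 30, 80, 150, 160)
y <- c(0, 40, 60,  80, 200)

## Net squared displacement from the first relocation
R2n <- (x - x[1])^2 + (y - y[1])^2
R2n
## [1]     0  2500 10000 28900 65600
```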
I would like to attempt agglomerative clustering with a contiguity
constraint (a.k.a. intermediate linkage clustering), as described by
Legendre Legendre (1998, page 697)
Is there any code kicking around for this type of analysis specifically?
I thought this problem would be resolved when I switched to R version 2.7.0
(for Windows), but no - anytime I plot something that produces more than one
page of graphics, the graphics window starts by showing the first page,
until such time as I hit enter to show me the next page, at which time
a
new series of figures for review in the single window (as opposed to
numerous separate ones). At this point scrolling using PgUp and PgDn seems
to work fine.
Tyler
T.D.Rudolph wrote:
I thought this problem would be resolved when I switched to R version
2.7.0 (for Windows), but no - anytime
I am trying to fit a very simple broken stick model using the package
segmented but I have hit a roadblock.
str(data)
'data.frame': 18 obs. of 2 variables:
$ Bin : num 0.25 0.75 1.25 1.75 2.25 2.75 3.25 3.75 4.25 4.75 ...
$ LnFREQ: num 5.06 4.23 3.50 3.47 2.83 ...
I fit the lm
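With segmented, the usual pattern is to fit the lm first and then hand it to segmented() with a starting guess for the breakpoint. A sketch (the simulated data and the psi starting value are assumptions, not from the original post):

```r
library(segmented)

## Data shaped like the str() output above: 18 bins with log frequencies
set.seed(1)
data <- data.frame(
  Bin    = seq(0.25, 8.75, by = 0.5),
  LnFREQ = 5 - 0.6 * seq(0.25, 8.75, by = 0.5) + rnorm(18, sd = 0.1)
)

## Plain linear fit, then a one-breakpoint broken-stick fit
fit.lm  <- lm(LnFREQ ~ Bin, data = data)
fit.seg <- segmented(fit.lm, seg.Z = ~Bin, psi = 3)
summary(fit.seg)   # breakpoint estimate and segment slopes
```

If the data have no real break, segmented() may fail to converge; trying a few different psi values is a common workaround.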
Hi there,
I'm dealing with a pretty big dataset (~22,000 entries) with numerous
entries for every day over a period of several years. I have a column
judy (for Julian Day) with 0 beginning on Jan. 1st of every new year (I
want to compare tendencies between years). However, in order to control
http://www.nabble.com/file/p18018170/subdata.csv subdata.csv
I've attached 100 rows of a data frame I am working with.
I have one factor, id, with 27 levels. There are two columns of reference
data, x and y (UTM coordinates), one column date in POSIXct format, and
one column diff in times
have no way of verifying without the id membership in the output.
Charilaos Skiadas-3 wrote:
On Jun 14, 2008, at 1:25 AM, T.D.Rudolph wrote:
aggregate() is indeed a useful function in this case, but it only
returns the
columns by which it was grouped. Is there a way I can use
I have a dataframe, x, with over 60,000 rows that contains one Factor, id,
with 27 levels.
The dataframe contains numerous continuous values (along column diff) per
day (column date) for every level of id. I would like to select only one
row per animal per day, i.e. that containing the minimum
information at all and retain it in the final output
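One way to keep the whole row holding each daily minimum, rather than just the grouping columns, is to compute the minima with aggregate() and merge them back against the full data frame. A sketch (the toy data are an assumption; column names id, date, diff as in the post):

```r
## Toy version of the data frame
x <- data.frame(
  id   = rep(c("a", "b"), each = 4),
  date = rep(as.Date(c("2008-06-01", "2008-06-02")), 4),
  diff = c(5, 2, 7, 1, 3, 8, 4, 6)
)

## Minimum diff per animal per day, then merge to recover the full rows
mins <- aggregate(diff ~ id + date, data = x, FUN = min)
out  <- merge(x, mins)
```

Because merge() matches on all shared columns (id, date, diff), only the rows attaining each minimum survive, with every other column intact.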
Marc Schwartz wrote:
on 06/13/2008 11:10 PM T.D.Rudolph wrote:
I have a dataframe, x, with over 60,000 rows that contains one Factor,
id,
with 27 levels.
The dataframe contains numerous continuous values (along column diff)
per
day (column
I am trying to set up a function which processes my data according to the
following rules:
1. if (x[i] == 0) NA
2. if (x[i] > 0) log(x[i]/(number of consecutive zeros immediately preceding
it + 1))
The data this will apply to include a variety of whole numbers not limited
to 1 and 0, a number of which
You are absolutely correct Marc; thank you for making this assumption!
While it will take some time for me to rightfully understand the logical
ordering of your proposed function, it certainly seems to produce the result
I was looking for, and economically at that. rle is indeed highly
such run, 1 to the next,
2 to the next and so on. Thus shifting it forward and
adding 1 gives the number of preceding zeros, num0p1.
cs <- cumsum(x != 0)
num0p1 <- head(c(0, seq(cs) - match(cs, cs)), -1) + 1
ifelse(x == 0, NA, log(x/num0p1))
On Mon, Jun 2, 2008 at 2:30 PM, T.D.Rudolph [EMAIL
] -0.6931472 0.000 -1.0986123 -0.6931472 0.000 -1.3862944
-1.0986123 -0.6931472 0.000
[10] -1.6094379 -1.3862944 -1.0986123 -0.6931472 0.000
On Tue, May 27, 2008 at 8:04 PM, T.D.Rudolph [EMAIL PROTECTED]
wrote:
In fact x[4,2] should = log(x[5,1]/2)
whereas x[3,2] = log
])
+ })
unlist(result)
[1] -0.6931472 0.000 -1.0986123 -0.6931472 0.000 -1.3862944
-1.0986123 -0.6931472 0.000
[10] -1.6094379 -1.3862944 -1.0986123 -0.6931472 0.000
On Tue, May 27, 2008 at 8:04 PM, T.D.Rudolph [EMAIL PROTECTED]
wrote:
In fact x[4,2] should
$lengths[.indx])
})
# but I am clearly missing something!
Does it not work because I haven't addressed what to do with the zeros and
log(0)=-Inf?
I've tried adding another ifelse but I still get the same result.
Can someone find the error in my ways?
Tyler
T.D.Rudolph wrote:
I have
T.D.Rudolph wrote:
I'm trying to build on Jim's approach to change the parameters in the
function, with new rules:
1. if (x[i] == 0) NA
2. if (x[i] > 0) log(x[i]/(number of consecutive zeros immediately preceding
it + 1))
x <- c(1,0,1,0,0,1,0,0,0,1,0,0,0,0,1)
# i.e. output desired = c(0, NA
the 15th element, 1, becomes log(1) = 0
I did have the problem of not having two continuous variables and this
approach circumvents this, allowing me in fact to plot the rownames.
Prof Brian Ripley wrote:
On Tue, 27 May 2008, T.D.Rudolph wrote:
In the following example:
x <- rnorm(1:100)
y <- seq(from=-2.5, to=3.5, by=0.5)
This definitely does the trick.
I knew there was an easier way!
Petr Pikal wrote:
[EMAIL PROTECTED] napsal dne 27.05.2008 09:31:16:
Hi Petr
My mistake in omitting replace=T in the first part.
Unfortunately I oversimplified my problem as I'm not actually dealing
with
whole numbers
I have a matrix of frequency counts from 0-160.
x <- as.matrix(c(0,1,0,0,1,0,0,0,1,0,0,0,0,1))
I would like to apply a function creating a new column (x[,2])containing
values equal to:
a) log(x[m,1]) if x[m,1] > 0; and
b) for all x[m,1] == 0, log(next x[m,1] > 0 / count of preceding zero values
+ 1)
In fact x[4,2] should = log(x[5,1]/2)
whereas x[3,2] = log(x[5,1]/3)
i.e. The denominator in the log function equals the number of rows between
m == 0 and m > 0 (inclusive, hence the +1)
Hope this helps!...
Charles C. Berry wrote:
On Tue, 27 May 2008, T.D.Rudolph wrote:
I have a matrix
In the following example:
x <- rnorm(1:100)
y <- seq(from=-2.5, to=3.5, by=0.5)
z <- as.matrix(table(cut(x, c(-Inf, y, +Inf))))
## I wish to transform the values in z
j <- log(z)
## Yet retain the row names
row.names(j) <- row.names(z)
Now, how can I go about creating a scatterplot with row.names(j)
and A Guide for the Unwilling S User)
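Since the row names of j are interval labels rather than a continuous variable, one way to plot them is against a bin index, suppressing the default x axis and writing the labels with axis(). A sketch (the seed and axis styling are assumptions; empty bins give log(0) = -Inf, which plot() simply omits with a warning):

```r
set.seed(1)
x <- rnorm(100)
y <- seq(from = -2.5, to = 3.5, by = 0.5)
z <- as.matrix(table(cut(x, c(-Inf, y, +Inf))))
j <- log(z)

## Plot log counts against bin index, labelling ticks with the row names
plot(seq_len(nrow(j)), j[, 1], xaxt = "n",
     xlab = "interval", ylab = "log frequency")
axis(1, at = seq_len(nrow(j)), labels = row.names(j), las = 2)
```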
I have numerous objects, each containing continuous data representing the
same variable, movement rate, yet each having a different number of rows.
e.g.
d1 <- as.matrix(rnorm(5))
d2 <- as.matrix(rnorm(3))
d3 <- as.matrix(rnorm(6))
How can I merge these three columns side-by-side in order to create a
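Since cbind() recycles shorter arguments rather than padding them, one way to place columns of different lengths side by side is to pad each with NA up to the longest first. A sketch using the objects above:

```r
d1 <- as.matrix(rnorm(5))
d2 <- as.matrix(rnorm(3))
d3 <- as.matrix(rnorm(6))

## Pad each column with NA to the longest length, then bind side by side
cols <- list(d1 = d1, d2 = d2, d3 = d3)
n <- max(sapply(cols, nrow))
merged <- sapply(cols, function(m) c(m, rep(NA, n - nrow(m))))
```

The result is a 6 x 3 matrix with columns d1, d2, d3, the shorter series filled out with NA.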
36 matches