Re: [R-sig-Geo] Finding the highest and lowest rates of increase at specific x value across several time series in R

2023-05-18 Thread Alexander Ilich
"df" is not an object, but an input to a function (known as a function 
argument). If you run the code for the "ExtractFirstMin" function definition 
with a clear environment you'll notice there's no error even though there's no 
object df. What will happen after you run the code is that a new variable 
called "ExtractFirstMin" will be defined. This new variable in your environment 
is actually a function. It works just like any built-in R function such as 
"mean", "range", "min", etc., but it only exists because you defined it. When 
you supply an input to the function it is substituted for "df" in that 
function's code. When you use "sapply" you input a list of all your data frames 
as well as the function to apply to them. So when you do sapply(df_list, 
ExtractFirstMin), you are applying that ExtractFirstMin function across all of 
your dataframes. You should only need to edit the right-hand side of the 
following line of code to put your dataframes in the list by substituting the 
names of your data frames:

df_list<- list(dataframe1, dataframe2, dataframe3, dataframe4, dataframe5, 
dataframe6, dataframe7, dataframe8, dataframe9, dataframe10)
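If your dataframes follow a common naming pattern, a hedged alternative 
(assuming objects named dataframe1 through dataframe10 really do exist in your 
global environment) is to build the list programmatically with base R's mget 
instead of typing every name:

```r
# Collect dataframe1..dataframe10 into a named list without typing each name;
# assumes these objects already exist in the global environment
df_list <- mget(paste0("dataframe", 1:10))
```

Because mget returns a named list, the sapply result will also carry the 
dataframe names, which makes it easier to see which value came from where.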

You do not need the code block to create 10 data frames. Since I don't have 
your data, I needed to generate data with a similar structure to run the code 
on, but you can run the code on your real data.

Here are some resources on functions and iteration that may help clarify a few 
things.
https://r4ds.had.co.nz/functions.html
https://r-coder.com/sapply-function-r/


From: rain1...@aim.com 
Sent: Wednesday, May 17, 2023 3:56 PM
To: Alexander Ilich ; r-sig-geo@r-project.org 

Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

Hi Alexander,

Yes, you're right - that approach would be much faster and much less subject to 
error. The method that I was using worked as intended, but I am more than happy 
to try to learn this arguably more effective way. My only question in that 
regard is the defining of the object "df" in:

ExtractFirstMin<- function(df){
  df$abs_diff<- abs(df$em-1)
  min_rate<- df$pct[which.min(df$abs_diff)]
  return(min_rate)
}

Is object "df" in your example above coming from this?

#Generate data
set.seed(5)
for (i in 1:10) {
  assign(x = paste0("df", i),
 value = data.frame(Time = sort(rnorm(n = 10, mean = 1, sd = 0.1)),
Rate= rnorm(n = 10, mean = 30, sd = 1)))
} # Create 10 Data Frames

If so, how would I approach placing all 10 of my dataframes (i.e. df1, df2, 
df3, df4...df10) in that command?

Thanks, again, and sorry if I missed this previously in your explanation! In 
any case, at least I am able to obtain the results that I was looking for!

-Original Message-
From: Alexander Ilich 
To: r-sig-geo@r-project.org ; rain1...@aim.com 

Sent: Wed, May 17, 2023 10:16 am
Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

Awesome, glad you were able to get the result you needed. Just to be clear 
though, you shouldn't need to manually copy the code 
"df$pct[which.min(df$abs_diff)]" repeatedly for each dataframe. I sent that 
just to explain what was happening internally when using sapply and the 
function. If you replace "$x" with "$em" and "$y" with "$pct" you can 
automatically iterate through as many dataframes as you want, as long as they 
are in df_list.

# Define Functions (two versions based on how you want to deal with ties)
ExtractFirstMin<- function(df){
  df$abs_diff<- abs(df$em-1)
  min_rate<- df$pct[which.min(df$abs_diff)]
  return(min_rate)
}

# Put all dataframes into a list
df_list<- list(df1,df2,df3,df4,df5,df6,df7,df8,df9,df10)

# Apply function across list
w<- sapply(df_list, ExtractFirstMin)
w
#>  [1] 29.40269 32.21546 30.75330 30.12109 30.38361 28.64928 30.45568 29.66190
#>  [9] 31.57229 31.33907
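Since the overall goal in this thread was the highest and lowest rates across 
the dataframes, the vector w can be passed straight to range. A small sketch, 
reusing the output values printed above:

```r
# Per-dataframe results, as returned by sapply(df_list, ExtractFirstMin) above
w <- c(29.40269, 32.21546, 30.75330, 30.12109, 30.38361,
       28.64928, 30.45568, 29.66190, 31.57229, 31.33907)
range(w) # lowest and highest extracted rate
#> [1] 28.64928 32.21546
```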

From: rain1...@aim.com 
Sent: Tuesday, May 16, 2023 11:13 PM
To: Alexander Ilich ; r-sig-geo@r-project.org 

Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

Hi Alexander,

Oh, wow...you are absolutely right - I cannot believe that I did not notice 
that previously! Thank you so much, yet again, including for the insight on 
what "numeric(0)" signifies! Indeed, it all works just fine now!

I am now able to flexibly achieve the goal of deriving the range of these 
values across the 10 dataframes using the "range" function!

I cannot thank you enough, including for your tireless efforts to explain 
everything step-by-step throughout all of this, though I do apologize for the 
time spent on this!



-Original Message-
From: Alexander Ilich 
To: r-sig-geo@r-project.org ; rain1...@aim.com 

Sent: Tue, May 16, 2023 10:24 pm
Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

I believe you didn't clear your environment and that's why df1 works. All 
should evaluate to "numeric(0)" with the current code. You call df2$abs_diff, 
but you never defined that variable. You assigned that result to an object 
called diff2, which is not used anywhere else in your code. If you type in 
df2$abs_diff, you'll see it evaluates to NULL and that carries through the rest 
of your code. numeric(0) means that it's a variable of type numeric but it's 
empty (zero in length).

set.seed(5)
df2<- data.frame(em= rnorm(10), pct=rnorm(10))

diff2 <- abs(df2$em-1) #You defined diff2
df2$abs_diff #This was never defined so it evaluates to NULL
#> NULL

which.min(df2$abs_diff) #can't find position of min since df2$abs_diff was 
never defined
#> integer(0)

df2$pct[which.min(df2$abs_diff)] #cannot subset df2$pct since 
which.min(df2$abs_diff) evaluates to integer(0)
#> numeric(0)
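The chain of empty results above follows R's general rule that a zero-length 
index yields a zero-length subset; a minimal illustration with toy values:

```r
x <- c(10, 20, 30)
x[integer(0)]   # zero-length index gives a zero-length result: numeric(0)
which.min(NULL) # no elements, so no minimum position: integer(0)
```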


From: rain1...@aim.com 
Sent: Tuesday, May 16, 2023 8:29 PM
To: Alexander Ilich ; r-sig-geo@r-project.org 

Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

Hi Alexander,

I am receiving that for my real data, which is, indeed, odd. It works just fine 
with my very first dataframe, but for all other dataframes, it returns 
"numeric(0)". I did the following to organize each dataframe accordingly (note 
that I renamed my dataframes to "df1" through to "df10" for simplicity):

diff1 <- abs(df1$em-1)
w1 <- df1$pct[which.min(df1$abs_diff)]

diff2 <- abs(df2$em-1)
w2 <- df2$pct[which.min(df2$abs_diff)]

diff3 <- abs(df3$em-1)
w3 <- df3$pct[which.min(df3$abs_diff)]

diff4 <- abs(df4$em-1)
w4 <- df4$pct[which.min(df4$abs_diff)]

diff5 <- abs(df5$em-1)
w5 <- df5$pct[which.min(df5$abs_diff)]

diff6 <- abs(df6$em-1)
w6 <- df6$pct[which.min(df6$abs_diff)]

diff7 <- abs(df7$em-1)
w7 <- df7$pct[which.min(df7$abs_diff)]

diff8 <- abs(df8$em-1)
w8 <- df8$pct[which.min(df8$abs_diff)]

diff9 <- abs(df9$em-1)
w9 <- df9$pct[which.min(df9$abs_diff)]

diff10 <- abs(df10$em-1)
w10 <- df10$pct[which.min(df10$abs_diff)]

This is what object "df2" looks like (the first 21 rows are displayed - there 
are 140 rows in total). All dataframes are structured the same way, including 
"df1" (which, as mentioned previously, worked just fine). All begin with 
"0.0" in the first row. "em" is my x-column name, and "pct" is my 
y-column name, as shown in the image below:

[df2.jpg]

What could make the other dataframes so different from "df1" to cause 
"numeric(0)"? Essentially, why would "df1" be fine, and not the other 9 
dataframes? Unless my code for the other dataframes is flawed somehow?

Thanks, again,

-Original Message-
From: Alexander Ilich 
To: rain1...@aim.com ; r-sig-geo@r-project.org 

Sent: Tue, May 16, 2023 7:38 pm
Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

It's not clear to me why that would be happening. Are you getting that with 
your real data or the example data generated in the code I sent? The only 
reasons I can think of for that happening are if you're trying to access the 
zeroth element of a vector, which would require which.min(df2$abs_diff) to 
somehow evaluate to zero (which I don't see how it could), or if your 
dataframe has zero rows.
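Both failure modes mentioned here (an undefined column and a zero-row 
dataframe) can be caught up front with explicit checks. This is only a sketch 
of a defensive variant: the "em" and "pct" column names are the ones used in 
this thread, but the function name and the check style are illustrative, not 
from the original code:

```r
# Hypothetical defensive variant: fail loudly instead of returning numeric(0)
ExtractFirstMinSafe <- function(df) {
  stopifnot(is.data.frame(df),
            all(c("em", "pct") %in% names(df)), # required columns must exist
            nrow(df) > 0)                       # and there must be rows to search
  abs_diff <- abs(df$em - 1)
  df$pct[which.min(abs_diff)]
}
```

With the checks in place, an input missing the expected columns stops with an 
informative error rather than silently returning an empty vector.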

From: rain1...@aim.com 
Sent: Tuesday, May 16, 2023 5:58:22 PM
To: Alexander Ilich ; r-sig-geo@r-project.org 

Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

Hi Alexander,
Thank you so much, once again! Very, very helpful explanations.

I experimented with this method:

df1$abs_diff<- abs(df1$x-1)
min_rate[1]<- df1$y[which.min(df1$abs_diff)]

df2$abs_diff<- abs(df2$x-1)
min_rate[2]<- df2$y[which.min(df2$abs_diff)]


For the first dataframe, it correctly returned the first y-value where x = ~1. 
However, for dataframe2 to dataframe9, I strangely received: "numeric(0)".  
Everything is correctly placed. It does not appear to be an error per se, but 
is there a way around that to avoid that message and see the correct value?

Thanks, again,

-Original Message-
From: Alexander Ilich 
To: r-sig-geo@r-project.org ; rain1...@aim.com 

Sent: Tue, May 16, 2023 2:03 pm
Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

sapply goes element by element in your list, where each element is one of your 
dataframes. So mydata starts out as dataframe1, then dataframe2, then 
dataframe3, etc. It is never all of them at once. It goes through the list 
sequentially. So, at the end of the sapply call, you have a vector of length 10 
where the first element corresponds to the rate closest to x=1 in dataframe 1, 
and the tenth element corresponds to the rate closest to x=1 in dataframe 10. 
If your columns are not named x and y, then the function should be edited 
accordingly based on the names. It does assume the "x" and "y" have the same 
name across dataframes. For example, if x was actually "Time" and y was "Rate", 
you could use

#Generate data
set.seed(5)
for (i in 1:10) {
  assign(x = paste0("df", i),
 value = data.frame(Time = sort(rnorm(n = 10, mean = 1, sd = 0.1)),
Rate= rnorm(n = 10, mean = 30, sd = 1)))
} # Create 10 Data Frames

# Define Functions (two versions based on how you want to deal with ties)
ExtractFirstMin<- function(df){
  df$abs_diff<- abs(df$Time-1)
  min_rate<- df$Rate[which.min(df$abs_diff)]
  return(min_rate)
}

# Put all dataframes into a list
df_list<- list(df1,df2,df3,df4,df5,df6,df7,df8,df9,df10)

# Apply function across list
sapply(df_list, ExtractFirstMin)
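A side note, not from the original exchange: vapply works like sapply here but 
lets you declare the expected result type, so a malformed dataframe fails fast 
instead of producing an oddly shaped result. A sketch, assuming df_list and 
ExtractFirstMin are defined as above:

```r
# Each list element must yield exactly one numeric value, enforced by FUN.VALUE
vapply(df_list, ExtractFirstMin, FUN.VALUE = numeric(1))
```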

From: rain1...@aim.com 
Sent: Tuesday, May 16, 2023 12:46 PM
To: Alexander Ilich ; r-sig-geo@r-project.org 

Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

Hi Alexander,

Wow, thank you so very much for taking the time to articulate this answer! It 
really gives a good understanding of what is going on at each stage in the 
coding!

And sorry if I missed this previously, but the object "mydata" is defined based 
on the incorporation of all dataframes? Since it is designed to swiftly obtain 
the first minimum at y = ~1 across each dataframe, "mydata" must take into 
account "dataframe1" to dataframe10", correct?

Also, the "x" is simply replaced with the name of the x-column and the "y" with 
the y-column name, if I understand correctly?

Again, sorry if I overlooked this, but that would be all, and thank you so very 
much, once again for your help and time with this! Much appreciated!

~Trav.~


-Original Message-
From: Alexander Ilich 
To: r-sig-geo@r-project.org ; rain1...@aim.com 

Sent: Tue, May 16, 2023 11:42 am
Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

The only spot you'll need to change the names for is when putting all of your 
dataframes in a list as that is based on the names you gave them in your script 
when reading in the data. In the function, you don't need to change the input 
to "dataframe1", and naming it that way could be confusing since you are 
applying the function to more than just dataframe1 (you're applying it to all 
10 of your dataframes). I named the argument df to indicate that you should 
supply your dataframe as the input to the function, but you could name it 
anything you want. For example, you could call it "mydata" and define the 
function this way if you wanted to.

ExtractFirstMin<- function(mydata){
  mydata$abs_diff<- abs(mydata$x-1)
  min_rate<- mydata$y[which.min(mydata$abs_diff)]
  return(min_rate)
}

#The function has its own environment of variables that is separate from the 
global environment of variables you've defined in your script.
#When we supply one of your dataframes to the function, we are assigning that 
information to a variable in the function's environment called "mydata". 
Functions allow you to generalize your code so that you're not required to name 
your variables a certain way. Note here, we do assume that "mydata" has a "$x" 
and "$y" slot though.

#Without generalizing the code using a function, we'd need to copy and paste 
the code over and over again and make sure to change the name of the dataframe 
each time. This is very time consuming and error prone. Here's an example for 
the first 3 dataframes.

min_rate<- rep(NA_real_, 10) #initialize empty vector
df1$abs_diff<- abs(df1$x-1)
min_rate[1]<- df1$y[which.min(df1$abs_diff)]

df2$abs_diff<- abs(df2$x-1)
min_rate[2]<- df2$y[which.min(df2$abs_diff)]

df3$abs_diff<- abs(df3$x-1)
min_rate[3]<- df3$y[which.min(df3$abs_diff)]

print(min_rate)
#>  [1] 29.40269 32.21546 30.75330   NA   NA   NA   NA   NA
#>  [9]   NA   NA

#With the function defined we can run it for each individual dataframe, which 
is less error prone than copying and pasting but still fairly repetitive
ExtractFirstMin(mydata = df1) # You can explicitly say "mydata ="
#> [1] 29.40269
ExtractFirstMin(df2) # Or equivalently, arguments are matched by position in 
the order you defined them. Since there is just one argument, what you supply 
is assigned to "mydata"
#> [1] 32.21546

ExtractFirstMin(df3)
#> [1] 30.7533

# Rather than manually typing out the function call for each dataframe and 
bringing the results together, we can instead use sapply.
# sapply takes a list of inputs and a function as arguments. It then applies 
the function to every element in the list and returns a vector (i.e. it goes 
through each dataframe in your list, applies the function to each one 
individually, and then records the result for each one in a single variable).
sapply(df_list, ExtractFirstMin)
#>  [1] 29.40269 32.21546 30.75330 30.12109 30.38361 28.64928 30.45568 29.66190
#>  [9] 31.57229 31.33907
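One optional refinement beyond the original reply: if the list elements are 
named, sapply keeps those names on the result, which makes it obvious which 
value came from which dataframe. A toy sketch with a made-up helper function:

```r
get_first <- function(df) df$y[1]       # illustrative helper, not from the thread
df_list <- list(a = data.frame(y = 10),
                b = data.frame(y = 20)) # named list elements
sapply(df_list, get_first)              # named result: a = 10, b = 20
```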



From: rain1...@aim.com 
Sent: Monday, May 15, 2023 4:44 PM
To: Alexander Ilich ; r-sig-geo@r-project.org 

Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

Hi Alexander and everyone,

I hope that all is well! Just to follow up with this, I recently was able to 
try the following code that you had kindly previously shared:

ExtractFirstMin<- function(df){
  df$abs_diff<- abs(df$x-1)
  min_rate<- df$y[which.min(df$abs_diff)]
  return(min_rate)
} #Get first y value of closest to x=1

Just to be clear, do I simply replace the "df" in that code with the name of my 
individual dataframes? For example, here is the name of my 10 dataframes, which 
are successfully placed in a list (i.e. df_list), as you showed previously:

dataframe1
dataframe2
dataframe3
dataframe4
dataframe5
dataframe6
dataframe7
dataframe8
dataframe9
dataframe10

Thus, using your example above, using the first dataframe listed there, would 
this become:

ExtractFirstMin<- function(dataframe1){
  dataframe1$abs_diff<- abs(dataframe1$x-1)
  min_rate<- dataframe1$y[which.min(dataframe1$abs_diff)]
  return(min_rate)
} #Get first y value of closest to x=1

df_list<- list(dataframe1, dataframe2, dataframe3, dataframe4, dataframe5, 
dataframe6, dataframe7, dataframe8, dataframe9, dataframe10)

# Apply function across list
sapply(df_list, ExtractFirstMin)


Am I doing this correctly?

Thanks, again!


-Original Message-
From: Alexander Ilich 
To: rain1...@aim.com 
Sent: Thu, May 11, 2023 1:48 am
Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

Sure thing. Glad I could help!

From: rain1...@aim.com 
Sent: Thursday, May 11, 2023 12:17:12 AM
To: Alexander Ilich 
Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

Hi Alexander,

Many thanks for sharing this! It was really helpful!


-Original Message-
From: Alexander Ilich 
To: rain1...@aim.com 
Sent: Wed, May 10, 2023 2:05 pm
Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

One way to do this would be to put all your dataframes in a list, make one of 
the code implementations I put earlier into a function, and then use sapply to 
apply it across all the data frames.

#Generate data
set.seed(5)
for (i in 1:10) {
  assign(x = paste0("df", i),
         value = data.frame(x = sort(rnorm(n = 10, mean = 1, sd = 0.1)),
                            y= rnorm(n = 10, mean = 30, sd = 1)))
} # Create 10 Data Frames

# Define Functions (two versions based on how you want to deal with ties)
ExtractFirstMin<- function(df){
  df$abs_diff<- abs(df$x-1)
  min_rate<- df$y[which.min(df$abs_diff)]
  return(min_rate)
} #Get first y value of closest to x=1

ExtractAvgMin<- function(df){
  df$abs_diff<- abs(df$x-1)
  min_rate<- mean(df$y[df$abs_diff==min(df$abs_diff)])
  return(min_rate)
} #Average all y values that are closest to x=1

# Put all dataframes into a list
df_list<- list(df1,df2,df3,df4,df5,df6,df7,df8,df9,df10)

# Apply function across list
sapply(df_list, ExtractFirstMin)
#>  [1] 29.40269 32.21546 30.75330 30.12109 30.38361 28.64928 30.45568 29.66190
#>  [9] 31.57229 31.33907

sapply(df_list, ExtractAvgMin)
#>  [1] 29.40269 32.21546 30.75330 30.12109 30.38361 28.64928 30.45568 29.66190
#>  [9] 31.57229 31.33907

From: rain1...@aim.com 
Sent: Wednesday, May 10, 2023 1:40 PM
To: Alexander Ilich ; r-sig-geo@r-project.org 

Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

Hi Alexander,

Thank you so much for taking the time to outline these suggestions! 
What if I wanted to only isolate the y-value at x = 1.0 across all of my 10 
dataframes? That way, I could quickly see what the highest and lowest y-value 
is at x = 1.0? That said, in reality, not all x values are precisely 1.0 (it 
can be something like 0.99 to 1.02), but the idea is to target the y-value at x 
= ~1.0. Is that at all possible? 
Thanks, again!
-Original Message-
From: Alexander Ilich 
To: r-sig-geo@r-project.org ; rain1...@aim.com 

Sent: Wed, May 10, 2023 10:31 am
Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

So using your data but removing x=1, 0.8 and 1.2 would be equally close. Two 
potential options are to choose the y value corresponding to the first minimum 
difference (in this case x=0.8, y=39), or average the y values for all that 
are equally close (in this case average the y values for x=0.8 and x=1.2). I 
think the easiest way to do that would be to first calculate a column of the 
absolute values of the differences between x and 1 and then subset the 
dataframe to the minimum of that column to extract the y values. Here's a base 
R and a tidyverse implementation to do that.

#Base R
df<- data.frame(x=c(0,0.2,0.4,0.6,0.8,1.2,1.4),
                y= c(0,27,31,32,39,34,25))
df$abs_diff<- abs(df$x-1)

df$y[which.min(df$abs_diff)] #Get first y value of closest to x=1
#> [1] 39
mean(df$y[df$abs_diff==min(df$abs_diff)]) #Average all y values that are 
closest to x=1
#> [1] 36.5

#tidyverse
rm(list=ls())
library(dplyr)

df<- data.frame(x=c(0,0.2,0.4,0.6,0.8,1.2,1.4),
                y= c(0,27,31,32,39,34,25))
df<- df %>% mutate(abs_diff = abs(x-1))

df %>% filter(abs_diff==min(abs_diff)) %>% pull(y) %>% head(1) #Get first y 
value of closest to x=1
#> [1] 39

df %>% filter(abs_diff==min(abs_diff)) %>% pull(y) %>% mean() #Average all y 
values that are closest to x=1
#> [1] 36.5
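As an aside beyond the original reply: if a linearly interpolated y at exactly 
x = 1 would be acceptable instead of the nearest observed point, base R's 
approx can do that directly. A hedged sketch using the toy data from this 
message:

```r
df <- data.frame(x = c(0, 0.2, 0.4, 0.6, 0.8, 1.2, 1.4),
                 y = c(0, 27, 31, 32, 39, 34, 25))
# Linear interpolation between the bracketing points x = 0.8 and x = 1.2
approx(df$x, df$y, xout = 1)$y
#> [1] 36.5
```

For this particular dataset the interpolated value happens to match the 
tie-averaging approach, but in general the two methods answer slightly 
different questions.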
From: rain1...@aim.com 

Sent: Wednesday, May 10, 2023 8:13 AM
To: Alexander Ilich ; r-sig-geo@r-project.org 

Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at specific x value across several time series in R

Hi Alex and everyone,
My apologies for the confusion and this double message (I just noticed that the 
example dataset appeared distorted)! Let me try to simplify here again.

My dataframes are structured in the following way: an x column and y column, 
like this:
x     y
0      0
0.2   27
0.4   31
0.6   32
0.8   39
1.0   43
1.2   34
1.4   25

Now, let's say that I want to determine the rate of increase at about x = 1.0, 
relative to the beginning of the period (i.e. 0 at the beginning). We can see 
clearly here that the answer would be y = 43. My question is would it be 
possible to quickly determine the value at around x = 1.0 across the 10 
dataframes that I have like this without having to manually check them? The 
idea is to determine the range of values for y at around x = 1.0 across all 
dataframes. Note that it's not perfectly x = 1.0 in all dataframes - some could 
be 0.99 or 1.01.  
I hope that this is clearer!
Thanks,

-Original Message-
From: Alexander Ilich 
To: r-sig-geo@r-project.org ; rain1...@aim.com 

Sent: Tue, May 9, 2023 2:23 pm
Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

I'm currently having a bit of difficulty following. Rather than using your actual data, perhaps you could include code to generate a smaller dataset with the same structure, with clear definitions of what is contained within each (r faq - How to make a great R reproducible example - Stack Overflow). You can design that dataset to be small with a known answer and then describe how you got to that answer, and then others could help determine some code to accomplish that task.

Best Regards,
Alex

From: R-sig-Geo  on behalf of rain1290--- via R-sig-Geo 
Sent: Tuesday, May 9, 2023 1:01 PM
To: r-sig-geo@r-project.org 
Subject: [R-sig-Geo] Finding the highest and lowest rates of increase at specific x value across several time series in R

I would like to attempt to determine the difference between the highest and lowest rates of increase across a series of dataframes at a specified x value. As shown below, the dataframes have basic x and y columns, with emissions values in the x column and precipitation values in the y column. Among the dataframes, the idea would be to determine the highest and lowest rates of precipitation increase at "approximately" 1 teraton of emissions (TtC) relative to the first value of each time series. For example, I want to figure out which dataframe has the highest increase at 1 TtC, and which dataframe has the lowest increase at 1 TtC. However, I am not sure if there is a way to quickly achieve this. Here are the dataframes that I created, followed by an example of how each dataframe is structured:
#Dataframe objects created:
    
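The "increase at x = ~1.0 relative to the beginning" comparison asked about above can be sketched as follows. This is only an illustrative sketch: the helper names and the randomly generated dataframes are stand-ins, not the real data from the thread.

```r
# For each dataframe, take the y value at the x closest to 1, subtract the
# first y value, then see which dataframe increased the most and the least.
increase_at_1 <- function(df) {
  y_near_1 <- df$y[which.min(abs(df$x - 1))]  # y at the x closest to 1
  y_near_1 - df$y[1]                          # increase relative to the start
}

set.seed(1)
make_df <- function() data.frame(x = seq(0.45, 2.16, length.out = 90),
                                 y = cumsum(rnorm(90, mean = 0.2)))
df_list <- list(a = make_df(), b = make_df(), c = make_df())

inc <- sapply(df_list, increase_at_1)  # named vector: one increase per dataframe
names(inc)[which.max(inc)]             # dataframe with the highest increase at x ~ 1
names(inc)[which.min(inc)]             # dataframe with the lowest increase at x ~ 1
```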

Re: [R-sig-Geo] Finding the highest and lowest rates of increase at specific x value across several time series in R

2023-05-10 Thread Alexander Ilich
So using your data but removing x=1, 0.8 and 1.2 would be equally close. Two 
potential options are to choose the y value corresponding to the first minimum 
difference (in this case x=0.8, y=39), or average the y values for all that are 
equally close (in this case average the y values for x=0.8 and x=1.2). I think 
the easiest way to do that would be to first calculate a column of the absolute value 
of differences between x and 1 and then subset the dataframe to the minimum of 
that column to extract the y values. Here's a base R and tidyverse 
implementation to do that.

#Base R
df<- data.frame(x=c(0,0.2,0.4,0.6,0.8,1.2,1.4),
y= c(0,27,31,32,39,34,25))
df$abs_diff<- abs(df$x-1)

df$y[which.min(df$abs_diff)] #Get first y value of closest to x=1
#> [1] 39
mean(df$y[df$abs_diff==min(df$abs_diff)]) #Average all y values that are 
closest to x=1
#> [1] 36.5

#tidyverse
rm(list=ls())
library(dplyr)

df<- data.frame(x=c(0,0.2,0.4,0.6,0.8,1.2,1.4),
y= c(0,27,31,32,39,34,25))
df<- df %>% mutate(abs_diff = abs(x-1))

df %>% filter(abs_diff==min(abs_diff)) %>% pull(y) %>% head(1) #Get first y 
value of closest to x=1
#> [1] 39

df %>% filter(abs_diff==min(abs_diff)) %>% pull(y) %>% mean() #Average all y 
values that are closest to x=1
#> [1] 36.5

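The same nearest-x extraction can be wrapped in a function and run over all dataframes at once with sapply. A minimal sketch, assuming the dataframes are collected into a list (the generated data and list names here are illustrative, not the real data):

```r
# Nearest-x lookup as a reusable function, applied across a list of dataframes.
ExtractFirstMin <- function(df) {
  df$y[which.min(abs(df$x - 1))]  # first y value at the x closest to 1
}

set.seed(42)
make_df <- function() data.frame(x = seq(0, 1.4, by = 0.2),
                                 y = round(runif(8, 0, 45)))
df_list <- list(df1 = make_df(), df2 = make_df(), df3 = make_df())

vals <- sapply(df_list, ExtractFirstMin)  # named vector: one y per dataframe
range(vals)                               # lowest and highest y at x ~ 1
```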
From: rain1...@aim.com 
Sent: Wednesday, May 10, 2023 8:13 AM
To: Alexander Ilich ; r-sig-geo@r-project.org 

Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

Hi Alex and everyone,

My apologies for the confusion and this double message (I just noticed that the 
example dataset appeared distorted)! Let me try to simplify here again.

My dataframes are structured in the following way: an x column and y column, 
like this:

x     y
0      0
0.2   27
0.4   31
0.6   32
0.8   39
1.0   43
1.2   34
1.4   25


Now, let's say that I want to determine the rate of increase at about x = 1.0, 
relative to the beginning of the period (i.e. 0 at the beginning). We can see 
clearly here that the answer would be y = 43. My question is would it be 
possible to quickly determine the value at around x = 1.0 across the 10 
dataframes that I have like this without having to manually check them? The 
idea is to determine the range of values for y at around x = 1.0 across all 
dataframes. Note that it's not perfectly x = 1.0 in all dataframes - some could 
be 0.99 or 1.01.

I hope that this is clearer!

Thanks,


-Original Message-
From: Alexander Ilich 
To: r-sig-geo@r-project.org ; rain1...@aim.com 

Sent: Tue, May 9, 2023 2:23 pm
Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

I'm currently having a bit of difficulty following. Rather than using your 
actual data, perhaps you could include code to generate a smaller dataset with 
the same structure with clear definitions of what is contained within each (r 
faq - How to make a great R reproducible example - Stack 
Overflow).
You can design that dataset to be small with a known answer and then describe 
how you got to that answer and then others could help determine some code to 
accomplish that task.
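For instance, a small self-contained dataset with the same x/y structure could be generated like this (the numbers are entirely made up, purely to illustrate a reproducible example):

```r
# Generate a small, reproducible dataset with the same structure as the real
# data: an x column (emissions) and a y column (precipitation).
set.seed(123)
df <- data.frame(x = seq(0, 1.4, by = 0.2),
                 y = cumsum(sample(0:10, 8, replace = TRUE)))
df  # small enough that the expected answer can be worked out by hand
```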

Best Regards,
Alex

From: R-sig-Geo  on behalf of rain1290--- via 
R-sig-Geo 
Sent: Tuesday, May 9, 2023 1:01 PM
To: r-sig-geo@r-project.org 
Subject: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

I would like to attempt to determine the difference between the highest and 
lowest rates of increase across a series of dataframes at a specified x value. 
As shown below, the dataframes have basic x and y columns, with emissions 
values in the x column, and precipitation values in the y column. Among the 
dataframes, the idea would be to determine the highest and lowest rates of 
precipitation increase at "approximately" 1 teraton of emissions (TtC) relative to the first value of each time series. For example, I want to figure out which dataframe has the highest increase at 1 TtC, and which dataframe has the lowest increase at 1 TtC. However, I am not sure if there is a way to
quickly achieve this? Here are the dataframes that I created, followed by an 
example of how each dataframe is structured:
#Dataframe objects created:
CanESMRCP8.5PL<-data.frame(get3.teratons, pland20) 
IPSLLRRCP8.5PL<-data.frame(get6.teratons, pland21)
IPSLMRRCP8.5PL<-data.frame(get9.teratons, pland22)
IPSLLRBRCP8.5PL<-data.frame(get12.teratons, pland23)
MIROCRCP8.5PL<-data.frame(get15.teratons, pland24)
HadGEMRCP8.5PL<-data.frame(get18.teratons, pland25)
MPILRRCP8.5PL<-data.frame(get21.teratons, pland26)
GFDLGRCP8.5PL<-data.frame(get27.teratons, pland27)
GFDLMRCP8.5PL<-data.frame(get30.teratons, pland28)
#Example of what each of these look like:
>CanESMRCP8.5PL
get3.teratons   pland20 

Re: [R-sig-Geo] Finding the highest and lowest rates of increase at specific x value across several time series in R

2023-05-10 Thread rain1290--- via R-sig-Geo
Hi Alex and everyone,
My apologies for the confusion and this double message (I just noticed that the 
example dataset appeared distorted)! Let me try to simplify here again.

My dataframes are structured in the following way: an x column and y column, 
like this:
x     y
0      0
0.2   27
0.4   31
0.6   32
0.8   39
1.0   43
1.2   34
1.4   25

Now, let's say that I want to determine the rate of increase at about x = 1.0, 
relative to the beginning of the period (i.e. 0 at the beginning). We can see 
clearly here that the answer would be y = 43. My question is would it be 
possible to quickly determine the value at around x = 1.0 across the 10 
dataframes that I have like this without having to manually check them? The 
idea is to determine the range of values for y at around x = 1.0 across all 
dataframes. Note that it's not perfectly x = 1.0 in all dataframes - some could 
be 0.99 or 1.01.  
I hope that this is clearer!
Thanks,

-Original Message-
From: Alexander Ilich 
To: r-sig-geo@r-project.org ; rain1...@aim.com 

Sent: Tue, May 9, 2023 2:23 pm
Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

I'm currently having a bit of difficulty following. Rather than using your actual data, perhaps you could include code to generate a smaller dataset with the same structure, with clear definitions of what is contained within each (r faq - How to make a great R reproducible example - Stack Overflow). You can design that dataset to be small with a known answer and then describe how you got to that answer, and then others could help determine some code to accomplish that task.
Best Regards,
Alex

From: R-sig-Geo  on behalf of rain1290--- via R-sig-Geo 
Sent: Tuesday, May 9, 2023 1:01 PM
To: r-sig-geo@r-project.org 
Subject: [R-sig-Geo] Finding the highest and lowest rates of increase at specific x value across several time series in R

I would like to attempt to
determine the difference between the highest and lowest rates of increase 
across a series of dataframes at a specified x value. As shown below, the 
dataframes have basic x and y columns, with emissions values in the x column, 
and precipitation values in the y column. Among the dataframes, the idea would 
be to determine the highest and lowest rates of precipitation increase at 
"approximately" 1 teraton of emissions (TtC) relative to the first value of each time series. For example, I want to figure out which dataframe has the highest increase at 1 TtC, and which dataframe has the lowest increase at 1 TtC. However, I am not sure if there is a way to quickly achieve this. Here 
are the dataframes that I created, followed by an example of how each dataframe 
is structured:
#Dataframe objects created:
    CanESMRCP8.5PL<-data.frame(get3.teratons, pland20) 
IPSLLRRCP8.5PL<-data.frame(get6.teratons, pland21)    
IPSLMRRCP8.5PL<-data.frame(get9.teratons, pland22)    
IPSLLRBRCP8.5PL<-data.frame(get12.teratons, pland23)    
MIROCRCP8.5PL<-data.frame(get15.teratons, pland24)    
HadGEMRCP8.5PL<-data.frame(get18.teratons, pland25)    
MPILRRCP8.5PL<-data.frame(get21.teratons, pland26)    
GFDLGRCP8.5PL<-data.frame(get27.teratons, pland27)    
GFDLMRCP8.5PL<-data.frame(get30.teratons, pland28)
#Example of what each of these look like:
    >CanESMRCP8.5PL

Re: [R-sig-Geo] Finding the highest and lowest rates of increase at specific x value across several time series in R

2023-05-09 Thread rain1290--- via R-sig-Geo
Hi Alex and everyone,
My apologies for the confusion! Let me try to simplify here.

My dataframes are structured in the following way: an x column and y column, 
like this:
x     y
0      0
0.2   27
0.4   31
0.6   32
0.8   39
1.0   43
1.2   34
1.4   25

Now, let's say that I want to determine the rate of increase at about x = 1.0, 
relative to the beginning of the period (i.e. 0 at the beginning). We can see 
clearly here that the answer would be y = 43. My question is would it be 
possible to quickly determine the value at around x = 1.0 across the 10 
dataframes that I have like this without having to manually check them? The 
idea is to determine the range of values for y at around x = 1.0 across all 
dataframes. Note that it's not perfectly x = 1.0 in all dataframes - some could 
be 0.99 or 1.01.  
I hope that this is clearer!
Thanks,
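Applying the nearest-x lookup discussed elsewhere in the thread to this exact table picks out the expected value; a quick sketch:

```r
# The small table above as a dataframe; the x closest to 1 is exactly 1.0 here.
df <- data.frame(x = c(0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4),
                 y = c(0, 27, 31, 32, 39, 43, 34, 25))
y_at_1 <- df$y[which.min(abs(df$x - 1))]
y_at_1            # 43, the value at x = 1.0
y_at_1 - df$y[1]  # increase relative to the start of the series
```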
-Original Message-
From: Alexander Ilich 
To: r-sig-geo@r-project.org ; rain1...@aim.com 

Sent: Tue, May 9, 2023 2:23 pm
Subject: Re: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

I'm currently having a bit of difficulty following. Rather than using your actual data, perhaps you could include code to generate a smaller dataset with the same structure, with clear definitions of what is contained within each (r faq - How to make a great R reproducible example - Stack Overflow). You can design that dataset to be small with a known answer and then describe how you got to that answer, and then others could help determine some code to accomplish that task.
Best Regards,
Alex

From: R-sig-Geo  on behalf of rain1290--- via R-sig-Geo 
Sent: Tuesday, May 9, 2023 1:01 PM
To: r-sig-geo@r-project.org 
Subject: [R-sig-Geo] Finding the highest and lowest rates of increase at specific x value across several time series in R

I would like to attempt to
determine the difference between the highest and lowest rates of increase 
across a series of dataframes at a specified x value. As shown below, the 
dataframes have basic x and y columns, with emissions values in the x column, 
and precipitation values in the y column. Among the dataframes, the idea would 
be to determine the highest and lowest rates of precipitation increase at 
"approximately" 1 teraton of emissions (TtC) relative to the first value of each time series. For example, I want to figure out which dataframe has the highest increase at 1 TtC, and which dataframe has the lowest increase at 1 TtC. However, I am not sure if there is a way to quickly achieve this. Here 
are the dataframes that I created, followed by an example of how each dataframe 
is structured:
#Dataframe objects created:
    CanESMRCP8.5PL<-data.frame(get3.teratons, pland20) 
IPSLLRRCP8.5PL<-data.frame(get6.teratons, pland21)    
IPSLMRRCP8.5PL<-data.frame(get9.teratons, pland22)    
IPSLLRBRCP8.5PL<-data.frame(get12.teratons, pland23)    
MIROCRCP8.5PL<-data.frame(get15.teratons, pland24)    
HadGEMRCP8.5PL<-data.frame(get18.teratons, pland25)    
MPILRRCP8.5PL<-data.frame(get21.teratons, pland26)    
GFDLGRCP8.5PL<-data.frame(get27.teratons, pland27)    
GFDLMRCP8.5PL<-data.frame(get30.teratons, pland28)
#Example of what each of these look like:
    >CanESMRCP8.5PL

Re: [R-sig-Geo] Finding the highest and lowest rates of increase at specific x value across several time series in R

2023-05-09 Thread Alexander Ilich
I'm currently having a bit of difficulty following. Rather than using your 
actual data, perhaps you could include code to generate a smaller dataset with 
the same structure with clear definitions of what is contained within each (r 
faq - How to make a great R reproducible example - Stack 
Overflow).
You can design that dataset to be small with a known answer and then describe 
how you got to that answer and then others could help determine some code to 
accomplish that task.

Best Regards,
Alex

From: R-sig-Geo  on behalf of rain1290--- via 
R-sig-Geo 
Sent: Tuesday, May 9, 2023 1:01 PM
To: r-sig-geo@r-project.org 
Subject: [R-sig-Geo] Finding the highest and lowest rates of increase at 
specific x value across several time series in R

I would like to attempt to determine the difference between the highest and 
lowest rates of increase across a series of dataframes at a specified x value. 
As shown below, the dataframes have basic x and y columns, with emissions 
values in the x column, and precipitation values in the y column. Among the 
dataframes, the idea would be to determine the highest and lowest rates of 
precipitation increase at "approximately" 1 teraton of emissions (TtC) relative to the first value of each time series. For example, I want to figure out which dataframe has the highest increase at 1 TtC, and which dataframe has the lowest increase at 1 TtC. However, I am not sure if there is a way to quickly achieve this. Here are the dataframes that I created, followed by an
example of how each dataframe is structured:
#Dataframe objects created:
CanESMRCP8.5PL<-data.frame(get3.teratons, pland20) 
IPSLLRRCP8.5PL<-data.frame(get6.teratons, pland21)
IPSLMRRCP8.5PL<-data.frame(get9.teratons, pland22)
IPSLLRBRCP8.5PL<-data.frame(get12.teratons, pland23)
MIROCRCP8.5PL<-data.frame(get15.teratons, pland24)
HadGEMRCP8.5PL<-data.frame(get18.teratons, pland25)
MPILRRCP8.5PL<-data.frame(get21.teratons, pland26)
GFDLGRCP8.5PL<-data.frame(get27.teratons, pland27)
GFDLMRCP8.5PL<-data.frame(get30.teratons, pland28)
#Example of what each of these look like:
>CanESMRCP8.5PL

[R-sig-Geo] Finding the highest and lowest rates of increase at specific x value across several time series in R

2023-05-09 Thread rain1290--- via R-sig-Geo
I would like to attempt to determine the difference between the highest and 
lowest rates of increase across a series of dataframes at a specified x value. 
As shown below, the dataframes have basic x and y columns, with emissions 
values in the x column, and precipitation values in the y column. Among the 
dataframes, the idea would be to determine the highest and lowest rates of 
precipitation increase at "approximately" 1 teraton of emissions (TtC) relative to the first value of each time series. For example, I want to figure out which dataframe has the highest increase at 1 TtC, and which dataframe has the lowest increase at 1 TtC. However, I am not sure if there is a way to
quickly achieve this? Here are the dataframes that I created, followed by an 
example of how each dataframe is structured:
#Dataframe objects created:
    CanESMRCP8.5PL<-data.frame(get3.teratons, pland20)     
IPSLLRRCP8.5PL<-data.frame(get6.teratons, pland21)    
IPSLMRRCP8.5PL<-data.frame(get9.teratons, pland22)    
IPSLLRBRCP8.5PL<-data.frame(get12.teratons, pland23)    
MIROCRCP8.5PL<-data.frame(get15.teratons, pland24)    
HadGEMRCP8.5PL<-data.frame(get18.teratons, pland25)    
MPILRRCP8.5PL<-data.frame(get21.teratons, pland26)    
GFDLGRCP8.5PL<-data.frame(get27.teratons, pland27)    
GFDLMRCP8.5PL<-data.frame(get30.teratons, pland28)
#Example of what each of these look like:
    >CanESMRCP8.5PL
        get3.teratons   pland20
X1      0.4542249     13.252426
X2      0.4626662      3.766658
X3      0.4715780      2.220986
X4      0.4809204      8.495072
X5      0.4901427     10.206458
X6      0.4993126     10.942797
X7      0.5088599      6.592956
X8      0.5187588      2.435796
X9      0.5286758      2.275836
X10     0.5389284      5.051706
X11     0.5496212      8.313389
X12     0.5600628      9.007722
X13     0.5708608     11.905644
X14     0.5819234      6.126022
X15     0.5926283      9.883264
X16     0.6042306      7.699696
X17     0.6159752      5.614193
X18     0.6274483      6.681527
X19     0.6394011     10.112812
X20     0.6519496      8.721810
X21     0.6646344     10.315931
X22     0.6773436     11.372490
X23     0.6903203      8.662169
X24     0.7036479     10.106109
X25     0.7180955     10.990867
X26     0.7322746     13.491778
X27     0.7459771     17.256650
X28     0.7604589     12.040960
X29     0.7753096     10.638796
X30     0.7898374      7.889500
X31     0.8047258     11.757174
X32     0.8204160     15.060151
X33     0.8359387      9.822078
X34     0.8510721     11.388695
X35     0.8661237     10.271567
X36     0.8815913     13.224285
X37     0.8984146     15.584782
X38     0.9154501      9.320024
X39     0.9324529      9.187128
X40     0.9497379     12.919805
X41     0.9672824     15.190318
X42     0.9854439     12.098606
X43     1.0041460     16.758629
X44     1.0241779     17.435182
X45     1.0451656     15.323428
X46     1.0663605     18.292109
X47     1.0868977     12.625429
X48     1.1079376     17.318583
X49     1.1295719     14.056624
X50     1.1516720     18.239445
X51     1.1736696     16.312087
X52     1.1963065     18.683315
X53     1.2195753     20.364835
X54     1.2425277     14.337167
X55     1.2653873     16.072449
X56     1.2888002     14.870248
X57     1.3126799     18.431717
X58     1.3362459     19.873449
X59     1.3593610     17.278361
X60     1.3833589     18.532887
X61     1.4083234     16.178170
X62     1.4328881     17.689810
X63     1.4572568     21.395131
X64     1.4821021     20.154886
X65     1.5072721     15.655971
X66     1.5325393     21.692028
X67     1.5581797     23.258303
X68     1.5842384     23.802459
X69     1.6108635     15.824673
X70     1.6365393     19.016228
X71     1.6618322     20.957593
X72     1.6876948     19.105363
X73     1.7134712     19.759288
X74     1.7392598     27.315595
X75     1.7652725     24.882263
X76     1.7913807     25.813408
X77     1.8173818     23.658997
X78     1.8434211     24.223432
X79     1.8695911     23.560818
X80     1.8960611     28.057708
X81     1.9228969     26.996265
X82     1.9493552     26.659719
X83     1.9759324     22.723687
X84     2.002         30.977267
X85     2.0290137     29.384326
X86     2.0549359     24.840383
X87     2.0811679     26.952620
X88     2.1081763     29.894790
X89     2.1349227     25.224040
X90     2.1613017     27.722623
    >IPSLLRRCP8.5PL
        get6.teratons   pland21
X1      0.5300411      8.128827
X2      0.5401701      6.683660
X3      0.5503503     12.344974
X4      0.5607762     11.322411
X5      0.5714146     14.250646
X6      0.5825357     10.013592
X7      0.5937966      9.437394
X8      0.6051673      8.138396
X9      0.6168960      9.767765
X10     0.6290367      8.166579
X11     0.6413864     12.307348
X12     0.6539184     12.623931
X13     0.6667360     11.182448
X14     0.6800060     12.585040
X15     0.6935350     13.408614
X16     0.7071757      9.352335
X17     0.7211951     12.743725
X18     0.7356089     11.625612
X19     0.7502665     10.240418
X20     0.7650959     12.394282
X21     0.7800845     16.963066
X22