[R] Svyolr - Properly Specifying Start Values

2016-11-29 Thread Courtney Benjamin
Hello R Users,

I am trying to use the svyolr command and coming up with the following error:

Error in MASS::polr(formula, data = df, ..., Hess = TRUE, model = FALSE,  :
  attempt to find suitable starting values failed
From what I have read online, a possible solution is to specify a value in the
start argument of svyolr; unfortunately, I have not been able to find any
detailed or clear description of how to specify values for the start
argument.  Any help in explaining how to establish reasonable start values
would be greatly appreciated.

library(survey)
library(RCurl)

data <- 
getURL("https://raw.githubusercontent.com/cbenjamin1821/careertech-ed/master/elsq1adj2.csv")
elsq1ch <- read.csv(text = data)


#Specifying the svyrepdesign object which applies the BRR weights
elsq1ch_brr<-svrepdesign(variables = elsq1ch[,1:16], repweights = 
elsq1ch[,18:217], weights = elsq1ch[,17], combined.weights = TRUE, type = "BRR")
elsq1ch_brr

allCColr <- 
svyolr(F3ATTAINMENT~F1PARED+BYINCOME+F1RACE+F1SEX+F1RGPP2+F1HIMATH+F1RTRCC,design=subset(elsq1ch_brr,BYSCTRL==1==1),na.action=na.omit)
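
For reference, a hypothetical sketch of how start values might be supplied
(untested; it relies on the documented c(coefficients, zeta) format of the
start argument of MASS::polr and assumes svyolr() forwards start on to polr()):

f <- F3ATTAINMENT ~ F1PARED + BYINCOME + F1RACE + F1SEX + F1RGPP2 +
  F1HIMATH + F1RTRCC

## number of slope coefficients = columns of the model matrix minus the intercept
n_coef <- ncol(model.matrix(f, data = elsq1ch)) - 1
## number of cutpoints = number of outcome levels minus 1
n_cut <- nlevels(factor(elsq1ch$F3ATTAINMENT)) - 1

## crude placeholder guesses: zero slopes and increasing cutpoints; another
## option is to fit MASS::polr() on the unweighted data first and reuse its
## $coefficients and $zeta as start values
start_vals <- c(rep(0, n_coef), seq_len(n_cut))

allCColr <- svyolr(f, design = subset(elsq1ch_brr, BYSCTRL==1==1),
  na.action = na.omit, start = start_vals)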






Courtney Benjamin

Broome-Tioga BOCES

Automotive Technology II Teacher

Located at Gault Toyota

Doctoral Candidate-Educational Theory & Practice

State University of New York at Binghamton

cbenj...@btboces.org

607-763-8633

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] files

2016-11-29 Thread Jeff Newmiller

Presumably using read.table instead of read.csv.

Get the import working with one sample file first, then do whatever you 
had to do with one file over and over.


You still need to read up on regex patterns... To get you started, the 
pattern for matching the csv files would be something like 
"Bdat[0-9]*\\.csv$", and the pattern for matching the txt files would be 
"Bdat[0-9]*\\.txt$", but understanding regular expressions is a topic in 
its own right, separate from R. Google is your friend.
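
To make that concrete, a hedged sketch (assuming the files sit in the current
working directory, the *.txt files are whitespace-delimited with a header row,
and all of the Bdat files share the same columns):

csv_files <- list.files(pattern = "Bdat[0-9]*\\.csv$")
txt_files <- list.files(pattern = "Bdat[0-9]*\\.txt$")

csv_list <- lapply(csv_files, read.csv)
txt_list <- lapply(txt_files, read.table, header = TRUE)  # default sep = "" handles space-delimited files

Bdat_all <- do.call(rbind, c(csv_list, txt_list))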


On Tue, 29 Nov 2016, Val wrote:


Thank you Sarah,

Some of the files are not csv; some are *.txt and space-delimited.
  Bdat.txt
  Bdat123.txt
  Bdat456.txt
How do I do that?



On Tue, Nov 29, 2016 at 8:28 PM, Sarah Goslee  wrote:

Something like this:

filelist <- list.files(pattern="^test")
myfiles <- lapply(filelist, read.csv)
myfiles <- do.call(rbind, myfiles)



On Tue, Nov 29, 2016 at 9:11 PM, Val  wrote:

Hi all,

In one folder I have several files, and I want to
combine/concatenate (rbind) them based on some condition.
Here is  the sample of the files in one folder
   test.csv
   test123.csv
   test456.csv
   Adat.csv
   Adat123.csv
   Adat456.csv

I want to create 2  files as follows

test_all = rbind(test.csv, test123.csv, test456.csv)
Adat_all = rbind(Adat.csv, Adat123.csv, Adat456.csv)

The actual number of files is large; is there an efficient way
of doing it?

Thank you




--
Sarah Goslee
http://www.functionaldiversity.org


__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



---
Jeff Newmiller
Research Engineer (Solar/Batteries/Software/Embedded Controllers)

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] files

2016-11-29 Thread Val
Thank you Sarah,

Some of the files are not csv; some are *.txt and space-delimited.
   Bdat.txt
   Bdat123.txt
   Bdat456.txt
How do I do that?



On Tue, Nov 29, 2016 at 8:28 PM, Sarah Goslee  wrote:
> Something like this:
>
> filelist <- list.files(pattern="^test")
> myfiles <- lapply(filelist, read.csv)
> myfiles <- do.call(rbind, myfiles)
>
>
>
> On Tue, Nov 29, 2016 at 9:11 PM, Val  wrote:
>> Hi all,
>>
>> In one folder  I have several files  and  I want
>> combine/concatenate(rbind) based on some condition .
>> Here is  the sample of the files in one folder
>>test.csv
>>test123.csv
>>test456.csv
>>Adat.csv
>>Adat123.csv
>>Adat456.csv
>>
>> I want to create 2  files as follows
>>
>> test_all  = rbind(test.csv, test123.csv,test456.csv)
>> Adat_al l= rbind(Adat.csv, Adat123.csv,Adat456.csv)
>>
>> The actual number of  of files are many and  is there an efficient way
>> of doing it?
>>
>> Thank you
>>
>
>
> --
> Sarah Goslee
> http://www.functionaldiversity.org

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] files

2016-11-29 Thread Sarah Goslee
Something like this:

filelist <- list.files(pattern="^test")
myfiles <- lapply(filelist, read.csv)
myfiles <- do.call(rbind, myfiles)
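
The same recipe covers the other group; a sketch, assuming the Adat files
share the same columns:

Adat_all <- do.call(rbind, lapply(list.files(pattern = "^Adat"), read.csv))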



On Tue, Nov 29, 2016 at 9:11 PM, Val  wrote:
> Hi all,
>
> In one folder  I have several files  and  I want
> combine/concatenate(rbind) based on some condition .
> Here is  the sample of the files in one folder
>test.csv
>test123.csv
>test456.csv
>Adat.csv
>Adat123.csv
>Adat456.csv
>
> I want to create 2  files as follows
>
> test_all  = rbind(test.csv, test123.csv,test456.csv)
> Adat_al l= rbind(Adat.csv, Adat123.csv,Adat456.csv)
>
> The actual number of  of files are many and  is there an efficient way
> of doing it?
>
> Thank you
>


-- 
Sarah Goslee
http://www.functionaldiversity.org

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] files

2016-11-29 Thread Val
Hi all,

In one folder I have several files, and I want to
combine/concatenate (rbind) them based on some condition.
Here is  the sample of the files in one folder
   test.csv
   test123.csv
   test456.csv
   Adat.csv
   Adat123.csv
   Adat456.csv

I want to create 2  files as follows

test_all = rbind(test.csv, test123.csv, test456.csv)
Adat_all = rbind(Adat.csv, Adat123.csv, Adat456.csv)

The actual number of files is large; is there an efficient way
of doing it?

Thank you

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] using grepl in dplyr

2016-11-29 Thread Jeff Newmiller
That is not a very selective regex.

Actually, a long "or" probably is best, but you don't have to type it in 
directly. 

prefixes <- c( "AD", "FN" )
pat <- paste0( "^(", paste( prefixes, collapse="|" ), ")[0-9]{4}$" )
grepl( pat, Identifier )
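
A possible usage sketch for the dplyr step (the data frame name dat is a
placeholder; drop the ! to keep rather than remove the matching rows):

library(dplyr)
dat_filtered <- dat %>% filter( !grepl( pat, Identifier ) )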

-- 
Sent from my phone. Please excuse my brevity.

On November 29, 2016 10:37:29 AM PST, Glenn Schultz  
wrote:
>Hello All,
>
>I have a dataframe of about 1.5 million rows from this dataframe I need
>to filter out identifiers.  An example would be 07-07099,
>AD-AD0999, and AL-AL, FN-FN.  I am using grepl to
>identify those of interest as follows:
>
> grepl("^[FN]|[AD]{2}", Identifier)
>
>The above seems to work in the case of FN and AD.  However, there are
>20 such identifiers and there must be a better way to do this than a
>long "or" statement.  Ultimately, I would like to filter these out
>using dplyr which I think the first step is to create a vector of
>TRUE/FALSE then filter on TRUE
>
>Any Ideas are appreciated,
>Glenn
>
>
>__
>R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] Unexpected interference between dplyr and plm

2016-11-29 Thread David Winsemius

> On Nov 29, 2016, at 4:09 PM, David Winsemius  wrote:
> 
> 
>> On Nov 29, 2016, at 11:26 AM, Hadley Wickham  wrote:
>> 
>> On Tue, Nov 29, 2016 at 11:52 AM, William Dunlap  wrote:
 The other option would be to load dplyr first (which would give the warning
 that stats::lag was masked) and then later load plm (which should give a
 further warning that dplyr::lag is masked). Then the plm::lag function
 will be found first.
>>> 
>>> Another option is to write the package maintainers and complain
>>> that masking core functions is painful for users.
>> 
>> Don't worry; many people have done that.
> 
> Is it possible that the maintainer could add an explicit importation of  the 
> lag function from pkg:stats?

Sorry. Meant to ask explicitly whether the plm maintainer could do that for plm 
users. Seems clear that dplyr is holding its ground.
> 
> 
>> 
>> Hadley
>> 
>> -- 
>> http://hadley.nz
> 
> David Winsemius
> Alameda, CA, USA
> 
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

David Winsemius
Alameda, CA, USA

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unexpected interference between dplyr and plm

2016-11-29 Thread David Winsemius

> On Nov 29, 2016, at 11:26 AM, Hadley Wickham  wrote:
> 
> On Tue, Nov 29, 2016 at 11:52 AM, William Dunlap  wrote:
>>> The other option would be to load dplyr first (which would give the warning
>>> that stats::lag was masked) and then later load plm (which should give a
>>> further warning that dplyr::lag is masked). Then the plm::lag function will
>>> be found first.
>> 
>> Another option is to write the package maintainers and complain
>> that masking core functions is painful for users.
> 
> Don't worry; many people have done that.

Is it possible that the maintainer could add an explicit importation of  the 
lag function from pkg:stats?


> 
> Hadley
> 
> -- 
> http://hadley.nz

David Winsemius
Alameda, CA, USA

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Dendrogram branches and igraph edges

2016-11-29 Thread Hassan Sinky
Hello everyone,

I have generated a dendrogram by applying a hierarchical clustering
technique to a graph. Given this dendrogram, I am trying to efficiently
find/map/label the dendrogram branches to their corresponding graph
edges. Using dendextend I am able to partition the leaves, obtain
subgraphs, perform depth-first searches, etc. However, the issue is finding
which graph edge (endpoints) is represented by which dendrogram branch.

Any help is greatly appreciated.

Thanks!

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] using grepl in dplyr

2016-11-29 Thread Glenn Schultz

Hello All,

I have a dataframe of about 1.5 million rows; from this dataframe I need to 
filter out identifiers.  An example would be 07-07099, AD-AD0999, and 
AL-AL, FN-FN.  I am using grepl to identify those of interest 
as follows:

 grepl("^[FN]|[AD]{2}", Identifier)

The above seems to work in the case of FN and AD.  However, there are 20 such identifiers, 
and there must be a better way to do this than a long "or" statement.  
Ultimately, I would like to filter these out using dplyr; I think the first step is 
to create a vector of TRUE/FALSE and then filter on TRUE.

Any Ideas are appreciated,
Glenn


__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] Convert arc-second to degree

2016-11-29 Thread Ben Tupper
Hello,

I haven't downloaded the data, but a mock-up of your steps below does as you 
ask. You can see the resolution of y is 1 x 1 and each cell is filled with the sum 
of 120 x 120 original cells, each of which had a value of 1.

In this case, the raster package faithfully interprets the fractional degree 
spatial units from the get-go.  So you needn't worry about the arc-second to 
degree issue.

Ben

P.S.  This question is about spatial data; your best results will be had by 
subscribing to and posting to the spatial mailing list for R.  
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

library(raster)
x <- raster(
 nrows = 17400, ncols = 43200,
 xmn = -180, xmx = 180, ymn = -60, ymx = 85, 
 crs = '+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0')
x[] <- 1
x
  class   : RasterLayer 
  dimensions  : 17400, 43200, 751680000  (nrow, ncol, ncell)
  resolution  : 0.00833, 0.00833  (x, y)
  extent  : -180, 180, -60, 85  (xmin, xmax, ymin, ymax)
  coord. ref. : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0 
  data source : in memory
  names   : layer 
  values  : 1, 1  (min, max)

y <- aggregate(x, fact = 120, fun = sum)
y
  class   : RasterLayer 
  dimensions  : 145, 360, 52200  (nrow, ncol, ncell)
  resolution  : 1, 1  (x, y)
  extent  : -180, 180, -60, 85  (xmin, xmax, ymin, ymax)
  coord. ref. : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0 
  data source : 
/private/var/folders/xx/nnm6q33102z059rfg4rh2y90gn/T/RtmpgsByoE/raster/r_tmp_2016-11-29_180022_89972_05480.grd
 
  names   : layer 
  values  : 14400, 14400  (min, max)
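
If the end goal is to pull the aggregated population at particular 1-degree
points, a follow-up sketch (the coordinates below are made up):

pts <- cbind(lon = c(-70.5, 10.5, 100.5), lat = c(43.5, 50.5, 13.5))
extract(y, pts)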




> On Nov 29, 2016, at 5:45 PM, Miluji Sb  wrote:
> 
> Dear all,
> 
> I am using the Gridded Population of the World (v4) for the year 2010. The
> data is in GeoTiFF format.
> 
> Source:
> http://sedac.ciesin.columbia.edu/data/set/gpw-v4-population-count-adjusted-to-2015-unwpp-country-totals/data-download
> 
> I imported the data using:
> 
> library(raster)
> library(maptools)
> library(ncdf4)
> library(rgdal)
> 
> population <-
> raster("gpw-v4-population-count-adjusted-to-2015-unwpp-country-totals_2010.tif")
> population1 <- stack(population )
> extent(population1 ) <- c(-180, 180,-58,85)
> 
> ### Information
> class   : RasterStack
> dimensions  : 17400, 43200, 75168, 1  (nrow, ncol, ncell, nlayers)
> resolution  : 0.00833, 0.00833  (x, y)
> extent  : -180, 180, -60, 85  (xmin, xmax, ymin, ymax)
> coord. ref. : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84
> +towgs84=0,0,0
> names   :
> gpw.v4.population.count.adjusted.to.2015.unwpp.country.totals_2010
> min values  :
>   0
> max values  :
> 141715.3
> ###
> 
> I need to extract population by a set of coordinates which are at 1° x 1°,
> how can I convert from arc-second to degree in R? The information also
> shows that resolution is 0.00833° x 0.00833°, is it enough to do
> something like this?
> 
> pop_agg <- aggregate(population1 , fact=120, fun=sum)
> 
> Thank you very much.
> 
> Sincerely,
> 
> Milu
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



Ben Tupper
Bigelow Laboratory for Ocean Sciences
60 Bigelow Drive, P.O. Box 380
East Boothbay, Maine 04544
http://www.bigelow.org

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

[R] Convert arc-second to degree

2016-11-29 Thread Miluji Sb
Dear all,

I am using the Gridded Population of the World (v4) for the year 2010. The
data is in GeoTiFF format.

Source:
http://sedac.ciesin.columbia.edu/data/set/gpw-v4-population-count-adjusted-to-2015-unwpp-country-totals/data-download

I imported the data using:

library(raster)
library(maptools)
library(ncdf4)
library(rgdal)

population <-
raster("gpw-v4-population-count-adjusted-to-2015-unwpp-country-totals_2010.tif")
population1 <- stack(population )
extent(population1 ) <- c(-180, 180,-58,85)

### Information
class   : RasterStack
dimensions  : 17400, 43200, 751680000, 1  (nrow, ncol, ncell, nlayers)
resolution  : 0.00833, 0.00833  (x, y)
extent  : -180, 180, -60, 85  (xmin, xmax, ymin, ymax)
coord. ref. : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84
+towgs84=0,0,0
names   :
gpw.v4.population.count.adjusted.to.2015.unwpp.country.totals_2010
min values  :
   0
max values  :
141715.3
###

I need to extract population for a set of coordinates which are at 1° x 1°;
how can I convert from arc-seconds to degrees in R? The information also
shows that the resolution is 0.00833° x 0.00833°; is it enough to do
something like this?

pop_agg <- aggregate(population1 , fact=120, fun=sum)

Thank you very much.

Sincerely,

Milu

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] transpose rows and columns for large data

2016-11-29 Thread Marc Schwartz
Hi,

To provide a [very] small example of what Bert is referring to:

DF <- data.frame(Letters = letters[1:4], int = 1:4)

> str(DF)
'data.frame':   4 obs. of  2 variables:
 $ Letters: Factor w/ 4 levels "a","b","c","d": 1 2 3 4
 $ int: int  1 2 3 4

> DF
  Letters int
1   a   1
2   b   2
3   c   3
4   d   4


DFt <- t(DF)

> str(DFt)
 chr [1:2, 1:4] "a" "1" "b" "2" "c" "3" "d" "4"
 - attr(*, "dimnames")=List of 2
  ..$ : chr [1:2] "Letters" "int"
  ..$ : NULL

> DFt
[,1] [,2] [,3] [,4]
Letters "a"  "b"  "c"  "d" 
int "1"  "2"  "3"  "4" 


Note the change in the structure  and the data types of the data frame after 
the transposition...

A data frame, which is a special type of 'list', may contain multiple data 
types, one per column. It is designed, more or less, specifically to be able to 
handle multiple data types across the columns, as you might have in a database 
(character, numeric, date, etc.), but at the expense of some operations.

A matrix, which is in reality a vector with dimensions, can only contain a 
single data type and in this example, the numeric column is coerced to 
character. Note also that the "Letters" column in "DF" is changed from being a 
factor to character as well. So both columns are affected by the transposition.
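
If a transposed object that stays numeric is needed, one workaround (a sketch
built on the toy DF above) is to transpose only a numeric matrix and carry the
labels over as column names:

num_mat <- as.matrix(DF["int"])               # numeric column(s) only -> numeric matrix
t_num <- t(num_mat)                           # this transpose stays numeric
colnames(t_num) <- as.character(DF$Letters)   # keep the labels as column names

> t_num
    a b c d
int 1 2 3 4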

Regards,

Marc Schwartz


> On Nov 29, 2016, at 1:33 PM, Bert Gunter  wrote:
> 
> No, no. It *is* for transposing. But it is *what* you are transposing
> -- a data frame -- that may lead to the problems. You will have to
> read what I referred you to and perhaps spend time with an R tutorial
> or two (there are many good ones on the web) if your R learning is not
> yet sufficient to understand what they say.
> 
> -- Bert
> 
> 
> 
> 
> Bert Gunter
> 
> "The trouble with having an open mind is that people keep coming along
> and sticking things into it."
> -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
> 
> 
> On Tue, Nov 29, 2016 at 11:01 AM, Elham -  wrote:
>> excuse me I did not understand,you mean this function is not for
>> transposing? what function do you suggest?
>> 
>> 
>> On Tuesday, November 29, 2016 10:24 PM, Bert Gunter 
>> wrote:
>> 
>> 
>> It is probably worth mentioning that this (i.e. transposing a data
>> frame) can be a potentially disastrous thing to do in R, though the
>> explanation is probably more than you want to know at this point (see
>> ?t  and follow the 'as.matrix' link for details).  But if you start
>> getting weird results and error/warning messages when working with
>> your transposed data, at least you'll know why.
>> 
>> Cheers,
>> Bert
>> 
>> 
>> Bert Gunter
>> 
>> "The trouble with having an open mind is that people keep coming along
>> and sticking things into it."
>> -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
>> 
>> 
>> On Tue, Nov 29, 2016 at 10:37 AM, Elham - via R-help
>>  wrote:
>>> thank you all,it worked
>>> 
>>>   On Tuesday, November 29, 2016 9:49 PM, "Dalthorp, Daniel"
>>>  wrote:
>>> 
>>> 
>>> Try David's suggestion to spell the argument "stringsAsFactors"
>>> correctly. Then:
>>> 
>>> data <- read.table("your_file_location", sep ="\t", comment.char = "",
>>> stringsAsFactors = F, header = T)
>>> transpose_data <- t(data)
>>> 
>>> -Dan
>>> On Tue, Nov 29, 2016 at 9:56 AM, Elham - via R-help 
>>> wrote:
>>> 
>>> yes you have right about excel.by R,what should I do for transposing row
>>> and column?
>>> 
>>>   On Tuesday, November 29, 2016 9:13 PM, David Winsemius
>>>  wrote:
>>> 
>>> 
>>> 
 On Nov 29, 2016, at 9:22 AM, Elham - via R-help 
 wrote:
 
 Hi,
 
 I am trying to transpose large datasets inexcel (44 columns and 57774
 rows) but it keeps giving me the message we can'tpaste because copy area 
 and
 paste area aren't the same size. Is there a way totranspose all the data at
 one time instead of piece by piece? One dataset has agreat amount of rows
 and columns.
 
 I tried this R function to transpose the datamatrix:
 
 data <- read.table("your_file_ location", sep ="\t", comment.char = "",
 stringAsFactors = F, header = T)
 
 
 
 transpose_data <- t(data)
 
 But I received tis error:
 
 unused argument (stringAsFactors = F)
 
>>> 
>>> You misspelled that argument's name. And do learn to use FALSE and TRUE.
>>> 
 
 Is there another way (I prefer a way with Excel)?
>>> 
>>> This is not a help list for Excel.
>>> 
>>> 
>>> --
>>> 
>>> David Winsemius
>>> Alameda, CA, USA
>>> 
>>> 
>>> 
>>>   [[alternative HTML version deleted]]
>>> 
>>> __ 
>>> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
>>> https://stat.ethz.ch/mailman/ listinfo/r-help
>>> PLEASE do read the posting guide 

Re: [R] transpose rows and columns for large data

2016-11-29 Thread Bert Gunter
No, no. It *is* for transposing. But it is *what* you are transposing
-- a data frame -- that may lead to the problems. You will have to
read what I referred you to and perhaps spend time with an R tutorial
or two (there are many good ones on the web) if your R learning is not
yet sufficient to understand what they say.

-- Bert




Bert Gunter

"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )


On Tue, Nov 29, 2016 at 11:01 AM, Elham -  wrote:
> excuse me I did not understand,you mean this function is not for
> transposing? what function do you suggest?
>
>
> On Tuesday, November 29, 2016 10:24 PM, Bert Gunter 
> wrote:
>
>
> It is probably worth mentioning that this (i.e. transposing a data
> frame) can be a potentially disastrous thing to do in R, though the
> explanation is probably more than you want to know at this point (see
> ?t  and follow the 'as.matrix' link for details).  But if you start
> getting weird results and error/warning messages when working with
> your transposed data, at least you'll know why.
>
> Cheers,
> Bert
>
>
> Bert Gunter
>
> "The trouble with having an open mind is that people keep coming along
> and sticking things into it."
> -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
>
>
> On Tue, Nov 29, 2016 at 10:37 AM, Elham - via R-help
>  wrote:
>> thank you all,it worked
>>
>>On Tuesday, November 29, 2016 9:49 PM, "Dalthorp, Daniel"
>>  wrote:
>>
>>
>>  Try David's suggestion to spell the argument "stringsAsFactors"
>> correctly. Then:
>>
>> data <- read.table("your_file_location", sep ="\t", comment.char = "",
>> stringsAsFactors = F, header = T)
>> transpose_data <- t(data)
>>
>> -Dan
>> On Tue, Nov 29, 2016 at 9:56 AM, Elham - via R-help 
>> wrote:
>>
>> yes you have right about excel.by R,what should I do for transposing row
>> and column?
>>
>>On Tuesday, November 29, 2016 9:13 PM, David Winsemius
>>  wrote:
>>
>>
>>
>>> On Nov 29, 2016, at 9:22 AM, Elham - via R-help 
>>> wrote:
>>>
>>> Hi,
>>>
>>> I am trying to transpose large datasets inexcel (44 columns and 57774
>>> rows) but it keeps giving me the message we can'tpaste because copy area and
>>> paste area aren't the same size. Is there a way totranspose all the data at
>>> one time instead of piece by piece? One dataset has agreat amount of rows
>>> and columns.
>>>
>>> I tried this R function to transpose the datamatrix:
>>>
>>> data <- read.table("your_file_ location", sep ="\t", comment.char = "",
>>> stringAsFactors = F, header = T)
>>>
>>>
>>>
>>> transpose_data <- t(data)
>>>
>>> But I received tis error:
>>>
>>> unused argument (stringAsFactors = F)
>>>
>>
>> You misspelled that argument's name. And do learn to use FALSE and TRUE.
>>
>>>
>>> Is there another way (I prefer a way with Excel)?
>>
>> This is not a help list for Excel.
>>
>>
>> --
>>
>> David Winsemius
>> Alameda, CA, USA
>>
>>
>>
>>[[alternative HTML version deleted]]
>>
>> __ 
>> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
>> https://stat.ethz.ch/mailman/ listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/
>> posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>>
>>
>> --
>> Dan Dalthorp, PhDUSGS Forest and Rangeland Ecosystem Science Center
>> Forest Sciences Lab, Rm 189
>> 3200 SW Jefferson Way
>> Corvallis, OR 97331
>> ph: 541-750-0953
>
>> ddalth...@usgs.gov
>>
>>
>>
>>
>>[[alternative HTML version deleted]]
>>
>> __
>> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unexpected interference between dplyr and plm

2016-11-29 Thread Sarah Goslee
On Tue, Nov 29, 2016 at 12:39 PM, David Winsemius
 wrote:
>
>
> The other option would be to load dplyr first (which would give the warning 
> that stats::lag was masked) and then later load plm (which should give a 
> further warning that dplyr::lag is masked). Then the plm::lag function will 
> be found first.

There isn't a plm::lag function; the desired function is stats::lag.

It matters whether dplyr is loaded because that masks stats::lag().

It only matters whether plm is loaded because that package provides
the function within which the original querent wanted to use lag().

Sarah




-- 
Sarah Goslee
http://www.functionaldiversity.org

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unexpected interference between dplyr and plm

2016-11-29 Thread Hadley Wickham
On Tue, Nov 29, 2016 at 11:52 AM, William Dunlap  wrote:
>> The other option would be to load dplyr first (which would give the warning
>> that stats::lag was masked) and then later load plm (which should give a
>> further warning that dplyr::lag is masked). Then the plm::lag function will
>> be found first.
>
> Another option is to write the package maintainers and complain
> that masking core functions is painful for users.

Don't worry; many people have done that.

Hadley

-- 
http://hadley.nz

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] transpose rows and columns for large data

2016-11-29 Thread Bert Gunter
It is probably worth mentioning that this (i.e. transposing a data
frame) can be a potentially disastrous thing to do in R, though the
explanation is probably more than you want to know at this point (see
?t  and follow the 'as.matrix' link for details).  But if you start
getting weird results and error/warning messages when working with
your transposed data, at least you'll know why.

Cheers,
Bert


Bert Gunter

"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )


On Tue, Nov 29, 2016 at 10:37 AM, Elham - via R-help
 wrote:
> thank you all,it worked
>
> On Tuesday, November 29, 2016 9:49 PM, "Dalthorp, Daniel" 
>  wrote:
>
>
>  Try David's suggestion to spell the argument "stringsAsFactors" correctly. 
> Then:
>
> data <- read.table("your_file_location", sep ="\t", comment.char = "", 
> stringsAsFactors = F, header = T)
> transpose_data <- t(data)
>
> -Dan
> On Tue, Nov 29, 2016 at 9:56 AM, Elham - via R-help  
> wrote:
>
> yes you have right about excel.by R,what should I do for transposing row and 
> column?
>
> On Tuesday, November 29, 2016 9:13 PM, David Winsemius 
>  wrote:
>
>
>
>> On Nov 29, 2016, at 9:22 AM, Elham - via R-help  wrote:
>>
>> Hi,
>>
>> I am trying to transpose large datasets inexcel (44 columns and 57774 rows) 
>> but it keeps giving me the message we can'tpaste because copy area and paste 
>> area aren't the same size. Is there a way totranspose all the data at one 
>> time instead of piece by piece? One dataset has agreat amount of rows and 
>> columns.
>>
>> I tried this R function to transpose the datamatrix:
>>
>> data <- read.table("your_file_ location", sep ="\t", comment.char = "", 
>> stringAsFactors = F, header = T)
>>
>>
>>
>> transpose_data <- t(data)
>>
>> But I received tis error:
>>
>> unused argument (stringAsFactors = F)
>>
>
> You misspelled that argument's name. And do learn to use FALSE and TRUE.
>
>>
>> Is there another way (I prefer a way with Excel)?
>
> This is not a help list for Excel.
>
>
> --
>
> David Winsemius
> Alameda, CA, USA
>
>
>
> [[alternative HTML version deleted]]
>
> __ 
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/ listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/ posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
>
> --
> Dan Dalthorp, PhDUSGS Forest and Rangeland Ecosystem Science Center
> Forest Sciences Lab, Rm 189
> 3200 SW Jefferson Way
> Corvallis, OR 97331
> ph: 541-750-0953
> ddalth...@usgs.gov
>
>
>
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] transpose rows and columns for large data

2016-11-29 Thread Elham - via R-help
Thank you all, it worked.

On Tuesday, November 29, 2016 9:49 PM, "Dalthorp, Daniel" 
 wrote:
 

 Try David's suggestion to spell the argument "stringsAsFactors" correctly. 
Then:

data <- read.table("your_file_location", sep ="\t", comment.char = "", 
stringsAsFactors = F, header = T)
transpose_data <- t(data)

-Dan
On Tue, Nov 29, 2016 at 9:56 AM, Elham - via R-help  
wrote:

Yes, you are right about Excel. In R, what should I do to transpose rows and
columns?

    On Tuesday, November 29, 2016 9:13 PM, David Winsemius 
 wrote:



> On Nov 29, 2016, at 9:22 AM, Elham - via R-help  wrote:
>
> Hi,
>
> I am trying to transpose large datasets inexcel (44 columns and 57774 rows) 
> but it keeps giving me the message we can'tpaste because copy area and paste 
> area aren't the same size. Is there a way totranspose all the data at one 
> time instead of piece by piece? One dataset has agreat amount of rows and 
> columns.
>
> I tried this R function to transpose the datamatrix:
>
> data <- read.table("your_file_ location", sep ="\t", comment.char = "", 
> stringAsFactors = F, header = T)
>
>
> 
> transpose_data <- t(data)
>
> But I received tis error:
>
> unused argument (stringAsFactors = F)
>

You misspelled that argument's name. And do learn to use FALSE and TRUE.

> 
> Is there another way (I prefer a way with Excel)?

This is not a help list for Excel.


--

David Winsemius
Alameda, CA, USA



        [[alternative HTML version deleted]]

__ 
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/ listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/ posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



-- 
Dan Dalthorp, PhD
USGS Forest and Rangeland Ecosystem Science Center
Forest Sciences Lab, Rm 189
3200 SW Jefferson Way 
Corvallis, OR 97331 
ph: 541-750-0953
ddalth...@usgs.gov



   
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] transpose rows and columns for large data

2016-11-29 Thread Dalthorp, Daniel
Try David's suggestion to spell the argument "stringsAsFactors" correctly.
Then:

data <- read.table("your_file_location", sep ="\t", comment.char = "",
stringsAsFactors = F, header = T)
transpose_data <- t(data)
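
If the transposed result then needs to go back into a spreadsheet, one more
step could be (the file name is just a placeholder):

write.csv(transpose_data, "transposed.csv")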

-Dan

On Tue, Nov 29, 2016 at 9:56 AM, Elham - via R-help 
wrote:

> yes you have right about excel.by R,what should I do for transposing row
> and column?
>
> On Tuesday, November 29, 2016 9:13 PM, David Winsemius <
> dwinsem...@comcast.net> wrote:
>
>
>
> > On Nov 29, 2016, at 9:22 AM, Elham - via R-help 
> wrote:
> >
> > Hi,
> >
> > I am trying to transpose large datasets inexcel (44 columns and 57774
> rows) but it keeps giving me the message we can'tpaste because copy area
> and paste area aren't the same size. Is there a way totranspose all the
> data at one time instead of piece by piece? One dataset has agreat amount
> of rows and columns.
> >
> > I tried this R function to transpose the datamatrix:
> >
> > data <- read.table("your_file_location", sep ="\t", comment.char = "",
> stringAsFactors = F, header = T)
> >
> >
> >
> > transpose_data <- t(data)
> >
> > But I received tis error:
> >
> > unused argument (stringAsFactors = F)
> >
>
> You misspelled that argument's name. And do learn to use FALSE and TRUE.
>
> >
> > Is there another way (I prefer a way with Excel)?
>
> This is not a help list for Excel.
>
>
> --
>
> David Winsemius
> Alameda, CA, USA
>
>
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/
> posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.




-- 
Dan Dalthorp, PhD
USGS Forest and Rangeland Ecosystem Science Center
Forest Sciences Lab, Rm 189
3200 SW Jefferson Way
Corvallis, OR 97331
ph: 541-750-0953
ddalth...@usgs.gov

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] transpose rows and columns for large data

2016-11-29 Thread Bert Gunter
It's 'stringsAsFactors' = FALSE (without my added quotes), with an 's'
at the end of 'strings'.

-- Bert
Bert Gunter

"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )


On Tue, Nov 29, 2016 at 9:22 AM, Elham - via R-help
 wrote:
> Hi,
>
> I am trying to transpose large datasets inexcel (44 columns and 57774 rows) 
> but it keeps giving me the message we can'tpaste because copy area and paste 
> area aren't the same size. Is there a way totranspose all the data at one 
> time instead of piece by piece? One dataset has agreat amount of rows and 
> columns.
>
> I tried this R function to transpose the datamatrix:
>
> data <- read.table("your_file_location", sep ="\t", comment.char = "", 
> stringAsFactors = F, header = T)
>
>
>
> transpose_data <- t(data)
>
> But I received tis error:
>
> unused argument (stringAsFactors = F)
>
>
>
>
>
> Is there another way (I prefer a way with Excel)?
>
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] transpose rows and columns for large data

2016-11-29 Thread Elham - via R-help
Yes, you are right about Excel. In R, what should I do to transpose rows and
columns?

On Tuesday, November 29, 2016 9:13 PM, David Winsemius 
 wrote:
 

 
> On Nov 29, 2016, at 9:22 AM, Elham - via R-help  wrote:
> 
> Hi,
> 
> I am trying to transpose large datasets inexcel (44 columns and 57774 rows) 
> but it keeps giving me the message we can'tpaste because copy area and paste 
> area aren't the same size. Is there a way totranspose all the data at one 
> time instead of piece by piece? One dataset has agreat amount of rows and 
> columns. 
> 
> I tried this R function to transpose the datamatrix:
> 
> data <- read.table("your_file_location", sep ="\t", comment.char = "", 
> stringAsFactors = F, header = T)
> 
> 
>  
> transpose_data <- t(data)
> 
> But I received tis error:
> 
> unused argument (stringAsFactors = F)
> 

You misspelled that argument's name. And do learn to use FALSE and TRUE.

>  
> Is there another way (I prefer a way with Excel)?

This is not a help list for Excel.


-- 

David Winsemius
Alameda, CA, USA


   
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] Unexpected interference between dplyr and plm

2016-11-29 Thread William Dunlap via R-help
> The other option would be to load dplyr first (which would give the warning
> that stats::lag was masked) and then later load plm (which should give a
> further warning that dplyr::lag is masked). Then the plm::lag function
> will be found first.

Another option is to write the package maintainers and complain
that masking core functions is painful for users.


Bill Dunlap
TIBCO Software
wdunlap tibco.com

On Tue, Nov 29, 2016 at 9:39 AM, David Winsemius 
wrote:

>
> > On Nov 29, 2016, at 6:52 AM, Sarah Goslee 
> wrote:
> >
> > Hi,
> >
> > It shouldn't be entirely unexpected: when I load dplyr, I get a series
> > of messages telling me that certain functions are masked.
> >
> >
> > The following object is masked from ‘package:plm’:
> >
> >between
> >
> > The following objects are masked from ‘package:stats’:
> >
> >filter, lag
> >
> > The following objects are masked from ‘package:base’:
> >
> >intersect, setdiff, setequal, union
> >
> >
> > You can see the search path that R uses when looking for a function or
> > other object here:
> >
> > In your example, it should look like this:
> >
> >> search()
> > [1] ".GlobalEnv""package:dplyr" "package:plm"
> > "package:Formula"
> > [5] "package:stats" "package:graphics"  "package:grDevices"
> > "package:utils"
> > [9] "package:datasets"  "package:vimcom""package:setwidth"
> > "package:colorout"
> > [13] "package:methods"   "Autoloads" "package:base"
> >
> >
> > So R is searching the local environment, then dplyr, and then farther
> > down the list, stats, which is where the lag function comes from (see
> > above warning).
> >
> > Once you know where the desired function comes from you can specify
> > its namespace:
>
> The other option would be to load dplyr first (which would give the warning
> that stats::lag was masked) and then later load plm (which should give a
> further warning that dplyr::lag is masked). Then the plm::lag function will
> be found first.
>
> --
> David.
> >
> >
> > summary(plm(y~lagx, data = df, index = c("i", "t")))
> > summary(plm(y~stats::lag(x, 1), data = df, index = c("i", "t")))
> >
> > If you weren't paying attention to the warning messages at package
> > load, you can also use the getAnywhere function to find out:
> >
> >> getAnywhere(lag)
> > 2 differing objects matching ‘lag’ were found
> > in the following places
> >  package:dplyr
> >  package:stats
> >  namespace:dplyr
> >  namespace:stats
> >
> >
> > Sarah
> >
> >
> > On Tue, Nov 29, 2016 at 9:36 AM, Constantin Weiser 
> wrote:
> >> Hello,
> >>
> >> I'm struggling with an unexpected interference between the two packages
> >> dplyr and plm, or to be more concrete with the "lag(x, ...)" function of
> >> both packages.
> >>
> >> If dplyr is in the namespace the plm function uses no longer the
> appropriate
> >> lag()-function which accounts for the panel structure.
> >>
> >> The following code demonstrates the unexpected behaviour:
> >>
> >> ## starting from a new R-Session (plm and dplyr unloaded) ##
> >>
> >>  ## generate dataset
> >>  set.seed(4711)
> >>  df <- data.frame(
> >>  i = rep(1:10, each = 4),
> >>  t = rep(1:4, times = 10),
> >>  y = rnorm(40),
> >>  x = rnorm(40)
> >>  )
> >>  ## manually generated laged variable
> >>  df$lagx <- c(NA, df$x[-40])
> >>  df$lagx[df$t == 1] <- NA
> >>
> >>
> >> require(plm)
> >> summary(plm(y~lagx, data = df, index = c("i", "t")))
> >> summary(plm(y~lag(x, 1), data = df, index = c("i", "t")))
> >> # > this result is expected
> >>
> >> require(dplyr)
> >> summary(plm(y~lagx, data = df, index = c("i", "t")))
> >> summary(plm(y~lag(x, 1), data = df, index = c("i", "t")))
> >> # > this result is unexpected
> >>
> >> Is there a way to force R to use the "correct" lag-function? (or at the
> >> devel-level to harmonise both functions)
> >>
> >> Thank you very much in advance for your answer
> >>
> >> Yours
> >> Constantin
> >>
> >> --
> >> ^
> >
> > --
> > Sarah Goslee
> > http://www.functionaldiversity.org
> >
> > __
> > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/
> posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
>
> David Winsemius
> Alameda, CA, USA
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/
> posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting 

Re: [R] transpose rows and columns for large data

2016-11-29 Thread David Winsemius

> On Nov 29, 2016, at 9:22 AM, Elham - via R-help  wrote:
> 
> Hi,
> 
> I am trying to transpose large datasets inexcel (44 columns and 57774 rows) 
> but it keeps giving me the message we can'tpaste because copy area and paste 
> area aren't the same size. Is there a way totranspose all the data at one 
> time instead of piece by piece? One dataset has agreat amount of rows and 
> columns. 
> 
> I tried this R function to transpose the datamatrix:
> 
> data <- read.table("your_file_location", sep ="\t", comment.char = "", 
> stringAsFactors = F, header = T)
> 
> 
>  
> transpose_data <- t(data)
> 
> But I received tis error:
> 
> unused argument (stringAsFactors = F)
> 

You misspelled that argument's name. And do learn to use FALSE and TRUE.

>  
> Is there another way (I prefer a way with Excel)?

This is not a help list for Excel.


-- 

David Winsemius
Alameda, CA, USA

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unexpected interference between dplyr and plm

2016-11-29 Thread David Winsemius

> On Nov 29, 2016, at 6:52 AM, Sarah Goslee  wrote:
> 
> Hi,
> 
> It shouldn't be entirely unexpected: when I load dplyr, I get a series
> of messages telling me that certain functions are masked.
> 
> 
> The following object is masked from ‘package:plm’:
> 
>between
> 
> The following objects are masked from ‘package:stats’:
> 
>filter, lag
> 
> The following objects are masked from ‘package:base’:
> 
>intersect, setdiff, setequal, union
> 
> 
> You can see the search path that R uses when looking for a function or
> other object here:
> 
> In your example, it should look like this:
> 
>> search()
> [1] ".GlobalEnv""package:dplyr" "package:plm"
> "package:Formula"
> [5] "package:stats" "package:graphics"  "package:grDevices"
> "package:utils"
> [9] "package:datasets"  "package:vimcom""package:setwidth"
> "package:colorout"
> [13] "package:methods"   "Autoloads" "package:base"
> 
> 
> So R is searching the local environment, then dplyr, and then farther
> down the list, stats, which is where the lag function comes from (see
> above warning).
> 
> Once you know where the desired function comes from you can specify
> its namespace:

The other option would be to load dplyr first (which would give the warning that 
stats::lag was masked) and then later load plm (which should give a further 
warning that dplyr::lag is masked). Then the plm::lag function will be found 
first.

-- 
David.
> 
> 
> summary(plm(y~lagx, data = df, index = c("i", "t")))
> summary(plm(y~stats::lag(x, 1), data = df, index = c("i", "t")))
> 
> If you weren't paying attention to the warning messages at package
> load, you can also use the getAnywhere function to find out:
> 
>> getAnywhere(lag)
> 2 differing objects matching ‘lag’ were found
> in the following places
>  package:dplyr
>  package:stats
>  namespace:dplyr
>  namespace:stats
> 
> 
> Sarah
> 
> 
> On Tue, Nov 29, 2016 at 9:36 AM, Constantin Weiser  wrote:
>> Hello,
>> 
>> I'm struggling with an unexpected interference between the two packages
>> dplyr and plm, or to be more concrete with the "lag(x, ...)" function of
>> both packages.
>> 
>> If dplyr is in the namespace the plm function uses no longer the appropriate
>> lag()-function which accounts for the panel structure.
>> 
>> The following code demonstrates the unexpected behaviour:
>> 
>> ## starting from a new R-Session (plm and dplyr unloaded) ##
>> 
>>  ## generate dataset
>>  set.seed(4711)
>>  df <- data.frame(
>>  i = rep(1:10, each = 4),
>>  t = rep(1:4, times = 10),
>>  y = rnorm(40),
>>  x = rnorm(40)
>>  )
>>  ## manually generated laged variable
>>  df$lagx <- c(NA, df$x[-40])
>>  df$lagx[df$t == 1] <- NA
>> 
>> 
>> require(plm)
>> summary(plm(y~lagx, data = df, index = c("i", "t")))
>> summary(plm(y~lag(x, 1), data = df, index = c("i", "t")))
>> # > this result is expected
>> 
>> require(dplyr)
>> summary(plm(y~lagx, data = df, index = c("i", "t")))
>> summary(plm(y~lag(x, 1), data = df, index = c("i", "t")))
>> # > this result is unexpected
>> 
>> Is there a way to force R to use the "correct" lag-function? (or at the
>> devel-level to harmonise both functions)
>> 
>> Thank you very much in advance for your answer
>> 
>> Yours
>> Constantin
>> 
>> --
>> ^
> 
> -- 
> Sarah Goslee
> http://www.functionaldiversity.org
> 
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

David Winsemius
Alameda, CA, USA

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

[R] transpose rows and columns for large data

2016-11-29 Thread Elham - via R-help
Hi,

I am trying to transpose large datasets in Excel (44 columns and 57774 rows), but 
it keeps giving me the message that we can't paste because the copy area and paste 
area aren't the same size. Is there a way to transpose all the data at one time 
instead of piece by piece? One dataset has a great number of rows and columns. 

I tried this R function to transpose the data matrix:

data <- read.table("your_file_location", sep ="\t", comment.char = "", 
stringAsFactors = F, header = T)


 
transpose_data <- t(data)

But I received this error:

unused argument (stringAsFactors = F)


 

 
Is there another way (I prefer a way with Excel)?


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] Unexpected interference between dplyr and plm

2016-11-29 Thread Sarah Goslee
Hi,

It shouldn't be entirely unexpected: when I load dplyr, I get a series
of messages telling me that certain functions are masked.


The following object is masked from ‘package:plm’:

between

The following objects are masked from ‘package:stats’:

filter, lag

The following objects are masked from ‘package:base’:

intersect, setdiff, setequal, union


You can see the search path that R uses when looking for a function or
other object here:

In your example, it should look like this:

> search()
 [1] ".GlobalEnv""package:dplyr" "package:plm"
"package:Formula"
 [5] "package:stats" "package:graphics"  "package:grDevices"
"package:utils"
 [9] "package:datasets"  "package:vimcom""package:setwidth"
"package:colorout"
[13] "package:methods"   "Autoloads" "package:base"


So R is searching the local environment, then dplyr, and then farther
down the list, stats, which is where the lag function comes from (see
above warning).

Once you know where the desired function comes from you can specify
its namespace:


summary(plm(y~lagx, data = df, index = c("i", "t")))
summary(plm(y~stats::lag(x, 1), data = df, index = c("i", "t")))

If you weren't paying attention to the warning messages at package
load, you can also use the getAnywhere function to find out:

> getAnywhere(lag)
2 differing objects matching ‘lag’ were found
in the following places
  package:dplyr
  package:stats
  namespace:dplyr
  namespace:stats


Sarah


On Tue, Nov 29, 2016 at 9:36 AM, Constantin Weiser  wrote:
> Hello,
>
> I'm struggling with an unexpected interference between the two packages
> dplyr and plm, or to be more concrete with the "lag(x, ...)" function of
> both packages.
>
> If dplyr is in the namespace the plm function uses no longer the appropriate
> lag()-function which accounts for the panel structure.
>
> The following code demonstrates the unexpected behaviour:
>
> ## starting from a new R-Session (plm and dplyr unloaded) ##
>
>   ## generate dataset
>   set.seed(4711)
>   df <- data.frame(
>   i = rep(1:10, each = 4),
>   t = rep(1:4, times = 10),
>   y = rnorm(40),
>   x = rnorm(40)
>   )
>   ## manually generated laged variable
>   df$lagx <- c(NA, df$x[-40])
>   df$lagx[df$t == 1] <- NA
>
>
> require(plm)
> summary(plm(y~lagx, data = df, index = c("i", "t")))
> summary(plm(y~lag(x, 1), data = df, index = c("i", "t")))
> # > this result is expected
>
> require(dplyr)
> summary(plm(y~lagx, data = df, index = c("i", "t")))
> summary(plm(y~lag(x, 1), data = df, index = c("i", "t")))
> # > this result is unexpected
>
> Is there a way to force R to use the "correct" lag-function? (or at the
> devel-level to harmonise both functions)
>
> Thank you very much in advance for your answer
>
> Yours
> Constantin
>
> --
> ^

-- 
Sarah Goslee
http://www.functionaldiversity.org

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

[R] Unexpected interference between dplyr and plm

2016-11-29 Thread Constantin Weiser

Hello,

I'm struggling with an unexpected interference between the two packages 
dplyr and plm, or to be more concrete with the "lag(x, ...)" function of 
both packages.


If dplyr is attached, the plm function no longer uses the 
appropriate lag() function, the one that accounts for the panel structure.


The following code demonstrates the unexpected behaviour:

## starting from a new R-Session (plm and dplyr unloaded) ##

  ## generate dataset
  set.seed(4711)
  df <- data.frame(
  i = rep(1:10, each = 4),
  t = rep(1:4, times = 10),
  y = rnorm(40),
  x = rnorm(40)
  )
  ## manually generated lagged variable
  df$lagx <- c(NA, df$x[-40])
  df$lagx[df$t == 1] <- NA


require(plm)
summary(plm(y~lagx, data = df, index = c("i", "t")))
summary(plm(y~lag(x, 1), data = df, index = c("i", "t")))
# > this result is expected

require(dplyr)
summary(plm(y~lagx, data = df, index = c("i", "t")))
summary(plm(y~lag(x, 1), data = df, index = c("i", "t")))
# > this result is unexpected

Is there a way to force R to use the "correct" lag-function? (or at the 
devel-level to harmonise both functions)


Thank you very much in advance for your answer

Yours
Constantin

--
Weiser, Dr. Constantin (weis...@hhu.de)
Chair of Statistics and Econometrics
Heinrich Heine-University of Düsseldorf
Universitätsstraße 1, 40225 Düsseldorf, Germany
Oeconomicum (Building 24.31), Room 01.22
Tel: 0049 211 81-15307

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] on ``unfolding'' a json into data frame columns

2016-11-29 Thread Hadley Wickham
Two quick hints:

* use simplifyDataFrame = FALSE in fromJSON()

* read 
https://jennybc.github.io/purrr-tutorial/ls02_map-extraction-advanced.html
(and https://jennybc.github.io/purrr-tutorial/)
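
A rough sketch of how those two hints might be combined (untested; "response"
is the JSON column from your function call, and CreateUTC is used only as an
example field name):

library(jsonlite)
library(purrr)

## parse every JSON string without simplification...
parsed <- map(df$response, fromJSON, simplifyDataFrame = FALSE)
## ...then pull individual fields out into ordinary columns
df$CreateUTC <- map_chr(parsed, "CreateUTC", .default = NA_character_)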

Hadley

On Tue, Nov 29, 2016 at 8:06 AM, Daniel Bastos  wrote:
> Greetings!
>
> In an SQL table, I have a column that contains a JSON.  I'd like easy
> access to all (in an ideal world) of these JSON fields.  I started out
> trying to get all fields from the JSON and so I wrote this function.
>
> unfold.json <- function (df, column)
> {
> library(jsonlite)
> ret <- data.frame()
>
> for (i in 1:nrow(df)) {
> js <- fromJSON(df[i, ][[column]])
> ret <- rbind(ret, cbind(df[i, ], js))
> }
>
> ret
> }
>
> It takes a data frame and a column-string where the JSON is to be
> found.  It produces a new RET data frame with all the rows of DF but
> with new columns --- extracted from every field in the JSON.
>
> (The performance is horrible.)
>
> fromJSON sometimes produces a list that sometimes contains a data frame.
> As a result, I end up getting a RET data frame with duplicated rows.
> Here's what happens.
>
>> nrow(df)
> [1] 1
>
>> nrow(unfold.json(df, "response"))
> [1] 3
> Warning messages:
> 1: In data.frame(CreateUTC = "2016-11-29 02:00:43", Payload = list( :
>   row names were found from a short variable and have been discarded
> 2: In data.frame(..., check.names = FALSE) :
>   row names were found from a short variable and have been discarded
>>
>
> I expected a data frame with 1 row.  The reason 3 rows is produced is
> because in the JSON there's an array with 3 rows.
>
>> fromJSON(df$response)$RawPayload
> [1] 200   1 128
>
> I have also cases where fromJSON(df$response)$Payload$Fields is a data
> frame containing various rows.  So unfold.json produces a data frame
> with these various rows.
>
> So I gave up on this general approach.
>
> (*) My humble approach
>
> For the moment I'm not interested in RawPayload nor Payload$Fields, so I
> nullified them in this new approach.  To improve performance, I guessed
> perhaps merge() would help and I think it did, but this was not at all a
> decision thought out.
>
> unfold.json.fast <- function (df, column)
> {
> library(jsonlite)
> ret <- data.frame()
> if (nrow(df) > 0) {
> for (i in 1:nrow(df)) {
> ls <- fromJSON(df[i, ][[column]])
> ls$RawPayload <- NULL
> ls$Payload$Fields <- NULL
> js <- data.frame(ls)
> ret <- rbind(ret, merge(df[i, ], js))
> }
> }
>
> ret
> }
>
> I'm looking for advice.  How would you approach this problem?
>
> Thank you!
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



-- 
http://hadley.nz



[R] on ``unfolding'' a json into data frame columns

2016-11-29 Thread Daniel Bastos
Greetings!

In an SQL table, I have a column that contains a JSON document.  I'd like
easy access to (ideally all of) these JSON fields.  I started out trying
to get all fields from the JSON, so I wrote this function.

unfold.json <- function (df, column)
{
library(jsonlite)
ret <- data.frame()

for (i in 1:nrow(df)) {
js <- fromJSON(df[i, ][[column]])
ret <- rbind(ret, cbind(df[i, ], js))
}

ret
}

It takes a data frame and the name (as a string) of the column where the
JSON is to be found.  It produces a new RET data frame with all the rows of
DF plus new columns, one extracted from every field in the JSON.

(The performance is horrible.)

fromJSON sometimes produces a list that sometimes contains a data frame.
As a result, I end up getting a RET data frame with duplicated rows.
Here's what happens.

> nrow(df)
[1] 1

> nrow(unfold.json(df, "response"))
[1] 3
Warning messages:
1: In data.frame(CreateUTC = "2016-11-29 02:00:43", Payload = list( :
  row names were found from a short variable and have been discarded
2: In data.frame(..., check.names = FALSE) :
  row names were found from a short variable and have been discarded
> 

I expected a data frame with 1 row.  The reason 3 rows are produced is
that the JSON contains an array with 3 elements.

> fromJSON(df$response)$RawPayload
[1] 200   1 128

I also have cases where fromJSON(df$response)$Payload$Fields is a data
frame containing several rows, so unfold.json produces a data frame
with those extra rows.

So I gave up on this general approach.

(*) My humble approach

For the moment I'm not interested in RawPayload or Payload$Fields, so I
set them to NULL in this new approach.  To improve performance, I guessed
that merge() might help, and I think it did, but that was not a carefully
thought-out decision.

unfold.json.fast <- function (df, column)
{
library(jsonlite)
ret <- data.frame()
if (nrow(df) > 0) {
for (i in 1:nrow(df)) {
ls <- fromJSON(df[i, ][[column]])
ls$RawPayload <- NULL
ls$Payload$Fields <- NULL
js <- data.frame(ls)
ret <- rbind(ret, merge(df[i, ], js))
}
}

ret
}

I'm looking for advice.  How would you approach this problem?  

Thank you!
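
As a side note on the performance, separate from the JSON handling: growing
ret with rbind() inside the loop copies the whole accumulated data frame on
every iteration, so the cost grows quadratically with the number of rows. A
minimal sketch of the usual fix (collect the pieces in a list and bind once
at the end); as above, it assumes the parsed pieces end up with matching
columns.

unfold.json2 <- function (df, column)
{
    library(jsonlite)
    pieces <- lapply(seq_len(nrow(df)), function(i) {
        js <- fromJSON(df[i, ][[column]])
        js$RawPayload <- NULL        # drop the pieces that expand into rows
        js$Payload$Fields <- NULL
        cbind(df[i, ], data.frame(js))
    })
    do.call(rbind, pieces)           # one bind instead of nrow(df) binds
}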



Re: [R] nls for R version 3.3.2

2016-11-29 Thread J C Nash
You may also want to use tools that are more robust.

Package nlmrt uses an analytic Jacobian where possible and a Marquardt solver.

Package minpack.lm also uses a Marquardt solver, but relies on the
forward-difference derivatives of nls() for its Jacobian.

Alpha-level work on package nlsr, at https://r-forge.r-project.org/R/?group_id=395,
is aimed at cleaning up some of the work in nlmrt and extending its capabilities.

JN
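
As a minimal, self-contained illustration of one of these alternatives
(data simulated here purely for illustration; it assumes minpack.lm is
installed from CRAN):

# install.packages("minpack.lm")
library(minpack.lm)

set.seed(1)
d <- data.frame(x = seq(1, 50, by = 2))
d$y <- 10 * d$x / (5 + d$x) + rnorm(nrow(d), sd = 0.2)

## nlsLM() has an nls()-like interface but uses a Levenberg-Marquardt
## solver, which is usually much less sensitive to poor starting values.
fit <- nlsLM(y ~ Vm * x / (K + x), data = d, start = list(Vm = 1, K = 1))
summary(fit)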


On 16-11-29 03:30 AM, Troels Ring wrote:
> Dear friends - updated to R 3.3.2 - tried to install nls - got this sad 
> response
> 
> package ‘nls’ is not available (as a binary package for R version 3.3.2)
> 
> I'm on windows 7
> 
> Did I do something wrong? Will a binary appear eventually? Would I have to 
> make it myself?
> 
> Best wishes
> 
> Troels Ring
> 
> Aalborg, Denmark
> 
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


Re: [R] independent censoring

2016-11-29 Thread Therneau, Terry M., Ph.D.



On 11/29/2016 05:00 AM, r-help-requ...@r-project.org wrote:

Independent censoring is one of the fundamental assumptions in survival
analysis. However, I cannot find any test for it, or any paper which discusses
how realistic that assumption is.


I would be grateful if anybody could point me to some useful references. I have 
found the following paper as an interesting reference but it is not freely 
available.


Leung, Kwan-Moon, Robert M. Elashoff, and Abdelmonem A. Afifi. "Censoring issues in
survival analysis." Annual Review of Public Health 18.1 (1997): 83-104.




This is because there is no test for independent censoring.  Say I am following a cohort
of older gentlemen (65 years old) who were diagnosed with condition "x", and after 8 years
there are a dozen who no longer answer my letters.  Why not?

 a. Perhaps because they are in a nursing home, with dementia.
 b. Perhaps because they have moved to another city to interact with and be near to
their grandchildren.


In case a, those lost to follow-up are much sicker than the average subject, and in case b
they are most likely among the most healthy and active of the group.  In case a the KM will
over-estimate survival, and in case b it will underestimate it.


The main point is that there is absolutely no way to know, other than actually tracking
the subjects down.  Any study which has a substantial fraction with incomplete follow-up
is making a guess.  The more accurate phrase would be "a blind hope for independent
censoring" rather than an "assumption".  There are cases where simple reasoning or
experience tells me that this hope is futile, but mostly we just hope.  The alternative
is proactive follow-up, i.e., devoting enough staff and resources to actively contact
all of the study subjects on a regular schedule.  Even then you will lose a few.  (In one
long-term cancer follow-up study several years ago, there was a new Mrs Smith who refused
to acknowledge the existence of the prior wife, even to forward letters.)


Terry Therneau



Re: [R] errors when installing packages

2016-11-29 Thread peter dalgaard
Presumably a problem at your end. Suspicion points to permission settings on 
the target directory, or virus checkers "helpfully" moving recently created 
files to a safe place for scrutiny.

(Do you _really_ get errors for "survival" when installing "Hmisc"? It could 
happen via a dependency, I suppose. If it is just the survival folder that has 
incorrect owner/permission, you could try removing the folder and reinstalling 
survival.) 

-pd
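
A minimal sketch of that last suggestion, assuming the personal library
path shown in the error below and that no running R session has survival
loaded:

lib <- "C:/Users/Chris/Documents/R/win-library/3.3"
unlink(file.path(lib, "survival"), recursive = TRUE)  # remove the broken folder
install.packages("survival", lib = lib)               # reinstall into the same library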

On 29 Nov 2016, at 00:36 , Chris  wrote:

> I'm using R 3.3.1 
> when installing/ updating a library module, for example "Hmisc" I get an 
> error message about "unable to move..."
> 
> cutting/pasting
> package ‘survival’ successfully unpacked and MD5 sums checked
> Warning: unable to move temporary installation
> ‘C:\Users\Chris\Documents\R\win-library\3.3\file4681d2a5a2a\survival’ to
> ‘C:\Users\Chris\Documents\R\win-library\3.3\survival’
> 
>  Chris Barker, Ph.D.
> Adjunct Associate Professor of Biostatistics - UIC-SPH
> and
> 
> skype: barkerstats
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

-- 
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Office: A 4.23
Email: pd@cbs.dk  Priv: pda...@gmail.com



Re: [R] reshaping a large dataframe in R

2016-11-29 Thread PIKAL Petr
Hi

probably not at all simpler

> dat2.p <- split(t(dat), rep(1:(ncol(dat)/4), each = 4))
> dat3.p <- as.data.frame(do.call(rbind, lapply(dat2.p, function(x) t(matrix(x,
+     4, nrow(dat))))))
>
> all.equal(dat3.p, dat3)
[1] TRUE

Cheers
Petr
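
For completeness, a one-liner sketch that mimics numpy's row-major
reshape(-1, 4), which is what the original question asked about. Note that
its row ordering (all 1001 four-column chunks of original row 1, then those
of row 2, and so on) differs from dat3/dat3.p above, which stack chunk 1 of
every row first.

dat4 <- as.data.frame(matrix(t(as.matrix(dat)), ncol = 4, byrow = TRUE))
dim(dat4)      # 500500 rows, 4 columns
head(dat4, 3)  # with the toy data: (1,2,3,4), (5,6,7,8), (9,10,11,12)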


> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of David L
> Carlson
> Sent: Monday, November 28, 2016 8:27 PM
> To: jean-philippe ; r-help@r-project.org
> Subject: Re: [R] reshaping a large dataframe in R
>
> There may be a simpler way of getting there, but this works:
>
> > rows <- 500
> > cols <- 4004
> > dat <- as.data.frame(t(replicate(rows, 1:cols)))
> > dat[c(1:3, 500), c(1:4, 4001:4004)]
>     V1 V2 V3 V4 V4001 V4002 V4003 V4004
> 1    1  2  3  4  4001  4002  4003  4004
> 2    1  2  3  4  4001  4002  4003  4004
> 3    1  2  3  4  4001  4002  4003  4004
> 500  1  2  3  4  4001  4002  4003  4004
> > dat2 <- array(as.matrix(dat), dim=c(rows, 4, cols/4))
> > dat3 <- as.data.frame(matrix(aperm(dat2, c(1, 3, 2)), rows*cols/4, 4))
> > head(dat3)
>   V1 V2 V3 V4
> 1  1  2  3  4
> 2  1  2  3  4
> 3  1  2  3  4
> 4  1  2  3  4
> 5  1  2  3  4
> 6  1  2  3  4
> > tail(dat3)
>          V1   V2   V3   V4
> 500495 4001 4002 4003 4004
> 500496 4001 4002 4003 4004
> 500497 4001 4002 4003 4004
> 500498 4001 4002 4003 4004
> 500499 4001 4002 4003 4004
> 500500 4001 4002 4003 4004
>
> -
> David L Carlson
> Department of Anthropology
> Texas A&M University
> College Station, TX 77840-4352
>
> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of jean-
> philippe
> Sent: Monday, November 28, 2016 3:07 AM
> To: r-help@r-project.org
> Subject: [R] reshaping a large dataframe in R
>
> dear all,
>
> I have a dataframe of 500 rows and 4004 columns that I would like to reshape
> to a dataframe of 500500 rows and 4 columns. That is from this
> dataframe:
>
> V1 V2 V3 V4 ... V4001 V4002 V4003 V4004
>
> 1 2 3 4 ... 4001 4002 4003 4004
>
> 1 2 3 4 ... 4001 4002 4003 4004
>
> 1 2 3 4 ... 4001 4002 4003 4004
>
> ... ... ... ... ... ... ... ... ... ... ... ... ...
>
> 1 2 3 4 ... 4001 4002 4003 4004
>
> I would like :
>
>
> V1 V2 V3 V4
>
> 1 2 3 4
>
> 1 2 3 4
>
> 1 2 3 4
>
> 1 2 3 4
>
> ... ... ... ... ... ... ... ... ...
>
> 4001 4002 4003 4004
>
> 4001 4002 4003 4004
>
> 4001 4002 4003 4004
>
> ... ... ... ... ...
>
> 4001 4002 4003 4004
>
> I already tried y=matrix(as.matrix(dataGaus[[1]]),500500,4)
> (where dataGaus is my dataframe), but it doesn't give the expected result. I
> also tried reshape, but I can't manage to make it reproduce the result
> (and I have been through a lot of posts on StackOverflow and on the net). In
> Python, we can do this with a simple command,
> numpy.array(dataGaus[[1]]).reshape(-1,4). For various reasons I am doing my
> analysis in R, and I would like to know if there is a function which does the
> same thing as numpy's reshape(-1,4)?
>
> Thanks in advance, best
>
>
> Jean-Philippe
>
> --
> Jean-Philippe Fontaine
> PhD Student in Astroparticle Physics,
> Gran Sasso Science Institute (GSSI),
> Viale Francesco Crispi 7,
> 67100 L'Aquila, Italy
> Mobile: +393487128593, +33615653774
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.



Re: [R] nls for R version 3.3.2

2016-11-29 Thread David Winsemius

> On Nov 29, 2016, at 12:30 AM, Troels Ring  wrote:
> 
> Dear friends - updated to R 3.3.2 - tried to install nls - got this sad 
> response
> 
> package ‘nls’ is not available (as a binary package for R version 3.3.2)
> 
> I'm on windows 7

I don't see an `nls` package on CRAN. Perhaps it has been replaced by nls2. On
the other hand, you may be looking for the function `nls` in the stats
package. (Hard to tell from the available information.)

packageDescription('nls2')

Package: nls2
Version: 0.2
Date: 2013-03-07
Title: Non-linear regression with brute force
Author: G. Grothendieck
Maintainer: G. Grothendieck 
Description: Adds brute force and multiple starting values to nls.
Depends: proto
Suggests: nlstools
License: GPL-2
BugReports: http://groups.google.com/group/sqldf
URL: http://nls2.googlecode.com
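
A quick way to confirm that nls() itself ships with base R's stats package,
so nothing needs to be installed:

find("nls")
## [1] "package:stats"
environmentName(environment(nls))
## [1] "stats"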


> 
> Did I do something wrong? Will a binary appear eventually? Would I have to 
> make it myself?
> 
> Best wishes
> 
> Troels Ring
> 
> Aalborg, Denmark
> 
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

David Winsemius
Alameda, CA, USA


Re: [R] nls for R version 3.3.2

2016-11-29 Thread Berend Hasselman

> On 29 Nov 2016, at 09:30, Troels Ring  wrote:
> 
> Dear friends - updated to R 3.3.2 - tried to install nls - got this sad 
> response
> 
> package ‘nls’ is not available (as a binary package for R version 3.3.2)
> 
> I'm on windows 7
> 
> Did I do something wrong? Will a binary appear eventually? Would I have to 
> make it myself?
> 

Indeed you did something wrong.
nls is part of the stats package in R.
Just do ?nls to see the help.

Berend Hasselman
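
To make that concrete, a minimal self-contained example (with data simulated
here purely for illustration) showing nls() working without installing
anything:

set.seed(1)
x <- 1:20
y <- 5 * exp(-0.3 * x) + rnorm(20, sd = 0.1)

## nls() lives in the stats package, which is attached in every R session.
fit <- nls(y ~ a * exp(b * x), start = list(a = 4, b = -0.2))
summary(fit)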

> Best wishes
> 
> Troels Ring
> 
> Aalborg, Denmark
> 
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


[R] nls for R version 3.3.2

2016-11-29 Thread Troels Ring
Dear friends - updated to R 3.3.2 - tried to install nls - got this sad 
response


package ‘nls’ is not available (as a binary package for R version 3.3.2)

I'm on windows 7

Did I do something wrong? Will a binary appear eventually? Would I have 
to make it myself?


Best wishes

Troels Ring

Aalborg, Denmark
