Hi Jan;
Thanks so much. It is much appreciated. The problem has been solved.
Regards,
Greg
On Mon, Sep 24, 2018 at 3:05 PM Jan T Kim wrote:
hmm... I don't see the quote="" parameter in your read.csv call
Best regards, Jan
--
Sent from my mobile. Apologies for typos and terseness
On Mon, Sep 24, 2018, 20:40 greg holly wrote:
Hi Jan;
Thanks so much for this. Yes, I did. Here is my code to read
data: a<-read.csv("for_R_graphs.csv", header=T, sep=",")
On Mon, Sep 24, 2018 at 2:07 PM Jan T Kim via R-help
wrote:
> Yet one more: have you tried adding quote="" to your read.table
> parameters? Quote characters have a 50%
Hi Bert;
Thanks for writing. Here are my answers to your questions:
Regards,
Greg
1. What is your OS? What is your R version? *The version is 3.5.0*
2. How do you know that your data has 151 rows? *Because I looked in Excel;
also, I work on the same data in SAS*
3. Are there stray
Yet one more: have you tried adding quote="" to your read.table
parameters? Quote characters have a 50% chance of being balanced,
and they can encompass multiple lines...
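A small self-contained illustration of that point, with an invented file (the field contents are made up): a single stray quote makes read.csv treat everything up to the next quote, or EOF, as one field, silently swallowing rows.

```r
# a hypothetical 3-row CSV in which one field contains a stray double quote
f <- tempfile(fileext = ".csv")
writeLines(c("id,name", "1,alpha", '2,be"ta', "3,gamma"), f)

bad  <- nrow(read.csv(f))              # default quoting: rows get merged
good <- nrow(read.csv(f, quote = ""))  # quote="": all 3 data rows survive
```

With default quoting the stray `"` opens a quoted string that runs across the line break, so fewer than 3 rows come back; with quote="" the file reads as 3 rows.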
On Mon, Sep 24, 2018 at 11:40:47AM -0700, Bert Gunter wrote:
One more question:
5. Have you tried shutting down, restarting R, and rereading?
-- Bert
On Mon, Sep 24, 2018 at 11:36 AM Bert Gunter wrote:
*Perhaps* useful questions (perhaps *not*, though):
1. What is your OS? What is your R version?
2. How do you know that your data has 151 rows?
3. Are there stray characters -- perhaps a stray eof -- in your data? Have
you checked around row 96 to see what's there?
4. Are the data you did get in
Hi Dear all;
I have a dataset with 151*291 dimensions. After reading the data into R I am
getting a dataset with 96*291 dimensions. Even though I have no error message
from R, I cannot understand why the data are not read correctly.
Here is my code to read the data
Hi Jim,
With a little digging on my side, I have found the issue as to why the
script is skipping that file. The file is "ISO-8859 text, with CRLF
line terminators".
The file should be ASCII, so I converted it using dos2unix and the CRLF line
terminators were eliminated, but I am still not reading it. How can
You need to provide reproducible data. What does the file contain? Why
are you using 'sep=' when reading fixed format? You might be able to
attach the '.txt' to your email to help with the problem. Also, you did not
state what differences you are seeing. So help us out here.
Jim
Hi all,
I am using R to extract data on a regular basis.
However, sometimes using the same script and the same data I am
getting different observations.
The library I am using and how I am reading it is as follows.
library(stringr)
namelist <- file("Adress1.txt",encoding="ISO-8859-1")
Name <-
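For what it's worth, a minimal sketch of the connection-encoding approach used above, rather than converting the file with dos2unix; the file name and content here are invented:

```r
f <- tempfile(fileext = ".txt")
out <- file(f, open = "w", encoding = "ISO-8859-1")
writeLines("caf\u00e9", out)   # stored on disk as Latin-1 bytes
close(out)

namelist <- file(f, encoding = "ISO-8859-1")  # declare the file's encoding
open(namelist, "rt")
Name <- readLines(namelist)    # re-encoded to the session's native encoding
close(namelist)
```

Declaring the encoding on the connection lets R re-encode while reading, so the bytes never need to be rewritten on disk.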
Try asking on R-sig-geo mailing list
Also, state what package(s) you are using, and include what you have already
tried.
-Don
--
Don MacQueen
Lawrence Livermore National Laboratory
7000 East Ave., L-627
Livermore, CA 94550
925-423-1062
On 1/19/17, 10:53 AM, "R-help on behalf of lily li"
Hi R users,
I'm trying to open netcdf files in R. Each nc file has daily climate
measurements for a whole year, covering the whole US. How to limit the file
to a specific rectangle? Thanks.
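Not a full answer, but the usual pattern with the ncdf4 package is to turn the lon/lat bounds into start/count indices for ncvar_get(). A sketch with stand-in coordinate vectors; the variable names ("lon", "lat", "prcp") and the file name are assumptions about the nc files:

```r
# stand-ins for coordinates read from the file, e.g. lon <- ncvar_get(nc, "lon")
lon <- seq(-125, -67, by = 0.5)
lat <- seq(25, 49, by = 0.5)

# bounding rectangle (here roughly Colorado)
i <- which(lon >= -109 & lon <= -102)
j <- which(lat >= 37   & lat <= 41)

start <- c(min(i), min(j), 1)         # lon, lat, time origin
count <- c(length(i), length(j), -1)  # -1 = every time step
# with ncdf4: nc <- nc_open("file.nc"); prcp <- ncvar_get(nc, "prcp", start, count)
```

Only the requested hyperslab is read from disk, so the whole-US file never has to fit in memory.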
[[alternative HTML version deleted]]
__
Hi Ed,
I'm not sure I understand, but can't you read the files one by one and
create one data.frame using rbind?
It's easy to do in a loop too.
Best wishes,
Ulrik
On Thu, 2 Jun 2016, 20:23 Ed Siefker, wrote:
I have many data files named like this:
E11.5-021415-dko-1-1-masked-bottom-area.tsv
E11.5-021415-dko-1-1-masked-top-area.tsv
E11.5-021415-dko-1-2-masked-bottom-area.tsv
E11.5-021415-dko-1-2-masked-top-area.tsv
E11.5-021415-dko-1-3-masked-bottom-area.tsv
E11.5-021415-dko-1-3-masked-top-area.tsv
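One way to follow that suggestion, sketched with throwaway files; in the real case the file names would come from list.files() with a pattern matching the .tsv names above:

```r
d <- tempdir()
write.table(data.frame(area = 1:2), file.path(d, "top-area.tsv"),
            sep = "\t", row.names = FALSE)
write.table(data.frame(area = 3:4), file.path(d, "bottom-area.tsv"),
            sep = "\t", row.names = FALSE)

files <- list.files(d, pattern = "area\\.tsv$", full.names = TRUE)
dfs <- lapply(files, read.delim)   # read each file into a data.frame
combined <- do.call(rbind, dfs)    # stack them into one data.frame
```

do.call(rbind, ...) binds the whole list in one step, which is much faster than growing a data.frame inside a loop.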
Thanks, Dan.
Your code works fine. But I have tens of countries UK, JP, BR, US...,
each of which has ten columns a1, a2, ..., a10 of data. So a little more
automation is needed.
I have been trying to make a list of each country's data and use sapply
thing
to get
UK JP
2009 Q2
-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of ???
Sent: Tuesday, July 28, 2015 9:42 AM
To: r-help@r-project.org
Subject: [R] Reading data with two rows of variable names using read.zoo
Dear R gurus.
I have a data file which has two rows of variable names.
And the time index has a slightly unusual format. I have no idea
how to handle two names and awkward indexing for the quarters.
Lines -
Index; UK; UK; JP; JP
Index; a1; a2; a1; a2
2009 2/4;2;4;3;2
2009 3/4;5;2;1;4
2009
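A sketch of one way to handle the two name rows and the quarter index, using the sample lines above; zoo's as.yearqtr could replace the manual year/quarter parsing at the end:

```r
txt <- c("Index; UK; UK; JP; JP",
         "Index; a1; a2; a1; a2",
         "2009 2/4;2;4;3;2",
         "2009 3/4;5;2;1;4")

h1  <- trimws(strsplit(txt[1], ";")[[1]])
h2  <- trimws(strsplit(txt[2], ";")[[1]])
dat <- read.table(text = txt[-(1:2)], sep = ";", stringsAsFactors = FALSE)
names(dat) <- paste(h1, h2, sep = ".")   # "Index.Index", "UK.a1", "UK.a2", ...

year <- as.numeric(sub(" .*", "", dat$Index.Index))      # "2009 2/4" -> 2009
qtr  <- as.numeric(sub(".* (\\d)/4", "\\1", dat$Index.Index))  # -> 2
```

Pasting the two header rows together gives unique column names, after which the data can be handed to zoo with the parsed year/quarter as the index.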
On Tue, Nov 18, 2014 at 9:42 PM, Upananda Pani upananda.p...@gmail.com wrote:
Dear All,
I want to read my time series data using the xts package and then to
calculate returns using the PerformanceAnalytics package, but I am getting the
following error. Please help me to solve the problem. The error follows:
# Required Libraries
library(xts)
library(PerformanceAnalytics)
Dear All,
I have data of the format shown in the link
http://www.data.jma.go.jp/gmd/env/data/radiation/data/geppo/201004/DR201004_sap.txt
that I need to read. I have downloaded all the data from the link and I
have it on my computer. I used the following script (got it from web) and
was able to
-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Alemu Tadesse
Sent: Wednesday, October 29, 2014 2:21 PM
To: r-help@r-project.org
Subject: [R] reading data from a web
Dear All,
I have data of the format shown in the link
http://www.data.jma.go.jp/gmd/env/data
After saving a file like so...
con <- gzcon(file("file.gz", "wb"))
writeBin(vector, con, size=2)
close(con)
I can read it back into R like so...
con <- gzcon(file("file.gz", "rb"))
vector <- readBin(con, integer(), 4800, size=2, signed=FALSE)
close(con)
...and I'm wondering what other programs might be able
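A round-trip sketch of the same idea with the endianness spelled out, so that another program (Octave, a C reader, ...) knows the exact byte layout; the vector and file here are invented:

```r
f <- tempfile()
v <- c(0L, 1L, 4800L)

con <- gzcon(file(f, "wb"))
writeBin(v, con, size = 2, endian = "little")  # 2 bytes per value, little-endian
close(con)

con <- gzcon(file(f, "rb"))
back <- readBin(con, integer(), n = length(v), size = 2,
                signed = FALSE, endian = "little")
close(con)
```

Fixing endian explicitly (rather than relying on the platform default) is what makes the file portable to other software.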
-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf
Of Mike Miller
Sent: Monday, April 21, 2014 6:00 PM
To: R-Help List
Subject: [R] reading data saved with writeBin() into anything other than R
After saving a file like so...
con <- gzcon(file("file.gz", "wb"))
writeBin(vector
On Tue, 22 Apr 2014, William Dunlap wrote:
For me that other software would probably be Octave. I'm interested if
anyone here has read in these files using Octave, or a C program or
anything else.
I typed 'octave read binary file' into google.com and the first hit was
the Octave help file
Dear list,
I've gotten access to the US Census Bureau's developer API for accessing
various datasets they maintain. Here is the link:
http://www.census.gov/developers/
They say that:
Data are accessible to software developers through a stateless HTTP GET
request. Up to 50 variables can be
I got it:
library(rjson)
library(plyr)
test <- fromJSON(file=url("http://api.census.gov/data/2010/sf1?key=mykey&get=P0030001,NAME&for=county:*&in=state:48"))
test2 <- ldply(test)[-1,]
names(test2) <- ldply(test)[1,]
head(test2)
P0030001 NAME state county
258458 Anderson County 48 001
Hi experts,
I want to read data from an Excel file like this:
from the fifth column, rows 1 to 140 but only 1,3,5,7,...,139
(only 70 values).
How can I do it in R?
thanks
Take a look at the XLConnect package. I use it for all my
reading/writing of Excel files.
Jim Holtman
Data Munger Guru
What is the problem that you are trying to solve?
Tell me what you want to do, not how you want to do it.
On Mon, Nov 4, 2013 at 8:47 AM, Baro babak...@gmail.com wrote:
Hi
You can use the XLConnect package to read in a range of rows and columns,
then define a function to subset the odd rows. For example,
library(XLConnect)
wb <- loadWorkbook("C:/temp/MyData.xls")
dat <- readWorksheet(wb, sheet=getSheets(wb)[1], startRow=1, endRow=139,
startCol=5, endCol=5)
dat <-
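Continuing that sketch: once readWorksheet() returns the block of rows 1-139, the odd rows can be taken with seq(). A stand-in data.frame here in place of the worksheet result:

```r
dat <- data.frame(x = seq_len(140))   # stand-in for the readWorksheet() result
odd <- dat[seq(1, 139, by = 2), , drop = FALSE]
nrow(odd)   # 70 values: rows 1, 3, ..., 139
```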
thanks a lot, but now I have another problem: my Excel file is very big and
I get this error, which says:
Error: OutOfMemoryError (Java): Java heap space
Is there any way to read each value one by one and save them in an array?
On Mon, Nov 4, 2013 at 6:13 AM, Adams, Jean jvad...@usgs.gov wrote:
Perhaps the discussion at this link will help ... (see especially the
second answer).
http://stackoverflow.com/questions/7963393/out-of-memory-error-java-when-using-r-and-xlconnect-package
Jean
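The gist of the linked discussion, as a sketch (the heap size is an arbitrary example): the JVM option has to be set before any rJava-backed package is loaded, because the heap size is fixed when the JVM starts.

```r
options(java.parameters = "-Xmx1024m")  # request a 1 GB Java heap
# library(XLConnect)                    # load only AFTER setting the option
```

Setting the option after XLConnect (and hence rJava) is already loaded has no effect; restart R first.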
On Mon, Nov 4, 2013 at 8:26 AM, Baro babak...@gmail.com wrote:
thanks a lot, but now I have
thanks, I changed my code, but still have the same problem :/
On Mon, Nov 4, 2013 at 6:49 AM, Adams, Jean jvad...@usgs.gov wrote:
To: Adams, Jean jvad...@usgs.gov
Cc: R help r-help@r-project.org
Subject: Re: [R] Reading data from Excel file in r
thanks a lot, but now I have another problem: my Excel file is very big and
I get this error, which says:
Error: OutOfMemoryError (Java): Java heap space
Is there any way to read each
Hi,
It would be better to give an example.
If your dataset is like the one attached:
con <- file("Trial1.txt")
Lines1 <- readLines(con)
close(con)
#If the data you wanted to extract is numeric and the header and footer are
characters,
Hi,
I tried to read your data from the image:
OPENCUT <- read.table("OpenCut.dat", header=TRUE, sep="\t")
OPENCUT
FC LC SR DM
1 400030.34 1323.5 0 400
2 12680.13 2.5 0 180
3 472272.75 2004.7 3 300
4 332978.03 1301.3 106 180
5 98654.20 295.0 0 180
6 68142.05 259.9 69
Hi,
Try this:
files <- paste("MSMS_", 23, "PepInfo.txt", sep="")
read.data <- function(x) {names(x) <- gsub("^(.*)\\/.*", "\\1", x);
lapply(x, function(y) read.table(y, header=TRUE, sep =
"\t", stringsAsFactors=FALSE, fill=TRUE))}
Hi Vera,
Not sure I understand your question.
Your statement
In my list I can't merge rows to have the group, because the idea is,
for each file, to count frequencies of mm when b<0.01. After that I
want a graph like the one attached.
files <- paste("MSMS_", 23, "PepInfo.txt", sep="")
Hi,
I am not able to open your graph. I am using linux.
Also, the code in the function is not reproducible:
directT <- direct[grepl("^t", direct)]
directC <- direct[grepl("^c", direct)]
It takes double the time to know what is going on.
dir()
#[1] "a1" "a2" "a3" "b1" "b2" "c1"
direct <-
Hi Vera,
No problem. I am cc:ing to r-help.
A.K.
From: Vera Costa veracosta...@gmail.com
To: arun smartpink...@yahoo.com
Sent: Sunday, February 17, 2013 5:44 AM
Subject: Re: reading data
Hi. Thank you. It works now:-)
And yes, I use windows.
Thank you
Hi,
Try putting quotes around it, i.e.:
res <- do.call("c", ...)
A.K.
From: Vera Costa veracosta...@gmail.com
To: arun smartpink...@yahoo.com
Sent: Saturday, February 16, 2013 7:10 PM
Subject: Re: reading data
Thank you.
In mine, I have an error 'what' must be a
Hi,
#working directory data1 #changed name data to data1. Added some files in each
of sub directories a1, a2, etc.
indx1 <- indx[indx != ""]
lapply(indx1, function(x) list.files(x))
#[[1]]
#[1] "a1.txt"   "m11kk.txt"
#[[2]]
#[1] "a2.txt"   "m11kk.txt"
#[[3]]
#[1] "a3.txt"
Hi,
Just to add:
res <- do.call(c, lapply(list.files(recursive=T)[grep("m11kk", list.files(recursive=T))], function(x)
{names(x) <- gsub("^(.*)\\/.*", "\\1", x); lapply(x, function(y)
read.table(y, header=TRUE, stringsAsFactors=FALSE, fill=TRUE))})) #it seems like
one of the rows of your file doesn't have 6
Hi,
No problem.
?c() for concatenate to vector or list().
If I use do.call(cbind,..) or do.call(rbind,...)
do.call(cbind, lapply(list.files(recursive=T)[grep("m11kk", list.files(recursive=T))], function(x)
{names(x) <- gsub("^(.*)\\/.*", "\\1", x); lapply(x, function(y)
Hi, I am really new to using R, so this is really beginner stuff! I
created a very small data set in Excel and then converted it to a .csv
file. I am able to open the data in R using the command read.table
("mydata1.csv", sep=",", header=T) and it just works fine. But when I
want to work on the data
hello,
The error message is right: you have read the file but NOT assigned it to
an object, i.e. to a variable.
mydata1 <- read.table("mydata1.csv", sep=",", header=T)
Now you can use the variable 'mydata1'. It's a data.frame, and you can see
what it looks like with the following instructions.
You need to assign your data set to something -- right now you're just
reading it in and then throwing it away:
dats <- read.csv("mydata1.csv")
mean(dats$X) # Dollar sign, not ampersand
Best,
Michael
On Tue, May 15, 2012 at 8:57 AM, jacaranda tree myjacara...@yahoo.com wrote:
Hi I am really new
Message-
From: myjacara...@yahoo.com
Sent: Tue, 15 May 2012 05:57:51 -0700 (PDT)
To: r-help@r-project.org
Subject: [R] reading data into R
Hi I am really new using R, so this is really a beginner stuff! I
created a very small data set on excel and then converted it to .csv
file. I am able
Hi !
You need to assign the output of read.table() into an object; this is
how R works:
mydata <- read.table("mydata1.csv", sep=",", header=T)
mymean <- mean(mydata$var)
You should read some introductory material.
I found this useful:
http://www.burns-stat.com/pages/Tutor/hints_R_begin.html
And
Dear R-users,
I have to read data from a worksheet that is available on the Internet. I
have been doing this by copying the worksheet from the browser.
But I would like to be able to copy the data automatically using the url
command.
But when using the url command the result is the source code, I
Thanks Sarah. I have read about the problems with attach(), and I
will try to avoid it.
I have now found the line that's causing the problem is:
setwd("z:/homework")
With that line in place, either in a program or in Rprofile.site (?),
then the moment I run R and simply enter (before reading any
Well, if your problem is that a workspace is being loaded automatically
and you don't want that workspace, you have several options:
1. Use a different directory for each project so that the file loaded
by default is the correct one.
2. Don't save your workspace, but regenerate it each time.
3.
A follow-up on the data/variable issue I posted earlier:
Here is what I did, which was obviously causing the problem:
I inserted the following line in my file Rprofile.site:
setwd("z:/R")
Then, as soon as I run R (before I read any data) I issue
summary(mydata)
I get summary
Can someone help me with this variable/data reading issue?
I read a csv file and transform/create an additional variable (called y).
The first set of commands below produced different sample statistics
for hw11$y and y.
In the second set of commands I rename and use the variable name yy, and
sample
Hi,
The obvious answer is don't use attach() and you'll never have
that problem. And see further comments inline.
On Tue, Nov 15, 2011 at 6:05 PM, Steven Yen s...@utk.edu wrote:
Can someone help me with this variable/data reading issue?
I read a csv file and transform/create an additional
Hi,
I had a large file for which I require a subset of rows. Instead of reading
it all into memory, I use the awk command to get the relevant rows. However,
I'm doing it pretty inefficiently as I write the subset to disk, before
reading it into R. Is there a way that I can read it into an R
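One way to cut out the temporary file is to read the awk output through pipe(); a self-contained sketch (the filter '$1 > 2' is illustrative, and awk is assumed to be on the PATH):

```r
f <- tempfile()
writeLines(c("1 a", "3 b", "5 c"), f)

# stream only the wanted rows straight into read.table, no file on disk
dat <- read.table(pipe(sprintf("awk '$1 > 2' %s", shQuote(f))))
```

read.table accepts any connection, so the awk subset never touches the disk.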
On Mon, Oct 17, 2011 at 9:23 AM, Brian Smith bsmith030...@gmail.com wrote:
On Mon, 17 Oct 2011, Brian Smith wrote:
Got it. Thanks!
On Mon, Oct 17, 2011 at 9:40 AM, Prof Brian Ripley rip...@stats.ox.ac.ukwrote:
On Mon, 17 Oct 2011, Brian Smith wrote:
: ESTEBAN ALFARO CORTES
CC: r-help@r-project.org
Subject: Re: [R] Reading data in lisp format
If you think that R is loosely typed, then examining LiSP code will
change your mind, or at least give you a new data point further out on
the Loose-Tight axis. I think you will need to do the processing
Thanks Cesar,
Any idea for this contents of the file?
;; positive examples represent people that were granted credit
(def-pred credit_screening :type (:person)
:pos
((s1) (s2) (s4) (s5) (s6) (s7) (s8) (s9) (s14) (s15) (s17) (s18) (s19)
(s21) (s22) (s24) (s28) (s29) (s31) (s32)
, that somebody there should be able to help.
Rainer
Regards,
Esteban
From: David Winsemius [mailto:dwinsem...@comcast.net] Sent:
Wed 21/09/2011 17:08 To: ESTEBAN ALFARO CORTES CC:
r-help@r-project.org Subject: Re: [R] Reading data in lisp format
Hi,
I am trying to read the credit.lisp file of the Japanese credit database in
UCI repository, but it is in lisp format which I do not know how to read. I
have not found how to do that in the foreign library
http://archive.ics.uci.edu/ml/datasets/Japanese+Credit+Screening
On 21/9/2011 07:39, ESTEBAN ALFARO CORTES wrote:
If you think that R is loosely typed, then examining LiSP code will
change your mind, or at least give you a new data point further out on
the Loose-Tight axis. I think you will need to do the processing by
hand.
The organization of the data is fairly clear. There are logical
columns
If you know how many lines to skip, you can set skip=xx in read.table.
The question is what you can do if you have variable lines to skip in
various files but you have characters indicating the beginning of the
data, like ~A. What you can do is get the file in using readLines,
use grep to find the
Dear All,
I have many files with a lot of headers and text at the beginning of the file.
The headers are not uniform though and they are of different sizes. Is there a
way where I can read a table and skip all of the headers/text on top of it
until I encounter a certain text pattern? Here is
use readLines to read in the entire file, find your pattern of where your data
starts and then write the data starting there using writeLines to a temporary
file and now you can just read in that file using read.table; you will have
'skipped' the extra header data.
Sent from my iPad
On Aug
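That recipe can be shortened with read.table(text=), which skips the temporary file entirely; a sketch with an invented marker line ("~A", as in the question):

```r
f <- tempfile()
writeLines(c("free-form header", "more header text", "~A", "1 2", "3 4"), f)

lines <- readLines(f)
start <- grep("^~A", lines)                         # locate the marker line
dat <- read.table(text = lines[(start + 1):length(lines)])
```

The variable-length header is discarded by index rather than by a fixed skip= count, so it works however many header lines each file has.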
Hi Duncan
Your method works well for my situation when I make only one call to the
database/URL with the login info. Our database is configured like the
first situation (cookies) that you described below. Now, I will need to
make multiple successive calls to get data for different sites in
Hi Steve
RCurl can help you when you need to have more control over Web requests.
The details vary from Web site to Web site and the different ways to specify
passwords, etc.
If the JSESSIONID and NCES_JSESSIONID are regular cookies and returned in the
first
request as cookies, then you can
I am trying to retrieve data from a password protected database. I have
login information and the proper url. When I make a request to the url,
I get back some info, but need to read the hidden header information
that has JSESSIONID and NCES_JSESSIONID. They need to be used to set
cookies
Can I use sink() to transfer the MLE results, which are an S4-type object, to a
text file?
Can someone show me how to do this?
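Yes, sink() works, though capture.output() is usually tidier for a one-off dump of an S4 result. A sketch using stats4::mle on toy data (the model here is invented, not the poster's):

```r
library(stats4)  # ships with R
set.seed(1)
x <- rnorm(50, mean = 2)
nll <- function(mu) -sum(dnorm(x, mean = mu, sd = 1, log = TRUE))
fit <- mle(nll, start = list(mu = 0))     # an S4 "mle" object

out <- tempfile(fileext = ".txt")
capture.output(summary(fit), file = out)  # the printed summary, as plain text
```

The resulting text file can then be opened in Excel, or the coefficients pulled out directly with coef(fit) and written with write.csv.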
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
project.org] On Behalf Of Bazman76
Sent: Tuesday, May 31, 2011 9:04 AM
To: r-help@r-project.org
Subject: Re: [R] Reading
To: r-help@r-project.org
Subject: Re: [R] Reading Data from mle into excel?
Hi Greg,
I have about 40 time series each of which I have to run a separate MLE
on. I will be experimenting with different starting values for the
parameters etc, so some way to automate the process will be useful
Greg that's it!
Thank you thank you thank you
So simple in the end?
From: greg.s...@imail.org
To: h_a_patie...@hotmail.com; r-help@r-project.org
Date: Tue, 31 May 2011 10:27:13 -0600
Subject: RE: [R] Reading Data from mle into excel?
I did not see any code
thanks for all your help
I have taken a slightly different route but I think I am getting there
library(plyr)
#setwd("C:/Documents and Settings/Hugh/My Documents/PhD")
#files <- list.files("C:/Documents and Settings/Hugh/My
Documents/PhD/", pattern="Swaption Vols.csv")
#vols <- lapply(files, read.csv,
Hi there,
I ran the following code:
vols=read.csv(file="C:/Documents and Settings/Hugh/My Documents/PhD/Swaption vols.csv",
header=TRUE, sep=",")
X <- ts(vols[,2])
#X
dcOU <- function(x,t,x0,theta,log=FALSE){
Ex <- theta[1]/theta[2]+(x0-theta[1]/theta[2])*exp(-theta[2]*t)
Hi:
This isn't too hard to do. The strategy is basically this:
(1) Create a list of file names. (See ?list.files for some ideas)
(2) Read the data files from (1) into a list.
(3) Create a function to apply to each data frame in the list.
(4) Apply the function to each data frame.
(5) Extract
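The five steps, as a minimal sketch with throwaway CSV files standing in for the real data and a trivial function standing in for the MLE fit:

```r
d <- tempdir()
write.csv(data.frame(v = 1:3), file.path(d, "s1.csv"), row.names = FALSE)
write.csv(data.frame(v = 4:6), file.path(d, "s2.csv"), row.names = FALSE)

files <- list.files(d, pattern = "^s[12]\\.csv$", full.names = TRUE)  # (1)
dats  <- lapply(files, read.csv)                                      # (2)
fit1  <- function(dat) mean(dat$v)                                    # (3)
res   <- sapply(dats, fit1)                                           # (4)
res                                                                   # (5) c(2, 5)
```

Swapping fit1 for the real mle() call (returning coef(fit), say) gives one row of parameter estimates per series.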
I think cognizance should be taken of fortune("very uneasy").
cheers,
Rolf Turner
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
Hi Scott,
Thanks for this.
Got some questions below:
Thanks
Hugh
Date: Mon, 23 May 2011 17:32:52 -0500
From: scttchamberla...@gmail.com
To: h_a_patie...@hotmail.com
CC: r-help@r-project.org
Subject: Re: [R] Reading Data from mle into excel?
I would read the datasets into a list
I would read the datasets into a list first, something like this which will
make a list of dataframes:
filenames <- dir() # where only filenames you want to read in are in this
directory
dataframelist <- lapply(filenames, read.csv, header = TRUE, sep = ",")
You should be able to put the whole
I have data in the following form:
judge poster score poster score poster score
a 1 89 2 79 3 92
b 3 45 4 65
and am trying to get it to the following:
Poster Judge_A Judge_B Judge_C
1
try this:
input <- readLines(textConnection("a 1 89 2 79 3 92
b 3 45 4 65"))
closeAllConnections()
# now parse each line to create a dataframe with each row being the score
result <- NULL
for (i in
Hi
r-help-boun...@r-project.org wrote on 16.06.2010 22:14:33:
Thanks for your reply. Possibly I do not have perl; I am not sure,
though. How can I find out whether I have it? If I don't have it, where can I
download it from?
Do you have Excel? If yes you can
Open Excel
Select data you
Surely you could also save the excel spreadsheet with the relevant data as a
text file, and then read it into R as normal?
Select save as in Excel and then change save as type to Text (Tab
delimited)(*.txt).
Save it in the directory you are using in R, (or change the directory in R to
where
If you're on windows and you never installed perl, then you don't have
it. Another easy way to find out is to type perl in the search
window under the start menu. If there's no perl.exe on your computer,
you don't have it.
Take a look at : http://www.perl.org/
If you download Perl, it doesn't
Hi
r-help-boun...@r-project.org wrote on 18.06.2010 14:00:47:
Surely you could also save the excel spreadsheet with the relevant data
as a
text file, and then read it into R as normal?
Select save as in Excel and then change save as type to Text (Tab
delimited)(*.txt).
Save it in
Can anyone tell me how to read an xls file into R? I have tried the following:
library(gdata)
xlsfile <- file.path(.path.package('gdata'),'xls','iris.xls')
read.xls(xlsfile)
I got the following error:
Converting xls file to csv file... Error in system(cmd, intern = !verbose) :
perl not found
Error in
On Wed, Jun 16, 2010 at 2:29 PM, Christofer Bogaso
bogaso.christo...@gmail.com wrote:
Can anyone help me how to read xls file into R. I have tried following
library(gdata)
xlsfile <- file.path(.path.package('gdata'),'xls','iris.xls')
read.xls(xlsfile)
I got following error:
Converting xls
On Wed, Jun 16, 2010 at 7:29 PM, Christofer Bogaso
bogaso.christo...@gmail.com wrote:
Can anyone help me how to read xls file into R. I have tried following
library(gdata)
xlsfile <- file.path(.path.package('gdata'),'xls','iris.xls')
read.xls(xlsfile)
I got following error:
Converting xls
Thanks for your reply. Possibly I do not have perl; I am not sure, though.
How can I find out whether I have it? If I don't have it, where can I
download it from?
On Thu, Jun 17, 2010 at 12:57 AM, Barry Rowlingson
b.rowling...@lancaster.ac.uk wrote:
On Wed, Jun 16, 2010 at 7:29 PM, Christofer
Hello R wizards,
What is the best way to read a data file containing both fixed-width and
tab-delimited files? (More detail follows.)
_*Details:*_
The U.S. Bureau of Labor Statistics provides local area unemployment
statistics at ftp://ftp.bls.gov/pub/time.series/la/, and the data are
I tried to shoehorn the read.* functions and match both the fixed width and
the variable width fields
in the data but it doesn't seem evident to me. (read.fwf reads fixed width
data properly but the rest
of the fields must be processed separately -- maybe insert NULL stubs in the
remaining fields
Ah, I should have mentioned this. Personally I work on Macs (Leopard)
and PC's (XP Pro and XP Pro x64). Even though the PC's do have Cygwin,
I'm trying to make this code portable. So I want to avoid such things as
sed, perl, etc.
I want to do this in R, even if processing is a bit slower.
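One portable, base-R-only way (no sed or perl): split each line on tabs first, then carve the fixed-width piece out of the relevant field with substring(). The sample line and the field positions here are invented, not the real BLS layout:

```r
line  <- "LASST010000000000003\t2010\tM01\t9.9"
parts <- strsplit(line, "\t")[[1]]       # the tab-delimited fields
series_id <- parts[1]
state     <- substring(series_id, 6, 7)  # fixed-width slice within field 1
```

Applied with lapply over readLines() output, this handles the mixed format without any external tools.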
Hullo
I'm trying to read some time series data of meteorological records
that are available on the web (eg http://climate.arm.ac.uk/calibrated/soil/dsoil100_cal_1910-1919.dat)
. I'd like to be able to read the digital data directly into R.
However, I cannot work out the right function and
Try this. First we read the raw lines into R using grep to remove any
lines containing a character that is not a number or space. Then we
look for the year lines and repeat them down V1 using cumsum. Finally
we omit the year lines.
myURL <-
Mark Leeds pointed out to me that the code wrapped around in the post
so it may not be obvious that the regular expression in the grep is
(i.e. it contains a space):
[^ 0-9.]
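A compact sketch of the steps just described, on invented sample lines (the real code read the lines from myURL):

```r
raw <- c("Soil temperatures", "1910", "1 5.2 5.3", "2 5.1 5.0",
         "1911", "1 4.9 5.1")

L <- raw[!grepl("[^ 0-9.]", raw)]    # keep only digit/space/dot lines
isYear <- grepl("^[0-9]{4}$", L)     # the year-only lines
dat <- read.table(text = L[!isYear])
# repeat each year down over its data rows via cumsum, then drop the year lines
dat$year <- as.numeric(L[isYear])[cumsum(isYear)[!isYear]]
```

cumsum(isYear) gives each line the index of the most recent year line above it, which is what "repeat them down" means here.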
On Sat, Feb 27, 2010 at 7:15 AM, Gabor Grothendieck
ggrothendi...@gmail.com wrote:
Try this. First we read the raw
Thanks, Gabor. My take away from this and Phil's post is that I'm
going to have to construct some code to do the parsing, rather than
use a standard function. I'm afraid that neither approach works, yet:
Gabor's has an off-by-one error (days start on the 2nd, not the
first), and the