On May 8, 2012, at 05:10 , array chip wrote:
Hi, what does a low R-square value from an ANCOVA model mean? For example, if
the R-square from the model is about 0.2, does this mean the results should
NOT be trusted? I checked the residuals of the model and they looked fine...
It just means that
On May 8, 2012, at 08:34 , array chip wrote:
Thank you Peter, so if I observe a significant coefficient, that significance
still holds because the standard error of the coefficient has taken the
residual error (which is large because of the low R-square) into account, am I
correct?
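Peter's point can be illustrated with a tiny simulation (made-up data, not the poster's):

```r
# Sketch with made-up data: R-squared can be low while the slope is
# clearly significant, because the coefficient's standard error already
# accounts for the large residual error.
set.seed(1)
x <- rnorm(500)
y <- 0.5 * x + rnorm(500, sd = 2)   # weak signal relative to the noise
fit <- lm(y ~ x)
summary(fit)$r.squared                       # low, well under 0.2
summary(fit)$coefficients["x", "Pr(>|t|)"]   # yet far below 0.05
```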
In
Hi
I have a Unix machine with 16 GB of RAM. When I run any R process it uses
only 2 GB of memory. How can I increase the memory limit? It takes a lot of
time to run the process for larger datasets.
-
Thanks in Advance
Arun
--
View this message in context:
Thank you Peter, so if I observe a significant coefficient, that significance
still holds because the standard error of the coefficient has taken the
residual error (which is large because of the low R-square) into account, am I
correct?
John
From: peter
Hi there,
I am new to the package glmmadmb, but need it to perform a zero-inflated
gzlmm with a binomial error structure. I can't seem to get it to work
without getting some strange error messages.
I am trying to find out what is affecting the number of seabird calls on an
array of recorders
Hi Tal,
Thanks for replying.
(1) I am going to use cohort as a factor and (2) no, there is no strong
correlation between cohort and the other predictors.
I am using a binomial GLM, and the lack of significance of cohort seems to
be due to one of the 11 levels (the base level) of this factor to
Thank you Peter for showing me the error.
I did not realize it. Now I have removed that cohort (there was just one
observation!) and checked the numbers for each of the other cohorts. I have
re-run the model and now it seems to make much more sense to me.
I am going to use one specific cohort,
Hi,
I have data with the form:
a b c
8.9 0 0
7.4 1 0
4.2 0 1
2.3 1 1
Which are explanatory variables in this data?
I also want to fit a logistic regression model with two explanatory
variables. (I think that I should
Hi,
I want to estimate a probability within logistic regression. How can I do
this (within R)?
For example;
I have data with the following form. How should I estimate the probability
when a has the value 4.2, b = 1, and c = 1?:
a b c
8.9 0 0
7.4 1 0
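For what it's worth, one common reading of this question is a glm() fit followed by predict(..., type = "response"). Treating c as the 0/1 response is an assumption here (the thread never says which column is the response), and with only the four rows shown the fit separates perfectly, so this is purely illustrative:

```r
# Assumes c is the binary response and a, b are predictors; with only
# four rows the fit separates perfectly, so treat this as illustration only
dat <- data.frame(a = c(8.9, 7.4, 4.2, 2.3),
                  b = c(0, 1, 0, 1),
                  c = c(0, 0, 1, 1))
fit <- glm(c ~ a + b, data = dat, family = binomial)
# estimated P(c = 1) at a = 4.2, b = 1
p <- predict(fit, newdata = data.frame(a = 4.2, b = 1), type = "response")
```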
From the posting guide of this mailing list: Basic statistics and classroom
homework: R-help is not intended for these.
Ask your fellow students or your teacher.
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and
Forest
team Biometrie
Hello.
Sorry if that's considered laziness as I've just learnt R and didn't know how
important it is to do dput for all problems.
If I was truly lazy then I wouldn't even bother to sign up here and ask
questions.
Please be nicer next time.
Suhaila.
CC: r-help@r-project.org
From:
On Thu, 2012-05-03 at 08:32 -0500, Nathan Furey wrote:
Dear R-help users,
I am trying to analyze some visual transect data of organisms to generate a
habitat distribution model. Once organisms are sighted, they are followed
as point data is collected at a given time interval. Because of the
On 05/08/2012 01:04 PM, Romero, Ingrid C. wrote:
Hi,
My name is Ingrid; at the moment I am trying to make a plot with filled.contour.
Initially I was able to obtain the graphic, but the x axis was not right because
the intervals were not coherent (attached file 1: Plot_age_ML_contamana_final.pdf)
I
Hello.
Thanks for the help and your reasonable explanation. I'll definitely start
using 'dput' next time.
Apologies for the trouble to all who helped me.
Suhaila.
On 05/08/2012 06:35 PM, Suhaila Haji Mohd Hussin wrote:
Hello.
Sorry if that's considered laziness as I've just learnt R
I have checked that. It allows me to get the t-1, t-2 value but not the t+1
value.
Is there any other way of achieving this other than using the plm package?
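If installing another package is the sticking point, a t+1 (lead) column can be built in base R by shifting within each panel group; a sketch with made-up data:

```r
# Sketch: lead (t+1) values within panel groups using base R only
df <- data.frame(id = rep(1:2, each = 3), t = rep(1:3, 2), v = 1:6)
df$v_lead <- ave(df$v, df$id, FUN = function(z) c(z[-1], NA))
df$v_lead   # 2 3 NA 5 6 NA
```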
On Mon, May 7, 2012 at 8:27 PM, Liviu Andronic landronim...@gmail.com wrote:
On Mon, May 7, 2012 at 3:21 PM, Apoorva Gupta
Dear all,
I would like to run a simple regression model y~x1+x2+x3+...
The problem is that I have a lot of independent variables (xi) -- around
one hundred -- and that some of them are categorical with a lot of
categories (like, for example, ZIP code). One straightforward way would be
to (a)
-Original Message-
I avoid the biplot at all costs, because IMHO it violates one
of the tenets of good graphic design: it has two entirely
different scales on its axes. These are maximally confusing to
the end-user. So I never use it.
I think you're being unnecessarily
Has anyone got any advice about what hardware to buy to run lots of R
analysis? Links to studies or other documents would be great as would
be personal opinion.
We are not currently certain what analysis we shall be running, but our
first implementation uses the functions lme and gls from
Hello, there!
Basically my problem is very clear. I would like to take a
(numerical) integral of a function f(x,y), which can be quite complex in x
and y, over a disk (x-a)^2 + (y-b)^2 <= r^2 (with r constant). However, after
some search in R, I just cannot find a function in R that suits my
At 08:40 08/05/2012, lincoln wrote:
Thank you Peter for showing me the error.
I did not realize it. Now I have removed that cohort (there was just one
observation!) and checked the numbers for each of the other cohorts. I have
re-run the model and now it seems to make much more sense to me.
I
"Simply impossible" seems an odd description for a technique described in every
elementary calculus text under the heading "integration in cylindrical
coordinates."
---
Jeff Newmiller
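A base-R sketch of that substitution, nesting integrate() over the angle inside integrate() over the radius (f, a, b, r below are placeholders for the poster's own values; for this example f the exact answer is pi/4):

```r
# Integrate f over the disk (x-a)^2 + (y-b)^2 <= r^2 in polar coordinates:
# x = a + rho*cos(theta), y = b + rho*sin(theta), with Jacobian rho.
f <- function(x, y) x^2 + y            # placeholder integrand
a <- 0; b <- 0; r <- 1
inner <- function(rho) sapply(rho, function(rh)
  rh * integrate(function(th) f(a + rh * cos(th), b + rh * sin(th)),
                 0, 2 * pi)$value)
result <- integrate(inner, 0, r)$value # ~ pi/4 for this f
```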
Fantastic Jan,
Thanks a lot for the example of how I can achieve this with melt()/cast().
Very good for my understanding of these functions.
Karl
On 07/05/12 13:49, Jan van der Laan wrote:
using reshape:
library(reshape)
m <- melt(my.df, id.var = "pathway", na.rm = TRUE)
cast(m, pathway ~ variable, sum,
Hi all,
Basically, I have data in the format of (up to 1 gig in size) text files
containing stuff like:
F34060F81000F28055F8A000F2E05EF8F000F34 (...)
The data is basically strings denoting hex values (9 = 9, A = 10, B = 11,
...) organised in fixed, small blocks. What I want to do is to read in
Hi everybody, I am sorry that I am kind of spamming this forum, but I have
searched for some input everywhere and can't really find a nice solution for my
problem.
Data looks like:
price
2011-11-01 08:00:00 0.0
2011-11-01 08:00:00 0.0
Hello,
I'm currently writing my bachelor thesis in statistical finance and I have
run into a small problem. I want to evaluate forecasts from my GARCH model
with realized intraday volatility. The intraday data is tick data over a
certain period. The date column is presented as, for example, 2011-11-01
How many data points do you have?
--
View this message in context:
http://r.789695.n4.nabble.com/What-is-the-most-cost-effective-hardware-for-R-tp4617155p4617187.html
Sent from the R help mailing list archive at Nabble.com.
__
R-help@r-project.org
Hello,
I want to perform the Nagelkerke pseudo-R2 test...
Can someone tell me whether there is an R function or package available for
doing it, and also how the sample input data should look?
Regards
GRR
Hi there,
I'm sorry if I am sending this for the second time; I've just subscribed to the list.
I am trying to interface C++ code with R and make a package. With R CMD SHLIB
the DLL was created, but when I try R CMD check, I am getting 'undefined
reference to..' linkage error messages.
The relevant C++
I think the question on your mind should be: 'what do I want to do with this
plot'? Just producing output from the PCA is easy - plotting the output$sd
is probably quite informative. From the sounds of it, though, you want to do
clustering with the PCA component loadings? (Since that's mostly what
On May 8, 2012, at 4:35 AM, Suhaila Haji Mohd Hussin wrote:
Hello.
Sorry if that's considered laziness as I've just learnt R and didn't
know how important it is to do dput for all problems.
If I was truly lazy then I wouldn't even bother to sign up here and
ask questions.
Please be
I don't know if we can figure that out... I would figure out what these
data are, and then read the relevant help files, ?glm, and literature
associated with linear modeling.
HTH,
Stephen
On 05/08/2012 01:15 AM, T Bal wrote:
Hi,
I have data with the form:
a b c
8.9
Can you parallelize the code? It really depends on where the
bottleneck is.
HTH,
Stephen
On 05/07/2012 10:37 PM, arunkumar wrote:
Hi
I have a Unix machine with 16 GB of RAM. When I run any R process it uses
only 2 GB of memory. How can I increase the memory limit? It takes a lot of
time to run
On May 8, 2012, at 4:35 AM, Suhaila Haji Mohd Hussin wrote:
Hello.
Sorry if that's considered laziness as I've just learnt R and didn't
know how important it is to do dput for all problems.
If I was truly lazy then I wouldn't even bother to sign up here and
ask questions.
I didn't say
You can use the to.period family of functions in the xts package for
this. For example,
Lines <-
2011-11-01 08:00:00 0.0
2011-11-01 08:00:00 0.0
2011-11-01 08:02:00 0.0
2011-11-01 08:03:00 -0.01709
2011-11-01 08:24:00 0.0
2011-11-01 08:24:00 0.0
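to.period() (e.g. to.minutes5()) is the efficient route; for completeness, the same 5-minute bucketing can also be sketched in plain base R (timestamps below are the fragment's own):

```r
# Dependency-free sketch: bucket tick timestamps into 5-minute bins and
# average the price in each bin (xts::to.minutes5 is the faster route)
times <- as.POSIXct("2011-11-01 08:00:00", tz = "UTC") +
  c(0, 0, 120, 180, 1440, 1440)        # 08:00, 08:00, 08:02, 08:03, 08:24, 08:24
price <- c(0, 0, 0, -0.01709, 0, 0)
bin <- as.POSIXct(floor(as.numeric(times) / 300) * 300,
                  origin = "1970-01-01", tz = "UTC")
agg <- aggregate(list(price = price), by = list(bin = bin), FUN = mean)
```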
On Tuesday, 8 May 2012 at 10:44 +0200, osvald wiklander wrote:
Hi everybody, I am sorry that I am kind of spamming this forum, but I
have searched for some input everywhere and can't really find a nice
solution for my problem.
Data looks like:
price
On Mon, May 7, 2012 at 8:54 PM, Santosh santosh2...@gmail.com wrote:
Hello experts!!
I apologize for posting an S-PLUS related query here... badly in need of
relevant info.
I usually use R (and your advice/tips) for my daily work. Was wondering if
there is an equivalent of sheetCount of the
On Tue, May 8, 2012 at 12:14 PM, Apoorva Gupta apoorva.ni...@gmail.com wrote:
I have checked that. It allows me to get the t-1, t-2 value but not the t+1
value.
Is there any other way of achieving this other than using the plm package?
It would be easier to help if you provided a minimal
I think the general experience is that R is going to be more
memory-hungry than other resources so you'll get the best bang for
your buck on that end. R also has good parallelization support: that
and other high performance concerns are addressed here:
[...]
But having indicated that I don't see a biplot's multiple scales as
particularly likely to confuse or mislead, I'm always interested in
alternatives. The interesting question is 'given the same objective - a
qualitative indication of which variables have most influenced the location
On Tue, May 8, 2012 at 11:49 AM, Hugh Morgan h.mor...@har.mrc.ac.uk wrote:
Has anyone got any advice about what hardware to buy to run lots of R
analysis? Links to studies or other documents would be great as would be
personal opinion.
We are not currently certain what analysis we shall be
Are you the oswi who just asked a very similar question?
Regardless, as Josh said, the high-performance way to do this is to
use the specialty C code available through the xts package and the
to.period() functions, specifically to.minutes5
Michael
On Tue, May 8, 2012 at 8:48 AM, Milan
I'd imagine there are better tricks, but I know you can use
as.numeric() if you signal to R that you've got a hex value. See,
e.g.,
http://tolstoy.newcastle.edu.au/R/help/06/08/33758.html
Best,
Michael
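The trick from that link, plus strtoi() for fixed-width blocks (the 6-character block width below is a guess based on the sample string):

```r
# Two base-R ways to turn hex strings into numbers
as.numeric("0xF34")        # the 0x prefix makes as.numeric parse hex -> 3892
strtoi("F34", base = 16L)  # same value, no prefix needed

# Splitting a long string into fixed-width blocks first (width 6 is a guess)
s <- "F34060F81000"
blocks <- substring(s, seq(1, nchar(s), 6), seq(6, nchar(s), 6))
strtoi(blocks, base = 16L)
```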
On Tue, May 8, 2012 at 5:44 AM, Fang zhou.zf...@gmail.com wrote:
Hi all,
Basically, I have
Hello all,
I am doing an aggregation where the aggregating function returns not a
single numeric value but a vector of two elements using return(c(val1,
val2)). I don't know how to access the individual columns of that
vector in the resulting dataframe though. How is this done correctly?
Thanks,
ramakanth reddy ramakanth1387 at gmail.com writes:
I want to perform the Nagelkerke pseudo-R2 test...
can someone tell me whether there is an R function or package available for
doing it, and also how the sample input data should look?
How about
library(sos)
findFn("nagelkerke")
?
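findFn() should turn up candidate packages; for reference, the statistic is also easy to compute by hand from a glm fit. A sketch of the usual formula on a built-in dataset:

```r
# Sketch: Nagelkerke pseudo-R^2 computed directly from glm deviances
fit  <- glm(am ~ mpg, data = mtcars, family = binomial)
null <- glm(am ~ 1,  data = mtcars, family = binomial)
n <- nrow(mtcars)
cox_snell  <- 1 - exp((deviance(fit) - deviance(null)) / n)
nagelkerke <- cox_snell / (1 - exp(-deviance(null) / n))
```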
So this actually looks like something of a tricky one: if you wouldn't
mind sending the result of dput(head(agg)) I can confirm, but here's
my hunch:
Try this:
agg2 <- aggregate(len ~ ., data = ToothGrowth, function(x) c(min(x), max(x)))
print(agg2)
str(agg2)
You'll see that the third column is
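The hint cut off above is presumably that the third column is a matrix rather than two plain columns; a sketch of getting at it (same built-in ToothGrowth data):

```r
# The summary column produced by aggregate() here is a matrix column
agg2 <- aggregate(len ~ ., data = ToothGrowth,
                  function(x) c(min(x), max(x)))
agg2$len[, 1]                      # the per-group minima
agg2$len[, 2]                      # the per-group maxima
flat <- do.call(data.frame, agg2)  # flattens len into len.1 and len.2
```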
rbuxton moyble at hotmail.com writes:
I am new to the package glmmadmb, but need it to perform a
zero-inflated gzlmm with a binomial error structure. I can't seem
to get it to work without getting some strange error messages.
# I am trying to find out what is affecting the number of seabird
Dear useRs,
I am using mgcv version 1.7-16. When I create a model with a few
non-linear terms and a random intercept for (in my case) country using
s(Country, bs = "re"), the representative line in my model (i.e.
approximate significance of smooth terms) for the random intercept
reads:
You have received no answer yet. I think this is largely because there
is no simple answer.
1. You don't need to mess with dummy variables. R takes care of this
itself. Please read up on how to do regression in R.
2. However, it may not work anyway: too many variables/categories for
your data. Or
Hi everyone,
while trying to use 'segmented' (R i386 2.15.0 for Windows 32bit OS) to
determine the breakpoint I got stuck with an error message and I can't find
solution. It is connected with psi value, and the error says:
Error in seg.glm.fit(y, XREG, Z, PSI, weights, offs, opz) :
(Some)
On 05/08/2012 12:14 PM, Zhou Fang wrote:
How many data points do you have?
Currently 200,000. We are likely to have 10 times that in 5 years.
Why buy when you can rent? Unless your hardware is going to be
running 24/7 doing these analyses then you are paying for it to sit
idle. You might
You should think about the cloud as a serious alternative.
I completely agree with Barry. Unless you will utilize your machines
(and by utilize, I mean 100% cpu usage) all the time (including
weekends) you will probably be better off using your funds to purchase blocks
of machines when you need to run
Probably just pointing out the obvious, but:
200,000 data points may not be that many these days, depending on the
dimensionality of the data. Nor is 10 times that number, neither now
nor in 5 years, again depending on data dimensionality. So my question
is, have you actually tried running your
Dear Szymon,
what do you mean by
"it does not work for others.. that fit within similar range"?
Each dataset has its own features and breakpoint estimation is not as
simple as estimation of linear models even if your data fit within
similar range.
I will contact you out of the list for details,
Hi everyone,
Is there any way I can convert more than 400 numeric variables to categorical
variables simultaneously?
as.factor() is really slow, and converts only one variable at a time.
Thank you very much.
ya
How are they arranged currently? And should they be all one set of
levels or different factor sets?
Michael
On Tue, May 8, 2012 at 12:32 PM, ya xinxi...@163.com wrote:
Hi everyone,
Is there any way I can convert more than 400 numeric variables to categorical
variables simultaneously?
Perhaps I have confused the issue. When I initially said data points I
meant one stand alone analysis, not one piece of data. Each analysis
point takes 1.5 seconds. I have not implemented running this over the
whole dataset yet, but I would expect it to take about 5 to 10 hours.
This is
Put a number on it.
"Really slow" is not quantitative. What exactly are we talking
about with respect to the size of the object you are converting? What
have you experienced so far? Exactly what code are you running?
"Simultaneously" would only happen if you parallelized the code and
I think this may be an R 2.14 vs R 2.13 difference: like you, I get
different results for each run in the beta of Revolution R Enterprise 6.0,
which has the R 2.14.2 engine (see below). In earlier versions of R, you
can manage parallel random number streams with the rsprng library.
By the way, you
On Tue, 8 May 2012, Hugh Morgan wrote:
Perhaps I have confused the issue. When I initially said data points I
meant one stand alone analysis, not one piece of data. Each analysis point
takes 1.5 seconds. I have not implemented running this over the whole
dataset yet, but I would expect it to
Hi Jim and Michael,
Thank you very much for replying.
Here is the information about my data. I have a data frame with more than
800 variables (columns) and 3 cases (rows). 400 of those variables are
categorical variables. I used to use Rcmdr to convert variables, however, when
On 05/08/2012 06:02 PM, Rich Shepard wrote:
On Tue, 8 May 2012, Hugh Morgan wrote:
Perhaps I have confused the issue. When I initially said data points I
meant one stand alone analysis, not one piece of data. Each analysis
point
takes 1.5 seconds. I have not implemented running this over the
On 08.05.2012 14:34, Stephen Sefick wrote:
Can you parallelize the code? It really depends on where the bottleneck
is.
HTH,
Stephen
On 05/07/2012 10:37 PM, arunkumar wrote:
Hi
I have a Unix machine with 16 GB of RAM. When I run any R process it uses
only 2 GB of memory. How can I increase
Hi all,
I have some graphs where the values on the X and Y axes are by default
in exponent form like 2e+05 or 1.0e+07. Is it possible to make them in
a more readable form like 10M for 1.0e+07 or 200K for 2e+05?
Thanks and Regards,
- vihan
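In base graphics this can be done by suppressing the default axis and supplying your own labels; a sketch (format_kM is a made-up helper name):

```r
# Sketch: human-readable axis labels in base graphics
format_kM <- function(x) {
  ifelse(x >= 1e6, paste0(x / 1e6, "M"),
         ifelse(x >= 1e3, paste0(x / 1e3, "K"), as.character(x)))
}
vals <- c(2e5, 5e5, 1e6, 1e7)
format_kM(vals)                 # "200K" "500K" "1M" "10M"
plot(vals, 1:4, xaxt = "n")     # draw without the default x axis
axis(1, at = vals, labels = format_kM(vals))
```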
__
So, I'm maintaining someone else's code, which is, as always, a fun thing. One
feature of this code is the use of the 'seek' command.
In ?seek:
We have found so many errors in the Windows implementation of file
positioning that users are advised to use it only at their own
risk,
Hello,
Try
(x is your matrix)
rowMeans(x)
apply(x, 1, function(y) mean(y[y >= 0]))
Hope this helps,
Rui Barradas
york8866 wrote
Dear all,
I have encountered a problem with such a dataset:
1 52 2 5 2 6
1523 2 1 3 3
2 5
Dear all,
I have a database of 93 variables.
I have created a few subsets (10) by inserting different numbers of
variables in each one of them (the maximum is 6 anyway), to represent
different phenomena. Hence, this is the logic:
1 subset = contains a few variables = expresses 1 phenomenon.
Now
Hi, Rui,
I tried your code. It did not work.
thanks,
--
View this message in context:
http://r.789695.n4.nabble.com/Help-deleting-negative-values-in-a-matrix-and-do-statistic-analysis-tp4617792p4618080.html
Sent from the R help mailing list archive at Nabble.com.
Dear community,
First of all, apologies: I'm pretty much a newbie, and maybe have not truly
understood this multiple correspondence analysis.
I have 9 categorical variables with 15, 12, 12, 7, 9, 11, 8, 4, 31 levels
respectively; that is 109 levels.
(*By the way, is there any problem because of having
Given the following example
library(lattice)
attach(barley)
After a long meandering around the web I managed to get a side by side
boxplots through:
bwplot(yield ~ site, data = barley, groups = year,
pch = "|", box.width = 1/3,
auto.key = list(points = FALSE, rectangles = TRUE, space
I am a newbie in R, and I am trying to build an R package but I keep getting
an "unexpected input" error when I try using the build, check or install
commands. I used the following command to generate the skeleton:
package.skeleton("test")
After this I went to the command prompt and to the directory
Hi everyone, I'm a new user of R and I'm trying to translate a linear
optimization problem from Matlab into R.
The matlab code is as follow:
options = optimset('Diagnostics','on');
[x fval exitflag] = linprog(f,A,b,Aeq,beq,lb,ub,[],options);
exitflag
fval
x=round(x);
Where:
f = Linear
Here is an example that may help. I found the idea somewhere in the R-help
archives but don't have a reference any more.
mydata <- data.frame(a1 = 1:5, a2 = 2:6, a3 = 3:7)
str(mydata)
mydata[, 1:2] <- lapply(mydata[, 1:2], factor)
str(mydata)
so basically all you need to do is specify what
Quite likely, but we need to know what you are doing and what graphics package
you are using.
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
John Kane
Kingston ON Canada
-Original
On Tue, 8 May 2012, Uwe Ligges wrote:
If it is a 64-bit R, it will take as much memory as it needs unless your
admin applied some restrictions.
Some BIOS versions limit the memory the system sees. When I bought my Dell
Latitude E5410 in June 2010 it came with BIOS version A03 and supported
It looks fine to me. Why do you say it does not work?
Any error messages?
John Kane
Kingston ON Canada
-Original Message-
From: yu_y...@hotmail.com
Sent: Tue, 8 May 2012 10:06:07 -0700 (PDT)
To: r-help@r-project.org
Subject: Re: [R] Help deleting negative values in a matrix, and
I made this rather cool plot which I am quite pleased with:
http://brainimaging.waisman.wisc.edu/~perlman/data/BeeswarmLinesDemo.pdf
However, I feel there must be a better way to do it than what I did. I'm
attaching the code to create it, which downloads the data by http so it should
run for
Assuming the 400 numeric variables are integers this will be simpler if you
can identify the columns to be converted to factors as a block of column
numbers (e.g. 1:400, or 401:800)
# Create some data
X <- data.frame(matrix(nrow = 20, ncol = 20))
for (i in 1:10) X[, i] <- round(runif(20, .5, 5.5), 0)
Hello, I would like to write a function that makes a grouping variable for
some panel data. The grouping variable is made conditional on the begin
year and the end year. Here is the code I have written so far.
name <- c(rep('Frank', 5), rep('Tony', 5), rep('Edward', 5));
begin <- c(seq(1990, 1994),
Hi, John,
the code ran well.
however, somehow, the means were not calculated correctly using the
following code.
test <- read.csv("Rtestdataset.csv", as.is = TRUE, header = TRUE)
test <- data.frame(test)
test
rowMeans(test)
apply(test, 1, function(y) mean(y >= 0))
Is there anything wrong?
thanks,
--
View this
On 8 May 2012 19:47, John Kane jrkrid...@inbox.com wrote:
Quite likely, but we need to know what you are doing and what graphics
package you are using.
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible
Hello,
york8866 wrote
Hi, John,
the code ran well.
however, somehow, the means were not calculated correctly using the
following code.
test <- read.csv("Rtestdataset.csv", as.is = TRUE, header = TRUE)
test <- data.frame(test)
test
rowMeans(test)
apply(test, 1, function(y) mean(y >= 0))
Is there
On Tue, May 8, 2012 at 9:32 AM, maxbre mbres...@arpa.veneto.it wrote:
and then with the superposition of relative average values to the boxplots,
i.e. something like:
panel.points(…, mean.values, ..., pch = 17)
Almost. You need to give panel.points the new x, and make sure the
right
Hi,
On Tue, May 8, 2012 at 2:17 PM, Geoffrey Smith g...@asu.edu wrote:
Hello, I would like to write a function that makes a grouping variable for
some panel data . The grouping variable is made conditional on the begin
year and the end year. Here is the code I have written so far.
name <-
Hi All,
Sorry for posting the same question again. I was not sure if the message
was sent initially, since it was my first post to the forum.
Can the MNP package available in R be used to analyze panel data as well?
I.e., if there are 3 observed discrete choices for three time periods for
the
Sorry, yes: I changed it before posting to more closely match
the default value in the pseudocode. That's a very minor issue: the
very last value in the nested ifelse() statements is what's used by
default.
Sarah
On Tue, May 8, 2012 at 2:46 PM, arun smartpink...@yahoo.com wrote:
HI
On May 8, 2012, at 2:23 PM, Vihan Pandey wrote:
On 8 May 2012 19:47, John Kane jrkrid...@inbox.com wrote:
Quite likely, but we need to know what you are doing and what
graphics package you are using.
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide
Kristi,
It's a little unclear what exactly you're trying to do. However, I
recently wanted to run a series of ANOVAs in a for loop and found this R
Help thread useful:
http://tolstoy.newcastle.edu.au/R/e6/help/09/01/2679.html
I also found Chapter 6 of the following book helpful:
Zuur, A. F.,
Actually I meant a working example and some data (See ?dput for a handy way to
supply data)
It is also a good idea to include the information from sessionInfo()
I think David W has a good approach.
Otherwise you might just want to write the axis yourself.
=
x
Dear all,
For the following code, I have the error message
Error in uniroot(f1star, lower = -10, upper = 0, tol = 1e-10, lambda =
lam[i], :
f() values at end points not of opposite sign.
It seems the problem occurs when lambda is equal to 0.99.
However, there should be a solution for
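That uniroot() error means f1star has the same sign at both endpoints for that lambda (possibly from numerical underflow near lambda = 0.99). A guard sketch, with a stand-in for f1star:

```r
# Sketch: check the endpoint signs before calling uniroot
# (f1star here is a stand-in, not the poster's function)
f1star <- function(x, lambda) exp(x) - lambda
lam <- 0.99
lo <- -10; up <- 0
if (f1star(lo, lam) * f1star(up, lam) < 0) {
  root <- uniroot(f1star, lower = lo, upper = up, tol = 1e-10,
                  lambda = lam)$root
} else {
  stop("same sign at both endpoints; widen or split the interval")
}
```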
On Tue, May 08, 2012 at 10:21:59AM -0700, Haio wrote:
Hi everyone, I'm a new user of R and I'm trying to translate a linear
optimization problem from Matlab into R.
The matlab code is as follow:
options = optimset('Diagnostics','on');
[x fval exitflag] =
...still new to R and trying to figure this one out.
I have a number of variables x, y, z, etc. in a data frame.
Each contains a 2 digit year (e.g., 80, 81, 82) representing the
first year that something occurred. Each variable represents a
different type of event.
If the event did not
It's neater if you use dput() to give your data rather than just
copying it into the email, but anyway:
testdata <- read.table("clipboard", header = TRUE)
apply(testdata, 1, function(x) if (all(x == 0)) 0 else min(x[x > 0]))
[1] 80 76 86 0
Sarah
On Tue, May 8, 2012 at 3:50 PM, Jeff
On Tue, May 08, 2012 at 02:50:47PM -0500, Jeff wrote:
...still new to R and trying to figure this one out.
I have a number of variables x, y, z, etc. in a data frame.
Each contains a 2 digit year (e.g., 80, 81, 82) representing the
first year that something occurred. Each variable
R Users-
I have been trying to automate a manual code that I have developed for
calling in a .csv file, isolating certain rows and columns that correspond
to specified months:
something to the effect
i=name.csv
N=length(i$month)
iphos1=0
iphos2=0
isphos3=0
for i=1,N
if month=1
Can you show us the file that's throwing an error? This suggests
there's something syntactically invalid in your code, but it's
impossible to say what without seeing it.
Best,
Michael
On Tue, May 8, 2012 at 1:00 PM, abhisarihan abhisari...@gmail.com wrote:
I am a newbie in R, and I am trying
You did not specify any object in the function. Thus R is building the
package "test" with all the objects present in your session when you are
calling the package.skeleton function. I suppose that one of these
objects is causing a problem. I suggest you list all the
variables/functions necessary
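A sketch of that suggestion ("test" and myfun are placeholder names); the list= argument restricts the skeleton to the named objects:

```r
# Only the objects named in list= end up in the skeleton
myfun <- function(x) x + 1
package.skeleton("test", list = "myfun", path = tempdir())
```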
I set up a local repo for testing packages. My packages are not
showing up from the repository when viewed by Linux clients. I suspect
this is a web administrator/firewall issue, but it could be I created
the repo wrongly. I am supposed to run write_PACKAGES separately in
each R-version folder.
I have not done this myself, but reading through your book I see no reference
to actual sample file names. I mention this because UNIX-ish operating systems
download the tar.gz source archives while Windows works with the zip binary
packages, and I can't tell what files you are putting in the
Sorry, my mistake.
it works very well!!!
thanks,
Rui Barradas wrote
Hello,
york8866 wrote
Hi, John,
the code ran well.
however, somehow, the means were not calculated correctly using the
following code.
test <- read.csv("Rtestdataset.csv", as.is = TRUE, header = TRUE)
test <-
Hi Sarah,
I ran the same code from your reply email. For makegroup2, the results are
0 in place of NA.
makegroup1 <- function(x,y) {
+ group <- numeric(length(x))
+ group[x <= 1990 & y >= 1990] <- 1
+ group[x <= 1991 & y >= 1991] <- 2
+ group[x <= 1992 & y >= 1992] <- 3
+ group
+ }
makegroup2 <-