On Tuesday, 09 April 2013 at 10:10 +0300, Ventseslav Kozarev, MPP wrote:
Hi,
I bumped into a serious issue while trying to analyse some texts in
Bulgarian language (with the tm package). I import a tab-separated csv
file, which holds a total of 22 variables, most of which are text cells
Please keep your posts on the mailing list... I don't do one-on-one consulting
without a contract.
Please carefully re-read my last email and follow its advice before posting
again. You may yet have a problem that is on topic here, but until you can give
us a reproducible example that runs or
a) This is not the RStudio support forum
b) Don't post using HTML on this list. It messes up R code. Truly.
c) You have quoted a snippet of instructions to follow, but not provided what
you actually did.
d) You also include errors having to do with knitr and latex, which seem
unrelated to the
See ?readBin - works also with raw objects.
Henrik
On Apr 3, 2013 1:18 AM, Mike Chen chenminyi1...@gmail.com wrote:
I know that there is a function to convert binary data to string named
rawToChar, but I wonder if there is any similar function for Integer and
float. I need to read some binary file
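A minimal sketch of Henrik's pointer: readBin() is the numeric counterpart of rawToChar() and decodes raw bytes into integer or double vectors (writeBin() is used here only to fabricate example bytes; with a real file you would pass a connection instead).

```r
# rawToChar() is for character data; readBin() decodes raw bytes
# into numeric types such as integer and double.
r_int <- writeBin(42L, raw())            # 4 raw bytes, native endianness
readBin(r_int, what = "integer", n = 1)  # 42

r_dbl <- writeBin(3.14, raw())           # 8 raw bytes
readBin(r_dbl, what = "double", n = 1)   # 3.14
```

For a binary file, readBin(con, what = "double", n = k) works the same way on a connection, with size = and endian = available when the file was written by another program.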
Hi, Pancho,
1. Quote:
PLEASE do ... provide commented, minimal, self-contained, reproducible
code.
2. Changes of variables of attached data frames are -- usually -- not
permanent (i.e., disappear when the data frame is detached again).
3. The use of detach(dummy) is to detach the data
On Thu, Mar 28, 2013 at 6:21 PM, Andras Farkas motyoc...@yahoo.com wrote:
Dear All,
wonder if you have a thought on the following: I am using the
round(x,digits=3) command, but some of my values come out as:
0.07099 AND 0.06901. Any thoughts on why this may be
On 28/03/2013 2:21 PM, Andras Farkas wrote:
Dear All,
wonder if you have a thought on the following: I am using the round(x,digits=3) command, but some of my values come out as: 0.07099 AND 0.06901. Any thoughts on why this may be happening or how to eliminate the
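For what it's worth, round() returns the double closest to the rounded value; if the goal is a fixed number of printed digits, do the rounding at display time. A sketch with a made-up input standing in for the poster's data:

```r
x <- 0.0709901          # hypothetical value, not the poster's actual data
round(x, digits = 3)    # nearest representable double to 0.071
sprintf("%.3f", x)      # "0.071" -- rounding applied only for display
```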
Hello,
Maybe using ?parent.frame. Something like
fn2 <- function(x) {
x1 <- function(y = x) {
env <- parent.frame()
env$y1 <- log(y)
y2 <- y^2
return(0)
}
### I want the variable 'y1' to be available here.
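Completing the fragment above into a runnable sketch: when x1() is called from fn2's body, parent.frame() inside x1 refers to fn2's evaluation frame, so the assignment to env$y1 creates y1 exactly where the comment wants it.

```r
fn2 <- function(x) {
  x1 <- function(y = x) {
    env <- parent.frame()   # the frame x1() was called from, i.e. fn2's frame
    env$y1 <- log(y)        # create 'y1' in that frame
    y2 <- y^2
    return(0)
  }
  x1()
  y1                        # 'y1' is now visible here
}
fn2(1)                      # returns log(1) = 0
```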
Subject: [R] question about nls
On Thu, Mar 14, 2013 at 5:07 AM, menglaomen...@163.com wrote:
Hi,all:
I met a problem of nls.
My data:
x    y
60 0.8
80 6.5
100 20.5
120 45.9
On Fri, Mar 15, 2013 at 9:45 AM, Prof J C Nash (U30A) nas...@uottawa.ca wrote:
Actually, it likely won't matter where you start. The Gauss-Newton direction
is nearly always close to 90 degrees from the gradient, as seen by turning
trace=TRUE in the package nlmrt function nlxb(), which does a
As Gabor indicates, using a start based on a good approximation is
usually helpful, and nls() will generally find solutions to problems
where there are such starts, hence the SelfStart methods. The Marquardt
approaches are more of a pit-bull approach to the original
specification. They grind
I decided to follow up my own suggestion and look at the robustness of
nls vs. nlxb. NOTE: this problem is NOT one that nls() would usually be
applied to. The script below is very crude, but does illustrate that
nls() is unable to find a solution in 70% of tries where nlxb (a
Marquardt
On Thu, Mar 14, 2013 at 5:07 AM, meng laomen...@163.com wrote:
Hi,all:
I met a problem of nls.
My data:
x    y
60 0.8
80 6.5
100 20.5
120 45.9
I want to fit exp curve of data.
My code:
nls(y ~ exp(a + b*x)+d,start=list(a=0,b=0,d=1))
Error in nlsModel(formula, mf, start, wts) :
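At a = b = 0 the gradient columns for a and d are both constant (exp(0) and 1), which is why nls() fails here. A log-linear fit that temporarily ignores d gives workable starting values; a sketch using the posted data (convergence of the final nls() call is not guaranteed, so it is wrapped in try()):

```r
x <- c(60, 80, 100, 120)
y <- c(0.8, 6.5, 20.5, 45.9)

# Rough starting values from log(y) ~ a + b*x (valid while d is small)
init <- coef(lm(log(y) ~ x))

fit <- try(nls(y ~ exp(a + b * x) + d,
               start = list(a = init[[1]], b = init[[2]], d = 0)),
           silent = TRUE)
```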
I do not use these functions myself. But, why don't you do a little test
on some paths that you can control. Put the same files in directories with
and without spaces and see if you can get the functions to work.
Jean
On Thu, Feb 28, 2013 at 2:54 PM, Lane, Michael mfl...@odu.edu wrote:
This
On 01/03/2013 15:49, Adams, Jean wrote:
I do not use these functions myself. But, why don't you do a little test
on some paths that you can control. Put the same files in directories with
and without spaces and see if you can get the functions to work.
In the case of RODBC, it never uses
.
-Original Message-
From: Prof Brian Ripley [mailto:rip...@stats.ox.ac.uk]
Sent: Friday, March 01, 2013 12:08 PM
To: Adams, Jean
Cc: Lane, Michael; r-help@R-project.org
Subject: Re: [R] Question concerning directory/path names
On 01/03/2013 15:49, Adams, Jean wrote:
I do not use these functions
On Sat, Feb 9, 2013 at 11:43 AM, Uwe Ligges lig...@statistik.tu-dortmund.de
wrote:
On 08.02.2013 20:14, David Romano wrote:
Hi everyone,
I'm not exactly sure how to ask this question most clearly, but I hope
that
giving the context in which it occurs for me will help: I'm trying to
On 08.02.2013 20:14, David Romano wrote:
Hi everyone,
I'm not exactly sure how to ask this question most clearly, but I hope that
giving the context in which it occurs for me will help: I'm trying to
compare the brain images of two patient populations; each image is composed
of voxels (the
After applying the NLS for a model like y=exp(a*x), and I get
a result showing the summary as:
Estimate Std. Error t value Pr(>|t|)
2.6720 1.4758 1.811 0.3212
My question is: what does this t-statistic test? And what's the
meaning of Pr?
t is (estimate/std.err) and can be used to
Thanks, Ellison. Another question is if this p-value is a good parameter to
test if the fitting is good, as you said this test is only for the null that the
coefficient is 0 (a is 0 in y=exp(a*x), right?)?
On Thu, Feb 7, 2013 at 10:48 AM, S Ellison s.elli...@lgcgroup.com wrote:
After applying
Liang:
In nonlinear models especially (and more generally, also), small p
values are not reliable indicators of whether a fit is or is
not 'good'. I would strongly suggest that you consult with your local
statistician -- this is a (complicated, as it depends on the meaning
of good) statistical
Thanks, Ellison. Another question is if this p-value is a good parameter to
test if the fitting is good,
Absolutely not.
On Sun, Feb 3, 2013 at 1:50 AM, Bert Gunter gunter.ber...@gene.com wrote:
A related approach which, if memory serves, was originally in S eons
ago, is to define a doc attribute of any function (or object, for
that matter) that you wish to document that contains text for
documentation and a
On 02/02/2013 02:21 AM, li li wrote:
Thanks so much for the reply, Ista. I used plotrix library.
Here is my example:
xx <- seq(0.05, 0.95, by=0.05)
lower <- c(-2.896865, -2.728416, -2.642574, -2.587724, -2.548672, -2.518994,
-2.495409, -2.476031, -2.459662, -2.445513, -2.433014, -2.421739,
The normal expectations of an R user is that useful functions you want to share
are in packages, which include help files. There is no way to both avoid the
package development process and offer help to the user within R. Read the
Writing R Extensions document for the most up-to-date
On Sat, Feb 2, 2013 at 6:31 PM, Chee Chen chen...@purdue.edu wrote:
Dear All,
I would like to ask a question on how to incorporate into an R script help
information for the user. I vaguely recall that I saw some instructions on an
R manual, but am not able to figure them out. Hereunder is
A related approach which, if memory serves, was originally in S eons
ago, is to define a doc attribute of any function (or object, for
that matter) that you wish to document that contains text for
documentation and a doc() function of the form:
doc <- function(obj) cat(attr(obj, "doc"))
used as:
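Filling in Bert's sketch with a hypothetical documented function:

```r
# Attach free-text documentation to any object via a "doc" attribute,
# then print it with a small helper.
doc <- function(obj) cat(attr(obj, "doc"), "\n")

square <- function(x) x^2
attr(square, "doc") <- "square(x): return x squared, elementwise."

doc(square)   # prints the attached doc string
```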
There are many plotCI functions in many different packages... which
one are you referring to? Also please construct a reproducible example
illustrating your problem.
Best,
Ista
On Thu, Jan 31, 2013 at 11:58 PM, li li hannah@gmail.com wrote:
Hi all,
In my plotCI function, the argument x
Thanks so much for the reply, Ista. I used plotrix library.
Here is my example:
xx <- seq(0.05, 0.95, by=0.05)
lower <- c(-2.896865, -2.728416, -2.642574, -2.587724, -2.548672, -2.518994,
-2.495409, -2.476031, -2.459662, -2.445513, -2.433014, -2.421739, -2.411344,
-2.401536, -2.392040, -2.382571,
You haven't given any y values to plotCI.
Try, for example,
plotCI(xx, (lower+upper)/2, ui=upper, (etc)
What you got was in effect
plot( seq(along=xx), xx )
which is standard behavior for plot() when no y values are supplied.
-Don
--
Don MacQueen
Lawrence Livermore National Laboratory
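Don's point (plotCI needs y values) can also be illustrated without plotrix, using base graphics; the bounds below are shortened, made-up stand-ins because the thread's full vectors are truncated above:

```r
xx    <- seq(0.05, 0.95, by = 0.05)
lower <- -2.9 + 0.5 * xx            # hypothetical lower CI bounds
upper <- -2.1 + 0.5 * xx            # hypothetical upper CI bounds
mid   <- (lower + upper) / 2        # the y values plotCI was missing

plot(xx, mid, ylim = range(lower, upper), pch = 19)
arrows(xx, lower, xx, upper, angle = 90, code = 3, length = 0.03)
```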
Hello,
Something like this?
do.call(rbind, lapply(split(as.data.frame(mat), match_df$criteria),
colSums))
Hope this helps,
Rui Barradas
On 24-01-2013 19:39, Christofer Bogaso wrote:
Hello again,
Let's say I have 1 matrix and 1 data frame:
mat <- matrix(1:15, 5)
match_df <-
Hi,
You could also use:
mat <- matrix(1:15, 5)
set.seed(5)
match_df <- data.frame(Seq = 1:5, criteria = sample(letters[1:5], 5, replace =
TRUE), stringsAsFactors = FALSE)
library(plyr)
res <- daply(as.data.frame(mat), .(match_df$criteria), colSums)
res
#match_df$criteria V1 V2 V3
#
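Both answers above can be checked with a self-contained version of the thread's data (base R only):

```r
mat <- matrix(1:15, 5)
set.seed(5)
match_df <- data.frame(Seq = 1:5,
                       criteria = sample(letters[1:5], 5, replace = TRUE),
                       stringsAsFactors = FALSE)

# Column sums of the rows of 'mat' within each criteria group
res <- do.call(rbind,
               lapply(split(as.data.frame(mat), match_df$criteria), colSums))
res
```

Every cell of mat is used exactly once, so sum(res) equals sum(mat), a quick sanity check on the grouping.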
-
From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
project.org] On Behalf Of arun
Sent: Thursday, January 24, 2013 2:27 PM
To: Christofer Bogaso
Cc: R help
Subject: Re: [R] Question on matrix calculation
Hi,
You could also use:
mat <- matrix(1:15, 5)
set.seed(5)
match_df
On Jan 12, 2010, at 2:13 AM, Benedikt Drosse wrote:
Dear R-Users,
I have a question concerning the determination of breakpoints and
comparison of slopes from broken-line regression models. Although
this is rather a standard problem in data analysis, all information
I gathered so far,
?round explicitly says:
Note that for rounding off a 5, the IEC 60559 standard is expected to be
used, 'go to the even digit'. Therefore round(0.5) is 0 and round(-1.5) is
-2. However, this is dependent on OS services and on representation error
(since e.g. 0.15 is not represented exactly,
On Jan 3, 2013, at 11:44 AM, Christofer Bogaso bogaso.christo...@gmail.com
wrote:
I happened to see these:
round(.5, 0)
[1] 0
round(1.5, 0)
[1] 2
round(2.5, 0)
[1] 2
round(3.5, 0)
[1] 4
round(4.5, 0)
[1] 4
What is the rule here?
Should not round(.5, 0) = 1, round(2.5, 0) = 3
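The rule quoted from ?round earlier in the thread can be seen directly:

```r
# IEC 60559 "round half to even": ties go to the nearest even digit
sapply(c(0.5, 1.5, 2.5, 3.5, 4.5), round)   # 0 2 2 4 4

# Representation error is the other ingredient: 0.15 is stored
# slightly below 0.15, so decimal rounding can surprise as well
sprintf("%.17f", 0.15)
```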
On 03-01-2013, at 18:44, Christofer Bogaso bogaso.christo...@gmail.com wrote:
I happened to see these:
round(.5, 0)
[1] 0
round(1.5, 0)
[1] 2
round(2.5, 0)
[1] 2
round(3.5, 0)
[1] 4
round(4.5, 0)
[1] 4
What is the rule here?
Should not round(.5, 0) = 1, round(2.5, 0) = 3
Hello,
The format AM/PM should be for display purposes only; when you use
format() you get a variable of class character, not of classes
"POSIXct" "POSIXt". Produce a variable y with as.POSIXct (without
AM/PM) for arithmetic and another, formatted, for display.
Hope this helps,
Rui Barradas
Em
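A sketch of Rui's point, with a made-up timestamp:

```r
y <- as.POSIXct("2013-01-01 23:15:00", tz = "UTC")  # keep this for arithmetic

y + 30 * 60                # still POSIXct, so date arithmetic works
format(y, "%I:%M %p")      # 12-hour display with AM/PM -- a character string
```

The formatted value is for printing only; all computation should stay on the POSIXct object.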
On Dec 31, 2012, at 9:40 PM, Christofer Bogaso wrote:
On 01 January 2013 03:00:18, David Winsemius wrote:
On Dec 31, 2012, at 11:57 AM, David Winsemius wrote:
On Dec 31, 2012, at 11:54 AM, Christofer Bogaso wrote:
On 01 January 2013 01:29:53, David Winsemius wrote:
On Dec 31, 2012, at
#1 2013-01-01 11:00:00 AM
#2 2013-01-01 11:15:00 AM
A.K.
- Original Message -
From: Christofer Bogaso bogaso.christo...@gmail.com
To: David Winsemius dwinsem...@comcast.net; David L Carlson
dcarl...@tamu.edu
Cc: r-help@r-project.org
Sent: Tuesday, January 1, 2013 12:40 AM
Subject: Re: [R
To: David Winsemius dwinsem...@comcast.net; David L Carlson
dcarl...@tamu.edu
Cc: r-help@r-project.org
Sent: Tuesday, January 1, 2013 12:40 AM
Subject: Re: [R] Question on creating Date variable
On 01 January 2013 03:00:18, David Winsemius wrote:
On Dec 31, 2012, at 11:57 AM, David Winsemius
Hello,
Try the following.
x <- scan(text = "11.00 11.25 11.35 12.01 11.14 13.00 13.25 13.35 14.01
13.14 14.50 14.75 14.85 15.51 14.64 ")
hours <- floor(x)
mins <- (100*x) %% 100
as.POSIXct(paste(Sys.Date(), hours, mins), format = "%Y-%m-%d %H %M")
As you can see, there are three values of 'x'
On Dec 31, 2012, at 9:12 AM, Christofer Bogaso wrote:
Hello all,
Let say I have following (numeric) vector:
x
[1] 11.00 11.25 11.35 12.01 11.14 13.00 13.25 13.35 14.01 13.14
14.50 14.75 14.85 15.51 14.64
Now, I want to create a 'Date' variable (i.e. I should be able to do
all
On 01 January 2013 00:17:50, David Winsemius wrote:
On Dec 31, 2012, at 9:12 AM, Christofer Bogaso wrote:
Hello all,
Let say I have following (numeric) vector:
x
[1] 11.00 11.25 11.35 12.01 11.14 13.00 13.25 13.35 14.01 13.14 14.50
14.75 14.85 15.51 14.64
Now, I want to create a 'Date'
On Dec 31, 2012, at 11:35 AM, Christofer Bogaso wrote:
On 01 January 2013 00:17:50, David Winsemius wrote:
On Dec 31, 2012, at 9:12 AM, Christofer Bogaso wrote:
Hello all,
Let say I have following (numeric) vector:
x
[1] 11.00 11.25 11.35 12.01 11.14 13.00 13.25 13.35 14.01 13.14
Hi,
Try this:
x <- c(11.00,11.25,11.35,12.01,11.14,13.00,13.25,13.35,14.01,13.14,14.50,14.75,14.85,15.51,14.64)
x[substr(x,4,5)>=60] <- (x[substr(x,4,5)>=60]-.60)+1
res <- sort(as.POSIXct(paste("2012-12-31", sprintf("%.2f",x), sep=" "),
format="%Y-%m-%d %H.%M")) #
ifelse(format(res,"%H:%M")>="12", paste(res,"PM"),
You can run that as it is. The term to search for on Google is 'dummy
coding'.
Jeremy
On 28 December 2012 07:45, Lorenzo Isella lorenzo.ise...@gmail.com wrote:
where x3 is a dichotomous variable assuming only 0 and 1 values (x1 and x2
are continuous variables).
Is there any particular
Hi,
You can follow this example:
test <- structure(list(V1 = c(0L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 3L,
3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L), V2 = c(12L, 10L, 4L, 6L,
7L, 13L, 21L, 23L, 20L, 18L, 17L, 16L, 27L, 33L, 11L, 8L, 19L,
16L, 9L)), .Names = c("V1", "V2"), class = "data.frame", row.names =
Hi, T. Bal,
homework? Take a look at
?tapply
Regards -- Gerrit
On Tue, 4 Dec 2012, T Bal wrote:
Hi,
I have the following data:
0 12
1 10
1 4
1 6
1 7
1 13
2 21
2 23
2 20
3 18
3 17
3 16
3 27
3 33
4 11
4 8
4 19
4 16
4 9
In this data file I would
On 04-12-2012, at 08:59, T Bal wrote:
Hi,
I have the following data:
0 12
1 10
1 4
1 6
1 7
1 13
2 21
2 23
2 20
3 18
3 17
3 16
3 27
3 33
4 11
4 8
4 19
4 16
4 9
In this data file I would like to sum the numbers of second column
Hi,
Imagine the data you have is in a data frame, c, with columns a and b.
Then you can do this:
res <- tapply(b, c[,-2], sum)
res[is.na(res)] <- 0
res
0 1 2 3 4
12 40 64 111 63
Hope it helps,
José
José Iparraguirre
Chief Economist
Age UK
HI,
You can use either ?tapply(), ?aggregate(), ?ddply() from library(plyr)
dat1 <- read.table(text="
0 12
1 10
1 4
1 6
1 7
1 13
2 21
2 23
2 20
3 18
3 17
3 16
3 27
3 33
4 11
4 8
4 19
4 16
4 9
", sep="", header=FALSE)
with(dat1,aggregate(dat1[,2],by=list(V1=dat1[,1]),sum))
# V1 x
#1
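All three suggestions in this thread give the same totals; a self-contained check with tapply() on the posted data:

```r
dat1 <- data.frame(V1 = c(0, 1,1,1,1,1, 2,2,2, 3,3,3,3,3, 4,4,4,4,4),
                   V2 = c(12, 10,4,6,7,13, 21,23,20, 18,17,16,27,33, 11,8,19,16,9))

# Sum the second column within each level of the first
with(dat1, tapply(V2, V1, sum))
#   0   1   2   3   4
#  12  40  64 111  63
```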
DID YOU SOLVE THE PROBLEM?
--
View this message in context:
http://r.789695.n4.nabble.com/question-about-A2R-tp4640539p4650952.html
Sent from the R help mailing list archive at Nabble.com.
R-help@r-project.org mailing list
Most likely the value you are using in the 'hist' function is a
factor. It would help if you included an 'str' of the object you are
using.
On Sun, Nov 18, 2012 at 4:45 PM, 9man lucas_9...@hotmail.com wrote:
Hello all,
I hope someone of you can help me out, I have searched other posts as well
You tried setting shorter variable names, but the two lines of your data set
that you gave us still have the original names, eg.
Telephone.lines..per.100.people.
--
David L Carlson
Associate Professor of Anthropology
Texas A&M University
College Station,
On 12-11-17 1:51 PM, John Muschelli wrote:
That works! Thanks for the help, but I can't seem to figure out why
this happens with even one contour in the example below:
Disclaimer: using MNI template from FSL
(http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Atlases).
Firefox still has array initialiser
On 12-11-17 3:11 PM, Duncan Murdoch wrote:
On 12-11-17 1:51 PM, John Muschelli wrote:
That works! Thanks for the help, but I can't seem to figure out why
this happens with even one contour in the example below:
Disclaimer: using MNI template from FSL
That works! Thanks for the help, but I can't seem to figure out why this
happens with even one contour in the example below:
Disclaimer: using MNI template from FSL (
http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Atlases).
Firefox still has "array initialiser too large" for this one contour, but
Safari
On 12-11-16 5:59 PM, John Muschelli wrote:
I saw that in rgl:::writeWebGL that Polygons will only be rendered as
filled; there is no support in WebGL for wireframe or point rendering. I
found that you can easily use contour3d to make reproducible contour web
figures, such as (taken from
On 12-11-16 7:09 PM, John Muschelli wrote:
The contour its just half a brain and the vertices are not surfaces and
are filled in
Sounds like a bug in the browser. When I try it in Firefox 16.0.2 it
doesn't display properly; the error log (found via Tools | Web developer
| Error console) has
On 11/06/2012 07:03 AM, Simon Knapp wrote:
I don't understand why I get the following results. I define two classes
'Base' and 'Derived', the latter of which 'contains' the first. I then
define a generic method 'test' and overload it for each of these classes. I
call 'callNextMethod()' in the
Simon Knapp sleepingw...@gmail.com writes:
I don't understand why I get the following results. I define two classes
'Base' and 'Derived', the latter of which 'contains' the first. I then
define a generic method 'test' and overload it for each of these classes. I
call 'callNextMethod()' in the
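A minimal reconstruction of the setup Simon describes (the slot and the method bodies are assumptions, since the original code is not shown in full above):

```r
library(methods)

setClass("Base", representation(x = "numeric"))
setClass("Derived", contains = "Base")       # 'Derived' contains 'Base'

setGeneric("test", function(obj) standardGeneric("test"))
setMethod("test", "Base",    function(obj) "Base")
setMethod("test", "Derived", function(obj) c("Derived", callNextMethod()))

test(new("Derived", x = 1))   # "Derived" "Base"
```

callNextMethod() inside the "Derived" method dispatches to the "Base" method, so both run in order.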
On 11/2/2012 11:11 AM, Ni, Shenghua wrote:
r <- c(1,1,9,1,1,1)
col_no <- cut(r,c(0,2,3,6,8,10,100))
levels(col_no) <- c("<2%","2-4%","4-6%","6-8%","8-10%",">10%")
col_no
[1] <2%   <2%   8-10% <2%   <2%   <2%
Levels: <2% 2-4% 4-6% 6-8% 8-10% >10%
Yes.
(Or, I get the same output from this code. What is the question?)
On Nov 2, 2012, at 11:51 AM, Berend Hasselman wrote:
On 02-11-2012, at 19:11, Ni, Shenghua wrote:
r <- c(1,1,9,1,1,1)
col_no <- cut(r,c(0,2,3,6,8,10,100))
levels(col_no) <- c("<2%","2-4%","4-6%","6-8%","8-10%",">10%")
col_no
[1] <2%   <2%   8-10% <2%   <2%   <2%
Levels: <2% 2-4% 4-6% 6-8% 8-10% >10%
Where is
On Nov 6, 2012, at 22:16 , Brian Diggs wrote:
On 11/2/2012 11:11 AM, Ni, Shenghua wrote:
r <- c(1,1,9,1,1,1)
col_no <- cut(r,c(0,2,3,6,8,10,100))
levels(col_no) <- c("<2%","2-4%","4-6%","6-8%","8-10%",">10%")
col_no
[1] <2%   <2%   8-10% <2%   <2%   <2%
Levels: <2% 2-4% 4-6% 6-8% 8-10% >10%
Yes.
(Or, I get the
On 02-11-2012, at 19:11, Ni, Shenghua wrote:
r <- c(1,1,9,1,1,1)
col_no <- cut(r,c(0,2,3,6,8,10,100))
levels(col_no) <- c("<2%","2-4%","4-6%","6-8%","8-10%",">10%")
col_no
[1] <2%   <2%   8-10% <2%   <2%   <2%
Levels: <2% 2-4% 4-6% 6-8% 8-10% >10%
Where is the question?
Berend
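The '<' and '>' characters in the labels above were stripped by the archive; assuming the outer labels were "<2%" and ">10%", the labels can also be supplied directly to cut(), avoiding the separate levels()<- step:

```r
r <- c(1, 1, 9, 1, 1, 1)
col_no <- cut(r, breaks = c(0, 2, 3, 6, 8, 10, 100),
              labels = c("<2%", "2-4%", "4-6%", "6-8%", "8-10%", ">10%"))
as.character(col_no)   # "<2%" "<2%" "8-10%" "<2%" "<2%" "<2%"
```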
Thank you for your reply and help!!
Best Regards, Young.
--
View this message in context:
http://r.789695.n4.nabble.com/Question-about-survdiff-in-for-loop-tp4646707p4646827.html
On Fri, Oct 19, 2012 at 8:02 PM, Sando chocosa...@daum.net wrote:
Hi everyone!!
I have dataset composed of a numbers of survival analyses.
( for batch survival analyses by using for-loop) .
Here is the code!!
###
dim(svsv)
Num_t <- dim(svsv)
Num <- Num_t[2] # These are predictors !!
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf
Of Tyler Ritchie
Sent: Tuesday, October 16, 2012 2:23 PM
To: r-help@r-project.org
Subject: [R] Question about use of sort.list(sort.list(x)) in rank.r
I was looking at rank()
It is easy to get a cumulative hazard curve.
First, decide what values of age, a, and b you want curves for
tdata <- data.frame(age=55, a=2, b=6)
Get the curves, there will be one for each strata in the output
sfit <- survfit(coxPhMod, newdata = tdata)
Plot them
plot(sfit, fun='cumhaz',
lau pel wrote
Hi,
I'm going crazy trying to plot a quite simple graph.
I need to plot the estimated hazard rate from a Cox model.
Supposing the model is like this:
coxPhMod <- coxph(Surv(TIME, EV) ~ AGE + A + B + strata(C), data = data)
with 4 levels for C.
How can I obtain a graph with 4 estimated (better
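Terry's recipe, sketched with the bundled lung data in place of the poster's (age as the covariate, sex as the stratum variable; both are assumptions for illustration):

```r
library(survival)

fit  <- coxph(Surv(time, status) ~ age + strata(sex), data = lung)
nd   <- data.frame(age = 60)           # covariate values for the curves
sfit <- survfit(fit, newdata = nd)     # one curve per stratum
plot(sfit, fun = "cumhaz", lty = 1:2)  # estimated cumulative hazards
```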
Hello,
Inline.
On 20-09-2012 22:48, Benjamin Caldwell wrote:
Hi,
I need some help with making a function a bit more elegant.
Yes you do!
Below, your first function, for instance, becomes a one liner.
Trick: R is vectorized. Use functions that act on whole vectors,
avoiding loops.
On 12-09-17 06:00 AM, r-help-requ...@r-project.org wrote:
Date: Sun, 16 Sep 2012 14:41:42 -0500
From: Dirk Eddelbuettel e...@debian.org
To: Eberle, Anthony ae...@allstate.com
Cc: r-help@r-project.org
Subject: Re: [R] Question about R performance on UNIX/LINUX with
different memory
On 16 September 2012 at 13:30, Eberle, Anthony wrote:
| Does anyone have any guidance on swap and memory configuration when
| running R v2.15.1 on UNIX/LINUX? Through some benchmarking across
| multiple hardware (UNIX, LINUX, SPARC, x86, Windows, physical, virtual)
| it seems that the smaller
My first criterion is to make sure my application never swaps/pages due
to memory issues -- have enough physical memory so it never happens
and control what else is running on the machine. Once you start
paging, performance takes a real hit.
On Sun, Sep 16, 2012 at 2:30 PM, Eberle, Anthony
Models with different fixed effects estimated by REML cannot be
compared by anova.
In future, please post questions on mixed effects models on the
r-sig-mixed-effects mailing lists. You're likely to receive more
informative replies there, too.
-- Bert
On Wed, Aug 22, 2012 at 7:23 AM, Rainer M
On 22/08/12 16:36, Bert Gunter wrote:
Models with different fixed effects estimated by REML cannot be
compared by anova.
I have seen that much in Modern Applied Statistics in S, and therefore have chosen
method = "ML"
In future, please post questions on mixed effects models on the
Oops -- missed that. OTOH, my reply demonstrates the value of the
mixed models list recommendation.
-- Bert
On Wed, Aug 22, 2012 at 7:55 AM, Rainer M Krug r.m.k...@gmail.com wrote:
On 22/08/12 16:36, Bert Gunter wrote:
Models with different fixed effects estimated by REML cannot be
compared
Further discussed on r-sig-mixed-models
Rainer
On 22/08/12 17:04, Bert Gunter wrote:
Oops -- missed that. OTOH, my reply demonstrates the value of the
mixed models list recommendation.
-- Bert
On Wed, Aug 22, 2012 at 7:55 AM, Rainer M Krug r.m.k...@gmail.com wrote:
On 22/08/12 16:36, Bert
Hi,
I am doing Bayesian Calibration for a model which has 24 parameters using MCMC.
I get a sample of 100,000 points from the posterior pdf. Let's say, It is a
dataframe named as BC_output, with 24 parameters (V1, V2, ..., V24)
I would like to show the Maximum a Posteriori (MAP) parameter value
The package hasn't been modified since version 0.0-4 in August 2005. Your
only hope is to try to contact the author:
Package: A2R
Type: Package
Title: Romain Francois misc functions
Version: 0.0-4
Date: 2005-08-05
Author: Romain Francois francoisrom...@free.fr
Maintainer: Romain Francois
Read ?Startup _carefully_ (it's pretty dense!).
Does the .First file on the search path give you the functionality you seek?
-- Bert
On Fri, Aug 10, 2012 at 12:34 PM, Ryan Rene ren...@hotmail.com wrote:
Hi all,
I had a specific question about the loading of objects
into R. I apologize in
On Wed, Aug 8, 2012 at 6:18 PM, Eberle, Anthony ae...@allstate.com wrote:
I have a question about multiple cores and CPU's for running R. I've
been running various tests on different types of hardware and operating
systems (64 bit, 32 bit, Solaris, Linux, Windows, R v2.10, 2.12, 2.15,
2.15.1.)
Hey Arne
I don't know about the rounding, but you shouldn't be concerned with the
calculation of Q^2. In the line where a value is assigned to RSS, the value
is assigned to h+1 as shown here:
RSS[h + 1, ] = colSums((Y.old - t.new %*% t(c.new))^2)
i.e. the programmer is messing
On Saturday, July 21, 2012, Mehrab Nodehi wrote:
hello
can you explain to me about package shapes in R software?
especially about order procGPA?
Not unless you read the posting guide and formulate a more specific
question.
Sarah
--
Sarah Goslee
http://www.stringpage.com
Hello,
Inline.
On 03-07-2012 09:22, Sajeeka Nanayakkara wrote:
I have already fitted several models
using R code: arima(rates, c(p,d,q)).
As I heard, the best model produces the
smallest AIC value, but the maximum likelihood estimation procedure's optimizer
should converge.
How to check whether maximum
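One way to check convergence after arima(): the fitted object carries optim's convergence code in its $code component (0 means converged). A sketch on a simulated series standing in for the real data:

```r
set.seed(42)
rates <- arima.sim(list(ar = 0.5), n = 200)  # stand-in for the actual series

fit <- arima(rates, order = c(1, 0, 0))
fit$code    # 0 if the optimizer reported convergence
AIC(fit)    # compare AICs only among fits that converged
```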
On Tue, 26-Jun-2012 at 10:54PM -0500, Erin Hodgess wrote:
| Dear R People:
|
| I have dates as factors in the following:
|
| poudel.df$DATE
| [1] 1/2/2011 1/4/2011 1/4/2011 1/4/2011 1/6/2011 1/7/2011 1/8/2011
| [8] 1/9/2011 1/10/2011
| Levels: 1/10/2011 1/2/2011 1/4/2011 1/6/2011
Hello,
I'm afraid you're wrong, this has nothing to do with leading zeros. Just
see:
x <- c("1/2/2011", "1/4/2011", "1/4/2011", "1/4/2011", "1/6/2011",
"1/7/2011",
"1/8/2011", "1/9/2011", "1/10/2011")
as.Date(x, "%m/%d/%Y")
y <- factor(x)
str(y)
as.Date(as.character(y), "%m/%d/%Y")
Note that the
How about
dates <- as.factor(c("1/2/2011", "1/4/2011", "1/4/2011", "1/4/2011",
"1/6/2011", "1/7/2011", "1/8/2011", "1/9/2011", "1/10/2011"))
dates
[1] 1/2/2011 1/4/2011 1/4/2011 1/4/2011 1/6/2011 1/7/2011 1/8/2011
[8] 1/9/2011 1/10/2011
Levels: 1/10/2011 1/2/2011 1/4/2011 1/6/2011 1/7/2011 1/8/2011
Hi,
I can see the mistake in the code: '$' instead of '%' before Y.
dat1 <- as.factor(c("1/2/2011", "1/4/2011", "1/4/2011"))
dat1
[1] 1/2/2011 1/4/2011 1/4/2011
Levels: 1/2/2011 1/4/2011
as.Date(as.character(dat1), "%m/%d/%Y")
[1] "2011-01-02" "2011-01-04" "2011-01-04"
as.Date(as.character(dat1), "%m/%d/$Y")
[1] NA NA NA
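The thread's point in one runnable piece, with a shortened hypothetical vector (the deliberate '$Y' typo is reproduced last to show the silent NA failure):

```r
y <- factor(c("1/2/2011", "1/4/2011", "1/10/2011"))

as.Date(as.character(y), "%m/%d/%Y")   # "2011-01-02" "2011-01-04" "2011-01-10"
as.Date(as.character(y), "%m/%d/$Y")   # NA NA NA -- '$' is not a format code
```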
On Tue, Jun 26, 2012 at 10:54 PM, Erin Hodgess erinm.hodg...@gmail.com wrote:
Dear R People:
I have dates as factors in the following:
poudel.df$DATE
[1] 1/2/2011 1/4/2011 1/4/2011 1/4/2011 1/6/2011 1/7/2011 1/8/2011
[8] 1/9/2011 1/10/2011
Levels: 1/10/2011 1/2/2011 1/4/2011
Hi --
EBImage is a Bioconductor package so better to ask there
http://bioconductor.org/help/mailing-list/
and to cc the packageDescription("EBImage")$Maintainer
Martin
On 06/20/2012 02:57 PM, Steven Winter wrote:
I am having trouble using the resize function (in the package EBImage) with
This is a bug, which I just fixed in EBImage 3.13.1. As Martin suggested,
please post EBImage-related questions on the Bioconductor mailing list.
Greg
On 06/20/2012 02:57 PM, Steven Winter wrote:
I am having trouble using the resize function (in the package EBImage)
with matrices containing
sample() takes a prob = argument which lets you supply weights, which
need not sum to one, so, if I understand you, you could just pass TRUEs
and FALSEs for those rows you want. If I'm wrong about that last bit,
I'm still pretty confident sample(prob = ) is the way to go.
Best,
Michael
On Thu,
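A sketch of Michael's suggestion with hypothetical row weights (the eligibility rule here is made up for illustration):

```r
set.seed(1)
rows     <- 1:10
eligible <- rows %% 2 == 0        # hypothetical: only even rows may be drawn

# Zero-weight elements are never selected by sample()
picked <- sample(rows, size = 3, prob = eligible)
picked %% 2 == 0                  # all TRUE
```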