Here is some sample code:
## Simulation function to create data, analyze it using
## kruskal.test, and return the p-value
## change rexp to change the simulation distribution
simfun <- function(means, k=length(means), n=rep(50,k)) {
mydata <- lapply( seq_len(k), function(i) {
rexp(n[i], 1) -
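The function above is cut off; here is a guess at a complete version along the
lines the comments describe (the mean-shift detail and the power calculation
are assumptions, not the original code):

simfun <- function(means, k = length(means), n = rep(50, k)) {
    mydata <- lapply(seq_len(k), function(i) {
        rexp(n[i], 1) - 1 + means[i]   # exponential errors shifted to mean means[i]
    })
    g <- factor(rep(seq_len(k), n))
    kruskal.test(unlist(mydata), g)$p.value
}

## e.g. estimate power for a given set of group means
mean(replicate(1000, simfun(c(0, 0, 0.5))) <= 0.05)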
It may be simpler to specify the order in the contrasts rather than trying
to order the data. See the C function (notice capital C). I have never
tried this with the bigglm function, so I don't know if it will work the
same way or not. But if it works, then that may be a simpler approach.
On
The split function does essentially this, but puts the results into a list
rather than using the dangerous and messy assign function. The overall
syntax is simpler as well.
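A small sketch of the split() idea (the data frame and grouping column are
made up for illustration):

dat <- data.frame(g = c("a", "a", "b"), x = 1:3)
pieces <- split(dat, dat$g)   # a list with one data frame per group
pieces$a                      # instead of assign()-created variables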
On Thu, Feb 12, 2015 at 3:14 AM, Jim Lemon drjimle...@gmail.com wrote:
Hi Samarvir,
Assuming that you want to generate a
Steve (and any others still paying attention to this thread),
Larry Wall (author of Perl) said something along the lines of:
things that are similar should look similar, things that are different
should look different.
Ironically one of the first places I saw that quote was in a Perl vs.
Python
There is a section in the High-Performance Computing (HPC) CRAN Task View
on Large memory and out-of-memory data (
http://cran.r-project.org/web/views/HighPerformanceComputing.html) that
should probably be the first place to start.
On Wed, Jan 21, 2015 at 8:18 AM, Paromita Guha parog...@gmail.com
Why not just use the tools in npudens? They can predict on a new set.
You can also use tools like fitdistr to fit a parametric multivariate
density, or you can use loess or lm with poly or splines to estimate the
surface (but this will not guarantee a volume of 1).
On Thu, Jan 15, 2015 at 2:19
Yes, there are several. Which is best and which subset to suggest depends
on what you are trying to do, what your inputs look like (do you have the
function, but want a simpler approximation? or do you have
observations/datapoints?)
If you can give us more detail about what you have to work with
Pushpa,
To extend John Fox's answer a little.
Look at the outliers dataset in the TeachingDemos package, see
?outliers and run the examples on that help page. Then ask yourself if
you are comfortable with the automatic outlier removal shown in the example.
On Mon, Dec 29, 2014 at 12:39 PM,
By "at the top level" Hadley meant to put that code outside of the function
definition. In your source file that line should be very near the top,
before any function definitions. Then myenv will not be temporary (well
it will go away when you end the R session). Further, when this code is
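The post is cut off above; as a rough sketch of the "top level" idea (the
counter function below is invented for illustration):

## near the top of the source file, outside any function definition
myenv <- new.env()
assign("count", 0, envir = myenv)

increment <- function() {
    assign("count", get("count", envir = myenv) + 1, envir = myenv)
}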
Look at the task view for High Performance Computing (
http://cran.r-project.org/web/views/HighPerformanceComputing.html) there is
a section on packages for large memory and out-of-memory analyses. There
are also sections on parallel computing which is one way to deal with large
data if you have
a dabbler, but not expert at cluster
analysis), but for some reason the word cophenetic never occurred to
me as a search term while thinking about how to create the requested
plot.
On Tue, Oct 28, 2014 at 9:31 AM, Martin Maechler
maech...@stat.math.ethz.ch wrote:
Greg Snow 538...@gmail.com
with the cats
instead of me as her clustering).
On Tue, Oct 28, 2014 at 12:26 PM, Martin Maechler
maech...@stat.math.ethz.ch wrote:
Greg Snow 538...@gmail.com
on Tue, 28 Oct 2014 10:31:27 -0600 writes:
Thanks Martin, It is always great to learn that I don't need to
reinvent the wheel
I don't know of any tools that automate this process. For small
sample sizes it may be easiest to just do this by hand, for large
sample sizes that plot will probably be too complicated to make sense
of. There may be a range of moderate sample sizes for which
automation (or partial automation)
, a <- 2 , a %++% to get a <- 3 .
It seems that the operator overloading system in R must pass two parameters. Is
that true?
--
PO SU
mail: desolato...@163.com
Majored in Statistics from SJTU
At 2014-10-18 00:54:40, Greg Snow 538...@gmail.com wrote:
You may be interested in looking at Reference Classes/objects (see
?setRefClass). This is a form of OO programming that is more similar
to C++ and Java. You could create a counter object that you could
then increment with syntax like:
x$inc()
x$inc(5)
The first would increment by the default
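A minimal sketch of such a counter (the class definition below is an
illustration, not code from the original thread):

Counter <- setRefClass("Counter",
    fields = list(count = "numeric"),
    methods = list(
        initialize = function(...) {
            count <<- 0
            callSuper(...)
        },
        inc = function(by = 1) {
            count <<- count + by   # increment by the default of 1, or by 'by'
        }
    ))
x <- Counter$new()
x$inc()
x$inc(5)
x$count   # 6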
I think we have a fortune candidate.
On Thu, Oct 16, 2014 at 12:35 AM, PIKAL Petr petr.pi...@precheza.cz wrote:
Hi
It will be even worse with age, try to contact optician :-)
If you want to get better answer you need to provide more info about your
file, what you did and how it failed.
Hadley, have you tried producing the book in other electronic formats
(other than pdf), such as epub? I tried and ended up with a file that
worked, but all the example code was missing (which defeats the
convenience of having it on an ebook reader). I did not check if
everything else was there or
I believe that what is happening is that the clipping region is being
reset when you call box, but not when you call rect. If you insert
the command par(xpd=NA) (or TRUE instead of NA) after the plot.new
and use the rect commands then you can see both rectangles (because
this turns the clipping
Instead of making a local copy and editing, you may consider using the
trace function with edit=TRUE, this allows you to insert things like
print statements, but takes care of the environment and other linkages
for you (and is easy to untrace when finished).
On Fri, Oct 3, 2014 at 11:12 AM, Erin
When working with datasets too large to fit in memory it is usually
best to use an actual database, read the data into the database, then
pull the records that you want into R. There are several packages for
working with databases, but 2 of the simplest are the RSQLite and
sqldf packages
Others have discussed some of the theoretical approaches (delta
method), but as has also been pointed out, this is a mailing list
about R, not theory, so here are some approaches to your question from
the approach of those of us who like programming R more than
remembering theory.
I assume that
If you have 2 dichotomous variables coded 0/1 (and stored as numerics)
then the var and cov functions can be used to compute the covariance
as if they were continuous variables. Some algebra shows that the
continuous covariance and the binomial covariance only differ by the
denominator (n for
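A quick numeric check of that claim (the two 0/1 vectors are invented):

x <- c(0, 1, 1, 0, 1); y <- c(1, 1, 0, 0, 1)
n <- length(x)
cov(x, y)                          # uses an n-1 denominator
mean(x * y) - mean(x) * mean(y)    # the 'binomial' version, n denominator
cov(x, y) * (n - 1) / n            # rescaling one gives the other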
The TeachingDemos package has %<% and %<=% operators for a between
style comparison. So for your example you could write:
1 %<% 5 %<% 10
or
1 %<=% 5 %<=% 10
And these operators already work with vectors:
lb %<=% x %<% ub
and can even be further chained:
0 %<% x %<% y %<% z %<% 1 # only points where x
A perhaps better approach would be to have the functions that
currently call fixx accept an argument of a function to use. It could
default to fixx, but if the caller passed in a new function it would
use that function instead.
If you really want to overwrite a function inside of a package
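A tiny sketch of the pass-a-function idea described above (fixx and process
are stand-in names, not from the actual package):

fixx <- function(x) pmax(x, 0)            # stand-in for the package's fixx

process <- function(x, fix_fun = fixx) {  # caller can override the fixer
    x <- fix_fun(x)
    mean(x)
}

process(c(-1, 2, 3))                      # uses the default fixx
process(c(-1, 2, 3), fix_fun = abs)       # uses a caller-supplied function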
While there are tools that claim to compute power for tests beyond
what you find in the pwr package, I don't like to use them because
either I don't agree with the assumptions that they make, or I don't
know what assumptions are being made (and therefore I don't know
whether I agree with them or
On Fri, Aug 15, 2014 at 4:06 PM, Rolf Turner r.tur...@auckland.ac.nz wrote:
OTOH R is still lacking a mind_read() function so it probably
would NOT give you *exactly* what you want.
We can try the (very pre-alpha) esp package:
source('g:/R/esp/esp.R')
esp()
[1] "piccalilli" "crawlspace" "mole"
Generally if you want to save the results of a loop then it is time to
learn to use the lapply and sapply functions instead.
Try something like:
Tukey <- lapply( 6:n, function(i) HSD.test(lm(sdata_mg[,i] ~
sdata_mg$Medium+sdata_mg$color+sdata_mg$type+sdata_mg$Micro),
'sdata_mg$Micro') )
In addition to the solution and comments that you have already
received, here are a couple of additional comments:
This is a variant on FAQ 7.21; if you had found that FAQ then it would
have told you about the get function.
The most important part of the answer in FAQ 7.21 is the last part
where
Not a single function, but the subplot function in the TeachingDemos
package can be used to add the histogram and/or density plot in the
empty part of a qqplot.
On Sun, Aug 3, 2014 at 1:38 PM, Spencer Graves
spencer.gra...@structuremonitoring.com wrote:
Does a function exist that
You could create a tcltk window that looks for a button click and/or
key press and when that happens change the value of a variable. Then
in your loop you just look at the value of the same variable and break
when the value changes.
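A rough sketch of that pattern, assuming a tcltk button that flips a flag
which the loop checks (all names here are improvised):

library(tcltk)
stop_flag <- FALSE
tt <- tktoplevel()
tkpack(tkbutton(tt, text = "Stop", command = function() stop_flag <<- TRUE))

for (i in 1:1e6) {
    ## ... one step of the long-running work ...
    if (i %% 100 == 0) {
        tcl("update")        # let the GUI process any pending button click
        if (stop_flag) break
    }
}
tkdestroy(tt)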
On Tue, Aug 5, 2014 at 6:13 AM, William Simpson
I think it may be time for you to rethink your process. Yes there are
ways to do what you are asking, but when you start wanting to combine
graphs, tables, r output and descriptions and annotations then it is
time to look into tools like knitr. With knitr you can create a
template file with R
On Wed, Jul 30, 2014 at 12:45 PM, Greg Snow 538...@gmail.com wrote:
Another option that is in development, but may do what you want is
ggvis (http://ggvis.rstudio.com/). I have seen
On Tue, Jul 29, 2014 at 11:01 AM, Greg Snow 538...@gmail.com wrote:
There is the TkBrush function in the TeachingDemos package that gives
brushing in a scatterplot matrix using a Tk interface rather than
ggobi
My preference when teaching is to have the code and results look the
same as it appears in the R console window, so with the prompts and
without the output commented. But then I also `purl` my knitr file to
create a script file to give to the students that they can copy and
paste from easily.
On
There is the TkBrush function in the TeachingDemos package that gives
brushing in a scatterplot matrix using a Tk interface rather than
ggobi. There is also the iplots package which allows you to create
multiple scatterplots, histograms, boxplots, barcharts, etc. and
points selected in any one of
For speed your best choice is probably to load your data into a
database, then pull your samples from the database. A simple database
is SQLite and there are R packages that work directly with that
database.
Can the later samples contain some of the same rows as previous
samples? Or once a row
Here are 2 possibilities to consider.
The shiny package allows a web browser interface to R. It can run
from a server, but can also run just on a single computer. You could
set this up like you suggest, where the user would double click on an
icon on the desktop which would then run R, load the
Here is another approach in R (blatantly stealing Jim Holtman's code
to generate sample data):
set.seed(1)
n <- 100
test <- data.frame(p = sample(10, n, TRUE)
+ , b = sample(10, n, TRUE)
+ )
test$e <- sample(5, n, TRUE) + test$b # make sure e > b
tmp1 <- test$b -
You ask: Is there a way to produce good quality area graphs in R? I
would modify that question a little and ask it back as:
Is there a way to produce good quality area graphs?
Consider the following:
library(fortunes)
fortune(197)
If anything, there should be a Law: Thou Shalt Not Even
and Cleveland and perhaps there is a
better way to portray these data.
What I’m looking to do is to illustrate how several blocks of data change
distribution relative to one another over time but in a less boring way.
On 16 Jul 2014, at 16:30, Greg Snow 538...@gmail.com wrote:
You ask
Here is one approach that gives almost the same answer as your example:
A1 <- list(c(1:4),c(2,4,5),23,c(4,5,13))
A2 <- sort(unique(unlist(A1)))
names(A2) <- A2
sapply(A2, function(x) which( sapply(A1, function(y) x %in% y) ),
+ simplify=FALSE, USE.NAMES=TRUE )
$`1`
[1] 1
$`2`
[1] 1 2
$`3`
[1] 1
of
time then it is probably not worth the time to optimize them.
On Tue, Jul 8, 2014 at 11:11 AM, Greg Snow 538...@gmail.com wrote:
Here is one approach that gives almost the same answer as your example:
A1 <- list(c(1:4),c(2,4,5),23,c(4,5,13))
A2 <- sort(unique(unlist(A1)))
names(A2) <- A2
sapply(A2
Here is another approach inspired by Jim's answer:
names(A1) <- paste0(seq_along(A1),'.')
tmp <- unlist(A1)
split( rep( seq_along(A1), sapply(A1,length) ),
as.numeric(sub('\\..+$','',tmp)) )
$`1`
[1] 1
$`2`
[1] 1 2
$`3`
[1] 1
$`4`
[1] 1 2 4
$`5`
[1] 2 4
$`13`
[1] 4
$`23`
[1] 3
On Tue,
Oops, I combined 2 ideas (by chance it still worked), the last line
should have been one of the following:
split( rep( seq_along(A1), sapply(A1,length) ), tmp )
split( as.numeric(sub('\\..*$','',names(tmp))), tmp )
On Tue, Jul 8, 2014 at 11:41 AM, Greg Snow 538...@gmail.com wrote:
Here
The subplot function in the TeachingDemos package is more up to date
than the version in Hmisc (the Hmisc version is a copy of an earlier
version of the one in TeachingDemos). If you replace library(Hmisc)
with library(TeachingDemos) (with a recent version of TeachingDemos
installed) then the
You could use
which( sapply(l, length) == 2 )
but that still uses a loop internally.
On Thu, Jul 3, 2014 at 1:35 PM, carol white wht_...@yahoo.com wrote:
Hi,
Is there any way to access an element of a list without looping over the list
nor using unlist? Just to avoid parsing a very long
If you really want to mix base and grid graphics (they don't play
nicely together as you have noticed) then you should really use the
gridBase package. There is at least one package for doing maps using
grid (ggmap or ggMaps or something similar) and there is the regular
text function for adding
Or
densityplot(~mu, dat, group=gp, auto.key=TRUE)
which will be more like the matplot result.
On Thu, Jun 26, 2014 at 6:46 PM, Duncan Mackay dulca...@bigpond.com wrote:
As Greg has listed lattice
Here are ways in lattice
quick 1 panel
library(lattice)
densityplot(~ mu1+mu2+mu3+mu4)
dat
Ailan, Most of the readers of this list speak English and you may
have an easier time getting an answer if you translate to English
(though there may be some native Portuguese speakers on the list that
can help you directly). This is a text only list so please in the
future post in plain text,
For your 2nd question (which also answers your first question) I use
the permn function in the combinat package. This function is nice in
that, in addition to generating all the permutations, it will also,
optionally, run a function on each permutation for you:
t(simplify2array( permn( c('A','B','C') ) ))
Does this do what you want?
d1 <- density(mu1)
d2 <- density(mu2)
d3 <- density(mu3)
d4 <- density(mu4)
matplot( cbind( d1$x, d2$x, d3$x, d4$x ), cbind( d1$y, d2$y, d3$y,
d4$y ), type='l')
Or in a more expandable way:
mus <- mget( ls(pat='^mu') )
ds <- lapply( mus, density )
xs <- sapply( ds, `[[`, x
I think that you are looking for the `list` argument in `save`.
save( list=foo, file=paste0(foo, '.Rdata') )
In general it is best to avoid using the assign function (and get when
possible). Usually there are better alternatives.
On Tue, Jun 24, 2014 at 2:35 PM, David Stevens
wrote:
I recommend to use saveRDS()/readRDS() instead. More convenient and
avoids the risk that load() has of overwriting existing variables with
the same name.
/Henrik
On Tue, Jun 24, 2014 at 1:45 PM, Greg Snow 538...@gmail.com wrote:
I think that you are looking for the `list` argument
The .Rprofile file is processed before all the standard packages are
loaded, that is why you are seeing the error. If you instead run the
command as utils::read.csv or use library or require to manually load
the utils package before calling read.csv then everything should work
for you.
On Mon,
The sample function can be used to sample discrete values with
designated probabilities. I would just construct your list of 5
values based on the selected value (duplicating end values if needed,
so a choice of x=0 would be the vector c(0,0,0, 0.125, 0.25) ), then
sample from this vector with
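The message is cut off above; a rough sketch of the idea as described (the
example value and probabilities are assumptions for illustration):

vals <- c(0, 0, 0, 0.125, 0.25)   # the 5 values for a choice of x = 0
sample(vals, size = 1)            # equal probabilities by default
## or with designated probabilities:
sample(vals, size = 1, prob = c(0.2, 0.2, 0.2, 0.2, 0.2))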
First you should do your best to forget that you ever saw the command
par(new=T), rip that page out of any book you saw it in, blacklist
any webpage etc. In general (as you see) it causes more problems than
it solves.
Now to do what you want there are a couple of options, one of the
simplest is
the split.screen
function proposed by Jim.
Thanks a lot for the help,
Cheers,
Luca
2014-06-17 18:00 GMT+02:00 Greg Snow 538...@gmail.com:
I am not familiar with the pamr.plotcv function, but in general if it
uses par(mfrow=c(2,1)) to set up for the multiple plots then your
problem with going back
I am not familiar with the pamr.plotcv function, but in general if it
uses par(mfrow=c(2,1)) to set up for the multiple plots then your
problem with going back to the first plot is that you have lost the
other information (such as user coordinates) needed to add to the 1st
plot. You can see that
explicitly told me that it was using n-1 not n. By
the way, is there another standard function in R, that will compute either
the variance or the standard deviation by dividing by n instead of n-1?
Thanks
Bob
On Sunday, June 8, 2014 10:33 PM, Greg Snow 538...@gmail.com wrote:
Which
Here is an example using the subplot function.
library(TeachingDemos)
library(maps)   # for the map() function
map('state', region='ohio', xlim=c(-85, -80), ylim=c(38, 42))
tmp <- subplot(map('state', add=TRUE), 'bottomright', type='fig',
size=c(0.2,0.2), inset=0.1)
op <- par(fig=tmp$fig)
map('state', region='ohio', fill=T, add=T)
par(op)
With
Which formula for standard deviation are you using?
If you know the population mean then you should divide by n (3 in this
case), but if you don't know the population mean and use the mean
calculated from the sample then it is more usual to use n-1 as the
denominator (this makes the variance an
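For the n-denominator version asked about, a quick sketch (using a made-up
vector):

x <- c(2, 4, 6)
sd(x)                                        # divides by n-1
sqrt(sum((x - mean(x))^2) / length(x))       # divides by n
sd(x) * sqrt((length(x) - 1) / length(x))    # same value as the line above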
What type of mapping?
Do you want to add points to an existing map? Color in polygons of a
downloaded map? Create a completely new map? other?
Our advice to you depends on what you want to do.
Also please read the posting guide linked to at the bottom of all
emails and don't post in HTML,
-
From: Greg Snow [mailto:538...@gmail.com]
Sent: June 3, 2014 1:41 PM
To: Fowler, Mark
Cc: R help
Subject: Re: [R] tkbind (callback)
I think the problem is that the tkbind function is expecting the 3rd argument
to be a function and the arguments of functions passed to tkbind need
I think the problem is that the tkbind function is expecting the 3rd
argument to be a function and the arguments of functions passed to
tkbind need to have specific names. In your call when you do the
binding the OtoFanCallback function is called at that time (hence the
initial print) and its
There is the boxcox function in the MASS package that will look at the
Box Cox family of transformations.
On Mon, Jun 2, 2014 at 9:15 AM, Diederick Stoffers d.stoff...@gmail.com wrote:
Hi guys,
I distinctly remember having used an R toolbox that compared different
transformation with regard
Meskin
Sent from my iPad
On May 29, 2014, at 1:06 PM, Greg Snow 538...@gmail.com wrote:
This is a warning and in your case would not be a problem, but it is
good to think about and the reason why it is suggested that you avoid
using attach and be very careful when you do use attach. What
There are several options for creating GUIs depending on how much
control you want and how much work you are willing to put in.
One simple option is the tkexamp function in the TeachingDemos
package. This approach would require whoever receives your script to
have R running, but then they could
I believe that what is happening is that you never run fun1, so no
environment for fun1 is ever created and therefore x1 is never defined
with its own environment. You grab the statement y <- x1 from the
body of fun1, but then try to evaluate it in the current environment
(where it cannot find
This is a warning and in your case would not be a problem, but it is
good to think about and the reason why it is suggested that you avoid
using attach and be very careful when you do use attach. What is
happening is that you first created a vector named 'x' in your global
workspace, you then
You should probably read the posting guide, there is a link at the
bottom of every e-mail on the list.
You provide a large amount of code but no description of what you are
trying to do with it and very little about what problem you are
having. Most people (well me at least) are unwilling to run
Look at the aggregate function. As long as you have a column like
Name that indicates which rows should be averaged together it will
work (technically it will average the other rows as well, but since
the average of 1 number is that number you will not see a difference).
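A small sketch of aggregate() used that way (data are invented):

dat <- data.frame(Name = c("a", "a", "b"), x = c(1, 3, 5), y = c(2, 4, 6))
aggregate(cbind(x, y) ~ Name, data = dat, FUN = mean)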
On Tue, May 27, 2014 at
For numeric/continuous/normal values you can use the mvrnorm function
in the MASS package (set the empirical argument to TRUE to force the
exact correlation). Some would argue that you should not compute
correlations with binary variables, but you could generate 4 normals,
then take the last 2
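A minimal sketch of mvrnorm with empirical = TRUE (the correlation matrix is
just an example):

library(MASS)
Sigma <- matrix(c(1, 0.5, 0.5, 1), 2, 2)
x <- mvrnorm(100, mu = c(0, 0), Sigma = Sigma, empirical = TRUE)
cor(x)   # the off-diagonal is exactly 0.5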
Here are some functions to look at for help with your problem (though
as Sarah comments, it would be easier for us to help you if you help
us by giving more detail as mentioned in the posting guide).
The flush.console function will send output to the screen even when
buffering is happening.
The
There is an addtable2plot function in the plotrix package, which is on
CRAN and can be installed using install.packages or other standard
ways of installing packages.
If you need the table to line up with parts of the plot then you could
leave extra margin area and call the axis function multiple
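A quick sketch of addtable2plot (the plot, position, and table are all made
up):

library(plotrix)
plot(1:10, 1:10)
addtable2plot(2, 8, data.frame(a = 1:3, b = 4:6))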
You could have them spawn a vanilla R session using system instead of
the command line:
system('R CMD BATCH --vanilla foo.R')
Or you could use the local argument to source to evaluate in a new
environment that does not inherit from the global environment:
source('foo.R',
You can specify whatever plotting size you want for graphics devices,
so for example you could send the plot to a pdf device set up as A0 or
other poster size. Then you just have the challenge of finding a
place to print it.
The pairs2 function in the TeachingDemos package will create pairs
like
You could search all of CRAN using the RSiteSearch function for terms
like bond or coupon. There are also Task Views for Finance and
Econometrics that may point you towards useful packages.
On Wed, May 14, 2014 at 3:07 AM, Katherine Gobin
katherine_go...@yahoo.com wrote:
Dear R forum,
EXCEL
The Predict.Plot function in the TeachingDemos package can do this for
you. Or you can just calculate the intercept for the call to abline
by plugging in the mean for all the other variables and do the
arithmetic then pass the intercept and slope by hand to the abline
function. Or you can create
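A rough sketch of the "do the arithmetic by hand" option, using mtcars as a
stand-in for the poster's data:

fit <- lm(mpg ~ wt + hp, data = mtcars)
b <- coef(fit)
plot(mpg ~ wt, data = mtcars)
## line for mpg vs wt, holding hp at its mean
abline(a = b["(Intercept)"] + b["hp"] * mean(mtcars$hp), b = b["wt"])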
deparse(quote(x$y))
[1] "x$y"
It looks like deparse does what you want here.
On Wed, May 7, 2014 at 3:23 PM, Spencer Graves
spencer.gra...@structuremonitoring.com wrote:
Hello, All:
Is there a simple utility someplace to convert quote(x$y) to x$y?
I ask, because
Your command will generate 3 random values from gamma distributions,
the first will be from a gamma with shape a[1] and scale b[1], then
the 2nd will come from a gamma with shape a[2] and scale b[2] and the
3rd will have shape a[3] and scale b[3].
On Wed, Apr 30, 2014 at 3:00 PM, Stefano Sofia
Many, probably even most (but I have not checked) of the datasets
available in R packages have help files with a references section.
That section should lead you to an original source that may have the
copyright information and is what should be referenced.
My understanding (but I am not a
Convert your 'targets' matrix into a 2 column matrix with the 1st
column representing the row and the 2nd the column where you want your
values, then change the values to a single vector and you can just use
the targets matrix as the subsetting in 1 step without (explicit)
looping, for example:
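The example itself is cut off above; this sketch reconstructs the general
idea with invented numbers:

m <- matrix(0, nrow = 4, ncol = 4)
targets <- cbind(row = c(1, 2, 4), col = c(3, 1, 2))  # where the values go
vals <- c(10, 20, 30)
m[targets] <- vals   # matrix indexing in one step, no explicit loop
m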
Can you provide some sample data and the family of curves that you
would like to fit?
Reproducible examples greatly increase your chances of receiving a
useful response.
On Wed, Apr 23, 2014 at 12:33 AM, andreas betz abet...@gmail.com wrote:
Hello,
is it possible to fit a group of curves
The mean value theorem of integration (I have a cross-stitch of this
theorem hanging on my wall (between cross-stitches of the central
limit theorem and Bayes theorem)) tells us that the area under a curve
is equal to the width of the area of interest times the average height
of the curve. Often
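A small numeric illustration of that relationship (curve and interval chosen
arbitrarily):

f <- function(x) x^2
a <- 0; b <- 2
area <- integrate(f, a, b)$value
area / (b - a)   # average height of the curve on [a, b]; 4/3 here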
Here is one approach to generating a set (or in this case multiple
sets) of normals that sum to 0 (with a little round off error) and
works for an odd number of points:
library(MASS)   # for mvrnorm
v <- matrix(-1/8, 9, 9)
diag(v) <- 1
eigen(v)
x <- mvrnorm(100, mu=rep(0,9), Sigma=v, empirical=TRUE)
rowSums(x)
range(.Last.value)
I agree with the others that you should consult with a statistician,
but here are some additional things to consider:
The usual equivalence test can be done much simpler (both computation
and conception) by just calculating a confidence interval on the
difference and seeing if the entire interval
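A rough sketch of that confidence-interval check (data and equivalence bounds
are invented; a 90% interval corresponds to two one-sided 5% tests):

x <- rnorm(30, mean = 0.1); y <- rnorm(30, mean = 0)
bounds <- c(-0.5, 0.5)                          # pre-specified equivalence margin
ci <- t.test(x, y, conf.level = 0.90)$conf.int
all(ci > bounds[1] & ci < bounds[2])            # TRUE -> conclude equivalence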
The hpd functions in the TeachingDemos package do not currently take
weights, so they will not work with the results of importance
sampling.
On Wed, Mar 26, 2014 at 10:35 PM, Baran rad baran@gmail.com wrote:
hi :)
if i using importance sampling for obtain postreior density, can i use
The mk_pent function returns a matrix with 2 columns that is stored
into a variable I called tmp. The x-values are in the first column
and tmp[,1] is the first column of tmp, tmp[,2] is the second column,
the y-values.
This is covered in An Introduction to R and you can also find
discussion in
Look at ?plot.stl and read the section on range.bars. Basically the
bars are all the same height in user coordinates, so it gives a
feeling of the relative scale of each panel.
On Tue, Mar 25, 2014 at 11:32 AM, Rich Shepard rshep...@appl-ecosys.com wrote:
This command produced the attached
on the displaying of the results. For many
functions it is a good idea to look for print, plot, and summary
methods for them.
On Tue, Mar 25, 2014 at 11:53 AM, Rich Shepard rshep...@appl-ecosys.com wrote:
On Tue, 25 Mar 2014, Greg Snow wrote:
Look at ?plot.stl and read the section on range.bars
To plot a bunch of pentagons I would suggest using the my.symbols and
ms.polygon functions in the TeachingDemos package.
If this is more to learn programming, then you can just loop over an
index of the vectors containing the x and y coordinates (are they in 2
vectors?, 2 columns of a data
Just to satisfy my curiosity:
library(microbenchmark)
a <- 1:10
b <- 101:110
microbenchmark(
+ m1=as.vector( rbind(a,b) ),
+ m2=c( rbind(a,b) ),
+ m3=as.vector( matrix(c(a,b), nrow=2, byrow=TRUE) ),
+ m4={x <- integer(length(a)*2); x[c(TRUE,FALSE)] <- a;
x[c(FALSE,TRUE)] <- b; x},
+ m5={x <-
Please read the posting guide (there is a link at the bottom of every
post) and post in plain text, not HTML, and also avoid all caps.
The biggest repository of R code is probably CRAN which is linked from
the main R page. There are also R-forge and Bioconductor and many R
packages on Github.
Well if I had it and you asked nicely, then I would be happy to give
it to you. Oh, you mean the gls function, not GLS as my initials (my
parents are OLS and WLS, perhaps I was destined to regress), sorry.
The gls function in the nlme package (is that the one that you are
asking about? or is
You can use get to grab the object, then subset it:
test <- list(a=2)
get('test')[['a']]
[1] 2
This way you can even have the variable names in other variables:
whichvar <- 'a'
whichlist <- 'test'
get(whichlist)[[whichvar]]
[1] 2
Even better would be to have any lists that you want to get in
Here are a couple more options if you want some variety:
d <- c(8,7,5,5,3,3,2,1,1,1)
as.numeric( factor(d, levels=unique(d)) )
[1] 1 2 3 3 4 4 5 6 6 6
cumsum( !duplicated(d) )
[1] 1 2 3 3 4 4 5 6 6 6
What would you want the output to be if your d vector had another 8
after the last 1? The
Depending on how you use the logistic regression this can be a silly
question. Remember that the prediction interval is where you predict
new observations to be. If you fit your logistic regression on data
that is 0 or 1 (or FALSE/TRUE, etc.) then predictions for new data
will be predictions of
How predict works depends on the method written for that type of
object. The zeroinfl function is not in any of the standard packages,
so it must be in another package, but you did not tell us which.
Since it is from a package other than the main ones, it may work
similarly to the regular predict
You could run the cor function on a small dataset where you know the
values of tau-a and/or tau-b (either because you hand computed them,
or found an example on the internet showing the difference), that
would give some good evidence as to which is used.
Or you could look at the source code, R is
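A tiny check along those lines (small invented vectors with ties, where tau-a
and tau-b differ):

x <- c(1, 2, 2, 3)
y <- c(1, 2, 3, 3)
cor(x, y, method = "kendall")   # compare to hand-computed tau-a and tau-b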
Essentially what the sample function is doing (though it does it in a
much more efficient way I expect) is the equivalent of this code:
i <- c(1:10)
myProbs <- c(0.1, 0.1, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9)
myProbs <- myProbs/sum(myProbs)
cp <- c(0,cumsum(myProbs))
i[findInterval( runif(5), cp