Thanks, writing plot(addTA()) worked fine.
I find myself with such mixed feelings about R. After finding that addTA
worked fine at the command line but not in a function, I puzzled for a long
time about what kind of virtual machine structure could possibly account for
that. I couldn't think of an
Hi,
Could anyone help me with the following problem? After I finished an R
session and modified some options (for example, set
options(editor="neditor")), I want to save these modifications so that after I
load the saved R data I do not need to set these options again. What is
the best way to do th
All: Using gWidgetsWWW, I'm in the process of designing a website that will
provide users with a GUI to interact with various datasets. As part of
that, I want to provide users the ability to select particular variables (via
combo boxes) that will then result in summary (text) output being returned
I am trying to find a confidence band for a fitted non-linear curve. I
see that the predict.nls function has an interval argument, but a
previous post indicates that this argument has not been implemented. Is
this still true? I have tried various ways to extract the interval
information from t
Look at the 'na.locf' function in the 'zoo' package.
On Fri, May 6, 2011 at 5:29 PM, Nick Manginelli wrote:
> I'm using the survey api. I am taking 1000 samples of size 100 and
> replacing 20 of those values with missing values. I'm trying to use sequential
> hot deck imputation, and thus I a
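The zoo idiom suggested above is na.locf(x); for the archives, here is a base-R sketch of the same last-observation-carried-forward idea (the helper name locf is made up here, and it assumes the series does not start with NA):

```r
# Carry the last non-NA value forward (a base-R sketch of zoo::na.locf)
locf <- function(x) {
  ok  <- !is.na(x)      # positions holding real values
  idx <- cumsum(ok)     # index of the most recent non-NA value at each position
  x[ok][idx]            # note: assumes x[1] is not NA
}
locf(c(5, NA, NA, 3, NA, 7))  # 5 5 5 3 3 7
```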
On May 6, 2011, at 6:22 PM, Eva Bouguen wrote:
Dear users,
In a study with recurrent events:
My objective is to get estimates of survival (obtained through a Cox
model) by rank of recurrence and by treatment group.
With the following code (corresponding to a model with a global
effect of t
On May 6, 2011, at 5:17 PM, claire wrote:
How can I use the generalized hyperbolic distribution package to
estimate the four parameters of the NIG distribution? I have a dataset
of stock returns that I want to fit the parameters to.
On StackOverflow you have already been told
On May 6, 2011, at 3:15 PM, Christopher G Oakley wrote:
> Is there a way to generate a new dataframe that produces x lines based on the
> contents of a column?
>
> for example: I would like to generate a new dataframe with 70 lines of
> data[1, 1:3], 67 lines of data[2, 1:3], 75 lines of data[3,
On May 6, 2011, at 6:00 PM, mk90...@gmx.de wrote:
> Hi everyone,
>
> I'm using R, Latex and Sweave for some years now, but today it confuses me
> a lot:
>
> Running Sweave produces only figures in .pdf format, no .eps figures.
>
> The header looks like this:
> <>=
>
> There was no error messag
Hello Folks,
I'm working on scraping my first web site and ran into an issue because
I really don't know anything about regular expressions in R.
library(XML)
library(RCurl)
site <- "http://thisorthat.com/leader/month";
site.doc <- htmlParse(site, ?, xmlValue)
At the ?, I realize that
Dear users,
In a study with recurrent events:
My objective is to get estimates of survival (obtained through a Cox model) by
rank of recurrence and by treatment group.
With the following code (corresponding to a model with a global effect of the
treatment=rx), I get no error and manage to obtai
I'm using the survey api. I am taking 1000 samples of size 100 and
replacing 20 of those values with missing values. I'm trying to use
sequential hot deck imputation, and thus I am trying to figure out how
to replace missing values with the value before it. Other things I have
to keep in mind
Hi Frank,
To answer your request:
> print(f)
Logistic Regression Model
lrm(formula = Ignition ~ FMC + Charge, data = Fire)
      Model Likelihood      Discrimination    Rank Discrim.
         Ratio Test             Indexes           Indexes
Obs
Is there a way to generate a new dataframe that produces x lines based on the
contents of a column?
for example: I would like to generate a new dataframe with 70 lines of data[1,
1:3], 67 lines of data[2, 1:3], 75 lines of data[3,1:3] and so on up to numrow =
sum(count).
> data
pop fam yesorno
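One common idiom for this kind of row expansion, sketched here with made-up column values since the data print above is truncated, is to index the rows with rep():

```r
# Expand each row of a data frame according to a count column
data <- data.frame(pop = 1:3, fam = c("a", "b", "c"), count = c(70, 67, 75))
expanded <- data[rep(seq_len(nrow(data)), times = data$count), ]
nrow(expanded)  # sum(data$count) = 212
```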
Thanks Max. I'm now using the caret library with my data. But the models
showed a correlation under 0.7. Maybe the problem is with the variables
I'm using to generate the model. For that reason I'm asking for some
packages that allow me to reduce the number of features and to remove the
worst f
Hi everyone,
I have been using R, LaTeX and Sweave for some years now, but today it confuses me a lot:
Running Sweave produces only figures in .pdf format, no .eps figures.
The header looks like this:
<>=
There was no error message.
Does anybody have an idea?
Any changes in the Sweave-package?
Or a mi
Hi all, I am trying to find some way to tell R to use this small number
10^-20 as zero by default. This means if any number is below this then it
should be treated as zero, or if I divide something by any number less than
that (in absolute terms) then Inf will be displayed, etc.
I have
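As far as I know R has no global option that redefines zero, but zapsmall() rounds near-zero values in a copy of the data; a minimal sketch:

```r
# zapsmall() zeroes values that are tiny relative to the largest magnitude;
# it changes the returned/printed copy, not R's arithmetic itself
x <- c(1, 1e-25, -3e-21)
zapsmall(x, digits = 20)  # 1 0 0
```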
Is it possible to create weighted boxplots or violin plots in lattice?
It seems that you can specify weights for panel.histogram() and
panel.densityplot(), but not for panel.bwplot or panel.violin().
Please let me know if I've missed something in the package documentation.
Thanks!
--
Raphael
How can I use the generalized hyperbolic distribution package to
estimate the four parameters of the NIG distribution? I have a dataset
of stock returns that I want to fit the parameters to.
--
View this message in context:
http://r.789695.n4.nabble.com/Generalized-Hyperbolic-distri
Some good suggestions, just (as always) be aware of floating-point imprecision.
See FAQ 7.31
> s <- seq(1,30,0.1)
> s[8]
[1] 1.7
> s[8] == 1.7
[1] FALSE
Just trying to forestall future questions :-)
Dan
Daniel Nordlund
Bothell, WA USA
> -Original Message-
> From: r-help-boun...@r-proj
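To spell out Dan's point: equality tests on seq() output should use a tolerance, e.g. all.equal():

```r
s <- seq(1, 30, 0.1)
s[8] == 1.7                   # FALSE: 1 + 7*0.1 != 1.7 in binary floating point
isTRUE(all.equal(s[8], 1.7))  # TRUE: comparison with a numeric tolerance
abs(s[8] - 1.7) < 1e-8        # TRUE: the same idea by hand
```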
Hi Matthias,
If you know the column number you want to change, it is pretty straightforward.
## use the builtin mtcars dataset as an example
## and store it in variable, 'x'
x <- mtcars
## change the second column name to "cylinder"
colnames(x)[2] <- "cylinder"
## compare the column names of 'x'
Perfect - that's it, Gabor, thanks a lot!
Dimitri
On Fri, May 6, 2011 at 4:11 PM, Gabor Grothendieck
wrote:
> On Fri, May 6, 2011 at 4:07 PM, Dimitri Liakhovitski
> wrote:
>> Hello!
>>
>> I'd like to take Dates and extract from them months and years - but so
>> that it sorts correctly. For examp
On May 6, 2011, at 12:14 PM, Lee, Eric wrote:
Hello,
I'm running version R x64 v2.12.2 on a 64bit windows 7 PC. I have
two data vectors, x and y, and try to run archmCopulaFit. Most of
the copulas produce errors. Can you tell me what the errors mean
and if possible, how I can set arch
On May 6, 2011, at 4:16 PM, Ben Haller wrote:
As for correlated coefficients: x, x^2, x^3 etc. would obviously be
highly correlated, for values close to zero.
Not just for x close to zero:
> cor( (10:20)^2, (10:20)^3 )
[1] 0.9961938
> cor( (100:200)^2, (100:200)^3 )
[1] 0.9966219
Is th
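One standard remedy for these near-collinear raw powers (not necessarily what this thread settled on) is an orthogonal polynomial basis via poly():

```r
# Raw powers are nearly collinear; poly() builds uncorrelated columns
x <- 10:20
cor(x^2, x^3)                  # ~0.996
p <- poly(x, 3)                # orthogonal (centered, scaled) basis
round(cor(p[, 2], p[, 3]), 8)  # 0
```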
I figured out a poor way to do what I want.
meas<-runif(30)
times<-sort(runif(30))
timesdec<-seq(0,1,0.2)
ltim<-length(timesdec)
storing<-rep(0,ltim)
for (i in 1:ltim) {
if (i == 1) {rowstart <- 1} else {rowstart <- findInterval(timesdec[i-1],times)+1}
rowfinal<-findInterval(timesdec[i],times)
storing[i]
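A more compact sketch of the same binning, calling findInterval() once on the observation times and summarising per bin with tapply() instead of the explicit loop (variable names as in the post above):

```r
set.seed(1)
meas  <- runif(30)
times <- sort(runif(30))
timesdec <- seq(0, 1, 0.2)
bins <- findInterval(times, timesdec)  # which interval each time falls in
tapply(meas, bins, sum)                # e.g. sum of measurements per interval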
Hi,
If your geology map is a special kind of object, this may not work,
but if you are just dealing with a data frame or matrix type object
named, "geology" with columns, something like this ought to do the
trick:
geology[is.na(geology[, "landform"]), "landform"] <- 0
?is.na returns a logical ve
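A self-contained version of that suggestion, with a toy data frame standing in for the real map:

```r
# Replace NA in the 'landform' column with 0
geology <- data.frame(lithology = 1:4, landform = c(2, NA, 5, NA))
geology[is.na(geology[, "landform"]), "landform"] <- 0
geology$landform  # 2 0 5 0
```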
On May 6, 2011, at 1:58 PM, Prof Brian Ripley wrote:
> On Fri, 6 May 2011, Bert Gunter wrote:
>
>> FWIW:
>>
>> Fitting higher order polynomials (say > 2) is almost always a bad idea.
>>
>> See e.g. the Hastie, Tibshirani, et. al book on "Statistical
>> Learning" for a detailed explanation why.
Dear All
I trained a neural network on 200 data points and made predictions for a grid
file (e.g. 100 points) as below:
snn<-predict(nn, newdata=data.frame(wetness=wetnessgrid$band1,
ndvi=ndvigrid$band1))
The pixels of snn are the same as those of wetnessgrid or ndvigrid.
I want to convert thi
Hello all
I have a geology map that has three levels, below
<-geology
lithology landscape landform
The landform level is used as a covariate (with codes=1,2,3,4,5) for training
of the neural network, but this level has missing data as NA.
I want to replace the missing data of the landform level wi
On Fri, May 6, 2011 at 4:07 PM, Dimitri Liakhovitski
wrote:
> Hello!
>
> I'd like to take Dates and extract from them months and years - but so
> that it sorts correctly. For example:
>
> x1<-seq(as.Date("2009-01-01"), length = 14, by = "month")
> (x1)
> order(x1) # produces correct order based o
Hello!
I'd like to take Dates and extract from them months and years - but so
that it sorts correctly. For example:
x1<-seq(as.Date("2009-01-01"), length = 14, by = "month")
(x1)
order(x1) # produces correct order based on full dates
# Of course, I could do "format" - but this way I am losing t
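(Gabor's accepted fix in this thread used zoo; a base-R sketch that also sorts correctly is a zero-padded year-month string:)

```r
x1 <- seq(as.Date("2009-01-01"), length = 14, by = "month")
ym <- format(x1, "%Y-%m")        # "2009-01" ... "2010-02"
identical(order(ym), order(x1))  # TRUE: zero-padded text sorts like the dates
```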
Hi Matthias,
What do you mean by "that doesn't work"? What platform are you using? Using:
> sessionInfo()
R version 2.13.0 (2011-04-13)
Platform: x86_64-pc-mingw32/x64 (64-bit)
fix(mydataframe)
brings up the data editor, then if I click a variable name, and change
it and shut down the data ed
Beautiful.
-Original Message-
From: greg.s...@imail.org [mailto:greg.s...@imail.org]
Sent: Friday, May 06, 2011 02:17 PM
To: Thompson, Adele - adele_thomp...@cargill.com; r-help@r-project.org
Subject: RE: [R] create arrays
?seq
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
In
On Fri, May 6, 2011 at 12:11 PM, Schatzi wrote:
> In Matlab, an array can be created from 1 - 30 using the command similar to R
> which is 1:30. Then, to make the array step by 0.1 the command is 1:0.1:30
> which is 1, 1.1, 1.2,...,29.9,30. How can I do this in R?
Hmm, in this case, I would do it
?seq
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Schatzi
> Sent: Friday, May 06, 2011 1:12 PM
> To: r-hel
On Fri, May 06, 2011 at 12:11:30PM -0700, Schatzi wrote:
> In Matlab, an array can be created from 1 - 30 using the command similar to R
> which is 1:30. Then, to make the array step by 0.1 the command is 1:0.1:30
> which is 1, 1.1, 1.2,...,29.9,30. How can I do this in R?
> ...
This may well be a
I can get around it by doing something like:
as.matrix(rep(1,291))*row(as.matrix(rep(1,291)))/10+.9
I was just hoping for a simple command.
Schatzi wrote:
>
> In Matlab, an array can be created from 1 - 30 using the command similar
> to R which is 1:30. Then, to make the array step by 0.1 the c
In Matlab, an array can be created from 1 - 30 using the command similar to R
which is 1:30. Then, to make the array step by 0.1 the command is 1:0.1:30
which is 1, 1.1, 1.2,...,29.9,30. How can I do this in R?
-
In theory, practice and theory are the same. In practice, they are not - Albert
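For the archives, the answer both replies point at:

```r
s <- seq(1, 30, by = 0.1)  # R's counterpart of Matlab's 1:0.1:30
length(s)                  # 291
range(s)                   # 1 30
```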
Hi all R users,
Thanks Frank for your advice.
In fact I posted all my script. In the R help, the script for nomogram is
long and I took only the part that I think is relevant in my case.
I used information from datadist {Design} and rms {rms} in the R help
to write my code.
I see tha
Hi again everybody
I have a new problem concerning the R editor. It is possible to add a
new variable column, but they all get the name "var1". I read somewhere that
it should be possible to change the variable name by clicking on it, but
that doesn't work. Is that a bug, or how is it possible
On May 6, 2011, at 2:28 PM, Pavan G wrote:
Hello All,
Let's say I have data spanning all quadrants of x-y plane. If I plot
data
with a certain x and y range using xlim and ylim or by using
plot.formula as
described in this link:
http://www.mathkb.com/Uwe/Forum.aspx/statistics/5684/plotting
Don't attach the Design package. Use only rms. Please provide the output of
lrm (print the f object). With such a strong model make sure you do not
have a circularity somewhere. With nomogram you can specify ranges for the
predictors; default is 10th smallest to 10th largest.
rms will not make
Hello All,
Let's say I have data spanning all quadrants of x-y plane. If I plot data
with a certain x and y range using xlim and ylim or by using plot.formula as
described in this link:
http://www.mathkb.com/Uwe/Forum.aspx/statistics/5684/plotting-in-R
*DF <- data.frame(x = rnorm(1000), y = rnorm(
On May 6, 2011, at 1:11 PM, wwreith wrote:
The following code works mostly. It runs fine but...
1. Is there a way to increment the xlab for each graph? I would like
to have
Graph 1, Graph 2, etc. Right now it just gives me Graph i over and
over
again.
Use the power of bquote. See modif
1. ?paste ?sprintf
2. ?par (look at col.axis) ?axis
3. ?pdf ?png ?dev.copy
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org]
This should work!!
for(i in 1:12){
xLabel <- paste("Graph", i)
fileName <- paste("Graph", i, ".jpg", sep = "")
jpeg(fileName)
hist(zNort1[,i], freq = FALSE, xlab = xLabel, col = "blue",
main = "Standardized Residuals Histogram", ylim = c(0,1), xlim = c(-3.0,3.0),
axes = FALSE)
axis(1, col = "blue", col.axis = "bl
Hello
I am a new user of R,
and I have a problem with R and netCDF.
The installation succeeded, and I could run all the examples.
But when I use my own netCDF file it is different.
I want to compute statistics on this kind of file.
1)
first, calculate the mean.
my data looks like this,
through ncdump -h test.nc
netcdf test {
di
The following code works mostly. It runs fine but...
1. Is there a way to increment the xlab for each graph? I would like to have
Graph 1, Graph 2, etc. Right now it just gives me Graph i over and over
again.
2. Is there a way to get the x-axis and y-axis to be bold or at least a
darker color?
On Fri, 6 May 2011, Bert Gunter wrote:
FWIW:
Fitting higher order polynomials (say > 2) is almost always a bad idea.
See e.g. the Hastie, Tibshirani, et. al book on "Statistical
Learning" for a detailed explanation why. The Wikipedia entry on
"smoothing splines" also contains a brief explanat
On May 6, 2011, at 11:35 AM, Pete Pete wrote:
Gabor Grothendieck wrote:
On Tue, Dec 7, 2010 at 11:30 AM, Pete Pete
wrote:
Hi,
consider the following two dataframes:
x1=c("232","3454","3455","342","13")
x2=c("1","1","1","0","0")
data1=data.frame(x1,x2)
y1=c("232","232"
Thanks to all that reply to my post. The best solution that answers
entirely to my question and can be used as a general function and not
case by case is the one sent by the package author.
Many thanks to everybody. It was helpful.
Cristina
On 05/05/2011 10:44, Deepayan Sarkar wrote:
On Wed,
FWIW:
Fitting higher order polynomials (say > 2) is almost always a bad idea.
See e.g. the Hastie, Tibshirani, et. al book on "Statistical
Learning" for a detailed explanation why. The Wikipedia entry on
"smoothing splines" also contains a brief explanation, I believe.
Your ~0 P values for the
Here is an example of what I would like to do:
meas = measurements
times = time of measurement
measf = measurements in final, reduced matrix
timesf = time of measurement in final matrix
meas<-runif(30)
times<-sort(runif(30))
inputmat<-cbind(times,meas)
names(inputmat)<-c("timef","measf")
I would
On Fri, 2011-05-06 at 11:20 -0500, Gene Leynes wrote:
> Hmmm
>
> After reading that email four times, I think I see what you mean.
>
> Checking for variables within particular scopes is probably one of the most
> challenging things in R, and I would guess in other languages too. In R
> it's
On May 6, 2011, at 12:31 PM, David Winsemius wrote:
> On May 6, 2011, at 11:35 AM, Ben Haller wrote:
>
>> Hi all! I'm getting a model fit from glm() (a binary logistic regression
>> fit, but I don't think that's important) for a formula that contains powers
>> of the explanatory variable up to
Hmmm
After reading that email four times, I think I see what you mean.
Checking for variables within particular scopes is probably one of the most
challenging things in R, and I would guess in other languages too. In R
it's compounded by situations when you're writing a function to accept
va
I'm trying to create an xyplot with a "groups" argument where the y-variable
is the cumsum of the values stored in the input data frame. I almost have
it, but I can't get it to automatically adjust the y-axis scale. How do I
get the y-axis to automatically scale as it would have if the cumsum value
Hello,
I'm running version R x64 v2.12.2 on a 64bit windows 7 PC. I have two data
vectors, x and y, and try to run archmCopulaFit. Most of the copulas produce
errors. Can you tell me what the errors mean and if possible, how I can set
archmCopulaFit options to make them run? I see in the do
Gabor Grothendieck wrote:
>
> On Tue, Dec 7, 2010 at 11:30 AM, Pete Pete
> wrote:
>>
>> Hi,
>> consider the following two dataframes:
>> x1=c("232","3454","3455","342","13")
>> x2=c("1","1","1","0","0")
>> data1=data.frame(x1,x2)
>>
>> y1=c("232","232","3454","3454","3455","3
> From what you have written, I am not exactly sure what your
> seat-of-the-pant sense is coming from. My pantseat typically does not
> tell me much; however, quartic trends tend to be less stable than linear,
> so I am not terribly surprised.
My pantseat is not normally very informative either, b
Hello all,
I'm trying to create a heatmap using 2 matrices I have: z and v. Both
matrices represent different correlations for the same independent
variables. The problem I have is that I wish to have the values from matrix
z to be represented by color intensity while having the values from matri
On May 6, 2011, at 11:35 AM, Ben Haller wrote:
Hi all! I'm getting a model fit from glm() (a binary logistic
regression fit, but I don't think that's important) for a formula
that contains powers of the explanatory variable up to fourth. So
the fit looks something like this (typing into
Will all the keywords always be present in the same order? Or are you looking
for the keywords, but some may be absent or in different orders?
Look into the gsubfn package for some tools that could help.
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-proj
The strsplit function is probably the closest R function to Perl's split
function. For more detailed control the gsubfn package can be useful.
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Gamliel Beyderman
Sent: Thursday, May 05
Hi Ben,
>From what you have written, I am not exactly sure what your
seat-of-the-pant sense is coming from. My pantseat typically does not
tell me much; however, quartic trends tend to be less stable than linear,
so I am not terribly surprised.
As two side notes:
x_qt <- x^4 # shorter code-wise
an
Hi all! I'm getting a model fit from glm() (a binary logistic regression
fit, but I don't think that's important) for a formula that contains powers of
the explanatory variable up to fourth. So the fit looks something like this
(typing into mail; the actual fit code is complicated because it
On 05.05.2011 21:20, Ray Brownrigg wrote:
On 6/05/2011 6:06 a.m., swaraj basu wrote:
Dear All,
I am trying to build a package for a set of functions. I am
able to build the package and its working fine. When I check it with
R CMD check
I get the following warning: no visible global function
de
On Fri, May 06, 2011 at 02:28:57PM +1000, andre bedon wrote:
>
> Hi,
> I'm hoping someone can offer some advice:I have a matrix "x" of dimensions
> 160 by 1. I need to create a matrix "y", where the first 7 elements are
> equal to x[1]^1/7, then the next 6 equal to x[2]^1/6, next seven x[3]^
On Fri, May 6, 2011 at 6:33 AM, CarJabo wrote:
> sorry, I am not asking someone to do my homework, as I have finished the whole
> procedure. I am just wondering why this technical error occurs, so I can fix
> it myself.
My guess would be it has something to do with the random data
generated at the 4
Hello
I'm interested in Long Term Prediction over time series: regarding it, I and
other guys have developed STRATEGICO, a free and opensource tool at
http://code.google.com/p/strategico/
Please have a look at it, test it online with your own time series and give us
any feedbacks and suggestio
Dear R Community,
I am currently facing this seemingly obscure Problem with Panel
Corrected Standard Errors (PCSE) following Beck & Katz (1995). As the
authors suggest, I regressed a linear model (tmodel) with lm() with
option "na.action=na.exclude" (I have also tried other options here). My
d
Thank you very much for the reply. I tend to agree with your first
suggestion. And that's exactly what I did.
In other functions, an easier way to marginalize such a variable C (not
necessarily a factor) is to use the option
include=c("A","B","A:B")
This essentially sets C at a value such that
sorry, I am not asking someone to do my homework, as I have finished the whole
procedure. I am just wondering why this technical error occurs, so I can fix
it myself.
By the way, I don't have any instructor or teaching assistant to ask for help, so
any suggestion for the error will be appreciated.
Thanks very
On Fri, May 06, 2011 at 09:17:11AM -0400, David Winsemius wrote:
>
> On May 6, 2011, at 4:03 AM, Philipp Pagel wrote:
> >The .Machine() command will provide some insight into these matters.
>
> On my device (and I suspect on all versions of R) .Machine is a
> built-in list and there is no .Machin
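To make David's correction concrete: .Machine is a built-in list, inspected directly rather than called:

```r
is.list(.Machine)       # TRUE: a list, not a function
.Machine$double.eps     # ~2.22e-16, smallest x with 1 + x != 1
.Machine$double.digits  # 53 bits in the double significand
```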
On May 6, 2011, at 4:24 AM, Petr Savicky wrote:
On Fri, May 06, 2011 at 02:28:57PM +1000, andre bedon wrote:
Hi,
I'm hoping someone can offer some advice:I have a matrix "x" of
dimensions 160 by 1. I need to create a matrix "y", where the
first 7 elements are equal to x[1]^1/7, then t
On Fri, Apr 29, 2011 at 4:27 PM, mathijsdevaan wrote:
> Hi list,
>
> Can anyone tell my why the following does not work? Thanks a lot! Your help
> is very much appreciated.
>
> DF = data.frame(read.table(textConnection(" B C D E F G
> 8025 1995 0 4 1 2
> 8025 1997 1 1 3 4
> 8026
You should ask your instructor or teaching assistant for help. R-help
is not for doing homework.
Duncan Murdoch
On 06/05/2011 9:00 AM, CarJabo wrote:
Hi,
I have tried to use uniroot to solve for a value (value a in my function) that
gives f=0, and I repeat this process 1 times (simulatio
On May 6, 2011, at 4:03 AM, Philipp Pagel wrote:
what is the maximum number of digits that R has? Because SQL works
with 50 digits, I think.
I am wondering if that is binary or decimal.
And I need software that works with a lot
of digits.
The .Machine() command will provide some insight
Taking the final value for each 30-minute interval seems like it would get what
I want. The problem is that sometimes this value would be 15 minutes before the
end of the 30-minute interval. What would I use to pick up this value?
-Original Message-
From: ehl...@ucalgary.ca [mailto:ehl..
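One hedged, base-R sketch of "final value per 30-minute interval"; the real data and its timestamp column were not shown, so the names here are invented:

```r
# Last observation in each 30-minute bin via cut() on the timestamps
dat <- data.frame(
  time  = as.POSIXct("2011-05-06 09:00", tz = "UTC") + c(5, 14, 29, 44) * 60,
  value = c(1, 2, 3, 4)
)
bins <- cut(dat$time, "30 mins")
sapply(split(dat$value, bins), function(v) v[length(v)])  # last value per bin
```

This picks whichever observation comes last inside each bin, even if it falls 15 minutes before the bin's end.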
Hi,
I have tried to use uniroot to solve for a value (value a in my function) that
gives f=0, and I repeat this process 1 times (simulations). However an
error occurs from the 4625th simulation - Error in uniroot(f, c(0, 2),
maxiter = 1000, tol = 0.001) :
f() values at end points not of oppo
Please follow the posting guide. You didn't state which package you are
using and didn't include a trivial self-reproducing example that causes the
error.
For your purpose the rms package is going to plot restricted cubic spline
fits (and shaded confidence bands) more flexibly.
Frank
Haleh G
Please post the entire script next time, e.g., include require(rms). You
have one line duplicated. Put this before the first use of lrm: d <-
datadist(donnee); options(datadist='d')
Frank
Komine wrote:
>
> Hi,
> I use the datadist function in the rms library in order to draw my nomogram.
> After rea
I think those functions are now defunct (were only available in previous
versions).
S
On Thursday, May 5, 2011 at 6:33 PM, Andrew Robinson wrote:
> Hi Arnau,
>
> please send the output of sessionInfo() and the exact commands and
> response that you used to install and load apTreeshape.
>
> Ch
Dear R-help,
I am trying to reproduce some results presented in a paper by Anderson
and Blundell in 1982 in Econometrica using R.
The estimation I want to reproduce concerns maximum likelihood
estimation of a singular equation system.
I can estimate the static model successfully in Stata but for t
G'day Rolf,
On Fri, 06 May 2011 09:58:50 +1200
Rolf Turner wrote:
> but it's strange that the dodgey code throws an error with gam(dat1$y
> ~ s(dat1$x)) but not with gam(dat2$cf ~ s(dat2$s))
> Something a bit subtle is going on; it would be nice to be able to
> understand it.
Well,
R> trac
Dear R-users,
I am trying to run sensitivity and uncertainty analysis with R using the
following functions :
- samplingSimple from the package SMURFER
- morris from the package sensitivity
I have a different problem for each of these two functions:
- the functi
An alternative approach:
library(fdth)
fd <- fdt(rnorm(1e3, m=10, sd=2))
plot(fd)
breaks <- with(fd, seq(breaks["start"], breaks["end"], breaks["h"]))
mids <- 0.5 * (breaks[-1] + breaks[-length(breaks)])
y <- fd$table[, 2]
text(x=mids, y=y,
lab=y,
pos=3)
HTH,
JCFaria
Thanks a lot
I owe you all 10 points of my grade!!
Hello,
Thank you for your reply but I'm not sure your code answers my needs,
from what I read it creates a 10-fold partition and then extracts the
kth partition for future processing.
My question was rather: once I have a 10-fold partition of my data,
how to supply it to the "train" function of t
Hi,
I did a similar experiment with my data. Maybe the following code will give
you some idea. It might not be the best solution, but for me it worked.
Please do share if you find another approach.
Thank you
CODE###
library(dismo)
set.seed(111)
dd<-read.delim("yourfile.csv",sep=",",header=T)
On 05/05/2011 10:48 PM, pcc wrote:
This is probably a very simple question but I am completely stumped! I am
trying to run the shapiro.wilk(x) test on a relatively small dataset (75) and
each time my variable keeps coming out as 'NULL', and
shapiro.test(fcv)
Error in complete.cases(x) : no input
On 05/05/2011 09:50 PM, matibie wrote:
I'm trying to add the exact value on top of each column of a histogram; I
have been trying with the text function but it doesn't work.
The problem is that the program itself decides the exact value to give to
each column, and there is not, like in a bar-plot
Hi,
sorry for the late response and many thanks. A combination of get() and
paste() did the job.
Regards
On Thu, Apr 28, 2011 at 5:06 PM, Petr PIKAL wrote:
> Hi
>
> r-help-boun...@r-project.org napsal dne 28.04.2011 16:16:16:
>
> > ivan
> > Odeslal: r-help-boun...@r-project.org
> >
> > 28.04.
Hello Lee,
in addition to David's answer, see: ?MacKinnonPValues in package 'urca' (CRAN
and R-Forge).
Best,
Bernhard
> -Ursprüngliche Nachricht-
> Von: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] Im Auftrag von David Winsemius
> Gesendet: Freitag, 6. Mai 2011
Hi Danielle.
You appear to have two problems:
1) getting the data into R
Because I don't have the file at hand, I'm going to simulate reading it
through a text connection
orgdata<-textConnection("Graph ID | Vertex1 | Vertex2 | weight\n1 | Alice |
Bob | 2\n1 | Alice | Chris | 1\n1 | Alice | Jane |
On Fri, May 06, 2011 at 02:28:57PM +1000, andre bedon wrote:
>
> Hi,
> I'm hoping someone can offer some advice:I have a matrix "x" of dimensions
> 160 by 1. I need to create a matrix "y", where the first 7 elements are
> equal to x[1]^1/7, then the next 6 equal to x[2]^1/6, next seven x[3]^
> what is the maximum number of digits that R has? Because SQL works
> with 50 digits, I think. And I need software that works with a lot
> of digits.
The .Machine() command will provide some insight into these matters.
cu
Philipp
--
Dr. Philipp Pagel
Lehrstuhl für Genomorientierte Bi
Hi,
I use the datadist function in the rms library in order to draw my nomogram. After
reading, I try this code:
f<-lrm(Y~L+P,data=donnee)
f <- lrm(Y~L+P,data=donnee)
d <- datadist(f,data=donnee)
options(datadist="d")
f <- lrm(Y~L+P)
summary(f,L=c(0,506,10),P=c(45,646,10))
plot(Predict(
Hi,
I'm hoping someone can offer some advice:I have a matrix "x" of dimensions 160
by 1. I need to create a matrix "y", where the first 7 elements are equal
to x[1]^1/7, then the next 6 equal to x[2]^1/6, next seven x[3]^1/7 and so on
all the way to the 1040th element. I have implemente
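A vectorised sketch of the pattern described, assuming the run lengths simply alternate 7, 6, 7, 6, ... across all 160 elements (which matches the 1040th-element total, since 80*7 + 80*6 = 1040):

```r
x <- matrix(runif(160), ncol = 1)        # stand-in for the real 160 x 1 matrix
runs <- rep(c(7, 6), length.out = 160)   # 7, 6, 7, 6, ... run lengths
y <- rep(as.vector(x)^(1 / runs), times = runs)
length(y)                                # 1040
```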