A: Make efficient use of space.
B: Minimise the spatial dislocation of related books
(it is acceptable to separate large books from small books
on the same subject, for the sake of efficient packing).
Some comments, hope they make sense:
Let f(x) be a function that maps from a
mister_bluesman wrote:
I've been getting the color.scale function to work. However, what I really need
to know is: if I have the values 0.1, 0.2, 0.3, 0.4, 0.5, for example, how
can I plot these using colours that would be different if the contents of the
file were 0.6, 0.7, 0.8, 0.9 and 1.0? Using
hello,
can you please tell me how to do AFD (that is, factorial discriminant
analysis) to present the results by group?
___
[[alternative HTML version deleted]]
Use the lda (linear discriminant analysis) function in the MASS package, or the
discrimin function in the ade4 package.
Justin BEM
Student Engineer in Statistics and Economics
BP 294 Yaoundé.
Tel: (00237) 9597295.
- Original Message -
From: elyakhlifi mustapha [EMAIL PROTECTED]
To:
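For readers of the archive: a minimal sketch of the MASS route, using the built-in iris data (the species variable plays the role of the groups in the AFD question above):

```r
# Linear discriminant analysis with MASS::lda (MASS ships with R).
library(MASS)

fit  <- lda(Species ~ ., data = iris)   # fit LDA on the iris measurements
pred <- predict(fit)                    # discriminant scores + class posteriors
table(iris$Species, pred$class)         # confusion matrix, by group
```

`predict(fit)$x` holds the discriminant coordinates, which is what one usually plots to present the results by group.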
On 5/10/07, Paul Murrell [EMAIL PROTECTED] wrote:
Hi
Paul Gilbert wrote:
Tony
Thanks for the summary.
My ad hoc system is pretty good for catching flagged errors, and
numerical errors when I have a check. Could you (or someone else)
comment on how easy it would be with one of
Yes, butler is pretty much abandoned. I didn't realise that RUnit
existed when I first wrote it, so much of the functionality is
probably implemented better there (with maintainers that are actually
doing something with the package).
That said, the purpose of butler wasn't only for unit testing,
hi,
I have this error
tr <- sample(1:50, 25)
train <- rbind(iris3[tr,,1], iris3[tr,,2], iris3[tr,,3])
test <- rbind(iris3[-tr,,1], iris3[-tr,,2], iris3[-tr,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
z <- lda(train, cl)
Error: could not find function "lda"
I don't understand
'pchip' from the 'signal' package seems to do the desired operation.
Vladimir Eremeev wrote:
Which function implements the piecewise cubic Hermite interpolation?
I am looking for an equivalent of MATLAB's interp1 with method = 'pchip'.
Here is the reference
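A small sketch of the signal::pchip route (assuming the contributed 'signal' package is installed; the data points here are made up):

```r
library(signal)  # contributed package providing a MATLAB-style pchip()

x  <- c(1, 2, 3, 4)
y  <- c(0, 1, 0, 1)
xi <- seq(1, 4, by = 0.25)   # query points, as in interp1(x, y, xi, 'pchip')
yi <- pchip(x, y, xi)        # shape-preserving piecewise cubic Hermite values
```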
Hi
[EMAIL PROTECTED] wrote on 10.05.2007 11:08:31:
hi,
I have this error
tr <- sample(1:50, 25)
train <- rbind(iris3[tr,,1], iris3[tr,,2], iris3[tr,,3])
test <- rbind(iris3[-tr,,1], iris3[-tr,,2], iris3[-tr,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
z <- lda(train, cl)
Martin Morgan wrote:
Oops, taking a look at the unit tests in RUnit, I see that specifying
'where=.GlobalEnv' is what I had been missing.
testCreateClass <- function() {
  setClass("A", contains="numeric", where=.GlobalEnv)
  a <- new("A")
  checkTrue(validObject(a))
  removeClass("A",
Urania Sun wrote:
I have a dataset of 10,000 records which I want to use to compare two
prediction models.
I split the records into test dataset (size = ntest) and training dataset
(size = ntrain). Then I run the two models.
Now I want to shuffle the data and rerun the models. I want
All,
As an addition to my earlier posting, I've now implemented the PRE
measures of prediction accuracy suggested by Menard (1995) as an R
function, which is not a lengthy one and is thus attached below.
With respect to the P-values one has an option in testing for either
1) significantly better
Hello,
given situation:
- pre / post test comparison of discrete, paired data [values can be
1,2,3,4].
- the data are neither normally distributed nor symmetric
Which test should I use to calculate the p-value? Is the Wilcoxon rank sum
test OK?
Alex
The Wilcoxon signed-rank test (paired) assumes symmetry.
(cf. http://www.basic.northwestern.edu/statguidefiles/srank_paired_alts.html
)
Christophe Pallier
On 5/10/07, Alexander Kollmann [EMAIL PROTECTED] wrote:
Hello,
given situation:
- pre / post test comparison of discrete, paired data
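For the archive, a minimal sketch of the paired test in base R (the numbers are invented; with only four possible values, expect warnings about ties and the exact p-value):

```r
pre  <- c(1, 2, 2, 3, 1, 4, 2)   # hypothetical pre-test scores
post <- c(2, 3, 2, 4, 2, 4, 3)   # hypothetical post-test scores

# paired = TRUE gives the Wilcoxon signed-rank test, which assumes the
# distribution of the differences is symmetric under the null
wilcox.test(pre, post, paired = TRUE)
```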
Have you loaded the library containing lda?
library(MASS)
Christophe Pallier
On 5/10/07, elyakhlifi mustapha [EMAIL PROTECTED] wrote:
hi,
I have this error
tr <- sample(1:50, 25)
train <- rbind(iris3[tr,,1], iris3[tr,,2], iris3[tr,,3])
test <- rbind(iris3[-tr,,1], iris3[-tr,,2],
I used the following code to generate a sample and then did a
log-rank test. Can I get the normal version of the log-rank, e.g. the sqrt of the
chisq(1) statistic will give you N(0,1)?
From my sample, can I use the above expression to get the normal distribution from the
result of the log-rank test?
Hi Jim.
Thanks for all your help. But would this ensure that, say, the color for the
value 0.1 would ALWAYS be the SAME and ALWAYS be DIFFERENT from that of
other values, regardless of the size of the vector?
CHEERS
Jim Lemon-2 wrote:
mister_bluesman wrote:
I've been getting the
Great! It's a wonderful mailing list full of helpful people!
Thanks to all of you
Vittorio
On Wednesday 09 May 2007 at 18:57:39, Gabor Grothendieck wrote:
Here is one additional solution. This one produces a data frame. The
regular expression removes:
- everything from beginning to first
Hello-
I recently upgraded from R 2.4.1 to R 2.5.0 and have tried to use the
search engine with minimal results. When I enter the command,
help.start()
My browser opens with the main html interface. When I click the link
for the search engine, the search page comes up, but when I enter a
Christophe Pallier wrote:
The Wilcoxon signed-rank test (paired) assumes symmetry.
(cf. http://www.basic.northwestern.edu/statguidefiles/srank_paired_alts.html
)
...of differences, and under the null hypothesis. This is usually rather
uncontroversial. With only 7 different outcomes for
Paul Johnson wrote:
This is a follow up to the message I posted 3 days ago about how to
estimate mixed ordinal logit models. I hope you don't mind that I am
just pasting in the code and comments from an R file for your
feedback. Actual estimates are at the end of the post.
. . .
Paul,
lrm
On Thu, 2007-05-10 at 08:29 -0400, Pietrzykowski, Matthew (GE, Research)
wrote:
Hello-
I recently upgraded from R 2.4.1 to R 2.5.0 and have tried to use the
search engine with minimal results. When I enter the command,
help.start()
My browser opens with the main html interface.
I think you're asking a design question about a Monte Carlo simulation. You
have a population (size 10,000) from which you're defining an empirical
distribution, and you're sampling from this to create pairs of training and
test samples.
You need to ensure that each specific pair of training and
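A sketch of that design, with the sizes mentioned elsewhere in the thread (10,000 records, ntrain = 4000) taken as an example:

```r
set.seed(42)                   # reproducible shuffles
ntotal <- 10000
ntrain <- 4000
B <- 5                         # number of reruns

# each column is one random training index set, drawn without replacement
splits <- replicate(B, sample(ntotal, ntrain))

# for replicate b: train <- data[splits[, b], ]; test <- data[-splits[, b], ]
```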
Add
library(MASS)
lda(...)
Justin BEM
Student Engineer in Statistics and Economics
BP 294 Yaoundé.
Tel: (00237) 9597295.
- Original Message -
From: elyakhlifi mustapha [EMAIL PROTECTED]
To: R-help@stat.math.ethz.ch
Sent: Thursday, 10 May 2007, 10:08:31
Subject: [R] hi
hi,
I have this
Hi,
Given a date, how do I get the last date of that month? I have
data in the form MM, that I've read as a date using
x$Date <-
as.Date(ISOdate(substr(x$YearEnd,1,4), substr(x$YearEnd,5,6), 1))
But this gives the first day of the month. To get the last day of the
month, I tried
Hallo,
I had the same problem. Please install R 2.5.0 again, and when you are asked
about start options (should be window 3 of the installation) you need to mark YES. In
the next window you do not need to make changes, but in the one after that (Help Style)
you need to mark HTML help. Then go on with the
Hans-Peter [EMAIL PROTECTED] writes:
- My code gives error and warning messages in some situations. I want to
test that the errors and warnings work, but these flags are the correct
response to the test. In fact, it is an error if I don't get the flag.
How easy is it to set up automatic tests
Dear R-user:
I have a left censored longitudinally measured data set with 4 variables such
as sub (which is id), x (only covariate), y (repeatedly measured outcome
variable) and w (weights) (note, “-5” indicates the left censored value in the
attached data set). I am using following R codes
Hi,
A quick beginner's question. I have two time series, A with
daily data, and another B with data at varying frequencies, but mostly
annual. Both the series are sorted ascending.
I need to merge these two series together in the following way: For any
entry of A, the lookup should match
Hi everyone:
Polyclass is a polytomous logistic regression model using
linear splines and their tensor products. It provides estimates for
conditional class probabilities which can then be used to predict class
labels. I know there is a polyclass package in S-PLUS. So I'm wondering if
I have a question that must have a simple answer (but eludes me).
I need a row-by-row logical comparison across three numeric variables
in
a data frame: foo$x, foo$y, foo$z. The logic is
if( x > y || x > z ) 1 else 0
for a particular row.
It is simple and very inefficient to use for(i in
On Tue, 8 May 2007, Frank E Harrell Jr wrote:
Does anyone know of an R function for computing the Greenland-Robins
variance for Mantel-Haenszel relative risks?
Both the (almost identical) meta-analysis packages compute the MH
estimator for relative risks and its standard error.
You don't need apply. Just do
foo$result <- ifelse((foo$x > foo$y) | (foo$x > foo$z), 1, 0)
On 5/10/07, Greg Tarpinian [EMAIL PROTECTED] wrote:
I have a question that must have a simple answer (but eludes me).
I need a row-by-row logical comparison across three numeric variables
in
a data
On Thu, 10 May 2007, [EMAIL PROTECTED] wrote:
Given that you expect some cells to be small, it should not
be a severe task to draw up a list of (a1,b1) values which
correspond to rejection of the null hypothesis (that both
ORs equal 1), and then the simulation using different values
of the
Dear All
I am dealing at the moment with optimization problems with nonlinear
constraints. rgenoud is quite apt to solve that kind of problem, but
the precision of the optimal values for the parameters is sometimes
far from what I need. optim seems to be more precise, but it can only
accept
Greg Tarpinian wrote:
I have a question that must have a simple answer (but eludes me).
I need a row-by-row logical comparison across three numeric variables
in
a data frame: foo$x, foo$y, foo$z. The logic is
if( x > y || x > z ) 1 else 0
for a particular row.
It is simple and very
or
with(foo, (x > y) * (x > z))
On 5/10/07, jim holtman [EMAIL PROTECTED] wrote:
You don't need apply. Just do
foo$result <- ifelse((foo$x > foo$y) | (foo$x > foo$z), 1, 0)
On 5/10/07, Greg Tarpinian [EMAIL PROTECTED] wrote:
I have a question that must have a simple answer (but eludes me).
The TimeIndex class in the 'fame' package handles this kind of stuff with ease.
library(fame)
ym <- 200212
z <- lastDayOf(ti(100*ym + 1, tif = "monthly"))
z
[1] 20021231
class: ti
tifName(z)
[1] "daily"
a 'ti' object is a TimeIndex, and it has a tif (TimeIndexFrequency) embedded
in it.
--
Jeff
ifelse(((x > y) | (x > z)), 1, 0)
Note in particular the use of | instead of || for elementwise comparisons.
Petr
Greg Tarpinian wrote:
I have a question that must have a simple answer (but eludes me).
I need a row-by-row logical comparison across three numeric variables
in
a data frame:
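A tiny illustration of the elementwise/scalar difference (invented vectors):

```r
x <- c(1, 5, 3)
y <- c(2, 2, 2)

x > y | x > 4          # elementwise OR: FALSE TRUE TRUE
# '||' is scalar: it is meant for single conditions in if(), and in
# current R it is an error to give it vectors longer than one
if (x[1] > y[1] || x[1] > 4) "yes" else "no"
```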
I have a 3D scatterplot and would like to change the displayed range along the
y-axis.
for instance suppose I have:
x = seq(0.2,0.7,0.01)
y = seq(0.4,0.9,0.01)
z = runif(51)
scatterplot3d(x,y,z,xlim=c(0.2,0.7),ylim=c(0.4,0.9),zlim=c(0,1))
But I would like the y-axis to read: 0.4, 0.5, 0.6,
?findInterval
On 5/10/07, Patnaik, Tirthankar [EMAIL PROTECTED] wrote:
Hi,
A quick beginner's question. I have two time series, A with
daily data, and another B with data at varying frequencies, but mostly
annual. Both the series are sorted ascending.
I need to merge these two series
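A sketch of the findInterval idea for this kind of lookup (hypothetical dates and values; B must be sorted ascending, as the poster says it is):

```r
a.dates <- as.Date(c("2007-01-15", "2007-06-30", "2008-03-01"))  # daily series A
b.dates <- as.Date(c("2006-12-31", "2007-12-31"))                # sparse series B
b.vals  <- c(100, 110)

idx <- findInterval(a.dates, b.dates)  # index of the last B date <= each A date
b.vals[idx]                            # 100 100 110: last observation carried forward
```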
See package polspline.
There was once a polyclass package in R, but polspline has superseded it.
On Thu, 10 May 2007, Feng Qiu wrote:
Hi everyone:
Polyclass is a polytomous logistic regression model using
linear splines and their tensor products. It provides estimates for
On 10-May-07 15:07:10, Thomas Lumley wrote:
On Thu, 10 May 2007, [EMAIL PROTECTED] wrote:
Given that you expect some cells to be small, it should not
be a severe task to draw up a list of (a1,b1) values which
correspond to rejection of the null hypothesis (that both
ORs equal 1), and then
On Thu, 10 May 2007, Patnaik, Tirthankar wrote:
Hi,
A quick beginner's question. I have two time series, A with
daily data, and another B with data at varying frequencies, but mostly
annual. Both the series are sorted ascending.
I need to merge these two series together in the
y.ticklabs?
scatterplot3d(x, y, z, xlim=c(0.2,0.7), ylim=c(0.4,0.9), zlim=c(0,1),
  y.ticklabs = c("0.4", "0.5", "0.6", "", "", "1.0"))
--- Johnson, Elizabeth [EMAIL PROTECTED] wrote:
I have a 3D scatterplot and would like to change the
displayed range along the y-axis.
for instance suppose I have:
x =
Hi all,
Please let me know how to include bayesm with R-2.4.1.
Thanks
Jomy
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
Hello!
I am trying to impute missing data and output the results of the imputation.
My data set is called: MyData.
I have a bunch of variables all of which start with Q20_ - and some of them
have missing values.
Here is what I've been doing:
imputationmodel <- mice(MyData[ c(grep("Q20_",
Hi Paul,
solution.tolerance is the right way to increase precision. In your
example, extra precision *is* being obtained, but it is just not
displayed because the number of digits which get printed is controlled
by the options(digits) variable. But the requested solution
precision is in the
Sorry, you wanted or, not and.
with(foo, pmax(x > y, x > z))
with(foo, as.numeric(x > y | x > z))
with(foo, 1*(x > y | x > z))
On 5/10/07, Gabor Grothendieck [EMAIL PROTECTED] wrote:
or
with(foo, (x > y) * (x > z))
On 5/10/07, jim holtman [EMAIL PROTECTED] wrote:
You don't need apply.
Thanks, Jasjeet, for your reply, but maybe I was not clear enough.
The analytical solution for the optimization problem is the pair
(sqrt(2)/2,sqrt(2)/2),
which, approximately, is
(0.707106781186548,0.707106781186548).
The solution provided by rgenoud, with
solution.tolerance=0.1
Hi,
I just wonder if this is a rounding error by the princomp command in R.
Although this does not make much sense, using a hypothetical dataset, a,
a <- matrix(runif(1000), 100, 10)
I did PCA with the princomp, and compared it with the results estimated
with the eigen and the prcomp commands. And
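For the archive, a sketch of why the three routes differ slightly: princomp divides by n when forming the covariance, while prcomp and cov() divide by n-1 (the signs of the loadings are also arbitrary):

```r
set.seed(1)
a <- matrix(runif(1000), 100, 10)    # 100 observations, 10 variables

ev.eigen    <- eigen(cov(a))$values  # eigenvalues of the sample covariance
ev.prcomp   <- prcomp(a)$sdev^2      # same values, up to rounding
ev.princomp <- princomp(a)$sdev^2    # these equal ev.eigen * (n-1)/n, n = 100

range(ev.eigen - ev.prcomp)          # essentially zero
```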
I don't know of any sources, but the idea is quite simple.
For each constraint that is broken, the penalty is the amount
by which the constraint is broken times a penalty rate. The
total penalty to add to the objective is the sum of penalties
over all constraints.
There is a catch or two when
I was installing the maps package using the R GUI and noticed that it's
unavailable from the PA(1) mirror, which for me resolves to
http://cran.us.r-project.org/. It's available from the other mirrors
I tried. Are the R mirrors generally in sync?
Thanks
Rory
Paul,
The general idea here is to transform a constrained problem into a sequence
of unconstrained optimization problems. When only equality constraints are
involved, a popular way to do this is to augment the objective function with
a Lagrangian term and with a quadratic penalty term. When
Hi, Patrick, Paul, et al.:
see in line
Patrick Burns wrote:
I don't know of any sources, but the idea is quite simple.
For each constraint that is broken, the penalty is the amount
by which the constraint is broken times a penalty rate. The
total penalty to add to the objective is
You need to install the package and then use
library(bayesm)
See the manual "R Installation and Administration"
on the R site for instructions for your operating
system.
If you're using Windows just go to Packages in the
menu bar.
--- Jomy Jose [EMAIL PROTECTED] wrote:
Hi all,
Please
Hi Paul,
I see. You want to increase the population size (pop.size)
option---of lesser importance are the max.generations,
wait.generations and P9 options. For more details, see
http://sekhon.berkeley.edu/papers/rgenoudJSS.pdf.
For example, if I run
a <- genoud(myfunc, nvars=2,
Dear ALL:
Could you please let me know how to read a SAS data file into R.
Thank you so much for your help.
Regards;
Abou
==
AbouEl-Makarim Aboueissa, Ph.D.
Assistant Professor of Statistics
Department of Mathematics & Statistics
University of Southern Maine
96 Falmouth
Hello,
I'm maximizing a likelihood function with the function optim, but for
different initial parameters (in the input of the optim function) I find
different values for the likelihood function and the parameter estimates; the
cause is that the algorithm has not found the global maximum
see foreign package.
but personally, I think it might be better to transfer through csv or a database.
On 5/10/07, AbouEl-Makarim Aboueissa [EMAIL PROTECTED] wrote:
Dear ALL:
Could you please let me know how to read SAS data file into R.
Thank you so much for your helps.
Regards;
Abou
Have a look at the Hmisc package: sas.get or
sasxport.get. In either case you need to have a
working SAS installation on the same machine.
Another way, although you lose labels, is simply to
export the SAS data file as a csv or delimited file and
import it.
--- AbouEl-Makarim Aboueissa
[EMAIL
Hi all
I have a bit of a problem. I want to make a barplot of some data. My
data is a score that is separated by year and by a limit (above 3 and
below 3 to calculate the score).
Year  Limit  HSS
1999  ALT    0.675
1999  VFR    0.521
2000  ALT    0.264
2000  VFR    0.295
I
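One possible way to get side-by-side bars from 'long' data like the above (a sketch; the column names are taken from the posted data): reshape into a Limit x Year matrix and hand that to barplot with beside = TRUE.

```r
d <- data.frame(Year  = c(1999, 1999, 2000, 2000),
                Limit = c("ALT", "VFR", "ALT", "VFR"),
                HSS   = c(0.675, 0.521, 0.264, 0.295))

m <- xtabs(HSS ~ Limit + Year, data = d)            # rows = Limit, columns = Year
barplot(m, beside = TRUE, legend.text = TRUE, ylab = "HSS")
```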
Yeah, I want to get all unique combinations of choosing ntest from ntotal,
for example, choosing 4000 training data from 10,000 total data.
Suppose they are sequenced as 1:10000.
One obvious combination is 1:4000.
Then I run
sample((1:10000), 4000)
it may output 4000 numbers:
1, 3, 5,
Thanks a lot, Jasjeet. That is it.
Paul
On 5/10/07, Jasjeet Singh Sekhon [EMAIL PROTECTED] wrote:
Hi Paul,
I see. You want to increase the population size (pop.size)
option---of lesser importance are the max.generations,
wait.generations and P9 options. For more details, see
On 5/10/07, Spencer Graves [EMAIL PROTECTED] wrote:
I don't know of any sources, but the idea is quite simple.
For each constraint that is broken, the penalty is the amount
by which the constraint is broken times a penalty rate. The
total penalty to add to the objective is the sum of
I know. But I am curious about how sample() works.
For a small sample size, choose 1 digit from c(0, 1):
it only has two combinations. It is easy to test that the below can happen
consecutively.
sample(c(0,1), 1)
[1] 0
sample(c(0,1), 1)
[1] 0
That means the output did not deplete all unique
Hello,
I have a problem with calculating the VaR of stockfonds.
Here the stockfonds dataset:
http://www.ci.tuwien.ac.at/~weingessel/FStat2006/stockfonds.csv
library(VaR)
library(fPortfolio)
library(e1071)
stock <- read.table("stockfonds.csv", header=TRUE, sep=",")
tstock = ts(impute(stock[,2:6]),
Dear R users and developers,
Would any of you be interested in giving a lecture about R and/or R
use cases at the following free software conference that takes place
in Amiens, France, next summer?
We already had a talk on R last year by Yves Croissant
(http://2006.rmll.info/theme_26?lang=en)
Hi,
On Thu, May 10, 2007 at 11:36:18PM +0200, lehobey wrote:
Dear R users and developers,
Would any of you be interested in giving a lecture about R and/or R
use cases at the following free software conference that takes place
in Amiens, France, next summer?
We already had a talk on R
Hello,
In my simulations, I have to use the values of the cumulative distribution
function of a multivariate normal with known mean vector and dispersion matrix.
Please, can you tell me if there is a package in R to do that?
Thank you very much for your greatly appreciated cooperation.
Bernard
Let us first assume that you have enumerated all the local maxima, which is
by no means a trivial thing to assure. How different are the likelihood
values? If they are significantly different, then take the parameter
estimates corresponding to the largest likelihood. If they are not
In my simulations, I have to use the values of the cumulative distribution
function of a multivariate
normal with known mean vector and dispersion matrix. Please, can you tell me
if there is a package in R to do that?
There are two that I know of:
mvtnorm
mnormt
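A minimal sketch with mvtnorm (made-up mean vector and dispersion matrix):

```r
library(mvtnorm)

mu    <- c(0, 1)
Sigma <- matrix(c(1,   0.5,
                  0.5, 2), 2, 2)

# P(X1 <= 1, X2 <= 2) for X ~ N(mu, Sigma)
pmvnorm(upper = c(1, 2), mean = mu, sigma = Sigma)
```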
Hi All,
I would like to input a .txt file by using read.table()
the file data.txt:
NameID
IMAGE:131suid=115221
IMAGE:100020851265
IMAGE:100033464770
IMAGE:1000365suid=99969
IMAGE:100050055421
IMAGE:100087564770
IMAGE:1000892399655
IMAGE:1000942suid=112379
On 5/10/07, Spilak,Jacqueline [Edm] [EMAIL PROTECTED] wrote:
Hi all
I have a bit of a problem. I want to make a barplot of some data. My
data is a score that is separated by year and by a limit (above 3 and
below 3 to calculate the score).
Year  Limit  HSS
1999  ALT    0.675
1999
Hi,
on a
summary(lm(y~x))
are the computed t-values two-sided or one-sided? By looking at some
tables they seem to be two-sided. Is it possible to have them
one-sided? If this makes sense...
Thanks.
Hello, Wassim:
GENERAL THEORY:
To expand on Ravi's comments, what can you tell us about the
problem? For example, if you have only 1 parameter, you can plot the
log(likelihood) over a wide enough range so you can be confident you've
covered all local maxima. Then pick the max of the
On Thu, 2007-05-10 at 15:58 -0700, Deepayan Sarkar wrote:
On 5/10/07, Spilak,Jacqueline [Edm] [EMAIL PROTECTED] wrote:
Hi all
I have a bit of a problem. I want to make a barplot of some data. My
data is a score that is separated by year and by a limit (above 3 and
below 3 to
try:
data <- read.table("data.txt", sep="\t", header=TRUE, as.is=TRUE)
On 5/10/07, Alex Tsoi [EMAIL PROTECTED] wrote:
Hi All,
I would like to input a .txt file by using read.table()
the file data.txt:
NameID
IMAGE:131suid=115221
IMAGE:100020851265
IMAGE:100033464770
I have searched the r-help files but have not been able to find an answer to
this question. I apologize if this question has been asked previously.
(Please excuse the ludicrousness of this example, as I have simplified my task
for the purposes of this help inquiry. Please trust me that
On Fri, 2007-05-11 at 00:14 +0100, [EMAIL PROTECTED] wrote:
Hi,
on a
summary(lm(y~x))
are the computed t-values two-sided or one-sided? By looking at some
tables they seem to be two-sided.
Yep.
Is it possible to have them
one-sided? If this makes sense...
Yep.
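A sketch with a built-in dataset: halve the reported two-sided p-value when the estimate has the hypothesised sign, or compute it directly from the t statistic:

```r
fit <- lm(dist ~ speed, data = cars)
cf  <- summary(fit)$coefficients

tval <- cf["speed", "t value"]
pt(tval, fit$df.residual, lower.tail = FALSE)  # one-sided p for H1: slope > 0
cf["speed", "Pr(>|t|)"] / 2                    # the same number here, since tval > 0
```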
You weren't passing in any 'x' and 'y' arguments to 'fun'.
Try this:
vars.in = data.frame(x='sample1', y='sample2')
examplefun <- function(fun, vars=vars.in, ...) {
  for(v in 1:ncol(vars)) {
    assign(names(vars)[v], vars[1,v], env=.GlobalEnv)
  }
Hi List,
Please see the following simple example which illustrates the problem. I'm
using R-2.5.0 on WinXP and
R2HTML 1.58.
Thanks,
Tao
#===== test.rnw =====
<html>
<body>
<div>
<h1 align="center">Report</h1>
<p>
<<echo=FALSE,results=html>>=
print(y)
print("\n")
print(paste("(", x,
On 10/05/2007 9:08 PM, Tao Shi wrote:
Hi List,
Please see the following simple example which illustrates the problem. I'm
using R-2.5.0 on WinXP and
R2HTML 1.58.
The code in an Sweave document needs to be self-contained. It won't see
variables in some other R session.
If you want to
Thanks, but I need to clarify.
Sometimes fFUN may take only 1 argument, other times it may take several
arguments.
How can I write the code so that the appropriate arguments are automatically
passed to fun, using the column names in vars.in as the argument names and
the contents of row1 of
Dear R users:
I am using survreg to model left-censored longitudinal data. When I
use the following code to fit the tobit model I get some output
with a warning message (highlighted in red):
survreg(Surv(y, y >= 0, type='left') ~ x + frailty(id), cytokine.data,
Try this:
vars.in = data.frame(x='sample1', y='sample2', stringsAsFactors=FALSE)
sample1 <- c(111,222,333)
sample2 <- c(444,555,666)
minus <- function(x,y) { get(x) - get(y) }
examplefun <- function(fun, vars=vars.in, ...) {
  # convert 'vars' to a list and call the function
  do.call(fun,
Achim,
Thanks so much! I should probably have explained better what I was trying
to do (just an LOCF!). As you've correctly pointed out, I'm trying
to merge two time series where the first series is daily (price data),
and the second series is irregular (earnings, balance-sheet data).
While
reply is inline
- Regards,
\\\|///
\\ -- //
( o o )
oOOo-(_)-oOOo
|
| Gaurav Yadav
| Assistant Manager, CCIL, Mumbai (India)
| Mob: +919821286118 Email: [EMAIL PROTECTED]
| Man is made by his belief, as He believes, so He is.
|
Does anyone know how to compute a components-of-variance analysis? For example, I
have the scores of pupils on a test. Each pupil belongs to a school, and within
each school I may have several classes. How can I estimate the variance of the
pupils, schools, classes and the error variance?
Any
http://finzi.psych.upenn.edu/R/library/fCalendar/html/3D-TimeDateSpecDates.html
try this also: RSiteSearch("last day of the month") to get more pointers
For 2007:
seq(as.Date('2007-02-01'), length = 12, by = "mon") - 1
Current month:
seq(as.Date(format(Sys.Date(), "%Y-%m-01")), length = 2, by = "mon")[2] - 1
Eric
Patnaik, Tirthankar wrote:
Hi,
Given a date, how do I get the last date of that month? I have
data in the form MM,