([svyglm object]) command, does anyone know of a good reference on how
to effectively use Pearson residuals for model diagnostics (given that deviance residuals are unavailable)?
Many thanks.
Marko Stojovic
MSc Applied Statistics student, Birkbeck College, London
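For anyone landing on this thread: a minimal sketch of extracting Pearson residuals from a svyglm fit, using the survey package's bundled api data (the model and variables here are chosen only for illustration, not taken from the original post):

```r
library(survey)                     # provides svyglm() and the 'api' example data
data(api)

# stratified design from the package's standard example
dstrat <- svydesign(id = ~1, strata = ~stype, weights = ~pw,
                    data = apistrat, fpc = ~fpc)
fit <- svyglm(sch.wide ~ ell + meals, design = dstrat,
              family = quasibinomial())

pr <- residuals(fit, type = "pearson")   # Pearson residuals
plot(fitted(fit), pr)                    # basic residuals-vs-fitted check
```

Since svyglm objects inherit from glm, the usual `residuals(..., type = "pearson")` accessor applies.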
Yes. That's it.
Thanks a lot, really.
Marko
On 02/27/2015 02:46 PM, David Winsemius wrote:
On Feb 27, 2015, at 4:49 AM, marKo mton...@ffri.hr wrote:
Gee. That simple. I knew that!
Thanks a lot.
Essentially, I needed only the diagonal elements.
Easily solved by:
diag(outer(X = v1, Y = v2, FUN = fV))
I am sure that there are simpler options, but that works like a charm.
Thanks a lot.
Cheers,
Marko
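One such simpler option (sketched with a made-up fV, since the real function isn't shown in the thread): mapply() applies the function elementwise and avoids building the full outer-product matrix only to throw away its off-diagonal entries:

```r
# hypothetical stand-in for the thread's fV, which isn't shown
fV <- function(x, y) x^2 + y
v1 <- 1:4
v2 <- c(10, 20, 30, 40)

d1 <- diag(outer(X = v1, Y = v2, FUN = fV))  # builds the full 4 x 4 matrix first
d2 <- mapply(fV, v1, v2)                     # elementwise, no intermediate matrix
```

For long vectors the mapply() form saves the O(n^2) memory that outer() allocates.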
appreciated.
Thanks,
Marko
Marko
--
Marko Tončić
Assistant Researcher
University of Rijeka
Faculty of Humanities and Social Sciences
Department of Psychology
Sveučilišna avenija 4, 51000 Rijeka, Croatia
*lam2+v3*lam3+v4*lam4 # replace each loading name with the actual loading (number), or extract them from the objectiveML object (they are located in model.sem[[15]])
Note that those loadings are unstandardized and that the resulting
variable will not be standardized.
Hope it helps
Regards,
Marko
Hello,
Does an R package exist for communicating with instruments using the GPIB (IEEE 488) protocol?
Something similar exists in Python for controlling instruments over GPIB, RS232, or USB buses (http://pyvisa.sourceforge.net/).
Thanks,
Marko
0.5 ...
1951 1 11 1.3 -0.17
1951 1 22 2.1 Mean (Typ2 1951)
I hope you can help me solve this problem.
Best regards,
Marko
--
View this message in context:
http://r.789695.n4.nabble.com/Mean-and-Timeseries-modelling-tp3686326p3686326.html
Sent from
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
The only solution I can see is fitting all possible 2-factor models with interactions and then assessing whether the interaction term is significant...
any more ideas?
Milicic B. Marko wrote:
I have a huge data set with thousands of variables and one binary
variable. I know that most
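That brute-force idea can be sketched as follows. The data here are simulated stand-ins, and a likelihood-ratio test via anova() is one reasonable reading of "is the interaction significant" (with thousands of variables, a multiplicity correction on the resulting p-values would be essential):

```r
set.seed(1)
d <- data.frame(y  = rbinom(200, 1, 0.5),
                x1 = rnorm(200), x2 = rnorm(200), x3 = rnorm(200))
preds <- c("x1", "x2", "x3")

# for every pair of predictors, compare main-effects vs interaction model
pvals <- sapply(combn(preds, 2, simplify = FALSE), function(p) {
  m0 <- glm(reformulate(p, response = "y"), data = d, family = binomial)
  m1 <- glm(reformulate(paste(p, collapse = " * "), response = "y"),
            data = d, family = binomial)
  anova(m0, m1, test = "Chisq")[2, "Pr(>Chi)"]   # LRT p-value for the interaction
})
```

With p predictors this fits choose(p, 2) model pairs, so it only scales to a prefiltered candidate set.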
Dear R Helpers,
At the moment I'm working on a project to implement an optimal binning
function. It will be primarily used as a tool for logistic regression.
Something very similar to
http://www2.sas.com/proceedings/forum2008/153-2008.pdf but applied in a
different problem space...
The
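The message is cut off, but the core building block of such binning tools can be sketched: equal-frequency ("fine classing") bins plus a weight-of-evidence value per bin. Everything below is simulated for illustration:

```r
set.seed(1)
x <- rnorm(1000)
y <- rbinom(1000, 1, plogis(x))          # binary outcome related to x

# 5 equal-frequency bins via quantile breakpoints
bins <- cut(x, breaks = quantile(x, probs = seq(0, 1, 0.2)),
            include.lowest = TRUE)

# weight of evidence per bin: log of (share of events / share of non-events)
tab <- table(bins, y)
woe <- log((tab[, "1"] / sum(tab[, "1"])) /
           (tab[, "0"] / sum(tab[, "0"])))
```

An "optimal" binning routine would then merge adjacent bins to enforce monotone WoE or maximize information value; this sketch shows only the scoring step.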
Hi all R helpers,
I'm trying to come up with a nice and elegant way of detecting consecutive
increases/decreases in a sequence of numbers. I'm trying a combination
of the which() and diff() functions, but unsuccessfully.
For example:
sq <- c(1, 2, 3, 4, 4, 4, 5, 6, 5, 4, 3, 2, 1, 1, 1, 1, 1)
I'd
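A sketch of one way to finish that diff() idea: take the sign of each step and run-length encode it, so stretches of consecutive increases or decreases become runs:

```r
sq <- c(1, 2, 3, 4, 4, 4, 5, 6, 5, 4, 3, 2, 1, 1, 1, 1, 1)

steps <- sign(diff(sq))   # +1 = increase, -1 = decrease, 0 = flat
runs  <- rle(steps)       # groups consecutive identical signs

# longest run of consecutive decreases
longest_decrease <- max(runs$lengths[runs$values == -1])
longest_decrease          # 5 decreasing steps: the descent 6,5,4,3,2,1
```

The positions of each run can be recovered with cumsum(runs$lengths) if the start indices are needed.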
Dear R helpers,
I'm trying to build a logistic regression model on a large dataset with 360 factors and
850 observations. All 360 factors are known to be good predictors of the outcome
variable, but I have to find the best model with at most 10 factors. I tried to
fit the full model and use the stepAIC function to get the best
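For what it's worth, a forward-selection sketch that caps the model size: stepAIC()'s steps argument limits the number of forward steps, and since each forward step adds one term, steps = 10 bounds the model at 10 factors. The data are simulated, with 20 predictors standing in for the 360:

```r
library(MASS)                               # for stepAIC()
set.seed(1)
n <- 850; p <- 20                           # 20 predictors stand in for 360
X <- as.data.frame(matrix(rnorm(n * p), ncol = p))
X$y <- rbinom(n, 1, plogis(X$V1 - X$V2))    # outcome driven by V1 and V2

null  <- glm(y ~ 1, data = X, family = binomial)
upper <- reformulate(paste0("V", 1:p), response = "y")

# forward selection from the empty model; steps = 10 caps the model size
fit <- stepAIC(null, scope = list(upper = upper),
               direction = "forward", steps = 10, trace = FALSE)
```

Starting from the null model is also much cheaper than starting stepwise search from a 360-term full model.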
Thank you very much...
That was helpful.
On Jan 15, 2008 12:58 AM, Charles C. Berry [EMAIL PROTECTED] wrote:
On Mon, 14 Jan 2008, Marko Milicic wrote:
Dear all,
I'm trying to process HUGE datasets with R. It's very fast, but I would like
to optimize it a bit more by focusing on one column at a time. Say the file
is 1 GB and has 100 columns. In order to prevent out-of-memory
problems I need to load one column at a time; the only
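One way to do that with base R (a sketch; the demo file and the column index are made up): passing "NULL" in colClasses makes read.table() skip a column entirely, so only the wanted column is ever parsed into memory:

```r
# build a small demo file standing in for the 1 GB dataset
tmp <- tempfile()
m <- as.data.frame(matrix(rnorm(100 * 100), ncol = 100))
write.table(m, tmp, row.names = FALSE)

keep <- 7                           # the one column we want
classes <- rep("NULL", 100)         # "NULL" = skip this column on read
classes[keep] <- "numeric"
col7 <- read.table(tmp, header = TRUE, colClasses = classes)[[1]]
```

Looping keep over 1:100 then processes the file one column at a time, trading repeated file scans for bounded memory.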
Dear R users,
I'm a new but already fascinated R user, so please forgive my
ignorance. I have a problem; I read most of the help pages but couldn't
find the solution. The problem follows:
I have a large data set, 10,000 rows and more than 100 columns... Say
something like