Forgot to mention: the training and testing dataframes are composed of
4 IVs (one numeric double IV and three factor IVs) and one DV
(a dichotomous factor, i.e. TRUE or FALSE).
The training dataframe consists of 48,819 rows and the test dataframe
consists of 24,408 rows.
Thanks again.
Hi everyone. I'm using the kernlab ksvm function with the rbfdot
kernel for a binary classification problem and getting a strange
result back. The predictions seem very accurate judging by the
training results reported by the algorithm, but I'm unable to generate
a confusion matrix
That is, the one giving the best trade-off between sensitivity and specificity.
On Sat, May 25, 2019 at 12:47 AM Abby Spurdle wrote:
>
> > Would be possible to automate the selection of the best value?
>
> Can you define "best", precisely?
>
>
--
Best regards,
Luigi
Dear all,
I am using kernlab to implement an SVM analysis. The model I am
building has the syntax:
`ksvm( ~ , data = , type = "C-svc", kernel =
"rbfdot", kpar = "automatic", C = , prob.model = TRUE)`
Here, I can use different values of `C` to give different costs to the
model. Each time I give a
Dear all,
I have generated a model with KERNLAB using the following steps:
the data is a dataframe df of two numerical variables x and y, and
each point is assigned a z value that is a factor of two levels:
positive and negative. The data has the structure:
> str(df)
'data.frame': 1574 obs. of 3 variables:
I'm using "Kernlab" to apply the "Weighted Nadaraya Watson" by Kato (2012)
and Hall, Wolff, and Yao (1999).
I need to find this Gaussian kernel in the weights' calculation, where
u = (x - x0):
Kh(u) = h^(-1) * K(u/h).
I used:
rbf1 <- rbfdot(sigma = NULL)
but I have to find out "sigma" as the inverse
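For a Gaussian K, the relation between the bandwidth h and rbfdot's sigma can be checked in base R. Note that rbfdot computes exp(-sigma * |x - x0|^2), so it matches K(u/h) only up to the 1/(h*sqrt(2*pi)) normalizing constant, with sigma = 1/(2*h^2); the h value below is illustrative:

```r
h <- 0.5
u <- seq(-2, 2, by = 0.1)

# Kh(u) = K(u/h) / h with Gaussian K is just the N(0, h^2) density
Kh <- dnorm(u / h) / h
stopifnot(all.equal(Kh, dnorm(u, sd = h)))

# rbfdot's exponent matches with sigma = 1 / (2 * h^2), up to normalization
sigma <- 1 / (2 * h^2)
stopifnot(all.equal(exp(-sigma * u^2), Kh * h * sqrt(2 * pi)))
```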
I am hoping there are other users of the kernlab package in R who will be able
to solve a puzzle. In the past, I've used the relevance vector machine engine
(rvm) in kernlab and was pleased to see it use all four cores on my PC (running
Windows 8).
But now it only runs on one core and I can't
I am not asking about k-means. I am asking about passing initial
assignments to the kernel k means algorithm. In kernel k-means, centroids
are not defined explicitly. I tried passing initial centroids in the
original feature space though. But, it did not work. The provided example
just sets the
...@gmail.com
To: r-help@r-project.org
Cc: Pascal Oettli kri...@ymail.com
Sent: Wednesday, 3 April 2013, 22:53
Subject: Re: [R] kernlab::kkmeans initial centers
I am not asking about k-means. I am asking about passing initial assignments to
the kernel k means algorithm. In kernel k-means
by a 3x4 matrix.
HTH,
Pascal
Hi,
I am trying to pass initial cluster assignments to the kkmeans method
(http://rss.acs.unt.edu/Rdoc/library/kernlab/html/kkmeans.html) of
kernlab. It is not clear to me how I can set the parameter
*centers* with initial cluster centers, as stated in the documentation?
thanks,
--ahmed
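Per the kkmeans help page, `centers` accepts either the number of clusters or a matrix whose rows are starting centers in the input space. A hedged sketch on stand-in data (iris and the chosen rows are illustrative assumptions):

```r
library(kernlab)

x <- as.matrix(iris[, -5])     # stand-in data, 4 numeric columns
init <- x[c(1, 60, 120), ]     # 3 x 4 matrix: one starting center per cluster
km <- kkmeans(x, centers = init, kernel = "rbfdot")
km                             # holds one cluster label per row
```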
Hi,
I would say that if you know what k-means algorithm is, you know the
meaning of initial cluster centers.
You can also check the output of the provided example.
Regards,
Pascal
On 04/03/2013 09:27 AM, Ahmed Elgohary wrote:
Hi,
I am trying to pass initial cluster assignments to the
On 26.08.2012 15:33, Reza Salimi-Khorshidi wrote:
Thanks Uwe,
Am I right that in ksvm's internal cross-validation, there is no
guarantee of having *at least one* member of each class in each subset?
That is my guess, but I haven't read the code. Please read it yourself
in case you want more
Hello everyone,
I'm trying to use a user-defined kernel. I know that kernlab supports
user-defined kernels (custom kernel functions) in R.
I used the spam data included in the kernlab package
(number of variables = 58, number of examples = 4061).
My user-defined kernel has the form:
kp <- function(d, e) {
as <- v * d
bs <- v * e
On 25.08.2012 02:12, Reza Salimi-Khorshidi wrote:
Dear Uwe,
I appreciate that if you let me know why, when using the attached file,
the following script (two lines) doesn't work once in 10s of times.
Best, Reza
svm.pol4 <- ksvm(class.labs ~ ., data = train.data, prob.model = T, scale =
T, kernel =
Thanks Uwe,
Am I right that in ksvm's internal cross-validation, there is no guarantee
of having *at least one* member of each class in each subset?
Some randomness is involved, and when you get an unfortunate subsample
(e.g. if in the internal cross-validation one class is not selected at all)
it
Dear Uwe,
I appreciate that if you let me know why, when using the attached file, the
following script (two lines) doesn't work once in 10s of times.
Best, Reza
svm.pol4 <- ksvm(class.labs ~ ., data = train.data, prob.model = T, scale =
T, kernel = polydot)
svm.pol.prd4 <- predict(svm.pol4,
Dear list,
I am using the ksvm function from kernlab as follows:
(1) learning
svm.pol4 <- ksvm(class.labs ~ ., data = train.data, prob.model = T, scale
= T, kernel = polydot)
(2) prediction
svm.pol.prd4 <- predict(svm.pol4, train.data, type = "probabilities")[,2]
But unfortunately, when calling
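A hedged, runnable version of that fit/predict pair on stand-in data (the original train.data is not shown, so a two-class frame built from iris is substituted; note the argument is spelled `scaled` in ksvm, which the post's `scale =` reaches via partial matching):

```r
library(kernlab)

# Illustrative stand-in for the post's train.data
train.data <- data.frame(class.labs = factor(iris$Species == "setosa"),
                         iris[, 1:4])
svm.pol4 <- ksvm(class.labs ~ ., data = train.data, prob.model = TRUE,
                 scaled = TRUE, kernel = "polydot")
# column 2 holds the probability of the second class level, one per row
svm.pol.prd4 <- predict(svm.pol4, train.data, type = "probabilities")[, 2]
```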
On 19.08.2012 11:06, Reza Salimi-Khorshidi wrote:
Dear list,
I am using the ksvm function from kernlab as follows:
(1) learning
svm.pol4 <- ksvm(class.labs ~ ., data = train.data, prob.model = T, scale
= T, kernel = polydot)
(2) prediction
svm.pol.prd4 <- predict(svm.pol4, train.data, type
Hi Uwe,
I can attach the data file to an email or send you a link so you can
download it. Which one do you prefer?
Thanks for your help ...
Best, Reza
On Sun, Aug 19, 2012 at 4:10 PM, Uwe Ligges lig...@statistik.tu-dortmund.de
wrote:
On 19.08.2012 11:06, Reza Salimi-Khorshidi wrote:
Dear
Hi!
The kernlab function kpca() mentions that new observations can be transformed
by using predict. There's also an example in the documentation, but as you can
see I am getting an error there (as I do with my own data). I'm not sure what's
wrong at the moment. I haven't any predict functions
Hm.. seems like it's a problem with loading it in the profile..
If I load it again in the console it works fine. Must have something to do with
the masking.
--
I changed the load order with the package that required stats and did the
masking, and it works now, so never mind.
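For reference, the pattern from the kpca help page looks roughly like the following (the sigma value and the train/test split are the help page's illustrative choices):

```r
library(kernlab)

test <- sample(1:150, 20)
kpc <- kpca(~ ., data = iris[-test, -5], kernel = "rbfdot",
            kpar = list(sigma = 0.2), features = 2)
proj <- predict(kpc, iris[test, -5])   # projects the 20 held-out rows
```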
Hi!
How do I get to the source code of kpca, or even better predict.kpca (which it
tells me doesn't exist, but should)?
(And if anyone has too much time:
Now, if I got that right, the @pcv slot consists of the principal
components, and for kpca these are defined as projections of some
Hi Jessica,
On Thu, Apr 26, 2012 at 11:59 AM, Jessica Streicher
j.streic...@micromata.de wrote:
Hi!
how do i get to the source code of kpca or even better predict.kpca(which it
tells me doesn't exist but should) ?
Probably you have to do kernlab:::predict.kpca from your R workspace,
but why
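Since kpca's predict is an S4 method rather than a visible `predict.kpca` function, the standard methods tools show the source; a short sketch:

```r
library(kernlab)

showMethods("predict")                    # lists classes with predict methods
getMethod("predict", signature = "kpca")  # prints the body of the kpca method
```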
Thanks a lot, totally forgot CRAN there.
Hm.. so they're multiplying a specifically computed kernel matrix with the
pcv's.. interesting.. too tired to check the math there, guess I'll just accept
it's possible and go to sleep.
Am 26.04.2012 um 18:10 schrieb Steve Lianoglou:
Hi Jessica,
Hi,
I am trying to perform relevance vector machines with the rvm-function from
kernlab.
On one dataset I get this message:
Setting default kernel parameters
Error in if (length(data) != vl) { :
  missing value where TRUE/FALSE needed
Calls: rvm ...
RMate stopped at line 0 of selection
Hi,
For another trainingset I get this error message, which again is rather cryptic
to me:
Setting default kernel parameters
Error in array(0, c(n, p)) : 'dim' specifies too large an array
Calls: rvm ... .local -> backsolve -> as.matrix -> chol -> diag ->
RMate stopped at line 0 of selection
I am using a linear kernel (vanilladot).
By switching the kernel, I actually get rid of the error message, but I would
like to stick to the linear one ...
On 13.02.2012, at 16:23, Martin Batholdy wrote:
Hi,
For another trainingset I get this error message, which again is rather
cryptic
On Feb 13, 2012, at 10:23 AM, Martin Batholdy wrote:
Hi,
For another trainingset I get this error message, which again is
rather cryptic to me:
Just imagine how it seems to us!
Setting default kernel parameters
Error in array(0, c(n, p)) : 'dim' specifies too large an array
RMate
OK, I am sorry.
My training set consists of a 60 x 204 matrix (independent_training – 204
features).
I have 60 continuous labels (dependent_training, ranging from 2.25 to 135).
This is all the code I use:
library(kernlab)
rvm(as.matrix(independent_training), dependent_training, type = "regression",
Hi,
On Mon, Feb 13, 2012 at 10:53 AM, Martin Batholdy
batho...@googlemail.com wrote:
Ok, I am sorry,
My trainingset consists of a 60 x 204 matrix (independent_training – 204
features).
I have 60 continuous labels (dependent_training, ranging from 2.25 to 135).
this is all the code I use:
Sorry, this:
options(error=utils:::dum.frames)
Should be:
options(error=utils:::dump.frames)
-steve
--
Steve Lianoglou
Graduate Student: Computational Systems Biology
| Memorial Sloan-Kettering Cancer Center
| Weill Medical College of Cornell University
Contact Info:
Hello all,
I'm trying to run a grid parameter search for an SVM.
Therefore I'm using the ksvm function from the kernlab package:
svp <- ksvm(Ktrain, ytrain, type = "nu-svc", nu = C)
The problem is that the optimization algorithm does not return
for certain parameters.
I tried to use
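One hedged workaround is to wrap each fit in setTimeLimit()/tryCatch() so that a parameter value whose optimizer never returns is skipped rather than hanging the whole search. The timeout and the nu grid are illustrative assumptions, and the limit only bites if the solver reaches an interrupt checkpoint:

```r
library(kernlab)

fit_one <- function(nu, timeout = 60) {
  setTimeLimit(elapsed = timeout, transient = TRUE)
  on.exit(setTimeLimit(elapsed = Inf))
  # infeasible nu values and timeouts both surface as errors and yield NULL
  tryCatch(ksvm(Ktrain, ytrain, type = "nu-svc", nu = nu),
           error = function(e) NULL)
}
fits <- lapply(seq(0.05, 0.5, by = 0.05), fit_one)
```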
Hi all,
Can anyone tell me what the kernlab ipop return value `dual` is? How
does it relate to the solution of a Support Vector Machine?
I am trying to use the ipop solver in my (toy) example of a Support
Vector Machine and I am noting that the bias (essentially the offset for
Hi there,
I'm trying to do a regression experiment on a multidimensional
dataset where both x and y in the model are multidimensional
vectors.
I'm using R version 2.9.2, updated packages, on a Linux box.
I've tried gausspr(), ksvm() and rvm(), and the models are
computed fine, but I'm always
Hi,
I am trying to use the splinedot kernel as part of the kernlab package, but
I get the following error:
Error in votematrix[i, ret < 0] <- votematrix[i, ret < 0] + 1 :
  NAs are not allowed in subscripted assignments
The parameters that I have used to build the model are:
hi,
I am using R's kernlab package; specifically, I am doing classification using
ksvm(.) and predict.ksvm(.). I want to use a custom kernel, but I am getting
an error.
# The following R code works (with the promotergene dataset):
library(kernlab)
s <- function(x, y) {
  sum((x * y)^1.25)
}
class(s) <- "kernel"
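A hedged, self-contained version of that pattern on numeric stand-in data (the post used promotergene; a two-class iris subset is substituted here purely for illustration):

```r
library(kernlab)

s <- function(x, y) sum((x * y)^1.25)
class(s) <- "kernel"   # ksvm only accepts a function flagged as class "kernel"

d    <- droplevels(iris[1:100, ])   # two-class numeric subset
idx  <- sample(1:100, 70)
fit  <- ksvm(Species ~ ., data = d[idx, ], kernel = s)
pred <- predict(fit, d[-idx, ])
```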
Hi, this is a question about the R package kernlab.
I use kernlab as a library in a C++ program. The host application
defines a graph kernel (defined by me), generates a gram matrix and
trains kernlab directly on this gram matrix, like this:
regm <- ksvm(K, y, kernel = "matrix"),
where K is the n x n
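For reference, the precomputed-matrix interface per the ksvm docs looks roughly like this; the linear gram matrix and the iris subset are stand-ins for the post's graph kernel:

```r
library(kernlab)

x <- as.matrix(iris[1:100, -5])
y <- droplevels(iris$Species[1:100])
K <- as.kernelMatrix(x %*% t(x))      # n x n gram matrix
regm <- ksvm(K, y, kernel = "matrix")
# prediction needs the test-vs-support-vector block of the kernel matrix:
Ktest <- as.kernelMatrix(x %*% t(x[SVindex(regm), , drop = FALSE]))
predict(regm, Ktest)
```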
Hello list,
I am faced with a two-class classification problem with highly asymmetric
class sizes (class one: 99%, class two: 1%).
I'd like to obtain a class probability model, also introducing available
information on the class prior.
Calling kernlab/ksvm with the line
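One knob for this in ksvm is `class.weights`, a named vector keyed by the factor levels. A hedged sketch (the data, level names, and weight values are illustrative assumptions):

```r
library(kernlab)

# weight the 1% class up by roughly the inverse class frequency
fit <- ksvm(y ~ ., data = train, type = "C-svc", kernel = "rbfdot",
            prob.model = TRUE,
            class.weights = c("one" = 1, "two" = 99))
```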