Hi List,
How would I best go about identifying the variables that contribute most to
the specific clusters?
e.g. using either agglomerative or partitioning methods, but with mixed
variables (i.e. including categorical), e.g.:
t1 <- factor(as.integer(runif(min = 1, max = 5, nrow(USArrests))))
I am not sure whether this helps, but you are perhaps looking at variable
selection. There is a 2006 JASA paper by Raftery and Dean which may
help.
Many thanks,
Ranjan
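One simple, package-free way to get at this is to profile each variable within each cluster: variables whose between-cluster variance share is high are driving the partition. A minimal illustrative sketch in base R, using kmeans on USArrests (the data set mentioned above) rather than the original poster's mixed data:

```r
# Cluster USArrests and ask, per variable, what share of its variance
# is explained by the cluster labels (high share = big contributor).
set.seed(1)
scaled <- scale(USArrests)
km <- kmeans(scaled, centers = 3, nstart = 10)

profile <- sapply(colnames(scaled), function(v) {
  tot    <- sum((scaled[, v] - mean(scaled[, v]))^2)
  within <- sum(tapply(scaled[, v], km$cluster,
                       function(x) sum((x - mean(x))^2)))
  1 - within / tot  # between-cluster share of this variable's variance
})
print(sort(profile, decreasing = TRUE))
```

For categorical variables the analogous profile would compare within-cluster frequency tables against the overall table.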
On Fri, 27 Jul 2007 17:32:02 +1000 [EMAIL PROTECTED] wrote:
Hi all,
I have a dataset with numeric and factor columns of data which I developed a
Gower Dissimilarity Matrix for (Daisy) and used Agglomerative Nesting
(Agnes) to develop 20 clusters.
I would like to use the 20 clusters to determine cluster membership for a
new dataset (using predict) but
will be classified into.
Neil
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Scott Bearer
Sent: Monday, July 23, 2007 1:39 PM
To: r-help@stat.math.ethz.ch
Subject: [R] Cluster prediction from factor/numeric datasets
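agnes() has no predict method, but a common workaround is to assign each new observation to the cluster of its nearest old observation (or nearest medoid) under the same Gower dissimilarity. A sketch with hypothetical mixed data standing in for Neil's; only `daisy`/`agnes`/`cutree` are real calls, the data and k are invented:

```r
library(cluster)

# Toy mixed data standing in for the original (numeric + factor columns)
set.seed(42)
old <- data.frame(x = rnorm(60), g = factor(sample(letters[1:3], 60, TRUE)))
new <- data.frame(x = rnorm(5),  g = factor(sample(letters[1:3], 5, TRUE),
                                            levels = levels(old$g)))

cl <- cutree(as.hclust(agnes(daisy(old))), k = 4)  # the existing clustering

# Gower dissimilarities between new and old cases: daisy() works on one
# data frame, so bind the frames, then index out the new-vs-old block.
D <- as.matrix(daisy(rbind(old, new)))
d_new_old <- D[nrow(old) + seq_len(nrow(new)), seq_len(nrow(old)), drop = FALSE]

# Nearest-neighbour assignment of each new case:
pred <- cl[apply(d_new_old, 1, which.min)]
print(pred)
```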
Dear list
Is v-fold cross-validation for cluster analysis available in R?
If anyone can point me towards the appropriate link I would
greatly appreciate it.
Graham
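I am not aware of a ready-made v-fold cross-validation for clustering in base R (clusterboot in package fpc does resampling-based validation). A hand-rolled v-fold stability sketch for kmeans, base R only and purely illustrative: cluster each training fold, assign the held-out points to the nearest centroid, and measure pairwise agreement with the full-data clustering:

```r
set.seed(7)
x <- scale(USArrests)
k <- 3; v <- 5
full  <- kmeans(x, k, nstart = 10)$cluster
folds <- sample(rep(1:v, length.out = nrow(x)))

agree <- sapply(1:v, function(f) {
  km   <- kmeans(x[folds != f, , drop = FALSE], k, nstart = 10)
  test <- x[folds == f, , drop = FALSE]
  # nearest-centroid assignment of the held-out fold
  pred <- apply(test, 1, function(p)
    which.min(colSums((t(km$centers) - p)^2)))
  # label-free agreement: do pairs of held-out points co-cluster
  # in both the fold-based and the full clustering?
  ref <- full[folds == f]
  same_pred <- outer(pred, pred, "==")
  same_ref  <- outer(ref,  ref,  "==")
  mean(same_pred[upper.tri(same_pred)] == same_ref[upper.tri(same_ref)])
})
print(round(mean(agree), 2))
```

Values near 1 across folds suggest a stable clustering for that k.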
__
R-help@stat.math.ethz.ch mailing
Hi R users:
Is there any R implementation of a clustering procedure
for large data sets (like clara()) but with a dissimilarity
that can handle continuous, categorical and
nominal variables (like daisy()), such as
CLARANS (Clustering Large Applications
based upon RANdomized Search), for example?
Thank you for your
It seems nobody else was willing to help here
(when the original poster did not at all follow the posting
guide).
In the meantime, someone else has asked me about part of this,
so let me answer in public:
MM == Martin Maechler [EMAIL PROTECTED]
on Mon, 12 Mar 2007 17:23:30 +0100 writes:
Hi Martin,
In using the Cluster Package, I have results for PAM and DIANA
clustering algorithms (below part and hier objects):
part <- pam(trout, bestk)   # PAM results
hier <- diana(trout)        # DIANA results
GeneNames <-
Hi Vallejo,
I'm pretty busy currently, and feel your question has much more
to do with how to use R more generally than with using the
functions from the cluster package.
So you may get help from other R-help readers,
but maybe only after you have followed the posting-guide
and give a
Dear R-help,
In performing cluster analysis (packages: hopach, cluster, boot, and
many others), I got these errors:
makeoutput(kidney, gene.hobj, bobj, file = "kidney.out", gene.names = gene.acc)
Error: could not find function "makeoutput"
boot2fuzzy(kidney, bobj, gene.hobj, array.hobj,
Hello,
I would like to know if there is a function in an R package that
allows one to perform cluster analysis under contiguity constraints.
Thank you very much for your answer !
Lise Bellanger
--
Lise Bellanger,
Université de Nantes
Département de Mathématiques, Laboratoire Jean
Suppose we have 3 people: Francis, Cedric and Nina. Based on what
they have eaten, we want to cluster people into diet / non-diet groups.
# original data file, named food.csv
Francis|potato
Francis|chocolate
Francis|chocolate
Francis|milk
Cedric|vegetable
Cedric|vegetable
Cedric|potato
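One way to get from this long person|food format to a clustering, sketched in base R. Nina's rows are cut off above, so two illustrative ones are invented here:

```r
# Long person|food records -> person-by-food count table -> clustering
txt <- "Francis|potato
Francis|chocolate
Francis|chocolate
Francis|milk
Cedric|vegetable
Cedric|vegetable
Cedric|potato
Nina|vegetable
Nina|milk"
food <- read.table(text = txt, sep = "|", col.names = c("person", "item"))

tab <- table(food$person, food$item)  # rows = people, cols = foods eaten
hc  <- hclust(dist(tab))              # Euclidean distance on count profiles
grp <- cutree(hc, k = 2)              # e.g. a diet / non-diet split
print(grp)
```

With real data, `read.table("food.csv", sep = "|", ...)` replaces the inline text.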
Hi list,
I am doing cluster analysis using the cluster package. I created a dendrogram
using the function plot(agnes(myData)). When I try to change the size of
the labels, it does not work. I tried cex = 1.5, etc. Nothing worked. Can anyone
give me a hint on how to change the size of the labels,
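One approach that works for hclust/agnes trees is to convert to a dendrogram and set the label size through nodePar (setting cex on the plot call tends to scale more than just the labels). A sketch on built-in data:

```r
hc <- hclust(dist(USArrests))
d  <- as.dendrogram(hc)
# lab.cex controls leaf-label size; pch = NA suppresses node symbols
plot(d, nodePar = list(lab.cex = 0.6, pch = NA))
```

For an agnes object, `as.dendrogram(as.hclust(agnes(myData)))` gets you to the same place.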
Dear All,
a long time ago I ran a cluster analysis where the dissimilarity matrix used
consisted of Dmax (or Kolmogorov-Smirnov distance) values. In other words
the maximum difference between two cumulative proportion curves. This all
worked very well indeed. The matrix was calculated using
Dear Kris,
a) how would one go about calculating the matrix of Dmax/KS distance values?
Hmm, I'd implement this directly by comparing the curves on a dense
sequence of equidistant points over a given value
range (hope you know a suitable one) and looking for the maximum
difference...
b) of
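The comparison-on-a-grid idea above can be written directly in base R; a sketch for a list of samples, taking the maximum absolute difference between their empirical cumulative distribution curves on a common equidistant grid (the samples here are invented):

```r
# Pairwise Kolmogorov-Smirnov (Dmax) distance matrix via ecdf on a grid
set.seed(1)
samples <- list(a = rnorm(100), b = rnorm(100, 1), c = runif(100))

grid   <- seq(min(unlist(samples)), max(unlist(samples)), length.out = 512)
curves <- sapply(samples, function(s) ecdf(s)(grid))  # 512 x 3 matrix

n <- length(samples)
D <- matrix(0, n, n, dimnames = list(names(samples), names(samples)))
for (i in 1:(n - 1)) for (j in (i + 1):n)
  D[i, j] <- D[j, i] <- max(abs(curves[, i] - curves[, j]))

hc <- hclust(as.dist(D))  # the matrix feeds straight into hclust/agnes
print(round(D, 3))
```

A finer grid tightens the approximation to the exact Dmax.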
On Wed, 18 Oct 2006, Weiwei Shi wrote:
Dear Chris:
I tried to use cor+1 but it still gives me sil width 0 for average.
Well, then it seems that the clustering is not that good.
I don't know your data and there is no theoretical reason why it has to
be positive. You should read the Kaufman
Dear Weiwei,
1. Is there a way of evaluating the effectiveness (or separation) of
clustering (rather than by visualization)?
The function cluster.stats in package fpc computes several cluster
validation statistics (among them the average silhouette width).
Function clusterboot in the same package
Dear Christian:
This is really a good summary. Most of my prev experience was on
classification instead of clustering and this is really a good start
for me. Thank you!
And also hope someone can provide more info and answers to the other questions.
cheers,
weiwei
On 10/18/06, Christian Hennig
Dear Weiwei,
btw, ?cluster.stats does not work on my Mac machine.
version
_
platform i386-apple-darwin8.6.1
arch i386
os darwin8.6.1
system i386, darwin8.6.1
status
major 2
minor 3.1
year 2006
month
Dear Chris:
thanks for the prompt reply!
You are right, the dist from Pearson has negatives there, so I should
use cor+1 in my case (since negatively correlated genes should be
considered farthest). Thanks.
as to the ?cluster.stats, I double-checked it and I found I need to
restart my JGR, until
Dear Chris:
I have a sample like this
dim(dd.df)
[1] 142 28
and I want to cluster rows;
first of all, I followed the examples for cluster.stats() by
d.dd <- dist(dd.df)             # use Euclidean distance
d.4  <- cutree(hclust(d.dd), 4) # the 4 clusters I tried
cluster.stats(d.dd, d.4)        # gives me some results like this:
hi,
is there some good summary on clustering methods in R? It seems there
are many packages involving it.
And I have two questions on clustering here:
1. Is there a way of evaluating the effectiveness (or separation) of
clustering (rather than by visualization)?
2. Is there a search method (like
On 10/17/06, Weiwei Shi [EMAIL PROTECTED] wrote:
is there some good summary on clustering methods in R? It seems there
are many packages involving it.
Gabor provided this very useful link a couple of days back.
http://cran.r-project.org/src/contrib/Views/Cluster.html
jab
--
John
hi,
I just happened to find that page. But it seems too brief to me. For
example, my project involves an undetermined cluster number and
undetermined attributes for the to-be-clustered samples. What
kind of methods should I start with?
Thanks a lot for the prompt reply.
W.
On 10/17/06,
Go to the R home page (google for R), click on CRAN in the left pane, choose
a mirror, click on Task Views in left pane and choose
Cluster.
On 10/17/06, Weiwei Shi [EMAIL PROTECTED] wrote:
hi,
is there some good summary on clustering methods in R? It seems there
are many packages involving it.
And
Hi list,
I am interested in cluster analysis of microarray data. The data was generated
using cDNA method and a loop design.
I was wondering if any one has a suggestion about which package I can use to
analyse such data.
Many thanks in advance
Mahdi
--
---
Mahdi Osman [EMAIL PROTECTED] writes:
Hi list,
I am interested in cluster analysis of microarray data. The data was
generated using cDNA method and a loop design.
I was wondering if any one has a suggestion about which package I can
use to analyse such data.
There are many packages
Wade == Wade Wall [EMAIL PROTECTED]
on Fri, 14 Jul 2006 10:10:11 -0400 writes:
Wade I am trying to run a cluster analysis using Sorenson
Wade (Bray-Curtis) distance measure with flexible beta
Wade linkage method. However, I can't seem to find
Wade flexible beta in any of
Hi all,
I am trying to run a cluster analysis using Sorenson (Bray-Curtis) distance
measure with flexible beta linkage method. However, I can't seem to find
flexible beta in any of the functions/packages I have looked at.
Any help would be appreciated.
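For what it's worth, agnes() in the recommended cluster package does offer flexible (Lance-Williams) linkage via method = "flexible", with par.method supplying the coefficients; the Bray-Curtis dissimilarity itself would have to come from elsewhere (e.g. vegdist in package vegan). A sketch with an ordinary Euclidean distance matrix standing in for Bray-Curtis:

```r
library(cluster)

d <- dist(USArrests)  # stand-in; substitute a Bray-Curtis dissimilarity here
beta <- -0.25
# Lance-Williams flexible linkage: single coefficient alpha = (1 - beta) / 2
hc  <- agnes(d, method = "flexible", par.method = (1 - beta) / 2)
grp <- cutree(as.hclust(hc), k = 4)
print(table(grp))
```

beta = -0.25 (alpha = 0.625) is the classic "flexible beta" default in the ecology literature.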
R Users
I'm working on a project where I need to test the accuracy of all the
clustering methods available in R. I was searching for some benchmark
datasets to compare the results but couldn't find any sets where the
clusters are represented along with the data.
Does anyone know if there is
Hi All,
Except the Rand Index, Dunn Index and Silhouette width, are there
other cluster validation methods in R? Could you please also specify the
function?
Thanks!
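Beyond those three, cluster.stats() in package fpc bundles several more (Calinski-Harabasz, within/between ratios, etc.). As a minimal example of one of the indices you mention, the average silhouette width via the recommended cluster package:

```r
library(cluster)

x  <- scale(USArrests)
d  <- dist(x)
cl <- cutree(hclust(d), k = 3)

sil <- silhouette(cl, d)
avg_width <- summary(sil)$avg.width  # in (-1, 1); higher is better
print(round(avg_width, 3))
```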
Linda;
You might want to look at the package ade4, and in particular the function
dist.binary. Although you have mentioned the Rand Index, I would suggest that
you look at the Rand Index corrected for chance agreement, as it measures
the agreement between two clusterings resulting from two
Hi there,
I'm trying clustering methods on flow cytometry data. We want to
evaluate the clustering results and compare the validation methods. So
far the cluster validation functions I found in R are:
cluster.stats{fpc}
cl_agreement{clue}
Are there other validation functions in R?
Hello,
I'm playing around with cluster analysis, and am looking for methods to
select the number of clusters. I am aware of methods based on a 'pseudo
F' or a 'pseudo T^2'. Are there packages in R that will generate these
statistics, and/or other statistics to aid in cluster number
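The pseudo-F (Calinski-Harabasz) statistic is easy enough to compute by hand: F = [B/(k-1)] / [W/(n-k)], with B and W the between- and within-cluster sums of squares. An illustrative base-R sketch, scanned over k for a kmeans partition:

```r
# Calinski-Harabasz pseudo-F for a given clustering
ch_index <- function(x, cl) {
  x <- as.matrix(x); n <- nrow(x); k <- length(unique(cl))
  gm <- colMeans(x)
  W <- sum(sapply(split(seq_len(n), cl), function(idx)
    sum(scale(x[idx, , drop = FALSE], scale = FALSE)^2)))  # within SS
  B <- sum((x - matrix(gm, n, ncol(x), byrow = TRUE))^2) - W # between SS
  (B / (k - 1)) / (W / (n - k))
}

set.seed(1)
x  <- scale(USArrests)
ch <- sapply(2:6, function(k) ch_index(x, kmeans(x, k, nstart = 10)$cluster))
names(ch) <- 2:6
print(round(ch, 1))  # pick k at (or near) the maximum
```

The same function works on cutree() output from hclust, which answers the per-merge use case approximately.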
Have you checked the amap package? It has been updated just recently and if
I am not wrong there is a method which indicates the best number of k groups
for your data.
Best wishes,
P. Olsson
2006/2/5, John Janmaat [EMAIL PROTECTED]:
Hello,
I'm playing around with cluster analysis, and am
Hi,
as said before, some statistics to estimate the number of clusters are in
the cluster.stats function of package fpc. These are distance-based,
not pseudo F or T^2. They are documented in the book
of Gordon (1999) Classification (see ?cluster.stats for more references).
It also includes
Dear John,
You can play around with cluster.stats function in library fpc, e.g. you
can try:
library(fpc)
library(cluster)
data(xclara)
dM <- dist(xclara)
cl <- vector()
for (i in 2:7) {
  cl[i] <- cluster.stats(d = dM, clustering = clara(xclara, i)$cluster,
                         silhouette = FALSE)$wb.ratio
}
plot(2:7, cl[2:7],
Hello,
I'm trying some cluster analysis, using the hclust command. I am looking for
some help in selecting the 'best' number of clusters. Some software reports
pseudo-F and pseudo-T^2 statistics, for each cluster merge. Is there any way
to generate such statistics simply in R?
Thanks,
On 05.02.2006 at 17:50, John Janmaat wrote:
Hello,
I'm trying some cluster analysis, using the hclust command. I am looking for
some help in selecting the 'best' number of clusters. Some software reports
pseudo-F and pseudo-T^2 statistics, for each cluster merge. Is there any way
to
Markus == Markus Preisetanz [EMAIL PROTECTED]
on Thu, 26 Jan 2006 20:48:29 +0100 writes:
Markus Dear R Specialists,
Markus when trying to cluster a data.frame with about 80.000 rows and 25
columns I get the above error message. I tried hclust (using dist), agnes
(entering the
Dear R Specialists,
when trying to cluster a data.frame with about 80,000 rows and 25 columns I get
the above error message. I tried hclust (using dist), agnes (entering the
data.frame directly) and pam (entering the data.frame directly). What I
actually do not want to do is generate a
Let's do some simple calculation: the dist object from a data set with
80,000 cases would have
80000 * (80000 - 1) / 2
elements, each taking 8 bytes to store in double precision. That's over
24GB if my arithmetic isn't too flaky. You'd have a devil of a time trying
to do this on a 64-bit
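The arithmetic, checked in R:

```r
n     <- 80000
cells <- n * (n - 1) / 2    # lower-triangle entries in the dist object
gib   <- cells * 8 / 2^30   # 8 bytes per double; 3,199,960,000 entries
print(cells)
print(round(gib, 1))        # roughly 23.8 GiB, i.e. over 25 GB decimal
```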
Hi,
I am trying to make a plot similar to figure 2 on page 318 of the finding
groups in data book by kaufman / rousseeuw. I found plot.partition and
clusplot but I don't get them to work from the results returned from a
cmeans() clustering.
How do I have to transform the
Hi All,
I am wondering if there is any literature or any prior implementations
of cluster analysis for only nominal (categorical) variables for a
large dataset, approximately 20,000 rows with 15 variables.
I came across one or two such implementations, but they seem to assume
certain data distributions.
hi,
I'm looking for some help with Antoine Lucas's package for Cluster and Tree
Conversion (from R to Xcluster). The r2dct function includes 2 parameters not
discussed in the .pdf. If I've deciphered them correctly, hr is the labels
and hc is the hclust object.
I can properly
Hi,
I'm using hclust to do a cluster analysis in Q mode, but I have too many
objects (observations) and it's difficult to identify them in the plot. I'd like
to get a list of the objects ordered in the same way they appear in the
dendrogram.
I have already tried order, labels and merge, but I
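For an hclust object, the left-to-right leaf ordering of the plot is stored in the $order component, so the labels in plotting order are just:

```r
hc <- hclust(dist(USArrests))
ordered_labels <- hc$labels[hc$order]  # objects left-to-right as in the plot
print(head(ordered_labels))
```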
Dear Weiwei,
your question sounds a bit too general and complicated for the R-list.
Perhaps you should look for personal statistical advice.
The quality of methods (and especially the distance choice) for down-sampling
certainly depends on the structure of the data set. I do not see at the moment
Dear Chris:
You are right, and it IS too general. I think I should rather ask what
kind of cluster algorithms or functions are available in R, which
might be easier. But for that, I can probably google or use help() in
R to find out. I want to know more about the performance of clustering
on this
Dear listers:
Here I have a question on clustering methods available in R. I am
trying to down-sample the majority class in a classification problem
on an imbalanced dataset. Since I don't want to lose information in
the original dataset, I don't want to use naive down-sampling: I think
using
Dear R list,
My question:
I'm trying to calculate Mahalanobis distances for 'Species' from the iris data
set as below:
cat('\n')
cat('Cluster analysis of iris data\n')
cat('from Mahalanobis distances obtained from SAS\n')
cat('\n')
n = 3
dat = c(0,
89.86419,
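Rather than importing distances computed in SAS, the pairwise squared Mahalanobis distances between the Species centroids can be computed directly in R; a sketch using the pooled within-species covariance matrix (one conventional choice, which is an assumption here):

```r
# Squared Mahalanobis distances between iris Species centroids
x  <- as.matrix(iris[, 1:4])
sp <- iris$Species

means  <- rowsum(x, sp) / as.vector(table(sp))  # species centroids
pooled <- Reduce(`+`, lapply(split(as.data.frame(x), sp), function(g)
  cov(g) * (nrow(g) - 1))) / (nrow(x) - nlevels(sp))

k  <- nlevels(sp)
D2 <- matrix(0, k, k, dimnames = list(levels(sp), levels(sp)))
for (i in 1:(k - 1)) for (j in (i + 1):k) {
  d <- means[i, ] - means[j, ]
  D2[i, j] <- D2[j, i] <- drop(t(d) %*% solve(pooled) %*% d)
}
print(round(D2, 2))
```

`as.dist(D2)` then feeds straight into hclust() for the clustering step.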
Barbara Diaz wrote:
Hi,
I am using fanny and I am getting strange results. I am wondering if
someone out there can help me understand why this happens.
First of all in most of my tries, it gives me a result in
which each
object has equal membership in all clusters. I have read that
Barbara Diaz wrote:
Hi,
I am using fanny and I am getting strange results. I am wondering if
someone out there can help me understand why this happens.
First of all in most of my tries, it gives me a result in which each
object has equal membership in all clusters. I have read that that
means the
Hi,
I am using fanny and I am getting strange results. I am wondering if
someone out there can help me understand why this happens.
First of all in most of my tries, it gives me a result in which each
object has equal membership in all clusters. I have read that that
means the clustering is entirely
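For what it's worth, completely flat memberships from fanny() often mean the membership exponent is too high for the data; the fanny help page notes that values of memb.exp close to 1 give crisper memberships than the default of 2. An illustrative sketch on invented two-group data:

```r
library(cluster)

set.seed(1)
x <- rbind(matrix(rnorm(40), ncol = 2),      # group around 0
           matrix(rnorm(40, 4), ncol = 2))   # group around 4

f2 <- fanny(x, k = 2)                  # default memb.exp = 2
f1 <- fanny(x, k = 2, memb.exp = 1.2)  # crisper memberships
print(round(head(f1$membership), 2))
```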
Hi!
Take a look at the packages mclust and flexmix!
They use the EM algorithm for mixture modelling, sometimes called
model-based cluster analysis.
Best,
Christian
On Wed, 26 Jan 2005 [EMAIL PROTECTED] wrote:
Hi,
I am looking for a package to do the clustering analysis using the
Hi
I have a problem using the package cluster on my binary data. I want to
try mona first. But I get an error.
hc <- read.table("all.txt", header = TRUE, sep = "\t", row.names = 1)
str(hc)
`data.frame': 51 obs. of 59 variables:
$ G1p : int 2 1 1 1 1 1 1 1 1 1 ...
$ G1q : int 1 1 1 1 1 1 1 1 1 1
On Jan 27, 2005, at 9:06 AM, Morten Mattingsdal wrote:
Hi
I have a problem using the package cluster on my binary data. I want
to try mona at first. But i get the an error.
hc <- read.table("all.txt", header = TRUE, sep = "\t", row.names = 1)
str(hc)
`data.frame': 51 obs. of 59 variables:
$ G1p : int 2 1
Sean Davis wrote:
On Jan 27, 2005, at 9:06 AM, Morten Mattingsdal wrote:
Hi
I have a problem using the package cluster on my binary data. I want
to try mona at first. But i get the an error.
hc <- read.table("all.txt", header = TRUE, sep = "\t", row.names = 1)
str(hc)
`data.frame': 51 obs. of 59
Morten,
just a try: is there a constant variable (only 1) in the first dataset?
Christian
On Thu, 27 Jan 2005, Morten Mattingsdal wrote:
Hi
I have a problem using the package cluster on my binary data. I want to
try mona at first. But i get the an error.
hc <- read.table("all.txt",
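For comparison, mona() runs fine on the cluster package's own binary animals data (each variable must take exactly two values, so a constant column, as Christian suggests, would break it):

```r
library(cluster)

data(animals)        # 20 animals x 6 binary (1/2-coded) attributes
ma <- mona(animals)  # monothetic analysis of binary variables
print(ma$order.lab)  # leaf ordering produced by the banner
```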
Hi,
I am looking for a package to do the clustering analysis using the
expectation maximization algorithm.
Thanks in advance.
Ming
Hi people,
Does anybody know some Density-Based Method for clustering implemented in R?
Thanks,
Fernando Prass
Fernando Prass wrote:
Hi people,
Does anybody know some Density-Based Method for clustering implemented in R?
Have you looked at CRAN package mclust?
Thanks,
Fernando Prass
Yes, but mclust doesn't have a density-based algorithm. Mclust uses the BIC
criterion; it is a model-based method...
Fernando Prass
--- Kjetil Brinchmann Halvorsen [EMAIL PROTECTED] wrote:
Fernando Prass wrote:
Hi people,
Does anybody know some Density-Based Method for clustering
maybe ?kmeans is what you're looking for ...
ingmar
On 10/21/04 2:47 PM, Fernando Prass [EMAIL PROTECTED] wrote:
Yes, but mclust don't have a density-based algorithm. Mclust have the
algorithm
BIC, that is a model-based method...
Fernando Prass
--- Kjetil Brinchmann Halvorsen [EMAIL
No, kmeans is a partitioning method. I need a density-based method, like the
DBSCAN or DENCLUE algorithms...
Fernando Prass
--- Ingmar Visser [EMAIL PROTECTED] wrote:
maybe ?kmeans is what you're looking for ...
ingmar
On 10/21/04 2:47 PM, Fernando Prass [EMAIL PROTECTED] wrote:
Yes, but
I'm no expert in this, but mclust is `density-based' because it estimates
the density with a mixture of Gaussians. If this is not what you want, you
should clarify what you mean by `density-based'. Do you mean an algorithm
based on kernel estimator of the density?
Andy
From: Fernando Prass
Dear Fernando,
below you find a DBSCAN function I wrote for my own purposes.
It comes with no warranty and without proper documentation, but I followed
the notation of the original KDD-96 DBSCAN paper.
For large data sets, it may be slow.
Best,
Christian
On Thu, 21 Oct 2004, Fernando Prass
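In the same spirit, a compact (and, like Christian's, unoptimized and warranty-free) DBSCAN sketch in base R, following the eps/MinPts notation of the KDD-96 paper; it materializes the full distance matrix, so it is only for small data sets:

```r
# Minimal DBSCAN: points with >= minPts neighbours within eps are core
# points; clusters grow by expanding from cores; label 0 marks noise.
dbscan_sketch <- function(x, eps, minPts) {
  D <- as.matrix(dist(x)); n <- nrow(D)
  labels <- integer(n); cl <- 0
  for (p in seq_len(n)) {
    if (labels[p] != 0) next
    nb <- which(D[p, ] <= eps)
    if (length(nb) < minPts) next      # not core (may stay noise)
    cl <- cl + 1
    labels[nb[labels[nb] == 0]] <- cl  # seed the new cluster
    seeds <- setdiff(nb, p)
    while (length(seeds) > 0) {
      q <- seeds[1]; seeds <- seeds[-1]
      qn <- which(D[q, ] <= eps)
      if (length(qn) >= minPts) {      # q is core: absorb its neighbours
        fresh <- qn[labels[qn] == 0]
        labels[fresh] <- cl
        seeds <- union(seeds, fresh)
      }
    }
  }
  labels
}

set.seed(2)
x <- rbind(matrix(rnorm(60, 0, 0.3), ncol = 2),   # two invented blobs
           matrix(rnorm(60, 3, 0.3), ncol = 2))
res <- dbscan_sketch(x, eps = 0.6, minPts = 4)
print(table(res))
```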
AndyL == Liaw, Andy [EMAIL PROTECTED]
on Thu, 21 Oct 2004 09:18:54 -0400 writes:
AndyL I'm no expert in this, but mclust is `density-based'
AndyL because it estimates the density with a mixture of
AndyL Gaussians. If this is not what you want, you should
AndyL clarify what
From: Martin Maechler
AndyL == Liaw, Andy [EMAIL PROTECTED]
on Thu, 21 Oct 2004 09:18:54 -0400 writes:
AndyL I'm no expert in this, but mclust is `density-based'
AndyL because it estimates the density with a mixture of
AndyL Gaussians. If this is not what you want, you
Andy,
I may be wrong, I'm no expert either, but density estimation is different from
a density model. Mclust is a model-based method because it uses model statistics
from the clustering data (more information in
ftp://ftp.u.washington.edu/public/mclust/tr415R.pdf).
I need some package that implements
Dear James,
sorry, this is not really an answer.
I use cutree to obtain clusters from an hclust object.
I do not get from the identify help page that identify should do anything
like what you expect it to do... I tried it out and to my surprise it
behaved as you said, i.e., it indeed does
ChrisH == Christian Hennig [EMAIL PROTECTED]
on Fri, 15 Oct 2004 11:43:53 +0200 (MEST) writes:
ChrisH Dear James,
ChrisH sorry, this is not really an answer.
nor this. I'm answering Christian...
ChrisH I use cutree to obtain clusters from an hclust
ChrisH object. I do
On Friday 15 Oct 2004 10:43 am, you wrote:
PS: It seems that each value is printed twice because classi is named, and
each value is shown under its name. Try as.vector(classi). (Perhaps a little
useful help in the end?)
Indeed. I have tried, for example:
as.vector(classi[[1]])
and
On Friday 15 Oct 2004 11:02 am, you wrote:
or unname(classi) -- which is slightly more expressive in this
case and possibly more desirable in other situations.
Martin Maechler, ETH Zurich
Thanks, Martin.
I've tried, like you suggested:
un_classi <- unname(classi)
but
James == James Foadi [EMAIL PROTECTED]
on Fri, 15 Oct 2004 11:36:14 +0100 writes:
James On Friday 15 Oct 2004 11:02 am, you wrote:
or unname(classi) -- which is slightly more expressive in this
case and possibly more desirable in other situations.
Martin
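A quick illustration of the two suggestions on built-in data:

```r
classi <- cutree(hclust(dist(USArrests)), k = 3)  # named by state
print(head(names(classi)))  # the names cause the doubled-looking printout
v1 <- as.vector(classi)     # drops the names
v2 <- unname(classi)        # same effect, arguably more expressive
print(identical(v1, v2))
```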
Hi,
testing the randomness of a cluster analysis is not a well defined
problem, because it depends crucially on your null model. In fpc, there is
nothing like this. Function prabtest in package prabclus performs such a
test, but this is for a particular data structure, namely presence-absence
Hi,
I am wondering if a Monte Carlo method (or equivalent) exists for testing the
randomness of a cluster analysis (e.g. one obtained by
hclust()). I went through the package fpc (maybe too superficially) but did
not find such a method.
Thanks for any hint,
Patrick Giraudoux
Hello!
I have a distance matrix (class matrix or dist) and an integer vector of cluster
codes, and would like to validate the result.
I do NOT have the data matrix. I would like to validate the clustering. I found the
function silhouette (does a great job), but I am looking for 1 or 2 more (e.g.
Hi all,
Is it possible to run kmeans, pam or clara with a constraint such that
no resulting cluster has fewer than X cases?
These kmeans algorithms often find clusters that are too small for my
use. There are usually a few clusters with 1-10 cases (generally
substantial outliers). I then have
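None of kmeans, pam or clara takes a minimum-size constraint directly; a common workaround is to post-process, dissolving undersized clusters and reassigning their members to the nearest surviving centroid. An illustrative base-R sketch (min_size and k are invented for the demo):

```r
set.seed(3)
x  <- scale(USArrests)
km <- kmeans(x, centers = 6, nstart = 10)
min_size <- 5

sizes <- table(km$cluster)
keep  <- as.integer(names(sizes)[sizes >= min_size])  # clusters to retain
cl    <- km$cluster
small <- !(cl %in% keep)
if (any(small)) {
  cent <- km$centers[keep, , drop = FALSE]
  # nearest surviving centroid for each member of a dissolved cluster
  cl[small] <- keep[apply(x[small, , drop = FALSE], 1, function(p)
    which.min(colSums((t(cent) - p)^2)))]
}
print(table(cl))  # every remaining cluster now has >= min_size members
```

Since reassignment only adds members to retained clusters, the constraint is guaranteed after one pass.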
Hi,
I want to run a cluster analysis with clara, but I am getting an
error because there are NA's in cldat:
Error in clara(cldat[, 1:3], 4) : Each of the random samples contains objects
between which
no distance can be computed.
cldatx <- subset(cldat, select = c(A, B, C))
cldaty <- na.omit(cldatx)
Now, clara
Hello,
After reinstalling the whole OS and R as well, I tried to update.packages()
and get the follwing error message:
concerning the mgcv update: atlas2-base is installed and blas as well (on
debian). I haven't found lf77blas, I assume it's a library or something
similar associated with
Martin Wegmann wrote:
Hello,
After reinstalling the whole OS and R as well, I tried to update.packages()
and get the follwing error message:
concerning the mgcv update: atlas2-base is installed and blas as well (on
debian). I haven't found lf77blas, I assume it's a library or something
You need to add atlas2-base-dev:
$ apt-get install atlas2-base-dev
I installed atlas2-base-dev and g77, but now I get the error messages pasted
below. Both (cluster and mgcv) require lfrtbegin, but that does not seem to
be a package which I can install via apt-get.
Martin
* Installing
On Tue, Sep 30, 2003 at 02:04:23PM +0200, Martin Wegmann wrote:
You need to add atlas2-base-dev:
$ apt-get install atlas2-base-dev
I installed atlas2-base-dev and g77, but now I get the error messages pasted
below. Both (cluster and mgcv) require lfrtbegin, but that does not seem to
Is there anyone who would like to give me some examples of plots or data
frames for cluster analysis?
If so, great thanks in advance!
Files can be sent to my big mailbox at [EMAIL PROTECTED]
I want to perform cluster analysis on a set of data; the data is composed of
time-evolution rms
Hi,
it seems that you are mixing something up. hclust is for dissimilarity-based
hierarchical cluster analysis, which has nothing to do with R squared,
Pseudo F
Informative output about the clustering is given in the value of the hclust
object; the function cutree may help to extract a concrete clustering