Hi Smruti,

It is possible to resize both the processing and storage clusters
dynamically, so you don't necessarily need to start with utilization
estimates.  However, if the application needs a lot of storage or fast
disk access, that will drive the choice of disks and protocols, that
is, whether you need a SAN over iSCSI or Fibre Channel, SCSI disks at
some particular rpm, or something faster.
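If you go the iSCSI route, a rough sketch with the open-iscsi tools
looks like the below.  The portal address is a placeholder, and the
DRYRUN wrapper just prints the commands; drop the echo to run them for
real.

```shell
# Hedged sketch: discover and log in to an iSCSI target using the
# open-iscsi tools.  The portal address is hypothetical.
DRYRUN=${DRYRUN:-echo}              # 'echo' previews; unset to execute
PORTAL="192.168.1.50:3260"          # placeholder SAN portal

# List the targets the portal offers, then log in to them.
$DRYRUN iscsiadm -m discovery -t sendtargets -p "$PORTAL"
$DRYRUN iscsiadm -m node -p "$PORTAL" --login
```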

Put together a base image, with the application software installed,
that can be cloned onto each node.
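One way to do the cloning, sketched here with qemu-img and made-up
paths (again, DRYRUN=echo keeps it a dry run), is to give each node a
thin copy-on-write clone of the golden image:

```shell
# Sketch: thin clones of a golden base image via qcow2 backing files.
# Paths are hypothetical; DRYRUN=echo just prints the commands.
DRYRUN=${DRYRUN:-echo}
BASE=/var/lib/images/base.qcow2     # golden image with app software

for n in 1 2 3; do
    # Each clone shares the base read-only and stores only its changes.
    $DRYRUN qemu-img create -f qcow2 -b "$BASE" "/var/lib/images/node$n.qcow2"
done
```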

Assign some reliable, high-end or dedicated hardware nodes to handle
cluster fencing.  Fencing protects your data when a node fails by
cutting the failed node off from shared storage before it can corrupt
anything.
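For illustration only, a fence device in the Red Hat cluster.conf
looks roughly like this; the hostname, IP, credentials and the choice
of fence agent are all made up, so check the agent for your actual
power switch or management board:

```xml
<!-- Illustrative fragment of /etc/cluster/cluster.conf -->
<clusternode name="node1.example.com" nodeid="1">
  <fence>
    <method name="1">
      <device name="apc-switch" port="1"/>
    </method>
  </fence>
</clusternode>
<fencedevices>
  <fencedevice agent="fence_apc" ipaddr="192.168.1.10"
               login="admin" name="apc-switch" passwd="secret"/>
</fencedevices>
```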

Decide whether you'd like Network Attached Storage (NAS), where the
storage server exports a filesystem over the network, or a SAN, where
clients see raw block devices and handle the filesystem themselves.
Either source dedicated storage hardware, or make the appropriate
partitions on each node and make them visible on the network.
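If you take the NAS route, NFS is the usual choice.  A minimal
server-side sketch, where the path and subnet are assumptions:

```
# Illustrative /etc/exports entry (path and subnet are placeholders):
/srv/cluster  192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing /etc/exports, `exportfs -ra` re-reads it and publishes
the exports.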

Red Hat Enterprise Linux ships cluster-management software, Conga,
with support for fencing and dynamic resizing, among other things.
Meet luci and ricci.
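Getting Conga going is roughly the following; ricci runs on every
cluster node and luci on a management host.  These are standard RHEL 5
service commands, but do check your release's documentation, and the
DRYRUN wrapper keeps this a preview:

```shell
# Sketch: bring up the Conga pieces on RHEL 5.  DRYRUN=echo previews.
DRYRUN=${DRYRUN:-echo}

# On every cluster node: the ricci agent.
$DRYRUN service ricci start
$DRYRUN chkconfig ricci on

# On the management host: the luci web interface.
$DRYRUN luci_admin init        # sets the admin password on first run
$DRYRUN service luci start
```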

There is also software for distributing work to the nodes: copying
scripts to each, rebooting remotely, installing packages across all
nodes, and so on.
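There are purpose-built tools for this (pdsh, clusterssh and friends),
but even a plain shell loop covers a lot.  The hostnames here are made
up, and RUN=echo keeps it a dry run; with SSH keys in place, RUN=ssh
runs it for real:

```shell
# Minimal sketch: run one command on every node in the list.
RUN=${RUN:-echo}                    # 'echo' = dry run, 'ssh' = real
NODES="node1 node2 node3"           # hypothetical hostnames

for host in $NODES; do
    $RUN "$host" uptime             # swap uptime for any command
done
```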

Without a lot more information on the application, your question is
difficult to answer.

Hope that helps,

Rahul

2009/12/4 Smruti <[email protected]>:
> Hi,
>
> We have a few hundred physical *nix servers and thinking of virtualizing
> them. So, could you please tell how to go about virtulizating them.
>
> The first step would be obviously to calculate the actual utilization of
> hardware for each server. So, what are the things to consider while
> calculating the utilization of a server.
>
> Thanks & Regards,
> Smruti
> --

_______________________________________________
ilugd mailinglist -- [email protected]
http://frodo.hserus.net/mailman/listinfo/ilugd
Archives at: http://news.gmane.org/gmane.user-groups.linux.delhi 
http://www.mail-archive.com/[email protected]/
