Emmanuel Noobadmin wrote, On 08/05/2010 12:40 AM:
That's the thing, I don't think I can tolerate a slightly-behind copy
on the system. A transaction, once done, must remain done. A
situation where a node fails right after a transaction was done and
output to the user, then recovered to a slightly
On Thu, 2010-08-05 at 11:04 -0400, Todd Denniston wrote:
You speak of transactions in a way that makes me think you are dealing with
databases.
If this is the case, then I suggest you take a few searches over to the drbd
archives** and look for
database issues, IIRC in some cases you are
JohnS wrote, On 08/05/2010 11:24 AM:
On Thu, 2010-08-05 at 11:04 -0400, Todd Denniston wrote:
You speak of transactions in a way that makes me think you are dealing with
databases.
If this is the case, then I suggest you take a few searches over to the drbd
archives** and look for
database
On 8/4/2010 11:40 PM, Emmanuel Noobadmin wrote:
It is good for 2 things - you can snapshot for local 'back-in-time'
copies without using extra space, and you can do incremental
dump/restores from local to remote snapshots.
That sounds good... and bad at the same time because I add yet
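For reference, a minimal sketch of that snapshot plus incremental send/receive
workflow with the standard ZFS tools; the pool/dataset tank/data and the host
backuphost are made-up names:

    # cheap, space-efficient local snapshot
    zfs snapshot tank/data@2010-08-05
    # ship only the changes since the previous snapshot to a remote pool
    zfs send -i tank/data@2010-08-04 tank/data@2010-08-05 | \
        ssh backuphost zfs receive backup/data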
On 8/5/10, Todd Denniston todd.dennis...@tsb.cranrdte.navy.mil wrote:
You speak of transactions in a way that makes me think you are dealing with
databases.
That's part of the application suite. Although we do suggest that
clients have different servers for each particular use, some of
them
On 8/6/10, Les Mikesell lesmikes...@gmail.com wrote:
But even if you have live replicated data you might want historical
snapshots and/or backup copies to protect against software/operator
failure modes that might lose all of the replicated copies at once.
That we already do, daily backups of
On 8/5/2010 12:12 PM, Emmanuel Noobadmin wrote:
What you want is difficult to accomplish even in a local file system. I
think it would be unreasonably expensive (both in speed and cost) to put
your entire data store on something that provides both replication and
transactional guarantees.
On 8/6/10, Les Mikesell lesmikes...@gmail.com wrote:
If you are going to do that, why not also rely on the database engine's
replication which is aware of the transactions? Databases rely on
filesystem write ordering and fsync() actually working - things that
aren't always reliable locally,
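For illustration, a rough sketch of leaning on the database engine's own,
transaction-aware replication, assuming MySQL with InnoDB; the host db1 and
the repl user are made-up names:

    # master my.cnf
    [mysqld]
    server-id                      = 1
    log-bin                        = mysql-bin
    sync_binlog                    = 1
    innodb_flush_log_at_trx_commit = 1

    # slave my.cnf
    [mysqld]
    server-id = 2

    -- on the slave (repl needs the REPLICATION SLAVE grant on the master)
    CHANGE MASTER TO MASTER_HOST='db1', MASTER_USER='repl', MASTER_PASSWORD='secret';
    START SLAVE;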
On 8/5/2010 3:52 PM, Emmanuel Noobadmin wrote:
The DB will offer a more optimized alternative. A VM image won't.
I'm not quite sure what the connection is here. The database runs
within the VM and is stored in the virtual disk. I'm not using a VM to
substitute for database replication but to
Greetings,
On 8/4/10, Emmanuel Noobadmin centos.ad...@gmail.com wrote:
On 8/4/10, JohnS jse...@gmail.com wrote:
Only concern now is the usual split-brain issue and whether ZFS on
Linux is mature enough to be used as the underlying fs on
CentOS 5.
Dunno if it's relevant, but are we
On 08/03/10 11:31 PM, Rajagopal Swaminathan wrote:
But then is ZFS a Cluster filesystem at all like GFS2/OCFS? Haven't
studied that angle as yet.
It's not. And, AFAIK, the Linux implementation of ZFS is not very well
supported; I certainly wouldn't commit to a project relying on it
without
Greetings,
On 8/4/10, John R Pierce pie...@hogranch.com wrote:
On 08/03/10 11:31 PM, Rajagopal Swaminathan wrote:
It just triggered an idea: why not leave the storage blocks as clvm
bricks and have the BMR/restore and such be delegated to a lower-level
mechanism such as clvm
On 8/4/10, Rajagopal Swaminathan raju.rajs...@gmail.com wrote:
Dunno if it's relevant, but are we talking about inband/power or
storage fencing issues for STONITH here?
Because HA, to the best of my knowledge, requires some form of fencing...
Typically yes, but it doesn't necessarily require
Emmanuel Noobadmin wrote:
ZFS is a local file system as far as I understand it. It comes from
Solaris, but there are two efforts to port it to Linux, one through
userspace via FUSE and the other through the kernel. It seems like the
FUSE approach is more mature and at the moment slightly more
John R Pierce wrote:
On 08/03/10 11:31 PM, Rajagopal Swaminathan wrote:
But then is ZFS a Cluster filesystem at all like GFS2/OCFS? Haven't
studied that angle as yet.
It's not. And, AFAIK, the Linux implementation of ZFS is not very well
supported; I certainly wouldn't commit to a
Emmanuel Noobadmin wrote, On 08/03/2010 11:13 AM:
From what I understand, I cannot do the equivalent of network RAID 1
with a normal DRBD/HB style cluster. Gluster with replicate appears to
do exactly that. I can have 2 or more storage servers with real time
duplicates of the same data so that
On 8/4/10, Les Mikesell lesmikes...@gmail.com wrote:
I thought the GPL on the kernel code would not permit the inclusion of less
restricted code like the CDDL-covered ZFS. For a network share, why not use
That's why the FUSE effort is further along; being in user space, it
bypasses the limits of
On 8/4/10, Todd Denniston todd.dennis...@tsb.cranrdte.navy.mil wrote:
To have more than one active server with DRBD (or other disk type shared
between active machines)
you need to be using a file system which supports shared disk resources.
http://www.drbd.org/docs/about/
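For context, a dual-primary DRBD resource looks roughly like the sketch below
(DRBD 8.x syntax; the hostnames, disks and addresses are made up), and you
still need a shared-disk filesystem such as GFS2 or OCFS2 on top of /dev/drbd0:

    resource r0 {
      protocol C;
      net {
        allow-two-primaries;   # both nodes active; requires a cluster fs on top
      }
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.0.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.0.2:7788;
        meta-disk internal;
      }
    }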
On 8/4/2010 10:10 AM, Emmanuel Noobadmin wrote:
derivative work or something along those lines.
the OpenSolaris or NexentaStor versions since you wouldn't be using much
else
from the system anyway.
If I really have to, but I was hoping I wouldn't need to learn another
relatively similar
Emmanuel Noobadmin wrote, On 08/04/2010 11:33 AM:
Easier because instead of running Gluster RAID 0 on top of DRBD RAID
1, we can take out the DRBD layer and just use Gluster to achieve the
equivalent by distribute on replicate (see the sketch below).
More importantly, there is the issue of cost: DRBD needs a pair
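As a rough sketch, with a GlusterFS release that ships the gluster CLI,
distribute-on-replicate across four hypothetical bricks would be something like:

    gluster volume create vmstore replica 2 \
        server1:/export/brick server2:/export/brick \
        server3:/export/brick server4:/export/brick
    gluster volume start vmstore

Bricks are paired into replica sets in the order given, and the volume is then
distributed across the pairs.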
On 8/4/10, Les Mikesell lesmikes...@gmail.com wrote:
That's sort of the point of NexentaStor, which gives you a web interface
to manage the filesystems and sharing since you don't need anything
else. But the free community edition only goes to 12 TB. That might be
enough per-host if you are
On 8/3/10, Rudi Ahlers r...@softdux.com wrote:
I'm considering setting up a Lustre cluster system for our XEN virtual
machines as shared storage, mainly for high availability on our XEN
VPS servers.
Does anyone use Lustre in a production environment?
What are your opinions / experiences with
Emmanuel Noobadmin writes:
On 8/3/10, Rudi Ahlers r...@softdux.com wrote:
I'm considering setting up a Lustre cluster system for our XEN virtual
machines as shared storage, mainly for high availability on our XEN
VPS servers.
Does anyone use Lustre in a production environment?
What is
On 8/3/10, Lars Hecking lheck...@users.sourceforge.net wrote:
What do Gluster or Lustre offer that the builtin Red Hat Cluster Suite
does not?
Being a noob admin, I'm not sure and still haven't decided fully on
which way to go, largely because it seems the technologies of choice
are both
On Tue, Aug 3, 2010 at 4:42 PM, Emmanuel Noobadmin
centos.ad...@gmail.com wrote:
On 8/3/10, Rudi Ahlers r...@softdux.com wrote:
I'm considering setting up a Lustre cluster system for our XEN virtual
machines as shared storage, mainly for high availability on our XEN
VPS servers.
Does anyone
On Tue, Aug 3, 2010 at 4:45 PM, Lars Hecking
lheck...@users.sourceforge.net wrote:
Emmanuel Noobadmin writes:
On 8/3/10, Rudi Ahlers r...@softdux.com wrote:
I'm considering setting up a Lustre cluster system for our XEN virtual
machines as shared storage, mainly for high availability on our
On Tue, Aug 3, 2010 at 5:13 PM, Emmanuel Noobadmin
centos.ad...@gmail.com wrote:
On 8/3/10, Lars Hecking lheck...@users.sourceforge.net wrote:
What do Gluster or Lustre offer that the builtin Red Hat Cluster Suite
does not?
Being a noob admin, I'm not sure and still haven't decided fully on
On Tue, 3 Aug 2010 at 3:45pm, Lars Hecking wrote
Emmanuel Noobadmin writes:
I haven't used Lustre but was also researching using it for the same
purpose, as shared storage for VMs. Dropped it from consideration in
the end after a discussion on the Lustre mailing list pointed out
that it's
On Tue, 3 Aug 2010 at 6:11pm, Rudi Ahlers wrote
On Tue, Aug 3, 2010 at 5:13 PM, Emmanuel Noobadmin
centos.ad...@gmail.com wrote:
From what I understand, I cannot do the equivalent of network RAID 1
with a normal DRBD/HB style cluster. Gluster with replicate appears to
do exactly that. I can
Greetings,
On 8/3/10, Joshua Baker-LePain jl...@duke.edu wrote:
On Tue, 3 Aug 2010 at 3:45pm, Lars Hecking wrote
nodes. Also, according to RH's site, RHCS is limited to 16 nodes.
Huh! Last time I checked it, it was far over 1024 nodes or something
like that considering the MRG and RHEV.
On Tue, 2010-08-03 at 13:12 -0400, Joshua Baker-LePain wrote:
Also, according to RH's site, RHCS is limited to 16 nodes.
Gluster has no such limit.
---
https://www.redhat.com/archives/linux-cluster/2010-May/msg3.html
Thus you can have more than 16, just not supported. A single node is
On 8/4/10, Rudi Ahlers r...@softdux.com wrote:
With Lustre, from what I understand, I could use say 3 or 5 or 50
servers to spread the load across the servers and thus have higher IO.
We mainly host shared hosting clients, who often have hundreds of
thousands of files in one account. So if their
On 8/4/10, Rudi Ahlers r...@softdux.com wrote:
I'm thinking more along the lines of network RAID 10, if it's possible?
Yes, that's one of the things about Gluster that makes it rather
attractive in theory to me. We can stack various translators in
different ways, in this case distribute + replicate
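A hand-written client volfile stacking those translators might look roughly
like the fragment below; the protocol/client volumes remote1..remote4,
pointing at four hypothetical bricks, are assumed to be defined earlier in
the file:

    volume rep1
      type cluster/replicate
      subvolumes remote1 remote2
    end-volume

    volume rep2
      type cluster/replicate
      subvolumes remote3 remote4
    end-volume

    volume dist
      type cluster/distribute
      subvolumes rep1 rep2
    end-volume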
On Tue, Aug 3, 2010 at 9:16 PM, Emmanuel Noobadmin
centos.ad...@gmail.com wrote:
One of the problems with Lustre's style of distributed storage, which
Gluster points out, is that the bottleneck is the metadata server that
tells clients where to find the actual data. Gluster supposedly scales
with
On Wed, 2010-08-04 at 03:16 +0800, Emmanuel Noobadmin wrote:
I don't have a link because it's in my inbox, but you might be able to
find it: "Question on Lustre redundancy/failure features" in the Lustre
mailing list archive, around 28 Jun 2010.
---
Would this be it?
On Tue, 3 Aug 2010 at 10:04pm, Rudi Ahlers wrote
On Tue, Aug 3, 2010 at 9:16 PM, Emmanuel Noobadmin
centos.ad...@gmail.com wrote:
One of the problems with Lustre's style of distributed storage, which
Gluster points out, is that the bottleneck is the metadata server that
tells clients where to
On Tue, Aug 3, 2010 at 10:18 PM, Joshua Baker-LePain jl...@duke.edu wrote:
On Tue, 3 Aug 2010 at 10:04pm, Rudi Ahlers wrote
On Tue, Aug 3, 2010 at 9:16 PM, Emmanuel Noobadmin
centos.ad...@gmail.com wrote:
One of the problems with Lustre's style of distributed storage, which
Gluster points out
On Tue, 3 Aug 2010 at 10:26pm, Rudi Ahlers wrote
Thanx for the feedback. This is what I hoped to get from someone
running lustre :)
But I guess I'll look at gluster instead.
You may want to head over to the beowulf mailing list -- you've probably
got a higher probability of finding Lustre
On 8/4/10, JohnS jse...@gmail.com wrote:
Would this be it?
http://www.mail-archive.com/lustre-disc...@lists.lustre.org/msg06952.html
Yes, that is my thread :)
This is something like the virtual storage offerings Covalent and IBM
have. Very costly to implement the right way, and interesting too.
I'm considering setting up a Lustre cluster system for our XEN virtual
machines as shared storage, mainly for high availability on our XEN
VPS servers.
Does anyone use Lustre in a production environment?
What are your opinions / experiences with it?
One of the main reasons I'm looking at using