The *big* issue I have right now is dealing with the slave machine going
down. Once the master no longer has a connection to the ggated devices,
all processes trying to use the device hang in D status. I have tried
pkill'ing ggatec to no avail and ggatec destroy returns a message of
gctl
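When ggatec wedges like that, the usual escape hatch is a forced destroy of the unit followed by telling gmirror to forget the dead component. A dry-run sketch (unit number 0 and mirror name "gm0" are assumptions, not from the thread; `run` echoes each command instead of executing it):

```shell
# Dry-run sketch: force-close a dead ggate unit so the mirror can carry
# on degraded. Unit 0 and mirror name "gm0" are assumptions.
run() { echo "$@"; }          # replace the echo with "$@" to execute for real

run ggatec destroy -f -u 0    # force-destroy /dev/ggate0 even while busy
run gmirror forget gm0        # drop components gmirror can no longer see
```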
Pete French presumably uttered the following on 07/21/08 07:08:
> The *big* issue I have right now is dealing with the slave machine going
> down. Once the master no longer has a connection to the ggated devices,
> all processes trying to use the device hang in D status. I have tried
> pkill'ing

On 15/07/2008, at 3:54 PM, Jeremy Chadwick wrote:
> We moved all of our production systems off of using dump/restore solely
> because of these aspects. We didn't move to ZFS though; we went with
> rsync, which is great, except for the fact that it modifies file atimes
> (hope you use Maildir and
We have deployed an IMAP server running on Cyrus on FreeBSD 6.2, with a
500GB UFS2 partition mirrored with geom_mirror and geom_gate across a
dedicated 1gbps link.
It has proven to be very stable and reliable after appropriate tweaking.
The uptime of the mirror is usually 1-3 months,
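For anyone wanting to reproduce that layout, a rough sketch of the commands involved (disk names da0/da1, the host name "slave", and mirror name "gm0" are assumptions; everything is echoed as a dry run):

```shell
# Dry-run sketch of a gmirror-over-geom_gate setup; all device and host
# names are assumptions for illustration.
run() { echo "$@"; }              # drop the echo to execute for real

# On the slave: export the backing disk read/write to the master.
run sh -c 'echo "master RW /dev/da1" > /etc/gg.exports'
run ggated

# On the master: attach the remote disk and mirror it with the local one.
run ggatec create -o rw -u 0 slave /dev/da1   # appears as /dev/ggate0
run gmirror label -v gm0 /dev/da0 /dev/ggate0
run newfs -U /dev/mirror/gm0
```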
With the introduction of zfs to FreeBSD 7.0, a door has opened for more
mirroring options so I would like to get some opinions on what direction
I should take for the following scenario.
Basically I have 2 machines that are clones of each other (master and
slave) wherein one will be serving up
On Tue, Jul 15, 2008 at 10:07:14AM -0400, Sven Willenberger wrote:
> 3) The send/recv feature of zfs was something I had not even considered
> until very recently. My understanding is that this would work by a)
> taking a snapshot of master_data1 b) zfs sending that snapshot to
> slave_data1 c) via
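The a)/b)/c) flow described there can be sketched roughly as follows (the pool name "tank", snapshot name, and host "slave" are assumptions; commands are echoed as a dry run):

```shell
# Dry-run sketch of one zfs send/recv replication pass. Dataset and
# host names are assumptions, not from the thread.
run() { echo "$@"; }                      # drop the echo to execute

run zfs snapshot tank/master_data1@rep1
# a) snapshot taken above; b)+c) stream it over ssh into the slave pool:
run sh -c 'zfs send tank/master_data1@rep1 | ssh slave zfs recv -F tank/slave_data1'
```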
Jeremy Chadwick wrote:
> Compared to UFS2 snapshots (e.g. dump -L or mksnap_ffs), ZFS snapshots
> are fantastic. The two main positives for me were:
> 1) ZFS snapshots take significantly less time to create; I'm talking
> seconds or minutes vs. 30-45 minutes. I also remember receiving mail from
> However, I must ask you this: why are you doing things the way you are?
> Why are you using the equivalent of RAID 1 but for entire computers? Is
> there some reason you aren't using a filer (e.g. NetApp) for your data,
> thus keeping it centralised?
I am not the original poster, but I am doing
Sven Willenberger wrote:
> [...]
> 1) I have been using ggated/ggatec on a set of 6.2-REL boxes and find
> that ggated tends to fail after some time leaving me rebuilding the
> mirror periodically (and gmirror resilvering takes quite some time). Has
> ggated/ggatec performance and stability
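The periodic rebuild that message complains about usually goes through ggatec's rescue path. A dry-run sketch of reattaching the remote disk so gmirror can resilver (unit 0, host "slave", and names are assumptions):

```shell
# Dry-run sketch: after ggated dies, reattach the remote disk and let
# gmirror resilver. Unit number and names are assumptions.
run() { echo "$@"; }                          # drop the echo to execute

run ggatec rescue -o rw -u 0 slave /dev/da1   # re-attach the lost ggate unit
run gmirror forget gm0                        # forget the dead component
run gmirror insert gm0 /dev/ggate0            # re-insert; resilver starts
```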
On Tue, 2008-07-15 at 07:54 -0700, Jeremy Chadwick wrote:
> On Tue, Jul 15, 2008 at 10:07:14AM -0400, Sven Willenberger wrote:
> > 3) The send/recv feature of zfs was something I had not even considered
> > until very recently. My understanding is that this would work by a)
> > taking a snapshot of
Pete French wrote:
> I am not the original poster, but I am doing something very similar and
> can answer that question for you. Some people get paranoid about the
> whole single point of failure thing. I originally suggested that we buy
> a filer and have identical servers so if one breaks we
Oliver Fromme wrote:
> Yet another way would be to use DragonFly's Hammer file
> system which is part of DragonFly BSD 2.0 which will be
> released in a few days. It supports remote mirroring,
> i.e. mirror source and mirror target can run on different
> machines. Of course it is still very new and
> You install a filer cluster with two nodes. Then there is
> no single point of failure.
Yes, that would be my choice too. Unfortunately it didn't get
done that way. Mind you, the solution we do have is something
I am actually pretty happy with - it's cheap and does the job.
We never wanted 100%
On Tue, Jul 15, 2008 at 07:54:26AM -0700, Jeremy Chadwick wrote:
> One of the annoyances to ZFS snapshots, however, was that I had to
> write my own script to do snapshot rotations (think incremental dump(8)
> but using ZFS snapshots).
There is a PR[1] to get something like this in the ports tree.
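The rotation logic such a script needs is simple enough to sketch. A hypothetical helper (dataset names are made up, and the `echo` keeps it a dry run): given snapshot names sorted oldest-first, it prints a destroy command for everything beyond the newest $KEEP.

```shell
# Hypothetical rotation helper: given snapshot names sorted oldest-first
# (one per line), print "zfs destroy" for all but the newest $KEEP.
# Dataset names are made up; the echo keeps this a dry run.
KEEP=3
rotate() {
    total=$(printf '%s\n' "$1" | wc -l)
    drop=$((total - KEEP))
    [ "$drop" -gt 0 ] || return 0          # nothing to prune yet
    printf '%s\n' "$1" | head -n "$drop" | while read -r snap; do
        echo "zfs destroy $snap"           # drop the echo to execute
    done
}

rotate 'tank/data@auto-1
tank/data@auto-2
tank/data@auto-3
tank/data@auto-4'
```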
Wesley Shields wrote:
> On Tue, Jul 15, 2008 at 07:54:26AM -0700, Jeremy Chadwick wrote:
> > One of the annoyances to ZFS snapshots, however, was that I had to
> > write my own script to do snapshot rotations (think incremental dump(8)
> > but using ZFS snapshots).
> There is a PR[1] to get something like
:Oliver Fromme wrote:
:
: Yet another way would be to use DragonFly's Hammer file
: system which is part of DragonFly BSD 2.0 which will be
: released in a few days. It supports remote mirroring,
: i.e. mirror source and mirror target can run on different
: machines. Of course it is still very
On Tue, Jul 15, 2008 at 07:10:05PM +0200, Kris Kennaway wrote:
> Wesley Shields wrote:
> > On Tue, Jul 15, 2008 at 07:54:26AM -0700, Jeremy Chadwick wrote:
> > > One of the annoyances to ZFS snapshots, however, was that I had to
> > > write my own script to do snapshot rotations (think incremental dump(8)
> > > but
On Tue, Jul 15, 2008 at 11:47:57AM -0400, Sven Willenberger wrote:
> On Tue, 2008-07-15 at 07:54 -0700, Jeremy Chadwick wrote:
> > ZFS's send/recv capability (over a network) is something I didn't have
> > time to experiment with, but it looked *very* promising. The method is
> > documented in the
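Once an initial full stream has been received, later passes only need to ship the delta between two snapshots. A dry-run sketch of that incremental step (dataset names and the host "slave" are assumptions):

```shell
# Dry-run sketch of an incremental follow-up send: only the delta
# between @rep1 and @rep2 crosses the wire. Names are assumptions.
run() { echo "$@"; }           # drop the echo to execute

run zfs snapshot tank/master_data1@rep2
run sh -c 'zfs send -i tank/master_data1@rep1 tank/master_data1@rep2 | ssh slave zfs recv tank/slave_data1'
```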
Jeremy Chadwick wrote:
> On Tue, Jul 15, 2008 at 11:47:57AM -0400, Sven Willenberger wrote:
> > On Tue, 2008-07-15 at 07:54 -0700, Jeremy Chadwick wrote:
> > > ZFS's send/recv capability (over a network) is something I didn't have
> > > time to experiment with, but it looked *very* promising. The method is