stream will only use one
path in a LAG.
Doesn't (Open)Solaris have the option of setting the aggregate up as a
FEC or in roundrobin mode?
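For reference, an aggregate with a chosen load-balancing policy is set up with dladm; a minimal sketch only (link names are examples, and the exact options may vary by build, so check dladm(1M)):

```shell
# Create an LACP aggregation over two hypothetical links; -P picks the
# load-balancing policy (L2/L3/L4 headers), -L the LACP mode.
dladm create-aggr -P L3,L4 -L active -l e1000g0 -l e1000g1 aggr1
dladm show-aggr aggr1
```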
//Svein
- --
- +---+---
/\ |Svein Skogen | sv...@d80.iso100.no
\ / |Solberg Østli 9| PGP Key: 0xE5E76831
X|2020 Skedsmokorset | sv...@jernhuset.no
/ \ |Norway | PGP Key: 0xCE96CE13
| | sv...@stillbilde.net
ascii | | PGP Key
license.
//Svein
as well)
//Svein
-standard raid setups, they seem tricky.
raidz eats one disk. Like RAID5
raidz2 digests another one. Like RAID6
raidz3 yet another one. Like ... h...
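A minimal sketch of the three parity levels (pool and device names below are hypothetical, not from this thread):

```shell
# Six-disk pools at each raidz level; usable space shrinks by one disk per level.
zpool create tank1 raidz  c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0  # single parity, like RAID5
zpool create tank2 raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0  # double parity, like RAID6
zpool create tank3 raidz3 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0  # triple parity
```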
//Svein
seems silly.
//Svein
Forgot to cc the list, well here goes...
- Original Message
Subject: Re: [zfs-discuss] ZFS on a 11TB HW RAID-5 controller
Date: Wed, 24 Mar 2010 17:10:58 +0100
From: Svein Skogen sv...@stillbilde.net
To: Dusan Radovanovic dusa
On 24.03.2010 17:42, Richard Elling wrote:
Nonvolatile write caches are not a problem.
Which is why ZFS isn't a replacement for proper array controllers
(defining proper as those with sufficient battery to leave you with a
seemingly intact filesystem), but a very nice augmentation for them.
and small (4GB?) and works in
a PCI express slot.
Maybe someone should look at implementing the zfs code for the XScale
range of io-processors (such as the IOP333)?
//Svein
On 22.03.2010 02:13, Edward Ned Harvey wrote:
Actually ... Why should there be a ZFS property to share NFS, when you can already do that with share and dfstab? And still the zfs property exists.
Probably because it is easy to create new filesystems and clone them; as NFS only works per
On 21.03.2010 01:25, Robert Milkowski wrote:
To add my 0.2 cents...
I think starting/stopping scrub belongs to cron, smf, etc. and not to
zfs itself.
However what would be nice to have is an ability to freeze/resume a
scrub and also limit its rate of scrubbing.
One of the reason is that when
On 22.03.2010 13:35, Edward Ned Harvey wrote:
Does cron happen to know how many other scrubs are running, bogging
down
your IO system? If the scrub scheduling was integrated into zfs itself,
It doesn't need to.
Crontab entry: /root/bin/scruball.sh
/root/bin/scruball.sh:
#!/usr/bin/bash
for pool in $(zpool list -H -o name); do zpool scrub "$pool"; done
On 22.03.2010 13:54, Edward Ned Harvey wrote:
IIRC it's zpool scrub, and last time I checked, the zpool command exited (with status 0) as soon as it had started the scrub. Your command would start _ALL_ scrubs in parallel as a result.
You're right. I did that wrong. Sorry 'bout that.
So
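A sequential version would have to poll for completion, since zpool scrub returns immediately; an untested sketch (the status string matched below may vary between zpool versions):

```shell
#!/usr/bin/bash
# Scrub all pools one at a time, waiting for each to finish before
# starting the next, to avoid bogging down the IO system.
for pool in $(zpool list -H -o name); do
    zpool scrub "$pool"
    # 'zpool scrub' exits as soon as the scrub has started, so poll status
    while zpool status "$pool" | grep -q "scrub in progress"; do
        sleep 300
    done
done
```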
On 22.03.2010 16:24, Cooper Hubbell wrote:
I've moved to 7200RPM 2.5" laptop drives over 3.5" drives, for a combination of reasons: lower power, better performance than comparably sized 3.5" drives, and generally lower capacities meaning resilver times are smaller. They're a bit more $/GB, but
On 22.03.2010 18:10, Richard Elling wrote:
On Mar 22, 2010, at 7:30 AM, Svein Skogen wrote:
On 22.03.2010 13:54, Edward Ned Harvey wrote:
IIRC it's zpool scrub, and last time I checked, the zpool command
exited (with status 0) as soon as it had started the scrub. Your
command
would start
On 21.03.2010 14:26, Edward Ned Harvey wrote:
Most software introduced in Linux clearly violates the UNIX
philosophy.
Hehehe, don't get me started on OSX. ;-) And for the love of all things
sacred, never say OSX is not UNIX. I made that mistake once. Which is not
to say I was proven wrong
On 20.03.2010 17:39, Henk Langeveld wrote:
On 2010-03-15 16:50, Khyron:
Yeah, this threw me. A 3-disk RAID-Z2 doesn't make sense, because at a redundancy level, RAID-Z2 looks like RAID 6. That is, there are 2 levels of parity for the data. Out of 3 disks, the equivalent of 2 disks will be used
We all know that data corruption may happen, even on the most reliable
of hardware. That's why zfs has pool scrubbing.
Could we introduce a zpool option (as in zpool set optionname pool)
for scrub period, in number of hours (with 0 being no automatic
scrubbing).
I see several modern
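Until such a property exists, per-pool cron entries get close; a sketch (pool name hypothetical):

```shell
# crontab fragment: scrub 'tank' every Sunday at 02:00
0 2 * * 0 /usr/sbin/zpool scrub tank
```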
On 20.03.2010 23:00, Gary Gendel wrote:
I'm not sure I like this at all. Some of my pools take hours to scrub. I have
a cron job run scrubs in sequence... Start one pool's scrub and then poll
until it's finished, start the next and wait, and so on so I don't create too
much load and bring
On 21.03.2010 00:14, Erik Trimble wrote:
Richard Elling wrote:
I see this on occasion. However, the cause is rarely attributed to a bad
batch of drives. More common is power supplies, HBA firmware, cables,
Pepsi syndrome, or similar.
-- richard
Mmmm. Pepsi Syndrome. I take it this is similar
On 18.03.2010 10:31, Joerg Schilling wrote:
Svein Skogen sv...@stillbilde.net wrote:
Please, don't compare proper backup drives to that rotating head
non-standard catastrophe... DDS was (in)famous for being a delayed-fuse
tape-shredder.
DDS
.
Star implements incremental backups and restores based on POSIX compliant
archives.
And how does your favourite tool handle zvols?
//Svein
that the NDMP stack basically was a tapedev and/or
autoloader device via network? (i.e. not a backup utility at all but a
method for the software managing the backup to attach the devices)
//Svein
On 18.03.2010 18:28, Darren J Moffat wrote:
On 18/03/2010 17:26, Svein Skogen wrote:
The utility: Can't handle streams being split (in case of streams being larger than a single backup medium).
I think it should be possible to store the 'zfs send
On 18.03.2010 18:37, Darren J Moffat wrote:
On 18/03/2010 17:34, Svein Skogen wrote:
How would NDMP help with this any more than running a local pipe
splitting the stream (and handling the robotics for feeding in the next
tape)?
Probably
recommending (Open)Solaris and those
speaking about other things gets tweaked in (Open)Solaris's favor.
//Svein
several previous discussions on the topic of ZFS pool backups to tape.
Oh well.
How does backing up the NFSv4 acls help you back up a zvol (shared for
iSCSI)? Please enlighten me.
//Svein
On 17.03.2010 13:31, Svein Skogen wrote:
On 17.03.2010 12:28, Khyron wrote:
Note to readers: There are multiple topics discussed herein. Please
identify which
*SNIP*
How does backing up the NFSv4 acls help you back up a zvol (shared
some sort of FEC to the stream. It _REALLY_ would solve
the store problem.
//Svein
to the best current
practices zfs-page? ;)
//Svein
drives to that rotating head
non-standard catastrophe... DDS was (in)famous for being a delayed-fuse
tape-shredder.
//Svein
Things used to be simple.
zfs create -V xxg -o shareiscsi=on pool/iSCSI/mynewvolume
It worked.
Now we've got a new, feature-rich baby in town, called comstar, and so far all attempts at grokking the excuse of a manpage have simply left me with a nasty headache.
_WHERE_ is the replacement
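For what it's worth, the COMSTAR path seems to be roughly the following; a sketch only (the zvol name is hypothetical, the GUID is elided, the iscsi/target service must be online, and sbdadm(1M)/stmfadm(1M)/itadm(1M) are the authoritative references):

```shell
zfs create -V 10g pool/iSCSI/mynewvolume
# Register the zvol as a SCSI logical unit; this prints the LU GUID
sbdadm create-lu /dev/zvol/rdsk/pool/iSCSI/mynewvolume
# Expose the LU to all hosts (substitute the GUID printed above)
stmfadm add-view 600144f0...
# Create an iSCSI target for initiators to log in to
itadm create-target
```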
On 16.03.2010 19:57, Marc Nicholas wrote:
On Tue, Mar 16, 2010 at 2:46 PM, Svein Skogen sv...@stillbilde.net
mailto:sv...@stillbilde.net wrote:
Not quite a one liner. After you create the target once (step 3),
you do not have
On 16.03.2010 22:31, erik.ableson wrote:
On 16 mars 2010, at 21:00, Marc Nicholas wrote:
On Tue, Mar 16, 2010 at 3:16 PM, Svein Skogen sv...@stillbilde.net
mailto:sv...@stillbilde.net wrote:
I'll write you a Perl script :)
I think
How does it fare, with regards to BUG ID 6894775?
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6894775
//Svein
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Wouldn't his scripts in the other thread (zfs send plugin for Bacula) work with
zvol as well?
//Svein
know, maybe I should have RTFMmed more).
I'll look into it, as soon as I figure out WTF this tapeloader is up to.
//Svein
I can't help but keep wondering if not some sort of FEC wrapper (optional of course) might solve both the backup and some of the long-distance-transfer (where retransmissions really aren't wanted) issues.
Reason I'm saying long-distance is that this is where latency-on-the-link starts rearing its
piping the data through, feel
free to point me in the right direction, because my google-searches
didn't give me any real clues. Maybe my google-fu isn't up to scratch.
//Svein
the ground.
- -Try to restore both backups...
See any differences in reliability for disasters here?
;)
//Svein
Let's say for a moment I should go for this solution, with the rpool tucked away on a USB stick in the same case as the LTO-3 tapes it matches
timelinewise (I'm using HP C8017A kits) as a zfs send -R to a file on the USB
stick. (If, and that's a big if, I get amanda or bacula to do a job I'm
On 08.03.2010 13:55, Erik Trimble wrote:
Svein Skogen wrote:
Let's say for a moment I should go for this solution, with the rpool tucked away on a USB stick in the same case as the LTO-3 tapes it
matches timelinewise (I'm using HP C8017A kits
And again ...
Is there any work on an upgrade of zfs send/receive to handle resuming on the next media?
I was thinking something along the lines of zfs send (when the device goes full) returning send suspended. To resume, insert new media and issue zfs resume IDNUMBER, and receive handling: zfs
Just disregard this thread. I'm resolving the issue using other methods (not
including Solaris).
//Svein
On 04.03.2010 13:18, Erik Trimble wrote:
Svein Skogen wrote:
And again ...
Is there any work on an upgrade of zfs send/receive to handle resuming
on next media?
I was thinking something along the lines of zfs send (when device goes
full
The problem is that all disks I've seen so far have been more fragile than tapes (given a real disaster, such as a clumsy sysadmin, or a burning home
falling on top of them)... Trust me to knock over a disk.
//Svein