Thomas Burgess wonsl...@gmail.com wrote:
so star is better than tar?
Star is the oldest OSS tar implementation (it started in 1982).
Star is (in contrast to Sun's tar and to GNU tar) able to create
archives with attributes from recent POSIX standards, and it implements
approximately twice as many
Edward Ned Harvey sola...@nedharvey.com wrote:
I still believe that a set of compressed incremental star archives give
you
more features.
Big difference there is that in order to create an incremental star archive,
star has to walk the whole filesystem or folder that's getting backed up,
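For reference, a minimal sketch of a full compressed star archive (the paths are made-up examples; star's incremental dump/level options are documented in star(1) and are not shown here):

  # create a gzip-compressed archive of a directory tree
  star -c -z f=/backup/home-full.tar.gz /export/home
  # list the contents
  star -t -z f=/backup/home-full.tar.gz
  # extract it
  star -x -z f=/backup/home-full.tar.gz

Whatever the exact options, the create step still has to stat() every file in the tree, which is the filesystem walk being described above.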
Lassi Tuura l...@cern.ch wrote:
I guess what I am after is, for data which really matters to its owners and
which they actually had to recover, did people use tar/pax archives (~ file
level standard archive format), dump/restore (~ semi-standard format based on
files/inodes) or zfs
Miles Nordin car...@ivy.net wrote:
When we brought it up last time, I think we found no one knows of a
userland tool similar to 'ufsdump' that's capable of serializing a ZFS
along with holes, large files, ``attribute'' forks, windows ACL's, and
checksums of its own, and then restoring the
Richard Elling richard.ell...@gmail.com wrote:
OOB, the default OpenSolaris PATH places /usr/gnu/bin ahead
of /usr/bin, so gnu tar is the default. As of b130 (I'm not running
an older build currently) the included gnu tar is version 1.22 which
is the latest, released in March 2009, at
Daniel Carosone d...@geek.com.au wrote:
I also don't recommend files over 1 GB in size for DVD media, due to
iso9660 limitations. I haven't used UDF enough to say much about any
limitations there.
ISO-9660 supports files up to 8 TB. Do you have a bigger pool?
Jörg
On 01/18/2010 09:37 PM, Peter Jeremy wrote:
Maybe it would be useful if ZFS allowed the reserved space to be
tuned lower but, at least for ZFS v13, the reserved space seems to
actually be a bit less than is needed for ZFS to function reasonably.
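One common workaround (not from this thread; the dataset name below is just an example) is to park some slack space in an otherwise empty dataset so it can be released if the pool ever fills completely:

  # reserve ~2 GB of slack in a dedicated, empty dataset
  zfs create rpool/reserve
  zfs set refreservation=2G rpool/reserve
  # if the pool fills up, drop the reservation to get writable space back
  zfs set refreservation=none rpool/reserve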
On 01/18/2010 09:45 PM, Mattias Pantzare wrote:
No, the reservation in UFS/FFS is to keep the performance up. It will
be harder and harder to find free space as the disk fills. It is even
more important for ZFS to be able to find free space, as all
On 01/19/2010 12:25 AM, Erik Trimble wrote:
[Block rewrite]
Once all this gets done, I'd think we seldom would need more than a GB
or two as reserve space...
I agree. Without block rewrite, it is best to have a percentage instead of
a hard limit. After
On 01/19/2010 01:14 AM, Richard Elling wrote:
For example, b129
includes a fix for CR6869229, zfs should switch to shiny new metaslabs more
frequently.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6869229
I think the CR is worth
On 01/19/2010 01:37 AM, Rodney Lindner wrote:
Hi all,
I was wondering, when blocks are freed as part of the COW process, are the old blocks
put on the top or bottom of the free-block list?
The question came about while looking at thin provisioning using zfs
Supermicro AOC-USASLP-L8i has been confirmed to work flawlessly on OpenSolaris
by several users on these forums already and is AFAIK widely considered the
best controller for building custom Opensolaris ZFS NAS servers
price/performance/compatibility wise in both SOHO NAS and HD HTPC scenarios.
I've got an LDOM that has raw disks exported through redundant service domains
from a J4400. Actually, I've got seven of these LDOMs. On Friday, we had to
power the rack down for UPS maintenance. We gracefully shut down all the Solaris
instances, waited about 15 min, then powered down the
On Tue, Jan 19, 2010 at 4:17 AM, Andrew Gabriel andrew.gabr...@sun.com wrote:
Actually, this sounds like really good news for ZFS.
ZFS (or rather, Solaris) can make good use of the multi-pathing capability,
only previously available on high speed drives.
Of course, ZFS can make use of any
+1
for zfsdump/zfsrestore
Julian Regel wrote:
When we brought it up last time, I think we found no one knows of a
userland tool similar to 'ufsdump' that's capable of serializing a ZFS
along with holes, large files, ``attribute'' forks, windows ACL's, and
checksums of its own, and then
Maybe there are too many I/Os for this controller.
You may try these settings
b130:
echo zfs_txg_synctime_ms/W0t2000 | mdb -kw
echo zfs_vdev_max_pending/W0t5 | mdb -kw

older builds:
echo zfs_txg_synctime/W0t2 | mdb -kw
echo zfs_vdev_max_pending/W0t5 | mdb -kw
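In case it is useful, the current values can be read back (in decimal) before and after changing them; a sketch using the same variable names as above:

  echo zfs_txg_synctime_ms/D | mdb -k
  echo zfs_vdev_max_pending/D | mdb -k

Note that values written with mdb -kw do not survive a reboot.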
Andreas
On Jan 19, 2010, at 4:36 AM, Jesus Cea wrote:
On 01/19/2010 01:14 AM, Richard Elling wrote:
For example, b129
includes a fix for CR6869229, zfs should switch to shiny new metaslabs more
frequently.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6869229
I think the CR is worth
On Jan 19, 2010, at 1:53 AM, Julian Regel wrote:
When we brought it up last time, I think we found no one knows of a
userland tool similar to 'ufsdump' that's capable of serializing a ZFS
along with holes, large files, ``attribute'' forks, windows ACL's, and
checksums of its own, and then
On Tue, Jan 19, 2010 at 11:36 AM, Richard Elling
richard.ell...@gmail.com wrote:
On Jan 19, 2010, at 1:53 AM, Julian Regel wrote:
When we brought it up last time, I think we found no one knows of a
userland tool similar to 'ufsdump' that's capable of serializing a ZFS
along with holes,
I was researching this same phenomenon today.
Multipath is required for HA storage solutions with redundant I/O path
backplanes and redundant controllers.
(If a controller fails, the other one can still access the hard disk.)
I read about an LSI SAS-to-SATA bridge that can be attached
From the web page it looks like this is a card that goes into the
computer system. That's not very useful for enterprise applications,
as they are going to want to use an external array that can be used
by a redundant pair of servers.
The DDRdrive X1 does utilize a
On Tue, Jan 19, 2010 01:02 PM, A. Krijgsman a.krijgs...@draftsman.nl wrote:
I was researching this same phenomenon today.
Multipath is required for HA storage solutions with redundant I/O path
backplanes and redundant controllers.
(If a controller fails, the other one can still access
On Tue, Jan 19, 2010 01:36 PM, A. Krijgsman a.krijgs...@draftsman.nl wrote:
Yes, those are the ones I meant! (Interposer cards, I couldn't think of that name!)
Would the Dell and HP ones all work in any environment?
It's just plain conversion between industry standards, right?
Or do people have a better
The beauty of ufsdump/ufsrestore is that because it's bundled with the
operating system, I can perform bare metal recovery using a Solaris DVD and
locally attached tape drive. It's simple and arguably essential for system
administrators.
Yep. And it was invented because there was no
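For readers unfamiliar with the procedure being referred to, a rough sketch of the classic workflow (device names are placeholders):

  # level-0 dump of a UFS slice to tape
  ufsdump 0uf /dev/rmt/0n /dev/rdsk/c0t0d0s0
  # bare-metal restore after booting from the Solaris DVD
  newfs /dev/rdsk/c0t0d0s0
  mount /dev/dsk/c0t0d0s0 /a
  cd /a && ufsrestore rf /dev/rmt/0n
  # reinstall the boot block before rebooting
  installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0

The point in this thread is that ZFS has no bundled equivalent of this workflow.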
Julian Regel wrote:
Based on what I've seen in other comments, you might be right.
Unfortunately, I don't feel comfortable backing up ZFS filesystems
because the tools aren't there to do it (built into the operating
system or using Zmanda/Amanda).
Commercial backup solutions are available
Julian Regel jrmailgate-zfsdisc...@yahoo.co.uk wrote:
When I look at the documentation for Zmanda
(http://docs.zmanda.com/Project:Amanda_Enterprise_3.0/ZMC_Users_Manual/Appendix_A:_Backing_Up_and_Restoring_Solaris_ZFS),
it states that the command used to backup a ZFS filesystem depends on
This is probably unreproducible, but I just got a panic whilst
scrubbing a simple mirrored pool on scxe snv124. Evidently
one of the disks went offline for some reason and shortly
thereafter the panic happened. I have the dump and the
/var/adm/messages containing the trace.
Is there any point in
Joerg Schilling wrote:
Ian Collins i...@ianshome.com wrote:
Julian Regel wrote:
Based on what I've seen in other comments, you might be right.
Unfortunately, I don't feel comfortable backing up ZFS filesystems
because the tools aren't there to do it (built into the operating
system
On Wed, 20 Jan 2010, Ian Collins wrote:
Commercial backup solutions are available for ZFS.
I know tape backup isn't sexy, but it's a reality for many of us and it's
not going away anytime soon.
True, but I wonder how viable its future is. One of my clients requires 17
LTO4 tapes for a full
Running Solaris 10 u8 and ZFS.
Has anyone found a need to tune the kernel with the values below?
*Solaris 10 10/09*: This release includes the zfs_arc_min and
zfs_arc_max parameter descriptions. For more information, see
zfs_arc_min http://docs.sun.com/app/docs/doc/817-0404/gjheb?a=view and
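For what it's worth, the usual way to apply these is a line in /etc/system followed by a reboot (the value below is only an example, roughly 4 GB):

  set zfs:zfs_arc_max = 0x100000000

zfs_arc_min is set the same way; whether either is needed depends on how much memory the other workloads on the box require.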
On Tue, 19 Jan 2010, Frank Middleton wrote:
This is probably unreproducible, but I just got a panic whilst
scrubbing a simple mirrored pool on scxe snv124. Evidently
one of the disks went offline for some reason and shortly
thereafter the panic happened. I have the dump and the
Ian Collins i...@ianshome.com wrote:
Sun's tar also writes ACLs in a way that is 100% non-portable. Star cannot
understand them and probably never will be able to understand this format
as it
is not well defined for a portable program like star.
Is that because they are NFSv4
jk == Jerry K sun.mail.lis...@oryx.cc writes:
jk +1
jk for zfsdump/zfsrestore
-1
I don't think a replacement for the ufsdump/ufsrestore tool is really
needed. From now on, backups just go into Another Zpool.
We need the zfs send stream format commitment (stream format depends
only
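As a concrete illustration of "backups just go into another zpool" (pool and dataset names are hypothetical):

  # snapshot everything and replicate the hierarchy into a backup pool
  zfs create backuppool/copies
  zfs snapshot -r tank@2010-01-19
  zfs send -R tank@2010-01-19 | zfs receive -d backuppool/copies

  # later, restore a single filesystem from the backup pool
  zfs send backuppool/copies/home@2010-01-19 | zfs receive tank/home-restored

Whether the stream format itself is a stable archival format is exactly the commitment being asked for above.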
Thank you so much Fajar,
You have been incredibly helpful! I will do as you said. I am just glad I
have not been going down the wrong path!
Thanks,
Greg
On Thu, Jan 14, 2010 at 4:45 PM, Fajar A. Nugraha fa...@fajar.net wrote:
On Fri, Jan 15, 2010 at 12:33 AM, Gregory Durham
ic == Ian Collins i...@ianshome.com writes:
I know tape backup isn't sexy, but it's a reality for many of
us and it's not going away anytime soon.
ic True, but I wonder how viable its future is. One of my
ic clients requires 17 LTO4 tapes for a full backup, which cost
On Tue, Jan 19, 2010 at 12:16:01PM +0100, Joerg Schilling wrote:
Daniel Carosone d...@geek.com.au wrote:
I also don't recommend files over 1 GB in size for DVD media, due to
iso9660 limitations. I haven't used UDF enough to say much about any
limitations there.
ISO-9660 supports files up to
[Cross-posting to ldoms-discuss]
We are occasionally seeing massive time-to-completions for I/O requests on ZFS
file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200,
and using a SSD drive as a ZIL device. Primary access to this system is via
NFS, and with NFS COMMITs
There is a tendency to conflate backup and archive, both generally
and in this thread. They have different requirements.
Backups should enable quick restore of a full operating image with all
the necessary system-level attributes. They are concerned with SLAs,
uptime, outages, and data loss
On Tue, Jan 19, 2010 at 02:24:11PM -0800, Scott Duckworth wrote:
[Cross-posting to ldoms-discuss]
We are occasionally seeing massive time-to-completions for I/O
requests on ZFS file systems on a Sun T5220 attached to a Sun
StorageTek 2540 and a Sun J4200, and using a SSD drive as a ZIL
Hi John,
The message below is a ZFS message, but it's not enough to figure out
what is going on in an LDOM environment. I don't know of any LDOMs
experts that hang out on this list so you might post this on the
ldoms-discuss list, if only to get some more troubleshooting data.
I think you are
Message: 3
Date: Tue, 19 Jan 2010 15:48:52 -0500
From: Miles Nordin car...@ivy.net
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] zfs send/receive as backup - reliability?
Message-ID: oqpr55lt1n@castrovalva.ivy.net
Content-Type: text/plain; charset=us-ascii
I don't think
On Tue, 19 Jan 2010, Scott Duckworth wrote:
[Cross-posting to ldoms-discuss]
We are occasionally seeing massive time-to-completions for I/O
requests on ZFS file systems on a Sun T5220 attached to a Sun
StorageTek 2540 and a Sun J4200, and using a SSD drive as a ZIL
device. Primary access
On Jan 19, 2010, at 14:30, Frank Middleton wrote:
This is probably unreproducible, but I just got a panic whilst
scrubbing a simple mirrored pool on scxe snv124. Evidently
one of the disks went offline for some reason and shortly
thereafter the panic happened. I have the dump and the
On Jan 19, 2010, at 4:26 PM, Allen Eastwood wrote:
Message: 3
Date: Tue, 19 Jan 2010 15:48:52 -0500
From: Miles Nordin car...@ivy.net
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] zfs send/receive as backup - reliability?
Message-ID: oqpr55lt1n@castrovalva.ivy.net
On Jan 19, 2010, at 18:48 , Richard Elling wrote:
Many people use send/recv or AVS for disaster recovery on the inexpensive
side. Obviously, enterprise backup systems also provide DR capabilities.
Since ZFS has snapshots that actually work, and you can use send/receive
or other backup
No errors reported on any disks.
$ iostat -xe
                 extended device statistics              ---- errors ----
device    r/s   w/s   kr/s  kw/s  wait  actv  svc_t  %w  %b  s/w  h/w  trn  tot
vdc0      0.6   5.6   25.0  33.5   0.0   0.1   17.3   0   2    0    0    0    0
vdc1     78.1  24.4
I use zfs send/recv in the enterprise and in smaller environments all the time
and it is excellent.
Have a look at how awesome the functionality is in this example.
http://blog.laspina.ca/ubiquitous/provisioning_disaster_recovery_with_zfs
Regards,
Mike
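The pattern described in that post boils down to something like the following (hostnames and dataset names are made up; the blog has the full details):

  # initial full replication to the DR host
  zfs send tank/vm@snap1 | ssh drhost zfs receive -F backup/vm
  # thereafter, only the changes between snapshots cross the wire
  zfs send -i tank/vm@snap1 tank/vm@snap2 | ssh drhost zfs receive -F backup/vm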
On Tue, 19 Jan 2010, Scott Duckworth wrote:
No errors reported on any disks.
Nothing sticks out in /var/adm/messages on either the primary or cs0 domain.
Thus far there is no evidence that there is anything wrong with your
storage arrays, or even with zfs. The problem seems likely to be
John Meyer wrote:
Looks like this part got cut off somehow:
the filesystem mount point is set to /usr/local/local. I just want to
do a simple backup/restore, can anyone tell me something obvious that I'm not
doing right?
Using OpenSolaris development build 130.
Sounds like bug 6916662,
Michael Schuster wrote:
Mike Gerdts wrote:
On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote:
Hello,
As a result of one badly designed application running loose for some
time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it
I was able to solve it, but it actually worried me more than anything.
Basically, I had created the second pool using the mirror as a primary device.
So three disks but two full disk root mirrors.
Shouldn't zpool have detected an active pool and prevented this? The other LDOM
was claiming a
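For comparison, when the labels are visible to the host doing the create, zpool normally refuses to reuse such a disk; a sketch (device names hypothetical):

  # typically fails with an "is part of active ZFS pool" style error
  zpool create datapool c1d1
  # -f is required to override that check
  zpool create -f datapool c1d1

So the question is essentially why that check did not trigger when the disk was exported to a second LDOM.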
Thus far there is no evidence that there is anything wrong with your
storage arrays, or even with zfs. The problem seems likely to be
somewhere else in the kernel.
Agreed. And I tend to think that the problem lies somewhere in the LDOM
software. I mainly just wanted to get some experienced
Joerg Schilling wrote:
Ian Collins i...@ianshome.com wrote:
Sun's tar also writes ACLs in a way that is 100% non-portable. Star cannot
understand them and probably never will be able to understand this format as it
is not well defined for a portable program like star.
Is that
Allen Eastwood wrote:
On Jan 19, 2010, at 18:48 , Richard Elling wrote:
Many people use send/recv or AVS for disaster recovery on the inexpensive
side. Obviously, enterprise backup systems also provide DR capabilities.
Since ZFS has snapshots that actually work, and you can use send/receive
Star implements this in a very effective way (by using libfind) that is
even
faster than the find(1) implementation from Sun.
Even if I just run find(1) over my filesystem, it will run for 7 hours. But zfs can
create my whole incremental snapshot in a minute or two. There is no way
star or any other
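To make the comparison concrete (dataset name is an example): creating the snapshot an incremental is based on does not walk the files at all, and the send step only reads changed blocks:

  time zfs snapshot tank/data@2010-01-19      # effectively constant time
  zfs send -i tank/data@2010-01-18 tank/data@2010-01-19 > /backup/data.incr

versus a file-level tool, which must at least stat() every file to find out what changed.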
On Jan 19, 2010, at 22:54 , Ian Collins wrote:
Allen Eastwood wrote:
On Jan 19, 2010, at 18:48 , Richard Elling wrote:
Many people use send/recv or AVS for disaster recovery on the inexpensive
side. Obviously, enterprise backup systems also provide DR capabilities.
Since ZFS has