Thanks for the update, Adam; that's good to hear. Do you have a bug ID number
for this, or happen to know which build it's fixed in?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On Sun, Oct 25, 2009 at 01:45:05AM -0700, Orvar Korvar wrote:
I am trying to back up a large ZFS file system to two different
identical hard drives. I have therefore started two commands to back up
myfs, and when they have finished, I will back up nextfs:
zfs send mypool/m...@now | zfs receive
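A hedged sketch of an alternative (an assumption, not from the thread): rather than running two separate sends that each read the source, fan one send stream out to both receivers. Pool and dataset names are placeholders, and the SEND_CMD/RECV1/RECV2 variables are illustration hooks so the sketch can be exercised without real pools:

```shell
#!/bin/sh
# Duplicate a single "zfs send" stream to two destinations in one pass,
# using tee and a FIFO (POSIX sh; no process substitution needed).
# All names here are placeholders for illustration.
SEND_CMD=${SEND_CMD:-zfs send mypool/myfs@now}
RECV1=${RECV1:-zfs receive backup1/myfs}
RECV2=${RECV2:-zfs receive backup2/myfs}

fanout_send() {
    fifo=$(mktemp -u) || return 1
    mkfifo "$fifo" || return 1
    $RECV1 < "$fifo" &               # first receiver reads from the FIFO
    $SEND_CMD | tee "$fifo" | $RECV2 # tee duplicates the stream to both
    wait                             # let the background receiver drain
    rm -f "$fifo"
}
```

The source snapshot is read once instead of twice, which matters when the pool is large and the two target drives are the bottleneck anyway.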
Greetings to everyone.
I'm trying to retrieve the checksumming algorithm on a per-block basis
with zdb(1M). I know it's supposed to be run by Sun's support
engineers only; I take full responsibility for whatever damage I
cause to my machine by using it.
Now.
I created a tank/test filesystem,
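For context, a hedged sketch of the sort of zdb invocation involved (zdb's options and output format are unstable and unsupported; tank/test is from the post, and ZDB_CMD is an illustration hook, not a real zdb feature):

```shell
#!/bin/sh
# At high dataset verbosity, zdb dumps each object's block pointers, and
# the cksum= field in those blkptr lines names the per-block checksum
# algorithm (e.g. fletcher4). Read-only, unsupported poking only.
ZDB_CMD=${ZDB_CMD:-zdb}

dump_blkptrs() {
    # -dddddd: dump the dataset at maximum verbosity, incl. block pointers
    $ZDB_CMD -dddddd "$1"
}

# Usage: dump_blkptrs tank/test | grep cksum=
```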
On 26.10.09 14:25, Stathis Kamperis wrote:
Greetings to everyone.
I'm trying to retrieve the checksumming algorithm on a per-block basis
with zdb(1M). I know it's supposed to be run by Sun's support
engineers only; I take full responsibility for whatever damage I
cause to my machine by using
2009/10/26 Victor Latushkin victor.latush...@sun.com:
On 26.10.09 14:25, Stathis Kamperis wrote:
Greetings to everyone.
I'm trying to retrieve the checksumming algorithm on a per-block basis
with zdb(1M). I know it's supposed to be run by Sun's support
engineers only; I take full
Or in OS X with smart folders, where you define a set of search terms
and, as write operations occur on the known filesystems, the folder
contents are updated to reflect the current state of the attached
filesystems
The structures you defined seemed to be designed around the idea of
Hi Ross,
The CR ID is 6740597:
zfs fletcher-2 is losing its carries
Integrated in Nevada build 114 and the Solaris 10 10/09 release.
This CR didn't get a companion man page bug to update the docs
so I'm working on that now.
The opensolaris.org site seems to be in the middle of its migration
On Oct 25, 2009, at 1:45 AM, Orvar Korvar wrote:
I am trying to back up a large ZFS file system to two different
identical hard drives. I have therefore started two commands to
back up myfs, and when they have finished, I will back up nextfs:
zfs send mypool/m...@now | zfs receive
I created http://defect.opensolaris.org/bz/show_bug.cgi?id=12249
--
Robert Milkowski
http://milek.blogspot.com
Why does resilvering an entire disk yield a different amount of resilvered
data each time?
I have read that ZFS only resilvers what it needs to, but in the case of
replacing an entire disk with another freshly formatted disk, you would think
the amount of data would be the same each time
On Mon, 2009-10-26 at 10:24 -0700, Brian wrote:
Why does resilvering an entire disk yield a different amount of resilvered
data each time?
I have read that ZFS only resilvers what it needs to, but in the case of
replacing an entire disk with another formatted clean disk, you would
On Mon, Oct 26, 2009 at 10:24:16AM -0700, Brian wrote:
Why does resilvering an entire disk yield a different amount of resilvered
data each time?
I have read that ZFS only resilvers what it needs to, but in the case
of replacing an entire disk with another formatted clean disk, you
Jeremy Kitchen wrote:
Hey folks!
We're using zfs-based file servers for our backups and we've been having
some issues as of late with certain situations causing zfs/zpool
commands to hang.
anyone? this is happening right now and because we're doing a restore I
can't reboot the machine, so
I may be searching for the wrong thing, but I am trying to figure out a way to
set the default quota for child file systems. I tried setting the quota on the
top level, but that does not have the desired effect. I'd like newly created
filesystems under a certain dataset to be limited to 10G by default.
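As far as I know there is no inheritable "default quota for children" property, so one workaround is a small wrapper that creates each child and applies the quota in one step. A hedged sketch; the dataset names are placeholders and ZFS_CMD is an illustration hook so the sketch can be exercised without a real pool:

```shell
#!/bin/sh
# Create a child filesystem and immediately cap it at a default quota.
# The 10G figure comes from the question; everything else is illustrative.
ZFS_CMD=${ZFS_CMD:-zfs}

create_with_quota() {
    dataset=$1
    quota=${2:-10G}                       # default limit if none given
    $ZFS_CMD create "$dataset" &&
    $ZFS_CMD set "quota=$quota" "$dataset"
}

# Usage: create_with_quota tank/home/alice        # gets the 10G default
#        create_with_quota tank/home/bob 20G      # explicit override
```

The obvious limitation is that it only helps when filesystems are created through the wrapper, not via a bare zfs create.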
Hi,
Simple solution. I did, and it did, and things worked swell! Thanx for
the assist.
I only wish the failure mode were a little easier to interpret...
perhaps I'll try to file an RFE about that...
Jürgen Keil spake thusly, on or about 10/24/09 06:53:
I have a functional OpenSolaris x64
knatte_fnatte_tja...@yahoo.com said:
Is rsync faster? As I have understood it, zfs send gives me an exact
replica, whereas rsync doesn't necessarily do that; maybe the ACLs are not
replicated, etc. Is this correct about rsync vs zfs send?
It is true that rsync (as of 3.0.5, anyway) does not
opensolaris-zfs-disc...@mlists.thewrittenword.com said:
Is it really pointless? Maybe they want the insurance RAIDZ2 provides. Given
the choice between insurance and performance, I'll take insurance, though it
depends on your use case. We're using 5-disk RAIDZ2 vdevs.
. . .
Would love to
7.x FW on the 2500 and 6000 series does not operate the same way as 6.x FW does,
so on some/most loads the ignore-cache-sync-commands option may not improve
performance as expected.
Best regards
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Hi Bob;
In all 2500 and 6000 series arrays you can assign RAID sets to a controller, and
that controller becomes the owner of the set.
Generally you should not force drives to switch between controllers; one
controller always owns a disk while the other waits in standby. Some disks use
ALUA and re-route traffic coming
On Oct 26, 2009, at 11:51 AM, Marion Hakanson wrote:
knatte_fnatte_tja...@yahoo.com said:
Is rsync faster? As I have understood it, zfs send gives me an exact
replica, whereas rsync doesn't necessarily do that; maybe the ACLs are not
replicated, etc. Is this correct about rsync vs zfs send?
Hi Trevor,
As can be seen from my email address and signature below, my answer will be
quite biased. :)
To be honest, while converting every X series server with millions of
alternative configurations to a Fishworks appliance may not be extremely
difficult, it would be impossible to support
On Mon, Oct 26, 2009 at 09:58:05PM +0200, Mertol Ozyoney wrote:
In all 2500 and 6000 series arrays you can assign RAID sets to a controller, and
that controller becomes the owner of the set.
When I configured all 32-drives on a 6140 array and the expansion
chassis, CAM automatically split the drives
Paul
Being a script hacker like you, the only kludge I can think of is a script
that does something like:
ls /some/dir > /tmp/foo
sleep 60
ls /some/dir > /tmp/foo.new
diff /tmp/foo /tmp/foo.new > /tmp/files_that_have_changed
mv /tmp/foo.new /tmp/foo
Or you might be able to knock something up with bart and zfs snapshots.
On 10/25/09 5:38 PM, Paul Archer wrote:
5:12pm, Cyril Plisko wrote:
while there is no inotify for Solaris, there are similar technologies
available.
Check port_create(3C) and gam_server(1)
I can't find much on gam_server on Solaris (couldn't find too much on it
at all, really), and
How about a consumer for gvfs-monitor-dir(1) or gvfs-monitor-
file(1)? :-)
-- richard
On Oct 26, 2009, at 3:17 PM, Carson Gaspar wrote:
On 10/25/09 5:38 PM, Paul Archer wrote:
5:12pm, Cyril Plisko wrote:
while there is no inotify for Solaris, there are similar
technologies
available.
On 10/26/09 3:31 PM, Richard Elling wrote:
How about a consumer for gvfs-monitor-dir(1) or gvfs-monitor-file(1)? :-)
The docs are... ummm... "skimpy" is being rather polite. The docs I can find via
Google say that they will launch some random unspecified daemons via D-Bus (I
assume gvfsd and
Hi Jeremy,
Can you use the command below and send me the output, please?
Thanks,
Cindy
# mdb -k
::stacks -m zfs
On 10/26/09 11:58, Jeremy Kitchen wrote:
Jeremy Kitchen wrote:
Hey folks!
We're using zfs-based file servers for our backups and we've been having
some issues as of late with
On Oct 26, 2009, at 3:56 PM, Carson Gaspar wrote:
On 10/26/09 3:31 PM, Richard Elling wrote:
How about a consumer for gvfs-monitor-dir(1) or gvfs-monitor-
file(1)? :-)
The docs are... ummm... skimpy is being rather polite. The docs I
can find via Google say that they will launch some
Cindy Swearingen wrote:
Hi Jeremy,
Can you use the command below and send me the output, please?
Thanks,
Cindy
# mdb -k
::stacks -m zfs
ack! it *just* fully died. I've had our noc folks reset the machine
and I will get this info to you as soon as it happens again (I'm fairly
I can't find much on gam_server on Solaris (couldn't find too much on it
at all, really), and port_create is apparently a system call. (I'm not a
developer--if I can't write it in BASH, Perl, or Ruby, I can't write
it.)
I appreciate the suggestions, but I need something a little more
On 10/26/09 5:33 PM, p...@paularcher.org wrote:
I can't find much on gam_server on Solaris (couldn't find too much on it
at all, really), and port_create is apparently a system call. (I'm not a
developer--if I can't write it in BASH, Perl, or Ruby, I can't write
it.)
I appreciate the
With that said, I'm concerned that there appears to be a fork between
the open-source version of ZFS and the ZFS that is part of the Sun/Oracle
FishWorks 7nnn series appliances. I understand (implicitly) that
Sun (/Oracle), as a commercial concern, is free to choose its own
priorities in terms
Anyone have any creative solutions for near-synchronous replication between
2 ZFS hosts?
Near-synchronous, meaning an RPO approaching 0.
I realize performance will take a hit.
Thanks,
Mike
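One hedged sketch of a crude approximation (an assumption on my part, not AVS or any product): ship incremental snapshots in a tight loop, so the RPO is roughly the loop interval plus transfer time. The dataset and host names are placeholders, and ZFS_CMD/RECV_CMD are illustration hooks so the sketch can be exercised without real pools:

```shell
#!/bin/sh
# One replication hop: snapshot, send the delta since the previous
# snapshot to the remote side, then drop the old local snapshot.
ZFS_CMD=${ZFS_CMD:-zfs}
RECV_CMD=${RECV_CMD:-ssh backuphost zfs receive -F tank/data}

replicate_step() {
    prev=$1; cur=$2
    $ZFS_CMD snapshot "tank/data@$cur" &&
    $ZFS_CMD send -i "@$prev" "tank/data@$cur" | $RECV_CMD &&
    $ZFS_CMD destroy "tank/data@$prev"
}

# Illustrative driver: seed a base snapshot once, then loop, e.g.
#   zfs snapshot tank/data@rep0
#   while :; do replicate_step rep0 rep1; ...rotate names...; sleep 5; done
```

Anything already acknowledged to the application but not yet shipped is lost on failure, so this only approximates synchronous replication; true RPO-zero needs block-level replication as mentioned elsewhere in the thread.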
On Oct 26, 2009, at 20:42, Carson Gaspar wrote:
Unfortunately, I'm trying for a Solaris solution. I already had a Linux
solution (the 'inotify' I started out with).
And we're on a Solaris mailing list, trying to give you solutions
that work on Solaris. Don't believe everything you read on
I'm having similar issues, with two AOC-USAS-L8i Supermicro 1068e
cards mpt2 and mpt3, running 1.26.00.00IT
It seems to only affect a specific revision of disk. (???)
sd67 Soft Errors: 0 Hard Errors: 127 Transport Errors: 3416
Vendor: ATA Product: WDC WD10EACS-00D Revision: 1A01
On Oct 26, 2009, at 7:36 PM, Mike Watkins wrote:
Anyone have any creative solutions for near-synchronous replication
between 2 ZFS hosts?
Near-synchronous, meaning an RPO approaching 0.
Many Solaris solutions are using AVS for this. But you could use
block-level replication from a number of vendors.
I haven't tried this, but this must be very easy with dtrace. How come no one
mentioned it yet? :) You would have to monitor some specific syscalls...
On Mon, Oct 26, 2009 at 08:53:50PM -0700, Anil wrote:
I haven't tried this, but this must be very easy with dtrace. How come
no one mentioned it yet? :) You would have to monitor some specific
syscalls...
DTrace is not reliable in this sense: it will drop events rather than
overburden the