If I understand correctly, the parity blocks for RAID-Z are also
written in two separate atomic operations, as with RAID-5 (the only
difference being that each stripe can be a different size).
As with RAID-5 on a four-disk stripe, there are four independent
writes, and they
mnh wrote:
Hi,
I was wondering if there is any way to read a ZFS snapshot using
system/zfs lib (ie refer to it as a block device).
I dug through the libzfs source but could not find anything that could
enable me to 'read' the contents of a
snapshot/filesystem.
Why? What problem are you
And to clarify things: metadata is also updated in COW fashion, so
metadata is written to new locations and then the uberblock is
atomically updated to point at the new metadata.
Victor Latushkin wrote:
Well, to add to this, uber-blocks are also updated in COW fashion -
there is a
See inline near the end...
Tomas Ögren wrote:
On 14 May, 2007 - Dale Sears sent me these 0,9K bytes:
I was wondering if this was a good setup for a 3320 single-bus,
single-host attached JBOD. There are 12 146G disks in this array:
I used:
zpool create pool1 \
raidz2 c2t0d0 c2t1d0 c2t2d0
Darren J Moffat wrote:
mnh wrote:
Hi,
I was wondering if there is any way to read a ZFS snapshot using
system/zfs lib (ie refer to it as a block device).
I dug through the libzfs source but could not find anything that
could enable me to 'read' the contents of a
snapshot/filesystem.
mnh wrote:
Darren J Moffat wrote:
mnh wrote:
Hi,
I was wondering if there is any way to read a ZFS snapshot using
system/zfs lib (ie refer to it as a block device).
I dug through the libzfs source but could not find anything that
could enable me to 'read' the contents of a
Darren J Moffat wrote:
Is there a reason why you can't just walk through the snapshot using
POSIX APIs? The snapshot is mounted at
rootofdataset/.zfs/snapshot/nameofsnapshot
We cannot walk through the mounted snapshot as it's not just the data
that we are concerned about. We need to
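For anyone who does only need the file contents, the POSIX walk suggested above can be sketched like this. The dataset mountpoint and snapshot name are placeholders, not real pools; the function simply reports when the snapshot directory does not exist.

```shell
# Reading a snapshot's contents needs no special library: snapshots are
# auto-mounted under <dataset mountpoint>/.zfs/snapshot/<name>.
# The pool and snapshot names used below are invented for illustration.

walk_snapshot() {
    # $1 = dataset mountpoint, $2 = snapshot name
    snapdir="$1/.zfs/snapshot/$2"
    if [ ! -d "$snapdir" ]; then
        echo "snapshot directory not found: $snapdir"
        return 1
    fi
    # Read-only traversal: list every regular file in the snapshot.
    find "$snapdir" -type f -print
}

walk_snapshot /pool1/data mysnap || true
```

This covers file data and ordinary attributes only; as the poster notes, anything below the POSIX layer (on-disk metadata, block layout) is not visible this way.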
On 18 May, 2007 - Dale Sears sent me these 1,5K bytes:
Tomas Ögren wrote:
On 14 May, 2007 - Dale Sears sent me these 0,9K bytes:
I was wondering if this was a good setup for a 3320 single-bus,
single-host attached JBOD. There are 12 146G disks in this array:
I used:
zpool create
I think it would be handy if a utility could read a full zfs snapshot and
restore subsets of files or directories, using something like tar -xf or
ufsrestore -i.
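Pending such a utility, one workaround is to receive the stream into a scratch pool and copy the wanted files back out. A dry-run sketch (the pool, dataset, and file names are all invented, and the run helper echoes each command instead of executing it):

```shell
# Workaround sketch: restore a subset of files from a saved snapshot
# stream by receiving it into a scratch pool first.
# All pool/dataset/path names here are made up for illustration.

run() {
    # Dry run: echo the command instead of executing it, so this
    # sketch is safe to run on a machine with no ZFS pools.
    echo "would run: $*"
}

run "zfs receive scratch/restore < /tape/backup.zsnap"
run "cp -p /scratch/restore/etc/passwd /restored/etc/passwd"
run "zfs destroy -r scratch/restore"
```

The drawback is exactly the one raised later in the thread: the scratch pool needs enough free space to hold the entire received snapshot, not just the files being restored.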
This message posted from opensolaris.org
___
zfs-discuss mailing list
Hi
Just playing around with ZFS, trying to place DBMS data files on a zpool.
The DBMSes I mean here are Oracle and Informix.
I've noticed that read performance is excellent, but write
performance is not, and it also varies a lot.
My guess for the not-so-good
Quoth Steven Sim on Thu, May 17, 2007 at 09:55:37AM +0800:
Gurus;
I am exceedingly impressed by ZFS, although it is my humble opinion
that Sun is not doing enough evangelizing for it.
What else do you think we should be doing?
David
I'm not sure what you want that the file system does not already provide.
You can use cp to copy files out, or find(1) to find them based on time or any
other attribute, and then cpio to copy them out.
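The find-plus-cpio approach above can be sketched as follows. To keep the example self-contained it builds a throwaway source tree; in real use SRC would point at a mounted snapshot directory instead.

```shell
# Sketch: copy selected files out of a (snapshot) directory tree with
# find + cpio pass-through mode, as suggested above. A temporary tree
# stands in for the snapshot so the example runs anywhere.
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/docs"
echo "hello" > "$SRC/docs/a.txt"
echo "world" > "$SRC/docs/b.log"

# Select only *.txt files and copy them out, recreating directories
# (-d) and preserving modification times (-m) under $DST.
( cd "$SRC" && find . -name '*.txt' | cpio -pdm "$DST" 2>/dev/null )

cat "$DST/docs/a.txt"
```

Swapping the `-name` predicate for `-newer somefile` or `-mtime` gives the time-based selection mentioned in the reply.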
This is probably a good place to start.
http://blogs.sun.com/realneel/entry/zfs_and_databases
Please post back to the group with your results, I'm sure many of us are
interested.
Thanks,
-- MikeE
An example would be if you had a raw snapshot on tape. A single file or subset
of files could be restored from it without needing the space to load the full
snapshot into a zpool. This would be handy if you have a zpool with 500GB of
space and 300GB used. If you had a snapshot that was 250GB
Hello all, I am interested in setting up an HA NFS server with zfs as
the storage filesystem on Solaris 10 + Sun Cluster 3.2. This is an HPC
environment with a 70 node cluster attached. File sizes are 1-200meg
or so, with an average around 10meg.
I have two servers, and due to changing specs
On 18-May-07, at 1:57 PM, William D. Hathaway wrote:
An example would be if you had a raw snapshot on tape.
Unless I misunderstand ZFS, you can archive the contents of a
snapshot, but there's no concept of a 'raw snapshot' divorced from a
filesystem.
A single file or subset of files
David Bustos wrote:
Quoth Steven Sim on Thu, May 17, 2007 at 09:55:37AM +0800:
Gurus;
I am exceedingly impressed by ZFS, although it is my humble opinion
that Sun is not doing enough evangelizing for it.
What else do you think we should be doing?
Send Thumpers to
Hello,
with the advent of clones and snapshots, one will of course start
creating them. Which also means destroying them.
Am I the only one who is *extremely* nervous about doing zfs destroy
some/[EMAIL PROTECTED]?
This applies both manually and automatically in a script. I am very paranoid
about
What about having dedicated commands destroysnapshot, destroyclone,
or remove (less dangerous variant of destroy) that will never do
anything but remove snapshots or clones? Alternatively having something
along the lines of zfs destroy --nofs or zfs destroy --safe.
Another option is to allow
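A minimal guard along the lines of the proposed destroysnapshot command could be sketched as a wrapper that refuses anything that does not look like a snapshot name. The dataset names are invented, and the wrapper echoes the command rather than executing it, so it is a dry-run illustration only:

```shell
# Sketch of a "destroy snapshots only" guard. Snapshot names always
# contain '@' (dataset@snapname); anything without it is a filesystem,
# volume, or clone, so the wrapper refuses to touch it.

destroy_snapshot_only() {
    case "$1" in
        *@*) echo "would run: zfs destroy $1" ;;  # dry run: echo, don't execute
        *)   echo "refusing: '$1' is not a snapshot name"; return 1 ;;
    esac
}

destroy_snapshot_only tank/home@monday
destroy_snapshot_only tank/home || true
```

This only narrows the blast radius of a typo; it does nothing about destroying the wrong snapshot, which is where a confirmation or a --safe flag would still matter.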
On 18-May-07, at 4:39 PM, Ian Collins wrote:
David Bustos wrote:
... maybe Sun should make more of the
cost savings in storage ZFS offers to gain a cost advantage over the
competition,
Cheaper AND more robust+featureful is hard to beat.
--T
homerun wrote:
Hi
Just playing around with ZFS, trying to place DBMS data files on a zpool.
The DBMSes I mean here are Oracle and Informix.
I've noticed that read performance is excellent, but write
performance is not, and it also varies a lot.
My guess
Queueing theory should explain this rather nicely. iostat measures
%busy by counting whether there is an entry in the queue at each clock
tick. There are two queues, one in the controller and one on the
disk. As you can clearly see, the way ZFS pushes the load is very
different from dd or UFS.
--
I explored this a bit and found that the ldi_ioctl in my layered driver does
fail, but it fails with an "inappropriate ioctl for device" error, which the
underlying ramdisk driver's ioctl returns. So that doesn't seem to be an issue
at all (since I know the storage pool creation is successful
On 5/17/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Phillip,
Thursday, May 17, 2007, 6:30:38 PM, you wrote:
PF Given: A Solaris 10 u3 server with an externally attached
PF disk array with RAID controller(s)
PF Question: Is it better to create a zpool from a
PF
Yes, I am also interested in this.
We can't afford two super-fast setups, so we are looking at having a huge pile
of SATA disks act as a real-time backup for all our streams.
So what can AVS do, and what are its limitations?
Would just using zfs send and receive do, or does AVS make it all seamless?
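The zfs send/receive alternative amounts to periodic incremental replication. A dry-run sketch of one cycle (host, pool, and snapshot names are invented; the run helper echoes commands instead of executing them):

```shell
# Sketch: incremental replication from a fast pool to a cheap SATA
# box with zfs send/receive. All names below are made up.

run() { echo "would run: $*"; }  # dry run: echo instead of execute

PREV=data@backup-old
NEXT=data@backup-new

run "zfs snapshot fastpool/$NEXT"
# -i sends only the blocks that changed between the two snapshots.
run "zfs send -i fastpool/$PREV fastpool/$NEXT | ssh satahost zfs receive -F satapool/data"
run "zfs destroy fastpool/$PREV"
```

The key difference from AVS: AVS replicates writes continuously at the volume level, while this makes the backup current only as of the last snapshot, so the replication lag is whatever interval you run the cycle at.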
with the advent of clones and snapshots, one will of course start
creating them. Which also means destroying them.
Am I the only one who is *extremely* nervous about doing zfs destroy
some/[EMAIL PROTECTED]?
This applies both manually and automatically in a script. I am very paranoid
about
Rather than rehash this, again, from scratch. Refer to a previous rehashing.
http://www.opensolaris.org/jive/thread.jspa?messageID=15363
-- richard
Peter Schuller wrote:
Hello,
with the advent of clones and snapshots, one will of course start
creating them. Which also means
Rather than rehash this, again, from scratch. Refer to a previous rehashing.
http://www.opensolaris.org/jive/thread.jspa?messageID=15363
That thread really did quickly move to arguments about confirmations and
their usefulness or annoyance.
I think the idea presented of adding
Hey, that's nothing. I had one zfs file system, then I cloned it, so I
thought that I had two separate file systems. Then I was making snaps
of both of them. Later on I decided I did not need the original file
system with its snaps, so I recursively removed it; all of a sudden
I got a