On Wed, Sep 02, 2009 at 02:54:42PM -0400, Jacob Ritorto wrote:
Torrey McMahon wrote:
3) Performance isn't going to be that great with their design
but...they might not need it.
Would you be able to qualify this assertion? Thinking through it a bit,
even if the disks are better than
A silly question: Why are you using 132 ZFS pools as opposed to a
single ZFS pool with 132 ZFS filesystems?
--Bill
On Wed, Apr 30, 2008 at 01:53:32PM -0400, Chris Siebenmann wrote:
I have a test system with 132 (small) ZFS pools[*], as part of our
work to validate a new ZFS-based fileserver
I would recommend the 64-bit system, but make sure your controller card
will work in it, first. The bottleneck will most likely be the incoming
network connection (100MB/s) in any case. Assuming, of course, that you
have more than one disk. With the 64-bit system, you'll run into fewer
issues
You can just do something like this:
# zfs list tank/home/billm
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank/home/billm  83.9G  5.56T  74.1G  /export/home/billm
# zdb tank/home/billm
Dataset tank/home/billm [ZPL], ID 83, cr_txg 541, 74.1G, 111066 objects
I may not be understanding your usage case correctly, so bear with me.
Here is what I understand your request to be. Time is increasing from
left to right.
A -- B -- C -- D -- E
      \
       - F -- G
Where E and G are writable filesystems and the others are snapshots.
I think
I would also suggest setting the recordsize property on the zvol when
you create it to 4k, which is, I think, the native ext3 block size.
If you don't do this and allow ZFS to use its 128k default blocksize,
then a 4k write from ext3 will turn into a 128k read/modify/write on the
ZFS side. This
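A minimal sketch of the idea (pool and zvol names are hypothetical; note that for zvols the relevant property is volblocksize, which can only be set at creation time):

```shell
# Hypothetical names; match the zvol block size to ext3's 4k blocks.
# volblocksize is fixed when the zvol is created:
#   zfs create -V 10G -o volblocksize=4k tank/ext3vol
#
# Why it matters: against the 128k default, every 4k write becomes a
# full-record read/modify/write:
recordsize=131072   # 128k default record size
app_write=4096      # 4k ext3 block
echo "amplification: $((recordsize / app_write))x"
```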
On Wed, Sep 05, 2007 at 03:43:38PM -0500, Rob Windsor wrote:
(No, I'm not defending Sun in its apparent patent-growling, either, it
all sucks IMO.)
In contrast to the positioning by NetApp, Sun didn't start the patent
fight. It was started by StorageTek, well prior to Sun's acquisition of
Did you try issuing:
zpool detach your_pool_name new_device
That should detach the new device and stop the resilver. If you just
want to stop the resilver (and leave the device), you should be able to
do:
zpool scrub -s your_pool_name
Which will stop the scrub/resilver.
--Bill
On
See my blog on this topic:
http://blogs.sun.com/bill/entry/ditto_blocks_the_amazing_tape
The quick summary is that if there is more than one vdev comprising the
pool, the copies will be spread across multiple vdevs. If there is only
one, then the copies are spread out physically (at least
When you say rewrites, can you give more detail? For example, are you
rewriting in 8K chunks, random sizes, etc? The reason I ask is because
ZFS will, by default, use 128K blocks for large files. If you then
rewrite a small chunk at a time, ZFS is forced to read 128K, modify the
small chunk
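If the rewrites turn out to be a fixed size (say 8K), matching the dataset's recordsize to them avoids that read/modify/write cycle. A sketch with a hypothetical dataset name (recordsize only affects files written after it is set):

```shell
# zfs set recordsize=8k tank/data
# zfs get recordsize tank/data
NAME       PROPERTY    VALUE    SOURCE
tank/data  recordsize  8K       local
```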
You could easily do this in Solaris today by just using power.conf(4).
Just have it spin down any drives that have been idle for a day or more.
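A sketch of what such an entry could look like (the device path is hypothetical; see power.conf(4) for the exact threshold syntax):

```shell
# /etc/power.conf -- spin down a drive that has been idle for 24 hours
device-thresholds       /dev/dsk/c1t1d0         24h
```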
The periodic testing part would be an interesting project to kick off.
--Bill
On Mon, Jan 29, 2007 at 08:21:16PM -0200, Toby Thain wrote:
Hi,
On Mon, Jan 08, 2007 at 03:47:31PM +0100, Peter Schuller wrote:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
So just to confirm; disabling the zil *ONLY* breaks the semantics of fsync()
and synchronous writes from the application perspective; it will do *NOTHING*
to lessen the
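For context, on ZFS of that era the ZIL was disabled system-wide via an /etc/system tunable (a sketch; later releases replaced this with the per-dataset sync property):

```shell
# /etc/system -- disables the ZIL globally on older ZFS (reboot required)
set zfs:zil_disable = 1
```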
On Sun, Jan 07, 2007 at 06:28:04PM -0500, Dennis Clarke wrote:
Now then, I have a collection of six disks on controller c0 that I would
like to now mirror with this ZPool zfs0. That's the wrong way of thinking,
really. In the SVM world I would create stripes and then mirror them to get
either
On Fri, Jan 05, 2007 at 10:14:21AM -0800, Eric Hill wrote:
I have a pool of 48 500GB disks across four SCSI channels (12 per
channel). One of the disks failed, and was replaced. The pool is now
in a degraded state, but I can't seem to get the pool to be happy with
the replacement. I did a
On Fri, Dec 08, 2006 at 12:15:27AM -0800, Ben Rockwood wrote:
Clearly ZFS file creation is just amazingly heavy even with ZIL
disabled. If creating 4,000 files in a minute squashes four 2.6GHz Opteron
cores we're in big trouble in the longer term. In the meantime I'm
going to find a new home
They both use checksums and can provide self-healing data.
--Bill
On Tue, Nov 28, 2006 at 02:54:56PM -0700, Jason J. W. Williams wrote:
Do both RAID-Z and Mirror redundancy use checksums on ZFS? Or just RAID-Z?
Thanks in advance,
J
On 11/28/06, David Dyer-Bennet [EMAIL PROTECTED]
Hi Michael. Based on the output, there should be no user-visible file
corruption. ZFS saw a bunch of checksum errors on the disk, but was
able to recover in every instance.
While 2-disk RAID-Z is really a fancy (and slightly more expensive,
CPU-wise) way of doing mirroring, at no point should
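The mirror equivalence falls out of the parity arithmetic: RAID-Z parity is an XOR across the data blocks in a stripe, and with only one data disk the XOR of a single block is the block itself. A small sketch (the value is arbitrary):

```shell
# RAID-Z parity is an XOR across the data disks in a stripe.  With a
# single data disk, the parity of one block equals the block itself,
# so a 2-disk RAID-Z stores two identical copies -- a mirror in effect:
data=0xDEADBEEF
parity=$((data ^ 0))          # XOR over a one-element stripe
[ "$parity" -eq "$((data))" ] && echo "parity == data: mirrored"
```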
On Fri, Sep 15, 2006 at 01:23:31AM -0700, can you guess? wrote:
Implementing it at the directory and file levels would be even more
flexible: redundancy strategy would no longer be tightly tied to path
location, but directories and files could themselves still inherit
defaults from the
On Fri, Sep 15, 2006 at 01:10:25PM -0700, Tim Cook wrote:
the status showed 19.46% the first time I ran it, then 9.46% the
second. The question I have is I added the new disk, but it's showing
the following:
Device: c5d0
Storage Pool: fserv
Type: Disk
Device State: Faulted (cannot open)
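If the device really is attached and visible to the OS, something like the following is the usual recovery path (pool and device names taken from the status output above; a sketch, not a guaranteed fix):

```shell
# zpool online fserv c5d0
# zpool replace fserv c5d0
# zpool status fserv
```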
On Thu, Sep 14, 2006 at 08:09:07AM -0700, David Smith wrote:
I have run zpool scrub again, and I now see checksum errors again.
Wouldn't the checksum errors have gotten fixed with the first zpool scrub?
Can anyone recommend actions I should do at this point?
After running the first scrub, did
On Tue, Aug 22, 2006 at 11:46:30AM -0700, Anton B. Rang wrote:
I realized just now that we're actually sending the wrong variant of
SYNCHRONIZE CACHE, at least for SCSI devices which support SBC-2.
SBC-2 (or possibly even SBC-1, I don't have it handy) added the
SYNC_NV bit to the command. If
On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote:
Yes, ZFS uses this command very frequently. However, it only does this
if the whole disk is under the control of ZFS, I believe; so a
workaround could be to use slices rather than whole disks when
creating a ZFS pool on a buggy
Interesting. When you do the import, try doing this:
zpool import -o ro yourpool
And see if that fares any better. If it works, could you send the
output of zpool status -v? Also, how big is the pool in question?
Either access to the machine, or a way to copy the crash dump would be
On Mon, Jul 31, 2006 at 02:17:00PM -0400, Jan Schaumann wrote:
Is there anybody here who's using ZFS on Apple XRaids and serving them
via NFS? Does anybody have any other ideas what I could do to solve
this? (I have, in the mean time, converted the XRaid to plain old UFS,
and performance is
On Mon, Jul 31, 2006 at 03:59:23PM -0400, Jan Schaumann wrote:
Thanks for the suggestion. However, I'm not sure if the above pipeline
is correct:
2# !! | awk '/dev.dsk/{print $1::print -a vdev_t vdev_nowritecache}'
857a0580::print -a vdev_t vdev_nowritecache
3# !! | mdb -k
0
Hmm.
On Fri, Jul 21, 2006 at 07:22:17AM -0600, Gregory Shaw wrote:
After reading the ditto blocks blog (good article, btw), an idea
occurred to me:
Since we use ditto blocks to preserve critical filesystem data, would
it be practical to add a filesystem property that would cause all
files
On Sat, Jul 22, 2006 at 12:44:16AM +0800, Darren Reed wrote:
Bart Smaalders wrote:
I just swap on a zvol w/ my ZFS root machine.
I haven't been watching...what's the current status of using
ZFS for swap/dump?
Is a/the swap solution to use mkswap and then specify that file
in vfstab?
On Wed, Jul 19, 2006 at 03:10:00AM +0200, [EMAIL PROTECTED] wrote:
So how many of the 128 bits of the blockpointer are used for things
other than to point where the block is?
128 *bits*? What filesystem have you been using? :) We've got
luxury-class block pointers that are 128 *bytes*. We
On Tue, Jul 11, 2006 at 11:03:17PM -0400, David Abrahams wrote:
How can RAID-Z preserve transactional semantics when a single
FS block write requires writing to multiple physical devices?
ZFS uses a technique that's been used in databases for years: phase
trees. First you write all
On Fri, Jul 07, 2006 at 09:50:47AM +0100, Darren J Moffat wrote:
Eric Schrock wrote:
On Thu, Jul 06, 2006 at 09:53:32PM +0530, Pramod Batni wrote:
offtopic query :
How can ZFS require more VM address space but not more VM ?
The real problem is VA fragmentation, not consumption.
On Fri, Jul 07, 2006 at 08:20:50AM -0400, Dennis Clarke wrote:
As near as I can tell the ZFS filesystem has no way to back up easily to a
tape in the same way that ufsdump has served for years and years.
...
Of course it took a number of hours for that I/O error to appear because the
tape
On Tue, Jun 20, 2006 at 11:17:42AM -0700, Jonathan Adams wrote:
On Tue, Jun 20, 2006 at 09:32:58AM -0700, Richard Elling wrote:
Flash is (can be) a bit more sophisticated. The problem is that they
have a limited write endurance -- typically spec'ed at 100k writes to
any single bit. The
On Fri, Jun 02, 2006 at 12:42:53PM -0700, Philip Brown wrote:
hi folks...
I've just been exposed to zfs directly, since I'm trying it out on
a certain 48-drive box with 4 cpus :-)
I read in the archives, the recent hard drive write cache
thread. in which someone at sun made the claim that
On Thu, May 11, 2006 at 12:34:45PM +0100, Darren J Moffat wrote:
Where does the 12.5% compression rule in zio_compress_data() come from ?
Given that this is in the generic function for all compression
algorithms rather than in the implementation of lzjb I wonder where the
number comes from ?
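The check amounts to requiring that compression save at least one eighth (12.5%) of the logical size before the compressed copy is kept; otherwise the block is stored uncompressed. A sketch of the arithmetic with hypothetical sizes (this mirrors how later OpenZFS sources express the threshold, not a quote of the original code):

```shell
# A compressed block is kept only if it is no larger than 87.5% of the
# logical size, i.e. compression must save at least lsize/8 bytes:
lsize=131072                          # logical (uncompressed) size
threshold=$((lsize - (lsize >> 3)))   # 87.5% of lsize
csize=120000                          # hypothetical compressed size
if [ "$csize" -le "$threshold" ]; then
  echo "keep compressed"
else
  echo "store uncompressed"
fi
```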
On Thu, May 18, 2006 at 12:46:28PM -0700, Charlie wrote:
Eric Schrock wrote:
Using traditional tools or ZFS send/receive?
Traditional (amanda). I'm not seeing a way to dump zfs file systems to
tape without resorting to 'zfs send' being piped through gtar or
something. Even then, the only
On Fri, May 05, 2006 at 10:19:56AM +0200, Constantin Gonzalez wrote:
(apologies if this was discussed before, I _did_ some research, but this
one may have slipped for me...)
I'm in the process of writing a blog on this one. Give me another day
or so.
Looking through the current Sun ZFS
On Thu, May 04, 2006 at 09:55:37AM -0700, Adam Leventhal wrote:
Is there a way, given a dataset or pool, to get some statistics about the
sizes of writes that were made to the underlying vdevs?
Does zdb -bsv pool give you what you want?
--Bill