Scott Meilicke wrote:
Obviously iSCSI and NFS are quite different at the storage level, and I
actually like NFS for the flexibility it offers over iSCSI (quotas,
reservations, etc.).
Another key difference between them is that with iSCSI, the VMFS
filesystem (built on the zvol presented as a block
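For anyone who hasn't run both side by side, a minimal sketch of the two approaches, assuming the pre-COMSTAR shareiscsi property and placeholder pool/dataset names:
# NFS: one ZFS filesystem per store, with per-dataset quota and reservation
zfs create -o quota=200G -o reservation=100G tank/vmstore
zfs set sharenfs=rw tank/vmstore
# iSCSI: a zvol presented as a block device; VMFS is then built on top of it
zfs create -V 200G tank/vmlun
zfs set shareiscsi=on tank/vmlun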
Carson Gaspar wrote:
Not true. The script is simply not intelligent enough. There are really
3 broad kinds of RAM usage:
A) Unused
B) Unfreeable by the kernel (normal process memory)
C) Freeable by the kernel (buffer cache, ARC, etc.)
Monitoring should usually focus on keeping (A+C)
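On Solaris you can get a rough read on buckets (A) and (C) from the command line with kstat; a minimal sketch, keeping in mind the ARC is the largest but not the only freeable consumer:
# bucket A: free pages (multiply by the output of pagesize to get bytes)
kstat -p unix:0:system_pages:freemem
# bucket C, largest part: current ARC size in bytes
kstat -p zfs:0:arcstats:size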
Joerg Schilling wrote:
James Andrewartha jam...@daa.com.au wrote:
Recently there's been discussion [1] in the Linux community about how
filesystems should deal with rename(2), particularly in the case of a crash.
ext4 was found, after a crash, to truncate files that had been written with
Lars-Gunnar Persson wrote:
I would like to go back to my question for a second:
I checked with my Nexsan supplier and they confirmed that access to
every single disk in SATABeast is not possible. The smallest entities
I can create on the SATABeast are RAID 0 or 1 arrays. With RAID 1 I'll
Bob Friesenhahn wrote:
Your idea to stripe two disks per LUN should work. Make sure to use
raidz2 rather than plain raidz for the extra reliability. This
solution is optimized for high data throughput from one user.
Striping two disks per LUN (RAID0 on 2 disks) and then adding a ZFS form of
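Assuming the SATABeast exports each two-disk RAID 0 stripe as its own LUN, the host-side pool creation is then an ordinary raidz2 over those LUNs; a sketch with placeholder device names:
# seven two-disk-stripe LUNs in one raidz2 vdev, so two LUNs' worth of parity
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0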
Miles Nordin wrote:
that SQLite2 should be just as tolerant of snapshot backups as it
is of cord-yanking.
The special backup features of databases, including ``performing a
checkpoint'' or whatever, are for systems incapable of snapshots,
which is most of them. Snapshots are not
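Put concretely, a crash-consistent backup of the dataset holding the database files can be as simple as the following sketch (dataset and host names are placeholders):
# atomic, point-in-time snapshot of the filesystem the SQLite files live on
zfs snapshot tank/db@nightly
# stream it off-box for safekeeping
zfs send tank/db@nightly | ssh backuphost zfs receive -F backup/db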
Mario Goebbels wrote:
One thing I'd like to see is an _easy_ option to fall back onto older
uberblocks when the zpool went belly up for a silly reason. Something
that doesn't involve esoteric parameters supplied to zdb.
Between uberblock updates, there may be many write operations to a data
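In the meantime, the esoteric zdb route alluded to above looks roughly like the sketch below (device path and pool name are placeholders); later builds eventually grew zpool import -F for a supported rollback:
# dump the four labels (each holds an uberblock array) from one vdev
zdb -l /dev/rdsk/c1t0d0s0
# dump the currently active uberblock of an imported pool
zdb -u tank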
Ross wrote:
The problem is they might publish these numbers, but we really have
no way of controlling what number manufacturers will choose to use
in the future.
If for some reason future 500GB drives all turn out to be slightly
smaller than the current ones you're going to
Miles Nordin wrote:
mj == Moore, Joe joe.mo...@siemens.com writes:
mj For a ZFS pool, (until block pointer rewrite capability) this
mj would have to be a pool-create-time parameter.
naw. You can just make ZFS do it all the time, like the other storage
vendors do
Nicolas Williams wrote:
It'd be awesome to have a native directory-dataset conversion feature
in ZFS. And, relatedly, fast moves of files across datasets in the
same volume. These two RFEs have been discussed to death in the list;
see the archives.
This would be a nice feature to have.
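Until something native exists, the usual workaround is the slow manual shuffle, roughly as follows (paths are hypothetical, and the mv across datasets is a full copy):
# turn an existing directory into its own dataset, the long way
mv /tank/home/alice/projects /tank/home/alice/projects.old
zfs create tank/home/alice/projects
# dotfiles at the top level need separate handling
mv /tank/home/alice/projects.old/* /tank/home/alice/projects/
rmdir /tank/home/alice/projects.old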
Ross Smith wrote:
My justification for this is that it seems to me that you can split
disk behavior into two states:
- returns data ok
- doesn't return data ok
And for the state where it's not returning data, you can again split
that in two:
- returns wrong data
- doesn't return data
C. Bergström wrote:
Will Murnane wrote:
On Mon, Nov 24, 2008 at 10:40, Scara Maccai [EMAIL PROTECTED] wrote:
Still don't understand why even the one on http://www.opensolaris.com/,
ZFS - A Smashing Hit, doesn't show the app running at the moment the
HD is smashed... weird...
Tommaso Boccali wrote:
Hi, I have a Thumper with OpenSolaris (snv_91) and 48 disks.
I would like to try a new brand of HD by replacing a spare disk with
a new one and building a zfs pool on it.
Unfortunately the official utility to map a disk to the physical
position inside the
Brian Hechinger wrote:
On Mon, Oct 06, 2008 at 10:47:04AM -0400, Moore, Joe wrote:
I wonder if an AVS-replicated storage device on the backends would be
appropriate?
write - ZFS-mirrored slog - ramdisk -AVS- physical disk
                         \
                          +-iscsi- ramdisk -AVS
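A sketch of the local half of that picture, leaving the AVS replication itself aside (ramdisk name, size and device names are placeholders):
# carve out a ramdisk and use it, mirrored with an iSCSI-backed device,
# as a separate intent-log (slog) vdev; on its own a ramdisk slog is
# volatile, which is what the AVS replication is meant to cover
ramdiskadm -a slog0 1g
zpool add tank log mirror /dev/ramdisk/slog0 c3t1d0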
Nicolas Williams wrote:
There have been threads about adding a feature to support slow mirror
devices that don't stay synced synchronously. At least IIRC. That
would help. But then, if the pool is busy writing then your slow ZIL
mirrors would generally be out of sync, thus being of no help
Toby Thain wrote:
ZFS allows the architectural option of separate storage without losing
end-to-end protection, so the distinction is still important. Of course this means
ZFS itself runs on the application server, but so what?
The OP in question is not running his network clients on
Ian Collins wrote:
I think you'd be surprised how large an organisation can migrate most,
if not all, of their application servers to zones on one or two Thumpers.
Isn't that the reason for buying in server appliances?
Assuming that the application servers can coexist in the only 16GB
Darren J Moffat wrote:
Moore, Joe wrote:
Given the fact that NFS, as implemented in his client systems, provides
no end-to-end reliability, the only data protection that ZFS has any
control over is after the write() is issued by the NFS server process.
NFS can provide on the wire
I believe the problem you're seeing might be related to a deadlock
condition (CR 6745310); if you run pstack on the iscsi target daemon
you might find a bunch of zombie threads. The fix is putback to
snv_99; give snv_99 a try.
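For reference, checking for those stuck threads is a one-liner, assuming the daemon is still running and you have privileges:
# dump the thread stacks of the running iSCSI target daemon
pstack `pgrep -x iscsitgtd`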
Yes, a pstack of the core I've generated from iscsitgtd does have
I've recently upgraded my x4500 to Nevada build 97, and am having problems with
the iscsi target.
Background: this box is used to serve NFS underlying a VMware ESX environment
(zfs filesystem-type datasets) and presents iSCSI targets (zfs zvol datasets)
for a Windows host and to act as
Bob Friesenhahn wrote:
I expect that Sun is realizing that it is already undercutting much of
the rest of its product line. These minor updates would allow the
X4540 to compete against much more expensive StorageTek SAN hardware.
Assuming, of course, that the requirements for the more expensive
Carson Gaspar wrote:
Darren J Moffat wrote:
$ pwd
/cube/builds/darrenm/bugs
$ zfs create -c 6724478
Why -c? -c for current directory. -p (partial) is already taken to
mean create all non-existing parents, and -r (relative) is already used
consistently as recurse in other zfs(1)
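For comparison, the spelled-out form available today, assuming (hypothetically) that the dataset hierarchy under the cube pool mirrors the mountpoints shown above:
# same effect without the proposed -c: name the full dataset path explicitly
zfs create cube/builds/darrenm/bugs/6724478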
Bob Friesenhahn wrote:
Something else came to mind which is a negative regarding
deduplication. When zfs writes new sequential files, it should try to
allocate blocks in a way which minimizes fragmentation (disk seeks).
It should, but because of its copy-on-write nature, fragmentation
I AM NOT A ZFS DEVELOPER. These suggestions should work, but there
may be other people who have better ideas.
Aaron Berland wrote:
Basically, I have a 3-drive raidz array on internal Seagate
drives, running build 64nv. I purchased 3 additional USB drives
with the intention of mirroring and then
I'm using an x4500 as a large data store for our VMware environment. I
have mirrored the first 2 disks, and created a ZFS pool of the other 46:
22 pairs of mirrors, and 2 spares (optimizing for random I/O performance
rather than space). Datasets are shared to the VMware ESX servers via
NFS. We
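A trimmed-down sketch of that layout, showing only two of the 22 mirror pairs and using placeholder device names:
# mirrored pairs for random I/O, plus hot spares, then an NFS-shared dataset for ESX
zpool create vmpool mirror c0t1d0 c1t1d0 mirror c0t2d0 c1t2d0 \
    spare c4t7d0 c5t7d0
zfs create vmpool/esx01
zfs set sharenfs=rw vmpool/esx01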
Have you thought of solid state cache for the ZIL? There's a
16GB battery-backed PCI card out there; I don't know how much
it costs, but the blog where I saw it mentioned a 20x
improvement in performance for small random writes.
Thought about it, looked in the Sun Store, couldn't find
BillTodd wrote:
In order to be reasonably representative of a real-world
situation, I'd suggest the following additions:
Your suggestions (make the benchmark big enough so seek times are really
noticed) are good. I'm hoping that over the holidays, I'll get to play
with an extra server...
Louwtjie Burger wrote:
Richard Elling wrote:
- COW probably makes that conflict worse
This needs to be proven with a reproducible, real-world workload
before it makes sense to try to solve it. After all, if we cannot
measure where we are, how can we prove that we've
Peter Tribble wrote:
I'm not worried about the compression effect. Where I see problems is
backing up millions/tens of millions of files in a single dataset.
Backing up each file is essentially a random read (and this isn't
helped by raidz, which gives you a single disk's worth of random read
Jesus Cea wrote:
Darren J Moffat wrote:
Why would you do that when it would reduce your protection, and ZFS
boot can boot from a mirror anyway?
I guess ditto blocks would be protection enough, since the data
would be duplicated between both disks. Of course, backups are your
friend.
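For what it's worth, data ditto blocks are just the per-dataset copies property; a sketch with a placeholder dataset name:
# keep two copies of every data block in this dataset (metadata is already dittoed)
zfs set copies=2 rpool/export/home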
Mike Gerdts wrote:
I'm curious as to how ZFS manages space (free and used) and how
its usage interacts with thin provisioning provided by HDS
arrays. Is there any effort to minimize the number of provisioned
disk blocks that get writes, so as not to negate any space
benefits that thin
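One knob on the ZFS side that helps here is creating zvols sparse, so no reservation is taken up front and the array only sees writes as blocks are actually used; a sketch with placeholder names and sizes:
# -s makes the zvol sparse (no reservation), friendlier to thin-provisioned LUNs
zfs create -s -V 500G tank/thinvol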
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Frank Cusack
Sent: Friday, August 10, 2007 7:26 AM
To: Tuomas Leikola
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Force ditto block on different vdev?
On August 10, 2007 2:20:30 PM +0300 Tuomas Leikola
[EMAIL
Brian Wilson wrote:
On Jul 16, 2007, at 6:06 PM, Torrey McMahon wrote:
Darren Dunham wrote:
My previous experience with powerpath was that it rode below the
Solaris device layer. So you couldn't cause trespass by using the
wrong device. It would just go to powerpath, which would
Has anyone done a comparison of the reliability and performance of a
mirrored zpool vs. a non-redundant zpool using ditto blocks? What about
a gut-instinct about which will give better performance? Or do I have
to wait until my Thumper arrives to find out for myself?
Also, in selecting where a