Not sure why you would want these three together, but Lustre and ZFS will
work together in Lustre 1.8. ZFS will be the backend filesystem
for the Lustre servers. See this:
http://wiki.lustre.org/index.php?title=Lustre_OSS/MDS_with_ZFS_DMU
Cheers,
-Atul
On Feb 7, 2008 8:39 AM, kilamanjaro wrote:
For some time now, I have had a zfs pool, created (if I
remember this correctly) on my x86 OpenSolaris box,
with zfs version 6, and have it accessible on
my Leopard Mac. I ran the ZFS beta on the Leopard beta
with no problems at all. I've now installed the latest zfs RW build
on my Leopard and it works
Lori Alt writes in the netinstall README that a slice
should be available for crash dumps. In order to get
this done the following line should be defined within
the profile:
filesys c0[t0]d0s1 auto swap
So my question is: is this still needed, and how do I
access a crash dump if one happened?
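For reference, the dump device and crash-dump retrieval are driven with
dumpadm(1M) and savecore(1M); the slice name below is the hypothetical one
from the profile, so adjust it to your layout:

```shell
# Show the current dump configuration (dump device, savecore directory)
dumpadm

# Dedicate the swap slice from the jumpstart profile as the dump device
# (c0t0d0s1 is the hypothetical slice named in the profile above)
dumpadm -d /dev/dsk/c0t0d0s1

# After a panic, pull the crash dump off the dump device into /var/crash
savecore -v
```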
I just installed nv82 so we'll see how that goes. I'm going to try the
recordsize idea above as well.
A note about UFS: I was told by our local admin guru that ZFS turns on
write-caching for disks, which is something a UFS file system should not
have turned on, so that if I convert the
Unfortunately, I don't know the record size of the writes. Is it as simple as
looking at the size of a file before and after a client request and noting the
difference in size? This is binary data, so I don't know if that makes a
difference, but the average write size is a lot smaller than
Hi Ross,
On Thu, 2008-02-07 at 08:30 -0800, Ross wrote:
While playing around with ZFS and iSCSI devices I've managed to remove
an iscsi target before removing the zpool. Now any attempt to delete
the pool (with or without -f) core dumps zpool.
Any ideas how I get rid of this pool?
Yep,
I notice that files within a snapshot show a different device ID to stat(2)
than the parent file does, but this is not true when mounted via NFS.
Is this a limitation of the NFS client, or just what the ZFS file server
is doing?
Will this change in the future? With NFS4 mirror mounts?
--
Darren
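The difference is easy to see locally via the .zfs directory; the paths below
are hypothetical and the %d format is GNU stat's st_dev field, so the exact
stat(1) invocation may differ on your client:

```shell
# Print device ID and name for a live file and its copy in a snapshot;
# locally the two device IDs differ, over NFSv3 they typically do not.
stat -c '%d %n' /tank/fs/file
stat -c '%d %n' /tank/fs/.zfs/snapshot/snap1/file
```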
To avoid making multiple posts, I'll just write everything here:
-Moving to nv_82 did not seem to do anything, so it doesn't look like fsync was
the issue.
-Disabling ZIL didn't do anything either
-Still playing with 'recsize' values but it doesn't seem to be doing much...I
don't think I have a
I keep my system synchronized to a USB disk from time to time. The script
works by sending incremental snapshots to a pool on the USB disk, then deleting
those snapshots from the source machine.
A botched script ended up deleting a snapshot that was not successfully
received on the USB disk.
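One way to make such a script safer is to destroy the source snapshot only
after the receive reports success (pool and dataset names here are
hypothetical):

```shell
# Send the increment; only prune the old snapshot if the receive succeeded
if zfs send -i tank/data@prev tank/data@now | zfs receive usbpool/data; then
  zfs destroy tank/data@prev
else
  echo "receive failed; keeping tank/data@prev" >&2
  exit 1
fi
```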
Slight correction: 'recsize' must be a power of 2, so it would be 8192.
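A tiny helper makes the rounding explicit, since the ZFS recordsize property
only accepts powers of 2 (512 bytes up to 128K in this era); the dataset name
in the comment is hypothetical:

```shell
# Round a requested record size up to the next power of 2,
# starting from the 512-byte minimum.
next_pow2() {
  n=$1
  p=512
  while [ "$p" -lt "$n" ]; do
    p=$((p * 2))
  done
  echo "$p"
}

next_pow2 8000   # prints 8192

# Then apply it (dataset name is hypothetical):
# zfs set recordsize=8192 tank/rrd
```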
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
One thing I just observed is that the initial file size is 65796 bytes. When
it gets an update, the file size remains at 65796.
Is there a minimum file size?
John-Paul Drawneek wrote:
| I guess a USB pendrive would be slower than a
| harddisk. Bad performance
| for the ZIL.
A decent pendrive of mine writes at 3-5 MB/s. Sure, there are faster
ones, but any desktop hard disk can write at 50 MB/s.
If you are
With my (COTS) LSI 1068 and 1078 based controllers I get consistently
better performance when I export all disks as jbod (MegaCli -
CfgEachDskRaid0).
I even went through all the hoops and loops with 6120s, 6130s and
even some SGI storage, and the result was always the same: better
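For reference, that MegaCLI invocation amounts to the following; the -aALL
adapter flag is the usual one, and the device names in the zpool command are
hypothetical:

```shell
# One single-drive RAID0 volume per physical disk on the LSI controller
MegaCli -CfgEachDskRaid0 -aALL

# Then let ZFS own the redundancy instead of the controller
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
```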
On Wed, 2008-02-06 at 13:42 -0600, Michael Hale wrote:
Hello everybody,
I'm thinking of building out a second machine as a backup for our mail
spool where I push out regular filesystem snapshots, something like a
warm/hot spare situation.
Our mail spool is currently running snv_67
On Thu, Feb 07, 2008 at 01:54:58PM -0800, Andrew Tefft wrote:
Let's say I have a zfs called pool/backups and it contains two
zfs'es, pool/backups/server1 and pool/backups/server2
I have sharenfs=on for pool/backups and it's inherited by the
sub-zfs'es. I can then nfs mount
RRD4J isn't a DB per se, so it doesn't really have a record size. In fact,
I don't even know whether, when data is written to the binary file, it is
contiguous or not, so the amount written may not directly correlate to a proper
record size.
I did run your command and found the size patterns
Let's say I have a zfs called pool/backups and it contains two zfs'es,
pool/backups/server1 and pool/backups/server2
I have sharenfs=on for pool/backups and it's inherited by the sub-zfs'es. I can
then nfs mount pool/backups/server1 or pool/backups/server2, no problem.
If I mount pool/backups
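The situation above can be reproduced roughly like this (server name and
mount points are hypothetical):

```shell
# Each child filesystem mounted explicitly works fine:
mount -F nfs server:/pool/backups/server1 /mnt/server1

# Mounting only the parent shows the children as empty directories,
# unless the client supports NFSv4 mirror mounts and crosses into them:
mount -F nfs server:/pool/backups /mnt/backups
ls /mnt/backups/server1
```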
William,
It should be fairly easy to find the record size using DTrace. Take an
aggregation of the writes happening (aggregate on size for all the
write(2) system calls). This would give a fair idea of the I/O size pattern.
Does RRD4J have a record size mentioned? Usually if it is a
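A minimal sketch of that aggregation; the execname filter is an assumption
(RRD4J runs in a JVM), so adjust it to the actual process name:

```shell
# Quantized histogram of write(2) sizes for a given process; run as
# root and press Ctrl-C to print the aggregation. arg2 is the byte count.
dtrace -n 'syscall::write:entry /execname == "java"/ { @sizes = quantize(arg2); }'
```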
Hello,
I have a unique deployment scenario where the marriage
of ZFS zvol and UFS seem like a perfect match. Here are
the list of feature requirements for my use case:
* snapshots
* rollback
* copy-on-write
* ZFS level redundancy (mirroring, raidz, ...)
* compression
* filesystem cache control
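A rough sketch of that marriage, assuming a pool named tank; names and
sizes are illustrative only:

```shell
# Carve a 10 GB zvol out of the pool; the pool supplies the redundancy,
# compression, snapshots and rollback underneath UFS.
zfs create -V 10g tank/ufsvol
zfs set compression=on tank/ufsvol

# Lay UFS on top of the zvol and mount it
newfs /dev/zvol/rdsk/tank/ufsvol
mount /dev/zvol/dsk/tank/ufsvol /mnt

# Snapshot / roll back the entire UFS image at the zvol level
zfs snapshot tank/ufsvol@clean
# zfs rollback tank/ufsvol@clean   # (with the UFS unmounted)
```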
Much of the complexity in hardware RAID is in the fault detection, isolation,
and management. The fun part is trying to architect a fault-tolerant system
when the suppliers of the components cannot come close to enumerating most of
the possible failure modes.
What happens when a drive's
Because of the mirror mount feature that was integrated into Solaris
Express, build 77.
You can read about it on page 20 of the ZFS Admin Guide:
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Cindy
Andrew Tefft wrote:
Let's say I have a zfs called pool/backups and it contains
-Still playing with 'recsize' values, but it doesn't seem to be doing
much... I don't think I have a good understanding of what exactly is being
written... I think the whole file might be overwritten each time
because it's in binary format.
The other thing to keep in mind is that the tunables like
Dave Lowenstein wrote:
Couldn't we move fixing the "panic the system if it can't find a LUN"
behavior up to the front of the line? That one really sucks.
That's controlled by the failmode property of the zpool, added in PSARC
2007/567 which was integrated in b77.
--
James Andrewartha
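The property can be inspected and changed like this (the pool name is
hypothetical):

```shell
# Behavior on catastrophic device loss, added in b77
zpool get failmode tank
zpool set failmode=continue tank   # alternatives: wait (default), panic
```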
-Setting zfs_nocacheflush, though, got me drastically
increased throughput: client requests took, on
average, less than 2 seconds each!
So, in order to use this, I should have a storage
array, with battery backup, instead of using the
internal drives, correct? I have the option of using
a
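For context, that tunable is set in /etc/system and is only safe when every
pool sits on battery-backed (NVRAM) cache; a sketch:

```shell
# Append the tunable to /etc/system (requires root; takes effect on reboot).
# Without battery-backed cache this risks data loss on power failure.
echo 'set zfs:zfs_nocacheflush = 1' >> /etc/system

# After the reboot, verify the live value with mdb:
echo 'zfs_nocacheflush/D' | mdb -k
```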
Andy Lubel wrote:
With my (COTS) LSI 1068 and 1078 based controllers I get consistently
better performance when I export all disks as jbod (MegaCli -
CfgEachDskRaid0).
Is that really 'all disks as JBOD', or is it 'each disk as a
single-drive RAID0'?
It may not sound different on the
William Fretts-Saxton wrote:
Unfortunately, I don't know the record size of the writes. Is it as simple
as looking at the size of a file, before and after a client request, and
noting the difference in size? This is binary data, so I don't know if that
makes a difference, but the average