Was it over NFS ?
Was zil_disable set on the server ?
If it's yes/yes, I still don't know for sure if that would
be grounds for a causal relationship, but I would certainly
be looking into it.
-r
Trevor Watson writes:
Anton B. Rang wrote:
Were there any errors reported in
Roch - PAE wrote:
Was it over NFS ?
No, local.
Was zil_disable set on the server ?
Not unless it is set by default. I haven't changed any ZFS params.
If it's yes/yes, I still don't know for sure if that would
be grounds for a causal relationship, but I would certainly
be looking into it.
I have a machine with ZFS connected to a SAN. The storage space was
increased on the SAN. The format command shows the increased volume size
correctly, but the size of the ZFS pool did not increase. What should I do
so that ZFS takes this increase in volume into account?
Thanks,
Nathalie.
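Not an authoritative answer, but the usual first thing to try in this situation is to export and re-import the pool so that ZFS re-reads the device size; whether the pool actually grows depends on the release and on how the LUN was labeled. The pool name below is hypothetical, and applications should be quiesced first:

  zpool export tank
  zpool import tank
  zpool list tank     # the SIZE column should reflect the larger LUN if the expansion was picked up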
I've noticed that fstyp on floppy media formatted with pcfs now needs
somewhere between 30 and 100 seconds to find out that the media is
formatted with pcfs.
E.g. on sparc snv_48, I currently observe this:
% time fstyp /vol/dev/rdiskette0/nomedia
pcfs
0.01u 0.10s 1:38.84 0.1%
zfs's
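One quick way to see where that minute and a half goes is to let truss count and time the system calls fstyp makes against the floppy device (sketch only; the device path is the one from the example above):

  # summary of system calls and the time spent in each
  truss -c fstyp /vol/dev/rdiskette0/nomedia
  # or watch individual reads and ioctls with timestamps to spot the slow ones
  truss -d -t read,ioctl fstyp /vol/dev/rdiskette0/nomedia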
Hello zfs-discuss,
S10U3, one pool with several RAID-Z2 groups, one file system in a
pool with one large file (about 16.7 TB). Pool was just imported and
I issued zfs destroy pool/test.
According to zpool iostat there's about 1-3MB/s of reads with 1-2K
IOPS. Using iostat I can see about
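For reference, observations like the above come from commands along these lines (pool name hypothetical); zpool iostat shows per-vdev bandwidth and IOPS while iostat shows per-device service times:

  zpool iostat -v tank 5
  iostat -xnz 5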
Hello Nathalie,
Monday, December 18, 2006, 2:14:29 PM, you wrote:
NPI I have a machine with ZFS connected to a SAN. The storage space was
NPI increased on the SAN. The format command shows the increased volume
NPI size correctly, but the size of the ZFS pool did not increase. What
NPI should I do so that
Neil Perrin wrote:
Having said that I don't think we recommend messing with the transaction
group commit timing.
Yeah, I don't think the customer means to tune it this way either; they
were thinking of something like tune_t_fsflushr (is this still in use?)
They want to know when the txg
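If it helps, one way to see exactly when each txg commit starts is a DTrace one-liner on the sync entry point; this is only a sketch and assumes the fbt provider can probe spa_sync() on this kernel:

  # print a timestamp each time a transaction group sync begins
  dtrace -qn 'fbt::spa_sync:entry { printf("%Y  txg sync starting\n", walltimestamp); }'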
Thank you to everyone that has replied. It sounds like I have a few options
with regards to upgrading or just waiting and patching the current environment.
David
[ This is for discussion, it doesn't mean I'm actively working on this
functionality at this time or that I might do so in the future. ]
When we get crypto support one way to do secure delete is to destroy
the key. This is usually a much simpler and faster task than erasing
and overwriting
On 12/18/06, Darren J Moffat [EMAIL PROTECTED] wrote:
[ This is for discussion, it doesn't mean I'm actively working on this
functionality at this time or that I might do so in the future. ]
When we get crypto support one way to do secure delete is to destroy
the key. This is usually a
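As a toy illustration of the idea (this has nothing to do with how ZFS crypto would actually be implemented; the file and key names are made up, and in a real crypto filesystem the plaintext never reaches disk in the first place):

  openssl rand -hex 32 > /tmp/key                           # generate a random 256-bit key
  openssl enc -aes-256-cbc -K "$(cat /tmp/key)" \
      -iv 00000000000000000000000000000000 \
      -in secret.txt -out secret.enc                        # data at rest is ciphertext
  rm /tmp/key                                               # destroying the key is the "secure delete";
                                                            # without it, secret.enc is effectively noise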
Darren J Moffat wrote:
I think we need 5 distinct places to set the policy:
1) On file delete
This would be a per dataset policy.
The bleaching would happen in a new transaction group
created by the one that did the normal deletion, and would
run only if the original
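Just to make the per-dataset idea concrete, the administrative interface might look something like the lines below; the property name is purely hypothetical and no such property exists today:

  # hypothetical illustration only; not a real ZFS property
  zfs set erase=on tank/secure      # bleach blocks freed by deletes in this dataset
  zfs get erase tank/secure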
[EMAIL PROTECTED] wrote:
Rather than bleaching, which doesn't always remove all stains, why can't
we use a word like erasing (which is hitherto unused for filesystem use
in Solaris, AFAIK)?
And this method doesn't remove all stains from the disk anyway; it just
reduces them so they can't be
Additional comments below...
Christine Tran wrote:
Hi,
I guess we are all acquainted with the ZFS Wikipedia page?
http://en.wikipedia.org/wiki/ZFS
Customers refer to it, and I wonder where the Wiki gets its numbers. For
example, there's a Sun marketing slide that says unlimited snapshots,
contradicted
James W. Abendschan wrote:
It took about 3 days to finish
during which the T1000 was basically unusable. (during that time,
sendmail managed to syslog a few messages about how it
was skipping the queue run because the load was at 200 :-)
Glup!.
IMO:
- The hardest problem in the case of bleaching individual files or
datasets is dealing with snapshots/clones:
- blocks not shared with parent/child snapshots can be bleached with
little trouble, of course.
- But what about shared blocks?
IMO we have two options:
Christine Tran wrote:
And the PowerPath question is important, customer is using PP right now.
I haven't heard any powerpath issues. Can you track down what it was
GeorgeW mentioned?
On 12/18/06, Robin Harris [EMAIL PROTECTED] wrote:
There's been another sighting of ZFS on Mac. The latest developer
release of Leopard (Mac OS 10.5) has a dialogue box calling out the
Zettabyte File System (ZFS) as an option. The first publication where I
saw this is a French website called Mac4Ever
The following is output from getfacl on a ufs filesystem:
[EMAIL PROTECTED] maseda]$ getfacl /home/users/ahege/incoming
# file: /home/users/ahege/incoming
# owner: ahege
# group: uncmd
user::rwx
user:nobody:rwx #effective:rwx
group::r-x #effective:r-x
mask:rwx
other:r-x
I
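For comparison, on a ZFS filesystem the same information is carried in NFSv4-style ACLs; a rough equivalent of the UFS entries above, using a hypothetical path, would be:

  # show the ACL in verbose form
  ls -v /tank/users/ahege/incoming
  # grant the equivalent of user:nobody:rwx
  chmod A+user:nobody:read_data/write_data/execute:allow /tank/users/ahege/incoming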
Torrey McMahon wrote:
I haven't heard any powerpath issues. Can you track down what it was
GeorgeW mentioned?
Well, the problem is I can't remember. It was during a ZFS TOI class,
and perhaps it was that PP tries to be clever by grouping tsx together
... If there's been no PP issue
James W. Abendschan wrote:
Once the mirror was synced, I disconnected one of the iSCSI boxes
(pulled the ethernet plug from one of the VTraks), did some I/O
on the volume, and Solaris paniced. After it rebooted, I did a
'zpool scrub' and the T1000 again went into la-la land while the
scrubbing
Mike Seda wrote:
The following is output from getfacl on a ufs filesystem:
[EMAIL PROTECTED] maseda]$ getfacl /home/users/ahege/incoming
# file: /home/users/ahege/incoming
# owner: ahege
# group: uncmd
user::rwx
user:nobody:rwx #effective:rwx
group::r-x #effective:r-x
Al Hopper wrote:
On Sun, 17 Dec 2006, Ricardo Correia wrote:
On Friday 15 December 2006 20:02, Dave Burleson wrote:
Does anyone have a document that describes ZFS in a pure
SAN environment? What will and will not work?
From some of the information I have been gathering
it doesn't
On Dec 18, 2006, at 16:13, Torrey McMahon wrote:
Al Hopper wrote:
On Sun, 17 Dec 2006, Ricardo Correia wrote:
On Friday 15 December 2006 20:02, Dave Burleson wrote:
Does anyone have a document that describes ZFS in a pure
SAN environment? What will and will not work?
From some of the
comment far below...
Jonathan Edwards wrote:
On Dec 18, 2006, at 16:13, Torrey McMahon wrote:
Al Hopper wrote:
On Sun, 17 Dec 2006, Ricardo Correia wrote:
On Friday 15 December 2006 20:02, Dave Burleson wrote:
Does anyone have a document that describes ZFS in a pure
SAN environment?
Hello Torrey,
Monday, December 18, 2006, 8:38:42 PM, you wrote:
TM Christine Tran wrote:
And the PowerPath question is important, customer is using PP right now.
TM I haven't heard any powerpath issues. Can you track down what it was
TM GeorgeW mentioned?
H...
On Mon, Dec 18, 2006 at 05:44:08PM -0500, Jeffrey Hutzelman wrote:
On Monday, December 18, 2006 11:32:37 AM -0600 Nicolas Williams
[EMAIL PROTECTED] wrote:
I'd say go for both, (a) and (b). Of course, (b) may not be easy to
implement.
Another option would be to warn the user and set
On Mon, 18 Dec 2006, Torrey McMahon wrote:
Al Hopper wrote:
On Sun, 17 Dec 2006, Ricardo Correia wrote:
On Friday 15 December 2006 20:02, Dave Burleson wrote:
Does anyone have a document that describes ZFS in a pure
SAN environment? What will and will not work?
From some of
On Mon, Dec 18, 2006 at 06:46:09PM -0500, Jeffrey Hutzelman wrote:
On Monday, December 18, 2006 05:16:28 PM -0600 Nicolas Williams
[EMAIL PROTECTED] wrote:
Or an iovec-style specification. But really, how often will one prefer
this to truncate-and-bleach? Also, the to-be-bleached
Mike Seda wrote:
Basically, is this a supported zfs configuration?
Can't see why not, but whether it's supported is something only Sun support
can speak to, not this mailing list.
You say you lost access to the array though-- a full disk failure
shouldn't cause this if you were using RAID-5 on
I have a Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID 5. This
2 TB logical drive is partitioned into 10 x 200GB slices. I gave 4 of these
slices to a
Solaris 10 U2 machine and added each of them to a concat (non-raid) zpool as
listed below:
This is certainly a supportable
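As a rough sketch of the configuration described (device and pool names are hypothetical), the four slices would have gone into a plain striped pool with no ZFS-level redundancy, something like:

  zpool create tank c4t0d0s0 c4t0d0s1 c4t0d0s3 c4t0d0s4
  zpool status tank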
BTW, Jeff's posts to zfs-discuss are being rejected with this message [ ... ]
... while the spam is coming through loud and clear. ;-)