Hi, experts,
I installed Solaris 10 06/06 x86 on VMware 5.5 and administered ZFS by both
command line and the web interface; everything worked fine. Web admin is more
convenient since I needn't type commands. But after my computer lost power and
restarted, I get a problem with the ZFS web admin
(https://hostname:6789/zfs).
The problem is,
So this is the interesting data, right?
1. 3510, RAID-10 using 24 disks from two enclosures, random
optimization, 32KB stripe width, write-back, one LUN
1.1 filebench/varmail for 60s
a. ZFS on top of LUN, atime=off
IO Summary: 490054 ops 8101.6 ops/s, (1246/1247
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB used of 2.8TB (3 stripes of 950GB or so, each of which is
a RAID5 volume on the Adaptec card). We have snapshots every 4 hours
for the first few days. If you add up the snapshot references, the total
appears somewhat high
Hello Roch,
Thursday, August 24, 2006, 3:37:34 PM, you wrote:
R So this is the interesting data, right?
R 1. 3510, RAID-10 using 24 disks from two enclosures, random
R optimization, 32KB stripe width, write-back, one LUN
R 1.1 filebench/varmail for 60s
R a. ZFS on top of
Hi.
S10U2 + patches, SPARC, Generic_118833-20
I issued zpool create but possibly some (or all) MPxIO devices aren't there
anymore.
Now I can't kill zpool.
bash-3.00# zpool create f3-1 mirror c5t600C0FF0098FD5275268D600d0
c5t600C0FF0098FD564175B0600d0 mirror
Hello Robert,
Thursday, August 24, 2006, 4:25:16 PM, you wrote:
RM Hello Roch,
RM Thursday, August 24, 2006, 3:37:34 PM, you wrote:
R So this is the interesting data, right?
R 1. 3510, RAID-10 using 24 disks from two enclosures, random
R optimization, 32KB stripe width, write-back,
Due to legacy constraints, I have a rather complicated system that is currently
using Sun QFS (actually the SAM portion of it). For a lot of reasons, I'd like
to look at moving to ZFS, but would like a sanity check to make sure ZFS is
suitable for this application.
First of all, we are NOT
Hello Robert,
Thursday, August 24, 2006, 4:44:26 PM, you wrote:
RM Hello Robert,
RM Thursday, August 24, 2006, 4:25:16 PM, you wrote:
RM Hello Roch,
RM Thursday, August 24, 2006, 3:37:34 PM, you wrote:
R So this is the interesting data, right?
R 1. 3510, RAID-10 using 24 disks from
Robert,
One of your disks is not responding. I've been trying to track down why
the SCSI command is not being timed out, but for now check each of
the devices to make sure they are healthy.
BTW, if you capture a corefile let me know.
Thanks,
George
Robert Milkowski wrote:
Hi.
S10U2 +
Boyd and all,
Just an update of what happened and what the customer found out
regarding the issue.
===
It does appear that the disk is filled up by 140G.
I think I now know what happened. I created a raidz pool and I did not
write any data to it before I just pulled out
On Thu, Aug 24, 2006 at 10:12:12AM -0600, Arlina Goce-Capiral wrote:
It does appear that the disk is filled up by 140G.
So this confirms what I was saying, that they are only able to write
(ndisks-1) disks' worth of data (in this case, ~68GB * (3-1) == ~136GB). So there
is no unexpected behavior with
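The arithmetic above can be sketched as a minimal model, assuming single-parity raidz (raidz1) and the ~68GB per-disk figure from this thread:

```python
# Rough model of raidz1 usable capacity: one disk's worth of blocks
# goes to parity, leaving (ndisks - 1) disks' worth for data. The
# 68GB/3-disk figures are taken from the thread above; real pools
# also lose a little space to metadata, which this ignores.
def raidz1_usable_gb(ndisks: int, disk_gb: float) -> float:
    """Approximate usable data capacity of a raidz1 vdev, in GB."""
    if ndisks < 2:
        raise ValueError("raidz1 needs at least 2 disks")
    return (ndisks - 1) * disk_gb

print(raidz1_usable_gb(3, 68))  # 136 -- matches the ~136GB seen here
```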
On Thu, Aug 24, 2006 at 07:07:45AM -0700, Joe Little wrote:
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB used of 2.8TB (3 stripes of 950GB or so, each of which is
a RAID5 volume on the Adaptec card). We have snapshots every 4 hours
for the first few days. If
Bill wrote:
Hi, experts,
I installed Solaris 10 06/06 x86 on VMware 5.5 and administered ZFS by
both command line and the web interface; everything worked fine. Web
admin is more convenient since I needn't type commands. But after my
computer lost power and restarted, I get a problem with the ZFS web admin
(https://hostname:6789/zfs).
ZFS actually uses the ZAP to handle directory lookups. The ZAP is
not a btree but a specialized hash table where a hash for each
directory entry is generated based on that entry's name. Hence you
won't be doing any sort of linear search through the entire directory
for a file; a hash is
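Noel's point, hashing the entry name rather than scanning the whole directory, can be illustrated with a toy sketch. A Python dict stands in for the ZAP here; the real ZAP is an on-disk extensible hash table, so this is only an analogy, not the actual layout:

```python
# Toy contrast between linear and hash-based directory lookup.
# 150,000 entries, matching the experiment mentioned in this thread.
entries = [(f"file{i}", i) for i in range(150_000)]

def linear_lookup(directory, name):
    """O(n): scan every (name, inode) pair until the name matches."""
    for entry_name, inode in directory:
        if entry_name == name:
            return inode
    return None

# O(1) average: hash the name and jump straight to its bucket,
# which is roughly what the ZAP buys ZFS for large directories.
hashed_dir = dict(entries)

assert linear_lookup(entries, "file149999") == 149999
assert hashed_dir["file149999"] == 149999
```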
On Thu, Aug 24, 2006 at 10:46:27AM -0700, Noel Dellofano wrote:
ZFS actually uses the ZAP to handle directory lookups. The ZAP is
not a btree but a specialized hash table where a hash for each
directory entry is generated based on that entry's name. Hence you
won't be doing any sort of
On Thu, Aug 24, 2006 at 01:15:51PM -0500, Nicolas Williams wrote:
I just tried creating 150,000 directories in a ZFS root directory. It
was speedy. Listing individual directories (lookup) is fast.
Glad to hear that it's working well for you!
Listing the large directory isn't, but that turns
Hello George,
Thursday, August 24, 2006, 5:48:08 PM, you wrote:
GW Robert,
GW One of your disks is not responding. I've been trying to track down why
GW the scsi command is not being timed out but for now check out each of
GW the devices to make sure they are healthy.
I know - I unmapped LUNs
Robert Milkowski wrote:
Hello George,
Thursday, August 24, 2006, 5:48:08 PM, you wrote:
GW Robert,
GW One of your disks is not responding. I've been trying to track down why
GW the scsi command is not being timed out but for now check out each of
GW the devices to make sure they are
On 8/24/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
On Thu, Aug 24, 2006 at 07:07:45AM -0700, Joe Little wrote:
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB used of 2.8TB (3 stripes of 950GB or so, each of which is
a RAID5 volume on the Adaptec card). We
On Thu, Aug 24, 2006 at 02:21:33PM -0700, Joe Little wrote:
well, by deleting my 4-hourlies I reclaimed most of the data. To
answer some of the questions, its about 15 filesystems (decendents
included). I'm aware of the space used by snapshots overlapping. I was
looking at the total space
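The overlap Joe mentions is why summing per-snapshot "referenced" sizes overstates what snapshots really hold. A sketch with made-up block IDs (purely illustrative, not how ZFS stores anything):

```python
# Each snapshot references every block that was live when it was taken,
# and consecutive snapshots share most of those blocks. Summing each
# snapshot's "referenced" size therefore double-counts the shared blocks.
snapshots = {
    "pool/fs@00h": {"a", "b", "c"},
    "pool/fs@04h": {"a", "b", "d"},        # shares a, b with @00h
    "pool/fs@08h": {"a", "b", "d", "e"},   # shares a, b, d with @04h
}

sum_of_referenced = sum(len(blocks) for blocks in snapshots.values())
actually_held = len(set().union(*snapshots.values()))

print(sum_of_referenced)  # 10 -- naive total, double-counts shared blocks
print(actually_held)      # 5  -- unique blocks the snapshots really hold
```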
On Thu, 24 Aug 2006, Hawk Tsai wrote:
Can ZFS fail over between two machines?
Like two hosts sharing the storage: can it fail over?
What do you mean exactly by "two hosts sharing the storage"?
Are you looking for a distributed filesystem? ZFS is a local filesystem.
Are you thinking of