I'm doing a putback onto my local workstation, watching the disk
activity with zpool iostat, when I start to notice something
quite strange...
zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Tonight I've been moving some of my personal data around on my
desktop system and have hit some on-disk corruption. As you may
know, I'm cursed, and so this had a high probability of ending badly.
I have two SCSI disks and use live upgrade, and I have a partition,
/aux0, where I tend to keep
Spencer Shepler [EMAIL PROTECTED] wrote:
On Wed, Jonathan Edwards wrote:
On Oct 25, 2006, at 15:38, Roger Ripley wrote:
IBM has contributed code for NFSv4 ACLs under AIX's JFS; hopefully
Sun will not tarry in following their lead for ZFS.
Hi.
When there's a lot of IOs to a pool then zpool status is really slow.
These are just statistics, so it should be quick.
# truss -ED zpool status
[...]
0.0008 0.0001 fstat64(1, 0xFFBFAC40) = 0
pool: nfs-1
0.0015 0.0001 write(1, p o o l : n f s
Jürgen Keil writes:
ZFS 11.0 on Solaris release 06/06, hangs systems when
trying to copy files from my VXFS 4.1 file system.
Any ideas what this problem could be?
What kind of system is that? How much memory is installed?
I'm able to hang an Ultra 60 with 256 MByte of main
Here is the problem I'm trying to solve...
I've been using a SPARC machine as my primary home server for years. A few years
back the motherboard died. I did a nightly backup to an external USB drive
formatted with UFS. I use an rsync-based backup tool called dirvish, so I
thought I had all the
Sounds familiar. Yes, it is a small system, a Sun Blade 100 with 128MB of
memory. I guess I need to install more memory in this baby. I'm just very
surprised that ZFS requires so much memory to accomplish a task as simple as
a copy. From your experience I can tell it might be a memory-related issue.
Jürgen Keil writes:
ZFS 11.0 on Solaris release 06/06, hangs systems when
trying to copy files from my VXFS 4.1 file system.
Any ideas what this problem could be?
What kind of system is that? How much memory is installed?
I'm able to hang an Ultra 60 with 256 MByte of
Sounds familiar. Yes, it is a small system, a Sun Blade 100 with 128MB of
memory.
Oh, 128MB...
ZFS' *minimum* ARC cache size is fixed at 64MB, so ZFS' ARC cache alone should
already grab slightly more than half of the memory installed in that machine,
leaving less than 64MB of free memory on your
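[For readers hitting this today: on builds where the ARC tunables are exposed, the ARC ceiling can be capped from /etc/system. A minimal sketch; the 64MB value below is an illustration, not a recommendation, and the tunable's availability depends on your build:]

```
* /etc/system -- cap the ZFS ARC at 64MB (example value only)
set zfs:zfs_arc_max = 0x4000000
```

A reboot is required for /etc/system changes to take effect.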
On Thu, Joerg Schilling wrote:
Spencer Shepler [EMAIL PROTECTED] wrote:
On Wed, Jonathan Edwards wrote:
On Oct 25, 2006, at 15:38, Roger Ripley wrote:
IBM has contributed code for NFSv4 ACLs under AIX's JFS; hopefully
Sun will not tarry in following their lead for ZFS.
On Thu, Oct 26, 2006 at 01:30:46AM -0700, Dan Price wrote:
scsi: WARNING: /[EMAIL PROTECTED],70/[EMAIL PROTECTED] (glm0):
Resetting scsi bus, got incorrect phase from (1,0)
genunix: NOTICE: glm0: fault detected in device; service still available
genunix: NOTICE: glm0: Resetting
See:
6430480 grabbing config lock as writer during I/O load can take excessively long
- Eric
On Thu, Oct 26, 2006 at 04:42:00AM -0700, Robert Milkowski wrote:
Hi.
When there's a lot of IOs to a pool then zpool status is really slow.
These are just statistics and it should work
Robert Milkowski wrote:
Hi.
When there's a lot of IOs to a pool then zpool status is really slow.
These are just statistics, so it should be quick.
This is:
6430480 grabbing config lock as writer during I/O load can take
excessively long
eric
# truss -ED zpool status
[...]
It is supposed to work, though I haven't tried it.
Gary Gendel wrote:
Here is the problem I'm trying to solve...
I've been using a SPARC machine as my primary home server for years. A few years
back the motherboard died. I did a nightly backup to an external USB drive
formatted with UFS.
I have reported similar issues with ZFS taking most of my 2G in one
system and 3G in another. I was told to add a swap partition, which
I normally do not do. That has mostly cleared up the problem; however, I am
still bugged by needing to do it in the first place. ZFS memory
management
Juergen Keil wrote:
Sounds familiar. Yes, it is a small system, a Sun Blade 100 with 128MB of
memory.
Oh, 128MB...
Btw, does anyone know if there are any minimum hardware (physical memory)
requirements for using ZFS?
It seems as if ZFS wasn't tested that much on machines with 256MB (or
To have per-user quotas with ZFS you have to have a file system per user. However,
this leads to a very large number of file systems on a large server. I
understand there is work already in hand to make sharing a large number of file
systems faster; however, even mounting a large number of file
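[The per-user layout described above is built with one `zfs create` per user plus a quota property. A hedged sketch; the pool and user names are made up:]

```shell
# one file system per user, each with its own quota
zfs create tank/home
zfs create tank/home/alice
zfs set quota=10G tank/home/alice
zfs get quota tank/home/alice
```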
On Thu, Oct 26, 2006 at 02:38:32PM -0700, Chris Gerhard wrote:
To have per-user quotas with ZFS you have to have a file system per user.
However, this leads to a very large number of file systems on a large
server. I understand there is work already in hand to make sharing a
large number of file
Hi Tim,
I just retried to reproduce it to generate a reliable test case. Unfortunately,
I cannot reproduce the error message, so I really have no idea what might have
caused it.
Sorry,
Tom
This message posted from opensolaris.org
If you share the file systems the time increases even further, but as I
understand it that issue is being worked on:
[EMAIL PROTECTED] # time zpool import zpool1
real 7h6m28.62s
user 14m55.28s
sys 5h58m13.24s
[EMAIL PROTECTED] #
Yes, this is a limitation of the antiquated NFS share
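[Part of why sharing is costly: each file system gets its own share operation at import/mount time. The `sharenfs` property at least inherits down the hierarchy, so it only has to be set once per tree. A sketch with a made-up pool name:]

```shell
# set once on the parent; child file systems inherit the setting
zfs set sharenfs=on tank/home
zfs get -r sharenfs tank/home
```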
We're looking at replacing a current Linux server with a T1000 plus a Fibre
Channel enclosure to take advantage of ZFS. Unfortunately, the T1000 only has a
single drive bay (!), which makes it impossible to follow our normal practice of
mirroring the root file system; naturally the idea of using
1. How do we get the same 4-Way stripe of the LUNs in ZFS (we do
not want any redundancy at ZFS level since it is taken care at HDS
9500 level in hardware)?
Just add the disks to the pool. They'll be automatically striped.
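[A dynamic stripe across the four hardware-RAID LUNs is just a pool whose top-level vdevs are the four devices. A sketch with made-up device names:]

```shell
# four-way dynamic stripe, no ZFS-level redundancy
zpool create tank c2t0d0 c2t1d0 c2t2d0 c2t3d0
zpool status tank
```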
How do we specify a block size of 6144? Do I need
(For some reason I never actually got this as an email; maybe because I'm
not subscribed to zfs-discuss?)
Thanks, Eric.
So do you guys have any suspicions about what is actually failing
here? Is it my drives, or the glm chip? Or both? I was wondering
whether new drives were going to help.
On Tue, Sep 05, 2006 at 10:49:11AM +0200, Pawel Jakub Dawidek wrote:
On Tue, Aug 22, 2006 at 12:45:16PM +0200, Pawel Jakub Dawidek wrote:
Hi.
I started porting the ZFS file system to the FreeBSD operating system.
[...]
Just a quick note about progress in my work. I needed to slow down a