On Mon, Jun 9, 2008 at 10:44 PM, Robert Thurlow [EMAIL PROTECTED] wrote:
Brandon High wrote:
AFAIK, you're doing the best that you can while playing within the
constraints of ZFS. If you want to use NFSv3 with your clients,
you'll need to use UFS as the back end.
Just a clarification: NFSv3
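If the clarification here is that NFSv3 itself works fine against a ZFS
backend, a minimal sketch of that setup (pool, dataset and host names are
just examples) would be:

    # on the Solaris server
    zfs create tank/export
    zfs set sharenfs=on tank/export

    # on a Solaris client, forcing NFSv3
    mount -F nfs -o vers=3 server:/tank/export /mnt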
Hi Marc,
Thanks for all of your suggestions.
I'll restart memtest when I'm next in the office and leave it running overnight.
I can recreate the pool - but I guess the question is am I safe to do this on
the existing setup, or am I going to hit the same issue again sometime?
Assuming I don't
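For what it's worth, one way to answer that empirically is to recreate the
pool and scrub it straight away, then watch whether checksum errors come
back (device names below are only placeholders):

    zpool destroy tank
    zpool create tank mirror c1t0d0 c2t0d0
    zpool scrub tank
    zpool status -v tank    # watch the CKSUM column for new errors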
It might just be me, and the 'feel' of it, but it still feels to me that
the system needs to be under more memory pressure before ZFS gives pages
back. This could also be because I'm typically using systems with either
128GB, or <= 4GB of RAM, and in the smaller case, not having some
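If it helps, you can watch how big the ARC actually gets with kstat, and
cap it if you want to force pages back sooner (the 4GB value below is only
an example, and the cap needs a reboot to take effect):

    # current ARC size and target, in bytes
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c

    # /etc/system entry to cap the ARC at 4GB
    set zfs:zfs_arc_max=0x100000000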
That's odd -- the only way the 'rm' should fail is if it can't
read the znode for that file. The znode is metadata, and is
therefore stored in two distinct places using ditto blocks.
So even if you had one unlucky copy that was damaged on two
of your disks, you should still have another copy
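Metadata gets those ditto copies automatically; if you also want extra
copies of file data (useful on a single-disk pool), the copies property can
be raised - it only affects data written after the change, and the names
here are examples:

    zfs get copies tank/data
    zfs set copies=2 tank/data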
On 6/10/08, Volker A. Brandt [EMAIL PROTECTED] wrote:
It might just be me, and the 'feel' of it, but it still feels to me that
the system needs to be under more memory pressure before ZFS gives pages
back. This could also be because I'm typically using systems with either
128GB, or <= 4GB of
Bob Friesenhahn [EMAIL PROTECTED] wrote:
/dev/zero does not have infinite performance. Dd will perform at
least one extra data copy in memory. Since zfs computes checksums it
needs to inspect all of the bytes in what is written. As a result,
zfs will easily know if the block is all
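A rough way to see this for yourself is to compare a /dev/zero run with
compression on and off; with compression enabled ZFS turns all-zero blocks
into holes, so the write is nearly free (dataset and paths are examples):

    zfs set compression=on tank/test
    dd if=/dev/zero of=/tank/test/zeros1 bs=1024k count=1024
    zfs set compression=off tank/test
    dd if=/dev/zero of=/tank/test/zeros2 bs=1024k count=1024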
Hi Tomas,
I will try it myself, but it's just that if I google the subject I only find
old entries describing things like kernel panics and system freezes. I'm just
wondering if this problem is fixed in the newer releases, or if there is
another recommended way to keep data stored on different
Scott,
This looks more like bug 6596237 "Stop looking and start ganging":
http://monaco.sfbay/detail.jsf?cr=6596237.
What version of Solaris are the production servers running (S10 or
OpenSolaris)?
Thanks and regards,
Sanjeev.
Hi Sanjeev,
Thanks for the reply. These servers
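(For anyone following along, the quickest way to answer that is straight
off the box:)

    cat /etc/release
    uname -a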
Richard Elling wrote:
For ZFS, there are some features which conflict with the notion of user
quotas: compression, copies, and snapshots come immediately to mind. UFS
(and perhaps VxFS?) do not have these features, so accounting space to
users is much simpler.
Indeed, if it was
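Since ZFS has no per-user quotas as such, the usual workaround is one
filesystem per user with a quota set on it (names below are illustrative):

    zfs create tank/home/alice
    zfs set quota=10G tank/home/alice
    zfs get quota tank/home/alice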
Sent response by private message.
Today's findings are that the cksum errors appear on the new disk on the other
controller too - so I've ruled out controllers and cables. It's probably as Jeff
says - just got to figure out now how to prove the memory is duff.
Ben
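Besides memtest, one way to keep score while swapping parts around is to
clear the error counters, scrub, and see where new checksum errors land
(pool name is an example):

    zpool clear tank
    zpool scrub tank
    zpool status -v tank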
On Tue, Jun 10, 2008 at 8:01 AM, Ben Middleton [EMAIL PROTECTED] wrote:
Today's findings are that the cksum errors appear on the new disk on the
other controller too - so I've ruled out controllers and cables. It's probably
as Jeff says - just got to figure out now how to prove the memory is
Hi John,
I've done some tests with a Sun X4500 with ZFS and "MAID", using the
powerd of Solaris 10 to power down the disks which weren't accessed for a
configured time. It's working fine...
The only thing I ran into was the problem that it took around a
minute to power on 4 disks in a
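For reference, this is roughly what it looks like in /etc/power.conf,
assuming a made-up device path and a 30 minute idle threshold:

    # /etc/power.conf
    device-thresholds   /pci@0,0/pci1022,7458@1/disk@0,0   30m

    # then apply it
    pmconfig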
Tobias Exner wrote:
Hi John,
I've done some tests with a Sun X4500 with ZFS and MAID using the
powerd of Solaris 10 to power down the disks which weren't accessed for
a configured time. It's working fine...
The only thing I ran into was the problem that it took around a
minute to
On Tue, Jun 10, 2008 at 9:12 AM, Ben Middleton [EMAIL PROTECTED] wrote:
I'll still try a long memtest run, followed by a rebuild of the errored pool.
I'll have a read around to see if there's any way of making the memory more
stable on this mobo.
Run it at 800MHz. I have an MSI P35 Platinum
Have a look at the screenshots in my article (in German):
http://otmanix.de/2008/06/07/heureka-zfs-boot-auf-sparc-betriebsbereit/
Best regards, Otmanix
Scott wrote:
Hello,
I have several ~12TB storage servers using Solaris with ZFS. Two of
them have recently developed performance issues where the majority of
time in an spa_sync() will be spent in the space_map_*() functions.
During this time, zpool iostat will show 0 writes to disk, while
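If it helps to quantify that, a quick DTrace sketch to count which
space_map calls are being hit during the sync looks something like:

    dtrace -n 'fbt::space_map_*:entry { @calls[probefunc] = count(); }'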
Great point. Hadn't thought of it in that way.
I haven't tried truncating a file prior to trying
to remove it. Either way though, I think it is a
bug if once the filesystem fills up, you can't remove
a file.
Brad
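For anyone who wants to try the truncate-then-remove idea, it amounts to
no more than this (path is an example):

    cat /dev/null > /tank/full/bigfile
    rm /tank/full/bigfile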
On Thu, 2008-06-05 at 21:13 -0600, Keith Bierman wrote:
On Jun 5, 2008, at 8:58
Richard Elling wrote:
Tobias Exner wrote:
Hi John,
I've done some tests with a Sun X4500 with ZFS and MAID using the
powerd of Solaris 10 to power down the disks which weren't accessed for
a configured time. It's working fine...
The only thing I ran into was the problem that it took
On Tue, Jun 10, 2008 at 05:32:21PM -0400, Torrey McMahon wrote:
However, some apps will probably be very unhappy if I/O takes 60 seconds
to complete.
It's certainly not uncommon for that to occur in an NFS environment.
All of our applications seem to hang on just fine for minor planned and
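How gracefully clients ride that out mostly comes down to the mount
options; a hard, interruptible mount, for example, will keep retrying
rather than erroring back to the application (server and path are
placeholders):

    mount -F nfs -o hard,intr server:/export/data /mnt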
On Thu, Jun 5, 2008 at 9:12 PM, Joe Little [EMAIL PROTECTED] wrote:
winner is going to be the newer SAS/SATA mixed HBAs from LSI based on
the 1068 chipset, which Sun has been supporting well in newer
hardware.
http://jmlittle.blogspot.com/2008/06/recommended-disk-controllers-for-zfs.html
Joe
Brandon High wrote:
On Thu, Jun 5, 2008 at 9:12 PM, Joe Little [EMAIL PROTECTED] wrote:
winner is going to be the newer SAS/SATA mixed HBAs from LSI based on
the 1068 chipset, which Sun has been supporting well in newer
hardware.
snv_89 is the same. The ZFS Administration console worked fine to create my
first 2 pools. I've been unable to use it since then. I have the same stack
trace errors.
Did you find a workaround for this issue?
-Rick
Way cool stuff man, nice post! :)
snv_89 is the same. The ZFS Administration console
worked fine to create my first 2 pools. I've been
unable to use it since then. I have the same stack
trace errors.
Did you find a workaround for this issue?
-Rick
Nothing yet... dropping to the command line for the moment. Looking
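One thing that might be worth a try before giving up on the GUI entirely
is bouncing the web console service; this is only a guess, not a known fix
for the stack traces:

    smcwebserver restart
    # or, via SMF
    svcadm restart svc:/system/webconsole:console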