Original Message
Subject: Trial x4500, zfs with NFS and quotas.
Date: Tue, 27 Nov 2007 16:46:33 +0900
From: Jorgen Lundman [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org
Hello list;
We are NetApp users, currently needing to expand. We thought to try
an x4500 since
The software we use is the usual: Postfix with dovecot, apache with
double-hash, https with TLS/SNI, LDAP for provisioning, pure-ftpd, DLZ,
freeradius. No local config changes needed for any setup, just ldap and
netapp.
Lund
Robert Thurlow wrote:
Jorgen Lundman wrote:
*** NFS Option
and efficiency. But maybe not :) (I would need one lofi dev per
filesystem right?)
Definitely worth remembering if I need to do something small/quick.
--
Jorgen Lundman | [EMAIL PROTECTED]
Unix Administrator | +81 (0)3-5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo | +81 (0)90-5578-8500
all NFS clients also
need to support it?
Lund
bound by file-system,
even in these times. But there is probably some implementation
reason why it can't be fixed. If it were just that statfs() would
report incorrect values, but write() would fail with ENOSPC, that would be
acceptable to me.
style, it appears to just work.
# cd /net/x4500/export/mail/m/e/0/0/me118400/
# ls -l
drwxr-xr-x   2 nobody   nobody         2 Nov 27 11:03 mail
My apologies for the noise.
Lund
You're confusing lofi and lofs, I think. Have a look at man lofs.
Now all _I_ would like is translucent options to that and I'd solve one
of my major headaches.
That I am. I have never used lofs, looks interesting. Thanks.
Jorgen Lundman wrote:
You're confusing lofi and lofs, I think. Have a look at man lofs.
Now all _I_ would like is translucent options to that and I'd solve one
of my major headaches.
I cannot export lofs over NFS. It just gives invalid path, and:
http://bugs.opensolaris.org/bugdatabase
5.10 Generic_127111-01 sun4u sparc
SUNW,Sun-Fire-V240 Solaris
I can't comment on the bug, although I notice it is categorised under
nfsv4, but the description doesn't seem to match that.
Julian
--
Julian King
Computer Officer, University
17T   4.6M   17T   1%   /export/test
Jorgen Lundman wrote:
Ah, it's a somewhat misleading error message:
bash-3.00# mount -F lofs /zpool1/test /export/test
bash-3.00# share -F nfs -o rw,anon=0 /export/test
Could not share: /export/test: invalid path
bash-3.00# umount /export
? Where did that foo directory get created exactly?
solution at least. It even appears that I am allowed to
enable compression on the volume.
Thanks
.
Perhaps one day it can mind you, it just is not there today.
(ff0a5a4baa80, fffec7eda6c0)
svc_run+0x171(ff62becb72a0)
svc_do_run+0x85(1)
nfssys+0x748(e, fecf0fc8)
sys_syscall32+0x101()
BAD TRAP: type=e (#pf Page fault) rp=ff001f175320 addr=0 occurred in
module unknown due to a NULL pointer dereference
the LOM to the network in case you
have such issues again, you should be able to recover remotely.
Shawn
On Dec 13, 2007, at 10:33 PM, Jorgen Lundman wrote:
NOC staff couldn't reboot it after the quotacheck crash, and I only just
got around to going to the datacenter. This time I
there is one solution for us; now it is a matter of balancing
the various numbers and coming to a decision.
Lund
neat
if there was!). Doing full rsyncs all the time would probably be slow.
Would it be possible to do a snapshot, then 10 minutes later, another
snapshot and only rsync the differences?
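For what it's worth, a sketch of the snapshot-diff idea as commands
(pool and filesystem names are hypothetical; zfs send -i ships only the
blocks changed between the two snapshots, so it avoids walking the whole
tree the way a full rsync does):

# zfs snapshot zpool1/mail@t1
(... 10 minutes later ...)
# zfs snapshot zpool1/mail@t2
# zfs send -i zpool1/mail@t1 zpool1/mail@t2 | ssh standby zfs recv -F zpool1/mail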
Any advice will be appreciated.
Lund
servers to latest OpenSolaris.
So, the volumes that need quotas are zpool+ufs, and the rest are zfs.
Darren J Moffat wrote:
Jorgen Lundman wrote:
If we were to get two x4500s, with the idea of keeping one as a
passive standby (serious hardware failure) are there any clever
solutions in doing
works, so I have no idea why
the deal is there.
But apart from that, it is performing sufficiently for our needs.
About 86% with compression on the mail volume (ufs on zvol for quotas).
Lund
wanted to know :)
Lund
busy, and yet nearly no CPU use. Do I need to tell it to
use more CPU for NFS? What are common settings used for relatively large
NFS usage?
the HDD since the x4500 doesn't want to
mark it dead.
Jorgen Lundman wrote:
Today, we found the x4500 NFS stopped responding for about 1 minute, but
the server itself was idle. After a little looking around, we found this:
Apr 16 09:16:00 x4500-01.unix fmd: [ID 441519 daemon.error] SUNW
11:51:54 x4500-01.unix last message repeated 20 times
May 10 11:51:55 x4500-01.unix genunix: [ID 622722 kern.notice] done
(not all i/o completed)
May 10 11:51:56 x4500-01.unix genunix: [ID 111219 kern.notice] dumping
to /dev/dsk/c6t0d0s1, offset 65536, content: kernel
Assembled 30 August 2007
SunOS x4500-01.unix 5.11 snv_70b i86pc i386 i86pc
Even though it dumped, it wrote nothing to /var/crash/. Perhaps because
swap is mirrored.
Jorgen Lundman wrote:
We had a panic around noon on Saturday, from which it mostly recovered
by itself. All ZFS NFS
the chance, deliberately panic the box to
make sure you can actually capture a dump...
dumpadm is your friend as far as checking where you are going to dump
to, and if it's one side of your swap mirror, that's bad, M'Kay?
:)
Nathan.
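For reference, checking the current dump target is just (output shape is
illustrative, and the device name is borrowed from the log lines earlier
in the thread):

# dumpadm
      Dump content: kernel pages
       Dump device: /dev/dsk/c6t0d0s1 (swap)
Savecore directory: /var/crash/x4500-01
  Savecore enabled: yes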
Jorgen Lundman wrote:
OK, this is a pretty damn poor
ff001e737c60 ufs:ufs_thread_idle+1a1 ()
ff001e737c70 unix:thread_start+8 ()
syncing file systems...
that the user is out
of space?
Lund
not be feasible.
Unless there has been some advancement in ZFS in the last 6 months I am
not aware of... like user quotas?
Thanks for your assistance.
Lund
Jorgen Lundman wrote:
On Saturday the X4500 system panicked, and rebooted. For some reason the
/export/saba1 UFS partition was corrupt, and needed fsck. This is why
it did not come back online. /export/saba1 is mounted logging,noatime,
so fsck should never (-ish) be needed.
SunOS x4500-01
... are there better values?)
set ufs_ninode=259594
in /etc/system, and reboot. But it is costly to reboot based only on my
guess. Do you have any other suggestions to explore? Will this help?
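If it is any use, the running value can be checked without a reboot via
the kernel debugger (a sketch; the printed number here is made up):

# echo "ufs_ninode/D" | mdb -k
ufs_ninode:     129797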
Sincerely,
Jorgen Lundman
seconds to complete.
Lund
hardware failures.
thanks,
Ross
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
maxsize reached 993770
(Increased it by nearly x10 and it still gets a high 'reached').
Lund
Jorgen Lundman wrote:
We are having slow performance with the UFS volumes on the x4500. They
are slow even on the local server, which makes me think it is (for once)
not NFS-related
filesystems if I were
to simply drop in the two mirrored Sol 10 5/08 boot HDDs on the x4500
and reboot? I assume Sol10 5/08 zpool version would be newer, so in
theory it would work.
Comments?
transferred zfs send. So, rsyncing smaller bits.
zfs send -i only works if you have a full copy already, which we can't
get from above.
will read version 2. I see no
script talking about converting a version 2 to a version 1.
done again.
Lund
PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd30):
And I need to get the answer 40. The hd output additionally gives me
sdar ?
Lund
See http://www.sun.com/servers/x64/x4500/arch-wp.pdf page 21.
Ian
Referring to Page 20? That does show the drive order, just like it does
on the box, but not how to map them from the kernel message to drive
slot number.
Lund
the first to try x4500 here as well.
Anyway, it has almost rebooted, so I need to go remount everything.
Lund
Jorgen Lundman wrote:
Anyway, it has almost rebooted, so I need to go remount everything.
Not that it wants to stay up for longer than ~20 mins, then hangs. In
that all IO hangs, including nfsd.
I thought this might have been related:
http://sunsolve.sun.com/search/document.do?assetkey
like it, will push it to Sun. Although,
we do have SunSolve logins; can we bypass the middleman, avoid the
whole translation fiasco, and log directly with Sun?
Lund
zpool status.
Going to get some sleep, and really hope it has been fixed. Thank you to
everyone who helped.
Lund
Jorgen Lundman wrote:
Jorgen Lundman wrote:
Anyway, it has almost rebooted, so I need to go remount everything.
Not that it wants to stay up for longer than ~20 mins, then hangs
, are there methods in AVS to handle fail-back? Since 02 has
been used, it will have newer/modified files, and will need to replicate
backwards until synchronised, before fail-back can occur.
We did ask our vendor, but we were just told that AVS does not support
x4500.
Lund
been having similar
troubles to yours in the past.
My system is pretty puny next to yours, but it's been reliable now for
slightly over a month.
On Tue, Jan 27, 2009 at 12:19 AM, Jorgen Lundman lund...@gmo.jp wrote:
The vendor wanted to come in and replace an HDD in the 2nd X4500
behaves
like it. Not sure why it would block zpool, zfs and df commands as
well though?
Lund
I've been told we got a BugID:
3-way deadlock happens in ufs filesystem on zvol when writing ufs log
but I cannot view the BugID yet (presumably due to my account's weak
credentials).
Perhaps it isn't something we do wrong; that would be a nice change.
Lund
Jorgen Lundman wrote:
I assume
shipped with 16. And I'm sorry but 16 didn't cut it at all :) We
set it at 1024 as it was the highest number I found via Google.
Lund
with checks for being blacklisted.
Disadvantages are loss of precision, and possibly slower
rescans? Sanity?
But I do not really know the internals of ZFS, so I might be completely
wrong, and everyone is laughing already.
Discuss?
Lund
This I did not know, but now that you point it out, this would be the
right way to design it. So the advantage of requiring less ZFS
integration is no longer the case.
Lund
in ufs filesystem on zvol when writing ufs log
ZFS send very
often as it is far too slow.
Lund
to support quotas for ZFS
JL send, but consider a rescan to be the answer. We don't ZFS send very
JL often as it is far too slow.
Since build 105 it should be *MUCH* faster.
be *MUCH* faster.
is compiling osol compared to,
say, NetBSD/FreeBSD, Linux etc ? (IRIX and its quickstarting??)
Jorgen Lundman wrote:
The website has not been updated yet to reflect its availability (thus
it may not be official yet), but you can get SXCE b114 now from
https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/viewproductdetail-start?productref=sol-express_b114-full-x86
I tried LUpdate 3 times with the same result, then burnt the ISO and
installed the old-fashioned way, and it boots fine.
Jorgen Lundman wrote:
Most annoying. If su.static really had been static I would be able to
figure out what goes wrong.
When I boot into miniroot/failsafe it works just fine
not implemented, not a problem for us.
The Perl CPAN module Quota does not implement ZFS quotas. :)
. Perhaps something to do with the fact that
mount doesn't think it is mounted with quota when local.
I could try mountpoint=legacy and explicitly list rq when mounting, maybe.
But we don't need it to work, it was just different from legacy
behaviour. :)
Lund
?
If not, I could potentially use zfs ioctls perhaps to write my own bulk
import program? Large imports are rare, but I was just curious if there
was a better way to issue large amounts of zfs set commands.
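Lacking a bulk interface, a shell loop is probably the simplest sketch
(the pool name and quota value are hypothetical; one zfs set is issued
per filesystem listed):

# zfs list -H -o name -r zpool1 | while read fs; do
>     zfs set quota=10G "$fs"
> done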
Jorgen Lundman wrote:
Matthew Ahrens wrote:
Thanks for the feedback!
Thank you
To finally close my quest. I tested zfs send in osol-b114 version:
received 82.3GB stream in 1195 seconds (70.5MB/sec)
Yeeaahh!
That makes it completely usable! Just need to change our support
contract to allow us to run b114 and we're set! :)
Thanks,
Lund
Jorgen Lundman wrote:
We
PM, Jorgen Lundman lund...@gmo.jp wrote:
To finally close my quest. I tested zfs send in osol-b114 version:
received 82.3GB stream in 1195 seconds (70.5MB/sec)
Yeeaahh!
That makes it completely usable! Just need to change our support contract to
allow us to run b114 and we're set! :)
Thanks
And alas, grow is completely gone, and no amount of import would see
it. Oh well.
Rob Logan wrote:
you meant to type
zpool import -d /var/tmp grow
Bah - of course, I cannot just expect zpool to know what random
directory to search.
You Sir, are a genius.
Works like a charm, and thank you.
Lund
,
what is the size of the sending zfs?
I thought replication speed depends on the size of the sending fs, too,
not only the size of the snapshot being sent.
Regards
Dirk
--On Friday, May 22, 2009 19:19:34 +0900 Jorgen Lundman
lund...@gmo.jp wrote:
Sorry, yes. It is straight;
# time zfs send
I changed to try zfs send on a UFS on zvolume as well:
received 92.9GB stream in 2354 seconds (40.4MB/sec)
Still fast enough to use. I have yet to get around to trying something
considerably larger in size.
Lund
Jorgen Lundman wrote:
So you recommend I also do speed test on larger
is automatic. But of
course, at the same time, it is MY data, so I'd rather it was using ZFS
and so on.
The Thecus and QNAP raids both use Intel chipsets. I am curious whether,
if I picked up an empty box 2nd-hand for next-to-nothing, I couldn't
re-flash it with osol, or eon, or freenas.
good at.
Lund
) but I have not
personally tried it.
Lund
a whole
load of ZFS data. Has someone already been down this road too?
of trouble hacking this
together (the current source doesn't compile in isolation on my S10
machine).
yet to experience any
problems. But b117 is what 2010/02 version will be based on, so perhaps
that is a better choice. Other versions worth considering?
I know it's a bit vague, but perhaps there is a known panic in a certain
version that I may not be aware of.
Lund
x4540 running svn117
# ./zfs-cache-test.ksh zpool1
zfs create zpool1/zfscachetest
creating data file set 93000 files of 8192000 bytes0 under
/zpool1/zfscachetest ...
done1
zfs unmount zpool1/zfscachetest
zfs mount zpool1/zfscachetest
doing initial (unmount/mount) 'cpio -o . /dev/null'
a rogue 9 appeared in your output.
It was just a standard run of 3,000 files.
1m20.28s
Doing second 'cpio -C 131072 -o /dev/null'
48000256 blocks
real    7m25.34s
user    0m6.63s
sys     1m32.04s
Feel free to clean up with 'zfs destroy zboot/zfscachetest'.
for
desktops :( They are cheap though! Nothing like being the Wal-Mart of Storage!
That is how the pools were created as well. Admittedly it may be down to
our Vendor again.
Lund
uses zfs send, which would be possible,
but 4 minutes is hard to beat when your cluster is under heavy load.
Lund
?
Thanks,
Matt
out the disk, either.
Jorgen Lundman wrote:
Hello list,
Before we started changing to ZFS bootfs, we used DiskSuite mirrored ufs
boot.
Very often, if we needed to grow a cluster by another machine or two, we
would simply clone a running live server. Generally the procedure for
this would
Jorgen Lundman wrote:
However, zpool detach appears to mark the disk as blank, so nothing
will find any pools (import, import -D etc). zdb -l will show labels,
For kicks, I tried to demonstrate this does indeed happen, so I dd'ed
the first 1024 1k blocks from the disk, zpool detach
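The experiment described can be sketched roughly like this (the device
and pool names are hypothetical; zdb -l dumps the ZFS labels from a disk):

# dd if=/dev/rdsk/c1t0d0s0 of=/var/tmp/labels.bin bs=1k count=1024
# zpool detach zpool1 c1t0d0s0
# zdb -l /dev/rdsk/c1t0d0s0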
and 5097228.
Ah of course, you have a valid point, and mirrors can be used in much
more complicated situations.
Been reading your blog all day, while impatiently waiting for zfs-crypto..
Lund
, like official Sol 10
10/08. (I am not sure, but zfs send sounds like you already need the
2nd server set up and running with IPs etc? )
Anyway, we have found a procedure now, so it is all possible. But it
would have been nicer to be able to detach the disk politely ;)
Lund
as to whether the application did or not?
This I have not yet wrapped my head around.
For example, I know rsync and tar do not use fdsync (but dovecot does)
on close(), but does NFS make it fdsync anyway?
Sorry for the giant email.
for it, as I doubt it'll
stay standing after the next earthquake. :)
Lund
Jorgen Lundman wrote:
This thread started over in nfs-discuss, as it appeared to be an nfs
problem initially. Or at the very least, interaction between nfs and zil.
Just summarising speeds we have found when untarring
Peculiar.
Lund
can live
together and put /var in the data pool. That way we would not need to
rebuild the data-pool and all the work that comes with that.
Shame I can't zpool replace to a smaller disk (500GB HDD to 32GB SSD)
though, I will have to lucreate and reboot one time.
Lund
to start around 80,000.
Anyway, sure has been fun.
Lund
. Especially since SOHO NAS devices
seem to start around 80,000.
Anyway, sure has been fun.
Lund
Some preliminary speed tests, not too bad for a pci32 card.
http://lundman.net/wiki/index.php/Lraid5_iozone
Jorgen Lundman wrote:
Finding a SATA card that would work with Solaris, and be hot-swap, and
more than 4 ports, sure took a while. Oh and be reasonably priced ;)
Double the price
. ;)
Jorgen Lundman wrote:
I was following Toms Hardware on how they test NAS units. I have 2GB
memory, so I will re-run the test at 4, if I figure out which option
that is.
I used Excel for the graphs in this case, gnuplot did not want to work.
(Nor did Excel mind you)
Bob Friesenhahn wrote