Re: [zfs-discuss] zil and root on the same SSD disk

2011-01-13 Thread Jorgen Lundman
Whenever I do a root pool, i.e., configure a pool using the c?t?d?s0 notation, it will always complain about overlapping slices, since *s2 is the entire disk. This warning seems excessive, but -f will ignore it. As for ZIL, the first time I created a slice for it. This worked well, the second
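The overlap warning described above can be acknowledged with -f; a minimal sketch, assuming a hypothetical device name (this requires a real disk, so it is not runnable as-is):

```shell
# zpool warns because slice s0 overlaps s2 (the whole-disk slice);
# -f accepts the warning and creates the pool anyway.
# c0t0d0s0 is a placeholder device, not from the original thread.
zpool create -f rpool c0t0d0s0
```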

[zfs-discuss] Mirroring raidz ?

2011-01-12 Thread Jorgen Lundman
I have a server, with two external drive cages attached, on separate controllers: c0::dsk/c0t0d0 disk connected configured unknown c0::dsk/c0t1d0 disk connected configured unknown c0::dsk/c0t2d0 disk connected

[zfs-discuss] ZFS panic on blade BL465c G1

2010-10-03 Thread Jorgen Lundman
Hello list, I got a c7000 with BL465c G1 blades to play with and have been trying to get some form of Solaris to work on it. However, this is the state: OpenSolaris 134: Installs with ZFS, but no BNX nic drivers. OpenIndiana 147: Panics on zpool create every time, even from console. Has no

Re: [zfs-discuss] opensolaris lightweight install

2010-01-06 Thread Jorgen Lundman
On my NAS I use Velitium: http://sourceforge.net/projects/velitium/ which goes down to about 70MB at the smallest. (2010/01/07 15:23), Frank Cusack wrote: been searching and searching ... -- Jorgen Lundman | lund...@lundman.net Unix Administrator | +81 (0)3 -5456-2687 ext 1017

Re: [zfs-discuss] quotas on zfs at solaris 10 update 9 (10/09)

2009-12-23 Thread Jorgen Lundman
that with ZFS, the number in ls which used to be for blocks actually reports the number of entries in the directory (-1). drwxr-xr-x 13 root bin 13 Oct 28 02:58 spool ^^ # ls -la spool | wc -l 14 Which means you can probably add things up a little faster.
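The counting trick above can be checked by counting directory entries directly rather than relying on the link-count field (which only tracks entries on ZFS); a small sketch using an assumed scratch path:

```shell
# Create a scratch directory with three entries (path is a placeholder).
mkdir -p /tmp/zfs_lc_demo
touch /tmp/zfs_lc_demo/a /tmp/zfs_lc_demo/b /tmp/zfs_lc_demo/c
# Count the entries themselves; on ZFS the directory's link count
# shown by 'ls -l' would track this number, per the thread.
ls -A /tmp/zfs_lc_demo | wc -l
```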

Re: [zfs-discuss] Replacing log with SSD on Sol10 u8

2009-11-26 Thread Jorgen Lundman
at least have a solution, even if it is rather unattractive. 12 servers, and has to be done at 2am means I will be testy for a while. Lund Jorgen Lundman wrote: Interesting. Unfortunately, I can not zpool offline, nor zpool detach, nor zpool remove the existing c6t4d0s0 device. I thought

Re: [zfs-discuss] rquota didnot show userquota (Solaris 10)

2009-11-26 Thread Jorgen Lundman
'no quota'? Both systems are nearly fully patched. Any help is appreciated. Thanks in advance. Willi ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Replacing log with SSD on Sol10 u8

2009-11-25 Thread Jorgen Lundman
=6574286 [*2] http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6739497

[zfs-discuss] Replacing log with SSD on Sol10 u8

2009-11-20 Thread Jorgen Lundman
(long battle there, no support for OpenSolaris from anyone in Japan). Can I delete the sucker using zdb? Thanks for any reply,

Re: [zfs-discuss] ZFS directory and file quota

2009-11-18 Thread Jorgen Lundman
, and did not need any changes.

[zfs-discuss] ZFS dedup vs compression vs ZFS user/group quotas

2009-11-03 Thread Jorgen Lundman
, that is profit to us) Is the space saved with dedup charged in the same manner? I would expect so, I figured some of you would just know. I will check when b128 is out. I don't suppose I can change the model? :) Lund

[zfs-discuss] ZFS user quota, userused updates?

2009-10-19 Thread Jorgen Lundman
...@1029 54.0M local Any suggestions would be most welcome, Lund

Re: [zfs-discuss] zfs on s10u8

2009-10-17 Thread Jorgen Lundman
: Any known issues for the new ZFS on solaris 10 update 8? Or is it still wiser to wait doing a zpool upgrade? Because older ABE's can no longer be accessed then.

Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-30 Thread Jorgen Lundman
bootable Solaris. Very flexible and can put on the Admin GUIs, and so on. https://sourceforge.net/projects/embeddedsolaris/ Lund

[zfs-discuss] Solaris License with ZFS USER quotas?

2009-09-28 Thread Jorgen Lundman
x4540, and calling NetApp. I would rather not (more work for me). I understand Sun is probably experiencing some internal turmoil at the moment, but it has been rather frustrating for us. Lund

Re: [zfs-discuss] Solaris License with ZFS USER quotas?

2009-09-28 Thread Jorgen Lundman
releases of Solaris. Thanks Lund

Re: [zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS

2009-08-31 Thread Jorgen Lundman
and works very well in Solaris. (Package SUNWmv88sx). Lund

Re: [zfs-discuss] x4540 dead HDD replacement, remains configured.

2009-08-21 Thread Jorgen Lundman
Nope, that it does not. Ian Collins wrote: Jorgen Lundman wrote: Finally came to the reboot maintenance to reboot the x4540 to make it see the newly replaced HDD. I tried, reboot, then power-cycle, and reboot -- -r, but I can not make the x4540 accept any HDD in that bay. I'm starting

Re: [zfs-discuss] x4540 dead HDD replacement, remains configured.

2009-08-19 Thread Jorgen Lundman
0 0 c5t4d0 ONLINE 0 0 0 c5t7d0 ONLINE 0 0 0

Re: [zfs-discuss] Ssd for zil on a dell 2950

2009-08-19 Thread Jorgen Lundman
the third-party parts is that the involved support organizations for the software/hardware will make it very clear that such a configuration is quite unsupported. That said, we've had pretty good luck with them. -Greg

Re: [zfs-discuss] libzfs API: sharenfs, sharesmb, shareiscsi, $custom ?

2009-08-17 Thread Jorgen Lundman
would rather not system(zfs) hack it. Lund Ross wrote: Hi Jorgen, Does that software work to stream media to an xbox 360? If so could I have a play with it? It sounds ideal for my home server. cheers, Ross

Re: [zfs-discuss] libzfs API: sharenfs, sharesmb, shareiscsi, $custom ?

2009-08-17 Thread Jorgen Lundman
); if (spawn) lion_set_handler(spawn, root_zfs_handler); # zfs set net.lundman:sharellink=on zpool1/media # ./llink -d -v 32 ./llink - Jorgen Lundman v2.2.1 lund...@shinken.interq.or.jp build 1451 (Tue Aug 18 14:02:44 2009) (libdvdnav). : looking for ZFS filesystems : [root] recognising

[zfs-discuss] libzfs API: sharenfs, sharesmb, shareiscsi, $custom ?

2009-08-16 Thread Jorgen Lundman
the impression that the API is flexible. The ultimate goal is to move away from static paths listed in the config file. Lund

[zfs-discuss] x4540 dead HDD replacement, remains configured.

2009-08-06 Thread Jorgen Lundman
failed I am fairly certain that if I reboot, it will all come back ok again. But I would like to believe that I should be able to replace a disk without rebooting on a X4540. Any other commands I should try? Lund

Re: [zfs-discuss] Lundman home NAS

2009-08-06 Thread Jorgen Lundman
. I never thought about using it with a motherboard inside. Could you provide a complete parts list? What sort of temperatures at the chip, chipset, and drives did you find? Thanks!

Re: [zfs-discuss] x4540 dead HDD replacement, remains configured.

2009-08-06 Thread Jorgen Lundman
...@6,0:a,raw Perhaps because it was booted with the dead disk in place, it never configured the entire sd5 mpt driver. Why the other hard-disks work I don't know. I suspect the only way to fix this, is to reboot again. Lund Jorgen Lundman wrote: x4540 snv_117 We lost a HDD last night

Re: [zfs-discuss] x4540 dead HDD replacement, remains configured.

2009-08-06 Thread Jorgen Lundman
appreciate you're probably more concerned with getting an answer to your question, but if ZFS needs a reboot to cope with failures on even an x4540, that's an absolute deal breaker for everything we want to do with ZFS. Ross

Re: [zfs-discuss] Lundman home NAS

2009-08-02 Thread Jorgen Lundman
to enable it either). Jorgen Lundman wrote: Ok I have redone the initial tests as 4G instead. Graphs are on the same place. http://lundman.net/wiki/index.php/Lraid5_iozone I also mounted it with nfsv3 and mounted it for more iozone. Alas, I started with 100mbit, so it has taken quite a while

Re: [zfs-discuss] Lundman home NAS

2009-08-01 Thread Jorgen Lundman
Some preliminary speed tests, not too bad for a pci32 card. http://lundman.net/wiki/index.php/Lraid5_iozone Jorgen Lundman wrote: Finding a SATA card that would work with Solaris, and be hot-swap, and more than 4 ports, sure took a while. Oh and be reasonably priced ;) Double the price

Re: [zfs-discuss] Lundman home NAS

2009-08-01 Thread Jorgen Lundman
. ;) Jorgen Lundman wrote: I was following Toms Hardware on how they test NAS units. I have 2GB memory, so I will re-run the test at 4, if I figure out which option that is. I used Excel for the graphs in this case, gnuplot did not want to work. (Nor did Excel mind you) Bob Friesenhahn wrote

[zfs-discuss] Lundman home NAS

2009-07-31 Thread Jorgen Lundman
to start around 80,000. Anyway, sure has been fun. Lund

Re: [zfs-discuss] Lundman home NAS

2009-07-31 Thread Jorgen Lundman
. Especially since SOHO NAS devices seem to start around 80,000. Anyway, sure has been fun. Lund

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Jorgen Lundman
. Peculiar. Lund

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Jorgen Lundman
can live together and put /var in the data pool. That way we would not need to rebuild the data-pool and all the work that comes with that. Shame I can't zpool replace to a smaller disk (500GB HDD to 32GB SSD) though, I will have to lucreate and reboot one time. Lund

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-29 Thread Jorgen Lundman
for it, as I doubt it'll stay standing after the next earthquake. :) Lund Jorgen Lundman wrote: This thread started over in nfs-discuss, as it appeared to be an nfs problem initially. Or at the very least, interaction between nfs and zil. Just summarising speeds we have found when untarring

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-28 Thread Jorgen Lundman
as to whether the application did or not? This I have not yet wrapped my head around. For example, I know rsync and tar do not use fdsync (but dovecot does) on their close(), but does NFS make it fdsync anyway? Sorry for the giant email.

Re: [zfs-discuss] ZFS Mirror cloning

2009-07-24 Thread Jorgen Lundman
and 5097228. Ah of course, you have a valid point and mirrors can be used in much more complicated situations. Been reading your blog all day, while impatiently waiting for zfs-crypto.. Lund

Re: [zfs-discuss] ZFS Mirror cloning

2009-07-24 Thread Jorgen Lundman
, like official Sol 10 10/08. (I am not sure, but zfs send sounds like you already need the 2nd server set up and running with IPs etc? ) Anyway, we have found a procedure now, so it is all possible. But it would have been nicer to be able to detach the disk politely ;) Lund

Re: [zfs-discuss] ZFS Mirror cloning

2009-07-23 Thread Jorgen Lundman
out the disk, either. Jorgen Lundman wrote: Hello list, Before we started changing to ZFS bootfs, we used DiskSuite mirrored ufs boot. Very often, if we needed to grow a cluster by another machine or two, we would simply clone a running live server. Generally the procedure for this would

Re: [zfs-discuss] ZFS Mirror cloning

2009-07-23 Thread Jorgen Lundman
Jorgen Lundman wrote: However, zpool detach appears to mark the disk as blank, so nothing will find any pools (import, import -D etc). zdb -l will show labels, For kicks, I tried to demonstrate this does indeed happen, so I dd'ed the first 1024 1k blocks from the disk, zpool detach
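The label behaviour described above can be inspected with zdb; a sketch against a hypothetical detached mirror half (device name is an assumption, and this needs a real device, so it is not runnable as-is):

```shell
# Dump the four ZFS vdev labels from a raw device; a detached disk may
# still show label data even though 'zpool import' refuses to see a pool.
zdb -l /dev/rdsk/c6t4d0s0

# Back up the start of the disk before experimenting, as the poster did
# (1024 blocks of 1k), so the label region can be restored later.
dd if=/dev/rdsk/c6t4d0s0 of=/var/tmp/label-backup bs=1k count=1024
```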

Re: [zfs-discuss] Solaris live CD that supports ZFS root mount for fs fixes

2009-07-16 Thread Jorgen Lundman
? Thanks, Matt

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Jorgen Lundman

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Jorgen Lundman
a rogue 9 appeared in your output. It was just a standard run of 3,000 files.

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Jorgen Lundman

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Jorgen Lundman
1m20.28s Doing second 'cpio -C 131072 -o /dev/null' 48000256 blocks real 7m25.34s user 0m6.63s sys 1m32.04s Feel free to clean up with 'zfs destroy zboot/zfscachetest'.

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Jorgen Lundman
for desktops :( They are cheap though! Nothing like being Wal-Mart of Storage! That is how the pools were created as well. Admittedly it may be down to our Vendor again. Lund

[zfs-discuss] ZFS Mirror cloning

2009-07-14 Thread Jorgen Lundman
uses zfs send, which would be possible, but 4 minutes is hard to beat when your cluster is under heavy load. Lund

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Jorgen Lundman
x4540 running svn117 # ./zfs-cache-test.ksh zpool1 zfs create zpool1/zfscachetest creating data file set 93000 files of 8192000 bytes0 under /zpool1/zfscachetest ... done1 zfs unmount zpool1/zfscachetest zfs mount zpool1/zfscachetest doing initial (unmount/mount) 'cpio -o . /dev/null'

Re: [zfs-discuss] how to discover disks?

2009-07-06 Thread Jorgen Lundman

Re: [zfs-discuss] Interposing on readdir and friends

2009-07-02 Thread Jorgen Lundman
of trouble hacking this together (the current source doesn't compile in isolation on my S10 machine).

[zfs-discuss] Open Solaris version recommendation? b114, b117?

2009-07-02 Thread Jorgen Lundman
yet to experience any problems. But b117 is what 2010/02 version will be based on, so perhaps that is a better choice. Other versions worth considering? I know it's a bit vague, but perhaps there is a known panic in a certain version that I may not be aware. Lund

Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-21 Thread Jorgen Lundman
) but I have not personally tried it. Lund

Re: [zfs-discuss] PicoLCD Was: Best controller card for 8 SATA drives ?

2009-06-21 Thread Jorgen Lundman
a whole load of ZFS data. Has someone already been down this road too?

Re: [zfs-discuss] zfs on 32 bit?

2009-06-17 Thread Jorgen Lundman
good at. Lund

Re: [zfs-discuss] Can the new consumer NAS devices run OpenSolaris?

2009-05-28 Thread Jorgen Lundman
is automatic. But of course, at the same time, it is MY data, so I'd rather it was using ZFS and so on. The Thecus, and QNAP, raids both use Intel chipsets. I am curious if I picked up an empty box; 2nd hand for next-to-nothing, if I couldn't re-flash it with osol, or eon, or freenas.

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-27 Thread Jorgen Lundman
I changed to try zfs send on a UFS on zvolume as well: received 92.9GB stream in 2354 seconds (40.4MB/sec) Still fast enough to use. I have yet to get around to trying something considerably larger in size. Lund Jorgen Lundman wrote: So you recommend I also do speed test on larger

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-26 Thread Jorgen Lundman
, what is the size of the sending zfs? I thought replication speed depends on the size of the sending fs, too, not only the size of the snapshot being sent. Regards Dirk --On Friday, May 22, 2009 19:19:34 +0900 Jorgen Lundman lund...@gmo.jp wrote: Sorry, yes. It is straight; # time zfs send

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-22 Thread Jorgen Lundman
PM, Jorgen Lundman lund...@gmo.jp wrote: To finally close my quest. I tested zfs send in osol-b114 version: received 82.3GB stream in 1195 seconds (70.5MB/sec) Yeeaahh! That makes it completely usable! Just need to change our support contract to allow us to run b114 and we're set! :) Thanks

[zfs-discuss] Replacing HDD with larger HDD..

2009-05-22 Thread Jorgen Lundman
And alas, grow is completely gone, and no amount of import would see it. Oh well.

Re: [zfs-discuss] Replacing HDD with larger HDD..

2009-05-22 Thread Jorgen Lundman
Rob Logan wrote: you meant to type zpool import -d /var/tmp grow Bah - of course, I can not just expect zpool to know what random directory to search. You Sir, are a genius. Works like a charm, and thank you. Lund
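For reference, the fix quoted above is zpool's -d option, which points the import scan at an arbitrary directory of device or vdev files (the pool name 'grow' and path are from the thread; this needs a live system):

```shell
# By default 'zpool import' only scans /dev/dsk; -d makes it scan the
# given directory instead, e.g. one holding file-backed vdevs.
zpool import -d /var/tmp grow
```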

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-21 Thread Jorgen Lundman
To finally close my quest. I tested zfs send in osol-b114 version: received 82.3GB stream in 1195 seconds (70.5MB/sec) Yeeaahh! That makes it completely usable! Just need to change our support contract to allow us to run b114 and we're set! :) Thanks, Lund Jorgen Lundman wrote: We

[zfs-discuss] ZFS userquota groupquota test

2009-05-20 Thread Jorgen Lundman
not implemented, not a problem for us. perl cpan module Quota does not implement ZFS quotas. :)

Re: [zfs-discuss] ZFS userquota groupquota test

2009-05-20 Thread Jorgen Lundman
. Perhaps something to do with that mount doesn't think it is mounted with quota when local. I could try mountpoint=legacy and explicitly list rq when mounting maybe . But we don't need it to work, it was just different from legacy behaviour. :) Lund

Re: [zfs-discuss] ZFS userquota groupquota test

2009-05-20 Thread Jorgen Lundman
? If not, I could potentially use zfs ioctls perhaps to write my own bulk import program? Large imports are rare, but I was just curious if there was a better way to issue large amounts of zfs set commands. Jorgen Lundman wrote: Matthew Ahrens wrote: Thanks for the feedback! Thank you
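Lacking a bulk-import interface, the usual fallback for the mass 'zfs set' problem raised above is a simple loop over one command per user; a hedged sketch (the input file format, dataset name, and quota values are assumptions, and it needs a live pool):

```shell
# quotas.txt holds one "user:quota" pair per line, e.g. "alice:1G".
# Each iteration issues a separate 'zfs set userquota@...' call.
while IFS=: read -r user quota; do
  zfs set "userquota@${user}=${quota}" zpool1/home
done < quotas.txt
```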

Re: [zfs-discuss] Zfs and b114 version

2009-05-19 Thread Jorgen Lundman
I tried LUpdate 3 times with same result, burnt the ISO and installed the old fashioned way, and it boots fine. Jorgen Lundman wrote: Most annoying. If su.static really had been static I would be able to figure out what goes wrong. When I boot into miniroot/failsafe it works just fine

Re: [zfs-discuss] Zfs and b114 version

2009-05-18 Thread Jorgen Lundman
Jorgen Lundman wrote: The website has not been updated yet to reflect its availability (thus it may not be official yet), but you can get SXCE b114 now from https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/viewproductdetail-start?productref=sol-express_b114-full-x86

[zfs-discuss] Zfs and b114 version

2009-05-17 Thread Jorgen Lundman
is compiling osol compared to, say, NetBSD/FreeBSD, Linux etc ? (IRIX and its quickstarting??)

Re: [zfs-discuss] Can the new consumer NAS devices run OpenSolaris?

2009-04-20 Thread Jorgen Lundman

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-04-09 Thread Jorgen Lundman
be *MUCH* faster.

Re: [zfs-discuss] User quota design discussion..

2009-03-14 Thread Jorgen Lundman
to support quotas for ZFS JL send, but consider a rescan to be the answer. We don't ZFS send very JL often as it is far too slow. Since build 105 it should be *MUCH* faster.

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Jorgen Lundman
. This I did not know, but now that you point it out, this would be the right way to design it. So the advantage of requiring less ZFS integration is no longer the case. Lund

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Jorgen Lundman
in ufs filesystem on zvol when writing ufs log

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Jorgen Lundman
ZFS send very often as it is far too slow. Lund

[zfs-discuss] User quota design discussion..

2009-03-11 Thread Jorgen Lundman
with checks for being blacklisted. Disadvantages are that it is loss of precision, and possibly slower rescans? Sanity? But I do not really know the internals of ZFS, so I might be completely wrong, and everyone is laughing already. Discuss? Lund

Re: [zfs-discuss] Introducing zilstat

2009-02-04 Thread Jorgen Lundman

Re: [zfs-discuss] Introducing zilstat

2009-02-04 Thread Jorgen Lundman
shipped with 16. And I'm sorry but 16 didn't cut it at all :) We set it at 1024 as it was the highest number I found via Google. Lund

Re: [zfs-discuss] Replacing HDD in x4500

2009-02-03 Thread Jorgen Lundman
I've been told we got a BugID: 3-way deadlock happens in ufs filesystem on zvol when writing ufs log but I can not view the BugID yet (presumably due to my accounts weak credentials) Perhaps it isn't something we do wrong, that would be a nice change. Lund Jorgen Lundman wrote: I assume

Re: [zfs-discuss] Replacing HDD in x4500

2009-01-27 Thread Jorgen Lundman
been having similar troubles to yours in the past. My system is pretty puny next to yours, but it's been reliable now for slightly over a month. On Tue, Jan 27, 2009 at 12:19 AM, Jorgen Lundman lund...@gmo.jp wrote: The vendor wanted to come in and replace an HDD in the 2nd X4500

Re: [zfs-discuss] Replacing HDD in x4500

2009-01-27 Thread Jorgen Lundman
behaves like it. Not sure why it would block zpool, zfs and df commands as well though? Lund

[zfs-discuss] Replacing HDD in x4500

2009-01-26 Thread Jorgen Lundman

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-16 Thread Jorgen Lundman

[zfs-discuss] x4500 vs AVS ?

2008-09-03 Thread Jorgen Lundman
, are there methods in AVS to handle fail-back? Since 02 has been used, it will have newer/modified files, and will need to replicate backwards until synchronised, before fail-back can occur. We did ask our vendor, but we were just told that AVS does not support x4500. Lund

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-11 Thread Jorgen Lundman

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-11 Thread Jorgen Lundman
zpool status. Going to get some sleep, and really hope it has been fixed. Thank you to everyone who helped. Lund Jorgen Lundman wrote: Jorgen Lundman wrote: Anyway, it has almost rebooted, so I need to go remount everything. Not that it wants to stay up for longer than ~20 mins, then hangs

[zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jorgen Lundman
done again. Lund

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jorgen Lundman
PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd30): And I need to get the answer 40. The hd output additionally gives me sdar ? Lund

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jorgen Lundman
See http://www.sun.com/servers/x64/x4500/arch-wp.pdf page 21. Ian Referring to Page 20? That does show the drive order, just like it does on the box, but not how to map them from the kernel message to drive slot number. Lund

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jorgen Lundman
the first to try x4500 here as well. Anyway, it has almost rebooted, so I need to go remount everything. Lund

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jorgen Lundman
Jorgen Lundman wrote: Anyway, it has almost rebooted, so I need to go remount everything. Not that it wants to stay up for longer than ~20 mins, then hangs. In that all IO hangs, including nfsd. I thought this might have been related: http://sunsolve.sun.com/search/document.do?assetkey

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jorgen Lundman
like it, will push it to Sun. Although, we do have SunSolve logins, can we by-pass the middleman, and avoid the whole translation fiasco, and log directly with Sun? Lund

Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Jorgen Lundman
transfered zfs send. So, rsyncing smaller bits. zfs send -i only works if you have a full copy already, which we can't get from above.

Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Jorgen Lundman
will read version 2. I see no script talking about converting a version 2 to a version 1.

[zfs-discuss] Replacing the boot HDDs in x4500

2008-07-31 Thread Jorgen Lundman
filesystems if I were to simply drop in the two mirrored Sol 10 5/08 boot HDDs on the x4500 and reboot? I assume Sol10 5/08 zpool version would be newer, so in theory it would work. Comments?

Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Jorgen Lundman
hardware failures. thanks, Ross

Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Jorgen Lundman
maxsize reached 993770 (Increased it by nearly x10 and it still gets a high 'reached'). Lund Jorgen Lundman wrote: We are having slow performance with the UFS volumes on the x4500. They are slow even on the local server. Which makes me think it is (for once) not NFS related

[zfs-discuss] x4500 performance tuning.

2008-07-23 Thread Jorgen Lundman
... are there better values?) set ufs_ninode=259594 in /etc/system, and reboot. But it is costly to reboot based only on my guess. Do you have any other suggestions to explore? Will this help? Sincerely, Jorgen Lundman

Re: [zfs-discuss] x4500 performance tuning.

2008-07-23 Thread Jorgen Lundman
seconds to complete. Lund

Re: [zfs-discuss] x4500 panic report.

2008-07-11 Thread Jorgen Lundman
Jorgen Lundman wrote: On Saturday the X4500 system paniced, and rebooted. For some reason the /export/saba1 UFS partition was corrupt, and needed fsck. This is why it did not come back online. /export/saba1 is mounted logging,noatime, so fsck should never (-ish) be needed. SunOS x4500-01

[zfs-discuss] x4500 panic report.

2008-07-06 Thread Jorgen Lundman
() ff001e737c60 ufs:ufs_thread_idle+1a1 () ff001e737c70 unix:thread_start+8 () syncing file systems...

Re: [zfs-discuss] x4500 panic report.

2008-07-06 Thread Jorgen Lundman
that the user is out of space? Lund
