Whenever I do a root pool, i.e., configure a pool using the c?t?d?s0 notation, it
will always complain about overlapping slices, since *s2 is the entire disk.
This warning seems excessive, but -f will suppress it.
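For what it's worth, the warning comes from the overlap check treating *s2 (the conventional whole-disk slice) like any other slice. A toy sketch of that range check in Python, with hypothetical sector numbers (not the real libdiskmgt code):

```python
def overlaps(a, b):
    """True if two (start, end) sector ranges intersect."""
    return a[0] < b[1] and b[0] < a[1]

# Hypothetical slice table for a small disk, in sectors.
slices = {
    "s0": (0, 1000000),   # root slice
    "s2": (0, 2000000),   # backup slice: spans the entire disk
}

# s0 always intersects s2, hence the warning on every root pool.
print(overlaps(slices["s0"], slices["s2"]))  # → True
```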
As for the ZIL: the first time, I created a slice for it. This worked well; the
second
I have a server, with two external drive cages attached, on separate
controllers:
c0::dsk/c0t0d0 disk connected configured unknown
c0::dsk/c0t1d0 disk connected configured unknown
c0::dsk/c0t2d0 disk connected
Hello list,
I got a c7000 with BL465c G1 blades to play with and have been trying to get
some form of Solaris to work on it.
However, this is the state:
OpenSolaris 134: Installs with ZFS, but has no bnx NIC drivers.
OpenIndiana 147: Panics on zpool create every time, even from console. Has no
On my NAS I use Velitium: http://sourceforge.net/projects/velitium/ which goes
down to about 70MB at the smallest.
(2010/01/07 15:23), Frank Cusack wrote:
been searching and searching ...
--
Jorgen Lundman | lund...@lundman.net
Unix Administrator | +81 (0)3 -5456-2687 ext 1017
that with ZFS, the
number in ls which used to be for blocks actually reports the number of
entries in the directory (minus one).
drwxr-xr-x 13 root bin 13 Oct 28 02:58 spool
^^
# ls -la spool | wc -l
14
Which means you can probably add things up a little faster.
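A hedged sketch of what that buys you, in Python. Note that on most other filesystems a directory's link count only reflects subdirectories, so the count-without-reading trick is ZFS-specific; the portable fallback below actually reads the directory:

```python
import os
import tempfile

def entry_count(path):
    # Portable fallback: read the directory. On ZFS, the size/link
    # fields shown by ls reportedly track the entry count directly,
    # so you could skip the readdir() pass entirely.
    return len(os.listdir(path))

d = tempfile.mkdtemp()
for name in ("a", "b", "c"):
    open(os.path.join(d, name), "w").close()
print(entry_count(d))  # → 3
```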
at least have a solution, even if it is rather unattractive. 12 servers,
and has to be done at 2am means I will be testy for a while.
Lund
Jorgen Lundman wrote:
Interesting. Unfortunately, I can not zpool offline, nor zpool
detach, nor zpool remove the existing c6t4d0s0 device.
I thought
'no quota'?
Both systems are nearly fully patched.
Any help is appreciated. Thanks in advance.
Willi
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
=6574286
[*2]
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6739497
(long battle there;
no support for OpenSolaris from anyone in Japan).
Can I delete the sucker using zdb?
Thanks for any reply,
, and did not need any changes.
, that is profit to us)
Is the space saved with dedup charged in the same manner? I would expect so, I
figured some of you would just know. I will check when b128 is out.
I don't suppose I can change the model? :)
Lund
...@1029 54.0M local
Any suggestions would be most welcome,
Lund
Any known issues for the new ZFS on solaris 10 update 8?
Or is it still wiser to wait before doing a zpool upgrade, since older ABEs
can no longer be accessed afterwards?
bootable
Solaris. Very flexible, and you can add the Admin GUIs, and so on.
https://sourceforge.net/projects/embeddedsolaris/
Lund
x4540, and calling NetApp. I
would rather not (more work for me).
I understand Sun is probably experiencing some internal turmoil at the moment,
but it has been rather frustrating for us.
Lund
releases of Solaris.
Thanks
Lund
and works very well in
Solaris. (Package SUNWmv88sx).
Lund
Nope, that it does not.
Ian Collins wrote:
Jorgen Lundman wrote:
Finally came to the reboot maintenance to reboot the x4540 to make it
see the newly replaced HDD.
I tried, reboot, then power-cycle, and reboot -- -r,
but I can not make the x4540 accept any HDD in that bay. I'm starting
0 0
c5t4d0 ONLINE 0 0 0
c5t7d0 ONLINE 0 0 0
the third-party parts is that the involved support
organizations for the software/hardware will make it very clear that
such a configuration is quite unsupported. That said, we've had pretty
good luck with them.
-Greg
would rather not system(zfs) hack it.
Lund
Ross wrote:
Hi Jorgen,
Does that software work to stream media to an xbox 360? If so could I have a
play with it? It sounds ideal for my home server.
cheers,
Ross
);
if (spawn) lion_set_handler(spawn, root_zfs_handler);
# zfs set net.lundman:sharellink=on zpool1/media
# ./llink -d -v 32
./llink - Jorgen Lundman v2.2.1 lund...@shinken.interq.or.jp build 1451
(Tue Aug 18 14:02:44 2009) (libdvdnav).
: looking for ZFS filesystems
: [root] recognising
the impression that the API is
flexible. The ultimate goal is to move away from static paths listed in
the config file.
Lund
failed
I am fairly certain that if I reboot, it will all come back ok again.
But I would like to believe that I should be able to replace a disk
without rebooting on a X4540.
Any other commands I should try?
Lund
. I never thought
about using it with a motherboard inside.
Could you provide a complete parts list?
What sort of temperatures at the chip, chipset, and drives did you find?
Thanks!
...@6,0:a,raw
Perhaps because it was booted with the dead disk in place, it never
configured the entire sd5 mpt driver. Why the other hard-disks work I
don't know.
I suspect the only way to fix this, is to reboot again.
Lund
Jorgen Lundman wrote:
x4540 snv_117
We lost a HDD last night
appreciate you're probably more concerned with getting an answer to your
question, but if ZFS needs a reboot to cope with failures on even an x4540,
that's an absolute deal breaker for everything we want to do with ZFS.
Ross
to enable it either).
Jorgen Lundman wrote:
Ok I have redone the initial tests as 4G instead. Graphs are on the same
place.
http://lundman.net/wiki/index.php/Lraid5_iozone
I also mounted it over NFSv3 and ran more iozone. Alas, I
started with 100mbit, so it has taken quite a while
Some preliminary speed tests, not too bad for a pci32 card.
http://lundman.net/wiki/index.php/Lraid5_iozone
Jorgen Lundman wrote:
Finding a SATA card that would work with Solaris, and be hot-swap, and
more than 4 ports, sure took a while. Oh and be reasonably priced ;)
Double the price
. ;)
Jorgen Lundman wrote:
I was following Toms Hardware on how they test NAS units. I have 2GB
memory, so I will re-run the test at 4, if I figure out which option
that is.
I used Excel for the graphs in this case; gnuplot did not want to work.
(Nor did Excel, mind you.)
Bob Friesenhahn wrote
. Especially since SOHO NAS devices
seem to start around 80,000.
Anyway, sure has been fun.
Lund
.
Peculiar.
Lund
can live
together and put /var in the data pool. That way we would not need to
rebuild the data-pool and all the work that comes with that.
Shame I can't zpool replace to a smaller disk (500GB HDD to 32GB SSD)
though, I will have to lucreate and reboot one time.
Lund
for it, as I doubt it'll
stay standing after the next earthquake. :)
Lund
Jorgen Lundman wrote:
This thread started over in nfs-discuss, as it appeared to be an nfs
problem initially. Or at the very least, interaction between nfs and zil.
Just summarising speeds we have found when untarring
as to whether the application did or not?
This I have not yet wrapped my head around.
For example, I know rsync and tar does not use fdsync (but dovecot does)
on its close(), but does NFS make it fdsync anyway?
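For reference, the application-side half of that question looks like this; a minimal Python sketch (os.fsync wraps fsync(3C), which is what "fdsync on close" amounts to):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mail.tmp")

fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"message body\n")
os.fsync(fd)   # force the data to stable storage (hits the ZIL on ZFS);
               # tar/rsync skip this, dovecot does it before close()
os.close(fd)

print(open(path, "rb").read())  # → b'message body\n'
```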
Sorry for the giant email.
and 5097228.
Ah of course, you have a valid point, and mirrors can be used in much
more complicated situations.
Been reading your blog all day, while impatiently waiting for zfs-crypto..
Lund
, like official Sol 10
10/08. (I am not sure, but zfs send sounds like you already need the
2nd server set up and running with IPs etc? )
Anyway, we have found a procedure now, so it is all possible. But it
would have been nicer to be able to detach the disk politely ;)
Lund
out the disk, either.
Jorgen Lundman wrote:
Hello list,
Before we started changing to ZFS bootfs, we used DiskSuite mirrored ufs
boot.
Very often, if we needed to grow a cluster by another machine or two, we
would simply clone a running live server. Generally the procedure for
this would
Jorgen Lundman wrote:
However, zpool detach appears to mark the disk as blank, so nothing
will find any pools (import, import -D etc). zdb -l will show labels,
For kicks, I tried to demonstrate this does indeed happen, so I dd'ed
the first 1024 1k blocks from the disk, zpool detach
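That matches the on-disk label layout: ZFS keeps four 256 KiB labels per vdev, two at the front and two at the end of the device, so dd'ing the first 1024 1 KiB blocks (1 MiB) only touches L0 and L1 while L2/L3 survive at the tail. A quick sketch of the offsets, assuming a standard layout:

```python
LABEL_SIZE = 256 * 1024  # each of the four vdev labels is 256 KiB

def label_offsets(devsize):
    """Byte offsets of vdev labels L0..L3 for a device of devsize bytes."""
    return [0, LABEL_SIZE, devsize - 2 * LABEL_SIZE, devsize - LABEL_SIZE]

devsize = 500 * 10**9          # hypothetical 500 GB disk
copied = 1024 * 1024           # dd bs=1k count=1024 reads the first 1 MiB
inside = [off < copied for off in label_offsets(devsize)]
print(inside)  # → [True, True, False, False]
```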
?
Thanks,
Matt
a rogue 9 appeared in your output.
It was just a standard run of 3,000 files.
1m20.28s
Doing second 'cpio -C 131072 -o > /dev/null'
48000256 blocks
real 7m25.34s
user 0m6.63s
sys  1m32.04s
Feel free to clean up with 'zfs destroy zboot/zfscachetest'.
for
desktops :( They are cheap though! Nothing like being the Wal-Mart of Storage!
That is how the pools were created as well. Admittedly it may be down to
our Vendor again.
Lund
uses zfs send, which would be possible,
but 4 minutes is hard to beat when your cluster is under heavy load.
Lund
x4540 running svn117
# ./zfs-cache-test.ksh zpool1
zfs create zpool1/zfscachetest
creating data file set 93000 files of 8192000 bytes0 under
/zpool1/zfscachetest ...
done1
zfs unmount zpool1/zfscachetest
zfs mount zpool1/zfscachetest
doing initial (unmount/mount) 'cpio -o > /dev/null'
of trouble hacking this
together (the current source doesn't compile in isolation on my S10
machine).
yet to experience any
problems. But b117 is what 2010/02 version will be based on, so perhaps
that is a better choice. Other versions worth considering?
I know it's a bit vague, but perhaps there is a known panic in a certain
version that I may not be aware of.
Lund
) but I have not
personally tried it.
Lund
a whole
load of ZFS data. Has someone already been down this road too?
good at.
Lund
is automatic. But of
course, at the same time, it is MY data, so I'd rather it was using ZFS
and so on.
The Thecus, and QNAP, raids both use Intel chipsets. I am curious if I
picked up an empty box; 2nd hand for next-to-nothing, if I couldn't
re-flash it with osol, or eon, or freenas.
I changed to try zfs send on a UFS on zvolume as well:
received 92.9GB stream in 2354 seconds (40.4MB/sec)
Still fast enough to use. I have yet to get around to trying something
considerably larger in size.
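As a sanity check on those figures (assuming zfs send reports GB and MB in binary units, an assumption on my part):

```python
def stream_rate(gib, seconds):
    """MiB/s for a stream whose size was reported in (binary) GB."""
    return gib * 1024 / seconds

print(round(stream_rate(92.9, 2354), 1))  # → 40.4 (the UFS-on-zvol run)
print(round(stream_rate(82.3, 1195), 1))  # → 70.5 (the b114 run quoted elsewhere in the thread)
```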
Lund
Jorgen Lundman wrote:
So you recommend I also do speed test on larger
,
what is the size of the sending zfs?
I thought replication speed depends on the size of the sending fs, too
not only size of the snapshot being sent.
Regards
Dirk
--On Freitag, Mai 22, 2009 19:19:34 +0900 Jorgen Lundman
lund...@gmo.jp wrote:
Sorry, yes. It is straight;
# time zfs send
PM, Jorgen Lundman lund...@gmo.jp wrote:
To finally close my quest. I tested zfs send in osol-b114 version:
received 82.3GB stream in 1195 seconds (70.5MB/sec)
Yeeaahh!
That makes it completely usable! Just need to change our support contract to
allow us to run b114 and we're set! :)
Thanks
And alas, grow is completely gone, and no amount of import would see
it. Oh well.
Rob Logan wrote:
you meant to type
zpool import -d /var/tmp grow
Bah - of course, I can not just expect zpool to know what random
directory to search.
You Sir, are a genius.
Works like a charm, and thank you.
Lund
To finally close my quest. I tested zfs send in osol-b114 version:
received 82.3GB stream in 1195 seconds (70.5MB/sec)
Yeeaahh!
That makes it completely usable! Just need to change our support
contract to allow us to run b114 and we're set! :)
Thanks,
Lund
Jorgen Lundman wrote:
We
not implemented, not a problem for us.
The Perl CPAN module Quota does not implement ZFS quotas. :)
. Perhaps something to do with the fact that
mount doesn't think it is mounted with quota when local.
I could try mountpoint=legacy and explicitly list rq when mounting, maybe.
But we don't need it to work, it was just different from legacy
behaviour. :)
Lund
?
If not, I could potentially use zfs ioctls perhaps to write my own bulk
import program? Large imports are rare, but I was just curious if there
was a better way to issue large amounts of zfs set commands.
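Short of an ioctl-level tool, the usual fallback is just to batch the zfs set invocations; a hedged sketch that only builds the command lines (the dataset names and property here are made up for illustration):

```python
def bulk_set_commands(prop, value, datasets):
    # One zfs(1M) invocation per dataset; running these (via a shell
    # pipe or subprocess) still pays a fork/exec per property, which
    # is exactly the overhead a bulk-import interface would avoid.
    return [["zfs", "set", "%s=%s" % (prop, value), ds] for ds in datasets]

for argv in bulk_set_commands("userquota@u1001", "50m",
                              ["zpool1/home", "zpool1/mail"]):  # hypothetical
    print(" ".join(argv))
```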
Jorgen Lundman wrote:
Matthew Ahrens wrote:
Thanks for the feedback!
Thank you
I tried LUpdate 3 times with same result, burnt the ISO and installed
the old fashioned way, and it boots fine.
Jorgen Lundman wrote:
Most annoying. If su.static really had been static I would be able to
figure out what goes wrong.
When I boot into miniroot/failsafe it works just fine
Jorgen Lundman wrote:
The website has not been updated yet to reflect its availability (thus
it may not be official yet), but you can get SXCE b114 now from
https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/viewproductdetail-start?productref=sol-express_b114-full-x86
is compiling osol compared to,
say, NetBSD/FreeBSD, Linux etc ? (IRIX and its quickstarting??)
be *MUCH* faster.
to support quotas for ZFS
JL send, but consider a rescan to be the answer. We don't ZFS send very
JL often as it is far too slow.
Since build 105 it should be *MUCH* faster.
.
This I did not know, but now that you point it out, this would be the
right way to design it. So the advantage of requiring less ZFS
integration is no longer the case.
Lund
in ufs filesystem on zvol when writing ufs log
ZFS send very
often as it is far too slow.
Lund
with checks for being blacklisted.
Disadvantages are that it is loss of precision, and possibly slower
rescans? Sanity?
But I do not really know the internals of ZFS, so I might be completely
wrong, and everyone is laughing already.
Discuss?
Lund
shipped with 16. And I'm sorry but 16 didn't cut it at all :) We
set it at 1024 as it was the highest number I found via Google.
Lund
I've been told we got a BugID:
3-way deadlock happens in ufs filesystem on zvol when writing ufs log
but I can not view the BugID yet (presumably due to my account's weak
credentials).
Perhaps it isn't something we do wrong, that would be a nice change.
Lund
Jorgen Lundman wrote:
I assume
been having similar
troubles to yours in the past.
My system is pretty puny next to yours, but it's been reliable now for
slightly over a month.
On Tue, Jan 27, 2009 at 12:19 AM, Jorgen Lundman lund...@gmo.jp wrote:
The vendor wanted to come in and replace an HDD in the 2nd X4500
behaves
like it. Not sure why it would block zpool, zfs and df commands as
well though?
Lund
, are there methods in AVS to handle fail-back? Since 02 has
been used, it will have newer/modified files, and will need to replicate
backwards until synchronised, before fail-back can occur.
We did ask our vendor, but we were just told that AVS does not support
x4500.
Lund
zpool status.
Going to get some sleep, and really hope it has been fixed. Thank you to
everyone who helped.
Lund
Jorgen Lundman wrote:
Jorgen Lundman wrote:
Anyway, it has almost rebooted, so I need to go remount everything.
Not that it wants to stay up for longer than ~20 mins, then hangs
done again.
Lund
PROTECTED]/pci11ab,[EMAIL
PROTECTED]/[EMAIL PROTECTED],0 (sd30):
And I need to get the answer 40. The hd output additionally gives me
sdar ?
Lund
--
Jorgen Lundman | [EMAIL PROTECTED]
Unix Administrator | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500
See http://www.sun.com/servers/x64/x4500/arch-wp.pdf page 21.
Ian
Referring to Page 20? That does show the drive order, just like it does
on the box, but not how to map them from the kernel message to drive
slot number.
Lund
the first to try x4500 here as well.
Anyway, it has almost rebooted, so I need to go remount everything.
Lund
Jorgen Lundman wrote:
Anyway, it has almost rebooted, so I need to go remount everything.
Not that it wants to stay up for longer than ~20 mins, then hangs. In
that all IO hangs, including nfsd.
I thought this might have been related:
http://sunsolve.sun.com/search/document.do?assetkey
like it, will push it to Sun. Although,
we do have SunSolve logins, can we by-pass the middleman, and avoid the
whole translation fiasco, and log directly with Sun?
Lund
--
Jorgen Lundman | [EMAIL PROTECTED]
Unix Administrator | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo
transferred zfs send. So, rsyncing smaller bits.
zfs send -i only works if you have a full copy already, which we can't
get from above.
will read version 2. I see no
script talking about converting a version 2 to a version 1.
filesystems if I were
to simply drop in the two mirrored Sol 10 5/08 boot HDDs on the x4500
and reboot? I assume Sol10 5/08 zpool version would be newer, so in
theory it would work.
Comments?
hardware failures.
thanks,
Ross
maxsize reached 993770
(Increased it by nearly x10 and it still gets a high 'reached').
Lund
Jorgen Lundman wrote:
We are having slow performance with the UFS volumes on the x4500. They
are slow even on the local server. Which makes me think it is (for once)
not NFS related
... are there better values?)
set ufs_ninode=259594
in /etc/system, and reboot. But it is costly to reboot based only on my
guess. Do you have any other suggestions to explore? Will this help?
Sincerely,
Jorgen Lundman
seconds to complete.
Lund
Jorgen Lundman wrote:
On Saturday the X4500 system paniced, and rebooted. For some reason the
/export/saba1 UFS partition was corrupt, and needed fsck. This is why
it did not come back online. /export/saba1 is mounted logging,noatime,
so fsck should never (-ish) be needed.
SunOS x4500-01
()
ff001e737c60 ufs:ufs_thread_idle+1a1 ()
ff001e737c70 unix:thread_start+8 ()
syncing file systems...
that the user is out
of space?
Lund