You can try to build it first. It should be able to install the
packages on SXCE, since the build process creates SVR4 (SysV) packages
in addition to the packages in the IPS repository.
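As a rough sketch of the two install paths (the output directory, repository URL, and package names below are assumptions for illustration, not what the build necessarily produces):

```shell
# On SXCE (SVR4 packaging): install the generated packages from the
# build's output directory. Directory and package names are hypothetical.
cd /path/to/build/output/packages
pkgadd -d . SUNWscr SUNWsczu     # hypothetical package names

# On OpenSolaris (IPS packaging): register the build's repository as a
# publisher and install from it. URL and publisher name are hypothetical.
pkg set-publisher -g http://localhost:10000/ ha-cluster
pkg install ha-cluster-full      # hypothetical package name
```

Either way, check the build logs to see which package names it actually emitted before installing.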

On Mon, Jul 6, 2009 at 5:20 PM, Eric Bautsch<Eric.Bautsch at sun.com> wrote:
> Yes, but that requires me to move to OpenSolaris....
>
> Eric
>
>
> Piotr Jasiukajtis wrote:
>
> Yes, you should use Colorado Cluster in this setup:
> http://opensolaris.org/os/community/ha-clusters/announcements/#2009-06-01_Open_HA_Cluster_2009_06_available
>
> On Mon, Jul 6, 2009 at 4:50 PM, Eric Bautsch<Eric.Bautsch at sun.com> wrote:
>
>
> I've tried build 111a which is the lowest build higher than 110 that I have
> hanging around.
> Unfortunately, there appear to be too many changes between build 101a and
> 111a for clusterexpress to function:
> v3.1.4-xvm chgset 'Mon Jun 22 23:27:07 2009 -0700 15914:89e9886c8ad7'
> SunOS Release 5.11 Version snv_111a 64-bit
> Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
> Use is subject to license terms.
> Hostname: cressida
> Configuring devices.
> NIS domain name is cincin
> /usr/cluster/bin/scdidadm:  Could not load DID instance list.
> /usr/cluster/bin/scdidadm:  Cannot open
> /etc/cluster/ccr/global/did_instances.
> Booting as part of a cluster
> name is non-existent for this module
> for a list of valid names, use name '?'
> NOTICE: CMM: Node cressida (nodeid = 1) with votecount = 1 added.
> NOTICE: CMM: Node cressida: attempting to join cluster.
> NOTICE: CMM: Cluster has reached quorum.
> NOTICE: CMM: Node cressida (nodeid = 1) is up; new incarnation number =
> 1246654239.
> NOTICE: CMM: Cluster members: cressida.
> NOTICE: CMM: node reconfiguration #1 completed.
> NOTICE: CMM: Node cressida: joined cluster.
>
> panic[cpu0]/thread=ffffff014bd05580: BAD TRAP: type=e (#pf Page fault)
> rp=ffffff0004d27c10 addr=0 occurred in module "sockfs" due to a NULL
> pointer dereference
>
> clconfig: #pf Page fault
> Bad kernel fault at addr=0x0
> pid=502, pc=0xfffffffff79dc9fb, sp=0xffffff0004d27d00, eflags=0x10246
> cr0: 8005003b<pg,wp,ne,et,ts,mp,pe> cr4: 660<xmme,fxsr,mce,pae>
> cr2: 0
>         rdi:                2 rsi:                2 rdx:                0
>         rcx:                0  r8:                0  r9: ffffff0152193ee6
>         rax:                0 rbx:                2 rbp: ffffff0004d27d50
>         r10:                a r11: ffffff0004d278a0 r12: ffffff014b1a8008
>         r13:                2 r14:                0 r15:                0
>         fsb:                0 gsb: fffffffffbc61930  ds:               4b
>          es:               4b  fs:                0  gs:              1c3
>         trp:                e err:                2 rip: fffffffff79dc9fb
>          cs:             e030 rfl:            10246 rsp: ffffff0004d27d00
>          ss:             e02b
>
> ffffff0004d27af0 unix:die+dd ()
> ffffff0004d27c00 unix:trap+1768 ()
> ffffff0004d27c10 unix:cmntrap+12f ()
> ffffff0004d27d50 sockfs:solookup+33 ()
> ffffff0004d27dc0 cl_comm:__1cGipconfOinitialize_int6M_i_+5a ()
> ffffff0004d27de0 cl_comm:__1cGipconfKinitialize6F_i_+29 ()
> ffffff0004d27e10 cl_comm:__1cDORBKinitialize6F_i_+585 ()
> ffffff0004d27e20 cl_comm:cl_orb_initialize+9 ()
> ffffff0004d27e70 cl_load:__1cIclconfig6Fipv_i_+167 ()
> ffffff0004d27e80 cl_comm:cladmin+13 ()
> ffffff0004d27ec0 genunix:cladm+a7 ()
> ffffff0004d27f10 unix:brand_sys_syscall32+1d0 ()
>
> syncing file systems... done
> dumping to /dev/zvol/dsk/rootdisk/dump, offset 65536, content: kernel
> 100% done: 77277 pages dumped, compression ratio 3.79, dump succeeded
> rebooting...
>
> Eric
>
>
>
> Piotr Jasiukajtis wrote:
>
> Hi,
>
> Ok, I am going to test that in the near future. Has anyone created a
> similar setup already?
>
> On Wed, Jul 1, 2009 at 9:43 PM, Ashutosh
> Tripathi<Ashutosh.Tripathi at sun.com> wrote:
>
>
> Hi Piotr, Eric, David, All,
>
>        If I knew for sure, I would have mentioned that.
>
> The best I can do is to suggest that we retry this setup with
> nv110 or later. The Solaris VLAN tagging fix in b110 (CR 6797256)
> is likely to have improved the situation.
>
>        David has already offered to try this out; I hope he can
> accommodate this.
>
>        As for the OHAC build which is known to work with b110 or later,
> I don't know what to suggest there; perhaps someone else on the alias does.
>
> Regards,
> -ashu
>
>
> Piotr Jasiukajtis wrote:
>
>
> Hi Eric,
>
> Related threads:
> http://www.opensolaris.org/jive/thread.jspa?threadID=88314&tstart=150
> http://www.opensolaris.org/jive/thread.jspa?threadID=89150&tstart=135
>
> Ask Ashutosh for more details. The problem is not fixed yet..
>
> On Mon, Jun 29, 2009 at 11:13 PM, Eric Bautsch<Eric.Bautsch at sun.com>
> wrote:
>
>
> Thanks for finding this.
>
> Since I don't follow ha-clusters-discuss, and since I don't know what to
> look for in the archives there, can someone give me a hint as to what the
> issue is?
>
> Thanks again.
> Eric
>
>
> Piotr Jasiukajtis wrote:
>
> Hi Ashu,
>
> Indeed, I think it's the same issue.
>
> Looks like I will give it a try with Colorado.
>
> On Mon, Jun 29, 2009 at 8:05 PM, Ashutosh
> Tripathi<Ashutosh.Tripathi at sun.com> wrote:
>
>
> Hi All,
>
>       Cross-posting to ha-cluster-discuss. This sounds very
> similar to something which was already discussed on that alias.
>
> Regards,
> -ashu
>
>
> Eric Bautsch wrote:
>
>
> Hi David.
>
> Here you go:
> root at phobos # dladm show-vnic
> LINK         OVER         SPEED  MACADDRESS        MACADDRTYPE         VID
> xvm1_0       net0         100    0:16:3e:11:11:11  fixed               0
> xvm1_1       net0         100    0:16:3e:c:c0:74   fixed               403
> xvm1_11      net0         100    0:16:3e:33:6c:4e  fixed               150
> xvm1_10      net0         100    0:16:3e:4f:98:bc  fixed               40
> xvm1_2       net0         100    0:16:3e:c:4a:a6   fixed               404
> xvm1_3       net0         100    0:16:3e:2f:d0:1f  fixed               0
> xvm1_7       net0         100    0:16:3e:45:1e:e5  fixed               0
> xvm1_9       net0         100    0:16:3e:6d:9c:9e  fixed               33
> xvm1_5       net0         100    0:16:3e:3b:6e:1e  fixed               33
> xvm1_6       net0         100    0:16:3e:15:2:3    fixed               40
> xvm1_4       net0         100    0:16:3e:6:d8:ed   fixed               7
> xvm1_8       net0         100    0:16:3e:1b:ee:67  fixed               7
> root at phobos #
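As an aside, one way to pick out which of the VNICs above actually carry a VLAN tag is to filter on the last column of the dladm output. A minimal sketch (the sample is a trimmed copy of the paste above, with the header row removed, so the column positions are assumed to match):

```shell
# Trimmed sample of "dladm show-vnic" output (header row removed).
dladm_output='xvm1_0       net0         100    0:16:3e:11:11:11  fixed  0
xvm1_1       net0         100    0:16:3e:c:c0:74   fixed  403
xvm1_2       net0         100    0:16:3e:c:4a:a6   fixed  404'

# Print LINK and VID for every VNIC whose VID (the last field) is non-zero,
# i.e. every VNIC that is VLAN-tagged.
echo "$dladm_output" | awk '$NF != 0 { print $1, $NF }'
```

On the sample above this prints `xvm1_1 403` and `xvm1_2 404`, the two tagged VNICs.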
>
> Note that the host phobos is running build 117. I'm using build 101a as
> the OS for the cluster nodes (the DomUs) because that's the latest version
> for which you can get clusterexpress. If I want something later, I need to
> use OpenSolaris instead....
>
> Eric
>
>
> David Edmondson wrote:
>
>
> * Eric.Bautsch at Sun.COM [2009-06-28 23:18:33]
>
>
>
> Yet another stupid question: I'm trying to build two paravirtualised
> Solaris Nevada (build 101a in this case) nodes on two different
> physical hosts.
> I then want to create a cluster between the two virtual nodes. From
> the following error message, I assume this isn't going to work:
> Jun 28 23:07:34 cressida.swangage.co.uk genunix: [ID 852664
> kern.warning] WARNING: clcomm: failed to load driver module v404net -
> v404net paths will not come up.
>
> Note that v404net0 is one of the xnf interfaces which was originally
> created thus in virt-install:
> --network='bridge=net0,vlanid=404'
> (net0 is dladm create-aggr -P L2,L3 -l e1000g0 -l e1000g2 net0 )
>
> and then re-named in the domU with:
> dladm rename-link xnf2 v404net0
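Collected in order, the host- and guest-side steps described above look roughly like this (a sketch assembled from the commands quoted in this thread; the `...` stands for the other virt-install options, which were not shown):

```shell
# Dom0: aggregate the two physical NICs into net0 (L2,L3 hashing policy).
dladm create-aggr -P L2,L3 -l e1000g0 -l e1000g2 net0

# Dom0: install the guest with a network interface bridged over net0 and
# tagged with VLAN 404. (Remaining virt-install options omitted.)
virt-install ... --network='bridge=net0,vlanid=404'

# DomU: rename the resulting xnf interface so the cluster software sees
# it as v404net0.
dladm rename-link xnf2 v404net0
```
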
>
> Are we planning on making this work?
>
>
>
> I'm not sure that the xVM bits allowed you to specify the VLAN tag in
> build 101a. Please send the output of "dladm show-vnic" when the guest
> is running.
>
> dme.
>
>
>
> --
>
> Eric A. Bautsch
> BT Annuity Service
> Service Support Manager
>
> Email:       eric.bautsch at sun.com
> Telephone:   07710 495920
>
> Team Mail:   ann-service-support at sun.com
> Team Web:    http://btannuity.uk/servicesupport/
>
>
>
> _______________________________________________
> ha-clusters-discuss mailing list
> ha-clusters-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/ha-clusters-discuss
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>



-- 
Piotr Jasiukajtis | estibi | SCA OS0072
http://estseg.blogspot.com
