Re: [zones-discuss] zone hung in shutting_down status
Jerry, Usually zoneadm -z zonename unmount -f works for me, but not this time, so I did the following: I ran these two commands and got out of that state: 1) zoneadm -z zonename reboot -- -s 2) pkill -9 -z zonename (from a second terminal) The zone moved into the Ready state and I was able to bring it up again. Ihsan -- This message posted from opensolaris.org ___ zones-discuss mailing list zones-discuss@opensolaris.org
Re: [zones-discuss] non global zone memory allocation enquiry
Use rcapstat Sent from my iPhone On Aug 22, 2008, at 10:15 AM, Gauss Tang - Sun Microsystems [EMAIL PROTECTED] wrote: Dear Joseph, Thanks for your help. prstat -Z cannot tell us the value of capped-memory. :) Joseph Maina wrote: Try: prstat -Z -Joe On 08/21/08 01:29, Gauss Tang - Sun Microsystems wrote: Dear Expert, We can check the zone memory allocation via the command zonecfg -z zonename info capped-memory: physical: 256M But how do we check this info after logging in to the zone? Thanks in advance. Regards, Gauss
Re: [zones-discuss] Solaris 8 Container with Sun Cluster
YES, according to the latest release notes for Solaris Cluster 2/08. Check the following link: http://wikis.sun.com/display/SunCluster/Sun+Cluster+3.2+2-08+Release+Notes#SunCluster3.22-08ReleaseNotes-GCVSS New Features and Functionality: This section describes each of the following new features provided in the Sun Cluster 3.2 2/08 software: Sun Service Tags Support; Support for Using NAS Devices From Sun Microsystems as Shared Storage and Quorum Devices; Support for EMC Symmetrix Remote Data Facility (SRDF); HA-Containers Support for Solaris Zones Brands; Editable Properties in Sun Cluster Manager; New Sun Cluster Upgrade Guide; Quorum Server Documentation Moved to the Installation and System Administration Guides. HA-Containers Support for Solaris Zones Brands: In this release, the Sun Cluster Data Service for Solaris Containers supports the following branded zones: SPARC: native, solaris8; x86: native, lx. HTH Ihsan Martin Paulucci wrote: Ethan, Guys, To clarify, my question is: does Sun Cluster 3.2 support having a Solaris 8 branded zone in an RG node list? Regards, Martin.
Re: [zones-discuss] dhcp/zone
James Carlson wrote: I think you're confusing two things. One is the conversion of 'ce' to GLDv3. That might or might not happen -- I know of at least one person who is quite interested in it, but I don't know whether the work will get done. Given the number of ... special ... features in that driver, it'd be an interesting job to say the least. There is an interesting support issue for trunking and/or aggregation that needs to be addressed for GLDv3 support of ce and ge. Working with zones and containers is primarily a consolidation effort that will demand network bandwidth. We have been using trunking and IPMP to lay out the best network infrastructure for performance and availability. Solaris Trunking 1.3 does support ce and ge, unlike Solaris 10 aggregation (dladm). Since dladm is the focus for future network administration, with aggregation as a sub-function of it among all the other features built in to support IP instances, isn't it prudent, for strategic reasons, to push for GLDv3 support of ce and ge? The other is the support of the ioctls necessary for assignment of links to a zone, which is needed to support the exclusive IP stack feature, and the tweaks to zoneadmd to make it work. This *HAS* been done and is on a path for release. In S10 currently, the only framework that supports the feature is GLDv3, which is why the exclusive IP stack is tied to GLDv3 support. But non-GLDv3 drivers can also be modified (with some difficulty) to support the feature, and that's what's being done for 'ce.'
Re: [zones-discuss] Solaris Containers Replication
Sengor, This has been very well discussed at http://www.opensolaris.org/jive/thread.jspa?messageID=173929#173929 On the other hand, some patches are sun4v-only and are not addressed to support sun4u. Moreover, package differences might put you into unseen problems and behaviors. In other words, not fully supported yet! Nevertheless, there is an RFE in the works and it's being incorporated in nv. This is an extremely important milestone. 6576592 RFE: zoneadm detach/attach should work between sun4u and sun4v architecture So far what I have heard is that OS engineers are currently working on getting this done for Nevada. Once it is integrated there, a call will be made to see what S10 update it should be part of. No guarantees it would be S10u5 since the code isn't even integrated into Nevada yet. Ihsan I Sengor wrote: Hi, Out of curiosity, would this work when moving zones between different platforms? For example, move/copy a zone from a sun4u system to a sun4v system. On 12/1/07, Ihsan Zaghmouth [EMAIL PROTECTED] wrote: Hi Paul, Clone them locally. Detach the cloned zones (zoneadm -z zonename detach). Tar or pax them over starting from the zonepath and remote-copy the tarballs, or simply export/import the SAN DG hosting the zones. Prepare them for attachment: zonecfg -z zonename and at the prompt, create -a target-zonepath. Attach the untarred zones on the target (zoneadm -z zonename attach). Boot them. Make sure when you do all of the above that source and target servers are up to the same patch levels and all other resources (network, SAN FS(s)) are the same as on the source, otherwise you would be in for reconciliation trouble. Ihsan
Re: [zones-discuss] Solaris Containers Replication
Hi Paul, 1. Clone them locally. 2. Detach the cloned zones (zoneadm -z zonename detach). 3. Tar or pax them over starting from the zonepath and remote-copy the tarballs, or simply export/import the SAN DG hosting the zones. 4. Prepare them for attachment: zonecfg -z zonename and at the prompt, create -a target-zonepath. 5. Attach the untarred zones on the target (zoneadm -z zonename attach). 6. Boot them. Make sure when you do all of the above that source and target servers are up to the same patch levels and all other resources (network, SAN FS(s)) are the same as on the source, otherwise you would be in for reconciliation trouble. Ihsan Paul F Mazzola wrote: Need a recommendation to determine the best way to replicate Solaris Containers. Here is the situation: (1) We built 4 containers for a customer on 2 different physical hosts. Root file systems are shared. (2) The customer installed all their applications, packages, etc. Now the customer wants us to replicate those 4 containers exactly as they are now, so that on the new containers they need not install the apps/packages again. What is the best way to handle this case and replicate the containers within the same or different physical hosts?
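The six steps above can be sketched as a small script. Everything here is a hypothetical placeholder (zone "myzone" under /zones, target host "targethost"), and RUN defaults to echo so it is a dry run that only prints the commands it would issue; this is a sketch of the flow, not a hardened migration tool:

```shell
#!/bin/sh
# Dry-run sketch of the detach/tar/copy/attach flow described above.
# ZONE, ZPATH and TARGET are hypothetical placeholders; clear RUN to
# actually execute the commands instead of printing them.
RUN=${RUN:-echo}

migrate_zone() {          # usage: migrate_zone zonename zonepath targethost
  ZONE=$1 ZPATH=$2 TARGET=$3
  $RUN zoneadm -z "$ZONE" halt                    # make sure the zone is down
  $RUN zoneadm -z "$ZONE" detach                  # step 2: detach
  # step 3: archive starting from the zonepath, then copy it over
  $RUN tar cf "/tmp/$ZONE.tar" -C "$(dirname "$ZPATH")" "$(basename "$ZPATH")"
  $RUN scp "/tmp/$ZONE.tar" "$TARGET:/tmp/"
  # On the target, after untarring under the same zonepath:
  $RUN zonecfg -z "$ZONE" "create -a $ZPATH"      # step 4: prepare for attachment
  $RUN zoneadm -z "$ZONE" attach                  # step 5: attach
  $RUN zoneadm -z "$ZONE" boot                    # step 6: boot
}

migrate_zone myzone /zones/myzone targethost
```

As the message stresses, the attach step assumes source and target are at the same patch levels; on a real pair of hosts the scp/untar would of course run on their respective sides rather than from one script.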
Re: [zones-discuss] Zone Migration - Sun4u to Sun4v - Supported?
Jerry, How is "update on attach" planned to be deployed: as a patch update, or in the next release, Solaris 10 U5? Does this address the patch management of a migrated zone that is in the "configured" state and absent from the node intended to be patched? In other words, if the patch manager on the local node and "update on attach" on the target node work hand in hand, SAN would be the preferred zonepath for zones, so we may get away from the sprawl of zones and standbys we have to deploy today to host applications on different nodes. We have been waiting for this important feature to architect an efficient, simple-to-administer environment for hosting applications in zones. Please advise. Ihsan Jerry Jelinek wrote: Mike Gerdts wrote: On Nov 8, 2007 6:40 AM, LaoTsao(Dr. Tsao) [EMAIL PROTECTED] wrote: hi it seems these issues will be solved by a brandZ for s10, nv etc, no? Is that in the works? I've seen an example of how to do it on your own but I haven't heard of any supported variant. The key gotcha with this approach seems to be the sanity check to be sure that the kernel and SUNWzone* from the global zone remain compatible with the bits in the non-global zone(s). I would like to see this functionality, but as I was trying to articulate why in a thread (I think regarding "update on attach") I was having a hard time. I really think that "update on attach" would also solve the issue at hand, now that I think about it. http://www.opensolaris.org/os/community/arc/caselog/2007/621/ Yes, "update on attach", which is out for code review now, will address this. If by "brandZ for s10, nv" you mean running a solaris10 brand on nv, there is nobody working on that that I know of. Jerry
Re: [zones-discuss] Zone Migration - Sun4u to Sun4v - Supported?
Jerry Jelinek wrote: Ihsan Zaghmouth wrote: Jerry, How is *update on attach* planned to be deployed: as a patch update, or in the next release, Solaris 10 U5? I am currently working on getting this done for Nevada. Once it is integrated there, we'll see what S10 update we might be able to get it into. I can guarantee it won't be S10u5 since the code isn't even integrated into nv yet. Does this address the *patch management of a migrated zone in the configured state*, absent from the node intended to be patched? I am not sure what you mean here. In an HA setup with servers whose zones need to be patched, we may follow one of two supported methods: 1) Live Upgrade or patch an ABE locally and wait for a maintenance window to bring everything up on the ABE, including patched zones on the same ABE. 2) Switch over all resource groups and their associated zones (detach/attach) to the failover node(s), in a planned order, to patch the servers (GZ); applications stay up and running on the failover server, patched or unpatched, with no need to wait for the patch process. Then patch the deserted nodes and switch all resource groups and their associated zones back (detach/attach); that is where update on attach kicks in for the zones. Patch the switched-from node(s) one at a time, and all your hosting servers are up to the latest patches, which simplifies update on attach the next time around. That is what I meant. Ihsan
Re: [zones-discuss] RSC cards and zlogin -C to a zone clash of interest
Well, I recommended the customer use the -e option when they zlogin, and change the escape to ? or any other character they wish, at least for now. zlogin -e ? -C zonename -e c Specifies a different escape character, c, for the key sequence used to access extended functions and to disconnect from the login. The default escape character is the tilde (~). Enda O'Connor (Sun Microsystems Ireland) wrote: Ihsan Zaghmouth wrote: Here's one issue that was raised by a Sun customer ... Looks like we have a clash of interest on "~." They have V490s with RSC cards (Remote System Control) and zones. When they do *console -C "zone"*, then do a *~.* to disconnect from that zone's console, it takes them to the *RSC prompt.* If they "console" from there, they go back to the zone console. They can't escape back to the global zone. Anyone seen this before... Any thoughts? does ~~. help cheers Ihsan
[zones-discuss] RSC cards and zlogin -C to a zone clash of interest
Here's one issue that was raised by a Sun customer ... Looks like we have a clash of interest on ~. They have V490s with RSC cards (Remote System Control) and zones. When they do console -C zone, then do a ~. to disconnect from that zone's console, it takes them to the RSC prompt. If they console from there, they go back to the zone console. They can't escape back to the global zone. Anyone seen this before... Any thoughts? cheers Ihsan
Re: [zones-discuss] S10 U3 Live Upgrade with zones
Sergiy, The info doc 72099 is clear about LU with zones; here is the NOTE: NOTE: This patch list is currently incomplete for Solaris[TM] 10 systems running zones. One patch that impacts patching zones is not available for either SPARC or x86 platforms. If you are not using Live Upgrade on a system involving zones, the patch list below is complete. I guess we still need to wait for the list of patches to be finalized. Ihsan Sergiy Kolodka wrote: Guys, Can someone please confirm or deny whether it is possible to do Live Upgrade from Solaris 11/06 with zones installed to Solaris 8/07? I've tried to apply all patches mentioned in the 72099 doco from SunSolve, but lucreate keeps complaining that I need to install all required patches in order to do an upgrade with zones, and I'm pretty sure that I already have them all installed; actually I checked that five times and they are in fact installed. When I detached the zones, the process went pretty smoothly, but that's not what I'm looking for. So, am I missing something, or does Live Upgrade with zones still not work? Thanks! This message posted from opensolaris.org
Re: [zones-discuss] S10 U3 Live Upgrade with zones
Hi Enda, What is the status of the NOTE in 72099? Are you saying that the list is complete, and that whatever is there is final and should work now? Should the info doc be revised then? Please advise. NOTE: This patch list is currently incomplete for Solaris[TM] 10 systems running zones. One patch that impacts patching zones is not available for either SPARC or x86 platforms. If you are not using Live Upgrade on a system involving zones, the patch list below is complete. Ihsan Enda O'Connor wrote: Hi This should work. Are you using the latest released revs of the patches listed, and are you using the packages from 8/07? What are the exact steps you followed, and at which step does the error occur? Enda Sergiy Kolodka wrote: Guys, Can someone please confirm or deny whether it is possible to do Live Upgrade from Solaris 11/06 with zones installed to Solaris 8/07? I've tried to apply all patches mentioned in the 72099 doco from SunSolve, but lucreate keeps complaining that I need to install all required patches in order to do an upgrade with zones, and I'm pretty sure that I already have them all installed; actually I checked that five times and they are in fact installed. When I detached the zones, the process went pretty smoothly, but that's not what I'm looking for. So, am I missing something, or does Live Upgrade with zones still not work? Thanks! This message posted from opensolaris.org
Re: [zones-discuss] S10 U3 Live Upgrade with zones
Enda, If you go over 72099 carefully, here is what it says now: The following patch has been withdrawn, and until a new version is released you cannot use LU on systems with zones: Solaris 10 SPARC 120272-12 SMA patch; 120011-14 depends on it. The following patch has been withdrawn, and until a new version is released you cannot use LU on systems with zones: Solaris 10 x86 120273-13 SMA patch; 120012-14 depends on it. The SMA patch for both SPARC and x86 has been WITHDRAWN. Reason: adding patch 120272-12 or 120273-13 corrupts the /etc/sma/snmpd.conf file, causing the snmpd services not to come up after patching. That is what I am trying to convey. That is the confusion right here! It's affecting the JP Ihsan Enda O'Connor wrote: Hi It appears there is some confusion here; what is the patch that is missing? I thought the patch list was complete by now, though there were issues with the u4 KU not being available due to a requirement being uprev'ed. I could be wrong, but as u4 is finished, all patches are now cut. Enda Ihsan Zaghmouth wrote: Hi Enda, What is the status of the NOTE in 72099? Are you saying that the list is complete, and that whatever is there is final and should work now? *Should the info doc be revised then? Please advise* NOTE: *This patch list is currently incomplete for Solaris[TM] 10 systems running zones.* One patch that impacts patching zones is not available for either SPARC or x86 platforms. *If you are not using Live Upgrade on a system involving zones, the patch list below is complete.* Ihsan Enda O'Connor wrote: Hi This should work. Are you using the latest released revs of the patches listed, and are you using the packages from 8/07? What are the exact steps you followed, and at which step does the error occur? Enda Sergiy Kolodka wrote: Guys, Can someone please confirm or deny whether it is possible to do Live Upgrade from Solaris 11/06 with zones installed to Solaris 8/07?
I've tried to apply all patches mentioned in the 72099 doco from SunSolve, but lucreate keeps complaining that I need to install all required patches in order to do an upgrade with zones, and I'm pretty sure that I already have them all installed; actually I checked that five times and they are in fact installed. When I detached the zones, the process went pretty smoothly, but that's not what I'm looking for. So, am I missing something, or does Live Upgrade with zones still not work? Thanks! This message posted from opensolaris.org
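Since much of this thread comes down to whether the installed patch revs actually satisfy the required ones, a small helper for comparing "ID-REV" strings can take the guesswork out of it. This is a sketch; on a live system the installed rev would come from patchadd -p or /var/sadm/patch, and the patch numbers below are just the ones quoted in the thread:

```shell
#!/bin/sh
# Sketch: does an installed patch satisfy a required "ID-REV"?
# Returns 0 if the rev is sufficient, 1 if it is too old, 2 if the
# patch IDs differ (so the comparison is meaningless).
patch_ge() {               # usage: patch_ge installed required
  inst_id=${1%-*} inst_rev=${1#*-}
  req_id=${2%-*}  req_rev=${2#*-}
  [ "$inst_id" = "$req_id" ] || return 2
  [ "$inst_rev" -ge "$req_rev" ]
}

patch_ge 120272-12 120272-12 && echo "120272-12: rev sufficient"
patch_ge 120272-10 120272-12 || echo "120272-10: rev too old, need -12"
```

Feeding it each line of the 72099 list against the output of patchadd -p would confirm or deny "I already have all of them" in one pass.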
Re: [zones-discuss] Detect pkgs installed with -G?
Hi Jeff, Here are my 2 cents, on the assumption that pkginfo should hold such information post-installation. The /var/sadm/pkg/pkgname/pkginfo file lists, among other info, SUNW_PKG_ALLZONES=true | false for the installed package. pkginfo should have an option to display that; I tried them all ... in vain! If you write a small script that displays /var/sadm/pkg/pkgname/pkginfo, you should be able to distinguish: cat /var/sadm/pkg/pkgname/pkginfo | grep SUNW_PKG_ALLZONES, and if: SUNW_PKG_ALLZONES=true ... then installed in all zones; SUNW_PKG_ALLZONES=false ... only the global zone (-G) Ihsan Jeff Victor wrote: How can someone learn whether a package was installed in the global zone *with* -G - or without it?
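The small script suggested above might look like this. It is a sketch that follows the true/false interpretation in the message; PKGROOT is a parameter (defaulting to the real /var/sadm/pkg) so it can also be pointed at a copy of the package database:

```shell
#!/bin/sh
# Sketch of the grep idea above: report SUNW_PKG_ALLZONES for every
# package recorded under a pkg database directory.
list_allzones() {            # usage: list_allzones [pkg-db-root]
  PKGROOT=${1:-/var/sadm/pkg}
  for f in "$PKGROOT"/*/pkginfo; do
    [ -f "$f" ] || continue
    pkg=$(basename "$(dirname "$f")")
    az=$(grep '^SUNW_PKG_ALLZONES=' "$f" | cut -d= -f2)
    case $az in
      true)  echo "$pkg: installed in all zones" ;;
      false) echo "$pkg: this zone only (-G candidate)" ;;
      *)     echo "$pkg: SUNW_PKG_ALLZONES not recorded" ;;
    esac
  done
}
```

As the follow-up in this thread points out, pkgparam gives the same answer per package without any scripting; this loop is only useful for a one-shot report across everything installed.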
Re: [zones-discuss] Detect pkgs installed with -G?
Much appreciated Shawn. Shawn Ferry wrote: Jeff, Ihsan; You want the command pkgparam, examples below. Shawn On Sep 5, 2007, at 2:54 PM, Ihsan Zaghmouth wrote: Hi Jeff, Here is my 2 cents, after assuming that pkginfo should hold such an information post installation. The /var/sadm/pkg/pkgname/pkginfo file lists among other info the SUNW_PKG_ALLZONES=true | false for the installed package. pkginfo should have an option to display that, I tried them all ... in vain ! pkgparam -v pkgname or pkgparam pkgname param e.g. pkgparam SUNWcsr SUNW_PKG_ALLZONES true pkgparam -v SUNWcsr CLASSES='none ttydefs initd renamenew preserve cronroot passwd tiservices inetdconf definit etcremote nsswitch netconfig deflogin defsu syslogconf ttysrch group inittab etcrpc etcprofile mailxrc shadow locallogin localprofile logadmconf logindevperm nscd fstypes pamconf services rbac renameold dhcpinittab policyconf pkcs11confbase defpasswd vfstab manifest hosts' BASEDIR='/' LANG='' TZ='GMT0' PATH='/sbin:/usr/sbin:/usr/bin:/usr/sadm/install/bin' OAMBASE='/usr/sadm/sysadm' PKG='SUNWcsr' NAME='Core Solaris, (Root)' ARCH='i386' VERSION='11.11,REV=2007.01.05.02.51' SUNW_PRODNAME='SunOS' SUNW_PRODVERS='5.11/snv_55' SUNW_PKGTYPE='root' MAXINST='1000' CATEGORY='system' DESC='core software for a specific instruction-set architecture' VENDOR='Sun Microsystems, Inc.' HOTLINE='Please contact your local service provider' EMAIL='' SUNW_PKGVERS='1.0' SUNW_PKG_ALLZONES='true' SUNW_PKG_HOLLOW='false' SUNW_PKG_THISZONE='false' PSTAMP='elpaso20070105025839' PKGINST='SUNWcsr' PKGSAV='/var/sadm/pkg/SUNWcsr/save' MODIFIED_AFTER_INSTALLED='' INSTDATE='Mar 23 2007 06:09' if you write a small script, you can display the /var/sadm/pkg/pkgname/pkginfo, you should be able to distinguish: : cat /var/sadm/pkg/pkgname/pkginfo | grep SUNW_PKG_ALLZONES, and if: SUNW_PKG_ALLZONES=true ... Then installed on all Zones SUNW_PKG_ALLZONES=false ... 
Only Global -G Ihsan Jeff Victor wrote: How can someone learn whether a package was installed in the global zone *with* -G - or without it? -- Shawn Ferry shawn.ferry at sun.com Senior Primary Systems Engineer Sun Managed Operations
Re: [zones-discuss] Cannot ping gateway from zone
Could you briefly explain what you did to fix it, and what actions you have taken since then ...? Asif Iqbal wrote: good catch! However, I fixed that since then. That was the only thing that was wrong, and I still have the same problem. On 8/31/07, Ihsan Zaghmouth [EMAIL PROTECTED] wrote: Asif, e1000g1:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6 zone zone2 inet 36.15.189.77 netmask ffffff00 broadcast 63.171.189.255 Your broadcast address is wrong; shouldn't it be 36.15.189.255? Asif Iqbal wrote: On 8/31/07, Steffen Weiberle [EMAIL PROTECTED] wrote: Can you ping it from 'testzone'? It is configured on the same interface, e1000g1. Exact same problem with testzone. I cannot ping the default gw. Asif Iqbal wrote: Hi All I have a global zone and two non-global zones on the same IP segment. I can ping the default gw from the global zone and zone1. But I cannot ping the gw from zone2. Any idea what could be the problem? Here is my setup.
ifconfig -a on global:
lo0: flags=2001000848<LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 zone zone1
        inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 zone testzone
        inet 127.0.0.1 netmask ff000000
lo0:3: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 zone zone2
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 36.15.189.55 netmask ffffff00 broadcast 36.15.189.255
        ether 0:14:4f:3f:eb:30
e1000g1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
        inet 0.0.0.0 netmask 0
        ether 0:14:4f:3f:eb:31
e1000g1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6 zone testzone
        inet 36.15.189.21 netmask ffffff00 broadcast 36.15.189.255
e1000g1:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6 zone zone2
        inet 36.15.189.77 netmask ffffff00 broadcast 63.171.189.255
e1000g2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 8
        inet 10.0.0.11 netmask ff000000 broadcast 10.255.255.255
        ether 0:14:4f:3f:eb:32
e1000g3: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
        inet 192.168.1.1 netmask ffffff00 broadcast 192.168.1.255
        ether 0:14:4f:3f:eb:33
e1000g3:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7 zone zone1
        inet 36.15.189.15 netmask ffffff00 broadcast 36.15.189.255
e1000g3:2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7 zone zone1
        inet 192.168.1.1 netmask ffffff00 broadcast 192.168.1.255
netstat -nr on global:
Routing Table: IPv4
  Destination      Gateway          Flags  Ref  Use    Interface
  36.15.189.0      36.15.189.55     U      1    47174  e1000g0
  224.0.0.0        36.15.189.55     U      1    0      e1000g0
  default          36.15.189.254    UG     1    67052
netstat -nr on zone1:
Routing Table: IPv4
  Destination      Gateway          Flags  Ref  Use    Interface
  36.15.189.0      36.15.189.15     U      1    6328   e1000g3:1
  224.0.0.0        36.15.189.15     U      1    0      e1000g3:1
  default          36.15.189.254    UG     1    67062
  127.0.0.1        127.0.0.1        UH     3    8152   lo0:1
netstat -nr on zone2:
Routing Table: IPv4
  Destination      Gateway          Flags  Ref  Use    Interface
  36.15.189.0      36.15.189.77     U      1    4      e1000g1:2
  224.0.0.0        36.15.189.77     U      1    0      e1000g1:2
  default          36.15.189.254    UG     1    67178
  127.0.0.1        127.0.0.1
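The mistyped broadcast flagged earlier in this thread can be double-checked by recomputing it from the address and netmask. A minimal sketch in shell/awk, working per octet (keep the network bits, set all the host bits):

```shell
#!/bin/sh
# Sketch: derive the IPv4 broadcast address from an address and a
# dotted netmask, to sanity-check values like the mistyped one above.
broadcast_of() {           # usage: broadcast_of 36.15.189.77 255.255.255.0
  echo "$1 $2" | awk '{
    split($1, a, "."); split($2, m, ".")
    for (i = 1; i <= 4; i++) {
      h = 256 - m[i]                    # size of the host range in this octet
      b[i] = a[i] - (a[i] % h) + h - 1  # network part kept, host bits all set
    }
    printf "%d.%d.%d.%d\n", b[1], b[2], b[3], b[4]
  }'
}

broadcast_of 36.15.189.77 255.255.255.0   # -> 36.15.189.255, not 63.171.189.255
```

That said, a wrong broadcast address alone should not normally stop unicast pings to the gateway, so it may well not be the whole story here, which matches the follow-up saying the problem persisted after the fix.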
[zones-discuss] luzonevfs failed and Zones tool patches
Hi, I have done everything according to the prerequisites and ran liveupgrade20 from the U4 media on my Solaris 10 x64 system. All patches in the info doc 72099 were installed afterwards. I am still experiencing the following error, which I don't have an answer for:
root # lucreate -n Solaris10U4 -m /:/dev/dsk/c1d0s3:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
ERROR: luzonevfs failed on BE path /:
ERROR: You must install the Zones tool patches to use this set of Live Upgrade packages to manipulate the boot environment at /.
ERROR: cannot determine file system configuration for boot environment Sol10U3-S0
ERROR: cannot cross reference device list with file systems for boot environment Sol10U3-S0
ERROR: cannot cross reference available devices with system configurations
ERROR: cannot determine physical and logical storage device availability
ERROR: please review all file system configuration options
ERROR: cannot create new boot environment using options provided
Has anyone experienced this, and what are the Zones tool patches beyond what is in 72099? Regards
Re: [zones-discuss] Zones question with san device
Peter, You can make filesystems read/write accessible on the fly, for example by mounting: (global)# mount -F lofs /usr_sap_zsapr3p /zones/zsapr3p/root/usr/sap This makes the global directory /usr_sap_zsapr3p, where a Veritas filesystem is mounted in the global zone, available under /usr/sap in the local zone zsapr3p. You need to add the fs to the zone's configuration with zonecfg so that you get the mount point back after a zone reboot. Hope this helps Ihsan Peter Wilk wrote: All, IHAC asking the following: the customer has zones, and when they commit a Veritas filesystem (loopback) from a SAN storage device, the non-global zone requires a reboot. Is there another way to commit these filesystems without a reboot? (The SAN storage device is not using NFS; it is local to the system.) Thanks Peter Wilk - OS/Security Support, Sun Microsystems, 1 Network Drive, P.O. Box 4004, Burlington, Massachusetts 01803-0904, 1-800-USA-4SUN [EMAIL PROTECTED] -- Ihsan Zaghmouth Sr. SAP Solution Architect SUN-SAP Business Applications Group (832) 859-2818 (Cell) (713) 784-2818 (Home) (713) 784-2818 (Fax) [EMAIL PROTECTED]
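The zonecfg step mentioned above could look like the following command file, using the same zone and paths as the mount example (fed to zonecfg -z zsapr3p -f file, so the lofs mount survives zone reboots):

```
add fs
set dir=/usr/sap
set special=/usr_sap_zsapr3p
set type=lofs
end
commit
```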
Re: [zones-discuss] A zone's project and pool associations
Hi, A general question regarding zones, pools, and projects within these zones. I am aware of zone associations to pools and their psets: many zones to one pool, and not vice versa. The question is, if we decide on 2 pools, could we associate a zone to pool_1 (set pool = "pool_1" in zonecfg, or poolbind) and a project within this zone to the other pool via project.pool = "pool_2" under FSS, and are there any reservations about doing so? cheers Ihsan
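For reference, the project-to-pool binding in the question is expressed as a project attribute. A hypothetical /etc/project entry (all names made up) would look like:

```
# projname:projid:comment:user_list:group_list:attributes
sap_batch:1001:batch work:::project.pool=pool_2
```

Whether a project inside a zone bound to pool_1 can actually land its processes on pool_2 is exactly the open question here; since a non-global zone is confined to its own pool binding, this is worth testing before relying on it.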
[zones-discuss] Re: Where is the nice tool.
Brad, When will you have 1.7 generally available? As of now, it's not accessible. Regards Ihsan This message posted from opensolaris.org
Re: [zones-discuss] Where is the nice tool.
Hi Brad, When will you have 1.7 generally available? As of now, http://mydataexchange.net/zonemgr/zonemgr-1.7.txt is not accessible: "You don't have permission to access /zonemgr/zonemgr-1.7.txt on this server." Please advise. Regards Ihsan Brad Diggs wrote: If you like a CLI flavor of zones management, you might consider trying the Zone Manager (zonemgr). This script greatly simplifies automated zone creation and management. I recently updated the zonemgr to version 1.7, which includes a ton of new features. More info found here: http://mydataexchange.net/zonemgr Also, the Zone Manager has been contributed to the OpenSolaris community and will eventually be hosted as a project on the OpenSolaris.org site once all the legal review is completed. Regards, Brad
Re: [zones-discuss] question on zones and OS level
Jeff Victor wrote: Ihsan Zaghmouth wrote: Peter, What type are the zones? It would be interesting to try this with a whole-root zone; definitely not a sparse one. 1. Fully back up the 01/06 whole-root zone and then update the GZ with 06/06. 2. You can't keep any zones during updates for now (the Zulu initiative is working on this), so you have to unconfigure/delete them. You *can* keep zones if you use standard upgrade. You cannot keep zones if you use Live Upgrade. Definitely; I missed the differentiation. Thanks for pointing it out. 3. After the update, restore the 01/06 whole-root zone and check it out ... 4. Check it out and hope for the best ... Sun would not support this. I agree 100%. This was a hypothesis/thought of desperation/adventure, which realistically Sun does not support. What would be the reason to stay behind on 01/06 if 06/06 comes with the best of 01/06 and more! cheers Ihsan Peter Wilk wrote: All, IHAC that is asking the following (Solaris 5.10): if the global zone is at 01/06 and there are 2 non-global zones at 01/06, and the global zone is updated to 06/06, can the user update only 1 zone to 06/06 and leave the other zone at 01/06? If so, are there any issues that I need to communicate?
Re: [zones-discuss] CDROM access in a zone
Mahesh, To add a CD-ROM, run zonecfg and add the following statements:

add fs
set dir=/cdrom
set special=/cdrom
set type=lofs
set options=[nodevices]
end

Ihsan

Mahesh Shakthy wrote: Hello, I am new to zones and I was wondering if anyone has ideas about accessing the cdrom drive from the global zone. I know I can add devices and mount the cdrom device in the local zone, but I am curious to know if it can be mounted and accessed similarly to how vold does it in the global zone. Thanks in advance. Mahesh
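Put together, the zonecfg statements above can be written to a command file and applied in one step. Writing the file is shown below; applying it is a separate, Solaris-only step (`zonecfg -z myzone -f /var/tmp/cdrom.cfg`, then reboot the zone so the mount appears). The zone name "myzone" is hypothetical.

```shell
# Write the zonecfg commands described above to a file. Applying them
# (zonecfg -z myzone -f /var/tmp/cdrom.cfg) loopback-mounts the global
# zone's /cdrom into the zone. The zone name "myzone" is hypothetical.
cat > /var/tmp/cdrom.cfg <<'EOF'
add fs
set dir=/cdrom
set special=/cdrom
set type=lofs
set options=[nodevices]
end
commit
EOF
```

Note that a lofs mount shares whatever the global zone's vold has mounted under /cdrom; it does not give the zone its own vold-style automounting.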
Re: [zones-discuss] VCS failover of non-global zones between systems.
Peter, Personal experience: for now, until the zone migration options (detach/attach) are available, we have created primary/standby zones (same names and configurations on all failover nodes), with the condition that the primary zone is in the RUNNING state and all standby zones are in the INSTALLED state. VCS uses its Zone agent to online/offline/clean/monitor the zone's status and its application entry points, configured with the "hawizard" tool in Veritas VCS 4.1.

Once the detach/attach migration options are available to the public, the same OS rules of engagement (patches, packages, etc.) would still have to be enforced on all cluster nodes, and zones would be created on their own shared storage (1 volume / 1 disk group) so the zone content itself can migrate. The failover steps would look like:

1. Shut down the application within the zone.
2. Unmount all application NFS shared volumes related to the zone's application (NAS case).
3. Halt the zone to the INSTALLED state.
4. Detach the zone (zoneadm -z zonename detach).
5. Unmount all application volumes related to the zone's application (SAN case).
6. Deport all disk groups for both the applications and the zone.
7. Reverse the steps on the failed-over node: attach the zone (zoneadm -z zonename attach), boot the zone, and start the application.

At least that is what we have done so far; we have yet to experiment with the detach/attach options, which we plan to do shortly. Hope this helps. cheers Ihsan

Peter Wilk wrote: IHAC asking the following question: I have a question regarding VCS failover of non-global zones between systems. Veritas supports a "zone" agent that is used to fail over zones between systems under cluster control. There are several restrictions on this, documented by Veritas: (1) You must create a zone.xml file that is unique to each cluster member, and update the index.xml file for the zone. (2) The zone root must be on shared SAN disk (managed by VxVM) so that it can be visible to both machines. My question is regarding Sun support of this technique.
Due to the restrictions on patching and adding packages to non-global zones, when you patch (or add a package to) a system while the failover zone is present, the zone gets patched; if it isn't present, it won't be. This implies that all systems in the cluster MUST be at identical patch levels. Does Sun support the migration of zones from one machine to another via this technique? Is there an official position? Thanks Peter
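The detach/attach failover sequence described earlier in the thread can be sketched as a dry-run script. Zone name, disk group names, mount points, and the application start/stop commands are all hypothetical; each step is printed rather than executed so the plan can be reviewed before adapting it to a real cluster.

```shell
#!/bin/sh
# Dry-run sketch of the zone detach/attach failover sequence described
# above. All names (zone, disk groups, paths, app scripts) are
# hypothetical; steps are echoed, not executed.
ZONE=web01
APP_DG=appdg
ZONE_DG=zonedg

migrate_off() {
    echo "# 1. Stop the application inside the zone"
    echo "zlogin $ZONE /etc/init.d/myapp stop"
    echo "# 2. Unmount application NFS shares used by the zone (NAS case)"
    echo "umount /zones/$ZONE/root/app/nfs"
    echo "# 3. Halt the zone back to the INSTALLED state"
    echo "zoneadm -z $ZONE halt"
    echo "# 4. Detach the zone"
    echo "zoneadm -z $ZONE detach"
    echo "# 5. Unmount application volumes (SAN case)"
    echo "umount /app/data"
    echo "# 6. Deport the application and zone disk groups"
    echo "vxdg deport $APP_DG"
    echo "vxdg deport $ZONE_DG"
}

migrate_on() {
    echo "# Reverse the steps on the failed-over node"
    echo "vxdg import $ZONE_DG"
    echo "vxdg import $APP_DG"
    echo "mount /app/data"
    echo "zoneadm -z $ZONE attach"
    echo "zoneadm -z $ZONE boot"
    echo "zlogin $ZONE /etc/init.d/myapp start"
}

migrate_off
migrate_on
```

As the thread notes, this only works if all cluster nodes are kept at identical patch and package levels; `zoneadm attach` verifies that and refuses to attach a zone to a mismatched system.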