Re: [zones-discuss] zonestat 1.4.1 problem

2009-06-11 Thread Phil Freund
Jeff,

Those lines were already commented out. It looks like the problem is in these 
lines:

500  # Get amount and cap of memory locked by processes in each zone.
501  $kstat->update();
502  my $zh = $kstat->{caps};
503  foreach my $z (keys(%$zh)) {
504      ($lkd_use[$z], $lkd_cap[$z]) = @{$kstat->{caps}{$z}
505          {"lockedmem_zone_".$z}}{qw(usage value)};
506  #    printf ("kstat: lkd_use[$z
507      $lkd_use_sum += $lkd_use[$z];
508  #    $lkd_cap[$z] = $lkd_cap[$z]/1024;
509  #    printf ("$z:lkd:%d MB / %d %s.\n", $lkd_use[$z]/1024/1024,
510  #        $lkd_cap[$z] > (1024**3) ? $lkd_cap[$z]/1024/1024/1024 : $lkd_cap[$z]/1024,
511  #        $lkd_cap[$z] > (1024**3) ? "TB" : "MB");
512
513      ($vm_use[$z], $vm_cap[$z]) = @{$kstat->{caps}{$z}
514          {"swapresv_zone_".$z}}{qw(usage value)};
515      $vm_use_sum += $vm_use[$z];

Thanks,
Phil
-- 
This message posted from opensolaris.org
___
zones-discuss mailing list
zones-discuss@opensolaris.org


[zones-discuss] zonestat 1.4.1 problem

2009-06-10 Thread Phil Freund
I have a couple of servers that are still running U1, and I'd like to use 
zonestat on them to get as much info as I can.

I get the following output when I run zonestat 1.4.1 with debug turned on:

root# zonestat -l -N
/usr/sbin/prtconf
/bin/pagesize
/bin/echo 'pages_pp_maximum/D;segspt_minfree/D' | mdb -k
/usr/sbin/zoneadm list -v
/usr/sbin/psrinfo
/usr/bin/svcs -H pools
svcs: Pattern 'pools' doesn't match any instances
/bin/ps -eo zone,pset,pid,comm | grep ' [z]*sched'
/usr/bin/ipcs -mbZ
Attempt to access disallowed key 'caps' in a restricted hash at zonestat line 502.
root#

Any ideas on how to fix this?

Thanks,
Phil


[zones-discuss] Re: Zones and Veritas Volume Manager... easy

2007-06-06 Thread Phil Freund
Hi Gael,

I thought through this situation almost a year ago and concluded that the zone 
root filesystems have to be UFS so they can be upgraded. I have turned on MPxIO 
for multipathed access to the zone roots, which are all set up on separate SAN 
LUNs. My E2900s can hold only two internal disks, so I can't use Live Upgrade. 
The only other way to upgrade is using an install DVD, and that method supports 
neither MPxIO nor any of the Veritas software. So to upgrade, I have to do the 
following:
1. Break the boot disk mirror and remove the mirror drive from the boot disk 
group.
2. Unencapsulate the boot disk.
3. Turn off MPxIO (and replace the vfstab with one that points the zone roots 
to non-MPxIO device names).
4. Reboot the server to make doubly sure the zone roots all mount correctly. 
(If they don't and you try to upgrade anyway, the upgrade fails after upgrading 
the global zone and leaves you with a broken system.)
5. Upgrade using the DVD.
6. Turn on MPxIO (and restore the original vfstab).
7. Re-encapsulate the boot disk.
8. Re-mirror the boot disk.
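Step 3 (and its reversal in step 6) hinges on the vfstab: the same zone root LUN has two different device names depending on whether MPxIO is enabled. A hypothetical pair of entries, with invented device names, just to illustrate the shape of the swap:

```
#device to mount            device to fsck               mount point   FS  fsck  boot  options
#
# MPxIO enabled: zone root via a scsi_vhci multipath device name
/dev/dsk/c4t60060E80...d0s0 /dev/rdsk/c4t60060E80...d0s0 /zones/zone1  ufs 2     yes   logging
# MPxIO disabled: the same LUN via a single-HBA path name
/dev/dsk/c2t500604...d0s0   /dev/rdsk/c2t500604...d0s0   /zones/zone1  ufs 2     yes   logging
```

Only one of the two forms is active at a time, which is why the whole file gets swapped rather than edited line by line.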

I have done this successfully on one of my Lab servers. There are at least 6 
reboots involved. It's not pretty but it does work. At least this way, I still 
have the original mirror disk to fall back on if the upgrade fails.

Regards,
Phil

[zones-discuss] Re: zones databases

2007-03-22 Thread Phil Freund
Just a reminder that QuickIO is only supported in a global zone. It uses a 
special device that can't be created in a non-global zone. It should be a 
fairly moot issue anyway since QuickIO is only useful for Oracle 8i and lower 
releases, none of which are still supported. QuickIO is still supported in 
Storage Foundation 4.1 but the support is dropped in SF 5.0. On the other hand, 
Oracle 9i and up make use of ODM which is supported in both global and 
non-global zones.

Regards,
Phil


[zones-discuss] Re: Re: Re: Re: Patching problem with whole root zones

2007-03-14 Thread Phil Freund
Enda,

All of the zones were booted to milestone all when 119254-34 was applied. Since 
the patch didn't require a reboot, I did it with all zones and global fully 
operational.

Phil


[zones-discuss] Patching problem with whole root zones

2007-03-06 Thread Phil Freund
When is it necessary to patch whole root zones directly?

I was trying to make sure that I had all of my zones ready to be updated with 
DST patch 122034-04 so I made sure to install patch utilities patch 119254-34 
on all of the global zones. I never thought to install it on the whole root 
zones separately. 

However, after applying kernel patch 118833-36 and the DST 122034-04 patch  
yesterday, I find that although all of the sparse zones show 118833-36 as being 
applied, none of the whole root zones do. 

In checking possible reasons, I looked to make sure that 119254-34 was 
installed in the whole root zones and found that it was not. I'm guessing that 
is why 118833-36 doesn't show as installed (and in fact it doesn't appear to 
have been installed in the whole root zones, based on missing changes in a 
script in /lib/svc/method). However, 122034-04 applied fine to both whole root 
and sparse zones and shows up correctly in showrev -p output.

What am I missing?

TIA,
Phil


[zones-discuss] Re: 3 questions about zones and containers

2006-10-27 Thread Phil Freund
You can also remove a LOFS-mounted filesystem from a running zone with no 
problem. I do it all the time.

To do it, log on to the global zone and unmount the filesystem with:

umount <mount point in the non-global zone>

Phil


[zones-discuss] Re: Can SAMBA be run in a non-global zone?

2006-10-19 Thread Phil Freund
The blastwave.org Samba distribution doesn't have this issue: its shutdown 
script (/etc/init.d/cswsamba stop) uses the PIDs for smbd, nmbd, and winbindd 
stored in /opt/csw/var/locks/.

A quick FYI on using the blastwave distribution: If you are using sparse zones 
and need to run Samba with winbind, you have to install the Samba packages into 
the global so that the winbind package (CSWsambawb) can add the files to 
/usr/lib. That said, if you don't create a smb.conf file in the global, Samba 
won't start there, so it's not a big issue.

Phil


[zones-discuss] Upgrading Solaris 10 with configured zones

2006-09-04 Thread Phil Freund
I have hit a problem with zones in relation to doing Solaris upgrades - I can’t 
do them currently. I have expended considerable effort in trying to be 
positioned to do upgrades but they do not work for me as currently implemented. 
I’ll provide a summary of my situation and then make suggestions as to what I 
think needs to be done to make upgrades work. Let me know how you see it.

Background:  I have production Solaris 10 Update 1 environments installed on 11 
servers that I want to upgrade to Update 2 (or 3 or …) to get the new zone 
features. It is not feasible to rebuild these servers each time an update with 
a desired new feature or improvement is released so upgrades must be able to be 
done. The number of installed non-global zones per server varies from none to a 
current maximum of 11 (expected to reach around 20 on my five E2900 servers). 
On each server with installed zones, the zone root filesystems are each 
configured on individual SAN LUNs (8 GB for sparse zones, 12 GB for whole root 
zones) to make sure there is enough operational space for /var logs, lightly 
used home directories, required open source software, etc. Zones with heavy 
home directory requirements also have a separately mounted LUN for /export/home 
and each zone has 1-to-n separate LUNs for application filesystems. All of the 
servers have Sun-branded Qlogic HBAs that use the Leadville driver for SAN 
attachment, which ensures the drivers are available in the DVD mini-root. 
MPxIO has been turned on to provide multiple paths to the zone root LUNs so 
that there is redundant access in case of an HBA failure. This environment 
works fine operationally and for patching.

The Problem: I have tried unsuccessfully to upgrade one of my lab servers that 
is configured with 2 zones, and opened a case with support only to be told that 
because the DVD mini-root does not support MPxIO, upgrades cannot be done and 
that I need to open an RFE to get that fixed; Engineering also noted that a 
future release of Live Upgrade will include MPxIO support and that should 
resolve the issue. Not only did the upgrade not work, but it left the server OS 
trashed, because it had already done the global upgrade before failing because
it wasn’t able to find the zone roots to finish the upgrade. The real problem 
is that the upgrade process cannot identify and mount the zone roots to do the 
upgrade, and as a result the upgrade fails. Although it superficially appears 
that this is due to MPxIO not being available in the mini-root, it isn’t really 
an MPxIO issue but an issue of the zone roots being implemented on external LUNs.
The same exact problem would exist if the zone roots are mounted on any 
external drive without MPXIO because the controller address of the mountpoint 
in the vfstab would be unlikely to match the controller address allocated by 
the mini-root, again causing the mounts to fail during an upgrade. Live Upgrade 
is not currently an option, nor do I expect it ever to be one (even with MPxIO 
support), both because of the way my zone roots are mounted and because my 
E2900 servers have only 2 internal drives, which are mirrored for high 
availability, leaving no place to do a Live Upgrade. 

I think this points to a more basic functional integration problem with doing 
upgrades that hasn’t been considered in the overall zone implementation design. 
The design seems to assume that all zone roots are implemented on an internal 
drive; in that scenario, there is no problem because the zone roots are always 
on a drive available and easily identified when booted from the DVD mini-root. 
That’s great in the lab or if there is a huge internal drive that can truly 
hold all zone roots. It would be great if the marketing myth that all zones can 
be tiny were true. Unfortunately, that doesn’t account for the need to run 
zones operationally with usefully sized space for /var, /export, and /opt. In 
my case, that would mean an internal drive capable of holding at least 200 GB 
for zone roots, plus swap, plus around another 12 GB for the global zone OS and 
/var. Even if a drive that size were available, I wouldn’t want to use it: 
zones sharing a single drive would perform much worse than zones installed on 
individual drives/LUNs because of drive latency and head-contention issues. 

I can think of a couple of ways to handle upgrades that would solve my problem.

Fix Option 1: The upgrade process needs to be able to discover the location of 
each zone root, whether it is on an internal or an external drive, and then 
mount it as part of the upgrade. If a drive can be seen from the mini-root and 
zones have been implemented, the zone root(s) should be discoverable using data 
from the boot disk’s vfstab and some kind of probing procedure. At a minimum, 
LUNs
visible from HBAs supported in the Leadville driver should be able to be 

[zones-discuss] Zone Backups and NetBackup

2006-08-16 Thread Phil Freund
If you are going to back up a zone using the NetBackup client from within the 
zone and you want to get all the filesystems automatically, you have to do the 
following NetBackup policy setup:

In the NetBackup policy:

1. Enable the Follow NFS option
2. Enable the Cross mount points option
3. Leave the Allow Multiple Streams option disabled
4. Specify / as the only entry in the Backup Selections list

Here's why:
If the Follow NFS option is not enabled, you can't back up NFS or LOFS 
filesystems; they get skipped.

You can't use the ALL_LOCAL_DRIVES directive in the Backup Selection list to 
automatically discover all of the filesystems because the LOFS filesystems are 
not considered as local drives by NetBackup (even though they show up in a 
mount -v output).

If you don't enable the Cross mount points option, you won't get anything but / 
backed up.

The Allow Multiple Streams option won't do any good in this scenario because 
there aren't multiple drives to set up on separate backup streams.

If you need or want to do multiple data streams from within a zone, you can't 
use any automatic discovery. You have to manually set up each stream and 
filesystem within the Backup Selection list.

I have filed an enhancement request with Symantec asking them to change 
ALL_LOCAL_DRIVES processing to include LOFS filesystems, but I have no idea 
whether they will accept it or what the implementation timeframe would be if 
they do. For now we're stuck with this workaround. If you want this 
enhancement, file your own request at enhancement dot veritas dot com.

BTW, I only found this out by accident when looking through my NetBackup All 
Log Entries report and noticing that the LOFS filesystems were being skipped. I 
checked what was available for restore and discovered that only / was 
available; every LOFS-mounted directory was empty below the directory name. 
The backups were returning status code 0, indicating no problems were 
encountered.

Regards,
Phil