Re: [zones-discuss] how dynamic is your zones network configuration?

2010-06-04 Thread Derek McEachern
Never. We haven't ever had the need to change the interface for a zone.

On 6/4/10, Edward Pilatowicz  wrote:
> hey all,
>
> i had a quick question for all the zones users out there.
>
> after you've configured and installed a zone with ip-type=shared (the
> default), how often do you change the network interfaces assigned to
> that zone via zonecfg(1m)?  frequently? infrequently? never?  only when
> moving from testing to production?  etc...
>
> thanks
> ed
> ___
> zones-discuss mailing list
> zones-discuss@opensolaris.org
>

-- 
Sent from my mobile device


Re: [zones-discuss] ZFS ARC cache issue

2010-06-04 Thread James Carlson
Ketan wrote:
> Let me know what command you want me to run for kstat/truss.
>
> as per kstat zfs:0:arcstats:size the size is approximately 40G

Since there are a bunch of ways that the problem that Jason King was
describing could manifest, I think the only way to do this would be to
get the system in a state where Fusion consistently fails to run, and
then start it up with:

truss -fo /tmp/fusion.out fusion-command-and-args...

You'd then have to grovel through /tmp/fusion.out, find what leads up to
the failure, and see if there's anything suspicious there.
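A first pass over the log could look like the following; /tmp/fusion.out is the output file from the truss invocation above, and the syscall names are just the usual memory-allocation suspects, not anything taken from an actual Fusion trace:

```shell
# truss marks failed syscalls with "Err#"; show the last few before exit
grep 'Err#' /tmp/fusion.out | tail -20

# Narrow to memory-related calls: ENOMEM/EAGAIN from mmap/brk/memcntl
# would support a "not enough memory" theory
grep -E 'mmap|brk|memcntl' /tmp/fusion.out | grep 'Err#'
```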

Since Fusion is Oracle and OpenSolaris and ZFS are Oracle, maybe there's
another possibility.  This could be one of those cases where that
hoped-for "synergy" might kick in.  ;-}

-- 
James Carlson 42.703N 71.076W 


[zones-discuss] ZoneMgr 2.0.7 Released...

2010-06-04 Thread Brad Diggs
Hello All,

Thanks again to all of the folks that have contributed bugs, fixes, new code, and great ideas for the Zone Manager project.  Today, the final version 2.0.7 is now available at dl.thezonemanager.com.  With the release of this version, I have officially deprecated and removed version 1.8.1.  Version 2.0.7 includes many new features and bug fixes.

Here is the list of new features in version 2.0.7:

 * When adding a zone, added support for specifying autoboot, comment, and bootargs via the -o option.  As a result of this new feature, the -A feature for disabling autoboot has been deprecated.
 * When adding or cloning a zone, if the root user password of the new non-global zone is not specified via -P or -E, the root user password of the global zone is inherited by the new non-global zone.
 * Made the sparse root filesystem list inherit default values from the contents of the /etc/zones/SUNWdefault.xml file of the global zone.
 * When creating a whole root zone, use /etc/zones/SUNWdefault.xml to determine which directories should be removed (un-inherited) from the zone configuration.
 * Added the ability to add and remove directory inheritance via -o "addDir|/dir1[|/dir2|...]" or -o "rmDir|/dir1[|/dir2|...]".
 * Added the ability to delete a device from the non-global zone via -a modify -m "del|device|".
 * Added support for modifying the default router.
 * Added support for FSS cpu shares with -p 'scpu|number'.
 * Added a status action to list the status of all non-global zones.  The status action shows the state of the zones, the number and frequency of CPUs visible within the zones, and the zone uptime information.
 * Saved JASS output into its own log file.
 * Added the ability to apply resource management controls immediately.  This removes the need to reboot the zone when applying resource constraints.
 * Added support for multiple zonemgr invocations within a single input config file.  Use 'newcmd' to delimit between zonemgr invocations.
 * Added support for comments within the input config file.
 * Unified the file format and location of all artifacts (artifacts are files like output, log files, and configuration files).  Each invocation of the zonemgr script results in the creation of a new folder in ${HOME}/.zonemgr/ where the folder name is the current date and time.  All artifacts created for that invocation are stored in that directory.
 * With the addition of support for multiple zonemgr invocations as well as multiple zone actions, the artifacts have been broken out per action.
 * Provided an option (-o keep_artifacts) to keep a list of all artifacts created during the invocation of the zonemgr command.  The default action is to remove all artifacts upon successful completion of zonemgr.
 * Added a new service management mode called 'jail'.  This disables all but the bare necessities (including ssh) needed to keep the zone running.
 * Added a -o debug option to enable debugging.
 * Expanded the context of -n to support multiple pipe-delimited zone names, e.g. -n "zone1[|zone2|zone3|...]".  This applies to nearly all actions.  For example, you can now add 3 zones with zonemgr -a add -n "z1|z2|z3".
 * Enabled parallelization of select actions.
 * Removed the requirement to specify a zone name by using a default zone name.  If you run zonemgr without specifying a zone name (i.e. no -n), it will use a default zone name of zone<num>, where <num> is an incrementing number prefixed with zeros to keep the number four digits in length.  This feature also finds the next available zone name in order to avoid errors when creating a new zone.  For example, if zones zone0001, zone0002, and zone0005 exist and I add 3 new zones with -o "dCount|3", zonemgr will create zones zone0003, zone0004, and zone0006.
 * Added the ability to name the prefix used by the default zone namer via -o "dPrefix|<prefix>".  The default prefix is 'zone'.  For example, if no zones with the prefix 'mysql' exist, creating three new zones with -o "dPrefix|mysql" -o "dCount|3" will result in three new zones named mysql0001, mysql0002, and mysql0003.
 * Simplified the service restart flag format to support both multiple invocations of -S as well as a single invocation with multiple services in a single -S "<svc1>[|<svc2>|...]" format.
 * Reformatted all of the documentation to conform to an 80 character width format.

Here is the list of bugs fixed in version 2.0.7:

 * Updated all examples in the documentation for new usage and new features.
 * Added an update_hosts exception to not update /etc/hosts if hosts are looked up rather than specified (e.g. a hostname specified rather than an IP address).
 * Fixed a bug where applying a swap resource control to a non-global zone failed because multiple swap devices exist in the global zone.
 * Fixed a bug where zonecfg fails if TERM=xterm-color.
 * Fixed a bug where lofi/lofs filesystems were forced to read-only even for -w.
 * Fixed a bug where netservices was not found.
 * Fixed a ck4fs bug in check_fs.
 * Fixed a bug where quotes of the -m flag are being ignored by optarg when inp
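The default zone-naming behavior described above can be sketched in a few lines of plain shell; the `existing` list stands in for the output of `zoneadm list -c`, and none of this is zonemgr's actual code:

```shell
#!/bin/sh
# Sketch of zonemgr's default zone namer: emit the next $count free
# names for $prefix, zero-padded to four digits, skipping names that
# already exist.  "existing" stands in for `zoneadm list -c` output.
existing="zone0001 zone0002 zone0005"
prefix=zone
count=3
i=1
found=0
while [ "$found" -lt "$count" ]; do
  name=$(printf '%s%04d' "$prefix" "$i")
  case " $existing " in
    *" $name "*) ;;                          # taken, skip it
    *) echo "$name"; found=$((found + 1)) ;; # free, claim it
  esac
  i=$((i + 1))
done
# prints zone0003, zone0004, zone0006 -- matching the example above
```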

Re: [zones-discuss] ZFS ARC cache issue

2010-06-04 Thread Ketan
Let me know what command you want me to run for kstat/truss.

As per kstat zfs:0:arcstats:size, the size is approximately 40G.
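For anyone following along, that stat can be read directly with kstat -p, and the raw byte count converted for a quick sanity check; the value in the second pipeline is a made-up sample, not a reading from the machine in question:

```shell
# Print the ARC size in bytes (module:instance:name:statistic)
kstat -p zfs:0:arcstats:size

# Convert a raw byte count to GiB; 42949672960 is a sample value
echo 42949672960 | awk '{ printf "%.1f GiB\n", $1 / (1024 * 1024 * 1024) }'
```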
-- 
This message posted from opensolaris.org


[zones-discuss] how dynamic is your zones network configuration?

2010-06-04 Thread Edward Pilatowicz
hey all,

i had a quick question for all the zones users out there.

after you've configured and installed a zone with ip-type=shared (the
default), how often do you change the network interfaces assigned to
that zone via zonecfg(1m)?  frequently? infrequently? never?  only when
moving from testing to production?  etc...
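For context, the kind of reconfiguration being asked about looks roughly like the following transcript; the zone name, interface, and address are all made up:

```shell
# Swap the physical NIC behind a shared-IP zone's address
# (hypothetical zone/interface/address; takes effect on next zone boot)
zonecfg -z myzone
zonecfg:myzone> remove net address=192.168.1.10
zonecfg:myzone> add net
zonecfg:myzone:net> set physical=e1000g1
zonecfg:myzone:net> set address=192.168.1.10
zonecfg:myzone:net> end
zonecfg:myzone> commit
zonecfg:myzone> exit
```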

thanks
ed


Re: [zones-discuss] ZFS ARC cache issue

2010-06-04 Thread Jason King
Are you sure fusion isn't checking the amount of available memory
itself and just deciding to abort?

It wouldn't be unprecedented -- if you run Oracle RDBMS on NFS mounts,
it refuses to start unless it sees explicit mount options provided for
the database filesystems (even when they are merely affirming the
default behavior).

If you can, I'd try using truss or such -- I'd be interested to see if
it's running vmstat or looking at some kstats.


On Fri, Jun 4, 2010 at 11:04 AM, James Carlson  wrote:
> Petr Benes wrote:
>>> That leaves unanswered the underlying question: why do you need to do
>>> this at all?  Isn't the ZFS ARC supposed to release memory when the
>>> system is under pressure?  Is that mechanism not working well in some
>>> cases ... ?
>>
>> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6522017
>>
>> " ... Even if the ZFS ARC subsequently frees memory, the kernel cage does
>> not shrink.  It cannot shrink, because pages from the ZFS ARC were
>> interspersed with other kernel pages, so the free space in the
>> physical address range of the cage is fragmented when the ZFS pages
>> are released.  The remaining kernel pages cannot be moved to compress
>> the cage, as kernel memory inside the cage is not relocatable. ..."
>
> Sure ... but that refers specifically to DR-related issues, and that's
> not what the original poster complained about.  His original message
> said that he was having trouble with a large application (Oracle Fusion)
> running on a system using ZFS.  Does Fusion really need contiguous
> kernel memory (why?) or is there something else going on here?
>
> --
> James Carlson         42.703N 71.076W         


Re: [zones-discuss] ZFS ARC cache issue

2010-06-04 Thread Petr Benes
> Sure ... but that refers specifically to DR-related issues,

The DR-related issues come from the kernel cage being unable to return
memory.  If you are on a DR-capable system, you have trouble with DR
itself.  On other hardware, the kernel simply won't return that memory
to the OS.


> and that's
> not what the original poster complained about.  His original message
> said that he was having trouble with a large application (Oracle Fusion)
> running on a system using ZFS.  Does Fusion really need contiguous
> kernel memory (why?) or is there something else going on here?
>
> --
> James Carlson 42.703N 71.076W 
>


Re: [zones-discuss] ZFS ARC cache issue

2010-06-04 Thread James Carlson
Petr Benes wrote:
>> That leaves unanswered the underlying question: why do you need to do
>> this at all?  Isn't the ZFS ARC supposed to release memory when the
>> system is under pressure?  Is that mechanism not working well in some
>> cases ... ?
> 
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6522017
> 
> " ... Even if the ZFS ARC subsequently frees memory, the kernel cage does
> not shrink.  It cannot shrink, because pages from the ZFS ARC were
> interspersed with other kernel pages, so the free space in the
> physical address range of the cage is fragmented when the ZFS pages
> are released.  The remaining kernel pages cannot be moved to compress
> the cage, as kernel memory inside the cage is not relocatable. ..."

Sure ... but that refers specifically to DR-related issues, and that's
not what the original poster complained about.  His original message
said that he was having trouble with a large application (Oracle Fusion)
running on a system using ZFS.  Does Fusion really need contiguous
kernel memory (why?) or is there something else going on here?

-- 
James Carlson 42.703N 71.076W 


Re: [zones-discuss] ZFS ARC cache issue

2010-06-04 Thread Petr Benes
> That leaves unanswered the underlying question: why do you need to do
> this at all?  Isn't the ZFS ARC supposed to release memory when the
> system is under pressure?  Is that mechanism not working well in some
> cases ... ?

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6522017

" ... Even if the ZFS ARC subsequently frees memory, the kernel cage does
not shrink.  It cannot shrink, because pages from the ZFS ARC were
interspersed with other kernel pages, so the free space in the
physical address range of the cage is fragmented when the ZFS pages
are released.  The remaining kernel pages cannot be moved to compress
the cage, as kernel memory inside the cage is not relocatable. ..."

Petr


Re: [zones-discuss] ZFS ARC cache issue

2010-06-04 Thread James Carlson
Petr Benes wrote:
> add to /etc/system something like (value depends on your needs)
> 
> * limit greedy ZFS to 4 GiB
> set zfs:zfs_arc_max = 4294967296
> 
> And yes, this has nothing to do with zones :-).

That leaves unanswered the underlying question: why do you need to do
this at all?  Isn't the ZFS ARC supposed to release memory when the
system is under pressure?  Is that mechanism not working well in some
cases ... ?

-- 
James Carlson 42.703N 71.076W 


Re: [zones-discuss] ZFS ARC cache issue

2010-06-04 Thread Petr Benes
add to /etc/system something like (value depends on your needs)

* limit greedy ZFS to 4 GiB
set zfs:zfs_arc_max = 4294967296

And yes, this has nothing to do with zones :-).
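The /etc/system setting only takes effect at the next boot.  On a live
system the cap can in principle be lowered with mdb, though this writes
kernel state directly and the effective symbol varies by release (on some
builds the running limit is arc_c_max rather than zfs_arc_max), so treat
this strictly as a sketch, not a recipe:

```shell
# Cap the ARC at 4 GiB on a running system -- this writes kernel memory;
# use with care, and verify the symbol name for your release first
echo "zfs_arc_max/Z 0x100000000" | mdb -kw

# Then watch whether the ARC actually shrinks
kstat -p zfs:0:arcstats:size
```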

Regards,
Petr

On 03/06/2010, Ketan  wrote:
> We have a server running zfs root with 64G RAM, and the system has 3
> zones running the oracle fusion app.  The zfs cache is using 40G of
> memory as per kstat zfs:0:arcstats:size, and the system shows only 5G
> of memory free; the rest is taken by the kernel and the 2 remaining
> zones.
>
> Now my problem is that the fusion guys are getting a "not enough
> memory" message while starting their application, because top and
> vmstat show 5G as free memory.  But I read that the ZFS cache releases
> memory as required by applications, so why is the fusion application
> not starting up?  Is there something we can do to decrease the ARC
> cache usage on the fly without rebooting the global zone?