Thank you guys for your answers! Regarding this networking problem, I think I don't have
time to investigate into it and I have decided to deploy on Solaris 10 instead. The
changes in OpenSolaris are too overwhelming, and even Sun cannot catch up with the
documentation (for example, sparse
What is best practice here?
Do not run {x}ntpd in the zones.
Actually there is a use-case for doing so - given that it's a
network-facing application, one might want to run xntpd in a non-global
zone for isolation reasons.
___
zones-discuss mailing
I just jumpstarted a T2000 twice. Once with 118833-33 and once with 120011-14.
The first time there were no problems with the November 2006 118833-33, but
when I re-jumpstarted it with the 120011-14 8-07 release I ended up with
multiple messages stating "cannot negotiate hypervisor". I checked
zoneadm: zone 'int-sagent-1-z1': WARNING: bge0:1: no matching subnet
found in netmasks(4) for 172.20.46.188; using default of 255.255.0.0.
but my /etc/netmasks (on both the global and local zone) looks good:
What does the netmasks entry in /etc/nsswitch.conf say? A common
issue is that a
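As a sketch of what to check (the mask value here is an example): netmasks must be looked up in files, and netmasks(4) is keyed on the classful network number rather than the subnet number:

```
# /etc/nsswitch.conf -- make sure netmasks consults local files:
netmasks:   files

# /etc/netmasks -- keyed on the *classful* network number
# (172.20.0.0 for this class B address), not the subnet 172.20.46.0:
172.20.0.0  255.255.255.0
```

If the entry is keyed on the subnet number instead, the lookup fails and the zone falls back to the default classful mask, which matches the warning above.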
Oh. I thought that pidentd was supposed to resolve UIDs locally.
That's one of the features of the protocol; it provides "here's who
*I* think the user is" information back to the requester.
Of course, that's why I thought IDENT was a fairly bogus mechanism
since you're asking the remote system
At least some of the servers that I can't access are using NFSv3
It has been my experience that NFSv4 on Solaris 10 and NFSv3 on other
hosts, including NetApp filers, cause all sorts of problems. Either you
get "No Directory" errors, or the directory/files are owned by "nobody".
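For what it's worth, the "owned by nobody" symptom with NFSv4 is often an ID-mapping domain mismatch; a sketch of what to check on the Solaris 10 side (the domain value is an example):

```
# /etc/default/nfs -- NFSv4 maps owners as user@domain; if the
# client's and server's mapping domains differ, ownership falls
# back to "nobody".  Both sides must agree on the domain:
NFSMAPID_DOMAIN=example.com
```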
I don't know about
I think we already have this as a potentially serious problem for
non-global zones that are NFS clients of the global zone, don't we?
Making it work right would involve either resolving the underlying
deadlock or somehow identifying those self-mounts and doing a lofs
mount from the global zone
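A minimal sketch of that lofs workaround (zone name and paths are hypothetical): rather than the zone NFS-mounting a filesystem served by its own global zone, the global zone loopback-mounts the data into the zone's root, sidestepping the self-mount entirely:

```
# In the global zone: loopback-mount the shared data into the
# non-global zone's filesystem instead of using NFS:
mount -F lofs /export/data /zones/myzone/root/export/data
```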
2) Lack of requirements - we don't know what people want.
In addition to the requirements already stated by others, another
crucial one is a resolution of the infamous NFS/VM deadlock. There
have been numerous bugs filed over the years concerning it but I
believe the current one is
FYI, you can also use create -b (blank) so you don't have to run
remove-pkg-dir 4 times.
Actually, the documented way to create a whole-root zone *is* to remove
the default inherit-pkg-dir resources. The reason for this is that create
-b says to use a blank template - namely, no properties set and
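A sketch of the two approaches side by side (zone name and zonepath are examples); the four default inherit-pkg-dir resources on Solaris 10 are /lib, /platform, /sbin, and /usr:

```
# Whole-root zone via the blank template -- nothing to remove:
zonecfg -z myzone
zonecfg:myzone> create -b
zonecfg:myzone> set zonepath=/zones/myzone

# Documented whole-root approach -- start from the default
# template, then remove each inherit-pkg-dir resource:
zonecfg:myzone> create
zonecfg:myzone> remove inherit-pkg-dir dir=/lib
zonecfg:myzone> remove inherit-pkg-dir dir=/platform
zonecfg:myzone> remove inherit-pkg-dir dir=/sbin
zonecfg:myzone> remove inherit-pkg-dir dir=/usr
```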
My desire is to have zones be part of the core in Nevada, possibly
by folding them right into SUNWcsu/SUNWcsr.
There's also a related CR open
6421453 RFE: SUNWCzone should be available in SUNWCmreq and
above
Fixing this involves an examination of SUNWzoneu's dependencies
If we want any form of internal consistency, wouldn't we also need to change
where we assign datalink names from zonecfg to dladm?
Thus no more 'net' resource in zonecfg for exclusive-IP zones, but instead
some
dladm set-zone zoneA bge1
Only having dladm show it, and not be able to
As Dan pointed out, there are already other commands such as
ifconfig(1M) and mount(1M) which manipulate or observe resources
assigned to a zone, so using dladm(1M) wouldn't be that inconsistent.
Yes, but those provide for manipulation (aka change) and observability in the
same place.
I tried the kill and AFAICT root in the global zone can kill a process
in a non-global zone:
OK. I must be misremembering this. I thought the restriction was
more complex than that.
Within the global zone, the ability to kill a process in a non-global
zone is controlled by the proc_zone
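A quick way to verify both halves of this (zone name and PID are examples); the proc_zone privilege is what permits sending signals across zone boundaries:

```
# In the global zone: list processes running in the non-global
# zone, then signal one of them:
ps -z myzone -o pid,comm
kill 12345

# Check whether the current shell holds proc_zone; without it,
# kill(2) is restricted to processes in the caller's own zone:
ppriv -v $$ | grep proc_zone
```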
Erik,
Here are my belated comments on the IP Instances design.
There are two documents which describe the design
si-interfaces - a high-level design focusing on the problem the
project solves, and what the user-visible changes are
A general comment is that in both documents, page
Erik,
One additional comment I meant to include is that I think it would be
useful to add a paragraph on what is possible today with the current
stack in terms of sharing a link versus what will be possible with IP
instances (using separate physical NICs or VLANs) versus what will be
possible
I propose that zlogin be split into two different programs, one
for console access and one for running programs and/or shell.
A simple way to do this (and would be backward compatible) would be to
create a hard link to zlogin, say 'zconsole'; when it is executed,
the program can test arg0 and
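A minimal sketch of the arg0 dispatch being proposed (the 'zconsole' name comes from the proposal; the dispatch logic below is illustrative, not an actual implementation):

```shell
# One binary, hard-linked under two names, chooses its mode from
# the name it was invoked as (argv[0]):
mode_for() {
    case "$(basename "$1")" in
        zconsole) echo console ;;   # console access only
        *)        echo login ;;     # run programs and/or a shell
    esac
}

mode_for /usr/sbin/zconsole   # prints: console
mode_for /usr/sbin/zlogin     # prints: login
```

Because both names are hard links to the same inode, this stays backward compatible: existing zlogin invocations behave exactly as before.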
Fernando,
I have two systems named fdo5 and fdoclt4. All the NICs in both systems
are connected to the same switch. fdoclt4 has 3 zones in it. When I
traceroute from fdo5 to any of the zones, the route has an extra hop (always
18.1.1.142). Shouldn't this example resolve to 18.1.1.145
libc.so.1`realfree+0x68(2a3f0, 871, 93ac8, 3a4d8, 0, d)
libc.so.1`_free_unlocked+0xb0(ff1efa54, 0, 932f4, ff1efad4, ff1e8284, 2e460)
libc.so.1`free+0x24(2e460, 1084, 93334, 0, ff1e8284, 1000)
libcurses.so.1`delwin+0x80(0, 2df58, 2c068, fef03994, 0, 0)
libcurses.so.1`delscreen+0x5c(29748,
Could we somehow work the zone name into this? It would be nice for
e.g. poolstat(1) observability. Otherwise the user experience is going
to be all about trying to work out what 'SUNWzone34' maps to, which
seems poor.
We need to have the name begin with SUNW or we could have collisions with
Christine,
LU doesn't work for boxes with zones yet, afaik. zonepath on vxvm volumes
won't work for upgrade from 3/05 (granted, upgrade from 3/05 with zones isn't
supported anyway). I have no reason to think this would work with 1/06
Just to clarify that upgrade from 3/05 when zones are