Re: [zones-discuss] Need to force delete zones

2007-10-31 Thread Jerry Jelinek
Michael Webb - Sun Microsystems wrote:
  
 Please respond directly to me as I am not on the alias.
 I've built a number of zones for Sun Cluster deliveries, so I know just
 enough to get by.
 I managed to zorch the /etc/zones/*.xml files as well as the index
 entries of several zones, and now I cannot fully uninstall or delete
 them.
 No zoneadm or zonecfg commands will detach, uninstall, or delete the
 configurations.

 What other recourse do I have to remove the zones' old configurations?
 Are there any links within SWAN that you could point me to?

Could you tell us how your system state got corrupted?
At this point, since your system state is corrupted, you will
have to clean things up manually.  The zone commands don't
have any built-in support for dealing with a corrupt system.
If these files got messed up because of manual editing, maybe
you could look at a good system and try to manually edit the
data back into a valid configuration.  Otherwise you probably
need to delete everything and start over.
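
For example, a rough sketch of that manual cleanup (the zone name,
zonepath, and index line are illustrative; compare everything against
a known-good system before editing):

# Back up what is left of the configuration first
cp -rp /etc/zones /etc/zones.bak

# Lines in /etc/zones/index look roughly like zonename:state:zonepath
# (some releases append a UUID field).  Re-add a minimal entry in the
# "configured" state so the zone tools can see the zone again:
echo "myzone:configured:/zones/myzone" >> /etc/zones/index

# Recreate or restore /etc/zones/myzone.xml from a good system, then
# remove the zone cleanly:
zonecfg -z myzone delete -F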

Good luck,
Jerry


Re: [zones-discuss] [Fwd: Cloned zones and printers - redux]

2007-10-31 Thread Mike Gerdts
On 10/31/07, Norm Jacobs [EMAIL PROTECTED] wrote:
 Mike Gerdts wrote:
  It seems as though printers.conf could point to localhost in the
  master zone and the clones would then also point to their respective
  selves.  Is there something broken with that approach?
 
 It's not that we don't see it done, but we don't recommend configuring
 more than one system (or zone, in your situation) as a print server for
 a network-attached printer.  Doing so can cause starvation and doesn't
 give you an accurate view of the print queue, because there are
 multiple queues feeding the device.  The bottom line is that you can
 submit a print job that appears to be printing or ready to print, yet
 it sits there while other systems (or zones) are busy printing copies
 of the phone book.  Every time you check the queue, you appear to be
 printing or waiting for the device, but the activity on the device is
 invisible to your system or zone.  Now, if the printer in question
 isn't going to be particularly busy, or is only used by a small number
 of people, it's not a real problem.

In the general case, I agree with you.  This, however, is more of an
enterprise print architecture argument than one about the zone
configuration.

There are cases where local print queues for the same physical printer
may be appropriate.  For example, if I have a node-locked licensed
application that prints in a format the printer does not understand,
the local print queue may act as a filter that translates the job to
PS or PCL and then forwards it to a traditional print server that feeds
jobs to the real printer.  If the print server does not have a license
for the app (or is otherwise incapable of running it), it may not be
able to use the application to do the file conversion.  Several years
back I dealt with engineering apps that would generate hundreds of
pages of garbage if File->Print was used.  If the print queue
recognized the print job format and put it through the right filter
(which called the app in command-line mode to do a conversion), it
would print a nice single-page drawing.
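
To illustrate the localhost idea from the top of the thread, a sketch
of a printers.conf entry in the master zone (the queue name and
description are made up; check the syntax against printers.conf(4)):

# Each clone resolves "localhost" to itself, so cloned zones queue
# to their own local filter rather than to one shared server.
drawings:\
        :bsdaddr=localhost,drawings:\
        :description=local filtering queue for the node-locked app: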

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zones-discuss] b75 ip-instances bug?

2007-10-31 Thread John-Paul Drawneek
Ping also seems weird.

I can ping boxes from the zone all right.

But I have big problems pinging the zone.

box1 ping zone
65 packets transmitted, 46 packets received, 29% packet loss
round-trip (ms)  min/avg/max/stddev = 0.807/9149./4.02e+04/1.17e+04

zone ping box1
51 packets transmitted, 51 packets received, 0% packet loss
round-trip (ms)  min/avg/max/stddev = 0.669/297.310/5020.301/1011.132
 
 


Re: [zones-discuss] b75 ip-instances bug?

2007-10-31 Thread John-Paul Drawneek
The network can ping the zones and the global zone,
but the zones and the global zone can't ping each other.

:(
 
 


Re: [zones-discuss] NFS: Cannot share a zfs dataset added to a labeled zone

2007-10-31 Thread Danny Hayes
- I set the mount point as follows.

zfs set mountpoint=/zone/restricted/root/data zone/data

- I then added the dataset to the restricted zone using zonecfg. The full path 
to the dataset is now /zone/restricted/root/zone/restricted/root/data. I am not 
sure if that is what you intended, but it is a result of adding it as a dataset 
to the zone after setting the mountpoint.

- I updated the /zone/restricted/etc/dfs/dfstab with the following line.

/usr/bin/share -F nfs -o rw /zone/restricted/root/zone/data

- During reboot I receive the following error.

cannot mount 'zone/data': mountpoint or dataset is busy
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: 
exit status 1
Oct 31 14:43:08 svc.startd[19960]: svc:/system/filesystem/local:default: Method 
/lib/svc/method/fs-local failed with exit status 95.
Oct 31 14:43:08 svc.startd[19960]: system/filesystem/local:default failed 
fatally: transitioned to maintenance (see 'svcs -xv' for details)

- This is exactly the same problem that prompted the original message.  Services
fail during boot, which prevents opening a console.  This only occurs when you
try to share the dataset.  If you remove the line from
/zone/restricted/etc/dfs/dfstab and reboot the zone, everything works fine.  Any
ideas what I am doing wrong?
 
 


[zones-discuss] ALL_ZONES and other magic names.

2007-10-31 Thread Darren Reed

In reviewing the current interfaces made available from
zones, it appears that ALL_ZONES and GLOBAL_ZONE
are still private interfaces - correct?

If so, then I'm forced to expose the existence of zones
through an API to kernel consumers and somehow map
zone names into zoneid_t's.  While userspace has
getzoneidbyname(), there appears to be no equivalent
in the kernel - using zone_find_by_name() appears to
be the only way.  Is that a project private interface or
a consolidation private interface?

It would appear that the current state of the zones
interfaces requires cooperation with a program
in user space that maps a zone name to a zoneid_t
and then passes the zoneid_t into the kernel, correct?
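
For what it's worth, the userspace half of that handshake might look
something like this (a sketch only; the zone name is made up, and the
parseable field order should be checked against zoneadm(1M)):

# `zoneadm list -p` prints zoneid:zonename:... for running zones;
# pull out the zoneid_t for a given name before handing it to the kernel.
zoneadm list -p | awk -F: -v zone=myzone '$2 == zone { print $1 }'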

But doing this begets the original problem - there
would seem to be no correct way of specifying all
zones in the zoneid_t if ALL_ZONES is also private.

What I'd like to do is offer an API to consumers in which
they can specify a zone by virtue of its name and resolve
that internally inside the kernel.  In such a design, I might
choose to make a NULL name a wildcard, whilst allowing
the API to also be specific to the global zone or other
local zones.  Thus I'd prefer to avoid exposing zoneid_t's
to the consumers completely, if I can.

Comments?

Darren



Re: [zones-discuss] Zones y diferentes redes

2007-10-31 Thread Wences Michel
Howdy Claudio!

I am not sure exactly what you are trying to do, but I will take a stab
at it.

The first thing to do is set up your network interface card (NIC) for
the global zone.
Make sure the NIC is installed normally; there is no need to set routes.
Next you need to create network interfaces for your non-global zones.
The following example shows how to create two new zones and add the
network interfaces for zone1 and zone2.

global# zonecfg -z zone1
zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/zoneroots/zone1
zonecfg:zone1> set autoboot=true
zonecfg:zone1> add net
zonecfg:zone1:net> set address=192.9.200.67
zonecfg:zone1:net> set physical=hme0
zonecfg:zone1:net> end
zonecfg:zone1> ^D

global# zonecfg -z zone2
zone2: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone2> create
zonecfg:zone2> set zonepath=/zoneroots/zone2
zonecfg:zone2> set autoboot=true
zonecfg:zone2> add net
zonecfg:zone2:net> set address=192.9.201.67
zonecfg:zone2:net> set physical=hme0
zonecfg:zone2:net> end
zonecfg:zone2> ^D

The physical NIC is in the global zone, and virtual NICs are created
for the non-global zones.
Traffic between the global zone and non-global zones goes over the
local network loopback, so it never leaves the box and there is no
need to set up routes.
Before Solaris 10 8/07 there was no way to dedicate a physical NIC to
a zone.
In Solaris 10 8/07 and Solaris Express Developer Edition you can
dedicate a network card to a zone using IP Instances.
IP Instances allow you to dedicate a NIC to a non-global zone, and you
can then set up default routes, place a firewall, etc...
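
A rough sketch of that exclusive-IP configuration (the zone name and
NIC are made up, and the NIC must not already be in use by the global
zone):

global# zonecfg -z zone3
zonecfg:zone3> create
zonecfg:zone3> set zonepath=/zoneroots/zone3
zonecfg:zone3> set ip-type=exclusive
zonecfg:zone3> add net
zonecfg:zone3:net> set physical=e1000g1
zonecfg:zone3:net> end
zonecfg:zone3> commit
zonecfg:zone3> exit

With an exclusive-IP zone, the IP address, default route, and any
firewall are configured from inside the zone rather than in zonecfg.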

So I think your problem is the routes that you have set up in the
global zone.
Get rid of the routes in the global zone, and I think everything will
work as expected.

Hope this helps.

Thanks!

Wences


Claudio J. Chiabai wrote:
 I am trying to configure zones on a different network from the global
 zone, and I am somewhat confused and lost. :-) The global zone would
 belong to a different subnet than the one the zones use. I have seen
 several posts, but I only get so far.

 The global zone is completely configured and functional. It can be
 pinged from anywhere, and the machine responds. Its services (for
 example SSH) can be reached without problems. I configure the zones
 the standard way (zonecfg), give them an IP address, and assign them
 the network device.

 I cannot ping the running zone from anywhere. It does not respond from
 outside or from inside the company network, not even from the global
 zone itself.

 Investigating a bit, I came across this post:
 http://forum.java.sun.com/thread.jspa?threadID=5075797&messageID=9276196

 I followed the routing steps at the end of the post. With those, the
 zone responds to ping from anywhere, from outside and from inside the
 network. I can ping the non-global zone from the global zone, and I
 can even ping any external or internal machine. So I have no problem
 at all with ping in either direction.

 But, for example, I cannot get in via ssh. I can only do that from the
 global zone.

 Does anyone know how to set up routing correctly between the global
 zone and a non-global zone?
 I have no problem providing any information that may be needed.
 Thanks in advance for any guidance you can give me ...


Re: [zones-discuss] [security-discuss] NFS: Cannot share a zfs dataset added to a labeled zone

2007-10-31 Thread Glenn Faden
There are some interesting ordering issues with respect to the steps 
required for this configuration:

1. The dataset's mount point must be within the zone's root path for it 
to be mounted read-write within that zone (you can't use lofs).

2. The dataset should not be mounted (by the global zone) at the time
the (restricted) zone is booted; otherwise the zone boot fails.

3. The default mount point should be changed after the dataset is
created but before the dataset is assigned to the zone.

4. The mount point can be changed from within that zone after it is 
mounted (but only to a pathname within the zone).

5. When you specify that the dataset belongs to the zone (via zonecfg) 
it is mounted by the zone when the SMF service filesystem/local runs. 
This happens after the zoneadm boot command completes.

6. The sharing of the mounted filesystem must be done from the global 
zone since labeled zones can't be NFS servers.

When I looked at this more closely (after my second posting) I realized
that it worked for me by accident (sorry). I did the share command by
hand after I'd verified that the dataset was properly mounted in the
restricted zone. But then I told you to edit the dfstab file without
verifying that it would work. As you have reported, it doesn't.

The problem is that the share command in the dfstab file is processed
by the zoneadm command (which runs in the global zone) and normally
occurs after all filesystems are mounted (or so I thought). However, in
the case of zfs datasets, they actually get mounted later (by the zone
itself, not by zoneadm), so you wind up sharing the mount point before
it is actually mounted. That makes the mount point busy and causes the
SMF service for mounting local filesystems to fail. The result is that
the zone is unusable.

The obvious workaround is to remove the entry from dfstab and do the
share later in the global zone. I don't have a very elegant solution
for automating this. All I can think of is a script which does
something like this:

MP=your-global-zone-mount-path
NOT_MOUNTED=1
while [ $NOT_MOUNTED -ne 0 ]; do
    mount -p | grep "$MP" > /dev/null
    NOT_MOUNTED=$?
    sleep 1
done
share $MP

I haven't explored other solutions, but it may be possible to express 
interest in an SMF property to determine when the zone's local 
filesystem service has completed.

It has been suggested that the share attribute could be specified via
the zfs(1M) share option, but this won't work since it would be
interpreted in the labeled zone instead of the global zone. Similarly,
sharemgr doesn't seem to provide any special support for this case.

Another source of confusion is the specification of the mount point. If
you are setting it in the global zone, you need to prefix it with the
zone's root path. But once the zone is running, it can be set from
within the zone. In that case, the zone's root path should not be
specified; otherwise you get that string repeated, which is not what
you want.
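
For example, using the dataset names from Danny's post (the in-zone
form assumes the dataset has been delegated to the zone with zonecfg's
add dataset):

# From the global zone: prefix the path with the zone's root
zfs set mountpoint=/zone/restricted/root/data zone/data

# From inside the running zone: no prefix
zfs set mountpoint=/data zone/data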

I'm sorry this turned out to be a bit more complicated than I thought
at first. But it can be made to work.

--Glenn

