On Tue, Nov 29, 2011 at 10:25 AM, <casper....@oracle.com> wrote:
>
>> I think the "too many open files" is a generic error message about
>> running out of file descriptors. You should check your shell ulimit
>> information.
>
> Yeah, but mv shouldn't run out of file descriptors, or should be
> able to deal with that.
>
> Are we moving a tree of files?
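For what it's worth, "too many open files" is the per-process EMFILE error, so the first thing to look at is the soft/hard descriptor limits of the process that failed. A quick check from a shell (bash/ksh builtin syntax; the Solaris prctl line is shown as a comment since it only exists there):

```shell
# Check the per-process file-descriptor limits for this shell
ulimit -Sn    # soft limit: enforced right now
ulimit -Hn    # hard limit: the ceiling the soft limit can be raised to

# On Solaris, the project-level resource control can also cap FDs;
# inspect it for the current process with:
#   prctl -n process.max-file-descriptor $$
```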
Recently I had some very weird experiences with running out of FDs while patching Solaris 10 U8 boxes with zones using Live Upgrade.

Of the 4 very similar systems, two servers required raising the FD limits ridiculously high (IMO) for the zones using projmod. In this case, the running zones (not the snapshots being patched) were idle.

The other two servers required me to change /etc/system in the running zones, reboot them, and then use LU to clone them before the patching would work. I had raised the projmod limit to 2 million, and it still wouldn't work. In this case the running zones were doing "real work".

Aside from the 2 where the zones were doing "real work", these servers were ostensibly identical: built at the same time using jumpstart, zones built by cloning a dummy zone and applying changes via a script, the same production application installed, the same other changes made on all 4 of them, etc.

Based on this experience, I don't believe FD limits are consistently enforced throughout the OS. I've not yet submitted an SR about it due to some unrelated non-technical issues I've had to deal with.

=Nadine=
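P.S. For anyone hitting the same wall, the two knobs mentioned above look roughly like this (Solaris syntax from memory; the project name and values are illustrative, not a recommendation):

```shell
# Raise the per-process FD resource control for a project via projmod
# ("user.appuser" is a placeholder project name; 2000000 matches the
# value I tried, "deny" makes the limit hard)
projmod -s -K "process.max-file-descriptor=(privileged,2000000,deny)" user.appuser

# Or raise the system-wide defaults in the zone's /etc/system
# (requires a reboot of the zone to take effect):
#   set rlim_fd_max = 2000000
#   set rlim_fd_cur = 65536
```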