Re: [U2] AIX Argument list too long
cd /ud/TEST/_PH_
find . -mtime +90 -exec rm {} \;

With find you're working with one file at a time, so you should never hit the limit. Another point to remember is that with xargs you can control the number of files in each rm. For example:

find . -mtime +90 | xargs -l20 rm

deletes 20 at a time. A little testing will help you pick a sensible number, but this speeds things up a lot.

HTH,
Adrian

---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/
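Adrian's batching idea can be sketched against throwaway files (a sketch, not his exact command: it assumes a GNU/POSIX xargs where the batch size is spelled -n, whereas his AIX xargs spelled it -l20):

```shell
# Scratch directory so the demo cannot touch real data.
demo=$(mktemp -d)
cd "$demo"
touch file1 file2 file3 file4 file5

# Hand rm at most two file names per invocation; five files therefore
# cost three rm processes instead of five, and no single argument list
# can ever grow past the kernel limit.
find . -type f -name 'file*' -print | xargs -n 2 rm
```

Because each rm sees at most the batch size, the "Argument list too long" error cannot occur no matter how many files the find emits.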
Re: [U2] AIX Argument list too long
REPOSTED FOR NON-MEMBER ADDRESS: [EMAIL PROTECTED]

One thing everyone should remember when using find is that it is best to specify the exact directory that you want it to work on.

cd /ud/production/_PH_
find . -mtime +90 -exec rm {} \;

That works fine if you are manually typing it in and have confirmed that you are in the proper directory. But a dangerous problem arises if you are running the find command in a script, or you neglect to verify what directory you are in. The . tells find to start searching in the current directory. For example:

#!/bin/ksh
cd /ud/production/_PH_
find . -mtime +90 -exec rm {} \;

The above script works perfectly until the directory structure is changed or a drive fails to mount. What happens then is that the cd line fails but the find command still runs, and it starts executing in whatever directory the script was launched from. If it is a cron job run by root, the active directory is /. The find command will then go through the entire disk happily deleting any file older than 90 days, and before you know it, you have a lot of extra drive space.

To stop that from happening, always specify the directory that you want find to start looking in. Don't use the .; instead type it in like this:

find /ud/production/_PH_ -mtime +90 -exec rm {} \;

Sure, it is a little more typing, but in the long run it will save you a lot of extra work restoring files.
Here is how I script my routines to purge old files:

#!/bin/ksh
if cd /ud/production/_PH_
then
    find /ud/production/_PH_ -mtime +90 -exec rm {} \;
else
    echo "Directory is missing"
fi

If the /ud/production/_PH_ directory is missing or unmounted, the script does nothing more than print "Directory is missing"; it skips the find command altogether. The find command also states explicitly what directory to work in.

Jim
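Jim's guard can be compressed to one line with ||. The sketch below stubs his production path with a temp directory and a hypothetical purge.sh so it can be exercised anywhere; it is not his script verbatim:

```shell
# Stand-in location for the script (hypothetical, for the demo only).
purge_dir=$(mktemp -d)

cat > "$purge_dir/purge.sh" <<'EOF'
#!/bin/sh
# Abort at once if the directory is missing or unmounted, so the find
# below can never run from whatever directory cron happened to be in.
cd "$1" || exit 1
find "$1" -type f -mtime +90 -exec rm {} \;
EOF
chmod +x "$purge_dir/purge.sh"

# Against a missing directory the script must fail before find runs.
status=0
"$purge_dir/purge.sh" /no/such/dir 2>/dev/null || status=$?
```

The || exit 1 has the same effect as Jim's if/else: a failed cd stops the script cold instead of letting find loose on the wrong directory.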
Re: [U2] AIX Argument list too long
NCARGS value configuration (5.1.0)

In AIX 5L Version 5.1, an option has been added to allow the super user, or any user belonging to the system group, to dynamically change the value of the NCARGS parameter. In previous releases of AIX, this value was permanently defined as 24576, which resulted in a problem similar to that shown below when a large number of arguments are passed to a command:

# rm FILE*
ksh: /usr/bin/rm: 0403-027 The parameter list is too long.

The value of NCARGS can be increased to overcome this problem. It can be tuned anywhere within the range of 24576 to 524288, in 4 KB page size increments:

ncargs
  Purpose:   Specifies the maximum allowable size of the ARG/ENV list
             (in 4 KB blocks) when running exec() subroutines.
  Values:    Default: 6; Range: 6 to 128
  Display:   lsattr -E -l sys0 -a ncargs
  Change:    chdev -l sys0 -a ncargs=NewValue
             Change takes effect immediately and is preserved over boot.
  Diagnosis: Users cannot execute any additional processes because the
             argument list passed to the exec() system call is too long.
  Tuning:    This is a mechanism to prevent the exec() subroutines from
             failing if the argument list is too long. Note that tuning
             to a higher ncargs value puts additional constraints on
             system memory resources.

[EMAIL PROTECTED] wrote:
> Hi all. This is a bit off-topic, but I believe the expertise is here...
> I have routines that parse through Unix files in a directory and remove
> them based on age. If the number of files exceeds some limit I've not
> been able to narrow down, I get a response from the scripts that it
> can't do the job because there's too many files in the directory. Does
> anyone know:
> 1. what I'm talking about and what causes it?
> 2. how to solve this through some tunable parameter, preferably not
>    requiring a kernel rebuild?
> TIA,

--
Jeff Schasny - Denver, Co, USA
jeff at schasny dot com
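The ceiling NCARGS controls on AIX has a portable name, ARG_MAX, which any POSIX system can report (a sketch; lsattr/chdev above remain the AIX-specific way to read and change it):

```shell
# ARG_MAX is the byte budget for the argv array plus the environment
# passed to exec() - the same quantity the AIX ncargs tunable sizes
# (its default of 6 x 4 KB blocks is the 24576 mentioned above).
arg_max=$(getconf ARG_MAX)
echo "ARG_MAX is $arg_max bytes"
```

Knowing the number confirms the diagnosis: a wildcard that expands past it is exactly what produces "The parameter list is too long."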
RE: [U2] AIX Argument list too long
Use the find statement to select the files you want to process (man find will explain), and then let find execute the mv or rm or whatever you need. Example:

find . -name '*tobedeleted*' -exec rm {} \; -print

The -exec will execute something, substituting {} with the current file name; the \; ends the execute parameters; -print will print the file name on the console.

Hope this helps,
Andre

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
Sent: Tuesday, 21 August 2007 15:55
To: u2-users@listserver.u2ug.org
Subject: [U2] AIX Argument list too long

> snip
> TIA,
> --
> Karl Pearson
> Director of I.T.
> ATS Industrial Supply, Inc.
> [EMAIL PROTECTED]
> http://www.atsindustrial.com
> 800-789-9300 x29
> Local: 801-978-4429
> Fax: 801-972-3888
> To mess up your Linux PC, you have to really work at it; to mess up a
> Microsoft PC you just have to work on it.
RE: [U2] AIX Argument list too long
> Does anyone know:
> 1. what I'm talking about and what causes it?

Commands like rm have what appears to be (not a scientific analysis, and if memory serves) an 8K name-space limit, including delimiters...

> 2. how to solve this through some tunable parameter, preferably not
>    requiring a kernel rebuild?

Don't know of one, but neither have I looked; the options I would use depend upon how you are parsing the directory. If you are using the Unix find command, you could exec rm if your test is true, which deletes one file at a time as they are encountered. Alternatively, you could dump the list to a file and do a for... command in AIX to delete the files, one at a time in a loop. I'm sure there are other options; these are the two I use most often, and they are usually scripted so that, in the latter case, the file I created with the list of files to be deleted is also deleted.

Bob Wyatt
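Bob's second option (dump the list to a scratch file, then loop) might look like the sketch below. A while read loop stands in for his AIX for... command, since read processes one line per iteration without word-splitting the names:

```shell
workdir=$(mktemp -d)            # stand-in for the real directory
touch "$workdir/a.old" "$workdir/b.old" "$workdir/keep.txt"

list=$(mktemp)                  # scratch file holding the hit list
find "$workdir" -type f -name '*.old' > "$list"

# Delete one file per iteration, then remove the list itself,
# exactly as Bob describes doing in his scripts.
while read -r f; do
    rm "$f"
done < "$list"
rm "$list"
```

Since rm is invoked with a single argument each time, the argument-list ceiling never comes into play.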
Re: [U2] AIX Argument list too long
There is a parameter somewhere that lets you control the maximum size of an AIX input statement, but not only do I not know where it is, if there's a max you're inevitably going to run into it periodically no matter how big it is. So rather than using AIX's poor excuse for globbing, I've been using find with the -exec parameter to remove files that are older than a certain date, such as:

cd /ud/TEST/_PH_
find . -mtime +90 -exec rm {} \;

With find you're working with one file at a time, so you should never hit the limit.

-K
RE: [U2] AIX Argument list too long
Karl:

In backup scripts I do the following:

# remove any local archives over 10 days old
echo "Removing 10 day old local archive file(s)..."
find $LOCAL_ARCHIVE -name '*.tgz' -mtime +10 -exec rm {} \;
find $LOCAL_ARCHIVE -name '*.bkf' -mtime +10 -exec rm {} \;
find $LOCAL_ARCHIVE -name '*.rar' -mtime +10 -exec rm {} \;
echo "Ten day old local archives removed from disk on `date`" >> $LOG_FILE

...which cleans up the older backup files.

Hope this helps,
Bill
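Bill's three passes can be folded into a single traversal with find's -o operator. A sketch, with his $LOCAL_ARCHIVE stubbed by a temp directory; the -mtime +10 test is dropped here only so that freshly created demo files match, and his real script would keep it before -exec:

```shell
LOCAL_ARCHIVE=$(mktemp -d)      # stub for Bill's variable
touch "$LOCAL_ARCHIVE/old.tgz" "$LOCAL_ARCHIVE/old.bkf" \
      "$LOCAL_ARCHIVE/old.rar" "$LOCAL_ARCHIVE/keep.txt"

# One pass matching any of the three archive extensions; the escaped
# parentheses group the -o alternatives so -exec applies to all three.
find "$LOCAL_ARCHIVE" -type f \
     \( -name '*.tgz' -o -name '*.bkf' -o -name '*.rar' \) \
     -exec rm {} \;
```

On a large archive directory this walks the tree once instead of three times, which matters more as the file count grows.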
RE: [U2] AIX Argument list too long
I usually use the 'find' command with '-mtime' to pare down the list of files in the directory that I am dealing with. For instance, if I wanted to look at all files over 30 days old, I would:

find . -type f -mtime +30 -exec ls -l {} \;

(I am assuming that I have done a 'cd' to the directory. Otherwise the '.' in the command needs to be a directory path.)
Re: [U2] AIX Argument list too long
If you are using the ls command to list the files in the directory, try using the find command instead. The ls command is faster, but it has issues with really large numbers of files.

--
Charlie Rubeor
Senior Database Administrator
Wiremold/Legrand
60 Woodlawn Street
West Hartford, CT 06110
Tel: 860-233-6251 x3498
Fax: 860-523-3690
Email: [EMAIL PROTECTED]
Re: [U2] AIX Argument list too long
Hi Karl,

We ran into this too, and the Unix xargs command took care of it nicely. Example: the following purge will delete the files for a single month (assuming you don't keep more than a year's worth, in which case you'll need to check the year, too). This string will prompt before purging each file:

ls -ltr | grep Mar | cut -c 55-99 | grep -v Mar | xargs -p -i rm {}

Removing the -p purges everything that matches without prompting first:

ls -ltr | grep Mar | cut -c 55-99 | grep -v Mar | xargs -i rm {}

Our ls -ltr output looks like this:

-rw-rw-rw- 1 phantom users  654558 Mar 1 02:05 FILE1.DAT
-rw-rw-rw- 1 phantom users  638108 Mar 1 02:05 FILE2.DAT
-rw-rw-rw- 1 phantom users 7867810 Mar 1 02:14 FILE3.DAT
-rw-rw-rw- 1 phantom users  593882 Apr 1 02:04 FILE4.DAT
-rw-rw-rw- 1 phantom users  569765 Apr 1 02:04 FILE5.DAT
-rw-rw-rw- 1 phantom users 7814653 Apr 1 02:15 FILE6.DAT

The second grep -v makes sure that I don't purge a file whose name contains the 3-character month.

Hope this helps,
Tom Derwin

[EMAIL PROTECTED] 08/21/07 9:54 AM
> I have routines that parse through Unix files in a directory and remove
> them based on age. If the number of files exceeds some limit I've not
> been able to narrow down, I get a response from the scripts that it
> can't do the job because there's too many files in the directory.
> snip
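Tom's -i flag is spelled -I {} in current GNU xargs (-i is deprecated there). A sketch of the same one-name-per-rm substitution against throwaway files, matching on the file name rather than Tom's ls date column so the demo does not depend on timestamps:

```shell
demo=$(mktemp -d)
cd "$demo"
touch MAR1.DAT MAR2.DAT APR1.DAT

# -I {} substitutes each incoming line for {} in the rm command,
# one invocation per file; add -p to be prompted, as in Tom's
# first pipeline.
ls | grep MAR | xargs -I {} rm {}
```

Because -I implies one invocation per line, it trades the batching efficiency of plain xargs for per-file control, which is what makes the -p prompting form possible.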
Re: [U2] AIX Argument list too long
This is a common problem, where the shell expands the argument list until it blows up. The most common way around this is to use xargs:

http://www.unixreview.com/documents/s=8274/sam0306g/

/Scott Ballinger
Pareto Corporation
Edmonds WA USA
206 713 6006
Re: [U2] AIX Argument list too long
On 8/21/07, Kevin King [EMAIL PROTECTED] wrote:
> cd /ud/TEST/_PH_
> find . -mtime +90 -exec rm {} \;
> With find you're working with one file at a time so you should never
> hit the limit.

Yes, however using find + xargs is more efficient than executing rm on each individual instance of the found file. From http://www.unixreview.com/documents/s=8274/sam0306g/ :

The modern Unix OS seems to have solved the problem of the find command overflowing the command-line buffer. However, using the find -exec command is still troublesome. It's better to do this:

# remove all files with a txt extension
find . -type f -name '*.txt' -print | xargs rm

than this:

find . -type f -name '*.txt' -exec rm {} \; -print

Controlling the call to rm with xargs is more efficient than having the find command execute rm for each object found.

/Scott
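POSIX find now also offers a built-in batching form, -exec ... {} +, which gets the xargs efficiency without a pipeline (a sketch; this form may not exist on older AIX finds, which is an assumption worth checking with man find):

```shell
demo=$(mktemp -d)
cd "$demo"
touch a.txt b.txt c.txt notes.log

# '{} +' packs as many file names as fit into each rm invocation,
# like xargs, instead of forking one rm per file as '{} \;' does.
find . -type f -name '*.txt' -exec rm {} +
```

It also sidesteps the classic find | xargs pitfall of file names containing whitespace, since the names never pass through a text stream.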
Re: [U2] AIX Argument list too long
...and an, ahem, interesting side effect is that if you're running transaction logging, and you have more actual tlog files than the value of this parameter, transaction logging will fail to start, and will give you a "log files full" error message, even with completely empty available log files.

From: Jeff Schasny [EMAIL PROTECTED]
Reply-To: u2-users@listserver.u2ug.org
To: u2-users@listserver.u2ug.org
Subject: Re: [U2] AIX Argument list too long
Date: Tue, 21 Aug 2007 08:51:30 -0600

> NCARGS value configuration (5.1.0)
> snip