Can't you systematically dump it to flat files, delete the period you just 
dumped, zip the files, and then keep going? 



Your version still has RUNMACRO. Try this: 



"<Directory Name >\BMC Software\ARSystem\runmacro.exe" -o "<Directory 
Name>\Data\<arx form name>" -x "<Server Name>" -U "Demo" -P "<password>" -f 
"<Form Name>" -t arx -a <<port name> -q "'3' < ($DATE$ -(10*60*60*24*1))" 

<Some delete mechanism> 

"<Directory Name>\BMC Software\ARSystem\runmacro.exe" -o "<Directory 
Name>\Data\<arx form name>" -x "<Server Name>" -U "Demo" -P "<password>" -f 
"<Form Name>" -t arx -a <<port name> -q "'3' < ($DATE$ -(9 *60*60*24*1))" 

. . . (and so on, one runmacro/delete pair per period) . . . 

zip -r "<Direct or y Name>\ All.zip" " <Directory Name> \Data\*.*" 



If you are on 8.0, you can use arexport. 



I may not have the syntax exactly right, but the idea is to systematically 
dump a year's worth of data to a directory and then ZIP it. ARX or XML is 
plain text, so ZIP loves it. 

You can also run a macro that deletes each year's worth of data as you dump it. 
The syntax is in the RUNMACRO description (-e, I believe). I believe you used to 
be able to record a delete in a macro. 
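
If -e is in fact the switch that names a macro, the "<Some delete mechanism>" 
step above might look roughly like this (unverified, and the macro name is a 
placeholder): 

"<Directory Name>\BMC Software\ARSystem\runmacro.exe" -x "<Server Name>" -U 
"Demo" -P "<password>" -e "<delete macro name>" 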



You could probably write a Filter to delete records systematically using Run 
Process with Application-Delete-Entry "<form name>" <entry ID> and some type 
of Table Loop. 
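
As a rough sketch (the audit form name is a placeholder, and $Entry ID$ 
assumes the loop hands you the record being walked), the action would be 
something like: 

Run Process: Application-Delete-Entry "<Audit Form Name>" $Entry ID$ 

You would still want a qualification (Create Date older than whatever you 
have already dumped) so the loop only deletes what is safely on disk. 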



Also, I believe that if you set up Archiving, it will dump the data to an 
archive form (no indexes) and can dump the archive to a .arx or .xml file. I 
would set up archiving anyway so that this doesn't happen again. 



Gordon 

----- Original Message -----


From: "Warren R. Baltimore II" <warrenbaltim...@gmail.com> 
To: arslist@ARSLIST.ORG 
Sent: Wednesday, November 14, 2012 10:04:03 AM 
Subject: HUMONGOUS table 

ARS 6.3 patch 16 
ITSM 5.5 
Oracle 10 
Solaris 
  
I've got an audit trail table that has been quietly working for about 8 years 
now.  We started seeing an issue about a month ago that is related to our 
AST:Asset table.  Whenever a change is made and someone is associated with an 
asset, the system grinds to a halt.  Usually, the change will time out, but it 
will update. 
  
The problem is that at least once a day (usually in the morning) we will get a 
malloc error.  For some reason, the server is not recycling itself when this 
happens so I have to do it. 
  
I've run all sorts of logs, and have come to the conclusion that it's the push 
field to the audit file that is causing the problems. 
  
The Filter was built using the old 1=0 trigger.  I believe that this is 
triggering a table scan against the Audit Trail.  The Audit Trail was never 
built to clean itself up and it has over 57 MILLION records!  
  
Anybody have any idea on a quick, easy, surgical method for knocking this thing 
down to a more manageable size without killing my server? 
  
Also, I know that in later versions, the need to use 1=0 went away.  Any ideas 
if it was still necessary in 6.3?  I've tried the alternate method, but have 
not had success. 
  
Thanks in advance! 

-- 
Warren R. Baltimore II 
Remedy Developer 
410-533-5367 
