And replying to Scott's post to say "thanks for the compliment", but
I've just had another idea ...

How many fields are you using for your select on the master file?
ESPECIALLY if it's just the date, trans that across to your secondary
file, and index it! If it's more than one field, try to work out a
usable trans that will pull all the fields across into one index that
you can run a select over.

Don't forget, declaring an index means that the data is stored in the
file, whether it was calculated, trans'd, or was in the file anyway.

So now you can do a select and purge on your secondary file without ever
having to go near the primary, and the database will take care of making
sure all the required information is available as it's needed ... :-)
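
As a sketch only (the file names MASTER and SECFILE, the field number 3,
and the dictionary item MAST.DATE are all made up for illustration): an
I-descriptor on the secondary file can trans the master's date across,
and building an index on it stores that value with the secondary file:

```
ED DICT SECFILE MAST.DATE
001: I
002: TRANS('MASTER', @ID, 3, 'X')
003: D2/
004: Master Date
005: 10R
006: S

CREATE.INDEX SECFILE MAST.DATE
BUILD.INDEX SECFILE MAST.DATE

SELECT SECFILE WITH MAST.DATE LT "01/01/2000"
DELETE SECFILE
```

This assumes the secondary file's record IDs are valid keys into MASTER;
if they differ, the TRANS key expression would need adjusting.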

Cheers,
Wol

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Scott Richardson
Sent: 15 March 2004 13:08
To: U2 Users Discussion List
Subject: Re: Help Needed regarding performance improvement of delete
query

Great points from Wol, as always.

What kind of /tmp disk space do you have on this system?
(Assuming that /tmp is where UV does some of its SELECT
scratch-pad intermediate writing when processing large queries -
consult your site's actual uvconfig for all of your actual values...).

If this /tmp is small, single physical disk, or heavily fragmented,
this would also contribute to poor query runtime performance.
Ditto on your system's swap space, which should be at least
2X physical memory.

Wol's approach of breaking down the query into selecting
smaller groups of data is a great one. Chip away at the stone,
methodically, consistently, and constantly.

What platform is this on?
What OS version?
What UV Version?
How much memory & disk space?
How much /tmp and swap space?

Are you running this query with other users on the system, who
may also be trying to access the files this query is working with?

Are you running this at night, when it might conflict with a backup
operation?

More food for thought.

Regards,
Scott

----- Original Message ----- 
From: "Anthony Youngman" <[EMAIL PROTECTED]>
To: "U2 Users Discussion List" <[EMAIL PROTECTED]>
Sent: Monday, March 15, 2004 4:02 AM
Subject: RE: Help Needed regarding performance improvement of delete
query


This might help speed things up a bit ...

Firstly, of course, is your file properly sized?
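
(As a sketch, with a hypothetical file name BIGFILE: FILE.STAT or
ANALYZE.FILE will show you the modulo and overflow, and RESIZE will fix
a badly sized static file - the type/modulo/separation values below are
purely illustrative, not recommendations:)

```
FILE.STAT BIGFILE
ANALYZE.FILE BIGFILE
RESIZE BIGFILE 18 400003 4
```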

Secondly, (and in this case you will need to run the SELECT / DELETE
sequence several times) try putting a SAMPLE 1000 (or whatever number
makes sense) at the end of your select.

Basically, this means the SELECT runs until it finds that number of
records and then stops, so each pass won't load the system so badly.
Creating a huge select list will stress your RAM; looping through the
sequence in smaller chunks won't stress the system nearly as much,
though you really do need to use indices to reduce the stress even more ...
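
As a sketch (file and dictionary names here are hypothetical), each pass
at TCL would look like this, repeated until the SELECT comes back with
no records:

```
SELECT BIGFILE WITH PURGE.DATE LT "01/01/2000" SAMPLE 1000
DELETE BIGFILE
```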

Create an index on various fields that you're using as your select
criteria. If you're selecting old records, then you need to select on
date, and this really will make life both easy and fast. The more
closely you can guarantee that a select, acting on a single index, will
pick up only or mostly records that you are going to delete, the better.
That will SERIOUSLY reduce the time taken and the performance hit.
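
(A minimal sketch, assuming a dictionary field PURGE.DATE holds the date
you're selecting on - names are illustrative:)

```
CREATE.INDEX BIGFILE PURGE.DATE
BUILD.INDEX BIGFILE PURGE.DATE
SELECT BIGFILE WITH PURGE.DATE LT "01/01/2000"
```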

Cheers,
Wol

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of ashish ratna
Sent: 15 March 2004 08:25
To: [EMAIL PROTECTED]
Subject: Help Needed regarding performance improvement of delete query

Hi All,

We are working on purging old data from the database, but we are
facing performance problems with this.

We are using a select query that is created dynamically based on the
number of records. We want to know if there is any limit on the size of
a query in UniVerse.

Although the UniVerse help PDF says there is no limit on the length of
a select query, when we run the program on a file with more than 0.5
million records it gives the error:

"Pid 14433 received a SIGSEGV for stack growth failure. Possible causes:
insufficient memory or swap space, or stack size exceeded maxssiz.

Memory fault(coredump)"

If there is no limitation on the size of the query, then please suggest
some other possible solution which can help us reduce the query time and
complete the process successfully without giving the error.

Thanks in advance.

Regards,

Ashish.





*************************************************************************************

This transmission is intended for the named recipient only. It may
contain
private and confidential information. If this has come to you in error
you
must not act on anything disclosed in it, nor must you copy it, modify
it,
disseminate it in any way, or show it to anyone. Please e-mail the
sender to
inform us of the transmission error or telephone ECA International
immediately and delete the e-mail from your information system.

Telephone numbers for ECA International offices are: Sydney +61 (0)2
9911
7799, Hong Kong + 852 2121 2388, London +44 (0)20 7351 5000 and New York
+1
212 582 2333.

*************************************************************************************

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users



