Tony,


You can leave the rowcount in.  Do this:

create procedure storedProc_Kill_91stDay
AS
begin
    set nocount on

    set transaction isolation level serializable
    set rowcount 30000  -- cap each delete at 30000 rows

    declare @myDateDiff bigint
    declare @myNumDeletes bigint

    -- calculate the cutoff (91 days ago as seconds since the epoch) once
    select @myDateDiff = dateDiff(ss, '01/01/1970', dateAdd(d, -91, getDate()))

    select @myNumDeletes = count(timeID)
    from newBackupReports
    where timeID <= @myDateDiff

    -- keep deleting in 30000-row chunks until the count is used up
    while (@myNumDeletes > 0)
    begin
        delete from newBackupReports
        where timeID <= @myDateDiff

        select @myNumDeletes = @myNumDeletes - 30000
    end

    set rowcount 0  -- reset so later statements aren't limited

    set nocount off
end
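For what it's worth, you can sanity-check the chunk-until-done loop outside SQL Server. Here's a rough sketch in Python against an in-memory SQLite table (table name borrowed from above, row counts and the LIMIT-subquery trick are just for the demo; SQL Server 2000 does the capping via SET ROWCOUNT instead):

```python
import sqlite3

CHUNK = 30000

conn = sqlite3.connect(":memory:")
conn.execute("create table newBackupReports (timeID integer)")
conn.executemany(
    "insert into newBackupReports values (?)",
    [(i,) for i in range(70000)],  # 70000 "old" rows to purge
)

cutoff = 69999  # everything <= cutoff should go
deleted = 0
while True:
    # SQLite caps the batch with a LIMIT subquery on rowid;
    # this plays the role of SET ROWCOUNT 30000 in the proc above
    cur = conn.execute(
        "delete from newBackupReports where rowid in "
        "(select rowid from newBackupReports where timeID <= ? limit ?)",
        (cutoff, CHUNK),
    )
    conn.commit()
    if cur.rowcount == 0:
        break  # nothing left under the cutoff
    deleted += cur.rowcount

remaining = conn.execute(
    "select count(*) from newBackupReports"
).fetchone()[0]
print(deleted, remaining)  # 70000 0
```

The loop exits on "no rows deleted" rather than counting down from an up-front COUNT, which also copes if rows trickle in while the purge runs.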

Of course, keep in any other changes that you have made (removing indexes,
etc.).


Steve

-----Original Message-----
From: Tony Weeg [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 05, 2004 4:34 PM
To: CF-Talk
Subject: Re: cfmx - request timeout setting

got that...and did that in the mean time, but is there a way to chunk
the data, like delete in 30000 row blocks, and start again, etc.?

someone had suggested that as the way to do that?

tw

On Thu, 5 Aug 2004 16:26:34 -0400, DURETTE, STEVEN J (AIT)
<[EMAIL PROTECTED]> wrote:
> Tony,
>
> Remove the "set rowcount 30000".  It is limiting the delete to only the
> first 30000 rows it encounters.
>
> Also, if you delete that, don't forget to delete the "set rowcount 0" at
> the end.
>
> Steve
>
>
> -----Original Message-----
> From: Tony Weeg [mailto:[EMAIL PROTECTED]
> Sent: Thursday, August 05, 2004 4:01 PM
> To: CF-Talk
> Subject: Re: cfmx - request timeout setting
>
> steve.
>
> problem.
>
it's only deleting 30000 rows, not all of them.
>
> how can this work, deleting all, and not stopping @ 30000 rows deleted?
>
> thanks.
> tony
>
> On Thu, 5 Aug 2004 14:12:39 -0400, Tony Weeg <[EMAIL PROTECTED]> wrote:
> > well...
> >
> > thank you steve.
> >
> > ive just got it down to like 5 min to delete 2 million rows.
> >
> > timeID is an int, and yes its indexed.
> >
> > thanks for the ideas...
> >
> > tony
> >
> >
> >
> >
> > On Thu, 5 Aug 2004 14:06:42 -0400, DURETTE, STEVEN J (AIT)
> > <[EMAIL PROTECTED]> wrote:
> > > Tony,
> > >
> > > A few things that might help.
> > >
> > > 1) Use set nocount on and off.
> > > 2) Calculate the datediff once instead of multiple times. (not too sure
> > > on this, I have seen it recalc not at all or on every record, whatever
> > > the sql server decides).
> > >
> > > Like this:
> > >
> > > create procedure storedProc_Kill_91stDay
> > > AS
> > > Begin
> > >    set nocount on
> > >
> > >    set transaction isolation level serializable
> > >    set rowcount 30000
> > >
> > >    declare @myDateDiff bigint
> > >    select @myDateDiff = dateDiff(ss, '01/01/1970', dateAdd(d, -91,
> > > getDate()))
> > >
> > >    delete from newBackupReports
> > >    where timeID <= @myDateDiff
> > >
> > >    set rowcount 0
> > >
> > >    set nocount off
> > > end
> > >
> > > This might speed it up.  Note the set rowcount 0 should reset the
> > > rowcount back to where it should be.
> > >
> > > Also, what kind of field is timeID?  There might be an even faster way
> > > depending on the type.  Also, is this table indexed?  I have heard that
> > > if you drop the index, do your delete, and recreate the index, it will
> > > work faster.
> > >
> > > Steve
> > >
> > > -----Original Message-----
> > > From: Tony Weeg [mailto:[EMAIL PROTECTED]
> > > Sent: Thursday, August 05, 2004 11:55 AM
> > > To: CF-Talk
> > > Subject: Re: cfmx - request timeout setting
> > >
> > > heres my planned stored proc....but from what i gather, set rowcount,
> > > stops processing @ the rowcount limit...right?
> > >
> > > anyway, here is the stored proc code.
> > >
> > > CREATE PROCEDURE storedProc_Kill_91stDay
> > >
> > > AS
> > >
> > > SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
> > >
> > > SET ROWCOUNT 30000
> > >
> > > delete
> > > from newBackupReports
> > > where timeID
> > > <=
> > > dateDiff(ss,'01/01/1970 00:00:00.000',DATEADD(Day, -91, GetDate()))
> > >
> > > go
> > >
> > > anyway i can tune this puppy up? or any ideas?
> > > thanks!
> > >
> > > tony
> > >
> > > On Thu, 05 Aug 2004 15:35:15 +0000, [EMAIL PROTECTED]
> > > <[EMAIL PROTECTED]> wrote:
> > > > This has been a sucky issue.  If there are network issues or
> > > > anything, CF can't tell when accessing the DB and has no graceful way
> > > > of handling it.  We use Oracle; if the TNS listener goes off line for
> > > > like 5 minutes, the CF server basically chokes and dies.  Maybe there
> > > > is a Java approach to this by way of JDBC?  I dunno.
> > > >
> > > > TW, maybe you can have that job chunk up the delete?  Have it delete
> > > > only a few rows at a time to min the impact.
> > > >
> > > > --
> > > >
> > > > ---
> > > > Douglas Knudsen
> > > > http://www.cubicleman.com
> > > > "If you don't like something, change it. If you can't change it,
> change
> > > your attitude. Don't complain." - Maya Angelou
> > > >
> > > > its adv. sql server 2000
> > > > its a sql job.
> > > >
> > > > that aside.
> > > >
> > > > my bigger problem with this...
> > > >
> > > > how does cfmx server/how should cfmx server handle a database that
> > > > just is too busy to feed back a requested recordset?  should it crash
> > > > the cfmx server? or not?
> > > >
> > > > this is the bigger problem, i think.
> > > >
> > > > ill fix the db.  whats the problem with cfmx?
> > > >
> > > > tw
> > > >
> > > >