Marc,

I'm currently running the V91 compiler. I have had intermittent reports from
a couple of users that they get an error message after they do about 60
items. Currently I'm leaning toward a variable or something that I am
failing to clear, but I haven't had time to chase it yet.

Jan

Note to Razzak:
I am in no way implying that there is a problem with your product.
 


-----Original Message-----
From: "MDRD" <[email protected]>
To: [email protected] (RBASE-L Mailing List)
Date: Thu, 17 Mar 2011 12:19:04 -0500
Subject: [RBASE-L] - RE: Data corruption


Paul

They do a full Unload All at least monthly w/o errors, so a hiccup and
corrupt packets seem to be the most likely cause.  But their computer tech
installed new computers and network gear and says it all checks out.  I know
that just because it looks OK when they check it does not mean it is not
causing hiccups every few weeks.

Another office had this same problem; we now have them enter 10 rows at a
time and then run Autochk to check for errors, to try to narrow down what is
going on.  Since they started doing only 10 rows at a time they have not had
the problem, not yet at least.  Either they are lucky, or doing 50+ rows was
maxing something out?  Not sure what that could be.

Thanks, I would love to find a packet sniffer that is easy to use.  Once in
a blue moon we have an office with different "corrupt" data in another
table.  They get custnums like 349,545,112, and they do not have any that
big.  We have PK and FK constraints on the custnum field, and you cannot
enter a bogus custnum without RBase stopping you.  So I always tell them it
is a computer hiccup that is scrambling the data.
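[Editor's note: a hedged sketch of one way to hunt for those bogus custnums. It assumes the suspect table has been unloaded to a CSV file with a "custnum" column; the filename, column name, and the MAX_CUSTNUM cutoff are all assumptions, not anything R:BASE-specific.]

```python
import csv

# Assumed upper bound on legitimate customer numbers for this office;
# anything above it (e.g. 349,545,112) is flagged as suspect.
MAX_CUSTNUM = 100_000

def suspect_rows(path, max_custnum=MAX_CUSTNUM):
    """Yield (file_line_number, custnum) for rows whose custnum is implausibly large."""
    with open(path, newline="") as fh:
        # Line 1 is the header, so the first data row is file line 2.
        for n, row in enumerate(csv.DictReader(fh), start=2):
            custnum = int(row["custnum"])
            if custnum > max_custnum:
                yield n, custnum
```

Run it over a copy made right after the monthly Unload; any hits give you the exact rows to inspect before and after the next "hiccup."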

Marc


-----Original Message----- 
From: Paul
Sent: Thursday, March 17, 2011 12:01 PM
To: RBASE-L Mailing List
Subject: [RBASE-L] - RE: Data corruption

Marc-

Grab a copy and do the 'Razzak routine' to verify data integrity: Unload,
Load, and watch for errors!   Do this on a copy.  Also RScope your local
backup copy from them and see what the results are, or simply start with a
full Autochk and review the results.

If all is good, it is most likely a hiccup as you say, most likely packet
loss etc., but it would get you started looking in the right direction.
Data secure and no issues: packet loss, and you are lucky (the client is a
real lucky one).  Data/rows bad: see what happened on reload; issues might
be from code (eep, cmd, etc.) or packet loss.  Depends on the problem.

A lot of new sniffers out there.  I ran across a real neat one, and if I
locate it I will let you know.


Paul




-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of MDRD
Sent: Thursday, March 17, 2011 12:01 PM
To: RBASE-L Mailing List
Subject: [RBASE-L] - RE: Data corruption

Paul

No RScope at that location.
I wonder if this is a computer hiccup.

Thanks Marc

-----Original Message-----
From: Paul
Sent: Thursday, March 17, 2011 10:54 AM
To: RBASE-L Mailing List
Subject: [RBASE-L] - RE: Data corruption

Does not look like RScope: "Anyway, their tech found this debug file and
thought it might help."

location=5 idprev=140053707 to_ptr=140058625 rputrow_rowid=140083201
location=5 idprev=140774603 to_ptr=140779521 rputrow_rowid=140812289
location=5 idprev=142175435 to_ptr=142180353 rputrow_rowid=142196737
location=5 idprev=148008139 to_ptr=148013057 rputrow_rowid=148086785
location=5 idprev=154881227 to_ptr=154902529 rputrow_rowid=154918913
location=5 idprev=153980107 to_ptr=153985025 rputrow_rowid=154001409
location=5 idprev=164162763 to_ptr=164167681 rputrow_rowid=164192257
location=5 idprev=181005515 to_ptr=181010433 rputrow_rowid=181026817


However, the row pointers are most likely wrong.  Do you have RScope for
this location?
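[Editor's note: a quick way to eyeball those pointer values is to parse the quoted debug lines and print the gaps between the fields. This is only a sketch for spotting outliers; it assumes nothing about what the fields mean inside R:BASE, and any interpretation of the gaps is a question for Razzak.]

```python
import re

# The debug lines exactly as quoted above.
log = """\
location=5 idprev=140053707 to_ptr=140058625 rputrow_rowid=140083201
location=5 idprev=140774603 to_ptr=140779521 rputrow_rowid=140812289
location=5 idprev=142175435 to_ptr=142180353 rputrow_rowid=142196737
location=5 idprev=148008139 to_ptr=148013057 rputrow_rowid=148086785
location=5 idprev=154881227 to_ptr=154902529 rputrow_rowid=154918913
location=5 idprev=153980107 to_ptr=153985025 rputrow_rowid=154001409
location=5 idprev=164162763 to_ptr=164167681 rputrow_rowid=164192257
location=5 idprev=181005515 to_ptr=181010433 rputrow_rowid=181026817
"""

for line in log.splitlines():
    # Pull every key=value pair on the line into a dict of ints.
    fields = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", line)}
    gap1 = fields["to_ptr"] - fields["idprev"]
    gap2 = fields["rputrow_rowid"] - fields["to_ptr"]
    print(f"idprev={fields['idprev']}  to_ptr-idprev={gap1}  rowid-to_ptr={gap2}")
```

Tabulated this way, the to_ptr-idprev gap is the same (4918) on every line except the fifth, which may or may not be meaningful, but it is the kind of outlier worth pointing RScope (or Razzak) at.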




Paul





-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of MDRD
Sent: Thursday, March 17, 2011 10:19 AM
To: RBASE-L Mailing List
Subject: [RBASE-L] - Data corruption

Well, it's back: the one office that was corruption-free for about a year
just had it again.

It is always the same table, for some strange reason.  It makes me wonder if
there is some atypical data in one of the lookup tables for some code that
is only used once in a great while; when that lookup code is entered, the
“atypical” data corrupts the table.  Just trying to think out of the box.

If it was my code or RBase I would think it would happen more often, unless
it is a combination of things.  I also wondered if it could be when they use
that form to enter 50+ rows of data at once: if some sort of limit or cache
fills up, the corruption hits?

Anyway, their tech found this debug file and thought it might help.  I am
not sure when this file was generated; I am not doing anything that I know
of to create it.

This is an important office; they belong to a group of about 20 users I
have.  They are using a 7.6 compiled app.  They never had this problem using
7.5, but I updated this form and changed a Note field to Varchar, if that
gives a clue; I also changed some code.  But if my code was bad I would
expect this error daily.  Either way, unless I can nail this down it will be
hard to get them to update again.

Thanks
Marc

location=5 idprev=140053707 to_ptr=140058625 rputrow_rowid=140083201
location=5 idprev=140774603 to_ptr=140779521 rputrow_rowid=140812289
location=5 idprev=142175435 to_ptr=142180353 rputrow_rowid=142196737
location=5 idprev=148008139 to_ptr=148013057 rputrow_rowid=148086785
location=5 idprev=154881227 to_ptr=154902529 rputrow_rowid=154918913
location=5 idprev=153980107 to_ptr=153985025 rputrow_rowid=154001409
location=5 idprev=164162763 to_ptr=164167681 rputrow_rowid=164192257
location=5 idprev=181005515 to_ptr=181010433 rputrow_rowid=181026817
