I agree with all of Bryan's suggestions.  If you can't get access to the actual
db there is not much to be done.  My usual customer-support answer to this
situation would be to tell you to shut down the db and run a consistency check
on it, which would read every page of the table and would eventually run into
the error you got if there is a persistent problem.  Given the size of the db,
and that Derby has no optimizations for dbs of this size, that is likely
to take some time.

From the stack I can tell you that the problem is in a base page, not an index,
which is much harder to fix if it is persistent.  In Derby dbs the output
Container(0, 30832) means the container in segment 0 (the seg0 directory) with
container id 30832 (I'm impressed by the number of containers that db has gone
through).  You will also see the system catalogs talk about conglomerate
numbers.  In Derby there is currently always a 1-1 mapping of conglomerate
number to container number.  Ancient history: in Cloudscape we thought we might
need the abstraction, and doing the map at the lowest level was a pain, so when
we redid the architecture we took the opportunity to make it 1-1 for "now"
while still allowing a map if anyone wanted to add one in the future.
Here is a note from Bryan, from six years back, on how to go from the number
in the error to a file name and table name:
http://bryanpendleton.blogspot.com/2009/09/whats-in-those-files-in-my-derby-db.html
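For illustration, here is a small sketch of that mapping, assuming the
file-naming scheme Bryan's post describes (a "c" prefix, the container id in
hex, and a ".dat" suffix under seg0); the catalog query in the comment is the
usual way to get from a conglomerate number back to a table name:

```java
public class DerbyContainerFile {

    // Sketch, per Bryan's post: each container is stored as
    // seg0/c<containerId in hex>.dat.
    static String containerFileName(long containerId) {
        return "c" + Long.toHexString(containerId) + ".dat";
    }

    public static void main(String[] args) {
        // Container(0, 30832) from the stack trace -> seg0/c7870.dat
        System.out.println(containerFileName(30832));

        // To get the table name for that conglomerate number, run
        // (against the db, if you can get access to it):
        //   SELECT t.tablename
        //   FROM sys.sysconglomerates c
        //   JOIN sys.systables t ON c.tableid = t.tableid
        //   WHERE c.conglomeratenumber = 30832
    }
}
```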

A quick check, if you can get an ls -l of the seg0 directory, would be to look
at the size of the associated file and do the math Bryan mentioned to see
whether the file ends on a full page.  Including the page size, if you can
figure it out, would also help, since Derby page size vs. file system page
size can be an issue, though usually only after machine crashes.
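To make that math concrete, a minimal sketch (the file length here is
hypothetical; 4096 bytes is Derby's default page size, but a db can be created
with other sizes, which is why knowing the real page size matters):

```java
public class PageSizeCheck {

    // A healthy container file should be an exact multiple of the page size.
    static boolean hasPartialLastPage(long fileLength, int pageSize) {
        return fileLength % pageSize != 0;
    }

    // Number of complete pages the file holds.
    static long fullPages(long fileLength, int pageSize) {
        return fileLength / pageSize;
    }

    public static void main(String[] args) {
        long len = 81920L;  // hypothetical size from ls -l
        System.out.println(fullPages(len, 4096));             // 20
        System.out.println(hasPartialLastPage(len, 4096));    // false
        System.out.println(hasPartialLastPage(81921L, 4096)); // true: truncated
    }
}
```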

I would suggest filing a JIRA for this.  If it really is the case that you got
the I/O error for a non-persistent problem, it may be that Derby can be
improved to avoid it.  Before the code was changed to use FileChannels, Derby
often had retry loops on I/O errors, especially on reads of pages from disk.
In the distant past this just papered over some intermittent I/O problems that
were in most cases network related (even though we likely did not officially
support network disks).  I am not sure if the old retry code is still around
in the trunk, as it was there for running on older JVMs.
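I don't remember the exact shape of that old code, but the retry loop was
roughly this pattern (the names and retry counts below are made up for
illustration, not Derby's actual implementation):

```java
import java.io.IOException;

public class RetryRead {

    // Stand-in for the low-level page-read call.
    interface PageReader {
        void readPage(long pageNumber) throws IOException;
    }

    // Retry a page read a bounded number of times, backing off between
    // attempts; rethrow only if the error looks persistent.
    static void readWithRetry(PageReader reader, long pageNumber, int maxTries)
            throws IOException {
        IOException last = null;
        for (int i = 0; i < maxTries; i++) {
            try {
                reader.readPage(pageNumber);
                return;             // success: the error was intermittent
            } catch (IOException ioe) {
                last = ioe;         // often a transient network-disk hiccup
            }
            try {
                Thread.sleep(100L * (i + 1));  // simple linear backoff
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        throw last;                 // persistent: surface the I/O error
    }
}
```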

I have also seen weird timing errors from multiple processes accessing the
same file (backup/virus scanner/... vs. the server), but mostly on Windows
rather than Unix-based OSes.

Getting a partial page read is a very weird error for Derby, as it goes out of
its way to write only full pages.
On 9/3/2015 5:39 PM, Bryan Pendleton wrote:
On 9/3/2015 3:35 PM, Bergquist, Brett wrote:
Reached end of file while attempting to read a whole page

You should probably take a close read through all the
discussion on this slightly old Derby JIRA Issue:

https://issues.apache.org/jira/browse/DERBY-5234

There are some suggestions about how to diagnose the
conglomerate in question in more detail, and also some
observations about possible causes and possible courses
of action you can take subsequently.

thanks,

bryan




--
email:    Mike Matrigali - [email protected]
linkedin: https://www.linkedin.com/in/MikeMatrigali
