Re: Can fix corrupt file? (Compaction step)

2010-01-26 Thread JKnight JKnight
Dear Mr Jonathan, I've patched the code with 720.patch and run SSTableExport, and I get this error: java.lang.OutOfMemoryError: Java heap space at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:84) at
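This OutOfMemoryError is consistent with the incorrect row-size data described in the 2010-01-17 message below: a deserializer that trusts an on-disk length field will attempt a huge allocation when that field is garbage. A minimal sketch of the failure mode, in hypothetical code rather than Cassandra's actual ColumnSerializer:

    import java.io.DataInput;
    import java.io.IOException;

    // Hypothetical illustration only: trusting a corrupt on-disk length
    // field turns one bad int into an enormous allocation, surfacing as
    // the OutOfMemoryError in the stack trace above.
    public class NaiveValueReader {
        public static byte[] readValue(DataInput in) throws IOException {
            int length = in.readInt();       // corrupt file => garbage length
            byte[] value = new byte[length]; // may throw OutOfMemoryError
            in.readFully(value);
            return value;
        }
    }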

Re: Can fix corrupt file? (Compaction step)

2010-01-20 Thread Jonathan Ellis
It turns out that it's not just a corrupt row -- the second half of the Data file is overwritten with index entries instead of actual data. I'll track progress in https://issues.apache.org/jira/browse/CASSANDRA-720. -Jonathan On Sun, Jan 17, 2010 at 10:30 PM, Jonathan Ellis jbel...@gmail.com

Re: Can fix corrupt file? (Compaction step)

2010-01-17 Thread Jonathan Ellis
The row size data is incorrect, so there's no way to recover using just the data file. It can be done by using the redundant information in the index, though. Should get that done tomorrow. On Thu, Jan 14, 2010 at 9:35 PM, Jonathan Ellis jbel...@gmail.com wrote: I am working on a patch for
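The recovery approach described here relies on the SSTable's index component redundantly storing each row key together with its offset in the Data file, so row boundaries can be rebuilt even when the Data file's own row-size fields are corrupt. A rough sketch of walking such an index; the key/offset encoding below is an assumption for illustration, not Cassandra's exact on-disk format:

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Rough sketch: dump (row key, Data-file offset) pairs from an index
    // component. The encoding (readUTF + readLong) is assumed, not the
    // real SSTable index layout.
    public class IndexWalker {
        public static void main(String[] args) throws IOException {
            DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
            try {
                while (true) {
                    String key = in.readUTF();   // assumed: length-prefixed row key
                    long offset = in.readLong(); // assumed: position in the Data file
                    System.out.println(key + " @ " + offset);
                }
            } catch (EOFException endOfIndex) {
                // reached the end of the index component
            } finally {
                in.close();
            }
        }
    }

The gap between consecutive offsets bounds each row, which is enough to re-derive the row sizes that the corrupt Data file can no longer be trusted to provide.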

Re: Can fix corrupt file? (Compaction step)

2010-01-14 Thread Jonathan Ellis
I am working on a patch for you. On Thu, Jan 14, 2010 at 9:21 PM, JKnight JKnight beukni...@gmail.com wrote: Dear all, This is my data model: <Keyspace Name="FeedUsers"> <ColumnFamily CompareWith="BytesType" Name="FeedUsersHome"/> </Keyspace> Could you help me detect the problem? Thank a

Re: Can fix corrupt file? (Compaction step)

2010-01-13 Thread Jonathan Ellis
What is your CF definition in your config file? On Sun, Jan 10, 2010 at 7:59 PM, JKnight JKnight beukni...@gmail.com wrote: The attachment contains the data that raises the error in the compaction step. Could you help me detect the problem?

Re: Can fix corrupt file? (Compaction step)

2010-01-11 Thread JKnight JKnight
Dear Mr Jonathan, Did you get the attachment? On Sun, Jan 10, 2010 at 8:59 PM, JKnight JKnight beukni...@gmail.com wrote: The attachment contains the data that raises the error in the compaction step. Could you help me detect the problem? On Fri, Jan 8, 2010 at 3:09 PM, Jonathan Ellis jbel...@gmail.com

Re: Can fix corrupt file? (Compaction step)

2010-01-08 Thread JKnight JKnight
Dear Mr Jonathan, With the larger sstable, I don't have any problem, so I think the error is not related to the heap size. And my data model does not use SuperColumn, so I think the number of columns in a row is not the problem. I have tried to delete the error row and accept the data loss. On

Re: Can fix corrupt file? (Compaction step)

2010-01-08 Thread Jonathan Ellis
Can you gzip the sstable that OOMs and send it to me off-list? On Fri, Jan 8, 2010 at 11:26 AM, JKnight JKnight beukni...@gmail.com wrote: Dear Mr Jonathan, With the larger sstable, I don't have any problem, so I think the error is not related to the heap size. And my data model does

Can fix corrupt file? (Compaction step)

2010-01-07 Thread JKnight JKnight
Dear all, In the compaction step, I found the error in the file SSTableScanner.java, in the following method:

    public IteratingRow next()
    {
        try
        {
            if (row != null)
                row.skipRemaining(); // skip whatever is left of the previous row
            // can fail if a corrupt row size made the skip run past end-of-file
            assert !file.isEOF();

Re: Can fix corrupt file? (Compaction step)

2010-01-07 Thread Jonathan Ellis
Do you get any errors when running sstable2json on the files being compacted? On Thu, Jan 7, 2010 at 3:49 AM, JKnight JKnight beukni...@gmail.com wrote: Dear all, In the compaction step, I found the error in the file SSTableScanner.java, in the following method: public IteratingRow next()

Re: Can fix corrupt file? (Compaction step)

2010-01-07 Thread JKnight JKnight
Yes. The error is: ERROR: Out of memory deserializing row 2829049. I've tried to delete this row immediately, but Cassandra does not currently support immediate deletion. Maybe I will have to code it myself. On Thu, Jan 7, 2010 at 12:28 PM, Jonathan Ellis jbel...@gmail.com wrote: Do you get any errors when

Re: Can fix corrupt file? (Compaction step)

2010-01-07 Thread Jonathan Ellis
How many columns do you have in your rows? How big a heap are you giving to sstable2json? On Thu, Jan 7, 2010 at 9:37 PM, JKnight JKnight beukni...@gmail.com wrote: Yes. The error is ERROR: Out of memory deserializing row 2829049. I've tried to delete this row immediately, but Cassandra
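For context, sstable2json is a wrapper script around org.apache.cassandra.tools.SSTableExport (the class named in the OOM report at the top of this thread), so the heap it gets is whatever -Xmx the launching JVM receives. A hedged example of invoking it directly with a larger heap; the classpath and file path are placeholders, not paths from this user's setup:

    java -Xmx2G -cp "$CASSANDRA_HOME/build/classes:$CASSANDRA_HOME/lib/*" \
         org.apache.cassandra.tools.SSTableExport /path/to/FeedUsersHome-Data.db

Note that a bigger heap only helps if the row is genuinely large; if the row-size field itself is corrupt, as the 2010-01-17 message above establishes, no heap is big enough.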