I'm running 1.0.6 on both clusters.

After running a nodetool repair on all machines, everything seems to be 
behaving correctly, and AFAIK, no data has been lost.

If what you say is true and the exception was preventing a file from being 
used, then I imagine that the nodetool repair corrected that data from replicas.

Unfortunately, the only steps I have are the ones I outlined below.

I suspect it had something to do with that particular data set, however. When I 
did the exact same steps for a different data set, the error did not appear, 
and the streaming proceeded as normal. Perhaps a particular SSTable in the set 
was corrupted?

Scott
________________________________
From: aaron morton [aa...@thelastpickle.com]
Sent: Wednesday, January 18, 2012 1:52 AM
To: user@cassandra.apache.org
Subject: Re: JMX BulkLoad weirdness

I'd need the version number to be sure, but it looks like that error will stop 
the node from actually using the data that has been streamed to it. The file 
has been received, the aux files (bloom filter etc.) are created, and the file 
is opened, but the exception stops the file from being used.

I've not looked at the JMX bulk load for a while. If you google around you may 
find some examples.

If you have some more steps to reproduce it, we may be able to look into it.

Cheers

-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 17/01/2012, at 2:42 AM, Scott Fines wrote:

Unfortunately, I'm not doing a one-to-one migration; I'm moving data from a 
15-node cluster to a 6-node one. In this case, that would mean an excessive 
amount of time spent repairing data placed on the wrong machines.

Also, the bulkloader's requirement of either a different IP address or a 
different machine is something I'd rather not bother with if I can trigger the 
load through JMX.

Otherwise, the JMX bulkloader seems to work perfectly well, except for the 
error that I mentioned below. So I'll ask again: is that error something to be 
concerned about?

Thanks,

Scott
________________________________
From: aaron morton [aa...@thelastpickle.com]
Sent: Sunday, January 15, 2012 12:07 PM
To: user@cassandra.apache.org
Subject: Re: JMX BulkLoad weirdness

If you are doing a straight one-to-one copy from one cluster to another try…

1) nodetool snapshot on each prod node for the system and application key 
spaces.
2) rsync system and app key space snapshots
3) Update the yaml files on the new cluster to have the correct initial_tokens. 
This is not strictly necessary, as the tokens are stored in the system KS, but 
it limits surprises later.
4) Start the new cluster.
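
A rough sketch of the steps above as shell commands, assuming default data 
directories, a keyspace named MyKeyspace, and 1.0-era snapshot layout (host 
names, paths, and the snapshot tag are all illustrative):

# 1) on each production node
nodetool -h localhost snapshot MyKeyspace
nodetool -h localhost snapshot system

# 2) copy the snapshot contents into the matching data directory on the new node
rsync -av /var/lib/cassandra/data/MyKeyspace/snapshots/<tag>/ \
      new-node:/var/lib/cassandra/data/MyKeyspace/

# 3) on each new node, set initial_token in cassandra.yaml to the source
#    node's token, then 4) start the new cluster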

For bulk load you will want to use the sstableloader 
http://www.datastax.com/dev/blog/bulk-loading
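
For example (the exact invocation varies by version; with the 1.0-era loader 
the directory is named after the keyspace, and the loader picks up contact 
points from the local cassandra.yaml):

bin/sstableloader /path/to/MyKeyspace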


Cheers

-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 14/01/2012, at 3:32 AM, Scott Fines wrote:

Hi all,

I'm trying to copy a column family from our production cluster to our 
development one for testing purposes, so I thought I would try the bulkload 
API. Since I'm lazy, I'm using the Cassandra bulkLoad JMX call from one of the 
development machines. Here are the steps I followed:

1. (on production C* node): nodetool flush <keyspace> <CF>
2. rsync SSTables from production C* node to development C* node
3. bulkLoad SSTables through JMX
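
For step 3, the JMX call itself takes only a few lines of Java. A minimal 
sketch, assuming the default JMX port 7199 and an illustrative host name and 
SSTable directory (the MBean is org.apache.cassandra.db:type=StorageService 
and the operation is bulkLoad):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxBulkLoad {
    public static void main(String[] args) throws Exception {
        // Connect to the dev node's JMX endpoint (7199 is the Cassandra default).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://dev-node:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName storageService =
                    new ObjectName("org.apache.cassandra.db:type=StorageService");
            // bulkLoad(String) streams every SSTable in the given directory to
            // the nodes that own the data; the path here is illustrative.
            mbs.invoke(storageService, "bulkLoad",
                    new Object[] { "/var/lib/cassandra/load/MyKeyspace/MyCF" },
                    new String[] { "java.lang.String" });
        } finally {
            connector.close();
        }
    }
}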

But when I do that, on one of the development C* nodes, I keep getting this 
exception:

java.lang.NullPointerException
        at org.apache.cassandra.io.sstable.SSTable.getMinimalKey(SSTable.java:156)
        at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:334)
        at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:302)
        at org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:156)
        at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:88)
        at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:184)

After that, the node itself seems to stream the data successfully (I'm in the 
middle of verifying that right now).

Is this an error that I should be concerned about?

Thanks,

Scott
