Re: SSTableLoader Question

2018-02-19 Thread shalom sagges
Sounds good.

Thanks for the explanation!

On Sun, Feb 18, 2018 at 5:15 PM, Rahul Singh wrote:

> If you don’t have access to the file, you don’t have access to the file.
> I’ve seen this issue several times. It’s the easiest low-hanging fruit to
> resolve. So figure it out: make sure ownership is cassandra:cassandra from
> root to the Data folder, and either run as root or sudo it.
>
> If it’s compacted it won’t be there, so you won’t have the file. I’m not
> aware of this event being communicated to sstableloader via SEDA. Besides,
> the sstable that you are loading SHOULD not be live. If you are streaming a
> live sstable, it means you are using sstableloader not as it is designed to
> be used - which is with static files.
>
> --
> Rahul Singh
> rahul.si...@anant.us
>
> Anant Corporation
>
> On Feb 18, 2018, 9:22 AM -0500, shalom sagges wrote:
>
> Not really sure with which user I ran it (root or cassandra), although I
> don't understand why a permission issue would generate a File Not Found
> exception.
>
> And in general, what if a file is being streamed and gets compacted before
> the streaming ends? Does Cassandra know how to handle this?
>
> Thanks!
>
> On Sun, Feb 18, 2018 at 3:58 PM, Rahul Singh wrote:
>
>> Check permissions maybe? Who owns the files vs. who is running
>> sstableloader.
>>
>> --
>> Rahul Singh
>> rahul.si...@anant.us
>>
>> Anant Corporation
>>
>> On Feb 18, 2018, 4:26 AM -0500, shalom sagges wrote:
>>
>> Hi All,
>>
>> C* version 2.0.14.
>>
>> I was loading some data to another cluster using SSTableLoader. The
>> streaming failed with the following error:
>>
>>
>> Streaming error occurred
>> java.lang.RuntimeException: java.io.FileNotFoundException:
>> /data1/keyspace1/table1/keyspace1-table1-jb-65174-Data.db (No such file
>> or directory)
>> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:59)
>> at org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1409)
>> at org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:55)
>> at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:59)
>> at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:42)
>> at org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
>> at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:339)
>> at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:311)
>> at java.lang.Thread.run(Thread.java:722)
>> Caused by: java.io.FileNotFoundException:
>> /data1/keyspace1/table1/keyspace1-table1-jb-65174-Data.db (No such file
>> or directory)
>> at java.io.RandomAccessFile.open(Native Method)
>> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
>> at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
>> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
>> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:55)
>> ... 8 more
>>  WARN 18:31:35,938 [Stream #7243efb0-1262-11e8-8562-d19d5fe7829c] Stream failed
>>
>>
>>
>> Did I miss something when running the load? Was the file suddenly missing
>> due to compaction?
>> If so, did I need to disable auto compaction or stop the service
>> beforehand? (didn't find any reference to compaction in the docs)
>>
>> I know it's an old version, but I didn't find any related bugs on "File
>> not found" exceptions.
>>
>> Thanks!
>>
>>
>>
>


Re: SSTableLoader Question

2018-02-18 Thread Rahul Singh
If you don’t have access to the file, you don’t have access to the file. I’ve
seen this issue several times. It’s the easiest low-hanging fruit to resolve. So
figure it out: make sure ownership is cassandra:cassandra from root to the Data
folder, and either run as root or sudo it.

If it’s compacted it won’t be there, so you won’t have the file. I’m not aware
of this event being communicated to sstableloader via SEDA. Besides, the
sstable that you are loading SHOULD not be live. If you are streaming a live
sstable, it means you are using sstableloader not as it is designed to be used
- which is with static files.

--
Rahul Singh
rahul.si...@anant.us

Anant Corporation
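The ownership check described above can be scripted as a pre-flight step. A minimal sketch, assuming a Unix host; the function name and the idea of walking the data directory are illustrative, not part of any Cassandra tooling:

```python
import getpass
import os
import pwd

def ownership_mismatch(data_dir):
    """List files under data_dir whose owner differs from the current user.

    Illustrative pre-flight check before running sstableloader; the helper
    name and behavior are assumptions, not a Cassandra API.
    """
    me = getpass.getuser()
    mismatched = []
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            path = os.path.join(root, name)
            owner = pwd.getpwuid(os.stat(path).st_uid).pw_name
            if owner != me:
                mismatched.append((path, owner))
    return mismatched
```

If this returns anything, chown the tree to the cassandra user (or run the loader via sudo) before streaming.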

On Feb 18, 2018, 9:22 AM -0500, shalom sagges wrote:
> Not really sure with which user I ran it (root or cassandra), although I
> don't understand why a permission issue would generate a File Not Found
> exception.
>
> And in general, what if a file is being streamed and gets compacted before
> the streaming ends? Does Cassandra know how to handle this?
>
> Thanks!
>
> > On Sun, Feb 18, 2018 at 3:58 PM, Rahul Singh wrote:
> > > Check permissions maybe? Who owns the files vs. who is running 
> > > sstableloader.
> > >
> > > --
> > > Rahul Singh
> > > rahul.si...@anant.us
> > >
> > > Anant Corporation
> > >
> > > On Feb 18, 2018, 4:26 AM -0500, shalom sagges wrote:
> > > > Hi All,
> > > >
> > > > C* version 2.0.14.
> > > >
> > > > I was loading some data to another cluster using SSTableLoader. The 
> > > > streaming failed with the following error:
> > > >
> > > >
> > > > Streaming error occurred
> > > > java.lang.RuntimeException: java.io.FileNotFoundException: 
> > > > /data1/keyspace1/table1/keyspace1-table1-jb-65174-Data.db (No such file 
> > > > or directory)
> > > >     at 
> > > > org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:59)
> > > >     at 
> > > > org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1409)
> > > >     at 
> > > > org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:55)
> > > >     at 
> > > > org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:59)
> > > >     at 
> > > > org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:42)
> > > >     at 
> > > > org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
> > > >     at 
> > > > org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:339)
> > > >     at 
> > > > org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:311)
> > > >     at java.lang.Thread.run(Thread.java:722)
> > > > Caused by: java.io.FileNotFoundException: 
> > > > /data1/keyspace1/table1/keyspace1-table1-jb-65174-Data.db (No such file 
> > > > or directory)
> > > >     at java.io.RandomAccessFile.open(Native Method)
> > > >     at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
> > > >     at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
> > > >     at org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
> > > >     at 
> > > > org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:55)
> > > >     ... 8 more
> > > >  WARN 18:31:35,938 [Stream #7243efb0-1262-11e8-8562-d19d5fe7829c] 
> > > > Stream failed
> > > >
> > > >
> > > >
> > > > Did I miss something when running the load? Was the file suddenly 
> > > > missing due to compaction?
> > > > If so, did I need to disable auto compaction or stop the service 
> > > > beforehand? (didn't find any reference to compaction in the docs)
> > > >
> > > > I know it's an old version, but I didn't find any related bugs on "File 
> > > > not found" exceptions.
> > > >
> > > > Thanks!
> > > >
> > > >
>


Re: SSTableLoader Question

2018-02-18 Thread shalom sagges
Not really sure with which user I ran it (root or cassandra), although I
don't understand why a permission issue would generate a File Not Found
exception.

And in general, what if a file is being streamed and gets compacted before
the streaming ends? Does Cassandra know how to handle this?

Thanks!

On Sun, Feb 18, 2018 at 3:58 PM, Rahul Singh wrote:

> Check permissions maybe? Who owns the files vs. who is running
> sstableloader.
>
> --
> Rahul Singh
> rahul.si...@anant.us
>
> Anant Corporation
>
> On Feb 18, 2018, 4:26 AM -0500, shalom sagges wrote:
>
> Hi All,
>
> C* version 2.0.14.
>
> I was loading some data to another cluster using SSTableLoader. The
> streaming failed with the following error:
>
>
> Streaming error occurred
> java.lang.RuntimeException: java.io.FileNotFoundException:
> /data1/keyspace1/table1/keyspace1-table1-jb-65174-Data.db (No such file
> or directory)
> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:59)
> at org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1409)
> at org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:55)
> at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:59)
> at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:42)
> at org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
> at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:339)
> at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:311)
> at java.lang.Thread.run(Thread.java:722)
> Caused by: java.io.FileNotFoundException:
> /data1/keyspace1/table1/keyspace1-table1-jb-65174-Data.db (No such file
> or directory)
> at java.io.RandomAccessFile.open(Native Method)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
> at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:55)
> ... 8 more
>  WARN 18:31:35,938 [Stream #7243efb0-1262-11e8-8562-d19d5fe7829c] Stream failed
>
>
>
> Did I miss something when running the load? Was the file suddenly missing
> due to compaction?
> If so, did I need to disable auto compaction or stop the service
> beforehand? (didn't find any reference to compaction in the docs)
>
> I know it's an old version, but I didn't find any related bugs on "File
> not found" exceptions.
>
> Thanks!
>
>
>


Re: SSTableLoader Question

2018-02-18 Thread Rahul Singh
Check permissions maybe? Who owns the files vs. who is running sstableloader.

--
Rahul Singh
rahul.si...@anant.us

Anant Corporation

On Feb 18, 2018, 4:26 AM -0500, shalom sagges wrote:
> Hi All,
>
> C* version 2.0.14.
>
> I was loading some data to another cluster using SSTableLoader. The streaming 
> failed with the following error:
>
>
> Streaming error occurred
> java.lang.RuntimeException: java.io.FileNotFoundException: 
> /data1/keyspace1/table1/keyspace1-table1-jb-65174-Data.db (No such file or 
> directory)
>     at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:59)
>     at 
> org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1409)
>     at 
> org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:55)
>     at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:59)
>     at 
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:42)
>     at 
> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
>     at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:339)
>     at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:311)
>     at java.lang.Thread.run(Thread.java:722)
> Caused by: java.io.FileNotFoundException: 
> /data1/keyspace1/table1/keyspace1-table1-jb-65174-Data.db (No such file or 
> directory)
>     at java.io.RandomAccessFile.open(Native Method)
>     at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
>     at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
>     at org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
>     at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:55)
>     ... 8 more
>  WARN 18:31:35,938 [Stream #7243efb0-1262-11e8-8562-d19d5fe7829c] Stream 
> failed
>
>
>
> Did I miss something when running the load? Was the file suddenly missing due 
> to compaction?
> If so, did I need to disable auto compaction or stop the service beforehand? 
> (didn't find any reference to compaction in the docs)
>
> I know it's an old version, but I didn't find any related bugs on "File not 
> found" exceptions.
>
> Thanks!
>
>


SSTableLoader Question

2018-02-18 Thread shalom sagges
Hi All,

C* version 2.0.14.

I was loading some data to another cluster using SSTableLoader. The
streaming failed with the following error:


Streaming error occurred
java.lang.RuntimeException: java.io.FileNotFoundException:
/data1/keyspace1/table1/keyspace1-table1-jb-65174-Data.db (No such file or
directory)
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:59)
    at org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1409)
    at org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:55)
    at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:59)
    at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:42)
    at org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
    at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:339)
    at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:311)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.FileNotFoundException:
/data1/keyspace1/table1/keyspace1-table1-jb-65174-Data.db (No such file or
directory)
    at java.io.RandomAccessFile.open(Native Method)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
    at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:55)
    ... 8 more
 WARN 18:31:35,938 [Stream #7243efb0-1262-11e8-8562-d19d5fe7829c] Stream failed



Did I miss something when running the load? Was the file suddenly missing
due to compaction?
If so, did I need to disable auto compaction or stop the service
beforehand? (didn't find any reference to compaction in the docs)

I know it's an old version, but I didn't find any related bugs on "File not
found" exceptions.

Thanks!
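One way to avoid a live -Data.db file disappearing under the loader mid-stream is to point sstableloader at a static copy (or at a `nodetool snapshot`) rather than the live data directory. A minimal sketch of the staging step; the function name and the jb-era component suffixes are illustrative assumptions, so verify the suffixes against your own data directory:

```python
import os
import shutil
import tempfile

def stage_sstables(table_dir, suffixes=("-Data.db", "-Index.db", "-Filter.db",
                                        "-Statistics.db", "-CompressionInfo.db",
                                        "-Summary.db", "-TOC.txt")):
    """Copy a table's sstable component files to a scratch directory.

    Streaming from this static copy sidesteps the race where compaction
    deletes a live sstable while it is being streamed.
    """
    staging = tempfile.mkdtemp(prefix="sstable-staging-")
    for name in os.listdir(table_dir):
        if name.endswith(suffixes):
            shutil.copy2(os.path.join(table_dir, name), staging)
    return staging
```

You would then run sstableloader against the returned staging directory instead of the live one.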


Re: [Marketing Mail] Re: [Marketing Mail] Re: sstableloader question

2016-10-12 Thread Osman YOZGATLIOGLU
Hello,

It's about 2,500 sstables holding 25 TB of data.
The -t parameter doesn't change anything; -t 1000 and -t 1 behave the same.
Most probably I'm hitting some limitation on the target cluster.
I'm preparing to split the sstables and run up to ten parallel sstableloader
sessions.

Regards,
Osman
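The splitting step can be sketched as a size-balanced partition of the Data files, one group per planned sstableloader session. This is purely illustrative scheduling logic (the function name and the (path, size) input shape are assumptions, not Cassandra tooling):

```python
def split_for_parallel_load(data_files, sessions=10):
    """Partition (path, size) pairs into roughly equal-sized groups.

    Greedy largest-first binning: each sstable goes to the currently
    smallest group, and each group would feed one sstableloader session.
    """
    groups = [[] for _ in range(sessions)]
    sizes = [0] * sessions
    for path, size in sorted(data_files, key=lambda p: -p[1]):
        i = sizes.index(min(sizes))  # emptiest group so far
        groups[i].append(path)
        sizes[i] += size
    return groups
```

Each group of paths would then be copied (or hard-linked) into its own directory and loaded by a separate sstableloader process.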

On 11-10-2016 21:46, Rajath Subramanyam wrote:
How many sstables are you trying to load? Running sstableloaders in parallel
will help. Did you try setting the "-t" parameter to see if you are getting
the expected throughput?

- Rajath


Rajath Subramanyam


On Mon, Oct 10, 2016 at 2:02 PM, Osman YOZGATLIOGLU wrote:
Hello,

Thank you Adam and Rajath.

I'll split the input sstables and run parallel jobs for each.
I tested this approach and ran 3 parallel sstableloader jobs without the -t
parameter.
I raised the stream_throughput_outbound_megabits_per_sec parameter from 200 to
600 Mbit/sec on all of the target nodes.
But each job runs at only about 10 MB/sec and generates about 100 Mbit/sec of
network traffic.
In total this could be much more. The source and target servers have plenty of
unused CPU, IO and network resources.
Do you have any idea how I can increase the speed of the sstableloader jobs?

Regards,
Osman

On 10-10-2016 22:05, Rajath Subramanyam wrote:
Hi Osman,

You cannot restart the streaming only to the failed nodes specifically. You can 
restart the sstableloader job itself. Compaction will eventually take care of 
the redundant rows.

- Rajath


Rajath Subramanyam


On Sun, Oct 9, 2016 at 7:38 PM, Adam Hutson wrote:
It'll start over from the beginning.


On Sunday, October 9, 2016, Osman YOZGATLIOGLU wrote:
Hello,

I have a running sstableloader job.
Unfortunately some of the nodes restarted since streaming began.
I see streaming has stopped for those nodes.
Can I restart that streaming somehow?
Or if I restart the sstableloader job, will it start from the beginning?

Regards,
Osman


This e-mail message, including any attachments, is for the sole use of the 
person to whom it has been sent, and may contain information that is 
confidential or legally protected. If you are not the intended recipient or 
have received this message in error, you are not authorized to copy, 
distribute, or otherwise use this message or its attachments. Please notify the 
sender immediately by return e-mail and permanently delete this message and any 
attachments. KRON makes no warranty that this e-mail is error or virus free.


--

Adam Hutson
Data Architect | DataScale
+1 (417) 224-5212
a...@datascale.io






Re: [Marketing Mail] Re: sstableloader question

2016-10-11 Thread Rajath Subramanyam
How many sstables are you trying to load? Running sstableloaders in
parallel will help. Did you try setting the "-t" parameter to see if you
are getting the expected throughput?

- Rajath


Rajath Subramanyam


On Mon, Oct 10, 2016 at 2:02 PM, Osman YOZGATLIOGLU <
osman.yozgatlio...@krontech.com> wrote:

> Hello,
>
> Thank you Adam and Rajath.
>
> I'll split the input sstables and run parallel jobs for each.
> I tested this approach and ran 3 parallel sstableloader jobs without the -t
> parameter.
> I raised the stream_throughput_outbound_megabits_per_sec parameter from 200
> to 600 Mbit/sec on all of the target nodes.
> But each job runs at only about 10 MB/sec and generates about 100 Mbit/sec
> of network traffic.
> In total this could be much more. The source and target servers have plenty
> of unused CPU, IO and network resources.
> Do you have any idea how I can increase the speed of the sstableloader jobs?
>
> Regards,
> Osman
>
> On 10-10-2016 22:05, Rajath Subramanyam wrote:
> Hi Osman,
>
> You cannot restart the streaming only to the failed nodes specifically.
> You can restart the sstableloader job itself. Compaction will eventually
> take care of the redundant rows.
>
> - Rajath
>
> 
> Rajath Subramanyam
>
>
> On Sun, Oct 9, 2016 at 7:38 PM, Adam Hutson wrote:
> It'll start over from the beginning.
>
>
> On Sunday, October 9, 2016, Osman YOZGATLIOGLU <
> osman.yozgatlio...@krontech.com>
> wrote:
> Hello,
>
> I have a running sstableloader job.
> Unfortunately some of the nodes restarted since streaming began.
> I see streaming has stopped for those nodes.
> Can I restart that streaming somehow?
> Or if I restart the sstableloader job, will it start from the beginning?
>
> Regards,
> Osman
>
>
>
>
> --
>
> Adam Hutson
> Data Architect | DataScale
> +1 (417) 224-5212
> a...@datascale.io
>
>
>
>
>


Re: [Marketing Mail] Re: sstableloader question

2016-10-10 Thread Osman YOZGATLIOGLU
Hello,

Thank you Adam and Rajath.

I'll split the input sstables and run parallel jobs for each.
I tested this approach and ran 3 parallel sstableloader jobs without the -t
parameter.
I raised the stream_throughput_outbound_megabits_per_sec parameter from 200 to
600 Mbit/sec on all of the target nodes.
But each job runs at only about 10 MB/sec and generates about 100 Mbit/sec of
network traffic.
In total this could be much more. The source and target servers have plenty of
unused CPU, IO and network resources.
Do you have any idea how I can increase the speed of the sstableloader jobs?

Regards,
Osman
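A back-of-envelope unit check is worth doing here: 10 MB/sec per job is roughly 80-100 Mbit/sec on the wire, so three jobs together sit well below the 600 Mbit/sec stream_throughput cap, which suggests a per-session bottleneck rather than the cluster-side throttle. A sketch of the arithmetic, assuming 8 bits per byte and ignoring protocol overhead:

```python
def mbit_to_mbyte_per_sec(mbit):
    """Convert megabits/sec to megabytes/sec (8 bits per byte)."""
    return mbit / 8.0

cap_mb = mbit_to_mbyte_per_sec(600)      # cluster throttle: 75 MB/sec
per_job_mb = mbit_to_mbyte_per_sec(100)  # observed per job: 12.5 MB/sec
```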

On 10-10-2016 22:05, Rajath Subramanyam wrote:
Hi Osman,

You cannot restart the streaming only to the failed nodes specifically. You can 
restart the sstableloader job itself. Compaction will eventually take care of 
the redundant rows.

- Rajath


Rajath Subramanyam


On Sun, Oct 9, 2016 at 7:38 PM, Adam Hutson wrote:
It'll start over from the beginning.


On Sunday, October 9, 2016, Osman YOZGATLIOGLU wrote:
Hello,

I have a running sstableloader job.
Unfortunately some of the nodes restarted since streaming began.
I see streaming has stopped for those nodes.
Can I restart that streaming somehow?
Or if I restart the sstableloader job, will it start from the beginning?

Regards,
Osman




--

Adam Hutson
Data Architect | DataScale
+1 (417) 224-5212
a...@datascale.io






Re: sstableloader question

2016-10-10 Thread Rajath Subramanyam
Hi Osman,

You cannot restart the streaming only to the failed nodes specifically. You
can restart the sstableloader job itself. Compaction will eventually take
care of the redundant rows.

- Rajath


Rajath Subramanyam


On Sun, Oct 9, 2016 at 7:38 PM, Adam Hutson  wrote:

> It'll start over from the beginning.
>
>
> On Sunday, October 9, 2016, Osman YOZGATLIOGLU <
> osman.yozgatlio...@krontech.com> wrote:
>
>> Hello,
>>
>> I have a running sstableloader job.
>> Unfortunately some of the nodes restarted since streaming began.
>> I see streaming has stopped for those nodes.
>> Can I restart that streaming somehow?
>> Or if I restart the sstableloader job, will it start from the beginning?
>>
>> Regards,
>> Osman
>>
>>
>>
>
>
> --
>
> Adam Hutson
> Data Architect | DataScale
> +1 (417) 224-5212
> a...@datascale.io
>


Re: sstableloader question

2016-10-09 Thread Adam Hutson
It'll start over from the beginning.

On Sunday, October 9, 2016, Osman YOZGATLIOGLU <
osman.yozgatlio...@krontech.com> wrote:

> Hello,
>
> I have a running sstableloader job.
> Unfortunately some of the nodes restarted since streaming began.
> I see streaming has stopped for those nodes.
> Can I restart that streaming somehow?
> Or if I restart the sstableloader job, will it start from the beginning?
>
> Regards,
> Osman
>
>
>


-- 

Adam Hutson
Data Architect | DataScale
+1 (417) 224-5212
a...@datascale.io


sstableloader question

2016-10-09 Thread Osman YOZGATLIOGLU
Hello,

I have a running sstableloader job.
Unfortunately some of the nodes restarted since streaming began.
I see streaming has stopped for those nodes.
Can I restart that streaming somehow?
Or if I restart the sstableloader job, will it start from the beginning?

Regards,
Osman

