Re: Issues while using TWCS compaction and Bulkloader

2017-04-28 Thread Alain RODRIGUEZ
Hi Eugene.


>1. What could have caused the page fault, and/or bloom filter false
>positives?
>2. What's the right strategy for running repairs?
>
>
I am not sure how related these 2 questions are. Given your mix of
questions I am guessing you tried to repair, probably using incremental
repairs. I imagine that this could have led to the creation of a lot of
SSTables because of anti-compaction. Such a large number of SSTables would
indeed reduce bloom filter and page cache efficiency and create high latency.
It's a common issue on the first run of incremental repair...
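
One way to check whether that is what happened is to watch the per-table
SSTable count and bloom filter false positive ratio while latency is high.
A minimal sketch of reading them over JMX, assuming the standard Cassandra 3.0
table metrics MBean names and a made-up keyspace/table (ks/events):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class TableHealthCheck {
        public static void main(String[] args) throws Exception {
            // 7199 is the default Cassandra JMX port; host, keyspace and table are assumptions.
            JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
            try (JMXConnector jmx = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = jmx.getMBeanServerConnection();
                String base = "org.apache.cassandra.metrics:type=Table,keyspace=ks,scope=events,name=";
                // A growing SSTable count right after a repair points at anti-compaction fallout.
                Object sstables = mbs.getAttribute(new ObjectName(base + "LiveSSTableCount"), "Value");
                // A ratio creeping upwards means bloom filters are no longer saving disk reads.
                Object fpRatio = mbs.getAttribute(new ObjectName(base + "BloomFilterFalseRatio"), "Value");
                System.out.println("Live SSTables: " + sstables + ", bloom filter false ratio: " + fpRatio);
            }
        }
    }

The same two numbers show up in nodetool cfstats / tablestats, so the code is
only worth it if you want to poll and graph them.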

You probably don't need to repair, and if data fits into memory, you
probably don't want to repair using incremental repairs anyway. There are
some downsides and bugs around this feature.

Given your other email, which I already answered, I guess you don't need to
repair, as the data is temporary and only expires via TTL. I would ensure
strong consistency by using CL = LOCAL_QUORUM on reads and writes and not
worry about entropy in your case, as written data will 'soon' be deleted (but
I may be missing important context).
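
For what it's worth, here is a minimal sketch of what that looks like with the
DataStax Java driver 3.x; the contact point, keyspace and table names are
invented placeholders:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.QueryOptions;
    import com.datastax.driver.core.Session;

    public class LocalQuorumDefaults {
        public static void main(String[] args) {
            // Default every read and write to LOCAL_QUORUM so that, with RF=3 in the local DC,
            // any successful write overlaps any successful read on at least one replica.
            Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1")
                    .withQueryOptions(new QueryOptions()
                            .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM))
                    .build();
            try (Session session = cluster.connect("ks")) {
                session.execute("INSERT INTO events (sensor_id, ts, value) VALUES (1, toTimestamp(now()), 42.0)");
            } finally {
                cluster.close();
            }
        }
    }

You can also override the consistency level per statement instead of globally
if only some tables need it.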

Sorry we did not answer your questions earlier; I hope this is still useful.

C*heers,
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2017-03-28 0:26 GMT+02:00 eugene miretsky <eugene.miret...@gmail.com>:

> Hi,
>
> We have a Cassandra 3.0.8 cluster, and we use the Bulkloader
> <http://www.datastax.com/dev/blog/using-the-cassandra-bulk-loader-updated>
> to upload time series data nightly. The data has a 3-day TTL, and the
> compaction window is 1 hour.
>
> Generally the data fits into memory, all reads are served from OS page
> cache, and the cluster works fine. However, we had a few unexplained
> incidents:
>
>1. High page fault ratio: This happened once, lasted 3-4 days, and was
>resolved after we restarted the cluster. We have not been able to reproduce it
>since.
>2. High number of bloom filter false positives: Same as above.
>
> Several questions:
>
>1. What could have caused the page fault, and/or bloom filter false
>positives?
>2. What's the right strategy for running repairs?
>   1. Are repairs even required? We don't generate any tombstones.
>   2. The following article suggests that incremental repairs should
>   not be used with date-tiered compaction; does it also apply to TWCS?
>   https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRepairNodesManualRepair.html
>
> Cheers,
> Eugene
>


Issues while using TWCS compaction and Bulkloader

2017-03-27 Thread eugene miretsky
Hi,

We have a Cassandra 3.0.8 cluster, and we use the Bulkloader
<http://www.datastax.com/dev/blog/using-the-cassandra-bulk-loader-updated>
to upload time series data nightly. The data has a 3-day TTL, and the
compaction window is 1 hour.
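
For context, a table along the lines described above would be defined roughly
as follows; the keyspace, table and column names are invented for illustration,
and the statement is issued here through the DataStax Java driver:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class CreateTwcsTable {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            try (Session session = cluster.connect()) {
                // 259200 seconds = 3 days of default TTL; TWCS buckets SSTables into
                // 1-hour windows, matching the setup described above.
                session.execute(
                    "CREATE TABLE IF NOT EXISTS ks.events ("
                    + "  sensor_id int, ts timestamp, value double,"
                    + "  PRIMARY KEY (sensor_id, ts))"
                    + " WITH default_time_to_live = 259200"
                    + " AND compaction = {"
                    + "   'class': 'TimeWindowCompactionStrategy',"
                    + "   'compaction_window_unit': 'HOURS',"
                    + "   'compaction_window_size': '1' }");
            } finally {
                cluster.close();
            }
        }
    }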

Generally the data fits into memory, all reads are served from OS page
cache, and the cluster works fine. However, we had a few unexplained
incidents:

   1. High page fault ratio: This happened once, lasted 3-4 days, and was
   resolved after we restarted the cluster. We have not been able to reproduce it
   since.
   2. High number of bloom filter false positives: Same as above.

Several questions:

   1. What could have caused the page fault, and/or bloom filter false
   positives?
   2. What's the right strategy for running repairs?
  1. Are repairs even required? We don't generate any tombstones.
   2. The following article suggests that incremental repairs should not
   be used with date-tiered compaction; does it also apply to TWCS?
  
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRepairNodesManualRepair.html

Cheers,
Eugene


EOFException in bulkloader, then IllegalStateException

2014-01-27 Thread Erik Forsberg

Hi!

I'm bulkloading from Hadoop to Cassandra. We are currently in the process of 
moving to new hardware for both Hadoop and Cassandra, and while 
test-running the bulk load, I see the following error:


Exception in thread "Streaming to /2001:4c28:1:413:0:1:1:12:1" 
java.lang.RuntimeException: java.io.EOFException at 
com.google.common.base.Throwables.propagate(Throwables.java:155) at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) 
at java.lang.Thread.run(Thread.java:662) Caused by: java.io.EOFException 
at java.io.DataInputStream.readInt(DataInputStream.java:375) at 
org.apache.cassandra.streaming.FileStreamTask.receiveReply(FileStreamTask.java:193) 
at 
org.apache.cassandra.streaming.FileStreamTask.stream(FileStreamTask.java:180) 
at 
org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91) 
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
... 3 more


I see no exceptions related to this on the destination node 
(2001:4c28:1:413:0:1:1:12:1).


This makes the whole map task fail with:

2014-01-27 10:46:50,878 ERROR org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:forsberg (auth:SIMPLE) cause:java.io.IOException: 
Too many hosts failed: [/2001:4c28:1:413:0:1:1:12]
2014-01-27 10:46:50,878 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: Too many hosts failed: [/2001:4c28:1:413:0:1:1:12]
at 
org.apache.cassandra.hadoop.BulkRecordWriter.close(BulkRecordWriter.java:244)
at 
org.apache.cassandra.hadoop.BulkRecordWriter.close(BulkRecordWriter.java:209)
at 
org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:540)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:650)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:322)
at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
at org.apache.hadoop.mapred.Child.main(Child.java:260)
2014-01-27 10:46:50,880 INFO org.apache.hadoop.mapred.Task: Runnning cleanup 
for the task

The failed task was on hadoop worker node hdp01-12-4.

However, hadoop later retries this map task on a different hadoop worker node 
(hdp01-10-2), and that retry succeeds.

So that's weird, but I could live with it. Now, however, comes the real trouble 
- the hadoop job does not finish due to one task running on hdp01-12-4 being 
stuck with this:

Exception in thread "Streaming to /2001:4c28:1:413:0:1:1:12:1" 
java.lang.IllegalStateException: target reports current file is 
/opera/log2/hadoop/mapred/local/taskTracker/forsberg/jobcache/job_201401161243_0288/attempt_201401161243_0288_m_000473_0/work/tmp/iceland_test/Data_hourly/iceland_test-Data_hourly-ib-1-Data.db
 but is 
/opera/log6/hadoop/mapred/local/taskTracker/forsberg/jobcache/job_201401161243_0288/attempt_201401161243_0288_m_00_0/work/tmp/iceland_test/Data_hourly/iceland_test-Data_hourly-ib-1-Data.db
at 
org.apache.cassandra.streaming.StreamOutSession.validateCurrentFile(StreamOutSession.java:154)
at 
org.apache.cassandra.streaming.StreamReplyVerbHandler.doVerb(StreamReplyVerbHandler.java:45)
at 
org.apache.cassandra.streaming.FileStreamTask.receiveReply(FileStreamTask.java:199)
at 
org.apache.cassandra.streaming.FileStreamTask.stream(FileStreamTask.java:180)
at 
org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)

This just sits there forever, or at least until the hadoop task timeout kicks 
in.

So two questions here:

1) Any clues on what might cause the first EOFException? It seems to appear for 
*some* of my bulkloads. Not all, but frequently enough to be a problem; roughly 
every 10th bulkload I do seems to have it.

2) The second problem I have a feeling could be related to 
https://issues.apache.org/jira/browse/CASSANDRA-4223, but with the extra quirk 
that with the bulkload case, we have *multiple java processes* creating 
streaming sessions on the same host, so streaming session IDs are not unique.

I'm thinking 2) happens because the EOFException made the streaming session in 
1) sit around on the target node without being closed.

This is on Cassandra 1.2.1. I know that's pretty old, but I would like to avoid 
upgrading until I have made this migration from old to new hardware. Upgrading 
to 1.2.13 might be an option.

Re: EOFException in bulkloader, then IllegalStateException

2014-01-27 Thread Erik Forsberg

On 2014-01-27 12:56, Erik Forsberg wrote:
This is on Cassandra 1.2.1. I know that's pretty old, but I would like 
to avoid upgrading until I have made this migration from old to new 
hardware. Upgrading to 1.2.13 might be an option.


Update: Exactly the same behaviour on Cassandra 1.2.13.

Thanks,
\EF


Re: EOFException in bulkloader, then IllegalStateException

2014-01-27 Thread Robert Coli
On Mon, Jan 27, 2014 at 5:44 AM, Erik Forsberg forsb...@opera.com wrote:

  On 2014-01-27 12:56, Erik Forsberg wrote:

 This is on Cassandra 1.2.1. I know that's pretty old, but I would like to
 avoid upgrading until I have made this migration from old to new hardware.
 Upgrading to 1.2.13 might be an option.


 Update: Exactly the same behaviour on Cassandra 1.2.13.


If I were you, I would:

1) search for existing issues on JIRA
2) failing to find anything in 1), file a JIRA with repro details

=Rob


Re: Issues running Bulkloader program on AIX server

2013-04-05 Thread aaron morton
 Caused by: java.lang.UnsatisfiedLinkError: snappyjava (Not found in 
 java.library.path)

You do not have the snappy compression library installed. 

http://www.datastax.com/docs/1.1/troubleshooting/index#cannot-initialize-class-org-xerial-snappy-snappy
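
Before re-running the bulk loader it can help to confirm whether the native
library loads at all, independently of Cassandra. A minimal sketch, assuming
snappy-java is on the classpath (it is when you run with the Cassandra jars):

    public class SnappyCheck {
        public static void main(String[] args) {
            try {
                // Forces snappy-java to locate and load the native libsnappyjava for this OS/arch.
                byte[] compressed = org.xerial.snappy.Snappy.compress("hello snappy".getBytes("UTF-8"));
                System.out.println("Snappy native library loaded, compressed to " + compressed.length + " bytes");
            } catch (Throwable t) {
                // An UnsatisfiedLinkError here means no native library for this platform was found
                // in the jar or on java.library.path, which is the AIX symptom above.
                System.err.println("Snappy native library not available: " + t);
            }
        }
    }

If this fails the same way, the fix is a snappy native library built for AIX
made visible via java.library.path, as described in the troubleshooting link
above.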

Cheers

-
Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 4/04/2013, at 1:36 PM, praveen.akun...@wipro.com wrote:

 Hi All, 
 
 Sorry, my environment is as below: 
 
 3 node cluster with Cassandra 1.1.9 provided with DSE 3.0 on Linux
 We are trying to run the bulk loader from AIX 6.1 server. Java version 1.5. 
 
 Regards, 
 Praveen
 
 From: Praveen Akunuru praveen.akun...@wipro.com
 Date: Thursday, April 4, 2013 12:21 PM
 To: user@cassandra.apache.org user@cassandra.apache.org
 Subject: Issues running Bulkloader program on AIX server
 
 Hi All, 
 
 I am facing issues running the Java Bulkloader program from an AIX server. 
 The program works fine on a Linux server. I am receiving the error below 
 on AIX. Can anyone help me get this working?
 
 java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:317)
 at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:219)
 at org.xerial.snappy.Snappy.clinit(Snappy.java:44)
 at java.lang.J9VMInternals.initializeImpl(Native Method)
 at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
 at 
 org.apache.cassandra.io.compress.SnappyCompressor.create(SnappyCompressor.java:45)
 at 
 org.apache.cassandra.io.compress.SnappyCompressor.isAvailable(SnappyCompressor.java:55)
 at 
 org.apache.cassandra.io.compress.SnappyCompressor.clinit(SnappyCompressor.java:37)
 at java.lang.J9VMInternals.initializeImpl(Native Method)
 at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
 at org.apache.cassandra.config.CFMetaData.clinit(CFMetaData.java:82)
 at java.lang.J9VMInternals.initializeImpl(Native Method)
 at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
 at 
 org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.init(SSTableSimpleUnsortedWriter.java:80)
 at 
 org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.init(SSTableSimpleUnsortedWriter.java:93)
 at BulkLoadExample.main(BulkLoadExample.java:55)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
 at java.lang.reflect.Method.invoke(Method.java:611)
 at 
 org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:58)
 Caused by: java.lang.UnsatisfiedLinkError: snappyjava (Not found in 
 java.library.path)
 at java.lang.ClassLoader.loadLibraryWithPath(ClassLoader.java:1011)
 at 
 java.lang.ClassLoader.loadLibraryWithClassLoader(ClassLoader.java:975)
 at java.lang.System.loadLibrary(System.java:469)
 at 
 org.xerial.snappy.SnappyNativeLoader.loadLibrary(SnappyNativeLoader.java:52)
 ... 25 more
 log4j:WARN No appenders could be found for logger 
 (org.apache.cassandra.io.compress.SnappyCompressor).
 log4j:WARN Please initialize the log4j system properly.
 log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
 info.
 Unhandled exception
 Type=Segmentation error vmState=0x
 J9Generic_Signal_Number=0004 Signal_Number=000b Error_Value= 
 Signal_Code=0032
 Handler1=09001000A06FF5A0 Handler2=09001000A06F60F0
 
 Regards, 
 Praveen

Issues running Bulkloader program on AIX server

2013-04-04 Thread praveen.akunuru
Hi All,

I am facing issues running the Java Bulkloader program from an AIX server. The 
program works fine on a Linux server. I am receiving the error below on AIX. 
Can anyone help me get this working?

java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at 
org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:317)
at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:219)
at org.xerial.snappy.Snappy.<clinit>(Snappy.java:44)
at java.lang.J9VMInternals.initializeImpl(Native Method)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
at 
org.apache.cassandra.io.compress.SnappyCompressor.create(SnappyCompressor.java:45)
at 
org.apache.cassandra.io.compress.SnappyCompressor.isAvailable(SnappyCompressor.java:55)
at 
org.apache.cassandra.io.compress.SnappyCompressor.<clinit>(SnappyCompressor.java:37)
at java.lang.J9VMInternals.initializeImpl(Native Method)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
at org.apache.cassandra.config.CFMetaData.<clinit>(CFMetaData.java:82)
at java.lang.J9VMInternals.initializeImpl(Native Method)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
at 
org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.<init>(SSTableSimpleUnsortedWriter.java:80)
at 
org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.<init>(SSTableSimpleUnsortedWriter.java:93)
at BulkLoadExample.main(BulkLoadExample.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at 
org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:58)
Caused by: java.lang.UnsatisfiedLinkError: snappyjava (Not found in 
java.library.path)
at java.lang.ClassLoader.loadLibraryWithPath(ClassLoader.java:1011)
at 
java.lang.ClassLoader.loadLibraryWithClassLoader(ClassLoader.java:975)
at java.lang.System.loadLibrary(System.java:469)
at 
org.xerial.snappy.SnappyNativeLoader.loadLibrary(SnappyNativeLoader.java:52)
... 25 more
log4j:WARN No appenders could be found for logger 
(org.apache.cassandra.io.compress.SnappyCompressor).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
Unhandled exception
Type=Segmentation error vmState=0x
J9Generic_Signal_Number=0004 Signal_Number=000b Error_Value= 
Signal_Code=0032
Handler1=09001000A06FF5A0 Handler2=09001000A06F60F0

Regards,
Praveen



Re: Issues running Bulkloader program on AIX server

2013-04-04 Thread praveen.akunuru
Hi All,

Sorry, my environment is as below:


  1.  A 3-node cluster with Cassandra 1.1.9 provided with DSE 3.0 on Linux
  2.  We are trying to run the bulk loader from an AIX 6.1 server, with Java 
version 1.5.

Regards,
Praveen

From: Praveen Akunuru praveen.akun...@wipro.com
Date: Thursday, April 4, 2013 12:21 PM
To: user@cassandra.apache.org
Subject: Issues running Bulkloader program on AIX server

Hi All,

I am facing issues running the Java Bulkloader program from an AIX server. The 
program works fine on a Linux server. I am receiving the error below on AIX. 
Can anyone help me get this working?

java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at 
org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:317)
at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:219)
at org.xerial.snappy.Snappy.clinit(Snappy.java:44)
at java.lang.J9VMInternals.initializeImpl(Native Method)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
at 
org.apache.cassandra.io.compress.SnappyCompressor.create(SnappyCompressor.java:45)
at 
org.apache.cassandra.io.compress.SnappyCompressor.isAvailable(SnappyCompressor.java:55)
at 
org.apache.cassandra.io.compress.SnappyCompressor.clinit(SnappyCompressor.java:37)
at java.lang.J9VMInternals.initializeImpl(Native Method)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
at org.apache.cassandra.config.CFMetaData.clinit(CFMetaData.java:82)
at java.lang.J9VMInternals.initializeImpl(Native Method)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
at 
org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.init(SSTableSimpleUnsortedWriter.java:80)
at 
org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.init(SSTableSimpleUnsortedWriter.java:93)
at BulkLoadExample.main(BulkLoadExample.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at 
org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:58)
Caused by: java.lang.UnsatisfiedLinkError: snappyjava (Not found in 
java.library.path)
at java.lang.ClassLoader.loadLibraryWithPath(ClassLoader.java:1011)
at 
java.lang.ClassLoader.loadLibraryWithClassLoader(ClassLoader.java:975)
at java.lang.System.loadLibrary(System.java:469)
at 
org.xerial.snappy.SnappyNativeLoader.loadLibrary(SnappyNativeLoader.java:52)
... 25 more
log4j:WARN No appenders could be found for logger 
(org.apache.cassandra.io.compress.SnappyCompressor).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
Unhandled exception
Type=Segmentation error vmState=0x
J9Generic_Signal_Number=0004 Signal_Number=000b Error_Value= 
Signal_Code=0032
Handler1=09001000A06FF5A0 Handler2=09001000A06F60F0

Regards,
Praveen



Re: BulkLoader

2011-12-09 Thread Alain RODRIGUEZ
Hi, I'm running a 4-node Cassandra cluster, and I'm facing the same
problem (the node is not present in nodetool ring, but shows as unreachable in
CLI describe cluster...). I'm currently running version 1.0.2, but I upgraded
from 0.8.x, so the problem may have existed for a while, I don't really know.
I can't stop my cluster, and I don't know how to apply the patch to repair the
consequences of this bug by removing this node definitively.

Can someone tell me how to proceed to apply the patch and call this Java
function?

Thank you,

Alain

2011/11/18 Giannis Neokleous gian...@generalsentiment.com

 Thanks for the info Brandon. I'll do the upgrade once 0.8.8 is released.


 On Wed, Nov 16, 2011 at 2:43 PM, Brandon Williams dri...@gmail.comwrote:

 On Mon, Nov 14, 2011 at 2:49 PM, Giannis Neokleous
 gian...@generalsentiment.com wrote:
  Hello everyone,
 
  We're using the bulk loader to load data every day to Cassandra. The
  machines that use the bulkloader are diferent every day so their IP
  addresses change. When I do describe cluster i see all the unreachable
  nodes that keep piling up for the past few days. Is there a way to
 remove
  those IP addresses without terminating the whole cluster at the same
 time
  and restarting it?
 
  The unreachable nodes cause issues when we want to make schema changes
 to
  all the nodes or when we want to truncate a CF.
 
  Any suggestions?


 It sounds like you're running into
 https://issues.apache.org/jira/browse/CASSANDRA-3351 so the first step
 would be to upgrade to a version that has it fixed.

 Unfortunately, this won't solve the problem, just prevent it from
 happening in the future.  To remove the old nodes, you can apply
 https://issues.apache.org/jira/browse/CASSANDRA-3337 on one node and
 call the JMX method for the unreachable endpoints.

 -Brandon





Re: BulkLoader

2011-12-09 Thread Alain RODRIGUEZ
By the way, nice comment in the patch: "// do not pass go, do not collect
200 dollars, just gtfo". It looks like you have some fun while developing
Cassandra @Datastax ;)

Alain

2011/12/9 Alain RODRIGUEZ arodr...@gmail.com

 Hi, I'm running a 4 nodes Cassandra cluster, and I'm facing the same
 problem (node not present on nodetool ring, but unreachable on CLI describe
 cluster...). I'm currently running version 1.0.2, but I have update from
 0.8.x, the problem may exist since a while, I don't really know. I can't
 stop my cluster, but I don't know how to apply the patch to repair the
 consequences of this bug by removing definitely this node.

 Can someone tell me the way to proceed to apply a patch and call this java
 function ?

 Thank you,

 Alain

 2011/11/18 Giannis Neokleous gian...@generalsentiment.com

 Thanks for the info Brandon. I'll do the upgrade once 0.8.8 is released.


 On Wed, Nov 16, 2011 at 2:43 PM, Brandon Williams dri...@gmail.comwrote:

 On Mon, Nov 14, 2011 at 2:49 PM, Giannis Neokleous
 gian...@generalsentiment.com wrote:
  Hello everyone,
 
  We're using the bulk loader to load data every day to Cassandra. The
  machines that use the bulkloader are diferent every day so their IP
  addresses change. When I do describe cluster i see all the
 unreachable
  nodes that keep piling up for the past few days. Is there a way to
 remove
  those IP addresses without terminating the whole cluster at the same
 time
  and restarting it?
 
  The unreachable nodes cause issues when we want to make schema changes
 to
  all the nodes or when we want to truncate a CF.
 
  Any suggestions?


 It sounds like you're running into
 https://issues.apache.org/jira/browse/CASSANDRA-3351 so the first step
 would be to upgrade to a version that has it fixed.

 Unfortunately, this won't solve the problem, just prevent it from
 happening in the future.  To remove the old nodes, you can apply
 https://issues.apache.org/jira/browse/CASSANDRA-3337 on one node and
 call the JMX method for the unreachable endpoints.

 -Brandon






Re: BulkLoader

2011-11-18 Thread Giannis Neokleous
Thanks for the info Brandon. I'll do the upgrade once 0.8.8 is released.

On Wed, Nov 16, 2011 at 2:43 PM, Brandon Williams dri...@gmail.com wrote:

 On Mon, Nov 14, 2011 at 2:49 PM, Giannis Neokleous
 gian...@generalsentiment.com wrote:
  Hello everyone,
 
  We're using the bulk loader to load data every day to Cassandra. The
  machines that use the bulkloader are diferent every day so their IP
  addresses change. When I do describe cluster i see all the unreachable
  nodes that keep piling up for the past few days. Is there a way to remove
  those IP addresses without terminating the whole cluster at the same time
  and restarting it?
 
  The unreachable nodes cause issues when we want to make schema changes to
  all the nodes or when we want to truncate a CF.
 
  Any suggestions?


 It sounds like you're running into
 https://issues.apache.org/jira/browse/CASSANDRA-3351 so the first step
 would be to upgrade to a version that has it fixed.

 Unfortunately, this won't solve the problem, just prevent it from
 happening in the future.  To remove the old nodes, you can apply
 https://issues.apache.org/jira/browse/CASSANDRA-3337 on one node and
 call the JMX method for the unreachable endpoints.

 -Brandon



Re: BulkLoader

2011-11-16 Thread Brandon Williams
On Mon, Nov 14, 2011 at 2:49 PM, Giannis Neokleous
gian...@generalsentiment.com wrote:
 Hello everyone,

 We're using the bulk loader to load data every day to Cassandra. The
 machines that use the bulkloader are diferent every day so their IP
 addresses change. When I do describe cluster i see all the unreachable
 nodes that keep piling up for the past few days. Is there a way to remove
 those IP addresses without terminating the whole cluster at the same time
 and restarting it?

 The unreachable nodes cause issues when we want to make schema changes to
 all the nodes or when we want to truncate a CF.

 Any suggestions?


It sounds like you're running into
https://issues.apache.org/jira/browse/CASSANDRA-3351 so the first step
would be to upgrade to a version that has it fixed.

Unfortunately, this won't solve the problem, just prevent it from
happening in the future.  To remove the old nodes, you can apply
https://issues.apache.org/jira/browse/CASSANDRA-3337 on one node and
call the JMX method for the unreachable endpoints.

-Brandon
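
For anyone else hitting this, a rough sketch of what calling that JMX method
from a standalone client might look like. This assumes the CASSANDRA-3337 patch
exposes the operation as unsafeAssassinateEndpoint(String) on the Gossiper
MBean, which is how later Cassandra releases (and eventually nodetool
assassinate) expose it; the host, port and stale IP below are made up:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class AssassinateStaleEndpoint {
        public static void main(String[] args) throws Exception {
            // Connect to the patched node over the default Cassandra JMX port.
            JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://10.0.0.1:7199/jmxrmi");
            try (JMXConnector jmx = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = jmx.getMBeanServerConnection();
                ObjectName gossiper = new ObjectName("org.apache.cassandra.net:type=Gossiper");
                // Force-remove the stale bulkloader IP from gossip state so it stops
                // showing up as unreachable in describe cluster.
                mbs.invoke(gossiper, "unsafeAssassinateEndpoint",
                        new Object[] { "10.0.0.99" },
                        new String[] { "java.lang.String" });
            }
        }
    }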


Re: BulkLoader

2011-11-15 Thread Giannis Neokleous
Hi Ernie,

The nodes are not part of the ring, so I don't think removetoken will help.
They're only marked as unreachable when I call describe cluster. When
I do nodetool ring, the nodes don't show up there.

-Giannis

On Mon, Nov 14, 2011 at 10:55 PM, ehers...@gmail.com ehers...@gmail.comwrote:

 Giannis,

 From here:
 http://wiki.apache.org/cassandra/Operations#Removing_nodes_entirely

 Have you tried nodetool removetoken ?

 Ernie


 On Mon, Nov 14, 2011 at 4:20 PM, mike...@thomsonreuters.com wrote:

 Hello Giannis,

 Can you share a little bit about how to use the bulk loader? We're
 considering using the bulk loader for a use case.

 Thanks,

 Mike

 *From:* Giannis Neokleous [mailto:gian...@generalsentiment.com]
 *Sent:* Monday, November 14, 2011 2:50 PM
 *To:* user@cassandra.apache.org
 *Subject:* BulkLoader


 Hello everyone,

 We're using the bulk loader to load data every day to Cassandra. The
 machines that use the bulkloader are diferent every day so their IP
 addresses change. When I do describe cluster i see all the unreachable
 nodes that keep piling up for the past few days. Is there a way to remove
 those IP addresses without terminating the whole cluster at the same time
 and restarting it?

 The unreachable nodes cause issues when we want to make schema changes to
 all the nodes or when we want to truncate a CF.

 Any suggestions?

 -Giannis






Re: BulkLoader

2011-11-15 Thread Giannis Neokleous
Hi Mike,

I'll try and write a blog post soon about it and share some information.

-Giannis

On Tue, Nov 15, 2011 at 7:49 AM, Giannis Neokleous 
gian...@generalsentiment.com wrote:

 Hi Ernie,

 The nodes are not part of the ring so I don't think remove token will
 help. They're just marked as unreachable only when I call describe cluster.
 When I do nodetool ring the nodes don't show up there.

 -Giannis


 On Mon, Nov 14, 2011 at 10:55 PM, ehers...@gmail.com 
 ehers...@gmail.comwrote:

 Giannis,

 From here:
 http://wiki.apache.org/cassandra/Operations#Removing_nodes_entirely

 Have you tried nodetool removetoken ?

 Ernie


 On Mon, Nov 14, 2011 at 4:20 PM, mike...@thomsonreuters.com wrote:

 Hello Giannis,

 Can you share a little bit about how to use the bulk loader? We're
 considering using the bulk loader for a use case.

 Thanks,

 Mike

 *From:* Giannis Neokleous [mailto:gian...@generalsentiment.com]
 *Sent:* Monday, November 14, 2011 2:50 PM
 *To:* user@cassandra.apache.org
 *Subject:* BulkLoader


 Hello everyone,

 We're using the bulk loader to load data every day to Cassandra. The
 machines that use the bulkloader are diferent every day so their IP
 addresses change. When I do describe cluster i see all the unreachable
 nodes that keep piling up for the past few days. Is there a way to remove
 those IP addresses without terminating the whole cluster at the same time
 and restarting it?

 The unreachable nodes cause issues when we want to make schema changes
 to all the nodes or when we want to truncate a CF.

 Any suggestions?

 -Giannis







RE: BulkLoader

2011-11-15 Thread mike.li
Thanks, Giannis.  Looking forward to ...

Mike

From: Giannis Neokleous [mailto:gian...@generalsentiment.com]
Sent: Tuesday, November 15, 2011 7:01 AM
To: ehers...@gmail.com
Cc: user@cassandra.apache.org
Subject: Re: BulkLoader

Hi Mike,

I'll try and write a blog post soon about it and share some information.

-Giannis
On Tue, Nov 15, 2011 at 7:49 AM, Giannis Neokleous 
gian...@generalsentiment.com wrote:
Hi Ernie,

The nodes are not part of the ring so I don't think remove token will help. 
They're just marked as unreachable only when I call describe cluster. When I do 
nodetool ring the nodes don't show up there.

-Giannis

On Mon, Nov 14, 2011 at 10:55 PM, ehers...@gmail.com wrote:
Giannis,

From here:
http://wiki.apache.org/cassandra/Operations#Removing_nodes_entirely

Have you tried nodetool removetoken ?

Ernie


On Mon, Nov 14, 2011 at 4:20 PM, mike...@thomsonreuters.com wrote:
Hello Giannis,

Can you share a little bit about how to use the bulk loader? We're considering 
using the bulk loader for a use case.

Thanks,
Mike

From: Giannis Neokleous [mailto:gian...@generalsentiment.com]
Sent: Monday, November 14, 2011 2:50 PM
To: user@cassandra.apache.org
Subject: BulkLoader

Hello everyone,

We're using the bulk loader to load data every day to Cassandra. The machines 
that use the bulkloader are diferent every day so their IP addresses change. 
When I do describe cluster i see all the unreachable nodes that keep piling 
up for the past few days. Is there a way to remove those IP addresses without 
terminating the whole cluster at the same time and restarting it?

The unreachable nodes cause issues when we want to make schema changes to all 
the nodes or when we want to truncate a CF.

Any suggestions?

-Giannis


BulkLoader

2011-11-14 Thread Giannis Neokleous
Hello everyone,

We're using the bulk loader to load data into Cassandra every day. The
machines that run the bulkloader are different every day, so their IP
addresses change. When I do "describe cluster" I see all the unreachable
nodes that have kept piling up for the past few days. Is there a way to remove
those IP addresses without terminating the whole cluster at the same time
and restarting it?

The unreachable nodes cause issues when we want to make schema changes to
all the nodes or when we want to truncate a CF.

Any suggestions?

-Giannis


RE: BulkLoader

2011-11-14 Thread mike.li
Hello Giannis,

Can you share a little bit about how to use the bulk loader? We're considering 
using the bulk loader for a use case.

Thanks,
Mike

From: Giannis Neokleous [mailto:gian...@generalsentiment.com]
Sent: Monday, November 14, 2011 2:50 PM
To: user@cassandra.apache.org
Subject: BulkLoader

Hello everyone,

We're using the bulk loader to load data every day to Cassandra. The machines 
that use the bulkloader are diferent every day so their IP addresses change. 
When I do describe cluster i see all the unreachable nodes that keep piling 
up for the past few days. Is there a way to remove those IP addresses without 
terminating the whole cluster at the same time and restarting it?

The unreachable nodes cause issues when we want to make schema changes to all 
the nodes or when we want to truncate a CF.

Any suggestions?

-Giannis


Re: BulkLoader

2011-11-14 Thread ehers...@gmail.com
Giannis,

From here:
http://wiki.apache.org/cassandra/Operations#Removing_nodes_entirely

Have you tried nodetool removetoken ?

Ernie


On Mon, Nov 14, 2011 at 4:20 PM, mike...@thomsonreuters.com wrote:

 Hello Giannis,

 Can you share a little bit about how to use the bulk loader? We're
 considering using the bulk loader for a use case.

 Thanks,

 Mike

 *From:* Giannis Neokleous [mailto:gian...@generalsentiment.com]
 *Sent:* Monday, November 14, 2011 2:50 PM
 *To:* user@cassandra.apache.org
 *Subject:* BulkLoader


 Hello everyone,

 We're using the bulk loader to load data every day to Cassandra. The
 machines that use the bulkloader are diferent every day so their IP
 addresses change. When I do describe cluster i see all the unreachable
 nodes that keep piling up for the past few days. Is there a way to remove
 those IP addresses without terminating the whole cluster at the same time
 and restarting it?

 The unreachable nodes cause issues when we want to make schema changes to
 all the nodes or when we want to truncate a CF.

 Any suggestions?

 -Giannis



BulkLoader

2011-07-13 Thread Stephen Pope
 I'm trying to figure out how to use the BulkLoader, and it looks like there's 
no way to run it against a local machine, because of this:

    Set<InetAddress> hosts = Gossiper.instance.getLiveMembers();
    hosts.remove(FBUtilities.getLocalAddress());
    if (hosts.isEmpty())
        throw new IllegalStateException("Cannot load any sstable, no live member found in the cluster");

 Is this intended behavior? May I ask why? We'd like to be able to run it 
against the local machine.

 Cheers,
 Steve


RE: BulkLoader

2011-07-13 Thread Stephen Pope
 I think I've solved my own problem here. After generating the sstable using 
json2sstable it looks like I can simply copy the created sstable into my data 
directory.

 Can anyone think of any potential problems with doing it this way?

-Original Message-
From: Stephen Pope [mailto:stephen.p...@quest.com] 
Sent: Wednesday, July 13, 2011 9:32 AM
To: user@cassandra.apache.org
Subject: BulkLoader

 I'm trying to figure out how to use the BulkLoader, and it looks like there's 
no way to run it against a local machine, because of this:

    Set<InetAddress> hosts = Gossiper.instance.getLiveMembers();
    hosts.remove(FBUtilities.getLocalAddress());
    if (hosts.isEmpty())
        throw new IllegalStateException("Cannot load any sstable, no live member found in the cluster");

 Is this intended behavior? May I ask why? We'd like to be able to run it 
against the local machine.

 Cheers,
 Steve


Re: BulkLoader

2011-07-13 Thread Jonathan Ellis
Sure, that will work fine with a single machine.  The advantage of
bulkloader is it handles splitting the sstable up and sending each
piece to the right place(s) when you have more than one.

On Wed, Jul 13, 2011 at 7:47 AM, Stephen Pope stephen.p...@quest.com wrote:
  I think I've solved my own problem here. After generating the sstable using 
 json2sstable it looks like I can simply copy the created sstable into my data 
 directory.

  Can anyone think of any potential problems with doing it this way?

 -Original Message-
 From: Stephen Pope [mailto:stephen.p...@quest.com]
 Sent: Wednesday, July 13, 2011 9:32 AM
 To: user@cassandra.apache.org
 Subject: BulkLoader

  I'm trying to figure out how to use the BulkLoader, and it looks like 
 there's no way to run it against a local machine, because of this:

                Set<InetAddress> hosts = Gossiper.instance.getLiveMembers();
                hosts.remove(FBUtilities.getLocalAddress());
                if (hosts.isEmpty())
                    throw new IllegalStateException("Cannot load any sstable, no live member found in the cluster");

  Is this intended behavior? May I ask why? We'd like to be able to run it 
 against the local machine.

  Cheers,
  Steve




-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


RE: BulkLoader

2011-07-13 Thread Stephen Pope
 Fair enough. My original question stands then. :) 

 Why aren't you allowed to talk to a local installation using BulkLoader?

-Original Message-
From: Jonathan Ellis [mailto:jbel...@gmail.com] 
Sent: Wednesday, July 13, 2011 11:06 AM
To: user@cassandra.apache.org
Subject: Re: BulkLoader

Sure, that will work fine with a single machine.  The advantage of
bulkloader is it handles splitting the sstable up and sending each
piece to the right place(s) when you have more than one.

On Wed, Jul 13, 2011 at 7:47 AM, Stephen Pope stephen.p...@quest.com wrote:
  I think I've solved my own problem here. After generating the sstable using 
 json2sstable it looks like I can simply copy the created sstable into my data 
 directory.

  Can anyone think of any potential problems with doing it this way?

 -Original Message-
 From: Stephen Pope [mailto:stephen.p...@quest.com]
 Sent: Wednesday, July 13, 2011 9:32 AM
 To: user@cassandra.apache.org
 Subject: BulkLoader

  I'm trying to figure out how to use the BulkLoader, and it looks like 
 there's no way to run it against a local machine, because of this:

                Set<InetAddress> hosts = Gossiper.instance.getLiveMembers();
                hosts.remove(FBUtilities.getLocalAddress());
                if (hosts.isEmpty())
                    throw new IllegalStateException("Cannot load any sstable, no live member found in the cluster");

  Is this intended behavior? May I ask why? We'd like to be able to run it 
 against the local machine.

  Cheers,
  Steve




-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


Re: BulkLoader

2011-07-13 Thread Jonathan Ellis
Because it's hooking directly into gossip, the local instance it's
ignoring is the bulkloader process, not Cassandra.

You'd need to run the bulkloader from a different IP than Cassandra.

On Wed, Jul 13, 2011 at 8:22 AM, Stephen Pope stephen.p...@quest.com wrote:
  Fair enough. My original question stands then. :)

  Why aren't you allowed to talk to a local installation using BulkLoader?

 -Original Message-
 From: Jonathan Ellis [mailto:jbel...@gmail.com]
 Sent: Wednesday, July 13, 2011 11:06 AM
 To: user@cassandra.apache.org
 Subject: Re: BulkLoader

 Sure, that will work fine with a single machine.  The advantage of
 bulkloader is it handles splitting the sstable up and sending each
 piece to the right place(s) when you have more than one.

 On Wed, Jul 13, 2011 at 7:47 AM, Stephen Pope stephen.p...@quest.com wrote:
  I think I've solved my own problem here. After generating the sstable using 
 json2sstable it looks like I can simply copy the created sstable into my 
 data directory.

  Can anyone think of any potential problems with doing it this way?

 -Original Message-
 From: Stephen Pope [mailto:stephen.p...@quest.com]
 Sent: Wednesday, July 13, 2011 9:32 AM
 To: user@cassandra.apache.org
 Subject: BulkLoader

  I'm trying to figure out how to use the BulkLoader, and it looks like 
 there's no way to run it against a local machine, because of this:

                Set<InetAddress> hosts = Gossiper.instance.getLiveMembers();
                hosts.remove(FBUtilities.getLocalAddress());
                if (hosts.isEmpty())
                    throw new IllegalStateException("Cannot load any sstable, no live member found in the cluster");

  Is this intended behavior? May I ask why? We'd like to be able to run it 
 against the local machine.

  Cheers,
  Steve




 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of DataStax, the source for professional Cassandra support
 http://www.datastax.com




-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


RE: BulkLoader

2011-07-13 Thread Stephen Pope
 Ahhh..ok. Thanks.

-Original Message-
From: Jonathan Ellis [mailto:jbel...@gmail.com] 
Sent: Wednesday, July 13, 2011 11:35 AM
To: user@cassandra.apache.org
Subject: Re: BulkLoader

Because it's hooking directly into gossip, so the local instance it's
ignoring is the bulkloader process, not Cassandra.

You'd need to run the bulkloader from a different IP, than Cassandra.

On Wed, Jul 13, 2011 at 8:22 AM, Stephen Pope stephen.p...@quest.com wrote:
  Fair enough. My original question stands then. :)

  Why aren't you allowed to talk to a local installation using BulkLoader?

 -Original Message-
 From: Jonathan Ellis [mailto:jbel...@gmail.com]
 Sent: Wednesday, July 13, 2011 11:06 AM
 To: user@cassandra.apache.org
 Subject: Re: BulkLoader

 Sure, that will work fine with a single machine.  The advantage of
 bulkloader is it handles splitting the sstable up and sending each
 piece to the right place(s) when you have more than one.

 On Wed, Jul 13, 2011 at 7:47 AM, Stephen Pope stephen.p...@quest.com wrote:
  I think I've solved my own problem here. After generating the sstable using 
 json2sstable it looks like I can simply copy the created sstable into my 
 data directory.

  Can anyone think of any potential problems with doing it this way?

 -Original Message-
 From: Stephen Pope [mailto:stephen.p...@quest.com]
 Sent: Wednesday, July 13, 2011 9:32 AM
 To: user@cassandra.apache.org
 Subject: BulkLoader

  I'm trying to figure out how to use the BulkLoader, and it looks like 
 there's no way to run it against a local machine, because of this:

                Set<InetAddress> hosts = Gossiper.instance.getLiveMembers();
                hosts.remove(FBUtilities.getLocalAddress());
                if (hosts.isEmpty())
                    throw new IllegalStateException("Cannot load any sstable, no live member found in the cluster");

  Is this intended behavior? May I ask why? We'd like to be able to run it 
 against the local machine.

  Cheers,
  Steve




 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of DataStax, the source for professional Cassandra support
 http://www.datastax.com




-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


Re: BulkLoader

2011-07-13 Thread Sylvain Lebresne
Also note that if you have a Cassandra node running on the local node
from which you want to bulk load sstables, there is a JMX call
(StorageService.bulkLoad) to do just that. It may be simpler than
using sstableloader if that is what you want to do.

--
Sylvain

On Wed, Jul 13, 2011 at 3:46 PM, Stephen Pope stephen.p...@quest.com wrote:
  Ahhh..ok. Thanks.

 -Original Message-
 From: Jonathan Ellis [mailto:jbel...@gmail.com]
 Sent: Wednesday, July 13, 2011 11:35 AM
 To: user@cassandra.apache.org
 Subject: Re: BulkLoader

 Because it's hooking directly into gossip, so the local instance it's
 ignoring is the bulkloader process, not Cassandra.

 You'd need to run the bulkloader from a different IP, than Cassandra.

 On Wed, Jul 13, 2011 at 8:22 AM, Stephen Pope stephen.p...@quest.com wrote:
  Fair enough. My original question stands then. :)

  Why aren't you allowed to talk to a local installation using BulkLoader?

 -Original Message-
 From: Jonathan Ellis [mailto:jbel...@gmail.com]
 Sent: Wednesday, July 13, 2011 11:06 AM
 To: user@cassandra.apache.org
 Subject: Re: BulkLoader

 Sure, that will work fine with a single machine.  The advantage of
 bulkloader is it handles splitting the sstable up and sending each
 piece to the right place(s) when you have more than one.

 On Wed, Jul 13, 2011 at 7:47 AM, Stephen Pope stephen.p...@quest.com wrote:
  I think I've solved my own problem here. After generating the sstable 
 using json2sstable it looks like I can simply copy the created sstable into 
 my data directory.

  Can anyone think of any potential problems with doing it this way?

 -Original Message-
 From: Stephen Pope [mailto:stephen.p...@quest.com]
 Sent: Wednesday, July 13, 2011 9:32 AM
 To: user@cassandra.apache.org
 Subject: BulkLoader

  I'm trying to figure out how to use the BulkLoader, and it looks like 
 there's no way to run it against a local machine, because of this:

                Set<InetAddress> hosts = Gossiper.instance.getLiveMembers();
                hosts.remove(FBUtilities.getLocalAddress());
                if (hosts.isEmpty())
                    throw new IllegalStateException("Cannot load any sstable, no live member found in the cluster");

  Is this intended behavior? May I ask why? We'd like to be able to run it 
 against the local machine.

  Cheers,
  Steve




 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of DataStax, the source for professional Cassandra support
 http://www.datastax.com




 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of DataStax, the source for professional Cassandra support
 http://www.datastax.com



Re: BulkLoader

2011-07-13 Thread Sylvain Lebresne
I'll have to apologize on that one. I just saw that the JMX call I was
talking about doesn't work as it should.
I'll fix that for 0.8.2, but in the meantime you'll want to use
sstableloader from a different IP, as pointed out by Jonathan.

--
Sylvain
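
For reference, on versions where the call works as intended, invoking
StorageService.bulkLoad from a JMX client looks roughly like this; the host,
port and SSTable directory are placeholders, and the directory is expected to
live on the node itself, laid out as <keyspace>/<column family>/:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxBulkLoad {
        public static void main(String[] args) throws Exception {
            // 7199 is the default Cassandra JMX port; the host is an assumption.
            JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
            try (JMXConnector jmx = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = jmx.getMBeanServerConnection();
                ObjectName ss = new ObjectName("org.apache.cassandra.db:type=StorageService");
                // Ask the local node to stream the SSTables in this directory to wherever they belong.
                mbs.invoke(ss, "bulkLoad",
                        new Object[] { "/tmp/load/MyKeyspace/MyColumnFamily" },
                        new String[] { "java.lang.String" });
            }
        }
    }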

On Wed, Jul 13, 2011 at 5:11 PM, Sylvain Lebresne sylv...@datastax.com wrote:
 Also note that if you have a cassandra node running on the local node
 from which you want to bulk load sstables, there is a JMX
 (StorageService-bulkLoad) call to do just that. May be simpler than
 using sstableloader if that is what you want to do.

 --
 Sylvain

 On Wed, Jul 13, 2011 at 3:46 PM, Stephen Pope stephen.p...@quest.com wrote:
  Ahhh..ok. Thanks.

 -Original Message-
 From: Jonathan Ellis [mailto:jbel...@gmail.com]
 Sent: Wednesday, July 13, 2011 11:35 AM
 To: user@cassandra.apache.org
 Subject: Re: BulkLoader

 Because it's hooking directly into gossip, so the local instance it's
 ignoring is the bulkloader process, not Cassandra.

 You'd need to run the bulkloader from a different IP, than Cassandra.

 On Wed, Jul 13, 2011 at 8:22 AM, Stephen Pope stephen.p...@quest.com wrote:
  Fair enough. My original question stands then. :)

  Why aren't you allowed to talk to a local installation using BulkLoader?

 -Original Message-
 From: Jonathan Ellis [mailto:jbel...@gmail.com]
 Sent: Wednesday, July 13, 2011 11:06 AM
 To: user@cassandra.apache.org
 Subject: Re: BulkLoader

 Sure, that will work fine with a single machine.  The advantage of
 bulkloader is it handles splitting the sstable up and sending each
 piece to the right place(s) when you have more than one.

 On Wed, Jul 13, 2011 at 7:47 AM, Stephen Pope stephen.p...@quest.com 
 wrote:
  I think I've solved my own problem here. After generating the sstable 
 using json2sstable it looks like I can simply copy the created sstable 
 into my data directory.

  Can anyone think of any potential problems with doing it this way?

 -Original Message-
 From: Stephen Pope [mailto:stephen.p...@quest.com]
 Sent: Wednesday, July 13, 2011 9:32 AM
 To: user@cassandra.apache.org
 Subject: BulkLoader

  I'm trying to figure out how to use the BulkLoader, and it looks like 
 there's no way to run it against a local machine, because of this:

                Set<InetAddress> hosts = Gossiper.instance.getLiveMembers();
                hosts.remove(FBUtilities.getLocalAddress());
                if (hosts.isEmpty())
                    throw new IllegalStateException("Cannot load any sstable, no live member found in the cluster");

  Is this intended behavior? May I ask why? We'd like to be able to run it 
 against the local machine.

  Cheers,
  Steve




 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of DataStax, the source for professional Cassandra support
 http://www.datastax.com




 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of DataStax, the source for professional Cassandra support
 http://www.datastax.com