RE: Getting NoClassDefFoundError for com/datastax/spark/connector/mapper/ColumnMapper
Yes, it seems it was not taking the classpath for the Cassandra connector. I added it to the driver class path argument but ran into another error. The command I used is:

spark-submit --class ldCassandraTable ./target/scala-2.10/merlin-spark-cassandra-poc_2.10-0.0.1.jar /home/analytics/Documents/test_wfctotal.dat test_wfctotal --driver-class-path /home/analytics/Installers/spark-cassandra-connector-1.1.1/spark-cassandra-connector/target/scala-2.10/spark-cassandra-connector-assembly-1.1.1.jar

and I am now getting this new error:

Spark assembly has been built with Hive, including Datanucleus jars on classpath
15/04/03 13:46:44 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
:/home/analytics/Installers/spark-cassandra-connector-1.1.1/spark-cassandra-connector/target/scala-2.10/spark-cassandra-connector-assembly-1.1.1.jar:/home/analytics/Installers/spark-1.1.1/conf:/home/analytics/Installers/spark-1.1.1/assembly/target/scala-2.10/spark-assembly-1.1.1-hadoop1.0.4.jar:/home/analytics/Installers/spark-1.1.1/lib_managed/jars/datanucleus-rdbms-3.2.1.jar:/home/analytics/Installers/spark-1.1.1/lib_managed/jars/datanucleus-core-3.2.2.jar:/home/analytics/Installers/spark-1.1.1/lib_managed/jars/datanucleus-api-jdo-3.2.1.jar
15/04/03 13:46:46 WARN LoadSnappy: Snappy native library not loaded
Records Loaded to
15/04/03 13:46:54 ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(NODE02.int.kronos.com,60755) not found

From: Dave Brosius [mailto:dbros...@mebigfatguy.com]
Sent: Friday, April 03, 2015 9:15 AM
To: user@cassandra.apache.org
Subject: Re: Getting NoClassDefFoundError for com/datastax/spark/connector/mapper/ColumnMapper

This is what I meant by 'initial cause':

Caused by: java.lang.ClassNotFoundException: com.datastax.spark.connector.mapper.ColumnMapper

So it is in fact a classpath problem. Here is the class in question:
https://github.com/datastax/spark-cassandra-connector/blob/master/spark-cassandra-connector/src/main/scala/com/datastax/spark/connector/mapper/ColumnMapper.scala

Maybe it would be worthwhile to put this at the top of your main method:

System.out.println(System.getProperty("java.class.path"));

and show what that prints. What version of Cassandra and what version of the cassandra-spark connector are you using, by the way?

On 04/02/2015 11:16 PM, Tiwari, Tarun wrote:

Sorry I was unable to reply for a couple of days. I checked the error again and can't see any other initial cause. Here is the full error that is coming:

Exception in thread "main" java.lang.NoClassDefFoundError: com/datastax/spark/connector/mapper/ColumnMapper
 at ldCassandraTable.main(ld_Cassandra_tbl_Job.scala)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:329)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.datastax.spark.connector.mapper.ColumnMapper
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)

From: Dave Brosius [mailto:dbros...@mebigfatguy.com]
Sent: Tuesday, March 31, 2015 8:46 PM
To: user@cassandra.apache.org
Subject: Re: Getting NoClassDefFoundError for com/datastax/spark/connector/mapper/ColumnMapper

Is there an 'initial cause' listed under that exception you gave? NoClassDefFoundError is not exactly the same as ClassNotFoundException: it means ColumnMapper couldn't run its static initializer. That could be because some other class couldn't be found, or it could be some other non-classloader-related error.

On 2015-03-31 10:42, Tiwari, Tarun wrote:

Hi Experts,

I am getting java.lang.NoClassDefFoundError: com/datastax/spark/connector/mapper/ColumnMapper while running an app to load data to a Cassandra table using the DataStax spark connector. Is there something else I need to import in the program, or other dependencies?

RUNTIME ERROR:

Exception in thread "main" java.lang.NoClassDefFoundError: com/datastax/spark/connector/mapper/ColumnMapper
 at ldCassandraTable.main(ld_Cassandra_tbl_Job.scala)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

Below is my scala program:

/*** ld_Cassandra_Table.scala ***/
import
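One detail worth noting in the thread above: spark-submit treats everything after the application jar as program arguments to main(), so flags placed there (as `--driver-class-path` is in the command quoted earlier) may be silently ignored. A minimal sketch of a safer invocation, with all flags before the application jar (jar paths are copied from the thread; `--jars`, which also ships the jar to the executors, is a standard spark-submit option the posters did not themselves mention):

```shell
# Sketch only: paths below are the ones quoted in the thread.
CONNECTOR_JAR=/home/analytics/Installers/spark-cassandra-connector-1.1.1/spark-cassandra-connector/target/scala-2.10/spark-cassandra-connector-assembly-1.1.1.jar
APP_JAR=./target/scala-2.10/merlin-spark-cassandra-poc_2.10-0.0.1.jar

# All spark-submit options come BEFORE the application jar; anything after
# the app jar is handed to main() as a program argument.
CMD="spark-submit --class ldCassandraTable --driver-class-path $CONNECTOR_JAR --jars $CONNECTOR_JAR $APP_JAR /home/analytics/Documents/test_wfctotal.dat test_wfctotal"
echo "$CMD"
```

`--driver-class-path` puts the connector on the driver's classpath, while `--jars` distributes it to the executors, which matters once tasks actually run.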
Re: Cassandra - Storm
I'd recommend using Storm's State abstraction. Check out:
https://github.com/hmsonline/storm-cassandra-cql

-brian

---
Brian O'Neill
Chief Technology Officer
Health Market Science, a LexisNexis Company
215.588.6024 Mobile
@boneill42 http://www.twitter.com/boneill42

From: Vanessa Gligor vanessagli...@gmail.com
Reply-To: user@cassandra.apache.org
Date: Friday, April 3, 2015 at 1:13 AM
To: user@cassandra.apache.org
Subject: Cassandra - Storm

Hi all,

Has anybody used Cassandra for tuple storage in Storm? My scenario: I have a spout (getting messages from RabbitMQ), and I want to save all these messages to Cassandra using a bolt. What is the best choice for connecting to the DB? I have read about the Hector API and used it, but so far I haven't been able to add a new row to a column family. Any help would be appreciated.

Regards,
Vanessa.
Huge number of sstables after adding server to existing cluster
Hi,

We are running a test Cassandra cluster of 8 nodes in a single DC, using SimpleSnitch and DateTieredCompactionStrategy. After adding a new (9th) node to the cluster, we see that the number of sstables on the newly joined server is roughly equal to the sum of all sstables on all servers in the cluster, and that number is huge: tens of thousands of sstables on the newly added server.

Q1: Is that what we should expect to happen?

Furthermore, the newly added server doesn't seem to be overloaded; there are basically no pending/scheduled compactions, but the number of sstables isn't decreasing.

Q2: What could be the reason the number of sstables is not decreasing?

Q3: What do we need to do to reduce the number of sstables per server?

Thanks for your help
Astyanax Thrift Frame Size Hardcoded - Breaks Ring Describe
I know this list isn't the right place to discuss driver issues in general, but I thought I'd offer a word of warning to anyone still using Astyanax, related to an issue we ran into over the weekend.

Astyanax has a hard-coded maximum Thrift frame size. There is a pull request to expose it to configuration, but it has not been accepted (https://github.com/Netflix/astyanax/pull/547).

The reason this matters, beyond the case where you're doing reads which are far too large, is that Thrift is used by Astyanax for ring discovery (RING_DESCRIBE), which it may be using under the hood even if you don't think you are (several connection pool types imply RING_DESCRIBE and may override your own alternate configuration). When you exceed around 64,000 vnodes (250 nodes at 256 vnodes), ring discovery no longer fits in a single Thrift frame. Even if you increase the maximum frame size in Cassandra, Astyanax will not take advantage of it. The consequence is that Astyanax will talk *only* to hosts in the seeds list.

I've opened issue https://github.com/Netflix/astyanax/issues/577. Astyanax is no longer maintained, so I don't really expect that to go anywhere, which is why I thought it might be a good idea to issue a general warning. This should hopefully be a helpful nudge for anyone still using Astyanax: it's time to find a new driver.

We bumped into this over the weekend during routine cluster expansion. Very, very fortunately for us, we were just days away from retiring that project as our very last Astyanax use case. We were able to limp along by providing a significant subset of our hosts in the seeds list, but that would have been a difficult way to operate long term.
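The ~64,000 figure above is simply node count multiplied by tokens per node; a quick sanity check (the 250 and 256 values come from the post, the variable names are mine):

```shell
NODES=250              # cluster size at which the post reports breakage
TOKENS_PER_NODE=256    # the common num_tokens setting mentioned above
TOTAL_VNODES=$((NODES * TOKENS_PER_NODE))
echo "$TOTAL_VNODES"   # 64000
```

So any cluster approaching 250 nodes at 256 vnodes each is in the danger zone, and fewer vnodes per node buys proportionally more headroom.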
Re: Huge number of sstables after adding server to existing cluster
I remember that happening to me once. The number of SSTables was way beyond the limit (32 by default) but compactions were still not starting. All I did was:

nodetool enableautocompaction <keyspace> <table>

and compaction immediately started; the SSTable count came down to a normal level. It was a little surprising to me as well, because I had never disabled autocompaction in the first place.

-Pranay

On Fri, Apr 3, 2015 at 10:18 AM, Robert Coli rc...@eventbrite.com wrote:

On Fri, Apr 3, 2015 at 4:57 AM, Mantas Klasavičius mantas.klasavic...@gmail.com wrote:

Q1: is that what we should expect to happen?

A known problem with the current streaming paradigm, when combined with vnodes, is that newly bootstrapped nodes do a bunch of compaction.

Q2: what could be the reason of not reducing number of sstables?

nodetool setcompactionthroughput 0  # note: if you don't have spare i/o, this could negatively affect service time

Q3: what we need to do to reduce number of sstables per server?

Make sure you're compacting faster than you're writing, and wait.

=Rob
http://twitter.com/rcolidba
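Putting the advice from both replies together, a minimal diagnostic and remediation sequence might look like the following. These are standard nodetool subcommands for Cassandra 2.x run against a live node; the keyspace and table names are placeholders of mine, not from the thread:

```shell
# Placeholders: replace my_keyspace / my_table with your own names.

# 1. Check whether compactions are actually pending or running.
nodetool compactionstats

# 2. Check the current SSTable count for the affected table.
nodetool cfstats my_keyspace.my_table | grep "SSTable count"

# 3. Re-enable autocompaction in case it was left disabled after bootstrap
#    (as Pranay observed, this can kick compaction off immediately).
nodetool enableautocompaction my_keyspace my_table

# 4. Unthrottle compaction so it can catch up; 0 means unlimited.
#    Beware: without spare I/O this can hurt service latency.
nodetool setcompactionthroughput 0
```

Once the SSTable count is back to normal, restore the throughput cap (16 MB/s is the yaml default) rather than leaving compaction unthrottled.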
Re: Astyanax Thrift Frame Size Hardcoded - Breaks Ring Describe
On Fri, Apr 3, 2015 at 11:16 AM, Eric Stevens migh...@gmail.com wrote:

Astyanax is no longer maintained, so I don't really expect that to go anywhere, which is why I thought it might be a good idea to issue a general warning. This should hopefully be a helpful nudge for anyone still using Astyanax: it's time to find a new driver.

I'm not contesting, but do you have a citation for this? If so, providing it would strengthen your nudge. :D

=Rob
Re: Huge number of sstables after adding server to existing cluster
On Fri, Apr 3, 2015 at 4:57 AM, Mantas Klasavičius mantas.klasavic...@gmail.com wrote:

Q1: is that what we should expect to happen?

A known problem with the current streaming paradigm, when combined with vnodes, is that newly bootstrapped nodes do a bunch of compaction.

Q2: what could be the reason of not reducing number of sstables?

nodetool setcompactionthroughput 0  # note: if you don't have spare i/o, this could negatively affect service time

Q3: what we need to do to reduce number of sstables per server?

Make sure you're compacting faster than you're writing, and wait.

=Rob
http://twitter.com/rcolidba
Re: Huge number of sstables after adding server to existing cluster
I agree with Pranay. I have experienced exactly the same on C* 2.1.2.

/Thomas.

2015-04-03 19:33 GMT+02:00 Pranay Agarwal agarwalpran...@gmail.com:

I remember that happening to me once. The number of SSTables was way beyond the limit (32 by default) but compactions were still not starting. All I did was nodetool enableautocompaction keyspace table, and compaction immediately started; the SSTable count came down to a normal level. It was a little surprising to me as well, because I had never disabled autocompaction in the first place.

-Pranay
Re: Huge number of sstables after adding server to existing cluster
On Fri, Apr 3, 2015 at 1:04 PM, Thomas Borg Salling tbsall...@tbsalling.dk wrote:

I agree with Pranay. I have experienced exactly the same on C* 2.1.2.

2.1.2 had a serious bug which resulted in extra files, which is different from the overall issue I am referring to.

=Rob
Re: Astyanax Thrift Frame Size Hardcoded - Breaks Ring Describe
Sorry, I thought it was more or less formally deprecated, but a little quick searching doesn't support the idea. I could swear I recall reading an article about how Netflix had mostly transitioned to the DataStax Java Driver, but I can't find it now... so maybe it was a dream?

However, the activity graph on GitHub and the number of open and unanswered issues don't suggest a project under active development (I think there's been 1 commit this year?). Thrift is frozen, so even if Astyanax is still being tended, its days are probably numbered as long as it remains Thrift-centric.

That said, I think my declaration of the death of Astyanax was probably greatly exaggerated.

On Fri, Apr 3, 2015 at 7:45 PM, graham sanderson gra...@vast.com wrote:

It is very stable for us; we don't use it in many cases (generally older stuff where it was the best choice), but I think it is a little harsh to write it off.

On Apr 3, 2015, at 1:55 PM, Robert Coli rc...@eventbrite.com wrote:

On Fri, Apr 3, 2015 at 11:16 AM, Eric Stevens migh...@gmail.com wrote:

Astyanax is no longer maintained, so I don't really expect that to go anywhere, which is why I thought it might be a good idea to issue a general warning. This should hopefully be a helpful nudge for anyone still using Astyanax: it's time to find a new driver.

I'm not contesting, but do you have a citation for this? If so, providing it would strengthen your nudge. :D

=Rob
Re: Huge number of sstables after adding server to existing cluster
As does 2.1.3.

On Apr 3, 2015, at 5:36 PM, Robert Coli rc...@eventbrite.com wrote:

On Fri, Apr 3, 2015 at 1:04 PM, Thomas Borg Salling tbsall...@tbsalling.dk wrote:

I agree with Pranay. I have experienced exactly the same on C* 2.1.2.

2.1.2 had a serious bug which resulted in extra files, which is different from the overall issue I am referring to.

=Rob
Re: Astyanax Thrift Frame Size Hardcoded - Breaks Ring Describe
It is very stable for us; we don't use it in many cases (generally older stuff where it was the best choice), but I think it is a little harsh to write it off.

On Apr 3, 2015, at 1:55 PM, Robert Coli rc...@eventbrite.com wrote:

On Fri, Apr 3, 2015 at 11:16 AM, Eric Stevens migh...@gmail.com wrote:

Astyanax is no longer maintained, so I don't really expect that to go anywhere, which is why I thought it might be a good idea to issue a general warning. This should hopefully be a helpful nudge for anyone still using Astyanax: it's time to find a new driver.

I'm not contesting, but do you have a citation for this? If so, providing it would strengthen your nudge. :D

=Rob