Re: How can I add blank values instead of null values in cassandra ?

2019-09-10 Thread Swen Moczarski
When using prepared statements, you could use "unset": https://github.com/datastax/java-driver/blob/4.x/manual/core/statements/prepared/README.md#unset-values That should solve the tombstone problem but might need code changes. Regards, Swen On Tue., 10 Sept. 2019 at 04:50, Nitan

Re: How can I add blank values instead of null values in cassandra ?

2019-09-09 Thread Nitan Kainth
You can set default values in the driver, but that also needs a little code change. Regards, Nitan Cell: 510 449 9629 > On Sep 9, 2019, at 8:15 PM, buchi adddagada wrote: > > We are using DSE 5.1.0 & Spring Boot Java. > > While we are trying to insert data into Cassandra, Java by default inserts > null

How can I add blank values instead of null values in cassandra ?

2019-09-09 Thread buchi adddagada
We are using DSE 5.1.0 & Spring Boot Java. While we are trying to insert data into Cassandra, Java by default inserts null values into Cassandra tables, which is causing huge numbers of tombstones. Instead of changing the Java code to stop inserting null values, can we control this anywhere at the driver level? Thanks,
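For illustration (hypothetical schema): in Cassandra, binding a column to null writes a tombstone cell, while omitting the column from the INSERT writes nothing at all, which is why this shows up as a driver/code question rather than a server setting:

```cql
CREATE TABLE users (id int PRIMARY KEY, email text, phone text);

-- Explicit null: writes a tombstone cell for "email"
INSERT INTO users (id, email) VALUES (1, null);

-- Omitted column: no cell is written, so no tombstone
INSERT INTO users (id) VALUES (2);
```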

Re: How can I check cassandra cluster has a real working function of high availability?

2019-06-28 Thread Nimbus Lin
To Sir Oleksandr : Thank you! Sincerely Nimbuslin(Lin JiaXin) Mobile: 0086 180 5986 1565 Mail: jiaxin...@live.com From: Oleksandr Shulgin Sent: Monday, June 17, 2019 7:19 AM To: User Subject: Re: How can I check cassandra cluster has a real working

Re: How can I check cassandra cluster has a real working function of high availability?

2019-06-17 Thread Oleksandr Shulgin
hough cassandra's consistency is a per-operation > setting, isn't there a whole all operations' consistency setting method? > 3rd question: how can I see cassandra cluster's running variables as > mysql's show global variables? such as hidden variable of auto_bootstrap? > Hi, For t

How can I check cassandra cluster has a real working function of high availability?

2019-06-15 Thread Nimbus Lin
' consistency setting method? 3rd question: how can I see cassandra cluster's running variables as mysql's show global variables? such as the hidden variable auto_bootstrap? Thank you! Sincerely Nimbuslin(Lin JiaXin) Mobile: 0086 180 5986 1565 Mail: jiaxin...@live.com

Re: Can I cancel a decommissioning procedure??

2019-06-05 Thread Alain RODRIGUEZ
Sure, you're welcome, glad to hear it worked! =) Thanks for letting us know/reporting this back here, it might matter for other people as well. C*heers! Alain Le mer. 5 juin 2019 à 07:45, William R a écrit : > Eventually after the reboot the decommission was cancelled. Thanks a lot > for the

Re: Can I cancel a decommissioning procedure??

2019-06-05 Thread William R
Eventually after the reboot the decommission was cancelled. Thanks a lot for the info! Cheers ‐‐‐ Original Message ‐‐‐ On Tuesday, June 4, 2019 10:59 PM, Alain RODRIGUEZ wrote: >> the issue is that the rest nodes in the

Re: Can I cancel a decommissioning procedure??

2019-06-04 Thread William R
Hi Alain, Thank you for your comforting reply :) I did restart; it's still waiting to come up (normally takes ~30 minutes). The issue is that the rest of the nodes in the cluster marked it as DL (DOWN/LEAVING), that's why I am kinda stressed.. Let's see once it's up!

Re: Can I cancel a decommissioning procedure??

2019-06-04 Thread Alain RODRIGUEZ
Hello William, > At the moment we keep the node down before figure out a way to cancel that. Off the top of my head, a restart of the node is the way to go to cancel a decommission. I think you did the right thing and your safety measure is also the fix here :). Did you try to bring it up

Can I cancel a decommissioning procedure??

2019-06-04 Thread William R
Hi, There was an accidental decommissioning of a node and we really need to cancel it.. is there any way? At the moment we are keeping the node down until we figure out a way to cancel it. Thanks

can i delete a sstable with Estimated droppable tombstones > 1, manually?

2019-03-19 Thread onmstester onmstester
Running: sstablemetadata /THE_KEYSPACE_DIR/mc-1421-big-Data.db the result was: Estimated droppable tombstones: 1.2 Having STCS and data disk usage of 80% (we do not have enough free space for a normal compaction), is it OK to just: 1. stop Cassandra, 2. delete mc-1421*, and then 3. start Cassandra?

Re: can i...

2019-03-07 Thread Nick Hatfield
mailto:user@cassandra.apache.org>> Subject: Re: can i... Send the details On Thu, Mar 7, 2019 at 8:45 AM Nick Hatfield mailto:nick.hatfi...@metricly.com>> wrote: Use this email to get some insight on how to fix database issues in our cluster?

Re: can i...

2019-03-07 Thread Surbhi Gupta
Send the details On Thu, Mar 7, 2019 at 8:45 AM Nick Hatfield wrote: > Use this email to get some insight on how to fix database issues in our > cluster? >

can i...

2019-03-07 Thread Nick Hatfield
Use this email to get some insight on how to fix database issues in our cluster?

Re: How can I limit the non-heap memory for Cassandra

2019-01-18 Thread Alain RODRIGUEZ
Hello Chris, I must admit I am a bit confused about what you need exactly, I'll try to do my best :). > would like to place limits on it to avoid it becoming a “noisy neighbor” But we also don’t want it killed by the oom killer, so just placing limits > on the container won't help. This

How can I limit the non-heap memory for Cassandra

2019-01-02 Thread Chris Mildebrandt
Hi, Is there’s a way to limit Cassandra’s off-heap memory usage? I can’t find a way to limit the memory used for row caches, bloom filters, etc. We’re running Cassandra in a container and would like to place limits on it to avoid it becoming a “noisy neighbor”. But we also don’t want it killed by

Re: Can I sort it as a result of group by?

2018-04-10 Thread onmstester onmstester
I'm using apache spark on top of cassandra for such cases On Mon, 09 Apr 2018 18:00:33 +0430 DuyHai Doan doanduy...@gmail.com wrote No, sorting by column other than clustering column is not possible On Mon, Apr 9, 2018 at 11:42 AM, Eunsu Kim

Re: Can I sort it as a result of group by?

2018-04-09 Thread DuyHai Doan
No, sorting by column other than clustering column is not possible On Mon, Apr 9, 2018 at 11:42 AM, Eunsu Kim wrote: > Hello, everyone. > > I am using 3.11.0 and I have the following table. > > CREATE TABLE summary_5m ( > service_key text, > hash_key int, >

Can I sort it as a result of group by?

2018-04-09 Thread Eunsu Kim
Hello, everyone. I am using 3.11.0 and I have the following table. CREATE TABLE summary_5m ( service_key text, hash_key int, instance_hash int, collected_time timestamp, count int, PRIMARY KEY ((service_key), hash_key, instance_hash, collected_time) ) And I can sum
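With the schema above (and assuming the GROUP BY support added in Cassandra 3.10), the aggregate query would look roughly like this sketch; the thread's point is that ORDER BY cannot reference the aggregate:

```cql
SELECT hash_key, sum(count) AS total
FROM summary_5m
WHERE service_key = 'svc'
GROUP BY service_key, hash_key;

-- "ORDER BY total" is not valid CQL: ORDER BY may only use clustering
-- columns in their defined order, so sorting by the aggregate has to
-- happen client-side (or in Spark, as suggested elsewhere in the thread).
```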

where can i buy cassandra spring applications

2017-10-17 Thread Lutaya Shafiq Holmes
Where can I buy Cassandra Spring applications? I need to purchase a fully built Cassandra Web Application in Eclipse. Where can I get one? For example, on Envato Market and ThemeForest I can get WordPress, Drupal, PHP and other systems. Where can I get Spring Cassandra applications

Re: How can I install a Java Spring Application running Cassandra on to AWS

2017-10-17 Thread Lutaya Shafiq Holmes
Thank YOU On 10/17/17, Who Dadddy <qwerty15...@gmail.com> wrote: > http://lmgtfy.com/?q=install+java+app+on+AWS > <http://lmgtfy.com/?q=install+java+app+on+AWS> > >> On 17 Oct 2017, at 15:32, Lutaya Shafiq Holmes <lutayasha...@gmail.com> >> wrote: >> &

Re: How can I install a Java Spring Application running Cassandra on to AWS

2017-10-17 Thread Who Dadddy
http://lmgtfy.com/?q=install+java+app+on+AWS <http://lmgtfy.com/?q=install+java+app+on+AWS> > On 17 Oct 2017, at 15:32, Lutaya Shafiq Holmes <lutayasha...@gmail.com> wrote: > > How can I install a Java Spring Application running Cassandra on to AWS > -- > Lutaaya S

How can I install a Java Spring Application running Cassandra on to AWS

2017-10-17 Thread Lutaya Shafiq Holmes
How can I install a Java Spring Application running Cassandra on to AWS -- Lutaaya Shafiq Web: www.ronzag.com | i...@ronzag.com Mobile: +256702772721 | +256783564130 Twitter: @lutayashafiq Skype: lutaya5 Blog: lutayashafiq.com http://www.fourcornersalliancegroup.com/?a=shafiqholmes "The

How Can I get started with Using Cassandra and Netbeans- Please help

2017-09-29 Thread Lutaya Shafiq Holmes
How Can I get started with Using Cassandra and Netbeans- Please help -- Lutaaya Shafiq Web: www.ronzag.com | i...@ronzag.com Mobile: +256702772721 | +256783564130 Twitter: @lutayashafiq Skype: lutaya5 Blog: lutayashafiq.com http://www.fourcornersalliancegroup.com/?a=shafiqholmes "The

RE: Can I have multiple datacenter with different versions of Cassandra

2017-09-12 Thread Durity, Sean R
No – the general answer is that you cannot stream between major versions of Cassandra. I would upgrade the existing ring, then add the new DC. Sean Durity From: Chuck Reynolds [mailto:creyno...@ancestry.com] Sent: Thursday, May 18, 2017 11:20 AM To: user@cassandra.apache.org Subject: Can I

Re: Can I have multiple datacenter with different versions of Cassandra

2017-05-18 Thread Jonathan Haddad
> skype daemeon.c.m.reiydelle > USA 415.501.0198 > > On May 18, 2017 8:20 AM, "Chuck Reynolds" <creyno...@ancestry.com> wrote: > >> I have a need to create another datacenter and upgrade my existing >> Cassandra from 2.1.13 to Cassandra 3.0.9. >> >

Re: Can I have multiple datacenter with different versions of Cassandra

2017-05-18 Thread daemeon reiydelle
Cassandra from 2.1.13 to Cassandra 3.0.9. > > > > Can I do this as one step? Create a new Cassandra ring that is version > 3.0.9 and replicate the data from an existing ring that is Cassandra 2.1.13? > > > > After replicating to the new ring if possible them I would upgrade the old > ring to Cassandra 3.0.9 >

Can I have multiple datacenter with different versions of Cassandra

2017-05-18 Thread Chuck Reynolds
I have a need to create another datacenter and upgrade my existing Cassandra from 2.1.13 to Cassandra 3.0.9. Can I do this as one step? Create a new Cassandra ring that is version 3.0.9 and replicate the data from an existing ring that is Cassandra 2.1.13? After replicating to the new ring

Re: How can I efficiently export the content of my table to KAFKA

2017-04-28 Thread Tobias Eriksson
Stromberger <chris.stromber...@gmail.com> Date: Thursday, 27 April 2017 at 15:50 To: "user@cassandra.apache.org" <user@cassandra.apache.org> Subject: Re: How can I efficiently export the content of my table to KAFKA Maybe https://www.confluent.io/blog/kafka-connect-cassandra-

Re: How can I efficiently export the content of my table to KAFKA

2017-04-27 Thread Chris Stromberger
Maybe https://www.confluent.io/blog/kafka-connect-cassandra-sink-the-perfect-match/ On Wed, Apr 26, 2017 at 2:49 PM, Tobias Eriksson < tobias.eriks...@qvantel.com> wrote: > Hi > > I would like to make a dump of the database, in JSON format, to KAFKA > > The database contains lots of data,

Re: How can I efficiently export the content of my table to KAFKA

2017-04-26 Thread Justin Cameron
;user@cassandra.apache.org" <user@cassandra.apache.org> > *Date: *Thursday, 27 April 2017 at 01:36 > *To: *"user@cassandra.apache.org" <user@cassandra.apache.org> > *Subject: *Re: How can I efficiently export the content of my table to > KAFKA > > > > You

Re: How can I efficiently export the content of my table to KAFKA

2017-04-26 Thread Tobias Eriksson
27 April 2017 at 01:36 To: "user@cassandra.apache.org" <user@cassandra.apache.org> Subject: Re: How can I efficiently export the content of my table to KAFKA You could probably save yourself a lot of hassle by just writing a Spark job that scans through the entire table, converts

Re: How can I efficiently export the content of my table to KAFKA

2017-04-26 Thread Justin Cameron
You could probably save yourself a lot of hassle by just writing a Spark job that scans through the entire table, converts each row to JSON and dumps the output into a Kafka topic. It should be fairly straightforward to implement. Spark will manage the partitioning of "Producer" processes for you
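The per-row conversion step of such a Spark job can be sketched in isolation (pure Python; the table scan and the Kafka producer wiring are assumed and not shown):

```python
import json

def row_to_json(row):
    """Serialize one Cassandra row (represented here as a plain dict)
    to a JSON string ready to publish to a Kafka topic.

    default=str covers non-JSON-native values such as timestamps and UUIDs.
    """
    return json.dumps(row, default=str, sort_keys=True)

print(row_to_json({"id": 42, "name": "alice"}))
```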

How can I efficiently export the content of my table to KAFKA

2017-04-26 Thread Tobias Eriksson
Hi, I would like to make a dump of the database, in JSON format, to Kafka. The database contains lots of data, millions and in some cases billions of “rows”. I will provide the customer with an export of the data, where they can read it off of a Kafka topic. My thinking was to have it scalable such

Re: How can I scale my read rate?

2017-03-27 Thread Alexander Dejanovski
By default the TokenAwarePolicy does shuffle replicas, and it can be disabled if you want to only hit the primary replica for the token range you're querying : http://docs.datastax.com/en/drivers/java/3.0/com/datastax/driver/core/policies/TokenAwarePolicy.html On Mon, Mar 27, 2017 at 9:41 AM Avi

Re: How can I scale my read rate?

2017-03-27 Thread Avi Kivity
Is the driver doing the right thing by directing all reads for a given token to the same node? If that node fails, then all of those reads will be directed at other nodes, all of whom will be cache-cold for the failed node's primary token range. Seems like the driver should distribute

Re: How can I scale my read rate?

2017-03-26 Thread Anthony Grasso
Keep in mind there are side effects to increasing to RF = 4 - Storage requirements for each node will increase. Depending on the number of nodes in the cluster and the size of the data this could be significant. - Whilst the number of available coordinators increases, the number of

Re: How can I scale my read rate?

2017-03-26 Thread Eric Stevens
Yes, throughput for a given partition key cannot be improved with horizontal scaling. You can increase RF to theoretically improve throughput on that key, but actually in this case smart clients might hold you back, because they're probably token aware, and will try to serve that read off the

Re: Using datastax driver, how can I read a non-primitive column as a JSON string?

2017-03-24 Thread Vladimir Yudovin
Hi, why not use SELECT JSON * FROM as described here https://www.datastax.com/dev/blog/whats-new-in-cassandra-2-2-json-support ? Best regards, Vladimir Yudovin, Winguzone - Cloud Cassandra Hosting On Thu, 23 Mar 2017 13:08:30 -0400 S G sg.online.em...@gmail.com wrote
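The feature Vladimir points to has been in Cassandra since 2.2: SELECT JSON returns each row as one JSON-encoded text column, UDTs included (table and column names below are hypothetical):

```cql
SELECT JSON * FROM users WHERE id = 1;

-- returns a single [json] column per row, e.g.
-- {"id": 1, "address": {"street": "...", "city": "..."}}
```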

Re: How can I scale my read rate?

2017-03-23 Thread Alain Rastoul
On 24/03/2017 01:00, Eric Stevens wrote: Assuming an even distribution of data in your cluster, and an even distribution across those keys by your readers, you would not need to increase RF with cluster size to increase read performance. If you have 3 nodes with RF=3, and do 3 million reads,

Re: How can I scale my read rate?

2017-03-23 Thread Eric Stevens
Assuming an even distribution of data in your cluster, and an even distribution across those keys by your readers, you would not need to increase RF with cluster size to increase read performance. If you have 3 nodes with RF=3, and do 3 million reads, with good distribution, each node has served
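Eric's arithmetic can be sketched as follows (assuming perfectly even data and request distribution, and CL=ONE so each read touches exactly one replica):

```python
def reads_per_node(total_reads, num_nodes):
    # With even token ownership and CL=ONE, each read is served by exactly
    # one replica, so load spreads over all nodes regardless of RF.
    return total_reads / num_nodes

print(reads_per_node(3_000_000, 3))  # 3 nodes serve 1M reads each
print(reads_per_node(3_000_000, 6))  # doubling the cluster halves per-node load
```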

Using datastax driver, how can I read a non-primitive column as a JSON string?

2017-03-23 Thread S G
Hi, I have several non-primitive columns in my cassandra tables. Some of them are user-defined-types UDTs. While querying them through datastax driver, I want to convert such UDTs into JSON values. More specifically, I want to get JSON string for the value object below: Row row =

Re: How can I scale my read rate?

2017-03-20 Thread Alain Rastoul
On 20/03/2017 22:05, Michael Wojcikiewicz wrote: Not sure if someone has suggested this, but I believe it's not sufficient to simply add nodes to a cluster to increase read performance: you also need to alter the ReplicationFactor of the keyspace to a larger value as you increase your cluster

Re: How can I scale my read rate?

2017-03-20 Thread Alain Rastoul
On 20/03/2017 02:35, S G wrote: 2) https://docs.datastax.com/en/developer/java-driver/3.1/manual/statements/prepared/ tells me to avoid preparing select queries if I expect a change of columns in my table down the road. The problem is also related to select * which is considered bad practice

Re: How can I scale my read rate?

2017-03-19 Thread James Carman
eThreadExecutor(); > //executor = Executors.newFixedThreadPool(3); // Tried this too, no effect > //executor = Executors.newFixedThreadPool(10); // Tried this too, no effect > Futures.addCallback(future, callback, executor); > > Can I improve the above code in some way? > Are ther

Re: How can I scale my read rate?

2017-03-19 Thread Alain Rastoul
On 19/03/2017 02:54, S G wrote: Forgot to mention that this vmstat picture is for the client-cluster reading from Cassandra. Hi SG, Your numbers are low; 15k req/sec would be OK for a single node, but for a 12-node cluster something is wrong... how do you measure the throughput? As

Re: How can I scale my read rate?

2017-03-18 Thread S G
FutureCallback callback = new MyFutureCallback(); > executor = MoreExecutors.sameThreadExecutor(); > //executor = Executors.newFixedThreadPool(3); // Tried this too, no effect > //executor = Executors.newFixedThreadPool(10); // Tried this too, no > effect > Futures.addCallback(future, callback, executor); > &

Re: How can I scale my read rate?

2017-03-18 Thread S G
, callback, executor); Can I improve the above code in some way? Are there any JMX metrics that can tell me what's going on? From the vmstat command, I see that CPU idle time is about 70% even though I am running about 60 threads per VM. In total, 20 client-VMs with 8 cores each are querying a Cass

Re: How can I scale my read rate?

2017-03-18 Thread S G
dows 10 Phone > > > > *From: *Arvydas Jonusonis <arvydas.jonuso...@gmail.com> > *Sent: *Saturday, 18 March 2017 19:08 > *To: *user@cassandra.apache.org > *Subject: *Re: How can I scale my read rate? > > > > ..then you're not taking advantage of request p

AW: How can I scale my read rate?

2017-03-18 Thread j.kesten
+1 for executeAsync – had a long time to argue that it’s not bad as with good old rdbms. Sent from my Windows 10 Phone From: Arvydas Jonusonis Sent: Saturday, 18 March 2017 19:08 To: user@cassandra.apache.org Subject: Re: How can I scale my read rate? ..then you're not taking

Re: Can I do point in time recover using nodetool

2017-03-08 Thread Hannu Kröger
Yes, it's possible. I haven't seen good instructions online though, and the Cassandra docs are quite bad as well. I think I asked about it on this list, so I suggest you check the mailing list archive as Mr. Roth suggested. Hannu On Wed, 8 Mar 2017 at 10.50, benjamin roth

Re: Can I do point in time recover using nodetool

2017-03-08 Thread benjamin roth
I remember a very similar question on the list some months ago. The short answer is that there is no short answer. I'd recommend you search the mailing list archive for "backup" or "recover". 2017-03-08 10:17 GMT+01:00 Bhardwaj, Rahul : > Hi All, > > > > Is there any

Can I do point in time recover using nodetool

2017-03-08 Thread Bhardwaj, Rahul
Hi All, Is there any possibility of restoring cassandra snapshots to point in time without using opscenter ? Thanks and Regards Rahul Bhardwaj

Can I monitor Read Repair from the logs

2016-11-04 Thread James Rothering
What should I grep for in the logs to see if read repair is happening on a table?

Re: How can I make Cassandra stable in a 2GB RAM node environment ?

2016-03-19 Thread Alain RODRIGUEZ
Hi, I am not sure I understood your message correctly but I will try to answer it. but, I think, in Cassandra case, it seems a matter of how much data we use > with how much memory we have. If you are saying you can use poor commodity servers (vertically scale poorly) and just add nodes

Re: How can I make Cassandra stable in a 2GB RAM node environment ?

2016-03-11 Thread Hiroyuki Yamada
Thank you all for responding to and discussing my question. I basically agree with you all, but I think, in Cassandra's case, it is a matter of how much data we use versus how much memory we have. As per Jack's (and DataStax's) suggestion, I also used a 4GB RAM machine (t2.medium) with 1 billion records (about

Re: How can I make Cassandra stable in a 2GB RAM node environment ?

2016-03-10 Thread Robert Coli
On Thu, Mar 10, 2016 at 3:27 AM, Alain RODRIGUEZ wrote: > So, like Jack, I globally really not recommend it unless you know what you > are doing and don't care about facing those issues. > Certainly a spectrum of views here, but everyone (including OP) seems to agree with

Re: How can I make Cassandra stable in a 2GB RAM node environment ?

2016-03-10 Thread Alain RODRIGUEZ
+1 for Rob's comment. I would add that I have been learning a lot from running t1.micro (then small, medium, Large, ..., i2.2XL) on AWS machines (800 MB RAM). I had to tweak every single parameter in cassandra.yaml and cassandra-env.sh. So I learned a lot about internals, I had to! Even if I am glad

Re: How can I make Cassandra stable in a 2GB RAM node environment ?

2016-03-09 Thread Jack Krupansky
Thanks, Rob, but... I'll continue to do my best to strongly (vehemently, or is there an even stronger word for me to use?!) discourage use of Cassandra in under 4/8 GB of memory. Hey, I just want people to be happy, and trying to run Cassandra in under 8 GB (or 4 GB for dev) is just... asking for

Re: How can I make Cassandra stable in a 2GB RAM node environment ?

2016-03-07 Thread Ben Bromhead
+1 for http://opensourceconnections.com/blog/2013/08/31/building-the-perfect-cassandra-test-environment/ We also run Cassandra on t2.mediums for our Developer clusters. You can force Cassandra to

Re: How can I make Cassandra stable in a 2GB RAM node environment ?

2016-03-07 Thread Robert Coli
On Fri, Mar 4, 2016 at 8:27 PM, Jack Krupansky wrote: > Please review the minimum hardware requirements as clearly documented: > > http://docs.datastax.com/en/cassandra/3.x/cassandra/planning/planPlanningHardware.html > That is a document for Datastax Cassandra, not

Re: How can I make Cassandra stable in a 2GB RAM node environment ?

2016-03-04 Thread Jack Krupansky
ecommended to run C* in such low memory environment, > but I am wondering what can I do (what configurations to change) to make > it a little more stable in such environment. > (I understand the following configuration is very tight and not very > recommended but I just want to make

How can I make Cassandra stable in a 2GB RAM node environment ?

2016-03-04 Thread Hiroyuki Yamada
work so far (which means I still get OOM eventually). I know it is not really recommended to run C* in such a low-memory environment, but I am wondering what I can do (what configurations to change) to make it a little more stable in such an environment. (I understand the following configuration is very
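As a sketch of where such tuning usually starts (values below are illustrative assumptions for a ~2GB node, not a recommendation), the heap is capped in cassandra-env.sh:

```sh
# cassandra-env.sh -- illustrative settings for a ~2GB RAM node
MAX_HEAP_SIZE="1G"     # leave the rest for off-heap structures and the OS
HEAP_NEWSIZE="256M"    # young generation; commonly ~1/4 of the heap
```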

Re: How can I specify the file_data_directories for a keyspace

2015-08-25 Thread Jeff Jirsa
At this point, it is only/automatically managed by cassandra, but if you’re clever with mount points you can probably work around the limitation. From: Ahmed Eljami Reply-To: user@cassandra.apache.org Date: Tuesday, August 25, 2015 at 2:09 AM To: user@cassandra.apache.org Subject: How can

How can I specify the file_data_directories for a keyspace

2015-08-25 Thread Ahmed Eljami
When I define several file_data_directories in cassandra.yaml, would it be possible to specify the location of keyspaces and tables? Or is it *only* and *automatically* managed by Cassandra? Thx. -- Ahmed ELJAMI

Can I run upgrade sstables on many nodes on one time

2015-08-13 Thread Ola Nowak
Hi all, I'm trying to upgrade my 6-node cluster from 2.0.11 to 2.1.8. I'm following this upgrade procedure: http://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgradeCassandraDetails.html and point 8 says: If you are upgrading from a major version (for example, from Cassandra 1.2 to 2.0)

RE: Can I run upgrade sstables on many nodes on one time

2015-08-13 Thread SEAN_R_DURITY
. Sean Durity Lead Cassandra Admin, Big Data Team From: Ola Nowak [mailto:ola.nowa...@gmail.com] Sent: Thursday, August 13, 2015 5:30 AM To: user@cassandra.apache.org Subject: Can I run upgrade sstables on many nodes on one time Hi all, I'm trying to update my 6 node cluster from 2.0.11 to 2.1.8. I'm

Which JMX item can I use to see total cluster (or data center) Read and Write volumes?

2014-11-14 Thread Bob Nilsen
Hi all, Within DataStax OpsCenter I can see metrics that show total traffic volume for a cluster and each data center. How can I find these same numbers amongst all the JMX items? Thanks, -- Bob Nilsen rwnils...@gmail.com
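OpsCenter aggregates per-node counters across the cluster; over plain JMX you read each node and sum the values yourself. Assuming the standard Cassandra metrics mbeans (names taken from the metrics library, worth verifying against your version), the totals come from:

```text
org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency   -> Count = total reads
org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency  -> Count = total writes
```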

Re: Which JMX item can I use to see total cluster (or data center) Read and Write volumes?

2014-11-14 Thread Tyler Hobbs
DataStax OpsCenter I can see metrics that show total traffic volume for a cluster and each data center. How can I find these same numbers amongst all the JMX items? Thanks, -- Bob Nilsen rwnils...@gmail.com -- Tyler Hobbs DataStax http://datastax.com/

Re: Can I call getBytes on a text column to get the raw (already encoded UTF8)

2014-06-24 Thread Olivier Michallat
of content, and encoding/decoding performance has really bitten us in the future. So I try to avoid transparent encoding/decoding if I can avoid it. So right now, I have a huge blob of text that's a 'text' column. Logically it *should* be text, because that's what it is... Can I just keep

Re: Can I call getBytes on a text column to get the raw (already encoded UTF8)

2014-06-24 Thread Robert Stupp
encoding/decoding if I can avoid it. So right now, I have a huge blob of text that's a 'text' column. Logically it *should* be text, because that's what it is... Can I just keep it as text so our normal tools work on it, but get it as raw UTF8 if I call getBytes? This way I can call

Re: Can I call getBytes on a text column to get the raw (already encoded UTF8)

2014-06-24 Thread Kevin Burton
really bitten us in the future. So I try to avoid transparent encoding/decoding if I can avoid it. So right now, I have a huge blob of text that's a 'text' column. Logically it *should* be text, because that's what it is... Can I just keep it as text so our normal tools work on it, but get

Can I call getBytes on a text column to get the raw (already encoded UTF8)

2014-06-23 Thread Kevin Burton
blob of text that's a 'text' column. Logically it *should* be text, because that's what it is... Can I just keep it as text so our normal tools work on it, but get it as raw UTF8 if I call getBytes? This way I can call getBytes and then send it right over the wire as pre-encoded UTF8 data
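A small aside the thread relies on, sketched in Python for concreteness: UTF-8 text survives a decode/encode round trip byte-for-byte, so if the raw cell bytes are exposed they can go straight onto the wire; whether (and how) a given driver exposes a raw-bytes getter for a text column is exactly what the thread is asking:

```python
# A text cell arrives from Cassandra as UTF-8 bytes.
raw = "héllo wörld".encode("utf-8")

# Decoding to str and re-encoding reproduces the identical bytes, so the
# round trip only costs CPU when the consumer wants UTF-8 anyway.
assert raw.decode("utf-8").encode("utf-8") == raw
print(len(raw))  # 13 bytes for 11 characters (two are 2-byte sequences)
```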

Re: Can I call getBytes on a text column to get the raw (already encoded UTF8)

2014-06-23 Thread DuyHai Doan
to push LOTS of content, and encoding/decoding performance has really bitten us in the future. So I try to avoid transparent encoding/decoding if I can avoid it. So right now, I have a huge blob of text that's a 'text' column. Logically it *should* be text, because that's what it is... Can I just

Re: can I kill very old data files in my data folder (I know that sounds crazy but....)

2014-06-19 Thread Jens Rantil
...and temporarily adding more nodes and rebalancing is not an option? On Wed, Jun 18, 2014 at 9:39 PM, Brian Tarbox tar...@cabotresearch.com wrote: I don't think I have the space to run a major compaction right now (I'm above 50% disk space used already) and compaction can

can I kill very old data files in my data folder (I know that sounds crazy but....)

2014-06-18 Thread Brian Tarbox
I have a column family that only stores the last 5 days worth of some data...and yet I have files in the data directory for this CF that are 3 weeks old. They take the form: keyspace-CFName-ic--Filter.db keyspace-CFName-ic--Index.db keyspace-CFName-ic--Data.db

Re: can I kill very old data files in my data folder (I know that sounds crazy but....)

2014-06-18 Thread Robert Coli
On Wed, Jun 18, 2014 at 10:56 AM, Brian Tarbox tar...@cabotresearch.com wrote: I have a column family that only stores the last 5 days worth of some data...and yet I have files in the data directory for this CF that are 3 weeks old. Are you using TTL? If so :

Re: can I kill very old data files in my data folder (I know that sounds crazy but....)

2014-06-18 Thread Brian Tarbox
Rob, Thank you! We are not using TTL, we're manually deleting data more than 5 days old for this CF. We're running 1.2.13 and are using size-tiered compaction (this CF is append-only, i.e. zero updates). Sounds like we can get away with doing a (stop, delete old-data-file, restart) process on a

Re: can I kill very old data files in my data folder (I know that sounds crazy but....)

2014-06-18 Thread Robert Coli
On Wed, Jun 18, 2014 at 12:05 PM, Brian Tarbox tar...@cabotresearch.com wrote: Thank you! We are not using TTL, we're manually deleting data more than 5 days old for this CF. We're running 1.2.13 and are using size tiered compaction (this cf is append-only i.e.zero updates). Sounds like

Re: can I kill very old data files in my data folder (I know that sounds crazy but....)

2014-06-18 Thread Brian Tarbox
I don't think I have the space to run a major compaction right now (I'm above 50% disk space used already) and compaction can take extra space I think? On Wed, Jun 18, 2014 at 3:24 PM, Robert Coli rc...@eventbrite.com wrote: On Wed, Jun 18, 2014 at 12:05 PM, Brian Tarbox

Re: [OT]: Can I have a non-delivering subscription?

2014-02-24 Thread Edward Capriolo
You can set up the mail to deliver one per day as well. On Saturday, February 22, 2014, Robert Wille rwi...@fold3.com wrote: Yeah, it's called a rule. Set one up to delete everything from user@cassandra.apache.org. On 2/22/14, 10:32 AM, Paul LeoNerd Evans leon...@leonerd.org.uk wrote: A

[OT]: Can I have a non-delivering subscription?

2014-02-22 Thread Paul LeoNerd Evans
A question about the mailing list itself, rather than Cassandra. I've re-subscribed simply because I have to be subscribed in order to send to the list, as I sometimes try to when people Cc questions about my Net::Async::CassandraCQL perl module to me. However, if I want to read the list, I

Re: [OT]: Can I have a non-delivering subscription?

2014-02-22 Thread Robert Wille
Yeah, it's called a rule. Set one up to delete everything from user@cassandra.apache.org. On 2/22/14, 10:32 AM, Paul LeoNerd Evans leon...@leonerd.org.uk wrote: A question about the mailing list itself, rather than Cassandra. I've re-subscribed simply because I have to be subscribed in order to

Re: how can i get the column value? Need help!.. cassandra 1.28 and pig 0.11.1

2013-09-23 Thread Cyril Scetbon
I tried with 1.2.10 and don't hit the issue anymore. Regards -- Cyril SCETBON On Sep 19, 2013, at 10:28 PM, Cyril Scetbon cyril.scet...@free.fr wrote: Hi, Did you try to build 1.2.10 and to use it for your tests ? I've got the same issue and will give it a try as soon as it's released

Re: how can i get the column value? Need help!.. cassandra 1.28 and pig 0.11.1

2013-09-19 Thread Cyril Scetbon
Hi, Did you try to build 1.2.10 and to use it for your tests ? I've got the same issue and will give it a try as soon as it's released (expected at the end of the week). Regards -- Cyril SCETBON On Sep 2, 2013, at 3:09 PM, Miguel Angel Martin junquera mianmarjun.mailingl...@gmail.com wrote:

Re: How can I switch from multiple disks to a single disk?

2013-09-17 Thread Robert Coli
On Tue, Sep 17, 2013 at 4:01 PM, Juan Manuel Formoso jform...@gmail.com wrote: Anyone who knows for sure if this would work? Sankalp Kohli (whose last name is phonetically awesome!) has pointed you in the correct direction. To be a bit more explicit : 1) determine if sstable names are unique

Re: How can I switch from multiple disks to a single disk?

2013-09-17 Thread Juan Manuel Formoso
Thanks! But, shouldn't I be able to just stop Cassandra, copy the files, change the config and restart? Why should I drain? My RF+consistency level can handle one replica down (I forgot to mention that in my OP, apologies) Would it work in theory? On Tuesday, September 17, 2013, Robert Coli

Re: How can I switch from multiple disks to a single disk?

2013-09-17 Thread Robert Coli
On Tue, Sep 17, 2013 at 5:57 PM, Juan Manuel Formoso jform...@gmail.com wrote: Thanks! But shouldn't I be able to just stop Cassandra, copy the files, change the config and restart? Why should I drain? If you drain, you reduce to zero the chance of having some problem with the SSTables

Re: How can I switch from multiple disks to a single disk?

2013-09-16 Thread sankalp kohli
on my Cassandra nodes. When I finish compacting, cleaning up, and repairing, I'd like to remove them and return to one disk per node. What is the procedure to make the switch? Can I just kill cassandra, move the data from one disk to the other, remove the configuration for the second disk
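The procedure the thread converges on (drain, stop Cassandra, check that sstable filenames are unique across the disks, merge into one directory, drop the second `data_file_directories` entry, restart) can be sketched as follows. This is a hedged sketch, not a definitive procedure: the paths and sstable names are hypothetical stand-ins, and temp directories simulate the two disks so the collision check and merge can be demonstrated.

```shell
#!/bin/sh
# Sketch of merging two Cassandra data directories into one.
# Temp dirs simulate the two data_file_directories entries.
set -e

DISK1=$(mktemp -d)
DISK2=$(mktemp -d)

# Simulated sstable files, one per disk (names are made up).
touch "$DISK1/ks-cf-ic-1-Data.db"
touch "$DISK2/ks-cf-ic-2-Data.db"

# 1) In production: run `nodetool drain`, then stop Cassandra.

# 2) Verify no sstable filename exists on both disks before merging:
dupes=$( { ls "$DISK1"; ls "$DISK2"; } | sort | uniq -d )
if [ -n "$dupes" ]; then
    echo "filename collision, resolve before merging: $dupes" >&2
    exit 1
fi

# 3) Merge the second disk into the first:
mv "$DISK2"/* "$DISK1"/

# 4) In production: remove the second data_file_directories entry
#    from cassandra.yaml, then restart Cassandra.
ls "$DISK1"
```

With unique names the merge is a plain move; the uniqueness check is the step Robert Coli calls out, since a same-named sstable on both disks would be silently clobbered.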

Re: how can i get the column value? Need help!.. cassandra 1.28 and pig 0.11.1

2013-09-02 Thread Miguel Angel Martin junquera
hi: I tested this with the new cassandra 1.2.9 version and the issue still persists. :-( Miguel Angel Martín Junquera Analyst Engineer. miguelangel.mar...@brainsins.com 2013/8/30 Miguel Angel Martin junquera mianmarjun.mailingl...@gmail.com I try this: *rows = LOAD

Re: how can i get the column value? Need help!.. cassandra 1.28 and pig 0.11.1

2013-09-02 Thread Miguel Angel Martin junquera
hi all: More info: https://issues.apache.org/jira/browse/CASSANDRA-5941 I tried this (and built cassandra 1.2.9) but it does not work for me: git clone http://git-wip-us.apache.org/repos/asf/cassandra.git cd cassandra git checkout cassandra-1.2 patch -p1

Re: how can i get the column value? Need help!.. cassandra 1.28 and pig 0.11.1

2013-08-30 Thread Miguel Angel Martin junquera
I tried this: rows = LOAD 'cql://keyspace1/test?page_size=1&split_size=4&where_clause=age%3D30' USING CqlStorage(); dump rows; ILLUSTRATE rows; describe rows; values2 = FOREACH rows GENERATE TOTUPLE(id) as (mycolumn:tuple(name,value)); dump values2; describe values2;

Re: how can i get the column value? Need help!.. cassandra 1.28 and pig 0.11.1

2013-08-28 Thread Miguel Angel Martin junquera
hi all: Still I cannot resolve this issue. Does anybody have this issue, or has anyone tried this simple example? I am stumped; I cannot find a working solution. I appreciate any comment or help. 2013/8/22 Miguel Angel Martin junquera mianmarjun.mailingl...@gmail.com hi all:

Re: how can i get the column value? Need help!.. cassandra 1.28 and pig 0.11.1

2013-08-28 Thread Miguel Angel Martin junquera
hi: I cannot understand why the schema is defined like id:chararray,age:int,title:chararray and not as tuples or bags of tuples, if we have key-value pair columns. I tried again to change the schema but it does not work. Any ideas... perhaps the issue is

how can i get the column value? Need help!.. cassandra 1.28 and pig 0.11.1

2013-08-22 Thread Miguel Angel Martin junquera
hi all: I'm testing the new CqlStorage() with cassandra 1.28 and pig 0.11.1. I am using this sample test data: http://frommyworkshop.blogspot.com.es/2013/07/hadoop-map-reduce-with-cassandra.html And I load and dump data right with this script: rows = LOAD

Re: Can I create a counter column family with many rows in 1.1.10?

2013-03-06 Thread Alain RODRIGUEZ
, PRIMARY KEY (aid, key1, key2, key3) ) First, when I do so I have no error shown, but I *can't* see this CF appear in my OpsCenter. update composite_counter set value = value + 5 where aid = '1' and key1 = 'test1' and key2 = 'test2' and key3 = 'test3'; works as expected too. But how can I have multiple

RE: Can I create a counter column family with many rows in 1.1.10?

2013-03-06 Thread Mateus Ferreira e Freitas
in order to create it? From: aa...@thelastpickle.com Subject: Re: Can I create a counter column family with many rows in 1.1.10? Date: Tue, 5 Mar 2013 23:47:38 -0800 To: user@cassandra.apache.org Note that CQL 3 in 1.1 is compatible with CQL 3 in 1.2. Also you do not have to use CQL 3, you can

RE: Can I create a counter column family with many rows in 1.1.10?

2013-03-06 Thread Mateus Ferreira e Freitas
I got it now. From: mateus.ffrei...@hotmail.com To: user@cassandra.apache.org Subject: RE: Can I create a counter column family with many rows in 1.1.10? Date: Wed, 6 Mar 2013 08:42:37 -0300 Ah, it's with many columns, not rows. I use this in cql 2-3 create table cnt (key text PRIMARY KEY

Re: Can I create a counter column family with many rows in 1.1.10?

2013-03-06 Thread aaron morton
' and key2 = 'test2' and key3 = 'test3'; works as expected too. But how can I have multiple counter columns using the schemaless property of cassandra ? I mean before, when I created counter CF with cli, things like this used to work: update composite_counter set 'value2' = 'value2' + 5 where aid
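The question running through this thread, how to keep the old schemaless "many counter columns per row" behavior under CQL 3, can be sketched as below. This is an assumption-laden sketch, not the thread's confirmed answer: the table and column names (`composite_counter`, `aid`, `key1`..`key3`, `value`) follow the statements quoted above, while the extra clustering column `name` is hypothetical, standing in for the ad-hoc counter column names the cli allowed.

```sql
-- Sketch: a clustering column ("name") replaces dynamic counter columns.
-- In a CQL 3 counter table, every non-primary-key column must be a counter.
CREATE TABLE composite_counter (
    aid   text,
    key1  text,
    key2  text,
    key3  text,
    name  text,      -- one row per former counter-column name
    value counter,
    PRIMARY KEY (aid, key1, key2, key3, name)
);

-- The old "update ... set 'value2' = 'value2' + 5" becomes:
UPDATE composite_counter SET value = value + 5
WHERE aid = '1' AND key1 = 'test1' AND key2 = 'test2'
  AND key3 = 'test3' AND name = 'value2';
```

Each distinct `name` under the same partition key behaves like one of the old dynamic counter columns, so a row can carry as many counters as needed.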
