Set of SSTables have the same set of ancestors

2016-10-11 Thread Rajath Subramanyam
Hello Cassandra-users,

I logged into my test Cassandra cluster and saw this today. I find it very
unusual that a set of SSTables has the same set of ancestors:

$ sstablemetadata b800ks3-colla1-ka-*-Statistics.db | grep "Ancestors\|SSTable:"
SSTable: ./b800ks3-colla1-ka-1876
Ancestors: [1840, 1745, 1827, 1782, 1862, 1863, 1864, 1865, 1849, 1407]
SSTable: ./b800ks3-colla1-ka-2419
Ancestors: [2417, 2418, 2373, 2374, 2375, 2376, 2377, 2378]
SSTable: ./b800ks3-colla1-ka-2420
Ancestors: [2417, 2418, 2373, 2374, 2375, 2376, 2377, 2378]
SSTable: ./b800ks3-colla1-ka-2421
Ancestors: [2417, 2418, 2373, 2374, 2375, 2376, 2377, 2378]
SSTable: ./b800ks3-colla1-ka-2422
Ancestors: [2417, 2418, 2373, 2374, 2375, 2376, 2377, 2378]
SSTable: ./b800ks3-colla1-ka-2423
Ancestors: [2417, 2418, 2373, 2374, 2375, 2376, 2377, 2378]
SSTable: ./b800ks3-colla1-ka-2424
Ancestors: [2417, 2418, 2373, 2374, 2375, 2376, 2377, 2378]
SSTable: ./b800ks3-colla1-ka-2425
Ancestors: [2417, 2418, 2373, 2374, 2375, 2376, 2377, 2378]
SSTable: ./b800ks3-colla1-ka-2430
Ancestors: [2428, 2429]


What could potentially have caused this? Nobody has run sstablesplit on
any of these SSTables, and I do not see the word "anticompaction" anywhere
in the log file.

Thank you!

Regards,
Rajath



Re: Question on Read Repair

2016-10-11 Thread Jeff Jirsa
Yes:

https://github.com/apache/cassandra/blob/81f6c784ce967fadb6ed7f58de1328e713eaf53c/src/java/org/apache/cassandra/db/ConsistencyLevel.java#L286
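For readers following the link, the check it points to can be sketched in plain Java. This is a simplified illustration of the blockFor / isSufficientLiveNodes semantics for the levels discussed in this thread, not the actual Cassandra source:

```java
public class ConsistencyCheck {

    // Simplified model of a few consistency levels. The real enum in
    // ConsistencyLevel.java has more levels and datacenter-aware logic.
    enum CL { ONE, QUORUM, ALL }

    // How many replicas must respond for the level to be satisfied.
    static int blockFor(CL cl, int replicationFactor) {
        switch (cl) {
            case ONE:    return 1;
            case QUORUM: return replicationFactor / 2 + 1;
            case ALL:    return replicationFactor;
            default:     throw new IllegalArgumentException("unknown CL");
        }
    }

    // The coordinator only attempts the read when enough replicas are
    // believed alive; with CL=ALL and one replica down, this is false
    // and the read fails with UnavailableException instead of repairing.
    static boolean isSufficientLiveNodes(CL cl, int rf, int liveReplicas) {
        return liveReplicas >= blockFor(cl, rf);
    }

    public static void main(String[] args) {
        System.out.println(isSufficientLiveNodes(CL.ALL, 3, 2));    // false
        System.out.println(isSufficientLiveNodes(CL.QUORUM, 3, 2)); // true
    }
}
```

This is why the answer below ("yes") holds for every level: the threshold varies, but the same live-node check gates the read in each case.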

From: Anubhav Kale 
Reply-To: "user@cassandra.apache.org" 
Date: Tuesday, October 11, 2016 at 11:45 AM
To: "user@cassandra.apache.org" 
Subject: RE: Question on Read Repair

Thank you.

Interesting detail. Does it work the same way for other consistency levels as
well?

From: Jeff Jirsa [mailto:jeff.ji...@crowdstrike.com] 
Sent: Tuesday, October 11, 2016 10:29 AM
To: user@cassandra.apache.org
Subject: Re: Question on Read Repair

If the failure detector knows that the node is down, it won’t attempt a read,
because the consistency level can’t be satisfied – none of the other replicas
will be repaired.

From: Anubhav Kale 
Reply-To: "user@cassandra.apache.org" 
Date: Tuesday, October 11, 2016 at 10:24 AM
To: "user@cassandra.apache.org" 
Subject: Question on Read Repair

Hello,

This is more of a theory / concept question. I set CL=ALL and do a read. Say
one replica was down; will the rest of the replicas get repaired as part of
this? (I am hoping the answer is yes.)

Thanks!


CONFIDENTIALITY NOTE: This e-mail and any attachments are confidential and may 
be legally privileged. If you are not the intended recipient, do not disclose, 
copy, distribute, or use this email or any attachments. If you have received 
this in error please let the sender know and then delete the email and all 
attachments.






Re: Is there any way to throttle the memtable flushing throughput?

2016-10-11 Thread Ben Bromhead
A few thoughts on the larger problem at hand.

The AWS instance type you are using is not appropriate for a production
workload. The spiky write throughput during memtable flushes suggests that
your commitlog is on the same disk as your data directory; combined with the
use of non-SSD EBS, I'm not surprised this is happening. The small amount of
memory on the node could also mean your flush writers are getting backed up
(blocked), possibly causing JVM heap pressure and other fun things (you can
check this with nodetool tpstats).

Before you get into tuning memtable flushing I would do the following:

   - Reset your commitlog_sync settings back to default
   - Use an EC2 instance type that has at least 15 GB of memory and 4 cores,
   and is EBS optimized (dedicated EBS bandwidth)
   - Use gp2 or io1 EBS volumes
   - Put your commitlog on a separate EBS volume.
   - Make sure your memtable_flush_writers are not being blocked; if they
   are, increase the number of flush writers (no more than the # of cores)
   - Optimize your read_ahead_kb size and compression_chunk_length to keep
   those EBS reads as small as possible.
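As a rough illustration, the checklist above maps to cassandra.yaml settings along these lines. The paths and values here are example assumptions, not recommendations for any specific cluster:

```yaml
# Illustrative cassandra.yaml fragment for the checklist above.
# Paths and values are example assumptions - tune for your hardware.
commitlog_sync: periodic                 # back to the default sync mode
commitlog_sync_period_in_ms: 10000
commitlog_directory: /mnt/commitlog-ebs/commitlog   # separate EBS volume
data_file_directories:
    - /mnt/data-ebs/data
memtable_flush_writers: 2                # no more than the number of cores
```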

Once you have fixed the above, memtable flushing should not be an issue.
Even if you can't/don't want to upgrade the instance type, the other steps
will help things.

Ben

On Tue, 11 Oct 2016 at 10:23 Satoshi Hikida  wrote:

> Hi,
>
> I'm investigating the read/write performance of C* (ver. 2.2.8).
> However, I have an issue with memtable flushing, which causes spiky write
> throughput and in turn affects the latency of the client's requests.
>
> So I want to know the answers to the following questions.
>
> 1. Is there any way to throttle the write throughput of memtable
> flushing? If so, how can I do that?
> 2. Is there any way to reduce the spike in write bandwidth during
> memtable flushing?
>    (This is a problem because request latency increases when the write
> bandwidth spikes.)
>
> I'm using one C* node for this investigation. C* runs on an EC2
> instance (2 vCPUs, 4 GB memory). In addition, I attached two magnetic disks
> to the instance: one stores system data (the root file system, /), and the
> other stores C* data (data files and commit logs).
>
> I also changed a few configurations.
> - commitlog_sync: batch
> - commitlog_sync_batch_window_in_ms: 2
> (Using default value for the other configurations)
>
>
> Regards,
> Satoshi
>
> --
Ben Bromhead
CTO | Instaclustr 
+1 650 284 9692
Managed Cassandra / Spark on AWS, Azure and Softlayer


Re: [Marketing Mail] Re: sstableloader question

2016-10-11 Thread Rajath Subramanyam
How many sstables are you trying to load? Running sstableloaders in
parallel will help. Did you try setting the "-t" parameter to see if you
get the expected throughput?

- Rajath


Rajath Subramanyam


On Mon, Oct 10, 2016 at 2:02 PM, Osman YOZGATLIOGLU <
osman.yozgatlio...@krontech.com> wrote:

> Hello,
>
> Thank you Adam and Rajath.
>
> I'll split the input sstables and run parallel jobs for each.
> I tested this approach and ran 3 parallel sstableloader jobs without the -t
> parameter.
> I raised the stream_throughput_outbound_megabits_per_sec parameter from 200
> to 600 Mbit/sec on all of the target nodes.
> But each job runs at only about 10 MB/sec and generates about 100 Mbit/sec
> of network traffic.
> In total this should be much higher; the source and target servers have
> plenty of unused CPU, I/O, and network capacity.
> Do you have any idea how I can increase the speed of the sstableloader jobs?
>
> Regards,
> Osman
>
> On 10-10-2016 22:05, Rajath Subramanyam wrote:
> Hi Osman,
>
> You cannot restart streaming to just the failed nodes.
> You can restart the sstableloader job itself; compaction will eventually
> take care of the redundant rows.
>
> - Rajath
>
> 
> Rajath Subramanyam
>
>
> On Sun, Oct 9, 2016 at 7:38 PM, Adam Hutson  @datascale.io>> wrote:
> It'll start over from the beginning.
>
>
> On Sunday, October 9, 2016, Osman YOZGATLIOGLU <
> osman.yozgatlio...@krontech.com>
> wrote:
> Hello,
>
> I have a running sstableloader job.
> Unfortunately, some of the nodes restarted since streaming began,
> and I see that streaming stopped for those nodes.
> Can I restart that streaming somehow?
> Or if I restart the sstableloader job, will it start from the beginning?
>
> Regards,
> Osman
>
>
> This e-mail message, including any attachments, is for the sole use of the
> person to whom it has been sent, and may contain information that is
> confidential or legally protected. If you are not the intended recipient or
> have received this message in error, you are not authorized to copy,
> distribute, or otherwise use this message or its attachments. Please notify
> the sender immediately by return e-mail and permanently delete this message
> and any attachments. KRON makes no warranty that this e-mail is error or
> virus free.
>
>
> --
>
> Adam Hutson
> Data Architect | DataScale
> +1 (417) 224-5212
> a...@datascale.io
>
>
>
>
>


RE: Question on Read Repair

2016-10-11 Thread Anubhav Kale
Thank you.

Interesting detail. Does it work the same way for other consistency levels as
well?

From: Jeff Jirsa [mailto:jeff.ji...@crowdstrike.com]
Sent: Tuesday, October 11, 2016 10:29 AM
To: user@cassandra.apache.org
Subject: Re: Question on Read Repair

If the failure detector knows that the node is down, it won’t attempt a read,
because the consistency level can’t be satisfied – none of the other replicas
will be repaired.


From: Anubhav Kale 
Reply-To: "user@cassandra.apache.org" 
Date: Tuesday, October 11, 2016 at 10:24 AM
To: "user@cassandra.apache.org" 
Subject: Question on Read Repair

Hello,

This is more of a theory / concept question. I set CL=ALL and do a read. Say
one replica was down; will the rest of the replicas get repaired as part of
this? (I am hoping the answer is yes.)

Thanks !



Re: Question on Read Repair

2016-10-11 Thread Edward Capriolo
This is the theory, but not the whole practice. The failure detector
heartbeats are a process happening outside the read.

Take for example a cluster with replication factor 3.
At time('1) the failure detector might see all three nodes as UP.
A request issued "soon after '1", at time('2), might start a read.
One of the three nodes may not respond within the read timeout window; call
the end of the read timeout window time('3).
Note: anti-entropy read repair is set to only happen on a fraction of
requests.
Note: anti-entropy read repair is async and not guaranteed or retried (might
need a fact check, but fairly sure of this).
A read repair may be issued at time('4), moments after time('3).
Those read repairs could fail or pass as well.

The long and short of it is that the data may be repaired after a read at
ALL. There is no guarantee that it will be.



On Tue, Oct 11, 2016 at 1:29 PM, Jeff Jirsa 
wrote:

> If the failure detector knows that the node is down, it won’t attempt a
> read, because the consistency level can’t be satisfied – none of the other
> replicas will be repaired.
>
>
>
>
>
> *From: *Anubhav Kale 
> *Reply-To: *"user@cassandra.apache.org" 
> *Date: *Tuesday, October 11, 2016 at 10:24 AM
> *To: *"user@cassandra.apache.org" 
> *Subject: *Question on Read Repair
>
>
>
> Hello,
>
>
>
> This is more of a theory / concept question. I set CL=ALL and do a read.
> Say one replica was down, will the rest of the replicas get repaired as
> part of this ? (I am hoping the answer is yes).
>
>
>
> Thanks !
> 
>


Re: Question on Read Repair

2016-10-11 Thread Jeff Jirsa
If the failure detector knows that the node is down, it won’t attempt a read,
because the consistency level can’t be satisfied – none of the other replicas
will be repaired.

From: Anubhav Kale 
Reply-To: "user@cassandra.apache.org" 
Date: Tuesday, October 11, 2016 at 10:24 AM
To: "user@cassandra.apache.org" 
Subject: Question on Read Repair

Hello,

This is more of a theory / concept question. I set CL=ALL and do a read. Say
one replica was down; will the rest of the replicas get repaired as part of
this? (I am hoping the answer is yes.)

Thanks!






Question on Read Repair

2016-10-11 Thread Anubhav Kale
Hello,

This is more of a theory / concept question. I set CL=ALL and do a read. Say
one replica was down; will the rest of the replicas get repaired as part of
this? (I am hoping the answer is yes.)

Thanks!


Is there any way to throttle the memtable flushing throughput?

2016-10-11 Thread Satoshi Hikida
Hi,

I'm investigating the read/write performance of C* (ver. 2.2.8).
However, I have an issue with memtable flushing, which causes spiky write
throughput and in turn affects the latency of the client's requests.

So I want to know the answers to the following questions.

1. Is there any way to throttle the write throughput of memtable
flushing? If so, how can I do that?
2. Is there any way to reduce the spike in write bandwidth during
memtable flushing?
   (This is a problem because request latency increases when the write
bandwidth spikes.)

I'm using one C* node for this investigation. C* runs on an EC2
instance (2 vCPUs, 4 GB memory). In addition, I attached two magnetic disks to
the instance: one stores system data (the root file system, /), and the other
stores C* data (data files and commit logs).

I also changed a few configurations.
- commitlog_sync: batch
- commitlog_sync_batch_window_in_ms: 2
(Using default value for the other configurations)


Regards,
Satoshi


Does increment/decrement by 0 generate any commits ?

2016-10-11 Thread Dorian Hoxha
I just have a bunch of counters in one row, and I want to update them
selectively while keeping prepared queries. But I don't want to keep 30
prepared queries (one for each counter column); I want to keep only one. So in
most cases I will increment one column by a positive integer and the others
by 0.

Makes sense?


Re: mapper.save() throws a ThreadPool error (Java)

2016-10-11 Thread Ali Akhtar
Uh, yeah, I'm a moron. I was doing this inside a try-with-resources block,
and the class containing my session was auto-closing the session at the end
of the block (i.e. try (Environment env = new Environment())).

Nvm, I'm an idiot
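For anyone hitting the same "Terminated, pool size = 0" rejection, the failure mode can be reproduced without the DataStax driver at all. The sketch below uses a plain ExecutorService as a stand-in for the driver Session owned by an Environment class (the class name comes from the message above; everything else is just an illustration of the pattern): try-with-resources closes the resource before the later submit, so the submit is rejected exactly as in the stack trace in this thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class AutoCloseDemo {

    // Stand-in for the poster's Environment class: it owns a pool (like a
    // driver Session) and shuts it down when the try block exits.
    static class Environment implements AutoCloseable {
        final ExecutorService pool = Executors.newSingleThreadExecutor();

        @Override
        public void close() {
            pool.shutdownNow();
        }
    }

    // Returns "rejected" because the task is submitted after the pool was
    // closed by try-with-resources, mirroring RejectedExecutionException
    // from the stack trace above.
    static String attemptSubmitAfterClose() {
        Environment leaked;
        try (Environment env = new Environment()) {
            leaked = env; // the reference escapes the try block
        } // env.close() runs here; the pool is now Terminated
        try {
            leaked.pool.submit(() -> { });
            return "accepted";
        } catch (RejectedExecutionException ex) {
            return "rejected";
        }
    }

    public static void main(String[] args) {
        System.out.println(attemptSubmitAfterClose());
    }
}
```

Keeping the session (here, the pool) open for the lifetime of the async work, and closing it only after the futures complete, avoids the rejection.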

On Tue, Oct 11, 2016 at 8:29 PM, Ali Akhtar  wrote:

> This is a little urgent, so any help would be greatly appreciated.
>
> On Tue, Oct 11, 2016 at 8:22 PM, Ali Akhtar  wrote:
>
>> I'm creating a session, connecting to it, then creating a
>> mappingManager(), then obtaining a mapper for MyPojo.class
>>
>> If I then try to do mapper.save(myPojo), I get the following stacktrace:
>>
>> Oct 11, 2016 8:16:26 PM com.google.common.util.concurrent.ExecutionList
>> executeListener
>> SEVERE: RuntimeException while executing runnable
>> com.google.common.util.concurrent.Futures$ChainingListenable
>> Future@5164e29 with executor com.google.common.util.concurr
>> ent.MoreExecutors$ListeningDecorator@5f77d54d
>> java.util.concurrent.RejectedExecutionException: Task
>> com.google.common.util.concurrent.Futures$ChainingListenable
>> Future@5164e29 rejected from java.util.concurrent.ThreadPoo
>> lExecutor@53213dad[Terminated, pool size = 0, active threads = 0, queued
>> tasks = 0, completed tasks = 0]
>> at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.
>> rejectedExecution(ThreadPoolExecutor.java:2047)
>> at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExe
>> cutor.java:823)
>> at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolEx
>> ecutor.java:1369)
>> at com.google.common.util.concurrent.MoreExecutors$ListeningDec
>> orator.execute(MoreExecutors.java:484)
>> at com.google.common.util.concurrent.ExecutionList.executeListe
>> ner(ExecutionList.java:156)
>> at com.google.common.util.concurrent.ExecutionList.add(Executio
>> nList.java:101)
>> at com.google.common.util.concurrent.AbstractFuture.addListener
>> (AbstractFuture.java:170)
>> at com.google.common.util.concurrent.Futures.transform(Futures.java:608)
>> at com.datastax.driver.core.SessionManager.toPreparedStatement(
>> SessionManager.java:200)
>> at com.datastax.driver.core.SessionManager.prepareAsync(Session
>> Manager.java:161)
>> at com.datastax.driver.core.AbstractSession.prepareAsync(Abstra
>> ctSession.java:134)
>> at com.datastax.driver.mapping.Mapper.getPreparedQueryAsync(Map
>> per.java:121)
>> at com.datastax.driver.mapping.Mapper.saveQueryAsync(Mapper.java:224)
>> at com.datastax.driver.mapping.Mapper.saveAsync(Mapper.java:307)
>> at com.datastax.driver.mapping.Mapper.save(Mapper.java:270)
>>
>>
>>
>> Any ideas what's causing this? Afaik I'm doing all the steps asked for
>>
>>
>


Re: mapper.save() throws a ThreadPool error (Java)

2016-10-11 Thread Ali Akhtar
This is a little urgent, so any help would be greatly appreciated.

On Tue, Oct 11, 2016 at 8:22 PM, Ali Akhtar  wrote:

> I'm creating a session, connecting to it, then creating a
> mappingManager(), then obtaining a mapper for MyPojo.class
>
> If I then try to do mapper.save(myPojo), I get the following stacktrace:
>
> Oct 11, 2016 8:16:26 PM com.google.common.util.concurrent.ExecutionList
> executeListener
> SEVERE: RuntimeException while executing runnable com.google.common.util.
> concurrent.Futures$ChainingListenableFuture@5164e29 with executor
> com.google.common.util.concurrent.MoreExecutors$
> ListeningDecorator@5f77d54d
> java.util.concurrent.RejectedExecutionException: Task
> com.google.common.util.concurrent.Futures$ChainingListenableFuture@5164e29
> rejected from java.util.concurrent.ThreadPoolExecutor@53213dad[Terminated,
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
> at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(
> ThreadPoolExecutor.java:2047)
> at java.util.concurrent.ThreadPoolExecutor.reject(
> ThreadPoolExecutor.java:823)
> at java.util.concurrent.ThreadPoolExecutor.execute(
> ThreadPoolExecutor.java:1369)
> at com.google.common.util.concurrent.MoreExecutors$
> ListeningDecorator.execute(MoreExecutors.java:484)
> at com.google.common.util.concurrent.ExecutionList.
> executeListener(ExecutionList.java:156)
> at com.google.common.util.concurrent.ExecutionList.add(
> ExecutionList.java:101)
> at com.google.common.util.concurrent.AbstractFuture.
> addListener(AbstractFuture.java:170)
> at com.google.common.util.concurrent.Futures.transform(Futures.java:608)
> at com.datastax.driver.core.SessionManager.toPreparedStatement(
> SessionManager.java:200)
> at com.datastax.driver.core.SessionManager.prepareAsync(
> SessionManager.java:161)
> at com.datastax.driver.core.AbstractSession.prepareAsync(
> AbstractSession.java:134)
> at com.datastax.driver.mapping.Mapper.getPreparedQueryAsync(
> Mapper.java:121)
> at com.datastax.driver.mapping.Mapper.saveQueryAsync(Mapper.java:224)
> at com.datastax.driver.mapping.Mapper.saveAsync(Mapper.java:307)
> at com.datastax.driver.mapping.Mapper.save(Mapper.java:270)
>
>
>
> Any ideas what's causing this? Afaik I'm doing all the steps asked for
>
>


mapper.save() throws a ThreadPool error (Java)

2016-10-11 Thread Ali Akhtar
I'm creating a session, connecting to it, then creating a mappingManager(),
then obtaining a mapper for MyPojo.class

If I then try to do mapper.save(myPojo), I get the following stacktrace:

Oct 11, 2016 8:16:26 PM com.google.common.util.concurrent.ExecutionList
executeListener
SEVERE: RuntimeException while executing runnable
com.google.common.util.concurrent.Futures$ChainingListenableFuture@5164e29
with executor
com.google.common.util.concurrent.MoreExecutors$ListeningDecorator@5f77d54d
java.util.concurrent.RejectedExecutionException: Task
com.google.common.util.concurrent.Futures$ChainingListenableFuture@5164e29
rejected from java.util.concurrent.ThreadPoolExecutor@53213dad[Terminated,
pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
at
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
at
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at
com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:484)
at
com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
at
com.google.common.util.concurrent.ExecutionList.add(ExecutionList.java:101)
at
com.google.common.util.concurrent.AbstractFuture.addListener(AbstractFuture.java:170)
at com.google.common.util.concurrent.Futures.transform(Futures.java:608)
at
com.datastax.driver.core.SessionManager.toPreparedStatement(SessionManager.java:200)
at
com.datastax.driver.core.SessionManager.prepareAsync(SessionManager.java:161)
at
com.datastax.driver.core.AbstractSession.prepareAsync(AbstractSession.java:134)
at com.datastax.driver.mapping.Mapper.getPreparedQueryAsync(Mapper.java:121)
at com.datastax.driver.mapping.Mapper.saveQueryAsync(Mapper.java:224)
at com.datastax.driver.mapping.Mapper.saveAsync(Mapper.java:307)
at com.datastax.driver.mapping.Mapper.save(Mapper.java:270)



Any ideas what's causing this? Afaik I'm doing all the steps asked for


Re: Java Driver - Specifying parameters for an IN() query?

2016-10-11 Thread Justin Cameron
I'm not sure about using it in a SimpleStatement in the Java driver (you
might need to test this), but the QueryBuilder does have support for in(),
where you pass a list as the parameter:

see
http://docs.datastax.com/en/drivers/java/3.0/com/datastax/driver/core/querybuilder/QueryBuilder.html#in-java.lang.String-java.util.List-
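Putting the two approaches from this thread together, the binding could look roughly like this with the 3.x Java driver. This assumes an already-connected session; my_table, pk, and ck come from the earlier messages, and the exact API surface is version-dependent, so treat it as an illustrative sketch rather than authoritative code:

```java
import java.util.Arrays;
import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.querybuilder.QueryBuilder;

class InQuerySketch {

    static void runBothForms(Session session) {
        // QueryBuilder form: in() takes the whole list at build time.
        Statement built = QueryBuilder.select().all()
                .from("my_table")
                .where(QueryBuilder.eq("pk", "test"))
                .and(QueryBuilder.in("ck", Arrays.asList(1, 2)));
        session.execute(built);

        // Prepared form: "IN ?" (no parentheses, as suggested elsewhere in
        // this thread) lets the list itself be bound as one value.
        PreparedStatement ps = session.prepare(
                "SELECT * FROM my_table WHERE pk = ? AND ck IN ?");
        BoundStatement bs = ps.bind("test", Arrays.asList(1, 2));
        session.execute(bs);
    }
}
```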


On Tue, 11 Oct 2016 at 07:24 Ali Akhtar  wrote:

Justin,

I'm asking how to bind a parameter for IN queries thru the java driver.

On Tue, Oct 11, 2016 at 7:22 PM, Justin Cameron 
wrote:

You need to specify the values themselves.

CREATE TABLE user (
id int,
type text,
val1 int,
val2 text,
PRIMARY KEY ((id, type), val1, val2)
);

SELECT * FROM user WHERE id = 1 AND type IN ('user', 'admin') AND val1 = 3
AND val2 IN ('a', 'v', 'd');

On Tue, 11 Oct 2016 at 07:11 Ali Akhtar  wrote:

Do you send the values themselves, or send them as an array / collection?
Or will both work?

On Tue, Oct 11, 2016 at 7:10 PM, Justin Cameron 
wrote:

You can pass multiple values to the IN clause, however they can only be
used on the last column in the partition key and/or the last column in the
full primary key.

Example:

'Select * from my_table WHERE pk = 'test' And ck IN (1, 2)'


On Tue, 11 Oct 2016 at 06:15 Ali Akhtar  wrote:

If I wanted to create an accessor, and have a method which does a query
like this:

'Select * from my_table WHERE pk = ? And ck IN (?)'

And there were multiple options that could go inside the IN() query, how
can I specify that? Will it e.g, let me pass in an array as the 2nd
variable?

-- 

Justin Cameron

Senior Software Engineer | Instaclustr




This email has been sent on behalf of Instaclustr Pty Ltd (Australia) and
Instaclustr Inc (USA).

This email and any attachments may contain confidential and legally
privileged information.  If you are not the intended recipient, do not copy
or disclose its content, but please reply to this email immediately and
highlight the error to the sender and then immediately delete the message.





-- 

Justin Cameron

Senior Software Engineer | Instaclustr









-- 

Justin Cameron

Senior Software Engineer | Instaclustr






Re: Java Driver - Specifying parameters for an IN() query?

2016-10-11 Thread Justin Cameron
You need to specify the values themselves.

CREATE TABLE user (
id int,
type text,
val1 int,
val2 text,
PRIMARY KEY ((id, type), val1, val2)
);

SELECT * FROM user WHERE id = 1 AND type IN ('user', 'admin') AND val1 = 3
AND val2 IN ('a', 'v', 'd');

On Tue, 11 Oct 2016 at 07:11 Ali Akhtar  wrote:

Do you send the values themselves, or send them as an array / collection?
Or will both work?

On Tue, Oct 11, 2016 at 7:10 PM, Justin Cameron 
wrote:

You can pass multiple values to the IN clause, however they can only be
used on the last column in the partition key and/or the last column in the
full primary key.

Example:

'Select * from my_table WHERE pk = 'test' And ck IN (1, 2)'


On Tue, 11 Oct 2016 at 06:15 Ali Akhtar  wrote:

If I wanted to create an accessor, and have a method which does a query
like this:

'Select * from my_table WHERE pk = ? And ck IN (?)'

And there were multiple options that could go inside the IN() query, how
can I specify that? Will it e.g, let me pass in an array as the 2nd
variable?

-- 

Justin Cameron

Senior Software Engineer | Instaclustr









-- 

Justin Cameron

Senior Software Engineer | Instaclustr






Re: Java Driver - Specifying parameters for an IN() query?

2016-10-11 Thread Ali Akhtar
Justin,

I'm asking how to bind a parameter for IN queries thru the java driver.

On Tue, Oct 11, 2016 at 7:22 PM, Justin Cameron 
wrote:

> You need to specify the values themselves.
>
> CREATE TABLE user (
> id int,
> type text,
> val1 int,
> val2 text,
> PRIMARY KEY ((id, type), val1, val2)
> );
>
> SELECT * FROM user WHERE id = 1 AND type IN ('user', 'admin') AND val1 =
> 3 AND val2 IN ('a', 'v', 'd');
>
> On Tue, 11 Oct 2016 at 07:11 Ali Akhtar  wrote:
>
> Do you send the values themselves, or send them as an array / collection?
> Or will both work?
>
> On Tue, Oct 11, 2016 at 7:10 PM, Justin Cameron 
> wrote:
>
> You can pass multiple values to the IN clause, however they can only be
> used on the last column in the partition key and/or the last column in the
> full primary key.
>
> Example:
>
> 'Select * from my_table WHERE pk = 'test' And ck IN (1, 2)'
>
>
> On Tue, 11 Oct 2016 at 06:15 Ali Akhtar  wrote:
>
> If I wanted to create an accessor, and have a method which does a query
> like this:
>
> 'Select * from my_table WHERE pk = ? And ck IN (?)'
>
> And there were multiple options that could go inside the IN() query, how
> can I specify that? Will it e.g, let me pass in an array as the 2nd
> variable?
>
> --
>
> Justin Cameron
>
> Senior Software Engineer | Instaclustr
>
>
>
>
>
>
>
>
>
> --
>
> Justin Cameron
>
> Senior Software Engineer | Instaclustr
>
>
>
>
>
>


Re: Java Driver - Specifying parameters for an IN() query?

2016-10-11 Thread Ali Akhtar
Ah, thanks, good catch.

If I send a List / Array as value for the last param, will that get bound
as expected?

On Tue, Oct 11, 2016 at 7:16 PM, horschi  wrote:

> Hi Ali,
>
> do you perhaps want "'Select * from my_table WHERE pk = ? And ck IN ?'" ?
> (Without the brackets around the question mark)
>
> regards,
> Ch
>
> On Tue, Oct 11, 2016 at 3:14 PM, Ali Akhtar  wrote:
>
>> If I wanted to create an accessor, and have a method which does a query
>> like this:
>>
>> 'Select * from my_table WHERE pk = ? And ck IN (?)'
>>
>> And there were multiple options that could go inside the IN() query, how
>> can I specify that? Will it e.g, let me pass in an array as the 2nd
>> variable?
>>
>
>


Re: Java Driver - Specifying parameters for an IN() query?

2016-10-11 Thread Ali Akhtar
Do you send the values themselves, or send them as an array / collection?
Or will both work?

On Tue, Oct 11, 2016 at 7:10 PM, Justin Cameron 
wrote:

> You can pass multiple values to the IN clause, however they can only be
> used on the last column in the partition key and/or the last column in the
> full primary key.
>
> Example:
>
> 'Select * from my_table WHERE pk = 'test' And ck IN (1, 2)'
>
>
> On Tue, 11 Oct 2016 at 06:15 Ali Akhtar  wrote:
>
>> If I wanted to create an accessor, and have a method which does a query
>> like this:
>>
>> 'Select * from my_table WHERE pk = ? And ck IN (?)'
>>
>> And there were multiple options that could go inside the IN() query, how
>> can I specify that? Will it e.g, let me pass in an array as the 2nd
>> variable?
>>
> --
>
> Justin Cameron
>
> Senior Software Engineer | Instaclustr
>
>
>
>
>
>


Re: Java Driver - Specifying parameters for an IN() query?

2016-10-11 Thread horschi
Hi Ali,

do you perhaps want "'Select * from my_table WHERE pk = ? And ck IN ?'" ?
(Without the brackets around the question mark)

regards,
Ch

On Tue, Oct 11, 2016 at 3:14 PM, Ali Akhtar  wrote:

> If I wanted to create an accessor, and have a method which does a query
> like this:
>
> 'Select * from my_table WHERE pk = ? And ck IN (?)'
>
> And there were multiple options that could go inside the IN() query, how
> can I specify that? Will it e.g, let me pass in an array as the 2nd
> variable?
>


Re: Java Driver - Specifying parameters for an IN() query?

2016-10-11 Thread Justin Cameron
You can pass multiple values to the IN clause, however they can only be
used on the last column in the partition key and/or the last column in the
full primary key.

Example:

'Select * from my_table WHERE pk = 'test' And ck IN (1, 2)'


On Tue, 11 Oct 2016 at 06:15 Ali Akhtar  wrote:

> If I wanted to create an accessor, and have a method which does a query
> like this:
>
> 'Select * from my_table WHERE pk = ? And ck IN (?)'
>
> And there were multiple options that could go inside the IN() query, how
> can I specify that? Will it e.g, let me pass in an array as the 2nd
> variable?
>
-- 

Justin Cameron

Senior Software Engineer | Instaclustr






Java Driver - Specifying parameters for an IN() query?

2016-10-11 Thread Ali Akhtar
If I wanted to create an accessor, and have a method which does a query
like this:

'Select * from my_table WHERE pk = ? And ck IN (?)'

And there were multiple options that could go inside the IN() query, how
can I specify that? Will it e.g, let me pass in an array as the 2nd
variable?