Re: Suggestions for migrating data from cassandra

2018-05-15 Thread Michael Dykman
I don't know that there are any projects out there addressing this, but I
advise you to study LOAD DATA INFILE in the MySQL manual specific to your
target version. It basically describes a CSV format, where a given file
represents a subset of data for a specific table. It is far and away the
fastest method for loading huge amounts of data into MySQL
non-transactionally.

On the downside, you are likely going to have to author your own Cassandra
client tool to generate those files.
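On the MySQL side, the default LOAD DATA INFILE format is tab-separated fields, newline-terminated lines, and \N for NULL. A minimal sketch of the file-generation half (the rows themselves would come from your own Cassandra client code; function names and sample values here are illustrative):

```python
import os
import tempfile

def fmt(v):
    """Render one value the way MySQL's LOAD DATA INFILE expects by
    default: \\N for NULL; tab, newline, and backslash escaped."""
    if v is None:
        return r"\N"
    s = str(v)
    return (s.replace("\\", "\\\\")
             .replace("\t", "\\t")
             .replace("\n", "\\n"))

def write_load_file(rows, path):
    # Default LOAD DATA format: fields terminated by tab, lines by newline.
    with open(path, "w") as f:
        for row in rows:
            f.write("\t".join(fmt(v) for v in row) + "\n")

path = os.path.join(tempfile.gettempdir(), "users.tsv")
write_load_file([(1, "jane", None), (2, "rob", "sd")], path)
# Then, on the MySQL side:  LOAD DATA INFILE '...users.tsv' INTO TABLE users;
```

This keeps the escaping consistent with MySQL's defaults so no FIELDS/LINES clauses are needed in the LOAD DATA statement.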

On Tue, May 15, 2018, 6:59 AM Jing Meng,  wrote:

> Hi guys, for some historical reason, our cassandra cluster is currently
> overloaded and operating it has somehow become a nightmare. Anyway,
> (sadly) we're planning to migrate cassandra data back to mysql...
>
> So we're not quite clear how to migrate the historical data from
> cassandra.
>
> While as far as I know there is the COPY command, I wonder if it works in a
> production env where hundreds of gigabytes of data are present. And, if it
> does, would it impact server performance significantly?
>
> Apart from that, I know spark-connector can be used to scan data from a c*
> cluster, but I'm not that familiar with spark and still not sure whether
> writing data to a mysql database can be done naturally with spark-connector.
>
> Are there any suggestions/best-practices/reading-materials for doing this?
>
> Thanks!
>


Re: cqlsh commands for importing .CSV files into cassandra

2015-04-08 Thread Michael Dykman
http://docs.datastax.com/en/cql/3.0/cql/cql_reference/copy_r.html

This only works through cqlsh.

On Wed, Apr 8, 2015 at 1:48 PM, Divya Divs divya.divi2...@gmail.com wrote:

 hi
 Please tell me the cqlsh commands for importing .csv file datasets into
 cassandra. Please help me to start. I am using Windows.




-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: Store data with cassandra

2015-03-20 Thread Michael Dykman
You seem to be missing the point here.

Cassandra does not manage files, it manages data in a highly distributed
cluster.  If you are attempting to manage files, you are quite simply using
the wrong tool and Cassandra is not for you.

On Fri, Mar 20, 2015 at 9:10 AM, jean paul researche...@gmail.com wrote:

 I have used this tutoriel to create my data base
 http://planetcassandra.org/insert-select-records/

 /var/lib/cassandra/data# ls
 demo  system  system_traces
 :/var/lib/cassandra/data# cd demo/
 :/var/lib/cassandra/data/demo# ls
 users
 :/var/lib/cassandra/data/demo# cd users/
 :/var/lib/cassandra/data/demo/users# ls
 :/var/lib/cassandra/data/demo/users#

 i find nothing in /var/lib/cassandra/data/demo/users!


 2015-03-20 13:06 GMT+01:00 jean paul researche...@gmail.com:

 Hello All;
 Please,
 i have created this table.

  lastname | age | city          | email               | firstname
 ----------+-----+---------------+---------------------+-----------
       Doe |  36 | Beverly Hills | jane...@email.com   | Jane
     Byrne |  24 | San Diego     | robby...@email.com  | Rob
     Smith |  46 | Sacramento    | johnsm...@email.com | John
 So, my question, where this data is saved ? in ./var/lib/cassandra/data ?



 My end goal is to store a file with cassandra and to see on which node
 my file is stored ?

 thanks a lot for help
 Best Regards.





-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Documentation of batch statements

2015-03-03 Thread Michael Dykman
I have a minor complaint about the documentation.  On the page for Batch
Statements:

http://www.datastax.com/documentation/cql/3.0/cql/cql_reference/batch_r.html

It states: "In the context of a Cassandra batch operation, atomic means
that if any of the batch succeeds, all of it will."

While the above statement may be strictly true, it is misleading.  A more
accurate statement would be:

 "...if any of the batch FAILS, all of it will."

As originally written, a naive reader might assume that atomicity pivots on
success; the point of atomicity is reliable failure.

-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: Node joining take a long time

2015-02-20 Thread Michael Dykman
I believe the consensus is: upgrade to 2.1.3

On Fri, 20 Feb 2015 01:17 曹志富 cao.zh...@gmail.com wrote:

 So, what can I do? Wait for 2.1.4, or upgrade to 2.1.3?

 --
 曹志富
 手机:18611121927
 邮箱:caozf.zh...@gmail.com
 微博:http://weibo.com/boliza/

 2015-02-20 3:16 GMT+08:00 Robert Coli rc...@eventbrite.com:

 On Thu, Feb 19, 2015 at 7:34 AM, Mark Reddy mark.l.re...@gmail.com
 wrote:

 I'm sure Rob will be along shortly to say that 2.1.2 is, in his
 opinion, broken for production use...an opinion I'd agree with. So bear
 that in mind if you are running a production cluster.


 If you speak of the devil, he will appear.

 But yes, really, run 2.1.1 or 2.1.3, 2.1.2 is a bummer. Don't take the
 brown 2.1.2.

 This commentary is likely unrelated to the problem the OP is having,
 which I would need the information Mark asked for to comment on. :)

 =Rob





Re: Error while starting Cassandra for the first time

2015-02-04 Thread Michael Dykman
I would start looking in /home/csduser/cassandra/conf/cassandra.yaml.

Perhaps you could validate the YAML format of that file with an independent
tool such as http://yaml-online-parser.appspot.com/
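As a rough first pass before reaching for a full YAML parser, the specific failure above (a top-level mapping line missing its ':') can be spotted with a stdlib-only heuristic. This is an illustrative check, not a YAML validator:

```python
def check_yaml_colons(text):
    """Rough heuristic for cassandra.yaml-style files: every non-blank,
    non-comment, non-list, top-level line should contain a ':'.
    Returns (line_number, line) for the first suspect, else None."""
    for n, line in enumerate(text.splitlines(), 1):
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue            # blank line or comment
        if line[0] in " \t" or stripped.startswith("-"):
            continue            # indented continuation or list item
        if ":" not in stripped:
            return (n, stripped)
    return None

# A valid fragment passes; a line with the colon dropped is flagged.
print(check_yaml_colons("cluster_name: Test\nlisten_address localhost\n"))
```

A real parser (PyYAML's safe_load, or the online tool above) remains the authoritative check; this only catches the single most common hand-editing mistake.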

On Wed, Feb 4, 2015 at 5:23 PM, Krish Donald gotomyp...@gmail.com wrote:

 Hi,

 I am getting below error:
 Not able to understand why ??

 [csduser@master bin]$ ./cassandra -f
 CompilerOracle: inline
 org/apache/cassandra/db/AbstractNativeCell.compareTo
 (Lorg/apache/cassandra/db/composites/Composite;)I
 CompilerOracle: inline
 org/apache/cassandra/db/composites/AbstractSimpleCellNameType.compareUnsigned
 (Lorg/apache/cassandra/db/composites/Composite;Lorg/apache/cassandra/db/composites/Composite;)I
 CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare
 (Ljava/nio/ByteBuffer;[B)I
 CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare
 ([BLjava/nio/ByteBuffer;)I
 CompilerOracle: inline
 org/apache/cassandra/utils/ByteBufferUtil.compareUnsigned
 (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
 CompilerOracle: inline
 org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo
 (Ljava/lang/Object;JILjava/lang/Object;JI)I
 CompilerOracle: inline
 org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo
 (Ljava/lang/Object;JILjava/nio/ByteBuffer;)I
 CompilerOracle: inline
 org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo
 (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
 INFO  22:17:19 Hostname: master.my.com
 INFO  22:17:19 Loading settings from
 file:/home/csduser/cassandra/conf/cassandra.yaml
 ERROR 22:17:20 Fatal configuration error
 org.apache.cassandra.exceptions.ConfigurationException: Invalid yaml
 at
 org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:120)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at
 org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:84)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at
 org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:158)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at
 org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:133)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:96)
 [apache-cassandra-2.1.2.jar:2.1.2]
 at
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:448)
 [apache-cassandra-2.1.2.jar:2.1.2]
 at
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:537)
 [apache-cassandra-2.1.2.jar:2.1.2]
 Caused by: org.yaml.snakeyaml.scanner.ScannerException: while scanning a
 simple key; could not found expected ':';  in 'reader', line 33, column 1:
 # See http://wiki.apache.org/cas ...
 ^
 at
 org.yaml.snakeyaml.scanner.ScannerImpl.stalePossibleSimpleKeys(ScannerImpl.java:460)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.scanner.ScannerImpl.needMoreTokens(ScannerImpl.java:280)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:225)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:558)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.parser.ParserImpl.checkEvent(ParserImpl.java:143)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:230)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:159)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.composer.Composer.composeDocument(Composer.java:122)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:105)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:120)
 ~[snakeyaml-1.11.jar:na]
 at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:481)
 ~[snakeyaml-1.11.jar:na]
 at org.yaml.snakeyaml.Yaml.load(Yaml.java:412)
 ~[snakeyaml-1.11.jar:na]
 at
 org.apache.cassandra.config.YamlConfigurationLoader.logConfig(YamlConfigurationLoader.java:126)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at
 org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:104)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 ... 6 common frames omitted
 Invalid yaml
 Fatal configuration error; unable to start. See log for stacktrace.


 Thanks
 Krish




-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: Can not connect with cqlsh to something different than localhost

2014-12-08 Thread Michael Dykman
The difference is what interface your service is listening on. What is the
output of

$ netstat -ntl | grep 9042

On Mon, 8 Dec 2014 07:21 Richard Snowden richard.t.snow...@gmail.com
wrote:

 I left listen_address blank - still I can't connect (connection refused).

 cqlsh - OK
 cqlsh ubuntu - fail (ubuntu is my hostname)
 cqlsh 192.168.111.136 - fail

 telnet 192.168.111.136 9042 from outside the VM gives me a connection
 refused.

 I just started a Tomcat in my VM and did a telnet 192.168.111.136 8080
 from outside the VM - and got the expected result (Connected to
 192.168.111.136. Escape character is '^]'.)

 So what's so special in Cassandra?


 On Mon, Dec 8, 2014 at 12:18 PM, Jonathan Haddad j...@jonhaddad.com
 wrote:

 Listen address needs the actual address, not the interface.  This is best
 accomplished by setting up proper hostnames for each machine (through DNS
 or hosts file) and leaving listen_address blank, as it will pick the
 external ip.  Otherwise, you'll need to set the listen address to the IP of
 the machine you want on each machine.  I find the former to be less of a
 pain to manage.


 On Mon Dec 08 2014 at 2:49:55 AM Richard Snowden 
 richard.t.snow...@gmail.com wrote:

 This did not work either. I changed /etc/cassandra.yaml and restarted 
 Cassandra (I even restarted the machine to make 100% sure).

 What I tried:

 1) listen_address: localhost
- connection OK (but of course I can't connect from outside the VM to 
 localhost)

 2) Set listen_interface: eth0
- connection refused

 3) Set listen_address: 192.168.111.136
- connection refused


 What to do?


  Try:
  $ netstat -lnt
  and see which interface port 9042 is listening on. You will likely need to
  update cassandra.yaml to change the interface. By default, Cassandra is
  listening on localhost so your local cqlsh session works.

  On Sun, 7 Dec 2014 23:44 Richard Snowden richard.t.snow...@gmail.com
  wrote:

   I am running Cassandra 2.1.2 in an Ubuntu VM.
  
   cqlsh or cqlsh localhost works fine.
  
   But I can not connect from outside the VM (firewall, etc. disabled).
  
   Even when I do cqlsh 192.168.111.136 in my VM I get connection 
   refused.
   This is strange because when I check my network config I can see that
   192.168.111.136 is my IP:
  
   root@ubuntu:~# ifconfig
  
   eth0  Link encap:Ethernet  HWaddr 00:0c:29:02:e0:de
 inet addr:192.168.111.136  Bcast:192.168.111.255
   Mask:255.255.255.0
 inet6 addr: fe80::20c:29ff:fe02:e0de/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 RX packets:16042 errors:0 dropped:0 overruns:0 frame:0
 TX packets:8638 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:21307125 (21.3 MB)  TX bytes:709471 (709.4 KB)
  
   loLink encap:Local Loopback
 inet addr:127.0.0.1  Mask:255.0.0.0
 inet6 addr: ::1/128 Scope:Host
 UP LOOPBACK RUNNING  MTU:65536  Metric:1
 RX packets:550 errors:0 dropped:0 overruns:0 frame:0
 TX packets:550 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:148053 (148.0 KB)  TX bytes:148053 (148.0 KB)
  
  
   root@ubuntu:~# cqlsh 192.168.111.136 9042
   Connection error: ('Unable to connect to any servers', 
   {'192.168.111.136':
   error(111, Tried connecting to [('192.168.111.136', 9042)]. Last error:
   Connection refused)})
  
  
   What to do?
  





Re: Can not connect with cqlsh to something different than localhost

2014-12-07 Thread Michael Dykman
Try:
$ netstat -lnt
and see which interface port 9042 is listening on. You will likely need to
update cassandra.yaml to change the interface. By default, Cassandra is
listening on localhost so your local cqlsh session works.
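The same check can be scripted: probe whether anything is accepting TCP connections on the CQL port from a given address. A small sketch (host and port values are examples):

```python
import socket

def is_listening(host, port, timeout=1.0):
    """True if a TCP connect to host:port succeeds -- a quick stand-in
    for inspecting `netstat -lnt` output for the CQL port (9042)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A node bound to localhost only will answer here...
print(is_listening("127.0.0.1", 9042))
# ...but refuse connections on the external interface, e.g.:
# print(is_listening("192.168.111.136", 9042))
```

If the first probe succeeds and the second fails, the fix is in cassandra.yaml (listen/rpc addresses), not the network.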

On Sun, 7 Dec 2014 23:44 Richard Snowden richard.t.snow...@gmail.com
wrote:

 I am running Cassandra 2.1.2 in an Ubuntu VM.

 cqlsh or cqlsh localhost works fine.

 But I can not connect from outside the VM (firewall, etc. disabled).

 Even when I do cqlsh 192.168.111.136 in my VM I get connection refused.
 This is strange because when I check my network config I can see that
 192.168.111.136 is my IP:

 root@ubuntu:~# ifconfig

 eth0  Link encap:Ethernet  HWaddr 00:0c:29:02:e0:de
   inet addr:192.168.111.136  Bcast:192.168.111.255
 Mask:255.255.255.0
   inet6 addr: fe80::20c:29ff:fe02:e0de/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:16042 errors:0 dropped:0 overruns:0 frame:0
   TX packets:8638 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:1000
   RX bytes:21307125 (21.3 MB)  TX bytes:709471 (709.4 KB)

 loLink encap:Local Loopback
   inet addr:127.0.0.1  Mask:255.0.0.0
   inet6 addr: ::1/128 Scope:Host
   UP LOOPBACK RUNNING  MTU:65536  Metric:1
   RX packets:550 errors:0 dropped:0 overruns:0 frame:0
   TX packets:550 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:148053 (148.0 KB)  TX bytes:148053 (148.0 KB)


 root@ubuntu:~# cqlsh 192.168.111.136 9042
 Connection error: ('Unable to connect to any servers', {'192.168.111.136':
 error(111, Tried connecting to [('192.168.111.136', 9042)]. Last error:
 Connection refused)})


 What to do?



Re: PHP - Cassandra integration

2014-11-11 Thread Michael Dykman
I believe you are looking for this.

   https://github.com/rmcfrazier/phpbinarycql

On Tue, Nov 11, 2014 at 1:07 PM, Jonathan Haddad j...@jonhaddad.com wrote:

 In production?


 On Mon Nov 10 2014 at 6:06:41 AM Spencer Brown lilspe...@gmail.com
 wrote:

 I'm using /McFrazier/PhpBinaryCql/


 On Mon, Nov 10, 2014 at 1:48 AM, Akshay Ballarpure 
 akshay.ballarp...@tcs.com wrote:

 Hello,
 I am working on PHP cassandra integration, please let me know which
 library is good from scalability and performance perspective ?

 Best Regards
 Akshay Ballarpure
 Tata Consultancy Services
 Cell:- 9985084075
 Mailto: akshay.ballarp...@tcs.com
 Website: http://www.tcs.com
 
 Experience certainty.IT Services
Business Solutions
Consulting
 

 =-=-=
 Notice: The information contained in this e-mail
 message and/or attachments to it may contain
 confidential or privileged information. If you are
 not the intended recipient, any dissemination, use,
 review, distribution, printing or copying of the
 information contained in this e-mail message
 and/or attachments to it are strictly prohibited. If
 you have received this communication in error,
 please notify us by reply e-mail or telephone and
 immediately and permanently delete the message
 and any attachments. Thank you





-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: Cassandra trigger following the CQL for Cassandra 2.0 tutorial does not work

2014-07-28 Thread Michael Dykman
This is not really a Cassandra problem: it's a basic Java packaging problem.

First of all, the java source file is not what the runtime needs to
see in your jar; it needs the class file which was generated by
compiling it.  That compiled class needs to be in a folder
corresponding to its package, which I infer is
'org.apache.cassandra.triggers' from your earlier jar creation command
(obviously not the command you used to create the jar you just showed
us the dump of):
  jar cvf 
/etc/cassandra/triggers/org.apache.cassandra.triggers.InvertedIndex.jar
org.apache.cassandra.triggers.InvertedIndex.class

The META-INF looks fine, but that class should be listed as

org/apache/cassandra/triggers/InvertedIndex.class

When you compile the source, does the compiler NOT put the binary in
an appropriate folder?
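The expected jar layout can be checked programmatically. A sketch using Python's zipfile module (entry names taken from the thread; helper names are mine):

```python
import io
import zipfile

def has_class(jar_bytes, fqcn):
    """Check whether a jar contains the compiled class for a fully
    qualified class name, stored under its package directory path."""
    expected = fqcn.replace(".", "/") + ".class"
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        return expected in jar.namelist()

def make_jar(entries):
    # Build an in-memory zip with the given entry names (contents empty).
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as jar:
        for name in entries:
            jar.writestr(name, b"")
    return buf.getvalue()

fqcn = "org.apache.cassandra.triggers.InvertedIndex"
# The layout from the thread: a source file at the jar root -- won't load.
broken = make_jar(["META-INF/MANIFEST.MF", "InvertedIndex.java"])
# The layout the JVM needs: the .class file under its package path.
correct = make_jar(["META-INF/MANIFEST.MF",
                    "org/apache/cassandra/triggers/InvertedIndex.class"])
```

`unzip -l` on the real jar should show the same `org/apache/cassandra/triggers/...` entry that `has_class` looks for.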


On Mon, Jul 28, 2014 at 9:04 AM, Martin Marinov
martin.mari...@securax.org wrote:
 Correction.
 It's:

 unzip -l /etc/cassandra/triggers/InvertedIndex.jar
 Archive:  /etc/cassandra/triggers/InvertedIndex.jar
   Length  DateTimeName
 -  -- -   
 0  2014-07-25 10:07   META-INF/
68  2014-07-25 10:07   META-INF/MANIFEST.MF
  2761  2014-07-25 10:07   InvertedIndex.java
 - ---
  2829 3 files




 On 07/28/2014 04:02 PM, Martin Marinov wrote:

 The output is:

 unzip /etc/cassandra/triggers/InvertedIndex.jar
 Archive:  /etc/cassandra/triggers/InvertedIndex.jar
creating: META-INF/
   inflating: META-INF/MANIFEST.MF
   inflating: InvertedIndex.java


 On 07/28/2014 03:54 PM, Michael Dykman wrote:

 I wonder; your jar creation looks a little off and I suspect your problem
 might be the jar format which would lead to the failure to load.  Could you
 list the output of
 $ unzip -l my.jar?

 On Jul 28, 2014 3:49 AM, Martin Marinov martin.mari...@securax.org
 wrote:

 I did:
 ls /etc/cassandra/triggers/
 InvertedIndex.jar  README.txt


 But I'm not sure I'm creating the jar correctly.

 I'm running:
 jar cvf
 /etc/cassandra/triggers/org.apache.cassandra.triggers.InvertedIndex.jar
 org.apache.cassandra.triggers.InvertedIndex.class

 where org.apache.cassandra.triggers.InvertedIndex.class contains the java
 code on this url:

 https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=examples/triggers/src/org/apache/cassandra/triggers/InvertedIndex.java;hb=HEAD

 Also I tried with the name of the .class file and the .jar file as:
 InvertedIndex.jar, InvertedIndex.class
 and as
 org.apache.cassandra.triggers.InvertedIndex.jar,
 org.apache.cassandra.triggers.InvertedIndex.class


 On 07/28/2014 10:43 AM, DuyHai Doan wrote:

 Did you put the jar into the /lib folder on the server ? I know it's a
 basic question but it's the first idea to come in mind


 On Mon, Jul 28, 2014 at 9:36 AM, Martin Marinov
 martin.mari...@securax.org wrote:

 Anybody got an idea on this matter ?


 On 07/24/2014 06:34 PM, Martin Marinov wrote:

 Hi,

 I posted the question on stackoverflow:

 http://stackoverflow.com/questions/24937425/cassandra-trigger-following-the-cql-for-cassandra-2-0-tutorial-does-not-work

 On 07/24/2014 06:25 PM, Martin Marinov wrote:

 Hi,

 I'm following the tutorial at:
 http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/trigger_r.html

 I'm using Cassandra 2.0.9 with Oracle Java (java version 1.7.0_60).

 I have downloaded the example Java Class from
 https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=examples/triggers/src/org/apache/cassandra/triggers/InvertedIndex.java;hb=HEAD
 into a file named org.apache.cassandra.triggers.InvertedIndex.jar .

 I then created a jar using:

 jar cvf
 /etc/cassandra/triggers/org.apache.cassandra.triggers.InvertedIndex.jar
 org.apache.cassandra.triggers.InvertedIndex.class

 I restarted the cassandra service.

 The trigger does not exist:

 cqlsh:mykeyspace CREATE TRIGGER myTrigger ON myTable USING
 'org.apache.cassandra.triggers.InvertedIndex';
 Bad Request: Trigger class 'org.apache.cassandra.triggers.InvertedIndex'
 doesn't exist

 What am I doing wrong ?


 --
 Best Regards,

 Martin Marinov
 Securax LTD

 NOTICE:  This email and any file transmitted are confidential and/or
 legally privileged and intended only for the person(s) directly
 addressed.  If you are not the intended recipient, any use, copying,
 transmission, distribution, or other forms of dissemination is strictly
 prohibited.  If you have received this email in error, please notify the
 sender immediately and permanently delete the email and files, if any.



 --
 Best Regards,

 Martin Marinov
 Securax LTD

 NOTICE:  This email and any file transmitted are confidential and/or
 legally privileged and intended only for the person(s) directly
 addressed.  If you are not the intended recipient, any use, copying,
 transmission, distribution, or other forms of dissemination is strictly
 prohibited.  If you have received this email in error, please notify

Re: Cassandra trigger following the CQL for Cassandra 2.0 tutorial does not work

2014-07-28 Thread Michael Dykman
How to compile is a much bigger question, and probably way off topic
for this list.   I think you need a starter tutorial on building and
packaging java; there is more than one concept at work.

you will need a compiler of the appropriate version. That source has
external dependencies declared in the imports from these packages:
 org.apache.cassandra.io
 org.apache.cassandra.db
 org.slf4j

The rest of the imports are internal to Java's standard library.

I think you need to master the art of building and running a jar
before you can reasonably hope to be playing with cassandra triggers.


On Mon, Jul 28, 2014 at 10:15 AM, Martin Marinov
martin.mari...@securax.org wrote:
 I'm not compiling it.
 I just download it from:

 https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=examples/triggers/src/org/apache/cassandra/triggers/InvertedIndex.java;hb=HEAD

 Save the downloaded file into a .class file and jar it.

 How do I compile it ?


 On 07/28/2014 05:11 PM, Michael Dykman wrote:

 This is not really a Cassandra problem: it's a basic Java packaging
 problem.

 First of all, the java source file is not what the runtime needs to
 see in your jar; it needs the class file which was generated by
 compiling it.  That compiled class needs to be in a folder
 corresponding to its package, which I infer is
 'org.apache.cassandra.triggers' from your earlier jar creation command
 (obviously not the command you used to create the jar you just showed
 us the dump of):
jar cvf
 /etc/cassandra/triggers/org.apache.cassandra.triggers.InvertedIndex.jar
 org.apache.cassandra.triggers.InvertedIndex.class

 The META-INF looks fine, but that class should be listed as

 org/apache/cassandra/triggers/InvertedIndex.class

 When you compile the source, does the compiler NOT put the binary in
 an appropriate folder?


 On Mon, Jul 28, 2014 at 9:04 AM, Martin Marinov
 martin.mari...@securax.org wrote:

 Correction.
 It's:

 unzip -l /etc/cassandra/triggers/InvertedIndex.jar
 Archive:  /etc/cassandra/triggers/InvertedIndex.jar
Length  DateTimeName
 -  -- -   
  0  2014-07-25 10:07   META-INF/
 68  2014-07-25 10:07   META-INF/MANIFEST.MF
   2761  2014-07-25 10:07   InvertedIndex.java
 - ---
   2829 3 files




 On 07/28/2014 04:02 PM, Martin Marinov wrote:

 The output is:

 unzip /etc/cassandra/triggers/InvertedIndex.jar
 Archive:  /etc/cassandra/triggers/InvertedIndex.jar
 creating: META-INF/
inflating: META-INF/MANIFEST.MF
inflating: InvertedIndex.java


 On 07/28/2014 03:54 PM, Michael Dykman wrote:

 I wonder; your jar creation looks a little off and I suspect your problem
 might be the jar format which would lead to the failure to load.  Could
 you
 list the output of
 $ unzip -l my.jar?

 On Jul 28, 2014 3:49 AM, Martin Marinov martin.mari...@securax.org
 wrote:

 I did:
 ls /etc/cassandra/triggers/
 InvertedIndex.jar  README.txt


 But I'm not sure I'm creating the jar correctly.

 I'm running:
 jar cvf
 /etc/cassandra/triggers/org.apache.cassandra.triggers.InvertedIndex.jar
 org.apache.cassandra.triggers.InvertedIndex.class

 where org.apache.cassandra.triggers.InvertedIndex.class contains the
 java
 code on this url:


 https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=examples/triggers/src/org/apache/cassandra/triggers/InvertedIndex.java;hb=HEAD

 Also I tried with the name of the .class file and the .jar file as:
 InvertedIndex.jar, InvertedIndex.class
 and as
 org.apache.cassandra.triggers.InvertedIndex.jar,
 org.apache.cassandra.triggers.InvertedIndex.class


 On 07/28/2014 10:43 AM, DuyHai Doan wrote:

 Did you put the jar into the /lib folder on the server ? I know it's a
 basic question but it's the first idea to come in mind


 On Mon, Jul 28, 2014 at 9:36 AM, Martin Marinov
 martin.mari...@securax.org wrote:

 Anybody got an idea on this matter ?


 On 07/24/2014 06:34 PM, Martin Marinov wrote:

 Hi,

 I posted the question on stackoverflow:


 http://stackoverflow.com/questions/24937425/cassandra-trigger-following-the-cql-for-cassandra-2-0-tutorial-does-not-work

 On 07/24/2014 06:25 PM, Martin Marinov wrote:

 Hi,

 I'm following the tutorial at:

 http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/trigger_r.html

 I'm using Cassandra 2.0.9 with Oracle Java (java version 1.7.0_60).

 I have downloaded the example Java Class from

 https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=examples/triggers/src/org/apache/cassandra/triggers/InvertedIndex.java;hb=HEAD
 into a file named org.apache.cassandra.triggers.InvertedIndex.jar .

 I then created a jar using:

 jar cvf
 /etc/cassandra/triggers/org.apache.cassandra.triggers.InvertedIndex.jar
 org.apache.cassandra.triggers.InvertedIndex.class

 I restarted the cassandra service.

 The trigger does not exist:

 cqlsh:mykeyspace CREATE TRIGGER myTrigger ON myTable USING

Re: Which way to Cassandraville?

2014-07-22 Thread Michael Dykman
Removing *QL from application code is not really an indicator of the
maturity of a technology. ORMs and automatic type mapping in general
tend to be very easy things for a developer to work with, allowing for
rapid prototypes, but those applications are often ill-suited to being
deployed in high-volume environments.

I have used a wide variety of ORMs over the last 15 years, Hibernate
being a favourite in which I am held to have some expertise, but when
I am creating an app for the real world, in situations where I can
expect several million requests/day, I do not touch them.


On Tue, Jul 22, 2014 at 5:10 PM, Jake Luciani jak...@gmail.com wrote:
 Checkout datastax devcenter which is a GUI datamodelling tool for cql3

 http://www.datastax.com/what-we-offer/products-services/devcenter


 On Sun, Jul 20, 2014 at 7:17 PM, jcllings jclli...@gmail.com wrote:

 So I'm a Java application developer and I'm trying to find entry points
 for learning to work with Cassandra.
 I just finished reading Cassandra: The Definitive Guide which seems
 pretty out of date and while very informative as to the technology that
 Cassandra uses, was not very helpful from the perspective of an
 application developer.

 Having said that, what Java clients should I be looking at?  Are there
 any reasonably mature PoJo mapping techs for Cassandra analogous to
 Hibernate? I can't say that I'm looking forward to yet another *QL
 variant but I guess CQL is going to be a necessity.  What, if any, GUI
 tools are available for working with Cassandra, for data modelling?

 Jim C.




 --
 http://twitter.com/tjake



-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: How to column slice with CQL + 1.2

2014-07-17 Thread Michael Dykman
The last term in this query is redundant.  Any time column1 = 1, we
may reasonably expect that it is also <= 2, as that's where 1 is found.
If you remove the last term, you eliminate the error and lose none of the
selection logic.

SELECT * FROM CF WHERE key='X' AND column1=1 AND column2=3 AND
column3>4 AND column1<=2;
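Reading the stripped comparison operators as column3>4 and column1<=2 (angle brackets are commonly lost in list archives), the redundancy can be demonstrated with a plain filter over illustrative rows: whenever column1 = 1 holds, column1 <= 2 holds automatically, so dropping the last term selects exactly the same rows.

```python
# Hypothetical sample rows standing in for the CF's composite columns.
rows = [
    {"column1": 1, "column2": 3, "column3": 5},   # matches the slice
    {"column1": 1, "column2": 3, "column3": 2},   # fails column3 > 4
    {"column1": 2, "column2": 3, "column3": 9},   # fails column1 == 1
]

with_term = [r for r in rows
             if r["column1"] == 1 and r["column2"] == 3
             and r["column3"] > 4 and r["column1"] <= 2]
without_term = [r for r in rows
                if r["column1"] == 1 and r["column2"] == 3
                and r["column3"] > 4]

# The <= 2 term never changes the result set.
assert with_term == without_term
```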

On Thu, Jul 17, 2014 at 6:23 PM, Mike Heffner m...@librato.com wrote:
 What is the proper way to perform a column slice using CQL with 1.2?

 I have a CF with a primary key X and 3 composite columns (A, B, C). I'd like
 to find records at:

 key=X
  columns > (A=1, B=3, C=4) AND
     columns <= (A=2)

 The Query:

 SELECT * FROM CF WHERE key='X' AND column1=1 AND column2=3 AND column3>4 AND
 column1<=2;

 fails with:

 DoGetMeasures: column1 cannot be restricted by both an equal and an inequal
 relation

 This is against Cassandra 1.2.16.

 What is the proper way to perform this query?


 Cheers,

 Mike

 --

   Mike Heffner m...@librato.com
   Librato, Inc.




-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


RE: Why is yum pulling in open JDK ?

2014-07-07 Thread Michael Dykman
It comes down to licensing issues. Sun, and now Oracle, have always been very
particular about what they see as bundling.  While they have repos for
ubuntu, redhat, centos, suse, etc., they don't allow those repos to be
installed in standard distributions unless you are paying them a fee for
doing so. You, the system owner/admin, are free to install it on your own
systems, as long as you acquire it from them, not your OS provider.

I have been doing java on linux for a long time and it has ever been a
pain. I still find important java artifacts in some distros which want to
depend on gcj. For this reason, while I am glad to maintain java itself
through an Oracle-provided ppa, I manage systems built on java without the
use of apt/yum/etc. I gave up long ago on the idea that sane java
integration was something that open platforms can provide as long as Oracle
keeps that part closed.
On Jul 7, 2014 6:25 AM, Cox, Cory (Agoda) cory@agoda.com wrote:

  I have had the same issue. Not an expert on this… but I think it is more
 a consequence of the CentOS repo than cassandra rpm. The Oracle JVM
 packages are not available and it appears you need to download (after
 accepting license) the rpm and use the rpm command to install the package.
 Wget is also problematic as the url appears to be littered with other html
 in the response… I had to download and scp to the box and then install Java
 BEFORE installing Casandra to avoid the dependency triggering an auto
 install of the openjdk.



 Any repo experts please jump in…



 Thanks,

 Cory Cox

 Senior Database Administrator


 a Priceline ® company



 *From:* Wim Deblauwe [mailto:wim.debla...@gmail.com]
 *Sent:* Monday, July 07, 2014 13:50
 *To:* user@cassandra.apache.org
 *Subject:* Re: Why is yum pulling in open JDK ?



 Hi,



 I am very aware that Cassandra needs Java. I was just wondering why
 'openjdk' is the dependency while it is advised to use the Oracle Java.



 regards,



 Wim



 2014-07-06 21:54 GMT+02:00 Patricia Gorla patri...@thelastpickle.com:

 Wim,



  openjdk



 Java is a dependency of Cassandra, so if you do not have Java already
 installed on your computer, yum will automatically do so. The Oracle Java
 JVM must be installed separately.



  dsc20, cassandra20



 The first installation target is for Datastax Community version 2.0, while
 the latter installs Apache Cassandra 2.0



 Cheers,

 --

 Patricia Gorla

 @patriciagorla



 Consultant

 Apache Cassandra Consulting

 http://www.thelastpickle.com http://thelastpickle.com






Re: HA Proxy

2014-06-27 Thread Michael Dykman
No, really it can't. I know little of Hector, but when using the
DataStax driver, Cassandra provides a highly available set of
connections; a single Cassandra session holds explicit connections
to all of the nodes which make up a given cluster.

The Cassandra server itself relies on a lot of cross-node talk, which
means the nodes must be visible to each other. In this case, HAProxy
will just get in the way and make you less highly available.
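As a sketch of what that looks like in practice (DataStax Java driver 2.x-era API; the node addresses and keyspace name are made up), the client is handed a few node addresses directly rather than a proxy VIP:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class DirectConnect {
    public static void main(String[] args) {
        // Any couple of live nodes will do as contact points; the driver
        // then discovers the rest of the cluster and opens connections
        // to every node, handling failover itself.
        Cluster cluster = Cluster.builder()
                .addContactPoints("10.0.0.1", "10.0.0.2", "10.0.0.3")
                .build();
        Session session = cluster.connect("my_keyspace"); // hypothetical keyspace
        session.close();
        cluster.close();
    }
}
```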

On Fri, Jun 27, 2014 at 11:10 AM, Richard Jennings
richardjenni...@gmail.com wrote:
 Can a Cassandra client such as Hector operate successfully behind an HA
 proxy, where the cluster is represented by a single IP address?

 Regards



-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: Installing Datastax Cassandra 1.2.15 Using Yum (Java Issue)

2014-03-27 Thread Michael Dykman
Java on Linux has *always* been a hassle. Recently, installing ant via
apt-get on an up-to-date Ubuntu still wanted to yank in components of GCJ
(shudder). Back to the tarball.

On Thu, Mar 27, 2014 at 4:53 PM, Jon Forrest jon.forr...@xoom.com wrote:


 On 3/27/2014 1:41 PM, prem yadav wrote:

 I have noticed that too. But even though dse installs openjdk, it never
 gets used. So you should be ok.


 But with two version of Java installed you then have to
 make extra sure that Oracle Java is being used.

 It just seems like a good idea to follow the Datastax
 documentation, even if Datastax makes it difficult to
 do so.

 It would be great to know the origin of this issue.


 Jon Forrest




-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: Kernel keeps killing cassandra process - OOM

2014-03-22 Thread Michael Dykman
What JVM are you running on? What, if any, memory constraints are you
passing to the process?
 On Mar 22, 2014 10:48 AM, prem yadav ipremya...@gmail.com wrote:

 Hi,
 I have a 3-node Cassandra test cluster. The nodes have 4 GB total memory and
 2 cores. Cassandra runs with all default settings, but the Cassandra process
 keeps getting killed due to OOM. The Cassandra version in use is 1.1.9.
 Here are the settings in use:

 compaction_throughput_mb_per_sec: 16
 row_cache_save_period: 0
 encryption_options:
   keystore: conf/.keystore
   internode_encryption: none
   truststore: conf/.truststore
   algorithm: SunX509
   cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,
 TLS_RSA_WITH_AES_256_CBC_SHA]

   protocol: TLS
   store_type: JKS
 multithreaded_compaction: false
 #authority: org.apache.cassandra.auth.AllowAllAuthority
 populate_io_cache_on_flush: false
 storage_port: 7000
 key_cache_save_period: 14400
 hinted_handoff_throttle_delay_in_ms: 1
 trickle_fsync_interval_in_kb: 10240
 rpc_timeout_in_ms: 1
 dynamic_snitch_update_interval_in_ms: 100
 column_index_size_in_kb: 64
 thrift_framed_transport_size_in_mb: 15
 hinted_handoff_enabled: true
 dynamic_snitch_reset_interval_in_ms: 60
 reduce_cache_capacity_to: 0.6
 snapshot_before_compaction: false
 request_scheduler_options: {weights: null, default_weight: 5,
 throttle_limit: 80}
 request_scheduler: org.apache.cassandra.scheduler.RoundRobinScheduler
 incremental_backups: false
 commitlog_sync: periodic
 trickle_fsync: false
 rpc_keepalive: true
 max_hint_window_in_ms: 360
 commitlog_segment_size_in_mb: 32
 thrift_max_message_length_in_mb: 16
 request_scheduler_id: keyspace
 cluster_name: TVCASCLU002
 memtable_flush_queue_size: 4
 index_interval: 128
 authenticator: com.datastax.bdp.cassandra.auth.PasswordAuthenticator
 authorizer: com.datastax.bdp.cassandra.auth.CassandraAuthorizer
 auth_replication_options:
 replication_factor : 3
 #DC1: 3
 row_cache_size_in_mb: 0
 row_cache_provider: SerializingCacheProvider
 dynamic_snitch_badness_threshold: 0.1
 commitlog_sync_period_in_ms: 1
 auto_snapshot: true
 concurrent_reads: 32
 in_memory_compaction_limit_in_mb: 64
 endpoint_snitch: org.apache.cassandra.locator.PropertyFileSnitch
 flush_largest_memtables_at: 0.75
 reduce_cache_sizes_at: 0.85
 partitioner: org.apache.cassandra.dht.RandomPartitioner
 saved_caches_directory: /var/lib/cassandra/saved_caches
 ssl_storage_port: 7001

 rpc_port: 9160
 commitlog_directory: /var/lib/cassandra/commitlog

 rpc_server_type: sync
 compaction_preheat_key_cache: true
 concurrent_writes: 32
 data_file_directories: [/var/lib/cassandra/data]
 initial_token: 56713727820156410577229101238628035242
 seed_provider:
 - class_name: org.apache.cassandra.locator.SimpleSeedProvider
   parameters:


 rpc_server_type: sync #changed from sync to hsha

 rpc_min_threads: 2
 rpc_max_threads: 64

 How do I resolve this?
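With no explicit -Xmx in play, the heap comes from cassandra-env.sh's automatic sizing, which can be sketched roughly as (an approximation of that script's arithmetic, not its exact code; the function name is made up):

```python
def default_max_heap_mb(system_memory_mb: int) -> int:
    """Rough sketch of cassandra-env.sh's automatic heap sizing:
    max(min(1/2 RAM, 1024 MB), min(1/4 RAM, 8192 MB))."""
    half = min(system_memory_mb // 2, 1024)
    quarter = min(system_memory_mb // 4, 8192)
    return max(half, quarter)

# On a 4 GB node this yields a ~1 GB heap. The kernel OOM killer firing
# means heap + off-heap usage (memtables, caches, mmapped SSTables) plus
# other processes exceed physical RAM, so explicitly capping -Xmx/-Xms
# lower than the computed default is the usual first step.
print(default_max_heap_mb(4096))  # → 1024
```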




DataStax C++ on Ubuntu - linking question

2014-03-06 Thread Michael Dykman
I managed to get libcql.so built without error but am somewhat
confused about the dependent libraries. In the documentation, I read
that the XX-mt versions of boost are required (which makes all kinds of
sense), and the files generated by cmake seem to agree, as I have:

In ./CMakeFiles/cql.dir/build.make
...
libcql.so.0.7.0: /usr/lib/libboost_system-mt.so
libcql.so.0.7.0: /usr/lib/libboost_thread-mt.so
...

which are the version I would expect.

When I examine the resulting shared library though, I see this:

mdykman@sage:~/projects/datastax-cpp-driver$ ldd libcql.so
	linux-vdso.so.1 =>  (0x7fff5d5fc000)
	libboost_system.so.1.46.1 => /usr/lib/libboost_system.so.1.46.1 (0x7fbf2f999000)
	libboost_thread.so.1.46.1 => /usr/lib/libboost_thread.so.1.46.1 (0x7fbf2f78)
	libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x7fbf2f521000)
	libcrypto.so.1.0.0 => /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x7fbf2f146000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7fbf2ef29000)
	libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x7fbf2ec28000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7fbf2e92c000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x7fbf2e716000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7fbf2e355000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7fbf2e151000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x7fbf2df3a000)
	/lib64/ld-linux-x86-64.so.2 (0x7fbf301eb000)

I have confirmed that /usr/lib has both -mt and non-mt  version of all
the shared libraries.

Does anyone know why my version appears to have linked to the
single-threaded version and what I can do to fix that?
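For anyone hitting the same thing: the stock FindBoost module exposes a switch for the -mt variants. A hedged sketch (whether this project's CMakeLists honors it, or overrides it elsewhere, is an assumption worth checking):

```cmake
# Must appear before the find_package(Boost ...) call in CMakeLists.txt.
# Boost_USE_MULTITHREADED selects the -mt library variants; it can also
# be forced from the command line: cmake -DBoost_USE_MULTITHREADED=ON .
set(Boost_USE_MULTITHREADED ON)
find_package(Boost COMPONENTS system thread REQUIRED)
```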


-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: C++ build under Ubuntu 12.04

2014-03-05 Thread Michael Dykman
The only listed dependencies are boost and libssh. I am not even
slightly uncertain whether they are installed: not only did I confirm them
yesterday via dpkg (having installed both via apt-get from Ubuntu's
core repos), I have been explicitly coding against them both for the
past several months on this same workstation. I can see them all at
their relative paths and have a couple of working make files that
reference them. They are also the only items mentioned in the error
message when my build fails:

mdykman@sage:~/projects/datastax-cpp-driver$ cmake .
-- info CMAKE_BINARY_DIR: /home/mdykman/projects/datastax-cpp-driver
-- Could NOT find Boost
CMake Error at 
/usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:91
(MESSAGE):
  Could NOT find LIBSSH2 (missing: LIBSSH2_LIBRARIES LIBSSH2_INCLUDE_DIRS)
Call Stack (most recent call first):
  /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:252
(_FPHSA_FAILURE_MESSAGE)
  extra/ccm_bridge/cmake/Modules/FindLIBSSH2.cmake:51
(find_package_handle_standard_args)
  extra/ccm_bridge/CMakeLists.txt:37 (find_package)


-- Configuring incomplete, errors occurred!

open ssl is installed in an obvious place /usr/include/openssl/ssl.h

as is boost:asio  /usr/include/boost/asio.hpp

Does anyone have a hint as to how to edit/debug the search paths being
used by cmake?

On Wed, Mar 5, 2014 at 11:39 AM, Michael Shuler mich...@pbandjelly.org wrote:
 On 03/04/2014 05:33 PM, Michael Dykman wrote:

 I am getting errors running the cmake file in a *very* recent download
 of the C++ driver's source tree.  It seems to be failing to find
 either the boost::asio or openssl libraries.  I definitely have these
 both installed, having developed against them recently (and rechecked
 with dpkg today).

 While I have brushed up against cmake before, I have never had to
 modify CMakeLists.txt before.  Could someone please advise me how to
 adjust that file so it can find the external dependencies?


 You shouldn't need to edit.  Perhaps you are just missing one of the
 dependencies and think you have everything installed  :)

 From a fresh ec2 instance:  http://12.am/tmp/cpp-driver_setup.txt

 That's how I work through build dependencies - granted that was quick,
 without searching apt-cache, since I've done this before and have a list of
 build deps.  Hope this helps!

 --
 Kind regards,
 Michael



-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: C++ build under Ubuntu 12.04

2014-03-05 Thread Michael Dykman
I stand corrected. I did not have libssh2, and while I did have
libboost-all-dev (see below), I did not have all the specific
packages indicated:

 ii  libboost-all-dev   1.48.0.2   Boost C++ Libraries development files (ALL, default version)

So, I ran the full apt-get command that you suggested (as indicated in
instruction_win_lin.txt.txt):

  sudo apt-get install build-essential cmake libasio-dev
libboost-system-dev libboost-thread-dev libboost-test-dev
libboost-program-options-dev libssh2-1-dev

which succeeded, and then I repeated the cmake steps. As you can see from
the messages below, the boost::asio library is *still* not being found.

mdykman@sage:~/projects/datastax-cpp-driver$ cmake  .
-- info CMAKE_BINARY_DIR: /home/mdykman/projects/datastax-cpp-driver
-- Could NOT find Boost
-- Found LIBSSH2: /usr/lib/libssh2.so
-- Configuring done
-- Generating done
-- Build files have been written to: /home/mdykman/projects/datastax-cpp-driver

I have thoroughly read instruction_win_lin.txt.txt but no solution is
presenting itself to me.  I am more than happy to do my own deep-dive
if someone could suggest how I go about instructing cmake to find the
boost libraries? Expand the search paths?
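Two FindBoost knobs that can help debug the search (standard CMake variables; whether they resolve this particular CMakeLists is an assumption):

```shell
# BOOST_ROOT points FindBoost at an install prefix explicitly; Boost_DEBUG
# traces every path the module probes, which shows why the lookup fails.
cmake -DBOOST_ROOT=/usr -DBoost_DEBUG=ON .
```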

On Wed, Mar 5, 2014 at 12:14 PM, Michael Shuler mich...@pbandjelly.org wrote:
 On 03/05/2014 10:55 AM, Michael Dykman wrote:

 The only listed dependencies: boost and libssh.  I am not even
 slightly uncertain if they are installed. Not only did I confirm them
 yesterday via dpkg (having installed both via apt-get from Ubuntu's
 core repos), I have been explicitly coding against them both for the
 past several months on this same workstation.  I can see them all at
 thier relative paths and have a couple of working make files then
 reference them.  They are also the only items mentioned in the error
 message when my build fails:

 mdykman@sage:~/projects/datastax-cpp-driver$ cmake .
 -- info CMAKE_BINARY_DIR: /home/mdykman/projects/datastax-cpp-driver
 -- Could NOT find Boost
 CMake Error at
 /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:91
 (MESSAGE):
Could NOT find LIBSSH2 (missing: LIBSSH2_LIBRARIES
 LIBSSH2_INCLUDE_DIRS)
 Call Stack (most recent call first):
/usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:252
 (_FPHSA_FAILURE_MESSAGE)
extra/ccm_bridge/cmake/Modules/FindLIBSSH2.cmake:51
 (find_package_handle_standard_args)
extra/ccm_bridge/CMakeLists.txt:37 (find_package)


 -- Configuring incomplete, errors occurred!

 open ssl is installed in an obvious place /usr/include/openssl/ssl.h


 libssl != libssh

 The LIBSSH2 error is in the output I posted, along with the next command
 being the solution: 'apt-get install libssh2-1-dev'


 as is boost:asio  /usr/include/boost/asio.hpp

 Does anyone have a hint as to how to edit/debug the search paths being
 used by cmake?


 It's also documented in
 https://github.com/datastax/cpp-driver/blob/master/instruction_win_lin.txt.txt
 with the exception that libboost-filesystem-dev and libboost-log-dev (not
 available in wheezy/12.04) are no longer needed, per
 https://datastax-oss.atlassian.net/browse/CPP-36

 All in one line:

 sudo apt-get install build-essential cmake libasio-dev libboost-system-dev
 libboost-thread-dev libboost-test-dev libboost-program-options-dev
 libssh2-1-dev

 --
 Michael



-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: C++ build under Ubuntu 12.04

2014-03-05 Thread Michael Dykman
It looks like a cmake message error, as you suggested. Running 'make'
against the result of 'cmake' grinds through without an error, and the
result is indeed linked to boost, although it is not the multi-threaded
version of boost (which is installed on my system), as I would have
expected.

mdykman@sage:~/projects/datastax-cpp-driver$ ldd libcql.so
	linux-vdso.so.1 =>  (0x7fff61e88000)
	libboost_system.so.1.46.1 => /usr/lib/libboost_system.so.1.46.1 (0x7fe490a8b000)
	libboost_thread.so.1.46.1 => /usr/lib/libboost_thread.so.1.46.1 (0x7fe490872000)
	libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x7fe490613000)
	libcrypto.so.1.0.0 => /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x7fe490238000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7fe49001b000)
	libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x7fe48fd1a000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7fe48fa1e000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x7fe48f808000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7fe48f447000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7fe48f243000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x7fe48f02c000)
	/lib64/ld-linux-x86-64.so.2 (0x7fe4912dd000)

I will write up that bug report.  After lunch.

On Wed, Mar 5, 2014 at 1:08 PM, Michael Shuler mich...@pbandjelly.org wrote:
 On 03/05/2014 11:53 AM, Michael Dykman wrote:

 I stand corrected. I did not have libssh2 and while I did have
 libboost-all-dev (see below), I did not have the all the specific
 packages indicated

   ii  libboost-all-dev   1.48.0.2
   Boost C++ Libraries development files (ALL, default
 version)

 So, I ran the full apt-get command that you suggested (as indicated in
 instruction_win_lin.txt.txt):

sudo apt-get install build-essential cmake libasio-dev
 libboost-system-dev libboost-thread-dev libboost-test-dev
 libboost-program-options-dev libssh2-1-dev

 which succeeded and repeated the cmake steps.  As you can see from the
 messages below, the boost::asio library is *still* not being found.

 mdykman@sage:~/projects/datastax-cpp-driver$ cmake  .
 -- info CMAKE_BINARY_DIR: /home/mdykman/projects/datastax-cpp-driver
 -- Could NOT find Boost
 -- Found LIBSSH2: /usr/lib/libssh2.so
 -- Configuring done
 -- Generating done
 -- Build files have been written to:
 /home/mdykman/projects/datastax-cpp-driver

 I have thoroughly read instruction_win_lin.txt.txt but no solution is
 presenting itself to me.  I am more than happy to do my own deep-dive
 if someone could suggest how I go about instructing cmake to find the
 boost libraries? Expand the search paths?


 Oh, I missed the NOT in my example, too.  Could you report this as a bug?
 I'm not sure if just the cmake message is an error, since it builds fine, or
 if it is, in fact, not finding/using the libs.

 https://datastax-oss.atlassian.net/browse/CPP

 --
 Michael



-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


C++ build under Ubuntu 12.04

2014-03-04 Thread Michael Dykman
I am getting errors running the cmake file in a *very* recent download
of the C++ driver's source tree.  It seems to be failing to find
either the boost::asio or openssl libraries.  I definitely have these
both installed, having developed against them recently (and rechecked
with dpkg today).

While I have brushed up against cmake before, I have never had to
modify CMakeLists.txt before.  Could someone please advise me how to
adjust that file so it can find the external dependencies?

-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: How should clients handle the user defined types in 2.1?

2014-02-25 Thread Michael Dykman
Please do.  I too am working on a driver implementation and would be
delighted to be spared the research.
On Feb 25, 2014 11:29 AM, Theo Hultberg t...@iconara.net wrote:

 thanks for the high-level description of the format, I'll see if I can
 take a stab at implementing support for custom types now.

 and maybe I should take all of the reverse engineering I've done of the
 type encoding and decoding and send a pull request for the protocol spec,
 or write an appendix.

 T#


 On Tue, Feb 25, 2014 at 12:10 PM, Sylvain Lebresne 
 sylv...@datastax.comwrote:


 Is there any documentation on how CQL clients should handle the new user
 defined types coming in 2.1? There's nothing in the protocol specification
 on how to handle custom types as far as I can see.


 Can't say there is much documentation so far for that. As for the spec, it
 was written at a time when user-defined types didn't exist, and so as far as
 the protocol is concerned, user-defined types are handled as a custom type,
 i.e. the full internal class name is returned. And so ...



 For example, I tried creating the address type from the description of
 CASSANDRA-5590, and this is how its metadata looks (the metadata for a
 query contains a column with a custom type and this is the description of
 it):


 org.apache.cassandra.db.marshal.UserType(user_defined_types,61646472657373,737472656574:org.apache.cassandra.db.marshal.UTF8Type,63697479:org.apache.cassandra.db.marshal.UTF8Type,7a69705f636f6465:org.apache.cassandra.db.marshal.Int32Type,70686f6e6573:org.apache.cassandra.db.marshal.SetType(org.apache.cassandra.db.marshal.UTF8Type))

 Is the client supposed to parse that description, and in that case how?


 ... yes, for now you're supposed to parse that description. This is not
 really documented anywhere outside of the Cassandra code, but I can tell you
 that the first parameter of the UserType is the keyspace name the type has
 been defined in, the second is the type name hex-encoded, and the rest is the
 list of fields and their types. Each field name is hex-encoded and separated
 from its type by ':'. And that's about it.
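As an illustration of that layout, run against the address example above (a sketch, not any driver's actual code; `parse_user_type` is a made-up name), a parser might look like:

```python
import binascii

def parse_user_type(desc):
    """Parse a marshal.UserType(...) description into (keyspace, name, fields).

    Layout: the first parameter is the keyspace, the second the hex-encoded
    type name, and the rest hex-encoded field names separated from their
    types by ':'. Field types such as SetType(...) can themselves contain
    parentheses, so commas are only split at nesting depth 0.
    """
    prefix = "org.apache.cassandra.db.marshal.UserType("
    if not (desc.startswith(prefix) and desc.endswith(")")):
        raise ValueError("not a UserType description")
    body = desc[len(prefix):-1]
    parts, depth, cur = [], 0, []
    for ch in body:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if ch == "," and depth == 0:
            parts.append("".join(cur))
            cur = []
        else:
            cur.append(ch)
    parts.append("".join(cur))
    keyspace, hex_name = parts[0], parts[1]
    fields = []
    for field in parts[2:]:
        hex_field, field_type = field.split(":", 1)
        fields.append((binascii.unhexlify(hex_field).decode(), field_type))
    return keyspace, binascii.unhexlify(hex_name).decode(), fields

ks, name, fields = parse_user_type(
    "org.apache.cassandra.db.marshal.UserType(user_defined_types,61646472657373,"
    "737472656574:org.apache.cassandra.db.marshal.UTF8Type,"
    "63697479:org.apache.cassandra.db.marshal.UTF8Type,"
    "7a69705f636f6465:org.apache.cassandra.db.marshal.Int32Type,"
    "70686f6e6573:org.apache.cassandra.db.marshal.SetType("
    "org.apache.cassandra.db.marshal.UTF8Type))")
print(name, [f for f, _ in fields])  # → address ['street', 'city', 'zip_code', 'phones']
```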

 We will introduce much shorter definitions in the next iteration of the
 native protocol, but it's yet unclear when that will happen.

 --
 Sylvain