Re: Can't connect to Cassandra server

2015-07-19 Thread Umang Shah
You also have to set the same IP, 192.248.15.219, for the seeds entry
inside the cassandra.yaml file.

Then try to connect; it should work.
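For reference, the relevant cassandra.yaml entries might look like the sketch below (a single-node setup is assumed; note that YAML requires a space after each colon):

```yaml
# cassandra.yaml (fragment) -- the IP is the node's own address
listen_address: 192.248.15.219
rpc_address: 192.248.15.219

seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "192.248.15.219"
```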

Thanks,
Umang Shah

On Sun, Jul 19, 2015 at 1:52 AM, Chamila Wijayarathna 
cdwijayarat...@gmail.com wrote:

 Hi Ajay,

 I tried that also, but still getting the same result.

 On Sun, Jul 19, 2015 at 2:08 PM, Ajay ajay.ga...@gmail.com wrote:

 Try with the correct IP address as below:

 cqlsh 192.248.15.219 -u sinmin -p xx

 CQL documentation -
 http://docs.datastax.com/en/cql/3.0/cql/cql_reference/cqlsh.html

 On Sun, Jul 19, 2015 at 2:00 PM, Chamila Wijayarathna 
 cdwijayarat...@gmail.com wrote:

 Hello all,

 After starting Cassandra, I tried to connect to it from cqlsh and from
 Java, but both fail.

 Following is the error I get while trying to connect with cqlsh.

 cqlsh -u sinmin -p xx
 Connection error: ('Unable to connect to any servers', {'127.0.0.1':
 error(111, Tried connecting to [('127.0.0.1', 9042)]. Last error:
 Connection refused)})

 I have set listen_address and rpc_address in cassandra.yaml to the server's
 IP address, as follows.

 listen_address:192.248.15.219
 rpc_address:192.248.15.219

 Following is what I found from cassandra system.log.

 https://gist.githubusercontent.com/cdwijayarathna/a14586a9e39a943f89a0/raw/system%20log

 Following is the netstat result I got.

 maduranga@ubuntu:/var/log/cassandra$ netstat
 Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address   Foreign Address
 State
 tcp0  0 ubuntu:ssh  103.21.166.35:54417
 ESTABLISHED
 tcp0  0 ubuntu:1522 ubuntu:30820
  ESTABLISHED
 tcp0  0 ubuntu:30820ubuntu:1522
 ESTABLISHED
 tcp0256 ubuntu:ssh  175.157.41.209:42435
  ESTABLISHED
 Active UNIX domain sockets (w/o servers)
 Proto RefCnt Flags   Type   State I-Node   Path
 unix  9  [ ] DGRAM7936 /dev/log
 unix  3  [ ] STREAM CONNECTED 11737
 unix  3  [ ] STREAM CONNECTED 11736
 unix  3  [ ] STREAM CONNECTED 10949
  /var/run/dbus/system_bus_socket
 unix  3  [ ] STREAM CONNECTED 10948
 unix  2  [ ] DGRAM10947
 unix  2  [ ] STREAM CONNECTED 10801
 unix  3  [ ] STREAM CONNECTED 10641
 unix  3  [ ] STREAM CONNECTED 10640
 unix  3  [ ] STREAM CONNECTED 10444
  /var/run/dbus/system_bus_socket
 unix  3  [ ] STREAM CONNECTED 10443
 unix  3  [ ] STREAM CONNECTED 10437
  /var/run/dbus/system_bus_socket
 unix  3  [ ] STREAM CONNECTED 10436
 unix  3  [ ] STREAM CONNECTED 10430
  /var/run/dbus/system_bus_socket
 unix  3  [ ] STREAM CONNECTED 10429
 unix  2  [ ] DGRAM10424
 unix  3  [ ] STREAM CONNECTED 10422
  /var/run/dbus/system_bus_socket
 unix  3  [ ] STREAM CONNECTED 10421
 unix  2  [ ] DGRAM10420
 unix  2  [ ] STREAM CONNECTED 10215
 unix  2  [ ] STREAM CONNECTED 10296
 unix  2  [ ] STREAM CONNECTED 9988
 unix  2  [ ] DGRAM9520
 unix  3  [ ] STREAM CONNECTED 8769
 /var/run/dbus/system_bus_socket
 unix  3  [ ] STREAM CONNECTED 8768
 unix  2  [ ] DGRAM8753
 unix  2  [ ] DGRAM9422
 unix  3  [ ] STREAM CONNECTED 7000
 @/com/ubuntu/upstart
 unix  3  [ ] STREAM CONNECTED 8485
 unix  2  [ ] DGRAM7947
 unix  3  [ ] STREAM CONNECTED 6712
 /var/run/dbus/system_bus_socket
 unix  3  [ ] STREAM CONNECTED 6711
 unix  3  [ ] STREAM CONNECTED 7760
 /var/run/dbus/system_bus_socket
 unix  3  [ ] STREAM CONNECTED 7759
 unix  3  [ ] STREAM CONNECTED 7754
 unix  3  [ ] STREAM CONNECTED 7753
 unix  3  [ ] DGRAM7661
 unix  3  [ ] DGRAM7660
 unix  3  [ ] STREAM CONNECTED 6490
 @/com/ubuntu/upstart
 unix  3  [ ] STREAM CONNECTED 6475

 What is the issue here? Why can't I connect to the Cassandra server? How can
 I fix this?

 Thank You!

 --
 *Chamila Dilshan Wijayarathna,*
 Software Engineer
 Mobile:(+94)788193620
 WSO2 Inc., http://wso2.com/





 --
 *Chamila Dilshan Wijayarathna,*
 Software Engineer
 Mobile:(+94)788193620
 WSO2 Inc., http://wso2.com/




-- 
Regards,
Umang Shah
+919886829019


Re: OperationTimedOut in select count statement in cqlsh

2015-04-22 Thread Umang Shah
In that case you have to increase the read timeout, as others suggested.
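For cqlsh specifically, one way is the client-side timeout in cqlshrc; the path and option name are assumptions that vary by version (older cqlsh reads client_timeout, newer versions request_timeout):

```ini
# ~/.cassandra/cqlshrc (fragment) -- raises the cqlsh client timeout to 1 hour
[connection]
client_timeout = 3600
```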

On Wed, Apr 22, 2015 at 10:06 AM, Mich Talebzadeh m...@peridale.co.uk
wrote:

 Thanks Umang.



 I have 9GB of memory free here out of 24GB



 cassandra@rhes564::/apps/cassandra free

               total       used       free     shared    buffers     cached
  Mem:      24675328   14774532    9900796          0     539900    9097992
  -/+ buffers/cache:    5136640   19538688
  Swap:     24579440        668   24578772



 But I am still getting the same error



 cqlsh:ase> select count(1) from t;

 OperationTimedOut: errors={}, last_host=127.0.0.1



 thx



 Mich Talebzadeh



 http://talebzadehmich.wordpress.com



 Author of the book *A Practitioner's Guide to Upgrading to Sybase ASE 15*,
 ISBN 978-0-9563693-0-7.

 Co-author of *Sybase Transact SQL Guidelines Best Practices*,
 ISBN 978-0-9759693-0-4.

 Publications due shortly:

 *Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and
 Coherence Cache*

 *Oracle and Sybase, Concepts and Contrasts*, ISBN: 978-0-9563693-1-4, volume
 one out shortly



 NOTE: The information in this email is proprietary and confidential. This
 message is for the designated recipient only, if you are not the intended
 recipient, you should destroy it immediately. Any information in this
 message shall not be understood as given or endorsed by Peridale Ltd, its
 subsidiaries or their employees, unless expressly so stated. It is the
 responsibility of the recipient to ensure that this email is virus free,
 therefore neither Peridale Ltd, its subsidiaries nor their employees accept
 any responsibility.



 *From:* Umang Shah [mailto:shahuma...@gmail.com]
 *Sent:* 22 April 2015 10:44
 *To:* user@cassandra.apache.org
 *Subject:* Re: OperationTimedOut in select count statement in cqlsh



 Hi,



 It is a common problem: if your machine has only 4 GB of RAM you can
 retrieve only a limited number of records, so you have to increase the RAM
 of your system to avoid this problem.



 Thanks,

 Umang Shah



 On Wed, Apr 22, 2015 at 9:34 AM, Mich Talebzadeh m...@peridale.co.uk
 wrote:

 Hi,



 I have a table of 300,000 rows.



 When I try to do a simple



 cqlsh:ase> select count(1) from t;

 OperationTimedOut: errors={}, last_host=127.0.0.1



 Appreciate any feedback



 Thanks,



 Mich












 --

 Regards,

 Umang Shah

 +919886829019




-- 
Regards,
Umang Shah
+919886829019


Re: OperationTimedOut in select count statement in cqlsh

2015-04-22 Thread Umang Shah
Hi,

It is a common problem: if your machine has only 4 GB of RAM you can
retrieve only a limited number of records, so you have to increase the RAM
of your system to avoid this problem.

Thanks,
Umang Shah

On Wed, Apr 22, 2015 at 9:34 AM, Mich Talebzadeh m...@peridale.co.uk
wrote:

 Hi,



 I have a table of 300,000 rows.



 When I try to do a simple



 cqlsh:ase> select count(1) from t;

 OperationTimedOut: errors={}, last_host=127.0.0.1



 Appreciate any feedback



 Thanks,



 Mich











-- 
Regards,
Umang Shah
+919886829019


Re: What will be system configuration for retrieving few GB of data

2014-10-19 Thread Umang Shah
Thanks Mohammed. This is the answer I was looking for.

Regards,
Umang Shah

On Sat, Oct 18, 2014 at 5:11 AM, Mohammed Guller moham...@glassbeam.com
wrote:

  With 8 GB of RAM, the default heap size is 2 GB, so you will quickly start
 running out of heap space if you do large reads. What is a large read? It
 depends on the number of columns in each row and the data in each column. It
 could be 100,000 rows for some and 300,000 for others. In addition, remember
 that Java adds a lot of overhead to data in memory, so an 8-character string
 will not occupy just 8 bytes in memory, but a lot more.
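The per-object overhead is easy to demonstrate; the numbers below come from CPython rather than the JVM, but the effect Mohammed describes is the same (a sketch, not a measurement of Cassandra's heap):

```python
import sys

s = "abcdefgh"           # 8 characters of payload
print(len(s))            # 8 -- just the character count
print(sys.getsizeof(s))  # total object size: far more than 8 bytes,
                         # due to the object header, length, hash, ...
```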



 In general, avoid large reads in C*. If it absolutely must be done and you
 cannot repartition the data, then use a driver that supports paging.
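For illustration, the paged-count idea can be sketched without any particular driver; fetch_page below is a hypothetical stand-in for a driver call that returns one page of rows plus a token for the next page (real drivers, e.g. the DataStax Python driver, page automatically via fetch_size):

```python
# A sketch of scanning a large table page by page, so only one page is
# ever held in memory. fetch_page() is a hypothetical stand-in for a
# driver call; here it serves pages out of an in-memory fake table.

FAKE_TABLE = list(range(300000))  # stands in for 300,000 rows

def fetch_page(paging_state, page_size):
    """Return (rows, next_paging_state); next state is None when done."""
    start = paging_state or 0
    rows = FAKE_TABLE[start:start + page_size]
    nxt = start + page_size
    return rows, (nxt if nxt < len(FAKE_TABLE) else None)

def paged_count(page_size=5000):
    total, state = 0, None
    while True:
        rows, state = fetch_page(state, page_size)
        total += len(rows)  # only one page is resident at a time
        if state is None:
            return total

print(paged_count())  # 300000
```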



 Mohammed



 *From:* Umang Shah [mailto:shahuma...@gmail.com]
 *Sent:* Wednesday, October 15, 2014 10:46 PM
 *To:* user@cassandra.apache.org
 *Subject:* What will be system configuration for retrieving few GB of
 data



 Hi,



 I am facing many problems after storing a certain number of records in
 Cassandra; it is throwing OutOfMemoryError.



 I have 8 GB of RAM in my system, so how many records can I expect to
 retrieve using a select query?



 And what should the configuration be for people who are retrieving
 15-20 GB of data?



 Can somebody explain to me how to improve read performance? It would be a
 great help. I tried


 http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_tune_jvm_c.html



 and similar things, but nothing helped.





 --

 Regards,

 Umang Shah

 shahuma...@gmail.com




-- 
Regards,
Umang V.Shah
+919886829019


What will be system configuration for retrieving few GB of data

2014-10-15 Thread Umang Shah
Hi,

I am facing many problems after storing a certain number of records in
Cassandra; it is throwing OutOfMemoryError.

I have 8 GB of RAM in my system, so how many records can I expect to
retrieve using a select query?

And what should the configuration be for people who are retrieving
15-20 GB of data?

Can somebody explain to me how to improve read performance? It would be a
great help. I tried
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_tune_jvm_c.html

and similar things, but nothing helped.


-- 
Regards,
Umang Shah
shahuma...@gmail.com


Re: Difference in retrieving data from cassandra

2014-09-26 Thread Umang Shah
Hey Jonathan,

Thanks for your reply.
I created the schema in this manner:

CREATE SCHEMA schemaname WITH replication = { 'class' : 'SimpleStrategy',
'replication_factor' : 1 };
and the tables according to the requirements.

I didn't use a node structure.

So could that be the reason for the performance difference?

And can you also tell me what the difference is between the structure I
used and a node structure?

Regards,
Umang Shah
BI-ETL Developer

On Thu, Sep 25, 2014 at 4:48 PM, Jonathan Haddad j...@jonhaddad.com wrote:

 You'll need to provide a bit of information.  To start, a query trace
 would be helpful.


 http://www.datastax.com/documentation/cql/3.0/cql/cql_reference/tracing_r.html

 (self promo) You may want to read over my blog post regarding
 diagnosing problems in production.  I've covered diagnosing slow
 queries:
 http://rustyrazorblade.com/2014/09/cassandra-summit-recap-diagnosing-problems-in-production/


 On Thu, Sep 25, 2014 at 4:21 AM, Umang Shah shahuma...@gmail.com wrote:
  Hi All,
 
  I am using Cassandra with Pentaho PDI (Kettle). I have installed Cassandra
  on an Amazon EC2 instance and on my local machine. When I try to retrieve
  data from the local machine using Pentaho PDI it takes a few seconds (not
  more than 20 seconds), but the same operation against the production
  database takes almost 3 minutes for the same amount of data, which is a
  huge difference.

  So can anybody give me some pointers on what I need to check, or how I can
  narrow down this difference?

  On the local machine and the production server the RAM is the same.
  The local machine is a Windows environment and production is Linux.
 
  --
  Regards,
  Umang V.Shah
  BI-ETL Developer



 --
 Jon Haddad
 http://www.rustyrazorblade.com
 twitter: rustyrazorblade




-- 
Regards,
Umang V.Shah
+919886829019


Difference in retrieving data from cassandra

2014-09-25 Thread Umang Shah
Hi All,

I am using Cassandra with Pentaho PDI (Kettle). I have installed Cassandra
on an Amazon EC2 instance and on my local machine. When I try to retrieve
data from the local machine using Pentaho PDI it takes a few seconds (not
more than 20 seconds), but the same operation against the production
database takes almost 3 minutes for the same amount of data, which is a
huge difference.

So can anybody give me some pointers on what I need to check, or how I can
narrow down this difference?

On the local machine and the production server the RAM is the same.
The local machine is a Windows environment and production is Linux.

-- 
Regards,
Umang V.Shah
BI-ETL Developer


Re: Performance testing in Cassandra

2014-09-10 Thread Umang Shah
Hi Malay,

You can do the following.

The cassandra-stress tool is at tools/bin/cassandra-stress.

To perform inserts and reads against a keyspace and measure performance:

cassandra-stress [options] [-o [operation name]]

-o (--operation) : INSERT, READ, etc. (default INSERT)
-t (--threads) : processor threads to use for the operation (default 50)
-k (--keep-going) : ignore errors during inserts and reads
-n (--num-keys) : number of records to insert (default 1,000,000)

For more, use the cassandra-stress help via the command below:

bin/cassandra-stress -h

Complete documentation is available on the DataStax website:

www.datastax.com/documentation/cassandra/2.0/cassandra/tools/

Thanks,
Umang Shah
shahuma...@gmail.com

On Wed, Sep 10, 2014 at 5:36 AM, Malay Nilabh malay.nil...@lntinfotech.com
wrote:

  Hi



 Can anyone please let me know the steps for performance testing in
 Cassandra using the stress tool?



 *Regards,*

 *Malay Nilabh*

 BIDW BU/ Big Data CoE

 L&T Infotech Ltd, Hinjewadi, Pune

 +91-20-66571746
 +91-73-879-00727

 Email: malay.nil...@lntinfotech.com

 *|| Save Paper - Save Trees || *



 --
 The contents of this e-mail and any attachment(s) may contain confidential
 or privileged information for the intended recipient(s). Unintended
 recipients are prohibited from taking action on the basis of information in
 this e-mail and using or disseminating the information, and must notify the
 sender and delete it from their system. L&T Infotech will not accept
 responsibility or liability for the accuracy or completeness of, or the
 presence of any virus or disabling code in this e-mail




-- 
Regards,
Umang V.Shah
+919886829019


Re: Bulk load in cassandra

2014-08-27 Thread Umang Shah
Hi Malay,

Yesterday I answered your question, but you didn't reply back on whether
it worked for you or not.

Anyway, you mean importing a text file into Cassandra.

You can do that in the following way:

COPY keyspace.columnfamily (column1, column2, ...) FROM 'temp.csv' (the
location of the file);

To execute the above command directly, your file has to be in the
cassandra/bin location.
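As an illustration, a hypothetical run might look like this (the keyspace ks, table users, and the CSV layout are made up for the example; the HEADER option tells cqlsh to skip the first line of the file):

```sql
-- Hypothetical: load /tmp/users.csv (id,name,email columns) into ks.users
COPY ks.users (id, name, email) FROM '/tmp/users.csv' WITH HEADER = true;
```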

Thanks,
Umang Shah
Pentaho BI-ETL Developer
shahuma...@gmail.com


On Wed, Aug 27, 2014 at 12:13 PM, Malay Nilabh malay.nil...@lntinfotech.com
 wrote:

  Hi

  I installed Cassandra on one node successfully. Using the CLI, I am able to
 add a table to the keyspace as well as retrieve data from the table. My
 query is: if I have a text file on my local file system and I want to load
 it into the Cassandra cluster (a bulk load, you could say), how can I
 achieve that? Please help me out.



 Regards

 *Malay Nilabh*

 BIDW BU/ Big Data CoE

 L&T Infotech Ltd, Hinjewadi, Pune

 +91-20-66571746
 +91-73-879-00727

 Email: malay.nil...@lntinfotech.com

 *|| Save Paper - Save Trees || *







-- 
Regards,
Umang V.Shah
+919886829019


Re: Cassandra Installation

2014-08-26 Thread Umang Shah
Hi Malay,

Have a look at this video; it will give you very clear instructions on how
to achieve what you want.

https://www.youtube.com/watch?v=Wohi9B-1Omc

Thanks,
Umang Shah
Pentaho BI-ETL Developer
shahuma...@gmail.com


On Tue, Aug 26, 2014 at 12:41 PM, Malay Nilabh malay.nil...@lntinfotech.com
 wrote:

  Hi



 I want to set up a one-node Cassandra cluster on my Ubuntu machine, which
 has Java 1.7 with the Oracle JDK, and I have already downloaded the
 Cassandra 2.0 tar file, so I need a full document on setting up a
 single-node Cassandra cluster; please guide me through this.



 Thanks & Regards

 *Malay Nilabh*

 BIDW BU/ Big Data CoE

 L&T Infotech Ltd, Hinjewadi, Pune

 +91-20-66571746
 +91-73-879-00727

 Email: malay.nil...@lntinfotech.com

 *|| Save Paper - Save Trees || *







-- 
Regards,
Umang V.Shah
+919886829019


problem with copy command and heap size

2014-06-26 Thread umang shah
 

1) I am using the commands below for copying data:

 

COPY events.standardevents (uuid, data, name, time, tracker, type, userid)
TO 'temp.csv'; 

 

truncate standardevents;

 

COPY event.standardeventstemp (uuid, data, name, time, tracker, type,
userid) FROM 'temp.csv';

 

If the table does not contain any field with the uuid data type, the above
commands work fine, but if it does contain a uuid column then it gives me
the error below:

 

Bad Request: Invalid STRING constant (3a1ccec0-ef77-11e3-9e56-22000ae3163a)
for name of type uuid

 

aborting import at column #0, previously inserted values are still present. 

 

Below is the description of my column family:

 

CREATE TABLE standardevents (

  uuid uuid PRIMARY KEY,

  data text,

  name text,

  time text,

  tracker text,

  type text,

  userid text

) WITH

  bloom_filter_fp_chance=0.01 AND

  caching='KEYS_ONLY' AND

  comment='' AND

  dclocal_read_repair_chance=0.00 AND

  gc_grace_seconds=864000 AND

  read_repair_chance=0.10 AND

  replicate_on_write='true' AND

  populate_io_cache_on_flush='false' AND

  compaction={'class': 'SizeTieredCompactionStrategy'} AND

  compression={'sstable_compression': 'SnappyCompressor'};

  

2) Facing the problem "Heap is 0.8116662666877581 full"

 

I checked the logs and am getting the messages below:

 

GCInspector.java (line 142) Heap is 0.8116662666877581 full.  You may need
to reduce memtable and/or cache sizes.  Cassandra will now flush up to the
two largest memtables to free up memory.  Adjust flush_largest_memtables_at
threshold in cassandra.yaml if you don't want Cassandra to do this
automatically

StorageService.java (line 3512) Unable to reduce heap usage since there are
no dirty column families

 

I checked the cassandra-env.sh file; currently it has

MAX_HEAP_SIZE=4G

HEAP_NEWSIZE=800M, which I guess is the maximum.

 

Because of this I am constantly getting heap-size errors on every command.
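For reference, the 2.0-era tuning docs size the heap roughly as sketched below; the concrete values are assumptions for an 8 GB, 4-core box, not a recommendation for this machine:

```shell
# cassandra-env.sh (fragment)
# Rules of thumb: MAX_HEAP_SIZE = min(1/2 of system RAM, 8 GB);
#                 HEAP_NEWSIZE  = 100 MB per CPU core.
MAX_HEAP_SIZE="4G"
HEAP_NEWSIZE="400M"
```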