Re: cql shell error
Thanks!

Tamar Fraenkel
Senior Software Engineer, TOK Media
ta...@tok-media.com
Tel: +972 2 6409736  Mob: +972 54 8356490  Fax: +972 2 5612956

On Sun, Apr 15, 2012 at 9:05 PM, Janne Jalkanen <janne.jalka...@ecyrd.com> wrote:

The Resolution line says Fixed, and the Fix Version line says 1.0.9, 1.1.0. So upgrade to 1.0.9 to get a fix for this particular bug :-) (Luckily, 1.0.9 was released a few days ago, so you can just download and upgrade.)

/Janne

On Apr 15, 2012, at 20:31, Tamar Fraenkel wrote:

I apologize for what must be a dumb question, but I see that there are patches etc. What do I need to do in order to get the fix? I am running the latest Cassandra, 1.0.8.

On Sun, Apr 15, 2012 at 7:46 PM, Janne Jalkanen <janne.jalka...@ecyrd.com> wrote:

You might have hit this bug: https://issues.apache.org/jira/browse/CASSANDRA-4003

/Janne

On Apr 15, 2012, at 17:21, Tamar Fraenkel wrote:

Hi! I get an error when I try to read a column value using cql, but I can read it when I use the cli. Reading in the cli:

get cf['a52efb7a-b2ea-417b-b54a-9d6a2ebf6d71']['i:nwtp_name']
=> (column=i:nwtp_name, value=Günter Grass's Israel poem provokes outrage, timestamp=1333816116526001)

When I try to read with cqlsh I get:

'ascii' codec can't encode character u'\u2019' in position 5: ordinal not in range(128)

Do I need to store only ASCII characters, or can I read the value somehow using cql?

Thanks,
Tamar
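For anyone hitting the same UnicodeEncodeError: it is a stock Python error (cqlsh is a Python tool), raised because U+2019 (a curly apostrophe) has no ASCII encoding. A minimal reproduction of the failure and of the UTF-8 round-trip that works; the sample string is illustrative, not necessarily the exact stored value, and this is not cqlsh's actual code path:

```python
# Encoding a value containing U+2019 as ASCII raises the error quoted in
# the thread, while UTF-8 (what a UTF8Type column actually stores) is fine.
title = u"G\u00fcnter Grass\u2019s Israel poem provokes outrage"

try:
    title.encode("ascii")
except UnicodeEncodeError as exc:
    print("ascii failed:", exc.reason)  # ordinal not in range(128)

encoded = title.encode("utf-8")
assert encoded.decode("utf-8") == title  # UTF-8 round-trips losslessly
print("utf-8 round-trip OK")
```

So the data itself is fine; only the client-side attempt to render it as ASCII fails, which is what CASSANDRA-4003 addresses in cqlsh.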
Re: Off-heap row cache and mmapped sstables
On 4/12/12, Omid Aladini <omidalad...@gmail.com> wrote:

Cassandra issues an mlockall [1] before mmap-ing sstables to prevent the kernel from paging out heap space in favor of memory-mapped sstables. I was wondering: what happens to the off-heap row cache (saved or unsaved)? Is it possible that the kernel pages out the off-heap row cache in favor of resident mmap-ed sstable pages?

For what it's worth, I find this conjecture plausible given my understanding of the Cassandra ticket which resulted in the use of JNA+mlockall. I'd love to hear an opinion from someone from the project with more in-depth knowledge. :)

=Rob
--
=Robert Coli
AIM/GTALK - rc...@palominodb.com
YAHOO - rcoli.palominob
SKYPE - rcoli_palominodb
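For context, the mlockall call being discussed can be sketched outside the JVM. This is a rough, Linux-only illustration of what Cassandra does through JNA at startup, not Cassandra's code: lock the pages currently mapped so the kernel cannot page the heap out in favor of hot mmap-ed sstable pages. Without CAP_IPC_LOCK or a raised RLIMIT_MEMLOCK the call simply fails, and Cassandra likewise logs a warning and carries on.

```python
import ctypes
import ctypes.util

MCL_CURRENT = 1  # lock pages that are mapped right now
MCL_FUTURE = 2   # (unused here) would also lock future mappings

# Load libc and call mlockall(2) directly, roughly as JNA does.
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6",
                   use_errno=True)
rc = libc.mlockall(MCL_CURRENT)
if rc == 0:
    print("mlockall succeeded: current pages are pinned in RAM")
else:
    print("mlockall failed, errno=%d; running unpinned" % ctypes.get_errno())
```

Note that mlockall only protects what is mapped by the calling process at the time of (or, with MCL_FUTURE, after) the call — which is exactly why the question about later-allocated off-heap cache pages is a fair one.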
Re: Long Startup Times
If you start with DEBUG logging (or just enable DEBUG logging for SSTableReader) you will get some more information on what is taking the time at startup.

If you want to dig a little further, take a look at iostat and the CPU load. During startup a thread is created for each core on the machine and used to open a file; I've wondered if this could overload the IO on machines that report 16 cores. You'll see messages like INFO [SSTableBatchOpen:1], where the number is the thread number.

Cheers
- Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 16/04/2012, at 5:13 AM, Derek Barnes wrote:

Hi, I have 2 column families with approx 50 GB of compressed data (~150 GB uncompressed). The data resides in a keyspace replicated 2-way, hosted by a 2-node Cassandra cluster (v1.0.8); both nodes have 74 GB RAM and 16 cores. Key caches are set to 1.0.

I'm noticing that it can take upwards of 15 minutes for a node to start up (i.e. before it becomes responsive to thrift clients). During this time, the logs suggest the system is blocked opening the data files. Is this expected behaviour? Are there any best practices for reducing node startup time? Thanks in advance!
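The startup pattern Aaron describes — one opener thread per reported core — can be sketched as follows. The names here are illustrative stand-ins, not Cassandra's actual code; the point is that on a 16-core box up to 16 files are opened and scanned concurrently, which can saturate a single disk and show up as a long, IO-bound startup.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def open_sstable(path):
    # stand-in for the real work: read the file header, build the
    # index summary, populate the key cache, etc.
    return path

# hypothetical sstable file names, one entry per data file on disk
paths = ["Data-%d.db" % i for i in range(64)]

# one worker per core, mirroring the SSTableBatchOpen:<n> threads
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    readers = list(pool.map(open_sstable, paths))

print("opened %d sstables with %d workers" % (len(readers), os.cpu_count()))
```

If iostat shows the disk pegged during this phase, the concurrency is outrunning the storage rather than the CPUs.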
RE: [RELEASE CANDIDATE] Apache Cassandra 1.1.0-rc1 released
I keep running into this with my testing (on a Windows box). Is this just an OOM for RAM?

ERROR [COMMIT-LOG-ALLOCATOR] 2012-04-16 13:36:18,790 AbstractCassandraDaemon.java (line 134) Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main]
java.io.IOError: java.io.IOException: Map failed
    at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:127)
    at org.apache.cassandra.db.commitlog.CommitLogSegment.freshSegment(CommitLogSegment.java:80)
    at org.apache.cassandra.db.commitlog.CommitLogAllocator.createFreshSegment(CommitLogAllocator.java:244)
    at org.apache.cassandra.db.commitlog.CommitLogAllocator.access$500(CommitLogAllocator.java:49)
    at org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:104)
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.io.IOException: Map failed
    at sun.nio.ch.FileChannelImpl.map(Unknown Source)
    at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:119)
    ... 6 more
Caused by: java.lang.OutOfMemoryError: Map failed
    at sun.nio.ch.FileChannelImpl.map0(Native Method)
    ... 8 more
INFO [StorageServiceShutdownHook] 2012-04-16 13:36:18,961 CassandraDaemon.java (line 218) Stop listening to thrift clients
INFO [StorageServiceShutdownHook] 2012-04-16 13:36:18,961 MessagingService.java (line 539) Waiting for messaging service to quiesce
INFO [ACCEPT-/10.47.1.15] 2012-04-16 13:36:18,977 MessagingService.java (line 695) MessagingService shutting down server thread.

-----Original Message-----
From: Sylvain Lebresne [mailto:sylv...@datastax.com]
Sent: Friday, April 13, 2012 9:41 AM
To: user@cassandra.apache.org
Subject: [RELEASE CANDIDATE] Apache Cassandra 1.1.0-rc1 released

The Cassandra team is pleased to announce the release of the first release candidate for the future Apache Cassandra 1.1. Please first note that this is a release candidate, *not* the final release yet.
All help in testing this release candidate will be greatly appreciated. Please report any problem you may encounter[3,4] and have a look at the change log[1] and the release notes[2] to see where Cassandra 1.1 differs from the previous series.

Apache Cassandra 1.1.0-rc1[5] is available as usual from the cassandra website (http://cassandra.apache.org/download/) and a debian package is available using the 11x branch (see http://wiki.apache.org/cassandra/DebianPackaging).

Thank you for your help in testing and have fun with it.

[1]: http://goo.gl/XwH7J (CHANGES.txt)
[2]: http://goo.gl/JocLX (NEWS.txt)
[3]: https://issues.apache.org/jira/browse/CASSANDRA
[4]: user@cassandra.apache.org
[5]: http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/cassandra-1.1.0-rc1
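On the "Map failed" question earlier in this thread: java.lang.OutOfMemoryError with the message "Map failed" from FileChannel.map typically means the process ran out of *virtual address space* for the mapping, not Java heap or physical RAM — a classic symptom on a 32-bit JVM on Windows, since 1.1's commit log allocator pre-allocates segment files and mmaps them. A minimal Python sketch of the same operation (the segment size here is illustrative, smaller than a real commit log segment):

```python
import mmap
import os
import tempfile

SEGMENT_SIZE = 8 * 1024 * 1024  # illustrative; real segments are larger

fd, path = tempfile.mkstemp(suffix="-CommitLog.log")
try:
    os.ftruncate(fd, SEGMENT_SIZE)     # pre-size the segment file on disk
    buf = mmap.mmap(fd, SEGMENT_SIZE)  # the step where "Map failed" surfaces
    buf[0:8] = b"\x00" * 8             # mutation writes go through the mapping
    buf.close()                        # unmap, releasing the address space
finally:
    os.close(fd)
    os.unlink(path)

print("mapped, wrote to, and released one commit log segment")
```

Each live segment consumes its full size in contiguous address space, so on a 32-bit process a handful of large mappings plus the heap can exhaust the ~2 GB user address space even with plenty of free RAM. A 64-bit JVM is the usual remedy.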
java.nio.BufferOverflowException from cassandra server
Hi, I have set up a 4-node cassandra cluster. I am using the Thrift C++ API to write a simple C++ application which generates a 50% READ / 50% WRITE request mix. Every time, at around the thousand-request mark, I get the following exception and my connection is broken:

ERROR 17:30:27,647 Error occurred during processing of message.
java.nio.BufferOverflowException
    at java.nio.charset.CoderResult.throwException(Unknown Source)
    at java.lang.StringCoding$StringEncoder.encode(Unknown Source)
    at java.lang.StringCoding.encode(Unknown Source)
    at java.lang.String.getBytes(Unknown Source)
    at org.apache.thrift.protocol.TBinaryProtocol.writeString(TBinaryProtocol.java:185)
    at org.apache.thrift.protocol.TBinaryProtocol.writeMessageBegin(TBinaryProtocol.java:92)
    at org.apache.cassandra.thrift.Cassandra$Processor$insert.process(Cassandra.java:3302)
    at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
    at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)

Some info about the config I am using:
- It is a 4-node cluster with only 1 seed.
- The consistency level is set to ONE.
- The max heap size and new heap size are set to 4G and 800M (I tried without setting them as well).
- Java is run in interpreted mode (-Xint).
- I'm using User-Mode Linux.

Any pointers to what I might be doing wrong will be very helpful.

Thanks in advance,
Aniket
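One frequent cause of server-side protocol corruption like the BufferOverflowException above is several client threads sharing a single Thrift transport/protocol object: Thrift connections are not thread-safe, and interleaved writes desynchronize the stream until the server misparses a frame. (Mismatched framed/unframed transports between client and server produce similar symptoms.) A hedged sketch of the usual fix, one connection per thread via thread-local storage; make_connection is a stand-in for the real TSocket/TFramedTransport/client setup, shown in Python for brevity even though the original client is C++:

```python
import threading

_local = threading.local()

def make_connection():
    # stand-in for the real setup: open a TSocket, wrap it in a
    # TFramedTransport, and construct the generated Cassandra client
    return object()

def get_client():
    # each thread lazily opens, then reuses, its own private connection,
    # so no two threads ever interleave writes on one protocol stream
    if not hasattr(_local, "client"):
        _local.client = make_connection()
    return _local.client

# the calling thread always gets the same connection back
assert get_client() is get_client()
print("per-thread Thrift client pattern ready")
```

The same pattern in C++ would keep one client per worker thread (e.g. via thread_local) instead of passing a shared client around.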
Is the secondary index re-built under compaction?
I noticed that nodetool compactionstats shows the secondary index being built when I initiate compaction. Is this to be expected? Cassandra version 0.8.8.

Thank you
Maxim