Cleanup would have the same effect, I think, in exchange for a minor
amount of extra CPU used.
On Mon, Oct 31, 2011 at 4:08 AM, Sylvain Lebresne wrote:
> …
On Mon, 2011-10-31 at 10:08 +0100, Sylvain Lebresne wrote:
> you can
> trigger a "user defined compaction" through JMX on each of the sstables
> you want to rebuild.

May I ask how?
Everything I see from NodeProbe to StorageProxy is ks and cf based.

~mck

--
“Anyone who lives within their means suffers from a lack of imagination.”
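For anyone reading this in the archives later, here is a minimal sketch of what such a JMX call can look like from Java. The CompactionManager MBean name is the standard one, but the operation name and its argument list differ between Cassandra versions, and the keyspace and sstable file names are placeholders, so verify the operations your build actually exposes (for example with jconsole) before relying on this:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ForceCompaction {
        public static void main(String[] args) throws Exception {
            // 7199 is the usual Cassandra JMX port; the host is a placeholder.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
            JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
            try {
                MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
                ObjectName compactionManager =
                        new ObjectName("org.apache.cassandra.db:type=CompactionManager");

                // Ask the CompactionManager to rewrite specific sstables.
                // NOTE: the operation name and arguments below are assumptions;
                // check the MBean operations exposed by your Cassandra version.
                mbs.invoke(compactionManager,
                        "forceUserDefinedCompaction",
                        new Object[] { "MyKeyspace", "MyCF-hc-1234-Data.db" },
                        new String[] { String.class.getName(), String.class.getName() });
            } finally {
                jmxc.close();
            }
        }
    }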
On Mon, 2011-10-31 at 10:08 +0100, Sylvain Lebresne wrote:
> >> I set chunk_length_kb to 16 as my rows are very skinny (typically 100b)
> >
> > I see now this was a bad choice.
> > The read pattern of these rows is always in bulk, so the chunk_length
> > could have been much higher, so as to reduce …
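To put assumed, illustrative numbers on why the chunk length matters here: CompressionMetadata keeps roughly one 8-byte offset per compression chunk, so with chunk_length_kb = 16 a column family holding about 100 GB of data implies on the order of 100 GB / 16 KB ≈ 6.5 million chunk offsets, i.e. roughly 50 MB of offset metadata, while a 64 KB chunk length would cut that by a factor of four. (These figures are chosen only to show the scaling; they are not numbers from this thread.)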
On Mon, 2011-10-31 at 08:00 +0100, Mick Semb Wever wrote:
> After an upgrade to cassandra-1.0 any get_range_slices gives me:
>
> java.lang.OutOfMemoryError: Java heap space
>     at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:93)
>     at …
Ah, got it, thanks!

On Thu, Aug 4, 2011 at 2:59 PM, Robert Jackson wrote:
> You should be able to specify the count parameter to your KeyRange and just
> use the last key returned as your start_key for the next "page".
You should be able to specify the count parameter to your KeyRange and just use
the last key returned as your start_key for the next "page".

Sent from my iPhone

On Aug 4, 2011, at 5:00 PM, Yang wrote:
> our keyspace is really not that big,
> about 1 million rows, each about 500 bytes
>
> but …
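As an illustration of that advice, here is a sketch of the paging loop using the Hector client. The keyspace, the "MyCF" column family name, and the string serializers are assumptions; and since start_key is inclusive, each page after the first skips the row it already processed:

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.beans.OrderedRows;
    import me.prettyprint.hector.api.beans.Row;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.query.RangeSlicesQuery;

    public class RangePager {
        private static final StringSerializer SE = StringSerializer.get();

        // Walk every row of "MyCF" (placeholder name) in pages of `pageSize`.
        public static void pageAllRows(Keyspace keyspace, int pageSize) {
            String startKey = "";
            boolean firstPage = true;
            while (true) {
                RangeSlicesQuery<String, String, String> query =
                        HFactory.createRangeSlicesQuery(keyspace, SE, SE, SE)
                                .setColumnFamily("MyCF")
                                .setKeys(startKey, "")        // start_key .. end_key
                                .setRange("", "", false, 100) // up to 100 columns per row
                                .setRowCount(pageSize);       // the KeyRange count

                OrderedRows<String, String, String> rows = query.execute().get();

                for (Row<String, String, String> row : rows) {
                    // start_key is inclusive: skip the row seen on the previous page.
                    if (!firstPage && row.getKey().equals(startKey)) {
                        continue;
                    }
                    process(row);
                }

                if (rows.getCount() < pageSize) {
                    break;                                    // last page reached
                }
                startKey = rows.peekLast().getKey();          // last key -> next start_key
                firstPage = false;
            }
        }

        private static void process(Row<String, String, String> row) {
            System.out.println(row.getKey());
        }
    }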
Sent: … 2011, 12:40
Subject: Re: Re: Re: get_range_slices result

First thing is you really should upgrade from 0.6; the current release is 0.8.

Info on time uuids:
http://wiki.apache.org/cassandra/FAQ#working_with_timeuuid_in_java

If you are using a higher level client like Hector or Pelops it will …
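For completeness, a sketch along the lines of that FAQ entry: generate a version 1 (time-based) UUID and serialize it into the 16 raw bytes a TimeUUIDType comparator expects. The com.eaio.uuid generator used below is an assumption (any time-based UUID generator works); sending the UUID's string form instead of these 16 bytes is a common cause of the kind of serialization error reported in this thread.

    import java.nio.ByteBuffer;
    import java.util.UUID;

    public class TimeUuidExample {

        // Create a version 1 (time-based) UUID. Assumes the com.eaio.uuid
        // library is on the classpath; other generators work the same way.
        public static UUID getTimeUUID() {
            return UUID.fromString(new com.eaio.uuid.UUID().toString());
        }

        // Serialize the UUID into the 16 raw bytes that a column name under a
        // TimeUUIDType comparator must contain.
        public static ByteBuffer toByteBuffer(UUID uuid) {
            ByteBuffer bb = ByteBuffer.wrap(new byte[16]);
            bb.putLong(uuid.getMostSignificantBits());
            bb.putLong(uuid.getLeastSignificantBits());
            bb.rewind();
            return bb;
        }
    }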
> To: "user@cassandra.apache.org"
> Sent: Monday, June 27, 2011 5:59 PM
> Subject: Re: Re: get_range_slices result
>
> I used TimeUUIDType as the type in the storage-conf.xml file,
> and I used it as the comparator in my java code,
> but at execution I get an exception:
> Error --java.io.U…
Can I have an example of using TimeUUIDType as a comparator in client
java code?

From: karim abbouh
To: "user@cassandra.apache.org"
Sent: Monday, June 27, 2011 5:59 PM
Subject: Re: Re: get_range_slices result

I used TimeUUIDType as the type …
To: user@cassandra.apache.org
Cc: karim abbouh
Sent: Friday, June 24, 2011 11:25 AM
Subject: Re: Re: get_range_slices result

You can get the best of both worlds by repeating the key in a column,
and creating a secondary index on that column.

On Fri, Jun 24, 2011 at 1:16 PM, Sylvain Lebresne wrote:
> …
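A sketch of that suggestion with the Hector client: every insert also writes the row key into a regular column, and the schema declares a secondary index on that column (done separately, e.g. in cassandra-cli). The keyspace, the "MyCF" column family, and the "data"/"row_key" column names are placeholders:

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.mutation.Mutator;

    public class KeyAsColumn {
        private static final StringSerializer SE = StringSerializer.get();

        // Insert a row into "MyCF" and duplicate its key into a "row_key"
        // column, which the schema indexes with a secondary index.
        public static void insert(Keyspace keyspace, String rowKey, String value) {
            Mutator<String> mutator = HFactory.createMutator(keyspace, SE);
            mutator.addInsertion(rowKey, "MyCF",
                    HFactory.createStringColumn("data", value));
            mutator.addInsertion(rowKey, "MyCF",
                    HFactory.createStringColumn("row_key", rowKey));
            mutator.execute();
        }
    }

Reads then go through the secondary-index path (get_indexed_slices, or Hector's IndexedSlicesQuery) instead of relying on key order from get_range_slices.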
… very often not too hard and more
efficient, and much simpler than dealing with the load balancing
problems of OrderPreservingPartitioner.

--
Sylvain

> From: aaron morton
> To: user@cassandra.apache.org
> Sent: Thursday, June 23, 2011 8:30 PM …
I want the get_range_slices() function to return records sorted (ordered) by the
key (row id) used during the insertion.
Is it possible?

From: aaron morton
To: user@cassandra.apache.org
Sent: Thursday, June 23, 2011 8:30 PM
Subject: Re: get_range_slices result

Not sure …
Not sure what your question is.
Does this help? http://wiki.apache.org/cassandra/FAQ#range_rp

Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 23 Jun 2011, at 21:59, karim abbouh wrote:
> how can get_range_slices() function return …
Nvm. Found the answer in the FAQ :P It is normal.

Thx,
Jason

On Fri, Mar 25, 2011 at 1:24 AM, Jason Harvey wrote:
> I am running a get_range_slices on one of my larger CFs. I am then
> running a 'get' call on each of those keys. I have run into 50 or so
> keys that were returned in the range, but …
What are you using for the SlicePredicate with get_range_slices()? What sort
of performance are you getting for each request (client and server side)?

Even if you are asking for zero columns, there is still a lot of work to be
done when performing a range scan, e.g. each SSTable must be checked …
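For reference, the "zero columns" case mentioned above looks roughly like this with the Thrift-generated classes (a keys-only predicate: empty start/finish buffers and a count of 0); even then the scan still pays the per-sstable cost described above:

    import java.nio.ByteBuffer;

    import org.apache.cassandra.thrift.SlicePredicate;
    import org.apache.cassandra.thrift.SliceRange;

    public class KeysOnlyPredicate {
        // A SlicePredicate that returns zero columns per row, useful for
        // "keys only" range scans.
        public static SlicePredicate keysOnly() {
            ByteBuffer empty = ByteBuffer.wrap(new byte[0]);
            SlicePredicate predicate = new SlicePredicate();
            predicate.setSlice_range(new SliceRange(empty, empty, false, 0));
            return predicate;
        }
    }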
You can't create a row with no columns without tombstones being
involved somehow. :)

There's no distinction between "a row with no columns because the
individual columns were removed" and "a row with no columns because
the row was removed." The latter is just a more efficient expression
of the former.
No, checking the key will not do.
You will need to check if row.getColumnSlice().getColumns() is empty or not.
That's what I do and it works for me.

On Wed, Jan 26, 2011 at 4:53 AM, Nick Santini wrote:
> thanks,
> so I need to check the returned slice for the key to verify that it is a valid
> row and not a deleted one?
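A small sketch of that getColumnSlice().getColumns() check against Hector's result objects (string keys and columns assumed):

    import java.util.ArrayList;
    import java.util.List;

    import me.prettyprint.hector.api.beans.OrderedRows;
    import me.prettyprint.hector.api.beans.Row;

    public class GhostFilter {
        // Drop "range ghosts": rows returned by get_range_slices whose column
        // slice is empty because the row was deleted (only a tombstone remains).
        public static List<Row<String, String, String>> liveRows(
                OrderedRows<String, String, String> rows) {
            List<Row<String, String, String>> live =
                    new ArrayList<Row<String, String, String>>();
            for (Row<String, String, String> row : rows) {
                if (!row.getColumnSlice().getColumns().isEmpty()) {
                    live.add(row);
                }
            }
            return live;
        }
    }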
Thanks,
so I need to check the returned slice for the key to verify that it is a valid
row and not a deleted one?

Nicolas Santini

On Wed, Jan 26, 2011 at 12:16 PM, Narendra Sharma wrote:
> Yes. See this: http://wiki.apache.org/cassandra/FAQ#range_ghosts
>
> -Naren
>
> On Tue, Jan 25, 2011 at 2:59 PM, Nick Santini wrote: …
Yes. See this: http://wiki.apache.org/cassandra/FAQ#range_ghosts

-Naren

On Tue, Jan 25, 2011 at 2:59 PM, Nick Santini wrote:
> Hi,
> I'm trying a test scenario where I create 100 rows in a CF, then
> use get_range_slices to get all the rows, and I get 100 rows; so far so good.
> Then after the test …
Looks like the patch that introduced that bug was added in 0.6.6 and wasn't
fixed until 0.6.8, so yes, I'd say that is your problem with get_range_slices.
Is there a reason you can't upgrade?

For nodetool ring: if every node in your cluster is not showing one of the
nodes in the ring, then that node …
Is this https://issues.apache.org/jira/browse/CASSANDRA-1722 related?
From: Rajat Chopra [mailto:rcho...@makara.com]
Sent: Wednesday, December 15, 2010 9:45 PM
To: user@cassandra.apache.org
Subject: get_range_slices does not work properly
Hi!
Using v0.6.6, I have a 16 node cluster.
One …
Was anyone able to reproduce this bug?

On Wed, Oct 6, 2010 at 6:19 PM, Jianing Hu wrote:
> I'm seeing cases where the count in the slicerange predicate is not
> respected. This is only happening for super columns. I'm running
> Cassandra 0.6.4 on a single node.
>
> Steps to reproduce, using the Keyspace …
This is a bug. If you can give us data to reproduce with, we can fix it faster.

On Wed, Jul 14, 2010 at 10:29 AM, shimi wrote:
> I wrote code that iterates over all the rows using get_range_slices.
> For the first call I use a KeyRange from "" to "".
> For all the others I use from <the last key of the previous iteration> to …
FYI: https://issues.apache.org/jira/browse/CASSANDRA-1145

Yes, it's a bug. CL.ONE is a reasonable workaround.

On Thu, Jul 8, 2010 at 11:04 PM, Mike Malone wrote:
> I think the answer to your question is no, you shouldn't.
> …

I think the answer to your question is no, you shouldn't.
I'm feeling far too lazy to do even light research on the topic, but I
remember there being a bug where replicas weren't consolidated and you'd get
a result set that included data from each replica that was consulted for a
query. That could …
> … streaming
> data to 192.168.1.107 since they are holding the replicated data for that
> range.
>
> 3. nodetool repair ?
>
> On Tue, Jun 22, 2010 at 12:03 AM, Benjamin Black wrote:
> > …

On Mon, Jun 21, 2010 at 7:02 PM, Joost Ouwerkerk wrote:
> I believe we did nodetool removetoken on nodes that were already down (due
> to hardware failure), but I will check to make sure. We're running Cassandra
> 0.6.2.
>
> On Mon, Jun 21, 2010 at 9:59 PM, Joost Ouwerkerk wrote:
>> Greg, can you describe the steps we took to decommission the nodes?
>>
>> -- Forwarded message --
>> From: Rob Coli
>> Date: Mon, Jun 21, 2010 at 8:08 PM
>> Subject: Re: get_range_slices confused about token ranges after
>> decommissioning a node
>> To: user@cassandra.apache.org
On 6/21/10 4:57 PM, Joost Ouwerkerk wrote:
> We're seeing very strange behaviour after decommissioning a node: when
> requesting a get_range_slices with a KeyRange by token, we are getting
> back tokens that are out of range.

What sequence of actions did you take to "decommission" the node? What
version …
We haven't gotten around to implementing this yet, and so far no one has needed
it badly enough to write it.
We accept contributions or forks and we use github, so feel free to DIY
(forks are preferable). http://github.com/rantav/hector

On Tue, Apr 20, 2010 at 3:25 AM, Chris Dean wrote:
> Ok, thanks. …
Ok, thanks.
Cheers,
Chris Dean
Nathan McCall writes:
> Not yet. If you wanted to provide a patch that would be much
> appreciated. A fork and pull request would be best logistically, but
> whatever works.
>
> -Nate
>
> On Mon, Apr 19, 2010 at 5:10 PM, Chris Dean wrote:
>> Is there a version of hector that has an interface to get_range_slices ? …
Not yet. If you wanted to provide a patch that would be much
appreciated. A fork and pull request would be best logistically, but
whatever works.
-Nate
On Mon, Apr 19, 2010 at 5:10 PM, Chris Dean wrote:
> Is there a version of hector that has an interface to get_range_slices ?
> or should I provide …