Jira should be left for issues that you have some confidence are bugs in
Cassandra, or items you want as feature requests.
For general questions, try the Cassandra mailing lists:
user@cassandra.apache.org (to subscribe, mail
user-subscr...@cassandra.apache.org)
or use IRC: #cassandra on Freenode
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
From: Dave Brosius [mailto:dbros...@mebigfatguy.com]
Sent: Tuesday, March 31, 2015 8:46 PM
To: user@cassandra.apache.org
Subject: Re
Is there an 'initial cause' listed under that exception you gave?
NoClassDefFoundError is not exactly the same as ClassNotFoundException.
It means that ColumnMapper couldn't run its static initializer;
it could be because some other class couldn't be found, or it could be
some other
As you point out, there's not really a node-based problem with your
query from a performance point of view. This is a limitation of CQL in
that CQL wants to slice one section of a partition's row (no matter how
big the section is). In your case, you are asking to slice multiple
sections of a
The method
com.google.common.collect.Sets.newConcurrentHashSet()Ljava/util/Set;
should be available in guava from 15.0 on. So guava-16.0 should be fine.
Is it possible guava is being picked up from somewhere else? Do you have a
global classpath variable?
you might want to do
URL u =
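The truncated snippet above presumably refers to asking the classloader where it found a class. A minimal sketch of that trick (the class/resource used here is just an illustration, not from the original mail):

```java
import java.net.URL;

public class WhereIsClass {
    public static void main(String[] args) {
        // Ask the classloader where it found a class. For the guava case in
        // the thread you would pass "com/google/common/collect/Sets.class";
        // java/util/Set.class is used here so the sketch runs stand-alone.
        URL u = WhereIsClass.class.getClassLoader()
                .getResource("java/util/Set.class");
        System.out.println(u);
    }
}
```

The printed URL shows which jar (or JDK image) actually supplied the class, which settles "where is guava coming from" questions quickly.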
added, thanks.
On 08/18/2014 06:15 AM, Otis Gospodnetic wrote:
Hi,
What is the state of Cassandra Wiki -- http://wiki.apache.org/cassandra ?
I tried to update a few pages, but it looks like pages are immutable.
Do I need to have my Wiki username (OtisGospodnetic) added to some ACL?
We had a massive spam problem before we locked down the wiki, so
unfortunately that was the choice we had to make. But as stated we can
add you to the contributors list.
What is your Wiki user name?
On 2014-07-23 07:33, Peter Lin wrote:
I've tried to contribute docs to Cassandra wiki in
The question assumes that it's likely that datastax employees become
committers.
Actually, it's more likely that committers become datastax employees.
So this underlying tone that datastax only really 'wants' datastax
employees to be cassandra committers is really misleading.
Why wouldn't
What Colin is saying is that the tool you used to create the token is
not creating tokens usable for the Murmur3Partitioner. That tool is
probably generating tokens for the (original) RandomPartitioner, which
has a different range.
On 05/17/2014 07:20 PM, Tim Dunphy wrote:
Hi and thanks
For now you can edit the nodetool script itself by adding
-Duser.home=/tmp
as in
$JAVA $JAVA_AGENT -cp $CLASSPATH \
  -Xmx32m \
  -Duser.home=/tmp \
  -Dlogback.configurationFile=logback-tools.xml \
  -Dstorage-config=$CASSANDRA_CONF \
  org.apache.cassandra.tools.NodeTool -p $JMX_PORT $ARGS
if
In the meantime you can try upping the value of your -Xss setting in
cassandra-env.sh to see if just a little push will take the problem away.
On 01/10/2014 10:18 AM, Дмитрий Шохов wrote:
https://issues.apache.org/jira/browse/CASSANDRA-6567
Thank you!
2014/1/10 Benedict Elliott Smith
Just send that email to user-unsubscribe@cassandra.apache.org. If still confused,
check here: http://hadonejob.com/img/full/12598654.jpg - Original Message
- From: "Earl Ruby" <er...@webcdr.com>
Realize that there will be more and more new features that come along as
cassandra matures. It is an overwhelming certainty that these features will be
available through the new native interface & CQL. The same level of certainty
can't be given to Thrift. Certainly if you have existing
Not really a cassandra question, but it would seem your xml file isn't
particularly well designed. It would seem you need to qualify your
test entries with indices when put in the map, such as
put("test.1.C", 0);
put("test.2.C", 50);
before figuring out the cassandra angle, I'd rethink
BoundStatement query = prBatchInsert.bind(userId,
attributes.values().toArray(new String[attributes.size()]));
On 12/07/2013 03:59 PM, Techy Teck wrote:
I am trying to insert into a Cassandra database using the Datastax Java
driver. But every time I am getting the below exception at
Please send that same riveting text to user-unsubscr...@cassandra.apache.org
http://tinyurl.com/kdrwyrc
On 10/30/2013 02:49 PM, Leonid Ilyevsky wrote:
Unsubscribe
each node would forward the write request to the node responsible for
holding that key (determined by the hash function)
On 10/26/2013 09:25 PM, Mohammad Hajjat wrote:
Hi,
Quick question about Cassandra.
If I write the same key (with two different values) to two different
nodes with consistency
Unfortunately, as tech books tend to be, it's quite a bit out of date
at this point.
On 10/27/2013 09:54 PM, Mohan L wrote:
On Sun, Oct 27, 2013 at 9:57 PM, Erwin Karbasi <er...@optinity.com> wrote:
Hey Guys,
What is the best book to learn Cassandra
The explanation for composite columns is muddied by verbiage depending on
whether you are talking about the thrift interface, which tends to talk
about things in low-level terms, or CQL, which tends to talk about things in
higher-level terms.
At a thrift/low level, a composite column, really now
Cassandra 2.0 needs to run on JDK 7
On 09/17/2013 11:21 PM, Gary Zhao wrote:
Hello
I just saw this error. Anyone knows how to fix it?
[root@gary-vm1 apache-cassandra-2.0.0]# bin/cassandra -f
xss = -ea -javaagent:bin/../lib/jamm-0.2.5.jar
-XX:+UseThreadPriorities
I think your class is missing a required
public TypeSerializer<Void> getSerializer() {}
method.
This is what you need to derive from
What is your -Xss set to? If it's below 256k, set it there, and see if you
still have the issues. - Original Message - From: "Julio
Quierati" <julio.quier...@gmail.com>
It seems to me that isExistingUser should be pushed down to the
IAuthenticator implementation.
Perhaps you should add a ticket to
https://issues.apache.org/jira/browse/CASSANDRA
On 06/17/2013 05:12 PM, Bao Le wrote:
Hi,
We have a custom authenticator that works well with Cassandra
You sent an email to user-unsubscr...@cassandra.apache.org from the
email address you used, and it didn't unsubscribe you? Did you get the
'are you sure' email? Did you check your spam folder?
see
http://cassandra.apache.org/
http://hadonejob.com/img/70907344.jpg
On 06/10/2013 10:46 AM,
what version of netty is on your classpath?
On 05/16/2013 07:33 PM, aaron morton wrote:
Try the IRC room for the java driver or submit a ticket on the JIRA
system, see the links here https://github.com/datastax/java-driver
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
if you want to store all the roles in one row, you can do
create table roles (synthetic_key int, name text, primary
key(synthetic_key, name)) with compact storage
when inserting roles, just use the same key
insert into roles (synthetic_key, name) values (0, 'Programmer');
insert into roles
getColumnDefinitions only returns metadata; to get the data, use the
iterator to navigate the rows
Iterator<Row> it = result.iterator();
while (it.hasNext()) {
Row r = it.next();
//do stuff with row
}
On 04/21/2013 12:02 AM, Techy Teck wrote:
I am working with Datastax java-driver.
is the read and write happening on the same thread?
On 03/10/2013 12:00 PM, André Cruz wrote:
Hello.
In my application it sometimes happens that I execute a multiget (I use
pycassa) to fetch data that I have just inserted. I use quorum writes and
reads, and my RF is 3.
I've noticed that
On 02/17/2013 01:26 PM, puneet loya wrote:
unsubscribe me please.
Thank you
if only directions were followed:
http://hadonejob.com/images/full/102.jpg
send to
user-unsubscr...@cassandra.apache.org
see https://issues.apache.org/jira/browse/CASSANDRA-5201
On 02/15/2013 10:05 PM, Yang Song wrote:
Hi,
Does anyone use CDH4's Hadoop with Cassandra to interact? The goal is
simply to read/write to Cassandra from Hadoop directly using
ColumnFamilyInput(Output)Format, but it seems a bit
An exception occurred on the server, check the logs for the details of
what happened, and post back here.
On 02/07/2013 11:04 PM, Adam Venturella wrote:
Has anyone encountered this before?
What did I most likely break or how do I fix it?
xss = -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities
-XX:ThreadPriorityPolicy=42 -Xms1005M -Xmx1005M -Xmn200M
-XX:+HeapDumpOnOutOfMemoryError -Xss180k
That is not an error, that is just 'debugging' information output to the
command line. - Original Message - From:
This part, ERROR 13:39:24,456 Cannot open
/var/lib/cassandra/data/system/Schema/system-Schema-hd-5; partitioner
org.apache.cassandra.dht.RandomPartitioner does not match system partitioner
org.apache.cassandra.dht.Murmur3Partitioner. Note that the default partitioner
starting with Cassandra
If querying by a date inequality is an important access paradigm, you
probably want a column that represents some time bucket (a month?) and
have that column be part of the cql primary key. Thus when a query is
requested you can make c* happy by specifying a date bucket to pick the
c* row and
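As a sketch of the bucketing idea (the month-granularity format here is an assumption for illustration, not from the thread), deriving the bucket value that would go into the primary key might look like:

```java
import java.time.LocalDate;
import java.time.YearMonth;

public class DateBucket {
    // Hypothetical helper: compute the time bucket for a dated item.
    // The query then names the bucket(s) explicitly, which pins the
    // partition, and applies the date inequality within it.
    static String bucket(LocalDate d) {
        return YearMonth.from(d).toString(); // e.g. "2013-01"
    }

    public static void main(String[] args) {
        System.out.println(bucket(LocalDate.of(2013, 1, 15))); // 2013-01
    }
}
```

A range query spanning several months would then issue one query per bucket (or an IN over the buckets), each of which Cassandra can serve as a single-partition slice.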
The statements used to create and populate the data might be mildly useful for
those trying to help. - Original Message - From: "Kuldeep
Mishra" <kuld.cs.mis...@gmail.com>
the format has changed, check the help in cqlsh
CREATE KEYSPACE Test WITH replication = {'class':'SimpleStrategy',
'replication_factor':1};
On 12/29/2012 04:27 PM, Adam Venturella wrote:
When I create a keyspace with a SimpleStrategy as outlined here:
I swapped in hadoop-core-1.0.3.jar and rebuilt cassandra without
issues. What problems were you having?
On 09/21/2012 07:40 PM, Juan Valencia wrote:
I can't seem to get Bulk Loading to Work in newer versions of Hadoop.
since they switched JobContext from a class to an interface
You lose
You'd need to make n queries, or do a superset query from min;-
If I understand you correctly, you are only ever querying for the rows
where is_exported = false, and turning them into trues. What this means
is that eventually you will have 1 row in the secondary index table with
350K columns that you will never look at.
It seems to me that perhaps you
Are you using multiple client threads?
You might want to try the stress tool in the distribution.
On 08/19/2012 02:09 PM, Peter Morris wrote:
Hi all
I have a Windows 7 machine (64 bit) with DataStax community server
installed. Running a benchmark app on the server gives me 7000
inserts
When data is first written it remains in memory until that memory is
flushed. After the data is only on disk, it remains there until a read
for that row-key/column is requested, so in essence it's always load on
demand.
Currently there is no support for async notifications of changes.
There is a second (system managed) column family for each secondary
index, so any write to a field that is indexed causes two writes, one to
the main column family, and another to the index column family, where in
this index column family the key is the value of the secondary column,
and the
Quorum is defined as
(replication_factor / 2) + 1
therefore quorum when rf = 2 is 2! So in your case, both nodes must be up.
Really, using Quorum only starts making sense as a 'quorum' when RF=3.
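The arithmetic above is just integer division; a two-line check (plain Java, not the Cassandra API):

```java
public class QuorumMath {
    // quorum = (replication_factor / 2) + 1, using integer division
    static int quorum(int rf) {
        return rf / 2 + 1;
    }

    public static void main(String[] args) {
        System.out.println(quorum(2)); // 2 -> QUORUM at RF=2 needs every replica up
        System.out.println(quorum(3)); // 2 -> RF=3 tolerates one replica down
    }
}
```

Note that quorum(2) == 2 == RF, which is exactly why QUORUM at RF=2 behaves like ALL.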
On 07/26/2012 10:38 PM, Yan Chunlu wrote:
I am using Cassandra 1.0.2, have a 3 nodes
You have RF=2, CL=Quorum, but 3 nodes. So each row is represented on 2 of the 3
nodes. If you take a node down, one of two things can happen when you attempt to
read a row. The row lives on the two nodes that are still up; in this case you
will successfully read the data. The row lives on one node
Cassandra doesn't do reads before writes. It just places the updates in
memtables. In effect updates are the same as inserts. Batches certainly help
with network latency, and some minor amount of code repetition on the server
side. - Original Message - From: "Leonid Ilyevsky"
On 07/13/2012 08:00 PM, Michael Theroux wrote:
Hello,
I've been trying to understand in greater detail how SStables are stored, and
how information is transferred between Cassandra nodes, especially when a new
node is joining a cluster.
Specifically, is information stored to SStables ordered
While in memory cassandra calls it a MemTable, but yes, sstables are
write-once, and later combined with others into new ones through compaction.
On 07/13/2012 09:54 PM, Michael Theroux wrote:
Thanks for the information,
So is the SStable essentially kept in memory, then sorted and written to
BTW, an issue was just fixed with dynamic columns in hector, you might
want to try trunk.
https://github.com/hector-client/hector/commit/2910b484629add683f61f392553e824c291fb6eb
On 07/12/2012 06:25 PM, aaron morton wrote:
You may have better luck on the Hector Mailing list…
If i read what you are saying, you are _not_ using composite keys?
That's one thing that could do it, if the first part of the composite
key had a very very low cardinality.
On 06/24/2012 11:00 AM, Safdar Kureishy wrote:
Hi,
I've searched online but was unable to find any leads for the
On 06/22/2012 03:57 AM, Jeff Williams wrote:
Hi,
It doesn't look like this is possible, but can I select all rows missing a certain
column? The equivalent of select * where col is null in SQL.
Regards,
Jeff
remember that there really is no such thing as a row, just arbitrary
columns
Column values are limited to 2GB. Why store them as Base64? That just adds
overhead. Storing the raw bytes will save you a bunch. - Original Message
- From: "Cyril Auburtin" <cyril.aubur...@gmail.com>
One of the column names on the row with key 353339332d3134363533393931
failed to validate with the validator for the column.
If you really are after what column is problematic, and are able to
build and run cassandra, you can add debugging info to Column.java
protected void
You can create composite columns on the fly.
On 06/13/2012 09:58 PM, Greg Fausak wrote:
That's a good question. I just went to a class, Ben was saying that
any action on a super column requires de-re-serialization. But, it
would be nice if a write had this sort of efficiency.
I have been
Via thrift, or a high level client on thrift, see as an example
http://www.datastax.com/dev/blog/introduction-to-composite-columns-part-1
On 06/13/2012 11:08 PM, Greg Fausak wrote:
Interesting.
How do you do it?
I have a version 2 CF, that works fine.
A version 3 table won't let me invent
What version of Cassandra?
might be related to https://issues.apache.org/jira/browse/CASSANDRA-4098
On 06/11/2012 12:07 AM, Prakrati Agrawal wrote:
Sorry
I ran list columnFamilyName; and it threw this error.
Thanks and Regards
Prakrati
From: aaron morton
What version are you using?
It might be related to https://issues.apache.org/jira/browse/CASSANDRA-4052
On 05/25/2012 07:32 AM, Victor Blaga wrote:
Hi all,
This is my first message on this posting list so I'm sorry if I am
breaking any rules. I just wanted to report some sort of a problem
On 05/21/2012 02:44 AM, Qingyan(Evan) Liu wrote:
send to user-unsubscr...@cassandra.apache.org
On 05/17/2012 09:49 PM, casablinca126.com wrote:
unsubscribe
send that message to
user-unsubscr...@cassandra.apache.org
Might be related to
https://issues.apache.org/jira/browse/CASSANDRA-3794
On 05/16/2012 08:12 AM, Christoph Eberhardt wrote:
Hi there,
I upgraded cassandra from 1.0.8 to 1.1.0. It seemed to work in the first
place, all seemed to work fine. So I started upgrading the rest of the cluster
You're in for a world of hurt going down that rabbit hole. If you truly
want versioned data then you should think about changing your keying to
perhaps be a composite key, where the key is of the form
NaturalKey/VersionId
Or if you want the versioning at the column level, use composite columns
with
tracking issue here: https://issues.apache.org/jira/browse/CASSANDRA-4251
might be related to: https://issues.apache.org/jira/browse/CASSANDRA-3794
Each index you define on the source CF is created using an internal CF
that has as its key the value of the column it's indexing, and as its
columns, all the keys of all the rows in the source CF that have that
value. So if all your rows in your source CF have the same value, then
your index
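A toy model (plain Java maps, not the Cassandra API) of why a low-cardinality indexed value degenerates into one huge index row:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class IndexShape {
    // Toy model of the internal index CF described above:
    // key = indexed column value, columns = keys of the matching source rows.
    static Map<String, Set<String>> buildIndex(int rows, String sameValue) {
        Map<String, Set<String>> index = new HashMap<>();
        for (int i = 0; i < rows; i++) {
            index.computeIfAbsent(sameValue, k -> new TreeSet<>()).add("rowKey" + i);
        }
        return index;
    }

    public static void main(String[] args) {
        // Every source row carries the same indexed value, e.g. is_exported=false
        Map<String, Set<String>> index = buildIndex(5, "false");
        System.out.println(index.size());               // 1: a single, ever-growing index row
        System.out.println(index.get("false").size());  // 5: one column per source row
    }
}
```

With distinct indexed values the map would instead spread the same row keys over many small index rows, which is the shape secondary indexes are designed for.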
The replication factor for a keyspace is stored in the
system.schema_keyspaces column family.
Since you can't view this with the cli as the server won't start, the only
way to look at it that I know of is to use the
sstable2json tool on the *.db file for that column family...
So for instance
This could be accomplished with a custom 'CaseInsensitiveUTF8Type'
comparator to be used as the comparator for that column family. This
would require adding a class of your writing to the server.
On 05/14/2012 07:26 AM, Ertio Lew wrote:
I need to make a search by names index using entity
it can be in a separate jar with just one class.
On 05/15/2012 12:29 AM, Ertio Lew wrote:
Can I put this comparator class in a separate new jar(with just this
single file) or is it to be appended to the original jar along with
the other comparator classes?
On Tue, May 15, 2012 at 12:22 AM,
The only way you could get the old value for a column would be to insert
the column value, then flush, then insert the new column, then before
compaction look at the old sstable.
If you insert the value twice in a row without a flush, the old value is
gone, as it only exists in memtables (and
Inequalities on secondary indices are always done in memory, so without
at least one EQ on another secondary index you will be loading every row
in the database, which with a massive database isn't a good idea. So by
requiring at least one EQ on an index, you hopefully limit the set of
rows
If you read at Consistency of at least quorum, you are guaranteed that
at least one of the nodes has the latest data, and so you get the right
data. If you read with less than quorum it would be possible for all the
nodes that respond to have stale data.
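The guarantee comes from set overlap: if reads and writes each contact a quorum of replicas, the two sets must share at least one replica. A minimal check (plain Java, illustrative only):

```java
public class OverlapCheck {
    // A read is guaranteed to see the latest write when the replica sets
    // contacted for reading and writing must intersect: r + w > rf.
    static boolean overlaps(int r, int w, int rf) {
        return r + w > rf;
    }

    public static void main(String[] args) {
        int rf = 3, q = rf / 2 + 1; // quorum = 2
        System.out.println(overlaps(q, q, rf)); // true: quorum reads see quorum writes
        System.out.println(overlaps(1, 1, rf)); // false: ONE/ONE can return stale data
    }
}
```

This is the usual R + W > N rule: quorum/quorum satisfies it, while reading and writing at ONE does not, which is exactly the stale-read case described above.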
On 05/10/2012 09:46 PM, Carpenter,
0 is a perfectly valid id. node - 1 is modulo the maximum token value; that
token range is 0 - 2**127, so node - 1 in this case is 2**127. - Original
Message - From: "Deno Vichas" <d...@syncopated.net>
Works for me on trunk... what version are you using?
On 04/23/2012 08:39 AM, mdione@orange.com wrote:
I understand the error message, but I don't understand why I get it.
Here's the CF:
cqlsh:avatars describe columnfamily HBX_FILE;
CREATE COLUMNFAMILY HBX_FILE (
KEY blob PRIMARY
I think your math is 'relatively' correct. It would seem to me you
should focus on how you can reduce the amount of storage you are using
per item, if at all possible, if that node count is prohibitive.
On 04/19/2012 07:12 AM, Franc Carter wrote:
Hi,
One of the projects I am working on is
Your design should be around how you want to query. If you are only querying
by user, then having a user as part of the row key makes sense. To manage row
size, you should think of a row as being a bucket of time. Cassandra supports a
large (but not without bounds) row size. To manage row size
Yes in this cassandra model, time wouldn't be a column value, it would be part
of the column name. Depending on how you want to access your data (give me all
data points for time X) and how many separate datapoints you have for time X,
you might consider packing all the data for a time in one
It seems to me you are on the right track. Finding the right balance of # rows
vs row width is the part that will take the most experimentation. -
Original Message - From: "Trevor Francis"
<trevor.fran...@tgrahamcapital.com>
If you want to reduce the number of columns, you could pack all the data
for a product into one column, as in
composite column name- product_id_1:12.44:1.00:3.00
On 04/12/2012 03:03 PM, Philip Shon wrote:
I am currently working on a data model where the purpose is to look up
multiple
It's easy to spend other people's money, but handling 1TB of data with
a 1.5 GB heap? Memory is cheap, and just a little more will solve many
problems.
On 04/11/2012 08:43 AM, Romain HARDOUIN wrote:
Thank you for your answers.
I originally posted this question because we encountered an OOM
For a thrift client, you need the following jars at a minimum
apache-cassandra-clientutil-*.jar
apache-cassandra-thrift-*.jar
libthrift-*.jar
slf4j-api-*.jar
slf4j-log4j12-*.jar
all of these jars can be found in the cassandra distribution.
On 04/02/2012 07:40 AM, Rishabh Agrawal wrote:
Any
slf4j files in the distribution. So I downloaded them; can
you help me with how to configure it?
From: Dave Brosius [mailto:dbros...@mebigfatguy.com]
Sent: Monday, April 02, 2012 6:28 PM
To: user@cassandra.apache.org
Subject: Re: Using Thrift
For a thrift client, you need the following jars
Counter columns are special, they must be in a column family to themselves.
On 03/27/2012 09:32 AM, puneet loya wrote:
when I'm using a counter column, I'm not able to add columns of other
types to the column family. Is it so, or is it just a syntactical error?
[default@CMDCv99] create column
I think you want
assume UserDetails validator as bytes;
On 03/23/2012 08:09 PM, Drew Kutcharian wrote:
Hi Everyone,
I'm having an issue with cassandra-cli's assume command with a custom type. I
tried it with the built-in BytesType and got the same error:
[default@test] assume UserDetails
if your keys are 1-n and you are using BOP, then almost certainly your
ring will be massively unbalanced with the first node getting clobbered.
You'll have bigger issues than getting lexical ordering.
I'd try to rethink your design so that you don't need BOP.
On 03/16/2012 06:49 PM, Watanabe
Given the hashtable nature of cassandra, finding a row is probably 'relatively'
constant no matter how many columns you have. The smaller the number of columns,
I suppose the more likely that all the columns will be in one sstable. If
you've got a ton of columns per row, it is much more likely
sorry, should have been: Given the hashtable nature of cassandra, finding a
row is probably 'relatively' constant no matter how many *rows* you have.
- Original Message - From: "Dave Brosius"
<dbros...@mebigfatguy.com>
With random partitioner, the rows are sorted by the hashes of the
keys, so for all intents and purposes, not sorted.
This comment below really is talking about how columns are sorted,
and yes when time uuids are used, they are sorted by the time
component, as a time
Given that these rows are wanted to be time buckets, you would want
collisions, in fact that would be the standard way of working, so
IMO, the uuid just removes the ability to bucket data and would not
be wanted.
On 02/28/2012 10:30 AM, Paul Loy wrote:
I guess the issue with 2 machines and RF=2 is that a consistency level of QUORUM
is the same as ALL, so you pretty much have little flexibility with this
setup; of course this might be fine depending on what you want to do. In
addition, RF=2 also means that you get no data-storage improvements
What it's saying is if you define a KeySpace Foo and under it a
ColumnFamily called Foo, you won't be able to use describe to describe
the ColumnFamily named Foo.
On 02/21/2012 07:26 AM, Rishabh Agrawal wrote:
Hello,
I am a newbie to Cassandra. Please bear with my lame doubts.
I running
if the composite column was rearranged as ticks:111, wouldn't the result be as
desired? - Original Message - From: "aaron morton"
<aa...@thelastpickle.com>
Based on the tags listed here:
http://git-wip-us.apache.org/repos/asf?p=cassandra.git
I would look here
http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=commit;h=9d4c0d9a37c7d77a05607b85611c3abdaf75be94
On 02/12/2012 10:39 PM, Maki Watanabe wrote:
Hello,
How to find the right
send an email to user-unsubscr...@cassandra.apache.org
On 02/04/2012 12:05 PM, Andrea Loggia wrote:
Unsubscribe
If you wish to unsubscribe from the cassandra user list, send a blank
email here:
user-unsubscr...@cassandra.apache.org
Folks who wish to unsubscribe should send a blank email to the following
address:
user-unsubscr...@cassandra.apache.org
Change your yaml entry for data_file_directories from
data_file_directories: F:\cassandra\data
to
data_file_directories:
- F:\cassandra\data
On 01/17/2012 11:54 PM, Asha Subramanian wrote:
Here is the yaml file..
Thanks
From: Dave Brosius [mailto:dbros...@mebigfatguy.com]
Sent
This works for me
http://wiki.apache.org/cassandra/HowToDebug
On 01/06/2012 01:18 AM, Kuldeep Sengar wrote:
Hi,
Can you post the error (you say that only 1 error is there)? That'll make things
clearer.
Thanks
Kuldeep Singh Sengar
Opera Solutions
Tech Boulevard,8th floor, Tower C,
Sector
A ByteBuffer is not a byte[]. To convert a String to a ByteBuffer, do something
like:
public static ByteBuffer toByteBuffer(String value)
    throws UnsupportedEncodingException
{
    return ByteBuffer.wrap(value.getBytes("UTF-8"));
}
see
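For completeness, a round trip using StandardCharsets (which also avoids the checked exception) might look like this; a sketch, not from the original mail:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ByteBufferRoundTrip {
    // Same idea as the snippet above, with StandardCharsets.UTF_8 instead
    // of the "UTF-8" string, so no UnsupportedEncodingException is thrown.
    static ByteBuffer toByteBuffer(String value) {
        return ByteBuffer.wrap(value.getBytes(StandardCharsets.UTF_8));
    }

    static String fromByteBuffer(ByteBuffer buf) {
        // duplicate() so decoding doesn't consume the caller's buffer position
        return StandardCharsets.UTF_8.decode(buf.duplicate()).toString();
    }

    public static void main(String[] args) {
        ByteBuffer b = toByteBuffer("hello");
        System.out.println(fromByteBuffer(b)); // hello
    }
}
```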
KsDef ksDef = new KsDef(); Map<String, String>
On 12/16/2011 10:13 PM, Brandon Williams wrote:
On Fri, Dec 16, 2011 at 8:52 PM, Kent Tong <freemant2...@yahoo.com> wrote:
Hi,
From the source code I can see that for each key, the hash (token), the key
itself (ByteBuffer) and the position (long offset in the sstable) are stored into
the key