Hi,
When we connect to OpsCenter with an account, we do not see any disconnect
button that would let us connect under another account.
Thanks
--
Cyril SCETBON
thanks Nick, I'll give it a try
Regards
--
Cyril SCETBON
On Jul 3, 2013, at 5:16 PM, Nick Bailey
n...@datastax.com wrote:
OpsCenter uses HTTP auth, so the credentials will be saved by your browser.
There are a couple of things you could do:
* Clear the local data/cache on
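For anyone stuck on the same thing: one commonly suggested trick for HTTP
Basic auth, sketched here with a hypothetical hostname and OpsCenter's
default port 8888, is to revisit the UI with deliberately bogus credentials
embedded in the URL so the browser overwrites the cached ones and prompts
again:
  http://log:out@your-opscenter-host:8888/
Whether this works depends on the browser; clearing the saved passwords for
the site achieves the same thing.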
Hi,
Our Hadoop jobs will only do READs, and we want to restrict reads to this
dedicated DC even if performance is bad.
What can we do to achieve this goal?
- set dynamic_snitch_badness_threshold to 0.98 on these DC's nodes? Can we
have different dynamic_snitch_badness_threshold values on
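For what it's worth, cassandra.yaml is read per node, so in principle each
DC's nodes can carry their own value. A minimal sketch of the setting on the
Hadoop DC's nodes only (the 0.98 value comes from the question above, not a
recommendation):
  # cassandra.yaml, Hadoop DC nodes only
  dynamic_snitch_badness_threshold: 0.98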
Hi,
What's the procedure to remove authentication?
I have set authenticator to org.apache.cassandra.auth.AllowAllAuthenticator in
cassandra.yaml; however, I still get:
cqlsh:pns_fr> select * from t1 limit 1;
Bad Request: User anonymous has no SELECT permission on table k1.t1 or any of
its
Right! authorizer was still set. I didn't know there were two different
classes for handling login and access control.
thanks
--
Cyril SCETBON
On Jun 4, 2013, at 12:00 PM, Michal Michalski
mich...@opera.com wrote:
How about authorizer? Is it set to
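For anyone hitting the same error, disabling both pieces looks like this in
cassandra.yaml; the authenticator line is from the message above, and
AllowAllAuthorizer is the matching no-op authorizer class:
  authenticator: org.apache.cassandra.auth.AllowAllAuthenticator
  authorizer: org.apache.cassandra.auth.AllowAllAuthorizer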
Forget it!
The issue came from a different configuration (cassandra-env.sh) on the chosen
node. It was not using the CMS garbage collector.
Regards
--
Cyril SCETBON
On May 27, 2013, at 12:10 PM,
cscetbon@orange.com wrote:
Hi,
I'm using OpsCenter 3.1.0 and see no
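For comparison with a misconfigured node, the CMS-related lines that a stock
cassandra-env.sh of that era sets look roughly like this (a sketch from
memory, not the exact file):
  JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
  JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
  JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"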
I was going to open one. Great!
--
Cyril SCETBON
On May 7, 2013, at 9:03 AM, Shamim sre...@yandex.ru
wrote:
I have created an issue in jira
https://issues.apache.org/jira/browse/CASSANDRA-5544
I tried your quick workaround, but the task is taking much longer than
before, even though it uses 2 mappers in parallel. The fact is that there are
1000 tasks.
Are you using vnodes? I didn't try disabling them.
Kind | % Complete | Num Tasks | Pending | Running | Complete | Killed
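If vnodes are enabled, the ~1000 tasks are plausibly explained by the input
splitting of that era creating at least one split per vnode range; the
relevant cassandra.yaml setting (256 is the usual vnode default) is:
  num_tokens: 256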
Unfortunately, I've just tried with a new cluster using RandomPartitioner and
it doesn't work any better.
It may come from hadoop/pig modifications:
18:02:53|elia:hadoop cyril$ git diff --stat cassandra-1.1.5..cassandra-1.2.1 .
.../apache/cassandra/hadoop/BulkOutputFormat.java | 27 +--
Hi,
I'm using Pig to calculate the sum of a column from a column family (a scan
of all rows), and I've read that input data locality is supported at
http://wiki.apache.org/cassandra/HadoopSupport
However, when I execute my Pig script, Hadoop assigns only one mapper to the
task and not one mapper on
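For context, a minimal sketch of the kind of Pig script described, with
made-up keyspace, column family, and column names (how the fields come out
depends on the schema CassandraStorage exposes):
  rows = LOAD 'cassandra://MyKeyspace/MyColumnFamily' USING CassandraStorage();
  vals = FOREACH rows GENERATE (long) amount;    -- 'amount' is hypothetical
  grouped = GROUP vals ALL;                      -- one group = whole scan
  total = FOREACH grouped GENERATE SUM(vals.$0);
  DUMP total;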
+1
We're also waiting for this bugfix :(
--
Cyril SCETBON
On Apr 23, 2013, at 2:42 PM, Ondřej Černoš
cern...@gmail.com wrote:
Hi all,
is there someone on this list knowledgeable enough about the plans for
support on non-compact storage tables
So, you're saying that deleted rows can come back even if the node is always
up, or is down for less than max_hint_window_in_ms, right?
--
Cyril SCETBON
On Apr 5, 2013, at 11:59 PM, Edward Capriolo
edlinuxg...@gmail.com wrote:
There are a series of edge cases that
That's exactly what I understood and why I was using the max_hint_window_in_ms
threshold to force a manual repair.
--
Cyril SCETBON
On Apr 5, 2013, at 5:22 PM, Jean-Armel Luce
jaluc...@gmail.com wrote:
Hi Cyril,
According to the documentation
Hi,
I know that deleted rows can reappear if node repair is not run on every node
before gc_grace_seconds elapses. However, do we really need to obey this rule
if we run node repair on nodes that are down for more than
max_hint_window_in_ms milliseconds?
Thanks
--
Cyril SCETBON
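For completeness, the rule referenced here boils down to running something
like the following on every node within gc_grace_seconds (864000 seconds,
i.e. 10 days, by default); the -pr flag limits each run to the node's primary
ranges so that ranges aren't repaired several times over:
  nodetool -h <host> repair -pr <keyspace>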
Okay. I found an issue already opened for that,
https://issues.apache.org/jira/browse/CASSANDRA-5234, and added my comment, as
it's labeled 'Not a problem'.
thanks
--
Cyril SCETBON
On Mar 26, 2013, at 9:24 PM, aaron morton
aa...@thelastpickle.com wrote:
Is
Is no one else concerned by the fact that we must define column families the
old way to access them with Pig?
Is there a way to have a column family defined the new way in one DC and the
old way (WITH COMPACT STORAGE) in another DC?
Thanks
--
Cyril SCETBON
Database expert
Humanlog
On Mar 20, 2013, at 5:21 AM, aaron morton aa...@thelastpickle.com wrote:
By design. There may be a plan to change it in the future; I'm not aware of
one, though.
Bad news. If someone else has more information about that, don't hesitate!
Do you know how hard it would be to change this behaviour?
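For readers following along, the "old way" being discussed is a table created
with the final clause below (names hypothetical); without it, the
Thrift-based describe calls that Pig relies on don't see the column family:
  CREATE TABLE ks1.cf1 (
      key text PRIMARY KEY,
      value text
  ) WITH COMPACT STORAGE;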
Hi,
I'm testing Pig (0.11) with Cassandra (1.2.2). I've noticed that when a
column family is created without the WITH COMPACT STORAGE clause, Pig can't
find it :(
After searching in the code, I've found that the issue comes from the function
recv_describe_keyspace. This function returns a KsDef
It succeeds but returns nothing, as my column family only has data in the
columns declared in the CREATE TABLE statement. If you want to keep it, you
should provide the CREATE statement with sample data
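By way of illustration only (hypothetical names), a CREATE statement with
sample data of the kind meant above would be:
  CREATE TABLE k1.t1 (key text PRIMARY KEY, col1 text) WITH COMPACT STORAGE;
  INSERT INTO k1.t1 (key, col1) VALUES ('row1', 'abc');
Every column here is declared in the schema, which is exactly the situation
where nothing lands in Pig's metadata-less columns bag (see below).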
--
Cyril SCETBON
On Mar 14, 2013, at 2:16 PM, aaron morton
OK, forget it. It was a mix of mistakes: environment variables not set, a
package name not added in the script, and libraries not found.
Regards
--
Cyril SCETBON
On Mar 12, 2013, at 10:43 AM,
cscetbon@orange.com wrote:
I'm already using Cassandra 1.2.2 with
I'm trying to execute your sample Pig script and I don't understand where the
alias columns comes from:
grunt> rows = LOAD 'cassandra://MyKeyspace/MyColumnFamily' USING
CassandraStorage();
grunt> cols = FOREACH rows GENERATE flatten(columns);
I suppose it's defined by the call to getSchema
Finally, I've found the answer in CassandraStorage.java!
columns is not an alias but a bag that gets filled with columns (name + value)
that don't have metadata.
That's why your sample doesn't return anything in my test, as I've only filled
existing columns (declared in the CQL CREATE statement).
I think
You said all versions. However, when I try to access
cassandra://twissandra/users based on
http://www.datastax.com/docs/1.0/dml/using_cql I get:
2013-03-11 17:35:48,444 [main] INFO org.apache.pig.Main - Apache Pig version
0.11.0 (r1446324) compiled Feb 14 2013, 16:40:57
2013-03-11
What do you mean? It's not needed by Pig or Hive to access Cassandra data.
Regards
On Jan 16, 2013, at 11:14 PM, Brandon Williams
dri...@gmail.com wrote:
You won't get CFS,
but it's not a hard requirement, either.
Jimmy,
I understand that CFS can replace HDFS for those who use Hadoop. I just want
to use Pig and Hive on Cassandra. I know that Pig samples are provided and now
work with Cassandra natively (they are part of the core). However, does it
mean that the process will be spread over nodes with
OK, I understand that I need to manage both Cassandra and Hadoop components,
and that Pig will use the Hadoop components to launch its tasks, which will
use Cassandra as the storage engine.
Thanks
--
Cyril SCETBON
On Jan 17, 2013, at 4:03 PM, James Schappet
Hi,
I know that the DataStax Enterprise package provides Brisk, but is there a
community version? Is it easy to interface Hadoop with Cassandra as the
storage layer, or do we absolutely have to use Brisk for that?
I know CassandraFS is natively available in Cassandra 1.2, the version I use,
so is there
I don't want to write to Cassandra, as it replicates data from another
datacenter; I just want to use Hadoop jobs (Pig and Hive) to read data from
it. I would like to use the same configuration as
http://www.datastax.com/dev/blog/hadoop-mapreduce-in-the-cassandra-cluster but
I want to know
Here is the point. You're right, this GitHub repository has not been updated
for a year and a half. I thought Brisk was just a bundle of technologies, and
that it was possible to install the same components and make them work
together without using this bundle :(
On Jan 16, 2013, at 8:22
Hi,
FYI, I've added the devel version to the cassandra formula of the Homebrew
package manager, and updated the release version to 1.1.8.
You can now use brew install cassandra to install version 1.1.8 and brew
install --devel cassandra to install version 1.2.0-rc2.
Enjoy!
--
Cyril SCETBON
Hi,
Is it normal that the ReadStage completed counter is incremented by 2 when a
CQL request uses a secondary index?
thanks
--
Cyril SCETBON
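A hedged illustration of the kind of request meant, with made-up names: a
query served by a secondary index, which plausibly touches the read stage
twice, once for the index lookup and once for the base rows:
  CREATE INDEX ON users (state);
  SELECT * FROM users WHERE state = 'TX';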
Nice job Aaron,
AFAIU, you now set gc_before to the current time for secondary indexes. And as
it was set to Integer.MAX_VALUE before your patch, the removeDeletedStandard
function was testing if (column.getLocalDeletionTime() < Integer.MAX_VALUE),
which is always true, and so was removing all rows from
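A hedged, self-contained reconstruction of the pre-patch behaviour described
above; the names follow the message, not the actual Cassandra source:
  import java.util.Map;
  import java.util.TreeMap;

  public class GcBeforeSketch {
      // localDeletionTime (in seconds) per deleted column name
      static void removeDeletedStandard(Map<String, Integer> tombstones, int gcBefore) {
          // with gcBefore == Integer.MAX_VALUE this test passes for every
          // tombstone, so all deleted columns are purged regardless of age
          tombstones.values().removeIf(t -> t < gcBefore);
      }

      public static void main(String[] args) {
          Map<String, Integer> cols = new TreeMap<>();
          cols.put("c1", 1365200000);                     // deleted "recently"
          removeDeletedStandard(cols, Integer.MAX_VALUE); // pre-patch value
          System.out.println(cols);                       // {} -> all purged
      }
  }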