d without a keyspace name may not show the effective
> ownerships, but it should always show the effective ownership information
> when you use "nodetool status keyspace_name".
> On 10/02/2021 13:12, Shalom Sagges wrote:
I don't think it's related specifically to 4.0
I can see this issue on 3.11 and even on previous versions as well.
It occurs when the replication factor is not the same across all keyspaces.
For example, if you have a cluster with 2 DCs and one of the keyspaces has
an RF of DC1: 3 while other keyspaces
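The mismatch described above can be sketched as follows (keyspace names hypothetical); with replication settings differing per keyspace, effective token ownership is only well defined relative to a specific keyspace:

```sql
-- RF differs between the keyspaces, so effective ownership differs too
CREATE KEYSPACE ks_a WITH replication =
  {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};
CREATE KEYSPACE ks_b WITH replication =
  {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 1};
-- "nodetool status ks_a" and "nodetool status ks_b" will report
-- different effective ownership percentages for the same nodes.
```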
You are right Yakir.
How did I miss that?? It was a misconfiguration on my end.
Thanks a lot!
On Sat, Dec 12, 2020 at 9:28 PM Yakir Gibraltar wrote:
> See also:
> https://support.datastax.com/hc/en-us/articles/360027838911
>
>
> On Sat, Dec 12, 2020 at 9:11 PM Yakir Gibraltar wrote:
>
>> Hi
org.apache.cassandra.io.util.FileHandle$Cleanup@1791308664:/data_path/md-1105027-big-Index.db
was not released before the reference was garbage collected
On Fri, Dec 11, 2020 at 6:50 PM Shalom Sagges
wrote:
Hi All,
I upgraded Cassandra from v3.11.4 to v3.11.8.
The upgrade went smoothly, however, after a few hours, a node crashed on
OOM and a few hours later, another one crashed.
They seem to have crashed due to excessive GC activity (CMS). The logs show
Map failures on CompactionExecutor:
ERROR
Thanks a lot guys!
I have a feeling that this tool will give me hell.
I'll just have to wait till they implement it and monitor the clusters, but
at least I know what to expect.
Thanks again
On Tue, Nov 17, 2020 at 1:33 AM Jeff Jirsa wrote:
> (Just to put this in perspective, it's
Hi Guys,
Our Service team would like to add a 3rd party tool (AppDynamics) that will
monitor Cassandra.
This tool will get read permissions on the system_traces keyspace and also
needs to enable TRACING.
tracetype_query_ttl in the yaml file will be reduced from 24 hours to 5
minutes.
I feel and
I agree with Erick and believe it's most likely a hot partitions issue.
I'd check "Compacted partition maximum bytes" in nodetool tablestats on
those "affected" nodes and compare the result with the other nodes.
I'd also check how the cpu_load is affected. From my experience, during
excessive GC
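As a sketch, the relevant tablestats line can be pulled out and compared across nodes (the label text matches 3.x `nodetool tablestats` output; the keyspace name below is hypothetical):

```shell
# Pull "Compacted partition maximum bytes" per table out of
# `nodetool tablestats` output on stdin, so the values can be
# compared across nodes.
parse_max_partition() {
  awk -F': ' '/Table:/ { t = $2 }
              /Compacted partition maximum bytes/ { print t, $2 }'
}
# Typical usage:
#   nodetool tablestats my_keyspace | parse_max_partition
```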
Hi All,
Apologies for the long email, so TL;DR:
A node bootstrapped successfully but only received the data it was the
primary owner of, not the data it should also hold as a replica.
I'm experiencing a really odd situation during node bootstrap.
Cassandra 3.11.4.
Background:
Due to a capacity issue on one of
Hi Gil,
You can run a full repair on your cluster. But if these messages come back
again, you need to check what's causing these data inconsistencies.
On Sun, Mar 8, 2020 at 10:11 AM Gil Ganz wrote:
> Hey all
> I have a lot of debug message about read repairs in my debug log :
>
> DEBUG
Thanks Erick!
I will check with the owners of this keyspace, hoping to find the culprit.
If they won't come up with anything, is there a way to read the key cache
file? (as I understand it's a binary file)
On another note, there's actually another keyspace I forgot to point out on
which I found a
Hi again,
Does anyone perhaps have an idea on what could've gone wrong here?
Could it be just a calculation error on startup?
Thanks!
On Sun, Jan 26, 2020 at 5:57 PM Shalom Sagges
wrote:
> Hi Jeff,
>
> It is happening on multiple servers and even on different DCs.
> The schema
estamp":157784458,"level":"Info","message":"Best Logs in
town","extras":[]}] | 0 | 1 |
2966683f-cf37-4ea3-9d82-1de46207d51e | 0
Thanks for your help on this one!
On Thu, Jan 23, 2020 at 5:40 PM Jeff Jirsa
Hi All,
Cassandra 3.11.4.
On one of our clusters, during startup, I see two types of "Harmless
error" notification regarding the keycache:
Server 1:
INFO [pool-3-thread-1] 2020-01-23 04:34:46,167
AutoSavingCache.java:263 - Harmless
error reading saved cache
Hi Georgelin,
Do you have JDK 1.8 installed?
Is JAVA_HOME set in your cassandra-env.sh file?
Also, try to check the /var/log/cassandra/startup.log for additional
information.
Hope this helps.
On Tue, Dec 24, 2019 at 10:39 AM gloCalHelp.com
wrote:
> To Dimo:
>
> Thank you for your reply
Sorry, disregard the schema ID. It's too early in the morning here ;)
On Tue, Nov 26, 2019 at 7:58 AM Shalom Sagges
wrote:
> Hi Paul,
>
> From the gossipinfo output, it looks like the node's IP address and
> rpc_address are different.
> /192.168.187.121 vs RPC_ADDRESS:192.168.
Hi Paul,
From the gossipinfo output, it looks like the node's IP address and
rpc_address are different.
/192.168.187.121 vs RPC_ADDRESS:192.168.185.121
You can also see that there's a schema disagreement between nodes, e.g.
schema_id on node001 is fd2dcb4b-ca62-30df-b8f2-d3fd774f2801 and on
ueries have run is to use audit
> logging plugin supported in 3.x, 2.2
> https://github.com/Ericsson/ecaudit
>
> On Thu, Sep 26, 2019 at 2:19 PM shalom sagges
> wrote:
>
>> Thanks for the quick response Jeff!
>>
>> The EXECUTE lines are a prepared state
d67e6a07c24b675f492686078b46c997
Thanks!
On Thu, Sep 26, 2019 at 11:14 AM Jeff Jirsa wrote:
> The EXECUTE lines are a prepared statement with the specified number of
> parameters.
>
>
> On Wed, Sep 25, 2019 at 11:38 PM shalom sagges
> wrote:
>
>> Hi All,
>>
>>
Hi All,
I've been trying to find which queries are run on a Cassandra node.
I've enabled DEBUG and ran nodetool setlogginglevel
org.apache.cassandra.transport TRACE
I did get some queries, but it's definitely not all the queries that are
run on this database.
I've also found a lot of DEBUG
don't do it :) this is kind of a
> special circumstances where other things have gone wrong.
>
> Thanks
>
> On Wed, Jun 5, 2019, 5:23 PM shalom sagges wrote:
>
>> If anyone has any idea on what might cause this issue, it'd be great.
>>
>> I don't understand what could trigg
is turned off I see repair running only in the
logs.
Thanks!
On Wed, Jun 5, 2019 at 2:32 PM shalom sagges wrote:
> Hi All,
>
> I'm having a bad situation where after upgrading 2 nodes (binaries only)
> from 2.1.21 to 3.11.4 I'm getting a lot of warning
Hi All,
I'm having a bad situation where after upgrading 2 nodes (binaries only)
from 2.1.21 to 3.11.4 I'm getting a lot of warnings as follows:
AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread
Thread[ReadStage-5,5,main]: {}
java.lang.ArrayIndexOutOfBoundsException: null
>> finding issues on the larger scale), especially with high volume clusters
>> so the loss in accuracy is kinda moot. Your average for local reads/writes
>> will almost always be sub millisecond but you might end up having 500
>> millisecond requests or worse that the mean wi
.$cf.ReadTotalLatency.Count),7,8,9),1),'test')
WDYT?
On Thu, May 30, 2019 at 2:29 PM shalom sagges
wrote:
> Thanks for your replies guys. I really appreciate it.
>
> @Alain, I use Graphite for backend on top of Grafana. But the goal is to
> move from Graphite to Prometheus eventually.
>
ad these measure the
> latency in milliseconds
>
> Thanks
>
> Paul
> www.redshots.com
>
> > On 29 May 2019, at 15:31, shalom sagges wrote:
> >
> > Hi All,
> >
> > I'm creating a dashboard that should collect read/write latency metrics
> on C* 3.x.
If I only send ReadTotalLatency to Graphite/Grafana, can I run an average
on it and use "scale to seconds=1" ?
Will that do the trick?
Thanks!
On Wed, May 29, 2019 at 5:31 PM shalom sagges
wrote:
> Hi All,
>
> I'm creating a dashboard that should collect read/write latency
Hi All,
I'm creating a dashboard that should collect read/write latency metrics on
C* 3.x.
In older versions (e.g. 2.0) I used to divide the total read latency in
microseconds with the read count.
Is there a metric attribute that shows read/write latency without the need
to do the math, such as
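For reference, the 2.0-era arithmetic mentioned above (total read latency in microseconds divided by the read count, converted to milliseconds) looks like this; the metric values are made-up placeholders, not real JMX readings:

```shell
# ReadTotalLatency is reported in microseconds, the count in operations.
total_latency_us=1250000   # placeholder value
read_count=5000            # placeholder value
awk -v t="$total_latency_us" -v c="$read_count" \
    'BEGIN { printf "mean read latency: %.3f ms\n", t / c / 1000 }'
# prints: mean read latency: 0.250 ms
```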
d exactly like that over the
> cluster...
>
> thanks!
> Attila Wind
>
> http://www.linkedin.com/in/attilaw
> Mobile: +36 31 7811355
>
>
> On 2019. 05. 23. 11:42, shalom sagges wrote:
>
> a) Interesting... But only in case you do not provide partitioning key
> right
n if servers are busy
> with the request seriously becoming non-responsive...?
>
> cheers
> Attila Wind
>
> http://www.linkedin.com/in/attilaw
> Mobile: +36 31 7811355
>
>
> On 2019. 05. 23. 0:37, shalom sagges wrote:
>
> Hi Vsevolod,
>
> 1) Why such behavio
Hi Vsevolod,
1) Why such behavior? I thought any given SELECT request is handled by a
limited subset of C* nodes and not by all of them, as per connection
consistency/table replication settings, in case.
When you run a query with allow filtering, Cassandra doesn't know where the
data is located,
In a lot of cases, the issue is with the data model.
Can you describe the table?
Can you provide the query you use to retrieve the data?
What's the load on your cluster?
Are there lots of tombstones?
You can set the consistency level to ONE, just to check if you get
responses. Although normally I
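In cqlsh that check is just (keyspace, table, and key hypothetical):

```sql
CONSISTENCY ONE;
SELECT * FROM my_ks.my_table WHERE id = 'some-key';
```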
Hi Rhys,
I encountered this error after adding new SSTables to a cluster and running
nodetool refresh (v3.0.12).
The refresh worked, but after starting repairs on the cluster, I got the
"Validation failed in /X.X.X.X" error on the remote DC.
A rolling restart solved the issue for me.
Hope this
Hi Simon,
If you haven't done that already, try to drain and restart the node you
deleted the data from.
Then run the repair again.
Regards,
On Thu, May 2, 2019 at 5:53 PM Simon ELBAZ wrote:
> Hi,
>
> I am running Cassandra v2.1 on a 3 node cluster.
>
> *# yum list installed | grep cassa*
>
I would just stop the service of the joining node and then delete the data,
commit logs and saved caches.
After stopping the node while joining, the cluster will remove it from the
list (i.e. nodetool status) without the need to decommission.
On Tue, Apr 30, 2019 at 2:44 PM Akshay Bhardwaj <
>
>
> Everyone really should move off of the 2.x versions just like you are
> doing.
>
>
>
> *From:* shalom sagges [mailto:shalomsag...@gmail.com]
> *Sent:* Monday, March 04, 2019 12:34 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: A Question About Hints
>
ou go to fast or two slow?
>
> BTW, I thought the comments at the end of the article you mentioned were
> really good.
>
>
>
>
>
>
>
> *From:* shalom sagges [mailto:shalomsag...@gmail.com]
> *Sent:* Monday, March 04, 2019 11:04 AM
> *To:* user@cassandra.apache.org
ster?
>
> Are both settings definitely on the default values currently?
>
>
>
> I’d try making a single conservative change to one or the other, measure
> and reassess. Then do same to other setting.
>
>
>
> Then of course share your results with us.
>
>
>
> *From:* shalom sagges [mailto:shalomsag...@gmail.com]
> *Sent:* Monday, March 04, 2019 7:22 AM
> *To:* user@cassandra.apache.org
> *Subject:* A Question About Hints
>
>
>
> Hi All,
>
>
>
> Does anyone know what is the most optimal hints configuration (multipl
Hi All,
Does anyone know what is the most optimal hints configuration (multiple
DCs) in terms of
max_hints_delivery_threads and hinted_handoff_throttle_in_kb?
If it's different for various use cases, is there a rule of thumb I can
work with?
I found this post but it's quite old:
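For reference, the two settings live in cassandra.yaml; these are the shipped defaults, shown only for orientation, not as a tuning recommendation:

```yaml
# cassandra.yaml (defaults)
hinted_handoff_throttle_in_kb: 1024   # KB/s throttle for hint delivery
max_hints_delivery_threads: 2
```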
Thanks for the info Alex!
I read
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsSwitchSnitch.html
but still have a few questions:
Our clusters are comprised of 2 DCs with no rack configuration, RF=3 on
each DC.
In this scenario, if I wish to seamlessly change the snitch with
If you're using the PropertyFileSnitch, well... you shouldn't as it's a
rather dangerous and tedious snitch to use
I inherited Cassandra clusters that use the PropertyFileSnitch. It's been
working fine, but you've kinda scared me :-)
Why is it dangerous to use?
If I decide to change the snitch,
Cleanup is a great way to free up disk space.
Just note you might run into
https://issues.apache.org/jira/browse/CASSANDRA-9036 if you use a version
older than 2.0.15.
On Thu, Feb 14, 2019 at 10:20 AM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
> On Wed, Feb 13, 2019 at 6:47 PM
or useful.
Thanks a lot Jeff for clarifying this.
I really hoped the answer would be different. Now I need to nag our R&D
teams again :-)
Thanks!
On Mon, Feb 11, 2019 at 8:21 PM Michael Shuler
wrote:
> On 2/11/19 9:24 AM, shalom sagges wrote:
> > I've successfully upgraded a 2.0 clust
Hi All,
I've successfully upgraded a 2.0 cluster to 2.1 on the way to upgrade to
3.11 (hopefully 3.11.4 if it'd be released very soon).
I have 2 small questions:
1. Currently the Datastax clients are enforcing Protocol Version 2 to
prevent mixed cluster issues. Do I need now to enforce
Disclaimer: The information provided in above response is my personal
> opinion based on the best of my knowledge and experience. We do
> not take any responsibility and we are not liable for any damage caused by
> actions taken based on above information.
> Thanks
> Anuj
>
>
Hi All,
I'm about to start a rolling upgrade process from version 2.0.14 to version
3.11.3.
I have a few small questions:
1. The upgrade process that I know of is from 2.0.14 to 2.1.x (higher
than 2.1.9 I think) and then from 2.1.x to 3.x. Do I need to upgrade first
to 3.0.x or can I
rency Factor)
>
>
>
> On Tue, Nov 6, 2018 at 8:21 AM shalom sagges
> wrote:
>
>> Hi All,
>>
>> If I run for example:
>> select * from myTable limit 3;
>>
>> Does Cassandra do a full table scan regardless of the limit?
>>
>> Thanks!
>>
>
Hi All,
If I run for example:
select * from myTable limit 3;
Does Cassandra do a full table scan regardless of the limit?
Thanks!
I guess the code experts could shed more light on
org.apache.cassandra.util.coalesceInternal and SepWorker.run.
I'll just add anything I can think of
Any cron or other scheduler running on those nodes?
Lots of Java processes running simultaneously?
Heavy repair continuously running?
Lots of
What takes the most CPU? System or User?
Did you try removing a problematic node and installing a brand new one
(instead of re-adding)?
When you decommissioned these nodes, did the high CPU "move" to other nodes
(probably data model/query issues) or was it completely gone? (server
issues)
On
Hi Riccardo,
Does this issue occur when performing a single restart or after several
restarts during a rolling restart (as mentioned in your original post)?
We have a cluster that when performing a rolling restart, we prefer to wait
~10-15 minutes between each restart because we see an increase
If there are a lot of droppable tombstones, you could also run User Defined
Compaction on that (and on other) SSTable(s).
This blog post explains it well:
http://thelastpickle.com/blog/2016/10/18/user-defined-compaction.html
On Fri, Aug 31, 2018 at 12:04 AM Mohamadreza Rostami <
you are on 3.0,
> So you are affected by UDT behaviour (stored as BLOB) mentioned in the
> JIRA.
>
> Cheers,
> Anup
>
> On 5 August 2018 at 23:29, shalom sagges wrote:
>
>> Hi All,
>>
>> Are there any known caveats for User Defined Types in Cassandra (vers
Hi All,
Are there any known caveats for User Defined Types in Cassandra (version
3.0)?
One of our teams wants to start using them. I wish to assess it and see if
it'd be wise (or not) to refrain from using UDTs.
Thanks!
Hi Gareth,
If you're using batches for multiple partitions, this may be the root cause
you've been looking for.
https://inoio.de/blog/2016/01/13/cassandra-to-batch-or-not-to-batch/
If batches are optimally used and only one node is misbehaving, check if
NTP on the node is properly synced.
Hope
The clustering column is ordered per partition key.
So if for example I create the following table:
create table desc_test (
id text,
name text,
PRIMARY KEY (id,name)
) WITH CLUSTERING ORDER BY (name DESC );
I insert a few rows:
insert into desc_test (id , name ) VALUES (
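A complete version of that sketch, with made-up values, shows the per-partition ordering:

```sql
CREATE TABLE desc_test (
    id   text,
    name text,
    PRIMARY KEY (id, name)
) WITH CLUSTERING ORDER BY (name DESC);

INSERT INTO desc_test (id, name) VALUES ('1', 'alice');
INSERT INTO desc_test (id, name) VALUES ('1', 'bob');

-- SELECT name FROM desc_test WHERE id = '1';
-- returns 'bob' before 'alice' (DESC within the partition)
```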
1. How to use sharding partition key in a way that partitions end up in
different nodes?
You could, for example, create a table with a bucket column added to the
partition key:
Table distinct(
hourNumber int,
bucket int, //could be a 5 minute bucket for example
key text,
distinctValue long
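A fuller version of that sketch (table and column names illustrative; note CQL's 64-bit integer type is bigint, not long) shows how the bucket spreads one hour across partitions:

```sql
CREATE TABLE distinct_counts (
    hourNumber    int,
    bucket        int,     -- e.g. minute / 5, giving 12 buckets per hour
    key           text,
    distinctValue bigint,
    PRIMARY KEY ((hourNumber, bucket), key)
);
-- The same hour now maps to 12 different partitions, which the
-- partitioner distributes across the nodes.
```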
It's advisable to set the RF to 3 regardless of the consistency level.
If using RF=1, Read CL=LOCAL_ONE and a node goes down in the local DC, you
will not be able to read data related to this node until it goes back up.
For writes and CL=LOCAL_ONE, the write will fail (if it falls on the token
Thanks a lot Hitesh!
I'll try to re-tune the heap to a lower level
Shalom Sagges
DBA
T: +972-74-700-4035
<http://www.linkedin.com/company/164748> <http://twitter.com/liveperson>
<http://www.facebook.com/LivePersonInc> We Create Meaningful Connections
<https://liveperson.doc
Hi All,
I have a 44 node cluster (22 nodes on each DC).
Each node has 24 cores and 130 GB RAM, 3 TB HDDs.
Version 2.0.14 (soon to be upgraded)
~10K writes per second per node.
Heap size: 8 GB max, 2.4 GB newgen
I deployed Reaper and GC started to increase rapidly. I'm not sure if it's
because
, DuyHai Doan <doanduy...@gmail.com> wrote:
> Compress it and stores it as a blob.
> Unless you ever need to index it but I guess even with SASI indexing a so
> huge text block is not a good idea
>
> On Wed, Apr 4, 2018 at 2:25 PM, shalom sagges <shalomsag...@gmail.c
Hi All,
A certain application is writing ~55,000 characters for a single row. Most
of these characters are entered to one column with "text" data type.
This looks insanely large for one row.
Would you suggest to change the data type from "text" to BLOB or any other
option that might fit this
Hi All,
I ran nodetool cfstats (v2.0.14) on a keyspace and found that there are a
few large partitions. I assume that since "Compacted partition maximum
bytes": 802187438 (~800 MB) and since
"Compacted partition mean bytes": 100465 (~100 KB), it means that most
partitions are in okay size and
Thanks Guys!
This really helps!
On Fri, Mar 23, 2018 at 7:10 AM, Mick Semb Wever
wrote:
> Is there a way to protect C* on the server side from tracing commands that
>> are executed from clients?
>>
>
>
> If you really needed a way to completely disable all and any
ration
>
> On Mar 22, 2018, 11:10 AM -0500, shalom sagges <shalomsag...@gmail.com>,
> wrote:
>
> Hi All,
>
> Is there a way to protect C* on the server side from tracing commands that
> are executed from clients?
>
> Thanks!
>
>
Hi All,
Is there a way to protect C* on the server side from tracing commands that
are executed from clients?
Thanks!
If the problem is recurring, then you might have a corrupted SSTable.
Check the system log. If a certain file is corrupted, you'll find it.
grep -i corrupt /system.log*
On Wed, Mar 21, 2018 at 2:18 PM, Jerome Basa wrote:
> hi,
>
> when i run `nodetool compactionstats`
> protecting users from themselves, but it doesn't hurt anything to have the
> table there. Just ignore it and its existence will not cause any issues.
>
> Chris
>
>
> On Mar 19, 2018, at 10:27 AM, shalom sagges <shalomsag...@gmail.com>
> wrote:
>
> Tha
al, I see.
>
> With https://issues.apache.org/jira/browse/CASSANDRA-13813 you wont be
> able to drop the table, but would be worth a ticket to prevent creation in
> those keyspaces or allow some sort of override if allowing create.
>
> Chris
>
>
> On Mar 19, 2018,
ng in
> debugging.
>
> Chris
>
> On Mar 19, 2018, at 7:52 AM, shalom sagges <shalomsag...@gmail.com> wrote:
>
> Hi All,
>
> I accidentally created a test table on the system_traces keyspace.
>
> When I tried to drop the table with the Cassandra user, I got the
Hi All,
I accidentally created a test table on the system_traces keyspace.
When I tried to drop the table with the Cassandra user, I got the following
error:
Unauthorized: Error from server: code=2100 [Unauthorized] message="Cannot
DROP "
Is there a way to drop this table permanently?
o ensure the combined results are properly ordered.
>
> Writes will be slowed by the double-writes, reads you'll be bound by the
> worse performing cluster.
>
> On Tue, Feb 27, 2018 at 8:23 AM, Kenneth Brotman <
> kenbrot...@yahoo.com.invalid> wrote:
>
>> Could you tell
Hi All,
I'm planning to upgrade my C* cluster to version 3.x and was wondering
what's the best way to perform a rollback if need be.
If I used snapshot restoration, I would be facing data loss, depends when I
took the snapshot (i.e. a rollback might be required after upgrading half
the cluster
des,
> the sstable that you are loading SHOULD not be live. If you at streaming a
> life sstable, it means you are using sstableloader not as it is designed to
> be used - which is with static files.
>
> --
> Rahul Singh
> rahul.si...@anant.us
>
> Anant Corporation
>
!
On Sun, Feb 18, 2018 at 3:58 PM, Rahul Singh <rahul.xavier.si...@gmail.com>
wrote:
> Check permissions maybe? Who owns the files vs. who is running
> sstableloader.
>
> --
> Rahul Singh
> rahul.si...@anant.us
>
> Anant Corporation
>
> On Feb 18, 2018, 4:2
Hi All,
C* version 2.0.14.
I was loading some data to another cluster using SSTableLoader. The
streaming failed with the following error:
Streaming error occurred
java.lang.RuntimeException: java.io.*FileNotFoundException*:
/data1/keyspace1/table1/keyspace1-table1-jb-65174-Data.db (No such
Hi All,
I want to push the Cassandra logs (version 3.x) to Kibana.
Is there a way to configure the Cassandra logs to be in json format?
If modifying the logs to json is not an option, I came across this blog
post from about a year ago regarding that matter:
Thanks a lot for the info!
Much appreciated.
On Tue, Jan 9, 2018 at 2:33 AM, Mick Semb Wever
wrote:
>
>
>> Can you please provide dome JIRAs for superior fixes and performance
>> improvements which are present in 3.11.1 but are missing in 3.0.15.
>>
>
>
> Some that come
Thanks Guys!
Sorry for the late reply.
I'm interested in TWCS, which I understand is more stable in 3.11.1 than in
3.0.15, tombstone compaction and slow logs.
I don't plan to use MVs or SASI in the near future, as I understand they
are not production-ready.
Is it okay to use the above features?
Hi All,
I want to upgrade from 2.x to 3.x.
I can definitely use the features in 3.11.1 but it's not a must.
So my question is, is 3.11.1 stable and suitable for Production compared to
3.0.15?
Thanks!
t its own topology settings in
> cassandra-rackdc.properties, so the problem you point out in 2 goes away,
> as when adding a node you only need to specify its configuration and that
> will be propagated to the rest of the cluster through gossip.
>
> On 24 October 2017 at 07:13, sh
Hi Everyone,
I have 2 DCs (v2.0.14) with the following topology.properties:
DC1:
xxx11=DC1:RAC1
xxx12=DC1:RAC1
xxx13=DC1:RAC1
xxx14=DC1:RAC1
xxx15=DC1:RAC1
DC2:
yyy11=DC2:RAC1
yyy12=DC2:RAC1
yyy13=DC2:RAC1
yyy14=DC2:RAC1
yyy15=DC2:RAC1
# default for unknown nodes
default=DC1:RAC1
Now let's
removed. If you just keep recompacting sstable 2 by itself, the row in
> sstable A remains on disk.
>
>
>
> --
> Jeff Jirsa
>
>
> On Sep 26, 2017, at 2:01 AM, shalom sagges <shalomsag...@gmail.com> wrote:
>
> Thanks Jeff!
>
> I'll try that.
> I'm
best if you can be sure which
> data is overlapping, but short of that you'll probably want to pick data
> with approximately the same (or older) calendar timestamps.
>
>
>
> On Mon, Sep 25, 2017 at 11:10 AM, shalom sagges <shalomsag...@gmail.com>
> wrote:
>
>> Hi Everyone
Hi Everyone,
I'm running into an issue I can't seem to solve.
I execute forced compaction in order to reclaim storage.
Everything was working fine for a time, but after a while I found that
tombstones aren't being removed any longer.
For example, I've compacted the following SSTable:
21G
-compact
> // something that was already being compacted earlier.
>
> On 4 September 2017 at 13:54, Nicolas Guyomar <nicolas.guyo...@gmail.com>
> wrote:
>
>> You'll get the WARN "Will not compact {}: it is not an active sstable"
>> :)
>>
>>
By the way, does anyone know what happens if I run a user defined
compaction on an sstable that's already in compaction?
On Sun, Sep 3, 2017 at 2:55 PM, Shalom Sagges <shal...@liveperson.com>
wrote:
> Try this blog by The Last Pickle:
>
> http://thelastpickle.com/blog/2016/10/
Try this blog by The Last Pickle:
http://thelastpickle.com/blog/2016/10/18/user-defined-compaction.html
Thanks guys for all the info!
On Wed,
Thanks a lot!
I'll make sure it'll be prepared once.
That's a good to know post.
Thanks for the info Nicolas!
Sounds great then.
Thanks a lot guys!
On Tue, Aug 29, 2017 at 2:41 PM, Nicolas Guyomar <nicolas.guy
Hi Matija,
I just wish to know if there are any disadvantages when using prepared
statement or any warning signs I should look for. Queries will run multiple
times so it fits the use case.
Thanks!
Insights, anyone?
On Mon, Aug 28, 2017 at 10:43 AM, Shalom Sagges <shal...@liveperson.com>
wrote:
!
Thanks Nitan!
Eventually it was a firewall issue related to the Centos7 node.
Once fixed, the rolling restart resolved the issue completely.
Thanks again!
c07bf5c17a: [x.x.x.2]
UNREACHABLE: [x.x.x.31, x.x.x.1, x.x.x.28, x.x.x.252,
x.x.x.253, x.x.x.15, x.x.x.126, x.x.x.35, x.x.x.32]
I'd really REALLY appreciate some guidance. Did I do something wrong? Is
there a way to fix this?
Thanks a lot!
That's awesome!! Thanks for contributing!
On Thu, Jun 15, 2017 at 2:32 AM,