if this approach would work? I’m
>> concerned if having mixed version on Cassandra nodes may cause any issues
>> like in streaming data/sstables from existing DC to newly created third DC
>> with version 3.10 installed, will nodes in DC3 join the cluster with data
>> withou
t was the cause? How to prevent it from repeating?
>
> --
> Best Regards,
> Dmitry Simonov
>
--
Alexander Dejanovski
France
@alexanderdeja
Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
" regularly for this keyspace (every 24 hours on each
> node), because gc_grace_seconds is set to 24 hours.
>
> Should we consider increasing compaction throughput and
> "concurrent_compactors" (as recommended for SSDs) to keep
> "CompactionExecutor" pending tas
e it takes forever, sometimes 45 min to 1 hr, and sometimes it times out... so
> I started running "nodetool repair -dc dc1" for each DC one by one, which
> works fine. Do we have a better way to handle this?
> I am thinking about exploring Cassandra Reaper. Has anyone used that
>
) with 256 Vnodes .
>>>>>> When we tried to start repairs from opscenter then it showed
>>>>>> 1.9Million ranges to repair .
>>>>>> And even after setting compaction throughput and stream throughput to 0,
>>>>>> opscenter is not able to help us much to finish repair in 9 days
>>>>>> timeframe .
>>>>>>
>>>>>> What is your thought on Reaper ?
>>>>>> Do you think , Reaper might be able to help us in this scenario ?
>>>>>>
>>>>>> Thanks
>>>>>> Surbhi
>>>>>>
>>>>>>
>>>>>> --
>>>> Jon Haddad
>>>> http://www.rustyrazorblade.com
>>>> twitter: rustyrazorblade
>>>>
>>>>
>>>>
>>>
>>> --
soon.
>
> On Mon, May 21, 2018 at 10:53 AM Alexander Dejanovski <
> a...@thelastpickle.com> wrote:
>
>> Hi Surbhi,
>>
>> Reaper might indeed be your best chance to reduce the overhead of vnodes
>> there.
>> The latest betas include a new feature that
e, you may not copy or use it, or
> disclose it to anyone else. If you received it in error please notify us
> immediately and then destroy it. Dynatrace Austria GmbH (registration
> number FN 91482h) is a company registered in Linz whose registered office
> is at 4040 Linz, Austria, Freistädterstraße 313
>
repair by -pr option only.
>
> Question: Is incremental repair the default repair for the Cassandra 3.11.2
> version?
>
> Thanks,
> Prachi
>
>
> --
start problem we were forced to delete commit logs from
> one of nodes.
>
> Now repair is running, but meanwhile some reads bring no data (RF=2)
>
> Can this node be excluded from read queries, so that all reads are
> redirected to the other node in the ring?
>
>
>
ifferent
> application should be changed for this.
>
>
> On Wednesday, August 29, 2018 2:41 PM, kurt greaves
> wrote:
>
>
> Note that you'll miss incoming writes if you do that, so you'll be
> inconsistent even after the repair. I'd say best to just query at
elect queries) and "WARN: commit log syncs over the past"
>> ===
>>
>> nodetool tablestats -H ks.xyz
>> Total number of tables: 89
>>
>> Keyspace : ks
>> Read Count: 1439722
>> Read Latency: 1.8982509581710914 ms
>> Write Count: 4222811
>> Write Latency: 0.016324778684151386 ms
>> Pending Flushes: 0
>> Table: xyz
>> SSTable count: 1036
>> SSTables in each level: [1, 10, 116/100, 909, 0, 0, 0, 0,
>> 0]
>> Space used (live): 187.09 GiB
>> Space used (total): 187.09 GiB
>> Space used by snapshots (total): 0 bytes
>> Off heap memory used (total): 783.93 MiB
>> SSTable Compression Ratio: 0.3238726404414842
>> Number of partitions (estimate): 447095605
>> Memtable cell count: 306194
>> Memtable data size: 20.59 MiB
>> Memtable off heap memory used: 0 bytes
>> Memtable switch count: 7
>> Local read count: 1440322
>> Local read latency: 6.785 ms
>> Local write count: 1408204
>> Local write latency: 0.021 ms
>> Pending flushes: 0
>> Percent repaired: 0.0
>> Bloom filter false positives: 19
>> Bloom filter false ratio: 0.3
>> Bloom filter space used: 418.2 MiB
>> Bloom filter off heap memory used: 418.19 MiB
>> Index summary off heap memory used: 307.75 MiB
>> Compression metadata off heap memory used: 57.99 MiB
>> Compacted partition minimum bytes: 150
>> Compacted partition maximum bytes: 1916
>> Compacted partition mean bytes: 1003
>> Average live cells per slice (last five minutes): 20.0
>> Maximum live cells per slice (last five minutes): 20
>> Average tombstones per slice (last five minutes): 1.0
>> Maximum tombstones per slice (last five minutes): 1
>> Dropped Mutations: 0 bytes
>>
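In the stats above, `SSTables in each level: [1, 10, 116/100, 909, 0, ...]` means level 2 holds 116 SSTables against an LCS target of 100. As an illustrative sketch (assuming the default LCS fanout of 10, and treating the common L0 compaction trigger of 4 SSTables as the L0 target):

```python
def lcs_level_target(level: int, fanout: int = 10) -> int:
    """Max SSTable count per LCS level with the default fanout (L1=10, L2=100, ...).
    L0 has no size-based target; 4 SSTables is a common compaction trigger (assumption)."""
    return fanout ** level if level > 0 else 4

# Counts reported by the tablestats output above, for L0..L3.
levels = [1, 10, 116, 909]
for lvl, count in enumerate(levels):
    target = lcs_level_target(lvl)
    print(f"L{lvl}: {count} sstables, target {target},",
          "overflow" if count > target else "ok")
```

Only level 2 exceeds its target here, which is exactly what the `116/100` notation flags.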
>> --
>>
>> regards,
>> Laxmikant Upadhyay
>>
>> --
training by DataStax). I would like to share it with my team. Did anyone
> come across this information? If yes, can you please share it?
>
> Thanks!
>
>
uits the most.
>
>1. LeveledCompactionStrategy (LCS)
>2. SizeTieredCompactionStrategy (STCS)
>3. TimeWindowCompactionStrategy (TWCS)
>
>
> --
> Raman Gugnani
>
> 8588892293
>
> --
ugh the upgradesstables as needed, and that
> upgradesstables is a node-local concern that doesn't impact streaming or
> node replacement or other situations since cassandra can read old version
> sstables and new sstables would simply be the new format.
>
>> upgradesstables is a node-local concern that doesn't impact streaming or
>> node replacement or other situations since cassandra can read old version
>> sstables and new sstables would simply be the new format.
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>
>> --
t; region will immediately start replicating on the Mum region's nodes.
> However even after 2 weeks I do not see historical data to be replicated,
> but new data being written on Sgp region is present in Mum region as well.
> >>>>
> >>>> Any help or suggestions to debug this issue will be highly
> appreciated.
> >>>>
> >>>> Regards
> >>>> Akshay Bhardwaj
> >>>> +91-97111-33849
> >>>>
> >>>>
> >>>
> >>>
> >>> --
> >>>
> >>>
> >>
> >>
>
>
> --
> Best Regards,
> Kiran.M.K.
>
>
> --
> Bonus question: changing the compaction throughput to 0 (removing the
> throttling), had no impacts in the current compaction. Do new compaction
> throughput values only come into effect when a new compaction kicks in?
>
> Cheers
>
> Pedro Gordo
>
>
> --
> M.Sc. Daniel Seybold
>
> Universität Ulm
> Institut Organisation und Management
> von Informationssystemen (OMI)Albert-Einstein-Allee 43
> 89081 Ulm
> Phone: +49 (0)731 50-28 799
>
even through major upgrades. It is stored in system
> keyspace in data directory, and is stable across restarts.
>
> --
> Alex
>
> --
to use
>
> I inherited Cassandra clusters that use the PropertyFileSnitch. It's been
> working fine, but you've kinda scared me :-)
> Why is it dangerous to use?
> If I decide to change the snitch, is it seamless or is there a specific
> procedure one must follow
our seed list
across the cluster.
On Wed, Feb 27, 2019 at 10:52 AM wxn...@zjqunshuo.com
wrote:
> I'm using SimpleSnitch. I have only one DC. Is there any problem to follow
> the below procedure?
>
> -Simon
>
> *From:* Alexander Dejanovski
> *Date:* 2019-02-27 16:07
>
t one by one?
> Will this method cause problems?
>
> Thanks!
>
>
> On Wed, Feb 27, 2019 at 12:18 PM Alexander Dejanovski <
> a...@thelastpickle.com> wrote:
>
>> You'll be fine with the SimpleSnitch (which shouldn't be used either
>> because it doesn
anyway)
>
>
>
> I do put UNKNOWN as the default DC so that any missed node easily appears
> in its own unused DC.
>
>
>
>
>
> Sean Durity
>
>
>
> *From:* Alexander Dejanovski
> *Sent:* Wednesday, February 27, 2019 4:43 AM
> *To:* user@cassandra
e). I would
> like to know what is the recommended process to change an existing cluster
> with single racks configuration to multi rack configuration.
>
>
> I want to introduce 3 racks with 2 nodes in each rack.
>
>
> Regards
> Manish
>
> --
ill be proportionally very high in comparison to other nodes in
> rac1.
>
> So until both racks have equal number of nodes and we run nodetool cleaup,
> the data will not be equally distributed.
>
>
>
>
>
> On Wed, Mar 6, 2019 at 5:50 PM Alexander Dejanovski <
> a...@th
On Sat, Mar 16, 2019 at 1:04 AM Nick Hatfield
wrote:
> Hey guys,
>
>
>
> Can someone give me some idea or link some good material for determining a
> good / aggressive tombstone strategy? I want to make sure my tombstones are
> getting purged as soon as possible to reclai
mpact your cluster performance in ways I
cannot predict, and should be attempted only if you really need to perform
this major compaction and cannot wait for it to go through at the current
pace.
Cheers,
e rule for a tombstone to be purged is that there is no SSTable outside
the compaction that would possibly contain the partition and that would
have older timestamps.
Is this a followup on your previous issue where you were trying to perform
a major compaction on an LCS table?
stones are sticking around.
Your best shot here will be a major compaction of that table, since it
doesn't seem so big. Remember to use the --split-output flag on the
compaction command to avoid ending up with a single SSTable after that.
Cheers,
ghtfully reported by the
docs as an "intensive process" (not more than a repair though).
On Thu, Jun 20, 2019 at 9:17 AM Alexander Dejanovski
wrote:
> My
l, stop Cassandra, mark sstables as unrepaired, restart Cassandra).
Cheers,
On Wed, Jul 31, 2019 at 3:53 PM Martin Xue wrote:
> Sorry ASAD, don't have
pair
¯\_(ツ)_/¯
On Wed, Jul 31, 2019 at 3:51 PM Martin Xue wrote:
> Hi,
>
> I am running repair on production, started with one of 6 nodes in the
> cluster (3 n
to 3.0.19 (even 3.11.4 IMHO as 3.0 offers less
performance than 3.11) and use Reaper <http://cassandra-reaper.io/> to
handle/schedule repairs.
Cheers,
On Thu,
Hi Jeff,
Anticompaction only runs before repair in the upcoming 4.0.
In all other versions of Cassandra, it runs at the end of repair sessions.
My understanding from other messages Martin sent to the ML was that he was
already running full repair not incremental, which before 4.0 will also
perform
specifics).
If it shows up as down, it will rely on hints to get the writes. If it
shows as joining, it will get the writes while streaming is ongoing.
Cheers,
O
e the sstables are fully expired.
Cheers,
On Thu, Sep 5, 2019 at 11:33 AM Eunsu Kim wrote:
> Thank you for your response.
>
>
>
> I’m using TimeWind
There are 2 main reasons I see for still having unrepaired sstables after
running nodetool repair -pr :
1- new data is still flowing in your database after the repair sessions
were launched, and thus hasn't been repaired
2- some repair sessions failed and left unrepaired data on your nodes.
Increm
After running some tests I can confirm that using -pr leaves unrepaired
SSTables, while removing it shows repaired SSTables only once repair is
completed.
The purpose of -pr was to lighten the repair process by not repairing
ranges RF times, but just once. With incremental repair though, repaired
Reads at quorum in dc3 will involve dc1 and dc2 as they will require a
response from more than half the replicas throughout the Cluster.
If you're using RF=3 in each DC, each read will need at least 5 responses,
which DC3 cannot provide on its own.
You can have troubles if DC3 has more than half
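The quorum arithmetic above can be sketched as follows (an illustrative calculation only, not tied to any driver API):

```python
def quorum(total_replicas: int) -> int:
    """QUORUM requires a strict majority of all replicas."""
    return total_replicas // 2 + 1

# Three DCs with RF=3 each: 9 replicas cluster-wide.
rf_per_dc = {"dc1": 3, "dc2": 3, "dc3": 3}
total = sum(rf_per_dc.values())

needed = quorum(total)  # 9 // 2 + 1 = 5 responses required
print(needed)
# DC3 alone holds only 3 replicas, so it cannot satisfy a cluster-wide QUORUM:
print(needed > rf_per_dc["dc3"])
```

This is why a QUORUM read issued in DC3 must wait on replicas in DC1 or DC2 (LOCAL_QUORUM is the usual way to keep reads DC-local).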
Hi Siddarth,
I would recommend running "nodetool describering keyspace_name" as its
output is much simpler to reason about :
Schema Version:9a091b4e-3712-3149-b187-d2b09250a19b
TokenRange:
TokenRange(start_token:1943978523300203561, end_token:2137919499801737315,
endpoints:[127.0.0.3, 127.0.0.6
Hi Siddharth,
yes, we are sure token ranges will never overlap (I think the start token
in describering output is excluded and the end token included).
You can get per host information in the Datastax Java driver using :
Set<TokenRange> rangesForKeyspace =
    cluster.getMetadata().getTokenRanges(keyspaceName, host);
To be thorough, token ranges do not overlap per DC. Ranges in different DCs
do overlap as the token distribution is different.
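The per-DC non-overlap property can be sketched like this (illustrative, with made-up tokens): sorting one DC's tokens and forming half-open ranges (start excluded, end included) tiles the ring exactly once, with the last range wrapping around.

```python
def token_ranges(tokens):
    """Build (start, end] ranges from one DC's tokens; the last range wraps around."""
    ts = sorted(tokens)
    return [(ts[i - 1], ts[i]) for i in range(1, len(ts))] + [(ts[-1], ts[0])]

dc_tokens = [-700, -100, 300, 900]  # hypothetical tokens owned by one DC's nodes
ranges = token_ranges(dc_tokens)
print(ranges)  # [(-700, -100), (-100, 300), (300, 900), (900, -700)]

# Every token appears exactly once as a start and once as an end,
# so the ring is fully covered with no overlap within the DC:
starts = sorted(r[0] for r in ranges)
ends = sorted(r[1] for r in ranges)
print(starts == ends == sorted(dc_tokens))  # True
```

Another DC with its own tokens produces a different tiling of the same ring, which is why ranges *across* DCs do overlap.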
On Wed, Aug 31, 2016 at 10:51 AM, Moshe Levy wrote:
>
>
> .
> P
>
> On Wednesday, 31 August 2016, Alexander DEJANOVSKI
> wrote:
>
>> Hi Sid
Hi Paulo,
don't you think it might be better to keep applying the migration procedure
whatever the version ?
Anticompaction is pretty expensive on big SSTables and if the cluster has a
lot of data, the first run might be very very long if the nodes are dense,
and especially with a high number of v
Hi,
the analysis is valid, and strong consistency the Cassandra way means that
one client writing at quorum, then reading at quorum will always see his
previous write.
Two different clients have no guarantee to see the same data when using
quorum, as illustrated in your example.
Only options here
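The overlap condition being described can be sketched as a one-line check (illustrative only): a single client gets read-your-writes exactly when its write and read consistency levels together touch more replicas than RF.

```python
def quorum(rf: int) -> int:
    return rf // 2 + 1

def read_your_writes(rf: int, write_replicas: int, read_replicas: int) -> bool:
    # Overlap is guaranteed iff write + read replica counts exceed RF.
    return write_replicas + read_replicas > rf

rf = 3
w = r = quorum(rf)  # QUORUM at RF=3 touches 2 replicas
print(read_your_writes(rf, w, r))   # QUORUM writes + QUORUM reads: guaranteed
print(read_your_writes(rf, 1, 1))   # ONE + ONE: no overlap guarantee
```

Two *different* clients get no such guarantee because the second client's read may race the first client's still-in-flight write, as the example in the thread illustrates.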
d --> conflict
> Because a quorum (2 nodes) responded, the coordinator will return the
> latest time stamp and may issue read repair depending on YAML settings.
>
> So where do you see only one client having this guarantee?
>
> Regards,
>
> James
>
> On Sep 14, 2016, at 4:00 A
the foreground before the response is returned to the client.
> So, at least from a single client's perspective, you get monotonic reads.
>
>
> --
> Tyler Hobbs
> DataStax <http://datastax.com/>
>
BlockingQueue.java:339)
> ~[na:1.8.0_60]
> at
> org.apache.cassandra.net.OutboundTcpConnection.enqueue(OutboundTcpConnection.java:168)
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> ... 6 common frames omitted
>
>
> Now if I run nodetool repair I get the
>
> *java.lan
he tables. 10.45.113.88 is the ip of the machine I am running
> the nodetool on.
> I'm wondering if this is normal...
>
> Thanks,
> Robert
>
>
>
>
> Robert Sicoie
>
> On Wed, Sep 28, 2016 at 11:53 AM, Alexander Dejanovski <
> a...@thelastpickle.com>
> Is there other way I can find out if is there any anticompaction running
> on any node?
>
> Thanks a lot,
> Robert
>
> Robert Sicoie
>
> On Wed, Sep 28, 2016 at 4:44 PM, Alexander Dejanovski <
> a...@thelastpickle.com> wrote:
>
>> Robert,
>>
>
repair. On others there are less
> pending repairs (min 12). Is there any recomandation for the restart order?
> The one with more less pending repairs first, perhaps?
>
> Thanks,
> Robert
>
> Robert Sicoie
>
> On Wed, Sep 28, 2016 at 5:35 PM, Alexander Dejanovski <
>
how to verify and debug this issue. Help will be
> appreciated.
>
>
> --
> Regards,
> Atul Saroha
>
> *Lead Software Engineer | CAMS*
>
> M: +91 8447784271
> Plot #362, ASF Center - Tower A, 1st Floor, Sec-18,
> Udyog Vihar Phase IV,Gurgaon, Haryana, India
>
ome materialized view. Some have values over 500MB. How this affects
> performance? What can/should be done? I suppose is a problem in the schema
> design.
>
> Thanks,
> Robert Sicoie
>
ng repair
> of same partition on other box for same partition range. We saw error
> validation failed with some ip as repair in already running for the same
> SSTable.
> Just few days back, we had 2 DCs with 3 nodes each and replication was
> also 3. It means all data on each nod
e suggestions mentioned by *brstgt* which we can try on our
> side.
>
> On Thu, Sep 29, 2016 at 5:42 PM, Atul Saroha
> wrote:
>
>> Thanks Alexander.
>>
>> Will look into all these.
>>
>> On Thu, Sep 29, 2016 at 4:39 PM, Alexander Dejanovski <
>>
low error.
>
>
> A repair run already exist for the same cluster/keyspace/table but with a
> different incremental repair value.Requested value: true | Existing value:
> false
>
>
> --
> On Wed, Oct 19, 2016 at 4
continually anti compacting? If we do a full repair on each node
> with the -pr flag, will subsequent full repairs also force anti compacting
> most (all?) sstables?
>
> Thanks,
>
> Sean
>
compacting
most (all?) sstables?
Thanks,
Sean
one
>
> On Oct 19, 2016, at 9:54 AM, Alexander Dejanovski
> wrote:
>
> Hi Kant,
>
> subrange is a form of full repair, so it will just split the repair
> process in smaller yet sequential pieces of work (repair is started giving
> a start and end token). Overall, you sh
nches, which should soon be merged to master.
On Wed, Oct 19, 2016 at 7:03 PM, Kant Kodali wrote:
Also any suggestions on a tool to orchestrate the incremental repair? Like
say most commonly used
Sent from my iPhone
On Oct 19, 2016, at 9:54 AM, Alexander Dejanovski
wrote:
Hi Kant,
subrange is a
ising
> anticompactions.
>
> Kurt Greaves
> k...@instaclustr.com
> www.instaclustr.com
>
ut I'm wondering how do other
Cassandra users manage repairs ?
Vincent.
ke 5 days or more. We were never able to run
> one to completion. I'm not sure it's a good idea to disable autocompaction
> for that long.
>
> But maybe I'm wrong. Is it possible to use incremental repairs on some
> column family only ?
>
>
> On Thu, Oct 2
ess big partitions are around 500Mb and less.
>
>
> On Thu, Oct 27, 2016, at 05:37 PM, Alexander Dejanovski wrote:
>
> Oh right, that's what they advise :)
> I'd say that you should skip the full repair phase in the migration
> procedure as that will obviously fail
21:28, Vincent Rischmann wrote:
> Yeah that particular table is badly designed, I intend to fix it, when the
> roadmap allows us to do it :)
> What is the recommended maximum partition size ?
>
> Thanks for all the information.
>
>
> On Thu, Oct 27, 2016, at 08:14 PM, A
.com/watch?v=N3mGxgnUiRY
Slides :
http://www.slideshare.net/DataStax/myths-of-big-partitions-robert-stupp-datastax-cassandra-summit-2016
Cheers,
On Fri, Oct 28, 2016 at 4:09 PM Eric Evans
wrote:
> On Thu, Oct 27, 2016 at 4:13 PM, Alexander Dejanovski
> wrote:
> > A few patches are
.jar server
> cassandra-reaper.yaml
> 3. ./bin/spreaper repair production users
>
,
> "intensity": 0.900,
> "keyspace_name": "users",
> "last_event": "no events",
> "owner": "root",
> "pause_time": null,
> "repair_parallelism": "DATACENTER_AWARE",
>
ogs
>
> On Tue, Nov 1, 2016 at 10:25 AM, Alexander Dejanovski <
> a...@thelastpickle.com> wrote:
>
> Do you have anything in the reaper logs that would show a failure of some
> sort ?
> Also, can you tell me which version of Cassandra you're using ?
>
> Tha
>
>
> Shalom Sagges
> DBA
> T: +972-74-700-4035 <+972%2074-700-4035>
7;s possible I missed it because I have
> no idea what to look for exactly.
>
> Anyone have some advice for troubleshooting this ?
>
> Thanks.
>
> --
4.6MB, 98% around 2MB.
Could the 1% here really have that much impact ? We do write a lot to the
biggest table and read quite often too, however I have no way to know if
that big partition is ever read.
On Mon, Nov 21, 2016, at 01:09 PM, Alexander Dejanovski wrote:
Hi Vincent,
one of the usual
t;
> Thanks again for the help
>
>
> On Tue, Nov 1, 2016 at 12:26 PM, Jai Bheemsen Rao Dhanwada <
> jaibheem...@gmail.com> wrote:
>
> ok thank you,
> I will try and update you.
>
> On Tue, Nov 1, 2016 at 10:57 AM, Alexander Dejanovski <
> a...@thelastpickle.
rage tombstones per slice (last five minutes): 1108.2466913854232
> Maximum tombstones per slice (last five minutes): 22602.0
>
> - regarding swap, it's not disabled anywhere, I must say we never really
> thought about it. Does it provide a significant benefit ?
>
> Thanks for you
he default gc_grace_period of 10 days. Are there any
> reasons to run repairing more often that once per 10 days, for a case
> when previous repairing fails?
> - how to monitor start and finish times of repairs, and if the runs were
> successful? Does the "nodetool repair" co
va:1142)
> ~[na:1.8.0_60]
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [na:1.8.0_60]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
>
> On the node /x.x.x.y
>
> Do you any suggestion?
> Thank you in advance,
> Robert
>
>
> --
cremental: true
>
> Don't want to swamp you with more details or unnecessary logs, especially
> as I'd have to sanitize them before sending them out, so please let me know
> if there is anything else I can provide, and I'll do my best to get it to
> you.
oolExecutor.java:1142)
>
> Hope it helps!
>
> Regards,
> Bhuvan
>
> According to
> https://medium.com/@mlowicki/cassandra-reaper-introduction-ed73410492bf#.f0erygqpk
> :
>
> Segment runner has protection mechanism to avoid overloading nodes using
> two s
> I believe that output of compactionstats shows you the size of
> *uncompressed* data. Can you check (with nodetool tablestats) your
> compression ratio?
>
> --
> Alex
>
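As a rough illustration of the point above (hypothetical numbers): with the `SSTable Compression Ratio` that `nodetool tablestats` reports (compressed size divided by uncompressed size), on-disk size is roughly the uncompressed size scaled by that ratio, so a compaction that looks huge in `compactionstats` can land much smaller on disk.

```python
def on_disk_size(uncompressed_bytes: float, compression_ratio: float) -> float:
    """compression_ratio as reported by nodetool tablestats: compressed / uncompressed."""
    return uncompressed_bytes * compression_ratio

GIB = 1024 ** 3
# Hypothetical: compactionstats shows ~577 GiB of uncompressed data
# at a ratio of ~0.32, similar to figures seen earlier in this thread.
print(on_disk_size(577 * GIB, 0.32) / GIB)  # roughly 185 GiB actually written
```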
> --
to add new node,
> as data streams to new nodes from nodes of group to which it is added)
>
> OR
>
> Boootstrap/add 2(multiple nodes) at a time?
>
>
> Please suggest better way to fix this.
>
> Thanks in advance
>
> Techpyaasa
>
>
>
>
> --
>
uerying by Primary keys, the second query will have 100k+ primary key id’s
> in the WHERE clause, and the second solution looks like an anti pattern in
> cassandra.
>
> Could anyone give any advice how would we create a model for our use case?
>
> Thank you in advance,
> Zoltan.
>
>
>
> --
se, with RF= 4 instead of 3, with several clients accessing keys
> same key ranges, a coordinator could pick up one node to handle the request
> in 4 replicas instead of picking up one node in 3 , thus having
> more "workers" to handle a request ?
>
> Am I wrong here ?
>
> Thank you for the clarification
>
>
> --
> best,
> Alain
>
>
> --
ems with > 8 cores, the default ParallelGCThreads is 5/8 the
> number of logical cores.
>
> # Otherwise equal to the number of cores when 8 or less.
>
> # Machines with > 10 cores should try setting these to <= full cores.
>
> #-XX:ParallelGCThreads=16
>
> # By default,
urrently
>> using TWCS and have some new use cases for performing deletes. So far I
>> have avoided performing deletes, but I am wondering what issues I might run
>> into.
>>
>>
>> - John
>>
>>
>>
>
>
> --
>
> - John
>
earch but does
>> it result in a table scan? for example I can have the following
>> >
>> > create table hello(
>> > a text,
>> > b int,
>> > c text,
>> > d text,
>> > primary key((a,b), c)
>> > );
>> >
>> > Now I can do sel
keyspaces?
>>
>> Please advice.
>>
>>
>>
>
>
> --
> Regards,
>
> Manikandan Srinivasan
>
> Director, Product Management| +1.408.887.3686 |
> manikandan.sriniva...@datastax.com
>
gged almost always correspond to times
> where our schedules SELECTs are happening. That narrows the scope a little,
> but still.
>
> Anyway, I'd appreciate any information about troubleshooting this scenario.
> Thanks.
>
is how we process each
> user update:
> - SELECT from a "master" slug to get the fields we need
> - from that, compute a list of slugs the user had and a list of slugs
> the user should have (for example if he changes timezone we have to update
> the slug)
> - delet
he.cassandra.repair.ValidationTask.treesReceived(ValidationTask.java:68)
> ~[apache-cassandra-3.0.14.jar:3.0.14]
>
> at
> org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:178)
> ~[apache-cassandra-3.0.14.jar:3.0.14]
>
> at
>
56683870319],
> (-3221630728515706463,-3206856875356976885],
> (-1193448110686154165,-1161640137086921883],
> (-3356304907368646189,-3346460884208327912],
> (3466596314109623830,346814432669172],
> (-9050241313548454460,-9005441616028750657],
> (402227699082311580,40745
gt; 2017-09-18 07:59:17 repair finished
>
>
>
>
>
> If running the above nodetool call sequentially on all nodes, repair
> finishes without printing a stack trace.
>
>
>
> The error message and stack trace isn’t really useful here. Any further
> ideas/experiences?
>
>
> --
eeing some but not a whole lot of dropped mutations. nodetool
> tpstats looks ok.
>
> The growing number of SSTables really makes me think this is an I/O issue.
> Casssandra is running in a kubernetes cluster using a SAN which is another
> reason I suspect I/O.
>
> What are s
a wrong Load (nearly 2TB per node instead of 300GB) => we are
> loading some data for a week now, it seems that this can happen sometimes
>
> If anyone ever experienced that kind of behavior I'd be glad to know
> whether it is OK or not, I'd like to avoid manually triggering JMX
> UserDefinedCompaction ;)
>
> Thank you
>
> --