We ran this query; most of our files are less than 100 MB.
Our heap settings are as follows (they are calculated by the script in
cassandra-env.sh):
MAX_HEAP_SIZE=8GB
HEAP_NEWSIZE=2GB
which is the maximum recommended by DataStax.
What values do you think we should try?
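For context, the script mentioned above caps the heap rather than scaling it with RAM indefinitely. This is a sketch of the formula as I recall it from a 2.1-era cassandra-env.sh; verify against your actual script before relying on it:

```python
# Rough re-statement of calculate_heap_sizes() from a 2.1-era
# cassandra-env.sh, from memory -- check your own copy of the script.
def heap_sizes(system_memory_mb, cpu_cores):
    """Return (MAX_HEAP_SIZE, HEAP_NEWSIZE) in MB.

    MAX_HEAP_SIZE = max(min(ram/2, 1024 MB), min(ram/4, 8192 MB))
    HEAP_NEWSIZE  = min(100 MB * cores, MAX_HEAP_SIZE / 4)
    """
    max_heap = max(min(system_memory_mb // 2, 1024),
                   min(system_memory_mb // 4, 8192))
    heap_new = min(100 * cpu_cores, max_heap // 4)
    return max_heap, heap_new

# A 64 GB box with 32 HT cores lands exactly on the caps:
print(heap_sizes(64 * 1024, 32))  # (8192, 2048) -> 8 GB heap, 2 GB new gen
```

So on this hardware the script already yields the 8 GB / 2 GB pair quoted above; raising the values further means hand-editing cassandra-env.sh.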
On Thu, Feb 26, 2015 at 10:06 AM, Roland wrote:
Hi, Ron
I looked deeper into my Cassandra files, and the SSTables created during the
last day are less than 20 MB.
Piotrek
P.S. Your tips are really useful; at least I am starting to find where
exactly the problem is.
On Thu, Feb 26, 2015 at 3:11 PM, Ja Sam ptrstp...@gmail.com wrote:
Carlos Juzarte Rolo
Cassandra Consultant
Pythian - Love your data
rolo@pythian | Twitter: cjrolo | LinkedIn: linkedin.com/in/carlosjuzarterolo
www.pythian.com
On Wed, Feb 25, 2015 at 11:19 AM, Ja Sam ptrstp...@gmail.com wrote:
Hi,
I will write again when the SSTables and pending compactions decrease to zero.
In AGRAF, the minimum pending compactions figure is 2,500 and the maximum is
6,000 (the average on the OpsCenter screen is less than 5,000).
Regards
Piotrek.
P.S. I don't know why my mail client displays my name as Ja Sam instead of
Piotr Stapp, but this doesn't change anything.
I do NOT have SSDs; I have normal HDDs in a JBOD configuration.
My CFs use SizeTieredCompactionStrategy.
I am using LOCAL_QUORUM for reads and writes. To be precise, I have a lot of
writes and almost zero reads.
I changed cold_reads_to_omit to 0.0, as someone suggested, and I set the
compaction throughput to 999.
A screenshot from OpsCenter is here:
https://drive.google.com/file/d/0B4N_AbBPGGwLc25nU0lnY3Z5NDA/view
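For reference, cold_reads_to_omit is a SizeTieredCompactionStrategy subproperty in 2.0/2.1, so it is set per table via ALTER TABLE; a sketch with hypothetical keyspace/table names:

```sql
ALTER TABLE my_keyspace.my_table
WITH compaction = {
  'class': 'SizeTieredCompactionStrategy',
  'cold_reads_to_omit': '0.0'
};
```

Setting it to 0.0 stops STCS from excluding the coldest SSTables, so rarely-read SSTables become eligible for compaction again.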
On Wed, Feb 25, 2015 at 7:50 PM, Roni Balthazar ronibaltha...@gmail.com
wrote:
Hi Piotr,
Are your repairs finishing without errors?
Regards,
Roni Balthazar
On 25 February 2015 at 15:43, Ja Sam ptrstp...@gmail.com wrote:
Hi,
One more thing. Hinted Handoff for last week for all nodes was less than 5.
For me every READ is a problem, because it must open too many files (3
SSTables), which shows up as errors in reads, repairs, etc.
Regards
Piotrek
On Wed, Feb 25, 2015 at 8:32 PM, Ja Sam ptrstp...@gmail.com wrote:
…thoroughly used up, totally worn out, and loudly proclaiming "Wow! What a
Ride!" - Hunter Thompson
Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8144 9872
On Wed, Feb 25, 2015 at 11:01 AM, Ja Sam ptrstp...@gmail.com wrote:
Hi,
I wrote some questions before about my problems with my C* cluster. My whole
environment is described here:
https://www.mail-archive.com/user@cassandra.apache.org/msg40982.html
To sum up: I have thousands of SSTables in one DC and far fewer in the second.
I write only to the first DC.
Anyway after reading
The repair result was as follows (we ran it on Friday): Cannot proceed on
repair because a neighbor (/192.168.61.201) is dead: session failed
But to be honest, the neighbor did not die. The repair seemed to trigger a
series of full GC events on the initiating node. The entries from the logs are:
[2015-02-20
, 2015 at 11:58 AM, Roni Balthazar ronibaltha...@gmail.com
wrote:
Try repair -pr on all nodes.
If after that you still have issues, you can try to rebuild the SSTables
using nodetool upgradesstables or scrub.
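A dry-run sketch of this sequence; the node names are hypothetical, and the commands are only printed (pipe them to a shell, or switch to subprocess, once they look right):

```python
# Build the nodetool command sequence: repair primary ranges on every
# node first; only rebuild SSTables if problems persist afterwards.
NODES = ["dca-node1", "dca-node2", "dca-node3", "dca-node4", "dca-node5"]

def repair_commands(nodes, rebuild=False):
    # -pr repairs only each node's primary ranges, so running it on all
    # nodes covers the ring exactly once instead of RF times.
    cmds = ["nodetool -h %s repair -pr" % n for n in nodes]
    if rebuild:
        # upgradesstables rewrites every SSTable in the current format;
        # use scrub instead if you suspect corruption.
        cmds += ["nodetool -h %s upgradesstables" % n for n in nodes]
    return cmds

for cmd in repair_commands(NODES):
    print(cmd)
```

Running the repairs one node at a time (rather than all at once) keeps the load from validation compactions manageable on an already-backlogged cluster.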
Regards,
Roni Balthazar
On 18/02/2015, at 14:13, Ja Sam ptrstp...@gmail.com wrote:
, concurrent reads and
so on)
Regards,
Roni Balthazar
On Wed, Feb 18, 2015 at 9:51 AM, Ja Sam ptrstp...@gmail.com wrote:
Hi,
Thanks for your tip; it looks like something changed, but I still don't know
if it is OK.
My nodes started to do more compaction, and the pending compactions decreased
from many thousands to below a hundred; the SSTables are now much bigger, at
several gigabytes (most of them).
Cheers,
Roni Balthazar
On Tue, Feb 17, 2015 at 11:32 AM, Ja Sam ptrstp...@gmail.com wrote:
After some diagnostics (we had not yet set cold_reads_to_omit
at 11:07 AM, Ja Sam ptrstp...@gmail.com wrote:
I don't have problems with DC_B (the replica); only in DC_A (my system writes
only to it) do I have read timeouts.
I checked the SSTable count in OpsCenter and I have:
1) In DC_A, roughly the same (±10%) as last week, with a small increase over
the last 24h. The average size of a Data.db file is ~13 MB; I have a few
really big ones, but most are really small (almost all files are less than
100 MB).
2) In DC_B, the average size of a Data.db file is much bigger, ~260 MB.
Do you think the above flag will help us?
On Tue, Feb 17, 2015 at 9:04 AM, Ja Sam ptrstp...@gmail.com wrote:
I set crontab entries:
… `hostname` setcompactionthroughput 999
0 6 * * * root nodetool -h `hostname` setcompactionthroughput 16
Cheers,
Roni Balthazar
On Mon, Feb 16, 2015 at 7:47 PM, Ja Sam ptrstp...@gmail.com wrote:
*Environment*
1) Currently Cassandra 2.1.3; it was upgraded from 2.1.0 (suggested by Al
Tobey from DataStax)
2) not using vnodes
3) Two data centres: 5 nodes in one DC (DC_A), 4 nodes in the second DC (DC_B)
4) each node is set up on a physical box with two 16-Core HT Xeon
processors (E5-2660), 64GB RAM
One thing I do not understand: in my case compaction is running permanently.
Is there a way to check which compactions are pending? The only information
available is the total count.
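As far as I know, `nodetool compactionstats` only lists the compactions that are currently running (plus the aggregate pending count); there is no per-task listing of the pending queue. A sketch of pulling the running tasks out of its output, assuming the 2.1-era column layout (the sample text below is fabricated for illustration):

```python
# Fabricated sample of `nodetool compactionstats` output (2.1-era layout).
SAMPLE = """\
pending tasks: 3094
   compaction type   keyspace   table        completed   total       unit    progress
        Compaction   prod       events       1073741824  4294967296  bytes   25.00%
        Compaction   prod       events_idx   536870912   1073741824  bytes   50.00%
"""

def active_compactions(text):
    """Return (keyspace, table, progress) for each running compaction."""
    rows = []
    for line in text.splitlines():
        parts = line.split()
        # Data rows start with the task type, e.g. "Compaction"/"Validation".
        if parts and parts[0] == "Compaction":
            rows.append((parts[1], parts[2], parts[-1]))
    return rows

for ks, table, progress in active_compactions(SAMPLE):
    print(ks, table, progress)
```

Watching which tables keep reappearing here at least shows where the compaction backlog is concentrated, even if the pending queue itself is opaque.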
On Monday, February 16, 2015, Ja Sam ptrstp...@gmail.com wrote:
Of course I made a mistake: I am using 2.1.2. Anyway, the nightly build is
available from
http://cassci.datastax.com/job/cassandra-2.1/
I read about cold_reads_to_omit and it looks promising. Should I also set the
compaction throughput?
P.S. I am really sad that I didn't read this before:
Is there a simple way (or even a complicated one) to speed up a SELECT * FROM
[table] query?
I need to get all rows from one table every day. I split the data into one
table per day, but the query is still quite slow (200 million records). I was
thinking about running this query in parallel.
…and the batch size of a single query against one node.
Basically, what you (or the driver) should do is transform the query into a
series of SELECT * FROM table WHERE token(key) >= start AND token(key) <= stop
queries.
I will need to look up the actual code, but the idea should be clear :)
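The token-range idea above can be sketched as follows, assuming Murmur3Partitioner (token range -2^63 .. 2^63-1) and hypothetical table/column names; each generated statement can then be run on a separate connection in parallel:

```python
# Split the full Murmur3 token range into n contiguous, disjoint subranges
# and build one CQL statement per subrange.
MIN_TOKEN = -2**63       # Murmur3Partitioner minimum token
MAX_TOKEN = 2**63 - 1    # Murmur3Partitioner maximum token

def token_subranges(n):
    """Yield (start, end) pairs that exactly cover [MIN_TOKEN, MAX_TOKEN]."""
    step = (MAX_TOKEN - MIN_TOKEN) // n
    start = MIN_TOKEN
    for i in range(n):
        end = MAX_TOKEN if i == n - 1 else start + step
        yield (start, end)
        start = end + 1   # next subrange begins one past this one

def range_queries(table, key, n):
    """Build per-subrange SELECTs; >= / <= stays disjoint because each
    subrange starts one token past the previous end."""
    return [
        "SELECT * FROM %s WHERE token(%s) >= %d AND token(%s) <= %d"
        % (table, key, lo, key, hi)
        for lo, hi in token_subranges(n)
    ]

for q in range_queries("events_by_day", "event_id", 4):
    print(q)
```

Aligning the subrange boundaries with the actual token ownership of each node (from nodetool ring or the driver's metadata) would additionally let each query hit a single replica.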
Jirka H.
On 02/11/2015 11:26 AM, Ja Sam wrote:
to discard data. Some
of it may be recoverable with a nodetool repair after you're caught up on
compaction, but you will almost certainly lose some records.
On Tue, Jan 13, 2015 at 2:22 AM, Ja Sam ptrstp...@gmail.com wrote:
Ad 4) For sure I have a big problem, because pending tasks: 3094.
The question is: what should I change or monitor? I can present my whole
solution design, if it helps.
On Mon, Jan 12, 2015 at 8:32 PM, Ja Sam ptrstp...@gmail.com wrote:
To be precise about your remarks:
1) About the 30 sec GC: I know that after
*Environment*
- Cassandra 2.1.0
- 5 nodes in one DC (DC_A), 4 nodes in second DC (DC_B)
- 2,500 writes per second; I write only to DC_A with LOCAL_QUORUM
- minimal reads (usually none, sometimes few)
*Problem*
After a few weeks of running, I cannot read any data from my cluster,
, Jan 12, 2015 at 7:35 AM, Ja Sam ptrstp...@gmail.com wrote: