On 14/04/2013, at 11:56 AM, Rustam Aliyev rustam.li...@code.az wrote:
Just a followup on this issue. Due to the cost of shuffle, we decided not to do
it. Recently, we added new node and ended up in not well balanced cluster:
Datacenter: datacenter1
===
Status=Up/Down
[nodetool status output truncated]
When adding a new node with vnodes, shouldn't it be
assigned ranges randomly from all nodes?
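To picture why random token assignment should balance out, here is a toy
simulation (pure Python, not Cassandra code; the ring size, node names and
token counts are made up for illustration): each node owns num_tokens random
positions on the ring, so a joining node's random tokens take small slices
from every existing node rather than half the range of one neighbour.

```python
import random

RING = 2**127     # token space size, for illustration only
NUM_TOKENS = 256  # vnodes per node, like num_tokens in cassandra.yaml

def ownership(tokens_by_node):
    """Fraction of the ring each node owns, given its vnode tokens."""
    ring = sorted((t, n) for n, ts in tokens_by_node.items() for t in ts)
    owned = {n: 0 for n in tokens_by_node}
    for i, (tok, node) in enumerate(ring):
        prev = ring[i - 1][0]            # i=0 wraps to the last token
        owned[node] += (tok - prev) % RING
    return {n: owned[n] / RING for n in owned}

random.seed(42)
nodes = {f"node{i}": [random.randrange(RING) for _ in range(NUM_TOKENS)]
         for i in range(5)}
# A new node picks NUM_TOKENS random tokens and thus steals many small
# ranges from all existing nodes at once.
nodes["node5"] = [random.randrange(RING) for _ in range(NUM_TOKENS)]
shares = ownership(nodes)
for n, s in sorted(shares.items()):
    print(n, round(s, 3))
```

With 256 vnodes per node the shares land close to 1/6 each; with only a
handful of tokens per node the variance is much larger, which may be related
to the imbalance seen after adding a single node.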
Some other notes inline below:
On 08/04/2013 15:00, Eric Evans wrote:
[ Rustam Aliyev ]
Hi,
After upgrading to the vnodes I created and enabled shuffle operation as
suggested. After running for a couple of hours I had to disable it
because nodes were not catching up with compactions. I repeated this
process 3 times (enable/disable).
I have 5 nodes and each of them had ~35GB.
upgrade to 1.2.3.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 3/04/2013, at 4:09 AM, Rustam Aliyev rustam.li...@code.az wrote:
Hi,
I just wanted to share our experience of upgrading 1.0.10 to 1.2.3. It
happened that first we upgraded both of our two seeds to 1.2.3. And
basically after that old nodes couldn't communicate with new ones
anymore. Cluster was down until we upgraded all nodes to 1.2.3. We don't
have many
Hi Edward,
That's great news!
One thing I'd like to see in the new edition is Counters, known issues
and how to avoid them:
- avoid double counting (don't retry on failure, use write consistency
level ONE, use dedicated Hector connector?)
- delete counters (tricky, reset to zero?)
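A toy illustration of the double-counting trap (pure Python, nothing here is
Cassandra's actual client API): counter increments are not idempotent, and a
timeout doesn't tell you whether the increment was applied on the server, so
a naive retry loop inflates the count.

```python
import random

class FlakyCounter:
    """Toy model of a distributed counter: the increment is applied,
    but the acknowledgement may be lost, surfacing as a timeout."""
    def __init__(self):
        self.value = 0

    def incr(self, delta):
        self.value += delta              # write is applied server-side...
        if random.random() < 0.5:        # ...but the ack is lost
            raise TimeoutError("client never hears the ack")

random.seed(7)
counter = FlakyCounter()
attempts = 0
for _ in range(100):                     # client intends +1, 100 times
    while True:                          # naive retry-until-success
        attempts += 1
        try:
            counter.incr(1)
            break
        except TimeoutError:
            continue                     # retry re-applies the increment!

print("intended:", 100, "actual:", counter.value, "attempts:", attempts)
```

Every retried timeout adds an extra increment, which is why the advice above
is to not retry counter writes on failure.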
On 2/06/2012, at 7:53 AM, Rustam Aliyev wrote:
Hi all,
I have an SCF with ~250K rows. One of these rows is relatively large: a
wide row (according to compaction logs) containing ~100,000 super columns
with an overall size of 1GB. Each super column has an average size of 10K
and ~10 sub columns.
When I'm trying to delete ~90% of the columns in
No, it's not possible.
On 15/03/2012 10:53, Tamar Fraenkel wrote:
Watched the video, really good!
One question:
I wonder if it is possible to mix counter columns in
Cassandra 1.0.7 with regular columns in the same CF.
Hi,
If you use SizeTieredCompactionStrategy, you should have 2x disk space
to be on the safe side. So if you want to store 2TB of data, you need a
partition size of at least 4TB. LeveledCompactionStrategy is available
in 1.x and is supposed to require less free disk space (but comes at the
price of
space.
Why does a node suddenly need 2x more space for data it already has? Why
doesn't decreasing the token range lead to decreased disk usage?
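As a rule of thumb only (the exact overhead depends on the SSTable layout;
this helper and its numbers are illustrative, not an official formula):
size-tiered compaction can rewrite all of a column family's SSTables in one
compaction, so the old and the newly written copies coexist on disk until
the old ones are deleted, hence the 2x guidance.

```python
def required_disk_tb(data_size_tb, strategy="size_tiered"):
    """Rough free-space headroom per compaction strategy.

    size_tiered: worst case, a major compaction rewrites everything,
                 so you need as much free space as the data itself.
    leveled:     compacts small fixed-size SSTables, so the headroom
                 is a small fraction (~10% here, purely illustrative).
    """
    if strategy == "size_tiered":
        return data_size_tb * 2
    if strategy == "leveled":
        return data_size_tb * 1.1
    raise ValueError(f"unknown strategy: {strategy}")

print(required_disk_tb(2))             # 2TB of data -> 4TB partition
print(required_disk_tb(2, "leveled"))  # noticeably less headroom
```

This also answers the "why doesn't disk usage shrink" question: space is
reclaimed only after compaction finishes and the old SSTables are removed,
not when the token range is reduced.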
On 12.03.2012 15:14, Rustam Aliyev wrote:
Hi,
If you use SizeTieredCompactionStrategy, you should have x2 disk
space to be on the safe side. So if you want
Cassandra v1.0.8
once again: 4-nodes cluster, RF = 3.
On 12.03.2012 16:18, Rustam Aliyev wrote:
What version of Cassandra do you have?
On 12/03/2012 11:38, Vanger wrote:
We were aware of the compaction overhead, but still don't understand why
that should happen: node 'D' was in a stable condition
Hi Maxim,
If you need to store blobs, then a BlobStore such as OpenStack Object
Store (aka Swift) would be a better choice.
As far as I know, MogileFS (which is also a sort of BlobStore) has a
scalability bottleneck: MySQL.
There are a few reasons why BlobStores are a better choice. In the
Hi,
I was just about to upgrade to the latest 0.8.x, but noticed that
there's no RPM package for 0.8.9 on DataStax repo. Latest is 0.8.8.
Any plans to publish 0.8.9 rpm?
--
Rustam
On 14/12/2011 19:59, Sylvain Lebresne wrote:
The Cassandra team is pleased to announce the release of Apache
Great, will try 0.7.1 when it's ready.
(The bug I mentioned was already reported.)
On 19/01/2012 13:15, Andrei Savu wrote:
On Wed, Jan 18, 2012 at 7:58 PM, Rustam Aliyev rus...@code.az wrote:
Hi Andrei,
As you know, we are using Whirr for ElasticInbox
(https://github.com/elasticinbox/whirr-elasticinbox). While testing we
encountered a few minor problems which I think could be fixed. Note
that we were using 0.6 (there was a strange bug in 0.7, maybe fixed
already).
SuperColumns are not deprecated.
On Sat Jan 7 19:51:55 2012, R. Verlangen wrote:
My suggestion is simple: don't use any deprecated stuff out there. In
Hi Sasha,
Replying to the old thread just for reference. We've released a code
which we use to store emails in Cassandra as an open source project:
http://elasticinbox.com/
Hope you find it helpful.
Regards,
Rustam.
On Fri Apr 29 15:20:07 2011, Sasha Dolgy wrote:
Great read, thanks.
On
Hi Sasha,
There's been a lot of FUD regarding SuperColumns, but actually in
our case we found them quite useful.
The main argument for using SCs in this case is that message metadata is
immutable and in most cases read and written together (i.e.
you fetch all message headers
.
Regards,
Rustam.
On 18/11/2011 13:08, Dotan N. wrote:
Thanks!!
--
Dotan, @jondot http://twitter.com/jondot
On Fri, Nov 18, 2011 at 2:48 PM, Rustam Aliyev rus...@code.az wrote:
It's pleasing to see interest out there. We'll try to do some
cleanups and push
on
twitter if I can.
On 18 November 2011 00:37, Rustam Aliyev rus...@code.az wrote:
Hi Dotan,
We have already built something similar and were planning to open source
it. It will be available under http://www.elasticinbox.com/.
We haven't followed IBM's paper exactly; we believe our Cassandra model
design is more robust. It's written in Java and provides LMTP and REST
Hi David,
This is an interesting topic and it would be interesting to hear from
someone who is using it in prod.
In particular: how does your FS implementation behave for medium/large
files, e.g. 1MB?
If you store large files, how large is your store per node and how does
it handle compactions
On 13/02/2011 13:49, Janne Jalkanen wrote:
Folks,
as it seems that wrapping the brain around the R+W>N concept is a big hurdle
for a lot of users, I made a simple web page that allows you to try out the
different parameters and see how they affect the system.
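The rule the page demonstrates can also be checked by brute force; a small
sketch (node counts and quorum sizes here are arbitrary): a read sees the
latest write exactly when every possible read set of R replicas intersects
every possible write set of W replicas, which holds iff R + W > N.

```python
from itertools import combinations

def overlap_always(n, r, w):
    """Brute force: does every size-R read set intersect
    every size-W write set among n replicas?"""
    replicas = range(n)
    return all(set(rs) & set(ws)
               for rs in combinations(replicas, r)
               for ws in combinations(replicas, w))

def consistent(n, r, w):
    """The closed-form rule the simulator illustrates."""
    return r + w > n

# The closed form agrees with brute force for all small clusters.
for n in range(1, 6):
    for r in range(1, n + 1):
        for w in range(1, n + 1):
            assert overlap_always(n, r, w) == consistent(n, r, w)

# Classic examples for N=3 replicas:
for r, w in [(1, 1), (2, 2), (1, 3), (3, 1)]:
    label = "strong" if consistent(3, r, w) else "eventual"
    print(f"N=3 R={r} W={w} -> {label}")
```

So for N=3, QUORUM reads with QUORUM writes (R=W=2) overlap, while
R=W=1 gives only eventual consistency.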
will be adding this to trunk (and thus moving
Hector trunk to Cassandra 0.8.x) in the next week or two.
On Wed, Jan 19, 2011 at 6:12 PM, Rustam Aliyev rus...@code.az
mailto:rus...@code.az wrote:
Hi,
Does anyone use CASSANDRA-1072 counters patch with 0.7 stable
branch? I need
,
Rustam Aliyev.
http://www.linkedin.com/in/aliyev
Are there any plans to improve this in the future?
For big data clusters this could be very expensive. Based on your
comment, I will need 200TB of storage for 100TB of data to keep
Cassandra running.
--
Rustam.
On 09/12/2010 17:56, Tyler Hobbs wrote:
If you are on 0.6, repair is particularly
that Cassandra performs well with average disks, so you
don't need to spend a lot there. Additionally, most people find that
the replication protects their data enough to allow them to use RAID 0
instead of 1, 10, 5, or 6.
- Tyler
On Thu, Dec 9, 2010 at 12:20 PM, Rustam Aliyev rus...@code.az
-- it all depends on your access
rates. Anywhere from 10 GB to 1TB is typical.
- Tyler
On Thu, Dec 9, 2010 at 5:52 PM, Rustam Aliyev rus...@code.az
mailto:rus...@code.az wrote:
That depends on your scenario. In the worst case of one big CF,
there's not much that can be easily done