Hi,
>>
>> You could manually trigger it with nodetool compact.
>>
>> /Oskar
>>
>> > On 8 nov. 2016, at 21:47, Lahiru Gamathige <lah...@highfive.com> wrote:
>> >
>> > Hi Users,
>> >
>> > I am thinking of migrating ou
Hi Users,
I am thinking of migrating our timeseries tables to use TWCS. I am using
JMX to set the new compaction strategy one node at a time, and I am not
sure how to confirm that, after the flush, compaction is done on each
node. I tried this in a small cluster, but after setting the compaction I
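For context, the cluster-wide way to make the switch (as opposed to the per-node JMX change described above) is a plain ALTER TABLE; the keyspace/table names and window settings below are hypothetical and should match your TTL and query patterns:

```sql
-- Hypothetical table; e.g. ~1-day windows suit data with a ~30-day TTL.
ALTER TABLE my_ks.sensor_readings
  WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1'
  };
```

Per-node progress can then be watched with `nodetool compactionstats`, which lists pending and active compactions on that node.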
Hi Rajesh,
Looking at your code, I see that memory would definitely grow because you
write big batches asynchronously; you end up with a large number of batch
statements, and they all end up slowing things down. We recently migrated
some data to C*, and what we did was create a data stream and wrote in
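For what it's worth, the usual way to keep memory bounded with async writes is to cap the number of in-flight requests with a semaphore instead of batching. A minimal sketch; `FakeSession` is an illustration-only stand-in, not the real driver session:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def write_throttled(session, statements, max_in_flight=128):
    """Submit writes asynchronously, never allowing more than
    max_in_flight outstanding at once, so memory stays bounded."""
    sem = threading.Semaphore(max_in_flight)
    futures = []
    for stmt in statements:
        sem.acquire()                       # blocks when the window is full
        fut = session.execute_async(stmt)   # stand-in for the driver call
        fut.add_done_callback(lambda _f: sem.release())
        futures.append(fut)
    for fut in futures:                     # drain the tail
        fut.result()
    return len(futures)

class FakeSession:
    """Illustration-only stand-in for a driver session."""
    def __init__(self, workers=8):
        self._pool = ThreadPoolExecutor(max_workers=workers)

    def execute_async(self, stmt):
        return self._pool.submit(lambda: stmt)
```

The callback releases the semaphore as each write completes, so the producer loop naturally slows to the cluster's pace rather than queuing unbounded statements.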
Hi Oleg,
I highly recommend contributing to the Apache documentation. I think C*
needs a lot more non-DataStax documentation.
Lahiru
On Thu, Nov 3, 2016 at 1:24 PM, Justin Cameron
wrote:
> Maybe a little off-tangent, but there is also a set of open source
> documentation
Since this is commented out, is this size check disabled, or does it
default to 256 MB? And if we have larger SSTables, are those going to be
marked corrupted?
On Tue, Nov 1, 2016 at 11:47 AM, Lahiru Gamathige <lah...@highfive.com>
wrote:
> Hi Users,
>
> I see tha
Hi Users,
I see that C* introduced max_value_size_in_mb, and if an SSTable contains
a value larger than this, the SSTable will be marked as corrupted. In our
current cluster I see tables with very large SSTables; if we are migrating
to the new version, should I increase this number?
But increasing max_value_size_in_mb to
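To gauge exposure before touching the setting, one quick first signal is simply listing Data.db files above the 256 MB default. A sketch; note that max_value_size_in_mb limits the size of individual values read from an SSTable, not the SSTable file size itself, so this only flags candidates worth inspecting:

```python
import os

def large_sstables(data_dir, threshold_mb=256):
    """Return (path, size_mb) for *-Data.db files larger than threshold_mb,
    biggest first. Oversized files don't necessarily hold oversized values,
    but they are the natural place to look first."""
    hits = []
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            if name.endswith("-Data.db"):
                path = os.path.join(root, name)
                size_mb = os.path.getsize(path) / (1024 * 1024)
                if size_mb > threshold_mb:
                    hits.append((path, round(size_mb, 1)))
    return sorted(hits, key=lambda t: -t[1])
```

Point `data_dir` at the node's data directory (commonly `/var/lib/cassandra/data`, though that path is installation-specific).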
ng to 3.0, no file format version bumps between
> 3.0 and 3.9
>
>
>
> (There was one format change in 3.6 – CASSANDRA-11206 should have probably
> bumped the version identifier, but we didn’t, and there’s nothing special
> you’d need to do for it anyway.)
>
>
>
Hi Users,
I am trying to find a migration guide from 2.1.* to 3.x and figured I
should go through NEWS.txt, so I read that and found a few things I should
be careful about / consider during the upgrade.
I'm curious whether there's any documentation with specific steps on how
to do the migration.
Anyone
I highly recommend moving to a newer Cassandra version first, because TTL
and compaction behavior are much more consistent.
On Wed, Oct 26, 2016 at 10:36 AM, Tyler Hobbs wrote:
>
> On Wed, Oct 26, 2016 at 10:07 AM, techpyaasa .
> wrote:
>
>> Can some one please
>
>> *COPY keyspace1.columnFamily1 FROM 'dump_data.csv'
>> WITH DEFAULT_TIME_TO_LIVE = '7200';*
>> I tried this way too, but again exception thrown saying "*Unrecognized
>> COPY FROM options: default_time_to_live*" :( :(
>>
>> On Wed, Oct 26, 2016 at 8:53 PM, L
You have to use default_time_to_live = 7200 — it is a table property set
with ALTER TABLE, not a COPY option.
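Concretely, setting the table default before running COPY FROM should give the loaded rows the TTL (table name taken from the thread):

```sql
-- Set a 2-hour default TTL on the table, then run COPY FROM without
-- a TTL option; newly inserted rows pick up the table default.
ALTER TABLE keyspace1.columnFamily1
  WITH default_time_to_live = 7200;
```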
On Wed, Oct 26, 2016 at 8:07 AM, techpyaasa . wrote:
> Hi all,
>
> I'm getting following exception when I try to set TTL using COPY command,
> where as it is working fine without TTL option. Followed doc at
>
Hi Jan,
Thanks for the response. My SSTables are < 3 MB and I have 3500+ SSTables
in the folder. When you say "if they are small", do you mean my file sizes
are small? I ran nodetool compact and nothing happened; then I ran
nodetool scrub, which removed 500 SSTables and then stopped.
Thanks for that
Hi Users,
I have a single server codebase deployed to multiple environments
(staging, dev, etc.). They all use a single Cassandra cluster, but the
keyspaces are prefixed with the environment name, so each environment has
its own keyspace to store data. I am using Cassandra 2.1.0 and using it to store