Re: Upgrade to version 3

2018-10-17 Thread Anup Shirolkar
Hi,

Yes, you can upgrade from 2.2 to 3.11.3.

The upgrade steps are documented on many blogs and sites. You can follow:
https://myopsblog.wordpress.com/2017/12/04/upgrade-cassandra-cluster-from-2-x-to-3-x/

You should read NEWS.txt for release-specific notes while planning the
upgrade:
https://github.com/apache/cassandra/blob/trunk/NEWS.txt

Please see the mail archive thread below for your specific case of 2.2 to 3.x:
https://www.mail-archive.com/user@cassandra.apache.org/msg45381.html
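
For reference, the per-node rolling sequence usually looks roughly like
this (a sketch; service and package names depend on your install):

# Run on each node, one node at a time:
nodetool drain                         # flush memtables, stop accepting traffic
sudo service cassandra stop            # stop the old version
sudo apt-get install cassandra=3.11.3  # or yum; package manager is an assumption
sudo service cassandra start           # start on the new version
nodetool upgradesstables               # rewrite SSTables to the new format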

Regards,

Anup Shirolkar




On Thu, 18 Oct 2018 at 09:30, Mun Dega  wrote:

> Hello,
>
> If we are upgrading from version 2.2 to 3.x, should we go directly to the
> latest version, 3.11.3?
>
> Anything we need to look out for? If anyone can point to an upgrade
> process, that would be great!
>
>
>


Upgrade to version 3

2018-10-17 Thread Mun Dega
Hello,

If we are upgrading from version 2.2 to 3.x, should we go directly to the
latest version, 3.11.3?

Anything we need to look out for? If anyone can point to an upgrade
process, that would be great!


Re: Mview in cassandra

2018-10-17 Thread Alain RODRIGUEZ
Hello,

The error might be related to your specific cluster setup (sstableloader /
1 node); I imagine the connectivity might be wrong, or the data might not
have been loaded properly for some reason. The data has to be available
(nodes up, and maybe 'nodetool refresh' or a Cassandra restart after the
sstableloader run), and the nodes have to be able to connect to each other.
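
For example, a minimal sketch to make freshly copied SSTables visible
without a restart (keyspace/table names are placeholders):

nodetool refresh my_keyspace my_table   # pick up SSTables placed in the data directory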

I tried to drop the Materialized View and recreate it, but the data is not
> getting populated with version 3.11.1
>

What error do you get then, when recreating the view (if any)?
Are normal reads working? And when reading with a consistency level of ALL?
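
For example, a quick check in cqlsh (keyspace/view names are placeholders):

cqlsh> CONSISTENCY ALL;
cqlsh> SELECT * FROM my_keyspace.my_view LIMIT 10;

If this fails while the same query at CONSISTENCY ONE works, some replicas
are probably down or unreachable.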

Overall, I wanted to share some information about MVs that might matter to
you. The current recommendation around materialized views is as follows:
"Do not use them unless you know exactly how they (don't) work."

http://mail-archives.apache.org/mod_mbox/cassandra-user/201710.mbox/%3cetpan.59f24f38.438f4e99.7...@apple.com%3E

According to Blake (though I think there is a broad consensus on this opinion):

Concerns about MV’s suitability for production are not uncommon, and this
> just formalizes
> the advice often given to people considering materialized views. That is:
> materialized views
> have shortcomings that can make them unsuitable for the general use case.
> If you’re not
> familiar with their shortcomings and confident they won’t cause problems
> for your use case,
> you shouldn’t use them



The shortcomings I’m referring to are:
> * There's no way to determine if a view is out of sync with the base table.
> * If you do determine that a view is out of sync, the only way to fix it
> is to drop and rebuild
> the view.
> Even in the happy path, there isn’t an upper bound on how long it will
> take for updates
> to be reflected in the view.
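
For completeness, the drop-and-rebuild mentioned above is just the
following (placeholder names; the view definition must match your base
table):

cqlsh> DROP MATERIALIZED VIEW my_keyspace.my_view;
cqlsh> CREATE MATERIALIZED VIEW my_keyspace.my_view AS
   ...     SELECT * FROM my_keyspace.base_table
   ...     WHERE id IS NOT NULL AND date IS NOT NULL
   ...     PRIMARY KEY (date, id);

The rebuild then repopulates the view from the base table, which can take
a long time on large tables.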


It was even said that the feature is complex and that we (the community,
but I *think* also committers and developers) don't have a complete
understanding of it. You might want to consider looking for a workaround
that does not involve MVs.
Also, C* 3.11.2 does not seem to bring any MV improvements over C* 3.11.1;
the feature was simply marked as experimental. Thus upgrading will probably
not help.

I imagine it's not what you wanted to hear, but I really would not stick
with MVs at the moment. If this cluster were under my responsibility, I
would probably consider redesigning the schema.

C*heers,
---
Alain Rodriguez - al...@thelastpickle.com
France / Spain

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com



On Tue, 16 Oct 2018 at 18:25, rajasekhar kommineni  wrote:

> Hi,
>
> I am seeing the below warning message in system.log after a data copy
> using sstableloader.
>
> WARN  [CompactionExecutor:972] 2018-10-15 22:20:39,308
> ViewBuilder.java:189 - Materialized View failed to complete, sleeping 5
> minutes before restarting
> org.apache.cassandra.exceptions.UnavailableException: Cannot achieve
> consistency level ONE
>
> I tried to drop the Materialized View and recreate it, but the data is
> not getting populated with version 3.11.1
>
> I tried the same on version 3.11.2 on a single-node dev box, and there I
> can query the Materialized View with data. Anybody have experience with
> MViews?
>
> Thanks,


Re: TWCS: Repair create new buckets with old data

2018-10-17 Thread wxn...@zjqunshuo.com
Hi Maik,
IMO, when using TWCS, you had better not run repair. With respect to repair,
TWCS behaves the same as STCS when merging SSTables, and the result is
SSTables spanning multiple time buckets, but maybe I'm wrong. In my use
case, I don't run repair on tables using TWCS.
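
If you still need to repair other tables in the same keyspace, you can
scope the command so the TWCS table is simply not touched (a sketch;
keyspace/table names are placeholders):

nodetool repair -pr my_keyspace table_a table_b   # repair only the listed tables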

-Simon
 
From: Caesar, Maik
Date: 2018-10-16 17:46
To: user@cassandra.apache.org
Subject: TWCS: Repair create new buckets with old data
Hello,
we work with Cassandra version 3.0.9 and have a problem with a table using
TWCS. The command "nodetool repair" always creates new files containing old
data, which prevents the old data from being deleted.
The layout of the table is as follows:
cqlsh> desc stat.spa
 
CREATE TABLE stat.spa (
    region int,
    id int,
    date text,
    hour int,
    zippedjsonstring blob,
    PRIMARY KEY ((region, id), date, hour)
) WITH CLUSTERING ORDER BY (date ASC, hour ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy',
        'compaction_window_size': '1', 'compaction_window_unit': 'DAYS',
        'max_threshold': '100', 'min_threshold': '4',
        'tombstone_compaction_interval': '86460'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.0
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';
 
Currently, the oldest data is from 2017/04/15 and is not being removed:
 
for f in *Data.db; do
    meta=$(sudo sstablemetadata $f)
    echo -e "Max:" $(date --date=@$(echo "$meta" | grep Maximum\ time | cut -d" " -f3 | cut -c 1-10) '+%Y/%m/%d %H:%M') \
            "Min:" $(date --date=@$(echo "$meta" | grep Minimum\ time | cut -d" " -f3 | cut -c 1-10) '+%Y/%m/%d %H:%M') \
            $(echo "$meta" | grep droppable) $(echo "$meta" | grep "Repaired at") ' \t ' \
            $(ls -lh $f | awk '{print $5" "$6" "$7" "$8" "$9}')
done | sort
Max: 2017/04/15 12:08 Min: 2017/03/31 13:09 Estimated droppable tombstones: 1.7731048805815162 Repaired at: 1525685601400   42K May 7 19:56 mc-22922-big-Data.db
Max: 2017/04/17 13:49 Min: 2017/03/31 13:09 Estimated droppable tombstones: 1.9600207684319835 Repaired at: 1525685601400   116M May 7 13:31 mc-15096-big-Data.db
Max: 2017/04/21 13:43 Min: 2017/04/15 13:34 Estimated droppable tombstones: 1.9090909090909092 Repaired at: 1525685601400   11K May 7 19:56 mc-22921-big-Data.db
Max: 2017/05/23 21:45 Min: 2017/04/21 14:00 Estimated droppable tombstones: 1.8360655737704918 Repaired at: 1525685601400   21M May 7 19:56 mc-22919-big-Data.db
Max: 2017/06/12 15:19 Min: 2017/04/25 14:45 Estimated droppable tombstones: 1.8091397849462365 Repaired at: 1525685601400   19M May 7 14:36 mc-17095-big-Data.db
Max: 2017/06/15 15:26 Min: 2017/05/10 14:37 Estimated droppable tombstones: 1.76536312849162 Repaired at: 1529612605539   9.3M Jun 21 22:31 mc-25372-big-Data.db
…
 
After a "nodetool repair" run, a new big data file is created that includes
old data from 2017/07/31.
 
Max: 2018/07/27 18:10 Min: 2017/03/31 13:13 Estimated droppable tombstones: 0.08392555471691247 Repaired at: 0   11G Sep 11 22:02 mc-39281-big-Data.db
…
Max: 2018/08/16 18:18 Min: 2018/08/06 12:19 Estimated droppable tombstones: 0.0 Repaired at: 1534525730510   123M Aug 17 23:46 mc-36847-big-Data.db
Max: 2018/08/17 19:20 Min: 2017/07/31 12:04 Estimated droppable tombstones: 0.03385963490004347 Repaired at: 0   11G Sep 11 21:43 mc-39265-big-Data.db
Max: 2018/08/17 19:20 Min: 2018/07/24 12:33 Estimated droppable tombstones: 0.0 Repaired at: 1534525730510   135M Sep 11 21:44 mc-39270-big-Data.db
…
Max: 2018/09/06 17:30 Min: 2018/08/28 12:17 Estimated droppable tombstones: 0.0 Repaired at: 1536690786879   129M Sep 11 21:10 mc-39238-big-Data.db
Max: 2018/09/07 18:22 Min: 2017/04/23 12:48 Estimated droppable tombstones: 0.1548442441468401 Repaired at: 0   8.0G Sep 11 21:33 mc-39258-big-Data.db
Max: 2018/09/07 18:22 Min: 2018/09/07 12:15 Estimated droppable tombstones: 0.0 Repaired at: 1536690786879   72M Sep 11 21:34 mc-39262-big-Data.db
Max: 2018/09/08 18:20 Min: 2018/08/22 12:17 Estimated droppable tombstones: 0.0 Repaired at: 0   2.8G Sep 11 21:47 mc-39272-big-Data.db
 
The tool sstableexpiredblockers shows that the file mc-39281-big-Data.db
blocks 95 expired SSTables from getting dropped, for example the oldest
file, mc-22922-big-Data.db.
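
For reference, the tool takes keyspace and table arguments and ships with
Cassandra's tools, e.g. for the table above:

sstableexpiredblockers stat spa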
 
[BigTableReader(path='.../stat/spa-.../mc-39281-big-Data.db') (minTS = 149095878253, maxTS = 1532707837676719, maxLDT = 1557154990)
  blocks 95 expired sstables from getting dropped:
 [BigTableReader(path='.../stat/spa-.../mc-36936-big-Data.db') (minTS =