We have faced the same situation as yours in our production environment,
where we suddenly got "Unknown CF Exception" for materialized views too.
We are using Lagom apps with Cassandra for persistence. In our case,
since these views can be regenerated from the original events, we were
able to recover by rebuilding them.
A few suggestions from my operations experience:
1) Upgrade your Cassandra cluster to 3.11.2, because it contains a lot
of bug fixes specific to materialized views.
2) Never let your application create/update/delete Cassandra tables or
materialized views. Always create them manually, so that only one
connection is performing the schema operation.
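
As a sketch of point 2, this is roughly what manual schema creation
looks like in our setup, run once from a single cqlsh session before the
app is deployed (the keyspace, table and column names below are
placeholders, not your schema):

```sql
-- Hypothetical example: base table plus materialized view, created
-- manually so that exactly one connection performs schema changes.
CREATE TABLE IF NOT EXISTS shop.events (
    event_id timeuuid,
    user_id  uuid,
    payload  text,
    PRIMARY KEY (event_id)
);

-- Every base-table primary key column must appear in the view's key,
-- and every view key column must be filtered with IS NOT NULL.
CREATE MATERIALIZED VIEW IF NOT EXISTS shop.events_by_user AS
    SELECT event_id, user_id, payload
    FROM shop.events
    WHERE user_id IS NOT NULL AND event_id IS NOT NULL
    PRIMARY KEY (user_id, event_id);
```

If I remember correctly, akka-persistence-cassandra (which Lagom uses
underneath) also has keyspace-autocreate and tables-autocreate settings
you can turn off, so the app never attempts schema changes itself.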
On Wed, Jun 6, 2018 at 9:44 PM, <m...@vis.at> wrote:
> Hi Evelyn,
> thanks a lot for your detailed response message.
> The data is not important. We've already wiped the data and created a
> new Cassandra installation. The data re-import task is already running.
> We've lost a couple of months of data, but in this case that does not
> matter. Nevertheless we will try what you told us - just to be
> smarter/faster if this happens in production (where we will set up a
> Cassandra cluster with multiple nodes anyway). I will drop you a note
> when we are done.
> Hmmm... the problem is within a "View". Are these the materialized
> views? I'm asking this because:
> * Someone on the internet (Stack Overflow, if I recall correctly)
> mentioned that materialized views are going to be deprecated.
> * I attended a DataStax workshop in Zurich a couple of days ago, where
> a DataStax employee told me that we should not use materialized views -
> it is better to create & fill all tables directly.
> Would you also recommend not using materialized views? As this problem
> is related to a view, maybe we could avoid it simply by following that
> recommendation.
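> As a side note, here is roughly what "create & fill all tables
> directly" could look like for us instead of a view - a second query
> table that the application writes together with the base table (the
> keyspace, table and column names below are just an invented example):
>
> ```sql
> -- Hypothetical denormalized pair: the same data written to two tables,
> -- each keyed for one query pattern, instead of a materialized view.
> CREATE TABLE mykeyspace.events (
>     event_id timeuuid PRIMARY KEY,
>     user_id  uuid,
>     payload  text
> );
>
> CREATE TABLE mykeyspace.events_by_user (
>     user_id  uuid,
>     event_id timeuuid,
>     payload  text,
>     PRIMARY KEY (user_id, event_id)
> );
>
> -- The application keeps the two tables in sync itself, e.g. with a
> -- logged batch so that both writes eventually apply together.
> BEGIN BATCH
>     INSERT INTO mykeyspace.events (event_id, user_id, payload)
>         VALUES (50554d6e-29bb-11e5-b345-feff819cdc9f,
>                 123e4567-e89b-12d3-a456-426614174000, 'signup');
>     INSERT INTO mykeyspace.events_by_user (user_id, event_id, payload)
>         VALUES (123e4567-e89b-12d3-a456-426614174000,
>                 50554d6e-29bb-11e5-b345-feff819cdc9f, 'signup');
> APPLY BATCH;
> ```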
> Thanks a lot again!
> On 06.06.2018 16:48, Evelyn Smith wrote:
>> Hi Michael,
>> So I looked at the code; here are the stages in your error's stack trace:
>> 1. at
>> At this step Cassandra is running through the keyspaces in its
>> schema, turning off compactions for all tables before it starts
>> replaying the commit log (so it isn't an issue with the commit log).
>> 2. at org.apache.cassandra.db.Keyspace.open(Keyspace.java:127)
>> Loading the keyspace related to the column family that is erroring out.
>> 3. at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:324)
>> Cassandra has initialised the column family and is reloading the view
>> 4. at
>> At this point I haven't had enough time to tell whether Cassandra is
>> requesting info on a specific column or still requesting information
>> on a whole column family. Regardless, given we have already ruled out
>> issues with the SSTables and their directory, and Cassandra is yet to
>> start replaying the commit log, this suggests to me that something is
>> wrong in one of the system keyspaces storing the schema information.
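>> A quick way to poke at that theory, once you have a node (or a
>> restored copy of it) that will start: the stored schema is queryable
>> from cqlsh ('your_keyspace' below is a placeholder - use yours):
>>
>> ```sql
>> -- Inspect the stored definitions of the views and tables in the
>> -- affected keyspace; a corrupt or inconsistent row here would support
>> -- the theory that the system schema, not the data, is the problem.
>> SELECT keyspace_name, view_name, base_table_name
>> FROM system_schema.views
>> WHERE keyspace_name = 'your_keyspace';
>>
>> SELECT keyspace_name, table_name
>> FROM system_schema.tables
>> WHERE keyspace_name = 'your_keyspace';
>> ```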
>> There should definitely be a way to resolve this with zero data loss
>> by either:
>> 1. Fixing the issue in the system keyspace SSTables (hard)
>> 2. Replaying the commit log on a new Cassandra node that has been
>> restored from the current one (I'm not sure if this is possible, but
>> I'll figure it out tomorrow)
>> The alternative, if you are OK with losing the commit log, is to back
>> up the data and restore it to a new node (or the same node, but with
>> everything blown away). This isn't a trivial process, though I've done
>> it a few times.
>> How important is the data?
>> Happy to come back to this tomorrow (need some sleep)
>> On 5 Jun 2018, at 7:32 pm, m...@vis.at wrote:
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org