No, I set auto_bootstrap to true and the node was UN in nodetool status, but
when doing a SELECT on the node with consistency ONE I got incomplete data.
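
For reference, the check looked roughly like this (a sketch; the keyspace and
table names are placeholders, and even when connecting to the new node the
coordinator may pick a different replica at ONE):

    # connect to the new node and read at consistency ONE
    cqlsh <new-node-ip> -e "CONSISTENCY ONE; SELECT * FROM my_ks.my_table LIMIT 10;"
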
Jeff Jirsa <jji...@gmail.com> wrote on Thu, 3 Aug 2017 at 09:02:

> "nodetool status" shows node as UN (up normal) instead of UJ (up joining)
>
> What you're describing really sounds odd; something isn't adding up for me,
> but I'm not sure why. You shouldn't be able to query it directly until it's
> bootstrapped, as far as I know.
>
> Are you sure you're not joining as a seed node? Or with auto_bootstrap set
> to false?
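>
> A quick way to rule both out, and to see the joining state (a sketch; the
> yaml path assumes a package install):
>
>     # "Mode: JOINING" while bootstrapping, "Mode: NORMAL" once it's done
>     nodetool netstats | grep Mode
>     # a node listed in its own seeds skips bootstrap entirely, and
>     # auto_bootstrap defaults to true when the line is absent
>     grep -E 'auto_bootstrap|seeds' /etc/cassandra/cassandra.yaml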
>
>
> --
> Jeff Jirsa
>
>
> On Aug 2, 2017, at 11:52 PM, Daniel Hölbling-Inzko <
> daniel.hoelbling-in...@bitmovin.com> wrote:
>
> Thanks Jeff. How do I determine that bootstrap has finished? I haven't seen
> that anywhere so far.
>
> Reads via storage would be OK, as every query would be checked by another
> node too. I was only seeing inconsistencies because clients went directly to
> the node with consistency ONE.
>
> Greetings
> Jeff Jirsa <jji...@gmail.com> wrote on Wed, 2 Aug 2017 at 16:01:
>
>> By the time bootstrap is complete, the node should be as consistent as the
>> source node. You can set start_native_transport to false to avoid serving
>> clients directly (tcp/9042); it will still serve reads via the storage
>> service (tcp/7000), but the guarantee is that the data should be consistent
>> by the time bootstrap finishes.
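>>
>> A sketch of that approach (start_native_transport is the cassandra.yaml
>> setting above; re-enabling it via nodetool afterwards is my assumption
>> about the usual workflow):
>>
>>     # cassandra.yaml on the joining node, before first start:
>>     #   start_native_transport: false
>>     # once bootstrap has finished, start serving clients on tcp/9042:
>>     nodetool enablebinary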
>>
>>
>>
>>
>> --
>> Jeff Jirsa
>>
>>
>> > On Aug 2, 2017, at 1:53 AM, Daniel Hölbling-Inzko <
>> daniel.hoelbling-in...@bitmovin.com> wrote:
>> >
>> > Hi,
>> > It's probably a strange question, but I have a heavily read-optimized
>> > workload where data integrity is not a big deal. So to keep latencies
>> > low, I am reading with consistency ONE from my multi-DC cluster.
>> >
>> > Now the issue I saw is that I needed to add another Cassandra node (for
>> > redundancy reasons).
>> > Since I want this for redundancy, I booted the node and then changed the
>> > replication of my keyspace to include the new node (all nodes have 100%
>> > of the data).
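>> >
>> > The change was along these lines (a sketch; the keyspace name is a
>> > placeholder):
>> >
>> >     cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
>> >       {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 2};"
>> >     # note: ALTER KEYSPACE changes replica placement only; existing
>> >     # data is not streamed to the new replica until repair/rebuild
>> >     # or read repair brings it over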
>> >
>> > The issue I was seeing is that clients that connected to the new node
>> > afterwards were seeing incomplete data: the key would already be present,
>> > but the columns would all be null values.
>> > I expect this to die down once the node is fully replicated, but in the
>> > meantime a lot of my connected clients were in trouble. (The application
>> > can handle seeing old data; incomplete data is another matter altogether.)
>> >
>> > The total data in question is a negligible 500 kB (so nothing that should
>> > really take any amount of time, in my opinion, but it took a few minutes
>> > for the data to replicate over, and I am still not sure everything is
>> > replicated correctly).
>> >
>> > Increasing the RF to something higher won't really help, as the setup is
>> > dc1: 3; dc2: 2 (I added the second node in dc2). So a LOCAL_QUORUM in dc2
>> > would still be 2 nodes, which means I just can't lose either of them.
>> > Adding a third node is not really cost-effective for the current
>> > workloads these nodes need to handle.
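>> >
>> > (Quorum math: LOCAL_QUORUM needs floor(RF/2) + 1 replicas, so dc2 with
>> > RF 2 needs floor(2/2) + 1 = 2, i.e. both nodes; even RF 3 still needs
>> > floor(3/2) + 1 = 2, but then one of the three nodes may be lost.)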
>> >
>> > Any advice on how to avoid this in the future? Is there a way to start
>> > up a node that does not serve client requests but does replicate data?
>> >
>> > Greetings, Daniel
>>
