Re: Cassandra Needs to Grow Up by Version Five!

2018-02-21 Thread Daniel Hölbling-Inzko
But what does this video really show? That Microsoft managed to run
Cassandra as a SaaS product with a nice UI?
Google did that years ago with BigTable and Amazon with DynamoDB.

I agree that we need more tools, not so much for querying (although that
would also help a bit); in general the project just feels unapproachable
right now.
Besides the excellent DataStax documentation there is little best-practice
knowledge about how to operate and provision Cassandra clusters.
Having some recipes for Chef, Puppet or Ansible that show the most common
settings (or some Cloud Foundry/GCP templates or Helm charts) would be
really useful.
A list of all the projects that Cassandra goes well with (like TLP's Reaper,
Netflix's Priam, etc.) would also help.

greetings Daniel

On Wed, 21 Feb 2018 at 07:23 Kenneth Brotman <kenbrot...@yahoo.com.invalid>
wrote:

> If you watch this video through you'll see why usability is so important.
> You can't ignore usability issues.
>
> Cassandra does not exist in a vacuum.  The competitors are world class.
>
> The video is on the New Cassandra API for Azure Cosmos DB:
> https://www.youtube.com/watch?v=1Sf4McGN1AQ
>
> Kenneth Brotman
>
> -----Original Message-----
> From: Daniel Hölbling-Inzko [mailto:daniel.hoelbling-in...@bitmovin.com]
> Sent: Tuesday, February 20, 2018 1:28 AM
> To: user@cassandra.apache.org; James Briggs
> Cc: d...@cassandra.apache.org
> Subject: Re: Cassandra Needs to Grow Up by Version Five!
>
> Hi,
>
> I have to add my own two cents here as the main thing that keeps me from
> really running Cassandra is the amount of pain running it incurs.
> Not so much because it's actually painful but because the tools are so
> different and the documentation and best practices are scattered across a
> dozen outdated DataStax articles and this mailing list etc.. We've been
> hesitant (although our use case is perfect for using Cassandra) to deploy
> Cassandra to any critical systems as even after a year of running it we
> still don't have the operational experience to confidently run critical
> systems with it.
>
> Simple things like a foolproof / safe cluster-wide S3 Backup (like
> Elasticsearch has it) would for example solve a TON of issues for new
> people. I don't need it auto-scheduled or something, but having to
> configure cron jobs across the whole cluster is a pain in the ass for small
> teams.
> To be honest, even the way snapshots are done right now is already super
> painful. Every other system I operated so far will just create one backup
> folder I can export; in C* the backup is scattered across a bunch of
> different keyspace folders etc. Needless to say, it took a while until
> I trusted my backup scripts fully.
>
> And especially for a Database I believe Backup/Restore needs to be a
> non-issue that's documented front and center. If not, smaller teams just
> don't have the resources to dedicate to learning and building the tools
> around it.
>
> Now that the team is getting larger we could spare the resources to
> operate these things, but switching from a well-understood RDBMS schema to
> Cassandra is now incredibly hard and will probably take years.
>
> greetings Daniel
>
> On Tue, 20 Feb 2018 at 05:56 James Briggs <james.bri...@yahoo.com.invalid>
> wrote:
>
> > Kenneth:
> >
> > What you said is not wrong.
> >
> > Vertica and Riak are examples of distributed databases that don't
> > require hand-holding.
> >
> > Cassandra is for Java-programmer DIYers, or more often Datastax
> > clients, at this point.
> > Thanks, James.
> >
> > --
> > *From:* Kenneth Brotman <kenbrot...@yahoo.com.INVALID>
> > *To:* user@cassandra.apache.org
> > *Cc:* d...@cassandra.apache.org
> > *Sent:* Monday, February 19, 2018 4:56 PM
> >
> > *Subject:* RE: Cassandra Needs to Grow Up by Version Five!
> >
> > Jeff, you helped me figure out what I was missing.  It just took me a
> > day to digest what you wrote.  I’m coming over from another type of
> > engineering.  I didn’t know and it’s not really documented.  Cassandra
> > runs in a data center.  Nowadays that means the nodes are going to be
> > in managed containers, Docker containers, managed by Kubernetes,
> > Mesos or something, and for that reason anyone operating Cassandra in a
> > real-world setting would not encounter the issues I raised in the way I
> > described.
> >
> > Shouldn’t the architectural diagrams people reference indicate that in
> > some way?  That would have helped me.
> >
> > Kenneth Brotman
> >
> > *From:* Kenneth Brotman [mailto:kenbrot...@yahoo.com]
> > *Sent:* Monday, February 19, 2018

Re: Cassandra Needs to Grow Up by Version Five!

2018-02-20 Thread Daniel Hölbling-Inzko
Hi,

I have to add my own two cents here as the main thing that keeps me from
really running Cassandra is the amount of pain running it incurs.
Not so much because it's actually painful but because the tools are so
different and the documentation and best practices are scattered across a
dozen outdated DataStax articles and this mailing list etc.. We've been
hesitant (although our use case is perfect for using Cassandra) to deploy
Cassandra to any critical systems as even after a year of running it we
still don't have the operational experience to confidently run critical
systems with it.

Simple things like a foolproof / safe cluster-wide S3 Backup (like
Elasticsearch has it) would for example solve a TON of issues for new
people. I don't need it auto-scheduled or something, but having to
configure cron jobs across the whole cluster is a pain in the ass for small
teams.
To be honest, even the way snapshots are done right now is already super
painful. Every other system I operated so far will just create one backup
folder I can export; in C* the backup is scattered across a bunch of
different keyspace folders etc. Needless to say, it took a while until
I trusted my backup scripts fully.
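
Just to illustrate, the per-node dance currently ends up roughly like this
(bucket name and paths are invented, so read it as a sketch of the idea,
not my actual script):

$ TAG=backup-$(date +%F)
$ nodetool snapshot -t "$TAG" my_keyspace
$ cd /var/lib/cassandra/data
$ # one snapshots/$TAG directory appears per table - the scattering I mean
$ tar czf "/tmp/$TAG.tar.gz" $(find my_keyspace -type d -path "*/snapshots/$TAG")
$ aws s3 cp "/tmp/$TAG.tar.gz" "s3://my-backup-bucket/$(hostname)/"
$ nodetool clearsnapshot -t "$TAG"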

And especially for a Database I believe Backup/Restore needs to be a
non-issue that's documented front and center. If not, smaller teams just
don't have the resources to dedicate to learning and building the tools
around it.

Now that the team is getting larger we could spare the resources to operate
these things, but switching from a well-understood RDBMS schema to
Cassandra is now incredibly hard and will probably take years.

greetings Daniel

On Tue, 20 Feb 2018 at 05:56 James Briggs 
wrote:

> Kenneth:
>
> What you said is not wrong.
>
> Vertica and Riak are examples of distributed databases that don't require
> hand-holding.
>
> Cassandra is for Java-programmer DIYers, or more often Datastax clients,
> at this point.
> Thanks, James.
>
> --
> *From:* Kenneth Brotman 
> *To:* user@cassandra.apache.org
> *Cc:* d...@cassandra.apache.org
> *Sent:* Monday, February 19, 2018 4:56 PM
>
> *Subject:* RE: Cassandra Needs to Grow Up by Version Five!
>
> Jeff, you helped me figure out what I was missing.  It just took me a day
> to digest what you wrote.  I’m coming over from another type of
> engineering.  I didn’t know and it’s not really documented.  Cassandra runs
> in a data center.  Nowadays that means the nodes are going to be in managed
> containers, Docker containers, managed by Kubernetes, Mesos or something,
> and for that reason anyone operating Cassandra in a real-world setting
> would not encounter the issues I raised in the way I described.
>
> Shouldn’t the architectural diagrams people reference indicate that in
> some way?  That would have helped me.
>
> Kenneth Brotman
>
> *From:* Kenneth Brotman [mailto:kenbrot...@yahoo.com]
> *Sent:* Monday, February 19, 2018 10:43 AM
> *To:* 'user@cassandra.apache.org'
> *Cc:* 'd...@cassandra.apache.org'
> *Subject:* RE: Cassandra Needs to Grow Up by Version Five!
>
> Well said.  Very fair.  I wouldn’t mind hearing from others still.  You’re
> a good guy!
>
> Kenneth Brotman
>
> *From:* Jeff Jirsa [mailto:jji...@gmail.com ]
> *Sent:* Monday, February 19, 2018 9:10 AM
> *To:* cassandra
> *Cc:* Cassandra DEV
> *Subject:* Re: Cassandra Needs to Grow Up by Version Five!
>
> There's a lot of things below I disagree with, but it's ok. I convinced
> myself not to nit-pick every point.
>
> https://issues.apache.org/jira/browse/CASSANDRA-13971 has some of
> Stefan's work with cert management
>
> Beyond that, I encourage you to do what Michael suggested: open JIRAs for
> things you care strongly about, work on them if you have time. Sometime
> this year we'll schedule an NGCC (Next Generation Cassandra Conference)
> where we talk about future project work and direction, I encourage you to
> attend if you're able (I encourage anyone who cares about the direction of
> Cassandra to attend, it'll probably be either free or very low cost, just to
> cover a venue and some food). If nothing else, you'll meet some of the
> teams who are working on the project, and learn why they've selected the
> projects on which they're working. You'll have an opportunity to pitch your
> vision, and maybe you can talk some folks into helping out.
>
> - Jeff
>
>
>
>
> On Mon, Feb 19, 2018 at 1:01 AM, Kenneth Brotman <
> kenbrot...@yahoo.com.invalid> wrote:
> Comments inline
>
> >-----Original Message-----
> >From: Jeff Jirsa [mailto:jji...@gmail.com]
> >Sent: Sunday, February 18, 2018 10:58 PM
> >To: user@cassandra.apache.org
> >Cc: d...@cassandra.apache.org
> >Subject: Re: Cassandra Needs to Grow Up by Version Five!
> >
> >Comments inline
> >
> >
> >> On Feb 18, 2018, at 9:39 PM, Kenneth Brotman <
> kenbrot...@yahoo.com.INVALID> wrote:
> >>
> > >Cassandra feels like an unfinished program to me. The 

Re: Migrating a Limit/Offset Pagination and Sorting to Cassandra

2017-10-07 Thread Daniel Hölbling-Inzko
I now finished an implementation where I just save the pagination state to a
separate table and retrieve it to get to the next page.
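
The table looks roughly like this (names reconstructed from memory, so
treat it as a sketch):

cqlsh> CREATE TABLE pagination_state (
   ...   query_id text,      -- hash of customer + filters + sort order
   ...   last_offset int,    -- offset the stored paging state belongs to
   ...   paging_state blob,  -- raw paging state bytes from the driver
   ...   PRIMARY KEY (query_id)
   ... );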

So far it seems to work pretty well. But I have to do more thorough
testing.

Greetings.
On Wed 4. Oct 2017 at 19:42, Jon Haddad <j...@jonhaddad.com> wrote:

> Seems pretty overengineered, imo, given you can just save the pagination
> state as Andy Tolbert pointed out.
>
> On Oct 4, 2017, at 8:38 AM, Daniel Hölbling-Inzko <
> daniel.hoelbling-in...@bitmovin.com> wrote:
>
> Thanks for pointing me to Elassandra.
> Have you had any experience running this in production at scale? Not sure
> if I
>
> I think ES will enter the picture at some point since some things just
> don't work efficiently with Cassandra and so it's inevitable in the end.
> But I'd rather delay that step for as long as possible since it would add
> a lot of complexity and another layer of eventual consistency I'd rather
> not deal with at the moment :)
>
> greetings Daniel
>
> On Wed, 4 Oct 2017 at 08:36 Greg Saylor <gr...@net-virtual.com> wrote:
>
>> Without knowing other details, of course, have you considered using
>> something like Elassandra?  That is a pretty tightly integrated Cassandra +
>> Elastic Search solution.   You’d insert data into Cassandra like you do
>> normally, then query it with Elastic Search.  Of course this would increase
>> the size of your storage requirements.
>>
>> - Greg
>>
>>
>> On Oct 3, 2017, at 11:10 PM, Daniel Hölbling-Inzko <
>> daniel.hoelbling-in...@bitmovin.com> wrote:
>>
>> Thanks Kurt,
>> I thought about that but one issue is that we are doing limit/offset not
>> pages. So one customer can choose to page through the list in 10 Item
>> increments, another might want to page through with 100 elements per page.
>> So I can't have a clustering key that represents a page range.
>>
>> What I was thinking about doing was saving the paginationState in a
>> separate table along with limit/offset info of the last query the
>> paginationState originated from so I can use the last paginationState to
>> continue the iteration from if the customer requests the next page with the
>> same limit but a different offset.
>> This breaks down if the customer does a cold offset=1000 request but
>> that's something I can throw error messages for; what I do need to
>> support is a customer doing
>> Request 1: offset=0 + limit=100
>> Request 2: offset=100 + limit=100
>> Request 3: offset=200 + limit=100
>>
>> So next question would be: How long does the paginationState from the
>> driver stay valid? I was thinking about inserting the paginationState with a
>> TTL into another Cassandra table - not sure if that's smart though.
>>
>> greetings Daniel
>>
>> On Tue, 3 Oct 2017 at 12:20 kurt greaves <k...@instaclustr.com> wrote:
>>
>>> I get the impression that you are paging through a single partition in
>>> Cassandra? If so you should probably use bounds on clustering keys to get
>>> your "next page". You could use LIMIT as well here but it's mostly
>>> unnecessary. Probably just use the pagesize that you intend for the API.
>>>
>>> Yes you'll need a table for each sort order, which ties into how you
>>> would use clustering keys for LIMIT/OFFSET. Essentially just do range
>>> slices on the clustering keys for each table to get your "pages".
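>>>
>>> A minimal sketch of what I mean (invented table; "ck" is the clustering
>>> column the table is sorted by):
>>>
>>> cqlsh> SELECT * FROM ks.items WHERE customer_id = 'c1' LIMIT 100;
>>> cqlsh> -- next page: continue the range slice after the last ck seen
>>> cqlsh> SELECT * FROM ks.items WHERE customer_id = 'c1' AND ck > 'last-seen'
>>>    ...   LIMIT 100;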
>>>
>>> Also I'm assuming there's a lot of data per partition if in-mem sorting
>>> isn't an option, if this is true you will want to be wary of creating large
>>> partitions and reading them all at once. Although this depends on your data
>>> model and compaction strategy choices.
>>>
>>> On 3 October 2017 at 08:36, Daniel Hölbling-Inzko <
>>> daniel.hoelbling-in...@bitmovin.com> wrote:
>>>
>>>> Hi,
>>>> I am currently working on migrating a service that so far was MySQL
>>>> based to Cassandra.
>>>> Everything seems to work fine so far, but a few things in the old
>>>> service's API spec are posing some interesting data modeling challenges:
>>>>
>>>> The old service was doing Limit/Offset pagination which is obviously
>>>> something Cassandra can't really do. I understand how paginationState works
>>>> - but so far I haven't figured out a good way to make Limit/Offset work on
>>>> top of paginationState (as I need to be 100% backwards compatible).
>>>> The only ways which I could think of to make Limit/Offset work would
>>>> create scalability issues down the road.
>>>>
>>>> The old service allowed sorting by any field. If I understood correctly
>>>> that would require a table for each sort order right? (In-Mem sorting is
>>>> not an option unfortunately)
>>>> In doing so, how can I make the Java Datastax mapper save to another
>>>> table (I really don't want to be writing a subclass of the entity for each
>>>> table just to add the @Table annotation.)
>>>>
>>>> greetings Daniel
>>>>
>>>
>>>
>>
>


Re: Migrating a Limit/Offset Pagination and Sorting to Cassandra

2017-10-04 Thread Daniel Hölbling-Inzko
Thanks for pointing me to Elassandra.
Have you had any experience running this in production at scale? Not sure
if I

I think ES will enter the picture at some point since some things just
don't work efficiently with Cassandra and so it's inevitable in the end.
But I'd rather delay that step for as long as possible since it would add a
lot of complexity and another layer of eventual consistency I'd rather not
deal with at the moment :)

greetings Daniel

On Wed, 4 Oct 2017 at 08:36 Greg Saylor <gr...@net-virtual.com> wrote:

> Without knowing other details, of course, have you considered using
> something like Elassandra?  That is a pretty tightly integrated Cassandra +
> Elastic Search solution.   You’d insert data into Cassandra like you do
> normally, then query it with Elastic Search.  Of course this would increase
> the size of your storage requirements.
>
> - Greg
>
>
> On Oct 3, 2017, at 11:10 PM, Daniel Hölbling-Inzko <
> daniel.hoelbling-in...@bitmovin.com> wrote:
>
> Thanks Kurt,
> I thought about that but one issue is that we are doing limit/offset not
> pages. So one customer can choose to page through the list in 10 Item
> increments, another might want to page through with 100 elements per page.
> So I can't have a clustering key that represents a page range.
>
> What I was thinking about doing was saving the paginationState in a
> separate table along with limit/offset info of the last query the
> paginationState originated from so I can use the last paginationState to
> continue the iteration from if the customer requests the next page with the
> same limit but a different offset.
> This breaks down if the customer does a cold offset=1000 request but
> that's something I can throw error messages for; what I do need to
> support is a customer doing
> Request 1: offset=0 + limit=100
> Request 2: offset=100 + limit=100
> Request 3: offset=200 + limit=100
>
> So next question would be: How long does the paginationState from the driver
> stay valid? I was thinking about inserting the paginationState with a TTL into
> another Cassandra table - not sure if that's smart though.
>
> greetings Daniel
>
> On Tue, 3 Oct 2017 at 12:20 kurt greaves <k...@instaclustr.com> wrote:
>
>> I get the impression that you are paging through a single partition in
>> Cassandra? If so you should probably use bounds on clustering keys to get
>> your "next page". You could use LIMIT as well here but it's mostly
>> unnecessary. Probably just use the pagesize that you intend for the API.
>>
>> Yes you'll need a table for each sort order, which ties into how you
>> would use clustering keys for LIMIT/OFFSET. Essentially just do range
>> slices on the clustering keys for each table to get your "pages".
>>
>> Also I'm assuming there's a lot of data per partition if in-mem sorting
>> isn't an option, if this is true you will want to be wary of creating large
>> partitions and reading them all at once. Although this depends on your data
>> model and compaction strategy choices.
>>
>> On 3 October 2017 at 08:36, Daniel Hölbling-Inzko <
>> daniel.hoelbling-in...@bitmovin.com> wrote:
>>
>>> Hi,
>>> I am currently working on migrating a service that so far was MySQL
>>> based to Cassandra.
>>> Everything seems to work fine so far, but a few things in the old
>>> service's API spec are posing some interesting data modeling challenges:
>>>
>>> The old service was doing Limit/Offset pagination which is obviously
>>> something Cassandra can't really do. I understand how paginationState works
>>> - but so far I haven't figured out a good way to make Limit/Offset work on
>>> top of paginationState (as I need to be 100% backwards compatible).
>>> The only ways which I could think of to make Limit/Offset work would
>>> create scalability issues down the road.
>>>
>>> The old service allowed sorting by any field. If I understood correctly
>>> that would require a table for each sort order right? (In-Mem sorting is
>>> not an option unfortunately)
>>> In doing so, how can I make the Java Datastax mapper save to another
>>> table (I really don't want to be writing a subclass of the entity for each
>>> table just to add the @Table annotation.)
>>>
>>> greetings Daniel
>>>
>>
>>
>


Re: Migrating a Limit/Offset Pagination and Sorting to Cassandra

2017-10-04 Thread Daniel Hölbling-Inzko
Thanks Kurt,
I thought about that but one issue is that we are doing limit/offset not
pages. So one customer can choose to page through the list in 10 Item
increments, another might want to page through with 100 elements per page.
So I can't have a clustering key that represents a page range.

What I was thinking about doing was saving the paginationState in a
separate table along with limit/offset info of the last query the
paginationState originated from so I can use the last paginationState to
continue the iteration from if the customer requests the next page with the
same limit but a different offset.
This breaks down if the customer does a cold offset=1000 request but that's
something I can throw error messages for; what I do need to support is a
customer doing
Request 1: offset=0 + limit=100
Request 2: offset=100 + limit=100
Request 3: offset=200 + limit=100

So next question would be: How long does the paginationState from the driver
stay valid? I was thinking about inserting the paginationState with a TTL into
another Cassandra table - not sure if that's smart though.
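
Something along these lines is what I have in mind (a sketch - table,
names and values made up):

cqlsh> INSERT INTO pagination_state (query_id, last_offset, paging_state)
   ...   VALUES ('customer1:limit100', 100, 0x00aa11) USING TTL 3600;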

greetings Daniel

On Tue, 3 Oct 2017 at 12:20 kurt greaves <k...@instaclustr.com> wrote:

> I get the impression that you are paging through a single partition in
> Cassandra? If so you should probably use bounds on clustering keys to get
> your "next page". You could use LIMIT as well here but it's mostly
> unnecessary. Probably just use the pagesize that you intend for the API.
>
> Yes you'll need a table for each sort order, which ties into how you would
> use clustering keys for LIMIT/OFFSET. Essentially just do range slices on
> the clustering keys for each table to get your "pages".
>
> Also I'm assuming there's a lot of data per partition if in-mem sorting
> isn't an option, if this is true you will want to be wary of creating large
> partitions and reading them all at once. Although this depends on your data
> model and compaction strategy choices.
>
> On 3 October 2017 at 08:36, Daniel Hölbling-Inzko <
> daniel.hoelbling-in...@bitmovin.com> wrote:
>
>> Hi,
>> I am currently working on migrating a service that so far was MySQL based
>> to Cassandra.
>> Everything seems to work fine so far, but a few things in the old
>> service's API spec are posing some interesting data modeling challenges:
>>
>> The old service was doing Limit/Offset pagination which is obviously
>> something Cassandra can't really do. I understand how paginationState works
>> - but so far I haven't figured out a good way to make Limit/Offset work on
>> top of paginationState (as I need to be 100% backwards compatible).
>> The only ways which I could think of to make Limit/Offset work would
>> create scalability issues down the road.
>>
>> The old service allowed sorting by any field. If I understood correctly
>> that would require a table for each sort order right? (In-Mem sorting is
>> not an option unfortunately)
>> In doing so, how can I make the Java Datastax mapper save to another
>> table (I really don't want to be writing a subclass of the entity for each
>> table just to add the @Table annotation.)
>>
>> greetings Daniel
>>
>
>


Migrating a Limit/Offset Pagination and Sorting to Cassandra

2017-10-03 Thread Daniel Hölbling-Inzko
Hi,
I am currently working on migrating a service that so far was MySQL based
to Cassandra.
Everything seems to work fine so far, but a few things in the old service's
API spec are posing some interesting data modeling challenges:

The old service was doing Limit/Offset pagination which is obviously
something Cassandra can't really do. I understand how paginationState works
- but so far I haven't figured out a good way to make Limit/Offset work on
top of paginationState (as I need to be 100% backwards compatible).
The only ways which I could think of to make Limit/Offset work would create
scalability issues down the road.

The old service allowed sorting by any field. If I understood correctly
that would require a table for each sort order right? (In-Mem sorting is
not an option unfortunately)
In doing so, how can I make the Java Datastax mapper save to another table
(I really don't want to be writing a subclass of the entity for each table
just to add the @Table annotation.)

greetings Daniel


Re: Datastax Driver Mapper & Secondary Indexes

2017-09-26 Thread Daniel Hölbling-Inzko
Hi, I also just figured out that there is no schema generation in the
mapper.
Thanks for pointing me to the secondary index info. I'll have a look.

greetings Daniel

On Tue, 26 Sep 2017 at 09:42 kurt greaves <k...@instaclustr.com> wrote:

> If you've created a secondary index you simply query it by specifying it
> as part of the where clause. Note that you should really understand the
> drawbacks of secondary indexes before using them, as they might not be
> incredibly efficient depending on what you need them for.
> http://www.wentnet.com/blog/?p=77 and
> https://pantheon.io/blog/cassandra-scale-problem-secondary-indexes might
> help.
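>
> A minimal example (invented table, just to show the syntax):
>
> cqlsh> CREATE INDEX ON ks.users (email);
> cqlsh> SELECT * FROM ks.users WHERE email = 'user@example.com';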
>
> On 26 September 2017 at 07:17, Daniel Hölbling-Inzko <
> daniel.hoelbling-in...@bitmovin.com> wrote:
>
>> Hi,
>> I am currently moving an application from SQL to Cassandra using Java. I
>> successfully got the DataStax driver and the mapper up and running, but
>> can't seem to figure out how to set secondary indexes through the mapper.
>> I also can't seem to find anything related to indexes in the mapper
>> sources - am I missing something or is this missing from the client library?
>>
>> greetings Daniel
>>
>
>


Datastax Driver Mapper & Secondary Indexes

2017-09-26 Thread Daniel Hölbling-Inzko
Hi,
I am currently moving an application from SQL to Cassandra using Java. I
successfully got the DataStax driver and the mapper up and running, but
can't seem to figure out how to set secondary indexes through the mapper.
I also can't seem to find anything related to indexes in the mapper sources
- am I missing something or is this missing from the client library?

greetings Daniel


Re: Bootstrapping a new Node with Consistency=ONE

2017-08-03 Thread Daniel Hölbling-Inzko
That makes sense. Thank you so much for pointing that out Alex.
So long story short: once I am up to the RF I actually want (RF 3 per DC)
and am just adding nodes for capacity, joining the ring will work correctly
and no inconsistencies will exist.
If I just change the RF, the nodes don't have the data yet, so a repair
needs to be run.
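
So in my case that means running something like

$ nodetool repair -full my_keyspace

on every node after the RF change (if I got that right).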

Awesome - thanks so much.

greetings Daniel

On Thu, 3 Aug 2017 at 09:56 Oleksandr Shulgin <oleksandr.shul...@zalando.de>
wrote:

> On Thu, Aug 3, 2017 at 9:33 AM, Daniel Hölbling-Inzko <
> daniel.hoelbling-in...@bitmovin.com> wrote:
>
>> No, I set auto_bootstrap to true and the node was UN in nodetool status
>> but when doing a select on the node with ONE I got incomplete data.
>>
>
> What I think is happening here is not related to the new node being added.
>
> When you increase Replication Factor, that does not automatically
> redistribute the existing data.  It just makes other nodes responsible for
> portions of the data they might not really have yet.  So I would expect
> that all your nodes show some inconsistencies, before you run a full repair
> of the ring.
>
> I can fairly easily reproduce it locally with ccm[1], 3 nodes, version
> 3.0.13.
>
> $ ccm status
> Cluster: 'v3013'
> 
> node1: UP
> node3: UP
> node2: UP
>
> $ ccm node1 cqlsh
> cqlsh> create keyspace test_rf WITH replication = {'class':
> 'NetworkTopologyStrategy', 'datacenter1': 1};
> cqlsh> create table test_rf.t1(id int, data text, primary key(id));
> cqlsh> insert into test_rf.t1(id, data) values(1, 'one');
> cqlsh> select * from test_rf.t1;
>
>  id | data
> +--
>   1 |  one
>
> (1 rows)
>
> At this point selecting from t1 works correctly on any of the nodes with
> the default CL=ONE.
>
> If we would now increase the RF and try reading again, something
> surprising will happen:
>
> cqlsh> alter keyspace test_rf WITH replication = {'class':
> 'NetworkTopologyStrategy', 'datacenter1': 2};
> cqlsh> select * from test_rf.t1;
>
>  id | data
> +--
>
> (0 rows)
>
> And in my test this happens on all nodes at the same time.  Explanation is
> fairly simple: now a different node is responsible for the data that was
> written to only one other node previously.
>
> A repair in this tiny test is trivial:
> cqlsh> CONSISTENCY ALL;
> cqlsh> select * from test_rf.t1;
>
>  id | data
> +--
>   1 |  one
>
> (1 rows)
>
> And now the data can be read from any node again, since we did a "full
> repair".
>
> --
> Alex
>
> [1] https://github.com/pcmanus/ccm
>
>


Re: Bootstrapping a new Node with Consistency=ONE

2017-08-03 Thread Daniel Hölbling-Inzko
No, I set auto_bootstrap to true and the node was UN in nodetool status but
when doing a select on the node with ONE I got incomplete data.
Jeff Jirsa <jji...@gmail.com> wrote on Thu, 3 Aug 2017 at 09:02:

> "nodetool status" shows node as UN (up normal) instead of UJ (up joining)
>
> What you're describing really sounds odd. Something isn't adding up to me
> but I'm not sure why. You shouldn't be able to query it directly until it's
> bootstrapped as far as I know
>
> Are you sure you're not joining as a seed node? Or with auto bootstrap set
> to false?
>
>
> --
> Jeff Jirsa
>
>
> On Aug 2, 2017, at 11:52 PM, Daniel Hölbling-Inzko <
> daniel.hoelbling-in...@bitmovin.com> wrote:
>
> Thanks Jeff. How do I determine that bootstrap is finished? Haven't seen
> that anywhere so far.
>
> Reads via storage would be ok as every query would be checked by another
> node too. I was only seeing inconsistencies since clients went directly to
> the node with Consistency ONE
>
> Greetings
> Jeff Jirsa <jji...@gmail.com> wrote on Wed, 2 Aug 2017 at 16:01:
>
>> By the time bootstrap is complete it should be as consistent as the
>> source node - you can change start_native_transport to false to avoid
>> serving clients directly (tcp/9042), but it'll still serve reads via the
>> storage service (tcp/7000), but the guarantee is that data should be
>> consistent by the time bootstrap finishes
>>
>>
>>
>>
>> --
>> Jeff Jirsa
>>
>>
>> > On Aug 2, 2017, at 1:53 AM, Daniel Hölbling-Inzko <
>> daniel.hoelbling-in...@bitmovin.com> wrote:
>> >
>> > Hi,
>> > It's probably a strange question but I have a heavily read-optimized
>> payload where data integrity is not a big deal. So to keep latencies low I
>> am reading with Consistency ONE from my Multi-DC Cluster.
>> >
>> > Now the issue I saw is that I needed to add another Cassandra node (for
>> redundancy reasons).
>> > Since I want this for renduncancy I booted the node and then changed
>> the Replication of my Keyspace to include the new node (all nodes have 100%
>> of the data).
>> >
>> > The issue I was seeing is that clients that connected to the new Node
>> afterwards were seeing incomplete data - so the Key would already be
>> present, but the columns would all be null values.
>> > I expect this to die down once the node is fully replicated, but in the
>> meantime a lot of my connected clients were in trouble. (The application
>> can handle seeing old data - incomplete is another matter all together)
>> >
>> > The total data in question is a negligible 500kb (so nothing that
>> should really take any amount of time in my opinion but it took a few
>> minutes for the data to replicate over and I am still not sure everything
>> is replicated correctly).
>> >
>> > Increasing the RF to something higher won't really help as the setup is
>> dc1: 3; dc2: 2 (I added the second node in dc2). So a LOCAL_QUORUM in dc2
>> would still be 2 nodes which means I just can't loose either of them.
>> Adding a third node is not really cost effective for the current workloads
>> these nodes need to handle.
>> >
>> > Any advice on how to avoid this in the future? Is there a way to start
>> up a node that does not serve client requests but does replicate data?
>> >
>> > greetings Daniel
>>


Re: Bootstrapping a new Node with Consistency=ONE

2017-08-03 Thread Daniel Hölbling-Inzko
Thanks Jeff. How do I determine that bootstrap is finished? Haven't seen
that anywhere so far.
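
The closest I've found myself is the mode line in netstats - my own guess
at the right check, so please correct me:

$ nodetool netstats | grep Mode
Mode: JOINING    <- still bootstrapping; flips to NORMAL when done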

Reads via storage would be ok as every query would be checked by another
node too. I was only seeing inconsistencies since clients went directly to
the node with Consistency ONE

Greetings
Jeff Jirsa <jji...@gmail.com> wrote on Wed, 2 Aug 2017 at 16:01:

> By the time bootstrap is complete it should be as consistent as the source
> node - you can change start_native_transport to false to avoid serving
> clients directly (tcp/9042), but it'll still serve reads via the storage
> service (tcp/7000), but the guarantee is that data should be consistent by
> the time bootstrap finishes
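>
> (In cassandra.yaml that's the line "start_native_transport: false";
> "nodetool enablebinary" turns the native transport back on afterwards.)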
>
>
>
>
> --
> Jeff Jirsa
>
>
> > On Aug 2, 2017, at 1:53 AM, Daniel Hölbling-Inzko <
> daniel.hoelbling-in...@bitmovin.com> wrote:
> >
> > Hi,
> > It's probably a strange question but I have a heavily read-optimized
> payload where data integrity is not a big deal. So to keep latencies low I
> am reading with Consistency ONE from my Multi-DC Cluster.
> >
> > Now the issue I saw is that I needed to add another Cassandra node (for
> redundancy reasons).
> Since I want this for redundancy I booted the node and then changed the
> Replication of my Keyspace to include the new node (all nodes have 100% of
> the data).
> >
> > The issue I was seeing is that clients that connected to the new Node
> afterwards were seeing incomplete data - so the Key would already be
> present, but the columns would all be null values.
> > I expect this to die down once the node is fully replicated, but in the
> meantime a lot of my connected clients were in trouble. (The application
> can handle seeing old data - incomplete is another matter all together)
> >
> > The total data in question is a negligible 500kb (so nothing that should
> really take any amount of time in my opinion but it took a few minutes for
> the data to replicate over and I am still not sure everything is replicated
> correctly).
> >
> > Increasing the RF to something higher won't really help as the setup is
> dc1: 3; dc2: 2 (I added the second node in dc2). So a LOCAL_QUORUM in dc2
> would still be 2 nodes which means I just can't lose either of them.
> Adding a third node is not really cost effective for the current workloads
> these nodes need to handle.
> >
> > Any advice on how to avoid this in the future? Is there a way to start
> up a node that does not serve client requests but does replicate data?
> >
> > greetings Daniel
>


Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread Daniel Hölbling-Inzko
Thanks for the pointers Kurt!

I did increase the RF to N so that would not have been the issue.
DC migration is also a problem since I am using the Google Cloud Snitch. So
I'd have to take down the whole DC and restart anew (which would mess with
my clients as they only connect to their local DC).

As I said this was a small issue here - we were only seeing the issue for
5 minutes. But considering how miniscule the amount of data to replicate
was (400 rows with a total of 500kb) I am a bit worried on how to do this
once loads increases.

greetings Daniel

On Wed, 2 Aug 2017 at 11:50 kurt greaves  wrote:

> If you want to change RF on a live system your best bet is through DC
> migration (add another DC with the desired # of nodes and RF), and migrate
> your clients to use that DC. There is a way to boot a node and not join the
> ring, however I don't think it will work for new nodes (have not
> confirmed), also increasing RF in this way would only not be completely
> catastrophic if you were increasing RF to N (num nodes).
>


Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread Daniel Hölbling-Inzko
Hi,
It's probably a strange question but I have a heavily read-optimized
payload where data integrity is not a big deal. So to keep latencies low I
am reading with Consistency ONE from my Multi-DC Cluster.

Now the issue I saw is that I needed to add another Cassandra node (for
redundancy reasons).
Since I want this for redundancy I booted the node and then changed the
Replication of my Keyspace to include the new node (all nodes have 100% of
the data).

The issue I was seeing is that clients that connected to the new Node
afterwards were seeing incomplete data - so the Key would already be
present, but the columns would all be null values.
I expect this to die down once the node is fully replicated, but in the
meantime a lot of my connected clients were in trouble. (The application
can handle seeing old data - incomplete is another matter all together)

The total data in question is a negligible 500kb (so nothing that should
really take any amount of time in my opinion but it took a few minutes for
the data to replicate over and I am still not sure everything is replicated
correctly).

Increasing the RF to something higher won't really help as the setup is
dc1: 3; dc2: 2 (I added the second node in dc2). So a LOCAL_QUORUM in dc2
would still be 2 nodes which means I just can't lose either of them.
Adding a third node is not really cost effective for the current workloads
these nodes need to handle.

Any advice on how to avoid this in the future? Is there a way to start up a
node that does not serve client requests but does replicate data?

greetings Daniel


Re: Data Loss irreparabley so

2017-07-27 Thread Daniel Hölbling-Inzko
In that vein, Cassandra supports auto compaction and incremental repair.
Does this mean I have to set up cron jobs on each node to do a nodetool
repair, or is this taken care of by Cassandra anyway?
How often should I run nodetool repair?
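
Right now I am imagining something like this in every node's crontab,
staggered per node so the repairs don't overlap (a sketch, not a tested
recommendation):

# weekly, primary range only, so each node repairs just the data it owns
0 3 * * 0  nodetool repair -pr my_keyspace >> /var/log/cassandra-repair.log 2>&1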

Greetings Daniel
Jeff Jirsa  wrote on Thu, 27 July 2017 at 07:48:

>
>
> On 2017-07-25 15:49 (-0700), Roger Warner  wrote:
> > This is a quick informational question. I know that Cassandra can
> detect failures of nodes and repair them given replication and multiple DC.
> >
> > My question is can Cassandra tell if data was lost after a failure and
> node(s) “fixed” and resumed operation?
> >
>
> Sorta concerned by the way you're asking this - Cassandra doesn't "fix"
> failed nodes. It can route requests around a down node, but the "fixing" is
> entirely manual.
>
> If you have a node go down temporarily, and it comes back up (with its
> disk intact), you can see it "repair" data with a combination of active
> (anti-entropy) repair via nodetool repair, or by watching 'nodetool
> netstats' and see the read repair counters increase over time (which will
> happen naturally as data is requested and mismatches are detected in the
> data, based on your consistency level).
>
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


Re: Understanding gossip and seeds

2017-07-22 Thread Daniel Hölbling-Inzko
Seeds are there to bootstrap a node for the very first time when it has
zero knowledge about the ring.

I think I also read somewhere that seed nodes are periodically queried for
some sanity checks and therefore one should not include too many nodes in
the seed list.
kurt greaves  wrote on Sat, 22 July 2017 at 01:48:

> Haven't checked the code but pretty sure it's because it will always use
> the known state stored in the system tables. the seeds in the yaml are
> mostly for initial set up, used to discover the rest of the nodes in the
> ring.
>
> Once that's done there is little reason to refer to them again, unless
> forced.
>


Re: Don't print Ping caused error logs

2017-06-19 Thread Daniel Hölbling-Inzko
Just out of curiosity, how do you then make sure all nodes get the same
amount of traffic from clients without having to maintain a manual contact
points list of all Cassandra nodes in the client applications?
Especially with big C* deployments this sounds like a lot of work whenever
adding/removing nodes. Putting them behind an LB that can auto-discover
nodes (or manually adding them to the LB rotation etc.) sounds like a much
easier way.
I am thinking mostly about cloud LB systems like AWS ELB or GCP LB.

Or can the client libraries discover nodes and use other contact points for
subsequent requests? Having a bunch of seed nodes would be easier I guess.

Greetings Daniel
Akhil Mehra  wrote on Mon, 19 June 2017 at 11:44:

> Just in case you are not aware, using a load balancer is an anti-pattern.
> Please refer to (
> http://docs.datastax.com/en/landing_page/doc/landing_page/planning/planningAntiPatterns.html#planningAntiPatterns__AntiPatLoadBal
> )
>
> You can turnoff logging for a particular class using the nodetool
> setlogginglevel (
> http://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsSetLogLev.html
> ).
>
> In your case you can try setting the log level for
> org.apache.cassandra.transport.Message to warn using the following command
>
> nodetool setlogginglevel org.apache.cassandra.transport.Message WARN
>
> Obviously this will suppress all info level logging in the message class.
>
> I hope that helps.
>
> Cheers,
> Akhil
>
>
>
>
> On 19/06/2017, at 9:13 PM, wxn...@zjqunshuo.com wrote:
>
> Hi,
> Our cluster nodes are behind an SLB (Service Load Balancer) with a VIP and
> the Cassandra clients access the cluster via the VIP.
> system.log prints the below IOException every few seconds. I guess it's
> the SLB service, which pings port 9042 of the Cassandra nodes periodically
> and causes the exceptions to be printed.
> Is there any way to prevent these ping-caused exceptions from being printed?
>
>
> INFO  [SharedPool-Worker-1] 2017-06-19 16:54:15,997 Message.java:605 - 
> Unexpected exception during request; channel = [id: 0x332c09b7, /
> 10.253.106.210:9042]
> java.io.IOException: Error while read(...): Connection reset by peer
>
> at io.netty.channel.epoll.Native.readAddress(Native Method) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>
> at 
> io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.doReadBytes(EpollSocketChannel.java:675)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>
> at 
> io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:714)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>
> at 
> io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>
> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>
> at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_85]
>
> Cheer,
> -Simon
>
>
>


Re: Cassandra Snapshots and directories

2017-05-12 Thread Daniel Hölbling-Inzko
Hi Varun,
yes you are right - that's the structure that gets created. But if I want
to back up ALL columnfamilies at once this requires a quite complex rsync, as
Vladimir mentioned.
I can't just copy over the /data/keyspace directory as that contains all
the data AND all the snapshots. I really have to go through this
columnfamily by columnfamily which is annoying.
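
What I mean by complex: collecting just the snapshot ends up something
like this per keyspace (made-up paths, untested sketch):

$ cd /var/lib/cassandra/data
$ find my_keyspace -type d -path '*/snapshots/mysnapshot' \
    -exec rsync -aR {} /backup/staging/ \;

(rsync's -R keeps the keyspace/table/snapshots directory layout intact.)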

greetings Daniel

On Thu, 11 May 2017 at 22:48 Varun Gupta <var...@uber.com> wrote:

>
> I did not get your question completely, with "snapshot files are mixed
> in with the data files and backup files".
>
> When you call nodetool snapshot, it will create a directory with snapshot
> name if specified or current timestamp at
> /data///backup/. This directory will
> have all sstables, metadata files and schema.cql (if using 3.0.9 or higher).
>
>
> On Thu, May 11, 2017 at 2:37 AM, Daniel Hölbling-Inzko <
> daniel.hoelbling-in...@bitmovin.com> wrote:
>
>> Hi,
>> I am going through this guide to do backup/restore of cassandra data to a
>> new cluster:
>>
>> http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_backup_snapshot_restore_t.html#task_ds_cmf_11r_gk
>>
>> When creating a snapshot I get the snapshot files mixed in with the
>> normal data files and backup files, so it's all over the place and very
>> hard (especially with lots of tables per keyspace) to transfer ONLY the
>> snapshot.
>> (Mostly since there is a snapshot directory per table..)
>>
>> Am I missing something or is there some arcane shell command that filters
>> out only the snapshots?
>> Because this way it's much easier to just back up the whole data directory.
>>
>> greetings Daniel
>>
>
>


Cassandra Snapshots and directories

2017-05-11 Thread Daniel Hölbling-Inzko
Hi,
I am going through this guide to do backup/restore of cassandra data to a
new cluster:
http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_backup_snapshot_restore_t.html#task_ds_cmf_11r_gk

When creating a snapshot I get the snapshot files mixed in with the normal
data files and backup files, so it's all over the place and very hard
(especially with lots of tables per keyspace) to transfer ONLY the snapshot.
(Mostly since there is a snapshot directory per table..)

Am I missing something or is there some arcane shell command that filters
out only the snapshots?
Because this way it's much easier to just back up the whole data directory.

greetings Daniel


Re: Cassandra 3.10 has partial partition key search but does it result in a table scan?

2017-05-09 Thread Daniel Hölbling-Inzko
If you have to use ALLOW FILTERING for the query to work, it almost always
results in a table scan.
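
You can see it with tracing, using Kant's table from the mail below (the
trace will show a range scan across the cluster):

cqlsh> TRACING ON;
cqlsh> SELECT * FROM hello WHERE a='foo' ALLOW FILTERING;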

greetings Daniel

On Tue, 9 May 2017 at 15:33 Jon Haddad  wrote:

> I don’t see any way it wouldn’t.  Have you tried tracing it?
>
> > On May 9, 2017, at 8:32 AM, Kant Kodali  wrote:
> >
> > Hi All,
> >
> > It looks like Cassandra 3.10 has partial partition key search but does
> it result in a table scan? for example I can have the following
> >
> > create table hello(
> > a text,
> > b int,
> > c text,
> > d text,
> > primary key((a,b), c)
> > );
> >
> > Now I can do select * from hello where a='foo' allow filtering;// This
> works in 3.10, but I wonder if this query results in a table scan, and if so is
> there any way to limit such that I get max b?
> >
> > Thanks!
>
>
>


Re: A Single Dropped Node Fails Entire Read Queries

2017-03-10 Thread Daniel Hölbling-Inzko
Could there be network issues in connecting between the nodes? If node A
gets to be the query coordinator but can't reach B, and C is obviously
down, it won't get a quorum.

Greetings
Shalom Sagges <shal...@liveperson.com> wrote on Fri, 10 March 2017 at
10:55:

> @Ryan, my keyspace replication settings are as follows:
> CREATE KEYSPACE mykeyspace WITH replication = {'class':
> 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3', 'DC3': '3'}  AND
> durable_writes = true;
>
> CREATE TABLE mykeyspace.test (
> column1 text,
> column2 text,
> column3 text,
> PRIMARY KEY (column1, column2)
>
> The query is *select * from mykeyspace.test where column1='x';*
>
> @Daniel, the replication factor is 3. That's why I don't understand why I
> get these timeouts when only one node drops.
>
> Also, when I enabled tracing, I got the following error:
> *Unable to fetch query trace: ('Unable to complete the operation against
> any hosts', {: Unavailable('Error from server:
> code=1000 [Unavailable exception] message="Cannot achieve consistency level
> LOCAL_QUORUM" info={\'required_replicas\': 2, \'alive_replicas\': 1,
> \'consistency\': \'LOCAL_QUORUM\'}',)})*
>
> But nodetool status shows that only 1 replica was down:
> --  Address  Load   Tokens   OwnsHost ID
> Rack
> DN  x.x.x.235  134.32 MB  256  ?
> c0920d11-08da-4f18-a7f3-dbfb8c155b19  RAC1
> UN  x.x.x.236  134.02 MB  256  ?
> 2cc0a27b-b1e4-461f-a3d2-186d3d82ff3d  RAC1
> UN  x.x.x.237  134.34 MB  256  ?
> 5b2162aa-8803-4b54-88a9-ff2e70b3d830  RAC1
>
>
> I tried to run the same scenario on all 3 nodes, and only the 3rd node
> didn't fail the query when I dropped it. The nodes were installed and
> configured with Puppet so the configuration is the same on all 3 nodes.
>
>
> Thanks!
>
>
>
> On Fri, Mar 10, 2017 at 10:25 AM, Daniel Hölbling-Inzko <
> daniel.hoelbling-in...@bitmovin.com> wrote:
>
> LOCAL_QUORUM is computed from the replication factor in the DC, not from
> the number of nodes (quorum = floor(RF/2) + 1). With a replication factor
> of 2 the quorum is 2, so you cannot lose any replica, no matter how many
> nodes you have.
> With a replication factor of 3 you can lose one node and still satisfy the
> query.
> Ryan Svihla <r...@foundev.pro> wrote on Thu, 9 March 2017 at 18:09:
>
> whats your keyspace replication settings and what's your query?
>
> On Thu, Mar 9, 2017 at 9:32 AM, Shalom Sagges <shal...@liveperson.com>
> wrote:
>
> Hi Cassandra Users,
>
> I hope someone could help me understand the following scenario:
>
> Version: 3.0.9
> 3 nodes per DC
> 3 DCs in the cluster.
> Consistency Local_Quorum.
>
> I did a small resiliency test and dropped a node to check the availability
> of the data.
> What I assumed would happen is nothing at all. If a node is down in a 3
> nodes DC, Local_Quorum should still be satisfied.
> However, during the ~10 first seconds after stopping the service, I got
> timeout errors (tried it both from the client and from cqlsh).
>
> This is the error I get:
> *ServerError:
> com.google.common.util.concurrent.UncheckedExecutionException:
> com.google.common.util.concurrent.UncheckedExecutionException:
> java.lang.RuntimeException:
> org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out -
> received only 4 responses.*
>
>
> After ~10 seconds, the same query is successful with no timeout errors.
> The dropped node is still down.
>
> Any idea what could cause this and how to fix it?
>
> Thanks!
>
>
>
>
>
>
> --
>
> Thanks,
> Ryan Svihla
>
>
>
>


Re: A Single Dropped Node Fails Entire Read Queries

2017-03-10 Thread Daniel Hölbling-Inzko
LOCAL_QUORUM is computed from the replication factor in the DC, not from
the number of nodes (quorum = floor(RF/2) + 1). With a replication factor
of 2 the quorum is 2, so you cannot lose any replica, no matter how many
nodes you have.
With a replication factor of 3 you can lose one node and still satisfy the
query.
Ryan Svihla  wrote on Thu, 9 March 2017 at 18:09:

> whats your keyspace replication settings and what's your query?
>
> On Thu, Mar 9, 2017 at 9:32 AM, Shalom Sagges 
> wrote:
>
> Hi Cassandra Users,
>
> I hope someone could help me understand the following scenario:
>
> Version: 3.0.9
> 3 nodes per DC
> 3 DCs in the cluster.
> Consistency Local_Quorum.
>
> I did a small resiliency test and dropped a node to check the availability
> of the data.
> What I assumed would happen is nothing at all. If a node is down in a 3
> nodes DC, Local_Quorum should still be satisfied.
> However, during the ~10 first seconds after stopping the service, I got
> timeout errors (tried it both from the client and from cqlsh).
>
> This is the error I get:
> *ServerError:
> com.google.common.util.concurrent.UncheckedExecutionException:
> com.google.common.util.concurrent.UncheckedExecutionException:
> java.lang.RuntimeException:
> org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out -
> received only 4 responses.*
>
>
> After ~10 seconds, the same query is successful with no timeout errors.
> The dropped node is still down.
>
> Any idea what could cause this and how to fix it?
>
> Thanks!
>
>
>
>
>
>
> --
>
> Thanks,
> Ryan Svihla
>
>