Re: Understanding of proliferation of sstables during a repair

2017-02-26 Thread Seth Edwards
This makes a lot more sense. What does TMOF stand for?

On Sun, Feb 26, 2017 at 1:01 PM, Benjamin Roth <benjamin.r...@jaumo.com>
wrote:

> Hi Seth,
>
> Repairs can create a lot of tiny SSTables. I also encountered the creation
> of so many sstables that the node died because of TMOF. At that time the
> affected nodes were REALLY inconsistent.
>
> One reason can be immense inconsistencies spread over many
> partition(-ranges) combined with a lot of subrange repairs that trigger a
> lot of independent streams. Each stream results in a single SSTable that can
> be very small. No matter how small it is, it has to be compacted and can
> cause a compaction impact that is a lot bigger than you would expect from
> such a tiny table.
>
> Also consider that there is a theoretical race condition that can cause
> repair streams even though the data is not actually inconsistent, due to
> mutations still "in flight" during merkle tree calculation.
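> A rough way to see this in action (the data path and keyspace/table names
> below are placeholders) is to count the freshly streamed tiny SSTables and
> watch the resulting compaction backlog:
>
>     # count Data components under ~5 MB for the affected table
>     find /var/lib/cassandra/data/<keyspace>/<table>*/ -name '*-Data.db' -size -5M | wc -l
>
>     # watch the compaction backlog those tiny tables create
>     nodetool compactionstats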
>
> 2017-02-26 20:41 GMT+01:00 Seth Edwards <s...@pubnub.com>:
>
>> Hello,
>>
>> We just ran a repair on a keyspace using TWCS and a mixture of TTLs. This
>> caused a large proliferation of sstables and compactions. There is likely a
>> lot of entropy in this keyspace. I am trying to better understand why this
>> is.
>>
>> I've also read that you may not want to run repairs on short TTL data and
>> rely upon other anti-entropy mechanisms to achieve consistency instead. Is
>> this generally true?
>>
>>
>> Thanks!
>>
>
>
>
> --
> Benjamin Roth
> Prokurist
>
> Jaumo GmbH · www.jaumo.com
> Wehrstraße 46 · 73035 Göppingen · Germany
> Phone +49 7161 304880-6 · Fax +49 7161 304880-1
> AG Ulm · HRB 731058 · Managing Director: Jens Kammerer
>


Understanding of proliferation of sstables during a repair

2017-02-26 Thread Seth Edwards
Hello,

We just ran a repair on a keyspace using TWCS and a mixture of TTLs. This
caused a large proliferation of sstables and compactions. There is likely a
lot of entropy in this keyspace. I am trying to better understand why this
is.

I've also read that you may not want to run repairs on short TTL data and
rely upon other anti-entropy mechanisms to achieve consistency instead. Is
this generally true?


Thanks!


Re: Question about compaction strategy changes

2016-10-24 Thread Seth Edwards
Thanks Jeff. We've been trying to find the optimal settings for our TWCS.
It's just two tables, with only one of the tables being a factor. Initially
we set the window to an hour, and then increased it to a day. It still
seemed that there were lots of small sstables on disk: dozens of small .db
files that were maybe only a few megabytes each. These were all the most
recent sstables in the data directory. As we've increased the window size
and the tombstone_threshold, we've seen the newest .db files on disk grow
larger, as we would expect.

The total size of the table in question is between 500GB and 550GB on each
node. At certain intervals it seems that all nodes begin a cycle of
compactions and the number of pending tasks goes up. During this period we
can see the compactions use up maybe 100 or 200GB, sometimes more, and then
when everything finishes, we gain most of that disk space back. We usually
have over 500GB free but it can trickle down to only 150GB free. I assume
solving this is about finding the optimal TWCS settings for our TTL data.

The other thought is that we currently have data mixed in that does not
have a TTL, and we are strongly considering putting this data in its own
table.
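
For reference, the knobs we have been turning look roughly like this (a
cqlsh sketch only; the keyspace/table name and the values are placeholders,
and the option names assume the stock TWCS ones):

    cqlsh -e "ALTER TABLE ks.events WITH compaction = {
      'class': 'TimeWindowCompactionStrategy',
      'compaction_window_unit': 'DAYS',
      'compaction_window_size': '1',
      'tombstone_threshold': '0.2'};"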

On Mon, Oct 24, 2016 at 6:38 AM, Jeff Jirsa <jeff.ji...@crowdstrike.com>
wrote:

>
>
> If you drop window size, you may force some window-major compactions (if
> you go from 1 week windows to 1 day windows, you’ll have 6 days worth of
> files start compacting into 1-day sstables).
>
> If you increase window size, you’ll likely have adjacent windows join (if
> you go from 1 day windows to 2 day windows, nearly every sstable will be
> joined with the one in the day adjacent to it).
>
>
>
> Short of altering compaction strategies, it seems unlikely that you’d see
> huge jumps where you’d run out of space. How many tables/CFs have TWCS
> enabled? How much space are you using, and how much is free?  Do you have
> hundreds with the same TWCS parameters?
>
>
>
> If you’re running very close to your capacity, you may want to consider
> dropping concurrent compactors down so fewer compaction tasks run at the
> same time. That will translate proportionally to the amount of extra disk
> you have consumed by compaction in a TWCS setting.
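>
> As a rough sketch (exact values depend on your hardware), compaction can be
> throttled at runtime and the number of parallel compactors lowered in
> cassandra.yaml:
>
>     # cap total compaction throughput, in MB/s
>     nodetool setcompactionthroughput 16
>
>     # in cassandra.yaml (takes effect after a restart):
>     #   concurrent_compactors: 2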
>
>
>
>
>
>
>
> From: Seth Edwards <s...@pubnub.com>
> Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
> Date: Sunday, October 23, 2016 at 7:03 PM
> To: user <user@cassandra.apache.org>
> Subject: Re: Question about compaction strategy changes
>
>
>
> More compactions meaning "rows to be compacted" or the actual number of
> pending compactions? I assumed that when I run nodetool compactionstats the
> number of pending tasks would line up with the number of sstables that will
> be compacted. Most of the time this is idle, then we hit spots where it can
> jump into the thousands and we end up being short a few hundred GB of disk
> space.
>
>
>
> On Sun, Oct 23, 2016 at 5:49 PM, kurt Greaves <k...@instaclustr.com>
> wrote:
>
>
>
> On 22 October 2016 at 03:37, Seth Edwards <s...@pubnub.com> wrote:
>
> We're using TWCS and we notice that if we make changes to the options to
> the window unit or size, it seems to implicitly start recompacting all
> sstables.
>
>
>
> If you increase the window unit or size you potentially increase the
> number of SSTable candidates for compaction inside each window, which is
> why you would see more compactions. If you decrease the window you
> shouldn't see any new compactions kicked off, however be aware that you
> will have SSTables covering multiple windows, so until a full cycle of your
> TTL passes your read queries won't benefit from the smaller window size.
>
>
> Kurt Greaves
>
> k...@instaclustr.com
>
> www.instaclustr.com
>
>
>


Re: Question about compaction strategy changes

2016-10-23 Thread Seth Edwards
More compactions meaning "rows to be compacted" or the actual number of
pending compactions? I assumed that when I run nodetool compactionstats the
number of pending tasks would line up with the number of sstables that will
be compacted. Most of the time this is idle, then we hit spots where it can
jump into the thousands and we end up being short a few hundred GB of disk
space.
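
In case it helps, this is roughly how we watch it (the table name is a
placeholder; on our 2.0 nodes the per-table command is cfstats rather than
tablestats):

    # pending and in-flight compactions
    nodetool compactionstats

    # per-table SSTable counts and sizes
    nodetool cfstats ks.events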

On Sun, Oct 23, 2016 at 5:49 PM, kurt Greaves <k...@instaclustr.com> wrote:

>
> On 22 October 2016 at 03:37, Seth Edwards <s...@pubnub.com> wrote:
>
>> We're using TWCS and we notice that if we make changes to the options to
>> the window unit or size, it seems to implicitly start recompacting all
>> sstables.
>
>
> If you increase the window unit or size you potentially increase the
> number of SSTable candidates for compaction inside each window, which is
> why you would see more compactions. If you decrease the window you
> shouldn't see any new compactions kicked off, however be aware that you
> will have SSTables covering multiple windows, so until a full cycle of your
> TTL passes your read queries won't benefit from the smaller window size.
>
> Kurt Greaves
> k...@instaclustr.com
> www.instaclustr.com
>


Question about compaction strategy changes

2016-10-21 Thread Seth Edwards
Hello! We're using TWCS and we notice that if we make changes to the window
unit or size options, it seems to implicitly start recompacting all
sstables. Is this indeed the case and, more importantly, does the same
happen if we were to adjust the gc_grace_seconds for this table?
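
For concreteness, the two kinds of change we're weighing look roughly like
this (a cqlsh sketch; the table name and values are placeholders):

    # change only the tombstone grace period
    cqlsh -e "ALTER TABLE ks.events WITH gc_grace_seconds = 86400;"

    # versus changing the TWCS window options themselves
    cqlsh -e "ALTER TABLE ks.events WITH compaction = {
      'class': 'TimeWindowCompactionStrategy',
      'compaction_window_unit': 'HOURS',
      'compaction_window_size': '12'};"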


Thanks!


Re: Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
Thanks for the detailed steps Ben! This gives me another option in case of
emergency.



On Mon, Oct 17, 2016 at 1:55 PM, Ben Bromhead <b...@instaclustr.com> wrote:

> yup you would need to copy the files across to the new volume from the dir
> you wanted to give additional space to. Rough steps would look like this,
> with a command-level sketch after the list:
>
> 1. Create EBS volume (make it big... like 3TB)
> 2. Attach it to the instance
> 3. Format and mount the EBS volume
> 4. Stop C*
> 5. Copy the full/troublesome directory to the EBS volume
> 6. Remove the copied files (using rsync for the copy/remove step can be a good idea)
> 7. Bind mount the EBS volume at the same path as the troublesome directory
> 8. Start C* back up
> 9. Let it finish compacting / streaming etc.
> 10. Stop C*
> 11. Remove the bind mount
> 12. Copy the files back to ephemeral storage
> 13. Start C* back up
> 14. Repeat on the other nodes
> 15. Run a repair
>
> You can use this process if you somehow end up in a full disk situation.
> If you've actually run out of disk you'll have other issues (like
> corrupt / half-written SSTable components), but it's better than nothing.
>
> Also, to maintain your read throughput during this whole thing, double
> check the EBS volume's read_ahead_kb setting on the block device and reduce
> it to something sane like 0 or 16.
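>
> Put as commands, the first half looks something like this (a rough sketch
> only; the device name, mount point and data path are assumptions, adjust to
> your own layout):
>
>     # 1-3: attach the EBS volume, then format and mount it
>     sudo mkfs.ext4 /dev/xvdf
>     sudo mkdir -p /mnt/newebs && sudo mount /dev/xvdf /mnt/newebs
>
>     # 4-7: stop C*, copy the troublesome table directory across, then bind
>     # mount the EBS copy back onto the original path
>     sudo service cassandra stop
>     sudo rsync -a /var/lib/cassandra/data/<ks>/<table>/ /mnt/newebs/<table>/
>     sudo rm -rf /var/lib/cassandra/data/<ks>/<table>/*
>     sudo mount --bind /mnt/newebs/<table> /var/lib/cassandra/data/<ks>/<table>
>
>     # 8-9: start C* back up and let compaction / streaming finish, then
>     # reverse the steps (stop C*, umount the bind mount, copy back)
>     sudo service cassandra start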
>
>
>
> On Mon, 17 Oct 2016 at 13:42 Seth Edwards <s...@pubnub.com> wrote:
>
>> @Ben
>>
>> Interesting idea, is this also an option for situations where the disk is
>> completely full and Cassandra has stopped? (Not that I want to go there).
>>
>> If this was the route taken, and we did
>>
>> mount --bind   /mnt/path/to/large/sstable   /mnt/newebs
>>
>> We would still need to do some manual copying of files? such as
>>
>> mv /mnt/path/to/large/sstable.sd /mnt/newebs ?
>>
>> Thanks!
>>
>> On Mon, Oct 17, 2016 at 12:59 PM, Ben Bromhead <b...@instaclustr.com>
>> wrote:
>>
>> Yup, as everyone has mentioned, ephemeral disks are fine if you run in
>> multiple AZs... which is pretty much mandatory for any production deployment
>> in AWS (and other cloud providers). i2.2xls are generally your best bet for
>> high read throughput applications on AWS.
>>
>> Also on AWS ephemeral storage will generally survive a user initiated
>> restart. For the times that AWS retires an instance, you get plenty of
>> notice and it's generally pretty rare. We run over 1000 instances on AWS
>> and see one forced retirement a month if that. We've never had an instance
>> pulled from under our feet without warning.
>>
>> To add another option for the original question, one thing you can do is
>> to attach a large EBS drive to the instance and bind mount it to the
>> directory for the table that has the very large SSTables. You will need to
>> copy data across to the EBS volume. Let everything compact and then copy
>> everything back and detach EBS. Latency may be higher than normal on the
>> node you are doing this on (especially if you are used to i2.2xl
>> performance).
>>
>> This is something we often have to do, when we encounter pathological
>> compaction situations associated with bootstrapping, adding new DCs or STCS
>> with a dominant table or people ignore high disk usage warnings :)
>>
>> On Mon, 17 Oct 2016 at 12:43 Jeff Jirsa <jeff.ji...@crowdstrike.com>
>> wrote:
>>
>> Ephemeral is fine, you just need to have enough replicas (in enough AZs
>> and enough regions) to tolerate instances being terminated.
>>
>>
>>
>>
>>
>>
>>
>> From: Vladimir Yudovin <vla...@winguzone.com>
>> Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
>> Date: Monday, October 17, 2016 at 11:48 AM
>> To: user <user@cassandra.apache.org>
>> Subject: Re: Adding disk capacity to a running node
>>
>>
>>
>> It's extremely unreliable to use ephemeral (local) disks. Even if you
>> don't stop the instance yourself, it can be restarted on a different server
>> in case of a hardware failure or an AWS-initiated update, so all node data
>> will be lost.
>>
>>
>>
>> Best regards, Vladimir Yudovin,
>>
>>
>> *Winguzone <https://winguzone.com?from=list> - Hosted Cloud Cassandra on
>> Azure and SoftLayer.Launch yo

Re: Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
@Ben

Interesting idea, is this also an option for situations where the disk is
completely full and Cassandra has stopped? (Not that I want to go there).

If this was the route taken, and we did

mount --bind   /mnt/path/to/large/sstable   /mnt/newebs

We would still need to do some manual copying of files? such as

mv /mnt/path/to/large/sstable.sd /mnt/newebs ?

Thanks!

On Mon, Oct 17, 2016 at 12:59 PM, Ben Bromhead <b...@instaclustr.com> wrote:

> Yup, as everyone has mentioned, ephemeral disks are fine if you run in
> multiple AZs... which is pretty much mandatory for any production deployment
> in AWS (and other cloud providers). i2.2xls are generally your best bet for
> high read throughput applications on AWS.
>
> Also on AWS ephemeral storage will generally survive a user initiated
> restart. For the times that AWS retires an instance, you get plenty of
> notice and it's generally pretty rare. We run over 1000 instances on AWS
> and see one forced retirement a month if that. We've never had an instance
> pulled from under our feet without warning.
>
> To add another option for the original question, one thing you can do is
> to attach a large EBS drive to the instance and bind mount it to the
> directory for the table that has the very large SSTables. You will need to
> copy data across to the EBS volume. Let everything compact and then copy
> everything back and detach EBS. Latency may be higher than normal on the
> node you are doing this on (especially if you are used to i2.2xl
> performance).
>
> This is something we often have to do, when we encounter pathological
> compaction situations associated with bootstrapping, adding new DCs or STCS
> with a dominant table or people ignore high disk usage warnings :)
>
> On Mon, 17 Oct 2016 at 12:43 Jeff Jirsa <jeff.ji...@crowdstrike.com>
> wrote:
>
>> Ephemeral is fine, you just need to have enough replicas (in enough AZs
>> and enough regions) to tolerate instances being terminated.
>>
>>
>>
>>
>>
>>
>>
>> From: Vladimir Yudovin <vla...@winguzone.com>
>> Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
>> Date: Monday, October 17, 2016 at 11:48 AM
>> To: user <user@cassandra.apache.org>
>> Subject: Re: Adding disk capacity to a running node
>>
>>
>>
>> It's extremely unreliable to use ephemeral (local) disks. Even if you
>> don't stop the instance yourself, it can be restarted on a different server
>> in case of a hardware failure or an AWS-initiated update, so all node data
>> will be lost.
>>
>>
>>
>> Best regards, Vladimir Yudovin,
>>
>>
>> *Winguzone <https://winguzone.com?from=list> - Hosted Cloud Cassandra on
>> Azure and SoftLayer.Launch your cluster in minutes.*
>>
>>
>>
>>
>>
>> On Mon, 17 Oct 2016 14:45:00 -0400, Seth Edwards <s...@pubnub.com> wrote:
>>
>>
>>
>> These are i2.2xlarge instances, so the disks are currently configured as
>> dedicated ephemeral disks.
>>
>>
>>
>> On Mon, Oct 17, 2016 at 11:34 AM, Laing, Michael <
>> michael.la...@nytimes.com> wrote:
>>
>>
>>
>> You could just expand the size of your ebs volume and extend the file
>> system. No data is lost - assuming you are running Linux.
>>
>>
>>
>>
>>
>> On Monday, October 17, 2016, Seth Edwards <s...@pubnub.com> wrote:
>>
>> We're running 2.0.16. We're migrating to a new data model but we've had
>> an unexpected increase in write traffic that has caused us some capacity
>> issues when we encounter compactions. Our old data model is on STCS. We'd
>> like to add another ebs volume (we're on aws) to our JBOD config and
>> hopefully avoid any situation where we run out of disk space during a large
>> compaction. It appears that the behavior we are hoping to get is actually
>> undesirable and removed in 3.2. It still might be an option for us until we
>> can finish the migration.
>>
>>
>>
>> I'm not familiar with LVM so it may be a bit risky to try at this point.
>>
>>
>>
>> On Mon, Oct 17, 2016 at 9:42 AM, Yabin Meng <yabinm...@gmail.com> wrote:
>>
>> I assume you're talking about Cassandra JBOD (just a bunch of disk) setup
>> because you do mention it as adding it 

Re: Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
These are i2.2xlarge instances, so the disks are currently configured as
dedicated ephemeral disks.

On Mon, Oct 17, 2016 at 11:34 AM, Laing, Michael <michael.la...@nytimes.com>
wrote:

> You could just expand the size of your ebs volume and extend the file
> system. No data is lost - assuming you are running Linux.
>
>
> On Monday, October 17, 2016, Seth Edwards <s...@pubnub.com> wrote:
>
>> We're running 2.0.16. We're migrating to a new data model but we've had
>> an unexpected increase in write traffic that has caused us some capacity
>> issues when we encounter compactions. Our old data model is on STCS. We'd
>> like to add another ebs volume (we're on aws) to our JBOD config and
>> hopefully avoid any situation where we run out of disk space during a large
>> compaction. It appears that the behavior we are hoping to get is actually
>> undesirable and removed in 3.2. It still might be an option for us until we
>> can finish the migration.
>>
>> I'm not familiar with LVM so it may be a bit risky to try at this point.
>>
>> On Mon, Oct 17, 2016 at 9:42 AM, Yabin Meng <yabinm...@gmail.com> wrote:
>>
>>> I assume you're talking about Cassandra JBOD (just a bunch of disk)
>>> setup because you do mention it as adding it to the list of data
>>> directories. If this is the case, you may run into issues, depending on
>>> your C* version. Check this out:
>>> http://www.datastax.com/dev/blog/improving-jbod.
>>>
>>> Or another approach is to use LVM to manage multiple devices into a
>>> single mount point. If you do so, what Cassandra sees is simply increased
>>> disk storage space and there should be no problem.
>>>
>>> Hope this helps,
>>>
>>> Yabin
>>>
>>> On Mon, Oct 17, 2016 at 11:54 AM, Vladimir Yudovin <vla...@winguzone.com
>>> > wrote:
>>>
>>>> Yes, Cassandra should keep the percentage of disk usage equal across all
>>>> disks. The compaction process and SSTable flushes will use the new disk to
>>>> distribute both new and existing data.
>>>>
>>>> Best regards, Vladimir Yudovin,
>>>>
>>>>
>>>> *Winguzone <https://winguzone.com?from=list> - Hosted Cloud Cassandra
>>>> on Azure and SoftLayer.Launch your cluster in minutes.*
>>>>
>>>>
>>>> On Mon, 17 Oct 2016 11:43:27 -0400, Seth Edwards <s...@pubnub.com> wrote:
>>>>
>>>> We have a few nodes that are running out of disk capacity at the moment
>>>> and instead of adding more nodes to the cluster, we would like to add
>>>> another disk to the server and add it to the list of data directories. My
>>>> question, is, will Cassandra use the new disk for compactions on sstables
>>>> that already exist in the primary directory?
>>>>
>>>>
>>>>
>>>> Thanks!
>>>>
>>>>
>>>>
>>>
>>


Re: Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
We're running 2.0.16. We're migrating to a new data model but we've had an
unexpected increase in write traffic that has caused us some capacity
issues when we encounter compactions. Our old data model is on STCS. We'd
like to add another ebs volume (we're on aws) to our JBOD config and
hopefully avoid any situation where we run out of disk space during a large
compaction. It appears that the behavior we are hoping to get is actually
undesirable and removed in 3.2. It still might be an option for us until we
can finish the migration.

I'm not familiar with LVM so it may be a bit risky to try at this point.
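
For what it's worth, the JBOD route would look roughly like this (a sketch;
the device name and the second data path are assumptions):

    # format and mount the new EBS volume as a second data directory
    sudo mkfs.ext4 /dev/xvdf
    sudo mkdir -p /var/lib/cassandra/data2
    sudo mount /dev/xvdf /var/lib/cassandra/data2
    sudo chown -R cassandra:cassandra /var/lib/cassandra/data2

    # then list it under data_file_directories in cassandra.yaml and restart:
    #   data_file_directories:
    #       - /var/lib/cassandra/data
    #       - /var/lib/cassandra/data2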

On Mon, Oct 17, 2016 at 9:42 AM, Yabin Meng <yabinm...@gmail.com> wrote:

> I assume you're talking about Cassandra JBOD (just a bunch of disk) setup
> because you do mention it as adding it to the list of data directories. If
> this is the case, you may run into issues, depending on your C* version.
> Check this out: http://www.datastax.com/dev/blog/improving-jbod.
>
> Or another approach is to use LVM to manage multiple devices into a single
> mount point. If you do so, what Cassandra sees is simply increased disk
> storage space and there should be no problem.
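>
> Roughly (and only if the data directory already sits on an LVM logical
> volume; the device and volume names below are placeholders):
>
>     pvcreate /dev/xvdf                        # register the new device with LVM
>     vgextend cassandra_vg /dev/xvdf           # grow the volume group
>     lvextend -l +100%FREE /dev/cassandra_vg/data_lv
>     resize2fs /dev/cassandra_vg/data_lv       # grow the (ext4) filesystem online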
>
> Hope this helps,
>
> Yabin
>
> On Mon, Oct 17, 2016 at 11:54 AM, Vladimir Yudovin <vla...@winguzone.com>
> wrote:
>
>> Yes, Cassandra should keep the percentage of disk usage equal across all
>> disks. The compaction process and SSTable flushes will use the new disk to
>> distribute both new and existing data.
>>
>> Best regards, Vladimir Yudovin,
>>
>>
>> *Winguzone <https://winguzone.com?from=list> - Hosted Cloud Cassandra on
>> Azure and SoftLayer.Launch your cluster in minutes.*
>>
>>
>> On Mon, 17 Oct 2016 11:43:27 -0400, Seth Edwards <s...@pubnub.com> wrote:
>>
>> We have a few nodes that are running out of disk capacity at the moment
>> and instead of adding more nodes to the cluster, we would like to add
>> another disk to the server and add it to the list of data directories. My
>> question is: will Cassandra use the new disk for compactions on sstables
>> that already exist in the primary directory?
>>
>>
>>
>> Thanks!
>>
>>
>>
>


Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
We have a few nodes that are running out of disk capacity at the moment and
instead of adding more nodes to the cluster, we would like to add another
disk to the server and add it to the list of data directories. My question
is: will Cassandra use the new disk for compactions on sstables that
already exist in the primary directory?



Thanks!


Re: Question about adding nodes to a cluster

2015-02-09 Thread Seth Edwards
I see what you are saying. So basically take whatever existing token I have
and divide it by 2, give or take a couple of tokens?

On Mon, Feb 9, 2015 at 5:17 PM, Robert Coli rc...@eventbrite.com wrote:

 On Mon, Feb 9, 2015 at 4:59 PM, Seth Edwards s...@pubnub.com wrote:

 We are choosing to double our cluster from six to twelve. I ran the token
 generator. Based on what I read in the documentation, I expected to see the
 same first six tokens and six new tokens. Instead I see almost the same
 tokens but off by a few numbers. Is this expected? Should I change the
 similar tokens to the new ones? Am I doing it wrong?


 In your existing cluster, your first token is at
 28356863910078205288614550619314017621, which ends in an odd number.

 You cannot therefore choose a new token which exactly bisects its range,
 because a node cannot own the token
 28356863910078205288614550619314017621 / 2 = 14178431955039102644307275309657008810.5
 ... because tokens are integers.

 You will however notice that floor() of your current token divided by two
 is your new token (14178431955039102644307275309657008810).

 I would personally keep my existing 6 tokens and do the simple math myself
 of bisecting their ranges, not move my existing tokens around by one or two
 tokens.
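
 For example, the first new token is just the floor of the first range's
 midpoint (a quick sketch, using Python for the 127-bit integer math):

     python -c 'print((0 + 28356863910078205288614550619314017621) // 2)'
     # -> 14178431955039102644307275309657008810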

 =Rob









Question about adding nodes to a cluster

2015-02-09 Thread Seth Edwards
I am on Cassandra 1.2.19 and I am following the documentation for adding
capacity to an existing cluster:
http://www.datastax.com/docs/1.1/cluster_management#adding-capacity-to-an-existing-cluster

We are choosing to double our cluster from six to twelve. I ran the token
generator. Based on what I read in the documentation, I expected to see the
same first six tokens and six new tokens. Instead I see almost the same
tokens but off by a few numbers. Is this expected? Should I change the
similar tokens to the new ones? Am I doing it wrong?


Here is the output I am dealing with.

With six:

DC #1:
  Node #1:0
  Node #2:   28356863910078205288614550619314017621
  Node #3:   56713727820156410577229101238628035242
  Node #4:   85070591730234615865843651857942052863
  Node #5:  113427455640312821154458202477256070484
  Node #6:  141784319550391026443072753096570088105

With twelve:

DC #1:
  Node #01:0
  Node #02:   14178431955039102644307275309657008810
  Node #03:   28356863910078205288614550619314017620
  Node #04:   42535295865117307932921825928971026430
  Node #05:   56713727820156410577229101238628035240
  Node #06:   70892159775195513221536376548285044050
  Node #07:   85070591730234615865843651857942052860
  Node #08:   99249023685273718510150927167599061670
  Node #09:  113427455640312821154458202477256070480
  Node #10:  127605887595351923798765477786913079290
  Node #11:  141784319550391026443072753096570088100
  Node #12:  155962751505430129087380028406227096910