Hi Ivan

Many thanks for the warnings

I know about tuning the RAID controller and DRBD, but not about CPU pinning in KVM. I should look into this topic.

If you have information about this topic, please let me know. I use Proxmox as my virtualization system (based on KVM and OpenVZ).

Best regards
Cesar Peschiera


----- Original Message -----
From: "Ivan" <[email protected]>
To: "Cesar Peschiera" <[email protected]>
Sent: Saturday, October 25, 2014 10:53 AM
Subject: Re: [DRBD-user] drbd storage size


Hi Cesar,

On 10/25/2014 04:17 PM, Cesar Peschiera wrote:
Hi Ivan

Thanks for the link; it seems very interesting. In my case, each server has two Intel processors with 10 cores and 20 threads, according to this link:
http://ark.intel.com/products/75279/Intel-Xeon-Processor-E5-2690-v2-25M-Cache-3_00-GHz


But I have a question about DRBD and MS-SQL Server 2008 x64, as I use KVM to virtualize a Windows Server with MS-SQL Server, and I only have this VM on the server. What will be better for me in terms of performance: enabling the processors' threads or not?

You mean hyperthreading? I can't really advise on that, but from what I've read on forums it's OK to have it enabled (only the first generations of CPUs with hyperthreading, many years ago, had problems).
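
To check whether it's currently active, something like this works (more than one thread per core means hyperthreading is on):

    lscpu | egrep 'Thread|Core|Socket'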

You should also investigate cpu pinning in KVM.
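
For example, with a libvirt-managed guest you can pin each vCPU to a dedicated physical core. Just a sketch; the domain name "winsql" and the core numbers are placeholders:

    # pin vCPUs 0 and 1 of guest "winsql" to physical cores 2 and 3
    virsh vcpupin winsql 0 2
    virsh vcpupin winsql 1 3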

That said, I don't think you'll be CPU-bound if all you run is a database server. The bottleneck will rather be I/O, so you'll have to understand KVM's caching strategy to make good use of the hefty setup you have. If your RAID controllers have a battery backup unit (BBU) you should be safe with cache=none.

Also, you'll have to use virtio drivers in windows.
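
On the qemu command line that combination would look roughly like this (a sketch; the disk path and memory/CPU sizes are placeholders, and the guest needs the virtio drivers installed before it can see the disk):

    qemu-system-x86_64 -m 8192 -smp 4 \
        -drive file=/dev/vg0/win-sql,if=virtio,cache=none,aio=native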

Have a look at this; it should help you.

https://www.ibm.com/developerworks/community/blogs/sagitech/entry/tuning_kvm_guest_for_performance?lang=en

good luck
ivan



Please, if you can explain it to me, do it clearly so that I can understand you.

Best regards
Cesar


----- Original Message -----
From: "Ivan" <[email protected]>
To: <[email protected]>
Sent: Saturday, October 25, 2014 9:12 AM
Subject: Re: [DRBD-user] drbd storage size


Hi,

On 10/25/2014 02:38 PM, Cesar Peschiera wrote:
Hi Meij

In three weeks I will have two Intel X520-QDA1 NICs at 40 Gb/s, according to these links:
http://ark.intel.com/products/68672/Intel-Ethernet-Converged-Network-Adapter-X520-QDA1


http://www.intel.com/content/www/us/en/network-adapters/converged-network-adapters/ethernet-x520-qda1-brief.html



At those speeds it would be interesting to test the upcoming 3.18 kernel with the bulk network transmission patch [1]; that should save you a bunch of CPU cycles.

[1] http://lwn.net/Articles/615238

ivan



In my hardware setup I also have a Dell H710p RAID controller (LSI chipset with 1 GB of cache) and two groups of four 15K RPM SAS HDDs; each group is configured in RAID 10. This setup is applied to each server (the HDDs for the OS are in another RAID). Obviously I don't have much storage compared to yours.

On these servers I will be running DRBD version 8.4.5.

If you want to know the results of my tests, just let me know.

Best regards
Cesar

----- Original Message -----
From: "Meij, Henk" <[email protected]>
To: <[email protected]>
Sent: Thursday, October 23, 2014 5:10 PM
Subject: Re: [DRBD-user] drbd storage size


a) it turns out the counter (8847740/11287100)M goes down, not up; duh, never noticed. (So the first number is MB remaining: 11287100 - 8847740 = 2439360 MB done, about 21.6%, which matches the 21.7% sync'ed shown earlier.)

b) ran plain rsync across eth0 (public, with switches/routers) and eth1 (NIC to NIC):
eth0: sent 585260755954 bytes  received 10367 bytes  116690412.98 bytes/sec
eth1: sent 585260755954 bytes  received 10367 bytes  122580535.41 bytes/sec
So my LSI RAID card is behaving, and DRBD is slowing the initialization down somehow.
Found chapter 15 and will try some suggestions, but ideas welcome.
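
One suggestion I plan to try is temporarily overriding the sync-rate controller from the command line; a sketch, with r0 standing in for my resource name:

    # disable the dynamic controller and force a fixed resync rate
    drbdadm disk-options --c-plan-ahead=0 --resync-rate=110M r0
    # afterwards, fall back to the values in the config file
    drbdadm adjust r0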

c) for grins
version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by mockbuild@Build64R6, 2014-08-17 19:26:04
0: cs:SyncTarget ro:Secondary/Secondary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:3601728 dw:3601408 dr:0 al:0 bm:0 lo:4 pe:11 ua:3 ap:0 ep:1 wo:f oos:109374215324
        [>....................] sync'ed:  0.1% (106810756/106814272)M
        finish: 731:23:05 speed: 41,532 (31,868) want: 41,000 K/sec

100 TB in 731 hours would be 30 days. Can I expect large delta-data replication to go equally slowly using DRBD?

-Henk



________________________________________
From: [email protected]
[[email protected]] on behalf of Meij, Henk
[[email protected]]
Sent: Thursday, October 23, 2014 9:57 AM
To: Philipp Reisner; [email protected]
Subject: Re: [DRBD-user] drbd storage size

Thanks for the write-up, y'all. I'll have to think about #3; not sure I grasp it fully.

Last night I started a 12 TB test and began the first initialization for observation (node 0 is primary).
I have node0:eth1 wired directly into node1:eth1 with a 10-foot CAT 6 cable (MTU=9000).
Data from node1 to node0:
PING 10.10.52.232 (10.10.52.232) 8970(8998) bytes of data.
8978 bytes from 10.10.52.232: icmp_seq=1 ttl=64 time=0.316 ms
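
For reference, the jumbo-frame setup and check boil down to something like this (eth1 and the peer IP are from my setup):

    ip link set dev eth1 mtu 9000
    # 8972 bytes payload + 20 IP + 8 ICMP = 9000; -M do forbids fragmentation
    ping -M do -s 8972 10.10.52.232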

This morning's progress report from node1 (drbd v8.4.5):

       [===>................] sync'ed: 21.7% (8847740/11287100)M
       finish: 62:50:50 speed: 40,032 (39,008) want: 68,840 K/sec

which confuses me: 8.8M out of 11.3M would be ~78% synced, no? I will let this test finish before I do a dd attempt.

iostat reveals %idle CPU of 99%+ and little to no %iowait (near 0%); iotop confirms very little IO (<5 K/s). Typical data:

Device:  rrqm/s   wrqm/s     r/s     w/s   rsec/s    wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
sdb1       0.00   231.00    0.00  156.33     0.00  79194.67    506.58      0.46   2.94   1.49  23.37

Something is throttling this IO, as 40 M/s is about half of what I was hoping for. Will dig some more.

-Henk

________________________________________
From: [email protected]
[[email protected]] on behalf of Philipp Reisner
[[email protected]]
Sent: Thursday, October 23, 2014 9:17 AM
To: [email protected]
Subject: Re: [DRBD-user] drbd storage size

On Thursday, 23 October 2014 08:55:03, Digimer wrote:
On 23/10/14 04:00 AM, Philipp Reisner wrote:
> 2a) Initialize both backend devices to a known state.
>
>      I.e. dd if=/dev/zero of=/dev/sdb1 bs=$((1024*1024)) oflag=direct

Question:

   What I've done in the past to speed up the initial sync is to create the DRBD device, pause-sync, then do your 'dd if=/dev/zero ...' trick to /dev/drbd0. This effectively drives the resync speed to the max possible and ensures a full sync across both nodes. Is this a sane approach?
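
In shell terms, roughly (with r0 as a placeholder resource name, run on the Primary):

   drbdadm pause-sync r0
   dd if=/dev/zero of=/dev/drbd0 bs=$((1024*1024)) oflag=direct
   drbdadm resume-sync r0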


Yes, sure, that is a way to do it. (I have the impression that is something from the drbd-8.3 world.)

I do not know off the top of my head if that will be faster than the built-in background resync in drbd-8.4.

Best,
Phil
