Re: [Pdns-users] PowerDNS needs your thoughts on two important DNSSEC matters

2012-09-04 Thread erkan yanar
On Mon, Sep 03, 2012 at 07:19:45PM +0200, Peter van Dijk wrote:
 Hello,
 
 we are working hard to get 3.1.1 out the door, fixing the last remaining 
 DNSSEC issues. Since 3.1, we have discovered two issues that require some 
 re-engineering and may have database impact. We could really use some input 
 on these issues.
 
 
 ISSUE 1: ordername sorting
 
[snip]
 
 MySQL, depending on charset settings (cannot reproduce right now), will also 
 not do the right thing for us. However, for MySQL there are a few reliable 
 workarounds:
 ALTER TABLE records ADD ordername VARCHAR(255) COLLATE latin1_bin;
 or
 ALTER TABLE records ADD ordername VARBINARY(255);
 
 Both of these will make ordername sort correctly - the first one applies 
 when latin1 is already active, the second one is generic.
 
 SQLite mostly seems to do the right thing, at least with default settings.
 
 OUR QUESTIONS:
 1b. Is VARBINARY the best way to do it for MySQL?

Afaik you only want the ordering (collation) to be binary. So there are
some ways to do it without touching the character set:
1. SELECT ... ORDER BY BINARY ordername
   Just change the query.
2. ALTER TABLE records ADD ordername VARCHAR(255) BINARY
   Then you don't care about the CHARSET used by the server.
   This syntax always sets the binary collation specific to that charset.
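
For illustration, the two routes look roughly like this (an untested sketch
against the stock PowerDNS gmysql schema; column and table names assumed
from the 3.x schema):

```sql
-- Route 1: force a binary comparison per query, no schema change needed.
SELECT name, ordername
  FROM records
 WHERE domain_id = 1
 ORDER BY BINARY ordername;

-- Route 2: give the column the binary collation of whatever charset is
-- active; VARCHAR(255) BINARY resolves to e.g. latin1_bin or utf8_bin
-- automatically, so it works regardless of the server's default charset.
ALTER TABLE records MODIFY ordername VARCHAR(255) BINARY;
```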

Regards
Erkan


-- 
above the borders, freedom must surely be cloudless
___
Pdns-users mailing list
Pdns-users@mailman.powerdns.com
http://mailman.powerdns.com/mailman/listinfo/pdns-users


Re: [Pdns-users] PowerDNS in an ISP environment

2011-08-17 Thread erkan yanar
On Tue, Aug 16, 2011 at 01:29:18PM -0600, Michael Loftis wrote:
 On Tue, Aug 16, 2011 at 1:38 AM, Chris Russell
 chris.russ...@knowledgeit.co.uk wrote:
  Hi All,
 
 
 
  Quick question – is anyone on the list using PDNS in an ISP environment,
  especially for auth services ?
 
 
 There have definitely been a few pains here and there.  Some of them
 were caused by the fact that wildcard records are used.  Some of the
 issues I had were caused by MySQL's sometimes flaky replication,
 monitoring them was an absolute must, making sure that they were all
 in sync and up to date was also absolutely required.  The benefits far
 outweighed the costs at that scale for certain.

Yeah, and concerning replication sync, you should nowadays use semi-synchronous
replication with MySQL.
(And you still need monitoring, of course :)
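
For the archives, enabling semi-sync is roughly this (MySQL 5.5+ plugin
names as shipped with MySQL; timeout value is just an example):

```sql
-- On the master: load the plugin and turn it on.
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
-- Fall back to async replication if no slave acks within 1 second.
SET GLOBAL rpl_semi_sync_master_timeout = 1000;

-- On each slave: same dance, then bounce the IO thread to pick it up.
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;
```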

Regards
Erkan




Re: [Pdns-users] mysql-tests

2011-05-01 Thread erkan yanar
On Sun, May 01, 2011 at 10:22:58PM +0200, fredrik danerklint wrote:
 erkan,
 
 if you used a script to generate all the data, do you think you could post 
 it so I can also run these tests against the mongodbbackend?
 
 
Na, not really.
The basic idea is/was to go through seq() and use md5 to build the domain
names. So the domains are going to be longer than you would expect in a
standard workload.
Having this list, you can fill domains and records, where every domain
gets 7-10 records, all the same (www/mail/ns etc.).

Besides the md5() idea, it's not worth posting :(
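
If anyone wants to reconstruct the idea anyway, it boils down to something
like this (a hypothetical sketch, not the original script; recursive CTEs
need MySQL 8.0 / MariaDB 10.2+, and the table layout is the stock gmysql
schema):

```sql
-- MySQL caps recursion at 1000 rows by default; raise it first.
-- (MariaDB uses max_recursive_iterations instead.)
SET SESSION cte_max_recursion_depth = 1000001;

-- Generate 1M synthetic domain names: seq() piped through MD5().
WITH RECURSIVE seq (n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM seq WHERE n < 1000000
)
INSERT INTO domains (name, type)
SELECT CONCAT(MD5(n), '.example.com'), 'NATIVE'
  FROM seq;

-- Then insert 7-10 records (www/mail/ns ...) per generated domain.
```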

Regards
Erkan




Re: [Pdns-users] mysql-tests

2011-04-27 Thread erkan yanar
Moin Bert,

On Wed, Apr 27, 2011 at 03:15:27PM +0200, bert hubert wrote:
 On Sat, Apr 23, 2011 at 01:04:51AM +0200, erkan yanar wrote:
  As I'm missing any good data, I created 6*10^6 entries for domains and
  for every domain some entries in the records table (about 66*10^6)
 
 That is a pretty good test! 6 million domains is around 2 million domains
 smaller than the largest deployment we know of.
 
Queries per second:   10923.212970 qps
 
 Interesting. Post 3.0 we will be focussing on performance for a few
 releases. It may well be that we'll add guidance on which indexes to use.

In fact I did a new test (on Sunday azlev forced me to use -q :):

 # ./dnsperf -d /var/tmp/pdns.list -q 4000 -s localhost

DNS Performance Testing Tool

Nominum Version 1.0.1.0

[Status] Processing input data
[Status] Sending queries (to 127.0.0.1)
[Status] Testing complete

Statistics:

  Parse input file: once
  Ended due to: reaching end of file

  Queries sent: 494969 queries
  Queries completed:494969 queries
  Queries lost: 0 queries

  Avg request size: 55 bytes
  Avg response size:81 bytes

  Percentage completed: 100.00%
  Percentage lost:0.00%

  Started at:   Sun Apr 24 02:50:44 2011
  Finished at:  Sun Apr 24 02:51:05 2011
  Ran for:  21.518132 seconds

  Queries per second:   23002.414894 qps

With the pdns cache it easily doubled (with up to 1% packet loss).



 
 As I'm missing live/real data, I would like to get in contact with someone 
 who has live/real data.
 
 You can use tcpdump & dnsreplay perhaps?

Naa, I'm just a little DBA. In fact I own 5 domains :)

Erkan



[Pdns-users] mysql-tests

2011-04-22 Thread erkan yanar

Moin, I just played with the MySQL backend.
hardware: DL380G6, 12GB memory
pdns: pdns-3.0-rc2
MySQL: 5.2.5-MariaDB-log/XtraDB (InnoDB branch)

So I'm not into powerdns. As I wanted to look at MySQL only, I
disabled the cache as well as I knew how:
cache-ttl=0
negquery-cache-ttl=0
query-cache-ttl=0
recursive-cache-ttl=0

As I'm missing any good data, I created 6*10^6 entries for domains and
for every domain some entries in the records table (about 66*10^6).

So here is the size of the two tables:
+------------+--------------+-------------+
| TABLE_NAME | INDEX_LENGTH | DATA_LENGTH |
+------------+--------------+-------------+
| domains    |    475004928 |   431898624 |
| records    |  11372855296 |  5813305344 |
+------------+--------------+-------------+

I ran about 500,000 queries with dnsperf against pdns. As the idea is to
simulate hot data, I always counted the 2nd run. So even though the
database exceeds memory in total, that doesn't matter here, as we are
working with hot data (the 500,000 queries).
  Parse input file: once
  Ended due to: reaching end of file

  Queries sent: 494969 queries
  Queries completed:494969 queries
  Queries lost: 0 queries

  Avg request size: 55 bytes
  Avg response size:81 bytes

  Percentage completed: 100.00%
  Percentage lost:0.00%

  Started at:   Fri Apr 22 20:45:29 2011
  Finished at:  Fri Apr 22 20:46:18 2011
  Ran for:  48.938326 seconds

  Queries per second:   10114.138354 qps

Next we get rid of an index:
DROP INDEX `rec_name_index` ON records;
There is no need for that index.
+------------+--------------+-------------+
| TABLE_NAME | INDEX_LENGTH | DATA_LENGTH |
+------------+--------------+-------------+
| domains    |    475004928 |   431898624 |
| records    |   6116343808 |  5813305344 |
+------------+--------------+-------------+

Yeah, see the index size dropping.

  Queries per second:   10822.316691 qps

Ok, not faster for *hot* data. But more importantly, not *slower*.

Next there is:

 ALTER TABLE records MODIFY `type`
 enum('A','AAAA','SOA','NS','MX','CNAME','PTR','TXT') NOT NULL;

Why?
(Ok, that is only the subset of types I used for my test.) BUT:
1. The index is smaller and faster.
2. You can check for the correct types.
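
Point 2 in action, assuming strict mode (hypothetical values, record
layout as in the stock gmysql schema):

```sql
SET SESSION sql_mode = 'STRICT_ALL_TABLES';

-- 'SRV' is not in the ENUM, so strict mode rejects the row instead of
-- silently storing an empty string:
INSERT INTO records (domain_id, name, type, content, ttl)
VALUES (1, 'www.example.com', 'SRV', '0 0 5269 srv.example.com', 3600);
-- fails with something like:
-- ERROR 1265 (01000): Data truncated for column 'type' at row 1
```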

+------------+--------------+-------------+
| TABLE_NAME | INDEX_LENGTH | DATA_LENGTH |
+------------+--------------+-------------+
| domains    |    475004928 |   431898624 |
| records    |   5816451072 |  5696913408 |
+------------+--------------+-------------+

  Queries per second:   10918.386216 qps

Just to make sure: I ran all these tests with distributor-threads=32. The
default isn't that effective:

  Queries per second:   5656.248028 qps

So nearly half with the default distributor-threads=3 :(

The last step for today was to stop indexing the full name column.
So after dropping nametype_index:
CREATE INDEX `nametype_index` ON records(name(100), type);
None of my records came anywhere near 100 chars. This saves again:

+------------+--------------+-------------+
| TABLE_NAME | INDEX_LENGTH | DATA_LENGTH |
+------------+--------------+-------------+
| domains    |    475004928 |   431898624 |
| records    |   3547332608 |  5696913408 |
+------------+--------------+-------------+

  Queries per second:   10923.212970 qps
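
If you want to sanity-check a prefix length like name(100) before relying
on it, a quick probe is (sketch):

```sql
-- Fraction of distinct full names still distinguished by the first
-- 100 chars; 1.0 means the prefix index loses no selectivity.
SELECT COUNT(DISTINCT LEFT(name, 100)) / COUNT(DISTINCT name) AS selectivity
  FROM records;
```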


So no performance drop while doing all that stuff. Now it is easier to fit
more data into memory, so queries that are not hot should benefit too.
Take care with distributor-threads.
There are maybe some more things to check, e.g. more NOT NULL, UNSIGNED INT
instead of INT, etc.
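
E.g. along these lines (hypothetical; verify against your own data ranges
before changing anything):

```sql
-- Smaller, stricter column types: ids, TTLs and priorities are never
-- negative, so UNSIGNED doubles the usable range at the same width.
ALTER TABLE records
    MODIFY domain_id INT UNSIGNED NOT NULL,
    MODIFY ttl       INT UNSIGNED NOT NULL DEFAULT 3600,
    MODIFY prio      INT UNSIGNED DEFAULT NULL;
```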

I'm going to check against PBXT and MySQL 5.5 as well. Looking into
MySQL Cluster should be fun, too.

As I'm missing live/real data, I would like to get in contact with someone
who has live/real data.

Comments welcome!

Regards
Erkan


