On Fri, 15 Jun 2018, Kiss Gabor (Bitman) wrote:
> Yesterday at 18:15 (CEST) keys.niif.hu started to produce tons
> of logs in /var/lib/sks/DB. In less than 2 hours the 40 GB filesystem
> filled up.
> Deleting files and restarting processes did not help:
> Unfortunately I cannot work on
Just a heads up for anyone trying to rebuild from the dump on
keyserver.mattrude.com...
Looks like something went wrong with the export, as today's dump is only
4GB while the previous day's is 11GB.
Compare the README.txt files:
http://keyserver.mattrude.com/dump/2018-06-17/README.txt
I'm not sure if there's a better way, but I rebuilt. If you've forgotten
how and you're on Debian, the following gist might help:
https://gist.github.com/paulfurley/b901428d1702c613531147f7573757fd
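For reference, the rebuild boils down to the standard steps from the SKS README; the service name, the dump path and the debian-sks user below are assumptions from a typical Debian install, so adjust for yours:

```
[root@sks ~]# service sks stop
[root@sks ~]# cd /var/lib/sks
[root@sks sks]# rm -rf KDB PTree               # DB/PTree on older layouts
[root@sks sks]# sks build dump/*.pgp -n 10 -cache 100
[root@sks sks]# sks cleandb
[root@sks sks]# sks pbuild -cache 20 -ptree_cache 70
[root@sks sks]# chown -R debian-sks: KDB PTree
[root@sks sks]# service sks start
```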
Kind regards,
Paul
On 18/06/18 10:47, Shengjing Zhu wrote:
Hi,
My server disk is also full of logs.
I tried to run db_archive, but the command never returns.
So I deleted all the log.* files, and now I can't start sks.
Is there anything I can do except rebuilding?
Thanks
Shengjing Zhu
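A note for anyone hitting this later: deleting log.* files that the environment still references generally can't be undone short of db_recover or a rebuild, but Berkeley DB's db_archive can identify which logs are actually disposable before you delete anything (options per the BDB utility docs; on Debian the binary may be named db5.3_archive or similar):

```
[root@sks ~]# db_archive -h /var/lib/sks/KDB      # list log files no longer needed
[root@sks ~]# db_archive -h /var/lib/sks/KDB -d   # delete exactly those
```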
On 16 Jun 2018, at 17:32, Paul Furley wrote:
This is a serious, serious flaw... I'm grateful to the individual for taking
the time to research and highlight this issue. Sure, not ideal that the
network is struggling as a result, but at least we'll have to find a way to
fix it!
Paul
Original Message
From: andr...@andrewg.com
Sent: 16 June 2018 4:02 pm
To: sks-devel@nongnu.org
Subject: Re: [Sks-devel] disk full, keys.niif.hu crashed
On 2018/06/15 22:42, tiker wrote:
> Well, it turns out that the cause of our issues, the method to re-create
> these keys and make things worse is already posted publicly.
There are two main ways in which critical internet infrastructure goes
on fire: a government TLA takes it down for nefarious
On 2018/06/16 00:49, James Cloos wrote:
> It is hard to check w/o knowing the key hash, but can iconv(1) decode
> that uid into utf8? Perhaps it is in one of the legacy 16bit encodings?
According to the person responsible, it's just random noise.
> "t" == tiker writes:
t> Here's a (temporary) link to an image of what I see:
t> http://www.funkymonkey.org/tmp/bigkey.jpg
It is hard to check w/o knowing the key hash, but can iconv(1) decode
that uid into utf8? Perhaps it is in one of the legacy 16bit encodings?
Can you get that uid
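James's iconv(1) check can be scripted; a sketch with stand-in data (uid.bin here is sample UTF-16LE bytes, not the real uid, and the encoding list is just a guess at plausible legacy encodings):

```shell
# Write stand-in bytes: UTF-16LE "Hi" with a byte-order mark.
# On a real investigation, uid.bin would hold the raw uid packet body.
printf '\377\376H\000i\000' > uid.bin
# Try each candidate encoding; print only the ones that decode cleanly.
for enc in UTF-16 UTF-16LE UTF-16BE SHIFT-JIS GB18030; do
  if out=$(iconv -f "$enc" -t UTF-8 uid.bin 2>/dev/null); then
    printf '%s: %s\n' "$enc" "$out"
  fi
done
```

If nothing decodes to sensible text, that supports the "random noise" theory rather than a legacy encoding.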
Well, it turns out that the cause of our issues, the method to re-create
these keys and make things worse is already posted publicly.
Take a look at the recently reported issues on the SKS bitbucket site.
I don't think my SKS node has enough storage space to survive long
enough for this issue to
I don't think so but I could be wrong. (I'm no expert here.)
Binary attachments (like images) are marked as "uat [contents
omitted]". In this case, it's a "uid" row that starts with binary data
instead of a text line showing a name.
Here's a (temporary) link to an image of what I see:
On 2018-06-15 at 12:40 -0400, tiker wrote:
The problems seem to be caused by a large key. There are at least 2
different hash values for this key (so it was probably recently updated) and
one of the versions of the key is 22 MB. The size is causing timeouts on
some reverse proxies, and the constant retries are causing the .log files
to be created.
This has happened to my keyserver twice in the last two days. I assumed
it was some sort of malicious behavior, because it happened quite
suddenly both times and had the effect of a DoS. ;-)
For example, I have over 1700 binary log files like "log.002014",
each 10 MB, created in the last 24 hours.
My little Raspberry Pi node is still online but its file system is also
filling up.
It's trying to get updated keys from its peers but is constantly failing
with:
2018-06-15 08:39:53 Error getting missing keys:
Invalid_argument("String.create")
(That failure is consistent with the 22 MB key: OCaml's String.create raises
Invalid_argument for sizes above Sys.max_string_length, which is roughly
16 MB on 32-bit platforms like the Pi.)
All of my peers have a different number of keys; some nodes have the db
cleanup, some nodes have logging.
[Graph of disk space attached]
There was definitely an injection of keys, will perform some clean up
ops later.
Kind Regards,
Mike
On 15/06/18 13:27, Paul M Furley wrote:
Glad I wasn't the only one :) keyserver.paulfurley.com also got
destroyed, rebuilt this morning.
I've been getting a lot of traffic alerts from my host lately (>200 MB
per hour). Does anyone know if there's a reason there's been a lot more
traffic recently?
I haven't yet managed to investigate if it's
FWIW, you can set the DB_LOG_AUTOREMOVE flag for the database - the logs
should be removed automatically
[root@instance-4 ~]# cat /var/lib/sks/KDB/DB_CONFIG
set_flags DB_LOG_AUTOREMOVE
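Since SKS keeps a second Berkeley DB environment under PTree, the flag is worth setting there too; DB_CONFIG is read the next time the environment is opened. A sketch that writes both files (SKS_BASE defaulting to a scratch directory is my assumption so the commands are safe to try anywhere; on a live server it would be /var/lib/sks, followed by an sks restart):

```shell
# Write the log-autoremove flag into both SKS database environments.
SKS_BASE=${SKS_BASE:-./sks-demo}
for db in KDB PTree; do
  mkdir -p "$SKS_BASE/$db"
  printf 'set_flags DB_LOG_AUTOREMOVE\n' > "$SKS_BASE/$db/DB_CONFIG"
done
```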
Best regards,
On 15.06.18 at 09:40, André Keller wrote:
Hi,
On 15.06.2018 05:54, Kiss Gabor (Bitman) wrote:
> Yesterday at 18:15 (CEST) keys.niif.hu started to produce tons
> of logs in /var/lib/sks/DB. In less than 2 hours the 40 GB filesystem
> filled up.
> Deleting files and restarting processes did not help:
keys.communityrack.org shares the