Hi

Sorry, I overlooked this mail.

[2022-08-30 13:23] "Tobias Fiebig" <tob...@reads-this-mailinglist.com>
> I just started to see some DoS issue on my OpenSMTPd with table-mysql as the 
> backend. Specifically, my server ran into the user lookup process eating a 
> full core and torturing the mysql
> server after some funny brute-force attempts came in. (writeup with graphs 
> here: 
> https://doing-stupid-things.as59645.net/mail/opensmtpd/mysql/2022/08/30/receiving-an-email.html
>  )
>
> After some amateur debugging on my side, it seems like the issue occurs if 
> the mysql table is latin1 (happens if following defaults and table-mysql man) 
> and something is shipped to
> opensmtpd which does not cleanly cast to latin1 (i.e., is not plain ascii), 
> as opensmtpd speaks UTF8 with mysql (again, my amateur analysis).

I hope this doesn't come across as too harsh, but this sounds more like a
configuration issue on your MySQL database, ...

> The query then fails/mysql kills the connection, and table-mysql retries the 
> connection with the same data leading to mysql... you get the idea, and this 
> then happens at 350+ queries/s. 

..., but table-mysql (and table-postgres) should handle this better.
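
For reference, the client side could at least make the connection charset
explicit instead of relying on library defaults. A rough sketch only, not
what table-mysql currently does: connect_with_charset() is a made-up helper,
"utf8mb4" is an assumption, and the table columns would still have to use a
compatible charset for the lookups to work.

#include <stddef.h>
#include <mysql.h>

/*
 * Sketch: pin the connection charset before connecting, so client and
 * server agree on how query parameters are encoded.
 */
MYSQL *
connect_with_charset(MYSQL *db, const char *host, const char *user,
    const char *pass, const char *dbname)
{
	mysql_options(db, MYSQL_SET_CHARSET_NAME, "utf8mb4");
	return mysql_real_connect(db, host, user, pass, dbname, 0, NULL, 0);
}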

> Would it make sense to have the db-table backends return a tempfailure (for 
> lookups for domain/forward/deliver we'd probably not want to reject mail due 
> to a DB failure) or error
> (auth etc.) if the same query fails like N (5 as default?) times in a row?

A patch limiting it to one retry is attached. I don't think it's necessary to
do more than one retry.
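
For illustration, both patches boil down to the same pattern: a counter
decremented at the retry label, bailing out once it goes negative. A minimal
standalone sketch (do_query() and query_with_retry() are made-up names, not
the actual table code):

#include <stddef.h>
#include <stdio.h>

/* hypothetical stand-in for the real prepared-statement execution */
static void *
do_query(void)
{
	return NULL;	/* pretend the backend keeps failing */
}

static void *
query_with_retry(void)
{
	int	 retry_times = 2;	/* one attempt plus one retry */
	void	*res;

retry:
	retry_times--;
	if (retry_times < 0) {
		fprintf(stderr, "warn: too many retries\n");
		return NULL;		/* give up instead of looping forever */
	}

	res = do_query();
	if (res == NULL)
		goto retry;

	return res;
}

int
main(void)
{
	return query_with_retry() == NULL ? 1 : 0;
}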

Philipp
From b3c7fa2346ea48c4c6737bcbbc999f0c48b8556d Mon Sep 17 00:00:00 2001
From: Philipp Takacs <phil...@bureaucracy.de>
Date: Mon, 13 May 2024 08:55:35 +0200
Subject: [PATCH] limit retries

---
 table_mysql.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/table_mysql.c b/table_mysql.c
index 0de7f78..2631d6c 100644
--- a/table_mysql.c
+++ b/table_mysql.c
@@ -344,9 +344,14 @@ table_mysql_query(const char *key, int service)
 	MYSQL_BIND	 param[1];
 	unsigned long	 keylen;
 	char		 buffer[LINE_MAX];
-	int		 i;
+	int		 i, retry_times = 2;
 
 retry:
+	retry_times--;
+	if (retry_times < 0) {
+		log_warnx("warn: table-mysql: too many retries");
+		return NULL;
+	}
 	stmt = NULL;
 	for (i = 0; i < SQL_MAX; i++) {
 		if (service == 1 << i) {
@@ -508,12 +513,17 @@ table_mysql_fetch(int service, struct dict *params, char *dst, size_t sz)
 {
 	MYSQL_STMT	*stmt;
 	const char	*k;
-	int		 s;
+	int		 s, retry_times = 2;
 
 	if (config->db == NULL && config_connect(config) == 0)
 		return -1;
 
 retry:
+	retry_times--;
+	if (retry_times < 0) {
+		log_warnx("warn: table-mysql: too many retries");
+		return -1;
+	}
 	if (service != K_SOURCE)
 		return -1;
 
-- 
2.39.2

From 7181583fb615a0b0ffbb75c3816baf58e5c4fdc6 Mon Sep 17 00:00:00 2001
From: Philipp Takacs <phil...@bureaucracy.de>
Date: Mon, 13 May 2024 00:50:53 +0200
Subject: [PATCH] limit retries

---
 table_postgres.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/table_postgres.c b/table_postgres.c
index d9f3303..715fad1 100644
--- a/table_postgres.c
+++ b/table_postgres.c
@@ -311,9 +311,14 @@ table_postgres_query(const char *key, int service)
 	PGresult	*res;
 	const char	*errfld;
 	char		*stmt;
-	int		 i;
+	int		 i, retry_times = 2;
 
 retry:
+	retry_times--;
+	if (retry_times < 0) {
+		log_warnx("warn: table-postgres: too many retries");
+		return NULL;
+	}
 	stmt = NULL;
 	for (i = 0; i < SQL_MAX; i++) {
 		if (service == 1 << i) {
@@ -444,12 +449,17 @@ table_postgres_fetch(int service, struct dict *params, char *dst, size_t sz)
 	char		*stmt;
 	PGresult	*res;
 	const char	*k, *errfld;
-	int		 i;
+	int		 i, retry_times = 2;
 
 	if (config->db == NULL && config_connect(config) == 0)
 		return -1;
 
 retry:
+	retry_times--;
+	if (retry_times < 0) {
+		log_warnx("warn: table-postgres: too many retries");
+		return -1;
+	}
 	if (service != K_SOURCE)
 		return -1;
 
-- 
2.39.2
