The branch, master has been updated
via c9d61db ctdb-build: Exit if requested feature cannot be built
via 9f54b6b ctdb-daemon: Log ctdb socket in the main daemon
via 5a67bb4 ctdb-pmda: CTDB client code does not require ctdb->methods
via ca35d81 ctdb-daemon: Check if method is initialized before calling
via e582d21 ctdb-ib: Include system/wait.h for signal
via d9b6cd0 ctdb-client: Expose ctdb_ltdb_fetch in client API
via a660630 ctdb-client: Add debug messages to client db api
via 1bd97e6 ctdb-client: Fix implementation of transaction cancel
via 8310722 ctdb-client: Add async version of transaction cancel
via f1b6bdb ctdb-client: Fix implementation of transaction commit
via f721035 ctdb-client: Fix implementation of transaction start
via 37f587d ctdb-client: During transaction commit fetch seqnum locally
via da03f58 ctdb-client: Release the g_lock record once the update is done
via 4ff8f60 ctdb-client: Remove commented old g_lock implementation code
via b762d77 ctdb-client: Release g_lock lock before retrying
via 8b8e73a ctdb-client: Fix g_lock implementation
via 3ed3d46 ctdb-client: If g_lock lock conflicts, try again sooner
via 3888439 ctdb-client: Factor out ctdb_client_get_server_id function
via f00319f ctdb-client: Use async version of delete_record in g_lock unlock
via f0e331b ctdb-client: Fix implementation of delete_record
via 5e2bd64 ctdb-client: Add async version of delete_record
via 5ce8ac8 ctdb-client: Fix ctdb_rec_buffer traversal routine
via 3da13a8 ctdb-client: Add sync version of sending multiple messages
via 7c8c6ce ctdb-daemon: Improve log message
via e6818c8 ctdb-recoverd: Improve election win messages
from 978bc86 kerberos: Return enc data on PREAUTH_FAILED
https://git.samba.org/?p=samba.git;a=shortlog;h=master
- Log -----------------------------------------------------------------
commit c9d61dbbfb595899c69bbf6c827ac6bf46e74214
Author: Amitay Isaacs <[email protected]>
Date: Mon Jun 27 18:26:34 2016 +1000
ctdb-build: Exit if requested feature cannot be built
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
Autobuild-User(master): Martin Schwenke <[email protected]>
Autobuild-Date(master): Tue Jul 5 14:38:30 CEST 2016 on sn-devel-144
commit 9f54b6b67c912f1ca954b87138aeeefa522e8029
Author: Amitay Isaacs <[email protected]>
Date: Mon Jun 27 18:17:38 2016 +1000
ctdb-daemon: Log ctdb socket in the main daemon
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit 5a67bb449655b2de0d6c7c905833a23956633f91
Author: Amitay Isaacs <[email protected]>
Date: Mon Jun 27 18:37:27 2016 +1000
ctdb-pmda: CTDB client code does not require ctdb->methods
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit ca35d8149d987258440ed2f8746a953ad74effdf
Author: Amitay Isaacs <[email protected]>
Date: Mon Jun 27 18:00:49 2016 +1000
ctdb-daemon: Check if method is initialized before calling
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit e582d2153742d89478d0eba47c1b948199d245ec
Author: Amitay Isaacs <[email protected]>
Date: Mon Jun 27 17:28:59 2016 +1000
ctdb-ib: Include system/wait.h for signal
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit d9b6cd023686ff31d8ff83a5a98e7d1200f15e67
Author: Amitay Isaacs <[email protected]>
Date: Wed Apr 20 14:18:55 2016 +1000
ctdb-client: Expose ctdb_ltdb_fetch in client API
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit a66063069e28b1bf0a425f477bf63095ee497717
Author: Amitay Isaacs <[email protected]>
Date: Mon Apr 18 15:56:00 2016 +1000
ctdb-client: Add debug messages to client db api
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit 1bd97e6d6ff8a99fae28c29be9bbaab93802e986
Author: Amitay Isaacs <[email protected]>
Date: Fri Jul 1 17:53:17 2016 +1000
ctdb-client: Fix implementation of transaction cancel
Wrap the async transaction cancel to unlock the g_lock lock and free the
transaction handle.
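As a sketch, the resulting sync cancel reduces to the usual
send/poll/recv shape over the new async pair (memory context, event
context and timeout handling are simplified here):

    struct tevent_req *req;
    int ret;
    bool status;

    /* Drive the async cancel to completion; on success this unlocks
     * the g_lock lock and frees the transaction handle h. */
    req = ctdb_transaction_cancel_send(mem_ctx, ev,
                                       tevent_timeval_zero(), h);
    if (req == NULL) {
            return ENOMEM;
    }
    tevent_req_poll(req, ev);
    status = ctdb_transaction_cancel_recv(req, &ret);
    if (! status) {
            return ret;
    }
    return 0;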
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit 831072292212fb4e63ab6ebf51ccefd1547b98d0
Author: Amitay Isaacs <[email protected]>
Date: Thu Apr 21 17:47:43 2016 +1000
ctdb-client: Add async version of transaction cancel
Transaction cancel should get rid of the g_lock lock.
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit f1b6bdb6190a3902d7eb5698d8b9562abe3bd72c
Author: Amitay Isaacs <[email protected]>
Date: Fri Apr 15 17:44:14 2016 +1000
ctdb-client: Fix implementation of transaction commit
There is no need to explicitly check that recovery is not active before
sending the TRANS3_COMMIT control. Just send TRANS3_COMMIT; if recovery
occurs before the control completes, the control will fail and can be
retried.
Make sure the g_lock lock is released after the transaction is complete.
Also, add a timeout to the client API.
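The retry behaviour, as a rough sketch (both helper names below are
hypothetical; the real logic lives in the async commit code path):

    int ret;

    do {
            /* Send TRANS3_COMMIT; if a recovery interrupted the
             * control, it fails and is simply sent again. */
            ret = trans3_commit_once(h);               /* hypothetical */
    } while (ret != 0 && failed_due_to_recovery(ret)); /* hypothetical */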
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit f721035d4339d910c3cf479eee93b6539f2ef05a
Author: Amitay Isaacs <[email protected]>
Date: Fri Apr 15 17:44:14 2016 +1000
ctdb-client: Fix implementation of transaction start
Since g_lock checks whether the holding process exists in case of a
conflicting lock, there is no need to register a srvid.
Transaction start returns a transaction handle, and transaction
commit/cancel will free that handle. Since async code cannot be called
from a talloc destructor, this avoids using a talloc destructor to
cancel the transaction.
If the user frees the transaction handle instead of calling transaction
cancel, a stale g_lock lock is left behind; it will get cleaned up on
the next transaction attempt.
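In terms of usage, the intended handle lifecycle looks roughly like
this (the ctdb_transaction_start() parameter list here is assumed from
client.h; check the header for the exact prototype):

    struct ctdb_transaction_handle *h;
    int ret;

    ret = ctdb_transaction_start(mem_ctx, ev, client,
                                 tevent_timeval_zero(), db,
                                 false /* readonly */, &h);
    if (ret != 0) {
            return ret;
    }

    ret = ctdb_transaction_store_record(h, key, data);
    if (ret != 0) {
            ctdb_transaction_cancel(h);  /* frees h, releases g_lock */
            return ret;
    }

    return ctdb_transaction_commit(h);   /* frees h on success */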
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit 37f587de7afd01d0b663be722a28acffc7da9c39
Author: Amitay Isaacs <[email protected]>
Date: Tue Apr 19 16:24:05 2016 +1000
ctdb-client: During transaction commit fetch seqnum locally
This avoids sending extra controls to the server.
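With ctdb_ltdb_fetch() exported by this series, the sequence number can
be read straight from the local tdb copy. A sketch, assuming the
reserved key name "__db_sequence_number__" and a uint64_t payload:

    struct ctdb_ltdb_header header;
    TDB_DATA key, data;
    uint64_t seqnum = 0;
    int ret;

    key.dptr = (uint8_t *)"__db_sequence_number__";  /* assumed key */
    key.dsize = strlen((char *)key.dptr) + 1;

    ret = ctdb_ltdb_fetch(db, key, &header, mem_ctx, &data);
    if (ret == 0 && data.dsize == sizeof(uint64_t)) {
            seqnum = *(uint64_t *)data.dptr;  /* no daemon round-trip */
    }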
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit da03f5883e81dc6d35b83241dbfb55c92f4754a6
Author: Amitay Isaacs <[email protected]>
Date: Tue Apr 19 15:35:55 2016 +1000
ctdb-client: Release the g_lock record once the update is done
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit 4ff8f603074031afd844cc92058e38b30bd05902
Author: Amitay Isaacs <[email protected]>
Date: Thu Jun 16 16:10:20 2016 +1000
ctdb-client: Remove commented old g_lock implementation code
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit b762d771697b80c35daf4f874143a8e74b703d2d
Author: Amitay Isaacs <[email protected]>
Date: Thu Jun 16 16:17:39 2016 +1000
ctdb-client: Release g_lock lock before retrying
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit 8b8e73a3105762828a1ae7ed0bde9f81829f40ef
Author: Amitay Isaacs <[email protected]>
Date: Wed Apr 20 11:30:21 2016 +1000
ctdb-client: Fix g_lock implementation
If a conflicting g_lock entry is found, check whether the process
holding it still exists. This matches the Samba implementation.
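The conflict handling then looks roughly like the following
(server_id_exists() is a hypothetical stand-in for the real liveness
check, and the lock-list types are abbreviated):

    unsigned int i;
    bool conflict = false;

    for (i = 0; i < lock_list->num; i++) {
            struct ctdb_g_lock *lock = &lock_list->lock[i];

            if (! server_id_exists(client, &lock->sid)) {
                    continue;  /* holder is dead, entry is stale */
            }
            conflict = true;   /* live holder: back off and retry */
            break;
    }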
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit 3ed3d460b7aeb12ed44455d9a77452c74e38517b
Author: Amitay Isaacs <[email protected]>
Date: Tue Apr 19 17:37:46 2016 +1000
ctdb-client: If g_lock lock conflicts, try again sooner
Instead of delaying for 1 second, try to take the g_lock lock again
after 1 millisecond.
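With tevent this is just a shorter wakeup interval; a sketch (the
callback name is illustrative):

    /* Re-arm the g_lock attempt ~1ms out (1000us) instead of 1s. */
    subreq = tevent_wakeup_send(state, ev,
                                tevent_timeval_current_ofs(0, 1000));
    if (tevent_req_nomem(subreq, req)) {
            return;
    }
    tevent_req_set_callback(subreq, ctdb_g_lock_lock_retry, req);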
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit 3888439971afd81745e275086fd1142dd78b8aa8
Author: Amitay Isaacs <[email protected]>
Date: Tue Apr 19 15:24:11 2016 +1000
ctdb-client: Factor out ctdb_client_get_server_id function
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit f00319f0bcbeab728f78748b487154b75f84f796
Author: Amitay Isaacs <[email protected]>
Date: Thu Jun 16 16:22:43 2016 +1000
ctdb-client: Use async version of delete_record in g_lock unlock
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit f0e331b70459c2de198894c7b591a03d014ae88b
Author: Amitay Isaacs <[email protected]>
Date: Thu Jun 16 16:34:39 2016 +1000
ctdb-client: Fix implementation of delete_record
In delete_record, a sync call to ctdb_ctrl_schedule_for_deletion would
cause nested event loops. Instead, wrap the async version.
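The fix follows the usual sync-over-async shape, driving a single event
loop with tevent_req_poll() instead of issuing a blocking control from
inside a callback (the complete version is in the diff below):

    req = ctdb_delete_record_send(mem_ctx, ev, h);
    if (req == NULL) {
            return ENOMEM;
    }
    tevent_req_poll(req, ev);
    status = ctdb_delete_record_recv(req, &ret);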
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit 5e2bd64cf2dc36251bba02ef58ecc14566db0aca
Author: Amitay Isaacs <[email protected]>
Date: Mon Apr 18 16:14:05 2016 +1000
ctdb-client: Add async version of delete_record
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit 5ce8ac85c7f7fbec8d8b4345e223a5c2c953976d
Author: Amitay Isaacs <[email protected]>
Date: Tue Apr 19 16:01:05 2016 +1000
ctdb-client: Fix ctdb_rec_buffer traversal routine
Since commit 1ee7053180057ea526870182b5619a206b4d103b,
ctdb_rec_buffer_traverse always passes NULL for the header, so
explicitly extract the header from the data.
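A sketch of the extraction (the helper shown is an assumption; the
point is to peel the packed ctdb_ltdb_header off the front of the
record data before handing the rest to the parser):

    struct ctdb_ltdb_header header;
    int ret;

    ret = ctdb_ltdb_header_extract(&data, &header);  /* assumed helper */
    if (ret != 0) {
            return ret;
    }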
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit 3da13a886ddbc789a618ba909552241fda5ffce9
Author: Amitay Isaacs <[email protected]>
Date: Fri Apr 1 16:51:47 2016 +1100
ctdb-client: Add sync version of sending multiple messages
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit 7c8c6ce74e69845fc7a57ab8a678c94d759129f9
Author: Amitay Isaacs <[email protected]>
Date: Mon Jul 4 14:38:28 2016 +1000
ctdb-daemon: Improve log message
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
commit e6818c8e3c3989c5c098e1867f45af6f1968822f
Author: Amitay Isaacs <[email protected]>
Date: Mon Jul 4 14:30:17 2016 +1000
ctdb-recoverd: Improve election win messages
Logging that a node has lost the election is less useful than knowing
which node has won it.
Signed-off-by: Amitay Isaacs <[email protected]>
Reviewed-by: Martin Schwenke <[email protected]>
-----------------------------------------------------------------------
Summary of changes:
ctdb/client/client.h | 30 ++
ctdb/client/client_db.c | 764 ++++++++++++++++++++++++++-----------------
ctdb/client/client_message.c | 28 ++
ctdb/client/client_util.c | 15 +
ctdb/ib/ibwrapper_test.c | 1 +
ctdb/server/ctdb_daemon.c | 4 +-
ctdb/server/ctdb_fork.c | 2 +-
ctdb/server/ctdb_recover.c | 8 +-
ctdb/utils/pmda/pmda_ctdb.c | 4 -
ctdb/wscript | 2 +
10 files changed, 552 insertions(+), 306 deletions(-)
Changeset truncated at 500 lines:
diff --git a/ctdb/client/client.h b/ctdb/client/client.h
index 2aca4b5..b7fa7fb 100644
--- a/ctdb/client/client.h
+++ b/ctdb/client/client.h
@@ -92,6 +92,13 @@ int ctdb_client_message(TALLOC_CTX *mem_ctx, struct tevent_context *ev,
struct ctdb_client_context *client,
uint32_t destnode, struct ctdb_req_message *message);
+int ctdb_client_message_multi(TALLOC_CTX *mem_ctx,
+ struct tevent_context *ev,
+ struct ctdb_client_context *client,
+ uint32_t *pnn_list, int count,
+ struct ctdb_req_message *message,
+ int **perr_list);
+
struct tevent_req *ctdb_client_set_message_handler_send(
TALLOC_CTX *mem_ctx,
struct tevent_context *ev,
@@ -745,6 +752,10 @@ int ctdb_db_traverse(struct ctdb_db_context *db, bool readonly,
bool extract_header,
ctdb_rec_parser_func_t parser, void *private_data);
+int ctdb_ltdb_fetch(struct ctdb_db_context *db, TDB_DATA key,
+ struct ctdb_ltdb_header *header,
+ TALLOC_CTX *mem_ctx, TDB_DATA *data);
+
struct tevent_req *ctdb_fetch_lock_send(TALLOC_CTX *mem_ctx,
struct tevent_context *ev,
struct ctdb_client_context *client,
@@ -764,6 +775,12 @@ int ctdb_fetch_lock(TALLOC_CTX *mem_ctx, struct tevent_context *ev,
int ctdb_store_record(struct ctdb_record_handle *h, TDB_DATA data);
+struct tevent_req *ctdb_delete_record_send(TALLOC_CTX *mem_ctx,
+ struct tevent_context *ev,
+ struct ctdb_record_handle *h);
+
+bool ctdb_delete_record_recv(struct tevent_req *req, int *perr);
+
int ctdb_delete_record(struct ctdb_record_handle *h);
struct tevent_req *ctdb_g_lock_lock_send(TALLOC_CTX *mem_ctx,
@@ -815,12 +832,21 @@ int ctdb_transaction_delete_record(struct ctdb_transaction_handle *h,
struct tevent_req *ctdb_transaction_commit_send(
TALLOC_CTX *mem_ctx,
struct tevent_context *ev,
+ struct timeval timeout,
struct ctdb_transaction_handle *h);
bool ctdb_transaction_commit_recv(struct tevent_req *req, int *perr);
int ctdb_transaction_commit(struct ctdb_transaction_handle *h);
+struct tevent_req *ctdb_transaction_cancel_send(
+ TALLOC_CTX *mem_ctx,
+ struct tevent_context *ev,
+ struct timeval timeout,
+ struct ctdb_transaction_handle *h);
+
+bool ctdb_transaction_cancel_recv(struct tevent_req *req, int *perr);
+
int ctdb_transaction_cancel(struct ctdb_transaction_handle *h);
/* from client/client_util.c */
@@ -841,6 +867,10 @@ int ctdb_ctrl_modflags(TALLOC_CTX *mem_ctx, struct tevent_context *ev,
uint32_t destnode, struct timeval timeout,
uint32_t set, uint32_t clear);
+struct ctdb_server_id ctdb_client_get_server_id(
+ struct ctdb_client_context *client,
+ uint32_t task_id);
+
bool ctdb_server_id_equal(struct ctdb_server_id *sid1,
struct ctdb_server_id *sid2);
diff --git a/ctdb/client/client_db.c b/ctdb/client/client_db.c
index 85d14e3..98de1b8 100644
--- a/ctdb/client/client_db.c
+++ b/ctdb/client/client_db.c
@@ -121,6 +121,9 @@ static void ctdb_set_db_flags_nodemap_done(struct tevent_req *subreq)
status = ctdb_client_control_recv(subreq, &ret, state, &reply);
TALLOC_FREE(subreq);
if (! status) {
+ DEBUG(DEBUG_ERR,
+ ("set_db_flags: 0x%08x GET_NODEMAP failed, ret=%d\n",
+ state->db_id, ret));
tevent_req_error(req, ret);
return;
}
@@ -128,6 +131,9 @@ static void ctdb_set_db_flags_nodemap_done(struct tevent_req *subreq)
ret = ctdb_reply_control_get_nodemap(reply, state, &nodemap);
talloc_free(reply);
if (ret != 0) {
+ DEBUG(DEBUG_ERR,
+ ("set_db_flags: 0x%08x GET_NODEMAP parse failed,
ret=%d\n",
+ state->db_id, ret));
tevent_req_error(req, ret);
return;
}
@@ -136,6 +142,9 @@ static void ctdb_set_db_flags_nodemap_done(struct tevent_req *subreq)
state, &state->pnn_list);
talloc_free(nodemap);
if (state->count <= 0) {
+ DEBUG(DEBUG_ERR,
+ ("set_db_flags: 0x%08x no connected nodes, count=%d\n",
+ state->db_id, state->count));
tevent_req_error(req, ENOMEM);
return;
}
@@ -184,6 +193,9 @@ static void ctdb_set_db_flags_readonly_done(struct tevent_req *subreq)
NULL);
TALLOC_FREE(subreq);
if (! status) {
+ DEBUG(DEBUG_ERR,
+ ("set_db_flags: 0x%08x SET_DB_READONLY failed, ret=%d\n",
+ state->db_id, ret));
tevent_req_error(req, ret);
return;
}
@@ -208,6 +220,9 @@ static void ctdb_set_db_flags_sticky_done(struct tevent_req *subreq)
NULL);
TALLOC_FREE(subreq);
if (! status) {
+ DEBUG(DEBUG_ERR,
+ ("set_db_flags: 0x%08x SET_DB_STICKY failed, ret=%d\n",
+ state->db_id, ret));
tevent_req_error(req, ret);
return;
}
@@ -316,6 +331,8 @@ static void ctdb_attach_mutex_done(struct tevent_req *subreq)
status = ctdb_client_control_recv(subreq, &ret, state, &reply);
TALLOC_FREE(subreq);
if (! status) {
+ DEBUG(DEBUG_ERR, ("attach: %s GET_TUNABLE failed, ret=%d\n",
+ state->db->db_name, ret));
tevent_req_error(req, ret);
return;
}
@@ -368,6 +385,12 @@ static void ctdb_attach_dbid_done(struct tevent_req *subreq)
status = ctdb_client_control_recv(subreq, &ret, state, &reply);
TALLOC_FREE(subreq);
if (! status) {
+ DEBUG(DEBUG_ERR, ("attach: %s %s failed, ret=%d\n",
+ state->db->db_name,
+ (state->db->persistent
+ ? "DB_ATTACH_PERSISTENT"
+ : "DB_ATTACH"),
+ ret));
tevent_req_error(req, ret);
return;
}
@@ -380,6 +403,8 @@ static void ctdb_attach_dbid_done(struct tevent_req *subreq)
}
talloc_free(reply);
if (ret != 0) {
+ DEBUG(DEBUG_ERR, ("attach: %s failed to get db_id, ret=%d\n",
+ state->db->db_name, ret));
tevent_req_error(req, ret);
return;
}
@@ -408,6 +433,8 @@ static void ctdb_attach_dbpath_done(struct tevent_req *subreq)
status = ctdb_client_control_recv(subreq, &ret, state, &reply);
TALLOC_FREE(subreq);
if (! status) {
+ DEBUG(DEBUG_ERR, ("attach: %s GETDBPATH failed, ret=%d\n",
+ state->db->db_name, ret));
tevent_req_error(req, ret);
return;
}
@@ -416,6 +443,8 @@ static void ctdb_attach_dbpath_done(struct tevent_req *subreq)
&state->db->db_path);
talloc_free(reply);
if (ret != 0) {
+ DEBUG(DEBUG_ERR, ("attach: %s GETDBPATH parse failed, ret=%d\n",
+ state->db->db_name, ret));
tevent_req_error(req, ret);
return;
}
@@ -444,19 +473,25 @@ static void ctdb_attach_health_done(struct tevent_req *subreq)
status = ctdb_client_control_recv(subreq, &ret, state, &reply);
TALLOC_FREE(subreq);
if (! status) {
+ DEBUG(DEBUG_ERR, ("attach: %s DB_GET_HEALTH failed, ret=%d\n",
+ state->db->db_name, ret));
tevent_req_error(req, ret);
return;
}
ret = ctdb_reply_control_db_get_health(reply, state, &reason);
if (ret != 0) {
+ DEBUG(DEBUG_ERR,
+ ("attach: %s DB_GET_HEALTH parse failed, ret=%d\n",
+ state->db->db_name, ret));
tevent_req_error(req, ret);
return;
}
if (reason != NULL) {
/* Database unhealthy, avoid attach */
- /* FIXME: Log here */
+ DEBUG(DEBUG_ERR, ("attach: %s database unhealthy (%s)\n",
+ state->db->db_name, reason));
tevent_req_error(req, EIO);
return;
}
@@ -482,6 +517,8 @@ static void ctdb_attach_flags_done(struct tevent_req *subreq)
status = ctdb_set_db_flags_recv(subreq, &ret);
TALLOC_FREE(subreq);
if (! status) {
+ DEBUG(DEBUG_ERR, ("attach: %s set db flags 0x%08x failed\n",
+ state->db->db_name, state->db_flags));
tevent_req_error(req, ret);
return;
}
@@ -489,6 +526,8 @@ static void ctdb_attach_flags_done(struct tevent_req *subreq)
state->db->ltdb = tdb_wrap_open(state->db, state->db->db_path, 0,
state->tdb_flags, O_RDWR, 0);
if (tevent_req_nomem(state->db->ltdb, req)) {
+ DEBUG(DEBUG_ERR, ("attach: %s tdb_wrap_open failed\n",
+ state->db->db_name));
return;
}
DLIST_ADD(state->client->db, state->db);
@@ -648,9 +687,9 @@ int ctdb_db_traverse(struct ctdb_db_context *db, bool readonly,
return state.error;
}
-static int ctdb_ltdb_fetch(struct ctdb_db_context *db, TDB_DATA key,
- struct ctdb_ltdb_header *header,
- TALLOC_CTX *mem_ctx, TDB_DATA *data)
+int ctdb_ltdb_fetch(struct ctdb_db_context *db, TDB_DATA key,
+ struct ctdb_ltdb_header *header,
+ TALLOC_CTX *mem_ctx, TDB_DATA *data)
{
TDB_DATA rec;
int ret;
@@ -757,6 +796,8 @@ struct tevent_req *ctdb_fetch_lock_send(TALLOC_CTX *mem_ctx,
/* Check that database is not persistent */
if (db->persistent) {
+ DEBUG(DEBUG_ERR, ("fetch_lock: %s database not volatile\n",
+ db->db_name));
tevent_req_error(req, EINVAL);
return tevent_req_post(req, ev);
}
@@ -783,8 +824,11 @@ static int ctdb_fetch_lock_check(struct tevent_req *req)
int ret, err = 0;
bool do_migrate = false;
- ret = tdb_chainlock(state->h->db->ltdb->tdb, state->h->key);
+ ret = tdb_chainlock(h->db->ltdb->tdb, h->key);
if (ret != 0) {
+ DEBUG(DEBUG_ERR,
+ ("fetch_lock: %s tdb_chainlock failed, %s\n",
+ h->db->db_name, tdb_errorstr(h->db->ltdb->tdb)));
err = EIO;
goto failed;
}
@@ -844,8 +888,9 @@ failed:
}
ret = tdb_chainunlock(h->db->ltdb->tdb, h->key);
if (ret != 0) {
- DEBUG(DEBUG_ERR, ("tdb_chainunlock failed on %s\n",
- h->db->db_name));
+ DEBUG(DEBUG_ERR,
+ ("fetch_lock: %s tdb_chainunlock failed, %s\n",
+ h->db->db_name, tdb_errorstr(h->db->ltdb->tdb)));
return EIO;
}
@@ -894,6 +939,8 @@ static void ctdb_fetch_lock_migrate_done(struct tevent_req *subreq)
status = ctdb_client_call_recv(subreq, state, &reply, &ret);
TALLOC_FREE(subreq);
if (! status) {
+ DEBUG(DEBUG_ERR, ("fetch_lock: %s CALL failed, ret=%d\n",
+ state->h->db->db_name, ret));
tevent_req_error(req, ret);
return;
}
@@ -917,7 +964,14 @@ static void ctdb_fetch_lock_migrate_done(struct tevent_req *subreq)
static int ctdb_record_handle_destructor(struct ctdb_record_handle *h)
{
- tdb_chainunlock(h->db->ltdb->tdb, h->key);
+ int ret;
+
+ ret = tdb_chainunlock(h->db->ltdb->tdb, h->key);
+ if (ret != 0) {
+ DEBUG(DEBUG_ERR,
+ ("fetch_lock: %s tdb_chainunlock failed, %s\n",
+ h->db->db_name, tdb_errorstr(h->db->ltdb->tdb)));
+ }
free(h->data.dptr);
return 0;
}
@@ -934,6 +988,7 @@ struct ctdb_record_handle *ctdb_fetch_lock_recv(struct tevent_req *req,
if (tevent_req_is_unix_error(req, &err)) {
if (perr != NULL) {
+ TALLOC_FREE(state->h);
*perr = err;
}
return NULL;
@@ -1019,8 +1074,9 @@ int ctdb_store_record(struct ctdb_record_handle *h, TDB_DATA data)
ret = tdb_store(h->db->ltdb->tdb, h->key, rec, TDB_REPLACE);
if (ret != 0) {
- DEBUG(DEBUG_ERR, ("Failed to store record in DB %s\n",
- h->db->db_name));
+ DEBUG(DEBUG_ERR,
+ ("store_record: %s tdb_store failed, %s\n",
+ h->db->db_name, tdb_errorstr(h->db->ltdb->tdb)));
return EIO;
}
@@ -1028,21 +1084,43 @@ int ctdb_store_record(struct ctdb_record_handle *h, TDB_DATA data)
return 0;
}
-int ctdb_delete_record(struct ctdb_record_handle *h)
+struct ctdb_delete_record_state {
+ struct ctdb_record_handle *h;
+};
+
+static void ctdb_delete_record_done(struct tevent_req *subreq);
+
+struct tevent_req *ctdb_delete_record_send(TALLOC_CTX *mem_ctx,
+ struct tevent_context *ev,
+ struct ctdb_record_handle *h)
{
- TDB_DATA rec;
+ struct tevent_req *req, *subreq;
+ struct ctdb_delete_record_state *state;
struct ctdb_key_data key;
+ struct ctdb_req_control request;
+ TDB_DATA rec;
int ret;
+ req = tevent_req_create(mem_ctx, &state,
+ struct ctdb_delete_record_state);
+ if (req == NULL) {
+ return NULL;
+ }
+
+ state->h = h;
+
/* Cannot delete the record if it was obtained as a readonly copy */
if (h->readonly) {
- return EINVAL;
+ DEBUG(DEBUG_ERR, ("fetch_lock delete: %s readonly record\n",
+ h->db->db_name));
+ tevent_req_error(req, EINVAL);
+ return tevent_req_post(req, ev);
}
rec.dsize = ctdb_ltdb_header_len(&h->header);
rec.dptr = talloc_size(h, rec.dsize);
- if (rec.dptr == NULL) {
- return ENOMEM;
+ if (tevent_req_nomem(rec.dptr, req)) {
+ return tevent_req_post(req, ev);
}
ctdb_ltdb_header_push(&h->header, rec.dptr);
@@ -1050,22 +1128,91 @@ int ctdb_delete_record(struct ctdb_record_handle *h)
ret = tdb_store(h->db->ltdb->tdb, h->key, rec, TDB_REPLACE);
talloc_free(rec.dptr);
if (ret != 0) {
- DEBUG(DEBUG_ERR, ("Failed to delete record in DB %s\n",
- h->db->db_name));
- return EIO;
+ DEBUG(DEBUG_ERR,
+ ("fetch_lock delete: %s tdb_sore failed, %s\n",
+ h->db->db_name, tdb_errorstr(h->db->ltdb->tdb)));
+ tevent_req_error(req, EIO);
+ return tevent_req_post(req, ev);
}
key.db_id = h->db->db_id;
key.header = h->header;
key.key = h->key;
- ret = ctdb_ctrl_schedule_for_deletion(h, h->ev, h->client,
- h->client->pnn,
- tevent_timeval_zero(), &key);
- if (ret != 0) {
- DEBUG(DEBUG_WARNING,
- ("Failed to mark record to be deleted in DB %s\n",
- h->db->db_name));
+ ctdb_req_control_schedule_for_deletion(&request, &key);
+ subreq = ctdb_client_control_send(state, ev, h->client,
+ ctdb_client_pnn(h->client),
+ tevent_timeval_zero(),
+ &request);
+ if (tevent_req_nomem(subreq, req)) {
+ return tevent_req_post(req, ev);
+ }
+ tevent_req_set_callback(subreq, ctdb_delete_record_done, req);
+
+ return req;
+}
+
+static void ctdb_delete_record_done(struct tevent_req *subreq)
+{
+ struct tevent_req *req = tevent_req_callback_data(
+ subreq, struct tevent_req);
+ struct ctdb_delete_record_state *state = tevent_req_data(
+ req, struct ctdb_delete_record_state);
+ int ret;
+ bool status;
+
+ status = ctdb_client_control_recv(subreq, &ret, NULL, NULL);
+ TALLOC_FREE(subreq);
+ if (! status) {
+ DEBUG(DEBUG_ERR,
+ ("delete_record: %s SCHDULE_FOR_DELETION failed, "
+ "ret=%d\n", state->h->db->db_name, ret));
+ tevent_req_error(req, ret);
+ return;
+ }
+
+ tevent_req_done(req);
+}
+
+bool ctdb_delete_record_recv(struct tevent_req *req, int *perr)
+{
+ int err;
+
+ if (tevent_req_is_unix_error(req, &err)) {
+ if (perr != NULL) {
+ *perr = err;
+ }
+ return false;
+ }
+
+ return true;
+}
+
+
+int ctdb_delete_record(struct ctdb_record_handle *h)
+{
+ struct tevent_context *ev = h->ev;
+ TALLOC_CTX *mem_ctx;
+ struct tevent_req *req;
+ int ret;
+ bool status;
+
+ mem_ctx = talloc_new(NULL);
+ if (mem_ctx == NULL) {
+ return ENOMEM;
+ }
+
+ req = ctdb_delete_record_send(mem_ctx, ev, h);
+ if (req == NULL) {
+ talloc_free(mem_ctx);
+ return ENOMEM;
+ }
+
+ tevent_req_poll(req, ev);
+
+ status = ctdb_delete_record_recv(req, &ret);
+ talloc_free(mem_ctx);
+ if (! status) {
return ret;
}
@@ -1151,6 +1298,8 @@ static void ctdb_g_lock_lock_fetched(struct tevent_req *subreq)
state->h = ctdb_fetch_lock_recv(subreq, NULL, state, &data, &ret);
TALLOC_FREE(subreq);
if (state->h == NULL) {
+ DEBUG(DEBUG_ERR, ("g_lock_lock: %s fetch lock failed\n",
+ (char *)state->key.dptr));
tevent_req_error(req, ret);
return;
}
@@ -1164,6 +1313,8 @@ static void ctdb_g_lock_lock_fetched(struct tevent_req *subreq)
&state->lock_list);
talloc_free(data.dptr);
if (ret != 0) {
+ DEBUG(DEBUG_ERR, ("g_lock_lock: %s invalid lock data\n",
+ (char *)state->key.dptr));
tevent_req_error(req, ret);
return;
}
@@ -1185,6 +1336,8 @@ static void ctdb_g_lock_lock_process_locks(struct tevent_req *req)
/* We should not ask for the same lock more than once */
if (ctdb_server_id_equal(&lock->sid, &state->my_sid)) {
+ DEBUG(DEBUG_ERR, ("g_lock_lock: %s deadlock\n",
+ (char *)state->key.dptr));
tevent_req_error(req, EDEADLK);
return;
}
@@ -1199,15 +1352,11 @@ static void ctdb_g_lock_lock_process_locks(struct tevent_req *req)
if (check_server) {
struct ctdb_req_control request;
- struct ctdb_uint64_array u64_array;
-
- u64_array.num = 1;
--
Samba Shared Repository