[PATCH v3 10/36] Convert find_unique_abbrev* to struct object_id

2018-03-11 Thread brian m. carlson
Convert find_unique_abbrev and find_unique_abbrev_r to each take a
pointer to struct object_id.
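
For context, a rough sketch of the interface change (simplified; the real
prototypes live in cache.h, and the fixed 20-byte hash member here is only an
assumption for the sketch):

    struct object_id {
            unsigned char hash[20];  /* GIT_MAX_RAWSZ in the real tree */
    };

    /* Old form: callers pass a raw hash buffer. */
    const char *find_unique_abbrev_old(const unsigned char *sha1, int len);

    /* New form after this patch: callers pass a struct object_id pointer. */
    const char *find_unique_abbrev(const struct object_id *oid, int len);
    int find_unique_abbrev_r(char *hex, const struct object_id *oid, int len);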

Signed-off-by: brian m. carlson 
---
 builtin/blame.c|  2 +-
 builtin/branch.c   |  2 +-
 builtin/checkout.c |  6 +++---
 builtin/describe.c |  4 ++--
 builtin/log.c  |  4 ++--
 builtin/ls-files.c |  4 ++--
 builtin/ls-tree.c  |  4 ++--
 builtin/merge.c|  6 +++---
 builtin/name-rev.c |  2 +-
 builtin/receive-pack.c |  8 
 builtin/reset.c|  2 +-
 builtin/rev-list.c |  2 +-
 builtin/rev-parse.c|  2 +-
 builtin/show-branch.c  |  2 +-
 builtin/show-ref.c |  4 ++--
 builtin/tag.c  |  6 --
 builtin/worktree.c |  4 ++--
 cache.h|  6 +++---
 combine-diff.c |  4 ++--
 diff.c |  2 +-
 log-tree.c | 12 ++--
 ref-filter.c   |  4 ++--
 sequencer.c|  2 +-
 sha1_name.c| 12 ++--
 strbuf.c   |  2 +-
 tag.c  |  4 ++--
 transport.c|  2 +-
 wt-status.c|  6 +++---
 28 files changed, 61 insertions(+), 59 deletions(-)

diff --git a/builtin/blame.c b/builtin/blame.c
index 9dcb367b90..b980e8a1dd 100644
--- a/builtin/blame.c
+++ b/builtin/blame.c
@@ -499,7 +499,7 @@ static int read_ancestry(const char *graft_file)
 
 static int update_auto_abbrev(int auto_abbrev, struct blame_origin *suspect)
 {
-   const char *uniq = find_unique_abbrev(suspect->commit->object.oid.hash,
+   const char *uniq = find_unique_abbrev(&suspect->commit->object.oid,
  auto_abbrev);
int len = strlen(uniq);
if (auto_abbrev < len)
diff --git a/builtin/branch.c b/builtin/branch.c
index 8dcc2ed058..659deb36f3 100644
--- a/builtin/branch.c
+++ b/builtin/branch.c
@@ -273,7 +273,7 @@ static int delete_branches(int argc, const char **argv, int 
force, int kinds,
   bname.buf,
   (flags & REF_ISBROKEN) ? "broken"
   : (flags & REF_ISSYMREF) ? target
-  : find_unique_abbrev(oid.hash, DEFAULT_ABBREV));
+  : find_unique_abbrev(&oid, DEFAULT_ABBREV));
}
delete_branch_config(bname.buf);
 
diff --git a/builtin/checkout.c b/builtin/checkout.c
index 5981e1a3ed..45968c2d85 100644
--- a/builtin/checkout.c
+++ b/builtin/checkout.c
@@ -405,10 +405,10 @@ static void describe_detached_head(const char *msg, 
struct commit *commit)
pp_commit_easy(CMIT_FMT_ONELINE, commit, &sb);
if (print_sha1_ellipsis()) {
fprintf(stderr, "%s %s... %s\n", msg,
-   find_unique_abbrev(commit->object.oid.hash, DEFAULT_ABBREV), sb.buf);
+   find_unique_abbrev(&commit->object.oid, DEFAULT_ABBREV), sb.buf);
} else {
fprintf(stderr, "%s %s %s\n", msg,
-   find_unique_abbrev(commit->object.oid.hash, DEFAULT_ABBREV), sb.buf);
+   find_unique_abbrev(&commit->object.oid, DEFAULT_ABBREV), sb.buf);
}
strbuf_release(&sb);
 }
@@ -778,7 +778,7 @@ static void suggest_reattach(struct commit *commit, struct 
rev_info *revs)
" git branch  %s\n\n",
/* Give ngettext() the count */
lost),
-   find_unique_abbrev(commit->object.oid.hash, DEFAULT_ABBREV));
+   find_unique_abbrev(&commit->object.oid, DEFAULT_ABBREV));
 }
 
 /*
diff --git a/builtin/describe.c b/builtin/describe.c
index e4869df7b4..7e6535a8bd 100644
--- a/builtin/describe.c
+++ b/builtin/describe.c
@@ -285,7 +285,7 @@ static void append_name(struct commit_name *n, struct 
strbuf *dst)
 
 static void append_suffix(int depth, const struct object_id *oid, struct 
strbuf *dst)
 {
-   strbuf_addf(dst, "-%d-g%s", depth, find_unique_abbrev(oid->hash, 
abbrev));
+   strbuf_addf(dst, "-%d-g%s", depth, find_unique_abbrev(oid, abbrev));
 }
 
 static void describe_commit(struct object_id *oid, struct strbuf *dst)
@@ -383,7 +383,7 @@ static void describe_commit(struct object_id *oid, struct 
strbuf *dst)
if (!match_cnt) {
struct object_id *cmit_oid = &cmit->object.oid;
if (always) {
-   strbuf_add_unique_abbrev(dst, cmit_oid->hash, abbrev);
+   strbuf_add_unique_abbrev(dst, cmit_oid, abbrev);
if (suffix)
strbuf_addstr(dst, suffix);
return;
diff --git a/builtin/log.c b/builtin/log.c
index 32a744bfd2..411339c1ed 100644
--- a/builtin/log.c
+++ b/builtin/log.c
@@ -1873,12 +1873,12 @@ static void print_commit(char sign, struct commit 
*commit, int verbose,
 {
if (!verbose) {
fprintf(file, "%c %s\n", sign,
-  

[PATCH v3 09/36] wt-status: convert struct wt_status_state to object_id

2018-03-11 Thread brian m. carlson
Convert the various *_sha1 members to use struct object_id instead.

Signed-off-by: brian m. carlson 
---
 wt-status.c | 12 ++--
 wt-status.h |  6 +++---
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/wt-status.c b/wt-status.c
index 5c685b6165..78be89a422 100644
--- a/wt-status.c
+++ b/wt-status.c
@@ -1350,7 +1350,7 @@ static void show_cherry_pick_in_progress(struct wt_status 
*s,
const char *color)
 {
status_printf_ln(s, color, _("You are currently cherry-picking commit 
%s."),
-   find_unique_abbrev(state->cherry_pick_head_sha1, 
DEFAULT_ABBREV));
+   find_unique_abbrev(state->cherry_pick_head_oid.hash, 
DEFAULT_ABBREV));
if (s->hints) {
if (has_unmerged(s))
status_printf_ln(s, color,
@@ -1369,7 +1369,7 @@ static void show_revert_in_progress(struct wt_status *s,
const char *color)
 {
status_printf_ln(s, color, _("You are currently reverting commit %s."),
-find_unique_abbrev(state->revert_head_sha1, 
DEFAULT_ABBREV));
+find_unique_abbrev(state->revert_head_oid.hash, 
DEFAULT_ABBREV));
if (s->hints) {
if (has_unmerged(s))
status_printf_ln(s, color,
@@ -1490,9 +1490,9 @@ static void wt_status_get_detached_from(struct 
wt_status_state *state)
} else
state->detached_from =
xstrdup(find_unique_abbrev(cb.noid.hash, 
DEFAULT_ABBREV));
-   hashcpy(state->detached_sha1, cb.noid.hash);
+   oidcpy(&state->detached_oid, &cb.noid);
state->detached_at = !get_oid("HEAD", &oid) &&
-!hashcmp(oid.hash, state->detached_sha1);
+!oidcmp(&oid, &state->detached_oid);
 
free(ref);
strbuf_release(&cb.buf);
@@ -1551,13 +1551,13 @@ void wt_status_get_state(struct wt_status_state *state,
} else if (!stat(git_path_cherry_pick_head(), &st) &&
!get_oid("CHERRY_PICK_HEAD", &oid)) {
state->cherry_pick_in_progress = 1;
-   hashcpy(state->cherry_pick_head_sha1, oid.hash);
+   oidcpy(&state->cherry_pick_head_oid, &oid);
}
wt_status_check_bisect(NULL, state);
if (!stat(git_path_revert_head(), &st) &&
!get_oid("REVERT_HEAD", &oid)) {
state->revert_in_progress = 1;
-   hashcpy(state->revert_head_sha1, oid.hash);
+   oidcpy(&state->revert_head_oid, &oid);
}
 
if (get_detached_from)
diff --git a/wt-status.h b/wt-status.h
index ea2456daf2..430770b854 100644
--- a/wt-status.h
+++ b/wt-status.h
@@ -118,9 +118,9 @@ struct wt_status_state {
char *branch;
char *onto;
char *detached_from;
-   unsigned char detached_sha1[20];
-   unsigned char revert_head_sha1[20];
-   unsigned char cherry_pick_head_sha1[20];
+   struct object_id detached_oid;
+   struct object_id revert_head_oid;
+   struct object_id cherry_pick_head_oid;
 };
 
 size_t wt_status_locate_end(const char *s, size_t len);


[PATCH v3 00/36] object_id part 12

2018-03-11 Thread brian m. carlson
This is the twelfth in a series of patches to convert various parts of
the code to struct object_id.

Changes from v2:
* Rebase onto master (to fix "typename" → "type_name" changes).
* Replace some uses of hashcpy with memcpy.
* Replace some instances of "20" with references to the_hash_algo.

Changes from v1:
* Rebase onto master.

tbdiff output below.

brian m. carlson (36):
  bulk-checkin: convert index_bulk_checkin to struct object_id
  builtin/write-tree: convert to struct object_id
  cache-tree: convert write_*_as_tree to object_id
  cache-tree: convert remnants to struct object_id
  resolve-undo: convert struct resolve_undo_info to object_id
  tree: convert read_tree_recursive to struct object_id
  ref-filter: convert grab_objectname to struct object_id
  strbuf: convert strbuf_add_unique_abbrev to use struct object_id
  wt-status: convert struct wt_status_state to object_id
  Convert find_unique_abbrev* to struct object_id
  http-walker: convert struct object_request to use struct object_id
  send-pack: convert remaining functions to struct object_id
  replace_object: convert struct replace_object to object_id
  builtin/mktag: convert to struct object_id
  archive: convert write_archive_entry_fn_t to object_id
  archive: convert sha1_file_to_archive to struct object_id
  builtin/index-pack: convert struct ref_delta_entry to object_id
  sha1_file: convert read_loose_object to use struct object_id
  sha1_file: convert check_sha1_signature to struct object_id
  streaming: convert open_istream to use struct object_id
  builtin/mktree: convert to struct object_id
  sha1_file: convert assert_sha1_type to object_id
  sha1_file: convert retry_bad_packed_offset to struct object_id
  packfile: convert unpack_entry to struct object_id
  Convert remaining callers of sha1_object_info_extended to object_id
  sha1_file: convert sha1_object_info* to object_id
  builtin/fmt-merge-msg: convert remaining code to object_id
  builtin/notes: convert static functions to object_id
  tree-walk: convert get_tree_entry_follow_symlinks internals to
object_id
  streaming: convert istream internals to struct object_id
  tree-walk: convert tree entry functions to object_id
  sha1_file: convert read_object_with_reference to object_id
  sha1_file: convert read_sha1_file to struct object_id
  Convert lookup_replace_object to struct object_id
  sha1_file: introduce a constant for max header length
  convert: convert to struct object_id

 apply.c  |   4 +-
 archive-tar.c|  28 
 archive-zip.c|  18 ++---
 archive.c|  32 -
 archive.h|  10 +--
 bisect.c |   3 +-
 blame.c  |  18 +++--
 builtin/am.c |   8 +--
 builtin/blame.c  |   4 +-
 builtin/branch.c |   2 +-
 builtin/cat-file.c   |  30 +
 builtin/checkout.c   |  12 ++--
 builtin/commit-tree.c|   2 +-
 builtin/describe.c   |   6 +-
 builtin/difftool.c   |   2 +-
 builtin/fast-export.c|   8 +--
 builtin/fetch.c  |  10 +--
 builtin/fmt-merge-msg.c  |   4 +-
 builtin/fsck.c   |   4 +-
 builtin/grep.c   |   6 +-
 builtin/index-pack.c |  43 ++--
 builtin/log.c|   8 +--
 builtin/ls-files.c   |   4 +-
 builtin/ls-tree.c|   8 +--
 builtin/merge-tree.c |   5 +-
 builtin/merge.c  |   8 +--
 builtin/mktag.c  |  20 +++---
 builtin/mktree.c |  24 +++
 builtin/name-rev.c   |   2 +-
 builtin/notes.c  |  14 ++--
 builtin/pack-objects.c   |  27 
 builtin/prune.c  |   2 +-
 builtin/receive-pack.c   |   8 +--
 builtin/reflog.c |   2 +-
 builtin/replace.c|  10 +--
 builtin/reset.c  |   2 +-
 builtin/rev-list.c   |   2 +-
 builtin/rev-parse.c  |   2 +-
 builtin/rm.c |   2 +-
 builtin/show-branch.c|   2 +-
 builtin/show-ref.c   |   4 +-
 builtin/tag.c|  16 +++--
 builtin/unpack-file.c|   2 +-
 builtin/unpack-objects.c |   4 +-
 builtin/update-index.c   |   2 +-
 builtin/verify-commit.c  |   2 +-
 builtin/worktree.c   |   4 +-
 builtin/write-tree.c |   6 +-
 bulk-checkin.c   |  18 ++---
 bulk-checkin.h   |   2 +-
 bundle.c |   2 +-
 cache-tree.c |  39 +--
 cache-tree.h |   4 +-
 cache.h  |  42 ++--
 combine-diff.c   |   6 +-
 commit.c |   8 +--
 config.c |   2 +-
 convert.c|  12 ++--
 convert.h|   2 +-
 diff.c   |   6 +-
 dir.c|   2 +-
 entry.c  |   4 +-
 fast-import.c|  31 -
 fsck.c   |   2 +-
 grep.c   |   2 +-
 http-push.c  |   2 +-
 http-walker.c|  16 ++---
 line-log.c   |   3 +-
 list-objects-filter.c|   2 +-
 log-tree.c 

[PATCH v3 02/36] builtin/write-tree: convert to struct object_id

2018-03-11 Thread brian m. carlson
This is needed to convert parts of the cache-tree code.

Signed-off-by: brian m. carlson 
---
 builtin/write-tree.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/builtin/write-tree.c b/builtin/write-tree.c
index bd0a78aa3c..299a121531 100644
--- a/builtin/write-tree.c
+++ b/builtin/write-tree.c
@@ -19,7 +19,7 @@ int cmd_write_tree(int argc, const char **argv, const char 
*unused_prefix)
 {
int flags = 0, ret;
const char *prefix = NULL;
-   unsigned char sha1[20];
+   struct object_id oid;
const char *me = "git-write-tree";
struct option write_tree_options[] = {
OPT_BIT(0, "missing-ok", &flags, N_("allow missing objects"),
@@ -38,10 +38,10 @@ int cmd_write_tree(int argc, const char **argv, const char 
*unused_prefix)
argc = parse_options(argc, argv, unused_prefix, write_tree_options,
 write_tree_usage, 0);
 
-   ret = write_cache_as_tree(sha1, flags, prefix);
+   ret = write_cache_as_tree(oid.hash, flags, prefix);
switch (ret) {
case 0:
-   printf("%s\n", sha1_to_hex(sha1));
+   printf("%s\n", oid_to_hex(&oid));
break;
case WRITE_TREE_UNREADABLE_INDEX:
die("%s: error reading the index", me);


[PATCH v3 05/36] resolve-undo: convert struct resolve_undo_info to object_id

2018-03-11 Thread brian m. carlson
Convert the sha1 member of this struct to be an array of struct
object_id instead.  This change is needed to convert find_unique_abbrev.

Convert some instances of hard-coded constants to use the_hash_algo as
well.
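
As a rough illustration of that second change (a standalone sketch, not the
resolve-undo code itself; the struct and constant here are stand-ins for the
real git types):

    #include <string.h>

    /* Stand-in for git's hash-algorithm descriptor. */
    struct git_hash_algo { size_t rawsz; };
    static const struct git_hash_algo sha1_algo = { 20 };
    static const struct git_hash_algo *the_hash_algo = &sha1_algo;

    /* Before: the hash width is a hard-coded 20. */
    static void copy_hash_old(unsigned char *dst, const unsigned char *src)
    {
            memcpy(dst, src, 20);
    }

    /* After: the width follows whatever algorithm is in use. */
    static void copy_hash_new(unsigned char *dst, const unsigned char *src)
    {
            memcpy(dst, src, the_hash_algo->rawsz);
    }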

Signed-off-by: brian m. carlson 
---
 builtin/ls-files.c |  2 +-
 resolve-undo.c | 15 ---
 resolve-undo.h |  2 +-
 3 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/builtin/ls-files.c b/builtin/ls-files.c
index 2fc836e330..9df66ba307 100644
--- a/builtin/ls-files.c
+++ b/builtin/ls-files.c
@@ -271,7 +271,7 @@ static void show_ru_info(const struct index_state *istate)
if (!ui->mode[i])
continue;
printf("%s%06o %s %d\t", tag_resolve_undo, ui->mode[i],
-  find_unique_abbrev(ui->sha1[i], abbrev),
+  find_unique_abbrev(ui->oid[i].hash, abbrev),
   i + 1);
write_name(path);
}
diff --git a/resolve-undo.c b/resolve-undo.c
index b40f3173d3..aed95b4b35 100644
--- a/resolve-undo.c
+++ b/resolve-undo.c
@@ -24,7 +24,7 @@ void record_resolve_undo(struct index_state *istate, struct 
cache_entry *ce)
if (!lost->util)
lost->util = xcalloc(1, sizeof(*ui));
ui = lost->util;
-   hashcpy(ui->sha1[stage - 1], ce->oid.hash);
+   oidcpy(&ui->oid[stage - 1], &ce->oid);
ui->mode[stage - 1] = ce->ce_mode;
 }
 
@@ -44,7 +44,7 @@ void resolve_undo_write(struct strbuf *sb, struct string_list 
*resolve_undo)
for (i = 0; i < 3; i++) {
if (!ui->mode[i])
continue;
-   strbuf_add(sb, ui->sha1[i], 20);
+   strbuf_add(sb, ui->oid[i].hash, the_hash_algo->rawsz);
}
}
 }
@@ -55,6 +55,7 @@ struct string_list *resolve_undo_read(const char *data, 
unsigned long size)
size_t len;
char *endptr;
int i;
+   const unsigned rawsz = the_hash_algo->rawsz;
 
resolve_undo = xcalloc(1, sizeof(*resolve_undo));
resolve_undo->strdup_strings = 1;
@@ -87,11 +88,11 @@ struct string_list *resolve_undo_read(const char *data, 
unsigned long size)
for (i = 0; i < 3; i++) {
if (!ui->mode[i])
continue;
-   if (size < 20)
+   if (size < rawsz)
goto error;
-   hashcpy(ui->sha1[i], (const unsigned char *)data);
-   size -= 20;
-   data += 20;
+   memcpy(ui->oid[i].hash, (const unsigned char *)data, 
rawsz);
+   size -= rawsz;
+   data += rawsz;
}
}
return resolve_undo;
@@ -145,7 +146,7 @@ int unmerge_index_entry_at(struct index_state *istate, int 
pos)
struct cache_entry *nce;
if (!ru->mode[i])
continue;
-   nce = make_cache_entry(ru->mode[i], ru->sha1[i],
+   nce = make_cache_entry(ru->mode[i], ru->oid[i].hash,
   name, i + 1, 0);
if (matched)
nce->ce_flags |= CE_MATCHED;
diff --git a/resolve-undo.h b/resolve-undo.h
index 46306455ed..87291904bd 100644
--- a/resolve-undo.h
+++ b/resolve-undo.h
@@ -3,7 +3,7 @@
 
 struct resolve_undo_info {
unsigned int mode[3];
-   unsigned char sha1[3][20];
+   struct object_id oid[3];
 };
 
 extern void record_resolve_undo(struct index_state *, struct cache_entry *);


[PATCH v3 11/36] http-walker: convert struct object_request to use struct object_id

2018-03-11 Thread brian m. carlson
Convert struct object_request to use struct object_id by updating the
definition and applying the following semantic patch, plus the standard
object_id transforms:

@@
struct object_request E1;
@@
- E1.sha1
+ E1.oid.hash

@@
struct object_request *E1;
@@
- E1->sha1
+ E1->oid.hash

Signed-off-by: brian m. carlson 
---
 http-walker.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/http-walker.c b/http-walker.c
index 07c2b1af82..f506f394ac 100644
--- a/http-walker.c
+++ b/http-walker.c
@@ -22,7 +22,7 @@ enum object_request_state {
 
 struct object_request {
struct walker *walker;
-   unsigned char sha1[20];
+   struct object_id oid;
struct alt_base *repo;
enum object_request_state state;
struct http_object_request *req;
@@ -56,7 +56,7 @@ static void start_object_request(struct walker *walker,
struct active_request_slot *slot;
struct http_object_request *req;
 
-   req = new_http_object_request(obj_req->repo->base, obj_req->sha1);
+   req = new_http_object_request(obj_req->repo->base, obj_req->oid.hash);
if (req == NULL) {
obj_req->state = ABORTED;
return;
@@ -82,7 +82,7 @@ static void finish_object_request(struct object_request 
*obj_req)
return;
 
if (obj_req->req->rename == 0)
-   walker_say(obj_req->walker, "got %s\n", sha1_to_hex(obj_req->sha1));
+   walker_say(obj_req->walker, "got %s\n", oid_to_hex(&obj_req->oid));
 }
 
 static void process_object_response(void *callback_data)
@@ -129,7 +129,7 @@ static int fill_active_slot(struct walker *walker)
list_for_each_safe(pos, tmp, head) {
obj_req = list_entry(pos, struct object_request, node);
if (obj_req->state == WAITING) {
-   if (has_sha1_file(obj_req->sha1))
+   if (has_sha1_file(obj_req->oid.hash))
obj_req->state = COMPLETE;
else {
start_object_request(walker, obj_req);
@@ -148,7 +148,7 @@ static void prefetch(struct walker *walker, unsigned char 
*sha1)
 
newreq = xmalloc(sizeof(*newreq));
newreq->walker = walker;
-   hashcpy(newreq->sha1, sha1);
+   hashcpy(newreq->oid.hash, sha1);
newreq->repo = data->alt;
newreq->state = WAITING;
newreq->req = NULL;
@@ -481,13 +481,13 @@ static int fetch_object(struct walker *walker, unsigned 
char *sha1)
 
list_for_each(pos, head) {
obj_req = list_entry(pos, struct object_request, node);
-   if (!hashcmp(obj_req->sha1, sha1))
+   if (!hashcmp(obj_req->oid.hash, sha1))
break;
}
if (obj_req == NULL)
return error("Couldn't find request for %s in the queue", hex);
 
-   if (has_sha1_file(obj_req->sha1)) {
+   if (has_sha1_file(obj_req->oid.hash)) {
if (obj_req->req != NULL)
abort_http_object_request(obj_req->req);
abort_object_request(obj_req);
@@ -541,7 +541,7 @@ static int fetch_object(struct walker *walker, unsigned 
char *sha1)
} else if (req->zret != Z_STREAM_END) {
walker->corrupt_object_found++;
ret = error("File %s (%s) corrupt", hex, req->url);
-   } else if (hashcmp(obj_req->sha1, req->real_sha1)) {
+   } else if (hashcmp(obj_req->oid.hash, req->real_sha1)) {
ret = error("File %s has bad hash", hex);
} else if (req->rename < 0) {
struct strbuf buf = STRBUF_INIT;


[PATCH v3 18/36] sha1_file: convert read_loose_object to use struct object_id

2018-03-11 Thread brian m. carlson
Signed-off-by: brian m. carlson 
---
 builtin/fsck.c |  2 +-
 cache.h|  4 ++--
 sha1_file.c| 10 +-
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/builtin/fsck.c b/builtin/fsck.c
index ef78c6c00c..eae018e3fb 100644
--- a/builtin/fsck.c
+++ b/builtin/fsck.c
@@ -513,7 +513,7 @@ static struct object *parse_loose_object(const struct 
object_id *oid,
unsigned long size;
int eaten;
 
-   if (read_loose_object(path, oid->hash, &type, &size, &contents) < 0)
+   if (read_loose_object(path, oid, &type, &size, &contents) < 0)
return NULL;
 
if (!contents && type != OBJ_BLOB)
diff --git a/cache.h b/cache.h
index fe8b2c2676..5a28e94318 100644
--- a/cache.h
+++ b/cache.h
@@ -1241,14 +1241,14 @@ extern int check_sha1_signature(const unsigned char 
*sha1, void *buf, unsigned l
 extern int finalize_object_file(const char *tmpfile, const char *filename);
 
 /*
- * Open the loose object at path, check its sha1, and return the contents,
+ * Open the loose object at path, check its hash, and return the contents,
  * type, and size. If the object is a blob, then "contents" may return NULL,
  * to allow streaming of large blobs.
  *
  * Returns 0 on success, negative on error (details may be written to stderr).
  */
 int read_loose_object(const char *path,
- const unsigned char *expected_sha1,
+ const struct object_id *expected_oid,
  enum object_type *type,
  unsigned long *size,
  void **contents);
diff --git a/sha1_file.c b/sha1_file.c
index 6b887f8aaf..5081d585a2 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -2176,7 +2176,7 @@ static int check_stream_sha1(git_zstream *stream,
 }
 
 int read_loose_object(const char *path,
- const unsigned char *expected_sha1,
+ const struct object_id *expected_oid,
  enum object_type *type,
  unsigned long *size,
  void **contents)
@@ -2208,19 +2208,19 @@ int read_loose_object(const char *path,
}
 
if (*type == OBJ_BLOB) {
-   if (check_stream_sha1(&stream, hdr, *size, path, expected_sha1) < 0)
+   if (check_stream_sha1(&stream, hdr, *size, path, expected_oid->hash) < 0)
goto out;
} else {
-   *contents = unpack_sha1_rest(&stream, hdr, *size, expected_sha1);
+   *contents = unpack_sha1_rest(&stream, hdr, *size, expected_oid->hash);
if (!*contents) {
error("unable to unpack contents of %s", path);
git_inflate_end(&stream);
goto out;
}
-   if (check_sha1_signature(expected_sha1, *contents,
+   if (check_sha1_signature(expected_oid->hash, *contents,
 *size, type_name(*type))) {
error("sha1 mismatch for %s (expected %s)", path,
- sha1_to_hex(expected_sha1));
+ oid_to_hex(expected_oid));
free(*contents);
goto out;
}


[PATCH v3 19/36] sha1_file: convert check_sha1_signature to struct object_id

2018-03-11 Thread brian m. carlson
Convert this function to take a pointer to struct object_id and rename
it check_object_signature.  Introduce temporaries to convert the return
values of lookup_replace_object and lookup_replace_object_extended into
struct object_id.

The temporaries are needed because in order to convert
lookup_replace_object, open_istream needs to be converted, and
open_istream needs check_sha1_signature to be converted, causing a loop
of dependencies.  The temporaries will be removed in a future patch.
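
The shape of such a temporary, as a hypothetical call site (not a literal
excerpt from the patch):

    #include <string.h>

    #define GIT_SHA1_RAWSZ 20

    struct object_id { unsigned char hash[GIT_SHA1_RAWSZ]; };

    int check_object_signature(const struct object_id *oid, void *buf,
                               unsigned long size, const char *type);

    /* A caller still holding a raw hash pointer (e.g. the return value of
     * the unconverted lookup_replace_object()) wraps it in a temporary
     * object_id at the boundary, so each side can be converted on its own. */
    static int check_via_temporary(const unsigned char *repl, void *buf,
                                   unsigned long size, const char *type)
    {
            struct object_id reploid;

            memcpy(reploid.hash, repl, GIT_SHA1_RAWSZ);  /* hashcpy() in git */
            return check_object_signature(&reploid, buf, size, type);
    }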

Signed-off-by: brian m. carlson 
---
 builtin/fast-export.c |  2 +-
 builtin/index-pack.c  |  2 +-
 builtin/mktag.c   |  5 -
 cache.h   |  2 +-
 object.c  | 10 --
 pack-check.c  |  4 ++--
 sha1_file.c   | 12 ++--
 7 files changed, 23 insertions(+), 14 deletions(-)

diff --git a/builtin/fast-export.c b/builtin/fast-export.c
index 27b2cc138e..9b049e9d08 100644
--- a/builtin/fast-export.c
+++ b/builtin/fast-export.c
@@ -240,7 +240,7 @@ static void export_blob(const struct object_id *oid)
buf = read_sha1_file(oid->hash, &type, &size);
if (!buf)
die ("Could not read blob %s", oid_to_hex(oid));
-   if (check_sha1_signature(oid->hash, buf, size, type_name(type)) < 0)
+   if (check_object_signature(oid, buf, size, type_name(type)) < 0)
die("sha1 mismatch in blob %s", oid_to_hex(oid));
object = parse_object_buffer(oid, type, size, buf, &eaten);
}
diff --git a/builtin/index-pack.c b/builtin/index-pack.c
index 9744175e23..b28ebfadd4 100644
--- a/builtin/index-pack.c
+++ b/builtin/index-pack.c
@@ -1377,7 +1377,7 @@ static void fix_unresolved_deltas(struct hashfile *f)
if (!base_obj->data)
continue;
 
-   if (check_sha1_signature(d->oid.hash, base_obj->data,
+   if (check_object_signature(&d->oid, base_obj->data,
base_obj->size, type_name(type)))
die(_("local object %s is corrupt"), oid_to_hex(&d->oid));
base_obj->obj = append_obj_to_pack(f, d->oid.hash,
diff --git a/builtin/mktag.c b/builtin/mktag.c
index 65bb41e3cd..810b24bef3 100644
--- a/builtin/mktag.c
+++ b/builtin/mktag.c
@@ -27,8 +27,11 @@ static int verify_object(const struct object_id *oid, const 
char *expected_type)
const unsigned char *repl = lookup_replace_object(oid->hash);
 
if (buffer) {
+   struct object_id reploid;
+   hashcpy(reploid.hash, repl);
+
if (type == type_from_string(expected_type))
-   ret = check_sha1_signature(repl, buffer, size, expected_type);
+   ret = check_object_signature(&reploid, buffer, size, expected_type);
free(buffer);
}
return ret;
diff --git a/cache.h b/cache.h
index 5a28e94318..27b5fd8a39 100644
--- a/cache.h
+++ b/cache.h
@@ -1236,7 +1236,7 @@ extern void *map_sha1_file(const unsigned char *sha1, 
unsigned long *size);
 extern int unpack_sha1_header(git_zstream *stream, unsigned char *map, 
unsigned long mapsize, void *buffer, unsigned long bufsiz);
 extern int parse_sha1_header(const char *hdr, unsigned long *sizep);
 
-extern int check_sha1_signature(const unsigned char *sha1, void *buf, unsigned 
long size, const char *type);
+extern int check_object_signature(const struct object_id *oid, void *buf, 
unsigned long size, const char *type);
 
 extern int finalize_object_file(const char *tmpfile, const char *filename);
 
diff --git a/object.c b/object.c
index e6ad3f61f0..68bcf35d95 100644
--- a/object.c
+++ b/object.c
@@ -255,7 +255,10 @@ struct object *parse_object(const struct object_id *oid)
if ((obj && obj->type == OBJ_BLOB && has_object_file(oid)) ||
(!obj && has_object_file(oid) &&
 sha1_object_info(oid->hash, NULL) == OBJ_BLOB)) {
-   if (check_sha1_signature(repl, NULL, 0, NULL) < 0) {
+   struct object_id reploid;
+   hashcpy(reploid.hash, repl);
+
+   if (check_object_signature(&reploid, NULL, 0, NULL) < 0) {
error("sha1 mismatch %s", oid_to_hex(oid));
return NULL;
}
@@ -265,7 +268,10 @@ struct object *parse_object(const struct object_id *oid)
 
buffer = read_sha1_file(oid->hash, &type, &size);
if (buffer) {
-   if (check_sha1_signature(repl, buffer, size, type_name(type)) < 
0) {
+   struct object_id reploid;
+   hashcpy(reploid.hash, repl);
+
+   if (check_object_signature(&reploid, buffer, size, type_name(type)) < 0) {
free(buffer);
error("sha1 mismatch %s", sha1_to_hex(repl));
return NULL;
diff --git a/pack-check.c b/pack-check.c
index 8fc7dd1694..d0591dd5e8 100644
--- a/pack-check.c
+++ b/pack-check.c
@@ -126,7 +126,7 

[PATCH v3 12/36] send-pack: convert remaining functions to struct object_id

2018-03-11 Thread brian m. carlson
Convert the remaining function, feed_object, to use struct object_id.

Signed-off-by: brian m. carlson 
---
 send-pack.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/send-pack.c b/send-pack.c
index 8d9190f5e7..19025a7aca 100644
--- a/send-pack.c
+++ b/send-pack.c
@@ -37,14 +37,14 @@ int option_parse_push_signed(const struct option *opt,
die("bad %s argument: %s", opt->long_name, arg);
 }
 
-static void feed_object(const unsigned char *sha1, FILE *fh, int negative)
+static void feed_object(const struct object_id *oid, FILE *fh, int negative)
 {
-   if (negative && !has_sha1_file(sha1))
+   if (negative && !has_sha1_file(oid->hash))
return;
 
if (negative)
putc('^', fh);
-   fputs(sha1_to_hex(sha1), fh);
+   fputs(oid_to_hex(oid), fh);
putc('\n', fh);
 }
 
@@ -89,13 +89,13 @@ static int pack_objects(int fd, struct ref *refs, struct 
oid_array *extra, struc
 */
po_in = xfdopen(po.in, "w");
for (i = 0; i < extra->nr; i++)
-   feed_object(extra->oid[i].hash, po_in, 1);
+   feed_object(&extra->oid[i], po_in, 1);

while (refs) {
if (!is_null_oid(&refs->old_oid))
-   feed_object(refs->old_oid.hash, po_in, 1);
+   feed_object(&refs->old_oid, po_in, 1);
if (!is_null_oid(&refs->new_oid))
-   feed_object(refs->new_oid.hash, po_in, 0);
+   feed_object(&refs->new_oid, po_in, 0);
refs = refs->next;
}
 


[GSoC][PATCH] git-ci: use pylint to analyze the git-p4 code

2018-03-11 Thread Viet Hung Tran
This is my submission as a microproject for Google Summer of Code.
I apologize for not adding the [GSoC] tag to the subject of my previous email
at <20180312020855.7950-1-viethtran1...@gmail.com>.
Please ignore it.

Add a new job named Pylint to .travis.yml in order to analyze the git-p4
Python code. Although Travis CI has yet to implement continuous integration
for multiple languages in a single build, the Python package can be installed
using apt. From there, pylint can be installed using pip and used to analyze
the git-p4.py file.

Signed-off-by: Viet Hung Tran 
---
 .travis.yml | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/.travis.yml b/.travis.yml
index 5f5ee4f3b..581d75319 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -46,6 +46,16 @@ matrix:
 - docker
   before_install:
   script: ci/run-linux32-docker.sh
+- env: jobname=Pylint
+  compiler:
+  addons:
+apt:
+  packages:
+  - python
+  before_install:
+  install: pip install --user pylint
+  script: pylint git-p4.py
+  after_failure:
 - env: jobname=StaticAnalysis
   os: linux
   compiler:
-- 
2.16.2.440.gc6284da



[PATCH v3 29/36] tree-walk: convert get_tree_entry_follow_symlinks internals to object_id

2018-03-11 Thread brian m. carlson
Convert the internals of this function to use struct object_id.  This is
one of the last remaining callers of read_sha1_file_extended that has
not been converted yet.

Signed-off-by: brian m. carlson 
---
 tree-walk.c | 22 +++---
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/tree-walk.c b/tree-walk.c
index 63a87ed666..521defdaa2 100644
--- a/tree-walk.c
+++ b/tree-walk.c
@@ -583,14 +583,14 @@ enum follow_symlinks_result 
get_tree_entry_follow_symlinks(unsigned char *tree_s
struct dir_state *parents = NULL;
size_t parents_alloc = 0;
size_t i, parents_nr = 0;
-   unsigned char current_tree_sha1[20];
+   struct object_id current_tree_oid;
struct strbuf namebuf = STRBUF_INIT;
struct tree_desc t;
int follows_remaining = GET_TREE_ENTRY_FOLLOW_SYMLINKS_MAX_LINKS;
 
init_tree_desc(&t, NULL, 0UL);
strbuf_addstr(&namebuf, name);
-   hashcpy(current_tree_sha1, tree_sha1);
+   hashcpy(current_tree_oid.hash, tree_sha1);
 
while (1) {
int find_result;
@@ -599,22 +599,22 @@ enum follow_symlinks_result 
get_tree_entry_follow_symlinks(unsigned char *tree_s
 
if (!t.buffer) {
void *tree;
-   unsigned char root[20];
+   struct object_id root;
unsigned long size;
-   tree = read_object_with_reference(current_tree_sha1,
+   tree = read_object_with_reference(current_tree_oid.hash,
  tree_type, &size,
- root);
+ root.hash);
if (!tree)
goto done;
 
ALLOC_GROW(parents, parents_nr + 1, parents_alloc);
parents[parents_nr].tree = tree;
parents[parents_nr].size = size;
-   hashcpy(parents[parents_nr].sha1, root);
+   hashcpy(parents[parents_nr].sha1, root.hash);
parents_nr++;
 
if (namebuf.buf[0] == '\0') {
-   hashcpy(result, root);
+   hashcpy(result, root.hash);
retval = FOUND;
goto done;
}
@@ -671,14 +671,14 @@ enum follow_symlinks_result 
get_tree_entry_follow_symlinks(unsigned char *tree_s
 
/* Look up the first (or only) path component in the tree. */
find_result = find_tree_entry(&t, namebuf.buf,
- current_tree_sha1, mode);
+ current_tree_oid.hash, mode);
if (find_result) {
goto done;
}
 
if (S_ISDIR(*mode)) {
if (!remainder) {
-   hashcpy(result, current_tree_sha1);
+   hashcpy(result, current_tree_oid.hash);
retval = FOUND;
goto done;
}
@@ -688,7 +688,7 @@ enum follow_symlinks_result 
get_tree_entry_follow_symlinks(unsigned char *tree_s
  1 + first_slash - namebuf.buf);
} else if (S_ISREG(*mode)) {
if (!remainder) {
-   hashcpy(result, current_tree_sha1);
+   hashcpy(result, current_tree_oid.hash);
retval = FOUND;
} else {
retval = NOT_DIR;
@@ -714,7 +714,7 @@ enum follow_symlinks_result 
get_tree_entry_follow_symlinks(unsigned char *tree_s
 */
retval = DANGLING_SYMLINK;
 
-   contents = read_sha1_file(current_tree_sha1, &type,
+   contents = read_sha1_file(current_tree_oid.hash, &type,
  &link_len);
 
if (!contents)


[PATCH v3 36/36] convert: convert to struct object_id

2018-03-11 Thread brian m. carlson
Convert convert.c to struct object_id.  Add a use of the_hash_algo to
replace hard-coded constants and change a strbuf_add to a strbuf_addstr
to avoid another hard-coded constant.

Note that a strict conversion using the hexsz constant would cause
problems in the future if the internal and user-visible hash algorithms
differed, as anticipated by the hash function transition plan.
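
For reference, the sizing arithmetic behind those two hunks (SHA-1 shown, so
hexsz is 40; a sketch, compile-checkable with C11 static asserts):

    #define GIT_SHA1_HEXSZ 40
    #define GIT_MAX_HEXSZ  40   /* assumption for this sketch */

    /* "$Id$" (4 bytes) expands to "$Id: <hex> $" (hexsz + 7 bytes),
     * i.e. each keyword grows by hexsz + 3 bytes, which is where the
     * old "cnt * 43" and the new "cnt * (the_hash_algo->hexsz + 3)"
     * come from. */
    _Static_assert((GIT_SHA1_HEXSZ + 7) - 4 == GIT_SHA1_HEXSZ + 3,
                   "growth per $Id$ keyword");

    /* The filter's scratch string ": <hex> $" plus its NUL terminator
     * needs 2 + hexsz + 2 + 1 bytes, hence ident[GIT_MAX_HEXSZ + 5]. */
    _Static_assert(2 + GIT_SHA1_HEXSZ + 2 + 1 == GIT_MAX_HEXSZ + 5,
                   "ident[] size");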

Signed-off-by: brian m. carlson 
---
 convert.c | 12 ++--
 convert.h |  2 +-
 entry.c   |  2 +-
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/convert.c b/convert.c
index cc562f6509..c480097a2a 100644
--- a/convert.c
+++ b/convert.c
@@ -914,7 +914,7 @@ static int ident_to_worktree(const char *path, const char 
*src, size_t len,
to_free = strbuf_detach(buf, NULL);
hash_object_file(src, len, "blob", &oid);
 
-   strbuf_grow(buf, len + cnt * 43);
+   strbuf_grow(buf, len + cnt * (the_hash_algo->hexsz + 3));
for (;;) {
/* step 1: run to the next '$' */
dollar = memchr(src, '$', len);
@@ -1510,7 +1510,7 @@ struct ident_filter {
struct stream_filter filter;
struct strbuf left;
int state;
-   char ident[45]; /* ": x40 $" */
+   char ident[GIT_MAX_HEXSZ + 5]; /* ": x40 $" */
 };
 
 static int is_foreign_ident(const char *str)
@@ -1635,12 +1635,12 @@ static struct stream_filter_vtbl ident_vtbl = {
ident_free_fn,
 };
 
-static struct stream_filter *ident_filter(const unsigned char *sha1)
+static struct stream_filter *ident_filter(const struct object_id *oid)
 {
struct ident_filter *ident = xmalloc(sizeof(*ident));
 
xsnprintf(ident->ident, sizeof(ident->ident),
- ": %s $", sha1_to_hex(sha1));
+ ": %s $", oid_to_hex(oid));
strbuf_init(&ident->left, 0);
ident->filter.vtbl = &ident_vtbl;
ident->state = 0;
@@ -1655,7 +1655,7 @@ static struct stream_filter *ident_filter(const unsigned 
char *sha1)
  * Note that you would be crazy to set CRLF, smuge/clean or ident to a
  * large binary blob you would want us not to slurp into the memory!
  */
-struct stream_filter *get_stream_filter(const char *path, const unsigned char 
*sha1)
+struct stream_filter *get_stream_filter(const char *path, const struct 
object_id *oid)
 {
struct conv_attrs ca;
struct stream_filter *filter = NULL;
@@ -1668,7 +1668,7 @@ struct stream_filter *get_stream_filter(const char *path, 
const unsigned char *s
return NULL;
 
if (ca.ident)
-   filter = ident_filter(sha1);
+   filter = ident_filter(oid);
 
if (output_eol(ca.crlf_action) == EOL_CRLF)
filter = cascade_filter(filter, lf_to_crlf_filter());
diff --git a/convert.h b/convert.h
index 65ab3e5167..2e9b4f49cc 100644
--- a/convert.h
+++ b/convert.h
@@ -93,7 +93,7 @@ extern int would_convert_to_git_filter_fd(const char *path);
 
 struct stream_filter; /* opaque */
 
-extern struct stream_filter *get_stream_filter(const char *path, const 
unsigned char *);
+extern struct stream_filter *get_stream_filter(const char *path, const struct 
object_id *);
 extern void free_stream_filter(struct stream_filter *);
 extern int is_null_stream_filter(struct stream_filter *);
 
diff --git a/entry.c b/entry.c
index d3f26ec8ca..2101201a11 100644
--- a/entry.c
+++ b/entry.c
@@ -266,7 +266,7 @@ static int write_entry(struct cache_entry *ce,
 
if (ce_mode_s_ifmt == S_IFREG) {
struct stream_filter *filter = get_stream_filter(ce->name,
-ce->oid.hash);
+&ce->oid);
if (filter &&
!streaming_write_entry(ce, path, filter,
   state, to_tempfile,


[PATCH v3 25/36] Convert remaining callers of sha1_object_info_extended to object_id

2018-03-11 Thread brian m. carlson
Convert the remaining caller of sha1_object_info_extended to use struct
object_id.  Introduce temporaries, which will be removed later, since
there is a dependency loop between sha1_object_info_extended and
lookup_replace_object_extended.  This allows us to convert the code in a
piecemeal fashion instead of all at once.

Signed-off-by: brian m. carlson 
---
 sha1_file.c | 14 +++---
 streaming.c |  5 -
 2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/sha1_file.c b/sha1_file.c
index a307037e88..eafb0733e5 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -1231,6 +1231,9 @@ int sha1_object_info_extended(const unsigned char *sha1, 
struct object_info *oi,
lookup_replace_object(sha1) :
sha1;
int already_retried = 0;
+   struct object_id realoid;
+
+   hashcpy(realoid.hash, real);
 
if (is_null_sha1(real))
return -1;
@@ -1295,7 +1298,7 @@ int sha1_object_info_extended(const unsigned char *sha1, 
struct object_info *oi,
rtype = packed_object_info(e.p, e.offset, oi);
if (rtype < 0) {
mark_bad_packed_object(e.p, real);
-   return sha1_object_info_extended(real, oi, 0);
+   return sha1_object_info_extended(realoid.hash, oi, 0);
} else if (oi->whence == OI_PACKED) {
oi->u.packed.offset = e.offset;
oi->u.packed.pack = e.p;
@@ -1323,13 +1326,16 @@ int sha1_object_info(const unsigned char *sha1, 
unsigned long *sizep)
 static void *read_object(const unsigned char *sha1, enum object_type *type,
 unsigned long *size)
 {
+   struct object_id oid;
struct object_info oi = OBJECT_INFO_INIT;
void *content;
oi.typep = type;
oi.sizep = size;
oi.contentp = &content;
 
-   if (sha1_object_info_extended(sha1, , 0) < 0)
+   hashcpy(oid.hash, sha1);
+
if (sha1_object_info_extended(oid.hash, &oi, 0) < 0)
return NULL;
return content;
 }
@@ -1723,9 +1729,11 @@ int force_object_loose(const struct object_id *oid, 
time_t mtime)
 
 int has_sha1_file_with_flags(const unsigned char *sha1, int flags)
 {
+   struct object_id oid;
if (!startup_info->have_repository)
return 0;
-   return sha1_object_info_extended(sha1, NULL,
+   hashcpy(oid.hash, sha1);
+   return sha1_object_info_extended(oid.hash, NULL,
 flags | OBJECT_INFO_SKIP_CACHED) >= 0;
 }
 
diff --git a/streaming.c b/streaming.c
index be85507922..042d6082e8 100644
--- a/streaming.c
+++ b/streaming.c
@@ -111,10 +111,13 @@ static enum input_source istream_source(const unsigned 
char *sha1,
 {
unsigned long size;
int status;
+   struct object_id oid;
+
+   hashcpy(oid.hash, sha1);
 
oi->typep = type;
oi->sizep = &size;
-   status = sha1_object_info_extended(sha1, oi, 0);
+   status = sha1_object_info_extended(oid.hash, oi, 0);
if (status < 0)
return stream_error;
 


[PATCH v3 26/36] sha1_file: convert sha1_object_info* to object_id

2018-03-11 Thread brian m. carlson
Convert sha1_object_info and sha1_object_info_extended to take pointers
to struct object_id and rename them to use "oid" instead of "sha1" in
their names.  Update the declaration and definition and apply the
following semantic patch, plus the standard object_id transforms:

@@
expression E1, E2;
@@
- sha1_object_info(E1.hash, E2)
+ oid_object_info(, E2)

@@
expression E1, E2;
@@
- sha1_object_info(E1->hash, E2)
+ oid_object_info(E1, E2)

@@
expression E1, E2, E3;
@@
- sha1_object_info_extended(E1.hash, E2, E3)
+ oid_object_info_extended(, E2, E3)

@@
expression E1, E2, E3;
@@
- sha1_object_info_extended(E1->hash, E2, E3)
+ oid_object_info_extended(E1, E2, E3)
---
 archive-tar.c|  2 +-
 archive-zip.c|  2 +-
 blame.c  |  4 ++--
 builtin/blame.c  |  2 +-
 builtin/cat-file.c   | 14 +++---
 builtin/describe.c   |  2 +-
 builtin/fast-export.c|  2 +-
 builtin/fetch.c  |  2 +-
 builtin/fsck.c   |  2 +-
 builtin/index-pack.c |  4 ++--
 builtin/ls-tree.c|  2 +-
 builtin/mktree.c |  2 +-
 builtin/pack-objects.c   |  7 +++
 builtin/prune.c  |  2 +-
 builtin/replace.c| 10 +-
 builtin/tag.c|  4 ++--
 builtin/unpack-objects.c |  2 +-
 cache.h  |  6 +++---
 diff.c   |  2 +-
 fast-import.c| 10 +-
 list-objects-filter.c|  2 +-
 object.c |  2 +-
 pack-bitmap-write.c  |  3 +--
 packfile.c   |  4 ++--
 reachable.c  |  2 +-
 refs.c   |  2 +-
 remote.c |  2 +-
 sequencer.c  |  3 ++-
 sha1_file.c  | 22 +++---
 sha1_name.c  | 12 ++--
 streaming.c  |  2 +-
 submodule.c  |  2 +-
 tag.c|  2 +-
 33 files changed, 71 insertions(+), 72 deletions(-)

diff --git a/archive-tar.c b/archive-tar.c
index 7a0d31d847..3563bcb9f2 100644
--- a/archive-tar.c
+++ b/archive-tar.c
@@ -276,7 +276,7 @@ static int write_tar_entry(struct archiver_args *args,
memcpy(header.name, path, pathlen);
 
if (S_ISREG(mode) && !args->convert &&
-   sha1_object_info(oid->hash, &size) == OBJ_BLOB &&
+   oid_object_info(oid, &size) == OBJ_BLOB &&
size > big_file_threshold)
buffer = NULL;
else if (S_ISLNK(mode) || S_ISREG(mode)) {
diff --git a/archive-zip.c b/archive-zip.c
index 18b951b732..6b20bce4d1 100644
--- a/archive-zip.c
+++ b/archive-zip.c
@@ -325,7 +325,7 @@ static int write_zip_entry(struct archiver_args *args,
compressed_size = 0;
buffer = NULL;
} else if (S_ISREG(mode) || S_ISLNK(mode)) {
-   enum object_type type = sha1_object_info(oid->hash, &size);
+   enum object_type type = oid_object_info(oid, &size);
 
method = 0;
attr2 = S_ISLNK(mode) ? ((mode | 0777) << 16) :
diff --git a/blame.c b/blame.c
index 200e0ad9a2..00b97c1898 100644
--- a/blame.c
+++ b/blame.c
@@ -81,7 +81,7 @@ static void verify_working_tree_path(struct commit 
*work_tree, const char *path)
unsigned mode;
 
if (!get_tree_entry(commit_oid->hash, path, blob_oid.hash, &mode) &&
-   sha1_object_info(blob_oid.hash, NULL) == OBJ_BLOB)
+   oid_object_info(&blob_oid, NULL) == OBJ_BLOB)
return;
}
 
@@ -506,7 +506,7 @@ static int fill_blob_sha1_and_mode(struct blame_origin 
*origin)
   origin->path,
   origin->blob_oid.hash, &origin->mode))
goto error_out;
-   if (sha1_object_info(origin->blob_oid.hash, NULL) != OBJ_BLOB)
+   if (oid_object_info(&origin->blob_oid, NULL) != OBJ_BLOB)
goto error_out;
return 0;
  error_out:
diff --git a/builtin/blame.c b/builtin/blame.c
index b980e8a1dd..f1a2fd6702 100644
--- a/builtin/blame.c
+++ b/builtin/blame.c
@@ -655,7 +655,7 @@ static int is_a_rev(const char *name)
 
if (get_oid(name, &oid))
return 0;
-   return OBJ_NONE < sha1_object_info(oid.hash, NULL);
+   return OBJ_NONE < oid_object_info(&oid, NULL);
 }
 
 int cmd_blame(int argc, const char **argv, const char *prefix)
diff --git a/builtin/cat-file.c b/builtin/cat-file.c
index d90170f070..27f4f235f6 100644
--- a/builtin/cat-file.c
+++ b/builtin/cat-file.c
@@ -77,7 +77,7 @@ static int cat_one_file(int opt, const char *exp_type, const 
char *obj_name,
switch (opt) {
case 't':
oi.type_name = &sb;
-   if (sha1_object_info_extended(oid.hash, &oi, flags) < 0)
+   if (oid_object_info_extended(&oid, &oi, flags) < 0)
die("git cat-file: could not get object info");
if (sb.len) {
printf("%s\n", sb.buf);
@@ -88,7 +88,7 @@ static int cat_one_file(int opt, const char *exp_type, const 
char *obj_name,
 
 

[PATCH v3 34/36] Convert lookup_replace_object to struct object_id

2018-03-11 Thread brian m. carlson
Convert both the argument and the return value to be pointers to struct
object_id.  Update the callers and their internals to deal with the new
type.  Remove several temporaries which are no longer needed.

Signed-off-by: brian m. carlson 
---
 builtin/mktag.c  |  7 ++-
 cache.h  |  8 
 object.c | 14 --
 replace_object.c | 14 +++---
 sha1_file.c  | 42 --
 streaming.c  | 16 +---
 6 files changed, 42 insertions(+), 59 deletions(-)

diff --git a/builtin/mktag.c b/builtin/mktag.c
index cfb777b3c8..9f5a50a8fd 100644
--- a/builtin/mktag.c
+++ b/builtin/mktag.c
@@ -24,14 +24,11 @@ static int verify_object(const struct object_id *oid, const 
char *expected_type)
enum object_type type;
unsigned long size;
void *buffer = read_object_file(oid, , );
-   const unsigned char *repl = lookup_replace_object(oid->hash);
+   const struct object_id *repl = lookup_replace_object(oid);
 
if (buffer) {
-   struct object_id reploid;
-   hashcpy(reploid.hash, repl);
-
if (type == type_from_string(expected_type))
-   ret = check_object_signature(, buffer, size, 
expected_type);
+   ret = check_object_signature(repl, buffer, size, 
expected_type);
free(buffer);
}
return ret;
diff --git a/cache.h b/cache.h
index 3c9718cc8c..0491f0cbc4 100644
--- a/cache.h
+++ b/cache.h
@@ -1197,7 +1197,7 @@ static inline void *read_object_file(const struct 
object_id *oid, enum object_ty
  * This internal function is only declared here for the benefit of
  * lookup_replace_object().  Please do not call it directly.
  */
-extern const unsigned char *do_lookup_replace_object(const unsigned char 
*sha1);
+extern const struct object_id *do_lookup_replace_object(const struct object_id 
*oid);
 
 /*
  * If object sha1 should be replaced, return the replacement object's
@@ -1205,11 +1205,11 @@ extern const unsigned char 
*do_lookup_replace_object(const unsigned char *sha1);
  * either sha1 or a pointer to a permanently-allocated value.  When
  * object replacement is suppressed, always return sha1.
  */
-static inline const unsigned char *lookup_replace_object(const unsigned char 
*sha1)
+static inline const struct object_id *lookup_replace_object(const struct 
object_id *oid)
 {
if (!check_replace_refs)
-   return sha1;
-   return do_lookup_replace_object(sha1);
+   return oid;
+   return do_lookup_replace_object(oid);
 }
 
 /* Read and unpack an object file into memory, write memory to an object file 
*/
diff --git a/object.c b/object.c
index 8e32f0d33b..2c909385a7 100644
--- a/object.c
+++ b/object.c
@@ -244,7 +244,7 @@ struct object *parse_object(const struct object_id *oid)
unsigned long size;
enum object_type type;
int eaten;
-   const unsigned char *repl = lookup_replace_object(oid->hash);
+   const struct object_id *repl = lookup_replace_object(oid);
void *buffer;
struct object *obj;
 
@@ -255,10 +255,7 @@ struct object *parse_object(const struct object_id *oid)
if ((obj && obj->type == OBJ_BLOB && has_object_file(oid)) ||
(!obj && has_object_file(oid) &&
 oid_object_info(oid, NULL) == OBJ_BLOB)) {
-   struct object_id reploid;
-   hashcpy(reploid.hash, repl);
-
-   if (check_object_signature(&reploid, NULL, 0, NULL) < 0) {
+   if (check_object_signature(repl, NULL, 0, NULL) < 0) {
error("sha1 mismatch %s", oid_to_hex(oid));
return NULL;
}
@@ -268,12 +265,9 @@ struct object *parse_object(const struct object_id *oid)
 
buffer = read_object_file(oid, , );
if (buffer) {
-   struct object_id reploid;
-   hashcpy(reploid.hash, repl);
-
-   if (check_object_signature(&reploid, buffer, size, type_name(type)) < 0) {
+   if (check_object_signature(repl, buffer, size, type_name(type)) 
< 0) {
free(buffer);
-   error("sha1 mismatch %s", sha1_to_hex(repl));
+   error("sha1 mismatch %s", oid_to_hex(repl));
return NULL;
}
 
diff --git a/replace_object.c b/replace_object.c
index 232e8b8550..336357394d 100644
--- a/replace_object.c
+++ b/replace_object.c
@@ -92,16 +92,16 @@ static void prepare_replace_object(void)
 #define MAXREPLACEDEPTH 5
 
 /*
- * If a replacement for object sha1 has been set up, return the
+ * If a replacement for object oid has been set up, return the
  * replacement object's name (replaced recursively, if necessary).
- * The return value is either sha1 or a pointer to a
+ * The return value is either oid or a pointer to a
  * permanently-allocated value.  This function always 

[PATCH v3 22/36] sha1_file: convert assert_sha1_type to object_id

2018-03-11 Thread brian m. carlson
Convert this function to take a pointer to struct object_id and rename
it to assert_oid_type.

Signed-off-by: brian m. carlson 
---
 builtin/commit-tree.c | 2 +-
 cache.h   | 2 +-
 commit.c  | 2 +-
 sha1_file.c   | 8 
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/builtin/commit-tree.c b/builtin/commit-tree.c
index e5bdf57b1e..ecf42191da 100644
--- a/builtin/commit-tree.c
+++ b/builtin/commit-tree.c
@@ -58,7 +58,7 @@ int cmd_commit_tree(int argc, const char **argv, const char 
*prefix)
usage(commit_tree_usage);
if (get_oid_commit(argv[i], &oid))
die("Not a valid object name %s", argv[i]);
-   assert_sha1_type(oid.hash, OBJ_COMMIT);
+   assert_oid_type(&oid, OBJ_COMMIT);
new_parent(lookup_commit(&oid), &parents);
continue;
}
diff --git a/cache.h b/cache.h
index 27b5fd8a39..67a04fcd12 100644
--- a/cache.h
+++ b/cache.h
@@ -1275,7 +1275,7 @@ extern int has_object_file_with_flags(const struct 
object_id *oid, int flags);
  */
 extern int has_loose_object_nonlocal(const unsigned char *sha1);
 
-extern void assert_sha1_type(const unsigned char *sha1, enum object_type 
expect);
+extern void assert_oid_type(const struct object_id *oid, enum object_type 
expect);
 
 /* Helper to check and "touch" a file */
 extern int check_and_freshen_file(const char *fn, int freshen);
diff --git a/commit.c b/commit.c
index 00c99c7272..6d77787af9 100644
--- a/commit.c
+++ b/commit.c
@@ -1517,7 +1517,7 @@ int commit_tree_extended(const char *msg, size_t msg_len,
int encoding_is_utf8;
struct strbuf buffer;
 
-   assert_sha1_type(tree->hash, OBJ_TREE);
+   assert_oid_type(tree, OBJ_TREE);
 
if (memchr(msg, '\0', msg_len))
return error("a NUL byte in commit log message not allowed.");
diff --git a/sha1_file.c b/sha1_file.c
index cdcba4483b..a307037e88 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -1966,13 +1966,13 @@ int read_pack_header(int fd, struct pack_header *header)
return 0;
 }
 
-void assert_sha1_type(const unsigned char *sha1, enum object_type expect)
+void assert_oid_type(const struct object_id *oid, enum object_type expect)
 {
-   enum object_type type = sha1_object_info(sha1, NULL);
+   enum object_type type = sha1_object_info(oid->hash, NULL);
if (type < 0)
-   die("%s is not a valid object", sha1_to_hex(sha1));
+   die("%s is not a valid object", oid_to_hex(oid));
if (type != expect)
-   die("%s is not a valid '%s' object", sha1_to_hex(sha1),
+   die("%s is not a valid '%s' object", oid_to_hex(oid),
type_name(expect));
 }
 


[PATCH v3 07/36] ref-filter: convert grab_objectname to struct object_id

2018-03-11 Thread brian m. carlson
This is necessary in order to convert find_unique_abbrev.

Signed-off-by: brian m. carlson 
---
 ref-filter.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/ref-filter.c b/ref-filter.c
index 45fc56216a..dbe554dffa 100644
--- a/ref-filter.c
+++ b/ref-filter.c
@@ -737,18 +737,18 @@ static void *get_obj(const struct object_id *oid, struct 
object **obj, unsigned
return buf;
 }
 
-static int grab_objectname(const char *name, const unsigned char *sha1,
+static int grab_objectname(const char *name, const struct object_id *oid,
   struct atom_value *v, struct used_atom *atom)
 {
if (starts_with(name, "objectname")) {
if (atom->u.objectname.option == O_SHORT) {
-   v->s = xstrdup(find_unique_abbrev(sha1, 
DEFAULT_ABBREV));
+   v->s = xstrdup(find_unique_abbrev(oid->hash, 
DEFAULT_ABBREV));
return 1;
} else if (atom->u.objectname.option == O_FULL) {
-   v->s = xstrdup(sha1_to_hex(sha1));
+   v->s = xstrdup(oid_to_hex(oid));
return 1;
} else if (atom->u.objectname.option == O_LENGTH) {
-   v->s = xstrdup(find_unique_abbrev(sha1, 
atom->u.objectname.length));
+   v->s = xstrdup(find_unique_abbrev(oid->hash, 
atom->u.objectname.length));
return 1;
} else
die("BUG: unknown %%(objectname) option");
@@ -775,7 +775,7 @@ static void grab_common_values(struct atom_value *val, int 
deref, struct object
v->s = xstrfmt("%lu", sz);
}
else if (deref)
-   grab_objectname(name, obj->oid.hash, v, &used_atom[i]);
+   grab_objectname(name, &obj->oid, v, &used_atom[i]);
}
 }
 
@@ -1455,7 +1455,7 @@ static void populate_value(struct ref_array_item *ref)
v->s = xstrdup(buf + 1);
}
continue;
-   } else if (!deref && grab_objectname(name, ref->objectname.hash, v, atom)) {
+   } else if (!deref && grab_objectname(name, &ref->objectname, v, atom)) {
continue;
} else if (!strcmp(name, "HEAD")) {
if (atom->u.head && !strcmp(ref->refname, atom->u.head))


[PATCH v3 23/36] sha1_file: convert retry_bad_packed_offset to struct object_id

2018-03-11 Thread brian m. carlson
Signed-off-by: brian m. carlson 
---
 packfile.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/packfile.c b/packfile.c
index 7c1a2519fc..5f08aa0a14 100644
--- a/packfile.c
+++ b/packfile.c
@@ -1095,13 +1095,13 @@ static int retry_bad_packed_offset(struct packed_git 
*p, off_t obj_offset)
 {
int type;
struct revindex_entry *revidx;
-   const unsigned char *sha1;
+   struct object_id oid;
revidx = find_pack_revindex(p, obj_offset);
if (!revidx)
return OBJ_BAD;
-   sha1 = nth_packed_object_sha1(p, revidx->nr);
-   mark_bad_packed_object(p, sha1);
-   type = sha1_object_info(sha1, NULL);
+   nth_packed_object_oid(&oid, p, revidx->nr);
+   mark_bad_packed_object(p, oid.hash);
+   type = sha1_object_info(oid.hash, NULL);
if (type <= OBJ_NONE)
return OBJ_BAD;
return type;


[PATCH v3 08/36] strbuf: convert strbuf_add_unique_abbrev to use struct object_id

2018-03-11 Thread brian m. carlson
Convert the declaration and definition of strbuf_add_unique_abbrev to
make it take a pointer to struct object_id.  Predeclare the struct in
strbuf.h, as cache.h includes strbuf.h before it declares the struct,
and otherwise the struct declaration would have the wrong scope.
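
The strbuf.h side of that is roughly the following (a sketch of the idea; the
actual hunk is counted in the diffstat below but not reproduced in full here).
A bare forward declaration is enough because the function only takes a pointer:

    /* strbuf.h (sketch): cache.h includes strbuf.h before it defines
     * struct object_id, so predeclare the tag here; otherwise the struct
     * named in the prototype would have prototype-only scope. */
    struct object_id;

    extern void strbuf_add_unique_abbrev(struct strbuf *sb,
                                         const struct object_id *oid,
                                         int abbrev_len);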

Apply the following semantic patch, along with the standard object_id
transforms, to adjust the callers:

@@
expression E1, E2, E3;
@@
- strbuf_add_unique_abbrev(E1, E2.hash, E3);
+ strbuf_add_unique_abbrev(E1, &E2, E3);

@@
expression E1, E2, E3;
@@
- strbuf_add_unique_abbrev(E1, E2->hash, E3);
+ strbuf_add_unique_abbrev(E1, E2, E3);

Signed-off-by: brian m. carlson 
---
 builtin/checkout.c | 2 +-
 builtin/fetch.c| 8 
 builtin/tag.c  | 2 +-
 merge-recursive.c  | 2 +-
 pretty.c   | 8 
 strbuf.c   | 4 ++--
 strbuf.h   | 8 +++-
 submodule.c| 4 ++--
 transport.c| 4 ++--
 wt-status.c| 6 +++---
 10 files changed, 27 insertions(+), 21 deletions(-)

diff --git a/builtin/checkout.c b/builtin/checkout.c
index 8eb118201d..5981e1a3ed 100644
--- a/builtin/checkout.c
+++ b/builtin/checkout.c
@@ -720,7 +720,7 @@ static int add_pending_uninteresting_ref(const char 
*refname,
 static void describe_one_orphan(struct strbuf *sb, struct commit *commit)
 {
strbuf_addstr(sb, "  ");
-   strbuf_add_unique_abbrev(sb, commit->object.oid.hash, DEFAULT_ABBREV);
+   strbuf_add_unique_abbrev(sb, &commit->object.oid, DEFAULT_ABBREV);
strbuf_addch(sb, ' ');
if (!parse_commit(commit))
pp_commit_easy(CMIT_FMT_ONELINE, commit, sb);
diff --git a/builtin/fetch.c b/builtin/fetch.c
index d32d94692c..aeafe63ba6 100644
--- a/builtin/fetch.c
+++ b/builtin/fetch.c
@@ -708,9 +708,9 @@ static int update_local_ref(struct ref *ref,
if (in_merge_bases(current, updated)) {
struct strbuf quickref = STRBUF_INIT;
int r;
-   strbuf_add_unique_abbrev(&quickref, current->object.oid.hash, DEFAULT_ABBREV);
+   strbuf_add_unique_abbrev(&quickref, &current->object.oid, DEFAULT_ABBREV);
strbuf_addstr(&quickref, "..");
-   strbuf_add_unique_abbrev(&quickref, ref->new_oid.hash, DEFAULT_ABBREV);
+   strbuf_add_unique_abbrev(&quickref, &ref->new_oid, DEFAULT_ABBREV);
if ((recurse_submodules != RECURSE_SUBMODULES_OFF) &&
(recurse_submodules != RECURSE_SUBMODULES_ON))
check_for_new_submodule_commits(&ref->new_oid);
@@ -723,9 +723,9 @@ static int update_local_ref(struct ref *ref,
} else if (force || ref->force) {
struct strbuf quickref = STRBUF_INIT;
int r;
-   strbuf_add_unique_abbrev(&quickref, current->object.oid.hash, DEFAULT_ABBREV);
+   strbuf_add_unique_abbrev(&quickref, &current->object.oid, DEFAULT_ABBREV);
strbuf_addstr(&quickref, "...");
-   strbuf_add_unique_abbrev(&quickref, ref->new_oid.hash, DEFAULT_ABBREV);
+   strbuf_add_unique_abbrev(&quickref, &ref->new_oid, DEFAULT_ABBREV);
if ((recurse_submodules != RECURSE_SUBMODULES_OFF) &&
(recurse_submodules != RECURSE_SUBMODULES_ON))
check_for_new_submodule_commits(&ref->new_oid);
diff --git a/builtin/tag.c b/builtin/tag.c
index 8c493a569f..a5fc6c090f 100644
--- a/builtin/tag.c
+++ b/builtin/tag.c
@@ -293,7 +293,7 @@ static void create_reflog_msg(const struct object_id *oid, 
struct strbuf *sb)
strbuf_addstr(sb, rla);
} else {
strbuf_addstr(sb, "tag: tagging ");
-   strbuf_add_unique_abbrev(sb, oid->hash, DEFAULT_ABBREV);
+   strbuf_add_unique_abbrev(sb, oid, DEFAULT_ABBREV);
}
 
strbuf_addstr(sb, " (");
diff --git a/merge-recursive.c b/merge-recursive.c
index c886f2e9cd..f58f13957e 100644
--- a/merge-recursive.c
+++ b/merge-recursive.c
@@ -228,7 +228,7 @@ static void output_commit_title(struct merge_options *o, 
struct commit *commit)
strbuf_addf(&o->obuf, "virtual %s\n",
merge_remote_util(commit)->name);
else {
-   strbuf_add_unique_abbrev(&o->obuf, commit->object.oid.hash,
+   strbuf_add_unique_abbrev(&o->obuf, &commit->object.oid,
 DEFAULT_ABBREV);
strbuf_addch(&o->obuf, ' ');
if (parse_commit(commit) != 0)
diff --git a/pretty.c b/pretty.c
index f7ce490230..34fe891fc0 100644
--- a/pretty.c
+++ b/pretty.c
@@ -549,7 +549,7 @@ static void add_merge_info(const struct 
pretty_print_context *pp,
struct object_id *oidp = &parent->item->object.oid;
strbuf_addch(sb, ' ');
if (pp->abbrev)
-   strbuf_add_unique_abbrev(sb, oidp->hash, pp->abbrev);
+   strbuf_add_unique_abbrev(sb, oidp, pp->abbrev);
else
strbuf_addstr(sb, oid_to_hex(oidp));
parent = parent->next;
@@ 

[PATCH v3 27/36] builtin/fmt-merge-msg: convert remaining code to object_id

2018-03-11 Thread brian m. carlson
We were using the util pointer, which is a pointer to void, as an
unsigned char pointer.  The pointer actually points to a struct
origin_data, which has a struct object_id as its first member, which in
turn has an unsigned char array as its first member, so this was valid.
Since we want to convert this to struct object_id, simply change the
pointer we're using.
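
The layout argument is easiest to see in isolation. Below is a minimal,
self-contained sketch; the _demo structs are illustrative stand-ins for
git's struct object_id and struct origin_data, not the real definitions:

#include <assert.h>

struct object_id_demo {
	unsigned char hash[20];         /* first member: the raw hash bytes */
};

struct origin_data_demo {
	struct object_id_demo oid;      /* first member of the struct */
	unsigned flags;
};

int main(void)
{
	struct origin_data_demo od = { { { 0xab } }, 0 };
	void *util = &od;               /* what the util pointer holds */

	unsigned char *as_sha1 = util;          /* the old code's view */
	struct object_id_demo *as_oid = util;   /* the new code's view */

	/*
	 * A pointer to a struct also points at its first member, so both
	 * views name the same bytes; only the type we read them through
	 * changes.
	 */
	assert(as_sha1 == as_oid->hash);
	return 0;
}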

Signed-off-by: brian m. carlson 
---
 builtin/fmt-merge-msg.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/builtin/fmt-merge-msg.c b/builtin/fmt-merge-msg.c
index 8e8a15ea4a..0bc0dda99c 100644
--- a/builtin/fmt-merge-msg.c
+++ b/builtin/fmt-merge-msg.c
@@ -485,10 +485,10 @@ static void fmt_merge_msg_sigs(struct strbuf *out)
struct strbuf tagbuf = STRBUF_INIT;
 
for (i = 0; i < origins.nr; i++) {
-   unsigned char *sha1 = origins.items[i].util;
+   struct object_id *oid = origins.items[i].util;
enum object_type type;
unsigned long size, len;
-   char *buf = read_sha1_file(sha1, &type, &size);
+   char *buf = read_sha1_file(oid->hash, &type, &size);
struct strbuf sig = STRBUF_INIT;
 
if (!buf || type != OBJ_TAG)


[PATCH v3 31/36] tree-walk: convert tree entry functions to object_id

2018-03-11 Thread brian m. carlson
Convert get_tree_entry and find_tree_entry to take pointers to struct
object_id.

Signed-off-by: brian m. carlson 
---
 archive.c  |  4 ++--
 blame.c|  6 ++
 builtin/rm.c   |  2 +-
 builtin/update-index.c |  2 +-
 line-log.c |  3 +--
 match-trees.c  |  6 +++---
 merge-recursive.c  | 12 ++--
 notes.c|  2 +-
 sha1_name.c|  7 +++
 tree-walk.c| 20 ++--
 tree-walk.h|  2 +-
 11 files changed, 31 insertions(+), 35 deletions(-)

diff --git a/archive.c b/archive.c
index da62b2f541..1ab62d426e 100644
--- a/archive.c
+++ b/archive.c
@@ -397,8 +397,8 @@ static void parse_treeish_arg(const char **argv,
unsigned int mode;
int err;
 
-   err = get_tree_entry(tree->object.oid.hash, prefix,
-tree_oid.hash, );
+   err = get_tree_entry(>object.oid, prefix, _oid,
+);
if (err || !S_ISDIR(mode))
die("current working directory is untracked");
 
diff --git a/blame.c b/blame.c
index 00b97c1898..cc2cc92944 100644
--- a/blame.c
+++ b/blame.c
@@ -80,7 +80,7 @@ static void verify_working_tree_path(struct commit 
*work_tree, const char *path)
struct object_id blob_oid;
unsigned mode;
 
-   if (!get_tree_entry(commit_oid->hash, path, blob_oid.hash, 
) &&
+   if (!get_tree_entry(commit_oid, path, _oid, ) &&
oid_object_info(_oid, NULL) == OBJ_BLOB)
return;
}
@@ -502,9 +502,7 @@ static int fill_blob_sha1_and_mode(struct blame_origin 
*origin)
 {
if (!is_null_oid(>blob_oid))
return 0;
-   if (get_tree_entry(origin->commit->object.oid.hash,
-  origin->path,
-  origin->blob_oid.hash, >mode))
+   if (get_tree_entry(>commit->object.oid, origin->path, 
>blob_oid, >mode))
goto error_out;
if (oid_object_info(>blob_oid, NULL) != OBJ_BLOB)
goto error_out;
diff --git a/builtin/rm.c b/builtin/rm.c
index 4a2fcca27b..974a7ef5bf 100644
--- a/builtin/rm.c
+++ b/builtin/rm.c
@@ -178,7 +178,7 @@ static int check_local_mod(struct object_id *head, int 
index_only)
 * way as changed from the HEAD.
 */
if (no_head
-|| get_tree_entry(head->hash, name, oid.hash, )
+|| get_tree_entry(head, name, , )
 || ce->ce_mode != create_ce_mode(mode)
 || oidcmp(>oid, ))
staged_changes = 1;
diff --git a/builtin/update-index.c b/builtin/update-index.c
index 58d1c2d282..9625d1e10a 100644
--- a/builtin/update-index.c
+++ b/builtin/update-index.c
@@ -592,7 +592,7 @@ static struct cache_entry *read_one_ent(const char *which,
int size;
struct cache_entry *ce;
 
-   if (get_tree_entry(ent->hash, path, oid.hash, )) {
+   if (get_tree_entry(ent, path, , )) {
if (which)
error("%s: not in %s branch.", path, which);
return NULL;
diff --git a/line-log.c b/line-log.c
index cdc2257db5..ecdce08c4b 100644
--- a/line-log.c
+++ b/line-log.c
@@ -501,8 +501,7 @@ static void fill_blob_sha1(struct commit *commit, struct 
diff_filespec *spec)
unsigned mode;
struct object_id oid;
 
-   if (get_tree_entry(commit->object.oid.hash, spec->path,
-  oid.hash, ))
+   if (get_tree_entry(>object.oid, spec->path, , ))
die("There is no path %s in the commit", spec->path);
fill_filespec(spec, , 1, mode);
 
diff --git a/match-trees.c b/match-trees.c
index 0ca99d5162..10d6c39d47 100644
--- a/match-trees.c
+++ b/match-trees.c
@@ -269,7 +269,7 @@ void shift_tree(const struct object_id *hash1,
if (!*del_prefix)
return;
 
-   if (get_tree_entry(hash2->hash, del_prefix, shifted->hash, 
))
+   if (get_tree_entry(hash2, del_prefix, shifted, ))
die("cannot find path %s in tree %s",
del_prefix, oid_to_hex(hash2));
return;
@@ -296,12 +296,12 @@ void shift_tree_by(const struct object_id *hash1,
unsigned candidate = 0;
 
/* Can hash2 be a tree at shift_prefix in tree hash1? */
-   if (!get_tree_entry(hash1->hash, shift_prefix, sub1.hash, ) &&
+   if (!get_tree_entry(hash1, shift_prefix, , ) &&
S_ISDIR(mode1))
candidate |= 1;
 
/* Can hash1 be a tree at shift_prefix in tree hash2? */
-   if (!get_tree_entry(hash2->hash, shift_prefix, sub2.hash, ) &&
+   if (!get_tree_entry(hash2, shift_prefix, , ) &&
S_ISDIR(mode2))
candidate |= 2;
 
diff --git 

[PATCH v3 33/36] sha1_file: convert read_sha1_file to struct object_id

2018-03-11 Thread brian m. carlson
Convert read_sha1_file to take a pointer to struct object_id and rename
it read_object_file.  Do the same for read_sha1_file_extended.

Convert one use in grep.c to use the new function without any other code
change, since the pointer being passed is a void pointer that is already
initialized with a pointer to struct object_id.  Update the declaration
and definitions of the modified functions, and apply the following
semantic patch to convert the remaining callers:

@@
expression E1, E2, E3;
@@
- read_sha1_file(E1.hash, E2, E3)
+ read_object_file(&E1, E2, E3)

@@
expression E1, E2, E3;
@@
- read_sha1_file(E1->hash, E2, E3)
+ read_object_file(E1, E2, E3)

@@
expression E1, E2, E3, E4;
@@
- read_sha1_file_extended(E1.hash, E2, E3, E4)
+ read_object_file_extended(&E1, E2, E3, E4)

@@
expression E1, E2, E3, E4;
@@
- read_sha1_file_extended(E1->hash, E2, E3, E4)
+ read_object_file_extended(E1, E2, E3, E4)
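
For readers unfamiliar with Coccinelle, this is the effect of the rules
above on a typical call site; the callers here are hypothetical, written
only to illustrate the rewrite, and assume git's own headers:

#include "cache.h"

static void *load_by_value(struct object_id oid, enum object_type *type,
			   unsigned long *size)
{
	/* was: return read_sha1_file(oid.hash, type, size); */
	return read_object_file(&oid, type, size);      /* E1.hash -> &E1 */
}

static void *load_by_pointer(const struct object_id *oid,
			     enum object_type *type, unsigned long *size)
{
	/* was: return read_sha1_file(oid->hash, type, size); */
	return read_object_file(oid, type, size);       /* E1->hash -> E1 */
}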

Signed-off-by: brian m. carlson 
---
 apply.c  |  4 ++--
 archive.c|  2 +-
 bisect.c |  3 ++-
 blame.c  |  8 
 builtin/cat-file.c   | 14 --
 builtin/difftool.c   |  2 +-
 builtin/fast-export.c|  4 ++--
 builtin/fmt-merge-msg.c  |  2 +-
 builtin/grep.c   |  2 +-
 builtin/index-pack.c |  5 +++--
 builtin/log.c|  2 +-
 builtin/merge-tree.c |  5 +++--
 builtin/mktag.c  |  2 +-
 builtin/notes.c  |  6 +++---
 builtin/pack-objects.c   | 16 ++--
 builtin/reflog.c |  2 +-
 builtin/tag.c|  4 ++--
 builtin/unpack-file.c|  2 +-
 builtin/unpack-objects.c |  2 +-
 builtin/verify-commit.c  |  2 +-
 bundle.c |  2 +-
 cache.h  | 10 +-
 combine-diff.c   |  2 +-
 commit.c |  6 +++---
 config.c |  2 +-
 diff.c   |  2 +-
 dir.c|  2 +-
 entry.c  |  2 +-
 fast-import.c|  6 +++---
 fsck.c   |  2 +-
 grep.c   |  2 +-
 http-push.c  |  2 +-
 mailmap.c|  2 +-
 match-trees.c|  4 ++--
 merge-blobs.c|  4 ++--
 merge-recursive.c|  4 ++--
 notes-cache.c|  2 +-
 notes-merge.c|  2 +-
 notes.c  |  8 
 object.c |  2 +-
 read-cache.c |  4 ++--
 ref-filter.c |  2 +-
 remote-testsvn.c |  4 ++--
 rerere.c |  4 ++--
 sha1_file.c  | 20 ++--
 streaming.c  |  2 +-
 submodule-config.c   |  2 +-
 tag.c|  4 ++--
 tree-walk.c  |  4 ++--
 tree.c   |  2 +-
 xdiff-interface.c|  2 +-
 51 files changed, 104 insertions(+), 103 deletions(-)

diff --git a/apply.c b/apply.c
index f8f15edc7c..b1e1bd58bf 100644
--- a/apply.c
+++ b/apply.c
@@ -3180,7 +3180,7 @@ static int apply_binary(struct apply_state *state,
unsigned long size;
char *result;
 
-   result = read_sha1_file(oid.hash, , );
+   result = read_object_file(, , );
if (!result)
return error(_("the necessary postimage %s for "
   "'%s' cannot be read"),
@@ -3242,7 +3242,7 @@ static int read_blob_object(struct strbuf *buf, const 
struct object_id *oid, uns
unsigned long sz;
char *result;
 
-   result = read_sha1_file(oid->hash, , );
+   result = read_object_file(oid, , );
if (!result)
return -1;
/* XXX read_sha1_file NUL-terminates */
diff --git a/archive.c b/archive.c
index 1ab62d426e..93ab175b0b 100644
--- a/archive.c
+++ b/archive.c
@@ -72,7 +72,7 @@ void *object_file_to_archive(const struct archiver_args *args,
const struct commit *commit = args->convert ? args->commit : NULL;
 
path += args->baselen;
-   buffer = read_sha1_file(oid->hash, type, sizep);
+   buffer = read_object_file(oid, type, sizep);
if (buffer && S_ISREG(mode)) {
struct strbuf buf = STRBUF_INIT;
size_t size = 0;
diff --git a/bisect.c b/bisect.c
index f6d05bd66f..ad395bb2b8 100644
--- a/bisect.c
+++ b/bisect.c
@@ -132,7 +132,8 @@ static void show_list(const char *debug, int counted, int 
nr,
unsigned flags = commit->object.flags;
enum object_type type;
unsigned long size;
-   char *buf = read_sha1_file(commit->object.oid.hash, , 
);
+   char *buf = read_object_file(>object.oid, ,
+);
const char *subject_start;
int subject_len;
 
diff --git a/blame.c b/blame.c
index cc2cc92944..78c9808bd1 100644
--- a/blame.c
+++ b/blame.c
@@ -297,8 +297,8 @@ static void 

[PATCH v3 13/36] replace_object: convert struct replace_object to object_id

2018-03-11 Thread brian m. carlson
Convert the two members of this struct to be instances of struct
object_id.  Adjust the various functions in this file accordingly.

Signed-off-by: brian m. carlson 
---
 replace_object.c | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/replace_object.c b/replace_object.c
index 3e49965d05..232e8b8550 100644
--- a/replace_object.c
+++ b/replace_object.c
@@ -8,8 +8,8 @@
  * sha1.
  */
 static struct replace_object {
-   unsigned char original[20];
-   unsigned char replacement[20];
+   struct object_id original;
+   struct object_id replacement;
 } **replace_object;
 
 static int replace_object_alloc, replace_object_nr;
@@ -17,7 +17,7 @@ static int replace_object_alloc, replace_object_nr;
 static const unsigned char *replace_sha1_access(size_t index, void *table)
 {
struct replace_object **replace = table;
-   return replace[index]->original;
+   return replace[index]->original.hash;
 }
 
 static int replace_object_pos(const unsigned char *sha1)
@@ -29,7 +29,7 @@ static int replace_object_pos(const unsigned char *sha1)
 static int register_replace_object(struct replace_object *replace,
   int ignore_dups)
 {
-   int pos = replace_object_pos(replace->original);
+   int pos = replace_object_pos(replace->original.hash);
 
if (0 <= pos) {
if (ignore_dups)
@@ -59,14 +59,14 @@ static int register_replace_ref(const char *refname,
const char *hash = slash ? slash + 1 : refname;
struct replace_object *repl_obj = xmalloc(sizeof(*repl_obj));
 
-   if (strlen(hash) != 40 || get_sha1_hex(hash, repl_obj->original)) {
+   if (get_oid_hex(hash, _obj->original)) {
free(repl_obj);
warning("bad replace ref name: %s", refname);
return 0;
}
 
/* Copy sha1 from the read ref */
-   hashcpy(repl_obj->replacement, oid->hash);
+   oidcpy(_obj->replacement, oid);
 
/* Register new object */
if (register_replace_object(repl_obj, 1))
@@ -113,7 +113,7 @@ const unsigned char *do_lookup_replace_object(const 
unsigned char *sha1)
 
pos = replace_object_pos(cur);
if (0 <= pos)
-   cur = replace_object[pos]->replacement;
+   cur = replace_object[pos]->replacement.hash;
} while (0 <= pos);
 
return cur;


[PATCH v3 30/36] streaming: convert istream internals to struct object_id

2018-03-11 Thread brian m. carlson
Convert the various open_istream variants to take a pointer to struct
object_id.  Introduce a temporary, which will be removed later, to work
around the fact that lookup_replace_object still returns a pointer to
unsigned char.
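
The workaround, in isolation: while lookup_replace_object() still
traffics in raw hashes, its result is copied into a struct object_id so
the converted code can stay pointer-to-object_id throughout. A
simplified sketch of that bridge (the real change is in open_istream(),
shown in the diff below):

#include "cache.h"

static void bridge_replace_lookup(const struct object_id *oid,
				  struct object_id *realoid)
{
	const unsigned char *real = lookup_replace_object(oid->hash);

	/* temporary copy; goes away once lookup_replace_object is converted */
	hashcpy(realoid->hash, real);
}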

Signed-off-by: brian m. carlson 
---
 streaming.c | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/streaming.c b/streaming.c
index 344678e95f..3f4be1ea2c 100644
--- a/streaming.c
+++ b/streaming.c
@@ -14,7 +14,7 @@ enum input_source {
 
 typedef int (*open_istream_fn)(struct git_istream *,
   struct object_info *,
-  const unsigned char *,
+  const struct object_id *,
   enum object_type *);
 typedef int (*close_istream_fn)(struct git_istream *);
 typedef ssize_t (*read_istream_fn)(struct git_istream *, char *, size_t);
@@ -27,7 +27,7 @@ struct stream_vtbl {
 #define open_method_decl(name) \
int open_istream_ ##name \
(struct git_istream *st, struct object_info *oi, \
-const unsigned char *sha1, \
+const struct object_id *oid, \
 enum object_type *type)
 
 #define close_method_decl(name) \
@@ -142,13 +142,16 @@ struct git_istream *open_istream(const struct object_id 
*oid,
struct object_info oi = OBJECT_INFO_INIT;
const unsigned char *real = lookup_replace_object(oid->hash);
enum input_source src = istream_source(real, type, );
+   struct object_id realoid;
+
+   hashcpy(realoid.hash, real);
 
if (src < 0)
return NULL;
 
st = xmalloc(sizeof(*st));
-   if (open_istream_tbl[src](st, &oi, real, type)) {
-   if (open_istream_incore(st, &oi, real, type)) {
+   if (open_istream_tbl[src](st, &oi, &realoid, type)) {
+   if (open_istream_incore(st, &oi, &realoid, type)) {
free(st);
return NULL;
}
@@ -338,7 +341,7 @@ static struct stream_vtbl loose_vtbl = {
 
 static open_method_decl(loose)
 {
-   st->u.loose.mapped = map_sha1_file(sha1, >u.loose.mapsize);
+   st->u.loose.mapped = map_sha1_file(oid->hash, >u.loose.mapsize);
if (!st->u.loose.mapped)
return -1;
if ((unpack_sha1_header(>z,
@@ -489,7 +492,7 @@ static struct stream_vtbl incore_vtbl = {
 
 static open_method_decl(incore)
 {
-   st->u.incore.buf = read_sha1_file_extended(sha1, type, >size, 0);
+   st->u.incore.buf = read_sha1_file_extended(oid->hash, type, >size, 
0);
st->u.incore.read_ptr = 0;
st->vtbl = _vtbl;
 


[PATCH v3 24/36] packfile: convert unpack_entry to struct object_id

2018-03-11 Thread brian m. carlson
Convert unpack_entry and read_object to use struct object_id.
---
 packfile.c | 22 +++---
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/packfile.c b/packfile.c
index 5f08aa0a14..3e31ad7a0c 100644
--- a/packfile.c
+++ b/packfile.c
@@ -1452,7 +1452,7 @@ struct unpack_entry_stack_ent {
unsigned long size;
 };
 
-static void *read_object(const unsigned char *sha1, enum object_type *type,
+static void *read_object(const struct object_id *oid, enum object_type *type,
 unsigned long *size)
 {
struct object_info oi = OBJECT_INFO_INIT;
@@ -1461,7 +1461,7 @@ static void *read_object(const unsigned char *sha1, enum 
object_type *type,
oi.sizep = size;
oi.contentp = 
 
-   if (sha1_object_info_extended(sha1, , 0) < 0)
+   if (sha1_object_info_extended(oid->hash, , 0) < 0)
return NULL;
return content;
 }
@@ -1501,11 +1501,11 @@ void *unpack_entry(struct packed_git *p, off_t 
obj_offset,
struct revindex_entry *revidx = find_pack_revindex(p, 
obj_offset);
off_t len = revidx[1].offset - obj_offset;
if (check_pack_crc(p, _curs, obj_offset, len, 
revidx->nr)) {
-   const unsigned char *sha1 =
-   nth_packed_object_sha1(p, revidx->nr);
+   struct object_id oid;
+   nth_packed_object_oid(, p, revidx->nr);
error("bad packed object CRC for %s",
- sha1_to_hex(sha1));
-   mark_bad_packed_object(p, sha1);
+ oid_to_hex());
+   mark_bad_packed_object(p, oid.hash);
data = NULL;
goto out;
}
@@ -1588,16 +1588,16 @@ void *unpack_entry(struct packed_git *p, off_t 
obj_offset,
 * of a corrupted pack, and is better than failing 
outright.
 */
struct revindex_entry *revidx;
-   const unsigned char *base_sha1;
+   struct object_id base_oid;
revidx = find_pack_revindex(p, obj_offset);
if (revidx) {
-   base_sha1 = nth_packed_object_sha1(p, 
revidx->nr);
+   nth_packed_object_oid(_oid, p, revidx->nr);
error("failed to read delta base object %s"
  " at offset %"PRIuMAX" from %s",
- sha1_to_hex(base_sha1), 
(uintmax_t)obj_offset,
+ oid_to_hex(_oid), 
(uintmax_t)obj_offset,
  p->pack_name);
-   mark_bad_packed_object(p, base_sha1);
-   base = read_object(base_sha1, , 
_size);
+   mark_bad_packed_object(p, base_oid.hash);
+   base = read_object(_oid, , 
_size);
external_base = base;
}
}


[PATCH v3 14/36] builtin/mktag: convert to struct object_id

2018-03-11 Thread brian m. carlson
Signed-off-by: brian m. carlson 
---
 builtin/mktag.c | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/builtin/mktag.c b/builtin/mktag.c
index beb552847b..65bb41e3cd 100644
--- a/builtin/mktag.c
+++ b/builtin/mktag.c
@@ -18,13 +18,13 @@
 /*
  * We refuse to tag something we can't verify. Just because.
  */
-static int verify_object(const unsigned char *sha1, const char *expected_type)
+static int verify_object(const struct object_id *oid, const char 
*expected_type)
 {
int ret = -1;
enum object_type type;
unsigned long size;
-   void *buffer = read_sha1_file(sha1, &type, &size);
-   const unsigned char *repl = lookup_replace_object(sha1);
+   void *buffer = read_sha1_file(oid->hash, &type, &size);
+   const unsigned char *repl = lookup_replace_object(oid->hash);
 
if (buffer) {
if (type == type_from_string(expected_type))
@@ -38,8 +38,8 @@ static int verify_tag(char *buffer, unsigned long size)
 {
int typelen;
char type[20];
-   unsigned char sha1[20];
-   const char *object, *type_line, *tag_line, *tagger_line, *lb, *rb;
+   struct object_id oid;
+   const char *object, *type_line, *tag_line, *tagger_line, *lb, *rb, *p;
size_t len;
 
if (size < 84)
@@ -52,11 +52,11 @@ static int verify_tag(char *buffer, unsigned long size)
if (memcmp(object, "object ", 7))
return error("char%d: does not start with \"object \"", 0);
 
-   if (get_sha1_hex(object + 7, sha1))
+   if (parse_oid_hex(object + 7, , ))
return error("char%d: could not get SHA1 hash", 7);
 
/* Verify type line */
-   type_line = object + 48;
+   type_line = p + 1;
if (memcmp(type_line - 1, "\ntype ", 6))
return error("char%d: could not find \"\\ntype \"", 47);
 
@@ -80,8 +80,8 @@ static int verify_tag(char *buffer, unsigned long size)
type[typelen] = 0;
 
/* Verify that the object matches */
-   if (verify_object(sha1, type))
-   return error("char%d: could not verify object %s", 7, 
sha1_to_hex(sha1));
+   if (verify_object(, type))
+   return error("char%d: could not verify object %s", 7, 
oid_to_hex());
 
/* Verify the tag-name: we don't allow control characters or spaces in 
it */
tag_line += 4;


[PATCH v3 17/36] builtin/index-pack: convert struct ref_delta_entry to object_id

2018-03-11 Thread brian m. carlson
Convert this struct to use a member of type object_id.  Convert various
static functions as well.

Signed-off-by: brian m. carlson 
---
 builtin/index-pack.c | 34 +-
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/builtin/index-pack.c b/builtin/index-pack.c
index 59878e70b8..9744175e23 100644
--- a/builtin/index-pack.c
+++ b/builtin/index-pack.c
@@ -58,7 +58,7 @@ struct ofs_delta_entry {
 };
 
 struct ref_delta_entry {
-   unsigned char sha1[20];
+   struct object_id oid;
int obj_no;
 };
 
@@ -671,18 +671,18 @@ static void find_ofs_delta_children(off_t offset,
*last_index = last;
 }
 
-static int compare_ref_delta_bases(const unsigned char *sha1,
-  const unsigned char *sha2,
+static int compare_ref_delta_bases(const struct object_id *oid1,
+  const struct object_id *oid2,
   enum object_type type1,
   enum object_type type2)
 {
int cmp = type1 - type2;
if (cmp)
return cmp;
-   return hashcmp(sha1, sha2);
+   return oidcmp(oid1, oid2);
 }
 
-static int find_ref_delta(const unsigned char *sha1, enum object_type type)
+static int find_ref_delta(const struct object_id *oid, enum object_type type)
 {
int first = 0, last = nr_ref_deltas;
 
@@ -691,7 +691,7 @@ static int find_ref_delta(const unsigned char *sha1, enum 
object_type type)
struct ref_delta_entry *delta = _deltas[next];
int cmp;
 
-   cmp = compare_ref_delta_bases(sha1, delta->sha1,
+   cmp = compare_ref_delta_bases(oid, >oid,
  type, 
objects[delta->obj_no].type);
if (!cmp)
return next;
@@ -704,11 +704,11 @@ static int find_ref_delta(const unsigned char *sha1, enum 
object_type type)
return -first-1;
 }
 
-static void find_ref_delta_children(const unsigned char *sha1,
+static void find_ref_delta_children(const struct object_id *oid,
int *first_index, int *last_index,
enum object_type type)
 {
-   int first = find_ref_delta(sha1, type);
+   int first = find_ref_delta(oid, type);
int last = first;
int end = nr_ref_deltas - 1;
 
@@ -717,9 +717,9 @@ static void find_ref_delta_children(const unsigned char 
*sha1,
*last_index = -1;
return;
}
-   while (first > 0 && !hashcmp(ref_deltas[first - 1].sha1, sha1))
+   while (first > 0 && !oidcmp(_deltas[first - 1].oid, oid))
--first;
-   while (last < end && !hashcmp(ref_deltas[last + 1].sha1, sha1))
+   while (last < end && !oidcmp(_deltas[last + 1].oid, oid))
++last;
*first_index = first;
*last_index = last;
@@ -991,7 +991,7 @@ static struct base_data *find_unresolved_deltas_1(struct 
base_data *base,
  struct base_data *prev_base)
 {
if (base->ref_last == -1 && base->ofs_last == -1) {
-   find_ref_delta_children(base->obj->idx.oid.hash,
+   find_ref_delta_children(>obj->idx.oid,
>ref_first, >ref_last,
OBJ_REF_DELTA);
 
@@ -1075,7 +1075,7 @@ static int compare_ref_delta_entry(const void *a, const 
void *b)
const struct ref_delta_entry *delta_a = a;
const struct ref_delta_entry *delta_b = b;
 
-   return hashcmp(delta_a->sha1, delta_b->sha1);
+   return oidcmp(_a->oid, _b->oid);
 }
 
 static void resolve_base(struct object_entry *obj)
@@ -1141,7 +1141,7 @@ static void parse_pack_objects(unsigned char *hash)
ofs_delta++;
} else if (obj->type == OBJ_REF_DELTA) {
ALLOC_GROW(ref_deltas, nr_ref_deltas + 1, 
ref_deltas_alloc);
-   hashcpy(ref_deltas[nr_ref_deltas].sha1, 
ref_delta_oid.hash);
+   oidcpy(_deltas[nr_ref_deltas].oid, _delta_oid);
ref_deltas[nr_ref_deltas].obj_no = i;
nr_ref_deltas++;
} else if (!data) {
@@ -1373,14 +1373,14 @@ static void fix_unresolved_deltas(struct hashfile *f)
 
if (objects[d->obj_no].real_type != OBJ_REF_DELTA)
continue;
-   base_obj->data = read_sha1_file(d->sha1, , 
_obj->size);
+   base_obj->data = read_sha1_file(d->oid.hash, , 
_obj->size);
if (!base_obj->data)
continue;
 
-   if (check_sha1_signature(d->sha1, base_obj->data,
+   if (check_sha1_signature(d->oid.hash, base_obj->data,
base_obj->size, type_name(type)))
-   die(_("local object 

[PATCH v3 15/36] archive: convert write_archive_entry_fn_t to object_id

2018-03-11 Thread brian m. carlson
Convert the write_archive_entry_fn_t type to use a pointer to struct
object_id.  Convert various static functions in the tar and zip
archivers also.

Signed-off-by: brian m. carlson 
---
 archive-tar.c | 28 ++--
 archive-zip.c | 16 
 archive.c | 12 ++--
 archive.h |  2 +-
 4 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/archive-tar.c b/archive-tar.c
index c6ed96ee74..24b1ccef3a 100644
--- a/archive-tar.c
+++ b/archive-tar.c
@@ -111,7 +111,7 @@ static void write_trailer(void)
  * queues up writes, so that all our write(2) calls write exactly one
  * full block; pads writes to RECORDSIZE
  */
-static int stream_blocked(const unsigned char *sha1)
+static int stream_blocked(const struct object_id *oid)
 {
struct git_istream *st;
enum object_type type;
@@ -119,9 +119,9 @@ static int stream_blocked(const unsigned char *sha1)
char buf[BLOCKSIZE];
ssize_t readlen;
 
-   st = open_istream(sha1, , , NULL);
+   st = open_istream(oid->hash, , , NULL);
if (!st)
-   return error("cannot stream blob %s", sha1_to_hex(sha1));
+   return error("cannot stream blob %s", oid_to_hex(oid));
for (;;) {
readlen = read_istream(st, buf, sizeof(buf));
if (readlen <= 0)
@@ -218,7 +218,7 @@ static void prepare_header(struct archiver_args *args,
 }
 
 static void write_extended_header(struct archiver_args *args,
- const unsigned char *sha1,
+ const struct object_id *oid,
  const void *buffer, unsigned long size)
 {
struct ustar_header header;
@@ -226,14 +226,14 @@ static void write_extended_header(struct archiver_args 
*args,
memset(, 0, sizeof(header));
*header.typeflag = TYPEFLAG_EXT_HEADER;
mode = 0100666;
-   xsnprintf(header.name, sizeof(header.name), "%s.paxheader", 
sha1_to_hex(sha1));
+   xsnprintf(header.name, sizeof(header.name), "%s.paxheader", 
oid_to_hex(oid));
prepare_header(args, , mode, size);
write_blocked(, sizeof(header));
write_blocked(buffer, size);
 }
 
 static int write_tar_entry(struct archiver_args *args,
-  const unsigned char *sha1,
+  const struct object_id *oid,
   const char *path, size_t pathlen,
   unsigned int mode)
 {
@@ -257,7 +257,7 @@ static int write_tar_entry(struct archiver_args *args,
mode = (mode | ((mode & 0100) ? 0777 : 0666)) & ~tar_umask;
} else {
return error("unsupported file mode: 0%o (SHA1: %s)",
-mode, sha1_to_hex(sha1));
+mode, oid_to_hex(oid));
}
if (pathlen > sizeof(header.name)) {
size_t plen = get_path_prefix(path, pathlen,
@@ -268,7 +268,7 @@ static int write_tar_entry(struct archiver_args *args,
memcpy(header.name, path + plen + 1, rest);
} else {
xsnprintf(header.name, sizeof(header.name), "%s.data",
- sha1_to_hex(sha1));
+ oid_to_hex(oid));
strbuf_append_ext_header(_header, "path",
 path, pathlen);
}
@@ -276,14 +276,14 @@ static int write_tar_entry(struct archiver_args *args,
memcpy(header.name, path, pathlen);
 
if (S_ISREG(mode) && !args->convert &&
-   sha1_object_info(sha1, ) == OBJ_BLOB &&
+   sha1_object_info(oid->hash, ) == OBJ_BLOB &&
size > big_file_threshold)
buffer = NULL;
else if (S_ISLNK(mode) || S_ISREG(mode)) {
enum object_type type;
-   buffer = sha1_file_to_archive(args, path, sha1, old_mode, 
, );
+   buffer = sha1_file_to_archive(args, path, oid->hash, old_mode, 
, );
if (!buffer)
-   return error("cannot read %s", sha1_to_hex(sha1));
+   return error("cannot read %s", oid_to_hex(oid));
} else {
buffer = NULL;
size = 0;
@@ -292,7 +292,7 @@ static int write_tar_entry(struct archiver_args *args,
if (S_ISLNK(mode)) {
if (size > sizeof(header.linkname)) {
xsnprintf(header.linkname, sizeof(header.linkname),
- "see %s.paxheader", sha1_to_hex(sha1));
+ "see %s.paxheader", oid_to_hex(oid));
strbuf_append_ext_header(_header, "linkpath",
 buffer, size);
} else
@@ -308,7 +308,7 @@ static int write_tar_entry(struct archiver_args *args,
prepare_header(args, , 

[PATCH v3 21/36] builtin/mktree: convert to struct object_id

2018-03-11 Thread brian m. carlson
Convert this file to use struct object_id.  Convert one use of
get_sha1_hex to parse_oid_hex; this is safe since the data comes from a
strbuf and is therefore NUL-terminated.
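
The point of the switch is that parse_oid_hex() both validates the hex
string and reports where it stopped, so the hard-coded 40-character
offsets disappear; and because the input lives in a strbuf, the parse
cannot run off the end of the buffer. A sketch of the pattern on a
simplified "<mode> SP <hex-oid> TAB <path>" line (an illustrative
parser, not mktree's exact input format or code from the patch):

#include "cache.h"

static int parse_mode_and_oid(const char *line, unsigned int *mode,
			      struct object_id *oid, const char **path)
{
	char *end;
	const char *p;

	*mode = strtoul(line, &end, 8);
	if (*end != ' ')
		return -1;
	if (parse_oid_hex(end + 1, oid, &p) || *p != '\t')
		return -1;	/* same shape as the check in mktree_line() */
	*path = p + 1;
	return 0;
}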

Signed-off-by: brian m. carlson 
---
 builtin/mktree.c | 24 
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/builtin/mktree.c b/builtin/mktree.c
index f5f3c0eea1..e39fe5d423 100644
--- a/builtin/mktree.c
+++ b/builtin/mktree.c
@@ -10,13 +10,13 @@
 
 static struct treeent {
unsigned mode;
-   unsigned char sha1[20];
+   struct object_id oid;
int len;
char name[FLEX_ARRAY];
 } **entries;
 static int alloc, used;
 
-static void append_to_tree(unsigned mode, unsigned char *sha1, char *path)
+static void append_to_tree(unsigned mode, struct object_id *oid, char *path)
 {
struct treeent *ent;
size_t len = strlen(path);
@@ -26,7 +26,7 @@ static void append_to_tree(unsigned mode, unsigned char 
*sha1, char *path)
FLEX_ALLOC_MEM(ent, name, path, len);
ent->mode = mode;
ent->len = len;
-   hashcpy(ent->sha1, sha1);
+   oidcpy(>oid, oid);
 
ALLOC_GROW(entries, used + 1, alloc);
entries[used++] = ent;
@@ -54,7 +54,7 @@ static void write_tree(struct object_id *oid)
for (i = 0; i < used; i++) {
struct treeent *ent = entries[i];
strbuf_addf(, "%o %s%c", ent->mode, ent->name, '\0');
-   strbuf_add(, ent->sha1, 20);
+   strbuf_add(, ent->oid.hash, the_hash_algo->rawsz);
}
 
write_object_file(buf.buf, buf.len, tree_type, oid);
@@ -69,11 +69,12 @@ static const char *mktree_usage[] = {
 static void mktree_line(char *buf, size_t len, int nul_term_line, int 
allow_missing)
 {
char *ptr, *ntr;
+   const char *p;
unsigned mode;
enum object_type mode_type; /* object type derived from mode */
enum object_type obj_type; /* object type derived from sha */
char *path, *to_free = NULL;
-   unsigned char sha1[20];
+   struct object_id oid;
 
ptr = buf;
/*
@@ -85,9 +86,8 @@ static void mktree_line(char *buf, size_t len, int 
nul_term_line, int allow_miss
die("input format error: %s", buf);
ptr = ntr + 1; /* type */
ntr = strchr(ptr, ' ');
-   if (!ntr || buf + len <= ntr + 40 ||
-   ntr[41] != '\t' ||
-   get_sha1_hex(ntr + 1, sha1))
+   if (!ntr || parse_oid_hex(ntr + 1, &oid, &p) ||
+   *p != '\t')
die("input format error: %s", buf);
 
/* It is perfectly normal if we do not have a commit from a submodule */
@@ -116,12 +116,12 @@ static void mktree_line(char *buf, size_t len, int 
nul_term_line, int allow_miss
}
 
/* Check the type of object identified by sha1 */
-   obj_type = sha1_object_info(sha1, NULL);
+   obj_type = sha1_object_info(oid.hash, NULL);
if (obj_type < 0) {
if (allow_missing) {
; /* no problem - missing objects are presumed to be of 
the right type */
} else {
-   die("entry '%s' object %s is unavailable", path, 
sha1_to_hex(sha1));
+   die("entry '%s' object %s is unavailable", path, 
oid_to_hex());
}
} else {
if (obj_type != mode_type) {
@@ -131,11 +131,11 @@ static void mktree_line(char *buf, size_t len, int 
nul_term_line, int allow_miss
 * because the new tree entry will never be correct.
 */
die("entry '%s' object %s is a %s but specified type 
was (%s)",
-   path, sha1_to_hex(sha1), type_name(obj_type), 
type_name(mode_type));
+   path, oid_to_hex(), type_name(obj_type), 
type_name(mode_type));
}
}
 
-   append_to_tree(mode, sha1, path);
+   append_to_tree(mode, , path);
free(to_free);
 }
 


[PATCH v3 06/36] tree: convert read_tree_recursive to struct object_id

2018-03-11 Thread brian m. carlson
Convert the callback functions for read_tree_recursive to take a pointer
to struct object_id.

Signed-off-by: brian m. carlson 
---
 archive.c  |  8 
 builtin/checkout.c |  4 ++--
 builtin/log.c  |  2 +-
 builtin/ls-tree.c  |  8 
 merge-recursive.c  |  2 +-
 tree.c | 14 +++---
 tree.h |  2 +-
 7 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/archive.c b/archive.c
index 0b7b62af0c..e664cdb624 100644
--- a/archive.c
+++ b/archive.c
@@ -198,7 +198,7 @@ static int write_directory(struct archiver_context *c)
return ret ? -1 : 0;
 }
 
-static int queue_or_write_archive_entry(const unsigned char *sha1,
+static int queue_or_write_archive_entry(const struct object_id *oid,
struct strbuf *base, const char *filename,
unsigned mode, int stage, void *context)
 {
@@ -224,14 +224,14 @@ static int queue_or_write_archive_entry(const unsigned 
char *sha1,
 
if (check_attr_export_ignore(check))
return 0;
-   queue_directory(sha1, base, filename,
+   queue_directory(oid->hash, base, filename,
mode, stage, c);
return READ_TREE_RECURSIVE;
}
 
if (write_directory(c))
return -1;
-   return write_archive_entry(sha1, base->buf, base->len, filename, mode,
+   return write_archive_entry(oid->hash, base->buf, base->len, filename, 
mode,
   stage, context);
 }
 
@@ -303,7 +303,7 @@ static const struct archiver *lookup_archiver(const char 
*name)
return NULL;
 }
 
-static int reject_entry(const unsigned char *sha1, struct strbuf *base,
+static int reject_entry(const struct object_id *oid, struct strbuf *base,
const char *filename, unsigned mode,
int stage, void *context)
 {
diff --git a/builtin/checkout.c b/builtin/checkout.c
index 8f4dfb1046..8eb118201d 100644
--- a/builtin/checkout.c
+++ b/builtin/checkout.c
@@ -66,7 +66,7 @@ static int post_checkout_hook(struct commit *old_commit, 
struct commit *new_comm
 
 }
 
-static int update_some(const unsigned char *sha1, struct strbuf *base,
+static int update_some(const struct object_id *oid, struct strbuf *base,
const char *pathname, unsigned mode, int stage, void *context)
 {
int len;
@@ -78,7 +78,7 @@ static int update_some(const unsigned char *sha1, struct 
strbuf *base,
 
len = base->len + strlen(pathname);
ce = xcalloc(1, cache_entry_size(len));
-   hashcpy(ce->oid.hash, sha1);
+   oidcpy(>oid, oid);
memcpy(ce->name, base->buf, base->len);
memcpy(ce->name + base->len, pathname, len - base->len);
ce->ce_flags = create_ce_flags(0) | CE_UPDATE;
diff --git a/builtin/log.c b/builtin/log.c
index 94ee177d56..32a744bfd2 100644
--- a/builtin/log.c
+++ b/builtin/log.c
@@ -541,7 +541,7 @@ static int show_tag_object(const struct object_id *oid, 
struct rev_info *rev)
return 0;
 }
 
-static int show_tree_object(const unsigned char *sha1,
+static int show_tree_object(const struct object_id *oid,
struct strbuf *base,
const char *pathname, unsigned mode, int stage, void *context)
 {
diff --git a/builtin/ls-tree.c b/builtin/ls-tree.c
index ef965408e8..c613dd7b82 100644
--- a/builtin/ls-tree.c
+++ b/builtin/ls-tree.c
@@ -60,7 +60,7 @@ static int show_recursive(const char *base, int baselen, 
const char *pathname)
return 0;
 }
 
-static int show_tree(const unsigned char *sha1, struct strbuf *base,
+static int show_tree(const struct object_id *oid, struct strbuf *base,
const char *pathname, unsigned mode, int stage, void *context)
 {
int retval = 0;
@@ -94,7 +94,7 @@ static int show_tree(const unsigned char *sha1, struct strbuf 
*base,
char size_text[24];
if (!strcmp(type, blob_type)) {
unsigned long size;
-   if (sha1_object_info(sha1, ) == OBJ_BAD)
+   if (sha1_object_info(oid->hash, ) == 
OBJ_BAD)
xsnprintf(size_text, sizeof(size_text),
  "BAD");
else
@@ -103,11 +103,11 @@ static int show_tree(const unsigned char *sha1, struct 
strbuf *base,
} else
xsnprintf(size_text, sizeof(size_text), "-");
printf("%06o %s %s %7s\t", mode, type,
-  find_unique_abbrev(sha1, abbrev),
+  find_unique_abbrev(oid->hash, abbrev),
   size_text);
} else
printf("%06o %s %s\t", mode, type,
-  find_unique_abbrev(sha1, abbrev));

[PATCH v3 04/36] cache-tree: convert remnants to struct object_id

2018-03-11 Thread brian m. carlson
Convert the remaining portions of cache-tree.c to use struct object_id.
Convert several instances of 20 to use the_hash_algo instead.
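
The reason the literal 20s go: the_hash_algo->rawsz carries the raw
digest size of whatever hash the repository uses, so the same code keeps
working when that size is no longer 20. A minimal sketch, assuming git's
cache.h:

#include "cache.h"

static void append_entry_oid(struct strbuf *buffer,
			     const struct object_id *oid)
{
	/* was: strbuf_add(buffer, oid->hash, 20);  -- SHA-1 only */
	strbuf_add(buffer, oid->hash, the_hash_algo->rawsz);
}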

Signed-off-by: brian m. carlson 
---
 cache-tree.c | 29 +++--
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/cache-tree.c b/cache-tree.c
index ba07a8067e..6a555f4d43 100644
--- a/cache-tree.c
+++ b/cache-tree.c
@@ -320,7 +320,7 @@ static int update_one(struct cache_tree *it,
struct cache_tree_sub *sub = NULL;
const char *path, *slash;
int pathlen, entlen;
-   const unsigned char *sha1;
+   const struct object_id *oid;
unsigned mode;
int expected_missing = 0;
int contains_ita = 0;
@@ -338,7 +338,7 @@ static int update_one(struct cache_tree *it,
die("cache-tree.c: '%.*s' in '%s' not found",
entlen, path + baselen, path);
i += sub->count;
-   sha1 = sub->cache_tree->oid.hash;
+   oid = >cache_tree->oid;
mode = S_IFDIR;
contains_ita = sub->cache_tree->entry_count < 0;
if (contains_ita) {
@@ -347,19 +347,19 @@ static int update_one(struct cache_tree *it,
}
}
else {
-   sha1 = ce->oid.hash;
+   oid = >oid;
mode = ce->ce_mode;
entlen = pathlen - baselen;
i++;
}
 
-   if (is_null_sha1(sha1) ||
-   (mode != S_IFGITLINK && !missing_ok && 
!has_sha1_file(sha1))) {
+   if (is_null_oid(oid) ||
+   (mode != S_IFGITLINK && !missing_ok && 
!has_object_file(oid))) {
strbuf_release();
if (expected_missing)
return -1;
return error("invalid object %06o %s for '%.*s'",
-   mode, sha1_to_hex(sha1), entlen+baselen, path);
+   mode, oid_to_hex(oid), entlen+baselen, path);
}
 
/*
@@ -385,12 +385,12 @@ static int update_one(struct cache_tree *it,
/*
 * "sub" can be an empty tree if all subentries are i-t-a.
 */
-   if (contains_ita && !hashcmp(sha1, EMPTY_TREE_SHA1_BIN))
+   if (contains_ita && !oidcmp(oid, _tree_oid))
continue;
 
strbuf_grow(, entlen + 100);
strbuf_addf(, "%o %.*s%c", mode, entlen, path + baselen, 
'\0');
-   strbuf_add(, sha1, 20);
+   strbuf_add(, oid->hash, the_hash_algo->rawsz);
 
 #if DEBUG
fprintf(stderr, "cache-tree update-one %o %.*s\n",
@@ -401,7 +401,7 @@ static int update_one(struct cache_tree *it,
if (repair) {
struct object_id oid;
hash_object_file(buffer.buf, buffer.len, tree_type, );
-   if (has_sha1_file(oid.hash))
+   if (has_object_file())
oidcpy(>oid, );
else
to_invalidate = 1;
@@ -465,7 +465,7 @@ static void write_one(struct strbuf *buffer, struct 
cache_tree *it,
 #endif
 
if (0 <= it->entry_count) {
-   strbuf_add(buffer, it->oid.hash, 20);
+   strbuf_add(buffer, it->oid.hash, the_hash_algo->rawsz);
}
for (i = 0; i < it->subtree_nr; i++) {
struct cache_tree_sub *down = it->down[i];
@@ -492,6 +492,7 @@ static struct cache_tree *read_one(const char **buffer, 
unsigned long *size_p)
char *ep;
struct cache_tree *it;
int i, subtree_nr;
+   const unsigned rawsz = the_hash_algo->rawsz;
 
it = NULL;
/* skip name, but make sure name exists */
@@ -520,11 +521,11 @@ static struct cache_tree *read_one(const char **buffer, 
unsigned long *size_p)
goto free_return;
buf++; size--;
if (0 <= it->entry_count) {
-   if (size < 20)
+   if (size < rawsz)
goto free_return;
-   hashcpy(it->oid.hash, (const unsigned char*)buf);
-   buf += 20;
-   size -= 20;
+   memcpy(it->oid.hash, (const unsigned char*)buf, rawsz);
+   buf += rawsz;
+   size -= rawsz;
}
 
 #if DEBUG


[PATCH v3 32/36] sha1_file: convert read_object_with_reference to object_id

2018-03-11 Thread brian m. carlson
Convert read_object_with_reference to take pointers to struct object_id.
Update the internals of the function accordingly.

Signed-off-by: brian m. carlson 
---
 builtin/cat-file.c |  2 +-
 builtin/grep.c |  4 ++--
 builtin/pack-objects.c |  2 +-
 cache.h|  4 ++--
 fast-import.c  | 15 ---
 sha1_file.c| 20 ++--
 tree-walk.c|  9 -
 7 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/builtin/cat-file.c b/builtin/cat-file.c
index 27f4f235f6..4382b4447e 100644
--- a/builtin/cat-file.c
+++ b/builtin/cat-file.c
@@ -159,7 +159,7 @@ static int cat_one_file(int opt, const char *exp_type, 
const char *obj_name,
 * fall-back to the usual case.
 */
}
-   buf = read_object_with_reference(oid.hash, exp_type, , 
NULL);
+   buf = read_object_with_reference(, exp_type, , NULL);
break;
 
default:
diff --git a/builtin/grep.c b/builtin/grep.c
index 9a0973e8f1..386a86b66e 100644
--- a/builtin/grep.c
+++ b/builtin/grep.c
@@ -452,7 +452,7 @@ static int grep_submodule(struct grep_opt *opt, struct 
repository *superproject,
object = parse_object_or_die(oid, oid_to_hex(oid));
 
grep_read_lock();
-   data = read_object_with_reference(object->oid.hash, tree_type,
+   data = read_object_with_reference(>oid, tree_type,
  , NULL);
grep_read_unlock();
 
@@ -614,7 +614,7 @@ static int grep_object(struct grep_opt *opt, const struct 
pathspec *pathspec,
int hit, len;
 
grep_read_lock();
-   data = read_object_with_reference(obj->oid.hash, tree_type,
+   data = read_object_with_reference(>oid, tree_type,
  , NULL);
grep_read_unlock();
 
diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
index 66bc0057fe..a0ccbf6699 100644
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ -1351,7 +1351,7 @@ static void add_preferred_base(struct object_id *oid)
if (window <= num_preferred_base++)
return;
 
-   data = read_object_with_reference(oid->hash, tree_type, , 
tree_oid.hash);
+   data = read_object_with_reference(oid, tree_type, , _oid);
if (!data)
return;
 
diff --git a/cache.h b/cache.h
index 66f41af56e..9293c14564 100644
--- a/cache.h
+++ b/cache.h
@@ -1431,10 +1431,10 @@ extern int df_name_compare(const char *name1, int len1, 
int mode1, const char *n
 extern int name_compare(const char *name1, size_t len1, const char *name2, 
size_t len2);
 extern int cache_name_stage_compare(const char *name1, int len1, int stage1, 
const char *name2, int len2, int stage2);
 
-extern void *read_object_with_reference(const unsigned char *sha1,
+extern void *read_object_with_reference(const struct object_id *oid,
const char *required_type,
unsigned long *size,
-   unsigned char *sha1_ret);
+   struct object_id *oid_ret);
 
 extern struct object *peel_to_type(const char *name, int namelen,
   struct object *o, enum object_type);
diff --git a/fast-import.c b/fast-import.c
index af3a211ec3..dc63af1eb6 100644
--- a/fast-import.c
+++ b/fast-import.c
@@ -2583,8 +2583,9 @@ static void note_change_n(const char *p, struct branch 
*b, unsigned char *old_fa
oidcpy(_oid, _oe->idx.oid);
} else if (!get_oid(p, _oid)) {
unsigned long size;
-   char *buf = read_object_with_reference(commit_oid.hash,
-   commit_type, , commit_oid.hash);
+   char *buf = read_object_with_reference(_oid,
+  commit_type, ,
+  _oid);
if (!buf || size < 46)
die("Not a valid commit: %s", p);
free(buf);
@@ -2653,9 +2654,8 @@ static void parse_from_existing(struct branch *b)
unsigned long size;
char *buf;
 
-   buf = read_object_with_reference(b->oid.hash,
-commit_type, ,
-b->oid.hash);
+   buf = read_object_with_reference(>oid, commit_type, ,
+>oid);
parse_from_commit(b, buf, size);
free(buf);
}
@@ -2732,8 +2732,9 @@ static struct hash_list *parse_merge(unsigned int *count)
oidcpy(>oid, >idx.oid);
} else if (!get_oid(from, >oid)) {

[PATCH v3 16/36] archive: convert sha1_file_to_archive to struct object_id

2018-03-11 Thread brian m. carlson
Convert this function to take a pointer to struct object_id and rename
it object_file_to_archive.

Signed-off-by: brian m. carlson 
---
 archive-tar.c |  2 +-
 archive-zip.c |  4 ++--
 archive.c | 10 +-
 archive.h |  8 
 4 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/archive-tar.c b/archive-tar.c
index 24b1ccef3a..fd622eacc0 100644
--- a/archive-tar.c
+++ b/archive-tar.c
@@ -281,7 +281,7 @@ static int write_tar_entry(struct archiver_args *args,
buffer = NULL;
else if (S_ISLNK(mode) || S_ISREG(mode)) {
enum object_type type;
-   buffer = sha1_file_to_archive(args, path, oid->hash, old_mode, 
, );
+   buffer = object_file_to_archive(args, path, oid, old_mode, 
, );
if (!buffer)
return error("cannot read %s", oid_to_hex(oid));
} else {
diff --git a/archive-zip.c b/archive-zip.c
index e2e5513c03..5841a6ceb6 100644
--- a/archive-zip.c
+++ b/archive-zip.c
@@ -344,8 +344,8 @@ static int write_zip_entry(struct archiver_args *args,
flags |= ZIP_STREAM;
out = buffer = NULL;
} else {
-   buffer = sha1_file_to_archive(args, path, oid->hash, 
mode,
- , );
+   buffer = object_file_to_archive(args, path, oid, mode,
+   , );
if (!buffer)
return error("cannot read %s",
 oid_to_hex(oid));
diff --git a/archive.c b/archive.c
index 4942b5632b..da62b2f541 100644
--- a/archive.c
+++ b/archive.c
@@ -63,16 +63,16 @@ static void format_subst(const struct commit *commit,
free(to_free);
 }
 
-void *sha1_file_to_archive(const struct archiver_args *args,
-  const char *path, const unsigned char *sha1,
-  unsigned int mode, enum object_type *type,
-  unsigned long *sizep)
+void *object_file_to_archive(const struct archiver_args *args,
+const char *path, const struct object_id *oid,
+unsigned int mode, enum object_type *type,
+unsigned long *sizep)
 {
void *buffer;
const struct commit *commit = args->convert ? args->commit : NULL;
 
path += args->baselen;
-   buffer = read_sha1_file(sha1, type, sizep);
+   buffer = read_sha1_file(oid->hash, type, sizep);
if (buffer && S_ISREG(mode)) {
struct strbuf buf = STRBUF_INIT;
size_t size = 0;
diff --git a/archive.h b/archive.h
index 741991bfb6..1f9954f7cd 100644
--- a/archive.h
+++ b/archive.h
@@ -39,9 +39,9 @@ extern int write_archive_entries(struct archiver_args *args, 
write_archive_entry
 extern int write_archive(int argc, const char **argv, const char *prefix, 
const char *name_hint, int remote);
 
 const char *archive_format_from_filename(const char *filename);
-extern void *sha1_file_to_archive(const struct archiver_args *args,
- const char *path, const unsigned char *sha1,
- unsigned int mode, enum object_type *type,
- unsigned long *sizep);
+extern void *object_file_to_archive(const struct archiver_args *args,
+   const char *path, const struct object_id 
*oid,
+   unsigned int mode, enum object_type *type,
+   unsigned long *sizep);
 
 #endif /* ARCHIVE_H */


[PATCH v3 28/36] builtin/notes: convert static functions to object_id

2018-03-11 Thread brian m. carlson
Convert the remaining static functions to take pointers to struct
object_id.

Signed-off-by: brian m. carlson 
---
 builtin/notes.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/builtin/notes.c b/builtin/notes.c
index 39304ba743..08ab9d7130 100644
--- a/builtin/notes.c
+++ b/builtin/notes.c
@@ -118,11 +118,11 @@ static int list_each_note(const struct object_id 
*object_oid,
return 0;
 }
 
-static void copy_obj_to_fd(int fd, const unsigned char *sha1)
+static void copy_obj_to_fd(int fd, const struct object_id *oid)
 {
unsigned long size;
enum object_type type;
-   char *buf = read_sha1_file(sha1, &type, &size);
+   char *buf = read_sha1_file(oid->hash, &type, &size);
if (buf) {
if (size)
write_or_die(fd, buf, size);
@@ -162,7 +162,7 @@ static void write_commented_object(int fd, const struct 
object_id *object)
 }
 
 static void prepare_note_data(const struct object_id *object, struct note_data 
*d,
-   const unsigned char *old_note)
+   const struct object_id *old_note)
 {
if (d->use_editor || !d->given) {
int fd;
@@ -457,7 +457,7 @@ static int add(int argc, const char **argv, const char 
*prefix)
oid_to_hex());
}
 
-   prepare_note_data(, , note ? note->hash : NULL);
+   prepare_note_data(, , note);
if (d.buf.len || allow_empty) {
write_note_data(, _note);
if (add_note(t, , _note, combine_notes_overwrite))
@@ -602,7 +602,7 @@ static int append_edit(int argc, const char **argv, const 
char *prefix)
t = init_notes_check(argv[0], NOTES_INIT_WRITABLE);
note = get_note(t, );
 
-   prepare_note_data(, , edit && note ? note->hash : NULL);
+   prepare_note_data(, , edit && note ? note : NULL);
 
if (note && !edit) {
/* Append buf to previous note contents */


[PATCH v3 20/36] streaming: convert open_istream to use struct object_id

2018-03-11 Thread brian m. carlson
Signed-off-by: brian m. carlson 
---
 archive-tar.c  | 2 +-
 archive-zip.c  | 2 +-
 builtin/index-pack.c   | 2 +-
 builtin/pack-objects.c | 2 +-
 sha1_file.c| 2 +-
 streaming.c| 6 +++---
 streaming.h| 2 +-
 7 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/archive-tar.c b/archive-tar.c
index fd622eacc0..7a0d31d847 100644
--- a/archive-tar.c
+++ b/archive-tar.c
@@ -119,7 +119,7 @@ static int stream_blocked(const struct object_id *oid)
char buf[BLOCKSIZE];
ssize_t readlen;
 
-   st = open_istream(oid->hash, , , NULL);
+   st = open_istream(oid, , , NULL);
if (!st)
return error("cannot stream blob %s", oid_to_hex(oid));
for (;;) {
diff --git a/archive-zip.c b/archive-zip.c
index 5841a6ceb6..18b951b732 100644
--- a/archive-zip.c
+++ b/archive-zip.c
@@ -337,7 +337,7 @@ static int write_zip_entry(struct archiver_args *args,
 
if (S_ISREG(mode) && type == OBJ_BLOB && !args->convert &&
size > big_file_threshold) {
-   stream = open_istream(oid->hash, , , NULL);
+   stream = open_istream(oid, , , NULL);
if (!stream)
return error("cannot stream blob %s",
 oid_to_hex(oid));
diff --git a/builtin/index-pack.c b/builtin/index-pack.c
index b28ebfadd4..ae11d2a610 100644
--- a/builtin/index-pack.c
+++ b/builtin/index-pack.c
@@ -771,7 +771,7 @@ static int check_collison(struct object_entry *entry)
 
memset(, 0, sizeof(data));
data.entry = entry;
-   data.st = open_istream(entry->idx.oid.hash, , , NULL);
+   data.st = open_istream(>idx.oid, , , NULL);
if (!data.st)
return -1;
if (size != entry->size || type != entry->type)
diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
index a197926eaa..8307cc9b04 100644
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ -267,7 +267,7 @@ static unsigned long write_no_reuse_object(struct hashfile 
*f, struct object_ent
if (!usable_delta) {
if (entry->type == OBJ_BLOB &&
entry->size > big_file_threshold &&
-   (st = open_istream(entry->idx.oid.hash, , , 
NULL)) != NULL)
+   (st = open_istream(>idx.oid, , , NULL)) != 
NULL)
buf = NULL;
else {
buf = read_sha1_file(entry->idx.oid.hash, ,
diff --git a/sha1_file.c b/sha1_file.c
index e4cb840661..cdcba4483b 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -799,7 +799,7 @@ int check_object_signature(const struct object_id *oid, 
void *map,
return oidcmp(oid, _oid) ? -1 : 0;
}
 
-   st = open_istream(oid->hash, _type, , NULL);
+   st = open_istream(oid, _type, , NULL);
if (!st)
return -1;
 
diff --git a/streaming.c b/streaming.c
index 5892b50bd8..be85507922 100644
--- a/streaming.c
+++ b/streaming.c
@@ -130,14 +130,14 @@ static enum input_source istream_source(const unsigned 
char *sha1,
}
 }
 
-struct git_istream *open_istream(const unsigned char *sha1,
+struct git_istream *open_istream(const struct object_id *oid,
 enum object_type *type,
 unsigned long *size,
 struct stream_filter *filter)
 {
struct git_istream *st;
struct object_info oi = OBJECT_INFO_INIT;
-   const unsigned char *real = lookup_replace_object(sha1);
+   const unsigned char *real = lookup_replace_object(oid->hash);
enum input_source src = istream_source(real, type, );
 
if (src < 0)
@@ -507,7 +507,7 @@ int stream_blob_to_fd(int fd, const struct object_id *oid, 
struct stream_filter
ssize_t kept = 0;
int result = -1;
 
-   st = open_istream(oid->hash, , , filter);
+   st = open_istream(oid, , , filter);
if (!st) {
if (filter)
free_stream_filter(filter);
diff --git a/streaming.h b/streaming.h
index 73c1d156b3..32f4626771 100644
--- a/streaming.h
+++ b/streaming.h
@@ -8,7 +8,7 @@
 /* opaque */
 struct git_istream;
 
-extern struct git_istream *open_istream(const unsigned char *, enum 
object_type *, unsigned long *, struct stream_filter *);
+extern struct git_istream *open_istream(const struct object_id *, enum 
object_type *, unsigned long *, struct stream_filter *);
 extern int close_istream(struct git_istream *);
 extern ssize_t read_istream(struct git_istream *, void *, size_t);
 


[PATCH v3 01/36] bulk-checkin: convert index_bulk_checkin to struct object_id

2018-03-11 Thread brian m. carlson
Convert the index_bulk_checkin function, and the static functions it
calls, to use pointers to struct object_id.

Signed-off-by: brian m. carlson 
---
 bulk-checkin.c | 18 +-
 bulk-checkin.h |  2 +-
 sha1_file.c|  2 +-
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/bulk-checkin.c b/bulk-checkin.c
index 9d87eac07b..e5ce2a7954 100644
--- a/bulk-checkin.c
+++ b/bulk-checkin.c
@@ -60,17 +60,17 @@ static void finish_bulk_checkin(struct bulk_checkin_state 
*state)
reprepare_packed_git();
 }
 
-static int already_written(struct bulk_checkin_state *state, unsigned char 
sha1[])
+static int already_written(struct bulk_checkin_state *state, struct object_id 
*oid)
 {
int i;
 
/* The object may already exist in the repository */
-   if (has_sha1_file(sha1))
+   if (has_sha1_file(oid->hash))
return 1;
 
/* Might want to keep the list sorted */
for (i = 0; i < state->nr_written; i++)
-   if (!hashcmp(state->written[i]->oid.hash, sha1))
+   if (!oidcmp(>written[i]->oid, oid))
return 1;
 
/* This is a new object we need to keep */
@@ -186,7 +186,7 @@ static void prepare_to_stream(struct bulk_checkin_state 
*state,
 }
 
 static int deflate_to_pack(struct bulk_checkin_state *state,
-  unsigned char result_sha1[],
+  struct object_id *result_oid,
   int fd, size_t size,
   enum object_type type, const char *path,
   unsigned flags)
@@ -236,17 +236,17 @@ static int deflate_to_pack(struct bulk_checkin_state 
*state,
if (lseek(fd, seekback, SEEK_SET) == (off_t) -1)
return error("cannot seek back");
}
-   the_hash_algo->final_fn(result_sha1, );
+   the_hash_algo->final_fn(result_oid->hash, );
if (!idx)
return 0;
 
idx->crc32 = crc32_end(state->f);
-   if (already_written(state, result_sha1)) {
+   if (already_written(state, result_oid)) {
hashfile_truncate(state->f, );
state->offset = checkpoint.offset;
free(idx);
} else {
-   hashcpy(idx->oid.hash, result_sha1);
+   oidcpy(>oid, result_oid);
ALLOC_GROW(state->written,
   state->nr_written + 1,
   state->alloc_written);
@@ -255,11 +255,11 @@ static int deflate_to_pack(struct bulk_checkin_state 
*state,
return 0;
 }
 
-int index_bulk_checkin(unsigned char *sha1,
+int index_bulk_checkin(struct object_id *oid,
   int fd, size_t size, enum object_type type,
   const char *path, unsigned flags)
 {
-   int status = deflate_to_pack(, sha1, fd, size, type,
+   int status = deflate_to_pack(, oid, fd, size, type,
 path, flags);
if (!state.plugged)
finish_bulk_checkin();
diff --git a/bulk-checkin.h b/bulk-checkin.h
index fbd40fc98c..a85527318b 100644
--- a/bulk-checkin.h
+++ b/bulk-checkin.h
@@ -4,7 +4,7 @@
 #ifndef BULK_CHECKIN_H
 #define BULK_CHECKIN_H
 
-extern int index_bulk_checkin(unsigned char sha1[],
+extern int index_bulk_checkin(struct object_id *oid,
  int fd, size_t size, enum object_type type,
  const char *path, unsigned flags);
 
diff --git a/sha1_file.c b/sha1_file.c
index 1b94f39c4c..6b887f8aaf 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -1892,7 +1892,7 @@ static int index_stream(struct object_id *oid, int fd, 
size_t size,
enum object_type type, const char *path,
unsigned flags)
 {
-   return index_bulk_checkin(oid->hash, fd, size, type, path, flags);
+   return index_bulk_checkin(oid, fd, size, type, path, flags);
 }
 
 int index_fd(struct object_id *oid, int fd, struct stat *st,


[PATCH v3 03/36] cache-tree: convert write_*_as_tree to object_id

2018-03-11 Thread brian m. carlson
Convert write_index_as_tree and write_cache_as_tree to use struct
object_id.

Signed-off-by: brian m. carlson 
---
 builtin/am.c |  8 
 builtin/merge.c  |  2 +-
 builtin/write-tree.c |  2 +-
 cache-tree.c | 10 +-
 cache-tree.h |  4 ++--
 sequencer.c  |  4 ++--
 6 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/builtin/am.c b/builtin/am.c
index 1151b5c73a..1bcc3606c5 100644
--- a/builtin/am.c
+++ b/builtin/am.c
@@ -1550,7 +1550,7 @@ static int fall_back_threeway(const struct am_state 
*state, const char *index_pa
discard_cache();
read_cache_from(index_path);
 
-   if (write_index_as_tree(orig_tree.hash, _index, index_path, 0, 
NULL))
+   if (write_index_as_tree(_tree, _index, index_path, 0, NULL))
return error(_("Repository lacks necessary blobs to fall back 
on 3-way merge."));
 
say(state, stdout, _("Using index info to reconstruct a base tree..."));
@@ -1575,7 +1575,7 @@ static int fall_back_threeway(const struct am_state 
*state, const char *index_pa
return error(_("Did you hand edit your patch?\n"
"It does not apply to blobs recorded in its 
index."));
 
-   if (write_index_as_tree(their_tree.hash, _index, index_path, 0, 
NULL))
+   if (write_index_as_tree(_tree, _index, index_path, 0, NULL))
return error("could not write tree");
 
say(state, stdout, _("Falling back to patching base and 3-way 
merge..."));
@@ -1626,7 +1626,7 @@ static void do_commit(const struct am_state *state)
if (run_hook_le(NULL, "pre-applypatch", NULL))
exit(1);
 
-   if (write_cache_as_tree(tree.hash, 0, NULL))
+   if (write_cache_as_tree(, 0, NULL))
die(_("git write-tree failed to write a tree"));
 
if (!get_oid_commit("HEAD", )) {
@@ -2004,7 +2004,7 @@ static int clean_index(const struct object_id *head, 
const struct object_id *rem
if (fast_forward_to(head_tree, head_tree, 1))
return -1;
 
-   if (write_cache_as_tree(index.hash, 0, NULL))
+   if (write_cache_as_tree(, 0, NULL))
return -1;
 
index_tree = parse_tree_indirect();
diff --git a/builtin/merge.c b/builtin/merge.c
index e8d9d4383e..0c7437f536 100644
--- a/builtin/merge.c
+++ b/builtin/merge.c
@@ -639,7 +639,7 @@ static int read_tree_trivial(struct object_id *common, 
struct object_id *head,
 
 static void write_tree_trivial(struct object_id *oid)
 {
-   if (write_cache_as_tree(oid->hash, 0, NULL))
+   if (write_cache_as_tree(oid, 0, NULL))
die(_("git write-tree failed to write a tree"));
 }
 
diff --git a/builtin/write-tree.c b/builtin/write-tree.c
index 299a121531..c9d3c544e7 100644
--- a/builtin/write-tree.c
+++ b/builtin/write-tree.c
@@ -38,7 +38,7 @@ int cmd_write_tree(int argc, const char **argv, const char 
*unused_prefix)
argc = parse_options(argc, argv, unused_prefix, write_tree_options,
 write_tree_usage, 0);
 
-   ret = write_cache_as_tree(oid.hash, flags, prefix);
+   ret = write_cache_as_tree(, flags, prefix);
switch (ret) {
case 0:
printf("%s\n", oid_to_hex());
diff --git a/cache-tree.c b/cache-tree.c
index c52e4303df..ba07a8067e 100644
--- a/cache-tree.c
+++ b/cache-tree.c
@@ -599,7 +599,7 @@ static struct cache_tree *cache_tree_find(struct cache_tree 
*it, const char *pat
return it;
 }
 
-int write_index_as_tree(unsigned char *sha1, struct index_state *index_state, 
const char *index_path, int flags, const char *prefix)
+int write_index_as_tree(struct object_id *oid, struct index_state 
*index_state, const char *index_path, int flags, const char *prefix)
 {
int entries, was_valid;
struct lock_file lock_file = LOCK_INIT;
@@ -640,19 +640,19 @@ int write_index_as_tree(unsigned char *sha1, struct 
index_state *index_state, co
ret = WRITE_TREE_PREFIX_ERROR;
goto out;
}
-   hashcpy(sha1, subtree->oid.hash);
+   oidcpy(oid, >oid);
}
else
-   hashcpy(sha1, index_state->cache_tree->oid.hash);
+   oidcpy(oid, _state->cache_tree->oid);
 
 out:
rollback_lock_file(_file);
return ret;
 }
 
-int write_cache_as_tree(unsigned char *sha1, int flags, const char *prefix)
+int write_cache_as_tree(struct object_id *oid, int flags, const char *prefix)
 {
-   return write_index_as_tree(sha1, _index, get_index_file(), flags, 
prefix);
+   return write_index_as_tree(oid, _index, get_index_file(), flags, 
prefix);
 }
 
 static void prime_cache_tree_rec(struct cache_tree *it, struct tree *tree)
diff --git a/cache-tree.h b/cache-tree.h
index f7b9cab7ee..cfd5328cc9 100644
--- a/cache-tree.h
+++ b/cache-tree.h
@@ -47,8 +47,8 @@ int update_main_cache_tree(int);
 #define 

[PATCH v3 35/36] sha1_file: introduce a constant for max header length

2018-03-11 Thread brian m. carlson
There were several instances of 32 sprinkled throughout this file, all
of which were used for allocating a buffer to store the header of an
object.  Introduce a constant, MAX_HEADER_LEN, for this purpose.

Note that this constant is slightly larger than required; the longest
possible header is 28 (6 for "commit", 1 for a space, 20 for a 64-bit
length in decimal, and 1 for the NUL).  However, the overallocation
should not cause any problems, so leave it as it is.
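
(For illustration only, a quick way to double-check that bound from the
shell; assuming the longest type name, "commit", and a 64-bit size:)

```sh
# "commit", a space, a 20-digit decimal length, and a trailing NUL:
# 28 bytes, which fits comfortably in the 32-byte MAX_HEADER_LEN buffer.
printf 'commit 18446744073709551615\0' | wc -c
```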

Signed-off-by: brian m. carlson 
---
 sha1_file.c | 17 ++---
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/sha1_file.c b/sha1_file.c
index 60418da10b..581a9dc522 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -30,6 +30,9 @@
 #include "packfile.h"
 #include "fetch-object.h"
 
+/* The maximum size for an object header. */
+#define MAX_HEADER_LEN 32
+
 const unsigned char null_sha1[GIT_MAX_RAWSZ];
 const struct object_id null_oid;
 const struct object_id empty_tree_oid = {
@@ -791,7 +794,7 @@ int check_object_signature(const struct object_id *oid, 
void *map,
enum object_type obj_type;
struct git_istream *st;
git_hash_ctx c;
-   char hdr[32];
+   char hdr[MAX_HEADER_LEN];
int hdrlen;
 
if (map) {
@@ -1150,7 +1153,7 @@ static int sha1_loose_object_info(const unsigned char 
*sha1,
unsigned long mapsize;
void *map;
git_zstream stream;
-   char hdr[32];
+   char hdr[MAX_HEADER_LEN];
struct strbuf hdrbuf = STRBUF_INIT;
unsigned long size_scratch;
 
@@ -1514,7 +1517,7 @@ static int write_buffer(int fd, const void *buf, size_t 
len)
 int hash_object_file(const void *buf, unsigned long len, const char *type,
 struct object_id *oid)
 {
-   char hdr[32];
+   char hdr[MAX_HEADER_LEN];
int hdrlen = sizeof(hdr);
write_object_file_prepare(buf, len, type, oid, hdr, );
return 0;
@@ -1669,7 +1672,7 @@ static int freshen_packed_object(const unsigned char 
*sha1)
 int write_object_file(const void *buf, unsigned long len, const char *type,
  struct object_id *oid)
 {
-   char hdr[32];
+   char hdr[MAX_HEADER_LEN];
int hdrlen = sizeof(hdr);
 
/* Normally if we have it in the pack then we do not bother writing
@@ -1689,7 +1692,7 @@ int hash_object_file_literally(const void *buf, unsigned 
long len,
int hdrlen, status = 0;
 
/* type string, SP, %lu of the length plus NUL must fit this */
-   hdrlen = strlen(type) + 32;
+   hdrlen = strlen(type) + MAX_HEADER_LEN;
header = xmalloc(hdrlen);
write_object_file_prepare(buf, len, type, oid, header, );
 
@@ -1709,7 +1712,7 @@ int force_object_loose(const struct object_id *oid, 
time_t mtime)
void *buf;
unsigned long len;
enum object_type type;
-   char hdr[32];
+   char hdr[MAX_HEADER_LEN];
int hdrlen;
int ret;
 
@@ -2191,7 +2194,7 @@ int read_loose_object(const char *path,
void *map = NULL;
unsigned long mapsize;
git_zstream stream;
-   char hdr[32];
+   char hdr[MAX_HEADER_LEN];
 
*contents = NULL;
 


Re: [RFC v2] Rebasing merges: a journey to the ultimate solution (Road Clear)

2018-03-11 Thread Igor Djordjevic
Hi Dscho,

I`m yet to read (and reason about) your whole (very informative) 
reply, but I just wanted to address this part first, as it might be a 
clear end-game situation already, due to a mutual agreement, all the 
rest being purely academic, interesting, but not any more (that) 
important to discuss.

On 11/03/2018 16:40, Johannes Schindelin wrote:
> 
> > For myself, I do actually favor Sergey`s approach in general, but 
> > _implemented_ through what Phillip described (or a mixture of both, to 
> > be precise). But, let me explain... :)
> 
> So as you explained later in this sub-thread, Sergey's approach is
> essentially the same as Phillip's.
> 
> I still do not understand Sergey's approach on a fundamental level. I
> mean, I can follow his instructions how to implement his algorithm, but it
> is as if I had a blindfold on and somebody guided me through a maze: I
> understand *what* I am supposed to do, but I have no clue *why*.
> 
> And admittedly, I got very frustrated when a document was thrown my way
> that is too long to read in one sitting, and all of my attempts at getting
> clear and comprehensible answers to specific questions were met with "go
> and read that document, I am sure you will understand then".
> 
> For something as fundamental to my daily workflow as an interactive rebase
> (*especially* when trying to maintain the branch topology), this is no
> good at all.
> 
> Since you already confirmed that there is essentially no difference
> between the two approaches, I will simply go with the one I understand, in
> particular I understand *why* it works.
> 
> But let's read on, maybe I will change my mind based on your explanations
> (which do answer my questions, thank you so much for that)...

No problem, I learned much myself trying to write those explanations
in the first place, and I still need to read on myself to see how well
my explanations actually fared :) Thank you for still holding on,
though.

But I just wanted to point out that you can really just go with what
Phillip described if you find that easier to reason about (and/or
implement); there`s no need for any mind changing, as essentially (and
in my opinion) it seems to be just a slightly different implementation
of the same concept (one not requiring temporary commits).

That said, *if* we decide we like the temporary commit U1' == U2'
consistency check (especially for non-interactive rebase, maybe), we
can produce these commits after the fact, for the sake of the check
only.
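
(Just for illustration, a rough sketch of how that after-the-fact check
could look; the variable names are placeholders, A3/B3 are assumed to be
M's first and second parents, and using `cherry-pick -n -m <parent>` to
recreate the temporary commits is my assumption, not an actual
implementation:)

```sh
# $old_merge = M, $new_tip1/$new_tip2 = the rebased A3/B3
git checkout -q --detach "$new_tip1" &&
git cherry-pick -n -m 1 "$old_merge" &&   # U1': M's change relative to its 1st parent
u1=$(git write-tree)
git checkout -qf --detach "$new_tip2" &&
git cherry-pick -n -m 2 "$old_merge" &&   # U2': M's change relative to its 2nd parent
u2=$(git write-tree)
test "$u1" = "$u2" ||
echo "U1' != U2', the rebased merge may deserve a closer look"
```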

I will come with a follow-up, but all the rest might be less important.

Regards, Buga


Re: [Feature request] Add config option to gpgsign IFF key is present

2018-03-11 Thread Joshua Nelson
I like having machine-specific config in ~/.config/git, I think I'll do that. I 
didn't realize you could forward gpg-agent over a connection, I may look 
further into that.
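
(For future reference, a minimal sketch of that split, with the key id
as a placeholder:)

```sh
# shared everywhere, e.g. via a dotfiles repository
git config --file ~/.gitconfig user.signingkey 0xDEADBEEF
# only on machines that actually have the secret key
mkdir -p ~/.config/git
git config --file ~/.config/git/config commit.gpgsign true
```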

Thanks for the help!

Joshua Nelson

On Sunday, March 11, 2018 17:21:42 EDT brian m. carlson wrote:
> On Sat, Mar 10, 2018 at 03:28:43PM +, NELSON, JOSHUA Y wrote:
> > Currently, `commit.gpgsign` allows you to give either 'true' or 'false' as
> > a value. If the key is not present, commits will fail:
> > 
> > ```sh
> > $ git commit -m "example"
> > error: gpg failed to sign the data
> > fatal: failed to write commit object
> > ```
> > 
> > I like to reuse my config file across several machines, some of which do
> > not have my GPG key. Would it be possible to add an option to sign the
> > commit only if the private key for `user.signingkey` is present? It could
> > be named something like `commit.gpgsign=default-yes`.
> Unfortunately, this isn't always possible.  You can forward the Unix
> socket for the agent over an SSH connection, at which point the remote
> machine has the ability to sign, but the gpg client doesn't list those
> as existing secret keys in its output (because technically, those keys
> don't exist on the remote system).  I use this technique at work, for
> example, to sign things on my development VM.
> 
> It might be possible to make the failure of the signing operation not be
> fatal in this case, although that could cause people to fail to sign due
> to transient failures even when the key is present on the system.
> 
> I usually handle this by storing my main configuration in ~/.gitconfig
> and on machines where I have a key, additionally having a
> ~/.config/git/config file that contains the commit.gpgsign entry.
> --
> brian m. carlson / brian with sandals: Houston, Texas, US
> https://www.crustytoothpaste.net/~bmc | My opinion only
> OpenPGP: https://keybase.io/bk2204




[PATCH] git-ci: use pylint to analyze the git-p4 code

2018-03-11 Thread Viet Hung Tran
Add a new job named Pylint to .travis.yml in order to analyze the
git-p4 Python code. Although Travis CI has yet to implement first-class
support for projects mixing multiple languages, Python can be installed
as an apt package. From there, pylint can be installed using pip and
used to analyze the git-p4.py file.
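
(For reference, the check can be reproduced locally with the same
commands the job uses:)

```sh
pip install --user pylint
pylint git-p4.py
```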

Signed-off-by: Viet Hung Tran 
---
 .travis.yml | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/.travis.yml b/.travis.yml
index 5f5ee4f3b..581d75319 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -46,6 +46,16 @@ matrix:
 - docker
   before_install:
   script: ci/run-linux32-docker.sh
+- env: jobname=Pylint
+  compiler:
+  addons:
+apt:
+  packages:
+  - python
+  before_install:
+  install: pip install --user pylint
+  script: pylint git-p4.py
+  after_failure:
 - env: jobname=StaticAnalysis
   os: linux
   compiler:
-- 
2.16.2.440.gc6284da



Re: [RFC v2] Rebasing merges: a journey to the ultimate solution (Road Clear)

2018-03-11 Thread Igor Djordjevic
Hi Dscho,

On 11/03/2018 16:47, Johannes Schindelin wrote:
> 
> > > > Phillip's method is essentially merging the new tips into the original
> > > > merge, pretending that the new tips were not rebased but merged into
> > > > upstream.
> > >
> > > [...]
> > >
> > > Here`s a starting point, two commits A and B, merged into M:
> > >
> > > (3) ---A
> > > \
> > >  M
> > > /
> > > ---B
> > >
> > >
> > > According the "patch theory"[1] (which might not be too popular 
> > > around here, but should serve the purpose for what I`m trying to 
> > > explain), each merge commit can be "transformed" to two non-merge 
> > > commits, one on top of each of the merge parents, where new commit 
> > > brings its original merge parent commit tree to the state of the 
> > > merge commit tree:
> > >
> > > (4) ---A---U1
> > >
> > >
> > >
> > > ---B---U2
> > >
> > >
> > > Now, we have two new commits, U1 and U2, each having the same tree as 
> > > previous merge commit M, but representing changes in regards to 
> > > specific parents - and this is essentially what Sergey`s original 
> > > approach was using (whether he knew it, or not).
> > >
> > > When it comes to rebasing, it`s pretty simple, too. As this:
> > >
> > > (5) ---X1---o---o---o---o---o---X2 (master)
> > >|\
> > >| A1---A2---A3
> > >| \
> > >|  M
> > >| /
> > >\-B1---B2---B3
> > >
> > > ... actually equals this:
> > >
> > > (6) ---X1---o---o---o---o---o---X2 (master)
> > >|\
> > >| A1---A2---A3---U1
> > >|
> > >|
> > >|
> > >\-B1---B2---B3---U2
> > >
> > > ... where trees of M, U1 and U2 are same, and we can use the regular 
> > > rebase semantics and rebase it to this:
> > >
> > > (7) ---X1---o---o---o---o---o---X2 (master)
> > > |\
> > > | A1'--A2'--A3'--U1'
> > > |
> > > |
> > > |
> > > \-B1'--B2'--B3'--U2'
> > >
> > > ... which is essentially this again:
> > >
> > > (8) ---X1---o---o---o---o---o---X2 (master)
> > > |\
> > > | A1'--A2'--A3'
> > > |\
> > > | M'
> > > |/
> > > \-B1'--B2'--B3'
> > >
> >
> > Having explained all this, I realized this is the same "essentially 
> > merging the new tips into the original pretending that the new tips 
> > were not rebased but merged into upstream" as Phillip`s one, just 
> > that we have additional temporary commits U1 and U2 (as per mentioned 
> > "patch theory") :)
> 
> But if the old tips had been merged into upstream (resulting in the new
> tips), then the merge bases would be *the old tips*.

Exactly, and that is what the part you`ve cut out of the quote was
showing :) By Phillip`s implementation, we would indeed start with the
*old tips* as merge bases (the old tips being U1 and U2 in this case),
which further gets transformed as previously written:

>   git merge-recursive U1 -- M U1'
>   tree="$(git write-tree)"
>   git merge-recursive U2 -- $tree U2'
>   tree="$(git write-tree)"
> 
> ..., where we know U1 = U2 = M (in regards to trees), so this is the 
> same as:
> 
>   git merge-recursive M -- M U1'
>   tree="$(git write-tree)"
>   git merge-recursive M -- $tree U2'
>   tree="$(git write-tree)"

Here, `git merge-recursive M -- M U1'` simply equals the U1' tree
(being a fast-forward merge), so we can write the two merges above as
a single merge, too:

>   git merge-recursive M -- U1' U2'
>   tree="$(git write-tree)"
> 
> ... which is exactly what Sergey`s (updated) approach suggests, 
> merging U1' and U2' with M as merge-base (and shown inside that 
> sample implementation script I provided[1]) :)

So from the *old tips* being the merge bases for the rebased merge
(Phillip), we got to the *old merge commit* being the merge base
(Sergey), or vice versa. Does this shed a bit more light on it now? Or
did you want to point out something else in the first place...?

> I am still not sure for what scenarios Phillip's strategy is the same as
> Sergey's (updated) one, as the former strategy can do completely without
> temporary commits [...]

I think the root of the misunderstanding might be coming from the fact
that Sergey was mainly describing a general concept (without a strictly
defined implementation strategy, not being restricted to a specific
one), whereas Phillip came up with a solution that eventually seems to
use the same concept (as those transformations above should show), but
simplifies it further inside a concrete implementation.
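
(As a side note, the temporary commits from the diagrams above are
cheap to construct; a sketch, with M, A3 and B3 standing in for real
commit names:)

```sh
# each temporary commit reuses M's tree, but has a single parent
U1=$(git commit-tree "M^{tree}" -p A3 -m "U1: M's change relative to A3")
U2=$(git commit-tree "M^{tree}" -p B3 -m "U2: M's change relative to B3")
```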

By saying that Phillip "simplified it", even though transformations 
shown above 

[PATCH v2 0/2] git-svn: --author-prog improvements

2018-03-11 Thread Andreas Heiduk
The first patch has been queued by Eric Wong but not by Junio Hamano,
so I'm not sure what the expected procedure is. I'm posting it again
just in case.

The second patch has grown up with some documentation and some tests.

Andreas Heiduk (2):
  git-svn: search --authors-prog in PATH too
  git-svn: allow empty email-address in authors-prog and authors-file

 Documentation/git-svn.txt   | 13 ++---
 git-svn.perl|  3 ++-
 perl/Git/SVN.pm | 13 ++---
 t/t9130-git-svn-authors-file.sh | 14 ++
 t/t9138-git-svn-authors-prog.sh | 25 -
 5 files changed, 56 insertions(+), 12 deletions(-)

-- 
2.16.2



[PATCH v2 1/2] git-svn: search --authors-prog in PATH too

2018-03-11 Thread Andreas Heiduk
In 36db1eddf9 ("git-svn: add --authors-prog option", 2009-05-14) the path
to authors-prog was made absolute because git-svn changes the current
directory in some situations. This makes sense if the program is part of
the repository but prevents searching via $PATH.

The old behaviour is still retained, but if the file does not exist,
then authors-prog is searched for in $PATH like any other command.
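
(A hypothetical usage this enables; the script name and URL below are
just placeholders:)

```sh
# an author-lookup script installed somewhere on $PATH
git svn clone --authors-prog=svn-author-lookup https://svn.example.com/repo/trunk
```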

Signed-off-by: Andreas Heiduk 
Signed-off-by: Eric Wong 
---
 Documentation/git-svn.txt | 5 +
 git-svn.perl  | 3 ++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/Documentation/git-svn.txt b/Documentation/git-svn.txt
index 636e09048e..b858374649 100644
--- a/Documentation/git-svn.txt
+++ b/Documentation/git-svn.txt
@@ -657,6 +657,11 @@ config key: svn.authorsfile
expected to return a single line of the form "Name ",
which will be treated as if included in the authors file.
 +
+Due to historical reasons a relative 'filename' is first searched
+relative to the current directory for 'init' and 'clone' and relative
+to the root of the working tree for 'fetch'. If 'filename' is
+not found, it is searched like any other command in '$PATH'.
++
 [verse]
 config key: svn.authorsProg
 
diff --git a/git-svn.perl b/git-svn.perl
index a6b6c3e40c..050f2a36f4 100755
--- a/git-svn.perl
+++ b/git-svn.perl
@@ -374,7 +374,8 @@ version() if $_version;
 usage(1) unless defined $cmd;
 load_authors() if $_authors;
 if (defined $_authors_prog) {
-   $_authors_prog = "'" . File::Spec->rel2abs($_authors_prog) . "'";
+   my $abs_file = File::Spec->rel2abs($_authors_prog);
+   $_authors_prog = "'" . $abs_file . "'" if -x $abs_file;
 }
 
 unless ($cmd =~ /^(?:clone|init|multi-init|commit-diff)$/) {
-- 
2.16.2



[PATCH v2 2/2] git-svn: allow empty email-address in authors-prog and authors-file

2018-03-11 Thread Andreas Heiduk
The email address in --authors-file and --authors-prog can be empty,
but git-svn translated it into a synthetic email address in the form
$USERNAME@$REPO_UUID. Now git-svn behaves like git-commit: if the email
is explicitly set to the empty string, the commit does not contain an
email address.
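
(An example of what this permits; the names here are placeholders:)

```sh
cat >authors.txt <<\EOF
jrandom = J. Random Hacker <>
EOF
git svn fetch --authors-file=authors.txt
```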

Signed-off-by: Andreas Heiduk 
---
 Documentation/git-svn.txt   |  8 +---
 perl/Git/SVN.pm | 13 ++---
 t/t9130-git-svn-authors-file.sh | 14 ++
 t/t9138-git-svn-authors-prog.sh | 25 -
 4 files changed, 49 insertions(+), 11 deletions(-)

diff --git a/Documentation/git-svn.txt b/Documentation/git-svn.txt
index b858374649..d59379ee23 100644
--- a/Documentation/git-svn.txt
+++ b/Documentation/git-svn.txt
@@ -635,7 +635,8 @@ config key: svn.findcopiesharder
 
 -A::
 --authors-file=::
-   Syntax is compatible with the file used by 'git cvsimport':
+   Syntax is compatible with the file used by 'git cvsimport' but
+   an empty email address can be supplied with '<>':
 +
 
loginname = Joe User 
@@ -654,8 +655,9 @@ config key: svn.authorsfile
If this option is specified, for each SVN committer name that
does not exist in the authors file, the given file is executed
with the committer name as the first argument.  The program is
-   expected to return a single line of the form "Name ",
-   which will be treated as if included in the authors file.
+   expected to return a single line of the form "Name " or
+   "Name <>", which will be treated as if included in the authors
+   file.
 +
 Due to historical reasons a relative 'filename' is first searched
 relative to the current directory for 'init' and 'clone' and relative
diff --git a/perl/Git/SVN.pm b/perl/Git/SVN.pm
index bc4eed3d75..945ca4db2b 100644
--- a/perl/Git/SVN.pm
+++ b/perl/Git/SVN.pm
@@ -1482,7 +1482,6 @@ sub call_authors_prog {
}
if ($author =~ /^\s*(.+?)\s*<(.*)>\s*$/) {
my ($name, $email) = ($1, $2);
-   $email = undef if length $2 == 0;
return [$name, $email];
} else {
die "Author: $orig_author: $::_authors_prog returned "
@@ -2020,8 +2019,8 @@ sub make_log_entry {
remove_username($full_url);
$log_entry{metadata} = "$full_url\@$r $uuid";
$log_entry{svm_revision} = $r;
-   $email ||= "$author\@$uuid";
-   $commit_email ||= "$author\@$uuid";
+   $email = "$author\@$uuid" unless defined $email;
+   $commit_email = "$author\@$uuid" unless defined $commit_email;
} elsif ($self->use_svnsync_props) {
my $full_url = canonicalize_url(
add_path_to_url( $self->svnsync->{url}, $self->path )
@@ -2029,15 +2028,15 @@ sub make_log_entry {
remove_username($full_url);
my $uuid = $self->svnsync->{uuid};
$log_entry{metadata} = "$full_url\@$rev $uuid";
-   $email ||= "$author\@$uuid";
-   $commit_email ||= "$author\@$uuid";
+   $email = "$author\@$uuid" unless defined $email;
+   $commit_email = "$author\@$uuid" unless defined $commit_email;
} else {
my $url = $self->metadata_url;
remove_username($url);
my $uuid = $self->rewrite_uuid || $self->ra->get_uuid;
$log_entry{metadata} = "$url\@$rev " . $uuid;
-   $email ||= "$author\@" . $uuid;
-   $commit_email ||= "$author\@" . $uuid;
+   $email = "$author\@$uuid" unless defined $email;
+   $commit_email = "$author\@$uuid" unless defined $commit_email;
}
$log_entry{name} = $name;
$log_entry{email} = $email;
diff --git a/t/t9130-git-svn-authors-file.sh b/t/t9130-git-svn-authors-file.sh
index 41264818cc..6af6daf461 100755
--- a/t/t9130-git-svn-authors-file.sh
+++ b/t/t9130-git-svn-authors-file.sh
@@ -108,6 +108,20 @@ test_expect_success !MINGW 'fresh clone with 
svn.authors-file in config' '
)
 '
 
+cat >> svn-authors <<EOF
+ff = FFF FFF <>
+EOF
+
+test_expect_success 'authors-file imported user without email' '
+   svn_cmd mkdir -m aa/branches/ff --username ff "$svnrepo/aa/branches/ff" 
&&
+   (
+   cd aa-work &&
+   git svn fetch --authors-file=../svn-authors &&
+   git rev-list -1 --pretty=raw refs/remotes/origin/ff | \
+ grep "^author FFF FFF <> "
+   )
+   '
+
 test_debug 'GIT_DIR=gitconfig.clone/.git git log'
 
 test_done
diff --git a/t/t9138-git-svn-authors-prog.sh b/t/t9138-git-svn-authors-prog.sh
index 7d7e9d46bc..0cec56128f 100755
--- a/t/t9138-git-svn-authors-prog.sh
+++ b/t/t9138-git-svn-authors-prog.sh
@@ -9,7 +9,9 @@ test_description='git svn authors prog tests'
 
 

Re: [RFC v2] Rebasing merges: a journey to the ultimate solution (Road Clear)

2018-03-11 Thread Johannes Schindelin
Hi Buga,

On Fri, 9 Mar 2018, Igor Djordjevic wrote:

> On 08/03/2018 20:58, Igor Djordjevic wrote:
> > 
> > > Phillip's method is essentially merging the new tips into the original
> > > merge, pretending that the new tips were not rebased but merged into
> > > upstream.
> > 
> > [...]
> > 
> > Here`s a starting point, two commits A and B, merged into M:
> > 
> > (3) ---A
> > \
> >  M
> > /
> > ---B
> > 
> > 
> > According the "patch theory"[1] (which might not be too popular 
> > around here, but should serve the purpose for what I`m trying to 
> > explain), each merge commit can be "transformed" to two non-merge 
> > commits, one on top of each of the merge parents, where new commit 
> > brings its original merge parent commit tree to the state of the 
> > merge commit tree:
> > 
> > (4) ---A---U1
> > 
> > 
> > 
> > ---B---U2
> > 
> > 
> > Now, we have two new commits, U1 and U2, each having the same tree as 
> > previous merge commit M, but representing changes in regards to 
> > specific parents - and this is essentially what Sergey`s original 
> > approach was using (whether he knew it, or not).
> > 
> > When it comes to rebasing, it`s pretty simple, too. As this:
> > 
> > (5) ---X1---o---o---o---o---o---X2 (master)
> >|\
> >| A1---A2---A3
> >| \
> >|  M
> >| /
> >\-B1---B2---B3
> > 
> > ... actually equals this:
> > 
> > (6) ---X1---o---o---o---o---o---X2 (master)
> >|\
> >| A1---A2---A3---U1
> >|
> >|
> >|
> >\-B1---B2---B3---U2
> > 
> > ... where trees of M, U1 and U2 are same, and we can use the regular 
> > rebase semantics and rebase it to this:
> > 
> > (7) ---X1---o---o---o---o---o---X2 (master)
> > |\
> > | A1'--A2'--A3'--U1'
> > |
> > |
> > |
> > \-B1'--B2'--B3'--U2'
> > 
> > ... which is essentially this again:
> > 
> > (8) ---X1---o---o---o---o---o---X2 (master)
> > |\
> > | A1'--A2'--A3'
> > |\
> > | M'
> > |/
> > \-B1'--B2'--B3'
> > 
> 
> Having explained all this, I realized this is the same "essentially 
> merging the new tips into the original pretending that the new tips 
> were not rebased but merged into upstream" as Phillip`s one, just 
> that we have additional temporary commits U1 and U2 (as per mentioned 
> "patch theory") :)

But if the old tips had been merged into upstream (resulting in the new
tips), then the merge bases would be *the old tips*.

I am still not sure for what scenarios Phillip's strategy is the same as
Sergey's (updated) one, as the former strategy can do completely without
temporary commits, and cannot introduce ambiguities when rebasing the
changes introduced by M (i.e. the "amendmendts" we talked about).

Ciao,
Dscho


Re: [RFC] Rebasing merges: a journey to the ultimate solution(RoadClear)

2018-03-11 Thread Igor Djordjevic
Hi Dscho,

On 11/03/2018 13:11, Johannes Schindelin wrote:
> 
> > > I did wonder about using 'pick ' for rebasing merges
> > > and keeping 'merge ...' for recreating them but I'm not sure if that
> > > is a good idea. It has the advantage that the user cannot specify the
> > > wrong parents for the merge to be rebased as 'git rebase' would work
> > > out if the parents have been rebased, but maybe it's a bit magical to
> > > use pick for merge commits. Also there isn't such a simple way for the
> > > user to go from 'rabase this merge' to 'recreate this merge' as they'd
> > > have to write the whole merge line themselves (though I guess
> > > something like emacs' git-rebase.el would be able to help with that)
> >
> > Since the ultimate commit hashes of newly rebased commits would be
> > unknown at the time of writing the todo file, I'm not sure how this
> > would work to specify the parents?
> 
> I agree with Phillip's follow-up that the `pick ` syntax
> would pose a problem, but for different reasons: We already tried it, with
> --preserve-merges, and it is just a really stupid syntax that does not
> allow the user even to reorder commits. Or drop commits (except at the
> very end of the todo list).

Hehe, please excuse me, but in the light of that other explicit (or 
not) parent mapping discussion[1], I would take a chance to be really 
sneaky here and say that being non-explicit "is just a really stupid 
syntax that does not allow the user even to reorder rebased merge 
parents. Or drop parents (except at the very end of the parent list)." ;)

[1] https://public-inbox.org/git/b329bb98-f9d6-3d51-2513-465aad2fa...@gmail.com/


Re: [RFC] Rebasing merges: a journey to the ultimate solution(RoadClear)

2018-03-11 Thread Johannes Schindelin
Hi Buga,

On Thu, 8 Mar 2018, Igor Djordjevic wrote:

> On 08/03/2018 13:16, Phillip Wood wrote:
> > 
> > > I did wonder about using 'pick ' for rebasing merges
> > > and keeping 'merge ...' for recreating them but I'm not sure if that
> > > is a good idea. It has the advantage that the user cannot specify
> > > the wrong parents for the merge to be rebased as 'git rebase' would
> > > work out if the parents have been rebased, but maybe it's a bit
> > > magical to use pick for merge commits. Also there isn't such a
> > > simple way for the user to go from 'rabase this merge' to 'recreate
> > > this merge' as they'd have to write the whole merge line themselves
> > > (though I guess something like emacs' git-rebase.el would be able to
> > > help with that)
> > 
> > Scrub that, it is too magical and I don't think it would work with
> > rearranged commits - it's making the --preserve-merges mistake all
> > over again. It's a shame to have 'merge' mean 'recreate the merge' and
> > 'rebase the merge' but I don't think there is an easy way round that.
> 
> I actually like `pick` for _rebasing_ merge commits, as `pick` is 
> already used for rebasing non-merge commits, too, so it feels natural.

Phillip is right, though: this would repeat the design mistake of
--preserve-merges.

We must not forget that the interactive mode is the target here, and that
the syntax (as well as the generated todo list) must allow for easy
modification. The `pick ` approach does not allow that, so we
cannot use it.

The `merge -R -C  ` approach is a lot better:
it offers the flexibility, without sacrificing the ease when not modifying
the todo list.

Ciao,
Dscho


Re: [RFC] Rebasing merges: a journey to the ultimate solution (Road Clear)

2018-03-11 Thread Johannes Schindelin
Hi Junio,

On Thu, 8 Mar 2018, Junio C Hamano wrote:

> Johannes Schindelin  writes:
> 
> >> Non-textual semantic conflicts are made (in the best case just once)
> >> as a separate commit on top of mechanical auto-merge whose focus is
> >> predictability (rather than cleverness) done by Git, and then that
> >> separate commit is kept outside the history.  When replaying these
> >> merges to rebuild the 'pu' branch, after resetting the tip to
> >> 'master', each topic is merged mechanically, and if such a fix-up
> >> commit is present, "cherry-pick --no-commit" applies it and then
> >> "commit --amend --no-edit" to adjust the merge.  I find it quite
> >> valuable to have a separate record of what "evil" non-mechanical
> >> adjustment was done, which I know won't be lost in the noise when
> >> these merges need to be redone daily or more often.
> >
> > So essentially, you have something that `git rerere` would have learned,
> > but as a commit?
> 
> You probably wouldn't be asking that if you read what you cut out
> when you quoted above ;-) 

Sorry to be so stupid, and no, the part I cut did not clarify it for me,
hence my question.

> There are a collection of cherry-pickable commits in hierarchy under
> refs/merge-fix.  They are indexed by the branch that will cause
> semantic conflicts that do not involve textual conflicts at all (the
> important implication of which is that 'rerere' fundamentally will
> not trigger to help resolving them) [*1*], and are used to create
> evil merge when a corresponding branch is merged to 'pu' (and down).

I see. My question was unclear, I agree. Please let me re-try:

So essentially, what your cherry-pick'able commits are is a way to store
what rerere would have stored (i.e. the set of merge conflicts together
with their resolution)?

If so, what tooling do you have to identify quickly what to cherry-pick,
given merge conflicts?

(I know I could spend some half hour to scour your `todo` branch and the
documentation you wrote about how you maintain Git, but you already know
the answer to this question, and it would probably be interesting to
others who are as eager to have better tools for handling merge conflicts
as I am, too, so it would save time overall to discuss this single
question here.)

> *1* One topic adds an extra parameter to read_index_from() that has
> been and still is defined in a file and merged to 'pu' first, while
> another topic adds a new callsite for the same function in a file
> that the former topic does not touch, hence a merge of the latter
> topic has no textual conflict to the file with a new callsite, but
> still needs adjusting.  That sort of think.

Okay, so it is even more intricate than `rerere`: it does not only store
the merge conflicts (by grace of using the "patch ID" of the merge
conflicts) together with their resolution, but instead it has some sort of
idea of what context needs to be met to require the resolution?

Color me intrigued.

If you really found a way to automate describing, say, that a function
signature changed in one branch, and a caller was introduced in another,
and that merging those requires adjusting the caller in a specific way, I
now *really* think that this should be made available more broadly.

Ciao,
Dscho


[PATCH 1/3] configure: fix a regression in PCRE v1 detection

2018-03-11 Thread Ævar Arnfjörð Bjarmason
Change the check for PCRE v1 to disable the --with-libpcre1 option if
the pcre_version() function can't be found in the pcre library. I
unintentionally changed this in my 94da9193a6 ("grep: add support for
PCRE v2", 2017-06-01) while renaming moving some variables.

The intent of this check ever since it was added in
a119f91e57 ("configure: Check for libpcre", 2011-05-09) is to
second-guess the user and turn off an explicitly provided
--with-libpcre if the library can't be found.

I don't think that behavior makes any sense, we shouldn't be
second-guessing the user with an auto-detection, but changing that
needs a bigger refactoring of this script, and only has marginal
benefits. So let's fix it to work as it was intended to work again.
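
(For context, the code path in question is only reached when PCRE v1 is
requested explicitly, e.g.:)

```sh
./configure --with-libpcre1
make
```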

Signed-off-by: Ævar Arnfjörð Bjarmason 
---
 configure.ac | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/configure.ac b/configure.ac
index 7f8415140f..41ceb2ac81 100644
--- a/configure.ac
+++ b/configure.ac
@@ -549,8 +549,8 @@ if test -n "$USE_LIBPCRE1"; then
 GIT_STASH_FLAGS($LIBPCREDIR)
 
 AC_CHECK_LIB([pcre], [pcre_version],
-[USE_LIBPCRE=YesPlease],
-[USE_LIBPCRE=])
+[USE_LIBPCRE1=YesPlease],
+[USE_LIBPCRE1=])
 
 GIT_UNSTASH_FLAGS($LIBPCREDIR)
 
-- 
2.15.1.424.g9478a66081



[PATCH 2/3] configure: detect redundant --with-libpcre & --with-libpcre1

2018-03-11 Thread Ævar Arnfjörð Bjarmason
The --with-libpcre option is a synonym for the --with-libpcre1 flag,
but the configure script allowed for redundantly specifying both.

Nothing broke as a result of this, but it's confusing, so let's
disallow it.

Signed-off-by: Ævar Arnfjörð Bjarmason 
---
 configure.ac | 4 
 1 file changed, 4 insertions(+)

diff --git a/configure.ac b/configure.ac
index 41ceb2ac81..d1b3b143c4 100644
--- a/configure.ac
+++ b/configure.ac
@@ -280,6 +280,10 @@ AS_HELP_STRING([--with-libpcre],[synonym for 
--with-libpcre1]),
 AC_ARG_WITH(libpcre1,
 AS_HELP_STRING([--with-libpcre1],[support Perl-compatible regexes via libpcre1 
(default is NO)])
 AS_HELP_STRING([],   [ARG can be also prefix for libpcre library and 
headers]),
+if test -n "$USE_LIBPCRE1"; then
+AC_MSG_ERROR([Only supply one of --with-libpcre or its synonym 
--with-libpcre1!])
+fi
+
 if test "$withval" = "no"; then
USE_LIBPCRE1=
 elif test "$withval" = "yes"; then
-- 
2.15.1.424.g9478a66081



[PATCH 3/3] Makefile: make USE_LIBPCRE=YesPlease mean v2, not v1

2018-03-11 Thread Ævar Arnfjörð Bjarmason
Change the USE_LIBPCRE flag from being an alias for USE_LIBPCRE1 to
being an alias for USE_LIBPCRE2.

When support for v2 was added in my 94da9193a6 ("grep: add support for
PCRE v2", 2017-06-01) the existing USE_LIBPCRE flag was left as
meaning v1, with a note that this would likely change in a future
release. That optional support for v2 first made it into Git version
2.14.0.

The PCRE v2 support has been shown to be stable, and the upstream PCRE
project is highly encouraging downstream users to move to v2, so it
makes sense to give packagers of Git who haven't heard the news about
PCRE v2 a further nudge to move to v2.
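
(Illustrating the intended behaviour after this change:)

```sh
make USE_LIBPCRE=YesPlease    # now builds against PCRE v2
make USE_LIBPCRE1=YesPlease   # still selects the legacy v1 library explicitly
```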

Signed-off-by: Ævar Arnfjörð Bjarmason 
---
 Makefile | 26 +-
 configure.ac | 26 +-
 2 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/Makefile b/Makefile
index de4b8f0c02..449ff71f45 100644
--- a/Makefile
+++ b/Makefile
@@ -29,10 +29,10 @@ all::
 # Perl-compatible regular expressions instead of standard or extended
 # POSIX regular expressions.
 #
-# Currently USE_LIBPCRE is a synonym for USE_LIBPCRE1, define
-# USE_LIBPCRE2 instead if you'd like to use version 2 of the PCRE
-# library. The USE_LIBPCRE flag will likely be changed to mean v2 by
-# default in future releases.
+# USE_LIBPCRE is a synonym for USE_LIBPCRE2, define USE_LIBPCRE1
+# instead if you'd like to use the legacy version 1 of the PCRE
+# library. Support for version 1 will likely be removed in some future
+# release of Git, as upstream has all but abandoned it.
 #
 # When using USE_LIBPCRE1, define NO_LIBPCRE1_JIT if the PCRE v1
 # library is compiled without --enable-jit. We will auto-detect
@@ -1164,13 +1164,18 @@ ifdef NO_LIBGEN_H
COMPAT_OBJS += compat/basename.o
 endif
 
-USE_LIBPCRE1 ?= $(USE_LIBPCRE)
+USE_LIBPCRE2 ?= $(USE_LIBPCRE)
 
-ifneq (,$(USE_LIBPCRE1))
-   ifdef USE_LIBPCRE2
-$(error Only set USE_LIBPCRE1 (or its alias USE_LIBPCRE) or USE_LIBPCRE2, not 
both!)
+ifneq (,$(USE_LIBPCRE2))
+   ifdef USE_LIBPCRE1
+$(error Only set USE_LIBPCRE2 (or its alias USE_LIBPCRE) or USE_LIBPCRE1, not 
both!)
endif
 
+   BASIC_CFLAGS += -DUSE_LIBPCRE2
+   EXTLIBS += -lpcre2-8
+endif
+
+ifdef USE_LIBPCRE1
BASIC_CFLAGS += -DUSE_LIBPCRE1
EXTLIBS += -lpcre
 
@@ -1179,11 +1184,6 @@ ifdef NO_LIBPCRE1_JIT
 endif
 endif
 
-ifdef USE_LIBPCRE2
-   BASIC_CFLAGS += -DUSE_LIBPCRE2
-   EXTLIBS += -lpcre2-8
-endif
-
 ifdef LIBPCREDIR
BASIC_CFLAGS += -I$(LIBPCREDIR)/include
EXTLIBS += -L$(LIBPCREDIR)/$(lib) $(CC_LD_DYNPATH)$(LIBPCREDIR)/$(lib)
diff --git a/configure.ac b/configure.ac
index d1b3b143c4..6f1fd9df35 100644
--- a/configure.ac
+++ b/configure.ac
@@ -254,25 +254,25 @@ GIT_PARSE_WITH([openssl]))
 # Perl-compatible regular expressions instead of standard or extended
 # POSIX regular expressions.
 #
-# Currently USE_LIBPCRE is a synonym for USE_LIBPCRE1, define
-# USE_LIBPCRE2 instead if you'd like to use version 2 of the PCRE
-# library. The USE_LIBPCRE flag will likely be changed to mean v2 by
-# default in future releases.
+# USE_LIBPCRE is a synonym for USE_LIBPCRE2, define USE_LIBPCRE1
+# instead if you'd like to use the legacy version 1 of the PCRE
+# library. Support for version 1 will likely be removed in some future
+# release of Git, as upstream has all but abandoned it.
 #
 # Define LIBPCREDIR=/foo/bar if your PCRE header and library files are in
 # /foo/bar/include and /foo/bar/lib directories.
 #
 AC_ARG_WITH(libpcre,
-AS_HELP_STRING([--with-libpcre],[synonym for --with-libpcre1]),
+AS_HELP_STRING([--with-libpcre],[synonym for --with-libpcre2]),
 if test "$withval" = "no"; then
-   USE_LIBPCRE1=
+   USE_LIBPCRE2=
 elif test "$withval" = "yes"; then
-   USE_LIBPCRE1=YesPlease
+   USE_LIBPCRE2=YesPlease
 else
-   USE_LIBPCRE1=YesPlease
+   USE_LIBPCRE2=YesPlease
LIBPCREDIR=$withval
AC_MSG_NOTICE([Setting LIBPCREDIR to $LIBPCREDIR])
-dnl USE_LIBPCRE1 can still be modified below, so don't substitute
+dnl USE_LIBPCRE2 can still be modified below, so don't substitute
 dnl it yet.
GIT_CONF_SUBST([LIBPCREDIR])
 fi)
@@ -280,10 +280,6 @@ AS_HELP_STRING([--with-libpcre],[synonym for 
--with-libpcre1]),
 AC_ARG_WITH(libpcre1,
 AS_HELP_STRING([--with-libpcre1],[support Perl-compatible regexes via libpcre1 
(default is NO)])
 AS_HELP_STRING([],   [ARG can be also prefix for libpcre library and 
headers]),
-if test -n "$USE_LIBPCRE1"; then
-AC_MSG_ERROR([Only supply one of --with-libpcre or its synonym 
--with-libpcre1!])
-fi
-
 if test "$withval" = "no"; then
USE_LIBPCRE1=
 elif test "$withval" = "yes"; then
@@ -300,6 +296,10 @@ AS_HELP_STRING([],   [ARG can be also prefix for 
libpcre library and hea
 AC_ARG_WITH(libpcre2,
 AS_HELP_STRING([--with-libpcre2],[support Perl-compatible regexes via libpcre2 
(default is NO)])
 AS_HELP_STRING([], 

[PATCH 0/3] Switch the default PCRE from v1 to v2 + configure fixes

2018-03-11 Thread Ævar Arnfjörð Bjarmason
This small series makes USE_LIBPCRE=YesPlease mean
USE_LIBPCRE2=YesPlease, instead of USE_LIBPCRE1=YesPlease as it does
now. Along the way I fixed a couple of minor issues in the PCRE
detection in the autoconf script.

Ævar Arnfjörð Bjarmason (3):
  configure: fix a regression in PCRE v1 detection
  configure: detect redundant --with-libpcre & --with-libpcre1
  Makefile: make USE_LIBPCRE=YesPlease mean v2, not v1

 Makefile | 26 +-
 configure.ac | 26 +++---
 2 files changed, 28 insertions(+), 24 deletions(-)

-- 
2.15.1.424.g9478a66081



Re: [RFC] Rebasing merges: a journey to the ultimate solution(RoadClear)

2018-03-11 Thread Igor Djordjevic
Hi Dscho,

On 11/03/2018 13:00, Johannes Schindelin wrote:
> 
> > I actually like `pick` for _rebasing_ merge commits, as `pick` is 
> > already used for rebasing non-merge commits, too, so it feels natural.
> 
> Phillip is right, though: this would repeat the design mistake of
> --preserve-merges.
> 
> We must not forget that the interactive mode is the target here, and that
> the syntax (as well as the generated todo list) must allow for easy
> modification. The `pick ` approach does not allow that, so we
> cannot use it.
> 
> The `merge -R -C  ` approach is a lot better:
> it offers the flexibility, without sacrificing the ease when not modifying
> the todo list.

Eh, I`m afraid the quote you took is missing the rest of its
(important) context, where I mentioned the already proposed format for
`pick` in that other subthread[1], including other parameters besides
the merge commit to pick, such as that parent mapping.

I agree with both of you that `pick ` is inflexible 
(not to say just plain wrong), but I never thought about it like that.

If we extract the previously mentioned explicit old:new merge
parameter mapping into a separate discussion point, what we`re
eventually left with is just replacing this:

merge -R -C  

... with this:

pick  


That is what I had in mind; it seems possibly more straightforward,
and beautifully aligned with the previously existing (and well known)
`rebase` terminology.

Not to mention this would make it possible to use other `rebase -i`
todo list commands, too - for example, if you want to amend/edit a
merge commit after it is rebased, you would write:

edit  

..., whereas if you would simply like to reword its commit message,
it would be just:

reword  


Even `squash` and `fixup` could have their place in combination with
a (to be rebased) merge commit, albeit in pretty exotic rebases, so
these could probably just be disallowed - for the time being, at
least.

The real power would be buried in the implementation, which would
learn to rebase merge commits, so the user is left with a very familiar
interface, slightly adapted to accommodate the somewhat different
nature of a merge commit in comparison to an ordinary one, and to allow
a bit more interactive rebase functionality - but it would pretty much
stay the same, without even a need to learn about the new `merge`,
`-R`, `-C`, and so on.

Yes, those would have their purpose, but for real merging (creating
new merges, or recreating old ones), not necessarily for merge
rebasing.

With the current state of `merge -R -C ...` (that `-R` being the
culprit), it kind of feels like we`re now trying to bolt "rebase
merges" functionality onto a totally different one (recreate merges,
serving a different purpose), making them both needlessly dependent on
each other, further complicating the user interface, and making it more
confusing and less tunable to each separate functionality`s needs
(rebase vs. recreate).

I guess I`m the one to pretty much blame here, too, as I really 
wanted `--recreate-merges` to handle "rebase merges" better, only to 
later realize it might not be the best tool for the job, and that a 
more separate approach would be better (at least not through the same 
`merge` todo list command)...

Regards, Buga

[1] https://public-inbox.org/git/f3872fb9-01bc-b2f1-aee9-cfc0e4db7...@gmail.com/


Re: [RFC] Rebasing merges: a journey to the ultimate solution(RoadClear)

2018-03-11 Thread Johannes Schindelin
Hi Jake,

On Thu, 8 Mar 2018, Jacob Keller wrote:

> On Thu, Mar 8, 2018 at 3:20 AM, Phillip Wood  
> wrote:
> > I did wonder about using 'pick ' for rebasing merges
> > and keeping 'merge ...' for recreating them but I'm not sure if that
> > is a good idea. It has the advantage that the user cannot specify the
> > wrong parents for the merge to be rebased as 'git rebase' would work
> > out if the parents have been rebased, but maybe it's a bit magical to
> > use pick for merge commits. Also there isn't such a simple way for the
> > user to go from 'rabase this merge' to 'recreate this merge' as they'd
> > have to write the whole merge line themselves (though I guess
> > something like emacs' git-rebase.el would be able to help with that)
> 
> Since the ultimate commit hashes of newly rebased commits would be
> unknown at the time of writing the todo file, I'm not sure how this
> would work to specify the parents?

I agree with Phillip's follow-up that the `pick ` syntax
would pose a problem, but for different reasons: We already tried it, with
--preserve-merges, and it is just a really stupid syntax that does not
allow the user even to reorder commits. Or drop commits (except at the
very end of the todo list).

As to your concern: The ultimate commit hashes do not have to be known
until the merge commit is rebased. The current approach is to use labels
for them (`label `), which Simply Works.
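
For illustration (with made-up hashes and a made-up branch name), a
generated todo list would then look roughly like this:

    label onto

    # Branch download-button
    reset onto
    pick 89abcde Add download button
    label download-button

    reset onto
    merge -R -C deadbee download-button # Merge branch 'download-button'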

Ciao,
Dscho


Re: [RFC] Rebasing merges: a journey to the ultimate solution(RoadClear)

2018-03-11 Thread Johannes Schindelin
Hi Buga,

On Thu, 8 Mar 2018, Igor Djordjevic wrote:

> On 08/03/2018 16:16, Igor Djordjevic wrote:
> > 
> > > Unless we reimplement the octopus merge (which works quite a bit
> > > differently from the "rebase merge commit" strategy, even if it is
> > > incremental, too), which has its own challenges: if there are merge
> > > conflicts before merging the last MERGE_HEAD, the octopus merge will exit
> > > with status 2, telling you "Should not be doing an octopus.". While we
> > > will want to keep merge conflict markers and continue with the "rebase the
> > > original merge commit" strategy.
> > >
> > > [...]
> > 
> > The thing is, in my opinion, as long as we are _rebasing_, you can`t 
> > pick any merge strategy, as it doesn`t really make much sense. If you 
> > do want a specific strategy, than that`s _recreating_ a merge, and it 
> > goes fine with what you already have for `--recreate-merges`.
> > 
> > On merge rebasing, the underlying strategy we decide to use is just an 
> > implementation detail, picking the one that works best (or the only 
> > one that works, even), user should have nothing to do with it.
> 
> Just to add, if not already assumable, that I think we should stop 
> and let user react on conflicts on each of the "rebase the original 
> commit" strategy steps (rebase first parent, rebase second parent... 
> merge parents).

I am not sure about that. Actually, I am pretty certain that we should
imitate the recursive merge, which "accumulates" merge conflicts and only
presents the final result, possibly with nested merge conflicts.

In practice, we see that rarely, and more often than not, in those cases
the user wants to re-do the merge differently, anyway.

In the scenario we discuss here, I would wager a bet that the user
encountering nested merge conflicts would most likely see that the rebase
tried to rebase a merge, then `git reset --hard` and re-do the merge
manually (i.e. *recreate* rather than *rebase*, most likely with
(non-nested) merge conflicts).

> I guess this stresses not using real "octopus merge" strategy even in 
> case where we`re rebasing octopus merge commit even more (and aligns 
> nicely with what you seem to expect already).

My mistake, I should have pointed you to my implementation:

https://github.com/git/git/commit/d41a29ceb61ff445efd0f97a836f381de57c5a41

The full thicket of branches can be found here:

https://github.com/git/git/compare/master...dscho:sequencer-shears

I chose to make this an add-on patch after adding support for octopus
merges because it actually makes it easier to think about "rebase merge
commits" independently of the number of merge parents.

Ciao,
Dscho


Re: [RFC] Rebasing merges: a journey to the ultimate solution(RoadClear)

2018-03-11 Thread Igor Djordjevic
Hi Dscho,

On 11/03/2018 13:08, Johannes Schindelin wrote:
> 
> > Hmm, funny enough, `pick ` was something I though about
> > originally, too, feeling that it might make more sense in terms on
> > what`s really going on, but I guess I wanted it incorporated into
> > `--recreate-merges` too much that I tried really hard to fit it in,
> > without changing it much :/
> 
> The `pick ` syntax is too limited to allow reordering, let
> alone changing the parents.

I agree, the `pick ` syntax alone was never what I had in
mind, so it`s missing further context here, touched on in that other
subthread[1]. My fault, sorry for the confusion.

> >   pick  :HEAD 
> > :
> 
> I do not really like it, as it makes things a ton less intuitive. If you
> did not know about this here discussion, and you did not read the manual
> (and let's face it: a UI that does not require users to read the manual is
> vastly superior to a UI that does), and you encountered this command:
> 
>   merge deadbeef cafecafe:download-button
> 
> what would you think those parameters would mean?
> 
> Granted, encountering
> 
>   merge -R -C deadbeef download-button # Merge branch 'download-button'
> 
> is still not *quite* as intuitive as I would wish. Although, to be honest,
> if I encountered this, I would think that I should probably leave the -R
> and the -C deadbeef alone, and that I could change what is getting merged
> by changing the `download-button` parameter.

Agreed, encountering the mapping is slightly more complicated, but I
would argue it`s much more powerful at the same time, too, and thus
pretty much worth it.

Without it, actually, it seems like we`re repeating the mistake of 
`--preserve-merges`, where we`re assuming too much (order of new and 
old parents being the same, and I guess number of them, too).

Oh, and while we`re still discussing in terms of the `merge` command,
using the (elsewhere mentioned[1]) `pick` instead might be even less
non-intuitive, as we`re not married to `merge` semantics any more:

pick deadbeef cafecafe:download-button


And maybe calling it "non-intuitive" is unfair; I guess it would
rather be "not familiar yet", as is the case with any new
functionality, let alone a very powerful one, where getting a clue
about what it does at the beginning could do wonders later.

Sacrificing that power for a bit of perceived simplicity, where it
actually assumes stuff on its own (trying to stay simple for the
user), doesn`t seem like a good way to go in the long run.

Sometimes one just needs to read the manual, and I don`t really think 
this is a ton complicated, but just something we didn`t really have 
before (real merge rebasing), so it requires a moment to grasp the 
concept.

But I`m still not sure if there isn`t a better way to present the
explicit mapping, just that : seemed like the most straightforward
way to get the point across for the purpose of discussing it.

And I`m not reluctant to simplifying the user interface, or even to
dropping the explicit mapping altogether - I`m just for carefully
measuring what we would lose without the explicit mapping: think
flexibility, but also ease and correctness of implementation, as we
would need to guess the old merge parents and which new ones they
should correspond to.

> > p.s. Are we moving towards `--rebase-merges` I mentioned in that 
> > other topic[1], as an add-on series after `--recreate-merges` hits 
> > the mainstream (as-is)...? :P
> 
> That's an interesting question. One that I do not want to answer alone,
> but I would be in favor of `--rebase-merges` as it is IMHO a much better
> name for what this option is all about.

By saying you`re in favor of `--rebase-merges`, do you mean as a
separate option, alongside `--recreate-merges` (once that series
lands)?

That does seem like the cleanest, most intuitive and straightforward
solution. Depending on the option you provide (recreate vs rebase), the
todo list would be populated accordingly by default - but the important
thing is that the "todo list parser" would know how to parse both, so
one can still adapt the todo list to both recreate (some) and rebase
(some other) merges at the same time.

Of course, this is only once `--rebase-merges` gets implemented, too,
but as we have waited for it for so long, it won`t hurt to wait a bit
more and possibly do it more properly, rather than rush it now and make
a confusing user interface, needlessly welding two functionalities
together (rebase vs. recreate).

But I guess you already knew my thoughts, so let`s see what other`s 
think, too ;)

Regards, Buga

[1] https://public-inbox.org/git/f4e6237a-84dc-1aa8-150d-041806e24...@gmail.com/


Re: [RFC] Rebasing merges: a journey to the ultimate solution(RoadClear)

2018-03-11 Thread Johannes Schindelin
Hi Buga,

On Thu, 8 Mar 2018, Igor Djordjevic wrote:

> On 08/03/2018 12:20, Phillip Wood wrote:
> > 
> > I did wonder about using 'pick ' for rebasing merges
> > and keeping 'merge ...' for recreating them but I'm not sure if that
> > is a good idea. It has the advantage that the user cannot specify the
> > wrong parents for the merge to be rebased as 'git rebase' would work
> > out if the parents have been rebased, but maybe it's a bit magical to
> > use pick for merge commits. Also there isn't such a simple way for the
> > user to go from 'rabase this merge' to 'recreate this merge' as they'd
> > have to write the whole merge line themselves (though I guess
> > something like emacs' git-rebase.el would be able to help with that)
> 
> Hmm, funny enough, `pick ` was something I though about
> originally, too, feeling that it might make more sense in terms on
> what`s really going on, but I guess I wanted it incorporated into
> `--recreate-merges` too much that I tried really hard to fit it in,
> without changing it much :/

The `pick ` syntax is too limited to allow reordering, let
alone changing the parents.

> And now that I said this in a previous reply:
> 
> > The thing is, in my opinion, as long as we are _rebasing_, you can`t 
> > pick any merge strategy, as it doesn`t really make much sense. If you 
> > do want a specific strategy, than that`s _recreating_ a merge, and it 
> > goes fine with what you already have for `--recreate-merges`.
> > 
> > On merge rebasing, the underline strategy we decide to use is just an 
> > implementation detail, picking the one that works best (or the only 
> > one that works, even), user should have nothing to do with it.
> 
> The difference between "rebase merge commit" and "recreate merge 
> commit" might starting to be more evident.

True.

> So... I might actually go for this one now. And (trying to stick with 
> explicit mappings, still :P), now that we`re not married to `merge` 
> expectations a user may already have, maybe a format like this:
> 
>   pick  :HEAD 
> :

I do not really like it, as it makes things a ton less intuitive. If you
did not know about this here discussion, and you did not read the manual
(and let's face it: a UI that does not require users to read the manual is
vastly superior to a UI that does), and you encountered this command:

merge deadbeef cafecafe:download-button

what would you think those parameters would mean?

Granted, encountering

merge -R -C deadbeef download-button # Merge branch 'download-button'

is still not *quite* as intuitive as I would wish. Although, to be honest,
if I encountered this, I would think that I should probably leave the -R
and the -C deadbeef alone, and that I could change what is getting merged
by changing the `download-button` parameter.

> p.s. Are we moving towards `--rebase-merges` I mentioned in that 
> other topic[1], as an add-on series after `--recreate-merges` hits 
> the mainstream (as-is)...? :P

That's an interesting question. One that I do not want to answer alone,
but I would be in favor of `--rebase-merges` as it is IMHO a much better
name for what this option is all about.

Other opinions?

Ciao,
Dscho


Re: [RFC v2] Rebasing merges: a journey to the ultimate solution (Road Clear)

2018-03-11 Thread Johannes Schindelin
Hi Buga,

On Thu, 8 Mar 2018, Igor Djordjevic wrote:

> On 07/03/2018 15:08, Johannes Schindelin wrote:
> > 
> > > > Didn't we settle on Phillip's "perform successive three-way merges
> > > > between the original merge commit and the new tips with the old tips
> > > > as base" strategy?
> > >
> > > It seems you did, dunno exactly why.
> > 
> > That is not true. You make it sound like I was the only one who liked
> > this, and not Phillip and Buga, too.
> 
> For myself, I do actually favor Sergey`s approach in general, but 
> _implemented_ through what Phillip described (or a mixture of both, to 
> be precise). But, let me explain... :)

So as you explained later in this sub-thread, Sergey's approach is
essentially the same as Phillip's.

I still do not understand Sergey's approach on a fundamental level. I
mean, I can follow his instructions how to implement his algorithm, but it
is as if I had a blindfold on and somebody guided me through a maze: I
understand *what* I am supposed to do, but I have no clue *why*.

And admittedly, I got very frustrated when a document was thrown my way
that is too long to read in one sitting, and all of my attempts at getting
clear and comprehensible answers to specific questions were met with "go
and read that document, I am sure you will understand then".

For something as fundamental to my daily workflow as an interactive rebase
(*especially* when trying to maintain the branch topology), this is no
good at all.

Since you already confirmed that there is essentially no difference
between the two approaches, I will simply go with the one I understand, in
particular I understand *why* it works.

But let's read on, maybe I will change my mind based on your explanations
(which do answer my questions, thank you so much for that)...

> > > The main problem with this decision is that we still don't see how
> > > and when to stop for user amendment using this method. OTOH, the
> > > original has this issue carefully discussed.
> > 
> > Why would we want to stop, unless there are merge conflicts?
> 
> Because we can reliably know that something "unusual" happened - and by
> that I don`t necessarily mean "wrong", but just might be worth user
> inspection.

We have a similar conundrum in recursive merges. Remember how multiple
merge bases are merged recursively? There can be merge conflicts, too, in
*any* of the individual merges involved, and indeed, there are (under
relatively rare circumstances).

Since we already faced that problem, and we already answered it by
presenting possibly nested merge conflicts, I am in strong favor of
keeping our new scenario consistent: present possibly-nested merge
conflicts.

As far as I understand, one of the arguments in favor of the current
approach was: there is no good way to tell the user where they are, and
how to continue from there. So better just to continue and present the
user with the entire set of conflicts, and have an obvious way out.

> For example, a situation like this (M is made on A3 with `-s ours`,
> obsoleting Bx commits):
> 
> (1) ---X8--X9 (master)
>|\
>| A1---A2---A3
>| \
>|  M (topic)
>| /
>\-B1---B2---B3
> 
> ... where we want to rebase M onto X9 is what I would call "usual 
> stuff", but this situation (M is still made on A3 with `-s ours`, 
> obsoleting Bx commits, but note cherry-picked B2'):
> 
> (2) ---X8--B2'--X9 (master)
>|\
>| A1---A2---A3
>| \
>|  M (topic)
>| /
>\-B1---B2---B3
> 
> ... where we still want to rebase M onto X9 is what we might consider
> "unusual", because we noticed that something that shouldn't be part
> of the rebased merge commit (due to the previous `-s ours`) actually got
> in there (due to the later cherry-pick), and we just want the user to
> check and confirm.

We already have those scenarios when performing a regular interactive
rebase, where a patch was already applied upstream. In the normal case,
the user is not even shown B2, thanks to the --cherry-pick option used in
generating the todo list.
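
As a rough sketch (not the exact invocation rebase uses), the effect of
that option is the same as rev-list's patch-equivalence filtering:

```sh
# Commits on topic whose patch already exists on master (like B2 once
# B2' has been cherry-picked there) are dropped from the output, and
# hence never make it into the todo list.
git rev-list --cherry-pick --right-only --no-merges master...topic
```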

Granted, in some cases --cherry-pick does not detect that, and then we
generate a todo list including B2, and when that patch is applied, the
interactive rebase stops, saying that there are no changes to be
committed.

And this behavior is exactly the same with --recreate-merges!

So I do not think that it would make sense to bother the user *again* when
rebasing the merge commit.

If there are merge conflicts, yes, we will have to. If there are none
(even if your U1' != U2'), it would be outright annoying to stop.

> > > "rebase sides of the merge commit and then three-way merge them back
> > > using original merge commit as base"
> > 
> > And that is also wrong, as I had proved already! Only Buga's addition
> > made it robust against dropping/modifying commits, and that addition
> > also makes it more complicated.
> 
> No, this 

Re: [GSoC] Questions about "convert interactive rebase to C"

2018-03-11 Thread Alban Gruin
On 07/03/2018 at 22:56, Alban Gruin wrote:
> Hi,
> 
> I was reading the email related to the "convert interactive rebase to C"
> idea[1], and I have a few questions about it:

Hi,

I’m writing to you again to ask for some more information about the
"convert interactive rebase to C" project idea.

Regards, Alban.


Re: [Feature request] Add config option to gpgsign IFF key is present

2018-03-11 Thread brian m. carlson
On Sat, Mar 10, 2018 at 03:28:43PM +, NELSON, JOSHUA Y wrote:
> Currently, `commit.gpgsign` allows you to give either 'true' or 'false' as a 
> value. If the key is not present, commits will fail:
> 
> ```sh
> $ git commit -m "example"
> error: gpg failed to sign the data
> fatal: failed to write commit object
> ```
> 
> I like to reuse my config file across several machines, some of which do not 
> have my GPG key. Would it be possible to add an option to sign the commit 
> only if the private key for `user.signingkey` is present? It could be named 
> something like `commit.gpgsign=default-yes`.

Unfortunately, this isn't always possible.  You can forward the Unix
socket for the agent over an SSH connection, at which point the remote
machine has the ability to sign, but the gpg client doesn't list those
as existing secret keys in its output (because technically, those keys
don't exist on the remote system).  I use this technique at work, for
example, to sign things on my development VM.
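
For the curious, the setup looks roughly like this; the host name and
socket paths are only examples (gpgconf --list-dirs reports the real
paths on each side):

```sh
# Forward the local gpg-agent's "extra" socket to the remote machine;
# gpg there can then sign with keys that only exist locally.
ssh -R /run/user/1000/gnupg/S.gpg-agent:/run/user/1000/gnupg/S.gpg-agent.extra dev-vm
```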

It might be possible to make the failure of the signing operation not be
fatal in this case, although that could cause people to fail to sign due
to transient failures even when the key is present on the system.

I usually handle this by storing my main configuration in ~/.gitconfig
and on machines where I have a key, additionally having a
~/.config/git/config file that contains the commit.gpgsign entry.
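
In other words, a sketch of that layout: ~/.gitconfig (shared everywhere)
never sets commit.gpgsign, and only machines that actually have the key
get the extra file:

```sh
git config --file ~/.config/git/config commit.gpgsign true
```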
-- 
brian m. carlson / brian with sandals: Houston, Texas, US
https://www.crustytoothpaste.net/~bmc | My opinion only
OpenPGP: https://keybase.io/bk2204

