On Thu, 3 Oct 2013 06:57:33 -0700 (PDT)
> > > I have defined the following bash script:
> > > #!/bin/bash
> > > export USER=$1
> > > /bin/bash
> > >
> > > In the authorized_keys file I call this script with a parameter
> > > naming the user who will be logging in.
> > There are commits and I'm not receiving any errors while cloning,
> > it just sits with a blinking cursor and no prompt.
> Yes, I suppose that the update hook wouldn't be run here. However I
> fail to see why setting an environment variable is causing the clone
> operation to hang. I suppose the process I'm going about isn't the
> appropriate one for this task, so I will go about trying to revert my
If what you're calling from the authorized_keys file is that script I
quoted, this would explain everything: `bash` is spawned by your script,
and it just sits there sleeping on its stdin and writing nothing to its
stdout and stderr; the `git-send-pack` process on your local machine
perceives this as a network delay as it expects a process which
actually communicates with it via its standard channels (tunneled by
means of SSH).
Log into your server and run
git receive-pack /path/to/a/repo
and you'll see it dumps a few lines to its stdout and then waits for
you to respond; your local `git-send-pack` finds itself in the same
situation: it waits for the initial data from the peer, which never
arrives. See below for more discussion.
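You can watch this locally; a quick sketch (throwaway repository, and
stdin closed so the command does not sit waiting forever the way an
interactive run would):

```shell
# Create a scratch bare repository and run receive-pack against it.
# It immediately prints its ref advertisement to stdout, then blocks
# reading stdin for the client's commands; with stdin at EOF it gives
# up, which is why the `|| true` guard is there.
repo=$(mktemp -d)/demo.git
git init -q --bare "$repo"
git receive-pack "$repo" </dev/null || true
```

The pkt-line output you see (for an empty repository it is a single
`capabilities^{}` advertisement line) is exactly the initial data your
local `git-send-pack` was waiting for and never got from plain `bash`.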
> > So if you want to subvert git-receive-pack (that's what
> > you're trying to do, as I understand) then you should employ forced
> > commands in your authorized_keys file, and that forced command
> > should be a script which ultimately calls $SSH_ORIGINAL_COMMAND
> > after performing the required setup. Refer to the authorized_keys
> > manual page.
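A minimal sketch of such a forced-command wrapper (the wrapper path,
the key material, and the REPO_USER variable name are all illustrative;
the dry run at the bottom just substitutes an `echo` for the real
command):

```shell
#!/bin/sh
# Sketch of a forced-command wrapper. In authorized_keys the key line
# would read, all on one line:
#   command="/usr/local/bin/git-wrap alice",no-pty,no-port-forwarding ssh-rsa AAAA...

git_wrap() {
    REPO_USER="$1"                  # per-key user name, for later hooks
    export REPO_USER
    # sshd exports the command the client actually requested, e.g.
    #   git-receive-pack '/srv/git/demo.git'
    # A real wrapper would `exec` it so git's pack protocol runs over
    # the genuine stdin/stdout; we use a subshell here for the demo.
    sh -c "${SSH_ORIGINAL_COMMAND:?no command requested}"
}

# Dry run: pretend the client asked for a receive-pack.
SSH_ORIGINAL_COMMAND="would run: git-receive-pack /srv/git/demo.git"
SSH_ORIGINAL_COMMAND="echo $SSH_ORIGINAL_COMMAND"
export SSH_ORIGINAL_COMMAND
git_wrap alice
```

The key point is the last line of `git_wrap`: unlike your original
script, it hands control back to the command the client asked SSH to
run, so `git-send-pack` gets a real peer to talk to.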
> Ah I believe I'm starting to understand. By executing
> the server will execute the process that I interrupted with my
> script? That's certainly helpful to know!
Yes, just not interrupted but rather replaced/subverted.
That's why nothing happened when you replaced the process
`git-send-pack` expected to communicate with, with plain `bash`.
> As an aside, the ultimate purpose of this is to work in accordance
> with a program I'm developing using jgit that will update my clients
> software by keeping an up to date copy of the compiled code in their
> local repo. Is this something that people in the industry do, am I
> opening myself up to vulnerabilities, or is this overly complicated?
If it's okay for your users to update by force-fetching then why not?
SSH/HTTPS access can be made quite secure.
On the other hand, does that compiled code take the form of a huge number
of files? Why not just rely on HTTP? I mean, each time a new binary
is built, it's given a unique name, it's uploaded to the client's
directory made available to the web server, and a symlink with a
"canonical" (I mean version-less) name is updated to point to the new
binary. The client could then just issue an HTTP HEAD request to check
the content's Last-Modified date and fetch the file if it has changed.
Or are you trying to minimize the amount of data to be transferred?
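The server-side publishing step described above could be sketched like
this (all paths and file names invented):

```shell
# Each build gets a unique name, and the version-less "canonical"
# name is a symlink flipped atomically to point at the new build.
set -e
webroot=$(mktemp -d)        # stand-in for the client's web directory
printf 'binary payload 2.0.1\n' > "$webroot/app-2.0.1.bin"
ln -sfn app-2.0.1.bin "$webroot/app-latest.bin"
readlink "$webroot/app-latest.bin"
```

On the client side, `curl -z app-latest.bin -o app-latest.bin <URL>`
does roughly the conditional fetch described above: curl sends an
If-Modified-Since header based on the local file's mtime and downloads
only when the server reports newer content.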
> This is why I'm trying to disable pushes by anyone except the
> compilation server using the update hook.
Okay, you could also take another route and look at serving your Git
repos *to your clients* via HTTPS, and just deny pushes for everyone by
accordingly configuring your web server. Your build daemons could
access the repos using SSH or via a different HTTP daemon's virtualhost
having a different permission policy.
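If you go the smart-HTTP route with `git-http-backend` (the CGI server
shipped with Git), the deny-pushes policy can also be made explicit
per repository via its documented config knobs; a sketch, using a
throwaway bare repo:

```shell
# Per-repository knobs honored by git-http-backend. The clients' vhost
# serves repos configured like this; the build daemons' vhost (or SSH)
# would simply not apply the receivepack=false setting.
cd "$(mktemp -d)" && git init -q --bare .
git config http.receivepack false   # never accept pushes on this vhost
git config http.uploadpack true     # clones/fetches stay enabled (the default)
```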
Another route to explore might be providing Git bundles (again, via
plain old HTTP) created for the range of commits between the
now-current and the previously current versions. A Git bundle is a
self-contained single file which could be `git fetch`'ed from on the
client's side. Read the `git-bundle` manual page if interested.
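For example (version tags and the bundle name are made up; a throwaway
repository with empty commits keeps the demo self-contained):

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q repo && cd repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'release 1.4'
git tag v1.4
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'release 1.5'
git tag v1.5
# One self-contained file covering everything between the two releases:
git bundle create ../updates.bundle v1.4..v1.5
git bundle verify ../updates.bundle >/dev/null && echo bundle ok
```

A client repository that already has v1.4 could then download the file
over plain HTTP and run `git fetch updates.bundle v1.5` against it.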
> I haven't done much research yet on this next bullet point, but is it
> possible to limit customer specific code? IE, If I have a repository
> with 100 folders and 100 users, can I make it so that each user only
> has access to one of these folders on a clone or pull?
No, almost definitely not. Contrary to certain tools (namely
Subversion), Git does not put any special meaning into the *names* of
files and directories. The Git developers even state clearly that "Git
tracks content, not files". Git stores only three types of objects in
its database: blobs (the contents of files), trees (which refer to
blobs and other trees by their SHA-1 names *and* also store their
"filesystem" names), and commits, each referring to exactly one tree.
Each tree object represents the state of a single directory; a commit
refers to a tree object representing the state of the root project
directory. Note that "filesystem names" of files and directories are not
first-class objects -- they're just metadata. Hence you might record a
commit then rename every single directory and file in your project and
record another commit -- Git *will not* in any way link "new filesystem
names" of your objects to their respective names in the previous commit.
Now you should understand that implementing any policy for updates
based on filesystem names is very hard, if possible at all. In the
receiving repository, it requires either a mechanism which a) ensures
a given set of pathnames is present in each commit and b) verifies
that no object referenced by those pathnames has been modified; or a
more sophisticated mechanism which tracks renames and maintains some
sort of a DB mapping changed pathnames to those on which your policy
has been defined.
I would say that it's better not to go this route and just have a
per-user repository with an "all or nothing" access model.
You received this message because you are subscribed to the Google Groups "Git
for human beings" group.