On Tuesday 01 February 2005 11:35 pm, John Myers
<[EMAIL PROTECTED]> wrote:
> On Tuesday 01 February 2005 16:03, Boyd Stephen Smith Jr. wrote:
> > [Not in reply to any one message in particular, so not quoting anyone.]
> >
> > Probably the best way to do this is to provide a drop-in replacement
> > for make w/ an optional patch to emerge/ebuild.
>
> Probably, but in the shorter term, a patch would be more likely, as
> rewriting make, making it bug-compatible, would be a project all its
> own.
You don't have to make it bug-compatible, just POSIX/SUS compliant. For
software using the configure/auto* tools (pretty much anything C/C++ in
GNU, and a lot of other devs use them, too), the generated makefiles are
designed to work with many different POSIX/SUS makes. In particular, it
doesn't have to be gmake.
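For concreteness, here's a sketch of what sticking to POSIX/SUS make looks
like (the file names are made up); a makefile limited to these constructs
should build under any conforming make, not just gmake:

```make
# Sketch: POSIX constructs only -- no GNU-only features like
# $(wildcard), pattern rules (%.o: %.c), or := assignment.
.POSIX:
CC = cc
CFLAGS = -O
OBJS = main.o util.o

prog: $(OBJS)
	$(CC) $(CFLAGS) -o prog $(OBJS)

.c.o:
	$(CC) $(CFLAGS) -c $<
```

The `.c.o` suffix rule and `$<` are the portable spellings of what gmake
users usually write as a pattern rule.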
Writing "make" is not trivial, but it might be easier than getting a patch
for a new project added into "old" projects like gcc and gmake.
> > Now that's not easy, but it'd probably be easier than asking the gmake
> > and gcc teams to call back into your daemon and/or making and
> > maintaining your own patches to these (and other) projects and having
> > users recompile them to support your stuff.
>
> A patch to GCC isn't strictly necessary, though. A wrapper could be
> created to run gcc, or it could be done by make just the same as for
> other processes that aren't progress-aware. I just thought that if it
> was patched, it could tell what stage of compilation it was on
> (preprocessing, compiling, assembling, etc.).
> Also, how often does make actually have any major architectural changes?
> I don't think a patch to make would be all that hard to maintain.
While the wrapper could tell something about what it's doing, based on the
command-line, it can't tell you how many times gcc (for example) will be
invoked in the future. A wrapper for make won't be able to do that
either, because you have to parse the makefile to get an accurate count.
I simply don't see a place where a simple wrapper is going to give you the
information you need. You might be able to patch make (etc.) to give you
the correct information. That may be (probably is) easier than writing a
POSIX/SUS make.
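To sketch the point: a wrapper can classify what *this* gcc invocation is
doing from its flags, but nothing on the command line says how many
invocations are still to come. (The function name and reporting scheme
below are invented for illustration.)

```shell
#!/bin/sh
# Hypothetical gcc-wrapper sketch: classify the current invocation's
# stage from its flags. Counting *future* invocations would require
# parsing the makefile, which no per-invocation wrapper can do.
classify_stage() {
  case " $* " in
    *" -E "*) echo preprocess ;;
    *" -S "*) echo compile-to-asm ;;
    *" -c "*) echo compile ;;
    *)        echo link ;;
  esac
}

# A real wrapper would report the stage to the daemon, then:
#   exec gcc "$@"
```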
By scanning the filesystem and performing a few guesses, you could get an
estimate, but don't we already have a tool that does that?
AFAIK, gmake is fairly stable. I have a handful of ways it could be
improved, but there's only so much you really want make to do, and gmake
follows POSIX/SUS fairly well (plus it has a bunch of other features).
> > * Daemon should be light weight, and separate from the UI.
>
> Umm, yeah. That was exactly my idea. The UI would be a completely
> separate component, related only by a common communication protocol.
> Same with the clients. The daemon just keeps track.
Yeah, I wasn't sure how much of this you had already covered. I know you
had a daemon process, I wasn't sure how much it was responsible for.
> > * UI could provide different looks:
> > - Progress bar tree (similar to a process tree)
> > - Progress bar stack (for purely/mostly sequential tasks)
> > - Nested progress bars (they'd fill up like eDonkey or GetRight's
> > progress bars; not strictly reading order)
>
> Of course. The author of a viewer could make the progress bars any way
> they want. As an example, I might even write a web-based viewer.
Sure, these are just ideas for the UI.
> > * Programs that report to the daemon could also act like UI agents; in
> > the case of make it could show a progress bar that's really the "sum"
> > progress of it and all its child makes.
>
> Maybe. This would be, in some ways, harder. I would go more for just
> suppressing console output and forking to the background.
I wouldn't want to fork to the background. Heck, for e{merge,build} I
wouldn't want output suppressed [but I would want their called make output
suppressed >:)].
I don't think either should be done by default, as most times it's easy to
append "> /dev/null 2>> /var/log/background_errors.log &" to a command
line.
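Spelled out, that pattern looks like this ($! keeps the job reachable
after backgrounding; `long_build` is a stand-in command, and the log path
is moved to /tmp so the sketch runs unprivileged):

```shell
# stdout discarded, stderr appended to a log, job backgrounded.
long_build() { sleep 1; }   # stand-in for a real build command
long_build > /dev/null 2>> /tmp/background_errors.log &
job=$!        # PID of the backgrounded job
wait "$job"   # it can still be waited on (or monitored) later
```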
> Also, here's an expansion of my concept: a client, bundled in the
> distribution, to allow shell scripts (ebuilds, anyone?) to use the
> system.
It would be nice if you had a small program that was called like:
prog-mon 'emerge sync' 'emerge -uD --newuse sys-apps/portage' 'emerge -uD
--newuse system' 'emerge -uD --newuse world'
It would run each of its arguments as a command line, in sequence, and
display a progress bar. (Or hide and fork and be monitorable otherwise.)
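A minimal sketch of such a tool (the name and the N-of-M output format are
assumptions, and a real version would report to the daemon rather than the
terminal):

```shell
#!/bin/sh
# Run each argument as a shell command line, in sequence, printing a
# simple N-of-M progress line before each; stop on the first failure.
prog_mon() {
  total=$#
  n=0
  for cmd in "$@"; do
    n=$((n + 1))
    printf '[%d/%d] %s\n' "$n" "$total" "$cmd"
    sh -c "$cmd" || return 1
  done
}
```

Called as in the example above, `prog_mon 'emerge sync' 'emerge -uD
--newuse world'` would print `[1/2] emerge sync`, run it, and so on.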
> Here's how I see it working:
> Two parts:
> 1) a *client* daemon. This would be the process that actually
> communicates with the master daemon. It would keep the connection to the
> master open, and would also have a consistent PID (unlike a shell
> script, which, AFAIK, may not). We don't want the master daemon to think
> we mysteriously died and report an error. This client daemon would keep
> track of just the one job's tasks.
What do you mean by "a consistent PID"? A shell script will not change
PIDs while executing, at least not normally. (I suppose *sh could handle
^Z specially in a shell script.) It won't have the same PID if invoked
twice, but neither will a client daemon. I don't think you really need
this part.
Now, a shell script might invoke another shell script or a binary, but
that's done by spawning a separate process, so of course it has a
different PID.
If you want binaries/scripts, callable from the shell script, that only do
part of the process, just have them report to the daemon under a different
PID: either one passed on the command line or, failing that, their
parent's process ID.
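A sketch of that convention (the report format and function name are
invented for illustration): the helper uses a job PID handed to it as its
first argument, or falls back to its parent's PID.

```shell
#!/bin/sh
# Report a step under a job PID: the one passed as $1 if present,
# otherwise $PPID, so sub-steps group under the invoking script's job.
report_step() {
  job_pid=${1:-$PPID}
  printf 'job=%s step=%s\n' "$job_pid" "${2:-done}"
}
```

An ebuild-style script could call `report_step "$JOB_PID" unpack`, while a
helper invoked without an explicit PID would group under its parent.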
--
Boyd Stephen Smith Jr.
[EMAIL PROTECTED]
ICQ: 514984 YM/AIM: DaTwinkDaddy
--
[email protected] mailing list