[bug #18755] exported var-define and var-define from command line should appear in $(shell ) env

2007-01-10 Thread Jun Chen

URL:
  

 Summary: exported var-define and var-define from command
line should appear in $(shell ) env 
 Project: make
Submitted by: chjfth
Submitted on: Thursday 01/11/07 at 10:23
Severity: 3 - Normal
  Item Group: Enhancement
  Status: None
 Privacy: Public
 Assigned to: None
 Open/Closed: Open
 Discussion Lock: Any
   Component Version: 3.81
Operating System: None
   Fixed Release: None

___

Details:


See the following makefile:

===
export EXVAR = exval

_temp := $(shell echo "EXVAR = $${EXVAR}, CMDVAR = $${CMDVAR}" 1>&2)

all:
===

``make CMDVAR=cmdval'' currently outputs: 

---
EXVAR = , CMDVAR =
make: `all' is up to date.
---

But in some situations, the following output would be expected:

---
EXVAR = exval, CMDVAR = cmdval
make: `all' is up to date.
---

I encountered this problem while working on my GnumakeUniproc project over
the past year (http://sf.net/projects/gnumakeuniproc), and unfortunately I
had to find a workaround for it.

I hope GNU make's next version offers an option to enable my suggested
behavior.

By the way, an anonymous person posted such an issue two years ago:
https://savannah.gnu.org/bugs/?10593
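For reference, one workaround (not an official fix, just a sketch of the kind of thing GnumakeUniproc resorts to) is to let make itself expand the variables into the command text, rather than relying on the $(shell ) child environment:

```make
export EXVAR = exval

# $(EXVAR) and $(CMDVAR) are expanded by make before the shell runs,
# so the shell no longer needs to find them in its environment.
# (This loses the ability of nested scripts to read them from the
# environment, and value quoting is left aside in this sketch.)
_temp := $(shell echo "EXVAR = $(EXVAR), CMDVAR = $(CMDVAR)" 1>&2)

all:
```

Running ``make CMDVAR=cmdval'' on this version prints "EXVAR = exval, CMDVAR = cmdval" as desired, at the cost of quoting fragility.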




___

Reply to this item at:

  

___
  Message sent via/by Savannah
  http://savannah.gnu.org/



___
Bug-make mailing list
Bug-make@gnu.org
http://lists.gnu.org/mailman/listinfo/bug-make


Re: EAGAIN in "read jobs pipe"

2007-01-10 Thread Howard Chu

james coleman wrote:

 not much!

also a build making more calls to make can result in
 jfactor * number of make calls jobs


Only if you're using a totally braindead implementation of make. The 
whole point of the jobserver pipe is to eliminate this fanout problem.


So I might regularly use maybe -j 10 (when I know 10 more calls to 
make are made)
On some machines this might be unacceptable and bring them to their 
knees.

On other machines it can work really well.

--
 -- Howard Chu
 Chief Architect, Symas Corp.  http://www.symas.com
 Director, Highland Sun    http://highlandsun.com/hyc
 OpenLDAP Core Team        http://www.openldap.org/project/





Re: EAGAIN in "read jobs pipe"

2007-01-10 Thread Paul Smith
On Wed, 2007-01-10 at 01:53 -0800, Howard Chu wrote:

> An essential design choice. This stuff relies on reads and writes of the 
> job_fd being atomic and the writes never blocking. POSIX guarantees a 4K 
> buffer for pipes. Perhaps the code should check the resource limit and 
> complain if the -j argument exceeds the resource limit: "Error: why 
> don't you just use '-j' ??"

Yes, that's a good idea.  I will add some error checking to the code
that initially seeds the pipe with tokens.

-- 
---
 Paul D. Smith <[EMAIL PROTECTED]>  Find some GNU make tips at:
 http://www.gnu.org  http://make.paulandlesley.org
 "Please remain calm...I may be mad, but I am a professional." --Mad Scientist




Re: EAGAIN in "read jobs pipe"

2007-01-10 Thread james coleman


Oh dear. Sorry people. I was of course being a bit silly when talking about 
make -j 16385!

Howard Chu wrote:

[EMAIL PROTECTED] wrote:

 Perhaps... On the other hand, if you're using -j 65K, why not
just -j? Does your build even have 65K jobs?


of course I do not use -j65k ! :-O :-)


Very good question.

I could be wrong of course, but in my experience you don't gain any
real benefit from going beyond 3-4 jobs per (virtual) core... What's the
difference in build time from, say, -j 128 and -j 65385 for you?


 not much!

also a build making more calls to make can result in
 jfactor * number of make calls jobs

So I might regularly use maybe -j 10 (when I know 10 more calls to make are 
made)
On some machines this might be unacceptable and bring them to their knees.
On other machines it can work really well.

  


I usually count on 1.5 jobs per core, but obviously the right balance 
depends on your disk speeds relative to your CPUs...


I find that the limiting factor in the speed of builds is disk access
(gcc preprocessing reading in all the header files).
So all of the processes spawned are mostly just waiting on disk,
and a parallel build makes optimum use of both CPU and disk together.

So yes, between 3-4 jobs per core, maybe a little bit more.

Of course Your Mile^H^H^H^HKilometerage Will Vary a lot with different projects 
and machines.

James.




Re: EAGAIN in "read jobs pipe"

2007-01-10 Thread Howard Chu

[EMAIL PROTECTED] wrote:

Noticed something else.

Something else completely pointless!

 must not be > 16385



True, if you have 16 KB pipes...

  

 gmake -j 16386; # or above hang
 gmake -j 16385; # or below fire along rapidly



Because you're trying to write more into the job tokens pipe than it can
hold...

You apparently have 16K pipes (ulimit -a should tell you)... I get the
same on x86 Linux if I go beyond 4K jobs...

This code (from main.c) assumes that the pipe will be big enough to hold
all tokens:

  while (--job_slots)
    {
      int r;

      EINTRLOOP (r, write (job_fds[1], &c, 1));
      if (r != 1)
        pfatal_with_name (_("init jobserver pipe"));
    }

A bug?


An essential design choice. This stuff relies on reads and writes of the 
job_fd being atomic and the writes never blocking. POSIX guarantees a 4K 
buffer for pipes. Perhaps the code should check the resource limit and 
complain if the -j argument exceeds the resource limit: "Error: why 
don't you just use '-j' ??"



 Perhaps... On the other hand, if you're using -j 65K, why not
just -j? Does your build even have 65K jobs?
  


Very good question.

I could be wrong of course, but in my experience you don't gain any
real benefit from going beyond 3-4 jobs per (virtual) core... What's the
difference in build time from, say, -j 128 and -j 65385 for you?
  


I usually count on 1.5 jobs per core, but obviously the right balance 
depends on your disk speeds relative to your CPUs...

The fact that your machine stays responsive during a build does not
necessarily mean that you will gain any benefit from increasing the -j
number...

  
When I see that total CPU idle time is about 3% or less I figure that's 
enough jobs.


--
 -- Howard Chu
 Chief Architect, Symas Corp.  http://www.symas.com
 Director, Highland Sun    http://highlandsun.com/hyc
 OpenLDAP Core Team        http://www.openldap.org/project/





RE: EAGAIN in "read jobs pipe"

2007-01-10 Thread lasse.makholm

>Noticed something else.
>
>Something else completely pointless!
>
> must not be > 16385

True, if you have 16 KB pipes...

>  gmake -j 16386; # or above hang
>  gmake -j 16385; # or below fire along rapidly

Because you're trying to write more into the job tokens pipe than it can
hold...

You apparently have 16K pipes (ulimit -a should tell you)... I get the
same on x86 Linux if I go beyond 4K jobs...

This code (from main.c) assumes that the pipe will be big enough to hold
all tokens:

  while (--job_slots)
    {
      int r;

      EINTRLOOP (r, write (job_fds[1], &c, 1));
      if (r != 1)
        pfatal_with_name (_("init jobserver pipe"));
    }

A bug? Perhaps... On the other hand, if you're using -j 65K, why not
just -j? Does your build even have 65K jobs?

I could be wrong of course, but in my experience you don't gain any
real benefit from going beyond 3-4 jobs per (virtual) core... What's the
difference in build time from, say, -j 128 and -j 65385 for you?

The fact that your machine stays responsive during a build does not
necessarily mean that you will gain any benefit from increasing the -j
number...

/Lasse

