Re: [gentoo-dev] reworking python-wrapper and wrapper scripts.

2012-10-18 Thread Michał Górny
On Thu, 18 Oct 2012 16:12:55 -0700
Brian Harring  wrote:

> On Thu, Oct 18, 2012 at 11:54:21PM +0200, Michał Górny wrote:
> > On Thu, 18 Oct 2012 02:15:43 -0700
> > Brian Harring  wrote:
> > 
> > > There's a trick to this; currently, those generated scripts hardcode 
> > > the allowed/known python versions for that package.  We obviously have 
> > > to preserve that; I propose we shove it into the symlink path.
> > > 
> > > Basically, we add a /usr/libexec/python directory; within it, we have 
> > > a wrapper binary (explained below), and a set of symlinks pointing at 
> > > the root of that directory.  To cover our current python versions, the 
> > > following would suffice:
> > > 
> > > for x in {2.{4,5,6,7},3.{0,1,2,3,4}}-cpy 2.5-jython 2.7-pypy-1.{7,8} \
> > >     2.7-pypy-1.9; do
> > >   ln -s ./ /usr/libexec/python/$x 
> > > done
> > > 
> > > While that seems insane, there is a reason; via that, we can encode 
> > > the allowed versions into the symlink.  Using pkgcore's pquery for 
> > > example (which should support cpy: 2.5, 2.6, 2.7, 3.1, 3.2, 3.3) 
> > > instead of a wrapper script at /usr/bin/pquery, we'd have thus:
> > > 
> > > targets=( 2.{5,6,7}-cpy 3.{1,2,3}-cpy )
> > > targets=$(IFS=/;echo -n "${targets[*]}")
> > > # This results in
> > > # targets=2.5-cpy/2.6-cpy/2.7-cpy/3.1-cpy/3.2-cpy/3.3-cpy
> > > ln -s "/usr/libexec/python/${targets}/wrapper" \
> > >   /usr/bin/pquery
> > > 
> > > /usr/libexec/python/wrapper upon invocation, takes a look at argv[0]; 
> > > sees how it was invoked basically.  This will be the /usr/bin/whatever 
> > > pathway.  It reads the symlink, in the process getting the allowed 
> > > versions and preferred order of the versions.
> > > 
> > > Few notes; vast majority of filesystems will store the symlink target 
> > > into the core inode if at all possible- in doing so, this avoids 
> > > wasting an inode and is only limited by the length of the target.  
> > > That length is capped by PATH_MAX- which can range from 256 to 4k (or 
> > > higher).
> > > 
> > > For the pquery example above, that comes out to ~73 bytes for the 
> > > symlink pathway; well under PATH_MAX.
> > > 
> > > For the scenarios where PATH_MAX caps the symlink pathway, or for 
> > > whatever reason we don't want to use that trick, a tree of files 
> > > contained within /usr/libexec/python/ holding the allowed versions for 
> > > the matching pathway would suffice.
> > 
> > While I agree that it's a clever trick, I doubt it's worth the effort.
> > Did you get any numbers proving it superior to, say, trying to
> > exec() scripts like I do in python-exec?
> > 
> > While I can imagine that, in the worst case, that bunch of exec()s is
> > definitely going to be slower than storing the list, I doubt
> > such a bad case occurs often.
> 
> The difference in performance there is going to be negligible; I'm not 
> particularly concerned about that, and it shouldn't be a debate point 
> between my notion and yours (it's only a debate point when one is 
> talking about a c binary vs a python script).
> 
> One thing you're ignoring here is that the route(s) I mentioned all 
> allow for control/specifying what the order of preference is for 
> lookup/fallback (each link/shebang can encode that order how ever the 
> hell it wants).

Where's the use-case for that? Do we really prefer having the order of
preference hard-wired when installing a particular package instead of
having a control over it in the eclass/a single package?

> > Considering that the most common Python version used now is Python 2,
> > how often does a script not support that version? That's a very
> > rare case, and often just executing "foo-${EPYTHON}" works. In your
> > case, that common case involves readlink() + parsing + exec().
> >
> > Even in case of Python 3 being selected, I doubt the overhead
> > of multiple exec()s on the scripts not supporting Python 3 is really
> > relevant. Please measure it if you believe so.
> > 
> > To be honest, I don't see any real advantage in this solution. It is
> > complex; understanding it requires explanation or some thinking.
> > The code will be fragile, and I'm not even sure if I'm not missing
> > something important here.
> 
> Not particularly sure how you claim this is fragile, but whatever, 
> your view.

The fragile part is using a symlink to store *data*.

> > > Either proposal here would be far faster than what we've got now; also 
> > > will use less space (ancillary benefit).
> > > 
> > > One subtle aspect here is that if we did this, it makes it possible to 
> > > avoid losing the invocation information- currently if you did 
> > > `/usr/bin/python3.2 $(which sphinx-build) blah`, because of how things 
> > > are implemented now (specifically the two layers of wrappers)- you'll 
> > > get python2.7 running that sphinx-build invocation.
> > > 
> > > This is wrong (it's directly discarding what the invocation 
> > > requested), although you're only going to see it for scripts that 
> > > do python introspection.

Re: [gentoo-dev] rfc: "need net" mis-use in init scripts

2012-10-18 Thread Dirkjan Ochtman
On Thu, Oct 18, 2012 at 8:40 PM, William Hubbs  wrote:
> Does anyone have any thoughts?

+1. I had a server become inaccessible recently because I took down
one of the interfaces and it took sshd down with it. That's because
there's also a problematic interaction with rc_depend_strict, which
for "need net" means all the interfaces have to be up for services
with "need net" to be started. It seems to me that any service that
binds to all interfaces (like sshd, by default) should not "need net".

Cheers,

Dirkjan



Re: [gentoo-dev] Re: [RFC] Drop EAPI=0 requirement for system packages.

2012-10-18 Thread Zac Medico
On 10/18/2012 09:09 PM, Ryan Hill wrote:
> Anyways, we're seriously getting off topic here.  I don't think anyone
> objected to removing the EAPI 0 requirement for system packages (and in
> reality no one follows it anyways.

An EAPI 0 requirement for system packages is just silly these days.

>  Even portage is EAPI 3).

For the record, stable portage is EAPI 2, and there wasn't much choice
in the matter since portage depends on python-2.6 which uses EAPI 2 (and
we don't want EAPI 0 or 1 package managers pulling in a portage which
depends on a python with an unsupported EAPI).
-- 
Thanks,
Zac



[gentoo-dev] Re: [RFC] Drop EAPI=0 requirement for system packages.

2012-10-18 Thread Ryan Hill
On Thu, 18 Oct 2012 09:36:27 -0400
Rich Freeman  wrote:

> > Well, it's not just about ebuilds you maintain.  Think about something
> > like the gcc-porting trackers where you have to touch a lot of ebuilds
> > across the tree.  You really do have to have a working knowledge of the
> > differences between EAPIs to do so.  My browser bookmark to the EAPI
> > cheatsheet is one of the more frequently used as it is.
> 
> Can't you just ask the maintainers to fix their ebuilds?  And if they
> don't respond or at least cooperate, well, then treeclean them.

Seriously, no you can't.  I think you greatly underestimate the number of
ebuilds in the tree that don't have an actual maintainer, and the
availability of maintainers for those that do.  If I had to wait for people
to fix stuff on their own we'd still be on gcc 4.4.

> I do agree that trying to auto-mangle ebuilds from 47 different EAPIs
> doesn't make sense.  Just assign a bug to the maintainer saying "do
> this to your ebuild, or get it on EAPI foo so that I can fix it, by
>  or it is gone."

So I can twiddle my thumbs for months waiting for something to happen or I
can take 2 minutes to look at the EAPI spec.

And I have absolutely no interest whatsoever in forcing people to update their
ebuilds just to suit my particular needs. They're the maintainer, they can
run their shop however they see fit.  I'm not going to try to get something
removed just because I can't be bothered to remember a few details.

Anyways, we're seriously getting off topic here.  I don't think anyone
objected to removing the EAPI 0 requirement for system packages (and in
reality no one follows it anyways.  Even portage is EAPI 3).


-- 
gcc-porting
toolchain, wxwidgets  we were never more here, expanse getting broader
@ gentoo.org  but bigger boats been done by less water




Re: [gentoo-dev] reworking python-wrapper and wrapper scripts.

2012-10-18 Thread Brian Harring
On Thu, Oct 18, 2012 at 11:54:21PM +0200, Michał Górny wrote:
> On Thu, 18 Oct 2012 02:15:43 -0700
> Brian Harring  wrote:
> 
> > There's a trick to this; currently, those generated scripts hardcode 
> > the allowed/known python versions for that package.  We obviously have 
> > to preserve that; I propose we shove it into the symlink path.
> > 
> > Basically, we add a /usr/libexec/python directory; within it, we have 
> > a wrapper binary (explained below), and a set of symlinks pointing at 
> > the root of that directory.  To cover our current python versions, the 
> > following would suffice:
> > 
> > for x in {2.{4,5,6,7},3.{0,1,2,3,4}}-cpy 2.5-jython 2.7-pypy-1.{7,8} \
> >     2.7-pypy-1.9; do
> >   ln -s ./ /usr/libexec/python/$x 
> > done
> > 
> > While that seems insane, there is a reason; via that, we can encode 
> > the allowed versions into the symlink.  Using pkgcore's pquery for 
> > example (which should support cpy: 2.5, 2.6, 2.7, 3.1, 3.2, 3.3) 
> > instead of a wrapper script at /usr/bin/pquery, we'd have thus:
> > 
> > targets=( 2.{5,6,7}-cpy 3.{1,2,3}-cpy )
> > targets=$(IFS=/;echo -n "${targets[*]}")
> > # This results in
> > # targets=2.5-cpy/2.6-cpy/2.7-cpy/3.1-cpy/3.2-cpy/3.3-cpy
> > ln -s "/usr/libexec/python/${targets}/wrapper" \
> >   /usr/bin/pquery
> > 
> > /usr/libexec/python/wrapper upon invocation, takes a look at argv[0]; 
> > sees how it was invoked basically.  This will be the /usr/bin/whatever 
> > pathway.  It reads the symlink, in the process getting the allowed 
> > versions and preferred order of the versions.
> > 
> > Few notes; vast majority of filesystems will store the symlink target 
> > into the core inode if at all possible- in doing so, this avoids 
> > wasting an inode and is only limited by the length of the target.  
> > That length is capped by PATH_MAX- which can range from 256 to 4k (or 
> > higher).
> > 
> > For the pquery example above, that comes out to ~73 bytes for the 
> > symlink pathway; well under PATH_MAX.
> > 
> > For the scenarios where PATH_MAX caps the symlink pathway, or for 
> > whatever reason we don't want to use that trick, a tree of files 
> > contained within /usr/libexec/python/ holding the allowed versions for 
> > the matching pathway would suffice.
> 
> While I agree that it's a clever trick, I doubt it's worth the effort.
> Did you get any numbers proving it superior to, say, trying to
> exec() scripts like I do in python-exec?
> 
> While I can imagine that, in the worst case, that bunch of exec()s is
> definitely going to be slower than storing the list, I doubt
> such a bad case occurs often.

The difference in performance there is going to be negligible; I'm not 
particularly concerned about that, and it shouldn't be a debate point 
between my notion and yours (it's only a debate point when one is 
talking about a c binary vs a python script).

One thing you're ignoring here is that the route(s) I mentioned all 
allow for control/specifying what the order of preference is for 
lookup/fallback (each link/shebang can encode that order how ever the 
hell it wants).


> Considering that the most common Python version used now is Python 2,
> how often does a script not support that version? That's a very
> rare case, and often just executing "foo-${EPYTHON}" works. In your
> case, that common case involves readlink() + parsing + exec().
>
> Even in case of Python 3 being selected, I doubt the overhead
> of multiple exec()s on the scripts not supporting Python 3 is really
> relevant. Please measure it if you believe so.
> 
> To be honest, I don't see any real advantage in this solution. It is
> complex; understanding it requires explanation or some thinking.
> The code will be fragile, and I'm not even sure if I'm not missing
> something important here.

Not particularly sure how you claim this is fragile, but whatever, 
your view.

I suggest you read my response to floppym.  There's differing ways to 
tackle this while getting the required speed, and covering what I want 
(python3.2 sphinx-build working).


> Thus, unless you've got a good argument for how this solution is superior
> to the straightforward one done in python-exec, or numbers proving that
> it is more efficient in a way noticeable to our users, -1. Smart hack,
> yes, but not really beneficial.

Your -1 is duly noted.


> > Either proposal here would be far faster than what we've got now; also 
> > will use less space (ancillary benefit).
> > 
> > One subtle aspect here is that if we did this, it makes it possible to 
> > avoid losing the invocation information- currently if you did 
> > `/usr/bin/python3.2 $(which sphinx-build) blah`, because of how things 
> > are implemented now (specifically the two layers of wrappers)- you'll 
> > get python2.7 running that sphinx-build invocation.
> > 
> > This is wrong (it's directly discarding what the invocation 
> > requested), although you're only going to see it for scripts that 
> > do python introspection.

[gentoo-dev] Re: reworking python-wrapper and wrapper scripts.

2012-10-18 Thread Brian Harring
On Thu, Oct 18, 2012 at 04:50:04PM -0400, Mike Gilbert wrote:
> On Thu, Oct 18, 2012 at 3:29 PM, Brian Harring  wrote:
> > On Thu, Oct 18, 2012 at 11:28:59AM -0400, Mike Gilbert wrote:
> >> Regarding your /usr/bin/python3.2 /usr/bin/sphinx-build example:
> >> invoking python on a binary (or a symlink to a binary) isn't going to
> >> work at all. So I don't see how you've solved that issue.
> >
> > Kindly provide reasoning/justification for such a claim in the future-
> > saves a round of email ping/ponging, and saves me getting pissy
> > towards people if they persist in it. :)
> >
> > Lecture aside, "nuh uh, it does work!" is appropriate.  I already
> > hacked up a prototype locally that works fine and does this via an
> > env passing hack- w/ what was proposed, this can be done w/out those
> > hacks, and would be faster.
> >
> 
> When you invoke a command like:
> 
> /usr/bin/python3.2 /usr/bin/somebinary
> 
> The python3.2 program assumes somebinary is a python script and tries
> to parse it. Since somebinary is actually an ELF object, python bombs
> out immediately.
> 
> Am I missing something fundamental here?

Nope, you've got me embarrassingly, absolutely fricking dead to rights 
on that one; I overlooked that the prototype I did used a python script 
for 'somebinary', which worked via the env passing.

Would've spotted it upon implementing it in c to say the least, but 
still, *cough*. ;)

As you mentioned in a separate email; if we stick a wrapper in front 
of the python binaries, that would fly; I don't much like that 
however.

Offhand... this makes it a bit more complex, but still tenable 
imo: we could get around it by shoving the symlink pathway into 
the shebang itself.

Scenario 1:
Script gets installed for 2.7, 3.1, 3.2; ignoring the shebang, it's 
exactly the same md5 (meaning we can dedup it).  The shebang instead 
of being /usr/bin/python-whatever, instead is the symlink pathway I'd 
referenced.  Meaning it would be 
/usr/libexec/python/2.7/3.1/3.2/wrapper.  If that script was 
sphinx-build, this means that sphinx-build{,-{2.7,3.1,3.2}} are all 
valid python scripts- they can be invoked either via /path/to/script 
or ${python} /path/to/script.  There's a few details to sort there, 
but nothing egregious best I can tell- implicit in this proposal is 
that the wrapper would be smart enough to tell in which context it's 
being invoked (sphinx-build vs sphinx-build-2.7).
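A minimal sketch of how such a wrapper could recover the version list
from its shebang line, using bash parameter expansion (the
/usr/libexec/python layout is the one proposed above; the parsing code
itself is a hypothetical illustration):

```shell
# Hypothetical parse of a version-encoding shebang pathway, e.g. the
# first line of a deduped sphinx-build script in scenario 1.
shebang='#!/usr/libexec/python/2.7/3.1/3.2/wrapper'

# Strip the fixed directory prefix and the trailing binary name; what
# remains is the slash-separated version list, in preference order.
versions=${shebang#'#!/usr/libexec/python/'}
versions=${versions%/wrapper}

# Split on '/' into an array of candidate versions.
IFS=/ read -ra targets <<< "$versions"
echo "${targets[@]}"    # 2.7 3.1 3.2
```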

Scenario 2:
2.7, 3.1, 3.2 are in play; they do /not/ dedupe to the same content, 
meaning we actually need indirection in 'sphinx-build' (but not 
sphinx-build-{2.7,3.1,3.2} although we would dedupe where possible 
mind you).

In that case, to support python3.2 sphinx-build, we'd need a python 
script that does the searching/re-exec for the wrapper.  This can be 
standardized- just have it search for sphinx-build-${PYTHON_ABIS} to 
identify the installed versions, then do the usual active 
version/epython trick to decide which to use.  This honestly isn't too 
far off from what we have now.
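The search/re-exec logic for scenario 2 could look roughly like this
sketch (the directory, script name, and preference order are illustrative
assumptions, not the real eselect-python behaviour):

```shell
# Probe for per-ABI variants of a script and pick the first available
# one in preference order; a real wrapper would then exec() it.
bindir=$(mktemp -d)           # stands in for /usr/bin
script=sphinx-build
touch "$bindir/$script-2.7" "$bindir/$script-3.2"

found=
for abi in 3.2 3.1 2.7; do    # most-preferred first (assumed order)
    if [ -e "$bindir/$script-$abi" ]; then
        found=$bindir/$script-$abi
        break                 # real wrapper: exec "$found" "$@"
    fi
done
echo "would exec: ${found##*/}"
rm -rf "$bindir"
```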

We can however optimize that; same trick, except the wrapper's shebang 
isn't /usr/bin/python- the wrapper is a c version of that python 
script, done in c for speed.

So... caveats.  If we screw w/ the shebang like this, file isn't going 
to like it- it'll fail to identify certain scripts as python.  That 
said, it's worth keeping in mind that if we use the symlink route, a 
file /usr/bin/sphinx-build would result in ELF, vs file 
/usr/bin/sphinx-build-2.7 being python.

This can be fixed via an additional rule added to the magic db.  
Pretty straight forward, and eselect-python (once it gets these 
wrappers) could level an appropriate dep to ensure that's sorted.

So... my preference is shebang; it's basically an extension of the 
symlink trick I'd originally mentioned, just cleaner imo.

What are folks thoughts on that?
~harring



Re: [gentoo-dev] reworking python-wrapper and wrapper scripts.

2012-10-18 Thread Michał Górny
Ah, and a minor note.

On Thu, 18 Oct 2012 02:15:43 -0700
Brian Harring  wrote:

> /usr/libexec/python/wrapper upon invocation, takes a look at argv[0]; 
> sees how it was invoked basically.  This will be the /usr/bin/whatever 
> pathway.  It reads the symlink, in the process getting the allowed 
> versions and preferred order of the versions.
> 
> Few notes; vast majority of filesystems will store the symlink target 
> into the core inode if at all possible- in doing so, this avoids 
> wasting an inode and is only limited by the length of the target.  
> That length is capped by PATH_MAX- which can range from 256 to 4k (or 
> higher).

It is capped by SYMLINK_MAX, I believe.

-- 
Best regards,
Michał Górny




Re: [gentoo-dev] reworking python-wrapper and wrapper scripts.

2012-10-18 Thread Michał Górny
On Thu, 18 Oct 2012 02:15:43 -0700
Brian Harring  wrote:

> There's a trick to this; currently, those generated scripts hardcode 
> the allowed/known python versions for that package.  We obviously have 
> to preserve that; I propose we shove it into the symlink path.
> 
> Basically, we add a /usr/libexec/python directory; within it, we have 
> a wrapper binary (explained below), and a set of symlinks pointing at 
> the root of that directory.  To cover our current python versions, the 
> following would suffice:
> 
> for x in {2.{4,5,6,7},3.{0,1,2,3,4}}-cpy 2.5-jython 2.7-pypy-1.{7,8} \
>     2.7-pypy-1.9; do
>   ln -s ./ /usr/libexec/python/$x 
> done
> 
> While that seems insane, there is a reason; via that, we can encode 
> the allowed versions into the symlink.  Using pkgcore's pquery for 
> example (which should support cpy: 2.5, 2.6, 2.7, 3.1, 3.2, 3.3) 
> instead of a wrapper script at /usr/bin/pquery, we'd have thus:
> 
> targets=( 2.{5,6,7}-cpy 3.{1,2,3}-cpy )
> targets=$(IFS=/;echo -n "${targets[*]}")
> # This results in
> # targets=2.5-cpy/2.6-cpy/2.7-cpy/3.1-cpy/3.2-cpy/3.3-cpy
> ln -s "/usr/libexec/python/${targets}/wrapper" \
>   /usr/bin/pquery
> 
> /usr/libexec/python/wrapper upon invocation, takes a look at argv[0]; 
> sees how it was invoked basically.  This will be the /usr/bin/whatever 
> pathway.  It reads the symlink, in the process getting the allowed 
> versions and preferred order of the versions.
> 
> Few notes; vast majority of filesystems will store the symlink target 
> into the core inode if at all possible- in doing so, this avoids 
> wasting an inode and is only limited by the length of the target.  
> That length is capped by PATH_MAX- which can range from 256 to 4k (or 
> higher).
> 
> For the pquery example above, that comes out to ~73 bytes for the 
> symlink pathway; well under PATH_MAX.
> 
> For the scenarios where PATH_MAX caps the symlink pathway, or for 
> whatever reason we don't want to use that trick, a tree of files 
> contained within /usr/libexec/python/ holding the allowed versions for 
> the matching pathway would suffice.

While I agree that it's a clever trick, I doubt it's worth the effort.
Did you get any numbers proving it superior to, say, trying to
exec() scripts like I do in python-exec?

While I can imagine that, in the worst case, that bunch of exec()s is
definitely going to be slower than storing the list, I doubt
such a bad case occurs often.

Considering that the most common Python version used now is Python 2,
how often does a script not support that version? That's a very
rare case, and often just executing "foo-${EPYTHON}" works. In your
case, that common case involves readlink() + parsing + exec().
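For reference, the readlink()+parse step being described would amount to
something like this sketch (paths under a temp dir stand in for
/usr/libexec/python; the parsing itself is an illustrative assumption):

```shell
# Recreate a data-carrying symlink as in the proposal, then parse it.
root=$(mktemp -d)
mkdir -p "$root/libexec/python"
# The symlink target need not exist; it only carries the version list.
ln -s "$root/libexec/python/2.5-cpy/2.6-cpy/2.7-cpy/wrapper" "$root/pquery"

# The wrapper reads its own symlink and peels off the fixed pieces,
# leaving the encoded, slash-separated version list.
target=$(readlink "$root/pquery")
versions=${target#"$root/libexec/python/"}
versions=${versions%/wrapper}
echo "$versions"    # 2.5-cpy/2.6-cpy/2.7-cpy
rm -rf "$root"
```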

Even in case of Python 3 being selected, I doubt the overhead
of multiple exec()s on the scripts not supporting Python 3 is really
relevant. Please measure it if you believe so.

To be honest, I don't see any real advantage in this solution. It is
complex; understanding it requires explanation or some thinking.
The code will be fragile, and I'm not even sure if I'm not missing
something important here.

Thus, unless you've got a good argument for how this solution is superior
to the straightforward one done in python-exec, or numbers proving that
it is more efficient in a way noticeable to our users, -1. Smart hack,
yes, but not really beneficial.

> Either proposal here would be far faster than what we've got now; also 
> will use less space (ancillary benefit).
> 
> One subtle aspect here is that if we did this, it makes it possible to 
> avoid losing the invocation information- currently if you did 
> `/usr/bin/python3.2 $(which sphinx-build) blah`, because of how things 
> are implemented now (specifically the two layers of wrappers)- you'll 
> get python2.7 running that sphinx-build invocation.
> 
> This is wrong (it's directly discarding what the invocation 
> requested), although you're only going to see it for scripts that 
> do python introspection.
> 
> Via doing the restructuring I'm mentioning above, that issue can be 
> fixed, while making things faster/saner.

I don't see how this is relevant to the wrapper. As Mike pointed out,
python3.2 is the actual Python executable, while the wrapper you're
suggesting is a C executable -- the two simply don't work together
like that.

So please elaborate on how you are actually going to solve this. Hope
it's not through patching Python...

-- 
Best regards,
Michał Górny




[gentoo-dev] Re: reworking python-wrapper and wrapper scripts.

2012-10-18 Thread Mike Gilbert
On Thu, Oct 18, 2012 at 4:50 PM, Mike Gilbert  wrote:
> On Thu, Oct 18, 2012 at 3:29 PM, Brian Harring  wrote:
>> On Thu, Oct 18, 2012 at 11:28:59AM -0400, Mike Gilbert wrote:
>>> Regarding your /usr/bin/python3.2 /usr/bin/sphinx-build example:
>>> invoking python on a binary (or a symlink to a binary) isn't going to
>>> work at all. So I don't see how you've solved that issue.
>>
>> Kindly provide reasoning/justification for such a claim in the future-
>> saves a round of email ping/ponging, and saves me getting pissy
>> towards people if they persist in it. :)
>>
>> Lecture aside, "nuh uh, it does work!" is appropriate.  I already
>> hacked up a prototype locally that works fine and does this via an
>> env passing hack- w/ what was proposed, this can be done w/out those
>> hacks, and would be faster.
>>
>
> When you invoke a command like:
>
> /usr/bin/python3.2 /usr/bin/somebinary
>
> The python3.2 program assumes somebinary is a python script and tries
> to parse it. Since somebinary is actually an ELF object, python bombs
> out immediately.
>
> Am I missing something fundamental here?

Oh, perhaps you are suggesting that /usr/bin/python3.2 would no longer
be the actual python binary, but instead be a link to your wrapper
program? It's really not clear to me.



[gentoo-dev] Re: reworking python-wrapper and wrapper scripts.

2012-10-18 Thread Mike Gilbert
On Thu, Oct 18, 2012 at 3:29 PM, Brian Harring  wrote:
> On Thu, Oct 18, 2012 at 11:28:59AM -0400, Mike Gilbert wrote:
>> Regarding your /usr/bin/python3.2 /usr/bin/sphinx-build example:
>> invoking python on a binary (or a symlink to a binary) isn't going to
>> work at all. So I don't see how you've solved that issue.
>
> Kindly provide reasoning/justification for such a claim in the future-
> saves a round of email ping/ponging, and saves me getting pissy
> towards people if they persist in it. :)
>
> Lecture aside, "nuh uh, it does work!" is appropriate.  I already
> hacked up a prototype locally that works fine and does this via an
> env passing hack- w/ what was proposed, this can be done w/out those
> hacks, and would be faster.
>

When you invoke a command like:

/usr/bin/python3.2 /usr/bin/somebinary

The python3.2 program assumes somebinary is a python script and tries
to parse it. Since somebinary is actually an ELF object, python bombs
out immediately.

Am I missing something fundamental here?



Re: [gentoo-dev] Re: [RFC] Drop EAPI=0 requirement for system packages.

2012-10-18 Thread Rich Freeman
On Thu, Oct 18, 2012 at 3:05 PM, Pacho Ramos  wrote:
> Personally I see no major difficult in moving to eapi4, what exact
> difficult are you (I mean people still sticking with eapi0/1) seeing?

It is harder than cp.  :)

If I write a new ebuild I would always target the most recent EAPI.
However, if I'm just doing a revbump, why fix what ain't broken?

That is rhetorical.  I do understand your logic.  However, if it takes
me 15min to do something now, and it might take me 15min to do it
later, I'll take later every time since who knows if the package will
even be around later.

Rich



[gentoo-dev] Re: reworking python-wrapper and wrapper scripts.

2012-10-18 Thread Brian Harring
On Thu, Oct 18, 2012 at 11:28:59AM -0400, Mike Gilbert wrote:
> On Thu, Oct 18, 2012 at 5:15 AM, Brian Harring  wrote:
> If we are somehow going to eliminate the installation of a separate
> script for each python version, then the symlink idea sounds like a
> good solution for expressing the supported python versions. However,
> I'm really not sure how you would eliminate the separate scripts, in
> light of the 2to3 issue.

The elimination is just a dedup run; track the PYTHON_ABI scripts 
being installed and check the md5 (minus the shebang).  If it's the same, 
they can be hardlinked together with the appropriate shebang wrapper.

It's doable, is the short version.  Matter of fact, the dedupe run 
there could be extended to do deduplication in general across python 
versions, but that's an optimization for further down the line.
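As a rough sketch of that dedup pass (the file names and the
shebang-stripping rule are illustrative assumptions):

```shell
dir=$(mktemp -d)
# Two per-ABI copies that differ only in their shebang line.
printf '#!/usr/bin/python2.7\nprint("hi")\n' > "$dir/tool-2.7"
printf '#!/usr/bin/python3.2\nprint("hi")\n' > "$dir/tool-3.2"

# Hash each body with the first (shebang) line removed.
h1=$(tail -n +2 "$dir/tool-2.7" | md5sum | cut -d' ' -f1)
h2=$(tail -n +2 "$dir/tool-3.2" | md5sum | cut -d' ' -f1)

if [ "$h1" = "$h2" ]; then
    # Bodies match: hardlink the copies together; a shared,
    # version-aware shebang would then go into the single file.
    ln -f "$dir/tool-2.7" "$dir/tool-3.2"
fi
```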


> Regarding your /usr/bin/python3.2 /usr/bin/sphinx-build example:
> invoking python on a binary (or a symlink to a binary) isn't going to
> work at all. So I don't see how you've solved that issue.

Kindly provide reasoning/justification for such a claim in the future- 
saves a round of email ping/ponging, and saves me getting pissy 
towards people if they persist in it. :)

Lecture aside, "nuh uh, it does work!" is appropriate.  I already 
hacked up a prototype locally that works fine and does this via an 
env passing hack- w/ what was proposed, this can be done w/out those 
hacks, and would be faster.


> Back to the discussion djc pointed out: mgorny has implemented a
> binary solution already, called "python-exec", in conjunction with
> python-r1.eclass.
> 
> https://bitbucket.org/mgorny/python-exec

Another one I didn't see in the ML emails... joy.  This is partial.  
Finishing this to what I'm proposing above isn't too hard to do 
however- that said, I strongly believe this doesn't belong in a 
separate repo/package, this should be shoved into eselect-python.

eselect-python already gives us python-wrapper after all; no point 
in having two layers of wrappers, and content in two repos.


> python-exec assumes there will be a separate script for each version,
> and just calls exec several times. In shell code, it would look like
> this:
> 
> scriptname=$0 # argv[0]
> exec "${scriptname}-${EPYTHON}"
> exec "${scriptname}-$(< /etc/env.d/python/config)"
> for x in python2.7 python2.6 python3.2 python3.1 ... ; do
>   exec "${scriptname}-${x}"
> done
> 
> The list of implementations for the loop at the end is hard-coded when
> python-exec is built. This is the weak point in the solution. It could
> be improved with your symlink idea.
> 
> What are your thoughts on mgorny's python-exec solution?

Code's a bit raw, but as I mentioned in my other email, it's on a 
similar path to what I'm proposing- same core idea, just specifics in 
mine + the symlink trick.


> We could adapt it for python.eclass if you really want to support
> that. Probably by modifying python.eclass to install scripts like
> sphinx-python2.7 instead of sphinx-2.7.

Honestly, w/ what I'm intending, we don't need to change the python 
eclass much- we can preserve the same naming in use if desired.  If a 
rename is desired, sure, do it when this is phased in.

That said if there isn't a reason for the rename, then don't do it, 
imo.

~harring



Re: [gentoo-dev] Re: [RFC] Drop EAPI=0 requirement for system packages.

2012-10-18 Thread Pacho Ramos
El jue, 18-10-2012 a las 13:49 -0400, Rich Freeman escribió:
> On Thu, Oct 18, 2012 at 11:49 AM, Pacho Ramos  wrote:
> > I didn't think eapi4 features were still "unfamiliar" to so many
> > people... let's say, what about deprecating eapi1, 2 and 0 for newer
> > ebuilds? Is eapi2 so unfamiliar also to not force it as older eapi for
> > newer ebuilds (eapi3 changes look to be minor when compared with
> > eapi2) ?
> 
> This still involves the issue that what would be simple ebuild bumps
> turn into a need to make more substantial changes to an ebuild.

But those changes will save us from needing to move a lot of ebuilds to
newer eapis if, some years later, we decide to deprecate some of them. For
example, if every package using eapi1 is forced to be bumped to a newer
eapi, we won't need to do that work manually in the future when we decide
to deprecate old eapis. Also, it's probably better to force new ebuilds
to use things like the split configure phase instead of keeping the old
eapi0/1 src_compile, and the same for deprecated things like dosed
and dohard. If there were valid reasons to ban them in newer eapis, I
think it's better not to allow people to keep using an old eapi to skip
that ban (or were they banned only for cosmetic reasons?)

> 
> And the concern still exists that a policy that says all new ebuilds
> shall use EAPI foo might result in fewer new ebuilds.  Sure, they'll
> have new and shiny fooness, but arguably I'd rather have more packages
> supported on older EAPIs then fewer packages supported on newer ones.
> 
> If migrating to newer EAPIs is so simple, why aren't more doing it already?

Personally I see no major difficulty in moving to eapi4; what exact
difficulty are you (I mean people still sticking with eapi0/1) seeing? I
have re-read http://devmanual.gentoo.org/ebuild-writing/eapi/index.html
and I can't see anything especially hard :/ 

> 
> Rich
> 
> 






Re: [gentoo-dev] reworking python-wrapper and wrapper scripts.

2012-10-18 Thread Brian Harring
On Thu, Oct 18, 2012 at 08:02:42AM -0700, Brian Dolbec wrote:
> On Thu, 2012-10-18 at 02:15 -0700, Brian Harring wrote:
> > On a related note; we currently install multiple versions of the same 
> > script- the only difference being the shebang.  If one ignores the 
> > shebang, in some cases this is necessary- where the script is 2to3 
> > translated, and the code for py2k vs py3k differs.  For most, the only 
> > difference is in the shebang however.
> > 
> 
> What if the invoking script does not need to be 2to3 translated (super
> minimal python code) but the remaining python libs do?

Then we dedup 'em, using a shebang that knows to look at argv[0]'s
last few chars to find the appropriate python binary to invoke.

As for the installation: either hardlinked scripts or symlinked ones;
doesn't really matter, although hardlinks get my vote (so
sphinx-build-2.7 would be a hardlink to the same inode as
sphinx-build-3.2).
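
A minimal sketch of that dispatch idea, in shell for illustration; the
suffix convention and interpreter paths here are assumptions, not an
implemented interface (and a real stub would exec the interpreter rather
than echo it):

```shell
#!/bin/sh
# Hypothetical dispatch stub: one script hardlinked as e.g.
# sphinx-build-2.7 and sphinx-build-3.2 picks its interpreter from the
# trailing version in the name it was invoked as (argv[0]); names
# without a version suffix fall back to the active python.
pick_interpreter() {
    name=${1##*/}       # basename of argv[0], e.g. sphinx-build-2.7
    ver=${name##*-}     # text after the last '-', e.g. 2.7
    case $ver in
        [23].[0-9]) echo "/usr/bin/python$ver" ;;
        *)          echo "/usr/bin/python" ;;
    esac
}

pick_interpreter "/usr/bin/sphinx-build-2.7"   # /usr/bin/python2.7
```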

> > While it's minor in space savings, it's possible to eliminate that 
> > redundancy via a shebang target that looks at the pathway it was 
> > invoked via.  Fairly easy actually, and basically zero overhead if 
> > done.
> > 
> > Either way, thoughts?
> > 
> > What I'm proposing isn't perfect, but I'm of the view it's a step up 
> > from what's in place now- and via centralizing this crap, makes it 
> > easier to change/maintain this going forward as necessary.
> > ~harring
> > 
> 
> +1 from me.
> 
> Eclean has been checking the name it was invoked with since long before
> I did the major re-write.  From that it cleans either distfiles or
> packages when invoked via the eclean-dist or eclean-pkg symlinks.  If
> invoked as eclean itself, it looks for the target in its arguments.  So
> Brian's proposal is not something totally new that's never been done
> before...
>
> While this proposes something a little different, it is still very much
> along the same lines and, in my opinion, a much better solution.

Regarding "having the script check on its own"... yeah, this has been
on pkgcore's todo for a while, but basically got blocked by the
current wrapper bits.  I expect once this wrapper is in place, I may
screw with the setup a bit to find a way so that scripts that can
handle this on their own are left to handle it on their own.

Any other +1's/-1's?  Else I may do this on the weekend for 
destress-hacking time...

~harring



[gentoo-dev] rfc: "need net" mis-use in init scripts

2012-10-18 Thread William Hubbs
All,

If an init script has "need net" in its dependencies, this is
automatically problematic, because the script will be stopped when *any*
provider of net is stopped.

This is not good for services like sshd or ntpd, for example, because in
their default configuration they are able to start regardless of which
network interface(s) are active, and they adapt as interfaces go up and
down.

I think we should re-evaluate, and remove "need net" from init
scripts where possible.

The thing we would be looking for is services like sshd or ntpd which
can start successfully in their default configuration and
adapt as network interfaces go up and down and addresses and routes change.

In those situations, which are probably pretty common, we can just drop
"need net" from the init script.

If a service genuinely _NEEDS_ a specific network address, because it
binds to it _AND_ fails to start without it, "need net" still isn't
accurate because of the issue I pointed out above, so it should be
removed; the requirement should be documented in the service's conf.d
file and left to the user to set up.

If a user is using oldnet, they can do this by adding something like
this to the service's conf.d file:
rc_need="net.ifname"

If they are using newnet and configuring the interface in
/etc/conf.d/network as a static interface, they should add the following
to the conf.d file:
rc_need="network"

Does anyone have any thoughts?

William





Re: [gentoo-dev] Re: [RFC] Drop EAPI=0 requirement for system packages.

2012-10-18 Thread Rich Freeman
On Thu, Oct 18, 2012 at 11:49 AM, Pacho Ramos  wrote:
> I didn't think eapi4 features were still "unfamiliar" to so many
> people... let's say, what about deprecating eapi 0, 1 and 2 for newer
> ebuilds? Is eapi2 also so unfamiliar that we can't require it as the
> minimum eapi for newer ebuilds (the eapi3 changes look minor when
> compared with eapi2)?

This still involves the issue that what would be simple ebuild bumps
turn into a need to make more substantial changes to an ebuild.

And the concern still exists that a policy that says all new ebuilds
shall use EAPI foo might result in fewer new ebuilds.  Sure, they'll
have new and shiny fooness, but arguably I'd rather have more packages
supported on older EAPIs than fewer packages supported on newer ones.

If migrating to newer EAPIs is so simple, why aren't more doing it already?

Rich



Re: [gentoo-dev] Re: [RFC] Drop EAPI=0 requirement for system packages.

2012-10-18 Thread Pacho Ramos
On Thu, 2012-10-18 at 09:36 -0400, Rich Freeman wrote:
> On Thu, Oct 18, 2012 at 12:07 AM, Ryan Hill  wrote:
> > On Wed, 17 Oct 2012 15:00:12 -0400
> > Rich Freeman  wrote:
> >> I think the whole developers-can't-handle-47-EAPIs thing is a red
> >> herring.  The fact that there are packages written in Erlang in the
> >> tree doesn't cause me any issues even though I haven't had to do any
> >> work in Erlang.  If I ever wanted to maintain such a package then I'd
> >> take the time to learn it as needed.  Likewise, if I wanted to
> >> maintain a package that used EAPI joe and I really prefer to work in
> >> EAPI fred, then I'd revise it at my next convenience.
> >
> > Well, it's not just about ebuilds you maintain.  Think about something
> > like the gcc-porting trackers where you have to touch a lot of ebuilds
> > across the tree.  You really do have to have a working knowledge of the
> > differences between EAPIs to do so.  My browser bookmark to the EAPI
> > cheatsheet is one of the more frequently used as it is.
> 
> Can't you just ask the maintainers to fix their ebuilds?  And if they
> don't respond or at least cooperate, well, then treeclean them.  I
> don't think that library maintainers should have to bend over
> backwards to fix reverse dependencies, within reason.  If out of the
> whole tree two packages are blocking an upgrade, give a deadline or
> treeclean them.  If we have 47 bazillion packages that don't work on
> the newer lib, then slot it and bug upstream.
> 
> I do agree that trying to auto-mangle ebuilds from 47 different EAPIs
> doesn't make sense.  Just assign a bug to the maintainer saying "do
> this to your ebuild, or get it on EAPI foo so that I can fix it, by
>  or it is gone."  The deadline is important - I've seen a
> pattern on -dev where bugs linger without deadlines for months, and
> then a deadline of two days is imposed, and then a big flame war
> breaks out.  Just set a deadline up-front and make it reasonable.
> 
> Rich
> 
> 

I didn't think eapi4 features were still "unfamiliar" to so many
people... let's say, what about deprecating eapi 0, 1 and 2 for newer
ebuilds? Is eapi2 also so unfamiliar that we can't require it as the
minimum eapi for newer ebuilds (the eapi3 changes look minor when
compared with eapi2)?




[gentoo-dev] Re: reworking python-wrapper and wrapper scripts.

2012-10-18 Thread Mike Gilbert
On Thu, Oct 18, 2012 at 5:15 AM, Brian Harring  wrote:
> If folks haven't looked at python_generate_wrapper_scripts in
> python.eclass, I'd suggest doing so.  For examples of its usage, grep
> for 'python_generate_wrapper_scripts' in /usr/bin/; any place you see
> it, look for -${PYTHON_TARGETS} (for example,
> /usr/bin/sphinx-build{,-2.7,-3.2}).
>
> Each usage there is a separate custom script for that specific binary;
> if there is a bug in the script, well, we're screwed- requires
> re-merging the package.
>
> This setup, at least on my hardware, is .04s added to every
> invocation; this is ignoring the inode cost for each, and the issue if
> a bug ever appears in the script generation code (in which case we're
> screwed- would require re-merging the package).
>
> In parallel, we've got python-wrapper (ls /usr/bin/python -l); this is
> provided by eselect-python and basically discerns what the active
> python version is, using that in the absence of any directives.
> This is implemented in C, and is reasonably sane; the overhead for
> that is basically non-existent.
>
> Roughly, I'm proposing we do away with python eclass's
> generate_python_wrapper_scripts generation of a script, instead having
> that just symlink to a binary provided by eselect-python that handles
> this.  This centralizes the implementation (fix in one spot), and
> would allow a C version to be used- basically eliminating the
> overhead.
>
>
> There's a trick to this; currently, those generated scripts hardcode
> the allowed/known python versions for that package.  We obviously have
> to preserve that; I propose we shove it into the symlink path.
>
> Basically, we add a /usr/libexec/python directory; within it, we have
> a wrapper binary (explained below), and a set of symlinks pointing at
> the root of that directory.  To cover our current python versions, the
> following would suffice:
>
> for x in {2.{4,5,6,7},3.{0,1,2,3,4}}-cpy 2.5-jython 2.7-pypy-1.{7,8} \
>   2.7-pypy-1.9; do
>   ln -s ./ /usr/libexec/python/$x
> done
>
> While that seems insane, there is a reason; via that, we can encode
> the allowed versions into the symlink.  Using pkgcore's pquery for
> example (which should support cpy: 2.5, 2.6, 2.7, 3.1, 3.2, 3.3)
> instead of a wrapper script at /usr/bin/pquery, we'd have thus:
>
> targets=( 2.{5,6,7}-cpy 3.{1,2,3}-cpy )
> targets=$(IFS=/;echo -n "${targets[*]}")
> # This results in
> # targets=2.5-cpy/2.6-cpy/2.7-cpy/3.1-cpy/3.2-cpy/3.3-cpy
> ln -s "/usr/libexec/python/${targets}/wrapper" \
>   /usr/bin/pquery
>
> /usr/libexec/python/wrapper upon invocation, takes a look at argv[0];
> sees how it was invoked basically.  This will be the /usr/bin/whatever
> pathway.  It reads the symlink, in the process getting the allowed
> versions and preferred order of the versions.
>
> Few notes; vast majority of filesystems will store the symlink target
> into the core inode if at all possible- in doing so, this avoids
> wasting an inode and is only limited by the length of the target.
> That length is capped by PATH_MAX- which can range from 256 to 4k (or
> higher).
>
> For the pquery example above, that comes out to ~73 bytes for the
> symlink pathway; well under PATH_MAX.
>
> For the scenarios where PATH_MAX caps the symlink pathway, or for
> whatever reason we don't want to use that trick, a tree of files
> contained within /usr/libexec/python/ holding the allowed versions for
> the matching pathway would suffice.
>
> Either proposal here would be far faster than what we've got now; also
> will use less space (ancillary benefit).
>
> One subtle aspect here is that if we did this, it makes it possible to
> avoid losing the invocation information- currently if you did
> `/usr/bin/python3.2 $(which sphinx-build) blah`, because of how things
> are implemented now (specifically the two layers of wrappers)- you'll
> get python2.7 running that sphinx-build invocation.
>
> This is wrong (it's directly discarding what the invocation
> requested), although you're only going to see it for scripts that
> do python introspection.
>
> Via doing the restructuring I'm mentioning above, that issue can be
> fixed, while making things faster/saner.
>
> On a related note; we currently install multiple versions of the same
> script- the only difference being the shebang.  If one ignores the
> shebang, in some cases this is necessary- where the script is 2to3
> translated, and the code for py2k vs py3k differs.  For most, the only
> difference is in the shebang however.
>
> While it's minor in space savings, it's possible to eliminate that
> redundancy via a shebang target that looks at the pathway it was
> invoked via.  Fairly easy actually, and basically zero overhead if
> done.
>
> Either way, thoughts?
>
> What I'm proposing isn't perfect, but I'm of the view it's a step up
> from what's in place now- and via centralizing this crap, makes it
> easier to change/maintain this going forward as necessary.
> ~harring

If we are somehow going to elimi

Re: [gentoo-dev] reworking python-wrapper and wrapper scripts.

2012-10-18 Thread Brian Dolbec
On Thu, 2012-10-18 at 02:15 -0700, Brian Harring wrote:
> If folks haven't looked at python_generate_wrapper_scripts in
> python.eclass, I'd suggest doing so.  For examples of its usage, grep
> for 'python_generate_wrapper_scripts' in /usr/bin/; any place you see
> it, look for -${PYTHON_TARGETS} (for example,
> /usr/bin/sphinx-build{,-2.7,-3.2}).
> 
> Each usage there is a separate custom script for that specific binary; 
> if there is a bug in the script, well, we're screwed- requires 
> re-merging the package.
> 
> This setup, at least on my hardware, is .04s added to every 
> invocation; this is ignoring the inode cost for each, and the issue if 
> a bug ever appears in the script generation code (in which case we're 
> screwed- would require re-merging the package).
> 
> In parallel, we've got python-wrapper (ls /usr/bin/python -l); this is
> provided by eselect-python and basically discerns what the active
> python version is, using that in the absence of any directives.
> This is implemented in C, and is reasonably sane; the overhead for
> that is basically non-existent.
> 
> Roughly, I'm proposing we do away with python eclass's 
> generate_python_wrapper_scripts generation of a script, instead having 
> that just symlink to a binary provided by eselect-python that handles 
> this.  This centralizes the implementation (fix in one spot), and 
> would allow a C version to be used- basically eliminating the
> overhead.
> 
> 
> There's a trick to this; currently, those generated scripts hardcode 
> the allowed/known python versions for that package.  We obviously have 
> to preserve that; I propose we shove it into the symlink path.
> 
> Basically, we add a /usr/libexec/python directory; within it, we have 
> a wrapper binary (explained below), and a set of symlinks pointing at 
> the root of that directory.  To cover our current python versions, the 
> following would suffice:
> 
> for x in {2.{4,5,6,7},3.{0,1,2,3,4}}-cpy 2.5-jython 2.7-pypy-1.{7,8} \
>   2.7-pypy-1.9; do
>   ln -s ./ /usr/libexec/python/$x 
> done
> 
> While that seems insane, there is a reason; via that, we can encode 
> the allowed versions into the symlink.  Using pkgcore's pquery for 
> example (which should support cpy: 2.5, 2.6, 2.7, 3.1, 3.2, 3.3) 
> instead of a wrapper script at /usr/bin/pquery, we'd have thus:
> 
> targets=( 2.{5,6,7}-cpy 3.{1,2,3}-cpy )
> targets=$(IFS=/;echo -n "${targets[*]}")
> # This results in
> # targets=2.5-cpy/2.6-cpy/2.7-cpy/3.1-cpy/3.2-cpy/3.3-cpy
> ln -s "/usr/libexec/python/${targets}/wrapper" \
>   /usr/bin/pquery
> 
> /usr/libexec/python/wrapper upon invocation, takes a look at argv[0]; 
> sees how it was invoked basically.  This will be the /usr/bin/whatever 
> pathway.  It reads the symlink, in the process getting the allowed 
> versions and preferred order of the versions.
> 
> Few notes; vast majority of filesystems will store the symlink target 
> into the core inode if at all possible- in doing so, this avoids 
> wasting an inode and is only limited by the length of the target.  
> That length is capped by PATH_MAX- which can range from 256 to 4k (or 
> higher).
> 
> For the pquery example above, that comes out to ~73 bytes for the 
> symlink pathway; well under PATH_MAX.
> 
> For the scenarios where PATH_MAX caps the symlink pathway, or for 
> whatever reason we don't want to use that trick, a tree of files 
> contained within /usr/libexec/python/ holding the allowed versions for 
> the matching pathway would suffice.
> 
> Either proposal here would be far faster than what we've got now; also 
> will use less space (ancillary benefit).
> 
> One subtle aspect here is that if we did this, it makes it possible to 
> avoid losing the invocation information- currently if you did 
> `/usr/bin/python3.2 $(which sphinx-build) blah`, because of how things 
> are implemented now (specifically the two layers of wrappers)- you'll 
> get python2.7 running that sphinx-build invocation.
> 
> This is wrong (it's directly discarding what the invocation 
> requested), although you're only going to see it for scripts that 
> do python introspection.
> 
> Via doing the restructuring I'm mentioning above, that issue can be 
> fixed, while making things faster/saner.
> 
> On a related note; we currently install multiple versions of the same 
> script- the only difference being the shebang.  If one ignores the 
> shebang, in some cases this is necessary- where the script is 2to3 
> translated, and the code for py2k vs py3k differs.  For most, the only 
> difference is in the shebang however.
> 

What if the invoking script does not need to be 2to3 translated (super
minimal python code) but the remaining python libs do?

> While it's minor in space savings, it's possible to eliminate that 
> redundancy via a shebang target that looks at the pathway it was 
> invoked via.  Fairly easy actually, and basically zero overhead if 
> done.
> 
> Either way, thoughts?
> 
> What I'm proposing isn't per

Re: [gentoo-dev] Re: [RFC] Drop EAPI=0 requirement for system packages.

2012-10-18 Thread Rich Freeman
On Thu, Oct 18, 2012 at 12:07 AM, Ryan Hill  wrote:
> On Wed, 17 Oct 2012 15:00:12 -0400
> Rich Freeman  wrote:
>> I think the whole developers-can't-handle-47-EAPIs thing is a red
>> herring.  The fact that there are packages written in Erlang in the
>> tree doesn't cause me any issues even though I haven't had to do any
>> work in Erlang.  If I ever wanted to maintain such a package then I'd
>> take the time to learn it as needed.  Likewise, if I wanted to
>> maintain a package that used EAPI joe and I really prefer to work in
>> EAPI fred, then I'd revise it at my next convenience.
>
> Well, it's not just about ebuilds you maintain.  Think about something
> like the gcc-porting trackers where you have to touch a lot of ebuilds
> across the tree.  You really do have to have a working knowledge of the
> differences between EAPIs to do so.  My browser bookmark to the EAPI
> cheatsheet is one of the more frequently used as it is.

Can't you just ask the maintainers to fix their ebuilds?  And if they
don't respond or at least cooperate, well, then treeclean them.  I
don't think that library maintainers should have to bend over
backwards to fix reverse dependencies, within reason.  If out of the
whole tree two packages are blocking an upgrade, give a deadline or
treeclean them.  If we have 47 bazillion packages that don't work on
the newer lib, then slot it and bug upstream.

I do agree that trying to auto-mangle ebuilds from 47 different EAPIs
doesn't make sense.  Just assign a bug to the maintainer saying "do
this to your ebuild, or get it on EAPI foo so that I can fix it, by
 or it is gone."  The deadline is important - I've seen a
pattern on -dev where bugs linger without deadlines for months, and
then a deadline of two days is imposed, and then a big flame war
breaks out.  Just set a deadline up-front and make it reasonable.

Rich



[gentoo-dev] Re: reworking python-wrapper and wrapper scripts.

2012-10-18 Thread Brian Harring
On Thu, Oct 18, 2012 at 12:00:43PM +0200, Dirkjan Ochtman wrote:
> On Thu, Oct 18, 2012 at 11:15 AM, Brian Harring  wrote:
> > Either way, thoughts?
> 
> It looks like you haven't looked at the python-r1 effort. That means
> you probably also aren't subscribed to the gentoo-python mailing list.
> I suggest you do both first. :)

Yeah, I'm not on the ML; as for the eclass, I read the source, and
there's nothing in there regarding wrappers currently.

The only thread I see that is relevant is
http://article.gmane.org/gmane.linux.gentoo.python/11 ; roughly the
same executable-wrapper idea.  What I'm spelling out here is the exact
mechanics of how to do it.

Re: python-r1; frankly... if folks are fine w/ the wrapper, I don't 
want it delayed waiting on python-r1 (or only used there).

As I laid out, what I'm proposing can be dropped in *now* for the
existing python eclass- it basically moves that logic out of the eclass
and into eselect-python.

It'll be faster/saner to maintain, less code in python eclass, and 
reusable going forward for the new attempts; plus I'm willing to do it 
(meaning it'll get done in the next week or two).

That said: +1 or -1, or somewhere in between?

~harring



[gentoo-dev] reworking python-wrapper and wrapper scripts.

2012-10-18 Thread Brian Harring
If folks haven't looked at python_generate_wrapper_scripts in
python.eclass, I'd suggest doing so.  For examples of its usage, grep
for 'python_generate_wrapper_scripts' in /usr/bin/; any place you see
it, look for -${PYTHON_TARGETS} (for example,
/usr/bin/sphinx-build{,-2.7,-3.2}).

Each usage there is a separate custom script for that specific binary; 
if there is a bug in the script, well, we're screwed- requires 
re-merging the package.

This setup, at least on my hardware, adds .04s to every
invocation; this is ignoring the inode cost for each, and the issue if 
a bug ever appears in the script generation code (in which case we're 
screwed- would require re-merging the package).

In parallel, we've got python-wrapper (ls /usr/bin/python -l); this is
provided by eselect-python and basically discerns what the active
python version is, using that in the absence of any directives.
This is implemented in C, and is reasonably sane; the overhead for
that is basically non-existent.

Roughly, I'm proposing we do away with python eclass's 
generate_python_wrapper_scripts generation of a script, instead having 
that just symlink to a binary provided by eselect-python that handles 
this.  This centralizes the implementation (fix in one spot), and 
would allow a C version to be used- basically eliminating the
overhead.


There's a trick to this; currently, those generated scripts hardcode 
the allowed/known python versions for that package.  We obviously have 
to preserve that; I propose we shove it into the symlink path.

Basically, we add a /usr/libexec/python directory; within it, we have 
a wrapper binary (explained below), and a set of symlinks pointing at 
the root of that directory.  To cover our current python versions, the 
following would suffice:

for x in {2.{4,5,6,7},3.{0,1,2,3,4}}-cpy 2.5-jython 2.7-pypy-1.{7,8} \
    2.7-pypy-1.9; do
  ln -s ./ /usr/libexec/python/$x 
done

While that seems insane, there is a reason; via that, we can encode 
the allowed versions into the symlink.  Using pkgcore's pquery for 
example (which should support cpy: 2.5, 2.6, 2.7, 3.1, 3.2, 3.3) 
instead of a wrapper script at /usr/bin/pquery, we'd have thus:

targets=( 2.{5,6,7}-cpy 3.{1,2,3}-cpy )
targets=$(IFS=/;echo -n "${targets[*]}")
# This results in
# targets=2.5-cpy/2.6-cpy/2.7-cpy/3.1-cpy/3.2-cpy/3.3-cpy
ln -s "/usr/libexec/python/${targets}/wrapper" \
  /usr/bin/pquery

Upon invocation, /usr/libexec/python/wrapper takes a look at argv[0] to
see how it was invoked; this will be the /usr/bin/whatever pathway.  It
reads the symlink, in the process getting the allowed versions and
their preferred order.
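
That lookup could be sketched roughly like this in shell (illustration
only; the proposed wrapper is a C binary, and the exact prefix/suffix
handling here is an assumption about the final layout):

```shell
#!/bin/sh
# Sketch of the wrapper's version lookup: given the symlink target read
# via readlink(argv[0]), e.g.
#   /usr/libexec/python/2.5-cpy/2.6-cpy/3.2-cpy/wrapper
# recover the allowed versions in preference order.
parse_versions() {
    target=$1
    target=${target#/usr/libexec/python/}   # drop the directory prefix
    target=${target%/wrapper}               # drop the binary name
    printf '%s\n' "$target" | tr '/' ' '    # slash-separated -> list
}

parse_versions "/usr/libexec/python/2.5-cpy/2.6-cpy/3.2-cpy/wrapper"
# 2.5-cpy 2.6-cpy 3.2-cpy
```

The wrapper would then try each listed version in order against the
currently selected interpreters.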

A few notes: the vast majority of filesystems will store the symlink
target in the core inode if at all possible; in doing so, this avoids
wasting an inode and is limited only by the length of the target.
That length is capped by PATH_MAX, which can range from 256 to 4k (or
higher).

For the pquery example above, that comes out to ~73 bytes for the 
symlink pathway; well under PATH_MAX.

For the scenarios where PATH_MAX caps the symlink pathway, or for 
whatever reason we don't want to use that trick, a tree of files 
contained within /usr/libexec/python/ holding the allowed versions for 
the matching pathway would suffice.
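
That fallback could look something like this (hypothetical layout; the
`versions/` subdirectory name and file format are assumptions, not part
of the proposal):

```shell
#!/bin/sh
# Hypothetical fallback: instead of encoding the allowed versions in
# the symlink target, read them from a per-script file under
# /usr/libexec/python/, keyed by the script's basename.  E.g.
#   /usr/libexec/python/versions/pquery
# containing: 2.5-cpy 2.6-cpy 2.7-cpy 3.1-cpy 3.2-cpy 3.3-cpy
versions_for() {
    # $1 = argv[0]; $2 = root of the version-file tree
    cat "$2/${1##*/}"
}
```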

Either proposal here would be far faster than what we've got now; also 
will use less space (ancillary benefit).

One subtle aspect here is that if we did this, it makes it possible to 
avoid losing the invocation information- currently if you did 
`/usr/bin/python3.2 $(which sphinx-build) blah`, because of how things 
are implemented now (specifically the two layers of wrappers)- you'll 
get python2.7 running that sphinx-build invocation.

This is wrong (it's directly discarding what the invocation 
requested), although you're only going to see it for scripts that 
do python introspection.

Via doing the restructuring I'm mentioning above, that issue can be 
fixed, while making things faster/saner.

On a related note; we currently install multiple versions of the same 
script- the only difference being the shebang.  If one ignores the 
shebang, in some cases this is necessary- where the script is 2to3 
translated, and the code for py2k vs py3k differs.  For most, the only 
difference is in the shebang however.

While it's minor in space savings, it's possible to eliminate that 
redundancy via a shebang target that looks at the pathway it was 
invoked via.  Fairly easy actually, and basically zero overhead if 
done.

Either way, thoughts?

What I'm proposing isn't perfect, but I'm of the view it's a step up 
from what's in place now- and via centralizing this crap, makes it 
easier to change/maintain this going forward as necessary.
~harring



Re: [gentoo-dev] Proposal: removing "server" profile variants from profiles.desc

2012-10-18 Thread Markos Chandras
On Thu, Oct 18, 2012 at 1:37 AM, Peter Stuge  wrote:
> Ben Kohler wrote:
>> In my ideal world ("if I were king"), today I would delist them
>> from profiles.desc, and send out a news item warning of their
>> immediate deprecation and planned removal 3 months from now.
>
> I'm strongly in favor of this, but of course I am no developer.
>
>
> //Peter
>

This is what will happen when someone finds the time to write this
news item (as I already explained...)

-- 
Regards,
Markos Chandras / Gentoo Linux Developer / Key ID: B4AFF2C2