Re: Owner of MacPorts account on GitHub

2015-11-12 Thread Mojca Miklavec
On Thu, Nov 12, 2015 at 1:34 PM, Clemens Lang wrote:
> Hi Mojca,
>
> - On 12 Nov, 2015, at 10:04, Mojca Miklavec mo...@macports.org wrote:
>
>> Clemens, are you willing to set up some (temporary) repository with
>> regular updates, even if it's not perfect yet, just so that we can
>> follow the changes? (I can offer to run a script for incremental
>> updates on my server if there is a need for that.)
>
> I have a server to do the incremental updating myself, it's just a matter
> of finishing the conversion at a high quality level and automating the
> update (which shouldn't be too hard, but I haven't gotten around to doing
> it).
>
> If anybody wants to help out with that, that would be welcome.

I can check the list of names against mine. It's weird that you had to
manually add some names to it.

Regarding the definition of a "clean repository", I have some
trouble understanding all the issues. I would assume that many GSoC
projects would copy the core, do something in their branch and leave
it at that. Do we need to manually identify all these cases and create
a branch of the main tree with those changes?

How do you treat the cases when developers copy part of the code
(either ports or core code) to their personal tree and then merge it
back to the main tree?

How many repositories would you make out of contrib?

(Subversion simply has a slightly different philosophy, and I never
thought about the many problems arising from trying to "switch the
philosophy" when you end up with a completely different set of
repositories than in the initial tree.)

Mojca


Re: Owner of MacPorts account on GitHub

2015-11-12 Thread Sean Farley

Clemens Lang  writes:

> Hi,
>
> - On 12 Nov, 2015, at 19:48, Sean Farley s...@macports.org wrote:
>
>> Sure, it could include Jira, HipChat, and Bamboo, if you want. I only
>> said 'lost cause' because GitHub is so popular for open source projects.
>
> I don't think we would have a use case for HipChat, but I like Jira (even
> though Trac isn't too bad either and moving would be considerable effort).

I, too, like IRC more than HipChat :-) So much so that I wrote an ERC
module to make the bitlbee support better:

http://bitbucket.org/seanfarley/erc-hipchatify

> The Bamboo agent runs on OS X, which could be used with our build servers,
> but support for older Java versions is deprecated, which could become a
> problem for our older OS buildbots :/

That's a good point.

> Is Crucible possible as well? Not that we currently do any code reviews,
> but it could become a useful development model to make contributions
> easier.

Sure but I've actually never used it. I mostly just use Bitbucket
commenting.

>> We have the Bamboo service which integrates with Bitbucket and we can
>> set up for open source projects. I can do it personally so as to skip
>> the form filling out.
>
>> Bamboo just spins up Amazon VMs, but having these dedicated machines from
>> MacOSForge is pretty nice.
>
> Yeah, the Amazon VMs don't really buy us anything apart from the Linux
> base builder, which is the least of our problems. OS X on Amazon EC2 isn't
> going to happen.

Yeah, I totally didn't even think about that. >_< We'd still need
somewhere that has physical Apple hardware (I'm pretty sure there is
some extra hardware here, I'd just need to ask IT).


Re: Owner of MacPorts account on GitHub

2015-11-12 Thread Clemens Lang
Hi,

- On 12 Nov, 2015, at 19:48, Sean Farley s...@macports.org wrote:

> Sure, it could include Jira, HipChat, and Bamboo, if you want. I only
> said 'lost cause' because GitHub is so popular for open source projects.

I don't think we would have a use case for HipChat, but I like Jira (even
though Trac isn't too bad either and moving would be considerable effort).
The Bamboo agent runs on OS X, which could be used with our build servers,
but support for older Java versions is deprecated, which could become a
problem for our older OS buildbots :/

Is Crucible possible as well? Not that we currently do any code reviews,
but it could become a useful development model to make contributions
easier.


> We have the Bamboo service which integrates with Bitbucket and we can
> set up for open source projects. I can do it personally so as to skip
> the form filling out.

> Bamboo just spins up Amazon VMs, but having these dedicated machines from
> MacOSForge is pretty nice.

Yeah, the Amazon VMs don't really buy us anything apart from the Linux
base builder, which is the least of our problems. OS X on Amazon EC2 isn't
going to happen.

-- 
Clemens Lang


Re: Owner of MacPorts account on GitHub

2015-11-12 Thread Sean Farley

Ryan Schmidt  writes:

> On Nov 12, 2015, at 3:04 AM, Mojca Miklavec wrote:
>> On Wed, Nov 11, 2015 at 11:37 AM, Ryan Schmidt wrote:
>>> On Nov 10, 2015, at 10:57 PM, Mojca Miklavec wrote:
 
 At the moment it's the lack of Trac's functionality to browse the tree
 and the logs that has triggered the "demand" for this.
>>> 
>>> That will get fixed. In asking about github, we're just trying to explore 
>>> all options.
>> 
>> I hope so at least. But until this gets fixed (and it might take a
>> while before it does), it would be extremely helpful to have an
>> up-to-date git mirror to be able to browse through the changes in a
>> slightly more "user-friendly" way than with the command-line only.
>> 
>> I could use
>>https://github.com/neverpanic/macports-ports
>> now that you mentioned it, but that one is also not being synced
>> automatically (yet).
>> 
>> Clemens, are you willing to set up some (temporary) repository with
>> regular updates, even if it's not perfect yet, just so that we can
>> follow the changes? (I can offer to run a script for incremental
>> updates on my server if there is a need for that.)
>
> We will get the existing infrastructure back up and running soon. Please bear 
> with us a little while longer.
>
>
>> On Wed, Nov 11, 2015 at 11:39 PM, Sean Farley wrote:
>>> Clemens Lang writes:
>>> 
 [1] https://github.com/neverpanic?tab=repositories
 [2] https://github.com/neverpanic/svn2git
 [3] https://github.com/neverpanic/macports-svn2git-rules
>>> 
>>> Sounds like this is a lost cause for me, but for what it's worth, I work
>>> at Bitbucket now and we could offer hosting (plus direct admin support).
>> 
>> What exactly do you consider "lost" and why?
>
> Good to know, Sean, and thanks for the offer. Would that refer only to the 
> usual services that BitBucket offers, or additional services as well? Mac OS 
> Forge currently provides us much more than just a repository, issue tracker and 
> wiki.

Sure, it could include Jira, HipChat, and Bamboo, if you want. I only
said 'lost cause' because GitHub is so popular for open source projects.

>> At the moment we are (hopefully) discussing just a mirror of the code
>> being put on some server, still hoping that everything will get back
>> to normal. This mirroring can be done on any of the existing services
>> (github, bitbucket, gitlab, ...) or even on all of them at the same
>> time.
>> 
>> Mojca
>> 
>> (Off-topic, but: If we were discussing other services, the biggest
>> problem would probably be all the buildbots anyway, and I doubt that
>> BitBucket would be interested in helping out with that. Purchasing the
>> necessary hardware alone would be expensive enough. We currently have
>> 10 virtual machines and we could easily request 5 or more right now
>> [two for 10.11, three for libc++ on 10.6-10.8, potentially another
>> bunch of them for universal binaries] and two new ones each year after
>> the release of a new OS. All of that has to run on Macs with
>> sufficient memory, disk space and potentially more than one core per
>> buildbot.)

We have the Bamboo service which integrates with Bitbucket and we can
set up for open source projects. I can do it personally so as to skip
the form filling out.

> There are currently six buildbot slave VMs, five of which run OS X (Snow 
> Leopard, Lion, Mountain Lion, Mavericks, Yosemite) and run two buildbot 
> builders each -- one for ports, one for base -- and one which runs Oracle 
> Linux with a single buildbot builder for base. A seventh VM has been 
> requested for El Capitan and I've been told that should be a resource 
> problem, it's just a matter of setting it up, which will happen soon.
>
> Hardware for a replacement buildbot system is not a problem, if that becomes 
> necessary.

Bamboo just spins up Amazon VMs, but having these dedicated machines from
MacOSForge is pretty nice.


Re: Volunteer for a workshop on "setting up your own buildbot/buildslave"? (Was: Experiences with El Capitan)

2015-11-12 Thread Ryan Schmidt
On Nov 12, 2015, at 10:58 AM, Michael David Crawford wrote:
> 
> On Thu, Nov 12, 2015 at 5:43 AM, Ryan Schmidt wrote:
>> 
>> On Nov 12, 2015, at 6:55 AM, Michael David Crawford wrote:
>> 
>>> There have been plenty of times that the only Mac available to me for
>>> development has been my mother's Tiger G4 iMac.  I was at least able
>>> to install a PowerPC backport of Firefox.
>>> 
>>> What was most upsetting to me when I used it was that I often had to
>>> build my own tools from source, because the powerpc binaries had been
>>> withdrawn.
>> 
>> Not sure if you were talking about MacPorts or other projects, but MacPorts 
>> has never offered PowerPC binaries. We started offering binaries with OS X 
>> 10.6, for x86_64 only.
> 
> Not MacPorts specifically but that has been my experience with
> numerous software packages.

In many cases, the software is no longer compatible with the older versions of 
OS X that PowerPC machines require, so the developers couldn't provide a binary 
even if they wanted to, because it no longer compiles. There probably are other 
projects that would still work on PowerPC but don't provide binaries because 
few people use PowerPC machines anymore. In any case, it's a matter you would 
have to take up with the particular project in question.


Re: Nonexistent mpi.default variable lets port upgrade fail

2015-11-12 Thread David Strubbe
Discussed and resolved here: https://trac.macports.org/ticket/49669

David

On Thu, Nov 12, 2015 at 3:33 AM, David Evans  wrote:

> On 11/11/15 11:36 PM, Marko Käning wrote:
> > Ooops, what does this error suddenly happen?
> >
> > ---
> > --->  Updating MacPorts base sources using rsync
> > MacPorts base version 2.3.4 installed,
> > MacPorts base version 2.3.4 downloaded.
> > --->  Updating the ports tree
> > --->  MacPorts base is already the latest version
> >
> > The ports tree has been updated. To upgrade your installed ports, you
> should run
> >   port upgrade outdated
> > The following installed ports are outdated:
> > kdetoys4   4.10.5_0 < 4.10.5_1
> > Error: Unable to open port: can't read "mpi.default": no such variable
> > To report a bug, follow the instructions in the guide:
> > http://guide.macports.org/#project.tickets
> > ---
> >
> > Greets,
> > Marko
>
> The error appears to come from port boost (there may be others), which is broken
> (its Portfile fails to parse) after
> the recent update to the mpi portgroup in r142438.
>
> Dave
>


Re: Volunteer for a workshop on "setting up your own buildbot/buildslave"? (Was: Experiences with El Capitan)

2015-11-12 Thread Ryan Schmidt

On Nov 12, 2015, at 6:55 AM, Michael David Crawford wrote:

> There have been plenty of times that the only Mac available to me for
> development has been my mother's Tiger G4 iMac.  I was at least able
> to install a PowerPC backport of Firefox.
> 
> What was most upsetting to me when I used it was that I often had to
> build my own tools from source, because the powerpc binaries had been
> withdrawn.

Not sure if you were talking about MacPorts or other projects, but MacPorts has 
never offered PowerPC binaries. We started offering binaries with OS X 10.6, 
for x86_64 only.


> It's one thing to stop supporting a product, quite another to actively
> prevent its use.




Re: Owner of MacPorts account on GitHub

2015-11-12 Thread Clemens Lang
Hi Mojca,

- On 12 Nov, 2015, at 10:04, Mojca Miklavec mo...@macports.org wrote:

> Clemens, are you willing to set up some (temporary) repository with
> regular updates, even if it's not perfect yet, just so that we can
> follow the changes? (I can offer to run a script for incremental
> updates on my server if there is a need for that.)

I have a server to do the incremental updating myself, it's just a matter
of finishing the conversion at a high quality level and automating the
update (which shouldn't be too hard, but I haven't gotten around to doing
it).

If anybody wants to help out with that, that would be welcome.
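
In the meantime, anyone who just wants something browsable could run a plain
git-svn mirror and push it somewhere; a rough sketch (note this uses git-svn,
not my svn2git ruleset, and the local path and push target are placeholders):

    #!/bin/sh
    # Assumes a one-time "git svn clone" of the MacPorts Subversion tree has
    # already been done into $REPO; run this from cron every few minutes.
    REPO=/srv/mirror/macports-ports
    cd "$REPO" || exit 1
    git svn fetch      # pull any new Subversion revisions
    git svn rebase     # fast-forward the local branch to the newest revision
    git push --force git@github.com:example/macports-ports.git master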

-- 
Clemens Lang


Re: Volunteer for a workshop on "setting up your own buildbot/buildslave"?

2015-11-12 Thread Ryan Schmidt

On Nov 12, 2015, at 6:01 AM, Mojca Miklavec wrote:

> On Wed, Nov 11, 2015 at 12:15 PM, Ryan Schmidt wrote:
>> 
>> Another option: It should be possible to write code to detect whether any 
>> files in a destroot use a C++ library, and if so, MacPorts could include the 
>> C++ library name in the archive filename, otherwise don't. At archive fetch 
>> time, MacPorts wouldn't know whether a port uses C++ or not, but it could 
>> try to fetch both filenames and use whichever one exists.
> 
> Just one problem: let's assume that the package mirrors end up with a
> package built against libstdc++, but not one built against libc++
> (presumably because there was a build error). How would the client
> know whether to fetch the package without libc++ in the name (assuming
> it doesn't contain any C++ code) or whether to build one itself? This
> would only work if all libstdc++ archives contained libstdc++ in
> the name and the files without any "name extension" were
> guaranteed to be portable.

Right, this would only work if the C++ library name is in every package name. 
And that would look silly for the majority of ports, which don't use C++ at 
all. So this idea of mine isn't working.

> Plus one additional concern. There is a chance that we would have to
> use (or at least deliberately decide to use) clang-3.4 as the default
> compiler rather than gcc 4.2 on the system with libc++. Then users of
> libc++/clang-3.4 would get code compiled with gcc 4.2 when fetching
> binaries. I'm not saying this would necessarily lead to any problems,
> but ...

Since you're talking about gcc 4.2, you're talking about Snow Leopard and 
earlier.

gcc 4.2 cannot be used with libc++.

According to https://trac.macports.org/wiki/LibcxxOnOlderSystems, the 
bootstrapping procedure for those old systems does involve changing 
default_compilers so that clang-3.4 is used. So those steps would be done on 
user systems, and also on the buildbot builders. Doing it on the buildbot 
builders is a one-time thing that would be no problem. Once we get to the point 
of inviting users to switch to this, we would want to have this automated. For 
example, we might offer a libc++ flavor of the MacPorts installer that would do 
it for the user.
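
For reference, the relevant macports.conf settings on such a system are roughly
the following (paraphrased from memory; the wiki page is the authoritative
source, and the exact clang version it recommends may differ):

    # macports.conf on a 10.6-10.8 machine switched to libc++
    cxx_stdlib          libc++
    # build locally until binaries for this configuration exist somewhere
    buildfromsource     always
    # use a MacPorts clang instead of the system gcc 4.2
    default_compilers   macports-clang-3.4
    # delete_la_files yes   (still under discussion in this thread)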





Re: Volunteer for a workshop on "setting up your own buildbot/buildslave"?

2015-11-12 Thread Mojca Miklavec
On Wed, Nov 11, 2015 at 12:15 PM, Ryan Schmidt wrote:
>
> Another option: It should be possible to write code to detect whether any 
> files in a destroot use a C++ library, and if so, MacPorts could include the 
> C++ library name in the archive filename, otherwise don't. At archive fetch 
> time, MacPorts wouldn't know whether a port uses C++ or not, but it could try 
> to fetch both filenames and use whichever one exists.

Just one problem: let's assume that the package mirrors end up with a
package built against libstdc++, but not one built against libc++
(presumably because there was a build error). How would the client
know whether to fetch the package without libc++ in the name (assuming
it doesn't contain any C++ code) or whether to build one itself? This
would only work if all libstdc++ archives contained libstdc++ in
the name and the files without any "name extension" were
guaranteed to be portable.
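
To make the problem concrete, the "try both filenames" fallback on the client
side would look roughly like this (port name, version and URL are made up):

    base=https://packages.macports.org/foo
    tagged=foo-1.0_0.darwin_10.x86_64.libc++.tbz2   # archive known to link libc++
    plain=foo-1.0_0.darwin_10.x86_64.tbz2           # archive assumed to use no C++ at all
    curl -fsLO "$base/$tagged" \
      || curl -fsLO "$base/$plain" \
      || echo "no usable archive, building from source"

This is only safe if every archive that links a C++ runtime carries the tag,
i.e. an un-tagged archive is guaranteed to be portable across C++ runtimes.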

Plus one additional concern. There is a chance that we would have to
use (or at least deliberately decide to use) clang-3.4 as the default
compiler rather than gcc 4.2 on the system with libc++. Then users of
libc++/clang-3.4 would get code compiled with gcc 4.2 when fetching
binaries. I'm not saying this would necessarily lead to any problems,
but ...

Mojca


Re: Volunteer for a workshop on "setting up your own buildbot/buildslave"?

2015-11-12 Thread Ryan Schmidt
On Nov 12, 2015, at 4:19 AM, Mojca Miklavec wrote:

> On Tue, Nov 10, 2015 at 7:21 PM, Joshua Root wrote:
>> On 2015-11-11 00:26 , Mojca Miklavec wrote:
>>> If we start including macosx_deployment_target, macosx_sdk, prefix,
>>> applications_dir, frameworks_dir, ... we'll sooner or later end up in
>>> an exponential mess
>> 
>> These are the type of settings that are associated with an entire source
>> in archive_sites.conf, because all the archives from a given source will
>> have them set the same. If any of these settings for a source don't
>> match the ones used locally, the source is simply not used. Putting
>> cxx_stdlib in here as well would be a good fit.
> 
> Are you saying that putting
>   cxx_stdlib libc++
> inside archive_sites.conf instead of macports.conf (and possibly

Not instead of: in addition to. The setting in macports.conf specifies what C++ 
library will be used for software installed on your Mac with MacPorts. The 
hypothetical entries in archive_sites.conf would specify which C++ library the 
packages on a given packages server were built with.
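
If I remember the archive_sites.conf format correctly, such a hypothetical
source entry would look roughly like the following (the name and URL are
invented, and the cxx_stdlib line is the new part being discussed here):

    name                macports_archives_libcxx
    # hypothetical second package server carrying libc++ builds for 10.6-10.8
    urls                https://packages-libcxx.example.org/
    type                tbz2
    prefix              /opt/local
    applications_dir    /Applications/MacPorts
    frameworks_dir      /opt/local/Library/Frameworks
    cxx_stdlib          libc++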

> making some additional changes in the code) would automatically make
> sure that one would not have to write an explicit:
>   buildfromsource always
> and MacPorts would make sure that one would not fetch the binary from
> a server with binaries using different settings?
> 
> In a way that would make sense to me as well, but I don't exactly
> understand how everything works.




Re: Volunteer for a workshop on "setting up your own buildbot/buildslave"?

2015-11-12 Thread Ryan Schmidt
On Nov 12, 2015, at 5:39 AM, Mojca Miklavec wrote:
> 
> On Wed, Nov 11, 2015 at 2:18 PM, Ryan Schmidt wrote:
> 
>> Having all of a port's packages -- for all platforms, variants and build 
>> options -- in a single directory is nice though because it's a single 
>> directory listing to look at to figure out if a particular binary exists.
> 
> Truth be told, it would be even nicer if we had a nice website where
> you would get all the information collected in one place: which
> binaries exist, when the builds were attempted (with a link to the
> build) and whether the build failed or succeeded. If we had such a
> site, this argument would no longer be important.

I agree. I haven't yet put that into the new website I'm working on, but I do 
want to.




Re: Volunteer for a workshop on "setting up your own buildbot/buildslave"?

2015-11-12 Thread Ryan Schmidt
On Nov 12, 2015, at 5:20 AM, Mojca Miklavec wrote:

> On Wed, Nov 11, 2015 at 12:15 PM, Ryan Schmidt wrote:
> 
>> I like the idea of only adding a non-default cxxstdlib to the filename, 
>> since that would allow all the existing archives to continue to be valid. 
>> However if we were to switch to xz compression for archives, we might want 
>> to repackage everything that way, in which case it wouldn't matter.
> 
> Yes, it would be helpful to repackage everything. But despite that I
> don't see the added value of adding the default stdlib to the package
> name. Or in particular, I would not be in favour of adding libc++ to
> the filenames on >= 10.9. (First of all, we don't add "-variant" to
> the names either and it's nicer to keep names shorter. Second of all,
> this is mostly relevant for < 10.9, I don't believe that anyone is
> trying to support libstdc++ on >= 10.9, so in the long run we'll end
> up with shorter names overall.)

These are all good points.


>> I agree that since we don't have a mechanism for detecting C++ software, it 
>> would be simplest to add the tag to all archives, even though that will 
>> result in a rather large increase in disk usage on the packages servers. If 
>> the packages server were using some sort of advanced automatic deduplicating 
>> filesystem the impact might not be so large, since many ports would 
>> hopefully build identically, but I don't know what filesystems do that or if 
>> we're using one of them.
> 
> If that is going to be a problem, we could at some point in future
> start thinking about "small" and "big" mirrors. Small mirrors (those
> that don't have sufficient disk space) could include just the latest
> versions of software and/or just the latest OSes, while the
> "big" mirrors could include everything. Old versions of binaries
> (older than one year and older than the latest successfully built
> version of that software) are seldom needed and even then fetching
> them might be highly problematic if one doesn't take extra care to
> also fetch the appropriate version of all the dependencies.

Good point: old outdated archives are probably an even bigger space hog on the 
packages server. We have a script to clean up old unneeded packages, but no 
process in place to run it automatically. If we set up a job to run that 
regularly, that would probably more than make up for any extra disk space the 
libc++ packages would use.
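
Setting that up would just be a cron entry on the packages server, something
like the following (the script path is a placeholder, not the actual script):

    # /etc/crontab on the packages server: prune old archives every Sunday at 03:00
    0 3 * * 0    root    /usr/local/sbin/cleanup-old-archives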


> Personally I don't believe that we would experience so much
> duplication that it would be worth caring about. First of all, there's no
> need to use libc++ for noarch ports (except that we would need to put
> a bit of extra effort into making sure that versions of binaries
> compiled on two systems would not compete with each other). Second, we
> would probably want to use "delete_la_files yes" as well, which also
> affects the builds of ports that don't need libc++. And finally it
> might be that we would at some point retire the builds for libstdc++
> anyway.

True, we could use this opportunity to enable delete_la_files on older systems 
too, provided this switch to libc++ would require the user to uninstall and 
reinstall all ports, which might be the case.


>> I worry about increasing the disk space used, both for our current host Mac 
>> OS Forge as well as for the various voluntary mirror servers around the 
>> world, and it's also a consideration in the event that we might one day need 
>> to leave Mac OS Forge and have to find another place to house these 
>> mountains of data we're contemplating creating.
>> 
>> Another option: It should be possible to write code to detect whether any 
>> files in a destroot use a C++ library, and if so, MacPorts could include the 
>> C++ library name in the archive filename, otherwise don't. At archive fetch 
>> time, MacPorts wouldn't know whether a port uses C++ or not, but it could 
>> try to fetch both filenames and use whichever one exists.
> 
> Well, yes, that's also an option, even if a slightly more complicated
> one to implement. But what about delete_la_files? Should/could we
> attempt to rebuild all packages for < 10.9 in that case, making sure
> that all of them use delete_la_files or would this cause any problems?
> Given how long 10.6 and 10.7 have been offline, I don't think it would
> make much of a difference in terms of processor time if we attempted
> a completely new rebuild anyway, so we could just as well
> rebuild the latest versions of all ports with delete_la_files and
> store the resulting files into an xz file, all at the same time. (Or
> maybe that's simply too much a time after all.)

delete_la_files cannot be changed if any ports are installed, so if we want to 
change it from off to on on systems older than 10.9, that will require users to 
essentially follow the migration instructions.

MacPorts works fine with or without .la files, so if there is no other reason 
why we need to force everything to be reinstalled, then we could spare the u

Re: Volunteer for a workshop on "setting up your own buildbot/buildslave"?

2015-11-12 Thread Mojca Miklavec
On Wed, Nov 11, 2015 at 2:18 PM, Ryan Schmidt wrote:
> On Nov 11, 2015, at 7:05 AM, Rainer Müller wrote:
>> On 2015-11-11 12:17, Ryan Schmidt wrote:
>>> On Nov 10, 2015, at 12:21 PM, Joshua Root wrote:
>>>
 On 2015-11-11 00:26 , Mojca Miklavec wrote:
> If we start including macosx_deployment_target, macosx_sdk,
> prefix, applications_dir, frameworks_dir, ... we'll sooner or
> later end up in an exponential mess

 These are the type of settings that are associated with an entire
 source in archive_sites.conf, because all the archives from a given
 source will have them set the same. If any of these settings for a
 source don't match the ones used locally, the source is simply not
 used. Putting cxx_stdlib in here as well would be a good fit.
>>>
>>> That sounds reasonable, except that it would create duplicate
>>> archives (one "libstdcxx", one "libcxx") for noarch ports that
>>> definitely don't use any C++ library, wouldn't it?
>>
>> Technically you are correct, an archive for a noarch port would be
>> identical regardless of the value of the cxx_stdlib option.
>>
>> However, an archive site can only host archives with one configuration
>> set (prefix, applications_dir, ...). If we add cxx_stdlib there, either
>> all ports will use libstdc++ or all will use libc++.
>
> So then there will be two archive sites: the existing one for libstdc++ on 
> older systems and libc++ on newer systems, and another for libc++ on older 
> systems.

I find this split more than a bit weird. Either we have a separate
directory for each configuration or we keep all the archives together
unless there are strong technical reasons against that.

>> We could also host the packages in subdirectories:
>>  .../${os.platform}/${os.major}/${build_arch}/*.tbz2
>> However, you would have to replicate this structure on fetch. It would
>> not make much difference whether this is in the path or in the filename.
>
> Yes, for my hypothetical scenario, I was assuming that, in the absence of the 
> OS name and version being in the filename, it would be in a directory name.
>
> packages.m.o/darwin_15/boost/...
> packages.m.o/darwin_10/boost/...
> packages.m.o/darwin_10-libc++/boost/...
>
> Having all of a port's packages -- for all platforms, variants and build 
> options -- in a single directory is nice though because it's a single 
> directory listing to look at to figure out if a particular binary exists.

Truth be told, it would be even nicer if we had a nice website where
you would get all the information collected in one place: which
binaries exist, when the builds were attempted (with a link to the
build) and whether the build failed or succeeded. If we had such a
site, this argument would no longer be important.

> I use this all the time when deciding whether I need to force a buildbot 
> build or investigate a buildbot build failure, and it would be less good for 
> there to be even two directories to need to check, much less more than that.
>
>>> If we used this strategy, what hypothetical base URL would we use for
>>> libc++ packages on older systems? Would you define a second hostname
>>> in addition to packages.macports.org (inconvenient for mirrors), or
>>> would you create a subdirectory on that server?
>>
>> Is your intention to build all packages twice, for both libc++ and
>> libstdc++? Is that even worth the effort?
>
> Yes, my intention is that there would be additional buildbot slave servers 
> set up for libc++ on each older OS version. Why wouldn't that be worth the 
> effort? Are you suggesting as an alternative that we would immediately switch 
> the older OS buildbot slaves over to libc++ and no longer provide binaries 
> for libstdc++? That would mean we would need an immediate plan for how to 
> switch users from libstdc++ to libc++. I was hoping to take this one problem 
> at a time, not all problems at once.

I agree. We should first introduce buildbot slaves for libc++, then
take enough time to deal with all the new problems discovered during
the transition, and only then suggest that users switch. After a while
(maybe a year or so) of stability it would perhaps be acceptable to
switch off the buildbot slaves for libstdc++. Users would then have a
choice of either continuing to use it, but having to compile
everything on their own, or switching to libc++ and still getting
binary updates.

Doing the switch all at once would be a bit too challenging in my
opinion. Users will have to reinstall everything and you cannot ask
them to do that until at least the most obvious problems have been
solved.

Mojca


Re: Volunteer for a workshop on "setting up your own buildbot/buildslave"?

2015-11-12 Thread Joshua Root
On 2015-11-12 21:19 , Mojca Miklavec wrote:
> On Tue, Nov 10, 2015 at 7:21 PM, Joshua Root wrote:
>> On 2015-11-11 00:26 , Mojca Miklavec wrote:
>>> If we start including macosx_deployment_target, macosx_sdk, prefix,
>>> applications_dir, frameworks_dir, ... we'll sooner or later end up in
>>> an exponential mess
>>
>> These are the type of settings that are associated with an entire source
>> in archive_sites.conf, because all the archives from a given source will
>> have them set the same. If any of these settings for a source don't
>> match the ones used locally, the source is simply not used. Putting
>> cxx_stdlib in here as well would be a good fit.
> 
> Are you saying that putting
>cxx_stdlib libc++
> inside archive_sites.conf instead of macports.conf (and possibly
> making some additional changes in the code) would automatically make
> sure that one would not have to write an explicit:
>buildfromsource always
> and MacPorts would make sure that one would not fetch the binary from
> a server with binaries using different settings?

Yes, that could be done with some minor code changes.

- Josh


Re: Volunteer for a workshop on "setting up your own buildbot/buildslave"?

2015-11-12 Thread Mojca Miklavec
On Wed, Nov 11, 2015 at 12:15 PM, Ryan Schmidt wrote:
> On Nov 10, 2015, at 7:26 AM, Mojca Miklavec wrote:
>
>> I assume that
>> should be easy enough to add.
>
> It would be easy enough to do, once we reach a decision on what, 
> specifically, to do.

So I suggest we finish this discussion and reach a decision soon,
before we all forget about this and everyone who might potentially
still be interested in fixing bugs moves on and loses interest in
supporting libc++ on < 10.9.

>> A while back we were discussing
>> how to implement this. My suggestion would be to only add that on <
>> 10.9 if libc++ is used. And to make things easier: unless it's a
>> noarch port, always add it, even if the port doesn't link against
>> libc++ explicitly. At the moment we don't have any procedure in place
>> to aid with decision about whether or not "libc++" is needed in the
>> filename, but we would like to start somewhere. For completeness one
>> could add "libstdc++" if libstdc++ is needed on >= 10.9. This approach
>> isn't 100% correct in all cases (in particular not in cases where g++
>> from macports-gcc is used), but it will allow us to proceed and to fix
>> more subtle problems later.
>
> I like the idea of only adding a non-default cxxstdlib to the filename, since 
> that would allow all the existing archives to continue to be valid. However 
> if we were to switch to xz compression for archives, we might want to 
> repackage everything that way, in which case it wouldn't matter.

Yes, it would be helpful to repackage everything. But despite that I
don't see the added value of adding the default stdlib to the package
name. Or in particular, I would not be in favour of adding libc++ to
the filenames on >= 10.9. (First of all, we don't add "-variant" to
the names either and it's nicer to keep names shorter. Second of all,
this is mostly relevant for < 10.9, I don't believe that anyone is
trying to support libstdc++ on >= 10.9, so in the long run we'll end
up with shorter names overall.)

> I agree that since we don't have a mechanism for detecting C++ software, it 
> would be simplest to add the tag to all archives, even though that will 
> result in a rather large increase in disk usage on the packages servers. If 
> the packages server were using some sort of advanced automatic deduplicating 
> filesystem the impact might not be so large, since many ports would hopefully 
> build identically, but I don't know what filesystems do that or if we're 
> using one of them.

If that is going to be a problem, we could at some point in future
start thinking about "small" and "big" mirrors. Small mirrors (those
that don't have sufficient disk space) could include just the latest
versions of software and/or just the latest OSes, while the
"big" mirrors could include everything. Old versions of binaries
(older than one year and older than the latest successfully built
version of that software) are seldom needed and even then fetching
them might be highly problematic if one doesn't take extra care to
also fetch the appropriate version of all the dependencies.
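
Such a "small" mirror could be little more than an rsync filter over the
existing per-port layout, keying on the OS version embedded in the archive
file names; a sketch, with a placeholder rsync URL and destination:

    # mirror only the Yosemite and El Capitan archives, skip everything else
    rsync -rtlvm --delete \
        --include='*/' \
        --include='*.darwin_14.*' --include='*.darwin_15.*' \
        --exclude='*' \
        rsync://packages.example.org/packages/ /srv/mirror/packages/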

Personally I don't believe that we would experience so much
duplication that it would be worth caring about. First of all, there's no
need to use libc++ for noarch ports (except that we would need to put
a bit of extra effort into making sure that versions of binaries
compiled on two systems would not compete with each other). Second, we
would probably want to use "delete_la_files yes" as well, which also
affects the builds of ports that don't need libc++. And finally it
might be that we would at some point retire the builds for libstdc++
anyway.

> I worry about increasing the disk space used, both for our current host Mac 
> OS Forge as well as for the various voluntary mirror servers around the 
> world, and it's also a consideration in the event that we might one day need 
> to leave Mac OS Forge and have to find another place to house these mountains 
> of data we're contemplating creating.
>
> Another option: It should be possible to write code to detect whether any 
> files in a destroot use a C++ library, and if so, MacPorts could include the 
> C++ library name in the archive filename, otherwise don't. At archive fetch 
> time, MacPorts wouldn't know whether a port uses C++ or not, but it could try 
> to fetch both filenames and use whichever one exists.

Well, yes, that's also an option, even if a slightly more complicated
one to implement. But what about delete_la_files? Should/could we
attempt to rebuild all packages for < 10.9 in that case, making sure
that all of them use delete_la_files or would this cause any problems?
Given how long 10.6 and 10.7 have been offline, I don't think it would
make much of a difference in terms of processor time if we attempted
a completely new rebuild anyway, so we could just as well
rebuild the latest versions of all ports with delete_la_files and
store the resulting files into an xz file, all at the same time.
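
As an aside, the destroot check Ryan describes above could be as crude as
scanning everything in the destroot with otool; a sketch, with the destroot
path passed in as an argument:

    #!/bin/sh
    # print whether anything in the destroot links against a C++ runtime
    destroot="$1"
    if find "$destroot" -type f -print0 \
         | xargs -0 -n1 otool -L 2>/dev/null \
         | grep -Eq 'libc\+\+\.1\.dylib|libstdc\+\+\.6\.dylib'; then
        echo "uses a C++ runtime"
    else
        echo "no C++ runtime detected"
    fi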

Re: Volunteer for a workshop on "setting up your own buildbot/buildslave"?

2015-11-12 Thread Mojca Miklavec
On Tue, Nov 10, 2015 at 7:21 PM, Joshua Root wrote:
> On 2015-11-11 00:26 , Mojca Miklavec wrote:
>> If we start including macosx_deployment_target, macosx_sdk, prefix,
>> applications_dir, frameworks_dir, ... we'll sooner or later end up in
>> an exponential mess
>
> These are the type of settings that are associated with an entire source
> in archive_sites.conf, because all the archives from a given source will
> have them set the same. If any of these settings for a source don't
> match the ones used locally, the source is simply not used. Putting
> cxx_stdlib in here as well would be a good fit.

Are you saying that putting
   cxx_stdlib libc++
inside archive_sites.conf instead of macports.conf (and possibly
making some additional changes in the code) would automatically make
sure that one would not have to write an explicit:
   buildfromsource always
and MacPorts would make sure that one would not fetch the binary from
a server with binaries using different settings?

In a way that would make sense to me as well, but I don't exactly
understand how everything works.

Mojca


Re: Owner of MacPorts account on GitHub

2015-11-12 Thread Ryan Schmidt
On Nov 12, 2015, at 3:43 AM, Ryan Schmidt wrote:

> A seventh VM has been requested for El Capitan and I've been told that should 
> be a resource problem, it's just a matter of setting it up, which will happen 
> soon.

Correction: that should *not* be a resource problem.



Re: Owner of MacPorts account on GitHub

2015-11-12 Thread Ryan Schmidt

On Nov 12, 2015, at 3:04 AM, Mojca Miklavec wrote:
> On Wed, Nov 11, 2015 at 11:37 AM, Ryan Schmidt wrote:
>> On Nov 10, 2015, at 10:57 PM, Mojca Miklavec wrote:
>>> 
>>> At the moment it's the lack of Trac's functionality to browse the tree
>>> and the logs that has triggered the "demand" for this.
>> 
>> That will get fixed. In asking about github, we're just trying to explore 
>> all options.
> 
> I hope so at least. But until this gets fixed (and it might take a
> while before it does), it would be extremely helpful to have an
> up-to-date git mirror to be able to browse through the changes in a
> slightly more "user-friendly" way than with the command-line only.
> 
> I could use
>https://github.com/neverpanic/macports-ports
> now that you mentioned it, but that one is also not being synced
> automatically (yet).
> 
> Clemens, are you willing to set up some (temporary) repository with
> regular updates, even if it's not perfect yet, just so that we can
> follow the changes? (I can offer to run a script for incremental
> updates on my server if there is a need for that.)

We will get the existing infrastructure back up and running soon. Please bear 
with us a little while longer.


> On Wed, Nov 11, 2015 at 11:39 PM, Sean Farley wrote:
>> Clemens Lang writes:
>> 
>>> [1] https://github.com/neverpanic?tab=repositories
>>> [2] https://github.com/neverpanic/svn2git
>>> [3] https://github.com/neverpanic/macports-svn2git-rules
>> 
>> Sounds like this is a lost cause for me, but for what it's worth, I work
>> at Bitbucket now and we could offer hosting (plus direct admin support).
> 
> What exactly do you consider "lost" and why?

Good to know, Sean, and thanks for the offer. Would that refer only to the 
usual services that BitBucket offers, or additional services as well? Mac OS 
Forge currently provides us much more than just a repository, issue tracker and 
wiki.


> At the moment we are (hopefully) discussing just a mirror of the code
> being put on some server, still hoping that everything will get back
> to normal. This mirroring can be done on any of the existing services
> (github, bitbucket, gitlab, ...) or even on all of them at the same
> time.
> 
> Mojca
> 
> (Off-topic, but: If we were discussing other services, the biggest
> problem would probably be all the buildbots anyway, and I doubt that
> BitBucket would be interested in helping out with that. Purchasing the
> necessary hardware alone would be expensive enough. We currently have
> 10 virtual machines and we could easily request 5 or more right now
> [two for 10.11, three for libc++ on 10.6-10.8, potentially another
> bunch of them for universal binaries] and two new ones each year after
> the release of a new OS. All of that has to run on Macs with
> sufficient memory, disk space and potentially more than one core per
> buildbot.)

There are currently six buildbot slave VMs, five of which run OS X (Snow 
Leopard, Lion, Mountain Lion, Mavericks, Yosemite) and run two buildbot 
builders each -- one for ports, one for base -- and one which runs Oracle Linux 
with a single buildbot builder for base. A seventh VM has been requested for El 
Capitan and I've been told that should be a resource problem, it's just a 
matter of setting it up, which will happen soon.

Hardware for a replacement buildbot system is not a problem, if that becomes 
necessary.




Re: Owner of MacPorts account on GitHub

2015-11-12 Thread Mojca Miklavec
On Wed, Nov 11, 2015 at 11:37 AM, Ryan Schmidt wrote:
> On Nov 10, 2015, at 10:57 PM, Mojca Miklavec wrote:
>>
>> At the moment it's the lack of Trac's functionality to browse the tree
>> and the logs that has triggered the "demand" for this.
>
> That will get fixed. In asking about github, we're just trying to explore all 
> options.

I hope so at least. But until this gets fixed (and it might take a
while before it does), it would be extremely helpful to have an
up-to-date git mirror to be able to browse through the changes in a
slightly more "user-friendly" way than with the command-line only.

I could use
https://github.com/neverpanic/macports-ports
now that you mentioned it, but that one is also not being synced
automatically (yet).

Clemens, are you willing to set up some (temporary) repository with
regular updates, even if it's not perfect yet, just so that we can
follow the changes? (I can offer to run a script for incremental
updates on my server if there is a need for that.)


On Wed, Nov 11, 2015 at 11:39 PM, Sean Farley wrote:
> Clemens Lang writes:
>
>> [1] https://github.com/neverpanic?tab=repositories
>> [2] https://github.com/neverpanic/svn2git
>> [3] https://github.com/neverpanic/macports-svn2git-rules
>
> Sounds like this is a lost cause for me, but for what it's worth, I work
> at Bitbucket now and we could offer hosting (plus direct admin support).

What exactly do you consider "lost" and why?

At the moment we are (hopefully) discussing just a mirror of the code
being put on some server, still hoping that everything will get back
to normal. This mirroring can be done on any of the existing services
(github, bitbucket, gitlab, ...) or even on all of them at the same
time.

Mojca

(Off-topic, but: If we were discussing other services, the biggest
problem would probably be all the buildbots anyway, and I doubt that
BitBucket would be interested in helping out with that. Purchasing the
necessary hardware alone would be expensive enough. We currently have
10 virtual machines and we could easily request 5 or more right now
[two for 10.11, three for libc++ on 10.6-10.8, potentially another
bunch of them for universal binaries] and two new ones each year after
the release of a new OS. All of that has to run on Macs with
sufficient memory, disk space and potentially more than one core per
buildbot.)


Re: Nonexistent mpi.default variable lets port upgrade fail

2015-11-12 Thread David Evans
On 11/11/15 11:36 PM, Marko Käning wrote:
> Oops, why does this error suddenly happen?
> 
> ---
> --->  Updating MacPorts base sources using rsync
> MacPorts base version 2.3.4 installed,
> MacPorts base version 2.3.4 downloaded.
> --->  Updating the ports tree
> --->  MacPorts base is already the latest version
> 
> The ports tree has been updated. To upgrade your installed ports, you should 
> run
>   port upgrade outdated
> The following installed ports are outdated:
> kdetoys4   4.10.5_0 < 4.10.5_1   
> Error: Unable to open port: can't read "mpi.default": no such variable
> To report a bug, follow the instructions in the guide:
> http://guide.macports.org/#project.tickets
> ---
> 
> Greets,
> Marko

The error appears to come from port boost (there may be others), which is broken
(its Portfile fails to parse) after
the recent update to the mpi portgroup in r142438.
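
A quick way to confirm which outdated port's Portfile fails to parse is to loop
over them with port info, e.g. (illustrative only):

    for p in $(port -q echo outdated | awk '{print $1}'); do
        port -q info "$p" > /dev/null 2>&1 || echo "fails to parse: $p"
    done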

Dave
