Re: [aur-general] TU Application: Daniel M. Capella

2018-11-15 Thread Daniel M. Capella via aur-general
Quoting Levente Polyak via aur-general (2018-11-15 05:19:10)
> On 11/15/18 5:50 AM, Daniel M. Capella via aur-general wrote:
> >> - tests are awesome <3 run them whenever possible! more is better!
> >>   pulling sources from github is favorable when you get free tests
> >>   and sometimes manpages/docs
> > 
> > Will work with the upstreams to distribute these. I prefer to use published
> > offerings as they are what the authors intend to be used. GitHub 
> > autogenerated
> > tarballs are also subject to change:
> > https://marc.info/?l=openbsd-ports&m=151973450514279&w=2
> > 
> 
> Well, the source tree is the source of truth, as that's where processed
> stuff comes from :P
> 
> If you can't convince all upstreams to distribute their tests, and
> especially if you can't convince them to distribute the sources for
> generating docs, I would still say please go for the source tree, as the
> value of such additional content is high. We love tests.

Agreed. Will do.

> Side note: nowadays there are lots of Python and other projects that
> git-sign their latest tags and commits but offer no other form of
> detached signatures; this adds a big gain as well. Remember, two of your
> packages had upstream tag signatures but I forgot to point them out.
> If I can't convince them to provide detached signatures as well, I
> nowadays just switch to pulling git and use ?signed.

Noted.
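For reference, switching to a signed git tag looks roughly like this in a PKGBUILD (a sketch; the project name, tag, and key fingerprint below are placeholders):

```bash
# Hypothetical PKGBUILD fragment: pull the signed git tag instead of a tarball.
# The ?signed fragment makes makepkg verify the tag's signature against validpgpkeys.
source=("project::git+https://github.com/example/project.git?signed#tag=v1.2.3")
sha256sums=('SKIP')
validpgpkeys=('0123456789ABCDEF0123456789ABCDEF01234567')  # upstream's signing key
```

This trades a detached-signature check on a tarball for makepkg's built-in verification of the upstream tag signature.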

> 
> >> python-soco:
> >> - maybe distribute some docs as well like manpages from docs dir
> > 
> > I don't see any manpages there. This is a library.
> 
> make -C doc man
> Manpages are not exclusive to tools; they exist for any kind of
> documentation, including library APIs.
> Try running:
> man 3 sprintf

Done.
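The change amounted to something like this in the PKGBUILD (a sketch; it assumes upstream's doc/ directory builds man pages via the Makefile above, and the source directory name and output path are illustrative):

```bash
# Hypothetical PKGBUILD fragments: build and ship the library man pages.
build() {
  cd "soco-$pkgver"
  make -C doc man   # Sphinx-style docs typically emit doc/_build/man/*.3
}

package() {
  cd "soco-$pkgver"
  # Section 3 is the conventional home for library API manpages.
  install -Dm644 doc/_build/man/*.3 -t "$pkgdir/usr/share/man/man3"
}
```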

> > 
> >> razercfg:
> >> - do we _really_ need to split razer mouse tool UI and daemon here?
> >> doubt it tbh.
> > 
> > The UI is completely optional. ¯\_(ツ)_/¯
> 
> My point is, it's the same source, the packages are not huge, and there
> is no crazy dependency hell.
> Only split up if it really makes sense; if there is no real reason, we
> keep them combined, like tools + libs + headers, and don't go as crazy
> as Debian about splitting up everything.

It was split for people who don't need to change options on the fly.
Also because I thought it would be kinda neat. :p I don't mind merging
them back together.

> 
> > 
> >> - CPPFLAGS are not respected and should be populated properly
> >>   an upstream patch for that would be best
> > 
> > Will have to figure that out.
> 
> All you need in the PR is to add $CPPFLAGS where you already see $CFLAGS.
> For the time being, either backport this patch or just export
> CFLAGS="${CFLAGS} ${CPPFLAGS}" until it's done.

This took me for a ride until I realized I just needed to switch to
`CPPFLAGS +=` in the config.mk. A lesson I won't soon forget.
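For anyone hitting the same thing, the fix is roughly this in config.mk (a sketch; the actual flag shown is a placeholder, not razercfg's real value):

```make
# config.mk

# Before: '=' discards whatever CPPFLAGS make inherited from the environment,
# so makepkg's CPPFLAGS (e.g. -D_FORTIFY_SOURCE=2) were silently dropped.
#CPPFLAGS = -D_DEFAULT_SOURCE

# After: '+=' appends to the environment-provided value instead of replacing it.
CPPFLAGS += -D_DEFAULT_SOURCE
```

In GNU make, `+=` on a variable that came from the environment appends to it, which is exactly what lets the packager-provided flags survive into the build.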

> > 
> > Thank you very much for the review. Go LDFLAGS is still on the todo. 
> > Packaging
> > for Go has perhaps been more traumatizing than even Node.js.
> > 
> 
> 
> You're welcome, always a pleasure if people are happy about it :-)
> 

--
Best,
polyzen


signature.asc
Description: signature


Re: [aur-general] AUR package metadata dump

2018-11-15 Thread Jiachen YANG via aur-general

On 2018/11/16 7:31, Uwe Koloska wrote:
> Hi Eli,
>
> On 15.11.18 at 20:26, Eli Schwartz via aur-general wrote:
>> The source code running the website is here:
>> https://git.archlinux.org/aurweb.git/about/
>>
>> We currently provide the url, but not the sources for download, since
>> the use case for our community has not (yet?) proposed that the latter
>> is something needed. I'm unsure who would use it other than repology.
> I don't understand what "url" and "sources" refer to. Obviously it's not
> the source code of aurweb, because that's available in the linked git
> repo, isn't it?
>
> If both refer to something inside the quote, then the reference is very
> far from its destination ...
>
> Uwe


Hi Uwe,


First of all, thank you for Repology, an interesting and useful project.

I think "url" and "sources" refer to two variables in a PKGBUILD: "url"
is the URL of the upstream homepage, and "sources" are the URLs to
download the source code, or in the case of VCS packages, the URLs to
fetch the VCS repositories. I think "sources" is the closest thing to
the "distfiles" you asked about in your first message. Please see the
PKGBUILD manpage for details [1]. These are defined in the PKGBUILD and
generated into .SRCINFO for AUR packages. Currently only the "URL" field
is exposed in the AUR RPC API.

[1]:
https://www.archlinux.org/pacman/PKGBUILD.5.html#_options_and_directives



farseerfc




Re: [aur-general] chroot clean building and "http 429"

2018-11-15 Thread hagar

On 16/11/18 1:03 am, Eli Schwartz via aur-general wrote:

> On 11/15/18 7:57 AM, Morgan Adamiec via aur-general wrote:
>> On Thu, 15 Nov 2018 at 12:49, hagar wrote:
>>> But my main question is how do you get around a circular dependency. eg.
>>>
>>> mingw-w64-harfbuzz depends on - mingw-w64-freetype2
>>>
>>> mingw-w64-freetype2 depends on -mingw-w64-harfbuzz
>>>
>>> You can't build one unless the other is already built.
>>
>> You'd want to bootstrap with a binary package.
>>
>> If foo depends on bar and vice versa, build foo against a binary version
>> of bar, then rebuild bar against your version of foo.
>
> This advice basically boils down to "you need to already have it in
> order to build it", which isn't really the most useful advice unless
> you're using the official repos...
>
>> Another way if possible is to configure foo and disable any features
>> that require bar.
>> Then after building bar, rebuild a fully featured foo. Although I'm not
>> sure if freetype/harfbuzz can be configured this way.
>
> Looking at the AUR, mingw-w64-harfbuzz depends on mingw-w64-freetype2,
> which is also provided by mingw-w64-freetype2-minimal and
> mingw-w64-freetype2-bootstrap. I'm not sure why both variant packages
> exist, but the -bootstrap variant is pretty obviously meant to do
> exactly this.


Thanks all, I found the bootstrap ones.

I've been redoing all my 340-odd builds in a chroot with distcc to fix
some small issues.



Thanks

Macca


Re: [aur-general] TU Application: Daniel M. Capella

2018-11-15 Thread Morten Linderud via aur-general
On Thu, Nov 15, 2018 at 06:51:31PM -0500, Daniel M. Capella via aur-general 
wrote:
> Quoting Eli Schwartz via aur-general (2018-11-15 00:52:50)
> > On 11/14/18 11:50 PM, Daniel M. Capella via aur-general wrote:
> > > Quoting Levente Polyak via aur-general (2018-11-14 17:00:38)
> > >> - tests are awesome <3 run them whenever possible! more is better!
> > >>   pulling sources from github is favorable when you get free tests
> > >>   and sometimes manpages/docs
> > > 
> > > Will work with the upstreams to distribute these. I prefer to use 
> > > published
> > > offerings as they are what the authors intend to be used. GitHub 
> > > autogenerated
> > > tarballs are also subject to change:
> > > https://marc.info/?l=openbsd-ports&m=151973450514279&w=2
> > 
> > I've seen the occasional *claim* that this happens, but I've yet to see
> > any actual case where this happens and it isn't because of upstream
> > force-pushing a tag.
> > 
> > GitHub is supposed to use git-archive(1) for this, which is guaranteed
> > to be reproducible when generating .tar, although in theory
> > post-filtering this through a compressor like gzip can result in changes
> > from one version of git to another. I say in theory because I don't
> > recall this ever happening, and git-archive uses the fairly boring defaults.
> > 
> > I don't see any reason to use substandard sources in order to avoid
> > checksum problems I don't believe in.
> 
> "substandard" 樂 
> https://wiki.archlinux.org/index.php/Python_package_guidelines#Source

Those guidelines are mainly in the context of the Python ecosystem. There
are no preferences, only options. If tests, manpages, or sources are
missing from the PyPI mirrors because of mismanagement from upstream,
then they are indeed substandard.

-- 
Morten Linderud
PGP: 9C02FF419FECBE16




Re: [aur-general] TU Application: Daniel M. Capella

2018-11-15 Thread Levente Polyak via aur-general
On 11/16/18 12:51 AM, Daniel M. Capella via aur-general wrote:
> Quoting Eli Schwartz via aur-general (2018-11-15 00:52:50)
>> On 11/14/18 11:50 PM, Daniel M. Capella via aur-general wrote:
>>> Quoting Levente Polyak via aur-general (2018-11-14 17:00:38)
 - tests are awesome <3 run them whenever possible! more is better!
   pulling sources from github is favorable when you get free tests
   and sometimes manpages/docs
>>>
>>> Will work with the upstreams to distribute these. I prefer to use published
>>> offerings as they are what the authors intend to be used. GitHub 
>>> autogenerated
>>> tarballs are also subject to change:
>>> https://marc.info/?l=openbsd-ports&m=151973450514279&w=2
>>
>> I've seen the occasional *claim* that this happens, but I've yet to see
>> any actual case where this happens and it isn't because of upstream
>> force-pushing a tag.
>>
>> GitHub is supposed to use git-archive(1) for this, which is guaranteed
>> to be reproducible when generating .tar, although in theory
>> post-filtering this through a compressor like gzip can result in changes
>> from one version of git to another. I say in theory because I don't
>> recall this ever happening, and git-archive uses the fairly boring defaults.
>>
>> I don't see any reason to use substandard sources in order to avoid
>> checksum problems I don't believe in.
> 
> "substandard" 樂 
> https://wiki.archlinux.org/index.php/Python_package_guidelines#Source
> 

Does the wiki really need to be overly specific about when it's sane to
use which source? Especially when one source gives you tests, docs, and
signatures, and the other doesn't?
Or do we really expect to have a paragraph explicitly allowing Python
packages to be built from the original unprocessed main sources as well?

I don't think so.





Re: [aur-general] TU Application: Daniel M. Capella

2018-11-15 Thread Daniel M. Capella via aur-general
Quoting Eli Schwartz via aur-general (2018-11-15 00:52:50)
> On 11/14/18 11:50 PM, Daniel M. Capella via aur-general wrote:
> > Quoting Levente Polyak via aur-general (2018-11-14 17:00:38)
> >> - tests are awesome <3 run them whenever possible! more is better!
> >>   pulling sources from github is favorable when you get free tests
> >>   and sometimes manpages/docs
> > 
> > Will work with the upstreams to distribute these. I prefer to use published
> > offerings as they are what the authors intend to be used. GitHub 
> > autogenerated
> > tarballs are also subject to change:
> > https://marc.info/?l=openbsd-ports&m=151973450514279&w=2
> 
> I've seen the occasional *claim* that this happens, but I've yet to see
> any actual case where this happens and it isn't because of upstream
> force-pushing a tag.
> 
> GitHub is supposed to use git-archive(1) for this, which is guaranteed
> to be reproducible when generating .tar, although in theory
> post-filtering this through a compressor like gzip can result in changes
> from one version of git to another. I say in theory because I don't
> recall this ever happening, and git-archive uses the fairly boring defaults.
> 
> I don't see any reason to use substandard sources in order to avoid
> checksum problems I don't believe in.
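The git-archive determinism described above is easy to check with a throwaway repo (a sketch; requires git and coreutils, and archives the same tag twice to show the uncompressed tar output is byte-identical):

```shell
#!/bin/sh
set -e
# Create a throwaway repo with one tagged commit.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo example > file
git add file
git -c user.name=test -c user.email=test@example.com commit -q -m init
git tag v1
# Archive the same tag twice; git archive's tar output is deterministic.
a=$(git archive --format=tar v1 | sha256sum | cut -d' ' -f1)
b=$(git archive --format=tar v1 | sha256sum | cut -d' ' -f1)
[ "$a" = "$b" ] && echo identical
```

The compressed `.tar.gz` layer on top of this tar stream is where version-to-version drift could in theory creep in, per the caveat above.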

"substandard" 樂 
https://wiki.archlinux.org/index.php/Python_package_guidelines#Source

> > For Rust sources there's also this problem:
> > https://doc.rust-lang.org/cargo/faq.html#why-do-binaries-have-cargolock-in-version-control-but-not-libraries
> > 
> > Crates explicitly filter out lock files. `publish-lockfile` for binary 
> > crates
> > is still only available in Cargo nightly. Communication is already in
> > progress with the relevant upstreams.
> 
> I have no clue what this is even supposed to mean. I'm reading your link
> and it says that "it's conventional" for upstream developers of rust
> applications to commit their lockfiles to git, but not to do so for
> libraries. Given that one builds applications, and not libraries, I'm
> unsure what the problem is here.
> 
> Do you mean to say that crates.io doesn't ship all the files available
> in git? Okay, I agree that git is the superior source. I don't generally
> trust "intelligent" tools to decide what's important.
> 
> Still not sure what this doc describing the split between libraries and
> binaries, has to do with anything.

https://github.com/bluejekyll/trust-dns/issues/604#issuecomment-436510626

> >> python-soco:
> >> - there are tests available for check() via py.test
> > 
> > Requires jumping through hoops. See the note:
> > https://soco.readthedocs.io/en/latest/development/unittests.html?highlight=test#running-the-unit-tests
> 
> Sounds like they have two sets of tests: some unittests for the code
> correctness, plus unittests to test the interaction with an online server.
> 
> Also sounds like the former, which are all we care about I suppose,
> should be easy to do without any hoops, just by running pytest?

Yeah, I tried that.
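If the offline unit tests do end up runnable with plain pytest, the PKGBUILD side would look roughly like this (a sketch; the source directory name and test path are assumptions):

```bash
# Hypothetical check() fragment: run only the offline unit tests,
# leaving the online/interaction tests (which need a Sonos speaker) out.
check() {
  cd "SoCo-$pkgver"
  pytest tests
}
```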

> >> - do we _really_ need to split razer mouse tool UI and daemon here?
> >> doubt it tbh.
> > 
> > The UI is completely optional. ¯\_(ツ)_/¯
> 
> Why does this mean they cannot be a single package?
> 
> -- 
> Eli Schwartz
> Bug Wrangler and Trusted User
> 

--
Best,
polyzen




Re: [aur-general] AUR package metadata dump

2018-11-15 Thread Uwe Koloska
Hi Eli,

On 15.11.18 at 20:26, Eli Schwartz via aur-general wrote:
> The source code running the website is here:
> https://git.archlinux.org/aurweb.git/about/
> 
> We currently provide the url, but not the sources for download, since
> the use case for our community has not (yet?) proposed that the latter
> is something needed. I'm unsure who would use it other than repology.

I don't understand what "url" and "sources" refer to. Obviously it's not
the source code of aurweb, because that's available in the linked git
repo, isn't it?

If both refer to something inside the quote, then the reference is very
far from its destination ...

Uwe


Re: [aur-general] AUR package metadata dump

2018-11-15 Thread brent s.
On 11/15/18 2:58 PM, Eli Schwartz via aur-general wrote:
> 
> It's a pity that I forgot to reply with the exact same link and almost
> the exact same caveat in the very next paragraph, isn't it?
> 
> The paragraph which you quoted as "(SNIP)".
> 

It most likely would have been more noticeable if you had trimmed the
quoted content down to the relevant bits instead of including it whole.


-- 
brent saner
https://square-r00t.net/
GPG info: https://square-r00t.net/gpg-info





Re: [aur-general] AUR package metadata dump

2018-11-15 Thread Eli Schwartz via aur-general
On 11/15/18 2:50 PM, brent s. wrote:
> On 11/15/18 14:26, Eli Schwartz via aur-general wrote:
>> On 11/15/18 1:25 PM, Dmitry Marakasov wrote:
>>> Hi!
>>>
> (SNIP)
>>> While fetching data from the API, Repology does a 1-second pause between
>>> requests to avoid creating excess load on the server, but there are still
>>> frequent 429 errors. I've tried 2-second delays, but the 429s are still
>>> there, and fetch time increases dramatically as we have to do more than
>>> 500 requests. Probably the API is loaded by other clients as well.
>>
>> Our rate limit is 4000 per 24 hours. One-second pauses aren't taken into
>> account, and our initial motivation to add rate limiting was to ban
>> users who were using 5-second delays...
>>
> (SNIP)
> 
> 
> Don't forget about the URI max length, too. Staggering into requests of
> 100 pkgs would work fine, but it's worth noting the max length is 4443 bytes:
> 
> https://wiki.archlinux.org/index.php/Aurweb_RPC_interface#Limitations

It's a pity that I forgot to reply with the exact same link and almost
the exact same caveat in the very next paragraph, isn't it?

The paragraph which you quoted as "(SNIP)".

-- 
Eli Schwartz
Bug Wrangler and Trusted User





Re: [aur-general] AUR package metadata dump

2018-11-15 Thread brent s.
On 11/15/18 14:26, Eli Schwartz via aur-general wrote:
> On 11/15/18 1:25 PM, Dmitry Marakasov wrote:
>> Hi!
>>
(SNIP)
>> While fetching data from the API, Repology does a 1-second pause between
>> requests to avoid creating excess load on the server, but there are still
>> frequent 429 errors. I've tried 2-second delays, but the 429s are still
>> there, and fetch time increases dramatically as we have to do more than
>> 500 requests. Probably the API is loaded by other clients as well.
> 
> Our rate limit is 4000 per 24 hours. One-second pauses aren't taken into
> account, and our initial motivation to add rate limiting was to ban
> users who were using 5-second delays...
> 
(SNIP)


Don't forget about the URI max length, too. Staggering into requests of
100 pkgs would work fine, but it's worth noting the max length is 4443 bytes:

https://wiki.archlinux.org/index.php/Aurweb_RPC_interface#Limitations

-- 
brent saner
https://square-r00t.net/
GPG info: https://square-r00t.net/gpg-info





Re: [aur-general] AUR package metadata dump

2018-11-15 Thread Florian Pritz via aur-general
On Thu, Nov 15, 2018 at 09:25:02PM +0300, Dmitry Marakasov  
wrote:
> The way Repology currently fetches AUR package data is as follows:
> - fetch https://aur.archlinux.org/packages.gz
> - split packages into 100 item packs
> - fetch JSON data for packages in each pack from 
> https://aur.archlinux.org/rpc/?v=5&type=info&arg[]=
> 
> While fetching data from the API, Repology does a 1-second pause between
> requests to avoid creating excess load on the server, but there are still
> frequent 429 errors. I've tried 2-second delays, but the 429s are still
> there, and fetch time increases dramatically as we have to do more than
> 500 requests. Probably the API is loaded by other clients as well.

The rate limit allows 4000 API requests per source IP in a 24 hour
window. It does not matter which type of request you send or how many
packages you request information for. Spreading out requests is still
appreciated, but it mostly won't influence your rate limit.

The packages.gz file currently contains around 53000 packages. If you
split those into packs of 100 each and then perform a single API request
for each pack to fetch all the details, you end up with roughly 530
requests. Given you hit the limit, you probably check multiple times a
day, correct? I'd suggest spreading the checks over a six-hour period or
longer. This should keep you well below the limit.
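The arithmetic behind those numbers can be sketched directly (the package count is the approximate figure quoted above):

```shell
#!/bin/sh
total=53000   # packages in packages.gz (approximate)
per=100       # packages per batched info request
window=$((6 * 3600))                 # spread the run over six hours
reqs=$(( (total + per - 1) / per ))  # ceil(total / per) = 530 requests
# One request roughly every 40 seconds keeps a full sweep well under 4000/day.
echo "$reqs requests, one every $(( window / reqs ))s"
```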

> I suggest implementing a regularly updated JSON dump of information
> on all packages, made available on the site like packages.gz is.
> The content should be similar to what 
> https://aur.archlinux.org/rpc/?v=5&type=info
> would return for all packages at once.
>
> This will eliminate the need to access the API and generate load
> on it, and will simplify and speed up fetching dramatically for both
> Repology and possible other clients.

It may also generate much more network traffic, since the problem that
prompted the creation of the rate limit was that people ran update-check
scripts every 5 or 10 seconds via conky. Some of those resulted in up to
40 million requests in a single day due to inefficient clients and a
huge number of checked packages. I'm somewhat worried that a central
dump may just invite people to write clients that fetch it, and then we
start this whole thing again. Granted, it's only a single request per
check, but the response is likely quite big. Maybe the best way to do
this is to actually implement it as an API call and thus share the rate
limit with the rest of the API to prevent abuse.

Apart from all that, I'd suggest that you propose the idea (or a patch)
on the aur-dev mailing list, assuming that there isn't a huge discussion
about it here first.

Florian




[aur-general] AUR package metadata dump

2018-11-15 Thread Dmitry Marakasov
Hi!

I'm the maintainer of Repology.org, a service which monitors, aggregates
and compares package versions across 200+ package repositories, with
the purpose of simplifying package maintainers' work by discovering
new versions faster, improving collaboration between maintainers,
and giving software authors a complete overview of how well their
projects are packaged.

Repology does obviously support AUR, however there were some problems
with retrieving information on AUR packages and I think this could
be improved.

The way Repology currently fetches AUR package data is as follows:
- fetch https://aur.archlinux.org/packages.gz
- split packages into 100 item packs
- fetch JSON data for packages in each pack from 
https://aur.archlinux.org/rpc/?v=5&type=info&arg[]=
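Concretely, each pack turns into one batched info request, with one arg[] parameter per package (a sketch; the package names here are placeholders):

```shell
#!/bin/sh
pack="pkg-a pkg-b pkg-c"   # one pack of up to 100 packages (placeholder names)
url="https://aur.archlinux.org/rpc/?v=5&type=info"
for p in $pack; do
  url="$url&arg[]=$p"      # repeat arg[] once per package
done
echo "$url"
```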

While fetching data from the API, Repology does a 1-second pause between
requests to avoid creating excess load on the server, but there are still
frequent 429 errors. I've tried 2-second delays, but the 429s are still
there, and fetch time increases dramatically as we have to do more than
500 requests. Probably the API is loaded by other clients as well.

I suggest implementing a regularly updated JSON dump of information
on all packages and making it available on the site, like packages.gz is.
The content should be similar to what 
https://aur.archlinux.org/rpc/?v=5&type=info
would return for all packages at once.

This will eliminate the need to access the API and generate load
on it, and will simplify and speed up fetching dramatically for both
Repology and possible other clients.

Additionally, I'd like to suggest adding information on distfiles to the
dump (and probably to the API as well, for consistency). For instance,
Repology checks the availability of all (homepage and download) links
it retrieves from package repositories and reports broken ones so
the packages can be fixed.

-- 
Dmitry Marakasov   .   55B5 0596 FF1E 8D84 5F56  9510 D35A 80DD F9D2 F77D
amd...@amdmi3.ru  ..:  https://github.com/AMDmi3


[aur-general] chroot clean building and "http 429"

2018-11-15 Thread Eli Schwartz via aur-general
On 11/15/18 7:57 AM, Morgan Adamiec via aur-general wrote:
> On Thu, 15 Nov 2018 at 12:49, hagar wrote:
>> But my main question is how do you get around a circular dependency. eg.
>>
>> mingw-w64-harfbuzz depends on - mingw-w64-freetype2
>>
>> mingw-w64-freetype2 depends on -mingw-w64-harfbuzz
>>
>> You can't build one unless the other is already built.
>>
> 
> You'd want to bootstrap with a binary package.
> 
> If foo depends on bar and vice versa, build foo against a binary version
> of bar, then rebuild bar against your version of foo.

This advice basically boils down to "you need to already have it in
order to build it", which isn't really the most useful advice unless
you're using the official repos...

> Another way if possible is to configure foo and disable any features
> that require bar.
> Then after building bar, rebuild a fully featured foo. Although I'm not
> sure if freetype/harfbuzz can be configured this way.

Looking at the AUR, mingw-w64-harfbuzz depends on mingw-w64-freetype2,
which is also provided by mingw-w64-freetype2-minimal and
mingw-w64-freetype2-bootstrap. I'm not sure why both variant packages
exist, but the -bootstrap variant is pretty obviously meant to do
exactly this.
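Put together, the bootstrap sequence discussed in this thread looks roughly like this (a sketch; the echoed build step is a placeholder for the real chroot build command):

```shell
#!/bin/sh
# Break the freetype2 <-> harfbuzz cycle: build the -bootstrap variant first,
# then harfbuzz against it, then the full freetype2 against harfbuzz.
order="mingw-w64-freetype2-bootstrap mingw-w64-harfbuzz mingw-w64-freetype2"
for pkg in $order; do
  # Placeholder for the real step, e.g.: cd "$pkg" && makechrootpkg -c -r "$CHROOT"
  echo "build $pkg"
done
```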

-- 
Eli Schwartz
Bug Wrangler and Trusted User





Re: [aur-general] chroot clean building and "http 429"

2018-11-15 Thread Jelle van der Waa
On 11/15/18 at 08:49pm, hagar wrote:
> G'Day all,
> 
> I have been rebuilding all my packages properly in a clean chroot.
> 
> I was writing a small script to put all the packages in order and I
> accidentally got "http 429".
> 
> Any idea how long that lasts? - I figured out what caused it and fixed it.
> (The damned thing wasn't querying the PKGBUILDs but going online.)

The rate limit ban lasts 24 hours.
> 
> 
> But my main question is how do you get around a circular dependency. eg.
> 
> mingw-w64-harfbuzz depends on - mingw-w64-freetype2
> 
> mingw-w64-freetype2 depends on -mingw-w64-harfbuzz
> 
> You can't build one unless the other is already built.
> 
> 
> Is this allowed ?
> 
> 
> Thanks
> 
> Macca

-- 
Jelle van der Waa




Re: [aur-general] chroot clean building and "http 429"

2018-11-15 Thread Morgan Adamiec via aur-general
On Thu, 15 Nov 2018 at 12:49, hagar  wrote:
>
> G'Day all,
>
> I have been rebuilding all my packages properly in a clean chroot.
>
> I was writing a small script to put all the packages in order and I
> accidentally got "http 429".
>
> Any idea how long that lasts? - I figured out what caused it and fixed
> it. (The damned thing wasn't querying the PKGBUILDs but going online.)

Don't worry, it only lasts a day.

>
> But my main question is how do you get around a circular dependency. eg.
>
> mingw-w64-harfbuzz depends on - mingw-w64-freetype2
>
> mingw-w64-freetype2 depends on -mingw-w64-harfbuzz
>
> You can't build one unless the other is already built.
>

You'd want to bootstrap with a binary package.

If foo depends on bar and vice versa, build foo against a binary version
of bar, then rebuild bar against your version of foo.

Another way, if possible, is to configure foo and disable any features
that require bar.
Then after building bar, rebuild a fully featured foo. Although I'm not
sure if freetype/harfbuzz can be configured this way.

>
> Is this allowed ?
>
>
> Thanks
>
> Macca


[aur-general] chroot clean building and "http 429"

2018-11-15 Thread hagar

G'Day all,

I have been rebuilding all my packages properly in a clean chroot.

I was writing a small script to put all the packages in order and I
accidentally got "http 429".


Any idea how long that lasts? - I figured out what caused it and fixed
it. (The damned thing wasn't querying the PKGBUILDs but going online.)



But my main question is how do you get around a circular dependency. eg.

mingw-w64-harfbuzz depends on - mingw-w64-freetype2

mingw-w64-freetype2 depends on -mingw-w64-harfbuzz

You can't build one unless the other is already built.


Is this allowed ?


Thanks

Macca


Re: [aur-general] Auto-generated Github tarballs format change (Was: TU Application: Daniel M. Capella)

2018-11-15 Thread Levente Polyak via aur-general
On 11/15/18 10:52 AM, Baptiste Jonglez wrote:
> On 15-11-18, Eli Schwartz via aur-general wrote:
>> On 11/14/18 11:50 PM, Daniel M. Capella via aur-general wrote:
>>> Quoting Levente Polyak via aur-general (2018-11-14 17:00:38)
 - tests are awesome <3 run them whenever possible! more is better!
   pulling sources from github is favorable when you get free tests
   and sometimes manpages/docs
>>>
>>> Will work with the upstreams to distribute these. I prefer to use published
>>> offerings as they are what the authors intend to be used. GitHub 
>>> autogenerated
>>> tarballs are also subject to change:
>>> https://marc.info/?l=openbsd-ports&m=151973450514279&w=2
>>
>> I've seen the occasional *claim* that this happens, but I've yet to see
>> any actual case where this happens and it isn't because of upstream
>> force-pushing a tag.
> 
> See https://bugs.archlinux.org/task/60382 for an example.
> 
> I still had the old archive around so I spent some time comparing it with
> the new one:
> 
> - I compared the checksum of each individual file in the archives, and
>   they were all identical
> 
> - I compared the raw tar files after decompressing, and there were just a
>   few bytes that were moved around
> 
> This really suggests a slight format change in the way the tarball was
> generated (could be file ordering).
> 
> If you want to double check, here they are:
> 
> - old archive from May 2017: 
> https://files.polyno.me/arch/kashmir-20150805-20170525.tar.gz
> 
> - new archive: https://files.polyno.me/arch/kashmir-20150805.tar.gz
> 
> Baptiste
> 

GitHub invalidating caches is not the problem here; they should be
allowed to do that whenever they wish. The root of the issue is
unreproducibility, as already pointed out here.

The tarballs are stable per se, as long as no weird magic applies via git
export rules (like dates being exported into files) and no force pushes
are done to the tree: GitHub uses git archive to produce the tar, which
is itself reproducible. In fact, detached pre-generated tarballs
sometimes change as well, so blame upstream for any such happening (at
least nowadays :P).

Anyway, the differences we see here are just our digital legacy from
when the format was not reproducible yet.

The example tarball indeed only contains metadata changes related to the
ordering of filenames inside the structure. This is definitely stable
today.


PS: You can simply use diffoscope for such analysis; it was created for
this very purpose and is aware of metadata as well as content.

cheers,
Levente





Re: [aur-general] TU Application: Daniel M. Capella

2018-11-15 Thread Levente Polyak via aur-general
On 11/15/18 5:50 AM, Daniel M. Capella via aur-general wrote:
>> - tests are awesome <3 run them whenever possible! more is better!
>>   pulling sources from github is favorable when you get free tests
>>   and sometimes manpages/docs
> 
> Will work with the upstreams to distribute these. I prefer to use published
> offerings as they are what the authors intend to be used. GitHub autogenerated
> tarballs are also subject to change:
> https://marc.info/?l=openbsd-ports&m=151973450514279&w=2
> 

Well, the source tree is the source of truth, as that's where processed
stuff comes from :P

If you can't convince all upstreams to distribute their tests, and
especially if you can't convince them to distribute the sources for
generating docs, I would still say please go for the source tree, as the
value of such additional content is high. We love tests.

Side note: nowadays there are lots of Python and other projects that
git-sign their latest tags and commits but offer no other form of
detached signatures; this adds a big gain as well. Remember, two of your
packages had upstream tag signatures but I forgot to point them out.
If I can't convince them to provide detached signatures as well, I
nowadays just switch to pulling git and use ?signed.


>> python-soco:
>> - maybe distribute some docs as well like manpages from docs dir
> 
> I don't see any manpages there. This is a library.

make -C doc man
Manpages are not exclusive to tools; they exist for any kind of
documentation, including library APIs.
Try running:
man 3 sprintf

> 
>> razercfg:
>> - do we _really_ need to split razer mouse tool UI and daemon here?
>> doubt it tbh.
> 
> The UI is completely optional. ¯\_(ツ)_/¯

My point is, it's the same source, the packages are not huge, and there
is no crazy dependency hell.
Only split up if it really makes sense; if there is no real reason, we
keep them combined, like tools + libs + headers, and don't go as crazy
as Debian about splitting up everything.


> 
>> - CPPFLAGS are not respected and should be populated properly
>>   an upstream patch for that would be best
> 
> Will have to figure that out.

All you need in the PR is to add $CPPFLAGS where you already see $CFLAGS.
For the time being, either backport this patch or just export
CFLAGS="${CFLAGS} ${CPPFLAGS}" until it's done.

> 
> Thank you very much for the review. Go LDFLAGS is still on the todo. Packaging
> for Go has perhaps been more traumatizing than even Node.js.
> 


You're welcome, always a pleasure if people are happy about it :-)





Re: [aur-general] Auto-generated Github tarballs format change (Was: TU Application: Daniel M. Capella)

2018-11-15 Thread Bruno Pagani via aur-general
On 15/11/2018 at 10:52, Baptiste Jonglez wrote:
> On 15-11-18, Eli Schwartz via aur-general wrote:
>> On 11/14/18 11:50 PM, Daniel M. Capella via aur-general wrote:
>>> Quoting Levente Polyak via aur-general (2018-11-14 17:00:38)
 - tests are awesome <3 run them whenever possible! more is better!
   pulling sources from github is favorable when you get free tests
   and sometimes manpages/docs
>>> Will work with the upstreams to distribute these. I prefer to use published
>>> offerings as they are what the authors intend to be used. GitHub 
>>> autogenerated
>>> tarballs are also subject to change:
>>> https://marc.info/?l=openbsd-ports&m=151973450514279&w=2
>> I've seen the occasional *claim* that this happens, but I've yet to see
>> any actual case where this happens and it isn't because of upstream
>> force-pushing a tag.
> See https://bugs.archlinux.org/task/60382 for an example.
>
> I still had the old archive around so I spent some time comparing it with
> the new one:
>
> - I compared the checksum of each individual file in the archives, and
>   they were all identical
>
> - I compared the raw tar files after decompressing, and there were just a
>   few bytes that were moved around
>
> This really suggests a slight format change in the way the tarball was
> generated (could be file ordering).
>
> If you want to double check, here they are:
>
> - old archive from May 2017: 
> https://files.polyno.me/arch/kashmir-20150805-20170525.tar.gz
>
> - new archive: https://files.polyno.me/arch/kashmir-20150805.tar.gz
>
> Baptiste

But those are not tag tarballs though.

That being said, yes, the tarball format changed once in the past, on
purpose, so that it could actually be reproducible and allow things like
the "alternative local workflow" of
https://wiki.debian.org/Creating%20signed%20GitHub%20releases. I can't
remember when that happened, but per this page it was prior to April
2016. And AFAIK, it is not subject to change again, for this exact
reason. ;)

Regards,
Bruno






[aur-general] Auto-generated Github tarballs format change (Was: TU Application: Daniel M. Capella)

2018-11-15 Thread Baptiste Jonglez
On 15-11-18, Eli Schwartz via aur-general wrote:
> On 11/14/18 11:50 PM, Daniel M. Capella via aur-general wrote:
> > Quoting Levente Polyak via aur-general (2018-11-14 17:00:38)
> >> - tests are awesome <3 run them whenever possible! more is better!
> >>   pulling sources from github is favorable when you get free tests
> >>   and sometimes manpages/docs
> > 
> > Will work with the upstreams to distribute these. I prefer to use published
> > offerings as they are what the authors intend to be used. GitHub 
> > autogenerated
> > tarballs are also subject to change:
> > https://marc.info/?l=openbsd-ports=151973450514279=2
> 
> I've seen the occasional *claim* that this happens, but I've yet to see
> any actual case where this happens and it isn't because of upstream
> force-pushing a tag.

See https://bugs.archlinux.org/task/60382 for an example.

I still had the old archive around so I spent some time comparing it with
the new one:

- I compared the checksum of each individual file in the archives, and
  they were all identical

- I compared the raw tar files after decompressing, and there were just a
  few bytes that were moved around

This really suggests a slight format change in the way the tarball was
generated (could be file ordering).

If you want to double check, here they are:

- old archive from May 2017: 
https://files.polyno.me/arch/kashmir-20150805-20170525.tar.gz

- new archive: https://files.polyno.me/arch/kashmir-20150805.tar.gz

Baptiste

