Re: [Pulp-dev] Pulp 3 release quality

2018-04-06 Thread Robin Chan
Dennis,

Thanks for putting this together. I don't see any responses on this thread
and take that to mean there were no concerns about this proposal.

Would this process/responsibility change need to be recorded anywhere? (Side
question: was this technically a PUP?)
"author of the PR would need to be responsible for making additional PRs to
pulp_file and/or pulp-smash to fix the smash tests"

I think this is minor enough to skip some process; however, I would like to
see the above quote go into the developer's guide. A new contributor
shouldn't have to comb through mailing list archives to understand this new
responsibility.

Thanks,
Robin

On Tue, Mar 27, 2018 at 8:22 PM, Dennis Kliban wrote:

> One of the requirements for this plan to work is making sure that pulp,
> pulp_file, and pulp-smash always continue working together. This means that
> any time we have a PR that breaks pulp-smash tests, the author of the PR
> would need to be responsible for making additional PRs to pulp_file and/or
> pulp-smash to fix the smash tests. To enforce this requirement using
> Travis, I have filed 2 tasks[0,1] in redmine that I would like to get
> groomed and added to the sprint. I plan on working on these tasks as I
> introduce a change that will result from resolving issue 3488[2].
>
> [0] https://pulp.plan.io/issues/3530
> [1] https://pulp.plan.io/issues/3531
> [2] https://pulp.plan.io/issues/3488
>
> On Fri, Mar 23, 2018 at 2:37 PM, Dennis Kliban wrote:
>
>> I've started putting together a Continuous Delivery of Pulp 3 page[0] on
>> our wiki.
>>
>> This page outlines a plan for how we can ensure and prove the quality of
>> Pulp 3 releases by relying on pulp-smash tests and unit tests.
>>
>> This plan enables anyone to improve the quality of Pulp 3 releases
>> through contributions to pulp-smash and unit tests.
>>
>> Please take a look at the plan and provide feedback on this thread or
>> feel free to make edits directly on the page.
>>
>> [0] https://pulp.plan.io/projects/pulp/wiki/Continuous_Delivery_of_Pulp_3
>>
>> -Dennis
>>
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Changesets Challenges

2018-04-06 Thread Dennis Kliban
On Fri, Apr 6, 2018 at 10:15 AM, Brian Bouterse wrote:

> Several plugins have started using the Changesets including pulp_ansible,
> pulp_python, pulp_file, and perhaps others. The Changesets provide several
> distinct points of value which are great, but there are two challenges I
> want to bring up. I want to focus only on the problem statements first.
>
> 1. There is redundant "differencing" code in all plugins. The Changeset
> interface requires the plugin writer to determine what units need to be
> added and those to be removed. This requires all plugin writers to write
> the same non-trivial differencing code over and over. For example, you can
> see the same non-trivial differencing code present in pulp_ansible,
> pulp_file, and pulp_python.
> Line-wise, this "differencing" code makes up a large portion (maybe 50%) of
> the sync code itself in each plugin.
>
>
That is definitely a problem. We should address this.


> 2. Plugins can't do end-to-end stream processing. The Changesets
> themselves do stream processing, but when you call into
> changeset.apply_and_drain() you have to have fully parsed the metadata
> already. Currently when fetching all metadata from Galaxy, pulp_ansible
> takes about 380 seconds (6+ min). This means that the actual Changeset
> content downloading starts 380 seconds later than it could. At the heart of
> the problem, the fetching+parsing of the metadata is not part of the stream
> processing.
>
>
This is the same problem we currently have in Pulp 2. We should address
this.


> Do you see the same challenges I do? Are these the right problem
> statements? I think with clear problem statements a solution will be easy
> to see and agree on.
>
>
Yes, I do. You described the problems very well.


> Thanks!
> Brian
>


Re: [Pulp-dev] Content paths in Pulp 3

2018-04-06 Thread Austin Macdonald
IMO:
We should suggest v3/content///. [Proposal 1] We should mention the other
options, with their pros and cons, in the plugin writer docs.

On Thu, Apr 5, 2018 at 10:54 AM, David Davis wrote:

>
> [0] https://pulp.plan.io/issues/3407
>

The correct link is: https://pulp.plan.io/issues/3472


[Pulp-dev] Changesets Challenges

2018-04-06 Thread Brian Bouterse
Several plugins have started using the Changesets including pulp_ansible,
pulp_python, pulp_file, and perhaps others. The Changesets provide several
distinct points of value which are great, but there are two challenges I
want to bring up. I want to focus only on the problem statements first.

1. There is redundant "differencing" code in all plugins. The Changeset
interface requires the plugin writer to determine what units need to be
added and those to be removed. This requires all plugin writers to write
the same non-trivial differencing code over and over. For example, you can
see the same non-trivial differencing code present in pulp_ansible,
pulp_file, and pulp_python.
Line-wise, this "differencing" code makes up a large portion (maybe 50%) of
the sync code itself in each plugin.
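The duplicated logic is essentially a key-based set difference. Here is a
minimal sketch of what each plugin currently re-implements; the names and
unit structures are illustrative assumptions, not the actual Pulp 3 plugin
API:

```python
def compute_changes(remote_units, local_units, key=lambda u: u["natural_key"]):
    """Key-based set difference: what to add and what to remove.

    remote_units: units described by the remote repository's metadata
    local_units: units currently in the Pulp repository version
    (Hypothetical structures for illustration only.)
    """
    remote_by_key = {key(u): u for u in remote_units}
    local_by_key = {key(u): u for u in local_units}

    # Present remotely but not locally -> add; present locally only -> remove.
    additions = [u for k, u in remote_by_key.items() if k not in local_by_key]
    removals = [u for k, u in local_by_key.items() if k not in remote_by_key]
    return additions, removals
```

If something like this lived in the Changeset layer (with the plugin
supplying only an iterator of remote units and a key function), the
per-plugin duplication would disappear.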

2. Plugins can't do end-to-end stream processing. The Changesets themselves
do stream processing, but when you call into changeset.apply_and_drain()
you have to have fully parsed the metadata already. Currently when fetching
all metadata from Galaxy, pulp_ansible takes about 380 seconds (6+ min).
This means that the actual Changeset content downloading starts 380 seconds
later than it could. At the heart of the problem, the fetching+parsing of
the metadata is not part of the stream processing.
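To make the contrast concrete, here is a hedged sketch (hypothetical
function names, not the Changeset API) of batched versus streamed metadata
handling; in the streamed version, content downloading can begin as soon as
the first metadata page is parsed rather than after all metadata is fetched:

```python
def sync_batched(fetch_all_metadata, download):
    # All metadata is fetched and parsed up front (~380s against Galaxy)
    # before the first content download can start.
    for unit in fetch_all_metadata():
        download(unit)


def sync_streamed(metadata_pages, parse_page, download):
    # Metadata pages are consumed lazily; each parsed unit is handed to
    # the downloader immediately, so fetching and downloading overlap.
    for page in metadata_pages:
        for unit in parse_page(page):
            download(unit)
```

Both produce the same downloads; only the streamed version interleaves
metadata parsing with content downloading.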

Do you see the same challenges I do? Are these the right problem
statements? I think with clear problem statements a solution will be easy
to see and agree on.

Thanks!
Brian