On 21-Jun-08, at 9:37 PM, Brett Porter wrote:

Hi Oleg, all,

I haven't dug into any of the code yet, but I've read through the document you were working on. It seems positive and makes sense. I did get a bit lost on the meaning of the "Tree Builder" section, as it doesn't seem to describe the current tree or graph technique — perhaps it is meant to illustrate the possible combinations?

Anyway, a few things caught my attention...

You mentioned the inconsistency of whether 3.8.1 means "must be 3.8.1" vs "I'm using 3.8.1 right now, but I'm not really sure which versions it works with". Pretty much everyone putting POMs together means the latter, as they are not trained in specifying ranges, and because projects do not always follow versioning guidelines closely enough for compatibility to be measured. You also mentioned that some change to the way ranges are specified might be needed. I'm wondering what thoughts you have on how the way we specify dependencies needs to differ from today?
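To make the distinction concrete, here is a minimal sketch of Maven-style version range semantics — a bare version like "3.8.1" is only a soft suggestion, while "[3.8.1]" is a hard requirement. This is a deliberately simplified model (numeric dotted versions only, no qualifiers or multiple range sets), not Maven's actual `VersionRange` implementation:

```python
# Minimal sketch of Maven-style version range semantics.
# ASSUMPTIONS: numeric dotted versions only; no qualifiers;
# a single range per spec. Not the real Maven implementation.

def parse_version(v):
    # "3.8.1" -> (3, 8, 1) for simple tuple comparison
    return tuple(int(p) for p in v.split("."))

def satisfies(version, spec):
    """Check a version against a range spec.

    A bare version like "3.8.1" is a *soft* constraint ("I built with
    this, others may work"); "[3.8.1]" demands exactly 3.8.1;
    "[3.8,4.0)" is a half-open range.
    """
    v = parse_version(version)
    if not spec.startswith(("[", "(")):
        return True  # soft constraint: any version is acceptable
    lo_inclusive = spec[0] == "["
    hi_inclusive = spec[-1] == "]"
    lo, sep, hi = spec[1:-1].partition(",")
    if not sep:  # exact pin like "[3.8.1]"
        return v == parse_version(lo)
    if lo:
        bound = parse_version(lo)
        if v < bound if lo_inclusive else v <= bound:
            return False
    if hi:
        bound = parse_version(hi)
        if v > bound if hi_inclusive else v >= bound:
            return False
    return True

print(satisfies("3.8.2", "3.8.1"))      # soft constraint -> True
print(satisfies("3.8.2", "[3.8.1]"))    # exact pin -> False
print(satisfies("3.9.0", "[3.8,4.0)"))  # inside range -> True
print(satisfies("4.0.0", "[3.8,4.0)"))  # exclusive upper -> False
```

The point the sketch makes is that today's default (the soft form) carries almost no information for a resolver — which is exactly the inconsistency being discussed.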

I do think the compatibility question is an interesting one. We could theoretically build up a compatibility database in the central repository, even using clirr to make an initial assessment on Java artifacts. However, this is unfortunately never a binary decision either, as you may not be using the incompatible parts, and compatibility can break in non-API ways.


There will never be a heuristic that can replace people simply taking care with their APIs and their testing. Without that combination of effort on the part of the project, an external party trying to collect these metrics is probably not going to produce anything of great value. Retain API compatibility along with a high degree of test coverage and you probably have a useful metric; and really, if compatibility is not a conscious effort, the API will end up broken, even if inadvertently. The mechanical turk is not going to help us much there.

I'm curious about how these work in terms of remote retrieval (i.e., where consulting the metadata source is a costly operation). When I looked at p2 in September last year, they stored all the repository artifact metadata in a single, large file.

This is just one implementation of the MetadataRepository API. It's not very costly for them because they only deal with dependency information. Ours currently is costly because of the POM swizzling we do.

Are we expecting to need some partial or full resolution of the repository content?


When you deal with ranges, the number of possible paths grows very quickly, and the SAT solver is very fast. You could pre-digest dependency information, but pre-calculated subgraphs are probably not useful: one change to a range anywhere in a path and you're recalculating anyway.
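To illustrate why ranges blow up the search space, here is a toy sketch of resolution as a constraint problem. The artifacts, versions, and constraints are entirely hypothetical, and a brute-force enumeration stands in for the SAT solver (a real solver avoids exactly this combinatorial enumeration):

```python
from itertools import product

# Toy illustration of range resolution as a constraint problem.
# ASSUMPTIONS: hypothetical artifacts/versions; brute-force search
# stands in for the SAT solver used by p2. Not Maven's resolver.

versions = {                       # candidate versions per artifact
    "A": ["1.0"],
    "B": ["1.0", "1.1", "2.0"],
    "C": ["1.0", "1.1"],
}

# constraints[(artifact, its-version)] -> {dep: allowed versions}
constraints = {
    ("A", "1.0"): {"B": {"1.0", "1.1"}, "C": {"1.0", "1.1"}},
    ("B", "1.0"): {"C": {"1.0"}},
    ("B", "1.1"): {"C": {"1.1"}},
    ("B", "2.0"): {},
    ("C", "1.0"): {},
    ("C", "1.1"): {},
}

def resolve():
    # Try every combination of one version per artifact; return the
    # first assignment under which every active constraint holds.
    names = list(versions)
    for combo in product(*(versions[n] for n in names)):
        pick = dict(zip(names, combo))
        if all(pick[dep] in allowed
               for (art, ver), deps in constraints.items()
               if pick.get(art) == ver
               for dep, allowed in deps.items()):
            return pick
    return None

print(resolve())
```

Even in this tiny example the search space is the product of all candidate sets, and widening any one range (or changing any constraint along a path) changes which assignments are valid — which is why pre-calculated subgraphs buy you little.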

Do you think depth resolution is still relevant if we have a better method? I believe the only depth we really should care about is "1" - those immediately specified (and particularly those managed by depMgmt). With that in mind, how can the use of depMgmt help in reducing the problem space during resolution?

It will boil down to something far simpler. The SAT solver is guaranteed to find you a working solution. Every range is a set, and that set can contain exactly one version — i.e., be fixed. If you want the result weighted toward what you say in your POM, then fix the version. depMgmt should not change the calculation the way it does now; again, if you want a version fixed, then fix it. We will let the SAT solver find the solution and take the magic out.
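The "if you want it fixed then fix it" idea can be sketched very simply: a hard pin is just a singleton candidate set, intersected into the problem before any solving happens. The artifact name and versions below are hypothetical:

```python
# Sketch of "fix it to pin it": a depMgmt-style pin modeled as
# intersecting the candidate set with a singleton, pruning the
# search space up front. Names and versions are hypothetical.

candidates = {"junit": {"3.8.1", "3.8.2", "4.4"}}

def pin(artifact, version, cands):
    cands = dict(cands)  # leave the caller's mapping untouched
    cands[artifact] = cands[artifact] & {version}
    return cands

pinned = pin("junit", "3.8.2", candidates)
print(pinned["junit"])  # only one candidate remains for the solver
```

Modeled this way, depMgmt stops being a special rule inside the calculation and becomes plain input to it — which is the "taking the magic out" point above.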



You also mention OSGi resolution at the end. Is it your intent that one of the goals of the mechanism is to be able to be a complete OSGi resolver?

We already have an OSGi resolver in Tycho; this is not the same thing. What is in p2 uses SAT, but that is not part of OSGi itself. No one has made it clear whether that is what OSGi proper will eventually use, but right now the OSGi resolver is a simple state machine. What's in p2 is not an OSGi resolver, even though what it retrieves is ultimately fed into an OSGi state machine.



I'm also interested in seeing explicit listing of the problems we aim to solve to ensure we have objectives/scope. For example:
- lack of alternate conflict resolution in current artifact mechanism
- reduction in the amount of metadata and artifacts dragged in (i.e., stuff that could use Maven 2.0.9 still pulls down whatever version of Maven it was built against, even when it's not used)
- deterministic building over time

Just my initial thoughts - I'll follow your progress as you continue investigating :)

Cheers,
Brett

--
Brett Porter
[EMAIL PROTECTED]
http://blogs.exist.com/bporter/


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Thanks,

Jason

----------------------------------------------------------
Jason van Zyl
Founder,  Apache Maven
jason at sonatype dot com
----------------------------------------------------------

Three people can keep a secret provided two of them are dead.

 -- Unknown

