reporting jasper reports

2007-08-01 Thread Brett Porter

Hi all,

This is mainly for Deng and Teody as I've seen them working through  
the issue for reporting... I took a look at the latest patch and it's  
looking pretty good.


I did go and check the JasperReports license, though, and it  
appears from the POM that it is LGPL.


Though it's not official (yet), we shouldn't be distributing it  
according to the ASF policy: http://people.apache.org/~cliffs/3party.html#transition


So, I think we have 3 options if we continue to do this:
- put it in a separate module (and profile) that isn't distributed  
with continuum, but can be built manually and installed
- require the user to drop jasperreports into WEB-INF/lib (and  
gracefully fail if they don't)
- come up with a whiz-bang addon installer that can get them to  
confirm the license and grab the jasper stuff from the repository (a  
bit much for now :)


I'm thinking 1) is the best way to go, and provide a bland, default  
implementation of the reporting pages that doesn't use jasper (just  
spit out a table of everything).
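
To illustrate, option 1) might look something like this in the top-level
POM (the module name is made up; this is purely a sketch):

<!-- sketch only: a profile that adds the optional jasper-based module -->
<profiles>
  <profile>
    <id>jasper-reports</id>
    <modules>
      <!-- hypothetical module, not part of the default distribution -->
      <module>reporting-jasper</module>
    </modules>
  </profile>
</profiles>

Users who accept the LGPL terms could then build and install it manually
with something like "mvn -Pjasper-reports install", while the default
distribution stays jasper-free.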


WDYT?

Cheers,
Brett


Re: reporting jasper reports

2007-08-01 Thread Maria Odea Ching

Brett Porter wrote:

Hi all,

This is mainly for Deng and Teody as I've seen them working through 
the issue for reporting... I took a look at the latest patch and it's 
looking pretty good.


I did go and check the JasperReports license, though, and it 
appears from the POM that it is LGPL.


Though it's not official (yet), we shouldn't be distributing it 
according to the ASF policy: 
http://people.apache.org/~cliffs/3party.html#transition


So, I think we have 3 options if we continue to do this:
- put it in a separate module (and profile) that isn't distributed 
with continuum, but can be built manually and installed


I take it this should be archiva not continuum? :)
anyway, +1 to this!

- require the user to drop jasperreports into WEB-INF/lib (and 
gracefully fail if they don't)
- come up with a whiz-bang addon installer that can get them to 
confirm the license and grab the jasper stuff from the repository (a 
bit much for now :)


I'm thinking 1) is the best way to go, and provide a bland, default 
implementation of the reporting pages that doesn't use jasper (just 
spit out a table of everything).


WDYT?

Cheers,
Brett



Thanks,
Deng


Archiva releases (Take 2)

2007-08-01 Thread Maria Odea Ching

Hi All,

We (Brett, Joakim and I) have segregated the issues in Jira to prep for 
the upcoming 1.0 release (finally! :-) ). Anyway, there'll be two beta 
releases first before 1.0.


So..here's the plan (excerpted from Brett):
- finish all issues for 1.0-beta-1
- then move on to 1.0-beta-2 issues for the next release
- do a big bug bash to find the problems in it
- decide on these found bugs for 1.0-beta-3 or 1.0.x (known issues)


Btw, I'll prepare 1.0-beta-1 for release on August 6 (that'll be Monday 
my time), and this will include the following:

1. Resolved/Closed:
- MRM-290 (Ability to pre-configure the Jetty port in conf/plexus.xml)
- MRM-326 (Adding/Editing repositories doesn't have validation)
- MRM-425 (Search and Browse do not work for snapshots)
- MRM-426 (Search does not work for snapshots because of different 
version values in index and database when the snapshot version is unique)


2. In Progress/Open:
- MRM-143 (Improve error reporting on corrupt jars, poms, etc)
- MRM-275 (add remove old snapshots Scheduler)
- MRM-294 (Repository purge feature for snapshots)
- MRM-329 (The Reports link gives an HTTP 500)
- MRM-347 (Undefined ${appserver.home} and ${appserver.base})
- MRM-373 (Unable to delete the pre-configured example network proxy)
- MRM-412 (Add support for maven 1 (legacy) request to access a maven2 
(default layout) repo )

- MRM-428 (Managed and remote repositories with same name causes problems)
- MRM-429 (Find artifact does not work when the applet is disabled)
- MRM-430 (Archiva always writes to ~/.m2/archiva.xml)

Everyone okay with this? :)


Thanks,
Deng

Re: Archiva releases (Take 2)

2007-08-01 Thread Fabrice Bellingard
I've closed MRM-373 (Unable to delete the pre-configured example network
proxy), which was no longer valid.

Fabrice.

On 8/1/07, Maria Odea Ching [EMAIL PROTECTED] wrote:

 Hi All,

 We (Brett, Joakim and I) have segregated the issues in Jira to prep for
 the upcoming 1.0 release (finally! :-) ). Anyway, there'll be two beta
 releases first before 1.0.

 So..here's the plan (excerpted from Brett):
 - finish all issues for 1.0-beta-1
 - then move on to 1.0-beta-2 issues for the next release
 - do a big bug bash to find the problems in it
 - decide on these found bugs for 1.0-beta-3 or 1.0.x (known issues)


 Btw, I'll prepare 1.0-beta-1 for release on August 6 (that'll be Monday
 my time), and this will include the following:
 1. Resolved/Closed:
 - MRM-290 (Ability to pre-configure the Jetty port in conf/plexus.xml)
 - MRM-326 (Adding/Editing repositories doesn't have validation)
 - MRM-425 (Search and Browse do not work for snapshots)
 - MRM-426 (Search does not work for snapshots because of different
 version values in index and database when the snapshot version is unique)

 2. In Progress/Open:
 - MRM-143 (Improve error reporting on corrupt jars, poms, etc)
 - MRM-275 (add remove old snapshots Scheduler)
 - MRM-294 (Repository purge feature for snapshots)
 - MRM-329 (The Reports link gives an HTTP 500)
 - MRM-347 (Undefined ${appserver.home} and ${appserver.base})
 - MRM-373 (Unable to delete the pre-configured example network proxy)
 - MRM-412 (Add support for maven 1 (legacy) request to access a maven2
 (default layout) repo )
 - MRM-428 (Managed and remote repositories with same name causes problems)
 - MRM-429 (Find artifact does not work when the applet is disabled)
 - MRM-430 (Archiva always writes to ~/.m2/archiva.xml)

 Everyone okay with this? :)


 Thanks,
 Deng


Re: Archiva releases (Take 2)

2007-08-01 Thread Ludovic Maitre

Hi Maria, all,

Although I'm a new user of Archiva, I would like to know when in the 
roadmap the team plans to fix issues related to running Archiva inside 
Tomcat (issues like http://jira.codehaus.org/browse/MRM-323)? Is this 
something important for the team? Do you expect users to deploy 
Archiva standalone in production?

Best regards,

Maria Odea Ching wrote:

Hi All,

We (Brett, Joakim and I) have segregated the issues in Jira to prep 
for the upcoming 1.0 release (finally! :-) ). Anyway, there'll be two 
beta releases first before 1.0.


So..here's the plan (excerpted from Brett):
- finish all issues for 1.0-beta-1
- then move on to 1.0-beta-2 issues for the next release
- do a big bug bash to find the problems in it
- decide on these found bugs for 1.0-beta-3 or 1.0.x (known issues)


Btw, I'll prepare 1.0-beta-1 for release on August 6 (that'll be 
Monday my time), and this will include the following:

1. Resolved/Closed:
- MRM-290 (Ability to pre-configure the Jetty port in conf/plexus.xml)
- MRM-326 (Adding/Editing repositories doesn't have validation)
- MRM-425 (Search and Browse do not work for snapshots)
- MRM-426 (Search does not work for snapshots because of different 
version values in index and database when the snapshot version is unique)


2. In Progress/Open:
- MRM-143 (Improve error reporting on corrupt jars, poms, etc)
- MRM-275 (add remove old snapshots Scheduler)
- MRM-294 (Repository purge feature for snapshots)
- MRM-329 (The Reports link gives an HTTP 500)
- MRM-347 (Undefined ${appserver.home} and ${appserver.base})
- MRM-373 (Unable to delete the pre-configured example network proxy)
- MRM-412 (Add support for maven 1 (legacy) request to access a maven2 
(default layout) repo )
- MRM-428 (Managed and remote repositories with same name causes 
problems)

- MRM-429 (Find artifact does not work when the applet is disabled)
- MRM-430 (Archiva always writes to ~/.m2/archiva.xml)

Everyone okay with this? :)


Thanks,
Deng

--
Cordialement,
Ludo - http://www.ubik-products.com
---
Love as a principle and order as a basis; progress as a goal (A. Comte)



Re: Archiva releases (Take 2)

2007-08-01 Thread Maria Odea Ching

Ok, thanks Fabrice :)
I'll add that to the list.

-Deng

Fabrice Bellingard wrote:

I've closed MRM-373 (Unable to delete the pre-configured example network
proxy), which was no longer valid.

Fabrice.

On 8/1/07, Maria Odea Ching [EMAIL PROTECTED] wrote:
  

Hi All,

We (Brett, Joakim and I) have segregated the issues in Jira to prep for
the upcoming 1.0 release (finally! :-) ). Anyway, there'll be two beta
releases first before 1.0.

So..here's the plan (excerpted from Brett):
- finish all issues for 1.0-beta-1
- then move on to 1.0-beta-2 issues for the next release
- do a big bug bash to find the problems in it
- decide on these found bugs for 1.0-beta-3 or 1.0.x (known issues)


Btw, I'll prepare 1.0-beta-1 for release on August 6 (that'll be Monday
my time), and this will include the following:
1. Resolved/Closed:
- MRM-290 (Ability to pre-configure the Jetty port in conf/plexus.xml)
- MRM-326 (Adding/Editing repositories doesn't have validation)
- MRM-425 (Search and Browse do not work for snapshots)
- MRM-426 (Search does not work for snapshots because of different
version values in index and database when the snapshot version is unique)

2. In Progress/Open:
- MRM-143 (Improve error reporting on corrupt jars, poms, etc)
- MRM-275 (add remove old snapshots Scheduler)
- MRM-294 (Repository purge feature for snapshots)
- MRM-329 (The Reports link gives an HTTP 500)
- MRM-347 (Undefined ${appserver.home} and ${appserver.base})
- MRM-373 (Unable to delete the pre-configured example network proxy)
- MRM-412 (Add support for maven 1 (legacy) request to access a maven2
(default layout) repo )
- MRM-428 (Managed and remote repositories with same name causes problems)
- MRM-429 (Find artifact does not work when the applet is disabled)
- MRM-430 (Archiva always writes to ~/.m2/archiva.xml)

Everyone okay with this? :)


Thanks,
Deng



Re: Database model change for beta-1

2007-08-01 Thread Emmanuel Venisse

I think a null value will be enough. We'll need to test for the null value in 
the UI, but that isn't a problem.

Emmanuel

Brett Porter wrote:

Hi,

I've narrowed down the problem in upgrading from alpha-2 to beta-1 to 
the following model change:


<field>
  <name>buildDefinition</name>
  <version>1.1.0+</version>
  <association xml.reference="true" stash.part="true" jpox.dependent="false">
    <type>BuildDefinition</type>
  </association>
</field>


The problem here is that Continuum has no idea how to pre-populate the 
value for this. Should we/can we simply add a default value of null for 
this? Or will the data management app need some smarts to set it to the 
default build definition where it doesn't exist?


Thanks,
Brett

Re: Solving the notification problem

2007-08-01 Thread Emmanuel Venisse

For a project notifier, I think we can keep what we have currently, but for a 
group notifier, we can send a single mail per project group.
The mail can be sent after the build of the last project of the group; I don't think it will be a problem to know whether the project is the last one, and we won't need to modify the DB schema for this new 
feature.


I'd like to keep the current usage, so we can add a new parameter in 
the Continuum conf where the admin will choose whether mails are sent one by one or by 
project group.
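
To sketch the idea (the parameter name here is invented; nothing like it
exists in Continuum yet):

<!-- hypothetical entry in the Continuum configuration -->
<configuration>
  <!-- per-project: current behaviour, one mail per project build;
       per-group: one mail after the last project of the group builds -->
  <mailNotificationMode>per-group</mailNotificationMode>
</configuration>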

In 1.2, we could remove it and let users choose how they want to receive 
notifications.

Emmanuel

Brett Porter wrote:

Hi,

I would like to see us address, in 1.1, the problem of mass notification, 
which tends to be a blight on Continuum when dealing with projects with lots 
of modules. I think with this simple usability improvement, the 
perception of the quality of the overall system would be greatly improved.


So I wanted to start this thread to brainstorm ideas for what we might 
do to make it less noisy, but just as effective. I have some thoughts, 
but I'm interested to hear others first.


What do others think? Should this be in 1.1, and if so how should it work?

Cheers,
Brett

Re: Solving the notification problem

2007-08-01 Thread Brett Porter

On 02/08/2007, at 7:46 AM, Emmanuel Venisse wrote:

For a project notifier, I think we can keep what we have currently,  
but for a group notifier, we can send a single mail per project group.
The mail can be sent after the build of the last project of the  
group; I don't think it will be a problem to know whether the project is  
the last one, and we won't need to modify the DB schema for this new  
feature.




Sounds good to me. That and eliminating the error condition would be  
great.


I'd like to keep the current usage, so we can add a new  
parameter in the Continuum conf where the admin will choose whether mails are  
sent one by one or by project group.


Do you think this is a continuum conf, or a group notifier conf?

- Brett


Defining a custom lifecycle

2007-08-01 Thread Sebastien Brunot
Hi all,

Is it possible to develop a plugin that defines a custom type of
artifact (a test campaign in our case), a custom lifecycle (with, for
example, phases such as pre-start-servers, start-servers,
post-start-servers, pre-initialize-environment,
initialize-environment, etc.) and a default mapping to the lifecycle?
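
For reference, the closest mechanism I've found for the custom artifact
type and default binding is the usual custom-packaging one, which goes in
the plugin's META-INF/plexus/components.xml, roughly like this (the type
and goal names below are invented for the example):

<component-set>
  <components>
    <!-- register the new artifact type / packaging -->
    <component>
      <role>org.apache.maven.artifact.handler.ArtifactHandler</role>
      <role-hint>test-campaign</role-hint>
      <implementation>org.apache.maven.artifact.handler.DefaultArtifactHandler</implementation>
      <configuration>
        <type>test-campaign</type>
      </configuration>
    </component>
    <!-- default mapping of plugin goals to lifecycle phases for that packaging -->
    <component>
      <role>org.apache.maven.lifecycle.mapping.LifecycleMapping</role>
      <role-hint>test-campaign</role-hint>
      <implementation>org.apache.maven.lifecycle.mapping.DefaultLifecycleMapping</implementation>
      <configuration>
        <phases>
          <package>com.example.plugins:maven-test-campaign-plugin:package-campaign</package>
          <integration-test>com.example.plugins:maven-test-campaign-plugin:run-campaign</integration-test>
        </phases>
      </configuration>
    </component>
  </components>
</component-set>

The plugin then has to be declared with extensions set to true in the
consuming project. As far as I can tell this only binds goals to the
standard phase names; entirely new phase names like pre-start-servers
would need something beyond a plain mapping.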

Thanks for your help,

Sebastien Brunot
 
Make simple things simple before making complex things possible (David
S. Platt in Why Software Sucks ?)
and then... Make complex things simple and accessible (Steve Demuth)

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



how we handle JIRA versions

2007-08-01 Thread Brett Porter

Hi,

A while back I promised to write up what we are doing with jira  
versions now, and finally got around to it. In the process, I came up  
with a couple of tweaks we could make (see actions). Here it is in  
point form - does everyone agree this is the process we are following  
now? Missing anything?


- [ ] versions:
- [ ] 2.0.8 - the next release, currently being worked on for the
  2.0.x branch
- [ ] 2.0.x - issues that are likely to be fixed in the 2.0.x
  series, but not yet scheduled
- [ ] 2.1-alpha-1 - the next release, currently being worked on for
  trunk
- [ ] 2.1-alpha-2 - the release after next, and so on
- [ ] 2.1 - issues to fix by the 2.1 final release
- [ ] 2.1.x - issues to list as known issues in 2.1, and to be
  fixed in the releases after 2.1.
- [ ] 2.2 - issues to fix in the 2.2 release (not yet broken down
  as it is a future release)
- [ ] 2.x - issues to fix in later 2.x releases (not yet scheduled)
- [ ] Backlog - issues that have not been reviewed for a version
  assignment (and may be duplicates, won't fix, unreproducible,
  etc.)
- [ ] Unscheduled - new issues that haven't been reviewed yet
- [ ] actions
- [ ] rename 2.1.x to 2.1
- [ ] create 2.1.x after 2.1
- [ ] rename 2.2.x to 2.2
- [ ] create 2.x
- [ ] take a knife to 2.1 (currently 2.1.x) which has 254 issues
- [ ] rename Reviewed Pending Version Assignment to Backlog
- [ ] move all documentation issues either to the site project
  (this should all be done by now), or to a scheduled version
  (or the backlog)
- [ ] create a shared jira and move the shared component issues to
  that.
- [ ] workflow
- [ ] for a new issue in unscheduled
- [ ] should attempt to review them quickly and regularly
- [ ] first action is to attempt to resolve reasonably
  (duplicate, won't fix if it's inappropriate, or
  incomplete if there is not enough detail)
- [ ] double check priority and other details like affects
  version and component and prompt for more information if
  needed
- [ ] all issues should have an affects version(s) and
  component(s)
- [ ] assign a version depending on whether it's a bug or a
  feature, and what its severity is
- [ ] unless it is a regression in the current code, it should
  not be assigned to the current version
- [ ] for an issue in backlog
- [ ] periodically review issues related to other ongoing work
  to attempt to close duplicates or assign to an
  appropriate version
- [ ] for an issue in the current release that could be bumped
- [ ] should not be done if it's a blocker or a regression
- [ ] can be done en masse for remaining issues when a release
  is called based on an agreed date and nothing left in
  scheduled features/blockers/regressions list
- [ ] can be done for individual issues earlier than that if
  time runs short or priority becomes reduced
- [ ] planning for the next release
- [ ] during the previous release or after it's complete,
  planning for the next one should occur
- [ ] should reasonably prevent adding new issues to a release
  once it becomes the current one (unless the issue is a
  blocker or regression)
- [ ] create a new version and pull back from the generic
  bucket (for 2.1-alpha-2, these are taken from 2.1, for
  2.0.8 they are taken from 2.0.x, for 2.1's first cut they
  are taken from 2.x).
- [ ] use votes, priorities and effort/relatedness to other
  issues to determine which issues to schedule
- [ ] closing an issue
- [ ] if the resolution is other than fixed, the fix for
  version should be unset to make the release notes more accurate
- [ ] if set to fixed, the fix for version MUST be set
- [ ] documentation issues
- [ ] documentation is versioned, while the project site is not
- [ ] the project site should have its own jira project
- [ ] documentation issues should be scheduled like any other
  component of the system
- [ ] working on issues
- [ ] always assign to self before starting to avoid conflict
  and give a heads up
- [ ] setting the issue in progress is preferable, esp. for a
  long running task, once the work is actually under way.
- [ ] attempt to keep issues small and completable with a
  commit rather than leaving open (particularly with a
  dangling assignment that you are no longer working on)

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Re: RDF/XML For Repository Information

2007-08-01 Thread Jason van Zyl


On 1 Aug 07, at 12:25 AM, Shane Isbell wrote:


I would like to see if there is any general interest from the Maven
community in using RDF for storing and retrieving of repository  
information.


As the only means, and not accessed via some API shielding the  
underlying store then my vote will always be -1. I hope that's not  
what you've done with NMaven as that would be a fatal flaw. I'm  
assuming there is some API sitting on top of it.


I switched NMaven's resolver implementation to one using RDF and am  
happy
with the results. This implementation allows: 1) easily extending  
meta-data,


Which I'm always skeptical of, as we potentially wind up with schisms,  
and I'm not sure what kind of extension you need for purely  
dependency information, which is what the resolver is concerned with.


in my case for attaching requirements to an artifact; 2) writing  
queries
against the repo, as opposed to reading and manipulating the  
hierarchy of

poms. This also results in cleaner, simpler code;


This should be done with an API, not against a fixed datastore. Using  
RDF and only RDF is not something I would ever support because I know  
of two implementations of repository managers that use their own  
internal format. Proximity uses one and I use Lucene indices so the  
unifier will be an API.



3) exporting all the
meta-data to a single RDF/XML file, which has helped me with  
debugging and
understanding the entire repository. A future benefit would be the  
ability

to run distributed queries across multiple repos.


It's sounding like you've built this on RDF, which I don't think is  
wise at all. For example, this is not hard with any underlying store  
with the right API. I don't think it would be hard for you to use an  
API though. I'll never support a single storage format that isn't  
accessed via an API.




One of the implications is that while the pom is still used for  
building,
the local and remote repositories would just contain RDF/XML files:  
this
would, of course, be a big change. I would just like to run this  
idea by the
general group; if there is enough interest, I would be happy to do  
some
prototype work to see how RDF would work within Maven core. You can  
look at

one of NMaven's core classes that used RDF here:
https://svn.apache.org/repos/asf/incubator/nmaven/trunk/components/dotnet-dao/project/src/main/java/org/apache/maven/dotnet/dao/impl/




As a backing store for a rework of the artifact API, sure; as the  
primary means, I'd never support that myself.



Regards,
Shane


Thanks,

Jason

--
Jason van Zyl
Founder and PMC Chair, Apache Maven
jason at sonatype dot com
--




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[MNG-3043] Allow 'mvn test' to work with test-jar dependencies in a reactor

2007-08-01 Thread Jamie Whitehouse
I think this bug is related to an issue I'm currently having but I'd like to
confirm.

From the Getting Started page
(http://maven.apache.org/guides/getting-started/index.html)
there's mention that when performing a multi-module build it is
not required that you run install to successfully perform these steps - you
can run package on its own and the artifacts in the reactor will be used
from the target directories instead of the local repository.

This doesn't seem to be true for dependencies on attached tests
(http://maven.apache.org/guides/mini/guide-attached-tests.html).

An example using the following multi-module structure:
|+POM.XML   -- multi-module and parent pom (root)
|
|--A
|  |+POM.XML   -- attaches tests for other projects to use
|
|--B
|  |+POM.XML
|
|--C   -- depends on test classes from A
   |+POM.XML

Results from running at the root:
mvn clean package - project c fails stating Failed to resolve artifact.
org.example:a:test-jar:tests:1.0.0-SNAPSHOT
mvn clean install - build successful, presumably since the test artifact was
installed prior to module 'c' being built
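
For reference, the wiring behind this layout follows the attached-tests
guide linked above; roughly (coordinates taken from the error message):

In A's pom.xml, attach the test jar:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>test-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>

In C's pom.xml, depend on it:

<dependency>
  <groupId>org.example</groupId>
  <artifactId>a</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>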

I know ideally the test infrastructure would be its own project/module, but
at the moment this isn't feasible (and I assume the build would work fine
under this model).

Is there a workaround for this? Is this a manifestation of issue
MNG-3043 (http://jira.codehaus.org/browse/MNG-3043)?

Thanks,
Jamie.


Re: RDF/XML For Repository Information

2007-08-01 Thread Jason van Zyl

A couple more points I thought of:

There is another implementation that uses JSR 170 so it would be very  
hard to have any level of interoperability without an API. I  
certainly don't want that being RDF.


There are plenty of good tools that can be leveraged; for example, I  
use Lucene because it's easy to do incremental updates, since it can  
easily treat many disparate files as a single logical index. I don't  
want to be forced, and I don't want anyone else to be forced, to use  
RDF directly.


For any sort of distribution things like Terracotta can be leveraged,  
or you might use a tool like Hadoop.


The only unifier for all these tools that have cropped up to date is  
a good API.


I'll wait for you to answer but I really hope you didn't bind NMaven  
to RDF.


On 1 Aug 07, at 1:17 PM, Jason van Zyl wrote:



On 1 Aug 07, at 12:25 AM, Shane Isbell wrote:


I would like to see if there is any general interest from the Maven
community in using RDF for storing and retrieving of repository  
information.


As the only means, and not accessed via some API shielding the  
underlying store then my vote will always be -1. I hope that's not  
what you've done with NMaven as that would be a fatal flaw. I'm  
assuming there is some API sitting on top of it.


I switched NMaven's resolver implementation to one using RDF and  
am happy
with the results. This implementation allows: 1) easily extending  
meta-data,


Which I'm always skeptical of as we potentially wind up with  
schisms and I'm not sure what kind of extension you need for purely  
dependency information which the resolver is concerned with.


in my case for attaching requirements to an artifact; 2) writing  
queries
against the repo, as opposed to reading and manipulating the  
hierarchy of

poms. This also results in cleaner, simpler code;


This should be done with an API, not against a fixed datastore.  
Using RDF and only RDF is not something I would ever support  
because I know of two implementations of repository managers that  
use their own internal format. Proximity uses one and I use Lucene  
indices so the unifier will be an API.



3) exporting all the
meta-data to a single RDF/XML file, which has helped me with  
debugging and
understanding the entire repository. A future benefit would be the  
ability

to run distributed queries across multiple repos.


It's sounding like you've built this on RDF, which I don't think is  
wise at all. For example this is not hard with any underlying store  
with the right API. I don't think it would be hard for you to use  
an API though. I'll never support a single storage format that  
isn't accessed via an API.




One of the implications is that while the pom is still used for  
building,
the local and remote repositories would just contain RDF/XML  
files: this
would, of course, be a big change. I would just like to run this  
idea by the
general group; if there is enough interest, I would be happy to do  
some
prototype work to see how RDF would work within Maven core. You  
can look at

one of NMaven's core classes that used RDF here:
https://svn.apache.org/repos/asf/incubator/nmaven/trunk/components/dotnet-dao/project/src/main/java/org/apache/maven/dotnet/dao/impl/




As a backing store for a rework of the artifact API, sure; as the  
primary means, I'd never support that myself.



Regards,
Shane


Thanks,

Jason

--
Jason van Zyl
Founder and PMC Chair, Apache Maven
jason at sonatype dot com
--




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Thanks,

Jason

--
Jason van Zyl
Founder and PMC Chair, Apache Maven
jason at sonatype dot com
--




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [MNG-3043] Allow 'mvn test' to work with test-jar dependencies in a reactor

2007-08-01 Thread Piotr Tabor

Hi Jamie,

Please, look at this issue: http://jira.codehaus.org/browse/MJAR-75 
(Jason  - please, look at this issue / patch)


I think that if it is applied and you run the tests after the package 
phase (run mvn package to fire the tests), the tests will use the new jar.

Thanks,
Piotr Tabor




Jamie Whitehouse wrote:

I think this bug is related to an issue I'm currently having but I'd like to
confirm.

From the Getting Started page
(http://maven.apache.org/guides/getting-started/index.html)
there's mention that when performing a multi-module build it is
not required that you run install to successfully perform these steps - you
can run package on its own and the artifacts in the reactor will be used
from the target directories instead of the local repository.

This doesn't seem to be true for dependencies on attached tests
(http://maven.apache.org/guides/mini/guide-attached-tests.html).

An example using the following multi-module structure:
|+POM.XML   -- multi-module and parent pom (root)
|
|--A
|  |+POM.XML   -- attaches tests for other projects to use
|
|--B
|  |+POM.XML
|
|--C   -- depends on test classes from A
   |+POM.XML

Results from running at the root:
mvn clean package - project c fails stating Failed to resolve artifact.
org.example:a:test-jar:tests:1.0.0-SNAPSHOT
mvn clean install - build successful, presumably since the test artifact was
installed prior to module 'c' being built

I know ideally the test infrastructure would be its own project/module, but
at the moment this isn't feasible (and I assume the build would work fine
under this model).

Is there a workaround for this? Is this a manifestation of issue
MNG-3043 (http://jira.codehaus.org/browse/MNG-3043)?

Thanks,
Jamie.

  



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: how we handle JIRA versions

2007-08-01 Thread Dennis Lundberg

Excellent stuff Brett. Let me know if I can help.

Most of this is equally important for plugins and other Maven sub-projects. 
We should try to make an additional, more general description 
of versions that is not tied to MNG.


I have a couple of questions, see inline...

Brett Porter wrote:

Hi,

A while back I promised to write up what we are doing with jira versions 
now, and finally got around to it. In the process, I came up with a 
couple of tweaks we could make (see actions). Here it is in point form 
- does everyone agree this is the process we are following now? Missing 
anything?


- [ ] versions:
- [ ] 2.0.8 - the next release, currently being worked on for the
  2.0.x branch
- [ ] 2.0.x - issues that are likely to be fixed in the 2.0.x
  series, but not yet scheduled
- [ ] 2.1-alpha-1 - the next release, currently being worked on for
  trunk
- [ ] 2.1-alpha-2 - the release after next, and so on
- [ ] 2.1 - issues to fix by the 2.1 final release
- [ ] 2.1.x - issues to list as known issues in 2.1, and to be
  fixed in the releases after 2.1.
- [ ] 2.2 - issues to fix in the 2.2 release (not yet broken down
  as it is a future release)
- [ ] 2.x - issues to fix in later 2.x releases (not yet scheduled)
- [ ] Backlog - issues that have not been reviewed for a version
  assignment (and may be duplicates, won't fix, unreproducible,
  etc.)
- [ ] Unscheduled - new issues that haven't been reviewed yet


Hmm, has anyone looked at issues that are in Backlog? If not, what is 
the difference between Backlog and Unassigned?



- [ ] actions
- [ ] rename 2.1.x to 2.1
- [ ] create 2.1.x after 2.1


Don't we need 2.1.x *before* 2.1 is released so that we can move known 
issues to it before the release?



- [ ] rename 2.2.x to 2.2
- [ ] create 2.x
- [ ] take a knife to 2.1 (currently 2.1.x) which has 254 issues
- [ ] rename Reviewed Pending Version Assignment to Backlog
- [ ] move all documentation issues either to the site project
  (this should all be done by now), or to a scheduled version
  (or the backlog)
- [ ] create a shared jira and move the shared component issues to
  that.


+1


- [ ] workflow
- [ ] for a new issue in unscheduled
- [ ] should attempt to review them quickly and regularly
- [ ] first action is to attempt to resolve reasonably
  (duplicate, won't fix if it's inappropriate, or
  incomplete if there is not enough detail)
- [ ] double check priority and other details like affects
  version and component and prompt for more information if
  needed
- [ ] all issues should have an affects version(s) and
  component(s)
- [ ] assign a version depending on whether it's a bug or a
  feature, and what its severity is
- [ ] unless it is a regression in the current code, it should
  not be assigned to the current version
- [ ] for an issue in backlog
- [ ] periodically review issues related to other ongoing work
  to attempt to close duplicates or assign to an
  appropriate version
- [ ] for an issue in the current release that could be bumped
- [ ] should not be done if it's a blocker or a regression
- [ ] can be done en masse for remaining issues when a release
  is called based on an agreed date and nothing left in
  scheduled features/blockers/regressions list
- [ ] can be done for individual issues earlier than that if
  time runs short or priority becomes reduced
- [ ] planning for the next release
- [ ] during the previous release or after it's complete,
  planning for the next one should occur
- [ ] should reasonably prevent adding new issues to a release
  once it becomes the current one (unless the issue is a
  blocker or regression)
- [ ] create a new version and pull back from the generic
  bucket (for 2.1-alpha-2, these are taken from 2.1, for
  2.0.8 they are taken from 2.0.x, for 2.1's first cut they
  are taken from 2.x).
- [ ] use votes, priorities and effort/relatedness to other
  issues to determine which issues to schedule
- [ ] closing an issue
- [ ] if the resolution is other than fixed, the fix for
  version should be unset to make the release notes more accurate
- [ ] if set to fixed, the fix for version MUST be set
- [ ] documentation issues
- [ ] documentation is versioned, while the project site is not
- [ ] the project site should have its own jira project
- [ ] documentation issues should be scheduled like any other
  component of the system


How do we educate our users (and ourselves for that matter) on the 
difference between documentation and site? Perhaps we can make the pages 
look slightly different: a special title prefix/suffix, color scheme, 
menu structure.

Re: RDF/XML For Repository Information

2007-08-01 Thread Shane Isbell
To clarify: NMaven uses RDF for its local repository store (and has
abstractions of an AssemblyResolver/ProjectDao for the API). It does not
require that the remote repository contain RDF, but rather parses and stores
the information from the remote pom artifact into the RDF store.  There is
also an RDF repository converter that takes the local repository and RDF
store and converts it into the default local repository format, generating
the pom files. This allows maven plugins like the assembler to still
function.

In its current form, RDF is only used locally, with the RDF-to-pom
conversion occurring for remote repos. The way I see it eventually working
is to be able to execute SPARQL (http://www.w3.org/TR/rdf-sparql-query/)
queries against the remote repo. This doesn't preclude any other format or
interface from working. A good way to look at it is to separate out the
concepts of the meta-data from the concept of binary artifacts (like jar,
dll, exe). The binary artifacts can be in any repository format: it doesn't
matter. The meta-data however, needs multiple interfaces (RDF, SOAP, pom
files, etc). A good repository manager would need to be able to support
multiple adapters for these interfaces.
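
As a rough illustration of the kind of RDF/XML export described above (the
vocabulary here is invented for the example; it is not NMaven's actual
schema):

<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:m="http://example.org/maven/terms#">
  <!-- one resource per artifact; dependencies and requirements become
       plain triples that a SPARQL query can match against -->
  <rdf:Description rdf:about="urn:maven:org.example:a:1.0.0-SNAPSHOT">
    <m:groupId>org.example</m:groupId>
    <m:artifactId>a</m:artifactId>
    <m:version>1.0.0-SNAPSHOT</m:version>
    <m:dependsOn rdf:resource="urn:maven:org.example:b:1.0.0-SNAPSHOT"/>
  </rdf:Description>
</rdf:RDF>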

Regards,
Shane


On 8/1/07, Jason van Zyl [EMAIL PROTECTED] wrote:


 On 1 Aug 07, at 12:25 AM, Shane Isbell wrote:

  I would like to see if there is any general interest from the Maven
  community in using RDF for storing and retrieving of repository
  information.

 As the only means, and not accessed via some API shielding the
 underlying store then my vote will always be -1. I hope that's not
 what you've done with NMaven as that would be a fatal flaw. I'm
 assuming there is some API sitting on top of it.

  I switched NMaven's resolver implementation to one using RDF and am
  happy
  with the results. This implementation allows: 1) easily extending
  meta-data,

 Which I'm always skeptical of as we potentially wind up with schisms
 and I'm not sure what kind of extension you need for purely
 dependency information which the resolver is concerned with.

  in my case for attaching requirements to an artifact; 2) writing
  queries
  against the repo, as opposed to reading and manipulating the
  hierarchy of
  poms. This also results in cleaner, simpler code;

 This should be done with an API, not against a fixed datastore. Using
 RDF and only RDF is not something I would ever support because I know
 of two implementations of repository managers that use their own
 internal format. Proximity uses one and I use Lucene indices so the
 unifier will be an API.

  3) exporting all the
  meta-data to a single RDF/XML file, which has helped me with
  debugging and
  understanding the entire repository. A future benefit would be the
  ability
  to run distributed queries across multiple repos.

 It's sounding like you've built this on RDF which I don't think is
 wise at all. For example this is not hard with any underlying store
 with the right API. I don't think it would be hard for you to use an
 API though. I'll never support a single storage format that isn't
 accessed via an API.

 
  One of the implications is that while the pom is still used for
  building,
  the local and remote repositories would just contain RDF/XML files:
  this
  would, of course, be a big change. I would just like to run this
  idea by the
  general group; if there is enough interest, I would be happy to do
  some
  prototype work to see how RDF would work within Maven core. You can
  look at
  one of NMaven's core classes that used RDF here:
  https://svn.apache.org/repos/asf/incubator/nmaven/trunk/components/dotnet-dao/project/src/main/java/org/apache/maven/dotnet/dao/impl/
 

 As a backing store for a rework of the artifact API, sure; as the
 primary means, I'd never support that myself.

  Regards,
  Shane

 Thanks,

 Jason

 --
 Jason van Zyl
 Founder and PMC Chair, Apache Maven
 jason at sonatype dot com
 --




 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: [VOTE] Release maven-docck-plugin 1.0-beta-2

2007-08-01 Thread Vincent Siveton
+1

Vincent

2007/7/30, Dennis Lundberg [EMAIL PROTECTED]:
 Hi,

 I'd like to release maven-docck-plugin 1.0-beta-2. It has been 9 months
 since the last release.

 Release Notes:
 http://jira.codehaus.org/secure/ReleaseNote.jspa?projectId=11361styleName=Htmlversion=13021

 Tag:
 http://svn.apache.org/repos/asf/maven/plugins/tags/maven-docck-plugin-1.0-beta-2/

 Staged at:
 http://people.apache.org/~dennisl/staging-repository-docck-plugin/

 The vote will be open for 72 hours.


 Here is my +1

 --
 Dennis Lundberg

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



M2Eclipse plugin issue.

2007-08-01 Thread Hothi_Amrit
Hi,

 I'm using the Maven 2.0 Integration (0.0.9) plugin in my Eclipse 3.2.2.
The project I'm trying to build using the plugin has a prerequisites tag
checking for Maven version 2.0.6 or higher. I can build the Maven
project fine on its own, but when building with the Maven plugin
in Eclipse, the check for the Maven version fails. It looks like it only
works with version 2.0.4 or lower.
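
For reference, the prerequisites check in question is the standard POM
element:

<prerequisites>
  <maven>2.0.6</maven>
</prerequisites>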

Amrit.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: how we handle JIRA versions

2007-08-01 Thread Brett Porter


On 02/08/2007, at 5:31 AM, Dennis Lundberg wrote:


Excellent stuff Brett. Let me know if I can help.

Most of this is equally important for plugins and other maven sub  
projects. We should try to make an additional, more general,  
description of versions that is not tied to MNG.


Agreed, I figured I'd just start here. I can probably change the 2's  
to 1's and for the rest folks can do the math :)





- [ ] Backlog - issues that have not been reviewed for a version
  assignment (and may be duplicates, won't fix,  
unreproducible,

  etc.)
- [ ] Unscheduled - new issues that haven't been reviewed yet


Hmm, has anyone looked at issues that are in Backlog? If not, what  
is the difference between Backlog and Unassigned?


I assume you mean Unscheduled, not Unassigned?

Some people have given it a once over, it just needs to keep going  
until empty. Backlog is really for things that pre-date being  
diligent with JIRA :) We should aim to reduce and eventually remove it.





- [ ] actions
- [ ] rename 2.1.x to 2.1
- [ ] create 2.1.x after 2.1


Don't we need 2.1.x *before* 2.1 is released so that we can move  
known issues to it before the release?


Hmm, do you mean need it before as in it must exist to use, or it is  
scheduled before 2.1?


It's currently used as things that might go into 2.1, but I've seen  
it look confusing because it behaves differently to 2.0.x (clearly  
after 2.0, where 2.0 is already released). So I'm suggesting we  
reverse the naming and use 2.1 for things going into 2.1, and 2.1.x  
for things we know will remain issues in 2.1 and will be fixed in  
2.1.x. I agree that it needs to exist as a version in JIRA whenever  
2.1 does.




How do we educate our users (and ourselves for that matter) on the  
difference between documentation and site? Perhaps we can make the  
pages look slightly different: a special title prefix/suffix, color  
scheme, menu struture.


my hope is that docs are distributed, site is not :) I think it will  
be a case that users rarely care about the site, and only the docs,  
so they'll file in the right place. And if we can put links on all  
pages that say how to report an issue with that page, that'd be even  
better :)


But we don't have that separation yet anyway...



- [ ] If an issue is too large - clone it to create
  smaller and more manageable issues


Sounds good. I prefer this to subtasks (which should be reserved for  
when a big task is composed of a set of steps that are all required  
to complete it).


Thanks!
Brett

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: RDF/XML For Repository Information

2007-08-01 Thread Jason van Zyl


On 1 Aug 07, at 4:39 PM, Shane Isbell wrote:


To clarify: NMaven uses RDF for its local repository store (and has
abstractions of an AssemblyResolver/ProjectDao for the API). It  
does not
require that the remote repository contain RDF, but rather parses  
and stores
the information from the remote pom artifact into the RDF store.   
There is
also an RDF repository converter that takes the local repository  
and RDF
store and converts it into the default local repository format,  
generating

the pom files. This allows maven plugins like the assembler to still
function.


Again, this is simply an API for accessing artifacts, and not just the  
binary artifacts but the metadata as well. So to anything using this  
API, the underlying store and query mechanism can be the  
responsibility of the particular provider.




In its current form, RDF is only used locally, with the RDF-to-pom
conversion occurring for remote repos. The way I see it eventually  
working is to be able to execute SPARQL (http://www.w3.org/TR/rdf-sparql-query/)

queries against the remote repo.


If that's what a particular provider did sure.


This doesn't preclude any other format or
interface from working. A good way to look at it is to separate out  
the
concepts of the meta-data from the concept of binary artifacts  
(like jar,

dll, exe).


The final API for artifact manipulation must also take into account  
metadata. It does already but needs improvement.



The binary artifacts can be in any repository format: it doesn't
matter. The meta-data however, needs multiple interfaces (RDF,  
SOAP, pom
files, etc). A good repository manager would need to be able to  
support

multiple adapters for these interfaces.


The API will cover both, and no direct access to things like RDF will  
happen. A provider mechanism can easily provide direct access to an  
underlying query mechanism as well, much like ORMs provide direct/ 
native access, provided the object format returned conforms to the  
API. So you can use RDF and SPARQL, and I can use a Lucene database  
and Lucene queries. I know of a fellow who has slurped all the POMs  
into an XML database and used XQuery. It really won't matter. What  
matters is the API; then to things like the assembly plugin or  
anything else that might directly use the artifact mechanism, it's  
really a matter of configuration of the provider. An API that doesn't  
encompass the acquisition and analysis of metadata is not very  
useful. I don't see any value in storing metadata with RDF and using  
that directly. As a provider it can be like anything else, and the  
overall API will allow for high-fidelity transitions from one format  
to another, so if I wanted to take a file system of POMs and create a  
database, or an RDF document, it shouldn't matter.


As to the above, I don't see RDF being an interface. I see having an API  
which can certainly have REST/SOAP-accessible methods, but it will all  
revolve around an API which encompasses notions of metadata.




Regards,
Shane


On 8/1/07, Jason van Zyl [EMAIL PROTECTED] wrote:



On 1 Aug 07, at 12:25 AM, Shane Isbell wrote:


I would like to see if there is any general interest from the Maven
community in using RDF for storing and retrieving of repository
information.


As the only means, and not accessed via some API shielding the
underlying store then my vote will always be -1. I hope that's not
what you've done with NMaven as that would be a fatal flaw. I'm
assuming there is some API sitting on top of it.


I switched NMaven's resolver implementation to one using RDF and am
happy
with the results. This implementation allows: 1) easily extending
meta-data,


Which I'm always skeptical of as we potentially wind up with schisms
and I'm not sure what kind of extension you need for purely
dependency information which the resolver is concerned with.


in my case for attaching requirements to an artifact; 2) writing
queries
against the repo, as opposed to reading and manipulating the
hierarchy of
poms. This also results in cleaner, simpler code;


This should be done with an API, not against a fixed datastore. Using
RDF and only RDF is not something I would ever support because I know
of two implementations of repository managers that use their own
internal format. Proximity uses one and I use Lucene indices so the
unifier will be an API.


3) exporting all the
meta-data to a single RDF/XML file, which has helped me with
debugging and
understanding the entire repository. A future benefit would be the
ability
to run distributed queries across multiple repos.


It's sounding like you've built this on RDF which I don't think is
wise at all. For example this is not hard with any underlying store
with the right API. I don't think it would be hard for you to use an
API though. I'll never support a single storage format that isn't
accessed via an API.



One of the implications is that while the pom is still used for
building,
the local and 

Re: how we handle JIRA versions

2007-08-01 Thread Jason van Zyl
All looks good. My only comment is that the notions in Scrum, like  
Sprints for a release, are good, as is the idea of fixing the set  
of issues and sticking with it for the Sprint. These are sensible patterns,  
and there's already literature on them. So in any parts where you're talking  
about planning, I think it might be good to defer to Scrum.


To sustain any sort of visibility amongst us, I think it would be wise  
for us to mandate the use of Mylyn. I don't use Eclipse 100% of the  
time, but I do use it for Mylyn; for anyone already using Eclipse it's  
a no-brainer. It makes  
being diligent about issue management a lot easier. It also helps vet  
duplicates, and generally makes planning easier. At least I've found  
it to be a great boon after using it for quite a while now.


As far as the workflow, are you actually going to try and encapsulate  
that workflow in a JIRA workflow itself? I think that might be a bit  
masochistic but any workflow that is not strictly enforced in the  
tool is going to be hard to adhere to.


On 1 Aug 07, at 12:48 PM, Brett Porter wrote:


Hi,

A while back I promised to write up what we are doing with jira  
versions now, and finally got around to it. In the process, I came  
up with a couple of tweaks we could make (see actions). Here it  
is in point form - does everyone agree this is the process we are  
following now? Missing anything?


- [ ] versions:
- [ ] 2.0.8 - the next release, currently being worked on for the
  2.0.x branch
- [ ] 2.0.x - issues that are likely to be fixed in the 2.0.x
  series, but not yet scheduled
- [ ] 2.1-alpha-1 - the next release, currently being worked on  
for

  trunk
- [ ] 2.1-alpha-2 - the release after next, and so on
- [ ] 2.1 - issues to fix by the 2.1 final release
- [ ] 2.1.x - issues to list as known issues in 2.1, and to be
  fixed in the releases after 2.1.
- [ ] 2.2 - issues to fix in the 2.2 release (not yet broken down
  as it is a future release)
- [ ] 2.x - issues to fix in later 2.x releases (not yet  
scheduled)

- [ ] Backlog - issues that have not been reviewed for a version
  assignment (and may be duplicates, won't fix,  
unreproducible,

  etc.)
- [ ] Unscheduled - new issues that haven't been reviewed yet
- [ ] actions
- [ ] rename 2.1.x to 2.1
- [ ] create 2.1.x after 2.1
- [ ] rename 2.2.x to 2.2
- [ ] create 2.x
- [ ] take a knife to 2.1 (currently 2.1.x) which has 254 issues
- [ ] rename Reviewed Pending Version Assignment to Backlog
- [ ] move all documentation issues either to the site project
  (this should all be done by now), or to a scheduled version
  (or the backlog)
- [ ] create a shared jira and move the shared component issues to
  that.
- [ ] workflow
- [ ] for a new issue in unscheduled
- [ ] should attempt to review them quickly and regularly
- [ ] first action is to attempt to resolve reasonably
  (duplicate, won't fix if it's inappropriate, or
  incomplete if there is not enough detail)
- [ ] double check priority and other details like affects
  version and component and prompt for more information if
  needed
- [ ] all issues should have an affects version(s) and
  component(s)
- [ ] assign a version depending on whether it's a bug or a
  feature, and what its severity is
- [ ] unless it is a regression in the current code, it should
  not be assigned to the current version
- [ ] for an issue in backlog
- [ ] periodically review issues related to other ongoing work
  to attempt to close duplicates or assign to an
  appropriate version
- [ ] for an issue in the current release that could be bumped
- [ ] should not be done if it's a blocker or a regression
- [ ] can be done en masse for remaining issues when a release
  is called based on an agreed date and nothing left in
  scheduled features/blockers/regressions list
- [ ] can be done for individual issues earlier than that if
  time runs short or priority becomes reduced
- [ ] planning for the next release
- [ ] during the previous release or after it's complete,
  planning for the next one should occur
- [ ] should reasonably prevent adding new issues to a release
  once it becomes the current one (unless the issue is a
  blocker or regression)
- [ ] create a new version and pull back from the generic
  bucket (for 2.1-alpha-2, these are taken from 2.1, for
  2.0.8 they are taken from 2.0.x, for 2.1's first cut  
they

  are taken from 2.x).
- [ ] use votes, priorities and effort/relatedness to other
  issues to determine which issues to schedule

Re: Archiva releases (Take 2)

2007-08-01 Thread Brett Porter
I'd be happy to run some tests on Tomcat post beta-1 as well. I've  
spent way too much time battling these things in a past life not to  
have learned something that might be helpful :)


Anyone here deploying to something other than Tomcat? We have Jetty5  
(via appserver) and 6 (via plugin) covered - I'm interested to know  
if these issues are isolated to TC.


- Brett

On 02/08/2007, at 10:32 AM, Maria Odea Ching wrote:

Yep, we also expect users to deploy Archiva standalone in  
production. So far, MRM-323 and all the other remaining Tomcat  
issues are scheduled for 1.0.x since most of them need to be  
investigated further. I've looked at MRM-323 in jira and saw your  
findings for this issue. I'll take a look at it later on and  
maybe we could include this in beta-2 :)



Thanks,
Deng


Ludovic Maitre wrote:

Hi Maria, all,

Although I'm a new user of Archiva, I would like to know when in  
the roadmap the team plans to fix issues related to running Archiva  
inside Tomcat (issues like http://jira.codehaus.org/browse/MRM-323)? Is this something important for the team? Do you  
expect users to deploy Archiva standalone in production?

Best regards,

Maria Odea Ching wrote:

Hi All,

We (Brett, Joakim and I) have segregated the issues in Jira to  
prep for the upcoming 1.0 release (finally! :-) ). Anyway,  
there'll be two beta releases first before 1.0.


So..here's the plan (excerpted from Brett):
- finish all issues for 1.0-beta-1
- then move on to 1.0-beta-2 issues for the next release
- do a big bug bash to find the problems in it
- decide on these found bugs for 1.0-beta-3 or 1.0.x (known issues)


Btw, I'll prepare 1.0-beta-1 for release on August 6 (that'll be  
Monday my time), and this will include the following:

1. Resolved/Closed:
- MRM-290 (Ability to pre-configure the Jetty port in conf/plexus.xml)

- MRM-326 (Adding/Editing repositories doesn't have validation)
- MRM-425 (Search and Browse do not work for snapshots)
- MRM-426 (Search does not work for snapshots because of  
different version values in index and database when the snapshot  
version is unique)


2. In Progress/Open:
- MRM-143 (Improve error reporting on corrupt jars, poms, etc)
- MRM-275 (add remove old snapshots Scheduler)
- MRM-294 (Repository purge feature for snapshots)
- MRM-329 (The Reports link gives an HTTP 500)
- MRM-347 (Undefined ${appserver.home} and ${appserver.base})
- MRM-373 (Unable to delete the pre-configured example network  
proxy)
- MRM-412 (Add support for maven 1 (legacy) request to access a  
maven2 (default layout) repo )
- MRM-428 (Managed and remote repositories with same name causes  
problems)

- MRM-429 (Find artifact does not work when the applet is disabled)
- MRM-430 (Archiva always writes to ~/.m2/archiva.xml)

Everyone okay with this? :)


Thanks,
Deng


Re: how we handle JIRA versions

2007-08-01 Thread Brett Porter


On 02/08/2007, at 1:52 PM, Jason van Zyl wrote:

I'd encourage us sharing a good Mylyn set up and putting it on the  
web site to encourage this consistency, but I think it's  
unreasonable to mandate it.




Why? It's free and it's the best way to keep some level of visibility.  
For people submitting issues no, for developers yes. It just raises  
the general level of awareness by a great degree and is not hard to  
do. Makes the process of patching so much easier as well, and makes  
documenting how to do it easier. From a management perspective I  
think it's a no brainer, most people use Eclipse and for people  
like me who don't I would use it to make it easier for others.


I just don't believe in enforcing desktop tools on developers here -  
it's another barrier to entry. Encourage, promote, recommend,  
document - sure, just not mandate.


IMO only, I'll probably use it so it's really up to everyone else.

- Brett

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: how we handle JIRA versions

2007-08-01 Thread Brett Porter


On 02/08/2007, at 1:22 PM, Jason van Zyl wrote:

All looks good, my only comments are I think the notions in Scrum  
like Sprints for a release are good like the idea of fixing the set  
of issues and sticking with it for the Sprint. Sensible patterns  
and there's already literature on that. So in any parts you're  
talking about planning I think it might be good to defer to Scrum.


That's the intent (if it were to be summarised in a sentence), but I  
agree if anyone is looking for more detail that's the place to go.




To sustain any sort of visibility amongst us I think it would be  
wise for us to mandate the use of Mylyn. I don't use Eclipse but I  
use Eclipse for Mylyn. For anyone using Eclipse it's a no brainer,  
but I don't use Eclipse 100% of the time but I use it for Mylyn. It  
makes being diligent about issue management a lot easier. It also  
helps vet duplicates, and generally makes planning easier. At least  
I've found it to be a great boon after using it for quite a while now.


I'd encourage us sharing a good Mylyn set up and putting it on the  
web site to encourage this consistency, but I think it's unreasonable  
to mandate it.




As far as the workflow, are you actually going to try and  
encapsulate that workflow in a JIRA workflow itself? I think that  
might be a bit masochistic but any workflow that is not strictly  
enforced in the tool is going to be hard to adhere to.


It'd be nice to enforce it now, but I'm not prepared for that kind of  
pain :) If someone else who's done more jira workflowy things wants  
to try their hand, please do. Otherwise, I figure if we at least have  
a stated pattern in agreement, if we see regular violations we should  
either review the rule, or enforce it at that point.


Thanks,
Brett

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Two week sprints on Continuum Beta releases?

2007-08-01 Thread Brett Porter

Hi,

How do people feel about planning betas to fit into: Aug 15, Aug 29,  
Sep 12, Sep 26? (for releases, votes are the Monday before)


If so, is the current beta-2 list achievable by Aug 15? Looks like it  
needs to be trimmed (and we also have 11 unscheduled to find a home for)


Cheers,
Brett


Re: how we handle JIRA versions

2007-08-01 Thread Jason van Zyl


On 1 Aug 07, at 11:37 PM, Brett Porter wrote:



On 02/08/2007, at 1:22 PM, Jason van Zyl wrote:

All looks good. My only comment is that the notions in Scrum, like
Sprints for a release, are good, as is the idea of fixing the set
of issues and sticking with it for the Sprint. These are sensible
patterns and there's already literature on them. So in any parts
where you're talking about planning, I think it might be good to
defer to Scrum.


That's the intent (if it were to be summarised in a sentence), but
I agree that if anyone is looking for more detail, that's the place to go.




To sustain any sort of visibility amongst us, I think it would be
wise for us to mandate the use of Mylyn. I don't use Eclipse 100% of
the time, but I do use it for Mylyn; for anyone already using
Eclipse it's a no-brainer. It makes being diligent about issue
management a lot easier. It also helps vet duplicates, and generally
makes planning easier. At least I've found it to be a great boon
after using it for quite a while now.


I'd encourage us sharing a good Mylyn setup and putting it on the
web site to encourage this consistency, but I think it's
unreasonable to mandate it.




Why? It's free and it's the best way to keep some level of visibility.
For people submitting issues, no; for developers, yes. It just raises
the general level of awareness by a great degree and is not hard to
do. It makes the process of patching so much easier as well, and makes
documenting how to do it easier. From a management perspective I
think it's a no-brainer: most people use Eclipse, and people like me
who don't would still use it to make things easier for others.




As far as the workflow goes, are you actually going to try and
encapsulate it in a JIRA workflow itself? I think that might be a
bit masochistic, but any workflow that is not strictly enforced in
the tool is going to be hard to adhere to.


It'd be nice to enforce it now, but I'm not prepared for that kind
of pain :) If someone else who's done more JIRA workflow things
wants to try their hand, please do. Otherwise, I figure that if we
at least have an agreed, stated pattern, then if we see regular
violations we can either review the rule or enforce it at that
point.


Thanks,
Brett




Thanks,

Jason

--
Jason van Zyl
Founder and PMC Chair, Apache Maven
jason at sonatype dot com
--







RE: how we handle JIRA versions

2007-08-01 Thread Brian E. Fox
It'd be nice to enforce it now, but I'm not prepared for that kind of
pain :) If someone else who's done more JIRA workflow things wants
to try their hand, please do. Otherwise, I figure that if we at least
have an agreed, stated pattern, then if we see regular violations we
can either review the rule or enforce it at that point.

I set up and maintain JIRA at my day job, so I can help with workflows
if needed. It's currently a major PITA to change workflows that are in
use (and we have a boatload of projects here at Maven). I've been
trying to get Atlassian to commit to fixing it, but no luck yet.




Re: Announcing Mavenizer 1.0.0-alpha-1

2007-08-01 Thread Brett Porter

Hi Cédric,

Looks very cool. Nice work!

I was wondering if you've seen the maven-shared-jar component?
http://maven.apache.org/shared/maven-shared-jar/


This seems like something you could use (and if you have some
additional features, as it appears you do, you might contribute them
back). It would help keep the identification logic the same across
multiple applications - the project dependency reports and Archiva,
for example.
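
For anyone who hasn't tried it, the simplest trick that kind of
identification can use is reading the pom.properties file that
Maven-built jars embed under META-INF/maven. Here's a minimal,
standalone sketch of just that idea (illustrative only; it uses plain
java.util.jar rather than the maven-shared-jar API, which as I
understand it goes further, e.g. analyzing the classes inside the jar):

    import java.io.File;
    import java.io.InputStream;
    import java.util.Enumeration;
    import java.util.Properties;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    // Naive identification: jars built by Maven 2 carry their own
    // coordinates in META-INF/maven/<groupId>/<artifactId>/pom.properties.
    public class NaiveJarIdentifier {
        public static void main(String[] args) throws Exception {
            JarFile jar = new JarFile(new File(args[0]));
            try {
                for (Enumeration<JarEntry> e = jar.entries(); e.hasMoreElements();) {
                    JarEntry entry = e.nextElement();
                    String name = entry.getName();
                    if (name.startsWith("META-INF/maven/")
                            && name.endsWith("/pom.properties")) {
                        Properties p = new Properties();
                        InputStream in = jar.getInputStream(entry);
                        try { p.load(in); } finally { in.close(); }
                        System.out.println(p.getProperty("groupId") + ":"
                            + p.getProperty("artifactId") + ":"
                            + p.getProperty("version"));
                    }
                }
            } finally {
                jar.close();
            }
        }
    }

Jars that weren't built by Maven won't have this file, which is where
the fuzzier analyses (and a shared component for them) earn their keep.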


While it has dropped down my list recently, I'm about 2/3 done
cleaning up our Aardvark code base, which also uses this. It has some
similar objectives to Mavenizer, but so far seems to have addressed
the opposite use cases - it has a Swing UI and primarily does things
like project.xml and build.xml conversion (though dependency selection
is an important part of that). It might be good to keep in touch?
Original mail:
http://mail-archives.apache.org/mod_mbox/maven-dev/200705.mbox/[EMAIL PROTECTED]


Cheers,
Brett

On 27/07/2007, at 6:24 AM, Cédric Vidal wrote:


Hi Guys,

I would like to announce a new Apache-licensed Maven 2 related tool
called Mavenizer:
http://mavenizer.sourceforge.net/

I've been heavily using Maven for a long time and just couldn't live
without it ;) but I often have to use third-party libraries which
have not been mavenized already, or which simply have bad repository
metadata. Either way, mavenizing such libraries is a real pain,
especially when you want to get Maven 2 transitive dependencies right.

Mavenizer attempts to ease the process of mavenizing such third-party
libraries by trying to do as much of the guess work as possible.
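
To give a feel for the kind of guess work meant here (purely
illustrative, not Mavenizer's actual code), the classic heuristic is
splitting an artifactId and version out of a jar file name:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Illustrative heuristic: guess artifactId and version from file
    // names like "jfreereport-0.8.7.jar". A real tool needs fallbacks
    // for names without versions, classifiers, etc.
    public class NameGuesser {
        private static final Pattern JAR_NAME =
            Pattern.compile("(.+?)-(\\d[\\w.\\-]*)\\.jar");

        public static String[] guess(String fileName) {
            Matcher m = JAR_NAME.matcher(fileName);
            if (m.matches()) {
                return new String[] { m.group(1), m.group(2) };
            }
            return null; // no obvious version - a human has to decide
        }

        public static void main(String[] args) {
            String[] av = guess("jfreereport-0.8.7.jar");
            System.out.println(av[0] + " / " + av[1]); // jfreereport / 0.8.7
        }
    }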

A Flash demo is available here; it illustrates the mavenization of
JFreeReport 0.8.7:
http://mavenizer.sourceforge.net/demos/jfreereport-0.8.7.html

This is considered alpha software. Nothing is stable yet; the code is
pretty monolithic and not pluggable at all. I plan on rewriting the
code base from the ground up in a more pluggable way, so that
Mavenizer can be extended with custom user logic, naming strategies,
etc. But I wanted to release the codebase as-is as a proof of
concept, and why not, it could be helpful already ;)

Any feedback on the way you use it, and any successful mavenizations
done with Mavenizer, would be really appreciated :) as well as bug
reports of course ^^

Kind regards,

Cédric Vidal

PS: Mavenizer is not related to Maveniser (with an 's' and also on
sourceforge), too bad the names are so close :(






Re: RDF/XML For Repository Information

2007-08-01 Thread Shane Isbell
Okay, I see where you are coming from. If we are using multiple
datasources - the traditional POM object hierarchy, RDF, a Lucene
index, XML stores - then we need: 1) a common API (for application
developers); 2) an SPI (from Maven); 3) adapters (from framework
developers). The SPI would need, at a minimum, to cover transaction
management, because in the case you outlined there may be multiple
data sources used simultaneously.
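
To make that split concrete, here is a rough sketch of the shape I
have in mind. None of these types exist in Maven; the names are
hypothetical and only illustrate the API/SPI/adapter layering:

    // Hypothetical sketch only - none of these types exist in Maven.

    class MetadataStoreException extends Exception {
        MetadataStoreException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Minimal value object standing in for whatever the real API would
    // carry (dependencies, attached requirements, and so on).
    class ArtifactMetadata {
        String groupId, artifactId, version;
    }

    // 1) Common API: what application developers (plugins, Archiva, ...)
    //    would code against, regardless of the underlying store.
    interface ArtifactMetadataStore {
        ArtifactMetadata read(String groupId, String artifactId, String version)
            throws MetadataStoreException;
        void write(ArtifactMetadata metadata) throws MetadataStoreException;
    }

    // 2) SPI: what a framework developer implements per datastore.
    //    Transaction hooks matter because one unit of work may touch
    //    several stores (RDF, Lucene, XML) at once.
    interface MetadataStoreProvider {
        ArtifactMetadataStore createStore(java.util.Properties config);
        void begin() throws MetadataStoreException;
        void commit() throws MetadataStoreException;
        void rollback();
    }

    // 3) Adapters: an RDF-backed provider, a Lucene-backed provider, etc.,
    //    each implementing the SPI against its own store.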

In the past I developed JCA implementations for SOAP and JXTA data
sources, and my guess is that JCA could be useful in this case as well.

Shane


On 8/1/07, Jason van Zyl [EMAIL PROTECTED] wrote:


 On 1 Aug 07, at 4:39 PM, Shane Isbell wrote:

  To clarify: NMaven uses RDF for its local repository store (and has
  abstractions of an AssemblyResolver/ProjectDao for the API). It
  does not
  require that the remote repository contain RDF, but rather parses
  and stores
  the information from the remote pom artifact into the RDF store.
  There is
  also an RDF repository converter that takes the local repository
  and RDF
  store and converts it into the default local repository format,
  generating
  the pom files. This allows maven plugins like the assembler to still
  function.

 Again this is simply an API for accessing artifacts, and not just the
 binary artifacts but the metadata as well. So to anything using this
 API, the underlying store and query mechanism can be the
 responsibility of the particular provider.

 
  In its current form, RDF is only used locally, with an RDF-to-POM
  conversion occurring for remote repos. The way I see it eventually
  working is to be able to execute SPARQL
  (http://www.w3.org/TR/rdf-sparql-query/) queries against the
  remote repo.

 If that's what a particular provider did, sure.
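
 (For a feel of what such a provider-level query would look like, here
 is a minimal sketch using Jena/ARQ against a local RDF model. The
 vocabulary URI, property names, and file name are invented for the
 example; they are not NMaven's actual schema.)

    import com.hp.hpl.jena.query.*;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;

    public class RepoQuery {
        public static void main(String[] args) {
            // Load an RDF dump of repository metadata (file name illustrative).
            Model model = ModelFactory.createDefaultModel();
            model.read("file:repository-metadata.rdf");

            // Find all versions of junit:junit. The m: vocabulary is made
            // up for this example; a real store defines its own schema.
            String q =
                "PREFIX m: <http://example.org/maven#> " +
                "SELECT ?version WHERE { " +
                "  ?a m:groupId \"junit\" ; " +
                "     m:artifactId \"junit\" ; " +
                "     m:version ?version . }";

            QueryExecution qe =
                QueryExecutionFactory.create(QueryFactory.create(q), model);
            try {
                ResultSet results = qe.execSelect();
                while (results.hasNext()) {
                    System.out.println(results.nextSolution().getLiteral("version"));
                }
            } finally {
                qe.close();
            }
        }
    }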

  This doesn't preclude any other format or interface from working. A
  good way to look at it is to separate the concept of the metadata
  from the concept of binary artifacts (like jar, dll, exe).

 The final API for artifact manipulation must also take metadata into
 account. It already does, but needs improvement.

  The binary artifacts can be in any repository format: it doesn't
  matter. The metadata, however, needs multiple interfaces (RDF,
  SOAP, POM files, etc). A good repository manager would need to be
  able to support multiple adapters for these interfaces.

 The API will cover both, and no direct access to things like RDF will
 happen. A provider mechanism can easily provide direct access to an
 underlying query mechanism as well, much like ORMs provide
 direct/native access, provided the object format returned conforms
 to the API. So you can use RDF and SPARQL, and I can use a Lucene
 database and Lucene queries. I know of a fellow who has slurped all
 the POMs into an XML database and used XQuery. It really won't
 matter. What matters is the API; then to things like the assembly
 plugin, or anything else that might directly use the artifact
 mechanism, it's really a matter of configuring the provider. An API
 that doesn't encompass the acquisition and analysis of metadata is
 not very useful. I don't see any value in storing metadata with RDF
 and using that directly. As a provider it can be like anything else,
 and the overall API will allow for high-fidelity transitions from
 one format to another, so if I wanted to take a file system of POMs
 and create a database, or an RDF document, it shouldn't matter.

 As to the above, I don't see RDF being an interface. I see having an
 API which can certainly have REST/SOAP-accessible methods, but it
 will all revolve around an API which encompasses notions of metadata.

 
  Regards,
  Shane
 
 
  On 8/1/07, Jason van Zyl [EMAIL PROTECTED] wrote:
 
 
  On 1 Aug 07, at 12:25 AM, Shane Isbell wrote:
 
  I would like to see if there is any general interest from the Maven
  community in using RDF for storing and retrieving repository
  information.
 
  As the only means, not accessed via some API shielding the
  underlying store, my vote will always be -1. I hope that's not
  what you've done with NMaven, as that would be a fatal flaw. I'm
  assuming there is some API sitting on top of it.
 
  I switched NMaven's resolver implementation to one using RDF and am
  happy
  with the results. This implementation allows: 1) easily extending
  meta-data,
 
  Which I'm always skeptical of, as we potentially wind up with
  schisms, and I'm not sure what kind of extension you need for the
  purely dependency information the resolver is concerned with.
 
  in my case for attaching requirements to an artifact; 2) writing
  queries
  against the repo, as opposed to reading and manipulating the
  hierarchy of
  poms. This also results in cleaner, simpler code;
 
  This should be done with an API, not against a fixed datastore. Using
  RDF and only RDF is not something I would ever support because I know
  of two implementations of repository managers that use their own
  internal