RE: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

2015-04-09 Thread Loren Kratzke
The short-term fix would be documentation. Say it in clear language right next 
to the download link:

If you publish large artifacts, then you must download Ivy+deps. 
Install the commons-httpclient, commons-codec, and commons-logging jars into 
ant/lib next to the ivy jar.

Note that you need all three jars, not just httpclient. That detail is not 
documented anywhere that I know of.

That is what can be done now. Going forward, the options are as follows:

1. Keep everything the same and treat the documentation as the solution.
2. Require the httpclient jars to be installed.
3. Find a workaround for the buffering/authentication issues of 
HttpURLConnection (see the sketch after this list). 
4. Include the necessary httpclient classes inside ivy.jar. 

Several options are available; each has its own merits.
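
For reference, option 3 would look roughly like the sketch below. This is only 
an illustration, not Ivy's actual code: the class name, the PUT method, the 
pre-emptive Basic auth and the Java 7/8 APIs (streaming mode, try-with-resources, 
java.util.Base64) are assumptions. HttpURLConnection buffers the whole request 
body precisely so that it can replay it after an authentication challenge; 
setFixedLengthStreamingMode disables that buffering, at the cost of sending 
credentials up front and handling any retry yourself (setChunkedStreamingMode 
would do the same when the length is not known in advance).

    // Illustrative sketch only (not Ivy code): stream an artifact to an HTTP
    // server without holding it in memory.
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class StreamingUpload {

        public static void upload(File artifact, URL target, String user, String password)
                throws IOException {
            HttpURLConnection conn = (HttpURLConnection) target.openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("PUT");

            // Fixed-length streaming mode: the body goes straight to the socket,
            // so heap usage stays at the size of the copy buffer below.
            conn.setFixedLengthStreamingMode(artifact.length());

            // Streaming mode disables the automatic "buffer and retry after 401"
            // behaviour, so send credentials pre-emptively (hypothetical Basic auth).
            String token = Base64.getEncoder().encodeToString(
                    (user + ":" + password).getBytes(StandardCharsets.UTF_8));
            conn.setRequestProperty("Authorization", "Basic " + token);

            try (InputStream in = new FileInputStream(artifact);
                 OutputStream out = conn.getOutputStream()) {
                byte[] buffer = new byte[8192];
                int n;
                while ((n = in.read(buffer)) != -1) {
                    out.write(buffer, 0, n); // constant-size buffer, no doubling
                }
            }

            int status = conn.getResponseCode();
            if (status >= 300) {
                throw new IOException("Upload failed with HTTP status " + status);
            }
        }
    }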

L.K.

-Original Message-
From: Maarten Coene [mailto:maarten_co...@yahoo.com.INVALID] 
Sent: Thursday, April 09, 2015 7:51 AM
To: Ant Developers List
Subject: Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

I'm not a fan of this proposal; I like it that Ivy doesn't have any dependencies 
when using standard resolvers.
Perhaps it could be added to the documentation that if you use the URLResolver 
for large uploads you'll have to add httpclient to the classpath?


Maarten




- Original Message -
From: Antoine Levy Lambert anto...@gmx.de
To: Ant Developers List dev@ant.apache.org
Cc: 
Sent: Thursday, April 9, 2015 3:50
Subject: Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

Also, I wonder whether we should not make the use of httpclient with Ivy 
compulsory, since Loren says that the JDK's HttpURLConnection always copies 
the full file into a byte array when authentication is performed.

That would make the code simpler.

Regards,

Antoine

On Apr 7, 2015, at 9:22 PM, Antoine Levy Lambert anto...@gmx.de wrote:

 Hi,
 
 I wonder whether we should not upgrade Ivy to use the latest httpclient 
 library too?
 
 Regards,
 
 Antoine
 
 On Apr 7, 2015, at 12:46 PM, Loren Kratzke (JIRA) j...@apache.org wrote:
 
 
   [ 
 https://issues.apache.org/jira/browse/IVY-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14483468#comment-14483468
  ]
 
 Loren Kratzke edited comment on IVY-1197 at 4/7/15 4:45 PM:
 
 
 I would be happy to provide you with a project that will reproduce the 
 issue. I can and will do that. 
 
 Generally speaking, from a high level, the utility classes are calling 
 convenience methods and writing to streams that ultimately buffer the data 
 being written. There is buffering, then more buffering, and even more 
 buffering, until you have multiple copies of the entire content of the stream 
 stored in oversized buffers (because they double in size when they fill 
 up). Oddly, the twist is that the JVM hits a limit no matter how much RAM 
 you allocate. Once the buffers total more than about 1 GB (which is what 
 happens with a 100-200 MB upload), the JVM refuses to allocate more buffer 
 space (even if you jack up the RAM to 20 GB, no cigar). Honestly, there is no 
 benefit in buffering any of this data to begin with; it is just a side 
 effect of using high-level copy methods. There is no memory ballooning at 
 all when the content is written directly to the network.
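 
 To make this concrete, the shape of the problem is roughly the hypothetical 
 sketch below (not the actual Ivy code path): a ByteArrayOutputStream doubles 
 its backing array every time it fills, and toByteArray() makes yet another 
 full-size copy, so a few stacked copies of a 100-200 MB payload quickly add 
 up to a gigabyte or more of heap.

     // Hypothetical illustration, not Ivy code: the kind of "read it all into
     // memory" helper that high-level copy utilities provide.
     import java.io.ByteArrayOutputStream;
     import java.io.IOException;
     import java.io.InputStream;

     public class BufferingDemo {

         // Drains the whole stream into memory before anything hits the network.
         static byte[] readFully(InputStream in) throws IOException {
             ByteArrayOutputStream bos = new ByteArrayOutputStream();
             byte[] chunk = new byte[8192];
             int n;
             while ((n = in.read(chunk)) != -1) {
                 // The backing array doubles whenever it is full; growing from
                 // 128 MB to 256 MB briefly holds both arrays at once.
                 bos.write(chunk, 0, n);
             }
             return bos.toByteArray(); // yet another full-size copy of the payload
         }
     }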
 
 I will provide a test project and note the breakpoints where you can debug 
 and watch the process walk all the way down the aisle to an OOME. I will have 
 this for you ASAP.
 
 
 
 
 


Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

2015-04-09 Thread Maarten Coene
I'm not a fan of this proposal; I like it that Ivy doesn't have any dependencies 
when using standard resolvers.
Perhaps it could be added to the documentation that if you use the URLResolver 
for large uploads you'll have to add httpclient to the classpath?


Maarten




- Original Message -
From: Antoine Levy Lambert anto...@gmx.de
To: Ant Developers List dev@ant.apache.org
Cc: 
Sent: Thursday, April 9, 2015 3:50
Subject: Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

Also, I wonder whether we should not make the use of httpclient with Ivy 
compulsory, since Loren says that the JDK's HttpURLConnection always copies 
the full file into a byte array when authentication is performed.

That would make the code simpler.

Regards,

Antoine

On Apr 7, 2015, at 9:22 PM, Antoine Levy Lambert anto...@gmx.de wrote:

 Hi,
 
 I wonder whether we should not upgrade Ivy to use the latest httpclient 
 library too?
 
 Regards,
 
 Antoine
 
 On Apr 7, 2015, at 12:46 PM, Loren Kratzke (JIRA) j...@apache.org wrote:
 
 
   [ 
 https://issues.apache.org/jira/browse/IVY-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14483468#comment-14483468
  ] 
 
 Loren Kratzke edited comment on IVY-1197 at 4/7/15 4:45 PM:
 
 
 I would be happy to provide you with a project that will reproduce the 
 issue. I can and will do that. 
 
 Generally speaking, from a high level, the utility classes are calling 
 convenience methods and writing to streams that ultimately buffer the data 
 being written. There is buffering, then more buffering, and even more 
 buffering, until you have multiple copies of the entire content of the stream 
 stored in oversized buffers (because they double in size when they fill 
 up). Oddly, the twist is that the JVM hits a limit no matter how much RAM 
 you allocate. Once the buffers total more than about 1 GB (which is what 
 happens with a 100-200 MB upload), the JVM refuses to allocate more buffer 
 space (even if you jack up the RAM to 20 GB, no cigar). Honestly, there is no 
 benefit in buffering any of this data to begin with; it is just a side 
 effect of using high-level copy methods. There is no memory ballooning at 
 all when the content is written directly to the network.
 
 I will provide a test project and note the breakpoints where you can debug 
 and watch the process walk all the way down the aisle to an OOME. I will have 
 this for you ASAP.
 
 
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@ant.apache.org
 For additional commands, e-mail: dev-h...@ant.apache.org

-
To unsubscribe, e-mail: dev-unsubscr...@ant.apache.org
For additional commands, e-mail: dev-h...@ant.apache.org



Ivy extends loses defaultconfmapping

2015-04-09 Thread Gintautas Grigelionis
I tried to use the extends tag to avoid boilerplate configurations, like this:

<ivy-module ...>
  <info organisation="a" module="b">
    <extends organisation="c" module="d" revision="1"
             extendType="configurations"/>
  </info>

  <publications>
    ...
  </publications>

  <dependencies>
    ...
  </dependencies>
</ivy-module>

The ivy.xml containing the configurations is:

<ivy-module ...>
  <info organisation="c" module="d" revision="1" status="integration"
        publication="..."/>
  <configurations defaultconfmapping="*->@">
    <conf name="provided" transitive="true" description="Required for
      compilation, but provided by the container or JRE at runtime."/>
    <conf name="compile" transitive="true" description="Required for
      compilation"/>
    <conf name="runtime" transitive="true" extends="compile"
      description="Required at runtime"/>
    <conf name="test" transitive="true" extends="runtime"
      description="Required for test only"/>
  </configurations>
  <publications>
    <artifact name="d" type="pom" ext="pom" conf="compile"/>
  </publications>
</ivy-module>

I generate the corresponding pom and publish it together with the ivy.xml, so
that I can run publish as usual and have something to resolve against.

The resulting published ivy.xml looks like this:

<ivy-module ...>
  <info organisation="a" module="b" revision="..." status="integration"
        publication="...">
    <!-- <extends organisation="c" module="d" revision="1"
                  extendType="configurations"/> -->
  </info>

  <configurations>
    <!-- configurations inherited from c#d;1 -->
    <conf name="provided" visibility="public" description="Required to
      compile application, but provided by the container or JRE at runtime."/>
    <conf name="compile" visibility="public" description="Required to
      compile application"/>
    <conf name="runtime" visibility="public" description="Required at
      runtime" extends="compile"/>
    <conf name="test" visibility="public" description="Required for
      test only" extends="runtime"/>
  </configurations>

  <publications>
    ...
  </publications>

  <dependencies>
    ...
  </dependencies>
</ivy-module>
Please note the missing defaultconfmapping, which makes Ivy revert to the
default defaultconfmapping (*->*). That has the effect of mapping every
configuration to all other configurations (for example, a dependency declared
without an explicit mapping ends up visible in every configuration and pulls
in all of the dependency's configurations), making configurations useless in
resolve.
Is this a bug, or am I missing something?

I also noticed other undocumented effects of extends: on resolve, Ivy looks
for the parent ivy.xml in .. (an undocumented default value for the location
attribute, plus location apparently taking precedence over the resolvers even
when it is not specified explicitly?), and, on retrieve, Ivy treats the name
of the repository where the resolved ivy.xml used for extending was found as
a resolver reference name and complains that that name was not defined in the
Ivy settings.


Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

2015-04-09 Thread Nicolas Lalevée

 On Apr 9, 2015, at 4:51 PM, Maarten Coene maarten_co...@yahoo.com.INVALID 
 wrote:
 
 I'm not a fan of this proposal; I like it that Ivy doesn't have any 
 dependencies when using standard resolvers.
 Perhaps it could be added to the documentation that if you use the 
 URLResolver for large uploads you'll have to add httpclient to the classpath?

+1
And considering we are packaging Ivy for Eclipse, we would somehow have to get 
httpclient installed there as well if it became a requirement.

Nicolas

 
 
 Maarten
 
 
 
 
 - Original Message -
 From: Antoine Levy Lambert anto...@gmx.de
 To: Ant Developers List dev@ant.apache.org
 Cc: 
 Sent: Thursday, April 9, 2015 3:50
 Subject: Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish
 
 Also, I wonder whether we should not make the use of httpclient with Ivy 
 compulsory, since Loren says that the JDK's HttpURLConnection always copies 
 the full file into a byte array when authentication is performed.
 
 That would make the code simpler.
 
 Regards,
 
 Antoine
 
 On Apr 7, 2015, at 9:22 PM, Antoine Levy Lambert anto...@gmx.de wrote:
 
 Hi,
 
 I wonder whether we should not upgrade Ivy to use the latest httpclient 
 library too?
 
 Regards,
 
 Antoine
 
 On Apr 7, 2015, at 12:46 PM, Loren Kratzke (JIRA) j...@apache.org wrote:
 
 
  [ 
 https://issues.apache.org/jira/browse/IVY-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14483468#comment-14483468
  ] 
 
 Loren Kratzke edited comment on IVY-1197 at 4/7/15 4:45 PM:
 
 
 I would be happy to provide you with a project that will reproduce the 
 issue. I can and will do that. 
 
 Generally speaking, from a high level, the utility classes are calling 
 convenience methods and writing to streams that ultimately buffer the data 
 being written. There is buffering, then more buffering, and even more 
 buffering, until you have multiple copies of the entire content of the 
 stream stored in oversized buffers (because they double in size when they 
 fill up). Oddly, the twist is that the JVM hits a limit no matter how much 
 RAM you allocate. Once the buffers total more than about 1 GB (which is 
 what happens with a 100-200 MB upload), the JVM refuses to allocate more 
 buffer space (even if you jack up the RAM to 20 GB, no cigar). Honestly, 
 there is no benefit in buffering any of this data to begin with; it is just 
 a side effect of using high-level copy methods. There is no memory 
 ballooning at all when the content is written directly to the network.
 
 I will provide a test project and note the breakpoints where you can debug 
 and watch the process walk all the way down the aisle to an OOME. I will 
 have this for you ASAP.
 
 
 
 
 