Digging up this old discussion thread on removing the tomee-foo-webapp modules.

At the moment our TCK progress is essentially halted due to issues created by 
the complexity of these webapps and how we build the server.

The crux of the issue is that we're getting duplicate jars in our war files, 
which causes the TCK runs to hit startup errors and fail before any tests 
are run.  Here's an example:
    
    $ curl -O https://repository.apache.org/content/groups/snapshots/org/apache/tomee/apache-tomee/8.0.7-SNAPSHOT/apache-tomee-8.0.7-20210417.052409-158-plume.zip

    $ unzip -l apache-tomee-8.0.7-20210417.052409-158-plume.zip | grep cxf-core
      1431799  04-17-2021 05:23   apache-tomee-plume-8.0.7-SNAPSHOT/lib/cxf-core-3.5.0-SNAPSHOT.jar
      1431799  04-17-2021 05:23   apache-tomee-plume-8.0.7-SNAPSHOT/lib/cxf-core-3.5.0-20210417.035622-203.jar
    
There are duplicates of basically any SNAPSHOT dependency.  Sometimes there 
will even be duplicates of openejb-core.  I've checked the sha256 hashes on 
duplicate jars like the ones above, and in most cases they differ, meaning 
two different builds of the same SNAPSHOT are showing up.
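
For reference, that check is a one-liner (jar names taken from the listing 
above; shasum is what ships on macOS):

    $ cd apache-tomee-plume-8.0.7-SNAPSHOT/lib
    $ shasum -a 256 cxf-core-3.5.0-SNAPSHOT.jar cxf-core-3.5.0-20210417.035622-203.jar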

When we go to run the TCK we hit issues, because parts of the TCK are 
standalone (no server) and we need to construct a classpath of specific 
jars.  With duplicates present, that fails like so:

    Caused by: java.lang.Exception: Found more than one file to be included into path; dir=target/apache-tomee-plume-8.0.7-SNAPSHOT/lib, includes=cxf-rt-rs-client-*.jar, excludes=null; found: /Users/dblevins/work/apache/tomee-tck/target/apache-tomee-plume-8.0.7-SNAPSHOT/lib/cxf-rt-rs-client-3.5.0-20210417.035728-202.jar:/Users/dblevins/work/apache/tomee-tck/target/apache-tomee-plume-8.0.7-SNAPSHOT/lib/cxf-rt-rs-client-3.5.0-SNAPSHOT.jar
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0 (Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance (NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance (DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance (Constructor.java:423)
        at org.codehaus.groovy.reflection.CachedConstructor.invoke (CachedConstructor.java:77)
        at org.codehaus.groovy.reflection.CachedConstructor.doConstructorInvoke (CachedConstructor.java:71)
        at org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrap.callConstructor (ConstructorSite.java:84)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallConstructor (CallSiteArray.java:52)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor (AbstractCallSite.java:192)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor (AbstractCallSite.java:200)
        at openejb.tck.util.PathBuilder.append (PathBuilder.groovy:89)

What's worse, this issue seems somewhat random: sometimes you get away with 
no duplicate jars at all.
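
If you want to know up front whether a given extract is affected, one cheap 
pre-flight check is to strip the version suffix off every jar in lib/ and 
look for names that show up twice.  A rough sketch (the sed pattern is only 
an approximation of the SNAPSHOT/timestamp naming above):

    $ ls target/apache-tomee-plume-8.0.7-SNAPSHOT/lib \
        | sed -E 's/-[0-9][0-9.]*(-(SNAPSHOT|[0-9]{8}\.[0-9]{6}-[0-9]+))?\.jar$//' \
        | sort | uniq -d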

Additionally, if you need to rebuild the server binary (say plume) to test a 
one-line change, it takes quite a while because several binaries have to be 
built first.  The tomee/tomee-webapp/ module builds a war that feeds into 
the tomee/tomee-plume-webapp/ module, which feeds into tomee/apache-tomee/, 
which produces all the actual zips and tars.  Here's how big the target 
directories are for those three modules after a build:

    $ du -sh tomee/tomee-webapp/target/ tomee/tomee-plume-webapp/target/ tomee/apache-tomee/target/
     37M        tomee/tomee-webapp/target/
    206M        tomee/tomee-plume-webapp/target/
    1.1G        tomee/apache-tomee/target/
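
There's no quick path around it with the Maven reactor either; even scoping 
the rebuild to those modules still drags the whole chain through.  Something 
like the following is about the best you can do (a sketch; assumes the 
change is inside that chain and nothing upstream needs rebuilding):

    $ mvn install -DskipTests -pl tomee/tomee-webapp,tomee/tomee-plume-webapp,tomee/apache-tomee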

Overall we produce 3.3GB in our build.  To get your one-line change up to the 
internet for a TCK run, you need to upload an insane amount of binary data.  
On an EC2 box with an extremely good internet connection, it takes about 2 
hours to do a snapshot deploy.

A snapshot that is broken and unusable.

I think I can fix our duplicate-jar issue and get things working, but FYI 
that doesn't really fix our build.  We'll need to do more serious work to 
get it into shape.


-David


> On May 23, 2019, at 1:28 AM, David Blevins <david.blev...@gmail.com> wrote:
> 
> We have a bit of unused legacy with regards to the following webapps:
> 
>    34M tomee-microprofile-webapp-8.0.0-M3.war
>    58M tomee-plume-webapp-8.0.0-M3.war
>    51M tomee-plus-webapp-8.0.0-M3.war
>   6.6M tomee-webaccess-8.0.0-M3.war
>    32M tomee-webapp-8.0.0-M3.war
> 
> From the early days of TomEE we created a "drop-in webapp" version for plain 
> Tomcat users.  This was largely a convenience for people who may have had to 
> use a stock Tomcat in cloud or other environments.  The idea was that they 
> could upgrade their Tomcat to a TomEE by dropping in the war.
> 
> In practice, I don't believe anyone actually used it, and we do not heavily 
> test this technique.  There is a known limitation: if your webapp starts 
> before the "tomee" webapp, the integration has to do a separate 
> undeploy/redeploy of your webapp, which is clunky.  As well, the magic 
> required to load the tomee webapp's contents into the Tomcat server 
> classloader is obtuse and complicates the integration.
> 
> We should discuss removing them from TomEE 8.0.
> 
> There'd be a bit of work involved, but it would trim a good 181MB from the 
> release process.  We have to upload that 181MB twice: once to Nexus and once 
> to the Apache Mirror System staging repo.  So in the end it reduces the 
> upload overhead by 362MB, which is a big deal if you're on a network that 
> isn't blazingly fast.
> 
> Thoughts?
> 
> -- 
> David Blevins
> http://twitter.com/dblevins
> http://www.tomitribe.com
> 
