Hi everyone.

I found the problem with my configuration: WildFly only accepts database 
connections through datasources, and I was not using a datasource for the 
in-deployment connection. So this first problem is solved.
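
For anyone hitting the same issue, this is roughly what the deployment's 
persistence.xml ends up looking like (the JNDI name below is a placeholder; 
it must match a datasource actually defined in WildFly's standalone.xml):

<!-- persistence.xml inside the micro-deployment: refer to a WildFly-managed
     datasource instead of a direct JDBC URL. "java:jboss/datasources/H2TestDS"
     is a hypothetical name for a datasource configured in the server. -->
<persistence-unit name="test-pu" transaction-type="JTA">
    <jta-data-source>java:jboss/datasources/H2TestDS</jta-data-source>
</persistence-unit>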


However I found another problem during the setup of integration tests.

I am able to:

   1. start a Server instance in the test case, using the appropriate API 
   Server.createTcpServer(...)
   2. obtain a connection to an in-memory database using a local URL 
   "*jdbc:h2:mem:arquillian;DB_CLOSE_DELAY=-1*"
   3. use a remote URL to reach the same in-memory database inside the 
   arquillian-managed micro-deployment: 
   "*jdbc:h2:tcp://localhost:9128/mem:arquillian;DB_CLOSE_DELAY=-1*"
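
The three steps above can be condensed into a self-contained sketch 
(assuming the h2 dependency, e.g. 1.4.200, is on the classpath; port 9128 
as in my setup):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.h2.tools.Server;

public class H2InMemoryTcpSketch {
    public static void main(String[] args) throws Exception {
        // 1. Start a TCP server in the same JVM as the tests.
        Server server = Server.createTcpServer(
                "-tcp", "-tcpAllowOthers", "-tcpPort", "9128").start();
        try (
            // 2. Local (in-process) connection: creates the in-memory database.
            Connection local = DriverManager.getConnection(
                    "jdbc:h2:mem:arquillian;DB_CLOSE_DELAY=-1");
            // 3. Remote connection: reaches the same database through the server,
            //    because the server runs in the JVM that owns the mem: database.
            Connection remote = DriverManager.getConnection(
                    "jdbc:h2:tcp://localhost:9128/mem:arquillian;DB_CLOSE_DELAY=-1")
        ) {
            try (Statement st = local.createStatement()) {
                st.execute("CREATE TABLE t(id INT)");
                st.execute("INSERT INTO t VALUES (42)");
            }
            try (Statement st = remote.createStatement();
                 ResultSet rs = st.executeQuery("SELECT id FROM t")) {
                rs.next();
                // 42 when both URLs hit the same database
                System.out.println(rs.getInt(1));
            }
        } finally {
            server.stop();
        }
    }
}
```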

But I would like to automatically start the server during the Maven build, 
and I found that this can be done with the "java" goal of the "
exec-maven-plugin" (link 
<https://www.mojohaus.org/exec-maven-plugin/java-mojo.html>), as this goal 
uses the same JVM used by the tests.

So I configured the plugin with the following code:
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>3.0.0</version>
    <executions>
        <execution>
            <id>pre-integration-test</id>
            <phase>pre-integration-test</phase>
            <goals>
                <goal>java</goal>
            </goals>
            <configuration>
                <mainClass>org.h2.tools.Server</mainClass>
                <commandlineArgs>-tcp -tcpDaemon -tcpAllowOthers -tcpPort 
9128 -trace</commandlineArgs>
                <classpathScope>test</classpathScope>
                <cleanupDaemonThreads>false</cleanupDaemonThreads>
            </configuration>
        </execution>
    </executions>
</plugin>

I would have expected this to work for in-memory databases, as the java goal 
runs the server in the same process as the tests. Unfortunately, it does 
not.

With lsof I can see that the server is listening on port 9128, but no other 
process connects to that port, so the test connections are not using that 
server:
COMMAND   PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    24736 ivan94fi  218u  IPv6 446093      0t0  TCP *:9128 (LISTEN)


Instead, if I start the server inside the test case with 
Server.createTcpServer(...).start(), the output of lsof is the following:
COMMAND   PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    24736 ivan94fi  218u  IPv6 446093      0t0  TCP *:9128 (LISTEN)
java    24736 ivan94fi  220u  IPv6 444986      0t0  TCP localhost:9128->
localhost:37394 (ESTABLISHED)
java    24777 ivan94fi  589u  IPv4 448079      0t0  TCP localhost:37394->
localhost:9128 (ESTABLISHED)


The only way I managed to make the exec plugin work was to use the remote 
TCP URL for both connections and add the "*-ifNotExists*" parameter to 
the server creation, so that the database can be created from the remote 
URL.
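
Concretely, the working (but unwanted) configuration differs from the plugin 
snippet above only in the server arguments:

<!-- exec-maven-plugin configuration fragment: "-ifNotExists" allows remote
     clients to create the in-memory database, which is the part I consider
     insecure. -->
<commandlineArgs>-tcp -tcpDaemon -tcpAllowOthers -tcpPort 9128 -ifNotExists 
-trace</commandlineArgs>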

As this last approach does not seem secure, and I would prefer to avoid it, 
is it possible to start the server *for an in-memory database* in some other 
way that does not clutter the test case code? (Mostly because I am planning 
to use other databases for tests, and I don't want the code for H2 server 
creation in tests that use another database.)




On Sunday, June 7, 2020 at 16:10:18 UTC+2, Ivan Prosperi wrote:
>
> Hi everyone.
>
> First of all I would like to thank the developers/maintainers of H2 
> database. It has always served me perfectly for my needs (especially for 
> automated testing of persistence layer).
>
> I am running *H2 database version 1.4.200* and using *Maven* to manage 
> dependencies. I am using *Java 8* and running on *Ubuntu 18.04*.
>
> I am currently in this situation:
>
>    - I am developing a *Java EE 8 REST application* which uses a database
>    - I am using *Hibernate* for persistence
>    - I am using Wildfly 18 as application server
>    - I want to test the endpoints of my application (using *Arquillian*), 
>    e.g. test that if some entities are in the database and I call the 
>    respective endpoint, the entities are returned.
>
>
> My integration tests for endpoints are structured in the following way:
>
>    - Start an H2 TCP server
>    - Create a non-managed EntityManagerFactory with a persistence.xml 
>    referring to the H2 database, which will be used for populating the 
>    database in the tests
>    - Use Arquillian to deploy a micro-deployment for the application on 
>    Wildfly
>    - The deployment will internally use another persistence.xml, with a 
>    connection to the same H2 database 
>    - (The Arquillian tests are executed outside the container, with 
>    @RunAsClient and @Deployment(testable = false))
>
>
> To my understanding, at this point I should have two connections to a 
> single H2 database (running in the TCP server):
>
>    1. The first one using a *non-managed* EntityManagerFactory, which 
>    should be used to insert instances in the database from the tests. I will 
>    call this "external" connection
>    2. The second one, using a *managed* Factory, handled by the Wildfly 
>    environment, which is used internally by the deployed application when I 
>    call the REST API from outside the container. I will call this "internal" 
>    connection, in that it is used inside the deployed war application
>
> *The problem is that I cannot establish the two connections to the same 
> database*: from the logs I can see that two connections are created for 
> the same database (at least it seems so), but when I add some entities to 
> the database from the test case using the "external" connection, no 
> entities are retrieved from the "internal" connection.
>
> *So it seems that two separate databases are created, instead of the 
> single one that I need.*
>
> The configurations that I tried are the following:
>
>    1. *In-Memory*. Using an in-memory database, following the recommended 
>    approach 
>    
> <https://stackoverflow.com/questions/5077584/h2-database-in-memory-mode-cannot-be-accessed-by-console>
>    :
>       - Create the "external" EntityManagerFactory manually, using 
>       "jdbc:h2:mem:H2-db;DB_CLOSE_DELAY=-1"
>       as url inside persistence.xml. This should create the database
>       - Start a TCP server in the test case programmatically: 
>       Server.createTcpServer("-tcp", "-tcpAllowOthers", "-tcpPort", "9128"
>       , "-trace");
>       - Use a tcp url for the persistence.xml url inside the 
>       micro-deployment: 
>       
>       "jdbc:h2:tcp://localhost:9128/mem:H2-db;DB_CLOSE_DELAY=-1;IFEXISTS=TRUE"
>       - Let Arquillian deploy the micro-deployment (I can see the logs 
>       that the persistence unit is created successfully)
>    2. *Persistent*. Using persistent databases:
>
>
>    - Manually start the server with the following command:
>       - java -cp h2-1.4.200.jar org.h2.tools.Server -tcp -tcpAllowOthers 
> -tcpPort 
>       9128 -trace -web -webAllowOthers
>       - Use the same TCP url for both persistence.xml files: 
>       
>       "jdbc:h2:tcp://localhost:9128/~/H2-db;DB_CLOSE_DELAY=-1;IFEXISTS=TRUE"
>       . As far as I understand, this means that both connections will 
>       be made to the same "remote" database, as in the normal server 
> mode of other DBMSs.
>       - Let Arquillian deploy the micro-deployment (I can see the logs 
>       that the persistence unit is created successfully)
>    
>
> Unfortunately, both approaches seem correct but do not work, in the 
> sense that they do not throw errors, and from the logs I can see both 
> persistence units being created successfully; but if I insert entities 
> from one persistence unit, the other does not see them, as if they were 
> linked to different databases.
>
> I would prefer an in-memory approach, as these data are used only for 
> tests and can be destroyed. I only tried the persistent approach because 
> the in-memory one did not work.
>
> I hope I made myself clear, as this is a rather complex situation even 
> for me.
>

-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.