yuqi1129 commented on code in PR #6432:
URL: https://github.com/apache/gravitino/pull/6432#discussion_r1969121631
##########
integration-test-common/src/test/java/org/apache/gravitino/integration/test/container/MySQLContainer.java:
##########
@@ -119,6 +119,14 @@ public void createDatabase(TestDatabaseName testDatabaseName) {
     StringUtils.substring(
         getJdbcUrl(testDatabaseName), 0, getJdbcUrl(testDatabaseName).lastIndexOf("/"));
+    // Fix https://github.com/apache/gravitino/issues/6392, MySQL JDBC driver may not load
+    // automatically.
+    try {
+      Class.forName("com.mysql.jdbc.Driver");
Review Comment:
@FANNG1 @jerryshao
The following is the root cause:
When `SparkIcebergCatalogRestBackendIT33` executes Spark SQL, it uses the
`IsolatedClientLoader` (a classloader that isolates the Hive dependencies and
whose parent is the root classloader) to run part of the code. This causes the
static initializer of `java.sql.DriverManager` (`loadInitialDrivers`) to be
triggered from within the `IsolatedClientLoader`, so the classloader recorded
for the driver in the `DriverInfo` registered with `DriverManager` is the
`IsolatedClientLoader`.
`SparkIcebergCatalogRestBackendIT33` and `SparkJdbcMysqlCatalogIT33` run in
the same JVM. Since the static initializer of a class loaded by the root
classloader runs only once, executing `SparkJdbcMysqlCatalogIT33` afterward
fails because of the incorrect driver classloader.
```
@CallerSensitive
public static Driver getDriver(String url) throws SQLException {
    println("DriverManager.getDriver(\"" + url + "\")");
    Class<?> callerClass = Reflection.getCallerClass();
    // Walk through the loaded registeredDrivers attempting to locate someone
    // who understands the given URL.
    for (DriverInfo aDriver : registeredDrivers) {
        // If the caller does not have permission to load the driver then
        // skip it.
        // aDriver loaded by IsolatedClientLoader will fail here
        if (isDriverAllowed(aDriver.driver, callerClass)) {
            try {
                if (aDriver.driver.acceptsURL(url)) {
                    // Success!
                    println("getDriver returning " + aDriver.driver.getClass().getName());
                    return (aDriver.driver);
                }
            } catch (SQLException sqe) {
                // Drop through and try the next driver.
            }
        } else {
            println("    skipping: " + aDriver.driver.getClass().getName());
        }
    }
    println("getDriver: no suitable driver");
    throw new SQLException("No suitable driver", "08001");
}

private static boolean isDriverAllowed(Driver driver, ClassLoader classLoader) {
    boolean result = false;
    if (driver != null) {
        Class<?> aClass = null;
        try {
            aClass = Class.forName(driver.getClass().getName(), true, classLoader);
        } catch (Exception ex) {
            result = false;
        }
        // aClass is resolved by the AppClassLoader, while driver.getClass() was
        // defined by the IsolatedClientLoader; that is why the driver exists
        // but cannot be fetched successfully.
        result = (aClass == driver.getClass()) ? true : false;
    }
    return result;
}
```
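To see why `aClass == driver.getClass()` fails, here is a small standalone sketch (not Gravitino or Spark code; `IsolatingLoader` and `Probe` are made-up names for illustration). It redefines a class in a child classloader that does not delegate, mimicking the `IsolatedClientLoader`, then performs the same identity check that `isDriverAllowed` does:
```
import java.io.IOException;
import java.io.InputStream;

public class ClassLoaderIdentityDemo {

  // Stands in for a JDBC driver class such as com.mysql.jdbc.Driver.
  public static class Probe {}

  // A classloader that defines Probe itself instead of delegating to its
  // parent, mimicking Spark's IsolatedClientLoader.
  static class IsolatingLoader extends ClassLoader {
    IsolatingLoader(ClassLoader parent) {
      super(parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
      if (!name.equals(Probe.class.getName())) {
        return super.loadClass(name, resolve); // delegate everything else
      }
      String path = name.replace('.', '/') + ".class";
      try (InputStream in = getParent().getResourceAsStream(path)) {
        byte[] bytes = in.readAllBytes();
        return defineClass(name, bytes, 0, bytes.length);
      } catch (IOException e) {
        throw new ClassNotFoundException(name, e);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    ClassLoader app = ClassLoaderIdentityDemo.class.getClassLoader();
    // The "driver" class as it would be registered from inside the isolated loader.
    Class<?> isolated = new IsolatingLoader(app).loadClass(Probe.class.getName());
    // The same lookup isDriverAllowed performs with the caller's classloader.
    Class<?> resolved = Class.forName(Probe.class.getName(), true, app);
    // Different defining loaders => different Class objects => the driver is skipped.
    System.out.println(resolved == isolated); // prints false
  }
}
```
Even though both `Class` objects describe the same bytecode, they were defined by different loaders, so the reference comparison fails and `DriverManager` skips the driver.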
The order of these unit tests matters. If `SparkJdbcMysqlCatalogIT33` is
executed first, there is no problem, which is why this issue is not
consistently reproducible.
Solutions:
- Re-register the driver via `Class.forName("com.mysql.jdbc.Driver")` so it is
loaded by the app classloader (the approach taken in this PR).
- Ensure that `SparkJdbcMysqlCatalogIT33` is always executed first in the
module.
- Run each test in a separate JVM. This can be achieved with:
```
test {
forkEvery = 1
maxParallelForks = 1
}
```
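The first option can be sketched as a tiny helper (`ensureDriver` is a hypothetical name; the actual PR simply calls `Class.forName("com.mysql.jdbc.Driver")` inside `MySQLContainer.createDatabase`):
```
// Hypothetical helper illustrating the fix: calling Class.forName from code
// loaded by the app classloader loads (and statically initializes) the driver
// class in that loader, so DriverManager registers it under a usable loader.
public class DriverReload {

  public static void ensureDriver(String driverClassName) {
    try {
      // e.g. driverClassName = "com.mysql.jdbc.Driver", as in the PR diff
      Class.forName(driverClassName);
    } catch (ClassNotFoundException e) {
      throw new IllegalStateException(
          "JDBC driver not on classpath: " + driverClassName, e);
    }
  }
}
```
If the class was already initialized by the app classloader this is a harmless no-op, so it is safe to call from every test that needs the driver.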
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]