yuemenglong opened a new issue, #33598:
URL: https://github.com/apache/shardingsphere/issues/33598
## Question
I have configured read-write splitting as shown below, intending that if one of the databases goes down, queries will not be affected. However, during testing I found that stopping either database leads to continuous errors, with no successful queries at all.

My expectation was that stopping one database would either:

- cause roughly half of the queries to fail (those routed to the stopped instance; see the retry sketch after the code below), or
- leave queries unaffected, with ShardingSphere handling the fault tolerance.

In reality, stopping either one of the databases results in a persistent error. Could you please advise whether there is an issue with my configuration? Thank you very much.
Code:
```
package sharding;

import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Random;
import java.util.concurrent.TimeUnit;

public class ShardingRw {

    private static final String shardingFile = "sharding_rw.yaml";
    private static final String JDBC_URL = "jdbc:shardingsphere:classpath:sharding/" + shardingFile;
    private static final String DRIVER_CLASS_NAME = "org.apache.shardingsphere.driver.ShardingSphereDriver";
    private static final int RECORD_COUNT = 100; // Number of records to write

    public static void main(String[] args) throws Exception {
        // Initialize data source
        HikariDataSource dataSource = new HikariDataSource();
        dataSource.setDriverClassName(DRIVER_CLASS_NAME);
        dataSource.setJdbcUrl(JDBC_URL);
        try {
            // Clear table
            try (Connection conn = dataSource.getConnection();
                 PreparedStatement clearPs = conn.prepareStatement("TRUNCATE TABLE test")) {
                clearPs.executeUpdate();
            } catch (Exception e) {
                System.err.println("Error clearing table: " + e.getMessage());
                e.printStackTrace();
            }

            // Insert 100 records
            for (int i = 1; i <= RECORD_COUNT; i++) {
                try (Connection conn = dataSource.getConnection();
                     PreparedStatement ps = conn.prepareStatement("INSERT INTO test (id, value) VALUES (?, ?)")) {
                    ps.setLong(1, i);
                    ps.setString(2, "value" + i);
                    ps.executeUpdate();
                } catch (Exception e) {
                    System.err.println("Error inserting record " + i + ": " + e.getMessage());
                    e.printStackTrace();
                }
            }
            System.out.println("Data insertion completed.");

            // Randomly read a record every second
            Random random = new Random();
            while (true) {
                int randomId = random.nextInt(RECORD_COUNT) + 1;
                try (Connection conn = dataSource.getConnection();
                     PreparedStatement ps = conn.prepareStatement("SELECT * FROM test WHERE id = ?")) {
                    ps.setInt(1, randomId);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) {
                            System.out.println("Random record read: id=" + rs.getInt("id") + ", value=" + rs.getString("value"));
                        } else {
                            System.out.println("Record " + randomId + " does not exist.");
                        }
                    }
                } catch (Exception e) {
                    System.err.println("Error reading record " + randomId + ": " + e.getMessage());
                    e.printStackTrace();
                }
                // Wait 1 second
                TimeUnit.SECONDS.sleep(1);
            }
        } finally {
            dataSource.close();
        }
    }
}
```
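For illustration, the helper below is a hypothetical sketch, not part of my actual test; it assumes the same imports and `dataSource` as the `ShardingRw` class above. It shows the behavior I expected: if each read were routed independently by the RANDOM load balancer, a single retry should normally reach the remaining healthy data source.

```
// Hypothetical helper for illustration only (not in the original test).
// Assumes the same imports and class context as ShardingRw above.
private static String readWithRetry(HikariDataSource dataSource, int id, int maxAttempts) throws Exception {
    Exception lastError = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT * FROM test WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                // Return the value if found; null means the id does not exist.
                return rs.next() ? rs.getString("value") : null;
            }
        } catch (Exception e) {
            // With per-query RANDOM routing, the next attempt may be sent to the other data source.
            lastError = e;
        }
    }
    throw lastError;
}
```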
Config:
```
dataSources:
  test_m:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.cj.jdbc.Driver
    jdbcUrl: jdbc:mysql://172.28.41.59:3008/test?useUnicode=true&characterEncoding=utf8&allowPublicKeyRetrieval=true
    username: root
    password: root
  test_s:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.cj.jdbc.Driver
    jdbcUrl: jdbc:mysql://172.28.41.59:3009/test?useUnicode=true&characterEncoding=utf8&allowPublicKeyRetrieval=true
    username: root
    password: root

rules:
- !SHARDING
  tables:
    test:
      actualDataNodes: test.test
- !READWRITE_SPLITTING
  dataSources:
    test:
      writeDataSourceName: test_m
      readDataSourceNames:
        - test_m
        - test_s
      transactionalReadQueryStrategy: PRIMARY
      loadBalancerName: random
  loadBalancers:
    random:
      type: RANDOM

props:
  sql-show: false
```
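For completeness, here is a small diagnostic sketch, separate from the ShardingSphere setup, that checks each MySQL backend directly with plain JDBC (using the URLs and credentials from the config above) so I can verify which instance is actually unreachable when the errors start.

```
// Diagnostic sketch, not part of the original test: connects to each MySQL
// backend directly to see which of the two instances is still reachable.
import java.sql.Connection;
import java.sql.DriverManager;

public class BackendCheck {
    public static void main(String[] args) {
        String[] urls = {
            "jdbc:mysql://172.28.41.59:3008/test?allowPublicKeyRetrieval=true",
            "jdbc:mysql://172.28.41.59:3009/test?allowPublicKeyRetrieval=true"
        };
        for (String url : urls) {
            try (Connection conn = DriverManager.getConnection(url, "root", "root")) {
                System.out.println(url + " -> reachable (valid=" + conn.isValid(2) + ")");
            } catch (Exception e) {
                System.out.println(url + " -> unreachable: " + e.getMessage());
            }
        }
    }
}
```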