I got an error on this:

sysbench --test=/usr/share/sysbench/tests/include/oltp_legacy/parallel_prepare.lua \
  --mysql-host=127.0.0.1 --mysql-port=33033 --mysql-user=sysbench \
  --mysql-password=password --mysql-db=sysbench --mysql-table-engine=innodb \
  --db-driver=mysql --oltp_tables_count=10 --oltp-test-mode=complex \
  --oltp-read-only=off --oltp-table-size=200000 --threads=10 \
  --rand-type=uniform --rand-init=on cleanup

Unknown option: --oltp_tables_count.

Usage: sysbench [general-options]... --test=<test-name> [test-options]... command
General options:
  --num-threads=N            number of threads to use [1]
  --max-requests=N           limit for total number of requests [10000]
  --max-time=N               limit for total execution time in seconds [0]
  --forced-shutdown=STRING   amount of time to wait after --max-time before forcing shutdown [off]
  --thread-stack-size=SIZE   size of stack per thread [32K]
  --init-rng=[on|off]        initialize random number generator [off]
  --test=STRING              test to run
  --debug=[on|off]           print more debugging info [off]
  --validate=[on|off]        perform validation checks where possible [off]
  --help=[on|off]            print help and exit
  --version=[on|off]         print version and exit

Compiled-in tests:
  fileio  - File I/O test
  cpu     - CPU performance test
  memory  - Memory functions speed test
  threads - Threads subsystem performance test
  mutex   - Mutex performance test
  oltp    - OLTP test

Commands: prepare run cleanup help version

See 'sysbench --test=<name> help' for a list of options for each test.
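The usage text above suggests why the command fails: this build is sysbench 0.4.x, which only knows its compiled-in tests (`--test=oltp`) and does not understand the Lua-script options such as `--oltp_tables_count`, which belong to sysbench >= 0.5/1.0. A minimal sketch of how one might branch on the installed version (the version string here is hard-coded for illustration; in practice it would come from `sysbench --version`):

```shell
# Sketch, assuming the Lua-based options only exist in sysbench >= 0.5/1.0
# and 0.4.x only has compiled-in tests.
ver="0.4.12"   # in practice: ver=$(sysbench --version | awk '{print $2}')
case "$ver" in
  0.4*) echo "legacy syntax: --test=oltp --num-threads=N --max-time=60" ;;
  *)    echo "Lua syntax: .../oltp_legacy/oltp.lua --threads=N --time=120" ;;
esac
```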
but I have these:

echo "Performing test SQ-${thread}T-${run}"
sysbench --test=oltp --db-driver=mysql --oltp-table-size=40000000 \
  --mysql-db=sysbench --mysql-user=sysbench --mysql-password=password \
  --max-time=60 --max-requests=0 --num-threads=${thread} run > \
  /root/SQ-${thread}T-${run}
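The `${thread}` and `${run}` variables imply an outer driver loop. A hypothetical sketch (the thread counts are taken from the result dumps below: 1, 4, 8, 16, 32; the run indices are a guess and the sysbench invocation is left commented out):

```shell
# Hypothetical driver loop; thread counts match the results in this mail,
# run count is an assumption.
for thread in 1 4 8 16 32; do
  for run in 1 2 3; do
    echo "Performing test SQ-${thread}T-${run}"
    # sysbench --test=oltp ... --num-threads=${thread} run > /root/SQ-${thread}T-${run}
  done
done
```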
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address = 127.0.0.1
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
[isamchk]
key_buffer = 16M
!includedir /etc/mysql/conf.d/
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 1
Doing OLTP test.
Running mixed OLTP test
Using Special distribution (12 iterations, 1 pct of values are returned
in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Threads started!
Time limit exceeded, exiting...
Done.
OLTP test statistics:
queries performed:
read: 84126
write: 30045
other: 12018
total: 126189
transactions: 6009 (100.14 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 114171 (1902.71 per sec.)
other operations: 12018 (200.28 per sec.)
Test execution summary:
total time: 60.0045s
total number of events: 6009
total time taken by event execution: 59.9812
per-request statistics:
min: 4.47ms
avg: 9.98ms
max: 91.38ms
approx. 95 percentile: 19.44ms
Threads fairness:
events (avg/stddev): 6009.0000/0.00
execution time (avg/stddev): 59.9812/0.00
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 4
Doing OLTP test.
Running mixed OLTP test
Using Special distribution (12 iterations, 1 pct of values are returned
in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Threads started!
Time limit exceeded, exiting...
(last message repeated 3 times)
Done.
OLTP test statistics:
queries performed:
read: 372036
write: 132870
other: 53148
total: 558054
transactions: 26574 (442.84 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 504906 (8414.00 per sec.)
other operations: 53148 (885.68 per sec.)
Test execution summary:
total time: 60.0078s
total number of events: 26574
total time taken by event execution: 239.9016
per-request statistics:
min: 2.85ms
avg: 9.03ms
max: 770.42ms
approx. 95 percentile: 27.03ms
Threads fairness:
events (avg/stddev): 6643.5000/38.47
execution time (avg/stddev): 59.9754/0.00
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 8
Doing OLTP test.
Running mixed OLTP test
Using Special distribution (12 iterations, 1 pct of values are returned
in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Threads started!
Time limit exceeded, exiting...
(last message repeated 7 times)
Done.
OLTP test statistics:
queries performed:
read: 622300
write: 222250
other: 88900
total: 933450
transactions: 44450 (740.78 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 844550 (14074.87 per sec.)
other operations: 88900 (1481.57 per sec.)
Test execution summary:
total time: 60.0041s
total number of events: 44450
total time taken by event execution: 479.7919
per-request statistics:
min: 3.07ms
avg: 10.79ms
max: 645.09ms
approx. 95 percentile: 28.71ms
Threads fairness:
events (avg/stddev): 5556.2500/35.87
execution time (avg/stddev): 59.9740/0.00
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 16
Doing OLTP test.
Running mixed OLTP test
Using Special distribution (12 iterations, 1 pct of values are returned
in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Threads started!
Time limit exceeded, exiting...
(last message repeated 15 times)
Done.
OLTP test statistics:
queries performed:
read: 838124
write: 299330
other: 119732
total: 1257186
transactions: 59866 (997.59 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 1137454 (18954.19 per sec.)
other operations: 119732 (1995.18 per sec.)
Test execution summary:
total time: 60.0107s
total number of events: 59866
total time taken by event execution: 959.8168
per-request statistics:
min: 3.12ms
avg: 16.03ms
max: 526.87ms
approx. 95 percentile: 33.19ms
Threads fairness:
events (avg/stddev): 3741.6250/13.91
execution time (avg/stddev): 59.9885/0.00
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 32
Doing OLTP test.
Running mixed OLTP test
Using Special distribution (12 iterations, 1 pct of values are returned
in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Threads started!
Time limit exceeded, exiting...
(last message repeated 31 times)
Done.
OLTP test statistics:
queries performed:
read: 994980
write: 355350
other: 142140
total: 1492470
transactions: 71070 (1184.18 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 1350330 (22499.43 per sec.)
other operations: 142140 (2368.36 per sec.)
Test execution summary:
total time: 60.0162s
total number of events: 71070
total time taken by event execution: 1920.0299
per-request statistics:
min: 3.10ms
avg: 27.02ms
max: 366.66ms
approx. 95 percentile: 49.12ms
Threads fairness:
events (avg/stddev): 2220.9375/17.12
execution time (avg/stddev): 60.0009/0.00
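To compare runs, the transactions-per-second and 95th-percentile figures can be pulled out of the captured result files. A sketch assuming the sysbench 0.4 output format shown above (fed inline here via a heredoc; normally the input would be one of the /root/SQ-* files):

```shell
# Extract tps and p95 from a sysbench 0.4 result; the heredoc stands in
# for a real result file.
awk -F'[()]' '/transactions:/  { print "tps:", $2 }
              /95 percentile/ { split($0, a, ": *"); print "p95:", a[2] }' <<'EOF'
    transactions:                        44450  (740.78 per sec.)
    approx.  95 percentile:              28.71ms
EOF
# prints:
#   tps: 740.78 per sec.
#   p95: 28.71ms
```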
Gerhard W. Recher
net4sec UG (haftungsbeschränkt)
Leitenweg 6
86929 Penzing
+49 171 4802507
On 04.12.2017 at 13:03, German Anders wrote:
> Could anyone run the tests and share some results?
>
> Thanks in advance,
>
> Best,
>
>
> German
>
> 2017-11-30 14:25 GMT-03:00 German Anders <[email protected]>:
>
> That's correct, IPoIB for the backend (already configured the irq
> affinity), and 10GbE on the frontend. I would love to try rdma
> but like you said is not stable for production, so I think I'll
> have to wait for that. Yeah, the thing is that it's not my
> decision to go for 50GbE or 100GbE... :( so.. 10GbE for the
> front-end will be...
>
> Would be really helpful if someone could run the following
> sysbench test on a mysql db so I could make some compares:
>
> my.cnf configuration file:
>
> [mysqld_safe]
> nice = 0
> pid-file =
> /home/test_db/mysql/mysql.pid
>
> [client]
> port = 33033
> socket =
> /home/test_db/mysql/mysql.sock
>
> [mysqld]
> user = test_db
> port = 33033
> socket =
> /home/test_db/mysql/mysql.sock
> pid-file =
> /home/test_db/mysql/mysql.pid
> log-error =
> /home/test_db/mysql/mysql.err
> datadir = /home/test_db/mysql/data
> tmpdir = /tmp
> server-id = 1
>
> # ** Binlogging **
> #log-bin =
> /home/test_db/mysql/binlog/mysql-bin
> #log_bin_index =
> /home/test_db/mysql/binlog/mysql-bin.index
> expire_logs_days = 1
> max_binlog_size = 512MB
>
> thread_handling = pool-of-threads
> thread_pool_max_threads = 300
>
>
> # ** Slow query log **
> slow_query_log = 1
> slow_query_log_file =
> /home/test_db/mysql/mysql-slow.log
> long_query_time = 10
> log_output = FILE
> log_slow_slave_statements = 1
> log_slow_verbosity = query_plan,innodb,explain
>
> # ** INNODB Specific options **
> transaction_isolation = READ-COMMITTED
> innodb_buffer_pool_size = 12G
> innodb_data_file_path = ibdata1:256M:autoextend
> innodb_thread_concurrency = 16
> innodb_log_file_size = 256M
> innodb_log_files_in_group = 3
> innodb_file_per_table
> innodb_log_buffer_size = 16M
> innodb_stats_on_metadata = 0
> innodb_lock_wait_timeout = 30
> # innodb_flush_method = O_DSYNC
> innodb_flush_method = O_DIRECT
> max_connections = 10000
> max_connect_errors = 999999
> max_allowed_packet = 128M
> skip-host-cache
> skip-name-resolve
> explicit_defaults_for_timestamp = 1
> performance_schema = OFF
> log_warnings = 2
> event_scheduler = ON
>
> # ** Specific Galera Cluster Settings **
> binlog_format = ROW
> default-storage-engine = innodb
> query_cache_size = 0
> query_cache_type = 0
>
>
> Volume is just an RBD (on a RF=3 pool) with the default 22-bit
> order, mounted on /home/test_db/mysql/data
>
> commands for the test:
>
> sysbench
> --test=/usr/share/sysbench/tests/include/oltp_legacy/parallel_prepare.lua
> --mysql-host=<hostname> --mysql-port=33033 --mysql-user=sysbench
> --mysql-password=sysbench --mysql-db=sysbench
> --mysql-table-engine=innodb --db-driver=mysql
> --oltp_tables_count=10 --oltp-test-mode=complex
> --oltp-read-only=off --oltp-table-size=200000 --threads=10
> --rand-type=uniform --rand-init=on cleanup > /dev/null 2>/dev/null
>
> sysbench
> --test=/usr/share/sysbench/tests/include/oltp_legacy/parallel_prepare.lua
> --mysql-host=<hostname> --mysql-port=33033 --mysql-user=sysbench
> --mysql-password=sysbench --mysql-db=sysbench
> --mysql-table-engine=innodb --db-driver=mysql
> --oltp_tables_count=10 --oltp-test-mode=complex
> --oltp-read-only=off --oltp-table-size=200000 --threads=10
> --rand-type=uniform --rand-init=on prepare > /dev/null 2>/dev/null
>
> sysbench
> --test=/usr/share/sysbench/tests/include/oltp_legacy/oltp.lua
> --mysql-host=<hostname> --mysql-port=33033 --mysql-user=sysbench
> --mysql-password=sysbench --mysql-db=sysbench
> --mysql-table-engine=innodb --db-driver=mysql
> --oltp_tables_count=10 --oltp-test-mode=complex
> --oltp-read-only=off --oltp-table-size=200000 --threads=20
> --rand-type=uniform --rand-init=on --time=120 run >
> result_sysbench_perf_test.out 2>/dev/null
>
> I'm looking for tps, qps and the 95th percentile. Could anyone
> with an all-NVMe cluster run the test and share the results? I
> would really appreciate the help :)
>
> Thanks in advance,
>
> Best,
>
>
> German
>
> 2017-11-29 19:14 GMT-03:00 Zoltan Arnold Nagy <[email protected]>:
>
> On 2017-11-27 14:02, German Anders wrote:
>
> 4x 2U servers:
> 1x 82599ES 10-Gigabit SFI/SFP+ Network Connection
> 1x Mellanox ConnectX-3 InfiniBand FDR 56Gb/s Adapter
> (dual port)
>
> so I assume you are using IPoIB as the cluster network for the
> replication...
>
> 1x OneConnect 10Gb NIC (quad-port) - in a bond configuration
> (active/active) with 3 vlans
>
> ... and the 10GbE network for the front-end network?
>
> At 4k writes your network latency will be very high (see the
> flame graphs at the Intel NVMe presentation from the Boston
> OpenStack Summit - not sure if there is a newer deck that
> somebody could link ;)) and the time will be spent in the
> kernel. You could give RDMAMessenger a try but it's not stable
> at the current LTS release.
>
> If I were you I'd be looking at 100GbE - we've recently pulled
> in a bunch of 100GbE links and it's been wonderful to see
> 100+GB/s going over the network for just storage.
>
> Some people suggested mounting multiple RBD volumes - but unless
> you're using very recent qemu/libvirt combinations with the
> proper libvirt disk settings, all IO will still be single-threaded
> toward librbd, so you won't see any speedup.
>
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
