Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
* Domingo Alvarez Duarte:

> After 12 hours inserting of:
>
> 934,135,285 records on bolsas_familia
> 22,711,259 records in favorecidos
> 5,570 records in municipios
> ...

Insertion will be faster if you create the index after populating the tables.

> time sqlite3 bolsa_familia3.db "vacuum;"
>
> real    147m6.252s
> user    10m53.790s
> sys     3m43.663s

You really need to increase the cache size if at all possible.

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
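The two suggestions above (a larger page cache, and building indexes after the bulk load rather than before) can be sketched as follows. This is a minimal illustration, not the poster's actual program; the table and column names are invented for the example:

```python
import sqlite3

# A sketch of the advice above: bulk-load with a larger page cache and
# create the index only after the data is in place, so it is written
# once, in sorted order, instead of being updated row by row.
conn = sqlite3.connect(":memory:")
# Negative cache_size means "size in KiB"; this asks for ~500 MB of cache.
conn.execute("PRAGMA cache_size = -512000;")
conn.execute("CREATE TABLE favorecidos (id INTEGER PRIMARY KEY, nome TEXT);")

with conn:  # one transaction for the whole batch
    conn.executemany(
        "INSERT INTO favorecidos (id, nome) VALUES (?, ?)",
        ((i, f"name-{i}") for i in range(10000)),
    )

# Build the index *after* the bulk insert.
conn.execute("CREATE INDEX idx_favorecidos_nome ON favorecidos(nome);")
print(conn.execute("SELECT count(*) FROM favorecidos").fetchone()[0])
```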
Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
The vacuum removes empty pages by rewriting the database from the ground up.

https://www.sqlite.org/lang_vacuum.html

"The VACUUM command works by copying the contents of the database into a temporary database file and then overwriting the original with the contents of the temporary file. When overwriting the original, a rollback journal or write-ahead log (WAL) file is used just as it would be for any other database transaction. This means that when VACUUMing a database, as much as twice the size of the original database file is required in free disk space."

I'm not aware whether or not the engine actually checks to see if there are free pages. I don't see anything in the documentation on that page saying it does a pre-check on empty pages before deciding to do the work of a vacuum. But the vacuum also defragments the database internally, and I doubt there is a check for fragmentation of any sort, so I'd lean towards this being a command that will do its deed regardless. Try writing out a large database (something that will take time to read and write, maybe a few hundred megabytes) and run two vacuums one right after the other, with no other transactions in between.

On Sat, Oct 1, 2016 at 10:24 PM, Howard Chu wrote:
> Domingo Alvarez Duarte wrote:
>> Hello Simon !
>>
>> I already did it without using "wal" and the result was the same.
>>
>> And even to my surprise, in one try I stopped in the middle, performed an
>> "analyze", and the performance deteriorated so much that I needed to
>> delete the stats tables to get back the better performance I had without "analyze".
>>
>> I also tried the lsm module and got a bit better performance, but with
>> irregular timing and bigger disk usage (20%).
>>
>> I also tested lmdb, with an astonishing insertion rate but with a lot more
>> disk usage and irregular timing.
>
> Using LMDB the VACUUM command is supposed to be a no-op; at least that's
> how I intended it.
> Since LMDB deletes records immediately instead of leaving tombstones,
> there is nothing to vacuum.
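The two-vacuums experiment suggested above can be sketched like this. It is a hypothetical test harness (file and table names invented): build a database, delete half the rows, then VACUUM twice in a row. The first VACUUM reclaims the free pages; the second one still rewrites the whole file even though there is nothing left to reclaim:

```python
import os
import sqlite3
import tempfile

# Sketch of the experiment above: VACUUM runs unconditionally, even on a
# freshly vacuumed database with no free pages.
path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x BLOB);")
with conn:
    conn.executemany(
        "INSERT INTO t VALUES (?)",
        ((b"\0" * 4096,) for _ in range(1000)),
    )
conn.execute("DELETE FROM t WHERE rowid % 2 = 0;")  # leave free pages behind
conn.commit()

before = os.path.getsize(path)
conn.execute("VACUUM;")
after_first = os.path.getsize(path)   # smaller: free pages reclaimed
conn.execute("VACUUM;")
after_second = os.path.getsize(path)  # unchanged, but the file was rewritten
print(before, after_first, after_second)
```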
Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
Domingo Alvarez Duarte wrote:
> Hello Simon !
>
> I already did it without using "wal" and the result was the same.
>
> And even to my surprise, in one try I stopped in the middle, performed an "analyze", and the performance deteriorated so much that I needed to delete the stats tables to get back the better performance I had without "analyze".
>
> I also tried the lsm module and got a bit better performance, but with irregular timing and bigger disk usage (20%).
>
> I also tested lmdb, with an astonishing insertion rate but with a lot more disk usage and irregular timing.

Using LMDB the VACUUM command is supposed to be a no-op; at least that's how I intended it. Since LMDB deletes records immediately instead of leaving tombstones, there is nothing to vacuum.

> I also tested leveldb, with worse performance and almost twice the disk space usage.
>
> The data distribution in some tables seems to fall into the worst corner cases for btrees.
>
> Cheers !
>
> On 01/10/16 18:26, Simon Slavin wrote:
>> On 1 Oct 2016, at 10:18pm, Domingo Alvarez Duarte wrote:
>>> About the vacuum, I also understand the need to rewrite the whole database, but I'm not sure it's really necessary to do almost 5 times the database size in both reads and writes (an equivalent amount of I/O also happened during the insertions).
>> Can you try it without
>>
>>     db.exec_dml("PRAGMA wal_checkpoint(FULL);");
>>
>> and see if that improves the time ? That's the only thing I can see. You're using a nested INSERT OR IGNORE command I'm not familiar with.
>>
>> Simon.

--
Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
What was the size of the original database?

To VACUUM a database, the process is:

1) Read the logical components of the database and write them to a new file. This generates at least 1x reads (original size) and 1x writes (final size). In most cases both the reads and the writes will be larger, because some blocks are touched more than once. This is especially true if there are large indexes.

2) The new file is then copied back to the original file, one block at a time. This requires 1x reads (final) and 1x writes (final).

3) Except, to make the write-back ACID safe, each block of the original database needs to be copied during the write-back process, which requires another 1x reads (final) and 1x writes (final) in journal mode. WAL numbers are similar, if not higher.

In other words, in the ideal case you're going to have a bare minimum of 3x final writes and 2x final + 1x original reads. But even on a freshly VACUUMed database you'll never see ideal numbers, especially if there are indexes: indexes are rebuilt by insertion, so if the source table data is not really in order, that can require a lot of data shuffling (i.e. extra reads/writes). On a similar note, SQLite typically requires about ~2x the final size in free storage space to complete a VACUUM.

There are a number of ways to improve this. Most of the I/O is in the write-back process, which is required for ACID-proof VACUUM transactions. In 2010 I proposed a "VACUUM TO" command that would VACUUM one database file to a new database file, essentially making a copy. This would only require 1x original reads and ~1x+ final writes, and only 1x final free space. The disadvantage is that you end up with a new file, which would require closing all connections (including those in other applications) and re-opening them. SQLite also does not trust OS filesystem commands (such as renaming a new file over an old one) to operate in any transaction/rollback-safe way, so it avoids them.
There seem to be a number of situations where that's an acceptable alternative, however. See:

http://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg87972.html
http://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg50941.html

-j, author of "Using SQLite", O'Reilly Media

On Oct 1, 2016, at 3:27 PM, Domingo Alvarez Duarte wrote:
> Hello !
>
> I'm using sqlite (trunk) for a database (see below) and for a final database file of 22GB a "vacuum" was executed, and doing so it made a lot of I/O (134GB reads and 117GB writes in 2h:30min).
>
> Can something be improved in sqlite to achieve better performance ?
>
> The data is publicly available, just in case it can be useful to perform tests.
>
> Cheers !

--
Jay A. Kreibich < J A Y @ K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it, but showing it to the wrong people has the tendency to make them feel uncomfortable." -- Angela Johnson
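As an aside: the "VACUUM TO" idea described above was eventually added to SQLite itself (version 3.27, released in 2019) as VACUUM INTO, which vacuums the live database into a fresh file with roughly 1x reads and 1x writes and no write-back. A minimal sketch, assuming a Python build linked against SQLite 3.27 or later (file and table names invented):

```python
import os
import sqlite3
import tempfile

# VACUUM INTO writes a compacted copy to a new file; the source database
# stays open and untouched, avoiding the expensive write-back step.
d = tempfile.mkdtemp()
src = os.path.join(d, "src.db")
dst = os.path.join(d, "copy.db")

conn = sqlite3.connect(src)
with conn:
    conn.execute("CREATE TABLE t (x TEXT);")
    conn.executemany("INSERT INTO t VALUES (?)", (("row",) for _ in range(1000)))

conn.execute(f"VACUUM INTO '{dst}';")  # requires SQLite >= 3.27

copy = sqlite3.connect(dst)
print(copy.execute("SELECT count(*) FROM t").fetchone()[0])
```

Like the proposed "VACUUM TO", the trade-off is that the result is a new file, so applications must switch over to it themselves.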
Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
Hello Bob ! I'm using the default sqlite page size, but I also did a try with a 32KB page size and got a slightly smaller overall database size but no visible performance gain in terms of time and I/O. Also the memory usage skyrocketed, forcing memory swap.

The OS was OS X Yosemite. I also posted before a small program with a sample of only the problematic data, which ends with a database of around 340MB and the same poor performance.

Cheers !

On 01/10/16 19:34, Bob Friesenhahn wrote:
> On Sat, 1 Oct 2016, Domingo Alvarez Duarte wrote:
>> Hello !
>>
>> I'm using sqlite (trunk) for a database (see below) and for a final database file of 22GB a "vacuum" was executed, and doing so it made a lot of I/O (134GB reads and 117GB writes in 2h:30min).
>
> What means are you using to evaluate the total amount of I/O? At what level (e.g. OS system call, individual disk I/O) are you measuring the I/O?
>
> If the problem is more physical disk I/O than expected, then is it possible that the underlying filesystem blocksize does not match the blocksize that SQLite is using? You may have an issue with write amplification at the filesystem level.
>
> Bob
Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
On Sat, 1 Oct 2016, Domingo Alvarez Duarte wrote:
> Hello !
>
> I'm using sqlite (trunk) for a database (see below) and for a final database file of 22GB a "vacuum" was executed, and doing so it made a lot of I/O (134GB reads and 117GB writes in 2h:30min).

What means are you using to evaluate the total amount of I/O? At what level (e.g. OS system call, individual disk I/O) are you measuring the I/O?

If the problem is more physical disk I/O than expected, then is it possible that the underlying filesystem blocksize does not match the blocksize that SQLite is using? You may have an issue with write amplification at the filesystem level.

Bob

--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
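The blocksize question above can be checked from SQL: SQLite's page size is settable only before the database file gets its first content (or via a subsequent VACUUM), and matching it to the filesystem's block size avoids read-modify-write amplification. A minimal sketch:

```python
import sqlite3

# Check and set SQLite's page size. The PRAGMA only takes effect before
# the first write to the database; after that, a VACUUM is needed to
# apply a new page size.
conn = sqlite3.connect(":memory:")
default = conn.execute("PRAGMA page_size;").fetchone()[0]

conn.execute("PRAGMA page_size = 8192;")  # e.g. to match an 8K filesystem block
conn.execute("CREATE TABLE t (x);")       # first write fixes the page size
size = conn.execute("PRAGMA page_size;").fetchone()[0]
print(default, size)
```

Comparing this value against the filesystem's block size (e.g. from `stat -f` on macOS) is one way to test Bob's write-amplification hypothesis.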
Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
Hello Keith ! You have probably seen in the attached code that I also tried that once, and even mmap, but it didn't give any visible improvement.

Also, due to the data distribution, 80% of the data inserts took 20% of the total time and the other 20% of the inserts took 80% of the total time.

The final database has poor overall performance compared to what I'm used to when using sqlite for small databases. I also tried mysql and postgresql, but their performance for such a simple database is terrible.

Cheers !

    db.exec_dml("PRAGMA synchronous = 0;");
    db.exec_dml("PRAGMA journal_mode = WAL");
    //db.exec_dml("PRAGMA journal_mode = MEMORY;");
    //db.exec_dml("PRAGMA journal_mode = OFF;");
    //db.exec_dml("PRAGMA locking_mode = EXCLUSIVE;");
    db.exec_dml("PRAGMA temp_store = MEMORY;");
    //db.exec_dml("PRAGMA threads = 4;");
    //db.exec_dml("PRAGMA mmap_size = 6400;");
    auto gigabyte = 1024*1024*1024;
    db.exec_dml("PRAGMA mmap_size=" + (gigabyte*16));
    //print("mmap_size", db.exec_get_one("PRAGMA mmap_size;"));
    //db.exec_dml("PRAGMA cache_size = -64000");
    //print("cache_size", db.exec_get_one("PRAGMA cache_size;"));

On 01/10/16 19:21, Keith Medcalf wrote:
> Did you change the cache size? The default is rather small for a database of 22 GB.
>
>> -----Original Message-----
>> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
>> On Behalf Of Domingo Alvarez Duarte
>> Sent: Saturday, 1 October, 2016 15:19
>> To: SQLite mailing list
>> Subject: Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
>>
>> Hello Simon !
>>
>> Thanks for the reply !
>>
>> I already know your suggestions, and if you look at the database schema and the program used to insert data you can see that there are no unnecessary indices active and everything is inside transactions.
>>
>> About the vacuum, I also understand the need to rewrite the whole database, but I'm not sure it's really necessary to do almost 5 times the database size in both reads and writes (an equivalent amount of I/O also happened during the insertions).
>>
>> Cheers !
>>
>> On 01/10/16 18:12, Simon Slavin wrote:
>>> On 1 Oct 2016, at 9:27pm, Domingo Alvarez Duarte wrote:
>>>> I'm using sqlite (trunk) for a database (see below) and for a final database file of 22GB a "vacuum" was executed, and doing so it made a lot of I/O (134GB reads and 117GB writes in 2h:30min).
>>>>
>>>> Can something be improved in sqlite to achieve better performance ?
>>> VACUUM rewrites the entire database. It will always do a lot of I/O. You should never need to use VACUUM in a production setting. Perhaps in a once-a-year maintenance utility, but not in normal use.
>>>
>>> The fastest way to do lots of insertion is:
>>>
>>> DROP all INDEXes
>>> DELETE FROM all TABLEs
>>> Do your insertions, bundling up each thousand (ten thousand ? depends on your system) uses of INSERT in a transaction
>>> if you really want to do VACUUM, do it here
>>> reCREATE all your INDEXes
>>> ANALYZE
>>>
>>> (The ANALYZE will also do lots of I/O, not as much as VACUUM, but it may speed up all WHERE / ORDER BY clauses.)
>>>
>>> Simon.
Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
Did you change the cache size? The default is rather small for a database of 22 GB.

> -----Original Message-----
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Domingo Alvarez Duarte
> Sent: Saturday, 1 October, 2016 15:19
> To: SQLite mailing list
> Subject: Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
>
> Hello Simon !
>
> Thanks for reply !
>
> I already know your suggestions and if you look at the database schema
> and the program used to insert data you can see that there is no
> unnecessary indices active and all inside transactions.
>
> About the vacuum I also understand the need to rewrite the whole
> database but I'm not sure if it's really necessary to do almost 5 times
> the database size in both reads and writes (also an equivalent amount of
> I/O happened during insertions).
>
> Cheers !
>
> On 01/10/16 18:12, Simon Slavin wrote:
> > On 1 Oct 2016, at 9:27pm, Domingo Alvarez Duarte wrote:
> >
> >> I'm using sqlite (trunk) for a database (see below) and for a final
> >> database file of 22GB a "vacuum" was executed and doing so it made a lot
> >> of I/O ( 134GB reads and 117GB writes in 2h:30min).
> >>
> >> Can something be improved on sqlite to achieve a better performance ?
> > VACUUM rewrites the entire database. It will always do a lot of IO.
> > You should never need to use VACUUM in a production setting. Perhaps in a
> > once-a-year maintenance utility but not in normal use.
> >
> > The fastest way to do lots of insertion is
> >
> > DROP all INDEXes
> > DELETE FROM all TABLEs
> > Do your insertions, bundling up each thousand (ten thousand ?
> > depends on your system) uses of INSERT in a transaction
> > if you really want to do VACUUM, do it here
> > reCREATE all your INDEXes
> > ANALYZE
> >
> > (the ANALYZE will also do lots of IO, not as much as VACUUM, but it may
> > speed up all WHERE / ORDER BY clauses).
> >
> > Simon.
Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
Hello Simon ! I already did it without using "wal" and the result was the same.

And even to my surprise, in one try I stopped in the middle and performed an "analyze", and the performance deteriorated so much that I needed to delete the stats tables to get back the better performance I had without "analyze".

I also tried the lsm module and got a bit better performance, but with irregular timing and bigger disk usage (20%).

I also tested lmdb, with an astonishing insertion rate but with a lot more disk usage and irregular timing.

I also tested leveldb, with worse performance and almost twice the disk space usage.

The data distribution in some tables seems to fall into the worst corner cases for btrees.

Cheers !

On 01/10/16 18:26, Simon Slavin wrote:
> On 1 Oct 2016, at 10:18pm, Domingo Alvarez Duarte wrote:
>> About the vacuum, I also understand the need to rewrite the whole database, but I'm not sure it's really necessary to do almost 5 times the database size in both reads and writes (an equivalent amount of I/O also happened during the insertions).
>
> Can you try it without
>
>     db.exec_dml("PRAGMA wal_checkpoint(FULL);");
>
> and see if that improves the time ? That's the only thing I can see. You're using a nested INSERT OR IGNORE command I'm not familiar with.
>
> Simon.
Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
On 1 Oct 2016, at 10:18pm, Domingo Alvarez Duarte wrote:

> About the vacuum I also understand the need to rewrite the whole database but
> I'm not sure if it's really necessary to do almost 5 times the database size
> in both reads and writes (also an equivalent amount of I/O happened during
> insertions).

Can you try it without

    db.exec_dml("PRAGMA wal_checkpoint(FULL);");

and see if that improves time ? That's the only thing I can see. You're using a nested INSERT OR IGNORE command I'm not familiar with.

Simon.
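For context on the pragma Simon asks about: in WAL mode, `wal_checkpoint(FULL)` blocks until the entire WAL has been copied back into the main database file, so calling it after every batch adds extra write traffic on top of the inserts. A minimal sketch (file and table names invented):

```python
import os
import sqlite3
import tempfile

# wal_checkpoint(FULL) forces the whole write-ahead log back into the
# main database file and returns (busy, log_frames, checkpointed_frames).
path = os.path.join(tempfile.mkdtemp(), "t.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode = WAL;")
with conn:
    conn.execute("CREATE TABLE t (x);")
    conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])

busy, log, ckpt = conn.execute("PRAGMA wal_checkpoint(FULL);").fetchone()
print(busy, log, ckpt)  # busy == 0 means the checkpoint ran to completion
```

Skipping the per-batch FULL checkpoint and letting SQLite's automatic checkpointing run instead is the experiment Simon is proposing.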
Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
Hello Simon ! Thanks for the reply !

I already know your suggestions, and if you look at the database schema and the program used to insert data you can see that there are no unnecessary indices active and everything is inside transactions.

About the vacuum, I also understand the need to rewrite the whole database, but I'm not sure it's really necessary to do almost 5 times the database size in both reads and writes (an equivalent amount of I/O also happened during the insertions).

Cheers !

On 01/10/16 18:12, Simon Slavin wrote:
> On 1 Oct 2016, at 9:27pm, Domingo Alvarez Duarte wrote:
>> I'm using sqlite (trunk) for a database (see below) and for a final database file of 22GB a "vacuum" was executed, and doing so it made a lot of I/O (134GB reads and 117GB writes in 2h:30min).
>>
>> Can something be improved in sqlite to achieve better performance ?
>
> VACUUM rewrites the entire database. It will always do a lot of IO. You should never need to use VACUUM in a production setting. Perhaps in a once-a-year maintenance utility but not in normal use.
>
> The fastest way to do lots of insertion is:
>
> DROP all INDEXes
> DELETE FROM all TABLEs
> Do your insertions, bundling up each thousand (ten thousand ? depends on your system) uses of INSERT in a transaction
> if you really want to do VACUUM, do it here
> reCREATE all your INDEXes
> ANALYZE
>
> (The ANALYZE will also do lots of IO, not as much as VACUUM, but it may speed up all WHERE / ORDER BY clauses.)
>
> Simon.
Re: [sqlite] Why so much I/O ? Can sqlite be improved ?
On 1 Oct 2016, at 9:27pm, Domingo Alvarez Duarte wrote:

> I'm using sqlite (trunk) for a database (see bellow) and for a final database
> file of 22GB a "vacuum" was executed and doing so it made a lot of I/O (
> 134GB reads and 117GB writes in 2h:30min).
>
> Can something be improved on sqlite to achieve a better performance ?

VACUUM rewrites the entire database. It will always do a lot of IO. You should never need to use VACUUM in a production setting. Perhaps in a once-a-year maintenance utility but not in normal use.

The fastest way to do lots of insertion is:

DROP all INDEXes
DELETE FROM all TABLEs
Do your insertions, bundling up each thousand (ten thousand ? depends on your system) uses of INSERT in a transaction
if you really want to do VACUUM, do it here
reCREATE all your INDEXes
ANALYZE

(the ANALYZE will also do lots of IO, not as much as VACUUM, but it may speed up all WHERE / ORDER BY clauses).

Simon.
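The recipe above can be sketched as a short program. This is a minimal illustration of the steps, with invented table and index names, not the poster's actual loader:

```python
import sqlite3

# 1) drop indexes, 2) clear tables, 3) insert in batched transactions,
# 4) recreate indexes, 5) ANALYZE -- per the recipe above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bolsas (id INTEGER PRIMARY KEY, valor REAL);")
conn.execute("CREATE INDEX idx_valor ON bolsas(valor);")

conn.execute("DROP INDEX idx_valor;")   # (1)
conn.execute("DELETE FROM bolsas;")     # (2)

BATCH = 10000
batch = []
for i in range(100000):                 # (3) 10k inserts per transaction
    batch.append((i, i * 0.5))
    if len(batch) == BATCH:
        with conn:
            conn.executemany("INSERT INTO bolsas VALUES (?, ?)", batch)
        batch.clear()
if batch:
    with conn:
        conn.executemany("INSERT INTO bolsas VALUES (?, ?)", batch)

conn.execute("CREATE INDEX idx_valor ON bolsas(valor);")  # (4)
conn.execute("ANALYZE;")                                  # (5)
print(conn.execute("SELECT count(*) FROM bolsas").fetchone()[0])
```

The ideal batch size depends on the system, as Simon notes; the point is that each transaction amortizes its commit cost over many inserts.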
[sqlite] Why so much I/O ? Can sqlite be improved ?
Hello ! I'm using sqlite (trunk) for a database (see below) and for a final database file of 22GB a "vacuum" was executed, and doing so it made a lot of I/O (134GB reads and 117GB writes in 2h:30min).

Can something be improved in sqlite to achieve better performance ?

The data is publicly available, just in case it can be useful to perform tests.

Cheers !

After 12 hours of inserting:

934,135,285 records in bolsas_familia
22,711,259 records in favorecidos
5,570 records in municipios
...

All that on a mac-mini with an i5 cpu and 4GB memory: 1GB read, 26MB written before the vacuum.

time sqlite3 bolsa_familia3.db "vacuum;"

real    147m6.252s
user    10m53.790s
sys     3m43.663s

ls -l bolsa_familia3.db
-rw-r--r--  1 staff  22772744192 Oct  1 14:58 bolsa_familia3.db

MemRegions: 5656 total, 74M resident, 8904K private, 43M shared.
PhysMem: 650M used (471M wired), 3444M unused.
VM: 167G vsize, 1063M framework vsize, 18421751(0) swapins, 19671240(0) swapouts.
Disks: 414062/135G read, 369485/118G written.

time sqlite3_analyzer bolsa_familia3.db > bolsa_familia3.db.analyze.txt

real    5m7.607s
user    2m48.184s
sys     0m56.512s

filefrag bolsa_familia3.db
bolsa_familia3.db: 29 extents found

===
year_month|records_inserted|start_time|end_time|minutes_spent
201101|12851338|2016-09-30 20:55:26|2016-09-30 20:59:50|4.4
201102|12946306|2016-09-30 20:59:51|2016-09-30 21:03:26|3.58
201103|12944677|2016-09-30 21:03:26|2016-09-30 21:06:55|3.48
201104|13058478|2016-09-30 21:06:55|2016-09-30 21:10:52|3.95
201105|12986870|2016-09-30 21:10:53|2016-09-30 21:14:49|3.93
201106|12999562|2016-09-30 21:14:49|2016-09-30 21:18:30|3.68
201107|12952040|2016-09-30 21:18:33|2016-09-30 21:22:26|3.88
201108|12805039|2016-09-30 21:22:29|2016-09-30 21:26:15|3.77
201109|13179472|2016-09-30 21:26:16|2016-09-30 21:30:15|3.98
201110|13171810|2016-09-30 21:30:15|2016-09-30 21:34:34|4.32
201111|13306920|2016-09-30 21:34:48|2016-09-30 21:40:26|5.63
201112|13352307|2016-09-30 21:40:36|2016-09-30 21:45:06|4.5
201201|13330714|2016-09-30 21:45:11|2016-09-30 21:58:05|12.9
201202|13407291|2016-09-30 21:58:06|2016-09-30 22:03:35|5.48
201203|13394893|2016-09-30 22:03:47|2016-09-30 22:08:52|5.08
201204|13462104|2016-09-30 22:09:05|2016-09-30 22:14:53|5.8
201205|13530036|2016-09-30 22:15:05|2016-09-30 22:25:13|10.13
201206|13462659|2016-09-30 22:25:38|2016-09-30 22:29:24|3.77
201207|13524123|2016-09-30 22:29:32|2016-09-30 22:35:55|6.38
201208|13770339|2016-09-30 22:36:11|2016-09-30 22:42:23|6.2
201209|13724590|2016-09-30 22:42:38|2016-09-30 22:46:39|4.02
201210|13758254|2016-09-30 22:46:54|2016-09-30 22:51:21|4.45
201211|13834007|2016-09-30 22:51:32|2016-09-30 22:56:25|4.88
201212|13672501|2016-09-30 22:56:35|2016-09-30 23:00:56|4.35
201301|13874422|2016-09-30 23:01:11|2016-09-30 23:05:21|4.17
201302|13602566|2016-09-30 23:05:21|2016-09-30 23:09:54|4.55
201303|13942944|2016-09-30 23:09:58|2016-09-30 23:18:32|8.57
201304|13722930|2016-09-30 23:18:56|2016-09-30 23:28:06|9.17
201305|13837042|2016-09-30 23:28:27|2016-09-30 23:45:38|17.18
201306|13717464|2016-09-30 23:46:00|2016-09-30 23:53:20|7.33
201307|13887105|2016-09-30 23:53:25|2016-10-01 00:00:37|7.2
201308|13893436|2016-10-01 00:00:55|2016-10-01 00:07:39|6.73
201309|13978918|2016-10-01 00:08:03|2016-10-01 00:13:59|5.93
201310|13964596|2016-10-01 00:14:07|2016-10-01 00:17:58|3.85
201311|13966149|2016-10-01 00:18:18|2016-10-01 00:24:28|6.17
201312|14211619|2016-10-01 00:24:47|2016-10-01 00:33:52|9.08
201401|14164022|2016-10-01 00:34:12|2016-10-01 00:44:48|10.6
201402|14228956|2016-10-01 00:44:49|2016-10-01 00:52:26|7.62
201403|14160545|2016-10-01 00:52:32|2016-10-01 01:05:17|12.75
201404|14270028|2016-10-01 01:05:40|2016-10-01 01:14:33|8.88
201405|14042255|2016-10-01 01:14:58|2016-10-01 01:21:38|6.67
201406|14134906|2016-10-01 01:22:04|2016-10-01 01:29:19|7.25
201407|14389582|2016-10-01 01:29:44|2016-10-01 01:45:08|15.4
201408|14131123|2016-10-01 01:45:32|2016-10-01 01:51:07|5.58
201409|14143630|2016-10-01 01:51:09|2016-10-01 01:55:50|4.68
201410|14076919|2016-10-01 01:56:04|2016-10-01 02:00:57|4.88
201411|14109947|2016-10-01 02:00:59|2016-10-01 02:07:17|6.3
201412|14054243|2016-10-01 02:07:29|2016-10-01 02:12:55|5.43
201501|14026988|2016-10-01 02:13:05|2016-10-01 02:17:41|4.6
201502|14042558|2016-10-01 02:17:42|2016-10-01 02:22:26|4.73
201503|14004026|2016-10-01 02:22:40|2016-10-01 02:27:15|4.58
201504|13787678|2016-10-01 02:27:20|2016-10-01 02:35:33|8.22
201505|13779988|2016-10-01 02:35:51|2016-10-01 02:40:51|5.0
201506|13753665|2016-10-01 02:40:58|2016-10-01 02:46:40|5.7
201507|13861879|2016-10-01 02:46:53|2016-10-01 02:53:12|6.32
201508|13823829|2016-10-01 02:53:35|2016-10-01 02:58:28|4.88
201509|13912767|2016-10-01 02:58:55|2016-10-01 03:11:22|12.45
201510|14002752|2016-10-01 03:11:49|2016-10-01 03:27:47|15.97
201511|13815096|2016-10-01 03:28:04|2016-10-01 03:37:37|9.55
201512|13980491|2016-10-01 03:38:03|2016-10-01 03:47:26|9.38
201601|14020581|2016-10-01 03:47:41|2016-10-01 04:37:41