This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 4584b5e  SQL support for time-ordered scan (#7373)
4584b5e is described below

commit 4584b5e13903c3ffb76865690231d882f70a6c4b
Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
AuthorDate: Tue Apr 2 15:46:01 2019 -0700

    SQL support for time-ordered scan (#7373)
    
    * Squashed commit of the following:
    
    commit 287a367f4170e7d0b3010d57788ea993688b9335
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Mar 27 20:03:41 2019 -0700
    
        Implemented Clint's recommendations
    
    commit 07503ea5c00892bf904c0e16e7062fadabcb7830
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Mar 27 17:49:09 2019 -0700
    
        doc fix
    
    commit 231a72e7d9c0f4bb2b3272134cf53fc8db8f0e73
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Mar 27 17:38:20 2019 -0700
    
        Modified sequence limit to accept longs and added test for long limits
    
    commit 1df50de32137961d949c1aaa4e4791f6edfb3d77
    Merge: 480e932fd c7fea6ac8
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Mar 26 15:23:01 2019 -0700
    
        Merge branch 'master' into 6088-Time-Ordering-On-Scans-N-Way-Merge
    
    commit 480e932fdf02ef85ba81181deb865d9977dfed24
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Mar 26 14:58:04 2019 -0700
    
        Checkstyle and doc update
    
    commit 487f31fcf63a5e1fa9e802212b62206aec47fe25
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Mar 26 14:39:25 2019 -0700
    
        Refixed regression
    
    commit fb858efbb75218bb80b8c77effb2456554aa57b2
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Mar 26 13:14:48 2019 -0700
    
        Added test for n-way merge
    
    commit 376e8bf90610d43d2c7b278bf64525cab80267c5
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Mar 26 11:42:54 2019 -0700
    
        Refactor n-way merge
    
    commit 8a6bb1127c1814470424da2e9d6bfdd55e726199
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Mar 25 17:17:41 2019 -0700
    
        Fix docs and flipped boolean in ScanQueryLimitRowIterator
    
    commit 35692680fc7aba21c498a92307ef082a581cb23a
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Mar 25 16:15:49 2019 -0700
    
        Fix bug messing up count of rows
    
    commit 219af478c8ec243700973e616c5be556a83422e2
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Mar 25 15:57:55 2019 -0700
    
        Fix bug in numRowsScanned
    
    commit da4fc664031debae1dc3b4a0190125e979564aac
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Mar 25 15:19:45 2019 -0700
    
        Check type of segment spec before using for time ordering
    
    commit b822fc73dfba7f69c7e960bb95b31cab8d27ef25
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Mar 25 13:19:02 2019 -0700
    
        Revert "Merge branch '6088-Time-Ordering-On-Scans-N-Way-Merge' of github.com:justinborromeo/incubator-druid into 6088-Time-Ordering-On-Scans-N-Way-Merge"
    
        This reverts commit 57033f36df6e3ef887e5f0399ad74bb091306de8, reversing
        changes made to 8f01d8dd16f40d10c60519ca0ec0d2e6b2dde941.
    
    commit 57033f36df6e3ef887e5f0399ad74bb091306de8
    Merge: 8f01d8dd1 86d9730fc
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Mar 25 13:13:52 2019 -0700
    
        Merge branch '6088-Time-Ordering-On-Scans-N-Way-Merge' of github.com:justinborromeo/incubator-druid into 6088-Time-Ordering-On-Scans-N-Way-Merge
    
    commit 8f01d8dd16f40d10c60519ca0ec0d2e6b2dde941
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Mar 25 13:13:32 2019 -0700
    
        Revert "Fixed failing tests -> allow usage of all types of segment spec"
    
        This reverts commit ec470288c7b725f5310bcf69d1db9f85ff509c8d.
    
    commit ec470288c7b725f5310bcf69d1db9f85ff509c8d
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Mar 25 11:01:35 2019 -0700
    
        Fixed failing tests -> allow usage of all types of segment spec
    
    commit 86d9730fc9f241b3010b123a45b1fc38a206a9af
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Mar 25 11:01:35 2019 -0700
    
        Fixed failing tests -> allow usage of all types of segment spec
    
    commit 8b3b6b51ed0d3bc3c937620d5b92096998e32080
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Mar 22 16:01:56 2019 -0700
    
        Nit comment
    
    commit a87d02127c72aa5e307af94b12b6be25150349be
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Mar 22 15:54:42 2019 -0700
    
        Fix checkstyle and test
    
    commit 62dcedacdeeed570134e8b5185633b207e91a547
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Mar 22 15:30:41 2019 -0700
    
        More comments
    
    commit 1b46b58aeccf13adc516a1d94054a98efc32184c
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Mar 22 15:19:52 2019 -0700
    
        Added a bit of docs
    
    commit 49472162b7fc0879159866c3736e192fc88837a4
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Mar 22 10:27:41 2019 -0700
    
        Rename segment limit -> segment partitions limit
    
    commit 43d490cc3ae697d0a61159ed6ae06906006cdf31
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Thu Mar 21 13:16:58 2019 -0700
    
        Optimized n-way merge strategy
    
    commit 42f5246b8d0c1879c2dc45334966bf52f543ea74
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Mar 20 17:40:19 2019 -0700
    
        Smarter limiting for pQueue method
    
    commit 4823dab895770a87356fe2ae4e9858bb4ba03fc3
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Mar 20 16:05:53 2019 -0700
    
        Finish rename
    
    commit 2528a5614267c48714abdb30fd7a2ccdb61b802d
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Mar 18 14:00:50 2019 -0700
    
        Renaming
    
    commit 7bfa77d3c177be42d0db0b0bc3c19f9ef536ffeb
    Merge: a032c46ee 7e49d4739
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Mar 12 16:57:45 2019 -0700
    
        Merge branch 'Update-Query-Interrupted-Exception' into 6088-Time-Ordering-On-Scans-N-Way-Merge
    
    commit 7e49d47391d17b411b0620794e503592d8f37481
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Mar 12 16:51:25 2019 -0700
    
        Added error message for UOE
    
    commit a032c46ee09cd80b78f25d0da51f5179774aa75f
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Mar 12 16:47:17 2019 -0700
    
        Updated error message
    
    commit 57b568265488066c046f225c1982dba85e8a64ba
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Mar 12 12:44:02 2019 -0700
    
        Fixed tests
    
    commit 45e95bb1f40d50ab3a0a745d2b5fca34c3f53a82
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Mar 12 11:09:08 2019 -0700
    
        Optimization
    
    commit cce917ab846198706ec8177a91869f9aa43e0525
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Mar 8 14:11:07 2019 -0800
    
        Checkstyle fix
    
    commit 73f4038068f2e30cb3487cc175730f3b97c5c8d2
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Thu Mar 7 18:40:00 2019 -0800
    
        Applied Jon's recommended changes
    
    commit fb966def8335e6808f0fe5d2d6a122dcd28f2355
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Thu Mar 7 11:03:01 2019 -0800
    
        Sorry, checkstyle
    
    commit 6dc53b311c568e29a6937fdd5f17a5623d14533f
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Mar 6 10:34:13 2019 -0800
    
        Improved test and appeased TeamCity
    
    commit 35c96d355726cf5d238435655fecbfe19ea8ddb6
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Mar 4 16:00:44 2019 -0800
    
        Checkstyle fix
    
    commit 2d1978d5713187561a534c08eba51e383df66ce7
    Merge: 83ec3fe1f 3398d3982
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Mar 4 15:24:49 2019 -0800
    
        Merge branch 'master' into 6088-Time-Ordering-On-Scans-N-Way-Merge
    
    commit 83ec3fe1f13c384aca52ceef0ba03b300b03d8d9
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Mar 1 13:40:22 2019 -0800
    
        Nit-change on javadoc
    
    commit 47c970b5f476e5bfe5e03aa798f314f59aeb67db
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Mar 1 13:38:29 2019 -0800
    
        Wrote tests and added Javadoc
    
    commit 5ff59f5ca6c8058c04e500662b3691a4910aa842
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Thu Feb 28 15:58:20 2019 -0800
    
        Reset config
    
    commit 806166f9777cccae5e10eabbb256c5e33b0e13f7
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Thu Feb 28 15:49:07 2019 -0800
    
        Fixed failing tests
    
    commit de83b11a1bb24a0ae964240d9cb1ed17ea4a6c26
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Feb 26 16:40:48 2019 -0800
    
        Fixed mistakes in merge
    
    commit 5bd0e1a32cec1a0e4dadd74dd530d13341ab7349
    Merge: 18cce9a64 9fa649b3b
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Feb 26 16:39:16 2019 -0800
    
        Merge branch 'master' into 6088-Time-Ordering-On-Scans-N-Way-Merge
    
    commit 18cce9a646139a57004ef4eccef8077c9775e992
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Feb 26 13:16:44 2019 -0800
    
        Change so batching only occurs on broker for time-ordered scans
    
        Restricted batching to broker for time-ordered queries and adjusted
        tests
    
        Formatting
    
        Cleanup
    
    commit 451e2b43652020d6acb8b8db113fb34db0f50517
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Feb 26 11:14:27 2019 -0800
    
        WIP
    
    commit 69b24bd851d721592324bcbeec1c4229ad9ff462
    Merge: 763c43df7 417b9f2fe
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 22 18:13:26 2019 -0800
    
        Merge branch 'master' into 6088-Time-Ordering-On-Scans-N-Way-Merge
    
    commit 763c43df7e99d4ab000f038a7c1b9ef98b479138
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 22 18:07:06 2019 -0800
    
        Multi-historical setup works
    
    commit 06a5218917bca0716b98c32c07415a7271711431
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 22 16:59:57 2019 -0800
    
        Wrote docs
    
    commit 3b923dac9cc82475795ee2f7691e6f96249560aa
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 22 14:03:22 2019 -0800
    
        Fixed bug introduced by replacing deque with list
    
    commit 023538d83117086647c69d5030f2e8cb3e039558
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 22 13:30:08 2019 -0800
    
        Sequence stuff is so dirty :(
    
    commit e1fc2955d361676eb6721ad31defd96d47fab999
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 22 10:39:59 2019 -0800
    
        WIP
    
    commit f57ff253fa659cbb5aa09b7c9bf03d8e7670b865
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Thu Feb 21 18:22:06 2019 -0800
    
        Ordering is correct on n-way merge -> still need to batch events into
        ScanResultValues
    
    commit 1813a5472c791509ba903f734b40b6102079876a
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Thu Feb 21 17:06:18 2019 -0800
    
        Cleanup
    
    commit f83e99655d11247f44018e0e5d36bd6eac1fb2a6
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Thu Feb 21 16:56:36 2019 -0800
    
        Refactor and pQueue works
    
    commit b13ff624a92a7e740eb1f74aa40c7a72165b9708
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Thu Feb 21 15:13:33 2019 -0800
    
        Set up time ordering strategy decision tree
    
    commit fba6b022f0395cc297e3b3726f817c986f97010b
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Thu Feb 21 15:08:27 2019 -0800
    
        Added config and get # of segments
    
    commit c9142e721c7ed824a54de7a160230cd959bb906d
    Merge: cd489a020 554b0142c
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 20 10:12:50 2019 -0800
    
        Merge branch 'master' into 6088-Time-Ordering-On-Scans-V2
    
    commit cd489a0208b0cfc475a34440caa1c1e99d22a281
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 20 00:16:48 2019 -0800
    
        Fixed failing test due to null resultFormat
    
    commit 7baeade8328776244e72a3cb5f2efb59111cf58b
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Feb 19 17:52:06 2019 -0800
    
        Changes based on Gian's comments
    
    commit 35150fe1a63c5143f564c4435461929e619a0de2
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 15 15:57:53 2019 -0800
    
        Small changes
    
    commit 4e69276d57de4a9042b927efa5a864411aedacb4
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 11 12:09:54 2019 -0800
    
        Removed unused import to satisfy PMD check
    
    commit ecb0f483a9525ffc2844cb01a0daafe6bc4d2161
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 11 10:37:11 2019 -0800
    
        improved doc
    
    commit f0eddee66598095a767a1570516c5af59e58e2f6
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 11 10:18:45 2019 -0800
    
        Added more javadoc
    
    commit 5f92dd7325aeff0b2e3f87003263e083ba2b427d
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 11 10:05:58 2019 -0800
    
        Unused import
    
    commit 93e1636287f45d38c80f275e4644c0b3222c65e7
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 11 10:03:14 2019 -0800
    
        Added javadoc on ScanResultValueTimestampComparator
    
    commit 134041c47965a8a199862ca33ef2119e29f67287
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 8 13:13:54 2019 -0800
    
        Renamed sort function
    
    commit 2e3577cd3d7b43e140d36aad944536f49287fbfa
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Thu Feb 7 13:01:25 2019 -0800
    
        Fixed benchmark queries
    
    commit d3b335af42602a771063bd8a63c89acf5c715938
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Thu Feb 7 11:08:07 2019 -0800
    
        added all query types to scan benchmark
    
    commit ab00eade9f0b8e8642da40905214653c04cba4d4
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Thu Feb 7 09:42:48 2019 -0800
    
        Kicking travis with change to benchmark param
    
    commit b432beaf84de5b363454fd8058ff653a097c713d
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 17:45:59 2019 -0800
    
        Fixed failing calcite tests
    
    commit b2c8c77ad4ee5a9a273ee7a7870fb2d9b0ec9dd4
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 17:39:48 2019 -0800
    
        Fixing tests WIP
    
    commit 85e72a614ef49736d1142ce82d26b533b609c911
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 15:42:02 2019 -0800
    
        Set to spaces over tabs
    
    commit 7e872a8ebcea0d3a141addd122dd9f8b6629ead6
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 15:36:24 2019 -0800
    
        Created an error message for when someone tries to time order a result
        set > threshold limit
    
    commit e8a4b490443b1efe6c70f964b1757bf17a64e9f6
    Merge: 305876a43 8e3a58f72
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 15:05:11 2019 -0800
    
        Merge branch 'master' into 6088-Time-Ordering-On-Scans-V2
    
    commit 305876a4346c292296db623c1fcea688a29c0bb8
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 15:02:02 2019 -0800
    
        nit
    
    commit 8212a21cafc2ed4002607362f0661f4b5f6bef9d
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 14:40:35 2019 -0800
    
        Improved conciseness
    
    commit 10b5e0ca93a529d1b0e018c11fafc9c63071b8cd
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 13:42:12 2019 -0800
    
        .
    
    commit dfe4aa9681d04b8a31dcc1486e0447f29f6eb7bd
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 13:41:18 2019 -0800
    
        Fixed codestyle and forbidden API errors
    
    commit 148939e88bfff021356bb532e4246e4c8e8ac333
    Merge: 4f51024b3 5edbe2ae1
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 13:26:17 2019 -0800
    
        Merge branch '6088-Create-Scan-Benchmark' into 6088-Time-Ordering-On-Scans-V2
    
    commit 5edbe2ae12648b527e3e97b516127bf4b65196a3
    Merge: 60b7684db 315ccb76b
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 13:18:55 2019 -0800
    
        Merge github.com:apache/incubator-druid into 6088-Create-Scan-Benchmark
    
    commit 60b7684db725387b4d843385d9c61d50f2ed6744
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 13:02:13 2019 -0800
    
        Committing a param change to kick teamcity
    
    commit 4f51024b318bf744eddb9d2f9638f7590872cf14
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 12:08:12 2019 -0800
    
        Wrote more tests for scan result value sort
    
    commit 8b7d5f50818b00730965a55b1bf8ed27860bd6a4
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Wed Feb 6 11:55:09 2019 -0800
    
        Wrote tests for heapsort scan result values and fixed bug where iterator
        wasn't returning elements in correct order
    
    commit b6d4df3864e3910fa406dcc83f6644f45f496c5f
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Feb 5 16:45:20 2019 -0800
    
        Decrease segment size for less memory usage
    
    commit d1a1793f36d4c9c910f84318f2bbbd355533c977
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Feb 5 12:40:26 2019 -0800
    
        nit
    
    commit 7deb06f6df47c55469a77a92622509ce88150ad5
    Merge: b7d3a4900 86c5eee13
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Feb 5 10:53:38 2019 -0800
    
        Merge branch '6088-Create-Scan-Benchmark' into 6088-Time-Ordering-On-Scans-V2
    
    commit 86c5eee13b6ce18b33c723cd0c4e464eaf41f010
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Feb 5 10:31:27 2019 -0800
    
        Broke some long lines into two lines
    
    commit b7d3a4900afb2b56b5e2667c2d37fa4872c67219
    Merge: 796083f2b 8bc5eaa90
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Feb 5 10:23:32 2019 -0800
    
        Merge branch 'master' into 6088-Time-Ordering-On-Scans-V2
    
    commit 737a83321d74cd0b1f7b4ca800509c36056d08ff
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Tue Feb 5 10:15:32 2019 -0800
    
        Made Jon's changes and removed TODOs
    
    commit 796083f2bb188421f68858111bb39c988cb2f71c
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 4 15:37:42 2019 -0800
    
        Benchmark param change
    
    commit 20c36644dbbf46df1a9209a635e661c01aeec627
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 4 15:36:35 2019 -0800
    
        More param changes
    
    commit 9e6e71616bdcd9a7eea56e4bc1ef869c08bcf83c
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 4 15:31:21 2019 -0800
    
        Changed benchmark params
    
    commit 01b25ed11293f472dac78d4f793f2941c3b22a18
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 4 14:36:18 2019 -0800
    
        Added time ordering to the scan benchmark
    
    commit 432acaf08575c451ea02e8ec8d6318678dcf20cb
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 4 12:03:14 2019 -0800
    
        Change number of benchmark iterations
    
    commit 12e51a272124c7a75628fe5b2f65ddc00e34ba27
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 4 12:02:13 2019 -0800
    
        Added TimestampComparator tests
    
    commit e66339cd76cdb7f08a291e8488e3415518f3df63
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 4 10:56:41 2019 -0800
    
        Remove todos
    
    commit ad731a362b465e9b4ca0c9ad7050fc6555606d52
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 4 10:55:56 2019 -0800
    
        Change benchmark
    
    commit 989bd2d50e2419a715426b8fe903398d20429ff5
    Merge: 7b5847139 26930f8d2
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Mon Feb 4 10:46:38 2019 -0800
    
        Merge branch '6088-Create-Scan-Benchmark' into 6088-Time-Ordering-On-Scans-V2
    
    commit 7b584713946b538d15da591e306ca4c0a7a378e3
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Sat Feb 2 03:48:18 2019 -0800
    
        Licensing stuff
    
    commit 79e8319383eddbb49ecb4c1785dcd3eed14a0634
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 1 18:22:58 2019 -0800
    
        Move ScanResultValue timestamp comparator to a separate class for testing
    
    commit 7a6080f636ab2ead5ff85a29ea6b9cc04d93b353
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 1 18:00:58 2019 -0800
    
        Stuff for time-ordered scan query
    
    commit 26930f8d2021d1d62322c54e0ec35e260137ab1d
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 1 16:38:49 2019 -0800
    
        It runs.
    
    commit dd4ec1ac9c1194144e3ec98b811adc59598c8d8c
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 1 15:12:17 2019 -0800
    
        Need to form queries
    
    commit dba6e492a067b9bb4f77f3db4b19c340f85ef54f
    Merge: 10e57d5f9 7d4cc2873
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 1 14:13:39 2019 -0800
    
        Merge branch 'master' into 6088-Create-Scan-Benchmark
    
    commit 10e57d5f9ed003e032c82240045125002903a5bb
    Author: Justin Borromeo <jborr...@edu.uwaterloo.ca>
    Date:   Fri Feb 1 14:04:13 2019 -0800
    
        Moved Scan Builder to Druids class and started on Scan Benchmark setup
    
    * Changed SQL planning to use scan over select
    
    * Fixed some bugs
    
    * Removed unused imports
    
    * Updated calcite query test and test segment walker
    
    * Fixed formatting recommendations
---
 .../apache/druid/sql/calcite/rel/DruidQuery.java   | 143 ++++++--------------
 .../apache/druid/sql/calcite/CalciteQueryTest.java | 140 +++++++++++---------
 .../util/SpecificSegmentsQuerySegmentWalker.java   | 147 ++++++++++-----------
 3 files changed, 186 insertions(+), 244 deletions(-)
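
For context on the planner change below: `DruidQuery.toScanQuery()` now rejects ORDER BY on anything other than `__time`, and maps the direction of a single `__time` ORDER BY column to a scan ordering (`NONE` when there is no ORDER BY). A minimal standalone sketch of that decision, using hypothetical stand-in names rather than the actual Druid classes:

```java
// Illustrative sketch (not the committed code): how the new logic in
// toScanQuery() chooses a scan ordering. ScanOrder and ScanOrderMapper
// are hypothetical stand-ins for ScanQuery.Order / OrderByColumnSpec.
enum ScanOrder { NONE, ASCENDING, DESCENDING }

class ScanOrderMapper {
    static ScanOrder fromOrderBy(boolean hasTimeOrderByColumn, boolean ascending) {
        if (!hasTimeOrderByColumn) {
            // No ORDER BY __time: keep the previous default of "none".
            return ScanOrder.NONE;
        }
        // Otherwise follow the direction of the single __time ORDER BY column.
        return ascending ? ScanOrder.ASCENDING : ScanOrder.DESCENDING;
    }
}
```

In the actual diff this corresponds to the `ScanQuery.Order order` block added to `toScanQuery()`, with the computed order passed to the `ScanQuery` constructor in place of the previous `null`.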

diff --git a/sql/src/main/java/org/apache/druid/sql/calcite/rel/DruidQuery.java b/sql/src/main/java/org/apache/druid/sql/calcite/rel/DruidQuery.java
index 98e4ff3..43f8751 100644
--- a/sql/src/main/java/org/apache/druid/sql/calcite/rel/DruidQuery.java
+++ b/sql/src/main/java/org/apache/druid/sql/calcite/rel/DruidQuery.java
@@ -51,7 +51,6 @@ import org.apache.druid.query.Query;
 import org.apache.druid.query.QueryDataSource;
 import org.apache.druid.query.aggregation.PostAggregator;
 import org.apache.druid.query.aggregation.post.ExpressionPostAggregator;
-import org.apache.druid.query.dimension.DefaultDimensionSpec;
 import org.apache.druid.query.dimension.DimensionSpec;
 import org.apache.druid.query.filter.DimFilter;
 import org.apache.druid.query.groupby.GroupByQuery;
@@ -61,7 +60,6 @@ import org.apache.druid.query.groupby.orderby.OrderByColumnSpec;
 import org.apache.druid.query.ordering.StringComparator;
 import org.apache.druid.query.ordering.StringComparators;
 import org.apache.druid.query.scan.ScanQuery;
-import org.apache.druid.query.select.PagingSpec;
 import org.apache.druid.query.select.SelectQuery;
 import org.apache.druid.query.timeseries.TimeseriesQuery;
 import org.apache.druid.query.topn.DimensionTopNMetricSpec;
@@ -93,7 +91,6 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.TreeSet;
-import java.util.stream.Collectors;
 
 /**
  * A fully formed Druid query, built from a {@link PartialDruidQuery}. The work to develop this query is done
@@ -145,7 +142,13 @@ public class DruidQuery
     // Now the fun begins.
     this.filter = computeWhereFilter(partialQuery, plannerContext, sourceQuerySignature);
     this.selectProjection = computeSelectProjection(partialQuery, plannerContext, sourceQuerySignature);
-    this.grouping = computeGrouping(partialQuery, plannerContext, sourceQuerySignature, rexBuilder, finalizeAggregations);
+    this.grouping = computeGrouping(
+        partialQuery,
+        plannerContext,
+        sourceQuerySignature,
+        rexBuilder,
+        finalizeAggregations
+    );
 
     final RowSignature sortingInputRowSignature;
     if (this.selectProjection != null) {
@@ -436,9 +439,9 @@ public class DruidQuery
   /**
   * Returns dimensions corresponding to {@code aggregate.getGroupSet()}, in the same order.
    *
-   * @param partialQuery       partial query
-   * @param plannerContext     planner context
-   * @param querySignature     source row signature and re-usable virtual column references
+   * @param partialQuery   partial query
+   * @param plannerContext planner context
+   * @param querySignature source row signature and re-usable virtual column references
    *
    * @return dimensions
    *
@@ -452,12 +455,19 @@ public class DruidQuery
   {
     final Aggregate aggregate = Preconditions.checkNotNull(partialQuery.getAggregate());
     final List<DimensionExpression> dimensions = new ArrayList<>();
-    final String outputNamePrefix = Calcites.findUnusedPrefix("d", new TreeSet<>(querySignature.getRowSignature().getRowOrder()));
+    final String outputNamePrefix = Calcites.findUnusedPrefix(
+        "d",
+        new TreeSet<>(querySignature.getRowSignature().getRowOrder())
+    );
     int outputNameCounter = 0;
 
     for (int i : aggregate.getGroupSet()) {
       // Dimension might need to create virtual columns. Avoid giving it a name that would lead to colliding columns.
-      final RexNode rexNode = Expressions.fromFieldAccess(querySignature.getRowSignature(), partialQuery.getSelectProject(), i);
+      final RexNode rexNode = Expressions.fromFieldAccess(
+          querySignature.getRowSignature(),
+          partialQuery.getSelectProject(),
+          i
+      );
       final DruidExpression druidExpression = Expressions.toDruidExpression(
           plannerContext,
           querySignature.getRowSignature(),
@@ -478,7 +488,11 @@ public class DruidQuery
 
       final String dimOutputName;
       if (!druidExpression.isSimpleExtraction()) {
-        virtualColumn = querySignature.getOrCreateVirtualColumnForExpression(plannerContext, druidExpression, sqlTypeName);
+        virtualColumn = querySignature.getOrCreateVirtualColumnForExpression(
+            plannerContext,
+            druidExpression,
+            sqlTypeName
+        );
         dimOutputName = virtualColumn.getOutputName();
       } else {
         dimOutputName = outputNamePrefix + outputNameCounter++;
@@ -515,7 +529,10 @@ public class DruidQuery
   {
     final Aggregate aggregate = Preconditions.checkNotNull(partialQuery.getAggregate());
     final List<Aggregation> aggregations = new ArrayList<>();
-    final String outputNamePrefix = Calcites.findUnusedPrefix("a", new TreeSet<>(querySignature.getRowSignature().getRowOrder()));
+    final String outputNamePrefix = Calcites.findUnusedPrefix(
+        "a",
+        new TreeSet<>(querySignature.getRowSignature().getRowOrder())
+    );
 
     for (int i = 0; i < aggregate.getAggCallList().size(); i++) {
       final String aggName = outputNamePrefix + i;
@@ -745,11 +762,6 @@ public class DruidQuery
       return scanQuery;
     }
 
-    final SelectQuery selectQuery = toSelectQuery();
-    if (selectQuery != null) {
-      return selectQuery;
-    }
-
     throw new CannotBuildQueryException("Cannot convert query parts into an actual query");
   }
 
@@ -966,9 +978,12 @@ public class DruidQuery
       // Scan cannot GROUP BY.
       return null;
     }
-
-    if (limitSpec != null && limitSpec.getColumns().size() > 0) {
-      // Scan cannot ORDER BY.
+    if (limitSpec != null &&
+        (limitSpec.getColumns().size() > 1
+         || (limitSpec.getColumns().size() == 1 && !Iterables.getOnlyElement(limitSpec.getColumns())
+                                                             .getDimension()
+                                                             .equals(ColumnHolder.TIME_COLUMN_NAME)))) {
+      // Scan cannot ORDER BY non-time columns.
       return null;
     }
 
@@ -984,6 +999,15 @@ public class DruidQuery
                            ? 0L
                            : (long) limitSpec.getLimit();
 
+    ScanQuery.Order order;
+    if (limitSpec == null || limitSpec.getColumns().size() == 0) {
+      order = ScanQuery.Order.NONE;
+    } else if (limitSpec.getColumns().get(0).getDirection() == OrderByColumnSpec.Direction.ASCENDING) {
+      order = ScanQuery.Order.ASCENDING;
+    } else {
+      order = ScanQuery.Order.DESCENDING;
+    }
+
     return new ScanQuery(
         dataSource,
         filtration.getQuerySegmentSpec(),
@@ -991,90 +1015,11 @@ public class DruidQuery
         ScanQuery.ResultFormat.RESULT_FORMAT_COMPACTED_LIST,
         0,
         scanLimit,
-        null, // Will default to "none"
+        order, // Will default to "none"
         filtration.getDimFilter(),
         Ordering.natural().sortedCopy(ImmutableSet.copyOf(outputRowSignature.getRowOrder())),
         false,
         ImmutableSortedMap.copyOf(plannerContext.getQueryContext())
     );
   }
-
-  /**
-   * Return this query as a Select query, or null if this query is not compatible with Select.
-   *
-   * @return query or null
-   */
-  @Nullable
-  public SelectQuery toSelectQuery()
-  {
-    if (grouping != null) {
-      return null;
-    }
-
-    final Filtration filtration = Filtration.create(filter).optimize(sourceQuerySignature);
-    final boolean descending;
-    final int threshold;
-
-    if (limitSpec != null) {
-      // Safe to assume limitSpec has zero or one entry; DruidSelectSortRule wouldn't push in anything else.
-      if (limitSpec.getColumns().size() == 0) {
-        descending = false;
-      } else if (limitSpec.getColumns().size() == 1) {
-        final OrderByColumnSpec orderBy = Iterables.getOnlyElement(limitSpec.getColumns());
-        if (!orderBy.getDimension().equals(ColumnHolder.TIME_COLUMN_NAME)) {
-          // Select cannot handle sorting on anything other than __time.
-          return null;
-        }
-        descending = orderBy.getDirection() == OrderByColumnSpec.Direction.DESCENDING;
-      } else {
-        // Select cannot handle sorting on more than one column.
-        return null;
-      }
-
-      threshold = limitSpec.getLimit();
-    } else {
-      descending = false;
-      threshold = 0;
-    }
-
-    // We need to ask for dummy columns to prevent Select from returning all 
of them.
-    String dummyColumn = "dummy";
-    while (sourceQuerySignature.getRowSignature().getColumnType(dummyColumn) 
!= null
-           || outputRowSignature.getRowOrder().contains(dummyColumn)) {
-      dummyColumn = dummyColumn + "_";
-    }
-
-    final List<String> metrics = new ArrayList<>();
-
-    if (selectProjection != null) {
-      metrics.addAll(selectProjection.getDirectColumns());
-      metrics.addAll(selectProjection.getVirtualColumns()
-                                     .stream()
-                                     .map(VirtualColumn::getOutputName)
-                                     .collect(Collectors.toList()));
-    } else {
-      // No projection, rowOrder should reference direct columns.
-      metrics.addAll(outputRowSignature.getRowOrder());
-    }
-
-    if (metrics.isEmpty()) {
-      metrics.add(dummyColumn);
-    }
-
-    // Not used for actual queries (will be replaced by QueryMaker) but the threshold is important for the planner.
-    final PagingSpec pagingSpec = new PagingSpec(null, threshold);
-
-    return new SelectQuery(
-        dataSource,
-        filtration.getQuerySegmentSpec(),
-        descending,
-        filtration.getDimFilter(),
-        Granularities.ALL,
-        ImmutableList.of(new DefaultDimensionSpec(dummyColumn, dummyColumn)),
-        metrics.stream().sorted().distinct().collect(Collectors.toList()),
-        getVirtualColumns(true),
-        pagingSpec,
-        ImmutableSortedMap.copyOf(plannerContext.getQueryContext())
-    );
-  }
 }
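The DruidQuery change above replaces the Select-based fallback with native Scan ordering: when the pushed-in limit spec has a single ORDER BY column, `toScanQuery` maps its direction onto a `ScanQuery.Order`. The sketch below mirrors that branch structure with simplified, illustrative types (`Direction`, `ScanOrder`, `OrderByColumn`, and `orderFor` are hypothetical names, not Druid's actual classes):

```java
import java.util.List;

public class ScanOrderSketch {
    enum Direction { ASCENDING, DESCENDING }
    enum ScanOrder { NONE, ASCENDING, DESCENDING }

    static final String TIME_COLUMN = "__time";

    static class OrderByColumn {
        final String dimension;
        final Direction direction;

        OrderByColumn(String dimension, Direction direction) {
            this.dimension = dimension;
            this.direction = direction;
        }
    }

    // Mirrors the branch structure in toScanQuery(): no order-by columns means
    // no ordering; a single __time column maps to ASCENDING/DESCENDING; any
    // other column cannot be handled by a time-ordered scan.
    static ScanOrder orderFor(List<OrderByColumn> columns) {
        if (columns.isEmpty()) {
            return ScanOrder.NONE;
        }
        OrderByColumn only = columns.get(0);
        if (!TIME_COLUMN.equals(only.dimension)) {
            return null; // caller would fall back to a different query type
        }
        return only.direction == Direction.ASCENDING ? ScanOrder.ASCENDING : ScanOrder.DESCENDING;
    }

    public static void main(String[] args) {
        System.out.println(orderFor(List.of()));
        System.out.println(orderFor(List.of(new OrderByColumn(TIME_COLUMN, Direction.DESCENDING))));
    }
}
```

The real patch performs this mapping inline before constructing the `ScanQuery`, passing the resulting order where `null` was previously hard-coded.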
diff --git a/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java b/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java
index 472ff39..efd65d9 100644
--- a/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java
+++ b/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java
@@ -21,7 +21,6 @@ package org.apache.druid.sql.calcite;
 
 import com.google.common.base.Joiner;
 import com.google.common.collect.ImmutableList;
-import com.google.common.collect.ImmutableMap;
 import org.apache.druid.common.config.NullHandling;
 import org.apache.druid.java.util.common.DateTimes;
 import org.apache.druid.java.util.common.Intervals;
@@ -62,7 +61,6 @@ import org.apache.druid.query.groupby.orderby.OrderByColumnSpec.Direction;
 import org.apache.druid.query.lookup.RegisteredLookupExtractionFn;
 import org.apache.druid.query.ordering.StringComparators;
 import org.apache.druid.query.scan.ScanQuery;
-import org.apache.druid.query.select.PagingSpec;
 import org.apache.druid.query.topn.DimensionTopNMetricSpec;
 import org.apache.druid.query.topn.InvertedTopNMetricSpec;
 import org.apache.druid.query.topn.NumericTopNMetricSpec;
@@ -157,7 +155,9 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
         CalciteTests.REGULAR_USER_AUTH_RESULT,
         ImmutableList.of(Druids.newTimeseriesQueryBuilder()
                                .dataSource(CalciteTests.DATASOURCE1)
-                               .intervals(querySegmentSpec(Intervals.of("2999-01-01T00:00:00.000Z/146140482-04-24T15:36:27.903Z")))
+                               .intervals(querySegmentSpec(Intervals.of(
+                                   "2999-01-01T00:00:00.000Z/146140482-04-24T15:36:27.903Z"))
+                               )
                                .granularity(Granularities.ALL)
                                .aggregators(aggregators(
                                    new CountAggregatorFactory("a0"),
@@ -308,7 +308,7 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
             .add(new Object[]{"sys", "server_segments", "SYSTEM_TABLE"})
             .add(new Object[]{"sys", "servers", "SYSTEM_TABLE"})
             .add(new Object[]{"sys", "tasks", "SYSTEM_TABLE"})
-        .build()
+            .build()
     );
   }
 
@@ -443,7 +443,16 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
            new Object[]{timestamp("2000-01-03"), 1L, "2", "", "d", 3f, 3.0, hyperLogLogCollectorClassName},
            new Object[]{timestamp("2001-01-01"), 1L, "1", "a", "", 4f, 4.0, hyperLogLogCollectorClassName},
            new Object[]{timestamp("2001-01-02"), 1L, "def", "abc", NULL_VALUE, 5f, 5.0, hyperLogLogCollectorClassName},
-            new Object[]{timestamp("2001-01-03"), 1L, "abc", NULL_VALUE, NULL_VALUE, 6f, 6.0, hyperLogLogCollectorClassName}
+            new Object[]{
+                timestamp("2001-01-03"),
+                1L,
+                "abc",
+                NULL_VALUE,
+                NULL_VALUE,
+                6f,
+                6.0,
+                hyperLogLogCollectorClassName
+            }
         )
     );
   }
@@ -576,16 +585,15 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
         "SELECT * FROM druid.foo ORDER BY __time DESC LIMIT 2",
         CalciteTests.REGULAR_USER_AUTH_RESULT,
         ImmutableList.of(
-            Druids.newSelectQueryBuilder()
-                  .dataSource(CalciteTests.DATASOURCE1)
-                  .intervals(querySegmentSpec(Filtration.eternity()))
-                  .granularity(Granularities.ALL)
-                  .dimensions(ImmutableList.of("dummy"))
-                  .metrics(ImmutableList.of("__time", "cnt", "dim1", "dim2", "dim3", "m1", "m2", "unique_dim1"))
-                  .descending(true)
-                  .pagingSpec(FIRST_PAGING_SPEC)
-                  .context(QUERY_CONTEXT_DEFAULT)
-                  .build()
+            newScanQueryBuilder()
+                .dataSource(CalciteTests.DATASOURCE1)
+                .intervals(querySegmentSpec(Filtration.eternity()))
+                .columns(ImmutableList.of("__time", "cnt", "dim1", "dim2", "dim3", "m1", "m2", "unique_dim1"))
+                .limit(2)
+                .order(ScanQuery.Order.DESCENDING)
+                .resultFormat(ScanQuery.ResultFormat.RESULT_FORMAT_COMPACTED_LIST)
+                .context(QUERY_CONTEXT_DEFAULT)
+                .build()
         ),
         ImmutableList.of(
            new Object[]{timestamp("2001-01-03"), 1L, "abc", NULL_VALUE, NULL_VALUE, 6f, 6d, HLLC_STRING},
@@ -603,32 +611,15 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
         "SELECT * FROM druid.foo ORDER BY __time",
         CalciteTests.REGULAR_USER_AUTH_RESULT,
         ImmutableList.of(
-            Druids.newSelectQueryBuilder()
-                  .dataSource(CalciteTests.DATASOURCE1)
-                  .intervals(querySegmentSpec(Filtration.eternity()))
-                  .granularity(Granularities.ALL)
-                  .dimensions(ImmutableList.of("dummy"))
-                  .metrics(ImmutableList.of("__time", "cnt", "dim1", "dim2", "dim3", "m1", "m2", "unique_dim1"))
-                  .descending(false)
-                  .pagingSpec(FIRST_PAGING_SPEC)
-                  .context(QUERY_CONTEXT_DEFAULT)
-                  .build(),
-            Druids.newSelectQueryBuilder()
-                  .dataSource(CalciteTests.DATASOURCE1)
-                  .intervals(querySegmentSpec(Filtration.eternity()))
-                  .granularity(Granularities.ALL)
-                  .dimensions(ImmutableList.of("dummy"))
-                  .metrics(ImmutableList.of("__time", "cnt", "dim1", "dim2", "dim3", "m1", "m2", "unique_dim1"))
-                  .descending(false)
-                  .pagingSpec(
-                      new PagingSpec(
-                          ImmutableMap.of("foo_1970-01-01T00:00:00.000Z_2001-01-03T00:00:00.001Z_1", 5),
-                          1000,
-                          true
-                      )
-                  )
-                  .context(QUERY_CONTEXT_DEFAULT)
-                  .build()
+            newScanQueryBuilder()
+                .dataSource(CalciteTests.DATASOURCE1)
+                .intervals(querySegmentSpec(Filtration.eternity()))
+                .columns(ImmutableList.of("__time", "cnt", "dim1", "dim2", "dim3", "m1", "m2", "unique_dim1"))
+                .limit(Long.MAX_VALUE)
+                .order(ScanQuery.Order.ASCENDING)
+                .resultFormat(ScanQuery.ResultFormat.RESULT_FORMAT_COMPACTED_LIST)
+                .context(QUERY_CONTEXT_DEFAULT)
+                .build()
         ),
         ImmutableList.of(
            new Object[]{timestamp("2000-01-01"), 1L, "", "a", "[\"a\",\"b\"]", 1f, 1.0, HLLC_STRING},
@@ -669,17 +660,15 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
     testQuery(
         "SELECT dim1 FROM druid.foo ORDER BY __time DESC LIMIT 2",
         ImmutableList.of(
-            Druids.newSelectQueryBuilder()
-                  .dataSource(CalciteTests.DATASOURCE1)
-                  .intervals(querySegmentSpec(Filtration.eternity()))
-                  .dimensionSpecs(dimensions(new DefaultDimensionSpec("dim1", "d1")))
-                  .granularity(Granularities.ALL)
-                  .descending(true)
-                  .dimensions(ImmutableList.of("dummy"))
-                  .metrics(ImmutableList.of("__time", "dim1"))
-                  .pagingSpec(FIRST_PAGING_SPEC)
-                  .context(QUERY_CONTEXT_DEFAULT)
-                  .build()
+            newScanQueryBuilder()
+                .dataSource(CalciteTests.DATASOURCE1)
+                .intervals(querySegmentSpec(Filtration.eternity()))
+                .columns(ImmutableList.of("__time", "dim1"))
+                .limit(2)
+                .order(ScanQuery.Order.DESCENDING)
+                .resultFormat(ScanQuery.ResultFormat.RESULT_FORMAT_COMPACTED_LIST)
+                .context(QUERY_CONTEXT_DEFAULT)
+                .build()
         ),
         ImmutableList.of(
             new Object[]{"abc"},
@@ -1714,8 +1703,8 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
                   .intervals(querySegmentSpec(Filtration.eternity()))
                   .granularity(Granularities.ALL)
                   .filters(expressionFilter("case_searched((\"dim2\" == 'a'),"
-                                             + (NullHandling.replaceWithDefault() ? "1" : "0")
-                                             + ",(\"dim2\" == ''))"))
+                                            + (NullHandling.replaceWithDefault() ? "1" : "0")
+                                            + ",(\"dim2\" == ''))"))
                   .aggregators(aggregators(new CountAggregatorFactory("a0")))
                   .context(TIMESERIES_CONTEXT_DEFAULT)
                   .build()
@@ -1743,8 +1732,8 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
                   .intervals(querySegmentSpec(Filtration.eternity()))
                   .granularity(Granularities.ALL)
                   .filters(expressionFilter("case_searched((\"dim2\" == 'a'),"
-                                             + (NullHandling.replaceWithDefault() ? "1" : "0")
-                                             + ",(\"dim2\" == null))"))
+                                            + (NullHandling.replaceWithDefault() ? "1" : "0")
+                                            + ",(\"dim2\" == null))"))
                   .aggregators(aggregators(new CountAggregatorFactory("a0")))
                   .context(TIMESERIES_CONTEXT_DEFAULT)
                   .build()
@@ -2022,7 +2011,9 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
                   .virtualColumns(
                       expressionVirtualColumn(
                           "v0",
-                          "case_searched((\"dim2\" == 'abc'),'yes',(\"dim2\" == 'def'),'yes'," + DruidExpression.nullLiteral() + ")",
+                          "case_searched((\"dim2\" == 'abc'),'yes',(\"dim2\" == 'def'),'yes',"
+                          + DruidExpression.nullLiteral()
+                          + ")",
                           ValueType.STRING
                       )
                   )
@@ -2303,7 +2294,8 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
                             expressionVirtualColumn(
                                 "v0",
                                 "floor(CAST(\"dim1\", 'DOUBLE'))",
-                                ValueType.DOUBLE)
+                                ValueType.DOUBLE
+                            )
                         )
                         .setDimFilter(
                             or(
@@ -4296,7 +4288,11 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
                                                     ValueType.LONG
                                                 )
                                             )
-                                            .setDimensions(dimensions(new DefaultDimensionSpec("v0", "v0", ValueType.LONG)))
+                                            .setDimensions(dimensions(new DefaultDimensionSpec(
+                                                "v0",
+                                                "v0",
+                                                ValueType.LONG
+                                            )))
                                             .setAggregatorSpecs(aggregators(new CountAggregatorFactory("a0")))
                                             .setContext(QUERY_CONTEXT_DEFAULT)
                                             .build()
@@ -4355,13 +4351,21 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
                                                     ValueType.LONG
                                                 )
                                             )
-                                            .setDimensions(dimensions(new DefaultDimensionSpec("v0", "v0", ValueType.LONG)))
+                                            .setDimensions(dimensions(new DefaultDimensionSpec(
+                                                "v0",
+                                                "v0",
+                                                ValueType.LONG
+                                            )))
                                             .setAggregatorSpecs(
                                                 aggregators(
                                                    new CardinalityAggregatorFactory(
                                                         "a0:a",
                                                         null,
-                                                        dimensions(new DefaultDimensionSpec("cnt", "cnt", ValueType.LONG)),
+                                                        dimensions(new DefaultDimensionSpec(
+                                                            "cnt",
+                                                            "cnt",
+                                                            ValueType.LONG
+                                                        )),
                                                         false,
                                                         true
                                                     )
@@ -7471,7 +7475,10 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
         ImmutableList.of(
             GroupByQuery.builder()
                         .setDataSource(CalciteTests.DATASOURCE1)
-                        .setInterval(querySegmentSpec(Intervals.utc(DateTimes.of("2000-01-01").getMillis(), JodaUtils.MAX_INSTANT)))
+                        .setInterval(querySegmentSpec(Intervals.utc(
+                            DateTimes.of("2000-01-01").getMillis(),
+                            JodaUtils.MAX_INSTANT
+                        )))
                         .setGranularity(Granularities.ALL)
                         .setDimFilter(not(selector("dim1", "", null)))
                         .setDimensions(dimensions(new ExtractionDimensionSpec(
@@ -7483,7 +7490,10 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
                         .build(),
             Druids.newTimeseriesQueryBuilder()
                   .dataSource(CalciteTests.DATASOURCE1)
-                  .intervals(querySegmentSpec(Intervals.utc(DateTimes.of("2000-01-01").getMillis(), JodaUtils.MAX_INSTANT)))
+                  .intervals(querySegmentSpec(Intervals.utc(
+                      DateTimes.of("2000-01-01").getMillis(),
+                      JodaUtils.MAX_INSTANT
+                  )))
                   .granularity(Granularities.ALL)
                   .filters(in(
                       "dim2",
@@ -7630,8 +7640,8 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
         PLANNER_CONFIG_DEFAULT,
         QUERY_CONTEXT_DONT_SKIP_EMPTY_BUCKETS,
        "SELECT exp(count(*)) + 10, sin(pi / 6), cos(pi / 6), tan(pi / 6), cot(pi / 6)," +
-            "asin(exp(count(*)) / 2), acos(exp(count(*)) / 2), atan(exp(count(*)) / 2), atan2(exp(count(*)), 1) " +
-            "FROM druid.foo WHERE  dim2 = 0",
+        "asin(exp(count(*)) / 2), acos(exp(count(*)) / 2), atan(exp(count(*)) / 2), atan2(exp(count(*)), 1) " +
+        "FROM druid.foo WHERE  dim2 = 0",
         CalciteTests.REGULAR_USER_AUTH_RESULT,
         ImmutableList.of(Druids.newTimeseriesQueryBuilder()
                                .dataSource(CalciteTests.DATASOURCE1)
diff --git a/sql/src/test/java/org/apache/druid/sql/calcite/util/SpecificSegmentsQuerySegmentWalker.java b/sql/src/test/java/org/apache/druid/sql/calcite/util/SpecificSegmentsQuerySegmentWalker.java
index de8cf2a..af0e894 100644
--- a/sql/src/test/java/org/apache/druid/sql/calcite/util/SpecificSegmentsQuerySegmentWalker.java
+++ b/sql/src/test/java/org/apache/druid/sql/calcite/util/SpecificSegmentsQuerySegmentWalker.java
@@ -19,15 +19,17 @@
 
 package org.apache.druid.sql.calcite.util;
 
-import com.google.common.base.Function;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.Iterables;
 import com.google.common.collect.Ordering;
 import com.google.common.io.Closeables;
 import org.apache.druid.java.util.common.ISE;
+import org.apache.druid.java.util.common.Intervals;
 import org.apache.druid.java.util.common.UOE;
 import org.apache.druid.java.util.common.concurrent.Execs;
 import org.apache.druid.java.util.common.guava.FunctionalIterable;
-import org.apache.druid.java.util.common.guava.Sequence;
+import org.apache.druid.query.Druids;
 import org.apache.druid.query.FinalizeResultsQueryRunner;
 import org.apache.druid.query.NoopQueryRunner;
 import org.apache.druid.query.Query;
@@ -39,15 +41,15 @@ import org.apache.druid.query.QuerySegmentWalker;
 import org.apache.druid.query.QueryToolChest;
 import org.apache.druid.query.SegmentDescriptor;
 import org.apache.druid.query.TableDataSource;
+import org.apache.druid.query.scan.ScanQuery;
+import org.apache.druid.query.spec.MultipleSpecificSegmentSpec;
 import org.apache.druid.query.spec.SpecificSegmentQueryRunner;
 import org.apache.druid.query.spec.SpecificSegmentSpec;
 import org.apache.druid.segment.QueryableIndex;
 import org.apache.druid.segment.QueryableIndexSegment;
 import org.apache.druid.segment.Segment;
 import org.apache.druid.timeline.DataSegment;
-import org.apache.druid.timeline.TimelineObjectHolder;
 import org.apache.druid.timeline.VersionedIntervalTimeline;
-import org.apache.druid.timeline.partition.PartitionChunk;
 import org.apache.druid.timeline.partition.PartitionHolder;
 import org.joda.time.Interval;
 
@@ -98,9 +100,16 @@ public class SpecificSegmentsQuerySegmentWalker implements QuerySegmentWalker, C
       final Iterable<Interval> intervals
   )
   {
-    final QueryRunnerFactory<T, Query<T>> factory = 
conglomerate.findFactory(query);
+    Query<T> newQuery = query;
+    if (query instanceof ScanQuery && ((ScanQuery) query).getOrder() != 
ScanQuery.Order.NONE) {
+      newQuery = (Query<T>) Druids.ScanQueryBuilder.copy((ScanQuery) query)
+                                                   .intervals(new 
MultipleSpecificSegmentSpec(ImmutableList.of()))
+                                                   .build();
+    }
+
+    final QueryRunnerFactory<T, Query<T>> factory = 
conglomerate.findFactory(newQuery);
     if (factory == null) {
-      throw new ISE("Unknown query type[%s].", query.getClass());
+      throw new ISE("Unknown query type[%s].", newQuery.getClass());
     }
 
     final QueryToolChest<T, Query<T>> toolChest = factory.getToolchest();
@@ -109,56 +118,46 @@ public class SpecificSegmentsQuerySegmentWalker implements QuerySegmentWalker, C
         toolChest.postMergeQueryDecoration(
             toolChest.mergeResults(
                 toolChest.preMergeQueryDecoration(
-                    new QueryRunner<T>()
-                    {
-                      @Override
-                      public Sequence<T> run(QueryPlus<T> queryPlus, Map<String, Object> responseContext)
-                      {
-                        Query<T> query = queryPlus.getQuery();
-                        final VersionedIntervalTimeline<String, Segment> timeline = getTimelineForTableDataSource(query);
-                        return makeBaseRunner(
-                            query,
-                            toolChest,
-                            factory,
-                            FunctionalIterable
-                                .create(intervals)
-                                .transformCat(
-                                    new Function<Interval, Iterable<TimelineObjectHolder<String, Segment>>>()
-                                    {
-                                      @Override
-                                      public Iterable<TimelineObjectHolder<String, Segment>> apply(final Interval interval)
-                                      {
-                                        return timeline.lookup(interval);
-                                      }
-                                    }
-                                )
-                                .transformCat(
-                                    new Function<TimelineObjectHolder<String, Segment>, Iterable<SegmentDescriptor>>()
-                                    {
-                                      @Override
-                                      public Iterable<SegmentDescriptor> apply(final TimelineObjectHolder<String, Segment> holder)
-                                      {
-                                        return FunctionalIterable
-                                            .create(holder.getObject())
-                                            .transform(
-                                                new Function<PartitionChunk<Segment>, SegmentDescriptor>()
-                                                {
-                                                  @Override
-                                                  public SegmentDescriptor apply(final PartitionChunk<Segment> chunk)
-                                                  {
-                                                    return new SegmentDescriptor(
-                                                        holder.getInterval(),
-                                                        holder.getVersion(),
-                                                        chunk.getChunkNumber()
-                                                    );
-                                                  }
-                                                }
-                                            );
-                                      }
-                                    }
-                                )
-                        ).run(queryPlus, responseContext);
+                    (queryPlus, responseContext) -> {
+                      Query<T> query1 = queryPlus.getQuery();
+                      Query<T> newQuery1 = query1;
+                      if (query instanceof ScanQuery && ((ScanQuery) query).getOrder() != ScanQuery.Order.NONE) {
+                        newQuery1 = (Query<T>) Druids.ScanQueryBuilder.copy((ScanQuery) query)
+                                                                      .intervals(new MultipleSpecificSegmentSpec(
+                                                                          ImmutableList.of(new SegmentDescriptor(
+                                                                              Intervals.of("2015-04-12/2015-04-13"),
+                                                                              "4",
+                                                                              0
+                                                                          ))))
+                                                                      .context(ImmutableMap.of(
+                                                                          ScanQuery.CTX_KEY_OUTERMOST,
+                                                                          false
+                                                                      ))
+                                                                      .build();
                       }
+                      final VersionedIntervalTimeline<String, Segment> timeline = getTimelineForTableDataSource(
+                          newQuery1);
+                      return makeBaseRunner(
+                          newQuery1,
+                          toolChest,
+                          factory,
+                          FunctionalIterable
+                              .create(intervals)
+                              .transformCat(
+                                  interval -> timeline.lookup(interval)
+                              )
+                              .transformCat(
+                                  holder -> FunctionalIterable
+                                      .create(holder.getObject())
+                                      .transform(
+                                          chunk -> new SegmentDescriptor(
+                                              holder.getInterval(),
+                                              holder.getVersion(),
+                                              chunk.getChunkNumber()
+                                          )
+                                      )
+                              )
+                      ).run(QueryPlus.wrap(newQuery1), responseContext);
                     }
                 )
             )
@@ -228,31 +227,19 @@ public class SpecificSegmentsQuerySegmentWalker implements QuerySegmentWalker, C
                 FunctionalIterable
                     .create(specs)
                     .transformCat(
-                        new Function<SegmentDescriptor, Iterable<QueryRunner<T>>>()
-                        {
-                          @Override
-                          public Iterable<QueryRunner<T>> apply(final SegmentDescriptor descriptor)
-                          {
-                            final PartitionHolder<Segment> holder = timeline.findEntry(
-                                descriptor.getInterval(),
-                                descriptor.getVersion()
-                            );
-
-                            return Iterables.transform(
-                                holder,
-                                new Function<PartitionChunk<Segment>, QueryRunner<T>>()
-                                {
-                                  @Override
-                                  public QueryRunner<T> apply(PartitionChunk<Segment> chunk)
-                                  {
-                                    return new SpecificSegmentQueryRunner<T>(
-                                        factory.createRunner(chunk.getObject()),
-                                        new SpecificSegmentSpec(descriptor)
-                                    );
-                                  }
-                                }
-                            );
-                          }
+                        descriptor -> {
+                          final PartitionHolder<Segment> holder = timeline.findEntry(
+                              descriptor.getInterval(),
+                              descriptor.getVersion()
+                          );
+
+                          return Iterables.transform(
+                              holder,
+                              chunk -> new SpecificSegmentQueryRunner<T>(
+                                  factory.createRunner(chunk.getObject()),
+                                  new SpecificSegmentSpec(descriptor)
+                              )
+                          );
                         }
                     )
             )
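A large part of the walker diff above replaces anonymous `com.google.common.base.Function` classes with lambdas. The sketch below shows the equivalence the refactor relies on, using `java.util.function.Function` for self-containment (the original uses Guava's interface); `describe` and the chunk-number inputs are illustrative, not Druid code:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class LambdaRefactorSketch {
    // Apply a mapping to each chunk number and join the results, so both
    // styles can be compared on identical input.
    static String describe(Function<Integer, String> f, List<Integer> chunks) {
        return chunks.stream().map(f).collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        // Before: an anonymous inner-class Function, the style removed above.
        Function<Integer, String> anon = new Function<Integer, String>() {
            @Override
            public String apply(Integer chunkNumber) {
                return "segment-" + chunkNumber;
            }
        };
        // After: the equivalent lambda, as used in the rewritten walker.
        Function<Integer, String> lambda = chunkNumber -> "segment-" + chunkNumber;

        List<Integer> chunks = List.of(0, 1, 2);
        System.out.println(describe(anon, chunks));
        System.out.println(describe(lambda, chunks));
    }
}
```

Both forms are behaviorally identical; the lambda version simply drops the boilerplate, which is why the refactored runner reads as a flat pipeline of `transformCat` and `transform` calls.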


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org