[jira] [Updated] (SOLR-13726) Krb5HttpClientBuilder avoid setting javax.security.auth.useSubjectCredsOnly

2019-09-05 Thread Kevin Risden (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13726:

Component/s: security

> Krb5HttpClientBuilder avoid setting javax.security.auth.useSubjectCredsOnly
> ---
>
> Key: SOLR-13726
> URL: https://issues.apache.org/jira/browse/SOLR-13726
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: security, SolrJ
>Reporter: Kevin Risden
>Priority: Major
>
> Solr should avoid setting system properties that affect the entire JVM. 
> Specifically, "javax.security.auth.useSubjectCredsOnly" can cause numerous 
> issues if SolrJ is colocated with other Kerberos-secured services.
> Krb5HttpClientBuilder sets this property to false if it is not already set, 
> even though the JVM defaults it to true. This affects everything in the JVM. 
> Since SolrJ is meant to be client side, we should avoid doing this.
> [https://github.com/apache/lucene-solr/blame/master/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Krb5HttpClientBuilder.java#L144]
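The problematic pattern can be sketched in a few lines. This is a simplified illustration only, not the actual Krb5HttpClientBuilder source; the class and method names here are made up:

```java
public class UseSubjectCredsOnlyDemo {
    static final String KEY = "javax.security.auth.useSubjectCredsOnly";

    // Sketch of the behavior described above: flip the JVM-wide property
    // to "false" whenever it is unset (JGSS treats unset as "true").
    static String applyBuilderDefault() {
        if (System.getProperty(KEY) == null) {
            // This mutates global JVM state, affecting every Kerberos/JGSS
            // client in the process, not just SolrJ.
            System.setProperty(KEY, "false");
        }
        return System.getProperty(KEY);
    }

    public static void main(String[] args) {
        System.out.println(KEY + "=" + applyBuilderDefault());
    }
}
```

Any other Kerberos-secured client library running in the same JVM sees the changed value, which is the core of the complaint above.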



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13726) Krb5HttpClientBuilder avoid setting javax.security.auth.useSubjectCredsOnly

2019-09-05 Thread Kevin Risden (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923533#comment-16923533
 ] 

Kevin Risden commented on SOLR-13726:
-

NIFI-5148 handled this specifically for NiFi, to avoid the JVM property change 
in Krb5HttpClientBuilder.




[jira] [Commented] (SOLR-13726) Krb5HttpClientBuilder avoid setting javax.security.auth.useSubjectCredsOnly

2019-08-29 Thread Kevin Risden (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918808#comment-16918808
 ] 

Kevin Risden commented on SOLR-13726:
-

[~anshum] - curious if you have any opinions/thoughts here on not setting the 
useSubjectCredsOnly system property.

I don't have a patch yet, since I think this needs a bit more thought overall 
about how we handle Kerberos in SolrJ. Ideally we would wrap every SolrJ call 
internally with an explicit Subject. This would avoid falling back to the JVM 
JAAS config unless explicitly required.

The Hadoop UserGroupInformation class wraps a lot of the ugly internals of JVM 
JAAS configs, but it is a pretty heavy dependency to bring into SolrJ (it's 
part of hadoop-common). Still, it might give some ideas on how to better handle 
this.
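The "wrap every call with an explicit Subject" idea could look roughly like this. A hypothetical wrapper sketch, not SolrJ code; a real implementation would populate the Subject with Kerberos principals and tickets from a JAAS login:

```java
import java.security.PrivilegedAction;
import javax.security.auth.Subject;

public class ExplicitSubjectSketch {
    // Run an action under an explicit Subject so JGSS resolves credentials
    // from it, instead of relying on the JVM-wide
    // useSubjectCredsOnly=false fallback to the JAAS config.
    static <T> T callAs(Subject subject, PrivilegedAction<T> action) {
        return Subject.doAs(subject, action);
    }

    public static void main(String[] args) {
        Subject subject = new Subject(); // empty here; illustration only
        String result = callAs(subject, () -> "solr request executed");
        System.out.println(result);
    }
}
```

With this shape, the credentials travel with each call rather than through process-global state.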




[jira] [Commented] (SOLR-13726) Krb5HttpClientBuilder avoid setting javax.security.auth.useSubjectCredsOnly

2019-08-29 Thread Kevin Risden (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918806#comment-16918806
 ] 

Kevin Risden commented on SOLR-13726:
-

Some references about useSubjectCredsOnly:

* JDK source where the default is true - 
http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/sun/security/jgss/GSSUtil.java#l259
* An ugly issue where it causes hung threads - 
https://risdenk.github.io/2018/03/15/hdf-apache-nifi-kerberos-errors-usesubjectcredsonly.html




[jira] [Updated] (SOLR-13726) Krb5HttpClientBuilder avoid setting javax.security.auth.useSubjectCredsOnly

2019-08-29 Thread Kevin Risden (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13726:

Component/s: SolrJ




[jira] [Commented] (SOLR-13726) Krb5HttpClientBuilder avoid setting javax.security.auth.useSubjectCredsOnly

2019-08-29 Thread Kevin Risden (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918804#comment-16918804
 ] 

Kevin Risden commented on SOLR-13726:
-

SOLR-7468 introduced this a long time ago. This came up recently while trying 
to debug an issue where the JVM hangs looking for Kerberos credentials. 
javax.security.auth.useSubjectCredsOnly=false causes this behavior. We should 
therefore definitely avoid setting the property. The existing warning should be 
enough to help correct this.

In an ideal world, the SolrJ Kerberos handling would automatically set the Java 
Subject and not have to worry about this setting being configured at all.




[jira] [Created] (SOLR-13726) Krb5HttpClientBuilder avoid setting javax.security.auth.useSubjectCredsOnly

2019-08-29 Thread Kevin Risden (Jira)
Kevin Risden created SOLR-13726:
---

 Summary: Krb5HttpClientBuilder avoid setting 
javax.security.auth.useSubjectCredsOnly
 Key: SOLR-13726
 URL: https://issues.apache.org/jira/browse/SOLR-13726
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Kevin Risden








[jira] [Commented] (SOLR-9952) S3BackupRepository

2019-08-09 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903955#comment-16903955
 ] 

Kevin Risden commented on SOLR-9952:


[~suryakant.jadhav] - this is the wrong place to ask. Use the solr-user mailing 
list for questions [1]. Solr 4.10.3 is old and will most likely not work for 
backing up to S3.

[1] https://lucene.apache.org/solr/community.html#mailing-lists-irc

> S3BackupRepository
> --
>
> Key: SOLR-9952
> URL: https://issues.apache.org/jira/browse/SOLR-9952
> Project: Solr
>  Issue Type: New Feature
>  Components: Backup/Restore
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: 
> 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, 
> 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr 
> on S3.pdf, core-site.xml.template
>
>
> I'd like to have a backup repository implementation that allows snapshotting 
> to AWS S3.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2019-08-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-6305:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Ability to set the replication factor for index files created by 
> HDFSDirectoryFactory
> -
>
> Key: SOLR-6305
> URL: https://issues.apache.org/jira/browse/SOLR-6305
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
> Environment: hadoop-2.2.0
>Reporter: Timothy Potter
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.3
>
> Attachments: 
> 0001-OIQ-23224-SOLR-6305-Fixed-SOLR-6305-by-reading-the-r.patch, 
> SOLR-6305.patch
>
>
> HdfsFileWriter doesn't allow us to create files in HDFS with a different 
> replication factor than the configured DFS default because it uses: 
> {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}}
> Since we have two forms of replication going on when using 
> HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication 
> factor for the Solr directories to a lower value than the default. I realize 
> this might reduce the chance of data locality but since Solr cores each have 
> their own path in HDFS, we should give operators the option to reduce it.
> My original thinking was to just use Hadoop setrep to customize the 
> replication factor, but that's a one-time shot and doesn't affect new files 
> created. For instance, I did:
> {{hadoop fs -setrep -R 1 solr49/coll1}}
> My default dfs replication is set to 3; I'm setting it to 1 just as an 
> example.
> Then added some more docs to the coll1 and did:
> {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}}
> 3 <-- should be 1
> So it looks like new files don't inherit the repfact from their parent 
> directory.
> Not sure if we need to go as far as allowing different replication factor per 
> collection but that should be considered if possible.
> I looked at the Hadoop 2.2.0 code to see if there was a way to work through 
> this using the Configuration object but nothing jumped out at me ... and the 
> implementation for getServerDefaults(path) is just:
>   public FsServerDefaults getServerDefaults(Path p) throws IOException {
>     return getServerDefaults();
>   }
> Path is ignored ;-)
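The quoted snippet is exactly why `setrep` on the parent directory has no effect on new files: the per-path overload ignores its argument. A toy mirror of that resolution logic, using stand-in methods and values rather than the real Hadoop API:

```java
public class ServerDefaultsSketch {
    static final short CLUSTER_DEFAULT_REPLICATION = 3;

    // Toy mirror of the quoted Hadoop 2.2.0 code: the per-path overload
    // just delegates to the no-arg version, so the path is ignored.
    static short getServerDefaults(String path) {
        return getServerDefaults();
    }

    static short getServerDefaults() {
        return CLUSTER_DEFAULT_REPLICATION;
    }

    public static void main(String[] args) {
        // Even after `hadoop fs -setrep -R 1 solr49/coll1`, a newly created
        // file resolves its replication from the cluster default, not from
        // the parent directory's setting.
        System.out.println(getServerDefaults("solr49/coll1/index/segments_3"));
    }
}
```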






[jira] [Updated] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2019-08-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-6305:
---
Fix Version/s: 8.3




[jira] [Updated] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2019-08-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-6305:
---
Attachment: SOLR-6305.patch




[jira] [Commented] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2019-08-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899024#comment-16899024
 ] 

Kevin Risden commented on SOLR-6305:


Updated patch from [~bpasko] with commit message and CHANGES entry. Looking at 
committing soon.




[jira] [Updated] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2019-08-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-6305:
---
Status: Patch Available  (was: Open)




[jira] [Assigned] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2019-08-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-6305:
--

Assignee: Kevin Risden




[jira] [Commented] (SOLR-13587) Close BackupRepository after every usage

2019-07-01 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876440#comment-16876440
 ] 

Kevin Risden commented on SOLR-13587:
-

[~mkhludnev] - No. I asked around and there didn't seem to be any way to work 
around the HDFS close issue. It would need to be fixed in Hadoop first.

> Close BackupRepository after every usage
> 
>
> Key: SOLR-13587
> URL: https://issues.apache.org/jira/browse/SOLR-13587
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: Backup/Restore
>Affects Versions: 8.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13587.patch
>
>
> Turns out BackupRepository is created for every operation, but never closed. 
> I suppose this leads to the necessity of having {{BadHdfsThreadsFilter}} in 
> {{TestHdfsCloudBackupRestore}}. Also, the test needs to repeat the 
> backup/restore operation to make sure that closing the hdfs filesystem 
> doesn't break it; see SOLR-9961 for the case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13587) Close BackupRepository after every usage

2019-06-30 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875878#comment-16875878
 ] 

Kevin Risden commented on SOLR-13587:
-

And when I say "can't" I mean literally there is no close option on an HDFS 
filesystem instance. It leaks threads since the filesystem instance starts some 
thread pools and has no close method to stop them. It would be great if we 
could actually call close on an HDFS filesystem, but nope.



[jira] [Commented] (SOLR-13587) Close BackupRepository after every usage

2019-06-30 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875877#comment-16875877
 ] 

Kevin Risden commented on SOLR-13587:
-

Yea so HDFS filesystem instances can't be closed, sadly. I looked into this as 
part of SOLR-5007. I found the same thing: backup repositories don't open/close 
things properly. It's probably hidden by the fact that HDFS leaks threads.

> Close BackupRepository after every usage
> 
>
> Key: SOLR-13587
> URL: https://issues.apache.org/jira/browse/SOLR-13587
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 8.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13587.patch
>
>
> Turns out BackupRepository is created for every operation but never closed. I 
> suppose this leads to the need for {{BadHdfsThreadsFilter}} in 
> {{TestHdfsCloudBackupRestore}}. Also, the test needs to repeat the backup/restore 
> operation to make sure that closing the hdfs filesystem doesn't break it; see 
> SOLR-9961 for the case.






[jira] [Commented] (SOLR-12988) Known OpenJDK >= 11 SSL (TLSv1.3) bugs can cause problems with Solr

2019-06-22 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16870270#comment-16870270
 ] 

Kevin Risden commented on SOLR-12988:
-

http://jdk.java.net/13/

There are release notes and other info there about the different builds. There are 
also emails from the JDK folks about certain builds from different phases of the 
release process.

> Known OpenJDK >= 11 SSL (TLSv1.3) bugs can cause problems with Solr
> ---
>
> Key: SOLR-12988
> URL: https://issues.apache.org/jira/browse/SOLR-12988
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
>  Labels: Java11, Java12, Java13
> Attachments: SOLR-12988.patch, SOLR-12988.patch, SOLR-13413.patch
>
>
> There are several known OpenJDK JVM bugs (beginning with Java11, when TLS v1.3 
> support was first added) that are known to affect Solr's SSL support, and 
> have caused numerous test failures -- notably early "testing" builds of 
> OpenJDK 11, 12, & 13, as well as the officially released OpenJDK 11, 11.0.1, 
> and 11.0.2.
> From the standpoint of the Solr project, there is very little we can do to 
> mitigate these bugs, so we have taken steps to ensure any code using our 
> {{SSLTestConfig}} / {{RandomizeSSL}} test-framework classes will be "SKIPed" 
> with an {{AssumptionViolatedException}} when used on JVMs that are known to 
> be problematic.
> Users who encounter any of the types of failures described below, or 
> developers who encounter test runs that "SKIP" with a message referring to 
> this issue ID, are encouraged to upgrade their JVM (or, as a last resort, try 
> disabling "TLSv1.3" in your JVM security properties).
> 
> Examples of known bugs as they have manifested in Solr tests...
> * https://bugs.openjdk.java.net/browse/JDK-8212885
> ** "TLS 1.3 resumed session does not retain peer certificate chain"
> ** affects users with {{checkPeerNames=true}} in your SSL configuration
> ** causes 100% failure rate in Solr's 
> {{TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName}}
> ** can result in exceptions for SolrJ users, or in solr cloud server logs 
> when making intra-node requests, with a root cause of 
> "javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated"
> ** {noformat}
>[junit4]   2> Caused by: javax.net.ssl.SSLPeerUnverifiedException: peer 
> not authenticated
>[junit4]   2>  at 
> java.base/sun.security.ssl.SSLSessionImpl.getPeerCertificates(SSLSessionImpl.java:526)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLConnectionSocketFactory.verifyHostname(SSLConnectionSocketFactory.java:464)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:397)
>[junit4]   2>  at 
> org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355)
>[junit4]   2>  at 
> org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
>[junit4]   2>  at 
> org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:359)
>[junit4]   2>  at 
> org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381)
>[junit4]   2>  at 
> org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
>[junit4]   2>  at 
> org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
>[junit4]   2>  at 
> org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
>[junit4]   2>  at 
> org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
>[junit4]   2>  at 
> org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
>[junit4]   2>  at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:542)
> {noformat}
> * https://bugs.openjdk.java.net/browse/JDK-8213202
> ** "Possible race condition in TLS 1.3 session resumption"
> ** May affect any and all Solr SSL users, although noted only in tests when 
> "clientAuth" was configured to be false
> ** Causes non-reproducing test failures, and sporadic end user exceptions 
> with a root cause of "javax.net.ssl.SSLException: Received fatal alert: 
> internal_error "
> ** SSL Debugging may indicate "Fatal (INTERNAL_ERROR): Session has no PSK"
> ** {noformat}
>[junit4]   2> Caused by: javax.net.ssl.SSLException: Received fatal alert: 
> 
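The quoted issue suggests disabling "TLSv1.3" in JVM security properties as a last resort. A narrower, hedged alternative (not something the issue itself prescribes) is to pin the protocol per connection rather than JVM-wide; a stdlib-only JSSE sketch:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class PinTls12 {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        // getDefaultSSLParameters() returns a copy; mutating it does not
        // change the shared context
        SSLParameters params = ctx.getDefaultSSLParameters();
        // Restrict the handshake to TLSv1.2, sidestepping the TLSv1.3
        // session-resumption bugs described above for just this connection
        params.setProtocols(new String[] {"TLSv1.2"});
        for (String p : params.getProtocols()) {
            System.out.println(p); // prints TLSv1.2
        }
        // Apply `params` to the SSLSocket/SSLEngine created from this context.
    }
}
```

The same effect JVM-wide is what the `jdk.tls.client.protocols` system property or the security-properties edit achieves, at the cost of affecting every TLS connection in the process.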

[jira] [Commented] (SOLR-13338) HdfsAutoAddReplicasIntegrationTest failures

2019-06-14 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864422#comment-16864422
 ] 

Kevin Risden commented on SOLR-13338:
-

Now that Jetty was upgraded (SOLR-13413 / SOLR-13541), I will check on the 
failures for this test and see if they start to get better.

> HdfsAutoAddReplicasIntegrationTest failures
> ---
>
> Key: SOLR-13338
> URL: https://issues.apache.org/jira/browse/SOLR-13338
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Minor
>
> HdfsAutoAddReplicasIntegrationTest failures have increased after SOLR-13330 
> (previously the test failed a different way with SOLR-13060). The failures are 
> starting to reproduce, and beasting causes failures locally; they fail the same 
> way each time. Planning to figure out what is going on.






[jira] [Comment Edited] (SOLR-13541) Upgrade Jetty to 9.4.19.v20190610

2019-06-13 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863154#comment-16863154
 ] 

Kevin Risden edited comment on SOLR-13541 at 6/13/19 2:50 PM:
--

Jetty 9.4.15+ has the endpointIdentificationAlgorithm enabled by default 
(https://github.com/eclipse/jetty.project/issues/3454) which causes the above 
error. Jetty 9.4.16+ has https://github.com/eclipse/jetty.project/issues/3464 
related to improving the situation. We might need some tweaks to our Jetty 
SslContextFactory.
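For context on what changed: Jetty 9.4.15+ turns on the JSSE endpoint identification algorithm ("HTTPS"), which is what triggers hostname/SAN verification during the handshake. A stdlib-only sketch of the underlying knob (the Jetty SslContextFactory setter itself is not shown here):

```java
import javax.net.ssl.SSLParameters;

public class EndpointIdDemo {
    public static void main(String[] args) {
        SSLParameters params = new SSLParameters();
        // JSSE default: no endpoint identification requested
        System.out.println(params.getEndpointIdentificationAlgorithm()); // null
        // Setting "HTTPS" makes the TLS layer verify the peer hostname
        // against the certificate's CN/SAN entries during the handshake,
        // effectively what Jetty 9.4.15+ now enables by default
        params.setEndpointIdentificationAlgorithm("HTTPS");
        System.out.println(params.getEndpointIdentificationAlgorithm()); // HTTPS
    }
}
```

Any tweak to our Jetty SslContextFactory would amount to deciding whether this algorithm stays enabled for intra-cluster connections with self-signed test certificates.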


was (Author: risdenk):
Jetty 9.4.15+ has the endpointIdentificationAlgorithm enabled by default 
(https://github.com/eclipse/jetty.project/issues/3454) which causes the above 
error.

> Upgrade Jetty to 9.4.19.v20190610
> -
>
> Key: SOLR-13541
> URL: https://issues.apache.org/jira/browse/SOLR-13541
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: _test.res
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>







[jira] [Commented] (SOLR-13541) Upgrade Jetty to 9.4.19.v20190610

2019-06-13 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863154#comment-16863154
 ] 

Kevin Risden commented on SOLR-13541:
-

Jetty 9.4.15+ has the endpointIdentificationAlgorithm enabled by default 
(https://github.com/eclipse/jetty.project/issues/3454) which causes the above 
error.

> Upgrade Jetty to 9.4.19.v20190610
> -
>
> Key: SOLR-13541
> URL: https://issues.apache.org/jira/browse/SOLR-13541
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: _test.res
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>







[jira] [Comment Edited] (SOLR-13541) Upgrade Jetty to 9.4.19.v20190610

2019-06-13 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863148#comment-16863148
 ] 

Kevin Risden edited comment on SOLR-13541 at 6/13/19 2:44 PM:
--

[~erickerickson] - Pulled this out of the logs. I wonder if our tests aren't 
setting up the SAN (subject alternative names) correctly in the TLS/SSL 
certificate for localhost/127.0.0.1 TLS/SSL testing. There has been a push to 
move from CN -> SAN checking in certificates, and browsers/JDKs/etc have been 
making that change. It looks like it accounts for at least a few of the test failures.

{code:java}
   [junit4]   2> Caused by: java.security.cert.CertificateException: No subject 
alternative names matching IP address 127.0.0.1 found
   [junit4]   2>at 
sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168)
   [junit4]   2>at 
sun.security.util.HostnameChecker.match(HostnameChecker.java:94)
   [junit4]   2>at 
sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455)
   [junit4]   2>at 
sun.security.ssl.AbstractTrustManagerWrapper.checkAdditionalTrust(SSLContextImpl.java:1068)
   [junit4]   2>at 
sun.security.ssl.AbstractTrustManagerWrapper.checkServerTrusted(SSLContextImpl.java:1007)
   [junit4]   2>at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1601)
   [junit4]   2>... 22 more
{code}



was (Author: risdenk):
[~erickerickson] - Pulled this out of the logs. I wonder if our tests aren't 
setting up the SAN (subject alternative names) correctly for localhost TLS/SSL 
testing. There has been a push to move from CN -> SAN checking in certificates, 
and browsers/JDKs/etc have been making that change. It looks like it accounts 
for at least a few of the test failures.

{code:java}
   [junit4]   2> Caused by: java.security.cert.CertificateException: No subject 
alternative names matching IP address 127.0.0.1 found
   [junit4]   2>at 
sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168)
   [junit4]   2>at 
sun.security.util.HostnameChecker.match(HostnameChecker.java:94)
   [junit4]   2>at 
sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455)
   [junit4]   2>at 
sun.security.ssl.AbstractTrustManagerWrapper.checkAdditionalTrust(SSLContextImpl.java:1068)
   [junit4]   2>at 
sun.security.ssl.AbstractTrustManagerWrapper.checkServerTrusted(SSLContextImpl.java:1007)
   [junit4]   2>at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1601)
   [junit4]   2>... 22 more
{code}


> Upgrade Jetty to 9.4.19.v20190610
> -
>
> Key: SOLR-13541
> URL: https://issues.apache.org/jira/browse/SOLR-13541
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: _test.res
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>







[jira] [Commented] (SOLR-13541) Upgrade Jetty to 9.4.19.v20190610

2019-06-13 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863148#comment-16863148
 ] 

Kevin Risden commented on SOLR-13541:
-

[~erickerickson] - Pulled this out of the logs. I wonder if our tests aren't 
setting up the SAN (subject alternative names) correctly for localhost TLS/SSL 
testing. There has been a push to move from CN -> SAN checking in certificates, 
and browsers/JDKs/etc have been making that change. It looks like it accounts 
for at least a few of the test failures.

{code:java}
   [junit4]   2> Caused by: java.security.cert.CertificateException: No subject 
alternative names matching IP address 127.0.0.1 found
   [junit4]   2>at 
sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168)
   [junit4]   2>at 
sun.security.util.HostnameChecker.match(HostnameChecker.java:94)
   [junit4]   2>at 
sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455)
   [junit4]   2>at 
sun.security.ssl.AbstractTrustManagerWrapper.checkAdditionalTrust(SSLContextImpl.java:1068)
   [junit4]   2>at 
sun.security.ssl.AbstractTrustManagerWrapper.checkServerTrusted(SSLContextImpl.java:1007)
   [junit4]   2>at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1601)
   [junit4]   2>... 22 more
{code}


> Upgrade Jetty to 9.4.19.v20190610
> -
>
> Key: SOLR-13541
> URL: https://issues.apache.org/jira/browse/SOLR-13541
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: _test.res
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>







[jira] [Commented] (SOLR-13413) suspicious test failures caused by jetty TimeoutException related to using HTTP2

2019-06-12 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16862229#comment-16862229
 ] 

Kevin Risden commented on SOLR-13413:
-

Looks like Jetty 9.4.19 was just released:

https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.19.v20190610

> suspicious test failures caused by jetty TimeoutException related to using 
> HTTP2
> 
>
> Key: SOLR-13413
> URL: https://issues.apache.org/jira/browse/SOLR-13413
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.0
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: 
> nocommit_TestDistributedStatsComponentCardinality_trivial-no-http2.patch
>
>
> There is evidence in some recent jenkins failures that we may have some manner 
> of bug in our http2 client/server code that can cause intra-node query 
> requests to stall / timeout non-reproducibly.
> In at least one known case, forcing the jetty & SolrClients used in the test 
> to use http1.1, seems to prevent these test failures.






[jira] [Comment Edited] (SOLR-13434) OpenTracing support for Solr

2019-06-12 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16862025#comment-16862025
 ] 

Kevin Risden edited comment on SOLR-13434 at 6/12/19 11:33 AM:
---

[~caomanhdat] I used "ant run-maven-build -DskipTests=true" when working with 
the Hadoop 3 upgrade to fix maven build issues. 


was (Author: risdenk):
[~caomanhdat] I used "ant run-maven-build" when working with the Hadoop 3 
upgrade to fix maven build issues. 

> OpenTracing support for Solr
> 
>
> Key: SOLR-13434
> URL: https://issues.apache.org/jira/browse/SOLR-13434
> Project: Solr
>  Issue Type: New Feature
>Reporter: Shalin Shekhar Mangar
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-13434.patch
>
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> [OpenTracing|https://opentracing.io/] is a vendor neutral API and 
> infrastructure for distributed tracing. Many OSS tracers such as Jaeger, 
> OpenZipkin, and Apache SkyWalking, as well as commercial tools, support OpenTracing 
> APIs. Ideally, we can implement it once and have integrations for popular 
> tracers like we have with metrics and prometheus.
> I'm aware of SOLR-9641, but HTrace has since been retired from the incubator 
> for lack of activity, so this is a fresh attempt at solving this problem.






[jira] [Commented] (SOLR-13434) OpenTracing support for Solr

2019-06-12 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16862025#comment-16862025
 ] 

Kevin Risden commented on SOLR-13434:
-

[~caomanhdat] I used "ant run-maven-build" when working with the Hadoop 3 
upgrade to fix maven build issues. 

> OpenTracing support for Solr
> 
>
> Key: SOLR-13434
> URL: https://issues.apache.org/jira/browse/SOLR-13434
> Project: Solr
>  Issue Type: New Feature
>Reporter: Shalin Shekhar Mangar
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-13434.patch
>
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> [OpenTracing|https://opentracing.io/] is a vendor neutral API and 
> infrastructure for distributed tracing. Many OSS tracers such as Jaeger, 
> OpenZipkin, and Apache SkyWalking, as well as commercial tools, support OpenTracing 
> APIs. Ideally, we can implement it once and have integrations for popular 
> tracers like we have with metrics and prometheus.
> I'm aware of SOLR-9641, but HTrace has since been retired from the incubator 
> for lack of activity, so this is a fresh attempt at solving this problem.






[jira] [Commented] (SOLR-13452) Update the lucene-solr build from Ivy+Ant+Maven (shadow build) to Gradle.

2019-06-11 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861518#comment-16861518
 ] 

Kevin Risden commented on SOLR-13452:
-

[~markrmil...@gmail.com] and [~joel.bernstein] - yea, that sounds correct. If you 
try to use the esri functions today, you will get an error; I missed adding the 
new dependency when upgrading Calcite. I'm not sure about the materialize 
dependencies. The Calcite integration is very limited today, so we probably 
haven't hit any of those classes. It would be good to open a Jira to track down 
whether we should include those dependencies separately.

> Update the lucene-solr build from Ivy+Ant+Maven (shadow build) to Gradle.
> -
>
> Key: SOLR-13452
> URL: https://issues.apache.org/jira/browse/SOLR-13452
> Project: Solr
>  Issue Type: Improvement
>  Components: Build
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: master (9.0)
>
>
> I took some things from the great work that Dat did in 
> [https://github.com/apache/lucene-solr/tree/jira/gradle] and took the ball a 
> little further.
>  
> When working with gradle in sub modules directly, I recommend 
> [https://github.com/dougborg/gdub]
> This gradle branch uses the following plugin for version locking, version 
> configuration and version consistency across modules: 
> [https://github.com/palantir/gradle-consistent-versions]
>  
>  https://github.com/apache/lucene-solr/tree/jira/SOLR-13452_gradle_3






[jira] [Commented] (SOLR-9952) S3BackupRepository

2019-05-30 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852482#comment-16852482
 ] 

Kevin Risden commented on SOLR-9952:


[~varunthacker] - I'm not entirely sure what your question is about params 
conflicting. Hrishikesh and I talked about this earlier on this ticket (Jan 13, 
2017): the parameters shouldn't conflict if you use absolute parameters or 
rename the system properties in solr.xml.

> S3BackupRepository
> --
>
> Key: SOLR-9952
> URL: https://issues.apache.org/jira/browse/SOLR-9952
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: 
> 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, 
> 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr 
> on S3.pdf, core-site.xml.template
>
>
> I'd like to have a backup repository implementation that allows snapshotting 
> to AWS S3






[jira] [Commented] (SOLR-9952) S3BackupRepository

2019-05-30 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852372#comment-16852372
 ] 

Kevin Risden commented on SOLR-9952:


[~Goodman] - I understand the frustration. With the Hadoop 3 work in Solr 8.0+, 
you will be MUCH better off as far as S3 goes. The hadoop-aws jars in Hadoop 
2.7.x are very old now. There have been lots of improvements, but they require 
upgrading the entire Hadoop package at once. If possible, I would try Solr 8.1.1 
and see if you run into the same things.

In theory, on Solr 8.0+ you should be able to use s3a and not run into the 
connection pool shutdown issues. I didn't play with backup/restore, but instead 
tested running Solr collections off of s3a. Here is a reference to what I 
tried: [https://github.com/risdenk/solr-s3a-testing]. I also tried against 
real S3 to make sure I didn't miss anything.

> S3BackupRepository
> --
>
> Key: SOLR-9952
> URL: https://issues.apache.org/jira/browse/SOLR-9952
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: 
> 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, 
> 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr 
> on S3.pdf, core-site.xml.template
>
>
> I'd like to have a backup repository implementation that allows snapshotting 
> to AWS S3






[jira] [Commented] (SOLR-13434) OpenTracing support for Solr

2019-05-23 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846757#comment-16846757
 ] 

Kevin Risden commented on SOLR-13434:
-

[~caomanhdat] - might be easier to review as a PR? Nothing wrong with patches 
just seems like a big change.

> OpenTracing support for Solr
> 
>
> Key: SOLR-13434
> URL: https://issues.apache.org/jira/browse/SOLR-13434
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shalin Shekhar Mangar
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-13434.patch
>
>
> [OpenTracing|https://opentracing.io/] is a vendor neutral API and 
> infrastructure for distributed tracing. Many OSS tracers such as Jaeger, 
> OpenZipkin, and Apache SkyWalking, as well as commercial tools, support OpenTracing 
> APIs. Ideally, we can implement it once and have integrations for popular 
> tracers like we have with metrics and prometheus.
> I'm aware of SOLR-9641, but HTrace has since been retired from the incubator 
> for lack of activity, so this is a fresh attempt at solving this problem.






[jira] [Commented] (LUCENE-8807) Change all download URLs in build files to HTTPS

2019-05-21 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844899#comment-16844899
 ] 

Kevin Risden commented on LUCENE-8807:
--

[~thetaphi] - I think there is one typo in the patch from a quick review:



Should be https instead of http2

> Change all download URLs in build files to HTTPS
> 
>
> Key: LUCENE-8807
> URL: https://issues.apache.org/jira/browse/LUCENE-8807
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 8.1
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 7.7.2, master (9.0), 8.2, 8.1.1
>
> Attachments: LUCENE-8807.patch, LUCENE-8807.patch
>
>
> At least for Lucene this is not a security issue, because we have checksums 
> for all downloaded JAR dependencies, but ASF asked all projects to ensure 
> that download URLs for dependencies are using HTTPS:
> {quote}
> [...] Projects like Lucene do checksum whitelists of
> all their build dependencies, and you may wish to consider that as a
> protection against threats beyond just MITM [...]
> {quote}
> This patch fixes the URLs for most files referenced in {{*build.xml}} and 
> {{*ivy*.xml}} to HTTPS. There are a few data files in benchmark which use 
> HTTP only, but that's not critical and I added a TODO. Some were broken already.
> I removed the "uk.maven.org" workarounds for Maven, as this does not work 
> with HTTPS. By keeping those inside, we break the whole chain of trust, as 
> any non-working HTTPS download would fall back to the insecure uk.maven.org 
> Maven mirror.
> As the great Chinese firewall is changing all the time, we should just wait 
> for somebody to complain.






[jira] [Resolved] (SOLR-13112) Upgrade jackson to 2.9.8

2019-05-10 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-13112.
-
Resolution: Fixed

Re-resolving after pushing to branch_7_7

> Upgrade jackson to 2.9.8
> 
>
> Key: SOLR-13112
> URL: https://issues.apache.org/jira/browse/SOLR-13112
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6
> Environment: RedHat Linux.    May run from RHEL versions 5, 6 or 7 
> but this issue is from Sonatype component scan and should be independent of 
> Linux platform version.
>Reporter: RobertHathaway
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: SOLR-13112.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We can't move to Solr 7 without fixing this issue flagged by Sonatype scan Of 
> Solr - 7.6.0 Build,
> Using Scanner 1.56.0-01
> Threat Level 8   Against Solr v7.6.  com.fasterxml.jackson.core : 
> jackson-databind : 2.9.6
> FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to 
> execute arbitrary code by leveraging failure to block the slf4j-ext class 
> from polymorphic deserialization.
> http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14718






[jira] [Commented] (SOLR-13461) Update Parallel SQL docs to be very clear select * isn't supported.

2019-05-10 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837444#comment-16837444
 ] 

Kevin Risden commented on SOLR-13461:
-

I would guess that the score addition, and select * no longer working, are due to 
this change:

[https://github.com/apache/lucene-solr/commit/ec6ee96ae6df1fdb2fffd881b45cb48670a10c5b#diff-378575439f2fa63a836101b4297e7ef0R259]

> Update Parallel SQL docs to be very clear select * isn't supported.
> ---
>
> Key: SOLR-13461
> URL: https://issues.apache.org/jira/browse/SOLR-13461
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 8.0
>Reporter: Eric Pugh
>Priority: Minor
>
> Small tweak to the documentation to really highlight that select * is not supported.






[jira] [Commented] (SOLR-13112) Upgrade jackson to 2.9.8

2019-05-10 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837433#comment-16837433
 ] 

Kevin Risden commented on SOLR-13112:
-

Sounds good [~ctargett] - I'll take care of the commit. 

> Upgrade jackson to 2.9.8
> 
>
> Key: SOLR-13112
> URL: https://issues.apache.org/jira/browse/SOLR-13112
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6
> Environment: RedHat Linux.    May run from RHEL versions 5, 6 or 7 
> but this issue is from Sonatype component scan and should be independent of 
> Linux platform version.
>Reporter: RobertHathaway
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: SOLR-13112.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We can't move to Solr 7 without fixing this issue flagged by Sonatype scan Of 
> Solr - 7.6.0 Build,
> Using Scanner 1.56.0-01
> Threat Level 8   Against Solr v7.6.  com.fasterxml.jackson.core : 
> jackson-databind : 2.9.6
> FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to 
> execute arbitrary code by leveraging failure to block the slf4j-ext class 
> from polymorphic deserialization.
> http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14718






[jira] [Commented] (SOLR-13338) HdfsAutoAddReplicasIntegrationTest failures

2019-05-10 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837393#comment-16837393
 ] 

Kevin Risden commented on SOLR-13338:
-

So a lot of these issues look like 
[SOLR-13413|https://issues.apache.org/jira/browse/SOLR-13413], where there is 
just a timeout while waiting. I haven't had a chance to test the Jetty change 
against this test.

> HdfsAutoAddReplicasIntegrationTest failures
> ---
>
> Key: SOLR-13338
> URL: https://issues.apache.org/jira/browse/SOLR-13338
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Minor
>
> HdfsAutoAddReplicasIntegrationTest failures have increased after SOLR-13330 
> (previously it failed a different way with SOLR-13060), but they are starting 
> to reproduce, and beasting causes failures locally. They fail the same way 
> each time. Planning to figure out what is going on.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13112) Upgrade jackson to 2.9.8

2019-05-10 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16837388#comment-16837388
 ] 

Kevin Risden commented on SOLR-13112:
-

[~ctargett] - are you taking care of the commit? If not I can backport the 
change this afternoon (~4 hours from now).

> Upgrade jackson to 2.9.8
> 
>
> Key: SOLR-13112
> URL: https://issues.apache.org/jira/browse/SOLR-13112
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6
>Environment: RedHat Linux. May run on RHEL versions 5, 6, or 7, 
> but this issue is from a Sonatype component scan and should be independent of 
> the Linux platform version.
>Reporter: RobertHathaway
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: SOLR-13112.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We can't move to Solr 7 without fixing this issue, flagged by a Sonatype scan 
> (Scanner 1.56.0-01) of the Solr 7.6.0 build:
> Threat Level 8 against Solr v7.6: com.fasterxml.jackson.core : 
> jackson-databind : 2.9.6
> FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to 
> execute arbitrary code by leveraging failure to block the slf4j-ext class 
> from polymorphic deserialization.
> http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14718



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13461) Update Parallel SQL docs to be very clear select * isn't supported.

2019-05-10 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16837358#comment-16837358
 ] 

Kevin Risden commented on SOLR-13461:
-

Hmm, so "select *" used to work. Do you have the error message you get? I 
can try to look and see why select * doesn't work anymore. I found a comment 
about this being broken from 2017 that I somehow missed.

https://issues.apache.org/jira/browse/SOLR-8847?focusedCommentId=16014628&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16014628

Apparently the score field is being added somehow? I don't remember adding 
that, but it seems like an issue since select * is useful.

[~joel.bernstein] - do you know what is happening with the score field and 
select *?
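
Pending the docs update, the workaround is to enumerate the fields explicitly 
rather than using select *. A sketch of such a query (collection and field 
names are illustrative):

```sql
-- "SELECT * FROM collection1 LIMIT 10" is what the docs should flag as
-- unsupported; enumerate the fields explicitly instead (names illustrative):
SELECT id, field_i, str_s FROM collection1 ORDER BY field_i DESC LIMIT 10
```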

> Update Parallel SQL docs to be very clear select * isn't supported.
> ---
>
> Key: SOLR-13461
> URL: https://issues.apache.org/jira/browse/SOLR-13461
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 8.0
>Reporter: Eric Pugh
>Priority: Minor
>
> Small tweak to documentation to really highlight select * not supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13413) suspicious test failures caused by jetty TimeoutException related to using HTTP2

2019-05-09 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836386#comment-16836386
 ] 

Kevin Risden commented on SOLR-13413:
-

[~caomanhdat] - thanks for tracking this down!

> suspicious test failures caused by jetty TimeoutException related to using 
> HTTP2
> 
>
> Key: SOLR-13413
> URL: https://issues.apache.org/jira/browse/SOLR-13413
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: 
> nocommit_TestDistributedStatsComponentCardinality_trivial-no-http2.patch
>
>
> There is evidence in some recent jenkins failures that we may have some manner 
> of bug in our http2 client/server code that can cause intra-node query 
> requests to stall / timeout non-reproducibly.
> In at least one known case, forcing the jetty & SolrClients used in the test 
> to use http1.1, seems to prevent these test failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13294) TestSQLHandler failures on windows jenkins machines

2019-05-01 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831146#comment-16831146
 ] 

Kevin Risden commented on SOLR-13294:
-

[~joel.bernstein] - Can this be resolved? Looks like no failures recently?

> TestSQLHandler failures on windows jenkins machines
> ---
>
> Key: SOLR-13294
> URL: https://issues.apache.org/jira/browse/SOLR-13294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Joel Bernstein
>Priority: Major
>
> _Windows_ jenkins builds frequently - but _not always_ - fail on 
> {{TestSQLHandler}} @ L236
> In cases where a windows jenkins build finds a failing seed for 
> {{TestSQLHandler}}, and the same jenkins build attempts to reproduce using 
> that seed, it reliably encounters a *different* failure earlier in the test 
> (related to docValues being missing from a sort field).
> These seeds do not fail for me when attempted on a Linux machine, and my own 
> attempts at beasting on Linux haven't turned up any similar failures.
> Here's an example from jenkins - the exact same pattern has occurred in other 
> windows jenkins builds on other branches at the exact same asserts:
> [https://jenkins.thetaphi.de/view/Lucene-Solr/job/Lucene-Solr-8.0-Windows/57/]
> {noformat}
> Using Java: 32bit/jdk1.8.0_172 -server -XX:+UseConcMarkSweepGC
> ...
> Checking out Revision 0376bc0052a53480ecb2edea7dfe58298bda5deb 
> (refs/remotes/origin/branch_8_0)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
> -Dtests.locale=id -Dtests.timezone=BST -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 23.3s J0 | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:236)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
> -Dtests.badapples=true -Dtests.locale=id -Dtests.timezone=BST 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   20.8s J0 | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.io.IOException: --> 
> http://127.0.0.1:61309/collection1_shard2_replica_n1:Failed to execute 
> sqlQuery 'select id, field_i, str_s, field_i_p, field_f_p, field_d_p, 
> field_l_p from collection1 where (text='()' OR text='') AND 
> text='' order by field_i desc' against JDBC connection 
> 'jdbc:calcitesolr:'.
>[junit4]> Error while executing SQL "select id, field_i, str_s, 
> field_i_p, field_f_p, field_d_p, field_l_p from collection1 where 
> (text='()' OR text='') AND text='' order by field_i desc": 
> java.io.IOException: java.util.concurrent.ExecutionException: 
> java.io.IOException: --> 
> http://127.0.0.1:61309/collection1_shard2_replica_n1/:id{type=string,properties=indexed,stored,sortMissingLast,uninvertible}
>  must have DocValues to use this feature.
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
>[junit4]>at 
> org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:215)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2617)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:145)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> 

[jira] [Commented] (SOLR-13040) Harden TestSQLHandler.

2019-05-01 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831147#comment-16831147
 ] 

Kevin Risden commented on SOLR-13040:
-

[~joel.bernstein] - Can this also be resolved, since it looks like SOLR-13294 
fixed the issues?

> Harden TestSQLHandler.
> --
>
> Key: SOLR-13040
> URL: https://issues.apache.org/jira/browse/SOLR-13040
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Joel Bernstein
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10053) TestSolrCloudWithDelegationTokens failures

2019-04-29 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-10053.
-
Resolution: Fixed

This test isn't disabled anymore and hasn't been failing, and after the upgrade 
to Hadoop 3, HADOOP-14044 would have been fixed as well.

> TestSolrCloudWithDelegationTokens failures
> --
>
> Key: SOLR-10053
> URL: https://issues.apache.org/jira/browse/SOLR-10053
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: fail.log, stdout, stdout, stdout, stdout
>
>
> The TestSolrCloudWithDelegationTokens tests fail often at Jenkins. I have 
> been so far unable to reproduce them using the failing seeds. However, 
> beasting these tests seems to cause failures (once after about 10-12 runs).
> Latest Jenkins failure: 
> https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.4/12/
> It wasn't apparent what caused these failures. To cut down the noise on 
> Jenkins, I propose that we disable the test with @AwaitsFix (or bad apple) 
> annotation and continue to debug and fix this test.
> WDYT, [~markrmil...@gmail.com]?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9586) TestSolrCloudWithDelegationTokens fails regularly on Jenkins runs

2019-04-29 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-9586.

Resolution: Fixed

Looks like this was fixed in SOLR-10053

> TestSolrCloudWithDelegationTokens fails regularly on Jenkins runs
> -
>
> Key: SOLR-9586
> URL: https://issues.apache.org/jira/browse/SOLR-9586
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Priority: Major
>
> Mainly on Windows, sometimes on Solaris.  Failing seeds don't reproduce on a 
> Mac.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13394) Change default GC from CMS to G1

2019-04-26 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826974#comment-16826974
 ] 

Kevin Risden commented on SOLR-13394:
-

[~ichattopadhyaya] - Was this supposed to be changed on the 8.x branch or just 
on master (9.x) where the switch to JDK 11 was made?

> Change default GC from CMS to G1
> 
>
> Key: SOLR-13394
> URL: https://issues.apache.org/jira/browse/SOLR-13394
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.1
>
> Attachments: SOLR-13394.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> CMS has been deprecated in new versions of Java 
> (http://openjdk.java.net/jeps/291). This issue is to switch Solr default from 
> CMS to G1.
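
In a deployed node, the collector choice surfaces through the GC_TUNE settings 
in solr.in.sh; a minimal sketch of a G1 configuration (flag values are 
illustrative, not necessarily the exact defaults the SOLR-13394 patch ships):

```shell
# Illustrative solr.in.sh override switching the JVM to G1; the exact flags
# shipped by SOLR-13394 may differ.
GC_TUNE="-XX:+UseG1GC \
  -XX:+ParallelRefProcEnabled \
  -XX:MaxGCPauseMillis=250"
```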



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13414:

Attachment: SOLR-13414.patch

> SolrSchema - Avoid NPE if Luke returns field with no type defined
> -
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: SOLR-13414.patch, SOLR-13414.patch, 
> before_starting_solr.png, command_prompt.png, luke_out.xml, managed-schema, 
> new_solr-8983-console.log, new_solr.log, solr-8983-console.log, 
> solr-8983-console.log, solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Summary*
> If the underlying Lucene index has fields defined but no type, SolrSchema 
> fails with an NPE. The index most likely has issues, and it would be better to 
> delete/recreate it. This ticket adds a null check to prevent the NPE so Solr 
> won't break on a potentially invalid index.
> *Initial Description*
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> 

[jira] [Updated] (SOLR-13328) HostnameVerifier in HttpClientBuilder is ignored when HttpClientUtil creates connection

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13328:

Fix Version/s: (was: 8.0.1)
   (was: 8.1)

> HostnameVerifier in HttpClientBuilder is ignored when HttpClientUtil creates 
> connection
> ---
>
> Key: SOLR-13328
> URL: https://issues.apache.org/jira/browse/SOLR-13328
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 8.0
>Reporter: jefferyyuan
>Priority: Minor
>
> In SolrHttpClientBuilder, we can configure a lot of things including 
> HostnameVerifier.
> We have code like below:
> HttpClientUtil.setHttpClientBuilder(new CommonNameVerifierClientConfigurer());
> CommonNameVerifierClientConfigurer will set our own HostnameVerifier which 
> checks subject dn name.
> But this doesn't work, because when we create the SSLConnectionSocketFactory 
> in HttpClientUtil.DefaultSchemaRegistryProvider.getSchemaRegistry() we don't 
> check or use the HostnameVerifier from SolrHttpClientBuilder at all.
> The fix would be very simple: in 
> HttpClientUtil.DefaultSchemaRegistryProvider.getSchemaRegistry, if the 
> HostnameVerifier in SolrHttpClientBuilder is not null, use it; otherwise keep 
> the same logic as before.
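
The proposed fallback can be sketched in isolation (class and method names are 
hypothetical, not the actual HttpClientUtil code):

```java
import javax.net.ssl.HostnameVerifier;

// Hypothetical sketch of the proposed fix: if the builder supplies a
// HostnameVerifier, use it; otherwise keep the previous default behavior.
// Names are illustrative, not the real HttpClientUtil code.
public class VerifierFallback {
    static HostnameVerifier choose(HostnameVerifier fromBuilder,
                                   HostnameVerifier previousDefault) {
        return fromBuilder != null ? fromBuilder : previousDefault;
    }

    public static void main(String[] args) {
        HostnameVerifier custom = (hostname, session) -> true;
        HostnameVerifier oldDefault = (hostname, session) -> false;
        // A configured verifier wins; without one, the old default is kept.
        System.out.println(choose(custom, oldDefault) == custom);   // prints true
        System.out.println(choose(null, oldDefault) == oldDefault); // prints true
    }
}
```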



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13414:

Description: 
*Summary*
If the underlying Lucene index has fields defined but no type, SolrSchema fails 
with an NPE. The index most likely has issues, and it would be better to 
delete/recreate it. This ticket adds a null check to prevent the NPE so Solr 
won't break on a potentially invalid index.
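
The guard described above can be sketched in isolation (class and method names 
are hypothetical, not the actual SolrSchema code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the null check added here: fields that Luke reports
// without a type are skipped instead of triggering an NPE downstream.
// Class and method names are illustrative, not the real SolrSchema code.
public class LukeFieldGuard {
    // Maps field name -> type name; a null value models a field returned
    // with no type (likely a damaged index).
    static Map<String, String> keepTypedFields(Map<String, String> lukeFields) {
        Map<String, String> typed = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : lukeFields.entrySet()) {
            if (e.getValue() == null) {
                continue; // previously this case led to the NPE
            }
            typed.put(e.getKey(), e.getValue());
        }
        return typed;
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("id", "string");
        fields.put("broken_field", null);
        System.out.println(keepTypedFields(fields).keySet()); // prints [id]
    }
}
```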

*Initial Description*
When attempting to create a JDBC sql query against a large collection (400m + 
records) we get a null error.

After [initial discussion in 
solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
 I have been asked to open this ticket - The exception thrown does not provide 
sufficient detail to understand the underlying problem. Its it thought to be an 
issue with the schema not initialising correctly. 

Attached is the managed-schema after a downconfig.

Stack trace from email thread:

*Solr Admin UI Logging*
{code:java}
java.io.IOException: Failed to execute sqlQuery 'select id from document limit 
10' against JDBC connection 'jdbc:calcitesolr:'.
Error while executing SQL "select id from document limit 10": null
at 
org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
at 
org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
at 
org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
at 
org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
at 
org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
at 
org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
at 
org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
at org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
at 
org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
at 
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:502)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
 

[jira] [Updated] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13414:

Priority: Minor  (was: Major)

> SolrSchema - Avoid NPE if Luke returns field with no type defined
> -
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
>

[jira] [Updated] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13414:

Fix Version/s: master (9.0)
   8.1
   7.7.2

> SolrSchema - Avoid NPE if Luke returns field with no type defined
> -
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)

[jira] [Updated] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13414:

Summary: SolrSchema - Avoid NPE if Luke returns field with no type defined  
(was: Sql Schema is not initializing)

> SolrSchema - Avoid NPE if Luke returns field with no type defined
> -
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
> 

[jira] [Assigned] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-13414:
---

Assignee: Kevin Risden

> SolrSchema - Avoid NPE if Luke returns field with no type defined
> -
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826264#comment-16826264
 ] 

Kevin Risden commented on SOLR-13414:
-

[~davebarnett] - we can use this ticket to add the null check. Will rename the 
title and can put a quick patch together.

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826206#comment-16826206
 ] 

Kevin Risden commented on SOLR-13414:
-

I think a reasonable fix would be to add a null check before the switch 
statement:

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L103

This would prevent adding the field as an option in SQL and avoid the issue you 
ran into.
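
The null check described above can be sketched as follows. This is a 
simplified, self-contained illustration, not the actual SolrSchema code: the 
class name, the Luke type strings, and the SQL type mapping are illustrative 
assumptions; the point is only that a field whose type is null is skipped 
before it ever reaches the switch.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class LukeFieldFilter {

    // Hypothetical mapping from a Luke-reported field type to a SQL type name.
    // Returning null for a null input is the proposed fix: no NPE, field skipped.
    static String toSqlType(String luceneType) {
        if (luceneType == null) {
            return null; // field has no type defined in the index
        }
        switch (luceneType) {
            case "string":  return "VARCHAR";
            case "plong":   return "BIGINT";
            case "pdouble": return "DOUBLE";
            default:        return "VARCHAR";
        }
    }

    // Keep only fields whose type is defined, so typeless leftovers
    // never become SQL columns.
    static Map<String, String> filterFields(Map<String, String> lukeFields) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : lukeFields.entrySet()) {
            String sqlType = toSqlType(e.getValue());
            if (sqlType != null) {
                out.put(e.getKey(), sqlType);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> luke = new LinkedHashMap<>();
        luke.put("id", "string");
        luke.put("County", "string");
        luke.put("COUNTY", null); // leftover field with no type, as in this report
        System.out.println(filterFields(luke));
    }
}
{code}

With this filter in place, a typeless field like COUNTY above simply doesn't 
appear in the SQL schema, instead of causing the NPE during initialization.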

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826193#comment-16826193
 ] 

Kevin Risden commented on SOLR-13414:
-

So Luke is looking at the actual index files. I would guess that somewhere 
along the way in Solr, COUNTY was defined and then deleted or changed to 
County. I think there were documents indexed at some point with the field name 
COUNTY. These documents were deleted, but segments still have SOME reference to 
COUNTY (i.e., the segments haven't been merged yet, so the deleted documents 
aren't fully removed from the index).

Long story short - I don't know of a way to delete that field fully from the 
Lucene index under the hood.

The workaround of adding the field back works, but then you could end up with 
documents in either COUNTY or County. I'd actually be curious how Solr handles 
two fields with the same name but different case when querying.

I think the SQL integration could add a check to make sure that we handle this 
case better, but it does highlight an interesting case with the underlying 
index.

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826170#comment-16826170
 ] 

Kevin Risden commented on SOLR-13414:
-

In the output I noticed something interesting:

There are both COUNTY and County - the same field name with different case. The 
managed-schema attached previously only has County in it.

So it looks like docs were indexed with field names differing in case. 
Reimporting would force all docs to use the right field name definition, I 
think, which is why you wouldn't see this issue after recreating the index.

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)

[jira] [Comment Edited] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826154#comment-16826154
 ] 

Kevin Risden edited comment on SOLR-13414 at 4/25/19 3:30 PM:
--

[~davebarnett] - Can you run this query and share the results (it shouldn't 
have anything sensitive):


{code:java}
http://SOLR_HOST:SOLR_PORT/solr/COLLECTION/admin/luke?numTerms=0
{code}

This should match the output of the following (in a slightly different 
format):

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L78


was (Author: risdenk):
[~davebarnett] - Can you run this query in your browser and share the results 
(it shouldn't have anything sensitive):


{code:java}
http://SOLR_HOST:SOLR_PORT/solr/COLLECTION/admin/luke?numTerms=0
{code}

This should match what the output of the following is (in a slightly different 
format):

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L78

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> 

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826158#comment-16826158
 ] 

Kevin Risden commented on SOLR-13414:
-

Assuming the output of the above is correct, the issue might be with the field 
"Field:COUNTY", since the debug logging logs each field before failing on the 
NPE.

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, managed-schema, new_solr-8983-console.log, new_solr.log, 
> solr-8983-console.log, solr-8983-console.log, solr-core-7.8.0-SNAPSHOT.jar, 
> solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> at 
> 

[jira] [Comment Edited] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826154#comment-16826154
 ] 

Kevin Risden edited comment on SOLR-13414 at 4/25/19 3:21 PM:
--

[~davebarnett] - Can you run this query in your browser and share the results 
(it shouldn't have anything sensitive):


{code:java}
http://SOLR_HOST:SOLR_PORT/solr/COLLECTION/admin/luke?numTerms=0
{code}

This should match what the output of the following is (in a slightly different 
format):

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L78


was (Author: risdenk):
[~davebarnett] - Can you run this query in your browser and share the results 
(it shouldn't have anything sensitive):


{code:java}
http://SOLR_HOST:SOLR_PORT/solr/COLLECTION/admin/luke?numTerms=0
{code}

This should match what the output of the following is (in a slightly different 
format):

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L78

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, managed-schema, new_solr-8983-console.log, new_solr.log, 
> solr-8983-console.log, solr-8983-console.log, solr-core-7.8.0-SNAPSHOT.jar, 
> solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> 

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826154#comment-16826154
 ] 

Kevin Risden commented on SOLR-13414:
-

[~davebarnett] - Can you run this query in your browser and share the results 
(it shouldn't have anything sensitive):


{code:java}
http://SOLR_HOST:SOLR_PORT/solr/COLLECTION/admin/luke?numTerms=0
{code}

This should match what the output of the following is (in a slightly different 
format):

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L78

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, managed-schema, new_solr-8983-console.log, new_solr.log, 
> solr-8983-console.log, solr-8983-console.log, solr-core-7.8.0-SNAPSHOT.jar, 
> solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> 

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826148#comment-16826148
 ] 

Kevin Risden commented on SOLR-13414:
-

Hmmm so does that mean that "luceneFieldInfo.getType()" is returning null and 
breaking the switch on line 103?

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L103

The javadocs for LukeResponse.FieldInfo don't say anything about null 
guarantees. 
* 
https://lucene.apache.org/solr/7_7_0/solr-solrj/org/apache/solr/client/solrj/response/LukeResponse.FieldInfo.html#getType()

Checked the code and there is nothing stopping it from being null there.
* 
https://github.com/apache/lucene-solr/blob/branch_7_7/solr/solrj/src/java/org/apache/solr/client/solrj/response/LukeResponse.java#L118

I think I can come up with a Luke request that would return the same result for 
that collection, so we can see what is getting returned. We should be able to 
do this without adding more logging yet.

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, managed-schema, new_solr-8983-console.log, new_solr.log, 
> solr-8983-console.log, solr-8983-console.log, solr-core-7.8.0-SNAPSHOT.jar, 
> solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> 

[jira] [Commented] (SOLR-12514) Rule-base Authorization plugin skips authorization if querying node does not have collection replica

2019-04-24 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825259#comment-16825259
 ] 

Kevin Risden commented on SOLR-12514:
-

From the notification email: CVE-2018-11802

> Rule-base Authorization plugin skips authorization if querying node does not 
> have collection replica
> 
>
> Key: SOLR-12514
> URL: https://issues.apache.org/jira/browse/SOLR-12514
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Affects Versions: 7.3.1
>Reporter: Mahesh Kumar Vasanthu Somashekar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 6.6.6, 7.7
>
> Attachments: SOLR-12514.patch, SOLR-12514.patch, Screen Shot 
> 2018-06-24 at 9.36.45 PM.png, demo.sh, security.json
>
>
> Solr serves client requests going through 3 steps - init(), authorize() and 
> handle-request ([link 
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L471]).
>  init() initializes all required information to be used by authorize(). 
> init() skips initializing if request is to be served remotely, which leads to 
> skipping authorization step ([link 
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L291]).
>  init() relies on 'cores' object which only has information of local node 
> (which is perfect as per design). It should actually be getting security 
> information (security.json) from zookeeper, which has global view of the 
> cluster.
>  
> Example:
> SolrCloud setup consists of 2 nodes (solr-7.3.1):
> {code:javascript}
> live_nodes: [
>  "localhost:8983_solr",
>  "localhost:8984_solr",
> ]
> {code}
> Two collections are created - 'collection-rf-1' with RF=1 and 
> 'collection-rf-2' with RF=2.
> Two users are created - 'collection-rf-1-user' and 'collection-rf-2-user'.
> Security configuration is as below (security.json attached):
> {code:javascript}
> "authorization":{
>   "class":"solr.RuleBasedAuthorizationPlugin",
>   "permissions":[
> { "name":"read", "collection":"collection-rf-2", 
> "role":"collection-rf-2", "index":1},
> { "name":"read", "collection":"collection-rf-1", 
> "role":"collection-rf-1", "index":2},
> { "name":"read", "role":"*", "index":3},
> ...
>   "user-role":
> { "collection-rf-1-user":[ "collection-rf-1"], "collection-rf-2-user":[ 
> "collection-rf-2"]},
> ...
> {code}
>  
> Basically, it's set up so that the 'collection-rf-1-user' user can only access 
> 'collection-rf-1' collection and 'collection-rf-2-user' user can only access 
> 'collection-rf-2' collection.
> Also note that 'collection-rf-1' collection replica is only on 
> 'localhost:8983_solr' node, whereas ''collection-rf-2' collection replica is 
> on both live nodes.
>  
> Authorization does not work as expected for 'collection-rf-1' collection:
> $ curl -u collection-rf-2-user:password 
> 'http://*localhost:8983*/solr/collection-rf-1/select?q=*:*'
> {code:html}
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-1/select. Reason:
>   Unauthorized request, Response code: 403
>  
>  
> {code}
> $ curl -u collection-rf-2-user:password 
> 'http://*localhost:8984*/solr/collection-rf-1/select?q=*:*'
> {code:javascript}
>  {
>"responseHeader":{
>  "zkConnected":true,
>  "status":0,
>  "QTime":0,
>  "params":{
>"q":"*:*"}},
>"response":{"numFound":0,"start":0,"docs":[]
>  }}
> {code}
>  
> Whereas authorization works perfectly for 'collection-rf-2' collection (as 
> both nodes have replica):
> $ curl -u collection-rf-1-user:password 
> 'http://*localhost:8984*/solr/collection-rf-2/select?q=*:*'
> {code:html}
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-2/select. Reason:
>   Unauthorized request, Response code: 403
>  
>  
> {code}
> $ curl -u collection-rf-1-user:password 
> 'http://*localhost:8983*/solr/collection-rf-2/select?q=*:*'
> {code:html}
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-2/select. Reason:
>   Unauthorized request, Response code: 403
>  
>  
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-23 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824344#comment-16824344
 ] 

Kevin Risden commented on SOLR-13414:
-

[~davebarnett] - Just to make sure I understand what has been tried/attached so 
far. Here is what I think should happen:

# Stop Solr
# Put modified solr-core jar in place (move old one out of the Solr install 
directory)
# Start Solr
# Try to run SQL query
# Check logs for lines with 'Field Info is'

I think something wasn't done properly because the new stacktrace from the Solr 
log has line numbers that match the original report. I would have expected new 
line numbers with the new jar since there were added lines. 

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, command_prompt.png, managed-schema, 
> solr-8983-console.log, solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> 

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-23 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824204#comment-16824204
 ] 

Kevin Risden commented on SOLR-13414:
-

[~davebarnett] - the System.out.println lines should go to the .out file if 
there is one in Solr. Not sure it will be in the log file. Depending on where 
you are storing Solr logs, you might be able to find the right file with 

{code:java}
grep -rnF 'Field Info is' PATH_TO_SOLR_INSTALL
{code}


> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, managed-schema, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
> 

[jira] [Updated] (SOLR-13414) Sql Schema is not initializing

2019-04-22 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13414:

Description: 
When attempting to create a JDBC sql query against a large collection (400m + 
records) we get a null error.

After [initial discussion in 
solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
 I have been asked to open this ticket - The exception thrown does not provide 
sufficient detail to understand the underlying problem. It is thought to be an 
issue with the schema not initialising correctly. 

Attached is the managed-schema after a downconfig.

Stack trace from email thread:

*Solr Admin UI Logging*
{code:java}
java.io.IOException: Failed to execute sqlQuery 'select id from document limit 
10' against JDBC connection 'jdbc:calcitesolr:'.
Error while executing SQL "select id from document limit 10": null
at 
org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
at 
org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
at 
org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
at 
org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
at 
org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
at 
org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
at 
org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
at org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
at 
org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
at 
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:502)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at 

[jira] [Updated] (SOLR-13414) Sql Schema is not initializing

2019-04-22 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13414:

Description: 
When attempting to create a JDBC sql query against a large collection (400m + 
records) we get a null error.

After [initial discussion in 
solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
 I have been asked to open this ticket - The exception thrown does not provide 
sufficient detail to understand the underlying problem. It is thought to be an 
issue with the schema not initialising correctly. 

Attached is the managed-schema after a downconfig.

Stack trace from email thread:

{code:java}
java.io.IOException: Failed to execute sqlQuery 'select id from document limit 
10' against JDBC connection 'jdbc:calcitesolr:'.
Error while executing SQL "select id from document limit 10": null
at 
org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
at 
org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
at 
org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
at 
org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
at 
org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
at 
org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
at 
org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
at org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
at 
org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
at 
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:502)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at 

[jira] [Updated] (SOLR-13414) Sql Schema is not initializing

2019-04-22 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13414:

Description: 
When attempting to create a JDBC sql query against a large collection (400m + 
records) we get a null error.

After [initial discussion in 
solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
 I have been asked to open this ticket - The exception thrown does not provide 
sufficient detail to understand the underlying problem. It is thought to be an 
issue with the schema not initialising correctly. 

Attached is the managed-schema after a downconfig.

  was:
When attempting to create a JDBC sql query against a large collection (400m + 
records) we get a null error.

After initial discussion in solr-user I have been asked to open this ticket - 
The exception thrown does not provide sufficient detail to understand the 
underlying problem. It is thought to be an issue with the schema not 
initialising correctly. 

Attached is the managed-schema after a downconfig.


> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, managed-schema, 
> solr-core-7.8.0-SNAPSHOT.jar
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13408) Cannot start/stop DaemonStream repeatedly

2019-04-16 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819013#comment-16819013
 ] 

Kevin Risden commented on SOLR-13408:
-

StreamExpressionTest is where some existing daemon open/close tests are.

> Cannot start/stop DaemonStream repeatedly
> -
>
> Key: SOLR-13408
> URL: https://issues.apache.org/jira/browse/SOLR-13408
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.7, 8.0, master (9.0)
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-13408.patch
>
>
> If I create a DaemonStream then use the API commands to stop it then start it 
> repeatedly, after the first time it's stopped/started, it cannot be stopped 
> again.
> DaemonStream.close() checks whether a local variable "closed" is true, and if 
> so does nothing. Otherwise it closes the stream then sets "closed" to true.
> However, when the stream is started again, "closed" is not set to false, 
> therefore the next time you try to stop the daemon, nothing happens and it 
> continues to run. One other consequence of this is that you can have orphan 
> threads running in the background. Say I
> {code:java}
> stop the daemon
> start it again
> create another one with the same ID
> {code}
> When the new one is created, this code is executed over in 
> StreamHandler.handleRequestBody:
> {code:java}
> daemons.remove(daemonStream.getId()).close();
> {code}
> which will not terminate the stream thread as above. Then the open() method 
> executes this:
> {code:java}
> this.streamRunner = new StreamRunner(runInterval, id);
> {code}
> leaving the thread running.
> Finally, there's an NPE if I try to start a non-existent daemon.
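
The stale-flag lifecycle described above can be sketched in isolation. This is a minimal, hypothetical reproduction — the MiniDaemon class and its methods are illustrative stand-ins, not Solr's actual DaemonStream API — showing why "closed" must be reset when the daemon is (re)opened:

```java
// Minimal sketch of the stale-"closed"-flag bug described above.
// MiniDaemon is hypothetical; it is not Solr's DaemonStream.
public class MiniDaemon {
    private volatile boolean closed = false;
    private Thread runner;

    public synchronized void open() {
        closed = false; // the fix: without this reset, a restarted daemon can never be closed again
        runner = new Thread(() -> {
            while (!closed) {
                try {
                    Thread.sleep(10); // stand-in for the daemon's run interval
                } catch (InterruptedException e) {
                    return; // interrupted by close()
                }
            }
        });
        runner.start();
    }

    public synchronized void close() throws InterruptedException {
        if (closed) {
            return; // this early-return is what makes a stale flag dangerous
        }
        closed = true;
        runner.interrupt();
        runner.join();
    }

    public boolean isRunning() {
        return runner != null && runner.isAlive();
    }

    public static void main(String[] args) throws Exception {
        MiniDaemon d = new MiniDaemon();
        d.open();
        d.close();
        d.open();  // restart: open() resets the flag
        d.close(); // stops again instead of silently returning
        if (d.isRunning()) {
            throw new AssertionError("daemon thread leaked");
        }
        System.out.println("ok");
    }
}
```

Dropping the `closed = false;` line in open() reproduces the reported behavior: the second close() returns immediately and the runner thread is orphaned.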






[jira] [Commented] (SOLR-13408) Cannot start/stop DaemonStream repeatedly

2019-04-16 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819011#comment-16819011
 ] 

Kevin Risden commented on SOLR-13408:
-

I think even just calling open twice would cause an issue. There is nothing 
stopping someone from opening the daemon stream programmatically twice.

> Cannot start/stop DaemonStream repeatedly
> -
>
> Key: SOLR-13408
> URL: https://issues.apache.org/jira/browse/SOLR-13408
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.7, 8.0, master (9.0)
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-13408.patch
>
>
> If I create a DaemonStream then use the API commands to stop it then start it 
> repeatedly, after the first time it's stopped/started, it cannot be stopped 
> again.
> DaemonStream.close() checks whether a local variable "closed" is true, and if 
> so does nothing. Otherwise it closes the stream then sets "closed" to true.
> However, when the stream is started again, "closed" is not set to false, 
> therefore the next time you try to stop the daemon, nothing happens and it 
> continues to run. One other consequence of this is that you can have orphan 
> threads running in the background. Say I
> {code:java}
> stop the daemon
> start it again
> create another one with the same ID
> {code}
> When the new one is created, this code is executed over in 
> StreamHandler.handleRequestBody:
> {code:java}
> daemons.remove(daemonStream.getId()).close();
> {code}
> which will not terminate the stream thread as above. Then the open() method 
> executes this:
> {code:java}
> this.streamRunner = new StreamRunner(runInterval, id);
> {code}
> leaving the thread running.
> Finally, there's an NPE if I try to start a non-existent daemon.






[jira] [Commented] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.

2019-04-12 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816499#comment-16816499
 ] 

Kevin Risden commented on SOLR-13293:
-

Ah, sorry, I missed the "ConcurrentUpdate" part. I saw "metrics-core" and thought 
it might be related to metrics. Sorry, I don't have any other ideas right now.

> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
> consuming and closing http response stream.
> -
>
> Key: SOLR-13293
> URL: https://issues.apache.org/jira/browse/SOLR-13293
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 8.0
>Reporter: Karl Stoney
>Priority: Minor
>
> Hi, 
> Testing out branch_8x, we're randomly seeing the following errors on a simple 
> 3 node cluster.  It doesn't appear to affect replication (the cluster remains 
> green).
> They come in bulk (literally 1000s at a time).
> There were no network issues at the time.
> {code:java}
> 16:53:01.492 [updateExecutor-4-thread-34-processing-x:at-uk_shard1_replica_n1 
> r:core_node3 null n:solr-2.search-solr.preprod.k8.atcloud.io:80_solr c:at-uk 
> s:shard1] ERROR 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
> consuming and closing http response stream.
> java.nio.channels.AsynchronousCloseException: null
> at 
> org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316)
>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191]
> at 
> org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287)
>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root
> - 2019-03-04 16:30:04]
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04
> 16:30:04]
> at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>  ~[metrics-core-3.2.6.jar:3.2.6]
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 16:30:04]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_191]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
> {code}






[jira] [Commented] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.

2019-04-12 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816490#comment-16816490
 ] 

Kevin Risden commented on SOLR-13293:
-

No, I understand the root cause is different - I meant, are these bulk HTTP 
requests somehow from metrics?

> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
> consuming and closing http response stream.
> -
>
> Key: SOLR-13293
> URL: https://issues.apache.org/jira/browse/SOLR-13293
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 8.0
>Reporter: Karl Stoney
>Priority: Minor
>
> Hi, 
> Testing out branch_8x, we're randomly seeing the following errors on a simple 
> 3 node cluster.  It doesn't appear to affect replication (the cluster remains 
> green).
> They come in bulk (literally 1000s at a time).
> There were no network issues at the time.
> {code:java}
> 16:53:01.492 [updateExecutor-4-thread-34-processing-x:at-uk_shard1_replica_n1 
> r:core_node3 null n:solr-2.search-solr.preprod.k8.atcloud.io:80_solr c:at-uk 
> s:shard1] ERROR 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
> consuming and closing http response stream.
> java.nio.channels.AsynchronousCloseException: null
> at 
> org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316)
>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191]
> at 
> org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287)
>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root
> - 2019-03-04 16:30:04]
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04
> 16:30:04]
> at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>  ~[metrics-core-3.2.6.jar:3.2.6]
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 16:30:04]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_191]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
> {code}






[jira] [Comment Edited] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.

2019-04-12 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816490#comment-16816490
 ] 

Kevin Risden edited comment on SOLR-13293 at 4/12/19 5:13 PM:
--

No, I understand the root cause is different - I meant, are these bulk HTTP 
requests somehow from metrics? Like, if metrics are disabled, do these errors go 
away?


was (Author: risdenk):
No I understand the root cause is different - I meant more are these bulk HTTP 
requests from metrics somehow?

> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
> consuming and closing http response stream.
> -
>
> Key: SOLR-13293
> URL: https://issues.apache.org/jira/browse/SOLR-13293
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 8.0
>Reporter: Karl Stoney
>Priority: Minor
>
> Hi, 
> Testing out branch_8x, we're randomly seeing the following errors on a simple 
> 3 node cluster.  It doesn't appear to affect replication (the cluster remains 
> green).
> They come in bulk (literally 1000s at a time).
> There were no network issues at the time.
> {code:java}
> 16:53:01.492 [updateExecutor-4-thread-34-processing-x:at-uk_shard1_replica_n1 
> r:core_node3 null n:solr-2.search-solr.preprod.k8.atcloud.io:80_solr c:at-uk 
> s:shard1] ERROR 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
> consuming and closing http response stream.
> java.nio.channels.AsynchronousCloseException: null
> at 
> org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316)
>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191]
> at 
> org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287)
>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root
> - 2019-03-04 16:30:04]
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04
> 16:30:04]
> at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>  ~[metrics-core-3.2.6.jar:3.2.6]
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 16:30:04]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_191]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
> {code}






[jira] [Commented] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.

2019-04-12 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816413#comment-16816413
 ] 

Kevin Risden commented on SOLR-13293:
-

[~kstoney] - I saw you posted about prometheus as well. Is it possible these 
are metrics related?

> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
> consuming and closing http response stream.
> -
>
> Key: SOLR-13293
> URL: https://issues.apache.org/jira/browse/SOLR-13293
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 8.0
>Reporter: Karl Stoney
>Priority: Minor
>
> Hi, 
> Testing out branch_8x, we're randomly seeing the following errors on a simple 
> 3 node cluster.  It doesn't appear to affect replication (the cluster remains 
> green).
> They come in bulk (literally 1000s at a time).
> There were no network issues at the time.
> {code:java}
> 16:53:01.492 [updateExecutor-4-thread-34-processing-x:at-uk_shard1_replica_n1 
> r:core_node3 null n:solr-2.search-solr.preprod.k8.atcloud.io:80_solr c:at-uk 
> s:shard1] ERROR 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
> consuming and closing http response stream.
> java.nio.channels.AsynchronousCloseException: null
> at 
> org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316)
>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191]
> at 
> org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287)
>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root
> - 2019-03-04 16:30:04]
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04
> 16:30:04]
> at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>  ~[metrics-core-3.2.6.jar:3.2.6]
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
>  ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
> b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 16:30:04]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_191]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13396) SolrCloud will delete the core data for any core that is not referenced in the clusterstate

2019-04-12 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816367#comment-16816367
 ] 

Kevin Risden commented on SOLR-13396:
-

I agree that arbitrarily deleting data is bad. The other issue is how you 
clean up if you JUST have the error/warn. It would be nice to know what you 
need to do to fix it, in addition to knowing that there was a problem.

I will caveat this by saying I have no idea how this works today, but when I 
read this I thought it would make sense for each node responsible for a 
shard/collection to have to "ack" that the operation was complete. If the 
node was down at the time, then when it comes up it should know it needs to do 
"xyz" and finish the operation.

Again, I'm not sure of the ZK details, but some rough ideas:
* Create a znode for each node with a list of operations it needs to complete - 
this would be written to by the leader?
* Keep track of which operations each node completed on the existing list 
before deleting? - I think this could be hard since the leader could change?

One of the concerns would be the added load on ZK for reading/writing these 
operations.

The above could have already been considered when building SolrCloud, so it 
might be a nonstarter.
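The ack bookkeeping sketched above could look roughly like this (an in-memory stand-in for the ZK znodes; {{PendingOpsLedger}} and every name in it are made up for illustration, not Solr or ZooKeeper APIs):

```java
import java.util.*;
import java.util.concurrent.*;

// Hypothetical model of "a znode per node holding the operations it still
// owes an ack for". A real version would store these under ZK paths like
// /pending-ops/<nodeName> instead of an in-process map.
public class PendingOpsLedger {

    private final Map<String, Deque<String>> pendingByNode = new ConcurrentHashMap<>();

    // The leader records an operation a node must complete.
    public void enqueue(String nodeName, String op) {
        pendingByNode.computeIfAbsent(nodeName, n -> new ConcurrentLinkedDeque<>()).add(op);
    }

    // The node "acks" once the operation is done; only then is it dropped.
    public boolean ack(String nodeName, String op) {
        Deque<String> ops = pendingByNode.get(nodeName);
        return ops != null && ops.remove(op);
    }

    // On startup, a node replays whatever it missed while it was down.
    public List<String> pending(String nodeName) {
        Deque<String> ops = pendingByNode.get(nodeName);
        return ops == null ? Collections.emptyList() : new ArrayList<>(ops);
    }

    public static void main(String[] args) {
        PendingOpsLedger ledger = new PendingOpsLedger();
        ledger.enqueue("node1", "DELETE core_node3");
        System.out.println(ledger.pending("node1")); // [DELETE core_node3]
        ledger.ack("node1", "DELETE core_node3");
        System.out.println(ledger.pending("node1")); // []
    }
}
```

The leader-change concern maps to who is allowed to call {{enqueue}}; the ZK-load concern maps to how often nodes poll {{pending}}.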

> SolrCloud will delete the core data for any core that is not referenced in 
> the clusterstate
> ---
>
> Key: SOLR-13396
> URL: https://issues.apache.org/jira/browse/SOLR-13396
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.3.1, 8.0
>Reporter: Shawn Heisey
>Priority: Major
>
> SOLR-12066 is an improvement designed to delete core data for replicas that 
> were deleted while the node was down -- better cleanup.
> In practice, that change causes SolrCloud to delete all core data for cores 
> that are not referenced in the ZK clusterstate.  If all the ZK data gets 
> deleted or the Solr instance is pointed at a ZK ensemble with no data, it 
> will proceed to delete all of the cores in the solr home, with no possibility 
> of recovery.
> I do not think that Solr should ever delete core data unless an explicit 
> DELETE action has been made and the node is operational at the time of the 
> request.  If a core exists during startup that cannot be found in the ZK 
> clusterstate, it should be ignored (not started) and a helpful message should 
> be logged.  I think that message should probably be at WARN so that it shows 
> up in the admin UI logging tab with default settings.






[jira] [Commented] (SOLR-13389) rectify discrepancies in socket (and connect) timeout values used throughout the code and tests - probably helping to reduce TimeoutExceptions in tests

2019-04-10 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814712#comment-16814712
 ] 

Kevin Risden commented on SOLR-13389:
-

Big plus one from me. I looked at this a bit as part of the HDFS tests. I am 
99% sure what I put in is not correct, but it fixed some of the HDFS tests.

https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java#L115

This is just conjecture, but I think there might be some weirdness with the 
HTTP2 handling of sockets compared to HTTP 1.1. I just have that hunch based on 
some of the errors I've seen.

> rectify discrepancies in socket (and connect) timeout values used throughout 
> the code and tests - probably helping to reduce TimeoutExceptions in tests
> ---
>
> Key: SOLR-13389
> URL: https://issues.apache.org/jira/browse/SOLR-13389
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> While looking into some Jenkins test failures caused by distributed requests 
> that time out, I realized that the "socket timeout" aka "idle timeout" aka 
> "SO_TIMEOUT" values used in various places in the code & sample configs can 
> vary significantly, and in the case of *test* configs/code can differ from 
> the default / production configs by an order of magnitude.
> I think we should consider rectifying some of the various places/ways that 
> different values are sprinkled throughout the code to reduce the number of 
> (different) places we have magic constants.  I believe a large number of 
> Jenkins test failures we currently see due to timeout exceptions are simply 
> because tests (or test configs) override sensible defaults with values that 
> are too low to be useful.
> (NOTE: all of these problems / discrepancies also apply to "connect timeout", 
> which should probably be addressed at the same time, but for now I'm focusing 
> on the "socket timeout" since it seems to be the bigger problem in Jenkins 
> failures -- if we reach consensus on standardizing some values across the 
> board, the same approach can be applied to connect timeouts at the same time)
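One way the "fewer magic constants" idea above could look (a sketch; {{TimeoutDefaults}} and the property names are illustrative, not actual Solr classes or system properties):

```java
// Hypothetical single source of truth for timeout values. Tests that need a
// different value override the system property instead of sprinkling their
// own (often too-low) constants through test configs.
public final class TimeoutDefaults {
    private TimeoutDefaults() {}

    // Production-style defaults; Integer.getInteger falls back to the second
    // argument when the property is unset.
    public static final int CONNECT_TIMEOUT_MS =
        Integer.getInteger("test.connectTimeoutMs", 60_000);
    public static final int SOCKET_TIMEOUT_MS =
        Integer.getInteger("test.socketTimeoutMs", 600_000);

    public static void main(String[] args) {
        System.out.println(CONNECT_TIMEOUT_MS); // 60000 unless overridden
        System.out.println(SOCKET_TIMEOUT_MS);  // 600000 unless overridden
    }
}
```

Every HttpClient builder (and every test config) would then read from this one place rather than carrying its own value.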






[jira] [Resolved] (SOLR-13385) Upgrade dependency jackson-databind in solr package contrib/prometheus-exporter/lib

2019-04-09 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-13385.
-
   Resolution: Duplicate
 Assignee: Kevin Risden
Fix Version/s: master (9.0)
   8.1

Duplicate of SOLR-13112

> Upgrade dependency jackson-databind in solr package 
> contrib/prometheus-exporter/lib
> ---
>
> Key: SOLR-13385
> URL: https://issues.apache.org/jira/browse/SOLR-13385
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6, 8.0.1
>Reporter: DW
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.1, master (9.0)
>
>
> The current used jackson-databind in 
> /contrib/prometheus-exporter/lib/jackson-databind-2.9.6.jar has known 
> Security Vulnerabilities record. Please upgrade to 2.9.8+. Thanks.
>  
> Please let me know if you would like detailed CVE records.






[jira] [Commented] (SOLR-13075) Harden SaslZkACLProviderTest.

2019-04-04 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809812#comment-16809812
 ] 

Kevin Risden commented on SOLR-13075:
-

{{SaslZkACLProviderTest}} still doesn't work. New ZK versions still seem to be 
changing the localhost handling.
https://builds.apache.org/job/Lucene-Solr-Tests-master/3244/


{code:java}
   [junit4] Suite: org.apache.solr.cloud.SaslZkACLProviderTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.SaslZkACLProviderTest_347A62CA017B905C-001/init-core-data-001
   [junit4]   2> 1655307 WARN  
(SUITE-SaslZkACLProviderTest-seed#[347A62CA017B905C]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=74 numCloses=74
   [junit4]   2> 1655312 INFO  
(SUITE-SaslZkACLProviderTest-seed#[347A62CA017B905C]-worker) [] 
o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 1655315 INFO  
(SUITE-SaslZkACLProviderTest-seed#[347A62CA017B905C]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 1655315 INFO  
(SUITE-SaslZkACLProviderTest-seed#[347A62CA017B905C]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 1655322 INFO  
(TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[347A62CA017B905C]) [
] o.a.s.SolrTestCaseJ4 ###Starting testSaslZkACLProvider
   [junit4]   2> 1655322 INFO  
(TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[347A62CA017B905C]) [
] o.a.s.c.SaslZkACLProviderTest SETUP_START testSaslZkACLProvider
   [junit4]   2> 1655322 INFO  
(TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[347A62CA017B905C]) [
] o.a.s.c.SaslZkACLProviderTest ZooKeeper 
dataDir:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.SaslZkACLProviderTest_347A62CA017B905C-001/tempDir-002/zookeeper/server1/data
   [junit4]   2> 1656570 INFO  
(TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[347A62CA017B905C]) [
] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1656593 INFO  (ZkTestServer Run Thread) [] 
o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1656593 INFO  (ZkTestServer Run Thread) [] 
o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 1656782 INFO  (pool-2-thread-1) [] 
o.a.k.k.k.s.r.KdcRequest The preauth data is empty.
   [junit4]   2> 1656808 INFO  (pool-2-thread-1) [] o.a.k.k.k.s.KdcHandler 
KRB error occurred while processing request:Additional pre-authentication 
required
   [junit4]   2> 1656898 INFO  (pool-2-thread-1) [] o.a.k.k.k.s.r.AsRequest 
AS_REQ ISSUE: authtime 1554337820350,zookeeper/localh...@example.com for 
krbtgt/example@example.com
   [junit4]   2> 1656995 INFO  
(TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[347A62CA017B905C]) [
] o.a.s.c.ZkTestServer start zk server on port:32773
   [junit4]   2> 1656995 INFO  
(TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[347A62CA017B905C]) [
] o.a.s.c.ZkTestServer parse host and port list: localhost:32773
   [junit4]   2> 1656995 INFO  
(TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[347A62CA017B905C]) [
] o.a.s.c.ZkTestServer connecting to localhost 32773
   [junit4]   2> 1657076 INFO  (pool-2-thread-1) [] 
o.a.k.k.k.s.r.KdcRequest The preauth data is empty.
   [junit4]   2> 1657077 INFO  (pool-2-thread-1) [] o.a.k.k.k.s.KdcHandler 
KRB error occurred while processing request:Additional pre-authentication 
required
   [junit4]   2> 1657096 INFO  (pool-2-thread-1) [] o.a.k.k.k.s.r.AsRequest 
AS_REQ ISSUE: authtime 1554337820555,s...@example.com for 
krbtgt/example@example.com
   [junit4]   2> 1657149 INFO  (zkConnectionManagerCallback-3095-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1657170 ERROR (pool-2-thread-1) [] 
o.a.k.k.k.s.r.KdcRequest Principal: 
zookeeper/lucene2-us-west.apache@example.com is not known
   [junit4]   2> 1657173 ERROR 
(TEST-SaslZkACLProviderTest.testSaslZkACLProvider-seed#[347A62CA017B905C]-SendThread(localhost:32773))
 [] o.a.z.c.ZooKeeperSaslClient An error: 
(java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
GSS initiate failed [Caused by GSSException: No valid credentials provided 
(Mechanism level: Server not found in Kerberos database (7) - Server not found 
in Kerberos database)]) occurred when evaluating Zookeeper Quorum Member's  
received SASL token. Zookeeper Client will go to AUTH_FAILED state.
   [junit4]   2> 1657175 ERROR 

[jira] [Commented] (SOLR-13075) Harden SaslZkACLProviderTest.

2019-04-03 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16808761#comment-16808761
 ] 

Kevin Risden commented on SOLR-13075:
-

[~gezapeti] attached slightly modified patch over here.

> Harden SaslZkACLProviderTest.
> -
>
> Key: SOLR-13075
> URL: https://issues.apache.org/jira/browse/SOLR-13075
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-13075.patch
>
>







[jira] [Updated] (SOLR-13075) Harden SaslZkACLProviderTest.

2019-04-03 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13075:

Attachment: SOLR-13075.patch

> Harden SaslZkACLProviderTest.
> -
>
> Key: SOLR-13075
> URL: https://issues.apache.org/jira/browse/SOLR-13075
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-13075.patch
>
>







[jira] [Assigned] (SOLR-7183) SaslZkACLProviderTest reproducible failures due to poor locale blacklisting

2019-04-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-7183:
--

Assignee: Ishan Chattopadhyaya  (was: Gregory Chanan)

> SaslZkACLProviderTest reproducible failures due to poor locale blacklisting
> ---
>
> Key: SOLR-7183
> URL: https://issues.apache.org/jira/browse/SOLR-7183
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 5.2
>
> Attachments: SOLR-7183.patch
>
>
> SaslZkACLProviderTest has this blacklist of locales...
> {code}
>   // These Locales don't generate dates that are compatible with Hadoop 
> MiniKdc.
>   protected final static List<String> brokenLocales =
> Arrays.asList(
>   "th_TH_TH_#u-nu-thai",
>   "ja_JP_JP_#u-ca-japanese",
>   "hi_IN");
> {code}
> ...but this list is incomplete -- notably because it only focuses on one 
> specific Thai variant, and then does a string Locale.toString() comparison.  
> So at a minimum {{-Dtests.locale=th_TH}} also fails - I suspect there are 
> other variants that will fail as well
> * if there is a bug in "Hadoop MiniKdc" then that bug should be filed in 
> jira, and there should be Solr jira that refers to it -- the Solr jira URL 
> needs to be included here in the test case so developers in the future can 
> understand the context and have some idea of if/when the third-party lib bug 
> is fixed
> * if we need to work around some Locales because of this bug, then Locale 
> comparisons need be based on whatever aspects of the Locale are actually 
> problematic
> see for example SOLR-6387 & this commit: 
> https://svn.apache.org/viewvc/lucene/dev/branches/branch_4x/solr/contrib/morphlines-core/src/test/org/apache/solr/morphlines/solr/AbstractSolrMorphlineZkTestBase.java?r1=1618676=1618675=1618676
> Or SOLR-6991 + TIKA-1526 & this commit: 
> https://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_5_0/solr/contrib/extraction/src/test/org/apache/solr/handler/extraction/ExtractingRequestHandlerTest.java?r1=1653708=1653707=1653708
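A sketch of the component-based Locale comparison the report asks for ({{isBrokenForMiniKdc}} is a hypothetical helper, not the fix that was actually committed):

```java
import java.util.Locale;

// Compare Locales by the component that is actually problematic (the
// language) instead of Locale.toString(), so every Thai/Japanese/Hindi
// variant is caught, not just one specific toString() form.
public class LocaleCheck {

    static boolean isBrokenForMiniKdc(Locale l) {
        String lang = l.getLanguage();
        return lang.equals("th") || lang.equals("ja") || lang.equals("hi");
    }

    public static void main(String[] args) {
        System.out.println(isBrokenForMiniKdc(new Locale("th", "TH")));                   // true
        System.out.println(isBrokenForMiniKdc(Locale.forLanguageTag("th-TH-u-nu-thai"))); // true
        System.out.println(isBrokenForMiniKdc(Locale.US));                                // false
    }
}
```

This catches the plain {{th_TH}} case that the toString() blacklist misses.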






[jira] [Closed] (SOLR-7183) SaslZkACLProviderTest reproducible failures due to poor locale blacklisting

2019-04-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden closed SOLR-7183.
--

> SaslZkACLProviderTest reproducible failures due to poor locale blacklisting
> ---
>
> Key: SOLR-7183
> URL: https://issues.apache.org/jira/browse/SOLR-7183
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 5.2
>
> Attachments: SOLR-7183.patch
>
>
> SaslZkACLProviderTest has this blacklist of locales...
> {code}
>   // These Locales don't generate dates that are compatible with Hadoop 
> MiniKdc.
>   protected final static List<String> brokenLocales =
> Arrays.asList(
>   "th_TH_TH_#u-nu-thai",
>   "ja_JP_JP_#u-ca-japanese",
>   "hi_IN");
> {code}
> ...but this list is incomplete -- notably because it only focuses on one 
> specific Thai variant, and then does a string Locale.toString() comparison.  
> So at a minimum {{-Dtests.locale=th_TH}} also fails - I suspect there are 
> other variants that will fail as well
> * if there is a bug in "Hadoop MiniKdc" then that bug should be filed in 
> jira, and there should be Solr jira that refers to it -- the Solr jira URL 
> needs to be included here in the test case so developers in the future can 
> understand the context and have some idea of if/when the third-party lib bug 
> is fixed
> * if we need to work around some Locales because of this bug, then Locale 
> comparisons need be based on whatever aspects of the Locale are actually 
> problematic
> see for example SOLR-6387 & this commit: 
> https://svn.apache.org/viewvc/lucene/dev/branches/branch_4x/solr/contrib/morphlines-core/src/test/org/apache/solr/morphlines/solr/AbstractSolrMorphlineZkTestBase.java?r1=1618676=1618675=1618676
> Or SOLR-6991 + TIKA-1526 & this commit: 
> https://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_5_0/solr/contrib/extraction/src/test/org/apache/solr/handler/extraction/ExtractingRequestHandlerTest.java?r1=1653708=1653707=1653708






[jira] [Resolved] (SOLR-7183) SaslZkACLProviderTest reproducible failures due to poor locale blacklisting

2019-04-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-7183.

Resolution: Fixed

Marking as resolved since this was committed a long time ago. No recent locale 
failures.

> SaslZkACLProviderTest reproducible failures due to poor locale blacklisting
> ---
>
> Key: SOLR-7183
> URL: https://issues.apache.org/jira/browse/SOLR-7183
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 5.2
>
> Attachments: SOLR-7183.patch
>
>
> SaslZkACLProviderTest has this blacklist of locales...
> {code}
>   // These Locales don't generate dates that are compatible with Hadoop 
> MiniKdc.
>   protected final static List<String> brokenLocales =
> Arrays.asList(
>   "th_TH_TH_#u-nu-thai",
>   "ja_JP_JP_#u-ca-japanese",
>   "hi_IN");
> {code}
> ...but this list is incomplete -- notably because it only focuses on one 
> specific Thai variant, and then does a string Locale.toString() comparison.  
> So at a minimum {{-Dtests.locale=th_TH}} also fails - I suspect there are 
> other variants that will fail as well
> * if there is a bug in "Hadoop MiniKdc" then that bug should be filed in 
> jira, and there should be Solr jira that refers to it -- the Solr jira URL 
> needs to be included here in the test case so developers in the future can 
> understand the context and have some idea of if/when the third-party lib bug 
> is fixed
> * if we need to work around some Locales because of this bug, then Locale 
> comparisons need be based on whatever aspects of the Locale are actually 
> problematic
> see for example SOLR-6387 & this commit: 
> https://svn.apache.org/viewvc/lucene/dev/branches/branch_4x/solr/contrib/morphlines-core/src/test/org/apache/solr/morphlines/solr/AbstractSolrMorphlineZkTestBase.java?r1=1618676=1618675=1618676
> Or SOLR-6991 + TIKA-1526 & this commit: 
> https://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_5_0/solr/contrib/extraction/src/test/org/apache/solr/handler/extraction/ExtractingRequestHandlerTest.java?r1=1653708=1653707=1653708






[jira] [Commented] (SOLR-13338) HdfsAutoAddReplicasIntegrationTest failures

2019-03-30 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805965#comment-16805965
 ] 

Kevin Risden commented on SOLR-13338:
-

I have a whole bunch of failure examples on my local Jenkins too. I need to sit 
down and figure out what is going on. It might take me a little while though.

> HdfsAutoAddReplicasIntegrationTest failures
> ---
>
> Key: SOLR-13338
> URL: https://issues.apache.org/jira/browse/SOLR-13338
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Minor
>
> HdfsAutoAddReplicasIntegrationTest failures have increased after SOLR-13330 
> (previously failed a different way with SOLR-13060), but they are starting to 
> reproduce and beasting causes failures locally. They fail the same each time. 
> Planning to figure out what is going on.






[jira] [Commented] (SOLR-13359) Make UpdateHandler support other prefixes (besides hdfs:/)

2019-03-30 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805943#comment-16805943
 ] 

Kevin Risden commented on SOLR-13359:
-

While looking at this, I wonder if this hardcoded default for hdfs:// is the 
reason that indexing on HDFS is slow. There are a few examples linked to 
SOLR-7393. I have no idea how efficient HdfsUpdateLog is, but it would be good 
to track down.

> Make UpdateHandler support other prefixes (besides hdfs:/)
> --
>
> Key: SOLR-13359
> URL: https://issues.apache.org/jira/browse/SOLR-13359
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13359.patch
>
>
> Just like SOLR-11473, the UpdateHandler needs to be able to handle non hdfs:/ 
> paths
> https://github.com/apache/lucene-solr/blob/branch_8_0/solr/core/src/java/org/apache/solr/update/UpdateHandler.java#L140






[jira] [Updated] (SOLR-13359) Make UpdateHandler support other prefixes (besides hdfs:/)

2019-03-30 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13359:

Attachment: SOLR-13359.patch

> Make UpdateHandler support other prefixes (besides hdfs:/)
> --
>
> Key: SOLR-13359
> URL: https://issues.apache.org/jira/browse/SOLR-13359
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13359.patch
>
>
> Just like SOLR-11473, the UpdateHandler needs to be able to handle non hdfs:/ 
> paths
> https://github.com/apache/lucene-solr/blob/branch_8_0/solr/core/src/java/org/apache/solr/update/UpdateHandler.java#L140






[jira] [Updated] (SOLR-13359) Make UpdateHandler support other prefixes (besides hdfs:/)

2019-03-30 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13359:

Component/s: hdfs
 Hadoop Integration

> Make UpdateHandler support other prefixes (besides hdfs:/)
> --
>
> Key: SOLR-13359
> URL: https://issues.apache.org/jira/browse/SOLR-13359
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
>
> Just like SOLR-11473, the UpdateHandler needs to be able to handle non hdfs:/ 
> paths
> https://github.com/apache/lucene-solr/blob/branch_8_0/solr/core/src/java/org/apache/solr/update/UpdateHandler.java#L140






[jira] [Updated] (SOLR-13359) Make UpdateHandler support other prefixes (besides hdfs:/)

2019-03-30 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13359:

Description: 
Just like SOLR-11473, the UpdateHandler needs to be able to handle non hdfs:/ 
paths

https://github.com/apache/lucene-solr/blob/branch_8_0/solr/core/src/java/org/apache/solr/update/UpdateHandler.java

  was:Just like SOLR-11473, the UpdateHandler needs to be able to handle non 
hdfs:/ paths


> Make UpdateHandler support other prefixes (besides hdfs:/)
> --
>
> Key: SOLR-13359
> URL: https://issues.apache.org/jira/browse/SOLR-13359
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
>
> Just like SOLR-11473, the UpdateHandler needs to be able to handle non hdfs:/ 
> paths
> https://github.com/apache/lucene-solr/blob/branch_8_0/solr/core/src/java/org/apache/solr/update/UpdateHandler.java






[jira] [Updated] (SOLR-13359) Make UpdateHandler support other prefixes (besides hdfs:/)

2019-03-30 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13359:

Description: 
Just like SOLR-11473, the UpdateHandler needs to be able to handle non hdfs:/ 
paths

https://github.com/apache/lucene-solr/blob/branch_8_0/solr/core/src/java/org/apache/solr/update/UpdateHandler.java#L140

  was:
Just like SOLR-11473, the UpdateHandler needs to be able to handle non hdfs:/ 
paths

https://github.com/apache/lucene-solr/blob/branch_8_0/solr/core/src/java/org/apache/solr/update/UpdateHandler.java


> Make UpdateHandler support other prefixes (besides hdfs:/)
> --
>
> Key: SOLR-13359
> URL: https://issues.apache.org/jira/browse/SOLR-13359
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
>
> Just like SOLR-11473, the UpdateHandler needs to be able to handle non hdfs:/ 
> paths
> https://github.com/apache/lucene-solr/blob/branch_8_0/solr/core/src/java/org/apache/solr/update/UpdateHandler.java#L140






[jira] [Resolved] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2019-03-30 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-11473.
-
Resolution: Fixed

Created SOLR-13359 to fix the UpdateHandler

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11473.patch, SOLR-11473.patch, SOLR-11473.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Not sure if it's a bug or a missing feature :) I'm trying to make Solr work 
> on Alluxio, as described by [~thelabdude] in 
> https://www.slideshare.net/thelabdude/running-solr-in-the-cloud-at-memory-speed-with-alluxio/1
> The problem I'm facing here is with autoAddReplicas. If I have 
> replicationFactor=1 and the node with that replica dies, the node taking over 
> incorrectly assigns the data directory. For example:
> before
> {code}"dataDir":"alluxio://localhost:19998/solr/test/",{code}
> after
> {code}"dataDir":"alluxio://localhost:19998/solr/test/core_node1/alluxio://localhost:19998/solr/test/",{code}
> The same happens for ulogDir. Apparently, this has to do with this bit from 
> HDFSDirectoryFactory:
> {code}  public boolean isAbsolute(String path) {
> return path.startsWith("hdfs:/");
>   }{code}
> If I add "alluxio:/" in there, the paths are correct and the index is 
> recovered.
> I see a few options here:
> * add "alluxio:/" to the list there
> * add a regular expression along the lines of \[a-z]*:/ I hope that's not too 
> expensive, I'm not sure how often this method is called
> * don't do anything and expect alluxio to work with an "hdfs:/" path? I 
> actually tried that and didn't manage to make it work
> * have a different DirectoryFactory or something else?
> What do you think?
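The regular-expression option could be sketched like this ({{PathCheck}} is a made-up class; the pattern shown is one plausible choice, not what Solr committed):

```java
import java.util.regex.Pattern;

// Treat any "scheme:/" path as absolute instead of hardcoding "hdfs:/".
// The pattern accepts URI-style schemes (letters, digits, +, ., -).
public class PathCheck {

    // Precompiled once, so the per-call cost the reporter worries about
    // is just a short match, not a regex compile.
    private static final Pattern SCHEME = Pattern.compile("^[a-zA-Z][a-zA-Z0-9+.-]*:/.*");

    static boolean isAbsolute(String path) {
        return SCHEME.matcher(path).matches();
    }

    public static void main(String[] args) {
        System.out.println(isAbsolute("hdfs://nn:8020/solr"));            // true
        System.out.println(isAbsolute("alluxio://localhost:19998/solr")); // true
        System.out.println(isAbsolute("/var/solr/data"));                 // false
    }
}
```

This would make autoAddReplicas assign dataDir/ulogDir correctly for alluxio:// (or any other scheme) without enumerating each one.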






[jira] [Reopened] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2019-03-30 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reopened SOLR-11473:
-

Found a related place where we need to account for non-hdfs:/ paths: 
UpdateHandler has a specific check for hdfs:/.

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11473.patch, SOLR-11473.patch, SOLR-11473.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Created] (SOLR-13359) Make UpdateHandler support other prefixes (besides hdfs:/)

2019-03-30 Thread Kevin Risden (JIRA)
Kevin Risden created SOLR-13359:
---

 Summary: Make UpdateHandler support other prefixes (besides hdfs:/)
 Key: SOLR-13359
 URL: https://issues.apache.org/jira/browse/SOLR-13359
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Kevin Risden
Assignee: Kevin Risden
 Fix For: 8.1, master (9.0)


Just like SOLR-11473, the UpdateHandler needs to be able to handle non-hdfs:/ 
paths.






[jira] [Updated] (SOLR-10161) HdfsChaosMonkeySafeLeaderTest needs to be hardened.

2019-03-30 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-10161:

Component/s: hdfs
 Hadoop Integration

> HdfsChaosMonkeySafeLeaderTest needs to be hardened.
> ---
>
> Key: SOLR-10161
> URL: https://issues.apache.org/jira/browse/SOLR-10161
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: logs.tar.gz
>
>







[jira] [Commented] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2019-03-30 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805923#comment-16805923
 ] 

Kevin Risden commented on SOLR-11473:
-

This opens up the possibility of using any HDFS-compatible filesystem for 
storing indices. There are no Solr tests for filesystems other than local 
disk and HDFS, so you would need to test before relying on another filesystem. 

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11473.patch, SOLR-11473.patch, SOLR-11473.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Updated] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2019-03-29 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11473:

Attachment: SOLR-11473.patch

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11473.patch, SOLR-11473.patch, SOLR-11473.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Updated] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2019-03-29 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11473:

Attachment: SOLR-11473.patch

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11473.patch, SOLR-11473.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Updated] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2019-03-29 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11473:

Attachment: (was: SOLR-11473.patch)

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11473.patch, SOLR-11473.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Commented] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2019-03-29 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805482#comment-16805482
 ] 

Kevin Risden commented on SOLR-11473:
-

Attached a patch that does the following:
* Uses Hadoop Path for the isAbsolute check.
* Adds a test to ensure isAbsolute works correctly with non-hdfs:// prefixes.
* Ensures that the Path scheme is used to disable the cache when getting the 
filesystem.

The Path approach is just like URI, but Path wraps URI and adds some 
Hadoop-specific handling. Made sure that we handle the case of 
fs.SCHEME.impl.disable.cache=true, since missing that can cause 
"filesystem already closed" issues.
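For illustration, the per-scheme cache-disable handling described above can be sketched with the stdlib java.net.URI: extract the scheme from the configured dataDir and build the matching fs.SCHEME.impl.disable.cache property name that Hadoop's FileSystem honors. This is a hypothetical helper for clarity, not the actual patch, which uses Hadoop's Path and Configuration classes:

```java
import java.net.URI;

public class FsCacheKeySketch {
    // Hypothetical helper: derive the URI scheme from a dataDir and build
    // the Hadoop property name that disables FileSystem caching for that
    // scheme (fs.SCHEME.impl.disable.cache).
    static String cacheDisableProperty(String dataDir) {
        String scheme = URI.create(dataDir).getScheme();
        if (scheme == null) {
            throw new IllegalArgumentException("dataDir has no scheme: " + dataDir);
        }
        return String.format("fs.%s.impl.disable.cache", scheme);
    }

    public static void main(String[] args) {
        // prints fs.alluxio.impl.disable.cache
        System.out.println(cacheDisableProperty("alluxio://localhost:19998/solr/test/"));
    }
}
```

Setting the resulting property to true in the Hadoop Configuration before FileSystem.get(...) ensures each caller gets its own instance, avoiding the shared-cache "already closed" failure mode.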

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11473.patch, SOLR-11473.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Updated] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2019-03-29 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11473:

Attachment: SOLR-11473.patch

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11473.patch, SOLR-11473.patch
>
>






[jira] [Commented] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2019-03-29 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805453#comment-16805453
 ] 

Kevin Risden commented on SOLR-11473:
-

Assigning to myself to get this merged, since it opens up interesting use 
cases with Hadoop-compatible filesystems. 

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11473.patch
>
>






[jira] [Assigned] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2019-03-29 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-11473:
---

Assignee: Kevin Risden

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Assignee: Kevin Risden
>Priority: Minor
> Attachments: SOLR-11473.patch
>
>






[jira] [Updated] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2019-03-29 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11473:

Fix Version/s: master (9.0)
   8.1

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11473.patch
>
>






[jira] [Commented] (SOLR-13338) HdfsAutoAddReplicasIntegrationTest failures

2019-03-29 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805232#comment-16805232
 ] 

Kevin Risden commented on SOLR-13338:
-

https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-NightlyTests-master/1804/

> HdfsAutoAddReplicasIntegrationTest failures
> ---
>
> Key: SOLR-13338
> URL: https://issues.apache.org/jira/browse/SOLR-13338
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Minor
>
> HdfsAutoAddReplicasIntegrationTest failures have increased after SOLR-13330 
> (previously the test failed in a different way with SOLR-13060). The failures 
> reproduce, and beasting triggers them locally; they fail the same way each 
> time. Planning to figure out what is going on.





