This is the full error message from the node for the second example, which
is the following query that gets stuck:
innerJoin(innerJoin(
    search(people, q=*:*, fl="personId,name", sort="personId asc"),
    search(pets, q=type:cat, fl="pertsId,petName", sort="personId asc"),
    on="personId=petsId"
Hi Joel,
Below are the results which I am getting.
If I use this query:
innerJoin(innerJoin(
    search(people, q=*:*, fl="personId,name", sort="personId asc"),
    search(pets, q=type:cat, fl="pertsId,petName", sort="personId asc"),
    on="personId=petsId"
),
search(collection1, q=*:*,
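Two things are worth double-checking in the inner join here: innerJoin requires both streams to be sorted on their join keys, yet the pets stream sorts on personId while joining on petsId, and its fl lists "pertsId", which looks like a typo for petsId. A corrected two-collection join would read roughly:

```
innerJoin(
  search(people, q=*:*, fl="personId,name", sort="personId asc"),
  search(pets, q=type:cat, fl="petsId,petName", sort="petsId asc"),
  on="personId=petsId"
)
```

If the sort does not match the on clause, the join can fail to advance, which would be consistent with a query that hangs.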
Never mind. I had a different config for ZooKeeper on the second VM, which
brought up a different cloud.
On Fri, Jun 16, 2017, 8:48 PM Satya Marivada
wrote:
Here is the image:
https://www.dropbox.com/s/hd97j4d3h3q0oyh/solr%20nodes.png?dl=0
There is a node on 002: 15101 port missing from the cloud.
On Fri, Jun 16, 2017 at 7:46 PM Erick Erickson
wrote:
Images don't come through the mailer, they're stripped. You'll have to put
it somewhere else and provide a link.
Best,
Erick
On Fri, Jun 16, 2017 at 3:29 PM, Satya Marivada
wrote:
Hi,
I am running solr-6.3.0. There are 4 nodes; when I start Solr, only 3 nodes
join the cloud, while the fourth one comes up separately and does not join
the other 3 nodes.
Please see the picture of the admin screen below, which shows how the fourth
node is not joining. Any suggestions?
Thanks,
satya
Markus: I'm 95% certain that as long as the uniqueKey (effectively) has
useDocValuesAsStored, everything should still work and it's just the error
checking/warning that's broken -- if you want to test that out and file a
JIRA, that would be helpful.
The one concern I have is that a deliberate
The docs for the JSON Facet API tell us that the default ranges are inclusive
of the lower bounds and exclusive of the upper bounds. I'd like to do the
opposite (exclusive lower, inclusive upper), but I can't figure out how to
combine the 'include' parameters to make it work.
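If I remember the semantics correctly, the range facet's include option takes the same values as facet.range.include (lower, upper, edge, outer, all), and combining upper with edge should give exclusive-lower/inclusive-upper buckets without dropping the very first start bound. A sketch (field name and bounds are made up):

```json
{
  "prices": {
    "type": "range",
    "field": "price",
    "start": 0,
    "end": 100,
    "gap": 25,
    "include": ["upper", "edge"]
  }
}
```

Here "upper" makes every bucket (lower, upper], and "edge" keeps the start value itself inside the first bucket.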
Solr is configured with the zookeeper ensemble as mentioned below.
I will provide logs later.
From: Shawn Heisey
Date: Friday, Jun 16, 2017, 12:27 PM
To: solr-user@lucene.apache.org
bq: But I only changed the docvalues not the multivalued
It's the same issue. There is remnant metadata when you change whether
a field uses docValues or not. The error message can be ambiguous
depending on where the issue is encountered.
Best,
Erick
On Fri, Jun 16, 2017 at 9:28 AM, Aman Deep
+solr-user
Might get a different audience on this list.
-- Forwarded message --
From: Christine Poerschke (BLOOMBERG/ LONDON)
Date: Fri, Jun 16, 2017 at 11:43 AM
Subject: (how) do folks use the Cloud Graph (Radial) in the Solr Admin UI?
To:
M,
You ask what is better, and that is often a matter of opinion. My guess is that
you should have that name stored in the Solr doc, so it can be in the response
when you have a match. Oh, and I find JSON easier to work with than XML. Cheers
-- Rick
On June 16, 2017 10:19:03 AM EDT, mganeshs
But I only changed docValues, not multiValued.
Anyway, I will try to reproduce this by deleting the entire data directory.
On 16-Jun-2017 9:52 PM, "Erick Erickson" wrote:
On 6/16/2017 9:05 AM, Xie, Sean wrote:
> Is there a way to keep SOLR alive when zookeeper instances (3 instance
> ensemble) are rolling updated one at a time? It seems SOLR cluster use
> one of the zookeeper instance and when the communication is broken in
> between, it won’t be able to reconnect
bq: deleted entire index from the solr by delete by query command
That's not what I meant. Either
a> create an entirely new collection starting with the modified schema
or
b> shut down all your Solr instances. Go into each replica/core and
'rm -rf data'. Restart Solr.
That way you're absolutely
Yes, it was a new schema (new collection), and after that I changed only
docValues=true using the Schema API, but before changing the schema I had
deleted the entire index from Solr by a delete-by-query command using the
admin GUI.
On 16-Jun-2017 9:28 PM, "Erick Erickson" wrote:
My guess is you changed the definition of the field from
multiValued="true" to "false" at some point. Even if you re-index all
docs, some of the metadata can still be present.
Did you completely blow away the data? By that I mean remove the entire
data dir (i.e. the parent of the "index"
> It seems SOLR cluster use one of the zookeeper instance
Is Solr configured to point to exactly one host, or to a list of the hosts
in the ZooKeeper ensemble? The zkHost value should contain all of the ZK
hosts.
We had a similar issue where the Solr instance was only pointing to zk1 and
none of
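For example, in solr.in.sh (hostnames here are placeholders), the whole ensemble should be listed:

```
# list every member of the ensemble, not a single host
ZK_HOST="zk1:2181,zk2:2181,zk3:2181"
```

With all hosts listed, the client can fail over to another ZooKeeper instance while one of them is being restarted.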
Hi,
Moving over to docValues as stored fields, I got this:
o.a.s.s.IndexSchema uniqueKey is not stored - distributed search and
MoreLikeThis will not work
But distributed and MLT still work.
Is the warning actually obsolete these days?
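(The setup in question is roughly the following schema fragment; the attribute values are assumed:)

```xml
<!-- uniqueKey not stored, but retrievable through docValues -->
<field name="id" type="string" indexed="true" stored="false"
       docValues="true" useDocValuesAsStored="true"/>
<uniqueKey>id</uniqueKey>
```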
Regards,
Markus
Is there a way to keep SOLR alive when zookeeper instances (a 3-instance
ensemble) are rolling-updated one at a time? It seems the SOLR cluster uses
one of the zookeeper instances, and when the communication in between is
broken, it won't be able to reconnect to another zookeeper instance and keep
Yes, index the employee and item names instead of only their IDs. And if you
can't for some reason, I'd implement a DocTransformer instead of a
ResponseWriter.
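A DocTransformer plugs into fl and can decorate each returned document. A rough sketch of the idea (method signatures vary between Solr versions, so check your release's API, and lookupName is a hypothetical helper):

```java
// Sketch: resolve a human-readable name for an id at response-writing time.
public class NameTransformerFactory extends TransformerFactory {
  @Override
  public DocTransformer create(String field, SolrParams params, SolrQueryRequest req) {
    return new DocTransformer() {
      @Override
      public String getName() { return field; }

      @Override
      public void transform(SolrDocument doc, int docid) {
        Object id = doc.getFieldValue("employeeId"); // id is already in the doc
        doc.setField(field, lookupName(id));         // add the resolved name
      }
    };
  }

  // hypothetical lookup against a cache or external store
  private static String lookupName(Object id) {
    return "name-for-" + id;
  }
}
```

It would be registered as a transformer in solrconfig.xml and then requested per query via fl, so the stored documents stay unchanged.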
Regards,
Markus
-Original message-
> From:mganeshs
> Sent: Friday 16th June 2017 16:19
> To:
Hi,
We have a requirement where, in the response, we would like to add the
description of an item along with its item id (this field comes in the Solr
response by default), or the employee name along with the employee id (this
is just an example use case).
In the Solr document, what we have is only the item id or the employee id.
Yes that is correct.
Joel Bernstein
http://joelsolr.blogspot.com/
On Fri, Jun 16, 2017 at 9:55 AM, Aman Deep Singh
wrote:
Thanks Joel,
It is working now.
One quick question: as you say we can use the solr client cache multiple
times, can I create a single instance of SolrClientCache and use it again
and again, since we are using one single bean for the client object?
On 16-Jun-2017 6:28 PM, "Joel Bernstein"
The issue is that in 6.6 CloudSolrStream is expecting a StreamContext to be
set. So you'll need to update your code to do this. This was part of
changes made to make streaming work in non-SolrCloud environments.
You also need to create a SolrClientCache which caches the SolrClients.
Example:
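The example itself got cut off in the archive; a minimal sketch of the pattern being described might look like this (zkHost and collection are placeholders, and exact signatures should be checked against SolrJ 6.6):

```java
// Sketch: CloudSolrStream in SolrJ 6.6 needs a StreamContext with a SolrClientCache.
SolrClientCache clientCache = new SolrClientCache(); // create once, reuse across streams

StreamContext context = new StreamContext();
context.setSolrClientCache(clientCache);

CloudSolrStream stream = new CloudSolrStream("zkhost:2181", "collection1",
    new SolrQuery("*:*").setFields("id").setSort("id", SolrQuery.ORDER.asc));
stream.setStreamContext(context); // required in 6.6
try {
  stream.open();
  for (Tuple tuple = stream.read(); !tuple.EOF; tuple = stream.read()) {
    // process tuple.getString("id")
  }
} finally {
  stream.close();
}
// clientCache.close() belongs in application shutdown, not per request
```

This also bears on the follow-up question above: the cache is intended to be created once (for example in the same bean that holds the client) and shared across streams.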
Hi,
Facets are not working when I'm querying with the group command.
Request:
facet.field=isBlibliShipping&facet=true&group=true&group.field=productCode&group.facet=true&indent=on&q=*:*&wt=json
Schema for facet field
It was throwing an error stating:
Type mismatch: isBlibliShipping was indexed with multiple values per
document, use SORTED_SET instead
Hi,
I think there is a possible bug in SolrJ version 6.6.0, as streaming is not
working.
I have a piece of code:
public Set getAllIds(String requestId, String field) {
    LOG.info("Now Trying to fetch all the ids from SOLR for request Id {}", requestId);
    Map props = new HashMap();