Do you find your configs in the Solr admin panel, in the Cloud --> Tree
folder?
-Michael
Am 28.09.2017 um 04:50 schrieb Gunalan V:
> Hello,
>
> Could you please let me know where can I find the uploaded configset from
> SOLR into zookeeper ensemble ?
>
> In docs it says they will "/configs/" b
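For the archives, the command-line counterpart of the admin UI's Cloud -> Tree view; a sketch assuming Solr 6.2+ with ZooKeeper on localhost:2181 (the configset name and conf path are placeholders):

```shell
# Assumption: ZooKeeper is the default embedded one on localhost:2181.
zkhost=localhost:2181

# Upload a configset (name and source directory are placeholders):
bin/solr zk upconfig -n myconfig -d /path/to/conf -z "$zkhost"

# List what actually landed under /configs:
bin/solr zk ls /configs -z "$zkhost"
```

The Cloud -> Tree view in the admin UI reads the same ZooKeeper data, so an uploaded configset should appear in both places.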
I'd rather say you didn't quote the URL when sending it using curl.
Bash accepts the ampersand as a request to execute curl, including the
URL up to CREATE, in the background - that's why the error is included within
the next output, followed by "Exit" - and then it tries to execute the
following part of t
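A minimal sketch of the quoting issue (host and parameters are placeholders); the `set --` trick just makes visible how many words bash hands to the command:

```shell
url='http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore&instanceDir=/var/solr/data/mycore'

# Unquoted ($url with no quotes around it), bash splits the line at each
# '&' and runs everything before the first one in the background, so curl
# only ever sees ...action=CREATE.  Quoted, the URL stays one argument:
set -- curl "$url"
printf 'words passed: %d\n' "$#"   # 2: "curl" plus the full URL
printf 'argument: %s\n' "$2"
```

With the quotes in place, the real call is simply `curl "$url"`.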
Hello All,
Thanks for the reply. After a long time, I found the solution for
uploading configuration in Solr Cloud at the following link:
http://mtitek.com/tutorials/solr/collections.php
Regards,
Shashi Roushan
On Sep 20, 2017 3:42 AM, "John Bickerstaff"
wrote:
This may also be of some assistance:
Hello,
Could you please let me know where I can find the uploaded configset from
Solr in the ZooKeeper ensemble?
The docs say it will be under "/configs/", but I'm not able to see
the configs directory in ZooKeeper. Please let me know if I need to check
somewhere else.
Thanks!
Hello,
Could someone please tell me the difference between a Solr core, a
collection, a node, and the SolrCluster referred to in SolrCloud? It's a bit confusing.
If there is any diagrammatic representation or example, please share it with me.
Thanks!
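For anyone finding this in the archives: roughly, a node is one running Solr instance (JVM); a collection is a logical index, split into shards; each shard has one or more replicas; and each replica is hosted on some node as a core. The cluster is the set of nodes coordinated by ZooKeeper. One way to see the whole hierarchy at a glance (the host below is an assumption):

```shell
# Assumption: a SolrCloud node on the default local port.
solr=http://localhost:8983/solr

# CLUSTERSTATUS lists collections -> shards -> replicas (cores) per node:
curl "$solr/admin/collections?action=CLUSTERSTATUS&wt=json"
```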
So, it seems like two entries for WordDelimiterGraphFilterFactory (with
different config in each entry) were causing the error. I am still not sure
how it ended up in this state, or whether there is any benefit to having two
lines. But removing one of them fixed my error.
Thanks
Nawab
On Wed, Sep 27, 20
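For reference, an analyzer chain normally carries a single WordDelimiterGraphFilterFactory entry. A sketch of such a field type (the attribute values here are illustrative, not the poster's actual config):

```xml
<!-- Sketch: one WordDelimiterGraphFilterFactory per chain; at index time
     the graph output should be flattened before indexing. -->
<fieldType name="text_wdgf" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterGraphFilterFactory"
            generateWordParts="1" generateNumberParts="1"
            splitOnCaseChange="1" preserveOriginal="1"/>
    <filter class="solr.FlattenGraphFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```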
Hi,
I upgraded to Solr 7 today and I am seeing tons of the following errors for
various fields.
o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Exception
writing document id file_3881549 to the index; possible analysis error:
startOffset must be non-negative, and endOffset must
I understand that this has to be done on the command line, but I don't know
where to put this structure or what it should look like. Can you please be
more specific in this answer? I have only been working with Solr for about six
months.
~~~
William Kevin Miller
ECS Fe
Standard command-line. You're doing this on the box itself, not
through a REST API.
Erick
On Wed, Sep 27, 2017 at 10:26 AM, Miller, William K - Norman, OK -
Contractor wrote:
> This is my first time to try using the core admin API. How do I go about
> creating the directory structure?
>
>
Regarding not finding the issue, JIRA has a problem with queries when
the user is not logged in (see also
https://jira.atlassian.com/browse/JRASERVER-38511 if you're interested
in the details). There's unfortunately not much we can do about it
besides manually editing issues to remove a security setti
First, thanks for the quick response. Yes, it sounds like the same problem!
I did a bunch of searching before reporting the issue, but I didn't come across that
JIRA, or I wouldn't have reported it. My apologies for the duplication
(although it is a new JIRA).
Is there a good place to start search
That sounds like https://issues.apache.org/jira/browse/SOLR-11406 if I'm
not mistaken?
-Stefan
On Sep 27, 2017 8:20 PM, "Wayne L. Johnson"
wrote:
> I’m testing Solr 7.0.0. When I start with an empty index, Solr comes up
> just fine, I can add documents and query documents. However when I star
I'm testing Solr 7.0.0. When I start with an empty index, Solr comes up just
fine; I can add documents and query documents. However, when I start with an
already-populated set of documents (from 6.5.0), Solr will not start. The
relevant portion of the traceback seems to be:
Caused by: java.la
https://wiki.apache.org/solr/DataImportHandlerFaq#I.27m_using_DataImportHandler_with_a_MySQL_database._My_table_is_huge_and_DataImportHandler_is_going_out_of_memory._Why_does_DataImportHandler_bring_everything_to_memory.3F
-Original Message-
From: Deeksha Sharma [mailto:dsha...@flexera.co
I am trying to create indexes using the DataImportHandler (Solr 5.2.1). The data is in
a MySQL db and the number of records is more than 3.5 million. My Solr server
stops due to an OOM (out of memory) error. I tried starting Solr with 12GB of
RAM but still no luck.
Also, I see that Solr fetches all
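The FAQ linked in the reply above boils down to the MySQL driver buffering the whole result set in memory by default. A sketch of the usual dataSource fix (driver class, URL, and credentials are placeholders): `batchSize="-1"` makes the MySQL JDBC driver stream rows instead of materializing the whole table.

```xml
<!-- Sketch per the DIH FAQ; connection details are placeholders. -->
<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/mydb"
            batchSize="-1"
            user="solr" password="secret"/>
```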
So there is no way of telling Solr to duplicate a column of the CSV by just
using some parameters during the import request?
Just for the CSV.
The truth is copyField crossed my mind, but it's just too brute-force
because it will affect all imported documents. And CSV is NOT the only
method we are
This is my first time to try using the core admin API. How do I go about
creating the directory structure?
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Wednesd
Right, the core admin API is pretty low-level; it expects that the base
directory exists, so you have to create the directory structure by hand.
Best,
Erick
On Wed, Sep 27, 2017 at 9:24 AM, Miller, William K - Norman, OK -
Contractor wrote:
> Thanks Erick for pointing me in this direction. Unfortunatel
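A sketch of the "by hand" part (the paths follow the thread's example; the configset name is a placeholder):

```shell
# CoreAdmin CREATE expects instanceDir to already exist on the node's
# filesystem, so create it first:
instancedir=/var/solr/data/mycore
mkdir -p "$instancedir/data"

# Quote the URL so bash does not split it at the ampersands:
curl "http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore&instanceDir=$instancedir&dataDir=data&configSet=myConfigSet"
```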
Thanks Erick for pointing me in this direction. Unfortunately, when I try to use
this I get an error. Here is the command that I am using and the response I
get:
https://solrserver:8983/solr/admin/cores?action=CREATE&name=mycore&instanceDir=/var/solr/data/mycore&dataDir=data&configSet=custom_con
Hi all,
Yesterday Yahoo open-sourced Vespa (i.e. "the open big data serving engine:
store, search, rank and organize big data at user serving time"); looking at
the API, they provide search.
I did a quick search in the code for "lucene", getting only 5 results.
Does anyone know more about the f
Thanks Erick,
I’ll add it on my TODO list.
Regards,
Emir
> On 27 Sep 2017, at 17:02, Erick Erickson wrote:
>
> No good reason, probably just "nobody got around to it".
>
> The switch to asciidoc has made it much easier to contribute doc
> changes, if you have the bandwidth please go ahead and
> In any case I figured out my problem. I was overthinking it.
Mind to share?
-Stefan
On Sep 27, 2017 4:34 PM, "Webster Homer" wrote:
> There is a need for a special filter since the input has to be normalized.
> That is the main requirement, splitting into pieces is optional. As far as
> I k
No good reason, probably just "nobody got around to it".
The switch to asciidoc has made it much easier to contribute doc
changes, if you have the bandwidth please go ahead and create a patch
for the docs
Best,
Erick
On Wed, Sep 27, 2017 at 1:53 AM, Emir Arnautović
wrote:
> Hi all,
> Is the
If you always want to do this exact thing, it looks like a copyField
directive in your schema.
If it has to be more nuanced, you can use something like
StatelessScriptUpdateProcessorFactory.
Both of these would affect _all_ documents coming in to Solr, so may
be too blunt a hammer.
Best,
Erick
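A sketch of the copyField route, using the field names from the thread (it assumes both fields exist in the schema):

```xml
<!-- Every document's title_short is also indexed into title_fullStr. -->
<copyField source="title_short" dest="title_fullStr"/>
```

As noted above, this applies to every document from every import method, not just the CSV handler.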
There is a need for a special filter, since the input has to be normalized.
That is the main requirement; splitting into pieces is optional. As far as
I know there is nothing in Solr that knows about molecular formulas.
In any case, I figured out my problem. I was overthinking it.
On Wed, Sep 27,
Hi Arun,
It is hard to measure something without affecting it, but we could use the debug
results and combine them with the QTime without debug: if we ignore merging results, it
seems that the majority of the time is spent retrieving docs (~500ms). You should
consider reducing the number of rows if you want better r
Hi everyone,
I am trying spatial queries in Solr, such as Intersects, IsWithin, etc. I wrote the
query below but it is wrong. I tried this query 3 different ways, but all of them
return the same error.
How do I run a spatial query in Solr, such as Intersects, IsWithin, etc.?
Best Regards.
1-
http://localhost:8983/s
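For the archives, a sketch of one working shape for such a query, assuming a field named "geo" whose type is solr.SpatialRecursivePrefixTreeFieldType (host, core, and field name are placeholders). Letting curl do the URL-encoding avoids most syntax errors with the braces, spaces, and parentheses:

```shell
# {!field f=geo} routes the fq to the spatial field's query parser.
fq='{!field f=geo}Intersects(POLYGON((-10 30, -40 40, -10 -20, 40 20, 0 0, -10 30)))'

curl "http://localhost:8983/solr/mycore/select" \
  --data-urlencode 'q=*:*' \
  --data-urlencode "fq=$fq"
```

For an "is within" query, swap `Intersects(...)` for `IsWithin(...)`.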
Hi,
I've just installed Solr 7.0.0 and I get an error opening an index created with
6.6.1.
The server works fine if I start it with an empty index, so I suppose the
configuration is OK.
This is the stack trace:
Error waiting for SolrCore to be created
java.util.concurrent.ExecutionException: o
Hi Emir,
Please find the response without the bq parameter and with debugQuery set to true.
It was also noted that QTime comes down drastically, to about 700-800, without
the debug parameter.
true
0
3446
("hybrid electric powerplant" "hybrid electric powerplants" "Electric"
"Electrical" "Electricity"
Is there a way to map a field value based on another field value without
replicating the columns in the CSV itself?
For example, I tried literal.title_fullStr=f.title_short, but it doesn't
seem to work.
Thank you
Hi Erick,
QTime comes down with rows set to 1. It was also noted that QTime comes down
when the debug parameter is not added to the query. It comes to about 900.
Thanks,
Arun
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
On Tue, 2017-09-26 at 07:43 -0700, sasarun wrote:
> Allocated heap size for the young generation is about 8 GB and the old
> generation is about 24 GB. And GC analysis showed that peak
> size utilisation is really low compared to these values.
That does not come as a surprise. Your collections would normally b
Hi all,
Is there some reason why PatternCaptureGroupTokenFilter is not documented, even
though it is included in the code base?
Thanks,
Emir
Hi Homer,
There is no need for a special filter; there is one that is for some reason not
part of the documentation (I will ask why, so follow that thread if you decide
to go this way). You can use something like:
This will capture all atom counts as separate tokens.
HTH,
Emir
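For the archives (the config example appears to have been stripped from the message): a sketch of the kind of filter entry meant here. PatternCaptureGroupFilterFactory is the actual factory class; the pattern for element-plus-count groups is an assumption.

```xml
<!-- Emits each captured group as an extra token, e.g. "H2" and "O"
     from an input token like "H2O", keeping the original token too. -->
<filter class="solr.PatternCaptureGroupFilterFactory"
        pattern="([A-Z][a-z]?[0-9]*)"
        preserve_original="true"/>
```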
> On 26 Sep 2017, at 23:1
I did not look at the code, but after deleting, make sure all segments are gone
(maybe optimize), make sure you reloaded the core, and if nothing works (and
this is the recommended solution) recreate your collection instead of deleting
all documents.
HTH,
Emir
> On 26 Sep 2017, at 23:04, Phil S
Hi Arun,
This is not the simplest query either: a dozen phrase queries on several
fields, plus the same query as bq. Can you provide the debugQuery info?
I did not look much into the debug times and what includes what, but one thing
that is strange to me is that QTime is 4s while the query in debug is 1.3