Hi,
I am having some problems getting the data-import-handler in Solr to work. I
have tried a lot of things but I simply get no response from Solr, not even an
error.
When calling the API:
http://localhost:8983/solr/nh/dataimport?command=full-import
{
  "responseHeader": {
    "status": 0,
Hi,
Thanks Jan & Joel.
I will evaluate the performance over my data, but based on your
experience, which of the two performs better? Please suggest.
Yeah, I know export does not get the data from all shards, but we can write
code to aggregate the data from all shards. But
Shamik,
Wondering how to get this working? As I mentioned, my data is different for
each of the widgets. So I am not sure how to "return all the necessary data at
one shot and group them".
Any particular inputs?
Thanks
On 2 October 2018 at 15:47:50, Shamik Sinha (shamikchand...@gmail.com)
wrote:
perhaps you could do an OR query with the two requirements, and sort by an
identifier that makes each result set unique from the other
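Sketching that suggestion (the field names below are hypothetical, not from the thread): combine the two widgets' queries with OR and sort on the field that tells them apart.

```python
from urllib.parse import urlencode

# Hypothetical queries for two widgets, merged into one request.
q1 = "widget_type:usage AND customer_id:42"
q2 = "widget_type:recommendation AND customer_id:42"

params = urlencode({
    "q": f"({q1}) OR ({q2})",
    # Sorting on the discriminator keeps each widget's docs together.
    "sort": "widget_type asc",
    "wt": "json",
})
print(params)
```

The client then splits the single response on the discriminator field.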
On Tue, Oct 2, 2018 at 11:05 AM Greenhorn Techie wrote:
> Shamik,
>
> Wondering how to get this working? As I mentioned, my data is different for
> each of the
We run an old master/slave Solr 4.3.0 cluster:
14 nodes (7 masters / 7 slaves); indexes average 47.5 GB per shard, around 2
million docs per shard.
We have constant daily additions and a small number of deletes.
We currently optimize nightly, and it is a system hog.
Is it feasible to never run optimize?
I ask
Hi,
We are building a mobile app which would display results from Solr. At the
moment, the idea is to have multiple widgets / areas on the mobile screen,
with each area being served by a distinct Solr query. For example, the first
widget would display the customer's aggregated product usage, the second
Solr uses REST-based calls over HTTP or HTTPS, which cannot handle multiple
requests in one shot. However, what you can do is return all the necessary
data at one shot and group them according to your
needs.
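For instance, the grouping step could be as simple as bucketing the combined response by a discriminator field (the `widget` field below is hypothetical):

```python
from collections import defaultdict

# Docs from one combined Solr response; "widget" is a hypothetical
# field added at index time saying which screen area a doc belongs to.
docs = [
    {"id": "1", "widget": "usage"},
    {"id": "2", "widget": "recommendations"},
    {"id": "3", "widget": "usage"},
]

by_widget = defaultdict(list)
for doc in docs:
    by_widget[doc["widget"]].append(doc)

# Each mobile widget then renders its own bucket.
print(sorted(by_widget))  # → ['recommendations', 'usage']
```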
Thanks and regards,
Shamik
On 02-Oct-2018 8:11 PM, "Greenhorn Techie"
You should not be exposing Solr publicly to the production client
unless you really, really know how to do security hardening. Normally,
treat it more like a database.
So, most of the cases, you would have an actual server app that talks
to Solr behind the scenes. That gives you more flexibility
Thanks Varun,
Until we move to Solr 7.5 is there a way that we can control sharding with the
core.properties file?
It seems to me that you used to be able to put a core.properties file in the
Solr home path with something like the following.
coreNodeName=bts_shard3_01
shard=shard3
Have a middle tier that does all the queries and returns combined results. Or
do all the queries simultaneously, like AJAX.
I think it is still true with SolrJ that async, parallel queries to Solr are
easy. Send each request, but don’t read the response. When you are done sending
requests,
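The same fire-then-collect pattern, sketched in Python rather than SolrJ (`fetch` below is a stand-in for the actual HTTP call to Solr):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(query):
    # Stand-in for an HTTP request to Solr's /select handler.
    return {"query": query, "docs": []}

queries = ["widget_type:usage", "widget_type:recommendation"]

# Submit every request first, then read the responses.
with ThreadPoolExecutor(max_workers=len(queries)) as pool:
    futures = [pool.submit(fetch, q) for q in queries]
    responses = [f.result() for f in futures]

print(len(responses))  # → 2
```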
Dear Solr Team, I need the following clarifications; please check and give me
your suggestions. 1. I want to store and search 200 billion documents (each
document contains 16 fields). Can I achieve this with SolrCloud? 2. For my
case, how many shards and nodes will be
> url="C:/Users/z6mhq/Desktop/data_import/nh_test.xml"
Have you tried url="C:\\Users\\z6mhq/Desktop\\data_import\\nh_test.xml" ?
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
> On 2 Oct 2018 at 17:15, Martin Frank Hansen (MHQ) wrote:
>
> Hi,
>
> I am having some
The Admin UI for DIH will show you the config file as read. So, if nothing is
there, the path is most likely the issue.
You can also provide or update the configuration right in the UI if you
enable debug.
Finally, the config file is reread on every invocation, so you don't
need to restart the core after
Unfortunately, still no luck.
{
  "responseHeader": {
    "status": 0,
    "QTime": 8},
  "initArgs": [
    "defaults", [
      "config", "C:/Users/z6mhq/Desktop/nh/nh/conf/data-config.xml"]],
  "command": "full-import",
  "status": "idle",
  "importResponse": "",
  "statusMessages": {
    "Total Requests
On 10/2/2018 9:11 AM, Chuck Reynolds wrote:
Until we move to Solr 7.5 is there a way that we can control sharding with the
core.properties file?
It seems to me that you used to be able to put a core.properties file in the
Solr home path with something like the following.
Thanks for the info, the UI looks interesting... It does read the data-config
correctly, so the problem is probably in this file.
Martin Frank Hansen, Senior Data Analyst
Data, IM & Analytics
Lautrupparken 40-42, DK-2750 Ballerup
E-mail m...@kmd.dk Web www.kmd.dk
Mobile +4525571418
Ok, so then you can switch to debug mode and keep trying to figure it
out. Also try BinFileDataSource or URLDataSource, maybe it will have
an easier way.
Or using relative path (example:
I made it work with the simplest of xml-files with some inspiration from
https://opensolr.com/blog/2011/09/how-to-import-data-from-xml-files-into-your-solr-collection
.
Data-config is now:
And the document is simply:
2165432
5
On 10/2/2018 9:33 AM, Rekha wrote:
Dear Solr Team, I need the following clarifications; please check and give me
your suggestions. 1. I want to store and search 200 billion documents (each
document contains 16 fields). Can I achieve this with SolrCloud? 2. For my case
Don’t optimize. The first article isn’t as clear as it should be. The important
sentence is "Unless you are running into resource problems, it’s best to leave
merging alone."
I’ve been running Solr in production since version 1.3, with several different
kinds and sizes of collections. I’ve
On 10/2/2018 8:55 PM, Ganesh Sethuraman wrote:
We are using 2 node SolrCloud 7.2.1 cluster with external 3 node ZK
ensemble in AWS. There are about 60 collections at any point in time. We
have per JVM max heap of 8GB.
Let's focus for right now on a single Solr machine, rather than the
whole
Try adding echoParams=all and see what comes back. Maybe there is some sort
of "truthiness" mapping.
Still, the solution of catching this in middleware is probably the easiest
path.
Regards,
Alex
On Tue, Oct 2, 2018, 10:55 PM Erick Erickson wrote:
> That does, indeed, look odd. I might
Thanks a lot Shawn for your reply,
For what you said:
Shawn Heisey-2 wrote
> With a change to the log4j configuration file, you can direct all logs
> created by the DIH classes to a separate file, no code changes needed.
Since I'm a newbie regarding log4j, can you please give me an example
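What Shawn describes might look like this, assuming the log4j2.xml that ships with Solr 7.4+ (the appender name, file name, and layout pattern below are made up for illustration):

```xml
<!-- Inside <Appenders>: a dedicated file for DIH log output -->
<RollingFile name="DihLogFile"
             fileName="${sys:solr.log.dir}/dih.log"
             filePattern="${sys:solr.log.dir}/dih.log.%i">
  <PatternLayout pattern="%d{ISO8601} [%t] %-5p %c %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="32 MB"/>
  </Policies>
  <DefaultRolloverStrategy max="10"/>
</RollingFile>

<!-- Inside <Loggers>: send the DIH classes there and nowhere else -->
<Logger name="org.apache.solr.handler.dataimport"
        level="info" additivity="false">
  <AppenderRef ref="DihLogFile"/>
</Logger>
```

additivity="false" keeps DIH messages out of the main solr.log; drop it if you want them in both files. Older Solr versions use log4j 1.2, where the equivalent is a separate appender plus a `log4j.logger.org.apache.solr.handler.dataimport` entry in log4j.properties.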
That does, indeed, look odd. I might point out that specifying
facet.prefix without giving it a value is rather odd, but still that
should default to not specifying facet.prefix at all.
Sounds like a JIRA is in order but I doubt it'll get a very high
priority unless you supply a patch since the
The problem you're at now is that, having run optimize, that single
massive segment will accumulate deletes until it has < 2.5G "live"
documents. So once you do optimize (and until you get to Solr 7.5),
unless you can live with this one segment accumulating deletes for a
very long time, you must
Hi
We are using 2 node SolrCloud 7.2.1 cluster with external 3 node ZK
ensemble in AWS. There are about 60 collections at any point in time. We
have per JVM max heap of 8GB.
The problem is: we are seeing a few replicas of a few collections in the
"recovering" state and a few in the "down" state. Since we have 2
Hi,
Is there a way to set the log directory for a dih request and the log file
name?
thanks in advance...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Hello,
I got strange behavior while using the facet parameter. When I do a query with
the facet.prefix parameter set, I don't get the "true" entries of the facet.
Example call:
https://solr.url/solr/preview/select?facet.field=internal_b&facet=on&fl=id&indent=on&q=*:*&wt=json&facet.prefix=
The result is:
"facet_counts": {
I'm not sure which log file you are referring to. You can configure where Solr
puts its log files through the SOLR_LOGS_DIR environment setting, see
https://lucene.apache.org/solr/guide/7_4/configuring-logging.html#permanent-logging-settings
I know that Solr logs the DIH operations (and most other operations) in the
server\logs\solr.log file. What I want is to configure the DIH requests to
be logged in another path, with another name if possible.
Hi,
I have been using Solr for a while for an online web store. After a search, a
filter box appears where the user can filter results by many attributes. My
question is how I can make the filter box show only the available options
based on the results. For example, if the attribute "type" can be
1, 2,
Hi Tamas,
Maybe I am missing the point and you already discarded that option, but you
should be able to cover such cases with simple faceting?
Thanks,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
On 10/2/2018 4:55 AM, Tamás Barta wrote:
I have been using Solr for a while for an online web store. After a search, a
filter box appears where the user can filter results by many attributes. My
question is how I can make the filter box show only the available options
based on the results. For
Thank you for the answers!
Is it possible to get the facet result and the search results with only one
query? Or do I have to send two queries to Solr (one for the search results
and one for the facets)?
On Tue, Oct 2, 2018 at 1:02 PM Shawn Heisey wrote:
> On 10/2/2018 4:55 AM, Tamás Barta wrote:
>
On 10/2/2018 3:33 AM, lala wrote:
I know that Solr logs the DIH operations (and most other operations) in the
server\logs\solr.log file. What I want is to configure the DIH requests to
be logged in another path, with another name if it's possible.
DIH doesn't make its own logfile. Just like the
On 10/2/2018 5:03 AM, Tamás Barta wrote:
Thank you for the answers!
Is it possible to get the facet result and the search results with only one
query? Or do I have to send two queries to Solr (one for the search results
and one for the facets)?
It only requires one query. You just add facet
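A single request of this shape (the query and the `type` field are hypothetical) returns both the matching documents and the facet counts for the filter box:

```python
from urllib.parse import urlencode

params = urlencode({
    "q": "category:shoes",   # hypothetical search
    "rows": 20,
    "facet": "true",
    "facet.field": "type",   # hypothetical attribute field
    "facet.mincount": 1,     # hide options with zero hits in this result set
    "wt": "json",
})
print(params)
```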