I am trying to write a query to get the stats below for orders data:
find the total order count, sum(Quantity), and sum(Cost) for a specified date
range in gaps of one day. For example, if the date range spans 10 days, return
these results for each of the 10 days.
Solr Version : 5.2.1
Example Order Solr Doc
You could start by adding debug=true to your request; it will show the complete
query execution time.
On Tue, Nov 10, 2015 at 9:39 AM John Stric wrote:
> The speed of a particular query has gone from about 42 msec to 66 msec
> without any changes.
>
> How do I go about troubleshooting
Docs are available in the Ref Guide for JSON facets and the JSON Request API:
https://cwiki.apache.org/confluence/display/solr/Faceted+Search
https://cwiki.apache.org/confluence/display/solr/JSON+Request+API
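For the daily order-stats question above, a minimal sketch of a JSON Facet API request (Solr 5.x) that buckets by day and sums two fields. The field names OrderDate, Quantity, and Cost and the 10-day window are assumptions for illustration, not taken from the original message:

```python
import json

# Sketch of a JSON Facet API request body (Solr 5.x). Field names and the
# date range are assumed for illustration.
facet_request = {
    "query": "*:*",
    "limit": 0,  # we only want the facet buckets, not documents
    "facet": {
        "daily": {
            "type": "range",
            "field": "OrderDate",            # assumed date field
            "start": "2015-11-01T00:00:00Z",
            "end": "2015-11-11T00:00:00Z",   # 10-day window
            "gap": "+1DAY",                  # one bucket per day
            "facet": {
                # each bucket's count is returned automatically;
                # these add the two sums per day
                "total_qty": "sum(Quantity)",
                "total_cost": "sum(Cost)"
            }
        }
    }
}

# POST this body to /solr/<collection>/query
body = json.dumps(facet_request)
```

Each daily bucket in the response then carries its count plus total_qty and total_cost.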
On Sun, Oct 25, 2015 at 7:08 PM, hao jin wrote:
> Thanks, Yonik.
>
> When
Maybe this patch could help if ported to 5.x:
https://issues.apache.org/jira/browse/SOLR-4787
BitSetJoinQParserPlugin (aka bjoin): "can provide sub-second response
times on result sets of tens of millions of records from the fromIndex and
hundreds of millions of records from the main query"
You might have this filter in your query analyzer, which can split tokens like "s-pass":
https://cwiki.apache.org/confluence/display/solr/Filter+Descriptions#FilterDescriptions-WordDelimiterFilter
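As a rough illustration (plain Python, not the actual Lucene filter), the default word-delimiter behavior splits a hyphenated token such as "s-pass" into sub-tokens:

```python
import re

def word_delimiter_split(token):
    # Naive approximation of WordDelimiterFilter's splitting on
    # intra-word delimiters such as '-'; the real filter has many more
    # options (case changes, number transitions, catenation, etc.).
    return [part for part in re.split(r"[^A-Za-z0-9]+", token) if part]

tokens = word_delimiter_split("s-pass")  # ["s", "pass"]
```

If queries must match the literal hyphenated form, that splitting is usually what needs adjusting in the analyzer chain.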
On Sun, May 24, 2015 at 5:36 AM, Ryan Yacyshyn ryan.yacys...@gmail.com
wrote:
Thanks all for your suggestions.
You could give this join contrib patch a try:
https://issues.apache.org/jira/browse/SOLR-4787
On Mon, Mar 2, 2015 at 12:04 PM, Matt B mat...@runbox.com wrote:
I've recently inherited a Solr instance that is required to perform
numerous joins between two cores, usually as filter queries,
I think until Solr becomes completely standalone, it could be a major task for
all folks who run Solr as a WAR, or who repackage the Solr WAR Maven release, to
adopt the 5.0 release, since they need to remove Tomcat or any other container
they have in production for running Solr.
Not to mention there will be tools
More detail can be found in the Solr docs:
https://cwiki.apache.org/confluence/display/solr/Suggester
On Thu, Dec 4, 2014 at 6:33 AM, Clemens Wyss DEV clemens...@mysign.ch
wrote:
Enter the factory! ;)
<str name="lookupImpl">org.apache.solr.spelling.suggest.fst.AnalyzingInfixLookupFactory</str>
You can also register for the webinar on Oct 2nd to learn more about Fusion:
http://lucidworks.com/blog/say-hello-to-lucidworks-fusion/
On Tue, Sep 23, 2014 at 7:39 AM, Jack Krupansky j...@basetechnology.com
wrote:
You simply download it yourself and give yourself a demo!!
Thanks for sharing; since Solr may move towards a standalone server in the
future, this (Undertow) could be one option.
On Sat, Sep 13, 2014 at 9:36 PM, William Bell billnb...@gmail.com wrote:
Can we get some stats? Do you have any numbers on performance?
On Sat, Sep 13, 2014 at 3:03 PM, Jayson
Do you have the tag <uniqueKey>id</uniqueKey> defined in your schema? It
is not mandatory to have a unique field, but if you need it then you have to
provide it; otherwise you can remove it. See the wiki page below for more details:
http://wiki.apache.org/solr/SchemaXml#The_Unique_Key_Field
Some options to
This JIRA has some documentation; maybe it will help you:
https://issues.apache.org/jira/browse/SOLR-5683
On Wed, Aug 13, 2014 at 1:28 AM, Harun Reşit Zafer
harun.za...@tubitak.gov.tr wrote:
Hi everyone,
Currently I'm using AnalyzingInfixLookupFactory with a suggestions file
Another option to get JMX data from Solr into Graphite
(http://graphite.wikidot.com/) or Ganglia (http://ganglia.sourceforge.net/) is
using jmxtrans:
https://github.com/jmxtrans/jmxtrans/wiki
On Wed, Aug 6, 2014 at 3:09 AM, rulinma ruli...@gmail.com wrote:
good job .
For a reference implementation to sanitize unknown Solr fields, you can see
the link below:
https://github.com/cloudera/cdk/blob/master/cdk-morphlines/cdk-morphlines-solr-core/src/main/java/com/cloudera/cdk/morphline/solr/SanitizeUnknownSolrFieldsBuilder.java
On Sat, Aug 2, 2014 at 8:24 PM, Umesh
Shalin, is it correlated with how frequently you call commit? Is it a soft
commit or a hard commit? I guess it should be the latter.
Just curious what data it updates to ZooKeeper during a commit.
On Tue, Mar 18, 2014 at 9:12 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
SolrCloud will update
added, since our use case required updating a few numeric fields at a much
higher rate than normal document updates.
On Wed, Nov 20, 2013 at 7:48 AM, Gopal Patwa gopalpa...@gmail.com wrote:
Thanks Chris, I found in our application code it was related to an optimistic
concurrency failure.
On Mon, Mar 3, 2014 at 6:13 PM, Chris Hostetter hossman_luc...@fucit.org wrote:
: Subject: java.lang.Exception: Conflict with StreamingUpdateSolrServer
the fact that you are using
Well said Jack. We are using Solr as a NoSQL solution, as Jack describes, since
Solr version 3.x, and are still using it in production with 4.x on our
StubHub site's most visited page:
https://m.stubhub.com/los-angeles-kings-tickets/los-angeles-kings-los-angeles-staples-center-1-3-2014-4323511/
On Sat,
We have a non-SolrCloud cluster on Solr 4.1, and we often get this error during
indexing. Did anyone else have a similar experience with indexing, or have you
seen this error?
2014-03-01 16:52:16,857 [1a44#fb2a/ActiveListingPump] priority=INFO
app_name=listing-search-index
You could just add a field with default value NOW in schema.xml, for example:
<field name="dateLastIndexed" type="tdate" indexed="true" stored="true"
default="NOW" multiValued="false"/>
On Wed, Feb 26, 2014 at 10:44 PM, pratpor prat...@chatimity.com wrote:
Is it possible to know the indexing time of a
This blog by Erick will help you understand the different commit options and
transaction logs, and it provides some recommendations for the ingestion
process:
http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
On Tue, Feb 25, 2014 at 11:40 AM, Furkan
Solr = 4.6.1 (SolrCloud admin console view attached)
ZooKeeper 3.4.5 = 3-node ensemble
In my test setup, I have a 3-node SolrCloud setup with 2 shards. Today we had a
power failure and all nodes went down.
I started the 3-node ZooKeeper ensemble first, then followed with the 3-node
SolrCloud, and one of
,generation=1}
On Thu, Jan 2, 2014 at 8:20 AM, Gopal Patwa gopalpa...@gmail.com wrote:
I am trying to set up Solr with HDFS following this wiki:
https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS
My Setup:
***
VMWare: Cloudera Quick Start VM 4.4.0-1 default setup (only hdfs1,
hive1,hue1,mapreduce1 and zookeeper1 is running)
If you are using Solr 4.0, there was an issue related to field aliases which
was fixed in Solr 4.3:
https://issues.apache.org/jira/browse/SOLR-4671
You should try to reproduce this issue using the latest Solr version, 4.5.1.
On Fri, Nov 22, 2013 at 11:28 AM, GaneshSe ganeshmail...@gmail.com wrote:
+1 to add this support in Solr
On Wed, Nov 20, 2013 at 7:16 AM, Otis Gospodnetic
otis.gospodne...@gmail.com wrote:
Hi,
Numeric DocValues Updates functionality that came via
https://issues.apache.org/jira/browse/LUCENE-5189 sounds very
valuable, while we wait for full/arbitrary field
My case is also similar to Sujit Pal's, but we have JBoss 6.
On Tue, Nov 12, 2013 at 9:47 AM, Sujit Pal sujit@comcast.net wrote:
In our case, it is because all our other applications are deployed on
Tomcat and ops is familiar with the deployment process. We also had
customizations that
Did you try adding a core.properties file in your core folder with the content
below, changing the values of the name and collection properties?
For example, core1/core.properties:
numShards=1
name=core1
shard=shard1
collection=collection1
On Mon, Nov 11, 2013 at 8:14 AM, michael.boom my_sky...@yahoo.com
Since you have defined the commit options as auto-commit for both hard and soft
commit, you don't have to explicitly call commit from the SolrJ client. And
openSearcher=false for hard commit will make hard commits faster, since it
only makes sure that recent changes are flushed to disk (for durability)
and
Agree with Anshum. Netflix has a very nice supervisor system for ZooKeeper: if
nodes go down it will restart them automatically.
http://techblog.netflix.com/2012/04/introducing-exhibitor-supervisor-system.html
https://github.com/Netflix/exhibitor
On Fri, May 3, 2013 at 6:53 PM, Anshum Gupta
You might want to add openSearcher=false for hard commit, so the hard commit
also acts like a soft commit:
<autoCommit>
  <maxDocs>5</maxDocs>
  <maxTime>30</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
Please post the field definition from the Solr schema.xml for
stats.field=login_attempts
(http://localhost:8080/solr/daycore/select?q=*:*&stats=true&stats.field=login_attempts&rows=0);
it depends how you have defined the stats field.
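The query string above lost its parameter separators in the email; a small sketch of building the same stats request URL with urllib (parameter names from the message, host and core as in the original):

```python
from urllib.parse import urlencode

params = {
    "q": "*:*",
    "stats": "true",
    "stats.field": "login_attempts",
    "rows": 0,  # no documents needed, only the stats section
}
# urlencode inserts the '&' separators and escapes q=*:* safely
url = "http://localhost:8080/solr/daycore/select?" + urlencode(params)
```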
I was looking for this feature too, and it seems SOLR-4470 could work, but I
haven't tried it yet.
+1
On Sun, Apr 14, 2013 at 12:14 PM, Tim Vaillancourt t...@elementspace.com wrote:
I've thought about this too, and have heard of some people running a
lightweight http proxy upstream of Solr.
With
Manually delete the lock file
/data/solr1/example/solr/collection1/./data/index/write.lock,
and restart Solr.
On Sun, Mar 24, 2013 at 9:32 PM, Sandeep Kumar Anumalla
sanuma...@etisalat.ae wrote:
Hi,
I managed to resolve this issue and I am getting the results also. But
this time I am getting a
These classes were deprecated in 4.0 and replaced with
ConcurrentUpdateSolrServer and HttpSolrServer.
On Tue, Jan 22, 2013 at 5:28 PM, Lewis John Mcgibbney
lewis.mcgibb...@gmail.com wrote:
Hi All,
As above, I am upgrading our application and I wish to give it a facelift
to use the
One thing I noticed in solrconfig.xml: it is set to use the Lucene 4.0 index
format, but you mention you are using 4.1:
<luceneMatchVersion>LUCENE_40</luceneMatchVersion>
On Mon, Jan 21, 2013 at 4:26 PM, Brett Hoerner br...@bretthoerner.com wrote:
I have a collection in Solr 4.1 RC1 and
Hi, I am also very interested in this, since we use Solr 4 with NRT, where we
update the index every second but most of the time update only stored fields.
If Solr/Lucene could provide an external datastore without re-indexing, even
for stored fields only, it would be very beneficial for frequent updates.
Did you try the options below?
<mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
  <double name="forceMergeDeletesPctAllowed">0.0</double>
  <double name="reclaimDeletesWeight">10.0</double>
</mergePolicy>
This is from the Javadoc:
/** When forceMergeDeletes is called, we only merge away a
You need to remove the field after reading the Solr doc. When you add a new
field it is appended to a list, so when you try to commit the updated field it
will be multi-valued, while in your schema it is single-valued.
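The failure mode can be sketched in plain Python (this mimics SolrInputDocument's addField accumulation; the helper names here are made up for illustration, not SolrJ API):

```python
def add_field(doc, name, value):
    # mimics SolrInputDocument.addField: values accumulate into a list
    doc.setdefault(name, []).append(value)

def set_field(doc, name, value):
    # mimics removeField followed by addField: replace, don't accumulate
    doc[name] = [value]

doc = {}
add_field(doc, "title", "old value")  # value read back from Solr
add_field(doc, "title", "new value")  # re-added without removing: 2 values now
# committing at this point fails against a single-valued schema field
set_field(doc, "title", "new value")  # remove first, then add: 1 value again
```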
On Oct 10, 2012 9:26 AM, Ravi Solr ravis...@gmail.com wrote:
Hello,
I have a weird problem,
need to do to
avoid doing this awkward workaround.
Ravi Kiran Bhaskar
On Wed, Oct 10, 2012 at 12:36 PM, Gopal Patwa gopalpa...@gmail.com
wrote:
Is it possible to make this improvement, so it can save a lot of time and code,
by allowing ConcurrentUpdateSolrServer to override the default HTTP settings?
On Sun, Apr 29, 2012 at 8:56 PM, Gopal Patwa gopalpa...@gmail.com wrote:
In the SolrJ client trunk build for 4.0,
I have a similar issue using log4j for logging: the CoreContainer class prints
a big stack trace on our JBoss 4.2.2 startup. I am using slf4j 1.5.2.
10:07:45,918 WARN [CoreContainer] Unable to read SLF4J version
java.lang.NoSuchMethodError:
to pass
HttpClient.
-Gopal Patwa
Great! I am going to try the new Solr 4 build from April 23rd.
On Sun, Apr 22, 2012 at 11:35 PM, Sami Siren ssi...@gmail.com wrote:
On Sat, Apr 21, 2012 at 9:57 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
I can reproduce some kind of searcher leak issue here, even w/o
SolrCloud, and I've
Yonik, we have this same issue in production with a Solr 4 trunk build running
on CentOS, JDK 6 64-bit.
I have reported the java.io.IOException: Map failed and Too many open files
issues; it seems there is a searcher leak in Solr which is not closing
searchers, so files are being kept open.
It would be great
Forgot to mention: we are not using SolrCloud yet, but we do use the Lucene NRT
feature. This issue is happening WITHOUT SolrCloud.
On Sat, Apr 21, 2012 at 8:14 PM, Gopal Patwa gopalpa...@gmail.com wrote:
brightsolid Online Publishing
On 14 Apr 2012, at 17:40, Gopal Patwa wrote:
I checked: MMapDirectory.UNMAP_SUPPORTED=true, and below is my system data. Is
there any existing test case to reproduce this issue? I am trying to understand
how I can reproduce this issue with unit/integration
searchers or a
Lucene bug or something...) then you'll eventually run out of file
descriptors (ie, same problem, different manifestation).
Mike McCandless
http://blog.mikemccandless.com
2012/4/11 Gopal Patwa gopalpa...@gmail.com:
I have not changed the mergeFactor; it was 10. Compound index
On Sat, Mar 31, 2012 at 1:26 AM, Gopal Patwa gopalpa...@gmail.com wrote:
*I need help!!*
I am using a Solr 4.0 nightly build with NRT and I often get this error during
auto commit: java.lang.OutOfMemoryError: Map failed. I have searched this
forum and what I found
I am using a Solr 4.0 nightly build with NRT and I often get this error during
auto commit: Too many open files. I have searched this forum and what I found
is that it is related to the OS ulimit setting; please see my ulimit settings
below. I am not sure what ulimit setting I should have for open files.
ulimit -n
</autoCommit>
<autoSoftCommit>
  <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
</autoSoftCommit>
</updateHandler>
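The open-file limit asked about above (`ulimit -n`) can also be read programmatically from a monitoring script; a small Python sketch (Unix-only, via the stdlib `resource` module):

```python
import resource

# Read the process's open-file limits; the soft limit is the value
# `ulimit -n` reports in the shell that launched the process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)
```

For MMapDirectory-heavy NRT setups, watching this soft limit against the actual open-descriptor count is a cheap way to catch a leak before "Too many open files" appears.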
Thanks
Gopal Patwa