Yeah, you'll have to do the conversion yourself (or something internal,
like the currencyField).
Think about it like datetimes. You store everything in UTC (cents), but
display to each user in their own timezone (a different currency, or just
cents converted to full dollars).
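A minimal sketch of that analogy (the helper names are my own, not anything Solr provides): store integer cents as the canonical form and convert only at the edges.

```python
from decimal import Decimal

def dollars_to_cents(amount_str):
    """Parse a decimal dollar string like '1234.56' into integer cents
    (the canonical, 'UTC-like' form you would actually index)."""
    return int(Decimal(amount_str) * 100)

def cents_to_dollars(cents):
    """Format integer cents back into a dollar string for display."""
    return f"{cents // 100}.{cents % 100:02d}"
```

Using Decimal for parsing avoids the float rounding problems that motivated storing cents in the first place.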
On Wed, Dec 7, 2016 at 8:23
But if I index $1234.56 as "123456", won't it affect search or faceting if
I query Solr directly?
Say I search for documents with an amount less than $2000: it will
not match unless, when we do the search, we pass "200000" to Solr?
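Yes, the client has to apply the same conversion on the query side. A sketch (the field name `amount_cents` is my assumption, not from the original thread):

```python
from decimal import Decimal

def cents_range_query(field, max_dollars):
    """Build a Solr range query over a cents-valued field from a dollar
    upper bound, converting dollars to cents before querying."""
    max_cents = int(Decimal(str(max_dollars)) * 100)
    return f"{field}:[* TO {max_cents}]"
```

So a "less than $2000" search becomes a range query against 200000 cents, and facet ranges would be converted the same way.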
Regards,
Edwin
On 7 December 2016 at
Thanks Erick! Should I create a JIRA issue for the same?
Regarding the logs, I have changed the log level to WARN. That may be why
I couldn't get anything from them.
Thanks,
Manohar
On Tue, Dec 6, 2016 at 9:58 PM, Erick Erickson
wrote:
> Most likely reason is
They wanted out-of-the-box solutions.
This is what I found too, that it would have to be custom. I was hoping I
was just not finding something obvious.
Jeff Courtade
M: 240.507.6116
On Dec 6, 2016 7:07 PM, "John Bickerstaff" wrote:
> You know - if I had to build this, I
You know - if I had to build this, I would consider slurping up the
relevant log entries (if they exist) and feeding them to Kafka - then your
people who want to analyze what happened can get those entries again and
again (think of Kafka as a kind of persistent messaging store that can
store log
: Thanks for your reply.
:
: That means the best fieldType to use for money is currencyField, and not
: any other fieldType?
The primary use case for CurrencyField is when you want to do dynamic
currency fluctuations between multiple currency types at query time -- but
to do that you either
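For reference, a CurrencyField declaration in schema.xml looks something like the sketch below (based on the Solr wiki; exact attributes vary by version, and the referenced currency.xml exchange-rate file must exist in the core's conf/ directory):

```xml
<!-- Sketch of a CurrencyField type; values are stored with a currency
     code and converted at query time using rates from currency.xml -->
<fieldType name="currency" class="solr.CurrencyField"
           defaultCurrency="USD" currencyConfig="currency.xml"
           precisionStep="8"/>
<field name="price" type="currency" indexed="true" stored="true"/>
```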
Thanks for your reply.
That means the best fieldType to use for money is currencyField, and not
any other fieldType?
Regards,
Edwin
On 6 December 2016 at 21:33, Dorian Hoxha wrote:
> Don't use float for money (in whatever db).
>
If you can identify currently-logged messages that give you what you need
(even if you have to modify or process them afterwards) you can easily make
a custom log4j config that grabs ONLY what you want and dumps it into a
separate file...
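As a sketch of that log4j config (Solr 5.x/6.x shipped with Log4j 1.2 properties files; the logger and file names here are illustrative), you can route a single logger to its own file and disable additivity so the messages don't also land in the main log:

```properties
# Route only one logger's output to a dedicated audit file
log4j.logger.org.apache.solr.update.processor.LogUpdateProcessorFactory=INFO, audit
log4j.additivity.org.apache.solr.update.processor.LogUpdateProcessorFactory=false

log4j.appender.audit=org.apache.log4j.RollingFileAppender
log4j.appender.audit.File=logs/audit.log
log4j.appender.audit.MaxFileSize=100MB
log4j.appender.audit.MaxBackupIndex=9
log4j.appender.audit.layout=org.apache.log4j.PatternLayout
log4j.appender.audit.layout.ConversionPattern=%d{ISO8601} %m%n
```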
I'm pretty sure I've seen all the requests coming through
bq: maxWarmingSearchers is set to 6
Red flag. If this was done to avoid the warning in the logs about
too many warming searchers, it's a clear indication that you're
committing far too often. Let's see exactly what you're using to post
when you say you're "using the REST API". My bet: each
Hello,
I have an error that has been popping up randomly for the past 3 weeks, and
the randomness of the issue makes it hard to troubleshoot.
I have a service that uses the REST API to index documents (1000 docs at a
time), and in this process I often call the core status API
Cool, thanks for letting us know (and sorry about the typo!)
--
Steve
www.lucidworks.com
> On Dec 6, 2016, at 4:15 PM, Vinay B, wrote:
>
> Yes, that works (apart from the typo in PatternReplaceCharFilterFactory)
>
> Here is my config
>
>
> positionIncrementGap="100"
Yes, that works (apart from the typo in PatternReplaceCharFilterFactory)
Here is my config
On Wed, Nov 30, 2016 at 2:08 PM, Steve Rowe wrote:
> Hi Vinay,
>
> You should be able to use a
There is also the Jetty-level access log, which shows the requests, though
it may not show the HTTP PUT bodies.
Finally, various online monitoring services probably have agents that
integrate with Solr to show what's happening. Usually costs money
though.
Regards,
Alex.
We would like to load a collection and have it replicate out to multiple
clusters. For example we want a US cluster to be able to replicate to
Europe and Asia.
I tried to create two source cdcrRequestHandlers, /cdcr01 and /cdcr02,
each differing by their target ZooKeepers.
When the target handlers
Thanks very much, the trace idea is a brilliant way to dig into it. It did
not occur to me.
Another coworker suggested the custom
http://lucene.apache.org/solr/6_3_0/solr-core/org/apache/solr/update/processor/LogUpdateProcessorFactory.html
but this is beyond my limited abilities.
I will see what
You could turn on trace mode for everything in the Admin UI (under
logs/levels) and see if any of the existing information is sufficient
for your needs. If yes, then you can change the log level in the
configuration just for that class/element.
Alternatively, you could do a custom UpdateRequestProcessor
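For what it's worth, Solr ships a LogUpdateProcessorFactory that can be wired into an update chain in solrconfig.xml; a sketch (the chain name and placement are illustrative, not from this thread):

```xml
<updateRequestProcessorChain name="audit" default="true">
  <!-- Logs a summary of each update request (ids added/deleted, etc.) -->
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```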
Hello,
Could someone point me in the correct direction for this?
I am being asked to setup an "audit.log" or audit trail for writes and
changes to documents.
I do not know where to begin with something like this.
I am guessing it is just configuration of log4j, but that is about as far
as I can
Most likely reason is that the Solr node in question
was not reachable and thus was removed from
live_nodes. Perhaps due to temporary network
glitch, long GC pause or the like. If you're rolling
your logs over it's quite possible that any illuminating
messages were lost. The default 4M size for
Don't use float for money (in whatever db).
https://wiki.apache.org/solr/CurrencyField
What you do is save the money as cents and store that in a long. That's
probably what the currencyField does for you internally.
It provides currency conversion at query-time.
On Tue, Dec 6, 2016 at 4:45 AM,
On Fri, Dec 2, 2016 at 4:36 PM, Chris Rogers
wrote:
> Hi all,
>
> A question regarding using the DIH FileListEntityProcessor with SolrCloud
> (solr 6.3.0, zookeeper 3.4.8).
>
> I get that the config in SolrCloud lives on the Zookeeper node (a different
> server
We have a 16 node cluster of Solr (5.2.1) and 5 node Zookeeper (3.4.6).
All the Solr nodes were registered to Zookeeper (ls /live_nodes) when setup
was done 3 months back. Suddenly, a few days back our search started
failing because one of the Solr nodes (call it s16) was not seen in
Zookeeper,
Hi
Does somebody have a recent logstash config for parsing Solr logs? I'm
using version 6.3.0.
Thanks!
BR
Arkadi
On Mon, 2016-12-05 at 17:47 -0700, Chris Hostetter wrote:
> : One simple solution, in my case would be, now just thinking of it,
> : run the query with no facets and no rows, get the numFound, and set
> : that as facet.limit for the actual query.
>
> ...that assumes that the number of facet
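The two-pass idea quoted above could be sketched as plain parameter dicts (standard Solr query parameters; the HTTP call itself is elided):

```python
def probe_params(q):
    """First pass: no rows, no facets -- just retrieve numFound."""
    return {"q": q, "rows": 0, "facet": "false"}

def facet_params(q, num_found, field):
    """Second pass: use numFound from the probe as facet.limit,
    so no facet value can be cut off."""
    return {"q": q, "rows": 0, "facet": "true",
            "facet.field": field, "facet.limit": num_found}
```

The trade-off, as the reply notes, is that this assumes numFound is a safe upper bound on the number of facet values, at the cost of an extra query.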