Re: Classifier for query intent?

2018-04-04 Thread Georg Sorst
Hi wunder,

this sounds like an interesting topic. Can you elaborate a bit on query
intent classification? Where does the training data come from? Do you
manually assign an intent to a query or can this be done in a
(semi-)automatic way? Do you have a fixed list of possible intents
(something like Google has: informational, navigational, transactional)?

Any pointers to useful links or papers maybe?

Thanks!
Georg

Walter Underwood  schrieb am Di., 3. Apr. 2018 um
01:18 Uhr:

> We are experimenting with a text classifier for determining query intent.
> Anybody have a favorite (or anti-favorite) Java implementation? Speed and
> ease of implementation is important.
>
> Right now, we’re mostly looking at Weka and the Stanford Classifier.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>


Re: Use of blanks in context filter field with AnalyzingInfixLookupFactory

2017-06-24 Thread Georg Sorst
Alfonso,

I've run into similar issues with the context filter query, maybe this is
caused by the StandardTokenizer.

I've written a patch in SOLR-9968 that makes the analyzer for the context
filter query configurable. This has helped me at least. SOLR-7963 also
allows you to change the query parser.

Good luck,
Georg

Alfonso Muñoz-Pomer Fuentes  schrieb am Mo., 12. Juni
2017, 21:11:

> suggestAnalyzerFieldType and queryAnalyzerFieldType are related to the
> field parameter (in my case property_value), not to the contextField.
> Moreover, the change you suggest makes AnalyzingInfixLookupFactory always
> return 0 results (something that’s not discussed in the reference guide and
> has confused other users previously).
>
> Cheers,
> Alfonso
>
>
> > On 12 Jun 2017, at 19:10, Susheel Kumar  wrote:
> >
> > Change below type to string and try...
> >
> > <str name="suggestAnalyzerFieldType">text_en</str>
> > <str name="queryAnalyzerFieldType">text_en</str>
> >
> > Thanks,
> > Susheel
> >
> > On Mon, Jun 12, 2017 at 1:28 PM, Alfonso Muñoz-Pomer Fuentes <
> > amu...@ebi.ac.uk> wrote:
> >
> >> Hi all,
> >>
> >> I was wondering if anybody has experience setting up a suggester with
> >> filtering using a context field that has blanks. Currently this is what
> I
> >> have in solr_config.xml:
> >> 
> >>  
> >>AnalyzingInfixLookupFactory
> >>DocumentDictionaryFactory
> >>species
> >>text_en
> >>text_en
> >>false
> >>  
> >> 
> >>
> >> And this is an example record in my index:
> >> {
> >>  "bioentity_identifier":["ENSG419"],
> >>  "bioentity_type":["ensgene"],
> >>  "species":"homo sapiens",
> >>  "property_value":["R-HSA-162699"],
> >>  "property_name":["pathwayid"],
> >>  "id":"795aedd9-54aa-44c9-99bf-8d195985b7cc",
> >>  "_version_":1570016930397421568
> >> }
> >>
> >> When I request for suggestions like this, everything’s fine:
> >> http://localhost:8983/solr/bioentities/suggest?wt=json;
> >> indent=on=r
> >>
> >> But if I try to narrow by species, I get 0 results:
> >> http://localhost:8983/solr/bioentities/suggest?wt=json;
> >> indent=on=r=homo sapiens
> >>
> >> I’ve tried escaping the space, URL-encode it (with %20 and +), enclosing
> >> it in single quotes, double quotes, square brackets... to no avail
> (getting
> >> 0 results except when I enclose the parameter value with double quotes,
> in
> >> which case I get an exception). In the example record above, species is
> of
> >> type string. In schemaless mode the results are the same.
> >>
> >> Using underscores in the species lets me filter properly, so the
> filtering
> >> mechanism per se works fine.
> >>
> >> Any help greatly appreciated.
> >>
> >> --
> >> Alfonso Muñoz-Pomer Fuentes
> >> Software Engineer @ Expression Atlas Team
> >> European Bioinformatics Institute (EMBL-EBI)
> >> European Molecular Biology Laboratory
> >> Tel:+ 44 (0) 1223 49 2633
> >> Skype: amunozpomer
> >>
> >>
>
> --
> Alfonso Muñoz-Pomer Fuentes
> Software Engineer @ Expression Atlas Team
> European Bioinformatics Institute (EMBL-EBI)
> European Molecular Biology Laboratory
> Tel:+ 44 (0) 1223 49 2633
> Skype: amunozpomer
>
>


Re: Is it possible to support context filtering for FuzzyLookupFactory?

2017-06-22 Thread Georg Sorst
That would indeed be great! Does anyone know if there is a specific reason
for this or has it just not been implemented?

Jeffery Yuan  schrieb am Di., 20. Juni 2017, 22:54:

>
> FuzzyLookupFactory is great as it can still find matches even if users
> mis-spell.
>
> context filtering is also great, as we can show only suggestions matching the
> user's language, doc types etc.
>
> But it's a pity that FuzzyLookupFactory and context filtering (it seems) don't
> work together.
>
> https://cwiki.apache.org/confluence/display/solr/Suggester
> Context filtering lets you filter suggestions by a separate context field,
> such as category, department or any other token. The
> AnalyzingInfixLookupFactory and BlendedInfixLookupFactory currently support
> this feature, when backed by DocumentDictionaryFactory.
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Is-it-possible-to-support-context-filtering-for-FuzzyLookupFactory-tp4342051.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: Using function queries for faceting

2017-04-08 Thread Georg Sorst
Hi Mikhail,

thanks, JSON facet domains may actually be the key! Something like (when a
user from group1 is searching):

1. Facet on price_group1
2. Facet on price_default for all results that do not have the price_group1
field, using a JSON facet domain (see the sketch below)
3. Sum up the facet counts
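
For reference, a minimal sketch of what steps 1 and 2 could look like with the
JSON Facet API, assuming the field names from this thread and plain terms
facets (a query facet restricted to documents without price_group1 stands in
for the switched domain; summing the counts still happens on the client):

json.facet={
  group1_prices  : { type: terms, field: price_group1 },
  default_prices : {
    type  : query,
    q     : "*:* -price_group1:[* TO *]",
    facet : { prices : { type: terms, field: price_default } }
  }
}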

Best,

Georg
Mikhail Khludnev <m...@apache.org> schrieb am Di., 4. Apr. 2017, 17:05:

Exclude the user's products, calculate the default price facet, then facet only
the user's products (in a main query) and sum the facet counts. It can probably
be done by switching domains in JSON facets.

On Tue, Apr 4, 2017 at 5:43 PM, Georg Sorst <georg.so...@gmail.com> wrote:

> Hi Mikhail,
>
> copying the default field was my first attempt as well - however, the
> system in total has over 50.000 users which may have an individual price
on
> every product (even though they usually don't). Still, with the copying
> approach this results in every document having 50.000 price fields. Solr
> completely chokes trying to import this data.
>
> Best,
> Georg
>
> Mikhail Khludnev <m...@apache.org> schrieb am Di., 4. Apr. 2017 um
> 15:28 Uhr:
>
> > Hello Georg,
> > You can probably use {!frange} and a few facet.query parameters enumerating
> > price ranges, but it's probably easier to just copy the default price across
> > all empty price groups at index time.
> >
> >
> > On Tue, Apr 4, 2017 at 1:14 PM, Georg Sorst <georg.so...@gmail.com>
> wrote:
> >
> > > Hi list!
> > >
> > > My documents are eCommerce items. They may have a special price for a
> > > certain group of users, but not for other groups of users; in that
case
> > the
> > > default price should be used. So the documents look like something
like
> > > this:
> > >
> > > item:
> > >   id: 1
> > >   price_default: 11.5
> > >   price_group1: 11.2
> > > item:
> > >   id: 2
> > >   price_default: 12.3
> > >   price_group2: 12.5
> > >
> > > Now when I want to fetch the documents and display the correct price
> for
> > > group1 I can use 'fl=def(price_group1,price_default)'. Works like a
> > charm!
> > > It will return price_group1 for document 1 and price_default for
> document
> > > 2.
> > >
> > > Is there a way to do this for faceting as well? I've unsuccessfully
> > tried:
> > >
> > > * facet.field=def(price_group1,price_default)
> > > * facet.field=effective_price:def(price_group1,price_default)
> > > * facet.field={!func}def(price_group1,price_default)
> > > * facet.field={!func}effective_price:def(price_group1,price_default)
> > > * json.facet={price:"def(price_group1,price_default)"}
> > >
> > > I'm fine with either the "old" facet API or the JSON facets. Any ideas?
> > >
> > > Thanks!
> > > Georg
> > >
> >
> >
> >
> > --
> > Sincerely yours
> > Mikhail Khludnev
> >
>



--
Sincerely yours
Mikhail Khludnev


DisMax search on field only if it exists otherwise fall-back to another

2017-04-05 Thread Georg Sorst
Hi list!

The question was already asked by Neil Prosser sometime in 2015 but
apparently never got a reply, so here's to better luck this time:

At the moment I'm using a DisMax query which looks something like the
following (massively cut-down):

?defType=dismax
&q=some query
&qf=field_one^0.5 field_two^1.0

I've got some localisation work coming up where I'd like to use the value
of one, sparsely populated field if it exists, falling back to another if
it doesn't (rather than duplicating some default value for all territories).

Using the standard query parser I understand I can do the following to get
this behaviour:

?q=if(exist(field_one),(field_one:some query),(field_three:some query))

However, I don't know how I would go about using DisMax for this type of
fallback.

Has anyone tried to do this sort of thing before? Is there a way to use
functions within the qf parameter?


Re: Using function queries for faceting

2017-04-04 Thread Georg Sorst
Hi Mikhail,

copying the default field was my first attempt as well - however, the
system in total has over 50.000 users which may have an individual price on
every product (even though they usually don't). Still, with the copying
approach this results in every document having 50.000 price fields. Solr
completely chokes trying to import this data.

Best,
Georg

Mikhail Khludnev <m...@apache.org> schrieb am Di., 4. Apr. 2017 um
15:28 Uhr:

> Hello Georg,
> You can probably use {!frange} and a few facet.query parameters enumerating
> price ranges, but it's probably easier to just copy the default price across
> all empty price groups at index time.
>
>
> On Tue, Apr 4, 2017 at 1:14 PM, Georg Sorst <georg.so...@gmail.com> wrote:
>
> > Hi list!
> >
> > My documents are eCommerce items. They may have a special price for a
> > certain group of users, but not for other groups of users; in that case
> the
> > default price should be used. So the documents look like something like
> > this:
> >
> > item:
> >   id: 1
> >   price_default: 11.5
> >   price_group1: 11.2
> > item:
> >   id: 2
> >   price_default: 12.3
> >   price_group2: 12.5
> >
> > Now when I want to fetch the documents and display the correct price for
> > group1 I can use 'fl=def(price_group1,price_default)'. Works like a
> charm!
> > It will return price_group1 for document 1 and price_default for document
> > 2.
> >
> > Is there a way to do this for faceting as well? I've unsuccessfully
> tried:
> >
> > * facet.field=def(price_group1,price_default)
> > * facet.field=effective_price:def(price_group1,price_default)
> > * facet.field={!func}def(price_group1,price_default)
> > * facet.field={!func}effective_price:def(price_group1,price_default)
> > * json.facet={price:"def(price_group1,price_default)"}
> >
> > I'm fine with either the "old" facet API or the JSON facets. Any ideas?
> >
> > Thanks!
> > Georg
> >
>
>
>
> --
> Sincerely yours
> Mikhail Khludnev
>


Getting counts for JSON facet percentiles

2017-04-04 Thread Georg Sorst
Hi list!

Is it possible to get counts for the JSON facet percentiles? Of course I
could trivially calculate them myself, they are percentiles after all, but
there are cases where these may be off by one such as calculating the 50th
percentile / median over 3 results.
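
(For context, the aggregation in question looks roughly like this in the JSON
Facet API; the field name price is only an illustration:

json.facet={ price_median : "percentile(price, 50)" }

and what I would like is the number of documents each returned value is based
on.)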

Thanks and best,
Georg


Using function queries for faceting

2017-04-04 Thread Georg Sorst
Hi list!

My documents are eCommerce items. They may have a special price for a
certain group of users, but not for other groups of users; in that case the
default price should be used. So the documents look like something like
this:

item:
  id: 1
  price_default: 11.5
  price_group1: 11.2
item:
  id: 2
  price_default: 12.3
  price_group2: 12.5

Now when I want to fetch the documents and display the correct price for
group1 I can use 'fl=def(price_group1,price_default)'. Works like a charm!
It will return price_group1 for document 1 and price_default for document 2.

Is there a way to do this for faceting as well? I've unsuccessfully tried:

* facet.field=def(price_group1,price_default)
* facet.field=effective_price:def(price_group1,price_default)
* facet.field={!func}def(price_group1,price_default)
* facet.field={!func}effective_price:def(price_group1,price_default)
* json.facet={price:"def(price_group1,price_default)"}

I'm fine with either the "old" facet API or the JSON facets. Any ideas?

Thanks!
Georg


Use Solr Suggest to autocomplete words and suggest co-occurences

2017-03-05 Thread Georg Sorst
Hi all,

is there a way to get the suggester to autocomplete words and suggest
co-occurences instead of suggesting complete field values? The behavior I'm
looking for is quite similar to Google, only based on index values not
actual queries.

Let's say there are two items in the index:

   1. "Adidas running shoe"
   2. "Nike running shoe"

Now when the user types in "running sh" the suggestions should be something
like:

   - "running shoe" (completion)
   - "running shoe adidas" (completion + co-ocurrence)
   - "running shoe nike" (completion + co-ocurrence)

I've actually got this running already through some abomination that abuses
the facets built on the title field. This works surprisingly well, but I
can't find a way to make this error-tolerant ("runing sh" with a single "n"
should provide the same suggestions).

So, any ideas on how to get the suggester to do this in an error-tolerant way?

Thanks and all the best,
Georg


Re: Split words with period in between into separate tokens

2016-10-12 Thread Georg Sorst
You could use a PatternReplaceCharFilter before your tokenizer to replace
the dot with a space character.
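
A minimal sketch of such a field type (the type name is a placeholder, the rest
uses the stock Solr factories):

<fieldType name="text_split_dots" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="\." replacement=" "/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

Since the char filter runs before tokenization, "Co.Ltd" becomes "Co Ltd" and is
then split into two tokens.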

Derek Poh  schrieb am Mi., 12. Okt. 2016 11:38:

> Seems like LetterTokenizerFactory tokenizes on / discards numbers as well. The
> field does have values with numbers in them, therefore it is not applicable.
> Thank you.
>
>
> On 10/12/2016 4:22 PM, Dheerendra Kulkarni wrote:
> > You can use LetterTokenizerFactory instead.
> >
> > Regards,
> > Dheerendra Kulkarni
> >
> > On Wed, Oct 12, 2016 at 6:24 AM, Derek Poh 
> wrote:
> >
> >> Hi
> >>
> >> How can I split words with a period in between into separate tokens?
> >> E.g. "Co.Ltd" => "Co" "Ltd".
> >>
> >> I am using StandardTokenizerFactory and it does not split on the period:
> >> periods (dots) that are not followed by whitespace are kept as part of the
> >> token, including Internet domain names.
> >>
> >> This is the field definition,
> >>
> >> <fieldType name="..." class="solr.TextField" positionIncrementGap="100">
> >>   <analyzer type="index">
> >>     <tokenizer class="solr.StandardTokenizerFactory"/>
> >>     <filter class="solr.StopFilterFactory" words="stopwords.txt" />
> >>   </analyzer>
> >>   <analyzer type="query">
> >>     <tokenizer class="solr.StandardTokenizerFactory"/>
> >>     <filter class="solr.StopFilterFactory" words="stopwords.txt" />
> >>     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> >>   </analyzer>
> >> </fieldType>
> >>
> >> Solr version is 10.4.10.
> >>
> >> Derek
> >>
> >
> >
> >
>
>
>


Re: solr 5 leaving tomcat, will I be the only one fearing about this?

2016-10-09 Thread Georg Sorst
If you can, switch to Docker (https://hub.docker.com/_/solr/). It's a pain
to get everything going the right way, but once it's running you get a lot
of stuff for free:

* Deployment, scaling etc. is all taken care of by the Docker ecosystem
* Testing is a breeze. Need a clean Solr instance to run your application
against? It's just one command line away (see the example below)
* You can version the Dockerfile (Docker build instructions), so you can
version your whole setup. For example we add our own web app to the Docker
image (we shouldn't be doing that, I know) and put the resulting images
into our private Docker repository
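
For example, spinning up and throwing away a test instance is roughly (image
tag, container and core names are just placeholders):

docker run --name solr-test -d -p 8983:8983 solr:6.2
docker exec -it --user=solr solr-test bin/solr create_core -c testcore
# ... run your tests against http://localhost:8983/solr/testcore ...
docker rm -f solr-test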

Aristedes Maniatis  schrieb am So., 9. Okt. 2016 um
02:14 Uhr:

> On 9/10/16 11:11am, Aristedes Maniatis wrote:
> > * deployment is also scattered:
> >  - Solr platform specific package manager (pkg in FreeBSD in my case,
> which I've had to write myself since it didn't exist)
> >  - updating config files above
> >  - writing custom scripts to push Zookeeper configuration into production
> >  - creating collections/cores using the API rather than in a config file
>
> Oh, and pushing additional jars (like a JDBC adapter) into a special
> folder. Again, not easily testable or version controlled.
>
>
> Ari
>
>
>
> --
> -->
> Aristedes Maniatis
> GPG fingerprint CBFB 84B4 738D 4E87 5E5C  5EFA EF6A 7D2E 3E49 102A
>


How to limit resources in multi-tenant systems

2016-09-20 Thread Georg Sorst
Hi list!

I am running a multi-tenant system where the tenants can upload and import
their own data into their respective cores. Fortunately, Solr makes it easy
to make sure that the search indices don't mix and that clients can only
access their "cores".

However, isolating the resource consumption seems a little trickier. Of
course it's fairly easy to limit the number of documents and queries per
second for each tenant, but what if they add a few GBs of text to their
documents? What if they use millions of different filter values? This may
quickly fill up the VM heap and negatively impact the other tenants (I'm
totally fine if the search for that one tenant goes down).

Of course I can check their input data and apply a seemingly endless number
of limits for all kinds of cases but that smells. Is there a more general
solution to limit resource consumption per core? Something along the lines
of "each core may use up to 5% of the heap".

One suggestion I found on the mailing list was to run a separate Solr
instance for each tenant. While this is certainly possible there is a
significant administrative and resource overhead.

Another way may be to go full on SolrCloud and add shards and replicas as
required, but I have to limit the resources I can use.

Thanks!
Georg


Re: How to enable JMX to monitor Jetty

2016-09-15 Thread Georg Sorst
If you are using the Solr's Docker images this is even easier:

FROM solr:6.0.0

USER $SOLR_USER

# Expose JMX port
EXPOSE 1${SOLR_UID}

# Enable JMX
RUN sed -i -e 's/^ENABLE_REMOTE_JMX_OPTS=.*$/ENABLE_REMOTE_JMX_OPTS="true"/' bin/solr.in.sh
RUN sed -i -e 's/^SOLR_JETTY_CONFIG=()$/SOLR_JETTY_CONFIG=("etc\/jetty.xml" "etc\/jetty-jmx.xml")/' bin/solr

Rallavagu <rallav...@gmail.com> schrieb am Mo., 12. Sep. 2016 um 23:56 Uhr:

> I have modified modules/http.mod as follows (for Solr 5.4.1, Jetty 9).
> As you can see, I have referenced jetty-jmx.xml.
>
> #
> # Jetty HTTP Connector
> #
>
> [depend]
> server
>
> [xml]
> etc/jetty-http.xml
> etc/jetty-jmx.xml
>
>
>
> On 5/21/16 3:59 AM, Georg Sorst wrote:
> > Hi list,
> >
> > how do I correctly enable JMX in Solr 6 so that I can monitor Jetty's
> > thread pool?
> >
> > The first step is to set ENABLE_REMOTE_JMX_OPTS="true" in bin/solr.in.sh.
> > This will give me JMX access to JVM properties (garbage collection, class
> > loading etc.) and works fine. However, this will not give me any Jetty
> > specific properties.
> >
> > I've tried manually adding jetty-jmx.xml from the jetty 9 distribution to
> > server/etc/ and then starting Solr with 'java ... start.jar
> > etc/jetty-jmx.xml'. This works fine and gives me access to the right
> > properties, but seems wrong. I could similarly copy the contents of
> > jetty-jmx.xml into jetty.xml but this is not much better either.
> >
> > Is there a correct way for this?
> >
> > Thanks!
> > Georg
> >
>


Re: (Survey/Experiment) Are you interested in a Solr example reading group?

2016-09-14 Thread Georg Sorst
Hi Alexandre,

that's a great idea! Count me in (time permitting...).

I guess the intended outcome is to create documentation issues and fixes?

Best,
Georg

Alexandre Rafalovitch  schrieb am Di., 13. Sep. 2016
18:30:

> Is anybody interested in joining an example reading group for Solr
> (6.2 or latest).
>
> Basic idea: we take one of the examples that ship with Solr and ask
> each other any and all questions related to it. Basic/beginner level
> questions are allowed and welcomed. We could also share
> tools/tips/ideas to make the examples easier to understand, etc.
>
> Examples of potentially interesting questions:
> *) Is this text_rev actually doing anything?
> *) Why does this search against the example not do anything?
> *) How do I remove all comments from this example configuration?
> *) Can I delete this field/type/config section and have the example still
> work?
> *) Where is the documentation that makes "this" tick?
> *) What would this example data look like if it were in XML/CSV/JSONL?
> *) Is this a bug, a feature, or just me?
>
> This would be a separate time-bound group/list/slack (I am
> open-to-suggestions), so only people interested and ready for
> simple/narrow-focus questions be there.
>
> If you are interested (or even if not), I just setup a very basic
> survey to give your opinion at: https://www.surveymonkey.com/r/JH8S666
>
> Regards,
>Alex.
> 
> Newsletter and resources for Solr beginners and intermediates:
> http://www.solr-start.com/
>


Re: How to swap two cores and then unload one of them

2016-09-12 Thread Georg Sorst
Hi Fabrizio,

I guess the correct way to add your modified / extended CoreAdminHandler
would be to register it as a request handler in solrconfig.xml.

Best,
Georg

Fabrizio Fortino <ffort...@gilt.com> schrieb am Mo., 12. Sep. 2016 11:24:

> Hi George,
>
> Thank you for getting back to me.
>
> I am using Solr 6.
>
> I need to use coreContainer because I have created a CoreAdminHandler
> extension.
>
> Thanks,
> Fabrizio
>
> On Sun, Sep 11, 2016 at 6:42 PM, Georg Sorst <georg.so...@gmail.com>
> wrote:
>
> > Hi Fabrizio,
> >
> > which Solr version are you using? In more recent versions (starting with
> 5
> > I think) you should not use the coreContainer directly but instead go
> > through the HTTP API (which also supports the swap operation) or use
> SolrJ.
> >
> > Best,
> > Georg
> >
> > Fabrizio Fortino <ffort...@gilt.com> schrieb am Mo., 29. Aug. 2016
> 11:53:
> >
> > > I have a NON-Cloud Solr and I am trying to use the swap functionality
> to
> > > push an updated core into production without downtime.
> > >
> > > Here are the steps I am executing
> > > 1. Solr is up and running with a single core (name = 'livecore')
> > > 2. I create a new core with the latest version of my documents (name =
> > > 'newcore')
> > > 3. I swap the cores -> coreContainer.swap("newcore", "livecore")
> > > 4. I try to unload "newcore" (that points to the old one) and remove
> all
> > > the related dirs -> coreContainer.unload("newcore", true, true, true)
> > >
> > > The first three operations are OK. But when I try to execute the last
> one
> > > the Solr log starts printing the following messages forever
> > >
> > > 61424 INFO (pool-1-thread-1) [ x:newcore] o.a.s.c.SolrCore Core newcore
> > is
> > > not yet closed, waiting 100 ms before checking again.
> > >
> > > I have opened an issue on this problem (
> > > https://issues.apache.org/jira/browse/SOLR-8757) but I have not
> received
> > > any answer yet.
> > >
> > > In the meantime I have found the following workaround: I try to
> manually
> > > close all the core references before unloading it. Here is the code:
> > >
> > > SolrCore core = coreContainer.create("newcore", coreProps)
> > > coreContainer.swap("newcore", "livecore")
> > > // the old livecore is now newcore, so unload it and remove all the related dirs
> > > SolrCore oldCore = coreContainer.getCore("newCore")
> > > while (oldCore.getOpenCount > 1) {
> > >   oldCore.close()
> > > }
> > > coreContainer.unload("newcore", true, true, true)
> > >
> > >
> > > This seemed to work but there is some race conditions and from time to
> > time
> > > I get a ConcurrentModificationException and then an abnormal CPU
> > > consumption.
> > >
> > > I filed a separate issue on this
> > > https://issues.apache.org/jira/browse/SOLR-9208 but this is not
> > considered
> > > an issue by the Solr committers. The suggestion is to move and discuss
> it
> > > here in the mailing list.
> > >
> > > If this is not an issue, what are the steps to swap to cores and unload
> > one
> > > of them?
> > >
> > > Thanks a lot,
> > > Fabrizio
> > >
> >
>


Re: How to swap two cores and then unload one of them

2016-09-11 Thread Georg Sorst
Hi Fabrizio,

which Solr version are you using? In more recent versions (starting with 5
I think) you should not use the coreContainer directly but instead go
through the HTTP API (which also supports the swap operation) or use SolrJ.

Best,
Georg

Fabrizio Fortino  schrieb am Mo., 29. Aug. 2016 11:53:

> I have a NON-Cloud Solr and I am trying to use the swap functionality to
> push an updated core into production without downtime.
>
> Here are the steps I am executing
> 1. Solr is up and running with a single core (name = 'livecore')
> 2. I create a new core with the latest version of my documents (name =
> 'newcore')
> 3. I swap the cores -> coreContainer.swap("newcore", "livecore")
> 4. I try to unload "newcore" (that points to the old one) and remove all
> the related dirs -> coreContainer.unload("newcore", true, true, true)
>
> The first three operations are OK. But when I try to execute the last one
> the Solr log starts printing the following messages forever
>
> 61424 INFO (pool-1-thread-1) [ x:newcore] o.a.s.c.SolrCore Core newcore is
> not yet closed, waiting 100 ms before checking again.
>
> I have opened an issue on this problem (
> https://issues.apache.org/jira/browse/SOLR-8757) but I have not received
> any answer yet.
>
> In the meantime I have found the following workaround: I try to manually
> close all the core references before unloading it. Here is the code:
>
> SolrCore core = coreContainer.create("newcore", coreProps)
> coreContainer.swap("newcore", "livecore")
> // the old livecore is now newcore, so unload it and remove all the related dirs
> SolrCore oldCore = coreContainer.getCore("newCore")
> while (oldCore.getOpenCount > 1) {
>   oldCore.close()
> }
> coreContainer.unload("newcore", true, true, true)
>
>
> This seemed to work but there is some race conditions and from time to time
> I get a ConcurrentModificationException and then an abnormal CPU
> consumption.
>
> I filed a separate issue on this
> https://issues.apache.org/jira/browse/SOLR-9208 but this is not considered
> an issue by the Solr committers. The suggestion is to move and discuss it
> here in the mailing list.
>
> If this is not an issue, what are the steps to swap to cores and unload one
> of them?
>
> Thanks a lot,
> Fabrizio
>


Re: Monitoring Apache Solr

2016-09-11 Thread Georg Sorst
Hi Hardika,

you can get pretty far with basic Nagios / Icinga plugins (though Sematext
SPM may be a better option if you are operating on large scale).

JMX provides almost complete JVM information (Heap usage, GC info). A
Nagios Plugin to access JMX can be found here:
https://exchange.nagios.org/directory/Plugins/Java-Applications-and-Servers/check_jmx/details

Note that JMX is by default turned off in Solr 6. You can enable it by
adding the appropriate parameters to solr.in.sh (if you are on Linux).

Monitoring node or collection status can be done through Solr's HTTP
interface. There are plenty of Nagios plugins to call a URL and check the
output.
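
For example, a plain URL check against the ping handler (core name is a
placeholder) already covers basic availability:

http://localhost:8983/solr/<core>/admin/ping?wt=json

It returns "status":"OK" while the core is healthy.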

Best,
Georg

Emir Arnautovic  schrieb am Di., 30. Aug.
2016 13:13:

> Hi Hardika,
> You can try Sematext's SPM: http://sematext.com/spm
>
> Regards,
> Emir
>
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
> On 30.08.2016 12:59, vrindavda wrote:
> > Hi Hardika,
> >
> > To stop/restart Solr you can try exploring monit (for Solr: Solr monit), a
> > great tool to monitor your services.
> >
> > Thank you,
> > Vrinda Davda
> >
> >
> >
> > --
> > View this message in context:
> http://lucene.472066.n3.nabble.com/Monitoring-Apache-Solr-tp4293938p4293946.html
> > Sent from the Solr - User mailing list archive at Nabble.com.
>
>


Re: How to replicate config files in master-slave replication without commit on master?

2016-09-06 Thread Georg Sorst
Erick Erickson <erickerick...@gmail.com> schrieb am Di., 6. Sep. 2016 um
03:13 Uhr:

> Yes, replicating the ancillary files (e.g. elevate.xml) is contingent
> on the index
> changing. You wouldn't want these files copied down every time the slave
> polled the master, so the replication is part of index replication if
> (and only if) the
> index on the master has changed.
>

In the documentation for the master-slave replication it says:

> The slave issues a filelist command to get the list of the files. This
command returns the names of the files as well as some metadata (for
example, size, a lastmodified timestamp, an alias if any).

So if there is a lastmodified timestamp, it would be trivial to see which
config files have been updated in the meantime. Any idea why the
replication relies on an update to the index then?

In effect this means that the Query Elevation component does not work with
master-slave replication. Are you aware of any efforts to remedy this?

And finally, would this work with Solr Cloud?


>
> You could manually copy the files from the master to the slave perhaps?
>

We're already doing this with an awkward rsync setup. It works, but it's
ugly, and I was hoping for a better way

Best,
Georg


>
> Best,
> Erick
>
> On Mon, Sep 5, 2016 at 7:49 AM, Georg Sorst <georg.so...@gmail.com> wrote:
> > Hi!
> >
> > According to
> >
> https://cwiki.apache.org/confluence/display/solr/Index+Replication#IndexReplication-ReplicatingConfigurationFiles
> > :
> >
> >> Solr replicates configuration files only when the index itself is
> > replicated. That means even if a configuration file is changed on the
> > master, that file will be replicated only after there is a new
> > commit/optimize on master's index.
> >
> > However, the behaviour of some components can be changed by just
> changing a
> > configuration file. A good example is the Query Elevation component,
> which
> > is configured through conf/elevate.xml. The component explicitly allows
> > configuration during runtime, ie. without changing the index, by just
> > modifying the config.
> >
> > How can I get the Query Elevation component to play nice with
> Master-Slave
> > replication? So far I've tried:
> >
> > * Manually calling commit() after updating the elevate.xml (that seems to
> > be a no-op when the index hasn't changed: "No uncommitted changes. Skipping
> > IW.commit.")
> > * Manually calling update() after updating the elevate.xml (that seems to
> > be a no-op when the index hasn't changed: "No uncommitted changes. Skipping
> > IW.commit.")
> > * Manually calling
> > http://slave_host:port/solr/core_name/replication?command=fetchindex
> (doesn't
> > do anything either)
> >
> > What can I do, short of inserting some dummy data into the index?
>


How to replicate config files in master-slave replication without commit on master?

2016-09-05 Thread Georg Sorst
Hi!

According to
https://cwiki.apache.org/confluence/display/solr/Index+Replication#IndexReplication-ReplicatingConfigurationFiles
:

> Solr replicates configuration files only when the index itself is
replicated. That means even if a configuration file is changed on the
master, that file will be replicated only after there is a new
commit/optimize on master's index.

However, the behaviour of some components can be changed by just changing a
configuration file. A good example is the Query Elevation component, which
is configured through conf/elevate.xml. The component explicitly allows
configuration during runtime, ie. without changing the index, by just
modifying the config.

How can I get the Query Elevation component to play nice with Master-Slave
replication? So far I've tried:

* Manually calling commit() after updating the elevate.xml (that seems to
be a no-op when the index hasn't changed: "No uncommitted changes. Skipping
IW.commit.")
* Manually calling update() after updating the elevate.xml (that seems to
be a no-op when the index hasn't changed: "No uncommitted changes. Skipping
IW.commit.")
* Manually calling
http://slave_host:port/solr/core_name/replication?command=fetchindex (doesn't
do anything either)

What can I do, short of inserting some dummy data into the index?


Re: Stemming Help

2016-06-05 Thread Georg Sorst
Without having more context:

How do you know that it is not working?
What is the output you are getting in the analysis tool?
Do the analysis steps in the output match your configuration?
Are you sure you selected the right field / field type before running the
analysis?

Jamal, Sarfaraz  schrieb am
Fr., 3. Juni 2016 um 20:12 Uhr:

> Hi Guys,
>
> I am following this tutorial:
>
> http://thinknook.com/keyword-stemming-and-lemmatisation-with-apache-solr-2013-08-02/
>
> My (Managed) Schema file looks like this: (in the appropriate places)
>
>
> -   stored="true" />
>
> -positionIncrementGap="100">
> 
> 
> 
> 
>   
>
>  -   stored="true" />
>
> -
>
> I have re-indexed everything -
>
> It is not affecting my search at all -
>
> - from what I can tell from the analysis tool nothing is happening.
>
> Is there something else I am missing or should take a look at, or is it
> possible to debug this? Or some other documentation I can search through?
>
> Thanks!
>
> Sas
>
> -Original Message-
> From: Shawn Heisey [mailto:apa...@elyograg.org]
> Sent: Friday, June 3, 2016 2:02 PM
> To: solr-user@lucene.apache.org
> Subject: Re: [E] Re: Stemming and Managed Schema
>
> On 6/3/2016 9:22 AM, Jamal, Sarfaraz wrote:
> > I would edit the managed-schema, make my changes, shutdown solr? And
> > start it back up and verify it is still there?
>
> That's the sledgehammer approach.  Simple and effective, but Solr does go
> offline for a short time.
>
> > Or is there another way to reload the core/collection?
>
> For SolrCloud:
>
> https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api2
>
> For non-cloud mode:
>
> https://cwiki.apache.org/confluence/display/solr/CoreAdmin+API#CoreAdminAPI-RELOAD
>
> Thanks,
> Shawn
>
>


Re: help need example code of solrj to get schema of a given core

2016-05-31 Thread Georg Sorst
Querying the schema can be done with the Schema API (
https://cwiki.apache.org/confluence/display/solr/Schema+API), which is
fully supported by SolrJ:
http://lucene.apache.org/solr/6_0_0/solr-solrj/org/apache/solr/client/solrj/request/schema/package-summary.html
.
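
A small sketch of what this can look like (the URL and core name are just
placeholders):

import java.util.Map;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.schema.SchemaRequest;
import org.apache.solr.client.solrj.response.schema.SchemaResponse;

public class ListFields {
  public static void main(String[] args) throws Exception {
    // Core URL is just an example
    SolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycore");

    // Fetch all field definitions via the Schema API
    SchemaResponse.FieldsResponse fields = new SchemaRequest.Fields().process(client);
    for (Map<String, Object> field : fields.getFields()) {
      System.out.println(field.get("name") + " -> " + field.get("type"));
    }

    client.close();
  }
}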

Liu, Ming (Ming)  schrieb am Di., 31. Mai 2016 09:41:

> Hello,
>
> I am very new to Solr, and I want to write a simple Java program to get a
> core's schema information, like how many fields there are and the details of
> each field. I spent some time searching on the internet, but cannot find much
> information about this. The SolrJ wiki seems not to have been updated for a
> long time. I am using Solr
> 5.5.0
>
> Hope there are some example code, or please give me some advices, or
> simple hint like which java class I can take a look at.
>
> Thanks in advance!
> Ming
>


Re: Activate Fuzzy Queries for each term by default

2016-05-30 Thread Georg Sorst
AFAIK this is not possible, but it probably doesn't make much sense either. In
my experience fuzzy search should be made explicit to the user (Google does a
pretty good job at this, e.g. "Did you mean" etc.).

What are you trying to achieve and what results do you want to return?

Sebastian Landwehr  schrieb am Mo., 30. Mai 2016 um
09:41 Uhr:

> Hi there,
>
> I got a question regarding fuzzy queries:
>
> I know that I can create a fuzzy query by appending a "~" with the maximal
> edit distance to a word. Is it also possible to automatically create a
> fuzzy query for each search term? I know that I could theoretically append
> the "~" programmatically, but it seems to be hard to handle all features of
> the query syntax.
>
> Thanks and best wishes,
> Sebastian


Re: Recommended api/lib to search Solr using PHP

2016-05-30 Thread Georg Sorst
We've had good experiences with Solarium, so it's probably worth spending
some time in getting it to run.

scott.chu  schrieb am Mo., 30. Mai 2016 um
09:30 Uhr:

>
> We have two legacy in-house applications written in PHP 5.2.6 and 5.5.3.
> Our engineers currently just use fopen with a URL to search Solr, but that's
> not quite enough when we want to do more advanced, complex queries. We've
> tried to use something called 'Solarium', but its installation steps have
> something to do with Symfony, which is kind of complicated. We can't get the
> installation done OK. I'd like to know if there are some other
> better-structured PHP libraries or APIs?
>
> Note: Solr is 5.4.1.
>
> scott.chu,scott@udngroup.com
> 2016/5/30 (週一)
>


How to enable JMX to monitor Jetty

2016-05-21 Thread Georg Sorst
Hi list,

how do I correctly enable JMX in Solr 6 so that I can monitor Jetty's
thread pool?

The first step is to set ENABLE_REMOTE_JMX_OPTS="true" in bin/solr.in.sh.
This will give me JMX access to JVM properties (garbage collection, class
loading etc.) and works fine. However, this will not give me any Jetty
specific properties.

I've tried manually adding jetty-jmx.xml from the jetty 9 distribution to
server/etc/ and then starting Solr with 'java ... start.jar
etc/jetty-jmx.xml'. This works fine and gives me access to the right
properties, but seems wrong. I could similarly copy the contents of
jetty-jmx.xml into jetty.xml but this is not much better either.

Is there a correct way for this?

Thanks!
Georg


Re: Mockito issues with private SolrTestCaseJ4.beforeClass

2016-05-06 Thread Georg Sorst
Anyway, this is now SOLR-9081.

Best,
Georg

Georg Sorst <g.so...@findologic.com> schrieb am So., 24. Apr. 2016 um
17:34 Uhr:

> Hi list,
>
> I just ran into some issues with Mockito and SolrTestCaseJ4. It looks like
> this:
>
> * Mockito requires all @BeforeClass methods in the class hierarchy to be
> "public static void"
> * SolrTestCaseJ4.beforeClass (which is @BeforeClass) is "private static
> void"
> * So I cannot use Mockito as a test runner when my tests are derived from
> SolrTestCaseJ4
>
> Is there a specific reason why it is private? Am I missing something? I'll
> gladly open a JIRA issue if someone can confirm that there is no good
> reason for it.
>
> Best,
> Georg


Re: bf calculation

2016-05-02 Thread Georg Sorst
Hi Jan,

have you tried Solr's debug output? I.e. add
"...&debug=true&debugQuery=true" to your query. This should
answer your question.

Best,
Georg

Jan Verweij - Reeleez  schrieb am Mo., 2. Mai 2016 um
09:47 Uhr:

> Hi,
> I'm trying to understand the exact calculation that takes place when using
> edismax and the bf parameter.
> When searching I get a product returned with a score of 0.625
> Now, I have a field called productranking with a value of 0.5 for this
> specific
> product. If I add bf=field(productranking) to the request the score
> becomes 0.7954515
> How is this calculated?
> Cheers,
> Jan Verweij



Re: Decide on facets from results

2016-04-28 Thread Georg Sorst
Maybe you could do this:

   - MyFacetComponent extends FacetComponent
   - MyFacetComponent.process(): The results will be available at this
   point; look at them and decide which facets to fetch and return (rough
   sketch below)
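
A very rough, untested sketch of that idea (the component would be registered in
solrconfig.xml in place of the stock facet component; the field-picking logic is
obviously domain-specific and only hinted at here):

import java.io.IOException;

import org.apache.solr.common.params.FacetParams;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.handler.component.FacetComponent;
import org.apache.solr.handler.component.ResponseBuilder;

public class MyFacetComponent extends FacetComponent {
  @Override
  public void process(ResponseBuilder rb) throws IOException {
    // The matching documents are available here (rb.getResults().docList);
    // inspect them with your domain logic and pick the facet fields.
    String[] chosenFields = chooseFacetFields(rb);

    // Turn faceting on for the chosen fields and let the stock component count them
    ModifiableSolrParams params = new ModifiableSolrParams(rb.req.getParams());
    params.set(FacetParams.FACET, "true");
    for (String field : chosenFields) {
      params.add(FacetParams.FACET_FIELD, field);
    }
    rb.req.setParams(params);
    rb.doFacets = true;

    super.process(rb);
  }

  private String[] chooseFacetFields(ResponseBuilder rb) {
    // Placeholder for the domain-specific decision
    return new String[] { "price" };
  }
}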


Best,
Georg

Alexandre Rafalovitch  schrieb am Do., 28. Apr. 2016 um
06:44 Uhr:

> What about a custom component? Something similar to spell-checker? Add
> it last after everything else.
>
> It would have to be custom because you have some domain magic about
> how to decide what fields to facet on.
>
> Regards,
>   Alex.
> 
> Newsletter and resources for Solr beginners and intermediates:
> http://www.solr-start.com/
>
>
> On 28 April 2016 at 11:45, Erick Erickson  wrote:
> > Mark:
> >
> > You can do anything you want that Java can do ;). Smart-alec comments
> > aside, there's
> > no mechanism for doing this in Solr that I know of. The first thing
> > I'd do is try the two-query-
> > from-the-client approach to see if it was "fast enough".
> >
> > Best,
> > Erick (the other one)
> >
> > On Wed, Apr 27, 2016 at 1:21 PM, Mark Robinson 
> wrote:
> >> Thanks Eric!
> >> So that will mean another call will be definitely required to SOLR with
> the
> >> facets,  before the results can be send back (with the facet fields
> being
> >> derived traversing through the response).
> >>
> >> I was basically checking on whether in the "process" method (I believe
> >> results will be accessed in the process method), we can dynamically
> >> generate facets after traversing through the results and identifying the
> >> fields for faceting, using some aggregation function or so, without
> having
> >> to make another call using facet=on=, before the
> >> response is send back to the user.
> >>
> >> Cheers!
> >>
> >> On Wed, Apr 27, 2016 at 2:27 PM, Erik Hatcher 
> >> wrote:
> >>
> >>> Results will vary based on how you indexed those fields, but sure…
> >>> facet=on&facet.field=... - with sufficient RAM, lots of fun to be
> >>> had!
> >>>
> >>> —
> >>> Erik Hatcher, Senior Solutions Architect
> >>> http://www.lucidworks.com 
> >>>
> >>>
> >>>
> >>> > On Apr 27, 2016, at 12:13 PM, Mark Robinson  >
> >>> wrote:
> >>> >
> >>> > Hi,
> >>> >
> >>> > If I don't have my facet list at query time, from the results can I
> >>> select
> >>> > some fields and by any means create a facet on them? ie after I get
> the
> >>> > results I want to identify some fields as facets and send back
> facets for
> >>> > them in the response.
> >>> >
> >>> > A kind of very dynamic faceting based on the results!
> >>> >
> >>> > Cld some one pls share their idea.
> >>> >
> >>> > Thanks!
> >>> > Anil.
> >>>
> >>>
>


Re: how to retrieve json facet using solrj

2016-04-25 Thread Georg Sorst
Hi Yangrui,

from what I've gathered the JSON Facets are not supported in SolrJ yet, so
you need to manually work with the response, ie.:
response.getResponse().get("facets")
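
For example, with the json.facet from your mail, digging out the buckets could
look roughly like this (response is the QueryResponse, NamedList comes from
org.apache.solr.common.util):

NamedList<Object> facets = (NamedList<Object>) response.getResponse().get("facets");
NamedList<Object> entities = (NamedList<Object>) facets.get("entities");
List<NamedList<Object>> buckets = (List<NamedList<Object>>) entities.get("buckets");
for (NamedList<Object> bucket : buckets) {
  System.out.println(bucket.get("val") + ": " + bucket.get("count"));
}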

Best,
Georg

Yangrui Guo  schrieb am So., 24. Apr. 2016 um
22:13 Uhr:

> Hello
>
> I use json facet api to get facets. The response returned with facets and
> counts However, when I called the getFacetFields method in SolrJ client, I
> got null results. How can I get the facet results from solrj? I set my
> query as query.setParam("json.facet", "{entities : {type: terms,field:
> class2} }" Am I missing something? Thanks.
>
> Yangrui
>


Re: How can I set the defaultOperator to be AND?

2016-04-25 Thread Georg Sorst
With Solr 6.0 I've had to set mm=100% & q.op=AND for a full AND query (and
mm=1 & q.op=OR for a full OR query).
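
i.e. something along these lines (query and fields are just placeholders):

defType=edismax & q=some query & qf=title description & mm=100% & q.op=AND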

Jan Høydahl  schrieb am Mo., 25. Apr. 2016 um
16:04 Uhr:

> I think a workaround for your specific case could be to set mm=100% &
> q.op=OR (although it used to work for q.op=AND before)
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> > 25. apr. 2016 kl. 14.53 skrev Shawn Heisey :
> >
> > On 4/25/2016 6:39 AM, Bastien Latard - MDPI AG wrote:
> >> Remember:
> >> If I add the following line to the schema.xml, even if I do a search
> >> 'title:"test" OR author:"me"', it will return documents matching
> >> 'title:"test" AND author:"me"':
> >> <solrQueryParser defaultOperator="AND"/>
> >
> > The settings in the schema for default field and default operator were
> > deprecated a long time ago.  I actually have no idea whether they are
> > even supported in newer Solr versions.
> >
> > The q.op parameter controls the default operator, and the df parameter
> > controls the default field.  These can be set in the request handler
> > definition in solrconfig.xml -- usually in "defaults" but there might be
> > reason to put them in "invariants" instead.
> >
> > If you're using edismax, you'd be better off using the mm parameter
> > rather than the q.op parameter.  The behavior you have described above
> > sounds like a change in behavior (some call it a bug) introduced in the
> > 5.5 version:
> >
> > https://issues.apache.org/jira/browse/SOLR-8812
> >
> > If you are using edismax, I suspect that if you set mm=100% instead of
> > q.op=AND (or the schema default operator) that the problem might go away
> > ... but I am not sure.  Someone who is more familiar with SOLR-8812
> > probably should comment.
> >
> > Thanks,
> > Shawn
> >
>


Mockito issues with private SolrTestCaseJ4.beforeClass

2016-04-24 Thread Georg Sorst
Hi list,

I just ran into some issues with Mockito and SolrTestCaseJ4. It looks like
this:

* Mockito requires all @BeforeClass methods in the class hierarchy to be
"public static void"
* SolrTestCaseJ4.beforeClass (which is @BeforeClass) is "private static
void"
* So I cannot use Mockito as a test runner when my tests are derived from
SolrTestCaseJ4

Is there a specific reason why it is private? Am I missing something? I'll
gladly open a JIRA issue if someone can confirm that there is no good
reason for it.

Best,
Georg


ManagedSynonymFilterFactory per core instead of config set?

2016-04-17 Thread Georg Sorst
Hi list!

Is it possible to set synonyms per core when using
the ManagedSynonymFilterFactory, even when using config sets?

What makes me think that this is not possible is that the synonyms are
stored in
$solr_home/configsets/$config_set/conf/_schema_analysis_synonyms_$resource.json.
So when I add some synonyms to core A (which uses $config_set) and then
create core B (which also uses $config_set), then core B will have the same
synonyms as core A. I have validated this by GETing the synonyms for core B.
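
(For reference, $resource here is whatever the managed attribute on the filter
is set to, e.g. a filter like

<filter class="solr.ManagedSynonymFilterFactory" managed="english"/>

ends up in _schema_analysis_synonyms_english.json.)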

Am I doing something wrong, or is this on purpose? Is there a way to manage
synonyms per core? Should I use a different value for $resource for each
core?

Thanks!
Georg


Re: Set Config API user properties with SolrJ

2016-04-17 Thread Georg Sorst
For further reference: I solved this by using an HTTP-based Solr in the
tests. Look at SolrTestCaseJ4.buildJettyConfig for more info
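
Roughly (untested sketch, paths and core name are placeholders; JettySolrRunner
and JettyConfig come from the solr-test-framework):

// e.g. in a @BeforeClass method of the test
JettyConfig jettyConfig = SolrTestCaseJ4.buildJettyConfig("/solr");
JettySolrRunner jetty = new JettySolrRunner("/path/to/test/solr-home", jettyConfig);
jetty.start();

SolrClient client = new HttpSolrClient(jetty.getBaseUrl().toString() + "/mycore");
// ... exercise the Config API over HTTP ...
client.close();
jetty.stop();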

Georg Sorst <g.so...@findologic.com> schrieb am Mo., 11. Apr. 2016 um
10:28 Uhr:

> The issue is here:
>
> org.apache.solr.handler.SolrConfigHandler.handleRequestBody()
>
> This method will check the 'httpMethod' of the request. The
> set-user-property call will only be evaluated if the method is POST.
> Apparently, for non-HTTP requests this will never be true.
>
> I'll gladly write an issue / testcase / patch if someone can give me a
> little help.
>
> Georg Sorst <g.so...@findologic.com> schrieb am So., 10. Apr. 2016 um
> 14:36 Uhr:
>
>> Addendum: Apparently the code works fine with HttpSolrClient, but not
>> with EmbeddedSolrServer (used in our tests). The most recent version I
>> tested this was 5.5.0
>>
>> Georg Sorst <g.so...@findologic.com> schrieb am So., 10. Apr. 2016 um
>> 01:49 Uhr:
>>
>>> Hi,
>>>
>>> how can you set Config API values from SolrJ? Does anyone have an
>>> example for this?
>>>
>>> Here's what I'm currently trying:
>>>
>>> /* Build the structure for the request */
>>> Map<String, String> parameters = new HashMap<String, String>() {{
>>>   put("key", "value");
>>> }};
>>> final NamedList requestParameters = new NamedList<>();
>>> requestParameters.add("set-user-property", parameters);
>>>
>>> /* Build the JSON */
>>> CharArr json = new CharArr();
>>> new SchemaRequestJSONWriter(json).write(requestParameters);
>>> ContentStreamBase.StringStream stringStream = new
>>> ContentStreamBase.StringStream(json.toString());
>>> Collection contentStreams = Collections.
>>> singletonList(stringStream);
>>>
>>> /* Send the request */
>>> GenericSolrRequest request = new
>>> GenericSolrRequest(SolrRequest.METHOD.POST, "/config/overlay", null);
>>> request.setContentStreams(contentStreams);
>>> SimpleSolrResponse response = request.process(new HttpSolrClient("
>>> http://localhost:8983/solr/test"));
>>>
>>> The JSON is looking good, but it's doing... nothing. The response just
>>> contains the default config-overlay contents (znodeVersion). Any idea why?
>>>
>>> Thanks!
>>> Georg


Re: Set Config API user properties with SolrJ

2016-04-11 Thread Georg Sorst
The issue is here:

org.apache.solr.handler.SolrConfigHandler.handleRequestBody()

This method will check the 'httpMethod' of the request. The
set-user-property call will only be evaluated if the method is POST.
Apparently, for non-HTTP requests this will never be true.

I'll gladly write an issue / testcase / patch if someone can give me a
little help.

Georg Sorst <g.so...@findologic.com> schrieb am So., 10. Apr. 2016 um
14:36 Uhr:

> Addendum: Apparently the code works fine with HttpSolrClient, but not with
> EmbeddedSolrServer (used in our tests). The most recent version I tested
> this was 5.5.0
>
> Georg Sorst <g.so...@findologic.com> schrieb am So., 10. Apr. 2016 um
> 01:49 Uhr:
>
>> Hi,
>>
>> how can you set Config API values from SolrJ? Does anyone have an example
>> for this?
>>
>> Here's what I'm currently trying:
>>
>> /* Build the structure for the request */
>> Map<String, String> parameters = new HashMap<String, String>() {{
>>   put("key", "value");
>> }};
>> final NamedList requestParameters = new NamedList<>();
>> requestParameters.add("set-user-property", parameters);
>>
>> /* Build the JSON */
>> CharArr json = new CharArr();
>> new SchemaRequestJSONWriter(json).write(requestParameters);
>> ContentStreamBase.StringStream stringStream = new
>> ContentStreamBase.StringStream(json.toString());
>> Collection contentStreams = Collections.
>> singletonList(stringStream);
>>
>> /* Send the request */
>> GenericSolrRequest request = new
>> GenericSolrRequest(SolrRequest.METHOD.POST, "/config/overlay", null);
>> request.setContentStreams(contentStreams);
>> SimpleSolrResponse response = request.process(new HttpSolrClient("
>> http://localhost:8983/solr/test"));
>>
>> The JSON is looking good, but it's doing... nothing. The response just
>> contains the default config-overlay contents (znodeVersion). Any idea why?
>>
>> Thanks!
>> Georg


Re: Set Config API user properties with SolrJ

2016-04-10 Thread Georg Sorst
Addendum: Apparently the code works fine with HttpSolrClient, but not with
EmbeddedSolrServer (used in our tests). The most recent version I tested
this with was 5.5.0

Georg Sorst <g.so...@findologic.com> wrote on Sun., 10 Apr. 2016 at
01:49:

> Hi,
>
> how can you set Config API values from SolrJ? Does anyone have an example
> for this?
>
> Here's what I'm currently trying:
>
> /* Build the structure for the request */
> Map<String, String> parameters = new HashMap<String, String>() {{
>   put("key", "value");
> }};
> final NamedList requestParameters = new NamedList<>();
> requestParameters.add("set-user-property", parameters);
>
> /* Build the JSON */
> CharArr json = new CharArr();
> new SchemaRequestJSONWriter(json).write(requestParameters);
> ContentStreamBase.StringStream stringStream = new
> ContentStreamBase.StringStream(json.toString());
> Collection<ContentStream> contentStreams = Collections.
> singletonList(stringStream);
>
> /* Send the request */
> GenericSolrRequest request = new
> GenericSolrRequest(SolrRequest.METHOD.POST, "/config/overlay", null);
> request.setContentStreams(contentStreams);
> SimpleSolrResponse response = request.process(new HttpSolrClient("
> http://localhost:8983/solr/test"));
>
> The JSON is looking good, but it's doing... nothing. The response just
> contains the default config-overlay contents (znodeVersion). Any idea why?
>
> Thanks!
> Georg


Set Config API user properties with SolrJ

2016-04-09 Thread Georg Sorst
Hi,

how can you set Config API values from SolrJ? Does anyone have an example
for this?

Here's what I'm currently trying:

/* Build the structure for the request */
Map<String, String> parameters = new HashMap<String, String>() {{
  put("key", "value");
}};
final NamedList requestParameters = new NamedList<>();
requestParameters.add("set-user-property", parameters);

/* Build the JSON */
CharArr json = new CharArr();
new SchemaRequestJSONWriter(json).write(requestParameters);
ContentStreamBase.StringStream stringStream = new
ContentStreamBase.StringStream(json.toString());
Collection<ContentStream> contentStreams = Collections.
singletonList(stringStream);

/* Send the request */
GenericSolrRequest request = new
GenericSolrRequest(SolrRequest.METHOD.POST, "/config/overlay", null);
request.setContentStreams(contentStreams);
SimpleSolrResponse response = request.process(new HttpSolrClient("
http://localhost:8983/solr/test"));

The JSON is looking good, but it's doing... nothing. The response just
contains the default config-overlay contents (znodeVersion). Any idea why?
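
In case it helps, here is the variation I would try next (only a sketch): post
the command to /config instead of /config/overlay, since as far as I can tell
the Config API commands are documented against /config, while /config/overlay
is meant for reading the overlay back.

import java.util.Collections;

import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.util.ContentStream;
import org.apache.solr.common.util.ContentStreamBase;

public class SetUserPropertySketch {
    public static void main(String[] args) throws Exception {
        // Hand-written JSON instead of the writer, just to rule that part out
        String body = "{\"set-user-property\":{\"key\":\"value\"}}";
        ContentStream stream = new ContentStreamBase.StringStream(body);

        GenericSolrRequest request = new GenericSolrRequest(
                SolrRequest.METHOD.POST, "/config", null);
        request.setContentStreams(Collections.singletonList(stream));

        HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/test");
        System.out.println(request.process(client).getResponse());
        client.close();
    }
}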

Thanks!
Georg


Re: Use default field, if more specific field does not exist

2016-04-03 Thread Georg Sorst
Hi Emir,

could this be done with Pivot faceting?

The idea is to use facet.pivot=price_USER_ID,price. Then I should get all
values + number of matching documents for price_USER_ID (sub-faceted by
price, which I can just ignore). Additionally, there should be one facet
value for price_USER_ID for all documents without this field. Due to the
Pivot facet the price_USER_ID=unknown will then be further faceted with the
values of the price field. I can then sum up the number of matching
documents for each value from either price_USER_ID or price (or both) and
present that to the user. Like this:

price_USER_ID: 5$ (2 matching documents)
price_USER_ID: 3 (1)
price_USER_ID: unknown (4)
price: 5 (3)
price: 2 (1)

So the facets presented to the user will be:

price: 5 (5) // 2+3=5
price: 3 (1)

Then when the user selects price:5 the filter query will be (as you
suggested above): price_USER_ID:5 OR (-price_USER_ID:[* TO *] AND price:5)

Do you think this will work? Any idea what the performance might be like?
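
To make this concrete, here is a minimal SolrJ sketch of the request I have in
mind (assuming user ID 1, i.e. the fields price_1 and price, and assuming
facet.missing=true is what produces the "unknown" bucket for documents without
price_1):

import java.util.List;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.PivotField;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;

public class PricePivotSketch {
    public static void main(String[] args) throws Exception {
        HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/test");

        SolrQuery query = new SolrQuery("*:*");
        query.setRows(0);
        query.addFacetPivotField("price_1,price"); // user price first, default price nested below
        query.setFacetMissing(true);               // adds the bucket for docs without price_1

        QueryResponse response = client.query(query);
        NamedList<List<PivotField>> pivots = response.getFacetPivot();
        for (PivotField userPrice : pivots.get("price_1,price")) {
            // getValue() is null for the "unknown" bucket
            System.out.println("price_1: " + userPrice.getValue()
                    + " (" + userPrice.getCount() + ")");
            if (userPrice.getPivot() != null) {
                for (PivotField defaultPrice : userPrice.getPivot()) {
                    System.out.println("  price: " + defaultPrice.getValue()
                            + " (" + defaultPrice.getCount() + ")");
                }
            }
        }
        client.close();
    }
}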

Best,
Georg

Emir Arnautovic <emir.arnauto...@sematext.com> wrote on Mon., 28 March
2016 at 13:56:

> Hi Georg,
> I cannot think of similar trick that would enable you to facet on all
> values (other than applying this trick to buckets of size 1) but would
> warn you about faceting of high cardinality fields such as price. Not
> sure if you have some specific case, but calculating facet for such
> field can be pretty expensive and slow.
> I haven't looked at it in detail, but maybe you could find something
> useful in the new JSON Facet API.
>
> Regards,
> Emir
>
> On 26.03.2016 12:15, Georg Sorst wrote:
> > Hi Emir,
> >
> > that sounds like a great idea and filtering should be just fine!
> >
> > In our case we need the individual price values (not the buckets), just
> > like facet.field=price but with respect to the user prices. Is this
> > possible as well?
> >
> > About the performance: Are there any specific bottlenecks you would
> expect?
> >
> > Best regards,
> > Georg
> >
> > Emir Arnautovic <emir.arnauto...@sematext.com> wrote on Fri., 25 March
> > 2016 at 11:47:
> >
> >> Hi Georg,
> >> One solution that could work on existing schema is to use query faceting
> >> and queries like (for USER_ID = 1, bucket 100 to 200):
> >>
> >> price_1:[100 TO 200] OR (-price_1:[* TO *] AND price:[100 TO 200])
> >>
> >> Same query is used for filtering. What you should test is if
> >> performances are acceptable.
> >>
> >> Thanks,
> >> Emir
> >>
> >> On 24.03.2016 22:31, Georg Sorst wrote:
> >>> Hi list,
> >>>
> >>> we use Solr to search ecommerce products.
> >>>
> >>> Items have a default price which can be overwritten per user. So when
> >>> searching we have to return the user price if it is set, otherwise the
> >>> default price. Same goes for building facets and when filtering by
> price.
> >>>
> >>> What's the best way to achieve this in Solr? We know the user ID when
> >>> sending the request to Solr so we could do something like this:
> >>>
> >>> * Add the default price in the field "price" to the items
> >>> * Add all the user prices in a field like "price_<USER_ID>"
> >>>
> >>> Now for displaying the correct price this is fine, just look if there
> is
> >> a
> >>> field "price_" for this result item, otherwise just display
> the
> >>> value of the "price" field.
> >>>
> >>> The tricky part is faceting and filtering. Which field do we use?
> >>> "price_"? What happens for users that don't have a user price
> >> set
> >>> for an item? In this case the "price_" field does not exist so
> >>> faceting and filtering will not work.
> >>>
> >>> We thought about adding a "price_<USER_ID>" field for every item and
> >> every
> >>> user and fill in the default price for the item if the user does not
> have
> >>> an overwritten price for this item. This would potentially make our
> index
> >>> unnecessarily large. Consider 10,000 items and 10,000 users (quite
> >>> realistic), that's 100,000,000 "price_<USER_ID>" fields, even though
> >> maybe
> >>> only a few users have overwritten prices.
> >>>
> >>> What I've been (unsuccessfully) looking for is some sort of field
> >> fallback
> >>> where I can tell Solr something like "use price_<USER_ID> for the
>

Re: Use default field, if more specific field does not exist

2016-03-26 Thread Georg Sorst
Hi Emir,

that sounds like a great idea and filtering should be just fine!

In our case we need the individual price values (not the buckets), just
like facet.field=price but with respect to the user prices. Is this
possible as well?

About the performance: Are there any specific bottlenecks you would expect?

Best regards,
Georg

Emir Arnautovic <emir.arnauto...@sematext.com> wrote on Fri., 25 March
2016 at 11:47:

> Hi Georg,
> One solution that could work on existing schema is to use query faceting
> and queries like (for USER_ID = 1, bucket 100 to 200):
>
> price_1:[100 TO 200] OR (-price_1:[* TO *] AND price:[100 TO 200])
>
> Same query is used for filtering. What you should test is if
> performances are acceptable.
>
> Thanks,
> Emir
>
> On 24.03.2016 22:31, Georg Sorst wrote:
> > Hi list,
> >
> > we use Solr to search ecommerce products.
> >
> > Items have a default price which can be overwritten per user. So when
> > searching we have to return the user price if it is set, otherwise the
> > default price. Same goes for building facets and when filtering by price.
> >
> > What's the best way to achieve this in Solr? We know the user ID when
> > sending the request to Solr so we could do something like this:
> >
> > * Add the default price in the field "price" to the items
> > * Add all the user prices in a field like "price_<USER_ID>"
> >
> > Now for displaying the correct price this is fine, just look if there is
> a
> > field "price_" for this result item, otherwise just display the
> > value of the "price" field.
> >
> > The tricky part is faceting and filtering. Which field do we use?
> > "price_"? What happens for users that don't have a user price
> set
> > for an item? In this case the "price_" field does not exist so
> > faceting and filtering will not work.
> >
> > We thought about adding a "price_<USER_ID>" field for every item and
> every
> > user and fill in the default price for the item if the user does not have
> > an overwritten price for this item. This would potentially make our index
> > unnecessarily large. Consider 10,000 items and 10,000 users (quite
> > realistic), that's 100,000,000 "price_" fields, even though
> maybe
> > only a few users have overwritten prices.
> >
> > What I've been (unsuccessfully) looking for is some sort of field
> fallback
> > where I can tell Solr something like "use price_ for the
> results,
> > facets and filter queries, and if that does not exist for an item, use
> > price instead". At first sight field aliases seemed like that but turns
> out
> > that just renames the field in the result items.
> >
> > So, is there something like this or is there a better solution anyway?
> >
> > Thanks,
> > Georg
>
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>


Use default field, if more specific field does not exist

2016-03-24 Thread Georg Sorst
Hi list,

we use Solr to search ecommerce products.

Items have a default price which can be overwritten per user. So when
searching we have to return the user price if it is set, otherwise the
default price. Same goes for building facets and when filtering by price.

What's the best way to achieve this in Solr? We know the user ID when
sending the request to Solr so we could do something like this:

* Add the default price in the field "price" to the items
* Add all the user prices in a field like "price_<USER_ID>"

Now for displaying the correct price this is fine, just look if there is a
field "price_" for this result item, otherwise just display the
value of the "price" field.

The tricky part is faceting and filtering. Which field do we use?
"price_"? What happens for users that don't have a user price set
for an item? In this case the "price_" field does not exist so
faceting and filtering will not work.

We thought about adding a "price_<USER_ID>" field for every item and every
user and fill in the default price for the item if the user does not have
an overwritten price for this item. This would potentially make our index
unnecessarily large. Consider 10,000 items and 10,000 users (quite
realistic), that's 100,000,000 "price_<USER_ID>" fields, even though maybe
only a few users have overwritten prices.

What I've been (unsuccessfully) looking for is some sort of field fallback
where I can tell Solr something like "use price_<USER_ID> for the results,
facets and filter queries, and if that does not exist for an item, use
price instead". At first sight field aliases seemed like the answer, but it
turns out they just rename the field in the result items.

So, is there something like this or is there a better solution anyway?

Thanks,
Georg


Re: solr & docker in production

2016-03-15 Thread Georg Sorst
Hi,

sounds great!

Did you run any benchmarks? What's the IO penalty?

Best,
Georg

Jay Potharaju  wrote on Tue., 15 Mar. 2016 04:25:

> Upayavira,
> Thanks for the feedback.  I plan to deploy solr on its own instance rather
> than on instance running multiple applications.
>
> Jay
>
> On Mon, Mar 14, 2016 at 3:19 PM, Upayavira  wrote:
>
> > There is a default Docker image for Solr on the Docker Registry. I've
> > used it to great effect in creating a custom Solr install.
> >
> > The main thing I'd say is that Docker generally encourages you to run
> > many apps on the same host, whereas Solr benefits hugely from a host of
> > its own - so don't be misled into installing Solr alongside lots of
> > other things.
> >
> > Even if the only thing that gets put onto a node is a Docker install,
> > then a Solr Docker image, it is *still* way easier to do than anything
> > else I've tried and still very worth it.
> >
> > Upayavira (who doesn't, yet, have Dockerised Solr in production, but
> > will soon)
> >
> > On Mon, 14 Mar 2016, at 07:53 PM, Jay Potharaju wrote:
> > > Hi,
> > > I was wondering is running solr inside a  docker container. Are there
> any
> > > recommendations for this?
> > >
> > >
> > > --
> > > Thanks
> > > Jay
> >
>
>
>
> --
> Thanks
> Jay Potharaju
>


SolrJ + JSON Facet API

2016-02-24 Thread Georg Sorst
Hi list!

Does SolrJ already wrap the new JSON Facet API? I couldn't find any info
about this.

If not, what's the best way for a Java client to build and send requests
when you want to use the JSON Facets?

On a side note, since the JSON Facet API uses POST I will not be able to
see the requested facets in my Solr logs anymore, right?
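
For what it's worth, one workaround seems to be to pass the whole JSON facet as
a plain json.facet request parameter; a sketch (the field name cat is just an
example). Sent this way it is an ordinary parameter, so it should also show up
in the request log:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;

public class JsonFacetSketch {
    public static void main(String[] args) throws Exception {
        HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/test");

        SolrQuery query = new SolrQuery("*:*");
        query.setRows(0);
        // The whole JSON facet request goes into a single parameter.
        query.add("json.facet", "{categories:{type:terms,field:cat}}");

        QueryResponse response = client.query(query);
        // No typed accessor for JSON facets in SolrJ yet, so read the raw response.
        NamedList<?> facets = (NamedList<?>) response.getResponse().get("facets");
        System.out.println(facets);
        client.close();
    }
}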

Thanks!
Georg


Re: User-defined properties and configsets

2016-01-31 Thread Georg Sorst
Erick Erickson <erickerick...@gmail.com> wrote on Fri., 29 Jan. 2016 at
17:55:

> These are system properties, right? They go in the startup for all of
> your Solr instances scattered about your cluster.
>

No, they will be used in the solrconfig.xml and schema.xml. I just
mentioned the -Dmyproperty=... example because that's one way to get
properties in there.

What I'm looking for is one concise place to define properties that all
cores using this configset will use as a default. I've tried several ways:

* Using the ${property:default} syntax in the configs is no good, because
the same property will occur several times in the configs
* Setting them with the config API is no good, because then they live in my
code, but I'd rather have them in a property file for visibility and
maintenance reasons
* Setting them as system properties (-Dmyproperty=...) is no good, because
that makes our deployment more complicated

So, ideally I can just put a .properties file in the configset that will
provide default values for all cores using this configset.

Of course the properties may be changed from their default values on a
per-core basis, but that's what the config API is for.

So, where do other people put their properties when they use configsets?
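
For illustration, the syntax from the first point, with hypothetical property
and element names (it works, it just has to be repeated wherever the property
is used):

<!-- solrconfig.xml: uses default.rows if it is defined anywhere
     (JVM property, core.properties, solrcore.properties), otherwise 10 -->
<int name="rows">${default.rows:10}</int>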

Best regards,
Georg


>
> The bin/solr script has a -a option for passing additional stuff to the
> JVM...
>
> Best,
> Erick
>
> On Thu, Jan 28, 2016 at 11:50 PM, Georg Sorst <g.so...@findologic.com>
> wrote:
> > Any takers?
> >
> > Georg Sorst <g.so...@findologic.com> wrote on Sun., 24 Jan. 2016
> 00:22:
> >
> >> Hi list!
> >>
> >> I've just started playing with Solr 5 (upgrading from Solr 4) and want
> to
> >> use configsets. I'm currently struggling with how to use user-defined
> >> properties and configsets together.
> >>
> >> My solrconfig.xml contains a few properties. Previously these were in a
> >> solrcore.properties and thus were properly loaded and substituted by
> Solr.
> >>
> >> Now I've moved my configuration to a configset (as I may need to create
> >> several cores with the same config). When I create a core with
> >>
> http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore&configSet=myconfigset
> Solr
> >> tells me:
> >>
> >> Caused by: org.apache.solr.common.SolrException: Error loading solr
> config
> >> from //configsets/myconfigset/conf/solrconfig.xml
> >> at
> >>
> org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:186)
> >> at
> >>
> org.apache.solr.core.ConfigSetService.createSolrConfig(ConfigSetService.java:94)
> >> at
> >>
> org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:74)
> >> ... 30 more
> >> Caused by: org.apache.solr.common.SolrException: No system property or
> >> default value specified for  value:
> >> 
> >> at
> >>
> org.apache.solr.util.PropertiesUtil.substituteProperty(PropertiesUtil.java:66)
> >> ...
> >>
> >> Where should I put my properties so Solr can load them when I create a
> new
> >> core using this config set? From what I read I could specify them as
> system
> >> properties (-Dmyproperty=...) but I'd rather keep them in a file that I
> can
> >> check in.
> >>
> >> Thanks!
> >> Georg
> >>
> >>
> >>
>


Re: User-defined properties and configsets

2016-01-28 Thread Georg Sorst
Any takers?

Georg Sorst <g.so...@findologic.com> wrote on Sun., 24 Jan. 2016 00:22:

> Hi list!
>
> I've just started playing with Solr 5 (upgrading from Solr 4) and want to
> use configsets. I'm currently struggling with how to use user-defined
> properties and configsets together.
>
> My solrconfig.xml contains a few properties. Previously these were in a
> solrcore.properties and thus were properly loaded and substituted by Solr.
>
> Now I've moved my configuration to a configset (as I may need to create
> several cores with the same config). When I create a core with
> http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore&configSet=myconfigset
>  Solr
> tells me:
>
> Caused by: org.apache.solr.common.SolrException: Error loading solr config
> from //configsets/myconfigset/conf/solrconfig.xml
> at
> org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:186)
> at
> org.apache.solr.core.ConfigSetService.createSolrConfig(ConfigSetService.java:94)
> at
> org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:74)
> ... 30 more
> Caused by: org.apache.solr.common.SolrException: No system property or
> default value specified for  value:
> 
> at
> org.apache.solr.util.PropertiesUtil.substituteProperty(PropertiesUtil.java:66)
> ...
>
> Where should I put my properties so Solr can load them when I create a new
> core using this config set? From what I read I could specify them as system
> properties (-Dmyproperty=...) but I'd rather keep them in a file that I can
> check in.
>
> Thanks!
> Georg
>
>
>


User-defined properties and configsets

2016-01-23 Thread Georg Sorst
Hi list!

I've just started playing with Solr 5 (upgrading from Solr 4) and want to
use configsets. I'm currently struggling with how to use user-defined
properties and configsets together.

My solrconfig.xml contains a few properties. Previously these were in a
solrcore.properties and thus were properly loaded and substituted by Solr.

Now I've moved my configuration to a configset (as I may need to create
several cores with the same config). When I create a core with
http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore&configSet=myconfigset
Solr
tells me:

Caused by: org.apache.solr.common.SolrException: Error loading solr config
from //configsets/myconfigset/conf/solrconfig.xml
at
org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:186)
at
org.apache.solr.core.ConfigSetService.createSolrConfig(ConfigSetService.java:94)
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:74)
... 30 more
Caused by: org.apache.solr.common.SolrException: No system property or
default value specified for  value:

at
org.apache.solr.util.PropertiesUtil.substituteProperty(PropertiesUtil.java:66)
...

Where should I put my properties so Solr can load them when I create a new
core using this config set? From what I read I could specify them as system
properties (-Dmyproperty=...) but I'd rather keep them in a file that I can
check in.
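
(Side note: if I read the CoreAdmin documentation right, the CREATE call also
accepts property.* parameters; they end up in the new core's core.properties
and are then available for ${...} substitution, e.g.
http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore&configSet=myconfigset&property.myproperty=somevalue)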

Thanks!
Georg