Solr PHP client

2012-12-13 Thread Romita Saha
Hi,

Can anyone please guide me on using SolrPhpClient? The documents available 
are not clear about where to place SolrPhpClient.

I have downloaded SolrPhpClient and have changed the following lines, 
specifying the path (where the files are present in my computer)

require_once('/home/solr/SolrPhpClient/Apache/Solr/Document.php');
require_once('/home/solr/SolrPhpClient/Apache/Solr/Response.php');

After this I am unable to proceed. What should I index my documents with, 
and how? How should I start my Solr? Where do the conf files go? 
I see there are a few HTML documents inside the folder 
"SolrPhpClient/phpdocs". 

Could someone please help.

Thanks and regards,
Romita 
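
A minimal indexing sketch with SolrPhpClient, assuming the stock
Apache_Solr_Service API and Solr running on localhost:8983; requiring
Service.php is typically enough, since it pulls in Document.php and
Response.php itself, and the field names below are hypothetical and must
exist in your schema.xml:

require_once('/home/solr/SolrPhpClient/Apache/Solr/Service.php');

$solr = new Apache_Solr_Service('localhost', 8983, '/solr/');

$doc = new Apache_Solr_Document();
$doc->id    = 'doc1';          // must match the uniqueKey field in schema.xml
$doc->title = 'hello world';   // hypothetical field

$solr->addDocument($doc);      // send the document to Solr
$solr->commit();               // make it searchable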

Re: score calculation

2012-12-13 Thread Sangeetha
Using Tom's reply I have worked out most of the terms.

The following is my understanding of a single doc's score:
 
5.528805 = (MATCH) sum of:  (sum of scores = 0.08775589 + 5.441049)

0.08775589 = (MATCH) weight(text:sachin in 286) [DefaultSimilarity], result
of: 

0.08775589 = score(doc=286,freq=2.0 = termFreq=2.0 ), product of:  (fieldWeight * queryWeight)

0.06781097 = queryWeight, product of: 

5.856543 = idf(docFreq=18, maxDocs=2443) 

0.011578668 = queryNorm  - the same for every document, so it does not affect ranking 

1.2941253 = fieldWeight in 286, product of:  (tf * idf * fieldNorm)

1.4142135 = tf(freq=2.0), with freq of: 2.0 = termFreq=2.0 

5.856543 = idf(docFreq=18, maxDocs=2443) 

0.15625 = fieldNorm(doc=286) 

5.441049 = (MATCH) weight(type_s:video^10.0 in 286) [DefaultSimilarity],
result of: 

5.441049 = score(doc=286,freq=1.0 = termFreq=1.0 ), product of:  (fieldWeight * queryWeight)

0.793726 = queryWeight, product of: 

10.0 = boost 

6.855072 = idf(docFreq=6, maxDocs=2443) 

0.011578668 = queryNorm 

6.855072 = fieldWeight in 286, product of:  (tf * idf * fieldNorm)

1.0 = tf(freq=1.0), with freq of: 1.0 = termFreq=1.0 
6.855072 = idf(docFreq=6, maxDocs=2443)
1.0 = fieldNorm(doc=286)
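
For what it's worth, the numbers above multiply out exactly as the labels
say; taking the second (boosted) clause:

queryWeight  = boost * idf * queryNorm = 10.0 * 6.855072 * 0.011578668 ≈ 0.793726
fieldWeight  = tf * idf * fieldNorm    = 1.0 * 6.855072 * 1.0 = 6.855072
clause score = queryWeight * fieldWeight ≈ 0.793726 * 6.855072 ≈ 5.441049
total score  = 0.08775589 + 5.441049 ≈ 5.528805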


But I am still not clear on fieldNorm (lengthNorm * index-time boost): how do
I get the index-time boost?

And how is queryWeight calculated?


Thanks,
Sangeetha






Re: Update / replication of offline indexes

2012-12-13 Thread Dikchant Sahi
Yes, we have a uniqueId defined, but merge adds two documents with the same
id. As per my understanding this is how Solr behaves. Correct me if I am
wrong.
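
That matches the Lucene-level behaviour: an index merge (IndexWriter.addIndexes
underneath) bypasses the uniqueKey check entirely, so duplicates have to be
deleted from the target core before merging. A rough sketch of the sequence,
with hypothetical core name, date field and path:

http://localhost:8983/solr/core0/update?commit=true&stream.body=<delete><query>last_modified:[NOW-2MONTHS TO *]</query></delete>
http://localhost:8983/solr/admin/cores?action=mergeindexes&core=core0&indexDir=/path/to/delta/index

followed by a commit on core0 to make the merged documents visible.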

On Fri, Dec 14, 2012 at 2:25 AM, Alexandre Rafalovitch
wrote:

> Do you have IDs defined? How do you expect Solr to know they are duplicate
> records? Maybe the issue is there somewhere.
>
> Regards,
>  Alex
> On 13 Dec 2012 15:17, "Dikchant Sahi"  wrote:
>
> > Hi Alex,
> >
> > You got my point right. What I see is that merge adds duplicate
> > documents. Is there a way to overwrite existing documents in one core
> > with another? Can the merge operation lead to data corruption, say in
> > case the core on the client had uncommitted changes?
> >
> > What would be a better solution for my requirement, merge or indexing
> > XML/JSON?
> >
> > Regards,
> > Dikchant
> >
> > On Thu, Dec 13, 2012 at 6:39 PM, Alexandre Rafalovitch
> > wrote:
> >
> > > Not sure I fully understood this and maybe you already cover that by
> > > 'merge', but if you know what you gave the client last time, you can
> just
> > > build a differential as a second core, then on client mount that second
> > > core and merge it into the first one (e.g. with DIH).
> > >
> > > Just a thought.
> > >
> > > Regards,
> > >Alex.
> > >
> > > Personal blog: http://blog.outerthoughts.com/
> > > LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
> > > - Time is the quality of nature that keeps events from happening all at
> > > once. Lately, it doesn't seem to be working.  (Anonymous  - via GTD
> book)
> > >
> > >
> > >
> > > On Thu, Dec 13, 2012 at 5:28 PM, Dikchant Sahi  > > >wrote:
> > >
> > > > Hi Erick,
> > > >
> > > > Sorry for creating the confusion. By slave, I mean the indexes on the
> > > > client machine will be a replica of the master and not the same as
> > > > the slave in the master-slave model. Below is the detail:
> > > >
> > > > The system is being developed to support a search facility on 1000s
> > > > of systems, a majority of which will be offline.
> > > >
> > > > The idea is that we will have a search system which will be sold on
> > > > a subscription basis. For each subscriber, we will copy the master
> > > > index to their local machine, over a drive or CD. Now, if a
> > > > subscriber comes after 2 months and wants the updates, we just want
> > > > to provide the deltas for 2 months, as the volume of data is huge.
> > > > For this we can think of two approaches:
> > > > 1. Fetch the documents which are less than 2 months old in JSON
> > > > format from master Solr. Copy it to the subscriber machine and index
> > > > those documents. (copy through CD / memory sticks)
> > > > 2. Create separate indexes for each month on our master machine. Copy
> > > > the indexes to the client machine and merge. Prior to merge we need
> > > > to delete records which the new index has, to avoid duplicates.
> > > >
> > > > As long as the setup is new, we will copy the complete index and
> > > > restart Solr. We are not sure of the best approach for copying the
> > > > deltas.
> > > >
> > > > Thanks,
> > > > Dikchant
> > > >
> > > >
> > > >
> > > > On Thu, Dec 13, 2012 at 3:52 AM, Erick Erickson <
> > erickerick...@gmail.com
> > > > >wrote:
> > > >
> > > > > This is somewhat confusing. You say that box2 is the slave, yet
> > > > > they're not connected? Then you need to copy the /data index
> > > > > from box 1 to box 2 manually (I'd have box2 solr shut down at the
> > > > > time) and restart Solr.
> > > > >
> > > > > Why can't the boxes be connected? That's a much simpler way of
> > > > > going about it.
> > > > >
> > > > > Best
> > > > > Erick
> > > > >
> > > > >
> > > > > On Tue, Dec 11, 2012 at 1:04 AM, Dikchant Sahi <
> > contacts...@gmail.com
> > > > > >wrote:
> > > > >
> > > > > > Hi Walter,
> > > > > >
> > > > > > Thanks for the response.
> > > > > >
> > > > > > Commit will help to reflect changes on Box1. We are able to
> > > > > > achieve this. We want the changes to reflect in Box2.
> > > > > >
> > > > > > We have two indexes. Say
> > > > > > Box1: Master & DB has been setup. Data Import runs on this.
> > > > > > Box2: Slave running.
> > > > > >
> > > > > > We want all the updates on Box1 to be merged/present in the
> > > > > > index on Box2. Both the boxes are not connected over n/w. How can
> > > > > > we achieve this?
> > > > > >
> > > > > > Please let me know, if am not clear.
> > > > > >
> > > > > > Thanks again!
> > > > > >
> > > > > > Regards,
> > > > > > Dikchant
> > > > > >
> > > > > > On Tue, Dec 11, 2012 at 11:22 AM, Walter Underwood <
> > > > > wun...@wunderwood.org
> > > > > > >wrote:
> > > > > >
> > > > > > > You do not need to manage online and offline indexes. Commit
> > > > > > > when you are done with your updates and Solr will take care of
> > > > > > > it for you. The changes are not live until you commit.
> > > > > > >
> > > > > > > wunder
> > > > > > >
> > > > > > > On De

facet count distinct and sum group by field

2012-12-13 Thread cmd.ares
lucene index structure: 
product_name   type  price 
--- 
iphone4s  mobile 2000 
iphone4s  mobile 1500 
iphone5   mobile 5000 
iphone5   mobile 5000 
S3  mobile 3000 
intel i3 pc  1000 
intel i5 pc  1500 
 
I want to use Solr like SQL: 
select type, count(distinct product_name) s1, sum(price) s2 group by type 
How to do it with Solr? 
thanks
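
The sum-per-type half is covered by the StatsComponent, and the distinct
product_name count per type can be read off a pivot facet (the number of
child entries under each type value is the distinct count). A sketch against
the stock Solr 4.0 handlers:

?q=*:*&rows=0&stats=true&stats.field=price&stats.facet=type&facet=true&facet.pivot=type,product_name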





Re: solr faceting sum function

2012-12-13 Thread Jack Krupansky
And add &stats.facet=type - and then you would get a sum of price for each 
type.


So,

 ?q=*:*&stats=true&stats.field=price&stats.facet=type
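
With the sample data that should come back with a per-type sum in the stats
section, roughly like this (abbreviated sketch of the 4.x XML response shape):

<lst name="stats">
  <lst name="stats_fields">
    <lst name="price">
      ...
      <lst name="facets">
        <lst name="type">
          <lst name="mobile"><double name="sum">10000.0</double>...</lst>
          <lst name="pc"><double name="sum">2500.0</double>...</lst>
        </lst>
      </lst>
    </lst>
  </lst>
</lst>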

-- Jack Krupansky

-Original Message- 
From: Markus Mirsberger

Sent: Thursday, December 13, 2012 11:41 PM
To: solr-user@lucene.apache.org
Subject: Re: solr faceting sum function

Sorry, stats.field=price

"cmd.ares"  wrote:


lucene index structure:
product_name   type  price
---
iphone4s  mobile 2000
iphone5   mobile 5000
S3  mobile 3000
intel i3 pc  1000
intel i5 pc  1500

I want to use Solr like SQL:
select type, sum(price) s group by type

the result will be like:
mobile  10000
pc      2500

how to do it with solr?
thanks







Re: solr faceting sum function

2012-12-13 Thread Markus Mirsberger
Sorry, stats.field=price

"cmd.ares"  wrote:

>lucene index structure:
>product_name   type  price
>---
>iphone4s  mobile 2000
>iphone5   mobile 5000
>S3  mobile 3000
>intel i3 pc  1000
>intel i5 pc  1500 
>
>I want to use Solr like SQL:
>select type, sum(price) s group by type
>
>the result will be like:
>mobile  10000
>pc      2500
>
>how to do it with solr?
>thanks
>
>
>


Re: solr faceting sum function

2012-12-13 Thread Markus Mirsberger
Hi,

You can add stats=true&stats.field=type to your query.
Then you get a stats section in your result which includes the sum of the 
chosen field.

Cheers,
Markus

"cmd.ares"  wrote:

>lucene index structure:
>product_name   type  price
>---
>iphone4s  mobile 2000
>iphone5   mobile 5000
>S3  mobile 3000
>intel i3 pc  1000
>intel i5 pc  1500 
>
>I want to use Solr like SQL:
>select type, sum(price) s group by type
>
>the result will be like:
>mobile  10000
>pc      2500
>
>how to do it with solr?
>thanks
>
>
>


Re: solr faceting sum function

2012-12-13 Thread Jack Krupansky
You could do it... if you wrote your own ValueSource that went out and did 
the sum of the named field for the selected documents, and then you could use 
it in a function query in the &fl field list.


-- Jack Krupansky

-Original Message- 
From: cmd.ares

Sent: Thursday, December 13, 2012 11:21 PM
To: solr-user@lucene.apache.org
Subject: solr faceting sum function

lucene index structure:
product_name   type  price
---
iphone4s  mobile 2000
iphone5   mobile 5000
S3  mobile 3000
intel i3 pc  1000
intel i5 pc  1500

i want to use solr like sql:
select type,sum(price)s group by type

the resule will be like:
mobile  1
pc2500

how to do it with solr?
thanks






solr faceting sum function

2012-12-13 Thread cmd.ares
lucene index structure:
product_name   type  price
---
iphone4s  mobile 2000
iphone5   mobile 5000
S3  mobile 3000
intel i3 pc  1000
intel i5 pc  1500 

I want to use Solr like SQL:
select type, sum(price) s group by type

the result will be like:
mobile  10000
pc      2500

how to do it with solr?
thanks





Can a Solr (4.0.0) query return position details of matches within a text field

2012-12-13 Thread Michael Willekes
I am indexing documents with Solr 4.0 and also processing the text
using a set of custom entity extractors (people, places, dates, times,
etc). The text field in Solr has Term Vectors enabled as well as term
positions and offsets, primarily so we can make use of Fast Vector
Highlighting. So far, it's working very well.

The entity extractors run on the source text prior to indexing and
record the position and offsets into the source text. The entity data
is stored in an external datasource (which happens to be indexed in a
separate Solr core). Given a numeric character range in the source
document I can easily lookup all of the extracted entities that fall
within that range (i.e. all of the organizations and dates mentioned
between offset 100 and 200).

What I'd like to do is: Issue a query against the source text and
return (in addition to the highlight fragments) the position
information of the query matches within the text field so that I can
issue a secondary query to find co-mentioned entities within n
characters/terms. Alternatively the start/end positions of the
highlight fragments would be sufficient as well.

I'm familiar with the simpler aspects of Solr, but am quite stumped on
this one. Is this possible to do with "out-of-the-box" Solr 4.0?

Regards,
Mike
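
One out-of-the-box piece that may get part way there: since the field already
has term vectors with positions and offsets, the TermVectorComponent can
return them per matching document at query time. A sketch, assuming the /tvrh
handler from the stock example solrconfig.xml; note that it returns the
vectors for all terms of the matched documents, not just the query terms, so
filtering down to the query matches would still happen client-side:

http://localhost:8983/solr/tvrh?q=text:acme&tv.fl=text&tv.positions=true&tv.offsets=true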


if I only need exact search, does frequency/score matter?

2012-12-13 Thread Jie Sun
This is related to my previous post, on which I have not gotten any feedback
yet...

I am going through a practice to reduce the disk usage of the Solr index files.

The first step I took was to move some fields from stored to not stored. This
reduced the size of .fdt by 30-60%.

Very promising... however, I notice the .frq files are taking almost as much
disk space as the .fdt files.

It seems .frq keeps the term frequency information. 

In our application we only care about exact search (legal purpose); we do
not care about ranking search results by relevance (score) at all.

Does this mean I can omit the freqs? Is it feasible in Solr to turn term
frequencies off?
I do need phrase search, so I will have to keep the .prx files, which are
also huge, similar to the .fdt files.

Any suggestions or inputs?
thanks
Jie
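
For reference, the per-field switch in the 4.x schema is
omitTermFreqAndPositions, but as the name says it drops positions together
with frequencies (as far as I can tell, Lucene 4.0's index options have no
positions-without-freqs combination), so it can only go on fields that don't
need phrase queries. A sketch with a hypothetical field name:

<field name="body_nophrase" type="text_general" indexed="true" stored="false"
       omitTermFreqAndPositions="true"/>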





Re: Update / replication of offline indexes

2012-12-13 Thread Alexandre Rafalovitch
Do you have IDs defined? How do you expect Solr to know they are duplicate
records? Maybe the issue is there somewhere.

Regards,
 Alex
On 13 Dec 2012 15:17, "Dikchant Sahi"  wrote:

> Hi Alex,
>
> You got my point right. What I see is that merge adds duplicate documents.
> Is there a way to overwrite existing documents in one core with another?
> Can the merge operation lead to data corruption, say in case the core on
> the client had uncommitted changes?
>
> What would be a better solution for my requirement, merge or indexing
> XML/JSON?
>
> Regards,
> Dikchant
>
> On Thu, Dec 13, 2012 at 6:39 PM, Alexandre Rafalovitch
> wrote:
>
> > Not sure I fully understood this and maybe you already cover that by
> > 'merge', but if you know what you gave the client last time, you can just
> > build a differential as a second core, then on client mount that second
> > core and merge it into the first one (e.g. with DIH).
> >
> > Just a thought.
> >
> > Regards,
> >Alex.
> >
> > Personal blog: http://blog.outerthoughts.com/
> > LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
> > - Time is the quality of nature that keeps events from happening all at
> > once. Lately, it doesn't seem to be working.  (Anonymous  - via GTD book)
> >
> >
> >
> > On Thu, Dec 13, 2012 at 5:28 PM, Dikchant Sahi  > >wrote:
> >
> > > Hi Erick,
> > >
> > > Sorry for creating the confusion. By slave, I mean the indexes on the
> > > client machine will be a replica of the master and not the same as the
> > > slave in the master-slave model. Below is the detail:
> > >
> > > The system is being developed to support a search facility on 1000s of
> > > systems, a majority of which will be offline.
> > >
> > > The idea is that we will have a search system which will be sold on a
> > > subscription basis. For each subscriber, we will copy the master index
> > > to their local machine, over a drive or CD. Now, if a subscriber comes
> > > after 2 months and wants the updates, we just want to provide the
> > > deltas for 2 months, as the volume of data is huge. For this we can
> > > think of two approaches:
> > > 1. Fetch the documents which are less than 2 months old in JSON format
> > > from master Solr. Copy it to the subscriber machine and index those
> > > documents. (copy through CD / memory sticks)
> > > 2. Create separate indexes for each month on our master machine. Copy
> > > the indexes to the client machine and merge. Prior to merge we need to
> > > delete records which the new index has, to avoid duplicates.
> > >
> > > As long as the setup is new, we will copy the complete index and
> > > restart Solr. We are not sure of the best approach for copying the
> > > deltas.
> > >
> > > Thanks,
> > > Dikchant
> > >
> > >
> > >
> > > On Thu, Dec 13, 2012 at 3:52 AM, Erick Erickson <
> erickerick...@gmail.com
> > > >wrote:
> > >
> > > > This is somewhat confusing. You say that box2 is the slave, yet
> > > > they're not connected? Then you need to copy the /data index from
> > > > box 1 to box 2 manually (I'd have box2 solr shut down at the time)
> > > > and restart Solr.
> > > >
> > > > Why can't the boxes be connected? That's a much simpler way of going
> > > > about it.
> > > >
> > > > Best
> > > > Erick
> > > >
> > > >
> > > > On Tue, Dec 11, 2012 at 1:04 AM, Dikchant Sahi <
> contacts...@gmail.com
> > > > >wrote:
> > > >
> > > > > Hi Walter,
> > > > >
> > > > > Thanks for the response.
> > > > >
> > > > > Commit will help to reflect changes on Box1. We are able to achieve
> > > > > this. We want the changes to reflect in Box2.
> > > > >
> > > > > We have two indexes. Say
> > > > > Box1: Master & DB has been setup. Data Import runs on this.
> > > > > Box2: Slave running.
> > > > >
> > > > > We want all the updates on Box1 to be merged/present in the index
> > > > > on Box2. Both the boxes are not connected over n/w. How can we
> > > > > achieve this?
> > > > >
> > > > > Please let me know, if am not clear.
> > > > >
> > > > > Thanks again!
> > > > >
> > > > > Regards,
> > > > > Dikchant
> > > > >
> > > > > On Tue, Dec 11, 2012 at 11:22 AM, Walter Underwood <
> > > > wun...@wunderwood.org
> > > > > >wrote:
> > > > >
> > > > > > You do not need to manage online and offline indexes. Commit
> > > > > > when you are done with your updates and Solr will take care of it
> > > > > > for you. The changes are not live until you commit.
> > > > > >
> > > > > > wunder
> > > > > >
> > > > > > On Dec 10, 2012, at 9:46 PM, Dikchant Sahi wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > How can we do delta update of offline indexes?
> > > > > > >
> > > > > > > We have the master index on which data import will be done.
> > > > > > > The index directory will be copied to the slave machine in case
> > > > > > > of a full update, through CD as the slave/client machine is
> > > > > > > offline.
> > > > > > > So, what should be the approach for getting the delta to the
> > > > > > > slave? I can
> > > > 

Re: Update / replication of offline indexes

2012-12-13 Thread Dikchant Sahi
Hi Alex,

You got my point right. What I see is that merge adds duplicate documents.
Is there a way to overwrite existing documents in one core with another?
Can the merge operation lead to data corruption, say in case the core on
the client had uncommitted changes?

Which would be a better solution for my requirement, merge or indexing
XML/JSON?

Regards,
Dikchant

On Thu, Dec 13, 2012 at 6:39 PM, Alexandre Rafalovitch
wrote:

> Not sure I fully understood this and maybe you already cover that by
> 'merge', but if you know what you gave the client last time, you can just
> build a differential as a second core, then on client mount that second
> core and merge it into the first one (e.g. with DIH).
>
> Just a thought.
>
> Regards,
>Alex.
>
> Personal blog: http://blog.outerthoughts.com/
> LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
> - Time is the quality of nature that keeps events from happening all at
> once. Lately, it doesn't seem to be working.  (Anonymous  - via GTD book)
>
>
>
> On Thu, Dec 13, 2012 at 5:28 PM, Dikchant Sahi  >wrote:
>
> > Hi Erick,
> >
> > Sorry for creating the confusion. By slave, I mean the indexes on the
> > client machine will be a replica of the master and not the same as the
> > slave in the master-slave model. Below is the detail:
> >
> > The system is being developed to support a search facility on 1000s of
> > systems, a majority of which will be offline.
> >
> > The idea is that we will have a search system which will be sold on a
> > subscription basis. For each subscriber, we will copy the master index
> > to their local machine, over a drive or CD. Now, if a subscriber comes
> > after 2 months and wants the updates, we just want to provide the deltas
> > for 2 months, as the volume of data is huge. For this we can think of
> > two approaches:
> > 1. Fetch the documents which are less than 2 months old in JSON format
> > from master Solr. Copy it to the subscriber machine and index those
> > documents. (copy through CD / memory sticks)
> > 2. Create separate indexes for each month on our master machine. Copy the
> > indexes to the client machine and merge. Prior to merge we need to delete
> > records which the new index has, to avoid duplicates.
> >
> > As long as the setup is new, we will copy the complete index and restart
> > Solr. We are not sure of the best approach for copying the deltas.
> >
> > Thanks,
> > Dikchant
> >
> >
> >
> > On Thu, Dec 13, 2012 at 3:52 AM, Erick Erickson  > >wrote:
> >
> > > This is somewhat confusing. You say that box2 is the slave, yet
> > > they're not connected? Then you need to copy the /data index from
> > > box 1 to box 2 manually (I'd have box2 solr shut down at the time) and
> > > restart Solr.
> > >
> > > Why can't the boxes be connected? That's a much simpler way of going
> > > about it.
> > >
> > > Best
> > > Erick
> > >
> > >
> > > On Tue, Dec 11, 2012 at 1:04 AM, Dikchant Sahi  > > >wrote:
> > >
> > > > Hi Walter,
> > > >
> > > > Thanks for the response.
> > > >
> > > > Commit will help to reflect changes on Box1. We are able to achieve
> > > > this. We want the changes to reflect in Box2.
> > > >
> > > > We have two indexes. Say
> > > > Box1: Master & DB has been setup. Data Import runs on this.
> > > > Box2: Slave running.
> > > >
> > > > We want all the updates on Box1 to be merged/present in the index on
> > > > Box2. Both the boxes are not connected over n/w. How can we achieve
> > > > this?
> > > >
> > > > Please let me know, if am not clear.
> > > >
> > > > Thanks again!
> > > >
> > > > Regards,
> > > > Dikchant
> > > >
> > > > On Tue, Dec 11, 2012 at 11:22 AM, Walter Underwood <
> > > wun...@wunderwood.org
> > > > >wrote:
> > > >
> > > > > You do not need to manage online and offline indexes. Commit when
> > > > > you are done with your updates and Solr will take care of it for
> > > > > you. The changes are not live until you commit.
> > > > >
> > > > > wunder
> > > > >
> > > > > On Dec 10, 2012, at 9:46 PM, Dikchant Sahi wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > How can we do delta update of offline indexes?
> > > > > >
> > > > > > We have the master index on which data import will be done. The
> > > > > > index directory will be copied to the slave machine in case of a
> > > > > > full update, through CD as the slave/client machine is offline.
> > > > > > So, what should be the approach for getting the delta to the
> > > > > > slave? I can
> > > > > > think of two approaches.
> > > > > >
> > > > > > 1. Create separate indexes of the delta on the master machine,
> > > > > > copy it to the slave machine and merge. Before merging the
> > > > > > indexes on the client machine, delete all the updated and deleted
> > > > > > documents on the client machine, else merge will add duplicates.
> > > > > > So along with the index, we need to transfer the list of
> > > > > > documents which have been updated/deleted.
> > > > > >
> > > > > > 2. Extract all the documents which 

Re: SEVER error when stopping tomcat using solr webapp

2012-12-13 Thread Chris Hostetter

: SEVERE: The web application [/apache-solr-4.0.0] created a ThreadLocal with
: key of type [org.apache.solr.schema.DateField.ThreadLocalDateFormat] (value

It's a long standing issue...

https://issues.apache.org/jira/browse/SOLR-2357

In practice it shouldn't really cause you any problems.  These are 
small DateFormat objects kept in a ThreadLocal for easy reuse, and tomcat 
is warning you that nothing cleans them up when you are shutting down 
solr -- so they will live as long as the Thread lives.  In theory, if you 
never shut down tomcat, but do dynamically load/unload individual WAR apps in 
tomcat, that could mean a thread leak over time if the Threads are re-used 
forever -- but in practice these Threads aren't used forever; tomcat 
even notes this in its error...

:Threads are going to be renewed over time to try and
: avoid a probable memory leak.

Ideally solr should be better behaved than this and not leave 
anything in a ThreadLocal on app shutdown (just in case those Threads are 
re-used forever and ever) ... we just haven't ever had anyone work up a 
patch to move in that direction.
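
The general shape of such a fix would be a shutdown hook that calls
ThreadLocal.remove(); the catch, and part of why it isn't trivial, is that
remove() only clears the copy belonging to the calling thread, not every
thread that ever touched the field. A sketch, not actual Solr code:

import java.text.DateFormat;
import java.text.SimpleDateFormat;

class DateFormatHolder {
  private static final ThreadLocal<DateFormat> FORMAT = new ThreadLocal<DateFormat>() {
    @Override
    protected DateFormat initialValue() {
      return new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
    }
  };

  static DateFormat get() {
    return FORMAT.get();  // lazily creates one instance per thread
  }

  static void close() {
    FORMAT.remove();      // only clears the current thread's copy
  }
}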



: 
: This seems like something I should be concerned about.
: 
: I am going to post my schema and solrconfig for visibility.
: 
: schema.xml
: [schema.xml was quoted here, but the archive stripped all of the XML
: markup; only stray values such as the uniqueKey "id" survive.]
: SolrConfig.xml
: 
: [solrconfig.xml was quoted here, but the archive stripped the XML markup;
: the surviving fragments include LUCENE_40, an edismax handler with a 0.01
: tie, and these qf boosts:]
:   sku^9.0 upc^9.1 searchKeyword^1.9 series^2.8 productTitle^1.2
: productID^9.0 manufacturer^4.0 masterFinish^1.5 theme^1.1 catego

Re: fieldType custom search

2012-12-13 Thread Otis Gospodnetic
Hi,

We've done something very similar to this before.  We implemented it as a
custom SearchComponent with a pluggable hit (re)ordering mechanism.

Otis
--
SOLR Performance Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html




On Thu, Dec 13, 2012 at 8:06 AM, nihed mbarek  wrote:

> actually, I have as schema albums with
> artist / album_name / album_description
>
> When I make a search without a query the result is (all having the same score):
> artist A
> artist A
> artist A
> artist B
> artist B
> artist B
> artist C
> artist C
> => depends on my indexing process
>
> what I want as result :
> artist A
> artist B
> artist C
> artist A
> artist B
> artist C
> artist A
> artist B
>
> => a circular result to see what I have as artist on my solr
>
>
>
>
>
> On Thu, Dec 13, 2012 at 1:59 PM, Tomás Fernández Löbbe <
> tomasflo...@gmail.com> wrote:
>
> > What do you mean? Could you explain your use case?
> >
> > Tomás
> >
> >
> > On Thu, Dec 13, 2012 at 9:36 AM, nihed mbarek  wrote:
> >
> > > Hello,
> > >
> > > Is it possible to define a custom search for a fieldType on a schema ?
> ?
> > >
> > > Regards,
> > >
> > > --
> > >
> > > M'BAREK Med Nihed
> > >
> >
>
>
>
> --
>
> M'BAREK Med Nihed,
> Fedora Ambassador, TUNISIA, Northern Africa
> http://www.nihed.com
>
> 
>


Re: Is replication an atomic operation?

2012-12-13 Thread Otis Gospodnetic
As far as I know, if replication fails, the old index will still be used.
There will be some performance impact during replication simply because of
network and disk IO, and of course the JVM Solr is in will be doing more
work.  But I think you can throttle replication. Or maybe not; I can't
see anything about it on http://wiki.apache.org/solr/SolrReplication

Otis
--
SOLR Performance Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html




On Thu, Dec 13, 2012 at 2:13 PM, Lan  wrote:

> In our current architecture, we use a staging core to perform full
> re-indexes
> while the live core continues to serve queries. After a full re-index we
> use
> the core admin to swap the live and stage index. Both the live and stage
> core are on the same solr instance.
>
> In our new architecture we want to have the live core and stage core
> running
> on separate solr instances. Using core admin to swap is no longer possible,
> so we use the replication command below to push the staged index to the
> live index.
>
>
> http://search-stage:9084/solr/replication?command=fetchindex&masterUrl=http://search-live:9084/solr/live/replication
>
> Is this operation guaranteed to be atomic? For example if replication fails
> halfway through, the old live index will still be good? Also during
> replication, will the live server continue to serve queries without
> performance penalty?
>
>
>
>


Is replication an atomic operation?

2012-12-13 Thread Lan
In our current architecture, we use a staging core to perform full re-indexes
while the live core continues to serve queries. After a full re-index we use
the core admin to swap the live and stage index. Both the live and stage
core are on the same solr instance.

In our new architecture we want to have the live core and stage core running
on separate solr instances. Using core admin to swap is no longer possible,
so we use the replication command below to push the staged index to the
live index.

http://search-stage:9084/solr/replication?command=fetchindex&masterUrl=http://search-live:9084/solr/live/replication

Is this operation guaranteed to be atomic? For example if replication fails
halfway through, the old live index will still be good? Also during
replication, will the live server continue to serve queries without
performance penalty?





Re: SEVER error when stopping tomcat using solr webapp

2012-12-13 Thread davers
The only custom code I have are 3 custom transformers for my DIH.

Here is the code.

package org.build.com.solr;

/**
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.Transformer;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class CategoriesTransformer extends Transformer {

  public Object transformRow(Map<String, Object> row, Context context) {
    int index = 0;
    Object tf = row.get("categories");

    if (tf != null) {
      // "categories" arrives as a comma-separated list of id:value pairs
      String[] arr = ((String) tf).split(",");
      String tempKey = "";

      for (int i = 0; i < arr.length; i++) {
        if (arr[i].length() > 0) {
          index = arr[i].indexOf(':');

          // build a dynamic field name like categories_<id>_i
          tempKey = "categories_" + arr[i].substring(0, index) + "_i";
          if (row.containsKey(tempKey)) {
            List<String> tempArrayList = (ArrayList<String>) row.get(tempKey);
            tempArrayList.add(arr[i].substring(index + 1, arr[i].length()));
            row.put(tempKey, tempArrayList);
          } else {
            List<String> tempArrayList = new ArrayList<String>();
            tempArrayList.add(arr[i].substring(index + 1, arr[i].length()));
            row.put(tempKey, tempArrayList);
          }
        }
      }

      row.remove("categories");
    }
    return row;
  }
}

package org.build.com.solr;

/**
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.Transformer;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class FacetsTransformer extends Transformer {

  public Object transformRow(Map<String, Object> row, Context context) {
    Object tf = row.get("facets");
    if (tf != null) {
      if (tf instanceof List) {
        // multi-valued: each entry is a name=id=value triple
        List list = (List) tf;
        String tempKey = "";
        for (Object o : list) {
          String[] arr = ((String) o).split("=");
          if (arr.length == 3) {
            // build a dynamic field name from the (sanitized) name and id
            tempKey = arr[0].replaceAll("[^A-Za-z0-9]", "") + "_" + arr[1];
            if (row.containsKey(tempKey)) {
              List<String> tempArrayList = (ArrayList<String>) row.get(tempKey);
              tempArrayList.add(arr[2]);
              row.put(tempKey, tempArrayList);
            } else {
              List<String> tempArrayList = new ArrayList<String>();
              tempArrayList.add(arr[2]);
              row.put(tempKey, tempArrayList);
            }
          }
        }
      } else {
        // single value: put it straight into the row
        String[] arr = ((String) tf).split("=");
        if (arr.length == 3) {
          row.put(arr[0].replaceAll("[^A-Za-z0-9]", "") + "_" + arr[1], arr[2]);
        }
      }
      row.remove("facets");
    }
    return row;
  }
}

package org.build.com.solr;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.Transformer;

/**
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 * ht

Re: Solrj connect to already running solr server

2012-12-13 Thread Chris Hostetter

: I am trying to use the EmbeddedSolrServer to connect to the "general"
: core that is already running:

that is not going to work.  EmbeddedSolrServer is solely for running solr 
entirely contained inside your application, w/o using a servlet container 
-- it doesn't support any of the HTTP based features of Solr (ie: 
replication, solr cloud, distributed search, admin ui, etc...)

: The problem here is that I think this is trying to start a new Solr

Exactly.

: I know that I can also use HttpSolrServer, but I don't really want to
: connect Http when I am already in  the application server.

An HTTP connection to localhost, using things like keep-alive, should have 
almost negligible overhead compared to using EmbeddedSolrServer -- and 
leaves you open to all of the scalability advantages of distributing solr 
to multiple machines.


-Hoss
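
For completeness, the HTTP route from the same JVM is only a few lines with
SolrJ 4.x (base URL and core name assumed; solr-solrj and its dependencies on
the classpath):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class QueryGeneral {
  public static void main(String[] args) throws Exception {
    // points at the already-running "general" core over localhost HTTP
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/general");
    QueryResponse rsp = server.query(new SolrQuery("*:*"));
    System.out.println(rsp.getResults().getNumFound());
    server.shutdown();
  }
}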


Re: Searching for phrase

2012-12-13 Thread Jack Krupansky
Try the Solr Admin Analyzer page to see how Solr is indexing that text. I 
suspect that the ShingleFilter is generating extra terms with positions so 
that PhraseQuery no longer sees the two terms from your quoted phrase as 
being adjacent.


Your second query is simply generating a Boolean AND or OR query for the 
individual terms without regards to their relative position.


-- Jack Krupansky

-Original Message- 
From: Arkadi Colson

Sent: Tuesday, December 11, 2012 10:36 AM
To: solr >> "solr-user@lucene.apache.org"
Subject: Searching for phrase

Hi

My schema looks like this:

[fieldType definition lost: the archive stripped the XML markup; per the
reply above, the analyzer chain apparently included a ShingleFilter]

I inserted these 2 strings into solr:
abcdefg12345 678910
abcdefg12345 xyz 678910

When searching for "abcdefg12345 678910" with quotes I got no result.
Without quotes both strings are found.

SolrObject Object
(
[responseHeader] => SolrObject Object
(
[status] => 0
[QTime] => 38
[params] => SolrObject Object
(
[sort] => score desc
[indent] => on
[collection] => intradesk
[wt] => xml
[version] => 2.2
[rows] => 5
[debugQuery] => true
[fl] =>
id,smsc_module,smsc_modulekey,smsc_userid,smsc_ssid,smsc_description,smsc_content,smsc_courseid,smsc_lastdate,score,metadata_stream_size,metadata_stream_source_info,metadata_stream_name,metadata_stream_content_type,last_modified,author,title,subject
[start] => 0
[q] => (smsc_content:\"abcdefg12345 678910\" ||
smsc_description:\"abcdefg12345 678910\") &&
(smsc_lastdate:[2012-11-11T09:59:51Z TO 2013-12-11T09:48:51Z]) &&
(smsc_ssid:929)
)

)

[response] => SolrObject Object
(
[numFound] => 0
[start] => 0
[docs] =>
)

[debug] => SolrObject Object
(
[rawquerystring] => (smsc_content:\"abcdefg12345 678910\"
|| smsc_description:\"abcdefg12345 678910\") &&
(smsc_lastdate:[2012-11-11T09:59:51Z TO 2013-12-11T09:48:51Z]) &&
(smsc_ssid:929)
[querystring] => (smsc_content:\"abcdefg12345 678910\" ||
smsc_description:\"abcdefg12345 678910\") &&
(smsc_lastdate:[2012-11-11T09:59:51Z TO 2013-12-11T09:48:51Z]) &&
(smsc_ssid:929)
[parsedquery] => +(smsc_content:"abcdefg12345
smsc_content:678910" smsc_description:"abcdefg12345
smsc_content:678910") +smsc_lastdate:[1352627991000 TO 1386755331000]
+smsc_ssid:929
[parsedquery_toString] => +(smsc_content:"abcdefg12345
smsc_content:678910" smsc_description:"abcdefg12345
smsc_content:678910") +smsc_lastdate:[1352627991000 TO 1386755331000]
+smsc_ssid:`#8;#0;#0;#7;!
[QParser] => LuceneQParser
[explain] => SolrObject Object
(
)

)

)

Anybody an idea what's wrong?

--
Met vriendelijke groeten

Arkadi Colson

Smartbit bvba . Hoogstraat 13 . 3670 Meeuwen
T +32 11 64 08 80 . F +32 11 64 08 81



How to configure termvectors to not store positions/offsets

2012-12-13 Thread Tom Burton-West
Hello,

As I understand it, MoreLikeThis only requires term frequencies, not
positions or offsets.  So in order to save disk space I would like to store
termvectors, but without positions and offsets.  Is there documentation
somewhere that
1) confirms that MoreLikeThis only needs term frequencies, and
2) shows how to configure termvectors in Solr schema.xml to only store term
frequencies, and not positions and offsets?

Tom
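
For 2), the three schema attributes are independent and termPositions /
termOffsets simply default to false, so setting only termVectors should be
enough (hypothetical field name; the two explicit false attributes are
redundant but document the intent):

<field name="ocr_text" type="text" indexed="true" stored="true"
       termVectors="true" termPositions="false" termOffsets="false"/>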


Re: score calculation

2012-12-13 Thread Chris Hostetter

: Can anyone explain or provide some links?

https://wiki.apache.org/solr/SolrRelevancyFAQ#How_are_documents_scored


-Hoss


Re: Can a field with defined synonym be searched without the synonym?

2012-12-13 Thread Walter Underwood
Perhaps you could use two indexed fields, one with synonym expansion and one 
without.

wunder
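
A minimal sketch of that layout, with hypothetical names; the synonym list is
applied at index time on one copy only, so a query can hit title (no
synonyms) or title_syn (expanded) as needed:

<fieldType name="text_syn" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<field name="title_syn" type="text_syn" indexed="true" stored="false"/>
<copyField source="title" dest="title_syn"/>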

On Dec 12, 2012, at 11:33 PM, Burgmans, Tom wrote:

> In our case it's the opposite. For our clients it is very important that 
> every synonym gets equal chances in the relevancy calculation. The fact that 
> "nol" scores higher than "net operating loss", simply because its document 
> frequency is lower, is unacceptable and a reason to look for ways to disable 
> the IDF from the score calculation. But that is in fact something I don't 
> like to do since IDF is such an elementary part of the algorithm (and very 
> useful for non-synonym searches).
> 
> Pre-processing synonyms to apply 'reverse weighting' is also a strategy to 
> consider, but I agree with Walter that this is very error-prone; things could 
> get easily out of sync. Moreover, none of our Dev-, QA-, STG-, PRD- 
> environments contain exactly the same content, so it would require a 
> differently tuned synonyms dictionary for each of them...meh...
> 
> In our previous search engine (FAST ESP) we basically switched off IDF, but I 
> am still a bit hoping that there is a more sophisticated solution with Solr.
> 
> 
> -Original Message-
> From: Walter Underwood [mailto:wun...@wunderwood.org]
> Sent: Thursday 13 December 2012 02:30
> To: solr-user@lucene.apache.org
> Subject: Re: Can a field with defined synonym be searched without the synonym?
> 
> All of the applications I've seen with user control over synonym expansion 
> where recall-oriented. The "give me all matches for X" kind of problem. So 
> ranking is not as important.
> 
> wunder
> 
> On Dec 12, 2012, at 5:23 PM, Roman Chyla wrote:
> 
>> Well, this IDF problem has more sides. So, let's say your synonym file
>> contains multi-token synonyms (it does, right? or perhaps you don't need
>> it? well, some people do)
>> 
>> "TV, TV set, TV foo, television"
>> 
>> if you use the default synonym expansion, when you index 'television'
>> 
>> you have increased frequency of also 'set', 'foo', so, the IDF of 'TV' is
>> the same as that of 'television' - but IDF of 'foo' and 'set' has changed
>> (their frequency increased, their IDF decreased) -- TV's have in fact made
>> 'foo' term very frequent and undesirable
>> 
>> So, you might be sure that IDF of 'TV' and 'television' are the same, but
>> you are not aware it has 'screwed' other (desirable) terms - so it really
>> depends. And I wouldn't argue these cases are esoteric.
>> 
>> And finally: there are use cases out there, where people NEED to switch off
>> synonym expansion at will (find only these documents, that contain the word
>> 'TV' and not that bloody 'foo'). This cannot be done if the index contains
>> all synonym terms (unless you have a way to mark the original and the
>> synonym in the index).
>> 
>> roman
>> 
>> 
>> On Wed, Dec 12, 2012 at 12:50 PM, Walter Underwood 
>> wrote:
>> 
>>> Query parsers cannot fix the IDF problem or make query-time synonyms
>>> faster. Query synonym expansion makes more search terms. More search terms
>>> are more work at query time.
>>> 
>>> The IDF problem is real; I've run up against it. The most rare variant of
>>> the synonym has the highest score. This is probably the opposite of what you
>>> want. For me, it was "TV" and "television". Documents with "TV" had higher
>>> scores than those with "television".
>>> 
>>> wunder
>>> 
>>> On Dec 12, 2012, at 9:45 AM, Roman Chyla wrote:
>>> 
 @wunder
 It is a misconception (well, supported by that wiki description) that the
 query time synonym filter has these problems. It is actually the default
 parser that is causing these problems. Look at this if you still think
 that index time synonyms are a cure for all:
 https://issues.apache.org/jira/browse/LUCENE-4499
 
 @joe
 If you can use the flexible query parser (as linked in by @Swati) then
>>> all
 you need to do is to define a different field with a different tokenizer
 chain and then swap the field names before the analyzer processes the
 document (and then rewrite the field name back - for example, we have
 fields called "author" and "author_nosyn")
 
 roman
 
 On Wed, Dec 12, 2012 at 12:38 PM, Walter Underwood <
>>> wun...@wunderwood.org>wrote:
 
> Query time synonyms have known problems. They are slower, cause
>>> incorrect
> IDF, and don't work for phrase synonyms.
> 
> Apply synonyms at index time and you will have none of those problems.
> 
> See:
> 
>>> http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory
> 
> wunder
> 
> On Dec 12, 2012, at 9:34 AM, Swati Swoboda wrote:
> 
>> Query-time analyzers are still applied, even if you include a string in
> quotes. Would you expect "foo" to not match "Foo" just because it's
> enclosed in quotes?
>> 
>> Also look at this, someone who had similar requirements:
>> 
> 
>>> http://lucen

Re: The shard called `properties`

2012-12-13 Thread Yonik Seeley
On Thu, Dec 6, 2012 at 12:03 PM, Mark Miller  wrote:
> Yeah, the main problem with it didn't really occur to me until I saw the 
> properties shard in the cluster view.
>
> I started working on the UI to ignore it the other day and then never got 
> there because I was getting all sorts of weird 'busy' errors from svn for a 
> while and didn't have a clean checkout.

It occurs to me that the introduction of this "properties" value is
actually a back compat break anyway (since older clients won't know
it's not a real shard).
Seems like we should just bite the bullet and do it right.

{"collection1": {
"config" : "myconf"
"router" : "compositeId",
"shards" : {
   "shard1" : {...


-Yonik
http://lucidworks.com



> - mark
>
> On Dec 6, 2012, at 8:16 AM, Yonik Seeley  wrote:
>
>> On Wed, Dec 5, 2012 at 5:17 PM, Mark Miller  wrote:
>>> See the custom hashing issue - the UI has to be updated to ignore this.
>>>
>>> Unfortunately, it seems that clients have to be hard coded to realize 
>>> properties is not a shard unless we add another nested layer.
>>
>> Yeah, I talked about this a while back, but no one bit...
>> https://issues.apache.org/jira/browse/SOLR-3815?focusedCommentId=13452611&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13452611
>>
>> At this point, I suppose we could still add it, but retain the ability
>> to read older cluster states?
>>
>> -Yonik
>> http://lucidworks.com
>


Re: Strange data-loss problem on one of our cores

2012-12-13 Thread Mark Miller
Couple things to start:

By default SolrCloud distributes updates a doc at a time. So if you have 1 
shard, whatever node you index to, it will send updates to the other. 
Replication is only used for recovery, not for distributing data. So for some 
reason, there is an IOException when it tries to forward.

The other issue is not something that I've seen reported. Can/did you try to do 
another hard commit to make sure you had the latest searcher open when checking 
the # of docs on each node? There was previously a race around commit that 
could cause some issues around expected visibility. 
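
For example, an explicit hard commit against each node before comparing
counts (hostnames as in the logs below):

http://varnish01.lynero.net:8000/solr/default1_Norwegian/update?commit=true
http://varnish02.lynero.net:8000/solr/default1_Norwegian/update?commit=true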

If you are able to, you might try out a nightly build - 4.1 will be ready very 
soon and has numerous bug fixes for SolrCloud.

- Mark

On Dec 13, 2012, at 9:53 AM, John Nielsen  wrote:

> Hi all,
> 
> We are seeing a strange problem on our 2-node solr4 cluster. This problem
> has resulted in data loss.
> 
> We have two servers, varnish01 and varnish02. Zookeeper is running on
> varnish02, but in a separate jvm.
> 
> We index directly to varnish02 and we read from varnish01. Data is thus
> replicated from varnish02 to varnish01.
> 
> I found this in the varnish01 log:
> 
> *INFO: [default1_Norwegian] webapp=/solr path=/update params={distrib.from=
> http://varnish02.lynero.net:8000/solr/default1_Norwegian/&update.distrib=TOLEADER&wt=javabin&version=2}
> status=0 QTime=42
> Dec 13, 2012 12:23:36 PM org.apache.solr.core.SolrCore execute
> INFO: [default1_Norwegian] webapp=/solr path=/update params={distrib.from=
> http://varnish02.lynero.net:8000/solr/default1_Norwegian/&update.distrib=TOLEADER&wt=javabin&version=2}
> status=0 QTime=41
> Dec 13, 2012 12:23:36 PM org.apache.solr.core.SolrCore execute
> INFO: [default1_Norwegian] webapp=/solr path=/update params={distrib.from=
> http://varnish02.lynero.net:8000/solr/default1_Norwegian/&update.distrib=TOLEADER&wt=javabin&version=2}
> status=0 QTime=33
> Dec 13, 2012 12:23:36 PM org.apache.solr.core.SolrCore execute
> INFO: [default1_Norwegian] webapp=/solr path=/update params={distrib.from=
> http://varnish02.lynero.net:8000/solr/default1_Norwegian/&update.distrib=TOLEADER&wt=javabin&version=2}
> status=0 QTime=33
> Dec 13, 2012 12:23:39 PM org.apache.solr.common.SolrException log
> SEVERE: shard update error StdNode:
> http://varnish02.lynero.net:8000/solr/default1_Norwegian/:org.apache.solr.client.solrj.SolrServerException:
> IOException occured when talking to server at:
> http://varnish02.lynero.net:8000/solr/default1_Norwegian
>at
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:413)
>at
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
>at
> org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:335)
>at
> org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:309)
>at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>at java.lang.Thread.run(Thread.java:636)
> Caused by: org.apache.http.NoHttpResponseException: The target server
> failed to respond
>at
> org.apache.http.impl.conn.DefaultResponseParser.parseHead(DefaultResponseParser.java:101)
>at
> org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:252)
>at
> org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:282)
>at
> org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:247)
>at
> org.apache.http.impl.conn.AbstractClientConnAdapter.receiveResponseHeader(AbstractClientConnAdapter.java:216)
>at
> org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:298)
>at
> org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
>at
> org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:647)
>at
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:464)
>at
> org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
>at
> org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:754)
>at
> org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:732)
>at
> org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:352)
>... 11 more
> 
> Dec 13, 2012 12:23:39 PM
> org.apache.solr.update.processor.DistributedUpdateProcessor doFinish
> INFO

Strange data-loss problem on one of our cores

2012-12-13 Thread John Nielsen
Hi all,

We are seeing a strange problem on our 2-node solr4 cluster. This problem
has resulted in data loss.

We have two servers, varnish01 and varnish02. Zookeeper is running on
varnish02, but in a separate jvm.

We index directly to varnish02 and we read from varnish01. Data is thus
replicated from varnish02 to varnish01.

I found this in the varnish01 log:

*INFO: [default1_Norwegian] webapp=/solr path=/update params={distrib.from=
http://varnish02.lynero.net:8000/solr/default1_Norwegian/&update.distrib=TOLEADER&wt=javabin&version=2}
status=0 QTime=42
Dec 13, 2012 12:23:36 PM org.apache.solr.core.SolrCore execute
INFO: [default1_Norwegian] webapp=/solr path=/update params={distrib.from=
http://varnish02.lynero.net:8000/solr/default1_Norwegian/&update.distrib=TOLEADER&wt=javabin&version=2}
status=0 QTime=41
Dec 13, 2012 12:23:36 PM org.apache.solr.core.SolrCore execute
INFO: [default1_Norwegian] webapp=/solr path=/update params={distrib.from=
http://varnish02.lynero.net:8000/solr/default1_Norwegian/&update.distrib=TOLEADER&wt=javabin&version=2}
status=0 QTime=33
Dec 13, 2012 12:23:36 PM org.apache.solr.core.SolrCore execute
INFO: [default1_Norwegian] webapp=/solr path=/update params={distrib.from=
http://varnish02.lynero.net:8000/solr/default1_Norwegian/&update.distrib=TOLEADER&wt=javabin&version=2}
status=0 QTime=33
Dec 13, 2012 12:23:39 PM org.apache.solr.common.SolrException log
SEVERE: shard update error StdNode:
http://varnish02.lynero.net:8000/solr/default1_Norwegian/:org.apache.solr.client.solrj.SolrServerException:
IOException occured when talking to server at:
http://varnish02.lynero.net:8000/solr/default1_Norwegian
at
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:413)
at
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
at
org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:335)
at
org.apache.solr.update.SolrCmdDistributor$1.call(SolrCmdDistributor.java:309)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)
Caused by: org.apache.http.NoHttpResponseException: The target server
failed to respond
at
org.apache.http.impl.conn.DefaultResponseParser.parseHead(DefaultResponseParser.java:101)
at
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:252)
at
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:282)
at
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:247)
at
org.apache.http.impl.conn.AbstractClientConnAdapter.receiveResponseHeader(AbstractClientConnAdapter.java:216)
at
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:298)
at
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:647)
at
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:464)
at
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
at
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:754)
at
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:732)
at
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:352)
... 11 more

Dec 13, 2012 12:23:39 PM
org.apache.solr.update.processor.DistributedUpdateProcessor doFinish
INFO: try and ask http://varnish02.lynero.net:8000/solr to recover*

It looks like it is sending updates from varnish01 to varnish02. I am not
sure why, since we only index on varnish02. Updates should never be
going from varnish01 to varnish02.

Meanwhile on varnish02:

*INFO: [default1_Norwegian] webapp=/solr path=/update params={distrib.from=
http://varnish01.lynero.net:8000/solr/default1_Norwegian/&update.distrib=FROMLEADER&wt=javabin&version=2}
status=0 QTime=16
Dec 13, 2012 12:23:36 PM org.apache.solr.core.SolrCore execute
INFO: [default1_Norwegian] webapp=/solr path=/update params={distrib.from=
http://varnish01.lynero.net:8000/solr/default1_Norwegian/&update.distrib=FROMLEADER&wt=javabin&version=2}
status=0 QTime=15
Dec 13, 2012 12:23:36 PM org.apache.solr.core.SolrCore execute
INFO: [default1_Norwegian] webapp=/solr path=/update params={distrib.from=
http://varnish01.lynero.net:8000/solr/default1_Norwegian

Re: fieldType custom search

2012-12-13 Thread Tomás Fernández Löbbe
If you want that exact circular result, I think the only option is to have
an extra field, pre-calculated according to these needs (for example,
incrementing a value for each document of the same artist, or something
like that), and sort on that field.

If what you need is to show the artists, maybe the grouping feature can
help you?
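
For illustration, a quick sketch of both suggestions, with made-up names
(the field is assumed to be called "artist", and "artist_rank" would be
the extra pre-calculated field):

  /select?q=*:*&group=true&group.field=artist&group.limit=1

  /select?q=*:*&sort=artist_rank+asc

The first request collapses the results by artist, so each artist shows
up once per page of groups; the second sorts on the pre-calculated
round-robin rank.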


On Thu, Dec 13, 2012 at 10:06 AM, nihed mbarek  wrote:

> Actually, my schema has albums with:
> artist / album_name / album_description
>
> When I run a search without a query, the results are (all having the same score):
> artist A
> artist A
> artist A
> artist B
> artist B
> artist B
> artist C
> artist C
> => depends on my indexing process
>
> What I want as a result:
> artist A
> artist B
> artist C
> artist A
> artist B
> artist C
> artist A
> artist B
>
> => a circular result, so I can see which artists I have in my Solr
>
>
>
>
>
> On Thu, Dec 13, 2012 at 1:59 PM, Tomás Fernández Löbbe <
> tomasflo...@gmail.com> wrote:
>
> > What do you mean? Could you explain your use case?
> >
> > Tomás
> >
> >
> > On Thu, Dec 13, 2012 at 9:36 AM, nihed mbarek  wrote:
> >
> > > Hello,
> > >
> > > Is it possible to define a custom search for a fieldType in a schema?
> > >
> > > Regards,
> > >
> > > --
> > >
> > > M'BAREK Med Nihed
> > >
> >
>
>
>
> --
>
> M'BAREK Med Nihed,
> Fedora Ambassador, TUNISIA, Northern Africa
> http://www.nihed.com
>
> 
>


Re: Update / replication of offline indexes

2012-12-13 Thread Alexandre Rafalovitch
Not sure I fully understood this and maybe you already cover that by
'merge', but if you know what you gave the client last time, you can just
build a differential as a second core, then on client mount that second
core and merge it into the first one (e.g. with DIH).

Just a thought.

Regards,
   Alex.

Personal blog: http://blog.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is the quality of nature that keeps events from happening all at
once. Lately, it doesn't seem to be working.  (Anonymous  - via GTD book)
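
A hedged sketch of that merge step, using the CoreAdmin MERGEINDEXES
command as an alternative to DIH (host, core names and ids below are all
made up; note that a raw index merge does not deduplicate, so any
documents the delta replaces have to be deleted from the main core
first):

  # delete the superseded documents from the main core
  curl 'http://localhost:8983/solr/main/update' -H 'Content-Type: text/xml' \
    -d '<delete><id>doc1</id><id>doc2</id></delete>'

  # merge the delta core's index into the main core, then commit
  curl 'http://localhost:8983/solr/admin/cores?action=mergeindexes&core=main&srcCore=delta'
  curl 'http://localhost:8983/solr/main/update?commit=true'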



On Thu, Dec 13, 2012 at 5:28 PM, Dikchant Sahi wrote:

> Hi Erick,
>
> Sorry for creating the confusion. By slave, I mean that the indexes on the
> client machine will be a replica of the master; it is not the same as the
> slave in the master-slave model. Below are the details:
>
> The system is being developed to support a search facility on 1000s of
> systems, a majority of which will be offline.
>
> The idea is that we will have a search system which will be sold
> on a subscription basis. For each subscriber, we will copy the master
> index to their local machine, over a drive or CD. Now, if a subscriber
> comes back after 2 months and wants the updates, we just want to provide
> the deltas for those 2 months, as the volume of data is huge. For this
> we can think of
> two approaches:
> 1. Fetch the documents which are less than 2 months old in JSON format
> from the master Solr, copy them to the subscriber machine
> (via CD / memory stick), and index those documents.
> 2. Create separate indexes for each month on our master machine. Copy the
> indexes to the client machine and merge. Prior to merging, we need to
> delete the records which the new index contains, to avoid duplicates.
>
> As long as the setup is new, we will copy the complete index and restart
> Solr. We are not sure of the best approach for copying the deltas.
>
> Thanks,
> Dikchant
>
>
>
> On Thu, Dec 13, 2012 at 3:52 AM, Erick Erickson wrote:
>
> > This is somewhat confusing. You say that box2 is the slave, yet they're
> > not connected? Then you need to copy the /data index from box 1 to
> > box 2 manually (I'd have box2 solr shut down at the time) and restart
> > Solr.
> >
> > Why can't the boxes be connected? That's a much simpler way of going
> > about it.
> >
> > Best
> > Erick
> >
> >
> > > On Tue, Dec 11, 2012 at 1:04 AM, Dikchant Sahi wrote:
> >
> > > Hi Walter,
> > >
> > > Thanks for the response.
> > >
> > > Commit will help to reflect changes on Box1. We are able to achieve
> > > this.
> > > We want the changes to reflect in Box2.
> > >
> > > We have two indexes. Say
> > > Box1: Master & DB has been setup. Data Import runs on this.
> > > Box2: Slave running.
> > >
> > > We want all the updates on Box1 to be merged/present in the index on
> > > Box2. The boxes are not connected over a network. How can we achieve
> > > this?
> > >
> > > Please let me know, if am not clear.
> > >
> > > Thanks again!
> > >
> > > Regards,
> > > Dikchant
> > >
> > > On Tue, Dec 11, 2012 at 11:22 AM, Walter Underwood <wun...@wunderwood.org> wrote:
> > >
> > > > You do not need to manage online and offline indexes. Commit when
> > > > you are done with your updates and Solr will take care of it for
> > > > you. The changes are not live until you commit.
> > > >
> > > > wunder
> > > >
> > > > On Dec 10, 2012, at 9:46 PM, Dikchant Sahi wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > How can we do delta update of offline indexes?
> > > > >
> > > > > We have the master index on which data import will be done. The
> > > > > index directory will be copied to the slave machine in case of a
> > > > > full update, through a CD, as the slave/client machine is offline.
> > > > > So, what should be the approach for getting the delta to the slave?
> > > > > I can think of two approaches.
> > > > >
> > > > > 1. Create separate indexes of the delta on the master machine, copy
> > > > > them to the slave machine, and merge. Before merging the indexes on
> > > > > the client machine, delete all the updated and deleted documents in
> > > > > the client machine, else the merge will add duplicates. So along with
> > > > > the index, we need to transfer the list of documents which have been
> > > > > updated/deleted.
> > > > >
> > > > > 2. Extract all the documents which have changed since a particular
> > > > > time in XML/JSON and index them on the client machine.
> > > > >
> > > > > The indexes are huge, so we cannot roll over the index every time.
> > > > >
> > > > > Please help me with your take and the challenges you see in the
> > > > > above approaches. Please suggest any other, better approach you can
> > > > > think of.
> > > > >
> > > > > Thanks a ton!
> > > > >
> > > > > Regards,
> > > > > Dikchant
> > > >
> > > > --
> > > > Walter Underwood
> > > > wun...@wunderwood.org
> > > >
> > > >
> > > >
> > > >
> > >
> >
>
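
For approach 1 in the quoted discussion, a rough sketch with made-up host
and field names (it assumes a "last_modified" date field exists in the
schema):

  # on the master: export everything changed in the last two months
  curl 'http://master:8983/solr/select?q=last_modified:[NOW-2MONTHS+TO+*]&wt=json&rows=1000000' > delta.json

  # on the client: post the documents back (the docs array has to be
  # unwrapped from the select response into a plain JSON array first)
  curl 'http://localhost:8983/solr/update/json?commit=true' \
    -H 'Content-Type: application/json' -d @delta-docs.json

Unlike a raw index merge, indexing through /update respects the
uniqueKey, so updated documents simply overwrite their old versions
instead of being duplicated.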


Re: fieldType custom search

2012-12-13 Thread nihed mbarek
Actually, my schema has albums with:
artist / album_name / album_description

When I run a search without a query, the results are (all having the same score):
artist A
artist A
artist A
artist B
artist B
artist B
artist C
artist C
=> depends on my indexing process

What I want as a result:
artist A
artist B
artist C
artist A
artist B
artist C
artist A
artist B

=> a circular result, so I can see which artists I have in my Solr





On Thu, Dec 13, 2012 at 1:59 PM, Tomás Fernández Löbbe <
tomasflo...@gmail.com> wrote:

> What do you mean? Could you explain your use case?
>
> Tomás
>
>
> On Thu, Dec 13, 2012 at 9:36 AM, nihed mbarek  wrote:
>
> > Hello,
> >
> > Is it possible to define a custom search for a fieldType in a schema?
> >
> > Regards,
> >
> > --
> >
> > M'BAREK Med Nihed
> >
>



-- 

M'BAREK Med Nihed,
Fedora Ambassador, TUNISIA, Northern Africa
http://www.nihed.com




Re: fieldType custom search

2012-12-13 Thread Tomás Fernández Löbbe
What do you mean? Could you explain your use case?

Tomás


On Thu, Dec 13, 2012 at 9:36 AM, nihed mbarek  wrote:

> Hello,
>
> Is it possible to define a custom search for a fieldType in a schema?
>
> Regards,
>
> --
>
> M'BAREK Med Nihed
>


Re: score calculation

2012-12-13 Thread Aloke Ghoshal
Hi Tom,

This is great. It should make it into the documentation.

Regards,
Aloke

On Thu, Dec 13, 2012 at 1:23 PM, Burgmans, Tom <
tom.burgm...@wolterskluwer.com> wrote:

> I am also working on getting this clear. Here are my notes so far
> (partly copied, partly written myself):
>
>
>
> queryWeight = the impact of the query against the field
> implementation: boost(query)*idf*queryNorm
>
>
> boost(query) = boost of the field at query-time
> Implication: hits in fields with higher boost get a higher score
> Rationale: a term in field A could be more relevant than the same
> term in field B
>
>
> idf = inverse document frequency = measure of how often the term
> appears across the index for this field
> implementation: log(numDocs/(docFreq+1))+1
> Implication: the greater the occurrence of a term in different
> documents, the lower its score
> Rationale: common terms are less important than uncommon ones
> numDocs = the total number of documents in the index, not including
> those that are marked as deleted but have not yet been purged. This is a
> constant (the same value for all documents in the index).
> docFreq = the number of documents in the index which contain the term
> in this field. This is a constant (the same value for all documents in the
> index containing this field)
>
>
> queryNorm = normalization factor so that queries can be compared
> implementation: 1/sqrt(sumOfSquaredWeights)
> Implication: doesn't impact the relevancy of this result
> Rationale: queryNorm is not related to the relevance of the
> document, but rather tries to make scores between different queries
> comparable. This value is equal for all results of the query
>
>
> fieldWeight = the score of a term matching the field
> implementation: tf*idf*fieldNorm
>
>
> tf = term frequency in a field = measure of how often a term appears
> in the field
> implementation: sqrt(freq)
> Implication: the more frequently a term occurs in a field, the
> greater its score
> Rationale: fields which contain more of a term are generally more
> relevant
> freq = termFreq = number of times the term occurs in the field for
> this document
>
>
> fieldNorm = impact of a hit in this field
> implementation: lengthNorm*boost(index)
> lengthNorm = measure of the importance of a term according to the
> total number of terms in the field
> implementation: 1/sqrt(numTerms)
> Implication: a term matched in a field with fewer terms has a
> higher score
> Rationale: a term in a field with fewer terms is more important
> than one with more
> numTerms = number of terms in the field
> boost (index) = boost of the field at index-time
> Implication: hits in fields with higher boost get a higher score
> Rationale: a term in field A could be more relevant than the same
> term in field B
>
>
> maxDocs = the number of documents in the index, including those that
> are marked as deleted but have not yet been purged. This is a constant (the
> same value for all documents in the index)
> Implication: (probably) doesn't play a role in the scoring
> calculation
>
>
> coord = number of terms in the query that were found in the document
> (omitted if equal to 1)
> implementation: overlap/maxOverlap
> Implication: of the terms in the query, a document that contains
> more terms will have a higher score
> Rationale: documents that match the most optional terms score
> highest
> overlap = the number of query terms matched in the document
> maxOverlap = the total number of terms in the query
>
>
> FunctionQuery = could be any kind of custom ranking function, whose
> outcome is added to, or multiplied with, the default rank score.
> Implication: various
>
>
> Look at the EXPLAIN information to see how the final score is calculated.
>
> Tom
>
>
> -Original Message-
> From: Sangeetha [mailto:sangeetha...@gmail.com]
> Sent: Thursday 13 December 2012 08:33
> To: solr-user@lucene.apache.org
> Subject: score calculation
>
>
> I want to know how score is calculated?
>
> what is fieldweight, fieldNorm, queryWeight and queryNorm. And what is the
> formula to get the final score using fieldweight, fieldNorm, queryWeight
> ,queryNorm, idf and tf.
>
> Can anyone explain or provide some links?
>
> Thanks,
> Sangeetha
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/score-calculation-tp4026669.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
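
As a quick sanity check of the formulas in Tom's notes, here is a toy
re-computation in Java. All numbers are made up, the coord factor is
omitted, and this only sketches DefaultSimilarity's arithmetic; real
explain output will differ slightly because Lucene encodes fieldNorm in
a single byte:

  public class ScoreSketch {
      public static void main(String[] args) {
          double numDocs = 1000, docFreq = 9;  // made-up index statistics
          double freq = 2.0;                   // term occurs twice in the field
          double numTerms = 16;                // made-up field length
          double queryBoost = 1.0, indexBoost = 1.0;

          double idf = Math.log(numDocs / (docFreq + 1)) + 1;  // ln(100)+1 = ~5.605
          double tf = Math.sqrt(freq);                         // ~1.414
          double fieldNorm = (1 / Math.sqrt(numTerms)) * indexBoost;  // 0.25
          double fieldWeight = tf * idf * fieldNorm;

          // queryNorm uses the sum of squared weights of all query terms;
          // for a single-term query that is just (queryBoost * idf)^2
          double queryNorm = 1 / Math.sqrt(Math.pow(queryBoost * idf, 2));
          double queryWeight = queryBoost * idf * queryNorm;

          System.out.println("score = " + queryWeight * fieldWeight);
      }
  }

Note that for a single unboosted term, queryWeight works out to exactly
1, so the score is just the fieldWeight; this is why queryNorm "doesn't
impact the relevancy of this result".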

RE: Solr3 and solr4 side by side in tomcat on windows

2012-12-13 Thread Gian Maria Ricci
Thanks to everyone for the suggestions, :).

Gian Maria

-Original Message-
From: Upayavira [mailto:u...@odoko.co.uk] 
Sent: mercoledì 12 dicembre 2012 14:49
To: solr-user@lucene.apache.org
Subject: Re: Solr3 and solr4 side by side in tomcat on windows

You will need to create a context XML file for each Solr instance to tell it
where to find its indexes (aka solr_home). Obviously, each Solr instance
will want a different one.

See:
http://wiki.apache.org/solr/SolrTomcat#Installing_Solr_instances_under_Tomcat

Note that it says to create a file in
$CATALINA_HOME/conf/Catalina/localhost/solr-example.xml; you should replace
'solr-example' with whatever you name your webapps.
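
For illustration, a minimal context file along those lines (all paths
and names here are examples, not a required layout):

  <!-- $CATALINA_HOME/conf/Catalina/localhost/solr4.xml -->
  <Context docBase="/opt/solr4/solr.war" debug="0" crossContext="true">
    <Environment name="solr/home" type="java.lang.String"
                 value="/opt/solr4/home" override="true"/>
  </Context>

A second file, e.g. solr3.xml, would point docBase at the 3.6 war and
solr/home at its own directory; separate solr_home values are what let
the two versions run side by side in one Tomcat.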

Upayavira

On Wed, Dec 12, 2012, at 10:44 AM, Daniel Exner wrote:
> Hi,
> 
> Gian Maria Ricci - aka Alkampfer wrote:
> > Hi to everyone, I've a solr3.6 server up and running, now I wish to
install solr4 on the same machine and if possible in side-by-side
configuration. (tomcat on Windows) Is it possible and is there some
documentation on how to do this? Thanks a lot. Gian Maria.
> 
> Just deploy Solr4 in the running tomcat and configure another context.
> You can copy the whole Solr3.6 config over, but have to do some 
> tweaking, obviously.
> 
> Catalina.out will show you the way ;)
> 
> Also, it is not recommended to use the "old" index from 3.6; do a
> fresh reindexing of your docs instead.
> 
> Greetings
> Daniel Exner
> 



Re: FieldFlags QueryComponent

2012-12-13 Thread nihed mbarek
Can anyone help me?

On Mon, Dec 10, 2012 at 10:07 AM, nihed mbarek  wrote:

> Hi,
>
> What is the utility of "FieldFlags" on ResponseBuilder?
> I saw it being set in QueryComponent and want to have more information
> about this attribute.
>
> Regards,
>
>