Re: Create incremental snapshot

2009-07-12 Thread tushar kapoor

Thanks for the reply, Asif. We have already tried removing the optimization
step. Unfortunately, the commit command alone is also causing identical
behaviour. Is there anything else that we are missing?


Asif Rahman wrote:
 
 Tushar:
 
 Is it necessary to do the optimize on each iteration?  When you run an
 optimize, the entire index is rewritten.  Thus each index file can have at
 most one hard link and each snapshot will consume the full amount of space
 on your disk.
 
 Asif
 
 On Thu, Jul 9, 2009 at 3:26 AM, tushar kapoor 
 tushar_kapoor...@rediffmail.com wrote:
 

 What I gather from this discussion is -

 1. Snapshots are always hard links and not actual files, so they cannot
 possibly consume the same amount of space.
 2. Snapshots contain hard links to existing docs + delta docs.

 We are facing a situation wherein each snapshot occupies the same space as
 the actual indexes, thus violating the first point.
 We have a batch-processing scheme for refreshing indexes. The steps we
 follow are:

 1. Delete 200 documents in one go.
 2. Do an optimize.
 3. Create the 200 documents deleted earlier.
 4. Do a commit.

 This process continues for around 160,000 documents, i.e. 800 iterations,
 and by the end of it we have 800 snapshots.

 The size of the actual indexes is 200 MB and, remarkably, all 800 snapshots
 are around 200 MB each. In effect this process consumes around 160 GB of
 space on our disks. This is causing a lot of pain right now.

 My concerns are: Is our understanding of the snapshooter correct? Should
 this massive space consumption be happening at all? Are we missing
 something critical?

 Regards,
 Tushar.

 Shalin Shekhar Mangar wrote:
 
  On Sat, Apr 18, 2009 at 1:06 PM, Koushik Mitra
  koushik_mi...@infosys.com wrote:
 
  Ok
 
  If these are hard links, then where does the index data get stored?
  Those must be getting stored somewhere in the file system.
 
 
  Yes, of course they are stored on disk. The hard links are created from
  the actual files inside the index directory. When those older files are
  deleted by Solr, they are still left on the disk if at least one hard
  link to that file exists. If you are looking for how to clean old
  snapshots, you could use the snapcleaner script.
 
  Is that what you wanted to do?
 
  --
  Regards,
  Shalin Shekhar Mangar.
 
 



 
 
 -- 
 Asif Rahman
 Lead Engineer - NewsCred
 a...@newscred.com
 http://platform.newscred.com
 
 
:-((



Re: Create incremental snapshot

2009-07-09 Thread tushar kapoor

What I gather from this discussion is -

1. Snapshots are always hard links and not actual files, so they cannot
possibly consume the same amount of space.
2. Snapshots contain hard links to existing docs + delta docs.

We are facing a situation wherein each snapshot occupies the same space as
the actual indexes, thus violating the first point.
We have a batch-processing scheme for refreshing indexes. The steps we
follow are:

1. Delete 200 documents in one go.
2. Do an optimize.
3. Create the 200 documents deleted earlier.
4. Do a commit.

This process continues for around 160,000 documents, i.e. 800 iterations,
and by the end of it we have 800 snapshots.

The size of the actual indexes is 200 MB and, remarkably, all 800 snapshots
are around 200 MB each. In effect this process consumes around 160 GB of
space on our disks. This is causing a lot of pain right now.

My concerns are: Is our understanding of the snapshooter correct? Should
this massive space consumption be happening at all? Are we missing
something critical?

Regards,
Tushar.

Shalin Shekhar Mangar wrote:
 
 On Sat, Apr 18, 2009 at 1:06 PM, Koushik Mitra
 koushik_mi...@infosys.com wrote:
 
 Ok

 If these are hard links, then where does the index data get stored? Those
 must be getting stored somewhere in the file system.

 
 Yes, of course they are stored on disk. The hard links are created from
 the actual files inside the index directory. When those older files are
 deleted by Solr, they are still left on the disk if at least one hard link
 to that file exists. If you are looking for how to clean old snapshots,
 you could use the snapcleaner script.
 
 Is that what you wanted to do?
 
 -- 
 Regards,
 Shalin Shekhar Mangar.
 
 




Master Slave data distribution | rsync fail issue

2009-05-05 Thread tushar kapoor

Hi,

I am facing an issue while pulling snapshots with the snappuller script
from the slave server.
We have a multicore setup on both the master and slave Solr servers, with
two cores:
i)  CORE_WWW.ABCD.COM
ii) CORE_WWW.XYZ.COM

The rsyncd-enable and rsyncd-start scripts were run from CORE_WWW.ABCD.COM
on the master server. Thus the rsyncd.conf file got generated for
CORE_WWW.ABCD.COM only, but not for CORE_WWW.XYZ.COM.
Rsyncd.conf of CORE_WWW.ABCD.COM:
uid = webuser
gid = webuser
use chroot = no
list = no
pid file = /opt/apache-tomcat-6.0.18/apache-solr-1.3.0/example/solr/multicore/CORE_WWW.ABCD.COM/logs/rsyncd.pid
log file = /opt/apache-tomcat-6.0.18/apache-solr-1.3.0/example/solr/multicore/CORE_WWW.ABCD.COM/logs/rsyncd.log
[solr]
path = /opt/apache-tomcat-6.0.18/apache-solr-1.3.0/example/solr/multicore/CORE_WWW.ABCD.COM/data
comment = Solr
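
For reference, a sketch of a matching rsyncd.conf for CORE_WWW.XYZ.COM; it
is not taken from the actual setup, it simply mirrors the file above with
the XYZ core's directories substituted (if both rsync daemons run on the
same master they would also each need their own port, which is not set in
this file but when the daemon is started):

uid = webuser
gid = webuser
use chroot = no
list = no
pid file = /opt/apache-tomcat-6.0.18/apache-solr-1.3.0/example/solr/multicore/CORE_WWW.XYZ.COM/logs/rsyncd.pid
log file = /opt/apache-tomcat-6.0.18/apache-solr-1.3.0/example/solr/multicore/CORE_WWW.XYZ.COM/logs/rsyncd.log
[solr]
path = /opt/apache-tomcat-6.0.18/apache-solr-1.3.0/example/solr/multicore/CORE_WWW.XYZ.COM/data
comment = Solr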

The rsync error is generated while pulling the master's snapshot for the
core CORE_WWW.XYZ.COM from the slave end; for core CORE_WWW.ABCD.COM the
snappuller runs without any error.

Also, this issue only occurs when snapshots are generated on the master in
the way given below:
A) Snapshots are generated automatically by editing
“${SOLR_HOME}/solr/conf/solrconfig.xml” so that either a commit or an
optimize triggers the snapshooter (search for “postCommit” and
“postOptimize” to find the configuration section).

Sample solrconfig.xml entry on the master server:
I)
<listener event="postCommit" class="solr.RunExecutableListener">
  <str name="exe">/opt/apache-tomcat-6.0.18/apache-solr-1.3.0/example/solr/multicore/CORE_WWW.ABCD.COM/bin/snapshooter</str>
  <str name="dir">/opt/apache-tomcat-6.0.18/apache-solr-1.3.0/example/solr/multicore/CORE_WWW.ABCD.COM/bin</str>
  <bool name="wait">true</bool>
  <arr name="args"> <str>arg1</str> <str>arg2</str> </arr>
  <arr name="env"> <str>MYVAR=val1</str> </arr>
</listener>

The same is done in the solrconfig.xml of core CORE_WWW.XYZ.COM.
II) The <dataDir> tag remains commented out in the solrconfig.xml of both
cores on the master server.

Log sample for more clarity:
rsyncd.log of the core CORE_WWW.XYZ.COM:
2009/05/01 15:48:40 command: ./rsyncd-start
2009/05/01 15:48:40 [15064] rsyncd version 2.6.3 starting, listening on port 18983
2009/05/01 15:48:40 rsyncd started with data_dir=/opt/apache-tomcat-6.0.18/apache-solr-1.3.0/example/solr/multicore/CORE_WWW.XYZ.COm/data and accepting requests
2009/05/01 15:50:36 [15195] rsync on solr/snapshot.20090501153311/ from deltrialmac.mac1.com (10.210.7.191)
2009/05/01 15:50:36 [15195] rsync: link_stat "snapshot.20090501153311/." (in solr) failed: No such file or directory (2)
2009/05/01 15:50:36 [15195] rsync error: some files could not be transferred (code 23) at main.c(442)
2009/05/01 15:52:23 [15301] rsync on solr/snapshot.20090501155030/ from delpearsondm.sapient.com (10.210.7.191)
2009/05/01 15:52:23 [15301] wrote 3438 bytes  read 290 bytes  total size 2779
2009/05/01 16:03:31 [15553] rsync on solr/snapshot.20090501160112/ from deltrialmac.mac1.com (10.210.7.191)
2009/05/01 16:03:31 [15553] rsync: link_stat "snapshot.20090501160112/." (in solr) failed: No such file or directory (2)
2009/05/01 16:03:31 [15553] rsync error: some files could not be transferred (code 23) at main.c(442)
2009/05/01 16:04:27 [15674] rsync on solr/snapshot.20090501160054/ from deltrialmac.mac1.com (10.210.7.191)
2009/05/01 16:04:27 [15674] wrote 4173214 bytes  read 290 bytes  total size 4174633

I am unable to figure out where the trailing "/." gets appended at the end
of "snapshot.20090501153311/".
Snappuller.log:
2009/05/04 16:55:43 started by solrUser
2009/05/04 16:55:43 command: /opt/apache-solr-1.3.0/example/solr/multicore/CORE_WWW.PUFFINBOOKS.CA/bin/snappuller -u webuser
2009/05/04 16:55:52 pulling snapshot snapshot.20090504164935
2009/05/04 16:56:09 rsync failed
2009/05/04 16:56:24 failed (elapsed time: 41 sec)

Error shown on console:
rsync: link_stat "snapshot.20090504164935/." (in solr) failed: No such file or directory (2)
client: nothing to do: perhaps you need to specify some filenames or the --recursive option?
rsync error: some files could not be transferred (code 23) at main.c(723)

B) The same issue does not occur when the snapshooter script is run
manually at regular intervals on the master server and the snappuller
script is then run at the slave end for multiple cores, with the
postCommit/postOptimize sections of solrconfig.xml commented out.
Here too the rsync daemon runs from the core CORE_WWW.ABCD.COM; snappuller
and snapinstaller complete successfully.

Thanks in advance.




Re: Unexpected sorting results when sorting with a multivalued field

2009-04-01 Thread tushar kapoor



Shalin Shekhar Mangar wrote:
 
 On Tue, Mar 31, 2009 at 2:18 PM, tushar kapoor 
 tushar_kapoor...@rediffmail.com wrote:
 

 I have indexes with a multivalued field authorLastName. I query them with
 sort=authorLastName asc and get the results as:

 Index#  authorLastName
  1      Antonakos
  2      Keller
  3      Antonakos
         Mansfield

 However, Index #3 has a value starting with A (Antonakos). Shouldn't
 Index #3 precede Index #2 in the results?

 
 The last value is used for sorting in multi-valued fields. What is the
 reason behind sorting on a multi-valued field?
 
 -- 
 Regards,
 Shalin Shekhar Mangar.
 
 

Can't do much about it; that is the way our design is. Is there any way we
can change this?
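
A common workaround, sketched below (the field and type names are
illustrative and assume a schema.xml along the lines of the stock example):
add a separate single-valued field, have the indexing code copy whichever
author should drive the ordering (for example the first one) into it, and
sort on that field instead of the multi-valued one.

<field name="authorLastNameSort" type="string" indexed="true"
       stored="false" multiValued="false"/>

Queries would then use ...&sort=authorLastNameSort asc. A copyField from
the multi-valued source would try to copy every value into the
single-valued field, so the sort value is best chosen on the indexing side.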








Unexpected sorting results when sorting with a multivalued field

2009-03-31 Thread tushar kapoor

Hi,

I have indexes with a multivalued field authorLastName. I query them with
sort=authorLastName asc and get the results as:

Index#  authorLastName
 1      Antonakos
 2      Keller
 3      Antonakos
        Mansfield

However, Index #3 has a value starting with A (Antonakos). Shouldn't
Index #3 precede Index #2 in the results?

If this is not the default behaviour, what is the right workaround for it?
Regards,
Tushar.



Dismax q.alt field for field level boosting

2009-02-01 Thread tushar kapoor

Hi,
I am trying to test relevancy of results with the q.alt field on a Dismax
Request Handler. Term level boosting based on bq information in
solrconfig.xml works fine. However field level boosting based on the qf
information in solrconfig.xml doesn't seem to work.
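
For reference, field-level boosting with dismax normally comes from the qf
entry of the handler definition, roughly as sketched below; the handler
name matches the qt used in the query, but the class, fields and boost
values are illustrative and not taken from the poster's actual
configuration:

<requestHandler name="dismaxrequest" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <!-- field-level boosts: title weighs ten times as much as subtitle -->
    <str name="qf">prdMainTitle_product_s^10 prdMainSubTitle_product_s^2</str>
  </lst>
</requestHandler>

One thing to keep in mind: q.alt is only consulted when q is empty (as in
the query below), and as far as I know it is parsed by the standard query
parser rather than the dismax parser, so qf boosts would not be expected to
influence a q.alt-only query.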

Query
q=&q.alt=for&rows=1000&qt=dismaxrequest

Results
<?xml version="1.0" encoding="UTF-8" ?>
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">0</int>
    <lst name="params">
      <str name="rows">1000</str>
      <str name="q.alt">for</str>
      <str name="q" />
      <str name="qt">dismaxrequest</str>
    </lst>
  </lst>
  <result name="response" numFound="6" start="0" maxScore="5.244862E-8">
    <doc>
      <float name="score">5.244862E-8</float>
      <str name="IndexId_s">product_711069667</str>
      <str name="IndexId_str_s">product_711069667</str>
      <str name="Index_Type_s">productIndex</str>
      <str name="Index_Type_str_s">productIndex</str>
      <str name="isbn10_product_s">0425172651</str>
      <str name="isbn10_product_str_s">0425172651</str>
      <str name="isbn13_product_s">9780425172650</str>
      <str name="isbn13_product_str_s">9780425172650</str>
      <str name="prdMainSubTitle_product_s">The Natural Solution for Pain</str>
      <str name="prdMainSubTitle_product_str_s">The Natural Solution for Pain</str>
      <str name="prdMainTitle_product_s">Miracle of MSM</str>
      <str name="prdMainTitle_product_str_s">Miracle of MSM</str>
      <str name="productId_product_s">711069667</str>
      <str name="productId_product_str_s">711069667</str>
      <str name="productPrice_product_s">0</str>
      <str name="productPrice_product_str_s">0</str>
      <str name="websiteId_product_s">51728</str>
      <str name="websiteId_product_str_s">51728</str>
    </doc>
    <doc>
      <float name="score">4.495596E-8</float>
      <str name="IndexId_s">product_711069593</str>
      <str name="IndexId_str_s">product_711069593</str>
      <str name="Index_Type_s">productIndex</str>
      <str name="Index_Type_str_s">productIndex</str>
      <str name="isbn10_product_s">0140265139</str>
      <str name="isbn10_product_str_s">0140265139</str>
      <str name="isbn13_product_s">9780140265132</str>
      <str name="isbn13_product_str_s">9780140265132</str>
      <str name="prdMainSubTitle_product_s">The Search for the Great White Shark</str>
      <str name="prdMainSubTitle_product_str_s">The Search for the Great White Shark</str>
      <str name="prdMainTitle_product_s">Blue Meridian</str>
      <str name="prdMainTitle_product_str_s">Blue Meridian</str>
      <str name="productId_product_s">711069593</str>
      <str name="productId_product_str_s">711069593</str>
      <str name="productPrice_product_s">0</str>
      <str name="productPrice_product_str_s">0</str>
      <str name="websiteId_product_s">51728</str>
      <str name="websiteId_product_str_s">51728</str>
    </doc>
    <doc>
      <float name="score">4.495596E-8</float>
      <str name="IndexId_s">product_711069848</str>
      <str name="IndexId_str_s">product_711069848</str>
      <str name="Index_Type_s">productIndex</str>
      <str name="Index_Type_str_s">productIndex</str>
      <str name="isbn10_product_s">0721472869</str>
      <str name="isbn10_product_str_s">0721472869</str>
      <str name="isbn13_product_s">9780721472867</str>
      <str name="isbn13_product_str_s">9780721472867</str>
      <str name="prdMainTitle_product_s">Dinosaur Stories for 5-year-olds</str>
      <str name="prdMainTitle_product_str_s">Dinosaur Stories for 5-year-olds</str>
      <str name="prdPubDate_product_s">25-MAR-99</str>
      <str name="prdPubDate_product_str_s">25-MAR-99</str>
      <str name="productId_product_s">711069848</str>
      <str name="productId_product_str_s">711069848</str>
      <str name="productPrice_product_s">3.69</str>
      <str name="productPrice_product_str_s">3.69</str>
      <str name="websiteId_product_s">51728</str>
      <str name="websiteId_product_str_s">51728</str>
    </doc>
    <doc>
      <float name="score">4.495596E-8</float>
      <str name="IndexId_s">product_711069902</str>
      <str name="IndexId_str_s">product_711069902</str>
      <str name="Index_Type_s">productIndex</str>
      <str name="Index_Type_str_s">productIndex</str>
      <str name="isbn10_product_s">0751362476</str>
      <str name="isbn10_product_str_s">0751362476</str>
      <str name="isbn13_product_s">9780751362473</str>
      <str name="isbn13_product_str_s">9780751362473</str>
      <str name="prdMainTitle_product_s">Touch &amp; Feel: ABC</str>
      <str name="prdMainTitle_product_str_s">Touch &amp; Feel: ABC</str>
      <str name="prdPubDate_product_s">03-FEB-00</str>
      <str name="prdPubDate_product_str_s">03-FEB-00</str>
      <str name="productId_product_s">711069902</str>
      <str name="productId_product_str_s">711069902</str>
      <str name="productPrice_product_s">4.99</str>
      <str name="productPrice_product_str_s">4.99</str>
      <str name="strapline_product_s">Photographic tactile experience for young learners</str>
      <str name="strapline_product_str_s">Photographic tactile experience for young learners</str>
      <str name="websiteId_product_s">51728</str>
      <str name="websiteId_product_str_s">51728</str>
    </doc>
    <doc>
      <float name="score">3.74633E-8</float>
      <str name="IndexId_s">product_711069724</str>
      <str name="IndexId_str_s">product_711069724</str>
      <str name="Index_Type_s">productIndex</str>
      <str name="Index_Type_str_s">productIndex</str>
      <str name="isbn10_product_s">0135206510</str>
      <str name="isbn10_product_str_s">0135206510</str>
      <str name="isbn13_product_s">9780135206515</str>
      <str name="isbn13_product_str_s">9780135206515</str>
      <str name="prdMainSubTitle_product_s">Hundreds of Sure-fire

FileBasedSpellChecker Multiple wordlist source files

2008-12-19 Thread tushar kapoor

I am using FileBasedSpellChecker and currently configuring it through one
source file. Something like this - 

<lst name="spellchecker">
  <str name="name">default</str>
  <str name="classname">solr.spelling.FileBasedSpellChecker</str>
  <str name="sourceLocation">./files/spellings.txt</str>
  <str name="characterEncoding">UTF-8</str>
  <str name="spellcheckIndexDir">./spellcheckerindex</str>
</lst>

I have a whole bunch of other wordlist files that I want to use for
spell-checking. However, I do not want to merge all of them into one file.

Is it possible to specify multiple wordlist sources in the same
spellchecker configuration?

I have tried using wildcards (*.txt) but they don't seem to work.
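
One approach, sketched below, is to declare several named spellcheckers
inside the SpellCheckComponent, one per word list, and pick one per request
with the spellcheck.dictionary parameter. The second dictionary name and
file path here are made up for illustration; as far as I know a single
FileBasedSpellChecker still takes exactly one sourceLocation, so this
selects between lists rather than merging them.

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="classname">solr.spelling.FileBasedSpellChecker</str>
    <str name="sourceLocation">./files/spellings.txt</str>
    <str name="characterEncoding">UTF-8</str>
    <str name="spellcheckIndexDir">./spellcheckerindex</str>
  </lst>
  <lst name="spellchecker">
    <str name="name">extra</str>
    <str name="classname">solr.spelling.FileBasedSpellChecker</str>
    <str name="sourceLocation">./files/extra_words.txt</str>
    <str name="characterEncoding">UTF-8</str>
    <str name="spellcheckIndexDir">./spellcheckerindex_extra</str>
  </lst>
</searchComponent>

A request would then pass &spellcheck.dictionary=extra (or default) to
choose which word list is consulted.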






RE: Russian stopwords

2008-12-06 Thread tushar kapoor

Hi Steve,

You were right, it turned out to be an encoding issue, but a really weird
one. I was using Windows Notepad to save the stopwords file in UTF-8
encoding, while I was using EditPlus to save the synonyms file. That was
the only difference. The moment I switched to EditPlus for saving the
stopwords file, it started working for Russian, German and all other
languages.

Anyway, thanks for suggesting a valid direction.

Regards,
Tushar.


Steven A Rowe wrote:
 
 Hi Tushar,
 
 On 12/05/2008 at 5:18 AM, tushar kapoor wrote:
 I am trying to filter Russian stopwords but have not been
 successful with that.
 [...]
 <filter class="solr.StopFilterFactory" ignoreCase="true"
         words="stopwords.txt"/>
 <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
         ignoreCase="true" expand="false"/>
 [...]
 Interestingly, Russian synonyms are working fine. English and Russian
 synonyms get searched correctly.

 Also, if I add an English word to stopwords.txt it
 gets filtered correctly. It's the Russian words that are not
 getting filtered as stopwords.
 
 It might be an encoding issue - StopFilterFactory delegates stopword file
 reading to SolrResourceLoader.getLines(), which uses an InputStreamReader
 instantiated with the UTF-8 charset.  Is your stopwords.txt encoded as
 UTF-8?
 
 It's strange that synonyms are working fine, though - SynonymFilterFactory
 reads in the synonyms file using the same mechanism as StopFilterFactory -
 is it possible that your synonyms file is encoded as UTF-8, but your
 stopwords file is encoded with a different encoding, perhaps KOI8-R?  Like
 UTF-8, KOI8-R includes the entirety of 7-bit ASCII, so English words would
 be properly decoded under UTF-8.
 
 Steve
 
 




Russian stopwords

2008-12-05 Thread tushar kapoor

I am trying to filter Russian stopwords but have not been successful with
that. I am using the following schema entry:

...
<fieldType name="text" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
            words="stopwords.txt"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="false"/>
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="0" generateNumberParts="0" catenateWords="1"
            catenateNumbers="1" catenateAll="0"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldType>
...

Interestingly, Russian synonyms are working fine. English and Russian
synonyms get searched correctly.

Also, if I add an English word to stopwords.txt it gets filtered
correctly. It's the Russian words that are not getting filtered as
stopwords.

Can someone explain this behaviour?
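
For reference, the stopwords file is typically just one entry per line and
has to actually be saved as UTF-8 for non-ASCII words to survive. A
fragment with a few common Russian stopwords (illustrative) would look
like:

и
в
не
на
что
как

If the file were saved in another Cyrillic encoding such as KOI8-R, the
ASCII English entries would still decode correctly under UTF-8 but the
Russian ones would not, which matches the symptom described here.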

Thanks,
Tushar.



Re: Encoded search string qt=Dismax

2008-12-03 Thread tushar kapoor

Hoss,

If the way I am doing it (Query 1) is a fluke, what is the correct way of
doing it? It seems there is something fundamental that I am missing.

It would be great if you could list the steps required to support
multi-language search. Please provide some context on how exactly language
analyzers are used.

I am attaching - 

http://www.nabble.com/file/p20817191/schema.xml schema.xml 
http://www.nabble.com/file/p20817191/solrconfig.xml solrconfig.xml 

Also, I am using a multicore setup with support for only one language per
core. The field type on which I have applied the language analyzer
(Russian) is "text".

Regards,
Tushar.


hossman wrote:
 
 
 First of all...
 
 standard request handler uses the default search field specified in your
 schema.xml -- dismax does not.  dismax looks at the qf param to decide
 which fields to search for the q param.  if you started with the example
 schema the dismax handler may have a default value for qf which is
 trying to query different fields than you actually use in your documents.
 
 debugQuery=true will show you exactly what query structure (and on which 
 fields) each request is using.
 
 Second...
 
 I don't know Russian, and character encoding issues tend to make my head
 spin, but the fact that the responseHeader is echoing back a q param
 containing java string literal sequences suggests that you are doing
 something wrong.  you should be sending the URL encoding of the actual
 characters of the Russian word, not the URL encoding of the java string
 literal encoding of the Russian word.  I suspect the fact that you are
 getting any results at all from your first query is a fluke.
 
 The <str name="q"> in the responseHeader should show you the real word you
 want to search for -- once it does, then you'll know that you have the
 URL+UTF8 encoding issues straightened out.  *THEN* i would worry about the
 dismax/standard behavior.
 
 :  <lst name="params">
 :   <str name="q">\u041f\u0440\u0435\u0434\u0432\u0430\u0440\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0435</str>
 :   </lst>
 
 
 -Hoss
 
 
 




Multi Language Search

2008-12-02 Thread tushar kapoor

Hi,

Before I start with the Solr-specific question, there is one thing I need
to get information on.

If I am a Russian user on a Russian website and I want to search for
indexes containing two Russian words, what is the query term going to look
like?

1. Russian Word 1 AND Russian Word 2

or rather,

2. Russian Word 1 (AND written in Russian) Russian Word 2

Now over to the Solr-specific question: if the answer to the above is
either 1 or 2, how does one do it using Solr? I tried using the language
analyzers but I am not too sure how exactly they work.

Regards,
Tushar.



Encoded search string qt=Dismax

2008-12-02 Thread tushar kapoor

Hi,

I am facing problems while searching for some encoded text as part of the
search query string. The results don't come up when I use URL encoding
together with qt=dismaxrequest.

I am searching for a Russian word by posting a URL-encoded UTF-8
transformation of the word. The query works fine for a normal request.
However, no docs are fetched when &qt=dismaxrequest is appended to the
query string.

The word being searched is:
Russian word - Предварительное

Java Unicode escapes -
\u041f\u0440\u0435\u0434\u0432\u0430\u0440\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0435

Posted query string (URL-encoded) -
%5Cu041f%5Cu0440%5Cu0435%5Cu0434%5Cu0432%5Cu0430%5Cu0440%5Cu0438%5Cu0442%5Cu0435%5Cu043b%5Cu044c%5Cu043d%5Cu043e%5Cu0435

Following are the two queries and the difference in results

Query 1 - this one works fine

?q=%5Cu041f%5Cu0440%5Cu0435%5Cu0434%5Cu0432%5Cu0430%5Cu0440%5Cu0438%5Cu0442%5Cu0435%5Cu043b%5Cu044c%5Cu043d%5Cu043e%5Cu0435

Result -

<?xml version="1.0" encoding="UTF-8" ?>
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">0</int>
    <lst name="params">
      <str name="q">\u041f\u0440\u0435\u0434\u0432\u0430\u0440\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0435</str>
    </lst>
  </lst>
  <result name="response" numFound="1" start="0">
    <doc>
      <str name="Index_Type_s">productIndex</str>
      <str name="Index_Type_str_s">productIndex</str>
      <str name="URL_s">4100018</str>
      <str name="URL_str_s">4100018</str>
      <arr name="all">
        <str>productIndex</str>
        <str>product</str>
        <str>Предварительное K математики учебная книга</str>
        <str>4100018</str>
        <str>4100018</str>
        <str>21125</str>
        <str>91048</str>
        <str>91047</str>
      </arr>
      <str name="editionTypeId_s">21125</str>
      <str name="editionTypeId_str_s">21125</str>
      <arr name="listOf_taxonomyPath">
        <str>91048</str>
        <str>91047</str>
      </arr>
      <str name="prdMainTitle_s">Предварительное K математики учебная книга</str>
      <str name="prdMainTitle_str_s">Предварительное K математики учебная книга</str>
      <str name="productType_s">product</str>
      <str name="productType_str_s">product</str>
      <arr name="strlistOf_taxonomyPath">
        <str>91048</str>
        <str>91047</str>
      </arr>
      <date name="timestamp">2008-12-02T08:14:05.63Z</date>
    </doc>
  </result>
</response>

Query 2 - with qt=dismaxrequest - this doesn't work

?q=%5Cu041f%5Cu0440%5Cu0435%5Cu0434%5Cu0432%5Cu0430%5Cu0440%5Cu0438%5Cu0442%5Cu0435%5Cu043b%5Cu044c%5Cu043d%5Cu043e%5Cu0435&qt=dismaxrequest

Result -
<?xml version="1.0" encoding="UTF-8" ?>
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">109</int>
    <lst name="params">
      <str name="q">\u041f\u0440\u0435\u0434\u0432\u0430\u0440\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0435</str>
      <str name="qt">dismaxrequest</str>
    </lst>
  </lst>
  <result name="response" numFound="0" start="0" maxScore="0.0" />
</response>

Don't know why there is a difference when &qt=dismaxrequest is appended.
Any help would be appreciated.
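
For comparison, a sketch of the same query with the Russian word itself
percent-encoded as UTF-8 bytes, instead of percent-encoding the \uXXXX Java
escape sequences (handler name as above, encoding computed for
Предварительное):

?q=%D0%9F%D1%80%D0%B5%D0%B4%D0%B2%D0%B0%D1%80%D0%B8%D1%82%D0%B5%D0%BB%D1%8C%D0%BD%D0%BE%D0%B5&qt=dismaxrequest

With this form the echoed <str name="q"> in the responseHeader should show
Предварительное itself rather than the backslash-u sequences, which makes
it easier to separate encoding problems from dismax configuration problems.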


Regards,
Tushar.



Re: Is it possible to specify a pattern of Ranking while querying the indexes?

2008-09-29 Thread tushar kapoor

In case I don't want to write plugin code for that, what option am I left
with?

As I see it, one solution is to add a new ranking field and query on the
basis of ascending/descending values of that ranking field (see the sketch
below). To change the sort order, change the value of the ranking field by
re-creating the indexes.

Is there any other way of doing it?
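
A sketch of that ranking-field approach (the field name is illustrative,
and the type assumes the example schema's sortable integer type): give each
product the position it should occupy, e.g. P3=1, P2=2, P1=3, P4=4 for the
order quoted below, and sort on that field.

<field name="displayRank" type="sint" indexed="true" stored="false"/>

Queries would then sort with ...&sort=displayRank asc (or the older
";displayRank asc;" suffix syntax quoted below). As noted above, changing
the desired order still means re-indexing the documents with new rank
values.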


Shalin Shekhar Mangar wrote:
 
 Well that is not very well defined. I suppose if you can mathematically
 define an ordering, you can implement it by writing some plugin code.
 
 On Mon, Sep 29, 2008 at 9:57 AM, tushar kapoor 
 [EMAIL PROTECTED] wrote:
 

 What I need is a specific sort order in which the documents are
 retrieved.
 The only way I know this is possible is by using something like this -

 widescreen AND HDTV^2; popular desc, score desc;

 as given on
 http://wiki.apache.org/solr/SolrRelevancyCookbook?highlight=%28ranking%29

 Can I specify an order other than ascending or descending?


 For instance, if I have a 'Product' field in indexes with values
 P1, P2, P3, P4

 If I query like this - (queryString);Product desc;

 Result would be - P4, P3, P2, P1

 Now I want the order to be P3, P2, P1, P4. Is this possible?




 Grant Ingersoll-6 wrote:
 
  Can you give an example of what you mean?
 
  On Sep 26, 2008, at 11:28 AM, tushar kapoor wrote:
 
 
  I want to specify a particular pattern in which results are
  retrieved for a
  query. Can a pattern of ranks be specified in the query ?
 
 
  --
  Grant Ingersoll
  http://www.lucidimagination.com
 
  Lucene Helpful Hints:
  http://wiki.apache.org/lucene-java/BasicsOfPerformance
  http://wiki.apache.org/lucene-java/LuceneFAQ
 
 
 
 
 
 
 
 
 



 
 
 -- 
 Regards,
 Shalin Shekhar Mangar.
 
 




Re: Is it possible to specify a pattern of Ranking while querying the indexes?

2008-09-28 Thread tushar kapoor

What I need is a specific sort order in which the documents are retrieved.
The only way I know this is possible is by using something like this -

widescreen AND HDTV^2; popular desc, score desc;

as given on
http://wiki.apache.org/solr/SolrRelevancyCookbook?highlight=%28ranking%29

Can I specify an order other than ascending or descending?


For instance, if I have a 'Product' field in indexes with values
P1, P2, P3, P4

If I query like this - (queryString);Product desc;

Result would be - P4, P3, P2, P1

Now I want the order to be P3, P2, P1, P4. Is this possible?




Grant Ingersoll-6 wrote:
 
 Can you give an example of what you mean?
 
 On Sep 26, 2008, at 11:28 AM, tushar kapoor wrote:
 

 I want to specify a particular pattern in which results are  
 retrieved for a
 query. Can a pattern of ranks be specified in the query ?

 
 --
 Grant Ingersoll
 http://www.lucidimagination.com
 
 Lucene Helpful Hints:
 http://wiki.apache.org/lucene-java/BasicsOfPerformance
 http://wiki.apache.org/lucene-java/LuceneFAQ
 
 
 
 
 
 
 
 
 




Is it possible to specify a pattern of Ranking while querying the indexes?

2008-09-26 Thread tushar kapoor

I want to specify a particular pattern in which results are retrieved for a
query. Can a pattern of ranks be specified in the query?