Re: ConcurrentUpdateSolrClient - notify on success/failure?

2019-01-01 Thread deniz
thanks a lot for the explanation :) 



-
Zeki ama calismiyor... Calissa yapar...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


ConcurrentUpdateSolrClient - notify on success/failure?

2018-12-27 Thread deniz
I am trying to figure out whether I can log anything or fire events for other
listeners (like a JEE event and so on) once ConcurrentUpdateSolrClient sends
the updates to Solr (i.e. the internal queue is emptied and the request with
all of the data is made to Solr itself) from Java code... Basically, I am
trying to find a way to add some logic based on the "flushing" status of
ConcurrentUpdateSolrClient...

After digging a bit, I found some methods which might be useful, but couldn't
find any explanation regarding them...

blockUntilFinished() -> seems like it might be useful, but I couldn't find any
example cases.
handleError(Throwable ex) -> only logs the error
onSuccess(HttpResponse resp) -> empty method body, needs overriding

There is also shutdownNow(), but it doesn't seem useful for the
functionality I am looking for...

Are there any other ways to listen for the flushing? And could anyone explain
some details about blockUntilFinished(), please? In what cases is it useful?
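For anyone searching the archives later, here is a minimal sketch of wiring
notifications through those callbacks, assuming SolrJ 7.x (where
onSuccess/handleError are public and the Builder-based constructor is
protected); the class name and notification logic are made up:

import org.apache.http.HttpResponse;
import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient;

public class NotifyingUpdateClient extends ConcurrentUpdateSolrClient {

    public NotifyingUpdateClient(ConcurrentUpdateSolrClient.Builder builder) {
        super(builder); // the ctor is protected, so it is reachable from a subclass
    }

    @Override
    public void onSuccess(HttpResponse resp) {
        // empty in the base class; called when a runner thread completes a request,
        // so this is where a log line or a JEE event could be fired
    }

    @Override
    public void handleError(Throwable ex) {
        super.handleError(ex); // keep the default error logging
        // notify listeners about the failure here
    }
}

blockUntilFinished() then gives the "flushed" signal: it returns only after
the internal queue has been drained and the runner threads have finished, so
calling it after the last add() and firing an event right afterwards is one
way to react to a completed flush.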







-
Zeki ama calismiyor... Calissa yapar...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Trying to retrieve two values from two different collections by sql (V 7.2.1)

2018-10-16 Thread deniz
I found out something strange regarding this case. If I change one of the
selected fields to something else, so that the field names are no longer the
same, then I can get the two different values.

so the initial query was

select *collection1.id* as collection1id, collection2.id as collection2id
from
collection1 join collection2 on collection1.name = collection2.name where
collection1.name = 'dummyname';


once I change it to

select *collection1.age* as collection1id, collection2.id as collection2id
from
collection1 join collection2 on collection1.name = collection2.name where
collection1.name = 'dummyname';

I am able to get the age from one collection and the id from the second one.
But if I use age for both collections, like the id field, I get only one value
from one of the collections.






-
Zeki ama calismiyor... Calissa yapar...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Trying to retrieve two values from two different collections by sql (V 7.2.1)

2018-10-16 Thread deniz
I am trying to run a query like the one below against two different
collections:

select collection1.id as collection1id, collection2.id as collection2id from
collection1 join collection2 on collection1.name = collection2.name where
collection1.name = 'dummyname';

And as a result, I am only seeing

{
result-set=
{
docs=
[
{collection2id=1001}, 
{collection2id=1002}, 
{collection2id=1003}, 
{collection2id=1004}, 
{collection2id=1005}, 
{collection2id=1001}, 
{collection2id=1002}, 
{collection2id=1003}, 
{collection2id=1004}, 
{collection2id=1005}, 
{collection2id=1001}, 
{collection2id=1002}, 
{collection2id=1003}, 
{collection2id=1004}, 
{collection2id=1005}, 
{collection2id=1001}, 
{collection2id=1002}, 
{collection2id=1003}, 
{collection2id=1004}, 
{collection2id=1005}, 
{collection2id=1001}, 
{collection2id=1002}, 
{collection2id=1003}, 
{collection2id=1004}, 
{collection2id=1005}, 
{EOF=true, 
RESPONSE_TIME=2221}
]
}
}

I can understand why the same docs are returned (for each matching doc in
collection1 there are 5 different docs in collection2), but the thing I don't
get is why collection1id is nowhere in the result list while it is in the
select statement?



-
Zeki ama calismiyor... Calissa yapar...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: SQL Query with NOT (V 7.2.1)

2018-10-16 Thread deniz
okay, I found a workaround for NOT queries on string fields.

This query does not filter on NOT:

curl --data-urlencode "stmt=select id, name from collection where NOT (name
= 'defaultmail')" 'http://server:port/solr/collection/sql'

but after adding something trivial, i.e. id > 0, to the where clause as

curl --data-urlencode "stmt=select id, name from collection where id > 0 AND
NOT (name
= 'defaultmail')" 'http://server:port/solr/collection/sql'

I am not seeing any name field with 'defaultmail' in the response.

I am not sure whether this is a bug or just wrong syntax when using NOT as a
single criterion in the where clause, though






-
Zeki ama calismiyor... Calissa yapar...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: SQL Query with NOT (V 7.2.1)

2018-10-16 Thread deniz
Using integers in the where clause with NOT behaves the same, though for that
case using <> as a workaround does the job



-
Zeki ama calismiyor... Calissa yapar...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: SQL Query with NOT (V 7.2.1)

2018-10-16 Thread deniz
with curl the result is the same:

curl --data-urlencode "stmt=select id, name from collection where NOT (name
= 'defaultmail')" 'http://server:port/solr/collection/sql' 

then the response is

.
.
.
{
"id":113,
"name":"defaultmail"}
  ,{
"id":109,
"name":"defaultmail"}
  ,{
"EOF":true,
"RESPONSE_TIME":197}]}}




-
Zeki ama calismiyor... Calissa yapar...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


SQL Query with NOT (V 7.2.1)

2018-10-15 Thread deniz
I have been trying to get SQL queries running, but am having trouble with NOT
queries.

Basically, the code looks like below


SolrQuery sqlQuery = new SolrQuery();
sqlQuery.setRequestHandler("/sql");
sqlQuery.set("stmt","select collection1.id as collection_1_id,
collection1.email as collection_1_mail from collection1 join collection2 on
collection1.email = collection2.email where collection1.email =
's...@email.com' limit 2");
System.out.println(solrClient.query("collection2",sqlQuery).getResponse());

The above code works fine and returns a response like

{result-set={docs=[{collection_1_id=2, collection_1_mail=s...@email.com},
{collection_1_id=1, collection_1_mail=s...@email.com}, {EOF=true,
RESPONSE_TIME=1959}]}}

Then when I try to use a NOT clause as described on
https://lucene.apache.org/solr/guide/7_2/parallel-sql-interface.html


sqlQuery.set("stmt","select collection1.id as collection_1_id,
collection1.email as collection_1_mail from collection1 join collection2 on
collection1.email = collection2.email where NOT (collection1.email =
's...@email.com') limit 2");

or 

sqlQuery.set("stmt","select collection1.id as collection_1_id,
collection1.email as collection_1_mail from collection1 join collection2 on
collection1.email = collection2.email where collection1.email <>
's...@email.com' limit 2");

I am still getting 

{result-set={docs=[{collection_1_id=2, collection_1_mail=s...@email.com},
{collection_1_id=1, collection_1_mail=s...@email.com}, {EOF=true,
RESPONSE_TIME=1959}]}}

as response.

What is more, although the wiki page states '!=' is a valid operator, I am
getting 


parse failed: Bang equal '!=' is not allowed under the current SQL
conformance level

in the response.


Is there anything I am missing for using sql queries via SolrJ? 



-
Zeki ama calismiyor... Calissa yapar...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Metrics API via Solrj

2018-10-03 Thread deniz
Thanks a lot Jason and Shawn, it is quite smooth, although there are no
built-in request objects for metrics like there are for collections or the
schema :)



-
Zeki ama calismiyor... Calissa yapar...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Metrics API via Solrj

2018-10-03 Thread deniz
Is there any way to get the metrics via SolrJ? All of the examples seem to use
plain curl or HTTP requests with a JSON response. I found the
org.apache.solr.client.solrj.io.stream.metrics package, but couldn't figure
out how to send the requests via SolrJ...

Could anyone help me figure out how to deal with the Metrics API via SolrJ?
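In case it helps later readers: one way to call the Metrics API from SolrJ
without a dedicated request object is a GenericSolrRequest against
/admin/metrics; a minimal sketch (host and params are made up):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

SolrClient client = new HttpSolrClient.Builder("http://host:8983/solr").build();

ModifiableSolrParams params = new ModifiableSolrParams();
params.set("group", "core");           // same params the plain HTTP API takes
params.set("prefix", "QUERY./select");

GenericSolrRequest req =
    new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/metrics", params);
NamedList<Object> metrics = client.request(req);  // throws SolrServerException/IOException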



-
Zeki ama calismiyor... Calissa yapar...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Concurrent Update Client Stops on Exceptions Randomly v7.4

2018-09-06 Thread deniz
I am trying to write a wrapper for DIH so I can leverage the field type
guessing while importing the SQL data.

The query is supposed to retrieve 400K+ documents. In the test data in the db
there are dirty date fields, which contain values like '1966-00-00' or
'1987-10-00' as well.

I am running the code below:

 public void dataimport(ConcurrentUpdateSolrClient updateClient, String importSql) {

     try {
         Connection conn = DriverManager.getConnection("connection string", "user", "pass");
         Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                 ResultSet.CONCUR_READ_ONLY);
         stmt.setFetchSize(Integer.MIN_VALUE);
         ResultSet rs = stmt.executeQuery(importSql);
         ResultSetMetaData resultSetMetaData = rs.getMetaData();
         List<SolrFieldObject> fields = new ArrayList<>();
         for (int index = 1; index <= resultSetMetaData.getColumnCount(); index++) {
             fields.add(new SolrFieldObject(resultSetMetaData.getColumnLabel(index),
                     resultSetMetaData.getColumnClassName(index)));
         }
         while (rs.next()) {
             SolrInputDocument solrInputDocument = new SolrInputDocument();
             for (SolrFieldObject field : fields) {
                 try {
                     Object dataObject = rs.getString(field.name());
                     Optional.ofNullable(dataObject).ifPresent(databaseInfo ->
                             solrInputDocument.addField(field.name(),
                                     String.valueOf(databaseInfo)));
                 } catch (Exception e) {
                     e.printStackTrace();
                 }
             }
             try {
                 UpdateRequest updateRequest = new UpdateRequest();
                 updateRequest.setCommitWithin(1);
                 updateRequest.add(solrInputDocument);
                 updateRequest.process(updateClient);
             } catch (Exception e) {
                 e.printStackTrace();
             }
         }
         stmt.close();
         conn.close();
     } catch (Exception e) {
         e.printStackTrace();
     }
 }

The code works fine, except that it randomly stops with logs like 'Error
adding field 'day'='1976-00-00' msg=Invalid Date String:'1976-00-00' on random
documents. Although there are many other documents with invalid dates, those
are only logged as errors on the server side while the client carries on and
keeps pushing other documents, until it stops on a random document with the
given error.

Is there an error threshold that makes the concurrent update client stop after
some time? Or are there other points I am missing while dealing with this kind
of update?



-
Zeki ama calismiyor... Calissa yapar...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Null Pointer Exception without details on Update in schemaless 7.4

2018-09-05 Thread deniz
The server is also 7.4.

And your assumption/check on null values in the input doc seems legit... I
added some checks before pushing the doc to Solr and replaced null values with
some default values, and the updates seem to go through without problems...
though having slightly more explanatory logs on the server side might be
useful...

thanks a lot for pointing out the null fields
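For later readers, the guard amounts to something like this inside the loop
that builds the document (the "unknown" default is arbitrary):

Object value = rs.getObject(columnName);
solrInputDocument.addField(columnName, value != null ? value : "unknown");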



-
Zeki ama calismiyor... Calissa yapar...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Null Pointer Exception without details on Update in schemaless 7.4

2018-09-05 Thread deniz
nope, the data I am pushing is stuff like string, int, etc.

I checked further and did a bunch of trial and error; here are the things I
was able to figure out:

 - If a date value from the database is null, then it breaks the update with a
"-00-00" is not a valid date string error.
 - There are some column names with suffixes which trigger dynamic field
creation with incorrect data.

But the above cases have proper error logs, unlike the one I initially
posted.




-
Zeki ama calismiyor... Calissa yapar...
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Null Pointer Exception without details on Update in schemaless 7.4

2018-09-05 Thread deniz
I have set up a schemaless Solr (cloud) and have been testing updates. As DIH
does not go through field guessing, I wrote a small piece of code to query
data from the db and push the docs to Solr...

Once the client pushes the docs to Solr, the server logs an NPE as below:

 
o.a.s.h.RequestHandlerBase java.lang.NullPointerException
  at
org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.mapValueClassesToFieldType(AddSchemaFieldsUpdateProcessorFactory.java:509)
  at
org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:396)
  at
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
  at
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
  at
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
  at
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
  at
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at
org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
  at
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
  at
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at
org.apache.solr.update.processor.AbstractDefaultValueUpdateProcessorFactory$DefaultValueUpdateProcessor.processAdd(AbstractDefaultValueUpdateProcessorFactory.java:92)
  at
org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)
  at
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:188)
  at
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:144)
  at
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:311)
  at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
  at
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:130)
  at
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:276)
  at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
  at
org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:178)
  at
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:195)
  at
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:109)
  at
org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55)
  at
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
  at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
  at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2539)
  at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
  at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
  at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
  at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
  at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
  at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
  at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
  at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
  at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
  at
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
  at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
  at
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
  at
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at

Re: Spatial Search based on the amount of docs, not the distance

2017-06-21 Thread deniz
It is certainly possible to use the d value to limit the distance; however,
it might not be very efficient, as some of the coords may not have any docs
around them even for a large value of d... so it is hard to determine a
default value for d.

though it sounds like having a default d and gradually incrementing its value
might be a workaround for top-K results...
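A sketch of skipping d entirely: sort on geodist() and let rows act as the
top-K limit (field name and point are made up):

import org.apache.solr.client.solrj.SolrQuery;

SolrQuery q = new SolrQuery("*:*");
q.set("sfield", "coords");                    // hypothetical location field
q.set("pt", "48.85,2.35");                    // point of interest
q.setSort("geodist()", SolrQuery.ORDER.asc);  // nearest docs first
q.setRows(100);                               // "closest 100 documents", no d involved

Sorting every matching doc by distance is of course more expensive than a
d-bounded geofilt, which is why the gradual-increment idea above can still pay
off on large indexes.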





-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Spatial-Search-based-on-the-amount-of-docs-not-the-distance-tp4342108p4342258.html
Sent from the Solr - User mailing list archive at Nabble.com.


Spatial Search based on the amount of docs, not the distance

2017-06-21 Thread deniz
I am trying to figure out whether it is possible to have the spatial search
limit itself based on the number of docs rather than the distance...

What I want is something like "closest XXX documents from point(x,y)",
independent of the "d" value in the query... would this need a custom plugin,
or are there query params/functions to achieve this?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Spatial-Search-based-on-the-amount-of-docs-not-the-distance-tp4342108.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Interval Facets with JSON

2017-02-08 Thread deniz
Tom Evans-2 wrote
> I don't think there is such a thing as an interval JSON facet.
> Whereabouts in the documentation are you seeing an "interval" as JSON
> facet type?
> 
> 
> You want a range facet surely?
> 
> One thing with range facets is that the gap is fixed size. You can
> actually do your example however:
> 
> json.facet={hieght_facet:{type:range, gap:20, start:160, end:190,
> hardend:True, field:height}}
> 
> If you do require arbitrary bucket sizes, you will need to do it by
> specifying query facets instead, I believe.
> 
> Cheers
> 
> Tom


nothing other than
https://cwiki.apache.org/confluence/display/solr/Faceting#Faceting-IntervalFaceting
for documentation on intervals... I am OK with range queries as well, but
intervals would fit better because of the differing bucket sizes...

I have also checked the FacetRequest class after digging through the error
stack and found the lines below:

public Object parseFacetOrStat(String key, String type, Object args) throws
SyntaxError {
// TODO: a place to register all these facet types?

if ("field".equals(type) || "terms".equals(type)) {
  return parseFieldFacet(key, args);
} else if ("query".equals(type)) {
  return parseQueryFacet(key, args);
} else if ("range".equals(type)) {
  return parseRangeFacet(key, args);
}

AggValueSource stat = parseStat(key, type, args);
if (stat == null) {
  throw err("Unknown facet or stat. key=" + key + " type=" + type + "
args=" + args);
}

couldn't find any other class extending this method either... so I will
simply switch to ranges for now...

thanks a lot for your suggestions
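For the record, the query-facet route Tom mentions would look roughly like
this for arbitrary bucket sizes (the bucket names are made up):

SolrQuery q = new SolrQuery("*:*");
q.setRows(0);
q.set("json.facet",
    "{short_ones:{type:query, q:\"height:[160 TO 180]\"},"
  + " tall_ones:{type:query, q:\"height:{180 TO 190]\"}}");

Each bucket is an independent query, so the widths can differ freely, at the
cost of one facet entry per bucket.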





-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Interval-Facets-with-JSON-tp4319111p4319402.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Interval Facets with JSON

2017-02-07 Thread deniz
I have taken a look at the class
solr/core/src/test/org/apache/solr/search/facet/TestJsonFacets.java but it
seems there are no tests for interval facets in JSON...






-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Interval-Facets-with-JSON-tp4319111p4319254.html
Sent from the Solr - User mailing list archive at Nabble.com.


Interval Facets with JSON

2017-02-07 Thread deniz
Hello,

I am trying to run JSON facets with an interval query as follows:

  
"json.facet":{"height_facet":{"interval":{"field":"height","set":["[160,180]","[180,190]"]}}}

And the related field is: [field definition not preserved by the mail archive]

But I keep seeing errors like:

o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Unknown
facet or stat. key=height_facet type=interval args={field=height,
set=[[160,180], [180,190]]} , path=/facet


I have tried to find an example of JSON facets with intervals but couldn't
find anything... almost everywhere there are examples for range queries rather
than intervals...

The thing I am trying to achieve is the same/similar response as with:

/select?facet=on&indent=on&q=*:*&wt=json&facet.interval=height&facet.interval.set=[0,155]&facet.interval.set=(155,165]

In case I query Solr directly with the above query, I am able to see the
facets.

The Solr version I use is 6.1.0.

Is there something missing or incorrect in the syntax that I use for JSON
facets? Has anyone had similar issues with interval facets with JSON?






-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Interval-Facets-with-JSON-tp4319111.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: [Solr 5.1.0] - Ignoring Whitespaces as delimiters

2016-10-16 Thread deniz
thanks a lot... I prepared a regex which seems to do what I was looking
for :)
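For anyone landing here later, such a regex-based setup could look like this
in schema.xml (the type name is made up; the pattern splits on hyphens only,
so "abc cde-rfg" becomes "abc cde" and "rfg"):

<fieldType name="text_hyphen" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.PatternTokenizerFactory" pattern="-"/>
  </analyzer>
</fieldType>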



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-5-1-0-Ignoring-Whitespaces-as-delimiters-tp4300939p4301344.html
Sent from the Solr - User mailing list archive at Nabble.com.


[Solr 5.1.0] - Ignoring Whitespaces as delimiters

2016-10-12 Thread deniz
Hello,

Are there any built-in tokenizers which behave like the StandardTokenizer but
do not tokenize on whitespace?

e.g. field:abc cde-rfg would be tokenized as "abc cde" and "rfg", not "abc",
"cde", "rfg"

I have checked the existing tokenizers/analyzers, and it seems there is no way
other than writing a custom tokenizer...

 



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-5-1-0-Ignoring-Whitespaces-as-delimiters-tp4300939.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 6 / Solrj RuntimeException: First tuple is not a metadata tuple

2016-05-09 Thread deniz
I was able to get the "gettingstarted" example running with SQL, on my local
machine with only a single ZK...

still not sure why the core/collection I tried didn't work until now...

thanks a lot for pointing out the version-related issues; it made me shift my
focus from the client to the server side :)



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-6-Solrj-RuntimeException-First-tuple-is-not-a-metadata-tuple-tp4274451p4275447.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 6 / Solrj RuntimeException: First tuple is not a metadata tuple

2016-05-08 Thread deniz
Joel Bernstein wrote
> It appears that the /sql handler is not sending the metadata Tuple.
> According to the log the parameter includeMetadata=true is being sent.
> This
> should trigger the sending of the metadata Tuple.
> 
> Is it possible that you are using a pre 6.0 release version of Solr from
> the master branch? The JDBC client appears to be from the 6.0 release but
> the server could be an older version.
> 
> The reason I ask this, is that older versions of the /sql handler don't
> have the metadata Tuple logic. So the query would be processed correctly
> but the metadata Tuple wouldn't be there.
> 
> Joel Bernstein
> http://joelsolr.blogspot.com/


I checked the Solr version once more and cleaned up all of the ZooKeeper data
as well and restarted, but the problem is still going on...

Do pre-6.0 versions actually support the /sql handler?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-6-Solrj-RuntimeException-First-tuple-is-not-a-metadata-tuple-tp4274451p4275437.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 6 / Solrj RuntimeException: First tuple is not a metadata tuple

2016-05-06 Thread deniz
I went on digging and debugged the code, and here is what I got at the point
where it breaks:

[inline debugger screenshot not preserved by the mail archive]

so basically the tuple doesn't have anything for "isMetadata", hence the null
at that point... is this a bug, or is there a missing config on the client
side or in the classpath?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-6-Solrj-RuntimeException-First-tuple-is-not-a-metadata-tuple-tp4274451p4275053.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 6 / Solrj RuntimeException: First tuple is not a metadata tuple

2016-05-05 Thread deniz
Joel Bernstein wrote
>> Can you post your classpath?

classpath as follows:


solr-solrj-6.0.0
commons-io-2.4
httpclient-4.4.1
httpcore-4.4.1
httpmime-4.4.1
zookeeper-3.4.6
stax2-api-3.1.4
woodstox-core-asl-4.4.1
noggit-0.6
jcl-over-slf4j-1.7.7
slf4j-api-1.7.7




-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-6-Solrj-RuntimeException-First-tuple-is-not-a-metadata-tuple-tp4274451p4274979.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 6 / Solrj RuntimeException: First tuple is not a metadata tuple

2016-05-05 Thread deniz
Also found 

// JDBC requires metadata like field names from the SQLHandler. Force
this property to be true.
props.setProperty("includeMetadata", "true");


in org.apache.solr.client.solrj.io.sql.DriverImpl 

Are there any other ways to get a response via SolrJ without metadata, to
avoid the error?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-6-Solrj-RuntimeException-First-tuple-is-not-a-metadata-tuple-tp4274451p4274739.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 6 / Solrj RuntimeException: First tuple is not a metadata tuple

2016-05-05 Thread deniz
Could it be something to do with the includeMetadata=true param? I have tried
to set it to false, but then the logs look like:

webapp=/solr path=/sql
params={includeMetadata=true&includeMetadata=false&numWorkers=1&wt=json&version=2.2&stmt=select+id,+text+from+test+where+tits%3D1+limit+5&aggregationMode=map_reduce}
status=0 QTime=3


-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-6-Solrj-RuntimeException-First-tuple-is-not-a-metadata-tuple-tp4274451p4274733.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 6 / Solrj RuntimeException: First tuple is not a metadata tuple

2016-05-05 Thread deniz

> The logs you shared don't seem to be the full logs. There will be a
> related
> exception on the Solr server side. The exception on the Solr server side
> will explain the cause of the problem.

The logs are the full logs which I got on the console when I ran the code,
and there is no exception on the server side at all (it prints the incoming
query and actually shows the hits, already pasted above)

the same query is fine if I run it with curl only, though...


-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-6-Solrj-RuntimeException-First-tuple-is-not-a-metadata-tuple-tp4274451p4274715.html
Sent from the Solr - User mailing list archive at Nabble.com.


Solr 6 / Solrj RuntimeException: First tuple is not a metadata tuple

2016-05-04 Thread deniz
I am trying to go through the steps here
<https://sematext.com/blog/2016/04/26/solr-6-as-jdbc-data-source/>
to start playing with the new API, but I am getting:

java.sql.SQLException: java.lang.RuntimeException: First tuple is not a
metadata tuple
at
org.apache.solr.client.solrj.io.sql.StatementImpl.executeQuery(StatementImpl.java:70)
at com.sematext.blog.App.main(App.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: java.lang.RuntimeException: First tuple is not a metadata tuple
at
org.apache.solr.client.solrj.io.sql.ResultSetImpl.<init>(ResultSetImpl.java:75)
at
org.apache.solr.client.solrj.io.sql.StatementImpl.executeQuery(StatementImpl.java:67)
... 6 more



My code is

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/**
 * Hello world!
 *
 */
public class App
{
    public static void main( String[] args )
    {
        Connection connection = null;
        Statement statement = null;
        ResultSet resultSet = null;

        try {
            String connectionString =
                "jdbc:solr://zkhost:port?collection=test&aggregationMode=map_reduce&numWorkers=1";
            connection = DriverManager.getConnection(connectionString);
            statement = connection.createStatement();
            resultSet = statement.executeQuery("select id, text from test where tits=1 limit 5");
            while (resultSet.next()) {
                String id = resultSet.getString("id");
                String nickname = resultSet.getString("text");

                System.out.println(id + " : " + nickname);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (resultSet != null) {
                try {
                    resultSet.close();
                } catch (Exception ex) {
                }
            }
            if (statement != null) {
                try {
                    statement.close();
                } catch (Exception ex) {
                }
            }
            if (connection != null) {
                try {
                    connection.close();
                } catch (Exception ex) {
                }
            }
        }
    }
}


I tried to figure out what is happening, but there are no more logs other
than the one above. And on the Solr side, the logs seem okay:

2016-05-04 15:52:30.364 INFO  (qtp1634198-41) [c:test s:shard1 r:core_node1
x:test] o.a.s.c.S.Request [test]  webapp=/solr path=/sql
params={includeMetadata=true&numWorkers=1&wt=json&version=2.2&stmt=select+id,+text+from+test+where+tits%3D1+limit+5&aggregationMode=map_reduce}
status=0 QTime=3
2016-05-04 15:52:30.382 INFO  (qtp1634198-46) [c:test s:shard1 r:core_node1
x:test] o.a.s.c.S.Request [test]  webapp=/solr path=/select
params={q=(tits:"1")&distrib=false&fl=id,text,score&sort=score+desc&rows=5&wt=json&version=2.2}
hits=5624 status=0 QTime=1


Is the error happening because of some missing error handling in the code, or
because of some strict checks in the IDE (IntelliJ IDEA)? Has anyone had
similar issues while using SQL with SolrJ?


Thanks

Deniz



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-6-Solrj-RuntimeException-First-tuple-is-not-a-metadata-tuple-tp4274451.html
Sent from the Solr - User mailing list archive at Nabble.com.


Export request handler via SolrJ

2016-02-01 Thread deniz
I have been trying to export a whole result set via SolrJ, but so far
everything (including the tricks here:
http://stackoverflow.com/questions/33540577/how-can-use-the-export-request-handler-via-solrj)
has failed... With curl it works totally fine to query
server:port/solr/core/export, but I couldn't find a way to get the same
results via SolrJ...

Has anyone tried "exporting" via SolrJ, or does it not support it yet?
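One SolrJ route worth trying is the streaming API pointed at the /export
handler; a sketch, assuming a 6.x-style SolrJ (the ZK host, collection, and
field are made up, and /export requires an explicit sort on docValues fields):

import org.apache.solr.client.solrj.io.SolrClientCache;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
import org.apache.solr.client.solrj.io.stream.StreamContext;
import org.apache.solr.common.params.ModifiableSolrParams;

ModifiableSolrParams params = new ModifiableSolrParams();
params.set("q", "*:*");
params.set("fl", "id");
params.set("sort", "id asc");
params.set("qt", "/export");      // route the stream through the export handler

CloudSolrStream stream = new CloudSolrStream("zkhost:2181", "core", params);
StreamContext context = new StreamContext();
context.setSolrClientCache(new SolrClientCache());
stream.setStreamContext(context);
try {
    stream.open();
    Tuple tuple = stream.read();
    while (!tuple.EOF) {          // the EOF tuple marks the end of the stream
        System.out.println(tuple.getString("id"));
        tuple = stream.read();
    }
} finally {
    stream.close();
}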



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Export-request-handler-via-SolrJ-tp4254597.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Core mismatch in org.apache.solr.update.StreamingSolrClients Errors for ConcurrentUpdateSolrClient

2015-08-11 Thread deniz
okay, to make everything clear, here are the steps:

- Creating the configs etc. and then running:

./zkcli.sh -cmd upconfig -n CoreA -d /path/to/core/configs/CoreA/conf/ -z
zk1:2181,zk2:2182,zk3:2183

- Then going to http://someserver:8983/solr/#/~cores

- Clicking Add Core:
http://lucene.472066.n3.nabble.com/file/n4222345/Screen_Shot_2015-08-11_at_14.png

Repeating the last step on the other node as well.

So this is invalid (incl. https://wiki.apache.org/solr/CoreAdmin)?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Core-mismatch-in-org-apache-solr-update-StreamingSolrClients-Errors-for-ConcurrentUpdateSolrClient-tp4222335p4222345.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Core mismatch in org.apache.solr.update.StreamingSolrClients Errors for ConcurrentUpdateSolrClient

2015-08-11 Thread deniz
thanks for the details Anshum :)

I have one more question: could this kind of error logging also be triggered
by the volume of incoming requests? I can see these errors only in the prod
env, while the testing env is totally fine, although the creation process is
exactly the same.



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Core-mismatch-in-org-apache-solr-update-StreamingSolrClients-Errors-for-ConcurrentUpdateSolrClient-tp4222335p4222348.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Core mismatch in org.apache.solr.update.StreamingSolrClients Errors for ConcurrentUpdateSolrClient

2015-08-11 Thread deniz
Hello Anshum,

thanks for the quick reply

I know it is being forwarded from one node to the leader node, but for
collection names it shows different collections while the leader node address
is correct.

Dunno if I am missing some points, but my concern is the bold parts below:

ERROR - 2015-08-11 05:04:34.592; [*CoreA* shard1 core_node2 *CoreA*]
org.apache.solr.update.StreamingSolrClients$1; error
org.apache.solr.common.SolrException: Bad Request
request:
http://server:8983/solr/*CoreB*/update?update.distrib=TOLEADER&distrib.from=http%3A%2F%2Fserver2%3A8983%2Fsolr%2F*CoreB*%2F&wt=javabin&version=2

So this is also normal?


Anshum Gupta wrote
 Hi Deniz,
 
 Seems like the update that's being forwarded from a non-leader (original
 node that received the request) is failing. This could be due to multiple
 reasons, including issue with your schema vs document that you sent.
 
 To elaborate more, here's how a typical batched request in SolrCloud
 works.
 
 1. Batch sent from client.
 2. Received by node X.
 3. All documents that have their shard leader on node X, are processed and
 distributed to the replicas by node X. All other documents which belong to
 a shard who's leader isn't on Node X, get forwarded using the
 ConcurrentUpdateSolrClient to their respective leaders.
 
 There's nothing *strange* about this log, other than the fact that the
 update failed (and would have failed even if you would have directly sent
 the document to this node). Hope this made things clear.
 
 -- 
 Anshum Gupta





-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Core-mismatch-in-org-apache-solr-update-StreamingSolrClients-Errors-for-ConcurrentUpdateSolrClient-tp4222335p4222338.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Core mismatch in org.apache.solr.update.StreamingSolrClients Errors for ConcurrentUpdateSolrClient

2015-08-11 Thread deniz
I created it simply by creating the configs, then using upconfig to upload
them to ZooKeeper, then adding the core via the Solr admin interface.

I have only changed the IPs of server and server1 and changed the
core/collection names to CoreA and CoreB; in the logs, CoreA and CoreB are
different collections with different names.



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Core-mismatch-in-org-apache-solr-update-StreamingSolrClients-Errors-for-ConcurrentUpdateSolrClient-tp4222335p4222341.html
Sent from the Solr - User mailing list archive at Nabble.com.


Core mismatch in org.apache.solr.update.StreamingSolrClients Errors for ConcurrentUpdateSolrClient

2015-08-10 Thread deniz
I have a simple 2-node (5.1) cloud env with 6 different cores. One of the
cores (CoreB) has an update issue which I am aware of, but in the Solr error
logs I am seeing these below:

ERROR - 2015-08-11 05:04:34.592; [*CoreA shard1 core_node2 CoreA*]
org.apache.solr.update.StreamingSolrClients$1; error
org.apache.solr.common.SolrException: Bad Request
request:
*http://server:8983/solr/CoreB*/update?update.distrib=TOLEADER&distrib.from=http%3A%2F%2Fserver2%3A8983%2Fsolr%2FCoreB%2F&wt=javabin&version=2
at
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:241)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

ERROR - 2015-08-11 05:09:30.260; [CoreA shard1 core_node2 CoreA]
org.apache.solr.update.StreamingSolrClients$1; error
org.apache.solr.common.SolrException: Bad Request
request:
http://server:8983/solr/CoreB/update?update.distrib=TOLEADER&distrib.from=http%3A%2F%2Fserver2%3A8983%2Fsolr%2FCoreB%2F&wt=javabin&version=2
at
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:241)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

ERROR - 2015-08-11 05:20:49.710; [gaysuser shard1 core_node2 gaysuser]
org.apache.solr.update.StreamingSolrClients$1; error
org.apache.solr.common.SolrException: Bad Request
request:
http://server:8983/solr/CoreB/update?update.distrib=TOLEADER&distrib.from=http%3A%2F%2Fserver2%3A8983%2Fsolr%2FCoreB%2F&wt=javabin&version=2
at
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:241)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

ERROR - 2015-08-11 05:23:29.868; [CoreA shard1 core_node2 CoreA]
org.apache.solr.update.StreamingSolrClients$1; error
org.apache.solr.common.SolrException: Bad Request
request:
http://server:8983/solr/CoreB/update?update.distrib=TOLEADER&distrib.from=http%3A%2F%2Fserver2%3A8983%2Fsolr%2FCoreB%2F&wt=javabin&version=2
at
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:241)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Is this normal and just an issue with wrong logging params, or is there
something wrong with the configs of the cloud?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Core-mismatch-in-org-apache-solr-update-StreamingSolrClients-Errors-for-ConcurrentUpdateSolrClient-tp4222335.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: stats component performance

2015-06-09 Thread deniz
It would be really nice if there were... I am also looking for a detailed
explanation of the stats component in Solr, but couldn't find anything yet...



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/stats-component-performance-tp4202569p4210621.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: zk disconnects and failure to retry?

2015-01-22 Thread deniz
bumping an old entry... but are there any improvements on this issue?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/zk-disconnects-and-failure-to-retry-tp4065877p4181370.html
Sent from the Solr - User mailing list archive at Nabble.com.


Using tmpfs for Solr index

2015-01-22 Thread deniz
Would it boost performance if the index were switched from
RAMDirectoryFactory to tmpfs? Or would it simply do the same thing as MMap?

And in case it would be better to use tmpfs rather than RAMDirectory or
MMap, which directory factory would be the most feasible one for this
purpose?

Regards,



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Using-tmpfs-for-Solr-index-tp4181399.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Grouping based on multiple filters/criterias

2014-08-24 Thread deniz
umeshprasad wrote
 Solr does support date mathematics in filters / queries . So your
 timestamps intervals can be dynamic ..

how would it be done for this case then? Retrieving a bunch of documents
sorted by timestamp which should then, depending on some interval like 1 hour,
be grouped together if they are published by the same user... running a
function on the query might be OK, but combining it with other fields on a
dynamic basis again sounds a bit confusing



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Grouping-based-on-multiple-filters-criterias-tp4153462p4154920.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Grouping based on multiple filters/criterias

2014-08-21 Thread deniz
umeshprasad wrote
 Grouping supports group by queries.
 
 https://cwiki.apache.org/confluence/display/solr/Result+Grouping
 
 However you will need to form the group queries before hand.
 
 Thanks & Regards
 Umesh Prasad
 Search Lead@
 in.linkedin.com/pub/umesh-prasad/6/5bb/580/

I have seen this page before, but it does not provide the functionality that
I need, because the timestamp interval would be seriously tricky, as it is
supposed to be dynamic...

though I have found another solution to handle this outside of Solr :)
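For completeness, "forming the group queries beforehand" with date math would
look something like this (timestamp/source_user are the fields from the
original post; the windows are made up):

SolrQuery q = new SolrQuery("*:*");
q.set("group", true);
q.add("group.query", "source_user:deniz AND timestamp:[NOW-1HOUR TO NOW]");
q.add("group.query", "source_user:deniz AND timestamp:[NOW-2HOUR TO NOW-1HOUR]");

which illustrates the limitation: every window/user combination has to be
spelled out up front, so truly dynamic intervals don't fit.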



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Grouping-based-on-multiple-filters-criterias-tp4153462p4154343.html
Sent from the Solr - User mailing list archive at Nabble.com.


Grouping based on multiple filters/criterias

2014-08-18 Thread deniz
Is it possible to have multiple filters/criteria on grouping? I am trying to
do something like the tickets below describe, and judging from their statuses
I am assuming it isn't possible?

https://issues.apache.org/jira/browse/SOLR-2553
https://issues.apache.org/jira/browse/SOLR-2526
https://issues.apache.org/jira/browse/LUCENE-3257

To make everything clear, here are the details of what I am planning to do
with Solr...

So there is an activity feed on a site, basically working like the Facebook or
LinkedIn newsfeed, though there is no relationship between users; it doesn't
matter whether I am following someone or not. As long as their settings allow
me to see their posts and they hit my search filter, I will see their posts.

The part related to grouping is tricky... so let's assume that you are able to
see my posts, and I have posted 8 activities in the last hour; those
activities should appear differently from other posts, as they would be a
combined view of the posts...

i.e.
 <deniz>
  activity one
  activity two
  .
  activity eight
 </deniz>
 <other user 1>
  single activity
 </other user 1>
 <another user 1>
  single activity
 </another user 1>
 <other user 2>
  activity one
  activity two
 </other user 2>

So here the results should be grouped depending on their post times...

In Solr (4.7.2) I am indexing activities as documents, and each document has a
bunch of fields including timestamp and source_user etc.

Is it possible to do this with current Solr?

(in case the details are not clear, please feel free to ask for more details
:) )







-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Grouping-based-on-multiple-filters-criterias-tp4153462.html
Sent from the Solr - User mailing list archive at Nabble.com.


Retrieving and updating large set of documents on Solr 4.7.2

2014-08-17 Thread deniz

I am trying to implement an activity feed for a website and planning to use
Solr for this case. As it does not have any follower/following relation, Solr
fits the requirements.

There is one point which makes me concerned about performance. As user A, I
may have 10K activities in the feed, and then I update my preferences, so the
activities that I have posted should be updated too (imagine that I am
changing my username, so all of the activities would have my new username).
In order to update all 10K activities, I need to retrieve the unique document
ids from Solr and then update them. Retrieving 10K docs at once is not a good
idea if you imagine a bunch of other users doing a similar change at the same
time. I have checked docs and forums; using cursors in Solr seems OK, but it
still makes me think about the performance (after id retrieval, I need to
update each activity).

Are there any other ways to handle this without cursors? Or should I rather
use another tool/backend to keep something like a username -> activity_id
mapping, so I can directly retrieve the ids to update?
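For reference, the cursor-based id retrieval mentioned above looks roughly
like this in SolrJ 4.7+ (query, field names, and page size are made up):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.params.CursorMarkParams;

SolrQuery q = new SolrQuery("source_user:userA");
q.setRows(500);
q.setSort(SolrQuery.SortClause.asc("id"));   // cursors need a sort on the uniqueKey
String cursorMark = CursorMarkParams.CURSOR_MARK_START;
while (true) {
    q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark);
    QueryResponse rsp = solrClient.query(q);
    for (SolrDocument doc : rsp.getResults()) {
        // collect doc.getFieldValue("id") and send the updates in batches
    }
    String next = rsp.getNextCursorMark();
    if (cursorMark.equals(next)) break;      // unchanged cursor means no more pages
    cursorMark = next;
}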

Regards,




-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Retrieving-and-updating-large-set-of-documents-on-Solr-4-7-2-tp4153457.html
Sent from the Solr - User mailing list archive at Nabble.com.


Solr 4.7.2 Core Creation Issue on SC - ZK

2014-05-19 Thread deniz
Hello,

I am using SolrCloud version 4.7.2. There is already one collection/core
running on the cloud, and I am trying to add a new core according to
https://wiki.apache.org/solr/CoreAdmin#CREATE

When I add the core, I can see that it is added to collections in the
Cloud/file menu, but the configs part still shows only the existing core, so
the new one that I add shares the same config. The result looks like this:

  /collections
      newcore
      oldcore
  /configs
      oldcore

Are there any other settings I need to change to see my newcore's configs on
the cloud? This way newcore only sees oldcore's settings, which is not what I
want...
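For later readers: the collection can be pointed at its own uploaded config
explicitly, along the lines of

./zkcli.sh -cmd linkconfig -collection newcore -confname newcore -z
zk1:2181,zk2:2182,zk3:2183

(same ZK hosts as in the upconfig call above); without an explicit link, a
new collection may fall back to an existing config.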



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-4-7-2-Core-Creation-Issue-on-SC-ZK-tp4136833.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Searching with special chars

2014-03-03 Thread deniz
As there was no quick workaround for this issue, we simply changed the HTTP
method from GET to POST, to avoid further problems which could be triggered by
user input too. Though this violates RESTful standards... at least we have
something running properly
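For anyone who has to keep GET: escaping on the middleware side, before the
query ever becomes part of a URL, also works, since SolrJ encodes the
parameter itself; a minimal sketch:

import org.apache.solr.client.solrj.util.ClientUtils;

String raw = "Frankenthal_(Pfalz)";
String query = "city:" + ClientUtils.escapeQueryChars(raw);
// -> city:Frankenthal_\(Pfalz\)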



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Searching-with-special-chars-tp4120047p4121043.html
Sent from the Solr - User mailing list archive at Nabble.com.


Searching with special chars

2014-02-26 Thread deniz
Hello,

We are facing a somewhat weird problem. Here is the scenario:

We have a frontend and a middleware which deals with user-input search queries
before posting to Solr.

When a user enters city:Frankenthal_(Pfalz) and then searches, there is no
result, although there are fields on some documents matching
city:Frankenthal_(Pfalz). We are aware that we can escape those chars, but
the middleware which accepts the queries is running on a Glassfish server,
which refuses URLs with backslashes in them, hence using backslashes is not
okay for posting the query.

To make the system clear to everyone, it looks like:

(PHP) -> Encoded JSON -> (Glassfish App - Middleware) -> Javabin -> Solr

Any other ideas on how to deal with queries with special chars like this one?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Searching-with-special-chars-tp4120047.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: how to avoid recover? how to ensure a recover success?

2013-10-28 Thread deniz
I had a similar problem before, but the patch which was included with version
4.1 fixed it... I couldn't reproduce the problem with the patch...

Is anyone able to reproduce this exception?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/how-to-avoid-recover-how-to-ensure-a-recover-success-tp4096777p4098166.html
Sent from the Solr - User mailing list archive at Nabble.com.


Field with default value and stored=false, will be reset back to the default value in case of updating other fields

2013-10-09 Thread deniz
hi all,

I have encountered a problem and posted it on Stack Overflow here:
http://stackoverflow.com/questions/19285251/solr-field-with-default-value-resets-itself-if-it-is-stored-false

As you can see from the response, does it make sense to open a bug ticket for
this? Although I can work around it by setting everything back to stored=true,
it does not make sense to keep every field stored while I don't need to return
them in the search results... or can anyone explain in more detail why this is
expected and normal?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Field-with-default-value-and-stored-false-will-be-reset-back-to-the-default-value-in-case-of-updatins-tp4094508.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Field with default value and stored=false, will be reset back to the default value in case of updating other fields

2013-10-09 Thread deniz
Billnbell wrote
 You have to update the whole record including all fields...

so what is the point of having atomic updates if I need to update everything?
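For context, the atomic update itself is simple; the catch the thread is
circling around is that Solr rebuilds the untouched fields from their *stored*
values, which is why stored=false fields snap back to defaults. A minimal
sketch (field values are made up):

import java.util.Collections;
import org.apache.solr.common.SolrInputDocument;

SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "123");
// "set" rewrites only this field; everything else is regenerated from stored fields
doc.addField("username", Collections.singletonMap("set", "newName"));
solrServer.add(doc);   // any SolrServer/SolrClient instance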



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Field-with-default-value-and-stored-false-will-be-reset-back-to-the-default-value-in-case-of-updatins-tp4094508p4094523.html
Sent from the Solr - User mailing list archive at Nabble.com.


Full Import and Most up to date data

2013-08-01 Thread deniz
Hello,

I have some questions about full importing. 

So let's say that I have somewhat large data to index and it takes around 2
hours to finish the full import.

When I start a full import at 1pm, what happens if some data in the db is
updated at 1:15 or 2pm while the full import is still going on? Will it be
lost on the Solr side, or will it be added to the Solr index? Or does it all
depend on whether that updated data was indexed before the update or not?

In case it is never indexed, how do we get that updated data without a delta
import (if possible)?

regards



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Full-Import-and-Most-up-to-date-data-tp4081847.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Unexpected character '' (code 60) expected '='

2013-08-01 Thread deniz
Vineet Mishra wrote
 I am using Solr 3.5 with the posting XML file size of just 1Mb.
 
 
 On Wed, Jul 31, 2013 at 8:19 PM, Shawn Heisey <solr@...> wrote:
 
 On 7/31/2013 7:16 AM, Vineet Mishra wrote:
  I checked the File. . .nothing is there. I mean the formatting is
 correct,
  its a valid XML file.

 What version of Solr, and how large is your XML file?

 If Solr is older than version 4.1, then the POST buffer limit is decided
 by your container config, which based on your stacktrace, is tomcat.  If
 you have 4.1 or later, then the POST buffer limit is decided by Solr,
 and defaults to 2048KiB.

 Could that be the problem?

 Thanks,
 Shawn




you might need to escape some chars like < to &lt; and so on



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Unexpected-character-code-60-expected-tp4081603p4081854.html
Sent from the Solr - User mailing list archive at Nabble.com.


DIH and tinyint(1) Field

2013-07-22 Thread deniz
Hello, 

I have exactly the same problem as here 

http://lucene.472066.n3.nabble.com/how-to-avoid-DataImportHandler-from-interpreting-quot-tinyint-1-unsigned-quot-value-as-quot-Boolean--td4035241.html#a4036967

however, the solution there ruins my date type fields...

are there any other ways to deal with this problem? 



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/DIH-and-tinyint-1-Field-tp4079392.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: DIH and tinyint(1) Field

2013-07-22 Thread deniz
Shalin Shekhar Mangar wrote
 Your database's JDBC driver is interpreting the tinyint(1) as a boolean.
 
 Solr 4.4 fixes the problem affected date fields with convertType=true. It
 should be released by the end of this week.
 
 
  On Mon, Jul 22, 2013 at 12:18 PM, deniz <denizdurmus87@...> wrote:
 
 Hello,

 I have exactly the same problem as here


 http://lucene.472066.n3.nabble.com/how-to-avoid-DataImportHandler-from-interpreting-quot-tinyint-1-unsigned-quot-value-as-quot-Boolean--td4035241.html#a4036967

 however for the solution there, it is ruining my date type fields...

 are there any other ways to deal with this problem?



 -
 Zeki ama calismiyor... Calissa yapar...
 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/DIH-and-tinyint-1-Field-tp4079392.html
 Sent from the Solr - User mailing list archive at Nabble.com.

 
 
 
 -- 
 Regards,
 Shalin Shekhar Mangar.


thank you Shalin; for a quick solution I found that adding
&tinyInt1isBit=false to the connection url also works fine
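In DIH terms that is just a tweak to the dataSource url (host/db/credentials
are made up):

<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://dbhost:3306/mydb?tinyInt1isBit=false"
            user="user" password="pass"/>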



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/DIH-and-tinyint-1-Field-tp4079392p4079398.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Field exist in schema.xml but returns

2013-04-09 Thread deniz
Raymond Wiker wrote
 You have misspelt the tag name in the field definition... you have fiald
 instead of field.

thank you Raymond, it was really hard to find in a massive schema file



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Field-exist-in-schema-xml-but-returns-tp4054634p4054903.html
Sent from the Solr - User mailing list archive at Nabble.com.


Field exist in schema.xml but returns

2013-04-08 Thread deniz
hi all, I am using SolrCloud and running some simple test queries... though I
am getting an undefined field error for a field that I have in my schema.xml
so the query is

myField:*

and response is:

<response>
  <lst name="responseHeader">
    <int name="status">400</int>
    <int name="QTime">3</int>
    <lst name="params">
      <str name="wt">xml</str>
      <str name="q">myField:*</str>
    </lst>
  </lst>
  <lst name="error">
    <str name="msg">undefined field myField</str>
    <int name="code">400</int>
  </lst>
</response>




and this is how my schema.xml looks like:
..
 <field name="field1" type="tint" indexed="true" stored="true"/>
 <fiald name="myField" type="long" indexed="true" stored="true"/>
 <field name="field3" type="tint" indexed="true" stored="true"/>
..

Any ideas what the reason could be? 



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Field-exist-in-schema-xml-but-returns-tp4054634.html
Sent from the Solr - User mailing list archive at Nabble.com.


Analysing Solr Log Files

2013-01-08 Thread deniz
Hi All,

I want to analyze the Solr log file... the thing I want to do is put all the
queries coming to the server into a log file, on a daily or hourly basis, and
then run a tool to produce analysis like the most-used fields or queries, the
queries which have hits, and so on... Are there any tools that can do this
without modifying the Solr source code? Or do I need to find a third-party
tool or write my own code to process the output in the logs?




-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Analysing-Solr-Log-Files-tp4031746.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Analysing Solr Log Files

2013-01-08 Thread deniz
thank you Otis 

I have used Sematext's trial version, but it requires sending the log files to
another URL (correct me if I am wrong :) ), and I need something which could
run locally, something that would be triggered by a cronjob or could somehow be
integrated with the admin interface



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Analysing-Solr-Log-Files-tp4031746p4031748.html
Sent from the Solr - User mailing list archive at Nabble.com.


How does fq skip score value?

2012-12-09 Thread deniz
Hello,

 I would like to know how fq parameters avoid dealing with scoring and so on...
I have been digging through the code to see where Solr separates and executes
fq parameters, but couldn't find it yet...

does anyone know how fq works such that it skips the score information?
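
(For illustration: in a request like q=name:solr&fq=inStock:true, the fq
clause only restricts the set of matching documents, typically served from the
filterCache, while the relevance score is computed from the main q alone; that
is why fq never shows up in scoring.)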



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-does-fq-skip-score-value-tp4025608.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: ids parameter decides the result set and order, no matter what kind of query you enter

2012-12-05 Thread deniz
Replying to my own question: ids is used in the ResponseBuilder's internal
mapping structure, which is used for sorting and reordering the document list
before it is shown to the end user... simply put, it stores the unique-field
values of the documents that are to be shown to the user, and each of these
ids maps to an actual document gathered from the other shards, including its
position on its shard and its future position in the merged result set




-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/ids-parameter-decides-the-result-set-and-order-no-matter-what-kind-of-query-you-enter-tp4024390p4024412.html
Sent from the Solr - User mailing list archive at Nabble.com.


Solr Query Parameter : ids - What is this used for?

2012-12-03 Thread deniz
Hello, as the title makes clear, I want to know what Solr uses this parameter
for... I see it in a sharded environment on the cloud, so I guess it is related
to the cloud, but there is still no explanation about it in any of the wiki
pages that I have checked... can someone explain the usage and aim of this
parameter?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Query-Parameter-ids-What-is-this-used-for-tp4024152.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr Query Parameter : ids - What is this used for?

2012-12-03 Thread deniz
Yonik Seeley-4 wrote
 It's an internal implementation detail of distributed search - the
 second phase selects specific ids on each shard via the ids
 parameter.
 
 -Yonik
 http://lucidworks.com

so I suppose it is the unique field? or does it depend on which field we are
using for querying on the shards?
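
(For illustration, a phase-two shard request looks roughly like the
distributed-search log entries quoted further down in this archive, e.g.
...q=*:*&ids=SP2514N,GB18030TEST,apple,...&distrib=false&isShard=true...,
where the ids values are the uniqueKey values of the documents selected in
phase one.)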



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Query-Parameter-ids-What-is-this-used-for-tp4024152p4024159.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud(5x) - Errors while recovering

2012-12-02 Thread deniz
Mark Miller-3 wrote
 FYI, I've fixed this 5x issue a few days ago.
 
 - Mark

Yep, after the patch, it is not occuring anymore, thank you 



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-5x-Errors-while-recovering-tp4022542p4023858.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud - Sorting Problem

2012-12-02 Thread deniz
Chris Hostetter-3 wrote
 w/o more information about how/where you add this information, it's going 
 to be really hard to give you suggestions on how to fix your problem.

The modifications I made are nearly the same as for the score field. Basically
I added a PositionAugmenter class, modified the ReturnFields class, and made
some changes to the classes which extend the DocumentIterator class, to show
positions for each document. It is pretty simple actually: when you make a
query you see some results, sorted by whichever field you choose, and
depending on how you view the result page, there is position information for
each document, added by my modifications.


And thank you for your explanation; as I see it, this pretty much works in the
same way as traditional sharding... but the point which leaves me totally
confused is that even if there is a single solr instance in the cloud, the
order of documents is different and the position information is not correct.

So when you make a search on a standalone single solr, you see some
documents sorted in some order. But when you make the same search, with the
same dataset and index, in a cloud which has a single solr inside, it returns
the documents in a different order. So basically, even without asking for
position information, the order is different between a standalone instance
and an instance on the cloud.

Besides this, I have made some simple tests to see what was going on. On a
standalone solr, when I make a query and also add position to fl, my
modifications are called only once, and then I see the results. However, in
the cloud, with a single instance, when I run the same query, the same part is
called more than once, usually 3 times (I don't know why).

And when there are more instances on the cloud, I can see the same logs in
both instances, though the number of times I see them differs for each
request, which is normal for a cloud with multiple solrs running on it...

After all this, I guess I need to check how the request is distributed on the
cloud... any ideas where I should start checking?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Sorting-Problem-tp4023382p4023861.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud - Sorting Problem

2012-12-02 Thread deniz
deniz wrote
 After all this, I guess I need to check how the request is distributed on the
 cloud... any ideas where I should start checking?

Replying to my own question (hopefully correctly): I have started digging into
org.apache.solr.handler.component.SearchHandler.handleRequestBody, which
loops (I couldn't find out exactly why or how, but it is always 3 times)
where it calls my custom method..



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Sorting-Problem-tp4023382p4023871.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud - Sorting Problem

2012-12-02 Thread deniz
I think I have figured this out... at least somewhat..

After putting logs here and there in the code, especially in the SolrCore,
HttpShardHandler, and SearchHandler classes, it seems like sorting is done
after all of the shards finish responding: just before we see the results, the
result set is sorted... I am not totally sure whether this is correct; it is
what I see from the logs, in the request headers..

so for a shard or distributed search the header looks like this:

status=0,QTime=4,params={df=text&fl=*,position&shard.url=blablabla

and just before i see the results on my browser the header becomes this:

status=0,QTime=178,params={fl=*,position&sort=myfield desc

and basically, because the position field was filled before the actual sorting
of the page, the positions are incorrect...

is this right? I mean, is sorting really done after everything finishes, just
as we are about to get the results?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Sorting-Problem-tp4023382p4023889.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr - Jetty Form Too Large Exception

2012-11-29 Thread deniz
Marcin Rzewucki wrote
 
 I think you should change/set value for multipartUploadLimitInKB attribute
 of requestParsers in solrconfig.xml


the value for multipartUploadLimitInKB is shown as 2048000 in the config,
while in the error logs I see 20, related to Jetty... I have changed some
parts of the source code and will test soon... I don't know yet whether it
works.
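
For reference, the setting Marcin mentioned lives under requestDispatcher in
solrconfig.xml; a minimal sketch (the limit value below is illustrative):

<requestDispatcher>
  <requestParsers enableRemoteStreaming="true"
                  multipartUploadLimitInKB="2048000" />
</requestDispatcher>

Note that Jetty enforces its own limit separately, via the
org.eclipse.jetty.server.Request.maxFormContentSize attribute in jetty.xml (it
appears in the jetty config quoted elsewhere in this archive), so both may
need raising.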



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Jetty-Form-Too-Large-Exception-tp4023185p4023367.html
Sent from the Solr - User mailing list archive at Nabble.com.


SolrCloud - Sorting Problem

2012-11-29 Thread deniz
Hello, I am having a weird problem with SolrCloud and sorting. I will open a
bug ticket about this too, but I am wondering if anyone has had similar
problems.

Background: Basically, I have added a new feature to Solr after getting the
source code. Similar to the way we get score in the result set, I am now able
to get position (or ranking) information for each document in the list, i.e.
if there are 5 documents in the result set, each of them has its position
information if you add fl=*,position to the query.

Problem: Briefly, when a solr instance is standalone, there is no problem
with sorting and position information for each document, but when the same
solr is on a cloud (as a master), the result set is somewhat shuffled and the
position information is incorrect.

So it looks like this:

Both the standalone instance and the one on the cloud find the same number of
documents in the index (say 15000), which is filled from the same data source.
So up to this point everything seems normal.

But here are the results

Standalone Solr:

<doc>
  <id>a</id>
  <position>1</position>
</doc>
<doc>
  <id>b</id>
  <position>2</position>
</doc>
<doc>
  <id>c</id>
  <position>3</position>
</doc>
<doc>
  <id>d</id>
  <position>4</position>
</doc>
<doc>
  <id>e</id>
  <position>5</position>
</doc>
<doc>
  <id>f</id>
  <position>6</position>
</doc>

Same Solr on Cloud (as master)

<doc>
  <id>z</id>
  <position>4</position>
</doc>
<doc>
  <id>x</id>
  <position>6</position>
</doc>
<doc>
  <id>y</id>
  <position>1</position>
</doc>
<doc>
  <id>v</id>
  <position>3</position>
</doc>
<doc>
  <id>r</id>
  <position>2</position>
</doc>
<doc>
  <id>o</id>
  <position>5</position>
</doc>


As is clear above, the *same configs with the same query and sorting
parameter* are returning *different documents and totally shuffled
position* information.


Anyone has any ideas on this?





-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Sorting-Problem-tp4023382.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud - Sorting Problem

2012-11-29 Thread deniz
After playing with this more, I think I have some clues...

On the standalone solr, when I give start=11 and rows=20, I can see documents
with positions ranging from 12 to 31, which is correct... on the cloud, when I
give the same parameters, I again get the same documents, but this time the
positions range between 1 and 20...

so my question: does the cloud use some different class for responding to the
search request? if so, are there any ways to find those classes out other
than digging through the code?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Sorting-Problem-tp4023382p4023399.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: SolrCloud(5x) - Errors while recovering

2012-11-27 Thread deniz
Markus Jelsma-2 wrote
 Seems you got this issue:
 https://issues.apache.org/jira/browse/SOLR-4032
  

thank you for the heads up 


and a surprising thing about my error: when I use a smaller set of documents,
I do not get any errors at all... I don't know why, but I have just tried to
index only 12K docs, with a few fields, with the same configuration, and after
a solr node is restarted there are no errors at all and I get the index
sync'ed with the cloud for that node...

Is nobody using SolrCloud in their prod envs or with very large datasets? or
are they using one that they have customized for their own needs?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-5x-Errors-while-recovering-tp4022542p4022564.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: SolrCloud(5x) - Errors while recovering

2012-11-27 Thread deniz
another update:

having 300K docs causes the same error...

I think there is something going on with the size of the stored index... after
some point, replication fails...

any ideas how to get around this?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-5x-Errors-while-recovering-tp4022542p4022570.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: SolrCloud(5x) - Errors while recovering

2012-11-27 Thread deniz
I have that issue only with larger indexes... 12-14K docs work totally fine
even after a node dies and then starts again, but if the index is bigger,
somehow I keep getting the lines above



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-5x-Errors-while-recovering-tp4022542p4022610.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud - Fails to read db config file

2012-11-26 Thread deniz
Marcin Rzewucki wrote
 Hi,
 
 It seems like the file is missing from Zookeeper. Can you confirm ?
 
 Regards.

nope, I can see my db-config file in the solr admin interface as well as with
the zk client on the command line; I don't think it is missing from zookeeper



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Fails-to-read-db-config-file-tp4022299p4022304.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud - Fails to read db config file

2012-11-26 Thread deniz
okay, after changing it to db-config from the full path above, I am able to
see the dataimport page, but the data import is still failing... I see this in
the logs
logs 




SEVERE: Full Import
failed:org.apache.solr.handler.dataimport.DataImportHandlerException: Unable
to PropertyWriter implementation:ZKPropertiesWriter
at
org.apache.solr.handler.dataimport.DataImporter.createPropertyWriter(DataImporter.java:336)
at
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:418)
at
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:487)
at
org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:468)
Caused by: org.apache.solr.common.cloud.ZooKeeperException:
ZkSolrResourceLoader does not support getConfigDir() - likely, what you are
trying to do is not supported in ZooKeeper mode
at
org.apache.solr.cloud.ZkSolrResourceLoader.getConfigDir(ZkSolrResourceLoader.java:100)
at
org.apache.solr.handler.dataimport.SimplePropertiesWriter.init(SimplePropertiesWriter.java:91)
at
org.apache.solr.handler.dataimport.ZKPropertiesWriter.init(ZKPropertiesWriter.java:45)
at
org.apache.solr.handler.dataimport.DataImporter.createPropertyWriter(DataImporter.java:334)
... 3 more

Exception in thread "Thread-306" java.lang.NullPointerException
at
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:427)
at
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:487)
at
org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:468)



and I can't import anything at all... any ideas?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Fails-to-read-db-config-file-tp4022299p4022307.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud - Fails to read db config file

2012-11-26 Thread deniz
Mark Miller-3 wrote
 It looks like your original path had a double / in it that was causing
 problems.

my original path in the config file doesn't have any double slashes, but when
it is on solrcloud, an additional slash gets added to the path... I am not a
zookeeper expert or anything, but could it be because of zookeeper's absolute
path rule? Basically, when you are listing something on zk, you go with
/configs/corename/ and then whatever node comes after... and as the path in my
config file was starting with a slash too, it is simply appended to that
path... does this make sense? if so, is this a bug too?



Mark Miller-3 wrote
 It looks like the below is a bug. Could you please file a JIRA issue?

added :)



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Fails-to-read-db-config-file-tp4022299p4022368.html
Sent from the Solr - User mailing list archive at Nabble.com.


SolrCloud(5x) - Errors while recovering

2012-11-26 Thread deniz
Here is briefly what is happening:

I have a simple SolrCloud environment for test purposes, running with a
zookeeper ensemble, not the ones embedded in Solr.

I have 3 instances in the cloud, all of them using RAMDirectory (which the
new Solr release enables for use with the cloud).

After running the zookeepers and connecting my solrs to them, the cloud comes
up without any errors or problems. Then I started indexing (which is much
slower than on a single instance; I will open a topic about that too), and
everything is okay once again: all of the nodes get the sync'ed data from
the leader node.

After that I killed one Solr instance. Then I restarted it, and in the logs it
keeps showing me these errors:

SEVERE: Error while trying to recover:org.apache.solr.common.SolrException:
Server at http://myhost:8995/solr/mycore returned non ok status:500,
message:Server Error
at
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:372)
at
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
at
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
at
org.apache.solr.cloud.RecoveryStrategy.commitOnLeader(RecoveryStrategy.java:182)
at
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:134)
at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:407)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)
.
.
.
.
.

Nov 27, 2012 11:49:04 AM
org.apache.solr.handler.SnapPuller$DirectoryFileFetcher fetchPackets
WARNING: Error in fetching packets 
java.io.EOFException
at
org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:151)
at
org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:144)
at
org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchPackets(SnapPuller.java:1143)
at
org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchFile(SnapPuller.java:1107)
at
org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:716)
at 
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:387)
at
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:273)
at
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:152)
at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:407)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)
.
.
.
.
.
SEVERE: SnapPull failed :org.apache.solr.common.SolrException: Unable to
download _41y.fdt completely. Downloaded 3145728!=3243906
at
org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.cleanup(SnapPuller.java:1237)
at
org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchFile(SnapPuller.java:1118)
at
org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:716)
at 
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:387)
at
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:273)
at
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:152)
at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:407)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)
SEVERE: Error while trying to recover:org.apache.solr.common.SolrException:
Replication for recovery failed.
at
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:155)
at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:407)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)



Can anyone explain why I am getting this error?












-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-5x-Errors-while-recovering-tp4022542.html
Sent from the Solr - User mailing list archive at Nabble.com.


SolrCloud Performance - Indexing

2012-11-26 Thread deniz
As I am somewhat confused, I want to check whether anyone else has the same
confusion about solrcloud..

I have set up an environment with 3 solr instances and 2 zookeepers, and
tried to index some documents from a mysql db. The total number of docs is
around 3.5M. Before indexing I was expecting it to take somewhat longer on
the cloud, as it does replication between nodes, but I was rather
disappointed to see that indexing took 4 to 5 times longer than indexing on a
single solr instance. On a single solr instance I am able to index those docs
in around 17 mins, while with the cloud it takes around 60 minutes. And as a
possible production environment will have more instances and machines
available for the cloud, I can't imagine the indexing time... In addition to
the initial indexing time, we will be updating our indexes frequently, which
makes me sceptical about solrcloud.

So, in a possible production environment with solrcloud, in case there is a
serious failure on some nodes, the sync operation on the cloud will take a
long time... whereas reindexing everything on a single instance would take
less than 17 mins, which is a reasonable amount of time for a crash.. So in
this case, does it make sense to use solrcloud although the indexing time
increases so much over a single instance? Or would a traditional master -
slave structure be better for this case?

I am aware the cloud does load balancing and some other things largely
concerned with searching rather than indexing, but for a frequently updated
system, is it still useful to set up a cloud environment?

And are there any workarounds for indexing speed on the cloud, other than the
known ones for solr?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Performance-Indexing-tp4022549.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Weird Behaviour on Solr 5x (SolrCloud)

2012-11-21 Thread deniz
Mark Miller-3 wrote
 I'm not sure - I guess I'll have to look into it - could you file a
 JIRA issue with these details?

sure...
but before that, could it be because of using a RAM dir? Basically, when you
restart solr the RAM contents are gone, and it tries to check the old folder
that it had used... and as it can't find anything in the RAM it shows an empty
index... could that be the reason? Though this still does not explain why,
after the restart, it was not filled with the data from the cloud...





-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Weird-Behaviour-on-Solr-5x-SolrCloud-tp4021219p4021776.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud(5x) - Detects all of the Solr instances on a machine

2012-11-21 Thread deniz
after putting the port information into solr.xml too, it seems to work
properly... I don't know why this only happens on remote machines and not
locally, but could this be a minor bug in solr? Basically, if we are giving
the port information in the start command, then we shouldn't have to deal
with the port information in the config files, imho
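
For reference, a sketch of the solr.xml piece in question (Solr 4.0-era
layout; values are illustrative):

<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="collection1"
         host="${host:}" hostPort="${jetty.port:8983}" hostContext="solr">
    <core name="collection1" instanceDir="collection1" />
  </cores>
</solr>

With hostPort wired to ${jetty.port:...}, the -Djetty.port value passed on the
command line is also what gets registered in ZooKeeper.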



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-5x-Detects-all-of-the-Solr-insrances-on-a-machine-tp4021254p4021777.html
Sent from the Solr - User mailing list archive at Nabble.com.


SolrCloud(5x) - Detects all of the Solr instances on a machine

2012-11-20 Thread deniz
Hello,

I am running a Solr instance (4.0), without invoking anything related to
zookeeper or solrcloud, as a standalone server on a machine.

Then, for testing the Solr 5x trunk, I set up 2 Solr (5x) instances, running
with -DzkHost=someaddress:port

And when I check the zookeeper logs, I can see that the standalone Solr (4.0)
is also detected, and although there is nothing indexed on the last two, which
are in the cloud, a search query returns results from the standalone Solr...

I am using the basic configs from the SolrCloud wiki, except pointing to an
external Zookeeper server rather than the one embedded in Solr..

Can anyone explain why I am able to see my standalone solr in the cloud and
why my search queries are returning results?

I am totally confused.



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-5x-Detects-all-of-the-Solr-insrances-on-a-machine-tp4021254.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud(5x) - Detects all of the Solr instances on a machine

2012-11-20 Thread deniz
well here are more details about my starting commands

This is the standalone SolrServer:
(port 8983)
java 
-server 
-XX:+UnlockExperimentalVMOptions 
-XX:+UseG1GC 
-XX:+UseCompressedStrings 
-Dcom.sun.management.jmxremote 
-d64 
-Xmx4096m 
-Dcom.sun.management.jmxremote.port= 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false 
-Djava.rmi.server.hostname=remotehost -jar start.jar


These below are the ones that are supposed to be in the cloud:

java -DzkHost=zkhost:2182 -jar start.jar (port 8995)
java -DzkHost=zkhost:2181 -jar start.jar (port 8996)

The three of them are on the same remote machine..


and here is the one that i run from my local

java -Dbootstrap_conf=true -DzkHost=zkhost:2181 -jar start.jar (port 8997)



and when I check the cloud page, I see

localhost:8997 and remotehost:8983 in the cloud... the actual ones
(remotehost:8995 and remotehost:8996) are totally invisible to the cloud






-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-5x-Detects-all-of-the-Solr-insrances-on-a-machine-tp4021254p4021471.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud(5x) - Detects all of the Solr instances on a machine

2012-11-20 Thread deniz
so another test result:

I have set up a similar environment on another virtual machine which is
running on the same physical machine as my previous example...

So basically my standalone solr is running on virtual1:8983, and I set up 3
solr instances on virtual2:8995,8996,8997... virtual1 and virtual2 are on the
same physical machine, let's call it machine1.

So basically, whenever I start a zookeeper server and connect any of my
instances on virtual2, only *virtual2:8983* is displayed in the cloud (which
must actually be *virtual1:8983*; there is nothing running on 8983 on
virtual2!)

Is this a bug? or something related to virtual machines? or what is it?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-5x-Detects-all-of-the-Solr-insrances-on-a-machine-tp4021254p4021477.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud(5x) - Detects all of the Solr instances on a machine

2012-11-20 Thread deniz
Mark Miller-3 wrote
 How are you specifying the port? I don't see jetty.port in there. That
 is critical - it sets the hostPort in solr.xml.
 - Mark

setting it with -Djetty.port=blabla or directly in etc/jetty.xml



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-5x-Detects-all-of-the-Solr-insrances-on-a-machine-tp4021254p4021480.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud(5x) - Detects all of the Solr instances on a machine

2012-11-20 Thread deniz
Mark Miller-3 wrote
 It must be passed with -D as a system prop with the default setup.
 That feeds hostPort in solr.xml. If you use etc/jetty.xml, be sure to
 still pass it on the cmd line or also put the port in solr.xml for
 hostPort.
 - Mark

basically I should add the port info to the solr.xml too? 




-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-5x-Detects-all-of-the-Solr-insrances-on-a-machine-tp4021254p4021485.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Weird Behaviour on Solr 5x (SolrCloud)

2012-11-20 Thread deniz
is this because of zookeeper's load balancer or something like that? because
the results are coming back totally randomly...



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Weird-Behaviour-on-Solr-5x-SolrCloud-tp4021219p4021500.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Weird Behaviour on Solr 5x (SolrCloud)

2012-11-20 Thread deniz
well... I found a way to avoid this... I don't know if it is the correct way
or whether I am simply bypassing the problem instead of fixing it..

when I delete the data/ folder's contents before restarting, it can get the
index information from the cloud without any problem...

so is this the way solrcloud works? or am I missing something important
here?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Weird-Behaviour-on-Solr-5x-SolrCloud-tp4021219p4021507.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud Error after leader restarts

2012-11-19 Thread deniz
I actually know the facts about ramdirectory.. just running some perf tests on
our dev env right now..

so in case I use a ram dir with the 5x cloud, will it still not do the
recovery? I mean, will it not get the data from the leader and fill its
ramdir again?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Error-after-leader-restarts-tp4020985p4021203.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud Error after leader restarts

2012-11-19 Thread deniz
Mark Miller-3 wrote
 On Nov 19, 2012, at 9:11 PM, deniz <denizdurmus87@> wrote:
 
 so in case i use ramdir with 5x cloud, it will still not do the recovery?
 i
 mean it will not get the data from the leader and fill its ramdir again?
 
 Yes, in 5x RAM directory should be able to recover.
 
 - Mark

thank you so much for your patience with me :) 



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Error-after-leader-restarts-tp4020985p4021209.html
Sent from the Solr - User mailing list archive at Nabble.com.


Weird Behaviour on Solr 5x (SolrCloud)

2012-11-19 Thread deniz
Hi all,

after Mark Miller made it clear to me that 5x supports the cloud with a
ramdir, I started playing with it and it seemed to work smoothly, except for
one weird behaviour.. here is the story:

Basically, I pulled the code and built solr 5x, and replaced the war file in
the webapps dir of my current installation... then I started my zookeeper
servers..

after that I started the solr instances with the params below:

java -Djetty.port=7574 -DzkHost=zkserver2:2182 -jar start.jar (running on a
remote machine)
java -Dbootstrap_conf=true -DzkHost=zkserver1:2181 -jar start.jar (running
on local)

after both of them were up, I indexed some docs, and both of the solr
instances were updated successfully. At this point I killed one of the solrs
(running on remote, not the leader) and then restarted it. There were no
errors in the log and everything seemed normal...

however, when I checked the web interface for the one I had restarted, it
showed 0 docs.. after that I ran q=*:* a few times...
and that's the point which surprises me... it randomly returned 0 results and
then it returned correct numbers.. each time I make the same query, I randomly
get an empty result set... I have no idea why this is happening


here are the logs

for the one running on remote (which was restarted):

Nov 20, 2012 11:32:11 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/select
params={distrib=false&wt=javabin&rows=10&version=2&df=text&fl=id,score&shard.url=10.60.0.54:8983/solr/collection1/|remote:7574/solr/collection1/&NOW=1353382331589&start=0&q=*:*&isShard=true&fsv=true}
hits=0 status=0 QTime=0 
Nov 20, 2012 11:32:11 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/select params={wt=xml&q=*:*} hits=0
status=0 QTime=7 
Nov 20, 2012 11:32:22 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/select
params={distrib=false&wt=javabin&rows=10&version=2&df=text&fl=id,score&shard.url=10.60.0.54:8983/solr/collection1/|remote:7574/solr/collection1/&NOW=1353382342238&start=0&q=*:*&isShard=true&fsv=true}
hits=0 status=0 QTime=0 
Nov 20, 2012 11:32:22 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/select params={wt=xml&q=*:*} hits=0
status=0 QTime=7 
Nov 20, 2012 11:32:27 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/select
params={distrib=false&wt=javabin&rows=10&version=2&df=text&fl=id,score&shard.url=10.60.0.54:8983/solr/collection1/|remote:7574/solr/collection1/&NOW=1353382347438&start=0&q=*:*&isShard=true&fsv=true}
hits=0 status=0 QTime=0 
Nov 20, 2012 11:32:27 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/select params={wt=xml&q=*:*} hits=0
status=0 QTime=14 
Nov 20, 2012 11:32:28 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/select
params={distrib=false&wt=javabin&rows=10&version=2&df=text&fl=id,score&shard.url=10.60.0.54:8983/solr/collection1/|remote:7574/solr/collection1/&NOW=1353382348255&start=0&q=*:*&isShard=true&fsv=true}
hits=0 status=0 QTime=1 
Nov 20, 2012 11:32:28 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/select params={wt=xml&q=*:*} hits=0
status=0 QTime=7 
Nov 20, 2012 11:32:28 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/select params={wt=xml&q=*:*} hits=32
status=0 QTime=14 


and for the same query, here is the log, from my local (leader, not
restarted)

Nov 20, 2012 11:31:46 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/select
params={distrib=false&wt=javabin&rows=10&version=2&df=text&fl=id,score&shard.url=localhost:8983/solr/collection1/|remoteserver:7574/solr/collection1/&NOW=1353382306472&start=0&q=*:*&isShard=true&fsv=true}
hits=32 status=0 QTime=0 
Nov 20, 2012 11:31:46 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/select
params={df=text&shard.url=localhost:8983/solr/collection1/|remoteserver:7574/solr/collection1/&NOW=1353382306472&q=*:*&ids=SP2514N,GB18030TEST,apple,F8V7067-APL-KIT,adata,6H500F0,MA147LL/A,ati,IW-02,asus&distrib=false&isShard=true&wt=javabin&rows=10&version=2}
status=0 QTime=1 
Nov 20, 2012 11:32:00 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/select
params={distrib=false&wt=javabin&rows=10&version=2&df=text&fl=id,score&shard.url=localhost:8983/solr/collection1/|remoteserver:7574/solr/collection1/&NOW=1353382320738&start=0&q=*:*&isShard=true&fsv=true}
hits=32 status=0 QTime=0 
Nov 20, 2012 11:32:00 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/select
params={df=text&shard.url=localhost:8983/solr/collection1/|remoteserver:7574/solr/collection1/&NOW=1353382320738&q=*:*&ids=SP2514N,GB18030TEST,apple,F8V7067-APL-KIT,adata,6H500F0,MA147LL/A,ati,IW-02,asus&distrib=false&isShard=true&wt=javabin&rows=10&version=2}
status=0 QTime=1 
Nov 20, 2012 11:32:28 AM org.apache.solr.core.SolrCore execute
INFO: [collection1] 

SolrCloud - Zookeeper Questions

2012-11-18 Thread deniz
Hi All,

I have been digging the web for answers to some questions in my mind, but so
far I am still not clear on them..

The first (and main) question is about zookeeper itself.. are there any
detailed tutorials on how we can use it with solr, other than the wikis? The
SolrCloud wiki page is useful for sure, but I want to know how to control some
operations like removing a shard/replica temporarily and then adding it back
again, and so on...
I did some trials of that, but it was basically playing with the command line
with zKClient.sh, which I am not sure is the correct way to administer
SolrCloud.

Does anyone have experience with zookeeper?




-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Zookeeper-Questions-tp4020971.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud - Zookeeper Questions

2012-11-18 Thread deniz
Mark Miller-3 wrote
 On Nov 18, 2012, at 10:02 PM, deniz <denizdurmus87@> wrote:
 
 removing a shard/replica temporarily and then adding back again 
 
 I'd unload them with http cmds (without using the options that delete
 things on disk) and then create them again when I wanted them back.
 
 - Mark


so then, the reason zookeeper is a part of SolrCloud is only for storing
config information?

and instead of using the steps here
http://lucidworks.lucidimagination.com/display/solr/Using+ZooKeeper+to+Manage+Configuration+Files
can we directly change the configs, and would reloading via http then make
the new configs take effect?
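
For what it's worth, a common pattern (assuming the stock 4.x cloud-scripts;
host, paths, and names below are illustrative) is to re-upload the edited
config directory and then reload the collection:

cloud-scripts/zkcli.sh -zkhost zkhost:2181 -cmd upconfig -confdir ./conf -confname myconf

followed by
http://host:8983/solr/admin/collections?action=RELOAD&name=collection1
so that the cores pick up the new config.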



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Zookeeper-Questions-tp4020971p4020981.html
Sent from the Solr - User mailing list archive at Nabble.com.


SolrCloud Error after leader restarts

2012-11-18 Thread deniz
Hello,

for test purposes, I am running two zookeepers on ports 2181 and 2182, and I
have two solr instances running on different machines...

For the one which is running on my local machine and acts as leader:
 java -Dbootstrap_conf=true -DzkHost=localhost:2181 -jar start.jar

and for the one which acts as follower, on a remote machine:
java -Djetty.port=7574 -DzkHost=address-of-mylocal:2182 -jar start.jar

Up to this point everything is smooth, and I can see the configs on both
zookeeper hosts when I connect with zkCli.sh.

Just to see what happens and to check the recovery behaviour, I killed the
solr which is running on my local machine and tried to index some files using
the follower, which failed... this is normal, as writes are routed to the
leader...

the point that I don't understand is this:

when I restart the leader with the same command in the terminal, after the
normal logs it starts showing this:


Nov 19, 2012 2:15:18 PM org.apache.solr.common.SolrException log
SEVERE: SnapPull failed :org.apache.solr.common.SolrException: Index fetch
failed : 
at 
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:400)
at
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:297)
at
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:151)
at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:405)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:220)
Caused by: org.apache.lucene.index.IndexNotFoundException: no segments* file
found in org.apache.lucene.store.RAMDirectory@1e75e89
lockFactory=org.apache.lucene.store.NativeFSLockFactory@128e909: files: []
at
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
at
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:630)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:343)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:639)
at 
org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:75)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:62)
at
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:191)
at
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:77)
at 
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:354)
... 4 more

Nov 19, 2012 2:15:18 PM org.apache.solr.common.SolrException log
SEVERE: Error while trying to recover:org.apache.solr.common.SolrException:
Replication for recovery failed.
at
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:154)
at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:405)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:220)


it fails to recover after shutdown... why does this happen? 


 



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Error-after-leader-restarts-tp4020985.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: solr4.0 problem zkHost with multiple hosts throws out of range exception

2012-11-12 Thread deniz
so do we need to add one of the servers from the -DzkHost string to -DzkRun?
should it look like

-DzkRun=host1:port -DzkHost=host:port,host1:port,host2:port in the
start-up command?

and will the wiki page be updated? because the example there still leads to
the error that was mentioned here nearly a month ago...







-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/solr4-0-problem-zkHost-with-multiple-hosts-throws-out-of-range-exception-tp4014440p4019941.html
Sent from the Solr - User mailing list archive at Nabble.com.


Jetty Error while testing Solr

2012-11-07 Thread deniz
Hi all,

I got a weird error while running tests on my solr... here is the log for
that error:


ERROR o.a.solr.servlet.SolrDispatchFilter -
null:org.eclipse.jetty.io.EofException
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:154)
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:101)
at
org.apache.solr.common.util.FastOutputStream.flush(FastOutputStream.java:203)
at
org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:196)
at 
org.apache.solr.common.util.JavaBinCodec.marshal(JavaBinCodec.java:94)
at
org.apache.solr.response.BinaryResponseWriter.write(BinaryResponseWriter.java:49)
at
org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:404)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:289)
at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1337)
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
at org.eclipse.jetty.server.Server.handle(Server.java:351)
at
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)
at
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:47)
at
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:890)
at
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:944)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:634)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:230)
at
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:66)
at
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:254)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:599)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:534)
at java.lang.Thread.run(Unknown Source)




as you can see, there is no cause logged for this error, and I am totally
confused... if it helps, this is the jetty config file on which I am running
solr:


<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN"
"http://www.eclipse.org/jetty/configure.dtd">

<Configure id="Server" class="org.eclipse.jetty.server.Server">

  <Call name="setAttribute">
    <Arg>org.eclipse.jetty.server.Request.maxFormContentSize</Arg>
    <Arg>20</Arg>
  </Call>

  <Set name="ThreadPool">
    <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
      <Set name="minThreads">30</Set>
      <Set name="maxThreads">5</Set>
      <Set name="detailedDump">false</Set>
    </New>
  </Set>

  <Call name="addConnector">
    <Arg>
      <New class="org.eclipse.jetty.server.bio.SocketConnector">
        <Set name="host"><SystemProperty name="jetty.host" /></Set>
        <Set name="port"><SystemProperty name="jetty.port" default="8983"/></Set>
        <Set name="maxIdleTime">5</Set>
        <Set name="lowResourceMaxIdleTime">1500</Set>
        <Set name="statsOn">false</Set>
      </New>
    </Arg>
  </Call>

  <Set name="handler">
    <New id="Handlers" class="org.eclipse.jetty.server.handler.HandlerCollection">
      <Set name="handlers">
        <Array type="org.eclipse.jetty.server.Handler">
          <Item>
            <New id="Contexts" class="org.eclipse.jetty.server.handler.ContextHandlerCollection"/>
          </Item>
          <Item>
            <New id="DefaultHandler"

Re: Solr Replication is not Possible on RAMDirectory?

2012-11-06 Thread deniz
Erik Hatcher-4 wrote
 There's an open issue (with a patch!) that enables this, it seems:
 <https://issues.apache.org/jira/browse/SOLR-3911>
 
   Erik

well, the patch does not seem to do that... I have tried it and am still
getting some error lines about the dir types




-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Replication-is-not-Possible-on-RAMDirectory-tp4017766p4018670.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr Replication is not Possible on RAMDirectory?

2012-11-05 Thread deniz
Erik Hatcher-4 wrote
 There's an open issue (with a patch!) that enables this, it seems:
 <https://issues.apache.org/jira/browse/SOLR-3911>

 i will check it for sure, thank you Erik :) 


Shawn Heisey-4 wrote
 ... transparently mapping the files on disk to a virtual memory space and
 using excess RAM to cache that data and make it fast.  If you have
 enough extra memory (disk cache) to fit the entire index, the OS will
 never have to read any part of the index from disk more than once

so for disk cache, are there any disks with 1 gig or more of cache? If I am
not wrong, there are mostly 16 or 32MB cache disks around (or am I looking at
the wrong thing?) if so, that amount is definitely too small...





-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Replication-is-not-Possible-on-RAMDirectory-tp4017766p4018396.html
Sent from the Solr - User mailing list archive at Nabble.com.


MMapDirectory - Mapping Failed error

2012-11-04 Thread deniz
Hi All,

I was testing my solr on MMapDirectory, and while indexing I got these error
lines in the log:



10:27:41.003 [commitScheduler-4-thread-1] ERROR
org.apache.solr.update.CommitTracker - auto commit
error...:org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1310)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1422)
at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:552)
at org.apache.solr.update.CommitTracker.run(CommitTracker.java:215)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown
Source)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(Unknown Source)
at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:284)
at
org.apache.lucene.store.MMapDirectory$MMapIndexInput.init(MMapDirectory.java:256)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:224)
at
org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.init(Lucene40PostingsReader.java:68)
at
org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer(Lucene40PostingsFormat.java:316)
at
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.init(PerFieldPostingsFormat.java:194)
at
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:233)
at
org.apache.lucene.index.SegmentCoreReaders.init(SegmentCoreReaders.java:107)
at org.apache.lucene.index.SegmentReader.init(SegmentReader.java:57)
at
org.apache.lucene.index.ReadersAndLiveDocs.getReader(ReadersAndLiveDocs.java:120)
at
org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:214)
at
org.apache.lucene.index.IndexWriter.applyAllDeletes(IndexWriter.java:3010)
at
org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:3001)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:363)
at
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:270)
at
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:255)
at
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:249)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1249)
... 11 more
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
... 30 more



And this is how i run the start.jar file:

/java64/bin/java \
-server \
-XX:+UnlockExperimentalVMOptions \
-XX:+UseG1GC \
-XX:+UseCompressedStrings \
-d64 \
-Xmx4096m \
-jar start.jar 

Another fact: the interface shows the indexing as successful... so are these
error lines simply informational, about something not important?

Does anyone know why I get this error? I am running on a 64-bit linux OS with
solr 4.0..




-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/MMapDirectory-Mapping-Failed-error-tp4018182.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr Replication is not Possible on RAMDirectory?

2012-11-04 Thread deniz
Michael Della Bitta-2 wrote
 No, RAMDirectory doesn't work for replication. Use MMapDirectory... it
 ends up storing the index in RAM and more efficiently so, plus it's
 backed by disk.
 
 Just be sure to not set a big heap because MMapDirectory works outside of
 heap.

In my tests, I don't think the index ends up in RAM with mmap... I gave the
heap 4 gigs while using mmap and got a mapping error while indexing... and
while the index should be around 2 gigs, RAM consumption was around 300MB...

Can anyone explain why RAMDirectory can't be used for replication? I can't
see why the master can't be set to use RAMDirectory while the replica uses
MMap or something else. As far as I understand, SolrCloud more or less pushes
from the master to the replica/slave... so why is it not possible to push
from RAM to HDD? If my logic is wrong, can someone please explain all this to
me?
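
For context, switching the implementation is a one-line solrconfig.xml
change; a sketch:

<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.MMapDirectoryFactory}"/>

MMapDirectory does not load the index onto the Java heap; it maps the index
files into virtual memory, so the OS disk cache rather than -Xmx is what holds
the hot parts of the index, which is consistent with the low heap consumption
observed above.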



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Replication-is-not-Possible-on-RAMDirectory-tp4017766p4018198.html
Sent from the Solr - User mailing list archive at Nabble.com.


Solr Replication is not Possible on RAMDirectory?

2012-11-02 Thread deniz
Hi all, I am trying to set up a master/slave system by following this page:
http://wiki.apache.org/solr/SolrReplication
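
(For readers following along, the master/slave wiring from that wiki page
looks roughly like the sketch below; the master URL, poll interval, and conf
file list are illustrative. On the master, in solrconfig.xml:

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

and on the slave:

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/corename</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>)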

I was able to set it up and did some experiments with it, but when I try to
put the index on a RAMDirectory, I get errors while indexing.

While the master and slave are both using a non-RAM directory, everything is
okay... but when I try to use a RAMDirectory on both, I get the error below:

16:40:31.626 [qtp28208563-24] ERROR org.apache.solr.core.SolrCore -
org.apache.lucene.index.IndexNotFoundException: no segments* file found in
org.apache.lucene.store.RAMDirectory@7e693f
lockFactory=org.apache.lucene.store.NativeFSLockFactory@92c787: files: []
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:630)
    at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:343)
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:639)
    at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:75)
    at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:62)
    at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:191)
    at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:77)
    at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:511)
    at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:87)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1016)
    at org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:157)
    at org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)
    at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:455)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1337)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
    at org.eclipse.jetty.server.Server.handle(Server.java:351)
    at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)
    at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:47)
    at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:890)
    at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:944)
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:634)
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:230)
    at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:66)
    at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:254)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:599)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:534)
    at java.lang.Thread.run(Unknown Source)

16:40:31.627 [qtp28208563-24] ERROR o.a.solr.servlet.SolrDispatchFilter -
null:org.apache.lucene.index.IndexNotFoundException: no segments* file found
in org.apache.lucene.store.RAMDirectory@7e693f

Re: Http 500/503 Responses from Solr

2012-10-30 Thread deniz
Well, as for the server details: it is running on a machine with 6 GB of
RAM (JVM heap max is 4 GB), using RAMDirectory for the index.

Here is my config file (some values have been changed as I have been testing):

<config>
  <luceneMatchVersion>LUCENE_40</luceneMatchVersion>
  <lib dir="../../../dist/" regex="apache-solr-cell-\d.*\.jar" />
  <lib dir="../../../contrib/extraction/lib" regex=".*\.jar" />
  <lib dir="../../../dist/" regex="apache-solr-clustering-\d.*\.jar" />
  <lib dir="../../../contrib/clustering/lib/" regex=".*\.jar" />
  <lib dir="../../../dist/" regex="apache-solr-langid-\d.*\.jar" />
  <lib dir="../../../contrib/langid/lib/" regex=".*\.jar" />
  <lib dir="../../../dist/" regex="apache-solr-velocity-\d.*\.jar" />
  <lib dir="../../../contrib/velocity/lib" regex=".*\.jar" />
  <lib dir="/total/crap/dir/ignored" />
  <dataDir>${solr.data.dir:}</dataDir>
  <directoryFactory name="DirectoryFactory"
                    class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
  <indexConfig>
  </indexConfig>
  <jmx />
  <updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
      <maxTime>15000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>
    <updateLog>
      <str name="dir">${solr.data.dir:}</str>
    </updateLog>
  </updateHandler>
  <query>
    <maxBooleanClauses>1024</maxBooleanClauses>
    <filterCache class="solr.FastLRUCache"
                 size="512"
                 initialSize="512"
                 autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache"
                      size="512"
                      initialSize="512"
                      autowarmCount="0"/>
    <documentCache class="solr.LRUCache"
                   size="512"
                   initialSize="512"
                   autowarmCount="0"/>
    <enableLazyFieldLoading>true</enableLazyFieldLoading>
    <queryResultWindowSize>20</queryResultWindowSize>
    <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
      </arr>
    </listener>
    <listener event="firstSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst>
          <str name="q">static firstSearcher warming in solrconfig.xml</str>
        </lst>
      </arr>
    </listener>
    <useColdSearcher>true</useColdSearcher>
    <maxWarmingSearchers>2</maxWarmingSearchers>
  </query>
  <requestDispatcher handleSelect="false">
    <requestParsers enableRemoteStreaming="true"
                    multipartUploadLimitInKB="2048000" />
    <httpCaching never304="true" />
  </requestDispatcher>
  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <int name="rows">10</int>
      <str name="df">text</str>
    </lst>
  </requestHandler>
  <requestHandler name="/query" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="wt">json</str>
      <str name="indent">true</str>
      <str name="df">text</str>
    </lst>
  </requestHandler>
  <requestHandler name="/get" class="solr.RealTimeGetHandler">
    <lst name="defaults">
      <str name="omitHeader">true</str>
      <str name="wt">json</str>
      <str name="indent">true</str>
    </lst>
  </requestHandler>
  <requestHandler name="/browse" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="wt">velocity</str>
      <str name="v.template">browse</str>
      <str name="v.layout">layout</str>
      <str name="title">Solritas</str>
      <str name="defType">edismax</str>
      <str name="qf">
        text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
        title^10.0 description^5.0 keywords^5.0 author^2.0 resourcename^1.0
      </str>
      <str name="df">text</str>
      <str name="mm">100%</str>
      <str name="q.alt">*:*</str>
      <str name="rows">10</str>
      <str name="fl">*,score</str>
      <str name="mlt.qf">
        text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
        title^10.0 description^5.0 keywords^5.0 author^2.0 resourcename^1.0
      </str>
      <str name="mlt.fl">text,features,name,sku,id,manu,cat,title,description,keywords,author,resourcename</str>
      <int name="mlt.count">3</int>
      <str name="facet">on</str>
      <str name="facet.field">cat</str>
      <str name="facet.field">manu_exact</str>
      <str name="facet.field">content_type</str>
      <str name="facet.field">author_s</str>
      <str name="facet.query">ipod</str>
      <str name="facet.query">GB</str>
      <str name="facet.mincount">1</str>
      <str name="facet.pivot">cat,inStock</str>
      <str name="facet.range.other">after</str>
      <str name="facet.range">price</str>
      <int name="f.price.facet.range.start">0</int>
      <int name="f.price.facet.range.end">600</int>
      <int name="f.price.facet.range.gap">50</int>
      <str name="facet.range">popularity</str>
      <int name="f.popularity.facet.range.start">0</int>
      <int name="f.popularity.facet.range.end">10</int>
      <int name="f.popularity.facet.range.gap">3</int>
      <str name="facet.range">manufacturedate_dt</str>
      <str name="f.manufacturedate_dt.facet.range.start">NOW/YEAR-10YEARS</str>
      <str name="f.manufacturedate_dt.facet.range.end">NOW</str>
      <str 
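
One note on the directoryFactory line above: with solr.RAMDirectoryFactory the
whole index is held inside the JVM heap, so the ~2 GB index has to fit in the
4 GB heap alongside the caches configured here. As suggested in the replication
thread above, a minimal sketch of the disk-backed alternative, assuming the
rest of the config stays the same:

<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.MMapDirectoryFactory}"/>

With that change the heap-sizing advice inverts: keep the heap modest and leave
the remainder of the machine's 6 GB to the OS for memory-mapping the index.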
