Re: Best strategy for logging & security

2015-06-02 Thread Vishal Swaroop
I am using log4j.properties which logs core name with each query...

Is there a way to generate separate logs for each core?

Regards

On Mon, Jun 1, 2015 at 2:13 PM, Rajesh Hazari 
wrote:

> Logging :
>
> Just use logstash to parse your logs for all collections, with
> logstash-forwarder and lumberjack on your Solr replicas in your SolrCloud
> shipping the log events to your central logstash server, and then send
> them back to Solr (either the same or a different instance) into a
> different collection.
>
> The default log4j.properties that comes with solr dist can log core name
> with each query log.
>
> Security:
> I suggest you go through this wiki:
> https://wiki.apache.org/solr/SolrSecurity
>
> *Thanks,*
> *Rajesh,*
> *(mobile) : 8328789519.*
>
> On Mon, Jun 1, 2015 at 11:20 AM, Vishal Swaroop 
> wrote:
>
> > It will be great if you can provide your valuable inputs on strategy for
> > logging & security...
> >
> >
> > Thanks a lot in advance...
> >
> >
> >
> > Logging :
> >
> > - Is there a way to implement logging for each core separately?
> >
> > - What will be the best strategy to log every query's details (source
> > IP, search query, etc.)? At some point we will need monthly reports for
> > analysis.
> >
> >
> >
> > Securing SOLR :
> >
> > - We need to implement SOLR security from the client as well as the
> > server side... requests will be performed via a web app as well as other
> > server-side apps, e.g. curl...
> >
> > Please suggest about the best approach we can follow... link to any
> > documentation will also help.
> >
> >
> >
> > Environment : SOLR 4.7 configured on Tomcat 7  (Linux)
> >
>


Re: Best strategy for logging & security

2015-06-01 Thread Vishal Swaroop
Thanks Rajesh... just trying to figure out if *logstash* is open source
and free?

On Mon, Jun 1, 2015 at 2:13 PM, Rajesh Hazari 
wrote:

> Logging :
>
> Just use logstash to parse your logs for all collections, with
> logstash-forwarder and lumberjack on your Solr replicas in your SolrCloud
> shipping the log events to your central logstash server, and then send
> them back to Solr (either the same or a different instance) into a
> different collection.
>
> The default log4j.properties that comes with solr dist can log core name
> with each query log.
>
> Security:
> I suggest you go through this wiki:
> https://wiki.apache.org/solr/SolrSecurity
>
> *Thanks,*
> *Rajesh,*
> *(mobile) : 8328789519.*
>
> On Mon, Jun 1, 2015 at 11:20 AM, Vishal Swaroop 
> wrote:
>
> > It will be great if you can provide your valuable inputs on strategy for
> > logging & security...
> >
> >
> > Thanks a lot in advance...
> >
> >
> >
> > Logging :
> >
> > - Is there a way to implement logging for each core separately?
> >
> > - What will be the best strategy to log every query's details (source
> > IP, search query, etc.)? At some point we will need monthly reports for
> > analysis.
> >
> >
> >
> > Securing SOLR :
> >
> > - We need to implement SOLR security from the client as well as the
> > server side... requests will be performed via a web app as well as other
> > server-side apps, e.g. curl...
> >
> > Please suggest about the best approach we can follow... link to any
> > documentation will also help.
> >
> >
> >
> > Environment : SOLR 4.7 configured on Tomcat 7  (Linux)
> >
>


Best strategy for logging & security

2015-06-01 Thread Vishal Swaroop
It will be great if you can provide your valuable inputs on strategy for
logging & security...


Thanks a lot in advance...



Logging :

- Is there a way to implement logging for each core separately?

- What will be the best strategy to log every query's details (source IP,
search query, etc.)? At some point we will need monthly reports for
analysis.



Securing SOLR :

- We need to implement SOLR security from the client as well as the server
side... requests will be performed via a web app as well as other
server-side apps, e.g. curl...

Please suggest about the best approach we can follow... link to any
documentation will also help.



Environment : SOLR 4.7 configured on Tomcat 7  (Linux)
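
For the per-core question above: the stock log4j.properties cannot split
output by core on its own, but log4j 1.2 (which Solr 4.x ships with) can
route by message content when configured through log4j.xml instead. A
minimal sketch for one core; the filter classes are standard log4j 1.2,
while the file path and core name are illustrative:

<appender name="CORE1" class="org.apache.log4j.FileAppender">
  <param name="File" value="logs/solr-core1.log"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d{ISO8601} %p %c %m%n"/>
  </layout>
  <!-- Solr query log lines carry the core name in brackets, e.g. [core1] -->
  <filter class="org.apache.log4j.varia.StringMatchFilter">
    <param name="StringToMatch" value="[core1]"/>
    <param name="AcceptOnMatch" value="true"/>
  </filter>
  <!-- drop everything that did not match above -->
  <filter class="org.apache.log4j.varia.DenyAllFilter"/>
</appender>

One such appender per core yields one file per core. For per-query details
such as the source IP, the Tomcat access log is the usual companion, since
Solr's own request log records the parameters but not the caller.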


Re: distributed search limitations via SolrCloud

2015-05-27 Thread Vishal Swaroop
Thanks a lot Erick... great inputs...

Currently our deployment is on Tomcat 7, and I think SOLR 5.x does not
support Tomcat but runs on its own Jetty server, right?
I will discuss this with the team.

Thanks again.

Regards
Vishal

On Wed, May 27, 2015 at 4:16 PM, Erick Erickson 
wrote:

> I'd move to Solr 4.10.3 at least, but preferably Solr 5.x. Solr 5.2 is
> being readied for release as we speak, it'll probably be available in
> a week or so barring unforeseen problems and that's the one I'd go
> with by preference.
>
> Do be aware, though, that the 5.x Solr world deprecates using a war
> file. It's still actually produced, but Solr is moving towards start
> scripts instead. This is something new to get used to. See:
> https://wiki.apache.org/solr/WhyNoWar
>
> Best,
> Erick
>
> On Wed, May 27, 2015 at 12:51 PM, Vishal Swaroop 
> wrote:
> > Thanks a lot Erick... You are right, we should not delay the move to
> > sharding/SolrCloud.
> >
> > As you all are experts... currently we are using SOLR 4.7. Do you suggest
> > we should move to the latest SOLR release, 5.1.0, or can we manage the
> > above issue using SOLR 4.7?
> >
> > Regards
> > Vishal
> >
> > On Wed, May 27, 2015 at 2:21 PM, Erick Erickson  >
> > wrote:
> >
> >> Hard to say. I've seen 20M docs be the place you need to consider
> >> sharding/SolrCloud. I've seen 300M docs be the place you need to start
> >> sharding. That said, I'm quite sure you'll need to shard before you get
> >> to 2B. There's no good reason to delay that process.
> >>
> >> You'll have to do something about the join issue though, that's the
> >> problem you might want to solve first. The new streaming aggregation
> >> stuff might help there, you'll have to figure that out.
> >>
> >> The first thing I'd explore is whether you can denormalize your way
> >> out of the need to join. Or whether you can use block joins instead.
> >>
> >> Best,
> >> Erick
> >>
> >> On Wed, May 27, 2015 at 11:15 AM, Vishal Swaroop 
> >> wrote:
> >> > Currently, we have SOLR configured on single linux server (24 GB
> physical
> >> > memory) with multiple cores.
> >> > We are using SOLR joins (https://wiki.apache.org/solr/Join) across
> >> cores on
> >> > this single server.
> >> >
> >> > But, as data will grow to ~2 billion we need to assess whether we’ll
> need
> >> > to run SolrCloud as "In a DistributedSearch environment, you can not
> Join
> >> > across cores on multiple nodes"
> >> >
> >> > Please suggest at what point or index size we should start
> >> > considering running SolrCloud?
> >> >
> >> > Regards
> >>
>


Re: distributed search limitations via SolrCloud

2015-05-27 Thread Vishal Swaroop
Thanks a lot Erick... You are right, we should not delay the move to
sharding/SolrCloud.

As you all are experts... currently we are using SOLR 4.7. Do you suggest
we should move to the latest SOLR release, 5.1.0, or can we manage the
above issue using SOLR 4.7?

Regards
Vishal

On Wed, May 27, 2015 at 2:21 PM, Erick Erickson 
wrote:

> Hard to say. I've seen 20M docs be the place you need to consider
> sharding/SolrCloud. I've seen 300M docs be the place you need to start
> sharding. That said, I'm quite sure you'll need to shard before you get
> to 2B. There's no good reason to delay that process.
>
> You'll have to do something about the join issue though, that's the
> problem you might want to solve first. The new streaming aggregation
> stuff might help there, you'll have to figure that out.
>
> The first thing I'd explore is whether you can denormalize your way
> out of the need to join. Or whether you can use block joins instead.
>
> Best,
> Erick
>
> On Wed, May 27, 2015 at 11:15 AM, Vishal Swaroop 
> wrote:
> > Currently, we have SOLR configured on single linux server (24 GB physical
> > memory) with multiple cores.
> > We are using SOLR joins (https://wiki.apache.org/solr/Join) across
> cores on
> > this single server.
> >
> > But, as data will grow to ~2 billion we need to assess whether we’ll need
> > to run SolrCloud as "In a DistributedSearch environment, you can not Join
> > across cores on multiple nodes"
> >
> > Please suggest at what point or index size we should start considering
> > running SolrCloud?
> >
> > Regards
>


distributed search limitations via SolrCloud

2015-05-27 Thread Vishal Swaroop
Currently, we have SOLR configured on single linux server (24 GB physical
memory) with multiple cores.
We are using SOLR joins (https://wiki.apache.org/solr/Join) across cores on
this single server.

But, as data will grow to ~2 billion we need to assess whether we’ll need
to run SolrCloud as "In a DistributedSearch environment, you can not Join
across cores on multiple nodes"

Please suggest at what point or index size we should start considering
running SolrCloud?

Regards
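
For reference, the single-node cross-core join under discussion takes this
shape (a sketch; core and field names are illustrative):

http://localhost:8081/solr/coreA/select?q={!join from=inner_id to=outer_id fromIndex=coreB}category:foo

The fromIndex core must live in the same JVM, which is exactly the
limitation quoted above. Block joins ({!parent which="..."}, available
since Solr 4.5) avoid the cross-core hop, but require parent and child
documents to be indexed together as one block.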


Re: Suggestion on field type

2015-05-20 Thread Vishal Swaroop
Thank you all... You all are experts...

I will go with double as this seems to be more feasible.

Regards

On Tue, May 19, 2015 at 7:26 PM, Walter Underwood 
wrote:

> A field type based on BigDecimal could be useful, but that would be a fair
> amount more work.
>
> Double is usually sufficient for big data analysis, especially if you are
> doing simple aggregates (which is most of what Solr can do).
>
> If you want to do something fancier, you’ll need a database, not a search
> engine. As I usually do, I’ll recommend MarkLogic, which is pretty awesome
> stuff. Solr would not be in my top handful of solutions for big data
> analysis.
>
> Personally, I’d stuff it all in JSON in Amazon S3 and run map-reduce
> against it. If you need to do something like that, you could store a JSON
> blob in Solr with the exact values, and use approximate fields to narrow
> things down. Of course, MarkLogic has a graceful interface to Hadoop.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> On May 19, 2015, at 4:09 PM, Erick Erickson 
> wrote:
>
> > Well, double is all you've got, so that's what you have to work with.
> > _Every_ float is an approximation when you get out to some number of
> > decimal places, so you don't really have any choice. Of course it'll
> > affect the result. The question is whether it affects the result
> > enough to matter which is application-specific.
> >
> > Best,
> > Erick
> >
> > On Tue, May 19, 2015 at 12:05 PM, Vishal Swaroop 
> wrote:
> >> Also 10481.5711458735456*79* indexes to 10481.571145873546 using double
> >> <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0"
> >> positionIncrementGap="0" omitNorms="false"/>
> >>
> >> On Tue, May 19, 2015 at 2:57 PM, Vishal Swaroop 
> >> wrote:
> >>
> >>> Thanks Erick... I can ignore the trailing zeros
> >>>
> >>> I am indexing data from a Vertica database... Though *double* is very
> >>> close, SOLR indexes only 14 digits after the decimal.
> >>> e.g. actual db value is 15 digits after decimal i.e.
> 249.81735425382405*2*
> >>>
> >>> SOLR indexes 14 digits after decimal i.e. 249.81735425382405
> >>>
> >>> As these values will be used for big data analysis, I am wondering if
> >>> it might impact the result.
> >>> <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0"
> >>> positionIncrementGap="0" omitNorms="false"/>
> >>>
> >>> Any suggestions ?
> >>>
> >>> Regards
> >>>
> >>>
> >>> On Tue, May 19, 2015 at 1:41 PM, Erick Erickson <
> erickerick...@gmail.com>
> >>> wrote:
> >>>
> >>>> Why do you want to keep trailing zeros? The original input is
> >>>> preserved in the "stored" portion and will be returned if you specify
> >>>> the field in your "fl" list. I'm assuming here that you're looking at
> >>>> the actual indexed terms, and don't really understand why the trailing
> >>>> zeros are important
> >>>>
> >>>> Do not use strings.
> >>>>
> >>>> Best
> >>>> Erick
> >>>>
> >>>> On Tue, May 19, 2015 at 10:22 AM, Vishal Swaroop <
> vishal@gmail.com>
> >>>> wrote:
> >>>>> Thank you John and Jack...
> >>>>>
> >>>>> Looks like double is much closer... it removes trailing zeros...
> >>>>> a) Is there a way to keep trailing zeros
> >>>>> double : 194.846189733028000 indexes to 194.846189733028
> >>>>> <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0"
> >>>>> positionIncrementGap="0" omitNorms="false"/>
> >>>>>
> >>>>> b) If I use "String" then will there be issue doing range query
> >>>>>
> >>>>> float
> >>>>> <fieldType name="float" class="solr.TrieFloatField" precisionStep="0"
> >>>>> positionIncrementGap="0" omitNorms="false"/>
> >>>>> 277.677836785372000 indexes to 277.67783
> >>>>>
> >>>>>
> >>>>>
> >>>>> On Tue, May 19, 2015 at 11:56 AM, Jack Krupansky <
> >>>> jack.krupan...@gmail.com>
> >>>>> wrote:
> >>>>>
> >>>>>> "double" (solr.TrieDoubleField) gives more precision
> >>>>>>
> >>>>>> See:
> >>>>>>
> >>>>>>
> >>>>
> https://lucene.apache.org/solr/5_1_0/solr-core/org/apache/solr/schema/TrieDoubleField.html
> >>>>>>
> >>>>>> -- Jack Krupansky
> >>>>>>
> >>>>>> On Tue, May 19, 2015 at 11:27 AM, Vishal Swaroop <
> vishal@gmail.com
> >>>>>
> >>>>>> wrote:
> >>>>>>
> >>>>>>> Please suggest which numeric field type to use so that I can get
> >>>> complete
> >>>>>>> value.
> >>>>>>>
> >>>>>>> e.g value in database is : 194.846189733028000
> >>>>>>>
> >>>>>>> If I index it as float, SOLR indexes it as 194.84619, whereas I
> >>>>>>> need the complete value, i.e. 194.846189733028000.
> >>>>>>> I will also be doing range queries on this field.
> >>>>>>>
> >>>>>>> <fieldType name="float" class="solr.TrieFloatField" precisionStep="0"
> >>>>>>> positionIncrementGap="0"/>
> >>>>>>>
> >>>>>>> <field name="..." type="float" indexed="true" stored="true"
> >>>>>>> multiValued="false" />
> >>>>>>>
> >>>>>>> Regards
> >>>>>>>
> >>>>>>
> >>>>
> >>>
> >>>
>
>


Re: Suggestion on field type

2015-05-19 Thread Vishal Swaroop
Also 10481.5711458735456*79* indexes to 10481.571145873546 using double
<fieldType name="double" class="solr.TrieDoubleField" precisionStep="0"
positionIncrementGap="0" omitNorms="false"/>

On Tue, May 19, 2015 at 2:57 PM, Vishal Swaroop 
wrote:

> Thanks Erick... I can ignore the trailing zeros
>
> I am indexing data from a Vertica database... Though *double* is very
> close, SOLR indexes only 14 digits after the decimal.
> e.g. actual db value is 15 digits after decimal i.e. 249.81735425382405*2*
>
> SOLR indexes 14 digits after decimal i.e. 249.81735425382405
>
> As these values will be used for big data analysis, I am wondering if
> it might impact the result.
> <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0"
> positionIncrementGap="0" omitNorms="false"/>
>
> Any suggestions ?
>
> Regards
>
>
> On Tue, May 19, 2015 at 1:41 PM, Erick Erickson 
> wrote:
>
>> Why do you want to keep trailing zeros? The original input is
>> preserved in the "stored" portion and will be returned if you specify
>> the field in your "fl" list. I'm assuming here that you're looking at
>> the actual indexed terms, and don't really understand why the trailing
>> zeros are important
>>
>> Do not use strings.
>>
>> Best
>> Erick
>>
>> On Tue, May 19, 2015 at 10:22 AM, Vishal Swaroop 
>> wrote:
>> > Thank you John and Jack...
>> >
>> > Looks like double is much closer... it removes trailing zeros...
>> > a) Is there a way to keep trailing zeros
>> > double : 194.846189733028000 indexes to 194.846189733028
>> > <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0"
>> > positionIncrementGap="0" omitNorms="false"/>
>> >
>> > b) If I use "String" then will there be issue doing range query
>> >
>> > float
>> > <fieldType name="float" class="solr.TrieFloatField" precisionStep="0"
>> > positionIncrementGap="0" omitNorms="false"/>
>> > 277.677836785372000 indexes to 277.67783
>> >
>> >
>> >
>> > On Tue, May 19, 2015 at 11:56 AM, Jack Krupansky <
>> jack.krupan...@gmail.com>
>> > wrote:
>> >
>> >> "double" (solr.TrieDoubleField) gives more precision
>> >>
>> >> See:
>> >>
>> >>
>> https://lucene.apache.org/solr/5_1_0/solr-core/org/apache/solr/schema/TrieDoubleField.html
>> >>
>> >> -- Jack Krupansky
>> >>
>> >> On Tue, May 19, 2015 at 11:27 AM, Vishal Swaroop > >
>> >> wrote:
>> >>
>> >> > Please suggest which numeric field type to use so that I can get
>> complete
>> >> > value.
>> >> >
>> >> > e.g value in database is : 194.846189733028000
>> >> >
>> >> > If I index it as float, SOLR indexes it as 194.84619, whereas I need
>> >> > the complete value, i.e. 194.846189733028000.
>> >> > I will also be doing range queries on this field.
>> >> >
>> >> > <fieldType name="float" class="solr.TrieFloatField" precisionStep="0"
>> >> > positionIncrementGap="0"/>
>> >> >
>> >> > <field name="..." type="float" indexed="true" stored="true"
>> >> > multiValued="false" />
>> >> >
>> >> > Regards
>> >> >
>> >>
>>
>
>


Re: Suggestion on field type

2015-05-19 Thread Vishal Swaroop
Thanks Erick... I can ignore the trailing zeros

I am indexing data from a Vertica database... Though *double* is very
close, SOLR indexes only 14 digits after the decimal.
e.g. actual db value is 15 digits after decimal i.e. 249.81735425382405*2*
SOLR indexes 14 digits after decimal i.e. 249.81735425382405

As these values will be used for big data analysis, I am wondering if it
might impact the result.
<fieldType name="double" class="solr.TrieDoubleField" precisionStep="0"
positionIncrementGap="0" omitNorms="false"/>

Any suggestions ?

Regards


On Tue, May 19, 2015 at 1:41 PM, Erick Erickson 
wrote:

> Why do you want to keep trailing zeros? The original input is
> preserved in the "stored" portion and will be returned if you specify
> the field in your "fl" list. I'm assuming here that you're looking at
> the actual indexed terms, and don't really understand why the trailing
> zeros are important
>
> Do not use strings.
>
> Best
> Erick
>
> On Tue, May 19, 2015 at 10:22 AM, Vishal Swaroop 
> wrote:
> > Thank you John and Jack...
> >
> > Looks like double is much closer... it removes trailing zeros...
> > a) Is there a way to keep trailing zeros
> > double : 194.846189733028000 indexes to 194.846189733028
> > <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0"
> > positionIncrementGap="0" omitNorms="false"/>
> >
> > b) If I use "String" then will there be issue doing range query
> >
> > float
> > <fieldType name="float" class="solr.TrieFloatField" precisionStep="0"
> > positionIncrementGap="0" omitNorms="false"/>
> > 277.677836785372000 indexes to 277.67783
> >
> >
> >
> > On Tue, May 19, 2015 at 11:56 AM, Jack Krupansky <
> jack.krupan...@gmail.com>
> > wrote:
> >
> >> "double" (solr.TrieDoubleField) gives more precision
> >>
> >> See:
> >>
> >>
> https://lucene.apache.org/solr/5_1_0/solr-core/org/apache/solr/schema/TrieDoubleField.html
> >>
> >> -- Jack Krupansky
> >>
> >> On Tue, May 19, 2015 at 11:27 AM, Vishal Swaroop 
> >> wrote:
> >>
> >> > Please suggest which numeric field type to use so that I can get
> complete
> >> > value.
> >> >
> >> > e.g value in database is : 194.846189733028000
> >> >
> >> > If I index it as float, SOLR indexes it as 194.84619, whereas I need
> >> > the complete value, i.e. 194.846189733028000.
> >> > I will also be doing range queries on this field.
> >> >
> >> > <fieldType name="float" class="solr.TrieFloatField" precisionStep="0"
> >> > positionIncrementGap="0"/>
> >> >
> >> > <field name="..." type="float" indexed="true" stored="true"
> >> > multiValued="false" />
> >> >
> >> > Regards
> >> >
> >>
>


Re: Suggestion on field type

2015-05-19 Thread Vishal Swaroop
Thank you John and Jack...

Looks like double is much closer... it removes trailing zeros...
a) Is there a way to keep trailing zeros?
double : 194.846189733028000 indexes to 194.846189733028
<fieldType name="double" class="solr.TrieDoubleField" precisionStep="0"
positionIncrementGap="0" omitNorms="false"/>

b) If I use "String" then will there be issues doing range queries?

float
<fieldType name="float" class="solr.TrieFloatField" precisionStep="0"
positionIncrementGap="0" omitNorms="false"/>
277.677836785372000 indexes to 277.67783



On Tue, May 19, 2015 at 11:56 AM, Jack Krupansky 
wrote:

> "double" (solr.TrieDoubleField) gives more precision
>
> See:
>
> https://lucene.apache.org/solr/5_1_0/solr-core/org/apache/solr/schema/TrieDoubleField.html
>
> -- Jack Krupansky
>
> On Tue, May 19, 2015 at 11:27 AM, Vishal Swaroop 
> wrote:
>
> > Please suggest which numeric field type to use so that I can get complete
> > value.
> >
> > e.g value in database is : 194.846189733028000
> >
> > If I index it as float, SOLR indexes it as 194.84619, whereas I need the
> > complete value, i.e. 194.846189733028000.
> > I will also be doing range queries on this field.
> >
> > <fieldType name="float" class="solr.TrieFloatField" precisionStep="0"
> > positionIncrementGap="0"/>
> >
> > <field name="..." type="float" indexed="true" stored="true"
> > multiValued="false" />
> >
> > Regards
> >
>


Suggestion on field type

2015-05-19 Thread Vishal Swaroop
Please suggest which numeric field type to use so that I can get the
complete value.

e.g value in database is : 194.846189733028000

If I index it as float, SOLR indexes it as 194.84619, whereas I need the
complete value, i.e. 194.846189733028000.
I will also be doing range queries on this field.

<fieldType name="float" class="solr.TrieFloatField" precisionStep="0"
positionIncrementGap="0"/>

<field name="..." type="float" indexed="true" stored="true" multiValued="false" />
Regards
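
For context on the truncation discussed above: an IEEE-754 double carries a
53-bit significand, good for about 15-17 significant decimal digits (2^53
is roughly 9.0 x 10^15), so the 15 significant digits of 194.846189733028
survive, while a float carries only about 7 (hence 277.67783). Trailing
zeros and anything beyond that precision exist only in the stored value,
never in the indexed term.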


Help to index nested document

2015-05-11 Thread Vishal Swaroop
Need your valuable inputs...

I am indexing data from database (one table) which is in this example
format :
id name value
1 Joe 102724904
2 Joe 100996643

- id is primary/ unique key
- there can be same "name" but different "value"
- If I try "name" as the unique key then SOLR removes duplicates and
indexes 1 document

- I am getting the result in this format... Is there a way I can index
the data so that "value" becomes a child of "name"...
"response": {
"numFound": 2,
"start": 0,
"docs": [
  {
"id": "1",
"name": "Joe",
"value": [
  "102724904"
]
  },
  {
"id": "2",
"name": "Joe",
"value": [
  "100996643"
]
  }...

Expected format :
"docs": [
  {
"name": "Joe",
"value": [
  "102724904",
  "100996643"
]
  }
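
One way to get close to the expected shape on Solr 4.x is field collapsing
(a sketch; the core name is illustrative):

http://localhost:8081/solr/core/select?q=*:*&group=true&group.field=name&group.limit=100&wt=json

Each group is then keyed by one "name" and lists its matching documents,
from which the values can be collected. True parent/child nesting is also
possible via block joins, but parent and child documents must be indexed
together as one block.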


How to get exact match along with text edge_ngram

2015-05-04 Thread Vishal Swaroop
We have item_name indexed as text edge_ngram which returns like results...

Please suggest the best approach (such as a "string" index in addition to
the edge_ngram one, or using copyField...) to ALSO search for exact
matches?

e.g. url should return item_name as "abc" entries only... I tried item_name
in quotes ("abc") but no luck...
http://localhost:8081/solr/item/select?q=item_name:abc&wt=json&indent=true
But, I get abc-1, abc-2... as result, however I only need "abc" entries.
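
A common pattern here is to keep the edge_ngram field for type-ahead and
add an untokenized copy for exact matching. A minimal sketch; the new field
name is an assumption:

<field name="item_name_exact" type="string" indexed="true" stored="false"/>
<copyField source="item_name" dest="item_name_exact"/>

Exact queries then go to the copy, e.g. q=item_name_exact:abc returns only
the "abc" entries, while item_name keeps serving the ngram suggestions.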


Re: generate uuid/ id for table which do not have any primary key

2015-04-20 Thread Vishal Swaroop
Thanks... Yes that is option we will go forward with.
On Apr 20, 2015 10:52 AM, "Kaushik"  wrote:

> Have you tried select  as id, name, age ?
>
> On Thu, Apr 16, 2015 at 3:34 PM, Vishal Swaroop 
> wrote:
>
> > Just wondering if there is a way to generate a uuid/ id in data-config
> > without using a combination of fields in the query...
> >
> > data-config.xml
> > <dataConfig>
> > <dataSource
> >   batchSize="2000"
> >   name="test"
> >   type="JdbcDataSource"
> >   driver="oracle.jdbc.OracleDriver"
> >   url="jdbc:oracle:thin:@ldap:"
> >   user="myUser"
> >   password="pwd"/>
> > <document>
> > <entity
> >   docRoot="true"
> >   dataSource="test"
> >   query="select name, age from test_user">
> > </entity>
> > </document>
> > </dataConfig>
> >
> > On Thu, Apr 16, 2015 at 3:18 PM, Vishal Swaroop 
> > wrote:
> >
> > > Thanks Kaushik & Erick..
> > >
> > > Though I can populate uuid by using a combination of fields, I need to
> > > change the type to "string" or else it throws "Invalid UUID String"
> > > <field name="uuid" type="string" indexed="true" stored="true"
> > > required="true" multiValued="false"/>
> > >
> > > a) I will have ~80 million records and am wondering if performance
> > > might be an issue
> > > b) So, during updates can I still use the combination of fields, i.e.
> > > uuid?
> > >
> > > On Thu, Apr 16, 2015 at 2:44 PM, Erick Erickson <
> erickerick...@gmail.com
> > >
> > > wrote:
> > >
> > >> This seems relevant:
> > >>
> > >>
> > >>
> >
> http://stackoverflow.com/questions/16914324/solr-4-missing-required-field-uuid
> > >>
> > >> Best,
> > >> Erick
> > >>
> > >> On Thu, Apr 16, 2015 at 11:38 AM, Kaushik 
> > wrote:
> > >> > You seem to have defined the field, but not populating it in the
> > query.
> > >> Use
> > >> > a combination of fields to come up with a unique id that can be
> > >> assigned to
> > >> > uuid. Does that make sense?
> > >> >
> > >> > Kaushik
> > >> >
> > >> > On Thu, Apr 16, 2015 at 2:25 PM, Vishal Swaroop <
> vishal@gmail.com
> > >
> > >> > wrote:
> > >> >
> > >> >> How to generate uuid/ id (maybe in data-config.xml...) for table
> > which
> > >> do
> > >> >> not have any primary key.
> > >> >>
> > >> >> Scenario :
> > >> >> Using DIH I need to import data from database but table does not
> have
> > >> any
> > >> >> primary key
> > >> >> I do have uuid defined in schema.xml as
> > >> >> <field name="uuid" type="uuid" indexed="true" stored="true" required="true"
> > >> >> multiValued="false"/>
> > >> >> <uniqueKey>uuid</uniqueKey>
> > >> >>
> > >> >> data-config.xml
> > >> >> <dataConfig>
> > >> >> <dataSource
> > >> >>   batchSize="2000"
> > >> >>   name="test"
> > >> >>   type="JdbcDataSource"
> > >> >>   driver="oracle.jdbc.OracleDriver"
> > >> >>   url="jdbc:oracle:thin:@ldap:"
> > >> >>   user="myUser"
> > >> >>   password="pwd"/>
> > >> >> <document>
> > >> >> <entity
> > >> >>   docRoot="true"
> > >> >>   dataSource="test"
> > >> >>   query="select name, age from test_user">
> > >> >> </entity>
> > >> >> </document>
> > >> >> </dataConfig>
> > >> >>
> > >> >> Error : Document is missing mandatory uniqueKey field: uuid
> > >> >>
> > >>
> > >
> > >
> >
>


Re: generate uuid/ id for table which do not have any primary key

2015-04-16 Thread Vishal Swaroop
Just wondering if there is a way to generate a uuid/ id in data-config
without using a combination of fields in the query...

data-config.xml
<dataConfig>
<dataSource
  batchSize="2000"
  name="test"
  type="JdbcDataSource"
  driver="oracle.jdbc.OracleDriver"
  url="jdbc:oracle:thin:@ldap:"
  user="myUser"
  password="pwd"/>
<document>
<entity
  docRoot="true"
  dataSource="test"
  query="select name, age from test_user">
</entity>
</document>
</dataConfig>

On Thu, Apr 16, 2015 at 3:18 PM, Vishal Swaroop 
wrote:

> Thanks Kaushik & Erick..
>
> Though I can populate uuid by using a combination of fields, I need to
> change the type to "string" or else it throws "Invalid UUID String"
> <field name="uuid" type="string" indexed="true" stored="true"
> required="true" multiValued="false"/>
>
> a) I will have ~80 million records and am wondering if performance might
> be an issue
> b) So, during updates can I still use the combination of fields, i.e.
> uuid?
>
> On Thu, Apr 16, 2015 at 2:44 PM, Erick Erickson 
> wrote:
>
>> This seems relevant:
>>
>>
>> http://stackoverflow.com/questions/16914324/solr-4-missing-required-field-uuid
>>
>> Best,
>> Erick
>>
>> On Thu, Apr 16, 2015 at 11:38 AM, Kaushik  wrote:
>> > You seem to have defined the field, but not populating it in the query.
>> Use
>> > a combination of fields to come up with a unique id that can be
>> assigned to
>> > uuid. Does that make sense?
>> >
>> > Kaushik
>> >
>> > On Thu, Apr 16, 2015 at 2:25 PM, Vishal Swaroop 
>> > wrote:
>> >
>> >> How to generate uuid/ id (maybe in data-config.xml...) for table which
>> do
>> >> not have any primary key.
>> >>
>> >> Scenario :
>> >> Using DIH I need to import data from database but table does not have
>> any
>> >> primary key
>> >> I do have uuid defined in schema.xml as
>> >> <field name="uuid" type="uuid" indexed="true" stored="true" required="true"
>> >> multiValued="false"/>
>> >> <uniqueKey>uuid</uniqueKey>
>> >>
>> >> data-config.xml
>> >> <dataConfig>
>> >> <dataSource
>> >>   batchSize="2000"
>> >>   name="test"
>> >>   type="JdbcDataSource"
>> >>   driver="oracle.jdbc.OracleDriver"
>> >>   url="jdbc:oracle:thin:@ldap:"
>> >>   user="myUser"
>> >>   password="pwd"/>
>> >> <document>
>> >> <entity
>> >>   docRoot="true"
>> >>   dataSource="test"
>> >>   query="select name, age from test_user">
>> >> </entity>
>> >> </document>
>> >> </dataConfig>
>> >>
>> >> Error : Document is missing mandatory uniqueKey field: uuid
>> >>
>>
>
>


Re: generate uuid/ id for table which do not have any primary key

2015-04-16 Thread Vishal Swaroop
Thanks Kaushik & Erick..

Though I can populate uuid by using a combination of fields, I need to
change the type to "string" or else it throws "Invalid UUID String"
<field name="uuid" type="string" indexed="true" stored="true"
required="true" multiValued="false"/>

a) I will have ~80 million records and am wondering if performance might
be an issue
b) So, during updates can I still use the combination of fields, i.e. uuid?

On Thu, Apr 16, 2015 at 2:44 PM, Erick Erickson 
wrote:

> This seems relevant:
>
>
> http://stackoverflow.com/questions/16914324/solr-4-missing-required-field-uuid
>
> Best,
> Erick
>
> On Thu, Apr 16, 2015 at 11:38 AM, Kaushik  wrote:
> > You seem to have defined the field, but not populating it in the query.
> Use
> > a combination of fields to come up with a unique id that can be assigned
> to
> > uuid. Does that make sense?
> >
> > Kaushik
> >
> > On Thu, Apr 16, 2015 at 2:25 PM, Vishal Swaroop 
> > wrote:
> >
> >> How to generate uuid/ id (maybe in data-config.xml...) for table which
> do
> >> not have any primary key.
> >>
> >> Scenario :
> >> Using DIH I need to import data from database but table does not have
> any
> >> primary key
> >> I do have uuid defined in schema.xml as
> >> <field name="uuid" type="uuid" indexed="true" stored="true" required="true"
> >> multiValued="false"/>
> >> <uniqueKey>uuid</uniqueKey>
> >>
> >> data-config.xml
> >> <dataConfig>
> >> <dataSource
> >>   batchSize="2000"
> >>   name="test"
> >>   type="JdbcDataSource"
> >>   driver="oracle.jdbc.OracleDriver"
> >>   url="jdbc:oracle:thin:@ldap:"
> >>   user="myUser"
> >>   password="pwd"/>
> >> <document>
> >> <entity
> >>   docRoot="true"
> >>   dataSource="test"
> >>   query="select name, age from test_user">
> >> </entity>
> >> </document>
> >> </dataConfig>
> >>
> >> Error : Document is missing mandatory uniqueKey field: uuid
> >>
>


generate uuid/ id for table which do not have any primary key

2015-04-16 Thread Vishal Swaroop
How to generate a uuid/ id (maybe in data-config.xml...) for a table which
does not have any primary key?

Scenario :
Using DIH I need to import data from a database, but the table does not
have any primary key.
I do have uuid defined in schema.xml as
<field name="uuid" type="uuid" indexed="true" stored="true" required="true"
multiValued="false"/>
<uniqueKey>uuid</uniqueKey>

data-config.xml
<dataConfig>
<dataSource
  batchSize="2000"
  name="test"
  type="JdbcDataSource"
  driver="oracle.jdbc.OracleDriver"
  url="jdbc:oracle:thin:@ldap:"
  user="myUser"
  password="pwd"/>
<document>
<entity
  docRoot="true"
  dataSource="test"
  query="select name, age from test_user">
</entity>
</document>
</dataConfig>

Error : Document is missing mandatory uniqueKey field: uuid
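
One alternative to composing a key in SQL is to let Solr mint one:
solr.UUIDUpdateProcessorFactory (available in Solr 4.x) fills an empty
uniqueKey field at index time. A sketch for solrconfig.xml, wired into a
stock DIH handler (handler name and config file are the usual defaults):

<updateRequestProcessorChain name="uuid">
  <processor class="solr.UUIDUpdateProcessorFactory">
    <str name="fieldName">uuid</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>

<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
    <str name="update.chain">uuid</str>
  </lst>
</requestHandler>

One caveat: a generated UUID is new on every import, so re-importing the
same rows creates duplicates rather than updates.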


Re: SOLR 5.0.0 and Tomcat version ?

2015-03-23 Thread Vishal Swaroop
a) Does this mean that SOLR 5 cannot be deployed on Tomcat, or that it is
not worth it?


Regards
Vishal

On Mon, Mar 23, 2015 at 2:27 PM, Adnan Yaqoob  wrote:

> Erick,
> Any specific reason for going away from war file?
>
> Adnan
>
> On Mon, Mar 23, 2015 at 12:35 PM, Erick Erickson 
> wrote:
>
> > There will be no war file distributed for a start
> >
> > Best,
> > Erick
> >
> > On Mon, Mar 23, 2015 at 9:04 AM, Karl Kildén 
> > wrote:
> > > Just curious, what will be done that is incompatible with servlet
> > > containers?
> > >
> > >
> > >
> > > On 23 March 2015 at 16:50, Aman Tandon 
> wrote:
> > >
> > >> Hi Vishal,
> > >>
> > >> I am not aware of which version of Tomcat will suit best. But I
> > >> suggest using Solr as it is, because after a few more releases Solr
> > >> will no longer run under an application server.
> > >>
> > >> So it's good to use it as it is (without an application server) when
> > >> you are redesigning the structure anyway.
> > >>
> > >> With Regards
> > >> Aman Tandon
> > >>
> > >> On Mon, Mar 23, 2015 at 9:09 PM, Vishal Swaroop  >
> > >> wrote:
> > >>
> > >> > Hi,
> > >> >
> > >> > We are planning to configure new linux server for latest SOLR
> release
> > >> i.e.
> > >> > 5.0.0
> > >> >
> > >> > Please suggest which Tomcat version will be best compatible with
> > SOLR5...
> > >> > latest Tomcat release is 8.0
> > >> >
> > >> > Thanks
> > >> >
> > >>
> >
>
>
>
> --
> Regards,
> *Adnan Yaqoob*
>


SOLR 5.0.0 and Tomcat version ?

2015-03-23 Thread Vishal Swaroop
Hi,

We are planning to configure a new Linux server for the latest SOLR
release, i.e. 5.0.0.

Please suggest which Tomcat version will be most compatible with SOLR 5...
the latest Tomcat release is 8.0.

Thanks


Re: Suggestion on indexing complex xml

2015-03-02 Thread Vishal Swaroop
Thanks for your time and suggestions Alex...

a) So, if I use xslt... will the SOLR output result be xml, or is there a
trick to get json as well?

b) I am trying to figure out an xslt template for my xml input (below) that
differentiates "parameter1" & "parameter2", as some elements are common
(e.g. name, value)... any help will be great.
Trying to figure out the best approach for the xml below.

c) Is there a way to also get the attribute (e.g. version) of an element
(e.g. parameter)?

*XML input :*

<build>
   <actions>
      <parametersAction>
         <parameters>
            <parameter1 version="1.0">
               <name>Name1</name>
               <value>1</value>
            </parameter1>
            <parameter2>
               <name>Name2</name>
               <description>description test</description>
               <value>2</value>
            </parameter2>
         </parameters>
      </parametersAction>
   </actions>
</build>


Regards
Vishal
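
On (a): the tr parameter only shapes documents on their way into /update;
query responses are still controlled by wt, so wt=json keeps working. For
(b) and (c), a minimal xslt sketch against the input above, where the Solr
field names are illustrative:

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <add>
      <xsl:for-each
          select="build/actions/parametersAction/parameters/*">
        <doc>
          <!-- name() yields parameter1 / parameter2, telling them apart -->
          <field name="param_type"><xsl:value-of select="name()"/></field>
          <!-- attributes are read with the @ axis, answering (c) -->
          <field name="version"><xsl:value-of select="@version"/></field>
          <field name="name"><xsl:value-of select="name"/></field>
          <field name="value"><xsl:value-of select="value"/></field>
        </doc>
      </xsl:for-each>
    </add>
  </xsl:template>
</xsl:stylesheet>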


On Fri, Feb 27, 2015 at 4:43 PM, Alexandre Rafalovitch 
wrote:

> On 27 February 2015 at 16:11, Vishal Swaroop  wrote:
> > I am able to index XML with same "name" element but in different XPATH by
> > using XPathEntityProcessor "forEach" (e.g. below)
> >
> > Just wondering if there is a better way to handle this xml format.
>
> DIH's XML parser is rather limited and literally-minded. You could
> instead pre-process XML with XSLT:
>
> https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers#UploadingDatawithIndexHandlers-UsingXSLTtoTransformXMLIndexUpdates
>
> Or looking into something like SIREn:
> http://siren.solutions/siren/overview/
>
> Regards,
>Alex.
>
>
> 
> Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
> http://www.solr-start.com/
>


Suggestion on indexing complex xml

2015-02-27 Thread Vishal Swaroop
Hi,

I am able to index XML with the same "name" element in different XPATHs by
using the XPathEntityProcessor "forEach" attribute (e.g. below).

Just wondering if there is a better way to handle this xml format.

a) Is there any better way to handle this scenario, as the xml file will
have multiple sub-menu levels (e.g. A, B, C, D...)
and I will have to specify each in the "forEach" attribute?

b) How do I differentiate the xml results from two entities defined in
data-config.xml?

Example xml

<menu>
  <food>
    <name>Waffles</name>
    <price>$2.95</price>
    <sub-menu>
      <food>
        <name>Strawberry</name>
        <description>Light waffles covered with strawberries</description>
        <price>$3.95</price>
      </food>
    </sub-menu>
  </food>
</menu>

Example dataConfig
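
A sketch of how such a dataConfig might look (element names follow the
example xml above; the file path and entity name are assumptions):

<dataConfig>
  <dataSource type="FileDataSource"/>
  <document>
    <entity name="menu" processor="XPathEntityProcessor"
            url="/path/to/menu.xml"
            forEach="/menu/food | /menu/food/sub-menu/food">
      <field column="name" xpath="/menu/food/name"/>
      <field column="name" xpath="/menu/food/sub-menu/food/name"/>
      <field column="value" xpath="/menu/food/price"/>
      <field column="value" xpath="/menu/food/sub-menu/food/price"/>
      <field column="description" xpath="/menu/food/sub-menu/food/description"/>
    </entity>
  </document>
</dataConfig>

Every additional sub-menu level needs its own forEach branch and xpath
rows, which is exactly the duplication being asked about in (a).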
Result :
"docs": [
  {
"name": "Waffles",
"value": "$2.95"
  },
  {
"description": "Light waffles covered with strawberries",
"name": "Strawberry",
"value": "$3.95"
  }
]


Re: Add fields without manually editing Schema.xml.

2015-02-25 Thread Vishal Swaroop
Thanks a lot Alex...

I thought about dynamic fields and will also explore the suggested
options...

On Wed, Feb 25, 2015 at 1:40 PM, Alexandre Rafalovitch 
wrote:

> Several ways. Reading through tutorials should help to get the
> details. But in short:
> 1) Map them to dynamic fields using prefixes and/or suffixes.
> 2) Use dynamic schema which will guess the types and creates the
> fields based on first use
>
> Something like SIREn might also be of interest:
> http://siren.solutions/siren/overview/
>
> Regards,
>Alex.
> 
> Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
> http://www.solr-start.com/
>
>
> On 25 February 2015 at 13:26, Vishal Swaroop  wrote:
> > Hi,
> >
> > Just wondering if there is a way to handle this use-case in SOLR without
> > manually editing Schema.xml.
> >
> > Scenario :
> > We have xml data with some elements/ attributes which we plan to index.
> > As we move forward there can be addition of xml elements.
> >
> > Is there a way to handle this without manually adding fields / changing
> > schema.xml?
> >
> > Thanks
> > V
>


Add fields without manually editing Schema.xml.

2015-02-25 Thread Vishal Swaroop
Hi,

Just wondering if there is a way to handle this use-case in SOLR without
manually editing Schema.xml.

Scenario :
We have xml data with some elements/ attributes which we plan to index.
As we move forward there can be addition of xml elements.

Is there a way to handle this without manually adding fields / changing
schema.xml?

Thanks
V
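
For reference, option 1 from the reply (dynamic fields) looks like this in
schema.xml (a sketch using the stock suffix conventions; the types are the
ones shipped in the default collection1 schema):

<dynamicField name="*_s"   type="string"       indexed="true" stored="true"/>
<dynamicField name="*_i"   type="int"          indexed="true" stored="true"/>
<dynamicField name="*_txt" type="text_general" indexed="true" stored="true" multiValued="true"/>

Any incoming field named, say, author_s or count_i is then accepted without
a schema edit, as long as the suffix matches.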


Suggestion on distinct/ group by for a field ?

2015-02-23 Thread Vishal Swaroop
Please suggest on how to get the distinct count for a field (name).

Summary : I have data indexed in the following format
category name value
Cat1 A 1
Cat1 A 2
Cat1 B 3
Cat1 B 4

I tried getting the distinct "name" count... but it returns 4 records
instead of 2 (i.e. A, B)
http://localhost:8081/solr/core_test/select?q=category:Cat1&fl=category,name&wt=json&indent=true&facet.mincount=1&facet=true

In Oracle I can easily perform the distinct count using group-by:
select c.cat, count(distinct i.name) from category c, itemname i, value v
where v.item_id = i.id and i.cat_id = c.id and c.cat = 'Cat1' group by
c.cat
Result:
"Cat1" "2"

Thanks
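
The URL above enables faceting but never names a facet field, which is why
it comes back as plain records. Adding one returns each distinct name once,
with counts (same core as above):

http://localhost:8081/solr/core_test/select?q=category:Cat1&rows=0&facet=true&facet.field=name&facet.mincount=1&wt=json&indent=true

The facet list then shows the two buckets A and B. If only the distinct
count itself is wanted, grouping reports it directly via
&group=true&group.field=name&group.ngroups=true.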


Re: If I change schema.xml then reIndex is neccessary in Solr or not?

2015-01-22 Thread Vishal Swaroop
We noticed that SOLR/ Tomcat also needs a restart... is it the same for
you?

Regards


On Thu, Jan 22, 2015 at 2:11 AM, Nitin Solanki  wrote:

> Ok. Thanx
>
> On Thu, Jan 22, 2015 at 11:38 AM, Gora Mohanty  wrote:
>
> > On 22 January 2015 at 11:23, Nitin Solanki  wrote:
> > > I *indexed* *2GB* of data. Now I want to *change* the *type* of *field*
> > > from *textSpell* to *string* type into
> >
> > Yes, one would need to reindex.
> >
> > Regards,
> > Gora
> >
>
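
For a schema.xml change alone, a full Tomcat restart is usually not
required; reloading the core also picks it up (a sketch against the
CoreAdmin API; host, port and core name are illustrative):

http://localhost:8081/solr/admin/cores?action=RELOAD&core=collection1

The reindex itself is still needed whenever a field's type changes, as
noted above.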


Re: Ignore whitesapce, underscore using KeywordTokenizer... EdgeNGramFilter

2015-01-21 Thread Vishal Swaroop
I tried adding *PatternReplaceFilterFactory* in the index section but it is
not working.

Example itemName data can be :
- "ABC E12" : if user types "ABCE" suggestion should be "ABC E12"
- "ABCE_12" : if user types "ABCE1" suggestion should be "ABCE_12"

<fieldType name="..." class="solr.TextField" positionIncrementGap="100">
   <analyzer type="index">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      *<filter class="solr.PatternReplaceFilterFactory" pattern="..."
replacement="..."/>*
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
maxGramSize="15" side="front"/>
   </analyzer>
   <analyzer type="query">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
   </analyzer>
</fieldType>

On Wed, Jan 21, 2015 at 3:31 PM, Alvaro Cabrerizo 
wrote:

> Hi,
>
> Not sure, but I think that the PatternReplaceFilterFactory or
> the PatternReplaceCharFilterFactory could help you deleting those
> characters.
>
> Regards.
> On Jan 21, 2015 7:59 PM, "Vishal Swaroop"  wrote:
>
> > I am trying to implement a type-ahead suggestion for a single field
> > which should ignore whitespace, underscores or special characters in
> > autosuggest.
> >
> > It works as suggested by Alex using KeywordTokenizerFactory, but how do
> > I ignore whitespace, underscores...
> >
> > Example itemName data can be :
> > "ABC E12" : if user types "ABCE" suggestion should be "ABC E12"
> > "ABCE_12" : if user types "ABCE1" suggestion should be "ABCE_12"
> >
> > Schema.xml
> > <field name="itemName" type="..." indexed="true"
> > stored="true" multiValued="false" />
> >
> > <fieldType name="..." class="solr.TextField"
> > positionIncrementGap="100">
> >    <analyzer type="index">
> >       <tokenizer class="solr.KeywordTokenizerFactory"/>
> >       <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
> > maxGramSize="15" side="front"/>
> >    </analyzer>
> >    <analyzer type="query">
> >       <tokenizer class="solr.KeywordTokenizerFactory"/>
> >    </analyzer>
> > </fieldType>
>
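
One arrangement that matches the behaviour asked for here (whitespace- and
underscore-insensitive matching, while the stored value keeps its original
form) is to strip those characters with a char filter before the keyword
tokenizer, on both the index and query sides. A minimal sketch; the type
name, pattern and gram sizes are assumptions:

<fieldType name="text_typeahead" class="solr.TextField" positionIncrementGap="100">
   <analyzer type="index">
      <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="[\s_\-]+" replacement=""/>
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="15" side="front"/>
   </analyzer>
   <analyzer type="query">
      <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="[\s_\-]+" replacement=""/>
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
</fieldType>

With this, "ABC E12" indexes as grams of "abce12", so typing "ABCE" matches
it, while the stored field still returns "ABC E12" for display.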


Ignore whitespace, underscore using KeywordTokenizer... EdgeNGramFilter

2015-01-21 Thread Vishal Swaroop
I am trying to implement a type-ahead suggestion for a single field which
should ignore whitespace, underscores or special characters in autosuggest.

It works as suggested by Alex using KeywordTokenizerFactory, but how do I
ignore whitespace, underscores...

Example itemName data can be :
"ABC E12" : if user types "ABCE" suggestion should be "ABC E12"
"ABCE_12" : if user types "ABCE1" suggestion should be "ABCE_12"

Schema.xml
<field name="itemName" type="..." indexed="true" stored="true" multiValued="false" />

<fieldType name="..." class="solr.TextField" positionIncrementGap="100">
   <analyzer type="index">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
maxGramSize="15" side="front"/>
   </analyzer>
   <analyzer type="query">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
   </analyzer>
</fieldType>


Re: How to make edge_ngram work with number, underscores, dashes and space

2015-01-21 Thread Vishal Swaroop
Thanks a lot Alex...

It looks like it works as expected... I removed "EdgeNGramFilterFactory"
from the "query" section and used "KeywordTokenizerFactory" in "index"...
this is the final version:

<fieldType name="..." class="solr.TextField" positionIncrementGap="100">
   <analyzer type="index">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
maxGramSize="15" side="front"/>
   </analyzer>
   <analyzer type="query">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
   </analyzer>
</fieldType>
So... when is it right to use <tokenizer
class="solr.EdgeNGramTokenizerFactory"/>...



On Tue, Jan 20, 2015 at 11:46 PM, Alexandre Rafalovitch 
wrote:

> So, try the suggested tokenizers and dump the ngrams from query. See
> what happens. Ask a separate question with corrected config/output if
> you still have issues.
>
> Regards,
>Alex.
> 
> Sign up for my Solr resources newsletter at http://www.solr-start.com/
>
>
> On 20 January 2015 at 23:08, Vishal Swaroop  wrote:
> > Thanks for the response..
> > a) I am trying to make it non-case-sensitive... itemName data is indexed
> in
> > upper case
> >
> > b) I am looking to display the result as type-ahead suggestion which
> might
> > include space, underscore, number...
> >
> > - "ABC12DE" : It does not work as soon as I type 1.. i.e. ABC1
> > Output expected "A", "AB", "ABC", "ABC1"... so on
> > Data can also have underscores, dashes
> > - "ABC_12DE", : Output expected "A", "AB", "ABC", "ABC_", "ABC_1"... so
> on
> >
> > Field name & type defined in schema :
> > <field name="itemName" type="..." indexed="true"
> > stored="true" multiValued="false" />
> >
> > <fieldType name="..." class="solr.TextField"
> > positionIncrementGap="100">
> >    <analyzer type="index">
> >       <tokenizer class="solr.LowerCaseTokenizerFactory"/>
> >       <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
> > maxGramSize="15" side="front"/>
> >    </analyzer>
> >    <analyzer type="query">
> >       <tokenizer class="solr.LowerCaseTokenizerFactory"/>
> >       <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
> > maxGramSize="15" side="front"/>
> >    </analyzer>
> > </fieldType>
> >
> > On Tue, Jan 20, 2015 at 9:53 PM, Alexandre Rafalovitch <
> arafa...@gmail.com>
> > wrote:
> >
> >> Were you actually trying to "...divides text at non-letters and
> >> converts them to lower case"? Or were you trying to make it
> >> non-case-sensitive, which would be KeywordTokenizer and
> >> LowerCaseFilter?
> >>
> >> Also, normally we do not use an NGram filter on both Index and Query.
> >> That just makes things match on common prefixes instead of matching
> >> what you are searching for to a prefix of the original word.
> >>
> >> Regards,
> >> Alex.
> >> 
> >> Sign up for my Solr resources newsletter at http://www.solr-start.com/
> >>
> >>
> >> On 20 January 2015 at 21:47, Vishal Swaroop 
> wrote:
> >> > Hi,
> >> >
> >> > May be this is basic but I am trying to understand which Tokenizer and
> >> > Filter to use. I followed some examples as mentioned in solr wiki but
> >> > type-ahead does not show expected suggestions.
> >> >
> >> > Example itemName data can be :
> >> > - "ABC12DE" : It does not work as soon as I type 1.. i.e. ABC1
> >> > - "ABC_12DE", "ABC 12DE"
> >> > - Data can also have underscores, dashes
> >> > - I am trying ignore-case auto suggest
> >> >
> >> > Field name & type defined in schema :
> >> > <field name="itemName" type="..." indexed="true"
> >> > stored="true" multiValued="false" />
> >> >
> >> > <fieldType name="..." class="solr.TextField"
> >> > positionIncrementGap="100">
> >> >    <analyzer type="index">
> >> >       <tokenizer class="solr.LowerCaseTokenizerFactory"/>
> >> >       <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
> >> > maxGramSize="15" side="front"/>
> >> >    </analyzer>
> >> >    <analyzer type="query">
> >> >       <tokenizer class="solr.LowerCaseTokenizerFactory"/>
> >> >       <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
> >> > maxGramSize="15" side="front"/>
> >> >    </analyzer>
> >> > </fieldType>
> >>
>


Re: How to make edge_ngram work with number, underscores, dashes and space

2015-01-20 Thread Vishal Swaroop
Thanks for the response..
a) I am trying to make it non-case-sensitive... itemName data is indexed in
upper case

b) I am looking to display the result as type-ahead suggestion which might
include space, underscore, number...

- "ABC12DE" : It does not work as soon as I type 1.. i.e. ABC1
Output expected "A", "AB", "ABC", "ABC1"... so on
Data can also have underscores, dashes
- "ABC_12DE", : Output expected "A", "AB", "ABC", "ABC_", "ABC_1"... so on

Field name & type defined in schema :
<field name="itemName" type="..." indexed="true" stored="true" multiValued="false" />

<fieldType name="..." class="solr.TextField" positionIncrementGap="100">
   <analyzer type="index">
      <tokenizer class="solr.LowerCaseTokenizerFactory"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
maxGramSize="15" side="front"/>
   </analyzer>
   <analyzer type="query">
      <tokenizer class="solr.LowerCaseTokenizerFactory"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
maxGramSize="15" side="front"/>
   </analyzer>
</fieldType>

On Tue, Jan 20, 2015 at 9:53 PM, Alexandre Rafalovitch 
wrote:

> Were you actually trying to "...divides text at non-letters and
> converts them to lower case"? Or were you trying to make it
> non-case-sensitive, which would be KeywordTokenizer and
> LowerCaseFilter?
>
> Also, normally we do not use an NGram filter on both Index and Query.
> That just makes things match on common prefixes instead of matching
> what you are searching for to a prefix of the original word.
>
> Regards,
> Alex.
> 
> Sign up for my Solr resources newsletter at http://www.solr-start.com/
>
>
> On 20 January 2015 at 21:47, Vishal Swaroop  wrote:
> > Hi,
> >
> > May be this is basic but I am trying to understand which Tokenizer and
> > Filter to use. I followed some examples as mentioned in solr wiki but
> > type-ahead does not show expected suggestions.
> >
> > Example itemName data can be :
> > - "ABC12DE" : It does not work as soon as I type 1.. i.e. ABC1
> > - "ABC_12DE", "ABC 12DE"
> > - Data can also have underscores, dashes
> > - I am trying ignore-case auto suggest
> >
> > Field name & type defined in schema :
> > <field name="itemName" type="..." indexed="true"
> > stored="true" multiValued="false" />
> >
> > <fieldType name="..." class="solr.TextField"
> > positionIncrementGap="100">
> >    <analyzer type="index">
> >       <tokenizer class="solr.LowerCaseTokenizerFactory"/>
> >       <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
> > maxGramSize="15" side="front"/>
> >    </analyzer>
> >    <analyzer type="query">
> >       <tokenizer class="solr.LowerCaseTokenizerFactory"/>
> >       <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
> > maxGramSize="15" side="front"/>
> >    </analyzer>
> > </fieldType>
>


How to make edge_ngram work with number, underscores, dashes and space

2015-01-20 Thread Vishal Swaroop
Hi,

Maybe this is basic, but I am trying to understand which Tokenizer and
Filter to use. I followed some examples as mentioned in the solr wiki but
type-ahead does not show the expected suggestions.

Example itemName data can be :
- "ABC12DE" : It does not work as soon as I type 1.. i.e. ABC1
- "ABC_12DE", "ABC 12DE"
- Data can also have underscores, dashes
- I am trying ignore-case auto suggest

Field name & type defined in schema :
<field name="itemName" type="..." indexed="true" stored="true" multiValued="false" />

<fieldType name="..." class="solr.TextField" positionIncrementGap="100">
   <analyzer type="index">
      <tokenizer class="solr.LowerCaseTokenizerFactory"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
maxGramSize="15" side="front"/>
   </analyzer>
   <analyzer type="query">
      <tokenizer class="solr.LowerCaseTokenizerFactory"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
maxGramSize="15" side="front"/>
   </analyzer>
</fieldType>


Is defining facet fields in solrconfig.xml mandatory ?

2015-01-07 Thread Vishal Swaroop
Hi,

I am exploring faceting in SOLR in the collection1 example. Faceting fields
are defined in solrconfig.xml under the /browse request handler, which is
used by the built-in "VelocityResponseWriter":

...
   <str name="facet">on</str>
   <str name="facet.field">cat</str>


I think it is not at all mandatory to define facet fields in
solrconfig.xml, right ?
Instead we can directly use facet in query URLs... e.g.
http://localhost:8081/solr/mytestcollection/select?q=*:*&rows=0&facet=true&facet.field=item_id&facet.field=item_type&wt=json&indent=true

Regards


Re: SOLR - any open source framework

2015-01-06 Thread Vishal Swaroop
Thanks a lot... We are in the process of analyzing what to use with SOLR...
On Jan 6, 2015 5:30 PM, "Roman Chyla"  wrote:

> We compared several projects before starting - AngularJS was one of them.
> It is great for cases where you can find (already prepared) components,
> but writing custom components was easier in other frameworks (you need to
> take this statement with a grain of salt: it was specific to our
> situation), and that was one year ago...
>
> On Tue, Jan 6, 2015 at 5:20 PM, Vishal Swaroop 
> wrote:
>
> > Thanks Roman... I will check it... Maybe it's off topic but how about
> > Angular...
> > On Jan 6, 2015 5:17 PM, "Roman Chyla"  wrote:
> >
> > > Hi Vishal, Alexandre,
> > >
> > > Here is another one, using Backbone, just released v1.0.16
> > >
> > > https://github.com/adsabs/bumblebee
> > >
> > > you can see it in action: http://ui.adslabs.org/
> > >
> > > While it primarily serves our own needs, I tried to architect it to be
> > > extendible (within reasonable limits of code, man power)
> > >
> > > Roman
> > >
> > > On Tue, Jan 6, 2015 at 4:58 PM, Alexandre Rafalovitch <
> > arafa...@gmail.com>
> > > wrote:
> > >
> > > > That's very general question. So, the following are three random
> ideas
> > > > just to get you started to think of options.
> > > >
> > > > *) spring.io (Spring Data Solr) + Vaadin
> > > > *)  http://gethue.com/ (it's primarily Hadoop, but has Solr UI
> builder
> > > > too)
> > > > *) http://projectblacklight.org/
> > > >
> > > > Regards,
> > > >Alex.
> > > > 
> > > > Sign up for my Solr resources newsletter at
> http://www.solr-start.com/
> > > >
> > > >
> > > > On 6 January 2015 at 16:35, Vishal Swaroop 
> > wrote:
> > > > > I am new to SOLR and was able to configure, run samples as well as
> > able
> > > > to
> > > > > index data using DIH (from database).
> > > > >
> > > > > Just wondering if there are open source framework to query and
> > > > > display/visualize.
> > > > >
> > > > > Regards
> > > >
> > >
> >
>


Re: SOLR - any open source framework

2015-01-06 Thread Vishal Swaroop
Thanks Roman... I will check it... Maybe it's off topic but how about
Angular...
On Jan 6, 2015 5:17 PM, "Roman Chyla"  wrote:

> Hi Vishal, Alexandre,
>
> Here is another one, using Backbone, just released v1.0.16
>
> https://github.com/adsabs/bumblebee
>
> you can see it in action: http://ui.adslabs.org/
>
> While it primarily serves our own needs, I tried to architect it to be
> extendible (within reasonable limits of code, man power)
>
> Roman
>
> On Tue, Jan 6, 2015 at 4:58 PM, Alexandre Rafalovitch 
> wrote:
>
> > That's a very general question. So, the following are three random ideas
> > just to get you started thinking about options.
> >
> > *) spring.io (Spring Data Solr) + Vaadin
> > *)  http://gethue.com/ (it's primarily Hadoop, but has Solr UI builder
> > too)
> > *) http://projectblacklight.org/
> >
> > Regards,
> >    Alex.
> > 
> > Sign up for my Solr resources newsletter at http://www.solr-start.com/
> >
> >
> > On 6 January 2015 at 16:35, Vishal Swaroop  wrote:
> > > I am new to SOLR and was able to configure, run samples as well as able
> > to
> > > index data using DIH (from database).
> > >
> > > Just wondering if there are open source framework to query and
> > > display/visualize.
> > >
> > > Regards
> >
>


Re: SOLR - any open source framework

2015-01-06 Thread Vishal Swaroop
Great... Thanks for the inputs... I explored the Velocity response writer;
some posts suggest it is good for prototyping but not for production...
On Jan 6, 2015 4:59 PM, "Alexandre Rafalovitch"  wrote:

> That's a very general question. So, the following are three random ideas
> just to get you started thinking about options.
>
> *) spring.io (Spring Data Solr) + Vaadin
> *)  http://gethue.com/ (it's primarily Hadoop, but has Solr UI builder
> too)
> *) http://projectblacklight.org/
>
> Regards,
>Alex.
> 
> Sign up for my Solr resources newsletter at http://www.solr-start.com/
>
>
> On 6 January 2015 at 16:35, Vishal Swaroop  wrote:
> > I am new to SOLR and was able to configure it, run the samples, and
> > index data using DIH (from a database).
> >
> > Just wondering if there are open source frameworks to query and
> > display/visualize.
> >
> > Regards
>


SOLR - any open source framework

2015-01-06 Thread Vishal Swaroop
I am new to SOLR and was able to configure it, run the samples, and index
data using DIH (from a database).

Just wondering if there are open source frameworks to query and
display/visualize.

Regards