Re: wordpress anyone?

2021-03-03 Thread Gora Mohanty
On Thu, 4 Mar 2021 at 01:50, dmitri maziuk  wrote:

> Hi all,
>
> does anyone use Solr with WP? It seems there is one for-pay-only
> offering and a few defunct projects from a decade ago... a great web
> search engine is particularly useful if it can actually be used in a
> client.
>
> So has anyone heard about any active WP integration projects other than
> wpsolr.com?
>

I haven't had occasion to use WordPress, or Solr with it, for a while. Since
nobody else has replied: there does seem to be another plugin that is
open source and hosted on GitHub: https://wordpress.org/plugins/solr-power/ .
I cannot comment on how well it works. Alternatively, one could use a
PHP client library like Solarium.

Regards,
Gora


Re: bash: solr: command not found

2020-10-15 Thread Gora Mohanty
On Thu, 15 Oct 2020 at 22:27, Ryan W  wrote:

> Hi all,
>
> I run solr like this...
>
> service solr start
>

You are probably running Solr on a Linux machine using systemd, where
someone has set up the systemd files needed for Solr. This might not have
been done on your client's system. Try searching Google for "Solr systemd",
which turns up likely-looking sources for the files.


> However, another user of the server can't run solr this way.  They are
> seeing...
>
> bash: solr: command not found
>

However, this seems to point to a different problem, not one from systemd.
It looks like the systemd files may be set up, but the user running the
command does not have the solr executable in their PATH. Try adding it
to their PATH, or use the full path to solr in the systemd file.
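
If the unit file is missing, a minimal sketch looks like the following (the
/opt/solr paths and the solr user are assumptions; adjust them to the actual
install):

```ini
# /etc/systemd/system/solr.service -- illustrative sketch only
[Unit]
Description=Apache Solr
After=network.target

[Service]
Type=forking
User=solr
# Using the full path here also sidesteps PATH problems for the invoking user.
ExecStart=/opt/solr/bin/solr start
ExecStop=/opt/solr/bin/solr stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating it, `sudo systemctl daemon-reload` followed by
`sudo systemctl start solr` should behave like `service solr start`.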

Regards,
Gora


Re: Unsubscribe request

2020-03-16 Thread Gora Mohanty
On Tue, 17 Mar 2020 at 05:18, Arpit Agarwal 
wrote:

> Hi,
> Please unsubscribe my email address (arpit.agarwa...@gmail.com) from your
> mailing list .
>

Please follow the usual practice for unsubscribing from a public mailing
list: see https://lucene.apache.org/solr/community.html . You need to send
an email to solr-user-unsubscribe 
to unsubscribe, rather than writing to this list at large.

Regards,
Gora


Re: Unsubscribe please

2019-09-19 Thread Gora Mohanty
Hi,

Please see https://lucene.apache.org/solr/community.html#mailing-lists-irc
. In order to unsubscribe, please send mail to solr-user-unsubscribe


Regards,
Gora

>


Re: Unsubscribe me

2019-05-21 Thread Gora Mohanty
Hi,

Please see https://lucene.apache.org/solr/community.html#mailing-lists-irc
: you need to send mail to
   solr-user-unsubscr...@lucene.apache.org
in order to unsubscribe.

Regards,
Gora


Re: need one sample application

2019-03-25 Thread Gora Mohanty
On Mon, 25 Mar 2019 at 15:58, Suresh Kumar Shanmugavel
 wrote:
>
> the example which I need is in Solr7.4.0 and above
>
> On Mon, 25 Mar 2019 at 15:31, Suresh Kumar Shanmugavel <
> sureshkumar.shanmuga...@lastminute.com> wrote:
>
> > Hi Team,
> > I need one sample web application on Solr with master and slave
> > configurations having at least one core with it.
> >
> > If possible just do the modifications in the attached file and send back
> > to me.

Please don't take this otherwise, but your request is:
(a) ill-specified: a web application doing what? What else is needed
besides Solr? What technology are you looking for?
(b) too broad: it is unlikely that someone will sit down and do your
work for you.

Please try going through the Solr resources:
http://lucene.apache.org/solr/resources.html . Many examples
addressing common use cases are also available by searching Google.
After going through these, please ask pertinent questions about any
problems that you encounter, and people will probably be much more
willing to help.

Regards,
Gora


Re: Remove my mail from subscriptions

2019-02-07 Thread Gora Mohanty
On Thu, 7 Feb 2019 at 12:29, manohar c  wrote:

> Hi,
>  Please Remove my mail from the subscription list.
>

 Please see http://lucene.apache.org/solr/community.html#mailing-lists-irc

In a manner similar to how you subscribed, you need to send an email from
your subscribed account to solr-user-unsubscr...@lucene.apache.org in
order to unsubscribe.

Regards,
Gora


Re: Index data from mysql DB to Solr - From Scratch

2018-02-17 Thread Gora Mohanty
On 18 February 2018 at 08:18, @Nandan@ 
wrote:

> Thanks Rick.
> Is it possible to get some demo learning video link or web links from
> where I can get overview with real example?
> By which I can able to know in more details.
>

Searching Google for "Solr index data database" turns up many links with
examples, e.g.,
http://blog.comperiosearch.com/blog/2014/08/28/indexing-database-using-solr/
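
The heart of such a setup is a DIH data-config.xml along these lines (the
JDBC URL, credentials, and table/column names below are purely illustrative):

```xml
<!-- Sketch only: every name here is an assumption for illustration. -->
<dataConfig>
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/mydb"
              user="solr" password="secret"/>
  <document>
    <entity name="item" query="SELECT id, title, description FROM items">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
      <field column="description" name="description"/>
    </entity>
  </document>
</dataConfig>
```

A full-import is then triggered by requesting
/solr/<core>/dataimport?command=full-import .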

Regards,
Gora


Re: Does apache solr stores the file?

2017-12-06 Thread Gora Mohanty
On 6 December 2017 at 10:39, Munish Kumar Arora
 wrote:
>
> So the questions are,
> 1. Can I get the PDF content?
> 2. does Solr stores the actual file somewhere?
>a. If it stores then where it does?
> b. If it does not store then, is there a way to store THE FILE?

Normal practice would be to store the PDF file somewhere on the file
system where it can be served through an HTTP request. Then store the
filesystem path to the PDF file in Solr, so that it can be returned in
a Solr search response.
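
As a sketch, the schema side of that might look like this (the field and
type names are assumptions, not anything from the thread):

```xml
<!-- Store only the path to the PDF; index the extracted text for search. -->
<field name="pdf_path" type="string" indexed="false" stored="true"/>
<field name="content" type="text_general" indexed="true" stored="false"/>
```

A search hit then returns pdf_path, which the application turns into an
HTTP link to the file.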

Regards,
Gora


Re: solr expertise

2017-11-15 Thread Gora Mohanty
On 16 November 2017 at 03:54, Niraj Aswani  wrote:
>
> Hi,
>
> Apologies if this is not the relevant question for this mailing list but we
> are looking for a solr expert and were wondering If this is the right
> mailing list to send the job description. If there is any other list,
> please can you suggest?

Please see https://wiki.apache.org/solr/Support

Regards,
Gora


Re: Mysql data import issue

2016-01-28 Thread Gora Mohanty
On 29 January 2016 at 04:13, vsriram30  wrote:

> Hi,
> I am using Solr 4.6.1 and I am trying to import my data from mysql to solr.
>
> In mysql, I have a table with columns,
> id, legacyid, otherfields...
>
[...]

> But still I get mysql id field to solr id field mapping. Please let me know
> how to prevent this from happening.
>

How about if you do not select the mysql "id" field in the query attribute
for the entity?
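
That is, something along these lines in the DIH config (the column names are
from the post above; everything else is illustrative, and it assumes that
legacyid should become Solr's id):

```xml
<!-- MySQL's own id column is simply never selected, so DIH's implicit
     column-to-field matching never sees it. -->
<entity name="mytable"
        query="SELECT legacyid AS id, otherfield1, otherfield2 FROM mytable"/>
```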

Regards,
Gora


Re: curl adapter in solarium 3x

2015-12-03 Thread Gora Mohanty
On 3 December 2015 at 16:20, kostali hassan  wrote:
> How do I force the connection to close explicitly when it has finished
> processing, and not be pooled for reuse?
> Is there a way to tell the server to send a keep-alive timeout (with a
> default Apache install, it is 15 seconds or 100 requests, whichever comes
> first)? But cURL will just open another connection when that happens.

These questions are no longer relevant to the Solr mailing list.
Please ask on a Solarium mailing list.

In response to your earlier message, I had sent you a link to the
Solarium ZendHttpAdapter, which seems to allow keep-alive, unlike the
cURL adapter. Here it is again:
http://wiki.solarium-project.org/index.php/V1:Client_adapters . You
might also find this useful:
http://framework.zend.com/manual/1.12/en/zend.http.client.advanced.html

Regards,
Gora


Re: indexing rich data from directory using solarium

2015-12-02 Thread Gora Mohanty
On 2 December 2015 at 21:55, kostali hassan  wrote:
> Yes, there is an error in my Solr logs:
> SolrException URLDecoder: Invalid character encoding detected after
> position 79 of query string / form data (while parsing as UTF-8)
> 
> this is my post in stack overflow :
> http://stackoverflow.com/questions/34017889/solrexception-urldecoder-invalid-character-encoding-detected-after-position-79

Looks like an encoding error all right. Are you sure that you can
successfully POST the same document with SimplePostTool? If so, I would
guess that you are not using Solarium correctly, i.e., the PDF file is
getting POSTed such that Solr is getting the raw content rather than
the extracted content.

Regards,
Gora


Re: indexing rich data from directory using solarium

2015-12-02 Thread Gora Mohanty
On 2 December 2015 at 21:59, Erik Hatcher  wrote:
> Gora -
>
> SimplePostTool actually already adds the literal.id parameter* when in “auto” 
> mode (and it’s not an XML, JSON, or CSV file).

Ah, OK. It has been a while since I actually used the tool. Thanks for the info.

Regards,
Gora


Re: indexing rich data from directory using solarium

2015-12-02 Thread Gora Mohanty
On 2 December 2015 at 17:16, kostali hassan  wrote:
> Yes, that is logical, thank you, but I want to understand why the same
> data indexes fine in the shell using the Windows SimplePostTool:
>>
>> D:\solr\solr-5.3.1>java -classpath example\exampledocs\post.jar -Dauto=yes
>> -Dc=solr_docs_core -Ddata=files -Drecursive=yes
>> org.apache.solr.util.SimplePostTool D:\Lucene\document ;

That seems strange. Are you sure that you are posting the same PDF?
With SimplePostTool, you should be POSTing to the URL
/solr/update/extract?literal.id=myid , i.e., you need an option of
something like
-Durl=http://localhost:8983/solr/update/extract?literal.id=myid on the
command line for SimplePostTool.

As I said, I am not that familiar with Solarium. Are you sure that the
file is being POSTed to /solr/update/extract ? Are you seeing any
errors in your Solr logs?

Regards,
Gora


Re: indexing rich data from directory using solarium

2015-12-02 Thread Gora Mohanty
On 2 December 2015 at 22:35, kostali hassan  wrote:
> i fixed but he still a smal prb from time out 30sc of wamp server then i
> can just put 130files to a directory to index untill i index all my files :
> this is my function idex document:

Again, I am not familiar with Solarium, and at this point you are probably
better off asking on a Solarium-specific list, but my guess is that
you need keep-alive on the connection. It seems that Solarium's
ZendHttpAdapter supports this:
http://wiki.solarium-project.org/index.php/V1:Client_adapters .

Regards,
Gora


Re: Error on DIH log

2015-12-02 Thread Gora Mohanty
On 27 November 2015 at 11:12, Midas A  wrote:
> Error:
> org.apache.solr.common.SolrException: ERROR: [doc=83629504] Error adding
> field 'master_id'='java.math.BigInteger:0' msg=For input string:
> "java.math.BigInteger:0"

Sorry, was busy the last few days. On a closer look, it seems that
there is an issue with java.math.BigInteger fields not being
serialised properly, i.e., the input seems to be
"java.math.BigInteger:0" rather than 0.

Which version of Solr are you using? Might have to do with
https://issues.apache.org/jira/browse/SOLR-6165

Regards,
Gora


Re: indexing rich data from directory using solarium

2015-12-02 Thread Gora Mohanty
On 2 December 2015 at 16:32, kostali hassan  wrote:
[...]
>
> When i execute it i get this ERROR:
>
> org.apache.solr.common.SolrException: URLDecoder: Invalid character
> encoding detected after position 79 of query string / form data (while
> parsing as UTF-8)

Solr expects UTF-8 data. Your documents are probably in some different
encoding. You will need to figure out what the encoding is, and how to
convert it to UTF-8.
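
For plain-text sources on Linux, the conversion step can be as simple as
running the file through iconv. A sketch, where the WINDOWS-1252 source
encoding is a pure assumption (check the real encoding first, e.g. with
`file -i yourfile`):

```shell
# Sketch: convert a plain-text file from a legacy encoding to UTF-8
# before posting it to Solr.
to_utf8() {
  # $1 = input file, $2 = source encoding (defaults to WINDOWS-1252)
  iconv -f "${2:-WINDOWS-1252}" -t UTF-8 "$1"
}

# Usage sketch: to_utf8 document.txt > document.utf8.txt
```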

Regards,
Gora


Re: Error on DIH log

2015-11-26 Thread Gora Mohanty
On 27 November 2015 at 11:48, Midas A  wrote:
> The BigInteger part is coming from MySQL.

So, show us that part of the DIH config. As Alex mentioned, most
likely there is some error in the conversion: DIH seems to be getting
a string rather than a BigInteger.
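
One workaround sketch: cast the column to a string in the SQL itself, so the
JDBC driver never hands DIH a BigInteger (the master_id column name is from
the thread; the table and other column names are illustrative):

```xml
<entity name="doc"
        query="SELECT doc_id, CAST(master_id AS CHAR) AS master_id FROM docs"/>
```

Solr should still parse the resulting plain "0" into a numeric field.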

> Does the use of a transformer
> add latency
> while indexing?

No, a transformer is unlikely to add any noticeable latency.

Regards,
Gora


Re: Many files /dataImport in same project

2015-11-03 Thread Gora Mohanty
On 2 November 2015 at 22:38, Alexandre Rafalovitch <arafa...@gmail.com> wrote:
> On 2 November 2015 at 11:30, Gora Mohanty <g...@mimirtech.com> wrote:
>> As per my last
>> follow-up, there is currently no way to have DIH automatically pick up
>> different data-config files without manually editing the DIH
>> configuration each time.
>
> I missed previous discussions, but the DIH config file is given in a
> query parameter. So, if there is a bunch of them on a file system, one
> could probably do
> find . - name "*.dihconf" | xargs curl .

Sorry, I might be missing something, or things might have changed with
newer Solr versions. I thought that the DIH data-config file had to be
specified in solrconfig.xml.

Regards,
Gora


Re: Many files /dataImport in same project

2015-11-03 Thread Gora Mohanty
On 3 November 2015 at 21:25, Alexandre Rafalovitch <arafa...@gmail.com> wrote:
> On 3 November 2015 at 10:38, Gora Mohanty <g...@mimirtech.com> wrote:
>>> I missed previous discussions, but the DIH config file is given in a
>>> query parameter. So, if there is a bunch of them on a file system, one
>>> could probably do
>>> find . - name "*.dihconf" | xargs curl .
>>
>> Sorry, I might be missing something, or things might have changed with
>> newer Solr versions. I thought that the DIH data-config file had to be
>> specified in solrconfig.xml.
>
> It is just a parameter to the request handler. I am 99.5% sure you
> just pass config= parameter to the call and it picks it up then.
> That's why you can keep updating the definition file and not need to
> reload the core, it gets reloaded on each DIH call.
[...]

OK, looks like you are right. Had never had reason to use this, so
hadn't realised that this was possible.
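
For anyone finding this later, that find-plus-curl idea can be sketched so
the generated commands are printed for inspection before being executed (the
endpoint URL and the *.dihconf naming convention are assumptions):

```shell
# Emit one DIH full-import command per data-config file found; pipe the
# output to sh once the URLs look right.
build_dih_urls() {
  solr_url="$1"
  dir="${2:-.}"
  find "$dir" -name "*.dihconf" | while read -r f; do
    echo "curl \"${solr_url}?command=full-import&config=${f}\""
  done
}

# Usage sketch:
#   build_dih_urls "http://localhost:8983/solr/mycore/dataimport" ./configs | sh
```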

Regards,
Gora


Re: Many files /dataImport in same project

2015-11-02 Thread Gora Mohanty
On 2 November 2015 at 21:50, fabigol  wrote:
> Hi,
>  I have many dataimport config files.
> I want to start them all at once, instead of launching DataImport for each
> file. Is that possible?

Not to be antagonistic, but did you not ask this before, and have
various people not tried to help you?

With all due respect, it seems that you need to understand your
specific setup better in order to ask more specific questions. It
would be good if you stuck to one thread for that. As per my last
follow-up, there is currently no way to have DIH automatically pick up
different data-config files without manually editing the DIH
configuration each time. This is probably unlikely to get fixed as one
can put all DIH entities into one file, and import each as needed.
Further, if what you need is complex requirements in populating Solr,
it is advisable to use SolrJ, or similar libraries for other
languages.
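
To illustrate the one-file approach: DIH allows several entities in a single
data-config.xml, and the import request can name which one to run (all names
below are illustrative):

```xml
<document>
  <entity name="products" query="SELECT id, name FROM products"/>
  <entity name="reviews"  query="SELECT id, body FROM reviews"/>
</document>
```

A request to /dataimport?command=full-import&entity=products then imports
only that entity.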

Regards,
Gora


Re: Many mapping files

2015-10-29 Thread Gora Mohanty
On 28 October 2015 at 19:45, fabigol  wrote:
>
> Thanks for your response.
> I have 7 *.xml files. I have already worked with Solr, but with only one
> file. My question is why this project has 7 files describing an entity.

I am afraid that it is still difficult for an external person to guess
at how these files are being used without looking at complete details.
These look like separate data-config.xml files for DIH, and my best
guess is that either these were used for trial-and-error while setting
up the Solr installation, or that the person responsible manually
edited Solr's configuration files to use one or the other
data-config.xml (not sure why one would do that).

You might be better off trying to understand things from the other
end, i.e., figuring out what searches are made in the system, and what
data from the RDBMS needs to be imported into Solr for these. That
should lead you to an understanding of how these DIH data
configuration files might be used.

Regards,
Gora


Re: Many mapping files

2015-10-27 Thread Gora Mohanty
On 27 October 2015 at 13:22, fabigol  wrote:
> Hi,
> I already posted about this subject, and that helped me a lot. I have taken
> over a Solr project, but the person who built it is gone.
> There are several configuration/mapping files (database/Solr).
> What is the purpose of that?
> thanks for your help

Unfortunately, this question is too vague for any kind of reasonable
help. Your best bet probably is to get a basic familiarity with Solr.
There are books available, and the Wiki is a helpful resource. You
might want to start at https://wiki.apache.org/solr/ and go through
the tutorial linked therein. Many of the files will be explained there,
and once you have at least a basic understanding, please ask more
specific questions about specific configuration files on this list.

Regards,
Gora


Re: Spam handling with ASF mailing lists

2015-10-01 Thread Gora Mohanty
> On 23 September 2015 at 21:10, Upayavira  wrote:
>
> > If you have specific questions about spam handling, then I'd suggest you
> > ask on the ASF infrastructure list, but generally, we can expect that
> > there will be occasions when something that seems obviously spam gets
> > through our systems.
>
> OK, will take this up on the ASF infrastructure list as you suggest.
> Thanks: I was not quite sure where to address this plaint to.
>

Sorry to bug all of you again, but I was annoyed enough once more by the
continuing spam that slips through the cracks to try and complain to ASF
infrastructure lists. Unfortunately, at the very beginning of
http://www.apache.org/dev/infra-mail it says "Participation in these lists
is only available to ASF committers.", and I am not a committer. Would it
be possible for someone to forward this to the infrastructure list?

Not to point fingers, but the spam that I have been responding to would
*not* slip through the open-source-based filters that we maintain for our
small clients. I would think that ASF, with all its resources, would be
able to handle this.

Regards,
Gora


Spam handling with ASF mailing lists

2015-09-23 Thread Gora Mohanty
Hi,

There seems to have been a significant spike in spam emails forwarded
to moderators. While the volume is still tractable, what surprises me
is that almost all of these ought to have been caught by automated
spam filters. E.g., a significant fraction of the spam seems to
originate from just a few email addresses. Surely it is possible to
black-list such addresses after they have been marked as spam several
times, and have never had a successful post?

Regards,
Gora


Re: Spam handling with ASF mailing lists

2015-09-23 Thread Gora Mohanty
On 23 September 2015 at 21:10, Upayavira <u...@odoko.co.uk> wrote:
>
>
> On Wed, Sep 23, 2015, at 04:34 PM, Gora Mohanty wrote:
>> Hi,
>>
>> There seems to have been a significant spike in spam emails forwarded
>> to moderators. While the volume is still tractable, what surprises me
>> is that almost all of these ought to have been caught by automated
>> spam filters. E.g., a significant fraction of the spam seems to
>> originate from just a few email addresses. Surely it is possible to
>> black-list such addresses after they have been marked as spam several
>> times, and have never had a successful post?
>
> I think you'll find spam handling is something of an art-form, and an
> ongoing catch-up race. I think you'll also find it horrific how much
> spam ASF infrastructure blocks before it gets to moderators.

Agreed that spam is an ongoing, and maybe never-ending, battle but, as
mentioned, these seem like obvious fodder for automated spam filters,
and email has still been accepted for 2-3 weeks from a single email
address. I am pretty sure that SpamAssassin would catch many of these.

> If you have specific questions about spam handling, then I'd suggest you
> ask on the ASF infrastructure list, but generally, we can expect that
> there will be occasions when something that seems obviously spam gets
> through our systems.

OK, will take this up on the ASF infrastructure list as you suggest.
Thanks: I was not quite sure where to address this complaint.

Regards,
Gora


Re: Want to modify Solr Source Code

2015-03-17 Thread Gora Mohanty
On 17 March 2015 at 13:21, Nitin Solanki nitinml...@gmail.com wrote:

 I have already downloaded
 http://archive.apache.org/dist/lucene/solr/4.10.2/solr-4.10.2.tgz. Now,
 How
 do I view or edit the source code of any file? I don't have any idea about
 it. Your help is appreciated.
 Please guide me step by step.
 Thanks again.


You need to learn the basics of putting together a development setup
yourself, or from a local mentor. A .tgz is a gzip-compressed tar file that
can be unarchived with tar, or most unarchivers. You are probably best off
using a Java IDE, such as Eclipse, to edit the source code. The Wiki
article covers how to compile the code and run the built-in tests.

Regards,
Gora


Re: Want to modify Solr Source Code

2015-03-17 Thread Gora Mohanty
On 17 March 2015 at 13:38, Nitin Solanki nitinml...@gmail.com wrote:
 Hi Gora,
  Thanks again. Do you have the link to the Wiki article?
 Please send it to me.

Sent the link in my very first follow-up:
http://wiki.apache.org/solr/HowToContribute#Contributing_Code_.28Features.2C_Bug_Fixes.2C_Tests.2C_etc29

Regards,
Gora


Re: Want to modify Solr Source Code

2015-03-17 Thread Gora Mohanty
On 17 March 2015 at 12:22, Nitin Solanki nitinml...@gmail.com wrote:

 Hi,
  I want to modify the Solr source code. I don't have any idea where
 the source code is available. I want to edit the source code. How can I do that?
 Any help please...

Please start with:
http://wiki.apache.org/solr/HowToContribute#Contributing_Code_.28Features.2C_Bug_Fixes.2C_Tests.2C_etc29

Regards,
Gora


Re: Want to modify Solr Source Code

2015-03-17 Thread Gora Mohanty
On 17 March 2015 at 13:12, Nitin Solanki nitinml...@gmail.com wrote:
 Hi Gora,
   Hi, I want to make changes only on my machine, without SVN.
 I want to test the source code. How? Any steps to do so? Please help.

You could still use SVN for a local repository. Else, you can download
a tar.gz of a Solr distribution from under the Download link at the
top right of http://lucene.apache.org/solr/

Regards,
Gora


Re: Unsubscribing MAIL

2015-02-26 Thread Gora Mohanty
On 27 February 2015 at 12:10, Kishan Parmar kishan@gmail.com wrote:

 HI
 I want to unsubscribe from the Solr and Lucene mailing lists, so please do the same.

Please follow the standard procedure for unsubscribing from most
mailing lists, and send a mail to
solr-user-unsubscr...@lucene.apache.org . For other lists, you might
want to take a look at the addresses listed under
http://lucene.apache.org/core/discussion.html

Regards,
Gora


Re: ignoring bad documents during index

2015-02-20 Thread Gora Mohanty
On 20 February 2015 at 15:31, SolrUser1543 osta...@gmail.com wrote:

 I want to experiment with this issue; where exactly should I take a look?
 I want to try to fix this missing aggregation.

 What class is responsible for that?

Are you indexing through SolrJ, DIH, or something else?

Regards,


Re: Possibility of Indexing without feeding again in Solr 4.10.2

2015-02-17 Thread Gora Mohanty
On 17 February 2015 at 15:18, dinesh naik dineshkumarn...@gmail.com wrote:

 Hi all,
 How can one re-index in Solr without importing the data again?
 Is there a way to re-index only a few documents?


If you have a unique ID for your documents, updating the index with that ID
will update just that document. Other than that, you need to import all
your data again if you want to change the Solr index.
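
As a sketch, re-importing one document is just posting it again with the same
uniqueKey, e.g. a JSON body like the following sent to
/solr/<core>/update?commit=true with Content-Type application/json (the id
and field here are assumptions). Note that this replaces the whole stored
document, so all fields must be re-sent unless atomic updates are used:

```json
[
  { "id": "doc-42", "title": "Updated title" }
]
```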

Regards,
Gora


Re: Better way of copying/backup of index in Solr 4.10.2

2015-02-17 Thread Gora Mohanty
On 17 February 2015 at 15:19, dinesh naik dineshkumarn...@gmail.com wrote:

 What is the best way for copying/backup of index in Solr 4.10.2?

Please take a look at
https://cwiki.apache.org/confluence/display/solr/Backing+Up

Regards,
Gora


Re: indexed and stored fields don't appear in the response

2015-02-09 Thread Gora Mohanty
On 9 February 2015 at 14:26, Sandy Ding sandy.ding...@gmail.com wrote:
 Part of my schema is as follows:

 <field name="_version_" type="long" indexed="true" stored="true"/>
 <field name="id" type="mem_long" indexed="true" stored="true"
 required="true" multiValued="false" docValues="true" />
 <field name="bizid" type="string" indexed="true" stored="false"
 required="false" multiValued="true" docValues="true"/>

 When I issue the following command,

 curl "http://localhost:8983/solr/pa_info/select?q=*:*&rows=10"

 The response is:

 <response>
 <lst name="responseHeader"><int name="status">0</int><int
 name="QTime">16</int><lst name="params"><str name="q">*:*</str><str
 name="rows">10</str></lst></lst><result name="response" numFound="2831784"
 start="0" maxScore="1.0"><doc><long name="id">659404797</long><long
 name="_version_">1492348841962242048</long></doc>

Sorry, what exactly is the problem? Your indexed and stored fields are
_version_ and id, both of which seem to be appearing in each
document.

Regards,
Gora


Re: indexed and stored fields don't appear in the response

2015-02-09 Thread Gora Mohanty
On 9 February 2015 at 15:50, Anshum Gupta ans...@anshumgupta.net wrote:
 Common reasons for that would be
 1. Your default fl in solrconfig is set to id, _version_. Can you try
 explicitly mentioning fl=id,tagid,bizid in the request? Also, it'd be good
 to look at your solrconfig.xml.
 2. Chances are, those documents do not contain those fields to begin with.
 Both bizid and and tagid aren't required fields and so those documents
 might not even have those. What you've shared confuses me a bit. Does your
 query contain q=bizid:2380505101 or q=*:* ? If you are querying for bizid,
 the field (with that value) should be a part of the document and the
 previous point should be the reason why you're seeing this behavior.

One more possibility to consider is that if you change the schema, you
would need to reload the Solr core, and reindex.

Regards,
Gora


Re: If I change schema.xml then reIndex is neccessary in Solr or not?

2015-01-21 Thread Gora Mohanty
On 22 January 2015 at 11:23, Nitin Solanki nitinml...@gmail.com wrote:
 I *indexed* *2GB* of data. Now I want to *change* the *type* of *field*
 from *textSpell* to *string* type into

Yes, one would need to reindex.

Regards,
Gora


Re: htaccess

2015-01-06 Thread Gora Mohanty
Hi,

Your message seems quite confused (even the URL is not right for most
normal Solr setups), and it is not clear what you mean by "function
properly". Solr is a search engine, and has no idea about .htaccess files.

Are you asking whether Solr respects directives in .htaccess files? I am
pretty sure that cannot be the case.

As regards securing Solr, that is again normally handled outside of Solr.
Please start from https://wiki.apache.org/solr/SolrSecurity

No offence, but it seems that your real concerns might lie elsewhere.
Please take a look at http://people.apache.org/~hossman/#xyproblem

Please do follow up on this list if your questions have not been addressed.

Regards,
Gora


On 6 January 2015 at 23:28, Craig Hoffman choff...@eclimb.net wrote:

 Quick question: If put a .htaccess file in www.mydomin.com/8983/solr/#/
 will Solr continue to function properly? One thing to note, I will have a
 CRON job that runs nightly that re-indexes the engine. In a nutshell I’m
 looking for a way to secure this area.

 Thanks,
 Craig
 --
 Craig Hoffman
 w: http://www.craighoffmanphotography.com
 FB: www.facebook.com/CraigHoffmanPhotography
 TW: https://twitter.com/craiglhoffman



Re: Trying to understand error

2014-12-17 Thread Gora Mohanty
On 18 December 2014 at 13:10, Joel Stobart j...@stobart.co.uk wrote:
 When I try to upload a solr item by the index:

 site/solr/#/collection1/documents

 When I upload a simple text file via the interface:

 Response:

 {
   error: {
 msg: java.lang.NoSuchFieldError: LFH_SIG,
 trace: java.lang.RuntimeException: java.lang.NoSuchFieldError:
 LFH_SIG\n\tat

java.lang.NoSuchFieldError is a JVM linkage error rather than a Solr schema
problem: it usually means mismatched JAR versions on the classpath of your
Tomcat/Solr install (LFH_SIG is a constant in the zip-handling code used
when extracting file content). Check for duplicate or stale JARs.

Regards,
Gora


Re: data import handler clarifications/ pros and cons.

2014-10-07 Thread Gora Mohanty
On 8 October 2014 01:00, Ahmet Arslan iori...@yahoo.com.invalid wrote:



 Hi Durga,

 That wiki talks about an uncommitted code. So it is not built in.

Maybe it is just me, but given that there are existing scheduling
solutions in most operating systems, I fail to understand why
people expect Solr to expand to include that. How would that
fit into Solr's goals?

IMHO, going by the argument that Solr should also do whatever
anyone could want, one could replace M-x hail-emacs with
M-x hail-solr-lucene.

Regards,
Gora


Re: data import handler clarifications/ pros and cons.

2014-10-06 Thread Gora Mohanty
On 6 October 2014 18:40, Karunakar Reddy karunaka...@gmail.com wrote:

 Hey Alex,
 Thanks for your reply.
 Is the delta-import handler configurable? Say, if I want to update documents
 every 20 minutes, is that possible through some configuration/setting like
 autocommit?

As a delta-import is triggered by requesting a URL, you can do this through
a scheduler on your OS. On Linux, we have a cron job that uses curl. I do
not see a big argument for Solr to include a scheduler.
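
A sketch of such a crontab entry (the core name, handler path, and 20-minute
interval are assumptions):

```cron
# Trigger a DIH delta-import every 20 minutes.
*/20 * * * * curl -s "http://localhost:8983/solr/mycore/dataimport?command=delta-import" > /dev/null
```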

Regards,
Gora


Re: indexing comments with Apache Solr

2014-08-06 Thread Gora Mohanty
On 6 August 2014 14:13, Ali Nazemian alinazem...@gmail.com wrote:

 Dear all,
 Hi,
 I was wondering how I can manage to index comments in Solr? Suppose I am
 going to index a web page that has a content of news and some comments that
 are presented by people at the end of this page. How can I index these
 comments in solr? consider the fact that I am going to do some analysis on
 these comments. For example I want to have such query flexibility for
 retrieving all comments that are presented between 24 June 2014 to 24 July
 2014! or all the comments that are presented by specific person. Therefore
 defining these comment as multi-value field would not be the solution since
 in this case such query flexibility is not feasible. So what is you
 suggestion about document granularity in this case? Can I consider all of
 these comments as a new document inside main document (tree based
 structure). What is your suggestion for this case? I think it is a common
 case of indexing webpages these days so probably I am not the only one
 thinking about this situation. Please share your thoughts and perhaps your
 experiences with this situation with me. Thank you very much.

Parsing a web page, and breaking parts of it up for indexing into different
fields, is out of the scope of Solr. You might want to look at Apache Nutch,
which can index into Solr, and/or other web crawlers/scrapers.

Regards,
Gora


Re: Solr Full Import frozen after indexing a fixed number of records

2014-07-27 Thread Gora Mohanty
On 27 July 2014 12:13, Aniket Bhoi aniket.b...@gmail.com wrote:

 On Fri, Jul 25, 2014 at 8:32 PM, Aniket Bhoi aniket.b...@gmail.com
 wrote:

  I have Apache Solr,hosted on my apache Tomcat Server with SQLServer
  Backend.

 [...]

  After I run a full import,Indexing proceeds sucessfully,but seems to
  freeze everytime after fetching fixed number of records.What I mean is
  after it fetches 10730 records it just freezes and doesnt process any
 more.
 
  Excerpt from dataimport.xml:
 
   <lst name="statusMessages">
   <str name="Time Elapsed">0:15:31.959</str>
   <str name="Total Requests made to DataSource">0</str>
   <str name="Total Rows Fetched">*10730*</str>
   <str name="Total Documents Processed">3579</str>
   <str name="Total Documents Skipped">0</str>
   <str name="Full Dump Started">2014-07-25 10:44:39</str>
 
  This seems to happen everytime.
 
  I checked the tomcatlog.Following is the excerpt when Solr freezes:
 
  INFO:  Generating record for Unique ID :null attachment Ref:null
  parent ref :nullexecuted by thread:25


[...]

Something is wrong with your DIH config file: You seem to be getting null
for a document unique ID. Please share the file with us.

Regards,
Gora


Re: Inquiries

2014-07-11 Thread Gora Mohanty
On 11 July 2014 19:14, Shawn Heisey s...@elyograg.org wrote:

 On 7/11/2014 5:56 AM, Veeger Worakamon wrote:
  I am not sure about which email I should send my questions to because I do 
  not want this to be answered publicly.  Is the channel to communicate 
  personally to the person who can answer my questions relating to Solr 
  functionalities? If It is I would appreciate your reply and I will send 
  more questions to you later. But firstly I would like to know that How can 
  a company get production support from Solr quickly and privately? Is there 
  a local support from Solr engineers in the Netherlands?
 
  Looking forward to hearing from you. Thank you very much.

 The Lucene/Solr project is open source, doing things behind closed doors
 is against everything that its members believe.  If this is what you
 really want, then you're probably going to have to pay for it.  If you
 want free help, use this mailing list or the IRC channel and keep it
 public.  You can always keep specific data (logins, configs, etc)
 private, especially on IRC, where the help is typically interactive and
 one-on-one.

 The first three hits on a google search for solr support resources
 offer paid consulting services:

 http://www.innoventsolutions.com/solr-consulting.html
 http://www.lucidworks.com/consulting/
 http://www.searchtechnologies.com/solr-consulting.html

There is also http://wiki.apache.org/solr/Support

Regards,
Gora


Re: don't count facet on blank values

2014-07-08 Thread Gora Mohanty
On 8 July 2014 15:46, Aman Tandon amantandon...@gmail.com wrote:
 Hi,

 Is this possible to not to count the facets for the blank values?
 e.g. cat:
[...]

Either filter them out in the query, or remove them client-side when
displaying the results.

Regards,
Gora


Re: Field for 'species' data?

2014-07-03 Thread Gora Mohanty
On 3 July 2014 21:40, Dan Bolser dbol...@ebi.ac.uk wrote:

 Hi,

 Does anyone on the list have experience with hierarchical facets,
 specifically for species data?
[...]

Maybe not specifically for species data, but hierarchical faceting works
pretty well with Solr. Please see
http://wiki.apache.org/solr/HierarchicalFaceting
For your use case, I would probably use pivot facets:
http://wiki.apache.org/solr/HierarchicalFaceting#Pivot_Facets
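As a hedged sketch (the genus/species field names are hypothetical, not from this thread), a pivot-facet request could be assembled like this:

```python
from urllib.parse import urlencode

# Hypothetical schema fields "genus" and "species"; adjust to your own.
params = {
    "q": "*:*",
    "rows": 0,                        # only facet counts are needed
    "facet": "true",
    "facet.pivot": "genus,species",   # species counts nested under each genus
    "wt": "json",
}
query_string = urlencode(params)
print(query_string)
```

Appending that to your select URL (e.g. http://localhost:8983/solr/<core>/select?) returns a facet_pivot section with per-genus, per-species counts.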

Regards,
Gora


Re: Customise score

2014-07-02 Thread Gora Mohanty
On 2 July 2014 20:32, rachun rachun.c...@gmail.com wrote:
 Dear all,

 Could anybody suggest me how to customize the score?
 So, I have data like this ..

 {ID : '0001', Title :'MacBookPro',Price: 400,Base_score:'121.2'}
 {ID : '0002', Title :'MacBook',Price: 350,Base_score:'100.2'}
 {ID : '0003', Title :'Laptop',Price: 300,Base_score:'155.7'}

 Notice that I have ID field for uniqueKey.
 When I query q=MacBook  sort=score desc
 it will return result something like this

 {ID : '0002', Title :'MacBook',Price: 350,Base_score:'100.2',score:1.45}
 {ID : '0001', Title :'MacBookPro',Price: 400,Base_score:'121.2',score:1.11}

 But I want solr to produce score by also using my Base_score. The score
 should be something like this

 - score = 100.2 + 1.45 = 101.65
 - score = 121.2 + 1.11 = 122.31

You should use Solr's sum function query:
http://wiki.apache.org/solr/FunctionQuery#sum
q=MacBook&sort=sum(Base_score,score)+desc should do it.
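A minimal sketch of building that request URL (only Base_score comes from the example above; the rest is the stock /select handler):

```python
from urllib.parse import urlencode

# Sort by a function query: the stored base score plus the relevance score.
params = {
    "q": "MacBook",
    "sort": "sum(Base_score,score) desc",
    "fl": "*,score",   # return all fields plus the computed relevance score
}
qs = urlencode(params)
print(qs)
```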

Regards,
Gora


Re: Drupal 7 solr index remaining question

2014-06-13 Thread Gora Mohanty
On 13 June 2014 13:05, kastania44 kastani...@hotmail.com wrote:

 Hello,

 In our Drupal 7 site we are using Apache Solr to index and search.

This is best addressed to a Drupal-related list. Depending on which Drupal
plugin you are using there might be help forums available.

Regards,
Gora


Re: Delta-Import not loading child entities?

2014-06-13 Thread Gora Mohanty
On 14 June 2014 00:36, Drew Mazurek d...@veoci.com wrote:

 A little more info... removing the cache fixes everything.  Are delta
 queries incompatible with the cache?  There isn't a lot of current
 documentation on this, far as I can tell.

Not quite au courant myself with 4.8.1 (need to install new Java somewhere),
but it would help to post your DIH configuration file.

Regards,
Gora


Re: solr user

2014-06-03 Thread Gora Mohanty
On 3 June 2014 11:22, Manoj V manojv1...@gmail.com wrote:
 I m working on solr. i m interested in getting added to solr user group.

 Can you please add me to the group ?

If mail from your address is reaching this list, you are already subscribed
to it. Presumably, you subscribed by following the instructions at
https://lucene.apache.org/solr/discussion.html
Or, did you mean something else?

Regards,
Gora


Re: score retrieval performance

2014-05-19 Thread Gora Mohanty
On 19 May 2014 19:52, Michael Ryan mr...@moreover.com wrote:
 Is there any significant difference in query speed when retrieving the score 
 pseudo-field? E.g., does...

 q=foosort=date+descfl=*,score

 ...take longer to run than...

 q=foosort=date+descfl=*

 I know there's different code paths in Solr depending on whether the score is 
 needed or not, but not sure how much slower the score path is...

If you are using the q parameter, the score is being calculated in any
case, and while we
have never measured this, I would imagine that the performance hit
from retrieving the score
would be minuscule.

Are you perchance thinking of using filter queries using the fq
parameter? As filter queries are
cached, using these can indeed make a difference.

Regards,
Gora


Re: URLDataSource : indexing from other Solr servers

2014-05-14 Thread Gora Mohanty
On 12 May 2014 21:41, helder.sepulveda helder.sepulv...@homes.com wrote:

 I been trying to index data from other solr servers but the import always
 shows:
 Indexing completed. Added/Updated: 0 documents. Deleted 0 documents.
 Requests: 1, Fetched: 0, Skipped: 0, Processed

 My data config looks like this:

Nothing came through for your data config. Please send it again, or put it up
somewhere online. From the DIH message, it seems that it is not even fetching
anything, so make sure that your URLs are correct.

Regards,
Gora


Re: URLDataSource : indexing from other Solr servers

2014-05-13 Thread Gora Mohanty
On 12 May 2014 22:52, helder.sepulveda helder.sepulv...@homes.com wrote:
 Here is the data config:

 <dataConfig>
   <dataSource type="URLDataSource"/>

   <document name="listingcore">
     <entity name="listing" pk="link"
             url="http://slszip11.as.homes.com/solr/select?q=*:*"
             processor="XPathEntityProcessor"
             forEach="/response/result/doc"
             transformer="DateFormatTransformer">
       <field column="batch_address"
              xpath="/response/result/doc/str[@name='batch_address']"/>
       <field column="batch_state"
              xpath="/response/result/doc/str[@name='batch_state']"/>
       <field column="batch_city"
              xpath="/response/result/doc/str[@name='batch_city']"/>
       <field column="batch_zip"
              xpath="/response/result/doc/str[@name='batch_zip']"/>
       <field column="sz_id"
              xpath="/response/result/doc/long[@name='sz_id']"/>
     </entity>
   </document>
 </dataConfig>

Hmm, I see no issues here. Can you also share your Solr schema?
Is the URL accessible, and the results from Solr show properly when
loaded in a browser window? I cannot seem to reach slszip11.as.homes.com
but that could be because it is restricted to certain IPs.

Regards,
Gora


Re: Website running Solr

2014-05-12 Thread Gora Mohanty
On 11 May 2014 23:39, Ahmet Arslan iori...@yahoo.com wrote:

 Hi,

 Some site owners put themselves here :

 https://wiki.apache.org/solr/PublicServers

Thanks for the reminder: I need to add some sites there.
If you got it, flaunt it :-)



 Besides, I would try *:* match all docs query.

Won't work. Many front-ends, such as the excellent Haystack for
Django, will not expose raw Solr queries by default.

Regards,
Gora


Re: Solr, How to index scripts *.sh and *.SQL

2014-05-11 Thread Gora Mohanty
On 8 May 2014 12:25, Visser, Marc marc.viss...@ordina.nl wrote:

 HI All,
 Recently I have set up an image with SOLR. My goal is to index and extract 
 files on a Windows and Linux server. It is possible for me to index and 
 extract data from multiple file types. This is done by the SOLR CELL request 
 handler. See the post.jar cmd below.

 j ava -Dauto -Drecursive -jar post.jar Y:\ SimplePostTool version 1.5 Posting 
 files to base url localhost:8983/solr/update.. Entering auto mode. File 
 endings considered are xml,json,csv,pdf,doc,docx,ppt,pp 
 tx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log Entering recursive 
 mode, max depth=999, delay=0s 0 files indexed.

 Is it possible to index and extract metadata/content from file types like .sh 
 and .sql? If it is possible I would like to know how of course :)

Don't know about Windows, but on Linux these are just text files. What
metadata are you referring to? Normally, a Linux text file only has content,
unless you are talking about metadata such as obtained from:
   file cmd.sh

Regards,
Gora


Re: Anybody uses Solr JMX?

2014-05-04 Thread Gora Mohanty
On May 5, 2014 7:09 AM, Alexandre Rafalovitch arafa...@gmail.com wrote:

 I have religiously kept jmx statement in my solrconfig.xml, thinking
 it was enabling the web interface statistics output.

 But looking at the server logs really closely, I can see that JMX is
 actually disabled without server present. And the Admin UI does not
 actually seem to care after a quick test.

 Does anybody have a real experience with Solr JMX? Does it expose more
 information than Admin UI's Plugins/Stats page? Is it good for


Have not been using JMX lately, but we were using it in the past. It does
allow monitoring many useful details. As others have commented, it also
integrates well with other monitoring tools, as JMX is a standard.

Regards,
Gora


Re: Apache Solr - Pdf Indexing.

2014-04-29 Thread Gora Mohanty
On Apr 29, 2014 2:52 PM, vignesh vignes...@ninestars.in wrote:

 Hi Team,



  I am indexing PDF using  Apache Solr 3.6 . Passing around
3000 keywords using the OR operator and able to get the files containing
the keywords. Kindly guide me to get the keyword list in a .PDF file.

What do you mean? Do you want Solr search results in a PDF file? Why would
a search engine provide such functionality? You can take the Solr XML/JSON
results, and generate a PDF if you need that.

Regards,
Gora


Re: space issue in search results

2014-04-28 Thread Gora Mohanty
On 28 April 2014 12:42, PAVAN pavans2...@gmail.com wrote:

 I have indexed title in the following way.

 honda cars in rajaji nagar
 honda cars in rajajinagar.

 suppose if i search for

 honda cars in rajainagar (OR)
 honda cars in rajaji nagar

 it has to display both the results.

Please do not start multiple threads with the same question.

The straightforward way to do what you want is to use synonyms:
  rajaji nagar, rajajinagar
as presumably you want to collapse spaces only for things like
place names.

Regards,
Gora


Re: SolrCloud OR distributed Solr

2014-03-30 Thread Gora Mohanty
On 30 March 2014 23:12, Priti Solanki pritiatw...@gmail.com wrote:

 Hello Member,

 Is there any difference between distributed solr  solrCloud ?

You might be confusing the older Solr distributed search with the new SolrCloud:
* Older distributed search: https://wiki.apache.org/solr/DistributedSearch
* SolrCloud: https://cwiki.apache.org/confluence/display/solr/SolrCloud

 Consider I have three countries' product. I have indexed one country data
 and it's index size is 160 gb+

 Now we have other two countries and now I am confused !

 My client ask me what is the difference if we procure another Solr server
 and indexed separatelyI was thinking for solrcloud.Can someone explain
 how we can explain these two approaches in simple words and if there are
 any reading links please share.

With 4.0+ versions of Solr, you probably want to go for SolrCloud.

Regards,
Gora


Re: Indexing parts of an HTML file differently

2014-03-25 Thread Gora Mohanty
On 25 March 2014 15:59, Michael Clivot cli...@netmedia.de wrote:
 Hello,

 I have the following issue and need help:

 One HTML file has different parts for different countries.
 For example:

 <!-- Country: FR, BE -->

 Address for France and Benelux

 <!-- Country End -->
 <!-- Country: CH -->

 Address for Switzerland

 <!-- Country End -->

 Depending on a parameter, I show or hide the parts on the website
 Logically, all parts are in the index and therefore all items are found by 
 SolR.
 My question is: how can I have only the items for the current country in my 
 result list?

How are you fetching the HTML content, and indexing it into Solr?
It is probably best to handle this requirement at that point. Haven't
used Nutch ( http://nutch.apache.org/ ) recently, but you might be
able to use it for this.

Regards,
Gora


Re: Multiple search analyzers question

2014-03-25 Thread Gora Mohanty
On Mar 25, 2014 10:37 PM, ku3ia dem...@gmail.com wrote:

 Hi all!
 Now I have a default search field, defined as

 <field name="Text" type="text" indexed="true" stored="true" />
 ...
 <fieldType name="text" class="solr.TextField" autoGeneratePhraseQueries="true">
   <analyzer type="index">
     <tokenizer class="solr.ClassicTokenizerFactory"/>
     <filter class="solr.ClassicFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.KStemFilterFactory"/>
     <filter class="solr.TrimFilterFactory"/>
     <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
     <filter class="solr.LengthFilterFactory" min="2" max="50"/>
   </analyzer>
   <analyzer type="query">
     <tokenizer class="solr.ClassicTokenizerFactory"/>
     <filter class="solr.ClassicFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.KStemFilterFactory"/>
     <filter class="solr.TrimFilterFactory"/>
     <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
     <filter class="solr.LengthFilterFactory" min="2" max="50"/>
   </analyzer>
 </fieldType>

 In a future, I will need to search using my current field (with KStem
 filter) and need alternative search - w/o using KStem filter. The easiest
 way is to add a copy field and declare a new field type (w/o KStem):

 <field name="TextNKS" type="text_nks" indexed="true" stored="false" />
 ...
 <fieldType name="text_nks" class="solr.TextField" autoGeneratePhraseQueries="true">
   <analyzer type="index">
     <tokenizer class="solr.ClassicTokenizerFactory"/>
     <filter class="solr.ClassicFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.TrimFilterFactory"/>
     <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
     <filter class="solr.LengthFilterFactory" min="2" max="50"/>
   </analyzer>
   <analyzer type="query">
     <tokenizer class="solr.ClassicTokenizerFactory"/>
     <filter class="solr.ClassicFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.TrimFilterFactory"/>
     <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
     <filter class="solr.LengthFilterFactory" min="2" max="50"/>
   </analyzer>
 </fieldType>

 and to re-index all my data.
 Is any alternative way?
[...]

No. If your analysers change, and/or you add new fields, you will need to
reindex.

Regards,
Gora


Re: DIH dataimport.properties Zulu time

2014-03-25 Thread Gora Mohanty
On 26 March 2014 02:44, Kiran J kiranjuni...@gmail.com wrote:

 Hi

 Is it possible to set up the data import handler so that it keeps track of
 the last imported time in Zulu time and not local time ?
[...]

Start your JVM with the desired timezone, e.g.,
java -Duser.timezone=UTC -jar start.jar

Regards,
Gora


Re: how to generate json response from the php solarium ?

2014-03-24 Thread Gora Mohanty
On 24 March 2014 12:35, Sohan Kalsariya sohankalsar...@gmail.com wrote:
 How can i get the json response from solr ?
 I mean how can i get response of the searched results in json format
 and print it in solarium php code ?

Adding wt=json to the query will get you Solr results in JSON format.
Please refer to the Solarium documentation for how to print the
results.

Regards,
Gora


Re: Can the solr dataimporthandler consume an atom feed?

2014-03-24 Thread Gora Mohanty
On 25 March 2014 01:15, eShard zim...@yahoo.com wrote:
 I confirmed the xpath is correct with a third party XPath visualizer.
 /atom:feed/atom:entry parses the xml correctly.

 Can anyone confirm or deny that the dataimporthandler can handle an atom
 feed?

Yes, an ATOM feed can be consumed by DIH, as noted in the documentation.
We have done this in the past, and a Google search turns up examples, e.g.,
http://blog.florian-hopf.de/2012/05/importing-atom-feeds-in-solr-using-data.html

Have not dealt with namespaces, but here is a line from the documentation
that s probably relevant to your ATOM feed:
It does not support namespaces, but it can handle xmls with namespaces.
When you provide the xpath, just drop the namespace and give the rest
(eg if the tag is 'dc:subject' the mapping should just contain 'subject').
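So for an Atom feed the prefixes are simply dropped in the DIH config; a sketch (URL and columns hypothetical):

```xml
<entity name="feed"
        url="http://example.com/feed.atom"
        processor="XPathEntityProcessor"
        forEach="/feed/entry">  <!-- not /atom:feed/atom:entry -->
  <field column="title" xpath="/feed/entry/title"/>
  <field column="link"  xpath="/feed/entry/link/@href"/>
</entity>
```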

Other than that, I still see nothing wrong with your DIH data configuration. The
message from the dataimport shows that it did make a request to the
URLDataSource. If things still do not work:
* Can you double-check that the specified URL in the url attribute of the
   entity does indeed retrieve the desired XML?
* I am pretty sure that you have checked this, but are your fields properly
  defined in the Solr schema?

Regards,
Gora


Re: SOLR Typo3

2014-03-21 Thread Gora Mohanty
On 21 March 2014 13:54, Bernhard Prange m...@bernhard-prange.de wrote:
 Hey Group,
 I am trying to use SOLR with TYPO3.

 It works so far. But I get an ?sword_list[]=endometrialno_cache=1 on the
 end of each link, causing the linking not to work. How do I remove that? Do
 I have to configure this within RealUrl?

You should ask on a TYPO3 list: This seems to have nothing to do with Solr.

Regards,
Gora


Re: Can the solr dataimporthandler consume an atom feed?

2014-03-21 Thread Gora Mohanty
On 22 March 2014 02:55, eShard zim...@yahoo.com wrote:

 Good afternoon,
 I'm using solr 4.0 Final.
 I have an IBM atom feed I'm trying to index but it won't work.
 There are no errors in the log.
 All the other DIH I've created consumed RSS 2.0
 Does it NOT work with an atom feed?
[...]

Atom is XML, and your DIH data configuration file looks fine on the
face of it. What message do you get when you do a full-import?
Can you also provide a sample of your feed?

Regards,
Gora


Re: Solr dih to read Clob contents

2014-03-20 Thread Gora Mohanty
On 20 March 2014 14:53, Prasi S prasi1...@gmail.com wrote:

 Hi,
 I have a requirement to index a database table with clob content. Each row
 in my table a column which is an xml stored as clob. I want to read the
 contents of xmlthrough dih and map each of the xml tag to a separate solr
 field,

 Below is my clob content.
 <root>
   <author>A</author>
   <date>02-Dec-2013</date>
   .
   .
   .
 </root>

 i want to read the contents of the clob and map author to author_solr and
 date to date_solr . Is this possible with a clob tranformer or a script
 tranformer.

You will need to use a FieldReaderDataSource, and a XPathEntityProcessor
along with the ClobTransformer. You do not provide details of your DIH data
configuration file, but this should look something like:

<dataSource name="xmldata" type="FieldReaderDataSource"/>
...
<document>
  <entity name="x" query="..." transformer="ClobTransformer">
    <entity name="y" dataSource="xmldata" dataField="x.clob_column"
            processor="XPathEntityProcessor" forEach="/root">
      <field column="author_solr" xpath="/root/author" />
      <field column="date_solr" xpath="/root/date" />
    </entity>
  </entity>
</document>

Regards,
Gora


Re: Does CachedSqlEntityProcessor works?

2014-03-17 Thread Gora Mohanty
On 17 March 2014 18:13, manju16832003 manju16832...@gmail.com wrote:

 I tried to use *CachedSqlEntityProcessor* in DataImportHandler with
 Sub-entity query. It does not seems to be working.
 Here is my query
 <entity name="listing" dataSource="mysql" query="SELECT id,make, model FROM LISTING">
   <entity name="account" dataSource="mssql"
           query="SELECT name,email FROM CUSTOMER WHERE id='${listing.customer_id}'"
           processor="CachedSqlEntityProcessor">
   </entity>
 </entity>

 If I remove *processor=CachedSqlEntityProcessor*, run without
 CachedSqlEntityProcessor it works fine. With CachedSqlEntityProcessor my
 sub-entity some how fails to load the data?

Your outer entity selects 'id', but the inner one references 'customer_id'.
Likewise, the inner entity does not select for the 'id' used in the where
clause.

If your tables are set up correctly, this should work:
<entity name="listing" dataSource="mysql"
        query="SELECT id, customer_id, make, model FROM LISTING">
  <entity name="account" dataSource="mssql"
          query="SELECT id, name, email FROM CUSTOMER"
          cacheKey="id" cacheLookup="listing.customer_id"
          processor="CachedSqlEntityProcessor"/>
</entity>

Also, unless your field names in the tables exactly match those in the Solr
schema (including case), you should also have field entries inside the entities.
Please see the Solr DataImportHandler Wiki page.

Regards,
Gora


Re: Problem adding fields when indexing a pdf (add-on)

2014-03-13 Thread Gora Mohanty
On 13 March 2014 18:33, Croci  Francesco Luigi (ID SWS)
fcr...@id.ethz.ch wrote:
 Ok, I renamed the filed  rmDocumentTitle to  rmdocumenttitle and now the 
 field is there!

 Is there some naming rules for the field's names? No uppercase?

No. We have used mixed-case names in the past.

Are you sure that you reindexed the first time before checking?

Regards,
Gora


Re: Partial Counts in SOLR

2014-03-07 Thread Gora Mohanty
On 7 March 2014 15:18, Salman Akram salman.ak...@northbaysolutions.net wrote:
 All,

 Is it possible to get partial counts in SOLR? The idea is to get the count
 but if its above a certain limit than just return that limit.

 Reason: In an index with millions of documents I don't want to know that a
 certain query matched 1 million docs (of course it will take time to
 calculate that). Why don't just stop looking for more results lets say
 after it finds 100 docs? Possible??

 e.g. Something similar that we can do in MySQL:

 SELECT COUNT(*) FROM ( (SELECT * FROM table where 1 = 1) LIMIT 100) Alias

The response to the /select Solr URL has a numFound attribute that
is the number
of matches.
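A hedged sketch of reading that attribute from a wt=json response (the body below is illustrative; setting rows=0 makes Solr return just the count, without fetching any documents):

```python
import json

# Illustrative response body from /select?q=foo&rows=0&wt=json
body = '{"response": {"numFound": 1000000, "start": 0, "docs": []}}'
num_found = json.loads(body)["response"]["numFound"]
print(num_found)  # 1000000
```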

Regards,
Gora


Re: Dataimport handler Date

2014-03-06 Thread Gora Mohanty
On 7 March 2014 08:50, Pritesh Patel priteshpate...@gmail.com wrote:
 I'm using the dataimporthandler to index data from a mysql DB.  Been
 running it just fine. I've been using full-imports. I'm now trying
 implement the delta import functionality.

 To implement the delta query, you need to be reading the last_index_time
 from a properties file to know what new to index.  So I'm using the
 parameter:
 {dataimporter.last_index_time} within my query.

 The problem is when I use this, the date always is : Thu Jan 01 00:00:00
 UTC 1970.  It's never actually reading the correct date stored in the
 dataimport.properties file.
[...]

I take it that you have verified that the dataimport.properties file exists.
What are its contents?

Please share the exact DIH configuration file that you use, obfuscating
DB password/username. Your cut-and-paste seems to have a syntax
error in the deltaQuery (notice the 'jgkg' string):
deltaQuery="SELECT node.nid from node where node.type = 'news' and
node.status = 1 and (node.changed >
UNIX_TIMESTAMP('${dataimporter.last_index_time}'jgkg) or node.created >
UNIX_TIMESTAMP('${dataimporter.last_index_time}'))"

What response do you get from the delta-import URL?
Are there any error messages in your Solr log?

Regards,
Gora


Re: query problem

2014-03-05 Thread Gora Mohanty
On 6 March 2014 11:23, Kishan Parmar kishan@gmail.com wrote:
 Thanks,
 my documents are  xml files i am attaching that document in this and in my
 project i have to search  from each field defined in schema.xml
[...]

The type for State in your schema is string which is a non-analysed
field that stores the text verbatim, i.e., here it is preserving case. Try
searching for State:"Tamil Nadu" (quoted, as the value contains a space).

Regards,
Gora


Re: SOLR 4.2 - Restart vs Reload

2014-03-04 Thread Gora Mohanty
On 5 March 2014 02:14, bbi123 bbar...@gmail.com wrote:
 I am currently using SOLR 4.2 (non cloud mode). I see that most of the
 changes made to the config files (solrconfig.xml, schema.xml, elevate.xml,
 stopwords.txt etc..) gets updated when reloading the core.

 Is there any particular change (in any of the config files) requires a
 restart instead of reload for the changes to be reflected?

 I am trying to come up with an automated deployment process which will just
 do reload (instead of restart) after the files are deployed to corresponding
 folders and I want to make sure that I don't miss anything.

Surely that depends on the Java container that you are using?
Otherwise, could you explain further?

Regards,
Gora


Re: Solr is NoSQL database or not?

2014-02-28 Thread Gora Mohanty
On 1 March 2014 09:39, nutchsolruser nutchsolru...@gmail.com wrote:
 You may think this is silly question but let me ask this because i am
 confused ,
 http://www.lucidworks.com/webinar-solr-4-the-nosql-search-server/  this says
 Solr is NoSQL but many other links dont have solr in their list as NoSQL
 database.

 http://en.wikipedia.org/wiki/NoSQL
 http://en.wikipedia.org/wiki/Document-oriented_database

  it's really confusing what is real meaning of NoSQL database?

Rather than looking for buzzword compliance, maybe you should
ask what features do you need out of Solr. We have used Solr as
a noSQL data store, but for something like that, plus search, Solr
+ Cassandra look like a good bet.

Regards,
Gora


Re: Urgent_Can not index binary data stored in DB as BLOB type

2014-02-25 Thread Gora Mohanty
On 25 February 2014 14:27, Chandan khatua chand...@nrifintech.com wrote:
 Sir,



 Please send me the data-config file to index binary data which are stored in
 Database as BLOB type.

Are you paying attention to the follow-ups? I had suggested
possibilities, including the fact that Solr cannot automatically
decide whether a blob contains richtext or not.

Please do not start multiple threads for the same issue.

Regards,
Gora


Re: Can not index raw binary data stored in Database in BLOB format.

2014-02-25 Thread Gora Mohanty
On 25 February 2014 14:54, Chandan khatua chand...@nrifintech.com wrote:
 Hi Gora,

 The column type in DB is BLOB. It only stores binary data.

 If I do not use TikaEntityProcessor, then the following exception occurs:
[...]

It is difficult to follow what you are doing when you say one thing, and
seem to do another. You say above that you are not using TikaEntityProcessor
but your DIH data configuration file shows that you are. Please start with
one configuration, and show us the *exact* files in use, and the error from
the Solr logs.

Regards,
Gora


Re: Can not index raw binary data stored in Database in BLOB format.

2014-02-24 Thread Gora Mohanty
On 24 February 2014 12:51, Chandan khatua chand...@nrifintech.com wrote:
 Hi,



 We have raw binary data stored in database(not word,excel,xml etc files) in
 BLOB.

 We are trying to index using TikaEntityProcessor but nothing seems to get
 indexed.

 But the same configuration works when xml/word/excel files are stored in the
 BLOB field.

Please start by reviewing http://wiki.apache.org/solr/DataImportHandler as the
above seems quite confused. Why are you using TikaEntityProcessor if the data
in the DB are not richtext files?

What is the type of the column used to store the binary data in
Oracle? You might
be able to convert it with a ClobTransformer. Please see
http://wiki.apache.org/solr/DataImportHandler#ClobTransformer
http://wiki.apache.org/solr/DataImportHandlerFaq#Blob_values_in_my_table_are_added_to_the_Solr_document_as_object_strings_like_B.401f23c5

Regards,
Gora


Re: Sorl integrate with GAE

2014-02-24 Thread Gora Mohanty
On 24 February 2014 12:39, Quốc Nguyễn nhquoc1...@gmail.com wrote:
 Dear sir,

 To Apache Solr support!
 wish you have a good day!

 I'm new in Solr, please help me to confirm bellow information :

 1. The URL must use the standard ports for HTTP (80) and HTTPS (443).
 The port is implied by the scheme, but may also be mentioned in the URL as
 long as the port is standard for the scheme (https://...:443/). An app
 cannot connect to an arbitrary port of a remote host, nor can it use a
 non-standard port for a scheme. This is an annoyance for those running
 Solr on non-80/443. To some, this may be a fatal limitation.
 2. You can not write index on disk, but you can read files. So
 theoretically if the index is read-only and small, you can package it with
 the war file.
 3. If you need to update the index, you will have to store the index
 with Google's data store, just like store an index into databases. Sure
 it'll work. But performance would suffer because of transferring the whole
 index into memory, then really start searching. On the other hand, this
 could be a good solution for small index with per-user data.
 4. For large changing indexes, you need to find other solutions to
 maintain lucene index.
 5. GAE does not support SOLR implementation : solr requires access to
 the server file system, which GAE forbids.



 this is the restriction of Solr when integrate with GAE. I found this in
 the internet . is it right? and any restriction more?

You would probably have better luck with asking on a GAE forum. This
seems to have nothing to do with Solr per se.

Regards,
Gora


Re: URLDataSource : Issue assigning single xpath field name to two solr fields

2014-02-24 Thread Gora Mohanty
On 24 February 2014 14:45, manju16832003 manju16832...@gmail.com wrote:
 I'm not sure if I would be missing any configuration params here, however
 when I tried to assign an xpath field from URLDataSource (XML end point) to
 two fields defined in schema.xml.

 Here is my scenario,
 I have two fields
 *profile_display* and *profile_indexed*

 My assignment in DataImpotHandler looks like this

 <entity name="user" dataSource="crm"
         url="http://URLTOExternalSystem//ProfileService.svc/"
         processor="XPathEntityProcessor"
         forEach="/Profiles">
   <field column="profile_display" xpath="/Profile/profile_name" />
   <field column="profile_indexed" xpath="/Profile/profile_name" />
 </entity>

 My Scheama.xml config looks like this
 <field name="profile_display" type="string" indexed="true" stored="true"
        multiValued="false" default=""/>
 <field name="profile_indexed" type="text_general" indexed="true"
        stored="false" multiValued="false" default=""/>

 *So the issue here is, the value is value is always assigned to
 profile_indexed, and profile_display does not contain any value. *

 Meaning, if we were to assign xpath field name to different solr fields,
 only the last field contains the data.

 The reason I have two fields is that, One to store it as a String to display
 to user, another field where I apply Filter and Tokenizers to do text
 transformation.

Not sure what happens when the same Xpath is applied to two fields
(though I would have thought that this should work). If you need the
same data in two fields that are tokenised in different ways, you can
use Solr's CopyField: http://wiki.apache.org/solr/SchemaXml#Copy_Fields
This will be more efficient, too.
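A sketch of that in schema.xml, reusing the field definitions from the question; DIH then only needs to populate profile_display:

```xml
<field name="profile_display" type="string"       indexed="true" stored="true"  multiValued="false"/>
<field name="profile_indexed" type="text_general" indexed="true" stored="false" multiValued="false"/>

<!-- Solr copies the raw value into the analysed field at index time -->
<copyField source="profile_display" dest="profile_indexed"/>
```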

Regards,
Gora


Re: Can not index raw binary data stored in Database in BLOB format.

2014-02-24 Thread Gora Mohanty
On 24 February 2014 15:34, Chandan khatua chand...@nrifintech.com wrote:
 Hi Gora !

 Your concern was What is the type of the column used to store the binary
 data in Oracle?
 The column type is BLOB in DB.  The column can also have rich text file.

Um, your original message said that it does *not* contain richtext data. How
do you tell whether it has richtext data, or not? For just a binary blob, the
ClobTransformer should work, but you need the TikaEntityProcessor for richtext
data. If you do not know whether the data in the blob is richtext or
not, you will
need to roll your own solution to determine that.

Regards,
Gora


Re: Importing database DIH

2014-02-12 Thread Gora Mohanty
On 12 February 2014 20:53, Maheedhar Kolla maheedhar.ko...@gmail.com wrote:

 Hi ,


 I need help with importing data, through DIH.  ( using solr-3.6.1, tomcat6 )

  I see the following error when I try to do a full-import from my
 local MySQL table ( http:/s/solr//dataimport?command=full-import
 ).

 snip
 ..
 str name=Total Requests made to DataSource0/str
 str name=Total Rows Fetched0/str
 str name=Total Documents Processed0/str
 str name=Total Documents Skipped0/str
 str name=Indexing failed. Rolled back all changes./str
 
 /snip

 I did search to find ways to solve this problem and did create the
 file dataimport.properties , but no success.
[...]

You do not have to create dataimport.properties. Look in the Tomcat logs
for more details on the error, and post the relevant sections here if you
cannot make sense of it. My guess would be that your database credentials
are incorrect, or that the SELECT is failing. Try logging into mysql from
an admin tool with those credentials, and running the SELECT manually.

Regards,
Gora


Re: Newb - Search not returning any results

2014-02-12 Thread Gora Mohanty
On 12 February 2014 20:57, leevduhl ld...@corp.realcomp.com wrote:
[...]
 However, when I try to search specifically where mailingcity=redford I
 don't get any results back.  See the following query/results.

 Query:
 http://{domain}:8983/solr/MIM/select?q=mailingcity=redford&rows=2&fl=id,mailingcity&wt=json&indent=true&debugQuery=true

Please start by reading https://wiki.apache.org/solr/SolrQuerySyntax
The argument to 'q' above should be mailingcity:redford. The debug
section in the results even tells you that, as the parsedquery becomes
text:mailingcity text:redford which means that it is searching the
default full-text search field for the strings mailingcity and/or redford.
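
For instance, in Python (the host, core, and field names just mirror the
quoted query), urlencode builds a correctly separated and escaped query
string:

```python
from urllib.parse import urlencode

# q uses the field:value syntax, not field=value
params = {
    "q": "mailingcity:redford",
    "rows": 2,
    "fl": "id,mailingcity",
    "wt": "json",
    "indent": "true",
}
query = urlencode(params)
url = "http://localhost:8983/solr/MIM/select?" + query
print(url)
```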

Regards,
Gora


Re: Import data from mysql to solr

2014-02-04 Thread Gora Mohanty
On 4 February 2014 15:28, rachun rachun.c...@gmail.com wrote:
 please see below code..

 <dataConfig>
   <dataSource type="JdbcDataSource"
     driver="com.mysql.jdbc.Driver"
     url="jdbc:mysql://localhost:3306/mydb01"
     user="root"
     password=""/>
   <document>
     <entity name="users" query="select id,firstname,username from users">
       <field column="id" name="user_id" />
       <field column="firstname" name="user_firstname" />
     </entity>
   </document>
 </dataConfig>

 my question is..

Please check the URL that you were referred to:
http://wiki.apache.org/solr/DataImportHandler

 1. what is the url for?  (url=jdbc:mysql://localhost:3306/mydb01 )
 does it means my database url?

It is the JDBC URL: Please see
http://wiki.apache.org/solr/DataImportHandler#Configuring_JdbcDataSource

 2. did i do it right with this
 <field column="id" name="user_id" />
 i'm not sure name means the field in Solr?

Yes, the column is the database column name, and name
the Solr field name that must be defined in schema.xml.

Regards,
Gora


Re: Import data from mysql to solr

2014-02-02 Thread Gora Mohanty
On 3 February 2014 10:14, rachun rachun.c...@gmail.com wrote:
 Dear all gurus,

 I would like to import my data (mysql), about 4 million rows, into Solr 4.6.
 What is the best way to do it?

Try using the Solr DataImportHandler:
http://wiki.apache.org/solr/DataImportHandler

Searching Google for solr mysql also turns up many links on how to index
data into Solr from mysql.

Regards,
Gora


Re: How to use Solr for two different projects on one server

2014-01-23 Thread Gora Mohanty
On 23 January 2014 14:06, Stavros Delisavas stav...@delisavas.de wrote:
 Dear Solr-Experts,

 I am using Solr for my current web-application on my server successfully.
 Now I would like to use it in my second web-application that is hosted
 on the same server. Is it possible in any way to create two independent
 instances/databases in Solr? I know that I could create another set of
 fields with alternated field names, but I would prefer to be independent
 on my field naming for all my projects.
[...]

Use two cores: http://wiki.apache.org/solr/CoreAdmin
These are isolated from each other, and should serve your purpose.
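
For example, with the legacy solr.xml of that era (core names here are made
up; each core gets its own conf/ directory with its own schema.xml):

```xml
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="webapp1" instanceDir="webapp1"/>
    <core name="webapp2" instanceDir="webapp2"/>
  </cores>
</solr>
```

Each application then talks to its own URL, e.g. /solr/webapp1/select and
/solr/webapp2/select.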

Regards,
Gora


Re: Correct type to use to store urls (unicode)

2014-01-12 Thread Gora Mohanty
On 12 January 2014 19:45, Hakim Benoudjit h.benoud...@gmail.com wrote:
 hi,

 what's the correct type used to store urls, which can contain some
 encoded unicode characters in the form '%some-digits'? Because the
 string type returns an error when I try to store these urls.

Please provide more details as that should not be the case.

 Btw, I'm using a python client which gives me this error: 'ascii'
 codec can't decode byte 0xc3.

That is a different issue having to do with Python, and the proper
handling of Unicode strings. Try searching Google, or asking on a
Python list.
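
For what it's worth, in Python 3 the stdlib round-trips such URLs cleanly
(the sample string is made up):

```python
from urllib.parse import quote, unquote

encoded = "caf%C3%A9"             # %-encoded UTF-8 bytes
decoded = unquote(encoded)        # interpreted as UTF-8 by default
print(decoded)                    # café
assert quote(decoded) == encoded  # and back again
```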

Regards,
Gora


Re: Correct type to use to store urls (unicode)

2014-01-12 Thread Gora Mohanty
On 12 January 2014 20:07, Hakim Benoudjit h.benoud...@gmail.com wrote:
 I just forgot the u'' next to a unicode string :\,

Heh! Handling of Unicode in Python 2.x is annoying. 3.x
is better, but only a little. Off-topic on this list, so I will
shut up now.

Regards,
Gora


Re: Correct type to use to store urls (unicode)

2014-01-12 Thread Gora Mohanty
On 13 January 2014 00:30, Hakim Benoudjit h.benoud...@gmail.com wrote:

 Yep sure. But is it good for me to store a link (http://...) in a solr
 string field, knowing that this field isn't indexed, only stored?

Yes, there is no issue. Not sure why they are not indexed, but if
that is what you want, ...

Regards,
Gora


Re: to index byte array

2014-01-01 Thread Gora Mohanty
On 1 January 2014 13:37, sweety sweetyshind...@yahoo.com wrote:
 I am converting .doc and .docx files to byte array in c#, now I need to index
 this byte array of doc files.
 Is it possible in solr to index byte array of files??

This does not make much sense: How would one
search through a byte array? Why not index the
.doc/.docx files directly into Solr?

Regards,
Gora


Re: to index byte array

2014-01-01 Thread Gora Mohanty
On 1 January 2014 16:03, sweety sweetyshind...@yahoo.com wrote:
 Indexing .docx files using Tika requires a file system path, but I don't
 want to give the path.

Um, why not?

 I read in DIH faq's that by using transformer the output can be converted
 from byte to string.

Yes, that can possibly be done, but seems like an awful roundabout
way of doing it.

Regards,
Gora


Re: How to use Solr in my project

2013-12-30 Thread Gora Mohanty
On 30 December 2013 11:27, Fatima Issawi issa...@qu.edu.qa wrote:
 Hi again,

 We have another program that will be extracting the text, and it will be 
 extracting the top right and bottom left corners of the words. You are right, 
 I do expect to have a lot of data.

 When would solr start experiencing issues in performance? Is it better to:

 INDEX:
 - document metadata
 - words

 STORE:
 - document metadata
 - words
 - coordinates

 in Solr rather than in the database? How would I set up the schema in order 
 to store the coordinates?

You do not mention the number of documents, but for a few
tens of thousands of documents, your problem should be tractable
in Solr. Not sure what document metadata you have, and if you need
to search through it, but what I would do is index the words, and
store the coordinates in Solr, the assumption being that words are
searched but not retrieved from Solr, while coordinates are retrieved
but never searched.

Off the top of my head, each record can be:
doc1 pg1 word1 coord_x1 coord_y1 coord_x2 coord_y2
doc1 pg1 word2 
...
doc1 pg2 ...
...
doc2 ...

* doc_id and pg_id from Solr search results let you retrieve the image
  from the filesystem
* The coordinates allow post-processing to highlight the word in the image

As always, set up a prototype system with a subset of the records in order
to measure performance.
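
A rough Python sketch of building such per-word records (all field names
here are invented for illustration):

```python
def word_docs(doc_id, page_id, words):
    """words: iterable of (word, x1, y1, x2, y2) tuples for one page."""
    for word, x1, y1, x2, y2 in words:
        yield {
            "doc_id": doc_id,
            "page_id": page_id,
            "word": word,                                # indexed, searched
            "coords": "%d,%d,%d,%d" % (x1, y1, x2, y2),  # stored only
        }

docs = list(word_docs("doc1", "pg1", [("history", 10, 20, 80, 35)]))
print(docs[0]["coords"])  # 10,20,80,35
```

Each dict would become one Solr document; a search on "word" returns the
doc/page identifiers and the stored corner coordinates for highlighting.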

 If storing the coordinates in solr is not recommended, what would be the best 
 process to get the coordinates after indexing the words and metadata? Do I 
 search in solr and then use the documentID to then search the database for 
 the words and coordinates?

You could do that, but Solr by itself should be fine.

Regards,
Gora


Re: Solr core is asking tomcat username and password

2013-12-30 Thread Gora Mohanty
On 30 December 2013 18:59, kumar pavan2...@gmail.com wrote:
 Hi,


 I created a core in Solr using Tomcat, and when I access the created core it
 asks for Tomcat authentication details.

Hi,

This is off-topic: If you are going to run Solr within Tomcat,
please do some basic homework, or ask on a Tomcat list.

Tomcat authentication is usually in tomcat-users.xml, but
where that file is depends on the OS and Tomcat version.
On Debian/Ubuntu, and probably other Linux distributions,
it is in /etc/tomcatX where X is your Tomcat version, e.g.,
/etc/tomcat7
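
A minimal tomcat-users.xml entry looks roughly like this (the role, user
name, and password are placeholders):

```xml
<tomcat-users>
  <role rolename="manager-gui"/>
  <user username="admin" password="changeme" roles="manager-gui"/>
</tomcat-users>
```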

Regards,
Gora


Re: How to use Solr in my project

2013-12-29 Thread Gora Mohanty
On 29 December 2013 11:10, Fatima Issawi issa...@qu.edu.qa wrote:
[...]
 We will have the full text stored, but we want to highlight the text in the 
 original image. I expect to process the image after retrieval. We do plan on 
 storing the (x, y) coordinates of the words in a database - I suspected that 
 it would be too expensive to store them in Solr. I guess I'm still confused 
 about how to use Solr to index the document, but then retrieve the (x, y) 
 coordinates of the search term from the database. Is this possible? If it 
 can, can you give an example how this can be done?

Storing, and retrieving the coordinates from Solr will likely be
faster than from the database. However, I still think that you
should think more carefully about your use case of highlighting
the images. It can be done, but is a significant amount of work,
and will need storage, and computational resources.
1. For highlighting in the image, you will need to store two sets
of coordinates (e.g., top right and bottom left corners) as you do
not know the length of the word in the image. Thus, say with
15 words per line, 50 lines per page, 100 pages per document,
you will need to store:
  4 x 15 x 50 x 100 = 300,000 coordinates/document
2. Also, how are you going to get the coordinates in the first
place?
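
As a quick sanity check of the arithmetic in point 1:

```python
# two (x, y) corner points = 4 numbers per word
coords_per_word = 4
words_per_line = 15
lines_per_page = 50
pages_per_doc = 100
total = coords_per_word * words_per_line * lines_per_page * pages_per_doc
print(total)  # 300000
```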

Regards,
Gora


Re: How to use Solr in my project

2013-12-26 Thread Gora Mohanty
On 26 December 2013 10:54, Fatima Issawi issa...@qu.edu.qa wrote:
 Hello,

 First off, I apologize if this was sent twice. I was having issues 
 subscribing to the list.

 I'm a complete noob in Solr (and indexing), so I'm hoping someone can help me 
 figure out how to implement Solr in my project. I have gone through some 
 tutorials online and I was able to import and query text in some Arabic PDF 
 documents.

 We have some scans of Historical Handwritten Arabic documents that will have 
 text extracted into a database (or PDF). We would like the user to be able to 
 search the document for text, then have the scanned image show up in a viewer 
 with the text highlighted.

This will not work for scanned images which do not actually contain the
text. If you have the text of the documents, the best that you can do is
break the text into pages corresponding to the scanned images, and
index into Solr the text from the pages and the scanned image that should
be linked to the text. For a user search, you will need to show the scanned
image for the entire page: Highlighting of the search term in an image is not
possible without optical character recognition (OCR).

Similarly, if you are indexing from PDFs, you will need to ensure that they
contain text, and not just images.

Regards,
Gora


Re: How to use Solr in my project

2013-12-26 Thread Gora Mohanty
On 26 December 2013 15:44, Fatima Issawi issa...@qu.edu.qa wrote:
 Hi,

 I should clarify. We have another application extracting the text from the 
 document. The full text from each document will be stored in a database 
 either at the document level or page level (this hasn't been decided yet). We 
 will also be storing word location of each word on the page in the database.

What do you mean by word location? The number on the page? What purpose
would this serve?

 What I'm having problems with is deciding on the schema. We want a user to be 
 able to search for a word in the database, have a list of documents that word 
 is located in, and location in the document that word is located it. When he 
 selects the search results, we want the scanned picture to have that word 
 highlighted on the page.
[...]

I think that you might be confusing things:
* If you have the full-text, you can highlight where the word was found. Solr
  highlighting handles this for you, and there is no need to store word
  location.
* You can have different images (presumably, individual scanned pages) linked
  to different sections of text, and show the entire image. Highlighting in
  the image is not possible, unless by word location you mean the (x, y)
  coordinates of the word on the page. Even then:
  - It will be prohibitively expensive to store the location of every word
    in every image for a large number of documents
  - Some image processing will be required to handle the highlighting after
    the scanned image is retrieved

Regards,
Gora


Re: update doc with a xml-format string

2013-12-20 Thread Gora Mohanty
On 20 December 2013 13:57, YouPeng Yang yypvsxf19870...@gmail.com wrote:
 Hi solr users

   I have a string field that stores an XML string. Now I want to update the
 field. I use the command:

 http://10.7.23.122:8080/solr/meta_core/update?stream.body=<add><doc><field
 name="name">shardTv_20131031</field><field update="set"
 name="jobFirstRunExpress"><root><conditionGroup><conditionEntry
 type=\"string\" isChangeable=\"true\">
 <key>REP_DATE</key><op></op><val>20130930</val></conditionEntry>
 <op>and</op> <conditionEntry type=\"string\" isChangeable=\"true\">
 <key>REP_DATE</key><op></op><val>20131003</val></conditionEntry></conditionGroup></root>
 </field></doc></add>&commit=true

 The embedded <root>...</root> string is what I want to update. However, it gives out an error:

Um, the error message is pretty clear. Your XML is not in the format
that Solr expects: You have a root tag that Solr does not understand.
Please see http://wiki.apache.org/solr/UpdateXmlMessages for the
required XML format.

Regards,
Gora


Re: update doc with a xml-format string

2013-12-20 Thread Gora Mohanty
On 20 December 2013 14:18, YouPeng Yang yypvsxf19870...@gmail.com wrote:
 Hi
Thanks for your reply.

    The <root>...</root> element is actually what I want to update in the doc. That
 is, I intend to update the xml string into one of the fields of my doc.
[...]

Ah, sorry, I missed that. As others have followed up, you need
to escape the XML that is within that field.
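
In Python, for instance, xml.sax.saxutils.escape does the job (the field
names follow the earlier message; the inner XML is abbreviated):

```python
from xml.sax.saxutils import escape

# the XML string destined for the field, escaped before embedding
inner_xml = '<root><conditionGroup>...</conditionGroup></root>'
update = (
    '<add><doc>'
    '<field name="name">shardTv_20131031</field>'
    '<field update="set" name="jobFirstRunExpress">%s</field>'
    '</doc></add>' % escape(inner_xml)
)
print(update)
```

Solr unescapes the entities on parsing, so what you read back from the field
is the original XML string.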

Regards,
Gora


Re: indexing from browser

2013-12-16 Thread Gora Mohanty
On 16 December 2013 16:30, Koji Sekiguchi k...@r.email.ne.jp wrote:

 Hi,

 (13/12/16 19:46), Nutan wrote:

 how to index pdf,doc files from browser?


 I think you can index from browser.

 If you said that

 this query is used for indexing :
  curl
  "http://localhost:8080/solr/document/update/extract?literal.id=12&commit=true"
  -Fmyfile=@C:\solr\document\src\test1\Coding.pdf


 curl works for you but

 When i try to index using this:
  http://localhost:8080/solr/document/update/extract?literal.id=12&commit=true
 -Fmyfile=@C:\solr\document\src\test1\Coding.pdf

 the document does not get indexed.


  browser doesn't work for you, why don't you look into the Solr log and
  compare the logs between using curl and using the browser?

No, this will not work by loading the URL into a browser, as with the
-F option, curl is emulating a POST for a form filled in by the user.
One must use curl, or a similar tool, for this.

Regards,
Gora


Re: indexing from browser

2013-12-16 Thread Gora Mohanty
On 16 December 2013 16:50, Nutan nutanshinde1...@gmail.com wrote:

 ok thanks,
 but is there any other way where -F is not used?

 I am creating an API in VC++, and to link to Solr I am using libcurl; for this
 to work, the string is the URL,
 eg:
 curl_easy_setopt(curl,
  CURLOPT_URL,"http://localhost:8080/solr/document/select?q=*%3A*&wt=json&indent=true&fl=id");
[...]

If you can do a post using curl from the command line, you can
add --libcurl file to get a libcurl example written to file. Please
see the curl man page for details.

Regards,
Gora

