Can I be added to the Solr wiki contributors list?
Username: garysieling
Thanks
Gary
documents?
Best,
Gary
2016-06-15 19:23 GMT+02:00 Erick Erickson :
> Simplest, though a bit risky is to manually edit the znode and
> correct the znode entry. There are various tools out there, including
> one that ships with Zookeeper (see the ZK documentation).
>
> Or you ca
uggestions on how we could recover from this problem.
Best,
Gary
e="bin">
So it's something related to BinFileDataSource and TikaEntityProcessor.
Thanks,
Gary.
On 26/02/2015 14:24, Gary Taylor wrote:
Alex,
That's great. Thanks for the pointers. I'll try and get more info on
this and file a JIRA issue.
Kind regards,
Gary.
On 26/02/2015 14:16, Alexandre Rafalovitch wrote:
On 26 February 2015 at 08:32, Gary Taylor wrote:
Alex,
Same results on recursive=true / recursive=false.
I also tried importing plain text files instead of epub (still using
TikaEntityProcessor though) and get exactly the same result - i.e. all
files fetched, but only one document indexed in Solr.
With verbose output, I get a row for each f
d store content in the index in preparation for trying out the
search highlighting. Couldn't work out how to do that with post.jar
Thanks,
Gary
On 25/02/2015 17:09, Alexandre Rafalovitch wrote:
Try removing that first epub from the directory and rerunning. If you
now index 0 docu
ed over 58!
No errors are reported in the logs.
I can search on the contents of that first epub document, so it's
extracting OK in Tika, but there's a problem somewhere in my config
that's causing only 1 document to be indexed in Solr.
Thanks for any assistance / pointers.
Regards,
Can anyone remove this spammer please?
On Tue, Jul 23, 2013 at 4:47 AM, wrote:
default should be
> considerably larger than the header buffer used for GET requests.
>
> I thought that SolrJ used POST by default, but after looking at the
> code, it seems that I was wrong. Here's how to send a POST query:
>
> response = server.query(query, METHOD.POST)
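For completeness, the GET header buffer in the bundled Jetty can also be
raised. In the Jetty 6 jetty.xml that shipped with older Solr releases the
connector setting looked roughly like this (connector class and buffer size
here are assumptions, not taken from the thread):

    <Call name="addConnector">
      <Arg>
        <New class="org.mortbay.jetty.nio.SelectChannelConnector">
          <Set name="port">8983</Set>
          <!-- enlarge the request header buffer so long GET query strings fit -->
          <Set name="headerBufferSize">32768</Set>
        </New>
      </Arg>
    </Call>

Switching SolrJ to POST avoids the header limit entirely, so the Jetty change
is only needed for clients that must use GET.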
arameter or document field names a different shard.
On Fri, Mar 22, 2013 at 3:39 PM, Gary Yngve wrote:
> I have a situation we just discovered in solr4.2 where there are
> previously cached results from a limited field list, and when querying for
> the whole field list, it responds d
field
list or the full field list.
We're releasing tonight, so is there a query param to selectively bypass
the cache, which I can use as a temp fix?
Thanks,
Gary
ocess(ZkStateReader.java:201)
at
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:526)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
We ended up upgrading to solr4.2 and rebuilding the whole index from our
datastore.
-Gary
On Sat, Mar 16, 20
Cool, I'll need to try this. I could have sworn that it didn't work that
way in 4.0, but maybe my test was bunk.
-g
On Fri, Mar 15, 2013 at 9:41 PM, Mark Miller wrote:
>
> You can do this - just modify your starting Solr example to have no cores
> in solr.xml. You won't be able to make use of
i, Mar 15, 2013 at 7:14 PM, Mark Miller wrote:
>
> On Mar 15, 2013, at 10:04 PM, Gary Yngve wrote:
>
> > i think those followers are red from trying to forward requests to the
> > overseer while it was being restarted. i guess i'll see if they become
> > green ove
i think those followers are red from trying to forward requests to the
overseer while it was being restarted. i guess i'll see if they become
green over time. or i guess i can restart them one at a time..
On Fri, Mar 15, 2013 at 6:53 PM, Gary Yngve wrote:
> it doesn't appear t
it doesn't appear to be a shard1 vs shard11 issue... 60% of my followers
are red now in the solr cloud graph.. trying to figure out what that
means...
On Fri, Mar 15, 2013 at 6:48 PM, Gary Yngve wrote:
> I restarted the overseer node and another took over, queues are empty now.
>
>
> thread running perhaps? Or just post the results?
>
> To recover, you should be able to just restart the Overseer node and have
> someone else take over - they should pick up processing the queue.
>
> Any logs you might be able to share could be useful too.
>
> - Mark
>
Also, looking at overseer_elect, everything looks fine. node is valid and
live.
On Fri, Mar 15, 2013 at 4:47 PM, Gary Yngve wrote:
> Sorry, should have specified. 4.1
>
>
>
>
> On Fri, Mar 15, 2013 at 4:33 PM, Mark Miller wrote:
>
>> What Solr version? 4.0, 4.1 4.2
Sorry, should have specified. 4.1
On Fri, Mar 15, 2013 at 4:33 PM, Mark Miller wrote:
> What Solr version? 4.0, 4.1 4.2?
>
> - Mark
>
> On Mar 15, 2013, at 7:19 PM, Gary Yngve wrote:
>
> > my solr cloud has been running fine for weeks, but about a week ago, it
>
the param in solr.xml should be shard, not shardId. i tripped over this
too.
-g
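For reference, a legacy-style solr.xml core entry using the shard attribute
looks like the following (core, collection, and shard names here are made up
for illustration):

    <cores adminPath="/admin/cores">
      <!-- "shard" (not "shardId") assigns this core to a shard -->
      <core name="collection1_shard1"
            collection="collection1"
            shard="shard1"
            instanceDir="collection1_shard1"/>
    </cores>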
On Mon, Jan 14, 2013 at 7:01 AM, starbuck wrote:
> Hi all,
>
> I am trying to set up a SolrCloud cluster with 2 collections and 4 shards
> each with 2 replicas hosted by 4 Solr instances. If the shardNum param is s
will go up and down independently of each other.
Thanks,
Gary
ow to use antecedents :))
-g
On Mon, Jan 14, 2013 at 6:27 PM, Gary Yngve wrote:
> Posting this
>
> update="set">blah update="add">qux update="add">quuxfoo
>
> to an existing doc with foo and bar tags
> results in tags_ss containing
>
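The quoted update above lost its XML markup in the archive. A Solr atomic
update document of that shape would look roughly like this (the id value and
the "title_s" field name are assumptions; tags_ss matches the thread):

    <add>
      <doc>
        <field name="id">mydoc</field>
        <field name="title_s" update="set">blah</field>
        <field name="tags_ss" update="add">qux</field>
        <field name="tags_ss" update="add">quux</field>
      </doc>
    </add>

With update="add" on a multivalued field, the new values are appended to the
values already stored on the existing document.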
orked fine with all
Groovy versions. Can't imagine what the root cause might be -- Groovy
implements jsr223 differently in later versions? I suppose to find out I could
compile Solr with my jdk but time to march on. ;)
Gary
-Original Message-
From: Erick Erickson [mailto:
are any groovy
errors. Thanks in advance for any tips.
Gary
Hi, there
In order to keep a DocID vs UID map, we added payloads to a Solr core. The
search on UID is very fast, but we have a problem with adding/deleting docs.
Every time we commit an add/delete, Solr/Lucene takes up to 30
seconds to complete. Without payloads, the same action can
ation="http://www.loc.gov/mods/v3
http://www.loc.gov/mods/v3/mods-3-4.xsd"
version="3.4">
Malus domestica: Arnold
then xpath="//titleInfo/title" works just fine. Can anyone confirm that this
is the case and, if so, recommend a solution?
Thanks
Gary
Gary Moore
Technical Lead
LCA Digital Commons Project
NAL/ARS/USDA
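DIH's XPathEntityProcessor implements only a streaming subset of XPath and
does not handle XML namespaces, which is consistent with the expression
working once the namespace declarations are removed. One common workaround is
to strip the namespaces with an XSLT via the entity's xsl attribute; a sketch
along those lines (entity name, file names, and forEach value are assumptions):

    <entity name="mods"
            processor="XPathEntityProcessor"
            url="mods.xml"
            xsl="strip-namespaces.xsl"
            forEach="/mods">
      <!-- works because the XSLT removed the default namespace -->
      <field column="title" xpath="//titleInfo/title"/>
    </entity>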
Java Development 2011-09-01
2011-09-15
Thanks in advance
Gary
On Thu, Sep 15, 2011 at 3:33 PM, Jonathan Rochkind wrote:
> You didn't tell us what your schema looks like, what fields with what types
> are involved.
>
> But similar to how you'd do it in your
Hi
I have a scenario that I am not sure how to write the query for.
Here is the scenario: an employee record with multivalued fields for project,
start date, and end date.
It looks something like:
John Smith | web site bug fix | 2010-01-01 | 2010-01-03
u
tpd since it's usually an application backend,
e.g. a PHP application running on port 80 connects to Solr on port 8983.
Gary
-Original Message-
From: nagarjuna [mailto:nagarjuna.avul...@gmail.com]
Sent: Wednesday, September 07, 2011 7:41 AM
To: solr-user@lucene.apache.org
Subject: how to
Hah, I knew it was something simple. :) Thanks.
Gary
-Original Message-
From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik Seeley
Sent: Sunday, August 28, 2011 12:50 PM
To: solr-user@lucene.apache.org
Subject: Re: commas in synonyms.txt are not escaping
Turns out this
Alexi,
Yes but no difference. This is apparently an issue introduced in 3.*. Thanks
for your help.
-Gary
-Original Message-
From: Alexei Martchenko [mailto:ale...@superdownloads.com.br]
Sent: Friday, August 26, 2011 10:45 AM
To: solr-user@lucene.apache.org
Subject: Re: commas in
Thanks, Yonik.
Gary
-Original Message-
From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik Seeley
Sent: Friday, August 26, 2011 11:25 AM
To: solr-user@lucene.apache.org
Subject: Re: commas in synonyms.txt are not escaping
On Fri, Aug 26, 2011 at 11:16 AM, Yonik Seeley
Here you go -- I'm just hacking the text field at the moment. Thanks,
Gary
-Original Message-
From: Alexei Martc
a document.
The filter I'm declaring in the index analyzer looks like this:
Doesn't seem to matter which tokenizer I use. This must be something simple
that I'm not doing, but I'm a bit stumped at the moment and would appreciate
any tips.
Thanks
Gary
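The filter declaration referred to above was stripped by the archive. A
typical synonym filter declaration in an index analyzer looks like this (the
attribute values shown are illustrative defaults, not taken from the thread):

    <filter class="solr.SynonymFilterFactory"
            synonyms="synonyms.txt"
            ignoreCase="true"
            expand="true"/>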
ting
appropriate matches, not explaining where the match is.
Hope that helps.
Kind regards,
Gary.
On 09/06/2011 03:00, Naveen Gupta wrote:
Hi Gary
It started working .. though i did not test for Zip files, but for rar
files, it is working fine ..
only thing what i wanted to do is to index the me
Naveen,
For indexing Zip files with Tika, take a look at the following thread :
http://lucene.472066.n3.nabble.com/Extracting-contents-of-zipped-files-with-Tika-and-Solr-1-4-1-td2327933.html
I got it to work with the 3.1 source and a couple of patches.
Hope this helps.
Regards,
Gary.
On 08
tries (from ExtractingDocumentLoader.java) I was running the correct
code anyway.
However, I'm very pleased that it's working now - I get the full
contents of the zipped files indexed and not just the file names.
Thank you again for your assistance, and the patch!
Kind regards,
G
so I know I'm
running those patched files in the build.
If anyone can shed any light on what's happening here, I'd be very grateful.
Thanks and kind regards,
Gary.
On 11/04/2011 11:12, Gary Taylor wrote:
Jayendra,
Thanks for the info - been keeping an eye on this list in case
on!
Let's connect!
-Gary
http://www.linkedin.com/in/garyyngve
further with it. I've not yet moved to Solr 3.1 but it's on my to-do
list, as is testing out the patches referenced by Jayendra. I'll post
my findings on this thread - if you manage to test the patches before
me, let me know how you get on.
Thanks and kind regards,
Gary.
As an example, I run this in the same directory as the msword1.doc file:
curl "http://localhost:8983/solr/core0/update/extract?literal.docid=74&literal.type=5" \
  -F "file=@msword1.doc"
The "type" literal is just part of my schema.
Gary.
On 03/03/2011 11:4
ion in solrconfig.xml:
<requestHandler name="/update/extract"
    class="org.apache.solr.handler.extraction.ExtractingRequestHandler"
    startup="lazy">
  <lst name="defaults">
    <str name="fmap.content">text</str>
    <str name="lowernames">true</str>
    <str name="uprefix">ignored_</str>
    <str name="captureAttr">true</str>
    <str name="fmap.a">links</str>
    <str name="fmap.div">ignored_</str>
  </lst>
</requestHandler>
Is there any further debug or diagnostic I can get out of Tika to help
me work out why it's only returning the file names and not the file
contents when parsin
cess the zip file when used
standalone with the tika-app jar - it outputs both the filenames and
contents. Should I be able to index the contents of files stored in a
zip by using extract ?
Thanks and kind regards,
Gary.
On 25/01/2011 15:32, Gary Taylor wrote:
Thanks Erlend.
Not used SVN before
izerFactory and
HTMLStripStandardTokenizerFactory deprecated. To strip HTML tags,
HTMLStripCharFilter can be used with an arbitrary Tokenizer. (koji)
Unfortunately, I can't seem to get that to work correctly. Does anyone
have an example fieldType stanza (for schema.xml) for stripping out HTML ?
Thanks and kind regard
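A fieldType stanza along the lines asked about pairs HTMLStripCharFilterFactory
with an ordinary tokenizer; the type name and the extra lowercase filter here
are illustrative choices, not from the thread:

    <fieldType name="text_html" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <!-- char filters run before tokenization, so the tags never reach the tokenizer -->
        <charFilter class="solr.HTMLStripCharFilterFactory"/>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>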
e contents, and doesn't even index the file names!
Is there a version of Tika that works with the Solr 1.4.1 released
distribution which does index the contents of the zipped files?
Thanks and kind regards,
Gary
Sorry, false alarm. Had a bad merge and had a stray library linking to an
older version of another library. Works now.
-Gary
On Sat, Nov 27, 2010 at 4:17 PM, Gary Yngve wrote:
> logs> grep SEVERE solr.err.log
> SEVERE: org.apache.solr.common.SolrException: Error load
d the solr-core jar inside, I can see the
missing classes, so I am pretty confused.
Has anyone else seen this before or have an idea on how to surmount it?
I'm not quite ready to file a Jira issue on it yet, as I'm hoping it's user
error.
Thanks,
Gary
Jayendra,
Brilliant! A very simple solution. Thank you for your help.
Kind regards,
Gary
On 17 Nov 2010 22:09, Jayendra Patil <jayendra.patil@gmail.com>
wrote:
The way we implemented the same scenario is zipping all the attachments into
a single zip file which can be passed
ve this to extend ExtractingRequestHandler to
allow multiple binary files and thus specify our own RequestHandler, or
would using the SolrJ interface directly be a better bet, or am I
missing something fundamental?
Thanks and regards,
Gary.
confguration, thus your
copyField should be defined as a type that is configured with the
SynonymFilterFactory, just like
"person_name".
You can find some guidance here:
http://bibwild.wordpress.com/2010/04/14/solr-stop-wordsdismax-gotcha/
Gary
Hi Eric
I catch the NPE in the NonAdjacentDocumentCollapser class and now it does
return the data field collapsed.
However I cannot promise how accurate or correct this fix is because I have
not had a lot of time to study all the code.
It would be best if some of the experts could give us a
http://www.webtide.com/choose/jetty.jsp
>> > - Original Message -
>> > From: "Steve Radhouani"
>> > To: solr-user@lucene.apache.org
>> > Sent: Tuesday, 16 February, 2010 12:38:04 PM
>> > Subject: Tomcat vs Jetty: A Comparative Analysis?
>> >
>> > Hi there,
>> >
>> > Is there any analysi
It works excellently in Tomcat 6. The toughest thing I had to deal with is
discovering that the environment variable in web.xml for solr/home is
essential. If you skip that step, it won't come up.
<env-entry>
  <env-entry-name>solr/home</env-entry-name>
  <env-entry-type>java.lang.String</env-entry-type>
  <env-entry-value>F:\Tomcat-6.0.14\webapps\solr</env-entry-value>
</env-entry>
- Origi
Once again, thanks for your help getting Solr up and running.
I'm wondering if anyone has any hints on how to prepare TEI documents
for indexing - I was about to write some XSLT but didn't want to
reinvent the wheel (unless it's punctured)?
Regards
Gary
Gary Bro
:
"The requested resource (/solr/select/) is not available"
I have other apps running under tomcat okay, seems like it can't find
the lib .jars or can't access the classes within them?
Stuck...
Cheers
Gary
Gary Browne
Development Programmer
Library IT Services
University
JMS client lib not found (related to Cocoon). I have named a file
solr.xml in my $TOMCAT_HOME/conf/Catalina/localhost directory containing
the following:
I am using the example configs (unmodified).
Thanks again
Gary
Gary Browne
Development Programmer
Library IT Services
University
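The contents of that context file were lost in the archive. A typical
$TOMCAT_HOME/conf/Catalina/localhost/solr.xml context fragment looks like the
following (all paths here are placeholders, not the original values):

    <Context docBase="/path/to/solr.war" debug="0" crossContext="true">
      <Environment name="solr/home" type="java.lang.String"
                   value="/path/to/solr/home" override="true"/>
    </Context>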
? (I've attached the trace for
reference)
Thanks again
Gary
Gary Browne
Development Programmer
Library IT Services
University of Sydney
Australia
ph: 61-2-9351 5946
May 14, 2007 1:17:34 PM org.apache.solr.core.SolrException log
SEVERE: java.lang.NullPointerException
es on the tutorial, but post.jar cannot be found...
Where is it? Is there a path variable I need to set up somewhere?
Any help greatly appreciated.
Regards,
Gary
Gary Browne
Development Programmer
Library IT Services
University of Sydney
Australia
ph: 61-2-9351 5946
requested resource
(HTTP method GET is not supported by this URL).
Any help with this would be much appreciated.
Regards
Gary
Gary Browne
Development Programmer
Library IT Services
University of Sydney
Australia
ph: 61-2-9351 5946