What should we be learning to ensure that we don't have a similar problem
at the application level on a future leap second?
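One application-level lesson can be shown in a few lines. This sketch is not from the thread; it just illustrates that POSIX/Unix time and standard date types have no slot for second 60, so a leap second cannot be represented directly and has to be smeared or stepped by the clock layer:

```python
# Illustrative only: the 2012-06-30 leap second (23:59:60 UTC) is not a
# valid datetime, because seconds must be in 0..59.
from datetime import datetime, timezone

try:
    datetime(2012, 6, 30, 23, 59, 60, tzinfo=timezone.utc)
except ValueError as exc:
    print("leap second rejected:", exc)
```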
-- Jack Krupansky
-Original Message- From: Óscar Marín Miró
Sent: Sunday, July 01, 2012 11:02 AM
To: solr-user@lucene.apache.org
Subject: Re: leap second bug
Hello Michael, thanks for the note :)
I've been having a similar problem since yesterday; the Tomcats have gone wild
on CPU [near 100%]. Did your Solr servers stop replying to index/query requests?
Thanks :)
On Sun, Jul 1, 2012 at 1:22 PM, Michael Tsadikov mich...@myheritage.com wrote:
Our solr servers went
/_I1_OfaL7QY
Also see the comments here:
http://news.ycombinator.com/item?id=4182642
Mike McCandless
http://blog.mikemccandless.com
On Sun, Jul 1, 2012 at 8:08 AM, Óscar Marín Miró
oscarmarinm...@gmail.com wrote:
Hello Michael, thanks for the note :)
I'm having a similar problem
Hello Paul, Mahout is a machine learning [clustering, classification] and
recommendation library, with Hadoop integration.
So, the answer is yes, it qualifies as a recommender engine by itself (with
no other libs), scalable through Hadoop
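To make "recommender engine" concrete: this is not Mahout code, just a toy item-based sketch of what such a library computes at scale. The ratings data and user names are made up:

```python
# Toy user-based recommender: score unseen items by similarity-weighted
# ratings from other users. Hypothetical data; Mahout does this over Hadoop.
from math import sqrt

ratings = {                      # user -> {item: rating}
    "ana":  {"a": 5, "b": 3, "c": 4},
    "luis": {"a": 4, "b": 2, "c": 5},
    "eva":  {"a": 1, "b": 5},
}

def cosine(u, v):
    # cosine similarity over the items both users rated
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(user):
    seen = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(seen, theirs)
        for item, r in theirs.items():
            if item not in seen:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("eva"))  # items eva hasn't rated, best first
```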
On Tue, Mar 13, 2012 at 9:23 AM, Paul Libbrecht
Hi Alan, at my job we had a really successful implementation similar to what
you are proposing. With a classic RDBMS we hit serious performance issues,
so we moved to Solr to display time series of data. The 'trick' was to
facet on a date field, to get 'counts' of data for a time series on a
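The date-facet 'trick' can be sketched as a Solr range-facet query. The core name, field name, and query below are hypothetical; `facet.range` and its `start`/`end`/`gap` parameters are standard Solr range faceting:

```python
# Build the query string for a date range facet: one bucket per day over
# the last 30 days. Field and query values are made-up examples.
from urllib.parse import urlencode

params = {
    "q": "brand:acme",                    # hypothetical query
    "facet": "true",
    "facet.range": "published_date",      # hypothetical date field
    "facet.range.start": "NOW/DAY-30DAYS",
    "facet.range.end": "NOW/DAY",
    "facet.range.gap": "+1DAY",
    "rows": 0,                            # only the counts are needed
}
query = urlencode(params)
print(query)  # append to http://host:8983/solr/<core>/select?
```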
Nice job, pythonic Solr access!! Thanks for the effort.
On Thu, Dec 1, 2011 at 5:53 PM, Rubén Abad rua...@gmail.com wrote:
Hi Jens,
Our objective with mysolr was to create a pythonic Apache Solr binding. But
we have also been working on speed and concurrency. We always use the Python
Hi Luis, just an opinion (I worked with Nutch intensively, 2005-2008).
Web crawling is a bitch, and Nutch won't make it any easier.
Some problems you'll find along the way:
1. Spidering tunnels/traps
2. Duplicate and near-duplicate content removal
3. GET parameter explosion in dynamic
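Problem 2 above (near-duplicate removal) is the most self-contained to illustrate. A common approach is word shingling plus Jaccard similarity; the shingle size, threshold, and sample pages below are arbitrary choices, not Nutch's actual implementation:

```python
# Near-duplicate detection sketch: two pages are near-duplicates when the
# Jaccard similarity of their word-shingle sets is high.
def shingles(text, k=3):
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

page1 = "solr is an open source search platform built on lucene"
page2 = "solr is an open source search platform built on apache lucene"
print(jaccard(page1, page2))  # high score -> likely near-duplicates
```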
xD
On Thu, Jul 8, 2010 at 2:58 PM, Alejandro Gonzalez
alejandrogonzalezd...@gmail.com wrote:
ok please don't forget it :)
2010/7/8 Ruben Abad rua...@gmail.com
Jorl, ok, I'll have to change my vacation request :(
Rubén Abad rua...@gmail.com
On Thu, Jul 8, 2010 at 2:46 PM,
Hello,
We've been working extensively with Solr as a 'standard' search service.
However, recently we had a volume problem displaying time series (for
instance, sentiment of a brand by date), pulling data from a highly
denormalized database. Indexing a view of this database, coupled with faceting
I personally love this book:
http://www.amazon.com/Building-Search-Applications-Lucene-LingPipe/dp/0615204252
It intermixes search with analysis: sentiment, named entity recognition, NLP
pipelines, and so on...
There's a little Nutch cameo too...
On Mon, Jul 20, 2009 at 4:56 PM, Mark Miller
Hi,
My guess is that *although* your DB is in UTF-8, the database engine sends
you the rows in ISO-Latin-1, so before doing *anything* after receiving the
data, you should transcode from ISO-Latin-1 to UTF-8 and then send that to
Solr. I'm no Java expert, but in Perl (MySQL DB in utf-8) I have to
Marín Miró wrote:
Hi,
My guess is that *although* your DB is in UTF-8, the database engine sends
you the rows in ISO-Latin-1, so before doing *anything* after receiving the
data, you should transcode from ISO-Latin-1 to UTF-8 and then send that to
Solr. I'm no Java expert, but in Perl
to convert my row from latin to UTF-8. (see
http://wiki.apache.org/solr/DataImportHandler#head-27fcc2794bd71f7d727104ffc6b99e194bdb6ff9
)
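The transcoding step described above (decode what the connection hands you, then send UTF-8 to Solr) looks like this in Python. The sample string is made up; this is a sketch of the idea, not the Perl transformer from the thread:

```python
# Bytes that arrived over a latin-1 connection are decoded with the charset
# they were actually sent in, then re-encoded as UTF-8 for Solr.
raw = "Óscar Marín".encode("latin-1")   # what a latin1 connection hands you
text = raw.decode("latin-1")            # a proper unicode string again
payload = text.encode("utf-8")          # what Solr should receive
print(payload.decode("utf-8"))
```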
So I just want to know: do you use DataImportHandler too, with a Perl script
as a transformer?
Óscar Marín Miró wrote:
What I mean is that unless solène
Hi,
Maybe this info is handy for you:
http://dev.mysql.com/doc/refman/5.0/en/charset-connection.html
The fact is MySQL can have UTF-8 in its storage engine (or defined per
database), as you have, but the *connection* to the MySQL client can be set
to latin1.
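The symptom of a latin1 connection over UTF-8 data can be reproduced offline: UTF-8 bytes misread as latin-1 produce the classic "Ã"-style mojibake, and are recoverable by reversing the misreading. A minimal sketch (sample name is made up):

```python
# Mojibake mechanism: UTF-8 bytes decoded as latin-1 garble the text;
# re-encoding as latin-1 and decoding as UTF-8 restores it.
s = "Óscar Marín"
mangled = s.encode("utf-8").decode("latin-1")   # garbled, 'Ã'-style text
restored = mangled.encode("latin-1").decode("utf-8")
assert restored == s
```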
In fact, here are my character_set