cal.
>
> What "design pattern" for timing did Linux violate? In other words, what
> lesson should we be learning to assure that we don't have a similar problem
> at an application level on a future leap second?
>
> -- Jack Krupansky
>
> -Original Message- Fr
m/forum/?fromgroups#!topic/elasticsearch/_I1_OfaL7QY
>
> Also see the comments here:
>
> http://news.ycombinator.com/item?id=4182642
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Sun, Jul 1, 2012 at 8:08 AM, Óscar Marín Miró
> wrote:
> > Hello Mic
Hello Michael, thanks for the note :)
I'm having a similar problem since yesterday: the Tomcats are wild on CPU [near
100%]. Did your Solr servers stop replying to index/query requests?
Thanks :)
On Sun, Jul 1, 2012 at 1:22 PM, Michael Tsadikov wrote:
> Our solr servers went into GC hell, and becam
Hello Paul, Mahout is a machine learning [clustering & classification] and
recommendation library, with Hadoop integration.
So, the answer is yes: it qualifies as a recommender engine on its own (with
no other libs), scalable through Hadoop.
On Tue, Mar 13, 2012 at 9:23 AM, Paul Libbrecht wrote:
>
>
Hi Alan, at my job we had a really successful implementation similar to what
you are proposing. With a classic RDBMS we hit serious performance issues,
so we moved to Solr to display time series of data. The 'trick' was to
facet on a date field, to get 'counts' of data for a time series on a
specifi
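The date-facet trick described above can be sketched as a Solr range-facet request. This is an illustrative sketch only: the field name `published_date`, the query, and the host are assumptions, and `facet.range` is the parameter family Solr has offered since 3.1 (older versions used `facet.date`).

```python
# Build a Solr range-facet query for a daily time series, using only the
# standard library. Field name, query, and host are hypothetical.
from urllib.parse import urlencode

params = {
    'q': 'brand:acme',                    # hypothetical query
    'rows': 0,                            # we only want the facet counts
    'facet': 'true',
    'facet.range': 'published_date',      # hypothetical date field
    'facet.range.start': 'NOW/DAY-30DAYS',
    'facet.range.end': 'NOW/DAY',
    'facet.range.gap': '+1DAY',           # one bucket per day
}
url = 'http://localhost:8983/solr/select?' + urlencode(params)
```

The response then carries one count per day under `facet_counts`, which is exactly the 'counts of data for a time series' shape described above.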
Nice job, pythonic solr access!! Thanks for the effort
On Thu, Dec 1, 2011 at 5:53 PM, Rubén Abad wrote:
> Hi Jens,
>
> Our objective with mysolr was to create a pythonic Apache Solr binding. But
> we
> also have been working in speed and concurrency. We always use the Python
> QueryResponseWrit
Hi Luis, just an opinion (worked with Nutch intensively, 2005-2008).
Web crawling is a bitch, and Nutch won't make it any easier.
Some problems you'll find along the way:
1. Spidering tunnels/traps
2. Duplicate and near-duplicate content removal
3. GET parameter explosion in dynamic page
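Problem 2 above (near-duplicate removal) is often attacked with shingling. A minimal sketch, assuming word shingles and Jaccard similarity; the window size and threshold are illustrative choices, not what Nutch itself implements:

```python
# Near-duplicate detection via word shingles + Jaccard similarity.
# Illustrative sketch only; window size k and the 0.8 threshold are
# arbitrary choices for the example.

def shingles(text, k=3):
    """Set of k-word shingles from a text."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

doc1 = "the quick brown fox jumps over the lazy dog"
doc2 = "the quick brown fox leaps over the lazy dog"
sim = jaccard(shingles(doc1), shingles(doc2))
# Pages whose similarity exceeds some threshold (say 0.8) are treated
# as near-duplicates and dropped from the crawl.
```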
xD
On Thu, Jul 8, 2010 at 2:58 PM, Alejandro Gonzalez
wrote:
> ok please don't forget it :)
>
> 2010/7/8 Ruben Abad
>
>> Jorl, ok, I'll have to change my vacation request :(
>> Rubén Abad
>>
>>
>> On Thu, Jul 8, 2010 at 2:46 PM, ZAROGKIKAS,GIORGOS <
>> g.zarogki...@multirama.gr> wrote
Hello,
We've been working extensively with Solr as a 'standard' search service.
However, recently, we had a volume problem displaying time series (for
instance, sentiment of a brand by date), pulling data from a highly
denormalized database. Indexing a view of this database, coupled with
faceting (g
I personally love this book:
http://www.amazon.com/Building-Search-Applications-Lucene-LingPipe/dp/0615204252
It intermixes search with analysis: sentiment, named entity recognition, NLP
Pipelines and so on...
There's a little Nutch cameo too...
On Mon, Jul 20, 2009 at 4:56 PM, Mark Miller wro
Hi,
Maybe this info is handy for you:
http://dev.mysql.com/doc/refman/5.0/en/charset-connection.html
The fact is MySQL can store UTF-8 in its storage engine (or per database),
as you have, but the *connection* to the MySQL client can still be set
to latin1.
In fact, here are my character_set vari
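As a quick check from any MySQL client, you can compare the storage charset against the connection charset, and force the connection to UTF-8 (these are standard MySQL statements; `SET NAMES` adjusts the client/connection/results charset variables together):

```sql
-- Inspect which charsets MySQL uses for storage vs. the connection
SHOW VARIABLES LIKE 'character_set%';

-- Force the current connection to UTF-8
SET NAMES utf8;
```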
taImportHandler and try to convert my row from
> latin to UTF-8. (see
>
> http://wiki.apache.org/solr/DataImportHandler#head-27fcc2794bd71f7d727104ffc6b99e194bdb6ff9
> )
>
> So i just wanna know if you use DataImportHandler too with a perl script
> like a transformer ?
&g
o SolR,
mapping-ISOLatin1Accent won't know how to interpret it.
Does it make any sense? :P
On Fri, Mar 20, 2009 at 11:53 AM, aerox7 wrote:
>
> I'm using DataImportHandler to send my data to Solr ! so you mean it
> possible
> to apply a transformer in db-config.xml with a
Hi,
My guess is that *although* your DB is in UTF-8, the database engine sends
you the rows in ISO-Latin1, so before doing *anything* after receiving the
data, you should transcode from ISO-Latin1 to UTF-8 and then send that to
SolR. I'm no Java expert, but in perl (MySQL DB in utf-8) I have to do
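The perl snippet is cut off above; the same transcoding step can be sketched in Python (the byte string below is a made-up example of what a latin1-configured connection might deliver, not data from the thread):

```python
# Bytes as a latin1-configured MySQL connection might deliver them.
raw = b'Caf\xe9'                 # 'Café' encoded as ISO-Latin-1

# Transcode: decode from ISO-Latin-1 first, then re-encode as the
# UTF-8 bytes Solr expects. Sending raw latin1 bytes straight to Solr
# is exactly the mojibake scenario described above.
text = raw.decode('latin-1')
utf8_bytes = text.encode('utf-8')
```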