woo
\o/
On Mon, May 1, 2023 at 10:37 AM Davide Alberani wrote:
> Hi all,
> We have just released version 2023.05.01 of Cinemagoer.
>
> It mostly consists of many little fixes to the parsers.
> The complete changelog:
> https://github.com/cinemagoer/cinemagoer/blob/master/CHANGELOG.txt
>
> As us
On 02/28/2018 12:13 AM, Davide Alberani wrote:
> On Tue, Feb 27, 2018 at 9:05 AM, H. Turgut Uyar wrote:
>> So I decided to develop a parser generator that will read a
>> specification for a parser and generate the necessary code
>
> What kind of help do you need, mostly?
>
>
Most importantly, I
On Tue, Feb 27, 2018 at 9:05 AM, H. Turgut Uyar wrote:
>
> So I decided to develop a parser generator that will read a
> specification for a parser and generate the necessary code
That's really cool; I plan to take a look at it as soon as possible.
What kind of help do you need, mostly?
--
David
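To make the idea concrete, here is a purely hypothetical sketch of what such a specification could look like; the key names and XPath rules are made up for illustration only, and this is not the actual format of the project:

    # hypothetical example: a declarative spec mapping keys to XPath rules,
    # from which the real parser code could then be generated
    from lxml import html

    SPEC = {
        'title': '//h1/text()',
        'year': '//span[@id="titleYear"]/a/text()',
    }

    def extract(page, spec=SPEC):
        tree = html.fromstring(page)
        # every rule in the spec becomes one XPath query on the page
        return {key: tree.xpath(rule) for key, rule in spec.items()}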
I suppose you already know about this, but just in case, I noticed that
imdbPY isn't returning info for 'producer', 'cinematography' and 'editor'.
I didn't change my code, which was working previously. If I have to change
something, please tell me.
On 1 January 2018 at 18:52, Davide Alberani
wro
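For anyone who wants to reproduce the report, a minimal check could look like the sketch below; the movieID is just an arbitrary example and the default access system is assumed:

    from imdb import IMDb

    ia = IMDb()
    movie = ia.get_movie('0133093')  # arbitrary example movieID
    for key in ('producer', 'cinematography', 'editor'):
        # .get() returns None instead of raising KeyError when a key
        # is no longer filled in by the parsers
        print(key, movie.get(key))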
Hi all,
in other news, it seems that the 'akas' subdomain now
always redirects to 'www' and that the 'combined' page now
redirects to '/reference'
Not sure about the implications, but probably we'll have less
information about movies and tv series.
Regarding the fixes, if nobody else is already w
Hi,
I'm working on the movie combined parser. My primary goal is to make the
tests pass, so I won't look at the parts that are not covered by the
tests yet.
And a hint about the test suite. Running tox will try to run all tests
for all supported Python versions. If you want to do a quick test abo
Hi all,
yes: since a recent redesign of the web pages, IMDbPY is badly broken.
We started working on master to fix it, but there's still much to do;
see https://github.com/alberanid/imdbpy/issues/103
As always, any help is welcome.
If you want to start fixing something, run tox and choose one
On Thu, Feb 23, 2017 at 3:48 AM, Vaishnav Murali wrote:
>
> Imdbpy won't stop flushing data, it just keeps on displaying flushing data
> for like 9 hours now. It's been nearly 12 hours since I started the script.
> Is this normal?
No, it's not normal.
Isn't it proceeding at all, or is it just slow?
Sameer,
Version 5 is available directly from the source repositories; I believe
that Davide just hasn't uploaded 5.0 to PyPI.
On 19 April 2013 02:14, Sameer Indarapu wrote:
> David,
>
> Any updates on when 5.0 will be available?
>
> Thanks,
> Sameer
>
>
> On Monday, December 31, 2012 4:00:11 A
On Fri, Dec 7, 2012 at 4:35 PM, Marcello Fab wrote:
>
> a few days ago I wrote to you about the search_film issue. I replaced your
> function with the attached one and, though it is not as complete as yours,
> it seems to be working. Maybe it can be used as a temporary solution.
Hi!
Thank you very mu
Unfortunately, adding this line
k = k.replace('\xec\x8c\xa0', '') in the place you mentioned won't help.
Still the same error in the same place :(
SCANNING actor: Havel, Jir?
* FLUSHING CharactersCache...
Traceback (most recent call last):
.
self.flush()
File "./imdbpy2sql.py", line 1195, i
On Mon, Apr 11, 2011 at 18:35, darklow wrote:
>
> File "./imdbpy2sql.py", line 1194, in _toDB
> CURS.executemany(self.sqlstr, self.converter(l))
> psycopg2.DataError: invalid byte sequence for encoding "UTF8": 0xc320
> HINT: This error can also happen if the byte sequence does not match the
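A possible workaround, only a sketch and not necessarily where imdbpy2sql.py should apply it, is to round-trip every byte string through a lenient UTF-8 decode before it reaches executemany, so invalid sequences are replaced instead of being rejected by PostgreSQL (force_utf8 and sanitize_rows are made-up names; CURS, self.sqlstr and self.converter come from the traceback above):

    def force_utf8(value):
        # replace invalid byte sequences instead of letting PostgreSQL reject them
        if isinstance(value, bytes):
            return value.decode('utf-8', 'replace').encode('utf-8')
        return value

    def sanitize_rows(rows):
        return [tuple(force_utf8(v) for v in row) for row in rows]

    # e.g.: CURS.executemany(self.sqlstr, sanitize_rows(self.converter(l)))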
On Thu, Dec 2, 2010 at 7:15 PM, Derek Ditch wrote:
> So, I've been able to begin building a graph reflecting relationships among
> actors, but I've reached the point where the iterations are huge, so I'm
> parallelizing it (using pp). The issue I'm having is that since I'm running
> queries manually,
On Sun, Nov 28, 2010 at 2:50 AM, Derek Ditch wrote:
>
> I'm working on a project that analyzes graph structures using a modified
> version of PageRank; for a sample data set, I'm considering IMDb, using
> imdbpy,
Hi!
Your project sounds extremely cool. :-)
> So, it doesn't look like imdbpy has th
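Just to sketch the graph part: assuming the casts have already been pulled out of the database into a mapping of movie -> list of actor names, and using networkx only as one convenient way to run PageRank (this is not meant to reflect the poster's actual code):

    import itertools
    import networkx as nx

    def build_actor_graph(movie_casts):
        g = nx.Graph()
        for actors in movie_casts.values():
            # connect every pair of actors that appear in the same movie
            for a, b in itertools.combinations(actors, 2):
                g.add_edge(a, b)
        return g

    # ranks = nx.pagerank(build_actor_graph(movie_casts))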
On Feb 28, Davide Alberani wrote:
> To be honest I'm more interested in why 4.4 doesn't work for you;
My fault - now I know what's going on on my system. :-)
I'll release 4.5 in a matter of minutes. :-/
--
Davide Alberani [GPG KeyID: 0x465BFD47]
http://www.mimante.net/
On Feb 28, Treas0n wrote:
> I'm having some issues as well.. I installed PIP but still my imdb
> scripts fail.
Did you run:
pip install
http://imdbpy.svn.sourceforge.net/viewvc/imdbpy/trunk/imdbpy.tar.gz?view=tar
Anyway, I'm unable to reproduce the problem: even IMDbPY 4.4 works
for me.
Is a
Thank you for this answer,
it helps me.
Sebastien
2010/2/18 Davide Alberani
> On Feb 17, Sébastien Ragons wrote:
>
> > I'm trying to create a new parser to get the picture of the movie in
> > big format.
>
> Hi!
> This is a subject more suited for the imdbpy-devel mailing list;
> anyway: see the reply
On Feb 17, Sébastien Ragons wrote:
> I'm trying to create a new parser to get the picture of the movie in
> big format.
Hi!
This is a subject more suited for the imdbpy-devel mailing list;
anyway: see the reply to your other message.
> I wonder if you have to add a key to the dictionary of the movie
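For reference, if I remember correctly the small poster is already exposed through the 'cover url' key of the movie dictionary, so a new parser would presumably add a similar key for the big-format picture; the second key name below is only a tentative example:

    from imdb import IMDb

    ia = IMDb()
    movie = ia.get_movie('0133093')  # arbitrary example movieID
    print(movie.get('cover url'))            # small poster, already available
    print(movie.get('full-size cover url'))  # tentative name for the big one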
On Feb 11, Sébastien Ragons wrote:
> I'm trying to test imdbpy and have difficulties with Extractor.
Woah! That's an unusual request. :-)
That set of extractors was meant to be used internally by
IMDbPY: the fact that someone else is using it for another
purpose is just another sign of the excelle
On Sep 28, Alexander Fischer wrote:
> I am currently writing a plugin for an IRC bot, but this bot is
> written in Python 3.
Woah! So ahead of the times! ;-) 3.0 or 3.1?
> Will there be any support for Python 3 in the near/middle future? The
> 2to3 script throws a lot of errors :(
Not so
heh awesome :) beer for everyone..
On Wed, Apr 1, 2009 at 2:03 PM, Davide Alberani wrote:
> Ok, sharing the birthday with GMail is not easy, but exactly five
> years ago IMDbPY 1.0 was released, too. :-)
> And I can notice, with a bit of pride, that - _30_ releases later - the
> main API is still
On 1/14/09, Davide Alberani wrote:
> Not exactly (actually, at least); the only information saved and
> restored between two runs is the "imdbID" (collected when IMDbPY has
> to retrieve from the web the "real" imdbID for a movie/person/...,
> and stored in the database for faster future accesses).
On Jan 13, Mike Castle wrote:
> I also just tested with PRAGMA journal_mode = OFF;
Good - I'll update the code and documentation ASAP (and submit it to you,
to check that I've understood everything).
> But, are you doing something at the beginning with preserving current
> ids?
Not exactly (ac
I also just tested with PRAGMA journal_mode = OFF; and, while it did
prevent sqlite from making and removing journal files all the time, it
turns out that it didn't make any significant difference. The
measurement was actually 5 minutes slower, considering the variability
of the machine, probably me
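If anyone wants to try the same pragma directly from Python, it can be issued on a plain sqlite3 connection before the import starts; the database path below is made up:

    import sqlite3

    conn = sqlite3.connect('/tmp/imdb.db')  # example path only
    # disable the rollback journal for the duration of the bulk import;
    # this trades crash safety for far fewer journal file creations/deletions
    conn.execute('PRAGMA journal_mode = OFF')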
On Jan 13, Mike Castle wrote:
> First, I remembered to build the DB on a file system that is NOT
> journaled. This actually got me to a time similar to above.
[...]
> (I knew this of course,
I'd never have thought that the difference would be _that_ huge.
That's for sure a thing that must be docum
On Sat, Jan 10, 2009 at 12:47 AM, Davide Alberani wrote:
> On Jan 09, Mike Castle wrote:
>
>> For the curious:
> [...]
>> # TIME createIndexes() : 134 min, 24 sec.
>> DONE! (in 424 minutes, 25 seconds)
>
> Very interesting; so I assume it's safe to write in the documentation
> that using a ramdi
On Jan 09, Mike Castle wrote:
> For the curious:
[...]
> # TIME createIndexes() : 134 min, 24 sec.
> DONE! (in 424 minutes, 25 seconds)
Very interesting; so I assume it's safe to write in the documentation
that using a ramdisk/tmpfs may drastically improve performance.
That's good, thanks!
On Jan 09, Davide Alberani wrote:
> Implemented in SVN, for 'sql' and 'local'. Only for movies, I still
> have to check if this problem applies to people, too.
And now for people, too.
Btw it looks like 'local' is badly broken for movies/persons with
a "high" movieID/personID number. I fear i
On Jan 08, Davide Alberani wrote:
> [THE TITLE IN THE RESULTS LIST ISSUE]
[...]
> While I think the result lists of 'sql/local' are a little more
> useful, it's true that this is an important difference from
> the 'http' interface, and I'm taking into consideration a "fix"
> for 4.0.
Implemente
On Jan 08, ori cohen wrote:
> I created a movie name comparison script, which is also based on
> 1. removing stop words 2. removing bad characters 3. then comparing
> what you have with a list of results.
>
> Do you think this will be of use to you for that last percentage of
> unfound movie name
On Thu, Jan 8, 2009 at 10:34 AM, Davide Alberani wrote:
> As a side-note, are there SQLite experts around?
> Is such poor performance to be expected? A "you can't create a
> 3GB single-file database and expect it to be fast, you insensitive clod"
> would be clear enough. :-)
http://www.mail-ar
On Thu, Jan 8, 2009 at 12:13 PM, Mike Castle wrote:
> On Linux, I might try mounting a scratch partition with async, but I
> don't yet know if that means ``I know what I'm doing, ignore what the
> application tells you to do.''
Oh, and a few lines down it says defaults include async. So much for
Davide,
I created a movie name comparison script, which is also based on:
1. removing stop words
2. removing bad characters
3. then comparing what you have with a list of results.
Do you think this will be of use to you for that last percentage of unfound
movie names when you fetch imdbID from the
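A rough sketch of those three steps, with the stop word list and the matching criterion only as examples:

    import re

    STOP_WORDS = {'the', 'a', 'an', 'of', 'and'}

    def normalize(title):
        # 2. remove "bad" characters, keeping only letters, digits and spaces
        title = re.sub(r'[^a-z0-9 ]+', ' ', title.lower())
        # 1. remove stop words
        return ' '.join(w for w in title.split() if w not in STOP_WORDS)

    def best_match(title, candidates):
        # 3. compare the normalized title with the list of results
        wanted = normalize(title)
        for candidate in candidates:
            if normalize(candidate) == wanted:
                return candidate
        return None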
On Thu, Jan 8, 2009 at 12:13 PM, Mike Castle wrote:
>
> If one has enough ram, a ramdisk/tmpfs of some sort might be helpful
> (I don't have enough swap configured to test at the moment).
For the curious:
On a somewhat wimpy Linux machine with 1G of RAM that has also been busy
encoding files, I s
On Jan 08, Davide Alberani wrote:
> Unfortunately SQLite is really slow, for our needs. :-/
As a side-note, are there SQLite experts around?
Is such poor performance to be expected? A "you can't create a
3GB single-file database and expect it to be fast, you insensitive clod"
would be clear e
Hi all,
I'm copying this mail to imdbpy-devel, because there are some issues
that I'm not sure how to handle. Feel free to express any thought.
On Jan 08, Mike Castle wrote:
> I built a local sqlite db with data from 2009-01-02, using my then
> installed 3.6 version on Debian/testing (currently usi