On 02/01/2011 12:19 AM, Chris Withers wrote:
> Hi Shane,
>
> This one's less serious if it is what I think it is:
>
> Traceback (most recent call last):
> File "bin/zodbconvert", line 24, in
> relstorage.zodbconvert.main()
> File
> "/var/buildout-eggs/RelStorage-1.4.0-py2.6.egg/relsto
On 02/01/2011 10:16 AM, Chris Withers wrote:
> I notice that one of my history-free storages only has 'new_oid' and
> 'object_state' tables while all the others have 'new_oid', 'object_ref',
> 'object_refs_added', 'object_state' and 'pack_object' tables.
>
> What's special about this storage?
It s
On 02/01/2011 10:01 AM, Chris Withers wrote:
> OperationalError: (2006, 'MySQL server has gone away')
>
> ...which feels a little on the serious side for (what is for MySQL)
> quite a normal situation to be in.
Random disconnects are unacceptable for RelStorage. If MySQL goes away
outside transa
On 02/01/2011 07:35 PM, Chris Withers wrote:
> On 01/02/2011 17:33, Shane Hathaway wrote:
>>> What's special about this storage?
>>
>> It sounds like RelStorage didn't get a chance to finish creating the
>> schema. In MySQL, DDL statements are not transactio
On 02/01/2011 07:51 PM, Chris Withers wrote:
> I can understand the problem being fairly terminal if there was a
> disconnect *during* a timeout, and I'd expect an exception, but not a
> segfault ;-)
I haven't seen segfaults except when the dynamic linker used an
incorrect library. Use "ldd" to
On 02/02/2011 10:57 AM, Chris Withers wrote:
> Er, since when? If that were the case, I'm sure Shane would place
> explicit instructions that it should not be used...
Safe is relative. MySQL is a good choice for Facebook, but if I knew my
bank was storing my account balance in MySQL, I would clo
FWIW, here is a way to extract timestamps from transaction IDs stored
with RelStorage. Kai of HexagonIT suggested it. The timestamps should
be in UTC.
PostgreSQL:
select
(tid >> 32) / 535680 + 1900 as year,
1 + ((tid >> 32) % 535680) / 44640 as month,
1 + ((tid >> 32) % 44640) / 1440
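The same decoding can be sketched in Python. This is a hedged sketch based on ZODB's TimeStamp packing (the upper 32 bits pack year/month/day/hour/minute since 1900, matching the SQL constants 535680 = 12*31*24*60 and 44640 = 31*24*60; the lower 32 bits are a fraction of a minute); tid_to_datetime is an illustrative name, not a RelStorage API:

```python
from datetime import datetime, timedelta, timezone

def tid_to_datetime(tid):
    """Decode a 64-bit ZODB transaction id into a UTC datetime."""
    high = tid >> 32            # packed year/month/day/hour/minute
    low = tid & 0xFFFFFFFF      # fraction of a minute
    minute = high % 60
    hour = (high // 60) % 24
    day = (high // (60 * 24)) % 31 + 1
    month = (high // (60 * 24 * 31)) % 12 + 1
    year = high // (60 * 24 * 31 * 12) + 1900
    seconds = low * 60.0 / (1 << 32)    # sub-minute remainder
    return (datetime(year, month, day, hour, minute, tzinfo=timezone.utc)
            + timedelta(seconds=seconds))
```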
On 02/10/2011 06:30 AM, Santi Camps wrote:
> I was trying to move a database copy of a RelStorage ZODB and having some
> issues. The original ZODB is mounted using a mount point /original_path
> If I restore the backup of the database and mount it using exactly
> the same mount point /original_pat
On 02/10/2011 07:41 AM, Shane Hathaway wrote:
> On 02/10/2011 06:30 AM, Santi Camps wrote:
>> I was trying to move a database copy of a RelStorage ZODB and having some
>> issues. The original zodb is mounted using a mount point /original_path
>> If I restore the backup of t
On 02/10/2011 08:42 AM, Santi Camps wrote:
> The objective is to duplicate a storage using different mount points.
> For instance, if we have Database1 -> mount_point_1 , create
> Database2 and Database3 as copies of Database1 (using pg_dump &
> pg_restore), and then mount them as mount_point_2
On 02/10/2011 09:27 AM, Santi Camps wrote:
>
>
> On Thu, Feb 10, 2011 at 5:07 PM, Shane Hathaway <sh...@hathawaymix.org> wrote:
>
> On 02/10/2011 08:42 AM, Santi Camps wrote:
>
> The objective is to duplicate a storage using different mount
>
On 02/22/2011 09:25 AM, Martijn Pieters wrote:
> I haven't yet actually run this code, but the change isn't big. I
> didn't find any relevant tests to update. Anyone want to venture some
> feedback?
Both ideas are excellent. The new options even open the possibility of
running the pre-pack on a
On 02/25/2011 03:44 AM, Martijn Pieters wrote:
> Last night we used our two-phase pack to start packing the largest
> Oracle RelStorage ZODB we run. Some statistics first:
>
> * Never packed before in its 2-year existence.
> * Has more than 4.5 million transactions, 52 million object states.
> * P
On 02/22/2011 03:10 PM, Maurits van Rees wrote:
> Hi,
>
> Normally RelStorage creates the database tables for you and the user you
> have specified is the owner of those tables. For security reasons a
> client does not want this, but wants a different user to own the tables
> and instead only gran
On 02/22/2011 01:41 PM, Martijn Pieters wrote:
> On Tue, Feb 22, 2011 at 19:12, Shane Hathaway wrote:
>> On 02/22/2011 09:25 AM, Martijn Pieters wrote:
>>> I haven't yet actually run this code, but the change isn't big. I
>>> didn't find any relevant te
On 02/25/2011 04:49 AM, Jim Fulton wrote:
> On Fri, Feb 25, 2011 at 5:44 AM, Martijn Pieters wrote:
>> Last night we used our two-phase pack to start packing the largest
>> Oracle RelStorage ZODB we run. Some statistics first:
>>
>> * Never packed before in its 2-year existence.
>> * Has more tha
On 02/25/2011 08:31 AM, Martijn Pieters wrote:
> On Thu, Feb 24, 2011 at 16:56, Martijn Pieters wrote:
>> I see a lot of transaction aborted errors on the ZODB multi-thread
>> tests with this patch in place, so I'll have to investigate more.
>> Thread debugging joy!
>
> In the end it was a simple
On 02/28/2011 09:29 AM, Maurits van Rees wrote:
> This is now also happening for some images during normal operation, so
> after any blob migration has been run and existing blob caches have been
> cleared. What happens is that somehow the file contents for
> 0xblahblah.blob in the blob caches can
On 02/28/2011 07:45 AM, Martijn Pieters wrote:
> Early this morning, after packing through the weekend, our somewhat
> overweight Oracle RelStorage ZODB pack was completed. I am still
> waiting for the final size from the customer DBAs, but before the pack
> this beast was occupying 425GB. The pack
On 02/28/2011 08:19 AM, Maurits van Rees wrote:
> I wonder if there is some code that mistakenly throws away the wrong
> blob; it should throw away the oldest one I'd think, and not the one
> that it just loaded.
>
> The workaround is probably just to not set the blob-cache-size this low.
> But
On 02/28/2011 09:29 AM, Maurits van Rees wrote:
> This is now also happening for some images during normal operation, so
> after any blob migration has been run and existing blob caches have been
> cleared. What happens is that somehow the file contents for
> 0xblahblah.blob in the blob caches can
On 02/28/2011 07:37 PM, Shane Hathaway wrote:
> On 02/28/2011 08:19 AM, Maurits van Rees wrote:
>> I wonder if there is some code that mistakenly throws away the wrong
>> blob; it should throw away the oldest one I'd think, and not the one
>> that it just loaded.
>>
On 03/01/2011 08:05 AM, Maurits van Rees wrote:
> On 01-03-11 13:54, Maurits van Rees wrote:
>> On 01-03-11 04:41, Shane Hathaway wrote:
>>> On 02/28/2011 09:29 AM, Maurits van Rees wrote:
>>>> This is now also happening for some images during normal operation, so
On 03/01/2011 01:10 PM, Jim Fulton wrote:
> On Tue, Mar 1, 2011 at 2:47 PM, Shane Hathaway wrote:
>>> Any idea where the error might be? Could this be in plone.app.blob?
>>> Any chance that this is better in ZODB 3.9+?
>>
>> This appears to be a design bug in
On 03/01/2011 08:05 AM, Maurits van Rees wrote:
> No, I am still seeing it. I now see more clearly that the problem is
> that two files share the same blob name. I completely remove the blob
> cache and restart the zeo client. I visit image number 1 and get this
> file in the var/blobcache:
> 29
On 03/01/2011 02:47 PM, Maurits van Rees wrote:
> That is pretty weird! I can understand a few duplicates because images
> are saved in a few sizes, but this is too much.
>
> To reiterate some versions:
>
> - Plone 3.3.5
> - ZODB3 3.8.6-polling
> - RelStorage 1.5.0b1
> - Zope2 2.10.12
> - plone.ap
On 03/01/2011 04:32 PM, Maurits van Rees wrote:
> As a workaround for getting blobstorage to work reliably in a relstorage
> setup without a shared blob dir: is there any way to store the blobs
> completely in postgres and have no shared or cached blob dir in the zeo
> clients? If that would work
On 03/02/2011 07:38 AM, Jim Fulton wrote:
> BTW, my sense is that when blobs were first implemented, the
> emphasis was on shared blob directories and the non-shared blob
> implementation suffered from neglect, which I at least tried to
> rectify in ZODB 3.9.
That evolutionary path is evident and
On 03/28/2011 04:01 PM, Nikolaj Groenlund wrote:
> I'm using Plone 4.0.4, PostgreSQL 9.0.3 and RelStorage 1.5.0b2.
> Currently I'm using "da_DK.ISO8859-1" encoding in PostgreSQL - would
> "da_DK.UTF-8" be better since Plone is using UTF-8 internally? PS
> both "Encoding, Collation and Ctype" are set
On 03/29/2011 09:16 PM, Erik Dahl wrote:
> Ok looked a little deeper. I think solution 2 is the way to go (i.e. clear
> the object_ref table of references that are in my range of non-packed
> transactions). Does that sound right? Statement would be:
>
> delete from object_ref where tid > 255908
On 03/29/2011 07:39 PM, Erik Dahl wrote:
> I was running a pack and canceled so that I could reboot my box. After it
> came back up I tried to restart the pack and got this:
[...]
>File
> "/opt/zenoss/lib/python2.6/site-packages/RelStorage-1.4.2-py2.6.egg/relstorage/adapters/packundo.py",
>
On 03/31/2011 04:46 AM, Adam GROSZER wrote:
> After investigating FileStorage a bit, I found that GC runs on objects,
> but pack later by transactions. That means that if there's a bigger-ish
> transaction, we can't get rid of it until all of its objects are GCed
> (or superseded by newer states)
On 04/24/2011 03:36 AM, Nikolaj Groenlund wrote:
> I guess this is a Python path problem (on FreeBSD 8.1).
>
> I'm trying to convert a Data.fs to PostgreSQL using zodbconvert. I've
> downloaded RelStorage-1.5.0b2 and am running:
>
> /usr/local/Plone/Python-2.6/bin/python zodbconvert.py fstodb.conf
>
On 04/27/2011 05:07 AM, Chris Withers wrote:
> Hi Shane,
>
> Attempting to view the /manage_change_history_page of a history-keeping
> relstorage is giving me:
>
> script statement failed: '\nSELECT 1 FROM current_object WHERE
> zoid = %(oid)s\n'; parameters: {'oid': 1163686}
>
> ..
On 05/02/2011 09:23 PM, 潘俊勇 wrote:
> It seems that redis is much faster than memcached.
>
> Could we use redis as a cache for RelStroage?
Are you having speed issues?
I suspect either one is so fast that the speed of Redis or Memcached is
irrelevant. If you want speed, minimize the latency of t
On 05/03/2011 01:39 AM, Chris Withers wrote:
> On 27/04/2011 18:11, Shane Hathaway wrote:
>>> OperationalError: (2006, 'MySQL server has gone away')
>>>
>>> This is happening across at least two separate instances with separate
>>> storages.
>
On 05/06/2011 06:22 AM, Pedro Ferreira wrote:
> But isn't RelStorage supposed to be slower than FileStorage/ZEO?
No, every measurement I've tried suggests RelStorage (with PostgreSQL or
MySQL) is faster than ZEO on the same hardware. ZEO has certainly
gotten faster lately, but RelStorage still see
On 05/06/2011 10:18 AM, Paul Winkler wrote:
> On Fri, May 06, 2011 at 05:19:25PM +0200, Matthias wrote:
>> It would be cool if you could give a hint to ZEO somehow to prefetch a
>> certain set of objects along with their subobjects and then return
>> everything in one batch. This way you avoid all
On 05/06/2011 02:14 PM, Jim Fulton wrote:
>> It sounds like you primarily need a bigger and faster cache. If you
>> want to make minimal changes to your setup, try increasing the size of
>> your ZEO cache and store the ZEO cache on either a RAM disk (try mount
>> -t tmpfs none /some/path) or a sol
On 05/06/2011 02:38 PM, Jim Fulton wrote:
> If there is memory pressure and you take away ram for a ram disk, then you're
> going to start swapping, which will give you other problems.
In my experience, Linux moves pieces of the ZEO cache out of RAM long
before it starts swapping much.
> I tried
On 05/06/2011 02:14 PM, Shane Hathaway wrote:
> However, there is a different class of problems that prefetching could
> help solve. Let's say you have pages with a lot of little pieces on it,
> such as a comment page with a profile image for every comment. It would
> be usefu
On 05/23/2011 01:58 PM, Martijn Pieters wrote:
> I've cleared the last area where RelStorage packing could hold the
> transaction lock for long periods of time, during empty transaction
> deletion:
>
>http://zope3.pov.lt/trac/changeset/121783/relstorage/trunk
>
> During a large pack, this secti
On 06/09/2011 06:32 AM, Martijn Pieters wrote:
> We've looked over the RelStorage ZODB Blob storage implementation and
> came to the conclusion that the current use of blob chunks is
> unnecessary in Oracle when using the cx_Oracle database connector. Not
> splitting ZODB Blobs into chunks may hav
On 06/09/2011 02:05 PM, Martijn Pieters wrote:
> On Thu, Jun 9, 2011 at 22:03, Martijn Pieters wrote:
>> I'm retaining the schema; there is a chance people have updated to
>> 1.5b2 already and are using blobs in production. My refactor maintains
>> compatibility with the chunked blob storage.
>
>
On 06/10/2011 02:53 AM, Hanno Schlichting wrote:
> This looks like the typical problem, where some code opens a file
> without explicitly closing it. But instead relies on garbage
> collection to do the job during __del__ of the file object. That
> generally doesn't work well on Windows.
Yes, that
On 06/10/2011 06:38 AM, Hanno Schlichting wrote:
> On Fri, Jun 10, 2011 at 1:03 PM, Hanno Schlichting wrote:
>> /me is still trying to get the postgres tests to run. I only just now
>> found the relstorage/tests/readme instructions
>
> I got the postgres tests running now and get actual test failu
On 06/12/2011 01:39 PM, Martijn Pieters wrote:
> On Sun, Jun 12, 2011 at 16:40, Hanno Schlichting wrote:
>> Looking at the most recent docs for the bytea type [1] there's two
>> encoding schemes. The new default in 9.0+ is called hex and doesn't
>> suffer from the same problems as the old "escape"
On 06/12/2011 04:01 AM, Martijn Pieters wrote:
> How big a userbase is there for 1.5.0b2 on PostgreSQL? I know schema
> changes are painful, but in this case we'd only have people on the
> bleeding edge using a beta version to switch. I think we can come up
> with a little script that would move th
On 06/15/2011 11:41 AM, Martijn Pieters wrote:
> On Wed, Jun 15, 2011 at 16:23, Martijn Pieters wrote:
>>> Last but not least I'll need to write a migration script for those
>>> users of RelStorage 1.5.0b2 already in production.
>>
>> I have figured out how to do this (see
>> http://archives.postg
On 06/21/2011 07:18 AM, Erik Dahl wrote:
> I'm using RelStorage 1.4.2. During a batch job I was running last night I got
> the following errors.
>
> 2011-06-21 07:55:02,664 WARNING relstorage: POSKeyError on oid 23916102: no
> tid found; Current transaction is 256466219826629358; Recent object tid
On 06/21/2011 08:22 AM, Philip K. Warren wrote:
> Does RelStorage work with MySQL 5.5? I saw a note posted back in
> February on this list that said that 5.5 is not yet supported, however I
> haven't seen any issues with this configuration.
>
> I have been able to successfully run the 1.4.2 and 1.5
On 06/22/2011 04:27 PM, Erik Dahl wrote:
> Ugh. Ok I'll see what we get. So I'm clear we are looking for references to
> the object/tid pairs that don't exist. Pack will take at least 2 days to run.
> :(
Well, just pre-pack shouldn't take that long (I hope).
Shane
On 06/22/2011 04:37 PM, Shane Hathaway wrote:
> On 06/22/2011 04:27 PM, Erik Dahl wrote:
>> Ugh. Ok I'll see what we get. So I'm clear we are looking for references
>> to the object/tid pairs that don't exist. Pack will take at least 2 days to
>> run. :(
>
On 07/14/2011 11:21 PM, Sean Upton wrote:
> On Thu, Jul 14, 2011 at 3:28 PM, Sean Upton wrote:
Full traceback: http://pastie.org/2214036
>> I am able to avoid this by commenting out cache-servers and
>> cache-module-name in my zope.conf.
>
> Looks like the ConflictError at start-up is self-in
On 07/18/2011 02:16 PM, Sean Upton wrote:
> On Fri, Jul 15, 2011 at 5:35 PM, Shane Hathaway wrote:
>> I am thinking of changing the memcache code to use a random per-database key
>> prefix. If I had done that already, you would not have run into this
>>
On 08/31/2011 05:11 PM, Darryl Dixon - Winterhouse Consulting wrote:
>> Just had a quick query from my friendly local DBA; he wanted to know why
>> --clear was using DELETE rather than TRUNCATE; his comments were along the
>> lines of:
>> * TRUNCATE creates no UNDO
>> * TRUNCATE cleans out the inde
On 08/30/2011 04:02 AM, Sylvain Viollon wrote:
> I have a customer using RelStorage 1.5.0 in production, and he cannot
> pack his Data.fs anymore. When he tries, he gets the following error:
>
>> 2011-08-29 15:43:03,459 [zodbpack] INFO Opening storage
>> (RelStorageFactory)...
>> 2011-08-29 1
On 10/05/2011 11:40 AM, Pedro Ferreira wrote:
> Hello all,
>
> While doing some googling on ZEO + memcache I came across this:
>
> https://github.com/eleddy/zeo.memcache
>
> Has anybody ever tried it?
Having implemented memcache integration for RelStorage, I now know what
it takes to make a decen
On 10/09/2011 08:26 AM, Jim Fulton wrote:
> On Sat, Oct 8, 2011 at 4:34 PM, Shane Hathaway wrote:
>> On 10/05/2011 11:40 AM, Pedro Ferreira wrote:
>>> Hello all,
>>>
>>> While doing some googling on ZEO + memcache I came across this:
>>>
>>>
On 10/12/2011 04:53 PM, Shane Hathaway wrote:
> Given the choice to structure the cache as {(oid, tid): (state,
> last_tid)}, a simple way to use the cache would be to get the last
> committed tid from the database and use that tid for the lookup key.
> This would be extremely efficie
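The lookup scheme described above can be sketched with a plain dict standing in for memcached; store and load are illustrative names, not RelStorage's actual API:

```python
# Cache structure: {(oid, tid): (state, last_tid)}.  Keying lookups by
# the database's last committed tid means one poll validates every
# cached entry at once.
cache = {}

def store(oid, tid, state, last_tid):
    # Cache an object state under the tid used for the lookup; last_tid
    # records the transaction that actually wrote this state.
    cache[(oid, tid)] = (state, last_tid)

def load(oid, last_committed_tid):
    # Look up oid using the last committed tid as the cache key.
    entry = cache.get((oid, last_committed_tid))
    if entry is not None:
        return entry   # hit: (state, last_tid), no object fetch needed
    return None        # miss: caller falls back to the database
```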
On 10/20/2011 05:41 AM, Martijn Pieters wrote:
> On a test server with a Plone 4.1 upgrade of a client setup, we are
> experiencing regular lock-ups of the 2 instances we run. After a
> restart, after a few hours at least one of the instances will be
> waiting on Oracle to roll back:
>
>File
> "
On 10/30/2011 07:05 PM, Darryl Dixon - Winterhouse Consulting wrote:
> Hi All,
>
> Part of the setup of our Oracle RelStorage environment involves the DBAs
> wanting to separate ownership of the schema from the rights to actually
> use the schema. In other words, user A owns all the tables etc that
On 11/07/2011 03:15 PM, Darryl Dixon - Winterhouse Consulting wrote:
> Hi All,
>
> We have just converted a site with a handful of editors over from ZEO to
> RelStorage on Oracle, and the rate of ConflictErrors has gone through
> the roof (5 an hour average continuously for days). This seems ver
On 11/10/2011 08:56 AM, Александр Неганов wrote:
> We had a large history-free database with lots of "deleted" Blobs, so we
> packed and gc-ed it. Many rows from object_state as well as
> corresponding blob_chunk rows have been deleted, but the PostgreSQL large
> objects stay untouched. It seems that blob
On 11/10/2011 05:32 PM, Darryl Dixon - Winterhouse Consulting wrote:
> * What is the situation with using multiple memcache instances (eg, one
> per machine - will this lead to cache incoherency - inconsistency between
> instances, etc?
One memcache per client? That's fine as long as you don't set
On 11/17/2011 10:16 AM, Войнаровський Тарас wrote:
> Convert took around 2 hours and the speed is really bad. I tried
> iterating through all objects, by using the findObjectsProviding of
> zope.app.generations and it took 600 seconds, in contrast to the 20
> seconds with FileStorage.
> The cache wo
On 11/21/2011 10:40 AM, Alexandru Plugaru wrote:
> Hello,
>
> I've got this error ( http://pastie.org/2898828 ) while upgrading from
> plone3 to plone4. The context in which that error happens is this:
> http://pastie.org/2898959
>
> The only place I could find the error message was in cPersistence
On 12/19/2011 02:50 AM, Eugene Morozov wrote:
Is there some other way to remove them?
I would remove all references to the broken objects, then let packing
wipe the old objects away. For example:
container = app['somefolder']
del container[obj_name]
It's possible to replace objects in the m
On 01/02/2012 11:54 PM, Chris Withers wrote:
Good to know, but my concern is taking a system which is currently
running 32-bit clients against RelStorage running on a MySQL on a 64-bit
server and replacing the 32-bit clients with 64-bit clients.
Is that going to cause any issues?
That will not
On 03/04/2012 04:16 PM, Chris Withers wrote:
Hi Shane,
What does this exception mean:
Traceback (innermost last):
Module ZPublisher.Publish, line 135, in publish
Module Zope2.App.startup, line 291, in commit
Module transaction._manager, line 89, in commit
Module transaction._transaction, line 3
On 05/08/2012 06:34 PM, Dylan Jay wrote:
I know it's not all about money but if we were to sponsor development of
microsoft sql server support for relstorage, is there someone who knows
how and has an estimated cost and available time?
I don't have enough cycles now to do it myself, but I will
On 05/23/2012 09:27 AM, Chris Withers wrote:
Okay, the issue appears to be that, in some circumstances, RelStorage is
leaving the read connection with an open transaction that isn't rolled
back.
That is what RelStorage is designed to do when you set poll-interval.
Does the bug go away when you
On 07/12/2012 01:30 PM, Santi Camps wrote:
My specific question is: if I disable pack-gc, can I safely empty
object_ref table and free this space?
Certainly. However, most of the 23 GB probably consists of blobs; blobs
are not shown in the query results you posted.
Shane
On 07/13/2012 02:42 PM, Santi Camps wrote:
On Fri, Jul 13, 2012 at 7:05 PM, Shane Hathaway <sh...@hathawaymix.org> wrote:
On 07/12/2012 01:30 PM, Santi Camps wrote:
My specific question is: if I disable pack-gc, can I safely empty
object_ref table and free this
On 08/14/2012 09:59 AM, Daniel Garcia wrote:
I've noticed that over time the number of objects in memcached grows
until memcached begins to evict old objects. I'm trying to figure out
how to size the memcached layer. Since there is no lifetime on the
objects they remain in the cache until they are
On 08/30/2012 10:14 AM, Marius Gedminas wrote:
On Wed, Aug 29, 2012 at 06:30:50AM -0400, Jim Fulton wrote:
On Wed, Aug 29, 2012 at 2:29 AM, Marius Gedminas wrote:
On Tue, Aug 28, 2012 at 06:31:05PM +0200, Vincent Pelletier wrote:
On Tue, 28 Aug 2012 16:31:20 +0200,
Martijn Pieters wrote :
A
By popular request, I've moved RelStorage development from svn.zope.org
to Github [1]. This should make it easier for the ZODB developer
community to contribute and submit issues. Happy hacking!
Also, I've created a "zodb" community on github. If you'd like to move
or create your ZODB-centr
On 12/01/2012 11:22 AM, Andreas Jung wrote:
Jim Fulton wrote:
On Fri, Nov 30, 2012 at 1:37 AM, Andreas Jung
wrote:
a customer made the observation that that ZEO clients became
inconsistent after some time (large CMF-based application running
on Zope 2.12 afaik). Customer made some investigatio
On 12/26/2012 10:43 AM, Sean Upton wrote:
For cron job RelStorage backups (database not including blobs, backed up
separately, using a PostgreSQL 9.0.x backend), I use both zodbconvert
to save FileStorage copies of my database, and pg_dump for low-level
binary dumps (pg_restore custom format, preser
On 02/01/2013 09:08 PM, Juan A. Diaz wrote:
Reading some comments [0] in the code
(relstorage/adapters/schema.py) I could see that the object_ref
table is used during packing, then the question is, in a
history-preserving database is there something we could do to
decrease the siz
On 02/02/2013 04:13 PM, Juan A. Diaz wrote:
2013/2/2 Shane Hathaway :
On 02/01/2013 09:08 PM, Juan A. Diaz wrote:
Do you think that adding an option to zodbpack to truncate these tables
after the pack could be a good idea?
The object_ref table is intended to help the next pack run quickly, but
On 02/06/2013 04:23 AM, Jürgen Herrmann wrote:
I think this is not entirely correct. I ran into problems several
times when new_oid was emptied! Maybe Shane can confirm this?
(results in read conflict errors)
Ah, that's true. You do need to replicate new_oid.
Then I'd like to talk a little
On 02/07/2013 01:54 PM, Jürgen Herrmann wrote:
On 07.02.2013 21:18, Jürgen Herrmann wrote:
I know that's entirely not your fault but may be worth mentioning
in the docs. Relstorage with MySQL works *very* well for DB sizes
<5GB or so, above that - not so much :/
Also for the docs: on disk Re
On 03/07/2013 10:48 AM, jason.mad...@nextthought.com wrote:
On Mar 7, 2013, at 11:35, Sean Upton wrote:
On Thu, Mar 7, 2013 at 7:31 AM,
wrote:
I only spotted two uses of this assumption in RelStorage, the
above-mentioned `_prepare_tid`, plus `pack`. The following simple
patch to change th
On 11/15/2013 06:01 PM, Jens W. Klein wrote:
The idea is simple:
- iterate over all transactions starting with the lowest
transaction id (tid)
- for each transaction load the object states connected with tid
- for each state fetch its outgoing references and fill a table where
all incoming
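The iteration Jens describes can be sketched with toy data; the dict below stands in for the transaction and object_state tables, and none of these names come from RelStorage's actual schema:

```python
# tid -> {oid: [outgoing reference oids]}, illustrative sample data
transactions = {
    1: {0: [1, 2]},      # tid 1: oid 0 references oids 1 and 2
    2: {1: [2], 3: []},  # tid 2: oid 1 references oid 2; oid 3 has none
}

# Walk transactions starting with the lowest tid, load each
# transaction's object states, and record incoming references.
incoming = {}  # oid -> set of oids that reference it
for tid in sorted(transactions):
    for oid, refs in transactions[tid].items():
        for ref in refs:                      # outgoing references
            incoming.setdefault(ref, set()).add(oid)
```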
On 03/13/2014 09:06 AM, Simone Deponti wrote:
1. Certain tables remain locked and automatic cleanup functions (e.g.
AUTOVACUUM) can't properly run
Are you using the "poll-interval" option? That option tells RelStorage
to leave the transaction open. In practice, it's just wrong (although i
On 07/19/2014 05:22 PM, Sean Upton wrote:
Folks,
I have been dealing with locking issues and RelStorage for the past
few days, and want to verify what I believe is a bug: without
RELSTORAGE_ABORT_EARLY set in environment, tpc_vote() could
potentially leave an ILocker adapter setting an RDBMS ta
Tim Peters wrote:
> Jim Fulton]
>>We should probably think harder about the semantics of sync. But it
>>implied a transaction boundary -- specifically, an abort. You wouldn't
>>want this to happen automatically.
>
>
> I assume Rajeev doesn't really want to call sync() automatically, because
> tha
Tim Peters wrote:
> What I seem to be missing entirely here is why Zope ships Prefix instances
> at the base ZEO msg level to begin with; maybe it's to call some method I
> misunderstand (or haven't even bumped into yet <0.6 wink>).
When you want to undo something, Zope asks the ZEO server for a l
Tino Wildenhain wrote:
> Am Sonntag, den 29.05.2005, 09:51 +0200 schrieb Andreas Jung:
>>The Pdata approach in general is not bad. I have implemented a CVS-like file
>>repository lately where we store binary content using a pdata like
>>structure.
>>Our largest files are around (100MB) and the per
Jeremy Hylton wrote:
> It's really too bad that ZEO only allows a single outstanding request.
> Restructuring the protocol to allow multiple simultaneous requests
> was on the task list years ago, but the protocol implementation is so
> complex I doubt it will get done :-(. I can't help but think
so
>>>complex I doubt it will get done :-(. I can't help but think building
>>>on top of an existing message/RPC layer would be profitable. (What's
>>>twisted's RPC layer?) Or at least something less difficult to use than
>>>asyncore.
>
>
>
Dieter Maurer wrote:
> Currently, the ZODB cache can only be controlled via the maximal number
> of objects. This makes configuration complex as the actual limiting
> factor is the amount of available RAM and it is very difficult to
> estimate the size of the objects in the cache.
>
> I therefore
Tim Peters wrote:
> http://www.zope.org/Collectors/Zope/1800
>
> describes some of the code problems with Zope's current way of mounting
> databases. ZODB 3.4 (still) has a Mount.py module, unused and untested by
> ZODB. Jim and I were both surprised today to discover that Zope (2.8) still
>
Jim Fulton wrote:
> It also doesn't handle global data properly.
>
> It tries to do something that Python modules were never
> designed to support, which is to load them more than once.
However, given the existence of the reload() builtin, someone apparently
believed Python modules *were* designe
Tim Peters wrote:
> [Jim Fulton]
>>I agree. The right way to refresh is to detect code changes (ideally
>>using Linux's brand new inotify mechanism, or something similar when
>>inotify is not available), display a "please wait" message to the user,
>>and restart the process.
>
>
> If you're chan
Jim Fulton wrote:
Tim and I have discussed this for some time. We think an
asynchronous I/O approach is still appropriate, to handle
asynchronous messages from servers to clients, but we need
to get away from expecting a server to provide the asyncore
main loop needed by ZEO. Rather, ZEO should
Tim Peters wrote:
Some "auxiliary" code would definitely break if they weren't strings. For
example, ZEO cache tracing has its own trace-file format, which originally
blew up left and right when Shane tried to use it with APE. That got
generalized (and, alas, the trace files got bigger) to live
Tim Peters wrote:
[Shane Hathaway]
It can do this because it has to provide its own DB and Connection
objects anyway.
I trust that's for more reasons than _just_ because it doesn't want to use
'\0'*8 as the root-object oid.
It's for uniformity. In Ape, the stra
Tim Peters wrote:
[Shane]
It can do this because it has to provide its own DB and Connection
objects anyway.
[Tim]
I trust that's for more reasons than _just_ because it doesn't want to
use '\0'*8 as the root-object oid.
[Shane]
It's for uniformity. In Ape, the strategy for choosing