On Sat, Jul 3, 2010 at 8:28 AM, Scott Parkerson
<scott.parker...@gmail.com> wrote:

> On Sat, Jul 3, 2010 at 9:11 AM, Martin Bähr <
> mba...@email.archlab.tuwien.ac.at> wrote:
>
> > On Sat, Jul 03, 2010 at 09:02:29AM -0400, Scott Parkerson wrote:
> > > I had set up a github project, but alas! I didn't know the right git fu
> > to
> > > make the clone/branch happen out there. Help would be appreciated.
> >
> > you mean you could not figure out how to clone packagekit into your
> > repo? i'll look that up. it's more of a github question than a git
> > question i think.
> >
> > Yeah, I think you are right; at the time, I was a github newbie. Now, I'm
> a
> mere novice. :D
>
>
> > > I'm on vacation for a few days, and (actually!) might have some time to
> > do
> > > some work, as $DAY_JOB isn't eating up all my time. I'll have to
> squeeze
> > it
> > > in between family fun, etc., but I'll be around.
> >
> > i'd like to use this time to get as much as you know about the problem
> > into the public. ie: don't spend too much time on the github thing now,
> > we can sort that later, but try to remember what you did to fix the
> > problem, and share locations, files, line numbers where you remember
> > poking.
> >
>
> Well, being that most of my heavy development was done in the Feb-March
> timeframe, I don't remember specifics, but I'll give you an outline of what
> I was working on:
>
> * Making xmlRepo a singleton object instantiated one time for the lifetime
> of
> the PK command. This basically was the thing that chewed up all of the RAM
> for PK. The rest was improved by Conary memory usage improvements made in
> 2.1.8+ (thanks to oot for that).
>

The high memory usage is caused by update-system, and that code path does not
use the XMLRepo.

smerp, where did you see that the high memory usage is caused by the xmlcache?
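For context, the "instantiate once per process" idea Scott describes can be
sketched like this (a minimal Python sketch; `XmlRepo` and its `cache`
attribute are hypothetical names, not the backend's real API):

```python
class XmlRepo:
    """Hypothetical sketch of a per-process singleton repository cache."""
    _instance = None

    def __new__(cls):
        # create the object only once for the lifetime of the PK command
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.cache = {}  # parsed repository metadata lives here
        return cls._instance

a = XmlRepo()
b = XmlRepo()
assert a is b  # both names refer to the one shared instance
```

The point is that repeated lookups reuse the one parsed copy instead of
re-reading and re-parsing the XML each time.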

* Changing the object model. The object model of using dicts of dicts and
> tuples was a mess. I tried to simplify this by using objects with __slots__
> for the specific things, and using dicts for things that couldn't be
> predicted.


I thought about this and wrote up objectives at
http://wiki.foresightlinux.org/display/DEV/Redesign+Conary+Backend
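A minimal sketch of the __slots__ approach Scott mentions (the class and
attribute names are hypothetical, not the backend's actual objects):

```python
class Trove:
    # the fixed, predictable attributes go in __slots__, which removes the
    # per-instance __dict__ and cuts memory use across large package lists
    __slots__ = ('name', 'version', 'flavor')

    def __init__(self, name, version, flavor):
        self.name = name
        self.version = version
        self.flavor = flavor

t = Trove('gimp', '2.6.8', 'is: x86')
assert not hasattr(t, '__dict__')  # no dict is allocated per instance
```

Unpredictable, open-ended data would still go in ordinary dicts; __slots__
only pays off for attributes known up front.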

* What I considered a *major* showstopper (and why I think that the Conary
> PK backend is currently *fundamentally* broken) is this:
>
>  1. When asking for updates, PK essentially gets the output from conary
> updateall --items. This is fine, and is what the command line does when you
> run updateall. The output of this updateall is frozen to disk. This is what
> is presented to the user as the list of packages that are in need of
> updating.
>

The method for asking for updates is get-updates, and it is executed by
gpk-icon-viewer.

This "freeze" contains jobs; these jobs are the package:components that will
be updated and shown...
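Conary has its own freeze format for update jobs; as a generic sketch of the
freeze/thaw round trip being discussed (pickle stands in for the real
serializer, and the job list shown is hypothetical):

```python
import os
import pickle
import tempfile

# hypothetical frozen job list: package:component names to be updated
jobs = [('gimp:runtime', '2.6.8'), ('gimp:data', '2.6.8')]

path = os.path.join(tempfile.mkdtemp(), 'frozen-updatejob')
with open(path, 'wb') as f:
    pickle.dump(jobs, f)  # "freeze" the job list to disk

with open(path, 'rb') as f:
    thawed = pickle.load(f)  # later, thaw it to present to the user

assert thawed == jobs
```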


>
>  2. When *applying* the updates, it then queries the repository *again*
> using the package names in the list! This results in a new updatejob, which
> is then applied. This updatejob is (a) not the same as the result from
> conary updateall --items, and (b) is not in optimal order for applying to
> the system. Therefore, most of the time, on large updates, the system is
> hosed after using PK to apply a set of updates.
>

This is fixed in gnome-packagekit!
In the past, gnome-packagekit executed a get-updates and then did an update
for each package. NOW that phase has been replaced by update-system...

ref: http://bit.ly/amDZpj


> I had rectified this by freezing the updatejob *and* a normalized list of
> package objects (sorted) which was hashed. This hash -- not the updatejob's
> trovelist -- was checked against a list built from what the GUI persisted.
> If it matched, then the cached updatejob stored with that list was applied.
> This was faster, and was more accurate.
> Unfortunately, I lost all of this work. :( It was *almost* a complete
> rewrite of 60% of the backend. That'll teach me.
>
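The hashing scheme Scott describes — hash a normalized, sorted package list
and compare it with what the GUI persisted — can be sketched like this (the
function name and package strings are hypothetical):

```python
import hashlib

def job_hash(packages):
    # normalize by sorting, so the hash is independent of query order
    canonical = '\0'.join(sorted(packages))
    return hashlib.sha1(canonical.encode('utf-8')).hexdigest()

cached = job_hash(['conary=2.1.8', 'gimp=2.6.8'])
from_gui = job_hash(['gimp=2.6.8', 'conary=2.1.8'])
# the same set of packages in any order yields the same hash, so the
# cached updatejob can safely be applied instead of re-querying the repo
assert cached == from_gui
```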


I have to say that update-system uses the same methods and operations that
updateall does.

Which are, roughly (assuming cfg is an already-loaded conary configuration):

from conary.conaryclient import ConaryClient

client = ConaryClient(cfg)
updateItems = client.fullUpdateItemList()
# entries are (name, (oldVersion, oldFlavor), (newVersion, newFlavor), isAbsolute)
applyList = [(x[0], (None, None), x[1:], True) for x in updateItems]
updJob = client.newUpdateJob()
sugMap = client.prepareUpdateJob(updJob, applyList)
restartDir = client.applyUpdateJob(updJob)
updJob.close()


This is basically what conary updateall does... but with verbose callbacks:
http://hg.rpath.com/conary/file/411f4e71df45/conary/callbacks.py#22

The only difference is the frozen dir used for getting the updJob.

So the idea that the conary backend does an update-system as a single job is
wrong, because internally the backend downloads jobs, commits, rolls back,
and installs to the system. This is shown in /tmp/conarybackend.log while
packagekit is running. (It is not shown now because I removed the logs, as
eMbee explained:
http://lists.rpath.org/pipermail/foresight-devel/2010-July/001801.html)

So if the packagekit backend does the same as conary does, I ask myself:
what causes the memory usage? One thing the conary backend has that conary
updateall does not is a lot of loggers in the code for debugging in the
callbacks.

So in the last hack I removed all the logs in the callbacks and saw a change
on my machine... but on another machine (eMbee's) it did not work, so I
conclude the memory usage is not caused by the logs.

Next, I will try to use a profiler like python-gup...@fl:2-devel to see what
causes the memory usage...
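If python-guppy is not available, the same kind of measurement can be sketched
with the standard library's tracemalloc module (the workload below is a
stand-in for the real backend call, e.g. prepareUpdateJob):

```python
import tracemalloc

tracemalloc.start()

# stand-in workload; in the backend this would be the suspect code path
data = ['x' * 100 for _ in range(10000)]

snapshot = tracemalloc.take_snapshot()
stats = snapshot.statistics('lineno')
for stat in stats[:5]:
    print(stat)  # the source lines that allocated the most memory

tracemalloc.stop()
```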

--sgp
>
>
> > greetings, martin.
> > --
> > cooperative communication with sTeam      -     caudium, pike, roxen and
> > unix
> > searching contract jobs:  programming, training and administration -
> > anywhere
> > --
> > pike programmer      working in china
> > community.gotpike.org
> > foresight developer  (open-steam|caudium).org
> > foresightlinux.org
> > unix sysadmin        iaeste.at
> > realss.com
> > Martin Bähr          http://www.iaeste.or.at/~mbaehr/
> > is.schon.org
> >
> _______________________________________________
> Foresight-devel mailing list
> Foresight-devel@lists.rpath.org
> http://lists.rpath.org/mailman/listinfo/foresight-devel
>
