On 02/04/16 18:44, Jacco Ligthart wrote:
On 04/02/2016 05:53 PM, Gordan Bobic wrote:
On 02/04/16 16:44, Jacco Ligthart wrote:
On 04/02/2016 04:12 PM, Gordan Bobic wrote:
On 02/04/16 14:34, Jacco Ligthart wrote:
Hi,

I found an error in kde-workspace. whenever I install the latest
version
(from RSEL 7.2) I get windows without borders. Highly annoying. I
decided to revert the version of kde-workspace back to the working one
in RSEL 7.1

When you say you found an error, do you mean you found the root cause,
or that you found the broken kde-workspace was causing the borderless
windows?

I did not find the root cause. I just noticed that, when updating
kde-workspace from 4.10.5-21 (the 7.1 version) to 4.11.19-7 (the 7.2
version), all windows in KDE become borderless. I'm talking here about
the SRC rpm; after building, it expands into a number of binary rpms
(23 for the new version), which all depend on each other. I could not
determine which binary rpm holds the issue.

I could not find any reference that anybody else is seeing the same
issue.

The errata says that the update was a bugfix. I thought that the best
route was to just revert back to the old version.

Fair enough. As long as it's not a security related fix, I can live
with the package not being fully up to date.

This means that I now know of no open issues in 7.2 :)

I think it's time to work towards a release.
@Gordan, how do we want to do this? Same as 6.7?

Yes, can do. If everything broken is fixed/removed from the 7.2
branch, and the packages are deduplicated, I can put them through the
signing box, and re-create the same directory structure as we have
with 6.x.

It should be mostly deduped. Let me check.
I only have the following dupes:
- I keep the original srpms when I patch, so those are all dupes

I don't think we should keep the original SRPMs, only the patched
SRPMs and the patches from the original SRPMs to the modified ones. Do
you have a view on this?
I often use 'repodiff' to find out if our tree is up to date with
upstream. I find that keeping the original SRPMs helps to keep the
noise down. For example:
db4: db4-4.7.25-20.el6_7 ->  db4-4.7.25-20.el6.0
If I had kept the original SRPM in the tree, I would not have seen
this.
OTOH, it is no extra effort on my side to keep them in a separate tree.
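As an aside, with both trees on the same box, a rough coreutils-only check can list package names present in both trees without needing repodiff at all. This is just a sketch, assuming the usual name-version-release.arch.rpm filename layout; the paths would be whichever two trees are being compared:

```shell
# List package base names that appear under both trees (potential dupes).
# The sed pattern assumes standard name-version-release.arch.rpm filenames.
common_names() {
    names() {
        find "$1" -name '*.rpm' -printf '%f\n' |
            sed 's/-[^-]*-[^-]*$//' |   # drop -version-release.arch.rpm
            sort -u
    }
    comm -12 <(names "$1") <(names "$2")
}
```

Running `common_names /path/to/7.1 /path/to/7.2` would then print one base name per line for every package present in both trees.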

I think keeping them in a separate tree would help on my side, since then I can simply do something like:

find . -name "*.rpm" -exec cp '{}' /path/to/signing/directory \;

and not have to worry about having duplicates or anything that is broken in there.

- mesa and mesa-private-llvm are duped between new and extra
If one set is broken, can we just outright remove them?
Yes, we could. We should then move the new ones from extra to new.

Agreed.

- kde-workspace is duped between base and new/broken
OK, I'll exclude the stuff in broken from the release.
I normally exclude broken in the createrepo step, but I kept it in so
that others can try to find out what's going on.

Indeed, as long as the broken stuff is in a separate directory called something obvious like "broken" that isn't in the main repository tree, that's great.
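With broken kept as a sibling directory, the copy-for-signing step above could skip it explicitly rather than relying on it being outside the tree. A minimal sketch, with purely illustrative paths:

```shell
# Copy every RPM under $1 into $2 for signing, skipping anything that
# lives under a directory named "broken".
collect_rpms() {
    src=$1 dest=$2
    find "$src" -type d -name broken -prune -o -name '*.rpm' -print |
        while IFS= read -r rpm; do
            cp "$rpm" "$dest"
        done
}
```

Something like `collect_rpms . /path/to/signing/directory` would then replace the plain `find … -exec cp` one-liner.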

- I have multiple versions of kernels for raspberries and odroids.
      (which reminds me, I should have put them in separate SoC repos)
As long as you put them in an obviously separate directory, I should
be able to selectively pull the base distro RPMs for signing.
They are now all in extra. I'll separate them.

Great, thanks.

(base is unchanged from 7.1; new is newly built for 7.2)
Are there any duplicates between base and new? Or have the deprecated
packages been removed from your 7.1 base directories?
No dupes. Everything deprecated is still in the original 7.1 tree, but
not here in 7.2/base.

I see, OK, so 7.1 and 7.2 have some overlap, but neither tree has duplicates within that point release. Great.

Which reminds me, I still have 6.x cleanups left to do... :-/
Let me know what you're still syncing from me for RSEL6. I could
probably clean some up on my side and start a new updates-testing repo.

I am currently syncing:
/Redsleeve6/raspberrypi/ -> /var/ftp/pub/el6-staging/rootfs/
Aha, that's where the raspberrypi kernels are. Can you move these out of
the rootfs tree? Even I could not find them.

I'll see what I can do. The el6-staging tree is the bit I still have to tidy up. It is what became of the old el6 tree with all the deprecated stuff deleted, but there is a lot more pruning to do. Some of what's left just needs deleting; other things, like the Pi kernels, need moving out to the el6 subtree.

/Redsleeve6/Redsleeve6.7 -> /var/ftp/pub/el6-staging/6.7/

I guess this ^^^ is now static, synced and re-signed, so I can drop it
from the sync as any updates will be going to updates, which is the
next entry below?
Yes. And I can then consequently drop it from my server.

I have now removed this from my sync list. Feel free to remove/archive off from your side.

/Redsleeve6/Redsleeve6.7-updates -> /var/ftp/pub/el6-staging/6.7/
Is this not identical to /var/ftp/pub/el6/6.7/updates/ ? And could
similarly be dropped?

Yes and no. It's the freshest point from which people can get your latest updates. It isn't "final"; before it goes into 6.7/updates it needs to go through my signing server. So it could, from time to time, be ahead of what is in 6.7/updates in the el6/6.7 path.

/Redsleeve6/odroid/ -> /var/ftp/pub/el6-devel/
Please move this next to the raspberrypi tree; both are kernels for specific SoCs.

Ack, will do.
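For what it's worth, grouping the per-SoC kernel trees could be a one-liner per tree; a minimal sketch, with a hypothetical common soc/ directory name rather than the real layout:

```shell
# Move each given tree under a common destination directory,
# e.g. group_soc /var/ftp/pub/el6/soc .../raspberrypi .../odroid
group_soc() {
    dest=$1; shift
    mkdir -p "$dest"
    for tree in "$@"; do
        mv "$tree" "$dest/"
    done
}
```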

/Redsleeve6/rootfs/ -> /var/ftp/pub/el6-devel/

I'm getting a bit confused. what's the difference between staging and devel?

The difference is that I arbitrarily called what used to be el6 el6-staging when I re-created el6 with only the latest 6.7 signed packages. el6-staging is going to disappear when I delete or move stuff out of it to el6, as appropriate.

I just made a dummy /Redsleeve6/Redsleeve6.7-updates-testing. In due
course I'll make new RSEL6.7 updates available there.

OK, noted.

Gordan

_______________________________________________
users mailing list
users@lists.redsleeve.org
https://lists.redsleeve.org/mailman/listinfo/users
