Re: KDE Builder: request for review
ash...@linuxcomp.ru wrote:
> I've been working on KDE Builder project. This is a reimplementation of
> kdesrc-build in Python.

What is the point of taking a tool originally written in a stable language that guarantees long-term backwards compatibility (since the originally planned incompatible "Perl 6" was renamed, as it should have been) and with a syntax close to C/C++ (i.e., what KDE software is mostly written in), and rewriting it in a programming language that releases completely backwards-incompatible major versions every few years and whose syntax depends on whitespace (!)? Do we not have enough pointless porting busywork for the incompatible Qt major releases yet, so now we need the same for KDE Builder?

Kevin Kofler
Re: Kandalf: request for review
J. Varela wrote:
> I think it shouldn't have a name that starts with K, and that it should be
> named something related to llama, the large language model.

The closest relatives to the llama are the vicuña, the guanaco, and the alpaca. The German spellings Vikunja, Guanako, and Alpaka kontain a 'k', so I think those are kooler. ;-) (Yes, the misspellings of "contain" and "cooler" in the previous sentence were intentional. ;-) )

Kevin Kofler
Re: what to do with phonon & why aren't you using it?
Harald Sitter wrote:
> A bit of background perhaps: wy back when phonon was conceived as
> multimedia abstraction library to serve as multimedia pillar. While it
> is certainly abstract and a library I'm not so sure about the pillar
> aspect ;) In fact, phonon is barely even an abstraction since I only
> maintain the vlc backend and nobody else maintains any other backend.

Well, that might be one (though not the only) answer to your first question ("Why don't you use phonon?"). The preferred backend in distributions has been GStreamer for years. (At some point, it was even the one preferred by upstream.) It is also what QtMultimedia uses by default. Why was VLC chosen as the one maintained backend for Phonon?

These days, the Phonon GStreamer backend can have obvious defects (visible in less than 5 minutes of testing, e.g., Dragon Player crashing on exit), and nobody is here to care. This in turn will lead developers to stop using the "buggy" Phonon.

> So, one has to wonder if it'd not make more sense to put the bit of
> energy that flows into phonon to instead go into qtvlc and not have to
> deal with the abstraction element at all. Indeed all these
> abstractions seem a bit silly because the multimedia libraries
> gstreamer and vlc are already abstractions... it's nesting dolls all
> the way down.

Ah, the famous "KDE Abstracted My Abstraction Layer" quote (which I see you even used as a presentation title in 2011). ;-)

This is a bit of a mess indeed. One issue is of course that some people prefer GStreamer and some VLC (and then there may in principle be: native Windows or macOS backends, a backend using FFmpeg directly, etc.), so then you end up with another layer to pick one of those. Kinda an instance of:
https://xkcd.com/927/

The whole history of multimedia on Qt is a mess. Phonon was started when something like QtMultimedia simply did not exist. That had made it so interesting to the Qt project that they started to ship Phonon with Qt. Unfortunately, that cooperation never worked as well as intended (I remember subtle incompatibilities between "Qt Phonon" and "KDE Phonon", different default backends or even different backends altogether, issues initially getting the GStreamer backend from "Qt Phonon" to fully work with "KDE Phonon", and bootstrapping / build order issues) and abruptly ended when Qt decided to reinvent the wheel with QtMultimedia (first as a part of QtMobility, then standalone).

As a result, we now have two competing Qt multimedia frameworks that both suffer or suffered from a lack of maintainers. QtMultimedia was in a horrible state a few months ago (so much that I ended up suggesting on the Qt developer mailing list to just use Phonon, which, I was later told, made some powerful people at Qt really hate me ;-) ). It seems to be better now. But now we have Phonon falling into pieces, with only one backend being maintained at all.

I wish Qt and KDE could work together on *one* multimedia framework, with at least a working GStreamer backend. Though I am worried about the porting effort, since it would mean at least one of the two APIs will go away, maybe even both (in favor of a completely new one). But if you are going to revisit the Phonon API anyway, it may be worth considering working with QtMultimedia instead. I would rather have one working multimedia framework for Qt than two half-unmaintained ones.

Kevin Kofler
Re: Moving FutureSQL to KDE review
Jonah Brüchert wrote:
> I would like to maintain it as an independent library for now, to be
> able to use C++20 features […]

What is the point of allowing KDE Frameworks to be built with an older C++ standard (what is the minimum these days?) if "many KDE projects" are going to depend on a library that requires C++20 to build? Either we can use C++20 or we cannot.

Kevin Kofler
Re: Usage of KF5/KF6 in targets and CMake config files outside of Frameworks
Heiko Becker wrote:
> while looking at a MR for libkcddb (part of Gear) I wondered if the
> transition from Qt5/KF5 to Qt6/KF6 could be used to get rid of the KF5/6
> prefix in target names and CMake config files for libraries that aren't
> acutally part of Frameworks.

Huh? This kind of transition is exactly where the prefix is most valuable, because it ensures that you get a compatible version of the library, i.e., that you do not accidentally get, e.g., a version of KF5Cddb when you are looking for KF6Cddb. Plus, it makes things easier when libraries move into Frameworks later.

Kevin Kofler
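[As a sketch of the point about compatible versions (the consumer target and imported-target names here are illustrative, not taken from libkcddb's actual CMake files): with the versioned prefix, a Qt6 build simply cannot resolve against the Qt5 flavor of the library.]

```cmake
# With a versioned package name, the Qt6 build of an app can only ever find
# the Qt6/KF6 flavor of the library; a stray KF5Cddb install on the same
# system is invisible to it, because its config file is KF5CddbConfig.cmake.
find_package(KF6Cddb CONFIG REQUIRED)
target_link_libraries(myapp PRIVATE KF6::Cddb)  # imported target name assumed
```

An unversioned name (just "KCddb") would happily match whichever major version happens to be installed first on the CMake search path.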
Re: KDE Review: Arianna
Carl Schwan wrote:
> I want to move Arianna to KDE review. Arianna is an ebook reader
> currently only supporting epubs. This is based on top of epub.js
> and QtWebEngine for the actualy rendering of ebooks as doing that
> from scratch in Qt would be too much work.

Okular has an EPub backend using libepub and QTextDocument. How does Arianna compare to that? I would expect native code to be more efficient than JavaScript *especially* on mobile devices, which are apparently Arianna's main target. But I do not know whether there are, e.g., EPub documents that Arianna can render and Okular cannot, so that is why I am asking how they compare.

Kevin Kofler
Re: Email server update - migration from Mailman 2
On Monday, 6 February 2023 02:28:19 CET, Neal Gompa wrote:
> Most people expect normal proportional fonts when reading mail, not
> monospaced text. Even my email client doesn't show email in monospaced
> text by default.

But using a proportional font breaks:
* complex indentation, as I had already mentioned,
* nicely aligned text tables,
* ASCII art drawings,
making mails using any of those display incorrectly. All those constructs can come up in technical discussions among tech-savvy persons such as here on kde-core-devel. (We are not "most people".) Keep in mind that code is usually displayed using a monospace font, too, and that e-mails on KDE mailing lists are likely to contain code snippets.

I see no technical advantage in using a proportional font by default, only drawbacks. (And for those who want it, a JavaScript-heavy interface such as HyperKitty could make it switchable with one click and/or keypress. E.g., in KNode, you just push the X button on your keyboard to switch instantly. Whereas Trojitá just always uses a monospace font for plaintext (non-HTML) e-mail.)

>> And finally, HyperKitty is largely unusable without JavaScript. If you
>> turn off JavaScript, significant portions of the interface just do not
>> work, whereas Pipermail was completely free from client-side code. This
>> is a regression in browser compatibility and in accessibility.
>> HyperKitty also uses cookies, Pipermail does not.
>
> This is an unreasonable demand. Most of the internet does not function
> without JavaScript today.

Most of the Internet is broken, so let us break our site too?

There are browsers that by design do not handle JavaScript, such as lynx. Such browsers are used in various accessibility-related contexts, as well as in emergency situations. E.g., what if KDE Plasma fails to start up for you, you are stuck in text mode, and you are looking for a solution on KDE mailing lists using lynx?

And the JavaScript-heavy stuff does not just require any JavaScript, but tends to require a very recent browser, refusing to work even on maintained LTS branches of browsers, such as QtWebEngine LTS (which is public and FOSS unlike the rest of Qt LTS). Some websites have already started breaking on QtWebEngine 5.15 LTS, e.g.:
* the Nextcloud PDF viewer:
  https://github.com/nextcloud/files_pdfviewer/issues/684
* Discourse:
  https://forum.manjaro.org/t/new-version-of-discourse-dropped-support-for-qtwebengine-5-15-lts/132543

The reasons why they stopped working are pretty spurious in both cases: Nextcloud could trivially (a one-line change) switch to the "legacy" branch of PDF.js, which is compatible with many more browsers than the default build (and I also blame PDF.js for not making the "legacy" build the default; the current default build is only suitable for bundling in, e.g., Firefox and NOT for the web!), and the stricter browser check in Discourse appears to be entirely unnecessary (since it works when I adblock the browser-detection script).

If the same were to happen with HyperKitty, that would be a particularly serious issue for KDE mailing lists because Falkon is the official KDE browser and currently stuck on QtWebEngine 5.15 LTS. (Moving to Qt 6 will be needed to get a newer Chromium again, unless someone makes, e.g., QtWebEngine 6.2 LTS work with Qt 5 somehow.)

I do not see how or why it is unreasonable to expect something that has worked without JavaScript for decades to keep working without JavaScript. There are things for which it may be necessary, but displaying static mailing list archives is not.

>> Broken links sound like a showstopper to me. […]
>
> openSUSE developed a way to map legacy discussions on mlmmj to HyperKitty,
> while Fedora just retained the old Pipemail static pages. Either works.

So either solution would need to be implemented on KDE mailing lists too.

Kevin Kofler
Re: Gitlab update, 2FA now mandatory
Kevin Kofler wrote:
> What am I expected to use with my PinePhone? Does
> https://apps.kde.org/keysmith/ work?

To answer my own question: Yes, Keysmith works, both on the desktop (and notebook) and on the PinePhone.

It is also easily possible to synchronize the keyring between different devices using Keysmith just by copying ~/.config/org.kde.keysmith/Keysmith.conf to the other device over SFTP. Then any of the devices can be used to generate the TOTP. (They will generate the exact same one-time passwords, I can see it by running both instances in parallel.)

GNOME Secrets (formerly known as Password Safe) also works on the PinePhone (which is useful because that app can also store the permanent password, and is mobile-friendly unlike KWalletManager, though I presume it will also work fine on desktops/notebooks). If I enter the same secret there, it also generates the exact same one-time passwords. 

Kevin Kofler
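[To illustrate why copying Keysmith.conf between devices yields identical codes: TOTP (RFC 6238) is a deterministic function of the shared secret and the current time step, so any two implementations holding the same secret and synchronized clocks must agree. A minimal stdlib-only sketch, NOT Keysmith's actual code:]

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the 30-second time-step counter,
    truncated per RFC 4226 dynamic truncation."""
    counter = int(time.time() if for_time is None else for_time) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


# Two "devices" sharing the secret and the clock always agree:
secret = b"12345678901234567890"  # RFC 6238 appendix B test secret, not a real key
assert totp(secret, 59) == totp(secret, 59)
print(totp(secret, 59))  # → 287082 (RFC 6238 test vector, truncated to 6 digits)
```

The flip side of this determinism is Kevin's point elsewhere in the thread: anything that holds the secret (a second phone, a desktop app, a copied config file) is an equally valid "factor".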
Re: ghostwriter is ready for your review
Megan wrote:
> It's definitely not supposed to look like that. I tried a fresh install
> on my machine (removing and rebuilding from scratch) but could not
> replicate the issue. It's supposed to be using Font Awesome's font
> glyphs for the icons, since they are easily styled along with the normal
> text in QSS/CSS. I also double checked that I don't have Font Awesome
> installed as a font. Weird.

You probably have some other font that incorporates Font Awesome glyphs (or equivalent glyphs for the same code points). There are several "nerd fonts" that try to be supersets of multiple icon fonts including Font Awesome.

What is sure is that if you use icon fonts in the application, it has a dependency on the font, or a font, any font, providing those icons. That dependency must be documented. And you also need to explicitly set the font of those buttons to Font Awesome or to whatever compatible font you decide to depend on. While fontconfig sometimes automatically falls back to the correct font when it hits a character not supported in the default font, there are reasons why that can fail, especially for private use area characters like the Font Awesome ones. And other operating systems might not even attempt to fall back to a font that actually provides the icon. Windows at least used to have no such fallback mechanism, though I have not used it for years, so that might have changed since.

Kevin Kofler
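[A sketch of the "explicitly set the font" suggestion in QSS, the stylesheet mechanism Megan mentions; the selector and family name are illustrative, not ghostwriter's actual ones:]

```css
/* Pin the icon buttons to the icon font instead of relying on fontconfig
   fallback for private-use-area code points. Family name assumed; use
   whatever font the application actually bundles or depends on. */
QToolButton#formatToolbarButton {
    font-family: "Font Awesome 5 Free";
}
```

With the family set explicitly, a missing font shows up as obvious tofu boxes rather than glyphs silently borrowed from an unrelated "nerd font".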
Re: Email server update - migration from Mailman 2
Ben Cooksley wrote:
> The most likely candidate for this is naturally Mailman 3 (an instance of
> which can be found at
> https://mail.python.org/archives/list/python-...@python.org/)

This already shows one of the issues: The archive links contain unescaped e-mail addresses which get mangled by third-party mail filters such as the Gmane one, breaking the links.

> It appears to be a substantial improvement in all regards over Mailman 2,
> and therefore I intend to upgrade to that at this stage.

Unfortunately, there are several usability issues with the Mailman 3 archive interface, known as HyperKitty. Fedora (the GNU/Linux distribution) was one of the first projects to switch to it (and it was originally developed by a Fedora developer), so I have run into a bunch of them. There was unfortunately little to no interest in fixing them when I reported them.

The worst was that indentation in the mails was completely lost. Though looking at the Python 3.12.0 alpha 2 announcement:
https://mail.python.org/archives/list/python-...@python.org/thread/M2ZJ3BAPJKVLU3XUTFEQXTNQOOJWWZRT/
(hoping the link will not get mangled), at least this seems to have been fixed. The indentation still looks wrong though because the mails are displayed using a proportional font rather than a fixed-width one as in Pipermail (the Mailman 2 archive interface).

Time stamps use strange formats. The front page shows me time stamps of the form "Di Nov 15, 2:02 nachm.", which is not a valid way to format times in German. (We do not use 12-hour times in Austria.) I did bring that to the attention of the (at the time) main HyperKitty developer when this was deployed in Fedora, but it does not seem to have been addressed. Somebody filed a bug asking for the time format to be configurable, also mentioning this issue:
https://gitlab.com/mailman/hyperkitty/-/issues/357
and it has been open mostly untouched for a year and a half.

Also, a request to always show the date next to the time was simply turned down:
https://gitlab.com/mailman/hyperkitty/-/issues/299
which is IMHO also a usability regression compared to Pipermail.

And finally, HyperKitty is largely unusable without JavaScript. If you turn off JavaScript, significant portions of the interface just do not work, whereas Pipermail was completely free from client-side code. This is a regression in browser compatibility and in accessibility. HyperKitty also uses cookies, Pipermail does not.

> - Mailman 3 uses a completely different URL format, so existing list
> archive links will likely be broken. It may be possible to retain static
> copies of the existing Pipermail archives to mitigate the impact of this
> but they won't be updated any further following the upgrade.

Broken links sound like a showstopper to me. Either keeping the static pages up or somehow setting up a redirect mapping (I believe it has been done at least once by some project, but it does not seem to be currently deployed in Fedora at least, they are using what I assume to be a static copy of the old archives) is IMHO required.

Kevin Kofler
Re: Gitlab update, 2FA now mandatory
Ingo Klöcker wrote:
> You are the only person in this thread (on kde-core-devel) who has voiced
> their disagreement with using 2FA and who demand its immediate
> deactivation. Why do you think a single person (you) who isn't tasked with
> keeping our infrastructure and the data stored thereon secure should be
> able to decide this?

To be honest, I am genuinely surprised that there are not more complaints about that. I would have expected lots more. (On kde-community, there are a few posts by Christoph Cullmann worrying about the impact on new contributors, but even he does not seem to be opposed to 2FA for KDE developers. Other than that, I do not see any kind of criticism either.) Unfortunately, it seems that people have learned to put up with pretty much any annoyance in the name of "security". (I blame airport "security".)

> I for one applaud the requirement to use 2FA on invent. I would love to
> see this on more websites.

That just confirms that this is NOT actually an "industry standard best practice" as Ben Cooksley is claiming, but a completely non-standard PITA that only a handful of websites dare imposing on their users. (Invent is the ONLY website that I use that requires this. Note that I do not use online banking, and the ever-increasing security theater banks are imposing is the main reason why. There is a reason mandatory 2FA has not caught on outside of the banking sector.) A lot of websites allow users to opt into 2FA (letting the security nerds have their toy to play around with without bothering the rest of the world), but forcing it down our throats is a wholly different matter.

> And, for what it's worth, since invent keeps personal information and
> since the GDPR requires using state-of-the-art technology to protect
> personal information, using 2FA is, in my opinion (but I'm not a lawyer),
> a must for any website that stores personal information.

See above, almost nobody else does this, so that interpretation of the GDPR is pure nonsense.
Kevin Kofler
Re: Gitlab update, 2FA now mandatory
Ben Cooksley wrote:
> On Mon, Oct 24, 2022 at 3:36 AM Kevin Kofler wrote:
>> IMHO, this is both an absolutely unacceptable barrier to entry and a
>> constant annoyance each time one has to log in.
>
> You shouldn't have any issues with remaining logged in as long as your
> browser remains open.

I wrote "each time one has to log in", not "remaining logged in". I sure hope that I have to jump through the 2FA hoops only once per log in and not several times. But that is still one time too many.

And "as long as your browser remains open" is at most one day. I turn the computer off while I sleep. So if this change forces me to log in each time I restart the browser, and hence at least each time I restart the computer (which is currently *not* the case, I can remain logged in for days throughout hundreds of browser sessions), that would mean going through the 2FA procedure at least every day.

> I did not supply a list of applications that people should be using as
> there is a diverse range of devices and appstore ecosystems in use by
> different people, and I don't have access to hardware such as a PinePhone
> to validate any of that.

So you are single-handedly forcing a new requirement on everyone, but are not willing to help us in any way with it, even just by telling us how to fulfill it. That is very unhelpful.

And you conveniently evaded my main questions:
* why such a change can be decided by one person suddenly on a Sunday morning, with no warning (well, the software "gracefully" gives us 2 days to comply… only two days!), let alone (transparent) discussion.
* what the point of two-factor is at all considering that you have no way to prevent the developer from storing the password and the OTP generator on the same device.

In short, the 2FA requirement is unacceptable and needs to be disabled immediately.

Kevin Kofler

PS/OT:
> For most people the set of addresses they will be logging in from won't
> change much (given that the vast majority of people use always-on internet
> connections now, which means IP addresses - even if theoretically dynamic
> - are in practice fairly static).

"Fairly static" does not mean it never changes, as in my case. But we need not discuss this tangent any further. The mandatory 2FA nonsense is the real issue, let us please focus on that.
Re: Gitlab update, 2FA now mandatory
PS:

Kevin Kofler wrote:
> Ben Cooksley wrote:
>> I have also enabled Mandatory 2FA, which Gitlab will ask you to configure
>> next time you access it.
>
> IMHO, this is both an absolutely unacceptable barrier to entry and a
> constant annoyance each time one has to log in.

Why is such a major policy change that affects all KDE developers taken overnight by a single person, with no discussion or vote of any kind?

Kevin Kofler
Re: Gitlab update, 2FA now mandatory
Hi,

Ben Cooksley wrote:
> As part of securing Invent against recently detected suspicious activity

What kind of suspicious activity would that be? Yesterday, Invent even considered it "suspicious" enough to send a warning e-mail that my semi-static IP address (TV-cable broadband ISP) has changed after several months. Dynamic IP addresses are not exactly unusual.

> I have also enabled Mandatory 2FA, which Gitlab will ask you to configure
> next time you access it.

IMHO, this is both an absolutely unacceptable barrier to entry and a constant annoyance each time one has to log in.

> This can be done using either a Webauthn token (such as a Yubikey) or TOTP
> (using the app of choice on your phone)

What am I expected to use with my PinePhone? Does https://apps.kde.org/keysmith/ work?

And how do you intend to prevent users from running the TOTP app on the same device as the web browser (both on the smartphone or even both on the desktop/notebook)? You just cannot. (As far as I know, even Yubikeys can be emulated in software.) Two-factor is a farce.

Kevin Kofler
Re: Move Licentia to KDEREVIEW
On Monday, 8 August 2022 21:05:44 CEST, David Redondo wrote:
> Do the licenses really require these specially named files? Note that all
> used licenses in the repo are all included in the repos in a LICENSES
> folder. it would be a massive oversight and afaik there were never
> complains from fedora when this was started three years ago
> (https://phabricator.kde.org/T11550)

The licenses do not require specific file names, so as long as *every* repository includes the licenses relevant for that repository in a LICENSES/ folder, and as long as these licenses are included in the tarball, you are good to go. You just need to ensure that *every* tarball contains all licenses used by any code in the same tarball. (Then they should also end up in every source and binary RPM, but that is the distribution's job.) Where exactly does not matter. (But it is not enough to point to some dependency's copy of the license, the license really has to be included with every tarball.)

Kevin Kofler
Re: Is there (going to be) an auto-retracer service for KDE?
Lyubomir Parvanov wrote:
> Currently to be able to submit bug reports with stacktraces one has to
> install the debug symbols locally for each package. As far as I know Gnome
> doesn't suffer from this issue because Apport is invoked and when it
> uploads the error report it is automatically retraced with the debug
> symbols on Canonical servers.

Retracing on a server implies uploading a core dump from the client to that server, which, considering that a core dump can contain all sorts of sensitive data, I consider an unacceptable privacy invasion.

In addition, core dumps can be huge, which means uploading them can actually take longer than downloading the debugging information on asymmetric consumer broadband, and which also means it can come out expensive on metered or capped Internet connection plans.

Kevin Kofler
Re: MauiKit and Index review
Camilo Higuita Rodriguez wrote: > To start I want to submit MauiKit and Index for review, and later on the > other apps. > https://invent.kde.org/maui/mauikit > https://invent.kde.org/maui/index Looks like the correct link is actually: https://invent.kde.org/maui/index-fm Unfortunately, Index is an extremely generic name. Kevin Kofler
Re: KDEREVIEW: Proposing utilities/markdownpart to become community/release-service-managed
Ivan Čukić wrote: > markdownpartfactory.cpp:45 > > Personal preference - use `auto` when you have `= new Something(:::)` on > the right - no need for `Something` to be written twice: > Something* p = new Something(...) > > The variable part can even go away - just return new MarkdownPart. > > Similar lines exist in markdownpart.cpp, though there you use auto almost > in all these cases. IMHO, this just makes the code harder to read (as do most uses of "auto"). Kevin Kofler
Re: Gitlab Turn-off Issues
Ben Cooksley wrote: > I should note that you can hardly call Gitlab crippled with the > feature set it offers in the Community Edition, that is a gross insult > towards what it provides capability wise. The fact is that the Community Edition deliberately omits some features that are available in the proprietary enterprise editions, which is exactly the definition of crippleware. > 3) Has a small upstream that isn't particularly open to external > contributions [snip] > I think that is a pretty good list of reasons as to why we started > moving away from Phabricator, especially (3). The thing is, GitLab is also not particularly open to external contributions, in particular, they are not going to add features to the Community Edition that they deliberately omitted, and if you submit a new feature, they can also decide to restrict it to enterprise customers. So seeing this stated as the main reason for having switched to GitLab strikes me as very odd. Kevin Kofler
Re: Gitlab Turn-off Issues
Nicolás Alvarez wrote: > Apparently that's not available in GitLab's free edition: > https://docs.gitlab.com/ee/user/project/description_templates.html#setting-a-default-template-for-merge-requests-and-issues--starter :-( The cripplewareness of GitLab is a real problem, especially because everyone depends on this new monopoly now. (I see more and more big Free Software projects moving to it.) I think we really need either an uncrippled fork of GitLab or the high-profile users switching to really Free, not crippled, forges. (What was wrong with Phabricator?) Kevin Kofler
Re: Move Koko to KDEReview
Carl Schwan wrote: > I asked a few packagers I know and for them, since the packagers can > download the files and put them into the tarball, it should be fine. Then why don't you put them into the tarball? Or should they get packaged as a separate package that you can then depend on? > But they also said that it would be way better to have it fetched as run > time, While that would at least not fail the build in Fedora, packages downloading additional files at runtime that they need to be able to do anything is somewhat frowned upon around here. (Whether you will get away with it depends on what the files do and what license they fall under.) I personally do not think that it is acceptable for an image viewer to require an Internet connection at runtime. Your proposed solution only moves the problem instead of solving it. Kevin Kofler
Re: Move Koko to KDEReview
Adriaan de Groot wrote: > On Thursday, 11 June 2020 23:43:52 CEST Albert Astals Cid wrote: >> I'm kind of unsure how i feel about it downloading things on cmake time. > > A fair number of distro's / packagers will go "um nope" (if the package > building machines even *have* an internet connection during configure / > build stages). Yep, this is an absolute no-go in Fedora. The build system (Koji) has all Internet access deliberately blocked. Packages MUST NOT attempt to access the Internet during builds. No exceptions. Kevin Kofler
Re: CI talk (Was: re: Manner in which kde-gtk-config development is conducted)
Ben Cooksley wrote: > I'm unhappy with them due to how they handled the complete disaster > that was a significant version update to a core system library (libc I > think) which they did in a stable, released distribution. I cannot really speak for that part, but for the following part: > It was at this point that I had finally had enough of Fedora (having > previously had to deal with their internal politics over the packaging > of QtWebEngine, which meant we ended up having to use Qt 5.8 rather > than the Qt 5.9 which we had initially targeted as the 5.9 packaging > was blocked) and dumped them for SUSE, the images that we continue to > use today. the technical issue back then was that QtWebEngine was heavily patched in Fedora, in particular, in order to support x86 machines without SSE2. (Yes, these exist.) All the patches, including the huge no-sse2 patch, had to be ported from release to release by one person, who happened to be me. (Needless to say, I was strictly opposed to pushing an update removing support for some hardware to a stable release, as I hope you can understand. Hence, rebasing the no-sse2 patch was a blocker.) As soon as the patch rebases were ready, the package update was submitted. These days, QtWebEngine is not as heavily patched anymore, and in particular, we no longer attempt to support x86 without SSE2, because Fedora has dropped support for it distro-wide. In addition, I have since resigned as the primary maintainer of QtWebEngine. You will occasionally find me committing some fixes, but I am no longer the point of contact nor the one doing most of the work. QtWebEngine in Fedora is now maintained by Rex Dieter. And the comaintainer who attempted (and failed) to do the QtWebEngine 5.9 upgrade at your request has since left Fedora entirely. So the political issue should also be resolved to your liking. 
I hereby formally apologize for any extra work the delays in getting Qt 5.9 out (and the possible miscommunications around them – I do not know what the former comaintainer may have promised to you without consulting me first) may have caused to you. It will never happen the same way again. Kevin Kofler
Re: Keysmith in kdereview
Albert Astals Cid wrote: > I think it'd be good if you used a QVarLengthArray instead of "char > code[m_pinLength];" Sadly, QVarLengthArray is much less efficient, because it cannot do variable-size stack allocation, only fixed-size stack allocation (wasting stack space) and heap allocation for anything that does not fit (wasting the entire stack space and slowing down the application). But unfortunately, it is the only thing that will compile on M$VC and other stubborn compilers that refuse to implement VLAs and whose developers have successfully sabotaged all attempts to bring this standardized C99 feature into the C++ standard (and even got C11 to make it optional for C too). Kevin Kofler
Re: Blacklisting of PIM from the CI system
Harald Sitter wrote: > Random thought to make dependency problems more obvious: glue the git > sha into the cmake package version for frameworks (when built from > git) so it finds "5.65.0.abcd123" given us a lead on what is actually > available. The git SHA is not going to work for this, because it is not monotonic. What you really want to know is whether you have something >= 5.65.0.abcd123, and having a 5.65.0.commithash version is not going to tell you that. Kevin Kofler
Re: rekonq to unmaintained
Harald Sitter wrote: > Its master branch is still kdelibs4, the frameworks branch hasn't seen > any progress in 21 months. Hasn't had a release in years. Very > unmaintained all in all. And there is also a working, maintained successor (Falkon). Kevin Kofler
D8810: Do not look for kioslave binary in applicationDirPath on *nix (#386859)
kkofler added a comment. If you ask me (there are 2 Kevins on this bug), I think this will work. I think it's kinda overkill (it pretty much defeats the point of having version-specific libexecdirs if we end up renaming the binaries anyway), but if it works (which should be the case), I'm OK with it. REPOSITORY R241 KIO REVISION DETAIL https://phabricator.kde.org/D8810 To: cullmann, #frameworks, kfunk, kkofler, dfaure Cc: kde-frameworks-devel, dfaure, ngraham, broulik, kfunk, LeGast00n, GB_2, michaelh, bruns
D8810: Do not look for kioslave binary in applicationDirPath on *nix (#386859)
kkofler added a comment. Herald added a subscriber: kde-frameworks-devel. Ping? This has been stuck for over a year now. REPOSITORY R241 KIO REVISION DETAIL https://phabricator.kde.org/D8810 To: kkofler, #frameworks, kfunk, cullmann Cc: kde-frameworks-devel, dfaure, ngraham, broulik, kfunk, michaelh, bruns
Re: Missing newline check
David Faure wrote: > IMHO text editors should (and most do) just ensure a newline is present at > the end so that this all works without human intervention. Well, there are cases where you don't want your editor to mess with existing files that way. The right thing to do is to make this feature a preference as in the KatePart. Kevin Kofler
Re: Falkon in kdereview
Adriaan de Groot wrote: > That said, the FreeBSD CI VMs should have debug symbols, so we'll have to > look at that since -- as you notice -- it makes the CI less useful for > application developers. QtWebEngine is special: it is built without debugging information for all the Chromium code by default. Make sure you build your QtWebEngine with: CONFIG += "webcore_debug v8base_debug force_debug_info" This will build Chromium code with -g1 debuginfo. To get the normal -g2 (the default if you pass just -g, at least in GCC), you can try: sed -i -e 's/symbol_level=1/symbol_level=2/g' src/core/config/common.pri but this may cause the linker to run out of memory, and it can also crash other tools (e.g., the eu-strip tool that Fedora uses to split the debuginfo into a separate file crashes on it). So caveat emptor. Another possibility is that the crash is in JITted code (e.g., JavaScript compiled by the V8 JIT), in which case you cannot possibly get a useful backtrace at all, no matter how much debugging information you enable. Kevin Kofler
Re: Python bindings using cppyy (was: An update on Python bindings)
Philipp A. wrote: > No, because you’re missing something here: There’s no KF5 bindings. So > every project that’ll use Shaheed’s new cool KF5 bindings will be a new > project. There is PyKDE4 that people will want to port their legacy programs from. Kevin Kofler
D8810: Do not look for kioslave binary in applicationDirPath on *nix (#386859)
kkofler added a comment. And the historical reason the name was not suffixed is because the binary was moved to a private libexec subdirectory to prevent conflicts. This worked fine as long as `/usr/bin` did not end up in the search path (in particular, for the whole 4.x series). REPOSITORY R241 KIO REVISION DETAIL https://phabricator.kde.org/D8810 To: kkofler, #frameworks, kfunk, cullmann Cc: broulik, kfunk
D8810: Do not look for kioslave binary in applicationDirPath on *nix (#386859)
kkofler added a comment. In https://phabricator.kde.org/D8810#167568, @cullmann wrote: > Yeah, that was done to be able to provide win/linux install bundles. A GNU/Linux install bundle should have a reparented FHS-like tree, possibly wrapped in an image file (Flatpak/Snap/AppImage). In such a tree, the libexec binaries will never be in `$prefix/bin` (the applicationDirPath), but always somewhere under `$prefix/libexec` or `$prefix/lib`. Windows and macOS are different and I believe you when you say that looking in the applicationDirPath makes sense there. REPOSITORY R241 KIO REVISION DETAIL https://phabricator.kde.org/D8810 To: kkofler, #frameworks, kfunk, cullmann Cc: broulik, kfunk
D8810: Do not look for kioslave binary in applicationDirPath on *nix (#386859)
kkofler updated this revision to Diff 22338. kkofler retitled this revision from "Do not look for kioslave binary in applicationDirPath (#386859)" to "Do not look for kioslave binary in applicationDirPath on *nix (#386859)". kkofler edited the summary of this revision. kkofler added a comment. So, since this line makes sense on Windows/Mac, let's only get rid of it on Q_OS_UNIX then. REPOSITORY R241 KIO CHANGES SINCE LAST UPDATE https://phabricator.kde.org/D8810?vs=22305&id=22338 REVISION DETAIL https://phabricator.kde.org/D8810 AFFECTED FILES src/core/slave.cpp To: kkofler, #frameworks, kfunk, cullmann Cc: broulik, kfunk
D8810: Do not look for kioslave binary in applicationDirPath (#386859)
kkofler created this revision. kkofler added a reviewer: Frameworks. Restricted Application added a project: Frameworks. REVISION SUMMARY src/core/slave.cpp (Slave::createSlave): Do not look for the kioslave binary in QCoreApplication::applicationDirPath(). In distribution packages, this ends up looking for the binary in /usr/bin, which is where the legacy kdelibs 3 installed its kioslave binary. As a result, we end up invoking the kdelibs 3 kioslave binary with our KF5 KIO slave and crashing it due to the mixed Qt/KDE libraries. BUG: 386859 REPOSITORY R241 KIO REVISION DETAIL https://phabricator.kde.org/D8810 AFFECTED FILES src/core/slave.cpp To: kkofler, #frameworks
Re: Python bindings using cppyy (was: An update on Python bindings)
Shaheed Haque wrote: > As promised, here is an interim update on the investigation into the > use of cppyy-based bindings for KF5 (and more...) instead of SIP-based > bindings. > > The first thing is that the underlying technology of cppyy, > cling/ROOT, has been under development at CERN for quite a while. It > directly reads regular C++ files (there is no intermediate format like > SIP). Unfortunately, if I understand correctly, it does that at runtime, lazily generating bindings only when needed the first time. They explicitly advertise that as a feature. But, in addition to the performance concerns I have with that, it turns deployment into a nightmare. It means we cannot just ship precompiled bindings in the distros, but our users will have to install the whole LLVM stack, and a special forked version at that (so it cannot even be shared with other LLVM users such as Mesa/Gallium3D/llvmpipe). ROOT is a huge bloated framework that is not exactly reputed for being simple to package. Kevin Kofler
Re: How to install icons for multiple themes
Albert Astals Cid wrote: > Humanity does not, ubuntu-mono does not either. Of course Ubuntu has to do everything its own way. But I doubt KDE applications would want to install to those themes (through ECM) anyway. The concrete problem at hand can easily be fixed in the Breeze icon theme. Kevin Kofler
Re: How to install icons for multiple themes
Matthias Klumpp wrote: > And since the spec allows themes to define arbitrary layouts, there is > technically nothing wrong with the Breeze theme. IMHO, doing things differently just because you can, even if it breaks KDE's own ECM macros, is not helpful. Kevin Kofler
Re: How to install icons for multiple themes
Milian Wolff wrote: >> > ECM installs to e.g.: >> > >> > /home/milian/projects/compiled/other/share/icons/breeze/16x16/apps/ >> > hotspot.svgz >> > >> > Strace shows me the nearest match it looks into: >> > /home/milian/projects/compiled/kf5/share/icons/breeze/apps/16/hotspot.svg Pretty much ALL the other icon themes around use the 16x16/apps naming scheme. I think it even used to be mandated by the fd.o spec. At least all the ones I have installed (except for Breeze) do so, in alphabetical order: Adwaita, Bluecurve, crystalsvg, gnome, hicolor, HighContrast, locolor, oxygen. I am pretty sure most other themes around on the network do, too. Breeze is really the odd one out. > sorry for the delay and thanks for the response. The above makes me wonder > about the functionality of ecm_install_icons. As it stands, it is > completely broken for anything but the hicolor theme, don't you agree? I > would say we should fix it there, somehow... As you can see above, it is only really broken for the Breeze theme. Instead of removing useful functionality from ECM, wouldn't it make more sense to fix the Breeze paths to match the other themes? The Fedora package actually creates a whole hierarchy of empty directories such as 16x16/apps under Breeze: http://pkgs.fedoraproject.org/cgit/rpms/breeze-icon-theme.git/tree/breeze-icon-theme.spec#n82 I think it would really be helpful to make Breeze match the de-facto standard directory hierarchy. Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
Kevin Kofler wrote: > The new xkbcommon requirement is also an issue for us in Fedora. The new > xkbcommon 0.7.0 is only available in Rawhide and it is doubtful that it > will ever be backported to Fedora 25 or 24. So, if we cannot get > libxkbcommon updated, we will either be unable to provide your new Plasma > releases for our stable releases, or we will be forced to add a patch to > revert your dependency bump, as we already had to do for Qt in the past: > http://pkgs.fedoraproject.org/cgit/rpms/qt5-qtbase.git/tree/qtbase-opensource-src-5.3.2-old_xkbcommon.patch?id=e26fd4ffda851bfe13d975547e218e16f72ce556 > (Yes, this reintroduced a bug. It was just not possible to fix that bug > with the xkbcommon that was available in the Fedora 19 release. Fedora 20 > eventually got the xkbcommon update, so we were able to drop this patch > there. We still had to patch Qt for the old xcb-xkb (which could not be > updated due to a soname bump) though.) For the record, xkbcommon was updated to 0.7.1 in Fedora 25 as a response to https://bugzilla.redhat.com/show_bug.cgi?id=1414493 , so this particular dependency is no longer an issue for this particular Fedora release. Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
Martin Gräßlin wrote: > I think that is a reasonable suggestion. If distros patch our > dependencies we need to consider this as a fork. And a fork should be > called a fork. It needs to be clear that KDE is not responsible for any > issues caused by the fork and thus the complete product must be renamed. > > Also if a component like KWin gets forked this means that the complete > product Plasma has to be considered as forked. Oh dear, no! The good thing about KDE has always been that it has allowed downstream modifications in an unbureaucratic way, allowing full use of the KDE trademark and other related names even for modified versions. Switching to a Firefox-style policy where every single modification has to be OKed by Mozilla (in your case, by KDE) would make your software a real pain to distribute, especially since you want to disallow modifications absolutely necessary to make your software work on some distributions. (Sure, in this particular case, xkbcommon can be updated, if distribution policies allow it. But you are talking about dependency changes in general, which can have bumped sonames or other incompatibilities.) Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
Martin Gräßlin wrote: > I'm sorry, but we need exceptions. Shit happens, sometimes not > everything is working as flawless as we want. If the quality of our > product is in danger, it doesn't matter anymore what policies are. The > patches to fix it will be pushed. No matter whether the process was kept > or not. > > So let's better think ahead of the possible exceptions and clearly write > down what would allow for an exception instead of then when it happens > to have nasty discussion. Common sense would dictate that you would have to ask the CI sysadmins for an exception, and if the process (of course you have to wait for their answer, and ideally for them to update the CI) makes you miss the feature freeze, ask the release team for an exception to the freeze. Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
Alexander Neundorf wrote: > As far as is know there is no such list of "supported releases of common > distributions". > If the rule is that it is Ok to require new versions of libs, because > future distro releases will have it, this list would basically be empty. With "supported releases", I meant supported by the distribution. E.g., for RHEL/CentOS, that is currently RHEL/CentOS 5, 6 and 7. For Fedora, that is currently Fedora 24 and 25. Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
Nicolás Alvarez wrote: > It is not true that users will be no worse off. An application could > increase the dependency of libfoo to 1.3 and add code using a feature that > was broken in 1.2. If you then revert the version bump, you get code that > uses the new feature but allows libfoo 1.2, where it's broken. Users are > now worse off than if you had stuck to the old version. Sure, that can happen (that the code will build just fine against the old library, but not actually run properly), but that is not the common case. The common case is that the new library version is used for an API addition, and that reverting the dependency bump in the application will necessarily also revert the application code using the new library API (because otherwise it won't build) and restore the known state from the previous release of the application. (This can reintroduce bugs, but only ones which were already in the previous release.) As I understand it, this is exactly the situation we are in with KWin and xkbcommon now. Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
Jan Kundrát wrote: > do you have some examples of distribution maintainers actually doing such > a stupid thing? I've done it more than once. If the dependency that the latest upstream version wants is not available and will not be made available for whatever reason, reverting the dependency bump is really the only way. And the users will be no worse off than if we had stuck to the old version that did not have the changes I am reverting to begin with. I have already given an example, from when I was still maintaining the core Qt 5 packages in Fedora: http://pkgs.fedoraproject.org/cgit/rpms/qt5-qtbase.git/tree/qtbase-opensource-src-5.3.2-old_xkbcommon.patch?id=e26fd4ffda851bfe13d975547e218e16f72ce556 (for Fedora 19, and for Fedora 20 until xkbcommon got updated there). I am not a maintainer of Qt 5 nor KWin in Fedora anymore, but I would still be happy to give assistance with coming up with such a patch IF the maintainers ask me. (I find producing such patches very easy.) But to be clear, all this is hypothetical planning for future releases, and the maintainers may decide to do something different (e.g., to upgrade Plasma/KWin only in a Copr also offering a newer xkbcommon), or xkbcommon might get updated anyway (there is already a build for Fedora 25, it was just not submitted in Bodhi, so I don't know what the plans are there). So, to make it clear, Fedora at this time DOES NOT REVERT any dependency bump in KWin! > In my professional epxerience, the distro maintainers that I have worked > with were reasonable people who invest time into doing valuable QA and > packaging duties. Surely there's no place for "hey, let's go break this > code" as your proposal suggests. What you call "break", I call "make work". It is that or not have the code at all. And the existing old version was already "broken" in the same way. Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
Martin Gräßlin wrote: > You know what happens when we ifdef the version of dependencies? Thinks > break in distributions. They ignore the optional dependency and ship > with the older one. Which results in issues we upstream developers have > to care about. The quality of our product goes down and users complain > about the lousy quality of plasma and the distribution. What will happen now is that they will revert your commits that require the unavailable version of the library. It is just more work for us packagers (instead of one upstream developer maintaining a simple #ifdef, every distro will have to maintain the reversion patch individually) and will not change anything for what the users ultimately get (the output will likely be bit- identical to what #ifdef would produce). The distros that do have the latest xkbcommon already available will have things just work no matter whether you require the latest or #ifdef it. > The paramount issue resulting from it is the maintainer of a well known > KDE distribution stepping down from his job complaining loudly in public > about the lousy quality of KDE. I still remember the issues we had > especially with Fedora due to incorrect dependencies in the early 5.x > times. So if you reread my old mail from back then, the main complaint I had about the quality of KDE software was the PITA that it has become to package it. Unrealistic system-level dependencies are a part of that problem. (The worst, though, is the insane level of splitting that has produced an unmaintainable number of packages, but that is a separate topic. However, that is the main reason I can no longer comaintain the entire KDE software stack in Fedora, not the quality of what's inside those packages.) Now it is true that the quality of some KDE software was also less than impressive. 
The situation in the Plasma workspaces (including KWin) has improved a lot since (My complaint came after the first 2 Fedora releases shipping Plasma 5.), but those were never the main offender to begin with. Akonadi is the biggest pain point when it comes to the quality of the software itself, and that is unrelated to the dependency issues you mention. Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
Martin Gräßlin wrote: > Everybody except I. I would have to maintain that mess. And I don't have > time to maintain multiple compile time paths. I don't see how it would be any more work to maintain #if FEATURE as compared to #if 0. Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
Martin Gräßlin wrote: > #if 0 > // code already written, but not enabled as CI doesn't have it > #endif If you already have this, then why don't you just write: #ifdef HAVE_FOO_1_23 or #if HAVE_FOO_1_23 with an appropriate #cmakedefine instead of #if 0? Then things will just work and everybody will be happy. Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
Martin Gräßlin wrote: > Email threads don't work to codify such requirements. What we need is > something like an "announce new dependency to sysadmin freeze" prior to > the dependency freeze in the release schedule. That's what I mean with > codifying it. We need to have it in a way where devs actually check. > It needs to be part of the process. An old email thread cannot be part > of the process. IMHO, the rule should be: If you need a version of a system-level dependency (such as xkbcommon – things that you can't just expect the KDE packagers to upgrade willy nilly) that is not available (as an official stable update) in the OLDEST supported releases of common distributions, you MUST #ifdef it. Then there will also be no problem for the CI. At most, you can argue the exact list of common distributions, but surely something as fast-moving as Fedora should be included. Ideally, you'd take RHEL, i.e., support everything back to the oldest still supported release of RHEL, which is currently RHEL 5 from 2007 (EOL on March 31, 2017). But there is room for compromise there. (For example, you could opt to support only the LATEST RHEL release, currently RHEL 7 from 2014. Or you could ignore RHEL entirely and only consider fast-moving distros such as Fedora.) Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
Alexander Neundorf wrote: > there would be the pragmatic solution to (I know distros don't like that > etc. etc.) to include a copy of the required xkbcommon library and link it > statically if no matching version is found on the system. There could be > an extra cmake switch to enable it... For the record, that's how Qt 5 is actually built for EL7. But yeah, it's frowned upon in Fedora. (RHEL/EPEL is more pragmatic because they know the system libraries are old there. Though the rules against bundling are not as strict in Fedora as they used to be, either. Still, we see it as a last resort only.) Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
PS: I wrote: > Martin Gräßlin wrote: >> For packagers it should not matter at all. This is the most common >> situation for distribution. And in the release of e.g. Plasma they have >> to handle this for hundreds of updated dependencies. >> >> It's also not unexpected, because we have a dependency freeze in place >> prior to the release, thus the dependencies are announced way ahead. >> >> It's only a "problem" for distros if they want to backport to an older >> release as in the case of Fedora. Honestly I consider this unreasonably. >> If you don't have a problem with backporting hundreds of packages I don't >> think that the one additional package is a problem. > > The problem is that core packages such as xkbcommon are maintained by > different people than the Qt/KDE stack. It is not always possible for the > KDE maintainers to get such non-KDE dependencies updated in stable > releases (or in the worst case, even in Rawhide). An even more extreme example is Rex Dieter's efforts to provide current KDE packages for RHEL. There is little to no chance to getting a package such as xkbcommon updated in RHEL itself. So the repository ends up having to update not only software from KDE, but also several other packages. Obviously, the idea is to keep those at a minimum, not to replace half of the distro! It kinda defeats the point of rebuilding the KDE packages for RHEL if at the end you are essentially running the latest Fedora with .el7 disttags. KDE software used to be much less demanding of the base system than the competition, it used to be easy to provide the software even for fairly old base systems such as RHEL n or even RHEL n-1. This has become much worse lately, with dependencies on bleeding-edge versions of: xkbcommon, Wayland libraries, etc. (And KWin is one of the worst offenders there, though definitely not the only one.) Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
Martin Gräßlin wrote: > For packagers it should not matter at all. This is the most common > situation for distribution. And in the release of e.g. Plasma they have to > handle this for hundreds of updated dependencies. > > It's also not unexpected, because we have a dependency freeze in place > prior to the release, thus the dependencies are announced way ahead. > > It's only a "problem" for distros if they want to backport to an older > release as in the case of Fedora. Honestly I consider this unreasonably. > If you don't have a problem with backporting hundreds of packages I don't > think that the one additional package is a problem. The problem is that core packages such as xkbcommon are maintained by different people than the Qt/KDE stack. It is not always possible for the KDE maintainers to get such non-KDE dependencies updated in stable releases (or in the worst case, even in Rawhide). Kevin Kofler
Re: CI Requirements - Lessons Not Learnt?
Ben Cooksley wrote: > On Thu, Jan 5, 2017 at 10:28 PM, Martin Gräßlin wrote: >> It should be rather obvious that we don't introduce new dependencies >> because we like to. There is a very important software reason to it. >> That's the case for the xkbcommon dependency increase. Should I have let >> the code broken as it was, expecting half a year of bug reports till >> build.kde.org has the base upgraded to Ubuntu 16.04? > > That's what #ifdef is for... +1 The new xkbcommon requirement is also an issue for us in Fedora. The new xkbcommon 0.7.0 is only available in Rawhide and it is doubtful that it will ever be backported to Fedora 25 or 24. So, if we cannot get libxkbcommon updated, we will either be unable to provide your new Plasma releases for our stable releases, or we will be forced to add a patch to revert your dependency bump, as we already had to do for Qt in the past: http://pkgs.fedoraproject.org/cgit/rpms/qt5-qtbase.git/tree/qtbase-opensource-src-5.3.2-old_xkbcommon.patch?id=e26fd4ffda851bfe13d975547e218e16f72ce556 (Yes, this reintroduced a bug. It was just not possible to fix that bug with the xkbcommon that was available in the Fedora 19 release. Fedora 20 eventually got the xkbcommon update, so we were able to drop this patch there. We still had to patch Qt for the old xcb-xkb (which could not be updated due to a soname bump) though.) >> Similar for Mesa 13 which I'm also eagerly waiting for build.kde.org to >> fetch it. > > Mesa 13 is news to me. This is also an issue for us: Mesa 13 is only available in Fedora 25 (updates) and Rawhide, not in Fedora 24. Kevin Kofler
Re: Dropping kdelibs4-based applications in KDE Applications 17.12
Christoph Feck wrote: > Will we still release the KDE4 platform for not-yet-ported extragear > applications (Amarok etc.) with 17.12? > > If we stop releasing it, then we should also move all unported > applications to 'unmaintained'. Any developer willing to port can > surrect it from there. Even if you stop releasing kdelibs, distros will keep shipping it for a long time yet, some for a lot longer. (E.g., we still ship even kdelibs3 in Fedora, and I have no plans to let it go.) Kevin Kofler
Re: Review Request 129233: [kdelibs] Make Qt4 WebKit optional (default on)
smokeqt-0:4.14.3-7.fc24.i686 smokeqt-0:4.14.3-7.fc24.x86_64 timetablemate-0:0.10-0.15.20111204git.fc24.x86_64 tomahawk-0:0.8.4-12.fc26.x86_64 tomahawk-libs-0:0.8.4-12.fc26.i686 tomahawk-libs-0:0.8.4-12.fc26.x86_64 vtk-qt-0:6.3.0-11.fc26.i686 vtk-qt-0:6.3.0-11.fc26.x86_64 Kevin Kofler
QtWebEngine on Wayland (was: Re: Test your applications on Wayland)
Hi, one major issue is that QtWebEngine does not work on Wayland at this time, at least not if it is built with desktop OpenGL. See: https://bugreports.qt.io/browse/QTBUG-55384 This affects kdepim applications (at least KMail) and also some Qt applications developed outside of KDE infrastructure (QupZilla, the Calamares webview module, etc.). Kevin Kofler
Re: Review Request 128437: raise to core dump handlers when drkonqi is done
> On Juli 13, 2016, 12:43 nachm., Kai Uwe Broulik wrote: > > +1 to the idea > > > > However, will this mean we get this awful apport "something crashed, now or > > in the past" tray icon in addition to Drkonqi? > > Harald Sitter wrote: > yes. its upon ubuntu to make that go away though as their use case is > more involved. > > Harald Sitter wrote: > To expand. here's how I did this for kubuntu in kdelibs4: > https://community.kde.org/Kubuntu/QA/Whoopsie tldr: disable apport UI > (replaced by drkonqi) but retain whoopsie (crash report sending to > errors.ubuntu.com). As I mentioned, this needs dealing with in ubuntu since > they essentially attach a UI to coredumps, so either that or drkonqi need > disabling on their platform. > > Kevin Kofler wrote: > The problem with the duplicate UI will also affect Fedora/RHEL's ABRT and > probably also other similar crash handlers. In addition, I am not convinced > automatic crash report sending to a downstream tracker (i.e., the part of the > Apport functionality you retain in Kubuntu) is helpful. (I like how DrKonqi > sends the reports directly to the actual developers.) > > Harald Sitter wrote: > We keep drkonqi, this is an add-on to what we have. Distros being weird > and thinking they know better than us is hardly our problem. But why would we want the bugs that DrKonqi already handles forwarded to the distro's weird thing? You can hardly blame the distro's crash handler for reporting the bugs it intercepts in whatever way it wants (UI dialog, downstream tracker, etc.). How should it know that the crash was already handled elsewhere (i.e., in this case, in DrKonqi) if you rethrow it? - Kevin --- This is an automatically generated e-mail. To reply, visit: https://git.reviewboard.kde.org/r/128437/#review97345 --- On Juli 13, 2016, 12:40 nachm., Harald Sitter wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://git.reviewboard.kde.org/r/128437/ > --- > > (Updated Juli 13, 2016, 12:40 nachm.) 
> > > Review request for KDE Frameworks. > > > Repository: kcrash > > > Description > --- > > with the rise of useful core dump handlers such as systemd's coredump and > ubuntu's apport it is no longer useful to handle things exclusively in > drkonqi. it bypasses sysadmins as well as distros in debugging efforts, > putting the entire flow of information on us. > > the new behavior instead checks if a core pattern executable is set and if > so re-raises the signal so the kernel jumps in and invokes the handler. > > (this unfortunately means that the core will contain our kcrash frames, > but that seems hardly avoidable) > > > Diffs > - > > autotests/CMakeLists.txt e442520269835df71968bf7818aa34bd8bd945cf > autotests/core_patterns/exec PRE-CREATION > autotests/core_patterns/no-exec PRE-CREATION > autotests/coreconfigtest.cpp PRE-CREATION > src/CMakeLists.txt e733be69c6ca6e6c1a0608c8910cf4a9b52ffcc9 > src/coreconfig.cpp PRE-CREATION > src/coreconfig_p.h PRE-CREATION > src/kcrash.cpp b8c6477a70291ca9c1f0efef3bba061b6af247b0 > > Diff: https://git.reviewboard.kde.org/r/128437/diff/ > > > Testing > --- > > builds and passes > > > Thanks, > > Harald Sitter > > ___ Kde-frameworks-devel mailing list Kde-frameworks-devel@kde.org https://mail.kde.org/mailman/listinfo/kde-frameworks-devel
Re: Review Request 128437: raise to core dump handlers when drkonqi is done
> On Juli 13, 2016, 12:43 nachm., Kai Uwe Broulik wrote: > > +1 to the idea > > > > However, will this mean we get this awful apport "something crashed, now or > > in the past" tray icon in addition to Drkonqi? > > Harald Sitter wrote: > yes. its upon ubuntu to make that go away though as their use case is > more involved. > > Harald Sitter wrote: > To expand. here's how I did this for kubuntu in kdelibs4: > https://community.kde.org/Kubuntu/QA/Whoopsie tldr: disable apport UI > (replaced by drkonqi) but retain whoopsie (crash report sending to > errors.ubuntu.com). As I mentioned, this needs dealing with in ubuntu since > they essentially attach a UI to coredumps, so either that or drkonqi need > disabling on their platform. The problem with the duplicate UI will also affect Fedora/RHEL's ABRT and probably also other similar crash handlers. In addition, I am not convinced automatic crash report sending to a downstream tracker (i.e., the part of the Apport functionality you retain in Kubuntu) is helpful. (I like how DrKonqi sends the reports directly to the actual developers.) - Kevin --- This is an automatically generated e-mail. To reply, visit: https://git.reviewboard.kde.org/r/128437/#review97345 --- On Juli 13, 2016, 12:40 nachm., Harald Sitter wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://git.reviewboard.kde.org/r/128437/ > --- > > (Updated Juli 13, 2016, 12:40 nachm.) > > > Review request for KDE Frameworks. > > > Repository: kcrash > > > Description > --- > > with the rise of useful core dump handlers such as systemd's coredump and > ubuntu's apport it is no longer useful to handle things exclusively in > drkonqi. it bypasses sysadmins as well as distros in debugging efforts, > putting the entire flow of information on us. > > the new behavior instead checks if a core pattern executable is set and if > so re-raises the signal so the kernel jumps in and invokes the handler. 
> > (this unfortunately means that the core will contain our kcrash frames, > but that seems hardly avoidable) > > > Diffs > - > > autotests/CMakeLists.txt e442520269835df71968bf7818aa34bd8bd945cf > autotests/core_patterns/exec PRE-CREATION > autotests/core_patterns/no-exec PRE-CREATION > autotests/coreconfigtest.cpp PRE-CREATION > src/CMakeLists.txt e733be69c6ca6e6c1a0608c8910cf4a9b52ffcc9 > src/coreconfig.cpp PRE-CREATION > src/coreconfig_p.h PRE-CREATION > src/kcrash.cpp b8c6477a70291ca9c1f0efef3bba061b6af247b0 > > Diff: https://git.reviewboard.kde.org/r/128437/diff/ > > > Testing > --- > > builds and passes > > > Thanks, > > Harald Sitter > > ___ Kde-frameworks-devel mailing list Kde-frameworks-devel@kde.org https://mail.kde.org/mailman/listinfo/kde-frameworks-devel
Re: Review Request 128437: raise to core dump handlers when drkonqi is done
> On Juli 13, 2016, 10:15 nachm., Kevin Kofler wrote: > > I am opposed to this change, because it spams downstream packagers with > > crash bugs they are usually not qualified to fix. Those few distributions > > that really do want to get the reports downstream (e.g. RHEL) already > > explicitly disable DrKonqi. > > Harald Sitter wrote: > That's not what the change does. It raises the crash to the downstream bug handler, which in turn will unfortunately encourage the users to file the bug in the downstream bug tracker. - Kevin --- This is an automatically generated e-mail. To reply, visit: https://git.reviewboard.kde.org/r/128437/#review97363 --- On Juli 13, 2016, 12:40 nachm., Harald Sitter wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://git.reviewboard.kde.org/r/128437/ > --- > > (Updated Juli 13, 2016, 12:40 nachm.) > > > Review request for KDE Frameworks. > > > Repository: kcrash > > > Description > --- > > with the rise of useful core dump handlers such as systemd's coredump and > ubuntu's apport it is no longer useful to handle things exclusively in > drkonqi. it bypasses sysadmins as well as distros in debugging efforts, > putting the entire flow of information on us. > > the new behavior instead checks if a core pattern executable is set and if > so re-raises the signal so the kernel jumps in and invokes the handler. 
> > (this unfortunately means that the core will contain our kcrash frames, > but that seems hardly avoidable) > > > Diffs > - > > autotests/CMakeLists.txt e442520269835df71968bf7818aa34bd8bd945cf > autotests/core_patterns/exec PRE-CREATION > autotests/core_patterns/no-exec PRE-CREATION > autotests/coreconfigtest.cpp PRE-CREATION > src/CMakeLists.txt e733be69c6ca6e6c1a0608c8910cf4a9b52ffcc9 > src/coreconfig.cpp PRE-CREATION > src/coreconfig_p.h PRE-CREATION > src/kcrash.cpp b8c6477a70291ca9c1f0efef3bba061b6af247b0 > > Diff: https://git.reviewboard.kde.org/r/128437/diff/ > > > Testing > --- > > builds and passes > > > Thanks, > > Harald Sitter > > ___ Kde-frameworks-devel mailing list Kde-frameworks-devel@kde.org https://mail.kde.org/mailman/listinfo/kde-frameworks-devel
Re: KRandom regression + fix
Albert Astals Cid wrote: > From my "i know nothing about random numbers", i guess it's hard to write > a unit test for a sequente of random numbers, you can get ten "3" in a row > and it's still a valid random sequence. https://xkcd.com/221/ ;-) Kevin Kofler
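That said, random output can still be unit-tested statistically: instead of asserting exact values, check the shape of the distribution with deliberately wide tolerances. A minimal Python sketch of the idea (unrelated to KRandom's actual implementation; the seed and tolerances are arbitrary choices):

```python
import random
from collections import Counter

# Sketch: seeding makes the test reproducible; we check the shape of
# the distribution, not any particular sequence of values, so "ten 3s
# in a row" somewhere in the stream cannot cause a spurious failure.
random.seed(42)
draws = [random.randrange(10) for _ in range(100_000)]
counts = Counter(draws)

for value in range(10):
    share = counts[value] / len(draws)
    # Each bucket should hold roughly 10%; the tolerance is very wide,
    # so a correct generator fails with negligible probability.
    assert 0.08 < share < 0.12, (value, share)
```

The same pattern catches the regression class that matters in practice (a biased or constant generator) without being flaky.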
Re: [Kde-pim] Qt 4 Builds
laurent Montel wrote: > Oh?:) > Fedora continue to support some programs which are not supported from > several years before kdepim5 ? Fedora would not have to do that if you were not removing functionality as large as entire applications (!) between releases. Also, KNode still works great, maybe BECAUSE it is "not supported from several years" and so never got infected with Akonadi; it works just as well as it did in the pre-Akonadi days. KMail 2 has become literally 100 to 1000 times slower (!) and a lot more buggy than KMail 1, simply from getting ported to Akonadi. > Interesting good luck :) > > Better solution to find others new programs which are maintaining no ? Point me to a maintained NNTP client using Qt, and at least I will happily consider switching to it. (At least if it doesn't use Akonadi. ;-) ) This message was brought to you by KNode (through Gmane). Kevin Kofler
Re: [Kde-pim] Qt 4 Builds
laurent Montel wrote: > We will not create more release from kdepim4, no distro uses it even > debian :) Fedora will ship (it's currently under review) a kdepim4 package containing KNode and KTimeTracker, which are not included in the new KF5 kdepim. Kevin Kofler
Re: KDE file dialog
Martin Graesslin wrote: > No, because everything in the current plugin is Plasma specific. If we > want to change the font, we will do so! Forcing a default font as you have done is a bad idea even on Plasma. It is not the desktop environment's business to pick a default font. (And yes, I know GNOME does it too, with a much worse font (really poor glyph coverage). That's not a reason to do the same.) Distributions set up distribution-wide "Sans", "Serif" and "Monospace" aliases for a reason. The fonts are carefully selected by the distribution based on a variety of criteria, including glyph coverage (OK, Noto is great there; your previous default Oxygen was not, though!), quality, looks, etc. And most importantly, the distro-wide aliases ensure consistency across applications using different toolkits. Desktops deciding they know better break this. Kevin Kofler
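The distribution-wide aliases mentioned above are plain fontconfig configuration; a minimal sketch of what distributions typically ship (the concrete font name is just an example, not a recommendation):

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <!-- Map the generic "sans-serif" family to the distribution's pick.
       Applications asking for "Sans" then all agree, toolkit-independent. -->
  <alias>
    <family>sans-serif</family>
    <prefer>
      <family>Noto Sans</family> <!-- example choice only -->
    </prefer>
  </alias>
</fontconfig>
```

A desktop environment hardcoding its own default font bypasses exactly this layer.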
Re: Policy regarding QtWebKit and QtScript
I've been digging deep into QtWebEngine in the hope of polishing it up for Fedora (which sounds less hopeless now that Fedora has become less strict on bundled libraries), seeing how QtWebKit has no future with nobody fixing security bugs in it, so let me clear up a few misconceptions in your post: Vadim Zhukov wrote: > Same applies to OpenBSD. QtWebEngine uses its own, qmake-based, build > system, so at least 50% of effort of porting Chromium should be > repeated. The Chromium part is actually built using gyp. QMake just shells out to gyp to build it. You can add gyp flags in the src/core/config/*.pri files (you'll probably want a config/openbsd.pri anyway, unless you abuse linux.pri). > Next thing is that chromium already requires some tricks to allow it > being compiled (or, more technically, linked) on 32-bit platforms. In > fact, on OpenBSD it currently packages on i386 and amd64; it's going > to became amd64 soon: > http://betanews.com/2015/11/30/google-killing-chrome-for-32-bit-linux/ > . QtWebkit is large enough, too, but still fits into 32-bit address > space. That's not about trashing swap partition; that's about being > able to link stuff at all. That article is about the proprietary Chrome. It even explicitly says that Chromium will keep supporting 32-bit x86. What it does require though (on x86) is SSE2. In case anybody is interested, I have a patch fixing that (basically a cumulative revert of all the related upstream changes) that I still need to test. Here is what I have so far: https://bugzilla.redhat.com/attachment.cgi?id=1108340=diff but it might not even compile yet. > KDE4 runs on OpenBSD/sparc64 (I had successful reports from users). > Chromium doesn't work there due to (at least) memory alignment bugs. > QtWebEngine is out of SPARC game, therefore, too. Adoption of > QtWebEngine will mean no modern KDE for sparc64. And it's a 64-bit > platform, not limited by 2/3/4GB of address space! I'm not ever > talking about MIPS world...
You'd also need a V8 port, which means a JIT for that platform (because V8 has no interpreter fallback). This is a major concern with QtWebEngine, it destroys the portability of Qt and KDE to new architectures. We were happy when the V8 dependency was dropped from QML 2, and very unhappy when it was reintroduced through the backdoor with QtWebEngine. > And, as it was already mentioned many times, Chromium/QtWebEngine > bundles a lot of software, often outdated. What happens when a > security flaw is found in, say, giflib? - The packager adds an > upstream patch or rolls a new release from upstream. What happens in > case of bundled copy? - Nothing, because chromium developers don't > want to break things and thus do not care about updating software > ASAP. And if they do, those updates are delayed because the > chromium/Qt packages need to be redone (reviewed, rebuilt, repackaged > and verified). And that's far less trivial and takes far more time > than patching and repackaging giflib. End users get unsecure software > as a result, possible even thinking: "I'm secure now, since I've just > updated giflib package". Who'll stand up and say: "I want to make > users of my software feel safe while my software is actually unsafe"? > - Noone will, right? But it happens. This is a very valid concern. Qt upstream has done some work on unbundling libraries, but: 1. There are libraries where Chromium does not support unbundling. 2. There are a few libraries that Chromium allows to unbundle, but QtWebEngine doesn't yet. 3. The unbundling in QtWebEngine does not run replace_gyp_files.py, so the unbundling is not complete. 2. and 3. are easy to fix, this patch works for me: http://copr-dist-git.fedorainfracloud.org/cgit/kkofler/qtwebengine/qt5-qtwebengine.git/tree/qtwebengine-opensource-src-5.6.0-beta-linux-pri.patch?id=849178c1d044af50e13552490c81801565aef547 I'm looking into upstreaming it to Qt, but I'll have to add configure checks for the use_system_* flags (point 2.) 
that I hardcoded to "=1". But issue 1. is the bigger issue. That said, your example is actually a bad example because Chromium/QtWebEngine does not actually bundle giflib. (It uses its own GIF decoder, which is forked from WebKit's, which is forked from Mozilla's.) Kevin Kofler
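The use_system_* switches discussed above go into the src/core/config/*.pri files; a sketch of what such an addition might look like (the exact set of flag names that Chromium accepts varies with the Chromium version bundled in your QtWebEngine):

```
# Hypothetical addition to src/core/config/linux.pri:
# ask the bundled Chromium's gyp files to link the system copies.
GYP_CONFIG += \
    use_system_libpng=1 \
    use_system_libjpeg=1 \
    use_system_libxml=1
```

Note that flags like these only take effect for the libraries Chromium supports unbundling at all (point 1. above remains the hard problem).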
Re: Policy regarding QtWebKit and QtScript
Adriaan de Groot wrote: > - Why am I building ninja when it's already packaged externally? export NINJA_PATH=/usr/bin/ninja (or ninja-build or however your OS's package calls the binary) is enough to fix that. > - Why am I building yasm? Add GYP_CONFIG += "use_system_yasm=1" to your src/core/config/*.pri file. > - Same applies to most of the bundled stuff. A lot of the FreeBSD patches > for Chromium itself are, indeed, unbundlings. Actually, a lot of the FreeBSD patches are adding BSD #ifdefs (or the gyp equivalent), I don't see much unbundling there. I see a fix for unbundling libusb1/libusbx (which is not needed for QtWebEngine because QtWebEngine does not use libusb*, by the way), maybe I'm missing 1 or 2 things, but unbundling doesn't seem to be the main focus. And the reason there are so many patches is because they produced a different patch for EVERY SINGLE source file they modified! (That, and the fact that they're only named after the source files and not after the modifications actually done, also makes it really hard to decide at a glance what those patches really do. IMHO, this is an example of how NOT to manage downstream patches.) > But those need to be re-done for webengine, because who knows how the > versions differ. The Chromium patches should mostly work as is. Some will not be needed because QtWebEngine does not build everything from Chromium, but they shouldn't hurt either. > - The qmake and gyp (horse pucky!) are strongly tied into > linux/mac/boot2qt, so finding all the bits and pieces that need adjusting > is tricky. The hardcoded list of operating systems is indeed a hindrance to portability. (Basically, "linux" should really be "any working OS, i.e. anything other than the broken Window$ and Mac crap".) > - Example, I thought I had bunged freebsd-clang into the system properly, > but gyp is still trying to discover the assembler version by calling gcc. 
Are you already setting the: GYP_CONFIG += clang=1 host_clang=1 clang_use_chrome_plugins=0 \ make_clang_dir=/usr flags that desktop-linux.pri sets when building for a linux-clang target? > - Example from qt3d (so external to this discussion), using a broken > OffsetOf in a bundled third party library. Yes, bundled libraries suck, and this kind of issue is another reason why. Kevin Kofler
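Putting the hints from this thread together, a hypothetical src/core/config/openbsd.pri (or freebsd.pri) might start out like this, on the assumption that a BSD clang target should mirror what desktop-linux.pri does for linux-clang:

```
# Sketch only: mirrors desktop-linux.pri's linux-clang settings.
GYP_CONFIG += clang=1 host_clang=1 clang_use_chrome_plugins=0 \
    make_clang_dir=/usr

# Use the system assembler instead of building the bundled yasm.
GYP_CONFIG += "use_system_yasm=1"
```

Together with exporting NINJA_PATH to the packaged ninja binary in the build environment, this avoids rebuilding two of the bundled tools.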
Re: Policy regarding QtWebKit and QtScript
Adriaan de Groot wrote: > Kevin, first off thank you for responding so carefully to Vadim and to me. > It does make a difference to porting efforts. I'm glad to be of help. I also spent quite some time fighting with this thing and I'm not entirely done yet, so I know how you feel. And you guys have the even harder task porting to *BSD. Don't give up, it can be done. Kevin Kofler
Re: Why is C90 enforced in KDE?
Thomas Lübking wrote: > as a build dep requires it, "we just need //" isn't true - the > requirements are controlled outside Well, in that case, this: > - pipe flex/yacc results through c++ rather than the C compiler won't do much good, because the C++ compiler will not necessarily grok any other C99isms in the code. This only really helps for "//" comments. My suggestion is: * do not ship Flex-generated code, and * document the compatibility issue, and let it be the user's responsibility to use a version of Flex compatible with their compiler, which may be an older or patched one. This is entirely a compatibility issue between 2 build tools (Flex and the compiler), I don't see how this should be our (KDE's) problem. Kevin Kofler
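Not shipping the generated code is what CMake's stock Flex support already enables; a minimal sketch (target and file names are made up for illustration):

```cmake
find_package(FLEX REQUIRED)

# Regenerate the scanner at build time instead of shipping lexer.c;
# whichever flex the builder has installed determines the output's
# C dialect, which keeps the Flex/compiler compatibility question
# on the builder's side rather than KDE's.
flex_target(MyScanner lexer.l ${CMAKE_CURRENT_BINARY_DIR}/lexer.c)

add_executable(myparser main.c ${FLEX_MyScanner_OUTPUTS})
```

The generated file then never enters source control or the release tarball.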
Re: Why is C90 enforced in KDE?
Nicolás Alvarez wrote: > I disagree with this. It has happened to me more than once that I > modify a flex file, regenerate it, and find it has been broken for > *years* due to incompatibilities with newer flex versions and nobody > noticed. Or I forget to regenerate it. > > Should we put .mo files in the SVN translation directories so that > people don't need msgfmt? Should we put .moc files in git? > kdevelop-pg-qt generated parsers? Code made from .ui files? Code made > by kconfig_compiler from .kcfg files? kapptemplate tarballs? +1, of course we should not. Generated files have no business being in a source control or in source tarballs. "BuildRequires: flex" is one line in a distro specfile. Kevin Kofler
Re: Why is C90 enforced in KDE?
Thomas Lübking wrote: > Wtf does flex/yacc produce incompliant comments? > Seriously, COMMENTS! > That's a convenience thing. I remember a mail from several years ago where Flex developers said they had no interest in supporting an ancient C standard when C99 has been out for years. (I think it was about using stdint.h or something like that.) So I am not surprised that they are now using C99 comments (which ARE compliant to the current C standard, and have been for 16 years (!)). Kevin Kofler
Re: Naming scheme for Qt5/KF5-based libraries outside of KF5
Friedrich W. H. Kossebau wrote: > Given the high standards and required ABI stability there is a good chance > that some API brush up (e.g. due to review feedback while proposed as KF5 > lib) is made before turning into a KF5 lib, as was already pointed out by > Sune. Having the same name would prevent that (for the usual problems with > ABI changes). Not if you ship your not-yet-in-KF5 library with a soversion (soname major version) < 5. (I'd just pick 0.) Kevin Kofler
Re: Naming scheme for Qt5/KF5-based libraries outside of KF5
Sune Vuorela wrote: > I do think that having things named KF5 that aren't actual frameworks is > bad for several reasons. > > 1) It blurs what's a framework That's more a political distinction than a technical one. For all practical purposes, the application using the library doesn't care whether it is a "Framework" or not. > 2) We promise ABI and API compatibility for frameworks, but not for > other things But it means you will gratuitously break both source (!) and binary compatibility for all users of the library when the library actually becomes a Framework. > 3) Moving something from "not a KDE Framework" to "KDE Framework" gives > a last chance for fixing up abi/api. If you need to fix the ABI, you should just bump the soname major version. I'd just use libKF5*.so.0 (instead of the normal .5) for libraries that are not yet Frameworks. Kevin Kofler
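In CMake terms, the soname suggestion above amounts to something like this (library name hypothetical):

```cmake
add_library(KF5Foo SHARED foo.cpp)

# Pre-Frameworks releases: soname becomes libKF5Foo.so.0, leaving the
# usual .5 free for the day the library actually becomes a Framework,
# so the eventual ABI break is an ordinary soversion bump for packagers.
set_target_properties(KF5Foo PROPERTIES
    VERSION 0.1.0
    SOVERSION 0)
```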
Re: RFC: KDE Bugzilla Bugs Expiration
Christoph Cullmann wrote: > I think one of the problems with our current Bugzilla database is that it > contains a lot of old bugs and wishs. As the manpower is limited and we > sometimes not even keep up with the incoming new bugs, might it be a good > idea to adopt a similar strategy like the Qt Project and expire bugs that > got not changed since more than one year? > > The idea would that a scripts closes all bugs that have no activity in the > last year e.g. on a weekly basis and the closing comment would contain some > gentle note that if the bug is still an issue, the reporter (or any person > on CC) can just reopen it again. > > I think this would make a lot of time consuming bug triaging much easier. IMHO, KDE (as in, everything that uses bugs.kde.org) is too large a project with too different release cycles and maintainership for it to make sense to do this with global scripts. Keep in mind that 1. bugs.kde.org is used by much more than just the former KDE SC and 2. even that SC has now been split into 3 parts (Frameworks, Plasma, Applications) on different release schedules. Different policies make sense for different applications. So this should be done with per-application scripts. I would also strongly argue for keeping this manual for all applications whose maintainer(s) didn't explicitly opt in to such an autoclosing policy. Kevin Kofler
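The per-application opt-in policy argued for above boils down to a small amount of filtering logic; a hypothetical Python sketch with stub data (the opt-in list, product names, and record layout are all made up; a real script would query bugs.kde.org instead):

```python
from datetime import datetime, timedelta

# Hypothetical opt-in list: only products whose maintainers explicitly
# asked for autoclosing are ever touched.
AUTOCLOSE_OPT_IN = {"konsole"}

# Stub records standing in for a Bugzilla query result.
bugs = [
    {"id": 1, "product": "konsole", "last_change": datetime(2015, 1, 1)},
    {"id": 2, "product": "kmail",   "last_change": datetime(2015, 1, 1)},
    {"id": 3, "product": "konsole", "last_change": datetime(2016, 8, 1)},
]

now = datetime(2016, 9, 1)
cutoff = now - timedelta(days=365)

# Close only bugs that are both opted in and inactive for over a year.
to_close = [b["id"] for b in bugs
            if b["product"] in AUTOCLOSE_OPT_IN and b["last_change"] < cutoff]

print(to_close)  # → [1]: bug 2 never opted in, bug 3 has recent activity
```

Everything outside the opt-in set stays untouched for manual triage, which is the point of the objection.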
Re: Notes from the Phabricator BoF
Luigi Toscano wrote: > Feedback on Phabricator gathered outside the BoF from people who could not > attend: Were there no complaints about the fact that you can still not view anything at all without logging in? Kevin Kofler
Re: Review Request 124163: Use KIO::Overwrite when saving a changed file otherwise it always fails.
--- This is an automatically generated e-mail. To reply, visit: https://git.reviewboard.kde.org/r/124163/#review82295 --- Ship it! Ship It! - Kevin Kofler On Juni 24, 2015, 12:33 nachm., Jeremy Whiting wrote: --- This is an automatically generated e-mail. To reply, visit: https://git.reviewboard.kde.org/r/124163/ --- (Updated Juni 24, 2015, 12:33 nachm.) Review request for kdelibs and Kevin Kofler. Bugs: 347760 http://bugs.kde.org/show_bug.cgi?id=347760 Repository: libkomparediff2 Description --- As the summary says use KIO::Overwrite when saving files. BUG:347760 Diffs - komparemodellist.cpp 32242777411ddb8549154a6f2ef0fbc9fff7a239 Diff: https://git.reviewboard.kde.org/r/124163/diff/ Testing --- Kompare now correctly saves the destination file when comparing and making changes. Thanks, Jeremy Whiting
Re: Alternative to QDateTime::isDateOnly ?
Christian Mollekopf wrote: > 4. Would be pretty good IMO, but unfortunately leads to an unexpressive > interface (because QVariant can't be parametrized with valid values). You would just document in the API documentation what types the returned QVariant can take. Kevin Kofler
Re: Distros and QtWebEngine
Thomas Lübking wrote: > I know nothing about the trouble w/ QWebEngine¹, but what is insinuated here > is that it's completely unusable, unmaintainable, undistributable - ie. Qt > then simply won't have any full blown web engine, resp. has one that nobody > uses? That issue would seem -a tiny bit- far beyond kdepim or anything KDE > related, to me those claims read: Qt now has no longer a web kit/engine. That's exactly the problem. I have objected to the QtWebKit deprecation on those exact grounds on the upstream Qt mailing list, but my complaints have fallen on deaf ears. > [1] Google more Evil than Apple? WebKit blob trustworthy, but Blink blob > isn't? Arch just rolled that on my disk, am I now tracked or what?!? The main problem is not about the trustworthiness of Chromium itself, but about its bundling of many system libraries. Packages in Fedora and Debian MUST build against the system version of libraries, for many practical reasons: https://fedoraproject.org/wiki/Packaging:No_Bundled_Libraries That said, Google is a web-centric company and as such more likely to put evil stuff into their browser than Apple. In fact, Chromium and Chrome already have the reputation of hiding spyware (mis)features in their code. Kevin Kofler
Re: Distros and QtWebEngine
Raymond Wooninck wrote: > Isn't this the real main issue with the new QtWebEngine and Chromium itself ?? > In the past I have been trying to get Chromium to build using system version > of the 3rd party stuff, but this only worked out for some of them. Google > didn't just included the 3rd party stuff, but also altered it to their needs > and some things never got upstreamed. Yes, this is the main issue, for both Fedora and Debian. > From what I understood the main reason for Fedora not to provide Chromium is > the inclusion of the ffmpeg sources. Fedora is not allowed to provide > binaries nor sources that contain stuff that could have legal implications. > This was also initially openSUSE's main concern, however the legal department > of SUSE accepted having the sourcecode on our BuildSercie, as long as we did > not build any codecs from it that could cause these legal issues. This is also a concern, but it could be fixed the same way as for other affected packages, by ripping out the encumbered source code from the tarball. That said, having maintained such a cleaning script for xine-lib for a while, I am not looking forward to trying to clean FFmpeg that way (FFmpeg is not in Fedora at all at this time; for some other packages that bundle FFmpeg, we rm -rf the entire FFmpeg, but that is not doable for Chromium/QtWebEngine), and the bundling of the forked FFmpeg is also against Fedora policies to begin with. > This situation will not likely change as that there are old bug reports > regarding this situation and they were never resolved. And this is exactly why we urge KDE to not require QtWebEngine for anything. Kevin Kofler
Re: Distros and QtWebEngine
Lisandro Damián Nicanor Pérez Meyer wrote: > Actually when it comes to the web engine it's not true. When I suggested to > use an external ffmpeg and libv8 (javascript engine) the answer was directly > no, simply because they are too entangled to be possible. And ffmpeg tends > to be quite a source of CVEs... Not to mention that we want our web browsers to not use FFmpeg at all (at least not directly), but GStreamer 1. Sadly, due to how deeply FFmpeg is entangled into Chromium, this does not look realistic for QtWebEngine. Using GStreamer would mean that we could ship it only with unencumbered codecs while still allowing our users to easily add patent-encumbered codecs, the same codecs would work for all applications, and there would also be an automated plugin installation mechanism. Chromium's hardcoded use of a forked FFmpeg breaks all that. We also want our web browsers to support a JavaScript engine that has a non-JIT fallback, because the JIT does not work on our secondary architectures. (For Debian, those are even PRIMARY architectures!) This is even less realistic in Chromium, because V8 is hardcoded everywhere, and there is no interest whatsoever in V8 upstream in supporting an interpreter fallback. This issue means anything that requires QtWebEngine in KDE will NOT be available on all those platforms, even if we were to package QtWebEngine. (It would also increase our maintenance workload if we were to package QtWebEngine, by requiring ExcludeArch or ExclusiveArch lists all over the place.) Kevin Kofler
Re: Distros and QtWebEngine
Milian Wolff wrote: > When did this take place and what is the threads subject? I seem to have > missed it, and also can't find it in my recent mails. Sorry for that. It was a subthread starting here: http://lists.qt-project.org/pipermail/development/2015-February/019900.html (It also overflowed into March.) Kevin Kofler
Re: Review Request 122475: Fix bug 343906 - Unable to handle plain directory paths as QUrl
On Feb. 9, 2015, 10:01 nachm., Kevin Kofler wrote: IMHO, QUrl::fromUserInput(str, QString(), QUrl::AssumeLocalFile) would be safer. Or do you really think dolphin nonexistentfile should look up nonexistentfile over DNS? Thomas Lübking wrote: +1, notably since http://nonexistenfile won't be very helpful in dolphin, but will directly open a browser. One could end up on nasty pages. Arjun AK wrote: IMHO, QUrl::fromUserInput(str, QString(), QUrl::AssumeLocalFile) would be safer. Or do you really think dolphin nonexistentfile should look up nonexistentfile over DNS? [Done](http://commits.kde.org/kde-baseapps/0f91025a752b37ea4b6f2e7c02507bda5863e71f) Frank Reininghaus wrote: QUrl::AssumeLocalFile looks like a good idea! Unfortunately, it seems that it's only available in Qt 5.4 and later: http://mail.kde.org/pipermail/kde-frameworks-devel/2015-February/022157.html http://mail.kde.org/pipermail/kde-frameworks-devel/2015-February/022158.html So there should either be an ifdef-version check, or the Qt version requirement should be bumped to 5.4. I'm not sure if there are any distros who will still use Qt 5.3 in their next releases - if not, then bumping the required Qt version is probably easier and less ugly. Oops, I didn't realize that we're still supporting 5.3. :-( Fedora ships 5.4 as an official update to all supported releases, so we'd be fine with the requirement just getting bumped. Kompare (ported in October 2014) uses this: https://projects.kde.org/projects/kde/kdesdk/kompare/repository/revisions/master/entry/libdialogpages/diffpage.cpp#L45 - Kevin --- This is an automatically generated e-mail. To reply, visit: https://git.reviewboard.kde.org/r/122475/#review75736 --- On Feb. 9, 2015, 12:48 nachm., Arjun AK wrote: --- This is an automatically generated e-mail. To reply, visit: https://git.reviewboard.kde.org/r/122475/ --- (Updated Feb. 9, 2015, 12:48 nachm.) Review request for KDE Base Apps.
Bugs: 343906 (http://bugs.kde.org/show_bug.cgi?id=343906)

Repository: kde-baseapps

Description
---
URLs passed as command-line arguments should be constructed using `QUrl::fromUserInput()`.

Diffs
---
- dolphin/src/main.cpp 094402f

Diff: https://git.reviewboard.kde.org/r/122475/diff/

Testing
---
- dolphin /tmp
- dolphin ftp.debian.org

Thanks,
Arjun AK
Re: Review Request 122475: Fix bug 343906 - Unable to handle plain directory paths as QUrl
---
This is an automatically generated e-mail. To reply, visit: https://git.reviewboard.kde.org/r/122475/#review75736
---

IMHO, QUrl::fromUserInput(str, QString(), QUrl::AssumeLocalFile) would be safer. Or do you really think `dolphin nonexistentfile` should look up nonexistentfile over DNS?

- Kevin Kofler

On Feb. 9, 2015, 12:48 p.m., Arjun AK wrote:
> (Updated Feb. 9, 2015, 12:48 p.m.)
> https://git.reviewboard.kde.org/r/122475/
>
> Review request for KDE Base Apps.
> Bugs: 343906 (http://bugs.kde.org/show_bug.cgi?id=343906)
> Repository: kde-baseapps
>
> Description
> ---
> URLs passed as command-line arguments should be constructed using `QUrl::fromUserInput()`.
>
> Diffs
> ---
> - dolphin/src/main.cpp 094402f
>
> Diff: https://git.reviewboard.kde.org/r/122475/diff/
>
> Testing
> ---
> - dolphin /tmp
> - dolphin ftp.debian.org
>
> Thanks,
> Arjun AK
Re: Another proposal for modernization of our infrastructure
Hi,

Jan Kundrát wrote:
> Feedback is very welcome.

First of all, I would like to apologize for my overly negative tone in your prior feedback threads. I would also like to point out that I have absolutely no experience with Phabricator (the solution proposed by the competing proposal), and as such, I cannot really compare the two proposals nor give a personal preference.

There are two points in your (Gerrit-based) proposal that I would like to comment on:

1. File-level conflict resolution

> 3.2.2 Conflicting Changes
> When a project is big enough, sooner or later there will be patches lying around which are mutually incompatible. By default, Gerrit uses a merge algorithm that resolves conflicts at the file level. A list of changes which modify the same files is shown within the UI. Changes which cannot be submitted due to a merge failure are clearly marked as needing a rebase in all UIs. That way, a developer can make sure that a conflict is solved in a meaningful way and without introducing bugs.

Unfortunately, the file level strikes me as a less than helpful default. Can this be changed to line-level merges in our instance? (I think the ideal would be to use git's native merging algorithm(s), but I expect some limitations due to the convenient web resolving UI.)

In community-developed Free Software projects (also known as the open source development model or the bazaar model), reviews very often modify files that have also been touched by other people before the review is processed, or an uploaded patch was generated against a release tarball and the file has since changed in master. I fear that having to merge manually as soon as there is a conflict at the level of the entire file is going to be really painful. This is different from some of the existing large-scale deployments (e.g., OpenStack), which, even where the end product is Free, are largely company-driven. I guess concurrent modification of a single file by different people is not so common in such corporate (cathedral) development models, which explains the default.

2. Reliance on client-side JavaScript

Another thing I am a bit concerned about is the widespread reliance on client-side JavaScript in both your proposal and upstream Gerrit. As you write in section 3.2.1:

> The bundled web UIs are implemented as a client-side JavaScript application that talks to Gerrit via the REST APIs. There is no artificial feature gap between what can be done with official tools and what is available to other UIs. Alternative web UIs using various modern web frameworks are under development.

In addition, your existing and proposed code for integrating independent services with each other (section 4.1, and section 3.2.11 for the special case of Bugzilla) is (or will be) also written in client-side JavaScript. As a result, people who opt to disable JavaScript in their browser for whatever reason (e.g., security) will have:
* the Gerrit web interface not working at all (or at least not until such an alternative web UI is implemented in a way not requiring client-side JavaScript and deployed on KDE infrastructure),
* the integration between the various utilities also not working; e.g., Bugzilla will not list pending review requests at all.

To me, this contradicts the web maxim of graceful degradation. Why can the work not be done on the server side? Especially for the integration between services, I would expect a simple API call for data lookup to be doable on the server side at least as easily as from client-side JavaScript.

Kevin Kofler
Re: libkgeomap
Pino Toscano wrote:
> Why does libkgeomap need to move somewhere just to be used by some other Extragear application? Just do independent releases of it, and stop bundling it in digiKam, so that a) it is easier to package in distros, and b) it can really be seen as something more than digiKam's own private library. libkgeomap (just like all the other libraries used by digiKam) has been built as standalone on build.kde.org for years, and digiKam (and now KPhotoAlbum as well) is able to use it fine as a separate library.

+1, this digiKam SC nonsense really needs to stop! All the stuff that is now bundled in digiKam should be a dependency of digiKam, as it used to be in the past.

Kevin Kofler
Re: Feature matrix for future infrastructure
Thomas Lübking wrote:
> If you had followed the discussion, or at least looked at the feature matrix Milian started and that you liked to high-handedly deem rubbish, you would have noticed that web frontends to upload patches (like the suggested https://tools.wmflabs.org/gerrit-patch-uploader/) are available to support a "download tarball, edit, diff files by hand, and upload the patch" workflow. The reason that this is not the suggested approach in the techbase article is likely that it is an incredibly inefficient approach that contradicts the very basic idea of SCM.

The above process is how distribution patches are typically produced, though, so accepting such diffs makes it much easier for distribution packagers to upstream their patches.

An alternative process that also works with web uploaders is git diff or git format-patch (which any decent GUI for git can do, so it can be done without ever touching the git command line) and uploading the result. I find this much nicer to work with than magic refs.

(It shall be noted that ReviewBoard currently supports the latter, but not the former, because it is very picky about what patches it accepts. So I actually have to clone the repository, apply the distribution patch, and then re-export it from git. It is still better than having to figure out some obscure ref magic, but it could be even nicer if it accepted the distribution-produced patch directly.)

So, with my distribution packager hat on, I think a web upload feature should be a requirement. (I also agree with other posters that it would be more friendly to newcomers, too.)

Kevin Kofler
Re: State of kdesvn?
Christian Ehrlicher wrote:
> I recently tried to fix some bugs in kdesvn but got stuck because there is nobody who can review my patches. Rajko Albrecht, the original maintainer, stopped development of kdesvn sometime in 2013, and the other two project members (David Faure and Christophe Giboudeaux) have no time to dig deep into the Subversion API for a proper review. Therefore, I am asking for help and for suggestions on how to go forward. Maybe there are some people on this list who are still working with Subversion and kdesvn and are interested in the further development of kdesvn.

I'd say just commit/push the patches. If there's no maintainer, there's also nobody to complain. :-) (Wo kein Kläger, da auch kein Richter. - Where there is no plaintiff, there is no judge.)

Kevin Kofler
Re: Adding experimental parts to a KF5 library
Ivan Čukić wrote:
> I do agree that it would be a proper way to handle it. The only problem I see with it is that the point is actually not to provide binary compatibility, nor proper handling of BIC. At least in the case I have. Namely, the point is for the library to be used *only* for things that are in development - because the projects that wish to use it have longer release cycles than the frameworks.

But, on the other hand, if one of those projects were to release a stable version against a 0.x version of the library, it would need BIC handling.

What does giving the library a fixed (i.e., unchanging) soname improve there? We would still need to rebuild the programs for the new version of the library, but our packaging tools would NOT tell us that we need to do that. The result is packages that install without errors and then fail to run, which is very nasty. If you give it a 0.x soversion and remember to increment x on each BIC change, we will know to rebuild the affected packages.

The only thing worse than ABI changes is SILENT ABI changes.

Kevin Kofler
Re: Adding experimental parts to a KF5 library
Ivan Čukić wrote:
> - 0 soversion to show that the library has no stable ABI.

I'd actually set both the soname and the fully-versioned name to libfoo.so.0.1; then, if you change something binary-incompatibly, libfoo.so.0.2, etc. (Or use libfoo.so.0.1 etc. as the soname and something like libfoo.so.0.1.0.0 as the fully-versioned name, if you really want to track binary-compatible changes too.) Unlike libtool, CMake easily allows you to use such versioning, and it's really the right way to handle it. It allows both clearly identifying the library as preliminary (whereas libfoo.so.0 is also very commonly used for the first stable-ABI version of a library) and tracking binary-incompatible changes in a sane way (without losing the zero major version).

Kevin Kofler
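For reference, the scheme described above can be expressed in CMake roughly like this (a sketch with a hypothetical target name `foo`; the VERSION property controls the fully-versioned file name and SOVERSION the soname):

```cmake
# Hypothetical 0.x-phase library: on every binary-incompatible change,
# bump both numbers (0.1 -> 0.2) so packaging tools see a new soname.
add_library(foo SHARED foo.cpp)
set_target_properties(foo PROPERTIES
    VERSION   0.1   # file name: libfoo.so.0.1
    SOVERSION 0.1   # soname:    libfoo.so.0.1
)
```

For the second variant mentioned above, set VERSION to 0.1.0.0 while keeping SOVERSION at 0.1, so binary-compatible changes are tracked in the file name without changing the soname.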
Re: [Kde-pim] Problems with infrastructure
Martin Klapetek wrote:
> Our very own manifesto, which we've established not so long ago, does not dictate that a project must be a KF5- or kdelibs-based application to be considered a KDE project.

But there *is* an expectation that the projects use KDE infrastructure, so the implication in

> I also admit that I would probably feel a little bit sad if Trojita ended up being the only project which stuck with Gerrit (Jan Kundrát)

that it would keep using Gerrit no matter what KDE decides to use officially does not fit into that. That creates the situation that

> we either all switch and have uniformity or we don't and then we end up with ReviewBoard+Gerrit (Albert Astals Cid),

which to me sounds a lot like blackmail (of course not by Albert, he's just the messenger).

Kevin Kofler
Re: [Kde-pim] Problems with infrastructure
Albert Astals Cid wrote:
> It also puts the discussion about a possible switch to Gerrit in a weird situation, since we either all switch and have uniformity or we don't, and then we end up with ReviewBoard+Gerrit :/

Or we just stop the Gerrit experiment in the core KDE projects as a failure (it was always made clear that it is only an experiment and can be ended at any moment), and kick Trojitá out of KDE if Jan absolutely wants to use Gerrit. (It's not even a KF5 or kdelibs application, but a Qt-only one.) Then he can use whatever tools he wants. Problem solved.

Kevin Kofler
Re: API Breakage?: Re: [kcmutils] src: Fix typo in headers generation
Martin Klapetek wrote:
> I was actually thinking about the same; it would mean the headers are duplicated, though. Additionally, the old headers could have some #pragma message so that people are told at build time.

And you also have to check whether the file system you're installing to is case-sensitive to begin with, and only install the compatibility headers on case-sensitive file systems.

Kevin Kofler
Re: New framework to review: KPackage
Marco Martin wrote:
> In the past weeks I have been working on a new framework, called KPackage.

You ARE aware that KPackage was the name of an old frontend for RPM and other package managers that used to be part of KDE Software Compilation 4?

Kevin Kofler
Re: KFind
laurent Montel wrote:
> Indeed, the port is not finished (I worked on it too). It still depends on kdelibs4support.

So what? That will only become a problem when Qt 6 gets released, years from now.

Kevin Kofler
Re: Review Request 120627: Remove kdelibs4support.
On Monday 20 October 2014 at 20:53:51, Jeremy Whiting wrote:
> Thanks again, another review just posted. Guess I need to port applications on a VM that doesn't have kdelibs4 installed to make sure I get these right and complete.

Or use a Fedora machine, our kdelibs4 headers are under /usr/include/kde4. :-)

Kevin Kofler
Re: Review Request 120676: Remove other kdelibs4support headers
---
This is an automatically generated e-mail. To reply, visit: https://git.reviewboard.kde.org/r/120676/#review68809
---

Ship it!

- Kevin Kofler

On Oct. 21, 2014, 2:53 a.m., Jeremy Whiting wrote:
> (Updated Oct. 21, 2014, 2:53 a.m.)
> https://git.reviewboard.kde.org/r/120676/
>
> Review request for kdelibs and Kevin Kofler.
> Repository: kompare
>
> Description
> ---
> Remove other kdelibs4support headers
>
> Diffs
> ---
> - komparenavtreepart/komparenavtreepart.cpp 7be64bb8aafec02cd6f9f4a61bf8f1f56f36d1ea
> - komparepart/kompare_part.h 52bcc0b68cf3f665ae3ac09eebe8234044bd6c90
> - komparepart/kompare_part.cpp 1521518f97603ee81c7f57cf4b721fe9bf18ae9b
> - komparepart/komparesaveoptionswidget.h a10d972d4fc02a40e711874021ca381c93f8ba50
> - kompareurldialog.h 875a645821daa30f65fb4f91e624e808a8ec6541
> - libdialogpages/viewpage.cpp d49ef6e1d9aefc994180cfc46ec1df79ed326a3a
> - main.cpp c89cedd69bb03df3888e1d6fd883e50850c5a06c
>
> Diff: https://git.reviewboard.kde.org/r/120676/diff/
>
> Testing
> ---
> It builds and runs, but I'm not sure if I got all of the kdelibs4support stuff, since I have kdelibs headers in /usr/include here. I'll do further cleanup on a VM without kdelibs to check later on.
>
> Thanks,
> Jeremy Whiting
Re: Review Request 120697: Remove one last klocale.h include.
---
This is an automatically generated e-mail. To reply, visit: https://git.reviewboard.kde.org/r/120697/#review68836
---

Ship it!

- Kevin Kofler

On Oct. 21, 2014, 7:15 p.m., Jeremy Whiting wrote:
> (Updated Oct. 21, 2014, 7:15 p.m.)
> https://git.reviewboard.kde.org/r/120697/
>
> Review request for kdelibs and Kevin Kofler.
> Repository: kompare
>
> Description
> ---
> Remove one last klocale.h include.
>
> Diffs
> ---
> - komparenavtreepart/komparenavtreepart.cpp f5a7f9fc8df9b9ee330ebfad1fb57835aadaca7f
>
> Diff: https://git.reviewboard.kde.org/r/120697/diff/
>
> Testing
> ---
> It still builds and runs.
>
> Thanks,
> Jeremy Whiting
Re: Review Request 120627: Remove kdelibs4support.
On Oct. 18, 2014, 4:40 a.m., Kevin Kofler wrote:
> komparepart/kompare_part.cpp, line 303
> https://git.reviewboard.kde.org/r/120627/diff/3/?file=320382#file320382line303
>
> So where does this temporary file get deleted? Apparently nowhere. You have to handle this the same way as the QTemporaryDir: by allocating tempFile with new, setting autoRemove to true rather than false, storing tempFile in m_info, and deleting it in cleanUpTemporaryFiles. (This is what the KIO::NetAccess::removeTempFile calls you removed from cleanUpTemporaryFiles were for, but those obviously wouldn't work anymore anyway if the files are not downloaded through KIO::NetAccess anymore.)

Jeremy Whiting wrote:
> I guess we can't always QFile::remove the m_info.localSource or m_info.localDestination in the cleanup, since those could be the user's files, right?

Right. You'd need a boolean flag to track whether it's a temporary file. But see my later comment: the way you're using QTemporaryFile now is insecure.

- Kevin

---
This is an automatically generated e-mail. To reply, visit: https://git.reviewboard.kde.org/r/120627/#review68642
---

On Oct. 18, 2014, 5:01 a.m., Jeremy Whiting wrote:
(Updated Oct. 18, 2014, 5:01 a.m.)
https://git.reviewboard.kde.org/r/120627/

Review request for kdelibs and Kevin Kofler.

Repository: kompare

Description
---
Change KUrl to QUrl. Use QLayout/QFrame instead of KVBox (seems broken though somehow). Use QFileDialog instead of KFileDialog.
Diffs
---
- libdialogpages/pagebase.cpp ba1574aed7124ede49e1c5908a8fe693cf7bc5d3
- libdialogpages/viewpage.h b5b770d1441650564106e1cc7ef7e587f6ee142d
- libdialogpages/viewpage.cpp 07bdba5e1edf55a6dcd02e5deef58d30c07660c2
- libdialogpages/filessettings.h dc3306e34fe1b4eb7cb6a9d2b598f91932bedda0
- libdialogpages/filessettings.cpp 0e19dc00f22a2f6e9588bf2d110dbde682888472
- libdialogpages/pagebase.h 0cef46feaa2cc81deff12c2c5f739e6be6df1b49
- libdialogpages/CMakeLists.txt 769a1154c56e8eb8aa42f1bc6d84e0f9a4154fd0
- libdialogpages/dialogpagesexport.h b2de57f6616739d353d4889ef4965ab07f1191aa
- libdialogpages/diffpage.h 37490b1ebb245e9648530429da63a9240010
- libdialogpages/diffpage.cpp 7800b486e023cffe41e1fa3e9e60781250ea4199
- libdialogpages/filespage.h 42afafcd0fc8bc0a01e32b79d414742937d791fb
- libdialogpages/filespage.cpp 6a87fe36abd57bdaa09b516de38969db6c6f2298
- kompareurldialog.h dc50c588e70835ad9292da1baf5222f58f512f67
- kompareurldialog.cpp 7de050bc44770a79f8f7d789cabd95d6707a40f1
- komparepart/komparesplitter.cpp 8d496bf279caa7cb9a305c2d15131f591c48818d
- komparepart/komparelistview.cpp 35bbab849d8b7938cba518e97a00ed50cae35612
- komparepart/kompareprefdlg.cpp 118485663390e9563a77741b490a9cdf8bf6d464
- komparepart/komparesaveoptionswidget.cpp 4c9acba6a7f9c6dda04130946faac37138422875
- interfaces/kompareinterface.h 53b19d944b2a4a65c14ea41b8f1c0997581933db
- kompare_shell.h 8549fcdc4d1536c58734f2bc3a78b9ebc42c6c5f
- kompare_shell.cpp dcc45513f3f9f5f94869046989b6b4f5b1c0995e
- komparenavtreepart/CMakeLists.txt 53e8e670e70629afac9197fc108d844733ec5c07
- komparenavtreepart/komparenavtreepart.cpp 3faceff78fbbd2f083cd0a7837c74f50fe543474
- komparepart/CMakeLists.txt 09b61e6ca0cdce391fc759be49a672a050cc16cd
- komparepart/kompare_part.h 24475f1b0ccf7fbeda56860a9a69955cd0b82808
- komparepart/kompare_part.cpp 4d40be0dedcfb91b77ee239de11188b328f8bc13
- CMakeLists.txt 38167c2099d0ea1600bd5a6893982e809902fa3a
- doc/index.docbook 578d12a41d9a6afed441ffd38c39bff16c096ab2
- libdialogpages/viewsettings.h dbf6afe0d0c70e548e32dfc09391d67ef595cdba
- libdialogpages/viewsettings.cpp 5a69d0bd9a49f7a3881940c4ea8ad407be56adc1
- main.cpp 4132c8442f8546ee7d365051dda0e32196249217

Diff: https://git.reviewboard.kde.org/r/120627/diff/

Testing
---
It builds and runs. The compare dialog UI looks squished though and doesn't resize like it used to; must be something I did wrong when porting away from KVBox.

Thanks,
Jeremy Whiting