Re: Give us a feedback on API situation
Forwarding to Fedora devel. The user perspective (API v1 vs. v2) is important.

Pavel

On Monday, February 12, 2018 11:17:26 PM CET Jakub Kadlcik wrote:
> Hello,
> we are currently discussing how to deal with the current situation of
> the inconsistent Copr API.
>
> If you are interested in this topic and want to give us some feedback,
> please see https://pagure.io/copr/copr/issue/218
>
> COPR team
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Re: Mass Rebuild for Fedora 28
I see that my package libzen is in the need-rebuild list, but I can't find a failed koji build for it to look at the logs. Where can I find the failed build? Or do I have to run it manually?

On Tue, 13 Feb 2018 at 3:37, Mamoru TASAKA wrote:
> Dennis Gilmore wrote on 02/13/2018 08:06 AM:
> > Hi All,
> >
> > We have now completed the automated part of the Fedora 28 mass rebuild.
> > The details for the scheduled mass rebuild for Fedora 28 can be found
> > here[1]. The failure page for the rebuilds can be found here[2] and the
> > full list of packages that need rebuilding can be found here[3].
> > The needs-rebuild list includes packages that failed to get submitted
> > to koji for various reasons, things like the spec bumping failing due
> > to incomplete or incorrect retirement.
> >
> > [1] https://fedoraproject.org/wiki/Fedora_28_Mass_Rebuild
> > [2] https://kojipkgs.fedoraproject.org/mass-rebuild/f28-failures.html
> > [3] https://kojipkgs.fedoraproject.org/mass-rebuild/f28-need-rebuild.html
> > [4] https://fedoraproject.org/wiki/Releases/28/Schedule
>
> f28-need-rebuild.html [3] shows that most of the "need-rebuild" packages are
> assigned to rel-eng, and does not seem to show the "real" owner of packages
> correctly.
>
> Regards,
> Mamoru
Re: F28 System Wide Change: Replace glibc's libcrypt with libxcrypt
Is anybody going to fix the errors this caused? Namely this one:
https://bugzilla.redhat.com/show_bug.cgi?id=1537140

Thx

Vít

On 9 Jan 2018 at 18:46, Jan Kurik wrote:
> = System Wide Change: Replace glibc's libcrypt with libxcrypt =
> https://fedoraproject.org/wiki/Changes/Replace_glibc_libcrypt_with_libxcrypt
>
> Change owner(s):
> * Björn Esser
> * Florian Weimer
>
> There are plans to remove libcrypt from glibc, so we should have a
> replacement.
>
> == Detailed Description ==
> Since there has recently been discussion about removing libcrypt from
> glibc at some point and splitting it out into a separate project that
> can evolve more quickly, Zack Weinberg and I put some work into
> libxcrypt to make it a basically suitable replacement.
>
> It comes with a set of extended interfaces pioneered by Openwall
> Linux: crypt_rn, crypt_ra, crypt_gensalt, crypt_gensalt_rn, and
> crypt_gensalt_ra.
>
> The crypt and gensalt functions support all widely used password
> hashing algorithms (except Crypt16, which was used only on Ultrix and
> Tru64), which were previously specific to particular operating
> systems' libcrypt implementations.
>
> == Scope ==
> * Proposal owners:
> - Apply needed packaging changes to glibc
> - Review, import and build libxcrypt for Fedora 28
>
> * Other developers:
> Test their applications using one of the following interfaces for
> unexpected changes in functionality:
> - crypt()
> - crypt_r()
> - encrypt()
> - encrypt_r()
> - fcrypt()
> - setkey()
> - setkey_r()
>
> * Release engineering:
> #7160 https://pagure.io/releng/issue/7160
> none expected
>
> * Policies and guidelines:
> N/A (not needed for this Change)
>
> * Trademark approval:
> N/A (not needed for this Change)
Re: Fedora27: NFS v4 terrible write performance, is async working
On 12/02/18 22:14, J. Bruce Fields wrote:
> On Mon, Feb 12, 2018 at 08:12:58PM +0000, Terry Barnaby wrote:
> > On 12/02/18 17:35, Terry Barnaby wrote:
> > > On 12/02/18 17:15, J. Bruce Fields wrote:
> > > > On Mon, Feb 12, 2018 at 05:09:32PM +0000, Terry Barnaby wrote:
> > > > > One thing on this, that I forgot to ask, doesn't fsync() work
> > > > > properly with an NFS server side async mount then ?
> > > >
> > > > No.
> > > >
> > > > If a server sets "async" on an export, there is absolutely no way for a
> > > > client to guarantee that data reaches disk, or to know when it happens.
> > > >
> > > > Possibly "ignore_sync", or "unsafe_sync", or something else, would be a
> > > > better name. ...
> >
> > Just tried the use of fsync() with an NFS async mount, it appears to work.
>
> That's expected, it's the *export* option that cheats, not the mount
> option. Also, even if you're using the async export option--fsync will
> still flush data to server memory, just not necessarily to disk.
>
> > With a simple 'C' program as a test program I see the following data
> > rates/times when the program writes 100 MBytes to a single file over NFS
> > (open, write, write .., fsync) followed by close (after the timing):
> >
> > NFS Write multiple small files 0.001584 ms/per file 0.615829 MBytes/sec CpuUsage: 3.2%
> > Disktest: Writing/Reading 100.00 MBytes in 1048576 Byte Chunks
> > Disk Write sequential data rate fsync: 1 107.250685 MBytes/sec CpuUsage: 13.4%
> > Disk Write sequential data rate fsync: 0 4758.953878 MBytes/sec CpuUsage: 66.7%
> >
> > Without the fsync() call the data rate is obviously to buffers and with
> > the fsync() call it definitely looks like it is to disk.
>
> Could be, or you could be network-limited, hard to tell without knowing
> more.
>
> > Interestingly, it appears, that the close() call actually does an
> > effective fsync() as well, as the close() takes an age when fsync() is
> > not used.
>
> Yes: http://nfs.sourceforge.net/#faq_a8
>
> --b.

Quite right, it was network limited (disk vs network speed is about the same). Using a slower USB stick disk shows that fsync() is not working with an NFSv4 "async" export. But why is this ?
It just doesn't make sense to me that fsync() should work this way even with an NFS "async" export. Why shouldn't it do the right thing, "synchronize a file's in-core state with storage device" (I don't consider an NFS server a storage device, only the non-volatile devices it uses)? It seems it would be easy to flush the client's write buffer to the NFS server (as it does now) and then perform the fsync() on the server for the file in question. What am I missing ?

Thinking out loud (and without a great deal of thought), on removing the NFS export "async" option, improving small-file write performance and keeping data security, it seems to me one method might be:

1. NFS server is always in "async" export mode (the client can mount in sync mode if wanted). Data and metadata (optionally) are buffered in RAM on client and server.
2. Client fsync() works all the way to disk on the server.
3. Client sync() does an fsync() of each NFS file open for write. (Maybe this would be too much load on NFS servers ...)
4. You implement NFSv4 write delegations :)
5. There is a transaction based system for file writes:
   5.1 When a file is opened for write, a transaction is created (id). This is sent with the OPEN call.
   5.2 Further file operations, including SETATTR and WRITE, are allocated as stages in this transaction (id.stage) and are just buffered in the client (no direct server RPC calls).
   5.3 The client sends the NFS operations for this write, as and when, optimised into full sized network packets to the server. But the data and metadata are kept buffered in the client.
   5.4 The server stores the data in its normal FS RAM buffers during the NFS RPC calls.
   5.5 When the server actually writes the data to disk (using its normal optimised disk writing system for the file system and device in question), the transaction and stage (id.stage) are returned to the client (within an NFS reply). The client can now release the buffers up to this stage in the transaction.
The transaction system allows the write delegation to send the data to the server's RAM without the overhead of synchronous writes to the disk. It does mean the data is stored in RAM in both the client and server at the same time (twice as much RAM usage). Not sure how easy it would be to implement in the Linux kernel (NFS informed on FS buffer free ?), and it would require NFS protocol extensions for the transactions. With this method the client can resend the data on a server fail/reboot, and the data can be ensured to be on the disk after an fsync() or sync() (within reason!). It should offer the fastest write performance and should eliminate the untar performance issue with small file creation/writes, while still being relatively secure with data if the server dies. Unless I am missing something ?

PS: I have some RPC latency figures for some other NFS servers at work. The NFS RPC latency on some of them is nearer the ICMP ping times, ie about 100us. Ma
Re: Escaping macros in %changelog
On Tue, Feb 13, 2018 at 3:03 AM, J. Randall Owens <jrowens.fed...@ghiapet.net> wrote:
> When you say 2 releases, are you talking about package or Fedora
> releases? I'd favour an approach of keeping all the changes since
> release, or since branching might be even better, or since the release
> before the package's release, myself. 2 package releases seems a bit curt.

I was thinking about the last 2 package releases by upstream that have been packaged into Fedora. (So if Fedora skipped some upstream release, we would still keep the last 2 packaged upstream releases.) However, it was just a general thought on how to make changelogs shorter. I'm not sure it would really work as well as I imagine.

--
Michal Schorm
Associate Software Engineer
Core Services - Databases Team
Red Hat
Re: Escaping macros in %changelog
On 13/02/18 01:00, Michal Schorm wrote:
> 5)
> The changelogs are long as hell.
> What about keeping just the 2 latest releases in it and deleting the rest?
> (It will still be kept in the git history)
> 2 releases could be 2-20 entries, depending on the work done.
> But it still looks short enough to me.

When you say 2 releases, are you talking about package or Fedora releases? I'd favour an approach of keeping all the changes since release, or since branching might be even better, or since the release before the package's release, myself. 2 package releases seems a bit curt.

--
J. Randall Owens | http://www.GhiaPet.net/
Re: Escaping macros in %changelog
I don't think removing the changelog entirely is a good idea. We do not have a suitable replacement.

1)
I agree with what was said. Git commit messages should describe the work of the developer. Thus messages like "typo fix", "revert of revert of merge of ...", "forgot to add new-sources" are common, and should stay as they are. (Because they describe well what changes have been made from the developer / maintainer POV.)

2)
Generating Bodhi update messages? Definitely +1! But with the chance to still edit them by hand.

3)
What should be written to the changelog? IMHO the stuff that changed from the user POV. So things like:
"update to NVR"
"changelog can be found at URL"
"CVEs fixed: 1, 2, 3, 4, ..."
"bugs solved: RHBZ#1, 2, 3, 4, ..."
"compiler optimization for F22 disabled for PPC64le, because of bug XYZ"
And I don't see any other suitable place to put this.

4)
Research should be done first on how users actually use the changelogs. If they do, I'd be cautious.

5)
The changelogs are long as hell. What about keeping just the 2 latest releases in it and deleting the rest? (It will still be kept in the git history.) 2 releases could be 2-20 entries, depending on the work done. But it still looks short enough to me.

6)
It should be part of the packaging guidelines: what should be written where. Probably not in the form of an unbreakable rule, but rather information for packagers on how to do it, and a reminder of things they shouldn't forget to write down there.

--
Michal Schorm
Associate Software Engineer
Core Services - Databases Team
Red Hat
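Relating this back to the thread subject ("Escaping macros in %changelog"): rpm expands macros inside %changelog too, so a literal percent sign must be doubled there. A trimmed two-entry changelog in the style proposed in point 5 might look like the following sketch (names, dates and versions are invented):

```
%changelog
* Tue Feb 13 2018 Jane Packager <jpackager@example.com> - 1.2-2
- Fix the %%post scriptlet (a literal "%post" must be written "%%post"
  in %changelog, or rpm will try to expand it as a macro)
- CVEs fixed: CVE-2018-0001

* Mon Jan 08 2018 Jane Packager <jpackager@example.com> - 1.2-1
- Update to 1.2; older entries trimmed, full history stays in dist-git
```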
Re: Mass Rebuild for Fedora 28
Dennis Gilmore wrote on 02/13/2018 08:06 AM:
> Hi All,
>
> We have now completed the automated part of the Fedora 28 mass rebuild.
> The details for the scheduled mass rebuild for Fedora 28 can be found
> here[1]. The failure page for the rebuilds can be found here[2] and the
> full list of packages that need rebuilding can be found here[3].
> The needs-rebuild list includes packages that failed to get submitted
> to koji for various reasons, things like the spec bumping failing due
> to incomplete or incorrect retirement.
>
> [1] https://fedoraproject.org/wiki/Fedora_28_Mass_Rebuild
> [2] https://kojipkgs.fedoraproject.org/mass-rebuild/f28-failures.html
> [3] https://kojipkgs.fedoraproject.org/mass-rebuild/f28-need-rebuild.html
> [4] https://fedoraproject.org/wiki/Releases/28/Schedule

f28-need-rebuild.html [3] shows that most of the "need-rebuild" packages are assigned to rel-eng, and does not seem to show the "real" owner of packages correctly.

Regards,
Mamoru
Re: OpenImageIO GCC 8 build problem?
On Mon, Feb 12, 2018 at 6:29 AM, Jonathan Wakely wrote:
> On 10/02/18 12:32 -0600, Richard Shaw wrote:
> > On Sat, Feb 10, 2018 at 12:24 PM, Jakub Jelinek wrote:
> > > On Sat, Feb 10, 2018 at 06:42:00AM -0600, Richard Shaw wrote:
> > > > A scratch build works fine on Fedora 27...
> > >
> > > Likely http://gcc.gnu.org/PR83204 .
> >
> > Looks like it, rebuilding with C++11 let it complete. Would rebuilding
> > with 11 cause an ABI change?
>
> I already read the other replies suggesting waiting for the new GCC,
> but for the record: no, it wouldn't.

I found another strange thing, what seems to be a compiler bug in gcc-8, when building legion-18.02 on f28; it works fine on f27:
https://github.com/StanfordLegion/legion/issues/350

Christoph

--
Christoph Junghans
Web: http://www.compphys.de
Mass Rebuild for Fedora 28
Hi All,

We have now completed the automated part of the Fedora 28 mass rebuild. The details for the scheduled mass rebuild for Fedora 28 can be found here[1]. The failure page for the rebuilds can be found here[2] and the full list of packages that need rebuilding can be found here[3]. The needs-rebuild list includes packages that failed to get submitted to koji for various reasons, things like the spec bumping failing due to incomplete or incorrect retirement.

Please quickly clean up all build failures, as the schedule[4] has us branching next Tuesday, the 20th of February, and enabling Bodhi on the 6th of March, so expect to see a 28 branched compose in about a week from now.

Many Thanks

Dennis

[1] https://fedoraproject.org/wiki/Fedora_28_Mass_Rebuild
[2] https://kojipkgs.fedoraproject.org/mass-rebuild/f28-failures.html
[3] https://kojipkgs.fedoraproject.org/mass-rebuild/f28-need-rebuild.html
[4] https://fedoraproject.org/wiki/Releases/28/Schedule

___
devel-announce mailing list -- devel-annou...@lists.fedoraproject.org
To unsubscribe send an email to devel-announce-le...@lists.fedoraproject.org
Re: Fedora27: NFS v4 terrible write performance, is async working
On Mon, Feb 12, 2018 at 08:12:58PM +0000, Terry Barnaby wrote:
> On 12/02/18 17:35, Terry Barnaby wrote:
> > On 12/02/18 17:15, J. Bruce Fields wrote:
> > > On Mon, Feb 12, 2018 at 05:09:32PM +0000, Terry Barnaby wrote:
> > > > One thing on this, that I forgot to ask, doesn't fsync() work
> > > > properly with an NFS server side async mount then ?
> > >
> > > No.
> > >
> > > If a server sets "async" on an export, there is absolutely no way for a
> > > client to guarantee that data reaches disk, or to know when it happens.
> > >
> > > Possibly "ignore_sync", or "unsafe_sync", or something else, would be a
> > > better name. ...
>
> Just tried the use of fsync() with an NFS async mount, it appears to work.

That's expected, it's the *export* option that cheats, not the mount option. Also, even if you're using the async export option--fsync will still flush data to server memory, just not necessarily to disk.

> With a simple 'C' program as a test program I see the following data
> rates/times when the program writes 100 MBytes to a single file over NFS
> (open, write, write .., fsync) followed by close (after the timing):
>
> NFS Write multiple small files 0.001584 ms/per file 0.615829 MBytes/sec CpuUsage: 3.2%
> Disktest: Writing/Reading 100.00 MBytes in 1048576 Byte Chunks
> Disk Write sequential data rate fsync: 1 107.250685 MBytes/sec CpuUsage: 13.4%
> Disk Write sequential data rate fsync: 0 4758.953878 MBytes/sec CpuUsage: 66.7%
>
> Without the fsync() call the data rate is obviously to buffers and with the
> fsync() call it definitely looks like it is to disk.

Could be, or you could be network-limited, hard to tell without knowing more.

> Interestingly, it appears, that the close() call actually does an effective
> fsync() as well as the close() takes an age when fsync() is not used.

Yes: http://nfs.sourceforge.net/#faq_a8

--b.
[Fedocal] Reminder meeting : Modularity Office Hours
Dear all,

You are kindly invited to the meeting: Modularity Office Hours on 2018-02-13 from 10:00:00 to 11:00:00 US/Eastern.

At https://meet.jit.si/fedora-modularity

The meeting will be about:
This is where you ask the Fedora Modularity Team questions (and we try to answer them)!

Join us on [IRC](irc://chat.freenode.net/#fedora-modularity): #fedora-modularity on [FreeNode](https://freenode.net)

Source: https://apps.fedoraproject.org/calendar/meeting/8711/
[Fedocal] Reminder meeting : Modularity Office Hours
Dear all,

You are kindly invited to the meeting: Modularity Office Hours on 2018-02-13 from 10:00:00 to 11:00:00 US/Eastern.

At fedora-modular...@chat.freenode.net

The meeting will be about:
This is where you ask the Fedora Modularity Team questions (and we try to answer them)!

Join us on [IRC](irc://chat.freenode.net/#fedora-modularity): #fedora-modularity on [FreeNode](https://freenode.net)

Source: https://apps.fedoraproject.org/calendar/meeting/5910/
Re: Fedora27: NFS v4 terrible write performance, is async working
On Mon, Feb 12, 2018 at 05:35:49PM +0000, Terry Barnaby wrote:
> Well that seems like a major drop off, I always thought that fsync() would
> work in this case.

No, it never has.

> I don't understand why fsync() should not operate as
> intended ? Sounds like this NFS async thing needs some work !

By "NFS async" I assume you mean the export option. Believe me, I'd remove it entirely if I thought I could get away with it.

> I still do not understand why NFS doesn't operate in the same way as a
> standard mount on this. The use for async is only for improved performance
> due to disk write latency and speed (or are there other reasons ?)

Reasons for the async export option? Historically I believe it was a workaround for the fact that NFSv2 didn't have COMMIT, so even writes of ordinary file data suffered from the problem that metadata-modifying operations still have today.

> So with a local system mount:
>
> async: normal mode: All system calls manipulate in buffer memory disk
> structure (inodes etc). Data/Metadata is flushed to disk on fsync(), sync()
> and occasionally by kernel. Processes data is not actually stored until
> fsync(), sync() etc.
>
> sync: with sync option. Data/metadata is written to disk before system calls
> return (all FS system calls ?).
>
> With an NFS mount I would have thought it should be the same.

As a distributed filesystem which aims to survive server reboots, it's more complicated.

> async: normal mode: All system calls manipulate in buffer memory disk
> structure (inodes etc) this would normally be on the server (so multiple
> clients can work with the same data) but with some options (particular
> usage) maybe client side write buffering/caching could be used (ie. data
> would not actually pass to server during every FS system call).

Definitely required if you want to, for example, be able to use the full network bandwidth when writing data to a file.
> Data/Metadata is flushed to server disk on fsync(), sync() and occasionally
> by kernel (If client side write caching is used flushes across network and
> then flushes server buffers). Processes data is not actually stored until
> fsync(), sync() etc.

I'd be nervous about the idea of a lot of unsync'd metadata changes sitting around in server memory. On server crash/restart that's a bunch of files and directories that are visible to every client, and that vanish without anyone actually deleting them. I wonder what the consequences would be? This is something that can only happen on a distributed filesystem: on ext4, a crash takes down all the users of the filesystem too.

(Thinking about this: don't we already have a tiny window during the rpc processing, after a change has been made but before it's been committed, when a server crash could make the change vanish? But, no, actually, I believe we hold a lock on the parent directory in every such case, preventing anyone from seeing the change till the commit has finished.)

Also, delegations potentially hide both network and disk latency, whereas your proposal only hides disk latency. The latter is more important in your case. I'm not sure what the ratio is for higher-end setups, actually--probably disk latency is still higher if not as high.

> sync: with client side sync option. Data/metadata is written across NFS and
> to Server disk before system calls return (all FS system calls ?).
>
> I really don't understand why the async option is implemented on the server
> export although a sync option here could force sync for all clients for that
> mount. What am I missing ? Is there some good reason (rather than history)
> it is done this way ?

So, again, Linux knfsd's "async" export behavior is just incorrect, and I'd be happier if we didn't have to support it. See above for why I don't think what you describe as async-like behavior would fly.
As for adding protocol to allow the server to tell all clients that they should do "sync" mounts: I don't know, I suppose it's possible, but a) I don't know how much use it would actually get (I suspect "sync" mounts are pretty rare), and b) that's meddling with client implementation behavior a little more than we normally would in the protocol. The difference between "sync" and "async" mounts is purely a matter of client behavior, after all; it's not really visible to the protocol at all.

--b.
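To keep the two knobs being discussed apart, here is a sketch of where each one lives (hostnames and paths are invented). The first is the server-side export option Bruce is describing; the second is the client-side mount option, which is independent of it:

```
# /etc/exports on the server -- "sync"/"async" here is the *export* option:
/srv/export   client1(rw,sync,no_subtree_check)    # replies wait for stable storage: safe
/srv/scratch  client1(rw,async,no_subtree_check)   # replies may beat the disk: unsafe on server crash

# On the client -- "sync" here is the *mount* option, a separate setting:
#   mount -t nfs4 -o sync server:/srv/export /mnt/export
```

With the async export option, even a client-side "sync" mount (or a correct fsync()) cannot guarantee the data reached the server's disk, which is the point made repeatedly in this thread.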
Re: Fedora27: NFS v4 terrible write performance, is async working
On 12/02/18 17:35, Terry Barnaby wrote:
> On 12/02/18 17:15, J. Bruce Fields wrote:
> > On Mon, Feb 12, 2018 at 05:09:32PM +0000, Terry Barnaby wrote:
> > > One thing on this, that I forgot to ask, doesn't fsync() work properly
> > > with an NFS server side async mount then ?
> >
> > No.
> >
> > If a server sets "async" on an export, there is absolutely no way for a
> > client to guarantee that data reaches disk, or to know when it happens.
> >
> > Possibly "ignore_sync", or "unsafe_sync", or something else, would be a
> > better name.
> >
> > --b.
>
> Well that seems like a major drop off, I always thought that fsync() would
> work in this case. I don't understand why fsync() should not operate as
> intended ? Sounds like this NFS async thing needs some work !
>
> I still do not understand why NFS doesn't operate in the same way as a
> standard mount on this. The use for async is only for improved performance
> due to disk write latency and speed (or are there other reasons ?)
>
> So with a local system mount:
>
> async: normal mode: All system calls manipulate in buffer memory disk
> structure (inodes etc). Data/Metadata is flushed to disk on fsync(), sync()
> and occasionally by kernel. Processes data is not actually stored until
> fsync(), sync() etc.
>
> sync: with sync option. Data/metadata is written to disk before system calls
> return (all FS system calls ?).
>
> With an NFS mount I would have thought it should be the same.
>
> async: normal mode: All system calls manipulate in buffer memory disk
> structure (inodes etc) this would normally be on the server (so multiple
> clients can work with the same data) but with some options (particular
> usage) maybe client side write buffering/caching could be used (ie. data
> would not actually pass to server during every FS system call).
> Data/Metadata is flushed to server disk on fsync(), sync() and occasionally
> by kernel (If client side write caching is used flushes across network and
> then flushes server buffers). Processes data is not actually stored until
> fsync(), sync() etc.
>
> sync: with client side sync option.
> Data/metadata is written across NFS and to Server disk before system calls
> return (all FS system calls ?).
>
> I really don't understand why the async option is implemented on the server
> export although a sync option here could force sync for all clients for that
> mount. What am I missing ? Is there some good reason (rather than history)
> it is done this way ?

Just tried the use of fsync() with an NFS async mount, it appears to work. With a simple 'C' program as a test program I see the following data rates/times when the program writes 100 MBytes to a single file over NFS (open, write, write .., fsync) followed by close (after the timing):

NFS Write multiple small files 0.001584 ms/per file 0.615829 MBytes/sec CpuUsage: 3.2%
Disktest: Writing/Reading 100.00 MBytes in 1048576 Byte Chunks
Disk Write sequential data rate fsync: 1 107.250685 MBytes/sec CpuUsage: 13.4%
Disk Write sequential data rate fsync: 0 4758.953878 MBytes/sec CpuUsage: 66.7%

Without the fsync() call the data rate is obviously to buffers and with the fsync() call it definitely looks like it is to disk. Interestingly, it appears, that the close() call actually does an effective fsync() as well, as the close() takes an age when fsync() is not used.

(By the way, I just got bitten by a Fedora27 KDE/plasma/NetworkManager change that sets the Ethernet interfaces of all my systems to 100 MBits/s half duplex. Looks like the ability to configure Ethernet auto negotiation has been added and the default is fixed 100 MBits/s half duplex !)
Basic test code (just the write function; the message was truncated in the archive mid-way through the final print statement):

void nfsPerfWrite(int doFsync){
    int f;
    char buf[bufSize];
    int n;
    double st, et, r;
    int nb;
    int numBuf;
    CpuStat cpuStatStart;
    CpuStat cpuStatEnd;
    double cpuUsed;
    double cpuUsage;

    sync();
    f = open64(fileName, O_RDWR | O_CREAT, 0666);
    if(f < 0){
        fprintf(stderr, "Error creating %s: %s\n", fileName, strerror(errno));
        return;
    }
    sync();
    cpuStatGet(&cpuStatStart);
    st = getTime();
    for(n = 0; n < diskNum; n++){
        if((nb = write(f, buf, bufSize)) != bufSize)
            fprintf(stderr, "WriteError: %d\n", nb);
    }
    if(doFsync)
        fsync(f);
    et = getTime();
    cpuStatGet(&cpuStatEnd);
    cpuStatEnd.user = cpuStatEnd.user - cpuStatStart.user;
    cpuStatEnd.nice = cpuStatEnd.nice - cpuStatStart.nice;
    cpuStatEnd.sys  = cpuStatEnd.sys  - cpuStatStart.sys;
    cpuStatEnd.idle = cpuStatEnd.idle - cpuStatStart.idle;
    cpuStatEnd.wait = cpuStatEnd.wait - cpuStatStart.wait;
    cpuStatEnd.hi   = cpuStatEnd.hi   - cpuStatStart.hi;
    cpuStatEnd.si   = cpuStatEnd.si   - cpuStatStart.si;
    cpuUsed = (cpuStatEnd.user + cpuStatEnd.nice + cpuStatEnd.sys + cpuStatEnd.hi + cpuStatEnd.si);
    cpuUsage = cpuUsed / (cpuUsed + cpuStatEnd.idle);
    r = ((double)diskNum * bufSize) / (et - st);
    pri
Re: RANT: Packaging is changing too fast and is not well documented
On 02/12/2018 10:14 AM, Ken Dreyer wrote:
> On Sat, Feb 10, 2018 at 6:48 AM, Richard Shaw wrote:
> > Not coming from a programming background I found the learning curve pretty
> > steep when I first tried to become a packager, I'm not sure I wouldn't have
> > given up if I had to do it now.
>
> Thanks for speaking up about this. I'm having trouble following along
> with the latest changes too.
>
> Pagure brings a ton of benefits to dist-git management, so I don't
> diss Pagure and all the hard work folks have put into making that a
> reality. I just miss the easy birds-eye view that the pkgdb web UI
> provided.

I agree things are rough around the edges. That's why I proposed that the number one deliverable from our upcoming Infrastructure Hackfest (https://fedoraproject.org/wiki/Infrastructure_Hackathon_2018) be cleaning up all our documentation and improving any workflows and scripts we can. I hope we can fix some of these issues there...

kevin
Re: RANT: Packaging is changing too fast and is not well documented
On Sat, Feb 10, 2018 at 6:48 AM, Richard Shaw wrote:
> Not coming from a programming background I found the learning curve pretty
> steep when I first tried to become a packager, I'm not sure I wouldn't have
> given up if I had to do it now.

Thanks for speaking up about this. I'm having trouble following along with the latest changes too.

Pagure brings a ton of benefits to dist-git management, so I don't diss Pagure and all the hard work folks have put into making that a reality. I just miss the easy birds-eye view that the pkgdb web UI provided.

- Ken
Re: Fedora27: NFS v4 terrible write performance, is async working
On 12/02/18 17:15, J. Bruce Fields wrote:
> On Mon, Feb 12, 2018 at 05:09:32PM +0000, Terry Barnaby wrote:
> > One thing on this, that I forgot to ask, doesn't fsync() work properly
> > with an NFS server side async mount then ?
>
> No.
>
> If a server sets "async" on an export, there is absolutely no way for a
> client to guarantee that data reaches disk, or to know when it happens.
>
> Possibly "ignore_sync", or "unsafe_sync", or something else, would be a
> better name.
>
> --b.

Well that seems like a major drop off, I always thought that fsync() would work in this case. I don't understand why fsync() should not operate as intended ? Sounds like this NFS async thing needs some work !

I still do not understand why NFS doesn't operate in the same way as a standard mount on this. The use for async is only for improved performance due to disk write latency and speed (or are there other reasons ?)

So with a local system mount:

async: normal mode: All system calls manipulate the in-buffer-memory disk structures (inodes etc). Data/metadata is flushed to disk on fsync(), sync() and occasionally by the kernel. A process's data is not actually stored until fsync(), sync() etc.

sync: with the sync option, data/metadata is written to disk before system calls return (all FS system calls ?).

With an NFS mount I would have thought it should be the same.

async: normal mode: All system calls manipulate the in-buffer-memory disk structures (inodes etc); this would normally be on the server (so multiple clients can work with the same data), but with some options (particular usage) maybe client side write buffering/caching could be used (ie. data would not actually pass to the server during every FS system call). Data/metadata is flushed to server disk on fsync(), sync() and occasionally by the kernel (if client side write caching is used, flushes across the network and then flushes server buffers). A process's data is not actually stored until fsync(), sync() etc.

sync: with client side sync option.
Data/metadata is written across NFS and to Server disk before system calls return (all FS system calls ?). I really don't understand why the async option is implemented on the server export although a sync option here could force sync for all clients for that mount. What am I missing ? Is there some good reason (rather than history) it is done this way ? ___ devel mailing list -- devel@lists.fedoraproject.org To unsubscribe send an email to devel-le...@lists.fedoraproject.org
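To keep the two knobs apart while reading the above: they are configured in different places. A sketch (hostname and paths invented for illustration; "sync" has been the server-side default since nfs-utils 1.0.0):

```text
# Server side, /etc/exports -- the option under discussion:
#   sync   reply to clients only after data reaches stable storage
#   async  reply before committing (faster, but undermines client fsync())
/export/data  *(rw,sync,no_subtree_check)

# Client side, /etc/fstab -- a separate, independently chosen option:
#   the "sync" mount option makes the client issue every write synchronously
server:/export/data  /mnt/data  nfs4  rw,sync  0 0
```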
Re: Fedora27: NFS v4 terrible write performance, is async working
On Mon, Feb 12, 2018 at 05:09:32PM +, Terry Barnaby wrote:
> One thing on this, that I forgot to ask, doesn't fsync() work properly with
> an NFS server side async mount then ?

No. If a server sets "async" on an export, there is absolutely no way for a client to guarantee that data reaches disk, or to know when it happens.

Possibly "ignore_sync", or "unsafe_sync", or something else, would be a better name.

--b.
Re: Fedora27: NFS v4 terrible write performance, is async working
On 12/02/18 17:06, J. Bruce Fields wrote:
> On Mon, Feb 12, 2018 at 09:08:47AM +, Terry Barnaby wrote:
>> On 09/02/18 08:25, nicolas.mail...@laposte.net wrote:
>>> De: "Terry Barnaby"
>>>> If it was important to get the data to disk it would have been using
>>>> fsync(), FS sync, or some other transaction based app
>>> ??? Many people use NFS NAS because doing RAID+Backup on every client is
>>> too expensive. So yes, they *are* using NFS because it is important to
>>> get the data to disk.
>>
>> Yes, that is why I said some people would be using "FS sync". These people
>> would use the "sync" mount option (ideally this would be set on the NFS
>> client, as the clients know they need this).
>
> The "sync" mount option should not be necessary for data safety. Carefully
> written apps know how to use fsync() and related calls at points where they
> need data to be durable. The server-side "async" export option, on the
> other hand, undermines exactly those calls and therefore can result in lost
> or corrupted data on a server crash, no matter how careful the application.
>
> Again, we need to be very careful to distinguish between the client-side
> "sync" mount option and the server-side "sync" export option.
>
> --b.

One thing on this, that I forgot to ask: doesn't fsync() work properly with an NFS server-side async export then? I would have thought this would still work correctly.
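To make the point about "carefully written apps" concrete, this is the application-side pattern that a server-side "async" export silently undermines. A minimal sketch in Python (filename invented; run against a local path here, but the same calls are issued over NFS):

```python
import os
import tempfile

def durable_write(path, data):
    """Write data, returning only once the kernel reports it committed.

    Locally, or against an NFS "sync" export, os.fsync() blocks until the
    data is on stable storage. Against a server-side "async" NFS export the
    server may acknowledge the commit before actually writing, so the same
    call silently loses its guarantee.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # the durability barrier under discussion
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "example.dat")
durable_write(path, b"important data")
```

On return from durable_write() the application believes the data is safe; with an "async" export that belief can be wrong after a server crash.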
Re: Fedora27: NFS v4 terrible write performance, is async working
On Mon, Feb 12, 2018 at 09:08:47AM +, Terry Barnaby wrote:
> On 09/02/18 08:25, nicolas.mail...@laposte.net wrote:
> > De: "Terry Barnaby"
> > > If it was important to get the data to disk it would have been using
> > > fsync(), FS sync, or some other transaction based app
> > ??? Many people use NFS NAS because doing RAID+Backup on every client is
> > too expensive. So yes, they *are* using NFS because it is important to
> > get the data to disk.
> >
> > Regards,
>
> Yes, that is why I said some people would be using "FS sync". These people
> would use the sync option, but then they would use "sync" mount option,
> (ideally this would be set on the NFS client as the clients know they need
> this).

The "sync" mount option should not be necessary for data safety. Carefully written apps know how to use fsync() and related calls at points where they need data to be durable. The server-side "async" export option, on the other hand, undermines exactly those calls and therefore can result in lost or corrupted data on a server crash, no matter how careful the application.

Again, we need to be very careful to distinguish between the client-side "sync" mount option and the server-side "sync" export option.

--b.
Re: Escaping macros in %changelog
On 02/09/2018 08:34 AM, Josh Boyer wrote:
> On Thu, Feb 8, 2018 at 1:32 PM, Matthew Miller wrote:
>> On Thu, Feb 08, 2018 at 05:02:10PM +0100, Igor Gnatenko wrote:
>>> It seems that a lot of people have %file, %check, %build, %whatsoever
>>> in their changelog section. Is there any reason I should not go and
>>> automatically escape them?
>>
>> This seems like a lot of churn. If we're going to do this, let's go big
>> and get rid of RPM changelogs.
>>
>> When we have a package update, there are basically two different kinds
>> of changelog information. Well, three.
>>
>> First, there's the upstream changelog. We don't generally do much with
>> these except maybe package as %doc.
>>
>> Second, there's package maintainer changelogs. These are really
>> redundant with the dist-git log. We don't really need this anymore.
>> It's just a chore.
>>
>> Third, though, there's end-user information. Why should a user care
>> *This* is redundant with bodhi update info, at least if packagers fill
>> that out, and it often also duplicates upstream changelogs, *and* it
>> often also covers things like "fixes CVE-' also carried the
>> specfile changelog.
>>
>> This is neither most helpful for users *nor* ideal for packagers. Why
>> don't we drop changelogs entirely in favor of 1) using the dist-git
>> logs for specfile maintainers and 2) providing the end-user information
>> in a different way? This could be through specially formatted log lines
>> going with the commit, or it could be simply a standard separate
>> file (`fedora.user-visible-changes`). Optionally, it could include both
>> a high-level end-user summary and a detailed description for sysadmins
>> and the curious.
>>
>> Wherever it lives, this would be read by Bodhi, so there would be no
>> need to enter it more than once. And, perhaps a DNF plugin
>> could be made to read and display this information for systems
>> administrators.
>
> I fully support the removal of RPM changelogs. However, you've missed
> two cases:
>
> 1) Rawhide, which doesn't go through bodhi
> 2) Fedora release upgrades, which don't go through bodhi
>
> Now, I would actually LOVE for Rawhide to go through bodhi but
> whatever. The release -> release upgrade isn't really solvable that
> way though.
>
> Someone else suggested changelogs could be inserted during koji build
> time. That would be interesting to look into.

I will also add that I fully support removal of RPM changelogs. To me they end up being a very common source of merge conflicts when backporting patches to previous branches, and we could fix that by eliminating the changelogs entirely.

The dist-git changelogs are mostly noise and I would prefer better organized information about impacts to users and developers. Tell me what changed in the new glibc package, not that the glibc RPM has been upgraded to the new release; I can figure out that part myself.

Many projects include a NEWS file which summarizes the major changes and fixes in a new release. This is usually nicer to consume than changelogs. Sometimes the summarized changes are in another file; sometimes there's nothing like that. Maybe we could mark the relevant file for that purpose in the %files section, like:

%changes NEWS

like a %doc macro. An 'rpm -q --changelog' would just pipe that file through the pager, or display the path, or whatever. If a package lacks a file like that, rpm -q --changelog could just return nothing. This also leaves open the option for package maintainers to create their own summary files and package READMEs, which could be expanded to explain specifics about that software on Fedora.

--
David Cantrell
Red Hat, Inc. | Boston, MA | EST5EDT
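The proposal in this message might look like the following in a spec file; note that %changes is a hypothetical directive being suggested in this thread, not an existing RPM feature:

```text
# Hypothetical syntax (not implemented in RPM today): point the package
# at the upstream NEWS file instead of maintaining %changelog by hand.
%changes NEWS

# 'rpm -q --changelog <pkg>' would then page through the packaged NEWS
# file, and print nothing for packages that ship no such file.
```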
Re: Escaping macros in %changelog
On Fri, Feb 09, 2018 at 08:34:02AM -0500, Josh Boyer wrote:
> > Wherever it lives, this would be read by Bodhi, so there would be no
> > need to enter it more than once. And, perhaps a DNF plugin
> > could be made to read and display this information for systems
> > administrators.
>
> I fully support the removal of RPM changelogs. However, you've missed
> two cases:
>
> 1) Rawhide, which doesn't go through bodhi
> 2) Fedora release upgrades, which don't go through bodhi

I'm actually suggesting something different: write the user-focused changelog in one place when updating the package, and have bodhi read that and use it for its description (making the bodhi step less work).

--
Matthew Miller
Fedora Project Leader
Re: Escaping macros in %changelog
On Fri, Feb 09, 2018 at 08:12:43AM +0100, Igor Gnatenko wrote:
> * Many times people put some useless messages in there, so we
> probably don't want to convert old history to git-based changelogs
> and have a point where we ask people to start writing useful commit
> messages.

There are lots of useless messages in many changelogs, too.

> * No easy way to map changelogs with versions and releases in a
> package. Imagine that you pushed a commit with an update, then realised
> that it doesn't build and reverted it. What should be in the changelog?

I think we *need* separate changelog streams -- the packager one (with all the messiness) and a user-focused one. This could either be a separate file or some kind of standard we agree on for special lines in the git changelog. (In that case, we could have a tool to extract those lines, and it could understand to remove user-facing messages found in reverted commits.)

I'm not sure it's super-useful to inject them into the RPM itself. Better, I think, to put it in /usr/share/doc somewhere or something. That makes the metadata more lightweight (what percentage of /var/lib/rpm/Packages is changelog text?).

--
Matthew Miller
Fedora Project Leader
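The extraction tool suggested above could be quite small. A sketch under the assumptions in this thread (the "User-visible:" marker and the revert-detection heuristic are invented for illustration, not an agreed convention):

```python
def user_visible_notes(commit_messages):
    """Collect 'User-visible:' lines from commit messages, dropping any
    note that also appears in a commit whose subject marks it a revert."""
    MARKER = "User-visible:"
    reverted = set()
    notes = []
    for msg in commit_messages:
        lines = msg.splitlines()
        # Heuristic: git's default revert subject starts with 'Revert "'.
        is_revert = bool(lines) and lines[0].startswith("Revert ")
        for line in lines:
            if line.startswith(MARKER):
                note = line[len(MARKER):].strip()
                if is_revert:
                    reverted.add(note)
                else:
                    notes.append(note)
    return [n for n in notes if n not in reverted]

msgs = [
    "Update to 2.0\n\nUser-visible: config file moved to /etc/foo.conf",
    "Fix FTBFS",
    'Revert "Update to 2.0"\n\nUser-visible: config file moved to /etc/foo.conf',
]
print(user_visible_notes(msgs))  # -> [] (the reverted note is filtered out)
```

A real tool would of course need a sturdier notion of "reverted" (matching commit hashes rather than note text), but the shape is this simple.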
Re: KDE print dialog does not see CUPS settings
Kevin Kofler wrote:
> Kevin Kofler wrote:
>> Thankfully, as Rex Dieter points out, this is finally being addressed in
>> Qt 5.11 (see https://wiki.qt.io/New_Features_in_Qt_5.11 and
>> https://bugreports.qt.io/browse/QTBUG-54464). Unfortunately, that release
>> is still a few months away from now. (Qt upstream currently expects to
>> release Qt 5.11.0 on May 31.) I would actually argue that we should
>> backport this to our packages sooner, backports from OpenSUSE are linked
>> in QTBUG-54464. But I am not a maintainer of qt5-qtbase, so it is not my
>> decision to make.
>
> I have built qt5-qtbase with these backported patches in a Copr:
> https://copr.fedorainfracloud.org/coprs/kkofler/qt5-qtbase-print-dialog-advanced/

Thanks Kevin!

--
Rex
Re: mock slow to get new packages for Rawhide?
On 02/12/2018 02:51 PM, Richard Shaw wrote:
> On Mon, Feb 12, 2018 at 7:44 AM, Mikolaj Izdebski wrote:
>>
>> There hasn't been any successful rawhide compose in a couple days.
>
> Ah! I didn't even think of it that way... So mock doesn't pull directly
> from the buildroot. I guess most of the time that makes sense, you want to
> have a (usually) known good snapshot or lots of builds would fail with the
> normal Rawhide churn.

Mock does pull from the buildroot when you run it with "--enablerepo local".

--
Mikolaj Izdebski
Senior Software Engineer, Red Hat
IRC: mizdebsk
Re: mock slow to get new packages for Rawhide?
On Mon, Feb 12, 2018 at 7:44 AM, Mikolaj Izdebski wrote:
>
> There hasn't been any successful rawhide compose in a couple days.

Ah! I didn't even think of it that way... So mock doesn't pull directly from the buildroot. I guess most of the time that makes sense, you want to have a (usually) known good snapshot or lots of builds would fail with the normal Rawhide churn.

Thanks,
Richard
Un-retiring tcl-thread package
Hi. I want to un-retire the tcl-thread package. A new review request has already been approved: https://bugzilla.redhat.com/show_bug.cgi?id=1544384
Re: mock slow to get new packages for Rawhide?
On 02/12/2018 02:35 PM, Richard Shaw wrote: > With the recent gcc 8 issues I've been trying to do some build tests on my > system but I keep getting > gcc-8.0.1-0.9.fc28 even after performing a --scrub=all instead of > gcc-8.0.1-0.13.fc28 which was built yesterday or even gcc-8.0.1-0.12.fc28 > from the 9th... > > What gives? It can't just be slow mirror propagation... Try with --enablerepo local. There hasn't been any successful rawhide compose in a couple days. -- Mikolaj Izdebski Senior Software Engineer, Red Hat IRC: mizdebsk
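Spelled out, the suggested workaround looks like this (the chroot and SRPM names are illustrative; the "local" repo is defined in mock's stock Fedora configs and points at the Koji buildroot repo):

```text
# Rebuild against the Koji buildroot repo directly, bypassing the
# (currently stale) composed rawhide repo:
mock -r fedora-rawhide-x86_64 --enablerepo local rebuild mypackage-1.0-1.fc28.src.rpm
```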
mock slow to get new packages for Rawhide?
With the recent gcc 8 issues I've been trying to do some build tests on my system but I keep getting gcc-8.0.1-0.9.fc28 even after performing a --scrub=all instead of gcc-8.0.1-0.13.fc28 which was built yesterday or even gcc-8.0.1-0.12.fc28 from the 9th... What gives? It can't just be slow mirror propagation... Thanks, Richard
Re: OpenImageIO GCC 8 build problem?
On 10/02/18 12:32 -0600, Richard Shaw wrote:
> On Sat, Feb 10, 2018 at 12:24 PM, Jakub Jelinek wrote:
>> On Sat, Feb 10, 2018 at 06:42:00AM -0600, Richard Shaw wrote:
>>> A scratch build works fine on Fedora 27...
>>
>> Likely http://gcc.gnu.org/PR83204 .
>
> Looks like it, rebuilding with C++11 let it complete. Would rebuilding
> with C++11 cause an ABI change?

I already read the other replies suggesting waiting for the new GCC, but for the record: no, it wouldn't.
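If a package does need to pin the language standard as a workaround, one way to do it might look like this (assuming an upstream CMake build; the flag placement is illustrative, not taken from the OpenImageIO spec):

```text
# In the spec's %build section, force the C++11 standard for the whole build:
%cmake -DCMAKE_CXX_STANDARD=11 -DCMAKE_CXX_STANDARD_REQUIRED=ON .
%make_build
```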
Re: orphaned vim-vimoutliner
On 2018-02-12, 08:08 GMT, Petr Lautrbach wrote: > I orphaned vim-vimoutliner. > > I switched to emacs and haven't touched it for some time. You could let me know. Adopting this poor orphan. Matěj -- https://matej.ceplovi.cz/blog/, Jabber: mc...@ceplovi.cz GPG Finger: 3C76 A027 CA45 AD70 98B5 BC1D 7920 5802 880B C9D8 Don't come crying to me about your "30 minute compiles"!! I have to build X uphill both ways! In the snow! With bare feet! And we didn't have compilers! We had to translate the C code to mnemonics OURSELVES! And I was 18 before we even had assemblers!
Re: Please review use /$ in %files (Was: Re: Escaping macros in %changelog)
On Thu, 8 Feb 2018 18:39:19 +0100, Petr Stodulka wrote:
> > The following:
> > %files
> > /some/directory/
> >
> > is equivalent to:
> > %files
> > %dir /some/directory
> > /some/directory/*
> >
> > There's nothing wrong here.
>
> Exactly. IMHO, use of the %dir macro for "top" pkg directories is a cleaner
> solution, but it doesn't matter as long as the rpm is packaged correctly.

That makes no sense if you restrict yourself to a "top" directory. It could be a huge tree with lots of subdirectories. How would you package it then? With explicit %dir everywhere? Or with %dir only for the "top" directory?

Including complete directory trees with /foo/bar/ is fine and clean and has been an advertised solution for many years.
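To make the recursive case concrete, here is a sketch with an invented directory tree -- the single trailing-slash entry and the fully spelled-out form package exactly the same files:

```text
# Short form: owns /usr/share/foo, every subdirectory, and every file below it.
%files
/usr/share/foo/

# Equivalent long form with explicit %dir entries for the same tree:
%files
%dir /usr/share/foo
%dir /usr/share/foo/plugins
/usr/share/foo/foo.conf
/usr/share/foo/plugins/basic.so
```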
Re: Please review use /$ in %files (Was: Re: Escaping macros in %changelog)
On Thu, 8 Feb 2018 18:09:25 +, Tomasz Kłoczko wrote: > I'm sure that in the past it was difference here :| You are mistaken about that.
Re: Build python-jsonpatch-1.21-1.fc28 not in rawhide repo
On 02/12/2018 11:00 AM, Alfredo Moralejo Alonso wrote: > Hi, > > A new build for python-jsonpatch was done some days ago in > https://koji.fedoraproject.org/koji/buildinfo?buildID=1024370 but it has > not appeared in rawhide repo yet. > > Any idea about what may be happening? This build was completed on 6 February, but latest successful rawhide compose is from 4 February. -- Mikolaj Izdebski Senior Software Engineer, Red Hat IRC: mizdebsk
Re: Was APLM accidently enabled in Fedora 27?
Might it be related to this?
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/message/T2STBDPXKZ7DHC7GS6VLM3ESYI6RHVDM/

V.

On 9.2.2018 at 16:18, Clemens Eisserer wrote:
> Hi,
>
> Since a week or so my laptop has had severe SATA issues when running on
> battery only.
>
> Was ALPM enabled by accident in Fedora 27? (I know there are currently
> experiments going on for Rawhide / F28.)
> https://bugzilla.redhat.com/show_bug.cgi?id=1543493
>
> Best regards, Clemens
Build python-jsonpatch-1.21-1.fc28 not in rawhide repo
Hi, A new build for python-jsonpatch was done some days ago in https://koji.fedoraproject.org/koji/buildinfo?buildID=1024370 but it has not appeared in rawhide repo yet. Any idea about what may be happening? Regards, Alfredo
Re: Fedora27: NFS v4 terrible write performance, is async working
On 09/02/18 08:25, nicolas.mail...@laposte.net wrote:
> De: "Terry Barnaby"
>> If it was important to get the data to disk it would have been using
>> fsync(), FS sync, or some other transaction based app
>
> ??? Many people use NFS NAS because doing RAID+Backup on every client is
> too expensive. So yes, they *are* using NFS because it is important to get
> the data to disk.
>
> Regards,

Yes, that is why I said some people would be using "FS sync". These people would use the "sync" mount option (ideally this would be set on the NFS client, as the clients know they need this).

Personally we use rsync, via an rsync server or over ssh, for backups like this, as NFS sync would be far too slow and rsync provides an easy incremental mode plus other benefits.
orphaned vim-vimoutliner
Hi, I orphaned vim-vimoutliner. I switched to emacs and haven't touched it for some time. Thanks, Petr