[Guidelines change] Changes to the packaging guidelines

2018-01-30 Thread Jason L Tibbitts III
Here are the recent changes to the packaging guidelines.

We have more things coming in concert with various distro changes that
are happening, but I wanted to get these two out there now.

-

The icon cache scriptlets were removed from the scriptlet guidelines, as
no live Fedora release needs them.  They have been moved to the EPEL
packaging guidelines instead.
* https://fedoraproject.org/wiki/Packaging:Scriptlets
* https://fedoraproject.org/wiki/EPEL:Packaging
* https://pagure.io/packaging-committee/issue/736
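For reference, the scriptlets in question are the familiar GTK icon-cache
fragments, which EPEL packages still need. A typical form (reproduced from
memory of the old guideline text; verify against the EPEL page above) is:

```spec
# Refresh the icon cache when icons are installed, removed, or updated.
%post
/bin/touch --no-create %{_datadir}/icons/hicolor &>/dev/null || :

%postun
if [ $1 -eq 0 ] ; then
    /bin/touch --no-create %{_datadir}/icons/hicolor &>/dev/null
    /usr/bin/gtk-update-icon-cache %{_datadir}/icons/hicolor &>/dev/null || :
fi

%posttrans
/usr/bin/gtk-update-icon-cache %{_datadir}/icons/hicolor &>/dev/null || :
```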

-

A section was added to the beginning of the guidelines, clarifying how
they apply to the various Fedora releases and to EPEL.
* https://fedoraproject.org/wiki/Packaging:Guidelines#Applicability
* https://pagure.io/packaging-committee/issue/744
___
devel-announce mailing list -- devel-annou...@lists.fedoraproject.org
To unsubscribe send an email to devel-announce-le...@lists.fedoraproject.org
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Fedora Rawhide-20180130.n.0 compose check report

2018-01-30 Thread Fedora compose checker
Missing expected images:

Workstation live i386
Kde live i386

Failed openQA tests: 14/129 (x86_64), 3/22 (i386), 1/2 (arm)

New failures (same test did not fail in Rawhide-20180128.n.0):

ID: 189390  Test: x86_64 Workstation-live-iso install_default@uefi
URL: https://openqa.fedoraproject.org/tests/189390
ID: 189406  Test: i386 Workstation-boot-iso memory_check
URL: https://openqa.fedoraproject.org/tests/189406
ID: 189465  Test: x86_64 universal install_simple_encrypted
URL: https://openqa.fedoraproject.org/tests/189465
ID: 189476  Test: x86_64 universal install_xfs@uefi
URL: https://openqa.fedoraproject.org/tests/189476
ID: 189484  Test: x86_64 universal install_blivet_xfs@uefi
URL: https://openqa.fedoraproject.org/tests/189484

Old failures (same test failed in Rawhide-20180128.n.0):

ID: 189375  Test: x86_64 Server-dvd-iso server_role_deploy_domain_controller
URL: https://openqa.fedoraproject.org/tests/189375
ID: 189407  Test: x86_64 KDE-live-iso desktop_notifications_live
URL: https://openqa.fedoraproject.org/tests/189407
ID: 189408  Test: x86_64 KDE-live-iso install_default_upload
URL: https://openqa.fedoraproject.org/tests/189408
ID: 189409  Test: x86_64 KDE-live-iso install_default@uefi
URL: https://openqa.fedoraproject.org/tests/189409
ID: 189410  Test: x86_64 KDE-live-iso install_no_user
URL: https://openqa.fedoraproject.org/tests/189410
ID: 189420  Test: arm Minimal-raw_xz-raw.xz install_arm_image_deployment_upload
URL: https://openqa.fedoraproject.org/tests/189420
ID: 189422  Test: x86_64 Atomic-dvd_ostree-iso install_default@uefi
URL: https://openqa.fedoraproject.org/tests/189422
ID: 189423  Test: x86_64 Atomic-dvd_ostree-iso install_default
URL: https://openqa.fedoraproject.org/tests/189423
ID: 189443  Test: x86_64 universal upgrade_server_domain_controller
URL: https://openqa.fedoraproject.org/tests/189443
ID: 189449  Test: x86_64 universal install_package_set_kde
URL: https://openqa.fedoraproject.org/tests/189449
ID: 189493  Test: x86_64 universal install_asian_language
URL: https://openqa.fedoraproject.org/tests/189493
ID: 189498  Test: i386 universal install_package_set_kde
URL: https://openqa.fedoraproject.org/tests/189498
ID: 189504  Test: i386 universal upgrade_2_desktop_32bit
URL: https://openqa.fedoraproject.org/tests/189504

Soft failed openQA tests: 22/129 (x86_64), 4/22 (i386)
(Tests completed, but using a workaround for a known bug)

New soft failures (same test did not soft fail in Rawhide-20180128.n.0):

ID: 189405  Test: i386 Workstation-boot-iso install_default
URL: https://openqa.fedoraproject.org/tests/189405

Old soft failures (same test soft failed in Rawhide-20180128.n.0):

ID: 189360  Test: x86_64 Server-boot-iso install_default@uefi
URL: https://openqa.fedoraproject.org/tests/189360
ID: 189361  Test: x86_64 Server-boot-iso install_default
URL: https://openqa.fedoraproject.org/tests/189361
ID: 189366  Test: x86_64 Server-dvd-iso install_default_upload
URL: https://openqa.fedoraproject.org/tests/189366
ID: 189367  Test: x86_64 Server-dvd-iso install_default@uefi
URL: https://openqa.fedoraproject.org/tests/189367
ID: 189383  Test: i386 Server-boot-iso install_default
URL: https://openqa.fedoraproject.org/tests/189383
ID: 189384  Test: i386 Server-dvd-iso install_default
URL: https://openqa.fedoraproject.org/tests/189384
ID: 189388  Test: x86_64 Workstation-live-iso install_default_upload
URL: https://openqa.fedoraproject.org/tests/189388
ID: 189391  Test: x86_64 Workstation-live-iso install_no_user
URL: https://openqa.fedoraproject.org/tests/189391
ID: 189401  Test: x86_64 Workstation-boot-iso install_default
URL: https://openqa.fedoraproject.org/tests/189401
ID: 189404  Test: x86_64 Workstation-boot-iso install_default@uefi
URL: https://openqa.fedoraproject.org/tests/189404
ID: 189431  Test: x86_64 universal upgrade_minimal_64bit
URL: https://openqa.fedoraproject.org/tests/189431
ID: 189432  Test: x86_64 universal upgrade_desktop_encrypted_64bit
URL: https://openqa.fedoraproject.org/tests/189432
ID: 189439  Test: x86_64 universal install_anaconda_text
URL: https://openqa.fedoraproject.org/tests/189439
ID: 189444  Test: x86_64 universal upgrade_kde_64bit
URL: https://openqa.fedoraproject.org/tests/189444
ID: 189446  Test: x86_64 universal upgrade_2_desktop_64bit
URL: https://openqa.fedoraproject.org/tests/189446
ID: 189455  Test: x86_64 universal upgrade_desktop_64bit
URL: https://openqa.fedoraproject.org/tests/189455
ID: 189456  Test: x86_64 universal upgrade_server_64bit
URL: https://openqa.fedoraproject.org/tests/189456
ID: 189458  Test: x86_64 universal upgrade_2_minimal_64bit
URL: https://openqa.fedoraproject.org/tests/189458
ID: 189460  Test: x86_64 universal upgrade_2_kde_64bit
URL: https://openqa.fedoraproject.org/tests/189460
ID: 189468  Test: x86_64 universal install_iscsi
URL: 

FDP Release and Statistics 1/2018: C# IDEs, .NET, and Multiple Pythons

2018-01-30 Thread Pavel Valena
Hello,

new Fedora Developer Portal was released.

What's updated?
  - `C# IDEs` and `.NET installation` pages [1][2]
  by Radka Janek 

  - `Multiple Pythons` page [3]
  by Miro Hrončok 

  - Moved PHP web Frameworks to Web Application subsection [4]
  by Yotam Nachum  

Thanks to all!

[1] https://developer.fedoraproject.org/tech/languages/csharp/csharp-ide.html
[2] https://developer.fedoraproject.org/tech/languages/csharp/dotnet-installation.html
[3] https://developer.fedoraproject.org/tech/languages/python/multiple-pythons.html
[4] https://developer.fedoraproject.org/tech/languages/php/php-frameworks.html


New content coming!
  - Plenty of Reviews are in progress and your help is much appreciated!

  https://github.com/developer-portal/content/pulls


Statistics

Jan 1, 2018 - Jan 30, 2018                       Pageviews      Unique Pageviews  Avg. Time on Page
===============================================  =============  ================  =================
Total                                            25,922         21,609            00:02:05
/                                                2,156 (8.32%)  1,766 (8.17%)     00:01:13
/tech/languages/python/python-installation.html  1,315 (5.07%)  1,119 (5.18%)     00:02:38
/tools/docker/docker-installation.html           1,099 (4.24%)  964 (4.46%)       00:04:27
/tech/languages/nodejs/nodejs.html               1,086 (4.19%)  928 (4.29%)       00:03:49
/tech/database/mongodb/about.html                937 (3.61%)    808 (3.74%)       00:07:25
/tech.html                                       907 (3.50%)    597 (2.76%)       00:00:35
/tools.html                                      798 (3.08%)    585 (2.71%)       00:00:41
/start.html                                      711 (2.74%)    455 (2.11%)       00:00:19
/tech/languages/go/go-installation.html          673 (2.60%)    590 (2.73%)       00:03:39
/tech/languages/python/multiple-pythons.html     564 (2.18%)    515 (2.38%)       00:04:05

Dec 1, 2017 - Dec 31, 2017                       Pageviews      Unique Pageviews  Avg. Time on Page
===============================================  =============  ================  =================
Total                                            27,335         21,316            00:01:49
/                                                2,522 (9.23%)  1,927 (9.04%)     00:01:05
/tech.html                                       1,370 (5.01%)  707 (3.32%)       00:01:05
/tech/languages/python/python-installation.html  1,293 (4.73%)  1,077 (5.05%)     00:03:13
/tools.html                                      1,105 (4.04%)  668 (3.13%)       00:00:37
/tools/docker/docker-installation.html           1,031 (3.77%)  900 (4.22%)       00:04:22
/start.html                                      887 (3.24%)    462 (2.17%)       00:00:19
/tech/languages/nodejs/nodejs.html               862 (3.15%)    766 (3.59%)       00:04:13
/tech/database/mongodb/about.html                857 (3.14%)    755 (3.54%)       00:05:36
/tech/languages/go/go-installation.html          615 (2.25%)    534 (2.51%)       00:03:18
/tech/languages/python/multiple-pythons.html     613 (2.24%)    545 (2.56%)       00:03:26

Nov 1, 2017 - Nov 30, 2017                       Pageviews      Unique Pageviews  Avg. Time on Page
===============================================  =============  ================  =================
Total                                            26,353         21,692            00:02:05
/                                                2,263 (8.59%)  1,818 (8.38%)     00:01:10
/tech/languages/python/python-installation.html  1,299 (4.93%)  1,072 (4.94%)     00:03:09
/tools/docker/docker-installation.html           1,227 (4.66%)  1,050 (4.84%)     00:04:33
/tech.html                                       978 (3.71%)    600 (2.77%)       00:00:32
/tech/database/mongodb/about.html                946 (3.59%)    811 (3.74%)       00:07:23
/tools.html                                      946 (3.59%)    676 (3.12%)       00:00:46
/tech/languages/nodejs/nodejs.html               807 (3.06%)    702 (3.24%)       00:02:49
/start.html                                      778 (2.95%)    452 (2.08%)       00:00:25
/tech/languages/go/go-installation.html          679 (2.58%)    593 (2.73%)       00:04:38
/tech/languages/ruby/ror-installation.html       610 (2.31%)    499 (2.30%)       00:03:41


And as always- any feedback is welcome.

Regards,

Pavel Valena
Software Engineer, Red Hat
Brno, Czech Republic


> Statistics
>   Oct 1, 2017 - Oct 31, 2017                   Pageviews  Unique Pageviews  Avg. Time on Page
>   Total                                        26,285     21,788            00:02:01
>   /

Re: [HEADS UP] Removal of systemd-units

2018-01-30 Thread Nathanael Noblet

> beanstalkd   gnat jjh

Updated in epel7 - rawhide was fine already

Oddly, dspam, the package I updated earlier today, wasn't on this list even
though it has Requires/BuildRequires on systemd-units. I'm not sure why it
isn't on your list, but it could mean that some packages will only break
once you actually remove systemd-units.

I've also updated dspam in epel7 now.


[Test-Announce] Fedora 28 Rawhide 20180130.n.0 nightly compose nominated for testing

2018-01-30 Thread rawhide
Announcing the creation of a new nightly release validation test event
for Fedora 28 Rawhide 20180130.n.0. Please help run some tests for this
nightly compose if you have time. For more information on nightly
release validation testing, see:
https://fedoraproject.org/wiki/QA:Release_validation_test_plan

Notable package version changes:
lorax - 20180126.n.0: lorax-28.4-1.fc28.src, 20180130.n.0: lorax-28.5-1.fc28.src

Test coverage information for the current release can be seen at:
https://www.happyassassin.net/testcase_stats/28

You can see all results, find testing instructions and image download
locations, and enter results on the Summary page:

https://fedoraproject.org/wiki/Test_Results:Fedora_28_Rawhide_20180130.n.0_Summary

The individual test result pages are:

https://fedoraproject.org/wiki/Test_Results:Fedora_28_Rawhide_20180130.n.0_Installation
https://fedoraproject.org/wiki/Test_Results:Fedora_28_Rawhide_20180130.n.0_Base
https://fedoraproject.org/wiki/Test_Results:Fedora_28_Rawhide_20180130.n.0_Server
https://fedoraproject.org/wiki/Test_Results:Fedora_28_Rawhide_20180130.n.0_Cloud
https://fedoraproject.org/wiki/Test_Results:Fedora_28_Rawhide_20180130.n.0_Desktop
https://fedoraproject.org/wiki/Test_Results:Fedora_28_Rawhide_20180130.n.0_Security_Lab

Thank you for testing!
-- 
Mail generated by relvalconsumer: https://pagure.io/fedora-qa/relvalconsumer
___
test-announce mailing list -- test-annou...@lists.fedoraproject.org
To unsubscribe send an email to test-announce-le...@lists.fedoraproject.org


Re: Package Wishlist

2018-01-30 Thread Björn Persson
inderau...@arcor.de wrote:
> I'm interested in GNU Ring's communication solution.
> https://ring.cx/en
> https://ring.cx/en/about/practical

I see that FFMPEG is mentioned on Ring's website. FFMPEG isn't permissible in 
Fedora because the USA's patent system is crap, so it's packaged in RPM Fusion 
instead. Anyone who wants to package Ring will have to do it in RPM Fusion (or 
another add-on repository), unless Ring can function without FFMPEG.

Björn Persson




Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread Tomasz Kłoczko
On 28 January 2018 at 07:48, Terry Barnaby  wrote:

> When doing a tar -xzf ... of a big source tar on an NFSv4 file system the
> time taken is huge. I am seeing an overall data rate of about 1 MByte per
> second across the network interface. If I copy a single large file I see a
> network data rate of about 110 MBytes/sec which is about the limit of the
> Gigabit Ethernet interface I am using.
>
> Now, in the past I have used the NFS "async" mount option to help with
> write speed (lots of small files in the case of an untar of a set of source
> files).
>
> However, this does not seem to speed this up in Fedora27 and also I don't
> see the "async" option listed when I run the "mount" command. When I use
> the "sync" option it does show up in the "mount" list.
>
> The question is, is the "async" option actually working with NFS v4 in
> Fedora27 ?
>

A big NFS speed degradation can usually be observed when rpcbind is not
running on the NFS client, or when communication with that process is
blocked by a firewall.
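A quick way to rule this out on a client (illustrative commands; the server
host name is a placeholder, not taken from the thread):

```shell
# Is rpcbind running on the NFS client?
systemctl status rpcbind

# Can the client reach the server's rpcbind (port 111, tcp and udp)?
rpcinfo -p nfs-server.example.com
```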

kloczek
-- 
Tomasz Kłoczko | LinkedIn: http://lnkd.in/FXPWxH






Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 10:30:04PM +, Terry Barnaby wrote:
> Also, on the 0.5ms: is this effectively the 1ms system tick, i.e. the NFS
> processing is driven not by packet events (not pre-emptive) but by the
> next system tick?
> 
> An ICMP ping is about 0.13ms (to and fro) between these systems. Although
> 0.5ms is relatively fast, I wouldn't have thought it should have to take
> 0.5ms for a minimal RPC even over TCPIP.

It'd be interesting to break down that latency.  I'm not sure where it's
coming from.  I doubt it has to do with the system tick.

--b.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread Terry Barnaby

On 30/01/18 21:31, J. Bruce Fields wrote:

On Tue, Jan 30, 2018 at 07:03:17PM +, Terry Barnaby wrote:

It looks like each RPC call takes about 0.5ms. Why do there need to be so
many RPC calls for this? The OPEN call could set the attribs, no need for
the later GETATTR or SETATTR calls.

The first SETATTR (which sets ctime and mtime to server's time) seems
unnecessary, maybe there's a client bug.

The second looks like tar's fault, strace shows it doing a utimensat()
on each file.  I don't know why or if that's optional.


Even the CLOSE could be integrated with the WRITE and taking this
further OPEN could do OPEN, SETATTR, and some WRITE all in one.

We'd probably need some new protocol to make it safe to return from the
open system call before we've gotten the OPEN reply from the server.

Write delegations might save us from having to wait for the other
operations.

Taking a look at my own setup, I see the same calls taking about 1ms.
The drives can't do that, so I've got a problem somewhere too.

--b.


Also, on the 0.5ms: is this effectively the 1ms system tick, i.e. the NFS
processing is driven not by packet events (not pre-emptive) but by the
next system tick?


An ICMP ping is about 0.13ms (to and fro) between these systems. 
Although 0.5ms is relatively fast, I wouldn't have thought it should 
have to take 0.5ms for a minimal RPC even over TCPIP.



Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 04:31:58PM -0500, J. Bruce Fields wrote:
> On Tue, Jan 30, 2018 at 07:03:17PM +, Terry Barnaby wrote:
> > It looks like each RPC call takes about 0.5ms. Why do there need to be so
> > many RPC calls for this? The OPEN call could set the attribs, no need for
> > the later GETATTR or SETATTR calls.
> 
> The first SETATTR (which sets ctime and mtime to server's time) seems
> unnecessary, maybe there's a client bug.
> 
> The second looks like tar's fault, strace shows it doing a utimensat()
> on each file.  I don't know why or if that's optional.
> 
> > Even the CLOSE could be integrated with the WRITE and taking this
> > further OPEN could do OPEN, SETATTR, and some WRITE all in one.
> 
> We'd probably need some new protocol to make it safe to return from the
> open system call before we've gotten the OPEN reply from the server.
> 
> Write delegations might save us from having to wait for the other
> operations.
> 
> Taking a look at my own setup, I see the same calls taking about 1ms.
> The drives can't do that, so I've got a problem somewhere too

Whoops, I totally forgot it was still set up with an external journal on
SSD:

# tune2fs -l /dev/mapper/export-export |grep '^Journal'
Journal UUID: dc356049-6e2f-4e74-b185-5357bee73a32
Journal device:   0x0803
Journal backup:   inode blocks
# blkid --uuid dc356049-6e2f-4e74-b185-5357bee73a32
/dev/sda3
# cat /sys/block/sda/device/model 
INTEL SSDSA2M080

So, most of the data is striped across a couple big hard drives, but the
journal is actually on a small partition on an SSD.

If I remember correctly, I initially tried this with an older intel SSD
and didn't get a performance improvement.  Then I replaced it with this
model which has the "Enhanced Power Loss Data Protection" feature, which
I believe means the write cache is durable, so it should be able to
safely acknowledge writes as soon as they reach the SSD's cache.

And weirdly I think I never actually got around to rerunning these tests
after I installed the new SSD.

Anyway, so that might explain the difference we're seeing.

I'm not sure how to find new SSDs with that feature, but it may be worth
considering as a cheap way to accelerate this kind of workload.  It can
be a very small SSD as it only needs to hold the journal.  Adding an
external journal is a quick operation (you don't have to recreate the
filesystem or anything).
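For anyone wanting to try the same setup, the rough procedure looks like
this (a sketch with placeholder device names; the filesystem must be
unmounted, so double-check the mke2fs/tune2fs man pages and back up first):

```shell
# Create a journal device on a small SSD partition
mke2fs -O journal_dev /dev/sdX1

# Switch an existing ext4 filesystem to the external journal
tune2fs -O ^has_journal /dev/mapper/export-export
tune2fs -J device=/dev/sdX1 /dev/mapper/export-export

# Verify, as in the output above
tune2fs -l /dev/mapper/export-export | grep '^Journal'
```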

--b.


Re: net-snmp, cmake, mc unresponsive packagers

2018-01-30 Thread Tomasz Kłoczko
On 30 January 2018 at 21:53, Ben Rosser  wrote:

> On Tue, Jan 30, 2018 at 4:32 PM, Tomasz Kłoczko
>  wrote:
> > https://fedoraproject.org/wiki/Policy_for_nonresponsive_package_maintainers
> >
> > net-snmp
> >   https://bugzilla.redhat.com/show_bug.cgi?id=1529716
> >   https://src.fedoraproject.org/rpms/net-snmp/pull-request/2
>
> There were responses on the pull request you linked within the last
> week, including from the maintainer 5 days ago. I'm not sure how this
> qualifies as "unresponsive"?
>
> https://src.fedoraproject.org/rpms/net-snmp/pull-request/2#comment-3999


Sorry, you are right in this case.
I was just looking at all my still-open tickets and noticed that it has
already been more than a month since the whole set of commits was submitted.
It will now be necessary to rebase this set, as I see the net-snmp package
release has moved forward in the meantime.
Please commit whatever you want from this PR and leave a comment so I can
look over your changes; when committing only parts rather than the whole PR
it is quite easy to make mistakes (as happened recently with a similar set
of OpenIPMI changes).
Just remember that if you are not cherry-picking the exact patches or
accepting the whole PR, I will not be informed automatically by email about
your commits.

kloczek
-- 
Tomasz Kłoczko | LinkedIn: *http://lnkd.in/FXPWxH *


copr mailist?

2018-01-30 Thread Adrian Sevcenco
Hi! I was wondering what would be the best place to ask for help with
problems building packages on Copr...


Thank you and sorry for off-topic,
Adrian





FYI: Fedora/RISC-V third and final bootstrap

2018-01-30 Thread Richard W.M. Jones
About a month ago I posted about the state of the RISC-V architecture
for Fedora:

  
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/EYY4TFJTV66EAG322F3E6V6TA7I3RZAZ/

Quoting from that email:

| First the basics: RISC-V is a free and open Instruction Set
| Architecture (ISA).  You can read more about it on the RISC-V
| Foundation's website here:
| 
|   https://riscv.org/
| 
| Fedora/RISC-V is a project to port Fedora to RISC-V.  Actual, real
| 64-bit RISC-V hardware you can buy is going to be released in Q1 2018
| (it's already sampling to a few lucky developers), and I want Fedora
| to be the first choice to run on that hardware.
| 
| The Fedora/RISC-V project web pages are here:
| 
|   https://fedoraproject.org/wiki/Architectures/RISC-V

As noted in the earlier email the project was on a hiatus since the
end of 2016, waiting for the RISC-V Foundation to commit to a stable
Linux libc ABI for the architecture.  Well, finally that has happened.
glibc 2.27, due to be released on Thursday, will contain a stable
RISC-V ABI allowing us to sanely develop a Linux distro.

And thus the Fedora/RISC-V project is back in business.  Next week
we'll be starting the third (and final) bootstrap.  You can follow the
work at these links:

  https://github.com/rwmjones/fedora-riscv-bootstrap
  https://fedoraproject.org/wiki/Architectures/RISC-V
  https://fedoraproject.org/wiki/Architectures/RISC-V/Bootstrapping

The approximate timelines (don't hold me to any of this) are:

  * mid-February: Stage 3 disk image.
  * end-March: Pristine, pure RPM-built stage 4 disk image,
  autobuilder picking up Fedora packages and building them.
  * Summer: Shadow-Koji instance, Fedora 28/Rawhide RPM hosting.

You can already try out the interim (and very minimal and hacky)
bbl-bootloader/kernel/stage 3 disk image:

  http://oirase.annexia.org/riscv/

Install the riscv-qemu package from:

  https://copr.fedorainfracloud.org/coprs/rjones/riscv/

and then run:

  qemu-system-riscv64 \
-nographic \
-machine virt \
-m 2G \
-kernel bbl \
-append "console=ttyS0 ro root=/dev/vda init=/init" \
-device virtio-blk-device,drive=hd0 \
-drive file=stage3-disk.img,format=raw,id=hd0 \
-device virtio-net-device,netdev=usernet \
-netdev user,id=usernet

I'll be at FOSDEM on Saturday if anyone is interested in Fedora and
RISC-V.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW


Re: net-snmp, cmake, mc unresponsive packagers

2018-01-30 Thread Ben Rosser
On Tue, Jan 30, 2018 at 4:32 PM, Tomasz Kłoczko
 wrote:
> https://fedoraproject.org/wiki/Policy_for_nonresponsive_package_maintainers
>
> net-snmp
>   https://bugzilla.redhat.com/show_bug.cgi?id=1529716
>   https://src.fedoraproject.org/rpms/net-snmp/pull-request/2

There were responses on the pull request you linked within the last
week, including from the maintainer 5 days ago. I'm not sure how this
qualifies as "unresponsive"?

https://src.fedoraproject.org/rpms/net-snmp/pull-request/2#comment-3999

Ben Rosser


Re: net-snmp, cmake, mc unresponsive packagers

2018-01-30 Thread Tomasz Kłoczko
On 30 January 2018 at 21:37, Jindrich Novy  wrote:

> Hi Tomasz,
>
> I haven't done much on mc front recently but I'm pretty responsive. Have
> you tried to get in touch with me at all?
>

You are not reacting to Bugzilla tickets, and now not to pull requests
either. Why?
Previous tickets:
https://bugzilla.redhat.com/show_bug.cgi?id=1436394
https://bugzilla.redhat.com/show_bug.cgi?id=1442842

Bugzilla should always be the primary channel of communication.
Because of that you probably didn't even notice that mc.spec now looks
completely different than it did a year ago.

kloczek
-- 
Tomasz Kłoczko | LinkedIn: *http://lnkd.in/FXPWxH *


net-snmp, cmake, mc unresponsive packagers

2018-01-30 Thread Tomasz Kłoczko
https://fedoraproject.org/wiki/Policy_for_nonresponsive_package_maintainers

net-snmp
  https://bugzilla.redhat.com/show_bug.cgi?id=1529716
  https://src.fedoraproject.org/rpms/net-snmp/pull-request/2

cmake:
  https://src.fedoraproject.org/rpms/cmake/pull-request/2
  https://bugzilla.redhat.com/show_bug.cgi?id=1530574

mc
  https://bugzilla.redhat.com/show_bug.cgi?id=1536909
  https://src.fedoraproject.org/rpms/mc/pull-request/1

In the case of mc there has been no contact with the maintainer since my
last mc updates (May 2017).

kloczek
-- 
Tomasz Kłoczko | LinkedIn: *http://lnkd.in/FXPWxH *


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 07:03:17PM +, Terry Barnaby wrote:
> It looks like each RPC call takes about 0.5ms. Why do there need to be so
> many RPC calls for this? The OPEN call could set the attribs, no need for
> the later GETATTR or SETATTR calls.

The first SETATTR (which sets ctime and mtime to server's time) seems
unnecessary, maybe there's a client bug.

The second looks like tar's fault, strace shows it doing a utimensat()
on each file.  I don't know why or if that's optional.
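That per-file utimensat() call is easy to confirm on another machine
(illustrative command; the tarball path and mount point are placeholders):

```shell
# Show only utimensat calls while extracting; expect one per file.
strace -f -e trace=utimensat tar -xzf linux-4.14.15.tar.gz -C /mnt/nfs/tmp
```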

> Even the CLOSE could be integrated with the WRITE and taking this
> further OPEN could do OPEN, SETATTR, and some WRITE all in one.

We'd probably need some new protocol to make it safe to return from the
open system call before we've gotten the OPEN reply from the server.

Write delegations might save us from having to wait for the other
operations.

Taking a look at my own setup, I see the same calls taking about 1ms.
The drives can't do that, so I've got a problem somewhere too.

--b.


[EPEL-devel] Fedora EPEL 6 updates-testing report

2018-01-30 Thread updates
The following Fedora EPEL 6 Security updates need testing:
 Age  URL
 931  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2015-7168   
rubygem-crack-0.3.2-2.el6
 821  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2015-e2b4b5b2fb   
mcollective-2.8.4-1.el6
 792  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2015-35e240edd9   
thttpd-2.25b-24.el6
 403  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2016-e3e50897ac   
libbsd-0.8.3-2.el6
 132  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2017-4c76ddcc92   
libmspack-0.6-0.1.alpha.el6
  52  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2017-6aaee32b7e   
optipng-0.7.6-6.el6
  33  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2017-6e4ce19598   
monit-5.25.1-1.el6
  24  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2018-8c9006d462   
heimdal-7.5.0-1.el6
  18  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2018-752a7c9ad4   
rootsh-1.5.3-17.el6
  12  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2018-2ba6bfc5d8   
wordpress-4.9.2-1.el6
   6  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2018-1049ca4872   
GraphicsMagick-1.3.28-1.el6
   1  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2018-369a48191f   
clamav-0.99.3-1.el6


The following builds have been pushed to Fedora EPEL 6 updates-testing

epel-rpm-macros-6-17
ncdu-1.13-1.el6

Details about builds:



 epel-rpm-macros-6-17 (FEDORA-EPEL-2018-580a31cb75)
 Extra Packages for Enterprise Linux RPM macros

Update Information:

Add macros.ldconfig:
* `%ldconfig_scriptlets`
* `%ldconfig_post`
* `%ldconfig_postun`
* `%ldconfig`
https://fedoraproject.org/wiki/Changes/Removing_ldconfig_scriptlets
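A minimal example of how the new macro is used in a spec file (a sketch;
the library name is hypothetical):

```spec
# Old style:
#   %post -p /sbin/ldconfig
#   %postun -p /sbin/ldconfig

# New style; expands to the appropriate scriptlets (or to nothing on
# releases where ldconfig scriptlets are no longer needed):
%ldconfig_scriptlets

%files
%{_libdir}/libfoo.so.1*
```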




 ncdu-1.13-1.el6 (FEDORA-EPEL-2018-b96ae423cd)
 Text-based disk usage viewer

Update Information:

Update to 1.13. Changes in this version:
* Add "extended information" mode and `-e` flag
* Add file mode, modification time and uid/gid to info window with `-e`
* Add experimental color support and `--color` flag
* Add `-rr` option to disable shell spawning
* Remove directory nesting limit on file import
* Fix handling of interrupts during file import
* Fix undefined behavior that triggered crash on OS X

References:

  [ 1 ] Bug #1539676 - ncdu-1.13 is available
https://bugzilla.redhat.com/show_bug.cgi?id=1539676

___
epel-devel mailing list -- epel-devel@lists.fedoraproject.org
To unsubscribe send an email to epel-devel-le...@lists.fedoraproject.org


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread Jeremy Linton

Hi,

On 01/30/2018 01:03 PM, Terry Barnaby wrote:


Being a daredevil, I have used the NFS async option for 27 years 
without an issue on multiple systems :)


I have just mounted my ext4 disk with the same options you were using 
and the same NFS export options, and the speed here looks the same as 
before. As I can't wait 2+ hours, I'm just watching ksysguard, which 
shows a network rate of about 10 KBytes/s, and the directory on the 
server is growing in size very slowly.


This is using the current Fedora27 kernel 4.14.14-300.fc27.x86_64.

I will have a look at using wireshark to see if this shows anything.


This is a snippet from a wireshark trace of the NFS when untaring the 
linux kernel 4.14.15 sources into an NFSv4.2 mounted directory with 
"sync" option on my NFS server. The whole untar would take > 2 hours vs 
13 seconds direct to the disk. This is about 850 MBytes of 60k files. 
The following is a single, small file write.


No.   Time          Source         Destination    Protocol  Length  Info
1880  11.928600315  192.168.202.2  192.168.202.1  NFS       380     V4 Call (Reply In 1881) OPEN DH: 0xac0502f2/sysfs-c2port
1881  11.950329198  192.168.202.1  192.168.202.2  NFS       408     V4 Reply (Call In 1880) OPEN StateID: 0xaa72
1882  11.950446430  192.168.202.2  192.168.202.1  NFS       304     V4 Call (Reply In 1883) SETATTR FH: 0x825014ee
1883  11.972608880  192.168.202.1  192.168.202.2  NFS       336     V4 Reply (Call In 1882) SETATTR
1884  11.972754709  192.168.202.2  192.168.202.1  TCP       1516    785 → 2049 [ACK] Seq=465561 Ack=183381 Win=8990 Len=1448 TSval=1663691771 TSecr=3103357902 [TCP segment of a reassembled PDU]
1885  11.972763078  192.168.202.2  192.168.202.1  TCP       1516    785 → 2049 [ACK] Seq=467009 Ack=183381 Win=8990 Len=1448 TSval=1663691771 TSecr=3103357902 [TCP segment of a reassembled PDU]
1886  11.972979437  192.168.202.2  192.168.202.1  NFS       332     V4 Call (Reply In 1888) WRITE StateID: 0xafdf Offset: 0 Len: 2931
1887  11.973074490  192.168.202.1  192.168.202.2  TCP       68      2049 → 785 [ACK] Seq=183381 Ack=468721 Win=24557 Len=0 TSval=3103357902 TSecr=1663691771
1888  12.017153631  192.168.202.1  192.168.202.2  NFS       248     V4 Reply (Call In 1886) WRITE
1889  12.017338766  192.168.202.2  192.168.202.1  NFS       260     V4 Call (Reply In 1890) GETATTR FH: 0x825014ee
1890  12.017834411  192.168.202.1  192.168.202.2  NFS       312     V4 Reply (Call In 1889) GETATTR
1891  12.017961690  192.168.202.2  192.168.202.1  NFS       328     V4 Call (Reply In 1892) SETATTR FH: 0x825014ee
1892  12.039456634  192.168.202.1  192.168.202.2  NFS       336     V4 Reply (Call In 1891) SETATTR
1893  12.039536705  192.168.202.2  192.168.202.1  NFS       284     V4 Call (Reply In 1894) CLOSE StateID: 0xaa72
1894  12.039979528  192.168.202.1  192.168.202.2  NFS       248     V4 Reply (Call In 1893) CLOSE
1895  12.040077180  192.168.202.2  192.168.202.1  NFS       392     V4 Call (Reply In 1896) OPEN DH: 0xac0502f2/sysfs-cfq-target-latency
1896  12.061903798  192.168.202.1  192.168.202.2  NFS       408     V4 Reply (Call In 1895) OPEN StateID: 0xaa72


It looks like this takes about 100 ms to write this small file. With the 
approx 60k files in the archive this would take about 6000 secs, so it is 
in the two-hour ballpark of the untar that I am seeing.


It looks like OPEN takes 21 ms, SETATTR 22 ms, WRITE 44 ms, and the second 
SETATTR 21 ms: a lot of time ...
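As a sanity check on the arithmetic above, a short sketch (the per-operation times and the ~60k file count are taken from the trace discussion; nothing else is assumed):

```python
# Estimate total untar time when each small file pays the full
# OPEN + SETATTR + WRITE + SETATTR round-trip sequence seen in the trace.
per_file_s = 0.021 + 0.022 + 0.044 + 0.021  # seconds per file (~108 ms)
n_files = 60_000                            # approximate files in the kernel tarball

total_s = per_file_s * n_files
print(f"per file: {per_file_s * 1000:.0f} ms")
print(f"total:    {total_s:.0f} s (~{total_s / 3600:.1f} h)")
# ~6480 s, i.e. the observed two-hour ballpark
```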


The following is for an "async" mount:

No. Time   Source Destination   Protocol Length Info
   37393 7.630012608    192.168.202.2 192.168.202.1 NFS 396
V4 Call (Reply In 37394) OPEN DH: 0x1f828ac9/vidioc-dbg-g-chip-info.rst
   37394 7.630488451    192.168.202.1 192.168.202.2 NFS 408
V4 Reply (Call In 37393) OPEN StateID: 0xaa72
   37395 7.630525117    192.168.202.2 192.168.202.1 NFS 304
V4 Call (Reply In 37396) SETATTR FH: 0x0f65c554
   37396 7.630980560    192.168.202.1 192.168.202.2 NFS 336
V4 Reply (Call In 37395) SETATTR
   37397 7.631035171    192.168.202.2 192.168.202.1 TCP 
1516   785 → 2049 [ACK] Seq=13054241 Ack=3620329 Win=8990 Len=1448 
TSval=1664595527 TSecr=3104261711 [TCP segment of a reassembled PDU]
   37398 7.631038994    192.168.202.2 192.168.202.1 TCP 
1516   785 → 2049 [ACK] Seq=13055689 Ack=3620329 Win=8990 Len=1448 
TSval=1664595527 TSecr=3104261711 [TCP segment of a reassembled PDU]
   37399 7.631042228    192.168.202.2 192.168.202.1 TCP 
1516   785 → 2049 [ACK] Seq=13057137 Ack=3620329 Win=8990 Len=1448 
TSval=1664595527 TSecr=3104261711 [TCP segment of a reassembled PDU]
   37400 7.631195554    192.168.202.2 192.168.202.1 NFS 448
V4 Call (Reply In 37402) WRITE StateID: 0xafdf Offset: 0 Len: 4493
   37401 7.631277423    192.168.202.1 192.168.202.2 TCP 
68 2049 → 785 [ACK] Seq=3620329 

Re: GCC broken in rawhide?

2018-01-30 Thread Florian Weimer

On 01/30/2018 06:59 PM, Charalampos Stratakis wrote:

Noting here that python2 now segfaults when using profile-guided optimizations 
on x86_64 since the latest gcc update.


This is a bug in Python 2.7:

  https://bugzilla.redhat.com/show_bug.cgi?id=1540316

We're working on fixing it.

Thanks,
Florian
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: Package Wishlist (Was: Working on a Dracut bug?)

2018-01-30 Thread inderaue23

> Robert-André Mauchin wrote on 30 January 2018 at 19:29:
> 
> 
> On Monday, 29 January 2018 at 10:06:10 CET, inderau...@arcor.de wrote:
> > Hi Matthew, that would be great. But I have no technical background to
> > realise that :(
> > I'm interested in GNU Ring's communication solution.
> > https://ring.cx/en
> > https://ring.cx/en/about/practical
> > 
> 
> They seem to provide their own RPM for Fedora 27:
> https://ring.cx/en/download/gnu-linux
> 
> > sudo dnf config-manager --add-repo https://dl.ring.cx/ring-nightly/
> fedora_27/ring-nightly.repo
> > sudo dnf install ring
> 
> What else do you need?

I would love to see a way for people to become aware of this piece of GNU 
software when running GNOME Software.

> 
> 
> 
> 
> ___
> devel mailing list -- devel@lists.fedoraproject.org
> To unsubscribe send an email to devel-le...@lists.fedoraproject.org
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: Package Wishlist (Was: Working on a Dracut bug?)

2018-01-30 Thread Robert-André Mauchin
On Monday, 29 January 2018 at 10:06:10 CET, inderau...@arcor.de wrote:
> Hi Matthew, that would be great. But I have no technical background to
> realise that :(
> I'm interested in GNU Ring's communication solution.
> https://ring.cx/en
> https://ring.cx/en/about/practical
> 

They seem to provide their own RPM for Fedora 27:
https://ring.cx/en/download/gnu-linux

> sudo dnf config-manager --add-repo https://dl.ring.cx/ring-nightly/
fedora_27/ring-nightly.repo
> sudo dnf install ring

What else do you need?




___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread Terry Barnaby

On 30/01/18 17:54, J. Bruce Fields wrote:

On Tue, Jan 30, 2018 at 12:31:22PM -0500, J. Bruce Fields wrote:

On Tue, Jan 30, 2018 at 04:49:41PM +, Terry Barnaby wrote:

I have just tried running the untar on our work systems. These are again
Fedora27 but newer hardware.
I set one of the servers NFS exports to just rw (removed the async option in
/etc/exports and ran exportfs -arv).
Remounted this NFS file system on a Fedora27 client and re-ran the test. I
have only waited 10 mins but the overall network data rate is on the order of
0.1 MBytes/sec, so it looks like it will be a multi-hour job, as at home.
So I have two completely separate systems with the same performance over
NFS.
With your NFS "sync" test are you sure you set the "sync" mode on the server
and re-exported the file systems ?

Not being a daredevil, I use "sync" by default:

# exportfs -v /export 
(rw,sync,wdelay,hide,no_subtree_check,sec=sys,insecure,no_root_squash,no_all_squash)

For the "async" case I changed the options and actually rebooted, yes.

The filesystem is:

/dev/mapper/export-export on /export type ext4 
(rw,relatime,seclabel,nodelalloc,stripe=32,data=journal)

(I think data=journal is the only non-default, and I don't remember why
I chose that.)

Hah, well, with data=ordered (the default) the same untar (with "sync"
export) took 15m38s.  So... that probably wasn't an accident.

It may be irresponsible for me to guess given the state of my ignorance
about ext4 journaling, but perhaps writing everything to the journal and
delaying writing it out to its real location as long as possible allows
some sort of tradeoff between bandwidth and seeks that helps with this
sync-heavy workload.

--b.


Being a daredevil, I have used the NFS async option for 27 years without 
an issue on multiple systems :)


I have just mounted my ext4 disk with the same options you were using 
and the same NFS export options, and the speed here looks the same as I 
had previously. As I can't wait 2+ hours, I'm just looking at 
ksysguard, which is showing a network rate of about 10 KBytes/s while the 
directory on the server grows in size very, very slowly.


This is using the current Fedora27 kernel 4.14.14-300.fc27.x86_64.

I will have a look at using wireshark to see if this shows anything.
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
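For anyone reproducing this, the server-side sync/async switch under test lives in /etc/exports; a minimal sketch (the export path is a placeholder, the client network mirrors the addresses in the trace), re-exported with `exportfs -arv` after editing:

```
# /etc/exports -- illustrative only; use one option set per export
/export  192.168.202.0/24(rw,sync)    # default: server commits data before replying
#/export 192.168.202.0/24(rw,async)   # faster, but data can be lost on a server crash
```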


Re: Wayland is a disaster

2018-01-30 Thread Howard Howell
On Tue, 2018-01-30 at 10:49 -0500, Przemek Klosowski wrote:
> On 01/29/2018 09:33 PM, mcatanz...@gnome.org wrote:
> > Please see my earlier post in this thread regarding how to get a 
> > stacktrace out of coredumpctl
> 
> This is a great debugging harness; thanks for pointing it out as I 
> didn't know about it.
> 
> I am currently reporting my pan-crashing-Wayland issue on
> gnome.org---but in the true Heisenbug fashion it stopped crashing, so I
> can't reproduce it now. It was crashing almost every time I used the pan
> newsreader (the three crash dumps I submitted were just a few days apart
> in my coredumpctl records), but now it's as solid as a rock.
> 
> I have to say that the GNOME developers responded almost
> immediately, 
> and I feel stupid for not being able to demonstrate replication. I
> am 
> really curious about the nature of this bug: it must be some weird 
> interaction---perhaps related to memory use or something like that.
> ___
> devel mailing list -- devel@lists.fedoraproject.org
> To unsubscribe send an email to devel-le...@lists.fedoraproject.org

I had the same experience.  I loaded all the tools that were
recommended, and the crashes seem to have stopped (that is, no crashes
since I loaded all the tools). Maybe there is a hidden fix in the debug
stuff.

Regards,
Les H
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
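For reference, the coredumpctl workflow mentioned earlier in this thread looks roughly like this (the program name and PID are just examples; exact output varies with the systemd version):

```
$ coredumpctl list pan        # show recorded crashes for a given program
$ coredumpctl info 12345      # metadata for one crash, selected by PID
$ coredumpctl gdb 12345       # open the matching core dump in gdb
(gdb) bt full                 # full backtrace to attach to the bug report
```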


[Bug 1535735] perl-MongoDB-v1.8.1 is available

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1535735



--- Comment #7 from Fedora Update System  ---
perl-MongoDB-1.8.1-1.fc27 has been pushed to the Fedora 27 stable repository.
If problems still persist, please make note of it in this bug report.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
perl-devel mailing list -- perl-devel@lists.fedoraproject.org
To unsubscribe send an email to perl-devel-le...@lists.fedoraproject.org


[Bug 1392472] root is not built for ppc64

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1392472

Fedora Update System  changed:

   What|Removed |Added

   Fixed In Version|root-6.12.04-1.fc26 |root-6.12.04-1.fc26
   ||root-6.12.04-1.fc27



--- Comment #10 from Fedora Update System  ---
root-6.12.04-1.fc27 has been pushed to the Fedora 27 stable repository. If
problems still persist, please make note of it in this bug report.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
perl-devel mailing list -- perl-devel@lists.fedoraproject.org
To unsubscribe send an email to perl-devel-le...@lists.fedoraproject.org


[Bug 1392478] root is not built for ppc64le

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1392478

Fedora Update System  changed:

   What|Removed |Added

   Fixed In Version|root-6.12.04-1.fc26 |root-6.12.04-1.fc26
   ||root-6.12.04-1.fc27



--- Comment #9 from Fedora Update System  ---
root-6.12.04-1.fc27 has been pushed to the Fedora 27 stable repository. If
problems still persist, please make note of it in this bug report.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
perl-devel mailing list -- perl-devel@lists.fedoraproject.org
To unsubscribe send an email to perl-devel-le...@lists.fedoraproject.org


Re: RPM packaging and ldconfig handling

2018-01-30 Thread Tomasz Kłoczko
On 30 January 2018 at 17:21, Tim Landscheidt  wrote:
[..]

> You can help move this forward by publishing the script(s)
> you used (or the patches that still apply cleanly if you
> wrote them manually).
>

There is no script of any kind, because the unstructured
formatting/indentation of ALL Fedora specs makes writing such scripts
IMPOSSIBLE!!!
Because of this, MOST of the changes for each package need to be made
MANUALLY.
Is that CLEAR NOW?
How many times must I repeat this to get through the kind of "mental
helmet" that people like you wear?

The whole idea of the series of mass changes was to align all specs to a
form where, after many such iterations, it would become possible to make
more and more other changes using simple sed regexps. NOW it is NOT
POSSIBLE. Is that clear now?

It is a classic chicken-and-egg problem, and by proposing a series of mass
cleanups I'm trying to cut this IMPOSSIBILITY loop.

Think about why Igor has already done all the changes in a few batches,
just going across all specs ALPHABETICALLY.
Answer: because each spec needs to be checked and modified MANUALLY!!!

And just for the record: again you did not answer my questions, ignoring
my arguments and questions en bloc and turning this thread again into an
"ad hominem" conversation.
Again: my humble person has nothing to do with the subject called "Fedora
methodology".

kloczek
-- 
Tomasz Kłoczko | LinkedIn: *http://lnkd.in/FXPWxH *
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
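To make the chicken-and-egg point concrete, here is a sketch (the sample spec snippet is invented for the example) of the kind of sed pass that only works once every spec uses one identical spelling of the scriptlet:

```shell
# A uniform layout would let one regexp drop ldconfig scriptlets that the
# proposed cleanup makes redundant.  Any variation in whitespace or style
# defeats the pattern, which is exactly the problem described above.
cat > sample.spec <<'EOF'
%post -p /sbin/ldconfig
%postun -p /sbin/ldconfig

%files
%{_libdir}/libfoo.so.1
EOF

# Delete the two scriptlet lines only when they match this exact form.
sed -i \
    -e '/^%post -p \/sbin\/ldconfig$/d' \
    -e '/^%postun -p \/sbin\/ldconfig$/d' \
    sample.spec
cat sample.spec
```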


[EPEL-devel] [Fedocal] Reminder meeting : EPEL Steering Committee

2018-01-30 Thread smooge
Dear all,

You are kindly invited to the meeting:
   EPEL Steering Committee on 2018-01-31 from 18:00:00 to 19:00:00 GMT
   At fedora-meet...@irc.freenode.net

The meeting will be about:
The EPEL Steering Committee will have a weekly meeting to cover current tasks 
and problems needed to keep EPEL going.


Source: https://apps.fedoraproject.org/calendar/meeting/8724/

___
epel-devel mailing list -- epel-devel@lists.fedoraproject.org
To unsubscribe send an email to epel-devel-le...@lists.fedoraproject.org


Re: GCC broken in rawhide?

2018-01-30 Thread Charalampos Stratakis


- Original Message -
> From: "Vít Ondruch" 
> To: devel@lists.fedoraproject.org
> Sent: Tuesday, January 30, 2018 1:14:43 PM
> Subject: Re: GCC broken in rawhide?
> 
> 
> 
> Dne 30.1.2018 v 11:16 Jakub Jelinek napsal(a):
> > On Tue, Jan 30, 2018 at 11:11:02AM +0100, Ralf Corsepius wrote:
> >> On 01/30/2018 10:00 AM, Florian Weimer wrote:
> >>> On 01/30/2018 09:54 AM, Ralf Corsepius wrote:
> > annobin.spec now uses:
> >
> >    %undefine _annotated_build
> >
> > so at least the circular dependency is no longer there.  You still
> > have to remember to rebuild it when a new version of GCC comes out
> > however.
>  ... which apparently has just happened.
> >>> Yes, Fedora 28 will use GCC 8.
> >>>
>  Consequences are affecting all released versions of Fedora, because
>  it's impossible to apply bugfix updates to Fedora < rawhide, due to
>  the GCC-chaos in rawhide.
> >>> Please provide more context when reporting issues, otherwise we have a
> >>> hard time helping you.
> >> Seemingly hundreds of packages currently carry broken deps, dnf is
> >> malfunctioning etc.
> > That is what we have the mass rebuild scheduled for (AFAIK this week).
> 
> I'd say that when major GCC lands in Fedora, what happened yesterday
> [1], it would not hurt to send a note to fedora-devel.
> 
> Vít
> 
> 
> [1] https://koji.fedoraproject.org/koji/buildinfo?buildID=1021625
> ___
> devel mailing list -- devel@lists.fedoraproject.org
> To unsubscribe send an email to devel-le...@lists.fedoraproject.org
> 

Agreed.

Noting here that python2 now segfaults when using profile-guided 
optimizations on x86_64 since the latest gcc update. And that is on top of 
the breakage caused by the Sun RPC removal and the change to a new libnsl 
within glibc, the replacement of libcrypt with libxcrypt, and the -z defs 
flag. That applies to python3 as well.

And this is due to too many invasive changes at the same time.

koschei is good at notifying us of those failures; however, when so many 
core components of the distro get rebased or alter functionality at the 
same time, it adds unnecessary overhead for packagers trying to pinpoint 
the exact cause of failures and which package caused them.

-- 
Regards,

Charalampos Stratakis
Software Engineer
Python Maintenance Team, Red Hat
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: Package Wishlist (Was: Working on a Dracut bug?)

2018-01-30 Thread inderaue23
so if there is anything you could do/initiate, that would be really wonderful!
Anna
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 12:31:22PM -0500, J. Bruce Fields wrote:
> On Tue, Jan 30, 2018 at 04:49:41PM +, Terry Barnaby wrote:
> > I have just tried running the untar on our work systems. These are again
> > Fedora27 but newer hardware.
> > I set one of the servers NFS exports to just rw (removed the async option in
> > /etc/exports and ran exportfs -arv).
> > Remounted this NFS file system on a Fedora27 client and re-ran the test. I
> > have only waited 10 mins but the overall network data rate is on the order of
> > 0.1 MBytes/sec, so it looks like it will be a multi-hour job, as at home.
> > So I have two completely separate systems with the same performance over
> > NFS.
> > With your NFS "sync" test are you sure you set the "sync" mode on the server
> > and re-exported the file systems ?
> 
> Not being a daredevil, I use "sync" by default:
> 
>   # exportfs -v /export 
> (rw,sync,wdelay,hide,no_subtree_check,sec=sys,insecure,no_root_squash,no_all_squash)
> 
> For the "async" case I changed the options and actually rebooted, yes.
> 
> The filesystem is:
> 
>   /dev/mapper/export-export on /export type ext4 
> (rw,relatime,seclabel,nodelalloc,stripe=32,data=journal) 
> 
> (I think data=journal is the only non-default, and I don't remember why
> I chose that.)

Hah, well, with data=ordered (the default) the same untar (with "sync"
export) took 15m38s.  So... that probably wasn't an accident.

It may be irresponsible for me to guess given the state of my ignorance
about ext4 journaling, but perhaps writing everything to the journal and
delaying writing it out to its real location as long as possible allows
some sort of tradeoff between bandwidth and seeks that helps with this
sync-heavy workload.

--b.
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
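The sync-heavy cost being measured can be mimicked on a local filesystem; a rough sketch (file count and size are arbitrary, and absolute numbers depend entirely on the disk and filesystem) contrasting per-file fsync with purely buffered writes, the same trade-off the sync/async export makes on the server:

```python
import os
import tempfile
import time

def untar_like(n_files: int, do_fsync: bool) -> float:
    """Write n_files small files, optionally fsync'ing each one,
    loosely mimicking a tar extract onto a sync vs async export."""
    with tempfile.TemporaryDirectory() as d:
        start = time.perf_counter()
        for i in range(n_files):
            with open(os.path.join(d, f"f{i}"), "wb") as f:
                f.write(b"x" * 3000)      # ~3 KB, like the file in the trace
                if do_fsync:
                    f.flush()
                    os.fsync(f.fileno())  # force data to stable storage
        return time.perf_counter() - start

buffered_s = untar_like(200, do_fsync=False)
synced_s = untar_like(200, do_fsync=True)
print(f"buffered: {buffered_s:.3f}s   fsync per file: {synced_s:.3f}s")
```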


Re: F28 Self Contained Change: Atomic, Cloud and Docker images for s390x

2018-01-30 Thread Daniel Walsh

On 01/30/2018 11:50 AM, Jan Kurik wrote:

= Proposed Self Contained Change: Atomic, Cloud and Docker images for s390x =
https://fedoraproject.org/wiki/Changes/Atomic_Cloud_and_Docker_images_for_s390x

Change owner(s):
* Sinny Kumari 


This change is to bring s390x architecture closer to other Fedora
architectures by adding widely used Fedora variants. This includes
docker images, Atomic Host (iso, qcow2 and raw format) and regular
Cloud Images (qcow2 and raw format).


== Detailed Description ==
We already ship Atomic, Cloud and Docker images on the other 64-bit
Fedora-supported architectures: aarch64, x86_64 and ppc64le. As of Fedora
27, s390x is part of the primary koji build system. Currently, we only
ship the Server and Everything variants for s390x, so our next step should
be to add the missing Fedora variants that users will find useful. This
means shipping Atomic, Cloud and Docker images in Fedora for s390x as
well.


== Scope ==
* Proposal owners:
These are isolated changes which don't impact the existing Fedora 28
release plan on s390x. To have these changes ready to ship in Fedora
28, we mainly require s390x koji builders configured to run these
composes, changes in pungi configuration [
https://pagure.io/pungi-fedora/pull-request/496 ] to enable the
additional compose and fixing s390x specific issues encountered when
compose fails to run.

* Other developers:
Changes in Fedora infrastructure configs/scripts will be required to
have s390x builders configured to run additional composes. Fedora
Infrastructure issue [
https://pagure.io/fedora-infrastructure/issue/6659 ] has been filed to
keep track of required changes to be done.

* Release engineering:
#Releng 7286: https://pagure.io/releng/issue/7286

* Policies and guidelines:
N/A (not a System Wide Change)

* Trademark approval:
N/A (not needed for this Change)


You mean OCI Images...
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 04:49:41PM +, Terry Barnaby wrote:
> I have just tried running the untar on our work systems. These are again
> Fedora27 but newer hardware.
> I set one of the servers NFS exports to just rw (removed the async option in
> /etc/exports and ran exportfs -arv).
> Remounted this NFS file system on a Fedora27 client and re-ran the test. I
> have only waited 10 mins but the overall network data rate is on the order of
> 0.1 MBytes/sec, so it looks like it will be a multi-hour job, as at home.
> So I have two completely separate systems with the same performance over
> NFS.
> With your NFS "sync" test are you sure you set the "sync" mode on the server
> and re-exported the file systems ?

Not being a daredevil, I use "sync" by default:

# exportfs -v /export 
(rw,sync,wdelay,hide,no_subtree_check,sec=sys,insecure,no_root_squash,no_all_squash)

For the "async" case I changed the options and actually rebooted, yes.

The filesystem is:

/dev/mapper/export-export on /export type ext4 
(rw,relatime,seclabel,nodelalloc,stripe=32,data=journal) 

(I think data=journal is the only non-default, and I don't remember why
I chose that.)

--b.
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


[Bug 1392472] root is not built for ppc64

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1392472

Fedora Update System  changed:

   What|Removed |Added

 Status|ON_QA   |CLOSED
   Fixed In Version||root-6.12.04-1.fc26
 Resolution|--- |ERRATA
Last Closed||2018-01-30 12:31:59



--- Comment #9 from Fedora Update System  ---
root-6.12.04-1.fc26 has been pushed to the Fedora 26 stable repository. If
problems still persist, please make note of it in this bug report.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
perl-devel mailing list -- perl-devel@lists.fedoraproject.org
To unsubscribe send an email to perl-devel-le...@lists.fedoraproject.org


[Bug 1535735] perl-MongoDB-v1.8.1 is available

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1535735



--- Comment #6 from Fedora Update System  ---
perl-MongoDB-1.8.1-1.fc26 has been pushed to the Fedora 26 stable repository.
If problems still persist, please make note of it in this bug report.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
perl-devel mailing list -- perl-devel@lists.fedoraproject.org
To unsubscribe send an email to perl-devel-le...@lists.fedoraproject.org


[Bug 1392475] perl-Alien-ROOT not available on ppc64 because root is not there

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1392475
Bug 1392475 depends on bug 1392472, which changed state.

Bug 1392472 Summary: root is not built for ppc64
https://bugzilla.redhat.com/show_bug.cgi?id=1392472

   What|Removed |Added

 Status|ON_QA   |CLOSED
 Resolution|--- |ERRATA



-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
perl-devel mailing list -- perl-devel@lists.fedoraproject.org
To unsubscribe send an email to perl-devel-le...@lists.fedoraproject.org


[Bug 1392478] root is not built for ppc64le

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1392478

Fedora Update System  changed:

   What|Removed |Added

 Status|ON_QA   |CLOSED
   Fixed In Version||root-6.12.04-1.fc26
 Resolution|--- |ERRATA
Last Closed||2018-01-30 12:31:45



--- Comment #8 from Fedora Update System  ---
root-6.12.04-1.fc26 has been pushed to the Fedora 26 stable repository. If
problems still persist, please make note of it in this bug report.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
perl-devel mailing list -- perl-devel@lists.fedoraproject.org
To unsubscribe send an email to perl-devel-le...@lists.fedoraproject.org


[Bug 1392479] perl-Alien-ROOT not available on ppc64le because root is not there

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1392479
Bug 1392479 depends on bug 1392478, which changed state.

Bug 1392478 Summary: root is not built for ppc64le
https://bugzilla.redhat.com/show_bug.cgi?id=1392478

   What|Removed |Added

 Status|ON_QA   |CLOSED
 Resolution|--- |ERRATA



-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
perl-devel mailing list -- perl-devel@lists.fedoraproject.org
To unsubscribe send an email to perl-devel-le...@lists.fedoraproject.org


[Bug 1530928] perl-Time-HiRes-1.9751 is available

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1530928



--- Comment #9 from Fedora Update System  ---
perl-Time-HiRes-1.9753-1.fc26 has been pushed to the Fedora 26 stable
repository. If problems still persist, please make note of it in this bug
report.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
perl-devel mailing list -- perl-devel@lists.fedoraproject.org
To unsubscribe send an email to perl-devel-le...@lists.fedoraproject.org


[Bug 1528842] perl-Time-HiRes-1.9750 is available

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1528842



--- Comment #13 from Fedora Update System  ---
perl-Time-HiRes-1.9753-1.fc26 has been pushed to the Fedora 26 stable
repository. If problems still persist, please make note of it in this bug
report.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
perl-devel mailing list -- perl-devel@lists.fedoraproject.org
To unsubscribe send an email to perl-devel-le...@lists.fedoraproject.org


[Bug 1533691] perl-Time-HiRes-1.9753 is available

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1533691



--- Comment #7 from Fedora Update System  ---
perl-Time-HiRes-1.9753-1.fc26 has been pushed to the Fedora 26 stable
repository. If problems still persist, please make note of it in this bug
report.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
perl-devel mailing list -- perl-devel@lists.fedoraproject.org
To unsubscribe send an email to perl-devel-le...@lists.fedoraproject.org


[Bug 1531332] perl-Time-HiRes-1.9752 is available

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1531332



--- Comment #9 from Fedora Update System  ---
perl-Time-HiRes-1.9753-1.fc26 has been pushed to the Fedora 26 stable
repository. If problems still persist, please make note of it in this bug
report.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
perl-devel mailing list -- perl-devel@lists.fedoraproject.org
To unsubscribe send an email to perl-devel-le...@lists.fedoraproject.org


[Bug 1532914] perl-Socket-2.025 is available

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1532914



--- Comment #7 from Fedora Update System  ---
perl-Socket-2.025-1.fc26 has been pushed to the Fedora 26 stable repository. If
problems still persist, please make note of it in this bug report.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
perl-devel mailing list -- perl-devel@lists.fedoraproject.org
To unsubscribe send an email to perl-devel-le...@lists.fedoraproject.org


Re: RPM packaging and ldconfig handling

2018-01-30 Thread Tim Landscheidt
Tomasz Kłoczko  wrote:

> […]

> Who said that I'm demanding something?
> Look one more time at https://pagure.io/packaging-committee/issue/736
> Igor took this task VOLUNTARILY and started working on the necessary specs
> before I had delivered my batch of patches.
> When I found that the modifications already made were invalidating many of
> the patches I had prepared, there was no sense in me staying involved in
> helping to finish this.
> About 20% is still not finished.
> Just please answer these questions:
> - Who will finish this?
> - Why was it done so badly?
> - What is the sense of submitting more such mass changes if there is a good
> chance they will not be finished either? (and now you are telling me that
> I'm the bad guy because I've been showing those "naked pictures" to other
> people)

You can help move this forward by publishing the script(s)
you used (or the patches that still apply cleanly if you
wrote them manually).

Also, just to state the obvious for most: this is some tidying up.
It's good if it's done, but it is not blocking anything.  If someone
has already patched "only" 20% of the specs, that is good, not bad,
because the work to be done has decreased by a fifth.

Tim
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread Terry Barnaby

On 30/01/18 16:22, J. Bruce Fields wrote:

On Tue, Jan 30, 2018 at 03:29:41PM +, Terry Barnaby wrote:

On 30/01/18 15:09, J. Bruce Fields wrote:

By comparison on my little home server (Fedora, ext4, a couple WD Black
1TB drives), with sync, that untar takes is 7:44, about 8ms/file.

Ok, that is far more reasonable, so something is up on my systems :)
What speed do you get with the server export set to async ?

I tried just now and got 4m2s.

The drives probably still have to do a seek or two per create, the
difference now is that we don't have to wait for one create to start the
next one, so the drives can work in parallel.

So given that I'm striping across two drives, I *think* it makes sense
that I'm getting about double the performance with the async export
option.

But that doesn't explain the difference between async and local
performance (22s when I tried the same untar directly on the server, 25s
when I included a final sync in the timing).  And your numbers are a
complete mystery.
I have just tried running the untar on our work systems. These are again 
Fedora 27, but newer hardware.
I set one of the server's NFS exports to just rw (removed the async 
option in /etc/exports and ran exportfs -arv).
Remounted this NFS file system on a Fedora 27 client and re-ran the test. 
I have only waited 10 minutes, but the overall network data rate is on the 
order of 0.1 MBytes/sec, so it looks like it will be a multi-hour job, 
as at home.
So I have two completely separate systems with the same performance over 
NFS.
With your NFS "sync" test, are you sure you set the "sync" mode on the 
server and re-exported the file systems ?
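
For reference, the knob being discussed is the server-side export option in
/etc/exports, not the client's mount option. A minimal sketch (the export path
and hostname here are hypothetical examples):

```shell
# /etc/exports -- 'async' lets the server reply before data reaches disk
# (much faster small-file creates, but unsafe across a server crash)
/data2/tmp  client.example.com(rw,async,no_subtree_check)

# apply the change without restarting the NFS server, then verify
# which options are actually active on each export:
exportfs -arv
exportfs -v
```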




--b.


What's the disk configuration and what filesystem is this?

Those tests above were to a single: SATA Western Digital Red 3TB, WDC
WD30EFRX-68EUZN0 using ext4.
Most of my tests have been to software RAID1 SATA disks, Western Digital Red
2TB on one server and Western Digital RE4 2TB WDC WD2003FYYS-02W0B1 on
another quad core Xeon server all using ext4 and all having plenty of RAM.
All on stock Fedora27 (both server and client) updated to date.


Is it really expected for NFS to be this bad these days with a reasonably
typical operation and are there no other tuning parameters that can help  ?

It's expected that the performance of single-threaded file creates will
depend on latency, not bandwidth.

I believe high-performance servers use battery backed write caches with
storage behind them that can do lots of IOPS.

(One thing I've been curious about is whether you could get better
performance cheap on this kind of workload ext3/4 striped across a few
drives and an external journal on SSD.  But when I experimented with
that a few years ago I found synchronous write latency wasn't much
better.  I didn't investigate why not, maybe that's just the way SSDs
are.)

--b.





Re: [HEADS UP] Removal of systemd-units

2018-01-30 Thread Nathanael Noblet
Apologies, I no longer have the email with the list of packages and 
maintainers. However I’ve updated dspam in rawhide.

— 
Nathanael


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 03:29:41PM +, Terry Barnaby wrote:
> On 30/01/18 15:09, J. Bruce Fields wrote:
> > By comparison on my little home server (Fedora, ext4, a couple WD Black
> > 1TB drives), with sync, that untar takes is 7:44, about 8ms/file.
> Ok, that is far more reasonable, so something is up on my systems :)
> What speed do you get with the server export set to async ?

I tried just now and got 4m2s.

The drives probably still have to do a seek or two per create, the
difference now is that we don't have to wait for one create to start the
next one, so the drives can work in parallel.

So given that I'm striping across two drives, I *think* it makes sense
that I'm getting about double the performance with the async export
option.

But that doesn't explain the difference between async and local
performance (22s when I tried the same untar directly on the server, 25s
when I included a final sync in the timing).  And your numbers are a
complete mystery.

--b.

> > 
> > What's the disk configuration and what filesystem is this?
> Those tests above were to a single: SATA Western Digital Red 3TB, WDC
> WD30EFRX-68EUZN0 using ext4.
> Most of my tests have been to software RAID1 SATA disks, Western Digital Red
> 2TB on one server and Western Digital RE4 2TB WDC WD2003FYYS-02W0B1 on
> another quad core Xeon server all using ext4 and all having plenty of RAM.
> All on stock Fedora27 (both server and client) updated to date.
> 
> > 
> > > Is it really expected for NFS to be this bad these days with a reasonably
> > > typical operation and are there no other tuning parameters that can help  
> > > ?
> > It's expected that the performance of single-threaded file creates will
> > depend on latency, not bandwidth.
> > 
> > I believe high-performance servers use battery backed write caches with
> > storage behind them that can do lots of IOPS.
> > 
> > (One thing I've been curious about is whether you could get better
> > performance cheap on this kind of workload ext3/4 striped across a few
> > drives and an external journal on SSD.  But when I experimented with
> > that a few years ago I found synchronous write latency wasn't much
> > better.  I didn't investigate why not, maybe that's just the way SSDs
> > are.)
> > 
> > --b.
> 
> 


Re: Proposed Fedora packaging guideline: More Go packaging

2018-01-30 Thread Neal Gompa
On Tue, Jan 30, 2018 at 10:11 AM,   wrote:
> Hi,
>
> Now the technical PR is submitted
> https://src.fedoraproject.org/rpms/go-srpm-macros/pull-request/1
>
> and waiting for action from the go-srpm-macros maintainers, I took (quite a 
> long) time to refresh and flesh out the corresponding packaging guidelines 
> proposal. It should be fairly complete now:
>
> https://fedoraproject.org/wiki/More_Go_packaging
>
> I'd appreciate review to check if I have forgotten an important case, if 
> people understand the text, if they have enhancements or corrections to 
> propose, and so on.
>
> Then I will push it FPC side again.
>
> Actual practice should be fairly simple and self-explanatory. The proposal's 
> length can be scary, but that's because it documents all kinds of corner cases 
> that previously required digging through specs and mailing lists to find 
> resolution examples. The basic Go packaging skeleton will be sufficient in 
> most cases without requiring you to read any documentation.
>

The only thing I see that might be missing is autogenerating
bundled(golang()) Provides when a vendor tree exists (with the
appropriate automatic filters on Requires).



-- 
真実はいつも一つ!/ Always, there's only one truth!


Re: RPM packaging and ldconfig handling

2018-01-30 Thread Tomasz Kłoczko
On 30 January 2018 at 12:32, Matthew Miller 
wrote:

> On Tue, Jan 30, 2018 at 03:07:22AM +, Tomasz Kłoczko wrote:
> > OK. So if I'm nice, explicit, and concise, will that make Igor finish
> > at least one mass change before starting another one?
> > Igor, could you please confirm the above?
>
> Seriously, quit it. Igor doesn't work for you. Igor doesn't work for
> me. Yet, he's actively working to make Fedora better and to improve our
> packaging. That's awesome. An appropriate response might be: "Awesome!
> Thanks for kicking off this effort. I see there's work to be done yet.
> How can I help?"
>
> Demanding that other people do things isn't the way we do things.
>

Who said that I'm demanding something?
Look one more time at https://pagure.io/packaging-committee/issue/736
Igor took this task VOLUNTARILY and started working on the necessary specs
before I had delivered my batch of patches.
When I found that the modifications already made were invalidating many of
the patches I had prepared, there was no sense in me staying involved in
helping to finish this.
About 20% is still not finished.
Just please answer these questions:
- Who will finish this?
- Why was it done so badly?
- What is the sense of submitting more such mass changes if there is a good
chance they will not be finished either? (and now you are telling me that
I'm the bad guy because I've been showing those "naked pictures" to other
people)

kloczek
-- 
Tomasz Kłoczko | LinkedIn: *http://lnkd.in/FXPWxH *


Re: Wyland is a disaster

2018-01-30 Thread Adam Williamson
On Tue, 2018-01-30 at 10:49 -0500, Przemek Klosowski wrote:
> On 01/29/2018 09:33 PM, mcatanz...@gnome.org wrote:
> > Please see my earlier post in this thread regarding how to get a 
> > stacktrace out of coredumpctl
> 
> This is a great debugging harness; thanks for pointing it out as I 
> didn't know about it.
> 
> I am currently reporting my pan-crashing-Wayland crashing issue on 
> gnome.org---but in the true Heisenbug fashion it stopped crashing, so I 
> can't reproduce it now. It was crashing almost every time I used the pan 
> newsreader (the three crash dumps I submitted were just few days apart 
> in my coredumpctl records), but now they're as solid as rock.

IIRC we did fix at least one of the more commonly encountered crashers
recently, so you might've been running into that one. There are still a
few out there, though.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net


Re: Wyland is a disaster

2018-01-30 Thread mcatanzaro
On Tue, Jan 30, 2018 at 9:49 AM, Przemek Klosowski 
 wrote:
I am currently reporting my pan-crashing-Wayland crashing issue on 
gnome.org---but in the true Heisenbug fashion it stopped crashing, so 
I can't reproduce it now. It was crashing almost every time I used 
the pan newsreader (the three crash dumps I submitted were just few 
days apart in my coredumpctl records), but now they're as solid as 
rock.


That's the great thing about coredumpctl: as long as the issue occurs 
again sometime in the future, the backtrace will be there waiting for 
you at that time, so it's not a big deal that you can't reproduce 
immediately.
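
For anyone else who didn't know about this harness, the workflow Michael is
describing looks roughly like the following (an illustrative sketch; it assumes
systemd-coredump is active and the relevant debuginfo packages are installed):

```shell
coredumpctl list pan   # show all recorded crashes of the 'pan' binary
coredumpctl info       # metadata and backtrace summary of the newest crash
coredumpctl gdb        # open the newest core in gdb; then run 'bt full'
```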


Michael




Re: Wyland is a disaster

2018-01-30 Thread Przemek Klosowski

On 01/29/2018 09:33 PM, mcatanz...@gnome.org wrote:
Please see my earlier post in this thread regarding how to get a 
stacktrace out of coredumpctl


This is a great debugging harness; thanks for pointing it out as I 
didn't know about it.


I am currently reporting my pan-crashing-Wayland issue on 
gnome.org---but in true Heisenbug fashion it stopped crashing, so I 
can't reproduce it now. It was crashing almost every time I used the pan 
newsreader (the three crash dumps I submitted were just a few days apart 
in my coredumpctl records), but now things are as solid as a rock.


I have to say that the GNOME developers responded almost immediately, 
and I feel stupid for not being able to demonstrate replication. I am 
really curious about the nature of this bug: it must be some weird 
interaction---perhaps related to memory use or something like that.



Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread Stephen John Smoogen
On 30 January 2018 at 03:12, Petr Pisar  wrote:

> On Tue, Jan 30, 2018 at 08:31:05AM +0100, Reindl Harald wrote:
> > Am 30.01.2018 um 08:25 schrieb Petr Pisar:
> > > On 2018-01-29, J. Bruce Fields  wrote:
> > > > The file create isn't allowed to return until the server has created
> the
> > > > file and the change has actually reached disk.
> > > >
> > > Why is there such a requirement? This is not true for local file
> > > systems. This is why fsync() exists
> >
> > pretty simply because on the NFS server side the whole VFS layer sits
> again
> > and without "async" in the export you need a way to rely on "the stuff i
> > wrote to the network ended on the filesystem on the other end"
>
> If I need reliability, I issue fsync from the client process, client VFS
> passes it to the NFS client, NFS client translates it into NFS COMMIT
> message,
> sends it to the NFS server, NFS server translates it back to fsync, pass
> it to
> the server VFS and from there to the local file system driver.
>
> I don't understand why NFS should be reliable by default.
>
>
I think it has to do with what NFS 'promises' to try to do, and that it is
built for dealing with multiple clients being able to read and write the
same areas at the same time.  Unreliable writes tend to make multiple
clients get different versions of the data. NFS was also originally
written to work over UDP, which means you have to be more careful. While
NFSv4 runs over TCP, it has other requirements that make reliability part
of what the standard mandates.

In the end, networks are not 100% reliable, and corner cases are not a
small percentage. Things get weird at times, and the client and server may
need to resend various commands. If the client VFS assumes that the data
was already written, it may drop it from its cache or allow other changes
which the server won't allow.  If another process on that system looks for
the data in that file because its VFS says it's there, but the data is not
in cache and the system asks the server before the server has written it,
it can get garbage. [I expect there are special deadlocks and such which
could happen.]

[For some reason I think this is a common question I have seen with NFS and
various developers over the years. I may be parroting poorly something
written by others.]
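
Petr's point earlier in the thread -- that a client wanting durability can ask
for it explicitly -- can be sketched from the shell with the coreutils sync
utility (the path below is a hypothetical example; on an NFS mount the per-file
sync is what turns into a COMMIT RPC to the server):

```shell
f=/tmp/nfs-demo-file              # imagine this path lives on an NFS mount
printf 'important data\n' > "$f"  # buffered write; may only reach the page cache
sync --data "$f"                  # flush this one file's data to stable storage
sync "$(dirname "$f")"            # flush the directory entry as well
```

This is the explicit opt-in model: nothing waits for disk until the
application asks for it.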

-- 
Stephen J Smoogen.


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread Terry Barnaby

On 30/01/18 15:09, J. Bruce Fields wrote:

On Tue, Jan 30, 2018 at 08:49:27AM +, Terry Barnaby wrote:

On 29/01/18 22:28, J. Bruce Fields wrote:

On Mon, Jan 29, 2018 at 08:37:50PM +, Terry Barnaby wrote:

Ok, that's a shame unless NFSv4's write performance with small files/dirs
is relatively ok which it isn't on my systems.
Although async was "unsafe" this was not an issue in main standard
scenarios such as an NFS mounted home directory only being used by one
client.
The async option also does not appear to work when using NFSv3. I guess it
was removed from that protocol at some point as well ?

This isn't related to the NFS protocol version.

I think everybody's confusing the server-side "async" export option with
the client-side mount "async" option.  They're not really related.

The unsafe thing that speeds up file creates is the server-side "async"
option.  Sounds like you tried to use the client-side mount option
instead, which wouldn't do anything.


What is the expected sort of write performance when un-taring, for example,
the linux kernel sources ? Is 2 MBytes/sec on average on a Gigabit link
typical (3 mins to untar 4.14.15) or should it be better ?

It's not bandwidth that matters, it's latency.

The file create isn't allowed to return until the server has created the
file and the change has actually reached disk.

So an RPC has to reach the server, which has to wait for disk, and then
the client has to get the RPC reply.  Usually it's the disk latency that
dominates.

And also the final close after the new file is written can't return
until all the new file data has reached disk.

v4.14.15 has 61305 files:

$ git ls-tree -r  v4.14.15|wc -l
61305

So time to create each file was about 3 minutes/61305 =~ 3ms.

So assuming two roundtrips per file, your disk latency is probably about
1.5ms?

You can improve the storage latency somehow (e.g. with a battery-backed
write cache) or use more parallelism (has anyone ever tried to write a
parallel untar?).  Or you can cheat and set the async export option, and
then the server will no longer wait for disk before replying.  The
problem is that on server reboot/crash, the client's assumptions about
which operations succeeded may turn out to be wrong.

--b.

Many thanks for your reply.

Yes, I understand the above (latency and the normally synchronous nature of
NFS). I have async defined in the server's /etc/exports options. I later
also defined it on the client side, as the async option on the server did
not appear to be working and I wondered whether, with ongoing changes, it
had been moved there (it would make some sense for the client to define it
and pass the option over to the server, since the client knows, in most
cases, whether the bad aspects of async would be an issue for its usage).

It's a server with large disks, so SSD is not really an option. The use of
async is OK for my usage (mainly /home mounted, and users' home files only
in use by one client at a time, etc.).

Note it's not concurrent access that will cause problems, it's server
crashes.  A UPS may reduce the risk a little.


However, I have just found that async is actually working! I just did not
believe it was, due to the poor write performance. Without async on the
server the performance is truly abysmal. The figures I get for untarring
the kernel sources (4.14.15, 895 MBytes untarred) using "rm -fr
linux-4.14.15; sync; time (tar -xf linux-4.14.15.tar.gz -C /data2/tmp;
sync)" are:

Untar on server to its local disk:  13 seconds, effective data rate: 68
MBytes/s

Untar on server over NFSv4.2 with async on server:  3 minutes, effective
data rate: 4.9 MBytes/sec

Untar on server over NFSv4.2 without async on server:  2 hours 12 minutes,
effective data rate: 115 kBytes/s !!

2:12 is 7920 seconds, and you've got 61305 files to write, so that's
about 130ms/file.  That's more than I'd expect even if you're waiting
for a few seeks on each file create, so there may indeed be something
wrong.

By comparison on my little home server (Fedora, ext4, a couple WD Black
1TB drives), with sync, that untar takes is 7:44, about 8ms/file.

Ok, that is far more reasonable, so something is up on my systems :)
What speed do you get with the server export set to async ?


What's the disk configuration and what filesystem is this?
Those tests above were to a single: SATA Western Digital Red 3TB, WDC 
WD30EFRX-68EUZN0 using ext4.
Most of my tests have been to software RAID1 SATA disks, Western Digital 
Red 2TB on one server and Western Digital RE4 2TB WDC WD2003FYYS-02W0B1 
on another quad core Xeon server all using ext4 and all having plenty of 
RAM.

All on stock Fedora27 (both server and client) updated to date.




Is it really expected for NFS to be this bad these days with a reasonably
typical operation and are there no other tuning parameters that can help  ?

It's expected that the performance of single-threaded file creates will
depend on latency, not bandwidth.

I believe 

Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 10:00:44AM +0100, Reindl Harald wrote:
> Am 30.01.2018 um 09:49 schrieb Terry Barnaby:
> > Untar on server to its local disk:  13 seconds, effective data rate: 68
> > MBytes/s
> > 
> > Untar on server over NFSv4.2 with async on server:  3 minutes, effective
> > data rate: 4.9 MBytes/sec
> > 
> > Untar on server over NFSv4.2 without async on server:  2 hours 12
> > minutes, effective data rate: 115 kBytes/s !!
> > 
> > Is it really expected for NFS to be this bad these days with a
> > reasonably typical operation and are there no other tuning parameters
> > that can help  ?
> 
> no, we are running a virtual backup appliance (VMware Data Protection aka
> EMC Avamar) on vSphere 5.5 on a HP microserver running CentOS7 with a RAID10
> built of 4x4 TB consumer desktop disks and the limiting factor currently is
> the Gigabit Ethernet
> 
> 35 TB network IO per month, around 1 TB per day which happens between 1:00
> AM and 2:00 AM as well as garbage collection between 07:AM and 08:00 AM

Again, this is highly dependent on the workload.

Your backup appliance is probably mainly doing large sequential writes
to a small number of big files, and we aim for that sort of workload to
be limited only by available bandwidth, which is what you're seeing.

If you have a single-threaded process creating lots of small files,
you'll be limited by disk write latency long before you hit any
bandwidth limits.
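
The distinction Bruce draws can be put into rough numbers using figures from
earlier in the thread (a back-of-the-envelope sketch, not a benchmark; ~110
MB/s is an assumed gigabit payload rate):

```shell
# big sequential writes are bandwidth-bound;
# single-threaded small-file creates are latency-bound
awk 'BEGIN {
  mb = 895; bw = 110          # kernel tree size (MB), gigabit payload (MB/s)
  files = 61305; lat_ms = 3   # files in v4.14.15, per-create latency (ms)
  printf "bandwidth-bound: %.0f s\n", mb / bw
  printf "latency-bound:   %.0f s\n", files * lat_ms / 1000
}'
```

The same 895 MB that would take seconds as one stream takes minutes as 61305
synchronous creates, which is exactly the gap the thread is chasing.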

--b.


Re: Proposed Fedora packaging guideline: More Go packaging

2018-01-30 Thread nicolas . mailhot
Hi,

Now the technical PR is submitted
https://src.fedoraproject.org/rpms/go-srpm-macros/pull-request/1

and waiting for action from the go-srpm-macros maintainers, I took (quite a 
long) time to refresh and flesh out the corresponding packaging guidelines 
proposal. It should be fairly complete now:

https://fedoraproject.org/wiki/More_Go_packaging

I'd appreciate review to check if I have forgotten an important case, if people 
understand the text, if they have enhancements or corrections to propose, and 
so on.

Then I will push it FPC side again.

Actual practice should be fairly simple and self-explanatory. The proposal's 
length can be scary, but that's because it documents all kinds of corner cases 
that previously required digging through specs and mailing lists to find 
resolution examples. The basic Go packaging skeleton will be sufficient in 
most cases without requiring you to read any documentation. 

Regards,

-- 
Nicolas Mailhot


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread J. Bruce Fields
On Tue, Jan 30, 2018 at 08:49:27AM +, Terry Barnaby wrote:
> On 29/01/18 22:28, J. Bruce Fields wrote:
> > On Mon, Jan 29, 2018 at 08:37:50PM +, Terry Barnaby wrote:
> > > Ok, that's a shame unless NFSv4's write performance with small files/dirs
> > > is relatively ok which it isn't on my systems.
> > > Although async was "unsafe" this was not an issue in main standard
> > > scenarios such as an NFS mounted home directory only being used by one
> > > client.
> > > The async option also does not appear to work when using NFSv3. I guess it
> > > was removed from that protocol at some point as well ?
> > This isn't related to the NFS protocol version.
> > 
> > I think everybody's confusing the server-side "async" export option with
> > the client-side mount "async" option.  They're not really related.
> > 
> > The unsafe thing that speeds up file creates is the server-side "async"
> > option.  Sounds like you tried to use the client-side mount option
> > instead, which wouldn't do anything.
> > 
> > > What is the expected sort of write performance when un-taring, for 
> > > example,
> > > the linux kernel sources ? Is 2 MBytes/sec on average on a Gigabit link
> > > typical (3 mins to untar 4.14.15) or should it be better ?
> > It's not bandwidth that matters, it's latency.
> > 
> > The file create isn't allowed to return until the server has created the
> > file and the change has actually reached disk.
> > 
> > So an RPC has to reach the server, which has to wait for disk, and then
> > the client has to get the RPC reply.  Usually it's the disk latency that
> > dominates.
> > 
> > And also the final close after the new file is written can't return
> > until all the new file data has reached disk.
> > 
> > v4.14.15 has 61305 files:
> > 
> > $ git ls-tree -r  v4.14.15|wc -l
> > 61305
> > 
> > So time to create each file was about 3 minutes/61305 =~ 3ms.
> > 
> > So assuming two roundtrips per file, your disk latency is probably about
> > 1.5ms?
> > 
> > You can improve the storage latency somehow (e.g. with a battery-backed
> > write cache) or use more parallelism (has anyone ever tried to write a
> > parallel untar?).  Or you can cheat and set the async export option, and
> > then the server will no longer wait for disk before replying.  The
> > problem is that on server reboot/crash, the client's assumptions about
> > which operations succeeded may turn out to be wrong.
> > 
> > --b.
> 
> Many thanks for your reply.
> 
> Yes, I understand the above (latency and normally synchronous nature of
> NFS). I have async defined in the servers /etc/exports options. I have,
> later, also defined it on the client side as the async option on the server
> did not appear to be working and I wondered if with ongoing changes it had
> been moved there (would make some sense for the client to define it and pass
> this option over to the server as it knows, in most cases, if the bad
> aspects of async would be an issue to its usage in the situation in
> question).
> 
> It's a server with large disks, so SSD is not really an option. The use of
> async is ok for my usage (mainly /home mounted and users home files only in
> use by one client at a time etc etc.).

Note it's not concurrent access that will cause problems, it's server
crashes.  A UPS may reduce the risk a little.

> However I have just found that async is actually working! I just did not
> believe it was, due to the poor write performance. Without async on the
> server the performance is truly abysmal. The figures I get for untaring the
> kernel sources (4.14.15 895MBytes untared) using "rm -fr linux-4.14.15;
> sync; time (tar -xf linux-4.14.15.tar.gz -C /data2/tmp; sync)" are:
> 
> Untar on server to its local disk:  13 seconds, effective data rate: 68
> MBytes/s
> 
> Untar on server over NFSv4.2 with async on server:  3 minutes, effective
> data rate: 4.9 MBytes/sec
> 
> Untar on server over NFSv4.2 without async on server:  2 hours 12 minutes,
> effective data rate: 115 kBytes/s !!

2:12 is 7920 seconds, and you've got 61305 files to write, so that's
about 130ms/file.  That's more than I'd expect even if you're waiting
for a few seeks on each file create, so there may indeed be something
wrong.

By comparison, on my little home server (Fedora, ext4, a couple of WD Black
1TB drives), with sync, that untar takes 7:44, about 8ms/file.
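
A quick check of the arithmetic in this exchange (all values taken from the
messages above):

```shell
# per-file create latency implied by the two untar timings over 61305 files
awk 'BEGIN {
  printf "sync export (2h12m): %.0f ms/file\n", (2*3600 + 12*60) / 61305 * 1000
  printf "home server (7m44s): %.0f ms/file\n", (7*60 + 44) / 61305 * 1000
}'
```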

What's the disk configuration and what filesystem is this?

> Is it really expected for NFS to be this bad these days with a reasonably
> typical operation and are there no other tuning parameters that can help  ?

It's expected that the performance of single-threaded file creates will
depend on latency, not bandwidth.

I believe high-performance servers use battery backed write caches with
storage behind them that can do lots of IOPS.

(One thing I've been curious about is whether you could get better
performance cheap on this kind of workload ext3/4 striped across a few
drives and an external journal on SSD.  But 

[Bug 1540220] New: perl-Gearman-Client-Async-0.94-28.fc28 FTBFS: Timeout, test fails at t/allinone.t line 62.

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1540220

Bug ID: 1540220
   Summary: perl-Gearman-Client-Async-0.94-28.fc28 FTBFS: Timeout,
test fails at t/allinone.t line 62.
   Product: Fedora
   Version: rawhide
 Component: perl-Gearman-Client-Async
  Assignee: ppi...@redhat.com
  Reporter: ppi...@redhat.com
QA Contact: extras...@fedoraproject.org
CC: perl-devel@lists.fedoraproject.org, ppi...@redhat.com,
ru...@rubenkerkhof.com



perl-Gearman-Client-Async-0.94-28.fc28 fails to build in F28 because a test
fails:

+ make test
PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness"
"-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')"
t/*.t
provided value is not a blessed object at
/usr/share/perl5/vendor_perl/Gearman/Util.pm line 120.
Timeout, test fails at t/allinone.t line 62.
# Looks like your test exited with 255 before it could output anything.
t/allinone.t .. 
Dubious, test returned 255 (wstat 65280, 0xff00)
Failed 2/2 subtests 

A difference between passing and failing build root is at
. This could be caused by
upgrading perl-Gearman.



Re: [HEADS UP] Removal of systemd-units

2018-01-30 Thread Michal Ruprich

> mruprich   net-tools net-tools rsync
Done for both packages.

-- 
Michal Ruprich
Associate Software Engineer

Email: mrupr...@redhat.com
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 115, 612 00, Brno, Czech Republic


Re: RPM packaging and ldconfig handling

2018-01-30 Thread Matthew Miller
On Tue, Jan 30, 2018 at 03:07:22AM +, Tomasz Kłoczko wrote:
> OK. So if I'm nice, explicit, and concise, will that make Igor finish at
> least one mass change before starting another one?
> Igor, could you please confirm the above?

Seriously, quit it. Igor doesn't work for you. Igor doesn't work for
me. Yet, he's actively working to make Fedora better and to improve our
packaging. That's awesome. An appropriate response might be: "Awesome!
Thanks for kicking off this effort. I see there's work to be done yet.
How can I help?"

Demanding that other people do things isn't the way we do things.


-- 
Matthew Miller

Fedora Project Leader


Re: RPM packaging and ldconfig handling

2018-01-30 Thread Tom Hughes

On 30/01/18 12:06, Florian Weimer wrote:

On 01/30/2018 01:01 PM, Vít Ondruch wrote:



On 30.1.2018 at 10:01, Florian Weimer wrote:

On 01/30/2018 09:57 AM, Igor Gnatenko wrote:

brp-ldconfig simply calls `/sbin/ldconfig -N -r "$RPM_BUILD_ROOT"`
and there is
no /etc/ld.so.conf under $RPM_BUILD_ROOT when you build... In theory
we could
supply one (empty), but I would ask Florian what would be best to do.


But why does it print /etc/ld.so.conf, and not the path in the
buildroot?  That concerns me.


I would be concerned even if it issued warning about /etc/ld.so.conf in
buildroot ...


Sure, it would be nice to avoid the warning, and I will work on 
something for glibc in rawhide to support this.  But I would also like 
to understand if ldconfig is actually using the wrong path.


So you're expecting to see $RPM_BUILD_ROOT/etc/ld.so.conf as the 
reported path right?


That doesn't seem to be how the -r option to ldconfig works though:
it chroots, so as far as it's concerned the path inside the chroot is
the path it is accessing:

% sudo strace ldconfig -N -r /tmp
...
chroot("/tmp")  = 0
chdir("/")  = 0
openat(AT_FDCWD, "/etc/ld.so.conf", O_RDONLY) = -1 ENOENT (No such file 
or directory)

...
ldconfig: Warning: ignoring configuration file that cannot be opened: 
/etc/ld.so.conf: No such file or directory


Tom

--
Tom Hughes (t...@compton.nu)
http://compton.nu/


Re: GCC broken in rawhide?

2018-01-30 Thread Vít Ondruch


On 30.1.2018 at 11:16, Jakub Jelinek wrote:
> On Tue, Jan 30, 2018 at 11:11:02AM +0100, Ralf Corsepius wrote:
>> On 01/30/2018 10:00 AM, Florian Weimer wrote:
>>> On 01/30/2018 09:54 AM, Ralf Corsepius wrote:
> annobin.spec now uses:
>
>    %undefine _annotated_build
>
> so at least the circular dependency is no longer there.  You still
> have to remember to rebuild it when a new version of GCC comes out
> however.
 ... which apparently has just happened.
>>> Yes, Fedora 28 will use GCC 8.
>>>
 Consequences are affecting all released versions of Fedora, because
 it's impossible to apply bugfix updates to Fedora < rawhide, due to
 the GCC-chaos in rawhide.
>>> Please provide more context when reporting issues, otherwise we have a
>>> hard time helping you.
>> Seemingly hundreds of packages currently carry broken deps, dnf is
>> malfunctioning etc.
> That is what we have the mass rebuild scheduled for (AFAIK this week).

I'd say that when a major GCC lands in Fedora, which happened yesterday
[1], it would not hurt to send a note to fedora-devel.

Vít


[1] https://koji.fedoraproject.org/koji/buildinfo?buildID=1021625


Re: RPM packaging and ldconfig handling

2018-01-30 Thread Florian Weimer

On 01/30/2018 01:01 PM, Vít Ondruch wrote:



On 30.1.2018 at 10:01, Florian Weimer wrote:

On 01/30/2018 09:57 AM, Igor Gnatenko wrote:

brp-ldconfig simply calls `/sbin/ldconfig -N -r "$RPM_BUILD_ROOT"`
and there is
no /etc/ld.so.conf under $RPM_BUILD_ROOT when you build... In theory
we could
supply one (empty), but I would ask Florian what would be best to do.


But why does it print /etc/ld.so.conf, and not the path in the
buildroot?  That concerns me.


I would be concerned even if it issued warning about /etc/ld.so.conf in
buildroot ...


Sure, it would be nice to avoid the warning, and I will work on 
something for glibc in rawhide to support this.  But I would also like 
to understand if ldconfig is actually using the wrong path.


Thanks,
Florian


Re: RPM packaging and ldconfig handling

2018-01-30 Thread Vít Ondruch


On 30.1.2018 at 10:01, Florian Weimer wrote:
> On 01/30/2018 09:57 AM, Igor Gnatenko wrote:
>> brp-ldconfig simply calls `/sbin/ldconfig -N -r "$RPM_BUILD_ROOT"`
>> and there is
>> no /etc/ld.so.conf under $RPM_BUILD_ROOT when you build... In theory
>> we could
>> supply one (empty), but I would ask Florian what would be best to do.
>
> But why does it print /etc/ld.so.conf, and not the path in the
> buildroot?  That concerns me.

I would be concerned even if it issued warning about /etc/ld.so.conf in
buildroot ...


Vít


F28 Self Contained Change: Atomic, Cloud and Docker images for s390x

2018-01-30 Thread Jan Kurik
= Proposed Self Contained Change: Atomic, Cloud and Docker images for s390x =
https://fedoraproject.org/wiki/Changes/Atomic_Cloud_and_Docker_images_for_s390x

Change owner(s):
* Sinny Kumari 


This change is to bring the s390x architecture closer to the other
Fedora architectures by adding widely used Fedora variants. This
includes docker images, Atomic Host (iso, qcow2 and raw format) and
regular Cloud images (qcow2 and raw format).


== Detailed Description ==
We already ship Atomic, Cloud and Docker images on the other 64-bit
Fedora supported architectures: aarch64, x86_64 and ppc64le. As of
Fedora 27, s390x is part of the primary koji build system. Currently we
only ship the Server and Everything variants for s390x, so our next
step should be to add the missing Fedora variants that users will find
useful. This means shipping Atomic, Cloud and Docker images in Fedora
for s390x as well.


== Scope ==
* Proposal owners:
These are isolated changes which don't impact the existing Fedora 28
release plan for s390x. To have these changes ready to ship in Fedora
28, we mainly require s390x koji builders configured to run these
composes, changes in the pungi configuration [
https://pagure.io/pungi-fedora/pull-request/496 ] to enable the
additional composes, and fixes for s390x-specific issues encountered
when a compose fails to run.

* Other developers:
Changes in Fedora infrastructure configs/scripts will be required to
have s390x builders configured to run additional composes. Fedora
Infrastructure issue [
https://pagure.io/fedora-infrastructure/issue/6659 ] has been filed to
keep track of the required changes.

* Release engineering:
#Releng 7286: https://pagure.io/releng/issue/7286

* Policies and guidelines:
N/A (not a System Wide Change)

* Trademark approval:
N/A (not needed for this Change)
-- 
Jan Kuřík
Platform & Fedora Program Manager
Red Hat Czech s.r.o., Purkynova 99/71, 612 45 Brno, Czech Republic


Orphaning some Spacewalk packages

2018-01-30 Thread Miroslav Suchý
I orphaned
  rhnmd
This is a Spacewalk package which is no longer developed.

And I orphaned
  perl-Socket-MsgHdr
  perl-Crypt-GeneratePassword
as these are no longer used in Spacewalk either.

Miroslav


[Bug 1539952] perl-B-Keywords-1.18 is available

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1539952

Paul Howarth  changed:

   What|Removed |Added

 Status|NEW |CLOSED
   Fixed In Version||perl-B-Keywords-1.18-1.fc28
 Resolution|--- |RAWHIDE
   Assignee|robinlee.s...@gmail.com |p...@city-fan.org
Last Closed||2018-01-30 06:13:08



--- Comment #3 from Paul Howarth  ---
Build done:
https://koji.fedoraproject.org/koji/taskinfo?taskID=24566347

-- 
You are receiving this mail because:
You are on the CC list for the bug.


Re: GCC broken in rawhide?

2018-01-30 Thread Tom Hughes

On 30/01/18 10:50, Ralf Corsepius wrote:

On 01/30/2018 11:16 AM, Jakub Jelinek wrote:

On Tue, Jan 30, 2018 at 11:11:02AM +0100, Ralf Corsepius wrote:

On 01/30/2018 10:00 AM, Florian Weimer wrote:

On 01/30/2018 09:54 AM, Ralf Corsepius wrote:

annobin.spec now uses:

    %undefine _annotated_build

so at least the circular dependency is no longer there.  You still
have to remember to rebuild it when a new version of GCC comes out
however.


... which apparently has just happened.


Yes, Fedora 28 will use GCC 8.


Consequences are affecting all released versions of Fedora, because
it's impossible to apply bugfix updates to Fedora < rawhide, due to
the GCC-chaos in rawhide.


Please provide more context when reporting issues, otherwise we have a
hard time helping you.

Seemingly hundreds of packages currently carry broken deps, dnf is
malfunctioning etc.


That is what we have the mass rebuild scheduled for (AFAIK this week).

How come I am observing these breakages right now, in the official
rawhide repos?


They are preventing me from applying bugfix updates to released
Fedoras, because the corresponding rawhide builds are failing.


Well it's quite simple. The new gcc has changed the soname for
libgfortran, which means that everything built using Fortran needs to
be rebuilt.

What Jakub is saying is that the mass rebuild will fix that by 
rebuilding those packages.


Tom

--
Tom Hughes (t...@compton.nu)
http://compton.nu/


Re: GCC broken in rawhide?

2018-01-30 Thread Ralf Corsepius

On 01/30/2018 11:16 AM, Jakub Jelinek wrote:

On Tue, Jan 30, 2018 at 11:11:02AM +0100, Ralf Corsepius wrote:

On 01/30/2018 10:00 AM, Florian Weimer wrote:

On 01/30/2018 09:54 AM, Ralf Corsepius wrote:

annobin.spec now uses:

    %undefine _annotated_build

so at least the circular dependency is no longer there.  You still
have to remember to rebuild it when a new version of GCC comes out
however.


... which apparently has just happened.


Yes, Fedora 28 will use GCC 8.


Consequences are affecting all released versions of Fedora, because
it's impossible to apply bugfix updates to Fedora < rawhide, due to
the GCC-chaos in rawhide.


Please provide more context when reporting issues, otherwise we have a
hard time helping you.

Seemingly hundreds of packages currently carry broken deps, dnf is
malfunctioning etc.


That is what we have the mass rebuild scheduled for (AFAIK this week).

How come I am observing these breakages right now, in the official
rawhide repos?


They are preventing me from applying bugfix updates to released
Fedoras, because the corresponding rawhide builds are failing.


Ralf


Re: F28 Self Contained Change: Removing ldconfig scriptlets

2018-01-30 Thread Igor Gnatenko
On Tue, 2018-01-30 at 11:25 +0100, Miroslav Lichvar wrote:
> On Tue, Jan 30, 2018 at 10:04:48AM +0100, Igor Gnatenko wrote:
> > For those who didn't check Change page since today's morning: Thanks to
> > Jason
> > Tibbits (tibbs) who proposed %ldconfig_scriptlets macro and its
> > implementation. 
> > 
> > Now we have 4 macros you could use: %ldconfig, %ldconfig_post,
> > %ldconfig_postun, %ldconfig_scriptlets.
> 
> The macros should be used instead of this conditional from the change
> page?

Yes.

> %if (0%{?rhel} && 0%{?rhel} <= 7) || (0%{?fedora} && 0%{?fedora} <= 27)
> 
> Which package is supposed to provide the macros? I don't see them in
> an up-to-date F27 (and neither rawhide) buildroot.

redhat-rpm-config and epel-rpm-macros. Those are already in buildroot, but not
in updates yet (update is still pending).
-- 
-Igor Gnatenko


Re: F28 Self Contained Change: Removing ldconfig scriptlets

2018-01-30 Thread Miroslav Lichvar
On Tue, Jan 30, 2018 at 10:04:48AM +0100, Igor Gnatenko wrote:
> For those who didn't check Change page since today's morning: Thanks to Jason
> Tibbits (tibbs) who proposed %ldconfig_scriptlets macro and its
> implementation. 
> 
> Now we have 4 macros you could use: %ldconfig, %ldconfig_post,
> %ldconfig_postun, %ldconfig_scriptlets.

The macros should be used instead of this conditional from the change
page?

%if (0%{?rhel} && 0%{?rhel} <= 7) || (0%{?fedora} && 0%{?fedora} <= 27)

Which package is supposed to provide the macros? I don't see them in
an up-to-date F27 (and neither rawhide) buildroot.

-- 
Miroslav Lichvar


Re: [HEADS UP] Removal of systemd-units

2018-01-30 Thread Miroslav Suchý
On 25.1.2018 at 19:17, Jason L Tibbitts III wrote:
> msuchy abrt rhnmd rhnsd

Abrt and rhnsd fixed in upstream. Will be pushed to dist-git on next release

I am going to orphan rhnmd (will be separate announce).

Miroslav


Re: GCC broken in rawhide?

2018-01-30 Thread Jakub Jelinek
On Tue, Jan 30, 2018 at 11:11:02AM +0100, Ralf Corsepius wrote:
> On 01/30/2018 10:00 AM, Florian Weimer wrote:
> > On 01/30/2018 09:54 AM, Ralf Corsepius wrote:
> > > > annobin.spec now uses:
> > > > 
> > > >    %undefine _annotated_build
> > > > 
> > > > so at least the circular dependency is no longer there.  You still
> > > > have to remember to rebuild it when a new version of GCC comes out
> > > > however.
> > > 
> > > ... which apparently has just happened.
> > 
> > Yes, Fedora 28 will use GCC 8.
> > 
> > > Consequences are affecting all released versions of Fedora, because
> > > it's impossible to apply bugfix updates to Fedora < rawhide, due to
> > > the GCC-chaos in rawhide.
> > 
> > Please provide more context when reporting issues, otherwise we have a
> > hard time helping you.
> Seemingly hundreds of packages currently carry broken deps, dnf is
> malfunctioning etc.

That is what we have the mass rebuild scheduled for (AFAIK this week).

Jakub


Re: GCC broken in rawhide?

2018-01-30 Thread Ralf Corsepius

On 01/30/2018 10:00 AM, Florian Weimer wrote:

On 01/30/2018 09:54 AM, Ralf Corsepius wrote:

annobin.spec now uses:

   %undefine _annotated_build

so at least the circular dependency is no longer there.  You still
have to remember to rebuild it when a new version of GCC comes out
however.


... which apparently has just happened.


Yes, Fedora 28 will use GCC 8.

Consequences are affecting all released versions of Fedora, because 
it's impossible to apply bugfix updates to Fedora < rawhide, due to 
the GCC-chaos in rawhide.


Please provide more context when reporting issues, otherwise we have a 
hard time helping you.
Seemingly hundreds of packages currently carry broken deps, dnf is
malfunctioning etc.


E.g.:

Error:
 Problem 1: cannot install both libgfortran-8.0.1-0.6.fc28.x86_64 and 
libgfortran-7.3.1-1.fc28.x86_64
  - package gcc-gfortran-8.0.1-0.6.fc28.x86_64 requires 
libgfortran.so.5()(64bit), but none of the providers can be installed
  - package blacs-mpich-devel-2.0.2-23.fc27.x86_64 requires 
libgfortran.so.4()(64bit), but none of the providers can be installed

  - cannot install the best candidate for the job
 Problem 2: gcc-gfortran-7.3.1-1.fc28.i686 has inferior architecture
  - package mpich-devel-3.2.1-2.fc28.x86_64 requires gcc-gfortran, but 
none of the providers can be installed
  - package gcc-gfortran-7.3.1-1.fc28.x86_64 requires gcc = 
7.3.1-1.fc28, but none of the providers can be installed
  - package gcc-7.3.1-1.fc28.i686 requires cpp = 7.3.1-1.fc28, but none 
of the providers can be installed
  - package gcc-7.3.1-1.fc28.x86_64 requires cpp = 7.3.1-1.fc28, but 
none of the providers can be installed
  - package libtool-2.4.6-21.fc28.x86_64 requires gcc(major) = 8, but 
none of the providers can be installed
  - package gcc-8.0.1-0.6.fc28.x86_64 requires cpp = 8.0.1-0.6.fc28, 
but none of the providers can be installed
  - cannot install both cpp-7.3.1-1.fc28.x86_64 and 
cpp-8.0.1-0.6.fc28.x86_64
  - cannot install both cpp-8.0.1-0.6.fc28.x86_64 and 
cpp-7.3.1-1.fc28.x86_64

  - cannot install the best candidate for the job
  - package gcc-gfortran-8.0.1-0.6.fc28.x86_64 requires 
libgfortran.so.5()(64bit), but none of the providers can be installed
  - cannot install both libgfortran-8.0.1-0.6.fc28.x86_64 and 
libgfortran-7.3.1-1.fc28.x86_64
  - package blacs-openmpi-devel-2.0.2-23.fc27.x86_64 requires 
libgfortran.so.4()(64bit), but none of the providers can be installed
 Problem 3: package scalapack-openmpi-devel-2.0.2-23.fc27.x86_64 
requires libscalapack.so.2()(64bit)(openmpi-x86_64), but none of the 
providers can be installed
  - package scalapack-openmpi-2.0.2-23.fc27.x86_64 requires 
libgfortran.so.4()(64bit), but none of the providers can be installed
  - cannot install both libgfortran-8.0.1-0.6.fc28.x86_64 and 
libgfortran-7.3.1-1.fc28.x86_64
  - package gcc-gfortran-8.0.1-0.6.fc28.x86_64 requires 
libgfortran.so.5()(64bit), but none of the providers can be installed
  - package openmpi-devel-2.1.1-5.fc28.x86_64 requires gcc-gfortran, 
but none of the providers can be installed
  - package gcc-gfortran-7.3.1-1.fc28.i686 requires gcc = 7.3.1-1.fc28, 
but none of the providers can be installed
  - package gcc-7.3.1-1.fc28.i686 requires cpp = 7.3.1-1.fc28, but none 
of the providers can be installed
  - package gcc-7.3.1-1.fc28.x86_64 requires cpp = 7.3.1-1.fc28, but 
none of the providers can be installed
  - package gcc-gfortran-7.3.1-1.fc28.x86_64 requires gcc = 
7.3.1-1.fc28, but none of the providers can be installed
  - package gcc-c++-8.0.1-0.6.fc28.x86_64 requires gcc = 
8.0.1-0.6.fc28, but none of the providers can be installed
  - package gcc-8.0.1-0.6.fc28.x86_64 requires cpp = 8.0.1-0.6.fc28, 
but none of the providers can be installed
  - cannot install both cpp-7.3.1-1.fc28.x86_64 and 
cpp-8.0.1-0.6.fc28.x86_64
  - cannot install both cpp-8.0.1-0.6.fc28.x86_64 and 
cpp-7.3.1-1.fc28.x86_64

  - cannot install the best candidate for the job
(try to add '--allowerasing' to command line to replace conflicting 
packages or '--skip-broken' to skip uninstallable packages)



Re: F28 Self Contained Change: Removing ldconfig scriptlets

2018-01-30 Thread Igor Gnatenko
On Tue, 2018-01-30 at 09:49 +, Jonathan Wakely wrote:
> On 30/01/18 10:04 +0100, Igor Gnatenko wrote:
> > For those who didn't check Change page since today's morning: Thanks to
> > Jason
> > Tibbits (tibbs) who proposed %ldconfig_scriptlets macro and its
> > implementation.
> > 
> > Now we have 4 macros you could use: %ldconfig, %ldconfig_post,
> > %ldconfig_postun, %ldconfig_scriptlets.
> > 
> > So long story short:
> > * If you have %post -p /sbin/ldconfig and %postun -p /sbin/ldconfig,
> > replace them with %ldconfig_scriptlets
> > * If you have just one of those, replace it with %ldconfig_post or
> > %ldconfig_postun accordingly
> > * If you just call /sbin/ldconfig from one of your shell scriptlets,
> > replace the call with %?ldconfig
> > 
> > However, if you are not interested in supporting distro versions older
> > than F28, then just remove the scriptlets. Although on F28+, those
> > macros expand to nothing, so it's just a matter of having 1-2
> > additional lines in the spec file.
> 
> Is there any way to use the macros for subpackages?

Sure!

> The boost package has lots of subpackages that install shared libs, so
> each one does:
> 
> %post foo -p /sbin/ldconfig
> %postun foo -p /sbin/ldconfig
> 
> %post bar -p /sbin/ldconfig
> %postun bar -p /sbin/ldconfig

%ldconfig_scriptlets foo
%ldconfig_scriptlets bar

> (Although for Boost we don't have a single spec file that works for
> all distro versions, so I can just remove them entirely in rawhide,
> and keep them in older branches).

It also accepts -n, so %ldconfig_scriptlets -n foo-bar would work as well!
-- 
-Igor Gnatenko


Re: F28 Self Contained Change: Removing ldconfig scriptlets

2018-01-30 Thread Jonathan Wakely

On 30/01/18 10:04 +0100, Igor Gnatenko wrote:

For those who didn't check Change page since today's morning: Thanks to Jason
Tibbits (tibbs) who proposed %ldconfig_scriptlets macro and its
implementation.

Now we have 4 macros you could use: %ldconfig, %ldconfig_post,
%ldconfig_postun, %ldconfig_scriptlets.

So long story short:
* If you have %post -p /sbin/ldconfig and %postun -p /sbin/ldconfig, replace
them with %ldconfig_scriptlets
* If you have just one of those, replace it with %ldconfig_post or
%ldconfig_postun accordingly
* If you just call /sbin/ldconfig from one of your shell scriptlets, replace
the call with %?ldconfig

However, if you are not interested in supporting distro versions older than
F28, then just remove the scriptlets. Although on F28+, those macros expand to
nothing, so it's just a matter of having 1-2 additional lines in the spec file.


Is there any way to use the macros for subpackages?

The boost package has lots of subpackages that install shared libs, so
each one does:

%post foo -p /sbin/ldconfig
%postun foo -p /sbin/ldconfig

%post bar -p /sbin/ldconfig
%postun bar -p /sbin/ldconfig

(Although for Boost we don't have a single spec file that works for
all distro versions, so I can just remove them entirely in rawhide,
and keep them in older branches).



Re: F28 Self Contained Change: Removing ldconfig scriptlets

2018-01-30 Thread Igor Gnatenko
For those who didn't check Change page since today's morning: Thanks to Jason
Tibbits (tibbs) who proposed %ldconfig_scriptlets macro and its
implementation. 

Now we have 4 macros you could use: %ldconfig, %ldconfig_post,
%ldconfig_postun, %ldconfig_scriptlets.

So long story short:
* If you have %post -p /sbin/ldconfig and %postun -p /sbin/ldconfig, replace
them with %ldconfig_scriptlets
* If you have just one of those, replace it with %ldconfig_post or
%ldconfig_postun accordingly
* If you just call /sbin/ldconfig from one of your shell scriptlets, replace
the call with %?ldconfig

However, if you are not interested in supporting distro versions older than
F28, then just remove the scriptlets. Although on F28+, those macros expand to
nothing, so it's just a matter of having 1-2 additional lines in the spec file.
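
Put concretely, a spec that previously carried the explicit scriptlets
would change roughly like this (a sketch; the "libs" subpackage name is
illustrative, not from the thread):

```spec
# Before: per-package scriptlets, needed on F27/EPEL and earlier
%post -p /sbin/ldconfig
%postun -p /sbin/ldconfig

# After: one line that expands to those scriptlets on older releases
# and to nothing on F28+
%ldconfig_scriptlets

# For a subpackage named "libs":
%ldconfig_scriptlets libs
```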

On Mon, 2018-01-29 at 22:30 +0100, Jan Kurik wrote:
> = Proposed Self Contained Change: Removing ldconfig scriptlets =
> https://fedoraproject.org/wiki/Changes/Removing_ldconfig_scriptlets
> 
> Change owner(s):
> * Igor Gnatenko 
> * Neal Gompa 
> 
> For many years, package maintainers were required to write scriptlets
> which call ldconfig in %post/%postun if they package shared libraries.
> 
> == Detailed Description ==
> Since time immemorial, Red Hat/Fedora packagers have been required to
> add a stanza to spec files for packages containing libraries to update
> the ldconfig cache.
> 
> %post -p /sbin/ldconfig
> %postun -p /sbin/ldconfig
> 
> To say this is annoying is to put it mildly. However, there was no
> standard mechanism to make this boilerplate go away. Now with RPM
> 4.13+, we should change this to file triggers and make all of that go
> away.
> 
> With this change, these scriptlets can be removed and ldconfig would
> be run just once per transaction.
> 
> If your package places shared libraries in special locations
> referenced by ld.so.conf, you still need to run ldconfig manually.
> 
> For those who concerned about whether this is self-contained or
> system-wide change: there is no overhead if packagers don't remove
> ldconfig scriptlets in time, so completion doesn't depend whether
> packagers remove them or not. We are just making it possible.
> 
> 
> == Scope ==
> * Proposal owners:
> Make sure that DSO symlinks are being packaged (commit), add transaction
> file triggers to glibc (commit + commit).
> 
> * Other developers:
> Package maintainers are advised to remove ldconfig scriptlets in order
> to achieve benefits specified above.
> 
> * Release engineering:
> #7284: https://pagure.io/releng/issue/7284
> 
> * List of deliverables:
> N/A (not a System Wide Change)
> 
> * Policies and guidelines:
> Packaging guidelines need to be updated to reflect reality.
> 
> * Trademark approval:
> N/A (not needed for this Change)
> -- 
> Jan Kuřík
> Platform & Fedora Program Manager
> Red Hat Czech s.r.o., Purkynova 99/71, 612 45 Brno, Czech Republic
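
For reference, the "run just once per transaction" behavior described in
the change quoted above is implemented with RPM transaction file triggers
in glibc. A minimal sketch of the mechanism (the exact directory list and
priority used by glibc may differ):

```spec
# Fires once at the end of any transaction that installed files under
# these directories, instead of once per package:
%transfiletriggerin -P 2000000 -- /lib /usr/lib /lib64 /usr/lib64
/sbin/ldconfig

# Likewise once after any transaction that removed such files:
%transfiletriggerpostun -P 2000000 -- /lib /usr/lib /lib64 /usr/lib64
/sbin/ldconfig
```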
-- 
-Igor Gnatenko


Re: RPM packaging and ldconfig handling

2018-01-30 Thread Florian Weimer

On 01/30/2018 09:57 AM, Igor Gnatenko wrote:

brp-ldconfig simply calls `/sbin/ldconfig -N -r "$RPM_BUILD_ROOT"` and there is
no /etc/ld.so.conf under $RPM_BUILD_ROOT when you build... In theory we could
supply one (empty), but I would ask Florian what would be best to do.


But why does it print /etc/ld.so.conf, and not the path in the 
buildroot?  That concerns me.


Thanks,
Florian


Re: GCC broken in rawhide?

2018-01-30 Thread Florian Weimer

On 01/30/2018 09:54 AM, Ralf Corsepius wrote:

annobin.spec now uses:

   %undefine _annotated_build

so at least the circular dependency is no longer there.  You still
have to remember to rebuild it when a new version of GCC comes out
however.


... which apparently has just happened.


Yes, Fedora 28 will use GCC 8.

Consequences are affecting all released versions of Fedora, because it's 
impossible to apply bugfix updates to Fedora < rawhide, due to the 
GCC-chaos in rawhide.


Please provide more context when reporting issues, otherwise we have a 
hard time helping you.


Thanks,
Florian


Re: GCC broken in rawhide?

2018-01-30 Thread Ralf Corsepius

On 01/29/2018 04:45 PM, Richard W.M. Jones wrote:

On Mon, Jan 29, 2018 at 03:55:53PM +0100, Florian Weimer wrote:

On 01/29/2018 03:43 PM, Kevin Kofler wrote:

Is https://fedoraproject.org/wiki/Changes/Annobin (no user-visible
improvements, only yet another global distrowide size increase) really worth
the circular dependency nightmare (rebuilding annobin requires GCC, but GCC
is configured to not work without annobin) or is it time to drop the feature
and enact the contingency plan?


Yes, it is required for meeting our security hardening objectives.

Ideally, annobin would be built from the GCC source package, but
since GCC needs more than twelve hours to build on armhfp, that is
not really an option while we are still adjusting the information
that annobin collects.


annobin.spec now uses:

   %undefine _annotated_build

so at least the circular dependency is no longer there.  You still
have to remember to rebuild it when a new version of GCC comes out
however.


... which apparently has just happened.

Consequences are affecting all released versions of Fedora, because it's 
impossible to apply bugfix updates to Fedora < rawhide, due to the 
GCC-chaos in rawhide.


Ralf



Re: RPM packaging and ldconfig handling

2018-01-30 Thread Igor Gnatenko
On Tue, 2018-01-30 at 09:02 +0100, Vít Ondruch wrote:
> ~~~
> 
> + /usr/lib/rpm/redhat/brp-ldconfig
> /sbin/ldconfig: Warning: ignoring configuration file that cannot be
> opened: /etc/ld.so.conf: No such file or directory
> 
> ~~~
> 
> 
> What is this ^^? Should I be worried about it?

brp-ldconfig simply calls `/sbin/ldconfig -N -r "$RPM_BUILD_ROOT"` and there is
no /etc/ld.so.conf under $RPM_BUILD_ROOT when you build... In theory we could
supply one (empty), but I would ask Florian what would be best to do.
-- 
-Igor Gnatenko


Re: Fwd: Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread Terry Barnaby

On 29/01/18 22:28, J. Bruce Fields wrote:

On Mon, Jan 29, 2018 at 08:37:50PM +, Terry Barnaby wrote:

Ok, that's a shame, unless NFSv4's write performance with small files/dirs
is relatively ok, which it isn't on my systems.
Although async was "unsafe", this was not an issue in most standard
scenarios, such as an NFS-mounted home directory only being used by one
client.
The async option also does not appear to work when using NFSv3. I guess it
was removed from that protocol at some point as well?

This isn't related to the NFS protocol version.

I think everybody's confusing the server-side "async" export option with
the client-side mount "async" option.  They're not really related.

The unsafe thing that speeds up file creates is the server-side "async"
option.  Sounds like you tried to use the client-side mount option
instead, which wouldn't do anything.


What is the expected sort of write performance when untarring, for example,
the Linux kernel sources? Is 2 MBytes/sec on average on a Gigabit link
typical (3 mins to untar 4.14.15), or should it be better?

It's not bandwidth that matters, it's latency.

The file create isn't allowed to return until the server has created the
file and the change has actually reached disk.

So an RPC has to reach the server, which has to wait for disk, and then
the client has to get the RPC reply.  Usually it's the disk latency that
dominates.

And also the final close after the new file is written can't return
until all the new file data has reached disk.

v4.14.15 has 61305 files:

$ git ls-tree -r  v4.14.15|wc -l
61305

So the time to create each file was about 3 minutes/61305 ≈ 3 ms.

So, assuming two round trips per file, your disk latency is probably about
1.5 ms?
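The arithmetic above can be checked directly:

```shell
# Back-of-envelope check of the numbers above: 61305 files in ~3 minutes.
files=61305
total_s=180
per_file_ms=$(awk -v t="$total_s" -v n="$files" 'BEGIN { printf "%.2f", t * 1000 / n }')
echo "per-file create time: ${per_file_ms} ms"
# With ~2 synchronous round trips per file, the disk commit latency works
# out to roughly half of that, i.e. ~1.5 ms.
```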

You can improve the storage latency somehow (e.g. with a battery-backed
write cache) or use more parallelism (has anyone ever tried to write a
parallel untar?).  Or you can cheat and set the async export option, and
then the server will no longer wait for disk before replying.  The
problem is that on server reboot/crash, the client's assumptions about
which operations succeeded may turn out to be wrong.

--b.


Many thanks for your reply.

Yes, I understand the above (the latency and the normally synchronous nature
of NFS). I have async defined in the server's /etc/exports options. I later
also defined it on the client side, because the async option on the server
did not appear to be working and I wondered whether, with ongoing changes,
it had been moved there. (It would make some sense for the client to define
it and pass the option over to the server, since in most cases the client
knows whether the bad aspects of async would be an issue for its usage.)


It's a server with large disks, so SSD is not really an option. The use
of async is ok for my usage (mainly /home mounted, and users' home files
only in use by one client at a time, etc.).


However, I have just found that async is actually working! I just did not
believe it was, due to the poor write performance. Without async on the
server the performance is truly abysmal. The figures I get for untarring
the kernel sources (4.14.15, 895 MBytes untarred) using "rm -fr
linux-4.14.15; sync; time (tar -xf linux-4.14.15.tar.gz -C /data2/tmp;
sync)" are:


Untar on server to its local disk: 13 seconds, effective data rate: 68 MBytes/s

Untar on server over NFSv4.2 with async on server: 3 minutes, effective data rate: 4.9 MBytes/s

Untar on server over NFSv4.2 without async on server: 2 hours 12 minutes, effective data rate: 115 kBytes/s!
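The effective rates follow directly from size/time; a quick sketch recomputing them from the quoted figures (run names are illustrative labels):

```shell
# Recomputing the effective data rates from the quoted measurements
# (895 MBytes of extracted sources; times converted to seconds).
size_mb=895
for run in "local-disk:13" "nfs-async:180" "nfs-sync:7920"; do
    name=${run%%:*}
    secs=${run##*:}
    awk -v s="$size_mb" -v t="$secs" -v n="$name" \
        'BEGIN { printf "%-10s %6d s  %8.2f MB/s\n", n, t, s / t }'
done
```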


Is it really expected for NFS to be this bad these days with a
reasonably typical operation, and are there no other tuning parameters
that can help?



Re: RPM packaging and ldconfig handling

2018-01-30 Thread Vít Ondruch


Dne 30.1.2018 v 09:15 Florian Weimer napsal(a):
> On 01/30/2018 09:02 AM, Vít Ondruch wrote:
>> ~~~
>>
>> + /usr/lib/rpm/redhat/brp-ldconfig
>> /sbin/ldconfig: Warning: ignoring configuration file that cannot be
>> opened: /etc/ld.so.conf: No such file or directory
>>
>> ~~~
>>
>>
>> What is this ^^? Should I be worried about it?
>
> Context?  ldconfig definitely shouldn't print /etc/ld.so.conf.
>
> Thanks,
> Florian

Local build of Ruby:


~~~
diff --git a/ruby.spec b/ruby.spec
index 6eff5d9..e7b5114 100644
--- a/ruby.spec
+++ b/ruby.spec
@@ -21,7 +21,7 @@
 %endif
 
 
-%global release 87
+%global release 88
 %{!?release_string:%global release_string
%{?development_release:0.}%{release}%{?development_release:.%{development_release}}%{?dist}}
 
 # The RubyGems library has to stay out of Ruby directory three, since the
@@ -742,10 +742,6 @@ sed -i '/def test_mdns_each_address$/,/^  end$/
s/^/#/' test/resolv/test_mdns.rb
 
 make check TESTS="-v $DISABLE_TESTS"
 
-%post libs -p /sbin/ldconfig
-
-%postun libs -p /sbin/ldconfig
-
 %files
 %license BSDL
 %license COPYING
@@ -1062,6 +1058,9 @@ make check TESTS="-v $DISABLE_TESTS"
 %{gem_dir}/specifications/xmlrpc-%{xmlrpc_version}.gemspec
 
 %changelog
+* Tue Jan 30 2018 Vít Ondruch  - 2.5.0-88
+- Drop obsolete ldconfig scriptlets.
+
 * Tue Jan 09 2018 Vít Ondruch  - 2.5.0-87
 - Fix segfaults during generating of documentation.
 
~~~


Vít


Re: -z defs linker flag activated in Fedora rawhide

2018-01-30 Thread Petr Pisar
On 2018-01-25, Petr Pisar  wrote:
> On 2018-01-25, Daniel P  Berrangé  wrote:
>> Not neccessarily - with perl, the APIs used by extensions are actually
>> in libperl.so, not /usr/bin/perl, and the extensions link to libperl.so
>> So perl binary modules ought to still build without undefined symbols,
>> as IIUC they're not relying on things in /usr/bin/perl
>>
> Perl fails too
>.
>
> While extensions link to libperl.so that itself links to libpthread.so
> that defines pthread_getspecific symbol, the extensions linking fails
> because they use dTHX macro, provided by included thread.h via perl.h,
> that expands to pthread_getspecific.
>
Fixed in perl-5.26.1-405.fc28. From now on, all binary extensions will be
linked with -lpthread.
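A minimal reproduction sketch of the underlying link failure (hypothetical file names; note that on glibc ≥ 2.34 the pthread symbols live in libc itself, so the failing case may actually link there):

```shell
# A shared object using pthread_getspecific -- roughly what the dTHX
# macro expands to -- linked with and without -lpthread under -z defs.
workdir=$(mktemp -d)
cat > "$workdir/ext.c" <<'EOF'
#include <pthread.h>
/* Stand-in for an XS extension's use of dTHX. */
void *my_context(pthread_key_t key) { return pthread_getspecific(key); }
EOF

if command -v gcc >/dev/null 2>&1; then
    # Without -lpthread: "undefined reference" where pthread_getspecific
    # lives in libpthread rather than libc.
    gcc -shared -fPIC -Wl,-z,defs -o "$workdir/bad.so" "$workdir/ext.c" \
        2>/dev/null || echo "link fails without -lpthread"
    # With -lpthread: always resolves, matching the perl fix.
    gcc -shared -fPIC -Wl,-z,defs -o "$workdir/good.so" "$workdir/ext.c" -lpthread
fi
```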

-- Petr


Re: RPM packaging and ldconfig handling

2018-01-30 Thread Florian Weimer

On 01/30/2018 09:02 AM, Vít Ondruch wrote:

~~~

+ /usr/lib/rpm/redhat/brp-ldconfig
/sbin/ldconfig: Warning: ignoring configuration file that cannot be
opened: /etc/ld.so.conf: No such file or directory

~~~


What is this ^^? Should I be worried about it?


Context?  ldconfig definitely shouldn't print /etc/ld.so.conf.

Thanks,
Florian


Re: Fedora27: NFS v4 terrible write performance, is async working

2018-01-30 Thread Petr Pisar
On Tue, Jan 30, 2018 at 08:31:05AM +0100, Reindl Harald wrote:
> Am 30.01.2018 um 08:25 schrieb Petr Pisar:
> > On 2018-01-29, J. Bruce Fields  wrote:
> > > The file create isn't allowed to return until the server has created the
> > > file and the change has actually reached disk.
> > > 
> > Why is there such a requirement? This is not true for local file
> > systems. This is why fsync() exists.
> 
> pretty simply because on the NFS server side the whole VFS layer sits again,
> and without "async" in the export you need a way to rely on "the stuff I
> wrote to the network ended up on the filesystem on the other end"

If I need reliability, I issue fsync from the client process, client VFS
passes it to the NFS client, NFS client translates it into NFS COMMIT message,
sends it to the NFS server, NFS server translates it back to fsync, pass it to
the server VFS and from there to the local file system driver.

I don't understand why NFS should be reliable by default.

-- Petr


[Bug 1539966] perl-PPIx-Regexp-0.054 is available

2018-01-30 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1539966

Petr Pisar  changed:

 What             |Removed  |Added
 Status           |ASSIGNED |CLOSED
 Fixed In Version |         |perl-PPIx-Regexp-0.054-1.fc28
 Resolution       |---      |RAWHIDE
 Last Closed      |         |2018-01-30 03:03:42



--- Comment #1 from Petr Pisar  ---
An enhancement release suitable for Fedora ≥ 28.

-- 
You are receiving this mail because:
You are on the CC list for the bug.


Re: RPM packaging and ldconfig handling

2018-01-30 Thread Vít Ondruch
~~~

+ /usr/lib/rpm/redhat/brp-ldconfig
/sbin/ldconfig: Warning: ignoring configuration file that cannot be
opened: /etc/ld.so.conf: No such file or directory

~~~


What is this ^^? Should I be worried about it?


V.


Dne 29.1.2018 v 18:18 Florian Weimer napsal(a):
> Igor committed a change to glibc so that from Fedora 28 going forward,
> glibc will run ldconfig after the transaction if any of the library
> directory trees was modified.
>
> This means that libraries which package the lib*.so.* symbolic links
> will no longer have to run ldconfig in %post/%postun, and we can
> automate the creation of those symbolic links with a buildroot policy
> hook, see:
>
>   
>
> Packages which edit ld.so search paths will still have to run ldconfig
> in %post/%postun, as before.
>
> Thanks,
> Florian